But why does the pilot then comment that they are in the CUTOFF position and move it to RUN? A mechanical failure would have to also move the physical switch in the cockpit for the audio recording to make sense.
You have the exact CVR audio? The report says "one of the pilots is heard asking the other why did he cutoff" which I interpreted to mean one of them noticed the engines shutting down, and asked the other if he did that.
Then he would have asked the other pilot why the engines are shutting down. It seems a lot more probable that he glanced at the switches before asking such an explicit question.
From the preliminary report, quote: "In the cockpit voice recording, one of the pilots is heard asking the other why did he cutoff. The other pilot responded that he did not do so."
I'm trying to understand your complaint here... you think you need to hear their voices with your own ears to believe it?
But from the audio recording it seems like one pilot is noticing them being in the CUTOFF position, and asking why (and moving it back). If the switch was actually in RUN, but some other issue caused the signal to be sent, the pilot would see it being in the RUN position, not CUTOFF.
This is very clearly EAFR data, so the logical/electrical switch state. Nothing about the mechanical state of the switches has been mentioned, except a picture that shows their final state to be in the RUN position (which makes sense given the relight procedure was ongoing).
From what I understand, the relight procedure involves cycling these back to CUTOFF and then to RUN anyway. So it is not clear if they were mechanically moved from RUN to CUTOFF preceding the loss of thrust, or cycled during relight.
You can't yet - what we have is this sentence from the report: "In the cockpit voice recording, one of the pilots is heard asking the other why did he cutoff.
The other pilot responded that he did not do so."
It's not a direct quote or transcript, it's reported speech.
I am staying at a North American hotel with a pool right now, and I have noticed that absolutely nobody showers beforehand (and they arrive with dry hair), despite the sign asking them to do so. I have been wondering if this is a cultural difference between Europe and America.
But given Israel's behaviour in the West Bank and Gaza over the last decades, there is no reason to believe that it would make them stop their human rights violations.
First, how much of coding is really never done before?
And secondly, what you say is false (at least if taken literally). I can create a new programming language, give the definition of it in the prompt, ask it to code something in my language, and expect something out. It might even work.
> I can create a new programming language, give the definition of it in the prompt, ask it to code something in my language, and expect something out. It might even work.
I literally just pointed out the same thing without having seen your comment.
Second this. I've done this several times, and it can handle it well. Already GPT3.5 could easily reason about hypothetical languages given a grammar or a loose description.
I find it absolutely bizarre that people still hold on to this notion that these models can't do anything new, because it seems implausible that they have actually tried it, given how well it works.
If you give it the rules to generate something, why can't it generate it? That's what something like Mockaroo[0] does. It's just more formal. That's pretty much what LLM training does, extracting patterns from a huge corpus of text. Then it goes on to generate according to the patterns. It cannot generate a new pattern that is not a combination of the previous ones.
> If you give it the rules to generate something, why can't it generate it?
It can, but that does not mean that what is generated is not new, unless the rules in question constrain the set to the point where only one outcome is possible.
If I tell you that a novel has a minimum of 40,000 words, it does not mean that no novel is, well, novel (not sorry), just because I've given you rules to stay within. Any novel will in some sense be "derived from" an adherence to those rules, and yet plenty of those novels are still new.
The point was that by describing a new language in a zero-shot manner, you ensure that no program in that language exists either in the training data or in the prompt, so what it generates must at a minimum be new in the sense that it is in a language that has not previously existed.
If you then further give instructions for a program that incorporates constraints that are unlikely to have been used before (though this is harder), you can further ensure the novelty of the output along other axes.
You can keep adding arbitrary conditions like this, and LLMs will continue to produce output. Human creative endeavour is often similarly constrained by rules: rules for formats, rules for competitions, rules for publications. And yet nobody would suggest this means that the output isn't new or creative, or that the work is somehow derivative of the rules.
This notion is setting a bar for LLMs we don't set for humans.
> That's pretty much what LLM training does, extracting patterns from a huge corpus of text. Then it goes on to generate according to the patterns.
But when you describe a new pattern as part of the prompt, the LLM is not being trained on that pattern. It's generating on the basis of interpreting what it is told in terms of the concepts it has learned, and developing something new from it, just as a human working within a set of rules is not creating merely derivative works just because we have past knowledge and have been given a set of rules to work within.
> It cannot generate a new pattern that is not a combination of the previous ones.
The entire point of my comment was that this is demonstrably false, unless you are talking strictly in the sense of a deterministic view of the universe where everything, including everything humans do, is a combination of what came before. In which case the discussion is meaningless.
Specific models can be better or worse at it, but unless you can show that humans somehow exceed the Turing computable there isn't even a plausible mechanism for how humans could even theoretically be able to produce anything so much more novel that it'd be impossible for LLMs to produce something equally novel.
I was referring to "new" as some orthogonal dimension in the same space. If we go by your definition, any slight change in the parameters results in something new. I was arguing more that if the model knows about axes x and y, then its output is constrained to a plane unless you add z. But more often than not its output will be a cylinder (extruded from a circle in the x,y plane) instead of a sphere.
The same thing goes for image generation. Every picture is new, but it's a combination of the pictures it found. It does not learn about things like perspective, values, forms, anatomy, ... the way an artist does, which are the proper dimensions of drawing.
> that humans somehow exceed the Turing computable
Already done by Gödel's incompleteness theorems[0] and the halting problem[1]. Meaning that we can do some stuff that no algorithm can do.
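For anyone who wants the gist of the halting problem argument, here is the standard diagonalization sketched in Python (the `halts` decider is purely hypothetical; the whole point of the argument is that it cannot actually be implemented):

```python
def halts(program: str, argument: str) -> bool:
    """Hypothetical decider: would `program` halt when run on `argument`?

    The diagonalization below shows no real implementation can exist.
    """
    raise NotImplementedError("no such decider can exist")


def paradox(program: str) -> None:
    # If the decider claims `program` halts on its own source, loop forever;
    # otherwise, halt immediately.
    if halts(program, program):
        while True:
            pass


# Feeding `paradox` its own source contradicts whatever answer `halts`
# would give about it, so `halts` cannot be realised by any program.
```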
You completely fail to understand Gödel's incompleteness theorems and the halting problem if you think they are evidence of something humans can do that machines can not. It makes the discussion rather pointless if you lack that fundamental understanding of the subject.
For some types of comment, it really would be tempting to automate the answers, because the "stochastic parrot" type comments especially are getting really tedious and inane, and ironically come across as people parroting the same thing over and over instead of thinking.
But the other answer is that often the value in responding is to sharpen the mind and be forced to think through and formulate a response even if you've responded to some variation of the comment you reply to many times over.
A lot of comments that don't give me any value to read are comments I still get value out of through the process of replying to for that reason.
We use libraries for SOME of the 'done frequently' code.
But how much of enterprise programming is 'get some data from a database, show it on a Web page (or gui), store some data in the database', with variants?
It makes sense that we have libraries for abstracting away some common things. But it also makes sense that we can't abstract away everything we do multiple times, because at some point it just becomes so abstract that it's easier to write it yourself than to try to configure some library. That does not mean it's not a variant of something done before.
> we can't abstract away everything we do multiple times
I think there's a fundamental truth about any code that's written, which is that it exists on some level of specificity. Or to put it another way, a set of decisions has been made about _how_ something should work (in the space of what _could_ work) while some decisions have been left open to the user.
Every library that is used is essentially this. Database driver? Underlying I/O decisions are probably abstracted away already (think Netty vs Mina), and decisions on how to manage connections, protocol handling, bind variables, etc. are made by the library, while questions remain for things like which specific tables and columns should be referenced. This makes the library reusable for this task as long as you're fine with the underlying decisions.
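To make that concrete, here is a rough sketch using Python's built-in `sqlite3` driver (the `products` table and its columns are made up purely for illustration):

```python
import sqlite3

# Decisions already made by the driver: file format, statement preparation,
# parameter binding, cursor management.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, title TEXT, price REAL)")
conn.execute("INSERT INTO products VALUES (1, 'mug', 9.5), (2, 'desk', 120.0)")

# Decisions left to us: which table, which columns, which filter -- the
# application-specific part no library can decide on our behalf.
rows = conn.execute(
    "SELECT id, title FROM products WHERE price < ?",
    (25.0,),
).fetchall()
print(rows)  # [(1, 'mug')]

conn.close()
```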
Once you get to the question of _which specific data is shown on a page_ the decisions are closer to the human side of how we've arbitrarily chosen to organise things in this specific thousandth-iteration of an e-commerce application.
The devil is in the details (even if you know the insides of the devil aren't really any different).
> Once you get to the question of _which specific data is shown on a page_ the decisions are closer to the human side of how we've arbitrarily chosen to organise things in this specific thousandth-iteration of an e-commerce application.
That's why communication is so important, because the requirements are the primary decision factors. A secondary factor is prior technical decisions.
> First, how much of coding is really never done before?
Lots of programming doesn't have one specific right answer, but a bunch of possible right answers with different trade-offs. The programmer's job isn't necessarily just to get working code. I don't think we are at the point where LLMs can see the forest for the trees, so to speak.
So, how does that relate to this quote from the article?
> ty, on the other hand, follows a different mantra: the gradual guarantee. The principal idea is that in a well-typed program, removing a type annotation should not cause a type error. In other words: you shouldn't need to add new types to working code to resolve type errors.
It seems like `ty`'s current behaviour is compatible with this, but the changed behaviour won't be (unless it will just be impossible to type a list of different types).
You could have a `list[int | str]` but then you need to check the type of the elements in the list on usage to see if they are `int` or `str` (if you are actually trying to put the elements into a place that requires an `int` or requires a `str` but wouldn't accept an `int | str`...).
If your code doesn't do that then your program isn't well typed according to Python's typing semantics... I think.
So you can have lists of multiple types, but then you get consequences from that in needing type guards.
Of course you still have stuff like `tuple[int, int, int, str]` to get more of the way there. Maybe one day we'll get `FixedList[int, int, int, str]`....
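A minimal sketch of what that looks like in practice (the function name and values are invented; this is plain Python typing semantics, not anything `ty`-specific):

```python
def total_length(items: list[int | str]) -> int:
    total = 0
    for item in items:
        # isinstance() acts as a type guard: the checker narrows
        # `int | str` to `int` in this branch and to `str` in the other.
        if isinstance(item, int):
            total += item
        else:
            total += len(item)
    return total

print(total_length([1, "abc", 2]))  # 6

# A fixed-shape heterogeneous alternative: positions 0-2 are ints,
# position 3 is a str, and the checker knows the type at each index.
labelled_point: tuple[int, int, int, str] = (1, 2, 3, "origin")
```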
We haven't decided yet if this is what we want to do, though. It's also possible that we may decide to compromise on the gradual guarantee in this area. It's not an ironclad rule for us, just something we're considering as a factor.