Pure speculation, but I would guess people first mixed wax and lamp oil in different ways to still get the burning effect of oil, with less of the cost of the oil, then added a wick to help light the oil/wax.
Then eventually that product morphed over time to the point where they realized the oil wasn't actually a necessary component
Good idea, it sounds plausible! But it still leaves open the question of how oil lamps were invented. How did someone figure out that a wick would be helpful?
Rope was around long before the wick. It seems conceivable that rope shavings or pieces of old rope were an easy way to start a fire.
This was then used with oil to make an even better fire starter or means of transferring fire. Eventually someone realised that a rope soaked in oil is easily lit and sustains a flame.
Before wicks, how do you burn oil? It's not easy to just ignite a bucket of lamp oil (putting aside what you might make the bucket out of). Probably you soak other fuel like wood or rags in the oil and burn the result. It's not a huge step from there to accidentally find out that you can make do with one piece of wood or cloth or string for a lot of oil, assuming you have something to put the oil in.
1. Someone dips a rope in flammable oil before lighting it, and sees that it's quite flammable.
2. Some other time, someone tries to use a rope dipped in flammable oil as a fuse, expecting the flame to burn back along it and ignite the oil.
3. They notice that the fuse keeps burning but doesn't burn back - in other words, the wicking effect
4. They shorten the rope and reshape the pot, and that's an oil lamp.
The first oil lamps were basically a bowl of animal fat with plant parts as a wick. It's not hard to imagine how, ten thousand years ago, a hunting tribe could discover such a device by accident or on purpose.
great to learn from the headline that this tech only works for disaster response maps, and isn't usable for other types of maps, like mapping out the front lines of a war
Why would casualties necessarily go up with surveillance? Every argument for precision targeting can be reversed for evasion.
In Ukraine it’s relatively rare for large numbers of troops to be concentrated, because each side knows its opponents would observe the formation and make it a priority target. This makes something like the battle of the Somme unlikely to be repeated.
In Call of Duty, do casualties go up when both sides have UAVs, compared to when both are without?
Arma Reforger has very good mods depicting drone combat, like flying FPVs and bomber drones. Bohemia Interactive Simulations also focuses on drones in their newest warsim release.
As someone who's in the field, I hate how drones and robotics are now associated with anything related to wars. It just kills the passion, and now whenever you mention you work in it, do it, or are interested in it, you get that suspicious look and even a knock on your door.
Unless you do drones only on your private property, they are inherently a creepy, invasive technology (even though I think they're super cool and like playing with them, too)
Do you think that airplanes, helicopters, and balloons are also inherently a creepy, invasive technology as well? Because from the perspective of capturing imaging data from the air there is really no functional difference between those and UAS...
You do not need wealth to make 'great art'. It's nice to have access to the best software tools, the highest quality paints, the finest instruments, etc - but those have never been needed. It's pretty reductive to think of art that way.
Wealth is the time required to make great art; every great artist needs it.
Currently, some artists are able to make a living from their art, and can spend the time and effort to create great art without being independently wealthy, but that is going to become increasingly difficult.
Every 6 months since ChatGPT launched, everyone keeps telling me that LLMs are going to be amazing a year from now and they'll replace programmers, just you wait.
They're getting better, but a lot of the improvement was driven by increases in the training data. These models have now consumed literally all available information on the planet - where do they go from here?
The "time to amazingness" is falling quickly, though. It used to be "just a few years" a few years ago, and has been steady around 6 months for the last year or so.
I'm waiting for the day when every comment section on the internet will be full of people predicting AGI tomorrow.
As far as I understand, the coding ability of AIs is now driven almost entirely by RL, as well as by synthetic data generated with inference-time compute combined with code-execution tool use.
Coding is arguably the single thing least affected by a shortage of training data.
We're still in the very early steps of this new cycle of AI coding advancements.
Yeah... There are improvements to be made by increasing the context window and having agents reference documentation more. Half the issues I see are with agents just doing their own thing instead of following established best practices they could/should be referencing in a codebase or looking up documentation.
Which, believe it or not, is the same issue I see in my own code.
Give the LLM access to a VM with a compiler and have it generate code for it to train on. They're great at Next.js but not as good with Swift. So have it generate a million Swift programs, along with tests to verify they actually work, and add that to the private training data set.
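A minimal sketch of that generate-and-verify loop, assuming a hypothetical ask_llm() helper for the model call; the only real tool invoked is swiftc on the VM, and anything that doesn't compile or fails its own asserts gets thrown away rather than added to the data set:

    import json
    import os
    import subprocess
    import tempfile

    def ask_llm(prompt: str) -> str:
        # Hypothetical: call whatever model you have access to and return its output.
        raise NotImplementedError

    def compiles_and_passes(source: str) -> bool:
        # Verify one candidate: it must compile with swiftc and its own
        # assert-based checks must run cleanly.
        with tempfile.TemporaryDirectory() as workdir:
            src = os.path.join(workdir, "candidate.swift")
            binary = os.path.join(workdir, "candidate")
            with open(src, "w") as f:
                f.write(source)
            if subprocess.run(["swiftc", src, "-o", binary],
                              capture_output=True).returncode != 0:
                return False  # didn't compile: discard
            try:
                return subprocess.run([binary], capture_output=True,
                                      timeout=10).returncode == 0
            except subprocess.TimeoutExpired:
                return False  # hung: discard

    def build_dataset(n_samples: int, out_path: str) -> None:
        kept = 0
        with open(out_path, "a") as out:
            while kept < n_samples:
                program = ask_llm("Write a small, self-contained Swift program "
                                  "that tests its own behaviour with assert().")
                if compiles_and_passes(program):
                    out.write(json.dumps({"text": program}) + "\n")
                    kept += 1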
As you can see from the other commenters on here, any perceived limitation is no longer the fault of the LLM. So where we go from here is gaslighting. Never mind that the LLM should be good at refactoring, you need to keep doing that for it until it works, you see. Or the classic "you're prompting it wrong", etc.
Let's hope you are right, that AI will never improve past its current limitations, and that all of us get to live out our full natural lifespans without AI x-risk.
The fundamental question is "will the LLM get better before your vibecoded codebase becomes unmaintainable, or you need a feature that is beyond the LLM's ceiling". It's an interesting race.
Sort of, except I think the future of LLMs will be to have the LLM try 5 separate attempts to create a fix in parallel, since LLM time is cheaper than human time... and once you introduce this aspect into the workflow, you'll want to spin up multiple containers, and the benefits of the terminal aren't as strong anymore.
I feel like the better approach would be to throw away PRs when they're bad, edit your prompt, and then let the agent try again using the new prompt. Throwing lots of wasted compute at a problem seems like a luxury take on coding agents, as these agents can be really expensive.
So the process becomes: Read PR -> Find fundamental issues -> Update prompt to guide agent better -> Re-run agent.
Then your job becomes proof-reading and editing specification documents for changes, reviewing the result of the agent trying to implement that spec, and then iterating on it until it is good enough. This comes from the belief that better, more expensive, agents will usually produce better code than 5 cheaper agents running in parallel with some LLM judge to choose between or combine their outputs.
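Roughly, that review-and-rerun loop might look like the sketch below; every helper in it (run_agent, review_pr, update_prompt) is a hypothetical stand-in for whatever agent tooling and review step you actually use:

    from typing import Optional

    # Hypothetical stand-ins; none of these are real APIs.
    def run_agent(prompt: str) -> str: ...              # one agent run, returns a PR/diff
    def review_pr(pr: str) -> list[str]: ...            # review notes; empty list means acceptable
    def update_prompt(prompt: str, issues: list[str]) -> str: ...

    def iterate_on_change(spec: str, max_rounds: int = 5) -> Optional[str]:
        prompt = spec
        for _ in range(max_rounds):
            pr = run_agent(prompt)
            issues = review_pr(pr)
            if not issues:
                return pr  # good enough: merge it
            # Throw the bad PR away instead of patching it, and fold the
            # review findings back into the prompt for the next attempt.
            prompt = update_prompt(prompt, issues)
        return None  # ceiling reached: hand it back to a human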
Who or what will review the 5 PRs (including their updates to automated tests)? If it's just yet another agent, do we need 5 of these reviews for each PR too?
In the end, you either concede control over 'details' and just trust the output or you spend the effort and validate results manually. Not saying either is bad.
If you can define your problem well then you can write tests up front. An ML person would call tests a "verifier". Verifiers let you pump compute into finding solutions.
I'm not sure we can write good tests for this, because we assume some kind of logic is involved here. If you task a human with writing a procedure to send a 'forgot password' email, I can be reasonably sure there's a limited number of things they would do with the provided email address, because it takes time and effort to do more than you should.
However with an LLM I'm not so sure. So how will you write a test to validate this is done but also guarantee it doesn't add the email to a blacklist? A whitelist? A list of admin emails? Or the tens of other things you can do with an email within your system?
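For what it's worth, one shape such a verifier could take, sketched against a hypothetical in-memory fake (every name here is made up for illustration): pin the one intended effect, then explicitly assert that the other lists stayed untouched.

    # Hypothetical in-memory fake standing in for the real system under test.
    class FakeSystem:
        def __init__(self):
            self.sent_emails = []
            self.blacklist = set()
            self.whitelist = set()
            self.admin_emails = set()

        def send_forgot_password_email(self, address: str) -> None:
            # Desired behaviour: exactly one reset email, no other state touched.
            self.sent_emails.append(("password_reset", address))

    def test_forgot_password_only_sends_reset_email():
        system = FakeSystem()
        before = (set(system.blacklist), set(system.whitelist), set(system.admin_emails))

        system.send_forgot_password_email("user@example.com")

        # The intended effect happened...
        assert system.sent_emails == [("password_reset", "user@example.com")]
        # ...and none of the other things you could do with an email address did.
        assert (system.blacklist, system.whitelist, system.admin_emails) == before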
They probably won't. But it doesn't matter. Ultimately, we'll all end up doing manual labor, because that is the only thing we can do that the machines aren't already doing better than us, or about to be doing better than us. Such is the natural order of things.
By manual labor I specifically mean the kind where you have to mix precision with power, on the fly, in arbitrary terrain, where each task is effectively one-off. So not even making things - everything made at scale will be done in automated factories/workshops. Think constructing and maintaining those factories, in the "crawling down tight pipes with a screwdriver in your teeth" sense.
And that's only mid-term; robotics may be lagging behind AI now, but it will eventually catch up.
As well, just because it passes a test doesn't mean it doesn't do wonky, non-performant stuff. Or worse, side effects no one verified. As one example, the LLM output will often add new fields I didn't ask it to change.
Tipping has lost its meaning and it is simply a money grab these days in many establishments, as your experience demonstrates. Like tipping for food to go.
I only tip when I sit down and good service is actually provided.