Hacker News | drcode's comments

Pure speculation, but I would guess people first mixed wax and lamp oil in different ways to still get the burning effect of oil, with less of the cost of the oil, then added a wick to help light the oil/wax.

Then that product morphed over time to the point where people realized the oil wasn't actually a necessary component.


Good idea, it sounds plausible! But it still leaves open the question of how oil lamps were invented. How did someone figure out that a wick would be helpful?

Rope was around long before the wick. It seems conceivable that rope shavings or pieces of old rope were an easy way to start a fire.

This was then used with oil to make an even better fire starter or means of transferring fire. Eventually someone realises that a rope soaked in oil is easily lit and sustains a flame.


Before wicks, how do you burn oil? It's not easy to just ignite a bucket of lamp oil (putting aside what you might make the bucket out of). Probably you soak other fuel like wood or rags in the oil and burn the result. It's not a huge step from there to accidentally find out that you can make do with one piece of wood or cloth or string for a lot of oil, assuming you have something to put the oil in.

1. Someone dips a rope in flammable oil before lighting it, and sees that it's quite flammable.

2. Some other time, someone uses an oil-dipped rope as a fuse, meant to burn back and ignite the oil.

3. They notice that the fuse keeps burning but doesn't burn back - in other words, the wicking effect.

4. They shorten the rope and reshape the pot, and that's an oil lamp.

The first oil lamps were basically a bowl with animal fat and plant parts as a wick. It's not hard to imagine how, ten thousand years ago, a hunting tribe could discover such a device by accident or on purpose.

But junk food is fun: Maybe we can eat junk food, have fun in life, and still be reasonably healthy, with the help of science

(and if GLP-1s don't work well enough to allow this, then maybe the next medicine in the pipeline)


The drug just makes you eat less.

great to learn from the headline that this tech only works for disaster response maps, and isn't usable for other types of maps, like mapping out the front lines of a war

Isn't that a good thing? or at least not bad?

Having a transparent battlefield doesn't necessitate an increase or decrease in casualties.


If you have it and the enemy doesn't you almost certainly will win. If both of you have it, casualties probably go up.

Why would casualties necessarily go up with surveillance? Every argument for precision targeting can be reversed for evasion.

In Ukraine it’s relatively rare for large numbers of troops to be concentrated, because each side knows its opponents would observe the formation and make it a priority target. This makes something like the battle of the Somme unlikely to be repeated.

In call of duty do casualties go up when both sides have UAVs, compared to when both are without?


>In call of duty do casualties go up when both sides have UAVs, compared to when both are without?

Are there any other games updating their play style to recognize the heavy use of drones in war now?


Arma Reforger has very good mods depicting drone combat, like flying FPVs and bomber drones. Bohemia Interactive Simulations also focuses on drones in their newest warsim release.

As someone who's in the field, I hate how drones and robotics are now associated with anything related to wars. It just kills the passion, and now whenever you mention you work in it, do it, or are interested in it, you get that suspicious look and even a knock on your door.

Unless you do drones only on your private property, they are inherently a creepy, invasive technology (even though I think they're super cool and like playing with them, too)

Do you think that airplanes, helicopters, and balloons are also inherently a creepy, invasive technology? Because from the perspective of capturing imaging data from the air there is really no functional difference between those and UAS...

The tech will be useful both for wars as well as for the disaster recovery efforts after your federal funding is cut down for boycotting the wars.

It's only a matter of time; people in power will take care of that.

Then a flight plan will be uploaded to the Tet style drones to carry on their duties.


I feel like the correct, boring answer is "they didn't focus on getting people the best website for what they were searching for"


> Incidentally, my 24th birthday was quickly approaching

What would have been hilarious is if he had said "my 14th birthday was quickly approaching" at this point in the post


This would've made me R-O-F-L.


Yes, independently wealthy musicians and authors will still be able to afford investing the time required to make great art.


You do not need wealth to make 'great art'. It's nice to have access to the best software tools, the highest quality paints, the finest instruments, etc - but those have never been needed. It's pretty reductive to think of art that way.


Wealth isn't about "the best software tools"

Wealth is the time required to make great art. Every great artist needs this.

Currently, some artists are able to make a living from their art, and can spend the time and effort to create great art without being independently wealthy - but that is going to become increasingly difficult.


> {In late July of 2025} all LLMs hit a ceiling of complexity beyond which they cease to understand the code base

Fixed that for you


Every 6 months since ChatGPT launched, everyone keeps telling me that LLMs are going to be amazing a year from now and they'll replace programmers, just you wait.

They're getting better, but a lot of the improvement was driven by increases in the training data. These models have now consumed literally all available information on the planet - where do they go from here?


The "time to amazingness" is falling quickly, though. It used to be "just a few years" a few years ago, and has been steady around 6 months for the last year or so.

I'm waiting for the day when every comment section on the internet will be full of people predicting AGI tomorrow.


As far as I understand, coding ability of AIs is now driven almost entirely by RL, as well as synthetic data generated by inference-time compute combined with code-execution tool use.

Coding is arguably the single thing least affected by a shortage of training data.

We're still in the very early steps of this new cycle of AI coding advancements.


Yeah... There are improvements to be made by increasing the context window and having agents reference documentation more. Half the issues I see are with agents just doing their own thing instead of following established best practices they could/should be referencing in a codebase or looking up documentation.

Which, believe it or not, is the same issue I see in my own code.


Give the LLM access to a VM with a compiler and have it generate code for it to train on. They're great at Next.js but not as good with Swift. So have it generate a million Swift programs, along with tests to verify they actually work, and add that to the private training data set.
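That generate-and-verify loop is easy to sketch. Everything below is hypothetical scaffolding: `generate_program` stands in for an LLM call, and the target language here is Python purely so the sketch is runnable as-is - for the Swift case you would compile with `swiftc` first and then run the resulting binary.

```python
import os
import subprocess
import sys
import tempfile

def passes_tests(source: str, tests: str) -> bool:
    """Run a candidate program together with its generated tests.
    (Python target so the sketch runs; swap in a swiftc compile step
    plus binary execution for the Swift scenario.)"""
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "candidate.py")
        with open(path, "w") as f:
            f.write(source + "\n\n" + tests)
        try:
            result = subprocess.run([sys.executable, path],
                                    capture_output=True, timeout=10)
        except subprocess.TimeoutExpired:
            return False  # infinite loops etc. are discarded
        return result.returncode == 0

def build_training_set(generate_program, n: int):
    """Keep only the generated (program, tests) pairs that actually work."""
    return [(src, tst)
            for src, tst in (generate_program() for _ in range(n))
            if passes_tests(src, tst)]
```

The point of the filter is that only verified samples make it into the training set, so bugs in the generator cost compute rather than poisoning the data.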


As you can see from the other commenters on here, any perceived limitation is no longer the fault of the LLM. So where we go from here is gaslighting. Never mind that the LLM should be good at refactoring, you need to keep doing that for it until it works you see. Or the classic you’re prompting it wrong, etc.


Let's hope you are right, and that the limitations of AI will never improve, and that all of us get to live out our full natural lifespans without AI xrisk


The fundamental question is "will the LLM get better before your vibecoded codebase becomes unmaintainable, or you need a feature that is beyond the LLM's ceiling". It's an interesting race.


If you're from the future, please let us know and we can alert the relevant authorities to ~~dissect~~ help you.


because I predicted that AI will get better in future months?


sort of, except I think the future of LLMs will be to have the LLM try 5 separate attempts to create a fix in parallel, since LLM time is cheaper than human time... and once you introduce this aspect into the workflow, you'll want to spin up multiple containers, and the benefits of the terminal aren't as strong anymore.


I feel like the better approach would be to throw away PRs when they're bad, edit your prompt, and then let the agent try again using the new prompt. Throwing lots of wasted compute at a problem seems like a luxury take on coding agents, as these agents can be really expensive.

So the process becomes: Read PR -> Find fundamental issues -> Update prompt to guide agent better -> Re-run agent.

Then your job becomes proof-reading and editing specification documents for changes, reviewing the result of the agent trying to implement that spec, and then iterating on it until it is good enough. This comes from the belief that better, more expensive, agents will usually produce better code than 5 cheaper agents running in parallel with some LLM judge to choose between or combine their outputs.


Who or what will review the 5 PRs (including their updates to automated tests)? If it's just yet another agent, do we need 5 of these reviews for each PR too?

In the end, you either concede control over 'details' and just trust the output or you spend the effort and validate results manually. Not saying either is bad.


If you can define your problem well then you can write tests up front. An ML person would call tests a "verifier". Verifiers let you pump compute into finding solutions.
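That "pump compute into a verifier" idea reduces to best-of-N sampling. A minimal sketch, with a toy proposer and verifier standing in for an LLM and a test suite (both names are illustrative, not any real API):

```python
import random

def best_of_n(propose, verify, n: int):
    """Spend compute: draw up to n candidate solutions and return the
    first one the verifier accepts, or None if every attempt fails."""
    for _ in range(n):
        candidate = propose()
        if verify(candidate):
            return candidate
    return None

# Toy stand-in: the verifier is a cheap, reliable check (here, "is x a
# root of x^2 = 49?"); the proposer is dumb, so we just crank n up.
verify = lambda x: x * x == 49
propose = lambda: random.randint(-100, 100)
solution = best_of_n(propose, verify, n=10_000)
```

The asymmetry is the whole trick: verifying a candidate is much cheaper than producing a correct one, so a weak proposer plus a strong verifier can still converge.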


I'm not sure we can write good tests for this, because we assume some kind of logic is involved. If you set a human the task of writing a procedure to send a 'forgot password' email, I can be reasonably sure there's a limited number of things a human would do with the provided email address, because it takes time and effort to do more than you should.

However with an LLM I'm not so sure. So how will you write a test to validate this is done but also guarantee it doesn't add the email to a blacklist? A whitelist? A list of admin emails? Or the tens of other things you can do with an email within your system?


Will people be willing to make their full time job writing tests?


We’ll just have an LLM write the tests.

Now we can work on our passion projects and everything will just be LLMs talking to LLMs.


I hope sarcasm.


They probably won't. But it doesn't matter. Ultimately, we'll all end up doing manual labor, because that is the only thing we can do that the machines aren't already doing better than us, or about to be doing better than us. Such is the natural order of things.

By manual labor I specifically mean the kind where you have to mix precision with power, on the fly, in arbitrary terrain, where each task is effectively one-off. So not even making things - everything made at scale will be done in automated factories/workshops. Think constructing and maintaining those factories, in the "crawling down tight pipes with a screwdriver in your teeth" sense.

And that's only mid-term; robotics may be lagging behind AI now, but it will eventually catch up.


As well, just because it passes a test doesn't mean it doesn't do wonky, non-performant stuff. Or worse, have side effects no one verified. Plenty often the LLM output will add new fields I didn't ask it to change, as one example.



Having command line tools to spin up multiple containers and then to collect their results seems like it would be a pretty natural fit.
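The fan-out-and-collect part really is a small amount of glue. A sketch, assuming each attempt is launched by some CLI that runs to completion and prints its result - the commented `docker run` line is one plausible shape, not a prescribed one:

```python
import concurrent.futures
import subprocess

def run_attempt(cmd):
    """Run one isolated attempt and capture its exit code and output."""
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=600)
    return result.returncode, result.stdout

def fan_out(commands):
    """Launch every attempt in parallel; collect (returncode, stdout)
    pairs in the same order the commands were given."""
    with concurrent.futures.ThreadPoolExecutor(
            max_workers=len(commands)) as pool:
        return list(pool.map(run_attempt, commands))

# e.g. five parallel agent attempts, each in its own throwaway container
# (image name and flags are hypothetical):
# fan_out([["docker", "run", "--rm", "agent-image", "--attempt", str(i)]
#          for i in range(5)])
```

Threads are fine here because each worker just blocks on an external process; the containers themselves provide the isolation.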



Why would spinning containers remove the benefits? Presumably there is a terminal too interacting with the containers.


Nah, if parallelism will help, it'll be abstracted away from the user.


Tmux?


I was kinda pissed when my local mall got a "barista robot", and it asks for a 20% tip when you swipe your card


Tipping has lost its meaning and it is simply a money grab these days in many establishments, as your experience demonstrates. Like tipping for food to go.

I only tip when I sit down and good service is actually provided.


Let me introduce you to Hard 2632, a device for 32 byte demos: https://xayax.net/hard2632/

