My favorite part of this (and every GenAI) comments section is when one person says, "This is my personal experience using AI," and then a chorus of people chimes in with "Well, you're using it wrong!"
I personally prefer the one where everyone tells you that your error is because you used the outdated and almost unusable version from yesterday instead of the revolutionary release from today that will change everything we know. Rinse and repeat tomorrow.
Not to mention the variations of "you need to prompt better", now including "rules files", which raises the question: wouldn't just writing the code be a much better way to exercise control over the machine?
In my tests of using AI to write most of my code, just writing the code yourself (with Copilot) and doing manual rounds with Claude is much faster and easier to maintain.
One very irritating problem I am seeing in a bunch of companies I have invested in (so my money is at stake): they have taken larger investments from normal VCs, who are usually dumb as rocks but hold a larger share, and those VCs are pushing heavily for AI in the day-to-day processes of the company.
For example, some companies are using AI to create tickets or to collate feedback from users.
I can clearly see that this is making them think through the problem far less. A lot of that sixth-sense understanding of the problem space comes from working through these ticket-creation or product documents, which are now being done by AI.
That is pushing the quality of the work into this weird, drone-like, NPC-like state where they aren't really solving real issues, yet they're getting a lot of stuff done.
It's still very early, so I do not know how best to talk to them about it. But it's very clear that any sort of creative work, problem solving, etc. suffers huge negative effects when AI is used even a little bit.
I have also started to think that a great angel-investing question is to ask companies whether they are a non-AI zone; investing in those may bring better returns in the future.
The same can be said about the other side. It is rarely phrased as "the LLM is a useful tool with some important limitations" but rather "Look, the LLM managed to create a junior-level feature, therefore we won't need developers two years from now".
It tends to be the same with anything hyped or divisive. Humans tend to exaggerate in both directions, especially in low-stakes environments such as internet forums, or when they stand to gain something from the hype.
You seem to have confused "The whole AI thing is nonsense. [Anecdote]." with "The whole AI thing is nonsense because [anecdote]." I see a lot of "LLMs are not useful. E.g. the other day I asked it to do X and it was terrible." That is not somebody saying that that one experience definitively proves that LLMs are useless, or saying that you should believe that LLMs are useless based only on that one anecdote. It is people making their posts more interesting than just giving their opinions along with arguments for those opinions.
Obviously their views are based on the sum of all their experience with LLMs. We don't have to say so every time.
Yeah. I was made a manager in Feb 2022 with 5 directs and 9 headcount to fill. Hired 5, and then by June 2022 all remaining headcount was cut. In January 2023 we had our first-ever layoffs in the company's 25-year history.
It is so hard to fathom that a leader trusted with millions of dollars of other people's money can be so disengaged from recruiting as to not see a hard cash-crunch wall coming, months if not years ahead.
You can't assume fundraising will always go swimmingly. You have to always be in survival mode, and if that means not hiring aggressively, then you put on the brakes until the money comes in.
Either you as a leader are clueless about your business's cash needs, clueless about risk management, or clueless about the market, any of which makes you unsuitable to lead a company for the long term.
The issue was interest rates. Money was free in Feb 2022; the interest rate was literally 0%, and so any cash-generating investment at all is profitable. Fed started raising rates in Apr 2022, at which point leaders started freaking out because they know what higher rates mean, and by Jun 2022 the Fed was raising them in 0.75% increments, which was unheard of in modern economics. By Jan 2023 the rate was 4.5%, which meant that every investment that generates an internal rate of return between 0% and 4.5% is unprofitable. That is the vast majority of investment in today's economy. (We also haven't yet seen this hit fully - a large number of stocks have earnings yields that are lower than what you can get on a savings account, which implies that holding these stocks over cash is unprofitable unless you expect their earnings to grow faster than the interest rate drops, which doesn't seem all that likely in today's environment.)
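To make that arithmetic concrete with stylized numbers (not any particular company's): a project that costs $1M today and returns $1.03M in a year has a ~3% internal rate of return. At a 0% policy rate it beats sitting on cash; at 4.5%, the same $1M parked in Treasuries grows to $1.045M, so funding the project leaves you roughly $15K worse off than doing nothing at all.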
Now, you'd have a point if you complained about how centralization of government and economic power with the President and Fed chair, respectively, is a problem. That is the root cause that allows the economy to change faster than any leader can adapt. There used to be a time when people would complain about centralization of executive power on HN, but for some reason that moment seems to have passed.
> The issue was interest rates. Money was free in Feb 2022; the interest rate was literally 0%, and so any cash-generating investment at all is profitable. Fed started raising rates in Apr 2022, at which point leaders started freaking out because they know what higher rates mean, and by Jun 2022 the Fed was raising them in 0.75% increments, which was unheard of in modern economics. By Jan 2023 the rate was 4.5%, which meant that every investment that generates an internal rate of return between 0% and 4.5% is unprofitable.
"Unheard of in modern economics" is carrying quite a lot of weight there. The last time the rates were increased by 0.75% was 1994, and while that's not recent, it's pretty silly to imply that CEOs should be making long-term investments assuming that it would be literally unprecedented for that to happen. Interest rates have changed only a few dozen times _at all_ since then, so yes, they haven't been increased by that much recently, but there's never going to be enough of a sample size over a period of a couple decades that it would be reasonable to assume a precedent that will never be broken.
The crux of your argument seems to be that because the interest rates happen to be set a certain way at a certain time, it would be irrational not to make decisions based on how profitable they'd be at that exact moment in time. The problem with this line of thinking is that plenty of investments are only realized over long enough period of time that by your own admission, people can't possibly react fast enough to avoid those turning into a loss. My question is, why put yourself in a position where you can't adapt fast enough in the first place? The way interest rates are set should not be news to the people making these decisions in companies, so it's not crazy to expect that maybe the people who are betting their company's success on something from less than three decades before being "unprecedented in modern economics" could think at least _a little_ longer term than "literally anything is profitable in this exact moment, so there's no need to think about what might come next".
Because they are publicly traded and subject to lots and lots of checks on corporate governance. The CEO actually didn't want to lay people off (and did a shit-poor job of it when he did). He was getting pressure from the board, who in turn was getting pressure from a lot of activist hedge funds.
Small-fry who operate secretly are able to take the long view and enrich themselves off the masses' stupidity. CEOs of a multi-trillion-$ company that is ~10% of the retirement portfolio of every American are not. At that level you have to go with the market consensus, because you will be ousted and deemed not a fit steward of the enterprise that you are entrusted with otherwise.
> Small-fry who operate secretly are able to take the long view and enrich themselves off the masses' stupidity. CEOs of a multi-trillion-$ company that is ~10% of the retirement portfolio of every American are not.
From my math, you're off by several orders of magnitude, unless somehow we're not talking about Automattic anymore.
> Fed started raising rates in Apr 2022, at which point leaders started freaking out because they know what higher rates mean, and by Jun 2022 the Fed was raising them in 0.75% increments, which was unheard of in modern economics.
You're basically making the case that it happened fast and went up high, but everyone who paid attention to interest rates understood it was only a matter of time until they at least reverted back to pre-COVID levels (whether you think that's 1.5% or 2.3% or something, depending on how you measure), and that obviously there would need to be real layoffs after.
The excuse is really saying "it turned out more extreme than we thought", but was the behavior they took responsible even assuming non-extreme rate changes?
If you're venture-backed and were expecting another round sometime soon: with higher interest rates there were more compelling alternatives for LPs than investing in venture, which caused a trickle-down chilling of the fundraising environment for venture-backed companies and required them to come up with accelerated plans to reach profitability - including cutting staff and optimizing for survival over growth.
My employer actually has roughly $100B of cash on hand.
The issue is that they're a publicly traded company with a fiduciary responsibility to shareholders. If they're investing in an internal product that will return 1% on the money invested over the next couple of years, when they could have been investing in Treasury bills returning 4.5%, they are committing financial malpractice and will be sued accordingly.
I'd have hoped someone at Google would know this is a myth.
The idea that choosing a 1% strategic internal investment over a 4.5% T-bill constitutes actionable "financial malpractice" or a breach of fiduciary duty leading to successful lawsuits is incorrect. Courts recognize that running a business requires strategic choices and risk-taking, not just maximizing immediate, risk-free yield. A lawsuit would fail unless plaintiffs could show the decision was tainted by disloyalty, bad faith, or gross negligence in the decision-making process, none of which are implied by simply choosing a lower-yield strategic project.
Hence no one ever gets sued for this. It doesn't happen. It lives in the minds of HNers and Redditors as a very convenient excuse for their employers, or companies in general, making abhorrent decisions purely based on feels and short-term next-quarter profits/stock price, regardless of the negative externalities they inflict on society.
I’ve seen many companies have this problem. They base hiring against planned revenue instead of current revenue. In a sense you have to - if you’re planning on growing 100% for several years on the back of new products and a big sales team you must hire in advance. It’s what the VC model is founded on. The downside is when you miss the revenue, you have to cut deep. And it’s usually worse because your hiring standards dropped in hyper growth.
“It is difficult to get a man to understand something when his salary depends upon his not understanding it.”
If you are the CEO saying 'we are planning for bad things in the future' while every other CEO is saying 'the arrow only goes up', guess which company the stock market punishes and who gets removed by the board versus whose options become worth more?
If we were on the other side of those galaxies, wouldn't they look like they were spinning counter-clockwise? Or are they measuring spin some other way?
From my understanding, the big bang requires that the proto-universe was in a completely homogeneous state that was then pushed out of that equilibrium for some reason. But that reason doesn't require non-zero angular momentum. It only requires that the proto-universe was homogeneous and now the universe isn't. And that is what separates pre and post big bang. I could be wrong, I am not a cosmologist. Would be happy to hear from one though.
What causes a perfectly symmetric ball on top of a perfectly symmetric hill to roll down via one side? (Probably quantum randomness if everything else is perfectly symmetric)
I was wondering the same thing -- "direction of spin" is ambiguous on its own, you also need to pick which direction is up.
But if objective spin directions are roughly evenly split because the universe is isotropic, the spins from our viewpoint ought to be evenly split as well.
If they're not evenly split, the universe must have a preferred axis, which would be an amazing discovery. I guess if the preferred axis just happens to align with our own galaxy, that would support the alternative theory that it's due to an observation effect such as doppler shift.
Either way, it's incredibly cool to have such a simple but totally unexpected observation pop up out of nowhere.
That is correct, "clockwise" only makes sense relative to a single observer: on Earth we set up our coordinate system so that the Milky Way's directed axis of rotation points one way, and most galaxies have it pointing the other way. "Clockwise / counterclockwise" makes sense for images coming from telescopes but it's not cosmologically meaningful.
Note that this is not that easy to determine:
> When done manually, the determination of the direction of rotation of a galaxy can be a subjective task, as different annotators might have different opinions regarding the direction towards which a galaxy rotates. A simple example is the crowdsourcing annotation through Galaxy Zoo 1 (Land et al. 2008), where in the vast majority of the galaxies different annotators provided conflicting annotations. Therefore, the annotations shown in Fig. 1 were made by a computer analysis that followed a defined symmetric model (Shamir 2024e).
The point is that we would typically assume a 50-50 ratio regardless of where you are in the universe.
The actual paper makes more sense: "the number of galaxies in that field that rotate in the opposite direction relative to the Milky Way galaxy is ∼50 per cent higher than the number of galaxies that rotate in the same direction relative to the Milky Way."
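If you want a feel for how surprising a skew like that would be under a fair 50/50 split, here's a minimal sketch (scipy assumed; the galaxy counts below are made-up placeholders, not the paper's actual numbers):

    from scipy.stats import binomtest

    # Hypothetical counts: 100 galaxies rotating the same way as the Milky Way,
    # 150 the opposite way - roughly the "50 per cent higher" imbalance described above.
    same_dir, opposite_dir = 100, 150

    # An isotropic universe predicts a fair 50/50 split, so test against p = 0.5.
    result = binomtest(opposite_dir, n=same_dir + opposite_dir, p=0.5)
    print(result.pvalue)  # small p-value => this imbalance is unlikely under a 50/50 split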
>For any important or hard technical questions relevant to anything I do, the LLM results are consistently trash. And if you are an expert in the domain you can’t not notice this.
This is also my experience. My day job isn't programming, but when I can feed an LLM secretarial work, or simple coding prompts to automate some work, it does great and saves me time.
Most of my day is spent getting into the details on things for which there's no real precedent. Or if there is, it hasn't been widely published on. LLMs are frustratingly useless for these problems.
Folks really over-index when an LLM is very good for their use case. And most of the folks here are coders, which is something LLMs are already good at and getting better at.
For some tasks they're still next to useless, and people who do those tasks understandably don't get the hype.
Tell a lab biologist or chemist to use an LLM to help them with their work and they'll get very little useful out of it.
Ask an attorney to use it and it's going to miss things that are blindingly obvious to the attorney.
Ask a professional researcher to use it and it won't come up with good sources.
For me, I've had a lot of those really frustrating experiences where I'm having difficulty with a topic and it gives me utterly incorrect junk, because there just isn't a lot already published in that area.
I've fed it tricky programming tasks and gotten back code that doesn't work, and that I can't debug because I have no idea what it's trying to do, or I'm not familiar with the libraries it used.
It sounds like you're trying to use these LLMs as oracles, which is going to cause you a lot of frustration. I've found almost all of them now excel at imitating a junior dev or a drunk PhD student. For example, the other day I was looking at acoustic sensor data and led it down the trail of "what are some ways to look for repeating patterns like xyz", and 10 minutes later I had a mostly working proof of concept for a 2nd-order spectrogram that reasonably dealt with spectral leakage, plus a half-working mel-spectrum fingerprint idea. Those are all things I was already thinking about myself, so I was able to guide it to a mostly working prototype in very little time. But doing it myself from zero would've taken at least a couple of hours.
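The rough shape of that kind of prototype is something like the following (a from-memory sketch, not the actual code; the pulsed test tone is just a stand-in for real sensor data):

    import numpy as np
    from scipy.signal import spectrogram

    # Synthetic stand-in for acoustic sensor data: a 100 Hz tone pulsed 3 times per second.
    fs = 8000                                   # sample rate (Hz)
    t_sig = np.arange(0, 10, 1 / fs)            # 10 seconds of signal
    samples = np.sin(2 * np.pi * 100 * t_sig) * (np.sin(2 * np.pi * 3 * t_sig) > 0)

    # First-order spectrogram; a Hann window keeps spectral leakage manageable.
    f, t, Sxx = spectrogram(samples, fs=fs, window="hann", nperseg=1024, noverlap=512)

    # "Second-order" spectrogram: remove each frequency bin's DC offset, then FFT its
    # magnitude over time. Peaks along this new axis correspond to repetition rates.
    Sxx_ac = Sxx - Sxx.mean(axis=1, keepdims=True)
    mod_spectrum = np.abs(np.fft.rfft(Sxx_ac, axis=1))
    mod_freqs = np.fft.rfftfreq(Sxx.shape[1], d=t[1] - t[0])

    bin_100hz = np.argmin(np.abs(f - 100))
    peak = mod_freqs[np.argmax(mod_spectrum[bin_100hz])]
    print(f"strongest repetition rate near 100 Hz: {peak:.1f} Hz")  # ~3 Hz expected here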
But truthfully 90% of work-related programming is not problem solving, it's implementing business logic. And dealing with poor, ever-changing customer specs. Which an LLM will not help with.
> But truthfully 90% of work-related programming is not problem solving, it's implementing business logic. And dealing with poor, ever-changing customer specs. Which an LLM will not help with.
Au contraire, these are exactly the things LLMs are super helpful at - most of the business logic in any company is just doing the same thing every other company is doing; there aren't that many unique challenges in day-to-day programming (or business in general). And then, more than half of the work of "implementing business logic" is feeding data in and out, presenting it to the user, and a bunch of other things that boil down to gluing together preexisting components and frameworks - again, a kind of work that LLMs are quite a big time-saver for, if you use them right.
Strongly in agreement. I've tried them and mostly come away unimpressed. If you work in a field where you have to get things right, and it's more work to double check and then fix everything done by the LLM, they're worse than useless. Sure, I've seen a few cases where they have value, but they're not much of my job. Cool is not the same as valuable.
If you think "it can't quite do what I need, I'll wait a little longer until it can" you may still be waiting 50 years from now.
> If you work in a field where you have to get things right, and it's more work to double check and then fix everything done by the LLM, they're worse than useless.
Most programmers understand reading code is often harder than writing it. Especially when someone else wrote the code. I'm a bit amused by the cognitive dissonance of programmers understanding that and then praising code handed to them by an LLM.
It's not that LLMs are useless for programming (or other technical tasks) but they're very junior practitioners. Even when they get "smarter" with reasoning or more parameters their nature of confabulation means they can't be fully trusted in the way their proponents suggest we trust them.
It's not that people don't make mistakes but they often make reasonable mistakes. LLMs make unreasonable mistakes at random. There's no way to predict the distribution of their mistakes. I can learn a human junior developer sucks at memory management or something. I can ask them to improve areas they're weak in and check those areas of their work in more detail.
I have to spend a lot of time reviewing all output from LLMs because there's rarely rhyme or reason to their errors. They save me a bunch of typing but replace a lot of my savings with reviews and debugging.
My view is that it will be some time before they can as well, precisely because of the success in the software domain - not because LLMs aren't capable as a tech, but because data owners and practitioners in other domains will resist the change. From the SWE experience, news reports, financial magazines, etc., many are preparing accordingly, even if only subconsciously. People don't like change and don't want to be threatened when it is them at risk - no one wants what happened to artists, and now SWEs, to happen to their profession. They are happy for other professions to "democratize/commoditize" as long as it isn't theirs - after all, that increases their purchasing power. Don't open-source knowledge/products, don't let AI near your vertical domain, continue to command a premium for as long as you can - I've heard variations of this in many AI conversations. It's much easier in oligopoly- and monopoly-like domains, and/or domains where knowledge was already known to be a moat even when mixed with software, since you have more trust that competitors won't do the same.
For many industries/people, work is a means to earn, not something to be passionate about for its own sake. It's a means to provide for other things in life you are actually passionate about (e.g. family, lifestyle, etc.). In the end AI may get your job eventually, but if it gets you much later than other industries/domains, you win from a capital perspective: other goods get cheaper and you still command your pre-AI scarcity premium. This makes it easier for them to acquire more assets from the early-disrupted industries and shields them from AI eventually taking over.
I'm seeing this directly in software. Fewer new frameworks/libraries/etc. outside the AI domain being published, IMO; more apprehension from companies about open-sourcing their work and/or exposing what they do, etc. Attracting talent is also no longer as strong a reason to showcase what you do to prospective employees - economic conditions and/or AI make that less necessary as well.
I frequently see news stories where attorneys get in trouble for using LLMs because they cite hallucinated case law. If they didn't get caught, that would look the same as using them "productively".
Asking the LLM for relevant case law and checking it up - productive use of LLM. Asking the LLM to write your argument for you and not checking it up - unproductive use of LLM. It's the same as with programming.
>Asking the LLM for relevant case law and checking it up - productive use of LLM
That's a terrible use for an LLM. There are several deterministic search engines attorneys use to find relevant case law, where you don't have to check to see if the cases actually exist after it produces results. Plus, the actual text of the case is usually very important, and isn't available if you're using an LLM.
Which isn't to say they're not useful for attorneys. I've had success getting them to do some secretarial and administrative things. But for the core of what attorneys do, they're not great.
For law firms creating their own repositories of case law, having LLMs search via summaries and then dive into the selected cases to extract pertinent information seems like an obvious, great use case to build a solution around.
The orchestration of LLMs that will be reading transcripts, reading emails, reading case law, and preparing briefs with sources is unavoidable in the next 3 years. I don't doubt multiple industry-specialized solutions are already under development.
Just asking ChatGPT to make your case for you is missing the opportunity.
If anyone is unable to get Claude 3.7 or Gemini 2.5 to accelerate their development work, I have to doubt their sentience at this point. (Or, more likely, doubt that they're actively testing these things regularly.)
Law firms don't create their own repos of case law. They use a database like Westlaw or Lexis. LLMs "preparing briefs with sources" would be a disaster and wholly misunderstands what legal writing entails.
I find it very useful to review the output and consider its suggestions.
I don’t trust it blindly, and I often don’t use most of what it suggests; but I do apply critical thinking to evaluate what might be useful.
The simplest example is using it as a reverse dictionary. If I know there’s a word for a concept, I’ll ask an LLM. When I read the response, I either recognize the word or verify it using a regular dictionary.
I think a lot of the contention in these discussions is because people are using it for different purposes: it's unreliable for some purposes and it is excellent at others.
> Asking the LLM for relevant case law and checking it up - productive use of LLM.
Only if you're okay with it missing stuff. If I hired a lawyer, and they used a magic robot rather than doing proper research, and thus missed relevant information, and this later came to light, I'd be going after them for malpractice, tbh.
Surely this was meant ironically, right? You must've heard of at least one of the many cases involving lawyers doing precisely what you described and ending up presenting made up legal cases in court. Guess how that worked out for them.
The uses that they cited to me were "additional pair of eyes in reviewing contracts," and, "deep research to get started on providing a detailed overview of a legal topic."
Honestly it's worse than this. A good lab biologist/chemist will try to use it, understand that it's useless, and stop using it. A bad lab biologist/chemist will try to use it, think that it's useful, and then it will make them useless by giving them wrong information. So it's not just that people over-index when it is useful, they also over-index when it's actively harmful but they think it's useful.
You think good biologists never need to summarize work into digestible language, or fill out multiple huge, redundant grant applications with the same info, or reformat data, or check that a writeup accurately reflects the data?
I’m not a biologist (good or bad) but the scientists I know (who I think are good) often complain that most of the work is drudgery unrelated to the science they love.
Sure, lots of drudgery, but none of your examples are things that you could trust an LLM to do correctly when correctness counts. And correctness always counts in science.
Edit to add: and regardless, I'm less interested in the "LLMs aren't ever useful to science" part of the point. The point that actual LLM usage in science will mostly be for cases where they seem useful but actually introduce subtle problems is much more important. I have observed this happening with trainees.
Reddit's LocalLLaMA has a lot of these. 3090s are pretty popular for these purposes. But they're not trivial to build and run at home. Among other issues, you're drawing >1 kW for just the GPUs if you have four of them at 100% usage.
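(Back-of-the-envelope: a single 3090 has a ~350 W TDP, so four of them at full load is already ~1.4 kW before counting the CPU, drives, and PSU losses.)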
If there's a lot of tryptophan in the bacteria, the ribosome scoots along the mRNA quickly and a kink forms later in the mRNA that boots the RNA polymerase off of the DNA prematurely, and stops the cell from making the mRNA that encodes for proteins that make tryptophan.
If there is a deficiency of tryptophan in the cell, the ribosome stalls, and a different kink forms in the mRNA that allows the RNA polymerase to keep on transcribing the genes necessary to make tryptophan.
This only works in bacteria, because there's no nucleus, and transcription and translation can occur simultaneously.
Wow. And the trp (and lac) operons are mentioned in the context of "see, bacterial gene regulation is so simple" vs. the insanity of eukaryotes, with little use of operons, lots of use of distal enhancers, etc.
>How long will your corner store survive if the IRS found out you were processing transactions in your own currency?
Offering someone credit in USD isn't making your own currency. The closest widespread version of that is community currencies[0], which the US doesn't seem to particularly care about -- I'm guessing because they're generally pegged to the dollar and promote local economies.
What fresh hell is this?