Hacker News | bgwalter's comments

> The question isn't whether this future arrives. Looking at the money and talent flowing into AI development, it's inevitable. The question is whether you'll be ready when it does, and whether you'll be working on the parts of product development that actually matter in that world.

And there it is: inevitable. The whole article is written in a pseudo-religious manner, probably with the help of "AI" to collate all known talking points.

I think the author is not working on anything that matters. His company is one of a million similar companies that ride the hype wave.

What matters is real software written before 2023 without "AI", which is now stolen and repackaged.


It looks like a specialized proprietary application to identify defect patterns in lithography, similar to these papers:

https://blogs.sw.siemens.com/calibre/2024/04/03/ai-ml-rules-...

That seems to be one of the legitimate uses of "AI", as opposed to the generative nonsense. It also makes sense that the company is in the EU. Companies there tend to focus on real things as opposed to hot air. It also means that one cannot evaluate Mistral by focusing on its chatbot performance, since the real business seems to lie elsewhere.


A surprising number of people are in favor of (or pay lip service to) some kind of VAT on robots and "AI":

https://en.wikipedia.org/wiki/Robot_tax

Make it 50% of the sales price, as with cigarettes, since "AI" makes people dumber.


What are the next steps after the install? Discover that local LLMs are inadequate or freeze the machine (as stated in the article)?

If you are at NIH, perhaps wait until you are fired?

It is very sad that the whole scientific ecosystem is jumping on the hype train. There are no interesting articles any longer, no real scientific discoveries. Just article after article about how to feed the bureaucratic LLM machinery and become a good apparatchik within it.


While I agree it's sad that many scientific companies are jumping on the LLM hype train, there are many researchers producing fantastic work with and without the aid of LLMs.

Some are incorporating LLMs in a nice way, e.g. integrating them into docs.


Is the LLM-doc integration some kind of monetization scheme where they expect users to pay them to disable it?

So we are in the "it will be useful later" phase. Here is a little stock chart from the dotcom bubble until today:

https://companiesmarketcap.com/juniper-networks/stock-price-...


People investing in startups that sell Adderall without a prescription, or in other telemedicine schemes, are welcome in this administration, too.

The odd thing is that Trump always uses drugs as a pretext to intimidate other countries. He used fentanyl as an excuse for sanctions against Canada and cocaine from Venezuela as an excuse to drone a "drug boat" carrying 11 people. Neither Canada nor Venezuela is anywhere near being a prime exporter of drugs to the US.

So the question is whether "crypto" is just used for the self-enrichment of the elites or whether there are larger plans to make it the primary means of slush funds for various South American rebels. The fact that regulations are largely ignored [1] in a bipartisan manner would fit both theories.

[1] Occasional efforts by Sen. Warren are noted.


This is the original "master plan" from 2025:

https://www.tesla.com/master-plan-part-4

It becomes immediately obvious that there is no plan; it is just advertising for the things Tesla has already been doing.

The picture at the top, which shows a robot that, as usual, performs an easy task in a clean, dust- and dirt-free environment, may indicate that they'll focus more on robots. That is it.

Since the stock price is decoupled from all realities, they can do whatever they want.


Yes, Google with udm=14 is much better than "AI". "AI" might work for the trivia-type questions from this article, which most people aren't interested in to begin with.
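
For reference, udm=14 is simply a query parameter that switches Google to its link-only "Web" results view, e.g.:

    https://www.google.com/search?q=example+query&udm=14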

It fails completely for complex political or investigative questions where there is no clear answer. Reading a single Wikipedia page is usually a better use of one's time:

You don't have to pretend that you are parallelizing work (which is just for show) while waiting three minutes for the "AI" answer. You practice speed reading and memory retention. You enhance your own semantic network instead of the network owned and controlled by oligopoly members.


There is no "from scratch" for "AI". Claude will read the original, launder it, strip the license and pass it off as its own work.

Indeed, LLMs cannot do truly novel thinking, and the laundering analogy is spot-on.

However, they're able to do more than just regurgitate code: I can have them explain to me the underlying (mathematical or other) concept behind the code and then write new code from scratch myself, with that knowledge.

Can/should this new code be considered derivative work if the underlying principles were already documented in the literature?


They can regurgitate explanations as well as code. I'd strongly recommend doing actual research: you'll find better (less-distorted, better laid out, more complete) explanations.

There is literally a GitHub repository, six years old, that ports an out-of-tree ftape driver to modern Linux:

https://github.com/Godzil/ftape

Could it be that Misanthropic has trained on that one?


> Maybe this driver have problems on SMP machines.

> Maybe this driver have problems on 64Bit x86 machines.

Ouch. The part where it says it’s not possible to use a normal floppy and the tape flip anymore seemed odd enough, but those last points should scare anyone away from trying these on anything important.


Yes, Godzil's repo could have the issues you point out but still give Claude hints about which APIs to replace. Or perhaps the latest possibly-Claude-plagiarized version has the same issues.
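
For a concrete taste of the API churn such a port involves, here is a minimal sketch of my own (assumed names, not code from Godzil's repo): 2.x-era drivers like ftape registered an .ioctl hook in file_operations, which modern kernels removed in favor of unlocked_ioctl.

    #include <linux/errno.h>
    #include <linux/fs.h>
    #include <linux/module.h>

    /* The old hook, removed in 2.6.36 along with the Big Kernel Lock:
     *   int (*ioctl)(struct inode *, struct file *,
     *                unsigned int cmd, unsigned long arg);
     * The replacement drops the inode argument and takes no lock.
     */
    static long ftape_unlocked_ioctl(struct file *file,
                                     unsigned int cmd, unsigned long arg)
    {
            /* dispatch on cmd exactly as the old handler did */
            return -ENOTTY;
    }

    static const struct file_operations ftape_fops = {
            .owner          = THIS_MODULE,
            .unlocked_ioctl = ftape_unlocked_ioctl,  /* was .ioctl */
    };

A rewrite this mechanical is exactly what an existing port demonstrates, so a model trained on the repo would have the answer handed to it.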
