Funny, on mobile Safari the site squeezes all screenshots horizontally so that three still fit per line. Any impression of the aspect ratio is gone unless you open them one at a time.
The problem with statements like yours is that everyone praises the LLM for doing things they (the humans praising) can do in their sleep.
You sound like you've done that CSV-to-SQL conversion many times, are subconsciously aware of all the pitfalls and unwritten (and perhaps never specified) requirements, and are limited only by your typing speed when doing it again.
I can use LLMs for stuff I can do in my sleep as well.
I move that even for you or me, LLMs ain't worth much for stuff we can't do in our sleep. Stuff we don't know or have only introductory knowledge of. You can use them to generate tutorials instead of searching for them, but that's about it.
Generating tutorials is good, but is it good because an LLM did it, or because you can't find a good tutorial by searching any more?
So you want to use an LLM on a code base. You have to feed it your code base as part of the prompt, and the prompt is limited in size.
I don't suppose there's any solution where you can somehow further train an LLM on your code base, so that it becomes part of the neural net and not part of the prompt?
This could be useful on a largish code base, for helping with onboarding at the least.
Of course you'd have to do both the running and training locally, so there's no incentive for the LLM peddlers to offer that...
Modern tools don't fine-tune on your code base; they use RAG: they select context and feed it to the LLM with each request. The better the context-selection algorithm, the better the results. See if your tool tells you which files it selected.
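To make that concrete, here's a toy sketch of the retrieval step (pure Python; word-overlap scoring stands in for real embeddings, and build_prompt plus the character budget are made up for illustration, not taken from any particular tool):

    # Toy RAG over a code base: score files against the question,
    # put the best matches into the prompt, send the result to the model.
    # Real tools use embeddings and a vector index instead of word overlap,
    # but the shape of the pipeline is the same.
    from pathlib import Path

    def score(question: str, text: str) -> int:
        # Stand-in for embedding similarity: count shared words.
        q_words = set(question.lower().split())
        return sum(1 for w in set(text.lower().split()) if w in q_words)

    def build_prompt(question: str, repo: str, top_k: int = 3,
                     budget_chars: int = 8000) -> str:
        files = [(p, p.read_text(errors="ignore")) for p in Path(repo).rglob("*.py")]
        ranked = sorted(files, key=lambda f: score(question, f[1]), reverse=True)[:top_k]
        context = ""
        for path, text in ranked:
            snippet = f"# file: {path}\n{text}\n"
            # The prompt-size limit mentioned above is what forces the selection.
            if len(context) + len(snippet) > budget_chars:
                break
            context += snippet
        return f"{context}\nQuestion: {question}\nAnswer using the files above."

    # prompt = build_prompt("where do we parse the CSV import?", "path/to/repo")
    # ...then send `prompt` to whatever model/API you use.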
I want to train the LLM on the whole code base and then pass a hand-picked context specific to what I'm asking.
So it doesn't only suggest what can be found on w3schools and geeks4geeks and maybe stackoverflow, but also whatever idioms and utility functions my code base has.
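For what it's worth, the closest thing today to baking the code base into the weights is a local parameter-efficient fine-tune. A rough, hypothetical sketch with Hugging Face transformers/peft/datasets (the model name, target modules and hyperparameters are placeholders, and which modules to target depends on the architecture; this shows the shape of the thing, not a recipe):

    # Hypothetical: LoRA-fine-tune a local causal LM on every source file,
    # then prompt the adapted model with hand-picked context as usual.
    from pathlib import Path
    from datasets import Dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "some/open-code-model"   # placeholder: any local causal LM you can run
    tok = AutoTokenizer.from_pretrained(base)
    if tok.pad_token is None:       # many code models ship without a pad token
        tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)
    model = get_peft_model(model, LoraConfig(
        r=8, lora_alpha=16, task_type="CAUSAL_LM",
        target_modules=["q_proj", "v_proj"]))  # architecture-dependent placeholder

    # Treat every source file in the repo as a training document.
    texts = [p.read_text(errors="ignore") for p in Path("my_repo").rglob("*.py")]
    ds = Dataset.from_dict({"text": texts}).map(
        lambda x: tok(x["text"], truncation=True, max_length=1024), batched=True)

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                               per_device_train_batch_size=1),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()
    # Afterwards the adapter "knows" your idioms; you still pass the
    # hand-picked, question-specific context in the prompt.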
> LLMs are better at reading terrible documentation than the average programmer.
LLMs don't go and read the terrible documentation for you when prompted. They reproduce the information posted by other people who struggled with said terrible documentation, if it was posted somewhere.
It's still better than a modern web search or struggling with the terrible documentation on your own - for introductory stuff.
For going into production you have to review the output, and reading code has always been harder than writing it...
This is wildly incorrect. Documentation categorically isn't excluded from LLMs' training sets, and they are perfectly capable of summarizing that documentation when asked.
Most single-player games work just fine(tm) if you apply the right amount of emulation, regardless of age, and they can't be killed by their IP owners.
It's not important what it is. Pocket was bought overpriced by Mozilla's incompetent officers in a project to diversify their income by playing VC. They then decided to piss all over their Firefox users by forcefully integrating it into the browser. It was a terrible decision and a terrible business, and now, way past its best-by date, it's finally being put out to pasture. Sadly the incompetent officers continue their tenure, and soon we'll write this obituary for Mozilla itself, as C-level compensation inevitably outstrips what Google is willing to pay for a browser with a 1% and declining market share.
I know that, I want to determine the level of incompetence.
A page-saving service is one thing; adding a curation/discovery team on top of that (paid by the subscriptions, or by the sites that want to get onto that curated list?) is another.