> “However, we notice – based on the social comments and international media coverage – that for many guests this period is ‘the most wonderful time of the year’.”
How to make your corporate response sound even more AI than the actual AI...
> "And here’s the part people don’t see: the hours that went into this job far exceeded a traditional shoot. Ten people, five weeks, full-time.”
If it didn't even save time, then what was the point?
Looking at it, I see familiar elements used by an artist going by the name Gossip Goblin to draw apocalyptic visions of a far-future humanity that has, for the N-th time, almost wiped itself out through increasingly invasive body modifications.
Back in the day I worked for the makers of Yellow Dog Linux, and I think because of these scarcities we had a pretty good business model of buying Apple RISC hardware at OEM prices and putting Linux on it, mostly for university scientists but also for enthusiasts. It was a lot of work keeping Linux running on hardware made by a company that was ambivalent about having alternative operating systems on its machines, but it was fun and a great group of people to work for.
The big thing most people from outside the Acorn era of Arm are missing here is that the Risc PC never had decent floating point support. For pure integer work the StrongArm upgrade was, at launch, simply astounding, but floats... nope. (The StrongArm upgrade merely needed to be in the slot near the vents, too; it had no active cooling or even a serious heatsink.)
Oddly, the later lower-end A7000 came in an A7000+ variant which did have an FPU, probably because Arm needed to test their FPU out somewhere.
The main reason was to run Windows (3.1) inside a window on Risc OS in parallel. You could copy/paste between them too, iirc. Floating point use on Risc OS was so non-uniform (i.e. the culture was to roll your own fixed point code) that any attempt to speed it up by offloading to another type of CPU would only have worked for one specific configuration of everything. The x86 cards available weren't exactly speed demons either.
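To make the "roll your own fixed point" idea concrete, here's a minimal sketch of 16.16 fixed point arithmetic; it's in JavaScript rather than the C or Arm assembler of the era, and the helper names are just for illustration:

  // 16.16 fixed point: top 16 bits are the integer part, low 16 bits the fraction.
  const FP_ONE = 1 << 16;
  const toFixed = (x) => Math.round(x * FP_ONE);   // float -> fixed
  const toFloat = (f) => f / FP_ONE;               // fixed -> float
  // Add/subtract are plain integer ops; multiply and divide need a rescale.
  const fpMul = (a, b) => Math.round((a * b) / FP_ONE);
  const fpDiv = (a, b) => Math.round((a * FP_ONE) / b);
  console.log(toFloat(fpMul(toFixed(1.5), toFixed(2.25)))); // 3.375

The whole point was to stay on the fast integer path instead of touching the emulated (or absent) FPU.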
At one time there was a lot more community excitement about shoving many Arms onto a single board, or adding a DSP, but the StrongArm upgrade was already fast enough to saturate the bus, making such a thing pointless.
Around this time Win95 overtook Risc OS in terms of realistic UX/cost as well.
With hindsight, the Risc PC existed to give Archimedes users an upgrade path for particular software (e.g. Sibelius) before ports of it to PCs were completed. Acorn knew they didn't have a chance.
Aider can be a chat interface, and it's great for that, but you can also use it from your editor by telling it to watch your files.[1]
So you'd write a function name and then tell it to flesh it out.
function factorial(n) // Implement this. AI!
Becomes:
function factorial(n) {
  if (n === 0 || n === 1) {
    return 1;
  } else {
    return n * factorial(n - 1);
  }
}
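For reference, watch mode is just a flag when you start aider; something along these lines (the file and model names are only placeholders, check aider --help for the exact options):

  aider --watch-files --model gpt-4o src/math.js

Any comment ending in "AI!" in a watched file then gets picked up and acted on, which is what turns the stub above into the full function.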
Last I looked, Aider's maintainer has had to focus on other things recently, but aider-ce is a fantastic fork.
I'm really curious to try Mistral's vibe, but even though I'm a big fanboi I don't want to be tied to just one model. Aider lets you tier your models so that your big, expensive model does all the thinking and then stuff like code reviews can run through a smaller model. It's a pretty capable tool.
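In case it's useful, the tiering I mean looks roughly like this (flag names from memory and model names just placeholders, so check aider --help):

  aider --architect --model gpt-4o --editor-model gpt-4o-mini --weak-model gpt-4o-mini

The main model does the reasoning, the editor model turns its answers into concrete file edits, and the weak model handles cheap chores like commit messages.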
Very much this for me - I really don't get why, given that new models are popping out every month from different providers, people are so happy to sink themselves into provider ecosystems when there are open source alternatives that work with any model.
The main problem with Aider is that it isn't agentic enough for a lot of people, but to me that's a benefit.
Man they don't solve it for me. They charge much more for using a credit card vs a checking account, especially when going across currencies, and I consider it pretty dumb to share my checking account information around when I can control things much more easily with a credit card. And literally any fee they charge is more than what nano charges. It's just that nobody takes nano :(
If it's dandelions, wait a few seasons (now that you've used Roundup) and then eat them! The leaves taste like arugula (the younger the better). The heads, when they bloom, can be dried, ground, and baked into cookie recipes. If you let the heads close, pick them before they start transforming into seeds and either pop them into your mouth raw while you're doing yard work or save them, bread them, and fry them up for a nutty flavor. The roots apparently make a good caffeine-free coffee replacement but who the hell wants to replace coffee?
That's what I was thinking as I was listening to the "be like clippy" video linked in the parent. Local models probably won't match the quality of the big guys' models for a long time to come, but for now local, open models have a lot of potential to let us escape this power consolidation before it's complete while still giving users 75-80% of the functionality. That remaining 20-25%, combined with the new skill of managing an LLM, is where the self-value comes in, the bit that says, "I do own what I built or learned or drew."
The hardest part with that IMO will be democratizing the hardware so that everybody can afford it.
Hopes that we'll all be running LLMs locally, in the face of skyrocketing prices for all kinds of memory, sound very similar to the cryptoanarchists' ravings about every user storing a full copy of the blockchain locally in the face of its exponential growth.
The only difference is that memory prices skyrocketing is a temporary thing resulting from a spike in demand from incompetent AI megalomaniacs like Sam Altman who don't know how to run a company and are desperate to scale because that's the only kind of sustainability they understand.
Once the market either absorbs that demand (if it's real) or over-produces for it, RAM prices are going to either slowly come back down (if the demand is real) or plunge (if it isn't).
So we'll see what happens. People used to think cryptocurrencies were going to herald a new era of democratized economic (and other) activity before the tech bros turned Bitcoin into a pyramid scheme. It might be too late for them to do the same with locally-run LLMs, but the Nvidias and AMDs of the world will be there to take our $.
Glad I'm not most users. I'm fine with 80% of the quality from an open-weight model. Hell, I've been using Linux for 25 years, so I suppose I'm used to not-the-greatest-but-free.