It should be noted that Rudolf Hoss wasn't the leader of some random Nazi camp. He was the commandant of the Auschwitz concentration camp, where 1.1 million people were murdered, making him one of the biggest mass-murderers of the last century.
Can you please explain to me why AI chips don't matter anymore because of DeepSeek? I thought it was just a better model, but perhaps I didn't get it?
DeepSeek used older-generation chips and developed a model that takes significantly less compute, making access to tons of the latest Nvidia hardware unnecessary.
Being forced to live with more HW restrictions usually results in more reliance on SW creativity and better optimizations instead of lazy developers bloating SW to fill all available resources.
Just like how it's no surprise that websites developed by people who all have the latest and greatest fully loaded M-silicon MacBooks also suffer from a horrible lack of optimization because "it works on my machine", while being a stuttery mess everywhere else.
Websites and the like are a different world from ML training, where devs seem to be more performance-conscious. But there's a weird reliance on CUDA because devs (rightfully) don't trust the alternatives.
Yeah, seen the same with PC gaming. Minimum specs absolutely exploding for no real reason other than the fact most gamers were buying the latest top-tier cards. Then the Steam Deck came out and devs were forced to consider the fact that a 2D pixel art game shouldn't be lagging out on a SoC capable of producing stunning 3D graphics in a properly optimized game.
Well, the Steam Deck's release coincided with the popularity of reconstruction techniques. The Deck didn't "force" devs to consider optimization so much as it just gave them a low-end reconstruction target to play with. Without FSR and XeSS, there's no doubt that the Deck would be a solidly last-gen console.
Strictly speaking, a lot of games really shouldn't be playable on the Steam Deck. Baldur's Gate III and Cyberpunk 2077 are both CPU-bound before reaching 60fps and can barely keep their head above 30fps running at 360p internal resolution. The Deck's saving grace is that it can tap into the same dynamic resolution mode that last-gen consoles depend on for consistent framerates.
But there has long been a suspicion in the AI community that the ultra-expensive-to-train, very-expensive-to-run humongous-LLM approach is a dead end, or at least entirely unnecessary (and as such, financially, a dead end).
I mean, think about it: the crown jewel of AI was never to find ways to train on insane amounts of data, but to get results as good as possible with only as much data as necessary and no more, because for a lot of use cases there simply isn't that much data.
And from everything we know, the structure of language is not so complex that you need this insane amount of data and model size.
It's just that we worked around the problems by throwing more compute and data at them instead of solving them properly. Similarly, we try to handle any little-data use case by reformulating it in a way that, we hope, takes advantage of the mass of "causal text" data modern foundation LLMs were trained on, then fine-tune and instrument the model using the "little data" of the use case.
But conceptually this is ... sub-par and undesirable. That said, the fact that we made it work with this trickery is quite magnificent.
And sure, these huge LLMs do more than encode language; they encode miscellaneous knowledge/data, too.
But it's a messy, hallucination-prone, not properly updateable, and potentially outright copyright- or privacy-law-violating encoding of data...
Many systems already use RAG-like approaches to supply the knowledge in an updateable, much better-defined form, and "only" use the LLM to find the right search queries and combine the results into human-readable responses.
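A minimal sketch of that pattern, with hypothetical names (`answer_with_rag`, `fake_llm`, `DOCS`) and a toy keyword lookup standing in for a real vector store and a real model:

```python
# Toy RAG sketch: knowledge lives in an updateable external store (DOCS),
# and the "LLM" is only used to combine retrieved facts into a response.
# All names here are illustrative, not any particular library's API.

DOCS = {
    "battery": "LiFePO4 batteries are cobalt-free and have a long shelf life.",
    "gpu": "Training large models typically relies on CUDA-capable GPUs.",
}

def retrieve(query: str) -> list[str]:
    """Toy retrieval: return every doc whose key appears in the query."""
    return [text for key, text in DOCS.items() if key in query.lower()]

def fake_llm(prompt: str) -> str:
    """Stand-in for a small LLM that only rephrases retrieved context."""
    return "Answer based on: " + prompt

def answer_with_rag(question: str) -> str:
    # Updating knowledge = editing DOCS, with no retraining of the model.
    context = " ".join(retrieve(question)) or "no matching documents"
    return fake_llm(f"{context} | question: {question}")

print(answer_with_rag("Which battery chemistry avoids cobalt?"))
```

The point of the structure is that the facts are swappable at runtime; only the language-handling part needs a model at all.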
In turn, the moment we have small LLMs that still handle language structure well, they will very likely win out, for the reasons mentioned above and because they are much cheaper, even though they are _way_ more complicated to use than "just prompting an LLM". But most advanced assistants are already way more complicated than "just prompting an LLM" anyway.
Or in other words, the technical breakthrough anyone (including OpenAI) would like the most (OpenAI: financially, and only as long as it stays an internal secret) is one that eliminates the need for the latest bleeding-edge ML chips. And DeepSeek is seen by some as a signal that exactly such a change is coming. I have also heard rumors (which I don't believe) that one reason OpenAI went non-open was that they realized this too: with cheap-to-run open models, they would lose the competitive advantage of competitors not being able to do from-scratch training even if they wanted to.
"It has since waned as people spend less time staring at their screens: as demand for consumer electronics fell, so did that for cobalt."
Stop me if I'm wrong, but the article totally missed the fact that new car battery chemistries are mostly cobalt-free.
I feel it explains the lack of demand better than the article's explanation does.
> Even a boom in electric vehicles has not been sufficient to counteract this, since manufacturers have done their best to reduce use of the formerly super-expensive metal.
Biological resources like that are exceptionally vulnerable because harvesting them reduces the rate of creation of replacement whales, in a vicious cycle.
But they have enormous stockpiles, compared to the biological resources. The upper 1 kilometer of the Earth's continental crust contains about 10^19 tonnes of cobalt. There is no similar stockpile of whales.
>"These days, LiFePO4 batteries are found everywhere and have various applications, including use in boats, solar systems, and vehicles. LiFePO4 batteries are cobalt-free and less expensive than most alternatives. It is non-toxic and has a longer shelf life."
The gas price spike also has other causes, such as gas-fired power plants replacing or partly replacing nuclear facilities like Fessenheim in France. Gas-fired plants are flourishing everywhere in the world, including in China to diversify away from coal; obviously at some point the available supply isn't going to be enough.
I don't know much about the US, but in China it didn't last for months, maybe 1 or 2 weeks. It was notified to the WHO at the beginning of January, and by 23 January Wuhan was in full lockdown. And not a lockdown like in the West: citizens were not allowed to set foot outside even for groceries, and the army was bringing rations to the people.
I remember a French newspaper in March highlighting how the PRC hid the severity of the virus... I mean, if they took such extreme measures in Wuhan, it certainly wasn't for a mild virus.
Then the title of the article is highly misleading.
Using "Scientists" instead of "Some scientists" implies that the great majority of the scientists believed in this hypothesis, and there is no evidence for that.
Also, the scientists with the most power were the ones doing the misleading. For whatever reason, the power structure has been such that the scientists willing to mislead are elevated to positions where they are given the opportunity to mislead the most people, and not challenged by other scientists^1 like you would expect in a community dependent on debate to find the truth.
And it's not as if this is leading to a substantial change in the community. If we're being honest, scientists with positions of power are just as likely to mislead us in the future when they believe the truth might be harmful to their funding.
I don't have a strong opinion about what happened, it may or may not be a lab leak. It's still ongoing and I feel it will be a very long and sterile debate with few scientific facts to prove anything.
Granted, if the title of the article was "Scientists believed that Covid had natural origins...", this article wouldn't attract my attention as much, because I already believe that to be the popular opinion. But if, in this hypothetical title, "Scientists" only represented a minority of scientists, it would still be a very misleading title.
Yes, some scientists did, and they didn't want speculation, so they suppressed the hypothesis by political means. There were other scientists who did entertain the thought publicly, and they were branded as anti-science. Just for entertaining the thought, not for claiming they had any conclusive evidence.
Did you notice, though, that in the mainstream media the lab leak was stated as conclusively ruled out, not even a possibility? That's what I saw; only relatively recently did the tune change.
Oddly enough, the mainstream media I saw stated that it was conclusively a lab leak, orchestrated by the Democrats in conjunction with Soros, Gates, leftists, globalists, and the CCP. There was zero potential that it could be anything else at all, or that even one of the supposed conspiring parties was not involved.
That's one of the tough parts about headline interpretation, because you're right. I think editors use it as a sort of con, knowing it leaves them a way out. Here's an NPR headline using the same term that could be seen as just as misleading:
I think if you qualify every bit of a statement, it comes off as verbose, which dulls the impact. If you have half a brain, you should know that it means "some scientists", not "all scientists". I suppose it's normal to read absolutes into things you know nothing about or are unwilling to invest thought into. Then the ambiguity leads to lots of pointless quibbles.
I think it is useful to let the employee know what the accepted delays are in this company, for future projects. But the boss or the manager should have said that earlier, at the beginning of the project, so the employee knows what to expect.
It's quite clumsy to say it only once the project has been released. Anyway, don't take it too personally; he/she probably just wanted to keep the pressure on you so you keep improving and never feel satisfied with your current pace. That's not very good management, but that's the way it is in most companies.
I know it will be an unpopular opinion here, but bringing back the HDMI and SD card ports makes the MacBook much thicker and eats up space that could be used for battery instead, all for ports I will never use. I wish there was another option without these ports.
Interesting, honestly didn't know that was a thing.
Is this the first MacBook that's hit the 100 Wh limit, or has this been a barrier previously?
But I mean like, that's it? No doubt there are gains to be made with a better battery, charging, cycling, cost and so forth. But without changing the regulations, a better battery can only get you more space, not higher capacity. Wild to consider.
It's not. There's _no_ law that dictates battery size; 100 Wh is simply the largest battery you can take on an airliner. Economically, no laptop manufacturer is going to make a laptop you can't take on a plane.
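For reference, the threshold is stated in watt-hours, while packs are usually spec'd in volts and milliamp-hours; the conversion is just multiplication. A quick sketch (the pack figures below are illustrative, not any specific MacBook's):

```python
# Watt-hours = volts * amp-hours. Airline carry-on rules cap lithium
# batteries at 100 Wh (larger packs need airline approval).

def watt_hours(voltage_v: float, capacity_mah: float) -> float:
    """Convert a pack's voltage and mAh rating to watt-hours."""
    return voltage_v * capacity_mah / 1000

# A hypothetical 11.4 V pack rated at 8600 mAh:
pack = watt_hours(11.4, 8600)
print(round(pack, 2))   # 98.04 Wh, just under the 100 Wh carry-on cap
print(pack <= 100)      # True
```

Which is why so many large laptops ship with batteries rated 99.x Wh: they sit as close to the cap as tolerances allow.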
Some laptops have two batteries, one of which can be hot-swapped or removed and charged while the other keeps the machine running in the meantime. Would love to see that on MacBooks, and it could be a fair compromise for the airplane problem, though obviously it will not happen. (The battery was user-removable on early MacBooks, but that was long ago.)