They have clear numbers for things, but it's not obvious how those numbers would map to what you're trying to run.
If I charged for compute based on the number of micro-ops executed, that would be a clear definition, but the actual cost would not be something you could predict, as it would depend on which CPU architecture you ended up running on.
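As a toy illustration of that unpredictability (the price and the micro-op counts below are entirely invented), the same program billed per micro-op costs a different amount depending on which core it lands on:

    # Hypothetical per-micro-op billing. The rate and the counts are made up;
    # the point is that identical work retires different numbers of micro-ops
    # on different CPU architectures, so the bill differs for the same program.
    PRICE_PER_MILLION_UOPS = 1e-06  # dollars, invented for the example

    uops_retired = {
        "arch_a": 4.2e9,  # e.g. a core that fuses instructions into fewer uops
        "arch_b": 6.8e9,  # e.g. a core that cracks instructions into more uops
    }

    for arch, uops in uops_retired.items():
        cost = (uops / 1e6) * PRICE_PER_MILLION_UOPS
        print(f"{arch}: {uops:.1e} uops -> ${cost:.4f}")

The definition is perfectly precise, but you can't know the number until after you've already run the workload.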
AWS is even more complicated and variable than that. For cloud storage you have to deal with not only the costs of the different storage classes, but also early deletion fees, access charges, etc. Combined, that makes it impractical to work out how much deleting a file from cloud storage will save (or cost) you. Sure, you could probably calculate it if you knew the entire billing history of the file and the bucket it is in, but do you really want to do that every time you delete a file?
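To make that concrete, here is a rough sketch of what a "what does deleting this save me?" calculation already has to know. The tier names, rates, and minimum-storage-duration rules are invented placeholders, not real AWS pricing, and it still ignores access and request charges:

    from datetime import date

    # Hypothetical storage tiers (NOT real AWS prices), each with a monthly
    # rate and a minimum storage duration that triggers an early deletion fee.
    TIERS = {
        "standard":   {"per_gb_month": 0.023,  "min_days": 0},
        "infrequent": {"per_gb_month": 0.0125, "min_days": 30},
        "archive":    {"per_gb_month": 0.004,  "min_days": 90},
    }

    def deletion_impact(tier, size_gb, stored_on, today):
        """Rough net effect of deleting an object: ongoing monthly savings
        versus a one-off early deletion fee. Ignores access charges, request
        fees, lifecycle transitions, etc., which is exactly the problem."""
        t = TIERS[tier]
        age_days = (today - stored_on).days
        monthly_savings = size_gb * t["per_gb_month"]
        remaining_days = max(0, t["min_days"] - age_days)
        early_fee = monthly_savings * remaining_days / 30  # billed as if kept
        return monthly_savings, early_fee

    saved, fee = deletion_impact("archive", 500, date(2024, 1, 10), date(2024, 2, 1))
    print(f"saves ~${saved:.2f}/month, but triggers ~${fee:.2f} in early deletion fees")

Even this simplified version needs the object's storage class, size, and upload date, and the real answer also depends on how the object has been accessed and transitioned over its lifetime.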
While I don't know enough to say whether this is intentional (it could result from simply blindly optimizing for profit), this sort of pricing model is anti-capitalistic, because it prevents consumers from making truly informed decisions. We see the same thing in the US healthcare system, where no one can actually tell you how much an operation will cost ahead of time. That creates a very inefficient (but very profitable) market.
That seems like a recipe for corruption. Representatives would need to supplement their salary somehow just to afford housing. The easiest means would be to leverage their position. Even if you make it illegal, the demand will make it happen anyway.
We could make it something like 10x the median salary and still have the alignment you want, but it would be more resistant to that effect.
Well, that's what you'd expect from an LLM. They're not designed to give you the best solution. They're designed to give you the most likely solution. Which means that the results would be expected to be average, as "above average" solutions are unlikely by definition.
You can skew the probability distribution a bit with careful prompting (LLMs told to claim to be math PhDs are better at math problems, for instance), but in the end all of those weights in the model are spent encoding the most probable outputs.
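A toy sketch of that intuition, with a made-up quality distribution standing in for a model's outputs (this is not a real LLM, just a reweighting exercise): prompting can shift probability toward better outcomes, but the bulk of the mass stays where the training data put it.

    import random
    from collections import Counter

    # Invented distribution of solution quality in the "training data".
    # The mode is "average", so likelihood-driven sampling mostly yields
    # average work, and "excellent" stays rare by definition.
    quality = {"poor": 0.20, "average": 0.55, "good": 0.20, "excellent": 0.05}

    def skew(dist, boosted, factor=2.0):
        """Crude analogue of prompt engineering: reweight some outcomes,
        then renormalize. The underlying distribution still dominates."""
        raw = {k: v * (factor if k in boosted else 1.0) for k, v in dist.items()}
        total = sum(raw.values())
        return {k: v / total for k, v in raw.items()}

    skewed = skew(quality, boosted={"good", "excellent"})
    draws = random.choices(list(skewed), weights=list(skewed.values()), k=10_000)
    print(skewed)                        # "excellent" rises (0.05 -> 0.08) but stays rare
    print(Counter(draws).most_common())  # "average" is still the most common draw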
So, it will be interesting to see how this plays out. If the average person using AI is able to produce above average code, then we could end up in a virtuous cycle where AI continuously improves with human help. On the other hand, if this just allows more low quality code to be written then the opposite happens and AI becomes more and more useless.
That's not true. Preventing jams can actually require more complicated designs. To avoid jams, a device needs to constrain its range of motion and cope with dirt and debris. That can require additional parts to stabilize the motion, sealed components, specialized alloys to match thermal expansion, or more complicated motions that clear contaminants.
Given equal precision and alloys used in construction, a musket will jam less than a more complex gun. However, a musket and bullets built with 1700s technology and precision will jam more than a gun built to the best modern standards (though there are some pretty bad modern guns out there too that may not stack up so well).
A bolt action is simple compared to any semi-automatic gun. Particularly if it is a single-shot bolt action, which dispenses with the complexity of a magazine feed.
I think the big factor there is that you'd have to wait over a decade for the transplant to be the correct size. I'm also not sure that we have the technology to keep a brainless body alive for such a long period - the brain is involved in a large number of processes that we don't yet have a way to replicate. And then you'd need a complex surgery to perform the transplant.
Pluripotent cells work fine in many animals with no apparent problems and avoid all of the issues with the clone approach. If pluripotent cells turn out to cause problems, then we could always engineer a kill switch to make sure they die off after the limb is regrown.
You end up with a thermal runaway situation where the hotter the photovoltaics get, the more energy they convert to heat (instead of reflecting or converting to electricity).
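A toy sketch of that feedback loop (every number here is invented and the thermal model is deliberately crude): as the panel heats up its efficiency falls, so a growing share of the absorbed sunlight becomes heat. Whether it settles at a high temperature or truly runs away depends on how well the panel can shed heat.

    IRRADIANCE = 1000.0       # W/m^2 absorbed (assumed)
    BASE_EFF = 0.20           # efficiency at 25 C (assumed)
    TEMP_COEFF = 0.004        # fractional efficiency loss per degree above 25 C (assumed)
    COOLING = 5.0             # W/m^2 shed per degree above ambient (deliberately weak)
    HEAT_CAPACITY = 10_000.0  # J per degree per m^2 of panel (made up)

    temp = 25.0
    for second in range(3600):                   # simulate one hour
        eff = max(0.0, BASE_EFF * (1 - TEMP_COEFF * (temp - 25.0)))
        net_flux = IRRADIANCE * (1 - eff) - COOLING * (temp - 25.0)
        temp += net_flux / HEAT_CAPACITY         # degrees gained this second

    print(f"after 1 hour: T = {temp:.0f} C, efficiency = {eff:.3f}")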
Don't worry, they're not following the "deprecate and cancel" playbook for that. They seem to be using the "copy a competitor poorly" one. The few features I liked about it, that distinguished it from Slack, disappeared in the latest update.