Enriching uranium to levels like 60% is hard, but from there you no longer need thousands of centrifuges. This means they can set up covert enrichment plants and get weapons-grade uranium without drawing attention.
Because the whole purpose of these kinds of projects at Google is to keep regulators at bay. They don't need these products in the sense of making money from them; they will just burn some money and move on, exactly as they have done hundreds of times. But what kind of company has such a free pass to burn money? The kind of company that is a monopoly. Monopolies are THAT profitable.
This shows that the overall price level (the cumulative inflation embodied in the PCEPI) has risen to about 2.39 times its starting value over the period, i.e., it now stands at 239% of where it began (a cumulative increase of roughly 139%).
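The distinction between a price-level multiple and a percentage increase is easy to conflate, so here is a minimal arithmetic sketch. The index values are hypothetical placeholders, not actual PCEPI data:

```python
# Hypothetical index values chosen to illustrate a 2.39x price-level multiple.
start_index = 100.0   # placeholder PCEPI at start of period
end_index = 239.0     # placeholder PCEPI at end of period

ratio = end_index / start_index          # price-level multiple: 2.39x
cumulative_pct = (ratio - 1) * 100       # percentage increase: 139%

print(f"price level multiple: {ratio:.2f}x")
print(f"cumulative increase: {cumulative_pct:.0f}%")
```

A 2.39x multiple means the level is 239% *of* its starting value, but only a 139% increase *over* it.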
The thing that bugs me to no end when talking about inflation in a historical context is that everyone forgets the consumption indexes it is calculated from (PCEPI, CPI, etc.) are NOT static: they are changed over time, often quite arbitrarily, and often in ways that make inflation seem lower than it actually is for the consumer.
Overall, historical comparisons of inflation numbers are so imprecise as to be practically worthless over long timescales. You can expect the real figure for consumers to be much higher, given the political incentive to lie about inflation data.
ChatGPT is horrible at producing Dutch rhymes (for Sinterklaas poems) until you realize that the words it comes up with do rhyme when translated to English.
> 2009, renowned artist Ai Weiwei published an image of himself nude with only a 'Caonima' hiding his genitals, with a caption "草泥马挡中央" (cǎonímǎ dǎng zhōngyāng; 'a Grass Mud Horse covering the center'. One interpretation of the caption is: "fuck your mother, Communist Party Central Committee"). Political observers speculated that the photo may have contributed to Ai's arrest in 2011 by angering Chinese Communist Party hardliners.
Right but I wouldn't call those things fundamentally different. That's just having different words; the categories of idiosyncrasies are still the same.
As most languages allow the expression of algorithms, they are all Turing complete and thus not fundamentally different. The complexity of expressing some concepts differs, though.
My favorite thing is a "square": the name I give to an enumeration that lets me compare and contrast things along two different qualities, each expressed by two extremes.
One such square is "One can (not) do (not do) something." Each "not" can be independently present or absent, just like the rows of a truth table.
"One can do something", "one can not do something", "one can do not do something" and, finally, "one can not help but do something."
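These four corners enumerate like a truth table. A minimal Python sketch (the `phrase` helper is my own illustration; the templates are the four phrases from this comment):

```python
from itertools import product

def phrase(neg_can: bool, neg_do: bool) -> str:
    """Return the English phrase for one cell of the 'square'.

    neg_can negates 'can', neg_do negates 'do'. The double-negated
    cell uses the idiom 'can not help but do'; the cell negating only
    'do' has no idiomatic English form, so a literal gloss is used.
    """
    if neg_can and neg_do:
        return "one can not help but do something"   # must do
    if neg_can:
        return "one can not do something"            # unable / forbidden
    if neg_do:
        return "one can do not do something"         # may refrain (no idiom)
    return "one can do something"                    # able / permitted

# Enumerate all four combinations, truth-table style.
for neg_can, neg_do in product([False, True], repeat=2):
    print(f"neg_can={neg_can!s:5} neg_do={neg_do!s:5} -> {phrase(neg_can, neg_do)}")
```

The point of the square is exactly this exhaustiveness: every combination of the two negations gets a cell, whether or not the language has a compact way to say it.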
Why should we use "help but" instead of "do not"?
While this does not prevent one from enumerating the possibilities while thinking in English, it makes that enumeration harder than it is in other languages. In Russian, for example, the "square" is expressible directly.
Also, "help but" is not shorter than "do not"; it is longer. Useful idioms are usually expressed in shorter forms, so apparently English speakers do not consider "one can not help but do something" a useful thing to say.
I agree in general, but I think that "open" is actually a pretty straightforward word.
As I see it, "Open your heart", "Open a can" and "Open to new experiences" have very similar meanings for "Open", being essentially "make a container available for external I/O", similar to the definition of an "open system" in thermodynamics. "Open a bank account" is a bit different, as it creates an entity that didn't exist before, but even then the focus is on having something that allows for external I/O - in this case deposits and withdrawals.
It is shocking that in this day and age people can burn $250M and fail to deliver a robot. Last time I checked, cameras could be bought for a couple of dollars, and any SBC has gigaflops of compute power.
The Society of Mind by Marvin Minsky will help anyone interested in the topic of multimodality. The book covers several interesting ideas about organizing systems made up of more than one "model" or agent.
Learning architectures come in all shapes, sizes, and forms. This could mean there are fundamental principles of cognition driving all of them, just implemented in different ways. If that's true, one would do well to first understand the extremely simple and go from there.
Building a very simple self-organizing system from first principles is the flying machine. Trying to copy an extremely complex system by generating statistically plausible data is the non-flying bird.
Robotics is more accessible than ever. The tech is here to build almost anything you want. Dream big! We can now buy cameras for a couple of dollars, along with microphone arrays, sensors, motors, and drivers. 3D printers are everywhere, and stock components are readily available.
Or 'manage your expectations': download ROS, build another mediocre turtle bot powered by some Nvidia teraflop chip (if they let you boot their SDK), and spend a year learning "abstractions" defined by other people for other projects.
It doesn't need to be permanent. If humans could escape their embodiment temporarily, they certainly would. Being permanently bound to a physical interface is definitely a disadvantage.
I believe we already have the technology required for AGI. It is perhaps analogous to a manned lunar station or a two-mile-tall skyscraper: we have the technology required to build it, but we don't, for various reasons.