I think you could do most of it as a point-and-click. Perhaps with the exception of that one command (if you know what I mean), because the mere possibility of it would be revealing in a point-and-click interface. But you could do it in a Sierra AGI-style graphical adventure, because that still has a parser.
Perhaps you could take a hierarchical approach somehow: first generate a "zoomed out" structure, then copy parts of it into an otherwise unspecified picture to fill in the details.
But perhaps plain Stable Diffusion wouldn't work - you might need different neural networks trained on each "zoom level", because the structure varies: music generally isn't like a fractal and doesn't have exact self-similarity.
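Very roughly, something like the sketch below (the per-level models and their sample() interface are entirely hypothetical), with one network per zoom level, each conditioned on the level above:

    # Rough coarse-to-fine sketch; the per-level models and their
    # sample() interface are hypothetical, not any real library.
    def generate_piece(models):
        # models[0] samples the whole-piece structure unconditionally;
        # each later model expands every segment of the level above it.
        structure = models[0].sample(condition=None)
        for level_model in models[1:]:
            refined = []
            for segment in structure:
                # a separate network per zoom level, since (unlike a
                # fractal) bar-level patterns differ from phrase-level ones
                refined.extend(level_model.sample(condition=segment))
            structure = refined
        return structure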
>You can do MCMC like AlphaGO and see ten moves ahead.
The existence of adversarial attacks shows that most neural networks have pretty bad worst-case performance. So sticking GPT-3 into alpha-beta or MCTS could just as easily give you an ungeneralizable optimum, because optimizers by their nature seek out extreme responses. Call it a Campbell's law for neural nets.
The actual AlphaZero nets are probably more robust because they were themselves trained by MCTS, although they still don't generalize very well out-of-sample: IIRC AlphaZero is not a very strong Fischer Random player.
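To make the mechanism concrete, here's a deliberately stripped-down sketch (one-ply lookahead rather than real MCTS, and all the names are made up): a search that just maximizes a learned evaluator is, in effect, running an adversarial-example search against that evaluator.

    # Toy illustration, not real MCTS: pick the move an imperfect learned
    # evaluator likes best. The wider/deeper the search, the more chances
    # it has to find a position the net mis-scores rather than one that
    # is genuinely good -- the same pressure adversarial attacks exploit.
    def search_best_move(position, candidate_moves, learned_value):
        return max(candidate_moves,
                   key=lambda move: learned_value(position.play(move)))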
In the same way. Most proposed fusion systems use deuterium-tritium fusion, where most of the energy (about 14.1 of the 17.6 MeV per reaction) is carried away by neutrons, so direct energy conversion wouldn't be possible anyway.
From the article you referenced:
> ITER will not produce enough heat to produce net electricity and therefore is not equipped with turbines to generate electricity. Instead, the heat produced by the fusion reactions will be vented.
So in a fusion plant, the particle energy would turn into heat (through the particles interacting with matter); that heat would boil water (or heat some other working fluid), driving a turbine that produces electricity. See also https://en.wikipedia.org/wiki/DEMOnstration_Power_Plant which contains some diagrams showing just how that would be done.
More exotic reactions (e.g. p-B11) have been proposed in which almost none of the energy comes out as neutrons. Theoretically, you could then use electrostatic devices to capture the energy directly, without any of the mess with Carnot efficiency. However, getting p-B11 fusion going is much harder than D-T.
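For a rough sense of what that Carnot "mess" costs (the temperatures below are assumptions for illustration, not the parameters of any actual plant design):

    # Back-of-envelope only; the temperatures are assumed, not taken from
    # any specific fusion plant design.
    T_hot = 800.0    # K, assumed peak working-fluid temperature
    T_cold = 300.0   # K, assumed heat-rejection temperature
    eta_carnot = 1.0 - T_cold / T_hot
    print(eta_carnot)   # 0.625 -- a hard upper bound; real steam cycles land well below it
    # Direct electrostatic conversion of charged fusion products (as proposed
    # for p-B11 schemes) would not be bound by this thermal limit.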
> What you call "american ideas" is the only thing that works in the anonymous environment.
What about BitTorrent or its various file-sharing predecessors? It has no cash, and they had no cash. Or Tor? Exit nodes don't demand money as compensation for attracting the attention of people in authority.
The current crowning achievement of formal methods is, as I understand it, seL4. It is a formally verified microkernel of about 8,500 lines of code. There's still a while to go until that scales to 100 kLOC, unfortunately.
A problem with both seL4 and CompCert is that the code written to express the proofs is huge - much larger than the code that actually does things. This puts a ceiling on the size of the projects we can verify.
F* is a language that tries to address that by finding proofs with Z3, an SMT solver; Z3 can't prove everything on its own, but it cuts the proof code down by orders of magnitude. They have written a verified cryptography stack and a verified TLS stack, and want to build a whole verified HTTPS stack.
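For a feel of what Z3 does there, here's a tiny example through Z3's Python bindings (this is not F* itself, just the underlying solver being handed a verification-condition-style implication):

    # Requires the z3-solver package. A toy verification condition of the
    # sort a verified program generates by the hundreds; Z3 proves it with
    # no hand-written proof code.
    from z3 import Ints, Implies, And, prove

    x, y = Ints('x y')
    prove(Implies(And(x >= 0, y >= 0, x < y), x + 1 <= y))   # prints "proved"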
F* (through Low*, a verified low-level subset of F*) can extract verified code to C, which is kind of the inverse of the seL4 approach: seL4 begins with C code and enriches it with proofs of correctness; HACL* (a verified crypto library written in F*) begins with proven-correct F* code and extracts C code from it (I gather the actual crypto primitives are compiled directly to assembly, because C has some problems with constant-time guarantees). This enables HACL* to offer bindings to other languages that can just call the C code, like this Rust binding
There are ways to keep an AI in sealed hardware and make sure it can't affect the world, for instance by using an objective function that deals only with mathematics and not with the real world at all.
E.g. the AI is given a fixed amount of hardware and told to produce an algorithm that solves some NP-complete problem (say, integer programming) in expected time as close to polynomial as possible, along with a mathematical proof that the algorithm satisfies the claimed close-to-polytime complexity bound. Then humanity can just solve its NP-complete problems separately once it has the algorithm.
This objective function doesn't care about the physical world -- it doesn't even know that a physical world exists -- and so it's about as likely to directly affect the physical world as MCTS or AlphaGo is.
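A toy sketch of what such a "world-free" objective could look like (names are made up; the real proposal would also score running time and demand a machine-checkable proof, both of which this omits): the score depends only on synthetic instances and the candidate's answers, with no channel to anything physical.

    # Toy scoring function for a candidate subset-sum solver; illustration
    # only, all names hypothetical. The only output is a number.
    import random

    def random_instance(n, seed):
        rng = random.Random(seed)
        xs = [rng.randrange(1, 10**6) for _ in range(n)]
        target = sum(rng.sample(xs, n // 2))
        return xs, target

    def score(candidate_solver, sizes=(10, 20, 30), trials=5):
        total = 0
        for n in sizes:
            for t in range(trials):
                xs, target = random_instance(n, seed=1000 * n + t)
                picked = candidate_solver(xs, target)
                # reward only mathematical correctness on synthetic data
                total += 1 if picked and sum(picked) == target else -10
        return total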
The "AI is going to run out of control" is a very compelling narrative (as everybody who has read the Sorcerer's Apprentice understands). But that doesn't make it true. Beware the availability heuristic.
(Incidentally, I think AI destroying mankind because it's too smart is an unlikely outcome. It's much easier for the AI to subvert the human-designed sensors linked to its objective function; and if the AI is sufficiently smart and the sensors aren't perfect, then it can always do so.)
These counterarguments only seem effective because you're imagining some particular kind of AI. Once there is a useful AI, of course we will want it to interact with people and to control physical things in the real world, just like existing computers do.
That seems to be a DRM problem. Let's say you want the camera to track all modifications of the picture. Then, just as with DRM, there's nothing stopping the forger from replacing the CCD array in the camera with a wire connected to a computer running GIMP.
To patch the "digital hole", it would be necessary to make the camera tamper-proof, or to force GIMP to run in a trusted enclave that won't apply transformations without a live internet connection, or to create a tamper-proof watermarking scheme that embeds the transform metadata in the picture itself.
These are all attempted solutions to the DRM problem, and since DRM doesn't work, I don't think this would either.
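A toy sketch of why camera-side attestation doesn't close the hole (the scheme and key here are entirely hypothetical): the camera can only vouch for the bytes it receives, not for where they came from.

    # Hypothetical signing scheme for illustration; a real camera would use
    # asymmetric signatures, but the limitation is the same.
    import hmac, hashlib

    CAMERA_KEY = b'key-burned-into-the-camera'   # assumed hardware secret

    def camera_sign(sensor_bytes):
        # attests only "these bytes arrived on the sensor connector"
        return hmac.new(CAMERA_KEY, sensor_bytes, hashlib.sha256).hexdigest()

    real_photo = b'raw CCD readout of an actual scene'
    forgery = b'pixels piped in from a computer running GIMP'
    # Both get equally valid signatures once the forger controls the input.
    print(camera_sign(real_photo))
    print(camera_sign(forgery))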
Just make it zero-knowledge. You use the ID server to prove that you're not a sock puppet of someone already registered, but that's all the site needs to know.
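One possible instantiation, heavily simplified (blind signatures rather than a full zero-knowledge proof, and textbook RSA with toy parameters, so illustration only): the ID server attests that the token holder is some registered person it hasn't already served for this site, without ever seeing the token it signs.

    # Toy RSA blind signature; tiny unpadded keys, for illustration only.
    import hashlib, random
    from math import gcd

    # assumed ID-server keypair (the server keeps d secret)
    p, q = 999983, 1000003
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    def H(msg):
        return int.from_bytes(hashlib.sha256(msg).digest(), 'big') % n

    # User: pick a per-site pseudonym token and blind it with a random r.
    token = b'pseudonym-for-news.example.org'
    m = H(token)
    while True:
        r = random.randrange(2, n)
        if gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n

    # ID server: checks "this is a registered person I haven't yet served
    # for this site", then signs the blinded value; it never sees `token`.
    blind_sig = pow(blinded, d, n)

    # User: unblind. The site verifies the server's signature and learns
    # only "some registered person", nothing linkable to the user.
    sig = (blind_sig * pow(r, -1, n)) % n
    assert pow(sig, e, n) == m

The one-token-per-person bookkeeping stays on the ID server's side; the site only ever sees the unlinkable signature.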
That's for reinforcement learning, right? What is the adversarial learning problem in, say, classification based on Solomonoff induction?
If hypercomputation is possible, then anything based on Kolmogorov complexity would be SOL, but if not... is Solomonoff induction just too expensive in practice?
On topic, I would myself recommend Coloratura - https://ifdb.org/viewgame?id=g0fl99ovcrq2sqzk - for the sense of wonder/unusual protagonist.