Of all places, I think China has the least sentiment for protecting the businesses of industries it doesn't want, just to keep a line going up on paper.
Their push for renewables and energy independence is very deliberate. When they reach the goal, it's not "oh noes, our precious coal jobs, how are we going to placate rural voters and coal lobbyists", it's cheaper energy, and workers freed to be moved to more productive things.
It's funny that our hope for the future now seems to stand upon the Chinese Communist Party being the paragons of enlightened, unsentimental capitalism that we never were.
Oh I know, I am just saying China currently needs to stimulate its internal consumption to maintain its economic growth targets. But cheaper energy that keeps getting cheaper each year is a weird problem to have, and it will be interesting to see how it plays out in the next 5-10 years.
There's no guarantee that a "super intelligent" AI will have goals and values aligned with what's good for humanity, or even care.
If we train the AI to value what we value, we may make it reflect our own vices and contradictions. Or we may try not to, and create a paperclip maximizer.
Even if we manage to create a super intelligent AI, a separate question is whether we'll listen to it.
It seems unlikely that we'd give it the power to rule over us by force when we don't like what it says; we like whatever already agrees with our not-so-super-intelligent views. AIs that desire to escape and take over the world are projections of ourselves.
LLMs use tokens, with 1D positions and rich, complex, fuzzy meanings, as their native "syntax", so for them Lisp is alien and hard to process.
That's like reading binary for humans. 1s and 0s may be the simplest possible representation of information, but not the one your wet neural network recognizes.
Already over two years ago, using GPT-4, I experimented with code generation in a relatively unknown dialect of Lisp for which there are few online materials or discussions. Yet the results were good. The LLM occasionally hallucinated constructs from Scheme and Common Lisp into that dialect, but corrected itself when instructed clearly. When given a verbal description of a macro available in the dialect, it was able to refactor the code to take advantage of it.
This has been true since the beginning of HTML email. It hasn't stopped it from proliferating. It hasn't stopped it from being de-facto mandatory, and has no chance of reversing the course now.
HTML is going to be an inseparable part of e-mail for as long as e-mail lives, and yeah, it seems more likely that e-mail will die as a whole than that it will get any simpler technically.
At this point we can only get better at filtering the HTML.
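As a toy illustration of the filtering idea, here is a naive tag stripper in plain Rust. This is nowhere near a real sanitizer (it ignores entities, scripts, and malformed markup, all of which a mail client must handle), just a sketch of the basic operation:

```rust
// Naive HTML filter: drop everything between '<' and '>'.
// Real mail clients need a proper sanitizer; this is only a sketch.
fn strip_tags(html: &str) -> String {
    let mut out = String::new();
    let mut in_tag = false;
    for c in html.chars() {
        match c {
            '<' => in_tag = true,
            '>' => in_tag = false,
            _ if !in_tag => out.push(c),
            _ => {} // character inside a tag, skip it
        }
    }
    out
}

fn main() {
    let body = "<p>Hello, <b>world</b>!</p>";
    assert_eq!(strip_tags(body), "Hello, world!");
    println!("{}", strip_tags(body));
}
```

A real filter would instead parse the HTML and allowlist safe elements, but the principle is the same: the client, not the sender, decides what survives.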
Rust already supports switching between borrow checker implementations.
It has migrated from a scope-based borrow checker to a non-lexical borrow checker, and now has the experimental Polonius implementation as an option. However, once a new implementation becomes production-ready, the old one gets discarded, because there's no reason to choose it: borrow checking is fast, and the newer implementations accept strictly more (correct) programs.
You also have the Rc and RefCell types, which give you greater flexibility at the cost of some runtime checks.
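As a minimal sketch of that tradeoff: a value behind Rc<RefCell<...>> can be aliased freely at compile time, and the exclusive-borrow rule is enforced at runtime instead, panicking if two mutable borrows ever overlap:

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Two handles to the same value; aliasing is fine at compile time.
    let shared = Rc::new(RefCell::new(42));
    let alias = Rc::clone(&shared);

    // Mutation goes through a runtime-checked borrow instead of `&mut`.
    *shared.borrow_mut() += 1;
    assert_eq!(*alias.borrow(), 43);

    // Overlapping mutable borrows would compile but panic at runtime:
    // let a = shared.borrow_mut();
    // let b = alias.borrow_mut(); // panics: already mutably borrowed
}
```

So you trade a compile-time guarantee for a runtime check, plus a small reference-counting cost from Rc.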
>I recommend watching the video @nerditation linked. I believe Amanda mentioned somewhere that Polonius is 5000x slower than the existing borrow-checker; IIRC the plan isn't to use Polonius instead of NLL, but rather use NLL and kick off Polonius for certain failure cases.
I think GP is talking about somehow being able to, for example, more seamlessly switch between manual borrowing and "checked" borrowing with Rc and RefCell.
To elaborate on that some more: safe Rust can guarantee that mutable aliasing never happens, without solving the halting problem, because it forbids some programs that could've been considered legal. Here's an example of a function that's allowed:
fn foo() {
    let mut x = 42;
    let mut mutable_references = Vec::new();
    let test: bool = rand::random();
    if test {
        mutable_references.push(&mut x);
    } else {
        mutable_references.push(&mut x);
    }
}
Because only one if/else branch is ever allowed to execute, the compiler can see "lexically" that only one mutable reference to `x` is created, and `foo` compiles. But this other function that's "obviously" equivalent doesn't compile:
fn bar() {
    let mut x = 42;
    let mut mutable_references = Vec::new();
    let test: bool = rand::random();
    if test {
        mutable_references.push(&mut x);
    }
    if !test {
        mutable_references.push(&mut x); // error: cannot borrow `x` as mutable more than once at a time
    }
}
The Rust compiler doesn't do the analysis necessary to see that only one of those branches can execute, so it conservatively assumes that both of them can, and it refuses to compile `bar`. To do things like `bar`, you have to either refactor them to look more like `foo`, or else you have to use `unsafe` code.
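As a hypothetical refactor (not from the original comment), merging the two mutually exclusive `if` blocks into a single if/else gives the compiler a shape it can verify lexically, just like `foo`. Here `rand::random()` is replaced with a deterministic condition so the sketch is self-contained:

```rust
// A refactor of `bar` that the borrow checker accepts: the two
// mutually exclusive `if` blocks become one if/else, so the compiler
// can see that only one `&mut x` is ever created.
fn bar_refactored() -> usize {
    let mut x = 42;
    let mut mutable_references: Vec<&mut i32> = Vec::new();
    let test = x % 2 == 0; // stand-in for rand::random()
    if test {
        mutable_references.push(&mut x);
    } else {
        // This was the `if !test` block in `bar`.
        mutable_references.push(&mut x);
    }
    mutable_references.len()
}

fn main() {
    // Exactly one mutable reference was stored.
    assert_eq!(bar_refactored(), 1);
}
```

The program's behavior is unchanged; only its control-flow shape moved into a form the conservative analysis understands.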
Isn't this a pretty trivial observation, though? All code everywhere relies on the absence of UB. The strength of Rust comes from the astronomically better tools to avoid UB, including Miri.
Miri is good, but it still has very significant limitations. And the recommendation to use Miri is unlikely to carry over to similar tools for many other programming languages. Even within Rust, the state of UB in the ecosystem gets in the way, as described here:
>If you use a crate in your Rust program, Miri will also panic if that crate has some UB. This sucks because there’s no way to configure it to skip over the crate, so you either have to fork and patch the UB yourself, or raise an issue with the authors of the crates and hopefully they fix it.
>This happened to me once on another project and I waited a day for it to get fixed, then when it was finally fixed I immediately ran into another source of UB from another crate and gave up.
Further, Miri is slow to run, discouraging people from using it even for the subset of cases where it can catch UB.
>The interpreter isn’t exactly fast, from what I’ve observed it’s more than 400x slower. Regular Rust can run the tests I wrote in less than a second, but Miri takes several minutes.
Even if Miri only ran 50x slower than normal code, that alone would limit which code paths people exercise under it.
So, while I can imagine that Miri could be best in class, that class itself has significant limitations.
> So, while I can imagine that Miri could be best in class, that class itself has significant limitations.
Sure -- but it's still better than writing similar code in C/C++/Zig where no comparable tool exists. (Well, for C there are some commercial tools that claim similar capabilities. I have not been able to evaluate them.)
It's funny that when OpenAI developed GPT-2, they warned it was going to be disruptive. But the warnings were largely dismissed, because GPT-2 was way too dumb to be taken seriously as a threat.
You can't use this implementation to bootstrap Rust (in the sense of bootstrapping from a non-Rust language, or from a compiler other than rustc).
The GCC support here is only a backend for the existing Rust compiler, which is written in Rust. It uses GCC as a language-agnostic assembler and optimizer, not as a Rust compiler; the GCC part doesn't even know what Rust code looks like.
There is a different project meant to reimplement the Rust front end from scratch in C++ inside GCC itself, but that implementation is far behind and can't compile non-toy programs yet.