That's a misconception. Earth is still pretty much in equilibrium and emits about as much energy as it receives from the sun. Global warming is due to the heat staying a bit longer in the system, not due to the Earth emitting less.
If I have a tap flowing into a bucket, then once the bucket is full the amount of water flowing over the top will exactly equal the amount of water flowing in from the tap.
If I increase the size of the bucket, then the amount of water flowing over the top will still exactly equal the amount flowing in from the tap.
There's also scale to consider: global warming is more like if my bucket was very soft, and I've stretched the plastic a little bit to give it slightly more volume (the analogy breaks down beyond this point).
Drilling is essentially an O(N^2) process: you need to replace your drill bit every X meters, and the time it takes to replace it is roughly linear in the current depth.
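A back-of-the-envelope sketch of why that sums to roughly quadratic time (all figures here are made up purely for illustration): bit changes happen at depths X, 2X, 3X, ..., and each one costs a round trip over the current depth.

#include <stdio.h>

int main(void)
{
    /* Hypothetical figures, purely for illustration. */
    double bit_life_m   = 200.0;   /* meters drilled per bit            */
    double trip_speed_m = 500.0;   /* meters of drill string moved/hour */

    for (double target = 1000.0; target <= 8000.0; target += 1000.0) {
        double hours = 0.0;
        /* One round trip (pull up, run back down) per bit change. */
        for (double d = bit_life_m; d <= target; d += bit_life_m)
            hours += 2.0 * d / trip_speed_m;
        printf("%5.0f m: ~%5.0f hours spent swapping bits\n", target, hours);
    }
    return 0;
}

Doubling the target depth roughly quadruples the time spent on bit trips, which is the N^2 behavior.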
That's just economies of scale, though. It's always more expensive to be an early adopter. In Switzerland, 15% of all buildings are heated using geothermal heat pumps.
Yes. And in Switzerland, I believe most new houses have some other type of heat pump (drilling for geothermal is not allowed everywhere, or too expensive). This all still needs electricity; but many houses now install photovoltaics. (At least where I live.)
I was once working in a company producing software / operating systems for smart cards (such as the chips on your credit cards). We developed a simulator for the hardware that logged all changes to registers, memory and other states in a very large ring buffer, allowing us to undo / step backwards through code. With RAM being large, those chips being slow, and some snapshotting, we were usually able to undo back to the reset of the card. That was a game changer regarding debugging the OS.
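For anyone curious, here is a minimal sketch of the general technique (names and sizes are made up; this is not the actual simulator): every write to simulated state is logged as (address, old value) in a ring buffer, and stepping backwards just replays that log in reverse.

#include <stdint.h>
#include <stddef.h>

#define MEM_SIZE  65536u
#define LOG_SIZE  (1u << 20)            /* ring buffer capacity (hypothetical) */

typedef struct { uint32_t addr; uint8_t old; } change_t;

static uint8_t  mem[MEM_SIZE];          /* simulated card memory               */
static change_t log_buf[LOG_SIZE];      /* history of overwritten values       */
static size_t   log_head  = 0;          /* next slot to write                  */
static size_t   log_count = 0;          /* valid entries currently held        */

/* Forward step: remember the old value, then apply the write. */
static void write_mem(uint32_t addr, uint8_t value)
{
    log_buf[log_head] = (change_t){ addr, mem[addr] };
    log_head = (log_head + 1) % LOG_SIZE;
    if (log_count < LOG_SIZE)
        log_count++;
    mem[addr] = value;
}

/* Backward step: undo the most recent write still held in the buffer. */
static int undo_write(void)
{
    if (log_count == 0)
        return 0;                       /* history exhausted                   */
    log_head = (log_head + LOG_SIZE - 1) % LOG_SIZE;
    log_count--;
    mem[log_buf[log_head].addr] = log_buf[log_head].old;
    return 1;
}

int main(void)
{
    write_mem(0x10, 0xAA);
    write_mem(0x10, 0xBB);
    undo_write();                       /* mem[0x10] is 0xAA again             */
    return mem[0x10] == 0xAA ? 0 : 1;
}

Registers and other state can be handled the same way, and periodic snapshots bound how far the log ever has to rewind.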
So maybe we have different definitions of "time travel". But I recall that
- if a compiler finds that condition A would lead to UB, it can assume that A is never true
- that fact can "backpropagate" to, for example, eliminate comparisons long before the UB.
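A hedged sketch of the kind of code I have in mind (whether a given compiler actually performs this fold is exactly what is debated below):

#include <stddef.h>

int f(int *p)
{
    int was_null = (p == NULL);   /* comparison made before the UB               */
    int v = *p;                   /* unconditional dereference: UB if p is NULL  */
    return was_null ? -1 : v;     /* the claim: this may fold to "return *p"     */
}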
There may be different definitions, but also a lot of incorrect information. Nothing changes with C23 except that we added a note clarifying that UB cannot time-travel. The semantic model in C only requires that observable effects are preserved. Everything else can be changed by the optimizer as long as it does not change those observable effects (known as the "as if" principle). This is generally the basis of most optimizations. Thus, I only call it time-travel when it would affect previous observable effects, and this is what is allowed for UB in C++ but not in C. Earlier non-observable effects can be changed in any case; this is nothing specific to UB. So if you also call optimizations that do not affect earlier observable behavior "time travel", then this was and still is allowed.

But the often repeated statement that a compiler can assume that "A is never true" does not follow (or only in a very limited sense) from the definition of UB in ISO C (and never did), so one has to be more careful here. In particular, it is not possible to remove I/O before UB. The following code has to print 0 when called with zero, and a compiler that removed the I/O would not be conforming.
#include <stdio.h>

int foo(int x)
{
    printf("%d\n", x);   /* observable effect: must still happen for x == 0 */
    fflush(stdout);
    return 1 / x;        /* UB when x == 0, but only after the output       */
}
In the following example
void bar(int);   /* defined elsewhere */

int foo(int x)
{
    if (x) bar(x);   /* may be removed, but see the reasoning below */
    return 1 / x;    /* UB when x == 0                              */
}
the compiler could indeed remove the "if", but not because it is allowed to assume that x can never be zero. Rather, since 1 / 0 can have arbitrary behavior, that behavior could also include calling "bar()"; then "bar()" is called for zero and non-zero x alike, and the if condition can be removed (not that compilers would actually do this).
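To spell out the transformation being described, here is a hypothetical rewrite the standard would permit (not code a compiler actually emits; the name foo_transformed is made up):

void bar(int);   /* defined elsewhere */

/* Since 1 / 0 may behave arbitrarily, calling bar() in the x == 0 case is
   among the permitted behaviors, so the branch can be dropped. */
int foo_transformed(int x)
{
    bar(x);
    return 1 / x;
}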
I think the clarification is good. The amount of optimization prevented by treating volatile and atomics as UB barriers is probably limited, but as your example shows, a lot of very surprising transformations are still allowed.
Unfortunately I don't think there is a good fix for that.
There is an unconditional use of a pointer b, which is UB if b is null. However, there is an earlier branch that checks whether b is null. If we expected the UB to "backpropagate", the compiler would eliminate that branch, but both gcc and clang at -O3 keep it.
However, both gcc and clang have rearranged the side effects of that branch to become visible at the end of the function. I.e. if b is null, it's as if that initial branch never ran. You could observe the difference if you trapped SIGSEGV. So even though the compiler didn't attempt to "time-travel" the UB, in combination with other allowed optimizations (reordering memory accesses), it ended up with the same effect.
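A minimal sketch of the pattern being described (the actual example isn't reproduced here, and the names g and use are made up; exact behavior depends on compiler version and flags):

#include <stddef.h>

int g;                    /* side effect the early branch is supposed to produce */

int use(int *b)
{
    if (b == NULL)
        g = 1;            /* early null check with a visible side effect         */
    return *b;            /* unconditional dereference: UB when b is NULL; the   */
                          /* store to g may legally be reordered past this load, */
                          /* so a SIGSEGV handler might still see g == 0         */
}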
The current trend is somewhat exponential, though, and solar is now finally cheap enough to compete even without subsidies. That will hopefully lead to an even stronger boost.
But "works in limited cases" is absolutely not enough, given what it promises. It drove into static objects a couple of times, killing people. Recent videos still show behavior like speeding through stop signs: https://www.youtube.com/watch?v=MGOo06xzCeU&t=990s
Meaning that it's really not reliable enough to take your hands off the wheel.
Waymo shows that it is possible, with today's technology, to do much much better.
It's not enough for robotaxis yet, and Tesla doesn't claim that it is. They just think they'll get there.
What they do claim is that with human supervision, it lowers the accident rate to one per 5.5 million miles, which is a lot better than the overall accident rate for all cars on the road. And unlike Waymo, it works everywhere. That's worthwhile even if it never improves from here.
Fwiw you can take your hands off the wheel now, you just have to watch the road. They got rid of the "steering wheel nag" with the latest version.
Well the recent NHTSA report [1] shows Tesla intentionally falsified those statistics, so we can assume Tesla-derived statements are intentionally deceptive until proven otherwise.
Tesla only counts pyrotechnic deployments for their own numbers, which NHTSA states is only ~18% of all crashes, a figure derived from publicly available datasets. Tesla chooses to not even account for a literal 5x discrepancy (if only ~18% of crashes are counted, the true number is roughly five times higher) that is derivable from publicly available data. They make no attempt to account for anything more complex or subtle. No competent member of the field would make errors that basic except to distort the conclusions.
The use of falsified statistics to aggressively push a product at the risk of their customers makes it clear that their numbers should not only be ignored, but assumed to be malicious.
> It's not enough for robotaxis yet, and Tesla doesn't claim that it is. They just think they'll get there.
"By 2019 it will be financially irresponsible not to own a Tesla, as you will be able to earn $30K a year by utilizing it as a robotaxi as you sleep."
This was always horseshit, and still is:
If each Tesla could earn $30K profit a year just ferrying people around (and we'd assume more, in this scenario, because it could be 24/7), why the hell is Tesla selling them to us versus printing money for themselves?
They do plan to run their own robotaxis. But there are several million Teslas on the road already. They're just leaving money on the table if they don't make them part of the network, and doing so means they have a chance to hit critical mass without a huge upfront capital expenditure.
... and then react in a split second, or what? It's simpler to say your goodbyes before the trip.
> They just think they'll get there.
Of course. I think so too. Eventually they'll hire the receptionist from Waymo, and he/she will tell them to build a fucking world model that has some object permanence.
The driving into stationary objects thing is horrible and unacceptable, I agree. As I understand it, this occurred because Autopilot works by recognizing specific classes of objects (vehicles, pedestrians, traffic cones) and avoiding them. So if an object isn't one of those things, or isn't recognized as one of those things, and the car thinks it's in a lane, it keeps going.
Yes, it was a stupid system and you are right to criticize it. And as a Tesla driver in a country that still only has that same Autopilot system and not FSD, I'm very aware of it.
But the current FSD has been rebuilt from the ground up to be end-to-end neural, and they now have the occupancy network (which is damn impressive) giving a 3D map of occupied space, which should stop that problem from occurring.