I really identify with this way of thinking. One domain where it is especially helpful is writing concurrent code. For example, if you have a data structure that uses a mutex, what are the invariants that you are preserving across the critical sections? Or when you're writing a lock-free algorithm, where a proof seems nearly required to have any hope of being correct.
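To make that concrete, here's a toy sketch (entirely made up, in Go) of the kind of invariant I mean: a struct that caches a sum which must always match its parts. The invariant is only ever broken inside a critical section and is restored before the mutex is released.

package main

import (
	"fmt"
	"sync"
)

// Totals caches the sum of its parts.
// Invariant: whenever mu is not held, sum equals the sum of all values in parts.
type Totals struct {
	mu    sync.Mutex
	parts map[string]int
	sum   int
}

func NewTotals() *Totals {
	return &Totals{parts: make(map[string]int)}
}

func (t *Totals) Add(key string, n int) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.parts[key] += n // invariant temporarily broken here...
	t.sum += n        // ...and restored before the lock is released
}

func (t *Totals) Sum() int {
	t.mu.Lock()
	defer t.mu.Unlock()
	return t.sum // safe: the invariant holds at lock boundaries
}

func main() {
	t := NewTotals()
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			t.Add("hits", 1)
		}()
	}
	wg.Wait()
	fmt.Println(t.Sum()) // always 100
}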
I was trying to understand how an altitude of 470,000 ft compares to other things, so I looked up a few numbers.
470k feet is 143 km. The altitude record for an air-breathing aircraft is 38 km. There are some very low earth orbit satellites that orbit in the sub-200 km range (https://en.wikipedia.org/wiki/Very_low_Earth_orbit). The ISS orbits at about 400 km and typical LEO is 800 km. ICBMs have an apogee altitude of 1000 km or more.
(Of course, the energy required to get up to some altitude is only a small fraction of the energy required to get into orbit at that altitude. https://what-if.xkcd.com/58/ is a relevant read.)
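(If anyone wants to check the first number, it's just the standard conversion: $470{,}000\ \mathrm{ft} \times 0.3048\ \mathrm{m/ft} \approx 143{,}000\ \mathrm{m} \approx 143\ \mathrm{km}$.)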
I guess to me it seems like common sense that a system that has substantially fewer crashes also has substantially fewer deaths. Maybe we can't make definitive statements about the expected number of deaths yet, but I think the most reasonable best guess with the information we have is that Waymo deaths will be much lower.
The alternative requires a scenario where Waymo is especially likely to get into fatal accidents while being very good at avoiding non-fatal ones, right? Seems far-fetched.
I would not make the same inference, because we know that ML systems generally struggle with robustness and the long tail, while humans tend to exhibit much more flexibility in adapting to distribution shifts or unusual situations. For a concrete example of this in computer vision, see work such as the ImageNet-C dataset, where simple distribution shifts generally tank ML models but do not impact human performance.
But regardless, the claim here isn't "under some assumptions that some people (and not others) find reasonable, we can extrapolate and predict that self-driving cars will be found to be safer".
It's "self-driving cars are safer", which there isn't enough evidence to claim yet.
I've done this for a few years as well. I use a terminal+tmux for most work, including quickly editing files here and there, but for some reason when I get to "proper" focus-mode programming, my brain just wants a separate "application" to look at. And usually the terminal is on my secondary monitor while the "editor" terminal is full-screen on the main monitor.
I used to use gvim but realized that I was getting almost no benefit (and occasionally the differences between gvim and my terminal caused minor annoyances).
Two things I do that help in this regard:
1. I use a tweaked config for the "editor" instance of the terminal that has a slightly different background color from my main terminal. This keeps them separate in my mind.
2. I use dedicated shortcuts for focusing each application I use (browser, terminal, Slack, etc.) and the "editor" terminal has its own shortcut. (The --class flag that the post mentions kitty has would be pretty helpful in this regard. My terminal doesn't have that, so my shortcuts are based on title, which works well enough most of the time.)
I own a variety of monitors and can easily tell the difference between 60hz and 120hz. All things being equal, I of course prefer 120hz (or 165hz as some of my gaming monitors support).
I also own monitors at resolutions from 1440p to 4k.
For doing work (programming, where I'm mostly looking at text), resolution makes a huge difference. I only do coding on high DPI screens and I would upgrade to 5k or 6k or 8k displays if I were confident that my hardware and OS would support them well. (TFA was very helpful in that respect.) In these settings, high refresh rate makes only a marginal difference to my experience.
For gaming, refresh rate makes a much bigger difference, and resolution makes a somewhat smaller difference -- my hardware can't reasonably drive many of the games I play at 4k or higher anyway. So I just use cheaper, lower-resolution monitors that operate at high refresh rates for gaming.
Someday I guess I'll just be able to spend $300 for an 8k monitor at 240hz and then I won't have to make this kind of choice. (In fact, in the several years since I last bought gaming hardware I think the options for high-refresh-rate 4k monitors have gotten much better; I might use 4k for gaming if I were buying today.)
But for now, I'll always pick resolution over refresh rate for doing work, and it's not because I can't tell the difference.
I want the best of both worlds. There exist 144 Hz 4K 27" monitors, such as the LG 27GP950/27GP95R. They're still LCD panels rather than OLED, but I am confident that OLED panels with these dimensions will arrive fairly soon.
I've been using two 27" 4k 144hz monitors for over a year on my personal machine. I doubt my work setup would be able to drive them at full capability.
I considered that, but it doesn't cohere with "This is a lot better than the corruption being widely known". That is, the poster I was replying to seemed to be advocating "don't investigate, or else corruption will be widely known, even though nothing is done against it".
Anyway, regardless of their intended meaning, my point about in- and out-groups stands.
You interpreted it pretty differently from what I meant: investigation-less corruption is often unofficially known/suspected, but can't be acted on. Investigation enables acting on it.
Of course the laws need to be tight on corruption in the first place. The kinds of corruption seen in the US are typically perfectly legal (we also have loopholes that should be closed, which makes me wonder why people often choose the illegal forms of corruption).
Well that's partly sampling bias. You're not likely to own a Tesla if you hate the touch controls. My wife and I went through the car buying process this year and test drove a Model Y. I was pretty meh on the touch interface and my wife hated it. We ended up with a non-Tesla EV.
Maybe it is, maybe it isn't. Like I said, I was skeptical at first. There's also a bit of acclimating to a new way of interacting with the car - e.g. trusting it to turn on the lights and wipers for you, as well as learning how to use the voice controls. I don't think it's biased to say that automation and voice are more ergonomic than buttons and knobs while driving. The screen is mainly used for visualization in the Tesla. The navigation experience is so much better than in other cars.
I became intimately familiar with negative dentries while debugging a slow service deploy a few years ago.
A deploy that was normally very fast would sometimes hang for a few minutes during a phase where all it had to do was delete the old application directory and move the new one into place.
Turned out that the application was writing a bunch of tempfiles into the cwd and then immediately deleting them. Nothing ever touched that directory while the negative dentries accumulated for weeks or months. When someone finally deployed, the first rmdir that came along bore the cost of deleting all those negative dentries. It hung for seconds or minutes while the kernel cleared that directory's dentries out of the dcache, deleting linked-list elements one by one. It showed up in perf as being stuck inside shrink_dcache_parent.
This is actually easy to reproduce:
$ mkdir /tmp/foo
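# keep one real file in the directory so both rmdirs below fail and can be timed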
$ touch /tmp/foo/nodelete
# create and delete 100k files
$ for i in $(seq 1 10); do bash -c 'for i in $(seq 1 10000); do rm $(mktemp /tmp/foo/XXXXXX); done' & done; wait
...
$ time rmdir /tmp/foo
rmdir: failed to remove '/tmp/foo': Directory not empty
rmdir /tmp/foo 0.00s user 0.02s system 91% cpu 0.024 total
$ time rmdir /tmp/foo
rmdir: failed to remove '/tmp/foo': Directory not empty
rmdir /tmp/foo 0.00s user 0.00s system 81% cpu 0.003 total
Both rmdirs fail, but the first one takes 24ms. If you create and delete more files, it takes longer and longer.
At some point we probably would've noticed the memory leak as well (I found an 18 GB slab on one host while this was happening), but the machines in question have huge amounts of RAM.
I worked around the issue by making the application reuse tempfile names.
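Roughly the shape of that workaround (a hypothetical sketch, not the actual service code; names and paths are made up): instead of a fresh random name per temp file, where every delete leaves a new negative dentry behind, reuse one fixed name so the same dentry just flips between positive and negative.

package main

import (
	"log"
	"os"
)

// writeScratch reuses one fixed file name instead of a unique mktemp-style
// name, so repeated create/delete cycles touch a single dentry rather than
// leaving a new negative dentry behind every time.
func writeScratch(path string, data []byte) error {
	if err := os.WriteFile(path, data, 0o600); err != nil {
		return err
	}
	// ... do whatever the tempfile was for ...
	return os.Remove(path)
}

func main() {
	scratch := "/tmp/scratch.tmp" // hypothetical fixed name; add a per-process suffix if several workers run concurrently
	for i := 0; i < 100000; i++ {
		if err := writeScratch(scratch, []byte("work")); err != nil {
			log.Fatal(err)
		}
	}
}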
My conclusion at the time was that it was not, strictly speaking, a bug. It seemed to be a sharp edge that was working as intended.
Considering it again now, I do think it's essentially a bug, but it seems to be a known thing at this point. What I described is the same issue addressed by this unmerged patch: https://lkml.org/lkml/2017/9/18/739 (see discussion here: https://lwn.net/Articles/814535/). And it's mentioned in the article this HN post links to:
> Those dentries still take up valuable memory, and they can create other problems (such as soft lockups) as well.
Even if it's just a performance anomaly, these are good reports for kernel developers to have. If nothing else, it helps expand developers' understanding of the sorts of workloads people have had problems with. In the case of really complex systems, it can take a number of reports to spot the pattern, or in the case of proposed fixes, enough pain points to justify the risk of making a change. A report like this takes 60 seconds to cut and paste into an email to a mailing list. Or use the kernel.org Bugzilla, which is triaged by helpful volunteers. Every voice counts.
> Requires a domain name to be the first part of the module identifier
This is only true if you want the module to be publicly 'go get'-able. Private modules can be named whatever you want.
(Some tools use whether or not the first import path segment contains a '.' as a heuristic for "is this package stdlib", and those won't work correctly on a module that doesn't use a dot. There's a proposal, not yet accepted, to document this as a naming requirement for modules: https://github.com/golang/go/issues/32819. This is of course a looser requirement than "must be a domain name".)
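For example (module name made up), a private module's go.mod can be as plain as:

module internaltools

go 1.21

and code inside it imports its own packages as "internaltools/whatever". Just keep the stdlib heuristic above in mind if your tooling cares about the missing dot.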