Software engineering was life-changing for me. I studied cognitive and computer sciences (CMU '03), and when I graduated I realized I didn't know much about actually writing software.
Luckily, after graduating I found a software engineering course (Berkeley, 2005), and they let me attend without being enrolled. That professor (Kurt Keutzer) had wisdom! Literally life-changing.
These engineering skills are totally different from what you learn in a typical computer science curriculum. I highly recommend a course like the one OP links.
It's a fact that wealthy donors support specific research that interests them.
On the one hand, hasn't research always been funded like this? Wealthy patrons have always supported work that somehow gratified them. And history is replete with despicable personalities who have nevertheless financed good science.
On the other hand, when the patron's interests turn out to be questionable, the research supported by those interests can be examined. It's okay to give it a second thought in light of new information about the patron.
I happen to think this article raises valid questions. For example, I have questions about the practice of buying a visiting fellowship, and about the mechanisms by which faculty become oddly encumbered by donations.
My personal knowledge management project, Gthnk (gthnk.com), would appear to plug in easily as a Source - without any special plugin necessary. I really like what you've made!
I've heard the typical ICU/hospitalization course involves bilateral pneumonia, apparently requiring intubation and ventilation.
It appears to be survivable as long as equipment and personnel are available. And that's the problem: availability.
When health systems are overwhelmed, even people who could survive pneumonia with mechanical ventilation will die - because demand exceeds the supply of ventilators, and there are none left.
So yes, there are treatments for pneumonia, but they are limited in availability.
In the psychological sciences, it seems like you're damned if you do and damned if you don't.
When a phenomenon with a large effect size is demonstrated with tens or hundreds of participants, everybody crows about how the sample size should have been larger.
On the other hand, when a small effect size requires millions of observations to detect, the criticism becomes that the effect is too small to matter.
At any rate, this effect is small - but it is reliable. The only crappy part about this study is the ethical boundaries it crossed. In most other ways, this study was kind of amazing...