Are we seeing a similar phenomenon with autonomous vehicles? Waymo, for example, seems to have taken a steadier approach, while Cruise seems like it didn't.
OpenAI was also designed to avoid those mistakes, until people started seeing dollar signs. I am not sure laws can be written to outlaw human nature (greed being an inalienable part of it), and even if they could be, I am not sure they could be adequately enforced.
Here’s a paradoxical take: if you build an AI, you want it to be superintelligent and to take power away from humanity. Why would you want humans to have power over something more intelligent and powerful than us? That’s called a weapon.
The concept of a public benefit company seems really useful to society; it would be good if that could be strengthened in law and became more widespread, since a pure duty to maximise profit for shareholders is not easy to reconcile with ethical behaviour.
> Paul Christiano, Dario Amodei, and Geoffrey Irving write equations on a whiteboard at OpenAI, the artificial intelligence lab founded by Elon Musk, in San Francisco, July 10, 2017.
Can someone explain the architecture in that photo? It shows two pretty heavy wooden pillars, each literally right next to a much larger steel I-beam pillar. I don't get it. It seems like the wooden pillars would be totally redundant, and it looks kinda dumb having them so close.
The answer is hiding in the caption: the I-beam is earthquake retrofitting. The wooden beams are original. (Huge I-beams cross and/or frame many indoor spaces in SF at odd/surprising angles -- a consistent visual signature.)