
You can accuse everyone of *isms, but there are a lot of us who have just never been persuaded by the many articles written to scare us about the effects of AI.

The concerns about all the jobs going away have been persistent, but the labor market continues to be extremely strong. As long as entrepreneurship is creating new business opportunities and new ways to turn labor into value, it's completely possible that the effects of automation are drowned out by new applications of labor. In fact, this is what we've seen throughout the industrial revolution: constant anxiety about automation that fails to materialize into mass unemployment.

The concerns about polluting the information space are still hypothetical as well. The current information space is a complete garbage fire, and it's not due to generative AI. Might it get worse if bots start spamming people with GPT-generated text on social media? Yeah, it might. But social media is already terrible, and we need to do something about it regardless of that.



The key issue in automation transitions is the transition of affected individuals to other sources of income.

In previous technological revolutions, affected workers were hurt by their loss of income, and some no doubt fell into poverty without ever recovering. Not everyone can be retrained for new types of jobs immediately - (1) they may not have the needed foundational knowledge or the cognitive flexibility/ability, and (2) there might not be enough of the new types of jobs emerging quickly enough for them. Not every displaced miner can become a coder, or be competitive for junior dev jobs.

(Why should the state provide for these workers? Well, primarily for humaneness, and also social stability.)

The rewards of automation (cost savings as well as profits) are reaped by (1) the capital owners of the automation technology companies (and their higher-paid employees), as well as by (2) the companies and consumers using the new automation; therefore those owners and beneficiaries could be asked to bear at least part of the costs of supporting, retraining, and placing in jobs the workers they displaced. In a nutshell: Redistribution during structural unemployment caused by technological transitions.

A humane policy would provide the above types of support for workers displaced by automation. Ideally it would already be handled by existing unemployment policy, but in many countries such support is limited or minimal.

Corporate taxation might need some rethinking along the lines of job-displacement effects of companies (a tricky question, I admit - I've come across one or two proposals for assessing the automation level of companies for taxation purposes). The cross-border dynamics add further complexity, given that automation will displace many jobs outsourced across borders.

Given that the current AI revolution looks like it will be causing even larger and faster changes than previous revolutions, such policies are imo needed as a prerequisite (one of several) for allowing the development of powerful job-displacing AI.


There are two mostly disjoint groups warning about AI. There are the people worried about comparatively mundane effects from comparatively mundane systems: job loss, spam, disinformation, maybe an occasional unfair loan-application rejection. These concerns aren't baseless, but in all but the worst-case-scenario versions, they just aren't bad enough to make AI not worth it.

Then there are the people looking ahead, foreseeing a future where superintelligent AIs are more powerful than humanity, and worried that most possible variations of those superintelligences are incentivized to destroy us.

I think this open letter puts far too much emphasis on the petty stuff, probably because it's trying to appeal to people who are allergic to anything that requires extrapolating more than a little bit into the future. But buying more time for alignment research, before we tackle superintelligence, does meaningfully improve humanity's odds of survival, so I hope this happens anyway.


> But social media is already terrible and we need to do something about it regardless of that.

So instead of finding a solution to those issues, let's focus all our resources on a tech that will make them worse...



