AI risks are in the air — from speculation that AI, decades or centuries from now, could bring about human extinction to ongoing problems like bias and fairness. While it’s critically important not to let hypothetical scenarios distract us from addressing realistic issues, I’d like to talk about a long-term risk that I think is realistic and has received little attention: If AI becomes cheaper and better than many people at doing most of the work they can do, swaths of humanity will no longer contribute economic value. I worry that this could lead to a dimming of human rights.
We’ve already seen that countries where many people contribute little economic value have some of the worst records of upholding fundamental human rights like free expression, education, privacy, and freedom from mistreatment by authorities. The resource curse is the observation that countries with ample natural resources, such as fossil fuels, can become less democratic than otherwise similar countries that have fewer natural resources. According to the World Bank, “developing countries face substantially higher risks of violent conflict and poor governance if [they are] highly dependent on primary commodities.”
A ruler (perhaps dictator) of an oil-rich country, for instance, can hire foreign contractors to extract the oil, sell it, and use the funds to hire security forces to stay in power. Consequently, most of the local population wouldn’t generate much economic value, and the ruler would have little incentive to make sure the population thrived through education, safety, and civil rights.
What would happen if, a few decades from now, AI systems reach a level of capability at which large swaths of people can no longer contribute much economic value? I worry that, if many people become unimportant to the economy, and if relatively few people have access to AI systems that could generate economic value, the incentive to take care of people — particularly in less democratic countries — will wane.
Marc Andreessen recently pointed out that Tesla, having created a good car, has an incentive to sell it to as many people as possible. So why wouldn’t AI builders similarly make AI available to as many people as possible? Wouldn’t this keep AI power from becoming concentrated within a small group? I have a different point of view. Tesla sells cars only to people who generate enough economic value, and thus earn enough wages, to afford one. It doesn’t sell many cars to people who have no earning power.
Researchers have analyzed the impact of large language models on labor. So far, some people whose jobs were displaced by ChatGPT have managed to find other work, but the technology is advancing quickly. If we can’t upskill people and create jobs fast enough, we could be in for a difficult time. Indeed, since the great decoupling of labor productivity and median incomes in recent decades, low-wage workers have seen their earnings stagnate, and the middle class in the U.S. has dwindled.
Many people derive tremendous pride and sense of purpose from their work. If AI systems advance to the point where most people no longer can create enough value to justify a minimum wage (around $15 per hour in many places in the U.S.), many people will need to find a new sense of purpose. Worse, in some countries, the ruling class will decide that, because the population is no longer important for production, people are no longer important.
What can we do about this? I’m not sure, but I think our best bet is to work quickly to democratize access to AI by (i) reducing the cost of tools and (ii) training as many people as possible to understand them. This will increase the odds that people have the skills they need to keep creating value. It will also ensure that citizens understand AI well enough to steer their societies toward a future that’s good for everyone.
Keep working to make the world better for everyone!
Andrew
The Batch @ DeepLearning.AI