Tech companies are racing to develop human-level artificial intelligence, a development that poses one of the greatest risks to humanity. Last week, software engineer and video game developer John Carmack announced that he had raised $20 million to start Keen Technologies, a company dedicated to building human-level AI. He is not the only one. There are currently 72 projects around the world focused on developing human-level AI, also known as AGI: AI that can perform any cognitive task at least as well as humans.
Many have expressed concern about the effects that current artificial intelligence, which is far from human-level, is already having on our society. The rise of populism and the attack on the Capitol in the United States, the Tigray War in Ethiopia, the increase in violence against Kashmiri Muslims in India and the genocide directed against the Rohingya in Myanmar have all been linked to the use of AI algorithms in social media. Social media platforms using these technologies have shown a propensity to push hateful content to users, because their algorithms identify such posts as popular, and therefore profitable, for the companies; this, in turn, has caused grave harm. This shows that even for today's AI, a deep concern for safety and ethics is crucial.
But cutting-edge tech entrepreneurs are now setting out to build much more powerful human-level AI, which will have far greater effects on society. These effects could, in theory, be very positive: automating intelligence could, for example, free us from work that we would rather not do. But the negative effects could be just as significant, or even greater.
Oxford scholar Toby Ord has spent almost a decade trying to quantify the risks of human extinction from various causes, and has summarized the results in a book aptly titled "The Precipice". Supervolcanoes, asteroids and other natural causes, according to this rigorous academic work, have only a slim chance of leading to complete human extinction. Nuclear war, pandemics and climate change rank somewhat higher. But what tops this apocalyptic ranking? You guessed it: human-level artificial intelligence.
And it's not just Ord who believes that full human-level AI, as opposed to today's comparatively limited version, could have extremely serious consequences. The late Stephen Hawking, tech CEOs like Elon Musk and Bill Gates, and AI scholars like Stuart Russell of the University of California, Berkeley have all publicly warned that human-level AI could lead to nothing short of disaster, especially if developed without extreme caution and a deep respect for safety and ethics.
And who will now build this extremely dangerous technology? People like John Carmack, a proponent of "hacker ethics" who previously programmed children's video games like "Commander Keen". Will Keen Technologies build human-level AI with the same concern for safety? When asked on Twitter about the company's mission, Carmack replied: "AGI or bust, via Mad Science!"
Carmack's lack of interest in this kind of risk is nothing new. Before starting Keen Technologies, Carmack worked alongside Mark Zuckerberg at Facebook, the company responsible for many of the damaging effects of AI described above. Facebook has applied technology to society without worrying about the consequences, in keeping with its motto "Move fast and break things". But if we build human-level AI that way, the thing that breaks might be humanity.
In the interview with computer scientist Lex Fridman in which Carmack announced his new AGI company, Carmack shows utter contempt for anything that restricts the unfettered development of technology and the maximization of profits. According to Carmack, "most people with vision are slightly less efficient." As for the "ethics things of AI," he says, "I'm really staying away from these discussions or even really thinking about them." People like Carmack and Zuckerberg may be good programmers, but they just aren't wired to consider the big picture.
If they can't, we must. A democratic society should not let tech CEOs determine the future of humanity without considering ethics or safety. Therefore, we all need to educate ourselves about human-level AI, especially non-technologists. We need to come to a consensus on whether human-level AI indeed poses an existential threat to humanity, as most scholars of AI safety and existential risk argue. And we need to decide what to do about it; some form of regulation seems inevitable. The fact that we do not yet know what kind of regulation would effectively reduce the risk should not be a reason for regulators to avoid the issue, but rather a reason to develop effective regulation with the highest priority. Non-profit organizations and academics can help with this process. Doing nothing, and letting the likes of Carmack and Zuckerberg determine the future of all of us, could very well spell disaster.