Artificial Intelligence Debate Recap
Our live debate on artificial intelligence took many sharp turns and thoughtful steps in search of solutions to AI’s urgent challenges, from weaponized drones and military robots to data breaches and job automation, while also weighing its opportunities, from curing diseases to mitigating climate disaster and saving lives. The debate tackled the most contentious question of all: Will AI help or harm humans globally?
Who stands to gain the most, who stands to lose, who welcomes AI and who fears its consequences most acutely: these were the questions driving the debate among four AI experts, including Nick Bostrom, the Swedish philosopher who sounded the alarm. Nick cautioned that AI’s short-term benefits conceal lethal long-term risks: It could destroy humanity. Kenyan data scientist Muthoni Wanyoike argued for optimism, saying that AI will improve equality among nations. British author Dex Torricke-Barton welcomed AI but warned about slow-to-adapt governments and lawmakers. And Joy Buolamwini, a Ghanaian American computer scientist, called for public oversight to reduce AI’s built-in bias.
A production of Qatar Foundation, the debate aimed for common ground, and the speakers found it, agreeing that AI is rising inevitably and the urgent goal is to harness its power safely.
Moderator Ghida Fakhry set out careful distinctions between weak AI, strong AI and superintelligent AI, with escalating risks and rewards. She asked whether AI is a “fundamental threat to the future of humankind” or if “the benefits outweigh the risks.”
Nick’s caution was in the air even before he took the stage, but the debate started with optimism from Muthoni, who welcomed AI as a force for economic and social empowerment, especially for women and girls in Africa, where Muthoni is co-founder of Nairobi’s Women in Machine Learning and Data Science. She celebrated AI with “mindful optimism,” which “disentangles us from the fear and fantasy of an AI apocalypse.”
Doomsday scenarios make headlines, but Muthoni called out that AI “promises to bring unparalleled benefits” by lifting “millions out of poverty” and improving the lives of independent farmers in Africa, up to 60 percent of whom “are women, who until the advent of mobile technology had little to no access to financial technology.”
“From where I stand, this is a future filled with a strong African voice,” Muthoni said, “with strong African youth representation — representation of African women, African scientists and African innovators. And not just Africans but representation of the whole world.”
AI helps us identify the future we want, and not just the future we fear.
In her follow-up, Ghida pressed Muthoni on whether her optimism withstands the harsh intrusion of government surveillance and AI weapons, asking not “who controls AI” but whether “AI might eventually control us.”
“AI has not caused the wars that we have right now,” Muthoni answered. “We as humans have the ability to ruin the universe and destroy it completely or to ensure our continued existence.”
Dex Torricke-Barton took the floor next, welcoming AI and raising concerns not about technology but about tech-illiterate politicians who blame tech leaders for ethical missteps. Dex argued that fears of killer robots and life-crushing AI are sensationalized.
“How many people in this audience have a smartphone?” Dex asked. “That’s pretty much all of you. AI is built into all the apps and services you’re using today, and the world hasn’t ended yet.”
AI can harm, Dex said, but “we have to stay calm and put that into perspective. … Think of everything wrong with the world today,” from climate change to the refugee crisis, “and really, killer robots is the thing that gets you out of bed in the morning and gets you really angry? That sounds like you’ve lost perspective to me.”
AI, like all technology, can be used for good or evil. Technology in and of itself is not evil. It is simply a tool. It is neutral.
A solution, Dex proposed, is for tech leaders and policymakers to work together rather than blaming the private sector. “If you’re looking for the tech industry to solve all of our problems with AI, I’m sorry to say you’re probably deluding yourself. The problems we face are societal.”
In the debate’s most revealing moment, Ghida challenged Dex’s defense of tech companies, asking if Dex — a former communications executive at Google and Facebook — wasn’t “letting the Googles and Facebooks of this world a little easily off the hook. Don’t you think that the Mark Zuckerbergs and other leaders should be taken to task?”
Dex took the question to heart, saying lawmakers should spend more time learning about the very AI they’re criticizing: “Politicians love to deflect from the real question, which is, why do we have a society that is so deeply divided?”
Who better to make the opposite case, warning about AI’s potentially lethal threat, than the AI expert who wrote the book on it? The author of Superintelligence, Nick Bostrom argued that future AI could slip free of human control and bend life in any direction, including possible extinction.
“I don’t think it is ridiculous to have a conversation about lethal autonomous weapons,” Nick said, “even though there are other problems in the world today. If you want to prevent human society from going down this avenue of leveraging AI to make more lethal weapons, it’s easier to do so before there are large arsenals deployed.”
Our capabilities are improving more rapidly than our wisdom. We need to try to get our act together as best we can.
Nick said there’s an “overhyping” of AI’s possibilities now and an “underhyping” of its future possibilities, with “risks to the very survival of our species.”
The debate took a turn to social and political equality when Joy Buolamwini called out the “discrimination built into” algorithms and data sets. A computer scientist at MIT’s Media Lab and founder of the Algorithmic Justice League, Joy said AI’s promoters are “overconfident and underprepared” to tackle its “abuse and bias.”
Without public oversight, AI can amplify inequality and “compound the very social inequalities that its champions hope to overcome,” she said.
Machine bias is well-documented: Predictive policing has been shown to misidentify and inflate the criminality risks of people of color. And human exploitation is baked into the data mining of vulnerable communities: “We are witnessing the exploitation of the data wealth of the global south,” Joy said, referring to low-income countries primarily in Africa, Asia and Latin America.
AI and data colonialism is here. We must fight these trends before it’s too late. We must bend our AI system toward justice and inclusion.
The debate was livestreamed on Facebook and Twitter, where viewers added their voices and voted for solutions, announced by debate correspondent Nelufar Hedayat.
Livestream votes came in throughout the debate.
Common ground, not division, was the debate’s focus, and the next speaker made that his mission.
Govinda Clayton, the debate’s bridge-building “connector” and a conflict-resolution expert, tied the arguments together and framed the challenge: AI’s rise is inevitable, but how soon and how widely should regulation follow?
The moderator opened the floor to audience questions in the debate’s Majlis forum for consensus-building, with sharp input from students at Qatar Foundation’s Education City, the innovative collection of top universities in Doha, including the debate’s venue, Northwestern University in Qatar.
Ghida welcomed comments from viewers in Gaza, Palestine; Nairobi, Kenya; and Oakland, California, in the U.S., where people joined the debate through walk-in Portals equipped with livestream devices from the design team at Shared_Studios.
The debate ended as constructively as it began — as a conversation, not a contest, to find a promising path to an AI future that expands opportunities for as many people as possible and reduces risks. The night came to a close with an invitation for everyone to share their own solutions @DohaDebates with the hashtag #DearWorld.