Q&A / May 30, 2019

Q&A With Dr. Ali Minai

by Daniel King
VIDEO: AI: The New Arms Race? (04:37)

Will artificial intelligence improve or harm the world? Will it help or destroy humans in the long run? Questions like these are too simplistic, says leading AI expert and educator Dr. Ali Minai, who argues that digital destruction and digital paradise are not the only options. AI has already brought some of each, and will bring more, depending on people's power, privilege and place in the world. The editor of nine books and director of the Complex Adaptive Systems Laboratory at the University of Cincinnati, Ali researches these topics hands-on as a professor of electrical engineering and computer science and a member of the neuroscience faculty. He is a board member and past president of the International Neural Network Society, and the chief AI consultant to Doha Debates, where he helps frame our conversations about AI's risks and rewards. We asked Ali to weigh in: Is AI worth the risk?

 

Q: Is the growth of AI for better or for worse?

ALI: Both. AI poses many dangers that people are now waking up to, such as biases in automated decision-making, the creation of hyper-realistic false information, and the profiling and micro-targeting of people for commercial exploitation. And then there is the fear that AI will take away millions of jobs. One area where there's serious alarm is autonomous weapons. Throughout history, every technology that could be used as a weapon ultimately has been, and that is guaranteed to happen with AI. In principle, an intelligent weapon could reduce unintended casualties, but it will also be much more lethal and more tempting to use. That's why it is critical to develop rules and treaties for intelligent autonomous munitions, just as we have for nuclear, chemical and biological weapons. Unfortunately, military imperatives often trump regulations and treaties.

On balance, I think the benefits of AI are greater. While weapons will become more lethal and the loss of jobs will hit some people hard, humanity will also benefit immensely. Diseases will be diagnosed better, robots will perform complex surgery more reliably, self-driving cars will allow disabled people to travel more easily, and assistive technologies will improve the lives of elderly people. In the end, these benefits will probably outweigh the harm from smart weapons and economic dislocation, just as happened with past technological revolutions. But yes, for the person at the other end of that smart weapon or pink slip, AI will be a serious problem.

 

Q: What are the chances that AI’s values will align with human values and interests?

ALI: That is a huge question when it comes to AI. First, we have to note that human values are variable: There have been societies that practiced human sacrifice. To us that's completely abhorrent, but it was practiced for thousands of years. There is some agreement across humanity on universal values, but less than we would like to think. When we create intelligent machines, they will bring their own values.

Building truly general intelligence is like building a new species—a different animal. It may be smarter than a human, and it will develop its own values. We will not be able to tell it what its values should be, any more than a baboon can tell us.

VIDEO: Losing Your Job to A.I. (04:36)

“Building general intelligence is like building a new species, a different animal. We will not be able to tell it what its values should be, any more than a baboon can tell us.”

—Dr. Ali Minai

 

Q: The world faces massive job loss from automation. Some estimates say 30 to 40 million jobs will be eliminated by 2030. How can we stay encouraged?

ALI: That’s true. There will be job losses. But there will also be job gains — unimaginable new categories of jobs will appear, like we saw with the automobile, the internet, the smartphone. Some losses will be offset by the gains, but unfortunately, as we see throughout history, those whose jobs are lost and those who gain jobs are often not the same. We are going to see a fair amount of that. It’s going to be socially and geographically uneven, and the consequences will be faced much more by those who have least to do with AI. People with technological savvy or advanced skills will probably be fine.

 

Q: What’s the most important choice to make about AI’s future?

ALI: The most important choice is to decide what we want AI to be: our servant or our collaborator. So far, all the focus is on creating smarter tools like systems for predicting the stock market, analyzing images, processing video, driving a car — servants. This is narrow AI. But if we build true general intelligence, it will have to be autonomous, not only in its behavior but in its purposes and goals. It may work with us, not for us, and once we have that, we will have created something that can potentially control us and become our master. We are far from that point now, but I fear that by the time we realize we’re there, it may be too late.

 

Q: Is it all worth the risk? Even the risk of creating harmful AI?

ALI: We should think about that, but we should also realize that once a thing is possible, it will be done, no matter its hazards; that is the lesson of history. We have seen this with nuclear technology. We may see it with gene technology. We've created monsters before, and we said we would not unleash them, but we always do. For AI, we already see it with social media and "fake news" or false information, and in the difficulty we're having putting that genie back in the bottle. I don't think we can ultimately do very much about the hazards. We just have to be prepared for them. Given that, it is extremely important to concentrate on fully realizing all the benefits of AI. The risk will come whether we like it or not. Let's reap the rewards too.

VIDEO: The Ethics of Artificial Intelligence (08:36)

Q: Are parts of the world in an AI arms race already?

ALI: Yes, there is an AI arms race. Like most races of this kind, it is driven by geopolitics. If AI is the newest tool in the arsenal of influence (military, economic and social), then countries and groups will use it. It's going to be a much more chaotic world.

For some, progress will be beneficial. For others, problematic. The lesson of the last 100 to 150 years, since the dawn of the technological age, is that on average, things become much better for most people. Almost no one has to die of a bacterial infection anymore. Millions still do, but in principle they don't have to. People don't have to suffer during surgery, or go years without hearing the voice of a distant loved one, or die in a plague or epidemic. There is still a lot of suffering in the world, but far less than there was a century or two ago. So I think that, on average, things will continue to get better for people, but that doesn't mean they won't get much worse for certain large populations. And then you add climate change, and all bets are off. In fact, AI may well be a savior there!

 

Q: What about AI’s built-in bias? Is there “good” bias and “bad” bias?

ALI: Yes, bias is fundamental to intelligence. Intelligence can be seen as nothing but having an especially useful set of biases. Whenever we interact with something new, we do so with certain assumptions and expectations. In the language of mathematics we call these “priors.” The probabilities we assign to outcomes depend in part on our priors, and the difference between a good and bad decision is often the quality of our priors. Much of what we do as intelligent agents is based on instinct and intuition, not calculation. These instincts and intuitions arise from our biases. We see a new insect and guess whether it’s harmless or dangerous based on the way it looks. That’s a bias! We might be wrong, but that bias serves us well enough in most cases and lets us make decisions much faster than calculation. So bias in itself is not bad.
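To make the idea of priors concrete, here is a minimal sketch in Python using the insect scenario above. The numbers and the scenario details are illustrative assumptions, not figures from the interview: Bayes' rule combines a prior belief with new evidence, and the same evidence leads two agents with different priors to very different conclusions.

```python
# Minimal illustration of how priors shape decisions (numbers are illustrative).
# Scenario: an agent sees an insect with bright warning colors and must judge
# whether it is dangerous. Bayes' rule turns prior + evidence into a posterior.

def posterior_dangerous(prior_dangerous, p_bright_if_dangerous, p_bright_if_harmless):
    """P(dangerous | bright colors) via Bayes' rule."""
    p_bright = (p_bright_if_dangerous * prior_dangerous
                + p_bright_if_harmless * (1.0 - prior_dangerous))
    return p_bright_if_dangerous * prior_dangerous / p_bright

# Assumed likelihoods: bright coloration is common in dangerous insects, rare otherwise.
p_bright_if_dangerous = 0.70
p_bright_if_harmless = 0.10

# Agent A holds a useful prior (a fair share of insects here are dangerous);
# Agent B holds a poor prior (assumes danger is vanishingly rare).
for name, prior in [("useful prior", 0.20), ("poor prior", 0.001)]:
    post = posterior_dangerous(prior, p_bright_if_dangerous, p_bright_if_harmless)
    print(f"{name}: P(dangerous | bright) = {post:.3f}")

# Output: the useful prior yields ~0.636 -- avoid the insect;
# the poor prior yields ~0.007 -- the same warning signal barely registers.
```

The point of the sketch is that the "calculation" is identical for both agents; only the prior differs, and the prior is what makes one decision good and the other bad.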

The problem is that not all biases are useful or desirable. In AI, harmful biases often come from low-quality data and poorly thought-out algorithms. As AI becomes increasingly autonomous, there's a danger that these biases will go undetected and become normalized. A machine, after all, "couldn't possibly be biased!" Or so many people think.
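A small sketch of that mechanism, under an assumed, hypothetical hiring dataset (the groups, counts and the toy "model" are all invented for illustration): a system trained on skewed historical decisions faithfully reproduces the skew, and its output looks like an objective rule.

```python
# Minimal sketch (hypothetical data): a model trained on biased historical
# decisions learns and normalizes the bias.
from collections import Counter

# Historical hiring outcomes (group, hired?), skewed against group "B".
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

def train(rows):
    """A naive 'model': predict the majority outcome seen for each group."""
    counts = {}
    for group, outcome in rows:
        counts.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)  # {'A': 1, 'B': 0} -- the historical skew has become the rule
```

Nothing in the code mentions bias; it simply summarizes the data it was given, which is exactly why such biases can go undetected.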

But there’s another level of hazard with biases. There are biases that are a genuine part of human intelligence and in remote prehistory were actually useful. The bias against people who don’t look like one’s own group may have been important for the survival of prehistoric hunter-gatherers. The instinct is still part of our mind, but today we call it racism and xenophobia and consider it an evil. How will we get machines to agree? The biases that intelligent machines have will ultimately represent their values, which may not be the same as ours. We should be concerned about that.

 

Q: What’s the proper role of regulation for AI?

ALI: There will need to be regulation. As long as we're using AI as a tool, regulation will look much like what we see with guns or financial transactions. Everything has its positives and negatives, but I don't think the downside of AI is much bigger than the upside. When you think about what AI can do to alleviate human suffering, to make our lives easier, to entertain us, the positives of AI as a tool are huge. It would be a mistake to try to control that through overregulation, but we will have to have some regulation to prevent harm, and that will vary from place to place, as we see with other kinds of regulation.

With smart weapons, cyberwarfare and autonomous general intelligence, regulation will only have limited effect. Countries, corporations and autonomous AI entities will make their own choices — and we’ll have to live with them.

FULL DEBATE: Doha Debates on Artificial Intelligence (1:00:40)