We thought it would be another decade.
Instead, Google’s DeepMind built AlphaGo, an artificial intelligence that defeated the best Go player in the world last year.
AI is suddenly on the tip of every tongue from Silicon Valley to Singapore, after decades of residing solely in theory and science fiction movies.
The technology promises to revolutionize everything from healthcare to finance to Netflix recommendations, potentially saving lives and improving the quality of life for people all over the globe. AlphaGo’s victory suddenly made it all seem possible.
But artificial intelligence also scares a lot of people, even really smart and important people. Elon Musk recently said, “AI is a fundamental existential risk for human civilization.”
Thanks for giving us nightmares, Elon.
Despite AlphaGo’s success, there are still numerous problems with implementing super-intelligent AI, both philosophical and practical. Here are some of today’s biggest concerns with artificial intelligence and how they’re being addressed:
The Skynet problem
What happens when robots finally gain consciousness? Naturally they nuke humans to ash, right?
It’s probably the most common philosophical issue that comes up when discussing artificial intelligence, and the one your friends bring up most often: When AI gains consciousness and realizes it’s smarter than us, what happens? After all, we’re smarter than animals, and look what we’ve done to them.
The truth is that we’re a long way off from AI gaining what we might call consciousness. We still don’t really know what consciousness is or how it arises, and we don’t have a consistent definition of it. All we know is that we believe we have it. A few years ago, we thought we were getting close to a scientific understanding of consciousness, yet we’re still not sure how to spot it.
There’s also the zombie problem: What if we create an AI, put it in a robot body that looks just like ours, and it performs all the actions humans do? For some people, such a machine would only be mimicking human actions, not a truly self-aware being with free will, and from the outside we’d have no way to tell the difference.
It’s possible that someday we will have a scientific definition of consciousness, and understand it as a process the same way we now understand the cardiovascular system. We may even look back and realize some programs we use today actually already had some form of sentience.
But we probably won’t be looking out over a nuclear wasteland, fighting robots. For starters, AI doesn’t need a robot body; it just needs computational power. The Internet and cloud computing are the limbs of artificial intelligence.
There are multiple ways to address the potential problems that arise when AI finally starts to run off on its own and make its own decisions, whether we call that consciousness, intelligence or something else. One prominent approach is to program AI to align with human values, so that artificial intelligence and humanity work toward the same goals and AI helps us rather than harms us.
“Robots aren’t going to try to revolt against humanity,” explains Anca Dragan, an assistant professor at UC Berkeley, “they’ll just try to optimize whatever we tell them to do. So we need to make sure to tell them to optimize for the world we actually want.”
That means the problem isn’t whether AI will enslave us all; it’s figuring out what we, as a human race, actually want the world to look like.
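To make Dragan’s point concrete, here’s a minimal sketch in Python. Everything in it is invented for illustration: the candidate plans, the numbers and both reward functions. The “AI” is nothing more than a search for the highest score, and it faithfully maximizes whatever objective it’s handed, sensible or not.

```python
# Toy illustration of objective misspecification: the "agent" below simply
# picks whichever plan scores highest under the reward function we give it.
# All plans and numbers are made up for the example.

candidate_plans = {
    # plan: (minutes_to_destination, pedestrians_inconvenienced)
    "drive politely":        (22, 0),
    "cut through the park":  (15, 12),
    "drive on the sidewalk": (9, 40),
}

def misspecified_reward(plan):
    """What we *told* the system to optimize: just get there fast."""
    minutes, _ = candidate_plans[plan]
    return -minutes

def intended_reward(plan):
    """What we *actually* want: fast, but never at pedestrians' expense."""
    minutes, pedestrians = candidate_plans[plan]
    return -minutes - 10 * pedestrians

def optimize(reward):
    # The "AI": choose the plan with the highest reward. No malice, no revolt.
    return max(candidate_plans, key=reward)

print(optimize(misspecified_reward))  # -> "drive on the sidewalk"
print(optimize(intended_reward))      # -> "drive politely"
```

Swap in a better objective and the same optimizer behaves the way we wanted all along; the hard part, as Dragan says, is writing that better objective down in the first place.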
The idiot savant problem
Now, to the practical problems.
AlphaGo is mind-blowing, but it’s really good at just one thing. If you ask it to schedule your day, it can’t do it.
For artificial intelligence to reach its potential, we have to figure out how to give it what the really smart people working on this problem call general intelligence.
Even AI that is designed to do one thing really well, like find the cure for cancer, must have some level of general intelligence. It might be able to sift through data and highlight anomalies, but it must also have some ability to reason, to make cognitive leaps and provide meaningful assertions.
Solving the problem of general intelligence may also help to solve the problem of human-value alignment. As Hiroshi Yamakawa, a leading researcher on artificial general intelligence in Japan, puts it: “Even if superintelligence exceeds human intelligence in the near future, it will be comparatively easy to communicate with AI designed to think like a human, and this will be useful as machines and humans continue to live and interact with each other.”
We can continue to put AI to work on narrow tasks, like driving a car or playing a game, but until general intelligence is possible it won’t enable the kinds of dramatic leaps in technology we’re all hoping for.
The power problem
Forbes points out that the machine learning techniques in which AI has so far shown the most promise “require a huge number of calculations to be made very quickly.”
That means a lot of computing power is needed if we ever hope for AI to help solve some of humanity’s biggest problems. All that data crunching isn’t easy.
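As a rough back-of-the-envelope sketch (the layer sizes and training figures below are invented examples, not measurements of any real system), here’s how quickly those calculations pile up for even a modest neural network:

```python
# Back-of-the-envelope count of multiply-add operations for one forward pass
# through a small fully connected network. Layer sizes are arbitrary examples.

layer_sizes = [784, 1024, 1024, 10]  # a toy image-classifier-sized network

def forward_pass_macs(sizes):
    """Multiply-accumulates for one input: sum of (inputs x outputs) per layer."""
    return sum(a * b for a, b in zip(sizes, sizes[1:]))

macs = forward_pass_macs(layer_sizes)
print(f"{macs:,} multiply-adds per example")  # roughly 1.9 million

# Training revisits millions of examples, many times over, and the backward
# pass costs a couple of forward passes' worth of work on top of that.
examples, epochs, train_multiplier = 1_000_000, 10, 3
print(f"{macs * examples * epochs * train_multiplier:,} multiply-adds to train")
```

Real systems are far larger than this toy example, which is why all of that crunching ends up in data centers rather than on a laptop.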
But we’re already getting there. Cloud computing opened the door, and it’s one of the main reasons we’ve seen breakthroughs in AI in recent years. Yet as AI gets more sophisticated and the amount of data we can feed it continues to grow, processing power still isn’t where it needs to be.
The answer may lie in quantum computing, a befuddling notion to the common observer. As WIRED explains, “Quantum computing takes advantage of the strange ability of subatomic particles to exist in more than one state at any time. Due to the way the tiniest of particles behave, operations can be done much more quickly and use less energy than classical computers.”
The important part for the non-PhD to understand is that quantum computing may give AI the processing firepower it needs to take a giant leap. Dare we say, a quantum leap?
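One way to build intuition for that “more than one state at any time” idea is to count what it would cost an ordinary computer just to keep track of a quantum state. The snippet below is a simplification, not a simulation of real hardware: fully describing n qubits takes 2^n numbers, which hints at why problems that map naturally onto those states could see dramatic speedups.

```python
# How many amplitudes it takes to fully describe an n-qubit state.
# A classical n-bit register holds exactly one of 2**n values at a time;
# a quantum register's state is a weighted combination of all of them.

for n_qubits in (8, 16, 32, 64):
    amplitudes = 2 ** n_qubits
    print(f"{n_qubits:2d} qubits -> {amplitudes:,} amplitudes to track classically")

# 64 qubits already means about 1.8e19 amplitudes, far beyond any classical
# memory, which is part of why quantum hardware is so appealing for AI workloads.
```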
AI’s got 99 problems …
Artificial intelligence faces many more than the three issues listed here, and within each of these topics lies a litany of other issues. But we have time. The truth is that we’re still a long way off from true superintelligence, or anything we might call a sentient program.
Elon Musk was being provocative when he warned of AI’s threat, but for good reason: He doesn’t want to see a landscape where one company makes the critical breakthrough in AI, and essentially has the power to take over the world by wielding it.
An open-source approach, where diverse players come together to share research and practices, will help us solve the problems of AI in thoughtful ways that will ultimately benefit us all.
Regardless of how we get there, it would be pretty cool to teach robots how to high five, wouldn’t it?