Everyone’s talking about “AI” these days. But whether you’re looking at Siri, Alexa, or just the autocorrect features in your smartphone keyboard, we aren’t creating general-purpose artificial intelligence. We’re creating programs that can perform specific, narrow tasks.
Computers Can’t “Think”
Whenever a company says it’s coming out with a new “AI” feature, it generally means it’s using machine learning to build a neural network. “Machine learning” is a technique that lets a machine “learn” how to perform better at a specific task.
We’re not attacking machine learning here! Machine learning is a fantastic technology with a lot of powerful uses. But it’s not general-purpose artificial intelligence, and understanding the limitations of machine learning helps you understand why our current AI technology is so limited.
The “artificial intelligence” of sci-fi dreams is a computerized or robotic sort of brain that thinks about things and understands them as humans do. Such artificial intelligence would be an artificial general intelligence (AGI), which means it can think about multiple different things and apply that intelligence to multiple different domains. A related concept is “strong AI,” which would be a machine capable of experiencing human-like consciousness.
We don’t have that sort of AI yet. We aren’t anywhere close to it. A computer entity like Siri, Alexa, or Cortana doesn’t understand and think as we humans do. It doesn’t truly “understand” things at all.
The artificial intelligences we do have are trained to do a specific task very well, assuming humans can provide the data to help them learn. They learn to do something but still don’t understand it.
Computers Don’t Understand
Gmail has a new “Smart Reply” feature that suggests replies to emails. The Smart Reply feature identified “Sent from my iPhone” as a common response. It also wanted to suggest “I love you” as a response to many different types of emails, including work emails.
That’s because the computer doesn’t understand what these responses mean. It’s just learned that many people send these phrases in emails. It doesn’t know whether you want to say “I love you” to your boss or not.
As another example, Google Photos put together a collage of accidental photos of the carpet in one of our homes. It then identified that collage as a recent highlight on a Google Home Hub. Google Photos knew the photos were similar but didn’t understand how unimportant they were.
Machines Often Learn to Game the System
Machine learning is all about assigning a task and letting a computer figure out the most efficient way to do it. Because the computer doesn’t understand the intent behind the task, it’s easy to end up with it “learning” to solve a different problem from the one you actually wanted solved.
Here’s a list of fun examples where “artificial intelligences” created to play games and assigned goals just learned to game the system. These examples all come from an excellent crowdsourced spreadsheet of behavior like this:
“Creatures bred for speed grow really tall and generate high velocities by falling over.”
“Agent kills itself at the end of level 1 to avoid losing in level 2.”
“Agent pauses the game indefinitely to avoid losing.”
“In an artificial life simulation where survival required energy but giving birth had no energy cost, one species evolved a sedentary lifestyle that consisted mostly of mating in order to produce new children which could be eaten (or used as mates to produce more edible children).”
“Since the AIs were more likely to get ‘killed’ if they lost a game, being able to crash the game was an advantage for the genetic selection process. Therefore, several AIs developed ways to crash the game.”
“Neural nets evolved to classify edible and poisonous mushrooms took advantage of the data being presented in alternating order and didn’t actually learn any features of the input images.”
Some of these solutions may sound clever, but none of these neural networks understood what they were doing. They were assigned a goal and learned a way to accomplish it. If the goal is to avoid losing in a computer game, pressing the pause button is the easiest, fastest solution they can find.
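To make that concrete, here’s a toy sketch of the same failure. It isn’t any of the real systems quoted above, just a hypothetical agent whose only reward signal is “don’t lose”: pausing forever scores at least as well as actually playing, so a learner that only maximizes reward has no reason to prefer playing.

```python
import random

# Toy illustration only (not any of the real systems quoted above).
# The agent's only reward signal is "don't lose": pausing never loses,
# so it scores at least as well as actually playing.
ACTIONS = ["play", "pause"]

def reward(action):
    if action == "pause":
        return 0.0  # the game never ends, so the agent never "loses"
    # Playing wins 40% of the time (+1) and loses 60% of the time (-1).
    return 1.0 if random.random() < 0.4 else -1.0

# Estimate each action's average reward from experience, then pick the best one.
estimates = {a: sum(reward(a) for _ in range(10_000)) / 10_000 for a in ACTIONS}
best_action = max(estimates, key=estimates.get)

print(estimates)                       # roughly {'play': -0.2, 'pause': 0.0}
print("learned policy:", best_action)  # 'pause' -- the goal is met without playing at all
```

The agent isn’t being clever or lazy; it’s doing exactly what the reward told it to do. The mismatch is between the goal we meant and the goal we actually wrote down.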
Machine Learning and Neural Networks
With machine learning, a programmer doesn’t write out explicit instructions for how to perform a task. Instead, the computer is fed data and evaluated on its performance at the task.
An elementary example of machine learning is image recognition. Let’s say we want to train a computer program to identify photos that have a dog in them. We can give the computer millions of images, some of which contain dogs and some of which don’t, with each image labeled according to whether there’s a dog in it. The program “trains” itself to recognize what dogs look like from that data set.
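In code, that labeled data set is just pairs of inputs and known answers. The sketch below is a hypothetical stand-in using PyTorch (our choice here, not something any particular product uses): random numbers play the role of pixel values, and a 0-or-1 label plays the role of “no dog”/“dog.” A real project would load actual photos, but the shape of the data is the same.

```python
import torch

# Hypothetical stand-in for a labeled photo collection: each "image" is just
# a row of pixel values, and each label records whether a dog is present.
num_images, pixels = 1_000, 64 * 64                    # tiny 64x64 "photos"
images = torch.randn(num_images, pixels)               # placeholder pixel data
labels = torch.randint(0, 2, (num_images, 1)).float()  # 1.0 = dog, 0.0 = no dog

# This pairing of inputs and known answers is all "training data" means.
print(images.shape, labels.shape)  # torch.Size([1000, 4096]) torch.Size([1000, 1])
```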
The machine learning process is used to train a neural network, a computer program built from multiple layers. Each input passes through the layers in turn, and each layer applies its own weights and produces intermediate values before the network ultimately makes a determination. It’s loosely modeled on how we think the brain might work, with different layers of neurons involved in thinking through a task. “Deep learning” generally refers to neural networks with many layers stacked between the input and output.
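Here’s roughly what such a layered program can look like, again as a minimal PyTorch sketch rather than the code any real product ships. Each Linear layer holds its own learnable weights, the input flows through the layers in order, and a final Sigmoid squashes the result into a single number between 0 and 1, read as “probability there’s a dog.”

```python
import torch
import torch.nn as nn

# A small layered network: 4096 pixel values in, one "dog probability" out.
# Each nn.Linear layer holds its own learnable weights.
model = nn.Sequential(
    nn.Linear(64 * 64, 128),  # layer 1: pixels -> 128 intermediate values
    nn.ReLU(),
    nn.Linear(128, 32),       # layer 2: 128 values -> 32 values
    nn.ReLU(),
    nn.Linear(32, 1),         # output layer: a single score
    nn.Sigmoid(),             # squash the score into the range 0..1
)

fake_photo = torch.randn(1, 64 * 64)  # placeholder for one 64x64 image
print(model(fake_photo))              # something near 0.5 -- an untrained guess
```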
Because we know which photos in the data set contain dogs and which don’t, we can run the photos through the neural network and see whether it gets the right answer. If the network decides a particular photo doesn’t have a dog when it does, there’s a mechanism (backpropagation) for telling the network it was wrong, nudging its weights, and trying again. Over many passes, the computer keeps getting better at identifying whether photos contain a dog.
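That feedback loop looks something like the sketch below, still with placeholder data, and with ordinary stochastic gradient descent standing in for whatever a production system would actually use. A loss function measures how wrong the guesses were, backpropagation works out which weights were responsible, and the optimizer nudges them before the next pass.

```python
import torch
import torch.nn as nn

# Placeholder "dataset": random pixels and random dog/no-dog labels.
# With real photos the loop below is identical; only the data changes.
images = torch.randn(1_000, 64 * 64)
labels = torch.randint(0, 2, (1_000, 1)).float()

model = nn.Sequential(nn.Linear(64 * 64, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()                                   # how wrong were the guesses?
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # how to nudge the weights

for epoch in range(5):
    predictions = model(images)          # the network guesses dog / no dog
    loss = loss_fn(predictions, labels)  # compare guesses with the known labels
    optimizer.zero_grad()
    loss.backward()                      # work out which weights were at fault
    optimizer.step()                     # adjust them and try again
    print(f"pass {epoch}: loss {loss.item():.3f}")
```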
This all happens automatically. With the right software and a lot of structured data for the computer to train itself on, the computer can tune its neural network to identify dogs in photos. We call this “AI.”
But, at the end of the day, you don’t have an intelligent computer program that understands what a dog is. You have a computer that’s learned to decide whether or not a dog is in a photo. That’s still pretty impressive, but that’s all it can do.
And, depending on the input you gave it, that neural network might not be as smart as it looks. For example, if there weren’t any photos of cats in your data set, the neural network might not see a difference between cats and dogs and might tag all cats as dogs when you unleash it on people’s real photos.
What Is Machine Learning Used For?
Machine learning is used for all kinds of tasks, including speech recognition. Voice assistants like Google Assistant, Alexa, and Siri are as good as they are at understanding human voices because they’ve been trained, using machine learning, on massive amounts of human speech, getting better and better at working out which sounds correspond to which words.
Self-driving cars use machine learning techniques that train the computer to identify objects on the road and respond to them correctly. Google Photos is full of features like Live Albums that automatically identify people and animals in photos using machine learning.
Alphabet’s DeepMind used machine learning to create AlphaGo, a computer program that could play the complex board game Go and beat the best humans in the world. Machine learning has also been used to create computers that are good at playing other games, from chess to DOTA 2.
Machine learning is even used for Face ID on the latest iPhones. Your iPhone constructs a neural network that learns to identify your face, and Apple includes a dedicated “neural engine” chip that performs all the number-crunching for this and other machine learning tasks.
Machine learning can be used for lots of other things, too, from identifying credit card fraud to personalizing product recommendations on shopping websites.
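The pattern across those tasks is the same; only the inputs and labels change. As a hypothetical sketch (made-up transaction features and scikit-learn this time), fraud detection is just another “labeled examples in, classifier out” problem:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Made-up transaction features: [amount in dollars, hour of day, miles from home].
# Labels: 0 = legitimate, 1 = fraudulent. Real systems use far richer data.
rng = np.random.default_rng(0)
legit = np.column_stack([rng.normal(40, 15, 500), rng.integers(8, 22, 500), rng.normal(5, 3, 500)])
fraud = np.column_stack([rng.normal(900, 300, 50), rng.integers(0, 24, 50), rng.normal(400, 150, 50)])
X = np.vstack([legit, fraud])
y = np.array([0] * 500 + [1] * 50)

# Same idea as the dog classifier: labeled examples in, a trained classifier out.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
print(model.predict([[35.0, 14, 4.0], [1200.0, 3, 600.0]]))  # likely [0 1]
```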
But the neural networks created with machine learning don’t truly understand anything. They’re useful programs that can accomplish the narrow tasks they were trained for, and that’s it.