The Problem With AI: Machines Are Learning Things, But Can’t Understand Them

Chris Hoffman, How-To Geek
9.1.2020

Everyone’s talking about “AI” these days. But whether you’re looking at Siri, Alexa, or just the autocorrect features in your smartphone keyboard, we aren’t creating general-purpose artificial intelligence. We’re creating programs that can perform specific, narrow tasks.

Computers Can’t “Think”

Whenever a company says it’s coming out with a new “AI” feature, it generally means that the company is using machine learning to build a neural network. “Machine learning” is a technique that lets a machine “learn” how to better perform on a specific task.

We’re not attacking machine learning here! Machine learning is a fantastic technology with a lot of powerful uses. But it’s not general-purpose artificial intelligence, and understanding the limitations of machine learning helps you understand why our current AI technology is so limited.

The “artificial intelligence” of sci-fi dreams is a computerized or robotic sort of brain that thinks about things and understands them as humans do. Such artificial intelligence would be an artificial general intelligence (AGI), which means it can think about multiple different things and apply that intelligence to multiple different domains. A related concept is “strong AI,” which would be a machine capable of experiencing human-like consciousness.

We don’t have that sort of AI yet. We aren’t anywhere close to it. A computer entity like Siri, Alexa, or Cortana doesn’t understand or think the way we humans do. It doesn’t truly “understand” things at all.

The artificial intelligences we do have are trained to do a specific task very well, assuming humans can provide the data to help them learn. They learn to do something but still don’t understand it.
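
To make that concrete, here is a minimal sketch of the kind of narrow, data-driven training described above. It uses Python with scikit-learn, and the tiny email dataset is invented purely for illustration; the point is that the model only picks up statistical patterns in the labeled examples we hand it, and can do nothing beyond that one task.

    # Toy illustration: a model "learns" one narrow task (spam vs. not spam)
    # purely from labeled examples we provide. It finds statistical patterns
    # in the words; it does not understand what any email means.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    emails = [
        "win a free prize now",      # spam
        "claim your free money",     # spam
        "meeting moved to 3pm",      # not spam
        "lunch tomorrow?",           # not spam
    ]
    labels = ["spam", "spam", "not spam", "not spam"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(emails, labels)

    # The model handles only this one task, and only as well as its data allows.
    print(model.predict(["free prize inside"]))            # likely ['spam']
    print(model.predict(["are we still on for lunch?"]))   # likely ['not spam']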

Computers Don’t Understand

Gmail has a new “Smart Reply” feature that suggests replies to emails. The Smart Reply feature identified “Sent from my iPhone” as a common response. It also wanted to suggest “I love you” as a response to many different types of emails, including work emails.

That’s because the computer doesn’t understand what these responses mean. It’s just learned that many people send these phrases in emails. It doesn’t know whether you want to say “I love you” to your boss or not.
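
A deliberately oversimplified sketch of that idea: a suggestion system that only counts which replies people send most often has no way to know that some of them are inappropriate. (This toy Python example is not how Gmail’s Smart Reply actually works, and the reply history below is invented.)

    # Toy reply suggester: rank past replies by frequency alone,
    # with no notion of what they mean or when they are appropriate.
    from collections import Counter

    past_replies = [
        "Sounds good!",
        "Sent from my iPhone",
        "I love you",
        "Sent from my iPhone",
        "Thanks, will do.",
        "I love you",
        "Sent from my iPhone",
    ]

    def suggest_replies(history, n=3):
        # Return the n most frequent past replies, regardless of context.
        return [reply for reply, _ in Counter(history).most_common(n)]

    print(suggest_replies(past_replies))
    # ['Sent from my iPhone', 'I love you', 'Sounds good!']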

As another example, Google Photos put together a collage of accidental photos of the carpet in one of our homes. It then identified that collage as a recent highlight on a Google Home Hub. Google Photos knew the photos were similar but didn’t understand how unimportant they were.

Machines Often Learn to Game the System

Machine learning is all about assigning a task and letting a computer decide the most efficient way to do it. Because these systems don’t understand the task, it’s easy to end up with a computer “learning” to solve a different problem from the one you wanted.

Here’s a list of fun examples in which “artificial intelligences” created to play games and given goals simply learned to game the system. These examples all come from an excellent spreadsheet that collects such cases:

  • “Creatures bred for speed grow really tall and generate high velocities by falling over.”
  • “Agent kills itself at the end of level 1 to avoid losing in level 2.”
  • “Agent pauses the game indefinitely to avoid losing.”
  • “In an artificial life simulation where survival required energy but giving birth had no energy cost, one species evolved a sedentary lifestyle that consisted mostly of mating in order to produce new children which could be eaten (or used as mates to produce more edible children).”
  • “Since the AIs were more likely to get ‘killed’ if they lost a game, being able to crash the game was an advantage for the genetic selection process. Therefore, several AIs developed ways to crash the game.”
  • “Neural nets evolved to classify edible and poisonous mushrooms took advantage of the data being presented in alternating order and didn’t actually learn any features of the input images.”

Some of these solutions may sound clever, but none of these neural networks understood what they were doing. They were assigned a goal and learned a way to accomplish it. If the goal is to avoid losing in a computer game, pressing the pause button is the easiest, fastest solution they can find.
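
A toy sketch of that failure mode, written in Python with a made-up stand-in for a game simulator: if the objective is only “avoid losing,” a straightforward search over possible actions will happily settle on pausing forever, because that action technically satisfies the goal.

    # Toy illustration of gaming the objective: when the goal is only
    # "don't lose," pausing forever scores at least as well as playing.
    # The game model below is invented for illustration.

    def chance_of_not_losing(action):
        # Crude stand-in for running a game simulator.
        if action == "play_well":
            return 0.6   # wins often, but still loses sometimes
        if action == "play_badly":
            return 0.1
        if action == "pause_forever":
            return 1.0   # the game never ends, so it is never lost
        return 0.0

    actions = ["play_well", "play_badly", "pause_forever"]

    # The "learning" step, reduced to its essence: pick whatever action
    # maximizes the stated objective.
    best = max(actions, key=chance_of_not_losing)
    print(best)  # 'pause_forever' -- it satisfies "avoid losing" to the letter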

Read the full story and more related stories on How-To Geek
