News

Six Questions To Ask Yourself When Reading About AI

Gary Marcus & Ernest Davis, Quartz
12.9.2019

Hardly a week goes by without some breathless bit of AI news touting a “major” new discovery or warning us we are about to lose our jobs to the newest breed of smart machines.

Rest easy. As two scientists who have spent our careers studying AI, we can tell you that a large fraction of what’s reported is overhyped.

Consider this pair of headlines from last year describing an alleged breakthrough in machine reading: “Robots Can Now Read Better than Humans, Putting Millions of Jobs at Risk” and “Computers Are Getting Better than Humans at Reading.” The first, from Newsweek, is a more egregious exaggeration than the second, from CNN, but both wildly oversell minor progress.

To begin with, there were no actual robots involved, and no actual jobs were remotely at risk. All that really happened was that Microsoft made a tiny bit of progress and put out a press release saying that “AI…can read a document and answer questions about it as well as a person.”

That sounded much more revolutionary than it really was. Dig deeper, and you would discover that the AI in question was given one of the easiest reading tests you could imagine—one in which all of the answers were directly spelled out in the text. The test was about highlighting relevant words, not comprehending text.

Suppose, for example, that I hand you a piece of paper with this short passage:

Two children, Chloe and Alexander, went for a walk. They both saw a dog and a tree. Alexander also saw a cat and pointed it out to Chloe. She went to pet the cat.

The Microsoft system was built to answer questions like “Who went for a walk?” in which the answer (“Chloe and Alexander”) is directly spelled out in the text. But if you were to ask it a simple question like “Did Chloe see the cat?” (which she must have, because she went to pet it) or “Was Chloe frightened by the cat?” (which she must not have been, because she went to pet it), it would not have been able to find the answers, as they weren’t spelled out in the text. Inferring what isn’t said is at the heart of reading, and it simply wasn’t tested.
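To make the distinction concrete, here is a minimal sketch of span-based question answering over that passage. It is not Microsoft's system; the extract_span helper, its keyword-overlap scoring, and the stopword list are illustrative stand-ins. The point it demonstrates is the one above: a span matcher can "answer" a question whose answer is already written down, but for a question that requires inference it can only hand back a sentence that mentions the right words.

```python
import re

PASSAGE = (
    "Two children, Chloe and Alexander, went for a walk. "
    "They both saw a dog and a tree. "
    "Alexander also saw a cat and pointed it out to Chloe. "
    "She went to pet the cat."
)

# Words that carry little meaning for matching; an illustrative list only.
STOPWORDS = {"who", "what", "did", "was", "the", "a", "by", "to"}

def extract_span(question, passage):
    """Return the passage sentence sharing the most keywords with the question.

    Like span-extraction reading tests, this can only return text that is
    already in the passage; it cannot infer anything that is left unsaid.
    """
    q_words = set(re.findall(r"\w+", question.lower())) - STOPWORDS
    best_sentence, best_overlap = None, 0
    for sentence in re.split(r"(?<=\.)\s+", passage):
        s_words = set(re.findall(r"\w+", sentence.lower()))
        overlap = len(q_words & s_words)
        if overlap > best_overlap:
            best_sentence, best_overlap = sentence, overlap
    return best_sentence

# The answer is spelled out verbatim, so span-matching looks impressive:
print(extract_span("Who went for a walk?", PASSAGE))
# -> "Two children, Chloe and Alexander, went for a walk."

# The answer ("yes, she must have") requires inference; the best the matcher
# can do is return a sentence that merely mentions the keywords:
print(extract_span("Did Chloe see the cat?", PASSAGE))
# -> "Alexander also saw a cat and pointed it out to Chloe."
```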

Microsoft didn’t make that clear, and neither did Newsweek or CNN.

Read the full story on Quartz
