
AI's Hardest Problem? Developing Common Sense

Gary Marcus & Ernest Davis, LinkedIn
4.10.2019

Artificial Intelligence has seen radical advances of many kinds in recent years, roundly beating human champions in games like Go and poker that once seemed out of reach. Advances in other domains like speech recognition, machine translation, and photo tagging have become routine. Yet something foundational is still missing: ordinary common sense.

Common sense is knowledge that is commonly held, the sort of basic knowledge that we expect ordinary people to possess, like “People don’t like losing their money,” “You can keep money in your wallet,” “You can keep your wallet in your pocket,” “Knives cut things,” and “Objects don’t disappear when you cover them with a blanket.” Without it, the everyday world is hard to understand; lacking it, machines can’t understand novels, news articles, or movies.

The great irony of common sense—and indeed AI itself—is that it is stuff that pretty much everybody knows, yet nobody seems to know what exactly it is or how to build machines that possess it.

People have worried about the problem since the beginning of AI. John McCarthy, the very person who coined the name “artificial intelligence,” first started calling attention to it in 1959. But there has been remarkably little progress. Neither classical AI nor deep learning has made much headway. Deep learning, which lacks a direct way of incorporating abstract knowledge (like “People want to recover things they’ve lost”) has largely ignored the problem; classical AI has tried harder, pursuing a number of approaches, but none has been particularly successful.

One approach has been to try to learn everyday knowledge by crawling (or “scraping”) the web. One of the most extensive efforts, launched in 2011, is called NELL (short for Never-Ending Language Learner), led by Tom Mitchell, a professor at Carnegie Mellon and one of the pioneers in machine learning. Day after day—the project is still ongoing—NELL finds documents on the web and reads them, looking for particular linguistic patterns and making guesses about what they might mean. If it sees a phrase like “cities such as New York, Paris, and Berlin,” NELL infers that New York, Paris, and Berlin are all cities, and adds that to its database. If it sees the phrase “New York Jets quarterback Kellen Clemens,” it might infer the facts that Kellen Clemens plays for the New York Jets (in the present tense—NELL has pretty much no sense of time) and that Kellen Clemens is a quarterback.
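The kind of pattern-based extraction described above can be sketched in a few lines. This is an illustrative toy, not NELL's actual implementation: the function name, the single hard-coded "such as" pattern, and the naive comma splitting are all assumptions made for the example.

```python
import re

def extract_category_members(sentence):
    """Guess (category, member) facts from phrases like
    'cities such as New York, Paris, and Berlin'.

    A toy sketch of NELL-style pattern matching: find a plural noun
    followed by 'such as', then split the capitalized list that follows.
    """
    match = re.search(r"(\w+) such as ([A-Z][^.]*)", sentence)
    if not match:
        return []
    category = match.group(1)  # plural surface form, e.g. 'cities'
    # Split the enumeration on commas and a final 'and'.
    members = re.split(r",\s*(?:and\s+)?|\s+and\s+", match.group(2))
    return [(category, m.strip()) for m in members if m.strip()]

facts = extract_category_members(
    "It crawled a page mentioning cities such as New York, Paris, and Berlin."
)
print(facts)  # [('cities', 'New York'), ('cities', 'Paris'), ('cities', 'Berlin')]
```

A real system layers confidence scoring and cross-checking on top of many such patterns, since any single match can easily be wrong; the sketch shows only the extraction step itself.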

Read the full story on LinkedIn
