
We Have Already Let The Genie Out of The Bottle

Tim O'Reilly, The Rockefeller Foundation
8.7.2020

"it's not artificial intelligence we have most to fear but artificial single-mindedness"

How will we make sure that artificial intelligence won’t run amok and will be a force for good?

There are many areas where governance frameworks and international agreements about the use of artificial intelligence (AI) are needed. For example, there is an urgent need for internationally shared rules governing autonomous weapons and the use of facial recognition to target minorities and suppress dissent. Eliminating bias in algorithms for criminal sentencing, credit allocation, social media curation and many other areas should be an essential focus for both research and the spread of best practices.

Unfortunately, when it comes to the broader issue of whether we will rule our artificial creations or whether they will rule us, we have already let the genie out of the bottle. In his book Superintelligence: Paths, Dangers, Strategies, Nick Bostrom posited that the future development of AI could be a source of existential risk to humanity via a simple thought experiment. A self-improving AI, able to learn from its experience and automatically improve its results, has been given the task of running a paper clip factory. Its job is to make as many paper clips as possible. As it becomes superintelligent, it decides that humans are obstacles to its singular goal and destroys us all. Elon Musk created a more poetic version of that narrative, in which it is a strawberry-picking robot that decides humanity is in the way of “strawberry fields forever.”


What we fail to understand is that we have already created such systems. They are not yet superintelligent nor fully independent of their human creators, but they are already going wrong in just the way that Bostrom and Musk foretold. And our attempts to govern them are largely proving ineffective. To explain why that is, it is important to understand how such systems work. Let me start with a simple example. When I was a child, I had a coin-sorting piggy bank. I loved pouring in a fistful of small change and watching the coins slide down clear tubes, then arrange themselves in columns by size, as if by magic. When I was slightly older, I realized that vending machines worked much the same way and that it was possible to fool a vending machine by putting in a foreign coin of the right size or even the slug of metal punched out from an electrical junction box. The machine didn’t actually know anything about the value of money. It was just a mechanism constructed to let a disk of the right size and weight fall through a slot and trip a counter.

If you understand how that piggy bank or coin-operated vending machine works, you also understand quite a bit about systems such as Google search, social media newsfeed algorithms, email spam filtering, fraud detection, facial recognition and the latest advances in cybersecurity. Such systems are sorting machines. A mechanism is designed to recognize attributes of an input data set or stream and to sort it in some manner. (Coins come in different sizes and weights. Emails, tweets and news stories contain keywords and have sources, click frequencies and hundreds of other attributes. A photograph can be sorted into cat and not-cat, Tim O’Reilly and not-Tim O’Reilly.) People try to spoof these systems—just as my teenage peers and I did with vending machines—and the mechanism designers take more and more data attributes into account so as to eliminate errors.
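
To make the analogy concrete, here is a minimal sketch in Python of such a sorting mechanism: a coin acceptor that classifies purely by measurable attributes. The coin dimensions, weights and tolerance below are illustrative, not real specifications; the point is that anything matching the attributes, including a metal slug, passes.

```python
# A minimal sketch of a "sorting machine": accept or reject a coin purely by
# measurable attributes, with no notion of what the coin is actually worth.
# The dimensions, weights and tolerance are illustrative.

ACCEPTED_COINS = {
    # name: (diameter_mm, weight_g, value_cents)
    "dime":    (17.9, 2.27, 10),
    "quarter": (24.3, 5.67, 25),
}

def classify_coin(diameter_mm: float, weight_g: float, tolerance: float = 0.05):
    """Return the value of the first coin whose attributes match, else None."""
    for name, (d, w, value) in ACCEPTED_COINS.items():
        if abs(diameter_mm - d) / d <= tolerance and abs(weight_g - w) / w <= tolerance:
            return value
    return None  # rejected: unfamiliar size or weight

print(classify_coin(24.3, 5.6))   # 25 -- a genuine quarter
print(classify_coin(24.2, 5.7))   # 25 -- a slug that happens to match the attributes
print(classify_coin(20.0, 3.0))   # None -- falls through
```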

A vending machine is fairly simple. Currency changes only rarely, and there are only so many ways to spoof it. But content is endlessly variable, and so it is a Sisyphean task to develop new mechanisms to take account of every new topic, every new content source and every emergent attack. Enter machine learning. In a traditional approach to building an algorithmic system for recognizing and sorting data, the programmer identifies the attributes to be examined, the acceptable values and the action to be taken. (The combination of an attribute and its value is often called a feature of the data.) Using a machine-learning approach, a system is shown many, many examples of good and bad data in order to train a model of what good and bad looks like. The programmer may not always know entirely what features of the data the machine-learning model is relying on; the programmer knows only that it serves up results that appear to match or exceed human judgment against a test data set. Then the system is turned loose on real-world data. After the initial training, the system can be designed to continue to learn.
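
The contrast between the two approaches can be sketched in a few lines of Python. The hand-coded rule and the toy training data below are invented purely for illustration, and scikit-learn is an assumed library choice, not one named here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Traditional approach: the programmer picks the features and thresholds explicitly.
def rule_based_spam(text: str) -> bool:
    return any(phrase in text.lower() for phrase in ("free money", "click now", "winner"))

# Machine-learning approach: show the system labeled examples of good and bad
# and let the model infer which features of the data matter.
emails = [
    "free money click now to claim your prize",
    "winner winner claim your free reward",
    "meeting moved to 3pm, see agenda attached",
    "can you review the draft before friday",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(emails, labels)

print(rule_based_spam("totally free money!!!"))            # True
print(model.predict(["claim your free prize now"]))        # likely [1]
print(model.predict(["agenda for friday's review call"]))  # likely [0]
```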


If you’ve used the facial recognition features of Apple or Google’s photo applications to find pictures containing you, your friends or your family, you’ve participated in a version of that training process. You label a few faces with names and then are given a set of photos the algorithmic system is fairly certain are of the same face and some photos with a lower confidence level, which it asks you to confirm or deny. The more you correct the application’s guesses, the better it gets. I have helped my photo application get better at distinguishing between me and my brothers and even, from time to time, between me and my daughters, until now it is rarely wrong. It recognizes the same person from childhood through old age.
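
Roughly, that confirm-or-deny loop looks like the sketch below. The confidence thresholds, the two-dimensional "embeddings" and the nearest-neighbor classifier are all illustrative assumptions; real photo applications work at far greater scale.

```python
from sklearn.neighbors import KNeighborsClassifier

# A few faces the user has already named, represented here as toy 2-D embeddings.
labeled = {
    (0.1, 0.9): "me", (0.2, 0.8): "me",
    (0.9, 0.1): "my brother", (0.8, 0.2): "my brother",
}
model = KNeighborsClassifier(n_neighbors=2).fit(list(labeled), list(labeled.values()))

def triage(model, faces, confident=0.9, uncertain=0.5):
    """Accept high-confidence guesses automatically; queue lower-confidence ones for the user."""
    auto, ask_user = [], []
    for face in faces:
        best = model.predict_proba([face]).max()
        if best >= confident:
            auto.append((face, model.predict([face])[0]))
        elif best >= uncertain:
            ask_user.append(face)  # "Is this the same person?" -- the answer becomes a new label
    return auto, ask_user

# Two clear-cut faces are labeled automatically; the ambiguous one is queued
# for confirmation, and each correction makes the next round of guesses better.
auto, ask_user = triage(model, [(0.15, 0.85), (0.5, 0.5), (0.85, 0.15)])
```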

A Human-Machine Hybrid

Note that these systems are hybrids of human and machine—not truly autonomous. Humans construct the mechanism and create the training data set, and the software algorithms and machine-learning models are able to do the sorting at previously unthinkable speed and scale. And once they have been put into harness, the data-driven algorithms and models continue not only to take direction from new instructions given by the mechanism designers but also to learn from the actions of their users.

In practice, the vast algorithmic systems of Google, Facebook and other social media platforms contain a mix of sorting mechanisms designed explicitly by programmers and newer machine-learning models. Google search, for instance, takes hundreds of attributes into account, and only some of them are recognized by machine learning. These attributes are summed into a score that collectively determines the order of results. Google search is now also personalized, with results based not just on what the system expects all users to prefer but also on the preferences and interests of the specific user asking a question. Social media algorithms are even more complex, because there is no single right answer. “Right” depends on the interests of each end-user and, unlike with search, those interests are not stated explicitly but must be inferred by studying past history, the interests of an individual’s friends and so forth. They are examples of what financier George Soros has called reflexive systems, wherein some results are neither objectively true nor false, but the sum of what all the system’s users (“the market”) believe.
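
In schematic form, that kind of scoring might look like the following sketch. The attribute names, weights and personalization term are invented for illustration; real ranking systems combine hundreds of signals, only some of them learned.

```python
# Explicitly designed part: a weighted sum over known attributes of each result.
GLOBAL_WEIGHTS = {"relevance": 3.0, "freshness": 1.0, "source_quality": 2.0}

def score(result: dict, user_interests: dict) -> float:
    base = sum(GLOBAL_WEIGHTS[attr] * result[attr] for attr in GLOBAL_WEIGHTS)
    # Personalized part: boost results that match this user's inferred interests.
    personal = sum(user_interests.get(topic, 0.0) for topic in result["topics"])
    return base + personal

results = [
    {"relevance": 0.9, "freshness": 0.2, "source_quality": 0.8, "topics": ["ai"]},
    {"relevance": 0.7, "freshness": 0.9, "source_quality": 0.5, "topics": ["sports"]},
]
user = {"ai": 1.5}  # inferred from past history, friends' interests and so forth

ranked = sorted(results, key=lambda r: score(r, user), reverse=True)
```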

The individual machine components cannot be thought of as intelligent, but these systems as a whole are able to learn from and respond to their environment, to take many factors into account in making decisions and to constantly improve their results based on new information.

That’s a pretty good definition of intelligence, even though it lacks other elements of human cognition such as self-awareness and volition. Just as with humans, the data used in training the model can introduce bias into the results. Nonetheless, these systems have delivered remarkable results—far exceeding human abilities in field after field.

In those hybrid systems, humans are still nominally in charge, but recognition of and response to new information often happens automatically. Old, hand-coded algorithms designed by human programmers are being replaced by machine-learning models that are able to respond to changes in vast amounts of data long before a human programmer might notice the difference. But sometimes the changes in the data are so significant—for example, makeup designed specifically to fool facial recognition systems, astroturfed content produced at scale by bots masquerading as humans or deepfake videos—that humans need to build and train new digital subsystems to recognize them. In addition, the human mechanism designers are always looking for ways to improve their creations.

Read the full story on The Rockefeller Foundation
