close x


Enter Your Info Below And We Will Send
You The Brochure To Your Inbox!

aibotics go-ditial brochure (en)

Thank you!
Your submission has been received!

Oops! Something went wrong while submitting the form


Exploring AI Dependence Upon ‘Artificial Stupidity’ For Autonomous Cars

Lance Eliot, AI Trends Insider

The role of Artificial Stupidity needs to be included in the discussion of Artificial Intelligence for self-driving cars, to be realistic.

We all generally seem to know what it means to say that someone is intelligent.

In contrast, when you label someone as “stupid,” the question arises as to what exactly that means. For example, does stupidity imply the lack of intelligence in a zero-sum fashion, or does stupidity occupy its own space and sit adjacent to intelligence as a parallel equal?

Let’s do a thought experiment on this weighty matter.

Suppose we somehow had a bucket filled with intelligence. We are going to pretend that intelligence is akin to something tangible and that we can essentially pour it into and possibly out of a bucket that we happen to have handy. Upon pouring this bucket filled with intelligence onto say the floor, what do you have left?

One answer is that the bucket is now entirely empty and there is nothing left inside the bucket at all. The bucket has become vacuous and contains absolutely nothing. Another answer is that the bucket upon being emptied of intelligence has a leftover that consists of stupidity. In other words, once you’ve removed so-called intelligence, the thing that you have remaining is stupidity.

I realize this is a seemingly esoteric discussion but, in a moment, you’ll see that the point being made has a rather significant ramification for many important things, including and particularly for the development and rise of Artificial Intelligence (AI).

Can intelligence exist without stupidity, or in a practical sense is there always some amount of stupidity that must exist if there is also the existence of stupidity?

Some assert that intelligence and stupidity are Zen-like yin and yang. In this perspective, you cannot grasp the nature of intelligence unless you also have a semblance of stupidity as a kind of measuring stick.

It is said that humans become increasingly intelligent over time, and thus are reducing their levels of stupidity. You might suggest that intelligence and stupidity are playing a zero-sum game, namely that as your intelligence rises you are simultaneously reducing your level of stupidity (similarly, if your stupidity rises, this implies that your intelligence lowers).

Can humans arrive at a 100% intelligence and a zero amount of stupidity, or are we fated to always have some amount of stupidity, no matter how hard we might try to become fully intelligent?

Returning to the bucket metaphor, some would claim that there will never be the case that you are completely and exclusively intelligent and have expunged stupidity. There will always be some amount of stupidity that’s sitting in that bucket.

If you are clever and try hard, you might be able to narrow down how much stupidity you have, though there is still some amount of stupidity in that bucket.

Does having stupidity help intelligence or is it harmful to intelligence?

You might be tempted to assume that any amount of stupidity is a bad thing and therefore we must always be striving to keep it caged or otherwise avoid its appearance. But we need to ask whether that simplistic view of tossing stupidity into the “bad” category and placing intelligence into the “good” category is potentially missing something more complex. You could argue that by being stupid, at times, in limited ways, doing so offers a means for intelligence to get even better.

When you were a child, suppose you stupidly tripped over your own feet, and after doing so, you came to the realization that you were not carefully lifting your feet. Henceforth, you became more mindful of how to walk and thus became intelligent at the act of walking. Maybe later in life, while walking on a thin curb, you managed to save yourself from falling off the edge of the curb, partially due to the earlier in life lesson that was sparked by stupidity and became part of your intelligence.

Of course, stupidity can also get us into trouble.

Despite having learned via stupidity to be careful as you walk, one day you decide to strut on the edge of the Grand Canyon. While doing so, oops, you fall off and plunge into the chasm.  

Was it an intelligent act to perch yourself on the edge like that? Apparently not.

As such, we might want to note that stupidity can be a friend or a foe, and it is up to the intelligence portion to figure out which is which in any given circumstance and any given moment.

You might envision that there is an eternal struggle going on between the intelligence side and the stupidity side.

On the other hand, you might equally envision that the intelligence side and stupidity side are pals, each of which tugs at the other, and therefore it is not especially a fight as it is a delicate dance and form of tension about which should prevail (at times) and how they can each moderate or even aid the other.

This preamble provides a foundation to discuss something increasingly becoming worthy of attention, namely the role of Artificial Intelligence and (surprisingly) the role of Artificial Stupidity.

For my indication of the grand convergence that has led to today’s AI, see this link:

For the importance of AI having self-awareness, see my article here:

For why it is crucial to have AI algorithmic transparency, see my review here:

For my assessing whether AI can have the motivation, see the article here:

Exploiting Artificial Stupidity For Gain

When referring to true self-driving cars, I’m focusing on Level 4 and Level 5 of the standard scale used to gauge autonomous cars. These are self-driving cars that have an AI system doing the driving and there is no need and typically no provision for a human driver.

The AI does all the driving and any and all occupants are considered passengers.

On the topic of Artificial Stupidity, it is worthwhile to quickly review the history of how the terminology came about.

In the 1950s, the famous mathematician and pioneering computer scientist Alan Turing proposed what has become known as the Turing test for AI.

Simply stated, if you were presented with a situation whereby you could interact with a computer system imbued with AI, and at the same time separately interact with a human too, and you weren’t told beforehand which was which (let’s assume they are both hidden from view), upon your making inquiries of each, you are tasked with deciding which one is the AI and which one is the human.

We could then declare the AI a winner as exhibiting intelligence if you could not distinguish between the two contestants. In that sense, the AI is indistinguishable from the human contestant and must ergo be considered equal in intelligent interaction.

There is a twist to the original Turing test that many don’t know about.

One qualm expressed was that you might be smarmy and ask the two contestants to calculate say pi to the thousandth digit.

Presumably, the AI would do so wonderfully and readily tell you the answer in the blink of an eye, doing so precisely and abundantly correctly. Meanwhile, the human would struggle to do so, taking quite a while to answer if using paper and pencil to make the laborious calculation, and ultimately would be likely to introduce errors into the answer.

Turing realized this aspect and acknowledged that the AI could be essentially unmasked by asking such arithmetic questions.

He then took the added step, one that some believe opened a Pandora’s box, and suggested that the AI ought to avoid giving the right answers to arithmetic problems.

In short, the AI could try to fool the inquirer by appearing to answer as a human might, including incorporating errors into the answers given and perhaps taking the same length of time that doing the calculations by hand would take.

Starting in the early 1990s, a competition was launched that is akin to the Turing test, offering a modest cash prize and has become known as the Loebner Prize, and in this competition, the AI systems are typically infused with human-like errors to aid in fooling the inquirers into believing the AI is the human. There is controversy underlying this, but I won’t go into that herein. A now-classic article appeared in 1991 in The Economist about the competition.

Notice that once again we have a bit of irony that the introduction of stupidity is being done to essentially portray that something is intelligent.

This brief history lesson provides a handy launching pad for the next elements of this discussion.

Let’s boil down the topic of Artificial Stupidity into two main facets or definitions:

1)      Artificial Stupidity is the purposeful incorporation of human-like stupidity into an AI system, doing so to make the AI seem more human-like, and being done not to improve the AI per se but instead to shape the perception of humans about the AI as being seemingly intelligent.

2)      Artificial Stupidity is an acknowledgment of the myriad of human foibles and the potential inclusion of such “stupidity” into or alongside the AI in a conjoined manner that can potentially improve the AI when properly managed.

One common misnomer that I’d like to dispel about the first part of the definition involves a somewhat false assumption that the computer potentially is going to purposefully miscalculate something.

There are some that shriek in horror and disdain that there might be a suggestion that the computer would intentionally seek to incorrectly do a calculation, such as figuring out pi but doing so in a manner that is inaccurate.

That’s not what the definition necessarily implies.

It could be that the computer might correctly calculate pi to the thousandth digit, and then opt to tweak some of the digits, which it would say keep track of, and do this in a blink of the eye, and then wait to display the result after an equivalent of the human-by-hand amount of time.

In that manner, the computer has the correct answer internally and has only displayed something that seems to have errors.

Now, that certainly could be bad for the humans that are relying upon what the computer has reported but note that this is decidedly not the same as though the computer has in fact miscalculated the number.+

There’s more than can be said about such nuances, but for now, let’s continue forward.

Both of those variants of Artificial Stupidity can be applied to true self-driving cars.

Doing so carries a certain amount of angst and will be worthwhile to consider.

For my detailed review of the Turing Test, see this link:

On the problems of probabilistic reasoning in AI, take a look at my indication:

Common sense reasoning is an open-ended challenge and needs to be considered, see my article:

A controversial perspective is that perhaps we need to restart our understanding and approach to AI, see this discussed here:

Read the full story on AI Trends Insider

more posts