What Can’t AI Do?

Econometrics can be used to illustrate the differences between artificial and human intelligence. It is crucial to have a firm understanding of tacit knowledge and the limitations of AI before implementing it.

Econometricians use the "red bus-blue bus" problem as a thought experiment to demonstrate a key issue that arises when statistical estimation is used to quantify the probability that an individual will make a particular choice from multiple options. If you are indifferent between taking a car or a red bus to work, then the estimated probability of your picking either option is a coin flip: one-half for each.

Now, introduce a third transportation option in two different scenarios, assuming in each that the traveler remains indifferent among the alternatives. In the first scenario, open a new train route, so that the alternatives available to the indifferent traveler are car, red bus, and train. The estimated probabilities are now one-third car, one-third red bus, and one-third train. The odds between any pair of alternatives are unchanged from the two-choice scenario: one-to-one, and one-to-one-to-one overall.
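The estimation described above behaves like a standard multinomial logit: when the traveler is indifferent, every alternative gets equal utility, and probability is split evenly among however many options the model sees. A minimal sketch in Python (the helper name `logit_probabilities` is illustrative, not from any particular library):

```python
import math

def logit_probabilities(utilities):
    """Standard multinomial logit: softmax over alternative utilities."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Indifferent traveler: equal utility (0.0) for every alternative.
two_way = logit_probabilities([0.0, 0.0])         # car, red bus
three_way = logit_probabilities([0.0, 0.0, 0.0])  # car, red bus, train

print(two_way)    # [0.5, 0.5]
print(three_way)  # each alternative gets one-third
```

Adding a genuinely distinct alternative such as the train changes each probability from one-half to one-third, exactly as the thought experiment describes.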

In the second scenario, suppose the new option is a blue bus rather than a train. What is the probability of taking a car, a red bus, or a blue bus? The same estimation method again yields one-third for each option. Intuitively, though, the probability of taking the car should remain one-half, because the actual choice is exactly the same as in the original two-choice scenario, i.e., taking a car versus taking a bus. In other words, a red bus and a blue bus represent the same choice; the color of the bus is irrelevant to the traveler’s transportation decision. So, the probability that the indifferent traveler would select either the red or the blue bus is simply one-half of the probability that the person would take a bus, or one-quarter each. However, the method by which these probabilities are estimated is incapable of recognizing these irrelevant alternatives; it codes car, red bus, and blue bus as one-to-one-to-one, just as in the scenario with the train.
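The mismatch between the model's estimate and intuition can be shown numerically. A small sketch, again assuming a standard multinomial logit with equal utilities for an indifferent traveler (the helper name is illustrative):

```python
import math

def logit_probabilities(utilities):
    """Standard multinomial logit: softmax over alternative utilities."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# What the model estimates: car, red bus, and blue bus are coded as
# three distinct alternatives, so each receives one-third.
model_estimate = logit_probabilities([0.0, 0.0, 0.0])

# What intuition says: the real choice is car vs. bus (one-half each),
# and the bus share splits evenly across the two irrelevant colors.
p_car = 0.5
intuitive = [p_car, (1 - p_car) / 2, (1 - p_car) / 2]

print(model_estimate)  # roughly [0.33, 0.33, 0.33]
print(intuitive)       # [0.5, 0.25, 0.25]
```

The gap between the two answers is the failure to recognize irrelevant alternatives that the thought experiment is designed to expose.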

TACIT KNOWLEDGE

"Tacit knowledge" refers to a quantifiable or commonly understood outcome that a human achieves by performing a task that can't be codified by a repeatable rule, as opposed to abstract knowledge, which is describable, rule-bound and repeatable.

Algorithmic Shortcomings

The red-bus/blue-bus (non-)choice is a good example of how algorithmic computation can fail. In their raw forms, models cannot distinguish subtleties of linguistic description that human beings have little or no trouble grasping. Intuitively, we understand why the red bus and the blue bus are identical when considering transportation alternatives, and it is equally clear that the choice set genuinely changes if a train, rather than a bus of another color, is introduced.

Many of the tasks we perform rely on tacit, intuitive knowledge that is difficult to codify and automate, an observation known as Polanyi's paradox.

When we say "doing something," we mean an action that leads to a quantifiable or commonly understood outcome but that can't be expressed with a repeatable rule. Polanyi calls the knowledge behind this kind of human performance "tacit knowledge" and distinguishes it from abstract knowledge, which is describable, rule-bound, and repeatable.

According to economist David Autor, Polanyi's paradox is the reason machines have not taken over all human careers: automation depends on formal methods of communication, and formal methods cannot express tacit knowledge.

Evolutionary Skills

Moravec's paradox states, in short, that

  • The difficulty of reverse-engineering any human skill is roughly proportional to the amount of time that skill has been evolving among animals.
  • The abilities humans have gained through long evolution seem natural to us because they are now subconscious.
  • As a result, the skills that appear effortless are the most difficult to reverse-engineer, while abstract skills that demand conscious effort may not be as difficult as one would expect.

Though it may not be immediately apparent, mental reasoning and abstract knowledge require very little computation, while sensorimotor control, future-outcome visualization, and perceptual inference are far more computationally demanding. As Moravec put it, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."

Moravec's and Polanyi's paradoxes both suggest that abstract thinking is, in evolutionary terms, a recent human development. The intuitive skills that cannot be fully described are far older: they are grounded in our environment and experience, and they predate explanation.

The Future of AI Is Complementary

The existence of these paradoxes has implications for resource allocation in artificial intelligence. If tacit skills are difficult or impossible to codify, then the simplest tasks humans perform will require a lot of time, effort and resources to teach to machines.

The easier a skill is for humans to perform, the harder it is to describe, and the harder it is for machines to replicate. Why invest resources to develop AI that performs increasingly simple tasks?

Even though Moore's Law predicts continued growth in computer processing power, the way we communicate with computers has not changed much since the 1970s. Development of artificial intelligence will slow as diminishing returns set in: the opportunity cost of teaching machines to perform ever-simpler human tasks eventually becomes too high.

According to Autor, the future of AI should be based on its complementarities with human skills rather than its substitutability for them. The advent of electronic calculators and, later, computers revolutionized the field of statistics by allowing for incredibly complex computations to be carried out in a fraction of the time.

This change in computational means allowed machines to take on low-level, repeatable arithmetic tasks, complementing teams of statistical researchers. It freed statisticians and their students to work on harder statistical problems that require creative thinking, something computers do not do well. The current understanding of AI and its interaction with human abilities should likewise be reconsidered in terms of the types of problems it is being developed to solve.
