What just happened? Rise of interest in Artificial Intelligence - OPINION

14 August 2019

Subbarao Kambhampati, PhD, is a professor of computer science at Arizona State University and chief AI officer for AI Foundation, which focuses on the responsible development of AI technologies.

Artificial Intelligence, or AI for short, has become quite the public buzzword.

Companies and investors are pouring money into the field. Universities — even high schools — are rushing to start new degree programs or colleges dedicated to AI. Civil society organizations are scrambling to understand the impact of AI technology on humanity, and governments are competing to encourage or regulate AI research and deployment. One country, the United Arab Emirates, even boasts a minister for AI.

At the same time, the world’s militaries are developing AI-based weaponry to defeat their enemies, police agencies are experimenting with AI as a surveillance tool to identify or interrogate suspects, and companies are testing its ability to replace humans in menial or more meaningful jobs — all of which may change the equation of life for all of the world’s people.

Our fixation with AI seemingly started only a couple of years ago. Yet the pursuit of AI — the quest to get computers to show intelligent behavior — traces back to the very dawn of computing, when Alan Turing, considered the father of computer science, asked: "Can machines think?" The field itself was christened at a Dartmouth workshop in 1956. Over its more than 60 years of existence, AI has balanced precariously between being an irresistible scientific quest and a seemingly quixotic enterprise, falling in and out of public favor.

Only now, thanks to key advances in computing and ubiquitous data capture (including such technology as smartphones), have its true promises — and its perceived threats — begun to seem more of a reality.

As a graduate student working in AI in the mid-1980s, I recall that the general public's reception of it was mild derision, with multiple books published on the implausibility of the very field.

To understand what changed most recently to make AI a household word, it is instructive to compare the progress in AI to the stages at which children show different facets of intelligence.

Children typically start by showing signs of perceptual and manipulative intelligence — the ability to see, hear and smell the world, and to manipulate physical objects around them. They go on to show signs of social intelligence, along with emotional and communicative intelligence. Finally, they graduate to cognitive intelligence — the ability to reason in abstract symbolic terms — that underlies most intelligence-assessment tests.

In contrast, AI's quest to have computers show facets of intelligence went in almost the opposite direction. The last time AI was in the public imagination was in the early ’80s, when so-called expert-systems technology was being used to automate reasoning processes in many industries. In the ’90s, we had progress on general-reasoning systems, with the decisive win of IBM’s “Deep Blue” computer over world chess champion Garry Kasparov. It was only in the early 2000s that AI started making progress toward perceptual intelligence, which has driven today’s significant interest in AI.

In other words, we had AI systems defeating humans in chess, a task popularly considered the zenith of cognitive intelligence, long before they had the perceptual/manipulation capabilities needed to recognize the pieces on the board by sight and move them — quite a stark contrast to how children learn to play chess.

Understanding why the progress in AI happened in this opposite way provides a very useful perspective on the rise of interest in AI. The first attempts at getting computers to show intelligent behavior focused on programming them with our theories of intelligent problem-solving. This approach worked fine for facets of intelligence, such as reasoning, for which we do have conscious theories.

However, as the philosopher and polymath Michael Polanyi famously remarked, we know more than we can tell. We have no consciously accessible theories for many aspects of our intelligence, including perception — how we "see" the world around us.

As babies, we learned how to do perception from observation. (After all, human babies hang around for years just being cute, observing and soaking up the world around them.) For AI, progress on perception and other tacit-knowledge tasks thus had to await breakthroughs in algorithms that can learn from observation.

To be more precise, it had to await infrastructure that made it possible to capture and provide training data to the learning algorithms. Although it is fashionable to say that we are producing more data than ever, the reality is that we have always produced data; we just didn't know how to capture it in useful ways. The emergence of the Internet, the World Wide Web, smartphones and the associated infrastructure made it possible to capture the data being produced in useful forms, and then to make it available to learning algorithms.

It is easy to underestimate the importance of this data-capture infrastructure.

Not long ago, the whole field of computer vision revolved around a handful of benchmark images — most prominently, the face of a Swedish model named Lena (pronounced “Lenna”), which became the standard test image in image processing. In contrast, even the smallest benchmarks now contain millions of images. Simply put, the World Wide Web has become a sort of Jungian collective unconscious, which has been leveraged to train many AI systems for tacit tasks such as vision and language.

The data-capture infrastructure, in conjunction with the computational infrastructure, has breathed new life into some machine-learning approaches that have long been around. The resulting technology, branded "deep learning," has been behind some of the most impressive feats of AI in perceptual intelligence. These feats, in turn, have captured public interest to an extent that far surpassed anything AI experienced before.

In a way, this fascination is not hard to understand. When Deep Blue defeated Kasparov, it was a hot news item for a few days but faded away, as it didn't affect our day-to-day lives. However, the fruits of the advances in perceptual intelligence are translating into the visual- and voice-recognition capabilities of our smartphones, significantly impacting our daily lives. Once AI captured the public imagination, this invariably led to many misconceptions, misperceptions and fears about the capabilities and potential impacts of the technology.

So, having made early strides on explicit-knowledge tasks involving reasoning over programmed information, AI technologies have recently started making impressive progress on tacit tasks — in large part, thanks to the availability of infrastructure for capturing training data. This has, in turn, led to a slew of applications of AI technology, and has captured public imagination.

It is ironic, in a way, that advances in perceptual intelligence — the ability to see and hear the world, which we humans share with all animals — are what have fueled the recent resurgence of interest in AI. A distinguishing characteristic of human intelligence is, of course, the ability to combine perception with cognitive and social intelligence and common-sense models of the world to support long-term planning and collaboration. Achieving this requires a seamless combination of tacit and explicit knowledge and tasks — something that AI technologies have yet to master.

So what next?

Are we at the threshold of human-level or even super-human-level Artificial Intelligence? Will AI systems get along with us? Is data really the new oil driving AI, or is it mere snake oil? Is widespread technological unemployment inevitable with the current rise of AI? Is there an AI race — and if so, are we winning or losing it? Can we defend our reality against the onslaught of AI-powered fake reality? Is AI an amplifier or a cause of societal biases?

Armed with this article’s perspective on the rise of AI, in future columns we will examine some of the misperceptions, policy implications and societal impacts of this burgeoning technology.

The Hill

