What kind of intelligence is artificial intelligence?
The initial goal of AI was to create machines that think like humans. But that is not what happened at all.
KEY TAKEAWAYS
AI researchers aimed to understand how thinking works in humans, and then to use that knowledge to emulate thinking in machines. That is in no way what has happened, however. As stunning as the advances in the field are, artificial intelligence is not actually intelligence at all. Understanding the difference between human reasoning and the power of predictive associations is crucial if we are to use AI in the right way.
“ChatGPT is basically auto-complete on steroids.”
I heard that quip from a computer scientist at the University of Rochester as my fellow professors and I attended a workshop on the new reality of artificial intelligence in the classroom. Like everyone else, we were trying to grapple with the astonishing capacities of ChatGPT and its AI-driven ability to write student research papers, complete computer code, and even compose that bane of every professor’s existence, the university strategic planning document.
That computer scientist’s remark drove home a critical point. If we really want to understand artificial intelligence’s power, promise, and peril, we first need to understand the difference between intelligence as it is generally understood and the kind of intelligence we are building now with AI. That is important, because the kind we are building now is really the only kind we know how to build at all — and it is nothing like our own intelligence.
The gap in AI delivery
The term artificial intelligence dates back to the 1950s, when electronic computers were still new, and it emerged from a 1956 meeting at Dartmouth College. It was there that a group of scientists laid the groundwork for a new field whose goal was a computer that could think. As the proposal for the meeting put it, the project would proceed on the conjecture that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
Through much of the field’s early years, AI researchers tried to understand how thinking happens in humans, then use that understanding to emulate it in machines. This meant exploring how the human mind reasons and builds abstractions from its experience of the world. An important focus was natural language understanding: a computer’s ability to make sense of words and their combinations (syntax, grammar, and meaning) so that it could interact naturally with humans.
Over the years, AI went through cycles of optimism and pessimism — these have been called AI “summers” and “winters” — as remarkable periods of progress stalled out for a decade or more. Now we are clearly in an AI summer. Mind-boggling computing power and algorithmic advances have combined to bring us tools like ChatGPT. But if we look back, we can see a considerable gap between what many hoped AI would mean and the kind of artificial intelligence that has actually been delivered. And that brings us back to the “auto-complete on steroids” comment.
Modern versions of AI are based on what is called machine learning. These are algorithms that use sophisticated statistical methods to build associations from a training set of data fed to them by humans. If you have ever solved one of those reCAPTCHA “find the crosswalk” tests, you have helped create the training data for some machine learning program. Machine learning sometimes involves deep learning, in which the algorithm stacks layers of networks, each layer working on a different aspect of building the associations.
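To make that concrete, here is a minimal sketch of the idea, written in plain Python with NumPy. It is my own toy illustration, not any production system: two stacked layers of simple artificial neurons learn the XOR pattern purely from four labeled examples, and the layer size, learning rate, and number of training steps are arbitrary choices made for the demonstration.

```python
# A toy "deep learning" sketch: two stacked layers fit an association
# (the XOR pattern) from a small training set. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# The training set: four labeled examples the algorithm must associate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two stacked layers of weights: input -> hidden -> output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

lr = 1.0
for _ in range(10_000):
    # Forward pass: each layer reworks the representation of the data.
    h = sigmoid(X @ W1 + b1)   # hidden layer
    p = sigmoid(h @ W2 + b2)   # output layer: predicted probability

    # Backward pass: nudge every weight to shrink the prediction error.
    d_out = (p - y) * p * (1 - p)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0, keepdims=True)

print(np.round(p.ravel(), 2))  # should approach [0, 1, 1, 0]
# The network ends up reproducing the association, yet nothing in these
# weight matrices understands what XOR means.
```

Everything the trained model “knows” lives in those weight matrices: numbers nudged until the predictions line up with the training set.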
Machine learning in all its forms represents a stunning achievement for computer science. We are just beginning to understand its reach. But the important thing to note is that it rests on a statistical model. By feeding the algorithms enormous amounts of data, the AI we have built performs curve fitting in some hyperdimensional space, where each dimension corresponds to a parameter describing the data. By exploring these vast data spaces, machines can, for example, find which words are most likely to follow a sentence that begins with, “It was a dark and stormy…”
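Reduced to its crudest form, that prediction game looks like the sketch below. This is a caricature meant to convey the spirit of “auto-complete on steroids,” not how a real large language model works: the tiny corpus and the complete function are invented for illustration, and actual systems use neural networks trained on vast swaths of text rather than a lookup table.

```python
# A crude auto-complete: count word pairs in a toy corpus, then extend a
# prompt with the statistically most likely next word. Illustrative only.
from collections import Counter, defaultdict

corpus = (
    "it was a dark and stormy night . "
    "it was a dark and gloomy evening . "
    "it was a bright and sunny morning ."
)

# Build bigram statistics: for each word, count what tends to follow it.
follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def complete(prompt, length=2):
    """Extend a prompt by repeatedly choosing the most common next word."""
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(complete("it was a dark and"))
# Likely prints "it was a dark and stormy night": a continuation chosen
# purely from counted word statistics, with no grasp of storms or nights.
```

The continuation is chosen by counting, not by understanding.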
In this way, our AI wonder-machines are really prediction machines whose prowess comes from the statistics gleaned from their training sets. (While I am oversimplifying the wide range of machine learning algorithms, the gist here is correct.) This view does not diminish in any way the achievements of the AI community, but it underscores how little this kind of intelligence (if it should be called such) resembles our own.
Intelligence is not opaque
Human minds are so much more than prediction machines. As Judea Pearl has pointed out, what really makes human beings so potent is our ability to discern causes. We do not just map past circumstances onto our current one — we can reason about the causes that lie behind those circumstances and generalize them to entirely new situations. It is this flexibility that makes our intelligence “general” and leaves the prediction machines of machine learning looking narrowly focused, brittle, and prone to dangerous mistakes. ChatGPT will happily give you made-up references for your research paper or write news stories full of errors. Self-driving cars, meanwhile, remain a long and deadly way from full autonomy. There is no guarantee they will ever reach it.
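A small, entirely invented simulation makes Pearl’s point concrete. Suppose hot weather drives both ice-cream sales and heatstroke cases. A purely statistical learner sees that the two move together and can use one to predict the other, but only causal reasoning tells you that banning ice cream would do nothing to prevent heatstroke. (All numbers and variable names below are made up for the sake of the example.)

```python
# Invented example: statistical association predicts well, but it cannot
# answer "what happens if we intervene?" That takes reasoning about causes.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# The true causal story: temperature drives both quantities.
temperature = rng.normal(25, 5, n)
ice_cream_sales = 2.0 * temperature + rng.normal(0, 2, n)
heatstroke_cases = 0.5 * temperature + rng.normal(0, 1, n)

# An association-only learner happily finds a strong correlation...
r = np.corrcoef(ice_cream_sales, heatstroke_cases)[0, 1]
print(f"correlation(ice cream, heatstroke) = {r:.2f}")  # high, around 0.9

# ...but forcing sales to new values, independent of the weather, leaves
# heatstroke untouched, because sales never caused it in the first place.
forced_sales = rng.normal(50, 10, n)  # the intervention: set sales by decree
heatstroke_after = 0.5 * temperature + rng.normal(0, 1, n)
r_do = np.corrcoef(forced_sales, heatstroke_after)[0, 1]
print(f"correlation after intervention      = {r_do:.2f}")  # roughly 0
```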
One of the most interesting aspects of machine learning is how opaque it can be. Often it is not clear why the algorithms make the decisions they do, even when those decisions turn out to solve the problems the machines were tasked with. That is because machine learning methods rely on blind exploration of the statistical distinctions between, say, useful email and spam across some vast database of messages. The kind of reasoning we use to solve a problem, by contrast, usually follows a chain of logic that can be clearly explained. Human reasoning and human experience are never blind.
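Here is that opacity in miniature. The sketch below is not any real spam filter: a toy bag-of-words model learns one number per word from a handful of made-up messages. It works, in the sense that it scores new messages sensibly, but its internal “explanation” is nothing more than a table of statistics, not a reason a human would give.

```python
# A toy bag-of-words spam scorer: its learned "knowledge" is just a table
# of per-word statistics, not anything like an explanation. Illustrative only.
from collections import Counter
import math

spam = ["win a free prize now", "free money click now", "claim your free prize"]
useful = ["meeting moved to noon", "draft paper attached", "lunch at noon today"]

def word_counts(messages):
    counts = Counter()
    for message in messages:
        counts.update(message.split())
    return counts

spam_counts, useful_counts = word_counts(spam), word_counts(useful)
vocab = set(spam_counts) | set(useful_counts)

# "Training": one smoothed log-odds weight per word. That is the whole model.
weights = {
    word: math.log((spam_counts[word] + 1) / (useful_counts[word] + 1))
    for word in vocab
}

def score(message):
    """Positive scores lean spam, negative scores lean useful."""
    return sum(weights.get(word, 0.0) for word in message.split())

print(score("free prize now"))   # positive: flagged as spam-like
print(score("meeting at noon"))  # negative: looks useful
print(sorted(weights.items(), key=lambda kv: kv[1], reverse=True)[:3])
# The top "reasons" are just words with big numbers: statistics, not logic.
```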
That difference, between reasoning from causes and associating from statistics, is the difference that matters. Early AI researchers hoped to build machines that emulated the human mind, machines that thought like people. That is not what happened. Instead, we have learned to build machines that don’t really reason at all. They associate, and that is very different. It is also why approaches rooted in machine learning may never produce the kind of artificial general intelligence the founders of the field were hoping for. And it may be why the greatest danger from AI won’t be a machine that wakes up, becomes self-conscious, and decides to enslave us. The real danger is the one we pose to ourselves by misidentifying what we have built as actual intelligence: by building these systems into our society in ways we cannot escape, we may force ourselves to conform to what they can do, rather than discover what we are capable of.
Machine learning is coming of age, and it is a remarkable and even beautiful thing. But we should not mistake it for intelligence, lest we fail to understand our own.
Source: https://bigthink.com/13-8/what-kind-of-intelligence-is-ai/