Developing AGI: Where We Are and What the Future Holds
How do you create Artificial General Intelligence (AGI)? No one precisely knows. If we wait for a complete understanding with robust mathematical models of AGI, we may never get started. Given that, a more experimental and iterative development approach is in order.
At present, no one knows precisely how to create Artificial General Intelligence.
Sure, we can loosely define Artificial General Intelligence, or AGI, as the ability of a computer system to respond as intelligent humans do across a broad spectrum of situations.
We know how AGI differs from Artificial Intelligence (AI), which can often exceed human abilities but only in very limited situations.
We know that our only AGI model today is the human brain, so studying brain functions and building biologically plausible approaches should lead to quicker development of AGI.
We also know that intelligence and thinking largely occur in neurons in the neocortex as a result of their digital, or spiking, function. Not much DNA, however, is devoted to specifying how the neocortex forms, which sets a limit on the maximum complexity of the "software" behind AGI. As a result, the brain, and by extension AGI, must be possible with repeating patterns of simpler neural circuits, with overall AGI capacity bounded by neuron counts.
But if we wait for a complete understanding with robust mathematical models of AGI, we may never get started. Given that, a more experimental and iterative development approach is in order. And rather than tackling the hardest problems first, like chess or language, AGI development needs to build its underpinnings by emulating the capabilities of a three-year-old. Every child is able to leverage those abilities into general intelligence with some years of additional training, so the basics are already in place. Finally, the spiking nature of neurons also directs any simulation of AGI into areas of development outside of AI's classic perceptron/backpropagation approach.
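To make the contrast concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, a common stand-in for the brain's spiking neurons. Unlike a perceptron, which emits a graded output on every step, this neuron accumulates input over time and fires a discrete spike only when its potential crosses a threshold. The constants here are illustrative, not biologically calibrated, and nothing in this sketch is drawn from any particular AGI prototype.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Return the spike train (0/1 per time step) for a stream of inputs."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate input, with leak
        if potential >= threshold:
            spikes.append(1)       # fire a discrete spike...
            potential = 0.0        # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A weak steady input takes several steps to reach threshold, so the output
# is a sparse train of spikes rather than a continuous activation value.
print(lif_neuron([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

The timing of spikes, not just their count, can carry information, which is one reason spiking models fall outside the standard perceptron/backpropagation toolkit.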
Intelligence has evolved since early man, but the brain's structure has not. With that in mind, AGI might begin with basic capabilities, such as navigating an environment or learning cause and effect. It also suggests that human intelligence, which develops within the context of human goals, emotions, and instincts, would form a poor basis for AGI. Human intelligence evolved largely in the service of survival, whereas AGI can be planned and directed largely toward being intelligent. Hence, AGI is unlikely to resemble human intelligence.
Finally, without interaction with the real world, AI will always be narrow. Because the real world is so variable and complex, simulators can speed early development, but at some point understanding the real world becomes a necessary component of general intelligence, and so AGI ultimately requires robotics. Once an AGI has been developed through interaction with the real world, its abilities can be cloned to static hardware, and the knowledge, abilities, and understanding will be retained.
Granted, some of this reasoning may be subject to dispute, and may eventually prove to be in error. But that is precisely the point. The development of AGI via a simulated brain and the simulated entity it controls can settle philosophical differences one way or the other. Moreover, its structure must have the flexibility to continually adapt as new information becomes available.
At present, a prototype AGI can do a number of things: build a mental model of its surroundings from vision, avoid obstacles while moving in a simulated environment, move objects to achieve a goal, learn words associated with object features, respond to voice commands and produce spoken responses, and plan a series of actions to achieve a goal. It cannot, however, do many of these things at once, or do them in response to complex data.
With that in mind, what does the future hold? To date, AGI development has been on a small scale. Prototypes have been limited to encounters with just a few objects and a few attributes, and to learning just a few words. This allows a system to be constructed that can truly understand just a few object types before moving on. Think in terms of how a three-year-old learns about his or her environment. What is there for that three-year-old to understand about simple blocks? Shape, stacking, falling, inertia, color, planning, goals, following verbal directions, giving verbal descriptions: all things that we might associate with a true AGI, but on a tiny scale.
With just a few parameters, we can learn which processes work and which do not. Once these small-scale issues are overcome, the structure of the simulation can be scaled up to huge arrays of neurons representing categories such as shape, color, or size. From a system able to infer categories from what would otherwise be random incoming neural spikes, it appears likely that general intelligence will emerge.
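The category-inference idea above can be sketched with a toy population-coding readout. This is a hypothetical illustration, not the prototype's actual mechanism: each category is represented by an array of neurons, the matching population fires at a higher rate than the background noise, and the category is read out as whichever population produces the most spikes.

```python
import random

def infer_category(rates, n_neurons=50, n_steps=100, rng=None):
    """rates: {category: per-neuron firing probability per time step}.

    Simulates spike counts for each category's neuron population and
    returns the category whose population fired the most.
    """
    rng = rng or random.Random(0)  # fixed seed keeps the sketch repeatable
    counts = {}
    for category, rate in rates.items():
        counts[category] = sum(
            1 for _ in range(n_neurons * n_steps) if rng.random() < rate
        )
    return max(counts, key=counts.get)

# The "square" population fires at 10% per step against a 2% background;
# despite individual spikes looking random, the counts single it out.
stimulus = {"square": 0.10, "circle": 0.02, "triangle": 0.02}
print(infer_category(stimulus))  # → square
```

The point of the sketch is that no single spike is meaningful on its own; the category only becomes reliable in the aggregate, which is why scaling up to large neuron arrays matters.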
Within coming development iterations, the prototype should be able to explore a simulated environment and understand what is there to be learned. In a two-dimensional world, this understanding extends to learning that some objects are moveable and can be moved to accomplish certain goals.
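A two-dimensional world of that kind can be sketched as a small grid containing an agent, a movable block, and a goal. Nothing is learned here; this only illustrates the environment dynamics an agent would have to discover, namely that walking into the block pushes it. The layout and names are illustrative assumptions, not taken from any prototype.

```python
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

class GridWorld:
    """A grid with one agent, one movable block, and one goal cell."""

    def __init__(self, size=5, agent=(2, 0), block=(2, 2), goal=(2, 4)):
        self.size, self.agent, self.block, self.goal = size, agent, block, goal

    def step(self, move):
        dr, dc = MOVES[move]
        r, c = self.agent[0] + dr, self.agent[1] + dc
        if not (0 <= r < self.size and 0 <= c < self.size):
            return  # agent would leave the grid: no-op
        if (r, c) == self.block:  # walking into the block pushes it
            br, bc = r + dr, c + dc
            if not (0 <= br < self.size and 0 <= bc < self.size):
                return  # block would leave the grid: no-op
            self.block = (br, bc)
        self.agent = (r, c)

    def solved(self):
        return self.block == self.goal

world = GridWorld()
for move in ["right", "right", "right"]:  # walk into the block, then push it
    world.step(move)
print(world.solved())  # → True
```

Even this tiny world contains things to learn: which cells are reachable, that the block persists when out of view, and that a sequence of moves can be planned to achieve the goal.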
The two-dimensional world also paves the way for a three-dimensional simulator. With just a few possible objects and actions, the prototype should be able to learn everything there is to know about that environment, including object persistence and the passage of time, planning for the future, and the simple physics of gravity. With these abilities, which are common to any three-year-old, the prototype's horizons can gradually be expanded to real-world interactions.
All of these advances are likely to be gradual. At each step of the way, we will need to make certain that prototypes are progressing toward becoming a useful asset. Ultimately, though, all of this suggests that AGI is not some far-off fantasy. It will be upon us sooner than most people think. Knowing that, it is essential for us to recognize where AGI can be controlled and limited, so that it works for the benefit of mankind rather than its demise.