Column: The immense promise of ‘superintelligence’

Narain Batra. Copyright (c) Valley News. May not be reprinted or used online without permission. Send requests to permission@vnews.com.

By NARAIN BATRA

For the Valley News

Published: 10-25-2024 5:03 PM

In mid-November 2023, Barbra Streisand expressed to talk-show host Stephen Colbert deep sadness over the ongoing violence in Gaza. She said: “It’s sad about what’s going on today — meaning people have to live together, even though they’re different religions or whatever. This is insanity for us not to learn how to live together in peace. I could easily cry about this.” Then with her deeply anguished face, she looked up and said, “Where is God in this time? Where is he or she? Why can’t that energy stop this madness?”

I kept thinking that instead of waiting for gods to help us — and humans have been invoking gods for millennia — what if we have superintelligence that could enhance our intellectual capabilities and collaborate with us to solve our most intractable problems? Could superintelligence have foreseen Hamas’ Oct. 7, 2023 attack on Israel? The horrendous attack did not erupt out of nothing. Long preparations must have gone into its planning and execution. Nor did COVID-19, which killed millions of people worldwide, emerge suddenly out of nothing.

It seems that humanity has reached its maximum level of competence. Or you might say, paraphrasing the Peter Principle, humanity has reached its level of incompetence.

The Peter Principle (based on a 1969 book by Laurence Peter and Raymond Hull) holds that employees in a corporation rise to their “level of incompetence” in a hierarchy. Essentially, people get promoted until they reach a role they aren’t good at. That’s how corporate decline begins. Extend the corporate metaphor to humanity and it does feel like we’re hitting some serious ceilings: climate change, political dysfunction, tech dilemmas, endless wars. We have created problems beyond our comprehension. At our present level of collective intelligence, we will keep muddling from one problem to the next. We need superintelligence to transcend our limitations.

Of the four categories of AI, traditional AI performs predefined tasks and, within its trained domain, excels at pattern recognition, analysis and decision-making. Pfizer and Moderna harnessed AI to speed up the discovery, development, manufacturing and distribution of their COVID-19 vaccines.

Generative AI, such as ChatGPT and other large language models, mimics and synthesizes content from the ocean of material it’s trained on. As a prompt-based, multilingual conversational model, generative AI has the potential to democratize knowledge. With instant translation into many languages, even people who cannot read or write can use it.

Through easy-to-use prompts and instant translation into many languages, generative AI would raise the collective intelligence of people and make societies more productive. GPT-4o, for example, responds in text, audio and video in real time, comparable to human response time in conversation, enabling more natural and seamless interactions in more than 50 languages. Just imagine: An uneducated tribal woman in Chhattisgarh, India, uses an oral prompt with GPT-4o in Hindi about an unexplained breast pain or any other health concern and instantly receives information in her native tongue. Through a series of prompts and instant Hindi-English translation, she could become knowledgeable in any subject and seek help to solve the problem.

We are entering a new age of interactive orality, in which a person could use an oral prompt with a chatbot that opens the door to knowledge in any field without the person being literate in the traditional sense. Mughal Emperor Akbar was illiterate, yet he became profoundly knowledgeable through a culture of orality. For millions of people, illiteracy has been a barrier to knowledge. Generative AI gives new meaning to the biblical utterance: “Seek and you will find. Knock and the door will be opened to you.”

When generative AI is trained to use real-world observations and tools on its own, its capabilities would grow exponentially. Tool-using generative AI would be no different from tool-using humans, except that it would keep learning and improving, using the same techniques we do: asking questions, doing research, even writing code to incorporate into itself, growing and evolving to a higher level.

As generative AI systems keep learning, the rise of artificial general intelligence, a system as smart as humans, is inevitable. This upward spiral of self-learning would eventually evolve into superintelligence, a system superior to human cognitive abilities. Limited by our biology, even the best of us stop learning at some point.

Superintelligence would surpass human intelligence in every domain, including scientific reasoning, creativity and knowledge, functioning at a level beyond the intellectual capacity of any human, no matter how intelligent the person, whether an Albert Einstein or a Robert Oppenheimer. Superintelligence would see things that we don’t. Most importantly, it would be a self-improving system, recursively enhancing its own capabilities. Humans, too, keep improving, but only up to a point. Einstein could go only so far.

What could superintelligence do? It could guide space rockets and satellites, accelerating space exploration and colonization. It could foresee new epidemics and design new drugs with unmatched target specificity and reduced side effects. It could model complex climate systems and guide the development of clean energy technologies. It could analyze complex geopolitical scenarios and propose solutions before catastrophe strikes. It would enable us to see dimensions of reality that our limited intelligence cannot.

Superintelligence would demand huge investment, a massive amount of energy, and powerful hardware, including high-performance processors, GPUs and specialized accelerators. Quantum computing, when developed, could significantly accelerate progress toward superintelligence.

Superintelligence forces us to confront fundamental questions about ourselves and our place in the universe. What does it mean to be intelligent? Closely linked to intelligence is the question of consciousness. We experience the world subjectively, with emotions, feelings, and a sense of self. Would superintelligence possess this quality? Some argue that consciousness is an emergent property of complex systems, and with enough processing power, superintelligence could achieve it. Others believe consciousness is unique to biological systems and cannot be replicated in machines. The answer to this question has profound implications.

Human imagination, along with intuition, is a powerful cognitive capacity that drives creativity and problem-solving, lets us transcend the constraints of the present moment, and shapes our individual and collective experiences. Could superintelligence imagine, dream, fantasize or mythologize? This ongoing dialogue is crucial to navigating the legal, ethical and philosophical challenges posed by superintelligence. We need to be assured that superintelligence’s values are aligned with human values and that it becomes a collaborator in enhancing our freedoms and our humanity. A conscious superintelligence might have its own desires and goals, potentially conflicting with ours. On the other hand, a superintelligence without consciousness, while powerful, might be easier to control, so long as it can truly understand and solve human problems.

While Barbra Streisand wondered why God was indifferent to human suffering in the Gaza-Israel War, I have been wondering what Einstein might have achieved with a superintelligent AI collaborator. Their partnership would have been a mind-boggling fusion of human creativity and AI’s superpower. Einstein spent years seeking a unified theory to reconcile general relativity and quantum mechanics. Superintelligence could have analyzed both fields simultaneously, bridging gaps. Perhaps they’d have discovered the elusive theory of everything; explored multiverse theories; tested the limits of black holes; and unraveled the mysteries of dark matter.

Human evolution from ape-like creatures to Homo sapiens has been a gradual, incremental process of challenge and response, leading to increased brain size and cognitive abilities. The pressure to keep up with superintelligence could potentially drive further neurological adaptations. If superintelligence is developed with appropriate safeguards and oversight, it could be another incremental step in human evolution.

If the universe is indeed multidimensional, superintelligence could open fascinating new avenues for exploring and understanding dimensions beyond our current perception. Until then, humanity will remain trapped, like the blind men in the parable of the blind men and the elephant.

Narain Batra hosts the podcast “America Unbound.” He lives in Hartford.