Considering the Singularity: A Coming World of Autonomous Intelligence (A.I.)

The original version of this article was published in 21st Century Opportunities and Challenges, Howard Didsbury, Ed., 2003.
A. What is the Singularity?
With increasing anxiety, many of our best thinkers have seen a looming "Prediction Wall" emerge in recent decades: a growing inability of human minds to credibly imagine our onrushing future, a future that must apparently include greater-than-human technological sophistication and intelligence. At the same time, we now admit to living in a present populated by growing numbers of interconnected technological systems that no one human being understands. We have awakened to find ourselves in a world of complex yet amazingly stable technological systems, erected like vast beehives and tended by large swarms of only partially aware human beings, each of whom holds only a very limited conceptualization of the new technological environment we have constructed.

Business leaders face the prediction wall acutely in technologically dependent fields (and what enterprise isn't technologically dependent these days?), where the ten-year business plans of the 1950's have been replaced with the ten-week (quarterly) plans of the 2000's, and where planning beyond two years in some fields may often be unwise speculation.

But perhaps most astonishingly, we are coming to realize that even our traditional seers, the authors of speculative fiction, have failed us in recent decades. In "Science Fiction Without the Future" (2001), Judith Berman notes that the vast majority of current efforts in this genre have abandoned both foresighted technological critique and any realistic attempt to portray the hyper-accelerated technological world of fifty years hence. It is as if many of our best minds are giving up and turning to nostalgia as they see the wall of their own conceptualizing limitations rising before them.
B. Increasing Technological Autonomy

Increasing technological autonomy, however we choose to measure it, is one key assumption behind the singularity hypothesis. Were it to be proven incorrect in coming years, singularity models would have to be fundamentally revised. However, data to date give every indication that autonomy is increasing dramatically every year. Writers on the singularity topic now suggest that progressively more human-independent computer evolution must eventually transition, from our perspective, into a "runaway positive feedback loop" in high-level machine computation.

We are well on the way down the autonomy path within the computer hardware domain. Since the 1950's, every new generation of computer chip (integrated circuit) has been designed to a greater and greater degree not by human beings but by computer software. In other words, an ever-decreasing fraction of human (vs. machine) effort is involved in the hardware design process every year, to produce any fixed amount of computer complexity, however we choose to define that complexity. In fact, 1978 was the last year in which entirely human-designed (non-software-aided) chips were routinely attempted. The 1980's saw the rise of powerful chip design software, the 1990's the emergence of electronic design automation (EDA) software, and recently, evolvable hardware approaches have produced a few specialized chips that are "grown" entirely in silico, without any human intervention whatsoever beyond initial configuration of the design space. Such systems discover useful algorithms that are often incomprehensible to human designers. Self-replicating robots, while still quite primitive, have recently passed the proof-of-concept stage and are now benefiting from powerful advances in simulation and rapid prototyping technologies.

It is now well known that software follows a slower complexity/performance doubling rate than the hardware substrate. Commonly cited measures are six years, vs. 18 months, for a doubling in price-performance, a figure that must vary widely with algorithm, development approach, and software class. But even here, we have seen surprising autonomy advances in recent years. In an accelerating emergence since the 1980's, we have seen several new sciences of emergent, evolutionary, and "biologically inspired" computation, such as artificial life, genetic algorithms, evolutionary programming, neural nets, parallel distributed processing, and connectionist modeling. These new computer sciences, though still limited, have created a range of useful commercial applications, from pattern recognition networks in astronomy that seek out supernovas, to credit card fraud-detection algorithms that substantially outperform classical programs. These industries, while still underdeveloped and of limited scalability, now employ tens of thousands of programmers in a new, primarily "bottom-up" (self-guiding) and only secondarily "top-down" (human-coded) approach to software design. Perhaps even more importantly, biologically inspired approaches have demonstrated that they can increase their own adaptive complexity in real-world environments entirely independent of human aid, when given adequate hardware evolutionary space. And it is clear that the hardware space, or "digital soil," for growing these new systems will become exponentially cheaper and more plentiful in coming decades.
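To make the "bottom-up" character of this approach concrete, here is a minimal sketch of a genetic algorithm, one of the evolutionary methods named above. The task, fitness function, and parameters are illustrative assumptions of mine, not drawn from this article; the point is that selection and variation, not human-coded rules, produce the solution.

```python
import random

# Minimal genetic-algorithm sketch. Task and parameters are illustrative:
# evolve a bit-string whose decoded value in [-1, 1] approaches a target.
TARGET = 0.0

def fitness(genome):
    # Decode the bit-string to a number in [-1, 1]; closer to TARGET is fitter.
    value = int("".join(map(str, genome)), 2) / (2 ** len(genome) - 1) * 2 - 1
    return -abs(value - TARGET)

def evolve(pop_size=50, genome_len=16, generations=100, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half (truncation selection).
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Variation: single-point crossover plus point mutation.
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print("best genome:", "".join(map(str, best)), "fitness:", fitness(best))
```

No line of this program states what the answer is; the answer emerges from repeated selection over random variation, which is the sense in which such systems are self-guiding.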
Both Ray Kurzweil (The Age of Spiritual Machines) and Hans Moravec (Robot) have recently proposed that perhaps even as early as 2020 to 2030 we will have sufficient hardware complexity, as well as sufficient insights from cognitive neuroscience (reverse-engineering the salient neural structure of the mammalian brain), to create silicon evolutionary spaces that will develop higher-level intelligence. But in what may be the most interesting and profound observation, there is now good evidence that technological systems enjoy a multi-millionfold increase in their speeds of replication, variation, operation (interaction/selection), and evolutionary development by comparison with their biological progenitors. Many of these speedup factors appear to range between 1 and 30 million for higher-order processes, with a proposed "average" of 10 million (electrochemical vs. electronic communication speeds).

Therefore, if it is true that accelerating autonomy is an intrinsic feature of any learning system, as some systems theorists have proposed, and if it is also true that today's technological systems are learning on average ten million times faster than the genetic systems which preceded them, and thousands of times faster than the human beings who catalyze them, then we can expect substantial increases in machine autonomy in coming years.

This speed differential has been measured by a number of different approaches, and it is not yet clear which is the most important learning metric. Commonly used genetic-technologic comparisons include data input rates, output rates, communication speed, computation speed at the logic gate and in the entire system, memory storage and erasure speed, and cognitive architectural replication speed, among others.

Even the evolutionary developmental history that allowed Australopithecus to advance very quickly, on evolutionary timescales, through Homo habilis and Homo erectus to modern Homo sapiens, over a span of 8-10 million years, represents less than one year of hyper-accelerated technologic evolutionary developmental time. We begin to suspect, incredibly, that even this type of higher-level "discovered complexity" will be recapitulated within the coming machine substrate in one very interesting year of development only a few decades from the present date (2041? 2061?). So it is that many sober and skeptical thinkers now expect that the semi-intelligent systems of the 21st century, as they become truly self-improving and evolutionary, will rapidly reinvent within the technologic substrate first all of the lower functions of autonomy and intelligence, and then, in one final brief burst, even the higher functions of the human species. Thus even such functions as human language, self-awareness, rational-emotive insight, ethics, and consciousness, complex and carefully tuned processes that we consider the essence of higher humanity, are likely to become fully accessible to tomorrow's technologic systems. What happens after this occurs must be even more dramatic, as you can well imagine.
C. Self-Organizing and Self-Replicating Paths to Autonomous Intelligence (A.I.)

The better we come to understand the way intelligence develops in complex systems in the universe, the more clearly we'll perceive our own role and limits in fostering technological evolutionary development. Top-down A.I. designers assume that human minds must furnish the most important goals to our A.I. systems as they develop. Certainly some such goal-assignment must occur, but it is becoming increasingly likely that this strategy has rapidly diminishing marginal returns. Evolutionary developmental computation (in both biological and technological systems) generally creates and discovers its own goals and encodes learned information in its own bottom-up, incremental, and context-dependent fashion, in a manner only partially accessible to our rational analysis. Ask yourself, for example, how much of your own mental learning has been due to inductive, trial-and-error internalization of experience, and how much was a deductive, architected, rationally directed process. This topic, the self-organization of intelligence, is observed in all complex systems to the extent that each system's physics allows, from molecules to minds.

In line with the new paradigm of evolutionary development of complex systems, we are learning that tomorrow's most successful technological systems must be organic in nature. Self-organization emerges only through a process of cyclic development with limited evolution/variation within each cycle, a self-replicating development that becomes incrementally tuned for progressively greater self-assembly, self-repair, and self-reorganization, particularly at the lowest component levels. At the same time, progressive self-awareness (self-modeling) and general intelligence (environmental modeling) are emergent features of such systems.

Most of today's technological systems are a long way from having these capacities. They are rigidly modular, and do not adapt to or interdepend with each other or their environment. They engage not in self-assembly, but are mostly externally constructed. In discussing proteins, Michael Denton reminds us of how far our technological systems have to go toward this ideal. Living molecular systems engage extensively in the features listed above. A protein's three-dimensional shape is the result of a network of local and non-local physical interdependencies (e.g., covalent, electrostatic, electrodynamic, steric, and solvent interactions). Both its assembly and its final form are a developmentally computed, emergent feature of that interdependent network. A protein taken out of its interdependent milieu soon becomes nonfunctional, as its features are a convergent property of the interdependent system.

Today's artificial neural networks, genetic algorithms, and evolutionary programs are promising examples of systems that demonstrate an already surprising degree of self-replication, self-assembly, self-repair, and self-reorganization, even at the component level. Implementing a hardware description language genotype, which in turn specifies a hardware-deployed neural net phenotype, and allowing this genotype-phenotype system to tune for ever more complex, modular, and interdependent neural net emergence, is one future path likely to take us a lot further toward technological autonomy.
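A minimal software sketch of the genotype-phenotype loop just described. Here the genotype is a flat weight vector rather than a hardware description language, and the task (learning XOR), network shape, and all parameters are illustrative assumptions of mine:

```python
import math
import random

# Neuroevolution sketch: the weight vector is the "genotype"; the
# feed-forward network it parameterizes is the "phenotype". Task and
# hyperparameters are illustrative, not from the article.
def phenotype(genome, x1, x2):
    # A 2-2-1 network with tanh units; 9 weights read in order.
    w = iter(genome)
    h1 = math.tanh(next(w) * x1 + next(w) * x2 + next(w))
    h2 = math.tanh(next(w) * x1 + next(w) * x2 + next(w))
    return math.tanh(next(w) * h1 + next(w) * h2 + next(w))

CASES = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # XOR truth table

def fitness(genome):
    # Negative squared error over the XOR cases; higher is better.
    return -sum((phenotype(genome, a, b) - t) ** 2 for a, b, t in CASES)

def mutate(genome, sigma=0.3):
    # Variation: small Gaussian perturbation of every weight.
    return [g + random.gauss(0, sigma) for g in genome]

population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(30)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]  # truncation selection
    population = elite + [mutate(random.choice(elite)) for _ in range(20)]

print("best fitness:", fitness(max(population, key=fitness)))
```

The designer specifies only the evolutionary space and the selection pressure; the network's internal organization is discovered, not architected, which is the division of labor the paragraph above anticipates.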
At the same time, as Kurzweil has argued, advances in human brain scanning will allow us to instantiate ever more interdependent computational architectures directly into the technological substrate, architectures that the human mind will have less and less ability to model as we engage in the construction process. In this latter example, human beings are again acting as a decreasingly central part of the replication and variation loop for the continually improving technological substrate.

Collective or "swarm" computation is also a critical element of the evolutionary development of complexity, and thus facilitating the emergence of systems we only partially understand but collectively utilize (agents, distributed computation, biologically inspired computation) will be very important to achieving the emergences we desire (a toy illustration follows below). Linking physically based self-replicating systems (SRS's) to the emerging biologically inspired computational systems (neural networks, genetic algorithms, evolutionary systems) which are their current predecessors will be another important bottom-up method, as first envisioned by John von Neumann in the 1950's. Physical SRS's, like today's primitive self-replicating robots, provide an emerging body for the emerging mind of the coming machine intelligence, a way for it to learn, from the bottom up, the myriad lessons of "common sense" interaction in the physical world (e.g., sensorimotor before instinctual before linguistic learning). As our simulation capacity, solid-state physics, and fabrication systems allow us to develop ever more functional micro-, meso-, and nano-scale computational evolutionary hardware and evolutionary robotic SRS's in coming decades (these will be functionally restricted versions of the "general assembler" goal in nanotechnology), we may come to view our technological systems' simulation and fabrication capacity as their "DNA-guided protein synthesis," their evolutionary hardware and software as their emerging "nervous system," and evolutionary robotics as the "body" of their emergent autonomous intelligence.

At best, we conscious humans may create selection pressures which reward certain types of emergent complexity within the biologically inspired computation/SRS environment. At the same time, all our rational striving for a top-down design and understanding of the A.I. we are now engaged in creating will remain an important (though ever decreasing) part of the process. Thus at this still-primitive stage of evolution of the coming autonomous technologic substrate, a variety of differentiated, not-yet-convergent approaches to A.I. are to be expected. Comparing and contrasting the various paths available to us, and choosing carefully how to allocate our resources, will be an essential part of humanity's role as memetically driven catalysts of the coming transition.

In this spirit, let me now point out that on close inspection of the present state of A.I. research, one finds very few investigators remaining who do not acknowledge the fundamental utility of evolution as a creative component in future A.I. systems. Those nonevolutionary, top-down A.I. approaches which remain in vogue (whether classical symbolic or one of the many historical derivatives of this) are now few in number, and despite decades of iterative refinement, have consistently demonstrated only minor incremental improvements in performance and functional adaptation.
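The toy illustration of swarm computation promised above: a minimal particle swarm optimizer, a standard collective algorithm in which no single agent holds the solution and only one piece of global information (the best point found so far) is shared. The objective function and coefficients are illustrative assumptions:

```python
import random

# Toy particle swarm optimization (PSO). Each particle knows only its
# own history and the swarm's best-so-far; the answer is a collective
# product. Objective and coefficients are illustrative assumptions.
def objective(x):
    return (x - 3.0) ** 2  # minimize; optimum at x = 3

N, STEPS = 20, 100
pos = [random.uniform(-10, 10) for _ in range(N)]
vel = [0.0] * N
personal_best = pos[:]
global_best = min(pos, key=objective)

for _ in range(STEPS):
    for i in range(N):
        # Blend inertia, memory of this particle's best point, and
        # attraction toward the swarm's best point.
        vel[i] = (0.7 * vel[i]
                  + 1.5 * random.random() * (personal_best[i] - pos[i])
                  + 1.5 * random.random() * (global_best - pos[i]))
        pos[i] += vel[i]
        if objective(pos[i]) < objective(personal_best[i]):
            personal_best[i] = pos[i]
    global_best = min(personal_best + [global_best], key=objective)

print("swarm's best estimate:", round(global_best, 4))
```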
To me, this is a strong indication that human-centric, human-envisioned design has reached a "saturation phase" in its attempt to add incremental complexity to technologic systems. We humans simply aren't that smart, and the universe is showing us a much more powerful way to create complexity than by trying to develop or deduce it from logical first principles. Thus we should not be surprised that, on a human scale, the handful of researchers working on systems to encode some kind of "general intelligence" in A.I., after a surge of early and uneconomical attempts from the 1950's to the 1970's, now pales in comparison to the 50,000 or so computer scientists who are investigating various forms of evolutionary computation.

Over the last decade we have seen a growing number of real theoretical and commercial successes with genetic algorithms, genetic programming, evolutionary strategies, evolutionary programming, and other intrinsically chaotic and interdependent evolutionary computational approaches, even given their currently primitive encapsulation of the critical evolutionary developmental aspects of genetic and neural computational systems, and their currently severe hardware and software complexity limitations. We may therefore expect that the number of funded investigators who engage in this new evolutionary developmental paradigm will continue to swell exponentially in coming decades, as they are following what appears to be the most practical path to increasing adaptive computational complexity.

In coming years, in concert with more traditional futures organizations like the World Future Society, our own organization, the Acceleration Studies Foundation, will do its part to improve multidisciplinary dialog on the understanding and management of increasingly autonomous technological change. Ours is apparently the first human generation to be definitively surpassed, in computational complexity, by evolutionary developmental technological systems initially constructed by human invention. What is perhaps even more interesting is that this appears to be a natural, universally permissive process, facilitated by the special structure of the physics of the microcosm (small scales of matter, energy, space, and time). Join us in an ongoing critical dialog on what may be the single most important issue of the human era. What an amazing time to be alive.

© 2003, Acceleration Studies Foundation. Small portions (10% or less) of this article may be freely reproduced with credit or link to ASF/Accelerating.org