Accelerating Change 2005. September 16-18, Stanford University. Artificial Intelligence and Intelligence Amplification. Transforming Technology, Empowering Humanity
 
 

What is Accelerating Technological Change?

Since the first stone, bone, and wood tools were used in Neolithic times, archeologists, anthropologists, and technology scholars have noted a continual acceleration in the human use of technology. Considering the history of human civilization as an average, distributed network (not the rise or fall of specific societies), each new generation has used more and smarter technology, adopted it faster, and generated far more productivity with it than was physically possible in our parents' time.

Why does this continual acceleration in technology use and effectiveness occur? Perhaps most importantly, new technological systems, guided by human initiative, continually use dramatically fewer physical resources (matter, energy, space, and time) to accomplish any given degree of physical or computational change. As a result, they perennially avoid the normal limits to exponential growth that we see in any system of fixed complexity, such as a bacterial population growing in a lake, which will multiply exponentially only until it runs out of local resources.
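To make the contrast concrete, here is a minimal Python sketch, not from the original text, comparing unbounded exponential growth with the resource-limited (logistic) growth of a fixed-complexity system such as the lake bacteria above. All rates and capacities are illustrative assumptions:

```python
# A fixed-complexity population (the lake bacteria of the paragraph above)
# grows exponentially only until it hits its resource ceiling; the logistic
# model captures that saturation. All parameter values are illustrative.
def exponential(p0, r, steps):
    """Unbounded exponential growth: p(t+1) = p(t) * (1 + r)."""
    pops = [p0]
    for _ in range(steps):
        pops.append(pops[-1] * (1 + r))
    return pops

def logistic(p0, r, k, steps):
    """Resource-limited growth: the effective rate falls to zero near k."""
    pops = [p0]
    for _ in range(steps):
        p = pops[-1]
        pops.append(p + r * p * (1 - p / k))
    return pops

# A hypothetical bacterial population with carrying capacity of 1e9 cells.
exp_pop = exponential(1e3, 0.5, 40)
log_pop = logistic(1e3, 0.5, 1e9, 40)
for t in range(0, 41, 10):
    print(f"t={t:2d}  unbounded={exp_pop[t]:10.3e}  limited={log_pop[t]:10.3e}")
```

The unbounded curve keeps compounding while the limited one flattens at its carrying capacity; the paragraph's claim is that technology, by continually shrinking its resource needs, keeps resetting that ceiling.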

Human emotion (care for ourselves, our fellow humans, and the world at large), human discovery, human creativity, and serendipity each play key roles in the drama of accelerating technological capacity, but in recent centuries it is our scientific discovery, even more than our creativity, that has become the prime driver of accelerating change. Discoveries in the "physics of the microcosm" (semiconductors, lasers, fiber optics, etc.) have been particularly important to the accelerating development of computing machines over the last 120 years.

We do not yet know why our universe is so easily understood by simple mathematics, or why our technologies turn out to be so computation friendly and resource efficient when we create them at small scales, but we do know that discoveries in the microcosm (structures built and operated at the microscopic scale, like integrated circuits) have continually removed short-term barriers to computational and technological advance. We are also learning that discoveries in the "nanocosm" (the scale of molecular structures, guided by human rather than by evolutionary experiment) are continuing this breathtaking pace of change. Consider just a few recent examples:

In the late 1990s, the International Technology Roadmap for Semiconductors (ITRS) consortium, the group responsible for forecasting the future of semiconductor capacity development, projected that depositing metal on silicon for integrated circuit production would run into a process miniaturization block circa 2005. But in 2001, a University of Massachusetts team discovered that depositing metal as a supercritical fluid, rather than as a gas or liquid, was a practical way to sidestep the roadblock.

In 2003, Intel's Andy Grove noted that gate leakage current was becoming a significant problem in the continued miniaturization of silicon semiconductors. Chipmakers then discovered how to cheaply make lower-power "multicore" chips, significantly delaying the arrival of this problem. At the same time, other researchers quickly discovered that hafnium-based gate dielectrics exhibit roughly a hundred times (10^2) less leakage than the silicon dioxide they replace, making them one of several contenders expected to keep Moore's Law healthy for many years to come.

In 2004, while looking for better ways to make lasers across the EM spectrum, researchers discovered that a hollow optical fiber filled with hydrogen gas (a device known as a hollow-core "photonic crystal fiber") converts energy a million times (10^6) more efficiently than previous microlaser systems.

In 2005, working with new "nanostructured" absorbent lattices, Toshiba scientists discovered they could charge a standard lithium-ion battery sixty times (6 × 10^1) faster than previous batteries. These new batteries will go into production in 2006.

When humanity makes advances in efficiency or productivity at the "macrocosmic" scale (such as annual GDP growth in a national economy, a new supply chain algorithm, a new business process, etc.), we typically see advances of a few percentage points (e.g., 3%, or a 1.03X multiplier, for annual GDP growth), and occasionally a few hundred percentage points (e.g., 300%, or a 3X multiplier, for a clever new business model). But in the microcosm and nanocosm, advances of tens, hundreds, thousands, and even millions of times are common, as seen above. We do not yet know why this is the case, but over the 20th century we have seen a sustained record of dramatic and unreasonably effective advances in our microcosmic technologies. Today, both the pace and the globalization of these advances continue to accelerate.
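As a rough back-of-the-envelope check (a Python sketch; the 3% rate and the multipliers are taken from the examples above), one can compute how many years of steady macro-scale compounding it would take to match a single microcosmic jump:

```python
import math

# Years of steady macro-scale compounding needed to match one microcosmic
# jump. The 3% rate and the multipliers are the paragraph's own examples.
def years_to_match(annual_rate, target_multiplier):
    """Solve (1 + annual_rate) ** years == target_multiplier for years."""
    return math.log(target_multiplier) / math.log(1 + annual_rate)

for multiplier in (3, 100, 1_000_000):
    years = years_to_match(0.03, multiplier)
    print(f"a {multiplier:>9,}x advance ~ {years:5.0f} years of 3% growth")
```

At 3% annual growth, a millionfold advance takes roughly 467 years of compounding, while the microcosm has delivered such jumps in single discoveries.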

Looking at the future of chip miniaturization today, even when we contemplate what may be an approaching "Moore's Law limit" to logic gate miniaturization, circa 2015, when our MOS gate sizes will be so small that electrons can no longer be kept from spontaneously "quantum tunneling" between neighboring circuits, we can foresee further miniaturization in the realms of optical computing, quantum dots, and molecular computing. And if such alternative computing platforms do not readily emerge, we further realize that a gate miniaturization limit would simply move us into an era of system miniaturization, a process already well under way (e.g., multicore processors and systems-on-a-chip: cellphone-on-a-chip, GPS-on-a-chip, etc.).

In such an environment, the development of highly parallel and massively modular computing systems (e.g., Danny Hillis's Connection Machine) might then become economically feasible. Today's modestly parallel computer architectures, as seen in graphics render farms, distributed computing, and early grid computing, portend tomorrow's deeply biologically inspired and evolutionary hardware platforms. Such "horizontal acceleration" may remain subdued until the exponential economies realized by today's "vertical acceleration" (logic gate miniaturization) reach at least a temporary plateau.

Fat-fingered 21st-century humans have learned how to create multi-million-mirror MEMS devices (e.g., digital micromirror arrays), to teleport the quantum states of light, and to run quantum computing algorithms on a single atom of calcium. How long might these physical and computational accelerations continue? How long can we continue to make astounding discoveries in the microcosm?

Rolf Landauer and others have shown that there is no minimum physical energy required for computation itself; only the erasure of information carries an unavoidable thermodynamic cost. We are beginning to see, and may eventually utilize, physical structure as far down as the Planck scale, the minimum dimensions of space and time as revealed in modern physical theory. As Eric Chaisson observes, today's Pentium chips already have free energy rate densities (energy flow per unit time and mass) roughly seven orders of magnitude greater than those of any living system on Earth. Seth Lloyd has estimated that the "ultimate laptop" would operate at black-hole-level energy densities, and there are even plans to attempt to create "extreme black holes" in tomorrow's high-energy physics experiments. Truth has become stranger than fiction in many of our leading technology research environments.
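Chaisson's metric can be made concrete with a short comparison. The values below are rough order-of-magnitude estimates adapted from his published figures; every number should be treated as an illustrative assumption rather than a citation:

```python
import math

# Rough order-of-magnitude free energy rate densities (erg per second per
# gram), adapted from Chaisson's published estimates. All values here are
# illustrative assumptions, not figures taken from this document.
PHI_M = {
    "star (Sun)":       2e0,
    "plant":            9e2,
    "animal body":      2e4,
    "human brain":      1.5e5,
    "Pentium-era chip": 1e11,
}

brain = PHI_M["human brain"]
for system, phi in PHI_M.items():
    orders = math.log10(phi / brain)
    print(f"{system:16s} {phi:8.1e} erg/s/g  ({orders:+5.1f} orders vs. brain)")
```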

Today we live in an era of continual surprise. Many serious observers now expect the capacities and intelligence of our information, sensing, storage, and communications technologies to continue their stunning rate of progress for as far as we can see into this new century.

Sustained exponential growth in our basic computational and communications capacities will predictably enable a host of new products and services that are presently impossible. If Moore's Law for microprocessors and memory, as just one of many technology exponentials, continues its doubling approximately every 18 months, then another 15 years of this process will yield roughly another 1,000X in processing, storage, and communications capacity, as the quick check below shows. And in some special domains, such as graphics processors, computational capacity has been doubling even faster than every 18 months, for at least eight years.
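A quick sanity check of that arithmetic, as a minimal sketch assuming a constant 18-month doubling period:

```python
# 15 years at one doubling per 18 months is ten doublings, and 2^10 = 1,024,
# hence the "another 1,000X" figure in the paragraph above.
def capacity_multiplier(years, doubling_months=18):
    """Total capacity growth after `years` of fixed-period doublings."""
    return 2 ** (years * 12 / doubling_months)

print(f"15 years of 18-month doublings -> {capacity_multiplier(15):,.0f}x")
# -> 1,024x
```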

What new emergences will such exponential growth enable? Which coming applications, enabled by accelerating change, have the greatest strategic importance? Which will be the most useful and enduring, and why? How can we best promote their balanced development?

Gaining foresight with regard to the meaning, implications, risks, and opportunities of accelerating technological change has become both our greatest lever for moving the world and our most fundamental educational priority.

We can best rise to this challenge and responsibility as a multidisciplinary and multibiased community of minds, one that champions both diversity and critical judgment. Increasing awareness of accelerating technological change is an important first step, and we believe better analysis, forecasting, and action plans must follow.

We hope you can join us at Accelerating Change and lend your insight and energy to our growing community.

[For more, see "Understanding the Accelerating Rate of Change," Ray Kurzweil and Chris Meyer, 2003.]

Key Questions
How does computation affect our environment?
What is accelerating technological change?
Why is accelerating change important?
What is the universal story of accelerating change?
What is the "technological singularity" hypothesis?
Where might accelerating change take us in the 21st century?
What are our main benefits and risks with regard to accelerating change?
How do we improve the study of accelerating change?

 
