Ray Kurzweil, Founder and CEO, Kurzweil Technologies
Technology in the 21st Century: An Intimate Merger with Our Bodies
Three-dimensional molecular computing and other advances
will provide the computing power to emulate human intelligence within
a couple of decades, but will we have the software? I believe we
will, guided by the results of a grand project to reverse-engineer
the human brain and understand its methods, an endeavor now in its
early stages. But the exponential growth of the power of information-based
technologies is not limited to computational price-performance.
Communication bandwidth and price-performance, the shrinking size
of technology, genetic sequencing, our knowledge of the human brain,
and human knowledge in general are all accelerating. Within the
next couple of decades, microscopic-size computers will be deeply
integrated in the environment, our bodies and our brains, providing
vastly extended longevity, full-immersion virtual reality incorporating
all of the senses, experience “beaming,” and vastly
enhanced human intelligence.
Michael Denton, Senior Research Fellow in Human Genetics,
Biochemistry Department, University of Otago, New Zealand
Organisms Are Not Machines: The Unique Characteristics of Living Things Depend
on the Unique Properties of Special Types of Matter
Mechanists from Descartes to Kurzweil have argued that
there is no fundamental difference between organisms and machines
– that organisms are merely advanced machines, and that eventually
all the unique properties of organisms, including self-replication
and self-awareness, will be instantiated in "man-made"
machines constructed out of non-biological materials.
Here I offer
a critique of this position, held by Kurzweil and other mechanists.
I argue that advances in molecular biology in the past half century
have revealed that many of the unique properties of cells and particularly
their previously mysterious self-replicative ability depend primarily
on the unique properties of the special biomaterials used in their
construction. The most important of these materials are the DNA
double helix and the 1,000 protein folds. The double helix lends
itself naturally to self-replication, and the 1,000 protein folds
that it encodes possess an unparalleled suite of chemical and physical
properties, as well as the unique capacity to self-assemble, or crystallize,
into their native 3D structures.
From this I
argue that when the material basis for other vital phenomena such
as awareness and intelligence is finally elucidated, these will
also prove (like self-replication and other vital phenomena now
well understood) to be substrate-dependent and arise from
the natural emergent properties of very special categories of matter.
Therefore, I speculate that computing systems as we know them today,
and as we may know them in the near future, built out of non-biological
materials (e.g., carbon nanotube computing, optical computing, etc.),
no matter how complex or intricate, will never produce biological
phenomena such as "awareness" and "human intelligence."
Ilkka Tuomi, Visiting Scientist, European Commission's Joint Research
Centre, Institute for Prospective Technological Studies, Seville
Moore's Law and Kurzweil's Hypothesis: Accelerating Technical Change, Infinite
Innovative Demand, or Bad Data?
Moore's Law has often been used to illustrate the phenomenal
speed of technical change. Ray Kurzweil has proposed that progress
in computing reveals that Moore's Law gives a conservative estimate
of technical change. According to Kurzweil, technology progresses
at a double exponential speed. This leads to a rapidly growing pace
of technical change, and creates in the next few decades a "Singularity"
where human and technical evolution will be qualitatively transformed.
According to Kurzweil, computers will soon become more intelligent than humans.
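The distinction between simple and double exponential growth can be made concrete with a toy calculation; the parameters below are illustrative only, not Kurzweil's or Tuomi's data:

```python
import math

def exponential(t, doubling_time=1.5):
    """Simple exponential growth (Moore's Law): capacity doubles every
    fixed interval of t years."""
    return 2 ** (t / doubling_time)

def double_exponential(t, initial_doubling=1.5, shrink_rate=0.05):
    """Double exponential growth (Kurzweil's claim): the doubling time
    itself shrinks exponentially, so the growth rate accelerates.

    Growth rate r(t) = ln(2) / d(t) with d(t) = d0 * exp(-k * t);
    integrating r from 0 to t gives the exponent below."""
    d0, k = initial_doubling, shrink_rate
    return math.exp((math.log(2) / (d0 * k)) * (math.exp(k * t) - 1))
```

With these toy parameters the two curves agree early on, but the double exponential inevitably pulls away from the fixed-doubling-time curve.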
This paper is
a critical study of Kurzweil's hypothesis. It presents data on long-term
economic growth and developments in computing technology and discusses
the conceptual foundations of Kurzweil's proposal. Empirical evidence
does not seem to support Kurzweil's hypothesis. A conceptual analysis
of Kurzweil's proposal highlights challenges for all generic claims
about accelerating technical change. The presented analysis, however,
leads to the conclusion that the growth rates of specific indicators,
such as total world GDP, might approach growth rates that characterize
information and communication technologies. Although we are probably
not moving toward a Kurzweilian "singularity," it is possible
that the current approximate 15-year doubling time of key elements
of our global economy will shrink by an order of magnitude in the decades ahead.
Eric Drexler, Co-Founder and Chair, Foresight Institute
Toward the Feynman Vision
Advances in nanotechnologies will provide a basis for developing
nanofactories able to control the structure of matter with digital
precision at the atomic level, fulfilling a vision first articulated
by Richard Feynman. Developing nanofactories will, however, require
a broad, long-term systems engineering effort in a field now organized
and funded as a collection of small, short-term science projects.
Confusion regarding long-term goals has inhibited the emergence
of a focused effort; clarity regarding long-term goals can enable such an effort.
John Koza, President, Genetic Programming Inc.
Inspired Computing for Automating Innovations
Genetic programming (GP) is an automated invention machine and now delivers
routine human-competitive machine intelligence. GP starts from a
high-level statement of what needs to be done and uses the Darwinian
principle of natural selection to breed a population of improving
programs over many generations.
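The breeding loop can be sketched in miniature. The toy symbolic-regression example below is my own illustration, far simpler than Koza's actual GP system: random arithmetic expression trees are bred toward a target behavior by mutation and survivor selection.

```python
import random

# Toy genetic programming: the "high-level statement of what needs to be
# done" is just a fitness measure, and Darwinian selection breeds better
# programs over the generations.

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

def random_tree(depth=3):
    """A random program: an arithmetic expression over x and small constants."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(-2, 2)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, target, xs):
    """Sum of squared errors against the target behavior (lower is better)."""
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in xs)

def mutate(tree):
    if random.random() < 0.3 or not isinstance(tree, tuple):
        return random_tree()
    return (tree[0], mutate(tree[1]), mutate(tree[2]))

def evolve(target, generations=60, pop_size=80):
    xs = [i / 2 for i in range(-6, 7)]
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, target, xs))
        survivors = pop[:pop_size // 4]                      # selection
        pop = survivors + [mutate(random.choice(survivors))  # reproduction
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=lambda t: fitness(t, target, xs))
```

Because the best survivors are carried over unmutated, the best program's fitness never worsens between generations; for a target like x² + x the loop typically rediscovers an exact tree such as `('+', ('*', 'x', 'x'), 'x')`.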
There are now
15 instances where GP has created an entity that either infringes
or duplicates the functionality of a previously patented 20th-century
invention, 6 instances where it has done the same with respect to
post-2000 patented inventions, 2 instances where GP has created
a patentable new invention, and 13 other human-competitive results
produced by GP.
Up to now, GP
has delivered qualitatively more substantial results in synchrony
with the relentless iteration of Moore's Law. We can therefore confidently
predict a future acceleration of the automation of the invention
process. The inventions produced by GP exhibit the same kind of
creativity characteristic of human-produced inventions.
Keith Devlin, Executive Director, CSLI, Stanford University
Within the next twenty years, information will disappear, killed
by the very technologies that were developed to handle it. Knowledge
resides in people's heads; data resides in libraries, in newspapers,
and stored in various kinds of magnetic and optical media; information
lies somewhere in between. Knowledge is what we use to act wisely.
It cannot be bought or sold except by buying or selling the heads
that contain it. Data can be, and is, traded, but it is worthless
without the appropriate knowledge, from which its value derives.
In the days of print communication and the early days of information
technology, it made sense to talk about information as a commodity.
Print information could be weighed, and as Shannon showed, electronically
communicated information could also be measured, provided there
was not too much of the stuff.
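Shannon's measure is easy to state concretely. A minimal sketch of the bits-per-symbol entropy of a text, treated as a memoryless source:

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Bits per symbol of a text, modeled as a memoryless source --
    the sense in which Shannon made information measurable."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

shannon_entropy("abcd")  # 4 equiprobable symbols -> 2.0 bits per symbol
shannon_entropy("aaaa")  # a source that never surprises carries 0 bits
```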
Today's information and communication technologies, however, are of such speed and capacity
that it no longer makes sense to think and talk about information.
When you are in a small boat in the middle of the ocean, talk about
water is irrelevant; the waves and the currents are what matter.
Likewise, as we make our way in today's "ocean of information,"
the key concepts are media and interaction.
My talk outlines
the research carried out at Stanford's own CSLI
that supports the above claims, and describes a new Stanford program,
Media X, designed
to meet the challenges of a new world where information is as valueless
as water in the ocean, and media and interaction are everything.
Ben Goertzel, Founder and CEO, Biomind
Today, AI is an important subfield of computer and cognitive
science, playing a supporting role in a number of areas of applied
computing. However, the vast majority of work being done involves
“narrow AI” rather than true autonomous, creative Artificial
General Intelligence (AGI). Over the next few decades – powered
by the ongoing advances in computing hardware – AI is likely
to assume more of a leadership role, integrating and ultimately
driving all the diverse aspects of 21st century technology.
There is a new
generation of AI systems on the horizon, incorporating a complex-evolving-systems
attitude and a host of technical software and hardware innovations
not previously accessible. This includes neural-net-oriented systems
like Peter Voss’s a2i2 system and Hugo de Garis’s CAM-Brain,
innovative logic-oriented systems like Pei Wang’s NARS and
Stuart Shapiro’s SNePS, and the Novamente AI Engine under
development by the speaker’s R&D team.
In this talk,
after reviewing the current state of AGI theory and technology,
I will turn to the role that AGI is likely to play in the technology
of the early 21st century. In one case after another, it would seem,
cutting-edge technologies display complexities that AGI systems
will be able to manage much more easily than humans. A particularly
large role may be played by hybrid AI systems, which fuse a general-intelligence
component with a specialized-AI component. Emphasis will be placed
on the power of AGI to enhance work in biotech, nanotech, fundamental
physics, and distributed cognition.
Broadly speaking, we propose that the infusion of AGI through various areas
of advanced technology may serve as the transition phase prior to
a Vingean Singularity that is driven and dominated by sophisticated AGI systems.
James N. Gardner, Complexity Theorist
The Selfish Biocosm hypothesis asserts that the anthropic
qualities exhibited by our universe can be explained as incidental
consequences of a cosmological replication cycle in which a cosmologically
extended biosphere supplies two of the essential elements of self-replication
originally identified by John von Neumann.
The hypothesis further asserts that the emergence of life and of intelligence
marks key epigenetic thresholds in the cosmological replication cycle,
strongly favored by the physical laws and constants of inanimate
nature. Under the hypothesis, those laws and constants function
as the cosmic counterpart of DNA: they furnish the "recipe"
by which the evolving cosmos acquires the capacity to generate life
and ever more capable intelligence. The hypothesis reconceptualizes
the process of earthly evolution as a minor subroutine in the process
of cosmic ontogenesis. A falsifiable implication of the hypothesis
is that the emergence of increasingly intelligent life is a robust
phenomenon, strongly favored by the natural processes of biological
evolution and emergence.
William A. Dembski, Associate Professor at the Institute for Faith
and Learning, Baylor University, Texas
An Infinite Universe or Intelligent Design?
To reach the conclusion that the universe is infinite,
physicists (a) make some observations; (b) fit those observations
to some mathematical model; (c) find that the neatest model that
accommodates the data extrapolates to an infinite universe; (d)
conclude that the universe is infinite. In my presentation, I will
examine the logic by which physicists reach this conclusion. I will
show that there is no way to empirically justify the move from (b)
to (c). I will also show how the history of science encourages scientists
to be skeptical of grand extrapolations like the one in (c). Far
from being viewed as a compelling truth that no rational human being
can deny once exposed to the evidence, an infinite universe should
therefore properly be viewed as a metaphysical hypothesis consistent
with certain physical theories but hardly mandated by them. By contrast,
I will argue that the hypothesis of intelligent design – that
a designing intelligence has left clear marks of intelligence in
the biophysical universe – is not a metaphysical hypothesis
at all but a fully scientific one. In particular, I will argue that
whereas an infinite universe does not (and indeed cannot) admit
independent evidence, intelligent design can. Finally, I will indicate
why an infinite universe, though sometimes introduced to get around
fine-tuning problems and other phenomena that seem to call for design,
in fact cannot get around the problem of design.
Nick Bostrom, Research Fellow, Oxford University
Observation Selection Theory: Why We Need It When Thinking about the Big Picture
How improbable was the evolution of intelligent life on Earth? Is there
other intelligent life in the galaxy? What are our chances of colonizing
the universe? When considering such questions about the “big
picture” – about the distribution of observers and about our
place in the world – it is crucial to take into account observation
selection effects (OSEs). An OSE can be thought of as a kind of
filter through which our evidence has passed. Our evidence is restricted
not only by limitations in our measurement apparatuses but also
by the fact that all the evidence we have is preconditioned on the
existence of a suitably positioned observer to “have”
the evidence (and to build the measurement instruments). Ignoring
OSEs leads to anthropic biases. The theory of how to correct for
such biases is a recent development. This paper reviews the basics
of observation selection theory and discusses some of its applications.
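The filter metaphor can be simulated directly; the toy model below is my own illustration, not an example from the paper. An observer who estimates how common observer-bearing worlds are from the only sample available to observers gets a wildly biased answer:

```python
import random

# Toy observation selection effect: observers necessarily find themselves
# on an observer-bearing world, so the evidence available to them has
# already passed through a filter.

def simulate(true_rate=0.001, n_worlds=100_000, seed=1):
    rng = random.Random(seed)
    has_observers = [rng.random() < true_rate for _ in range(n_worlds)]
    # Evidence available to observers: only worlds where observers exist.
    observed = [w for w in has_observers if w]
    naive = sum(observed) / len(observed)        # estimate from inside: always 1.0
    birds_eye = sum(has_observers) / n_worlds    # the unfiltered frequency
    return naive, birds_eye
```

With these assumed parameters, the in-sample estimate says observer-bearing worlds are universal, while the bird's-eye frequency stays near the true rate of 0.1%; correcting for the filter is what observation selection theory formalizes.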
Greg Papadopoulos, Executive Vice-President and CTO, Sun Microsystems
Entropy: A Driver of Change
Computer networks are just the latest example of the fundamental
characteristic of all networks – they accelerate the entropy
of a system. Modern computer networks all erode the structure of
monolithic systems into connected components. This happens via three
outcomes: decomposition of the system, distribution of components,
and the increasing specialization and scaling of the components.
This force accelerates the change in a system, can turn consumer
technology into fashion, and shifts the emphasis to creating value,
which now must be derived by the logical re-integration of these components.
William H. Calvin, Theoretical Neurobiologist, University of Washington
Faster Than What? The Nimble and the Ponderous
Fast is always relative to something. Faster than
earlier is accelerating change. Generally, when you see
accelerating growth, you immediately think of cancer – unless,
of course, you play the stock market, in which case you think of selling
short to make money on the downside.
But most problems
associated with rapid change are due to being faster than some
interacting process. For example, flow that is faster in the center of the
stream than at the edges leads to turbulence. The difference between
an expansion and an explosion is whether other
objects have time to get out of the way. Always ask, “Faster
than what?” and remember that old joke about the two guys
being chased by the bear. You don’t have to run faster than
the bear, only faster than the other guy.
The faster-slower principle also operates on the difference between two independent
growth rates. Two sheets of cells, where the layers are sticky, create
a curved surface when one layer grows faster; so faster-slower
can be creative as well as destructive. In prenatal development,
the various sets of relative growth rates have to be carefully
controlled. Otherwise, birth defects result.
In human affairs, some things are nimble and others are ponderous. The speed of technological
change means that major societal changes can be induced in less
than a decade without planning or consent. It took less than a decade
to go from the knowledge of energy available from the atomic nucleus
to a bomb. The web took only a few years to achieve a billion web
pages, indexed by free search engines. But the speed of reaction
(new policies) tends to be much slower; the Euro common currency
took fifty years, two generations of politicians. Achieving consensus
can take decades for complex issues. What happens in the meantime?
Howard Bloom, Visiting Scholar, New York University
"An Infinity of Singularities": From the Big Bang to the 23rd Century
A singularity is a dramatic break, a phase change, a transition
to some fundamentally new domain. Today we're looking at a moment
in the near future when our technological tool kit will ratchet
us beyond the limits of biology and make us something more than
human. This won't be the first time. Evolution makes singularities
a habit. Quantum leaps in the very nature of being pepper the history
of this cosmos, the history of life, and the history of human beings.
Atoms were a shock when they first came together roughly 300,000
years after the Big Bang. Nearly four billion years ago megateams
of smart molecules perfected a trick the cosmos had never seen –
self-replication…life. Over 200 million years ago, social
insects – bees, termites, and ants – created a swarm
intelligence that solitary insects fundamentally cannot comprehend.
Eight thousand years ago, a simple invention – the baked-mud
brick – created a cultural singularity that carried Stone
Age humans far past anything dreamed by Pleistocene man.
In this special
45/60 minute video presentation, paleopsychologist, post-Darwinist,
and internationally-acclaimed author Howard Bloom (The Lucifer
Principle, The Global Brain, Reinventing Capitalism),
will paint the grand sweep of such events, and consider their potential
implications for the coming years. We're in for a special treat:
there are few minds on the planet as deeply multidisciplinary, few
who are as broadly concerned with the interpretation and integration
of all the modes and senses of Earth's intelligences. Aaron Hicklin
has said, "Howard Bloom…may just be the new Stephen Hawking,
only he's not interested in science alone; he's interested in the soul."
John Smart, Founder and President, Acceleration Studies Foundation
and the Singularity: Understanding Accelerating Change
There is an emerging group of individuals who study our universal
and technological records of accelerating computational change.
This topic is also called "the singularity," among many
who discuss it on the internet, after an article by science fiction
author Vernor Vinge, "The
Coming Technological Singularity," 1993. Will the 'meta-trend'
of accelerating technological change ever slow down? Are there cosmological,
computational, or systems theory interpretations for our universal
history of continuously accelerating change? What classes of technological
development now seem the most likely candidates for producing socially
and economically transformative near-future events? We'll discuss
these and related topics in light of the latest literature on accelerating change.
In the paradigm
of "evolutionary development" in modern developmental
biology, we are learning that while the vast majority of molecular
pathways of biological systems are evolutionary, chaotic,
contingent, and fundamentally unpredictable, a special subset of
predictable developmental convergences are statistically
very likely to emerge in all unfolding organisms over time. This
process of "self-organization" and predictable emergence
happens in any cycling complex adaptive system whose development
is guided by a special, simple set of initial parameters,
constraints, and coupling constants, such as the 22,000 genes (a
very small amount of information by comparison to the developed
organism) which guide the unfolding of human beings. Over many cycles,
these simple parameters become carefully tuned to use local chaos
to produce predictable form and function in some "far future"
time. There is mounting indirect evidence that our universe is itself
a cycling complex adaptive system in the multiverse, a primarily
evolutionary and unpredictable system that is nevertheless also
based on self-tuned "anthropic" developmental parameters.
Curiously, our universal parameters appear to encode developmental
processes involving accelerating increases in intelligence, interdependence,
immunity, miniaturization, and resource efficiency in special physical
systems over time.
One recent insight
from this "developmental systems theory" is that technological
computation, in general, is a local developmental process that is
following an accelerating and increasingly biology-independent (autonomous)
trajectory. Another is that "top-down" engineering paradigms
like biotechnology, as a general process, will be far less important
in coming years than the "bottom-up" evolutionary development
of information technology in affecting our rates of personal, social
and technological change. Yet another is that humanity's descendants
will apparently not be colonizing outer space, but are instead rapidly
transitioning to an increasingly complex and adaptive "inner
space," one that may even involve some form of constrained
universal transcension (a "developmental singularity")
relatively soon in cosmologic time. We will briefly consider the
arguments, evidence, and testability of these hypotheses, and their
present broad implications in scientific, technological, business, political,
and social domains.
Steve Jurvetson, Managing Partner, Draper Fisher Jurvetson
Venture Capital in a World of Accelerating Change
I will provide a venture capitalist's perspective on technology
change with a particular focus on nanotechnology. How do technology
futures inspire the investment decisions of today and motivate the
search for a pragmatic business path to the frontiers of the unknown?
As the nexus of the sciences, how will nanotech affect the pace
of progress in the increasingly interrelated fields of biotech,
materials design and information technology? As the lab sciences
become information sciences driven by simulation and modeling, how
should we expect the pace of progress to change?
Robert Wright, Visiting Scholar, University of Pennsylvania
and Interdependence: Nonzero Sumness as the Arc of History
Human history is approaching what might be called a “moral
singularity.” Ever since the stone age, technological evolution
has been expanding the scope of social organization and with it
the web of interdependence; people have found their fortunes interlocked
with the fortunes of people farther and farther away. By the late
19th century, this long-distance correlation of fortunes was already
so great that, as the saying in financial circles went, “When
England sneezes, Argentina catches pneumonia.” But this interdependence
is now making a quantum leap, and moving well beyond the realm of
economics. Because of progress in fields ranging from biotechnology
to infotechnology to nanotechnology, the well-being of, for example,
affluent Americans will increasingly depend on the well-being of
people in less developed countries half a planet away. The result
will be a series of long-distance non-zero-sum games – games whose
outcome is either win-win or lose-lose. Successfully playing these
games will mean radically enhancing our ability to see things from
the perspective of people in situations radically different from
ours – to appreciate their grievances and goals as never before.
If we don’t thus enhance our moral imaginations, some very
bad outcomes, possibly including catastrophe, could ensue.
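The "win-win or lose-lose" outcomes Wright describes are the defining feature of non-zero-sum games. A minimal sketch with hypothetical payoff numbers:

```python
# Toy payoff tables (numbers are hypothetical, not from the talk).
# Each entry maps (row_choice, column_choice) -> (row_payoff, column_payoff).

ZERO_SUM = {   # one side's gain is exactly the other's loss
    ('A', 'A'): (1, -1), ('A', 'B'): (-1, 1),
    ('B', 'A'): (-1, 1), ('B', 'B'): (1, -1),
}

NON_ZERO_SUM = {   # interdependent fortunes: both can win or both can lose
    ('cooperate', 'cooperate'): (3, 3),    # win-win
    ('cooperate', 'defect'):    (0, 1),
    ('defect', 'cooperate'):    (1, 0),
    ('defect', 'defect'):       (-2, -2),  # lose-lose
}

def is_zero_sum(game):
    """A game is zero-sum iff the payoffs cancel in every outcome."""
    return all(a + b == 0 for a, b in game.values())
```

In the second table the players' totals vary by outcome, so each side's best result requires the other side to do well too, which is the structural point of the talk.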
M. Crawford, Autonomy and Robotics Area Lead, NASA Ames Research Center
Robotics & World-Modeling
Over the next decade NASA will launch a series of increasingly
complex exploration missions to some of the harshest and least understood
environments in the solar system. These missions will search for
signs of life, investigate the nature and history of other planets,
and increase our understanding of ourselves, our planet, and our
place in the universe. Some of these remote craft could be out of
earth contact for as much as a week at a time working in environments
that humans have never experienced or, in some cases, never seen.
These requirements are forcing NASA to confront issues in autonomy
and robotics that have been largely sidestepped in terrestrial applications.
In this talk
I will argue that creating this level of autonomous robustness requires
a fundamental change in how we conceptualize software. In particular,
the procedural notion of software as a set of boxes each of which
computes a mathematical function must give way to a model-centric
view in which our primary focus is on creating a declarative use-independent
model of the world. Autonomous agents built around such models will
have the ability to introspect about how their actions affect themselves
and their world, to learn from experience, and to respond robustly
to circumstances and failures not anticipated by their designers.
These capabilities will be essential both for NASA's increasingly
complex exploration missions and for terrestrial applications ranging
from search and rescue robotics to intelligent agents exploring
the "world" of the internet.
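The contrast between procedural boxes and a declarative world model can be sketched in a few lines; this toy rover model is my own illustration, not NASA software. One use-independent effect model serves prediction, safety checking, and action alike:

```python
# A minimal declarative world model: instead of a procedure that just
# computes one answer, the agent keeps a model of how actions change the
# world, which it can query in several different ways.

class WorldModel:
    def __init__(self):
        self.state = {"battery": 100, "position": 0}
        # Declarative effect model: action -> dict of state deltas.
        self.effects = {
            "drive": {"battery": -30, "position": +1},
            "idle":  {"battery": -1,  "position": 0},
        }

    def predict(self, action, state=None):
        """Introspect: what would this action do to the current
        (or a hypothetical) state?"""
        state = dict(self.state if state is None else state)
        for key, delta in self.effects[action].items():
            state[key] += delta
        return state

    def safe(self, action):
        """Reuse the same model for a different purpose: a safety check."""
        return self.predict(action)["battery"] >= 0

    def act(self, action):
        self.state = self.predict(action)

rover = WorldModel()
rover.act("drive")   # state becomes {'battery': 70, 'position': 1}
```

Because prediction, safety checking, and acting all read the same declarative table, adding a new action or a new query does not require rewriting any procedure, which is the robustness argument in miniature.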
Matthew Lennig, Senior Vice President of Engineering, Nuance
Human language is the richest and most natural way for people to
communicate with each other. As computers take on some of the characteristics
that we attribute to human intelligence, it is natural for people
to want to communicate with computers using human language. This
is the linguistic interface.
Human language exists in both its primary spoken form and its derivative written
form. We define a linguistic interface as technology for communication
between humans and machines based on either spoken or written human
language. Primitive linguistic interfaces have existed since the
early days of computing. Advances in speech recognition, speech
synthesis, and computational linguistics have provided the core
technology for the evolution of such interfaces so that today some
impressive examples of natural language interfaces to machines are
being deployed in real applications.
I will review the history of linguistic interfaces, discuss the progress
that has been made to date, present a vision for the future, and
then discuss the challenges that lie between the present state of
technology and our vision of the future conversational user interface.
Tim O'Reilly, Founder and President, O'Reilly & Associates
The Propagation of New Memes in the Media
Richard Dawkins coined the term "meme" to describe
ideas that propagate themselves from mind to mind, reproducing in
a way that is analogous to gene transmission. Those seeking to promote
change need to understand what makes one idea "catch on"
and quickly become part of popular culture, while another remains
an academic footnote. There's no science to this (yet), but there
are lessons to be learned. Technology publisher and activist Tim
O'Reilly will talk about what he's learned from helping foster the
early commercialization of the Internet, the open source software
movement, and web services, and from fighting software patents.
He will share his ideas for "meme engineering" and offer
suggestions for working with the press as well as new media outlets
in order to spread your ideas.
Christine Peterson, President, Foresight Institute
The Next Technological Revolution: Getting Advanced Nanotech Sooner
We can use the laws of physics, of economics, and of human
nature to project what's coming: total control of the structure
of matter, down to the level of individual atoms. This is molecular
manufacturing. The basis of this technology will be molecular machine
systems, already found in nature. We are now learning to design
and build new molecular machine systems. This will bring powerful
abilities: benefits for medicine, the environment, space development,
and alleviating poverty, as well as downsides such as offensive
military use, terrorism, and domination of populations.
Our task is
to maximize and spread the benefits and minimize the problems coming
from this very powerful technology. We need calm, clear thinking
about abstract, complex, scary topics. We need improved tools for
such debates, and we need to educate the public about what's coming
and what strategies are likely to work. To spread the benefits,
development should be accelerated, and intellectual property issues need
close attention. To minimize accidents, we can improve and implement
the Foresight Guidelines, already drafted. The major challenge is
preventing deliberate abuse. We examine various scenarios and suggest
the one that currently looks most likely to head off problems: open,
cooperative international development, including development of defensive
technologies, combined with stable, trustworthy institutions.
Research, policy formulation, and public education on this controversial topic
have been in progress since the late 1970s, and organized since 1986.
Major progress has been achieved, but challenges remain. We'll examine
next steps, and how individuals can participate.
Ross Mayfield, CEO, Socialtext
Enterprise software traditionally yields efficiencies by
automating business processes within hierarchies to achieve economies
of scale and speed. However, in a services economy, an increasing share
of knowledge work is business practice supported by social networks.
Environmental conditions are increasingly turbulent, requiring greater
lateral information flow, depreciating process and rewarding economies
of span and scope.
To date, the
majority of informal collaboration takes place with email and attachments.
Email is no longer a productivity tool thanks to commercial and
occupational spam. The latent value of communication is lost –
there is no institutional memory. Now, software is returning to
the collaborative roots of the Internet and the democratic nature
of the PC revolution.
Social software adapts to its environment rather than requiring the environment
to adapt to it. Trusting users to make their own connections,
share resources, and design their own spaces, guided only by
social convention, works surprisingly well. Relatively simple tools and
rules reveal social networks and yield complex emergent behavior
from the bottom up. Perhaps there is greater value in augmenting
our capabilities than in automating them – for organizations,
for emergent democracy, and for fostering social capital.
A. Hunt, Peace Scholar and Activist, Author of The Future
Accelerating Change and World Peace
Advances in nanotechnology, biotechnology, artificial intelligence,
information technology, and a host of other fields offer incredible
sea-change opportunities. Jacob Bronowski remarked that “in
every age there is a turning point, a new way of seeing and asserting
the coherence of the world.” As we discuss the prospects for
a new turning point, we must not lose sight of the bigger picture
for humanity. The conscious endeavor to promote change often has
unintended consequences. Sometimes these consequences are a windfall,
and sometimes they pose new dilemmas. Any concerted effort to promote
change, therefore, must involve selectivity. We must be selective
in our efforts to hasten change, trying to understand how the change
is likely to play out in the future. Even when we are selective,
however, we encounter competing visions of what our society should
look like. What kind of society do we wish to live in? What is our
responsibility to other people in other societies? What kind of world
do we wish to create? Do we march along in creating new technology
while the vast majority of humanity has no access to our achievements?
If we are to avoid future calamities – including wars, sub-national
violence, and massive privations – we must compassionately accelerate
change. Compassionately accelerating change means a sustained reordering
of our priorities to tackle the root causes of human misery in all
regions of the world. This is an ethical responsibility, and a universal
responsibility, that we cannot escape. And it is the standard by
which our technological achievements will ultimately be assessed.
Mark Finnern, Collaboration Manager, SAP Developer Network
Thomas H. Davenport (Director,
Accenture Institute for Strategic Change) wrote, "Enterprise
systems: The second most important technology of the last decade.
Only the Internet had greater impact on business." These enterprise
systems are the ones that enable companies to accelerate the pace
of change and be more proactive about it. SAP, the third largest
independent software company, with 12 million users and 28,900 employees
in 120 countries, is by far the biggest player in the area of enterprise
systems. SAP is less a technology company and more a business process
improvement company. In this presentation, we will look at the trends
in enterprise systems and business process improvement, as well
as the effects that Auto-ID (RFID), sensor networks, and other technologies
have on the acceleration of these processes.
Marcos Guillen, President, Artificial Development, Inc.
Building a 20-Billion-Neuron Emulation of the Human Cortex
CCortex is a system built to mimic the inner workings of
the human brain. Marcos Guillen will explain how CCortex works,
what portions of the human brain it emulates, and how the system
interacts with the outside world. Guillen will also outline CCortex
applications and Artificial Development's (http://ad.com) plans
for CCortex-based products.