Intelligent Robots Will Soon Leave the Realm of Science Fiction
One of the oldest dreams of science has been creating a human-like machine. Teams of researchers have been working for half a century to create machines that can do useful work and also share some characteristics of animals and humans: in other words, robots.
We are all familiar with robotic factories that manufacture such things as cars. That's a very different concept, though, from "autonomous robots." Industrial robotics relies on very expensive, pre-arranged pathways along which machines travel to weld or otherwise assemble parts in a repetitive fashion.
A real "robot," by contrast, could come into your office building and take a stroll around, figure out the layout, and then clean it every night. And that's only a starting point. One of the fundamental characteristics of such a successful robot will be autonomy. In other words, it should require minimal attention from a human operator, because it mimics the "intelligence" of a living creature.
In some ways, we are already closer than you might think to that goal. For about $200 you can get a robot vacuum cleaner, the Roomba from iRobot, that will roam around on its own, detect where dirt is, clean it up, and then go park itself in its charging stand when it's finished. Pretty nifty. But we're still a long way from a robot that can put your child's toys away, fry eggs and bacon, or even trim a hedge.
According to some of the leading scientists working to develop robots, a number of trends will change all that. Hans P. Moravec founded the Robotics Institute at Carnegie Mellon University in 1980 and is now working on more capable perception techniques that should allow freely navigating utility robots to be built within this decade. He contends in a Scientific American1 article that increasing computing power is setting the pace for progress.
To figure out what sort of computing power would be needed to build real robots, Moravec compared the development of animal brains over time with the development of computer power. For example, the size of the largest animal brain has, on average, doubled about every 15 million years, while the power of robot controllers has been doubling every two years. Right now, as Moravec asserts in the journal Communications of the Association for Computing Machinery,2 computers have the processing power to enable behavior at the lower range of vertebrate complexity: analogous to some of the smaller fish or larger insects.
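To get a feel for how lopsided those two doubling rates are, here is a back-of-the-envelope sketch in Python. The 15-million-year and two-year figures come straight from the comparison above; the 100,000-fold jump in the second calculation is purely illustrative, not a number from the article:

    import math

    # Doubling rates quoted above.
    brain_doubling_years = 15_000_000    # largest animal brain doubles about every 15 million years
    controller_doubling_years = 2        # robot controller power doubles about every two years

    # Robot controllers are improving roughly ten million times faster than brains evolved,
    # the "10^7 times nature's speed" figure in the title of Moravec's Cerebrum article.
    speedup = brain_doubling_years / controller_doubling_years
    print(f"Controllers improve about {speedup:,.0f} times faster")      # ~7,500,000

    # Illustrative only: a 100,000-fold capability jump takes about 17 doublings.
    doublings = math.log2(100_000)
    print(f"{doublings:.1f} doublings = {doublings * controller_doubling_years:.0f} years for controllers, "
          f"{doublings * brain_doubling_years / 1e6:.0f} million years for brains")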
To appreciate the analogy, let's take a look at the progress of robotics research and its relationship to available computing power.3
In 1950, researchers developed a device with a phototube eye and two vacuum tube amplifiers that drove relays and motors. It would dance around when it got near a lighted recharging hutch. When its batteries ran low, it would enter for recharging. These simple tropisms resemble the intelligence level of a bacterium.
In the early 1960s, researchers created a machine they called "the beast," using transistors. It could wander the hallways using sonar. When its batteries ran low, it would find a wall outlet and plug itself in. Once recharged, it would continue wandering. This deliberate-seeming behavior is about at the level of single-celled animals, like amoebae.
In 1970, Stanford University researchers built the first mobile robots controlled by computers. They used the school's massive mainframes for radio control. These "mobile carts," as they were called, could follow white lines on the floor and identify certain objects. Their intelligence could be compared to that of early multi-celled animals.
Skip to 1980, again at Stanford: With computers now on chips and able to perform a million calculations a second, and with much more adept programmers, a new version of the cart could map and negotiate obstacle courses, covering about 100 feet in five hours with both the intelligence and speed of a garden-variety slug.
By 1990, computer chips had reached 10 million computations per second. This allowed reliable two-dimensional mapping and navigation in real time using sonar range measurements; this is comparable to the performance of the smallest fish or a medium-sized insect.
By the year 2000, computers could perform a billion calculations a second and memory had exploded into the megabyte range. Researchers built camera-equipped robots with dense three-dimensional mapping abilities. This guppy-like intelligence is enough to create robots that can map an office building and then clean its floors.
When he began this work in the 1970s, Moravec was going against accepted wisdom. At the time, most scientists believed that we already had all the computer power we needed; we just had to find clever ways to apply it to robotic applications. Moravec set about calculating the switching operations used by the brain and comparing them to switching in computers.
Using the data then available, he estimated a trillion operations per second for a human brain. The computers available at the time could only perform on the order of a million operations per second, and they had been at that speed for two decades. They'd remain there for another 15 years, but Moravec was not discouraged.
In fact, while the cost per transistor had been dropping by a factor of 100 every decade, from $100 in 1950 to about one cent in 1970, top speed didn't begin to pick up until the 1990s. However, by 1995, speed had finally reached 100 million operations a second. And, just as Moravec predicted, robots began to succeed at tasks scientists had only dreamed of before, such as driving fast over long distances. These accomplishments are described in a 2004 Tech Report from the Robotics Institute.
Even as Moravec watched computer technology take big strides forward, he continued to refine his calculations for biological processing. He found that matching the human brain's functionality would require 100 trillion operations a second, a goal that might be reached for $1,000 by 2030 at the present rate of development.
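As a rough sanity check on that 2030 date, the following Python sketch extrapolates from the article's own numbers: roughly a billion operations per second in a year-2000 machine (assumed here, for illustration, to represent about $1,000 worth of hardware) and the two-year doubling rate quoted earlier.

    import math

    ops_per_1000_dollars_in_2000 = 1e9    # ~1 billion ops/second around 2000 (assumed to be a ~$1,000 machine)
    target_ops = 100e12                   # 100 trillion ops/second, Moravec's human-brain estimate
    doubling_period_years = 2             # the two-year doubling rate cited earlier in the article

    doublings_needed = math.log2(target_ops / ops_per_1000_dollars_in_2000)   # ~16.6 doublings
    years_needed = doublings_needed * doubling_period_years                   # ~33 years
    print(f"~{doublings_needed:.1f} doublings, ~{years_needed:.0f} years, "
          f"i.e. around {2000 + round(years_needed)}")

The result lands in the early 2030s, consistent with Moravec's estimate, though the answer is obviously sensitive to the assumed doubling period.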
As he explains in the journal Cerebrum,4 Moravec used the retina as a proxy for measuring the computing power of nervous tissue. The retina is one-tenth of a millimeter thick and two centimeters across. It processes a million image areas in parallel about 10 times a second. The equivalent in computer terms would require a billion operations a second.
By that measure, it would require 50 billion computing operations a second to perform the functions of each gram of neural tissue. That's how he's able to state with confidence that our computers in 2005 are roughly like a guppy's brain, because that brain weighs one one-hundredth of a gram.
Looked at this way, the question becomes, "How long will it take until we can emulate the functionality of the human brain cost-effectively?" Through simple arithmetic, you can calculate that mimicking the functionality of the human brain, at 1,500 grams, would require a computer capable of 100 trillion instructions per second.
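That arithmetic can be written out explicitly. The retina throughput, the 50-billion-operations-per-gram figure, and the 1,500-gram brain mass are the numbers quoted above; the per-region instruction count and the implied retina mass are back-calculated from them, so treat those as illustrative rather than as Moravec's exact working:

    # Moravec's retina-based estimate, using the figures quoted above.
    image_regions = 1e6                  # the retina processes about a million image areas...
    updates_per_second = 10              # ...in parallel, about 10 times a second
    retina_ops_per_second = 1e9          # stated computational equivalent of the retina's work
    ops_per_gram = 50e9                  # ~50 billion ops/second per gram of neural tissue
    brain_mass_grams = 1500              # human brain mass used in the article

    # Back-calculated: each region update works out to ~100 computer instructions.
    instructions_per_region = retina_ops_per_second / (image_regions * updates_per_second)
    # Back-calculated: the per-gram figure implies a retina mass of ~0.02 grams.
    implied_retina_mass = retina_ops_per_second / ops_per_gram

    # Scaling to the whole brain gives 75 trillion ops/second, on the order of
    # the 100 trillion cited in the text.
    whole_brain_ops = ops_per_gram * brain_mass_grams
    print(f"~{instructions_per_region:.0f} instructions per region update")
    print(f"Implied retina mass: ~{implied_retina_mass:.2f} g")
    print(f"Whole-brain estimate: {whole_brain_ops:.1e} ops/second")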
That point is probably closer than most of us think. We expect the steady progress of Moore's Law, coupled with the programmable logic devices discussed earlier and the power of massively parallel computing architectures, to progressively move the barriers out of the way.
Just last year, for example, a team of engineers at Virginia Tech linked 1,100 dual-processor Macintosh G5 machines into a single cluster. They broke the 10-trillion-operations-a-second barrier for the amazingly low cost of just $6 million. With this kind of power available at lower prices, the science and business of robots is finally taking off in a big way.
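Taking the article's figures at face value, a short calculation shows what that cluster-style price-performance implies; the cost and throughput numbers come from the paragraph above, and the brain-scale target is the 100-trillion figure from earlier, so these are only rough order-of-magnitude estimates:

    cluster_cost_dollars = 6e6        # ~$6 million for the Virginia Tech cluster
    cluster_ops_per_second = 10e12    # ~10 trillion operations per second
    brain_ops_per_second = 100e12     # human-brain estimate cited earlier

    # Cost per trillion operations per second, and what a brain-scale machine
    # would cost at the same price-performance.
    cost_per_trillion = cluster_cost_dollars / (cluster_ops_per_second / 1e12)
    brain_scale_cost = (brain_ops_per_second / cluster_ops_per_second) * cluster_cost_dollars
    print(f"~${cost_per_trillion:,.0f} per trillion ops/second")                   # ~$600,000
    print(f"Brain-scale machine at this price-performance: ~${brain_scale_cost:,.0f}")  # ~$60 million

By this rough measure, brain-scale hardware is already buildable for tens of millions of dollars; the remaining challenge is driving that capability down toward the $1,000 price point.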
Moravec's own company, Seegrid (www.seegrid.com), is now developing vehicles that can navigate by themselves using about a billion operations a second. When Sony created a robotic pet called AIBO, a dog-like creature that originally cost $2,500, executives expected only a few people would buy it. However, it has been such a hot item that it's even hard to find one on eBay at times.
AIBO can learn more than 100 words or phrases as commands. It responds to petting and can learn to perform tricks. In fact, AIBO now competes in the annual matches in Japan in which robots play soccer. Other companies are already chasing this so-called "entertainment robot" market, as well as the potentially huge market for cleaning and transport robots.
Based on these compelling trends, here are six forecasts for your consideration:
First, by sometime in 2007, a standard robotic navigation head will become available. It will be programmed with software designed for a specific task and then retrofitted onto existing transport, cleaning, or security robots, which can then be "trained" by non-specialists. Many of these will incorporate a scanning sensor made by the German company Sick. Siemens already offers a navigation package with a Sick scanner for mapping.
Second, by 2010, mass consumer applications will begin to appear for high-resolution, three-dimensional perception in robots. At first this will produce highly upgraded versions of the robotic vacuum cleaners now available. These will, for example, empty their own dustbins as well as plan their own routes and schedules. They'll work for months unattended. Moravec's company, Seegrid, will be in the competition for this market.
Third, by 2015, larger utility robots will appear with manipulator arms able to run a variety of programs with tens of billions of calculations a second. Using RFID tags, for example, such a robot could shop in a store where products are labeled with such tags. Or it could put labeled items away: for example, loading and unloading the dishwasher and returning items to the proper cabinets.
Fourth, by 2020, so-called "universal robots" will appear that can do most simple chores. This will boost the power of the previous generation to allow such things as cutting the lawn or cleaning the windows, but only when told to do so.
Fifth, by 2030, mammal-like brainpower and cognitive abilities will appear in computers. This will lead to a whole new generation of robots with conditioned learning and the ability to select behavior based on past experience. They will be able to discern and adapt to special circumstances. These robots will be able to read instructions and assemble kit furniture. They'll be able to tell when the windows are dirty and decide to clean them. Able to mentally plan and rehearse tasks, they'll have the ability to explore, find appropriate things, and combine them to perform useful functions, such as locating a carton of eggs and making breakfast. As such, they'll be able to fill the enormous shortfall in human "eldercare givers" that we'll be experiencing at that time.
Sixth, as we reach mid-century, fourth-generation robots will be reaching for the 100-trillion-operations-per-second mark and will develop the ability to "think abstractly" and to "reason" from real-world models they generate through experience. This is the sort of robot that could, for example, conduct medical diagnosis and treatment of a patient without the help of a human doctor. It could also perform all the functions of a soldier. Some of the implications are exciting and others are scary. It's up to science and society to determine which come to fruition.
References List:
1. Scientific American, January 2005, "Insights: You, Robot," by Chip Walter. © Copyright 2005 by Scientific American, Inc. All rights reserved.
2. Communications of the Association for Computing Machinery, October 2003, "Robots, After All," by Hans P. Moravec. © Copyright 2003 by the Association for Computing Machinery, Inc. All rights reserved.
3. To access Hans P. Moravec's article on robotics, written for Encyclopaedia Britannica, visit the Field Robotics Center website at: www.frc.ri.cmu.edu/~hpm/project.archive/robot.papers/2003/robotics.eb.2003.html
4. Cerebrum, Spring 2001, "Robots: Re-evolving Minds at 10^7 Times Nature's Speed," by Hans P. Moravec. © Copyright 2001 by Dana Press. All rights reserved.