One of the most difficult problems when building intelligent machines – especially at the edge – is taking a behavior or feature that was designed or trained in one environment and making it work in another. Your robot controller, vision system, or neural network may perform perfectly until the temperature, light level, or radiation exposure changes, then rapidly degrade or fail. As early as the 1950s, researchers realized that the same process that made life such a success – evolution – could be used to optimize all kinds of engineering systems.
With the growing momentum behind building intelligent machines, there has been an increase in evolutionary research applied to this field. What is important here is that the emphasis is not on finding the most effective solutions, but on the most robust ones: robust against noise, variability, and failures within the hardware on which they will be implemented. This robustness will be crucial to the success of many artificial intelligence (AI) technologies, especially those used in hostile environments such as space, and those using emerging analog technologies such as memristors.
In engineering and computer science, the concept of evolution is much the same as in biology. Essentially, a set of initial configurations – potential solutions to the problem at hand – is defined within a set of constraints (which components can be used, how they can connect to each other, etc.). These are set to perform a task, such as controlling a robot with a sensor to avoid obstacles. The success of each solution is measured using a fitness function, and the worst are then eliminated.
Every solution – good and bad – is represented by a genetic code that determines its shape, wiring, structure, and anything else that is allowed to change with evolution. The top performers are either bred (their genetic codes are somehow combined), mutated (part of the code is changed randomly), or both. This is repeated several times, essentially searching the state space for increasingly successful configurations. This happens without the need for a designer’s insight. One of the advantages of this approach is that, as in nature, the seemingly insignificant benefits of poorly performing solutions can be turned into major benefits later on.
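The loop described above – score, eliminate, breed, mutate, repeat – can be sketched in a few lines of Python. Everything here (the genome encoding, the toy fitness function, the population sizes) is illustrative rather than drawn from any of the systems discussed:

```python
import random

random.seed(1)

# A candidate solution is a "genome": here, a list of floats that could
# encode weights, component values, or wiring choices.
GENOME_LEN = 8
POP_SIZE = 20
GENERATIONS = 50

def fitness(genome):
    # Stand-in task: get as close as possible to an arbitrary target vector.
    # A real fitness function would score, e.g., obstacle-avoidance runs.
    target = [0.5] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def crossover(a, b):
    # Single-point crossover: combine two parents' genetic codes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome, rate=0.1):
    # Randomly perturb part of the code.
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

def evolve():
    pop = [[random.random() for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]          # eliminate the worst half
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children               # next generation
    return max(pop, key=fitness)

best = evolve()
```

Note that no designer's insight enters the loop: only the fitness function and the mutation/crossover operators shape the search.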
This is not new, even in robotics. One of the most compelling examples of the evolution of AI was produced in 1994 by Karl Sims. At the time, Sims worked for Thinking Machines, which gave him access to one of the most powerful supercomputers around.
In a simulated environment, he evolved virtual creatures (including body morphology, sensors, and controllers) that learned – through survival of the fittest – to swim, walk, and grab an object competitively (see Figure 1).
Check out a video of the evolved creatures below. Although this project was entirely virtual, it showed the potential of using the approach to evolve not only algorithms but also hardware.
This work was interesting in three respects. First, it represented the first fully evolved hardware robot controller. Second, it showed how evolution could exploit subtle elements of the hardware's structure to accomplish the target task as efficiently as possible – but such solutions were (inevitably) hardware dependent, meaning they would not work, or not work well, when replicated on other seemingly identical machines. Third, as the team demonstrated a few years later, evolution could be the solution to its own problem, as long as hardware variability was built into the process.
More recently, researchers from the same group have resumed work in this area. From the digital FPGAs of the 1990s (albeit unclocked, which gave them continuous dynamics), they moved in 2020 to evolvable controllers on a 16×16 all-analog field-programmable transistor array. By building enough noise and variability into the simulators in which the controllers evolved, they were able to give a low-specification robot with poor sensors sophisticated obstacle-avoidance behaviors.
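The key idea – randomizing the simulated hardware during fitness evaluation so that evolution cannot overfit one device – can be illustrated with a toy sketch. The sensor model, the driving task, and all parameter values below are invented for illustration; they are not taken from the group's actual setup:

```python
import random

random.seed(2)

def noisy_sensor(true_dist_cm, gain, offset_cm, noise_sd=5.0):
    # Model a poor distance sensor: per-device gain and offset spread,
    # plus per-reading noise.
    return gain * true_dist_cm + offset_cm + random.gauss(0.0, noise_sd)

def run_trial(threshold_cm, gain, offset_cm):
    # Toy task: a robot drives toward a wall from 100 cm away and must
    # stop before it gets within 10 cm. The "controller" is a single
    # stop threshold applied to the noisy sensor reading.
    dist_cm = 100
    while dist_cm > 0:
        if noisy_sensor(dist_cm, gain, offset_cm) < threshold_cm:
            return 1.0 if dist_cm > 10 else 0.0   # stopped safely?
        dist_cm -= 5                               # drive one step closer
    return 0.0                                     # crashed into the wall

def robust_fitness(threshold_cm, trials=50):
    # Average success over many randomized "hardware instances", so only
    # controllers robust to device-to-device variability score well.
    total = 0.0
    for _ in range(trials):
        gain = random.uniform(0.8, 1.2)       # device-to-device spread
        offset_cm = random.uniform(-10.0, 10.0)
        total += run_trial(threshold_cm, gain, offset_cm)
    return total / trials
```

A cautious threshold scores well across the whole spread of simulated devices, while an aggressive one only works on lucky hardware instances – exactly the distinction a variability-aware fitness function is meant to expose.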
Within the neuromorphic engineering community, Katie Schuman and her colleagues at Oak Ridge National Laboratory and the University of Tennessee have worked for years on evolutionary optimization of neural networks. In 2020 they published a paper, “Evolutionary Optimization for Neuromorphic Systems,” showing how they could evolve networks that operate within real hardware constraints, such as limited weight resolution or delays in synapses and neurons. They also pointed out that, with further development, this type of result could “…be used as part of a neuromorphic hardware co-design process to develop new hardware implementations.”
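One such hardware constraint, limited weight resolution, is easy to fold into an evolutionary search: quantize each candidate's weights before scoring it, so the search only ever ranks configurations the hardware could actually hold. The sketch below is a generic illustration of that idea (a single thresholded neuron, a made-up dataset, a simple mutate-and-select loop), not the paper's method:

```python
import random

random.seed(3)

def quantize(w, bits=4, w_max=1.0):
    # Emulate limited synaptic weight resolution: clamp and snap each
    # weight to the nearest level of a signed fixed-point code.
    levels = 2 ** (bits - 1) - 1
    step = w_max / levels
    return max(-w_max, min(w_max, round(w / step) * step))

def forward(weights, x):
    # A single linear neuron with a step activation (illustrative only).
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

def fitness(weights, dataset):
    # Score the network *after* imposing the hardware constraint, so
    # evolution searches only among deployable configurations.
    qw = [quantize(w) for w in weights]
    return sum(forward(qw, x) == y for x, y in dataset) / len(dataset)

# Invented two-input classification data for the demo.
DATA = [((0.2, 0.9), 1), ((0.8, 0.1), 1),
        ((-0.5, -0.4), 0), ((-0.9, 0.3), 0)]

# Minimal (1+1)-style evolution: mutate, keep the mutant if no worse.
best = [random.uniform(-1, 1) for _ in range(2)]
for _ in range(200):
    cand = [w + random.gauss(0, 0.2) for w in best]
    if fitness(cand, DATA) >= fitness(best, DATA):
        best = cand
```

Because quantization is inside the fitness function, the evolved weights need no post-hoc rounding when deployed.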
Olga Krestinkaya and her colleagues worked on exactly this, with special emphasis on analog chips. Their co-design process not only takes into account the known limitations of the specified technology, but also the inherent (but unknown) variability of the underlying devices. The team is particularly focused on the properties of memristors as an enabling technology that will never have the device uniformity inherent in digital memories (see Figure 2).
A few months ago, Žiga Rojec and his colleagues at the University of Ljubljana in Slovenia showed that it is possible to go even further, accounting not only for non-idealities and variability but also for outright failure. One of the most notable applications of early neuromorphic systems, especially analog ones, may be satellites: size, weight, and power are critical, while price is not. Such systems must tolerate the radiation and vast temperature swings of space to function well. Rojec's research shows that, through evolution, an analog chip can be designed to produce satisfactory results even in the presence of short-circuit faults.
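Fault tolerance can be folded into the fitness function in the same way as variability: score each candidate under every single-fault scenario and keep the worst case, so evolution is pushed toward redundant designs. The model below (input lines shorted to 0, a triple-redundant thresholded sum) is a deliberately simplified illustration of that principle, not Rojec's circuit-level method:

```python
def evaluate_with_faults(weights, dataset):
    # Score a candidate under every single "stuck" fault: one input line
    # shorted to 0 V. Fitness is the WORST case over fault modes, so
    # evolution favors redundant solutions that survive any one failure.
    worst = 1.0
    fault_modes = [None] + list(range(len(weights)))  # None = healthy
    for fault in fault_modes:
        correct = 0
        for x, y in dataset:
            xf = list(x)
            if fault is not None:
                xf[fault] = 0.0                  # shorted line reads 0 V
            s = sum(w * xi for w, xi in zip(weights, xf))
            correct += int((1 if s > 0.5 else 0) == y)
        worst = min(worst, correct / len(dataset))
    return worst

# Triple-redundant sensing: three lines carry the same 0 V / 1 V signal,
# and the (hypothetical) circuit thresholds their weighted sum at 0.5.
DATA = [((1.0, 1.0, 1.0), 1), ((0.0, 0.0, 0.0), 0)]
```

Under this metric, a design that spreads its weights across all three redundant lines (e.g. `[0.5, 0.5, 0.5]`) survives any single short, while one that relies on a single line (e.g. `[1.0, 0.0, 0.0]`) fails half its cases as soon as that line is shorted.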
It is perhaps inevitable that a bio-inspired technology should find its progress enabled by a bio-inspired optimization technique. Time will tell.