Advancing AI: Mimicking Decision Making

The idea of a killer robot, capable of making its own lethal decisions autonomously, is what defines the Terminator in James Cameron's 1984 film.

Fortunately for humanity, autonomous killer robots don’t exist yet. Despite enormous technological advances, truly autonomous robots remain in the domain of science fiction.

By late 2020, the excitement that had fueled autonomous vehicle initiatives began to wane. Uber sold its autonomous driving division that year, and while the regulatory framework for autonomous vehicles is far from clear, the technology itself remains a major hurdle.

A machine operating at the edge of a network, be it a car, a robot or a smart sensor controlling an industrial process, cannot rely on back-end computing for real-time decision making. Networks are unreliable, and a latency of a few milliseconds can mean the difference between a near miss and a catastrophic accident.

Experts generally accept the need for edge computing for real-time decision making, but as those decisions evolve from simple binary “yes” or “no” answers to a semblance of intelligent decision making, many believe that current technology is not adequate.

The reason is not only that advanced data models cannot adequately capture real-world situations, but also that the machine learning approach is incredibly fragile, lacking the adaptability of intelligence found in the natural world.

In December 2020, during the virtual Intel Labs Day event, Mike Davies, director of Intel’s Neuromorphic Computing Laboratory, discussed why he felt that existing approaches to computing require rethinking. “Brains are truly incomparable computing devices,” he said.

The latest autonomous racing drones have onboard processors that draw around 18W of power, yet can barely fly a pre-programmed route at walking pace. “Compare that to the cockatiel, a bird with a tiny brain that consumes about 50 milliwatts of energy,” Davies said.

The bird’s brain weighs just 2.2 grams, compared with the 40 grams of processing hardware the drone has to carry. “With that meager energy budget, the cockatiel can fly at 22mph, forage for food and communicate with other cockatiels,” he said. “They can even learn a small vocabulary of human words. Quantitatively, nature beats computers three to one in all dimensions.”

Trying to outdo the brain has always been a goal of computing, but for Davies and the research team at Intel’s neuromorphic computing lab, much of the immense effort in artificial intelligence is, in a way, missing the point. “Today’s computing architectures are not optimized for that kind of problem,” he said. “The brain in nature has been optimized for millions of years.”

According to Davies, while deep learning is a valuable technology for changing the world of smart edge devices, it is a limited tool. “It solves some types of problems extremely well, but deep learning can only capture a small fraction of the behavior of a natural brain.”

So while deep learning can be used to allow a racing drone to recognize a gate it has to fly through, the way it learns this task is not natural. “The CPU is highly optimized to process data in batch mode,” he said.

“In deep learning, to make a decision, the CPU needs to process vectorized sets of data samples that can be read from disks and memory chips, to match a pattern with something it has already stored,” said Davies. “Not only is the data organized in batches, it must also be distributed evenly. This is not how data is encoded in organisms that have to navigate in real time,” he added.

A brain processes data sample by sample, rather than in batch mode. But it also needs to adapt, which involves memory. “There is a catalog of past history that influences the brain and adaptive feedback loops,” Davies said.
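
To make the contrast concrete, the toy Python sketch below (illustrative only, using made-up functions rather than anything from Intel or a real deep learning framework) compares batch-style inference over a pre-collected, evenly shaped set of samples with sample-by-sample processing, where each new input also feeds back into a small running memory of weights.

```python
import numpy as np

# --- Batch-style inference (simplified) ---
# A fixed weight vector is applied to an evenly shaped batch of samples
# that has already been collected and vectorized; nothing is learned here.
def batch_inference(weights: np.ndarray, batch: np.ndarray) -> np.ndarray:
    # batch has shape (num_samples, num_features)
    return batch @ weights

# --- Sample-by-sample processing with adaptation (toy online model) ---
# Each sample is handled as it arrives, and a feedback signal (the error)
# nudges the weights immediately -- a crude stand-in for the adaptive
# feedback loops Davies describes, not a model of how brains actually learn.
def online_step(weights: np.ndarray, sample: np.ndarray,
                target: float, lr: float = 0.01) -> tuple[float, np.ndarray]:
    prediction = float(sample @ weights)
    error = target - prediction
    weights = weights + lr * error * sample  # local, immediate adaptation
    return prediction, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=4)

    # Batch mode: all samples at once, uniform shape required.
    batch = rng.normal(size=(8, 4))
    print(batch_inference(w, batch))

    # Online mode: one sample at a time, weights drift as data streams in.
    for _ in range(8):
        x = rng.normal(size=4)
        y = float(x.sum())           # toy target for illustration
        pred, w = online_step(w, x, y)
        print(round(pred, 3))
```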

Decision making at the edge

Intel is exploring how to rethink the architecture of a computer from the transistor up, blurring the distinction between CPU and memory. The goal is a machine that processes data asynchronously across millions of simple processing units working in parallel, mirroring the role of neurons in biological brains.

In 2017, Intel developed Loihi, a 128-core design based on a specialized architecture manufactured using 14nm process technology. The Loihi chip includes 130,000 neurons, each of which can communicate with thousands of others. According to Intel, developers can access and manipulate on-chip resources programmatically through a learning engine built into each of the 128 cores.
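
Loihi itself is programmed through Intel’s own software tools, which are not shown here. The Python sketch below is only a generic leaky integrate-and-fire simulation, written to illustrate the general idea of many simple units exchanging spikes and adapting through a local learning rule; none of the class or parameter names correspond to Intel APIs.

```python
import numpy as np

class LIFPopulation:
    """Toy leaky integrate-and-fire neurons with a simple Hebbian-style
    weight update. Illustrative only; not Intel's Loihi programming model."""

    def __init__(self, n: int, threshold: float = 1.0,
                 decay: float = 0.9, lr: float = 0.005, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.v = np.zeros(n)                       # membrane potentials
        self.w = rng.uniform(0, 0.2, size=(n, n))  # recurrent weights
        np.fill_diagonal(self.w, 0.0)              # no self-connections
        self.threshold, self.decay, self.lr = threshold, decay, lr

    def step(self, external_input: np.ndarray) -> np.ndarray:
        # Integrate external input on top of the decaying membrane potential.
        self.v = self.decay * self.v + external_input
        spikes = (self.v >= self.threshold).astype(float)
        self.v[spikes == 1.0] = 0.0                # reset neurons that fired
        self.v += self.w @ spikes                  # deliver spikes to neighbours
        # Local plasticity: strengthen connections between co-active neurons.
        self.w += self.lr * np.outer(spikes, spikes)
        np.fill_diagonal(self.w, 0.0)
        return spikes

if __name__ == "__main__":
    pop = LIFPopulation(n=16)
    rng = np.random.default_rng(1)
    for t in range(20):
        stimulus = rng.uniform(0, 0.4, size=16)    # noisy input current
        fired = pop.step(stimulus)
        print(t, int(fired.sum()), "neurons fired")
```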

When asked about the application areas of neuromorphic computing, Davies said that it can solve problems similar to those of quantum computing. But while quantum computing is likely to remain a technology that will eventually appear as part of cloud data center computing, Intel has aspirations to develop neuromorphic computing as coprocessor units in edge computing devices. In terms of timescales, Davies said he expects the devices to ship within five years.

As a real-world example, researchers at Intel Labs and Cornell University have shown how Loihi could be used to learn and recognize hazardous chemicals in the open air, based on the architecture of the mammalian olfactory bulb, which gives the brain its sense of smell.

For Davies and other neuromorphic computing researchers, the biggest hurdle is not the hardware, but getting programmers to move beyond a 70-year-old tradition of programming and learn how to program a parallel neuromorphic computer efficiently.

“We are focusing on developers and the community,” he said. “The hard part is rethinking what it means to program when there are thousands of neurons interacting.”
