Reverse-engineering Insect Brains to Create Robots

Emmanuel Daniels

Opteran, a University of Sheffield spin-out, is a British startup with a distinctly different perspective on neuromorphic engineering from most of the industry. The company reverse-engineered how insect brains work in order to develop new navigation and collision-avoidance algorithms that can run on robots. 

Opteran calls its approach to artificial intelligence “natural intelligence,” because the system’s algorithms draw direct biological inspiration. This sets it apart from other computer vision approaches currently in use, which rely primarily on conventional AI/deep learning or on photogrammetry, a technique that infers information about 3D objects, such as their dimensions, from 2D photographs. 

Opteran’s natural intelligence requires neither training nor training data, functioning more like a biological brain. Deep learning as it exists today achieves only narrow artificial intelligence: it can perform precisely defined tasks within a limited environment, such as a computer game. 

Deep learning also demands massive amounts of training data, computation, and power. By closely mimicking what real brains do, Opteran aims to circumvent these limitations and build autonomous robots that can interact with the real world on a strict computation and energy budget. 

Presenting recently at the Embedded Vision Summit, Professor James Marshall, chief scientific officer at Opteran, summed up the company’s mission: “Our purpose is to reverse- or re-engineer nature’s algorithms to create a software brain that enables machines to perceive, behave, and adapt more like natural creatures.” 

“Imitating the brain to develop artificial intelligence is an old idea, dating back to Alan Turing,” he said. “On the other hand, deep learning is based on a cartoon of a tiny part of the primate brain’s visual cortex, ignoring the vast complexity of a real brain… Contemporary neuroscience methods are increasingly being used to provide the data required for us to accurately reverse engineer the process by which real brains solve the problem of autonomy.” 

To successfully reverse-engineer brains, researchers must study animal behavior, neuroscience, and anatomy simultaneously. Opteran’s research has focused on honeybee brains, which coordinate complex behaviors while remaining simple enough to study. Honeybees can fly up to 11 kilometers and accurately relay the information in their mental maps to other bees, all with a pinhead-sized brain that contains fewer than a million neurons and uses energy extremely efficiently. 

Opteran has successfully reverse-engineered the algorithm honeybees use to estimate optical flow (the apparent motion of objects in a scene caused by the relative motion of the observer). Running on a small FPGA, this algorithm performs optical flow processing at a rate of 10 kHz while consuming less than one watt of power. 
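
To make the term concrete, here is a minimal illustration of optical flow estimation in Python using OpenCV’s classic Farneback method. This is a generic textbook technique, not Opteran’s algorithm; the synthetic frames and parameter values are purely illustrative. A bright square shifts three pixels to the right between two frames, and the estimated flow field recovers that motion:

    # Illustration only: classic Farneback optical flow, not Opteran's method.
    import cv2
    import numpy as np

    frame1 = np.zeros((64, 64), dtype=np.uint8)
    frame2 = np.zeros((64, 64), dtype=np.uint8)
    frame1[20:40, 20:40] = 255        # bright square at x = 20
    frame2[20:40, 23:43] = 255        # same square, shifted 3 px right

    # flow[y, x] = (dx, dy): the apparent per-pixel motion between frames
    flow = cv2.calcOpticalFlowFarneback(
        frame1, frame2, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    dx = flow[20:40, 20:40, 0].mean() # average x-motion over the square
    print(f"estimated horizontal motion: {dx:.1f} px (expected ~3)")

This per-pixel motion field is what a bee, or a robot, can use to judge how fast the world is sliding past, and therefore how close obstacles are.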

According to Marshall, “this performance outperforms the current state of the art in deep learning by orders of magnitude in all dimensions,” including robustness, power, and speed. 

Algorithms in Biology

A mathematical model of biological motion detection, known as the Hassenstein-Reichardt detector, was developed in the 1960s, with insect brain experiments as the primary data source. The model has since been validated using a variety of experimental approaches. 

According to this model, the brain receives signals from two adjacent receptors in the eye. One receptor’s signal is delayed briefly. When both signals arrive at a downstream neuron at the same time, that neuron fires, indicating that the object being viewed is moving. Repeating this process with the delay on the other signal ensures that the detector works regardless of the direction in which the object moves (hence the symmetry in the model). 
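
The model is simple enough to sketch in a few lines of Python. The toy implementation below is built directly from the description above: a delay on each arm, a multiplication, and a subtraction of the mirrored arm for direction selectivity. The signal names, delay, and synthetic stimulus are invented for illustration:

    import numpy as np

    def hassenstein_reichardt(s1, s2, delay):
        """Elementary motion detector over two adjacent receptor signals.
        Positive output indicates motion from receptor 1 toward receptor 2,
        negative output the opposite direction."""
        d1 = np.roll(s1, delay)   # delayed copy of receptor 1's signal
        d2 = np.roll(s2, delay)   # delayed copy of receptor 2 (mirror arm)
        d1[:delay] = 0.0          # drop samples that wrapped around
        d2[:delay] = 0.0
        # Correlate each delayed signal with the other receptor's direct
        # signal; subtracting the mirrored arm gives direction selectivity.
        return d1 * s2 - s1 * d2

    # A bright edge sweeping from receptor 1 to receptor 2: receptor 2
    # sees the same stimulus a few samples later.
    t = np.arange(200)
    stimulus = np.exp(-0.5 * ((t - 80) / 5.0) ** 2)
    s1 = stimulus
    s2 = np.roll(stimulus, 6)
    out = hassenstein_reichardt(s1, s2, delay=6)
    print("mean response:", out.mean())  # positive => motion toward receptor 2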

Marshall explained in his presentation that, while adequate for modeling motion detection in fruit flies, the Hassenstein-Reichardt detector is highly sensitive to spatial frequency (the pattern of dark and light in an image) and to contrast, and thus is not a good fit for generalized visual navigation. 

“Honeybees do something cleverer,” according to Marshall, “which is a novel arrangement of these elementary units. Honeybee flying behavior is highly resistant to changes in spatial frequency and contrast, implying that another factor is at work.” 

Using honeybee behavioral and neuroscientific data, Opteran created its visual-inertial odometry estimator and collision-avoidance algorithm. Compared against FlowNet2s, considered a state-of-the-art deep learning algorithm at the time, Opteran’s algorithm proved superior in theoretical accuracy and noise robustness. Marshall points out that a deep learning implementation would also require GPU acceleration, which carries a power penalty. 

In the Real World of Robots 

It’s an intriguing idea, but how does it hold up in practice? Opteran has been putting its algorithms to work in real-world robotics. The company has built a robot dog demo called Hopper, with a form factor similar to Boston Dynamics’ Spot. Hopper uses a vision-only edge solution based on Opteran’s collision prediction and avoidance algorithm: when a potential collision is detected, a simple controller turns the robot away from the threat. 
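
The control logic described here is deliberately simple, and a sketch of it fits in a few lines. Everything below is hypothetical, since Opteran’s APIs are not public; it only illustrates the “turn away on predicted collision” behavior:

    # Hypothetical sketch of Hopper-style reactive avoidance; all names
    # and values are invented for illustration.
    FORWARD_SPEED = 0.3   # m/s cruising speed (illustrative)
    TURN_RATE = 0.8       # rad/s applied when a collision is predicted

    class DemoRobot:
        """Stand-in for a real robot interface."""
        def predict_collision(self, frame):
            # A real system would run the vision-only collision predictor
            # here; we fake a threat on the left for the demo.
            return True, -1   # (threat detected, side: -1 left / +1 right)

        def drive(self, forward, turn):
            print(f"drive forward={forward:.2f} m/s, turn={turn:.2f} rad/s")

    def control_step(robot, frame):
        """One iteration of the vision-only avoidance loop."""
        threat, side = robot.predict_collision(frame)
        if threat:
            # Slow down and steer away from the side the threat is on.
            robot.drive(forward=FORWARD_SPEED * 0.5, turn=-side * TURN_RATE)
        else:
            robot.drive(forward=FORWARD_SPEED, turn=0.0)

    control_step(DemoRobot(), frame=None)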

Opteran is also working on a 3D navigation algorithm inspired by honeybees. The solution will be similar to today’s SLAM (simultaneous localization and mapping) algorithms, but it will also handle semantics and path planning. Marshall claimed that, running on the same hardware, it would consume only a fraction of a watt. 

“Another significant saving from this approach is the size of the map generated,” he explained. “Traditional photogrammetry-based SLAM generates maps on the order of hundreds of megabytes to gigabytes per square meter, which presents significant challenges when mapping large areas.” 

Hardware and Software 

Opteran’s development kit uses a small Xilinx ZynqBerry FPGA module that weighs less than 30 g and consumes less than 3 W. The system requires two cameras. The development kit uses Raspberry Pi cameras, which cost only $20 each, but as the product matures Opteran will work with original equipment manufacturers (OEMs) to calibrate its algorithms for other types of cameras. 

In its current iteration, Opteran’s FPGA runs the company’s omnidirectional optical flow processing and collision prediction algorithms simultaneously. Marshall believes future hardware may migrate to larger FPGAs or GPUs if the need arises. 

The company is building up a software stack for robotics applications. An electronically stabilized panoramic vision system forms the base; collision avoidance and navigation are layered on top. A decision engine, due in 2023, is being developed to let a robot choose where it should go and under what conditions. 
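
One way to picture how such a stack might compose is the sketch below. Every class and method name is hypothetical and does not come from Opteran; it only illustrates the layering just described, with avoidance overriding navigation for safety:

    # Hypothetical layering of a vision -> avoidance -> navigation ->
    # decision stack; none of these names are Opteran's.
    class StabilizedVision:
        """Layer 1: electronically stabilized panoramic camera feed."""
        def next_frame(self):
            raise NotImplementedError   # would return a stabilized panorama

    class CollisionAvoidance:
        """Layer 2: predicts looming collisions from the vision feed."""
        def steer_command(self, frame):
            raise NotImplementedError   # turn command, or None if clear

    class Navigation:
        """Layer 3: SLAM-like mapping, semantics, and path planning."""
        def heading_to_goal(self, frame):
            raise NotImplementedError   # desired heading toward the goal

    class DecisionEngine:
        """Layer 4 (due 2023): chooses where to go and when."""
        def arbitrate(self, avoid_cmd, nav_cmd):
            # Safety first: an avoidance command overrides navigation.
            return avoid_cmd if avoid_cmd is not None else nav_cmd

    def control_loop(vision, avoidance, navigation, decision, robot):
        frame = vision.next_frame()
        cmd = decision.arbitrate(avoidance.steer_command(frame),
                                 navigation.heading_to_goal(frame))
        robot.execute(cmd)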

Further out are social, causal, and abstract engines, which will let robots interact with one another, infer causal structures in real-world environments, and abstract general principles from previously encountered situations. None of these engines will use rule-based or deep learning systems; all will be based on biology. 

Opteran closed a $12 million funding round last month. The funding will be used to commercialize the company’s natural intelligence approach and to develop the remaining algorithms in its stack. Its stabilized vision, collision avoidance, and navigation capabilities have already been used in customer pilots involving cobot arms, drones, and mining robots. 

According to Marshall, potential future research directions could include studying other animals with more complex brains.
