There has been more and more buzz around analog computing these days – and for good reason. It has long been recognized that digital circuitry is the better choice for high-resolution processing with many bits of precision. It turns out, however, that the converse is also true: if an application doesn't absolutely need very high resolution, then analog implementations of compute functions are far more efficient than digital ones.
A particularly good example of analog efficiency is the multiply-accumulate (MAC) function, a critical and dominant component of the neural networks used in machine learning (ML) applications. A MAC requires many transistors in a digital implementation, but only a few in analog – and at dramatically lower power. Using analog as a replacement for digital MAC functions is commonly known as analog in-memory computing. And while adopting analog components for the core MAC operation is expected to reduce power consumption compared with all-digital counterparts, these chips are otherwise clocked processors that still operate within the traditional digital paradigm: they interface with digital sensors and draw an always-on system current in the 1-10 mA range.
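To make concrete what the MAC function actually computes, here is a minimal sketch (illustrative only, not any vendor's implementation): each neural-network output is a running sum of input-times-weight products, and it is this simple operation, repeated millions of times, that dominates ML compute.

```python
def mac(inputs, weights, bias=0.0):
    """Multiply-accumulate: the core operation of every neural-network layer.

    A digital circuit needs many transistors per multiplier to compute this;
    analog circuits can form the same sum of products physically with just a few.
    """
    acc = bias
    for x, w in zip(inputs, weights):
        acc += x * w  # one MAC operation per input/weight pair
    return acc

# A dense layer with N inputs and M outputs performs N*M of these MACs.
print(mac([1.0, 2.0, 3.0], [0.5, -1.0, 2.0]))  # 0.5 - 2.0 + 6.0 = 4.5
```

Because the operation itself is so simple and so heavily repeated, even a small per-MAC power saving compounds into a large system-level one.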
Aspinity’s analogML™ architecture, on the other hand, takes the idea of leveraging ultra-low-power analog computing to a new level – to the entire AI system, in fact – enabling what we call power-intelligent devices: devices that focus their power only on what’s important and ignore the rest. AnalogML is an analog machine learning core that interfaces to analog sensors and performs signal conditioning, inferencing, and classification on natively analog sensor data – a completely analog system front end. AnalogML delivers the lowest always-on AI system power not just because the processor itself consumes near-zero power for inferencing, but also because early inferencing in the analog domain allows the MCU and other higher-power digital chips to remain off unless something important is detected (a glass break, a voice, a vibration anomaly, etc.). The result is an astounding reduction in always-on system power, to less than 100 μW.
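The analyze-first power model described above can be sketched in a few lines of pseudocode-style Python. All names here are hypothetical illustrations of the concept, not Aspinity's API: an always-on analog front end inspects raw sensor data continuously, and the digital system is powered up only when an event of interest is detected.

```python
def always_on_loop(analog_classifier, wake_mcu, sensor_stream):
    """Illustrative wake-on-event loop (hypothetical names, not a real API).

    analog_classifier: the always-on, near-zero-power analog inference stage.
    wake_mcu: powers up the MCU and other digital chips to handle the event.
    sensor_stream: raw, natively analog sensor data, frame by frame.
    """
    for frame in sensor_stream:
        if analog_classifier(frame):  # e.g. glass break, voice, vibration anomaly
            wake_mcu(frame)           # only now does the digital system draw power
        # otherwise the MCU and other digital chips stay off
```

The system-level saving comes from where the decision is made: because the classification happens before any digital component wakes, the duty cycle of the high-power digital chain shrinks to only the moments that matter.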
The bottom line is that the term ‘analog computing’ means different things to different people, especially when talking about the integration of ML functionality into endpoints. To some, analog computing means replacing a very small percentage of digital circuitry with analog to reduce the power consumption of ML accelerators. To Aspinity, analog computing means using analog sensors to interface to a fully analog machine learning core that keeps downstream components off unless they are needed, delivering the lowest always-on AI system power – power intelligence at its best.