Myth #5: AnalogML is just for machine learning applications


Aspinity’s analogML™ core includes a tiny, low-power analog neural network for inferencing, which is why it’s so often grouped with other tinyML processors. But the truth is that it’s much more than that.

Built on the company’s proprietary RAMP™ (Reconfigurable Analog Modular Processor) technology—a novel analog processing platform that brings the versatility, performance, and consistency of digital computing into the analog domain—the analogML core also includes software-programmable analog blocks that provide sensor interfacing, feature extraction, data compression, and other analog functionality.

[Figure: Block diagram of the analogML core]

Today’s battery-operated, always-on endpoints continuously sense, digitally process, and transmit huge amounts of data to the cloud or another central processing location. These same endpoints can leverage many of the analogML core’s computing capabilities to extend battery life, shrink size and cost, and significantly reduce the burden on cloud processing.

Predictive maintenance applications, such as vibration analysis of factory-floor machinery, are a good example. Today, these always-on predictive-maintenance applications first collect sensor data from multiple sensor nodes (the endpoints) on each piece of equipment, then digitize, process, store, and transmit all of that data to the cloud for FFT and other higher-power analyses. These sensor nodes require large amounts of memory, the capability to transmit significant amounts of data, and a battery large enough to power the digitization, processing, and transmission of all that data without frequent recharging. That’s a tall order, and one not easily filled. The analogML core, however, offers numerous power and data optimization benefits for this class of applications, even without tapping into its neural network for inferencing.
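To put rough numbers on that burden, here is a minimal sketch of how much raw data a conventional always-on vibration node produces and the kind of cloud-side FFT analysis it feeds. The 10 kHz sample rate, 16-bit resolution, and 3-axis accelerometer are illustrative assumptions, not Aspinity specifications.

```python
# Back-of-the-envelope sketch of the data burden in a conventional
# always-on vibration node. Sample rate, resolution, and axis count
# are illustrative assumptions only.
import numpy as np

SAMPLE_RATE_HZ = 10_000      # assumed accelerometer sample rate
BITS_PER_SAMPLE = 16         # assumed ADC resolution
AXES = 3                     # assumed 3-axis accelerometer

bytes_per_day = SAMPLE_RATE_HZ * (BITS_PER_SAMPLE // 8) * AXES * 86_400
print(f"Raw data per node per day: {bytes_per_day / 1e9:.1f} GB")

# The cloud-side FFT analysis in its simplest form: transform a one-second
# window of (simulated) vibration data to find the dominant frequency.
t = np.arange(SAMPLE_RATE_HZ) / SAMPLE_RATE_HZ
window = np.sin(2 * np.pi * 120 * t) + 0.05 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(window))
freqs = np.fft.rfftfreq(window.size, d=1 / SAMPLE_RATE_HZ)
print(f"Dominant vibration frequency: {freqs[np.argmax(spectrum[1:]) + 1]:.0f} Hz")
```

So how does the analogML core do that?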

In addition to inferencing, the analogML core:

  • Provides direct interfacing to multiple types of analog sensors as well as pre-processing of raw analog data directly from the sensors, e.g., filtering and amplification, offloading essential calculations from the MCU.
  • Promotes higher levels of data analysis by extracting important features from the analog sensor data and calculating more usable information, such as crest factor or peak velocity (see the sketch after this list).
  • Delivers ultra-low-power analog pre-processing and feature extraction, reducing the amount of data requiring digitization, processing, storage, and transmission. This translates into extended battery life (or a smaller battery) and can eliminate or reduce the functional requirements of other system components (e.g., processing, memory, wireless capabilities), which in turn shrinks the size and cost of the sensor nodes.
  • Keeps downstream digital components asleep until it sends a wake-up trigger, extending battery life even further.

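As a concrete illustration of those features, here is a minimal sketch of crest factor and peak velocity. It is written in Python purely to show the math; on the analogML core the equivalent feature extraction is performed in the analog domain, and the window length and sample rate below are illustrative assumptions.

```python
# Minimal sketch of two common vibration features. Shown in Python only to
# illustrate the math behind the features the analogML core can extract.
import numpy as np

def crest_factor(accel: np.ndarray) -> float:
    """Peak acceleration divided by RMS acceleration."""
    rms = np.sqrt(np.mean(accel ** 2))
    return float(np.max(np.abs(accel)) / rms)

def peak_velocity(accel: np.ndarray, sample_rate_hz: float) -> float:
    """Peak of the velocity obtained by integrating (mean-removed) acceleration."""
    velocity = np.cumsum(accel - np.mean(accel)) / sample_rate_hz
    return float(np.max(np.abs(velocity)))

# Example: a one-second, 10 kHz window collapses into two scalars instead of
# 10,000 raw samples, which is the data reduction described above.
fs = 10_000
t = np.arange(fs) / fs
accel = np.sin(2 * np.pi * 120 * t) + 0.2 * np.random.randn(t.size)
print(crest_factor(accel), peak_velocity(accel, fs))
```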
 

And machine learning, too

Of course, predictive maintenance applications can also leverage the full capabilities of the analogML core by using the neural network to detect early failure warning signs and alert operators to changes that indicate an upcoming machine fault. Integrating this functionality keeps the entire digital system off unless an imminent failure is detected, leading to the longest battery life, smallest size, and lowest-cost solution for the sensor nodes.
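Conceptually, that wake-on-fault behavior looks like the sketch below: a small network scores each window of extracted features, and only a score above threshold wakes the digital system. The network shape, weights, and threshold are placeholders standing in for a model trained on real machine data, and the Python is only a stand-in for what the chip does in analog hardware.

```python
# Conceptual sketch of wake-on-anomaly: score each feature window with a tiny
# network and wake the digital system only when a fault precursor is indicated.
import numpy as np

# Hypothetical 2-feature -> 4-unit -> 1-output network; random placeholder
# weights, not a trained model.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 2)), np.zeros(4)
W2, b2 = rng.standard_normal((1, 4)), np.zeros(1)

WAKE_THRESHOLD = 0.8  # placeholder decision threshold

def fault_score(features: np.ndarray) -> float:
    """Score one window of features (e.g., crest factor, peak velocity)."""
    hidden = np.tanh(W1 @ features + b1)
    logit = (W2 @ hidden + b2).item()
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid: probability-like score

def should_wake(crest: float, peak_vel: float) -> bool:
    """True only when inference indicates an imminent fault, so the
    downstream digital system can stay asleep the rest of the time."""
    return fault_score(np.array([crest, peak_vel])) > WAKE_THRESHOLD

print(should_wake(crest=1.4, peak_vel=0.002))
```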

Regardless of how the analogML core is implemented in a predictive maintenance system or other always-on sensing application, it’s so much more than a traditional machine learning processor: it’s a unique, software-programmable analog processing chip that offers inferencing as just one of many analog processing capabilities that together reduce power, size, and cost.