Our Technology

Aspinity’s AnalogML™ IP delivers the most efficient accelerator architecture for AI computing.

How AnalogML™ compares with digital accelerators and analog in-memory computing (IMC):

Data Movement
  • Digital Accelerator: High. Continual movement of instructions, parameters, and intermediate data to/from cache.
  • Analog IMC: Low. Parameters are embedded, but there is still some movement of instructions and intermediate data to/from cache.
  • Aspinity AnalogML™: Low. Parameters and configuration (i.e., instructions) are embedded; signals stream through.

Parameter Precision
  • Digital Accelerator: High. High-precision datatypes are often supported.
  • Analog IMC: Low-Medium. Limited to ≤ 8 bits of precision, and much less in many implementations.
  • Aspinity AnalogML™: High. Proprietary analog memory enables 10+ bits of precision.

MAC Precision
  • Digital Accelerator: High. High-precision datatypes are often supported.
  • Analog IMC: Low. "Memory as compute" precision is limited by linearity, noise, and mismatch.
  • Aspinity AnalogML™: High. Memory co-located with compute allows linear multiplication circuits to be used.

MAC Efficiency
  • Digital Accelerator: Low. Requires thousands of transistors and the transfer of instructions and operands from memory.
  • Analog IMC: Medium. Efficient multiplication in analog, but the input vector is still fetched from memory.
  • Aspinity AnalogML™: High. Efficient multiplication in analog.

NN Efficiency
  • Digital Accelerator: Low. Data movement to/from memory.
  • Analog IMC: Medium. The multiply is in memory, but the activation requires digital circuitry.
  • Aspinity AnalogML™: High. MAC and activation are all within analog circuitry.

Robustness to Temperature & Manufacturing Variation
  • Digital Accelerator: High. Limited issues with digital circuitry.
  • Analog IMC: Low. Challenged by long analog signal chains.
  • Aspinity AnalogML™: High. A silicon-proven, dynamic approach to on-the-fly trimming.
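Every row above centers on the core neural-network operation: a multiply-accumulate (MAC) followed by an activation. For a layer with inputs x_i, weights w_ji, bias b_j, and activation function f, the standard form is:

\[
  y_j = f\!\left(\sum_{i=1}^{N} w_{ji}\, x_i + b_j\right)
\]

A digital accelerator fetches each weight and operand from memory to evaluate the sum; in AnalogML the weights are held in analog memory inside the multiplication circuits, and both the sum and f are evaluated in analog, so no intermediate value is written back to memory.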

Precision Analog Memory

  • Aspinity's patented analog memory is co-located within the compute circuits
  • 10+ bits of precision for storing parameters (weights, etc.)
  • Permanent and accurate parameter storage (no memory fetch)
  • Weight quantization is not required to save power (see the quantization sketch below)
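To see why 10+ bits of parameter precision matters, here is a minimal Python sketch (not Aspinity code; the uniform quantizer and the [-1, 1] weight range are illustrative assumptions) showing how stored-weight error shrinks as precision grows:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.uniform(-1.0, 1.0, size=10_000)  # stand-in for trained weights

def quantize(w, bits):
    """Uniformly quantize values in [-1, 1] to 2**bits levels."""
    step = 2.0 / (2 ** bits - 1)
    return np.round(w / step) * step

for bits in (4, 8, 10):
    err = np.abs(weights - quantize(weights, bits)).mean()
    print(f"{bits:2d}-bit storage: mean abs weight error = {err:.6f}")
```

Each additional bit roughly halves the error, which is why lower-precision designs must quantize weights to save memory and power, while 10+ bit analog memory stores parameters accurately without that trade-off.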

High Performance Acceleration

Most efficient MAC and NN with AnalogML

  • A fully analog representation is inherently more efficient than digital.
  • Analog multiply-accumulate functions use fewer than 10 transistors (compared to thousands in digital).
  • An all-analog NN layer includes the MAC and activation functions, with no digital conversions required (see the layer sketch below).
  • Memory-at-compute eliminates the time and energy spent fetching data.
  • Intermediate values are not stored in memory; all signals are streamed layer-to-layer as analog signals.
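As a mental model for the all-analog layer described above, here is a short Python sketch (a conceptual stand-in, not device code; the ReLU activation and the layer sizes are illustrative assumptions) of what one layer computes:

```python
import numpy as np

def analog_layer(x, W, b):
    """One NN layer: a MAC (W @ x + b) followed by an activation.
    In AnalogML both steps happen inside analog circuitry, so the
    pre-activation sum is never digitized or written to memory."""
    return np.maximum(W @ x + b, 0.0)  # ReLU used as an illustrative activation

# Signals stream layer-to-layer; intermediate vectors exist only transiently.
x = np.array([0.2, -0.5, 0.9])
W1, b1 = np.array([[0.1, -0.3, 0.5], [0.7, 0.2, -0.4]]), np.zeros(2)
W2, b2 = np.array([[0.6, -0.8]]), np.zeros(1)
y = analog_layer(analog_layer(x, W1, b1), W2, b2)
print(y)
```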

AnalogML delivers robust, accurate immunity to analog variability

  • Dynamic, software-driven solution for on-the-fly variation trimming (see the calibration sketch below)
  • Leverages high-precision analog memory for localized fine-tuning/trimming
  • Not susceptible to environmental or manufacturing variations
  • A repeatable, consistent analog computing platform that can be scaled to larger neural networks and smaller process nodes to meet the needs of next-generation AI computing
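The trimming idea can be sketched as a simple feedback loop in Python (a toy model, not Aspinity's firmware; read_output and the offset value are illustrative assumptions): measure how far a cell's output drifts from its target, then fold a fraction of that error back into a trim value held in the high-precision analog memory.

```python
def calibrate(read_output, target, gain=0.5, steps=12):
    """Feedback trim loop: nudge the stored trim value until the
    measured output matches the target, compensating drift from
    temperature or manufacturing variation."""
    trim = 0.0
    for _ in range(steps):
        error = read_output(trim) - target
        trim += gain * error  # fold part of the error into the trim value
    return trim

# Toy stand-in for a drifted analog cell: output = ideal + offset - trim
offset = 0.07  # assumed process/temperature offset
trim = calibrate(lambda t: 1.0 + offset - t, target=1.0)
print(f"converged trim = {trim:.4f}")  # approaches the 0.07 offset
```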

Modern, easy-to-use, Python-based training and execution

  • Integration with common ML frameworks
  • Simple and quick compilation to AnalogML hardware (see the workflow sketch below)
  • Supports CNN, RNN, DNN, and other model types
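As an illustration of this flow, the sketch below trains a tiny network in PyTorch, one of the common ML frameworks; the final compile call is a hypothetical placeholder, since the actual AnalogML toolchain API is not shown here.

```python
import torch
import torch.nn as nn

# Train a small model in a standard ML framework
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 16)         # toy training data
y = torch.randint(0, 2, (256,))  # toy labels

for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Hypothetical compilation step (illustrative name only, not a real API):
# analogml.compile(model, target="analogml-hw")
```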