RAMP™ Technology Platform

An innovative near-zero power analog computing paradigm

RAMP Neural Processing Technology

Aspinity's RAMP (Reconfigurable Analog Modular Processor) technology is an analog processing platform that brings the versatility, performance, and consistency of digital neural processing systems into the lower-power analog domain, delivering the increasing levels of efficiency required to implement sophisticated AI at the edge.

While it has long been recognized that analog can enable much more power-efficient neural networks and AI systems, implementation in deployable AI products has been stymied by the challenges typically associated with low-power analog circuit design: susceptibility to manufacturing and environmental variation, the need for different fixed-function analog circuits, and the lack of high-precision analog memory that can be implemented at the compute elements.

RAMP enables a new level of power efficiency for edge applications such as always-on sensing, generative AI, and myriad others, with an analog neural processing technology that delivers:
  • Near-zero power analog inferencing
  • Scalability in network size and technology node
  • High-volume, repeatable manufacturability
  • Flexibility and programmability

Key Characteristics of RAMP

  • Configurable analog blocks (CABs): RAMP technology comprises parallel, independent analog circuit blocks that operate in the subthreshold domain. Each block is implemented in a very small footprint and is independently powered only when it is needed for a specific task.
  • Complex decision-making capability: RAMP technology leverages non-linear analog circuitry to improve the performance of typical analog tasks, make decisions, and classify incoming sensor information.
  • High-precision analog memory at compute: Aspinity’s patented 10-bit analog non-volatile memory (NVM) is implemented in standard CMOS with no process add-ons and sits alongside the analog computing elements in each CAB. It can store neural network weights as well as biases and activations for other compute circuits. Additionally, it can store the high-precision values needed to finely trim out variations in analog circuit performance that arise from environmental conditions or the CMOS manufacturing process.
  • Software programmability: The complete functionality of a RAMP-based chip can be abstracted from hardware into software, enabling a flexible analog platform in which all aspects of the chip (connections of circuit blocks, parameters, etc.) can be programmed in software and stored in on-chip memory (see the illustrative sketch following this list).
  • Repeatability and predictability: The programmability of RAMP-based chips allows software ‘trimming’ to offset the process variations of standard CMOS technology during high-volume manufacturing, eliminating the challenges associated with implementing large-scale analog neuromorphic computing platforms.
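To make this programming model concrete, the short Python sketch below shows one way such a software-defined analog configuration could be expressed: circuit blocks, their parameters, and their interconnections are described as data and serialized for storage in on-chip memory. All class, field, and block names here are illustrative assumptions for explanation only, not Aspinity's actual RAMP tooling, API, or configuration format.

# Hypothetical illustration only -- not Aspinity's RAMP SDK or configuration format.
# It sketches the general idea: an analog chip's block types, parameters, and
# interconnections are captured as software-defined data that can be stored
# in on-chip memory.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AnalogBlock:
    """One configurable analog block (CAB) in the signal chain."""
    name: str
    kind: str                                   # e.g. "bandpass_filter", "neural_layer"
    params: dict = field(default_factory=dict)  # values that would live in analog NVM

@dataclass
class ChipConfig:
    """Software description of the complete analog signal chain."""
    blocks: list = field(default_factory=list)
    routes: list = field(default_factory=list)  # (source, destination) block pairs

    def connect(self, src: str, dst: str) -> None:
        self.routes.append((src, dst))

    def to_bitstream(self) -> bytes:
        """Serialize the configuration for storage in on-chip memory."""
        return json.dumps(asdict(self), sort_keys=True).encode()

# Example: a tiny always-on audio front end feeding a small classifier layer.
cfg = ChipConfig()
cfg.blocks.append(AnalogBlock("bpf1", "bandpass_filter", {"f_low_hz": 100, "f_high_hz": 4000}))
cfg.blocks.append(AnalogBlock("nn1", "neural_layer", {"inputs": 8, "outputs": 4}))
cfg.connect("bpf1", "nn1")
print(len(cfg.to_bitstream()), "bytes of configuration data")

Because the configuration lives entirely in software, it could be versioned, validated, or regenerated off-chip before being loaded, which is the practical benefit of abstracting the analog hardware into software.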

Applications

Near-zero power neural acceleration:
The combination of ultra-low power analog circuits that are (1) co-located with, but separate from, high-precision analog NVM, (2) immune to manufacturing and environmental variation, and (3) fully software programmable and configurable opens the door for RAMP technology to deliver a level of AI compute efficiency that today’s technology does not address.

RAMP analog neural processing is extremely flexible and can:
  • Be applied to a variety of NN topologies (CNNs, RNNs, etc.)
  • Scale up in NN size (number of parameters)
  • Scale down in CMOS technology node (to 40 nm or beyond)

As a result, RAMP technology is uniquely positioned to scale to 150+ TOPS/W, addressing the need for highly efficient NPUs (neural processing units) in mobile device chipsets and enabling the rapid growth of generative AI applications on mobile devices.

Near-zero power always-on edge sensing:
Because RAMP enables myriad functions in analog beyond inferencing, it is an extremely effective solution for ultra-low power always-on sensing at the edge. This functionality has been proven in silicon with Aspinity’s first product, the AML100 analogML (analog machine learning) processor. The AML100 is a fully analog processing platform that uses just tens of microamps to process and classify raw, unstructured sensor data in the analog domain, keeping the MCU or other downstream digital processors asleep unless a relevant event is detected. The AML100 reduces always-on AI system power by up to 100x, delivering a 20x or more improvement in battery life and dramatically reducing the amount of data transmitted to the cloud.
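As a rough, back-of-the-envelope illustration of what that kind of reduction can mean for battery life, the calculation below uses assumed figures (a 1,000 mAh battery, a 5 mA always-on digital baseline, and a 50 µA analog-first system); these are illustrative assumptions, not measured AML100 specifications.

# Back-of-the-envelope illustration using assumed numbers -- not AML100 measurements.
BATTERY_MAH = 1000.0        # assumed battery capacity (mAh)
ALWAYS_ON_DIGITAL_MA = 5.0  # assumed current of an always-on digital system (mA)
ANALOG_FIRST_MA = 0.05      # assumed current of an analog-first system (tens of µA)

def battery_life_days(capacity_mah: float, current_ma: float) -> float:
    """Ideal battery life in days, ignoring self-discharge and wake-up events."""
    return capacity_mah / current_ma / 24.0

print(f"Always-on digital: {battery_life_days(BATTERY_MAH, ALWAYS_ON_DIGITAL_MA):.0f} days")
print(f"Analog-first:      {battery_life_days(BATTERY_MAH, ANALOG_FIRST_MA):.0f} days")
# Under these assumptions the always-on current drops 100x and idle battery life
# scales proportionally; real systems typically see smaller gains (e.g. ~20x)
# because downstream processors still wake to handle detected events.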

Learn more about the AML100
 

Request More Information

Contact us for more information and to discuss how RAMP technology can improve the power and data efficiency of your device.