Neuromorphic processors operate within strict power and area constraints while executing complex neural network workloads. Current designs demonstrate power efficiency of 2-5 TOPS/W for mixed-precision operations, but face memory bandwidth limitations when processing large models. On-chip memory—typically limited to 10-20MB—creates bottlenecks during data-intensive operations that require frequent off-chip memory access, diminishing the theoretical computational advantages these architectures promise.

The fundamental challenge in neuromorphic computing lies in balancing computational density against energy efficiency while maintaining the flexibility needed for diverse neural network topologies and learning paradigms.

This page brings together solutions from recent research—including resistive memory grids for parallel voltage multiplication, specialized circuits that eliminate ADC requirements between network layers, dynamic data processing mapping architectures, and resource-based spike encoding for distributed neuromorphic systems. These and other approaches demonstrate how hardware designers are addressing the specific computational patterns of neural networks while minimizing the energy and latency costs that traditional von Neumann architectures impose.

1. Neural Network Processor with Fetch Unit Utilizing Dynamic Data Processing Mapping Tables

FURIOSAAI CO, 2024

A neural network processor that accelerates deep learning computations through optimized data routing and processing. The processor employs a fetch unit with multiple routers, each with a data processing mapping table that determines how input data is processed based on node identifiers. A fetch network controller dynamically rebuilds these tables to create a software topology that matches the specific calculation requirements of the neural network, enabling efficient reuse of data patterns and minimizing memory accesses.
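The table-driven routing idea can be sketched in a few lines. The class names, the action labels, and the topology format below are illustrative assumptions for exposition, not FuriosaAI's actual design:

```python
# Hypothetical sketch: each router consults a rebuildable mapping table
# keyed by node identifier; a controller rewrites the tables to impose a
# software topology matching the current network's computation pattern.
class Router:
    def __init__(self):
        self.mapping_table = {}  # node_id -> action ("reuse", "fetch", ...)

    def route(self, node_id, data):
        # Default to fetching from memory when no table entry exists.
        action = self.mapping_table.get(node_id, "fetch")
        return (action, data)

class FetchNetworkController:
    def __init__(self, routers):
        self.routers = routers

    def rebuild(self, topology):
        # topology: {router_index: {node_id: action}}; dynamically
        # rebuilding the tables retargets data reuse without new hardware.
        for idx, table in topology.items():
            self.routers[idx].mapping_table = dict(table)

routers = [Router(), Router()]
ctrl = FetchNetworkController(routers)
ctrl.rebuild({0: {7: "reuse"}, 1: {7: "fetch"}})
print(routers[0].route(7, "tile-A"))  # ('reuse', 'tile-A')
```

The point of the sketch is that a node identifier, not a fixed wire, decides whether data is re-fetched or reused, so memory accesses drop when the table marks a pattern as reusable.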

2. Neural Network Interface Circuit with Integrated Signal Processing and Feedback-Controlled Comparator

TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY LTD, 2024

A neural network interface circuit that eliminates the need for analog-to-digital converters (ADCs) between successive layers, reducing chip area and power consumption. The interface circuit integrates signals from memory cells, generates intermediate voltages, and drives analog voltages to subsequent layers through a feedback-controlled comparator circuit.

3. Flexible Artificial Ag NPs:a–SiC0.11:H Synapse on Al Foil with High Uniformity and On/Off Ratio for Neuromorphic Computing

Zongyan Zuo, Chengfeng Zhou, Zhongyuan Ma - MDPI AG, 2024

A neuromorphic computing network based on SiC

4. Neuromorphic Computing and Its Application

Tejasvini Thakral, Lucky Lamba, Manjeet Singh - Wiley, 2024

Neuromorphic computing is a rapidly developing field that seeks to emulate the neural structure and function of the human brain using hardware and software technologies. In recent years, the development of neuromorphic computing has been fueled by advancements in semiconductor technology and the need for more efficient and intelligent computing systems. This review chapter provides an overview of the state of the art in neuromorphic computing, including the principles and concepts that underlie the technology, its key applications, and the challenges and opportunities that lie ahead. The chapter discusses the potential of neuromorphic computing to enable a wide range of applications, including sensory processing, robotics, machine learning, and cognitive computing, and examines key challenges associated with developing neuromorphic computing systems, including scalability, power consumption, and programming models. Overall, the chapter provides a comprehensive overview of the current state of the art in neuromorphic computing.

5. Analog Neuromorphic Circuit with Resistive Memory Grid for Parallel Voltage Multiplication and Current Summation

UNIVERSITY OF DAYTON, 2024

Analog neuromorphic circuit that implements resistive memories to perform parallel computation, enabling simultaneous execution of multiple operations. The circuit comprises a grid of resistive memory cells that multiply input voltages in parallel, generating currents that are then added in parallel to produce output signals. This architecture enables efficient parallel computation with minimal power consumption, suitable for applications such as image recognition.
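The grid's parallel multiply-and-sum behavior follows directly from Ohm's law (each cell's current is conductance times voltage) and Kirchhoff's current law (currents on a shared column wire add). A minimal numerical model, illustrative only and not the patented circuit, is:

```python
# Illustrative crossbar model: each resistive cell multiplies its row's
# input voltage by the cell conductance (Ohm's law, I = G * V), and the
# currents on each column wire sum (Kirchhoff's current law), so one
# "read" computes a full vector-matrix product in parallel.
def crossbar_mvm(conductances, voltages):
    # conductances: rows x cols grid of cell conductances (siemens)
    # voltages: one input voltage per row (volts)
    cols = len(conductances[0])
    return [sum(conductances[r][c] * voltages[r]
                for r in range(len(voltages)))
            for c in range(cols)]

G = [[1.0, 0.5],
     [2.0, 0.0]]
print(crossbar_mvm(G, [0.1, 0.2]))  # column currents: [0.5, 0.05]
```

In hardware every cell conducts simultaneously, so the loop above collapses to a single analog settling time regardless of matrix size.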

US2024185050A1-patent-drawing

6. Mixed-Precision Neural Processor with Depth-Wise Convolution and Zero-Skipping Mechanism

SAMSUNG ELECTRONICS CO LTD, 2024

A mixed-precision neural processor that supports both direct convolution on image data stored in planar-wise order and depth-wise separable convolution. The processor optimizes computation by skipping zero-value activations and weights, and employs a shuffler to efficiently access activation cache lanes, enabling efficient convolution when activations and weights frequently have zero or near-zero values.
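The zero-skipping mechanism can be illustrated with a toy multiply-accumulate loop; this is a behavioral sketch of the general idea, not Samsung's hardware datapath:

```python
# Simplified zero-skipping sketch: perform a multiply only when both the
# activation and the weight are nonzero, so zero operands cost no
# multiplier cycles (the hardware skips them entirely).
def sparse_mac(activations, weights):
    acc = 0
    mults = 0  # count of multiplies actually performed
    for a, w in zip(activations, weights):
        if a != 0 and w != 0:  # zero-skipping condition
            acc += a * w
            mults += 1
    return acc, mults

acc, mults = sparse_mac([3, 0, 2, 0], [1, 5, 0, 4])
print(acc, mults)  # 3 1
```

With four operand pairs but only one nonzero-by-nonzero product, three of the four multiplier cycles are saved, which is why the approach pays off when activations and weights are frequently zero.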

7. Integrated Circuit with Deep Learning Accelerator and On-Chip Memory for Matrix Operations

MICRON TECHNOLOGY INC, 2024

A low-power, high-performance integrated circuit for accelerating artificial neural networks (ANNs) using a specialized Deep Learning Accelerator (DLA) and on-chip memory. The DLA is optimized for matrix operations and vector-matrix multiplication, while the on-chip memory stores large input/output vectors. Large ANN computations are broken down into smaller ones that fit the DLA's granularity, with the on-chip memory used for temporary storage instead of off-chip memory, reducing energy consumption and latency compared to a general-purpose processor.
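The decomposition step can be sketched as a tiled matrix-vector product; the tile width below stands in for a hypothetical DLA granularity, and the whole function is an assumption-laden illustration rather than Micron's scheduler:

```python
# Tiling sketch: break a wide matrix-vector product into chunks matching
# an assumed accelerator granularity (TILE), accumulating partial sums in
# on-chip memory between chunks instead of spilling off-chip.
TILE = 2  # assumed DLA operand width

def tiled_matvec(matrix, vector):
    n = len(matrix)
    out = [0.0] * n  # partial sums held in (modeled) on-chip memory
    for start in range(0, len(vector), TILE):
        chunk = vector[start:start + TILE]  # one DLA-sized piece
        for r in range(n):
            row = matrix[r][start:start + TILE]
            out[r] += sum(a * b for a, b in zip(row, chunk))
    return out

M = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
print(tiled_matvec(M, [1, 1, 1, 1]))  # [10.0, 26.0]
```

Each chunk fits the accelerator, and only the small partial-sum vector persists between chunks, which is the source of the claimed energy and latency savings.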

US11874897B2-patent-drawing

8. Neuromorphic Discrete Fourier Transformation Using Spiking Neurons with Weighted Synaptic Couplings

TELEFONAKTIEBOLAGET LM ERICSSON, 2024

Neuromorphic implementation of discrete Fourier transformation (DFT) using spiking neurons that enables energy-efficient, asynchronous, and event-driven DFT computation. The DFT is performed by mapping frequency domain components represented by spikes in input neurons to time domain components in output neurons using weights in neuromorphic couplings. The output neurons sum spikes from the input neurons weighted by the couplings. This allows generating a time domain signal for orthogonal frequency division multiplexing (OFDM) from spike-encoded frequency domain components.
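The weighted-coupling construction mirrors an inverse DFT: output neuron n accumulates input spikes X[k] through fixed complex couplings W[n][k] = e^{2πikn/N}/N. The sketch below models that mapping numerically, ignoring the asynchronous spike timing of a real neuromorphic substrate:

```python
import cmath

# Illustrative model: input "neurons" carry spike-encoded frequency
# components X[k]; fixed synaptic couplings W[n][k] = exp(2j*pi*k*n/N)/N
# implement an inverse DFT, so each output neuron's weighted spike sum
# is the time-domain sample x[n] (as used for OFDM symbol generation).
def idft_via_couplings(X):
    N = len(X)
    W = [[cmath.exp(2j * cmath.pi * k * n / N) / N for k in range(N)]
         for n in range(N)]
    return [sum(W[n][k] * X[k] for k in range(N)) for n in range(N)]

x = idft_via_couplings([0, 1, 0, 0])  # a single active subcarrier
print([round(abs(v), 3) for v in x])  # constant-magnitude complex tone
```

A single active subcarrier yields a constant-magnitude rotating phasor across the output neurons, exactly the OFDM time-domain behavior the patent targets.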

WO2024003374A1-patent-drawing

9. Neuromorphic Computing Device with 3D Memory Array for Voltage-Based Neural Network Computations

TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY LTD, 2023

A neuromorphic computing device that performs complex analyses such as image processing and speech recognition using a neural network model. The device includes a 3D memory array that stores weight values of the neural network model, and a controller that applies input voltages to the memory array and receives output voltages corresponding to computations of the neural network model. The memory array performs multiplication operations based on stored weights and applied voltages, with current from local bit lines combined through interconnects to achieve efficient accumulation of results.

10. Neuromorphic Computing between Reality and Future Needs

Khaled S. Ahmed, Fayroz Farouk Shereif - IntechOpen, 2023

Neuromorphic computing is a computer engineering approach that models system elements on the human brain and nervous system. Many sciences, including biology, mathematics, electronic engineering, computer science, and physics, have been integrated to construct artificial neural systems. This chapter covers the basics of neuromorphic computing together with existing systems, including their materials, devices, and circuits. The last part covers algorithms and applications in several fields.

11. Neuromorphic Computing Activation Function with Comparator, Capacitor, and Ramp Voltage Generator

IBM, 2023

Implementing and calibrating hardware-based activation functions for neuromorphic computing systems. The activation function comprises a comparator circuit, a capacitor, and a ramp voltage generator circuit. The comparator circuit compares the input voltage stored in the capacitor to the ramp voltage, generating a pulse duration that encodes the activation output value of the non-linear activation function.
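The pulse-width encoding can be modeled in discrete time: the comparator output stays asserted until the ramp crosses the capacitor's stored voltage, so the pulse duration is proportional to the input. This is a behavioral sketch under assumed ramp-rate and time-step parameters, not IBM's circuit:

```python
# Discrete-time model of the ramp-comparator scheme: the comparator
# holds its output high until the ramp voltage exceeds the voltage
# stored on the capacitor, so the pulse width encodes the activation.
def pulse_duration(v_in, ramp_rate=1.0, dt=0.001):
    t = 0.0
    while ramp_rate * t < v_in:  # comparator: ramp vs. stored voltage
        t += dt
    return t  # pulse width ~ v_in / ramp_rate

print(round(pulse_duration(0.5), 3))  # ~0.5
```

A nonlinear activation follows by shaping the ramp: a nonlinear ramp profile makes the crossing time a nonlinear function of the stored input voltage.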

US2023306251A1-patent-drawing

12. Wireless Transmission System for Spiking Neural Network Data Using Resource-Based Spike Mapping

ERICSSON TELEFON AB L M, 2023

Efficiently transmitting spiking neural network data over wireless networks. The method involves mapping spikes generated by neuromorphic applications to radio resources based on neuron identity, spike properties, and resource availability. This allows transmitting sparse, bursty neural data without protocol overheads. Receivers identify the resources containing spikes by detecting signals rather than demodulating.
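The identity-to-resource mapping can be sketched as follows; the modulo assignment and the boolean "energy grid" are illustrative assumptions standing in for the patent's resource-selection rules:

```python
# Hypothetical sketch: each spiking neuron maps to a radio resource by
# its identity; the receiver infers which neurons fired from which
# resources carry energy, with no payload demodulation required.
def map_spikes_to_resources(spiking_neurons, num_resources):
    grid = [False] * num_resources
    for neuron_id in spiking_neurons:
        grid[neuron_id % num_resources] = True  # assumed id-based mapping
    return grid

def detect_spikes(grid):
    # Receiver side: simple energy detection per resource.
    return [i for i, busy in enumerate(grid) if busy]

grid = map_spikes_to_resources({3, 10}, 8)
print(detect_spikes(grid))  # [2, 3]
```

Because the spike's presence is the message, the scheme avoids per-spike headers, which is what makes sparse, bursty neural traffic cheap to carry.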

WO2023163619A1-patent-drawing

13. Spiking Neural Network Data Encoding and Transmission Method with Variable Protocol Data Unit Sizing in Wireless Networks

ERICSSON TELEFON AB L M, 2023

Communication of spiking neural network (SNN) data in wireless networks to support distributed neuromorphic applications. The method involves encoding, packaging, and transmitting spikes from a neuromorphic transmitter node to a receiver node. Encoding accounts for factors such as spike priority, delay sensitivity, and grouping. Spikes are grouped into protocol data units (PDUs) whose sizes depend on spike type, encoding, and network characteristics; multiple PDUs may be assigned priorities. PDU sizes are optimized to balance delay, size, and buffering, and are chosen to match transport block sizes. The receiver demultiplexes and decodes received PDUs to regenerate the spikes.
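The grouping step can be sketched as a simple packer; the transport block size and the flush-when-full policy below are illustrative assumptions, not Ericsson's sizing algorithm:

```python
# Toy sketch of PDU grouping: pack spikes into PDUs no larger than an
# assumed transport block size, flushing a PDU whenever it fills so each
# PDU maps onto one transport block.
TB_SIZE = 3  # assumed spikes per transport block

def pack_spikes(spikes):
    pdus, current = [], []
    for spike in spikes:
        current.append(spike)
        if len(current) == TB_SIZE:  # PDU matches transport block size
            pdus.append(current)
            current = []
    if current:  # flush the final, possibly partial, PDU
        pdus.append(current)
    return pdus

print(pack_spikes([101, 102, 103, 104, 105]))  # [[101, 102, 103], [104, 105]]
```

A real implementation would additionally weigh spike priority and delay sensitivity when deciding whether to hold a partial PDU for more spikes or flush it immediately.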

WO2023158352A1-patent-drawing

14. 2D Array Neuromorphic Processor with Grid-Structured Axon, Synapse, and Neuron Circuits Incorporating Time-Division and Shared Adder Resources

SAMSUNG ELECTRONICS CO LTD, 2023

A 2D array-based neuromorphic processor for neural networks, comprising axon circuits, synapse circuits, and neuron circuits arranged in a grid structure. The synapse circuits store weights and output operation values based on time information, while the neuron circuits perform multi-bit operations using the operation values and time information. The processor enables efficient neural network processing through a time-division method and shared adder resources.

US2023214637A1-patent-drawing

15. Introduction to Neuromorphic Computing Systems

L. Jubair Ahmed, S. Dhanasekar, K. Martin Sagayam - IGI Global, 2023

The process of using electronic circuits to replicate the neurobiological architectures seen in the nervous system is known as neuromorphic engineering, also referred to as neuromorphic computing. These technologies are essential for the future of computing, although most work in the field has focused on hardware development. Execution speed, energy efficiency, accessibility, and robustness against local failures are vital advantages of neuromorphic computing over conventional methods, and neuromorphic computing is used to realize spiking neural networks. This chapter covers the basic ideas of neuromorphic engineering and neuromorphic computing, along with their motivating factors and challenges. Because deep learning techniques use neural network topologies, deep learning models are frequently referred to as deep neural networks; deep learning techniques and their architectures are also covered. Furthermore, emerging memory devices for neuromorphic systems and neuromorphic circuits are illustrated.

16. Neuromorphic Computing Architecture with Modular Spiking Neural Network on FPGA

TATA CONSULTANCY SERVICES LTD, 2023

A neuromorphic computing architecture for energy-efficient AI applications, implemented on a field-programmable gate array (FPGA) platform. The architecture employs a spiking neural network (SNN) technique, where neurons are arranged in a modular and parallel fashion based on application-specific features. The design optimizes the number and position of neurons using a heuristic technique, enabling high clock frequencies and efficient processing. The architecture achieves significant improvements in energy efficiency, latency, and throughput compared to traditional computing methods.

US2023122192A1-patent-drawing

17. Method for Training Analog Resistive Processing Units with Static Bound Management Parameters

INTERNATIONAL BUSINESS MACHINES CORP, 2023

A method for reducing runtime cost of analog resistive processing unit (RPU) systems for neuromorphic computing by learning static bound management parameters. The method includes training a first artificial neural network model, retraining the model using matrix-vector compute operations that incorporate bound management parameters, and configuring the RPU system to implement the retrained model with learned static bound management parameters.
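Bound management in analog RPU systems generally means scaling operands so the analog result stays within the hardware's representable range; making that scale a learned, static parameter removes the per-inference cost of computing it dynamically. The sketch below is a generic illustration of this idea under assumed names and a toy clipping model, not IBM's specific method:

```python
# Generic bound-management sketch: a learned, static scale alpha keeps
# the analog matrix-vector product inside the hardware's representable
# range [-bound, bound], avoiding dynamic runtime rescaling.
def bounded_matvec(matrix, vector, alpha, bound=1.0):
    scaled = [v / alpha for v in vector]  # static, learned input scale
    raw = [sum(w * v for w, v in zip(row, scaled)) for row in matrix]
    clipped = [max(-bound, min(bound, y)) for y in raw]  # analog saturation
    return [y * alpha for y in clipped]  # undo the scale digitally

M = [[0.5, 0.5]]
print(bounded_matvec(M, [4.0, 4.0], alpha=4.0))  # [4.0] (no clipping)
```

With alpha = 1.0 the same input would saturate at the bound and return a wrong result, which is why alpha is learned during the retraining step rather than fixed by hand.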

US2023097217A1-patent-drawing

18. Analog Near-Memory Multiplication-and-Accumulate Circuit with Variable Data Flow Control

QUALCOMM INC, 2023

A power-efficient near-memory analog multiply-and-accumulate (MAC) system that reduces data flow to memory and shortens processing time. The system includes a MAC circuit that multiplies a plurality of input neurons from a previous layer in a machine learning application with a plurality of filter weights to form a plurality of products.

US11574173B2-patent-drawing

19. In-Memory Computing Device with Parallel Column Memory Array for Neural Network Convolutions

MACRONIX INTERNATIONAL CO LTD, 2023

In-memory computing device for executing convolutions in neural networks, comprising an array of memory cells storing kernel matrix elements in parallel columns, and driver and sensing circuitry to apply input vectors and sense output currents, respectively, to compute output matrix elements.

20. Artificial Neural Network Circuit with Temperature-Compensating Memristor-Based Crossbar and Normalizing Processing Circuit

DENSO CORP, 2023

An artificial neural network (ANN) circuit that suppresses performance degradation due to temperature changes, comprising a memristor-based crossbar circuit and a processing circuit. The crossbar circuit transmits signals between neurons, with memristors providing variable-resistance weights whose conductance values are set cooperatively to give each signal its desired weight. The processing circuit calculates the signal sum for each output bar and normalizes it based on the number of output bars and a resistor value.

US11562215B2-patent-drawing

21. Processing Element with Precision-Selectable Multiplier and Saturating Adder for Weighted Input Activation Calculation

22. Organic multilevel (opto)electronic memories towards neuromorphic applications

23. Neural Network Evaluation System with On-Chip Memory and Spatially Parallel Matrix Multiplications

24. Training Method for Neural Network Memristor Crossbars with Voltage Margin Adjustment to Mitigate Write Threshold Variations

25. Neural Network Apparatus with Selective Bitwise Operation Processing
