Neuromorphic processors operate within strict power and area constraints while executing complex neural network workloads. Current designs demonstrate power efficiency of 2-5 TOPS/W for mixed-precision operations, but face memory bandwidth limitations when processing large models. On-chip memory—typically limited to 10-20MB—creates bottlenecks during data-intensive operations that require frequent off-chip memory access, diminishing the theoretical computational advantages these architectures promise.

The fundamental challenge in neuromorphic computing lies in balancing computational density against energy efficiency while maintaining the flexibility needed for diverse neural network topologies and learning paradigms.

This page brings together solutions from recent research—including resistive memory grids for parallel voltage multiplication, specialized circuits that eliminate ADC requirements between network layers, dynamic data processing mapping architectures, and resource-based spike encoding for distributed neuromorphic systems. These and other approaches demonstrate how hardware designers are addressing the specific computational patterns of neural networks while minimizing the energy and latency costs that traditional von Neumann architectures impose.

1. Neural Network Processor with Fetch Unit Utilizing Dynamic Data Processing Mapping Tables

FURIOSAAI CO, 2024

A neural network processor that accelerates deep learning computations through optimized data routing and processing. The processor employs a fetch unit with multiple routers, each with a data processing mapping table that determines how input data is processed based on node identifiers. A fetch network controller dynamically rebuilds these tables to create a software topology that matches the specific calculation requirements of the neural network, enabling efficient reuse of data patterns and minimizing memory accesses.
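The table-driven routing idea can be sketched as a lookup keyed by node identifier. This is a hypothetical illustration (the function name `route` and the action names `duplicate`/`consume` are invented here), not FuriosaAI's actual table format:

```python
def route(data, node_id, mapping_table):
    """Hypothetical per-router lookup: the node identifier selects how this
    router handles a fetched value (forward it, duplicate it so a data
    pattern is reused without a second memory access, or consume it)."""
    action = mapping_table.get(node_id, "forward")
    if action == "duplicate":
        return [data, data]  # reuse the fetched value on two paths
    if action == "consume":
        return []            # value terminates at this router
    return [data]            # default: pass through unchanged
```

Rebuilding `mapping_table` at runtime is what the abstract calls creating a "software topology" matched to the network being executed.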

2. Neural Network Interface Circuit with Integrated Signal Processing and Feedback-Controlled Comparator

TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY LTD, 2024

A neural network interface circuit that eliminates the need for analog-to-digital converters (ADCs) between successive layers, reducing chip area and power consumption. The interface circuit integrates signals from memory cells, generates intermediate voltages, and drives analog voltages to subsequent layers through a feedback-controlled comparator circuit.

3. Flexible Artificial Ag NPs:a–SiC0.11:H Synapse on Al Foil with High Uniformity and On/Off Ratio for Neuromorphic Computing

Zongyan Zuo, Chengfeng Zhou, Zhongyuan Ma - MDPI AG, 2024

A neuromorphic computing network based on SiC

4. Neuromorphic Computing and Its Application

Tejasvini Thakral, Lucky Lamba, Manjeet Singh - Wiley, 2024

Neuromorphic computing is a rapidly developing field that seeks to emulate the neural structure and function of the human brain using hardware and software technologies. In recent years, the development of neuromorphic computing has been fueled by advancements in semiconductor technology and the need for more efficient and intelligent computing systems. This review chapter provides an overview of the state-of-the-art in neuromorphic computing, including the principles and concepts that underlie the technology, its key applications, and the challenges and opportunities that lie ahead. In this chapter discussion about the potential of neuromorphic computing to enable a wide range of applications, including sensory processing, robotics, machine learning, and cognitive computing. In this chapter discussions are made on some of the key challenges associated with the development of neuromorphic computing systems, including scalability, power consumption, and programming models. Overall, this review chapter provides a comprehensive overview of the current state of the art in neuromorphic co... Read More

5. Analog Neuromorphic Circuit with Resistive Memory Grid for Parallel Voltage Multiplication and Current Summation

UNIVERSITY OF DAYTON, 2024

Analog neuromorphic circuit that implements resistive memories to perform parallel computation, enabling simultaneous execution of multiple operations. The circuit comprises a grid of resistive memory cells that multiply input voltages in parallel, generating currents that are then added in parallel to produce output signals. This architecture enables efficient parallel computation with minimal power consumption, suitable for applications such as image recognition.
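The crossbar dataflow described above can be modeled in a few lines: Ohm's law (I = G·V) gives the per-cell multiply and Kirchhoff current summation gives the per-column add. This is a digital sketch of the analog behavior, not the patented circuit:

```python
def crossbar_mac(voltages, conductances):
    """Model of a resistive crossbar: voltages is the list of row inputs,
    conductances[row][col] the cell conductance in siemens. Each cell
    contributes I = G * V; column currents sum to the outputs."""
    n_cols = len(conductances[0])
    outputs = [0.0] * n_cols
    for v, row in zip(voltages, conductances):
        for col, g in enumerate(row):
            outputs[col] += g * v  # per-cell multiply, per-column current sum
    return outputs
```

All row-column products happen simultaneously in the physical array; the loops here only emulate that parallelism.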

US2024185050A1-patent-drawing

6. Mixed-Precision Neural Processor with Depth-Wise Convolution and Zero-Skipping Mechanism

SAMSUNG ELECTRONICS CO LTD, 2024

A mixed-precision neural processor with depth-wise convolution support that handles both direct convolution on image data stored in planar-wise order and depth-wise separable convolution. The processor skips zero-value activations and weights to save computation and employs a shuffler to efficiently access activation cache lanes, enabling efficient convolution when activations and weights frequently have zero or near-zero values.
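The zero-skipping mechanism can be sketched as a MAC loop that never issues a multiply when either operand is zero. An illustrative model only (the hardware does this with operand gating, not a software branch):

```python
def zero_skip_mac(activations, weights):
    """Accumulate only nonzero activation/weight pairs; also count how many
    multiplies were skipped, which is the energy/latency saving."""
    acc, skipped = 0, 0
    for a, w in zip(activations, weights):
        if a == 0 or w == 0:
            skipped += 1  # no multiply issued for this pair
            continue
        acc += a * w
    return acc, skipped
```

For sparse activation maps (e.g. after ReLU) the skipped count dominates, which is where the claimed efficiency comes from.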

7. Integrated Circuit with Deep Learning Accelerator and On-Chip Memory for Matrix Operations

MICRON TECHNOLOGY INC, 2024

A low power and high performance integrated circuit for accelerating artificial neural networks (ANNs) using a specialized accelerator called Deep Learning Accelerator (DLA) and on-chip memory. The DLA is optimized for matrix operations and vector-matrix multiplication, while the memory is for storing large input/output vectors. This allows breaking down large ANN computations into smaller ones that fit the DLA's granularity, and using the memory for temporary storage instead of off-chip memory. This reduces energy consumption and latency compared to using a general-purpose processor for ANNs.
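The decomposition the abstract describes, breaking a large matrix-vector product into accelerator-sized tiles with partial sums held on-chip, can be sketched as follows (a generic tiling illustration, not Micron's scheduling logic):

```python
def tiled_matvec(matrix, vector, tile):
    """Split a large mat-vec into tile-sized sub-problems that each fit the
    accelerator's granularity; partial sums accumulate in on-chip storage."""
    n = len(matrix)
    out = [0.0] * n  # stands in for on-chip partial-sum storage
    for r0 in range(0, n, tile):
        for c0 in range(0, len(vector), tile):
            # one accelerator-sized sub-problem
            for r in range(r0, min(r0 + tile, n)):
                row = matrix[r]
                out[r] += sum(row[c] * vector[c]
                              for c in range(c0, min(c0 + tile, len(vector))))
    return out
```

Because intermediate tiles stay in on-chip memory, the only off-chip traffic is the initial operands and final outputs.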

US11874897B2-patent-drawing

8. Neuromorphic Discrete Fourier Transformation Using Spiking Neurons with Weighted Synaptic Couplings

TELEFONAKTIEBOLAGET LM ERICSSON, 2024

Neuromorphic implementation of discrete Fourier transformation (DFT) using spiking neurons that enables energy-efficient, asynchronous, and event-driven DFT computation. The DFT is performed by mapping frequency domain components represented by spikes in input neurons to time domain components in output neurons using weights in neuromorphic couplings. The output neurons sum spikes from the input neurons weighted by the couplings. This allows generating a time domain signal for orthogonal frequency division multiplexing (OFDM) from spike-encoded frequency domain components.
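The mapping from frequency-domain components to time-domain components is the inverse DFT, so the fixed synaptic couplings play the role of IDFT weights. A dense, non-spiking sketch of the weighted summation the couplings implement, assuming the standard convention W[n][k] = exp(2πj·kn/N)/N (the patent realizes this with asynchronous spike events, not a loop):

```python
import cmath

def idft_via_couplings(freq_components):
    """Each output (time-domain) neuron sums the input (frequency-domain)
    values weighted by fixed couplings exp(2j*pi*k*n/N)/N -- the inverse DFT
    used to form an OFDM time-domain signal."""
    n_pts = len(freq_components)
    out = []
    for n in range(n_pts):
        acc = sum(freq_components[k] * cmath.exp(2j * cmath.pi * k * n / n_pts)
                  for k in range(n_pts))
        out.append(acc / n_pts)
    return out
```

A single active frequency bin produces the expected complex tone across the output neurons.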

WO2024003374A1-patent-drawing

9. Neuromorphic Computing Device with 3D Memory Array for Voltage-Based Neural Network Computations

TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY LTD, 2023

A neuromorphic computing device that performs complex analyses such as image processing and speech recognition using a neural network model. The device includes a 3D memory array that stores weight values of the neural network model, and a controller that applies input voltages to the memory array and receives output voltages corresponding to computations of the neural network model. The memory array performs multiplication operations based on stored weights and applied voltages, with current from local bit lines combined through interconnects to achieve efficient accumulation of results.

10. Neuromorphic Computing between Reality and Future Needs

Khaled S. Ahmed, Fayroz Farouk Shereif - IntechOpen, 2023

Neuromorphic computing is a computer engineering approach that models system elements on the human brain and nervous system. Many disciplines, including biology, mathematics, electronic engineering, computer science, and physics, have been integrated to construct artificial neural systems. This chapter covers the basics of neuromorphic computing together with existing systems, including their materials, devices, and circuits. The final part covers algorithms and applications in selected fields.

11. Neuromorphic Computing Activation Function with Comparator, Capacitor, and Ramp Voltage Generator

IBM, 2023

Implementing and calibrating hardware-based activation functions for neuromorphic computing systems. The activation function comprises a comparator circuit, a capacitor, and a ramp voltage generator circuit. The comparator circuit compares the input voltage stored in the capacitor to the ramp voltage, generating a pulse duration that encodes the activation output value of the non-linear activation function.
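The comparator-and-ramp scheme encodes an analog value as a pulse width: the comparator output stays asserted until the ramp crosses the voltage held on the capacitor, so larger inputs yield longer pulses. A time-stepped sketch of that behavior (parameters here are illustrative, not IBM's calibration values):

```python
def pulse_duration(v_in, ramp_step):
    """Count steps until a linear ramp crosses the capacitor voltage v_in;
    the step count is the pulse-width encoding of the activation output."""
    steps, ramp = 0, 0.0
    while ramp < v_in:
        ramp += ramp_step  # ramp voltage generator output
        steps += 1         # comparator still asserted this step
    return steps
```

The non-linearity of the activation function can then be folded into the shape of the ramp rather than computed digitally.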

US2023306251A1-patent-drawing

12. Wireless Transmission System for Spiking Neural Network Data Using Resource-Based Spike Mapping

ERICSSON TELEFON AB L M, 2023

Efficiently transmitting spiking neural network data over wireless networks. The method involves mapping spikes generated by neuromorphic applications to radio resources based on neuron identity, spike properties, and resource availability. This allows transmitting sparse, bursty neural data without protocol overheads. Receivers identify the resources containing spikes by detecting signals rather than demodulating.
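A minimal sketch of resource-based spike mapping: a deterministic function of neuron identity picks the radio resource, so the receiver can invert the mapping and recover which neurons spiked purely by detecting energy on resources. The modulo scheme below is an invented example; the patent also conditions the mapping on spike properties and resource availability:

```python
def spike_to_resource(neuron_id, n_resources, offset=0):
    """Hypothetical deterministic neuron-id -> radio-resource mapping.
    Because the mapping is known to both ends, a detected signal on
    resource r implies a spike from the neurons that map to r --
    no demodulation of payload bits is needed."""
    return (neuron_id + offset) % n_resources
```

The sparsity of spike trains is what makes this efficient: most resources carry nothing, and empty resources cost no protocol overhead.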

WO2023163619A1-patent-drawing

13. Spiking Neural Network Data Encoding and Transmission Method with Variable Protocol Data Unit Sizing in Wireless Networks

ERICSSON TELEFON AB L M, 2023

Communication of spiking neural network (SNN) data in wireless networks to support distributed neuromorphic applications. The method involves encoding, packaging, and transmitting spikes from a neuromorphic transmitter node to a receiver node. The encoding involves determining factors like spike priority, delay sensitivity, and grouping. The spikes are grouped into protocol data units (PDUs) with sizes based on factors like spike type, encoding, and network characteristics. Multiple PDUs may be assigned priorities. The PDU sizes are optimized to balance delay, size, and buffering, and are also chosen to match transport block sizes. The receiver demultiplexes and decodes the received PDUs to regenerate the spikes.
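The size-bounded grouping step can be sketched as greedy bin packing of encoded spikes into PDUs capped at the transport block size. This illustrates only the sizing constraint; the patented method additionally weighs priority, delay sensitivity, and buffering:

```python
def pack_spikes_into_pdus(spike_sizes, max_pdu_bytes):
    """Greedily group encoded-spike byte counts into PDUs no larger than
    max_pdu_bytes (an oversize single spike still gets its own PDU)."""
    pdus, current, used = [], [], 0
    for s in spike_sizes:
        if used + s > max_pdu_bytes and current:
            pdus.append(current)  # close the full PDU
            current, used = [], 0
        current.append(s)
        used += s
    if current:
        pdus.append(current)
    return pdus
```

Matching `max_pdu_bytes` to the radio transport block size avoids fragmentation at the lower layers.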

WO2023158352A1-patent-drawing

14. 2D Array Neuromorphic Processor with Grid-Structured Axon, Synapse, and Neuron Circuits Incorporating Time-Division and Shared Adder Resources

SAMSUNG ELECTRONICS CO LTD, 2023

A 2D array-based neuromorphic processor for neural networks, comprising axon circuits, synapse circuits, and neuron circuits arranged in a grid structure. The synapse circuits store weights and output operation values based on time information, while the neuron circuits perform multi-bit operations using the operation values and time information. The processor enables efficient neural network processing through a time-division method and shared adder resources.

US2023214637A1-patent-drawing

15. Introduction to Neuromorphic Computing Systems

L. Jubair Ahmed, S. Dhanasekar, K. Martin Sagayam - IGI Global, 2023

The process of using electronic circuits to replicate the neurobiological architectures seen in the nervous system is known as neuromorphic engineering, also referred to as neuromorphic computing. These technologies are essential for the future of computing, although most work in neuromorphic computing has focused on hardware development. Execution speed, energy efficiency, accessibility, and robustness against local failures are vital advantages of neuromorphic computing over conventional methods. Spiking neural networks are generated using neuromorphic computing. This chapter covers the basic ideas of neuromorphic engineering and neuromorphic computing, along with their motivating factors and challenges. Deep learning models are frequently referred to as deep neural networks because deep learning techniques use neural network topologies; deep learning techniques and their different architectures are also covered. Furthermore, emerging memory devices for neuromorphic systems and neuromorphic circuits are illustrated.

16. Neuromorphic Computing Architecture with Modular Spiking Neural Network on FPGA

TATA CONSULTANCY SERVICES LTD, 2023

A neuromorphic computing architecture for energy-efficient AI applications, implemented on a field-programmable gate array (FPGA) platform. The architecture employs a spiking neural network (SNN) technique, where neurons are arranged in a modular and parallel fashion based on application-specific features. The design optimizes the number and position of neurons using a heuristic technique, enabling high clock frequencies and efficient processing. The architecture achieves significant improvements in energy efficiency, latency, and throughput compared to traditional computing methods.

US2023122192A1-patent-drawing

17. Method for Training Analog Resistive Processing Units with Static Bound Management Parameters

INTERNATIONAL BUSINESS MACHINES CORP, 2023

A method for reducing runtime cost of analog resistive processing unit (RPU) systems for neuromorphic computing by learning static bound management parameters. The method includes training a first artificial neural network model, retraining the model using matrix-vector compute operations that incorporate bound management parameters, and configuring the RPU system to implement the retrained model with learned static bound management parameters.

US2023097217A1-patent-drawing

18. Analog Near-Memory Multiplication-and-Accumulate Circuit with Variable Data Flow Control

QUALCOMM INC, 2023

Power-efficient near-memory analog MAC system that reduces data flow to memory and shortens processing time. The system includes a multiplication-and-accumulate (MAC) circuit that multiplies a plurality of input neurons from a previous layer in a machine learning application with a plurality of filter weights to form a plurality of products.

US11574173B2-patent-drawing

19. In-Memory Computing Device with Parallel Column Memory Array for Neural Network Convolutions

MACRONIX INTERNATIONAL CO LTD, 2023

In-memory computing device for executing convolutions in neural networks, comprising an array of memory cells storing kernel matrix elements in parallel columns, and driver and sensing circuitry to apply input vectors and sense output currents, respectively, to compute output matrix elements.

20. Artificial Neural Network Circuit with Temperature-Compensating Memristor-Based Crossbar and Normalizing Processing Circuit

DENSO CORP, 2023

An artificial neural network (ANN) circuit that suppresses performance degradation due to temperature changes, comprising a crossbar circuit with memristors and a processing circuit. The crossbar circuit transmits signals between neurons with memristors providing variable resistance weights. The processing circuit calculates signal sums for each output bar, with the memristor conductance values set to cooperate and give a desired weight to the signal. The processing circuit normalizes the calculated sum based on the number of output bars and a resistor value.

US11562215B2-patent-drawing

21. Processing Element with Precision-Selectable Multiplier and Saturating Adder for Weighted Input Activation Calculation

REBELLIONS INC, 2023

Processing element that selects a multiplier precision according to the weight and the size of the input activation. The element includes a weight register that stores weights, an input activation register that stores input activations, a flexible multiplier, and a saturating adder. The flexible multiplier receives a first sub-weight and a first sub-input activation of a first precision and multiplies them at either the first precision or a second, different precision, chosen according to the values of the sub-weight and sub-input activation. The saturating adder receives the result data and generates a partial sum.

22. Organic multilevel (opto)electronic memories towards neuromorphic applications

Lin He, Zuchong Yang, Zhiming Wang - Royal Society of Chemistry (RSC), 2023

In the past decades, neuromorphic computing has attracted the interest of the scientific community due to its potential to circumvent the von Neumann bottleneck.

23. Neural Network Evaluation System with On-Chip Memory and Spatially Parallel Matrix Multiplications

APPLIED BRAIN RESEARCH INC, 2022

Evaluation of neural engineering framework-style neural networks for machine learning. The evaluation system includes on-chip memory, a plurality of non-linear components, an external system, first and second spatially parallel matrix multiplications, an error signal, a plurality of sets of factorized network weights, and an input signal.

US11537856B2-patent-drawing

24. Training Method for Neural Network Memristor Crossbars with Voltage Margin Adjustment to Mitigate Write Threshold Variations

DENSO CORP, 2022

Training method for artificial neural network circuits with memristor crossbars that reduces training accuracy drops caused by write voltage threshold variations. The method sets voltages to ensure a minimum write voltage is applied to the target memristor, while preventing unintended updates of non-target memristors. The write voltage is set to be at least VTH+dV, where dV is a positive margin voltage based on the memristor threshold variation.
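The voltage rule can be stated compactly: the target memristor must see at least V_TH + dV, while cells sharing only a row or column (half-selected in the common V/2 biasing scheme, which is an assumption here, not stated in the abstract) must stay safely below the threshold. A small sketch of that check:

```python
def write_voltages(v_th, d_v):
    """Apply V_write = V_TH + dV to the target cell; in a V/2 scheme,
    half-selected cells see V_write / 2, which must stay below V_TH - dV
    so threshold variation cannot cause unintended updates."""
    v_write = v_th + d_v
    v_half = v_write / 2.0
    half_select_safe = v_half < v_th - d_v
    return v_write, v_half, half_select_safe
```

The margin dV is sized from the measured spread of write thresholds across the crossbar.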

US11537897B2-patent-drawing

25. Neural Network Apparatus with Selective Bitwise Operation Processing

SAMSUNG ELECTRONICS CO LTD, 2022

A neural network apparatus that processes neural network operations by selectively performing operations on individual bits of activation and weight inputs, rather than processing the entire inputs at once. The apparatus determines which bits to operate on and performs the operation only on those bits, producing a partial output value.
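Bitwise operation on activation and weight inputs can be sketched as single-bit AND partial products shifted by bit position; operating on only a subset of bits yields the partial output value the abstract mentions. An illustrative model, not Samsung's datapath:

```python
def bitwise_partial_products(activation, weight, act_bits, w_bits):
    """Reconstruct activation * weight from single-bit partial products.
    Zero bits contribute nothing, so their operations can be skipped;
    running a subset of (i, j) pairs gives a partial output value."""
    total = 0
    for i in range(act_bits):
        if not (activation >> i) & 1:
            continue  # zero activation bit: no operation issued
        for j in range(w_bits):
            if (weight >> j) & 1:
                total += 1 << (i + j)  # shifted 1-bit AND partial product
    return total
```

Selecting which (i, j) pairs to evaluate is exactly the "determines which bits to operate on" step.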

US11531870B2-patent-drawing

26. Application-Hardware Co-Design: System-Level Optimization of Neuromorphic Computers with Neuromorphic Devices

Catherine D. Schuman, James S. Plank, Garrett S. Rose - IEEE, 2022

The design of neuromorphic computers offers the opportunity to innovate across the entire compute stack, from materials and devices, to algorithms and applications. Here, we provide a discussion of challenges associated with full-stack co-design and how we have addressed those challenges with a few use cases for particular neuromorphic devices.

27. An Overview of Artificial Neuromorphic Circuits

Maloth Santhoshi, Shovan Barma, Debaprasad Das - IEEE, 2022

This paper report a brief review of the neuromorphic computing systems along with a brief overview of their working principle, their design methodologies. Also this papers provides an insight to the new design techniques which opens up the possibility to attain high speed and large complexity with a lower energy cost. The neuropmorphic systems which are inspired by biological phenomenon's are most promising candidate for next generation information processing and computations. The scopes, opportunities and challenges faced by the neuromorphic computing systems has been also discussed.

28. Analog Domain Convolution Engine with Parallel Dot Product Computation Using Analog Filter Circuits

NOKIA TECHNOLOGIES OY, 2022

A convolution engine for in-memory computing that performs convolution operations fully in the analog domain with improved power consumption and performance. The engine comprises a plurality of analog filter circuits, each receiving a portion of the input window and corresponding filter weights, and a processor generating the weights to compute dot products in parallel. The dot products are stored to an output feature map representing the convolution result.

29. Synapse Weight Update Method Using Device Characteristic-Based Compensation in NVRAM Cells

INTERNATIONAL BUSINESS MACHINES CORP, 2022

A method for updating synapse weights in neuromorphic systems using non-volatile analog memory (NVRAM) cells, where the update amount is controlled by a device characteristic-based compensation mechanism to maintain consistent weight updates across cells and wafers.

30. Neuromorphic Circuit with Phase Change Synapse and Post-Neuron Capacitor-Based Firing Mechanism

JIANGSU ADVANCED MEMORY TECHNOLOGY CO LTD, ALTO MEMORY TECHNOLOGY CORP, 2022

An artificial neuromorphic circuit and operation method that utilizes circuits to build an artificial neural network system. The circuit comprises a synapse circuit with a phase change element, first and second switches, and a post-neuron circuit with an input terminal, switch circuit, capacitor, and output terminal. The post-neuron circuit charges the capacitor through the switch circuit in response to a first pulse signal, generates a firing signal based on the capacitor voltage and threshold voltage, and produces control signals to turn off the switch circuit and control the second switch to adjust the phase change element's current magnitude.
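The capacitor-based firing mechanism is essentially integrate-and-fire: input pulses charge the capacitor, and a firing signal is emitted once the capacitor voltage reaches the threshold. A heavily simplified discrete-time sketch (unit time step, ideal reset; the circuit's switch control and phase-change current adjustment are not modeled):

```python
def capacitor_fire(input_currents, capacitance, v_threshold):
    """Integrate pulse currents on a capacitor (dV = I/C per unit step) and
    record the step index of each firing event; voltage resets after firing."""
    v, events = 0.0, []
    for t, i in enumerate(input_currents):
        v += i / capacitance
        if v >= v_threshold:
            events.append(t)
            v = 0.0  # switch circuit disconnects / capacitor resets
    return events
```

The synapse's phase-change element would scale the per-pulse current, and hence the firing rate.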

31. Neural Network Computing Unit with Integrated ROM-Based Analog Crossbar and Time-Domain Interface

ROBERT BOSCH GMBH, 2022

A neural network computing unit that integrates memory and computation in a single chip, enabling low-power and high-throughput processing of neural networks. The unit employs a ROM-based architecture where weights are stored in a read-only memory and computations are performed using analog circuits, eliminating the need for digital-to-analog conversion and reducing data transfer overhead. The unit's time-domain interface enables efficient activation and readout of the analog multiply-and-add crossbar network, leveraging pulse-width modulation and ratiometric measurement techniques to achieve high precision and linearity.

32. Neural Network Training Utilizing Resistive Memory-Based Weighted Link Conductivity

WESTERN DIGITAL TECHNOLOGIES INC, 2022

A neural network training approach that reduces the risk of data loss and improves prediction and classification performance in the presence of noise. Training sets the weight of each link as the conductivity of a respective memory cell in the resistive network, where the memory cells can include at least one of resistive random-access memory (ReRAM), memristors, or phase change memory (PCM).

33. 3D Stacked Non-Volatile Memory Device with Integrated Neural Network Processing and Through-Silicon Via Connectivity

SANDISK TECHNOLOGIES LLC, 2022

A non-volatile memory device for neural networks that integrates memory and processing in a 3D stacked architecture. The device comprises multiple bonded die pairs, each consisting of a memory die with non-volatile memory cells and a peripheral circuitry die with control circuits. The memory dies store weights for neural network layers, while the peripheral circuitry dies perform multiplication operations to generate output values. Through-silicon vias connect the die pairs, enabling data transfer between them. The device enables efficient neural network inference by propagating input values through the stacked die pairs, with each pair performing a multiplication operation using the stored weights.

34. Spiking Neural Network Architecture with Modular Sub-Network Composition for Pattern Recognition

INNATERA NANOSYSTEMS BV, 2022

Composing spiking neural networks for pattern recognition through a unique response method that enables compositional building of pattern recognizers from spiking neurons. The method involves training sub-networks of spiking neurons to recognize specific features, and then combining these sub-networks to recognize complex patterns. The sub-networks are pre-trained to recognize specific features, and can be combined in a modular fashion to recognize more complex patterns. This approach enables efficient and scalable pattern recognition in applications such as voice recognition, gesture recognition, and medical signal analysis.

US2022230051A1-patent-drawing

35. Neuromorphic Device with Packetized Modulated Spike Signal Transmission and Reception Circuits

SAMSUNG ELECTRONICS CO LTD, 2022

Efficiently implementing a spiking neural network using packetized and modulated spike signals to connect neuromorphic devices in a neuromorphic system. The neuromorphic device has a neuron block, spike transmission circuit, and spike reception circuit. The spike transmission circuit generates a non-binary transmission signal from the neuron block's spikes, packets the spike data, modulates it, and sends it. The spike reception circuit receives the modulated packets, demodulates, depackets, and converts back to spikes for the neuron block. This enables parallel spike transfer and reduction in inter-device connections compared to binary spike signals.

36. In-Memory Computing Circuit with SRAM-Based Parallel XNOR Operations and Integrated Accumulation Mechanism

SOUTHEAST UNIVERSITY, 2022

An in-memory computing circuit for fully connected binary neural networks that performs forward propagation calculations using digital logic on SRAM bit lines, eliminating explicit memory accesses and reducing power consumption through parallel XNOR operations and read-write separation. The circuit integrates memory and computation, leveraging SRAM bit lines for both storage and computation, and employs a delay chain for accumulation and activation operations.
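For binary networks with ±1 values encoded as bits, the dot product reduces to XNOR followed by a popcount, which is what the bit-line logic computes in parallel. A bit-twiddling sketch of that arithmetic identity (not the SRAM circuit itself):

```python
def xnor_dot(a_bits, w_bits, n):
    """Dot product of two n-bit vectors in {-1,+1} encoding (bit 1 = +1,
    bit 0 = -1): XNOR counts matches, and the result is matches - mismatches."""
    matches = bin(~(a_bits ^ w_bits) & ((1 << n) - 1)).count("1")
    return 2 * matches - n
```

The delay-chain accumulation in the circuit replaces the popcount; the identity computed is the same.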

37. Neural Network Processing Circuit with Core-Divided Filter Computations and Dynamic Compiler Scheduling

PERCEIVE CORP, 2022

A neural network processing circuit that enables efficient computation of large neural networks by dividing filter computations across multiple cores and processing units, with a compiler that optimizes computation and memory usage by dynamically scheduling computations and allocating resources. The circuit includes a set of cores that compute dot products of input values and corresponding weight values, and a channel that aggregates these dot products and performs post-processing operations. The compiler assigns each layer to a particular number of cores, assigns filter slices to different weight value buffers, and specifies which segment will perform post-processing on each output value.

38. Dynamic Neural Network Architecture with Temporal Filtering and Nonlinear Node Processing

APPLIED BRAIN RESEARCH INC, 2022

A system and method for dynamic neural networks that enables processing of temporal signals using temporal filters and static or time-varying nonlinearities. The system employs a novel approach to implementing feed-forward, recurrent, and deep networks that can process dynamic signals by incorporating temporal filters on the input and/or output of each node. This allows for the realization of acausal filtering, which is critical for real-time interaction with the world in domains such as manufacturing, auditory processing, video processing, and robotics. The system enables the use of a wide variety of synaptic filters, including linear and nonlinear filters, and allows for heterogeneous distributions of synaptic filters within the network.

US11238337B2-patent-drawing

39. Neural Network Processor with Integrated Multi-Layer Safety Mechanisms and Redundant Spatial Mapping

HAILO TECHNOLOGIES LTD, 2022

A neural network processor with integrated safety mechanisms to ensure reliable operation in critical applications. The processor incorporates multiple safety features, including data stream fault detection, redundant allocation, cluster interlayer and intralayer safety, layer control unit instruction addressing failure detection, weights safety, and neural network intermediate results safety. These mechanisms provide redundancy by design, redundancy through spatial mapping, and self-tuning procedures to modify static and dynamic behavior, addressing system-level safety in situ.

US11237894B1-patent-drawing

40. Neural Network Processing Engine with Layered Computing Elements and Dedicated Memory Integration

HAILO TECHNOLOGIES LTD, 2022

Artificial neural network (ANN) processing engine that can be used for computational purposes such as machine vision. The engine includes a number of network layers, each comprising computing elements, associated dedicated memory elements, and related control logic, and is operative to process an input data stream associated with the ANN.

US11216717B2-patent-drawing

41. Weight Matrix Circuit with Resistive Memory Devices Exhibiting Non-linear Current-Voltage Characteristics

POSTECH ACADEMY-INDUSTRY FOUNDATION, 2022

Weight matrix circuit for improving calculation accuracy of an artificial neural network circuit using resistive memories. The circuit includes n input lines, m output lines, and n×m resistive memory devices, each connected to the n input lines and the m output lines and each having a non-linear current-voltage characteristic.

42. Neuromorphic Computing for Scientific Applications

Robert M. Patton, Prasanna Date, Shruti Kulkarni - IEEE, 2022

Neuromorphic computing technology continues to make strides in the development of new algorithms, devices, and materials. In addition, applications have begun to emerge where neuromorphic computing shows promising results. However, numerous barriers to further development and application remain. In this work, we identify several science areas where neuromorphic computing can either make an immediate impact (within 1 to 3 years) or the societal impact would be extremely high if the technological barriers can be addressed. We identify both opportunities and hurdles to the development of neuromorphic computing technology for these areas. Finally, we discuss future directions that need to be addressed to expand both the development and application of neuromorphic computing.

43. Encoding Integers and Rationals on Neuromorphic Computers using Virtual Neuron

Prasanna Date, Shruti Kulkarni, Aaron Young, 2022

Neuromorphic computers perform computations by emulating the human brain, and use extremely low power. They are expected to be indispensable for energy-efficient computing in the future. While they are primarily used in spiking neural network-based machine learning applications, neuromorphic computers are known to be Turing-complete, and thus capable of general-purpose computation. However, to fully realize their potential for general-purpose, energy-efficient computing, it is important to devise efficient mechanisms for encoding numbers. Current encoding approaches have limited applicability and may not be suitable for general-purpose computation. In this paper, we present the virtual neuron as an encoding mechanism for integers and rational numbers. We evaluate the performance of the virtual neuron on physical and simulated neuromorphic hardware and show that it can perform an addition operation using 23 nJ of energy on average on a mixed-signal memristor-based neuromorphic processor. We also demonstrate its utility by using it in some of the mu-recursive functions…

44. Spiking Neural Network with Binary/Ternary Error Signal Backpropagation and Unified Data-Error Propagation Infrastructure

COMMISSARIAT ENERGIE ATOMIQUE, 2021

A spiking neural network that enables backpropagation training using binary or ternary error signals, eliminating the need for floating-point multiplications. The network implements a modified backpropagation algorithm that leverages the same infrastructure for both data propagation and error backpropagation, utilizing binary or ternary encoding of errors to adapt to the hardware constraints of spiking neurons.
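The error encoding can be sketched as a thresholded sign function mapping each backpropagated error to {-1, 0, +1}, so the same spike infrastructure that carries data can carry errors without floating-point multiplies. An illustrative quantizer (the threshold parameter is an assumption; the patent does not specify the rule):

```python
def ternarize_error(errors, threshold):
    """Quantize backprop errors to {-1, 0, +1}: small errors become silence
    (no spike), larger errors become a single signed spike."""
    return [0 if abs(e) <= threshold else (1 if e > 0 else -1)
            for e in errors]
```

With ternary errors, each weight update reduces to a signed add (or no-op), matching the hardware constraints of spiking neurons.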

45. Neuromorphic Arithmetic Device with Offset Accumulator and Cumulative Synapse Array for Offset-Corrected Multiplication

ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, 2021

A neuromorphic arithmetic device that performs offset correction in neural processing. The device includes an offset accumulator to accumulate measured offset data, a bit extractor to obtain average offset data, and a cumulative synapse array to accumulate multiplication values and correct them based on the average offset data. The device operates by measuring offset data, accumulating it, extracting the average offset, calculating multiplication values, accumulating them, and correcting the result based on the average offset.

46. Neuromorphic Device with Binary Weight Synapse Circuits and Temporal Domain Binary Vector Processing

SAMSUNG ELECTRONICS CO LTD, 2021

A neuromorphic device for neural network processing that uses binary weight values and temporal domain binary vectors to reduce model size and operation count. The device stores binary weights in synapse circuits and converts input feature maps into temporal domain binary vectors, which are then processed using a crossbar array circuit to perform convolution computations. The device achieves learning performance and classification accuracy comparable to traditional 32-bit floating point neural networks.
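The temporal-domain trick can be illustrated numerically. In this sketch (an assumed 8-bit streaming scheme, not Samsung's exact circuit), a multi-bit activation is sent as bit-planes over successive time steps; each step is a binary-binary dot product on the crossbar, and the partial sums are shifted and accumulated over time:

```python
import numpy as np

# Sketch: stream an integer input as bit-planes; each time step performs a
# binary dot product against the stored binary weights, weighted by 2^t.

def temporal_dot(activations, binary_weights, bits=8):
    acc = 0
    for t in range(bits):
        bit_plane = (activations >> t) & 1         # binary input vector at step t
        partial = int(bit_plane @ binary_weights)  # crossbar: popcount-like op
        acc += partial << t                        # weight this step by 2^t
    return acc

x = np.array([5, 200, 17])   # multi-bit input feature map
w = np.array([1, 0, 1])      # binary weights stored in synapse circuits
print(temporal_dot(x, w))    # equals int(x @ w) = 22
```

Because every per-step operation is binary, the crossbar only needs 1-bit cells, while full multi-bit precision is recovered in the shift-and-accumulate.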

47. Synaptic Unit Circuit for Signal Transmission in Hardware-Implemented Spiking Neural Networks

INTERNATIONAL BUSINESS MACHINES CORP, 2021

A synaptic unit circuit for transmitting signals between neurons of a hardware-implemented spiking neural network. The synaptic unit connects a pre-synaptic neuron to a post-synaptic neuron.

48. Memory Unit with Adaptive Clamping Voltage and Calibration for Single-Cycle Multi-Bit Neural Network Operations

NATIONAL TSING HUA UNIVERSITY, 2021

A memory unit for computing-in-memory applications that enables multi-bit neural network operations in a single cycle. The unit employs an adaptive clamping voltage scheme and calibration mechanism to generate multiple bit-line currents from a single non-volatile memory cell, eliminating the need for multiple cycles and reducing errors caused by current overlap. The clamping voltage is dynamically adjusted based on the reference voltage and bit-line current to achieve precise control over the output currents.

[Patent drawing: US11195090B1]

49. Neuromorphic Processor with Synapse Element Incorporating Dual Variable Resistance Memory Cells and Transistors

SAMSUNG ELECTRONICS CO LTD, 2021

A neuromorphic processor with improved dynamic range and reduced power consumption, comprising a synapse element with a novel architecture that combines multiple variable resistance memory cells and transistors to enable both positive and negative output voltage ranges while operating from a single power supply voltage. The synapse element stores synaptic weights in two bit cells, receives input through multiple wordlines, performs calculations, and outputs results through bitlines, achieving a wider dynamic range and lower power consumption compared to conventional neuromorphic processors.

[Patent drawing: US11176993B2]
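The signed-output idea from the dual-cell synapse can be modeled numerically. This sketch assumes the common differential-pair reading (each weight stored as a conductance pair, output taken as a column-current difference), which matches the description above in spirit but is not the patent's exact circuit:

```python
import numpy as np

# Sketch: each synapse stores a weight as two conductances (G_pos, G_neg);
# the difference of the two column currents yields both positive and
# negative contributions from a single supply voltage.

def signed_mvm(v_in, g_pos, g_neg):
    """Column currents I = V @ G; the signed output is I_pos - I_neg."""
    return v_in @ g_pos - v_in @ g_neg

v = np.array([1.0, 0.5])                    # wordline input voltages
g_pos = np.array([[0.8, 0.0], [0.2, 0.6]])  # conductances of the "+" cells
g_neg = np.array([[0.0, 0.5], [0.4, 0.1]])  # conductances of the "-" cells
print(signed_mvm(v, g_pos, g_neg))          # effective weights can be < 0
```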

50. Resistive Processing Unit Architecture with Separate Matrices for Weight Update Accumulation and Inference Operations

INTERNATIONAL BUSINESS MACHINES CORP, 2021

A resistive processing unit (RPU) architecture that enables efficient weight update and read operations in RPU cells. The architecture employs separate matrices for weight update accumulation and inference operations, allowing independent execution of these tasks within a crossbar array of tunable resistive devices. The weight update accumulation circuitry maintains an accumulation value and outputs a control signal when a threshold is reached, while the weight update control circuitry adjusts the conductance level of the tunable resistive device in response to the control signal. This architecture enables symmetric weight updates and read operations in RPU cells, overcoming limitations of tunable resistive devices.

[Patent drawing: US11157810B2]
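The threshold-triggered update loop described above can be sketched per cell. The step size and threshold values here are illustrative assumptions; the point is that conductance pulses fire only when the separate accumulator crosses its threshold:

```python
# Sketch: gradient contributions accumulate off to the side; the resistive
# device is only stepped when the accumulator crosses a threshold, so every
# applied update is the same fixed-size (symmetric) conductance pulse.

class RPUCell:
    def __init__(self, step=0.1, threshold=1.0):
        self.weight = 0.0   # conductance used for inference
        self.acc = 0.0      # separate weight-update accumulation value
        self.step = step
        self.threshold = threshold

    def accumulate(self, grad):
        self.acc += grad
        while abs(self.acc) >= self.threshold:    # control signal fires
            direction = 1.0 if self.acc > 0 else -1.0
            self.weight += direction * self.step  # one conductance pulse
            self.acc -= direction * self.threshold

cell = RPUCell()
for g in [0.4, 0.4, 0.4]:     # small updates below threshold...
    cell.accumulate(g)
print(round(cell.weight, 2))  # ...trigger a single +0.1 step: 0.1
```

Decoupling accumulation from the device write is what sidesteps the asymmetric, noisy small-step behavior of real tunable resistive devices.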

51. Analog Neuromorphic Circuit with Resistive Memory-Based Non-Binary Dot-Product Execution

52. Neural Processing Unit with Mixed-Precision Spatial Fusion and Load Balancing Architecture

53. Neural Network Computing Device with On-Device Quantizer Utilizing Calibrated Gain and DC Offset for Weight Data Quantization

54. Deep Learning Accelerator with Wafer-Scale Integrated 2D Mesh of Processing Elements and Flow-Based Computation

55. Integrated Circuit Architecture with Mesh Array and Border Cores for In-Memory Computing
