NEUROMORPHIC IN-MEMORY COMPUTING FOR ENERGY-EFFICIENT AI: A COMPREHENSIVE REVIEW


Pournima P R
Department of ECE, B E S Institute of Technology, Bengaluru, Karnataka, India
Abstract
As demand for artificial intelligence continues to grow, specialized hardware is needed to run AI algorithms efficiently. This is where VLSI (Very Large-Scale Integration) design comes into play: by integrating millions, or even billions, of transistors onto a single chip, VLSI design enables the creation of powerful AI-oriented integrated circuits. Designing hardware for AI, however, poses its own challenges. The complexity of AI algorithms and their demand for parallel processing call for innovative solutions; designers must balance power consumption, heat dissipation, and the integration of specialized neural network accelerators. They must also contend with the scaling limits of semiconductor technology: as AI workloads grow more complex, designers must raise transistor density and performance while working within the physical laws governing semiconductor manufacturing.
In-Memory Computing (IMC) is an emerging paradigm designed to overcome the limitations of the traditional von Neumann architecture, particularly for energy-efficient AI applications. Conventional systems suffer from the "memory wall": a bottleneck caused by the constant transfer of data between the CPU and memory, which degrades latency, bandwidth, and energy efficiency, especially for AI tasks dominated by large volumes of matrix operations. IMC addresses this bottleneck by performing computation, such as multiply-and-accumulate (MAC) operations, directly inside the memory arrays where the data resides.
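The MAC-in-memory idea the abstract describes can be sketched numerically. In a resistive crossbar (e.g. RRAM), weights are stored as cell conductances G, inputs are applied as row voltages V, and by Ohm's law and Kirchhoff's current law each column current is the analog dot product of a weight row with the input vector, so the matrix-vector multiply happens where the data lives. The conductance range and mapping below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

weights = rng.uniform(0.0, 1.0, size=(4, 3))  # target weight matrix (4 outputs, 3 inputs)
g_min, g_max = 1e-6, 1e-4                     # assumed device conductance window, in siemens

# Program the crossbar: map each weight linearly onto the conductance window
G = g_min + weights * (g_max - g_min)

V = np.array([0.2, 0.5, 0.1])                 # input voltages applied to the rows (volts)

# Kirchhoff summation of per-cell Ohm's-law currents along each column:
# I[j] = sum_i G[j, i] * V[i]  -- one analog MAC per output, no weight movement
I = G @ V

# Explicit loop over cells, mimicking the physical current summation
I_physical = np.array([sum(G[j, i] * V[i] for i in range(3)) for j in range(4)])
assert np.allclose(I, I_physical)
print(I)
```

The point of the sketch is that the weights never leave the array: only the small input vector and the summed output currents cross the memory boundary, which is the data-movement saving IMC targets.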
Keywords: Very Large-Scale Integration (VLSI), Artificial Intelligence (AI), In-memory computing (IMC), Neuromorphic, multiply-and-accumulate (MAC), RRAM (Resistive RAM), MRAM (Magnetoresistive RAM), PCM (Phase-Change Memory), FeFET (Ferroelectric FET), spiking neural networks (SNNs)
Journal Name: EPRA International Journal of Research & Development (IJRD)

Published on: 2025-07-11
Vol: 10
Issue: 7
Month: July
Year: 2025