This seminar offers an in-depth exploration of neuromorphic computing from biological foundations to modern chip implementations. It bridges neuroscience, machine learning, and VLSI hardware design, guiding students through the principles and engineering practices of brain-inspired computation.
The course begins with an introduction to the motivation and history of neuromorphic computing, followed by the biophysical basis of neuronal communication, including membrane potential, action potentials, and spiking neuron models (Hodgkin–Huxley, Izhikevich, and Leaky Integrate-and-Fire). It then examines mechanisms of synaptic plasticity such as Hebbian learning, STDP, BCM, and Oja's rule, and extends to short-term plasticity, homeostatic regulation, winner-take-all networks, and three-factor learning rules.
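To make these modeling topics concrete, the following minimal Python sketch shows leaky integrate-and-fire membrane dynamics and a pair-based STDP weight update. All constants (time constants, threshold, input current, learning rates) are illustrative assumptions, not values prescribed by the course.

import numpy as np

# Leaky integrate-and-fire: tau_m * dV/dt = -(V - V_rest) + R * I
tau_m, V_rest, V_th, V_reset, R = 10.0, -65.0, -50.0, -65.0, 1.0  # illustrative constants (ms, mV, MOhm)
dt, I = 0.1, 20.0  # time step (ms) and constant input current (nA); both assumed
V, spike_times = V_rest, []
for step in range(2000):  # simulate 200 ms
    V += dt * (-(V - V_rest) + R * I) / tau_m  # forward-Euler update of the membrane potential
    if V >= V_th:                              # threshold crossing: emit a spike and reset
        spike_times.append(step * dt)
        V = V_reset

# Pair-based STDP: potentiation for causal (pre-before-post) pairs, depression otherwise
def stdp_dw(delta_t, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike pair with delta_t = t_post - t_pre (ms)."""
    if delta_t > 0:
        return A_plus * np.exp(-delta_t / tau_plus)   # LTP
    return -A_minus * np.exp(delta_t / tau_minus)     # LTD

print(f"{len(spike_times)} spikes in 200 ms; dw for a +5 ms pairing: {stdp_dw(5.0):.4f}")

With these constants the steady-state potential V_rest + R*I sits above threshold, so the neuron fires regularly, which is why the simulation produces a steady spike train.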
Students will study the algorithmic side of spiking neural networks (SNNs)—their coding schemes, temporal dynamics, and training methods (e.g., STDP, reward-modulated learning, surrogate gradients). On the hardware side, the course provides case studies of leading neuromorphic processors such as IBM TrueNorth and Intel Loihi, analyzing their architectures, asynchronous-synchronous design methodologies, learning engines, and scalability features.
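As an illustration of the surrogate-gradient idea mentioned above (the hard spike threshold is kept in the forward pass while a smooth function stands in for its derivative in the backward pass), here is a minimal sketch assuming PyTorch; the fast-sigmoid surrogate and its sharpness beta are arbitrary illustrative choices, not a specific method covered in the seminar.

import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth fast-sigmoid derivative in the backward pass."""
    @staticmethod
    def forward(ctx, v_minus_th):
        ctx.save_for_backward(v_minus_th)
        return (v_minus_th > 0).float()  # non-differentiable spike nonlinearity
    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_th,) = ctx.saved_tensors
        beta = 10.0  # surrogate sharpness; an assumed, tunable value
        return grad_output / (beta * v_minus_th.abs() + 1.0) ** 2  # fast-sigmoid surrogate derivative

membrane = torch.randn(5, requires_grad=True)  # hypothetical membrane potentials minus threshold
spikes = SurrogateSpike.apply(membrane)
spikes.sum().backward()
print(membrane.grad)  # nonzero gradients flow despite the hard threshold

Because the backward pass sees a smooth function, gradients propagate through spiking layers, which is what allows standard backpropagation-based training of SNNs.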
Each participant will present and discuss recent research papers on neuromorphic architectures and algorithms, and prepare a written report summarizing key insights.
Learning outcomes:
- Understand the neurobiological foundations underlying spiking computation and synaptic learning.
- Model and analyze spiking neurons, plasticity mechanisms, and learning rules.
- Evaluate and compare digital neuromorphic chips in terms of architecture, scalability, and energy efficiency.
- Develop critical perspectives on how neural principles can guide next-generation computing hardware.
Assessment:
Based on two paper review presentations and a final technical report.
Prerequisites:
Basic knowledge of neural networks and computer architecture.
- Instructor: Dedong Zhao