Fall 2017: CS 7960 Special Topics: Neuromorphic Architectures

General Information:

Course Description:

The course will cover hardware approaches for implementing neural-inspired algorithms. Neural-inspired algorithms use a variety of models for (i) the neuron (e.g., perceptrons and spiking neurons), (ii) connectivity among neurons (e.g., feed-forward, recurrent, reservoir), and (iii) training (e.g., back-propagation and (brace yourself) spike-timing-dependent plasticity). The course will briefly discuss these algorithms, but will primarily focus on state-of-the-art hardware approaches for implementing them. These approaches will ultimately yield accelerator chips used for a variety of cognitive tasks in datacenters, mobile devices, self-driving cars, etc.
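To make the two neuron models mentioned above concrete, here is a toy sketch of each: a perceptron (a weighted sum through a step activation) and a leaky integrate-and-fire (LIF) spiking neuron. These are textbook illustrations only, not the hardware implementations the course covers; all parameter values are arbitrary.

```python
# Toy illustrations of the two neuron models named in the course description.
# Textbook versions only -- not any particular accelerator's implementation.

def perceptron(x, w, b):
    """Classic perceptron: fire (1) if the weighted input sum exceeds 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential accumulates
    input, decays each step, and emits a spike (then resets) on crossing
    the threshold."""
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i          # integrate input with leak
        if v >= threshold:
            spikes.append(1)
            v = 0.0               # reset potential after spiking
        else:
            spikes.append(0)
    return spikes

# A perceptron computing logical AND of two binary inputs:
print(perceptron([1, 1], [1.0, 1.0], -1.5))  # -> 1
print(perceptron([0, 1], [1.0, 1.0], -1.5))  # -> 0

# An LIF neuron under a constant input spikes periodically:
print(lif_spikes([0.4] * 6))  # -> [0, 0, 1, 0, 0, 1]
```

Note the key contrast the lectures build on: the perceptron produces a value per evaluation, while the spiking neuron communicates through the timing of discrete events, which is what architectures like TrueNorth exploit.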

The course does not have any formal prerequisites, but is intended primarily for graduate students with some familiarity with architecture and/or machine learning. The lectures will be self-contained, i.e., I will provide sufficient background in architecture and machine learning to make the material accessible. Most class lectures will be based on recent research papers (see tentative schedule below). Students will also work in groups on semester-long projects -- the projects will compare the implementations of various cognitive tasks with different algorithms and hardware approaches.

University Support:

College of Engineering Policies (Disability, Add, Drop, Appeals, etc.): Guidelines from the college.

Class rosters are provided to the instructor with the student's legal name as well as "Preferred first name" (if previously entered by you in the Student Profile section of your CIS account). While CIS refers to this as merely a preference, I will honor you by referring to you with the name and pronoun that feels best for you in class, on papers, exams, group projects, etc. Please advise me of any name or pronoun changes (and please update CIS) so I can help create a learning environment in which you, your name, and your pronoun will be respected.


Grading:

The following is a tentative breakdown and may undergo changes. The class project accounts for 50% of the final grade, two take-home exams for 40%, and class participation and class presentations for 10%.

Tentative (last year's) Class Schedule

Dates Lecture Topic
Tue Aug 22 Overview, landscape, history of neural-based hardware
Thu Aug 24 Intro to Deep Learning Algorithms
Tue Aug 29 The DianNao Architecture
Thu Aug 31 The DaDianNao Architecture
Tue Sep 5 Deep Compression
Thu Sep 7 EIE and Cnvlutin Architectures
Tue Sep 12 Analog Accelerator ISAAC
Thu Sep 14 Analog Accelerator ISAAC
Tue Sep 19 Spiking Neuron Intro
Thu Sep 21 TrueNorth Architecture
Tue Sep 26 TrueNorth Details
Thu Sep 28 TrueNorth Stochasticity
Tue Oct 3 Comparing SNNs and MLPs
Thu Oct 5 Apps on TrueNorth
Tue/Thu Oct 10/12 Fall Break
Tue Oct 17 Google TPU, MSR Catapult/Brainwave
Thu Oct 19 Eyeriss, ShiDianNao Dataflow
Tue Oct 24 Training with vDNN
Thu Oct 26 Systolic Arrays I -- sort, matrix ops, graphs
Tue Oct 31 Systolic Arrays II -- add, mult, eqn solver
Thu Nov 2 Scaledeep, SCNN
Tue Nov 7 Neurosurgeon
Thu Nov 9 Project discussions
Tue Nov 14 Project presentations
Thu Nov 16 Project presentations
Tue Nov 21 Molecular Dynamics Accelerator
Thu Nov 23 Thanksgiving
Tue Nov 28 Sequence Alignment
Thu Nov 30 Emerging Machine Learning Workloads, LSTM, RNNs (Vivek)
Tue Dec 5 Project presentations
Thu Dec 7 Project presentations