Spring 2021: CS/ECE 7710 Neuromorphic Architectures


General Information:


Course Description:

The course will cover hardware approaches for implementing neural-inspired algorithms. In recent years, machine learning and AI have re-emerged as effective solutions to a number of difficult and economically relevant problems. These techniques will likely enable autonomous vehicles, healthcare solutions, assistive technologies, etc., and will be deployed in datacenters, mobile phones, self-driving cars, and sensors. The course will start with a brief primer on why machine learning has made significant strides in the past decade. We will then move to specialized processors (accelerators) that can efficiently execute a large family of machine learning algorithms, for both inference and training. We will focus our discussions on accelerators for artificial, spiking, and convolutional neural networks -- areas that have dominated recent architecture conferences. We will end the course by discussing how the learned concepts apply to other relevant application domains, e.g., genomic analysis.

The course does not have any formal prerequisites, but is intended primarily for graduate students with some familiarity with computer architecture and/or machine learning. The lectures will be self-contained, i.e., I will provide sufficient background in architecture and machine learning to make the material accessible. Most class lectures will be based on recent research papers (see the tentative schedule below). Students will also work in groups on semester-long projects -- the projects will compare implementations of various cognitive tasks across different algorithms and hardware approaches.


University Support:

College of Engineering Policies (Disability, Add, Drop, Appeals, Safety, etc.).

School of Computing Policies

Class rosters are provided to the instructor with the student's legal name as well as "Preferred first name" (if previously entered by you in the Student Profile section of your CIS account). While CIS refers to this as merely a preference, I will honor you by referring to you with the name and pronoun that feels best for you in class, on papers, exams, group projects, etc. Please advise me of any name or pronoun changes (and please update CIS) so I can help create a learning environment in which you, your name, and your pronoun will be respected.


Grading:

The following is a tentative guideline and may change. The class project accounts for 50% of the final grade, two take-home exams for 40%, and class participation and class presentations for 10%.
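For illustration only, here is a minimal sketch of how a final score could be computed under the tentative weights above; the component scores in the sketch are hypothetical, not actual course data:

    # Illustrative sketch: tentative syllabus weights, hypothetical component scores.
    weights = {"project": 0.50, "exams": 0.40, "participation": 0.10}
    scores = {"project": 92.0, "exams": 85.0, "participation": 100.0}  # hypothetical
    final = sum(weights[c] * scores[c] for c in weights)
    print(f"Final score: {final:.1f}")  # prints 90.0 for these example numbers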


Tentative Class Schedule

Date          Lecture Topic
Tue Jan 19 Overview, landscape, history of neural-based hardware
Thu Jan 21 Intro to Deep Learning Algorithms
Tue Jan 26 Custom SIMD Architectures: DianNao
Thu Jan 28 The DaDianNao Architecture
Tue Feb 2 Deep Compression
Thu Feb 4 Deep Compression Architectures
Tue Feb 9 Systolic architectures: Eyeriss
Thu Feb 11 Commercial architectures: Google TPU, Tesla FSD
Tue Feb 16 Commercial architectures: NVIDIA Volta, Graphcore, Intel NNP
Thu Feb 18 Architectures for Training: intro, vDNN, ScaleDeep
Tue Feb 23 More Training Innovations: HyPar, GIST, PipeDream
Thu Feb 25 Analog Accelerators: ISAAC
Tue Mar 2 Spiking Neuron Intro
Thu Mar 4 TrueNorth Architecture
Tue Mar 9 Take-Home Midterm Exam, Project Planning
Thu Mar 11 Take-Home Midterm Exam, Project Planning
Tue Mar 16 Comparing SNNs and ANNs
Thu Mar 18 Self-Driving Car Pipeline
Tue Mar 23 Exploiting Variable Precision
Thu Mar 25 Simba, Planaria
Tue Mar 30 Ineffectuals
Thu Apr 1 In-Memory Processing
Tue Apr 6 Gradient Overheads, 3D CNN Architectures
Thu Apr 8 Project discussions
Tue Apr 13 Sequence Alignment
Thu Apr 15 Accelerators for Precision Medicine
Tue Apr 20 Systolic Arrays -- sort, matrix mult, eqn solvers
Thu Apr 22 Project Presentations
Tue Apr 27 Project Presentations
Due May 5 Take-Home Final Exam, Project Reports