
Neuromatch Academy (NMA) Deep Learning syllabus


August 2-20, 2021

Objectives: Gain hands-on, code-first experience with deep learning theories, models, and skills that are useful for applications and for advancing science. We focus on how to decide which problems can be tackled with deep learning, how to determine which model is best, how best to implement a model, how to visualize and justify findings, and how neuroscience can inspire deep learning. Throughout, we emphasize the ethical use of DL.

Please check out the expected prerequisites here!

Confirmed speakers:

Course materials


Coming soon... stay tuned...

Course outline


Week 1: The basics


Mon, August 2, 2021: Intro to DL academy

coordinated by Konrad Kording (U Penn)

Description Welcome, introduction to Google Colab, meet and greet, a bit of DL history, DL basics, and an introduction to PyTorch
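
As a first taste of PyTorch (a sketch of ours, not the official tutorial notebooks), tensors behave much like NumPy arrays but can live on a GPU:

```python
import torch

# Use the GPU if Colab gives you one, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Create a 2x3 tensor of random values on that device
x = torch.randn(2, 3, device=device)

# Elementwise operations and reductions work much like NumPy
y = x * 2 + 1
print(y.mean())
```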


Tue, August 3, 2021: Linear DL

coordinated by Andrew Saxe (Oxford)

Description Gradients, AutoGrad, linear regression, concept of optimization, loss functions, designing deep linear systems and how to train them
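
For a flavor of what autograd buys you, here is a minimal linear-regression sketch of ours (not the day's tutorial code): PyTorch computes the gradients, and we apply plain gradient descent by hand:

```python
import torch

# Toy data for y = 3x + 2 with a little noise
x = torch.linspace(0, 1, 100).unsqueeze(1)
y = 3 * x + 2 + 0.1 * torch.randn_like(x)

# requires_grad=True tells autograd to track these parameters
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

for step in range(200):
    loss = ((x * w + b - y) ** 2).mean()  # mean squared error
    loss.backward()                       # autograd fills in w.grad and b.grad
    with torch.no_grad():                 # plain gradient-descent update
        w -= 0.5 * w.grad
        b -= 0.5 * b.grad
        w.grad.zero_()
        b.grad.zero_()

print(w.item(), b.item())  # should end up near 3 and 2
```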


Wed, August 4, 2021: Multi-layer Perceptrons (MLPs)

coordinated by Surya Ganguli (Stanford)

Description From neuroscience inspiration, to solving the XOR problem, to function approximation, cross-validation, training, and trade-offs
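
A single linear layer cannot solve XOR, but one hidden layer with a nonlinearity can; here is a minimal sketch of ours (not the day's tutorial code):

```python
import torch
import torch.nn as nn

# The four XOR input/output pairs
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

# One hidden layer is enough to separate the XOR classes
model = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 1))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(torch.sigmoid(model(X)).round())  # approximately [[0], [1], [1], [0]]
```

(With an unlucky initialization the network can occasionally get stuck; rerunning fixes it.)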


Thu, August 5, 2021: Optimization

coordinated by Ioannis Mitliagkas (MILA)

Description Why optimization is hard and all the tricks to get it to work
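
Two of the standard tricks, momentum and learning-rate schedules, in a minimal PyTorch sketch of ours (the model, data, and loss are stand-ins):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)

# SGD with momentum, plus a schedule that decays the learning rate 10x every 30 epochs
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(opt, step_size=30, gamma=0.1)

for epoch in range(90):
    x = torch.randn(32, 10)        # stand-in for a real minibatch
    loss = (model(x) ** 2).mean()  # stand-in for a real loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    scheduler.step()
```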


Fri, August 6, 2021: Regularization

coordinated by Lyle Ungar (U Penn)

Description The problem of overfitting and different ways to solve it
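
Two common remedies, dropout and weight decay, in a minimal sketch of ours:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(100, 50),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # randomly zero half the activations during training
    nn.Linear(50, 10),
)

# weight_decay adds an L2 penalty that shrinks the weights toward zero
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

model.train()  # dropout is active while training
# ... training loop goes here ...
model.eval()   # dropout is disabled at evaluation time
```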


Week 2: Doing more with fewer parameters


Mon, August 9, 2021: Parameter sharing: Convnets and recurrent neural networks (RNNs)

coordinated by Alona Fyshe (U Alberta)

Description How the number of parameters affects generalization, and what convolutional neural networks (Convnets) and RNNs can do for you to help
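
Parameter sharing is why a convolution is so much cheaper than a fully connected layer on the same image; a quick sketch of ours makes the point:

```python
import torch
import torch.nn as nn

# A 3x3 convolution slides the same small filter over the whole image,
# so its parameter count does not depend on the image size
conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
print(sum(p.numel() for p in conv.parameters()))  # 8*1*3*3 + 8 = 80

# A fully connected layer producing the same output size on a 28x28 input
fc = nn.Linear(28 * 28, 8 * 28 * 28)
print(sum(p.numel() for p in fc.parameters()))    # about 4.9 million

x = torch.randn(1, 1, 28, 28)
print(conv(x).shape)  # torch.Size([1, 8, 28, 28])
```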


Tue, August 10, 2021: Modern Convnets

coordinated by Alexander Ecker (U Goettingen)

Description Transfer learning, continual learning, and replay, and why they're essential
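
Transfer learning in its simplest form: take a pretrained network, freeze it, and retrain only a new head. A minimal sketch of ours using torchvision (the 10-class task is hypothetical; newer torchvision versions use a `weights=` argument instead of `pretrained=True`):

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet and freeze its feature extractor
model = models.resnet18(pretrained=True)
for p in model.parameters():
    p.requires_grad = False

# Replace the final classifier head for a new 10-class task;
# only these new weights will be trained
model.fc = nn.Linear(model.fc.in_features, 10)
```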


Wed, August 11, 2021: Modern RNNs

coordinated by James Evans (DeepAI)

Description Memory, time series, recurrence, vanishing gradients, and embeddings
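
Embeddings and recurrence in a minimal PyTorch sketch of ours (the vocabulary and sizes are made up):

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 64, 128

# Map token ids to dense vectors, then run an LSTM over the sequence
embedding = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

tokens = torch.randint(0, vocab_size, (8, 20))  # batch of 8 sequences, length 20
outputs, (h_n, c_n) = lstm(embedding(tokens))
print(outputs.shape)  # torch.Size([8, 20, 128]): one hidden state per time step
```

(LSTMs use gating to ease the vanishing-gradient problem that plagues plain RNNs.)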


Thu, August 12, 2021: Attention and Transformers

coordinated by He He (NYU)

Description How attention helps classification, encoding and decoding
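
The core operation behind Transformers is scaled dot-product attention; a minimal sketch of ours:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # Each query attends to every key; the softmax weights sum to 1 over the keys
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])
    weights = scores.softmax(dim=-1)
    return weights @ v

q = torch.randn(2, 5, 16)  # (batch, queries, dim)
k = torch.randn(2, 7, 16)  # (batch, keys, dim)
v = torch.randn(2, 7, 16)  # one value per key
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 5, 16])
```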


Fri, August 13, 2021: Generative models (VAEs & GANs)

coordinated by Vikash Gilja (UCSD)

Description Variational auto-encoders (VAEs) and Generative Adversarial Networks (GANs) as methods for representing latent data statistics
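
Two ingredients of the VAE objective, in a minimal sketch of ours: the reparameterization trick, which keeps sampling differentiable, and the KL term that pulls the latent code toward a standard normal prior:

```python
import torch

def reparameterize(mu, log_var):
    # Sample z = mu + sigma * eps so gradients flow through mu and log_var
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    return mu + std * eps

def kl_divergence(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch
    return -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=1).mean()
```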


Week 3: Advanced methods


Mon, August 16, 2021: Unsupervised and self-supervised learning

coordinated by Blake Richards (McGill) and Tim Lillicrap (Google DeepMind)

Description Learning without direct supervision
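
One popular flavor is contrastive learning: embed two augmented views of the same input and train them to match. A minimal InfoNCE-style sketch of ours:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    # z1[i] and z2[i] embed two views of the same input; row i of z1 should
    # match row i of z2 and none of the other rows in the batch
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.shape[0])
    return F.cross_entropy(logits, targets)
```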


Tue, August 17, 2021: Basic Reinforcement Learning (RL) ideas

coordinated by Jane Wang (Google DeepMind)

Description How RL can help solve DL problems
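
For the basic ideas, tabular Q-learning is the classic starting point; a minimal sketch of ours for a toy environment with discrete states and actions:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration

def act(state):
    # Epsilon-greedy: explore occasionally, otherwise take the best-known action
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(Q[state].argmax())

def update(state, action, reward, next_state):
    # Move Q(s, a) toward the reward plus the discounted best future value
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])
```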


Wed, August 18, 2021: RL for games

coordinated by Tim Lillicrap (Google DeepMind) and Blake Richards (McGill)

Description Learn how RL solved the game of Go


Thu, August 19, 2021: Continual learning / Causality / future stuff

coordinated by Joshua T. Vogelstein (Johns Hopkins) and Vincenzo Lomonaco (U Pisa)

Description How can we get at causality, how can we generalize out of sample, and what will the future bring?


Fri, August 20, 2021: Finishing Proposals and Wrap-up

coordinated by The NMA-DL Team

Description This day is dedicated to group projects and celebrating course completion

