Deep learning has become an indispensable tool in computer vision and natural language processing, and is increasingly applied to neuroimaging data. It has rapidly matched or surpassed human performance on several natural image recognition benchmarks, and a variety of image-to-image translation methods are now popular as another tool to map the brain. In this annual course, we present state-of-the-art methods in deep learning, with a focus on how these new techniques can readily be applied to your brain imaging data.
Ariel is a data scientist and hacker who runs the NeuroHackademy, a bootcamp for neuroimagers to gain skills while doing code sprints on a variety of innovative projects pushing the limits of brain mapping. An instructor with Data/Software Carpentry since 2013, he's a strong believer in using hackathons for education, and is particularly interested in using structural MR imaging to map the brain.
Patrick uses deep learning combined with Bayesian probabilistic modeling not only to improve results in medical image segmentation, but also to obtain a measure of uncertainty in predictions. While an expert in computer vision and machine learning, he has also worked in computational neuroscience, using deep learning networks to model the brain.
During his Master's in Biophotonics at CERVO, Anthony developed machine learning techniques to help biologists investigate synaptic proteins at the nanoscale. He is particularly enthusiastic about applying cutting-edge deep learning techniques to help researchers uncover the wonders of our brain.
Amy is a computer vision expert who specializes in realistic data augmentation, transforming her input data to expand the space of realistic configurations in which they might appear. She excels at detecting weak signals in the presence of large noise sources, and is at the forefront of generative modeling for learning robust models with limited data.
In his early work Robin mapped the world; he now maps the brain at the Neuromedical AI Lab Freiburg, with an emphasis on EEG. He is the lead developer of BrainDecode, a software library for deep learning-based EEG decoding. He also pursues pure deep learning research, with a particular focus on invertible network architectures that could enable more interpretable decoding than common approaches.
Yu is a neuroscientist devoted to developing machine learning tools to better understand human brain organization. Her previous work involved building a new brain atlas using diffusion and functional MRI. Her current research interest is to decode and simulate brain activity using deep artificial neural networks, with a special focus on graph-based modeling.
Adriana is on the forefront of machine learning research, and develops new methods to make deep learning more closely model the structure of the data. From networks producing parameters for genetics analysis to active acquisition for MRI reconstruction, Adriana pushes the limits of what's possible in biomedical sciences.
Vince was one of the first scientists to apply deep learning to neuroimaging, and has used it extensively to model functional magnetic resonance imaging data and build better maps of the brain. His group has investigated comparatively rare models like deep Boltzmann machines and has explored using deep learning to fuse modalities at multiple brain scales.
Saige studies neurodevelopment, and is interested in discovering low-dimensional representations that might predict how we grow. She uses deep learning and careful MR sequence design to characterize neonatal brain growth in challenging segmentation environments, and to predict brain age as a biomarker for early detection of disorders and degeneration.
Hannah is a computer vision expert who uses big data to map the brain at the microscopic level. She is interested in how deep learning can help us characterize and discover the variability in the cytoarchitecture of our brains, and works on understanding the connectivity and laminar structure of the reconstructed histological data in the BigBrain.
Grace is interested in parallels between how biological neural circuits relate to artificial models. Using deep learning tools, she studies vision and attention, with a particular interest in how Hebbian learning in artificial networks is reflected in brain recording data. She is particularly strong at scientific communication, and her podcast offers listeners a gentle and accessible introduction to all things neuro.
Bliss is a computer scientist, statistician, and trained classical ballet dancer. He explores how deep generative models of biomedical data can be used to quantify variability between populations and improve statistical power analyses. He has also explored how machine learning techniques can be applied to characterize low dimensional connectome dynamics.
Anders is a computational guru whose extensive work parallelizing neuroimaging pipelines on GPUs made deep learning a natural extension of his research program. He is interested in pushing the limits of generative neural network models to fill in missing modalities and enable large-scale neuroimaging research across incomplete datasets.
Steffen records high-resolution magnetic resonance imaging to create quantitative susceptibility maps that reflect information on biological tissue properties, predominantly myelin, iron and calcium. He uses deep learning to estimate these maps to study neurodegenerative diseases.
Jakob has innovatively applied standard deep learning segmentation techniques hierarchically to produce maps of individual tracts in diffusion MRI. He is now extending this work so that it generalizes easily to other problems, and so that these deep architectures become more accessible for researchers to adapt to their specific use cases.
Andrew teaches neuroimagers the basic (and advanced) deep learning methods in order to recruit collaborators who will teach him some neuroscience. This year, he co-organized hands-on educational courses at Resting State and Brain Connectivity and at the Montreal Artificial Intelligence and Neuroscience conference, and introduced deep learning as an instructor at the BrainHack Summer School.
Pamela uses machine learning to study Attention Deficit and Hyperactivity Disorder in fMRI and EEG data. An expert in unsupervised learning, she also recently presented work on the interpretability of fMRI weight maps at the Interpreting, Explaining and Visualizing Deep Learning Workshop of NIPS 2017, and co-organized the Neuroimaging and Machine Learning 2017 Workshop.
Andrew uses deep learning to automate portions of neuroimaging workflows. His work automating infant structural MRI quality control is being integrated into the LORIS databasing system, and he now works on using deep learning to predict future diagnoses in Alzheimer's Disease, Autism Spectrum Disorder, and Major Depression. He recently organized a deep learning-themed Brainhack.
Anisha combines the speed of inference in deep learning and crowdsourcing techniques to achieve expert-level ratings for structural MRI scans. Co-organizer of this year's OHBM hackathon, Anisha develops open-source web apps like Braindr and Mind Control to facilitate the analysis of neuroimaging data - and construct great datasets for deep learning.
Chris has made contributions to the WEKA toolbox at the University of Waikato in New Zealand, which hosts one of the largest open-source repositories of learning algorithms in the world. He now uses Generative Adversarial Networks (GANs) to bring the power of deep learning to smaller-sized medical imaging datasets.
Pim uses cutting-edge deep learning techniques to characterize neurodevelopment, and has worked on (preterm) infant and adult brain MRI. His work goes beyond automatic tissue segmentation to allow quantification of brain characteristics and prediction of neurodevelopmental impairments. He recently co-organized MICCAI workshops on neonatal, fetal and pediatric image analysis (PIPPI2016, FIFI2017) and is organizing the IEEE ISBI special session on Image Analysis of the Developing Brain in April 2018.
Alex is an expert in computer vision and interpretable machine learning. He has moved on from designing localization and object recognition systems for self-driving cars to medical applications. Co-organizer of the ACCV 2016 Workshop on Interpretation and Visualization of Deep Neural Nets, he has some of the world's best insight into the interpretability of deep neural networks, having won the Best Paper award at ICML 2016 with his paper Analyzing and Validating Neural Networks Predictions.
Non-exhaustive list of learning materials for those who want more!
Deep learning typically involves running the backpropagation algorithm on a large computational graph. This can be parallelized efficiently and easily with the help of tensor math libraries like those listed below. We recommend using Python 3 for this, but MATLAB now has deep learning built-in if you just can't quit closed-source software.
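To make this concrete, here is a minimal sketch of backpropagation through a small computational graph in Python 3, using PyTorch as one example of such a tensor library. The toy two-layer network, the random data, and the learning rate are purely illustrative assumptions, not part of the course materials; the same pattern applies to the other libraries listed.

# Minimal sketch (assumes PyTorch is installed): backpropagation on a tiny computational graph.
import torch

# Illustrative random data: 32 samples, 10 features, 1 regression target each.
x = torch.randn(32, 10)
y = torch.randn(32, 1)

# Two weight tensors form a one-hidden-layer network; requires_grad tells the
# library to track them in the computational graph.
w1 = torch.randn(10, 16, requires_grad=True)
w2 = torch.randn(16, 1, requires_grad=True)

for step in range(100):
    hidden = torch.relu(x @ w1)        # forward pass builds the graph
    pred = hidden @ w2
    loss = ((pred - y) ** 2).mean()    # mean squared error

    loss.backward()                    # backpropagation: gradients for w1 and w2
    with torch.no_grad():              # plain gradient-descent update
        w1 -= 1e-3 * w1.grad
        w2 -= 1e-3 * w2.grad
        w1.grad.zero_()                # clear gradients before the next step
        w2.grad.zero_()

The forward pass, loss, and backward pass are the same whether the tensors live on a CPU or a GPU; moving them to a GPU is what these libraries parallelize for you.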
© Team Beyond Linear 2018