Jannik Trapp
Deep Learning and Its Applications in Image Processing
taught by Nasser M. Nasrabadi
October 3, 2018

Level: Introductory
Length: 7 hours
Format: In-Person Lecture
Intended Audience:
Scientists, engineers, technicians, or managers who wish to learn more about deep learning architectures and their applications in image processing and machine learning. Undergraduate training in engineering or science is assumed.
Description:
This course provides a broad introduction to the basic concepts of classical neural networks (NN) and their evolution into current deep learning (DL) technology. The primary goal of this course is to introduce the well-known deep learning architectures and their applications in image processing for object detection, identification, verification, action recognition, scene understanding, and biometrics using single-modality or multimodal sensor information. The course traces the history of neural networks and their progress to current deep learning technology. It covers several DL architectures, such as the classical multi-layer feedforward neural network, convolutional neural networks (CNN), restricted Boltzmann machines (RBM), auto-encoders, and recurrent neural networks such as long short-term memory (LSTM) networks. The use of deep learning architectures for feature extraction and classification will be described and demonstrated. Examples of popular CNN-based architectures such as AlexNet, VGGNet, GoogLeNet (inception modules), ResNet, DeepFace, Highway Networks, and FractalNet, and their applications to defense and security, will be discussed. Advanced architectures such as Siamese deep networks, coupled neural networks, auto-encoders, and the fusion of multiple CNNs, along with their applications to object verification and classification, will also be covered.
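To give a flavor of the CNN-based feature extraction and classification described above, the sketch below stacks a small convolutional feature extractor and a linear classifier. It is illustrative only: the course does not prescribe a framework, and the PyTorch implementation, layer sizes, 32x32 RGB input, and 10-class output are assumptions, not course material.
```python
# Minimal sketch of CNN-based feature extraction and classification.
# Illustrative only: the framework (PyTorch), layer sizes, 32x32 RGB
# input, and 10 classes are assumptions, not taken from the course.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional "feature extractor": conv + ReLU + pooling blocks.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        # Fully connected "classifier" on top of the extracted features.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x)                  # feature maps
        feats = feats.flatten(start_dim=1)        # one feature vector per image
        return self.classifier(feats)             # class scores (logits)

# Example: a batch of four 32x32 RGB images -> 4x10 class scores.
logits = SmallCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```
The split in this sketch mirrors the split the description draws: the convolutional layers act as a learned feature extractor, while the final fully connected layer performs the classification.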
Learning Outcomes:
This course will enable you to:
- Identify the fundamental concepts of neural networks and deep learning
- Describe the major differences between classical neural networks and current deep learning architectures
- Explain the stochastic gradient descent algorithm used to train deep learning networks with different regularization methods (see the sketch after this list)
- Describe the popular CNN-based architectures (e.g., AlexNet, VGGNet, GoogLeNet, ResNet)
- Compare the relative merits of various deep learning architectures (MLP, CNN, RBM, and LSTM)
- Formulate CNNs and auto-encoders for feature extraction
- Demonstrate the use of deep learning frameworks for object, face, and pedestrian detection, pose estimation, and face identification
- Differentiate between Siamese and coupled deep learning architectures and their use for object verification and identification
- Design multiple deep learning architectures for multi-view face identification and multimodal biometrics applications
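One of the outcomes above concerns training with stochastic gradient descent (SGD) and regularization. The sketch below is illustrative only: the course does not prescribe a framework, and the PyTorch model, learning rate, weight-decay value, and random data are assumptions. It combines SGD with two common regularizers, weight decay (an L2 penalty) and dropout.
```python
# Minimal sketch of training with stochastic gradient descent (SGD) and
# two common regularization methods: weight decay (L2) and dropout.
# The model, hyperparameters, and random data are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Dropout(p=0.5),             # dropout regularization
    nn.Linear(128, 10),
)
criterion = nn.CrossEntropyLoss()
# weight_decay adds an L2 penalty on the weights to each SGD update.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)

for step in range(100):
    x = torch.randn(32, 64)              # mini-batch of 32 random samples
    y = torch.randint(0, 10, (32,))      # random class labels
    optimizer.zero_grad()
    loss = criterion(model(x), y)        # forward pass + loss
    loss.backward()                      # backpropagate gradients
    optimizer.step()                     # SGD parameter update
```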
Instructor(s):
Nasser M. Nasrabadi is a professor in the Lane Department of Computer Science and Electrical Engineering at West Virginia University. He was a senior research scientist (ST) at the US Army Research Laboratory (ARL). He is actively engaged in research in deep learning, image processing, automatic target recognition, and hyperspectral imaging for defense and security. He has published over 300 papers in journals and conference proceedings. He has been an associate editor for the IEEE Transactions on Image Processing, the IEEE Transactions on Circuits and Systems for Video Technology, and the IEEE Transactions on Neural Networks. He is a Fellow of IEEE and SPIE.
Event: SPIE Optics + Photonics 2018
Course Held: 19 August 2018
Issued on: October 3, 2018
Expires on: Does not expire