All days – 11–13 September 2023
| Time | Activity |
| --- | --- |
| 09:45 | Join in |
| 10:00 | Course – Deep Learning for Computer Vision |
| 12:00 | Lunch break |
| 13:00 | Course – Deep Learning for Computer Vision |
| 16:00 | End of day |
At the end of the training, participants will be able to:
- Understand the mathematics behind a neural network
- Train their own neural network for different problems
- Improve the performance with different architectures
- Use existing models to cut training time and improve the outcome
Program
- Overview
Participants learn what Deep Learning is, which different forms it takes, and what the typical use cases are.
- Performance metrics
Participants learn how the performance of a neural network can be measured and what needs to be taken into account when doing so.
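A minimal sketch of one such consideration (assuming Python with scikit-learn, which the course description does not prescribe): on imbalanced data, accuracy alone can be misleading.

```python
# Minimal sketch: a model that always predicts the majority class looks good
# by accuracy but is useless by precision and recall.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]   # 9 negatives, 1 positive (imbalanced)
y_pred = [0] * 10                          # always predict the majority class

print("accuracy: ", accuracy_score(y_true, y_pred))                    # 0.9
print("precision:", precision_score(y_true, y_pred, zero_division=0))  # 0.0
print("recall:   ", recall_score(y_true, y_pred, zero_division=0))     # 0.0
```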
- Basic neural networks
Participants will build their first neural networks with fully connected (dense) layers to set a benchmark for further improvements. They will learn all about tensors, activation functions, loss functions and optimizers – in short, all about the mathematics behind neural networks.
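A minimal sketch of such a baseline, assuming TensorFlow/Keras (the framework is not specified here) and random data standing in for real images:

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 28, 28).astype("float32")     # stand-in for 28x28 images
y = np.random.randint(0, 10, size=(256,))              # stand-in labels, 10 classes

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),     # reshape the image tensor into a vector
    tf.keras.layers.Dense(128, activation="relu"),     # fully connected layer + activation function
    tf.keras.layers.Dense(10, activation="softmax"),   # class probabilities
])
model.compile(optimizer="adam",                        # optimizer
              loss="sparse_categorical_crossentropy",  # loss function
              metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=32)               # the benchmark run
```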
- Convolutional layers
These are the backbone of computer vision. Participants will learn how and why they work and how to integrate them into a neural network.
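A minimal sketch (again assuming Keras; shapes are illustrative) of how convolutional layers slot into such a model:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),                      # grayscale image with a channel axis
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu"),  # learn 32 local 3x3 filters
    tf.keras.layers.Conv2D(64, kernel_size=3, activation="relu"),  # stack filters for richer features
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),               # classifier head as before
])
model.summary()
```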
- Pooling and dropout layers
The complexity of a neural network often becomes unnecessarily large, leading to slow training and risk of overfitting. Pooling and dropout layers are some of the ways to alleviate this issue.
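A minimal sketch (assuming Keras) with both layer types in place:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),          # halve the spatial resolution
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),                       # randomly drop 50% of activations during training
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()                                          # far fewer parameters than without pooling
```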
- Different architectures
A lot in deep learning is based on trial and error. Countless different architectures of neural networks have already been built by researchers around the world. We will have a look at some of the best performing ones and will try to adapt them to our problem.
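As an illustration (assuming Keras; the architectures covered in the course may differ), a published architecture such as ResNet50 can be instantiated without weights and adapted to our own input size and number of classes:

```python
import tensorflow as tf

model = tf.keras.applications.ResNet50(
    weights=None,                  # the published architecture only, no pre-trained weights
    input_shape=(128, 128, 3),     # adapted to our own image size
    classes=5,                     # adapted to our own number of classes
)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```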
- Transfer learning
Lack of data is one of the biggest challenges in building a well-performing model for computer vision. Luckily, there are many models that have already been trained on vast amounts of data. We will try to adapt these fully trained models to meet our goals. This is called transfer learning.
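A minimal sketch (assuming Keras; the ImageNet-trained MobileNetV2 base is an illustrative assumption) of reusing a pre-trained network as a frozen feature extractor with a small new head:

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(160, 160, 3))
base.trainable = False                                  # freeze the pre-trained weights

model = tf.keras.Sequential([
    base,                                               # reused feature extractor
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),     # new head for our own classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```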
- Fine-tuning
Once we have leveraged the power of pre-trained models, we can fine-tune some hyperparameters to increase performance.
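One common form of this, sketched below under the same Keras assumptions as above: unfreeze only the top of the pre-trained base and retrain it together with the new head at a much lower learning rate.

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(160, 160, 3))
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])

base.trainable = True
for layer in base.layers[:-20]:                               # keep all but the last ~20 layers frozen
    layer.trainable = False

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),   # much smaller learning rate than before
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```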
- Segmentation
A major challenge in computer vision is not just to classify a perfectly photographed object, but to detect it within the wider frame of a picture. This is called segmentation.
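A minimal sketch (assuming Keras; real segmentation networks such as U-Net are considerably deeper) of the key difference: the model outputs a class prediction for every pixel rather than one label per image.

```python
import tensorflow as tf

num_classes = 3
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(2),                                # downsample (encoder)
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.UpSampling2D(2),                                # upsample back (decoder)
    tf.keras.layers.Conv2D(num_classes, 1, activation="softmax"),   # per-pixel class scores
])
print(model.output_shape)   # (None, 128, 128, 3): one prediction per pixel
```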
- Working on a supercomputer
There are many freely available computing resources out there. VSC-5 is Austria’s fastest supercomputer. It is not just a machine used by academia, but can also be utilized by SMEs/industry. During the course we will have a look at how this can be done.
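As one small example of what running on such a system involves (assuming TensorFlow; job submission on VSC-5 itself is covered in the course), the first step inside a GPU job is usually to confirm that the framework actually sees the GPUs allocated to it:

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")     # GPUs visible inside the job
print(f"GPUs visible to TensorFlow: {len(gpus)}")
for gpu in gpus:
    print(" ", gpu.name)
```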
- Outlook
In this final topic, participants get a glimpse of what else can be done with deep learning. We are going to talk about RNNs, GANs and topics that are currently more present in the media, such as ChatGPT.