Large-scale pretrained models (BERT, T5, and GPT) have dramatically changed the landscape of artificial intelligence across natural language processing, computer vision, and robotics. With their huge numbers of parameters, these models encode rich knowledge and serve as effective backbones for downstream tasks, compared to training models from scratch.
This seminar is the first part of the "pretrained models" course and focuses on transformer-based pretrained models with encoder-only architectures. It is a transition course, designed to equip students with the essential knowledge to engage with cutting-edge NLP research.
- Course instructor: Meng Li
- Course instructor: Sherzod Hakimov