
A deep understanding of AI large language model mechanisms

An intensive masterclass on the internal mechanisms of Transformers and Large Language Models, including attention, tokenization, and scaling laws. The course teaches how to build LLM components from scratch and how to fine-tune models for custom NLP tasks.

PyTorch
Hugging Face (Transformers/Datasets)
Tokenizers
CUDA
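
Since the course builds LLM components from scratch in PyTorch, the core operation it centers on can be sketched as scaled dot-product attention. This is an illustrative sketch, not the course's own code; the function name and tensor shapes are assumptions:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_model); illustrative shapes, not from the course
    d_k = q.size(-1)
    # similarity of every query with every key, scaled by sqrt(d_k)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # (batch, seq_len, seq_len)
    # softmax turns each row of scores into attention weights summing to 1
    weights = F.softmax(scores, dim=-1)
    # each output token is a weighted mix of the value vectors
    return weights @ v                               # (batch, seq_len, d_model)

# toy self-attention: one sequence of 4 tokens with 8-dim embeddings
x = torch.randn(1, 4, 8)
out = scaled_dot_product_attention(x, x, x)
```

In self-attention, as here, the same tensor serves as queries, keys, and values; the output keeps the input's shape, which is what lets Transformer blocks be stacked.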

Learning in Progress

Currently working through the advanced sections of this course.

Instructor

Mike X Cohen

Duration

91 hours

Platform

Udemy

Course Curriculum