Learn LLM Foundations

Why Learn LLM Foundations?

Large Language Models (LLMs) are rapidly transforming industries, creating exciting opportunities while also making it harder for mid-career professionals to stay relevant. This course offers experienced tech professionals a clear, accessible pathway to essential LLM knowledge, so they can navigate the evolving tech landscape and effectively apply what they learn in their own work.

Course Information

Summary

This course provides a practical, hands-on introduction to the core concepts behind Large Language Models (LLMs). It prioritizes intuition and practical application over mathematical rigor, which makes it especially valuable for working professionals. We start with the fundamentals of deep learning and progressively build toward understanding and implementing a simplified Large Language Model. See the course schedule below for details.

Schedule

Class days

The course runs from Tue, Jun 24th to Thu, Jul 31st, from 8:00 to 9:15 pm ET, every Tuesday and some Thursdays. Recordings will be available right after each class, so it's okay if you have to miss one. Live participation and completing the homework assignments are highly recommended to get the most out of the course. See below for course modules and registration details.

Module 1: Introduction to Generative AI, LLMs, and Neural Networks

We will kick off the course with a foundational understanding of AI, machine learning, and deep learning. We'll build intuition for how neural networks work and train a simple neural network in Python for a prediction task. This module should give you a concrete sense of how these networks learn.
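To give a flavor of the hands-on exercises, here is a minimal sketch (illustrative only, not the actual course material): the smallest possible "network", a single neuron, learning the pattern y = 2x + 1 by gradient descent.

```python
# Illustrative sketch: one neuron learning y = 2x + 1 with gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 2 * X + 1                              # the pattern we want the network to learn

w, b, lr = 0.0, 0.0, 0.1                   # initial weights and a learning rate
for _ in range(500):
    pred = w * X + b                       # forward pass
    grad_w = np.mean(2 * (pred - y) * X)   # gradient of mean squared error w.r.t. w
    grad_b = np.mean(2 * (pred - y))       # ... and w.r.t. b
    w -= lr * grad_w                       # nudge the weights to reduce the error
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")     # close to 2 and 1
```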

Module 2: Tokenization

We'll learn how text is converted into a numerical format that these models can understand. We will also learn about different tokenization methods and implement a custom tokenizer in Python.
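As a rough preview (not the course's tokenizer), the core idea fits in a few lines of Python: build a vocabulary that maps each token to an integer id, then encode and decode text with it. Real LLMs use subword methods such as byte-pair encoding; this toy version simply splits on whitespace.

```python
# Toy word-level tokenizer: map pieces of text to integer ids and back.
text = "the cat sat on the mat"

vocab = {tok: i for i, tok in enumerate(sorted(set(text.split())))}
inv_vocab = {i: tok for tok, i in vocab.items()}

def encode(s):
    return [vocab[tok] for tok in s.split()]

def decode(ids):
    return " ".join(inv_vocab[i] for i in ids)

ids = encode("the cat sat")
print(ids)          # [4, 0, 3] for this vocabulary
print(decode(ids))  # "the cat sat"
```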

Module 3: Embeddings

We will learn how words and tokens are represented as dense vectors that capture semantic meaning. We will then train a neural network that uses these embeddings to predict the next token in a sequence.
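For a rough sense of what an embedding is (illustrative values only; in a real model the vectors are learned during training): each token gets a dense vector, and similarity between vectors reflects similarity in meaning.

```python
# Illustrative only: random vectors stand in for embeddings that would be learned.
import numpy as np

vocab = ["cat", "dog", "car"]
dim = 4                                           # real models use hundreds of dimensions
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), dim))   # one dense vector per token

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

cat, dog = embeddings[vocab.index("cat")], embeddings[vocab.index("dog")]
print(cosine(cat, dog))   # with trained embeddings, "cat" and "dog" would score high
```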

Module 4: Attention

Attention is a core concept behind LLMs. We will break down the "Attention Is All You Need" paper, understand the intuition behind the attention mechanism, and implement it in Python.
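As a preview, here is a minimal NumPy sketch of scaled dot-product attention, the operation at the heart of that paper (a simplified, single-head version without masking or learned projections):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)       # how much each token should "look at" every other token
    weights = softmax(scores, axis=-1)    # turn scores into attention weights
    return weights @ V                    # weighted mix of the value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 8))               # 5 tokens, 8-dimensional queries
K = rng.normal(size=(5, 8))
V = rng.normal(size=(5, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (5, 8)
```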

Module 5: Transformer

We will integrate everything we've learned in the previous modules to build a simplified transformer model with multi-head attention. We will train it to predict the next token in a text, giving you a practical understanding of how LLMs generate text. We will conclude the course with a summary of the key concepts and provide resources and guidance for you to continue your learning journey in the field of LLMs.
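To make the end goal concrete, here is an illustrative sketch of how next-token prediction turns into text generation; the model function below is just a stand-in that returns random scores, where the trained transformer would go.

```python
# Illustrative generation loop: feed the context in, pick a next token, repeat.
import numpy as np

rng = np.random.default_rng(0)
vocab_size = 10

def model(context_ids):
    """Stand-in for a trained transformer: returns one score per vocabulary token."""
    return rng.normal(size=vocab_size)

ids = [3, 7]                              # prompt, already tokenized
for _ in range(5):
    logits = model(ids)                   # scores over the vocabulary
    next_id = int(np.argmax(logits))      # greedy decoding: pick the highest-scoring token
    ids.append(next_id)                   # the new token becomes part of the context

print(ids)
```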

Pricing

Kickstart your journey into LLM foundations with this comprehensive 6-week course (10 classes) covering everything from AI fundamentals to building your own model.

For this inaugural edition, the course is offered at a discounted price of $349 (approximately $35 per class).

💰 First 5 signups get an additional $50 off! 💰

We are also offering one fully sponsored spot for someone with genuine financial constraints.

You will gain:

A working understanding of the key components of LLMs.
Experience in implementing these components in Python.
The ability to build and train a simplified Large Language Model.
A solid foundation for further exploration of advanced LLM concepts and applications.

Registration

Ready to level up your LLM skills? Course registration is now open!

👉 Message me on LinkedIn to register! Got questions or curious if this course aligns with your goals? Reach out on LinkedIn.