Live with Jay Alammar, Josh Starmer, and Luis Serrano

Direct Preference Optimization (DPO) - How to fine-tune LLMs directly without reinforcement learning | 9mo | Luis Serrano
KL Divergence - How to tell how different two distributions are | 9mo | Luis Serrano
Josh Starmer and Luis Serrano livestream 2 - Double BAM! | 10mo | Luis Serrano
Bessel correction and a different way to see variance | 10mo | Luis Serrano
Reinforcement Learning with Human Feedback - How to train and fine-tune Transformer Models | 1y | Luis Serrano
Proximal Policy Optimization (PPO) - How to train Large Language Models | 1y | Luis Serrano
The Attention Mechanism for Large Language Models #AI #llm #attention | 1y | Luis Serrano
Stable Diffusion - How to build amazing images with AI | 1y | Luis Serrano
How Large Language Models are Shaping the Future | 1y | Luis Serrano
What are Transformer Models and how do they work? | 1y | Luis Serrano