
    Large Language Models


    Day 1 - Large Language Models Principles
    Agenda
    • Introduction to Large Language Models, revision and expansion
      – Pre-trained LLMs
      – Open-source LLMs
      – LLM settings
      – LLM pipeline
    • Basics of Prompt Engineering in detail
      – Zero-shot prompting
      – Designing prompts
      – Common tasks
    • In-Context Learning
      – Few-shot learning
      – Zero-shot vs. Few-shot learning
      – Efficient few-shot learning
      – Self-Generated In-Context Learning (SG-ICL)
    Description

    In this workshop, we want to refresh and expand participants' knowledge of the fundamental concepts of LLMs. We revisit in detail the techniques introduced on the fifth day of the first session and add some new ideas and methods, all with the aim of communicating with LLMs more efficiently and thus increasing our control over their output.
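
    To give a taste of what this looks like in code, the minimal sketch below contrasts zero-shot and few-shot prompting using the transformers library. The model name and the prompts are illustrative placeholders, not the exact ones used in the workshop:

        from transformers import pipeline

        # Any small causal LM works for this demo; "gpt2" is only an illustrative choice.
        generator = pipeline("text-generation", model="gpt2")

        # Zero-shot: the task is described, but no solved examples are given.
        zero_shot = (
            "Classify the sentiment of this review as positive or negative.\n"
            "Review: I loved every minute of it.\n"
            "Sentiment:"
        )

        # Few-shot (in-context learning): a few solved examples precede the query.
        few_shot = (
            "Review: The plot was dull and predictable.\nSentiment: negative\n"
            "Review: A stunning, heartfelt performance.\nSentiment: positive\n"
            "Review: I loved every minute of it.\nSentiment:"
        )

        for prompt in (zero_shot, few_shot):
            out = generator(prompt, max_new_tokens=3, do_sample=False)
            # The pipeline returns the prompt plus the completion; keep the completion.
            print(out[0]["generated_text"][len(prompt):].strip())

    With the few-shot prompt, even a small base model tends to continue the pattern established by the examples – exactly the effect in-context learning exploits.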

    Level

    Intermediate

    As this workshop continues and expands on the fifth day of the first session, its difficulty rises over the course of the notebook: from easy revision of material covered earlier in the course to medium where new concepts are introduced or familiar ones are treated in more detail.

    Target Participants
    manager

    Once again using the last workshop from the first session as a reference point, we assume this part of the course is suitable for those with limited knowledge in the field of LLMs. It would prove especially useful for those who skipped the previous workshop altogether.

    That being said, we did our best to develop this course in such a way that even more advanced participants could learn something new from every day and every session.

    Day 2 - Prompt Engineering Mastery
    Agenda

    In this workshop, we explore various techniques and strategies for effective prompt crafting, ensuring that our prompts harness the full potential of these powerful tools. These approaches fall into two main categories based on their intent.

    • Improve Reasoning and Logic
      – Chain-of-Thought (CoT) Prompting
      – Contrastive Chain-of-Thought (CCoT) Prompting
      – Self-Consistency
      – Tree of Thoughts (ToT) Prompting
      – Self-Ask (SA) Prompting
      – System 2 Attention (S2A) Prompting
      – Plan-and-Solve (PS) Prompting
      – Thread-of-Thought (ThoT) Prompting
      – Tabular Chain-of-Thought (Tab-CoT) Prompting
      – Program-of-Thoughts (PoT) Prompting
    • Reduce Hallucination
      – Temperature
      – Re-reading (RE2) Prompting
      – Self-Evaluation (SE) Prompting
      – Chain-of-Verification (CoVe) Prompting
      – Self-Refine (SR) Prompting
      – Rephrase and Respond (RaR) Prompting
      – Retrieval-Augmented Generation (RAG)
      – Reason and Act (ReAct) Prompting
    Description

    As we continue our journey into the intricacies of artificial intelligence, Day 2 shifts focus to the art and science of Prompt Engineering. This critical skill set involves crafting specific inputs that guide Large Language Models (LLMs) to generate desired outputs with higher precision and relevance. Prompt Engineering is not merely about asking questions; it’s about formulating them in a way that aligns closely with the model’s training and capabilities. Understanding this can significantly enhance the quality of interactions with LLMs, enabling more accurate and contextually appropriate responses.
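
    As a concrete illustration, the sketch below combines two techniques from the agenda: Chain-of-Thought prompting and Self-Consistency, i.e. a majority vote over several sampled reasoning paths. The model name is an illustrative placeholder (and far too small to reason well – the structure, not the model, is the point):

        from collections import Counter
        from transformers import pipeline

        generator = pipeline("text-generation", model="gpt2")

        def generate(prompt: str, temperature: float) -> str:
            out = generator(prompt, max_new_tokens=60, do_sample=True,
                            temperature=temperature, return_full_text=False)
            return out[0]["generated_text"]

        # Chain-of-Thought: ask the model to reason step by step before answering.
        cot_prompt = (
            "Q: A cafeteria had 23 apples. It used 20 to make lunch and bought "
            "6 more. How many apples does it have?\n"
            "A: Let's think step by step."
        )

        def final_answer(completion: str) -> str:
            # Toy heuristic: take the last non-empty line as the final answer.
            lines = [l for l in completion.strip().splitlines() if l.strip()]
            return lines[-1] if lines else ""

        # Self-Consistency: sample several reasoning paths at a non-zero
        # temperature and take a majority vote over their final answers.
        votes = Counter(final_answer(generate(cot_prompt, 0.7)) for _ in range(5))
        print(votes.most_common(1)[0][0])

    Note the role of temperature here: sampling at a non-zero temperature is what makes the reasoning paths diverse enough for the majority vote to be meaningful.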

    Level

    Advanced

    As we move on to increasingly complex prompt engineering techniques, the level also rises from fairly easy (e.g. CoT prompting), through intermediate (like Tab-CoT), to genuinely hard (see RAG and ReAct).

    Target Participants
    programmer

    Since the difficulty and sophistication of the techniques progress through the workshop, every participant should find their own niche and learn something new. While it technically stays within the scope of the transformers library, this part of the course focuses more on theory, since it introduces a large number of prompt engineering techniques.

    Day 3 - Agents, Benchmarks and Fine-tuning
    Agenda
    • LLMs as Agents
      – Naive ReAct Agent
      – Introduction to LangChain
      – LangChain Agent
    • Benchmarking LLMs
      – lm-evaluation-harness
    • Instruction fine-tuning
      – Supervised fine-tuning with LoRA and bitsandbytes
    Description

    Diving deeper into the field, in this workshop we explore several new aspects of LLMs: how they can act as agents, and how to evaluate and fine-tune them. Their size and complexity vastly differentiate them from small task-specific models, creating both theoretical and technical challenges.
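
    To give a flavour of the first topic, here is a minimal sketch of a naive ReAct-style loop of the kind we build before switching to LangChain. The tool registry and the llm function are hypothetical placeholders, not a fixed workshop API:

        import re

        # Hypothetical tool registry; a real agent would wrap search engines,
        # calculators, databases and so on.
        TOOLS = {"calculator": lambda expr: str(eval(expr))}  # eval: demo only!

        REACT_TEMPLATE = (
            "Answer the question by interleaving these steps:\n"
            "Thought: reason about what to do next\n"
            "Action: tool_name[tool_input]\n"
            "Observation: (the tool result, appended by the program)\n"
            "Finish with: Final Answer: <answer>\n\n"
            "Question: {question}\n"
        )

        def llm(prompt: str) -> str:
            # Placeholder for any LLM call; assumed purely for illustration.
            raise NotImplementedError

        def react(question: str, max_steps: int = 5) -> str:
            transcript = REACT_TEMPLATE.format(question=question)
            for _ in range(max_steps):
                step = llm(transcript)
                transcript += step + "\n"
                if "Final Answer:" in step:
                    return step.split("Final Answer:")[-1].strip()
                match = re.search(r"Action:\s*(\w+)\[(.+?)\]", step)
                if match:  # run the requested tool, feed the result back in
                    tool_name, tool_input = match.groups()
                    transcript += f"Observation: {TOOLS[tool_name](tool_input)}\n"
            return "No answer within the step budget."

        # react("What is 12 * 34?")  # would drive the loop once llm() is wired up

    LangChain packages essentially this loop, plus prompt templates, tool schemas and error handling, so that we do not have to maintain it by hand.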

    Level

    Advanced

    Due to the introduction of new frameworks and tools, and because we take on new, more complex tasks, the difficulty of this workshop is rated as hard.

    Target Participants
    programmer

    Since we venture into new, more sophisticated and technical territory, we recommend this part of the course to more advanced participants, especially those with experience in ML frameworks such as LangChain.

    Day 4 - Model Alignment
    Agenda
    • Efficient fine-tuning of LLMs
      – Parameter-Efficient Fine-Tuning (PEFT) library
      – Low-Rank Adaptation (LoRA)
      – Quantized Low-Rank Adaptation (QLoRA)
      – P-tuning
      – Prompt tuning
      – Gradient Low-Rank Projection (GaLore)
    • Model alignment
      – Transformer Reinforcement Learning (TRL) library
      – Proximal Policy Optimization (PPO)
      – Direct Preference Optimization (DPO)
    Description

    In this workshop, we dig even deeper into the fine-tuning of LLMs, focusing on the efficiency of the process and on another important aspect – alignment – aiming to improve this crucial part of LLM deployment.
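
    To preview how lightweight this is in code, here is a minimal sketch of attaching LoRA adapters to a causal LM with the PEFT library. The model name and target modules are illustrative and depend on the chosen architecture:

        from peft import LoraConfig, TaskType, get_peft_model
        from transformers import AutoModelForCausalLM

        base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

        config = LoraConfig(
            task_type=TaskType.CAUSAL_LM,
            r=8,                                  # rank of the low-rank update matrices
            lora_alpha=16,                        # scaling factor for the update
            lora_dropout=0.05,
            target_modules=["q_proj", "v_proj"],  # attention projections in OPT
        )

        # LoRA freezes the original weights and trains only the small adapters.
        model = get_peft_model(base, config)
        model.print_trainable_parameters()  # typically well under 1% of all weights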

    Level

    Advanced

    With the introduction of yet another set of tools and approaches, which also build upon the knowledge gained in the prior days (especially the third day of the second session), the level remains the same: this workshop is rated hard, especially for those with limited coding experience and those who skipped the previous day.

    Target Participants
    programmer

    Once again matching the difficulty of the previous workshop, we must emphasize that this part of the course is prepared with advanced participants in mind – those who aim to customize their LLM-based solutions.

    Day 5 - Efficient Inference
    Agenda
    • Quantization methods
      – Absmax quantization
      – Zeropoint quantization
      – Non-uniform quantization
      – LLM.int8()
      – FP4
      – GPTQ
    • Other approaches
      – KVCache
      – Assisted Generation
    • vLLM
      – Efficient serving of LLMs
      – Tensor Parallelism
    Description

    On the final day of the second session, we move even lower in the technology stack of LLM-based systems. 

    We will focus especially on the very feature that makes them so powerful – their size. One of the main drivers of their great performance also creates challenges for inference, sometimes making it excruciatingly slow. The sheer number of an LLM's parameters likewise makes it difficult to store and manage all those weights in memory.

    We will explore a range of optimizations, from hardware-specific to conceptual techniques. These methods will help boost the performance of our models and enable us to efficiently serve even larger models using the same hardware setup.
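
    As a first taste of the quantization methods, here is a minimal sketch of symmetric absmax quantization to 8 bits, the simplest method on the agenda:

        import torch

        def absmax_quantize(x: torch.Tensor):
            # Symmetric 8-bit quantization: map the largest magnitude to 127.
            scale = 127.0 / x.abs().max()
            q = (x * scale).round().to(torch.int8)
            return q, scale

        def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
            return q.to(torch.float32) / scale

        w = torch.randn(4, 4)                          # stand-in for model weights
        q, scale = absmax_quantize(w)
        print((w - dequantize(q, scale)).abs().max())  # rounding error introduced

    Zeropoint quantization adds an offset so that asymmetric value ranges can use the full int8 grid, while methods such as LLM.int8() and GPTQ refine where and how such quantization is applied.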

    Level

    Advanced

    As we get closer and closer to the hardware beneath the ML, slowly venturing into MLOps territory, the difficulty increases again, making this workshop hard to grasp, especially on the technical side.

    Target Participants
    devops

    Without repeating ourselves too much, we recommend this particular workshop to those deeply interested in building and deploying systems that leverage LLMs. It is also important to mention that prior experience with the aforementioned tools is essential.

    Get Your Personalized LLMs Course Quote

    Interested in mastering Large Language Models? Fill out the form below to receive a tailored quotation for a course designed to meet your specific needs and objectives.






      Your certified partner!

      Empower your projects with Cognitum, backed by the assurance of our ISO 27001 and ISO 9001 certifications, symbolizing elite data security and quality standards.