In this workshop, we refresh and expand participants’ knowledge of the fundamental concepts of LLMs, discussing in detail the techniques introduced on the fifth day of the first session and adding some new ideas and methods, all with the aim of communicating with LLMs more efficiently and thus increasing our control over their output.
Intermediate
Treating this workshop as a continuation and expansion of the fifth day of the first session, its difficulty increases over the course of the notebook: from easy revision of material from the earlier part of the course to medium when new concepts are introduced or familiar ones are covered in more detail.
Using the last workshop of the first session as a reference point once again, we consider this part of the course suitable for those with limited knowledge of LLMs. It will prove especially useful for those who skipped the previous workshop altogether.
That being said, we did our best to develop this course so that even more advanced participants can learn something new from every day and every session.
As we continue our journey into the intricacies of artificial intelligence, Day 2 shifts focus to the art and science of Prompt Engineering. This critical skill set involves crafting specific inputs that guide Large Language Models (LLMs) to generate desired outputs with higher precision and relevance. Prompt Engineering is not merely about asking questions; it’s about formulating them in a way that aligns closely with the model’s training and capabilities. Understanding this can significantly enhance the quality of interactions with LLMs, enabling more accurate and contextually appropriate responses.
In this workshop, we explore various techniques and strategies for effective prompt crafting, ensuring that our communication harnesses the full potential of these powerful tools. These approaches can be split into two main categories based on their intent.
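To make the idea of crafting inputs concrete, here is a minimal sketch of few-shot prompt construction, one of the basic prompt engineering patterns. The helper function, template, and example texts are our own illustration, not code from the course materials.

```python
# Illustrative sketch: assembling a few-shot prompt from an instruction,
# a handful of worked examples, and the new query. The template and the
# example reviews below are invented for demonstration purposes.

def build_few_shot_prompt(instruction, examples, query):
    """Join an instruction, (input, output) examples, and a query
    into a single prompt string for an LLM."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {out}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")  # the model is expected to continue from here
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each movie review as positive or negative.",
    [("I loved every minute of it.", "positive"),
     ("A complete waste of time.", "negative")],
    "The acting was superb.",
)
print(prompt)
```

The trailing `Output:` is deliberate: it nudges the model to complete the pattern established by the examples rather than answer free-form.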
Advanced
As we move on to more complex and sophisticated prompt engineering techniques, the difficulty also increases: from fairly easy (e.g. CoT prompting), through intermediate (like Tab-CoT), to really hard (see RAG and ReAct).
Since the difficulty and sophistication of the described techniques progress throughout the workshop, we are confident that every participant will find their own niche and learn something new. While remaining within the scope of the transformers library on the technical side, this part of the course focuses more on theory, since it introduces a large number of prompt engineering techniques.
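Of the techniques listed above, zero-shot chain-of-thought (CoT) prompting is the simplest to illustrate: a reasoning trigger is appended to the question so the model spells out intermediate steps before answering. The wrapper function below is our own sketch; the trigger phrase is the well-known "Let's think step by step".

```python
# Zero-shot CoT sketch: the only change versus a plain prompt is the
# appended reasoning trigger. The wrapper function name is our own.

def zero_shot_cot(question):
    return f"Q: {question}\nA: Let's think step by step."

print(zero_shot_cot("If I have 3 apples and buy 2 more, how many do I have?"))
```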
Diving deeper into the field of LLMs, in this particular workshop we explore several new aspects of it: LLMs acting as agents, and how to evaluate and fine-tune LLMs. Their size and complexity vastly differentiate them from small task-specific models, creating both theoretical and technical challenges.
Advanced
Due to the introduction of new frameworks and tools, as well as the definition of new, more complex tasks, the difficulty of this workshop is rated hard.
Since we enter new, more sophisticated and technical territory, we recommend this part of the course to more advanced participants, especially those with prior experience with ML frameworks such as LangChain.
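The agent idea can be sketched without any framework: an LLM alternates between emitting actions (tool calls) and reading observations until it produces a final answer, a loop popularized by ReAct. In the toy version below, a hard-coded stub stands in for the LLM (in practice this would be a real model, e.g. driven through LangChain), and the single tool and its answers are invented for the demo.

```python
# Toy ReAct-style agent loop. `stub_llm` is a deterministic stand-in for
# the language model so the Action -> Observation cycle can run end to end.

def calculator(expression):
    # Deliberately restricted arithmetic evaluator for the demo.
    allowed = set("0123456789+-*/(). ")
    assert set(expression) <= allowed, "unsupported characters"
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def stub_llm(transcript):
    """Stand-in for the LLM: picks the next step from the transcript."""
    if "Observation:" not in transcript:
        return "Action: calculator[12 * 7]"
    return "Final Answer: 84"

def react_agent(question, max_steps=5):
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = stub_llm(transcript)
        transcript += step + "\n"
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        if step.startswith("Action:"):
            name, arg = step.removeprefix("Action:").strip().rstrip("]").split("[", 1)
            transcript += f"Observation: {TOOLS[name](arg)}\n"
    return None

print(react_agent("What is 12 times 7?"))
```

Swapping the stub for a real model is the hard part, and exactly what frameworks such as LangChain automate: parsing model output into tool calls and feeding observations back into the prompt.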
In this workshop, we dig even deeper into the fine-tuning of LLMs, focusing on the efficiency of the process and on another important factor, alignment, aiming to enhance this crucial part of LLM deployment.
Advanced
With the introduction of yet another set of tools and approaches that build upon the knowledge gained in the prior days (especially the third day of the second session), the level remains the same: this workshop is rated hard, especially for those with limited coding experience and those who skipped the previous day.
Once again maintaining the difficulty of the previous workshop, we must emphasize that this part of the course is prepared with advanced participants in mind: those who aim to customize their LLM-based solutions.
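One widely used efficient fine-tuning idea can be shown in a few lines: low-rank adapters (the LoRA approach). Instead of updating a full weight matrix W, one trains two small matrices A (r×k) and B (d×r) and uses the effective weight W + BA. The tiny matrices and numbers below are purely illustrative, not course data.

```python
# Low-rank adapter (LoRA-style) sketch in plain Python: the frozen base
# weight W is combined with a trainable low-rank update B @ A.

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

d, k, r = 4, 4, 1                      # full dimensions vs. adapter rank
W = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(d)]  # frozen
B = [[0.1], [0.0], [0.0], [0.0]]       # d x r, trainable
A = [[1.0, 0.0, 0.0, 0.0]]             # r x k, trainable

W_adapted = add(W, matmul(B, A))       # effective weight W + BA
print(W_adapted[0][0])                 # 1.0 + 0.1 * 1.0 = 1.1

# Trainable parameters: d*r + r*k = 8, versus d*k = 16 for full fine-tuning;
# the gap grows dramatically at real LLM dimensions.
```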
On the final day of the second session, we move even lower in the technology stack of LLM-based systems.
We will focus especially on the very feature that makes them so powerful: their size. One of the main reasons for their great performance also creates challenges for inference, which can sometimes take excruciatingly long. The sheer number of an LLM’s parameters also makes it difficult to store and manage all of its weights in memory.
We will explore a range of optimizations, from hardware-specific to conceptual techniques. These methods will help boost the performance of our models and enable us to efficiently serve even larger models using the same hardware setup.
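One of the conceptual memory optimizations in this space is weight quantization. A minimal sketch of symmetric 8-bit quantization, assuming a single per-tensor scale (the weight values below are invented for illustration):

```python
# Symmetric int8 quantization sketch: map float weights to integers in
# [-127, 127] with one scale factor, then dequantize at inference time.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.0, 0.9]      # illustrative float weights
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
print(q)       # small integers, one byte each instead of four
print(approx)  # close to the original floats
```

Storing one byte per weight instead of four (float32) cuts memory roughly fourfold, at the cost of a bounded rounding error per weight; production schemes refine this with per-channel or per-group scales.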
Advanced
As we move closer and closer to the hardware beneath the ML, slowly venturing into the MLOps field, the difficulty increases again, making the gist of this workshop hard to grasp, especially on the technical side.
At the risk of repeating ourselves, we recommend this particular workshop to those deeply interested in building and deploying systems that leverage LLMs. It is also important to mention that prior experience with the aforementioned tools is crucial.
Interested in mastering Large Language Models? Fill out the form below to receive a tailored quotation for a course designed to meet your specific needs and objectives.