
Prerequisites (Must)



Alright, let's get the ball rolling! Start by making sure you have everything you need installed on your computer. And remember, persistence is crucial in this journey.

Nailing a new framework flawlessly from the get-go is as rare as acing a complex algorithm on the first try. And finding joy in debugging? That's like getting used to a 3 AM alarm—tough, but part of the process.

The magic happens once you persist through those early hurdles. That's when you'll see the true power of your skills and the impact you can create. Plus, the frameworks we're diving into are designed for production-grade applications, meaning the potential for real-world impact is enormous and genuinely empowering.

Are you geared up? Let's embrace this challenge with enthusiasm. 😊 These steps aren't just for today; they're your stepping stones to the exciting world of open-source contributions. So, let's get to it!

Git, Python and Pip

  • Python 3.10 or 3.11 should be installed on your machine. If not, you can download Python here.

  • Pip: Comes pre-installed with Python 3.4+. It is the standard package manager for Python. You can check whether it's installed by running the command below in your terminal/command prompt.

    pip --version

  • If Pip is not installed, you'll get an error. In that case, you need to download and install Pip to manage project packages.

  • Git should be installed on your machine. If you've installed Xcode (or its Command Line Tools), Git may already be installed. To find out, open a Terminal or Command Prompt and enter git --version. If it's not installed, refer to this documentation and install it. (A quick way to verify all three prerequisites at once is shown right after this list.)
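
To confirm everything is in place, you can run these checks in one go from your terminal or command prompt (depending on your setup, the Python executable may be named python, python3, or py):

    python --version    # should report 3.10.x or 3.11.x
    pip --version
    git --version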

OpenAI API Key (Recommended)

This key is required if you plan to use OpenAI models for embedding and generation.

If you are less confident with open-source alternatives, OpenAI models are a good starting point. You can also go for other platforms such as Replicate, Eden AI, Cohere (feel free to Google – the list is endless).
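
Once you generate a key from your OpenAI account, a common convention (assumed here, not mandated by the course) is to expose it as the OPENAI_API_KEY environment variable so that your code and the frameworks used later can pick it up:

    # macOS/Linux: set the key for the current terminal session
    export OPENAI_API_KEY="sk-..."

    # Windows (Command Prompt): persists for new terminal sessions
    setx OPENAI_API_KEY "sk-..."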

What if you don't have OpenAI credits for building your RAG project?

  • You can add ~$3 of OpenAI credits to experiment with, for a convenient start.

  • You can use other hosted LLM services such as:

    • Google Gemini: 2 months of unlimited access

    • Mistral.ai: 5€ free credit upon sign-up

    • Groq: Offers inference from open-source LLMs with rate limits

    • Together AI: Inference from various open-source LLMs, $25 free credit upon sign-up

    • Replicate: Inference with a free trial, though exact free usage limits are unclear

  • You can build locally with open-source models that do not require any API credits (a minimal local setup with Ollama is sketched below).

Great news – we'll be showing how you can build your projects in all three cases!
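
If you do take the local route, a minimal sketch of the kind of setup used later in the private RAG project is to install Ollama and pull an open-source model such as Mistral (the model here is just an example):

    # Linux: official install script (macOS/Windows users can download the installer from ollama.com)
    curl -fsSL https://ollama.com/install.sh | sh

    ollama pull mistral          # fetch an open-source model locally
    ollama run mistral "Hello"   # quick sanity check from the terminal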

If you're using Windows OS, great! You have another reason to use Docker.

The frameworks ahead only support Unix-like systems (such as Linux, macOS, and BSD). But the good news is that you have an easy fix.

If you are a Windows user, you can use Windows Subsystem for Linux (WSL) or Dockerize the app to run as a container. The latter (using Docker) is the much stronger recommendation: it's a popular tool with plenty of documentation available, and we've added an introductory section ahead for your convenience.
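
If you do go the WSL route, recent versions of Windows let you set it up with a single command from an administrator PowerShell (it installs Ubuntu by default, and a reboot is usually needed afterwards):

    wsl --install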

But what is Docker and why is it important to learn?

Think of Docker as a shipping container for your app. Just as a shipping container can hold all sorts of goods (clothes, electronics, etc.) and can be transported anywhere in the world, Docker bundles your app and everything it needs to run into a 'container.' This makes it easy to share and run your app on any computer – thus making it easy for your project's reviewers/contributors to run your projects should they wish to do so.
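
As a quick preview (the Docker Basics section ahead covers this properly), once your project folder contains a Dockerfile, packaging and running the app boils down to two commands; the image name below is just a placeholder:

    # Build an image from the Dockerfile in the current directory
    docker build -t my-rag-app .

    # Run the app inside a container ("--rm" removes the container when it exits)
    docker run --rm my-rag-app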
