Docker Basics



Before we dive into the implementation, let's try to understand a component that is not specific to LLMs or RAG but is widely useful if you're into open source or plan to build applications for production. This page is here to help you with the setup from the previous page if you're new to Docker and are struggling to install dependencies on your machine.

What is Docker

A quick recap.

Think of Docker as a shipping container for your app. Just as a shipping container can hold all sorts of goods (clothes, electronics, etc.) and can be transported anywhere in the world, Docker bundles your app and everything it needs to run into a 'container.' This makes it easy to share and run your app on any computer.

Given the complexity and manual effort involved in resolving dependency issues on your system, Docker is a helpful tool for standardizing the development environment across all students.

Why Use Docker?

  • Standardized Environment: Everyone gets the same set of dependencies, reducing "it works on my machine" issues.

  • Isolated: Doesn't interfere with other projects or system-wide settings.

  • Ease of Use: Once set up, running the project becomes much simpler.

Understanding Key Docker Terminologies

  • Docker Image: Think of this as a blueprint or a snapshot of a container, including the application and its dependencies. You build an image once and use it to create multiple containers.

  • Docker Container: A container is a running instance of an image. It's a lightweight, stand-alone, executable software package that includes everything needed to run the code.

  • CMD: In Docker, the CMD instruction specifies the command that will be executed when the container starts up.

  • Docker Compose: A tool for defining and running multi-container Docker applications. Using a YAML file (docker-compose.yml), it allows you to specify how different containers interact with each other, making it easier to manage multiple containers as a single service. A minimal sketch tying these pieces together follows right after this list.
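
To make these terms concrete, here is a minimal, hypothetical sketch (not taken from the bootcamp repositories). It assumes a Python app with an app.py entry point and a requirements.txt file; the names my-rag-app, app, and cache are placeholders chosen for illustration.

```dockerfile
# Dockerfile: the blueprint from which a Docker image is built
# (hypothetical example: assumes app.py and requirements.txt exist in this folder)
FROM python:3.11-slim

# Work inside /app in the container's filesystem
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code into the image
COPY . .

# CMD: the command executed when a container starts from this image
CMD ["python", "app.py"]
```

Building the image and starting a container from it looks like this:

```bash
# Build the image (the blueprint), then run a container (a running instance of it)
docker build -t my-rag-app .
docker run --rm my-rag-app
```

And a docker-compose.yml lets you run it alongside a second container, for example a cache or database, and manage both as one service. Here, redis is just a stand-in for any second container your project might need:

```yaml
# docker-compose.yml: two containers managed together
services:
  app:
    build: .            # build the image from the Dockerfile above
    depends_on:
      - cache
  cache:
    image: redis:7      # pull an off-the-shelf image from Docker Hub
    ports:
      - "6379:6379"
```

With this file in place, `docker compose up --build` builds the image and starts both containers with a single command.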

Must-check Resources to Understand Docker

  • Basic Tutorial on Dockerfile: Here

  • Basic Tutorials on Docker Compose: Part 1 (using a Single Container), Part 2 (using 2 Containers)

  • Blog on using ChatGPT to build an optimized Docker Image: 3-Minute Read

The great thing about a tool like Docker is that you can find a lot of Docker-specific answers via LLMs, and Docker also offers a thriving community where you can post your Docker-specific queries. Now let's see the step-by-step implementation.