RAG with Open Source and Running "Examples"


Congratulations on coming this far! You've already made quite some progress. This module is meant to make it easy for you to build and run your applications using the Examples in the repository.

It isn't specific to the Basic RAG pipeline you saw earlier, but it applies there as well.

What are the Examples offered?

Most popular frameworks/repositories offer multiple use cases under their examples folder to illustrate the various possible avenues of impact.

For instance, an interesting example is a self-hosted real-time document AI pipeline with indexing from Google Drive/SharePoint folders.

For building RAG applications that leverage open-source models running locally on your machine, you'll understandably want to refer to the "local" example.
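
To make the "local" idea concrete, here is a minimal, illustrative sketch of what keeping inference on your own machine can look like. It assumes an Ollama server hosting a Mistral model locally (the stack used in the later "Private RAG with Mistral, Ollama and Pathway" module); the endpoint and model name are assumptions for illustration, not the example's actual code.

    # Illustrative sketch: all inference stays on your machine.
    # Assumes an Ollama server running locally with a Mistral model pulled;
    # this is NOT the "local" example's actual code.
    import json
    import urllib.request

    payload = json.dumps({
        "model": "mistral",                 # assumed local model name
        "prompt": "What is RAG in one sentence?",
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # No data leaves your machine: the request and response are both local.
        print(json.loads(resp.read())["response"])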

But how can you, as a developer, leverage these resources and run these examples?

Once you've cloned/forked the LLM App repository and set up the environment variables (as per the steps covered earlier), you're all set to run the examples.
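
If you want a quick sanity check before running anything, a tiny script like the one below can confirm your environment is ready. It assumes the examples read an OpenAI key from the OPENAI_API_KEY environment variable, as configured in the setup steps; adjust the variable name if your setup differs.

    # Quick sanity check before running the examples (a minimal sketch;
    # assumes the pipelines read OPENAI_API_KEY from the environment).
    import os

    if os.environ.get("OPENAI_API_KEY"):
        print("OPENAI_API_KEY is set -- ready to run the examples.")
    else:
        print("OPENAI_API_KEY is missing -- revisit the environment setup steps.")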

The exact process is listed below the table, which summarizes the types of examples you can explore. Do give it a quick read to know what you can build for your project. This is not the complete list of examples; the full set is in the repository's examples folder.

| Example Type | What It Does | What's Special | Good For |
| --- | --- | --- | --- |
| contextless | Answers your questions without looking at any additional data. | Simplest example to try; not RAG-based. | Beginners getting started. |
| contextful | Uses extra documents in a folder to help answer questions. | Better answers by using more data. | More advanced, detailed answers. |
| contextful_s3 | Like "contextful", but stores documents in S3 (a cloud storage service). | Good for handling a lot of data. | Businesses or advanced projects. |
| unstructured | Reads different types of files like PDFs, Word docs, etc. | Can handle many file formats and unstructured data. | Working with various file types. |
| local | Runs everything on your own machine without sending data out. | Keeps your data private. | Those concerned about data privacy. |
| unstructuredtosql | Takes data from different files, puts it in a SQL database, and then uses SQL to answer questions. | Great for complex queries. | Advanced data manipulation and queries. |
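
To illustrate the last row: the core idea behind "unstructuredtosql" is that fields extracted from unstructured files land in a SQL table, so questions become SQL queries. The sketch below mimics that flow with hard-coded rows standing in for LLM-extracted data; it's an illustration of the concept, not the example's actual implementation.

    # Illustrative sketch of the "unstructuredtosql" idea (not the repo's code):
    # extracted fields go into a SQL table, and questions become SQL queries.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE invoices (vendor TEXT, total REAL)")
    # In the real pipeline, an LLM would extract these rows from PDFs/Word docs.
    conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                     [("Acme", 120.0), ("Globex", 75.5)])
    # A question like "What did we spend in total?" becomes:
    print(conn.execute("SELECT SUM(total) FROM invoices").fetchone()[0])  # 195.5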

Simple Way to Run the Examples on LLM App

Assuming you've completed the setup steps above, here's a recommended, step-by-step process to run the examples easily:

1 - Open a terminal and navigate to the LLM App repository folder:

cd llm-app

2 - Choose your example. The examples live in the examples/pipelines folder of the repository. Say you want to run the "alert" example. You have two options here:

  • Option 1: Run the centralized example runner, which lets you quickly switch between different examples (a rough sketch of what such a runner does follows these options):

    python run_examples.py alert

  • Option 2: Run the specific pipeline's app directly from the repository root. This option is more focused and best if you know exactly which example you're interested in:

    python examples/pipelines/alert/app.py
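
For a rough idea of what a centralized runner does, here is a hypothetical sketch: it simply maps an example name to its app.py and executes it. The folder layout is taken from the command above; the real run_examples.py may be implemented differently.

    # Hypothetical sketch of what a centralized runner like run_examples.py
    # does: pick a pipeline by name and execute its app.py.
    import subprocess
    import sys

    def run(example: str) -> None:
        # Launch the chosen pipeline's entry point as a subprocess.
        subprocess.run(
            [sys.executable, f"examples/pipelines/{example}/app.py"],
            check=True,
        )

    if __name__ == "__main__":
        run(sys.argv[1] if len(sys.argv) > 1 else "contextful")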

By following these steps, you're not just running code; you're actively engaging with the LLM App's rich feature set, which includes everything from real-time data syncing to triggering alerts on critical changes in your document store. It's a step closer to implementing an LLM application that has a meaningful impact.

That's it!
