
For Starters: Best Practices

Mastering the art of designing a prompt comes with practice, and it can significantly improve your interactions with Large Language Models (LLMs).

It's crucial to note that the best practices discussed here are primarily geared towards generating language-based outputs. For more specialized tasks, such as generating code, images, or other types of non-textual data, it's advisable to consult the specific guidelines and documentation related to those tasks.

Let's delve into some best practices that could act as your guiding principles.

Basic Prompts: The Starting Point

  • Be Concise

Avoid verbosity; succinct prompts are more effective.

❌ "What do you think could be a good name for a flower shop that specializes in selling bouquets of dried flowers?"

✅ "Suggest a name for a flower shop that sells bouquets of dried flowers."

  • Be Specific

Narrow your instructions to get the most accurate response.

❌ "Tell me about Earth"

✅ "Generate a list of ways that makes Earth unique compared to other planets."

  • Prompt Structuring

Ask One Task at a Time: Avoid combining multiple tasks in one prompt.

❌ "What's the best method of boiling water and why is the sky blue?"

✅ "What's the best method of boiling water?"

  • Detailing: Specify context, outcome, format, length, etc.

  • Example-Driven: Utilize examples to guide the output.
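
To see these practices in action, here is a minimal sketch that sends a verbose, multi-task prompt and a concise, single-task prompt to the same model so you can compare the outputs yourself. It assumes the openai Python package (v1+) and an OPENAI_API_KEY environment variable; the model name is only an example.

```python
# A minimal sketch of trying these practices against an LLM API.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY
# environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Verbose, multi-task prompt (harder for the model to answer well):
print(ask("What do you think could be a good name for a flower shop "
          "that specializes in selling bouquets of dried flowers, "
          "and also why is the sky blue?"))

# Concise, specific, one task at a time:
print(ask("Suggest a name for a flower shop that sells bouquets of dried flowers."))
```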

Zero-Shot vs. Few-Shot Prompts

The more examples you give the model, the better it understands what you're asking, and the more accurate and on-point its answers become.

Zero-Shot Prompting:

  • You ask the model to do something without giving any examples.

  • Example:

"Is a goldfish a pet or not a pet?"

Output: "Pet"

One-Shot Prompting:

  • You give the model one example to help it understand what you're asking.

  • Example:

"For instance, a dog is a pet. Now, is a goldfish a pet or not a pet?"

Output: "Pet"

Few-Shot Prompting:

  • You give the model several examples to make sure it really understands what you're asking.

  • Example:

"A dog is a pet."

"A lion is not a pet."

Now, "Is a goldfish a pet or not a pet?"

Output: "Pet"

In this example, all prompting types resulted in the same answer: "Pet". However, with few-shot prompting, you can be more confident that the model truly understands what you mean by "pet" since it has more examples to learn from. Usually, giving more examples (a few shots) helps the model give better answers, especially for more complicated questions.

As a rule of thumb: zero-shot, one-shot, and few-shot prompting each have distinct advantages and challenges. Zero-shot is more open-ended, while few-shot is more controlled.
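
If you'd like to experiment with these styles programmatically, the sketch below (same assumptions as before: the openai package and an API key, with an illustrative model name) encodes the few-shot examples as prior user/assistant turns, which is one common way to supply worked examples through a chat API:

```python
# A sketch of zero-shot vs. few-shot prompting with a chat API.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY; the model
# name and the pet/not-pet labels mirror the examples above.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative

# Zero-shot: the question alone, no examples.
zero_shot = [
    {"role": "user", "content": "Is a goldfish a pet or not a pet? Answer in one word."},
]

# Few-shot: prior user/assistant turns act as worked examples, so the
# model can infer both the task and the expected output format.
# One-shot would be the same with a single example pair.
few_shot = [
    {"role": "user", "content": "Is a dog a pet or not a pet?"},
    {"role": "assistant", "content": "Pet"},
    {"role": "user", "content": "Is a lion a pet or not a pet?"},
    {"role": "assistant", "content": "Not a pet"},
    {"role": "user", "content": "Is a goldfish a pet or not a pet?"},
]

for messages in (zero_shot, few_shot):
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    print(reply.choices[0].message.content)
```

Notice that the assistant turns teach the model not only the task but also the expected output format (a short label), which is often the main practical benefit of few-shot prompting.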

Elements of a Prompt: Know the Ingredients

  • Instruction: The task you want the model to perform.

  • Context: Additional information that can steer the model.

  • Input Data: The question or data of interest.

  • Output Indicator: Desired format or type of the output.

You don't always need all these elements; it depends on your specific needs.
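
As a quick illustration, here is one way these ingredients might be assembled into a single prompt string. The template, the field names, and the sentiment-classification task are all illustrative choices, not a standard:

```python
# A sketch of assembling the four prompt elements into one string.
# The template and field names are illustrative, not a standard.
INSTRUCTION = "Classify the customer message by sentiment."
CONTEXT = "The messages come from a support inbox for a flower shop."
INPUT_DATA = "My bouquet of dried flowers arrived crushed and late."
OUTPUT_INDICATOR = "Respond with a single word: positive, neutral, or negative."

prompt = f"""{INSTRUCTION}

Context: {CONTEXT}

Message: {INPUT_DATA}

{OUTPUT_INDICATOR}"""

print(prompt)  # send this string to the LLM of your choice
```

Here the output indicator constrains the response to one word, which makes the result easy to parse if you use it downstream.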

General Tips: The Do's and Don'ts

  • Start Simple and Iterate: Initial iterations should be straightforward, and you can build complexity as you refine your prompts. This is also referred to as "iterative prompt development."

    • It's about beginning with a simple version of your prompt, analyzing the outputs, and optimizing the prompt iteratively.

    • Consider how you can clarify your request gradually or experiment with techniques like few-shot prompting if needed. The goal is to reach a point where the model consistently delivers the type of response you're looking for.

    • A fun exercise is to use an LLM to understand relatively lesser-known topics such as "event stream processing" or "container orchestration". Refine your prompts until the model's responses suit your level of understanding, and keep applying this practice whenever you're learning something new.

  • Avoid Redundancy: Remember to use concise, non-redundant language.

  • Be Specific: Vague instructions often yield vague results.

  • Avoid Negative Instructions: Instead of saying what not to do, focus on what the model should do.

  • Along with this, let's keep two basic principles in mind.

    • Principle #1: Write Clear and Precise Instructions: The clarity of your instructions directly influences the quality of the output. Ensure your prompts are devoid of ambiguity and are as direct as possible. This doesn't necessarily mean being brief at the expense of clarity; rather, your goal should be to convey your request as understandably and precisely as you can.

    • Principle #2: Give the Model Enough Time to Analyze and Think: LLMs don't "think" in the human sense. However, structuring your prompt to suggest a thoughtful analysis can lead to more accurate outputs. For complex queries or when seeking detailed responses, it's beneficial to frame your prompt in a way that guides the model through a logical sequence of thoughts or analysis. This can be achieved by structuring your prompt to include background information, context, or even a step-by-step breakdown of what you're asking for.

    These principles underscore the importance of a strategic approach to prompt engineering, where the focus is on maximizing the model's ability to understand and respond to your requests effectively.
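
To make Principle #2 concrete, here is a hypothetical sketch that spells out the intermediate steps we want the model to walk through before it commits to a final answer (openai package assumed as before; the shop figures are invented for the example):

```python
# A sketch of Principle #2: structure the prompt so the model works
# through a sequence of steps before committing to an answer. Assumes
# the `openai` package (v1+); model name and numbers are illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "A shop sells dried-flower bouquets for $18 each. Materials cost $7 "
    "per bouquet and monthly fixed costs are $2,200.\n\n"
    "Work through the following steps before answering:\n"
    "1. Compute the profit per bouquet.\n"
    "2. Divide the fixed costs by that profit.\n"
    "3. Round up to whole bouquets.\n\n"
    "Show each step, then state how many bouquets must be sold per month "
    "to break even."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```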

Bonus Resources:

Curious to learn more? Once you’ve completed this course, you might want to check these resources that will help you dive deeper into the nuances of prompt engineering:

  • LearnPrompting's Comprehensive Guide
  • Concise Article by OpenAI
  • DeepLearning AI and OpenAI's course on Prompt Engineering for Developers
  • LLMs in Production: Best Practices to Follow as a Developer | OpenAI Docs

Take your time to experiment and iterate, as mastery comes with practice and refinement. And remember, this is a living, evolving field; staying updated with best practices is key to success.