This article summarizes key learnings from the following sources:
Design and refine prompts to optimize LLM responses
| Technique | Description |
|---|---|
| Zero-shot Prompting | Ask the model to perform a task directly, without providing any examples. |
| Few-Shot Prompting | Include a handful of input/output examples in the prompt to guide the model's response format and behavior. |
| Prompt Chaining | Split a complex task into several prompts, feeding the output of one step into the next. |
| Chain-of-Thought (CoT) Prompting | Ask the model to reason step by step before giving its final answer. |
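As a concrete illustration of few-shot prompting, the sketch below assembles a prompt from a task instruction, a few worked examples, and the new input. The function name and example texts are hypothetical; any LLM API could consume the resulting string.

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the new input."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # End with an open "Output:" so the model completes it for the new input.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great battery life!", "positive"),
     ("Broke after two days.", "negative")],
    "Works exactly as described.",
)
print(prompt)
```

The same helper degrades to a zero-shot prompt when `examples` is an empty list.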
Example use case: increase the accuracy of chatbots by providing context
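A minimal sketch of this idea: retrieved passages are injected into the prompt so the model answers from the supplied context rather than from its parametric memory alone. The function name and instruction wording are illustrative assumptions, not a fixed API.

```python
def build_contextual_prompt(context_passages, question):
    """Prepend retrieved context so the model is grounded in it when answering."""
    context = "\n\n".join(context_passages)
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_contextual_prompt(
    ["Our support line is open Monday to Friday, 9am to 5pm."],
    "When can I call support?",
)
print(prompt)
```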
Main concepts of the RAG workflow
Chunking breaks down large texts into smaller, manageable parts.
Why is chunking needed? Embedding models and LLMs have limited context windows, so large documents must be split before they can be embedded; smaller chunks also make retrieval more precise, because each vector then represents one focused piece of content.
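A minimal sketch of fixed-size chunking with overlap (parameter names and default sizes are illustrative assumptions). The overlap ensures that a sentence cut at a chunk boundary still appears intact in the neighboring chunk.

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into fixed-size character chunks, each overlapping the previous one."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Advance by less than a full chunk so consecutive chunks share `overlap` chars.
        start += chunk_size - overlap
    return chunks

parts = chunk_text("a" * 1200, chunk_size=500, overlap=50)
```

Production pipelines usually refine this by splitting on sentence or paragraph boundaries instead of raw character counts.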
Data/Document Extraction
Embedding model: converts text chunks into numeric vectors such that semantically similar texts end up close together in vector space.
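The "close together" notion is typically measured with cosine similarity. The sketch below uses tiny hand-made 3-dimensional vectors as stand-in embeddings (a real model produces hundreds of dimensions); the vector values are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: 1.0 means same direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for three texts; related texts point in similar directions.
cat = [0.9, 0.1, 0.0]
kitten = [0.8, 0.2, 0.1]
car = [0.0, 0.1, 0.9]

sim_related = cosine_similarity(cat, kitten)
sim_unrelated = cosine_similarity(cat, car)
```

Retrieval in a RAG pipeline boils down to ranking stored chunk vectors by this score against the query vector.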
Which factors are relevant when choosing an embedding model?
TBD