This article will summarize the learnings from the following sources:
- Design and refine prompts to optimize LLM responses
| Technique | Description |
|---|---|
| Zero-shot Prompting | The model is given only the task instructions, with no examples of the expected input or output. |
| Few-Shot Prompting | A handful of input-output examples are included in the prompt to steer the model's responses. |
| Prompt Chaining | A task is split into steps, and the output of one prompt is fed as input to the next. |
| Chain-of-Thought (CoT) Prompting | The model is asked to reason step by step before producing its final answer. |
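As a concrete illustration of the few-shot pattern from the table above, here is a minimal sketch that assembles a few-shot sentiment-classification prompt from hand-written examples. The example reviews, labels, and the `build_few_shot_prompt` helper are illustrative placeholders, not something prescribed by the sources; the same idea applies to any task where a few demonstrations help the model infer the expected format.

```python
# Minimal sketch: assembling a few-shot prompt from hand-written examples.
# The reviews and labels below are illustrative placeholders.

examples = [
    ("The package arrived two days late and damaged.", "negative"),
    ("Setup took five minutes and everything just worked.", "positive"),
]

def build_few_shot_prompt(examples, query):
    """Build a few-shot classification prompt: demonstrations first, then the new input."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model is expected to complete this line
    return "\n".join(lines)

print(build_few_shot_prompt(examples, "Battery life is far worse than advertised."))
```

The resulting string can be sent to any LLM as-is; a zero-shot variant would simply drop the demonstration pairs and keep only the instruction and the final review.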
Chunking breaks down large texts into smaller, manageable parts.
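A minimal sketch of one common strategy, fixed-size chunking with overlap, is shown below. The character-based window, the `chunk_text` name, and the size/overlap values are assumptions chosen for illustration; real pipelines often chunk by tokens, sentences, or document structure instead.

```python
# Minimal sketch: fixed-size character chunking with overlap (illustrative values).

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping windows so that context is not lost at chunk boundaries."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some shared context
    return chunks

document = "..." * 1000  # stand-in for a large document
pieces = chunk_text(document, chunk_size=500, overlap=50)
print(len(pieces), "chunks")
```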
Why is chunking needed?
- Data/Document Extraction