AI Evolution: DSPy Transforms Prompting with Self-Improving Pipelines

Trend in Research · 21 views · 6 days ago
arxiv.org/abs/2310.03714
The ML community is rapidly exploring techniques for prompting language models (LMs) and for stacking them into pipelines that solve complex tasks. Unfortunately, existing LM pipelines are typically implemented using hard-coded "prompt templates", i.e. lengthy strings discovered via trial and error. Toward a more systematic approach for developing and optimizing LM pipelines, we introduce DSPy, a programming model that abstracts LM pipelines as text transformation graphs, i.e. imperative computational graphs where LMs are invoked through declarative modules. DSPy modules are parameterized, meaning they can learn (by creating and collecting demonstrations) how to apply compositions of prompting, finetuning, augmentation, and reasoning techniques. We design a compiler that will optimize any DSPy pipeline to maximize a given metric. We conduct two case studies, showing that succinct DSPy programs can express and optimize sophisticated LM pipelines that reason about math word problems, tackle multi-hop retrieval, answer complex questions, and control agent loops. Within minutes of compiling, a few lines of DSPy allow GPT-3.5 and llama2-13b-chat to self-bootstrap pipelines that outperform standard few-shot prompting (generally by over 25% and 65%, respectively) and pipelines with expert-created demonstrations (by up to 5-46% and 16-40%, respectively). On top of that, DSPy programs compiled to open and relatively small LMs like 770M-parameter T5 and llama2-13b-chat are competitive with approaches that rely on expert-written prompt chains for proprietary GPT-3.5. DSPy is available at this https URL
