DSPy explained: No more LangChain PROMPT Templates

17,861 views

code_your_own_AI

5 months ago

DSPy explained and coded in simple terms. No more LangChain or LangGraph prompt templates. A self-improving LLM-plus-retriever (RM) pipeline, with automatic prompt engineering via self-optimization over a graph-based pipeline representation in DSPy.
Chapter 1: Development of an Intelligent Pipeline for Large Language Models
Focus: Integration and Optimization of Language Models and Data Retrieval Systems.
Pipeline Architecture: The chapter begins with the conceptualization of an intelligent pipeline that integrates a large language model (LLM), a retriever model, and various data models. The pipeline is designed for self-configuration, learning, and optimization.
Graph-Based Representation: Emphasis is placed on using graph theory and mathematical tools for optimizing the pipeline structure. The graph-based approach allows for more efficient data processing and effective communication between different components.
Problem Identification: Challenges in integrating synthetic reasoning and actions within LLMs are addressed. The chapter discusses the need for optimizing prompt structures for diverse applications, highlighting the complexity of creating flexible and efficient models.
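The graph-based view of a pipeline described above can be illustrated with a minimal plain-Python sketch (this is not the DSPy API; the stage names and toy functions are invented for illustration): the pipeline is a directed graph of named stages executed in topological order, so its structure is explicit data that an optimizer could inspect and rewrite.

```python
# Conceptual sketch: an LLM pipeline represented as a directed graph of
# named stages, so the structure itself is data that can be optimized.
# Stage names and toy functions are illustrative, not the DSPy API.
from graphlib import TopologicalSorter

# Each stage takes the question and a context dict, returns an updated dict.
stages = {
    "retrieve": lambda q, ctx: ctx | {"passages": [f"doc about {q}"]},
    "reason":   lambda q, ctx: ctx | {"thought": f"considering {ctx['passages']}"},
    "answer":   lambda q, ctx: ctx | {"answer": f"answer to {q!r}"},
}
# Edges map each stage to its predecessors.
edges = {"retrieve": set(), "reason": {"retrieve"}, "answer": {"reason"}}

def run_pipeline(question: str) -> dict:
    ctx: dict = {}
    # Execute stages in dependency (topological) order.
    for name in TopologicalSorter(edges).static_order():
        ctx = stages[name](question, ctx)
    return ctx

result = run_pipeline("DSPy")
```

Because the graph is ordinary data, a self-optimizer can reorder, insert, or remove stages before execution, which is the structural flexibility the chapter emphasizes.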
Chapter 2: Evaluating and Optimizing Model Performance
Focus: Comparative Analysis of Model Configurations and Optimization Techniques.
Experimental Analysis: This chapter details experiments conducted by Stanford University and other institutions, analyzing various prompt structures and their impact on model performance. It includes an in-depth examination of different frameworks, including LangChain, and their effectiveness in specific contexts.
Optimization Strategies: The text explores strategies for optimizing the intelligent pipeline, including supervised fine-tuning algorithms from Hugging Face and in-context learning for few-shot examples.
Microsoft's Study: A critical review of a study conducted by Microsoft in January 2024 is presented, focusing on the comparison between retrieval augmented generation (RAG) and fine-tuning methods. This section scrutinizes the balance between incorporating external data into LLMs through RAG versus embedding the knowledge directly into the model via fine-tuning.
Chapter 3: Advanced Pipeline Configuration and Self-Optimization
Focus: Advanced Techniques in Pipeline Self-Optimization and Configuration.
Self-Optimizing Framework: This chapter delves into the creation of a self-improving pipeline, which includes automatic prompt generation and optimization. The pipeline is described as being capable of autonomously generating training datasets and deciding the optimal approach (fine-tuning vs. in-context learning) based on specific tasks.
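The bootstrapping idea behind this self-improvement (the intuition of DSPy's BootstrapFewShot teleprompter) can be sketched in plain Python. The toy model, metric, and dataset below are invented for illustration and this is not the actual DSPy API: run the pipeline over training inputs, keep only the traces that pass a metric, and reuse them as few-shot demonstrations.

```python
# Conceptual sketch of few-shot bootstrapping: generate candidate traces,
# verify them with a metric, and keep the good ones as demonstrations.
# All names here are illustrative, not the DSPy API.
def toy_model(question: str) -> str:
    # Stand-in for an LLM call.
    return question.rstrip("?").lower()

def metric(example: dict, prediction: str) -> bool:
    # Keep a trace only if the prediction matches the gold answer.
    return prediction == example["answer"]

def bootstrap_demos(trainset, model, metric, max_demos=2):
    demos = []
    for ex in trainset:
        pred = model(ex["question"])
        if metric(ex, pred):  # keep only verified traces
            demos.append({"question": ex["question"], "answer": pred})
        if len(demos) == max_demos:
            break
    return demos

trainset = [
    {"question": "Paris?", "answer": "paris"},
    {"question": "Berlin?", "answer": "rome"},  # toy model fails this one
    {"question": "Tokyo?", "answer": "tokyo"},
]
demos = bootstrap_demos(trainset, toy_model, metric)
```

The verified demonstrations can then be injected into the prompt (in-context learning) or used as a supervised fine-tuning set, which is exactly the fine-tuning-vs-ICL decision the chapter describes.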
DSPy Integration: Discussion of DSPy, a framework for compiling declarative language model calls into self-improving pipelines, with a focus on its PyTorch-inspired programming model.
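The declarative idea can be sketched in plain Python (a conceptual analogue of DSPy signatures, not the actual API; all names below are illustrative): a signature declares instructions plus input and output fields, and the prompt string is rendered from it automatically instead of being hand-written.

```python
# Conceptual sketch of a declarative "signature": the programmer declares
# what goes in and out; the prompt template is derived, not hand-written.
# Names are illustrative, not the DSPy API.
from dataclasses import dataclass

@dataclass
class Signature:
    instructions: str
    inputs: tuple
    outputs: tuple

def render_prompt(sig: Signature, **values) -> str:
    lines = [sig.instructions, ""]
    for field in sig.inputs:
        lines.append(f"{field.capitalize()}: {values[field]}")
    for field in sig.outputs:
        lines.append(f"{field.capitalize()}:")  # the model fills this in
    return "\n".join(lines)

qa = Signature("Answer the question using the context.",
               inputs=("context", "question"), outputs=("answer",))
prompt = render_prompt(qa, context="DSPy compiles pipelines.",
                       question="What does DSPy do?")
```

Because the template is derived rather than hand-written, an optimizer is free to rewrite the instructions or attach bootstrapped demonstrations without the programmer touching any prompt string.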
Comprehensive Optimization: The chapter concludes with an exploration of techniques for structural optimization of the pipeline and internal model optimization. It highlights collaborative efforts from Stanford University, UC Berkeley, Microsoft, Carnegie Mellon University, and Amazon in advancing these technologies.
github.com/stanfordnlp/dspy
all rights with authors:
DSPY: COMPILING DECLARATIVE LANGUAGE MODEL CALLS INTO SELF-IMPROVING PIPELINES
by Stanford, UC Berkeley, et al
arxiv.org/pdf/2310.03714.pdf
DSPy Notebooks:
github.com/stanfordnlp/dspy/b...
colab.research.google.com/git...

Comments: 20
@kevon217
@kevon217 5 months ago
Very timely. Finally got around to learning this framework and it’s an awesome abstraction.
@ghostwhowalks2324
@ghostwhowalks2324 5 months ago
Would love to see the video of finetune of the LLM's using DSPy. Sounds very intriguing
@user-dw6zl3ol3z
@user-dw6zl3ol3z 5 months ago
Thanks for the video. Pretty interesting tool. No more boring prompt engineering 😁 Shoutout to the mood in your videos.
@connorshorten6311
@connorshorten6311 5 months ago
Amazing analysis! Love this!
@vbywrde
@vbywrde 4 months ago
Fabulous information. Thank you!
@jdwebprogrammer
@jdwebprogrammer 4 months ago
Always great videos, thanks for making them! Yep, I noticed that, along with other things that seemed really limiting in LangChain.
@henkhbit5748
@henkhbit5748 5 months ago
Great video 👍. I will test it with Mistral models. Thanks!
@joserobles11
@joserobles11 5 months ago
Your videos are so inspiring to the community and you help a lot of us who are struggling to get up to date. Could you please share how you managed to create the database that you talked about in some videos, with which you trained your own LLM? I think I am ready to start a new project idea that I have, and it would really help me to get going. Thanks in advance for everything you do! 😊
@ppbroAI
@ppbroAI 5 months ago
Seems to me like prompting consensus with filtering of synthetic data. Interesting way to compact a pipeline. Oh, and imagine if this could be mixed with a selection of LoRA adapters. Nice information, bro!
@ngbrother
@ngbrother 5 months ago
I’ve been playing with AutoGen to create specialized pipelines. 🤔 Another aspect of self-optimization that might be interesting to explore is optimizing which modules in the pipeline are involved in a task, solving for token I/O cost. Every agent spamming a common chat room gets expensive. 💵 🔥
@zd676
@zd676 4 months ago
I still fail to see what value DSPy claims to bring. You keep saying “you don’t need a template”, however, behind the scenes DSPy has all these templates for the various modules you showed. How is this different from LangChain? In LangChain I don’t need to explicitly write a template either when using a retriever chain, for example.
@AIAnarchy-138
@AIAnarchy-138 5 months ago
"Microsoft, I don't know what you proved, but here are the results." 😂
@dr.mikeybee
@dr.mikeybee 5 months ago
+RAG? Brevity is the soul of wit.
@ghostwhowalks2324
@ghostwhowalks2324 5 months ago
Thank you for your definition of “teleprompter”. Until I saw your explanation, it did not make sense. The vocabulary needs to be fine-tuned (no pun intended, as our human language models have historical bias). As you explained, “remote optimized prompting” or something similar would be a better name.
@whig01
@whig01 5 months ago
Programpt
@tvwithtiffani
@tvwithtiffani 5 months ago
I believe we'll see some of these methods again when Llama 3 is released by Meta
@raymond_luxury_yacht
@raymond_luxury_yacht 5 months ago
I never understood LangChain. It doesn’t really seem to do much. This, however, is freaking amazing and will be the future. People can’t write prompts. For mass adoption you have to automate all the cognition and simplify it down to the absolute basics. Genius.
@DavitBarbakadze
@DavitBarbakadze 4 months ago
I'm about 15 minutes in (after watching part 1) and the whole thing still doesn't make sense. Like what, why, how? Maybe that's why nobody watches your videos past 10 minutes. People need information in a concise, applicable and maybe a little bit casual manner ("a little bit" being the keyword here).
@DavitBarbakadze
@DavitBarbakadze 4 months ago
Spoiler! It starts at 25:17 🙄