Comments
@josersleal a day ago
Where is the PyCharm extension from?
@omarpeguero1167 3 days ago
This is awesome! A very good alternative to GPT-4o! It's incredible how easy you make it for us!
@eduardov01 2 days ago
It really is. I'm glad you like it!
@keilavasquez728 3 days ago
GREAT!
@eduardov01 2 days ago
Thanks!
@LR-qj5zi 3 days ago
Great, useful 😁
@eduardov01 2 days ago
Glad it was helpful!
@LR-qj5zi 3 days ago
It's really useful, it's great
@eduardov01 2 days ago
Thanks!!
@LR-qj5zi 3 days ago
Amazing! Thanks
@eduardov01 2 days ago
Thank you!
@LR-qj5zi 3 days ago
Excellent, thank you!
@eduardov01 2 days ago
Glad you liked it!
@eucharisticadoration 3 days ago
Can you make an example using only local LLMs and local agents, so no API keys (and no costs) are needed? That would be amazing!
@eduardov01 2 days ago
Yes, I'll keep it in mind for the next video!
@eucharisticadoration 2 days ago
@eduardov01 Amazing!!
@pavanpraneeth4659 4 days ago
Awesome
@eduardov01 4 days ago
Thank you!!
@marcoaerlic2576 4 days ago
Thank you for this video. Very interesting.
@eduardov01 4 days ago
Glad you enjoyed it!
@ramakanaveen 5 days ago
Nice one. Question: what if all the docs are marked as irrelevant chunks by the model; do you need to query the vector DB again? I guess an improvement may be to include a HyDE model in between to improve the questions and keep trying to get different chunks from the DB?
@eduardov01 4 days ago
It'll perform a web search to find the relevant information (the node that has the Agent). Yes, that could be an option too.
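The fallback described above can be sketched as a simple routing step. The grader and function names below are illustrative assumptions, not the actual code from the video; a real pipeline would typically use an LLM call as the relevance grader.

```python
# Sketch of the "all chunks irrelevant -> fall back to web search" routing.
# grade_chunk is a crude keyword stand-in for an LLM-based relevance grader.
def grade_chunk(question: str, chunk: str) -> bool:
    # Mark a chunk relevant if it shares any word with the question.
    return any(word in chunk.lower() for word in question.lower().split())

def route(question: str, chunks: list[str]) -> tuple[str, list[str]]:
    relevant = [c for c in chunks if grade_chunk(question, c)]
    if relevant:
        return "generate", relevant   # answer from the vector store chunks
    return "web_search", []           # nothing relevant: go search the web

decision, docs = route("how do agents work", ["Agents call tools in a loop."])
print(decision)  # -> generate
decision, docs = route("how do agents work", ["Recipe for pasta."])
print(decision)  # -> web_search
```

The HyDE idea from the comment would slot in on the `"web_search"` branch instead: rewrite the question, re-query the vector DB, and only search the web after a retry budget is exhausted.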
@vasudevanvijayaragavan3186 5 days ago
Very nice. The only challenge with this approach is the total cost of answering each query, and in some cases it could run forever, until both LLMs agree or until you get the right relevant information from the search. I think if customers want a 100% guarantee and are not worried about latency, this will work really well.
@eduardov01 5 days ago
Indeed, it'll depend on your use case, because in some cases you wouldn't sacrifice the quality of the responses for speed.
@jayden_finaughty 5 days ago
Surely this approach becomes more and more viable as the cost of newly released models keeps decreasing by 5x, 10x, etc., as we are currently seeing? So the cost of this multi-shot RAG approach with a new model 5x cheaper is still less expensive than a single shot of its more expensive predecessor?
@eduardov01 5 days ago
Exactly!
@ivgnes 5 days ago
Do any services already provide "Web Search" as a tool via a GUI at the moment? It seems only a matter of time before coding these tools will no longer be needed, like weather-forecast tools and similar.
@eduardov01 5 days ago
The advantage of incorporating this agent into your pipeline is that it allows you to retrieve the latest information that LLMs may not have. For instance, ChatGPT-4 uses Bing search to answer questions about recent events, as the LLM wasn't trained on that data.
@avidlearner8117 8 days ago
Whoa, that's nice! I don't like Copilot because of the lack of control... This changes everything.
@eduardov01 8 days ago
Indeed, with this option you can have any proprietary or open-source model available to you all the time.
@marcoaerlic2576 14 days ago
Awesome video. Thank you.
@eduardov01 13 days ago
Glad you liked it!
@rajupresingu2805 14 days ago
Can you come up with a SQL agent chat with Llama 3?
@eduardov01 14 days ago
Yes, that's a valid approach.
@JrTech-rw6wj 15 days ago
Will it work if I have more tables in the database?
@eduardov01 14 days ago
Yes, you can add as many tables as you like. The function that retrieves the schema will provide all the columns and tables as input to the LLM. You only need to add a few example SQL queries (few-shot examples) for those tables so the LLM can understand how to JOIN them if necessary.
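As a rough sketch of that idea (the helper, table names, and template wording below are illustrative assumptions, not the code from the video), the retrieved schema and a few example queries can be assembled into the text-to-SQL prompt like this:

```python
# Hypothetical sketch: build a text-to-SQL prompt from the full schema plus
# few-shot example queries that show the LLM how to JOIN the tables.
def build_sql_prompt(schema: str, few_shots: list[dict], question: str) -> str:
    examples = "\n".join(
        f"Q: {ex['question']}\nSQL: {ex['sql']}" for ex in few_shots
    )
    return (
        f"Given the database schema:\n{schema}\n\n"
        f"Example queries:\n{examples}\n\n"
        f"Write a SQL query for: {question}"
    )

# Illustrative two-table schema and one JOIN example.
schema = "orders(id, customer_id, total)\ncustomers(id, name)"
few_shots = [{
    "question": "Total spent per customer name",
    "sql": ("SELECT c.name, SUM(o.total) FROM orders o "
            "JOIN customers c ON o.customer_id = c.id GROUP BY c.name"),
}]
prompt = build_sql_prompt(schema, few_shots, "Top customer by spend")
print(prompt)
```

Adding more tables then only means the schema string grows and a few more JOIN examples are appended to `few_shots`.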
@Noortje_1 15 days ago
Amazing work, keep it up!
@eduardov01 15 days ago
Thanks, I'm glad you liked it!
@omarpeguero1167 16 days ago
Very helpful! With the amount of data being handled, these comparisons help us make better decisions on how to structure our solutions! Thank you Eduardo!
@eduardov01 16 days ago
Indeed, when we're dealing with large datasets it's very important to optimize our code in terms of speed and memory.
@xavier_bernard 17 days ago
Hey, is this using GPT-4o in the workflow?
@eduardov01 16 days ago
No, it's using Whisper from OpenAI.
@chikosan99 17 days ago
Great video, very nice
@eduardov01 17 days ago
Thank you very much!
@ahmedmustafa08 19 days ago
langchain.chains LLMChain doesn't work anymore. I get the following error:
ValidationError: 2 validation errors for LLMChain
prompt: Can't instantiate abstract class BasePromptTemplate with abstract methods format, format_prompt (type=type_error)
llm: Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)
Is there a solution?
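That ValidationError usually means abstract base classes (or objects coming from a mismatched langchain package version) are being passed where concrete subclasses like a prompt template and a chat model are expected. Recent LangChain versions also deprecate `LLMChain` in favor of composing a prompt and a model with the `|` operator. The snippet below is a pure-Python stand-in that only mimics the shape of that composition pattern; it is not LangChain itself, and the class names are illustrative:

```python
# Pure-Python stand-in for LangChain's newer "prompt | llm" composition,
# which replaces the deprecated LLMChain. NOT the real library; a sketch.
class PromptTemplate:
    def __init__(self, template: str):
        self.template = template

    def invoke(self, inputs: dict) -> str:
        return self.template.format(**inputs)

    def __or__(self, other):
        return Pipeline(self, other)

class Pipeline:
    def __init__(self, first, second):
        self.first, self.second = first, second

    def invoke(self, inputs):
        # Run the prompt first, then feed its output to the model.
        return self.second.invoke(self.first.invoke(inputs))

class EchoLLM:
    # Stand-in for a concrete chat model; in real LangChain this would be
    # a concrete class such as ChatOpenAI, never BaseLanguageModel itself.
    def invoke(self, prompt: str) -> str:
        return f"LLM saw: {prompt}"

chain = PromptTemplate("Summarize: {text}") | EchoLLM()
print(chain.invoke({"text": "hello"}))  # -> LLM saw: Summarize: hello
```

The practical fix is the same shape: instantiate a concrete prompt template and a concrete model, and pipe them together instead of constructing `LLMChain`.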
@gauravsaxena6034 20 days ago
Explained very well. What is Unsloth? I heard of it for the first time.
@eduardov01 19 days ago
Thanks. It's a library that optimizes the fine-tuning and inference of some LLMs (by manually deriving all the compute-heavy math).
@SonGoku-pc7jl 20 days ago
Thanks, good flow between RAG and web search, thanks!! :)
@eduardov01 20 days ago
Thank you. I'm glad you found it interesting!
@rafaeltoth9674 20 days ago
Very nice!! How can I speak with you? I want to talk about Python projects.
@eduardov01 20 days ago
You can contact me on LinkedIn: www.linkedin.com/in/eduardo-vasquez-n/
@amacegamer 21 days ago
Great video! But I have a question I hope you can answer. Why is it so slow to answer? Is that normal for this architecture, or is there another reason, and can we do something to fix it?
@eduardov01 21 days ago
We have 5 LLMs generating answers, plus a retriever; a web search is performed when the question isn't covered by the vector store database, and we also store the web search results in the database. All these steps can take some time. To make it faster, you can use fewer LLMs and maybe skip the web search, depending on your use case.
@wishIknewthis10yearsago 22 days ago
Nice use case Eduardo, keep it up!
@eduardov01 22 days ago
Thank you, much appreciated!
@isaackodera9441 23 days ago
Wonderful project
@eduardov01 22 days ago
Thank you!
@chikosan99 23 days ago
Thanks a lot! Really great!!
@eduardov01 23 days ago
Thank you!! I'm glad you liked it!
@joulong 26 days ago
You really teach very well!
@eduardov01 26 days ago
Thank you so much!
@speedy-mw8uo 27 days ago
Nice tutorial! Thank you! I will now watch and try your other videos.
@eduardov01 27 days ago
I'm glad you liked it. Thank you for the support!
@NishantRoutray-ug1qt 29 days ago
Please upload the next part, adding the few-shot examples to the vector DB; it would be really helpful :-)
@eduardov01 27 days ago
Thank you for the comment! I'll be making this video soon.
@antoniotameirao1703 a month ago
How does the second model know the initial question if only the SQL response was provided?
@eduardov01 a month ago
That's a good remark. Currently, the second model makes an assumption about the initial question based solely on the SQL response provided. For a more robust approach, the initial question needs to be added to the prompt of the chain_query function. By including both the initial question and the SQL response as input fields, the final answer will be more accurate.
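A minimal sketch of that fix, where the template wording and helper name are illustrative assumptions (only `chain_query` is the function actually mentioned in the reply):

```python
# Sketch: pass both the user's question and the SQL result to the answering
# model, instead of the SQL result alone, so it doesn't have to guess what
# was originally asked. The template text is an assumption.
ANSWER_TEMPLATE = (
    "User question: {question}\n"
    "SQL query result: {sql_response}\n"
    "Answer the user's question using the SQL result above."
)

def build_answer_prompt(question: str, sql_response: str) -> str:
    # Both fields are explicit input variables of the prompt.
    return ANSWER_TEMPLATE.format(question=question, sql_response=sql_response)

print(build_answer_prompt("How many orders were placed in May?", "[(42,)]"))
```

In the actual pipeline, the same two input fields would be added to the prompt used inside `chain_query`.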
@jim02377 a month ago
I like the idea of putting the few-shot examples into a vector database. That would be a nice video to make.
@eduardov01 a month ago
I'll definitely consider making it. Stay tuned!
@eyalfrish a month ago
Nice video! Any chance to get access to the Excalidraw version of the diagram?
@eduardov01 a month ago
Thanks! I have a free account on Excalidraw and just have one session with all my diagrams. But you can get access to the flowchart using this link: github.com/Eduardovasquezn/advanced-rag-app/blob/main/images/rag.png
@keilavasquez728 a month ago
I've been searching for a self-correcting system, because sometimes the responses I receive from LLMs aren't precise. Thank you so much for your help.
@eduardov01 a month ago
I'm glad it was helpful!
@keila9874 a month ago
Is the Tavily API free? Can I use the Google Search Engine instead?
@eduardov01 a month ago
Yes, you can make 1,000 API calls for free every month. It's also possible to use Google Search as an agent for this. I have a video explaining step by step how to use it: kzfaq.info/get/bejne/ptZ3hbOI1dydh5c.html
@HuesofReality a month ago
Love from Pakistan
@eduardov01 a month ago
Thank you, much appreciated!
@TeresaAquino-ih5id a month ago
Excellent!!! Congratulations
@eduardov01 a month ago
Thanks!!
@jichaelmorgan3796 a month ago
Will this make it work on my old computer 🫣😅
@eduardov01 a month ago
Definitely. Since the fine-tuning is performed in Google Colab, you won't need to use your own computer's resources. 💪
@keilavasquez728 a month ago
Finally!!! I was waiting for it!!
@eduardov01 a month ago
I hope you find the tutorial helpful!
@esperanzanina4041 a month ago
Excellent.
@eduardov01 a month ago
Thanks!
@keilavasquez728 a month ago
So interesting! So helpful to me!
@eduardov01 a month ago
Glad it was helpful!
@esperanzanina4041 a month ago
Best of luck!
@eduardov01 a month ago
Thanks
@keila9874 a month ago
Thanks a bunch! I've been on the hunt for a video that explains how to give feedback on LLM responses. Keep up the awesome work!
@eduardov01 a month ago
You're welcome! Yes, that feature is newly incorporated in LangChain. I'm glad you liked it!
@keila9874 a month ago
It's pretty cool that you can ask questions in English about an invoice in a different language and still get the right answers.
@eduardov01 a month ago
Absolutely! It was mind-blowing to watch this multimodal LLM nail questions, even with images containing Spanish text!
@keilavasquez728 a month ago
So interesting!
@eduardov01 a month ago
Thank you!
@JuanAquino-td6yx a month ago
Keep up the progress!
@eduardov01 a month ago
Thanks!
@TeresaAquino-ih5id a month ago
Very well!!! Well done, Eduardo. Keep going!!!
@eduardov01 a month ago
Thanks!
@myomyatsu8343 a month ago
Could you also implement image, audio, video, and file uploading like this in React? I would really appreciate it.
@eduardov01 a month ago
Absolutely, implementing those functionalities in React is viable. I've showcased an invoice extractor (image to text) using Gemini 1.5 with Streamlit handling the frontend. The underlying principles easily translate to React. You can find the video here: kzfaq.info/get/bejne/pqeeoMtktr2wh58.html