Retrieval Augmented Generation (RAG) Explained: Embedding, Sentence BERT, Vector Database (HNSW)

50,046 views

Umar Jamil

A day ago

Get your $5 coupon for Gradient: gradient.1stcollab.com/umarja...
In this video we explore the entire Retrieval Augmented Generation pipeline. I start by reviewing language models, their training and inference, and then explore the main ingredient of a RAG pipeline: embedding vectors. We will see what embedding vectors are, how they are computed, and how we can compute embedding vectors for sentences. We will also explore what a vector database is, along with the popular HNSW (Hierarchical Navigable Small Worlds) algorithm that vector databases use to find embedding vectors given a query.
Download the PDF slides: github.com/hkproj/retrieval-a...
Sentence BERT paper: arxiv.org/pdf/1908.10084.pdf
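As a rough illustration of the pipeline described above, here is a minimal sketch of the retrieval-and-augmentation step in Python. The embedding function, the toy documents, and the in-memory index are placeholder assumptions (a real setup would use a sentence-embedding model and a vector database), not the exact code from the video.

```python
import numpy as np

# Hypothetical embedding function: in a real pipeline this would be a sentence
# embedding model (e.g. Sentence-BERT); here it is a deterministic stub.
def embed(text: str, dim: int = 8) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

# Split your documents into chunks and store (embedding, original text) pairs.
documents = [
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris.",
    "Mount Everest is the highest mountain on Earth.",
]
index = [(embed(doc), doc) for doc in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k chunks whose embeddings are most similar to the query."""
    q = embed(query)
    scored = sorted(((float(np.dot(q, vec)), doc) for vec, doc in index), reverse=True)
    return [doc for _, doc in scored[:k]]

# Augment the prompt with the retrieved context before calling the LLM.
query = "What is the capital of France?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)
```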
Chapters
00:00 - Introduction
02:22 - Language Models
04:33 - Fine-Tuning
06:04 - Prompt Engineering (Few-Shot)
07:24 - Prompt Engineering (QA)
10:15 - RAG pipeline (introduction)
13:38 - Embedding Vectors
19:41 - Sentence Embedding
23:17 - Sentence BERT
28:10 - RAG pipeline (review)
29:50 - RAG with Gradient
31:38 - Vector Database
33:11 - K-NN (Naive)
35:16 - Hierarchical Navigable Small Worlds (Introduction)
35:54 - Six Degrees of Separation
39:35 - Navigable Small Worlds
43:08 - Skip-List
45:23 - Hierarchical Navigable Small Worlds
47:27 - RAG pipeline (review)
48:22 - Closing

Comments: 114
@mohittewari6796
@mohittewari6796 9 days ago
The way you've explained all these concepts has blown my mind. I won't be surprised to see your number of subscribers skyrocket. Channel Subscribed !!
@nawarajbhujel8266
@nawarajbhujel8266 4 months ago
This is what a teacher with deep knowledge of what they are teaching can do. Thank you very much.
@wilsvenleong96
@wilsvenleong96 7 months ago
Man, your content is awesome. Please do not stop making these videos as well as code walkthroughs.
@tryit-wv8ui
@tryit-wv8ui 7 months ago
Wow! I finally understood everything. I am an ML student and have already watched half of your videos. Thank you so much for sharing. Greetings from Jerusalem
@ramsivarmakrishnan1399
@ramsivarmakrishnan1399 2 months ago
You are the best teacher of ML that I have experienced. Thanks for sharing the knowledge.
@faiyazahmad2869
@faiyazahmad2869 12 days ago
This is one of the best explanations I have ever seen on YouTube. Thank you.
@redfield126
@redfield126 7 months ago
Waited for such content for a while. You made my day. I think I got almost everything. So educational. Thank you Umar
@DeepakTopwal-sl6bw
@DeepakTopwal-sl6bw 3 months ago
Learning becomes more interesting and fun when you have a teacher like Umar, who explains everything related to the topic so well that everyone feels like they know the complete algorithm. A big fan of your teaching methods, Umar. Thanks for making all the informative videos.
@kiranshenvi2626
@kiranshenvi2626 6 months ago
Awesome content, sir; it was the best explanation I have found so far!
@JRB463
@JRB463 5 months ago
This was fantastic (as usual). Thanks for putting it together. It has helped my understanding no end.
@sarimhashmi9753
@sarimhashmi9753 2 months ago
Wow, thanks a lot. This is the best explanation of RAG I have found on YouTube.
@alexsguha
@alexsguha 5 months ago
Impressively intuitive, something most explanations are not. Great video!
@venkateshdesai3150
@venkateshdesai3150 A month ago
Amazing!! I finally understood everything. Good job; all your videos show in-depth understanding.
@goelnikhils
@goelnikhils 7 months ago
Amazing content and such a clear explanation. Please make more videos. Keep it up and this channel will grow like anything.
@suman14san
@suman14san 4 months ago
What an exceptional explanation of HNSW algo ❤
@yuliantonaserudin7630
@yuliantonaserudin7630 5 months ago
The best explanation of RAG
@Rockermiriam
@Rockermiriam 3 months ago
Amazing teacher! 50 minutes flew by :)
@jeremyregamey495
@jeremyregamey495 8 months ago
Just love your videos. So much detail, but extremely well put together.
@1tahirrauf
@1tahirrauf 8 months ago
Thanks, Umar. I look forward to your videos as you explain the topics in an easy-to-understand way. I would request you to make a "BERT implementation from scratch" video.
@user-yp2bg2bv2t
@user-yp2bg2bv2t 8 months ago
One of the best channels to learn and grow
@melikanobakhtian6018
@melikanobakhtian6018 7 months ago
Wow! You explained everything great! Please make more videos like this
@alexandredamiao1365
@alexandredamiao1365 5 months ago
This was fantastic and I have learned a lot from this! Thanks a lot for putting this lesson together!
@NeoMekhar
@NeoMekhar 7 months ago
This video is really good, subscribed! You explained the topic super well. Thanks!
@meetvardoriya2550
@meetvardoriya2550 7 months ago
Really amazing content! Looking forward to more such content, Umar :)
@bevandenizclgn9282
@bevandenizclgn9282 A month ago
Best explanation I found on YouTube, thank you!
@akramsalim9706
@akramsalim9706 8 months ago
Awesome paper. Please keep posting more videos like this.
@myfolder4561
@myfolder4561 4 months ago
Thank you so much - this is a great video. Great balance of details and explanation. I have learned a ton and have saved it down for future reference
@FailingProject185
@FailingProject185 8 months ago
Glad I've subscribed to your channel. Please do these more.
@mturja
@mturja 4 months ago
The explanation of HNSW is excellent!
@vasoyarutvik2897
@vasoyarutvik2897 6 months ago
Hello sir, I just want to say thanks for creating very good content for us. Love from India :)
@sethcoast
@sethcoast 3 months ago
This was such a great explanation. Thank you!
@mdmusaddique_cse7458
@mdmusaddique_cse7458 5 months ago
Amazing explanation!
@trungquang1581
@trungquang1581 3 months ago
Thank you so much for sharing. Looking forward to more content about NLP and LLMs.
@maximbobrin7074
@maximbobrin7074 8 months ago
Man, keep it up! Love your content
@amazing-graceolutomilayo5041
@amazing-graceolutomilayo5041 5 months ago
This was a wonderful explanation! I understood everything, and I didn't have to watch the Transformers or BERT videos (I actually know nothing about them, but I have dabbled with vector DBs). I have subscribed and will definitely watch the Transformer and BERT videos. Thank you!❤❤ Made a little donation too. This is my first time ever saying $Thanks$ on YouTube, haha.
@andybhat5988
@andybhat5988 13 days ago
Super explanation. Thank you
@DiegoSilva-dv9uf
@DiegoSilva-dv9uf 7 months ago
Thanks!
@ashishgoyal4958
@ashishgoyal4958 8 months ago
Thanks for making these videos🎉
@bhanujinaidu
@bhanujinaidu 3 months ago
Good explanation, thanks
@ShreyasSreedhar2
@ShreyasSreedhar2 5 months ago
This was super insightful, thank you very much!
@hientq3824
@hientq3824 8 months ago
awesome as usual! ty
@mustafacaml8833
@mustafacaml8833 5 months ago
Great explanation! Thank you so much
@nancyyou7548
@nancyyou7548 6 months ago
Thank you for the excellent content!
@_seeker423
@_seeker423 5 months ago
Excellent content!
@fernandofariajunior
@fernandofariajunior 5 months ago
Thanks for making this video!
@user-cs6vt4ei9v
@user-cs6vt4ei9v 5 months ago
amazing work very clear explanation ty!
@SureshKumarMaddala
@SureshKumarMaddala 7 months ago
Excellent video! 👏👏👏
@user-wy1xm4gl1c
@user-wy1xm4gl1c 8 months ago
Thank you, awesome video!
@rajyadav2330
@rajyadav2330 7 months ago
Great content, keep doing it.
@emptygirl296
@emptygirl296 8 months ago
Hola, coming back with great content as usual.
@umarjamilai
@umarjamilai 8 months ago
Thanks 🤓😺
@DanielJimenez-yy8xk
@DanielJimenez-yy8xk 3 months ago
awesome content
@amblessedcoding
@amblessedcoding 8 months ago
Wooo you are the best I have ever seen
@SanthoshKumar-dk8vs
@SanthoshKumar-dk8vs 6 months ago
Thanks for sharing, really great content 👏
@Zayed.R
@Zayed.R 4 months ago
Very informative, thanks
@sounishnath513
@sounishnath513 8 months ago
I am so glad I am subscribed to you!
@manyams5207
@manyams5207 5 months ago
wow wonderful explanation thanks
@ahmedoumar3741
@ahmedoumar3741 5 months ago
Nice lecture, Thank you!
@LiuCarl
@LiuCarl 5 months ago
simply impressive
@oliva8282
@oliva8282 2 months ago
Best video ever!
@oliz1148
@oliz1148 5 months ago
so helpful! thx for sharing
@dantedt3931
@dantedt3931 6 months ago
One of the best videos
@soyedafaria4672
@soyedafaria4672 5 months ago
Thank you so much. Such a nice explanation. 😀
@ChashiMahiulIslam-qh6ks
@ChashiMahiulIslam-qh6ks 5 months ago
You are the BEST!
@user-hc3nr9re4j
@user-hc3nr9re4j 8 months ago
Thank you so much man..
@satviknaren9681
@satviknaren9681 2 months ago
Please bring some more content !
@user-hd7xp1qg3j
@user-hd7xp1qg3j 8 months ago
You are a legend
@chhabiacharya307
@chhabiacharya307 6 months ago
Thank YOU :)
@IndianGamingMaharaja
@IndianGamingMaharaja 22 days ago
A totally worthwhile 48-minute video.
@jrgenolsen3290
@jrgenolsen3290 5 months ago
💪👍 Good introduction
@qicao7769
@qicao7769 5 months ago
Cool video about RAG! You could also upload it to Bilibili; since you live in China, you should know it. :D
@user-kg9zs1xh3u
@user-kg9zs1xh3u 7 months ago
keep it up!
@mohamed_akram1
@mohamed_akram1 5 months ago
Thanks
@amblessedcoding
@amblessedcoding 8 months ago
Thanks bro
@songsam1373
@songsam1373 4 months ago
thanks
@ltbd78
@ltbd78 6 months ago
Legend
@Jc-jv3wj
@Jc-jv3wj 2 months ago
Thank you very much for the detailed explanation of RAG with a vector database. I have one question: can you please explain how we design the skip list with embeddings? Basically, how do we decide which embedding goes to which level?
@user-bt1jl1ou7j
@user-bt1jl1ou7j 8 months ago
Wow, I saw the Chinese knotting on your wall ~
@parapadirapa
@parapadirapa 6 months ago
Amazing presentation! I have a couple of questions though... What chunk size should be used with Ada-002? Is that dependent on the embedding model, or is it about optimizing the granularity of 'queryable' embedded vectors? And another thing: am I correct to assume that, in order to capture as many contexts as possible, I should embed a 'tree structure' object (like a complex object in C#, with multiple nested object properties of other types) sectioned from the most granular level all the way up to the full object (as in, first the children, then the parents, then the grandparents)?
@Tiger-Tippu
@Tiger-Tippu 8 months ago
Hi Umar, does RAG also have the context window limitation, like prompt engineering techniques?
@UncleDavid
@UncleDavid 7 months ago
Salam Mr. Jamil, I was wondering if it would be possible to use the BERT model provided by Apple in Core ML for sentiment analysis when talking to Siri, then have a small GPT-2 model fine-tuned for conversational intelligence give a response that Siri then reads out.
@RomuloBrito-b2z
@RomuloBrito-b2z A month ago
When the algorithm runs to store the k best scores, does it use a pop operation on the list to remove the nodes that have already been visited?
@12.851
@12.851 6 months ago
Great video!! Shouldn't 5 come after 3 in the skip list?
@hassanjaved4730
@hassanjaved4730 4 months ago
Awesome, I completely understand RAG now, just because of you. Now I have some questions: I am using the Llama 2 model, and my main concern is that I give it a PDF for context so the user can ask questions about it, but this approach takes time during inference. After watching your video, what I understand is that with a RAG pipeline it should be possible to store the uploaded PDF in a vector DB and then use it from there. Am I thinking about this correctly, and is it possible? Thanks.
@rvons2
@rvons2 8 months ago
Are we storing the sentence embeddings together with the original sentences they were created from? If not, how do we map them back (from the top-k most similar stored vectors) to the text they originated from, given that the sentence embedding loses some information when pooling is done?
@umarjamilai
@umarjamilai 8 months ago
Yes, the vector database stores the embedding and the original text. Sometimes it does not store the original text but a reference to it (for example, instead of storing the text of a tweet, you may store the ID of the tweet) and then retrieves the original content using the reference.
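A minimal sketch of what such a record might look like; the field names and the tweet-ID format below are illustrative assumptions, not any particular vector database's schema.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

# Sketch of a vector-DB record as described above: the embedding plus either
# the original text or just a reference used to fetch the content later.
@dataclass
class Record:
    embedding: np.ndarray
    text: Optional[str] = None   # store the content directly...
    ref: Optional[str] = None    # ...or only a reference (e.g. a tweet ID)

store = [
    Record(embedding=np.random.rand(4), text="Paris is the capital of France."),
    Record(embedding=np.random.rand(4), ref="tweet:1234567890"),  # resolved via an external lookup
]
```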
@faiqkhan7545
@faiqkhan7545 8 months ago
Let's say I want to create an online semantic search tool that uses a vector DB and RAG, just like Bing. Will it follow the same procedure, and what new things would I need to add to integrate it with the Internet? Also, a nicely put together video, Umar. Can you do a coding session for this one like you do for the others, e.g. build something with real-time output using RAG? It would be a pleasure to watch.
@adatalearner8683
@adatalearner8683 3 months ago
Why is the context window size limited? Is it because these models are based on transformers, and for a given transformer architecture, long-distance semantic relationship detection is bounded by the number of words / context length?
@christopherhornle4513
@christopherhornle4513 8 months ago
Great video, keep up the good work! :) Around 19:25 you're saying that the embedding for "capital" is updated during backprop. Isn't that wrong for the shown example / training run where "capital" is masked? I always thought only the embeddings associated with non-masked tokens can be updated.
@umarjamilai
@umarjamilai 8 months ago
You're right! First of all, ALL embedding vectors of the 14 tokens are updated (including the embedding associated with the MASK token). What actually happens is that the model updates the embeddings of all the surrounding words in such a way that it can rebuild the missing word next time. Plus, the model is forced to use (mostly) the embeddings of the context words to predict the masked token, since any word may be masked, so there's not much useful information in the embedding of the MASK token itself. It's easy to get confused when you make long videos like mine 😬😬 Thanks for pointing it out!
@christopherhornle4513
@christopherhornle4513 8 months ago
I see, didn't know that the mask token is also updated! Thank you for the quick response. You really are a remarkable person. Keep going!
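A toy sketch of the point discussed in this thread, assuming a much simplified model in place of BERT (a mean-pooled embedding layer plus a linear head, toy vocabulary and token ids): after backprop through a masked-LM loss, every embedding row used in the input, including the [MASK] token's, carries a gradient.

```python
import torch
import torch.nn as nn

# Toy illustration (not actual BERT): backprop through a masked-LM loss produces
# gradients for the embedding rows of *all* input tokens, including [MASK].
vocab = {"[MASK]": 0, "rome": 1, "is": 2, "the": 3, "capital": 4, "of": 5, "italy": 6}
emb = nn.Embedding(len(vocab), 16)
head = nn.Linear(16, len(vocab))              # stand-in for the transformer layers + LM head

tokens = torch.tensor([[1, 2, 3, 0, 5, 6]])   # "rome is the [MASK] of italy"
target = torch.tensor([4])                     # the masked word is "capital"

hidden = emb(tokens).mean(dim=1)               # crude pooling in place of self-attention
loss = nn.functional.cross_entropy(head(hidden), target)
loss.backward()

# Every row used in the input (including row 0, the [MASK] token) has a non-zero gradient.
print(emb.weight.grad.abs().sum(dim=1))
```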
@tempdeltavalue
@tempdeltavalue 6 months ago
So how does an LLM convert a vector back to text?
@tomargentin5198
@tomargentin5198 3 months ago
Hey, big thanks for this awesome and super informative video! I'm really intrigued by the Siamese architecture and its connection to RAG. Could someone explain that a bit more? Am I right in saying it's used for top-K retrieval? Meaning, we create the database with the output embeddings and then use a trained Siamese architecture to find the top-K most relevant chunks by computing similarities? Is it necessary to use this approach in every framework, or can just computing similarity on the embeddings sometimes work effectively?
@jeromeeusebius
@jeromeeusebius A month ago
The Siamese network he talked about just provides details of the Sentence-BERT model that is used for encoding. The connection to RAG is that the Sentence-BERT model is used to do the encoding for both the query and the document chunks fed into the DB. In this case, Umar is providing some additional information regarding how the Sentence-BERT model was developed and why it is better than vanilla BERT; I think it's important to understand the distinction. The top-K retrieval is done by the vector search. Using HNSW, for example, the query is compared with a random entry point and then you proceed to the neighbors of each vector until you get to a local minimum. You save this point. You do this a few times (> k) and retrieve the top-K results sorted by their similarities. So the embeddings from S-BERT are used, but not directly: the retrieval of the top-K embeddings is done at the vector DB search level, and by doing this multiple times (via different entries into the HNSW graph) you will get different candidates, from which you take the top K. I hope this is clear.
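A simplified, single-layer sketch of the search strategy described above. The toy graph, random entry points, and Euclidean distance are assumptions for illustration; a real HNSW index builds its layers and neighbor lists much more carefully.

```python
import random
import numpy as np

# Greedy graph traversal from several random entry points, keeping the best
# nodes found; each traversal stops at a local minimum of distance to the query.
def greedy_search(graph, vectors, query, entry):
    current = entry
    while True:
        candidates = graph[current] + [current]
        best = min(candidates, key=lambda n: np.linalg.norm(vectors[n] - query))
        if best == current:               # local minimum: no neighbor is closer to the query
            return current
        current = best

def search_top_k(graph, vectors, query, k=3, n_entries=8):
    hits = {greedy_search(graph, vectors, query, random.randrange(len(vectors)))
            for _ in range(n_entries)}    # several entry points -> several local minima
    return sorted(hits, key=lambda n: np.linalg.norm(vectors[n] - query))[:k]

vectors = np.random.rand(20, 4)           # toy embeddings
graph = {i: random.sample([j for j in range(20) if j != i], 4) for i in range(20)}
print(search_top_k(graph, vectors, np.random.rand(4)))
```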
@rkbshiva
@rkbshiva 7 months ago
Umar, great content! Around 25:00, you say that we have a target cosine similarity. How is that target cosine similarity calculated? Because there is no mathematical way to calculate the cosine similarity between two sentences; all we can do is take a subjective guess. Can you please explain in detail how this works?
@umarjamilai
@umarjamilai 7 months ago
When you train the model, you have a dataset that maps two sentences to a score (chosen by a human being, for example on a scale from 1 to 10). This score can be used as a target for the cosine similarity. If you look at papers in this field, you'll see there are many sophisticated methods, but the training data is always labeled by a human being.
@rkbshiva
@rkbshiva 7 months ago
@@umarjamilai Understood! Thanks very much for the prompt response. It would be great if we could identify a bias-free way to do this, as scoring between 1 and 10, especially when done by multiple people and at scale, could get biased.
@jeromeeusebius
@jeromeeusebius A month ago
@@rkbshiva The scoring is not done by random people. Usually specialists, e.g., language specialists, are employed to build this dataset, and this reduces the noise in the labels (you'd still get some bias, but it should be small). Google does this for search quality: they have a standard search quality evaluation document that is provided to the evaluators, who use it as a guide for how to score the different documents returned for a given query.
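For reference, a minimal sketch of the training objective discussed in this thread, with a toy pooled encoder standing in for BERT and a made-up 1-10 human label; the token ids and rescaling are illustrative assumptions, not the exact Sentence-BERT recipe.

```python
import torch
import torch.nn as nn

# A shared ("Siamese") encoder embeds both sentences; an MSE loss pulls their
# cosine similarity toward a human-annotated score rescaled to [-1, 1].
encoder = nn.EmbeddingBag(num_embeddings=1000, embedding_dim=32)  # stand-in for BERT + mean pooling

sent_a = torch.tensor([[12, 45, 7, 230]])   # toy token ids for sentence A
sent_b = torch.tensor([[12, 45, 9, 231]])   # toy token ids for sentence B
human_score = torch.tensor([8.0])           # e.g. an annotator's score on a 1-10 scale

emb_a = encoder(sent_a)                     # the same weights encode both sentences
emb_b = encoder(sent_b)
cos = nn.functional.cosine_similarity(emb_a, emb_b)

target = (human_score - 1.0) / 9.0 * 2 - 1  # map the 1-10 label onto the cosine range [-1, 1]
loss = nn.functional.mse_loss(cos, target)
loss.backward()                             # gradients flow into the shared encoder
```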
@Vignesh-ho2dn
@Vignesh-ho2dn 3 months ago
How would you find the number 3 at 44:01? The algorithm you described will go to 5, and then, since 5 is greater than 3, it won't go further. Am I right?
@jeromeeusebius
@jeromeeusebius A month ago
I think he is mostly explaining how the skip-list data structure works. In general, with HNSW, you are not looking for a particular value (those values are cosine similarity scores); rather, you are traversing the graph toward neighbors closer to the query until you reach a local minimum, and that node is returned. You then repeat it again from another entry point.
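For readers puzzling over the skip-list example around 44:00, here is a small sketch of how skip-list search descends the levels. The hard-coded, sorted levels are an illustrative assumption (a real skip list assigns levels randomly on insertion), not the exact diagram from the video.

```python
# Skip-list search: scan right on each level while the next value is still <= the
# target, then drop down a level; the bottom level holds every element, sorted.
levels = [
    [1, 9],          # top level: sparse "express lane"
    [1, 5, 9],       # middle level
    [1, 3, 5, 7, 9], # bottom level: all elements
]

def skiplist_search(levels, target):
    pos = 0  # index into the current level, starting at the head
    for depth, level in enumerate(levels):
        while pos + 1 < len(level) and level[pos + 1] <= target:
            pos += 1                                   # move right on this level
        if level[pos] == target:
            return depth, pos                          # found
        if depth + 1 < len(levels):
            pos = levels[depth + 1].index(level[pos])  # drop down to the next level
    return None                                        # not present

print(skiplist_search(levels, 3))  # found on the bottom level -> (2, 1)
```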
@adatalearner8683
@adatalearner8683 2 months ago
How do we get the target cosine similarity in the first place?
@jeromeeusebius
@jeromeeusebius A month ago
There is a dataset of sentence pairs annotated and scored by experts. This is what is used to compute the loss.
@anapaunovic8405
@anapaunovic8405 7 months ago
Do you plan to record coding Sentence-BERT from scratch?
@koiRitwikHai
@koiRitwikHai 6 months ago
At 44:00, the order of the linked list is incorrect, isn't it? Because it should be 1 3 5 9.
@moviesnight248
@moviesnight248 6 months ago
I have the same doubt. It should have been sorted as per the definition.
@utkarshjain3814
@utkarshjain3814 4 months ago
@NicolasPorta31
@NicolasPorta31 5 months ago
Thanks!
@KumR
@KumR 6 months ago
Half
@fuxxs5994
@fuxxs5994 10 days ago
Thanks✅
@amitshukla1495
@amitshukla1495 8 months ago
You are a legend