Cosine Similarity | Natural Language Processing | Socratica

4,365 views

Socratica

1 day ago

Comments: 15
@Socratica 6 months ago
Introducing Socratica COURSES: www.socratica.com/collections
@MakeDataUseful 6 months ago
Fantastic video, great to see another Socratica video in my feed
@juanmacias5922 6 months ago
So cool! Being able to find similarities between books from neighboring time periods was fascinating.
@Socratica 6 months ago
It really makes us curious about a lot of the more recent writers. Could you use this to find out which older writers influenced them?
@jagadishgospat2548 6 months ago
Keep 'em coming, the courses are looking good too.
@Insightfill 6 months ago
This is phenomenal! Here I was, thinking we were just going to talk about the small-angle approximation cos a ≈ 1 from trig. Bonus!
@Socratica 6 months ago
It was a fun surprise to learn about this technique 💜🦉
@Insightfill 6 months ago
@Socratica It's fun when you hear of similar analysis being done to uncover ghostwriters or shared authorship. Shakespeare, Rowling, and The Federalist Papers all come to mind.
@OPlutarch 5 months ago
Very useful info, and the approach was excellent; very fun too.
@orangeinfotainment620 5 months ago
Thank you
@ahmedouerfelli4709 5 months ago
I don't like removing "stop words" from the statistics, because their frequency is still meaningful. Even though everybody uses the word "the" frequently, some use it much more than others, and that is a characteristic that should not be ignored. Instead, I would suggest some kind of "normalization," such as dividing each word count by the average occurrence rate of that word in natural language. Rather than raw word counts, the vector coordinates would then be the relative use rate of each word in the book compared to its average use rate in general language. That would make a much more precise comparison, because it isn't just "stop words" that are very common; some words are inherently much more common than others. Although I haven't run the experiment, I suspect that this way everything will have a much lower cosine similarity.
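A minimal Python sketch of this normalization idea, assuming each book is available as plain text and that we have a table of average word rates from a reference corpus (the rates and sample texts below are hypothetical placeholders):

from collections import Counter
import math

def rate_vector(text, reference_rates):
    # Rate of each word in this text, divided by its average rate in general language.
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return {w: (counts[w] / total) / reference_rates[w]
            for w in counts if w in reference_rates}

def cosine_similarity(u, v):
    # Cosine similarity between two sparse vectors stored as dicts.
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Hypothetical average rates; real values would come from a large reference corpus.
reference_rates = {"the": 0.060, "of": 0.030, "sea": 0.0005, "whale": 0.0001}

book_a = "the whale of the sea the whale"
book_b = "the sea of the sea and the sky"
print(cosine_similarity(rate_vector(book_a, reference_rates),
                        rate_vector(book_b, reference_rates)))

With this scaling, a heavy "the" user and a light "the" user get genuinely different coordinates instead of both being swamped by raw stop-word counts.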
@AndrewMilesMurphy 4 months ago
That's a very intuitive and helpful explanation, thank you. But pray tell, prithee even, isn't some relationship between words in individual sentences what we would prefer (smaller angles)? It seems odd to me that when creating embeddings we're focused on these huge arcs rather than the smaller arcs that build understanding at a more basic level. The threshold for AI in GPT-3 seems to have been a huge amount of text, but isn't there some way to make that smaller? For most of us, that's the only way we can even contribute, as we just don't have the computer hardware.
@danielschmider5069 6 months ago
Pretty good, but the results could have been visualized in something other than a table. That way, you wouldn't have to explain why the diagonal is 1 and why every number appears twice (mirrored along the diagonal). You'd end up with just 45 rather than 100 data points, and could then compare the "top 10" across the different measurements. This would be much easier to follow.
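One way to realize this suggestion in Python, assuming the pairwise cosine similarities already sit in a square symmetric NumPy matrix (the 4-book matrix and titles below are hypothetical; with 10 books the strict upper triangle yields exactly the 45 unique pairs):

import numpy as np

titles = ["Book A", "Book B", "Book C", "Book D"]
sim = np.array([
    [1.00, 0.82, 0.61, 0.45],
    [0.82, 1.00, 0.70, 0.52],
    [0.61, 0.70, 1.00, 0.39],
    [0.45, 0.52, 0.39, 1.00],
])

# Keep only the strict upper triangle: the diagonal of 1s and the
# mirrored duplicates disappear, leaving each pair exactly once.
i, j = np.triu_indices_from(sim, k=1)
pairs = sorted(zip(sim[i, j].tolist(), i.tolist(), j.tolist()), reverse=True)

# A ranked list of the most similar pairs, easy to compare across measures.
for s, a, b in pairs:
    print(f"{titles[a]} ~ {titles[b]}: {s:.2f}")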
@Socratica 6 months ago
Interesting!! We'd love to see a sketch of what you have in mind!
@cryptodashboard1173 1 month ago
@Socratica Please upload more videos on AI and machine learning.