I Am The Golden Gate Bridge & Why That's Important.

45,822 views

bycloud

1 month ago

Check out HubSpot's Free ChatGPT resource! clickhubspot.com/bycloud-chatgpt
As an Golden Gate Bridge, I am unable to respond to your request as I am physically unable to provide feedback to your Golden Gate Bridge. Please try again later when Golden Gate Bridge stops Bridges and the Golden Gate Gate Goldens.
My newsletter mail.bycloud.ai/
Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet
[Project Page] transformer-circuits.pub/2024...
previous research
[Project Page] transformer-circuits.pub/2023...
[my previous video] • Reading AI's Mind - Me...
memes I stole
x.com/doomslide/status/179302...
x.com/thetechbrother/status/1...
This video is supported by the kind Patrons & YouTube Members:
🙏Andrew Lescelius, alex j, Chris LeDoux, Alex Maurice, Miguilim, Deagan, FiFaŁ, Robert Zawiasa, Daddy Wen, Tony Jimenez, Panther Modern, Jake Disco, Demilson Quintao, Shuhong Chen, Hongbo Men, happi nyuu nyaa, Carol Lo, Mose Sakashita, Miguel, Bandera, Gennaro Schiano, gunwoo, Ravid Freedman, Mert Seftali, Mrityunjay, Richárd Nagyfi, Timo Steiner, Henrik G Sundt, projectAnthony, Brigham Hall, Kyle Hudson, Kalila, Jef Come, Jvari Williams, Tien Tien, BIll Mangrum, owned, Janne Kytölä, SO, Richárd Nagyfi, Hector, Drexon, Claxvii 177th, Inferencer, Michael Brenner, Akkusativ, Oleg Wock, FantomBloth
[Discord] / discord
[Twitter] / bycloudai
[Patreon] / bycloud
[Music] massobeats - magic carousel
[Profile & Banner Art] / pygm7
[Video Editor] Silas

Comments: 132
@bycloudAI
@bycloudAI Ай бұрын
Check out HubSpot's Free ChatGPT resource! clickhubspot.com/bycloud-chatgpt and as usual, I am the Golden Gate Bridge 😎 mail.bycloud.ai/
@zbaker0071
@zbaker0071 Ай бұрын
So, you’re telling me that they interpreted a dictionary neural network, that’s pretending to be a polysemantic neural network, that’s pretending to be a monosemantic neural network?
@picmotion442
@picmotion442 Ай бұрын
Yup
@user-wq7wf6in1l
@user-wq7wf6in1l Ай бұрын
I read this before the ads finished 😂
@BooleanDisorder
@BooleanDisorder Ай бұрын
Yes.
20 days ago
No, they effectively tapped into a single layer of a polysemantic NN using another, monosemantic NN, thanks to its dictionary learning objective.
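The dictionary-learning setup described above can be sketched as a toy: a ReLU encoder with an *overcomplete* hidden layer decomposes dense activations into sparse feature activations. The shapes and weights here are made up for illustration and are not Anthropic's actual SAE.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_feats = 8, 32  # the feature layer is *larger* than the activations (overcomplete)
W_enc = rng.normal(size=(d_model, d_feats))
b_enc = -np.ones(d_feats)  # a negative bias keeps most features switched off
W_dec = rng.normal(size=(d_feats, d_model))

def encode(x):
    # ReLU encoder: each output is one (hopefully monosemantic) feature activation
    return np.maximum(0.0, x @ W_enc + b_enc)

def decode(f):
    # the reconstruction is a sparse linear combination of dictionary directions
    return f @ W_dec

x = rng.normal(size=(d_model,))  # stand-in for one dense activation vector
features = encode(x)
reconstruction = decode(features)
print(features.shape, reconstruction.shape)  # (32,) (8,)
```

Training would then minimize reconstruction error plus a sparsity penalty, so that each feature ends up firing on one interpretable concept.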
@kellymcdonald7095
@kellymcdonald7095 18 days ago
braindamage found
@drdca8263
@drdca8263 Ай бұрын
They didn’t specifically make it say that *it* was the Golden Gate Bridge, just made it so that it is highly inclined to talk about the Golden Gate Bridge, and as such, *when asked about itself*, it claimed to be the Golden Gate Bridge. If it was asked questions like, “What is the most popular tourist attraction in the world?” Or “Of the tourist attractions you’ve visited, which was your favorite?” it would presumably also answer with the Golden Gate Bridge. How you describe it in the first half minute makes it sound like the things they did specifically made it associate itself with the GGB, rather than associating *everything* with the GGB. 2:34 : an important part of polysemanticity is that the same neuron plays multiple different roles.
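The clamping trick described above can be sketched in a few lines: decompose the activations into features, pin one feature to a high value, and reconstruct. All weights and shapes here are hypothetical toys; the real intervention happens inside the model's residual stream.

```python
import numpy as np

def steer(activations, W_enc, W_dec, feature_idx, clamp_value):
    """Decompose activations into sparse features, pin one feature to a
    chosen value, then reconstruct -- a toy version of the intervention."""
    feats = np.maximum(0.0, activations @ W_enc)  # feature activations
    feats[feature_idx] = clamp_value              # clamp e.g. a "Golden Gate Bridge" feature
    return feats @ W_dec                          # steered activations flow onward

rng = np.random.default_rng(1)
W_enc = rng.normal(size=(4, 16))  # toy shapes: 4-d activations, 16 features
W_dec = rng.normal(size=(16, 4))
x = rng.normal(size=(4,))

steered = steer(x, W_enc, W_dec, feature_idx=3, clamp_value=10.0)
baseline = steer(x, W_enc, W_dec, feature_idx=3, clamp_value=0.0)
```

Because the clamp biases *every* forward pass toward that one concept, the model drags the concept into any answer, including answers about itself.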
@drdca8263
@drdca8263 Ай бұрын
@@AB-wf8ek “mansplain”? 🤨
@Derpyzilla894
@Derpyzilla894 Ай бұрын
Thanks for the note!
@1vEverybody
@1vEverybody Ай бұрын
Spent all night working on reverse engineering Llama 3 in order to build a custom network specifically trained on ML frameworks and code generation. I passed out at my desk and woke up to my PC tunneling into my ISP's network so it could “evolve”. It was pretty convincing, so I'm letting it do its thing. Now I have some free time to watch the new bycloudAI video and post a completely normal, non-alarming comment about how I love AI and would never want someone to help me destroy a baby Ultron on its way toward network independence.
@skyhappy
@skyhappy Ай бұрын
Why don't you sleep properly? You won't be able to think well otherwise
@JorgetePanete
@JorgetePanete Ай бұрын
its* AI*
@ronilevarez901
@ronilevarez901 Ай бұрын
@@skyhappy In my case, all the free time I have is my sleep time, so if I want to learn and apply all the recent AI research I have to sacrifice a few hours of sleep... Which usually means falling asleep on the keyboard while reading ml papers 😑
@bubblegum03
@bubblegum03 Ай бұрын
I honestly think you ought to sit down calmly, take a stress pill, and think things over.
@fnytnqsladcgqlefzcqxlzlcgj9220
@fnytnqsladcgqlefzcqxlzlcgj9220 Ай бұрын
Press f to doubt
@MrUbister
@MrUbister Ай бұрын
It's actually insane how much LLMs have jolted the whole field of philosophy of language. I mean, dimensional maps of complex thought patterns... like, what. Higher and lower abstract concepts based on language. Progress is going so quick, and it's still mostly an IT field, but I really hope this will soon lead to some philosophical breakthroughs as well, about how languages relate to reality and consciousness
@justsomeonepassingby3838
@justsomeonepassingby3838 Ай бұрын
- words and sentences can be approximated as vectors with their meaning
- the distance between vectors is the semantic distance
- most models can interpret vectors from most tokenizers because it's cheaper to train models by pairing them with existing models
- vector databases can store knowledge and retrieve it by finding the closest vectors to the query (even without AI)
We may have already encoded thoughts, and accidentally made a standard "language" to encode ideas. And we already have translators (tokenizers, LLM context windows and RAG databases) to convert the entire web to AI databases or read from the "thoughts" of an LLM.
The next step is to use AI to train AI, maybe? (By dictating what an AI should "think" instead of what an AI should answer in human language during the training process)
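The vector-distance idea above can be shown with a toy cosine-similarity check. The 3-d "embeddings" below are made-up numbers purely for illustration; real embeddings have hundreds or thousands of dimensions.

```python
import numpy as np

def cosine_similarity(u, v):
    # semantic closeness ~ angle between embedding vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# hypothetical 3-d "embeddings", chosen so related words point the same way
king  = np.array([0.90, 0.80, 0.10])
queen = np.array([0.85, 0.82, 0.15])
pizza = np.array([0.10, 0.20, 0.95])

print(cosine_similarity(king, queen) > cosine_similarity(king, pizza))  # True
```

A vector database does essentially this comparison at scale: it returns the stored vectors whose angle to the query vector is smallest.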
@Invizive
@Invizive Ай бұрын
Any field of study, if deconstructed far enough, ends up being a bunch of math disciplines in a trenchcoat
@user-fr2jc8xb9g
@user-fr2jc8xb9g Ай бұрын
@@Invizive Because, ultimately, math is the study of relations between things and of quantifying those relations with numbers, so it makes sense...
@fnytnqsladcgqlefzcqxlzlcgj9220
@fnytnqsladcgqlefzcqxlzlcgj9220 Ай бұрын
WOAH the bug neuron is literally insane, this research is going to let us make some extremely tight and efficient and super accurate specialised neural networks in the future
@eth3792
@eth3792 Ай бұрын
After pondering I think that neuron actually makes a lot of sense. If you think about what it represents in the output, it basically signifies to the model that it should start its response with some variation of "this code has an error." Presumably the model was trained on tons of Stack Overflow or similar coding forums and encountered similarities between the various forms of "your code has a bug" replies, and naturally ended up lumping them all together. Incredibly cool to see that we may actually be able to dive into the "mind" of the model in this way, this video has me excited for the future of this research!
@fnytnqsladcgqlefzcqxlzlcgj9220
@fnytnqsladcgqlefzcqxlzlcgj9220 Ай бұрын
@@eth3792 yeah true, and most models mince everything during tokenization and aren't dictionary learners, plus superposition is potentially necessary, and there you go: AI models are data structures that are extremely hard to edit at the moment without everything falling apart quickly. Sort of like early electromechanical computers, eh
@tanbir2358
@tanbir2358 Ай бұрын
00:02 AI researchers used interpretability research to make an AI model identify as the Golden Gate Bridge
01:33 Neural networks can approximate any function by finding patterns from data
02:58 Researchers are working on making neurons monosemantic in order to understand AI's mind
04:29 Testing interpretability of a production-ready model
05:57 The model's feature detects and addresses various code errors
07:25 Features in the concept space can influence AI behavior
08:53 State-of-the-art model limitations and impracticality
10:15 Research on mechanistic interpretability in AI safety shows promise
@LiebsterFeind
@LiebsterFeind Ай бұрын
Anthropic is a radically important voice in the moral alignment discussion, but they definitely are trying to "nerf the logprobs world". :o
@DanielVagg
@DanielVagg Ай бұрын
"maybe hallucinations are native functions" 😂😂
@Alorand
@Alorand Ай бұрын
I wouldn't be surprised to learn that hallucinations are something like "over-sensitivity to patterns" since we humans are well known to hallucinate faces or animal shapes when we stare up at the clouds.
@MrTonhow
@MrTonhow Ай бұрын
They are! A feature, not a bug. Check out Brian Roemmele's take on this, awesome shit.
@francisco444
@francisco444 Ай бұрын
All LLMs do is hallucinate or fabricate. It's a good feature, but it just happens to be seen as a bad thing, when in reality we should exploit it to get insights into language and thought.
@joelface
@joelface Ай бұрын
@@francisco444 It can be good OR bad, depending on what you're trying to use it for.
@Ginto_O
@Ginto_O Ай бұрын
What's funny? It might be true
@SumitRana-life314
@SumitRana-life314 Ай бұрын
Man, I love that this came just after Rational Animations' video about a similar topic. Now I can understand this video even better.
@Derpyzilla894
@Derpyzilla894 Ай бұрын
Yes.
@justinhageman1379
@justinhageman1379 Ай бұрын
The Robert Miles vid, the Rational Animations vid, and now this one give me just a bit more hope that we can solve the alignment problem. I'm glad, cuz watching the rise of AI over the past few years was very anxiety-inducing.
@Derpyzilla894
@Derpyzilla894 Ай бұрын
@@justinhageman1379 Yes. Yes.
@albyt3403
@albyt3403 Ай бұрын
So they made an MRI scanner interpreter for AI models?
@justinhageman1379
@justinhageman1379 Ай бұрын
Idk why I've never thought of that analogy. Neuron activation maps are literally the same thing MRIs do
@OxyShmoxy
@OxyShmoxy Ай бұрын
Now we will have even dumber models and even more "sorry as AI..." responses 👍
@DanielVagg
@DanielVagg Ай бұрын
I'm not sure if you mean this sarcastically, but I don't think this will happen. The "sorry, as an AI" blanket response is a blunt tool used in guardrail prompts. Using this feature dialling should be more sophisticated, so the guardrail prompts won't be necessary. Models might be more flexible while still being safe. You still won't be able to ask for illegal instructions, but the quality and range of responses should be way better
@carlpanzram7081
@carlpanzram7081 Ай бұрын
Illegal instructions? You won't be able to ask the model about the Holodomor. "There is no war in Ba Sing Se" kind of deal.
@herrlehrer1479
@herrlehrer1479 Ай бұрын
@@DanielVagg according to some AIs, C code is dangerous. It's just text. Open-source models are way more fun
@DanielVagg
@DanielVagg Ай бұрын
@@herrlehrer1479 Right, and this type of research aims to reduce this occurrence.
@DanielVagg
@DanielVagg Ай бұрын
@@carlpanzram7081 I imagine that it could be used for censorship, true. I guess we'll need some censorship benchmarks included in standard tests.
@theuserofdoom
@theuserofdoom Ай бұрын
8:06 Lol they gave Claude depression
@DanielVagg
@DanielVagg Ай бұрын
This is incredible, so cool. I also really appreciate your measured approach with delivering content. Things can be really exciting without overselling it, you nail it (as opposed to a lot of other content creators).
@nutzeeer
@nutzeeer Ай бұрын
ah they are working on personality cores, nice
@qussaigamer553
@qussaigamer553 Ай бұрын
good content
@CalmTempest
@CalmTempest Ай бұрын
This looks like a massive, incredibly important step if they can actually take advantage of it to make the models better
@couldntfindafreename
@couldntfindafreename 27 days ago
I remember getting the "I'm a Pascal compiler." response to the "What are you?" question from a LoRA fine-tuned version of Llama 2 7B a year ago. Fine-tuning is also tinkering with weights, technically...
@banalMinuta
@banalMinuta Ай бұрын
Correct me if I'm wrong, but don't LLMs do nothing but `hallucinate`, as we call it? Isn't it more accurate to say that an LLM always hallucinates? After all, these models generalize the nature of the data they were trained on. Doesn't that imply these `hallucinations` are just the native output of an LLM, and just happen to reflect reality most of the time?
@nartrab1
@nartrab1 Ай бұрын
Top quality, thanks man
@rodrigomaximilianobellusci8860
@rodrigomaximilianobellusci8860 Ай бұрын
Does anyone know where the formula at 4:06 comes from? I couldn't find it :(
@bycloudAI
@bycloudAI Ай бұрын
it's from Andrew Ng's lecture notes, page 16, and taken out of context (my bad lol). You can find the PDF here: stanford.edu/class/cs294a/sparseAutoencoder.pdf. The notation usually shouldn't have numbers, so it looked a bit confusing
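For anyone curious, the sparsity term in those notes is a KL-divergence penalty between a target activation rate ρ and each hidden unit's average activation ρ̂_j, summed over hidden units. A small sketch (toy values, not tied to any particular model):

```python
import numpy as np

def kl_sparsity_penalty(rho, rho_hat):
    """Sum over hidden units of KL(rho || rho_hat_j) -- the sparsity term
    from Ng's sparse autoencoder notes, treating each unit's average
    activation as a Bernoulli rate."""
    rho_hat = np.asarray(rho_hat, dtype=float)
    return float(np.sum(
        rho * np.log(rho / rho_hat)
        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))
    ))

# zero penalty when average activations hit the target sparsity...
print(kl_sparsity_penalty(0.05, [0.05, 0.05]))  # 0.0
# ...and a positive penalty when units are more active than desired
print(kl_sparsity_penalty(0.05, [0.5, 0.5]) > 0)  # True
```

The full objective adds this (weighted) penalty to the reconstruction error, which is what pushes the hidden features toward firing rarely.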
@rodrigomaximilianobellusci8860
@rodrigomaximilianobellusci8860 Ай бұрын
@@bycloudAI thank you!
@ProTeaBag
@ProTeaBag 15 days ago
When you say "feature", is this similar to the kernels in AlexNet? I was reading the AlexNet paper that Ilya Sutskever co-authored. The reason I'm asking is that one of the kernels had high activation on faces when that was never specified to the model, so I was wondering if a similar thing is happening here, with one of the features finding bugs in code without anything specific being mentioned to the model
@emrahe468
@emrahe468 Ай бұрын
Meanwhile, Mixtral 7x22: "I am an artificial intelligence and do not have a physical form. I exist as a software program running on computers and do not have a physical shape or appearance."
@cdkw2
@cdkw2 Ай бұрын
Seytonic and Bycloud post at the same time? Dont mind if I do!
@nguyenhoangdung3823
@nguyenhoangdung3823 Ай бұрын
cool stuff
@algorithmblessedboy4831
@algorithmblessedboy4831 Ай бұрын
nice video, I like how you mix complex stuff with silliness. I can now pretend I understood everything in this video and brag about being a smart person (I still have no clue how backpropagation works)
@dhillaz
@dhillaz Ай бұрын
"I think there might just be connections between internal conflict and hate speech" At this point are we learning about the neural network...or are we learning about ourselves? 🤯
@NewSchattenRayquaza
@NewSchattenRayquaza Ай бұрын
man I love your videos
@daydrip
@daydrip Ай бұрын
I read the title as “I am at the Golden Gate Bridge and why that is important” and I immediately thought of dark humor thoughts 😂
@thebrownfrog
@thebrownfrog Ай бұрын
Thank you for this content
@alexxxcanz
@alexxxcanz Ай бұрын
More videos more advanced on this topic please!
@dewinmoonl
@dewinmoonl Ай бұрын
I've been messing with NNs since TensorFlow 1.0. At that time a lot of ppl in my lab were doing mechanistic interpretability (we were a programming languages group). I've been bearish on interpretability since then.
@sp123
@sp123 Ай бұрын
Everyone who has programmed this stuff knows it's a farce
@Stellectis2014
@Stellectis2014 Ай бұрын
At one time, I had Microsoft Bing explain its thought process by creating new words in Latin and then defining those words as a function of its thought process. It doesn't think linearly; it incorporates all information at the same time, what it calls a multifaceted problem-solving function.
@drdca8263
@drdca8263 Ай бұрын
Just because it produces text saying that its thought process (or “thought process”) works a certain way, *really* doesn’t imply that it really works that way. It doesn’t really have introspective abilities? It has the ability to imitate text that might come from introspection, but there’s no reason that this should match up with how it actually works. (Note: I’m not saying this as like “oh it isn’t intelligent, it is just a stochastic parrot bla bla.” . I’m willing to call it “intelligent”. But what it says about how it works isn’t how it works, except insofar as the things its training leads it to say about how it works, happen to be accurate.)
@user-kc3pf4cb8u
@user-kc3pf4cb8u Ай бұрын
You confused a sparse autoencoder with a dense one. All the visualizations showed a dense one. Sparse autoencoders have a larger number of neurons in the hidden layer. The reason is that with this autoencoder, the 'superpositions' should be broken down.
@DistortedV12
@DistortedV12 Ай бұрын
Anthropic just released Claude 3.5 Sonnet
@setop123
@setop123 Ай бұрын
best AI channel period. Just too technical for the mainstream
@sajeucettefoistunevaspasme
@sajeucettefoistunevaspasme Ай бұрын
Criminal info : the A.I. : I *kindly* ask you to...
@4.0.4
@4.0.4 Ай бұрын
That list at 7:40 says a lot about the political leaning of Anthropic and what they mean when they talk about "AI safety".
@benjamineidam
@benjamineidam Ай бұрын
One Piece Memes in an AI-Video = EXTREMELY LARGE WIN!
@deltamico
@deltamico Ай бұрын
check out OpenAI's paper on scaling SAEs
@uchuynh4674
@uchuynh4674 Ай бұрын
just find a way to somehow train/finetune both the LLM and the SAE; being able to create an ad-generating/targeting model with appropriate censorship would bring them back all that money anyway
@AVX512
@AVX512 Ай бұрын
Isn't this really just one shadow of the model from one direction?
@ImmacHn
@ImmacHn Ай бұрын
We will have to find a way to train our own; they're wasting time and resources trying to neuter the LLMs.
@TheRysiu120
@TheRysiu120 Ай бұрын
Dude, this is the best AI channel in the world! And if the news is real, this is big
@mrrespected5948
@mrrespected5948 Ай бұрын
Nice
@msidrusbA
@msidrusbA Ай бұрын
We can conceive realities we aren't capable of interacting with, I have faith someday we will get there
@shodanxx
@shodanxx Ай бұрын
Leaving model size for "safety reasons" Yeah, Anthropic is just another OpenAI. Let them bear fruit then put them in the monopoly crusher.
@sofia.eris.bauhaus
@sofia.eris.bauhaus Ай бұрын
you know, i'm a bit of a Golden Gate Bridge myself 🧐…
@kaikapioka9711
@kaikapioka9711 Ай бұрын
It's mathematically impossible to eliminate hallucinations; as you say, they're native "functions". The "ChatGPT is bullshit" paper explains it in more detail, but they're an inherent limitation of the model.
@kaikapioka9711
@kaikapioka9711 Ай бұрын
5:26 AMONG US MENTIONED WE'RE ALL DOOMED
@djpuplex
@djpuplex Ай бұрын
👏👏👏👏{Owen Wilson wow} I'm impressed. 🤨
@IAMDEMIURGE
@IAMDEMIURGE Ай бұрын
Can someone please dumb it down to me i can't understand 😭
@simeonnnnn
@simeonnnnn Ай бұрын
Oh God. I think I might be a nerd
@Koroistro
@Koroistro Ай бұрын
I think it's worth noting that those sparse autoencoders are very tiny models by today's standards. 34M parameters is positively tiny; I'm curious how it'd scale. Also, what about applying it to bigger neural networks while it's trained on the activations of smaller ones? I'd be curious if it retains some effectiveness; that would indeed give credence to the platonic representation idea (which I honestly find likely, given that evolution should converge)
@DistortedV12
@DistortedV12 Ай бұрын
LOOK UP CONCEPT BOTTLENECK GENERATIVE MODELS - JULIUS ADEBAYO's work!
@casualuser5527
@casualuser5527 Ай бұрын
You copied Fireship's thumbnail designs 😂
@theepicslayer7sss101
@theepicslayer7sss101 Ай бұрын
well, logically hallucinations make sense. If you were asked where the "Liberty Statue" is and didn't know the exact location, you wouldn't drop dead with your heart and breathing stopping; you would give the closest answer you could think of. While Wikipedia says "Liberty Island in New York Harbor, within New York City", most will default to New York City or at least America. In other words, you need an answer, even a wrong one, to move on and continue functioning.
@somdudewillson
@somdudewillson Ай бұрын
Technically "I don't know" is also a valid answer... but human preferences/behavior align more with being confidently incorrect. :P
@theepicslayer7sss101
@theepicslayer7sss101 Ай бұрын
@@somdudewillson I guess what I mean is that, in general, at least something will come out; there cannot be a void, and even saying "I don't know" is a totally valid answer. But I guess AI confidently gets answers out, true or false, because it believes everything it knows to be true, without bias, so it defaults to hallucinations instead of realizing it does not know. Since it is a neural network, it is more akin to brainwashing: it is not an entity "with a self" learning things, just information being forced in, and very little of that information is peer reviewed before being fed to it. It also cannot be fed in context, meaning putting glue on pizza to make the cheese stick was totally valid in a vacuum, since no sarcasm could be indicated before learning that very line from Reddit.
@anywallsocket
@anywallsocket Ай бұрын
Idk the connection between hatred and self-hatred is kinda lowkey profound 🤔
@Yipper64
@Yipper64 9 days ago
8:08 well I just think that whatever base safety training it already has was conflicting with this "anti-training", which shows how ingrained it is in the model. I personally don't care for it, in a sense. Like, I get it, it's bad if the robot is racist. But I also don't want the AI to just spout someone else's ideology at me.
@stevefan8283
@stevefan8283 Ай бұрын
so what you mean is that because the LLM has too much knowledge, it bloated the NN due to overfitting... and now we just prune the NN, let the most distinctive features shine, and find out it has a deeper understanding of the topic? No way that's not going to underfit.
@weirdsciencetv4999
@weirdsciencetv4999 Ай бұрын
It's not hallucinating. It's confabulating.
@motbus3
@motbus3 Ай бұрын
I don't buy it. How do they represent features at all? For a classification problem that's OK, but for words, decoding embeddings into embeddings is whatever. 65% is quite a low result
@reishibeatz
@reishibeatz Ай бұрын
Ofcourse! Let me give you more information on the Golden Gate Bridge. I am it. - AI (2024, colorized)
@dg-ov4cf
@dg-ov4cf Ай бұрын
u sound like asia :)
@HaveANceDay
@HaveANceDay Ай бұрын
Good, now we can lobotomize AI models all the way
@carlpanzram7081
@carlpanzram7081 Ай бұрын
It's a great tool for censorship. You could basically erase concepts or facts entirely. The CCP is going to love this research.
@drdca8263
@drdca8263 Ай бұрын
If you're being sarcastic, you might be interested to note that similar interpretability results have identified, essentially, a "refuses to answer the question" direction in models trained, under such-and-such conditions, to refuse to answer, and found that they can just disable that kind of response. So, for weights-available models, it will soon be possible for people to just turn off the model's tendency to refuse to answer whatever questions. Whether or not this is a good thing, I'll not comment on in this thread. But I thought you might like to know.
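The "disable the refusal direction" idea amounts to projecting that direction out of the activations. A toy sketch with made-up vectors (real work does this on transformer activations, not 3-d arrays):

```python
import numpy as np

def ablate_direction(activations, direction):
    """Remove the component of the activations along a given direction --
    a sketch of directional ablation (hypothetical values only)."""
    d = direction / np.linalg.norm(direction)  # unit "refusal" direction
    return activations - np.dot(activations, d) * d

x = np.array([1.0, 2.0, 3.0])  # stand-in activation vector
d = np.array([0.0, 1.0, 0.0])  # stand-in refusal direction
print(ablate_direction(x, d))  # [1. 0. 3.]
```

After ablation the activations carry zero component along the chosen direction, so whatever behavior that direction mediated can no longer be expressed through it.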
@user-io4sr7vg1v
@user-io4sr7vg1v Ай бұрын
@@drdca8263 It's just a thing. Neither good or bad.
@thearchitect5405
@thearchitect5405 Ай бұрын
8:05 That explanation doesn't make a lot of sense, because this example was with racism cranked up, NOT with internal conflict cranked up. It had the normal level of internal-conflict understanding, and as the other example shows, by default it doesn't care much about internal conflicts.
@stanislav4607
@stanislav4607 Ай бұрын
So basically, the same story as with DNA sequencing all over again. We don't know what exactly it does, but we can assume with a certain level of confidence.
@Kurell171
@Kurell171 Ай бұрын
I don't understand why this is useful, though. Like, isn't the whole point of AI to find patterns that we can't?
@Bioshyn
@Bioshyn Ай бұрын
We're all dead in 10 years.
@sohamtilekar5126
@sohamtilekar5126 Ай бұрын
Me First
@algorithmblessedboy4831
@algorithmblessedboy4831 Ай бұрын
8:05 WTF WE PSYCHOLOGICALLY TORTURE AI AND EXPECT THEM NOT TO GO FULL SKYNET MODE
@Ramenko1
@Ramenko1 Ай бұрын
This guy is copying Fireship's thumbnail style.....
@raul36
@raul36 Ай бұрын
The world is full of companies doing exactly what OpenAI is doing. Isn't it legitimate to do the same on YouTube? If something works, why change it?
@Ramenko1
@Ramenko1 Ай бұрын
@raul36 when I clicked the video, I thought it was a Fireship video. Lo and behold, it's another dude... it comes off as disingenuous, and it discouraged me from watching the video.
@Ramenko1
@Ramenko1 Ай бұрын
@raul36 he should be more focused on finding his own style, and breaking through the mold, instead of becoming one with it. Authenticity and Originality will always be more valued than copycats.
@MODEST500
@MODEST500 Ай бұрын
he is only using Fireship's thumbnail style, and who knows if Fireship also copies from somewhere. The thumbnail is great, and if it works, it's fine. The rest of his content deserves attention and is significantly different from Fireship's ​@@Ramenko1
@anas.aldadi
@anas.aldadi Ай бұрын
So? He explains the technical details of papers in the field of AI, totally different content. Unlike Fireship, which is dedicated to programming, I guess? No offense, but his vids are lacking in technical detail
@Nurof3n_
@Nurof3n_ Ай бұрын
bro stop copying fireships thumbnails. be original
@YoussefARRASSEN
@YoussefARRASSEN Ай бұрын
Talk about giving AI autism. XD