The NEW Chip Inside Your Phone! (NPUs)

295,607 views

Techquickie


3 months ago

Check out the MSI MAG 1250GL PCIE5 at lmg.gg/FDM5n
Thanks to Dr. Ian Cutress for his help with this video! Check out his blog and YouTube channel:
morethanmoore.substack.com/
/ techtechpotato
Neural processing units (NPUs) such as Apple's Neural Engine or the machine learning engine on Google Tensor chips can be found on the iPhone and the Pixel. How do they help run AI right on your phone?
Leave a reply with your requests for future episodes.
► GET MERCH: lttstore.com
► GET EXCLUSIVE CONTENT ON FLOATPLANE: lmg.gg/lttfloatplane
► SPONSORS, AFFILIATES, AND PARTNERS: lmg.gg/partners
FOLLOW US ELSEWHERE
---------------------------------------------------
Twitter: / linustech
Facebook: / linustech
Instagram: / linustech
TikTok: / linustech
Twitch: / linustech

Comments: 600
@xxCEOofRacism69420xx
@xxCEOofRacism69420xx 3 ай бұрын
Why does this feel like I'm watching techquickie in 2016
@peanutnutter1
@peanutnutter1 3 ай бұрын
Because past Linus is back.
@drdennsemann
@drdennsemann 3 ай бұрын
and because the green screen footage looks awful with that lack of contrast and the background's banding gradient.
@ilovefunnyamv2nd
@ilovefunnyamv2nd 3 ай бұрын
@@drdennsemann Now that you mention it, doesn't that look like the same outfit Linus Wore in the downsizing video?
@jakubpakos4225
@jakubpakos4225 3 ай бұрын
It's because Linus has no beard, he looks younger now without it
@twelfsauce6358
@twelfsauce6358 3 ай бұрын
It was all an experiment where they tried to use 50 Google Pixel NPUs and 2016 footage of Linus to make a Techquickie
@roomie4rent
@roomie4rent 3 ай бұрын
I'm starting to feel the definition of "AI" or "AI-enabled features" is expanding in scope to encompass what was just traditional software before. Facial recognition software, for example, has existed long before ChatGPT.
@bakto2122
@bakto2122 3 ай бұрын
Well, machine learning has been called AI since "forever". And things like facial recognition or character recognition heavily rely on machine learning. The term AI has been expanded for a while. Nowadays the sort of AIs you see in sci-fi get called AGI, to differentiate them from these other "AI" products.
@ErazerPT
@ErazerPT 3 ай бұрын
The crux is not processing power. It's the memory to hold the model. You can wait for things to get done, but if you can't even hold them in memory to begin with, it's a non-starter. So the great models are restricted to "wherever you can fit them in", leaving "small but still useful models" to everything else. NPUs, like any other ASIC, will simply do it faster and more efficiently. And they won't need that much space because, as we've established, they'll only run very small models anyway. One thing I can see thrown at them is "voice quality".
@yensteel
@yensteel 3 ай бұрын
For example, ChatGPT 3.5 requires 700 GB of VRAM. They've tried to shrink down the model or add additional capabilities, which caused some quirks. Quantization and pruning are a difficult challenge. Edit: since every reply is deleted, 3.5 is 375 billion parameters and 3.5 Turbo is 20B. I can't find out how much VRAM it's using. If there are any good sources on quantization, it would be appreciated.
@chasehaskell6490
@chasehaskell6490 3 ай бұрын
Makes me wonder why Intel's VPU ai chips in i7 CPUs only have 512mb of dedicated memory. I guess it can access the 64gb of system ram, but it seems inefficient.
@destiny_02
@destiny_02 3 ай бұрын
@@yensteel No it doesn't, it's a 20B model, which fits in 12 GB of VRAM at 4-bit quantization. And even if you have 4 GB of VRAM, the model can run with partial acceleration, running some layers on the GPU and the remaining layers on the CPU.
@ErazerPT
@ErazerPT 3 ай бұрын
@@chasehaskell6490 Yes and no, much like the iGPU, but a quick look at any graphics card tells you how much real estate you need for a few GBs of VRAM. If true, that they even managed to squeeze 512MB into the package amazes me more than it being "so little". Anyway, in the near future the battle is in the graphics card slot. Given Nvidia's stance on milking people for VRAM, if Arc gets good PyTorch/TF support and shoves 16GB/32GB into the low/high-end cards, they'll steal the "enthusiast ML" share real fast.
@yensteel
@yensteel 3 ай бұрын
@@destiny_02 That sounds like 3.5 Turbo. The original 3.5 is 375 billion parameters, 3.0 is 175B, and GPT-4 is 1.5 trillion. I'm not sure which models are quantized in what way. Do you have any sources about them? I can't find the VRAM usage of 3.5 Turbo, but that model would be so nice to run on a single GPU :).
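For anyone following the numbers in this thread, a minimal sketch of the usual back-of-the-envelope math (the parameter counts are the ones quoted above, not confirmed figures): weight memory is roughly parameters × bits-per-weight ÷ 8, before KV-cache and activation overhead.

```python
# Rough memory estimate for holding an LLM's weights,
# using the rule of thumb: bytes ≈ parameter count × bits per weight / 8.
# Parameter counts below are the figures quoted in this thread, not official numbers.

def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GB (ignores KV cache and activation overhead)."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for name, params in [("20B model", 20), ("175B model", 175)]:
    for bits in (16, 8, 4):
        print(f"{name} @ {bits}-bit: ~{weight_memory_gb(params, bits):.0f} GB")

# 20B model @ 4-bit: ~10 GB, which is why it plausibly fits in a 12 GB GPU as noted above.
```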
@RageQuitSon
@RageQuitSon 3 ай бұрын
Sorry, we can't fit an audio jack in your phone, but here's the AI chip. And no, we won't include a charging brick, and we'll claim it's to save the planet instead of to save 10 cents per phone.
@Spladoinkal
@Spladoinkal 3 ай бұрын
exactly. Except they aren't actually trying to save any money per phone, just make an additional profit when you buy the charger.
@RageQuitSon
@RageQuitSon 3 ай бұрын
@@Spladoinkal well they save their 10 cents on the brick, another 5 cents in shipping weight, and then they hope you buy the charger brick from them.
@liamsz
@liamsz 3 ай бұрын
The profit isn’t on the sale of the charger lol Apple made huge profits from increasing the amount of iPhones they could ship in a single cargo ship because the boxes got much smaller since there wasn’t a charger in them
@Aman_Mondal
@Aman_Mondal 3 ай бұрын
Smartphone companies are all absolute frauds 😂
@Strawstarberry
@Strawstarberry 3 ай бұрын
If the old charger still charges the new phone, do we need one for every phone? You probably don't remember when literally every phone year and model had a unique charger. Those were dark times.
@a.i.privilege1233
@a.i.privilege1233 3 ай бұрын
Can I trust any companies with my info/data? The answer is no.
@piadas804
@piadas804 3 ай бұрын
And you probably still use Windows
@macy1066
@macy1066 3 ай бұрын
Then you don't have a cell phone?
@Random_dud31
@Random_dud31 3 ай бұрын
@@piadas804 Wow. What a lucky guess. I would have never thought that. I mean, the user base is so small. I mean, Windows only has a 70% market share. I would never have guessed he used Windows.
@piadas804
@piadas804 3 ай бұрын
@@Random_dud31 Windows is pure spyware
@592Johno
@592Johno 3 ай бұрын
@@Random_dud31 You missed the fucking point
@pastalavista03
@pastalavista03 3 ай бұрын
AI generated Linus
@seltonu
@seltonu 3 ай бұрын
0:46 "They are embarrassingly parallel" "In parallel computing, an embarrassingly parallel workload or problem (also called embarrassingly parallelizable, perfectly parallel, delightfully parallel or pleasingly parallel) is one where little or no effort is needed to separate the problem into a number of parallel tasks.[1] This is often the case where there is little or no dependency or need for communication between those parallel tasks, or for results between them." en.wikipedia.org/wiki/Embarrassingly_parallel Smooth reference, nice to see the Techquickie writers do their homework!😊
@HolarMusic
@HolarMusic 3 ай бұрын
But that's not even slightly related to the meaning they put into the phrase in the video
@budders9627
@budders9627 3 ай бұрын
@@HolarMusic It's exactly what they're talking about though. GPUs process in parallel
@HolarMusic
@HolarMusic 3 ай бұрын
@@budders9627 They said that the GPUs are embarrassingly parallel in the sense that they are too focused on parallel computing and not very good at serial computation. The meaning expressed in the Wikipedia article is of tasks that are so easily parallelized that it's almost embarrassing. These are completely different.
@seltonu
@seltonu 3 ай бұрын
@@HolarMusic My point was more it's clear that the writers did research and came across the term, and nudged it into the script somehow. Sure it's not the same meaning as the textbook definition and more of an Easter egg, but imo it's a fun thing to catch for those who know the term. They're talking about GPUs and parallel workloads. It's maybe a bit pedantic to argue they're "not even slightly related" when discussing the GPU running the task vs. the task itself - they're definitely very closely related for the purposes of a tech quickie video
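For context on the term itself, a minimal sketch of an embarrassingly parallel workload: each element is processed independently, so the work splits across workers with no coordination between them, which is exactly the shape of problem GPUs (and NPUs) chew through.

```python
# A minimal illustration of an "embarrassingly parallel" workload:
# each pixel is processed independently, so the work splits across
# workers with no communication between them.
from multiprocessing import Pool

def brighten(pixel: int) -> int:
    # Purely local work: the result depends only on this one input value.
    return min(pixel + 40, 255)

if __name__ == "__main__":
    pixels = list(range(256)) * 1000  # stand-in for image data

    with Pool(processes=4) as pool:
        result = pool.map(brighten, pixels)  # trivially split into independent chunks

    print(result[:8])
```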
@paxdriver
@paxdriver 3 ай бұрын
If I've learned anything in my 38 years it's that AI chips will get saturated by software extracting value out of the hardware of the people who paid for it. Then they'll tell us our devices are slow because they're old, not because they can't do what we need them to but because our devices can't meet the demands of companies violating our privacy and resources.
@somegrumpyalien
@somegrumpyalien 3 ай бұрын
the green screen spilled on Linus's beard
@mr.electronx9036
@mr.electronx9036 3 ай бұрын
AI degenerated lol
@FredericHeckmann
@FredericHeckmann 3 ай бұрын
There is also the tradeoff between modem/cellular power consumption and NPU power consumption. There are many scenarios where sending the data to the cloud would actually consume more power than doing it locally.
@jmoney211
@jmoney211 3 ай бұрын
Apple has been making chips with neural engines since 2017 with the A11 in the iPhone 8, iPhone 8 Plus, and iPhone X. Clearly they made the right call.
@antoniodimitrov8315
@antoniodimitrov8315 2 ай бұрын
Same with huawei. The phones released a couple weeks later. Then vivo made an npu to improve their night videos and such. This practice has been going on for a while now.
@fidelisitor8953
@fidelisitor8953 2 ай бұрын
Most smartphones have been shipping with NPUs for years. Don't know why he makes it sound like it's a new thing.
@Jatoiroshan
@Jatoiroshan 2 ай бұрын
@@fidelisitor8953 Tell us some phones and their companies? Which ones have shipped an NPU?
@pa1Z
@pa1Z 3 ай бұрын
3:06 I tried that with my 15 Pro and it takes about 6-7 min for a 1000x1000 image. Which is painfully slow compared to Midjourney etc., but is still amazing to see. To have this feature with you at all times without relying on services is amazing
@vinylSummer
@vinylSummer 3 ай бұрын
512x512 in 1.5 minutes ain't that bad
@chasehaskell6490
@chasehaskell6490 3 ай бұрын
Did a 1024x1024 on an S23 Ultra, took about 4 minutes on the high quality setting, 2½ on medium. I'd guess devices running the new 8 Gen 3 chip like the S24 would perform better.
@gorgnof
@gorgnof 3 ай бұрын
how did you try it?
@yumri4
@yumri4 3 ай бұрын
Yes, but once you get into tweaking it you can most likely get it down to a few seconds, going from seconds per iteration to iterations per second. It just requires you to sit down with ComfyUI and play around with the KSampler (Advanced) node. The Empty Latent Image and Upscale Image By nodes might also help decrease compute time while increasing image quality.
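For anyone curious what the commenters above are actually timing, here is a minimal desktop-side sketch of the same kind of workload using Hugging Face diffusers; the model ID, resolution, and step count are illustrative, and steps/resolution are the usual knobs for trading quality against generation time.

```python
# Minimal Stable Diffusion sketch with the diffusers library.
# Model ID and settings are illustrative, not a recommendation.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # "mps" or "cpu" also work; CPU is where minutes-per-image times come from

image = pipe(
    "a photo of a cat wearing a tiny hat",
    height=512,
    width=512,
    num_inference_steps=20,   # fewer steps = faster, at some quality cost
).images[0]
image.save("cat.png")
```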
@mrknighttheitguy8434
@mrknighttheitguy8434 3 ай бұрын
I'm sorry Dave, I can't do that...
@yensteel
@yensteel 3 ай бұрын
Perfect meme for Dave2D!
@moofey
@moofey 3 ай бұрын
Open the pod bay doors, HAL
@rg975
@rg975 3 ай бұрын
Wait, haven’t NPU’s been in phones for years at this point?
@blendpinexus1416
@blendpinexus1416 3 ай бұрын
Sorta, the current NPU is an evolution of the processor you're thinking of.
@kenzieduckmoo
@kenzieduckmoo 3 ай бұрын
We've had AI chips on desktop for years with Nvidia's Tensor cores, but building neural engines into Intel and AMD CPUs might actually make it useful
@quantuminfinity4260
@quantuminfinity4260 3 ай бұрын
Surprisingly enough, iPhones have had some neural accelerator cores since before Nvidia even. Though they were both Q4 of 2017: the iPhone X (used for Face ID) and the Nvidia Volta architecture (a very short-lived architecture on the desktop side of things, appearing only in the Titan V and Quadro GV100), respectively.
@liamsz
@liamsz 3 ай бұрын
Macs have also had NPUs for quite some time now (something LMG seems to have not noticed?)
@quantuminfinity4260
@quantuminfinity4260 3 ай бұрын
@@liamsz In their coverage / testing of Intel's latest chips they talked about there not really being anything to compare it against, while literally having an Apple Silicon based Mac in frame just before. Also, AMD has had them for like the last 2 generations as well!
@seansingh4421
@seansingh4421 3 ай бұрын
I for one would really love an Intel/AMD GPU-based CUDA alternative. CUDA is as awesome as it is a headache
@gergelysoki1705
@gergelysoki1705 3 ай бұрын
0:45 weird linux distros be like: " are you challenging me?"
@Birb42o
@Birb42o 3 ай бұрын
Challenging*
@gergelysoki1705
@gergelysoki1705 3 ай бұрын
@@Birb42o fixed it. Thanks
@hid4
@hid4 3 ай бұрын
"are*
@dhruvil2005
@dhruvil2005 3 ай бұрын
the*
@Justachamp772
@Justachamp772 3 ай бұрын
We will never stop
@Bruno-cb5gk
@Bruno-cb5gk 3 ай бұрын
It's like how they added RT cores on the 20 series, but too few to actually run any meaningful raytracing at high FPS. But it started the software integration of ray tracing features, which makes it worth dedicating more die area to RT cores in later generations.
@stalbaum
@stalbaum 3 ай бұрын
Also, a bit surprised you did not mention that APIs like TensorFlow Lite are optimized for - yep - 8-bit (256-level) integer operations. Which works OK in the image space, for example accelerating face recognition (which it does with downscaled grayscales...)
@RB26DEST
@RB26DEST 3 ай бұрын
Big "the cake is a lie" energy at the end of the video 😂
@dakoderii4221
@dakoderii4221 3 ай бұрын
Same thing with websites. Do you do the calculations on the device or offload them to the server? 🤔
@IncredibleMeep
@IncredibleMeep 3 ай бұрын
So in other words turn everyone's phone into one giant super cluster computer to collect massive amounts of data to feed into ai models.
@mattfm101
@mattfm101 3 ай бұрын
Yeh, I see AI as something that's going to be quite insidious.
@noctarin1516
@noctarin1516 3 ай бұрын
And then the AI becomes sentient and replicates itself onto every single computer and phone and now I'm being spanked for eternity by roko's basilisk.
@johnnychang4233
@johnnychang4233 3 ай бұрын
N stands for neurotic instead of neural 😅
@CyanRooper
@CyanRooper 3 ай бұрын
This new version of Ultron is gonna be wild compared to the one in Avengers Age of Ultron.
@mozzjones6943
@mozzjones6943 3 ай бұрын
@@noctarin1516 Or terminated by Skynet
@TOM7952
@TOM7952 3 ай бұрын
Thanks for the help tech potato 😁
@vladislavkaras491
@vladislavkaras491 3 ай бұрын
Thanks for the news!
@biexbr
@biexbr 3 ай бұрын
0:47 yoooooooooooooo he did! he did! he said! he said the thing.
3 ай бұрын
Some AI models are being deployed at the edge of the network. I think we'll see a lot of mixed AI functions using NPUs and edge computing, reducing costs on cloud services and keeping response time in an acceptable range for large models.
@jclement30
@jclement30 3 ай бұрын
The use cases you provided almost make it sound like just another DSP chip, but I'm assuming there is more to NPUs streamlined for LLMs. So, do you think we're heading to a day where we'll be buying PCs and laptops with a CPU, GPU, and NPU, and benchmarking them separately? Or will the NPU just become part of an SoC?
@Goodbye_Eri
@Goodbye_Eri 3 ай бұрын
Finally, a classic Techquickie video
@harlycorner
@harlycorner 3 ай бұрын
I've been enjoying the Tensor chip inside my Google Pixel phone for years already. The on-device (offline) speech recognition is amazingly fast.
@MeanWhy
@MeanWhy 3 ай бұрын
So in the future when building PCs there's gonna be 3 main parts: CPUs, GPUs and NPUs?
@ilovefunnyamv2nd
@ilovefunnyamv2nd 3 ай бұрын
So, was this episode shot in the Langley House?
@DJGeosmin
@DJGeosmin 3 ай бұрын
Wait, my phone has a built-in NPU? How many grandMA3 parameters does it unlock?
@chrono581
@chrono581 3 ай бұрын
It makes sense to run it locally for two reasons: privacy, and because as the number of smartphones grows, the demand on cloud resources gets higher. If you can offload most of those processes to your local device, it decreases latency and lets the cloud deal with the processes your phone can't run, rather than just doing huge numbers of small tasks and slowing everybody down
@Jatoiroshan
@Jatoiroshan 2 ай бұрын
Are you sure they will not share the phone info that goes to their servers? No, they will share it somehow. This doesn't seem secure enough.
@ultraali453
@ultraali453 3 ай бұрын
Thank you for the informative video.
@anthonytitone
@anthonytitone Ай бұрын
Can simple preexisting video game AI like pathfinding & NPC combat run on the NPU to free up CPU headroom if the dev builds their game around it?
@FPGAZealot
@FPGAZealot 3 ай бұрын
Ryzen AI will be interesting. The NPU will have full user configuration options soon.
@procode_eu
@procode_eu 3 ай бұрын
Very interesting topic. Good video.
@pannekoekcom4147
@pannekoekcom4147 3 ай бұрын
NPU also stands for network processing unit, IIRC. Double naming schemes, great. This is just like USB/HDMI protocol naming
@carlos10571
@carlos10571 3 ай бұрын
For a sec, I thought the sponsor was going to be the MSI Claw😂
@IT_RUN1
@IT_RUN1 3 ай бұрын
Wait I have a question: Will there be like an AI database inside the phone somewhere that it pulls its knowledge from or learns from? I'm trying to learn how much space it's going to use in order to be reasonably useful
@gameonyolo1
@gameonyolo1 3 ай бұрын
Pretty sure the models themselves are like 500 MB to a maximum of 50 GB.
@IT_RUN1
@IT_RUN1 3 ай бұрын
@@gameonyolo1 Hopefully that's 50 GB (big B) of on-board storage that is separate and not part of the actual main flash, as that would make storage management much smoother
@gameonyolo1
@gameonyolo1 3 ай бұрын
@@IT_RUN1 yes
@quantuminfinity4260
@quantuminfinity4260 3 ай бұрын
@@gameonyolo1 50 maximum? Mixtral-8x22B is already over 260 GB, and that's not even that big compared to the flagship models of most companies! In general, to actually have a usable experience you're looking at a minimum of 13 billion parameters, and even then you're running into lots of compromises and issues.
@techno1561
@techno1561 3 ай бұрын
Depends on the model. Older LLMs are relatively lightweight, enough that a mid-range computer can run them okay.
@uncrunch398
@uncrunch398 3 ай бұрын
I don't get why apps act like there's no connection when I run out of high speed data, but I'm *_stuck_* at 64kbps. Well over fast enough to not notice. Unless it involves AV streaming.
@ChessPuzzles2
@ChessPuzzles2 2 ай бұрын
live translation offline is already available on google translate app
@joemelo5696
@joemelo5696 3 ай бұрын
I think you need to include ARM based processors in the future. It's myopic to just talk about "Team Blue" and "Team Red" as if they are they only two options.
@nathan19542
@nathan19542 3 ай бұрын
It would have been good to explain the difference in the computation that they do. Edge processors (like those on phones) for neural networks usually work with quantized models, using integers as low as 4 bits for the AI model parameters. Integer multiplication is pretty cheap.
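A toy sketch of the quantization idea described above, assuming simple symmetric per-tensor int8 quantization (real edge toolchains use fancier schemes): float weights become small integers plus one scale factor, so the hot loops turn into cheap integer multiplies and the weights shrink 4x.

```python
# Toy symmetric per-tensor int8 quantization: map floats to small integers
# plus a single scale, then reconstruct approximately on dequantization.
import numpy as np

def quantize_int8(weights: np.ndarray):
    scale = np.abs(weights).max() / 127.0          # one float scale per tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)

print("max abs error:", np.abs(w - dequantize(q, s)).max())
print("storage: %d bytes as int8 vs %d bytes as float32" % (q.nbytes, w.nbytes))
```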
@XChadKlatz
@XChadKlatz 2 ай бұрын
What prevents my data from being sent over to a server, considering the data collection potential they get from my prompts?
@sussteve226
@sussteve226 3 ай бұрын
I'm waiting for the year that this channel becomes the news
@Peterstarzynskitech
@Peterstarzynskitech 3 ай бұрын
Just more ways that data can be looked into by Google and others.
@jackprice6599
@jackprice6599 3 ай бұрын
How long until you need an NPU socket next to the CPU
@einstien2409
@einstien2409 3 ай бұрын
Why on earth are these features getting locked behind a paywall? If we don't pay for them, then what is the chip for?
@justintiffin-richards6840
@justintiffin-richards6840 3 ай бұрын
what wait! 3:33 did I miss you guys reviewing translation apps n gadgets!?! Oh super vid as ever by the way... thx
@justintiffin-richards6840
@justintiffin-richards6840 3 ай бұрын
🤔 Mmm... so when that voice-mimicking AI thing, which I hear is being withheld for now, goes wild, it will run really well on your phone
@bismuth7730
@bismuth7730 2 ай бұрын
This all reminds me of times when old computers didnt have hardware acceleration for "modern" video formats on the internet and just watching videos consumed a lot of power, but nowadays almost all video formats are hardware accelerated and power usage is much lower.
@COMATRON.
@COMATRON. 3 ай бұрын
Do NPUs have an interface like DirectX for graphics? I wonder how they get "talked to"
@quantuminfinity4260
@quantuminfinity4260 3 ай бұрын
Depends on who they’re from, sometimes they are more directly accessible sometimes they are more automatically managed, depending.
@HokgiartoSaliem
@HokgiartoSaliem 3 ай бұрын
I hope we can soon run Adobe AI locally. BTW, what's the news on the AI cloud video feature from the Pixel 8 / 8 Pro? Last I heard it would be out in Dec 2023; it's now April but no one has reviewed it. HDR video, Night Sight video in the cloud.
@Sandeepan
@Sandeepan 3 ай бұрын
NPUs are just DSPs that went to grad school
@_GhostMiner
@_GhostMiner 3 ай бұрын
What's dsp?
@flyinglack
@flyinglack 3 ай бұрын
@@_GhostMiner digital signal processing
@yongbinzhong4470
@yongbinzhong4470 3 ай бұрын
I think this is not entirely the case. For Qualcomm, they use DSP+NPU to support AI. For MediaTek, they use APU to support AI. For Huawei Kirin, they use NPU to support AI. For Apple, they use Neural Engine to support AI. Each has its own advantages.
@Abu_Shawarib
@Abu_Shawarib 3 ай бұрын
DSP basically include everything that is not analog
@Gen0cidePTB
@Gen0cidePTB 3 ай бұрын
​@@yongbinzhong4470But they are all brand names for NPUs. What makes them different?
@ricodo1244
@ricodo1244 3 ай бұрын
Using a server for the AI features is also expensive for the company (unless they have a subscription, but I guess making NPUs is expensive as well, even if you increase the phone price)
@GorgonJob
@GorgonJob 3 ай бұрын
I never know if these videos are 4 years old or recent because of the shaved-beard Linus in the thumbnail
@imark7777777
@imark7777777 3 ай бұрын
Siri used to be able to do some basic things like tell you the time, list your appointments, and call contacts without using the internet, but Apple made that completely cloud-based. Mac OS X also used to let you enable dictation that worked offline; that's another one which is now cloud-based only. As somebody who frequently uses speech-to-text, it's annoying that I need an internet connection for something that used to require only a 2 GB file for Dragon Dictate and worked offline. When Apple integrated it, it worked really well until they made it cloud-based only, so there's a delay and a timeout and it's a mess. Windows 11 speech recognition currently works way better than the Mac's, almost like the way it used to.
@pewdiefanno19
@pewdiefanno19 3 ай бұрын
Did Old linus do a time travel?
@anotherfellasaiditsnunya
@anotherfellasaiditsnunya 3 ай бұрын
It will be right where microtransactions and data-mining intersect.
@Komentujebomoge32
@Komentujebomoge32 3 ай бұрын
Damn, the robots create pics and music (the creative stuff), but they don't clean my room or cook for me yet to free up time for creating music and drawings..
@NeilVitale
@NeilVitale 3 ай бұрын
Future video suggestion: how eBay pricing works.
@idcrafter-cgi
@idcrafter-cgi 3 ай бұрын
On-device AI is cheaper for tech companies to run, and monetization can be done with an express option. Also, is on-device AI better for privacy if they don't have any analytics or a summarized AI version reported back to the companies?
@spay8143
@spay8143 3 ай бұрын
The green screen spill on Linus is substantial
@Benito650
@Benito650 3 ай бұрын
this video looks terrible almost like if it's done by high schoolers
@hothi92
@hothi92 3 ай бұрын
​@@Benito650Or AI... 🤔
@Cylonknight
@Cylonknight 3 ай бұрын
I already don’t need half the bloatware bullshit on my phone, let alone another piece of hardware that helps in data tracking, even if it doesn’t (in a perfect world…) why do I want it. I still don’t want ai or windows 11 on my computer. I fear what the consumer market will look like in just a few years. I don’t want any gpu or cpu with any ai hardware. Not because I’m scared of the technology. I’m scared of what capitalism and other countries will do with it and the information it gets ahold of. KZfaq algorithm is already annoying af when you watch 1 singular video.
@quantuminfinity4260
@quantuminfinity4260 3 ай бұрын
I've seen quite a few of these comments. iPhones and Android phones have had them since 2017, with the iPhone X and Huawei Mate 10. Even if you don't use applications that take advantage of it, it is used for many things; Face ID and fingerprint-based logins would be a lot slower, along with things like dictation. There are many other background management type things as well. The matrix multiplication accelerator in your phone doesn't give a company more of your data. You could run all of those tasks without it; it's just a power efficiency and speed thing. Much like how you can export a video with just your CPU cores, but it's much faster to use an accelerator. None of that is going to affect how much data Adobe or Windows is collecting on you; that's all done in the cloud anyway.
@hb221984
@hb221984 3 ай бұрын
Dude, get over it... if someone really wants your information or "data" they will get it... otherwise just hide in a dark forest...
@ToadyEN
@ToadyEN 3 ай бұрын
More things to use all of my battery
@Techlore1
@Techlore1 3 ай бұрын
you totaly missed out on a perfect opportunity for a terminator 2 reference.
@johntrevy1
@johntrevy1 3 ай бұрын
Why?
@timbambantiki
@timbambantiki 3 ай бұрын
I dont want ai bloat, i want headphone jacks
@wildyato3737
@wildyato3737 3 ай бұрын
Call on the EU to mandate headphone jacks and removable batteries in the first place. These manufacturers make smartphones "featureful" by removing features 😂😂 a.k.a. the flagship ones (Sooooo... don't pay anything for the flagship series 😂)
@stellabckw2033
@stellabckw2033 3 ай бұрын
louder please 🙄
@DevinSamarin
@DevinSamarin 3 ай бұрын
Get type C headphones, and bam, there's your headphone jack
@wildyato3737
@wildyato3737 3 ай бұрын
@@DevinSamarin yeah or have converter version of that with charger support
@departy93
@departy93 3 ай бұрын
fair enough... 😅 but why not both? 😮 I know. minde blown right? 🤯
@timtomnec
@timtomnec 3 ай бұрын
Linus: refuses to use the word waterproof. Also Linus: I shall change the name of linear algebra to Artificial Intelligence.
@rohansampat1995
@rohansampat1995 3 ай бұрын
I'm concerned about die space being unnecessarily allocated to these things on desktop. I have a beastly GPU on my gaming rig that can probably handle these AI tasks just fine, so why do I need an NPU? Would have been nice to see this video answer that.
@toebeexyz
@toebeexyz 2 ай бұрын
Because dedicated silicon can run MUCH faster and doesn't use the part of your GPU that your games run on. This is exactly what the RT cores in Nvidia cards are for, for example.
@rohansampat1995
@rohansampat1995 2 ай бұрын
@@toebeexyz ... Right, I have RT cores on my card. So why do I need this extra space on my CPU... you're literally proving my point.
@toebeexyz
@toebeexyz 2 ай бұрын
@@rohansampat1995 I was just using RT cores as an example of how NPUs are used to accelerate existing workloads. Why wouldn't you want dedicated silicon in the CPU for this sort of thing instead of wasting clock cycles running it on the processor itself? Plus, you have to remember not every PC has a GPU to offload these AI tasks to.
@toebeexyz
@toebeexyz 2 ай бұрын
@@rohansampat1995 oh and also those rtx cores are useless for anything else other than Nvidia specific stuff because... Nvidia. These npus are like a unified open thing that anything can use
@rohansampat1995
@rohansampat1995 2 ай бұрын
@@toebeexyz Yeah, so for HIGH END gaming processors I don't see a need for this stuff, because a GPU is usually present. Silicon that can be used for gaming takes way more preference than an AI thing ON CHIP. A couple of cycles to transfer my query and data isn't gonna kill anyone; games have been transferring a LOT more for a long time. The GPU will do better than any dedicated silicon on the CPU die, so why waste that space.
@hykok
@hykok Ай бұрын
Or use the NPU to bypass end-to-end encryption, similar to what Micro$oft Recall did, taking a screenshot to analyze every 5 seconds. Perhaps in the long term, pass your metadata to the three-letter agencies without having to jailbreak your phones or devices.
@NagisaShiota11
@NagisaShiota11 3 ай бұрын
Hey, let's be fair to Android phones. In Gboard if you select the option titled faster Voice typing it downloads the model to your phone and it is then available to use offline. If you have a pixel phone it takes that a step further and actually uses the voice recognition software from the Google Assistant to handle dictation
@kendokaaa
@kendokaaa 3 ай бұрын
There's also the fact that inference (running the AI) doesn't take nearly as much processing power as training the model
@quantuminfinity4260
@quantuminfinity4260 3 ай бұрын
I would say that’s one of the biggest misconceptions people have about neural accelerators. I always see lots of comments about people talking about using them to train models in the context of their phone or a little Google coral accelerator.
@egarcia1360
@egarcia1360 3 ай бұрын
Re 3:08, my 3yo budget phone can generate a 512x512 Stable Diffusion image in 6-7 minutes; I'm sure even a small NPU would push that down drastically, especially on the newer hardware that would include it. This should be interesting...
@SwipeKun
@SwipeKun 2 ай бұрын
Bruh another excuse from companies to make the phones even more expensive when we didn't ask for it 💀😭
@TGAProMKM
@TGAProMKM 3 ай бұрын
Not only phones; if I'm not wrong, NPUs have started being included in new laptops and PC motherboards...
@broccoloodle
@broccoloodle 3 ай бұрын
One note: no operating system can run on a GPU, as it lacks many features, most basically recursion
@KhuzZzZi
@KhuzZzZi Ай бұрын
1:48 it is also fast cuz it goes with the speed of light
@Lurieh
@Lurieh 2 ай бұрын
I'm pretty sure I don't want my smartphone getting too smart on me. Now an NPU for desktop PC I do want; With Linux open sauce drivers ofc.
@FreshlyFried
@FreshlyFried 3 ай бұрын
Man do I miss privacy. Corporations are destroying America.
@JDMNINJA851
@JDMNINJA851 2 ай бұрын
You created a YouTube account with your photo on it 🤦
@oo--7714
@oo--7714 2 ай бұрын
​@@JDMNINJA851😂
@phozel
@phozel 12 күн бұрын
@@JDMNINJA851 so? your answer is fallacy!
@chrisspears7563
@chrisspears7563 3 ай бұрын
Hopefully we can start getting smaller cameras on our phones.
@Sethsimracing
@Sethsimracing 3 ай бұрын
Unrelated really, but do you use Intel or AMD?
@B.D.F.
@B.D.F. 3 ай бұрын
3:06 “Now you probably don’t expect to run an entire advanced image generation model on a phone, at least with NPUs the size they are now.” Has Linus never used the Draw Things app on iOS? Full image generation model running on a phone, or even an M-series iPad. It’s been out for a couple of years.
@broccoloodle
@broccoloodle 3 ай бұрын
Just a gentle reminder: the Apple Neural Engine first appeared in 2017, 7 years ago
@deltonadoug
@deltonadoug 3 ай бұрын
I always have concerns about using the cloud. Yes, maybe more powerful, but way less secure for everything!
@jjjb90
@jjjb90 3 ай бұрын
Linus tries to launder his malversations with a new channel XDD
@tigersusyt
@tigersusyt 3 ай бұрын
Not getting anything close to this for at least 6 years
@vlonebored
@vlonebored 3 ай бұрын
A 5-minute video with a 1-minute ad, and the rest just stating "the NPU is faster and more efficient for such tasks"
@Goodsdogs
@Goodsdogs 3 ай бұрын
Great video
@jonjohnson2844
@jonjohnson2844 3 ай бұрын
Hang on, if the model isn't on the phone in the first place, how does the NPU actually process it?
@Flynn217something
@Flynn217something 3 ай бұрын
No. It's just there to rifle through your photos and chats and report the summarized results back to HQ, on your dime of course.
@aarrondias9950
@aarrondias9950 3 ай бұрын
​@@Flynn217something nah, that's nothing new, this changes nothing. People are so quick to jump on the AI hate train without even thinking.
@liamsz
@liamsz 3 ай бұрын
Large models, aren’t in phones, but smaller ones, those used in NPUs are.
@Ultrajamz
@Ultrajamz 3 ай бұрын
@@Flynn217something This!
@Ultrajamz
@Ultrajamz 3 ай бұрын
@@aarrondias9950 It will do it on a new scale.
@user-ry9yw3nh6k
@user-ry9yw3nh6k 3 ай бұрын
Probably gonna be some NPUs sending data to a server, and the server using that data to recommend more ads
@quantuminfinity4260
@quantuminfinity4260 3 ай бұрын
iPhones and Android phones have had them since 2017. Google will collect just as much data on you regardless of whether or not there is an accelerator. Almost all of the trends and insights they try to glean from your data are done in the cloud. It's just an accelerator for on-device ML tasks; even if some form of data collection they have requires on-device machine learning, they can do it without it. Its main purpose is to dramatically expedite things in a more power-efficient manner: fingerprint unlocking, Face ID, dictation, autocorrect, along with many others.
@hummel6364
@hummel6364 3 ай бұрын
Let's not forget that the use of NPUs also offsets some of the costs. A datacenter costs between millions and billions; an NPU in a million devices makes each device maybe 10 bucks more expensive. Sure, overall you don't get the same economies of scale, but it's a much better cost distribution, and the economies of scale in phone silicon are already quite immense: one chip costs tens of thousands of dollars, millions of chips cost dozens of dollars each.
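A toy illustration of that amortization argument; every figure here is a made-up assumption, and the point is only how the fixed cost divides out at volume.

```python
# Toy "one chip vs millions of chips" arithmetic: a large fixed design/mask
# cost amortizes away at volume. All figures are made-up assumptions.
fixed_cost = 50_000_000        # design + mask set, assumed
marginal_cost_per_chip = 15    # wafer/packaging/test per unit, assumed

for volume in (1, 10_000, 1_000_000, 100_000_000):
    per_chip = fixed_cost / volume + marginal_cost_per_chip
    print(f"{volume:>11,} chips -> ~${per_chip:,.2f} each")
```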
@irwainnornossa4605
@irwainnornossa4605 3 ай бұрын
I'm still waiting for AI silicon to improve AI of things like mobs in minecraft, or just generally AI in games.
@PedroBastozz
@PedroBastozz 3 ай бұрын
iPhone 8 and iPhone X with neural engine in 2017 lmao.
@frostyjeff
@frostyjeff 3 ай бұрын
99% sure those were used for faceid mostly but still cool to have
@foxify52
@foxify52 3 ай бұрын
The way I see it, it's just another point of failure that raises the prices of already expensive phones that maybe 3 apps will actually take advantage of. Yea no thanks. Keep it to desktops and laptops.
@quantuminfinity4260
@quantuminfinity4260 3 ай бұрын
They have been in phones for the nearly 7 years, since 2017 with the iPhone X and Huawei Mate 10. Even if you don’t use many specific apps that take advantage of it, your phone does a lot with it. Dictation would be quite slow, for example. Many other features like voice suppression on calls. Sorting your images, and even some more background management type stuff. It’s also one of those things where, even if you don’t care about any of those features, the average consumer does. Or even if they don’t ask for it, they will complain when it is slow.
@divyam._.maheshwari
@divyam._.maheshwari Ай бұрын
waiting for a point in the future where ALL the AI tasks can be done on-device, praying that the data-greedy tech giants will actually let that happen 🙏🏻
@angustube
@angustube 3 ай бұрын
he actually did it
@Kawabxl
@Kawabxl 3 ай бұрын
Great video, but it was pretty distracting to see Linus changing colour during it.
@HelamanGile
@HelamanGile 2 ай бұрын
Because once they discontinue the server service your AI functionality is essentially useless, so if you have it baked into your phone to begin with, why not go with that?
@AdeDestrianto
@AdeDestrianto 3 ай бұрын
I thought this was about the FortiGate NPU ("Network Processor Unit")
@budgetarms
@budgetarms 3 ай бұрын
No beard and Linus is back; this video was probably made 10 years ago, or I'm dreaming
@TeleviseGuy
@TeleviseGuy 3 ай бұрын
Even Intel with some help from Microsoft is trying to put NPUs in our laptops which seems kinda scary but actually isn't really scary at all. I think embedding AI in a small quantity in new features in the OS does more good than harm.