NVidia is launching a NEW type of Accelerator... and it could end AMD and Intel

42,600 views

Coreteks

1 month ago

Urcdkeys.Com 25% code: C25 【Mid-Year super sale】
Win11 pro key($21):biitt.ly/f3ojw
Win10 pro key($15):biitt.ly/pP7RN
Win10 home key($14):biitt.ly/nOmyP
office2019 pro key($50):biitt.ly/7lzGn
office2021 pro key($84):biitt.ly/DToFr
MS SQL Server 2019 Standard 2 Core CD Key Global($93):biitt.ly/oUjiR
Support me on Patreon: / coreteks
Buy a mug: teespring.com/stores/coreteks
My channel on Odysee: odysee.com/@coreteks
I now stream at: / coreteks_youtube
Follow me on Twitter: / coreteks
And Instagram: / hellocoreteks
Footage from various sources, including official YouTube channels from AMD, Intel, NVidia, Samsung, etc., as well as other creators, is used for educational purposes in a transformative manner. If you'd like to be credited, please contact me.
#nvidia #accelerator #rubin

Comments: 364
@CrashBashL · 1 month ago
No one will end anyone.
@Koozwad · 1 month ago
AI will end the world, in time.
@pf100andahalf · 1 month ago
Some may end themselves.
@modrribaz1691 · 1 month ago
Based and truthpilled. This faked-up competition has to continue for as long as possible. It's a 24/7 publicity stunt for the entirety of this market, with Nvidia basically playing the big bully.
@PuppetMasterdaath144 · 1 month ago
I will end this conversation.
@CrashBashL · 1 month ago
@@PuppetMasterdaath144 No, you won't, you Puppet.
@Siranoxz · 1 month ago
We are in dire need of a diverse GPU market.
@Koozwad · 1 month ago
yeah, what happened to "diversity is strength" 😂
@mryellow6918 · 1 month ago
We have a diverse market. They aren't a monopoly because they're the only ones; they're a monopoly simply because they're the best.
@oussama123654789 · 1 month ago
Sadly, China still needs at least 5 years for a product worth buying.
@Siranoxz · 1 month ago
@@Koozwad I have no idea what you're trying to convey; it's just about more companies building GPUs, nothing else.
@Siranoxz · 1 month ago
@@mryellow6918 Sure, that is one factor, or these invisible GPU manufacturers don't promote their GPUs and optimized support for games like NVIDIA and AMD do. But being the best comes with a nifty price, huh?
@EnochGitongaKimathi · 1 month ago
Intel, AMD and now Qualcomm will be just fine.
@fred-ts9pb · 1 month ago
AMD is a bottom feeder.
@waynewhite2314 · 1 month ago
@@fred-ts9pb What tech company are you running? Oh yes, Trolltech!
@christophorus9235 · 1 month ago
@@fred-ts9pb Lol, tell us more about how you know nothing about the industry...
@noanyobiseniss7462 · 1 month ago
So Nvidia wants to patent order of operations; kinda reminds me of Apple trying to patent the rectangle.
@brodriguez11000 · 1 month ago
Math can't be patented.
@GrimK77 · 1 month ago
@@brodriguez11000 There is a loophole for it, unfortunately, that should never have been granted.
@Wrek100 · 1 month ago
Didn't Apple succeed? IIRC the 10.2" Galaxy tablet got squashed because the judge sided with Apphell.
@danis8455 · 18 days ago
Nothing new in Nvidia being scumbags, really.
@misterstudentloan2615 · 1 month ago
Just costs 30 pikachus to do that operation....
@dreamonion6558 · 1 month ago
thats alot of pikachus!
@MyrKnof · 1 month ago
@@dreamonion6558 You want as few pikachus per op as possible! Also, "a lot".
@alexstraz · 1 month ago
How many joules are in a pikachu?
@handlemonium · 1 month ago
And one Magikarp
@PaulSpades · 1 month ago
It's funny how we now need fixed-function accelerators for matrices after 15 years of turning GPUs (fixed-function FP accelerators) into programmable devices. Also, we went from 12/20/36-bit word computers, to 4-bit and 8-bit micros, to 16-bit RISC processors and FP engines, to 32-bit and now 64-bit. Only to discover we now need much less precision: FP32 and FP16, now 8-bit and 4-bit. We could probably go down to ternary for large model nodes, or 2-bit.
@BHBalast · 1 month ago
AI workloads != classical computing; no one will go back to 8 bits on a consumer device :p
@maou5025 · 1 month ago
It is still 64. Floating point is kinda different.
@PaulSpades · 1 month ago
@@BHBalast Well yes. But. Do you need Photoshop if the AI-box-thing can generate the file you asked for and upload it? I'm not saying we won't need general computing like we do now, but most people won't. Because most people don't need programmable computers; they need media generation and media consumption devices. Most tasks are filling forms, reading and writing.
@Johnmoe_ · 1 month ago
Sounds cool, but all I want is more VRAM under 10k
@avejst · 1 month ago
5:12-5:44, is there a reason for the blackout in the video? Interesting video as always.
@Slavolko · 1 month ago
Probably a video editing or rendering error.
@hupekyser · 1 month ago
At some point, 3D and AI need to fork into dedicated architectures instead of having a general do-it-all GPU.
@aladdin8623 · 1 month ago
Both Intel and AMD have FPGA IP to achieve close-to-bare-metal performance. And in comparison to Nvidia's ASIC plans here, an FPGA is flexible: its logic gates can be 'rewired', while Nvidia's ASICs force you to buy more hardware again and again.
@jonragnarsson · 1 month ago
Hate them or love them as a corporation, NVidia really has some brilliant engineers.
@pierrebroccoli.9396 · 1 month ago
Darn - hate to be held hostage to leather jacket man, but it is a good way to go - local AI processing instead of relying on large corporates for AI services on one's data.
@Zorro33313 · 1 month ago
Absolutely the same as an encrypted channel between you and a processing datacenter. Processing is not local anyway, it seems. Local AI only breaks data into some BS tokens (just packets, as usual) and sends them to the datacenter to get the processed response back. This sounds just like channel encryption done with AI, because Nvidia can't do anything else but AI.
@BHBalast · 1 month ago
@@Zorro33313 wtf?
@awindowskrill2060 · 1 month ago
@@Zorro33313 what meth are you smoking mate
@FlorinArjocu · 1 month ago
We'd still need cloud AI services, as the same methods will reach their datacenters and create more advanced things to do online much faster. But simpler things will go local, indeed.
@awesomewav2419 · 1 month ago
Jesus, Nvidia does not rest for the competition.
@visitante-pc5zc · 1 month ago
And consumers' pockets.
@maloxi1472 · 1 month ago
@@visitante-pc5zc As long as it aligns...
@fred-ts9pb · 1 month ago
It's over for AMD before it started. Keep throwing $50M a year at a failed AMD CEO.
@Acetyl53 · 1 month ago
What a weird comment. Probably a bot or shill. Creep. Weirdo.
@Erribell · 1 month ago
I would bet my life Coreteks owns Nvidia stock.
@selohcin · 1 month ago
I assume he owns Nvidia, AMD, Intel, and several other tech stocks.
@K9PT · 1 month ago
The CEO of Nvidia only said IA. IA... IA a dozen times, and gaming only once... SAD TIMES
@Jdparachoniak · 1 month ago
To me he said money, money, money lol
@fabianhwnd6265 · 1 month ago
Nvidia has outgrown the gaming market; they could abandon it and wouldn't notice the losses.
@LukeLane1984 · 1 month ago
What's IA?
@K9PT · 1 month ago
@@LukeLane1984 lol
@SalvatorePellitteri · 1 month ago
This time you are wrong! Inference is where AMD and Intel play on a level field with NVIDIA. NPUs have much simpler APIs, so the vertical stack is thin, almost irrelevant; applications are going to support NPUs from Intel, AMD and Nvidia very easily, and AMD and Intel already have NPU-enabled processors and PCIe cards in the wild.
@sacamentobob · 1 month ago
He has been wrong plenty of times.
@GIANNHSPEIRAIAS · 1 month ago
How is that new, and how will this end AMD or Intel? Like, what's stopping AMD from getting their Xilinx accelerators to do the same job?
@YourSkyliner · 1 month ago
4:44 oh no, they went from 30 Pikachus to only 1.5 Pikachus 😮 where did all the Pikachus go then??
@LtdJorge · 1 month ago
They didn't go from 30 to 1.5. 30 is how much energy it takes to load the values, and 1.5 is how much it takes to compute one value once loaded (with FMA). With the HMMA instruction, it takes 110 pJ to compute an entire matrix of values, so the overhead of loading becomes negligible, while with scalar operations like FMA, the loading part dominates the power consumption.
@AbolishTheInternet · 1 month ago
10:50 Yes, I'd like to use AI to turn my cat photo into a protein.
@johndinsdale1707 · 1 month ago
I think the NPU accelerator is very much an open market. Both Apple and Qualcomm are embedding NPU accelerators into their Armv9 SoCs. Also, Groq has an alternative approach to inference which is much more power-efficient?
@AshT8524 · 1 month ago
Haha, the title reminded me of the 30-series launch rumor. I really wanted it to happen, but all we got was an upgrade in prices lol.
@bass-dc9175 · 1 month ago
I never got why people want any company to destroy its competition. Because if Nvidia had eliminated AMD with the 30 series, we would not just have the current increased GPU prices. No, it would be 10 times worse, with Nvidia as a monopoly.
@Tential1 · 1 month ago
I wonder how long before you figure out you can benefit from Nvidia raising prices.
@FJaypewpew · 1 month ago
Dude gobbles nvidia hard
@Vorexia · 1 month ago
30-series would've been a pretty solid gen if it weren't for the scalper pandemic.
@AshT8524 · 1 month ago
@@bass-dc9175 I don't want the competition to die, I just want better and more affordable products from both companies, especially compared to previous generations.
@B4nan0n · 1 month ago
Isn't this accelerator the same thing you said the 40 series was going to have?
@denvera1g1 · 1 month ago
28nm to 4nm is only a 3x density increase? But isn't the 4nm 8700G like 2.5x the transistors of the 7nm 5700G? (Both around 180mm².) I mean, I guess clock speed makes a difference, but weren't clock speeds lower on 28nm?
@Sam_Saraguy · 1 month ago
Super interesting. Will be following developments.
@hdz77 · 1 month ago
I might actually end Nvidia if they keep up their ridiculous pricing.
@gamingtemplar9893 · 1 month ago
Prices are set by the consumers, the market, not the company. If anything, prices would only go down with competition. Value is SUBJECTIVE; there is no intrinsic value in anything. You will pay what the market wants. If the pricing were "ridiculous" as you say, then Nvidia would be losing money; it is not, so it is not ridiculous. Learn economics before saying communist shit.
@__-fi6xg · 1 month ago
Their customers, other billion-dollar companies, can afford it with ease; no need to worry.
@SlyNine · 1 month ago
Unfortunately the prices are high because that's what people are paying.
@Alpine_flo92002 · 1 month ago
The pricing isn't bad when you actually look at what their products provide.
@dagnisnierlins188 · 1 month ago
@@Alpine_flo92002 For business and prosumers, and the 4090 in gaming; everything else is overpriced.
@Firetim01 · 1 month ago
Very informative, ty.
@edgeldine3499 · 1 month ago
When did we change "times" to "x"? I've been hearing it more and more lately. Gamers Nexus said it earlier (technically yesterday), and I remember hearing it a few months ago. Maybe it's just been standing out more and more to me, but I think it's now a pet peeve. "It's ten times as much" sounds like the proper way to say it, rather than "ten x as much".
@chuuni6924 · 1 month ago
So it's an NPU? Or did I miss something?
@abowden556 · 1 month ago
What about 1.58-bit architectures? Same accuracy, way lower memory and transistor footprint; it'll be interesting to see how that works out.
@kimmono · 1 month ago
Why would you even think the accuracy would be the same? Everything said in this video regarding no loss of accuracy is also wrong. Llama 3 has massive degradation, because it is a dense model.
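For reference, the "1.58 bit" figure is the information content of a ternary weight: log2(3) ≈ 1.585 bits for the three states {-1, 0, +1}. A quick sketch of the packing arithmetic (the 8-billion-weight count is an arbitrary example, not a claim about any specific model):

```python
import math

# 1.58 bits per weight = log2(3): the information content of one
# ternary weight drawn from {-1, 0, +1}.
bits_per_ternary = math.log2(3)
print(f"bits per ternary weight: {bits_per_ternary:.3f}")

# Hypothetical example: storage for 8e9 weights at different precisions.
n_weights = 8e9
for name, bits in [("fp16", 16.0), ("int4", 4.0), ("ternary", bits_per_ternary)]:
    print(f"{name:>7}: {n_weights * bits / 8 / 1e9:6.2f} GB")
```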
@edgeldine3499 · 1 month ago
"1000x performance uplift" - I'm annoyed by people saying "ex" instead of "times"; I've been hearing it more lately, I guess. I know it's been a thing for a while, but maybe I'm getting old?
@ragingmonk6080 · 1 month ago
This is nothing more than a joke! "Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco and Broadcom have announced the formation of the catchily titled "Ultra Accelerator Link Promoter Group", with the goal of creating a new interconnect standard for AI accelerator chips." People are tired of Nvidia gimmicks and they will shut them out.
@120420 · 1 month ago
Fanboys have entered the building!
@ragingmonk6080 · 1 month ago
@@120420 I quote tech news and you call me a fanboy, because you are a fanboy that didn't like the news. Way to go, champ. Mom must be proud.
@gamingtemplar9893 · 1 month ago
People are not tired; some people are, and they don't have any clue what they are talking about. Same with people who defended Nvidia back in the day and still do, like Gamers Nexus defending the cable debacle to protect Nvidia. You guys are all fanboys of one side or the other who don't understand how things really work.
@ragingmonk6080 · 1 month ago
@@gamingtemplar9893 We understand how things work. Nvidia used to use the "black box" called GameWorks to add triangles to meshes that weren't needed, to increase compute demand. Then they would program their drivers to ignore a certain number of triangles to give themselves a performance edge. They wouldn't give devs access to the black box either. G-Sync was a rip-off to make money, because adaptive sync was free. Limit which Nvidia cards can use DLSS so you have to upgrade. Then limit which Nvidia GPUs can use frame generation so you have to upgrade again. We know what is going on and how the Nvidia cult drinks the Kool-Aid.
@ragingmonk6080 · 1 month ago
@@gamingtemplar9893 We know Nvidia's gimmicks too well. I cut them off with the GTX 1070.
@jackskalski3699 · 1 month ago
Either I'm bad at listening or I just didn't understand, but inference, i.e. running neural models locally, is exactly what all the rage about NPUs and TOPS in current SoCs is, isn't it? Apple with M3/M4, AMD Strix with 50 TOPS, Snapdragon X Elite and Windows 12 with Copilot are exactly that use case: running models locally. So why not just cram these NPUs or new types of accelerators into your CPUs or discrete GPUs and call it a day? What's so revolutionary about this new type of accelerator from NV that the chips hitting the market TODAY don't have?

It's my understanding that optimizations happen on all fronts all the time: transistor level, instruction level, compiler level and software level. When I look at open job positions in IT, it strikes me how many compiler and kernel optimization roles are opening for drivers, CUDA and ROCm... Don't get me wrong, I love your videos, but I just don't see the NV surprise when everyone is releasing AI accelerators today vs NV promising them in maybe a year. NV was focused on the server market, while AMD was actually present in both server and client. Also notice that NV was already using neural accelerators for their ray tracing workloads, which significantly lowered the required budget of rays that needed to be cast, as they could reconstruct the proper signal quite believably with neural networks.

We'd need to assume that the TOPS/W metric is only understood by NV and that everyone else will sit idle and be blind to it. I doubt that, judging by what is happening right now. We also assume that models will keep growing, at least in the cost of training. There are diminishing returns somewhere, so I expect models to also shrink and be optimized, as opposed to only growing in size.

As more people/companies start releasing more models, they really need to think about how to protect the IP, which is the weights in the neurons of these networks, because transfer learning is a "biatch" for them :) With progress happening so fast, yesterday's models become commodities. As they become commodities, they are also likely to become open-sourced. As such, you can expect a lot of transfer learning activity, which will act as a force that leads to democratization of older, still very good models. So this is a headwind for server HW, as I can cheaply transfer-learn locally...

For me, local models are mostly important in two areas of my life: coding aid and photography processing. I really follow what fylm.ai does with color extraction and color matching. As NPUs proliferate, more and more cloud-based features can be run locally... (for example Lightroom, Photoshop, fylm.ai, or Copilot-like models to aid programmers).
@jackskalski3699 · 1 month ago
I was thinking a bit more, and there is another aspect that we're missing from the analysis: data distance. If you are running a hybrid workload and you really care about perf/W, you are actually going to host NPUs on the GPU and also separately as a standalone accelerator.

When you are running a chatbot or some generative local model, you will use the standalone accelerator and throttle down your GPU. That's the dark silicon concept, to conserve energy. If you are running a latency-sensitive workload like 3D graphics aided by neural networks, like the ray tracing / path tracing workloads, then you are going to utilize the low-latency on-GPU NPUs because you need the results ASAP, and you might throttle down the standalone NPU accelerator. There is a catch: if your game uses these rumored "AI" NPCs, then that workload will run on the discrete NPU accelerator, and you're going to be forced to keep it running along with the GPU.

Now the Lightroom use case is interesting. Intelligent masking or image segmentation can be done on the discrete accelerator, especially if it means the same results at lower watt usage (in Puget benchmarks). However, there might also be hybrid algorithms that utilize GPU compute along with an NPU neural network for processing, in which case it might be more beneficial to run that on the GPU (with NPUs onboard).

To prove I'm not talking gibberish, Intel is doing exactly that with Lunar Lake :) There are discrete NPUs with 60+ TOPS, and the GPU hosts its own local NPUs with ~40 TOPS. Thus Intel can also claim 100+ "platform" TOPS, although that last name is misleading, as you are unlikely to see a workload that utilizes both to run your Copilot. A game, on the other hand, might be different.

Lastly, I remember years ago AMD's tile-based design was marketed as exactly that: a platform that not only helps with yields (from a certain chip size onwards) but also allows you to host additional optimized accelerators like DSPs, GPUs, CPUs and now NPUs on a single chip. So you could argue AMD has been laying the foundations for that for years...
@Jackpkmn · 1 month ago
Ah, so it's more AI fluff that will amount to more hardware in landfills after the AI bubble bursts.
@jaenaldjavier188 · 1 month ago
I feel like the next big step to properly implement the coming technologies in AI and acceleration would be to integrate such architectures directly into the motherboard, especially in light of how large Nvidia's highest-end cards are becoming and how much more space-efficient mobos have gotten over the last half decade, not to mention the power and efficiency of the APUs and NPUs coming out this year. Physically offloading those calculations onto a dedicated spot on the motherboard could provide an upper hand in computer hardware. This also doesn't seem all too far-fetched when you take into account that the industry is planning to implement an "NPU standard" across mobile devices and various OSes, and that mobo manufacturers are already reconfiguring things like RAM from DIMM slots to CAMM2 on desktops. Combine all of this with the fact that the technology could potentially be tied closer to the CPU on the north bridge, and it feels like a no-brainer to work with mobo manufacturers to further push the limits of computing power.
@RudyJi158 · 1 month ago
Thanks. Great video.
@kazedcat · 1 month ago
Fetch and decode do not move data; they only process instructions, not data. Also, instructions are tiny, 16-32 bits, vs. 512-1024-bit SIMD vector data.
@ageofdoge · 1 month ago
Do you think Tesla will jump into this market at some point? Would FSD HW4 be competitive? I don't know if this is a market they are interested in, but it seems like they have already done a lot of the work as far as low-power inference goes.
@francisquebachmann7375 · 1 month ago
I just realized that Pikachu is just a pun on picojoules.
@TheDaswilhelm · 1 month ago
When did this become a meme page?
@christophermoriarty7847 · 1 month ago
From what I'm gathering, this accelerator is a type of cache for the GPU, which means it won't be a dedicated card in consumer products; it will probably be part of the video card itself.
@NatrajChaturvedi · 1 month ago
Would be interesting to hear your take on Qualcomm's "PC Reborn" pitch too.
@pedromallol6498 · 1 month ago
Have any of Coreteks' predictions ever come true just as described?
@New-Tech-gamer · 1 month ago
Isn't that "accelerator" the NPU everybody is talking about nowadays, specialized in local low-power inference? Nvidia may have the best prototype, but Qualcomm and AMD are already starting to ship CPUs with NPUs doing 40-50 TOPS, all backed by Microsoft within W11. So even if Nvidia comes to market in 2025, it may be too late.
@hikenone · 1 month ago
the audio is kinda weird
@user-wt7pq5qc2q · 1 month ago
Well done again. I want to be able to add several cards to my PC and cluster them; might need a three-phase plug.
@rodfer5406 · 1 month ago
Video error: blacks out.
@--waffle- · 1 month ago
When do you think NVIDIA will release their consumer desktop PC GH200 Grace Hopper Superchip-style product? I'd love an Nvidia-ARM all-in-one Linux beast.
@--waffle- · 1 month ago
When's Part 2!?!? (Also, a new Nvidia Shield tablet!!... yes please)
@JoeRichardRules · 1 month ago
I like the Pokemon reference you're making.
@Raphy_Afk · 1 month ago
This is extremely interesting. I hope they will release discrete accelerators for desktop users.
@user-et4qo9yy3z · 1 month ago
It's extremely boring actually.
@gamingtemplar9893 · 1 month ago
@@user-et4qo9yy3z Actually, you are boring. Stop spamming your stupidity and go watch cat videos.
@visitante-pc5zc · 1 month ago
@user-et4qo9yy3z yes
@Raphy_Afk · 1 month ago
@@user-et4qo9yy3z For those who use their PC only for gaming.
@maloxi1472 · 1 month ago
@@user-et4qo9yy3z Oh look, mommy! The "akshually" guy is real!
@Altirix_ · 1 month ago
5:20 black screen?
@technicallyme · 1 month ago
Didn't Google use tensor cores before Nvidia?
@jackinthebox301 · 1 month ago
Tensor is just a mathematical term. This may be pedantic, verging on semantics, but Nvidia owns the 'Tensor Core' architecture specifically used in their products. So technically no, Google didn't use 'Tensor Cores' before Nvidia. They may have had something they referred to as 'Tensor', but again, that's not Nvidia's architecture.
@Sanguen666 · 1 month ago
excellent and professional video, ty for ur work!
@user-et4qo9yy3z · 1 month ago
Get your tongue out of his ass.
@sanatmondal7093 · 1 month ago
Some wise man said that Jensen wears a leather jacket even on the hottest day of summer.
@wakannnai1 · 1 month ago
All of this HW is pretty equivalent. AMD showcased similar jumps, and you'll see pretty similar jumps next year from Blackwell and next-gen MI. The question is how much you want to pay for it.
@mryellow6918 · 1 month ago
AMD hasn't shown similar jumps in performance at all.
@arenzricodexd4409 · 1 month ago
Raw performance is one thing. But how well and how easily that raw performance can be tapped is another thing.
@cjjuszczak · 1 month ago
Can't wait to buy a PhysX, I mean, Nvidia AI PCIe card :)
@sirab3ee198 · 1 month ago
Lol, also the Nvidia 3D glasses, the dedicated G-Sync module in monitors, etc...
@Neonmirrorblack · 1 month ago
18:39 Truth bombs being dropped.
@blackjew6827 · 1 month ago
I'll pay whoever does not have any AI shit.
@wasd-moves-me · 1 month ago
I'm already so tired of AI this, AI that, AI underwear, AI toothbrush, AI AI AI AI.
@brodriguez11000 · 1 month ago
AI condom.
@Zorro33313 · 1 month ago
If you still need to send data to a datacenter to be processed there and then get the result back, how is it different from any cloud-based service? This sounds like normal cloud computing, absurdly overcomplicated with unnecessary AI shit instead of encryption.
@Six_Gorillion · 1 month ago
That's called marketing wank. Slap some AI on that sucker and fanboys will take another 20% price increase up their ends with a smile on their face.
@cem_kaya · 1 month ago
I think CXL and a GPU would solve the inference problem with the right software.
@ShaneMcGrath. · 1 month ago
The more they push all this A.I., the more likely I am to end up switching off and going back outside.
@v3xx3r · 1 month ago
Let me guess, a traversal coprocessor?
@noobgamer4709 · 1 month ago
AMD is also entering a tick-tock cadence for the MIx00 series. E.g.: MI300X (CDNA3 + HBM3), then MI325X (CDNA3 + HBM3E), and after that MI350X (CDNA4 + HBM3E/HBM4).
@user-lp5wb2rb3v · 1 month ago
That's fine, it's better this way, because you can't expect ground-up technologies every year.
@jimgolab536 · 1 month ago
I think much will depend on how aggressively NVIDIA builds and defends its (leading-edge) patent portfolio. First is best. :)
@Eskoxo · 1 month ago
Mostly for the server side; I do not think consumers have as much interest in AI as these corpos make it seem.
@wrongthinker843 · 1 month ago
Yeah, given the performance of their last 2 gens, I'm sure AMD is shaking in their boots.
@FaeTheo · 1 month ago
You can also get Win 11 for free and activate it with cmd.
@lamhkak47 · 1 month ago
Also, in the local AI space, Apple has accidentally (or strategically) made their Mac Studio a rather economical choice for local inference. That, and Nvidia's profit margin (nearly twice Apple's) making Tim Apple gush, also shows how dominant Nvidia is in the market right now.
@FlyingPhilUK · 1 month ago
It's interesting how nVidia is still locked out of the desktop and laptop CPU market, with AMD, Intel and now Qualcomm pushing Copilot PCs and laptops. I know Qualcomm had an exclusive on Windows-on-Arm CPU development, but that ends this year (?), so obviously nVidia should be making SoCs for this market.
@Starfishtroopers · 1 month ago
could... nah
@charlesballiet7074 · 1 month ago
It is kinda nuts how much Nvidia has gotten out of that CUDA core.
@mryellow6918 · 1 month ago
That's what happens when you're the industry best for 15+ years: people end up developing technology to perform better on your hardware.
@user-lp5wb2rb3v · 1 month ago
@@mryellow6918 Fermi was bad, and so was Kepler compared to GCN. The issue with AMD was software and lack of money, not to mention they were crippled by GF failing on 14nm and delaying 28nm.
@davidtindell950 · 1 month ago
Do you think that there will be a competitor or alternative to NVidia within the next 18 months?
@PaulSpades · 1 month ago
There were dozens of startups developing exactly this kind of fixed-function accelerator for inference. Some have already run out of money; some have been poached by the bigger players like Apple, Google, Amazon... Some are developing in-memory computing and analog logic, which will probably never see the light of day this decade. Unless you can get TPUs from Google, there's not much actual commercial hardware you can get that's more efficient than Nvidia's, if you need horsepower and memory. If you want to run basic local inference, any 8-gig GPU will do, or any of the new laptop processors that can do around 40 TOPS.
@nick_g · 1 month ago
Nope. Even if a competitor started now and copied the ideas in the video, it would take about 18 months to design, validate, and produce them, AND that would be version 1. NVDA is pushing ahead faster than anyone can keep up.
@omnymisa · 1 month ago
Take Nvidia's spot? I don't think so, but anyone can enter the market, show what they can bring, and try kicking AMD and Intel. Nvidia looks very secure as the leader, but sure, we would be very glad if there were some other strong competitors around, because it feels like a monopoly, and that's not good.
@ps3301 · 1 month ago
These startups can try to sell, but once they get any traction, Nvidia will buy them with one quarter's profit.
@004307ec · 1 month ago
😅 Huawei Ascend, I guess? Though the software side is kind of bad.
@elmariachi5133 · 1 month ago
I expect Nvidia to soon produce Decelerators and have these slow down any computer immensely unless the owner pays horrible subscription fees...
@drewwilson8756 · 1 month ago
No mention of games until 20:09.
@jackinthebox301 · 1 month ago
Nvidia doesn't care about gaming GPUs anymore, dude. AI has 10x the revenue at wildly better margins. The only part of gaming Nvidia cares about is ego-driven: having the fastest card, regardless of price.
@Ronny999x · 1 month ago
I think it will be just as successful as the Traversal Coprocessor 😈
@donutwindy · 1 month ago
NVidia making $30,000 AI chips that consumers are not going to purchase should not affect AMD/Intel, who aren't currently competing in AI. To "end" AMD and Intel, the NVidia chips would have to be under $500, as that is the limit most consumers will spend on a graphics card or CPU. Somehow, I don't see that happening anytime soon.
@pazize · 1 month ago
Great work! Thank you for sharing! If this materializes it'll be very exciting.
@KilgoreTroutAsf · 24 days ago
New? Supercomputers have existed forever.
@newjustice1 · 1 month ago
What does it look like, brainiac?
@MichalCanecky · 1 month ago
I like the part where he talks about Pikachu.
@chryoko · 1 month ago
From memory, yesterday Jensen was speaking about 1,000 GWh (1 TWh) of consumption for GPT-4. I found that HUGE. FYI, a battery gigafactory will consume about 1 GWh of electricity per year to produce about 40 GWh of Li-ion cells, enough to produce 50 kWh packs for 800,000 small cars each year. Or maybe I misunderstood him... Maybe a subject for a video: how much energy it takes to run such large LLMs on a server, compared with what our little brains consume... (BTW, I do not need to solve plenty of matrices full of sin, cos, tan to move my shoulder, arm, hand, and fingers every time I want to scratch my head 😉)
@Prasanna_Shinde · 1 month ago
I think Nvidia/Jensen are banking so much on AI because they want to generate all the money/margin possible to fund CPU R&D, so they will have a complete solution/stack 🧐
@muchkoniabouttown6997 · 1 month ago
I wanna geek out about this, but I'm convinced that all the AI advancement and inference refinement is just snake oil for over 90% of buyers. So I hate it. I'm down for new approaches, but fr, can any non-salesman or non-fanboi explain how this will benefit more than 1-3 companies??
@lemoncake8377 · 1 month ago
That's a lot of pikachus!!
@ctu22 · 1 month ago
Green clickbait cope, thanks!
@angellestat2730 · 1 month ago
In the last year you were saying that Nvidia was at a dead end and that their stock price should plunge; instead, their price has risen 200%. Lucky for me, at that time I did not listen to you, and I bought, taking into account how important AI was going to be, making a lot of profit.
@metatronblack · 1 month ago
But isn't Batman the same as Batman Batman
@oraz. · 1 month ago
I don't care about LLM AI assistants. If Gaussian splatting takes over rendering, then OK.
@atatopatato · 1 month ago
coreteksilikiaaaa
@gstormcz · 1 month ago
AI acceleration sounds as good as 8-channel sound to my pair of ears. The presentation looks more eye-catching than RGB, maybe because I like spreadsheets and informed narrative. (Just saying I viewed it with the will to absorb as much as my brain accepts 🤷🏼‍♀️; when AI makes it to desktop PCs and games, I will understand 2x more.) You know better how groundbreaking Nvidia's acceleration could be, but I am sure I will watch it from a distance with my slim wallet 😂 GG, pretty news on this topic, as usual by Core-news-tech 👍 Patents legally last only a limited time, right? AMD and all the others will develop their acceleration at the law bureau soon.
@El.Duder-ino · 1 month ago
Nvidia has plenty of cash to continue being aggressive in its goal to disrupt and lead as many markets as possible. It's pretty much confirmed their next goal will be around edge and consumer AI, which their unsuccessful acquisition of Arm basically foreshadowed. It will be very interesting to see how the edge/consumer Arm SoC they're working on will compete with the rest of the players. Thx for the vid and for shedding more light on their future accelerator 👍
@genericusername5909 · 1 month ago
But who actually needs this to the point of paying for it?
@aladdin8623 · 1 month ago
At this point of development, Nvidia might as well build a new computer concept with the GPU as the central processing unit. The CPU is becoming more and more of a marginal gimmick. AI shifts things, and Intel and AMD have to react if they want to survive. Arm seems to understand, and Jim Keller with Tenstorrent understood it as well.
@LtdJorge · 1 month ago
You don't know what you're talking about.
@aladdin8623 · 1 month ago
@@LtdJorge I don't care about your lack of understanding of basic IT principles, to be honest, or of the impact of the current development on the future. Maybe you want to go play some Fortnite and troll some teenies there instead?
@LtdJorge · 1 month ago
@@aladdin8623 The GPU as central processing unit is the dumbest thing I've ever heard. It's clear you've never programmed in CUDA, OpenCL, etc.
@aladdin8623 · 1 month ago
@@LtdJorge The only thing you ever did or understood about computers is how to play Fortnite.
@XfStef · 1 month ago
So the industry is having YET ANOTHER go at thin-client BS. I hope for them, again, to fail miserably.
@patrikmiskovic791 · 1 month ago
Because of price I will always buy an AMD GPU and CPU.
@jeffmofo5013 · 1 month ago
Idk, I should be as enthusiastic as you. This may be an investment opportunity. Still waiting for NVidia stock to pull back; technical analysis says it's at its cycle top. But I'm also a machine learning expert. While inference is important, it's currently the fastest thing compared to training. The problem is phones, not laptops. Laptops can more than handle inference; a phone, on the other hand, struggles. So Samsung, with its focus on an AI chip, is more important in this arena. Unless NVidia is going to start making phones, I don't see this, as an implementer, as that impactful. And memory is more important than CPU on phones for this type of work.

On a side note, I don't even use GPUs for my AI work. I did a comparison, and the GPU only gave me a 20% increase in performance while costing twice as much. So at scale I can buy more CPUs than GPUs, and one more CPU is a 100% increase in performance, compared to gaining 20% at twice the cost. So I don't see the NVidia AI hype.

1x CPU + 1x GPU is 120% performance at twice the cost.
2x CPU is 200% performance at the same (doubled) cost.
So I get a 200% result for the same price as 1 CPU and 1 GPU.
@subz424 · 1 month ago
If you're going to talk about LLM/AI inference speeds, why aren't you looking at other hardware like Groq's LPUs? ~20x (average) cheaper than GPT-4o per million tokens, with far faster inference. Basically, efficient. Would be nice to see a video from you on other companies like Groq.
@axl1002 · 1 month ago
You need like 8 Groq chips to run Llama 3 8B.
@Techaktien · 1 month ago
Stop hyping
@Matlockization · 1 month ago
Accelerators have been around in CPUs for decades now; I don't get all the Nvidia fanfare.
@IcecalGamer · 1 month ago
15:05 "Nvidia has the verticality." Already established for drop-in/plug-and-play. It took Intel 3 years just to make their first-gen PC GPUs somewhat work, drivers and software. AMD can't even make first-party heatsinks for their PC GPUs; check how "good" their vapor-chamber design was. Even IF price/power (W)/performance were a tie, amd=nvidia=intel, the other two are jokes when it comes to ease of adoption and reliability. The only thing AMD has in the GPU market is the openness/hackability, but that is a double-edged sword, since it requires time and effort to make it do what you want; you can do it, though 👍 unlike Nvidia or Intel, which are closed ecosystems. As for Intel... it's so sketchy that even now there are doubts that Battlemage will ever come out, or they could drop out of the GPU/accelerator market altogether (like they did with so many of their prosumer or enterprise products; Optane, for example).
@knarfxd4071 · 1 month ago
Bruh, using accelerators ain't some genius 4D chess move. NV is just the only one for whom it's already a viable business strat, while AMD doesn't have the scale, and Intel, well, is doing Intel things.
@MrDante10no · 1 month ago
With all due respect Coreteks: first it was AMD going to destroy nVidia with chiplet design, then Intel was going to destroy AMD with new CPUs, and now nVidia will destroy both! 🤔 Can you please make up your mind? 🙂 PLEASE! 🙏