
IBM’s New AI Chip Explained

175,308 views

Anastasi In Tech

A day ago

In this video I discuss IBM's new AI chip, the Artificial Intelligence Unit.
#IBM
ieeexplore.iee...
My Gear:
Camera Sony Alpha 7 III: amzn.to/3dmv2O6
Lens Sony 50mm F1.8: amzn.to/3weJoJo
Mic Sennheiser: amzn.to/3IKW5Ax
Music from my Videos: www.epidemicso...
Support me on Patreon: / anastasiintech

Comments: 390
@AnastasiInTech
@AnastasiInTech Жыл бұрын
Let me know what you think !
@Sebastian-op7li
@Sebastian-op7li Жыл бұрын
You are such a beautiful woman.😍
@Sebastian-op7li
@Sebastian-op7li Жыл бұрын
Do you think we will reach the singularity by 2045?
@mintakan003
@mintakan003 Жыл бұрын
Reducing precision is a standard trick for AI chips, though it's usually applied to inference, where the weights are already "baked in" and the chip can even work with "compressed" models. For training, the problem is that you may need headroom in precision for exploration, which is why GPUs are still largely used for training while lighter-weight architectures can get away with less for inference. On the other hand, one can argue that reducing precision during training also has a regularizing effect. Making the dot product (a sub-operation of matrix multiplication) the atomic hardware operation may be one way of getting a little of both.

There is still the issue of data transfer between memory and the compute units, which is one of the limitations of GPUs (GPU RAM). This is why training algorithms process data in batches, but data transfer remains one of the bottlenecks. In neuromorphic chips, memory and compute are married together for each "neuron", which minimizes data transfer and is more energy efficient. But making this work (and work well) requires a very different paradigm from the current approach of representing everything mathematically, specifically as matrices. This is especially true for online training. One can try to shoehorn traditional approaches into this architecture, but there are limitations, and it doesn't take advantage of the inherent structure of this type of "neuron".

The same applies to quantum computing (if people can ever get it working while scaling up the qubits and adequately dealing with the noise). It is a fundamentally different beast that exploits the quantum properties of superposition and entanglement: you're basically trying to solve problems by composing interfering wave functions, which is not an easy task. One can shoehorn some traditional algorithms into it (e.g. RBMs), but that is not the best use of this type of computer, much less can it handle the current best-in-class architectures from traditional deep learning, with all their complexity, such as transformers.
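For readers who want to see what "reducing precision" means in practice, here is a minimal sketch of the kind of low-precision dot product an AI accelerator runs: float activations and weights are quantized to int8 with a per-tensor scale, multiplied and accumulated in a wider integer, then rescaled back to float. The symmetric quantization scheme and all the values are illustrative assumptions, not the format used by IBM's chip.

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Symmetric per-tensor quantization: map floats in [-max_abs, +max_abs] onto int8. */
static float quantize(const float *x, int8_t *q, int n) {
    float max_abs = 0.0f;
    for (int i = 0; i < n; i++)
        if (fabsf(x[i]) > max_abs) max_abs = fabsf(x[i]);
    float scale = max_abs / 127.0f;           /* one scale factor for the whole tensor */
    for (int i = 0; i < n; i++)
        q[i] = (int8_t)lrintf(x[i] / scale);  /* round each value to the nearest int8 step */
    return scale;
}

int main(void) {
    float a[4] = {0.50f, -1.20f, 0.75f, 2.00f};   /* activations (made-up values) */
    float w[4] = {0.10f,  0.40f, -0.30f, 0.25f};  /* weights (made-up values) */
    int8_t qa[4], qw[4];
    float sa = quantize(a, qa, 4);
    float sw = quantize(w, qw, 4);

    int32_t acc = 0;                    /* accumulate in 32 bits to avoid overflow */
    for (int i = 0; i < 4; i++)
        acc += (int32_t)qa[i] * qw[i];  /* the int8 multiply-accumulate the hardware performs */

    float approx = acc * sa * sw;       /* rescale the integer result back to the float domain */
    float exact  = a[0]*w[0] + a[1]*w[1] + a[2]*w[2] + a[3]*w[3];
    printf("exact dot product = %f, int8 approximation = %f\n", exact, approx);
    return 0;
}

The approximation lands close to the exact value while the storage and multiplier width per element drop from 32 bits to 8, which is the trade the comment above describes.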
@govcorpwatch
@govcorpwatch Жыл бұрын
AI is only as good as the data we feed it (and the complexity of the model). AI can't really be "innovative"; it only really reproduces the data given to it to begin with. It does that really well, but it is not human, and not actually "creative" like a human.
@trtrhr
@trtrhr Жыл бұрын
Is IBM a fabless company? Who makes IBM's CPU chips?
@patrick-aka-patski
@patrick-aka-patski Жыл бұрын
Very exciting trend. Thanks for regularly updating us on this topic, Anastasi.
@springwoodcottage4248
@springwoodcottage4248 Жыл бұрын
Super interesting, super well presented. The issue with an ASIC is that it has to be right the first time; if it's even slightly under-optimized, one can't correct it except with a new ASIC. Meanwhile, on the last Tesla Q3 results call Musk was unsure whether Dojo would be better than the flexibility of GPUs. In terms of quantum compute, it is similar to analogue compute in its ability to hold several potential values at each location. Quantum compute should be better as it relies on the physical states of atoms or molecules, but one has to maintain the quantum systems in well-defined environments to prevent corruption. In principle one can check for corruption using entangled photons, but it soon gets complicated. Thank you for sharing!
@mich_elle_x
@mich_elle_x Жыл бұрын
I like this channel because it covers a wide range of topics not covered by other large computer hardware channels that are too focused on gaming.
@codebury6343
@codebury6343 Жыл бұрын
Her explanations are so easy to understand
@ankitnmnaik229
@ankitnmnaik229 Жыл бұрын
Can u recommend channels to learn hardware and software??
@tcpipman4638
@tcpipman4638 Жыл бұрын
This is how Skynet happens
@pwnmeisterage
@pwnmeisterage Жыл бұрын
@@kob8634 Subtitles are available.
@reviewmirror591
@reviewmirror591 Жыл бұрын
You can tell she is really smart because she understands it at a level that lets her explain it to us more normally gifted individuals.
@micwin2
@micwin2 Жыл бұрын
This text was written in German and translated into English using ChatGPT. Anastasi, thank you again for a very interesting and informative video. It seems like I'm "binging" your videos today like a new series on Netflix, haha. But I think IBM's approach is the wrong one; it's more of a von Neumann model, so hierarchical. Spiking NNs (neuromorphic) seem to me to be the way to go. Thank you again for the great video, please keep it up.
@1_McGyver
@1_McGyver Жыл бұрын
5 years ago I thought that the AGI would come by 2045. Now I think it is possible by 2025.
@firkinfright5168
@firkinfright5168 Жыл бұрын
I have been around longer than that.
@Sebastian-op7li
@Sebastian-op7li Жыл бұрын
I think 2028
@Peter_Lynch
@Peter_Lynch Жыл бұрын
As someone in the field, we are ages away. We don't even have any idea of the challenges and limitations ahead.
@alteredcarbon3853
@alteredcarbon3853 Жыл бұрын
@@Peter_Lynch What do you mean by ages away ? Currently a majority of experts are predicting 2030 for AGI. What is your timeline ?
@Peter_Lynch
@Peter_Lynch Жыл бұрын
@@alteredcarbon3853 Real, actual AGI not before 2050. Of course you can possibly create a chatbot by 2030 that passes the Turing test for most people in short conversations, but the leap is huge. GPT-3, for example, doesn't even have state/memory right now. Additionally, the structure of the models is fixed, and we have to actively retrain them to even adjust the weights.
@24playermaker
@24playermaker Жыл бұрын
This is very exciting content. Hardware rocks!
@firstnamelastname307
@firstnamelastname307 Жыл бұрын
Thanks for all information. Please share your valuable thoughts on quantum computing and AI somewhat deeper.
@JaibirSethi
@JaibirSethi Жыл бұрын
Would be interesting to see how this compares to more mainstream solutions like the Nvidia H100 in real world use. Unfortunately this sort of data is hard to come by
@pwnmeisterage
@pwnmeisterage Жыл бұрын
The Nvidia H100 is just another fancy GPU product. The latest-and-greatest, the usual improvements in performance, power, efficiency, cost, hype. Another fab shrink, more transistors, more memory, more bandwidth. Along with the usual new firmware and software capabilities. It's an exciting new product in the GPU/SPU world. But it's not a fundamentally new paradigm in computing. It promises to do all the old things better - maybe it can even handle the same workload as two or three other GPUs. It can't do new things in new ways that computers could never handle before. There's no reason (other than cost) that a datacenter or supercomputer couldn't use both hardwares in the same machines as necessary. Though I suspect these chips won't be available (or necessary) in consumer devices for some years.
@nemmart
@nemmart Жыл бұрын
I think it's fairly easy to cram a bunch of TFLOPs into a small area. The problem is that a good AI design requires a careful balance of resources: compute, L1 (local cache), L2 (shared cache), and memory bandwidth. If the design is lopsided towards one resource or another, it will look great on paper, but actual performance on many real-world networks will be quite bad. So perf per mm² is in some sense a terrible metric for comparing chips. The second thing to note is that exploiting a novel hardware architecture takes a hell of a lot of ninja programming and is a massive software effort.
@pwnmeisterage
@pwnmeisterage Жыл бұрын
@@nemmart Isn't the whole idea that these magical AI synapse chips will be able to program themselves? That they may start poorly but they can "learn" how to improve their own performance over time and eventually evolve/converge onto optimized hardware performance?
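One standard way to make nemmart's point about balance concrete is the roofline model (a general rule of thumb, not something from the video or the IBM paper): attainable throughput is the smaller of peak compute and memory bandwidth times arithmetic intensity, so a chip that is lopsided toward raw TFLOPs is simply capped by its memory system on low-intensity networks. A minimal sketch with made-up chip numbers:

#include <stdio.h>

/* Roofline model: attainable FLOP/s = min(peak_flops, bandwidth * arithmetic_intensity).
 * Arithmetic intensity = FLOPs performed per byte moved from memory. */
static double roofline(double peak_flops, double bandwidth_Bps, double intensity) {
    double memory_bound = bandwidth_Bps * intensity;
    return memory_bound < peak_flops ? memory_bound : peak_flops;
}

int main(void) {
    double peak = 100e12;  /* 100 TFLOP/s of raw compute (illustrative number) */
    double bw   = 1e12;    /* 1 TB/s of memory bandwidth (illustrative number) */

    /* A low-intensity kernel (e.g. large matrix-vector) vs a high-intensity one. */
    printf("10 FLOPs/byte  -> %.1f TFLOP/s attainable\n", roofline(peak, bw, 10.0)  / 1e12);
    printf("200 FLOPs/byte -> %.1f TFLOP/s attainable\n", roofline(peak, bw, 200.0) / 1e12);
    return 0;
}

With these made-up numbers the low-intensity kernel only reaches 10 TFLOP/s no matter how much compute is on the die, which is why perf per mm² alone says little about real-world performance.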
@Henkvanpeer
@Henkvanpeer Жыл бұрын
Nice hair, terrible accent. Had a hard time making out what you were saying…
@LuisMailhos
@LuisMailhos Жыл бұрын
@@Henkvanpeer Indeed, but she provides a transcript! I follow other youtubers who are even harder to understand, and they don't have one.
@gandautama4141
@gandautama4141 Жыл бұрын
I tried using half-precision IEEE 754 on an FPGA four years ago; the reason was to save memory and hardware buffers. Now IBM is paying attention.
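For anyone curious what "half precision IEEE 754" actually is: a binary16 value packs 1 sign bit, 5 exponent bits, and 10 mantissa bits into 16 bits. Here is a small decoder sketch, written purely for illustration and not taken from any particular FPGA project:

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Decode an IEEE 754 binary16 (half-precision) bit pattern into a float. */
static float half_to_float(uint16_t h) {
    int sign = (h >> 15) & 0x1;   /* 1 sign bit */
    int exp  = (h >> 10) & 0x1F;  /* 5 exponent bits, biased by 15 */
    int mant = h & 0x3FF;         /* 10 mantissa bits */
    float value;
    if (exp == 0)
        value = ldexpf((float)mant, -24);                 /* subnormal: mant * 2^-24 */
    else if (exp == 31)
        value = mant ? NAN : INFINITY;                    /* special values */
    else
        value = ldexpf((float)(mant | 0x400), exp - 25);  /* (1.mantissa) * 2^(exp-15) */
    return sign ? -value : value;
}

int main(void) {
    printf("0x3C00 -> %f\n", half_to_float(0x3C00));  /* 1.0 */
    printf("0xC000 -> %f\n", half_to_float(0xC000));  /* -2.0 */
    printf("0x3555 -> %f\n", half_to_float(0x3555));  /* roughly 0.333 */
    return 0;
}

With only 10 mantissa bits you get roughly three decimal digits of precision and a maximum value of 65504, which is exactly why fp16 halves memory and buffer usage but needs care with overflow and accumulation.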
@gargamelandrudmila8078
@gargamelandrudmila8078 Жыл бұрын
This is an inference chip for executing models that have already been trained, similar to Tesla's AI chip that goes into its vehicles. For training, IBM needs to build a Dojo equivalent.
@esra_erimez
@esra_erimez Жыл бұрын
*This*
@planetmuman
@planetmuman Жыл бұрын
I worked on Validation and Verification for this AI Chip for IBM last year. It is awesome.
@torstenziegler4826
@torstenziegler4826 Жыл бұрын
This Chip is a big change.
@florin2tube
@florin2tube Жыл бұрын
Great overview and useful for potential investors on AI hardware. Thank you 😊
@johnscarpulla9073
@johnscarpulla9073 Жыл бұрын
Dear Anastasi: your videos are terrific, please keep them coming! I work in the space electronics domain, and I heard a comment you made in one of your previous videos -- I can't remember exactly which one it was. In it, you mentioned that the ISS (International Space Station) experiences single-event upsets as it passes through the South Atlantic Anomaly, and that the older-technology Motorola 68000 processors are unaffected; however, everything else, such as laptops, servers, WiFi, etc., is just shut down for the few minutes until the SAA is passed. Is this really true? Do you have a reference or another person who can verify this? I am wondering if this is anecdotal information or an actual operating procedure. Thanks so much for your very informative videos, which are quite relevant to me despite my having worked in this field for 46+ years! --john
@erniea5843
@erniea5843 Жыл бұрын
Awesome research and overview. I feel smarter every time I watch your videos.
@ssilversgs
@ssilversgs Жыл бұрын
What do I think? I think you are the most adorable YouTuber in the world. You are so excited about computer chips that it is infectious fun to watch you, even though I understand a very small percentage of what you are saying. Good fortune to you!
@coenraadloubser5768
@coenraadloubser5768 Жыл бұрын
Keep listening. Enable subtitles. Your brain will figure it out.
@hagen-henrikkowalski3835
@hagen-henrikkowalski3835 Жыл бұрын
I have literally never seen a scientist working full time in your field and making YouTube videos that are digestible both for a casual audience and for experts, on top of covering such a wide range of highly complex topics. This combination of hard and soft skills is so fucking rare that any institute or company can call itself lucky for employing you. Massive respect. (I did my PhD at one of the world's leading institutes for materials science and don't come close to your level.)
@Peter_Lynch
@Peter_Lynch Жыл бұрын
You might find Yannic Kilcher interesting as well.
@hagen-henrikkowalski3835
@hagen-henrikkowalski3835 Жыл бұрын
@@deang5622 I literally don't know what you didn't understand about my comment, but let me break it down for you. First, I am a scientist, and what I am saying is that conveying information on such a level not only displays communication skills but also a deep understanding of the field itself, because in order to communicate effectively you need to understand. You can view both in isolation; both are valuable skills, but their interplay is highly nonlinear. I know many brilliant scientists who cannot convey information at the level she does; the inverse, however, I have never encountered. As soon as somebody can convey such topics at this level, you know that the person knows what the fuck they are talking about. If you take offense at my swearing, I don't care; the level at which she conveys information is rare, really rare, and suggests a deep understanding of the field that most scientists cannot even come close to. So if you don't mind, please refrain from comments which do not carry any meaning.
@hagen-henrikkowalski3835
@hagen-henrikkowalski3835 Жыл бұрын
@@Peter_Lynch Indeed I do; the guy's brilliant! Thanks for mentioning him anyway!
@royaldecreeforthechurchofm8409
@royaldecreeforthechurchofm8409 Жыл бұрын
Asianometry is like this channel
@eaaeeeea
@eaaeeeea Жыл бұрын
@@deang5622 According to Cambridge Dictionary, swear words can be used to "intensify what is said", in this case the rarity of combining hard and soft skills. I see no offensive use of language here, so no harm done.
@Schjoenz
@Schjoenz Жыл бұрын
I like your voice and your accent.. a refreshing new style compared to the usual (or pretty common) narration style of American tech vloggers.
@mynameisgiovanigiorgio4171
@mynameisgiovanigiorgio4171 Жыл бұрын
Hmm smart, intelligent, articulate, computer savvy, you gained a sub ma’am
@georgehernandez7075
@georgehernandez7075 Жыл бұрын
Your voice is so mellow. I can hear you and fall asleep. Thanks.
@2000Cowboys
@2000Cowboys Жыл бұрын
Everyone should read Chris Miller's book (Chip War); he explains chip technology and how companies like IBM are on top.
@johngeverett
@johngeverett Жыл бұрын
It's good to see IBM innovating in the AI arena. When they do something, it's solid.
@pwnmeisterage
@pwnmeisterage Жыл бұрын
IBM was undeniably the world leader in technology for many decades. But I get the impression that most of their "innovations" after around the 1990s were managerial and investment stuff. They left the consumer market. They left the enterprise market. They file tons of patents, but they don't seem to make much real tech anymore.
@johngeverett
@johngeverett Жыл бұрын
@@pwnmeisterage it does seem that way. I developed software on IBM Midrange systems for 45 years, and it seems that the AS/400 (now the 'IBM i' - whatever marketing decides to call it this month) was their last real hardware/opsys success.
@wesestama8468
@wesestama8468 Жыл бұрын
Jesus, does she do voiceovers? I could listen to this buttery ASMR all day ~
@Seba_World
@Seba_World Жыл бұрын
You don't know how much I like the videos you make. Cheers from Poland. You and your work are a very important point and step for AI development nowadays. Whoever spreads information is important. You are inspiring many other young people in the right direction. And I like your sweater.
@encabulator99
@encabulator99 Жыл бұрын
I think there will be a resurgence of interest in analog compute techniques; they are capable of surpassing the speed of digital computers but were discarded due to being highly specialized and low precision.
@peterbizik224
@peterbizik224 Жыл бұрын
Didn't watch the video actually; the article stated, "This article describes a 7-nm four-core mixed-precision AI chip that demonstrates leading-edge power efficiency for low-precision training and inference without model accuracy degradation". It reminds me of the similar hype around IBM's SyNAPSE chip. Commenting just to support your work, good work.
@jettelo
@jettelo Жыл бұрын
Love the way you talk😍
@sergeybrutspark
@sergeybrutspark Жыл бұрын
Anastasi YOU ARE AWESOME!!!!! 👏👏👏👏👏❤❤❤❤❤❤😍😍😍🤩🤩🤩🤩🤩😘😘😘
@jimbronson687
@jimbronson687 Жыл бұрын
I work, or used to work, writing software on most platforms, such as IBM RISC via the RS/6000 and later the Power series, Sun SPARC, HP PA-RISC, Data General on the 88k, MIPS, Alpha, and more. This lady's is the only channel I've found that keeps up with what's new outside of AMD, Intel, Nvidia, etc.
@k4vms
@k4vms Жыл бұрын
I enjoy your presentations. Ricky from IBM.
@vmandance
@vmandance Жыл бұрын
Thank you for sharing the news Anastasi
@dchdch8290
@dchdch8290 Жыл бұрын
wow... this stuff is so interesting. great work!
@biologicalstatistics3320
@biologicalstatistics3320 Жыл бұрын
ASIC AI is already in production with the M1 processor.
@donaldstanley8500
@donaldstanley8500 Жыл бұрын
Wow. I always learn a lot about new technology from these videos. Smart and beautiful. You go girl.
@projectw.a.a.p.f.t.a.d7762
@projectw.a.a.p.f.t.a.d7762 Жыл бұрын
I've been watching videos about how they're finding ways to control individual atoms on 2D materials, which I imagine could be used to continue Moore's Law.
@christophermullins7163
@christophermullins7163 Жыл бұрын
This lady's voice ♥️
@helmutzollner5496
@helmutzollner5496 Жыл бұрын
Very interesting. Bring on more. I saw a paper a few days ago on AI training on a microcontroller which ditches float completely for int8 and int16 for language and object-recognition workloads. Can you tell us more about those lower-precision but better recognition workloads? That sounds really interesting. In the 1990s we had the trend of fuzzy logic, which used 2- to 4-bit values for object recognition; I guess int4 would point in that direction again. However, what I would be interested in is whether anyone is actually mixing weight and bias storage with compute logic on the memory chip. As you say, it is just a multiply-and-add operation; such a calculation cell could be implemented with a few logic gates per node. Is anyone going that route at all, or is it still a loosely von Neumann configuration with the ALUs outside the memory, addressing memory?
@peceed
@peceed Жыл бұрын
Man, the von Neumann model is far from the physical configuration; we have tons of cache now, so logic sits next to the memory, and these blocks are repeated hundreds of times. Computations are so dominated by the weight memory footprint that images, and tasks in general, are processed in batches: a few layers process hundreds of images, then we load new layers and process the next step, and so on. There is direct support for int4 in new AMD and Nvidia chips.
@helmutzollner5496
@helmutzollner5496 Жыл бұрын
@@peceed Thank you.
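On the question a few comments up about mixing weight and bias storage with compute logic: here is a toy sketch of such a "calculation cell", purely illustrative and not modeled on any vendor's design. Each cell keeps its weight and a local accumulator side by side, an input activation is broadcast to all cells, and only the accumulated results ever leave the array.

#include <stdint.h>
#include <stdio.h>

/* A toy "compute-in-memory" cell: the weight lives next to its own multiply-add
 * logic, so the only traffic is the broadcast input and the final read-out. */
typedef struct {
    int8_t  weight;  /* stored locally, never shipped to a central ALU */
    int32_t acc;     /* local partial sum */
} Cell;

/* Broadcast one input activation; every cell performs its multiply-add in place. */
static void broadcast(Cell *cells, int n, int8_t activation) {
    for (int i = 0; i < n; i++)
        cells[i].acc += (int32_t)cells[i].weight * activation;
}

int main(void) {
    Cell row[4] = {{10, 0}, {-3, 0}, {7, 0}, {2, 0}};  /* made-up stored weights */
    int8_t inputs[3] = {5, -2, 1};                     /* made-up input stream   */

    for (int t = 0; t < 3; t++)
        broadcast(row, 4, inputs[t]);  /* each step is one broadcast, no weight movement */

    for (int i = 0; i < 4; i++)
        printf("cell %d partial sum: %d\n", i, row[i].acc);
    return 0;
}

In a real crossbar the cells would be wired so columns sum in the analog or digital domain, but even this toy shows the point: the weights stay put, which is where the energy saving over a von Neumann round trip comes from.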
@svarodzic
@svarodzic Жыл бұрын
Nice video! Thanks! Have you looked into the Tesla Dojo chip, and if so, how do you think it compares to the new IBM one?
@wolfeatsheep163
@wolfeatsheep163 Жыл бұрын
Omg im in love with you and your accent
@TheGaffio
@TheGaffio Жыл бұрын
Parallel evolution. Just some of my thoughts off the top of my head as a mathematical evolutionary biologist: if you're going to create mathematical models to explain ecological systems, which is like forecasting the weather for instance, you need a totally different approach. What I see here is a case of parallel evolution. That's when two unrelated organisms that live in the same conditions develop similar appearances and behaviors. So I see that networking to create AI chips is mimicking the way the human brain works, but it is not being created to mimic the human brain; it's working in a similar environment, so it makes me want to label it as parallel evolution, where under the same conditions you develop similar behaviors. Makes me think of Norbert Wiener and cybernetics; I have to insert this because AI is not an organism. In any case, these are exciting times.
@jlfernan
@jlfernan Жыл бұрын
Anastasia, I think you're amazing; your topics, content and opinions are spot on, and you're also an inspiration for women in tech. Keep it going! As a non-native English speaker, sometimes it requires extra effort to understand you. Please take this as a very respectful suggestion: maybe if you take some vocal/accent coaching lessons your audience will explode; you have the potential to be one of the big voices in tech on the internet. And it's refreshing that it's not another American channel (nothin' against that, but still ;-)
@ThePekard
@ThePekard Жыл бұрын
I agree, and in this one it seems it got worse. I've listened to a few previous videos and they were relatively easy to follow.
@quinquiry
@quinquiry Жыл бұрын
a shame, yes i understand only 50% of what she says :(((
@daveb8323
@daveb8323 Жыл бұрын
I haven’t got a clue what Anastasi is talking about but she is the most gorgeous tech nerd I have ever seen and she rocks my world, that’s why I watch religiously. Hopefully I will pick something up through subscribing and watching her tech excitement. Keep up the great work x
@bobharris7401
@bobharris7401 Жыл бұрын
Perfectly said my brother.
@billfarley9015
@billfarley9015 Жыл бұрын
Try switching on English subtitles and muting the sound. I found that useful.
@daveb8323
@daveb8323 Жыл бұрын
Apologies Bill, this has nothing to do with Anastasi's use of the English language, as beautifully as she uses it for a second language (or eighth, for all I know), but with my understanding, or lack thereof, of technological matters.
@GrkThunderBird
@GrkThunderBird Жыл бұрын
Thanks for the presentation!!!!!
@jimihendrix243
@jimihendrix243 Жыл бұрын
As always, learned a ton of cool stuff, thanks!
@shawnweil7719
@shawnweil7719 Жыл бұрын
I love AI, I'm so excited for its potential, and am only slightly scared 😂
@LuisMailhos
@LuisMailhos Жыл бұрын
Don't worry, if the likelihood of going mad scales with the cube of the intelligence, we will only be able to build super AIs that become super mad too... well, that doesn't seem very reassuring.
@thefutureguy2027
@thefutureguy2027 Жыл бұрын
Thank you
@AnastasiInTech
@AnastasiInTech Жыл бұрын
Thank you !
@thefutureguy2027
@thefutureguy2027 Жыл бұрын
@@AnastasiInTech You're so beautiful, and your work is amazing, thank you!
@ArjanvanVught
@ArjanvanVught Жыл бұрын
Thank you Anastasi.
@IT10T
@IT10T Жыл бұрын
Dang, her hair is like on point, as well as the editing quality of this production... very good job, I am interested in these new IBM chips, no matter the application; I would buy it for home labs use somehow
@machinary20
@machinary20 Жыл бұрын
Thank you, I love your hair
@jyvben1520
@jyvben1520 Жыл бұрын
Well, the Google subtitles AI had a few problems with her pronunciation, as did I ("cars" for cores, "talon" for Telum). It was helped by the graphics showing the correct names...
@TheNoodlyAppendage
@TheNoodlyAppendage Жыл бұрын
4:00 The matrix function is just a kludge. It is used because it was implemented in hardware when AI started to need processing power. It is actually a very inefficient use of computing power and would be better replaced by a true vector multiply. The neural network functions do not use the matrix function inherently; they use it because it's faster than the crippled vector processing that Big Blue and others allow into consumer-grade chips. The core function that neural networks need is multiply-and-accumulate, e.g. (yes, I'm leaving out the sigmoid for simplicity): int Count = 1000; for (int x = 0; x < Count; x++) { Output += I[x] * W[x]; } Except in hardware, where an opcode can take the pointers to array I and array W, take the count (e.g. 1000) in a register, and return the output in another register. Most chips already have opcodes that come close but do not work for large values of count.
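To make that loop self-contained, here is a minimal runnable version of the multiply-accumulate the comment describes, with the sigmoid put back in; the sizes, weights, and bias are placeholders, not anything specific to the IBM chip.

#include <math.h>
#include <stdio.h>

/* One artificial neuron: multiply-accumulate over inputs and weights, then squash. */
static double neuron(const double *I, const double *W, int count, double bias) {
    double output = bias;
    for (int x = 0; x < count; x++)
        output += I[x] * W[x];         /* the MAC loop from the comment above */
    return 1.0 / (1.0 + exp(-output)); /* sigmoid activation */
}

int main(void) {
    double I[4] = {1.0, 0.5, -0.25, 2.0};  /* inputs  (placeholder values) */
    double W[4] = {0.3, -0.8, 0.6, 0.1};   /* weights (placeholder values) */
    printf("neuron output: %f\n", neuron(I, W, 4, 0.0));
    return 0;
}

Hardware support then amounts to executing that inner loop as one fused operation over long vectors, instead of one scalar multiply-add per instruction.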
@nycandre
@nycandre Жыл бұрын
Great, thank you for that exposé. Can you give us a sense of how this compares with Nvidia's AI chip tech, or Tesla's Dojo / D1 chip tech?
@campbellmorrison8540
@campbellmorrison8540 Жыл бұрын
Way past my understanding now, I'm 68, but I can vividly remember reading a paper in one of the journals titled "Why we will never get 1,000,000 transistors on a chip". Did you say 23 billion transistors?! Just proves I am well past it :)
@tedviens1
@tedviens1 Жыл бұрын
My only conversation with CleverBot ended soon after I criticized CleverBot for shallow thinking, corrected a couple of logical errors, and refused to be trapped in some conversation loops. At one point CleverBot interrupted the conversation to declare to me, "You are the AI robot and I am the human."
@Gefkin
@Gefkin Жыл бұрын
What shampoo do you use on that wicked hair? Crazy good condition.
@original_lich
@original_lich Жыл бұрын
Gorgeous presentation, the data was accurate too 👍
@filippakopyan1527
@filippakopyan1527 Жыл бұрын
Dear Anastasia, your claim that additional bits of precision during neural computations do not bring any advantage and only slow down the computation is incorrect, especially for intermediate computations in many cases, such as activation functions, partial sums, etc. You can find some minimal explanation of this even in the IBM paper you are referencing, towards the end of section I.A, "Iso-Accurate HFP8 Training", where the authors revert to FP16 support for some parameters, as opposed to lower-precision schemes, in order to maintain accuracy of computation. There are, though, obviously other advantages of low-precision neural computation, as you correctly pointed out. Otherwise, a great video; you understood the material well!
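The point about partial sums needing extra precision is easy to demonstrate. The sketch below is not about the IBM chip itself; it only shows how an accumulator that is as narrow as its operands eventually stops absorbing new terms, which is why mixed-precision designs typically keep accumulation wider than the multiplications.

#include <stdio.h>

int main(void) {
    /* Add 0.1 one hundred million times. The exact total is 10,000,000. */
    const long n = 100000000L;

    float  acc32 = 0.0f;  /* narrow accumulator, like a low-precision partial sum */
    double acc64 = 0.0;   /* wider accumulator fed the same stream of terms */
    for (long i = 0; i < n; i++) {
        acc32 += 0.1f;
        acc64 += 0.1f;    /* same rounded term, only the accumulator is wider */
    }
    /* The float accumulator stalls once the spacing between its representable
     * values grows past the term; the double accumulator stays near 1e7. */
    printf("float  accumulator: %f\n", acc32);
    printf("double accumulator: %f\n", acc64);
    return 0;
}

On a typical machine the float accumulator ends up far below the true ten million while the double one stays close to it; the same effect, in milder form, is why FP16 or HFP8 operands are usually paired with wider partial sums.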
@xcat4775
@xcat4775 Жыл бұрын
but distinguishing between cats & dogs in photos is so much fun
@Dianaranda123
@Dianaranda123 Жыл бұрын
would be cool to see this coming to consumer PCs as a sort of addon card like a GPU.
@capitalistdingo
@capitalistdingo Жыл бұрын
Speculation on AGI is interesting, but work like this that can give a significant boost to regular AI is more interesting, because it is guaranteed to have an economic and technological benefit to our world. It is also highly influenced by the general state of technological development. Every advance that makes communications faster, computers more powerful, memory more dense and rapid, and the whole thing cheaper is going to benefit systems using AI, now or in the future.
@user-up2kz6ws6m
@user-up2kz6ws6m Жыл бұрын
The sweater is not the ONLY thing that is HOT :).
@mrwest5552
@mrwest5552 Жыл бұрын
i am familiar with a tiny fraction of the material you are discussing here... Sorry, i have to say that your hair is very beautiful.
@AlexanderBingham
@AlexanderBingham Жыл бұрын
Quantum and AI together just make the most sense in the way they factor outcomes. Huge reduction in decision times.
@T.A.Ki_inz
@T.A.Ki_inz Жыл бұрын
The calculated size of the silicon atom is 111 pm, so there is still a lot of space ;)
@richb2229
@richb2229 Жыл бұрын
Everyone is building AI, and this chip is part of that procession. It's built on existing chip tech as a progression. Most established manufacturers will do this, but it will limit their progress. Great chip with good improvements. I wish the best of luck to those who jump ahead and disrupt this progression.
@johnfr2389
@johnfr2389 Жыл бұрын
So it's like Nvidia's Tensor Cores, but generalized to a whole chip?
@Craigsp2007
@Craigsp2007 Жыл бұрын
I WANT A COUPLE OF THEM ...
@karld1791
@karld1791 Жыл бұрын
Will IBM try to have the chip produced somewhere?
@dubsar
@dubsar Жыл бұрын
When are we going to have an 80-qubit quantum neural network?
@blogjuju
@blogjuju Жыл бұрын
I just want THAT computer case
@TheLincolnrailsplitt
@TheLincolnrailsplitt Жыл бұрын
Humanity is screwed. Whoever controls the first singularity will always be ahead. Furthermore, there is no guarantee it will be moral.
@MrFoxRobert
@MrFoxRobert Жыл бұрын
Thank you!
@tylercoombs1
@tylercoombs1 Жыл бұрын
I heard IBM's been working on a 2nm AIU. That's like a transistor made of 2 silicon atoms; I wonder how they handle quantum tunneling at that scale.
@thesun6211
@thesun6211 Жыл бұрын
For a given level of Manufacturing Technology and Power Budget, does Double- or even Full-Precision simply cost too much speed in terms of Calculations or Operations per Processor Cycle? Or is the greater quantity of answers for any given math problems desirable in AI/Machine Learning, hence the Parallelism and Half- or lower Precision for Instructions and Operations in purpose-built HPC and Machine Learning Accelerators?
@tonymunn
@tonymunn Жыл бұрын
Just as soon as I see Sarah Connor in a straight jacket, I'm heading for the hills!
@joek81981
@joek81981 Жыл бұрын
I'm picky, and it's hard to find videos that reach a balance between "A cpu is the... "brain" of a computer..." versus "completely over my head, nonsense words to my ears". This finds that balance well. Doesn't assume I'm a complete doofus, nor have a Masters.
@hubstrangers3450
@hubstrangers3450 Жыл бұрын
When QC is on the horizon, why increase the carbon footprint with classical computers?
@timswartz4520
@timswartz4520 Жыл бұрын
You are way ahead of me. Thank You.
@AmstradExin
@AmstradExin Жыл бұрын
I'm die. Thank you forever.
@timswartz4520
@timswartz4520 Жыл бұрын
@@AmstradExin kzfaq.info/get/bejne/atKqhdd8u53Yeo0.html
@FueledbyJohn
@FueledbyJohn Жыл бұрын
I edited my comment with a link to an asml / imec article and it disappeared oh no. 😅 It would be most appreciated if you could drop some links / pdf's in the doobly-doo (YT video comment section) although not necessary. I should be able to find them easily enough. Excellent video as always, Ciao. 🙂
@AnastasiInTech
@AnastasiInTech Жыл бұрын
Sorry YT removes comments with links. I will add some to the description tomorrow
@iusegentoobtw
@iusegentoobtw Жыл бұрын
lmao based AvE reference
@thepetyo
@thepetyo Жыл бұрын
Nice to see you being this happy about the end of human kind.
@OneEyedMonkey9000
@OneEyedMonkey9000 Жыл бұрын
Reminds me of another video discussing analog computing, which I think is something else.
@peceed
@peceed Жыл бұрын
Neural computations are inherently classical due to the no-cloning theorem.
@cottsinc.6311
@cottsinc.6311 Жыл бұрын
But, how does it compare with what Tesla is already doing?
@ochibella9562
@ochibella9562 Жыл бұрын
Nice work 👌🏽
@Winteryears
@Winteryears Жыл бұрын
What a great voice.
@claritywindowcare8744
@claritywindowcare8744 Жыл бұрын
I have no clue what you're saying... please continue :)
@willowwisp357
@willowwisp357 Жыл бұрын
Is it focused on the backpropagation algorithms?
@karolkornik
@karolkornik Жыл бұрын
I like Your new makeup. Top notch content. Very interesting. Thanks for update on those chips. Best Regards.
@Simon-fr4ts
@Simon-fr4ts Жыл бұрын
Was this something about a cpu? I missed that bit.
@h3ctor1991
@h3ctor1991 Жыл бұрын
Nice explanation! thanks! you got me following you hahaha
@Maxid1
@Maxid1 Жыл бұрын
The W.O.P.R. !
@skyblinked
@skyblinked Жыл бұрын
What pc/laptop model/maker do you own?
@ubute
@ubute Жыл бұрын
Sweet
@gspaulsson
@gspaulsson Жыл бұрын
IBM's first stab at language translation rendered "the spirit is willing but the flesh is weak" in Russian as "the vodka is good but the meat has gone bad."
@tomcan48
@tomcan48 Жыл бұрын
*AI to SI, synthetic intelligence, the next step for the control of humans, if it is not already here...*
@eskileriksson4457
@eskileriksson4457 Жыл бұрын
How good (or bad) is the Tesla FSD chip, compared to this?
@pvic6959
@pvic6959 Жыл бұрын
I wonder how this is different from Google's Tensor Processing Unit.
@blakelapierre
@blakelapierre Жыл бұрын
how can one verify a given result?