RYZEN AI - AMD's bet on Artificial Intelligence

120,455 views

High Yield

Days ago

With Ryzen AI, AMD introduced the first artificial intelligence engine on an x86 chip. In this video we take a look at AI computing for consumers and the AMD Ryzen Phoenix APU that contains the first AI engine, and we talk about the future of AI in general, from hardware to software.
Follow me on Twitter: / highyieldyt
0:00 Intro
0:50 AI Engines explained (vs CPU & GPU)
4:03 AMD XDNA & Ryzen AI
6:13 AMD's AI focus & Phoenix
8:13 x86 AI Hardware & Software
11:10 Future AI use cases

Comments: 333
@siliconalleyelectronics187
@siliconalleyelectronics187 Жыл бұрын
The close business relationships between AMD, Microsoft, and OpenAI are starting to make a lot of sense.
@HighYield
@HighYield Жыл бұрын
Fully agree!
@rkalla
@rkalla Жыл бұрын
I didn't realize AMD and MSFT were super close - I missed that. Do you have any events or partnerships I could go look up to get context?
@GuinessOriginal
@GuinessOriginal Жыл бұрын
More like a cabal
@thesolidsnek8096
@thesolidsnek8096 Жыл бұрын
@@rkalla Does CES ring a bell? They publicly announced their love relationship just a few weeks ago.
@diamondlion47
@diamondlion47 Жыл бұрын
@@rkalla AMD is in Azure and has been in the Xbox for a few gens now.
@alirobe
@alirobe Жыл бұрын
Potentially easier analogy: a CPU core is like 4 math professors, a GPU core is like 1,000 promotional pocket calculators.
@Akveet
@Akveet Жыл бұрын
I hope there will be a common instruction set for matrix operations (which is what's inside all of these AI-branded coprocessors) so that developers could just use it without specializing for a specific hardware implementation.
@HighYield
@HighYield Жыл бұрын
That's super important, otherwise it won't take off. We don't need closed-source shenanigans.
@004307ec
@004307ec Жыл бұрын
Microsoft's take is DirectCompute.
@juliusfucik4011
@juliusfucik4011 Жыл бұрын
I don't think these instructions are needed, as matrix addition and multiplication are fairly generic. It suffices to have good libraries such as BLAS and IPP that make optimal use of the existing instruction set. Online training takes only a little computational power; it is the initial training that is expensive, and for that we have GPUs. The AI cores are only meant for running the network forward for inference, which means no feedback, gradient calculation or weight adaptation is needed. Fun fact: if you quantize a typical neural network from floating point to integer, you can get 30+ fps on a single core of a Raspberry Pi 4. Inference just isn't that expensive.
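As a rough illustration of the quantization point above, here is a minimal NumPy sketch of int8 inference for one dense layer. It is not tied to Ryzen AI or any specific library; the layer sizes and the symmetric scaling scheme are just assumptions for the example.

```python
import numpy as np

# Symmetric post-training quantization of one dense layer: store weights as int8,
# do the multiply-accumulate (MAC) in int32, then rescale the result back to float.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 784)).astype(np.float32)  # trained float weights (toy sizes)
x = rng.standard_normal(784).astype(np.float32)         # one input vector

w_scale = np.abs(W).max() / 127.0                        # map the float range onto int8
x_scale = np.abs(x).max() / 127.0
W_q = np.round(W / w_scale).astype(np.int8)
x_q = np.round(x / x_scale).astype(np.int8)

# Inference is just MACs: int8 * int8 accumulated in int32, followed by one rescale.
acc = W_q.astype(np.int32) @ x_q.astype(np.int32)
y_approx = acc.astype(np.float32) * (w_scale * x_scale)

y_exact = W @ x
print(np.max(np.abs(y_exact - y_approx)))  # quantization error stays small relative to the outputs
```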
@ttb1513
@ttb1513 Жыл бұрын
A library or SW layer is where matrix operations belong. And it needs to be optimized for the specific hardware implementation, including compute cores, cache sizes, DRAM sizes and bandwidth, and much more. Take a look at how very large matrix multiplies are done: they are not done in the simple way that would take N^3 multiplies and ignore the HUGE differences between the levels of the memory hierarchy. Standardization is helpful, but not at an abstraction level so low that it prevents optimizations.
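To make the blocking point concrete, here is a minimal sketch of a cache-blocked matrix multiply in plain NumPy. The block size is arbitrary; real BLAS kernels add more levels of tiling, packing and vectorization, so this is only meant to show the idea of reusing tiles that fit in cache.

```python
import numpy as np

def blocked_matmul(A, B, bs=64):
    """Multiply A (m x k) by B (k x n) in bs-sized tiles so each pair of tiles
    fits in cache and gets reused, instead of streaming whole rows and columns."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(0, m, bs):
        for j in range(0, n, bs):
            for p in range(0, k, bs):
                # one small tile-times-tile update; this is where the MACs happen
                C[i:i+bs, j:j+bs] += A[i:i+bs, p:p+bs] @ B[p:p+bs, j:j+bs]
    return C

A = np.random.rand(256, 512).astype(np.float32)
B = np.random.rand(512, 128).astype(np.float32)
assert np.allclose(blocked_matmul(A, B), A @ B, atol=1e-3)
```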
@mrrolandlawrence
@mrrolandlawrence 8 ай бұрын
Let's hope the Scalable Matrix Extension for ARM delivers. Coming in ARMv9-A. Something they should have added some years ago IMO.
@RealLifeTech187
@RealLifeTech187 Жыл бұрын
I definitely was considering a Phoenix APU before knowing about Ryzen AI, and my excitement only increased hearing this news. AI upscaling for video content is the thing I'm most excited about, because there are so many low-bitrate, low-resolution videos out there. The potential for conferencing is also huge, since webcams probably won't get any better (if the covid home-office years didn't get OEMs to improve their webcams, nothing will).
@juliusfucik4011
@juliusfucik4011 Жыл бұрын
But any video card that is less than 5 years old can already do this... why want it in the CPU as well?
@polystree_
@polystree_ Жыл бұрын
@@juliusfucik4011 Because most ultrabooks & office computers don't have any dGPUs? Also, running an "AI assistant" or any other AI task on a GPU is for sure not the most efficient way to do it on laptops. I think this product is part of the AMD and Microsoft cooperation: Microsoft wants to try AI-powered Windows on mobile devices (the Surface lineup), and AMD wants to try their AIE in real-life workloads before launching it in other segments where it would see little to no use.
@theultimatekehop
@theultimatekehop Жыл бұрын
Great video! First time here but I'm subbed. Loved the format and the info given. Well done!
@nicholassabai7284
@nicholassabai7284 Жыл бұрын
Manual rotoscoping in video editors would take from a few minutes to hours depending on the complexity of the scenes, and I was surprised to see an AI engine pull that off in seconds.
@kirby0louise
@kirby0louise Жыл бұрын
You've absolutely nailed it on the need for strong software support. I looked into it, and apparently it has its own special API/SDK required to utilize it. This is a big disappointment; they should allow it to plug into DirectML (this is how AI acceleration works on Xboxes, and it's great). By integrating it into existing APIs, AMD would have a large amount of support out of the gate and avoid further fracturing the programming ecosystem.
@RainbowDollyPng
@RainbowDollyPng Жыл бұрын
I mean, every specialized hardware implementation needs its own SDK, handling the specifics. That alone doesn't prevent it from plugging into DirectML.
@RainbowDollyPng
@RainbowDollyPng Жыл бұрын
@@leeloodog DirectML is a pretty high-level abstraction, and one that's Windows-exclusive at that. You don't build hardware directly to that standard; there is always going to be a low-level SDK that handles the hardware access. Now of course, it could be handled differently, and DirectML could be supported from the get-go. It's a shame they didn't do that, I agree.
@zhafranrama
@zhafranrama Жыл бұрын
One reason I can think of for why DirectML is not a focus for AMD is that it's not cross-platform and doesn't work on Linux. Why is this important? AI computation in enterprises is usually done on Linux, and enterprises are one of the biggest consumers of AI compute.
@dennisp8520
@dennisp8520 Жыл бұрын
There is irony in this too. I think AMD is making some of the same mistakes that Intel made whenever they got large and powerful. For now this stuff isn't gonna be very useful until all chip makers get on board working on a standard.
@RainbowDollyPng
@RainbowDollyPng Жыл бұрын
@@dennisp8520 Although from what I can tell, PyTorch, TensorFlow and ONNX are all supported by the Xilinx AI framework as frontends. So really, there is no huge need to support DirectML as middleware between the frontend frameworks and the hardware backend.
@cameronquick1157
@cameronquick1157 Жыл бұрын
Mindblowing stuff. Definitely convinces me to continue waiting for 7040 availability in thin and lights. Aside from all the potential applications, battery life improvements should also be significant.
@Bubu567
@Bubu567 Жыл бұрын
Basically, AI needs to multiply the weights set by a model across the whole network to figure out the best-fit output, but it doesn't need high precision, since it only needs to determine a rough estimate of its certainty.
@axe863
@axe863 Жыл бұрын
Everything can be approximated with randomized varying-depth ReLUs with proper regularization: standard sparse linear learners, no complicated solver needed. Algorithmic complexity is far, far more important than hardware power.
@Bianchi77
@Bianchi77 9 ай бұрын
Nice video shot, thanks for sharing with us, well done :)
@MostlyPennyCat
@MostlyPennyCat Жыл бұрын
My best idea for AI in games is AI vision and hearing systems for NPCs. At the moment in gaming, let's take a stealth game as an example: the enemies have vision and hearing cones, dumb pure-distance mechanics triggering a behaviour branch if the player is close enough or loud enough, usually augmented with simplistic rules based around crouching, movement speed limits, baked shadow regions and 'special grass'. Replace that with a quick and dirty low-resolution rendering of what the NPC is looking at, using the GPU, and run that image through a trained neural network. Suddenly this opens up the possibility of real effects from movement, lighting and camouflage. Literal camouflage: you're trying to fool the pattern-matching algorithm in the machine in exactly the same way we try to fool the pattern-matching algorithm situated between every human's ears 👉🧠 Same with audio: you render the sound at where the NPC is, run it through another NN, and see if it meets a threshold to trigger the NPC's AI behaviour branch (too many things named AI). The game design trick is feeding back to the player the level of danger they are in without hokey constructs like the 'interest/danger' markers in games like Far Cry.
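A rough sketch of how that NPC "vision" loop could look, purely illustrative Python: the renderer, the classifier and the threshold are all hypothetical stand-ins, not any real engine or Ryzen AI API.

```python
import numpy as np

DETECTION_THRESHOLD = 0.7  # hypothetical confidence needed before the NPC reacts

def fake_npc_view(width=64, height=64):
    """Stand-in for a cheap low-res render from the NPC's eyes; in a real engine
    this frame would reflect lighting, occlusion and the player's camouflage."""
    return np.random.rand(height, width, 3).astype(np.float32)

def fake_vision_model(frame):
    """Stand-in for a small trained classifier running on an AI engine;
    returns the model's confidence that the player is visible in the frame."""
    return float(frame.mean())  # placeholder score, not a real network

def npc_can_see_player():
    frame = fake_npc_view()                # render what the NPC would actually see
    confidence = fake_vision_model(frame)  # ask the network instead of a vision cone
    return confidence > DETECTION_THRESHOLD

if npc_can_see_player():
    print("NPC enters alert state")
else:
    print("NPC stays idle")
```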
@christheswiss390
@christheswiss390 Жыл бұрын
A highly interesting and insightful video! Thank you.
@kaemmili4590
@kaemmili4590 Жыл бұрын
Quality content, thank you so much. Would love to see more of you, especially on new chip paradigms, on the research side of things.
@thebritishindian1
@thebritishindian1 Жыл бұрын
Great video, very informative. These are the kinds of videos I use to educate myself on the future of computing. Coreteks is a great channel, but his niche is mainly the future of gaming and graphics, which is less relevant to what I need to know about.
@HazzyDevil
@HazzyDevil Жыл бұрын
Always happy to see a new upload. Thanks for covering this! As a gamer, I'm excited to see how good AI upscaling becomes. DLSS 2/3 has already shown a lot of promise; now I'm just waiting for AMD to release their version.
@adamrak7560
@adamrak7560 Жыл бұрын
I am more excited about neural rendering (neural radiance fields), it is not real-time on current hardware, but with the right dedicated hardware it will be soon.
@danimatzevogelheim6913
@danimatzevogelheim6913 Жыл бұрын
Another top video! Again, a high-quality video!
@fakshen1973
@fakshen1973 Жыл бұрын
GPU-style parallel processors are very nice-to-haves for digital artists such as musicians, video editors, and animators.
@gamingscreen12
@gamingscreen12 Жыл бұрын
great vid as always, thanks
@RM-el3gw
@RM-el3gw Жыл бұрын
ah yes, the APU that I've been waiting for. Not yet out there but looking very promising. Any idea of when it will be out? Also, are those AI cores also supposed to be used for something like FSR, such as in the way that Nvidia uses AI cores in its GPUs to sharpen and upscale stuff? Thanks and cheers.
@teapouter6109
@teapouter6109 Жыл бұрын
If AI cores are to be used for FSR, then FSR will not work on the vast number of GPUs that it currently works on. I do not think AMD would go in that direction for the time being.
@HighYield
@HighYield Жыл бұрын
The RDNA3 cores come with their own smaller AI cores, which are used for FSR, and FSR in general doesn't even need AI acceleration IIRC; that's why it also runs on older GPUs. Phoenix should be out in late Q1, but thinking back to Rembrandt last year, it might take AMD longer. Let's hope the rollout will happen faster this time!
@fleurdewin7958
@fleurdewin7958 Жыл бұрын
AMD's APU for notebooks, Phoenix Point, will arrive in March 2023. It was announced by AMD at CES 2023.
@zdenkakoren6660
@zdenkakoren6660 Жыл бұрын
AI will just learn what the best and fastest way is to make use of the GPU or CPU, and it doesn't even have to send data to AMD or Nvidia. It is a baby learning machine; it may work or not. GCN 5 had primitive shaders that never got used. Radeon had tessellation way back in the ATI Radeon 8500 in 2001 and it was not used. Nvidia PhysX was short-lived. The 4870X2 had two GPUs with a PLX chip in between that was never really used. Intel had AVX-512 in its CPUs and now it's removed, while AMD only has it now in the 7000 series xD. Nvidia's RTX 2000 series had AI and it learned how to better use DLSS and optimize drivers, but AMD has stronger hardware, so this will help the driver team a lot. AI will need something like 3-4 years to make proper use of it, IF it works like people think.
@claymorexl
@claymorexl Жыл бұрын
I feel like, outside of notebooks and mobile computing, by the time specialized hardware is preferable for handling AI tasks, discrete accelerator cards will be the market standard. Either that, or GPUs will market AI accelerators on their boards and make use of the insane bandwidth PCIe 5 gives them. Integrated AI cores will be more or less like integrated graphics in future x86/PC applications.
@redsnow846
@redsnow846 Жыл бұрын
Nvidia already sells GPUs that do this.
@azurehydra
@azurehydra Жыл бұрын
AI is the future. Even now in its infancy it helps me a bunch. If it became 100x better at assisting me, damn, it'll do all my work for me.
@matthewstewart7077
@matthewstewart7077 7 ай бұрын
Thank you and a great overview. I'm getting into AI machine learning and am hoping to utilize this new feature for training models. Do you have any resources on how to utilize Ryzen AI for machine learning model training?
@electrodacus
@electrodacus Жыл бұрын
This is what I'm waiting for. Hope it will be available in some mini PC form. Also hope there will be an API available for XDNA in Linux.
@HighYield
@HighYield Жыл бұрын
With how well AMD is doing with their GPU drivers on Linux, I think there's a good chance.
@VideogamesAsArt
@VideogamesAsArt 5 ай бұрын
Sadly, a year later, Linux support is still missing. Did you get a mini PC though? I have a Framework 13 with Phoenix myself, although not for the AI engine but more for the battery life and incredible efficiency.
@electrodacus
@electrodacus 5 ай бұрын
@@VideogamesAsArt I did not get one, as I was busy with other things, and since there is no support for XDNA I will probably wait for XDNA2. There has not been as much progress as I would have hoped. I still use an i7-3770, so over a decade old.
@leorickpccenter
@leorickpccenter 7 ай бұрын
Microsoft will be requiring this and needs at least 40-50 TOPS of performance for a smooth AI experience with Windows 11, presumably with the upcoming Copilot.
@DeadCatX2
@DeadCatX2 7 ай бұрын
As an FPGA engineer I've used DSP cores to accelerate certain algorithms in hardware and upon hearing that MAC is the basis of AI I pictured the Leonardo DiCaprio pointing meme, where DSP cores are pointing at AI cores
@erlienfrommars
@erlienfrommars Жыл бұрын
Windows 11 getting its own software-based AI engine to complement these AI hardware accelerators, one that can improve audio, video and telecommunications, would be amazing, and about time, as Apple has been doing this for years since they moved to M-series Macs.
@HighYield
@HighYield Жыл бұрын
I'm sure Microsoft is already hard at work.
@craneology
@craneology Жыл бұрын
The more to spy on you with.
@kekkodance
@kekkodance Жыл бұрын
​@@craneologysame
@oliviertremois1500
@oliviertremois1500 Жыл бұрын
Very good video. Most of the information on XDNA is accurate, I mean not overestimated!!! All the animations of the AI Engine are really nice compared to my poor PPTs!
@bev8200
@bev8200 Жыл бұрын
Just looking at AI art, this makes me super optimistic about the gaming industry. The environments that AI will create will be incredible.
@MWcrazyhorse
@MWcrazyhorse Жыл бұрын
Or they will be hellish. ah fuck it what could go wrong? Let's gooooo!!!
@Yusufyusuf-lh3dw
@Yusufyusuf-lh3dw Жыл бұрын
AI engines and dedicated AI capabilities are already available on Apple and Intel CPUs. Apple has dedicated IP for AI offloading, and Intel has TMUL instructions in Alder Lake for AI operations. It's just a matter of which one has more application support and which one is more effective in terms of performance and power consumption. Secondly, as you said, Meteor Lake has a dedicated AI engine on the CPU and Raptor Lake has onboard AI IP.
@jktech2117
@jktech2117 Жыл бұрын
I wonder if it will help with game resolution upscaling and frame interpolation.
@Silent1Majority
@Silent1Majority Жыл бұрын
This was GREAT!!
@first-thoughtgiver-of-will2456
@first-thoughtgiver-of-will2456 Жыл бұрын
What AMD needs to do is innovate further on their cache chiplet design and SoC Infinity Fabric IP to form a VRAM-like cache for these DSPs. This is just another AVX extension or Snapdragon DSP equivalent (still awesome to see), but AMD is positioned to fix the real problem with machine learning models, which is the memory hierarchy. CPUs are surprisingly powerful compared to GPUs; it's the memory locality that really makes GPUs outperform CPUs by so much, due to cache misses in parameter space. Throw an L4 equivalent on the outside of the CCX chiplet and extend the ISA for AVX (also throw bfloat16 in there, please).
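On the bfloat16 wish: a minimal sketch of what the format actually is (float32 with the low 16 mantissa bits dropped), done here in NumPy by simple bit truncation. This is only an illustration of the bit layout; real hardware and libraries typically round rather than truncate.

```python
import numpy as np

def float32_to_bfloat16_bits(x):
    """Truncate float32 values to bfloat16 by keeping only the top 16 bits
    (sign, 8 exponent bits, 7 mantissa bits). Returned as raw uint16 bit patterns."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)

def bfloat16_bits_to_float32(b):
    """Expand the 16-bit pattern back to float32 by zero-filling the low mantissa bits."""
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([3.14159, -0.001, 12345.678], dtype=np.float32)
bf = float32_to_bfloat16_bits(x)
print(bfloat16_bits_to_float32(bf))  # same dynamic range as float32, roughly 2-3 decimal digits of precision
```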
@BrandonMeyer1641
@BrandonMeyer1641 Жыл бұрын
Man, I wish I had something like this when I was taking a class on AI last year. Some code would take several minutes to run; this probably would have cut that down a bit. If compilers can take advantage of such features on the silicon automatically, it will have huge implications for students. Additionally, once AI cores are common in most laptop chips, universities can adjust curriculums to teach CS students how to leverage them before they graduate.
@jrvgameplaytrailers8527
@jrvgameplaytrailers8527 Жыл бұрын
Very helpful 👍
@markvietti
@markvietti Жыл бұрын
what a great channel. i am so glad I found it..
@HighYield
@HighYield Жыл бұрын
Same :D
@zajlord2930
@zajlord2930 Жыл бұрын
Sorry if you already mentioned this, but are the AI cores only for AMD to use for FSR or something, or are they something users can use for machine learning? And how do they compare to a GPU? Like, can I do as much with this as on a GPU, or how much better or worse is it? Again, if you already mentioned this I'm really sorry, but I'm too tired to rewatch it again today.
@HighYield
@HighYield Жыл бұрын
It’s not meant for FSR, those cores are inside the GPU. In theory you should be able to use it for machine learning code.
@RM-el3gw
@RM-el3gw Жыл бұрын
@@HighYield ah crap just asked this question haha.
@kirby0louise
@kirby0louise Жыл бұрын
CPUs, GPUs and the AI Engine are all Turing complete, so technically they can all execute the same tasks, provided they are programmed for the respective processor. What differs is the speed at which they can do certain tasks. Linear, logic-heavy code will perform best on CPUs. General-purpose parallel number crunching will be best on GPUs. Specialized parallel matrix math will perform best on the AI Engine. Comparing it to the rest of the Phoenix APU (Ryzen 7 variant), the integrated 780M provides up to 8.9 TFLOPS of FP32/17.8 TFLOPS FP16 (possibly 35.6 TOPS Int8/71.3 TOPS Int4? The ISA manual states support for Int8/Int4 matrix math but not packed acceleration of it. I would assume this is carried over from the Xboxes, but I can't be totally sure). The AI Engine hits 12 TOPS (unspecified, assuming Int4). While it might sound like this makes the AI Engine pointless, the real story is in the perf/watt. The AI Engine, according to AMD, has power usage measured in milliwatts, while the 780M could easily pull 20W+. Thus, the AI Engine is great for ultrabooks that cannot afford to be blasting the GPU like that.
@joehorecny7835
@joehorecny7835 Жыл бұрын
Great analogy using the cooking! AI is here to stay, and this is only the beginning. There will be more AI in the future, behind the scenes; you won't even know it's there, but it will make tasks easier and better. Of course I'll get a Phoenix APU when they are released; the excitement is on the edge, not in the back row.
@quinton1630
@quinton1630 Жыл бұрын
7:39-7:41 audio blip from the audio editing on “transistor”
@HighYield
@HighYield Жыл бұрын
I'm pretty sure I just had a horrible microphone pop at this point and tried to remove it; the result is a few missing frames and the audio blip. Why are you paying so much attention? Can't even make my mistakes in peace ;) Good catch tho!
@quinton1630
@quinton1630 Жыл бұрын
@@HighYield I’m a professor who pre-records some lessons, so I’m all too familiar with replaying 1 second of audio a dozen times to fix pops, blips and doots :P Great video by the way, kudos on being informative and entertaining!
@randomsam83
@randomsam83 Жыл бұрын
Dude your accent is perfect for explaining technical stuff. Consider using a German word once in a while to make it perfect. Great work !
@em0jr
@em0jr Жыл бұрын
The ol' math co-processor is back!
@mapp0v0
@mapp0v0 Жыл бұрын
What are your thoughts on BrainChip's Akida?
@DUKE_of_RAMBLE
@DUKE_of_RAMBLE Жыл бұрын
Mmmm... 🤤 That notion of game enemies leveraging the AI Engine is nifty! I don't know exactly how improved it would be over the current means, which have already had "learning" abilities, albeit minimal and session-based. If the new one could store complex info and re-use it on the next game load, that'd be great. (Although this probably falls under "machine learning", not "general AI" 😕)
@alexx7643
@alexx7643 Жыл бұрын
You should take a look at Alethea AI. They are introducing CharacterGPT. We can create interactive AI characters by simply entering some text. Also they are working on the ownership of AI generative content.
@Phil-D83
@Phil-D83 Жыл бұрын
Going to need some new vector extensions to accelerate AI-type workloads on regular CPUs.
@Chalisque
@Chalisque Жыл бұрын
If it only runs on AMD's APUs, then it will only run on a fraction of PCs, making for a small target market for software developers. It makes sense to add it to their desktop Ryzens too, and possibly their discrete GPUs, possibly even separate PCIe cards with just XDNA on them (market dominance will require the tech to be available to PC users with Intel CPUs and Nvidia GPUs). But I can't see how the market will embrace this technology if it is only available on AMD's APUs.
@EthelbertCoyote
@EthelbertCoyote Жыл бұрын
An AI engine coupled with a small FPGA on-chip could cover a lot of inefficient tasks that would otherwise burden a GPU's or CPU's main task set, correct?
@HighYield
@HighYield Жыл бұрын
Yes, I’d say so too
@giu_spataro
@giu_spataro Жыл бұрын
Are these types of engines more like a Coral or Jetson Nano, used only for inference, or can they also be used efficiently for training?
@HighYield
@HighYield Жыл бұрын
I guess it's mostly inference, but Xilinx AI can do both: www.xilinx.com/applications/ai-inference/difference-between-deep-learning-training-and-inference.html
@Gorion103
@Gorion103 Жыл бұрын
Why there is "bleeding edge" instead of leading at 0:14?
@HighYield
@HighYield Жыл бұрын
en.wikipedia.org/wiki/Bleeding_Edge
@petershaw1048
@petershaw1048 Жыл бұрын
This week I built an AMD system based on the X670E-Pro chipset (PCIe 5) with an 8-core processor. When they come out, I will drop in a CPU with Ryzen AI ...
@e.l809
@e.l809 Жыл бұрын
AMD should work to make it compatible with the ONNX format (by Microsoft); it's open source and supports a lot of hardware. It's the beginning of a "standard" for this industry.
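For context, this is roughly what targeting ONNX looks like from the application side with ONNX Runtime: only the execution-provider list is hardware-specific. The provider name used here for the XDNA/Ryzen AI path is an assumption, as is the "model.onnx" file; the CPU provider is the fallback.

```python
import numpy as np
import onnxruntime as ort

# Prefer a dedicated accelerator if its execution provider is registered, else fall back to CPU.
# "VitisAIExecutionProvider" is the assumed name for the XDNA/Ryzen AI path.
preferred = ["VitisAIExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available] or ["CPUExecutionProvider"]

session = ort.InferenceSession("model.onnx", providers=providers)  # exported ONNX model (assumed)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # input shape depends on the model
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```

The application code stays the same regardless of which provider ends up running the model, which is exactly the "standard" appeal the comment describes.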
@rolyantrauts2304
@rolyantrauts2304 Жыл бұрын
It's a matter of cost: if the 7040 undercuts what is likely to be the M3, then there is a huge new arena of local voice-activated LLMs, where a single server can service a reasonable number of zones for most homes. The home HAL is on the way, and that is the 2001 type, not the abstraction layer.
@yuan.pingchen3056
@yuan.pingchen3056 10 ай бұрын
There are still no benchmark results for 'Ryzen AI' as of this moment.
@yuan.pingchen3056
@yuan.pingchen3056 10 ай бұрын
@@blue-lu3iz So you mean it's none of NDA's business?
@theminer49erz
@theminer49erz Жыл бұрын
I'm actually very happy to see this. I have been expecting something similar for a while, although I was envisioning it being more like those "physics" cards from the early 2000s. It seems that ever since the Xbox 360/PS3 era began, in-game AI (NPCs, enemies, wildlife, etc.) has basically been an afterthought, IF that. I believe it's because by the time those consoles were released they were already rather outdated compared to PC capabilities, and they have been trying to keep up (and failing) ever since. That means when studios make games, even if they will be primarily for PC, they can't get too "fancy", or else the difference between the PC version and the console versions would be too great and point out how bad the system is. I doubt they would get a license to sell such a bad port, and without console licenses they will not get budget. So in order to maintain the illusion of graphical improvements over time, things like AI and view distance were left on the cutting room floor.
Think about A-Life in the STALKER games, which allows for AI-based enemy tactics, wildlife, and NPC interactions. It makes for a much more realistic, immersion-heavy game experience that is almost always different, since even when you are not around or on the map, the NPCs do their own thing. Also think about the first FEAR game: the enemy tactics were amazing and felt real, but graphical quality was compromised to do so (it was worth it). Anyway, my point is that I hope this is used for such things moving forward. I know publishers would almost never approve lowering graphical quality just for better AI, since their market research says "graphics are the most important part of a game" (aka asking random people who have no idea what makes a game good, "what makes a game good?"). However, if this hardware becomes more common, they wouldn't have to make that trade-off.
Lastly, I believe that MS and Sony only have maybe one more "next gen" console in them before the price/performance of a PC surpasses what they can make and sell. They already take a bigger and bigger loss on system sales each cycle and rely on licensing to make up for it. However, since they seem to use next-gen APUs now, if they can get, say, an AMD APU with RDNA3/4 and "AI cores", games may start making use of in-game AI again, since it won't have to be a trade-off and can be applied to console titles. They could also allow things like DLSS-type AI upscaling to be taken off the GPU and given to the CPU, perhaps. I see APUs being the main go-to in the future. AMD has the head start, and chiplet stacking/3D cache can make them extremely powerful. I also see dedicated APU motherboards that have both system RAM and VRAM slots, which would allow for more upgrade paths and less waste. Yes, a mobo will cost like $300+, but you won't need the whole GPU PCB, and there could even be some performance gains by having all of that on the mobo instead of having it all go through PCIe slots.
Anyway, this is good news I think! There is also really promising potential for other things, but that is a secret, as I am currently working on something that would benefit greatly from such a thing. It would also be nice to have offline home assistant/automation computing more readily available to more people, instead of having everything that happens in their home get sent to an Amazon server to be analyzed and archived just so it can play a song when you ask it to.
This is possible now if you have a server with a GPU (like me), but it's not supported very well as far as software choices go. If it wasn't such a weird thing to set up, I'm sure there would be many more options. I will conclude here, thanks for the video!! I'm looking forward to hearing more on the new "Zen 4D" and how RDNA3 is evolving. I'm not keeping up with most of the main outlets because they are getting annoying, so I'm counting on you to keep me up to date! :-D Have a great weekend!!
@HighYield
@HighYield Жыл бұрын
I remember "PhysX" very well. At some point Nvidia thought everyone would have a dedicated physics card in the future. But unlike Nvidias proprietary API, I'm sure AI engines will make their way into most computer chips eventually.
@theencore398
@theencore398 Жыл бұрын
RX-DNA is something I can see happening in the near future, imo this is too good of an opportunity to miss for naming a GPU
@6SoulHunter9
@6SoulHunter9 Жыл бұрын
I am about to have lunch and that sandwich looked so tasty that it distracted me from the topic of the video xD
@briancase6180
@briancase6180 Жыл бұрын
Sheesh, it's about time. Apple and Google have had "neural engines" for years. Apple's new M-series SoCs also have good AI accelerator blocks.
@janivainola
@janivainola Жыл бұрын
Would be very cool for RPG games where the plot is set up, but the AI follows the actions and style of the player to update the story during play, making each playthrough unique...
@jitterrypokery1526
@jitterrypokery1526 Жыл бұрын
Any news about Apple's rumored M2 Pro and M2 Max refresh?
@HighYield
@HighYield Жыл бұрын
M2 Pro & Max have the exact same 16-core NPU as the base M2 model.
@MostlyPennyCat
@MostlyPennyCat Жыл бұрын
Interesting, I see that the AI cores are VLIW. If that's VLIW like Itanium, as opposed to VLIW like ATI's old GPU instruction set, it's fascinating. Could VLIW work in the limited context of machine learning inference? Will it be compiled or hand-written? Yes, interesting indeed.
@samporterbridges2766
@samporterbridges2766 Жыл бұрын
i'm pretty sure that it would be compiled
@Alauz
@Alauz Жыл бұрын
Would be nice to have A.I. accelerated graphics replacing traditional raster tech in the near future. Maybe full Path-Traced graphics with A.I. accelerators can make huge GPUs unnecessary and we can simply use APUs and re-shrink the ultimate gaming machines to the size of watches.
@OnyxLee
@OnyxLee Жыл бұрын
Is it going to replace tensor cores?
@HighYield
@HighYield Жыл бұрын
No, it's basically something similar, not a replacement.
@flytie3861
@flytie3861 Жыл бұрын
What about ai in smartphone chips?
@hellraserfleshlight
@hellraserfleshlight Жыл бұрын
The question is, with local AI processing, will applications stop sending PII to the cloud to be processed and cataloged, improving user privacy, or will it just save Google, Facebook, Amazon, Microsoft, and others money on processing data, letting them harvest more "polished" PII?
@MostlyPennyCat
@MostlyPennyCat Жыл бұрын
I wonder if AMD's HSA (Heterogenous System Architecture) can rise from the grave now. Seems the perfect fit for adding AI inference to your code?
@JKTPila
@JKTPila Жыл бұрын
Is this the AMD 7040 series?
@HighYield
@HighYield Жыл бұрын
Correct, Phoenix is the Ryzen Mobile 7040 series.
@ShyFx8
@ShyFx8 Жыл бұрын
Interesting. This with artificial intelligence sounds nice and great, if it is programmed and used correctly. Can make cpu/gpu more efficient. Clearly this is part of the "internet of things" where everything is connected. But not many people think that artificial intelligence is actually fallen angel technology.
@miguelpereira9859
@miguelpereira9859 Жыл бұрын
LOL
@AndrewMellor-darkphoton
@AndrewMellor-darkphoton Жыл бұрын
I have a feeling this is gonna be obsolete in five years when they come up with non-von-Neumann AI. The linear algebra accelerators in CPUs and GPUs are still pretty competitive, because I don't think programmers want to work with an ASIC, or they might need a more complex algorithm.
@TerraWare
@TerraWare Жыл бұрын
We need AI shader compilation to get rid of stutters.
@SirMo
@SirMo Жыл бұрын
That's a developer issue. Not really something you can fix in hardware. Some games do it correctly.
@woolfel
@woolfel Жыл бұрын
I spent the weekend benchmarking the Apple M2 Max and its newer ANE. For DenseNet-121, it can do over 700 FPS versus 100 FPS on the GPU. It's taken AMD far too long to add tensor processors.
@mtunayucer
@mtunayucer Жыл бұрын
Notably, the Apple A11 neural engine was never used outside of Face ID; Apple made the neural engine effectively public starting with the A12.
@wawaweewa9159
@wawaweewa9159 Жыл бұрын
Apple is gay
@HighYield
@HighYield Жыл бұрын
Really, only Face ID? Didn't know that, but it kinda makes sense.
@mtunayucer
@mtunayucer Жыл бұрын
@@HighYield It's more like only Apple could use the A11 NPU; Animoji also used it, I just looked it up.
@dakrawnik4208
@dakrawnik4208 Жыл бұрын
Cool, but what's the killer app??
@samghost13
@samghost13 Жыл бұрын
You didn't understand it if you're asking that.
@6XCcustom
@6XCcustom 6 ай бұрын
The extremely rapid AI development, in the form of both software and hardware, implies that the hardware must now be replaced much faster.
@jmtradbr
@jmtradbr Жыл бұрын
Nvidia has had this for several years already. AMD needed it.
@NaumRusomarov
@NaumRusomarov Жыл бұрын
I wonder how this is going to be exposed to the OS and software. I'd like them to make this configurable through the compilers so that devs could use the AI cores if available.
@HighYield
@HighYield Жыл бұрын
I also hope they will provide open APIs.
@NaumRusomarov
@NaumRusomarov Жыл бұрын
@@HighYield that would be spectacular! :-)
@SundaraRamanR
@SundaraRamanR Жыл бұрын
@@HighYield it's AMD, so they probably will
@granand
@granand Жыл бұрын
Okay, should I be buying an Apple M2 mini or laptop, or a Windows-based Intel i7 or Ryzen? I am a developer and need lots of RAM, CPU, DDR memory and the ability to work with databases and programming IDEs like Visual Studio and Anaconda.
@HighYield
@HighYield Жыл бұрын
Honestly, that's hard to say. If you need lots of RAM, building a system yourself can be much cheaper. Would you rather work on macOS, Windows or Linux? So many questions.
@granand
@granand Жыл бұрын
@@HighYield Thank you for the response. I work with a lot of SQL databases, Visual Studio, Anaconda and stuff like that, GIMP sometimes; not into gaming. Importantly, I am on contract and move around the country, so I can only carry something portable. I am fine with an M2 mini or other compact desktops, as I can buy and discard cheap used monitors. Yes, I am conversant with Linux and Windows, but most of my work is based on Microsoft.
@SirMo
@SirMo Жыл бұрын
I'm a developer who used to use Macs, but x86 is still the king. Give Pop!_OS a try; it runs amazingly on AMD's hardware. You get the best of both worlds: an OS as productive as macOS and the ability to choose from the vast array of hardware available on PC that can keep any power user happy. I made the switch a few years back and I'm never going back to Mac.
@Endangereds
@Endangereds Жыл бұрын
As we can increase system RAM, if this tech is harnessed well, and if it can compete and give outputs similar to RTX 4090 cards, "in terms of only AI", that would be great.
@Le4end
@Le4end 10 ай бұрын
It definitely won't have 4090 fluidity. Maybe 3080
@pichan8841
@pichan8841 Жыл бұрын
Is that a grandfather clock running off camera?
@HighYield
@HighYield Жыл бұрын
You hear a ticking sound?
@pichan8841
@pichan8841 Жыл бұрын
@@HighYield Actually, I do. Not constantly, though...Am I hearing things? e. g. 2min40 - 2min49, 3min03 - 3min13 or 3min43 - 4min11...Grandfather clock!
@icweener1636
@icweener1636 Жыл бұрын
I want an AI that will help me get rid of Noisy neighbors
@user-qr4jf4tv2x
@user-qr4jf4tv2x Жыл бұрын
can't wait for my cpu to have existential crisis
@HighYield
@HighYield Жыл бұрын
So you are saying it can run crysis?!
@RaidenKaiser
@RaidenKaiser Жыл бұрын
I am concerned that if every tech company gets in on AI and it all backfires what the fallout will be and how they will try to make consumers pay for it to bail them out.
@larsb4572
@larsb4572 Жыл бұрын
A big use will be night-to-light: perfect sunny days even in pitch black. What is black to the human eye is just nuances of dark to the AI with a decent optic, and as such, you could implement it in the windshield and side windows of your car so that at night you get 240 fps of AI sunshine at 1 am during pitch-black driving. On the phone too: just hold up your phone or put it in a headset to see around you while underground, or outdoors while it's dark.
@Powerman293
@Powerman293 Жыл бұрын
Didn't Intel Ice Lake have AI accelerators? I may be mistaken though.
@HighYield
@HighYield Жыл бұрын
IIRC, Ice Lake had AVX-512 and specific Deep Learning libraries to speed up AI workloads (called "DL Boost"), but not dedicated AI hardware.
@heinzbongwasser2715
@heinzbongwasser2715 Жыл бұрын
Nice
@pedro.alcatra
@pedro.alcatra Жыл бұрын
That's not a big deal for 90% of home users, for sure. But wait till they make a partnership with Unreal and we start seeing it in NPCs or something like that lol
@wilmarkjohnatty4924
@wilmarkjohnatty4924 Жыл бұрын
I'd like to understand how the strategies deployed by AMD will compare to NVDA, and maybe Broadcom with RISC/ARM, and is this why NVDA tried to buy ARM? There is a hell of a lot of hype about NVDA; are they likely to live up to it? What will AI do to the already seemingly dying INTC?
@stevenwest1494
@stevenwest1494 Жыл бұрын
Interesting, but it'll need to be sold to the average PC user as something they need. That will need to be built into the Windows 11 scheduler, which for some reason is having problems with Zen 4 cores across 2 CCDs and with Intel's big.LITTLE design. Understandably so with two different core designs for Intel, but CCDs have been an AMD standard for generations now. Also, Zen 5 will use a big Zen 4+ core and a smaller Zen 5 little core. It'll be another generation, Zen 6 at the earliest, before we see AI in desktop CPUs from AMD.
@terjeoseberg990
@terjeoseberg990 Жыл бұрын
What’s the difference between this and MMX?
@HighYield
@HighYield Жыл бұрын
Do you mean Intel's XMX on their Arc GPUs? If so, that's very similar to AMD's XDNA engine, both are dedicated AI-Engines that accelerate the most common ML calculations.
@terjeoseberg990
@terjeoseberg990 Жыл бұрын
@@HighYield, No. I mean the ancient MMX from back in the olden days. Isn’t MMX great for performing high speed matrix operations? What’s the difference between MMX and XMX?
@gstormcz
@gstormcz Жыл бұрын
There is always some hype, and it's not always bad. I think it depends on both demand and chip design possibilities. I just don't know whether those AI cores aren't the same kind of thing that was otherwise listed in chip specs as AVX, MMX and other CPU extensions, if I have that right. AMD calling on software devs to make good use of those AI areas of the chip really looks like another attempt to make use of raytracing or other special cores. It's AMD's business to sell what they make. I remember the Intel Pentium MMX having some extensions, claiming better gaming performance, but when you went to an AMD chip one gen later, you usually found those extensions there too, sometimes with either better pricing or simply higher raw performance than Intel. "I just hope my computer won't watch my every single action one day in the future, giving me advice to live better, faster and more efficiently and to do more things at once, asking only to be plugged into the grid and have a cooler slapped on its head. Turn me off if I become idle but consume too much." (xd) When I played chess against the computer, it was tough at medium difficulty on PCs from the 8086 up to the 486. 3D-shooter bots in Quake 1 or 3 were beatable a bit above that, being quite fast, dexterous and accurate. Playing vs bots in World of Warships seems quite easy most of the time; they sometimes get stuck on islands (no complaint to the devs) and are mostly not that devastating as gunners, but they already change course, speed, etc. It is still programmed; having bots with performance equal to a human is not the goal of the co-op mode there (IMHO), as many players prefer a relaxed, easier game than PvP. But I can imagine AI making a fair bot enemy, either matching or surpassing the player's skills, and teaching what to improve either passively or with guidance/AI tips. Finally, AI could just teach car drivers how to drive well without complete automation. I personally don't want AI driving my life; some Google results on a casual search are enough.
@HighYield
@HighYield Жыл бұрын
I think its important to make the AI engine accessible and with time, real use cases will appear.
@flaviusradac4602
@flaviusradac4602 Жыл бұрын
Basically, AI is not well suited to the Windows 10/11 OS, but if Windows 12 comes with AI software integration for the AIE processors, we will see a huge revolution in processing data and information. When playing games, your computer will know right away what resolution and quality to play the game at before installing. YouTube will know the perfect resolution and internet speed for your videos and additional content. In Microsoft Office, Excel could make predictions about your data input, and Word could correct your spelling. And if Microsoft acquires ChatGPT, plus software integration with the AIE processors, then it will be a feast.
@zbigniew2628
@zbigniew2628 Жыл бұрын
Hah, most of this is easily done without AI. It's just not worth implementing now, because it needs a few seconds of thought from the user. Some people are braindead enough already, thanks to other apps and time- or focus-eaters... So you don't need AI to make them even more shallow.
@yogiwp_
@yogiwp_ Жыл бұрын
Why don't we get this accelerator on desktop chips?
@HighYield
@HighYield Жыл бұрын
Because Phoenix is just the first step, and since AI can save battery life, it's more useful on mobile devices. But I'm sure we will get AI engines on desktop CPUs in the future.
@SirMo
@SirMo Жыл бұрын
Because you aren't as concerned with battery life on a desktop PC, so using brute force approach works well enough. Though I'm sure we will see this on desktop PC's at some point.
@procedupixel213
@procedupixel213 Жыл бұрын
Yeah, AI hardware is definitely mass producing fast food. Or even junk food, when you consider that precision can get as low as just two bits per coefficient.
@HighYield
@HighYield Жыл бұрын
I honestly think my analogy isn't that far off :D
@IARRCSim
@IARRCSim Жыл бұрын
Are AI processors going to be programmed with a specialized programming language, like GLSL or OpenCL for GPUs? I hope they get standardized soon so software can take advantage of the hardware even if it comes from various different APU or AI hardware producers.
@MK-xc9to
@MK-xc9to Жыл бұрын
It seems Meteor Lake is delayed again and may even be scrapped (at least for desktop) due to the lack of high CPU frequency; instead there may be another Raptor Lake refresh. Raptor Lake itself wasn't planned and is only a refresh of Alder Lake. Maybe we will see Meteor Lake on mobile in 2023, but that depends on "Intel 4", which still has some issues but may be good enough for mobile.
@HighYield
@HighYield Жыл бұрын
Yes, Meteor Lake is hanging in the ropes right now, but I still think we might see a mobile version this year.
@ps3301
@ps3301 Жыл бұрын
AMD must adopt the SoC design.
@alexamderhamiltom5238
@alexamderhamiltom5238 Жыл бұрын
My heart broke when I saw that DSP; I shouldn't have upgraded so soon.
@HighYield
@HighYield Жыл бұрын
Why did your heart break? :(
@alexamderhamiltom5238
@alexamderhamiltom5238 Жыл бұрын
@@HighYield Because a DSP is what I really needed back then. Processing digital signals invites delay no matter how strong the raw performance; with a DSP that delay would be decreased significantly.
@HighYield
@HighYield Жыл бұрын
Ah now it makes sense.
@Z0o0L
@Z0o0L Жыл бұрын
Maybe they can use this AI for price-finding that isn't ridiculous for the 7000-series CPUs...
@SmartK8
@SmartK8 Жыл бұрын
I want my CPU, GPU, APU, and QPU (Quantum Processing Unit).
@HablaConOwens
@HablaConOwens Жыл бұрын
Wish AMD would get into ARM chips. We need a new mobile OS for phones that can do HDMI out for a desktop mode.
@HighYield
@HighYield Жыл бұрын
AMD did work on an ARM design, dubbed "K12", but it was cancelled shortly before the launch of Zen.
@ronosmo
@ronosmo Жыл бұрын
I think Microsoft might like a CPU with x64 & arm cores.
@SirMo
@SirMo Жыл бұрын
I don't see a point in ARM on PC. These latest Zen cores are already just as efficient and are faster than ARM cores. Moving to ARM would just make things more painful for the user and developers having to support multiple architectures.
@Stopinvadingmyhardware
@Stopinvadingmyhardware Жыл бұрын
So a return of the FPU, math coprocessor.
@chadjones4255
@chadjones4255 Жыл бұрын
It's not exactly the singularity -- but it will be an important milestone when gaming NPCs become more interesting and thoughtful people than the mass of real human NPCs who actually run the world. We're already getting very close...