News AMD Demoes Ryzen AI at Computex 2023

AI for the masses.

Here we can see the XDNA AI engine crunching away at a facial recognition workload.

Why do you need "AI" that recognizes faces on a consumer laptop? You're going to point your laptop camera at people until you get arrested or something?

AI seems to be all about hype and "concepts", with very few practical use cases.
 
  • Like
Reactions: rluker5
It's used to blur out the background during a video call, no green screen required. It's not all hype.
We've been able to blur a background for years without "AI chips".

If that's the best example of AI, [Moderator edit to remove profanity. Remember that this is a family friendly forum.], then AI is way overhyped.
 
Last edited by a moderator:
But what if I just want the computer to do what I tell it to do, not what it thinks I want it to do?
I'm a WYSIWYG type of person. I don't want any background "magic" going on. I also don't want
it chewing up my memory, processor, or battery, or scanning my pictures and auto-tagging
all the faces it sees and sending the information up to the mothership. There had better be a way
to turn it all completely OFF.
 
We've been able to blur a background for years without "AI chips".

If that's the best example of AI, [Moderator edit to remove profanity. Remember that this is a family friendly forum.], then AI is way overhyped.
AI is good for upscaling. At least AI that doesn't come from AMD.
 
Last edited by a moderator:
Why do you need "AI" that recognizes faces on a consumer laptop? You're going to point your laptop camera at people until you get arrested or something?

AI seems to be all about hype and "concepts", with very few practical use cases.
It's a tool, and people will find ways to use the tool if it is good.

As for face recognition: put two or three faces on camera while sharing your screen with a processing-heavy app, like a development environment or CAD software. The AI engine can take over the detection and blurring work, lowering CPU and GPU usage and helping the rest of the presentation go smoother. It can also be used to allow only the main user to issue speech commands, by recognizing their face in real time.

Take it beyond the laptop, and you could build small, cheap PCs to use with security cameras, with real-time capture.
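For a sense of what that per-frame detection work looks like when it runs on the CPU, here's a minimal sketch using OpenCV's bundled Haar cascade (my own illustration, not AMD's demo pipeline); a dedicated AI engine would run a heavier DNN detector for a fraction of the power:

```python
# Per-frame face detection on the CPU with OpenCV's bundled Haar cascade.
# A dedicated AI engine would run a heavier DNN detector, but this is the
# kind of always-on, per-frame loop it could take off the CPU.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:  # box each detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Run that continuously during a call and you'll see why offloading it to a low-power block is attractive.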
 
You need ai for that? 🤔
It's the latest buzzword that's all.
It's not so much "need" but rather "you can do this with 1W rather than loading the CPU at 75% and consuming 25W." Throwing out the term "buzzword" to dismiss something you don't understand doesn't mean the tech isn't actually doing something useful.

Smartphones? They all have AI accelerators now (or at least, the good ones do). They're being used to do cool stuff. Image enhancement, smart focus, noise cancelation. We talk to Siri and Google and Alexa and over time we're getting better and better results and we hardly even stop to think about it.

There was a time (about 15 years ago) when just doing basic speech recognition could eat up a big chunk of your processing power and could barely keep up with a slow conversation. Now we have AI tools that can do it all in real-time while barely using your CPU, or we can do it at 20X real-time and transcribe an hour of audio in minutes. That's just one example.

The stuff happening in automotive, medical, warehousing, routing, and more is all leveraging AI and seeing massive benefits. We have AI chip layout tools that can speed up certain parts of the chip design process by 10X or more (and they'll only become better and more important as chips continue to shrink in size). But sure, go ahead and call it a "buzzword."
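To put the speech example in concrete terms, here's roughly what local transcription looks like today with the open-source Whisper model (a minimal sketch of mine, assuming `pip install openai-whisper` and a hypothetical audio file), all running on your own machine:

```python
# Local speech-to-text with the open-source Whisper model
# (pip install openai-whisper). Runs on CPU or GPU; bigger models
# trade speed for accuracy.
import whisper

model = whisper.load_model("base")               # small model, fine on CPU
result = model.transcribe("meeting_audio.mp3")   # hypothetical input file
print(result["text"])                            # the full transcript
```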
 
Looks like basically an embedded coprocessor optimized for specific tasks offloaded from the CPU, a coprocessor called "AI"... 😉 Like the article states, it will sink or swim based on its efficiency versus just running those same tasks on the CPU without the coprocessor.
 
  • Like
Reactions: bit_user
It's not so much "need" but rather "you can do this with 1W rather than loading the CPU at 75% and consuming 25W." Throwing out the term "buzzword" to dismiss something you don't understand doesn't mean the tech isn't actually doing something useful.

Smartphones? They all have AI accelerators now (or at least, the good ones do). They're being used to do cool stuff. Image enhancement, smart focus, noise cancelation. We talk to Siri and Google and Alexa and over time we're getting better and better results and we hardly even stop to think about it.

There was a time (about 15 years ago) when just doing basic speech recognition could eat up a big chunk of your processing power and could barely keep up with a slow conversation. Now we have AI tools that can do it all in real-time while barely using your CPU, or we can do it at 20X real-time and transcribe an hour of audio in minutes. That's just one example.

The stuff happening in automotive, medical, warehousing, routing, and more is all leveraging AI and seeing massive benefits. We have AI chip layout tools that can speed up certain parts of the chip design process by 10X or more (and they'll only become better and more important as chips continue to shrink in size). But sure, go ahead and call it a "buzzword."
Yes, a "buzzword". And it'll continue to be that at least until we normal individuals are benefited directly from this technology.

All the examples you mentioned are mostly gimmicky, meaningless, overhyped, and yet to deliver any real benefit to the individual. All we see is hype; nothing is deployed. Nobody cares if AI can make me look better in photos or create some fake effects and mostly useless adjustments.

Okay, it can speed up the process of chip design, but do you see it in effect? Are CPUs and GPUs getting cheaper? No. Actually, the opposite. Then it only benefits those businessmen by increasing their profits, with no real benefit for the consumer so far.

For now, AI is still a wounded dog. AI chips exist because there is so much potential, but it's not there yet. Yes, it looks promising and potentially life-changing, but let's be honest: at the moment we still don't feel the impact. It will only get more overhyped over time for its potential, but it is far from being here, and it's not yet ready to help us become better as human beings.
 
  • Like
Reactions: waltc3
They're being used to do cool stuff. Image enhancement, smart focus, noise cancelation. We talk to Siri and Google and Alexa and over time we're getting better and better results and we hardly even stop to think about it.
All of these things existed before this "AI" hype. None of these examples require "AI chips".

I worked at Lernout & Hauspie for a while. They were pretty much the pioneers of today's speech recognition. Microsoft ended up buying the tech. This tech is what Alexa, Siri, and Cortana are still using today.

You know when they invented all these software patterns? 1987.

This idea that speech recognition is some new field fueled by AI is a bit ridiculous.

And I'll give you the benefit of the doubt on the idea that AI will somehow greatly enhance speech recognition (I haven't seen that happen, but OK). Even if it does, is this the billion-dollar industry the AI evangelists suggest? I don't think it is; I don't think it is at all.
 
Last edited:
All of these things existed before this "AI" hype. None of these examples require "AI chips".

I worked at Lernout & Hauspie for a while. They were pretty much the pioneers of today's speech recognition. Microsoft ended up buying the tech. This tech is what Alexa, Siri, and Cortana are still using today.

You know when they invented all these software patterns? 1987.

This idea that speech recognition is some new field fueled by AI is a bit ridiculous.

And I'll give you the benefit of the doubt on the idea that AI will somehow greatly enhance speech recognition (I haven't seen that happen, but OK). Even if it does, is this the billion-dollar industry the AI evangelists suggest? I don't think it is; I don't think it is at all.

Not so much the speech recognition, but what you do with it afterwards.

Sure, any old computer can recognize the words, but interpreting the context of the sentence is the end goal. Most speech recognition is very narrowly focused and follows specific prompts: you say your 10-digit order number after being prompted for it, so there's no great leap of logic there.

Words -> Sentence -> Action

"Make an appointment at the dog groomers." And it actually doing that for you was something google advertised as part of the google assistant, which I believe was cloud powered. If you could instead do all that work locally you could keep it out of the hands of the 'mothership' as someone else said.
 
All of these things existed before this "AI" hype. None of these examples require "AI chips".

I worked at Lernout & Hauspie for a while. Pretty much the pioneers of today's speech recognition. Microsoft ended up buying the tech. This tech is what Alexa, Siri and Cortana is still using today.

You know when they inventend all these software patterns? 1987

This idea that speech recognition is some new field fueled by AI is a bit ridiculous.

And I'll give you the benefit of the doubt. This idea that AI will somehow greatly enhance speech recognition (I haven't seen that happen, but ok). Even if that happens, is this a billion $ industry that the AI evangelists suggest? I don't think it is, I don't think it is at all.
Speech recognition, trained on gobs of data (just sound files with the words behind them), is absolutely not what Dragon, Microsoft, Lernout and Hauspie, etc. did. Yes, the end result might seem similar, but the mechanics are fundamentally different. It's like saying, "We had algorithms that could play a perfect game of checkers back in the 90s" is the same as "We have a deep learning algorithm that taught itself to play checkers and now plays a perfect game."

And speech recognition was one (simple) example. What Whisper can do, running on a GPU, for free, is pretty sweet. The stuff it gets extended into is even better. You're trying to dismiss something entirely by pointing out that one aspect was already possible before. You're missing the forest for the single small tree you've focused on.

We've been trying to do image recognition for decades as well. Some of the algorithms were okay back in the day. Then ImageNet and deep learning with CNNs revolutionized the field. Now we have trained algorithms that can handle a dozen 60fps inputs and recognize various objects — cars, pedestrians, animals, signs, bikes, etc. — all running on a single GPU. That was never going to happen with the old way of coding expert systems.
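To show how accessible that has become, here's a minimal sketch that scores a single image with a COCO-pretrained detector from torchvision (the filename and the 0.8 confidence cutoff are placeholders of mine):

```python
# Object detection with a COCO-pretrained Faster R-CNN from torchvision.
# A single GPU can run many such streams in real time; this scores one image.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("street_scene.jpg")              # hypothetical test image
with torch.no_grad():
    pred = model([preprocess(img)])[0]

categories = weights.meta["categories"]
for label, score in zip(pred["labels"], pred["scores"]):
    if score > 0.8:                               # keep confident detections
        print(categories[int(label)], float(score))  # e.g. "car 0.97"
```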

ChatGPT is already at a level of quality that beats the vast majority of college students for writing and research. It can create several paragraphs of reasonably accurate data in seconds. For millions of people, all at once. Sure, it's not perfect and "hallucinates" (i.e. dispenses incorrect information). So does nearly every research paper ever written. At least it can generate coherent written language that conforms to US English grammatical rules, which is more than I can say for a lot of people.

I don't think ChatGPT is the be-all, end-all of AI either. It's an example of a system that can do things that people would have considered "impossible" just a few years ago. Built on AI. So where do we go next, besides self-driving cars that outperform humans and don't fall asleep? Besides medical imaging systems that catch things that experienced doctors miss? Besides logistics systems that silently run things in the background in a far more efficient fashion than a human ever could?

There will be tons of startups that absolutely cash in on the buzzword aspect. Just like with every tech, including the late-90s Dot Bomb era where everyone and their dog was trying to create a website. Except, 25 years later, everyone and their dog has a website! And the sites look and work better than the stuff we had in the late 90s. 90% of AI startups will probably fail in the next few years. It will be the dozens that succeed that will matter.

Search in the late 90s was dominated by Alta Vista, Yahoo, Ask Jeeves, Lycos, InfoSeek, etc. None of those matter anymore, because Google reigns supreme, a search engine that didn't even exist until 1998. By 2000, it started to really matter, and by 2004 it was superior to anything else. The same people behind Google are heavily invested in AI, for good reasons.
 
Thanks for the writeup. It's very disappointing they couldn't even provide a perf/W comparison vs. using their CPU cores or the iGPU to run the same workload.

Here we can see the XDNA AI engine crunching away at a facial recognition workload.
No, it says "object detection" and "face detection". These are less computationally intensive than face recognition.
 
  • Like
Reactions: greenreaper
Why do you need "AI" that recognizes faces on a consumer laptop. You're going to point your laptop camera at people until you get arrested or something?
First, it wasn't actually doing face recognition. However, let's say it was.

You incorrectly assume the only video it would use for this is what's being captured live from its camera, but what if you're in a video conference with one or more rooms full of people at the other end? Using face recognition, software could label people as they speak, so that you could know their identities.

Another example might be watching a movie or show on your laptop, and you'd like to pause it and see annotations of the actors playing the characters on screen (or the names of the characters themselves!). Player software could scrape the credits off IMDB or the streaming provider and then use face recognition to match up the faces on screen with the actors listed in the credits.

Or, maybe you'd like to know what kind of car or motorcycle is shown in a film or photo. Similar techniques could be used to match and label the correct model.
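To make the matching step concrete (this is my own illustration, not anything AMD demoed), here's a short sketch using the open-source face_recognition library; the names and image files are made up:

```python
# Label faces in a frame by comparing them against known reference photos,
# using the open-source face_recognition library (pip install face_recognition).
import face_recognition

# One reference photo per known person (hypothetical files).
known = {
    "Alice": face_recognition.face_encodings(
        face_recognition.load_image_file("alice.jpg"))[0],
    "Bob": face_recognition.face_encodings(
        face_recognition.load_image_file("bob.jpg"))[0],
}

frame = face_recognition.load_image_file("conference_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    matches = face_recognition.compare_faces(list(known.values()), encoding)
    names = [name for name, hit in zip(known, matches) if hit]
    print(names or ["unknown"])   # label for each detected face
```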

You're just not thinking creatively enough.

You need ai for that? 🤔
To do it well, you do.

Background separation boils down to image segmentation, which is a classic computer vision problem. AI-based methods can far outperform conventional techniques.
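As a rough sketch of the AI-based approach, assuming a generic pretrained segmentation model rather than whatever any particular conferencing app actually ships: keep the "person" pixels sharp and blur everything else.

```python
# AI-based background blur: segment the "person" class with a pretrained
# DeepLabV3 model, then blur everything outside the person mask.
import cv2
import numpy as np
import torch
from torchvision.models.segmentation import (
    deeplabv3_resnet50, DeepLabV3_ResNet50_Weights)

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

frame = cv2.imread("webcam_frame.jpg")            # hypothetical captured frame
rgb = torch.from_numpy(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).permute(2, 0, 1)

with torch.no_grad():
    out = model(preprocess(rgb).unsqueeze(0))["out"][0]

person_idx = weights.meta["categories"].index("person")
mask = out.argmax(0).eq(person_idx).numpy().astype(np.uint8)
mask = cv2.resize(mask, (frame.shape[1], frame.shape[0]),
                  interpolation=cv2.INTER_NEAREST)

blurred = cv2.GaussianBlur(frame, (51, 51), 0)
result = np.where(mask[..., None] == 1, frame, blurred)  # keep the person sharp
cv2.imwrite("blurred_background.jpg", result)
```

Running a model like this per frame is exactly the kind of sustained load you'd rather hand to a dedicated engine than to the CPU.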

We've been able to blur a background for years without "AI chips".

There's a big difference between sort of being able to do something and being able to do it well.

So, what about speech recognition & synthesis?

I worked at Lernout & Hauspie for a while.
Not as a researcher, you didn't. If you had, you'd know that deep learning has revolutionized speech recognition.

This tech is what Alexa, Siri, and Cortana are still using today.
No. Back when Amazon started the Alexa project, far-field speech recognition could not be done. They had to lure some of the best researchers in the world and invent the tech.

But what if I just want the computer to do what I tell it to do, not what it thinks I want it to do?
A lot of AI applications are just sophisticated pattern-recognition problems. Like finding cancerous tumors in X-ray photos, or maybe matching a disease diagnosis and treatment regimen with a set of lab results and symptoms.

People have too narrow a definition of "AI". It's not all like J.A.R.V.I.S. or M3GAN. Have you seen Stable Diffusion or Deep Fakes?

Yes, a "buzzword". And it'll continue to be that at least until we normal individuals are benefited directly from this technology.
You are benefiting. You just aren't necessarily aware of when this tech is being used under the hood.

A lot of people wouldn't agree that Siri or Alexa are gimmicky.

Okay, it can speed up the process of chip design, but do you see it in effect? Are CPUs and GPUs getting cheaper? No. Actually, the opposite.
You're not comparing against the scenario in which it wasn't used. You can't see how much performance, efficiency, time-to-market, or cost is improved through its use. You just blindly assume it's not being used or that its use is entirely optional and inconsequential.

Companies and entire industries don't usually do things that are inconsequential. Businesses operate based on cost and competitiveness. Things which don't yield improvements on at least one of those axes don't tend to catch on.

You might assume that because chips used to be designed using schematics, we don't need modern EDA tools. However, the chips which exist today cannot be designed by schematic! EDA was a huge enabler for larger, more complex integrated circuits.

it only benefits those businessmen by increasing their profits, with no real benefit for the consumer so far.
When corporate finances falter, products get cancelled or scaled back. Ultimately, business performance does matter to consumers.

For now, AI is still a wounded dog. AI chips exist because there is so much potential, but it's not there yet. Yes, it looks promising and potentially life-changing, but let's be honest: at the moment we still don't feel the impact. It will only get more overhyped over time for its potential, but it is far from being here, and it's not yet ready to help us become better as human beings.
The problem with AI vs. some other revolutionary technologies (e.g. cellphones, the internet), is that you can't externally see that it's being used. That's why you believe it's not benefiting you.

If you want to remain ignorant, that's certainly your right. If not, there's a wealth of information on problems that either defied conventional solutions or have been much better addressed using deep learning. You need only look.
 
Last edited by a moderator:
  • Like
Reactions: JarredWaltonGPU
Speech recognition, trained on gobs of data (just sound files with the words behind them), is absolutely not what Dragon, Microsoft, Lernout and Hauspie, etc. did. Yes, the end result might seem similar, but the mechanics are fundamentally different. It's like saying, "We had algorithms that could play a perfect game of checkers back in the 90s" is the same as "We have a deep learning algorithm that taught itself to play checkers and now plays a perfect game."
It's like trying to argue against fuel-injection because we already had cars without it. Or against LCD monitors, because we can keep using CRT monitors.

Anybody who's worked in tech knows there are usually better ways of doing things, and a major way that technology improves is by adopting those newer, better methods.

"I don't need Git. We've had RCS for decades."

...said almost no one.
 
>It's very disappointing they couldn't even provide a perf/W comparison vs. using their CPU cores or the iGPU to run the same workload.

It's safe to say that AMD doesn't have any AI announcement ready. But given that all eyeballs are on AI at the moment, it's good PR to say something something AI.

Forbes has a piece on Dr. Su.

The piece is overlong on Su's bio and a bit too adulatory for my taste. But it does drive home the point that AMD's primary focus going forward will be to compete in AI hardware for datacenters. The consumer GPU market will take a backseat, given both the lower margins and the fact that it's not a growth market. It's a logical take, even if it's not yet borne out in the current GPU generation.

I expect the same incremental improvement (+10-15% raw performance) for future GPU generations, especially if AMD/Nvidia stick to the same price points (e.g., $300 for the base mainstream part) and don't increase them along with inflation. $300 today is worth about $250 in 2020 dollars. Gamers won't be happy, but they'll figure out that not everything revolves around their wants.

 
>It's very disappointing they couldn't even provide a perf/W comparison vs. using their CPU cores or the iGPU to run the same workload.

It's safe to say that AMD doesn't have any AI announcement ready. But given that all eyeballs are on AI at the moment, it's good PR to say something something AI.
Huh? I mean, they built the hardware into their CPUs and have been talking about it since late last year (IIRC).

I think the reason they don't have more to say about it is that their software team is running behind schedule. That demo probably came together only at the last minute. That's the only good reason they wouldn't have impressive benchmarks to share. The other reason might be that their hardware isn't actually that good and their numbers would just be embarrassing. But, I find that a little hard to believe.

it does drive home the point that AMD's primary focus going forward will be to compete in AI hardware for datacenters. The consumer GPU market will take a backseat, given both the lower margins and the fact that it's not a growth market. It's a logical take, even if it's not yet borne out in the current GPU generation.
Eh, I think the gaming market is still too big for AMD to ignore. Even if it's not as rapidly growing, AMD could achieve substantial revenue growth if they manage to take a bigger share of it. Plus, it would set the stage for the next round of consoles. And they need to be thinking about continuing to develop competitive iGPUs.

Lastly, metaverse might've been overhyped and a ways off, but I wouldn't completely write it off.

I expect the same incremental improvement (+10-15% raw performance) for future GPU generations,
It should absolutely be better than generational CPU increases. One reason generational GPU performance gains have slowed is that new functional blocks are competing for some of the process node dividends. However, unlike CPUs, it's relatively easy to increase GPU performance simply by throwing more gates at the problem.
 
It's ironic to see us here arguing over what AI is, what it does, and whether it's even here, while corporations around the world are restructuring their organizations, adding AI tools to every corner of their business, boosting productivity significantly while removing unnecessary "fat" and eliminating middlemen. At the individual level, most of us don't even have a chance to access these tools, except for some silly conversations with ChatGPT or some cute pics out of Bing Dall-E. Have you ever used an AI tool to generate things that speed up your job, create things you weren't able to before, or learn new knowledge and skills to make your day worthwhile? I'm trying. I'm amazed, and I'm a little scared so far :)