News: Nvidia RTX Voice Works Fine On Non-RTX GPUs

You know, I would have respected NVIDIA more if they had said, "This is a premium feature; you need to pay for a higher-end RTX card to receive it."

Instead they just lie through their G-D teeth, saying it requires "Tensor cores" to promote a marketing objective, which is a blatant lie.

This is why I refuse to buy NVIDIA. They are just a bunch of a-clowns when it comes to being straightforward and honest. They are anti-competitive as well.
 
Reactions: bit_user

bit_user

Titan
Ambassador
The latest discovery raises the question of whether RTX Voice really requires Tensor cores.
Clearly not.

As things currently stand, the feature seems to run smoothly on just CUDA cores. Nevertheless, it remains to be seen if RTX Voice has any performance impact on GeForce graphics cards that lack proper Tensor cores.
No. It's just audio processing. As I said in the comments on the previous article about this, it might even be usable on CPUs.

Anyway, it can either keep up with realtime or not. That's really the only question. Without tensor cores, you're almost certainly using fp32 instead of fp16, so precision shouldn't be an issue. It just comes down to performance.
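To put rough numbers on "keeping up with realtime": a streaming denoiser has a fixed per-frame time budget set by its hop size, nothing more. Here's a back-of-the-envelope sketch in Python; the frame/hop sizes and per-frame costs are made-up illustrations, since Nvidia hasn't published RTX Voice's internals:

```python
# Hypothetical numbers -- RTX Voice's actual frame size, hop length,
# and per-frame inference cost are not public.
SAMPLE_RATE_HZ = 48_000   # typical voice-chat capture rate
HOP_MS = 10.0             # assumed hop between successive analysis frames
SAMPLES_PER_HOP = int(SAMPLE_RATE_HZ * HOP_MS / 1000)  # 480 new samples per frame

# Each frame must finish processing before the next hop's samples arrive,
# so the hop interval itself is the per-frame compute budget.
def keeps_up_with_realtime(inference_ms_per_frame: float) -> bool:
    return inference_ms_per_frame < HOP_MS

print(keeps_up_with_realtime(2.0))   # True:  plenty of headroom
print(keeps_up_with_realtime(15.0))  # False: output would glitch/underrun
```

Whether that per-frame cost is paid in fp16 on tensor cores or fp32 on CUDA cores only moves the number, not the budget.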

Now, if someone can try it (either on an RTX or GTX card) and post their GPU utilization, that might shed some light on how much compute power it really requires.

Sadly, my Nvidia GPU is a 900-series...
 

muser99

Reputable
Apr 5, 2018
5
1
4,515
I tried RTX Voice on a GTX 750 Ti (Maxwell) card. After the modification, it installed OK, but Skype callers said I sounded muffled, fuzzy, and too quiet, so I uninstalled it. Task Manager showed little load on the GPU (3D) while it was in use, about 2-5%. I used Windows 10 Version 2004, Build 19041.207 (Release Preview of the May 2020 Update).
 
Reactions: bit_user

ZippydsmLEE

Distinguished
May 21, 2012
5
0
18,510
You know, I would have respected NVIDIA more if they had said, "This is a premium feature; you need to pay for a higher-end RTX card to receive it."

Instead they just lie through their G-D teeth, saying it requires "Tensor cores" to promote a marketing objective, which is a blatant lie.

This is why I refuse to buy NVIDIA. They are just a bunch of a-clowns when it comes to being straightforward and honest. They are anti-competitive as well.
I hate to say it, I really do... but that doesn't help; you'd also have to avoid buying the games that are optimized to run better on Nvidia cards. The whole game market is pretty much focused on Nvidia more than AMD, and it helps that Nvidia does more to optimize its drivers, more frequently.

I'd say they could have offered it free with RTX cards and then asked $10-20 for non-RTX cards.

Back to the rant:

By not buying Nvidia, you are shooting yourself in the foot, because you are getting much less performance for the money you are putting in, unless you are not gaming.

Not everything sold has a comparable or worthwhile alternative; with video cards you don't have much of a choice unless you are a minimalist. So you buy used, or wholesale, or at a non-authorized-vendor discount. If a thing is two or three times removed from authorized vendors, the product can't make its producer any money, because it's already been sold and resold in the market; it doesn't really matter how much it's bought, what matters is whether the primary vendors sell direct to end users. Even AMD cards get treated the same, and frankly they hold higher prices than Nvidia because people are price gouging... but they do that with 990FX mobos too...

I picked up a used 1060 for $170 eight months ago, and found a new 1660 for $200 a month ago for a friend's computer. The 1660 is getting 5-10 more FPS than the 1070s I have tested. I borrowed an RX 5700; it got 5-8 more FPS than the 1660, at double the price of the 1660.

AMD lied about the memory bandwidth on the 8-core CPUs (got my class-action check), so it's not like they are much better; they are just a lesser evil to try and keep Intel/Nvidia from slacking more.

Yes, yes, I know it's a dumb, long-winded rant, but not everyone is made of money; more so, a lot of people don't bother to keep up with hardware/software compatibility nuances. Don't get me wrong, I jumped over to an RX 480 to move up from my 760 (got 10-15 FPS more), and I got kind of pissed at Nvidia's forced log-in just to check for updates (they no longer force you to log in), but frankly, every chance I had to test, I got 10-20 more FPS out of the Nvidia equivalent, and that translates to a few more years of medium-end gaming.
 

ZippydsmLEE

Distinguished
May 21, 2012
5
0
18,510
Ya know... does everyone recall when they forced you to log in to check for and download updates through the drivers/GFExp? Here's a reason I would use online-only GFExp... this would be a great feature to get you to put up with their adware. Outside of that, it should be free with RTX but either cost money or need online GFExp for non-RTX cards. My next card will be a 1660 unless 2060s start going for under $160. I have a 1660, so I don't need to upgrade for a while.
 

watzupken

Reputable
Mar 16, 2020
1,181
663
6,070
The more I read about the features that "utilize" the tensor cores, the more I feel the tensor cores are a marketing gimmick in the retail market.

If you look at DLSS: when I first heard of it, I thought it was a great feature where the tensor cores could optimize frames in real time. It turns out that games first need to be optimized, or rather, the game developer needs to work with Nvidia to "teach" the AI how it should optimize. Nothing like real-time machine learning on our GPU to optimize performance. Even though AMD did not put any machine-learning cores in its chips, its answer to DLSS is simple: just lower the resolution based on how much performance you want back, and sharpen the details. When this solution first came out, most reviews found that the supposedly less elegant AMD solution seemed to produce better results, in both performance and IQ, compared to first-gen DLSS.

Now, if you look at this supposedly tensor-optimized RTX Voice, again people can work around the requirement for tensor cores and make it work fine, or at least well enough. Perhaps there is some help from the tensor cores, but I don't think it's tangible enough.
 
Reactions: TJ Hooker

vincero

Distinguished
Mar 9, 2010
15
6
18,515
Whilst the idea is impressive (and others are working on it too, using a normal CPU), I'd love to know what the trade-off really is in terms of power usage / battery runtime on a mobile device for what is essentially a non-essential feature (I mean really, if it had never been developed, I don't think anyone apart from a select few would care), say in meetings for a few hours. GPGPU assistance in apps tends not to be low power vs a dedicated IP/DSP block, e.g. NVENC vs CUDA.
Yeah, the usage is <10% (from user accounts; it may not really be that much, or may be misreported, but until we know better I'll take that as the number), but that could still be around 10 watts of power draw. On a laptop with a mobile RTX GPU and, say, a 50 Wh battery, that's a fifth to a quarter of your capacity used up per hour or so, just for that feature.
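(Spelling out that arithmetic, with both numbers being assumptions rather than measurements:)

```python
# Assumed figures: the ~10 W extra draw is extrapolated from user-reported
# GPU utilization, and 50 Wh is a typical thin-laptop battery.
gpu_draw_w = 10.0    # guessed additional GPU power while denoising
battery_wh = 50.0    # battery capacity
call_hours = 1.0     # one hour of meetings

energy_wh = gpu_draw_w * call_hours                    # 10 Wh consumed
print(f"{energy_wh / battery_wh:.0%} of the battery")  # -> "20%" per hour
```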
I suspect Realtek, etc., will eventually integrate such features into their audio codec companion chips relatively easily, and probably more efficiently if less general-purpose and more tweaked for audio use. That would definitely suit the mobile device market better - where the laptop keyboard sits in the same device as the audio pickup - and other cases such as in-car communications, etc., removing external noise better than the differential / auxiliary microphone systems currently in use.
Also, it would be interesting (seeing as Discord / MS Teams are looking into doing it already, not just Nvidia) whether the overhead of a normal CPU performing the calculation is low enough that it doesn't actually impact power usage any more than a GPU-based approach does.
It seems another case of a technology looking for a widespread use and not even really needing 'AI' cores - let me know when real 'AI' actually appears that doesn't need thousands of iterations of learning, etc., and can adapt to changes without needing additional 're-training' - until then this is just another 'programmed intelligence' example.
Not specifically related to Nvidia, but it's just more of this 'AI'-tagged tech bandwagon stuff, which in some cases only exists to distort reality - it pains me that there is a large area of silicon reserved in my phone essentially to make a picture you've taken (where the imaging sensor and processor themselves are optimised to be as accurate and clear as possible despite their sizes) become less accurate, clear, or realistic, so that self-obsessed people can just focus on themselves even more.

On a side note: it will be funny if someone uses it whilst doing a YouTube or other video review of a keyboard and tries to capture the sound difference between a certain mechanical keyboard vs another, or a membrane one... doh
 

vincero

Distinguished
Mar 9, 2010
15
6
18,515
Clearly not.


No. It's just audio processing. As I said in the comments on the previous article about this, it might even be usable on CPUs.

Anyway, it can either keep up with realtime or not. That's really the only question. Without tensor cores, you're almost certainly using fp32 instead of fp16, so precision shouldn't be an issue. It just comes down to performance.

Now, if someone can try it (either on an RTX or GTX card) and post their GPU utilization, that might shed some light on how much compute power it really requires.

Sadly, my Nvidia GPU is a 900-series...
We don't actually know for sure how, or for what, it uses RTX/tensor-core features. Part of me wonders if they are doing full-sample analysis on the cores (which I think would be inefficient, as technically you're working with a larger dataset) or also using some other parts of the GPU to assist, e.g. using the NVENC block to perform a real-time DCT of the audio stream, then using the GPU matrix/tensor cores to calculate and remove audio patterns that are programmed to be ignored, and shunting the end result back through to the user app (although that may also add working overhead).
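(For illustration only - this is the generic shape of a frequency-domain denoiser: transform, attenuate the bins you've decided are noise, transform back. A minimal numpy spectral-gate sketch, emphatically not Nvidia's actual, unpublished pipeline:)

```python
import numpy as np

def spectral_gate(audio, frame=1024, hop=512, reduce_db=20.0):
    """Crude noise reduction: estimate a per-bin noise floor from the
    quietest frames, then attenuate bins sitting near that floor.
    A learned mask (as RTX Voice presumably uses) replaces this heuristic."""
    window = np.hanning(frame)
    n_frames = 1 + (len(audio) - frame) // hop

    # Slice into overlapping windowed frames and FFT each one.
    frames = np.stack([audio[i*hop:i*hop+frame] * window for i in range(n_frames)])
    spec = np.fft.rfft(frames, axis=1)
    mag = np.abs(spec)

    # Noise-floor estimate: 10th-percentile magnitude in each frequency bin.
    floor = np.percentile(mag, 10, axis=0)
    # Hard-ish mask: bins not well above the floor get cut by `reduce_db`.
    spec *= np.where(mag > 2.0 * floor, 1.0, 10.0 ** (-reduce_db / 20.0))

    # Overlap-add resynthesis back to the time domain.
    out = np.zeros(len(audio))
    for i, f in enumerate(np.fft.irfft(spec, n=frame, axis=1)):
        out[i*hop:i*hop+frame] += f * window
    return out

# Toy check: a 220 Hz "voice" buried in white noise.
t = np.arange(48_000) / 48_000.0
cleaned = spectral_gate(np.sin(2*np.pi*220*t) + 0.3*np.random.randn(len(t)))
```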
To a large extent, the method chosen probably comes down to how easy it is to distribute and implement in software / drivers.
It may also have nothing to do with compute power in terms of which devices support it; it may rely on specific chip-architecture design, such as certain IP blocks being able to communicate with others directly rather than through GPU RAM or driver calls.
I suspect the beta application is just a wrapper around a specific set of API calls to various parts of the card's hardware, moving data between them; maybe future drivers will natively implement a specific API or 'audio interface' in addition to the HDMI audio devices... although that probably depends on uptake, I guess, or how easy it is to implement.
 
Apr 23, 2020
1
1
10
It didn't work on my low-end graphics card, a GeForce 920M... it sat at 25-35% GPU utilization, but I couldn't hear myself back (and the audio level meters in the Windows sound settings always stayed at zero)... I guess you need higher-end cards, above a GTX 1050...
 
Reactions: bit_user

LucianAndries

Reputable
Jul 21, 2017
46
2
4,545
Nvidia didn't exactly lie about RTX Voice...

I installed it on my 1080 Ti, and it is just power hungry!!!! My GPU normally sits at 139 MHz, even when watching 4K YouTube. But when I activate RTX Voice, it clocks my GPU to almost 1800 MHz........ ON STANDBY, with all apps closed!!!!! 😭
If it needs that much power on standby, imagine how games would run...................
And I'm only using the Input Device setting, not both.

So no, they didn't lie. It just works better on RTX GPUs. Same with Ray Tracing.
 
Reactions: bit_user

TJ Hooker

Titan
Ambassador
Nvidia didn't exactly lie about RTX Voice...

I installed it on my 1080 Ti, and it is just power hungry!!!! My GPU normally sits at 139 MHz, even when watching 4K YouTube. But when I activate RTX Voice, it clocks my GPU to almost 1800 MHz........ ON STANDBY, with all apps closed!!!!! 😭
If it needs that much power on standby, imagine how games would run...................
And I'm only using the Input Device setting, not both.

So no, they didn't lie. It just works better on RTX GPUs. Same with Ray Tracing.
Do we know that clock speeds on RTX GPUs don't rev up when using RTX Voice?

What was your % GPU utilization?
 
Reactions: bit_user

javiindo

Reputable
Jun 12, 2019
95
27
4,560
Hello,

I have a GTX 1070 and I was able to install the RTX Voice application. I think it works fine. This is my feedback:

RTX Voice not open:
GPU / memory clock: 139 MHz / 405 MHz
GPU usage: 0%

RTX Voice open, without processing the input signal (checkbox unchecked):
GPU / memory clock: 139 MHz / 405 MHz
GPU usage: 0%

RTX Voice open and processing the input signal (checkbox checked):
GPU / memory clock: 1556 MHz / 3802 MHz
GPU usage: 0%

RTX Voice open, processing the input signal (checkbox checked), and using the Voice Recorder app to record voice:
GPU / memory clock: 1556 MHz / 3802 MHz
GPU usage: 7%

So, whenever the 'process input' box is checked, the GPU moves from idle to its base clock speed. Only when input audio is actually in use does GPU utilization rise, to about 7%.

Is it the same with RTX cards? I expect GPU usage is lower there, because they are more powerful graphics cards.

Screenshots

Br,
 
Reactions: bit_user

GenericUser

Distinguished
Nov 20, 2010
296
140
18,990
I have an RTX 2080 Ti and have been giving the RTX Voice program a try for a bit. Input works fine, with people saying pretty much any and all background noise is eliminated from my end.

Output is a different story, however, since I cannot get it to work under any circumstances. It doesn't matter if I'm using speakers or headphones, or whether I'm actually telling it to filter the background noise or not; I get absolutely nothing from the other end (tried with Skype, Discord, and Steam, though I think only the first two of those three are actually officially supported).

Anyway, regarding GPU impact, my video card's clock speeds stay constant regardless of whether the program is up and running during an active voice call or completely closed: 1350 MHz core clock, 7000 MHz memory clock (according to MSI Afterburner).

Looking at GPU usage during an actual call with someone (and only filtering input, since I can't get output to work), my usage seems to fluctuate between 4% and 6%. Only a marginal difference from the previous poster's non-RTX card results.
 

javiindo

Reputable
Jun 12, 2019
95
27
4,560
I have an RTX 2080 Ti and have been giving the RTX Voice program a try for a bit. Input works fine, with people saying pretty much any and all background noise is eliminated from my end.

Output is a different story, however, since I cannot get it to work under any circumstances. It doesn't matter if I'm using speakers or headphones, or whether I'm actually telling it to filter the background noise or not; I get absolutely nothing from the other end (tried with Skype, Discord, and Steam, though I think only the first two of those three are actually officially supported).

Anyway, regarding GPU impact, my video card's clock speeds stay constant regardless of whether the program is up and running during an active voice call or completely closed: 1350 MHz core clock, 7000 MHz memory clock (according to MSI Afterburner).

Looking at GPU usage during an actual call with someone (and only filtering input, since I can't get output to work), my usage seems to fluctuate between 4% and 6%. Only a marginal difference from the previous poster's non-RTX card results.

@GenericUser For me, to make it work on output (to filter the sound of a YouTube video, for example): in Windows I set "NVIDIA RTX Voice" as the output device, and in the RTX Voice application I choose the device where I actually want to hear the audio ("Speakers").
 

vincero

Distinguished
Mar 9, 2010
15
6
18,515
I have an RTX 2080 Ti and have been giving the RTX Voice program a try for a bit. Input works fine, with people saying pretty much any and all background noise is eliminated from my end.

Output is a different story, however, since I cannot get it to work under any circumstances. It doesn't matter if I'm using speakers or headphones, or whether I'm actually telling it to filter the background noise or not; I get absolutely nothing from the other end (tried with Skype, Discord, and Steam, though I think only the first two of those three are actually officially supported).

Anyway, regarding GPU impact, my video card's clock speeds stay constant regardless of whether the program is up and running during an active voice call or completely closed: 1350 MHz core clock, 7000 MHz memory clock (according to MSI Afterburner).

Looking at GPU usage during an actual call with someone (and only filtering input, since I can't get output to work), my usage seems to fluctuate between 4% and 6%. Only a marginal difference from the previous poster's non-RTX card results.
Wow, so based on the 2080 Ti's TDP (250 W - I know TDP is not exactly equal to max power), that's very roughly about 10 W of power usage. Honestly... can I say (donning flame-proof coat) I don't think it's worth it... Not to say it's worthless (and on a normal desktop PC it's ultimately negligible, so on or off is irrelevant); I just mean the cost vs reward, for me, is not worth it (efficiency-wise it's not great). Certainly, if I were working on a laptop on battery, it would be a significant draw.
Let me put it another way - I expect phones will pick this up soon enough (many now have 'AI' processor cores), and they will need to perform the task using a hell of a lot less power; I expect they will pull it off within a much smaller power envelope and energy cost.
 

bit_user

Titan
Ambassador
The more I read about the features that "utilize" the tensor cores, the more I feel the tensor cores are a marketing gimmick in the retail market.
You're confusing two different things, here.

If you look at DLSS: when I first heard of it, I thought it was a great feature where the tensor cores could optimize frames in real time. It turns out that games first need to be optimized, or rather, the game developer needs to work with Nvidia to "teach" the AI how it should optimize. Nothing like real-time machine learning on our GPU to optimize performance.
I carefully read everything they published about it, and there was never any suggestion that it was learning in realtime. They clearly said it was applying a pre-trained network to the data, which is realistically all you can do.

Even though AMD did not put any machine-learning cores in its chips, its answer to DLSS is simple: just lower the resolution based on how much performance you want back, and sharpen the details. When this solution first came out, most reviews found that the supposedly less elegant AMD solution seemed to produce better results, in both performance and IQ, compared to first-gen DLSS.
I've said it before, but I'll at least use a different analogy: if you're going to use first-gen DLSS as an indictment of the whole idea, then you probably would have decided that automobiles could never replace horses, based on the first few machines that were built.

New technologies take time to refine and improve to the point where they can surpass existing methods. For a long time, people were building flash storage, but it was only about 15 years ago that SSDs could finally compete with HDDs.

And specifically with regard to DLSS, you'd do well to have a close look at the results of their 2.0 implementation, which seems very promising.

Now, if you look at this supposedly tensor-optimized RTX Voice, again people can work around the requirement for tensor cores and make it work fine, or at least well enough. Perhaps there is some help from the tensor cores, but I don't think it's tangible enough.
Okay, so, this issue is truly unrelated to DLSS.

I think Nvidia's purpose was simply to improve the perceived value of RTX cards, which is why they made that restriction. Audio processing is typically far less compute-intensive than video processing and graphics, so I was immediately skeptical that only RTX cards could handle this workload.

It shouldn't be seen as an indictment of the tensor cores - just Nvidia's marketing department.

Finally, I think you're leaving out one significant application of the tensor cores: Global Illumination. This is the most intensive ray-tracing feature, because it involves tracing light rays forward from each light source. Because it's infeasible to shoot enough rays to illuminate all of the image pixels, what ends up happening is that you get a very noisy image. They use deep learning and tensor cores to effectively de-noise the image, and the result looks a lot better than other approaches that use similar rays-per-pixel ratios.
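(If it helps to see why low ray counts are inherently noisy: path tracing estimates each pixel as a Monte Carlo average, and the error only shrinks with the square root of the sample count. A toy sketch, purely illustrative - not Nvidia's denoiser:)

```python
import random

def pixel_estimate(n_rays: int) -> float:
    """Monte Carlo estimate of a pixel whose true brightness is 0.5:
    each 'ray' either reaches the light (1.0) or doesn't (0.0)."""
    return sum(random.random() < 0.5 for _ in range(n_rays)) / n_rays

# Few rays/pixel -> estimates scattered all over (visible noise);
# many rays/pixel -> estimates hug 0.5 (clean image, but far too slow
# for realtime -- hence the deep-learning denoiser on tensor cores).
print([round(pixel_estimate(4), 2) for _ in range(5)])
print([round(pixel_estimate(4096), 2) for _ in range(5)])
```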

I'll grant you that DLSS 1.0 didn't work as advertised. Aside from DLSS and ray-traced GI, neither of which many people use, there's not currently much value in tensor cores for the average consumer. And trying to address that problem by artificially restricting RTX Voice to running on RTX models just gave them a black eye.

So, I basically agree with your initial statement, if not the specifics of your argument.
 

javiindo

Reputable
Jun 12, 2019
95
27
4,560
Wow, so based on the 2080 Ti's TDP (250 W - I know TDP is not exactly equal to max power), that's very roughly about 10 W of power usage. Honestly... can I say (donning flame-proof coat) I don't think it's worth it... Not to say it's worthless (and on a normal desktop PC it's ultimately negligible, so on or off is irrelevant); I just mean the cost vs reward, for me, is not worth it (efficiency-wise it's not great). Certainly, if I were working on a laptop on battery, it would be a significant draw.
Let me put it another way - I expect phones will pick this up soon enough (many now have 'AI' processor cores), and they will need to perform the task using a hell of a lot less power; I expect they will pull it off within a much smaller power envelope and energy cost.

This is a tool to isolate a voice from a very noisy background (children, keyboards, etc.). For example, I was watching a YouTube video and was not able to understand the person. I activated this function and all the background noise was removed. It's like applying a very powerful filter to the voice.
So, it's a new free tool. And your computer will not use more energy if you don't use it. It's not always on. You just use it whenever you want.
 

bit_user

Titan
Ambassador
I'd love to know what the trade-off really is in terms of power usage / battery runtime on a mobile device
Good point.

GPGPU assistance in apps tends not to be low power vs a dedicated IP/DSP block, e.g. NVENC vs CUDA.
If it does even use the tensor cores, then you should know that their power-efficiency is much closer to that of hard-wired logic than CUDA cores.

Yeah, the usage is <10% (from user accounts; it may not really be that much, or may be misreported, but until we know better I'll take that as the number), but that could still be around 10 watts of power draw. On a laptop with a mobile RTX GPU and, say, a 50 Wh battery, that's a fifth to a quarter of your capacity used up per hour or so, just for that feature.
GPU power usage doesn't scale linearly with utilization. At low utilization, GPUs run at lower clocks, which is significantly more efficient.

I suspect Realtek, etc., will eventually integrate such features into their audio codec companion chips relatively easily, and probably more efficiently if less general-purpose and more tweaked for audio use
Could be, but audio chips are cheap and tiny. Adding AI horsepower in the realm of 10% of a GPU (or even less) would add considerable cost to their solution.

Also, to the point about costs, they tend to be fabbed on much older, less power-efficient manufacturing nodes.

Also, it would be interesting (seeing as Discord / MS Teams are looking into doing it already, not just Nvidia) whether the overhead of a normal CPU performing the calculation is low enough that it doesn't actually impact power usage any more than a GPU-based approach does.
GPUs, especially using tensor cores, are vastly more efficient at inferencing than CPUs. Even with specialized VNNI extensions, Intel's AVX-512 CPUs are no match for GPUs.

So, if your concern is about power-efficiency, then it belongs in a GPU. Whether an iGPU or dGPU, a GPU-based solution will be preferable to a CPU-based one.

'AI' cores - let me know when real 'AI' actually appears that doesn't need thousands of iterations of learning, etc., and can adapt to changes without needing additional 're-training' - until then this is just another 'programmed intelligence' example.
It's probably best to distinguish between AI and Deep Learning. Nvidia's tech is legitimately deep learning (which usually requires on the order of a hundred or so iterations to converge - not thousands).

Programmed intelligence is something different, and shouldn't be confused with deep learning.


it pains me that there is a large area of silicon reserved in my phone essentially to make a picture you've taken (where the imaging sensor and processor themselves are optimised to be as accurate and clear as possible despite their sizes) become less accurate, clear, or realistic
Well, now that same silicon might be used to improve your call quality.

On a side note: it will be funny if someone uses it whilst doing a YouTube or other video review of a keyboard and tries to capture the sound difference between a certain mechanical keyboard vs another, or a membrane one... doh
: )
 
Reactions: vincero

bit_user

Titan
Ambassador
Output is a different story however, since I cannot get it to work in any circumstances. Doesn't matter if I'm using speakers or headphones, or if I'm actually telling it to filter the background noise or not- I get absolutely nothing from the other end (tried with Skype, Discord, and Steam, though I think only the first two out of those three are actually officially supported).
I would poke around in your mixer settings and make sure there's not something muted or set to low volume. Sometimes you have to click around to find the mixer settings for different devices...
 
  • Like
Reactions: GenericUser

vincero

Distinguished
Mar 9, 2010
15
6
18,515
Good point.


If it does even use the tensor cores, then you should know that their power-efficiency is much closer to that of hard-wired logic than CUDA cores.


GPU power usage doesn't scale linearly with utilization. At low utilization, GPUs run at lower clocks, which is significantly more efficient.


Could be, but audio chips are cheap and tiny. Adding AI horsepower in the realm of 10% of a GPU (or even less) would add considerable cost to their solution.

Also, to the point about costs, they tend to be fabbed on much older, less power-efficient manufacturing nodes.


GPUs, especially using tensor cores, are vastly more efficient at inferencing than CPUs. Even with specialized VNNI extensions, Intel's AVX-512 CPUs are no match for GPUs.

So, if your concern is about power-efficiency, then it belongs in a GPU. Whether an iGPU or dGPU, a GPU-based solution will be preferable to a CPU-based one.


It's probably best to distinguish between AI and Deep Learning. Nvidia's tech is legitimately deep learning (which usually requires on the order of a hundred or so iterations to converge - not thousands).

Programmed intelligence is something different, and shouldn't be confused with deep learning.



Well, now that same silicon might be used to improve your call quality.


: )

Here's hoping that silicon could be put to better use than just fake bokeh, etc. - something more universally useful, like doing audio processing for calls, all apps, etc.

As for RTX Voice GPU utilisation: without a lot more info about power usage vs workload, it's all fudged numbers; either way, it's not an insignificant power difference. From all user reports, the GPU clocks go to their top P-state, although this could be for performance / quality reasons, such as ensuring frequency scaling doesn't cause odd audio artefacts - that alone will increase power draw, and may be avoided in future.

Yeah, the audio codec chips are cheap, but industry demands do influence their designs - otherwise I doubt there would have been much of a push beyond 16-bit 48 kHz AC'97 as quickly - luckily Blu-ray / HD DVD pushed the window forward.
If this smart noise reduction becomes a new buzz feature, they'll find a way to squeeze it in, even if they have to buy AI IP blocks from others. It could also mean an extension to audio APIs and 3D audio / gaming capabilities, trying to improve HRTF audio processing by actually using analysis of real-life environments instead of just recreating audio reflection delays and reverb effects, etc., so that could be an overall win.

Can't argue that for large workloads GPUs are more efficient, but in efficiency terms you can hit diminishing returns - if the entire GPU has to go to its top P-state instead of just one tensor/SM/CUDA module (however they've done it), then you are inherently doing it less efficiently - at this point Intel can clock/power-gate a single core easily, and this particular task doesn't seem to be a massive undertaking.

'Programmed intelligence' is more of a personal counter-term I use for a lot of the not-really-AI type of 'AI' stuff. In reality, the 'AI' net has been cast a bit too wide; there are several cases where it's just an algorithm that can't get past its training, as opposed to an algorithm that can self-adapt its parameters and expand its trained base, etc.
 

GenericUser

Distinguished
Nov 20, 2010
296
140
18,990
@GenericUser For me, to make it work on output (to filter the sound of a YouTube video, for example): in Windows I set "NVIDIA RTX Voice" as the output device, and in the RTX Voice application I choose the device where I actually want to hear the audio ("Speakers").
I would poke around in your mixer settings and make sure there's not something muted or set to low volume. Sometimes you have to click around to find the mixer settings for different devices...

While it turns out I did indeed have the mixer settings for it set to 0, it also turns out I overlooked an extremely basic but important detail in the specifications: the Windows 10 minimum requirement. Still being on Windows 7 (I know, I know) looks like the probable cause of my issues in that regard, though filtering input has given me no problems. Basically, I should have RTFM'd a little harder.

In any case, I appreciate the advice and suggestions.
 
Reactions: bit_user