Researchers: GPUs Can Be Used for Digital Fingerprinting and Web Tracking

Web API calls for things like the number of monitors, resolution, OS version, browser version, patches, plugins, supported features, security monikers, and installed fonts are a lot more effective and harder to bypass, unless you install a clean dummy browser designed to avert such attacks.
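For a sense of what that kind of attribute collection looks like, here is a minimal TypeScript sketch that reads a handful of standard Web APIs and hashes them into a single identifier. The property selection and the SHA-256 hashing are illustrative assumptions, not any particular tracker's code:

```ts
// Gather coarse browser/hardware attributes that fingerprinting scripts
// commonly combine, then hash them into one compact identifier.
async function collectAttributes(): Promise<string> {
  const attrs = [
    navigator.userAgent,                                      // browser + OS version string
    navigator.language,
    String(navigator.hardwareConcurrency),                    // logical CPU count
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // monitor resolution / depth
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    Array.from(navigator.plugins, p => p.name).join(","),     // installed plugins
  ];
  const data = new TextEncoder().encode(attrs.join("|"));
  const digest = await crypto.subtle.digest("SHA-256", data);
  return Array.from(new Uint8Array(digest), b => b.toString(16).padStart(2, "0")).join("");
}
```

The usual weakness of this attribute-based approach is that any single value changing (a browser update, a new monitor) changes the hash, which is where a hardware-level signal like the GPU timing in the article comes in.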
 
I would think the analyzer would need various runs of consistent data to build a "model" of your hardware that denotes you as that individual. Randomly accessing a spiked page wouldn't necessarily build a database of your hardware. Also, in theory you could script your hardware to randomize GPU, memory, and frequency boost levels so that the hardware presented would never give exactly the same results to the same image processing, or whatever it's doing. There is also the option of disabling the browser's hardware acceleration so it doesn't use the GPU at all.
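If you really wanted to script that clock randomization, something like the following Node/TypeScript sketch could do it on NVIDIA cards that support clock locking. It assumes nvidia-smi's -lgc / --lock-gpu-clocks option is available (Turing or newer, admin rights required); the clock range and the 30-second interval are arbitrary:

```ts
// Periodically re-roll a random GPU clock cap so a timing workload never sees
// exactly the same sustained frequency twice. Requires a driver/GPU that
// supports clock locking; run with sufficient privileges.
import { execFileSync } from "node:child_process";

function randomizeGpuClockCap(minMhz = 1200, maxMhz = 1800): void {
  const cap = minMhz + Math.floor(Math.random() * (maxMhz - minMhz));
  // -lgc is the short form of --lock-gpu-clocks=<min>,<max>
  execFileSync("nvidia-smi", ["-lgc", `${minMhz},${cap}`]);
}

setInterval(randomizeGpuClockCap, 30_000); // re-roll every 30 seconds
```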
 
I would think the analyzer would need various runs of consistent data to build a "model" of your hardware that denotes you as that individual. Randomly accessing a spiked page wouldn't necessarily build a database of your hardware.
The technique doesn't need a "database of your hardware", only the performance characteristics of a single shader program running on the GPU for ~150ms.

Given how small the variances being measured might be, I'd be surprised if the accuracy didn't get thrown off by people having a bunch of background processes also using the GPU.
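For context, a stripped-down TypeScript/WebGL illustration of the kind of measurement being described: run a fixed, heavy fragment-shader workload and time how long the GPU takes to finish it. The shader, canvas size, and loop count are placeholders, not the operators used in the actual research:

```ts
// Time a fixed fragment-shader workload on the GPU. Error handling is omitted
// to keep the sketch short; readPixels forces the GPU to finish the work
// before the clock is stopped.
function timeShaderWorkload(): number {
  const canvas = document.createElement("canvas");
  canvas.width = canvas.height = 256;
  const gl = canvas.getContext("webgl")!;

  const vsSrc = `
    attribute vec2 pos;
    void main() { gl_Position = vec4(pos, 0.0, 1.0); }`;
  const fsSrc = `
    precision mediump float;
    void main() {
      float acc = 0.0;
      for (int i = 0; i < 1000; i++) {   // fixed busy loop per pixel
        acc += sin(float(i) * 0.001);
      }
      gl_FragColor = vec4(fract(acc), 0.0, 0.0, 1.0);
    }`;

  const compile = (type: number, src: string) => {
    const shader = gl.createShader(type)!;
    gl.shaderSource(shader, src);
    gl.compileShader(shader);
    return shader;
  };
  const prog = gl.createProgram()!;
  gl.attachShader(prog, compile(gl.VERTEX_SHADER, vsSrc));
  gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fsSrc));
  gl.linkProgram(prog);
  gl.useProgram(prog);

  // One oversized triangle covers the viewport, so every pixel runs the loop.
  gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([-1, -1, 3, -1, -1, 3]), gl.STATIC_DRAW);
  const loc = gl.getAttribLocation(prog, "pos");
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

  const t0 = performance.now();
  gl.drawArrays(gl.TRIANGLES, 0, 3);
  gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, new Uint8Array(4));
  return performance.now() - t0;
}
```

Run many times, it is the distribution of these timings, rather than any single sample, that would carry the per-chip signal, which is also why background GPU load adds noise to it.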
 
The technique doesn't need a "database of your hardware", only the performance characteristics of a single shader program running on the GPU for ~150ms.

Given how small the variances being measured might be, I'd be surprised if the accuracy didn't get thrown off by people having a bunch of background processes also using the GPU.
Very astute. I was wondering the same thing, especially if you are running low-end hardware like a Vega 11/8 APU.
 
"Khronos, the non-profit organization responsible for the development of the WebGL library, has already formed a technical group which is currently exploring solutions to mitigate the technique. "

All they need to do is introduce an option called "incognito mode" which, when selected, introduces a random delay into every defined function before it finishes its task.
Granted, it will make performance suffer a little, but I think it's a worthwhile trade-off.
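A minimal sketch of that idea at the timer level, in TypeScript: coarsen and jitter the high-resolution clock such scripts rely on. The 100 µs granularity and the jitter range are arbitrary assumptions; browsers already clamp performance.now() in a similar spirit:

```ts
// Replace performance.now() with a coarsened, jittered version so callers
// cannot measure sub-bucket timing differences in a single reading.
const realNow = performance.now.bind(performance);
performance.now = (): number => {
  const granularityMs = 0.1;                       // round to 100 µs buckets
  const jitterMs = Math.random() * granularityMs;  // random delay within a bucket
  return Math.floor(realNow() / granularityMs) * granularityMs + jitterMs;
};
```

As a later reply points out, though, noise of this kind can potentially be averaged away if the script is allowed to repeat its measurement enough times.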
 
Intel iGPUs are set to run well below their limits. For example, the iGPU on my 12700K can be overclocked from 1,500 MHz to 2,000 MHz on stock volts. If most are like this, almost all of them will have little trouble running a website's workload pegged at 100% performance with no significant variation.
 
Intel iGPUs are set to run well below their limits. For example, the iGPU on my 12700K can be overclocked from 1,500 MHz to 2,000 MHz on stock volts. If most are like this, almost all of them will have little trouble running a website's workload pegged at 100% performance with no significant variation.
It may not look like a "significant variation" in GPU benchmarks, but it can still be a statistically significant enough signature to be relatively unique.

For example, every reference clock crystal has a different frequency, since it is nearly impossible to create atomically identical crystals, and what few atomic twins may exist are unlikely to operate in identical environments. As long as you have a sufficiently accurate time base to measure the crystals' frequency and jitter with the necessary precision, you could hypothetically uniquely identify every one of them.
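A toy TypeScript illustration of that crystal argument: two oscillators that differ by only 20 parts per million become distinguishable once enough noisy period measurements are averaged. All numbers here are made up for illustration:

```ts
// Average many jittery period measurements from two nearly identical 10 MHz
// crystals; the 20 ppm offset is invisible per sample but emerges in the mean.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function samplePeriodsNs(periodNs: number, jitterNs: number, n: number): number[] {
  // Rough Gaussian jitter via a sum of uniforms (central limit shortcut).
  return Array.from({ length: n }, () => {
    const noise = (Math.random() + Math.random() + Math.random() - 1.5) * jitterNs;
    return periodNs + noise;
  });
}

const crystalA = samplePeriodsNs(100.000, 0.2, 200_000); // nominal 10 MHz
const crystalB = samplePeriodsNs(100.002, 0.2, 200_000); // "twin", 20 ppm slower
console.log(mean(crystalA).toFixed(4), mean(crystalB).toFixed(4));
// Per-sample jitter (~0.1 ns std) dwarfs the 0.002 ns offset, but the standard
// error of the mean shrinks as 1/sqrt(n), so the two averages pull apart.
```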
 
It may not look like a "significant variation" in GPU benchmarks, but it can still be a statistically significant enough signature to be relatively unique.

For example, every reference clock crystal has a different frequency, since it is nearly impossible to create atomically identical crystals, and what few atomic twins may exist are unlikely to operate in identical environments. As long as you have a sufficiently accurate time base to measure the crystals' frequency and jitter with the necessary precision, you could hypothetically uniquely identify every one of them.

That's why introducing a random delay for every function call is the best way to avoid this kind of fingerprinting technique.
 
That's why introducing a random delay for every function call is the best way to avoid this kind of fingerprinting technique.
In most programming, there is a nearly infinite number of ways to achieve any given goal, especially when performance is non-critical. Noise from attempts to fudge function call timings and performance timers can possibly be averaged out over multiple iterations. I have no doubt there are many other potential timing side-channels besides the obvious ones.
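A quick TypeScript sketch of that averaging point: injected jitter shrinks as 1/sqrt(iterations), while the device-specific offset stays put. The figures are illustrative only:

```ts
// Two hypothetical devices whose true workload times differ by 0.05 ms,
// each hidden behind +/-1 ms of injected timer jitter per measurement.
function noisyMeasure(trueMs: number, jitterMs: number): number {
  return trueMs + (Math.random() - 0.5) * jitterMs;
}

function averagedEstimate(trueMs: number, jitterMs: number, iterations: number): number {
  let sum = 0;
  for (let i = 0; i < iterations; i++) sum += noisyMeasure(trueMs, jitterMs);
  return sum / iterations;
}

console.log(averagedEstimate(150.00, 2, 10_000).toFixed(2)); // ~150.00
console.log(averagedEstimate(150.05, 2, 10_000).toFixed(2)); // ~150.05
```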
 
It may not look like a "significant variation" in GPU benchmarks, but it can still be a statistically significant enough signature to be relatively unique.

For example, every reference clock crystal has a different frequency, since it is nearly impossible to create atomically identical crystals, and what few atomic twins may exist are unlikely to operate in identical environments. As long as you have a sufficiently accurate time base to measure the crystals' frequency and jitter with the necessary precision, you could hypothetically uniquely identify every one of them.
After reading the source research, it seems you are right. I was hoping the browser Meltdown or Spectre mitigations would help, but apparently not. And using GPU passthrough and other trickery (like running a dGPU through an iGPU with CMAA post-processing applied, or using one dGPU as the output for a different dGPU) that can sometimes fool benchmarking software would just make you stand out more here, since it produces a very unique identifying response.
Meh, maybe I'll just get an eGPU adapter setup and plug in some old GPU when I want temporary anonymity from this. Let the tracker see another "never before seen" signature for a short while.
 
The digital fingerprint of a Radeon 6500 XT looks similar to
[attached image]
 
This is easy to spoof, and it's easy to modify the GPU's output, so I don't think the comparison to a human fingerprint is accurate.
Interesting article, but I'm not concerned.