News: As CPU Materials Get Thinner, Security Risks Grow - Report

Spectre and friends have nothing to do with chips getting 'thinner' or radiating more stuff; they are timing-based attacks on the architecture that span products from 10nm through 100+nm.

The more complex architectures get, the more likely they are to have some unforeseen interactions between features that can at least hypothetically be exploited.
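
To make the 'timing-based' part concrete, here's a minimal sketch of the FLUSH+RELOAD-style probe that Spectre-class attacks are built on. It assumes an x86-64 CPU and a GCC/Clang toolchain (compile with something like gcc -O2 probe.c), and it only shows that a cache hit and a cache miss can be told apart purely by timing - which is exactly why these attacks don't care what process node the chip was made on:

```c
/*
 * Minimal FLUSH+RELOAD-style timing probe, the building block behind
 * Spectre-class attacks. It only demonstrates that a cache hit and a
 * cache miss are distinguishable by timing; it measures microarchitectural
 * behaviour, not RF emissions, so it works the same on any process node.
 * Assumes an x86-64 CPU with clflush/rdtscp and a GCC/Clang toolchain.
 */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

static uint8_t probe[4096];

/* Time a single load of *addr in TSC ticks. */
static uint64_t time_load(volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                       /* the access being timed */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    volatile uint8_t *target = &probe[0];

    /* Case 1: line flushed from the cache -> slow load (miss). */
    _mm_clflush((const void *)target);
    _mm_mfence();
    uint64_t miss = time_load(target);

    /* Case 2: line just touched -> fast load (hit). */
    (void)*target;
    _mm_mfence();
    uint64_t hit = time_load(target);

    printf("cache miss: %llu ticks, cache hit: %llu ticks\n",
           (unsigned long long)miss, (unsigned long long)hit);
    return 0;
}
```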
 
Spectre and friends have nothing to do with chips getting 'thinner' or radiating more stuff; they are timing-based attacks on the architecture that span products from 10nm through 100+nm.
If you read closely, he was using them to make the case that Intel hasn't been prioritizing security - not saying they were at all related to manufacturing tech. So, to the extent that mitigating these RF and other sorts of vulnerabilities requires a design-for-security mentality, we're still short on evidence that Intel is up to the challenge.
 
So, to the extent that mitigating these RF and other sorts of vulnerabilities requires a design-for-security mentality, we're still short on evidence that Intel is up to the challenge.
Are RF side-channel exploits really a thing? In the real world, CPUs and GPUs are under a heatsink, so reading a chip via RF requires removing the heatsink and the IHS (where applicable) first. When you have physical access to a chip, there aren't many limits to what you can do. If you are desperate enough to get the cryptographic root keys out of a secure-enclave-type subsystem, you can hypothetically lap the chip to expose the memory cells holding the keys and read them with something like an atomic force microscope.
 
Are RF side-channel exploits really a thing?
I don't know. I'm just explaining how I read the article.

The original article has a lot more detail:


I think it's healthy for some (like DARPA) to err on the side of paranoia, rather than complacency. Anyway, post up your thoughts, if you decide to read it.

If you are desperate enough to get the cryptographic root keys out of a secure-enclave-type subsystem, you can hypothetically lap the chip to expose the memory cells holding the keys and read them with something like an atomic force microscope.
Yeah, sounds reasonable, though I'm hardly a microelectronics engineer.

But, what if you're trying to eavesdrop on some key that's not burned into the on-chip ROM? Maybe it's sent over the network, and the code is executing in ring-zero and using an encrypted memory segment.
 
But, what if you're trying to eavesdrop on some key that's not burned into the on-chip ROM? Maybe it's sent over the network, and the code is executing in ring-zero and using an encrypted memory segment.
The part of the paper concerning thinner chips is about how thinner chips make it easier to observe chip operation for reverse-engineering and data extraction purposes. I'm pretty sure you are going to notice if someone attaches a 1000kg optical bench to your phone's SoC for an electroluminescence attack. Those aren't drive-by RF/optical/acoustic/etc. exploits, merely cheaper, faster and less destructive alternatives to atomic force microscopy and other similar means of reverse-engineering and extracting data from chips.

The implication here is simply that secrets buried in silicon can't be considered as secret for much longer, at least not without further obfuscation efforts.
 
What a funny world we live in. Damn arms race for security. Reports of people losing funds or getting hacked are going to become more prevalent then.

One person acting maliciously is enough to ruin the lives of a thousand. The problem with this absolute-paranoia approach is that it's going to ripple throughout all parts of security.

Do you know what's going to happen when trust falls so completely that you have to be suspicious of everyone all the time?
 
Doesn't make any sense to me. I do agree that more noise is made. But the whole server is sealed in a big metal box. Then it's in a rack and in a room... Someone has to be physically there to observe the noise.

And then, hackers usually do it remotely. There is no way to monitor such noise remotely (without installing sensors on-site).

If a hacker could get to your server physically, side channels and all that blah blah aren't the thing you would be worried about.
 
Doesn't make any sense to me. I do agree that more noise is made. But the whole server is sealed in a big metal box. Then it's in a rack and in a room... Someone has to be physically there to observe the noise.
Unless your server's hardware is unique in the world, it likely contains multiple chips that share common secrets with every other similar chip in the world. Hackers don't need to get your specific server, they only need to get any of those other chips to expose those common secrets and build from there.

In the case of breaking through Apple's Secure Enclave, you can buy a bunch of iPhones and use them to train your bench to figure out where the bits you need access to for extracting password hints are on the SoC before making any attempts on your victim device(s).

The point is that designs that were formerly thought to be pretty secure may not be considered secure for much longer.
 
Spectre and friends have nothing to do with chips getting 'thinner' or radiating more stuff; they are timing-based attacks on the architecture that span products from 10nm through 100+nm.

The more complex architectures get, the more likely they are to have some unforeseen interactions between features that can at least hypothetically be exploited.

Yes exactly - the article conflates two entirely different things.

I'm pretty sure the fear of EMR leakage that may potentially leak "data" is still very much in the speculation phase - it's based around the idea that the thinner the nodes and materials get, the more noise radiates. I have a hard time believing this has actually been demonstrated yet, and the devices needed to take advantage of such a thing probably don't exist ... yet (if ever).

Side-channel attacks and all that are something entirely different - not related to EMR leakage at all ...
 
it likely contains multiple chips that share common secrets with every other similar chip in the world. Hackers don't need to get your specific server, they only need to get any of those other chips to expose those common secrets and build from there.
The same could be true not only of secrets buried in the silicon, but also secrets stored in encrypted memory. It's not the best example, but think about content protection keys (i.e. DRM), for instance.

And in order to extract this runtime data, you could use any CPU that was supported by the software you were trying to hack, meaning IP thieves only need one CPU design that was easier to snoop than the others.
 
It's not the best example, but think about content protection keys (i.e. DRM), for instance.
Exactly, DRM was the first thing that came to my mind. You can't have standards that rely on some form of embedded-key cryptography (CSS, AACS, HDCP, etc.) if silicon cannot be trusted to keep those keys private anymore.

Secrets only need to leak once to ruin security through obscurity; DRM keys are a simple example of how this can play out.
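
As a toy illustration of the 'leak once, broken everywhere' problem, here's a sketch where every device of a model ships with the same class key and the content key is simply wrapped with it. The XOR 'cipher' and all the values are made-up stand-ins for the real schemes (CSS/AACS/HDCP all differ in detail), but the structural weakness is the same: pull the class key out of any one sacrificial device and the wrapped content key falls everywhere:

```c
/*
 * Toy illustration of class-wide DRM keys. The XOR "cipher" is a stand-in
 * for the real key-wrapping cryptography; all names and values are invented
 * for illustration. The point: the wrapped content key depends only on a
 * key shared by every device of the class, so extracting it from one
 * sacrificial (delidded) device unlocks the content for everyone.
 */
#include <stdio.h>
#include <stdint.h>

#define KEY_LEN 8

/* The same class key is burned into every device of this model. */
static const uint8_t class_key[KEY_LEN] =
    { 0xDE, 0xAD, 0xBE, 0xEF, 0x01, 0x02, 0x03, 0x04 };

/* Stand-in for the real key-unwrapping algorithm. */
static void xor_unwrap(const uint8_t *wrapped, const uint8_t *key,
                       uint8_t *out)
{
    for (int i = 0; i < KEY_LEN; i++)
        out[i] = wrapped[i] ^ key[i];
}

int main(void)
{
    /* Wrapped content key as it would ship with the protected media. */
    const uint8_t wrapped_content_key[KEY_LEN] =
        { 0x9B, 0xE8, 0xFD, 0xAA, 0x45, 0x67, 0x89, 0xAB };

    /* An attacker who extracted class_key from ONE device can now recover
     * the content key without ever touching anyone else's unit. */
    uint8_t content_key[KEY_LEN];
    xor_unwrap(wrapped_content_key, class_key, content_key);

    printf("recovered content key: ");
    for (int i = 0; i < KEY_LEN; i++)
        printf("%02X", content_key[i]);
    printf("\n");
    return 0;
}
```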
 
Unless your server's hardware is unique in the world, it likely contains multiple chips that share common secrets with every other similar chip in the world. Hackers don't need to get your specific server, they only need to get any of those other chips to expose those common secrets and build from there.

In the case of breaking through Apple's Secure Enclave, you can buy a bunch of iPhones and use them to train your bench to figure out where the bits you need access to for extracting password hints are on the SoC before making any attempts on your victim device(s).

The point is that designs that were formerly thought to be pretty secure may not be considered secure for much longer.

The article isn't really aimed at servers; it's more about IoT, phones, etc., where they are easily exposed to EM probes.

It is currently impossible to hack a server remotely using this EM radiation approach. The most basic thing you need is to be able to gather the EM radiation and analyze it, and it is currently impossible to capture the EM radiation without any equipment. No hacks are able to utilise an existing component in a server as an EM probe. Maybe the voltage sensor could be used, but it's extremely inaccurate and the noise would drown out the readings. Or the fan monitoring sensor... but all are very inaccurate.

Although a hacker could use a probe to take measurements, I doubt someone could get physically close enough to the server to install one.
 
Btw, regardless of the EM radiation, a simple EM shield will solve the problem.
Not really: if your chip has technology that relies on encryption keys built into the chip that are common to all similar chips (ex.: practically all DRM schemes), a hacker can buy his own chip, delid it, extract secrets from his own chip, then use those to hack your system.

If an intelligence agency wants to unlock your phone, it can buy a couple of identical phones, get access to their SoCs, use them to train its equipment on monitoring the key points needed to extract keys from the device or run an offline password search, then use this expertise to unlock your device without busting the maximum number of tries before self-reset.

The article isn't about remote hacking; it is about how formerly impractical ways of reverse-engineering data out of chips are becoming cheaper and more accurate as chips get smaller and thinner, to the point that the die can be back-probed. The article mentions that one of the "side-channel" methods is luminescence of semiconductor junctions; you obviously wouldn't be able to exploit that without line-of-sight access to the bare die and cementing the chip to an optical bench.
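
To show why the extraction step is the whole battle, here's a toy sketch of the offline password search: once the verifier value has been read out of the secure element, the retry counter no longer applies and a short PIN falls to brute force almost instantly. The FNV-1a 'KDF', the PIN, and the verifier are all invented for illustration - real devices use much slower, hardware-entangled key derivation, but the principle is the same:

```c
/*
 * Sketch of why extracting key material enables an offline password search:
 * once the verifier is out of the secure element, the retry counter no
 * longer applies and a 4-digit PIN falls to brute force in moments.
 * FNV-1a stands in for the real (much slower, hardware-entangled) KDF;
 * all values here are invented for illustration.
 */
#include <stdio.h>
#include <stdint.h>

static uint64_t toy_kdf(const char *pin)
{
    uint64_t h = 0xcbf29ce484222325ULL;   /* FNV-1a offset basis */
    for (const char *p = pin; *p; p++) {
        h ^= (uint8_t)*p;
        h *= 0x100000001b3ULL;            /* FNV prime */
    }
    return h;
}

int main(void)
{
    /* Verifier the attacker recovered from a delidded/back-probed chip
     * (here just computed from a hypothetical PIN for the demo). */
    const uint64_t extracted_verifier = toy_kdf("7291");

    char guess[8];
    for (int pin = 0; pin <= 9999; pin++) {   /* try every 4-digit PIN */
        snprintf(guess, sizeof guess, "%04d", pin);
        if (toy_kdf(guess) == extracted_verifier) {
            printf("PIN recovered offline: %s\n", guess);
            break;
        }
    }
    return 0;
}
```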
 
I wonder if they could do something like intentionally adding some jitter to their clock signals, to make snooping harder.
Doesn't work like that; all that would do is make the clock itself insanely more complex, since it would need an RNG that can operate at least twice as fast as the CPU itself. And with that, the CPU would be slower by 25%-75%. The RNG would also need to be completely unpredictable, or else it could be reverse engineered.
If you were to build a CPU like this, it still wouldn't be any more secure than a CPU without the "jitter": if the "jitter" affected programs running on the CPU in any way, the whole thing would fundamentally not work.
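
For what it's worth, another commonly cited reason random-delay or jitter countermeasures buy little is purely statistical: zero-mean jitter averages out, so an attacker just needs more traces. Here's a quick numerical sketch (all numbers invented) of a 2-cycle secret-dependent timing difference surviving roughly 0-100 cycles of uniform jitter:

```c
/*
 * Quick numerical sketch of why random clock/delay jitter is a weak
 * side-channel defence: jitter is (roughly) zero-mean noise, so averaging
 * enough samples recovers the secret-dependent timing difference anyway.
 * The numbers (2-cycle secret leak, 0-99 cycles of uniform jitter,
 * 100000 samples) are made up purely for illustration.
 */
#include <stdio.h>
#include <stdlib.h>

#define SAMPLES 100000

/* Simulated measurement: base latency + 2 extra cycles when secret_bit
 * is 1, plus uniform random jitter in [0, 100). */
static double measure(int secret_bit)
{
    return 300.0 + 2.0 * secret_bit + (rand() % 100);
}

static double average(int secret_bit)
{
    double sum = 0.0;
    for (int i = 0; i < SAMPLES; i++)
        sum += measure(secret_bit);
    return sum / SAMPLES;
}

int main(void)
{
    srand(1);
    double avg0 = average(0);
    double avg1 = average(1);
    /* The ~2-cycle gap survives the jitter once enough traces are averaged. */
    printf("avg (bit=0) = %.2f, avg (bit=1) = %.2f, gap = %.2f\n",
           avg0, avg1, avg1 - avg0);
    return 0;
}
```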
 