News Class-Action Lawsuit Forming Against Intel for 'Downfall' Chip Bug

Most of these side-channel attacks are largely theoretical and practically impossible to pull off under real-world conditions, where systems are juggling tens of thousands of threads spanning dozens of unrelated background processes, applications, system services, etc., and there is no guarantee that the desired data is in flight through the victim algorithm at any particular time for the attack code to get anything back.

Since all of these side-channel attacks require extensive abuse of CPU time to have even a chance of succeeding, a successful exploit also requires that nobody catches the rogue process consuming a disproportionate amount of CPU time - something that should prompt performance complaints along with a substantial increase in system power draw.

Most systems don't handle data anywhere near sensitive enough to worry about these. For environments where hypothetical side-channel attacks are unacceptable, we'll need different CPUs designed specifically to eliminate all potential crosstalk between unrelated threads.

For most other uses, all that is really necessary is to give software developers the ability to mark security-critical sections of their software so that unrelated code cannot run on the same CPU/L2 until the protected section exits - you can't side-channel-attack code that you aren't allowed to run concurrently with.
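The protected-section primitive described above doesn't exist in today's CPUs; the closest software-side approximation I'm aware of is pinning the sensitive work onto cores that nothing else is scheduled on (e.g. cores reserved at boot with Linux's isolcpus= or a dedicated cpuset). A minimal Python sketch of that idea, with the reserved core numbers and the workload being made-up placeholders:

```python
import os

# Assumption: cores 6 and 7 (and their SMT siblings, if any) were reserved
# at boot - e.g. isolcpus=6,7 - so the scheduler keeps unrelated threads off them.
ISOLATED_CORES = {6, 7}

def run_protected_section(work):
    """Run `work` with the current process pinned to the reserved cores,
    then restore the original CPU affinity."""
    original = os.sched_getaffinity(0)        # 0 = the calling process
    os.sched_setaffinity(0, ISOLATED_CORES)   # move onto the reserved cores
    try:
        return work()                         # e.g. key derivation, signing
    finally:
        os.sched_setaffinity(0, original)     # hand the cores back

# Usage sketch (derive_session_key is a placeholder):
# session_key = run_protected_section(lambda: derive_session_key(material))
```

Unlike the hardware feature proposed above, this only helps if the reserved cores' SMT siblings and shared caches aren't touched by anything untrusted, which is exactly why a proper architectural mechanism would be nicer.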
 
Resilience against side-channel attacks is not a quality that was sold, so I don't see a logical basis for damage compensation.

But juries vote rather than deduce, so who knows where this is going. It won't help anyone if CPU makers are driven out of business entirely.

Verifying that a speculative design has no side-channel vulnerabilities seems theoretically impossible - it sounds very much like an instance of Emil Post's correspondence problem - and even "reasonable assurance" is still nigh unaffordable.

Without all those speculation tricks, CPUs can only go wide like GPU cores, and that means that without a massive rewrite of all existing code, speeds will badly disappoint.

This will drive a large performance and technology wedge between designs that don't need to care about side channels because they are not shared, and designs that potentially share resources.

If speculation could be toggled on the fly so that certain code passages are excluded, or if caching domains could be segregated more carefully, that might also help in cloud scenarios.

But being successfully sued for something nobody asked you to design for should be reserved for very exceptional cases.
 
Well, here's the real problem... If you start allowing damages for bugs, then the entire hardware and software industries shut down. Poof. Gone. No more hardware. No more software. Both are now so complex that it is impossible not to have a number of fatal flaws. Just not gonna happen. The buyer has to accept the risk, otherwise they get nothing.
 
It seems to me (and please correct me if I'm wrong) that you are conflating "speculative execution" with "side-channel attacks".
Because it is a side channel: Downfall poses no threat at all without malicious code somewhere else actively attempting to scoop up whatever data it may be leaking, and successfully extracting data still requires the attacking thread to get lucky with what it can peek at while relevant data is being processed and then infer the target content from that. No plain-text user-space data is being directly leaked.

Well, here's the real problem... If you start allowing damages for bugs, then the entire hardware and software industries shut down.
You do want to allow accountability for bugs that make a product fundamentally unsuitable for its intended purpose, like the Pentium FDIV bug that made affected FPUs effectively worthless.

Mostly hypothetical security bugs like Downfall and Heartbleed don't really affect anyone besides people who run high-security applications on x86 hardware, and if you run such stuff, you should be closely monitoring your systems for unidentified processes as your fourth line of defence, after strict firewalling, strict access-control lists, and skimming logs for suspicious connection/login attempts that warrant preemptive action.
 
To my knowledge, they never claimed it would be completely bug-free and it's simply not possible to predict how much of a performance impact an unknown future mitigation is going to cost. At best, you could give a worst case for performance if you completely disable speculative execution but that's still leaving unknown variables on the table. So, yeah, it sucks that you have to choose between maximum security vs the original performance but what would be the reasonable alternative?
 
So, yeah, it sucks that you have to choose between maximum security vs the original performance but what would be the reasonable alternative?
If you need the highest security possible, don't allow your security-critical code and services to run on a machine that also runs arbitrary user code such as a virtualized server instance. If you don't allow any unknown code on your security-critical servers, side-channel exploits are irrelevant.
 
Well, here's the real problem... If you start allowing damages for bugs, then the entire hardware and software industries shut down. Poof. Gone. No more hardware. No more software. Both are now so complex that it is impossible not to have a number of fatal flaws. Just not gonna happen. The buyer has to accept the risk, otherwise they get nothing.
It depends. I believe it has to be decided on a case-by-case basis, as it is today.

See for instance the MOVEit disaster (https://www.darkreading.com/search?q=MoveIt) - all those consequences from a stupid SQL injection. IMHO, software makers should be held responsible for NOT preventing SQL injections.
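For what it's worth, the whole SQL-injection class of bug is preventable by never splicing user input into SQL text. A minimal sketch with Python's built-in sqlite3 module (the table and query are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn, name):
    # Vulnerable pattern: f"SELECT ... WHERE name = '{name}'" lets the caller
    # smuggle SQL in through `name`. A parameterized query passes `name`
    # strictly as data, so it can never change the statement's structure.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection payload is now just an unusual, non-matching name:
print(find_user(conn, "alice' OR '1'='1"))   # [] instead of dumping the table
print(find_user(conn, "alice"))              # [('alice', 'admin')]
```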

Car makers have been sued for defects and car makers are still making cars.
 
Well, here's the real problem... If you start allowing damages for bugs, then the entire hardware and software industries shut down. Poof. Gone. No more hardware. No more software. Both are now so complex that it is impossible not to have a number of fatal flaws. Just not gonna happen. The buyer has to accept the risk, otherwise they get nothing.
^^
This.

When I studied software engineering some 40 years ago, the first thing put to us was that any program of more than four lines contains at least one bug, and that it is mathematically impossible to prove that any piece of code is bug-free. Hardware works the same way.

The proof of this is simple. First, any editors, compilers/assemblers, debuggers, etc. must be proven free of all errors. Then the hardware they run on must be proven. Then the operating system and storage systems must be proven, and so on. All possible interactions and permutations must be accounted for and tested.

The number of possible permutations rapidly becomes, for all practical purposes, infinite.
 
In that sense, customers see a performance regression that has nothing to do with how they use the product, but which has everything to do with the (bugged) features Intel (and others) build into their products.
What bugged feature? The speculative execution and load features work fine. The only problem is that some architectural registers, which weren't thought to be of significance at the time the feature (gather registers) was implemented, turned out to be a potential information-leak vector.

As I have written before, the flaw is of no consequence in a tightly buttoned-up system where no foreign code is allowed. Simply having software monitor for suspect CPU usage would be enough to catch malware attempting side-channel attacks before it gets a chance to achieve anything.

CPU-level performance-sapping mitigation is only necessary if you want to make zero effort whatsoever to prevent it on the software side.
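As a rough sketch of what "monitoring for suspect CPU usage" could mean on Linux: walk /proc and flag any process that has quietly accumulated an outsized amount of CPU time. The threshold and the whitelist below are invented placeholders; a real deployment would feed this into whatever monitoring stack is already in place.

```python
import os

CLK_TCK = os.sysconf("SC_CLK_TCK")                 # kernel clock ticks per second
SUSPICIOUS_CPU_SECONDS = 3600.0                    # made-up threshold: 1 hour of CPU time
EXPECTED_HOGS = {"ffmpeg", "cc1plus", "blender"}   # placeholder whitelist

def cpu_seconds(pid):
    """Total user + system CPU time the process has consumed, in seconds."""
    with open(f"/proc/{pid}/stat") as f:
        # Split after the ")" that closes the command name, so spaces in the
        # name can't shift the field positions.
        fields = f.read().rsplit(")", 1)[1].split()
    utime, stime = int(fields[11]), int(fields[12])   # stat fields 14 and 15
    return (utime + stime) / CLK_TCK

def suspects():
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
            if name not in EXPECTED_HOGS and cpu_seconds(pid) > SUSPICIOUS_CPU_SECONDS:
                yield pid, name
        except OSError:
            continue   # the process exited while we were reading it

for pid, name in suspects():
    print(f"review pid {pid} ({name}): unusually high cumulative CPU time")
```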
 
AMD got hit with the Bulldozer misrepresentation lawsuit and the payout was what, $25? This is just lawyers going after a deep pocket and hoping for a settlement that'll be split 80% for them and 20% for customers, and given that it may involve "billions of processors", the payout is going to be quite small, if there's a payout at all. Personally, I think it will lead to laws that make CPU and GPU manufacturers immune to lawsuits over performance degradation caused by bugs they only became aware of after the product was put on shelves.
 
The flaw being of no consequence in a secure system isn't the issue; the issue is that Intel launched a mitigation (so Intel believes the problem needs to be addressed, irrespective of whether or not it has real-life applicability), and that mitigation subverts expected performance. That's a valid complaint.
The mitigation is only necessary if you cannot be bothered to secure your system against it any other way, assuming any mitigation is necessary in the first place. For heaps of online infrastructure, the most security-sensitive thing a server will ever do is session authentication, and you can delegate that task to a more secure server instead of doing it locally.

As I have written many times in similar stories: the attacker needs local code-execution capability of some sort before a side-channel attack can even begin. An attacker managing to get arbitrary code onto your "secure" server (through software, human or administrative failure) is a far greater threat than any hardware security flaw that requires local execution. You can't exploit a side-channel flaw if you cannot get exploit code onto the target.
 
Most of these side-channel attacks are largely theoretical and practically impossible to pull off under real-world conditions, where systems are juggling tens of thousands of threads spanning dozens of unrelated background processes, applications, system services, etc., and there is no guarantee that the desired data is in flight through the victim algorithm at any particular time for the attack code to get anything back.

Since all of these side-channel attacks require extensive abuse of CPU time to have even a chance of succeeding, a successful exploit also requires that nobody catches the rogue process consuming a disproportionate amount of CPU time - something that should prompt performance complaints along with a substantial increase in system power draw.

Most systems don't handle data anywhere near sensitive enough to worry about these. For environments where hypothetical side-channel attacks are unacceptable, we'll need different CPUs designed specifically to eliminate all potential crosstalk between unrelated threads.

For most other uses, all that is really necessary is to give software developers the ability to mark security-critical sections of their software so that unrelated code cannot run on the same CPU/L2 until the protected section exits - you can't side-channel-attack code that you aren't allowed to run concurrently with.
I think the more pressing issue - aside from the fact that most of these exploits need physical access and specialized software/hardware to execute - is that the modern 'gamer', or anyone running normal (non-mission-critical) workloads, has to update to the latest firmware to get new bug and feature fixes. They end up having to either roll back and stop updating their firmware, or take the latest fixes and have their CPU run at (in extreme cases) roughly half the rated throughput they were expecting. Considering that these exploits are mostly theoretical, I wish Intel/AMD offered different firmware 'branches', much like Nvidia does with its Game Ready vs. Studio drivers: one branch for bug fixes, timings and maximum performance, and another with all the 'obscure' security vulnerabilities fixed (along with the performance degradation). I'm not completely insensitive to patching bugs - if, say, a remote root-access exploit existed, that would need to be patched for all CPUs.
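In the meantime, at least on Linux you can see exactly which mitigations your running kernel has applied before deciding whether the trade-off is worth it to you - kernels that know about Downfall expose it as gather_data_sampling, and boot parameters such as gather_data_sampling=off or the blunter mitigations=off disable things at your own risk (check your kernel's documentation). A read-only sketch:

```python
from pathlib import Path

# Each file in this directory describes one vulnerability class and what the
# running kernel did about it (e.g. "Mitigation: Microcode", "Vulnerable",
# "Not affected").
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_report():
    if not VULN_DIR.is_dir():
        return {}   # not Linux, or a kernel too old to report this
    return {f.name: f.read_text().strip() for f in sorted(VULN_DIR.iterdir())}

for name, status in mitigation_report().items():
    # On kernels aware of Downfall, look for the "gather_data_sampling" entry.
    print(f"{name:<28} {status}")
```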
 
It seems to me (and please correct me if I'm wrong) that you are conflating "speculative execution" with "side-channel attacks".
You're right - I never read the original paper until just now, and the news articles didn't go into details.

But it's not that different, because unlike the FDIV bug, the chip does not perform any of the operations Intel describes incorrectly: the CPU computes and the results are correct. Data being accessible is not, per se, a functional bug, unless there was an express assurance that it wouldn't be.

SGX might be the exception, because that is designed as a security enclave, and if it's not in fact secure, that would be an issue. But to my knowledge SGX was declared a complete failure years ago and is no longer sold as a feature. Anyone who bought Intel for SGX before it was deprecated should be reimbursed for damages they incurred.

The problem is that these leaks hit non-functional requirements, and in any chip complex enough to be interesting, it's quite impossible to verify every imaginable non-functional requirement within economical bounds.

It's not quite the same as Intel being sued because a bridge built from CPU chips broke apart and people died, but it's a step in that direction, and once you open that floodgate, it only stops when there is no money left.

Yes, I don't like finding myself defending Intel here, because I believe discrete HSMs exist for good reasons and Intel shouldn't have gone in that direction without further safeguards. But in the end I think they mostly went there because customers wanted to inline the crypto for economic reasons, and now they'll just have to do what they should have done in the first place.
 
If you need the highest security possible, don't allow your security-critical code and services to run on a machine that also runs arbitrary user code such as a virtualized server instance. If you don't allow any unknown code on your security-critical servers, side-channel exploits are irrelevant.
To clarify the part you quoted from my earlier comment:

"So, yeah, it sucks that you have to choose between maximum security vs the original performance but what would be the reasonable alternative [for Intel to offer]?"

Aka, Intel already did what they reasonably could, so now it is in the hands of the end user/customer to choose what is best for their situation.
 
1. I was a systems programmer, mostly writing interfaces for my company, so my perspective is that of the people who write the code/microcode and build the internal hardwired programs on the chips. I worked on TS mainframe systems while in the military, and on the mainframes of a major securities brokerage house afterwards. While in the military, at Headquarters European Command, I was also an assistant security officer who certified sites, as well as writing the programs for analyzing computer usage and detecting/analyzing security violations.

2. As "atomicWAR and ex_bubblehead" said, if you are running a system where data security is important, you do not let unknown programs run on the system. ALL programs should be vetted as safe. There is of course development performed by "trusted" (fully vetted and many ex-military with a TS security clearance already performed to save the company the potential cost of several hundred thousand dollars for an investigation that may result in the employee found unsuitable for whatever reason) employees. They usually but not always are developing on systems purely used for development with simulated data.

3. Only programs created internally or purchased as vetted/secure software should be running on the systems.

4. The only path for external contact with the system should be VERY SECURE, with at the very least long random passwords drawn from the entire keyboard character set.

5. For example, I have passwords of up to, I think, 60 characters. They include upper case, lower case, numbers, pound signs, equal signs, commas, braces, ..., and even spaces and nulls for those sites that allow them. These passwords will not be broken or guessed easily. Do I have them written down? Yes. So if someone breaks into my house, finds the list, runs it through my password translation program, and enters the output, they can get in. (I am that paranoid about all banking and shopping information.)

6. I hate to put the onus on the user, but in this case I have to side with Intel. There is absolutely no way to protect against all future threats - not when the hard-wired logic is as complicated as what is currently being built. The Intel i7-10700K processor has 10.3 billion transistors; the i9-11900 has over 17 billion. Each transistor is a gate/switch for logic, and every path CANNOT be tested. Using the i7-10700K, there are up to 10,300,000,000 factorial possible paths. It is true that most operations will only use a few thousand transistors, while things like vector computations may use millions, and that assumes a function only uses each part of the path once; with vector computations, I believe execution loops through parts of the hardware, so the path is much longer. Of course, the i9-11900 has exponentially more paths.

7. Central Processing Units (CPUs) are now designed by AI, not humans. They are tested by AI, not humans, and the results are sent to humans to certify or not (at the recommendation of the AI). Heck, speculative execution is based on AI.

8. Now, Intel provided stats and information on the hardware and compared its functionality against prior generations and AMD systems. The "bug" goes back to Skylake - 6th-generation Intel Core processors from around 2015 - and has existed in every chip since. Intel will mitigate it in future CPUs.

9. Finally, the Downfall vulnerability is not a hardware flaw in the chip; the chip executes correctly. Downfall is, in my opinion, a software flaw. If a program loads data from disk, modifies it, and writes it back out (almost always to a new physical location, especially with SSDs) without clearing the original location, that data is still there for anyone to find. This is the same situation: the data being accessed by Downfall is residual information not cleared by the program/operating system. I personally blame the programmers; it is sloppy code.
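To illustrate the "clear it when you're done" discipline point 9 argues for, here is a minimal Python sketch. It is only illustrative - a garbage-collected language can copy the buffer behind your back, which is why C offers explicit_bzero() and similar primitives for this job - but the habit it shows is the point: don't leave secrets lying around after use.

```python
def with_secret(load_secret, use_secret):
    """Load a secret into a mutable buffer, use it, then overwrite it in place
    so the plaintext doesn't linger in that allocation afterwards."""
    buf = bytearray(load_secret())     # bytearray so it can be overwritten
    try:
        return use_secret(buf)
    finally:
        buf[:] = bytes(len(buf))       # zero the buffer before releasing it

# Usage sketch (both callables are placeholders):
# signature = with_secret(lambda: open("key.bin", "rb").read(),
#                         lambda key: sign_request(key, payload))
```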
 
Anyway, they are facing a lawsuit, so they either have to pay compensation or come up with a workaround for the performance loss - for example, unlocking BCLK overclocking on all boards and CPUs, releasing a BIOS that raises clock speeds, unlocking the CPU multiplier, or recommending an overclock to regain the lost performance. Otherwise they will have to compensate billions of users with money 🙃. Sorry, but I'm not against Intel for hate reasons; they just have to guarantee consumers' right to do anything with the product they purchased, or else be sued or pay compensation. I mostly buy Intel CPUs, and at the same time I want to claim my proper rights as a user (free overclocking).
 
Anyway, they are facing a lawsuit, so they either have to pay compensation or come up with a workaround for the performance loss
Intel doesn't owe anyone anything until the plaintiffs manage to convince a jury that it should have been able to foresee the exploit or was demonstrably negligent in its design.

Unless your application is security-critical and has to run on shared hardware alongside foreign code, a perfectly acceptable workaround is to not patch the system in the first place and to run your security-critical stuff on a dedicated system that only runs approved code.
 
Car makers have been sued for defects and car makers are still making cars.
Yes, it is true that car makers have been sued for defects and are still making cars; I'm not denying that. I'd just add that car defects come with a different severity level and different remediation options than a CPU whose story is "unforeseen security issue, fix slows it down, no one dies. Probably." Suing over something like that is up there with getting sued by a patent troll.

Automakers get to remediate this kind of thing by issuing a recall. Really bad problems get a "mandatory" recall. Turns out they aren't all that mandatory; it just means the car maker has to make a reasonable effort to contact owners and fix the problem (possibly at the owner's expense, depending on warranty status). This happens in cases like "oops, the engine shuts off when you hit the gas, so you get t-boned in an intersection, and then the car locks the doors to make sure the EMTs can't get to you before you bleed out." This can lead to a lawsuit, but it's not so bad for the automaker (unless it was an intentional design choice or legal negligence, in which case it's super bad).

Then there's the "silent recall." This is for things like "sometimes the HVAC works kind of badly and might make you slightly uncomfortable for a couple of seconds." They don't really announce these, but when you next take your car to the dealership for service they quietly sneak in a fix (and an extra fee) and hope no one notices.

Automaker lawsuits tend to be more like "our cars literally explode on the regular in rear-end collisions because we omitted a $0.05 piece of plastic to pad our bottom line, since the average settlement paid to victims multiplied by the expected number of settlements is cheaper than adding $0.05 per car to production costs." That kind of thing isn't even in an ethically gray area; it's straight up in the ethically black "going to hell" zone. It is also a lot easier for a jury to reason about than attempting to explain speculative execution, side channels, and a bunch of far more fundamental concepts.

CPU makers can't really keep security issues behind the scenes, since researchers (or whoever finds the flaws) disclose that kind of thing.

Basically, lawsuits all around w00t. But the underlying issues are apples and eggplants.

Anyway, they are facing a lawsuit, so they either have to pay compensation or come up with a workaround for the performance loss - for example, unlocking BCLK overclocking on all boards and CPUs, releasing a BIOS that raises clock speeds, unlocking the CPU multiplier, or recommending an overclock to regain the lost performance. Otherwise they will have to compensate billions of users with money 🙃. Sorry, but I'm not against Intel for hate reasons; they just have to guarantee consumers' right to do anything with the product they purchased, or else be sued or pay compensation. I mostly buy Intel CPUs, and at the same time I want to claim my proper rights as a user (free overclocking).
Try again. Your sig tells me you're just after a "free" OC unlock. Nope.
Enabling people to run CPUs out of spec (and into a voided warranty) to "fix" the vulnerability fix is a <Mod Edit> horrible idea. Intel has to assume this could cause all kinds of new hardware issues and vulnerabilities from pushing chip timings into completely untested territory. I guess it might help if they went with "congrats, you just voided your warranty (or it has otherwise expired), thanks for opting out."

Out-of-warranty chips can straight up suffer 100% performance degradation (aka fail completely) and Intel doesn't owe anyone anything. If you think about it, it's illogical for someone to think that, just because this starts with the Skylake architecture, they can take their Kaby Lake or whatnot and say "Intle owez me cuz Downfal yo!!!!!111".
 
I agree that the comparison between car defects and CPU security issues is flawed. On the other hand, CPU security issues can lead to data breaches, financial losses, and other problems.
Don't run foreign code on security-critical systems and CPU security flaws become irrelevant - you can't exploit a flaw if you cannot get code onto the machine to do so.
 