> x86 was created in 1978 for a processor that had 29,000 transistors total. You think modern code looks anything like how it did then?

Not a friendly reply to an honest question.
I'd expect there's more JIT code, for one thing. Perhaps APIs and SDKs are doing more parameter-checking and input-sanitizing, as cybersecurity has continued to grow in importance and fuzz testing has become more prevalent. Perhaps more code is being compiled with settings that tune for newer CPUs, which should affect the instruction mix and register usage.
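To make that last point concrete, here's a minimal sketch, assuming gcc (or clang) and a hypothetical sum.c, of how retargeting the same source at a newer x86-64 microarchitecture level changes the emitted instruction mix:

    /* sum.c -- hypothetical example: the same loop, compiled for different
     * x86-64 feature levels, typically produces a different instruction mix.
     *
     *   gcc -O3 -march=x86-64    -S sum.c   # baseline level: SSE2 paddd on xmm registers
     *   gcc -O3 -march=x86-64-v3 -S sum.c   # Haswell-era level: AVX2 vpaddd on ymm registers
     */
    #include <stddef.h>

    int sum(const int *a, size_t n) {
        int total = 0;
        for (size_t i = 0; i < n; i++) {
            total += a[i];   /* simple reduction the compiler can auto-vectorize */
        }
        return total;
    }

Whether the wider vector forms actually show up depends on the compiler version and its cost model, but the general point holds: the instructions that actually execute today track what compilers are told to target, not what the 8086 could do.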
> Not a friendly reply to an honest question.

I'll admit to that; I apologize.
> I'm not even sure what you are arguing here. You already agreed with my original point, so I'm going to move on.

If you want to drop the debate, that's fair. However, it's not fair to say that I agree with you after I conclusively refuted your claims with data.
If you want to quote specifically what I said that you agree with, then that would be a fair way to cite a point of agreement. However, please don't put words in my mouth.
> I'm not arguing that the A17 isn't going to be more efficient; I'm saying the comparison isn't remotely 8.5W vs over 100W. The A17 is on a significantly better node, and the 13900K isn't remotely tuned for efficiency, because Intel is trying to win benchmark wars with it, not maximize battery life.

I agree on this point. Intel is basically trying to balance peak single-threaded performance against area. In contrast, Apple's cores are designed to prioritize power efficiency at the cost of greater area and lower clocks, even to the point of hurting absolute single-thread performance.