News Intel Announces Delay to 7nm Processors, Now One Year Behind Expectations


techconc

Honorable
Nov 3, 2017
24
9
10,515
For the lack of cameras with sufficient capabilities.

You're stretching to make your points instead of logically thinking about how much compute their SoCs have vs. a desktop PC. If you include their GPUs, even Apple's lowest-end iMac or Mac mini has more than enough compute for Face ID.

What's missing is the camera needed to cope with challenging lighting conditions and to defeat exploits involving static photos. You can't properly do all of that stuff with a generic webcam.


That's not even true. Intel and AMD both make custom models of their CPUs and GPUs specifically for Apple.

Rumor has it that AVX even had its roots in Steve Jobs' talks with Intel about transitioning the Mac to Intel processors. Apple reportedly wanted something more like AltiVec, and the result was AVX.
You need more than cameras for FaceID. Existing Intel chipsets cannot provide that capability as-is. Certainly not in an efficient way. That's the point. Do you just like to argue for the sake of it?

Also, no, there is a big difference between minor changes in chipsets and major new functionality in a SoC. These differences are miles apart.
 
I believe iOS and Mac OS now use the same kernel, FWIW. With mobile CPUs having multiple cores for more than a decade, I don't believe iOS is deficient at multitasking.
Reviews of the iPad Pro show how poor it is at multitasking, and that is iOS. macOS isn't the best at multitasking either. As of October 2019, macOS was still lacking in multitasking, and you needed to download third-party applications to make it useful for multitasking.
 

bit_user

Polypheme
Ambassador
What you agree to is irrelevant.
I was agreeing with a previous poster. It goes without saying that you don't have to accept any statement I make.

However, if you want to talk about relevance, then whatever I think and whatever you think is irrelevant. This whole discussion is irrelevant. It won't change anything, so why do it?

Well, I do it to share information and learn things, but there's only a certain amount of disagreeableness that a discussion can tolerate, and I feel like this is getting overheated and not very productive.

If you want to go down the path of explaining why these are not valid cross platform benchmarks, then you are welcome to attempt to make that case.
Well, I'm not intimately familiar with GeekBench. I did try to find more information about it, before posting that. What I found is a statement that:
Geekbench 5 measures your processor's single-core and multi-core power, for everything from checking your email to taking a picture to playing music, or all of it at once. Geekbench 5's CPU benchmark measures performance in new application areas including Augmented Reality and Machine Learning
So, I'm left to wonder how it does that, and how exposed it is to the underlying API layers and multitasking performance of the platform. So, I simply said "it's questionable", because it seems to me those are legitimate and unanswered questions.

Moreover, unless you can provide something which directly contradicts this comparison, you have no basis for your disagreement other than for the sake of being argumentative.
This gets to what I was saying about "disagreeableness". If we're on a journey together, towards enlightenment, then I'd expect you to say something like: "gee, why do you think it's questionable?". However, that statement comes across as very defensive, which tells me you're more invested in winning an argument than trying to learn and share knowledge. That might be fine for you, but not me.

You make blanket statements which you are apparently not able to support.
I think the quotes speak for themselves. They said the increase in power was greater than the increase in performance vs. the A12, which suggests the A13 probably can't scale much past 2.6 GHz or so.
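As a rough back-of-envelope illustration of that reasoning (my own sketch, not anything from the article or Anandtech's data): dynamic CPU power scales roughly as V²·f, and near the top of the frequency curve the voltage also has to rise with frequency, so power grows much faster than performance. The knee frequency, voltage, and V/f slope below are purely hypothetical numbers.

```python
def dynamic_power(freq_ghz, knee_ghz=2.6, knee_volt=0.9, volt_per_ghz=0.15):
    """Relative dynamic power for a hypothetical core: P ~ V^2 * f.
    Above the 'knee', voltage is assumed to rise linearly with frequency."""
    volts = knee_volt + volt_per_ghz * max(0.0, freq_ghz - knee_ghz)
    return volts ** 2 * freq_ghz

base = dynamic_power(2.6)
for f in (2.6, 3.0, 3.4):
    print(f"{f:.1f} GHz: ~{f / 2.6:.0%} of base performance, "
          f"~{dynamic_power(f) / base:.0%} of base power")
# With these made-up numbers, pushing 2.6 -> 3.4 GHz buys ~31% more
# performance for ~68% more power -- the kind of curve that makes a
# phone SoC stop scaling well before a desktop chip would.
```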

My original claim is made evident by examples such as 680x0 vs. 80x86 comparisons over multiple generations. The same goes for PowerPC, etc. Intel has never had the superior architecture or chip design; rather, Intel has always had the advantage in chip manufacturing process.
That's ancient history, and has no real bearing on the Intel of today.

Yes, kudos to Intel for dragging x86 so far forward. Yet, at the end of the day, it's not the most efficient design, by a considerable margin. Worse, it's losing its advantage in terms of performance as well.
I think we can agree that it would be nice to move beyond x86.
 

bit_user

Polypheme
Ambassador
You need more than cameras for FaceID. Existing Intel chipsets cannot provide that capability as-is. Certainly not in an efficient way. That's the point.
That's your claim, but just look at the numbers. Check the specs on the lowest-end iPhone that supports FaceID and compare them with the compute capabilities of an Intel iGPU. The problem just isn't one of compute, as you claim.

And certainly, Macs with dGPUs could support FaceID, if compute power were the limiting factor. dGPUs have far more raw compute and memory bandwidth than even their A13.

Do you just like to argue for the sake of it?
If you take issue with one of my points, state your case. Don't attack me, though. That won't get you very far.
 
Jul 31, 2020
9
2
15
I can't wait for AMD to have competitive gaming CPUs at the 10600K level, but when I had to decide between that and the 3700X, it was hard not to go with Intel. When do we expect AMD to take the lead, Zen 3 this winter?
 

bit_user

Polypheme
Ambassador
When do we expect AMD to take the lead, Zen 3 this winter?
It depends on when Rocket Lake launches. I expect Zen 3 will leapfrog Comet Lake (in time for the holiday season, if you're wondering), but then Rocket Lake will put Intel back in the lead. AMD will still rule multi-threaded performance, as it looks like Rocket Lake will top out at 8 cores.
 

techconc

Honorable
Nov 3, 2017
24
9
10,515
Reviews of the iPad Pro show how poor it is at multitasking, and that is iOS. macOS isn't the best at multitasking either. As of October 2019, macOS was still lacking in multitasking, and you needed to download third-party applications to make it useful for multitasking.
Lol... macOS is fully UNIX-compliant and is great at multitasking. You really have no idea what you are talking about.
 

techconc

Honorable
Nov 3, 2017
24
9
10,515
Well, I'm not intimately familiar with GeekBench. I did try to find more information about it, before posting that.

This is a widely used and understood benchmark. The details aren't a big mystery. The link below gives you details on each of the sub-tests.
https://www.geekbench.com/doc/geekbench5-cpu-workloads.pdf

I also mentioned that AnandTech ran SPEC benchmarks as well and came to the same conclusion. Let me guess, you've never heard of SPEC, right?

I think the quotes speak for themselves. They said the increase in power was greater than the increase in performance vs. the A12, which suggests the A13 probably can't scale much past 2.6 GHz or so.
That's likely the optimum frequency for efficiency, given that this is what they ship in a phone. Suggesting it "can't" scale higher is again a baseless claim, especially given a higher power/thermal envelope.
 

techconc

Honorable
Nov 3, 2017
24
9
10,515
That's your claim, but just look at the numbers. Check the specs on the lowest-end iPhone that supports FaceID and compare them with the compute capabilities of an Intel iGPU. The problem just isn't one of compute, as you claim.

And certainly, Macs with dGPUs could support FaceID, if compute power were the limiting factor. dGPUs have far more raw compute and memory bandwidth than even their A13.


If you take issue with one of my points, state your case. Don't attack me, though. That won't get you very far.
FaceID is specifically dependent on the Neural Engine built into the iPhone X and higher.
 
Examples???
How about having to download third-party applications to get to the same level of multitasking ability? https://www.makeuseof.com/tag/multitasking-macos-apps/
FYI, I am an IT professional. I work with VMware, Windows, multiple flavors of enterprise Linux, a very small amount of HP-UX, and some Mac OS support. When it comes to ease of multitasking, Windows is by far the best. When it comes to multitasking in general, Windows is still the best of them.
 

techconc

Honorable
Nov 3, 2017
24
9
10,515
How about having to download third-party applications to get to the same level of multitasking ability? https://www.makeuseof.com/tag/multitasking-macos-apps/
FYI, I am an IT professional. I work with VMware, Windows, multiple flavors of enterprise Linux, a very small amount of HP-UX, and some Mac OS support. When it comes to ease of multitasking, Windows is by far the best. When it comes to multitasking in general, Windows is still the best of them.
You've clearly never worked on a Mac. You are looking at tools that modify the existing behavior of built-in functionality. I use both Windows and Macs on a daily basis. There is absolutely no benefit to Windows multitasking over Mac OS. Instead of pointing to an article that only shows you have no experience with Mac OS, try providing examples of what you do on a daily basis that you think couldn't be done equivalently on a Mac.
 

Arbie

Distinguished
Oct 8, 2007
208
65
18,760
I don't understand how you can believe he deserves the power and the compensation, without taking on the responsibility. The buck stops at the top.

I didn't say that, did I?

I also didn't say that he's blameless. I was responding to the ridiculous claim that this is all Swan's fault, and it's because he's an MBA rather than an engineer. So - if we're going to ignore context and put words in people's mouths - I guess you agree with that nonsense.
 

msroadkill612

Distinguished
Jan 31, 2009
202
29
18,710
Intel is doing absolutely fine on architecture; it just does not have the fab process to actually manufacture its more advanced designs without severely compromising on clock frequency and power, due to 3-4 years of unforeseen process delays and complications.
Golden Cove should be more than a match for Zen 3 and maybe even Zen 4 architecture-wise; the only problem is that Intel likely needs 7nm to work as intended for GC to deliver that performance without breaking the die area and power budgets... and based on Intel's last four years, along with this new 7nm slip, that is easier said than done.
You're saying Intel's fix is a faster monolithic CPU. That's evidently not their main problem. Without AMD's modular architecture, it's an expensive and inflexible approach.

I look forward to cost-competitive 64-core socket modules from Intel, then.
The trouble is the cost part.

AMD can take any 8-core "Lego block" chiplet from TSMC and, via its cache-coherent Infinity Fabric, get almost 100% effective yield by building anything from a 2-core laptop part up to a 64-core server socket module.

It is economics they are up against, not the technical excellence of their respective ~10/16-thread modules.

At 10 threads or fewer (arguably the effective limit of Intel's inter-core ring bus), Intel arguably has an edge now, even at 14nm. They certainly did while Zen 1 and Zen+ were making inroads.

AMD, however, can make a super-cheap 6C/12T part from lower-value, lesser-binned chiplets... leftovers from its broad range of premium workstation and server products.

Intel needs a premium-binned die to get decent performance at 10 threads. AMD can use scraps to provide similar performance using 12 threads, and folks love it.

Beyond 10 threads, Intel gets left in the dust.
24- and 32-thread desktop/workstation parts like the 3900X/3950X at AMD-like prices just aren't possible for Intel.
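To put some rough numbers on that chiplet-economics argument (my own illustration; the defect density and die areas below are assumptions, not real TSMC or AMD figures): under a simple Poisson yield model, the fraction of defect-free dies falls exponentially with die area, which is why small reusable chiplets, plus binning partly defective ones into cheaper SKUs, beat one big monolithic die on cost.

```python
import math

# Assumed, purely illustrative numbers.
DEFECT_DENSITY = 0.1   # defects per cm^2
CHIPLET_AREA   = 0.8   # cm^2, hypothetical 8-core chiplet
MONO_AREA      = 6.4   # cm^2, hypothetical 64-core monolithic die

def poisson_yield(area_cm2, d0=DEFECT_DENSITY):
    """Fraction of dies with zero defects under a simple Poisson model."""
    return math.exp(-area_cm2 * d0)

print(f"8-core chiplet yield:     {poisson_yield(CHIPLET_AREA):.1%}")  # ~92%
print(f"64-core monolithic yield: {poisson_yield(MONO_AREA):.1%}")     # ~53%
# A 64-core package built from eight small chiplets only needs each small die
# to be good, and chiplets with defective cores can still be binned and sold
# as cheaper cut-down parts instead of being scrapped.
```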
 

InvalidError

Titan
Moderator
You're saying Intel's fix is a faster monolithic CPU. That's evidently not their main problem. Without AMD's modular architecture, it's an expensive and inflexible approach.
Meanwhile, people who got their hands on Ryzen 4700G/4750G monolithic APUs are breaking records every few days. Monolithic is clearly the way to go performance-wise when yields are good enough for a given CPU/APU size to allow it.
 
Aug 22, 2020
8
2
15
Obviously, something is wrong in the management department. If someone doesn't fix it, Intel is going to explode from all that rot.
You'd think that now that they are so far behind in CPU technology, they would focus on their GPUs (it'd be nice to see how well an Intel-based 14nm++++++++ GPU would perform) or try fading into obscurity and leapfrogging AMD in a few years. Unfortunately, they have too much ego to do so.
 
Aug 22, 2020
8
2
15
Intel is doing perfectly fine CPU-wise; it is only the fab process issues holding it back. Leaked Rocket Lake engineering sample benchmarks hint that Rocket Lake will likely beat Zen 3 stock vs. stock.
Yeah, I probably went a bit overboard on the exaggeration there. Rocket Lake does look promising, and if Intel gets their head out of their rear and decides on what node they are manufacturing on, it could mean even better things. Just think about if Intel went to TSMC and used their 5nm.
Meanwhile, they could fix their 7nm fab issues right now while using external foundries, and maybe get back to being at the head of innovation.
 
Just think about if Intel went to TSMC and used their 5nm.
Then Intel would lose at least 10% in clocks without gaining any performance, since the architecture would stay the same. It wouldn't change the power draw, since AVX would still use up as much as the mobo would allow. They would decrease the volume of chips they can make by a huge amount, and they would very probably pay a lot more for the wafers.
All in all, 4 out of 4 reasons not to do that.