Mobile: Intel Will Overtake Qualcomm In Three Years


mkrijt

Distinguished
Oct 28, 2009
79
0
18,630
I wonder how Windows (Phone?) 8 will perform on this platform... I think that could be a better match than Android...
 

silverblue

Distinguished
Jul 22, 2009
1,199
4
19,285
I thought the reason the Neon 250 was a failure was that every available GPU was being put into Dreamcasts, so the inventory wasn't there for the PC market. So, when the Neon 250 finally became available after whatever delay, it was mid-range at the very best. Still, Series 2 did rather well as far as Sega was concerned - 10.6 million Dreamcasts plus untold numbers of Naomi 1 and 2 machines, the latter with dual CLX2s plus Elan T&L units.

The need for a faster CPU to shift the bottleneck back to the GPU (with the Kyro 2 you weren't necessarily hitting the fillrate limit with relatively complex graphics) plus the lack of T&L is what, in my mind, finished them off, not the success or otherwise of the PowerVR 2. A T&L card with that sort of technology, at the time, would've been a monster.
 
G

Guest

Guest
If you start FROM a crappy architecture (a processor assembly language still rooted in the 8080 CPU of 1974), you will still end up with crap!
 
Agreed, it won't just be Intel; everyone else will be improving and gnawing away at market share too.

If Windows 8 succeeds and people want full support for it (which ARM can't offer), I can see Intel gaining traction faster than the other competitors.
 

NicoloB

Distinguished
Jan 6, 2012
3
0
18,510
"the future of MSoCs will depend on, first, SoC architecture, second, fabrication skill, and third, graphics technology."

So price is not a factor? REALLY?
 
Since price is usually derived from those three things, it's more or less already included in the equation.
 
G

Guest

Guest
I have a few issues with some of the claims made in this article.

"and the biggest reason why Apple’s iPad 2 does so much better than its competition in terms of responsiveness and performance. It’s not simply an off-the-shelf design."

Apple is using off-the-shelf IP in their SoCs about the same as everyone else. You are of course comparing platforms with very different software, and I would be much, much quicker to attribute subjective responsiveness to that than to any possible variation in memory bandwidth and latency vs. its competitors. It's not as if the iPad 2 greatly improved responsiveness over the much weaker iPad, for instance. As far as performance goes, that is, something measured by benchmarks, graphics notwithstanding, Apple is a bit behind.

"While Intel and AMD used dedicated reservation stations in the past, both now employ unified reservation stations to improve performance and utilization."

But as you go on to say, P6 used a unified design, yet that doesn't mean K7, for instance, wasn't competitive with it (far from it). It isn't really cut and dried that one approach is always superior to the other.

"The Atom architecture doesn’t incorporate any of Intel’s advanced technology. It’s a single-core, in-order design that is more reminiscent of the Pentium CPU than anything modern. But here’s the thing: it’s already faster than the ARM-based competition."

I assume you're talking about Medfield here, which would be a conclusion based on nothing more than Intel-supplied benchmarks of a highly software-dependent nature (JavaScript micro-benchmarks in V8, which has only semi-recently been getting major design attention for ARM targets). Worse than that, it's 100% single-threaded and compares 1 core + SMT against 2-core Cortex-A9 phones. And by the time Medfield is out in a phone, those A9s will at the very least be higher clocked. Of course, there's a good chance that Krait phones will already be out as well by then.

Right now the correct comparison would be either "Atom is already faster in completely different markets with completely different power budgets" (i.e., netbooks) or "Atom is not actually in an available phone yet and therefore can't really be given a fair comparison."
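As a toy illustration of that single-threaded point (just a sketch, nothing to do with the actual V8 benchmarks): a workload that only ever runs one thread scores the same whether the chip has one core or two, so it tells you nothing about what the second A9 core is worth. The same work split across two processes shows the difference immediately:

[code]
# Toy sketch (not a real phone benchmark): compare one thread doing all the
# work against the same work split across two worker processes (~ two cores).
import time
from multiprocessing import Pool

def busy_work(n):
    """CPU-bound busywork standing in for a benchmark kernel."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    n = 2_000_000

    start = time.perf_counter()
    busy_work(n)
    busy_work(n)
    single = time.perf_counter() - start

    start = time.perf_counter()
    with Pool(2) as pool:          # two workers, roughly "two cores"
        pool.map(busy_work, [n, n])
    dual = time.perf_counter() - start

    print(f"single-threaded: {single:.2f}s, two processes: {dual:.2f}s")
[/code]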

"We have yet to see a demonstration of a high-performance bus from ARM, TI, or Qualcomm. The Tegra lineup from Nvidia doesn't have a memory bus that is significantly different from the competition (its shipping Tegra 3 still has less memory bandwidth than last year’s Apple A5)."

In reality, pretty much every SoC vendor BUT nVidia (that is, TI, Samsung, Qualcomm, Freescale...) is shipping a dual-channel memory controller in their ARM SoCs, with bandwidth comparable to the A5's (some with good latency, others not as good). So Tegra is hardly "not significantly different from the competition."
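For rough context (the configurations below are back-of-the-envelope assumptions for illustration, not vendor specs): peak theoretical bandwidth is just channels x bus width x transfer rate, so a dual-channel 32-bit LPDDR2-800 setup like the A5 is reported to use works out to roughly 6.4 GB/s, versus about 3.2 GB/s for a single 32-bit channel at the same data rate.

[code]
# Back-of-the-envelope peak bandwidth: channels * bus width (bytes) * transfers/s.
# Memory configurations here are assumptions for illustration, not vendor specs.
def peak_bandwidth_gbs(channels, bus_width_bits, transfers_per_sec):
    return channels * (bus_width_bits / 8) * transfers_per_sec / 1e9

# Dual-channel 32-bit LPDDR2-800 (A5-style, assumed):
print(peak_bandwidth_gbs(2, 32, 800e6))   # ~6.4 GB/s
# Single-channel 32-bit LPDDR2-800 (assumed single-channel competitor):
print(peak_bandwidth_gbs(1, 32, 800e6))   # ~3.2 GB/s
[/code]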

"In the next three years, ARM and Qualcomm need to invest significant resources to advancing CPU performance. Their engineers are navigating uncharted territory, attempting to push technology in the same way that the x86 segment had to struggle through several decades ago."

Obviously this isn't about CPU uarch design in isolation. Power consumption goals change the rules from top to bottom, so you can throw out a lot of what Intel did on desktops. This isn't really a matter of taking solved problems and tweaking them to be low power. Otherwise we wouldn't have seen Atom take the form it did, designed more or less from scratch, and technologically behind Cortex-A9 and way behind Krait and A15, both of which will probably ship before Silvermont. It doesn't seem to me like they're the ones that need to play catch up.

Even with Atom, we don't know if the memory controller advantages (which the article seems highly focused on) seen in the netbook/nettop derivatives will really carry over to Medfield with its much, much lower TDP. Even targeting LPDDR2 over DDR3 means higher latency. As for bandwidth, we already know that they don't have an advantage vs. the non-nVidia SoCs that have been out forever, and OMAP5 and Exynos 5xxx have announced much higher bandwidth support than Medfield.

"Do you think it’s still unusual that the Atom and Near Threshold Voltage Concept are built around the Pentium?"

Except Atom is not built on the Pentium and actually has very little in common with it beyond the high-level details of being in-order and two-issue. While that's fairly distinctive in the x86 world, it's not that unusual compared to several other processor families. The pipelines are as different as night and day.

And saying that some old demo coder's proficiency in the original Pentium has anything to do with hardware designs today is just one hell of a stretch.

I think the article provides a lot of good speculation about Qualcomm's GPU prospects. Historically they haven't done anything to differentiate themselves positively in this area. But if having a top-notch GPU is really that central to SoC success, then I don't feel good about Medfield, with a GPU that's significantly behind even Adreno 225. Naturally Intel can move up here, but they've already had a history of stagnating with Atom's GPU choices, and they've also had a history of driver problems. If they do use high-end PowerVR GPUs, it'll just put them on par with other vendors doing the same thing.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810
Good article. I had long forgotten about BitBoys.

With PC/Server growth tapering off I don't see any choice but for Intel to drive hard into the MSoC space.

Right now it isn't their main focus, as the leading technology node is reserved for Ivy Bridge. They're investing heavily in new fabs on US soil, so this is changing.
 

makdaddy06

Distinguished
Jan 20, 2012
1
0
18,510
Great read. I thoroughly enjoyed it, and also enjoyed the comments. But alas, I can't sit idly by and not add my own thinking. To provide some context, I've seen the inside of BOTH companies in multiple roles, so I believe I have some insight.

What does the article have essentially right? Intel's mfg prowess, FinFET design, and aggressive nature in moving mfg down Moore's law. Very real, very powerful components contributing to their success. Having seen the 300mm conversion from the inside, having seen multiple node transitions, they do it well, and they do it often. This is arguably what kept AMD at bay when AMD had a superior design for several years.

The FinFET design is novel and, while not yet proven in HVM, will provide some real shrink in die size and power. Bravo INTC! But it mostly stops there.

What it has wrong, or simply ignores, changes everything.

Future node challenges to INTC's mfg prowess: NGL (next-gen litho, the technology behind those tiny lines and spaces) is exponentially expensive. Current immersion litho, without continued design tricks, is hitting a wall. That leaves EUVL, which still has multiple mfg issues, not to mention herculean cost issues (about $60-80M/tool). Intel will likely need to shoulder this cost ALONE, which will render most of their design advantage moot. A new fab used to be about $1B; they are now approaching $3B. Intel's traditional model was to recoup these costs in the first 2 years of a new tech ramp, on the latest and greatest parts (server parts mostly). This obviously depends on their margins, but it doesn't allow for dilution of those margins on much thinner-margin MSoC die. They then waterfall lower-margin processes/parts onto older-node fabs. In short, if MSoC needs cutting-edge mfg, it challenges INTC's traditional model.

450mm savior? The equipment industry still stings from the 300mm conversion, and they loathe converting to 450mm. Yeah, INTC will eventually brute-force it, but at what cost? The tool makers will still need to recoup their R&D, so each tool simply goes up in cost, and they sell fewer and fewer. It's a brutal treadmill they're on, and it's running out of steam.

The QCOM GPU team has left town, so they have nothing? For one, I've verified personally that at least one of the key individuals stated to have left is indeed still employed at QCOM. But yes, key talent leaving hurts. However, unlike sports (the comparative analogy), key talent in tech leaves behind IP, previous works of art, even "know-how," which is legally protected and can be FULLY utilized by its owner. QCOM not only maintains the IP, but they are extremely aggressive in protecting it. SIRU will need to create completely NEW and novel techniques in order to license new GPU technology. But maybe most important, any future licensee that even smells the threat of a lawsuit from QCOM will avoid SIRU like the plague. QCOM has deep pockets to defend their IP, the will, and past experience (look at the Nokia settlement etc.). GPU capability loss is an extremely weak argument in my opinion.

Missed elements in the INTC domination proposal: the ARM ecosystem and massive install base. As mentioned, even MS with Windows 8 is now ARM-compatible. The new game banks on licensing and cross-licensing, none of which INTC readily does; they try to hold onto their IP. QCOM's model is to license wide and far. It simply proliferates the technology, broadens the ecosystem, and generates cash flow. Further, having spent some time in INTC's business development area (after their mfg area), they use the same mentality with customers as with their mfg: brute force. I met so many customers that were loath to use INTC but had no choice. The computing industry was sucked dry of value (look at Dell and HP) by INTC. In other words, folks don't want another computing-industry scenario to take over MSoC and cell phones. Most underestimate that visceral response.

Finally, the claim that high-level computing will be required for MSoC. I agree with other comments and say no. GPU power for handheld gaming etc., yes. But raw computing power? In niches, yes; broadly, no. Look at most usage models. Do you work spreadsheets, or edit pictures, etc., on your phone, or even your iPad? Not really. Most of us still move to a laptop. It just won't be a driver.

Lastly, I would argue a completely different component will arise as a critical enabler for mobile platforms, and it's NOT silicon. It's the display. The display is the key interface for all mobile platforms of the future. So touch interface and SW, and power consumption, will be key enablers of future mobile devices. Does that matter? I believe yes. And which of the two is playing in that space? QCOM.

INTC will be competitive in QCOM's ARM markets, a la AMD in x86. It will be good for the industry and customers, but they will not dominate. Take that to the bank.
 
G

Guest

Guest
I think this article has several huge holes in its reasoning:

1. That CPU performance is all that matters. If it did, why would Apple devote so much silicon real estate to the GPU on the A5?

2. That the rest of the system doesn't matter much. On most smartphones, the cellular communications-related functions use more power than the CPU core(s) do. Can Intel make good cellular basebands? Just because they can successfully make Wi-Fi chipsets doesn't mean they have the RF design chops to support a myriad of complicated cellular standards/frequencies.

3. That Intel is willing/able to target the really low end. Their business model is based on fairly high ASPs. Medfield's a nice high-end smartphone SoC, but companies like TI/Qualcomm/Samsung also make low-end SoCs for low-cost smartphones.

4. Business politics. Many of the major SoC makers have semi-"captive" phone manufacturers (Samsung, Qualcomm->HTC, Apple) to put their SoCs into. These companies may be hesitant to let Intel in at the expense of their SoC divisions.
 

pucidalucida

Distinguished
Jan 3, 2012
4
0
18,510
Qualcomm?? Why not Samsung? I think Samsung is the real competitor for now; they have been the second-biggest chip manufacturer for several years and are ready to take the crown from Intel XP
 

AlanDang

Distinguished
Nov 25, 2008
37
0
18,530
[citation][nom]cwolf78[/nom]Hmm... where have I heard these predictions of Intel succeeding in a new market before?*cough*Larrabee*cough*[/citation]

Not at Tom's Hardware TWO years ago:
"Nvidia’s early successes were due in no small part to its excellent software team, led by Dwight Diercks. Under his watch, the company developed its Unified Driver Architecture and established design and testing methodologies that form the foundation of CUDA. While far from perfect, there is no question that taken as a whole, Nvidia’s drivers have historically been very stable. AMD similarly has Ben Bar-Haim. After being recruited to ATI in 2001, he launched the Catalyst program in 2002, which is credited as bringing ATI’s Radeon drivers up to a level competitive with Nvidia. Both of these teams have been able to evolve their practices and know-how over successive generations of hardware. Intel has no such experience. It does not have the same experience in developing 3D graphics drivers, and it doesn't have a software team that has been able to evolve in tandem with the hardware development. The technical expertise to develop high-performance graphics drivers on Larrabee-type architectures is something that remains to be seen from Intel. As with Itanium, there is a good chance that the hardware is ready too far in advance of the supporting software ecosystem."
 
G

Guest

Guest
It is not so much about gate-first or gate-last, but it is true that Intel is leading in manufacturing and probably in design. It is easier to make simple chips with low performance and low power consumption than high-performance chips. Thus Intel is better prepared to enter the low-power market than QC is to enter the high-performance sector. VIA, Transmeta, UMC and Rise all tried the path QC would have to take, and with x86 compatibility, but they failed. QC has no x86 compatibility - how can they win?
 
G

Guest

Guest
Mobile CPU and GPU design is going to be irrelevant in 5-10 years as devices transition to a terminal-based platform hosted by the carrier. Move the CPU to the cloud, and the mobile device is left to support efficient video and audio streaming.
 
G

Guest

Guest
"If Intel wants into this market, nothing can stop them" Why didn't they do a decent graphics core in the past 10 or 12 years? A gpu is far less complex than a cpu and still intel failed. Maybe the things are not as simple. They may have the process edge for now but it is not enaugh. I see it as an open battle, but for now intel is nowhee near the competition. In the medfield they have sacrificed the gpu performance (putting and old 540 in an yet to be released product) in order to meet the thermal requirements and until we see a real phone we wont know if they succeded.
 

AlanDang

Distinguished
Nov 25, 2008
37
0
18,530
AMD, NVIDIA, Intel, Qualcomm, and PowerVR wouldn't say that designing a GPU is "far less complex" than a CPU. As I wrote 2 years ago, Intel did not have a person like Dwight Diercks (NVIDIA) or Ben Bar-Haim (AMD/ATI). Until they get someone on the driver side, they won't have a competitive GPU. Intel has the process edge, which lets them do some very interesting stuff at 22nm. Intel did NOT have the SoC experience and had to build it over 4 years. Intel has the LTE/wireless experience through Infineon Wireless. Intel has the CPU expertise. So it's definitely going to be interesting. It's not that Medfield is the solution; it's that they are moving Atom from the standard 5-year development cycle to their "tick-tock" program now... NVIDIA is also a player. They wanted to try their 6-month product cycle from the GPU era (but couldn't), but they're still on a 12-month product cycle...
 

sbuckler

Distinguished
Apr 15, 2004
17
0
18,510
Not going to happen (in reverse order of importance):
1) x86 has a proven record of not working well outside its area of strength. Look at Larrabee. Trying to ram x86 into endless holes it doesn't fit erodes Intel's manufacturing advantages.
2) Intel is too closed. You can't buy anything but completed solutions from them. Everything requires very restrictive licensing, and Intel will go from friend to enemy fast if you break that. ARM and Qualcomm are much more open; you don't *have* to buy completed solutions off them. That is much more appealing to a third party making its own hardware platform.
3) Intel are required by their shareholders to have huge profit margins and make silly amounts of money. That money simply isn't there in mobile platforms; the margins are much lower for mobile chip makers. Intel simply cannot allow margins that low or there would be a shareholder revolt.
 

impreza

Distinguished
Aug 1, 2006
250
0
18,780
Intel has one big problem: their chip designs just use far too much power to run a phone for at least a day, preferably 2-3. While they may have the most advanced manufacturing facilities, that doesn't help when your chips have to be twice the size. Then there is the other problem of Android apps being written for ARM, so they won't work with Intel x86 CPUs.

Intel may be able to get somewhere with Windows phones, but on Apple and Android they are a little late and all the code is wrong for their CPU designs. It doesn't matter how good your CPUs are; if nothing runs on them, they are useless. But with Windows Phone they won't have such a problem, since it has yet to take off and MS can easily put in all the Windows x86 code to make x86 CPUs work.
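For what it's worth, only an app's native libraries are tied to a specific architecture (apps that are pure Dalvik bytecode are architecture-neutral). An APK is just a zip file, so you can check which ABIs a given app actually ships native code for by listing its lib/ folders. A rough sketch (the APK path below is hypothetical):

[code]
import zipfile
from pathlib import PurePosixPath

def apk_native_abis(apk_path):
    """Return the set of ABI folders (e.g. 'armeabi-v7a', 'x86') that an APK
    ships native libraries for. APKs are plain zip archives; native code,
    if any, lives under lib/<abi>/*.so."""
    abis = set()
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            parts = PurePosixPath(name).parts
            if len(parts) >= 3 and parts[0] == "lib" and name.endswith(".so"):
                abis.add(parts[1])
    return abis

# Hypothetical path: an empty result means the app has no native code at all;
# {'armeabi-v7a'} alone means its native parts are ARM-only.
print(apk_native_abis("some_app.apk"))
[/code]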
 

eboyhan

Distinguished
May 15, 2008
3
0
18,510
My view from afar is that process technology ultimately trumps all. I also think that Intel is closer to 2 full generations ahead than to 1.5.

Also, as the sizes ramp downwards it's getting harder for the competitive foundries to keep up.

Ivy Bridge is meant to be the "tick" in Intel's tick/tock product progression, but this tick will offer a much improved on-chip graphics core. In fact with each turn of the process screw, Intel's on-chip graphics capabilities improve -- not enough for hard core gamers, but more than adequate for normal uses. At 22nm I wonder whether Intel will decrease the die size, or instead use the increased transistor count to put some of the Ivy Bridge graphics core on the Medfield follow-on.

FinFETs, in theory, have the potential to do a lot for power consumption as well.

OTOH no third parties have yet gotten their hands on Medfield or Ivy Bridge so that we can see exactly how Intel's claims pan out -- so some skepticism is still in order. I expect those concerns to be dealt with in the next 2 quarters as Medfield and Ivy Bridge silicon rolls out into the real world.
 
G

Guest

Guest
What idiot would WANT to run the same software on phone and desktop??
Actually, what's being ignored is that there are many countries that can see the market for MSoC will be even bigger than desktop CPUs, so they are willing to put much effort into being in at the endgame, e.g. Europe, Russia, the Mideast, India & China. Intel has fantastic process knowledge, but Samsung and Taiwan etc. are no duffers either. It's not always the best that wins, either.
 

Jeteroll

Distinguished
Sep 11, 2010
156
0
18,680
Ahh! C'mon! We can't let Intel take over every freaking market out there! We need companies like Qualcomm and AMD to balance out the prices and the market.
 