AMD Trolls Intel: Offers 16-Core Chip to Winners of Six-Core 8086K

Status
Not open for further replies.

none12345

Distinguished
Apr 27, 2013
431
2
18,785
AMD's response to Intel was pretty funny.

But, Intel's response to AMD's response was pretty good as well.

If I won an 8086K (I didn't bother entering the contest), I would trade it for a 1950X. (If I had entered the contest and won, I would have just sold the 8086K anyway.)

Yes, I know the 8086K is the faster chip in gaming, but the world doesn't run on gaming systems. I could make use of a 16-core chip over a 6-core chip.
 

mlee 2500

Honorable
Oct 20, 2014
298
6
10,785
I kind of get the feeling that this whole "Anniversary Edition" and "Sweepstakes Giveaway" business Intel is engaging in is just to distract from their troubles getting to 10nm and the fact that they haven't produced anything compelling or particularly exciting in years.

Core for core, an i7-8700K is only about 30% faster than an i7-3770K from 2012! And while the practical value of more than 6 cores for most folks is debatable, the marketing and enthusiasm generated by AMD's new lines are not.

Intel best come out huge next year or it's going to be 2006 all over again for them.
 

R0GG

Distinguished


'NuFF Said. Thank you.
 

InvalidError

Titan
Moderator

Intel doesn't have to come out any 'huge-er' than AMD does to maintain market share. Intel managed fine over the past year despite a 2-4 core deficit in the mid-range segment. Next year might be interesting if AMD goes 12-16 cores for its Ryzen 3700/3800.
 

kinggremlin

Distinguished
Jul 14, 2009
574
41
19,010


Isn't that the real world definition of advertising? Promote the positive to distract people from the negative? Or in the case of AMD's video cards, if there is nothing positive to promote, just make something up?

 

kinggremlin

Distinguished
Jul 14, 2009
574
41
19,010


And who would really benefit from that? Certainly not home users. There will likely never be a time when typical software will be able to make use of 12-16 cores, let alone in the next 3 or 4 years. So why is AMD going all out in this core race to nowhere? Because they know they can't beat Intel in making a faster core, so that's the only other way to say they are better. Intel hasn't been asleep at the wheel for the last 10 years waiting for AMD to catch up. They've pretty much reached the limits of what can be done with the current x86 architecture. The vast majority of users upgrading a quad core today would see a bigger performance boost from a quad core with a 50% higher clock rate than from a CPU with double or triple the number of cores, even at the same clock rate as their quad. I'm not interested in paying more for 4/6/8 more cores that are going to sit idle 99% of the time.
 

InvalidError

Titan
Moderator

It goes much deeper than the architecture: Intel is pretty much at the limit of how much instruction-level parallelism can be extracted from a single instruction stream for typical x86 software, and AMD has mostly caught up with Intel to the same brick wall. If you meant architecture in the sense of instruction set, most of the same brick walls would ultimately apply to other instruction sets too, so even switching to a different ISA wouldn't help much beyond ditching legacy overhead.

Who benefits from more cores? Everyone. More cores in the mainstream means cheaper lower-core-count CPUs at the lower end for people who don't care about the extra cores. More cores in the mainstream also means more reasons for developers to put in some extra work to use them better, and we're beginning to see games that show performance scaling to 6-12 hardware threads.

Thread count is a bit of a chicken-and-egg problem: software developers don't want to write code for CPUs their target audience doesn't have, while CPU designers don't want to design CPUs their end users have little to no use for. One side has to break that stalemate sooner or later. AMD cracked the ice last year with the 1700/1800 and might blow it to bits next year with Ryzen 3, if the range does indeed max out at 16 cores based on how EPYC2 maxes out at 64 cores per socket, which implies 16 cores per die.
 

bit_user

Polypheme
Ambassador

You're talking like the desktop market still matters. What AMD is ultimately after is the only PC market that's actually growing: cloud. Cloud cares about core count, and especially about core efficiency.


I'll tell you why Intel is scared - because they've seen what AMD managed to accomplish with an inferior process node and know what that means for "7 nm". If AMD can reach volume shipments at "7 nm" before Intel gets their 10 nm process ramped up, it really will be a flashback to 2005.
 

kinggremlin

Distinguished
Jul 14, 2009
574
41
19,010


The first desktop dual-core CPU was released 13 years ago in 2005; the first quad followed the next year, in late 2006. When was the last time AMD or Intel sold a desktop single-core CPU? The Steam hardware survey says about 1% of users have a single-core CPU, and those are probably netbooks or some other mobile device. Nobody uses single-core CPUs anymore, so the excuse that developers are waiting for market saturation to multi-thread their software died years ago. Over 60% of Steam users are on quad cores, and nearly 70% use a quad or more, yet extremely little mainstream software will saturate a quad core.

The reason most developers don't is that it is very difficult to get right, and most software doesn't mathematically lend itself to a parallelizable form. Look what happened to multi-GPU gaming. It's dead. Nvidia doesn't even support more than 2 GPUs anymore. Developers didn't want to support it because it was such a pain in the ass. "Just wait, though, DirectX 12 is going to make it super easy and it will make a comeback." Yeah, that's turned out great. There's no drop-down menu option to magically make software multi-threaded. Waiting for developers to code mainstream software to take advantage of 6+ cores is like waiting for Linux to take over the PC desktop 15 years ago. Every year this was the year, and it never ended up happening.

Quad-core CPUs have been mainstream affordable for years, as demonstrated by their market saturation. So I disagree that we are benefiting from price drops created by higher-core-count CPUs. I don't care that the price of a 10-core has dropped $600, or that there are "mainstream" 12-core CPUs, when the 6-core I have now is rarely utilized to its fullest.
 

kinggremlin

Distinguished
Jul 14, 2009
574
41
19,010


Increasing core count for the enterprise market obviously makes sense. Ryzen is not for the enterprise market.

Because there is no standardized way to measure process size, you can't blindly compare the numbers semiconductor companies quote to each other. By all reports, Intel's 10nm is equivalent to everyone else's 7nm process. I know you are knowledgeable enough to be aware of this, so I don't understand the intentional deception on your part here.
 

bit_user

Polypheme
Ambassador

Not even. The original Atom was hyperthreaded, and all Silvermonts were at least dual-core. So either these folks are using ancient Core 2 Solos (or, worse yet, P4s) or they're running VMs with only 1 core allocated.


Kinda apples and oranges, there.


It doesn't happen overnight, although (the Linux-based) ChromeOS could potentially snap up a significant market share, in a hurry.

But stuff has gradually gotten better at using more cores, and software tech is continually evolving. Check out these benchies:

https://www.anandtech.com/show/9518/the-mobile-cpu-corecount-debate/4

Kinda old, but shows pretty good multi-core scaling.

Anyway, I think you both have a point. Multi-core CPUs have dominated the mainstream since about 2006, which has given developers plenty of incentive to thread their code. But there's a qualitative difference between the mainstream having 2, 8, or 16 threads.

Anywhere from 2 to about 8 can easily be addressed using conventional multithreading techniques with existing languages. But, as hardware thread count climbs into the double-digits, ever more types of tasks and workloads need a fundamental reworking to achieve good utilization. And that means adopting different software tools & tech, which developers are often slow to do.
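The "conventional multithreading techniques" side of that divide can be seen in a toy example. Here's a minimal Python sketch of the pattern that works fine up to a handful of threads: a fixed worker pool mapped over independent tasks (the function and numbers are made up purely for illustration):

```python
# Minimal sketch of the conventional pattern: a fixed pool of workers
# mapped over independent work items, with no shared mutable state.
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # stand-in for a self-contained, independent task
    return sum(i * i for i in range(n))

items = [10_000, 20_000, 30_000, 40_000]

serial = [work(n) for n in items]           # single-threaded baseline

with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(work, items))  # same work, farmed out to a pool

assert parallel == serial                   # identical results either way
```

(In CPython the GIL keeps pure-Python CPU-bound code like this from actually running faster on more cores; the point of the sketch is the programming model, which stops scaling gracefully once the work items are no longer independent.)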
 

bit_user

Polypheme
Ambassador

EPYC not only uses the same core design, it actually used the same dies. So it really is.

The cloud market is the prize AMD really wants, but they know they can't just head straight for the main course.


Yeah, you'll notice I put AMD's "7 nm" parts in quotes, while I left Intel's unquoted. The processes don't need to be exactly comparable - my point is that the new process should give AMD enough of an advantage to surpass Intel's 14++. That's the issue. Even if Intel gets their 10 nm ramped up, they probably won't reach volume production of their huge enterprise chips, while AMD will already start sampling "7 nm" EPYC, later this year. So, some amount of leap-frogging is pretty much already certain, even if not in the desktop market.

Not to mention any architectural improvements in Zen 2, as we shouldn't forget that we've only seen the first gen of the Zen architecture and AMD surely couldn't fit everything they wanted in it.
 

InvalidError

Titan
Moderator

Writing software that leverages 10+ threads effectively requires considerably more development effort than writing software that only scales to four or so hardware threads, and DX12 won't magically solve that; it only makes it somewhat less burdensome by loosening the formerly single-threaded API interface. So even though multi-core CPUs have been available in the mainstream for 10+ years, it will take time for mainstream software to catch up with newer CPUs offering more cores and/or threads.

As for you not caring that higher mainstream core counts are pushing prices down, that's your loss. The rest of us will be glad to pay significantly less for any given core count, or just grab a couple of extra cores/threads at whatever price point we're comfortable with the next time we want or need an upgrade.
 

InvalidError

Titan
Moderator

The "tools and tech" have been there for decades already in the high-performance and visual computing worlds. The big hurdle is the cost-benefit of applying those techniques to mainstream software where they are applicable. More threads means more discipline and planning required to make it work reliably, efficiently and effectively. Unless a task is embarrassingly parallel in nature, the threading effort quickly becomes unmanageable, which is why threading in most mainstream software is increasing so slowly - avoid increased complexity unless absolutely necessary.

Yes, newer programming languages, and extensions to the usual suspects, provide constructs that abstract some of that complexity away. For interactive software, though, there is no getting around the simple fact that large chunks of interactive code are intrinsically serial, since they depend on user inputs and their derivatives.
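That intrinsically serial fraction puts a hard ceiling on what extra cores can buy, which is exactly what Amdahl's law quantifies. A quick sketch (the 80% and 95% parallel fractions are illustrative, not measurements of any real application):

```python
# Amdahl's law: if a fraction p of the work parallelizes perfectly,
# the best possible speedup on n cores is 1 / ((1 - p) + p / n).
def max_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even a "mostly parallel" program hits a wall fast:
print(round(max_speedup(0.80, 16), 2))     # 80% parallel, 16 cores -> 4.0x
print(round(max_speedup(0.95, 16), 2))     # 95% parallel, 16 cores -> 9.14x
print(round(max_speedup(0.80, 10**9), 2))  # unlimited cores, still capped near 5x
```

With a 20% serial fraction, going from 6 to 16 cores barely moves the needle, which is the cost-benefit problem described above in a nutshell.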
 

kinggremlin

Distinguished
Jul 14, 2009
574
41
19,010


October 17th, 2014
Current gen 6 core hyperthreaded Intel CPU (5820k) - $329
https://forums.anandtech.com/threads/intel-i7-5820k-329-free-shipping.2404395/

June 19th, 2018
Cheapest current gen 6 core hyperthreaded Intel CPU (I7-8700) - $302
https://www.newegg.com/Product/Product.aspx?Item=N82E16819117826&ignorebbr=1

Wow, look at the money we are saving from almost 4 years ago. You are so right about how the higher core counts are pushing the lower ones down significantly. We should start a new thread to discuss what everyone is doing with their $27. Nice job on the jerk response that indicates you missed the point I was making. So here's one for you.
 

alextheblue

Distinguished
I kinda doubt we'll see 16 cores outside of HEDT platforms for a while. We don't even know what AM4 will support. Although if they could pull off 6 cores per CCX, that would be interesting for a couple of reasons. One, they could release low-to-mid-range models with a single CCX and up to 6 cores; a 6C/12T single-CCX second-gen Zen part would be interesting. Two, they could cut a single core off each CCX and offer affordable 10C/20T chips priced in roughly the same range as the current Ryzen 5 models, in addition to a top 12C/24T model.

Interesting to chew over, but I'm not sure how likely any of this is to develop. Honestly, I'd be pretty happy with the worst-case scenario of continuing upgrades to their Zen cores and IF. The core design is pretty solid, and Zen+ was a surprisingly decent boost for a tweak, so I am looking forward to seeing what they can do with Zen 2.
 

bit_user

Polypheme
Ambassador

I know it's not your point, but ark.intel.com is probably a better source for pricing, and don't neglect platform price differentials either. As I'm sure you know, the i7-5820K used socket 2011-3, while the i7-8700K uses their mainstream socket (and includes an iGPU).
 

InvalidError

Titan
Moderator

Intel doesn't exist in a vacuum anymore. You can get Ryzen CPUs with comparable performance in heavily threaded workloads for ~$100 less in the mainstream and about half the price in the HEDT/server space. Intel is still in damage-control mode on the back of AMD's surprise comeback.

Were it not for Ryzen putting looming pressure on Intel, the i7-8700K would most likely have either ended up as yet another 4C/8T part or launched several months later, as evidenced by the months-long delay between the new CPUs and their mainstream chipsets, the incomplete launch CPU lineup, and the limited initial availability.


For first-gen Zen, AMD used the exact same die from Ryzen through EPYC. We already know from AMD's EPYC2 presentation that EPYC2 will have up to 64 cores per socket (up from 48 based on the Starship rumors from last year) using the same four-die-per-package arrangement, which means 16 cores per die. You can't reach a total of 16 with six cores per CCX, so EPYC2 has to either use four quad-core CCXs per die or an eight-core Zen 2 CCX. Either way, if AMD repeats its one-die-fits-all strategy with Zen 2, then Ryzen 3 will use the same 16-core die that EPYC2 does.
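The die arithmetic in that paragraph is easy to sanity-check (the figures are the ones quoted in the post):

```python
# Figures quoted above: 64 cores per socket, four dies per package.
cores_per_socket = 64
dies_per_package = 4
cores_per_die = cores_per_socket // dies_per_package
print(cores_per_die)             # 16

# 16 isn't divisible by 6, so six-core CCXs can't tile the die...
print(cores_per_die % 6 == 0)    # False
# ...but four quad-core CCXs or two eight-core CCXs fit exactly.
print(cores_per_die % 4 == 0 and cores_per_die % 8 == 0)  # True
```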

I wouldn't mind seeing the Ryzen 3 lineup end up as something like 3900/16C32T, 3800/12C24T, 3700/10C20T, 3600/8C16T, 3500/6C12T and 3400/4C8T, instead of differentiating major model numbers (100s bumps) based on incremental base and boost clock differences, which I personally consider a little insulting; those should only be minor model number (10s) bumps. (Granted, Intel is even worse at the low/mid range.)
 

bloodroses

Distinguished


I wouldn't hold your breath regarding ARM on the high-end desktop. ARM cores currently aren't being designed for that market; with ARM, power usage is more important than performance. This might change in the future, though, if a company like Apple switches.

Now, ultra-portable laptops are a different story. But those aren't designed to run Crysis or high-end development software.
 

InvalidError

Titan
Moderator

A Cortex-A75 variant tweaked for higher clock frequencies could be a somewhat decent desktop CPU. The biggest obstacle to ARM on the desktop is that most people who have desktop PCs also have a legacy of x86 software they aren't going to give up so easily, and not much ARM-based replacement software to choose from if they want to avoid the performance penalty of an ISA emulation layer and API bridges. The lack of a proper software engineering ecosystem outside the embedded and mobile space was cited as one of the main reasons ARM decided to scrap its plans to enter the server market a few months ago.

If ARM wanted to enter the desktop space, it could. At the moment though, the company isn't interested in the support effort required to break into new markets.
 
Jun 18, 2018
2
0
10
AMD, I will exchange three of the i7-8086Ks that you get from the winners for six 1950Xs, so you can exchange more i7-8086Ks.

I choose three because your behavior is about the same as my three-year-old daughter's.
 