AMD CPU speculation... and expert conjecture

I just had another crazy idea about alternative console uses, like using the PS4 for supercomputers.

Cryptocurrency miners should make mining programs for the PS4 and Xbone(R). The PS4 will obviously have a major advantage due to more shaders, an 8-core CPU and GDDR5 RAM, all under $400. :D :D :D :whistle: :ange:

Hurry up miners, stack up as many PS4s as you possibly can and start mining!!
 

juanrga

Distinguished
BANNED
Mar 19, 2013


Intel is very clever about their plans. Intel's main plan is to replace the (CPU + discrete card) combination with its new APUs (Intel calls its APUs "CPUs"). Intel will continue fabricating Xeon CPUs and discrete Phi cards for legacy users (see the quote from Walczyk below):

Intel Corp. is planning to bring to market a stand-alone Xeon Phi CPU that can replace the combination of Xeon CPU and Xeon Phi coprocessor widely used in high performance computing (HPC) systems today.

The company did not say when it was planning to bring the product – which will use the 14nm process technology – to market when it announced it at this week's SC13 supercomputing conference in Denver.

Raj Hazra, VP of Intel's data center group and general manager of its technical computing group, said the concept of making Phi a host processor will do away with the notion of having to off-load code across a PCIe or some other limited-capacity connection.

[...]

Intel's announcement does not mean it is going to stop developing Xeon Phi coprocessors. Knights Landing (Intel's codename for Phi) will be available in two flavors, Radoslaw Walczyk, a company spokesman, wrote in an email. “There are many customers who may just want to upgrade their PCIe coprocessor without [changing] the entire platform,” he said.

http://www.datacenterdynamics.com/focus/archive/2013/11/intel-working-stand-alone-xeon-phi-cpu

According to details released by Intel at the Supercomputing Conference late yesterday, however, it won't always be this way. The next-generation version will also be available as a standalone CPU, constructed on a 14nm process node. Unlike the original Xeon Phi, the next-generation model will be able to operate entirely independently - executing both serial operating system tasks and parallel code on the same chip - with no need for a traditional CPU at all.

http://www.bit-tech.net/news/hardware/2013/11/20/intel-xeon-phi/1




It is impossible to scale the TITAN supercomputer up to exascale. The current supercomputer architecture (CPU + discrete card) cannot be scaled up 1000x. Everyone in HPC knows this. I already explained why CPU + discrete card doesn't scale up, and I did it a dozen times. Tomorrow's supercomputers will not use discrete cards. This is a slide shown by Intel at the Supercomputing 13 conference:

[Image: Xeon Phi Knights Landing CPU/GPU form factor slide]


AMD plans to replace the TITAN supercomputer with an exascale supercomputer that uses 10TFLOPS APUs

http://www.theregister.co.uk/2011/11/02/amd_exascale_supercomputing_apu/

Nvidia has the same plans as both Intel and AMD. The Nvidia APU for exascale supercomputers is an 8-core 300W APU with 20 TFLOPS. I already gave the rest of the details before: cache size, process node, memory bandwidth...
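
To put rough numbers on the scaling argument, here is a back-of-envelope sketch in Python. The Titan figures used (~17.6 PFLOPS sustained Linpack at ~8.2 MW) are the commonly cited ones and should be treated as assumptions; the 20 TFLOPS / 300 W APU is the one quoted above.

# Rough scaling sketch: what happens if a Titan-style (CPU + discrete card)
# machine is simply scaled to one exaflop at the same efficiency.
# Titan figures (~17.6 PFLOPS Linpack, ~8.2 MW) are commonly cited values,
# treated here as assumptions rather than exact numbers.

TITAN_PFLOPS = 17.6        # sustained Linpack performance (assumed)
TITAN_MW = 8.2             # reported power draw in megawatts (assumed)
EXASCALE_PFLOPS = 1000.0   # 1 exaflop = 1000 petaflops

titan_gflops_per_watt = (TITAN_PFLOPS * 1e6) / (TITAN_MW * 1e6)
scaled_power_mw = EXASCALE_PFLOPS / TITAN_PFLOPS * TITAN_MW

# The 20 TFLOPS / 300 W exascale APU mentioned above, for comparison.
apu_gflops_per_watt = 20_000 / 300

print(f"Titan efficiency:          {titan_gflops_per_watt:.1f} GFLOPS/W")
print(f"Exascale at that rate:     {scaled_power_mw:.0f} MW")
print(f"Quoted 300 W exascale APU: {apu_gflops_per_watt:.0f} GFLOPS/W")

At Titan-level efficiency you end up in the hundreds of megawatts, which is why every vendor's exascale pitch revolves around much higher GFLOPS per watt.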
 

juanrga

Distinguished
BANNED
Mar 19, 2013


Intel plans to kill compute dGPUs (GPGPU) using the new socketed KL Phi, which isn't a coprocessor.

Intel plans to kill graphics dGPUs using the future APUs (with improved iGPUs). Skylake will resuscitate the old Larrabee plans:

http://www.techradar.com/news/computing-components/graphics-cards/intel-could-kill-performance-pc-graphics-in-2015-1089339

http://news.softpedia.com/news/Intel-s-2015-and-2016-CPUs-Are-Skylake-and-Skymont-237504.shtml

I agree that discrete graphics cards will be killed. I predict they will be killed around the year 2018.
 
I think Intel is using the CISC side of x86 to their advantage with Knights Landing. I don't think they need to call it an APU at all, since it doesn't have a "GPU" as a co-processor like AMD has (such that you could slap it in as a stand-alone card and it would work with minor tweaks). They only have shader tech licensed from PowerVR AFAIK, so they can't actually "make" an APU, but they can add more instructions to x86 in order to cover that deficiency in licenses. AVX is how they're doing it, I think. The rest is just reorganizing the uArchs to accommodate the increase in FPUs and registers and decoders and blah blah.

Isn't that the approach Intel is taking? I remember Larrabee being a cut-down version of an original Pentium core, fit for a small size and a bazillion FPUs. I don't think the Knights series is a big departure from that core concept.

Cheers!

EDIT: Changed an idea to the correct one, lol.
 

8350rocks

Distinguished
@juanrga:

AMD said that 3 years ago...

Think about what you are trying to tell me...that an APU will have 4x the GPU compute capability of a stand alone R9-290X in uber mode.

What process node do you expect them to use? 300 picometer? We are talking about something that would have to house ~8,000 SPUs + CPU cores...

That is little more than a pipe dream...APUs will not be advantageous for HPCs in my lifetime.

Drop the charade and look at AMD's current plans for HPC...

http://www.extremetech.com/computing/155941-supercomputing-director-bets-2000-that-we-wont-have-exascale-computing-by-2020

No one is looking at single die solutions anymore...they are looking at memory systems and buses to be able to better connect specialized discrete components.

They realized the APU for exascale is actually a poor solution, as the interconnects between APUs would still be the weak link. You are thinking about HPC designs from 3-4 years ago...get into 2014.
 

juggernautxtr

Honorable
Dec 21, 2013


I see the APU as only a part of the solution; yes, the interconnects will be focused on while improving the APU.
At the rate they are moving on GPU/PPU compute, I also see them taking over the stand-alone CPU.

 


I would say we would see APUs in everything except the extreme top end, entry-level servers and high-end servers. In fact, that's what we have right now. Both FM2+ and LGA1150 are APU-only platforms, with AM3+ and LGA2011 being CPU only. The main difference, though, is that LGA2011 is server grade; even the X79 chipset is based on the 5000 series chipset.

Of course this is all the same as it was when CPUs first came out. Everything eventually moves onto the part itself. Math coprocessors used to be stand-alone upgrade chips but are now on the CPU die. VRMs, as of Haswell, are part of the CPU itself, and on certain ones even the south bridge chipset is.

At some point there won't be much left on the mobo itself. We will be the old guys saying "in my day you had to buy RAM and insert it into slots" to the newbies.
 

Sigmanick

Honorable
Sep 28, 2013
Back in my day, you had to flip switches and change jumpers on the mobo to adjust the front side bus speed. Jumping from a 66 MHz FSB to 100 MHz and upgrading to 256 MB of RAM, now THAT was overclocking.

Ya newbs, with your UEFI BIOSes, your software-based overclocking with real-time temperatures and voltages, and your forums where someone else has already done the work; you wouldn't know what to do if you had to rely on the manufacturer's documentation alone.

edit
(The first system I bought was an AMD K6-II 333 MHz with 128 MB installed, and I was hooked on enhancing the system from then on.)

I remember the old hack that was needed to run Doom on Win 3.1 due to memory limitations. It required you to interrupt the boot sequence. Then you had to run DOS commands to start up the game. Good ol' 25 MHz Intel 486 with a .75-speed CD-ROM.

I remember my dad giving me a funny look because Doom 2 installed took up more space than Windows.
 

juggernautxtr

Honorable
Dec 21, 2013


Ahhhhhh, the good old days, when you had to lead-pencil the chip to overclock, and if you did it wrong, poof, up in smoke.
 


Interesting read, but I disagree with it.

AMD selling cards is not a bad thing, not now nor in the long run. I think Jimmy said that as long as AMD keeps sending PR samples and gets inside reviews, they'll stay relevant for gamers. As long as a gamer can buy one (near or at MSRP), AMD will be fine.

Now, if AMD wants to carve/dig into that specific market, then they should make special edition Radeons for them, charging a lot more and keeping the gamer cards away from crypto-miners (priced less than FirePro, more than Radeons). Something like what nVidia has done so far for gamers (and might reverse with Maxwell given demand from crypto-miners). I don't think doing so in hardware is that difficult for AMD.

Cheers!
 

juggernautxtr

Honorable
Dec 21, 2013


If these miners were smart they'd already be using the FirePros. The whole thing is going to collapse; there are massive servers mining now, and those servers are using the FirePros. The gaming cards won't stand a chance against those.
 

truegenius

Distinguished
BANNED
I think the HD 7990 will be best, as it gets 8200 GFLOPS in single precision and 1894 in double, and consumes only 375W (only 85 watts more than Hawaii), with 2.7x the double precision and 45% more single precision performance.

Back then, the HD 7990 was available for just $500:
http://www.fatwallet.com/forums/expired-deals/1306802/

If you missed that offer (if it's true), then go and regret it :whistle:
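
For what it's worth, a quick perf-per-watt calculation using only the numbers quoted above (treat them all as assumptions from the post, not verified specs):

# Perf-per-watt check using only the figures quoted above.
hd7990_sp, hd7990_dp, hd7990_w = 8200.0, 1894.0, 375.0   # GFLOPS, GFLOPS, W
hawaii_w = hd7990_w - 85          # "only 85 watts more than Hawaii"
hawaii_sp = hd7990_sp / 1.45      # "45% more single precision"
hawaii_dp = hd7990_dp / 2.7       # "2.7x the double precision"

for name, sp, dp, w in [("HD 7990", hd7990_sp, hd7990_dp, hd7990_w),
                        ("Hawaii",  hawaii_sp, hawaii_dp, hawaii_w)]:
    print(f"{name:8s} {sp / w:5.1f} SP GFLOPS/W   {dp / w:4.2f} DP GFLOPS/W")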
 

blackkstar

Honorable
Sep 30, 2012


People forget that each PCIe slot in a mining rig is valuable. I've been seeing too many people flip out over GTX 750 Ti numbers. What are they going to do? Get 1000 KH/s out of each mining rig?

And I do agree with 8350rocks about future HPC plans. If you have a GPU and CPU that can both address the same memory and read/write to and from it, there is no need to keep them together.

That solves the problem of needing a ton of bandwidth to copy entire blocks of application memory over a slow bus. It still leaves a huge latency problem, but HPC isn't latency sensitive for the most part, and AMD's own documentation claims that the GPU in HSA is only for throughput-oriented workloads while the CPU is for latency-sensitive ones. And note how AMD's documentation says GPU and not APU ;)
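
A minimal cost-model sketch of that trade-off (plain Python, not a real GPU API): with a shared address space the whole data structure no longer has to be copied across the bus, but whatever the GPU actually touches still crosses PCIe. The bus figure and data sizes are assumptions for illustration only.

# Cost model only: compares copying a full structure both ways against
# demand-driven traffic when CPU and GPU share one address space.
PCIE_GB_PER_S = 16.0       # roughly PCIe 3.0 x16 (assumed)

def transfer_ms(gigabytes):
    return gigabytes / PCIE_GB_PER_S * 1000

structure_gb = 8.0         # full application data structure (assumed)
touched_gb = 0.5           # portion the GPU kernel actually reads/writes (assumed)

explicit_copy_ms = transfer_ms(structure_gb) * 2   # copy in, copy results out
shared_mem_ms = transfer_ms(touched_gb)            # demand-driven traffic only

print(f"explicit copy model : {explicit_copy_ms:7.1f} ms")
print(f"shared-memory model : {shared_mem_ms:7.1f} ms")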

 

juggernautxtr

Honorable
Dec 21, 2013


310 watts (290X) vs 275 watts (S9000) vs 225 watts (S8000), times however many cards; that's a ton of power as you add cards. Then there is the S10000 (375 watts), which is a dual-processor solution... which is more appetizing for a massive server.
Power will become more important as you get bigger.

The 290X pulls 65 watts more than the S8000 yet delivers the same performance in compute, because it is fully compute capable.
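
To see how quickly that adds up, a simple sketch using the wattages quoted above as assumed board powers and an arbitrary card count:

# Total draw per rig; the per-card figures come from the post above and the
# card count is purely an example.
cards = {"R9 290X": 310, "FirePro S9000": 275,
         "FirePro S8000": 225, "FirePro S10000": 375}
n = 6                       # cards per rig (example)

for name, watts in cards.items():
    total_kw = n * watts / 1000
    print(f"{n} x {name:15s}: {total_kw:.2f} kW  ({total_kw * 24:.0f} kWh/day)")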
 


The majority of console gamers have no idea what "resolution" means. They turn it on and it "works". That's the limit of their knowledge base. I'm not exaggerating here; you guys are grossly overestimating the technical knowledge of your average Xbox / PlayStation consumer.

Plus, like I said, most HDTVs are native 1366x768. They merely accept a 1920x1080 signal and downscale it to their native resolution. You can see that by plugging a laptop or PC into the HDMI port and reading the EDID information off the display. It'll have the various 1080 resolutions listed as "supported", but the "native" resolution will be 1366x768. It's done for cost reasons, as the mother glass used in HDTVs is significantly larger than what's used in monitors or smartphones, and to reduce visible defects they use a much lower total resolution. HDTVs with a native resolution of 1920x1080 are expensive, especially if they are over 42 inches in size.
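
For anyone who wants to do the EDID check themselves, here is a hedged Python sketch that reads the preferred (native) mode out of a raw EDID blob. On Linux the blob can usually be read from /sys/class/drm/<connector>/edid; the exact connector name below is an assumption, and some panels still advertise a 1080p preferred timing, so treat the result as a hint rather than proof.

# Parse the first detailed timing descriptor of an EDID base block to get
# the preferred mode. Descriptor layout follows the standard EDID 1.3/1.4
# structure; the sysfs path/connector name is an assumption.
def native_mode(edid: bytes):
    dtd = edid[54:72]                 # first detailed timing descriptor
    if dtd[0] == 0 and dtd[1] == 0:   # pixel clock of 0 -> not a timing
        return None
    h_active = ((dtd[4] >> 4) << 8) | dtd[2]
    v_active = ((dtd[7] >> 4) << 8) | dtd[5]
    return h_active, v_active

with open("/sys/class/drm/card0-HDMI-A-1/edid", "rb") as f:
    print("native panel mode:", native_mode(f.read()))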
 


This is an interesting take, and I agree with it in the sense that if people cannot get the R9 290X for less than a GTX 780 Ti, why would they go for it? The GTX 780 Ti is $719 vs $769 for a Sapphire R9 290X (the best-cooled one currently). They are about even, with some wins and losses on both sides; mostly the 290X benefits from its 4GB of GDDR5 vs 3GB of GDDR5 in higher-resolution benchmarks, but then most buyers in that market will go CF or SLI anyway.

If you can get an R9 290X at MSRP it is well worth it, as it is a good card and AMD has been doing better on their drivers.

That said, if the price remains too far above MSRP, NVidia has an advantage, and even once it gets to MSRP they will have enjoyed months of better pricing and availability, so they could drop the price to put the squeeze on AMD.



At a job I had in college, we took an AMD K6-2 laptop, popped the back open and flipped the switches. It was originally running at 233MHz, but we actually unlocked it to desktop speeds and made it run at 333MHz. The fan was spinning like crazy and it needed the power adapter all the time, but it was pretty fun.

I will say OCing has become something that is very easy these days. I remember when OCing was still a very risky business and required knowledge of the CPU and how it worked. Even back on Core 2 it was trickier than on current CPUs, as the FSB was normally tied to the memory speed. You wanted to keep a 1:1 ratio for best stability, or use a different ratio for better performance, but that required actual overclocked memory.

If you had a Q6600 G0 and wanted 3GHz, then you also wanted good memory to run at, say, 1333MHz. I had 4GB of Corsair XMS2 1066MHz 5-5-5-18 and I was able to clock it to 1333MHz 4-4-4-12 while also undervolting the Q6600 at 3GHz.

Now memory is on its own bus, so you can have DDR3 1600MHz and still OC the crap out of your CPU without worrying about the memory speed.
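
A worked example of that FSB-era arithmetic (the Q6600's 9x multiplier is the real value; the FSB:DRAM dividers are just illustrative):

# Core 2-era overclocking math: CPU clock = FSB x multiplier, and the DDR2
# effective rate follows the FSB through a divider.
fsb_mhz = 333                    # host clock after overclocking
multiplier = 9                   # Q6600's locked multiplier
print(f"CPU clock: {fsb_mhz * multiplier} MHz")   # ~3 GHz target

for ratio in (1.0, 1.6, 2.0):    # FSB:DRAM dividers (1:1, 5:8, 1:2 style)
    dram_clock = fsb_mhz * ratio
    print(f"ratio {ratio:.1f} -> DDR2-{dram_clock * 2:.0f} effective")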



The recent crypto-mining demand for NVidia is only due to the newer coins utilizing the embedded features NVidia has. Bitcoin didn't work as well since it didn't use CUDA well, and the same goes for Litecoin. I have heard some newer ones are actually starting to use nVidia's hardware, but these are also not the same price per coin as BC/LC.



None of the Bitcoin ones are using FGLs; they use ASICs. And even though a FGL uses less power, its initial cost is well more than a Radeon's, so I doubt any major ones will be using FGLs over Radeons.



They are not. Unlike NVidia, AMD normally does not disable or nerf the full compute ability on their gaming GPUs, which is why Radeons have benefitted massively in this market.

Still, gamers suffer, but as long as AMD makes the sales, it is all good. It is too bad, though, that they are not benefitting from the increased cost; only the etailers are.
 

jdwii

Splendid


I got a 32 inch 1080P TV and it cost like $380 3 years ago; it was an Acer TV. It's actually quite nice for a gaming monitor, I love it.

Either way, they do know the original Xbox had better graphics, but it came with a higher price, and the PS4 does have more capacity for doing the same thing but at a lower price. I guess I have wealthier and more intelligent friends and family members (excluding my grandma, I guess, lol), because every one of them knows the difference between 480P, 720P and 1080P. Even my mom knows.

Right now Microsoft has more exclusives, so I really don't think that's the reason why they're losing; it's the price/performance, which is heavily lacking even at a $400 price point.
I believe (not a fact) that there are a lot more PS fanboys compared to Xbox fanboys, and that's based on the fact that most exclusives on the Xbox are just FPS, whereas Sony has much more... like RPG, adventure, action, actual story-line, single-player experiences. I remember when I bought an Xbox 360 I was disappointed, since it seemed like it was just a console for online games or FPS, which to me is boring and repetitive.
Not to mention that the 360 had a year on the PS3 and they're about even in total world sales today (that's with Sony's crazy pricing). Really, Microsoft has never won any generation in terms of hardware sales.
 
I got a 32 inch 1080P TV and it cost like $380 3 years ago; it was an Acer TV. It's actually quite nice for a gaming monitor, I love it.

You most likely have a 32 inch 1366x768 HDTV that is telling you it's capable of receiving a 1080p signal. Actually it has more pixels than 1366x768, but that's the internal resolution it uses, as multiple pixels are assigned to each rendered pixel. It allows for defects to be present in the mother glass that wouldn't be noticeable to the end user. The only way to know for sure is to read its EDID information and look for the native display mode. I have a 42 inch 1080p HDTV, and both the 360 and the PS4 would do "1080p", so I thought it was all good. When I built that mini-ITX living room box and hooked it up, that's when I found out my display had a native resolution of 1366x768. Did some research and found out that most HDTVs are like that. I could set the display to 1920x1080@60 and you could tell that the text was hard to read. Set it to 1366x768 and everything was super clear and bright.

People freak out about that and swear up and down that it's "1080p" because the box says so. To be 1080p, all a screen has to do is be capable of receiving the signal; one of those fine-print gotchas.
 


The 360 actually outsold the PS3 quite a bit. Not the Wii though.



Another problem: the S9000 is an HD7950, so it will be weaker in terms of KH/s output. I don't think there is a FGL Hawaii XT out yet, cuz the S10000 is still based on Tahiti (pretty much an HD7950 X2) and is aimed at HPC more than anything.
 

jdwii

Splendid


Just to double check I made sure, and yes, it's 1080P; it's plugged into my desktop right now (Acer AT3265). I turned my resolution to 1366x768 and it sucked; the text is real clear at 1080P, however. People have to make sure they get a TV that says 1080P (not just 1080i), and they have to make sure it's 60Hz, if not more, at that res.
 


Even if you had an HSA dGPU, you'd STILL have to get the data over the PCI-E bus. The savings come from not needing to duplicate every data structure the GPU needs access to and pass it across threads. But you still need to pump the instructions and data across the PCI-E bus regardless. It's not like that transmission bottleneck is going away.
 


Let's not forget latency, people. Bandwidth only covers how MUCH data you can pump across the bus; latency covers how quickly you can get it.
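
A tiny model of that distinction, with both figures as rough assumptions: total time is roughly latency + size/bandwidth, so small transfers barely benefit from a wider bus.

# Small transfers are latency-bound, large ones bandwidth-bound.
LATENCY_US = 10.0         # assumed one-way transfer/launch latency, microseconds
BANDWIDTH_GB_S = 16.0     # assumed ~PCIe 3.0 x16 bandwidth

def transfer_us(kilobytes):
    return LATENCY_US + (kilobytes / (1024 * 1024)) / BANDWIDTH_GB_S * 1e6

for kb in (4, 256, 65536):
    print(f"{kb:>6} KB -> {transfer_us(kb):9.1f} us")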
 
The 360 actually outsold the PS3 quite a bit. Not the Wii though.

US, or worldwide? Because worldwide it was a dead heat for a while, but Sony pulled ahead last year.

Hence the freakout over US sales numbers of the XB1. The ONLY territories MSFT won in were the US and UK. And if the US trends two-to-one in Sony's favor, there's little chance the XB1 is going to turn a profit this time around.
 