AMD Fury X And Fiji Preview


f-14

Distinguished
the Fury X doesn't employ a fan on its dual-slot form factor....yadda yadda yadda.... Without that traditional blower, the company was able to design a fully enclosed, sleek-looking shroud.

So what about the other parts of the card that get hot? With this shroud, can we expect the power lanes to melt or spontaneously combust the shroud due to all the excess heat? Will the PCB warp or bend? What about all the VRMs and MOSFETs? Doesn't the memory get pretty hot? I imagine there's a lot more that needs to be kept cool than just the GPU, considering the 200+ watts being pumped into these things.
Or are they just going to print a Hasbro logo on them so we can have our cake cooked and eat it too?
 

kcarbotte

Contributing Writer
Editor
Mar 24, 2015
1,995
2
11,785


The article states that the stacks are comprised of four 256MB chips, which totals 1GB each. - This is not wrong; it is how HBM1 memory works.
There are four stacks of four 256MB chips.
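
If the wording is still tripping people up, here is the arithmetic spelled out as a quick sketch (only restating the numbers from the article, nothing new):

```python
# HBM1 capacity math for Fiji as described in the article.
dies_per_stack = 4           # four DRAM dies stacked on each base die
capacity_per_die_mb = 256    # 256MB per die
stacks = 4                   # four stacks sit on the interposer next to the GPU

per_stack_gb = dies_per_stack * capacity_per_die_mb / 1024   # 1.0 GB per stack
total_gb = per_stack_gb * stacks                             # 4.0 GB total
print(per_stack_gb, total_gb)
```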
 

Puiucs

Honorable
Jan 17, 2014
66
0
10,630
Well, if VR is going to be that demanding, I won't hop on it till it matures. An R9 390X should be sufficient. If it isn't, I'll let the VR boat sail by!!!
There will always be less demanding games. What they are talking about is those photorealistic ones that really are demanding, and the fact that over the course of a few years VR displays will increase resolution a lot. You should expect 4K to be the norm for VR about 2 years after they release these first-gen products.
In 5 years you'll see 8-16K VR products.
 

yhikum

Honorable
Apr 1, 2013
96
0
10,660
400A typo? Is it 40A or 400W?
40A is the intensity of electrons per second. = I
400W is a power value. = P

P (Power) = I (Intensity of current) x V (voltage - electric potential)
P = I x V

400W = 40A x V, so
V = 400W / 40A = 10 Volts.
So a power of 400W at 40A works out to 10V.

Normally at home we talk about power at 220V, so 400W at 110V is a very different situation than 400W at 10V. Take into account that when the voltage is not constant, as at home, the formula changes a bit, but the idea is there.

This is misleading. While "I" is what is commonly called current, it is measured as the amount of charge passing through per second, not as an intensity or flow of electrons.

Assuming the power consumed is 400W, a different amount of current (or charge) passes through a wire at different voltages. At 110V the current will be smaller than at 10V, or 1V. To pass more current you need a larger conductor cross-section. On a PCB that would mean very wide copper traces carrying your current. You also need to take into account that not all power is supplied to the core chip; some of it is fed into all sorts of glue components, and some of it dissipates directly in the voltage/current regulators.
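
To put quick numbers on that point, here is a small sketch; the 1.2V figure is just an assumed GPU core voltage for illustration, not an official spec:

```python
# For a fixed power draw, current scales inversely with voltage (I = P / V),
# and higher current needs a wider conductor to carry it.
power_w = 400.0

for voltage_v in (110.0, 12.0, 1.2):
    current_a = power_w / voltage_v
    print(f"{power_w:.0f} W at {voltage_v} V -> {current_a:.1f} A")

# -> roughly 3.6 A at 110 V, 33.3 A at 12 V, and 333.3 A at 1.2 V
```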
 

Giroro

Splendid
Don't get too caught up on the 400 Amp thing. It's badly misplaced marketing hype. Odds are a single component in their VRM is rated to withstand something along the lines of 500 Watts, and the core GPU voltage is ~1.25 Volts - It doesn't mean that everything on the PCB will hold up to that, and DEFINITELY doesn't mean that the card is getting 400 Amps over the 12V rail.
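
A rough sketch of that back-of-the-envelope math; the ~1.25V core voltage and ~275W board power below are my assumptions for illustration, not official figures:

```python
# What a "400 A" VRM rating plausibly covers vs. what the card pulls from the 12 V rail.
core_voltage_v = 1.25          # assumed GPU core voltage
vrm_output_limit_a = 400.0     # the figure from AMD's slide

# Output side: 400 A at ~1.25 V is roughly 500 W of headroom at the core rail.
vrm_output_limit_w = vrm_output_limit_a * core_voltage_v

# Input side: an assumed ~275 W board power is only ~23 A drawn from the 12 V rail.
board_power_w = 275.0
rail_current_a = board_power_w / 12.0

print(round(vrm_output_limit_w), round(rail_current_a, 1))   # 500 22.9
```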
 
The lack of HDMI 2.0 is a huge mistake. It doesn't matter that most people game on monitors right now. A growing number are turning to gaming on their TV or buying TVs as monitors for their computer rooms, and AMD just told those people to shove off.

HDMI 2.0 is not new, so there was NO REASON for them not to include it other than that they simply didn't care enough. For a company that is being outgunned more than 2-to-1 in the industry, you would think they would WANT to entice as many customers as possible rather than block off a growing percentage of them.

Not having HDMI 2.0 is even more comical for Project Quantum, since that is clearly designed for TV/"living room" use. If it doesn't have this, then they are expecting people to buy souped-up all-in-one mini PCs that will neuter gaming performance for the very people they are targeting to buy them.

I had a big interest in this card and was willing to take a chance on dealing with AMD's weaker driver support, yet AMD has just told me they want me to stay with Nvidia and get a 980 Ti, which I will do.

What a boneheaded decision on their part.

First off, this driver thing is more than old. There is no difference for 98% of games, except day-one releases, which Nvidia itself is not able to handle properly either.

Secondly,... the set linked below is probably the best 4K monitor you can have. It's not the TV people expect, since 4K streaming support is not available, but it is a superb gaming display for a computer.

http://www.amazon.com/Panasonic-TC-58AX800U-Ultra-LED-LCD/dp/B00QKGAKGW/ref=sr_1_1?ie=UTF8&qid=1434664687&sr=8-1&keywords=58ax800
 

uglyduckling81

Distinguished
Feb 24, 2011
719
0
19,060
Man, a $100 premium for the 390X over the 390. That price has got to come down to make room for the Nano.
This staggered release from AMD is interesting to see.
Give out rebrands so the impatient buy them first.
Release the new top end so the early adopters spend the most money with the biggest profit margins.
Release a slightly cheaper card but still with good margins a week later.
Give a significant time period before releasing the much better value product (Nano), so that any impatient people who thought they would wait end up buying the higher-margin cards instead.

Sound like any other company we know of?
980 release.... some time later the Titan X..... then a 980 Ti.

Clearly there is a science to extracting as much money from consumers as possible.
 

kcarbotte

Contributing Writer
Editor
Mar 24, 2015
1,995
2
11,785
400A typo? Is it 40A or 400W?
It can't be 400 amps. That's most of your house amps there!!!


Not a typo, unless AMD put a typo in the slide. The 6-phase power design is capable of delivering 400A - this does not mean that it requires this level of power, simply that it can handle that much.
AMD was adamant in emphasizing that this card has been overbuilt significantly.
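
For a sense of scale, you can split that rating across the phases; this is just arithmetic on the slide's number and assumes the load is shared evenly:

```python
# 400 A spread across a 6-phase VRM, assuming an even per-phase share.
total_rating_a = 400.0
phases = 6
per_phase_a = total_rating_a / phases
print(round(per_phase_a, 1))   # ~66.7 A per phase, far above what the card actually needs
```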
 

norseman4

Honorable
Mar 8, 2012
437
0
10,960
Benchmarks are finally out!!!

http://www.forbes.com/sites/jasonevangelho/2015/06/18/amd-radeon-fury-x-benchmarks-full-specs-new-fiji-graphics-card-beats-nvidias-980-ti/

Not quite, that's still the information from AMD, which at this point is not uncommon. The 24th is when the review embargoes are lifted for everybody that has review samples. (Until the Forbes piece, though, nobody had confirmed that they had review samples, even if I expected it.)
 


Really weird that they have an NDA even after officially announcing the part. Normally they have an NDA until they announce the part. Hell, Hawaii XT was launched and reviewed a couple of weeks before you could buy it.

Not sure why they would even have an NDA if they are so confident in it.
 

Arabian Knight

Reputable
Feb 26, 2015
114
0
4,680


lol, as you wish, fanboy... AMD has HBM ready, and can release a 4-GPU card at any moment.

I don't care what Nvidia is planning to do. I expect a 3- or 4-GPU single card from AMD very soon... and it will beat Nvidia very hard this round. And if AMD does not do it, it will be a huge mistake. They have the advantage of area on the board, and they must use that advantage in a flagship!

Do it AMD!!! We want a 4-GPU single card!!!
 

The point is that ONLY AMD has HBM today, and it will most likely remain that way for 9 to 12 months.

Yes, next year will be interesting. Both AMD and Nvidia will be dropping HBM-based cards, and that is when the real battle starts. AMD, as any company with a technology advantage would, is crowing like a proud papa rooster about its Fiji GPU and its HBM.

All Nvidia can do is talk about what they expect to have... next year... Nvidia could have gone out there and invented this. But, just like 7 years ago when AMD went and created GDDR5, Nvidia left it up to AMD again, so AMD gets to crow for most of a year.

Both companies have great hardware designers. How they use them is important.

 


That would be impossible, currently. Cooling it would require a much larger radiator than the Fury X will have, and the power requirements would be well beyond it as well.



Why do people think that AMD invented HBM? Hynix was the memory company behind HBM, meaning they probably did the legwork while AMD decided to be the guinea pig. And to go even further back, HBM is just a version of Wide I/O, which was developed by Samsung (another big memory company) and is specified by JEDEC.

And no, AMD did not invent GDDR5; they were merely the first to apply it to a GPU with the HD4870. The company that actually did the work and specified GDDR5 was Qimonda, and Hynix was the first to start producing chips.

And having the latest memory tech is not always a great thing. AMD (ATI back then) had GDDR4 on the HD2900 series, which also had a 512-bit memory bus, much like the R9 290 series, but it ran very hot because of that. With the HD3870 series they went back to GDDR3, and GDDR4 fell by the wayside.

And while Nvidia may not focus on the memory technology, they do focus a bit more on in-game technologies. Memory bandwidth is not a massive bottleneck right now. It might be in a few years, but having more does not mean better performance.
 

Arabian Knight

Reputable
Feb 26, 2015
114
0
4,680


It does not have to be 4 flagship GPUs, you know. Just enough to beat Nvidia's dual-GPU flagship...

Besides, they can use a 240 radiator with such a card. A 120 radiator was enough for their 295X2 at 500 watts, and I don't mind a 240 for a 4-GPU card.

It is possible, and I think AMD is just waiting for the new Nvidia dual-GPU card to make their move.
 

Tibeardius

Reputable
Apr 19, 2014
15
0
4,510
VSR seems interesting. I'm hoping it will support ultra-wide resolutions as well. It would be fun to try out on games where you can get super high framerates.
 

If memory bandwidth is not a bottleneck, why do both companies keep adding more and more memory to get more bandwidth?

A couple of months ago, as we started hearing about this new, at-the-time-mysterious GPU, one person at AMD stated that no game on the market truly needed 4GB of memory. Yet they kept adding memory to the cards to expand the memory bandwidth. When they started talking about 4K resolutions being supported on the Fiji GPU, and knew that they would be working with 4GB of memory, they assigned engineers to the problem for the first time ever. They had never worried about using memory efficiently before, because memory capacity was increasing much faster than memory use; the added memory was simply there to expand the memory bandwidth. So the engineers came up with ways to use the 4GB memory space the four HBM stacks would provide efficiently, and AMD is certain they have it covered.

So once again, why would both companies keep expanding memory if bandwidth was not a problem? Wouldn't one or the other stop and say, "Hey! We could do this with 2GB of memory, and sell our cards for $50 less than the other guys!!". But that has not happened, has it?
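
Part of the answer is that with GDDR5 the two are tied together: widening the bus means adding more 32-bit chips, which also adds capacity. Here is a rough sketch of the theoretical-bandwidth math, using the commonly quoted figures for these parts (treat the per-pin rates as my assumptions rather than spec-sheet quotes):

```python
# Theoretical peak bandwidth = bus width (bits) * per-pin data rate (Gb/s) / 8.
def peak_bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8.0

# GDDR5 on an R9 290X-class card: 512-bit bus at ~5 Gb/s per pin
print(peak_bandwidth_gbs(512, 5.0))    # 320.0 GB/s

# HBM1 on Fiji: 4096-bit bus at ~1 Gb/s per pin
print(peak_bandwidth_gbs(4096, 1.0))   # 512.0 GB/s
```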
 


You kinda missed everything in my post. AMD was not behind GDDR5; Qimonda did the specifications for it, AMD just adopted it first.

And again, HBM was not invented by AMD or Hynix. It is a memory design based on Wide I/O from Samsung, but specifically adapted to GPUs.

I also did say that in the future it might become a bottleneck, but it certainly is not a major one; right now driver and API overhead are much more of a bottleneck than memory bandwidth. Then again, there is nothing wrong with thinking towards the future. Both AMD and Intel have had very fast interconnects for their CPUs, yet only with recent advancements in other technologies have those even started to become any sort of bottleneck in the system. PCIe 3.0 x16 is not even close to being saturated right now, and even PCIe 2.0 x16 is still a very viable setup.
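
For reference, the theoretical peaks work out roughly like this (standard PCIe encoding overheads, per direction):

```python
# Theoretical PCIe bandwidth per direction for an x16 slot.
lanes = 16
pcie2_gbs = 5e9 * (8 / 10) / 8 * lanes / 1e9      # 5 GT/s per lane, 8b/10b encoding   -> ~8.0 GB/s
pcie3_gbs = 8e9 * (128 / 130) / 8 * lanes / 1e9   # 8 GT/s per lane, 128b/130b encoding -> ~15.8 GB/s
print(round(pcie2_gbs, 1), round(pcie3_gbs, 1))
```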

The reason AMD is doing only 4GB of HBM is probably that HBM1 is limited to 4GB max, plus cost. If they had 8GB, the cost might outweigh the performance benefits.

I am not knocking AMD. It is great that they are adopting what will be the next standard; their GPUs tend to do that first. But they have not invented anything memory-wise; it is normally one of the major memory manufacturers like Hynix, Samsung or Micron.
 

norseman4

Honorable
Mar 8, 2012
437
0
10,960


I really don't like it either, but we've known for nearly 2 weeks that reviews would be available on the 24th. It's not like crap games that hold reviews back until the game is live to the public, hoping that the pre-orders will pay for the developers to fix the crap.

I'm hoping that AMD nails this. If they don't, based upon 3rd-party reviews and benchmarks, then even though I have an Adaptive-Sync monitor, my next GPU will be whatever is best for my monitors for the few games that I actually play. (Reviews that actually go in depth and test the things they think they know, not the 'This is a re-badge of an earlier version so we won't even try to overclock the thing...' Thanks, Tom's, for your mailed-in 300-series reviews.)
 
If Samsung invented HBM, why is only SK Hynix making it? That is a division of Micron.

Something about what you are saying does not add up. AMD clearly states they invented GDDR5, and that they started work on HBM right after GDDR5 happened.
 


http://www.hwstation.net/img/news/allegati/Qimonda_GDDR5_whitepaper.pdf

That is the white paper laying out Qimonda's spec for GDDR5.

http://www.theregister.co.uk/2007/11/01/qimonda_samples_gddr5

A report about them sampling the first GDDR5 chips back in 2007 before AMD put it on their HD4870.

And I never said Samsung invented HBM; what I said was that Samsung developed Wide I/O memory, and AMD worked with Hynix to adapt it to GPUs and called it HBM. It is sort of like AMD's TressFX vs Nvidia's HairWorks, or G-Sync vs FreeSync. Same ideas, just different implementations.
 
HBM memory stacks:

The article is actually CORRECT. Not sure if anyone corrected one of the responses but THIS is how it works:

There are FOUR 256MB dies stacked on the logic die. That's 1GB for the entire stack. There are four stacks, for a total of 4GB.

I understand the confusion, as saying "for a total of 4GB" would have cleared that up, but I was rather DISGUSTED because they concentrated on how the "memory bandwidth" should help enable 4K gaming but don't mention the memory amount (if they did, I missed it).

Yes, it's under the specs but neatly sidestepped when discussing the card's performance potential.

*Having wide bandwidth doesn't help much if the game wants to use more than 4GB, AMD... that's hardly optimized for 4K gaming in the future, now is it?
 

rdc85

Honorable


Hoams,..
Even if they publish their own benchmarks, people will just say it's a cherry-picked test... full of b*** s***..
Now that they're keeping quiet and letting the tech sites do the testing, people are still saying something is wrong... full of b*** s***..

Just sit tight, we will know sooner or later.....
 