AMD CPUs, SoC Rumors and Speculations Temp. thread 2

Page 8
Status
Not open for further replies.
FF was designed to simplify I/O (virtualize) and remove ports for dense servers. It's not really an HPC fabric. A coherent CPU-CPU fabric needs to be much faster. This is likely something entirely new, or a revamped HT over PCIe. I would suspect something PCIe 4.0-based, likely licensed from Synopsys or Avago.
 

I assumed it was basically a fast crossbar with a PHY.
 


Devs have been trying to minimize draw calls for years, so it follows that the ability to issue more won't lead to much of a performance increase compared to raw shader-based performance.
 


With an iGPU compared to a dGPU, the iGPU suffers from bandwidth issues, gamer. We have had this discussion many times.

EDIT: Skylake real world gaming performance is actually a regression:

http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/10

IB and HW are both consistently faster. We are splitting hairs, because it is not by much... however, it is still a minor regression.
 


Well, with an iGPU, the memory bandwidth consumed by draw calls (because command data still has to pass through shared memory at the moment) ties up resources that do not need to be tied up when you are already bound by other factors.

Do you not expect to see some performance improvements for APUs with DX12/Vulkan? I mean, it may not be world-beating by any means... but a 10% FPS increase puts a lot of games currently in the high 20s over the 30 FPS bar.
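As a quick sanity check on that claim (hypothetical frame rates, not from any benchmark), a 10% uplift only clears the 30 FPS bar for games that already sit at roughly 27.3 FPS or above:

```python
# Hypothetical APU frame rates before an assumed 10% DX12/Vulkan uplift.
before = [22, 25, 27, 28, 29]
uplift = 1.10  # assumed 10% improvement from lower draw-call overhead

after = [round(fps * uplift, 1) for fps in before]
over_30 = [fps for fps in after if fps >= 30]

print(after)    # [24.2, 27.5, 29.7, 30.8, 31.9]
print(over_30)  # [30.8, 31.9] -- only the 28 and 29 FPS cases clear 30
```

So "games with FPS in the 20s" really means the high 20s; anything in the low-to-mid 20s stays under the bar even with the uplift.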
 
I think the issue with that theory, 8350, is the simple fact that very few games will ever have DX11 and DX12 render paths side by side to compare.

A fully DX12-capable game may well perform much better and use far more draw calls, but that will likely go unnoticed if there isn't any easy way to compare.

The most likely outcome is that games on average will use multi-core CPUs better (so users will see all four cores on a quad fully used by a game rather than two sitting almost idle), and older high-core-count designs might get a new lease of life...

As for APUs gaining much? Well, there is mention that DX12 could make AMD's asymmetric CrossFire a reality, although I gather its specific implementation is down to individual developers, so it will likely be hit and miss (at first, at any rate).
 


First, it is with some discrete-card gaming, not with any gaming (Tom's Hardware found that Skylake desktop chips tie with Kaveri chips in iGPU gaming). Second, the results found by AnandTech don't make sense, and they are investigating the issue:



Skylake is a bit slower than I expected, but is still a good 20% faster than Haswell:

 


I recall reading an article (I believe it was at SA) mentioning that calling FF supercomputer-class was essentially a marketing lie. I will try to find the article. I agree that FF is very, very far from the needs of a CPU-CPU interconnect; the max throughput offered by FF is ridiculously low.

EDIT: Cazalan found the article for me. Check the next message.
 
Just another reason why Rory needed to go, but he was CEO, not CTO. Don't confuse lying with lack of understanding.

"FF is many things, but supercomputing is not one of them. It is an amazing backbone tech for a big box of shared nothing (BBOSN) machines, but it is fundamentally unsuited for supercomputing tasks. Anyone who had the vaguest clue about the product would understand this and not make that basic mistake. It is like calling a bicycle a car because both have similar privileges on some roads, analysts might be mislead, but that does not make it true. It is extremely worrisome that Mr Reed does not understand the tech on this level, it isn’t that complex and it fundamentally affects his company."

http://semiaccurate.com/2012/10/30/amd-announces-an-arm64-server-soc-for-seamicro/

Edit: It did what SeaMicro needed it to do: allow 512 CPUs in a 10U box. They were going after dense servers, not supercomputers.
 


Synthetic benchmarks are not by any means a bible. As I have been saying for years...YMMV.

However, it is reproducible that the real-world results are often slower than on the older silicon. This is a glaring shortfall, and it will no doubt get "spun" into somehow being positive because it is Intel... and no one can say anything bad about them these days without taking a pay cut, it seems.
 


Well, right... FF itself was not suited to supercompute, just super-dense servers...

However, my understanding from AMD was that some of the technology in FF, and, most importantly, interconnect patents from it, were useful in creating this new super fabric for supercompute. It is not so much an evolution of the technology as a highly efficient reimagining of it for the purpose they were looking for... as I understand it, this is likely to eventually replace HTX... though not for some time...
 
Uhm... from what I know, FF (Freedom Fabric) is just a regular bus with no other "intelligent" synchronization mechanism behind it to support large-scale data coherency at high throughput between host CPUs. Ugh, I hate using so many buzzwords, haha. But basically, my point is that AMD would need to build a proper "backend" to turn it into a real all-purpose bus. I don't think it's that far a leap to, for example, adapt some of the HTX concepts into FF's innards.

I can totally imagine AMD mutating FF into some sort of HTX spin-off.

Cheers!

EDIT: Typo.
 


As mentioned above, there is not anything special behind FF. Moreover, what SeaMicro did was take the interconnects used in supercomputers and build a scaled-down version that consumes less power and is adapted for data centers. It would be ironic if AMD now tried to take FF and scale it back up to supercomputers. Moreover, I don't see any possibility of using this tech in exascale products, whose requirements are far beyond those of current supercomputers.

NOTE: Remarks from SeaMicro about FF's origins

For answers, SeaMicro looked to the techniques used to interconnect the CPUs of the largest and most complicated supercomputers and set about scaling the technology down for data-center applications. The result is the Freedom™ Supercomputer Fabric [...] While the fabric has its origins in the supercomputer world, SeaMicro tuned the design of the fabric, optimizing it for the requirements of the data center.
 


Sure they will. DX9 and DX10 are dead, so pretty much every DX12 title for the first two or three years will also have a DX11 path. That's par for the course. So we'll have plenty of side-by-side comparisons.
 
@gamer, that is good to know. I think there are a number of titles that will see big gains from DX12 then, although it will be in quite specific circumstances at first....

I know Star Citizen is already hitting draw-call limits due to the number of discrete, fully animated components on ships, and they are working on DX12 to help with this.

Also, newer RTS titles (e.g. the upcoming Ashes of the Singularity) are going for massive numbers of independent units, and again DX12 looks to be a big factor, as that game is built on the Oxide engine. (They are shooting for 10k units in play at once plus particles; currently the largest mass-unit RTS games that feature truly independent units are SupCom and PA, which both max out at around 3k before things get really slow, so that would be a serious uplift.)
 


Ahh, but you see, using a lot of the same thing is cheap, since drawing a duplicate of the same object in DX is a very cheap operation. That's how Valve pushed so many zombies in L4D; they took a handful of models and duplicated them several hundred times, rather than creating a new model for every individual zombie.
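The duplication trick gamer describes is instancing: the CPU submits one draw call that covers many copies of the same mesh, instead of one call per object. A toy sketch of the difference in CPU-side submission counts (simulated counters only, not real Direct3D calls; `FakeGPU` and its method names are made up for illustration):

```python
class FakeGPU:
    """Toy stand-in for a graphics API; it only counts submissions."""

    def __init__(self):
        self.draw_calls = 0

    def draw(self, mesh):
        # Naive path: one CPU-side submission per object.
        self.draw_calls += 1

    def draw_instanced(self, mesh, instance_count):
        # Instanced path: one submission covers many copies of the mesh.
        self.draw_calls += 1


# 500 zombies drawn naively: 500 draw calls of CPU overhead.
naive = FakeGPU()
for _ in range(500):
    naive.draw("zombie_mesh")

# Same 500 zombies instanced: a single draw call.
instanced = FakeGPU()
instanced.draw_instanced("zombie_mesh", instance_count=500)

print(naive.draw_calls)      # 500
print(instanced.draw_calls)  # 1
```

The GPU still shades all 500 zombies either way; what instancing saves is the per-call CPU overhead, which is exactly the cost DX12 is also attacking.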
 
Gamer, that's all very well, but then why can you still CPU-bottleneck modern large-scale RTS games? I'll bet Left 4 Dead maxed out at no more than 100 zombies.

10k of something cheap is still a lot!
 
Big desktop-replacement laptops are on the rise. They need a mobile Fiji.

http://www.theregister.co.uk/2015/08/07/big_ugly_heavy_laptops_are_surprise_pc_sales_sweet_spot/
 


But... but... my laptop is not big, heavy and clunky 🙁

In any case, it's kind of a logical conclusion of the trends. For the screen real estate most notebooks have, current-gen games are not pushing the hardware inside notebooks to the limit at all. Case in point: I can play all the games I have on my own lappy, an i7-3630QM with a 675M (580M), on its (really bad) 1080p display, albeit not at full 1080p, but at 1600x900 or sometimes HD (EuroTruck Simulator 2 is really CPU dependent). EDIT: This point is aimed at: "if you can game, then you can most probably work as well".

If AMD can slap some HBM onto an A10 and keep that thing under 45W, then they would jump to the top of the notebook market almost immediately. Even more than a mobile Fiji, HBM on the APU is way more important IMO. Get rid of the MXM GPU, focus on extra storage (RAID 0 SSDs :wet: ) and features (instead of the ODD: an extra SSD, maybe more battery, and 4 extra USB 3.0 ports), and AMD could have an all-around winner.

Darn it. Now I am depressed that such a notebook is within reach (doable), but no one will make it 🙁

Cheers!
 