AMD CPU speculation... and expert conjecture

Page 419
Status
Not open for further replies.

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


AMD's strategy has been explained here for months, including why AMD is replacing the 8-core Opteron CPU with the 12-core (4 latency cores + 8 throughput cores) Berlin APU.

And no, it is not "new". AMD already presented Kaveri as a 12-core APU during APU13.

AMD-KaveriAPU13-2-1000x562.jpg




As explained to you plenty of times before, MANTLE is a PC port of the low-level console APIs. MANTLE is designed so console code can be reused on a PC.
 

jdwii

Splendid
Either way, it's nice to see information on the next-gen APU, although some things in the benchmarks seem fishy, such as using a 270X instead of a 290X: people buying a $220 i5 pair it with higher-end GPUs all the time, and there's no need to spend $330 on an i7 just for HT, which does nothing for gaming. I want to see how this processor does in the programs I use, but if it games at the i5 level with at least a 280X-class video card, I will probably switch my board over to FM2+.
I use my desktop for games, video encoding with HandBrake, and VLC for movies. I will probably be gaming at 1080p for quite some time.
For $350 you can get this:
A10-7850K: $175
FM2+ board: $100
8GB of 2133 RAM: $75
and game at 1080p medium with next-gen games.
I really want to see actual sites review this processor.
 

con635

Honorable
Oct 3, 2013
644
0
11,010
Yes, the consoles use their own API which is close to the metal, but that metal is GCN and eight AMD x86 cores, so surely optimizations for the consoles will transfer straight to Mantle, which targets GCN and x86 cores. The consoles don't need Mantle; the desktop PC does.
 

juggernautxtr

Honorable
Dec 21, 2013
101
0
10,680
APU stands for "Accelerated Processing Unit": what part of this wasn't understood? AMD stated from the very announcement of these processors that this was what they were going for: using the GPU's parallel processing power (what it does best) for those particular processing needs, and keeping the CPU for serial processing (what it does best).
Did AMD break some rule that says it has to beat Intel in CPU power ONLY? Or can't they just flat-out rip them with the combo?

This is starting to sound like the early '90s "we don't need dual-core processors" line, with complaints that the programming isn't there for them.
Now AMD has stepped up the game again, and the complaint is that they're using the GPU core for processing.
I guess we didn't learn the first time. CPU + GPU/PPU + ARM = Intel's nightmare. Their GPU is way behind, and AMD's CPU side is getting stronger along with the GPU; 5% gains aren't going to keep AMD from overtaking Intel in performance.
 

jdwii

Splendid


I actually find it odd that those parts are not out before the desktop parts. Seeing 45W APUs is kinda cool; it's nice to see AMD trying to compete more on performance per watt. I wonder how this APU performs in SimCity; that game taxes my CPU a decent amount, and it's a very CPU-bound game overall.
 

etayorius

Honorable
Jan 17, 2013
331
1
10,780


Yup, it seems AMD is rolling everything out and aiming it straight at Nvidia. Keep the open software coming, AMD.

 

Master-flaw

Honorable
Dec 15, 2013
297
0
10,860


Hahahahaha...
Great name.
 
Lmao @ FreeSync. I knew the technology existed, but they worked pretty quickly to come out with a demo, considering. G-Sync isn't even out yet and it's already dead in the water against something so easy to implement.
 


A VESA standard... does that mean only VGA-standard resolutions?

It could mean a comeback for the 16:10 aspect ratio :p

Cheers!
 
Non-NDA Kaveri discussions:
http://www.tomshardware.com/news/amd-kaveri-a10-7850k-a10-7700k-apu,25643.html
http://www.anandtech.com/show/7643/amds-kaveri-prelaunch-information
Both AT and Tom's confirm that the 20% IPC improvement is over Richland (the A10-6800K): 20% more IPC, 5% lower TDP.

FreeSync made me chuckle:
http://techreport.com/news/25867/amd-could-counter-nvidia-g-sync-with-simpler-free-sync-tech
Let's hope AMD pushes this tech a bit, unlike what they did with HD3D.

AMD and BlueStacks Collaboration Brings Full Android OS Experience to Windows PC
http://www.techpowerup.com/196539/amd-and-bluestacks-collaboration-brings-full-android-os-experience-to-windows-pc.html
AMD Surrounds 2014 CES Visitors With Breakthrough Visual and Audio Experiences
http://www.techpowerup.com/196536/amd-surrounds-2014-ces-visitors-with-breakthrough-visual-and-audio-experiences.html
http://techreport.com/news/25864/amd-sheds-more-light-on-kaveri-announces-new-mobile-radeons

GLOBALFOUNDRIES Announces New Chief Executive to Lead Next Phase of Growth
http://www.techpowerup.com/196524/globalfoundries-announces-new-chief-executive-to-lead-next-phase-of-growth.html

MAINGEAR Unveils the Small and Mighty SPARK Steambox PC
http://www.techpowerup.com/196520/maingear-unveils-the-small-and-mighty-spark-steambox-pc.html
mobile kaveri on this one... mmm...

 

juanrga

Yuka, AMD also said that Steamroller is "up to" 20% IPC... but we already saw benchmarks showing 30-40% IPC gains over Richland. Haha!

But the point is that the 5% gain due to MANTLE was FUD. It was spread over the Internet by fanboys until a DICE developer wrote that they would not waste time on MANTLE for only single-digit gains. Ok?

blockstar, during APU13 Oxide compared the FX-8350 @ 2GHz against an i7-4770K at stock. But this weekend we saw that Kaveri (Steamroller) achieves 30 FPS in the MANTLE version of the Oxide demo and sub-10 FPS in the DX version. The slide was reproduced above.
 

ColinAP

Honorable
Jan 7, 2014
18
0
10,510


You mean you found it on the SA forums after it was posted today by someone else?
 

Ags1

Honorable
Apr 26, 2012
255
0
10,790


Yes, the interesting thing is that the thread scheduling appears to be pretty random, with a high likelihood that threads from a single application get scheduled to the same core. This probably goes a long way to explaining why the scheduler 'improvements' in Windows for the Bulldozer architecture only netted 1-2% performance gains at most - because regardless of the scheduler's intentions, in practice, threads are assigned almost randomly to cores (due to the noise of background threads also grabbing cores).

My observation is that this hurts Intel more than AMD.

Anyways, I don't count performance from intermediate threading scenarios, so I'm not too worried from my app's point of view.
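One way to isolate an app from that scheduling noise is to pin it to specific cores yourself. A minimal Linux-only sketch using Python's `os.sched_getaffinity`/`os.sched_setaffinity` (a hypothetical illustration of affinity pinning, not anything from the Windows scheduler discussed above):

```python
import os

# Query which cores the current process may run on (Linux only).
allowed = os.sched_getaffinity(0)
print("schedulable cores:", sorted(allowed))

# Pin the process to a single allowed core so the OS scheduler
# cannot migrate it -- useful for isolating per-core benchmarks.
target = min(allowed)
os.sched_setaffinity(0, {target})
assert os.sched_getaffinity(0) == {target}

# Restore the original affinity mask afterwards.
os.sched_setaffinity(0, allowed)
```

Threads pinned this way stay put regardless of the scheduler's heuristics, which is roughly what the Bulldozer scheduler patches tried (and, per the observation above, largely failed) to achieve automatically.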

 

juanrga

I already know where AMD got the 20% IPC for Steamroller claimed this weekend: it is from the second pass of the x264 benchmark. Kaveri scored 8.53 and Richland 7.09, both clocked at 3.7GHz under Windows 8.1 with 2133MHz memory.

Taking this review as a base:

RICHLAND-DESKTOP-57.jpg


Kaveri would score ~9.5, which is at the i5-2400 level and only ~8% behind the FX-6300 and i5-2500K. It looks very close to my prediction ;-)
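The ~9.5 projection is just ratio arithmetic on the matched-clock scores quoted above; a quick check (the ~7.9 Richland baseline is implied by the projection, not stated in the post):

```python
# IPC ratio from the matched-clock x264 second-pass scores.
kaveri, richland = 8.53, 7.09
ratio = kaveri / richland
assert abs(ratio - 1.20) < 0.01          # ~20% higher IPC at the same clock

# Scaling a review's Richland score by the same ratio reproduces
# the ~9.5 projection (7.9 is an assumed baseline, back-derived).
review_richland = 7.9
projected_kaveri = review_richland * ratio
assert abs(projected_kaveri - 9.5) < 0.1
print(f"ratio: {ratio:.3f}, projected Kaveri: {projected_kaveri:.2f}")
```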

x264-kaveri-pre.png


We will see up to 40% IPC gains in other benchmarks.

My prediction about the HSA boost has been completely demolished. I took a 3x gain as a conservative estimate,

HAAR-face-detection-kaveri-pre.png


but I was expecting at best a 5x boost in LibreOffice. How wrong I was... LibreOffice Calc runs >8x faster with HSA enabled!
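Spreadsheet recalculation is a textbook data-parallel workload, which is why GPU offload can produce such outsized gains. A rough illustration of the same principle, using vectorized bulk math versus a cell-at-a-time loop (this is not HSA itself, just the data parallelism it exploits):

```python
import time
import numpy as np

n = 1_000_000
col = np.random.rand(n)  # one spreadsheet column of numbers

# Scalar path: one cell at a time, like a naive spreadsheet engine.
t0 = time.perf_counter()
scalar = 0.0
for x in col:
    scalar += x * x
t_scalar = time.perf_counter() - t0

# Data-parallel path: the whole column in one bulk operation,
# the shape of work HSA hands to the GPU's parallel ALUs.
t0 = time.perf_counter()
vector = float(np.dot(col, col))
t_vector = time.perf_counter() - t0

assert abs(scalar - vector) < 1e-6 * n   # same result either way
print(f"speedup: {t_scalar / t_vector:.0f}x")
```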

LibreOffice-Benchmark.jpg

 


VESA is the governing body for specs like DisplayPort, and is still quite active on the display side of things.

Also, from TR:

http://techreport.com/news/25867/amd-could-counter-nvidia-g-sync-with-simpler-free-sync-tech

Koduri explained that this particular laptop's display happened to support a feature that AMD has had in its graphics chips "for three generations": dynamic refresh rates. AMD built this capability into its GPUs primarily for power-saving reasons, since unnecessary vertical refresh cycles burn power to little benefit. There's even a proposed VESA specification for dynamic refresh, and the feature has been adopted by some panel makers, though not on a consistent or widespread basis. AMD's Catalyst drivers already support it where it's available, which is why an impromptu demo was possible.

Reading between the lines, not all panels support this. Secondly, it's not a standard, but a proposed standard.

According to Koduri, the lack of adoption is simply due to a lack of momentum or demand for the feature, which was originally pitched as a power-saving measure. Adding support in a monitor should be essentially "free" and perhaps possible via a firmware update. The only challenge is that each display must know how long its panel can sustain the proper color intensity before it begins to fade. The vblank interval can't be extended beyond this limit without affecting color fidelity.

Koduri pointed out that the primary constraint in making this capability widespread is still monitor support. Although adding dynamic refresh to a monitor may cost next to nothing, monitor makers have shown they won't bother unless they believe there's some obvious demand for that feature. PC enthusiasts and gamers who want to see "free sync" happen should make dynamic refresh support a requirement for their next monitor purchase. If monitor makers get the message, then it seems likely AMD will do its part to make dynamic display synchronization a no-cost-added feature for Radeon owners everywhere.

So HW support is still an issue, but a firmware update should suffice. There is a possible loss of fidelity if the Vblank interval is delayed too long, which is a potential limiting factor of the tech.

Clear sucker punch to NVIDIA here by AMD; I was wondering where the counter was... Still, this is another indication that we're moving away from new graphical features and toward more post-processing. We're running out of things to improve graphically now. I'm *hoping* ray tracing gets here soon...

EDIT

On the technical side, I'm REALLY interested in the implementation here, specifically, how AMD knows ahead of time what the output FPS is going to be.

NVIDIA brute-forced this on the hardware side via what is essentially an FPGA, allowing the monitor itself to know when a signal is incoming and then drive the display based on that. Very little overhead (lag) is involved here, but it requires a hardware modification to the monitor to work.

On the AMD side, I'm worried input lag is going to be an issue. Specifically, how does the driver know when and how to adjust the Vblank interrupt interval? To do this, it needs to know the output FPS, which, for a game, can't be known ahead of time. Obviously, this couldn't be done until AFTER the frame has already been produced, so there is going to be a delay while this interval is adjusted by the driver for that particular frame. As a result, you'd get smooth gameplay, variable FPS, but introduce a global lag delay of some period into the system.

That's my reading of the technical side, having no reference materials or white papers to look at. May be off here, but doing this software side would seem to indicate there will be some period of delay, possibly trivial, possibly enough to really annoy some people. Withholding judgement on the results until I see some details.
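My reading above can be sketched as a toy control loop: the driver measures each frame's render time after the fact, then stretches the next Vblank interval to match, clamped to the panel's fade limit. All names and numbers here are illustrative assumptions, not AMD's implementation:

```python
# Toy model of driver-side dynamic refresh ("free sync"-style pacing).
# The constants are made up; real panels would report their own limits.
MIN_INTERVAL_MS = 1000 / 144   # fastest refresh the panel supports
MAX_INTERVAL_MS = 1000 / 30    # longest hold before colors start to fade

def next_vblank_interval(frame_time_ms: float) -> float:
    """Stretch the Vblank interval to the last frame's render time,
    clamped to the panel's supported range. Because the driver can
    only react to a frame after it is rendered, the interval always
    lags the workload by one frame -- the source of the potential
    input-lag concern discussed above."""
    return min(MAX_INTERVAL_MS, max(MIN_INTERVAL_MS, frame_time_ms))

# A frame that took 20 ms displays at an effective 50 Hz ...
assert next_vblank_interval(20.0) == 20.0
# ... while very slow frames are clamped so color fidelity holds.
assert next_vblank_interval(50.0) == MAX_INTERVAL_MS
```

The one-frame lag in this loop is exactly why a hardware approach like G-Sync, where the monitor reacts to the incoming signal directly, can avoid the delay a software-driven scheme may introduce.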
 


And... LibreOffice suddenly can't use dGPUs instead of HSA? At least do a comparison against, say, an i3 paired with a GT 750 or something. This has nothing to do with HSA and everything to do with using the GPU side of the APU.
 