AMD CPU speculation... and expert conjecture


noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860
IPC is the inverse of CPI, where CPI = Σ_i (CC_i × IC_i) / IC: the clock cycles for each instruction class i, times the number of instructions of that class, summed over all classes, divided by the total instruction count IC.

Attempting to leave this out and claim IPC = performance/clock only works when you are comparing the same architecture. Do you know the program's instruction count (IC)?

Let me demonstrate.

Say you have a program with millions of lines of code, some sabotage happens, and you end up with if (cpuid == "intel") { b++; }.

Let's say this is hidden and gets compiled. How would any end user be able to detect this anomaly, and how does it affect IPC = performance/clock?
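
For illustration only, here is a minimal C sketch of what such a hidden vendor-gated branch could look like, using GCC/Clang's <cpuid.h>; the loop body and the b++ are stand-ins from the example above, not code from any real game:

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

/* read the CPUID leaf-0 vendor string ("GenuineIntel", "AuthenticAMD", ...) */
static int cpu_is_intel(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 0;

    memcpy(vendor + 0, &ebx, 4);   /* the string comes back in EBX, EDX, ECX */
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    return strcmp(vendor, "GenuineIntel") == 0;
}

int main(void)
{
    volatile int b = 0;

    for (int i = 0; i < 10; i++) {
        /* ... the loop's real work ... */
        if (cpu_is_intel())
            b++;               /* the extra, vendor-gated instruction(s) */
    }
    printf("b = %d\n", b);
    return 0;
}

Compiled into a large binary, a branch like this is essentially invisible to the end user, which is the point of the question above.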

Say this line runs once per 10 loop iterations because it is embedded in some animation call. That means for every 10 instructions, you get 11 instructions to complete if your name is Intel.

So IPC = IC / Σ_i (CC_i × IC_i). For the sake of argument, let's say this is a very simple program that runs one instruction per 10 clocks, regardless of instruction type.

For AMD: IPC = 10 / (10 × 10) = 10/100 = 0.10
For Intel: IPC = 11 / (10 × 11) = 11/110 = 0.10

That is the instruction count at the software level; now let's see what happens to performance at the hardware level.

Say both CPUs are capable of running this loop at their peak, at 3 GHz (3 clocks per ns).

AMD: 10 instructions × 10 clocks = 100 clocks per loop ≈ 33.3 ns per loop
Intel: 11 instructions × 10 clocks = 110 clocks per loop ≈ 36.7 ns per loop

Because this program artificially introduced extra lines of code, even though the hardware IPC is 100% identical, it runs about 10% slower whenever "Intel" is detected.
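
To make that arithmetic explicit, here is the same toy example worked in C (the numbers are the hypothetical ones above, not measurements):

#include <stdio.h>

int main(void)
{
    const double clocks_per_instr = 10.0;   /* identical hardware behavior */
    const double freq_ghz = 3.0;            /* 3 GHz = 3 clocks per ns */

    const double amd_instr = 10.0;          /* instructions per loop */
    const double intel_instr = 11.0;        /* one extra, vendor-gated */

    double amd_clocks = amd_instr * clocks_per_instr;       /* 100 */
    double intel_clocks = intel_instr * clocks_per_instr;   /* 110 */

    /* hardware IPC comes out at 0.10 in both cases... */
    printf("IPC: AMD %.2f, Intel %.2f\n",
           amd_instr / amd_clocks, intel_instr / intel_clocks);

    /* ...but the loop itself takes about 10 percent longer on the Intel path */
    printf("ns per loop: AMD %.1f, Intel %.1f\n",
           amd_clocks / freq_ghz, intel_clocks / freq_ghz);
    return 0;
}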

Now do you see why you need to know the instruction count before you can claim software IPC = hardware IPC, instead of just stating performance = IPC × frequency?

Now for the non-theoretical part: this is happening IRL.

Just scroll up.

P.S. Ignore any syntax problems; I know it's not strictly correct, it's just simplified enough to grasp what is being done.

For users and purchasers of a computer system, instructions per clock is not a particularly useful indication of the performance of their system.

 

jdwii

Splendid


Again, does that mean to just buy the AMD product and have it perform 40% worse per clock, regardless of the reason? I tested everything myself and see a pretty big increase in performance: areas that used to stutter are smooth now, and FPS numbers are higher than I ever thought would be possible from just a CPU upgrade. Games like The Evil Within are actually more enjoyable, and the same goes for emulators.

I personally see no cheating being done anymore; it's not 2005. In 2015 the issues you and the entire AMDZone keep talking about just aren't happening. Instead I'd blame poor software coding and poor multithreaded support in modern applications. Edit: and yes, I think the new APIs will help the weaker CPUs out in gaming, and I know a lot of other tasks are multithreaded.
 


You and I both know you can use other compilers inside MS Visual Studio. Anyhow, it's most likely a library issue, as in they bought a license to the Intel Vector Math Library or one of Intel's many black-box libraries and just used that. MSVC would have done the linkage and left it alone, so you could easily have had code of that nature embedded in an otherwise innocent application. Financially it would make sense, since purchasing that code directly from Intel saves the man-hours of developing your own solution / implementation. None of this would surprise me, since Intel still makes the world's fastest math libraries, and when they sell them they plainly state that those libraries favor their own CPUs, so no sneakiness there. People should turn their ire toward the game developers for not releasing that information or otherwise explaining the discovered discrepancy.
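
For anyone wondering how that kind of "black box" ends up favoring one CPU without the application author touching it, here is a rough, hypothetical sketch of the dispatch pattern such libraries typically use (generic_dot and fast_simd_dot are made-up names; this is not Intel's actual code):

#include <cpuid.h>
#include <stddef.h>
#include <stdio.h>

typedef double (*dot_fn)(const double *, const double *, size_t);

/* plain portable fallback path */
static double generic_dot(const double *a, const double *b, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

/* stand-in for a hand-tuned SIMD path; same result, faster in real life */
static double fast_simd_dot(const double *a, const double *b, size_t n)
{
    return generic_dot(a, b, n);
}

/* pick an implementation once, based on what CPUID reports */
static dot_fn select_dot(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: ECX bit 28 advertises AVX. A library could also
       (controversially) insist on a particular vendor string before
       taking the fast path. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 28)))
        return fast_simd_dot;
    return generic_dot;
}

int main(void)
{
    double a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
    dot_fn dot = select_dot();            /* the app never sees this choice */

    printf("dot = %f\n", dot(a, b, 4));   /* 4 + 6 + 6 + 4 = 20 */
    return 0;
}

The application only ever calls dot(); whether the fast path is gated on a feature bit, a vendor string, or both is entirely the library vendor's decision, which is why it can go unnoticed by the developer who links it.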
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


It's what happens when the little guy (AMD) has to do most of the leg work. It takes time to come to the forefront. Plus they needed the hardware first. Software always lags.

The "lying" is going a bit too far. The Microsoft statement was about XBone. The drivers are already closer to the metal, and they can tweak heavily due to the fixed resources. DX12 probably won't help as much on that platform.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860


Good points, but I'd venture to guess that Intel used the opportunity to gain free advertising rather than charging for the compiler. SC II was listed here when it was in development.

http://www.intel.com/content/www/us/en/gaming/games-optimized-for-intel.html
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
As I said many pages ago, ARM would come first to servers/HPC and then to PCs. This SemiWiki leak seems to confirm the rumor that Apple will be abandoning Intel chips in favor of its own ARM chips after Broadwell.

[Attached image: apple-chip-leak-2015.jpg]


I suppose that Apple will start by replacing Intel chips in laptops. I am very curious about what kind of chip the A10X will be. It must be good enough to replace Broadwell and to be used instead of AMD's K12. Very interesting that Apple will be on 10nm FinFET next year, whereas K12 will be on 14nm.
 


That "Mac" along with the iPad doesn't mean full blown desktop Mac or MacBooks. They also have the little box, which I believe could be the one they are targeting with the A10X: http://store.apple.com/us/buy-mac/mac-mini

Also, notice they're using the Intel modems for the iPhone and iPads. I really don't think Apple is going to cut ties with Intel anytime soon.

Cheers!
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Porting OS X and all the applications to ARM just to use them only on the Mac mini makes as much sense (i.e., none) as developing a desktop-class core (Cyclone) and downclocking it to fit in phones would if you had no long-term plans to scale that core up to desktops.

FYI, the rumor was that "Apple would fully switch from Intel to ARM by 2016" and

Specifically, it was said that Apple has developed an iMac desktop with four or eight 64-bit quad-core CPUs, while a Mac mini is said to have been made with four such cores. In addition, it was claimed that Apple has developed a 13-inch MacBook sporting up to eight 64-bit quad-core ARM chips.

http://appleinsider.com/articles/14/05/26/rumor-apple-once-again-said-to-be-strongly-considering-arm-based-macs

Now we have a leak with Macs using ARM chips on 10nm by 2016. Hum...
 


The root advantage of DX12 is going to be the ability for multiple CPU cores to actually talk to the GPU, which should improve scaling and reduce overall CPU load. The faster driver layer is going to do nothing but help as well. But as far as the API goes, there is nothing new graphically, though if performance improves enough, SSAA could become the norm, since we'll finally have the horsepower to implement it.

It'd also be interesting to see how ray-tracing engines perform with low-level access. I don't think we'll be there yet, but I think they might be good enough to start pushing around 20-30 FPS in some simplified cases.

That being said, I fully expect the first native DX12 games to be a giant bug-filled disaster area. Low-level access opens up a LOT of pitfalls if you don't really understand the internals of the hardware, which I doubt most people even at NVIDIA/AMD fully do. And I'm speaking as someone who still works at the HW level on a day-to-day basis; hardware is complicated and can easily trip you up in very specific ways if you aren't careful.
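
As a rough illustration of the "multiple cores talking to the GPU" point, here is a conceptual sketch in plain C with pthreads (not the actual D3D12 API; the struct and command values are invented) of several threads recording work in parallel with a single submission point, analogous to recording command lists on worker threads and handing them all to the GPU queue at once via ExecuteCommandLists():

#include <pthread.h>
#include <stdio.h>

#define THREADS 4
#define CMDS_PER_LIST 3

/* a stand-in for a per-thread command list */
struct cmd_list {
    int thread_id;
    int cmds[CMDS_PER_LIST];
    int count;
};

static void *record_commands(void *arg)
{
    struct cmd_list *list = arg;

    /* each CPU core does its own (CPU-heavy) recording work independently */
    for (int i = 0; i < CMDS_PER_LIST; i++) {
        list->cmds[list->count] = list->thread_id * 100 + i;
        list->count++;
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[THREADS];
    struct cmd_list lists[THREADS] = {{0}};

    for (int t = 0; t < THREADS; t++) {
        lists[t].thread_id = t;
        pthread_create(&threads[t], NULL, record_commands, &lists[t]);
    }
    for (int t = 0; t < THREADS; t++)
        pthread_join(threads[t], NULL);

    /* single, ordered submission point: the analogue of handing all the
       recorded command lists to the GPU queue at once */
    for (int t = 0; t < THREADS; t++)
        for (int i = 0; i < lists[t].count; i++)
            printf("submit cmd %d recorded by thread %d\n", lists[t].cmds[i], t);

    return 0;
}

The win the post describes comes from the recording step no longer being funneled through a single driver thread.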
 
I do remember that link. So, to the link's credit, Apple is going to start making the lil' Mac with ARM parts sometime in 2016, so they might be going that route, yes.

It will be interesting to see what Intel does to keep its hold on Apple.

EDIT:

I don't think game devs are going to use DX12 from scratch. I'm 99.9% sure they'll be porting old engines little by little, or waiting for the big hitters (UE, Source, Frostbite, etc.) to have DX12 simplified for easier development. That means less spectacular gains at the beginning, but a safer path ahead.

EDIT 2:

FreeSync stuff: http://www.tomshardware.com/news/amd-project-freesync-launch,28759.html
MANTLE API Reference Guide: http://www.anandtech.com/show/9095/amd-mantle-api-programming-guide-available
FreeSync Review: http://www.anandtech.com/show/9097/the-amd-freesync-review

More than a review, it's more like a technical comparison between G-Sync and FreeSync. I think we need videos :p

Cheers!
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


That looks darn good in 4K with still a year or so to go. The units need more detail, but it's very promising. I might have to dip into my PC savings fund this year.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


I agree about first gen DX12 games having problems. Remember when the first versions of BF4 with Mantle had that odd thing with colors being completely washed out?

I'm still hoping for Vulkan to do better. Microsoft is getting aggressive and probably feels threatened; they've offered amnesty upgrades for Windows 7 and Windows 8 pirates.

Poor Microsoft has no hardware friends. Nvidia screwed them over with Xbox, and now AMD has screwed them over with different APIs. I imagine AMD is walking a really fine line here with everything.
 

con635

Honorable
Oct 3, 2013
644
0
11,010
http://www.kitguru.net/components/cpu/anton-shilov/amd-cuts-bulldozer-instructions-from-zen-processors/

So what might this mean to a layman? No more modules?
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Not much, really. Those instructions were only used by AMD's Bulldozer family of processors, which of course no one optimizes for anyway.
 


So, what is it? FAM4 or not? lol

Cheers!
 


It points to a more conventional SMP approach, if nothing else. It also points toward a smaller die, which is another indication of a focus on mobile rather than the desktop. But it's WAY too early to make any real predictions.
 
Freesync reviews:

http://www.techspot.com/review/978-amd-freesync/
http://www.pcper.com/reviews/Displays/AMD-FreeSync-First-Impressions-and-Technical-Discussion/Inside-and-Outside-VRR-Wind

Found the PCPer one interesting, especially the explanation for what happens when FPS dips too low.

Above the maximum refresh rate, AMD’s current solution is actually better than what NVIDIA offers today, giving users the option to select a VSync enabled or disabled state. G-Sync forces a VSync enabled state, something that hardcore PC gamers and competitive gamers take issue with.

Below the minimum refresh rate things get dicey. I believe that NVIDIA’s implementation of offering a variable frame rate without tearing is superior to simply enabling or disabling VSync again, as the issues with tearing and stutter not only return but are more prominent at lower frame rates. When I pressed AMD on this issue during my briefing they admitted that there were things they believed could work to make that experience better and that I should “stay tuned.” While that doesn’t help users today, AMD thinks that these problems can be solved on the driver side without a change to the AdaptiveSync specification and without the dedicated hardware that NVIDIA uses in desktop G-Sync monitors.

My time with today’s version of FreeSync definitely show it as a step in the right direction but I think it is far from perfect. It’s fair to assume that after telling me FreeSync would be sent to reviewers as far back as September of last year, AMD found that getting display technologies just right is a much more difficult undertaking than originally expected. They have gotten a lot right: no upfront module costs for monitor vendors, better monitor feature support, wide vendor support and lower prices than currently selling G-Sync options. But there is room for improvement: ghosting concerns, improving the transition experience between VRR windows and non-VRR frame rates and figuring out a way to enable tear-free and stutter-free gaming under the minimum variable refresh rate.

The Ghosting could be a major factor here, and as FPS rises, it's going to be a larger and larger concern. So we're clearly in "stay tuned" mode here.
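
To make the quoted behavior easier to compare, here is a toy decision table in C; it is only my reading of the review above (the 48-144 Hz window is a made-up example), not vendor code:

#include <stdio.h>

/* what the quoted review says happens at the edges of the VRR window */
static const char *vrr_behavior(double fps, double min_hz, double max_hz,
                                int vsync_on, int is_gsync)
{
    if (fps > max_hz) {
        if (is_gsync)
            return "capped at max refresh (VSync forced on)";
        return vsync_on ? "capped at max refresh (VSync on)"
                        : "runs past max refresh, tearing possible (VSync off)";
    }
    if (fps < min_hz) {
        if (is_gsync)
            return "stays tear- and stutter-free below the window";
        return vsync_on ? "fixed refresh, stutter returns (VSync on)"
                        : "fixed refresh, tearing returns (VSync off)";
    }
    return "refresh rate tracks frame rate";
}

int main(void)
{
    /* hypothetical 48-144 Hz FreeSync panel vs. a G-Sync one */
    printf("FreeSync @200 fps: %s\n", vrr_behavior(200, 48, 144, 0, 0));
    printf("FreeSync @90 fps:  %s\n", vrr_behavior(90, 48, 144, 0, 0));
    printf("FreeSync @30 fps:  %s\n", vrr_behavior(30, 48, 144, 1, 0));
    printf("G-Sync   @30 fps:  %s\n", vrr_behavior(30, 48, 144, 1, 1));
    return 0;
}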
 


Wasn't ghosting 100% dependent on the panel used instead of the transmission method and driver/card? That's why you always want fast response LCDs, so ghosting doesn't come bite you in the... pixels. Heh.

Cheers!
 


Better panels cost more, which means the cost advantage AMD has goes right out the window.
 