AMD CPU speculation... and expert conjecture

Page 301
Status
Not open for further replies.

Vista SP2 is basically Windows 7. There are still some installs around, and it has DX11, so it will be supported for a while yet.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


Bullet is FOSS software not primarily backed by a large group.

As far as Shadowplay vs SteamOS, you have two strong entities in the gaming market facing off.

If anything I would say it's much more in line with a comparison of PhysX to Havok more than anything.

However, the argument that the toolkit is easier to use is a solid one.

Shadowplay has massive barriers to entry. You NEED an Nvidia GPU, and a 600 series IIRC.

Take a look at http://store.steampowered.com/hwsurvey/videocard/

Look at the marketshare Nvidia has of Shadowplay capable cards.

Now look at what you need to get SteamOS going. There are no hardware requirements that we know of.

Once again you've managed to make a point and present it well while being completely fallacious. None of the physics libraries you listed have any hardware requirements. The point I was getting at previously is that SteamOS streaming is very accessible in comparison to the full PhysX experience, not that Havok is the most used.

Havok has no hardware requirements and it has nothing to do with my comparison between SteamOS and Shadowplay.
 

montosaurous

Honorable
Aug 21, 2012
1,055
0
11,360


Most Vista supporters moved on to Windows 7, and then quite possibly Windows 8. The vast majority of machines with Vista are probably incapable of playing modern games as well. Vista SP2 is a lot like Windows 7, yes, but I don't know a single person who prefers Vista over 7.
 

8350rocks

Distinguished


I think I have some insight into this now:

http://www.tomshardware.com/news/steamos-steam-box-reference-design-source-2-half-life-3,24388.html

Apparently, an anonymous "valve employee" on 4chan stated that Wednesday is a reference design release from Valve on the "steam box". It will feature AMD CPUs and NVidia GPUs. They supposedly timed this to coincide with AMD's GPU announcement to rain on their parade.

Ultimately it will be a certification that goes on certain products from OEM partners producing a small form factor (think mATX) PC that meets hardware requirements.

Additionally, they said to expect good OpenGL/Linux drivers and support from NVidia soon.

The last bit was that Valve is going to announce Source Engine 2 on Friday with a new game coming out (HL3?), and there will not be a single line of DX code in the entire engine. It will support Windows and OS X, as well as Linux, but be entirely OpenGL based.

I think it's a bit odd Valve would use AMD CPUs and then try to crap on their GPU announcement...?

Either way, another hardware victory for AMD in some manner is a good thing. Just get your salt shaker out before you read the write up.
 

Well, if you happen to have a full edition of it, then I would gladly take it over 7. Vista had a better look, didn't bug you for updates every 2.52 seconds, and on modern systems it is fast.

 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I would go further: Nvidia wants to be inside the SteamBox.



ARM is an architecture optimized for efficiency. x86 (either AMD or Intel) can provide the same performance (at higher power consumption) or the same power consumption (at lower performance levels) but not both at once.

ARM has a multi-generation history of adding performance while improving efficiency over previous generations. ARM has done this again with the new A57 core, which is faster than the A15 while maintaining the same power consumption. This history undermines the 'argument' that ARM doesn't scale well upwards.

History also shows that x86 doesn't scale well downwards. That is because x86 was not designed for efficiency but for raw performance. x86 was designed mainly for desktops, where power consumption was irrelevant. It was tweaked for laptops/servers, where efficiency is a factor, but it is very far from optimal in efficiency. Intel has tried to scale downwards to tablets/phones but has systematically failed. The interesting part is that efficiency now starts to play a major role in fields such as HPC. It is easy to see: take the current fastest supercomputer (x86 based) and try to scale it up 1000x. It is impossible to provide 1000x the current energy. That is why ARM supercomputers are the target. The goal of the Mont Blanc project is a supercomputer ~1000x faster than the fastest x86 today, but consuming less energy!
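As a rough sanity check of that scaling argument, here is a back-of-the-envelope sketch. The baseline figures are illustrative assumptions (roughly Tianhe-2-class numbers), not values from the thread:

```python
# Why naive 1000x scaling fails: the power draw becomes absurd.
# Baseline numbers are assumed for illustration (~Tianhe-2-class).
baseline_flops = 33.9e15   # ~33.9 petaflops
baseline_power_w = 17.8e6  # ~17.8 MW

scale = 1000               # target: 1000x the performance
naive_power_w = baseline_power_w * scale

print(f"Naive 1000x scale-up would draw ~{naive_power_w / 1e9:.1f} GW")
# A large nuclear reactor produces on the order of 1 GW, so a machine
# drawing ~17.8 GW is not practical; FLOPS-per-watt must improve instead.

efficiency_gain_needed = naive_power_w / baseline_power_w  # holding power constant
print(f"Holding power constant requires ~{efficiency_gain_needed:.0f}x better FLOPS/W")
```

The takeaway is that at this scale, raw performance stops being the constraint and performance-per-watt becomes the whole game, which is the poster's argument for ARM in HPC.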

Regarding raw performance, the 32-bit ARM A15 provides about half the IPC of an i7 SB. The new A57 is up to 40-50% faster than the A15 while providing higher efficiency.

[Image: Cortex-A15 vs Cortex-A57 performance]


This is only for older 32-bit code and only at 28 nm. The A57 is still faster (>50%) running in native 64-bit mode. A57 cores at 16nm FinFET are also on the way.

Custom ARM64 cores will surpass the A57's raw performance while maintaining efficiency. It seems that Apple's new core already reaches SB i7 IPC levels, but where I expect real breakthroughs is from Nvidia and AMD custom cores.

Before someone tells me that SB is old, I will add that Haswell offers only about 10% more IPC than Sandy.

Manufacturing a high-performance ARM core for a supercomputer is more expensive than manufacturing a basic phone core, but it is still less expensive than manufacturing a high-performance x86 core.

Of course, this is all for a single core. The new Seattle CPU already offers essentially the performance of some of the fastest x86 chips (Xeons) at only a fraction of their power consumption.

ARM has defeated x86 in phones/tablets for years, even though Intel has a fab advantage and spends billions on research. Now ARM has started to defeat x86 in servers.



I think you missed some important dates in that history. IBM released 5GHz PPC chips years before AMD announced its 5GHz FX (base freq. is 4.7GHz). IBM is already selling 5.5GHz PPC chips. Moreover, you cannot compare PPC/MIPS to ARM: ARM64 is a modern, optimized architecture, unlike MIPS.
 

8350rocks

Distinguished


POWER != PowerPC

IBM released a 5 GHz POWER architecture long before x86 got there. They have not released anything new in PowerPC since the last PPC Macs rolled off the assembly line (outside of commercial/embedded applications, ASICs, etc.). None of those were 5 GHz either...in fact, Apple quit using them in 2003 because IBM couldn't get PPC to 3 GHz.

http://en.wikipedia.org/wiki/PowerPC
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


From your link:

PowerPC is largely based on IBM's earlier POWER instruction set architecture, and retains a high level of compatibility with it; the architectures have remained close enough that the same programs and operating systems will run on both if some care is taken in preparation; newer chips in the POWER series implement the full PowerPC instruction set.

By July 2010, the POWER architecture IBM developed is still very much alive on their server offerings for large businesses and continues to evolve (and current POWER processors implement the full PowerPC instruction set architecture).[3] For example, IBM's servers based on POWER have the highest revenue marketshare (53.9%) among UNIX servers.[4]

The PowerPC specification is now handled by Power.org where IBM, Freescale, and AMCC are members. PowerPC, Cell and POWER processors are now jointly marketed as the Power Architecture. Power.org released a unified ISA, combining POWER and PowerPC ISAs into the new Power ISA v.2.03 specification and a new reference platform for servers called PAPR (Power Architecture Platform Reference).

Check also

http://en.wikipedia.org/wiki/IBM_POWER_microprocessors#PowerPC

Therefore POWER processors running at 5.5GHz can run the PPC ISA at 5.5GHz. Contrary to what was said here, RISC has no problem running at higher frequencies.

The 3 GHz problem was really one of economic constraints. I already explained to you before that Apple left IBM over money, not technical issues. The interesting part is that Apple is going to leave Intel for the same reason.
 

8350rocks

Distinguished


Actually, no, only recently did IBM fully implement PPC in POWER. Before that you couldn't run PPC code on the POWER architecture, though you could run POWER code on the PPC architecture.

Hence the reason they could get POWER to do 5 GHz and not PPC.
 
Gamer's on point here. When it comes to software development, the keypounders will use the most comfortable tool they can get their hands on. This is something lots of people don't understand: it's not always about which component has the largest list of features, but often about which component will take the smallest amount of time to code with. It's not cheap to develop large pieces of software, and the cost is directly proportional to the development time. Knocking months off that time is a huge saving.

In the RISC vs x86 universe each has its own advantages. RISC is a set of principles, chiefly that each instruction should take exactly one CPU cycle to execute and that complex multi-stage instructions should be avoided. Exceptions always happen, but that's the general philosophy. It makes prediction and timing really easy, as the hardware and OS know exactly how long each instruction will take and how best to do context switching. RISC also tends to have many general-purpose registers, with a large register file being used during context switching; this lets coders leave a result in one register and reference it later without having to fetch it first. That is very important in low-power designs, as you want to limit unnecessary memory look-ups.

x86 has the advantage in that, for all intents and purposes, it's an abstraction. The last true x86 processor was of the i486 era. Everything after that became abstracted: the instructions you're sending to the CPU aren't what's actually being executed on the metal; they're decoded into a RISC-like language first. This allows CPU designers to implement all sorts of new tricks while maintaining backwards compatibility, and it's something everyone in the industry has started to pick up on. Also, because compiled code is decoupled from the actually executed code, you can easily ramp up clock speeds and introduce more buffering / pipelining of code. Of course, that comes at the cost of more electricity and die usage.
 

montosaurous

Honorable
Aug 21, 2012
1,055
0
11,360


OEM Windows 7 Home Premium vs Retail Windows Vista Ultimate would be a tough call, but at the end of the day I gotta say that Windows 7 would win over most people, despite being lesser. Also, ARM is not more efficient than x86; it just performs worse. Good for the mobile market because of its low power consumption, but terribly weak otherwise. And if SteamOS can run every single game in my Steam library at decent quality, then I think I might be dual booting in the near future...
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


No. Follow the link that you snipped. The POWER6 chips running at 5GHz were the second generation of POWER processors to implement the modern Power ISA that unifies POWER and PPC.



Effectively, because it is impractical to implement modern x86 directly in hardware, it has to be translated to RISC-like uops, which are then executed by the hardware. The problem is that the x86 decoder still has to parse the incoming x86 instructions, and implementing this decoder is complex and expensive (which is why AMD implemented a shared decoder in Bulldozer/Piledriver).

The next logical step consists of eliminating the intermediate x86 code, simplifying the design and manufacturing of the chip and allowing more aggressive compiler optimizations.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


It is just the contrary, ARM is more efficient than x86. Also you seem to confound ARM with mobile ARM.
 
Effectively, because it is impractical to implement modern x86 directly in hardware, it has to be translated to RISC-like uops, which are then executed by the hardware. The problem is that the x86 decoder still has to parse the incoming x86 instructions, and implementing this decoder is complex and expensive (which is why AMD implemented a shared decoder in Bulldozer/Piledriver).

The next logical step consists of eliminating the intermediate x86 code, simplifying the design and manufacturing of the chip and allowing more aggressive compiler optimizations.

It's not impossible to implement x86 in hardware; it was done for years. Due to its large opcodes and multi-stage instructions, it is inefficient to implement directly. Instead they abstract it, which allows them to run the code on a superscalar architecture. The benefit of x86 is backwards compatibility: if you "remove the translation" you just removed your ability to run 99.99% of all software. Several companies have tried this, and they have all failed miserably.

It's funny that you keep harping on ARM while ignoring the central reason for its success: that Android and iOS both run abstracted code that requires translation prior to execution. You write an application for Android and it compiles into a pseudo-code, non-binary form. That code is then read by the JIT compiler and dynamically recompiled into the binary language compatible with whatever CPU happens to exist in the device. This comes at a performance penalty that is mitigated by the sheer volume of software you get access to.
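The Dalvik-style flow described above can be sketched with a toy model. This is not real Dalvik bytecode; the opcodes and "native" target names are invented to show the idea of one portable program being retargeted per device:

```python
# Toy model of portable bytecode: the same program can be interpreted
# directly or translated to a device-specific "native" form at run time.
BYTECODE = [("PUSH", 2), ("PUSH", 3), ("ADD", None)]  # portable program

def execute(bytecode):
    """Stack-machine interpreter for the portable form."""
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
    return stack[-1]

def translate(bytecode, target="arm"):
    """Pretend-JIT: map portable opcodes to target-specific 'native' ops."""
    native = {"arm": {"PUSH": "arm_push", "ADD": "arm_add"},
              "x86": {"PUSH": "x86_push", "ADD": "x86_add"}}[target]
    return [(native[op], arg) for op, arg in bytecode]

print(execute(BYTECODE))            # → 5: same result on any device
print(translate(BYTECODE, "arm"))   # the program retargeted for one CPU
```

This is why the underlying CPU ISA matters less in that ecosystem: the distribution format is the bytecode, and the per-device translation absorbs the ISA difference, at some run-time cost.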
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


The performance gap is closing, but we haven't really seen a higher-TDP ARM chip yet. There have been a couple of oddballs like the LSI 16-core A15 (Axxia 5500), but that was for cell tower base stations. That one even has a quad-channel DDR3 controller.

http://semiaccurate.com/2013/02/19/lsi-launches-a-16-core-arm-a15-cell-phone-chip/

We really won't know until next year when more ARMv8 chips start coming out. There should be quite a few on the market including AMD's version.

 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I wrote modern x86, which is a different thing from x86. AMD introduced translation to RISC-like uops with its K5 and Intel did it with its Pentium Pro. Any x86-64 chip uses uops. It is impossible to implement modern x86 directly in hardware. If you know an engineer who is able to 'hardwire' modern x86 onto a piece of silicon, let me know.

Yes, the benefit of x86 is backwards compatibility; that is why we were talking about leaving x86 for legacy purposes... This is also the reason why AMD is releasing Warsaw, a CPU for those customers who will be slowly moving to the ARM bandwagon. This was also mentioned before in this thread.

The success of ARM is completely unrelated to what you claim. If MIPS or x86 were an order of magnitude more efficient than ARM, then every phone/tablet would be using a MIPS or x86 chip; your argument about bytecode would apply equally in that hypothetical situation, yet the real reason for choosing MIPS/x86 would be something else.

Besides that, bytecode is unrelated to CPU uops. Bytecode is translated into machine language, whether that is ARM or x86. The entire .NET Microsoft platform for Windows is based on a bytecode paradigm.



No. ARM is more efficient, which means it can either offer the same performance as x86 using less power, or more performance at the same power. It has been shown again in recent weeks that the most recent x86 architecture from Intel is less powerful than ARM while consuming more power.

It has also been mentioned how the Seattle CPU already offers about the same performance as some of the fastest and most expensive x86 CPUs at a fraction of their power consumption. Nvidia is preparing CPUs that will be faster than Opteron/Xeon for a new generation of ultrafast supercomputers (about 1000x faster than the #1 x86 supercomputer).



Nobody said that there will be only 1 reference design.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


It is very probable that AMD Seattle has quad-channel DDR4 ECC, because AMD HieroFalcon already offers dual-channel DDR4 ECC.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860




So... which one are you sticking to?



again from AT

Depending on where you were in the Android UI, there was some definite stutter, but I’m told this is a result of an issue with Dalvik not allocating threads to cores properly that Intel is still tuning, something which you can see plays itself out as well in the AndEBench Java test that runs in Dalvik.

funny, just on the last page was this



Well... apparently it's OK to change your story as long as it supports your stance that ARM is superior in every way (using half-finished software vs iOS; saying SB to HW was 10% when comparing to ARM, but (10 to 15%)+5% when comparing to Richland).

No wonder no one can have a normal discussion.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Why would I choose?



Well I was not referring to the AT preview. In any case pay attention to the previous paragraph:

Bay Trail. In addition Android sees the 2.39 GHz Z3770 boost frequency and reports it. I didn’t see any strange behavior on the device while running tests and watching CPU frequency, if anything the reference design platform stayed at the maximum boost frequency even with four cores plugged in for an impressive amount of time. Of course this is a tablet so there’s more TDP to play around with compared to a phone.

Don't forget that you are comparing the '1.46GHz' Bay Trail _tablet_ SoC to _phone_ SoCs, or comparing it to the _mobile_ Kabini 1.5GHz, unless you intend to make 'hapidup' comparisons.



This is all in your imagination, once again.

Moreover, if you look at the history of this thread, on only one occasion did I compare benchmarks obtained with two different OSs on the same site, and even then I didn't compare the benchmark scores directly; I applied a correction factor to account for the effect of the different OSs. I explained this at the time, but it seems you missed that post as well. Again, this was only one occasion, unrelated to what I was saying in the post you are replying to now.
 


Technically, the old PPC 7xx line lives on in the WiiU. But yeah, POWER != PPC.
 
The success of ARM is completely unrelated to what you claim. If MIPS or x86 were an order of magnitude more efficient than ARM, then every phone/tablet would be using a MIPS or x86 chip; your argument about bytecode would apply equally in that hypothetical situation, yet the real reason for choosing MIPS/x86 would be something else.

Right now:

MIPS: Low end, embedded devices (Cars, set-top boxes, GPS units, etc)
PPC: Low end, embedded devices (You see a LOT of PPC chips used in the defense world)
ARM: Mid end, embedded devices (phones and such)
X86: High end general purpose PCs and servers
SPARC: High end servers

ARM right now fills that void between PPC/MIPS and X86. I could easily see ARM squeezing out PPC and competing against MIPS in the low end to mid-range, but I really don't see ARM getting close to X86 on performance.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


MIPS is currently used in intermediate-performance applications between low-power ARM32 and high-power x86.

It is not ARM that will compete against MIPS; it is just the contrary: MIPS (Imagination) has announced a new series of cores to compete with ARM32 in phones/tablets. Of course, competition is welcome.

ARM64 changes the game, because it was designed for servers and HPC. As mentioned before, AMD Seattle is about as fast as some of the fastest x86 chips (Xeon) but more efficient. And Nvidia promises faster cores (Apple has shown this is possible).
 

8350rocks

Distinguished


POWER6 came in 2007, 4 years after Apple ditched IBM over clockspeed issues (or whatever reason you think they separated; I am not arguing about why they divorced).

POWER6 was also the first POWER architecture to allow for floating point calculations. Something which had been integrated into x86 for some time already at that point. Beginning to grasp the limitations of the hardware yet?

Modern POWER8 is pretty complete hardware, though you don't see it in mainstream desktop or low end servers for a reason...

If PPC was so great, it would have usurped x86 back when Apple had decent market share in the desktop PC world early in the race.

As a matter of fact, Apple was hemorrhaging market share in the PC world so badly, they had to go to mobile to compete economically at all.

It was hugely successful for them; though, I think it's a fool's errand for companies with such large PC market shares to try to do the same.

Additionally, I think you'll see the limitations of ARM that we are telling you about once they hit the market. The benchmarks will come out, and ARM will be really good at some things, and really bad at others.

Evidently, you can attempt to extrapolate tons of information from virtually nothing available on the capability. So any of us speaking from knowledge of the ISAs and the capabilities of the hardware are talking past you at this point. It seems we're falling on deaf ears.
 