AMD CPU speculation... and expert conjecture


colinp

Honorable
Jun 27, 2012


I will be very interested in how this chip shapes up: is it the best FM2 chip to pair with a discrete card, and how well does it overclock?
 


For consoles, this *probably* isn't an issue if they are still coded at a very low level. For PCs, one major example I would give of where latencies matter would be the case of a branch misprediction. Various software locking mechanisms (specifically, any that check data against what is currently in RAM) would also be affected by latency.
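To make the locking point concrete, here is a minimal sketch (my own illustration, not from any particular codebase) of a test-and-test-and-set spinlock in C++: every waiting thread keeps re-reading a flag from the cache/memory hierarchy, so how quickly it can grab the lock after a release is bounded by that load latency.

```cpp
// Minimal test-and-test-and-set spinlock, for illustration only.
// Each failed acquisition attempt re-checks a value in memory, so the
// round-trip cost of taking a contended lock is dominated by cache/memory
// latency rather than raw compute.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

class SpinLock {
    std::atomic<bool> locked{false};
public:
    void lock() {
        for (;;) {
            // Spin on a plain load first so the expensive read-modify-write
            // is only attempted when the lock looks free.
            while (locked.load(std::memory_order_relaxed)) { /* wait */ }
            if (!locked.exchange(true, std::memory_order_acquire))
                return;   // we won the race
            // Another thread grabbed it between our load and exchange; retry.
        }
    }
    void unlock() { locked.store(false, std::memory_order_release); }
};

int main() {
    SpinLock lock;
    long counter = 0;
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t)
        threads.emplace_back([&] {
            for (int i = 0; i < 100000; ++i) {
                lock.lock();
                ++counter;            // critical section touching shared data
                lock.unlock();
            }
        });
    for (auto& th : threads) th.join();
    std::printf("counter = %ld\n", counter);   // expect 400000
}
```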
 


PCI is still VERY widely used in the instrumentation world. Lacking PCI would basically mean my company, and its Pentium III/4-based PCs, would go exclusively Intel when they start their next upgrade cycle [figure a couple thousand units of sales there]. PCI hangs on because most devices don't need the bandwidth of PCIe, and these devices do tend to like lower latencies...
 

Ags1

Honorable
Apr 26, 2012
With the new generation of consoles featuring 8 cores, the gaming performance of FX-6xxx and FX-8xxx chips should look a lot better by the end of the year. We may even see the FX-8xxx chips overtaking Intel in some games.

I want to play Lara, just for the hair :) TressFX is very cool.
 


Physics is very hard to compute once you get beyond linear, single-object interactions. Never mind the fact that the GPU is generally busy enough as it is... The equations are also VERY RAM-intensive when you have large enough datasets, which is an issue when GPUs are already using a very large percentage of their VRAM.

Ageia had the right idea with a dedicated physics processor. If you ever get a uniform API for physics, I fully expect "physics co-processors" to show up [even if the normal implementation is just a second GPU].
 

afaik, intel chipsets don't have pci support either. intel boards that add pci support use a 3rd party controller. amusingly, intel's office-focused motherboards like b75 (and cheapo h61 mobos) always have pci support.
i don't think amd board makers will drop pci support, at least not from the apu platform, which is intended for entry level use, and low end am3+ mobos.
i haven't seen an amd office pc in a long time, since k6-ii days iirc. most of the pcs here use celeron (dual core), c2d or pentium with integrated graphics.
 

mayankleoboy1

Distinguished
Aug 11, 2010


And Nvidia took that hardware, shoved it into the dustbin, and implemented a software version of PhysX on their CUDA architecture. I don't think Nvidia is ever going to open up or optimize the PhysX API for the PC.

What Nvidia should do is implement dedicated PhysX hardware on the die itself. But really, why would they? Even they know that unless they open the API to all vendors, PhysX is completely gimped.
 

anxiousinfusion

Distinguished
Jul 1, 2011


I'm glad I'm not the only fool that would buy games just for the visuals. Seriously, I have zero interest in TR's story or gameplay. But throw in some fancy new eye candy and I'm there!
 


This doesn't make sense... At all... The whole idea of "eye candy" is not "useful features". It's just... You know... Gimmicks.

Shader effects are light gimmicks. And even the whole OpenGL API is a full set of "gimmicks". OpenCL could be treated as a useful thing to have, but that only came around some 3 years ago.

Anyway, the idea behind ANY form of eye candy improvement is the precise idea behind GPUs and CPUs: to improve the look and feel of stuff. Oh, and "eye candy" is usually "realistic simulation" in some sense, for games.

So, if it drops your FPS to zero, then that means nVidia and AMD have to work harder for our money in order to improve the "eye candy" in our games. And this is beside the point of "gameplay" in any game. And yes, some "gimmicks" can work towards better gameplay: shadows for instance, or good physics effects, etc.

Cheers!
 


CUDA, by definition, is hardware accelerated.

PhysX is actually VERY good when run in a vacuum (e.g. a standalone app). It really shines when given really large datasets to work with [which makes sense: CUDA is optimized for large datasets, and loses performance when the dataset is too small due to how it's designed]. But when GPU resources are also being used by a game, all you do is leech FPS away without adding much to the game itself. The API, by itself, is fine.

I'm thinking DECADES ahead of everyone else in physics. Take a typical FPS: how often do you see a game where you can move through a swamp without movement degradation? OK, maybe one or two games apply a -20% movement filter. There's no attempt to actually calculate the odds your player (or the AI) will be able to move forward (the viscosity of liquids is REALLY hard to do). I'm thinking of programmatically calculating how grenades explode, tracking each fragment, rather than doing a simple damage radius. Hell, I'm thinking of calculating the path of every single bullet fired by a gun, the odds it passes through objects (think of the possibilities with different weapon calibers, or weapons with high muzzle velocity, in MP games!), whether it pierces a player's armor, and how much damage it does. That's where I'm thinking. I'm WAY ahead of everyone else on this.

Problem is, it's WAY too complicated to do right now, even with a good API. But the fact that so little progress is being made in this area irks me, especially since I don't expect much in the way of graphics until we get ray tracing/casting (there's not much left that's easy/cheap to do).
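If anyone wants a feel for what "tracking each fragment" costs, here is a toy sketch (purely illustrative; every name and number in it is made up) that integrates each grenade fragment individually with drag and gravity instead of applying a flat damage radius. The per-fragment inner loop is exactly the work a simple radius check avoids.

```cpp
// Toy per-fragment grenade simulation: each fragment gets its own position,
// velocity, drag and ground check, rather than one blast-radius test.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };

struct Fragment {
    Vec3 pos, vel;
    bool alive;
};

int main() {
    const int   kFragments = 300;
    const float kDt        = 1.0f / 240.0f;  // physics timestep (s)
    const float kDrag      = 0.9995f;        // crude per-step air drag factor
    const float kGravityY  = -9.81f;         // m/s^2

    std::mt19937 rng(42);
    std::uniform_real_distribution<float> dir(-1.0f, 1.0f);

    // Burst: every fragment leaves the detonation point in a random direction.
    std::vector<Fragment> frags;
    for (int i = 0; i < kFragments; ++i) {
        Vec3 d{dir(rng), std::fabs(dir(rng)), dir(rng)};
        float len   = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z) + 1e-6f;
        float speed = 800.0f;                // m/s initial fragment velocity
        frags.push_back({{0.0f, 1.0f, 0.0f},
                         {d.x / len * speed, d.y / len * speed, d.z / len * speed},
                         true});
    }

    // Integrate until every fragment has reached the ground plane (y == 0).
    int steps = 0;
    for (bool anyAlive = true; anyAlive; ++steps) {
        anyAlive = false;
        for (auto& f : frags) {
            if (!f.alive) continue;
            f.vel.x *= kDrag; f.vel.y *= kDrag; f.vel.z *= kDrag;
            f.vel.y += kGravityY * kDt;
            f.pos.x += f.vel.x * kDt;
            f.pos.y += f.vel.y * kDt;
            f.pos.z += f.vel.z * kDt;
            if (f.pos.y <= 0.0f) f.alive = false;  // stand-in for a real hit test
            else anyAlive = true;
        }
    }
    std::printf("simulated %d fragments for %d steps\n", kFragments, steps);
}
```

A real engine would also need collision queries, material-dependent penetration and networking for every one of those fragments, which is why a flat damage radius is still the norm.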
 


Yes, it is better than the APUs if you want a low-cost gaming rig. The issue is that the 750K, which is unlocked, costs the same as a Phenom II X4 965 BE, which is a little faster gaming-wise. But the 750K has a few things going for it: Socket FM2 is a much better socket all round, and between a 5800K, 6800K, 965 BE and FX-4170 it offers the lowest power consumption with very good value for money, which is what the FM2 socket is all about really.

It overclocks better than the old Athlon IIs, but that is expected; it did 4.5GHz on air/closed loop quite stably, and starts to top out a bit heading towards 5GHz, which is when the vcore starts to hit scary levels. Memory, like with our 5800K, managed DDR3-2647, which is crazy fast. Under LN2 you can do 7GHz quite easily with 4 cores enabled, which is itself borderline harakiri.

Overall, if you want the FM2 goodie bag but want discrete graphics, then yes, this is a very good part. The APUs still have the benefit in apps using OpenCL and HSA. I really do like this platform a lot for its flexibility and robust enough performance without breaking the bank. I've said it enough: FM2 A85 chipsets offer high-end features at low-end cost that only a fool will miss out on.
 
“[PS4 and next Xbox are] between 8 and 10 times the power [of previous gen]”
“We no longer have to constrain our games. 1080p, 60FPS… [there will be a] level of gameplay that is unprecedented.”


http://www.playstationlifestyle.net/2013/02/26/ea-ps4-meeting-recap-between-8-10x-power-1080p-60fps-unprecedented-gameplay-games-shown-before-e3/
 

Cazalan

Distinguished
Sep 4, 2011
As the PCH (platform controller hub) gets integrated in the APU, like the new Hondo/Temash line, that opens up room on the motherboard for memory chips. Specifically in the miniITX form factor.

Mobo makers will have to find a way to add value in an already low cost segment. Removing multiple DIMM connectors and the circuit board of the DIMM is a cost reduction.

These E-350 systems are already down to $80 for an APU and motherboard. Those will be replaced this year with Jaguar versions. A higher performance system could move to a single copper block covering the APU and memory like a video card does today.

http://www.newegg.com/Product/Product.aspx?Item=N82E16813157228R

People are getting accustomed to lack of upgradability. Look at how many iPhones/iPads Apple sells without even a microSD slot.
 

jdwii

Splendid



AMD and Intel both want to get rid of PCI, and honestly it's a waste of space and a dead technology.
 

jdwii

Splendid


I'd say I would like this more than PhysX any day.



:D

 

mayankleoboy1

Distinguished
Aug 11, 2010


So what I meant here was that these features were thought up and implemented to lessen the programming headache for game devs, while at the same time lowering resource usage.
But the actual implementation was halfway to hell.

Example: Ambient Occlusion. Initially it was implemented to provide a little more depth to any game scene (the basic idea is sketched after this list). It was supposed to be game-agnostic, with the drivers handling the relative lighting of objects. But the "optimized" implementation we see is HBAO, and if you enable it, FPS goes down by 30-40%.

Example: Tessellation. It was implemented to ease the burden on game devs by generating geometry detail on the fly, with very little resource usage. Reality: same as above.

Example: PhysX. Supposed to make games look better by using separate hardware for it. Implementation: we all know.
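For the ambient occlusion example above, here is a toy CPU-side illustration of the basic idea (nothing like a real HBAO implementation, and all of the values are invented): for each pixel, count how many nearby depth samples sit in front of it and darken the ambient term accordingly.

```cpp
// Toy screen-space occlusion estimate over a tiny depth buffer.
// Pixels with many closer neighbours ("blockers") get a darker ambient term.
#include <cstdio>
#include <vector>

int main() {
    const int W = 8, H = 8;
    // Hypothetical depth buffer: 1.0 = far background, smaller = closer.
    std::vector<float> depth(W * H, 1.0f);
    for (int y = 2; y < 6; ++y)
        for (int x = 2; x < 6; ++x)
            depth[y * W + x] = 0.5f;          // a box sitting in the middle

    std::vector<float> ao(W * H, 1.0f);
    const float kBias = 0.05f;                // ignore tiny depth differences
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            int blockers = 0, samples = 0;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx == 0 && dy == 0) continue;
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                    ++samples;
                    if (depth[ny * W + nx] + kBias < depth[y * W + x])
                        ++blockers;           // neighbour is in front of this pixel
                }
            }
            ao[y * W + x] = 1.0f - float(blockers) / float(samples ? samples : 1);
        }
    }

    // Background pixels bordering the box come out darkened.
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) std::printf("%.2f ", ao[y * W + x]);
        std::printf("\n");
    }
}
```

Real HBAO does this in view space with horizon angles and many more samples per pixel, which is where that 30-40% FPS hit comes from.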
 

wth, that apu is totally stuck to the motherboard! the future is now!
but...but... when evil mainstream broadwell socs come soldered onto the motherboard, it will kill the small vendors. amd is so totally Not doing that by making fully integrated socs available on desktop platforms. they're totally not gonna sell much more because they're cheaper. it's innovation (Not hurting motherboard vendors), because it's amd...(TM).

anyway, when broadwell does come out, amd will have a tremendous opportunity to sway major vendors to its side. i am really hoping they can take advantage of that opportunity and not bungle it up like they did with trinity.
 


It's an impressive piece of technology that will make its way to mainstream APUs down the line. The DDR5, quad-channel IMC is impressive, which makes one wonder why it isn't on a desktop APU.
 

ddr5 isn't the same as gddr5.
afaik, a quad channel imc would require a socket change. that goes against amd's traditional upgrade policy, although they aren't afraid to screw the customers (llano) when they need to.
that's why it's okay for consoles, not so much for diy desktop builders looking to save money.
 


So AMD changed socket once with FM1 and now they are screwing people?

If all this eventuates, it will likely be after Steamroller and a new socket anyway. It is very possible for future arches.
 

that's how many fm1 owners who bought llano cpus, thinking amd would at least offer one more upgrade, see it. amd's decision was driven by design choice and necessity, proving that they won't shy away from cutting off an upgrade path if it's needed.
 


Yes, very true, but Llano was also 2 years late. Still, it is one solitary instance, and the platform is so cheap you won't be losing a lot of money. I don't think AMD directly endeavored to screw people; it was a matter of the technology outgrowing the platform, and as above, it was late, very late.
 