AMD CPU speculation... and expert conjecture



Vs what exactly ....

I might be getting amnesia in my semi-old age, but I seem to remember Haswell requiring a different socket, and thus a new motherboard, than Ivy Bridge. Yeah, real "upgrade path" you've got there. Plus, what exactly are you upgrading? The era of Socket 7 and Slot 1 / Socket A plug-and-play CPUs is long gone. Now every CPU iteration tends to require a different socket due to the implementation of some new technology or an incompatibility with previous technology. You can expect to replace your motherboard and CPU together every three to four years, unless you buy OEM, in which case you'll just buy a new system.
 


Lol, F Intel, this has nothing to do with them. From an AMD fan standpoint, why buy an AM3+ board when no future products will be coming out for it? I like to have an upgrade path when I buy something.
 


This isn't rational thinking, since no other board / product you would purchase qualifies either. Your reasoning, when applied to the decision of buying / building, has you purchasing... nothing. The APU lineup doesn't even factor in, as those serve an entirely different market than the aforementioned platforms. The strongest APU is significantly weaker than the FX-63xx, which also happens to be cheaper. Physics being what they are, after all.

You're building / buying a PC, so you must choose a platform to base it on. In the market there are two competitors to decide between (actually three, but we're leaving VIA out of this). You're either going to purchase an Intel CPU and Intel motherboard, or an AMD CPU and AMD motherboard. You must choose one, and both fail to qualify under your stated reasoning, as neither will have an "upgrade path" when it's time for you to actually upgrade. Both companies have a demonstrated history of breaking compatibility whenever a new technology or feature requires it. If anything, historically speaking, AMD has the better track record for at least trying to preserve it.

Now here is what's really happening. You're feeling disappointed and betrayed because you hoped for / expected something. That something didn't happen, and it left you with a feeling of anger / loss. It isn't a big deal, but it's still a negative feeling associated with a particular brand / object. You can't recommend the FX-63xx/83xx because of that negative feeling, not because of a rational, objective decision. And that's ok, that's how most healthy people function; advertising and marketing companies' entire business model is based on their ability to target that emotional decision-making.
 




More like a chicken-and-egg effect. There really isn't any new compelling software/hardware to make people want to buy high-end PCs. Other things need to catch up. If a 4K monitor were $200 instead of $4000, people would need faster PCs. Or if VR headsets like the Oculus got cheap, people would need more powerful hardware to drive two high-res LCDs at 60 fps.

The new Crytek engine (Ryse) looks like it can do some pretty cool stuff. Likewise with the new Frostbite engine.
 


Current hardware is stupidly overpowered for the vast majority of folks. Enthusiasts happen to like to build their own boxes and play around, but 99.99% of the population doesn't. They just want a box that turns on, lets them browse social media / the internet, work on and publish media, and do basic office productivity tasks. Communication and portable media consumption have been pushed into the phone / tablet world. Gaming has gone to the console world, and consoles are themselves just cheap dedicated PCs purpose-built for games. That leaves a small niche of dedicated gamers who want better experiences and are willing to pay for them, or an even smaller niche of dedicated professionals who actually need high-end power for their own work or research.

So yeah, general-purpose desktop computing has slowed to a crawl because a PC from four years ago can handle it just fine. Gaming has expanded, but it's a small group and goes unnoticed most of the time.
 

If Piledriver is an improvement over Bulldozer and Steamroller is an improvement over Piledriver, then yes.

 


A 30% drop in TDP seems like a major shift. That would likely be a 20nm GF design.

SA is saying early adopters will have it in 2014, with non-early-adopter production in 2015. AMD could hit a sweet spot in the middle as yields start to improve.

http://semiaccurate.com/2013/04/02/global-foundries-talks-about-tsvs-on-20nm/

And look at that 20nm test package with 2.5D stacking. They could put a couple of memory dies on there to boost performance.

[Image: GloFo_Amkor_2.5D_stack.png]
 


I don't think AMD or Intel expect people to upgrade a one-year-old PC. I doubt anyone upgraded from Ivy to Haswell, or Sandy to Ivy. A single year doesn't make for a big enough upgrade.
 

That's pretty much how it works: you skip generations. I went from a 970BE to an FX-8350, and I only purchased the 970BE because I needed something better than the 940BE I was using. I upgrade way more frequently than most people, and even I don't see "upgrade path" as a reason to choose one platform over another.
 


Thanks. I'm using several different JREs - 5, 6, and 7. When I build the Windows installer, it will force a version of 7+.

Hopefully I will get some more really high-end GPU results and then we can see what the issue is.

The texture flashing is an old bug, I confess. It is caused by overlapping textures being rendered at the same z depth. I will fix that this weekend.
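
For what it's worth, the usual cure for that kind of z-fighting is to bias the depth of the overlapping pass rather than rely on draw order. This is only a rough sketch assuming a JOGL 2 GL2 context (the import package differs between JOGL releases, and drawOverlappingQuad is a hypothetical name, not code from the benchmark):

import javax.media.opengl.GL;
import javax.media.opengl.GL2;

public class DecalPass {
    // Nudge the overlapping (decal) geometry towards the viewer in
    // depth-buffer units so it no longer ties with the surface beneath it.
    static void drawOverlappingQuad(GL2 gl) {
        gl.glEnable(GL.GL_POLYGON_OFFSET_FILL);
        gl.glPolygonOffset(-1.0f, -1.0f); // pull the decal slightly forward
        // ... draw the overlapping textured quad here ...
        gl.glDisable(GL.GL_POLYGON_OFFSET_FILL);
    }
}

An alternative is to shift the decal a tiny amount along the surface normal, but polygon offset avoids touching the geometry itself.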

Yuka may have found a genuine exploit, so I will look at plugging that this weekend too.
 


Couple that with the incredibly boring screens you see on all the Windows 8 laptops/desktops at the retail stores. I rarely see a demo that is the least bit compelling.

Walk by an Xbox/PS4 demo and those look cool. The PC marketing guys have some things to learn.
 
The community at IGN is retarded. I love talking about the Xbox One specs compared to the PS4; it's funny when they call me a Sony fanboy (for stating facts), even though I hate Sony and love Nintendo and PC gaming.
 

If you say anything negative about anything, you're a fanboy to some people. It's especially bad with consoles because they are usually owned by younger people.
 


Integer performance must be the most important metric for the average user, because most processing (including OS tasks) is done on this sort of data. Floating-point performance is relevant for spreadsheets and for some graphics programs and games that make heavy use of the FPU.

If you take a look at my BSN* article, you can find lots of benchmarks where Kaveri performs like an i5. This is because the test relies on integer performance. When you find one test like this

[Image: Himeno-Benchmark-kaveri-pre.png]


it indicates heavy use of the FPU. The i5 is much faster because it has twice as many floating-point execution units.

However, the tendency is towards offloading the CPU and using the GPU as a giant FPU. Next-gen games will do this (I linked earlier to Cerny's talk about how the PS4 GPU is specifically designed for that). Next-gen spreadsheets will use the GPU for computations (AMD is already working on an HSA spreadsheet with developers), and we will see HSA-enabled graphics programs as well.
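
To make the "GPU as a giant FPU" idea concrete in Java terms (since the benchmark discussed in this thread is Java-based): AMD's Aparapi library translates a data-parallel kernel written in Java into OpenCL and runs it on the GPU when one is available, falling back to a thread pool otherwise. A minimal illustrative sketch, with made-up array sizes and nothing to do with any actual HSA spreadsheet:

import com.amd.aparapi.Kernel;
import com.amd.aparapi.Range;

public class GpuFpuSketch {
    public static void main(String[] args) {
        final int n = 1 << 20;
        final float[] a = new float[n];
        final float[] b = new float[n];
        final float[] result = new float[n];
        for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2.0f * i; }

        // run() is translated to an OpenCL kernel and executed on the GPU if
        // possible; otherwise Aparapi falls back to a Java thread pool.
        Kernel kernel = new Kernel() {
            @Override
            public void run() {
                int gid = getGlobalId();
                result[gid] = a[gid] * b[gid] + 1.0f; // pure FP work moved off the CPU cores
            }
        };
        kernel.execute(Range.create(n));
        kernel.dispose();
        System.out.println("result[42] = " + result[42]);
    }
}

HSA's contribution on Kaveri is that this kind of offload no longer requires copying the arrays into a separate GPU memory pool, since the CPU and GPU share the same address space.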
 


The working frequency will be between the base and the max turbo clock. If you want to compare clocks, you have to use both to obtain lower and upper limits. You used the most unfavorable case by selecting only base clocks, and you compounded the unfairness by using a wrong Richland frequency.

Therefore the HD7750 with 1600MHz DDR3 is, according to you, "bottlenecked", but Kaveri with 2133MHz DDR3 (and the option of faster 2400, 2600... modules) is, again according to you, "massively bottlenecked". LOL

I didn't mention the number of transistors. I did mean 33% more shaders. And my point remains: 30% faster with 33% more shaders indicates there is no massive bottleneck.
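
For anyone following the arithmetic, the implied scaling efficiency from those two figures (taking the 30% and 33% in this exchange at face value, not as new measurements) works out like this:

public class ShaderScaling {
    public static void main(String[] args) {
        double speedup = 1.30;     // claimed performance gain
        double shaderRatio = 1.33; // claimed increase in shader count
        // Near-1:1 scaling of performance with shader count argues against
        // a severe memory-bandwidth bottleneck.
        double efficiency = speedup / shaderRatio;
        System.out.printf("Scaling efficiency: %.0f%%%n", efficiency * 100); // ~98%
    }
}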
 


Is the term roadmap mentioned in my post? No.

When I first said that the FX-9590 launch indicated the end of the FX numeration, people in this thread laughed.

When I first said that the server roadmap indicated no Steamroller FX, people in this thread told me to "wait for the desktop roadmap".

When I first got a copy of the 2014 desktop roadmap and it indicated no Steamroller FX, people in this thread told me "no, you must have the APU roadmap that everyone has, not the desktop roadmap".

When the 2014 desktop roadmap was officially released and it indicated no Steamroller FX, people in this thread told me to "wait for the 2015 desktop roadmap".

When I first published the 2015 desktop roadmap on my Twitter account, before it was leaked by news sites and before it was mentioned here, I laughed again. Now I am told "but roadmaps can change..."

Any bets that by 2017, people on this forum will still be speculating about when the nonexistent Steamroller FX comes to the desktop?
 
^^ Your reasons are why no one listens. Your reason was that Kaveri was going to be so badass that AMD wouldn't need any other products. You claimed that it's AMD's decision because Kaveri is all that is needed.

Truth is that GF is at fault and its bulk silicon is too weak, hot, and power hungry. Another point you praised as a great decision rather than admitting their hand was forced by GF, because GF didn't want to spend the money on the SOI equipment.

If AMD can find a source for SOI nodes, I guarantee their roadmaps will change.
 


Yes, you cherry-picked this benchmark before and used it to spam your nonsense that Kaveri will be 37% slower than Bulldozer. LOL

First, the same game in another review (TR) shows Richland and Trinity almost on par with the FX-4350 (which is faster than your 4300). The 4350 (4.2GHz) is 8% faster than Richland (4.1GHz).

[Image: metro-fps.gif]


Second, the same site that you cite over and over also has benchmarks where Richland is equal to the FX-4350

[Image: Gaming_01.png]

 


Ya, I cherry-picked a list of 12+ games that show APUs aren't designed specifically for gaming systems with discrete cards. There are differences in how the games were tested, and that's why the results differ.

TR: 1200x800, medium quality, 4x AF, low motion blur.
TS: 1900x1200, HQ, 16x AF, normal motion blur.

Sure, if you plan on playing at 1200x800 with low-to-medium quality settings, an APU will work fine.
If you want all your eye candy on and want to see what the game was intended to look like... well, guess what.

When you run into situations where L3 cache makes a difference... well... APUs don't have ANY.

Some people may be interested in the best possible expectations, but when they demand more than what their system can actually handle... guess who will be disappointed.

Let's face it, you and I will never see eye to eye. You want to see the golden eggs from the goose and throw away the bad eggs; I want to know if the goose has insurance, because I'm going to abuse it.
 


The JOGL library is native code, so it's ISA-specific. I want to get it running on Android and BlackBerry, however.

If I cut out the GPU tests, it can easily be adapted to anything that runs a JVM. I will put up a no-JOGL version on the website - just a JAR. I'll PM you when it is there.
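
One low-effort way to ship a single JAR that degrades gracefully without JOGL is to probe for the library at runtime and skip the GPU tests when it is missing. A rough sketch only; the probed class is a JOGL 2 entry point, and runGpuTests/runCpuTests are hypothetical placeholders, not the benchmark's real methods:

public class BenchLauncher {
    // Returns true if the JOGL classes (and hence a usable GPU path) can be loaded.
    static boolean joglAvailable() {
        try {
            Class.forName("javax.media.opengl.GLProfile");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        } catch (LinkageError e) { // class present but natives failed to load
            return false;
        }
    }

    public static void main(String[] args) {
        if (joglAvailable()) {
            System.out.println("JOGL found - running CPU and GPU tests");
            // runGpuTests(); // hypothetical
        } else {
            System.out.println("JOGL not found - running CPU-only tests");
        }
        // runCpuTests(); // hypothetical
    }
}

Because JOGL is only touched via reflection, the same JAR still starts on a plain JVM even when the JOGL jars and natives are absent.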
 


You don't listen. I know, but you feel a strong need to lie about what I say. :kaola:

And yes, AMD moving from SOI to bulk has proven to be a fantastic choice. They are now starting to sell Kaveri, which is a clear winner. Your preferred future had AMD stuck on SOI and unable to release anything new next year because, as we already knew, GloFo couldn't deliver. Your typical AMD hate afloat again. The same hate is why you are spreading FUD and BS against Kaveri on a daily basis.
 