AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post questions or information relevant to the topic, or your post will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame-baiting comments about the blue, red and green teams and they will be deleted.

Enjoy ...
 
Thanks, viridiancrystal.
I will study those links.

I don't think of BD as a "bad" CPU;
it was overhyped and wrongly marketed.
For a brand-new build it is hard not to recommend Intel at many price points,
but at the same time a BD build isn't horrible either.
Really it is more a matter of AMD's pricing:
if the FX-4100 were priced under $100 USD, say $75 for example, watch how people's opinions would change.
But when the i3-2120 is in direct price competition, it is hard to recommend the FX-4100.

Looking at Newegg right now, the FX-8120 is $189
while the i5-2400 is the same price.

Mobo prices would be about the same, depending on features.

So for a new gaming build the i5-2400 makes much more sense,

but for a workstation build the FX-8120 might be a possible contender.
Hard to ignore 8 cores,
though I have to study those links to see if this holds up.
 
idTech 5 is OpenGL, but id Software has no plans to license it out. Even then, it didn't exactly have a strong first showing [Rage]... As for Unigine, let's see how easy it is to make an actual game out of that engine.

One of the big advantages of DX for years was the fact that you had EVERYTHING you needed to create a game built into the API [Direct3D, DirectSound/DirectSound3D, DirectInput, DirectDraw/Direct2D, etc.], whereas OpenGL was [and still is] purely a rendering library, and thus requires other APIs to fill in the missing feature set. For a time it wasn't uncommon to see games coded in OpenGL but using DirectSound/DirectInput for audio and user input. Never mind that DX, by design, is easier to integrate with Windows than OpenGL.

Well, I have RAGE, and its only flaw is ending just when things got interesting, hahaha. But the engine itself is quite good. If you compare it with BF3 or Crysis it might fall a little behind on eye candy, but on a purely technical level I found the in-game v-sync lock and the threading quite amazing. The game really pushes your hardware until it sits comfortably at the v-sync cap. And the mega-texture approach really gives the game an artist's touch. Putting aside the low-res textures it renders when the game decides you can't use higher-quality ones, it really looks good.

And regarding DirectX... Didn't MS remove DirectSound from DirectX and just leave the API sitting around, with OpenAL replacing it? It should still be easy to code everything in DirectX anyway, but I wouldn't say "integrating OGL" into Windows is hard, or at least it shouldn't be. Is there anything extra you need to do with OGL on Windows? At least on Linux it's "compile and run".

Cheers!
 
I haven't seen all the reviews of BD, but from what I understand the FX-8xxx will hold its own in workstation
applications, and I would think it does well in multitasking environments.
Having 8 cores must help when you have multiple apps open.

If somebody has a good link to benches of BD in multithreaded workstation-style apps (Maya, encoding/rendering like HandBrake, etc.),
basically benches besides gaming comparing it to Intel and Phenom II,
it would be great if you posted them.

Lol WHAT? You don't do that in this thread...

And you are going to pay for those remarks.
 
Frankly, the "auto quality" was a dumb idea. It is also worth noting that when using Vsync+Triple Buffering in Open GL, you don't have the frame drops you do in Direct3d [though NVIDIA apparently has a new Vsync that solves that little problem...]. I think Doom 4 will be a full demonstration of the engines capabilities.

DirectSound is still there; it's just limited to being a software API now. OpenAL can be used, and is starting to be used, but for the most part hardware-accelerated audio is more or less dead. [Though Windows 8 apparently will have a hardware-accelerated audio API, alongside 32-bit audio support.] Likewise, DirectInput remains the best way to handle user input for the most part.

The point being: if you go the DirectX route, you have everything you need in front of you, ready to go. If you go OpenGL, you need to rely on more third-party libraries, or even other DirectX functionality, to fill in what OpenGL doesn't do.
 
Frankly, the "auto quality" was a dumb idea. It is also worth noting that when using Vsync+Triple Buffering in Open GL, you don't have the frame drops you do in Direct3d [though NVIDIA apparently has a new Vsync that solves that little problem...]. I think Doom 4 will be a full demonstration of the engines capabilities.

Ugh... I totally disagree there, my friend. Making that feature better frees up resources to make the game run smoother; the key here is the v-sync and your screen's refresh cap. On a 120 Hz monitor it might be less noticeable, but at 60 Hz or 75 Hz it does make a difference. The current implementation still has issues, I'll give you that, but it's not a bad idea IMO. Not at all.

And yeah. The dudes behind Hydra made some v-sync tweaks IIRC. Along with the thing nVidia did, we might be getting a lot of progress there. At least, I hope so, hahaha.

Ah, now that you mention it, I remember what they did. DiRT 2 (don't know about DiRT 3) uses some OpenAL-based software to do the sound, and it works quite well, but lets DirectInput and Direct3D do the rest, like you say. And oh man, we should be getting realistic sound positioning in our games, plus advanced effects! L4D2 has some amazing sound positioning mixed into the game, for example. Hopefully we'll still get some hardware features other than the way-too-old EAX, hahaha.
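
For anyone curious what the OpenAL side of that looks like, here is a bare-bones sketch (my own toy example, nothing from DiRT's actual code; it assumes the openal-soft headers and library are installed): it just synthesizes a beep and places it to the listener's right.

```cpp
// Minimal OpenAL positional-audio sketch: one mono source placed to the
// listener's right. Link with -lopenal (openal-soft assumed).
#include <AL/al.h>
#include <AL/alc.h>
#include <chrono>
#include <cmath>
#include <thread>
#include <vector>

int main() {
    ALCdevice* dev = alcOpenDevice(nullptr);           // default output device
    if (!dev) return 1;
    ALCcontext* ctx = alcCreateContext(dev, nullptr);
    alcMakeContextCurrent(ctx);

    // A one-second 440 Hz mono tone; positional audio needs mono buffers.
    const int rate = 44100;
    std::vector<short> pcm(rate);
    for (int i = 0; i < rate; ++i)
        pcm[i] = static_cast<short>(32000 * std::sin(2.0 * 3.14159265 * 440.0 * i / rate));

    ALuint buf, src;
    alGenBuffers(1, &buf);
    alBufferData(buf, AL_FORMAT_MONO16, pcm.data(),
                 static_cast<ALsizei>(pcm.size() * sizeof(short)), rate);
    alGenSources(1, &src);
    alSourcei(src, AL_BUFFER, static_cast<ALint>(buf));

    alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);       // listener at the origin
    alSource3f(src, AL_POSITION, 3.0f, 0.0f, 0.0f);    // source 3 units to the right
    alSourcef(src, AL_ROLLOFF_FACTOR, 1.0f);           // distance attenuation on

    alSourcePlay(src);
    std::this_thread::sleep_for(std::chrono::seconds(1));

    alDeleteSources(1, &src);
    alDeleteBuffers(1, &buf);
    alcMakeContextCurrent(nullptr);
    alcDestroyContext(ctx);
    alcCloseDevice(dev);
}
```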

Cheers!
 
As far as sound engines go, it shouldn't be THAT hard to create an arbitrary number of sounds and determine their positions within a game, given that you already have the 3D scene created. That's simple to do. Amplitude of a given sound... that's a significantly harder problem. You have to deal with the mechanics of audio transmission through various materials, echo effects, etc. This can get VERY computationally expensive very quickly, especially with a large number of audio sources. It adds a LOT of processing in an area not a lot of people can easily notice, so naturally very little work gets done in that area.

In my opinion, a well-built game engine should be able to handle any number of audio sources. That being said, it's not like we have a mechanism for outputting more than 8 channels at any one time...
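
To illustrate the split between the easy part and the hard part: the cheap positioning most engines do boils down to something like inverse-distance gain plus a pan, as in this toy sketch (my own, not from any particular engine). Everything described above as expensive, like occlusion through materials and echo, is exactly what is not modeled here.

```cpp
// Toy positional-audio math: inverse-distance attenuation plus a simple
// left/right pan from the source's direction relative to the listener.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Gain falls off as refDist / distance, clamped so it never exceeds 1.
float distanceGain(const Vec3& listener, const Vec3& source, float refDist = 1.0f) {
    float dx = source.x - listener.x, dy = source.y - listener.y, dz = source.z - listener.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    return refDist / std::fmax(dist, refDist);
}

// Constant-power pan: project the direction to the source onto the listener's
// "right" vector, then map [-1, 1] (hard left .. hard right) onto [0, pi/2].
void stereoGains(const Vec3& listener, const Vec3& right, const Vec3& source,
                 float& outL, float& outR) {
    float dx = source.x - listener.x, dy = source.y - listener.y, dz = source.z - listener.z;
    float len = std::sqrt(dx * dx + dy * dy + dz * dz);
    float pan = (len > 0.0f) ? (dx * right.x + dy * right.y + dz * right.z) / len : 0.0f;
    float angle = (pan + 1.0f) * 0.25f * 3.14159265f;
    outL = std::cos(angle);
    outR = std::sin(angle);
}

int main() {
    Vec3 listener{0, 0, 0}, right{1, 0, 0}, source{3, 0, 4};  // 5 units away, to the right
    float g = distanceGain(listener, source);
    float L, R;
    stereoGains(listener, right, source, L, R);
    std::printf("gain %.2f, left %.2f, right %.2f\n", g, g * L, g * R);
}
```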
 
Lol WHAT? You don't do that in this thread...

And you are going to pay for those remarks.



Don't know how to take that, LOL.

I didn't know it violated Tom's rules to ask for tech links.

I am always glad to help somebody out with links if I have them.

And what remarks?

That BD, while not a gaming CPU, has some value as a workstation CPU?

So I hope you are only joking :)
 
I tried, but I'm not that good at googling, I guess.
The best guesses on the web were Q3 or Q4 2012 from what I could find.

I did see this:
http://www.xbitlabs.com/news/cpu/display/20120116163742_AMD_Describes_Piledriver_Architecture_Peculiarities_to_Software_Developers.html

Also this:
http://hexus.net/tech/news/cpu/35869-leaked-amd-release-schedule-focus-upcoming-trinity/

This is a quote from Hexus; not sure how reliable they are:
"It also appears as though the third quarter will finally see a significant introduction to the performance desktop CPU line-up, with the Piledriver-based FX-8350 making an appearance. It will be good to see the firm finally pushing forwards in the processor department as opposed to just the graphics segment."

And some interesting news tidbits:
http://www.cpu-world.com/
 
http://www.tomshardware.com/reviews/core-i7-3960x-x79-sandy-bridge-e,3071-11.html

A review of the six-core/12-thread 3960X Sandy Bridge-E, but it includes the FX-8150, 2600K, 2500K, six-core Westmere 990X and Phenom II X6 for comparison in various productivity and workstation-type apps.

There are later articles on the 3930K, etc. with similar comparisons.
 
Thank you, FOS.

I've been reading the other links I think viridian gave me.

At this point the FX-8120 is a match-up for the i5-2400 at the $189 USD price point,
so I find that an interesting comparison from the workstation point of view.
Though the only thing is that workstations are usually not a budget-based buying decision in some ways;
if you have projects with serious money riding on them, then going cheap on workstations is foolhardy.

I saw reviews of the FX-8150 against the 2500K in workstation apps,
and the 2500K beats the 8150 in most productivity benches.

But the 8150 also supports some new instruction sets related to encoding/rendering that are not being used by devs yet:
http://benchmarkreviews.com/index.php?option=com_content&task=view&id=831&Itemid=63&limit=1&limitstart=14

Also, in that article most of the benches have the FX-8150's IPC about the same as an 1100T's in workstation apps.

Hmm, need to do more reading, LOL.

For a home user or small-business user on a budget who wants to run workstation apps plus multitask,
BD might be viable, but it is still hard not to recommend Intel or going with a Thuban six-core.

If PD is done right and at a reasonable price point, it might save the day.

I wonder if AMD is thinking very long term or just screwed up with BD.

My opinion on that is still open.



 
"The goal of this approach is twofold: to enable deeper inspection of the code at compile time to identify additional opportunities for parallel execution, and to simplify processor design and reduce energy consumption by eliminating the need for runtime scheduling circuitry."


"Since Merced was the first EPIC processor, the development effort encountered more unanticipated problems than the team was accustomed to. In addition, the EPIC concept depends on compiler capabilities that had never been implemented before, so more research was needed"


Clearly they didn't succeed in their goals.

Nope, it wasn't simpler; it ended up being more complex in the end. Bad performance, because it was the wrong type of CPU design (VLIW) for the wrong task (general computing). Even though it ran Windows, nobody wanted to use it; we ran some HP-UX boxes for our PUAs (Profile User Agents) and MWSs (Monitoring WorkStations) with HP OpenView. Absolutely horrible; we eventually went to an x86 + NT platform for those functions.
 
Compare the iTunes results to the HandBrake results and you can see how important code optimizations are, especially SSE instructions. There are lots of programs out there that won't run anything higher than SSE2 on anything with an "AMD" label, regardless of the CPU's supported instruction set.
 
Well, going through Tom's review of the 3960X and comparing the FX-8150 benches,
it does reasonably well in productivity, content creation and media;
it does beat the 2500K, 1100T and 980BE in some of the benches.

At $249 on Newegg for the 8150, with the 2500K at basically $200,
it is hard to justify the 8150.
So really it is the pricing more than the performance that is the issue.
If the FX-8150 were $200 USD, then you would hear a lot less complaining, I think.
Same with the FX-4100 if it were at $80-90.

The BD series aren't horrible CPUs, but they are priced above what their performance is worth,
from the reviews and benches I have seen so far.

It is the combination of the initial hype and the pricing that is making BD look so bad.
 
I wasn't around reading the hype on BD before it came out.
I have no bias towards AMD or Intel.
I own an AMD (PhII Deneb at 3.5 GHz) now, and before that an Intel (C2D at 3 GHz).
I want AMD to do well just so Intel has competition.

If pricing were more in line with performance, then I doubt people would complain as much about BD.
Now look at this:

http://www.reuters.com/finance/stocks/chart?symbol=AMD.N


Since BD came out, AMD's stock price has doubled and is close to last year's level of $9 a share.

So if you bought shares of AMD in Oct '11 at $4.50, you almost doubled your money in six months.

I'm not sure the rising stock price is directly tied to BD sales;
I would love to see sales figures for BD.

Because really, whether BD is a success is not measured by IPC performance or BF3 benches;
it is sales and stock price that matter in the end.
What we think of as a failure in our eyes could be a success in stockholders' eyes.
 
Nah, Bulldozer just wasn't right, for whatever reasons; I quit trying to figure it out. It was the CPU I wanted for my home workstation-like computer. I went with the 2600K.

I mean, all of us in this thread know why Bulldozer wasn't right; we just perhaps don't know what in the design or implementation (process) caused it to be not right. Do we? You should go figure that out and report back; you seem to have the energy for it, lol.
 
A CPU supporting an instruction set does not mean a program will actually use it.

I would imagine that if programs could take advantage of the other sets, it probably would help. But AMD needs to push out into the dev world and start pushing like Intel/nVidia do. For a while they seemed to have stopped on the GPU front, but recently they seem to be doing more there.

They really need to work more closely with devs to get their CPUs' abilities all tied in. Look at Intel: when SB came out, a few apps could already use QuickSync, but when BD hit there were a lot of new instruction sets that were not usable since AMD hadn't worked with the devs.

Maybe with the fab-lite model they will have more funds to do that.

I agree the pricing is way higher than it needs to be. The FX-8150 should be under $200 USD.

The real question is, though: would you feel the same if it performed better? If AMD took the performance crown and did as they did with the Athlon 64, with $1K top-tier CPUs and others between that and less?

Or if the FX were again their "Extreme Edition" with the only unlocked multiplier.

That's what I wonder. People are saying the GPU price is too high for the HD 7970; I agree, but I knew it was bound to happen the second AMD took the performance crown back.

I think some people got too used to AMD being cheaper while offering "decent" performance (I would say Phenom II, not Phenom I; BTW, we had a bad Phenom I CPU today, probably my third bad CPU in 1.5 years at this job), and when AMD does finally compete overall at a higher level, people won't like the pricing they will see.

Time shall tell.
 
I always evaluate things on a cost vs. performance vs. work-needed-done scale. If the FX-8150 performed better, then it could command a higher price. There is absolutely no need for a $1K CPU in the consumer market; that's just bragging rights for people with more money than sense.

As for instructions, we've been above SSE2 for a while now; it's an issue with developers' compiler flags. Intel provides the SDKs to developers at no cost, and that's how you get software compiled to use SSE3 on Intel CPUs but SSE2 on AMD CPUs, or sometimes no SSE at all. It's a giant cluster f**k right now; just look into what happened with FMA.
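
To make the compiler-flag point concrete, here is roughly what a vendor-neutral build looks like with gcc (my sketch; `scale` is just a stand-in hot function): the compiler clones it per SSE/AVX level and dispatches on the CPU's feature flags rather than its vendor string.

```cpp
// Vendor-neutral SIMD dispatch with gcc/clang function multiversioning:
// the compiler emits a baseline clone plus SSE4.2 and AVX clones and picks
// one at load time from the CPU's feature flags, not its vendor ID.
// Build (illustrative): g++ -O3 -o demo multiversion.cpp
#include <cstddef>
#include <cstdio>
#include <vector>

__attribute__((target_clones("default", "sse4.2", "avx")))
void scale(float* x, std::size_t n, float k) {
    for (std::size_t i = 0; i < n; ++i)  // each clone can be vectorized for its ISA
        x[i] *= k;
}

int main() {
    std::vector<float> v(1024, 1.0f);
    scale(v.data(), v.size(), 2.0f);
    std::printf("%f\n", v[0]);  // 2.0, whichever clone was selected
}
```

The blunt alternative is simply building twice, e.g. g++ -O2 -msse2 vs. g++ -O2 -msse3, and shipping both binaries.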
 
I know it would command a higher price. I just wonder how many people would accept that, or whether they would complain.

Still, it is Intel's compiler. I can't expect them to optimize their compiler for AMD's CPUs, nor would I expect AMD to optimize for Intel if they had their own, or vice versa.

Strangely enough, does anyone remember the AMD Dual-Core Optimizer? I wonder why they haven't done one of those in a while. A strange fact, too: Duke Nukem Forever has it built in and will install it on any system, even an Intel one (I had it on mine), and it kills performance on Intel systems; you have to uninstall it.
 
Originally it had nothing to do with Intel optimizing for their own CPUs: they released it without telling anyone that it crippled code running on AMD CPUs. We only found out because VIA CPUs let you change the CPUID. When it was changed to "Intel", benchmarks suddenly jumped higher, with no hardware change or OS reconfiguration required. Just tell programs you were running an Intel CPU and they performed better; tell them you were running an AMD CPU and they crippled themselves. One of the stipulations of the AMD vs. Intel lawsuit was that Intel had to stop the practice of coding software to cripple AMD CPUs. Before, it would disable SIMD on AMD CPUs entirely; now it just limits them to SSE2.

I happen to have a VIA Nano; I've actually done testing on it, and yes, you can still get more performance by telling programs that you're running Intel. AMD should make their CPUIDs changeable in software so we can get a better handle on it. It's also why I recommend everyone use gcc or MS's compiler instead of Intel's.

And before anyone says something: it's not "optimizing" at all. It does a simple check of the CPUID vendor ID, and if it detects "AMD" it puts the code down a limited path, end of story. If it's "Intel", then it does a feature-flag check to determine capabilities and uses that to decide which instructions to execute. This is done on the client's system at run time, not compile time.
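
For reference, the check being described is about this simple. A little sketch of my own (using gcc's <cpuid.h>; an illustration, not Intel's actual dispatcher code) showing vendor-string dispatch next to feature-flag dispatch:

```cpp
// Vendor-string dispatch vs. feature-flag dispatch, both via the CPUID instruction.
#include <cpuid.h>
#include <cstdio>
#include <cstring>

int main() {
    unsigned eax, ebx, ecx, edx;

    // Leaf 0: the vendor string lives in EBX, EDX, ECX ("GenuineIntel", "AuthenticAMD").
    char vendor[13] = {};
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    std::memcpy(vendor + 0, &ebx, 4);
    std::memcpy(vendor + 4, &edx, 4);
    std::memcpy(vendor + 8, &ecx, 4);

    // Leaf 1: ECX/EDX hold the actual feature flags (SSE3 is ECX bit 0).
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    bool has_sse3 = (ecx & 1) != 0;

    // "Vendor dispatch": capabilities are ignored unless the vendor string matches.
    if (std::strcmp(vendor, "GenuineIntel") == 0 && has_sse3)
        std::puts("vendor dispatch: SSE3 path");
    else
        std::puts("vendor dispatch: baseline SSE2 path");

    // "Feature dispatch": uses whatever the CPU actually reports.
    std::puts(has_sse3 ? "feature dispatch: SSE3 path"
                       : "feature dispatch: baseline SSE2 path");
}
```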

From Agner,

http://www.agner.org/optimize/blog/read.php?i=49

A program to get reimbursed for having to recompile your programs with non-Intel compilers:

http://www.compilerreimbursementprogram.com/

A program to fake your CPUID:
http://www.agner.org/optimize/#cpuidfake

Intel's compiler still limits anything non-Intel to SSE2; at least now they're forced to tell their clients that it does that.
 
Agner's evaluation of the BD uArch

http://www.agner.org/optimize/blog/read.php?i=187

The guy has forgotten more than I'll ever know about microarchitecture and ISAs. He writes a set of manuals on optimizing your code for various CPUs and has introduced a method to get around Intel's "cripple AMD" function.

His review was overall positive; he identified a problem with the shared front-end decoder running into resource conflicts when both cores of a module are actively working.
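
If anyone wants to poke at that shared-decoder claim themselves, the usual quick-and-dirty test looks something like this (my sketch; it assumes Linux, g++ -O2 -pthread, and that logical CPUs 0 and 1 are the two cores of one module while CPU 2 sits in another module, which depends on how the OS enumerates them):

```cpp
// Crude throughput test: the same integer workload on one core, on two cores
// assumed to share a module (and its front end), and on two cores assumed to
// sit in different modules.
#include <pthread.h>
#include <atomic>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

static std::atomic<std::uint64_t> g_sink{0};  // keeps the loops from being optimized away

static void pin(std::thread& t, int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(t.native_handle(), sizeof(set), &set);
}

static void work() {
    // Four independent multiply/add streams so the core's pipelines stay busy.
    std::uint64_t a = 1, b = 2, c = 3, d = 5;
    for (std::uint64_t i = 0; i < 200000000ULL; ++i) {
        a = a * 6364136223846793005ULL + 1;
        b = b * 6364136223846793005ULL + 3;
        c = c * 6364136223846793005ULL + 5;
        d = d * 6364136223846793005ULL + 7;
    }
    g_sink.fetch_xor(a ^ b ^ c ^ d, std::memory_order_relaxed);
}

static double run(const std::vector<int>& cpus) {
    std::vector<std::thread> threads;
    auto t0 = std::chrono::steady_clock::now();
    for (int cpu : cpus) {
        threads.emplace_back(work);
        pin(threads.back(), cpu);
    }
    for (auto& t : threads) t.join();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
}

int main() {
    std::printf("1 thread:                %.2f s\n", run({0}));
    std::printf("2 threads, same module:  %.2f s\n", run({0, 1}));  // assumed core pairing
    std::printf("2 threads, diff modules: %.2f s\n", run({0, 2}));  // assumed core pairing
    // If the shared decoder really is a bottleneck, the same-module run should come
    // out measurably slower than the different-module run.
}
```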
 
Wonderful information. Intel... for shame.
Since my first computer build in 1997 I have been using AMD CPUs and motherboards. No longer. Maybe Intel will grow up and be more responsible. Maybe AMD... heck, maybe they'll hire you.
 
It's important to note that you can enable the compile-time switch /QxO to force SSE2/SSE3 support on non-Intel architectures. And the SSE limitation was clearly defined in the documentation.
 
Nah, Bulldozer just wasn't right, for whatever reasons; I quit trying to figure it out. It was the CPU I wanted for my home workstation-like computer. I went with the 2600K.

I mean, all of us in this thread know why Bulldozer wasn't right; we just perhaps don't know what in the design or implementation (process) caused it to be not right. Do we? You should go figure that out and report back; you seem to have the energy for it, lol.


I might have the energy for it, but I don't have the brains for it, LOL.

What I was wondering and researching was whether BD is competitive at the workstation level.
We know that for gaming it is not.
As a workstation CPU, BD has some merits, but because of pricing it falls short, IMHO.
 