AMD Talks Steamroller: 15% Improvement Over Piledriver

[citation][nom]blazorthon[/nom]Intel's 22nm process is 3D according to Intel, and I'm not wrong about what I'm saying. I can disable one core per module to get a substantial boost in performance per remaining core per Hz, and overclocking the L3 cache gives some more performance per core per Hz of the CPU's frequency. I'm not getting knowledge from mere news releases. I'm getting it from both people who have tested this and from reputable sites. You can't honestly tell me that everyone is wrong about disabling one core per module helping the remaining core, given that it then has the rest of the module's front end all to itself rather than sharing it. Considering that Bulldozer's front end has some similarities to Intel's Sandy Bridge front end, it's no surprise that disabling one core per module and giving it a full-speed L3 cache gives it quite the improvement. Sure, they're not identical and they don't have identical performance, but it's still a huge improvement for AMD. AMD can then hit higher frequencies to make up for the still slightly lower performance per Hz.[/citation]

The transistors are 3D, but they are still side by side, not stacked on top of each other.
 
[citation][nom]blazorthon[/nom]How minor overclocking is doesn't matter, because we at this site are mostly enthusiasts who are more than willing to do it. I do it, so overclocking, no matter how irrelevant to other people, is relevant to me and other people who overclock. It's nearly free performance, so why shouldn't I? Just because some people don't understand it very well doesn't mean that I should partake in that misunderstanding. Also, Ivy was given paste so that it wouldn't make SB-E nearly irrelevant when overclocked, not because overclocking is rare. Intel made the mistake of letting their $1K CPUs be overshadowed by their $200-350 CPUs once, and they didn't want to do it again. I wasn't comparing OC AMD to stock Intel except where Intel couldn't overclock, so you're being absurd. I was saying that AMD can compete with Intel even when both are overclocked to the max, if you use the AMD CPU properly. The FX-81xx CPUs can compete with the K edition i5s in overclocking (not quite performance per Hz, but pretty darned close) when you know how to use them. I've even described how it's done. The K editions aren't toys. They are ways to get a lot more performance for the money. Would you prefer Intel have CPUs with stock frequencies that high, but prices extending upwards to the LGA 2011 series? That would be the only way to get such performance without overclocking, and I see no reason to spend so much money on a CPU when I know how to work with cheaper models to get them performing where I want them to. I don't care that AMD loses at stock settings. It doesn't matter at all to me because I don't use stock. Having said what I've said, using stock settings on an AMD CPU is now obviously next to idiotic without a damn good reason for it, and your instability concerns are wrong, so they don't count at all.[/citation]

1. The world DOES NOT revolve around Tom's Hardware users.
2. One should not be expected to tweak/overclock their new CPU. That area is for enthusiasts only, and Intel/AMD do not design their CPUs for enthusiasts only. Their biggest customers are non-overclockers. Consumers only see how the stock CPU performs; don't ask a consumer to buy an AMD CPU just to go back home and tweak the BIOS. If AMD CPUs are so guaranteed to work at those settings/overclock conditions in perfect stability, why wouldn't AMD just clock those chips that way at stock? If overclocking provides equal stability and reliability when done right, why wouldn't server users do it?
3. You are in fact comparing it. You are saying that disabling one core and overclocking the others gets a performance boost over Intel's stock CPUs; disabling one core per module and overclocking the rest is already tweaking. It is already a non-stock setting, and if you leave AMD at stock, AMD wouldn't beat Intel. And show me a chart of an air-cooled OCed 8150 beating an air-cooled OCed 2700K in 80% of consumer applications.
4. SB-E has two extra cores and can clock as high as 5GHz as well; the 1155 SB/Ivy Bridge can never beat it. There are always reasons to pay extra for those two extra cores. Ivy Bridge's TIM is working exactly as Intel wants: lower power consumption at stock. There is nothing more Intel wants outside of the spec, so don't expect them to release another stepping to address the issue. Intel does not cover overclocking; Intel designed Ivy Bridge to be what it is, to work at lower power at the same speed as SB, and that's about it. The K series is still a toy offered by Intel to keep small-budget enthusiast users happy. As future CPUs integrate more stuff, you will see more restrictive overclocking. It is bad business for Intel to test CPUs working outside the spec they were designed for, especially with more different components integrated into them.

Again, the world does not revolve around overclockers and you. You are not Intel's biggest customer.
 
[citation][nom]Pinhedd[/nom]... 15-18 month intervals while simultaneously migrating key parts of the chipset into the CPU itself.

.....Bulldozer may end up being AMD's Netburst. After Netburst Intel went back to the drawing board ...[/citation]

a) You are overlooking the CAD advantages of the AMD design. In Intel's tick-tock model you must double your numbers for any real revision to occur, and it always takes large chunks of people/money. AMD can now perfect one design using far fewer people in less than half the time of two Intel ticks. At this rate of short revision cycles, AMD's cores will soon match Intel's at the same fab node. When fabs reach another node, more or less all AMD needs to do is gather the libraries and push the "port" button, saving a few billion while copy/pasting another 8 cores or graphics cores into the extra transistors.

b) I live in a developing country, and yes, APUs sell very well (though I have sold my share of i7-3960Xs). Here some people want to use their machines until the silicon turns to dust, but bulging capacitors just don't let them.
Point is, I have installed P4 LGA 775 chips into new DDR3 motherboards and the difference is like night and day. They are still outclassed by modern processors, but considering it IS the same chip, it's shocking how badly Netburst was starving for bandwidth in its day.

I think the same is true for the current AMD design. The RAM itself is unable to keep up with the GPU. It also cannot feed more than 8 modules (not even on Interlagos, where TDP can be overcome). Mark my words: when newer RAM standards are settled, Intel will feel the heat.
 
[citation][nom]Tomfreak[/nom]1. The world DOES NOT revolve around Tom's Hardware users.

Again, the world does not revolve around overclockers and you. You are not Intel's biggest customer.[/citation]

Oh wow, brave guy, stating that home truth to IT enthusiasts. Whenever I've told them that Intel and AMD do NOT hang on their every word/whine, it causes all sorts of tears.

Teenagers like to think they are the centre of the universe, remember, and they still think they are the driving force in major IT. Yes, all technology from the ancient Egyptians to now has purely culminated in allowing 14-25 year olds to play Crysis at more than 40fps.
 
[citation][nom]mamailo[/nom]The RAM itself is unable to keep up with the GPU. It also cannot feed more than 8 modules (not even on Interlagos, where TDP can be overcome). Mark my words: when newer RAM standards are settled, Intel will feel the heat.[/citation]Well, Bulldozer doesn't have a GPU to start with. As for the AMD architecture, it is more likely targeted at the server market and "ported" over to consumer. Of all the markets (server, desktop, mobile), desktop is the least profitable segment. I think what Intel fears most is not the RAM itself, but software finally making full use of the Bulldozer architecture in combination with the GPGPU from Radeon. AMD's GPGPU advantage is well ahead of Intel's. This is why you see Intel wildly ramping their GPUs to catch up with AMD, 50-100% every generation. Haswell will push this much more.
 
[citation][nom]Tomfreak[/nom]Well, Bulldozer doesn't have a GPU to start with. As for the AMD architecture, it is more likely targeted at the server market and "ported" over to consumer. Of all the markets (server, desktop, mobile), desktop is the least profitable segment. I think what Intel fears most is not the RAM itself, but software finally making full use of the Bulldozer architecture in combination with the GPGPU from Radeon.[/citation]

But Piledriver DOES. Whether BD ever had the electronic hooks and glue to support a graphics core is unknown to me. Here is the flaw in your argument: there are no longer "ports", "derivatives", "based on", etc. cores in APUs, FX, Opteron, and so on. It is the same core, down to 99%. The rest just accommodates the hooks for a specific market: more HT buses, larger caches, or GPGPU. The core is the same.
They fixed PD for the APU, but Opteron line sales could actually be boosted just by adding GPGPU for the cost of a few pennies per unit.

Sure, Intel is one node ahead of anything in the fab industry. But it comes with a big price tag. It's highly speculative (it's anybody's guess), but I have seen figures of up to $5 billion for the next node.

Wow, that's more than AMD's current stock market value, and that does not include the cost of R&D for a die shrink. Intel is ahead; period. Want the best? It costs $1K per processor; that's it. Let's keep moving.
For the rest, there is AMD.

Don't get me wrong, I do sell Intel products where they fit, but AMD lines sell better here. If the profit per unit is not great, the sheer numbers still pay some bills. Do not underestimate that impact.
 
It doesn't matter that Tom's isn't the majority of computer users because here on Tom's, we are the majority, so we should consider this stuff!

You don't really think that Intel using paste instead of fluxless solder between the Ivy IHS and CPU die lowers power consumption, do you? It has no significant impact on power consumption to have paste instead of solder. It is there to limit Ivy's performance because without it, Ivy would overclock better than SB does and would cut into SB-E's performance market. A 50% core count advantage is a lot less of an advantage when the competitor can clock higher and has slightly higher performance at the same clock frequency.

I don't know why AMD doesn't do things better than they do. They have made mistake after mistake in so many ways that they obviously don't handle everything they do very well. I am not an AMD fanboy of some sort. I recognize that they often seem to act like idiots and that they would do better with improved management. Issues such as not making enough Llano chipsets for motherboards to accommodate their many Llano APUs were a huge sign of such stupidity going on at AMD.

You are telling me that I'm comparing stock Intel to overclocked AMD when I was not, except where Intel can't overclock by more than, say, 5%, not enough to change the results noticeably. That I'm comparing tweaked AMD to non-tweaked Intel with both being overclocked (or not; it's still a good comparison even if you don't overclock both of them) is not a bad thing at all, because Intel can't be tweaked in such ways. They already have an optimal core configuration for their architecture and they already have a full-speed cache. AMD has neither at stock, and I said how to fix that for a better comparison.

Servers tend to not support overclocking if I remember correctly. There's also the fact that most server operators probably don't understand overclocking very well. That's like asking why the average person doesn't overclock.

Ivy Bridge is capable of much better overclocking than its thermal paste allows. Simply getting better paste can make a 15-25% difference over the stock thermal paste below the IHS according to tests. Using solder would have been even better. Sure, it probably wouldn't have quite caught up to SB-E, but it would have been close enough that the price differences would have meant many people would have simply gone IB over SB-E unless SB-E gets some 8-core models. I'm not hoping for a new stepping or some other stupidity because there doesn't need to be one.

All it would take is switching out the paste for the fluxless solder that many of Intel's other great CPUs use. I see no reason for overclocking to get less effective since it seems to get more effective with every generation on the hardware level. If not for the paste, Ivy would be a very high overclocking CPU. Even in the modded core configuration for Bulldozer CPUs that I've been talking about, AMD wouldn't be able to keep up with Ivy. Piledriver would be needed for that.

Again, my point this whole time has been that AMD is still a viable option for high-end systems for us Tom's users. Most of us are well-versed with overclocking and know how to read a guide when necessary to do something.
 
Piledriver on AM3+ (Vishera) does not have an on-die IGP. I'm also pretty sure that some of the other things that you said here are inaccurate, at best.
 
[citation][nom]blazorthon[/nom]Piledriver on AM3+ (Vishera) does not have an on-die IGP. I'm also pretty sure that some of the other things that you said here are inaccurate, at best.[/citation]

I did not say it does. What I meant is that none of the BD-based processors did. FM1 used Stars, not BD. Whether that was due to some basic problem is unknown to me. But the 5xxx APUs do include both, so the gap no longer exists.

AMD says at every opportunity that Heterogeneous System Architecture is the future, but not including GPGPU cores in the rest of their lines is kind of hypocritical. They even justify the shared FP unit by saying GPGPU will alleviate that. Outsourcing the needed cores was a WTF!? for me at the time, as it is now. Put the money where your mouth is.

Particularly in the server space. In real life I have seen Nvidia Tesla systems (in a university and an automobile parts factory). There is a market for massively parallel computing, and it is curious and willing to explore the opportunities it offers. It would be a no-brainer if such capabilities were built in, instead of asking the general accountant for $2.5K in funds "for a video card company". That sends chills down the spine of many IT managers.
AMD has the tech, and FM2 gear has proved it can be done. The ball is in their court.

For the rest: as someone pointed out, reading stuff on the internet does not make you a specialist, but anyone can Google their way to how much Intel is investing in the 14nm node and draw their own conclusions. Officially, they budgeted $1 billion just for buildings in Ireland last May or so. Feel free to sum it up yourself.

At the moment of writing this, Google Finance puts AMD at $3.73 per share times 707.56 million shares, which rounds to $2.6 billion: still more than Intel's buildings, but R&D for the node did not come cheap.
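The back-of-the-envelope math above (share price times shares outstanding) is easy to check; here is a small Python sketch using the figures quoted at the time of writing, which are a dated snapshot, not current data:

```python
# Market capitalization = share price x shares outstanding.
# Figures are the ones quoted above (a 2012-era snapshot), used
# purely for illustration.
share_price = 3.73              # USD per AMD share
shares_outstanding = 707.56e6   # ~707.56 million shares

market_cap = share_price * shares_outstanding
print(f"AMD market cap: ${market_cap / 1e9:.2f} billion")

# Compare with the speculative next-node fab cost mentioned earlier.
next_node_cost = 5e9  # "up to $5 billion" figure from the post
print(next_node_cost > market_cap)  # the node estimate exceeds AMD's market cap
```

This lands at roughly $2.64 billion, consistent with the $2.6 billion rounding above.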

Off topic:
Blazz, trim your citations when arguing. I am pretty damn sure I sell more AMD-based systems; or do you know better?
It's just rude and lazy not to do so, and it's even worse when a lengthy quote is followed by a foot-long reply restating, more or less, what you already said before.
 
[citation][nom]mamailo[/nom]I did not say it does. What I meant is that none of the BD-based processors did. FM1 used Stars, not BD. Whether that was due to some basic problem is unknown to me. But the 5xxx APUs do include both, so the gap no longer exists. AMD says at every opportunity that Heterogeneous System Architecture is the future, but not including GPGPU cores in the rest of their lines is kind of hypocritical. They even justify the shared FP unit by saying GPGPU will alleviate that. Outsourcing the needed cores was a WTF!? for me at the time, as it is now. Put the money where your mouth is. Particularly in the server space. In real life I have seen Nvidia Tesla systems (in a university and an automobile parts factory). There is a market for massively parallel computing, and it is curious and willing to explore the opportunities it offers. It would be a no-brainer if such capabilities were built in, instead of asking the general accountant for $2.5K in funds "for a video card company". That sends chills down the spine of many IT managers. AMD has the tech, and FM2 gear has proved it can be done. The ball is in their court. For the rest: as someone pointed out, reading stuff on the internet does not make you a specialist, but anyone can Google their way to how much Intel is investing in the 14nm node and draw their own conclusions. Officially, they budgeted $1 billion just for buildings in Ireland last May or so. Feel free to sum it up yourself. At the moment of writing this, Google Finance puts AMD at $3.73 per share times 707.56 million shares, which rounds to $2.6 billion: still more than Intel's buildings, but R&D for the node did not come cheap. Off topic: Blazz, trim your citations when arguing. I am pretty damn sure I sell more AMD-based systems; or do you know better? It's just rude and lazy not to do so, and it's even worse when a lengthy quote is followed by a foot-long reply restating, more or less, what you already said before.[/citation]

Sorry, it was hard to understand your comment, my bad there.

I never said that you don't sell more AMD systems than Intel.

Intel also has the i7-3930K. If you want the best in consumer performance, you don't need to pay $1K; $600, although still a lot of money, will do it. If you only want the best single-threaded performance without the Bulldozer mods that I've mentioned, then an i5 will do that job. Heck, if you don't mind topping out at around 4GHz, the cheaper i5s can do the trick with Turbo configuration changes and a minor BCLK overclock. That's less than $200. With the core configuration mods that I've mentioned and a full-speed cache, the FX-61xx and FX-81xx CPUs can also get you top-of-the-line lightly threaded performance with some overclocking.

There's also the fact that Intel has very cheap and low end CPUs too. For example, the $52 Celeron G530, although not great, is still a good value.
 
[citation][nom]blazorthon[/nom]It doesn't matter that Tom's isn't the majority of computer users because here on Tom's, we are the majority, so we should consider this stuff!You don't really think that Intel using paste instead of fluxless solder between the Ivy IHS and CPU die lowers power consumption, do you? It has no significant impact on power consumption to have paste instead of solder. It is there to limit Ivy's performance because without it, Ivy would overclock better than SB does and would cut into SB-E's performance market. A 50% core count advantage is a lot less of an advantage when the competitor can clock higher and has slightly higher performance at the same clock frequency.You are telling me that I'm comparing stock Intel to overclocked AMD when I was not except where Intel can't overclock by more than say 5%, not enough to change the results noticeably. That I'm comparing tweaked AMD to non-tweaked Intel with both being overclocked (or not, it's still a good comparison even if you don't overclock both of them) is not a bad thing at all because Intel can't be tweaked in such ways. They already have an optimal core configuration for their architecture and they already have a full-speed cache. AMD has neither at stock and I said how to fix that for a better comparison.Servers tend to not support overclocking if I remember correctly. There's also the fact that most server operators probably don't understand overclocking very well. That's like asking why the average person doesn't overclock.Ivy Bridge is capable of much better overclocking than its thermal paste allows. Simply getting better paste can make a 15-25% difference over the stock thermal paste below the IHS according to tests. Using solder would have been even better. Sure, it probably wouldn't have quite caught up to SB-E, but it would have been close enough that the price differences would have meant many people would have simply gone IB over SB-E unless SB-E gets some 8-core models. 
I'm not hoping for a new stepping or some other stupidity because there doesn't need to be one. All it would take is switching out the paste for the fluxless solder that many of Intel's other great CPUs use.[/citation]

1. Intel doesn't care about the minority. They are targeting the majority.
2. Even if Ivy Bridge does cut close to SB-E, the fact is that the majority of users are still going to take the 6 cores over an overclocked 4 cores.
3. Replacing the paste with top-quality paste does improve CPU temperatures significantly, and it may not best fluxless solder, but the main problem is that 22nm uses 3D transistors. It turns out they don't work as well at high core clocks; Ivy Bridge is known to require a much higher voltage to reach high clocks. High voltage on such a thin, small process is going to hurt reliability in the long term. Regardless of how you put it, the 6-core SB-E is still a good buy, because an OCed Ivy can never come close to a 6-core OCed SB-E. This was also true for SB 1155 vs. the 1366 Gulftown: 50% extra cores can blow the 4 cores away easily. So Ivy will not be cannibalizing SB-E sales unless it can clock another 50% higher on top of what it can now.
4. A tweaked AMD is still a tweaked CPU; the majority are still non-overclockers. The gamer market is probably still bigger than the overclocker one. In fact, in the lower price segments, the enthusiast market is even smaller.
5. In a server environment, reliability is the priority. If overclocking could provide stability and reliability equal to stock settings, these people would have hired professional overclockers to do it for them and saved millions. They would also have demanded that board makers offer overclocking boards for the server environment. Overclocking does affect reliability.
 
[citation][nom]Tomfreak[/nom]1. Intel doesn't care about the minority. They are targeting the majority. 2. Even if Ivy Bridge does cut close to SB-E, the fact is that the majority of users are still going to take the 6 cores over an overclocked 4 cores. 3. Replacing the paste with top-quality paste does improve CPU temperatures significantly, and it may not best fluxless solder, but the main problem is that 22nm uses 3D transistors. It turns out they don't work as well at high core clocks; Ivy Bridge is known to require a much higher voltage to reach high clocks. High voltage on such a thin, small process is going to hurt reliability in the long term. Regardless of how you put it, the 6-core SB-E is still a good buy, because an OCed Ivy can never come close to a 6-core OCed SB-E. This was also true for SB 1155 vs. the 1366 Gulftown: 50% extra cores can blow the 4 cores away easily. So Ivy will not be cannibalizing SB-E sales unless it can clock another 50% higher on top of what it can now. 4. A tweaked AMD is still a tweaked CPU; the majority are still non-overclockers. The gamer market is probably still bigger than the overclocker one. In fact, in the lower price segments, the enthusiast market is even smaller. 5. In a server environment, reliability is the priority. If overclocking could provide stability and reliability equal to stock settings, these people would have hired professional overclockers to do it for them and saved millions. They would also have demanded that board makers offer overclocking boards for the server environment. Overclocking does affect reliability.[/citation]

1. Intel does care or else they wouldn't make products specifically for this minority. They don't care about us as much as the majority, but they do care (or more accurately, about our money). However, that's not the point. Whether or not they care has never been related to what I'm saying.

2. IB would cut very close to SB-E. They do overclock higher than SB does. Both suicide overclocks and tests with the paste replaced with superior paste show Ivy hitting higher overclocks than Sandy by frequency, not just performance. With both at around 4GHz, IB can use lower voltages than SB. I can understand that IB might have a lower starting point for voltage per frequency due to the smaller process node and the 3D transistors might make it so that it needs more voltage per frequency jump, but even at a little over 4GHz, IB still uses considerably less power than SB and IB can overclock farther. I don't know the voltages necessary and the power consumption with them when they are pushed to the max with some good paste or even fluxless solder, maybe they're higher, but they are still capable of them.

3. The six-core SB-E is a decent buy. The IB CPUs with fluxless solder would be a better buy because even if you overclock both, the IB CPUs have more headroom. Yes, they wouldn't get quite as high as SB-E does in highly threaded performance, but they could get very close. With LGA 1366 versus 1155, yes, 1366 still wins with the Gulftown CPUs in highly threaded performance, but not by much. SB can clock higher and has a lot more performance at the same frequency.

4. How big the enthusiast market is doesn't matter. I'm not trying to get even the average gamer to become an enthusiast against their will. I'm saying that there are ways to make AMD highly competitive with Intel and I said how they're done. Whether or not you'd try them if you used an FX CPU is your business and doesn't affect whether or not they work.

5. Just because overclocking can be done very well doesn't mean that everyone is going to do it. This is one of your own points, but just because this is a server environment, you're completely switching your stance on overclocking here. Just like you said, most people don't overclock despite it often meaning a huge improvement in performance. Servers are also often reliant on power efficiency and overclocking does reduce that. Electricity is more expensive for many businesses than it is for consumers and that is also a factor. In fact, some people do use overclocked server/workstation CPUs. If server CPUs were overclocked as far as they can safely go on stock voltage, then the power consumption issue wouldn't be a big issue for this, but again, overclocking isn't something that the majority of people do.

Why do people not use free software that is as good as or even often better than paid-for software or at least software that is cheaper than inferior software? Why do many people buy extremely overpriced computers when building their own is far cheaper and they can get more reliable components? Why do people often make many other non-optimal decisions? What's the difference between asking questions like these and why people don't overclock? Maybe they don't understand it well, maybe they don't think that it matters, maybe something else. Don't forget that most people, yes, that majority that you keep mentioning, just don't give a crap about a lot of things.
 
The majority don't have computers, and in some countries there are projects aimed at getting computers into schools; these countries have very little funding. Yes, Intel has a great CPU, but not everybody can afford Intel. This person is an engineer and that person is an enthusiast; Intel and AMD are both businesses, and I think AMD is doing a good job. We can talk about this isn't good and that isn't good, but I don't see anyone talking here who had a hand in said companies' CPUs. CPUs are used for a lot more things than just games or video editing, and the main point of any business is to sell units. There is a whole world out there, with people working at different price points. If you want to play games at max settings, then go for a CPU that suits you. I can say AMD is value for money, not just for you but for other people who don't have the funds to buy a CPU that can play games at max settings or edit a video 50 seconds quicker. Really, people talk like Bulldozer is no good at all; well, that's just wrong. It's a great CPU depending on what you want to do with it. It's still a CPU that works, although it may not work for you.
I have an Intel CPU but I need an upgrade, and I am going to buy a Piledriver. The reason is I need a faster CPU; not the fastest, but one that can do everyday tasks and play the one or two games I have. And I will say this: a Porsche GT3 is faster than a Lada, but I believe they will both get you to your destination; if you wish to get there quicker, buy a Porsche GT3. Be happy some people and companies are striving to develop technology. Both Intel and AMD can make CPUs that will blow your socks off, but who is going to waste billions just to sell their product to a handful of people? And software isn't even really at the point where it uses today's CPUs to their full power. As I said before, programming needs to step up a level; the same programming techniques have been used for ages now. Coding is of as great importance as the parts that process it...

I guess we can take BF3 as an example. Tell me why it works the same on every middle-to-high-performing CPU; that tells me any such CPU can be great for games. A CPU is not the whole story.
 
[citation][nom]blazorthon[/nom]1. Intel does care or else they wouldn't make products specifically for this minority. They don't care about us as much as the majority, but they do care (or more accurately, about our money). [/citation]

No they don't.

All that high-end stuff dressed up with skulls and such? That's basically high-end test stuff that they can make a short run of, put a skull logo on for next to nothing, help get a bit of cash back for the coffee machine/tooling, and earn some positive Facebook comments.

That's it. That gear is the test bed for the trickle-down into future generations. The Mercedes S-Class, basically. What you see in the S-Class today will be in our Fords in 5-10 years' time.

Very little to none of it is designed with PC enthusiasts in mind. They can't sell limited quasi-prototype gear to the enterprise and the like, but they know that enthusiasts have the money to waste on it, so they can clear it out rather than scrapping it. Tart it up with a dragon on the box and you are golden!
 
[citation][nom]daglesj[/nom]No they don't. All that high-end stuff dressed up with skulls and such? That's basically high-end test stuff that they can make a short run of, put a skull logo on for next to nothing, help get a bit of cash back for the coffee machine/tooling, and earn some positive Facebook comments. That's it. That gear is the test bed for the trickle-down into future generations. The Mercedes S-Class, basically. What you see in the S-Class today will be in our Fords in 5-10 years' time. Very little to none of it is designed with PC enthusiasts in mind. They can't sell limited quasi-prototype gear to the enterprise and the like, but they know that enthusiasts have the money to waste on it, so they can clear it out rather than scrapping it. Tart it up with a dragon on the box and you are golden![/citation]

That's wrong, because almost only enthusiasts make use of many of these features and the overclocking warranty that Intel now sells. That warranty has nothing to do with anyone but enthusiasts, so Intel does care, although, again, not as much as they care for the majority. Overclocking features such as unlocked CPUs, and Intel giving the LGA 2011 CPUs back serious BCLK-based overclocking, are more proof. These features aren't used much at all except by the enthusiasts they are intended for, and they probably never will be.

[citation][nom]analytic1[/nom]The majority don't have computers, and in some countries there are projects aimed at getting computers into schools; these countries have very little funding. Yes, Intel has a great CPU, but not everybody can afford Intel. This person is an engineer and that person is an enthusiast; Intel and AMD are both businesses, and I think AMD is doing a good job. We can talk about this isn't good and that isn't good, but I don't see anyone talking here who had a hand in said companies' CPUs. CPUs are used for a lot more things than just games or video editing, and the main point of any business is to sell units. There is a whole world out there, with people working at different price points. If you want to play games at max settings, then go for a CPU that suits you. I can say AMD is value for money, not just for you but for other people who don't have the funds to buy a CPU that can play games at max settings or edit a video 50 seconds quicker. Really, people talk like Bulldozer is no good at all; well, that's just wrong. It's a great CPU depending on what you want to do with it. It's still a CPU that works, although it may not work for you. I have an Intel CPU but I need an upgrade, and I am going to buy a Piledriver. The reason is I need a faster CPU; not the fastest, but one that can do everyday tasks and play the one or two games I have. And I will say this: a Porsche GT3 is faster than a Lada, but I believe they will both get you to your destination; if you wish to get there quicker, buy a Porsche GT3. Be happy some people and companies are striving to develop technology. Both Intel and AMD can make CPUs that will blow your socks off, but who is going to waste billions just to sell their product to a handful of people? And software isn't even really at the point where it uses today's CPUs to their full power. As I said before, programming needs to step up a level; the same programming techniques have been used for ages now. Coding is of as great importance as the parts that process it... I guess we can take BF3 as an example. Tell me why it works the same on every middle-to-high-performing CPU; that tells me any such CPU can be great for games. A CPU is not the whole story.[/citation]

People who don't own or use computers have no bearing on this conversation. Everyone who can afford an AMD CPU can afford an Intel CPU; Intel goes all the way down to $40 Celerons on the LGA 1155 platform. Your entire rant was about how AMD is a cheaper alternative with more value, when AMD CPUs aren't all cheaper, and value depends on your needs and on which CPUs within your budget suit them best.

BF3 does not work great on just any CPU. BF3 multi-player is one of the most CPU-bound games ever, and it runs like crap on most CPUs. Anything with fewer than four cores right now is almost guaranteed to struggle in it, and anything from AMD with fewer than six cores is unlikely to do very well (four cores for modern Intel CPUs). Not just any CPU can be great for games; the $500 System Builder Marathons prove that quite well. They show that you don't need a very high-end CPU to play most games, but not just any CPU can play them all well, or even run them all.
 
By the way, I did say middle or high-end CPU, just wanted to point that out, and maybe my point was about their game engine.

I'm not trying to justify anything when it comes to CPUs, but as I understand it, AMD is a good make of CPU; anybody who is not playing games at max settings will surely never say it isn't a good CPU. Intel is a great make of CPU. Somebody's opinion is based on their viewpoint, or based on their needs.

I will state that an FX-8150, which I can get for 85 pounds, is about 20 FPS slower in most cases when playing games at max settings. To tell the truth, I only want a CPU to play one game, and also to host games and have a better multi-player experience; I already have an Intel CPU which does its job for what I use it for. Let's not talk about cores but threads, as I find that a better angle to explain how programs work with CPUs: both AMD and Intel can process 8 threads when needed. I will accept that Intel is a better make, but AMD is far from crap.

By the way, the cheapest APU is 30 pounds, which is about 38 dollars, and it has a graphics processor built in, and that is on the internet; I am sure at a computer fair I could find it cheaper. I believe that is far better value than a Celeron at 40 dollars.

Back to BF3, my point there is: why, in all the benchmarks, does it run about the same on all Intel mid- and high-end CPUs as it does on AMD's mid- and high-end CPUs, when it's one of the most CPU-bound games? I have done a lot of reading and research, and as I stated before, their game engine is up to the times.

A Porsche GT3 (Intel) and a Jeep (AMD) have a race on a bumpy road (software): who will win? I can only be fair by pointing to the environment they work within.
 
[citation][nom]analytic1[/nom]By the way, I did say middle or high-end CPU, just wanted to point that out, and maybe my point was about their game engine. I'm not trying to justify anything when it comes to CPUs, but as I understand it, AMD is a good make of CPU; anybody who is not playing games at max settings will surely never say it isn't a good CPU. Intel is a great make of CPU. Somebody's opinion is based on their viewpoint, or based on their needs. I will state that an FX-8150, which I can get for 85 pounds, is about 20 FPS slower in most cases when playing games at max settings. To tell the truth, I only want a CPU to play one game, and also to host games and have a better multi-player experience; I already have an Intel CPU which does its job for what I use it for. Let's not talk about cores but threads, as I find that a better angle to explain how programs work with CPUs: both AMD and Intel can process 8 threads when needed. I will accept that Intel is a better make, but AMD is far from crap. By the way, the cheapest APU is 30 pounds, which is about 38 dollars, and it has a graphics processor built in, and that is on the internet; I am sure at a computer fair I could find it cheaper. I believe that is far better value than a Celeron at 40 dollars. Back to BF3, my point there is: why, in all the benchmarks, does it run about the same on all Intel mid- and high-end CPUs as it does on AMD's mid- and high-end CPUs, when it's one of the most CPU-bound games? I have done a lot of reading and research, and as I stated before, their game engine is up to the times. A Porsche GT3 (Intel) and a Jeep (AMD) have a race on a bumpy road (software): who will win? I can only be fair by pointing to the environment they work within.[/citation]

That's an extremely inaccurate thing to say.

In a game that is well threaded and CPU-bound, AMD can be as fast as Intel in many comparisons; in a game where the opposite is true, Intel's advantage can run from 30-60%, and even that is a very rough number because of the huge, varying range of situations it tries to cover.

What Intel CPU are you comparing your 8150 to, in what games, and at what settings? There are even more things to ask. Your "20 FPS slower" number is usually wrong simply because it doesn't apply to most situations.

BF3's game engine is dual-threaded, and it isn't very CPU-limited. Only the multi-player aspects (up to four threads in addition to the two used by the game engine) are highly CPU-bound.
 
I am comparing an FX-8150 to an Intel i5-2500K and upwards; I have read many articles which compare both processors. Just to make things clear, I am talking about the processor itself: let's use a compiler that isn't one-sided first of all, and then there can be a fair judgement of the processor.

An Intel processor has a bigger core, and not only that, it has more resources to help move data to and from the core, because that core was designed to run two threads at a time; that is why one core can run two threads fine. AMD has a smaller core, but two cores within the same module sharing resources, trying to do the same job.

I am comparing an 8150 to an i5-2500K in BF3. One of the main things I have read about is computer experts complaining about the compiler. There are many factors to this debate; take a real-world situation, and to best judge a CPU use a set of factors that is best able to judge CPU performance, because that's what we are really talking about. Maybe use an open compiler in your tests; most tests I see use an Intel compiler. My 20 FPS figure comes from websites that run a number of tests (FPS is for games only), and in some cases it could be more, but I took an average.

OK, I know the majority of people use Windows, and most programs use a given compiler, so really that is what tests should be based on, those factors. Maybe if I read on websites about what an FX CPU can do and how to get the best out of it, I would really find this debate helpful. Like you have said, an FX CPU can be a good CPU, and there are many ways to achieve this. The real reason I am defending AMD is because I am going to buy one for the first time, and after reading many articles, it will serve my purpose.

When I first started to read about processors before buying one, as I do before I buy most things, to see if it will serve my purpose, I thought very badly about AMD because of Intel fanboys, as I have heard many say. But through my constant reading I thought, wait a minute, this AMD is actually alright: it is made on a bigger 32nm process, and it is a new tech design compared to Intel. So the way I understand things, I have a choice to use 4 cores, one per module, or all 8 cores, 2 per module; I could even use 3 cores and only 3 modules, which works fine with most single-player games, but I really see no point in playing games on my own.

With all this data in mind, I think AMD is a better CPU: it's cheaper, it runs on a bigger process (and beats anything else on a 32nm process), it's a new tech design, and it keeps up in most cases and runs better in highly threaded programs. Will we see 6GHz processors running at stock in five years' time? I doubt it. I believe the method of programming will change; the next big winner will be a person who finds a way to split one thread amongst many cores (by the way, I haven't read much into this, but I have read that it is already being attempted). It may be more work to achieve greater parallelism, but we aren't in the stone age any more.

At the moment this is my point of view, nothing more; an Intel or an AMD will work fine for me. To be fair, most multi-player games are restricted by the speed of the server; I'm not sure you can get 60 FPS with 40-plus people on a dedicated server, though maybe I am wrong (I will add that the game I play most is ARMA 2 multi-player, and if your CPU is great but all the other parts are not up to the job, that means fewer FPS anyhow). A computer is not just a CPU.
 
[citation][nom]analytic1[/nom]I am comparing an FX-8150 to an Intel i5-2500K and upwards; I have read many articles which compare both processors. Just to make things clear, I am talking about the processor itself: let's use a compiler that isn't one-sided first of all, and then there can be a fair judgement of the processor. An Intel processor has a bigger core, and not only that, it has more resources to help move data to and from the core, because that core was designed to run two threads at a time; that is why one core can run two threads fine. AMD has a smaller core, but two cores within the same module sharing resources, trying to do the same job. I am comparing an 8150 to an i5-2500K in BF3. One of the main things I have read about is computer experts complaining about the compiler. There are many factors to this debate; take a real-world situation, and to best judge a CPU use a set of factors that is best able to judge CPU performance, because that's what we are really talking about. Maybe use an open compiler in your tests; most tests I see use an Intel compiler. My 20 FPS figure comes from websites that run a number of tests (FPS is for games only), and in some cases it could be more, but I took an average. OK, I know the majority of people use Windows, and most programs use a given compiler, so really that is what tests should be based on, those factors. Maybe if I read on websites about what an FX CPU can do and how to get the best out of it, I would really find this debate helpful. Like you have said, an FX CPU can be a good CPU, and there are many ways to achieve this. The real reason I am defending AMD is because I am going to buy one for the first time, and after reading many articles, it will serve my purpose. When I first started to read about processors before buying one, as I do before I buy most things, to see if it will serve my purpose, I thought very badly about AMD because of Intel fanboys, as I have heard many say. But through my constant reading I thought, wait a minute, this AMD is actually alright: it is made on a bigger 32nm process, and it is a new tech design compared to Intel. So the way I understand things, I have a choice to use 4 cores, one per module, or all 8 cores, 2 per module; I could even use 3 cores and only 3 modules, which works fine with most single-player games, but I really see no point in playing games on my own. With all this data in mind, I think AMD is a better CPU: it's cheaper, it runs on a bigger process (and beats anything else on a 32nm process), it's a new tech design, and it keeps up in most cases and runs better in highly threaded programs. Will we see 6GHz processors running at stock in five years' time? I doubt it. I believe the method of programming will change; the next big winner will be a person who finds a way to split one thread amongst many cores (by the way, I haven't read much into this, but I have read that it is already being attempted). It may be more work to achieve greater parallelism, but we aren't in the stone age any more. At the moment this is my point of view, nothing more; an Intel or an AMD will work fine for me. To be fair, most multi-player games are restricted by the speed of the server; I'm not sure you can get 60 FPS with 40-plus people on a dedicated server, though maybe I am wrong (I will add that the game I play most is ARMA 2 multi-player, and if your CPU is great but all the other parts are not up to the job, that means fewer FPS anyhow). A computer is not just a CPU.[/citation]

Now this post I found to be much better, thank you. A few things: I'd put a stock FX-8150 about on par with a stock i7 in highly threaded integer performance, maybe a little (10-15%) behind at the most (still ahead of an i5).

Splitting a thread between cores is actually something that I've looked into a little. From what I've read, it is extremely difficult to do and how effective it would be would depend on at least several things. A single thread isn't really parallel with multiple cores, but a single core does run several things in parallel. There are several ALUs per core that process instructions more or less simultaneously. If you want to look into it deeper, then I can probably find some good material (as could many other members of this site) for you to read.

With AMD, they might actually have 6GHz CPUs at stock within a few years, but I also doubt it. They could do it, but it seems like there are better things to do.

The problem with focusing strictly on many-core designs that sacrifice single-threaded performance for highly threaded performance is that not everything can use multiple threads. Some things simply can't. For example, some workloads have every instruction relying on data from the previous instruction, so they are extremely difficult, if not impossible, to spread across more than one thread. I suppose that if a core could run an instruction faster than it could fetch the data from the previous instruction, then two tightly linked cores such as AMD's might be able to do something with some serious branch prediction (actual CPU engineers might have something better to say here; I mostly worked with GPUs and memory, not CPUs).
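To picture the difference, here's a toy sketch (the function names and numbers are made up purely for illustration): the first loop can't be split across cores because every step needs the result of the step before it, while the second one processes each element independently and could, in principle, use as many cores as there are elements.

```python
def dependent_chain(n):
    # Every iteration reads the value the previous iteration produced,
    # so this work is inherently serial: a second core can't start early.
    x = 1
    for _ in range(n):
        x = (x * 3 + 1) % 1000003
    return x

def independent_work(values):
    # Each element is processed on its own, with no shared state,
    # so this loop could be split across any number of cores.
    return [v * v for v in values]

print(dependent_chain(10))
print(independent_work([1, 2, 3, 4]))
```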

Perhaps newer threading methods of programming will help at least most programs to use many threads. This is definitely something that we are improving on, albeit more slowly than I'd like (not that I don't realize that going slowly is easier to do and to do safely).

Yes, a computer is not just a CPU, but as I'm sure you know, the CPU is an integral part of a computer.

For you with BF3 MP, the stock core configuration and my suggested core configuration would probably perform fairly similarly, but my core config would do so while using less power. Even so, I think that you could easily increase your CPU/NB frequency to get a decent performance increase without increasing your CPU frequency. Increasing the L3 cache speed can help. If it can be brought up to the CPU frequency without stability issues, then it would have a full-speed cache like Intel has.

Since you say that you have an FX-8150, would you mind giving some benches of any of this to find out the specifics of how well they work? I know that this helps, but I've only seen for myself how much the CPU/NB frequency helps Phenom II (which was a pretty significant help), so I can't say for sure how well Bulldozer can take advantage of improved cache.

Like you said, you can choose to use one core per module or even fewer modules if you want to. AMD's modular architecture gives more choice into specializing the CPU to suit your uses beyond mere overclocking and underclocking. Bulldozer and its derivatives make for some very customization-friendly CPUs whereas Intel has been limiting what you can do (granted, they are simplifying things, but I don't consider what is lost to be worth simpler overclocking).
 
I am going to buy a Piledriver, but I have an Intel CPU. After posting I started to read deeply into coding, threads, compilers and how everything is organised; yes, some more reading would be excellent.

I guess making a thread run on two cores is a job for the best mathematicians. It's not impossible; many things have been said to be impossible, like television and man going to the moon. So, back to reading about approximation algorithms; there is a science in all of this.

I guess I need to do a little more reading.
 

blaz, that's kind of a mean thing to say to someone just being honest. I was thinking: does overclocking itself really void your warranty, or only damage caused by mistakes made while overclocking? Maybe a review of the warranty conditions is in order (for me).


I believe blaz is referring to disabling one core per module, thus freeing up whatever resources are shared between the two cores of the module. There may be benchmarks on this, but there's only one I know of, and the gains it shows are mild. blaz showed me this before. You may enjoy reading it. http://techreport.com/articles.x/21865

You really do sound like a computer engineer (or a really well-versed hardware enthusiast) by the way you talk. I hope all the info you share is accurate.

EDIT: Oops... I didn't realize there was a second page. I hope my post is still relevant.
 


Yes, it was kinda mean and I shouldn't have said it in such a way.

Overclocking voids the warranty if you tell Intel/AMD about it. They can't know if a CPU failure is caused by an overclock unless you tell them, so not telling them if you overclocked a CPU that fails means that the warranty isn't void unless you tell them that you overclocked. Intel also has been selling some cheap overclocking warranties for some K and Extreme edition CPUs, but again, simply not telling Intel/AMD that you overclocked solves the voiding issue and for free.
 
Throughout all the posts you've made, blaz, I don't think you were just being repetitive. I've noticed additional posts and reasonable arguments. An addition of references/proofs may help, though I understand that it may be tedious sorting and finding the sources of your knowledge.

Like the one about overclocking and tweaking AMD's FX-6100 and 8100 series Bulldozers that was so opposed by some, or mainly just Tomfreak, on the grounds that the majority aren't into hardware tweaking in general. Why would someone want to oppose the sharing of knowledge like this (well, unless it's information that shouldn't be shared, but that's irrelevant to this case)? If the majority of people don't do certain (good) things because they don't know about them, then teaching them, like through the Internet, is good. Just because the majority of some country is uneducated, why should we oppose someone who wants to educate them? Come on! blaz and anyone else sharing useful, relevant info (to the best of their knowledge, i.e. not willingly spreading false info) are doing others a favor.


Does it really void your warranty like that? I mean, what if you, let's say, just try out overclocking but decide to go back to stock clocks, and maybe even a lower voltage (undervolting), and then your CPU (or GPU) dies on you? (Is there any other "warrantable" reason a CPU could die, other than reasons that overclocking could accelerate?)

I'm still on the side of being responsible for your actions (overclocking) and following through with an agreement (warranty conditions). It just seems wrong to lie or keep quiet about it. Hehe... That overclocking warranty is new to me, so thank you for that info. 🙂



Pinhedd, what kind of argument was that? What I'm seeing here is that you're saying that blaz is wrong just because you have a degree and that you're better than him/her (Sorry blaz, I don't think I've ever learned of your gender and I don't want to be sexist and assume you're a guy. :lol: ).

I want to be a computer engineer too mind you (I'm going to college soon.), but if/when I graduate and even if I gain some working experience, I would never use that as a reason to show someone is wrong especially with something like this. I may use it in a way like "I've learned that it in fact works like this..."
It would be unfair of me not to point this out, as much as I hate to because I like you, blaz (not in a gay way, in case you're a guy :lol: ), but you have also used your forum credibility as a reasoning tool in this manner in previous posts, as I remember (in case I'm wrong, sorry). I didn't really like that you did so, even though it might've been a result of frustration with some unsavory commenters. Still, reasoning would've been best in my opinion, and if those people decide to stay stupid and close-minded about it, leave them be and let smarter, more mature readers decide who's right or wrong. Remember, I, and maybe others, like you because of how you help forumers like me out with useful info, not because you're a veteran of sorts here. I hope you stay that way. 🙂

The Internet could use less of that kind of talk and more of logical reasoning, idea and info sharing, and just plain casual conversation (in the right place and the right time).

Sorry for deviating from the main topic, but I think this could help improve the quality of posts we make here, and I do read through almost all the posts made in interesting threads like this. 🙂
 
A bit off topic, but I guess it's about multi-core. We were talking about coding; you mentioned that some things have every instruction relying on data from the previous instruction, so they are extremely difficult to make use of more than one thread.

I remember playing games years ago which used the same methods as games of today but used less CPU power. I am mainly talking about games for now: I used to play Enemy Territory, which ran really well on a single-core P4. The only thing that has really changed in all this time is textures and more objects, which is really only more textures.


My point is, multi-core is like a department store. In that store you have a manager, the first core, which can talk to all the other cores in a CPU; the manager only deals with data or numbers, while the second core deals with textures only (texture output to the graphics card). The third core could be AI only, and so could a fourth core, with cores 3 and 4 talking to each other based on position, but all talking to the basic program, based on a concept of stick men, if I put this in visual terms.
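The "department store" picture can be sketched in code. This is just a toy illustration with made-up subsystem names (nothing here comes from a real game engine): one worker thread per job, all reporting back to a "manager" main thread through a queue.

```python
import threading
import queue

results = queue.Queue()

def physics_worker():
    # Stand-in for a subsystem that only crunches numbers.
    results.put(("physics", "positions updated"))

def texture_worker():
    # Stand-in for a subsystem that only handles texture output.
    results.put(("textures", "frame submitted"))

def ai_worker():
    # Stand-in for a subsystem that only computes AI decisions.
    results.put(("ai", "paths computed"))

threads = [threading.Thread(target=f)
           for f in (physics_worker, texture_worker, ai_worker)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The "manager" (the main thread) collects every subsystem's report.
collected = dict(results.get() for _ in range(3))
print(sorted(collected))
```

A real engine has to synchronize these workers every frame, which is where most of the difficulty lives; this sketch only shows the division of labor.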

I have played OnLive, and I believe the program only sent me textures, or somehow sent me a movie stream. Then I thought, wait a minute: couldn't I run 8 different maps of Enemy Territory, one on each core of a CPU? I could on a server, so why couldn't I run seven, with the eighth core used for me to interact with each map or core? Was this a simple way to multi-thread? It wasn't multi-threading, but having the same program running on each core: a smaller program, with each core processing its own part of a map, all at the same time. By the way, I know that programmers already know this info; I'm just writing for the sake of writing.

I was also speaking to a friend who is a programmer, who also said it would be very difficult to split a thread between two cores, but he doesn't see why a second CPU or core couldn't be used to help with bigger calculations, though this is based on algorithms. I have stopped thinking about this, as my brain, which I could say is a CPU core, is overheating, lol.
 

I'm thinking more advanced physics features could play a part in higher CPU demands, maybe more advanced AIs, maybe some graphical techniques that require work from the CPU, and/or maybe just plain poorer coding, possibly because systems nowadays are more powerful and less restricted by performance, so there's less need to optimize the code, which sucks... They could also be in cahoots with hardware manufacturers so that people would buy more (expensive) hardware. That's a conspiracy theory though. :lol:

Not all game engines are made (coded) the same. :)



Just want to point this out: I think the word "thread" instead of "core" may fit better here, because the actual cores can handle whatever threads need to be handled, i.e. one thread is not tied to one specific core all the time.

I think all of them actually deal with data and numbers (sorry for sounding nitpicky here, hehe...). I also think the word "only" is used inappropriately because whatever the threads deal with is up to the programmer(s), though I got what you were trying to show in your example. Just trying to clarify (and I hope I'm not mistaken with what I've been saying). :)



OnLive receives your control input (e.g. mouse and keyboard), processes the game in their servers, then streams a video of your game in real time I believe. I forgot if I read this as factual or if it's just speculation I made or read. I'm thinking the former. :)

I'm not sure why you'd want to run 8 different maps of that game unless you're hosting a server similar to OnLive. I don't think it's that simple to break a game down to run on multiple threads. Running multiple instances of the game, like running multiple applications at the same time, can be handled by multiple cores. But take a program that generates the Fibonacci sequence. If I'm not mistaken, the Fibonacci sequence is a series of numbers where each number is the sum of the two numbers before it: 1, 1, 2, 3, 5, 8, 13, 21, 34, etc. You start with 1, add nothing (0) to it, and get 1 after it (1, 1); then, to get the following number, you take the latest number (1) and add the number before it (1), which gives (1 + 1 =) 2, so you now have (1, 1, 2). You do the same to get (1 + 2 =) 3, (2 + 3 =) 5, (3 + 5 =) 8, and so on. Now imagine one thread being used to add the latest number to the number before it. That thread outputs the next number, which is needed to get the number after it. This means the program is solely reliant on that one thread and can't do anything else (as far as I can see) while it waits on that thread to finish, unless there's some really low-level stuff going on at the hardware level that could be multi-threaded. I wouldn't know, though.
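The dependency is easy to see if the sequence is written out as code (a minimal sketch of the standard definition): every new number needs the two most recent results, so there is never a point where a second thread could jump ahead.

```python
def fibonacci(n):
    # Start from the first two terms, 1 and 1.
    seq = [1, 1]
    while len(seq) < n:
        # Each new term depends on the two terms just computed,
        # so this loop cannot be handed off mid-sequence.
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

print(fibonacci(9))
```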

Matrix work can be highly threaded though, as I think and have read, mainly because of the way it works: since matrices hold multiple numbers that each need to be combined with another matrix's numbers individually, you could assign each number to its own thread to be worked on. Matrices are used a lot in 3D graphics computation, hence a GPU's highly parallel (tons of cores) nature.
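Here's a tiny sketch of that independence: a 2x2 matrix multiply where every output cell is computed by its own task, which works because no cell ever needs another cell's result. (Pure Python with a thread pool, just to show the shape of it; a real GPU does this with thousands of hardware threads.)

```python
from concurrent.futures import ThreadPoolExecutor

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

def dot_cell(cell):
    i, j = cell
    # Each output cell reads only the input matrices, never another
    # output cell, so all four cells could run on different cores.
    return sum(A[i][k] * B[k][j] for k in range(2))

cells = [(i, j) for i in range(2) for j in range(2)]
with ThreadPoolExecutor(max_workers=4) as pool:
    flat = list(pool.map(dot_cell, cells))

# Reassemble the four independent results into the product matrix.
C = [flat[0:2], flat[2:4]]
print(C)
```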

My brother's a (game) programmer, and he told me it's advised to refrain from multi-threading where possible. I think that's because there's some overhead involved, and/or because if one game frame, for example, requires data from all threads/cores and one happens to lag behind for some reason, the others could be delayed as well.
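That overhead is easy to demonstrate: spawning a thread per tiny job gives the same answers as a plain loop but pays a creation and scheduling cost per thread. A rough sketch (no specific timing figures are claimed here, since they vary by machine):

```python
import threading
import time

def tiny_job(out, i):
    # A job far too small to be worth its own thread.
    out[i] = i * i

# Plain loop: no threading overhead at all.
direct = [0] * 100
t0 = time.perf_counter()
for i in range(100):
    tiny_job(direct, i)
direct_time = time.perf_counter() - t0

# One thread per tiny job: same answers, extra cost per thread.
threaded = [0] * 100
t0 = time.perf_counter()
threads = [threading.Thread(target=tiny_job, args=(threaded, i))
           for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded_time = time.perf_counter() - t0

print(direct == threaded)
```

On most machines `threaded_time` comes out much larger than `direct_time`, which is exactly the overhead argument for keeping per-job work big enough to justify a thread.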

I hope I was helpful. :)
 