AMD Trinity On The Desktop: A10, A8, And A6 Get Benchmarked!

[citation][nom]army_ant7[/nom]


Oh, so it's not about putting more modules to sleep?I'm just checking if this is what you're saying. It's about whether or not the modules actually need the Turbo Boost, or rather, that whatever measuring device it uses to read a modules utilization can only read "at best...about 60% utilization" because the disabled core carries a potential 40% utilization of whole module. Is that it? (I'm not sure if this info was in the article you gave me before. Sorry, I haven't quite read it through.) Sounds like a patch to Turbo Boost could fix this then, but alas, this mod is not officially supported. :-(EDIT: I've finished the article you gave me. http://techreport.com/articles.x/21865/1 Just for the sake of being more sure, where have you seen how Turbo Boost works? :) Also, an interesting idea is how you could force an application to use certain threads as done in the article. Hm... Do you think this could serve as workaround for Bulldozer owners who don't have mo-bos that can turn off one core per module? I'm thinking like making .bat files for their games. :-D[/citation]

Yes. Turbo on AMD's modular architecture currently works on a per-module basis instead of a per-core or per-CPU basis. Since only one core per module would now be in heavy use, each module can't hit 100% utilization to hit the max Turbo. You're also correct that we could probably use batch files and such to do this for those who don't have a motherboard with BIOS support for it, but that would mean that the inactive cores are still using power and generating heat, meaning that it would not overclock as well in this usage (although it would still be an improvement over doing nothing).
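For anyone wondering what that workaround might actually look like, here's a minimal sketch in Python using the psutil library (illustrative only; the game path is hypothetical, and the assumption that even-numbered logical CPUs map to the first core of each module should be verified against your own chip's topology):

[code]
import subprocess
import psutil  # third-party: pip install psutil

# Hypothetical game executable; substitute your own.
GAME = r"C:\Games\SomeGame\game.exe"

# Assumption: on a 4-module Bulldozer/Piledriver chip, logical CPUs
# 0, 2, 4, 6 are the first core of each module. Verify before relying on it.
ONE_CORE_PER_MODULE = [0, 2, 4, 6]

proc = subprocess.Popen([GAME])                              # launch the game
psutil.Process(proc.pid).cpu_affinity(ONE_CORE_PER_MODULE)   # pin its threads
print("Affinity:", psutil.Process(proc.pid).cpu_affinity())
[/code]

The .bat-file route discussed above would instead use Windows' built-in `start /affinity 55 game.exe` (0x55 is the bit mask for logical CPUs 0, 2, 4, and 6), which does the same thing without any extra tools.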

[citation][nom]army_ant7[/nom] Maybe AMD is just having trouble settling in with the new management/CEO. I mean with all those job cuts, something might've been shaken up in there. Also, sometimes, things could get out of order going from division to division...person to person. It could've been an unfortunate chain of events or they might just not have thought of the ideas you and others have, in time to implement. It could be a (bad) business decision. It could've also been what you've said about being bribed to be that way. :)) I have read a comment somewhere before about how Intel would have government (monopoly) issues if ever AMD's CPU division died. But we shouldn't jump to conclusions.As for them imitating the tick-tock strategy, maybe they don't want to appear like copy cats?I just had an idea right now. Maybe they can do a tick-tock strategy with CPU's and APU's. Like release a CPU then apply a die shrink and add-in graphics and release it as an APU, then a CPU again. Haha! It sounds funny and by the looks of it, unlikely since they released Trinity first with Piledriver and haven't applied a die shrink since Llano and Bulldozer...Hm... That's a thought. Would there be a possibility that we'd be surprised of a release of Vishera with a die shrink? They had some practice with Trinity on 32nm.[/citation]

Maybe AMD's management is killing AMD's competitiveness. These guys are the ones who changed the design methods yet again to inferior computer-generated designs, and really... that these architectures are able to do as well as they do despite the huge mountain of problems holding them back is a testament to their quality IMO. That idea for tick-tocking between APUs and CPUs could be a very helpful thing for AMD. AMD could make a new architecture on their CPUs and do die shrinks on their APUs. It wouldn't really be copying Intel (although it is an arguably similar concept) and it would be a very good way of giving AMD the same amount of time on a process node that Intel gets, and with it more experience.

Basically, it would give AMD a way to beta-test both new nodes and architectures in the places where they could do the most good and (theoretically) the least bad. All that AMD would then need to do is merge their CPU and APU platforms. It wouldn't be difficult, just make the CPUs compatible with the APU platform (can't do the other way around because the CPU socket doesn't have pins for display outputs and such). AMD could make a socket that can even fit an adapter for older gen CPUs, so they can keep their main advantage over Intel, inter-compatibility between generations on the same platform.

Maybe AMD's management isn't at fault and they're simply having trouble, but I just can't see it. Sure, this isn't simple technology, but their older engineers knew what they were doing and could do their job very well. When it first came out, Phenom II was a good architecture (Phenom was computer-designed, if I remember correctly, which is reminiscent of Bulldozer). Why is it that now that AMD has what I would go as far as to call a great architecture for the time, AMD handicapped it in seemingly almost every way reasonably imaginable?

With Bulldozer, I could understand that maybe AMD had trouble getting the architecture to work at first. It is, after all, a very radical change from conventional CPU architectures. However, they have been working on it for about a decade now (AMD's been working on it since at least when Athlon 64 came out back in 2003), if not even longer. They would have needed to be doing something very wrong to work on it for so long and then see how it turned out. If anything, it seemed like BD CPUs were rushed, which is a little odd considering the timeframes here. Maybe AMD only got it working more or less a year or two ago and they had to do something with it as quickly as they could. However, Piledriver shouldn't fix only some of those problems. They should do much, much more and they should do it ASAP.

I suppose that yes, we shouldn't jump to conclusions. However, no matter how I look at it, this is what it seems to be. Yes, if AMD goes under, then Intel is screwed. They have had anti-trust lawsuits going after them and they have people lining up to take Intel down should they become a monopoly. However, I don't think that AMD will go under, even on their current path.
 

tacobravo

Distinguished
Feb 24, 2010
207
0
18,710
[citation][nom]army_ant7[/nom]It could be.Not to lecture you or anything, though we could all appreciate the info just like any other leak, isn't it against your contract or something to reveal info like that? I know they won't be able to track you anyway, but still.[/citation]

Well, we have always told people about the new stuff coming out as more of an incentive to buy from us. Having the latest and greatest, and letting people know just before we sell it, is a bonus. Also, it's not confidential.
 
G

Guest

Guest
Say an i3 2100 and a discrete ATI 5670 outperforms the most expensive Trinity 99 percent of the time in gaming, which I'm pretty certain it would (assuming both the i3 + discrete gfx and the high-end Trinity are priced the same)... How would AMD convince buyers to buy their product? I just don't get the logic behind APUs on the desktop; from a gamer's perspective it is totally worthless... Hybrid CrossFire scaling is still in its infancy and so far only a select few games benefit from it... I own an aging Athlon II X2 250 and I'm readying for an i5 purchase, but these initial benchmarks are a bit depressing.

I want AMD to catch-up and succeed, EVERY Intel fanboy wants AMD to catch-up and succeed for the sake of competitive pricing, because competition lowers price...but looking at how bleak and unpromising these benchmarks are, I'm starting to lose faith in them...
 
[citation][nom]R2D3[/nom]Say, an i3 2100 and a discrete 5670 ATI outperforms the most expensive Trinity 99 percent of the time in gaming, which I'm pretty certain it would (Assuming both the i3+discrete gfx and the high end Trinity are priced the same)... How would AMD convince buyers to buy their product? I just don't get the logic behind APU's in desktop, from a gamer's perspective it is totally worthless...Hybrid crossfire scaling is still in its infancy and so far only a select few of them games only benefit from it...I own an Aging Athlon 2 X2 250 and AMD i'm readying for an i5 purchase, but these initial benchmarks are a bit depressing.I want AMD to catch-up and succeed, EVERY Intel fanboy wants AMD to catch-up and succeed for the sake of competitive pricing, because competition lowers price...but looking at how bleak and unpromising these benchmarks are, I'm starting to lose faith in them...[/citation]

Considering that the 5670 probably wouldn't outperform a Trinity A10 (except maybe the GDDR5 model), and even then, it and an i3 would be a good deal more expensive than the A10... Well, the A10 could then get 1866MHz memory, which would let it at least catch up to the 5670 in gaming performance if the 5670 did beat it, and if not, then the A10 would already be beating the 5670. Considering that both a 5670 GDDR5 and an i3 are probably more expensive than an A10 by a very large margin, it's not bad. You need to step down to a Pentium or even a Celeron for Intel to truly fight AMD in gaming performance for the money, and then you sacrifice both CPU-intensive gaming and non-gaming performance to get that win.
 
G

Guest

Guest
An i3 2100 + HD 5670 costs at least 180 USD where I live... An A10 would 'probably' cost 140-170 USD... Given the performance discrepancy between the two if you're a gamer, I don't think the Trinity is a wise choice... for heavy multi-threaded encoding/editing duties, however, it'll simply favor the AMD comfortably.
 
[citation][nom]Mike Terrana[/nom]An i3 2100 + HD 5670 costs at least 180 USD from where I live....An A10 would 'probably' cost 140-170 USD....Given the performance discrepancy between the two if you're a gamer, I don't think the Trinity is a wise choice....for heavy - multi threaded encoding/editing duties however, it'll simply favor the AMD comfortably.[/citation]

The top Llano A8s cost $120 or so. The A10s probably won't cost more than $140. The price difference means that you can get 1866MHz memory instead of 1600MHz and that would let the A10s close the graphics performance gap. Some 1866MHz memory is very overclockable and can hit 2133MHz at stock voltage, letting the A10 overtake the 5670. Of course, the 5670 could also be overclocked, but only so much and we have yet to even consider overclocking the A10.
 

nicknovikov

Honorable
May 5, 2012
2
0
10,510
[citation][nom]JiggerByte[/nom]So this means that a 'Crossfired' Trinity APU would beat ANY similarly-priced Intel (CPU+discrete GPU) ???Well at least in gaming[/citation]
 
[citation][nom]tourist[/nom]Hey blaze liano supports 1866 and you can also run 2133 in single channel, i have done so myself.[/citation]

True, I've seen it myself too. However, single channel would cut bandwidth in half. I assume that you meant one module per channel in dual channel, not single channel. Llano is slower than Trinity for gaming and would probably need more than 2133MHz to beat the Radeon 5670 cards, although 2133MHz should let Llano catch up to them.
 

army_ant7

Distinguished
May 31, 2009
629
0
18,980
In response, AMD officially adds support for up to DDR3-2133 with one module per channel, or DDR3-1866 in one- and two-module-per-channel configurations. In comparison, Llano topped out at DDR3-1600. (Update: AMD clarifies that desktop Trinity-based APUs will max out with DDR3-1866 support).
http://www.tomshardware.com/reviews/a10-5800k-a8-5600k-a6-5400k,3224-5.html

WTH?! Seriously?! :-( This will put some big limits on its performance potential, won't it?
 
[citation][nom]iamtheking123[/nom]Nvidia will just lower the price of a low end discrete card, and the world will keep spinning with Intel + Nvidia "budget gamer" builds.[/citation]

What low end discrete card? Their only Kepler card within Trinity's performance range is the GT 640 and even then, that would compete with the A10s in performance... while being so overpriced that an A10 would have a similar price. Nvidia probably won't be lowering that price any time soon. Nvidia has been overpricing their low end cards more than their high end cards (which, incidentally, are not overpriced at all) for quite a while now.

The only low end cards from Nvidia that can even come close to APUs in both performance and value would be something like the OEM GT 545 cards against the new A10s. No retail low end Nvidia cards can do it and that probably won't change anytime soon (although I won't deny the possibility, regardless of how slim it seems). If anything, I'd be looking at whether or not AMD has cheaper graphics options, be it with an AMD or an Intel CPU. For example, maybe a Llano APU plus a discrete card will have more value than any Trinity APU. If you don't mind having only two non-HTT cores, then maybe the Celeron G530 (or a similar Ivy CPU) will be an even better idea for pure gaming outside of very CPU limited games.

Granted, Nvidia has cards that can perform as good as or better than the APUs, but the price is not right on any of Nvidia's retail, low-end cards. The GT 640 example is just one among many. Heck, even the GTX 550 TI is cheaper than the GT 640 in most cases right now, yet it is significantly faster (granted, it's incredibly less power efficient than the GT 640).
 

army_ant7

Distinguished
May 31, 2009
629
0
18,980


Oh, it must be a motherboard support thing then? It's interesting that Llano can actually go that high. But to go along with my last comment: that means Trinity's mem. controller is not only inferior to Llano's in terms of scaling, but also in max speed?!
 
[citation][nom]army_ant7[/nom]http://www.tomshardware.com/review [...] 224-5.htmlWTH?! Seriously?! :-( This will have some great performance potential limitations won't it?[/citation]

Memory bandwidth is the bane of very fast IGPs. Look at that scaling... On Trinity, it's almost linear from the start (granted, as with all such bottlenecks, the scaling has diminishing returns as you scale it upwards) at DDR3-800 all the way through DDR3-1866. DDR4 should alleviate this incredibly when it comes out, but like so many other technological advancements, there are delays after delays after delays...

Heck, with DDR4, you get 64 bits per module instead of a channel topology, so four modules double the bandwidth of dual-channel DDR3 even at the same frequency. And it won't be at the same frequency, because DDR4 will have a much higher frequency I/O bus, so it's more than a doubling in bandwidth over current dual-channel DDR3 systems such as Llano.
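As a rough sketch of that arithmetic (peak theoretical numbers only; the comparison assumes four 64-bit DDR4 modules against two 64-bit DDR3 channels, as described above):

[code]
def peak_bandwidth_gbs(data_rate_mts, bus_width_bits, lanes):
    """Peak theoretical bandwidth in GB/s across `lanes` channels/modules."""
    return data_rate_mts * 1e6 * bus_width_bits / 8 * lanes / 1e9

# Dual-channel DDR3-1866 (Trinity's official maximum)
print(peak_bandwidth_gbs(1866, 64, 2))   # ~29.9 GB/s

# Four point-to-point DDR4 modules at the same 1866 MT/s: twice as much
print(peak_bandwidth_gbs(1866, 64, 4))   # ~59.7 GB/s
[/code]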

IMO, since they were making a new socket anyway, AMD could have added in either another two channels for a quad channel system or thrown in a 128 bit GDDR5 connection (or better yet, a 64 bit XDR2 connection if they can get Rambus to play nice) and let board makers throw on four 256MB GDDR5 (or two 512MB XDR2) chips next to the APU socket. Heck, there are several other things that AMD could do to alleviate the memory bottle-neck and benefit from the performance increase of doing so. Like with Bulldozer's current and future implementations, AMD is shooting themselves in the foot and I'm not sure why they would do this.
 
[citation][nom]army_ant7[/nom]Oh, it must be a motherboard support thing then? It's interesting that Llano can actually go that high. But along with my last comment. That means Trinity's mem. controller is not on inferior to Llano's in terms of scaling, but also in max speed?![/citation]

Trinity's controller is less efficient than Llano's controller. Basically, it gets less bandwidth than Llano does with memory on the same interface and same frequency. However, it is more stable. Llano doesn't seem to be capable of using more than one module per channel when at 2133MHz, but Trinity might be capable of it (although not officially). Trinity can probably use higher memory frequencies than Llano can with the same module count (Trinity can probably hit higher frequencies stably with only one module per channel too), but the current motherboards are, like I've said, not complete yet and don't seem able to do it just yet.

This problem will probably be fixed before launch. What worries me is how AMD took a controller, Llano's, that was already over 25% less efficient than Intel's controller in SB/IB and made one that is even less efficient, even if it can reliably go to higher frequencies. This means that except for very high frequency RAM, Llano is better with memory bandwidth than Trinity. Trinity seems to be faster than Llano despite this disadvantage, but nonetheless, it's still somewhat worrying. If the problems somehow go unfixed, then Trinity might be able to be met by Llano A8s running higher frequency RAM (out of official specifications, but hey, it can be done safely anyway)...
 

army_ant7

Distinguished
May 31, 2009
629
0
18,980


I have found interest in DDR4 and its supposed point-to-point architecture (not sure if "architecture" was the word). I heard, though, that it might still come out in 2014 or late 2013? My memory is vague so I'm really not sure about that.

Do you mind informing me a bit here, since I've tried looking up info on this before? I remember reading that DDR3 doesn't really improve performance over DDR2, and the same with DDR2 over DDR, except for speed/bandwidth. But what I really just want to know is if increasing the clock rate but proportionally loosening timings would increase theoretical/max performance, like going from DDR3-800 at 5 CAS to DDR3-1600 at 10 CAS (I know there are more timings aside from CAS, but just for the sake of the example). I'm thinking it would, but what effect would the timings have? I know you could increase voltage for tighter timings/more clock rate, but how much performance would that yield, or how much performance would you lose if you keep your timings loose? I'm guessing the answer would be situation-based.

I have researched some about XDR and Rambus. Is XDR2 really that superior? Are there any downsides to it? Any limitations?

BTW, blaz, you did read the update that I posted from the article right? That Trinity would be limited to DDR3-1866 regardless of one or two modules per channel as it seems.
 


As you correctly suggested, the right word in this context is not architecture; the correct word is topology, if I remember correctly. Last I read about it, DDR4 has been further delayed into late 2014 to early or even mid 2015. I make no guarantees that this is when it will come out, but it's what I've read.



The performance boost from faster memory strongly depends on the workload. Some workloads scale performance pretty much linearly with increased memory bandwidth so long as the rest of the system can keep up, but some don't benefit from more than normal memory bandwidth at all. A lot of consumer workloads (such as gaming) tend to not benefit much at all, so people tend to misunderstand the subject and assume that just because faster memory doesn't help gaming, it mustn't help any other workloads at all. Things such as compression/decompression, some types of folding, AVX accelerated workloads, and rendering can benefit greatly from having faster RAM. Graphics also gobbles up pretty much as much bandwidth as it can get, until there is more bandwidth than the GPU can keep up with. GPUs are extremely reliant on memory bandwidth.



Loosening the latency means that you can usually hit higher frequencies, but you need to loosen the timings more than linearly with linearly increased frequencies. I.e., with CAS 5 DDR3-800, you might need something such as CAS 13-15 (if not even higher) to hit DDR3-1600 (assuming that other timings are also increased similarly). This is because if you only double the timing when you double frequency, then the latency stays the same and you now push the memory twice as hard on the frequency. This would probably require higher voltage. How much your latency and bandwidth (both are results of the timings, frequency, and the APU itself) affect performance would depend on the application. Yes, an easier, yet accurate, way to answer your question would have been to simply say that it is all situation-based, but I think that this is more helpful.



Yes, XDR2 really is that much better than GDDR5. It can reliably run with a data transfer rate roughly twice as fast as similar GDDR5 chips while using only a little more power per chip, meaning that it can use half as many chips and still save power (two such XDR2 chips should use less power than three GDDR5 chips, let alone compared to four GDDR5 chips that should have similar performance).

I did read the update. Keep in mind that Llano only officially supports up to 1600MHz or so, yet it can reliably run (albeit with only one module per channel) at 2133MHz and I've heard of people using even faster RAM. If AMD trusts Trinity enough to give it a higher official maximum, then it can probably go higher than Llano (granted, this is not a guarantee) if it has a proper motherboard.
 

army_ant7

Distinguished
May 31, 2009
629
0
18,980
Well, though I did think later on that how many (mega)transfers there are does increase bandwidth (I didn't think about it as bandwidth, but I guess it is), I thought latency is what really matters (the number of clock cycles a certain action, determined by the specific timing, takes). I never thought of it as more (mega)transfers but the same latency when clock rate and timings are kept proportional. And though I have learned some concepts of how DDR RAM works, I probably don't know everything about how it works.
I do remember reading before that DDR, DDR2, and DDR3 haven't really changed in performance, but rather increased memory densities and lowered voltages. I think it's because they've kept the same latencies. Do you know anything about this, or whether there's any truth to it?


Oh, so those are unofficial speeds. I didn't know you could push the memory controller past its spec. BTW, do you know anything about how we're able to use DDR3-1600 and higher RAM modules on SBs and IBs even though http://ark.intel.com/products/65647/Intel-Core-i5-2550K-Processor-(6M-Cache-up-to-3_80-GHz) says the supported "Memory Types" are "DDR3-1066/1333"? I remember looking this up and someone (or some people) proposed the basic concept (which I neglected to think of) of the difference between clock rate and transfer rate: that the 1066 and 1333 are listed as clock rates, which double, due to it being DDR, to 2133 and 2666. That sounded plausible, but as I only realized now as I'm typing this, in "DDR3-1066/1333" the number refers to the transfer rate in that format (using "DDR3-"). I also just realized now that the supported "Max Memory Bandwidth" listed is 21GB/s, which is 1333 (megatransfers) x 64 (bus width in bits, correct me if I'm wrong) / 8 (number of bits in a byte) x 2 (dual-channel) per second = ~21GB/s. So what Intel listed might just really be the memory speeds it officially supports. So when we use DDR3-1600 or faster, are we using unofficial speeds? I also vaguely remember reading that motherboards implement a certain chip or something to maybe multiply something with the memory bus. Could you clear this up for me? Hehehe... Sorry.
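That arithmetic does work out; here's the same calculation as a quick Python sketch for anyone who wants to plug in other speeds:

[code]
data_rate_mts   = 1333      # DDR3-1333, in megatransfers per second
bus_width_bytes = 64 / 8    # one 64-bit channel
channels        = 2         # dual channel

peak_gbs = data_rate_mts * 1e6 * bus_width_bytes * channels / 1e9
print(f"{peak_gbs:.1f} GB/s")   # ~21.3 GB/s, matching the listed 21 GB/s
[/code]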


Thanks for all that info! It's great how you take your time to answer questions and share info, so thanks!
 

army_ant7

Distinguished
May 31, 2009
629
0
18,980


Tourist, I don't mean to offend, but I believe it's Llano with a double "L." I hope I just didn't catch a typo there 'coz I recall seeing you, or maybe someone else, use "I" in it. :)

Would saying that it would be quad-channel if they implement four RAM slots be correct? I'm not sure myself because I'm not sure if they'd apply the point-to-point topology (thanks for the correction, blaz) per module or per chip. I'm not even sure if DDR3 and below modules already have their chips accessed in parallel.
 

SmartGeek

Honorable
Jun 3, 2012
105
0
10,710
Nice review! Comprehensive explanations! Various benchmarks!
Overall, it is perfect!
Keep up the good work, because you guys are the only ones who help people buy the best components for themselves.
 

jimmyd1964

Honorable
Jun 14, 2012
28
0
10,540
I have seen some benchmarks where the new APUs from AMD have been in a Hybrid CrossFire mode and it increased graphics performance by almost 40%. It depends upon which card you do it with. I think they were using a 6670 or 6770 and it increased frame rates from 90fps to 140fps on the game they were testing. Pretty impressive.
 
[citation][nom]tourist[/nom]Also blaze would faster memory help games with large textures? I would think it would help in swapping when the video card runs out of memory.[/citation]

Higher bandwidth should help more as texture size increases if the system RAM is being used by the GPU.

[citation][nom]army_ant7[/nom]Well, though I did think later on how much (mega)transfers there are does increase bandwidth (Didn't think about it as bandwidth, but I guess it is.), I thought latency is what really matters (the number of clock cycles it takes for a certain action, determined by the specific timing, delays for). I never thought of it as more (mega)transfers but the same latency when clock rate and timings are kept proportional. And though I have learned some concepts of how DDR RAM works, I probably don't know everything about how it works.I do remember reading before that DDR, DDR2, and DDR3 haven't really changed in performance, but increased memory densities and lesser voltages. I think it's because they've kept the same latencies. Do you know anything about this or any truth to this? [/citation]

Latency timings are measured in clock cycles. A CAS of 5 means that a CAS takes 5 clock cycles. If you have a CAS of 5 with a data rate of DDR3-800, then to get how long that CAS is measured in units of time (such as nanoseconds), you need to first divide the data rate by two in order to account for DDR memories (excluding GDDR5) having an actual I/O bus frequency of half of their data rate. Then, divide the CAS number by that result and multiply it by 1000. You now have the CAS latency as measured in nanoseconds.

DDR3-800 CAS 5 = 12.5 nanoseconds
DDR3-1600 CAS 10 = 12.5 nanoseconds
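A tiny sketch of that conversion (purely illustrative):

[code]
def cas_latency_ns(data_rate_mts, cas_cycles):
    """CAS latency in nanoseconds: cycles divided by the I/O clock in MHz,
    where the I/O clock is half the DDR data rate."""
    return cas_cycles / (data_rate_mts / 2) * 1000

print(cas_latency_ns(800, 5))    # 12.5  -> DDR3-800  CAS 5
print(cas_latency_ns(1600, 10))  # 12.5  -> DDR3-1600 CAS 10
print(cas_latency_ns(1600, 9))   # 11.25 -> a common real DDR3-1600 kit
[/code]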

By doubling both the frequency and the timings, the latencies have all remained the same as measured in real-time, but you now have double the frequency and that means double the bandwidth per channel. So, the RAM has the same latency, but must work harder on the frequency because it now has identical latencies with double the bandwidth. Unless you increase voltage significantly, the memory will almost definitely not be stable if you double both the timings and the frequency. In order to trade latency for bandwidth, you must increase latencies more than you increase the frequency if you don't want to increase voltage to let the RAM deal with the increased workload. You can look at the DDR3 wiki for more info, it has a lot of information about this.

Between the DDR generations, latencies have improved, although not as much as bandwidth has. Bandwidth is constantly improving and it improves fairly quickly. The way that we increase RAM bandwidth in each new generation is by increasing prefetch size and data rate. The actual frequency of the RAM cells in each chip is usually between 100MHz and 200MHz and this is why latencies do not improve much. We increase bandwidth easily by working around this, but latencies only improve marginally as we improve the technology processes used to manufacture the DRAM cells in each chip. This limitation comes from the DRAM cells themselves. A DRAM cell has two main components, one capacitor and one transistor. The capacitor can only be read/written to so often because it is much slower at transmitting and receiving a charge than a static cell (such as SRAM, made by large but fast flip-flops of four or more transistors). The capacitor also can't hold the charge for more than a fraction of a second, so it must be refreshed very often. This is what makes it dynamic and volatile. If it isn't refreshed, it fails to retain its stored data.



[citation][nom]army_ant7[/nom] Oh, so those are unofficial speeds. I didn't know you could push the memory controller past its spec. BTW, do you know anything about how we're able to use DDR3-1600 and higher RAM modules on SB's and IB's though it says on http://ark.intel.com/products/6564 [...] -3_80-GHz) that the supported "Memory Types" are "DDR3-1066/1333." I remember looking this up and someone (or some people) proposed the basic concept (which I neglected to think of) of the difference between clock rate and transfer rate. That the 1066 and 1333 listed in clock rate which doubles, due it being DDR, to 2133 and 2666. That sounded plausible, but as I only realized now as I'm typing this is that in "DDR3-1066/1333" the number refers to the transfer rate in that format (using "DDR3-"). I also just realized now that the supported "Max Memory Bandwidth" listed is 21GB/s which is1333 (megatransfers) x 64 (bus width in bits, correct me if I'm wrong) / 8 (number of bits in a byte) x 2 (Dual-channel) / s(econds) = 21GB/s. So what Intel listed might just really be the memory speeds it officially supports. So when we use DDR3-1600 or faster, are we using unofficial speeds? I also vaguely remember reading that motherboards implement a certain chip or something to maybe multiply something with the memory bus. Could you clear this out for me? Hehehe... Sorry.Thanks for all that info! It's great how you take your time to answer questions and share info, so thanks![/citation]

Yes, when we run RAM that is out of a memory controller's specifications, we are using unofficially supported memory configurations. With Intel, only up to DDR3-1333 (aka PC3-10600 and incorrectly referred to as PC3-10666 and such by some people) is officially supported per memory channel on their Sandy Bridge CPUs. You can also increase the memory bandwidth (by increasing the memory frequency) far more than the official max of 21GB/s from dual channel DDR3-1333.

About that chip, I'm not really familiar with that... You can simply change the memory multiplier in the BIOS. It is like changing the CPU multiplier to overclock or underclock the CPU. The memory multiplier is what the BCLK (or equivalent bus for your platform) is multiplied by to get your memory frequency. For example, on my Phenom II computer, the BCLK defaults to 200MHz and my memory multiplier defaults to ~6.66. This, to the best of my knowledge, has nothing to do with any chip except the BIOS EEPROM chip, the chipset, and the CPU.
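Using the numbers from that Phenom II example (values as described in the post; exact divider steps vary by platform, so treat this as a sketch):

[code]
bclk_mhz       = 200     # base/reference clock on the example system
mem_multiplier = 6.66    # memory multiplier (approximate, per the post)

data_rate_mts = bclk_mhz * mem_multiplier   # effective DDR3 data rate
io_clock_mhz  = data_rate_mts / 2           # actual I/O bus clock
print(f"~{data_rate_mts:.0f} MT/s (DDR3-1333), I/O clock ~{io_clock_mhz:.0f} MHz")
[/code]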

Heh, any time. If you want to read a little more about the memory, then this link has some moderately advanced info on DDR3 and GDDR5:

http://www.tomshardware.com/forum/314045-30-question-gddr5-memory
 

tomfreak

Distinguished
May 18, 2011
1,334
0
19,280
I am actually losing hope on Nvidia dropping their decent low-end card (does it even exist?) prices down to $80-99.

The Radeon 5770/6770 was able to drop to that price at a 170mm2 die size. I assume in the future, when 28nm becomes cheap to manufacture, the 7770 could reach that price as well. As for Nvidia, I am not quite sure; it seems anything below the GTX *660 is going to be more overpriced than the ATI/AMD ones.
 
[citation][nom]Tomfreak[/nom]I am actually losing hope on Nvidia dropping their decent low-end card(does it even exist?) price down to $80-99.Radeon 5770/6770 is able to drop that price @ 170mm2 die size. I assume in future when 28nm become cheap to manufacturer 7770 could reach that price as well. As for Nvidia I am not quite sure, it seems anything below GTX *660 is going to be overprice than ATI/AMD ones.[/citation]

Honestly, I never had much faith in Nvidia pricing low end cards competitively against AMD. Nvidia hasn't been making their low end and entry-level cards very competitive with AMD/Ati alternatives. However, the 7770 is not a low end card IMO... It is a lower mid-range card IMO. Despite that, sure, it could reach your suggested price range.

Nvidia does have plenty of low end cards. They are mostly GT instead of GTX, but they are there. GT 640, GT 540, GT 430, and more, especially if you include OEM cards. They tend to be very expensive for their performance. For example, the Kepler GT 640 (there are three versions of the GT 640 and I think that only one of them is Kepler, the other two are Fermi if I remember correctly) is in the 7750's and 7770's price ranges, yet performs more like a 6670.

Nvidia has low end cards; Nvidia does not price their low end cards well. Nvidia doesn't even price many of their lower mid-range cards properly. For example, only recently did the GTX 550 TI have a proper price, yet even now, it's still usually not as good as its performance competitors.

Nvidia shouldn't have some exorbitant manufacturing costs to blame... Nvidia just decided not to price the cards well. The Kepler GT 640 has a tiny GPU and very few other parts. Then there are extreme examples such as the GTX 670, which has a tiny reference PCB, only eight RAM chips, and a small GPU... However, the GTX 670 has a great price for its performance, unlike Nvidia's lower end options.

Nvidia just doesn't seem to really care about the market, but they don't want to look like they've abandoned it completely, so they at least make cards for the market, even though they generally aren't worth their prices IMO.
 