AMD CPU speculation... and expert conjecture

Page 201 - Seeking answers? Join the Tom's Hardware community: where nearly two million members share solutions and discuss the latest tech.
Status
Not open for further replies.

If ESRAM is conceptually very similar to eDRAM, then we could take the Xbox 360's Xenos as an example, as that had eDRAM soldered onto the GPU itself. But I think AMD is still going to be stuck with DDR3 for Kaveri. Then again, I am no expert on RAM myself when it comes to these "other" types.
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


Naturally it could cost up to 50% more; it's 3 sticks instead of 2. If it costs less than 50% more, you'll already be gaining. Nevertheless, if Kaveri goes to FM2+, which will use the A85X chipset, it is almost certain to come with only 2 channels of DDR3.

So I wonder what kind of ESRAM or L3 they will use... if any...
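A quick sketch of the per-stick arithmetic behind that "up to 50% more" point. The prices here are made-up placeholders, not real kit prices:

```python
# If a triple-channel kit costs less than 50% more than a dual-channel
# kit, you come out ahead per stick. Hypothetical numbers only.
def price_per_stick(kit_price, sticks):
    return kit_price / sticks

dual = price_per_stick(100.0, 2)    # hypothetical 2-stick kit at $100
triple = price_per_stick(140.0, 3)  # hypothetical 3-stick kit at $140 (+40%)

# +40% total cost for +50% more sticks -> cheaper per stick:
print(f"per stick: dual={dual:.2f}, triple={triple:.2f}")
```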

 
Interesting thread, but I hate Winblows 8 more than I hated the "Millennium" edition. I'm forced to use it at school -_- without a mouse and keyboard >.<
I haven't kept up to date about DDR4, but the chatter here strongly suggests we're in for another era of high-priced RAM. Better hope it's not another run like RDRAM was.
 


BEHOLD, THE Z790i WITH A DDR4 MEMORY CONTROLLER! The switch to DDR4 will be another annoyance and performance upgrade like the switch to DDR3, à la ye olde nForce and custom northbridges.
 

The ESRAM is an L3 for the GPU+CPU, like the block on Intel's GT3e. It will be SRAM on the die, though, compared to DRAM on a bus for Haswell, so the latency will be L3-like instead of main-memory-like. Basically, the Xbox One will just have a 32MB shared L3 between CPU and GPU.
 

8350rocks

Distinguished


I also think AMD is delaying for FD-SOI since the conversion is not radically difficult. This would also give them a large process advantage in performance/watt to help close the gap.

Additionally...if the GF roadmap for FD-SOI is to be believed, then AMD could be working on 14nm in 2H 2014 and 10nm shortly after Intel launches 14nm Broadwell in 2015. If they're at 28nm, and have the opportunity to skip 20nm to go to 14nm FD-SOI instead...I would think they'd be fools not to go all the way to 14nm if production can ramp in time for the fall release schedule in 2014. Though, as bold as it is, I wouldn't be surprised if 14nm FD-SOI isn't ready until early 2015.
 

8350rocks

Distinguished


*sigh* I honestly need a new laptop, but I procrastinated getting one while win7 was still available on new OEM machines. I don't want to waste money on a Win8 OEM license built into the cost of a laptop either.

Anyone know where I can buy an intact OEM AMD laptop chassis? I'll either put win7 on it, or Ubuntu 13.04. I honestly would like to get one of the Kabini models...but I just don't see any reason to get windows 8, I'll wipe the HDD on that thing so fast M$'s head will spin, and it's a waste of time, effort, and money to pay for software I would rather pour gasoline on...and have to remove from my PC.

Perhaps someone makes an AMD laptop with Ubuntu on it?
 

I would find a decent Kabini laptop with W8, wipe the HDD and install Glorious Mint or Ubuntu! System76 only uses Intel, so no go there.
 

8350rocks

Distinguished


Yes, I actually googled the subject first and found nothing from someone using AMD. Even HP, who offers Ubuntu natively on some PCs, seems reluctant to put Ubuntu on anything AMD except the E1/E2 integrated CPU models... (Yuck... no thanks).
 

8350rocks

Distinguished


It's not really a start menu. I played with the "new" Win8 and its start button; it's actually just a "start button shortcut" to the same "start menu" as before, which you must use in exactly the same way as before the "new start button".

If I am going to go through this BS with Winblows 8 and they want me to jump through hoops to re-learn winblows all over again...I will just go to Ubuntu because, frankly, it's better anyway. The machines I have on Win7 64 will keep running it, and all my new machines will either be Win 7 64 or Ubuntu 64...(thank god I had the wherewithal to buy a full version of Win7 x64)
 




I have used Windows 8 myself, and it feels quite fast, but I'll hold off until it is fully mature and M$ stops pulling B$. It just feels so tablety... Sure, I could dish out a few bucks for Start8 or use some free alternative like Pokki, but I honestly feel fine running Vista Ultimate and 7 Pro x64 with their benefits.
 
There is really no reason to switch to Win8 if you have 7, but if you buy a computer with Win8, it's a good OS and better in many ways than 7. It's just slightly different; not even much difference, really. There was a bigger difference from Windows XP to Windows Vista in terms of getting anything done.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860
If you like the technical aspects of silicon technology, there are some interesting presentations to be found here.

http://www.soiconsortium.org/fully-depleted-soi/presentations/

one from GF

http://www.soiconsortium.org/fully-depleted-soi/presentations/june-2013/Shigeru%20Shimauchi%20-%20SoC%20Differentiation%20FDSOI%20Japan.pdf

AMD's biggest delay could have been deciding not to go bulk, but instead waiting for GF.
With GF's track record, AMD is wise to start with the smaller chip (i.e., Trinity @ 246 mm² vs. FX @ 315 mm²), since AMD is likely paying per wafer now instead of per "usable chip".
 


Their backhanded compliments at the end were funny. They absolutely could not write a positive review without trying to shoehorn in an Intel recommendation.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


There is one difference: this review has drawn the ire of Intel fanboys, who are attacking the reviewer for saying something bad about Haswell.

http://forums.anandtech.com/showthread.php?t=2327849
 

viridiancrystal

Distinguished
Jul 27, 2011
444
0
18,790


That conclusion page could be torn apart piece by piece for how selective, contradictory, and sometimes straight-up wrong it is. It is terrifying coming from a professional reviewer.

[This is where I would post something relevant about steamroller, if there was anything remotely new to post.]
 

jdwii

Splendid


The more I play games, the more I hate it. I just realized that no FPS limiter works on W8 as of now, which I want for The Sims 3. Having Windows 8 on my PC feels like a punishment.
 

jdwii

Splendid
When writing a paper in college, I remember to always look back at AnandTech so I know what a biased article sounds like. After taking college writing classes, you know how bad that site is. Yes, Tom's Hardware has a lot of typos in its normal articles, but it's not biased; they just need an editor. Hell, I'm sure people would help them for free.

Spelling and grammar can be fixed; a biased article skews the writer's whole view.
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


No need for that; 28nm FD-SOI is already a half-node shrink compared to 28nm bulk/PD-SOI. (EDIT: meaning it is more like a 24-22nm shrink, not 28nm, compared with the current 32nm bulk or PD-SOI... almost a full node, ~50%.) (EDIT 2: Besides, they still have T-RAM (which is only 1 transistor + 1 thyristor, compared with 6T) for the likes of L3 and ESRAM; this could give an additional shrink for the large cache structures, and it is better to debut at (supposedly) 28nm than at 14nm, the latter itself being a hybrid of 20nm BEOL + 14nm FEOL. This T-RAM could give another ~10% shrink factor to the supposed 28nm FD-SOI, making chips comparatively a little smaller than with Intel's 22nm FinFET.)

"28nm Bulk to 28FDSOI migration will have half node advantage vs full node migration to 20nm bulk"
(slide 5)
http://www.soiconsortium.org/fully-depleted-soi/presentations/april-2013/Subramani%20Kengeri%20-%20GF%20-%20SoC%20Differentiation%20FDSOI.pdf

more advantages
http://www.soiconsortium.org/fully-depleted-soi/presentations/june-2013/David%20Jacquet%20-%20FD-SOI%20workshop%20in%20Kyoto.pdf

So this could explain the >30% (almost 50%) shrink evidenced by that supposed SR die, which is NOT SR... but a module with 4 threads per module...

https://securecdn.disqus.com/uploads/mediaembed/images/500/9389/original.jpg

... probably "excavator" at 28nm FD-SOI which is close 50% smaller, that is why so many things doubled... if that is not a FAKE... but a superimposition on a PD module, of a EXV module alone, forgetting to shown the corresponding L2 SRAM shrink, since the right side the "pink" structures of the L2 arrays is the same size of PD...

EDIT 3 :
Cheaper and much faster to develop than FinFET on bulk... up to 50% at around the 20nm sizes... they've gotta be kidding me lol

http://www.advancedsubstratenews.com/2012/11/ibs-study-concludes-fd-soi-most-cost-effective-technology-choice-at-28nm-and-20nm/
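For context on those shrink percentages: die area scales roughly with the square of the linear feature scale, so even a modest linear shrink gives a much larger area cut. A tiny illustrative calculation (the scale factors below are arbitrary examples, not GF's numbers):

```python
# Area saved for a given linear scale factor: area ~ (linear scale)^2.
def area_reduction(linear_scale):
    """Fraction of die area saved when linear dimensions scale by this factor."""
    return 1.0 - linear_scale ** 2

# A ~30% linear shrink (scale 0.7) already cuts area by about half:
for scale in (0.9, 0.8, 0.7):
    print(f"linear x{scale} -> area saved {area_reduction(scale):.0%}")
```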
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


The difference between SRAM and eDRAM is basically the kind of "memory cell", though MoSys marketed its eDRAM as "SRAM" because it uses specially tuned transistors with the capacitor. An "SRAM" cell is usually 6T, six transistors per cell (that is why it is so bulky), though there are versions with 3T and 4T (AMD and others hold patents), with 8T (AMD uses it), and even with 10T (AMD patents)... while an eDRAM cell is 1 transistor + 1 capacitor.

Being a cache means it is completely "tagged" for the addressing/access style of caches. Those "tag files" are usually 6T (or more) SRAM structures in a CAM (content-addressable memory) format, and this is independent of the "data storage" cell format; that is, there can be caches with tag files in CAM/SRAM structures while the "memory per se" is eDRAM, with 1 transistor + 1 capacitor per cell.

This is the case with Intel's Crystal Well: the tag files are SRAM CAMs "on die", and they are huge tag files, since the amount of memory they address is also huge compared with any normal cache of a couple of MB. So in CW the "memory cells themselves" are outside the die and are DRAM-like (eDRAM = 1T + 1C).

http://translate.googleusercontent.com/translate_c?depth=1&hl=pt-BR&rurl=translate.google.com&sl=ja&tl=en&u=http://pc.watch.impress.co.jp/docs/column/kaigai/20130610_602912.html&usg=ALkJrhixtCw4lQinRZXjNXye9qA8clxL2w

http://pc.watch.impress.co.jp/img/pcw/docs/602/912/8_s.jpg
(IMAGE)

IMO it's not the best approach, neither in performance, nor in space wasted, nor in power... it would be better to have everything in the same place, but Intel is feeling arrogant, I think...

AMD's ESRAM is different... it's all in the same place, on die... and it doesn't have bulky CAM structures. It's a pseudo-cache with the tag information in the same "cell structures" as the data... I think it's even in the same row address as the data; that is, the GPU, which controls it on behalf of the CPU+GPU, accesses it more like external DRAM than a cache... it's very clever, no wonder MSFT chose it for the Xbox One...

http://www.vgleaks.com/wp-content/uploads/2013/02/block_gpu.jpg?950834

There are drawbacks in either case: Intel's L4 privileges the CPU and compute parts more, while ESRAM is clearly better for graphics. But Intel's greater disadvantage is being "off-die", while AMD's is being relatively too small.
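For anyone curious how the tag/data split described above actually works, here is a toy direct-mapped cache in Python: the tag array stands in for the SRAM/CAM "tag files" and the data array for the memory cells themselves (which could be eDRAM, as in Crystal Well). Purely illustrative; the sizes and names are mine, and real caches are set-associative and far more complex:

```python
LINE_SIZE = 64   # bytes per cache line
NUM_LINES = 512  # 512 lines * 64 B = 32 KB toy cache

tags = [None] * NUM_LINES  # the "tag file" (SRAM/CAM in hardware)
data = [None] * NUM_LINES  # the "memory per se" (could be eDRAM)

def split_address(addr):
    """Break an address into (tag, index) for a direct-mapped lookup."""
    line = addr // LINE_SIZE
    return line // NUM_LINES, line % NUM_LINES

def lookup(addr):
    """Return the cached line on a hit, or None on a miss."""
    tag, index = split_address(addr)
    if tags[index] == tag:  # the tag compare is the CAM's job
        return data[index]
    return None

def fill(addr, line_bytes):
    """Install a line fetched from main memory."""
    tag, index = split_address(addr)
    tags[index] = tag
    data[index] = line_bytes
```

For example, after `fill(0x1000, line)`, `lookup(0x1000)` hits, while an address that maps to the same index but carries a different tag misses and would have to go to main memory.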

 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


That is usually what fanboys do (edit)... and they are very spoiled. It would be better for review sites to refrain from presenting anything "shoehorned"; that would be pure journalism, like
http://translate.google.com/translate?hl=pt-br&sl=ja&tl=en&u=http%3A%2F%2Fpc.watch.impress.co.jp%2Fdocs%2Fcolumn%2Fkaigai%2F20110301_430044.html
... but they live off "clicks", so the temptation is overwhelming (which influences everything, partiality very much included)... you raised them, AT, now you have to put up with them! .. lol (next time use a condom lol)

 