AMD's Future Chips & SoCs: News, Info & Rumours.



I don't know how familiar you are with Augmented Reality, Mixed Reality, and VR, but there is little doubt in my mind that people everywhere will eventually use it. Depending on how fast it advances, maybe 10 years. Here are a few links, but just google it on YouTube! In my personal opinion this is where the world is heading. It will encompass all aspects of our world. Seeing in the dark, schooling, shooting a gun, building; your imagination is the limit!
https://www.youtube.com/watch?v=ihKUoZxNClA
https://www.youtube.com/watch?v=xVhF3ai44Xo
https://www.youtube.com/watch?v=5HV3fcTvZk0
https://www.youtube.com/watch?v=9avmqNEG5ls
https://www.youtube.com/watch?v=-Xz9vw2Vq5A
 
Cheers for that mate, looks savage alright... I guess it will connect to the cloud for additional GPU and CPU processing power for the latest AAA titles... or alternatively maybe it could also connect to our PCs?

I just still cannot imagine life without my PC... call me old fashioned. It's more than just a machine to me; it's a love affair with technology, a hobby I guess. I'll probably still have one long after everyone else gives up on them. But hey, whatever I guess. Horses for courses.
Don't get me wrong, I'm sure I'll go for the glasses too, defo. But I simply can't imagine a world without PCs, it's just not happening for me... that's not to say it won't happen, but my crazy brain just cannot seem to comprehend it. Well, whatever, I'll recondition old ones if I have to :)

anyhow...
Here's the latest from Wendell:
AMD News Roundup: X399, Threadripper, Vega Demos, and More! (Early June 2017)
https://youtu.be/pI2F8K3Agjs
 
The rumour is back, but this time apparently an engineering sample has been spotted in the wild... this is one rumour that just won't die: Intel chips with AMD graphics. Well, we shall see I guess.
Intel CPU with AMD GPU spotted, GPU licensing confirmed?
http://www.tweaktown.com/news/57931/intel-cpu-amd-gpu-spotted-licensing-confirmed/index.html


Also some fresh gossip from our friends over at WCCFTech... Videocardz has spotted some Threadripper engineering samples, along with Vega 16GB/8GB cards.
AMD Threadripper 1920 12-Core CPU & Vega 16GB/8GB Cards Leaked:
http://wccftech.com/amd-threadripper-1920-12-core-cpu-vega-16gb8gb-cards-leaked/
 


AdoredTV is well known as an unreliable source. Not surprising that what he says bears zero resemblance to reality.

It is AMD who is having issues. No one is purchasing Naples/EPYC, and the rest of the server plans were canceled or are on hold. Threadripper is getting barely any interest from motherboard makers: there are about 40 models of X299 mobos ready for launch, whereas there are only 6 models for Threadripper.

Current samples of Threadripper have the same performance per watt as Ryzen. I don't know where he gets his data.

The 80% yield figure is a number that someone anonymous posted in a forum elsewhere (I forget where), and everyone has been repeating it since.
 
Hey Juan, how are you mate?

Well, I'd have to disagree, and so does Wendell; in his latest video he reckons EPYC is going to cause major disruption in the server market... (he says it's really really really really really really good!! lol)
Actually I have only heard positive things about EPYC so far. It looks savage...

Wendell reckons it is going to devour Dell's server business first, and then Intel's Xeon E3 & E5 business once it gets recognized and respected for how good it actually is.

Here is Wendell's take on it:
https://youtu.be/pI2F8K3Agjs
 
Hi! All fine with me.

Feel free to disagree! This is a forum and it would be boring if all of us agree on everything.

My claims aren't based on what some guy says on his YouTube channel. My claims are based on (i) friends who have tested EPYC on real-life workloads; (ii) server providers not offering EPYC as an option; (iii) AMD's own guidance for 2017 sales and their projections for market share.
 


Hmm... the only MB reps I know are talking about how they expect X299 to take off like a lead balloon, and how they are pushing to get X399 stuff out the door as quickly as possible.

EPYC also has design wins already...3 large ones in fact.

Expect EPYC to make serious noise in the datacenter...AMD might exceed projections on EPYC if the current pace holds.
 
"It good to see that AMD’s found willing partners in all of the major motherboard makers for their risky new platform. That said AMD’s X399 chipset seems like a decent if rather unremarkable offering. Desktop chipsets these days always seem like a bit of an after thought. Intel’s X299 and AMD X399 chipsets are not exceptions to that trend. On the other hand motherboard feature sets are more robust than ever and average build quality is a high point so perhaps this new normal is a good thing."

Everyone is making motherboards for X399

https://semiaccurate.com/2017/05/31/amds-ryzen-threadripper-brings-socket-tr4-x399-chipset/
 


No. Same argument as when this came up for the Xbox One: Internet access latencies are simply too high to offload any meaningful amount of processing to the cloud.

It's simple: You want to maintain 60 FPS, which requires a 16ms refresh window. When latencies are measured north of 30ms for even "good" ISPs, it simply isn't feasible to offload processing externally. And that's before you consider actually transferring the information and integrating it back into the program. Unless you're happy with ~15FPS or so, you can't externalize this processing to the cloud.
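
For what it's worth, here's a rough back-of-the-envelope sketch (in Python) of that maths. The big assumption is the worst case: every frame has to wait for one full network round trip before it can be displayed. How much you actually offload changes the numbers, but the shape of the problem stays the same:

# Back-of-the-envelope sketch (assumption: every frame blocks on one full
# network round trip to the cloud before it can be rendered and shown).

def effective_fps(render_ms, round_trip_ms):
    # Frames per second when each frame's work includes one cloud round trip.
    frame_time_ms = render_ms + round_trip_ms
    return 1000.0 / frame_time_ms

LOCAL_RENDER_MS = 1000.0 / 60.0   # ~16.7ms frame budget at 60 FPS

for rtt in (0, 30, 50, 100):      # 0 = fully local; others = round trips in ms
    print(f"RTT {rtt:3d} ms -> ~{effective_fps(LOCAL_RENDER_MS, rtt):.0f} FPS")

Even a "good" 30ms connection drops you from 60 FPS to roughly 20, and a more typical 50ms puts you right around that ~15 FPS mark.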
 


I don't think the FPS itself is the problem, but your first point is enough: latency. I mean, you can have the most fluid of animations and smooth-as-butter transitions, but if it takes 300ms for you to move in a game, that is not ideal for any fast-paced game.

It would work, theoretically, with non-real-time games. For example, the Civilization games are a good fit for stream-play, since they are turn-based, so network latency is not *that* much of an issue, but any FPS will run into lots of problems.

Cheers!
 


What about just offloading it to our PCs (not the cloud)?
AMD recently purchased a company that was developing 60GHz WiFi connectivity specifically for VR headsets.

The HoloLens looks like it could use the extra horsepower.
 


It will all boil down to how much "real time" you want your simulation / VR experience to be.

It's just the same with your phone applications. You accept a delay in some applications like weather and search, but things that require "real time" interaction are harder: GPS and navigation. You can extend that to driving assistance, since I'm going off on that tangent.

Cheers!
 


It's still bad in strategy games. While you are waiting for your data to come back, your CPU is literally going to be doing NOTHING. It's faster to keep everything local.

Fact is, whenever you go to an external device, you introduce latency. Latency kills, especially on the CPU. The last thing you want is your CPU sitting around waiting for some other device to get it the data it needs.

Seriously, this is the same exact discussion we had when MSFT floated "the cloud" during early XB1 reveals. I called BS then, just as I'm calling BS now.
 
Have you guys ever played games over WiFi on your phone? Playing MMOs on a cell phone over WiFi works pretty well, and I'll bet you a dollar it gets much better! Now, apply that to a pair of glasses with the power of today's laptops/desktops, and advance 10 years.
 


MMOs are designed to be largely latency insensitive. Even then, things can grind to a halt in very large instances (EVE Online battles turn into slideshows, for example, as the servers really can't handle several thousand people at a time).

Competitive games, especially FPSes, get around this by often having the server at the competition location, which drives down latency to within a frame or two, which is "good enough". But for PC gamers, there are enough of us who have run into issues where what we see isn't what the server sees; that's latency in a nutshell.
 
Latency will have no more impact on AR glasses than it does on computers or cell phones. AR glasses are now using the newest cell phone processors. It's inevitable that AR glasses will spread across the world and start replacing cell phones and computers. You will have a heads-up display augmenting everywhere you go. Virtual signs, designs, and games will be everywhere. Entire rooms and buildings will be designed in mixed augmented reality. A.I. will take over designing hardware and software technology; it's learning at a rate of eight geniuses' lifetimes a day right now. Life as you know it will drastically change before your very eyes over the next 10 to 20 years.
 


Nope. VR is a fad, in the same exact way 3d was.
 


I strongly disagree there, but I think it's not the point.

In short: AR != VR. AR has clear day-to-day practical applications and a justification to be developed, and that is exactly VR's weak point. Plus, the hardware backend needed for AR is usually heavier on the AI side, whereas VR tends to be heavier on the graphics side only.

Cheers!

EDIT: Just to support my point a bit: http://kotaku.com/virtual-reality-mario-kart-looks-like-it-could-end-frie-1796112738

I mean... Come on...
 
GlobalFoundries Announces Early 7nm Availability, Huge Gains Over 14nm FinFET
By Joel Hruska on June 15, 2017 at 7:30 am

"Over the past few years, we’ve seen the foundry business evolve from a single-horse race that TSMC effectively “won” each and every cycle to a two-way competition with Samsung. Now, GlobalFoundries is making a serious push of its own with early 7nm technology availability and volume production currently planned for the back half of 2018.

That’s the news from Saratoga today, where GF announced that its 7nm LP node (LP = Leading Performance) is ready for partners to begin planning their designs. The first customer launches on 7nm LP are expected 12-18 months from now, and GF is promising that it can deliver up to 40 percent improved performance compared with 14nm. The company claims that its 7nm work is exceeding its performance and power targets, and is on track to deliver up to 2x area scaling compared with previous 14nm technology. The company has also been hard at work on 5nm in partnership with IBM."

[Image: GlobalFoundries 7nm slide]


"“Our 7nm FinFET technology development is on track and we are seeing strong customer traction, with multiple product tapeouts planned in 2018,” said Gregg Bartlett, senior vice president of the CMOS Business Unit at GF. “And, while driving to commercialize 7nm, we are actively developing next-generation technologies at 5nm and beyond to ensure our customers have access to a world-class roadmap at the leading edge.”

GlobalFoundries will also have the option to integrate some EUV manufacturing at this node, if its customers want to use it, but we’d be surprised if many do. EUV may be slowly moving into commercial production, but it isn’t expected to see broad availability for the next few years and may be reserved for critical masks where other forms of multi-patterning can’t be used for some time after that.

What’s interesting about GF’s announcement is that it’s also claiming up to 60 percent reduced power with 40 percent improved performance, using the word and. That didn’t used to be significant, but in recent years it’s become common to see semiconductor manufacturers using or instead. You can have higher performance at the same power or you get lower power at the same performance, but delivering both simultaneously has become increasingly difficult as the benefits of each successive process node grow smaller.

GF also seems to be leading with a process node that’s better suited to higher power silicon designs like GPUs and CPUs, rather than the mobile-first approach TSMC has taken to better suit the needs of Qualcomm and Apple. There’s nothing intrinsically wrong with each option, but it reflects how different companies now drive the semiconductor market compared with a decade ago, when GPUs were major drivers of each new node. Now, we see companies like AMD and NV sitting out some process nodes that are explicitly designed for mobile SoCs, and taking a staggered approach to process node deployment.

When GlobalFoundries first spun off from AMD, it had a great deal of difficulty finding its sea legs. The company’s original roadmap called for aggressive competition with TSMC across a range of nodes, but it was unable to deliver on that vision. Later, it was forced to license Samsung’s 14nm technology due to problems with its own 14nm XM. GF seems to be on much more stable footing now, and is pushing hard to turn the two-way race between TSMC and Samsung into a three-way competition with itself in the lead."

https://www.extremetech.com/computing/250936-globalfoundries-announces-early-7nm-availability-40-improved-performance-14nm-finfet
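
Just as a sanity check on that "and" claim, here's a quick sketch of what the two figures would mean for perf/watt if you could actually get both at the same operating point. These are "up to" marketing numbers, so treat this as a best case, not a measurement:

# Quick sanity check of GF's "up to" figures (assumption: both numbers apply
# at the same operating point, which is the best possible case).

perf_gain = 0.40   # "40 percent improved performance" vs 14nm
power_cut = 0.60   # "60 percent reduced power" vs 14nm

relative_perf  = 1.0 + perf_gain    # 1.4x the performance
relative_power = 1.0 - power_cut    # 0.4x the power
perf_per_watt  = relative_perf / relative_power

print(f"Performance: {relative_perf:.2f}x")
print(f"Power:       {relative_power:.2f}x")
print(f"Perf/watt:   {perf_per_watt:.2f}x")   # ~3.5x vs 14nm, best case only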
 
GLOBALFOUNDRIES on Track to Deliver Leading-Performance 7nm FinFET Technology
Jun 13, 2017

New 7LP technology offers 40 percent performance boost over 14nm FinFET

""Santa Clara, Calif., June 13, 2017 – GLOBALFOUNDRIES today announced the availability of its 7nm Leading-Performance (7LP) FinFET semiconductor technology, delivering a 40 percent generational performance boost to meet the needs of applications such as premium mobile processors, cloud servers and networking infrastructure. Design kits are available now, and the first customer products based on 7LP are expected to launch in the first half of 2018, with volume production ramping in the second half of 2018.

In September 2016, GF announced plans to develop its own 7nm FinFET technology leveraging the company’s unmatched heritage of manufacturing high-performance chips. Thanks to additional improvements at both the transistor and process levels, the 7LP technology is exceeding initial performance targets and expected to deliver greater than 40 percent more processing power and twice the area scaling than the previous 14nm FinFET technology. The technology is now ready for customer designs at the company’s leading-edge Fab 8 facility in Saratoga County, N.Y.

“Our 7nm FinFET technology development is on track and we are seeing strong customer traction, with multiple product tapeouts planned in 2018,” said Gregg Bartlett, senior vice president of the CMOS Business Unit at GF. “And, while driving to commercialize 7nm, we are actively developing next-generation technologies at 5nm and beyond to ensure our customers have access to a world-class roadmap at the leading edge.”

GF also continues to invest in research and development for next-generation technology nodes. In close collaboration with its partners IBM and Samsung, the company announced a 7nm test chip in 2015, followed by the recent announcement of the industry's first demonstration of a functioning 5nm chip using silicon nanosheet transistors. GF is exploring a range of new transistor architectures to enable its customers to deliver the next era of connected intelligence."

https://www.globalfoundries.com/news-events/press-releases/globalfoundries-track-deliver-leading-performance-7nm-finfet-technology
 
Processor Arithmetic
2x AMD EPYC 7601 32-Core Processor (4N 32C 64T 3.2GHz, 1.33GHz IMC, 32x 512kB L2, 8x 8MB L3) = 706.18GOPS
2x Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz (28C 56T 3.8GHz, 2.4GHz IMC, 28x 1MB L2, 38.5MB L3) = 1425.82GOPS


Processor Multi-Media
2x Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz (28C 56T 3.8GHz, 2.4GHz IMC, 28x 1MB L2, 38.5MB L3) 5989.90Mpix/s
2x AMD EPYC 7601 32-Core Processor (4N 32C 64T 2.7GHz, 1.33GHz IMC, 32x 512kB L2, 8x 8MB L3) 974.33Mpix/s

http://ranker.sisoftware.net/show_device.php?q=c9a598aad2f2b3feba9adf8fd695b582b484b595a694b9fa95e782a4c3fed3e2c4b68bba9cf5c8f8deb68bbb9de5d8e8ceabcef3c3e596ab93&l=en
 


Intel's new Xeon rocks 28C/56T, costs over $12,000: Intel launches its next-gen Xeon Platinum and Gold Series processors, packing 28C/56T. By Anthony Garreffa | CPU, APU & Chipsets News | Posted: Apr 27, 2017

Read more: http://www.tweaktown.com/news/57293/intels-new-xeon-rocks-28c-56t-costs-over-12-000/index.html

You are saying that two $12,000 CPUs are better than two $4,000 CPUs by approximately double... OK... And that 3.8GHz figure is overclocked.
Using the same website you posted:

Arithmetic
[benchmark screenshots]

Multi-Media
[benchmark screenshot]


"Intel is splitting its new Xeon CPUs into four families: Platinum is the 8000 series, Gold with the 6000 and 5000 series, while the 4000 series will be Silver, and the 3000 series is Bronze. Intel will use between 10 cores on the new Xeon processors, right up to the monster 28-core chip with Hyper-Threading, rolling the CPU threads up to 56. Intel is reportedly launching the new Xeon Platinum 8180 which will rock 28C/56T of CPU performance at 2.5GHz, with 38.5MB of L3 cache and a 205W TDP. This is a monster processor with its 56-threaded power, but its $12,000+ price tag sees it destined for the datacenter/AI/server markets."

http://www.tweaktown.com/news/57293/intels-new-xeon-rocks-28c-56t-costs-over-12-000/index.html

2.5GHz base clock:
https://en.wikichip.org/wiki/intel/xeon_platinum/8180
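
To put some rough numbers on the value argument, here's a quick sketch using the Sandra arithmetic scores posted above and the approximate prices quoted in this thread (roughly $12,000 per Platinum 8180 and $4,000 per EPYC 7601; actual street pricing will differ):

# Sandra "Processor Arithmetic" scores from the post above, plus prices as
# quoted in this thread (assumption: ~$4,000 per EPYC 7601 and ~$12,000 per
# Xeon Platinum 8180; real-world pricing varies).
systems = {
    "2x EPYC 7601":          {"gops": 706.18,  "price": 2 * 4_000},
    "2x Xeon Platinum 8180": {"gops": 1425.82, "price": 2 * 12_000},
}

for name, s in systems.items():
    per_1k = s["gops"] / s["price"] * 1000   # GOPS per $1,000 of CPU
    print(f"{name}: {per_1k:.1f} GOPS per $1,000")

So even on the benchmark where the Xeon pair wins outright, the EPYC pair comes out roughly 1.5x ahead per dollar spent (about 88 vs 59 GOPS per $1,000).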
 