AMD's Future Chips & SoCs: News, Info & Rumours.

Status
Not open for further replies.


Do you have a link to the source? Those statements are kind of vague. But this is what Lisa Su told Ian in an interview.

Getting Radeon Vega Everywhere: An Exclusive Media Interview at AMD Tech Day, with CEO Dr. Lisa Su
by Ian Cutress on January 24, 2018 8:00 AM EST

https://www.anandtech.com/show/12312/getting-radeon-vega-everywhere-an-exclusive-interview-with-dr-lisa-su-amd-ceo
Q18: With GlobalFoundries 14nm, it was a licensed Samsung process, and 12nm is an advancement of that. 7nm is more of a pure GF design. Is there any change in the relationship as a result?

LS: So in 7nm, we will use both TSMC and GlobalFoundries. We are working closely with both foundry partners, and will have different product lines for each. I am very confident that the process technology will be stable and capable for what we’re trying to do.

Intel's 10nm+ is denser than GlobalFoundries' because Intel uses a single dummy gate and contact-over-active-gate (COAG).
 
https://www.youtube.com/watch?v=NJWGlHIHXp0

For the first time since APUs came out, I feel Raven Ridge will actually be a viable option for budget gamers, all the more so since GPU pricing is so crazy right now.

Note that I don't see AMD branding the APUs as Raven Ridge; instead it's Ryzen + Vega, and to be honest I think they should stick with that.

I do think the 2400G is overpriced, however, as one can get an 8400 at that price; both options have enough GPU power for most users, and the 2400G's CPU doesn't even come close to the 8400's.

Then again, I argued that Ryzen 3 should originally have been the 1400 and 1500X instead of 4-core/4-thread CPUs.
 
Malta Tour Shows GF Rising
Ryan Shrout
2/7/2018 00:01 AM EST

https://www.eetimes.com/author.asp?section_id=36&doc_id=1332945
AMD is still the biggest and most influential customer for GF. AMD’s latest processor products including Ryzen and Epyc, in addition to graphics chips based on Polaris and Vega, are all built inside GF’s Fab 8 in Malta, NY.

AMD had a lot of ground to make up against Intel and Nvidia. CEO Lisa Su hasn’t been shy about crediting GlobalFoundries in part for the improvement in its product portfolio through 2017. AMD’s release schedule and product refresh cycle for the products was aggressive.

I toured Fab 8 to see some of the changes taking place. Like TSMC, GF will be starting 7nm production with current lithography techniques, claiming customers are demanding 7nm tape-outs before newer options like extreme ultraviolet litho (EUV) are ready.

TSMC taped out several 7nm chips last year and expects volume production this year. GF is a step behind with an in-house designed 7nm node taping out chips in late 2018 and mass production in 2019.

GF must prove to its customers, AMD among them, that its 7nm capabilities are competitive with TSMC. GF claims its 7nm LP process will provide for mobile processor applications as much as a 30 percent die cost reduction and a 40 percent better performance over its 14nm node.

AMD said (and GF confirmed) it will be splitting its 7nm production across both GF and TSMC. Which chips will be coming from TSMC versus GF is still unclear.

My understanding is AMD plans development of 7nm GPUs and CPUs at both foundries, selecting the best one for each option as late as possible. Given AMD has promised to ship 7nm Zen 2 CPUs and 7nm Vega and Navi GPUs by 2020, the window is closing on that selection process.

GF claims its 7nm LP process will provide for mobile processor applications as much as a 30 percent die cost reduction and a 40 percent better performance over its 14nm node.
7nm LP will be used for mobile processor applications... What about the other applications?
My understanding is AMD plans development of 7nm GPUs and CPUs at both foundries, selecting the best one for each option as late as possible.
Maybe TSMC will be producing 7nm CPUs for AMD.
 


Why are you no longer on AnandTech? What a great site; it seems strange you're not a part of it.


 


Ryzen 5 2400G Live Testing!
Son of a Tech
Streamed live on Feb 10, 2018

https://www.youtube.com/watch?v=PrRvBWZ1AFA&feature=youtu.be
FULL TIMESTAMPS:
Cinebench R15 multi core - 9:40 (769)
Cinebench R15 single core- 16:10 (149)
OC 4GHz - 2:31:45
Cinebench R15 multi core (OC)- 2:32:45 (810)
CPU Z - 2:52:36
Cinebench R15 multi core (highest) - 2:54:44 (844)
Back to stock
Cinebench R15 single core - 2:56:58 (153)
Mining test - 3:05:50
Mining test - 3:19:25
3D mark fire strike (CPU only) - 3:23:40 (11,338)
3D Mark time Spy (CPU only) - 3:26:25 (4,108)
Graphics working reaction - 3:46:10
3D mark fire strike - 3:57:08 (graphics 3,375; physics 9,390)
GPU allocated memory boost - 4:17:15
3D mark time Spy - 4:20:08
Superposition - 4:25:00
GTA V - 4:42:27 (1080p, mid-50s FPS)
GTA V - 4:53:40
Rise of Tomb raider - 5:05:35
Overwatch - 5:19:41 (medium 78FPS)
Overwatch - 5:32:00
Superposition - 5:46:00
PUBG - 6:10:49
CSGO - 6:46:20
Rocket league - 6:54:04
Dark souls 3 - 7:26:45
Destiny 2 - 7:52:38
Diablo - 8:07:35
UserBenchmark - 8:15:35
Killing floor 2 - 8:32:28
Killing floor 2 - 8:39:00
Doom - 8:48:45 (1080p Vulkan low, upper 30s to mid 40s)
Final fantasy XV - 8:56:00
Super position - 9:03:40
Rise of Tomb raider - 9:08:00
3D Mark Time Spy - 9:13:54
Cinebench R15 OpenGL - 9:22:40
Dirt Rally - 9:34:35
PUBG - 9:45:45
GTA V - 10:07:33
CSGO - 10:13:03
The Amazon listing:
https://sonofatech.com/amd-yd2400c5fbbox-ryzen-5-2400g-processor-with-radeon-rx-vega-11-graphics-169-00/

Edit: adding new video url https://youtu.be/g7NmvmO-CNU
 


Thanks a bunch for posting that; I'll be watching it tonight. Who else might build one of these? He said he isn't running this with 3200MHz memory (looks like 2667MHz?), so keep that in mind. The Cinebench single-core scores looked a little low to me: at 4GHz this thing is a good 10 points behind a typical Ryzen CPU at that frequency, so cutting the L3 cache did hurt it a bit, or it's just his memory.

Also, GPU performance will be a decent amount lower due to his 2667MHz memory, probably a good 15% or more.
 
Yeah, I think those benchmarks looked pretty good! AMD's integrated graphics have typically been much better than Intel's. I think they should be pushing more R&D into this type of integration. If everyone can play games fairly well on their CPUs, that will make people think really hard about getting them.
 
Absolutely false. Iris iGPUs have always been faster than AMD's APUs. Heck, even the more-than-two-year-old Iris Pro Graphics 580 is pretty much on par with Vega 10.
 


The 1600X has a higher CB score than the 1800X because it has 33% more L3 cache per core. Raven Ridge, having half (or less) the amount of L3 cache per core, has to take a hit in that score.
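The per-core arithmetic behind that claim, as a quick sketch (cache sizes and core counts are the published specs for these parts):

```python
# Total L3 (MB) and core count per part
parts = {"1600X": (16, 6), "1800X": (16, 8), "2400G": (4, 4)}

# MB of L3 per core for each part
per_core = {name: l3 / cores for name, (l3, cores) in parts.items()}
for name, mb in per_core.items():
    print(f"{name}: {mb:.2f} MB L3 per core")

# Relative advantage of the 1600X over the 1800X
advantage = per_core["1600X"] / per_core["1800X"] - 1
print(f"1600X: {advantage:.0%} more L3 per core than the 1800X")
# prints: 1600X: 33% more L3 per core than the 1800X
```

The 2400G lands at 1 MB per core, half of the 1800X's 2 MB, which is the "half (or less)" figure above.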
 
Ryzen CPU + Vega Graphics on a Chip: AMD Ryzen 5 2400G & Ryzen 3 2200G Review
Raven Ridge APUs Arrive
By Steven Walton on February 12, 2018

https://www.techspot.com/review/1574-amd-ryzen-5-2400g-and-ryzen-3-2200g/

The disadvantage, on the other hand, is that you get less CPU overall. With one CCX module, Raven Ridge's L3 cache gets cut down from 16MB to 8MB, but AMD decided to halve it again, so these chips come with only a 4MB cache. Comparatively, the Ryzen 5 1400 had an 8MB L3 cache and the 1500X packed 16MB; the 2400G and 2200G offer just 4MB. That doesn't sound too encouraging, but AMD believes it's been able to offset that reduction with higher clock speeds.

However, the reduced cache and memory latency (itself a result of having less cache) purportedly offsets that capacity deficit, and overall AMD believes this is a net positive for productivity workloads, and in particular for games, which are more sensitive to memory latency.

There's also been some corner-cutting to reduce production costs. Raven Ridge only packs eight PCI Express lanes for graphics, not 16 like the first-generation Ryzen CPUs. AMD has made this sacrifice as it doesn't think it will impact performance for mid-range discrete graphics cards, and it's unlikely that those with an APU will be upgrading to a GTX 1080 Ti any time soon, so this makes sense.

AMD says this reduction in PCIe lanes helps contribute to a smaller and more efficient "uncore" as well. Uncore is a term first used by Intel to describe aspects of the CPU that are not within the core but are closely connected to it for maximum performance. Things such as the L3 cache and on-die memory controller, for example.

AMD has also saved money by using a non-metallic TIM for the 2400G and 2200G. We're not sure if the company is using the same toothpaste as Intel, but we'll test load temperatures to get an idea. Regardless, this confirms what we already suspected: Intel has been cheaping out on thermal paste to save on production costs, and now AMD is doing the exact same thing for its most affordable CPUs. At least AMD is admitting it, and it also makes more sense on these budget chips.

Raven Ridge parts also feature support for dual-channel DDR4-2933 memory, an important feature for these APUs as memory performance is of utmost importance for integrated graphics. Like the mobile parts, the desktop Raven Ridge CPUs sport Precision Boost 2 technology, which is basically a more aggressive version of what was featured in the original Ryzen processors.

And of course, the most notable change is the inclusion of Vega graphics. Connected to the CPU via the Infinity Fabric is a Vega chip featuring 11 CUs for the 2400G and 8 CUs for the 2200G. Clock speed is the main advantage to using the Vega architecture over Polaris, as the integrated graphics on both of AMD's new chips operate at over 1000MHz and can be pushed to 1.5GHz or higher.

Additionally, both parts have a thermal design power of 65 watts, though they can be configured down to 45 watts.

We'll be testing Raven Ridge on a B350 motherboard with 16GB of DDR4-3200 memory. First we'll check application performance before moving onto gaming (with and without a discrete graphics card) along with some memory scaling performance. We'll also be touching on overclocking, power consumption and operating temperatures so let's get to it.
Benchmarks: Applications
[Charts: Cinebench, memory, PCMark, Excel, VeraCrypt, 7-Zip, Premiere, Blender, Corona, POV-Ray]
Benchmarks: Games
Again, we recorded this next batch of results by pairing Raven Ridge with DDR4-3200 memory. The Pentium G4560 was tested with DDR4-2133 memory using its integrated graphics as well as the GeForce GT 1030. Then we have the Core i3-8100, 8350K and A12-9800 all using their integrated graphics solutions.
[Charts at 720p and 1080p: CSGO, PUBG, Fortnite, Overwatch, Rocket League, DOTA, RSS, Battlefront, Wolfenstein]
Benchmarks: Discrete Graphics Card
[Chart: Overwatch 720p with discrete GPU]

Here's how the 2400G and 2200G perform with the integrated Vega GPU disabled and replaced with the Radeon RX 550. The 2200G was just 8% faster with the discrete graphics card installed, so that really speaks to just how impressive the integrated Vega GPU is. Similarly, the 2400G only saw an improvement of 4%, so the integrated Vega GPU is basically an RX 550, as suspected.
[Chart: Overwatch 1080p with discrete GPU]

That said, at 1080p we see the benefits of local GPU memory as the RX 550 starts to pull ahead. Raven Ridge's integrated graphics still do remarkably well. It's interesting to note that the 1% low results with the RX 550 are noticeably better with the 1300X and 1500X than with the new APUs; that will be something to look into with a wider selection of games in the future. Remember, Overwatch is a very CPU-intensive title when running the bot match test.

Rather than dig through all that data for the other eight games I've tested, I'm instead going to move on to something I feel is more important for would-be buyers: overclocking, power consumption, temperatures, and the all-important memory scaling.
Benchmarks: Overclocking
So let's start by taking a quick look at overclocking. I'm only showing results for a single game right now. There's much more testing to be done but I only had so much time to get this little bit done. The results included should give you a pretty good idea of what's possible but there will be a future piece dedicated to overclocking.

Please note that all overclocking was done with the stock box cooler and these results include just GPU overclocking, not the CPU. I don't believe you'll be able to overclock both the CPU and GPU with the box cooler but that's not a big deal as the biggest gains will come from GPU overclocking. By default, the 2200G runs its Vega cores at 1.1GHz and the 2400G at 1.25GHz, both have been pushed to 1.6GHz using 1.3 volts. Here are the results.
[Chart: overclocking results, 720p]

At 720p I was only able to boost the Fortnite result for the 2400G by 8%. That's not bad, but we did increase the GPU clock speed by 28%, so you'd expect better scaling. In fact, we see that with the 2200G, which for some reason responded better to overclocking. Here we were able to boost performance by a massive 29%, and now the 2200G matches the stock 2400G in this title.
[Chart: overclocking results, 1080p]

Similar margins were seen at 1080p, though this time the 2200G's performance was boosted by 35%, which wasn't quite enough to match the stock 2400G. Still, overall great results, and I'm keen to spend much more time playing around with the overclocking capabilities of these new APUs.
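One rough way to read those numbers is to divide the performance gain by the GPU clock gain, giving a crude scaling ratio (1.0 = perfect scaling). The stock clocks and gains below are taken from the paragraphs above; the metric itself is just an illustration:

```python
def oc_scaling(stock_mhz, oc_mhz, perf_gain_pct):
    """Ratio of performance gain to GPU clock gain (1.0 = perfect scaling)."""
    clock_gain_pct = (oc_mhz - stock_mhz) / stock_mhz * 100
    return perf_gain_pct / clock_gain_pct

# 2400G at 720p: 1250 MHz -> 1600 MHz for +8% in Fortnite
print(round(oc_scaling(1250, 1600, 8), 2))   # 0.29 -- mostly memory-bound
# 2200G at 720p: 1100 MHz -> 1600 MHz for +29%
print(round(oc_scaling(1100, 1600, 29), 2))  # 0.64 -- scales much better
```

A ratio well below 1.0 is consistent with the iGPU being held back by memory bandwidth rather than shader clocks, which fits the memory-scaling results further down.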
Benchmarks: Power Consumption & Temperatures
[Charts: power consumption (overall, application, gaming) and temperatures]

Using the Wraith Stealth box cooler these are the temperatures we saw with an ambient room temperature of 21 degrees. Under maximum load in our Blender test we hit 67 degrees for the CPU and GPU on the 2200G and 74 degrees for the 2400G.

Gaming temperatures were lower. Playing Overwatch for an hour saw the 2200G hit a peak CPU temp of 60 degrees and a GPU temp of 57 degrees. The 2400G hit a similar 59 degrees for the GPU and 59 degrees for the CPU. I should note that overclocking the Vega GPU to 1.6GHz did push temps for both the CPU and GPU to around 90 degrees with the stock cooler, but I'll cover this in more detail some other time. Time to check out memory scaling.
Benchmarks: Memory Scaling
[Chart: memory scaling, dual-channel]

What we can quite clearly see here is that dropping down from 3200 to the official 2933 spec reduced the average frame rate by 6%. Then we saw a further 6% reduction going from 2933 down to 2666 and then 8% from 2666 to 2400. If you were to use DDR4-2400 memory in a dual-channel configuration you stand to lose up to 20% performance compared to what we've shown here.
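Those step-by-step percentages compound multiplicatively, which is where the "up to 20%" figure comes from; a quick sketch using the drops quoted above:

```python
# Compounding the step-by-step drops quoted above:
# 3200 -> 2933: -6%, 2933 -> 2666: -6%, 2666 -> 2400: -8%
remaining = 1.0
for drop in (0.06, 0.06, 0.08):
    remaining *= 1 - drop

print(f"DDR4-2400 vs DDR4-3200: ~{1 - remaining:.1%} lower average frame rate")
# prints: DDR4-2400 vs DDR4-3200: ~18.7% lower average frame rate
```

Roughly 19%, which rounds to the "up to 20%" figure in the text.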
[Chart: memory scaling, single- vs dual-channel]

Some of you suggested that you'd like to run these new Raven Ridge APUs with a single 8GB memory module, as it's cheaper to do that at the moment. Given the results, I strongly recommend you stick to dual-channel memory. Here's a better graph to show the real performance impact: using DDR4-3200 memory you'll see a massive 33% reduction in frame rate with single-channel memory, and this figure increases as the memory speed is reduced. So, please stick with dual-channel operation.
Best Value, Combo Comparisons and Ugh, Memory Pricing
Performance-wise the 2400G and 2200G are impressive. Both the CPU and GPU performance are exceptional at their respective price points. Additionally, the chips can be paired with existing B350 and X370 motherboards and they are excellent when it comes to performance-per-watt, particularly in 3D workloads.

The Ryzen 3 2200G is coming in at just $99 and the Ryzen 5 2400G is $169, and both appear to be a great value.

For comparison, the Pentium G4560 currently costs $80, can be installed on a $50 H110 motherboard and is complemented well by a GT 1030 for $90, so all up a $220 combo. The Ryzen 5 2400G offers a similar gaming experience and vastly superior productivity performance, but it costs just $20 more, so $240 with a B350 motherboard.

The 2200G on the other hand is slightly slower overall, but comes in around $50 cheaper at $170 including the motherboard, and it's also much better than the G4560 for productivity workloads.
For gamers on a tight budget, the Ryzen 3 2200G seems like the way to go, while for those who can spend a little more, the Ryzen 5 2400G should prove a far better investment over time.

The Core i3-8100 comparison is more difficult as we're stuck with Z370 boards that cost at least $110 right now, but I'm going to pretend that's not the case and that you can get B360 boards for $70; hopefully that reality isn't far away. Even so, with a $70 motherboard and the GT 1030 you're looking at a total bill of almost $300, which makes the 2400G considerably better value as you essentially get the GPU for free.
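Tallying the combos from the prices quoted in the review (early-2018 USD; the ~$70 B350 board price is an assumption in line with the article's $240 total):

```python
# Combo totals from the prices quoted above (early-2018 USD, illustrative only)
combos = {
    "Pentium G4560 ($80) + H110 ($50) + GT 1030 ($90)": 80 + 50 + 90,
    "Ryzen 5 2400G ($169) + B350 (~$70)": 169 + 70,
    "Ryzen 3 2200G ($99) + B350 (~$70)": 99 + 70,
}

# Cheapest first
for name, total in sorted(combos.items(), key=lambda kv: kv[1]):
    print(f"${total:>3}: {name}")
```

That puts the 2200G combo about $50 under the G4560 + GT 1030 build, and the 2400G combo roughly $20 over it, matching the figures in the text.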

So, this looks like a solid win for AMD across the board, though there is a slight hitch that needs to be addressed (memory pricing). Of course, memory prices are high across the board, but they are particularly high when looking at high-speed Ryzen-friendly RAM.

The DDR4-2400 16GB kit used for testing the Core i3-8100, for example, costs $160, and that's for a pair of 8GB modules. That's mighty expensive by 2016's memory prices, but today it's the norm. The G.Skill FlareX DDR4-3200 16GB kit that AMD provided for testing, which I also use in my own Ryzen test rig, costs $250. That means you're paying a little over 50% more for the Ryzen memory.

I've yet to figure out which DDR4 memory will work at 2933 and faster with the Raven Ridge APUs, so it's possible there is a cheaper memory option, but I can't confirm that at this point. That said, I should note that DDR4-3200 memory starts at $225 for 16GB kits, so that's still a 40% price premium. This margin is at least halved with 8GB kits, so that's something.
Right now it's DDR4 memory prices that are killing the value of these new AMD APUs. With a discrete graphics card memory speed matters very little. If you throw the G4560 on an unlocked Z270 motherboard and pair it with DDR4-4000 memory, at best you'll see a few extra frames with a GTX 1050, 1050 Ti or even 1060. So given current memory prices, you're possibly better off going with a discrete graphics card, which is a real shame.

One thing to remember when buying a Raven Ridge APU is that memory speed matters and you're going to want at least DDR4 memory capable of running at 2933 for them to really make sense. The Ryzen 3 2200G combo with 8GB of DDR4-3200 memory will cost $275 ($105 just for the memory). Still, as I said, 8GB of DDR4-2400 RAM isn't much cheaper as you'll be paying at least $85, so the total bill for the G4560 combo still comes to $305.

It also depends on what you want out of the system. If you want to build the smallest gaming PC possible, then the Raven Ridge APUs offer something unique. But even if you build in a Micro ATX or standard ATX case, the 2400G has legs: in the future, when you can afford a GTX 1060 or RX 580, it'll be able to get the most out of those GPUs.

I still can't work out which APU I prefer, they're both so good in their own right. Initially I thought it would be all about the Ryzen 5 2400G, but I've shifted away from that thinking as I kept testing. I really like what the Ryzen 3 2200G has to offer budget gamers. This APU, motherboard and memory combo for well under $300 simply can't be beat and for that reason, it's likely going to be a hot item for gamers with limited funds.

It's worth noting that when we started testing these APUs a week ago the Vega drivers were quite buggy. AMD has since released an updated version that addressed stuttering issues we were seeing. The company also says further driver optimizations are coming and performance will get better. Given AMD's history, I believe that.
 
Marrying Vega and Zen: The AMD Ryzen 5 2400G Review
by Ian Cutress on February 12, 2018 9:00 AM EST

https://www.anandtech.com/show/12425/marrying-vega-and-zen-the-amd-ryzen-5-2400g-review/5
iGPU Gaming Performance
Throughout their history of having fully integrated GPUs, AMD has always favored going for powerful configurations that approach the lower-end of discrete GPU performance. With comparable discrete cards going for $80 or more, a powerful iGPU is a significant value proposition for AMD’s APUs. Furthermore as Intel has continued to stick with relatively lightweight iGPUs for most mainstream SKUs – a trend even the newly released “Coffee Lake” 8th Gen Core family continues – AMD has easily trounced Intel’s iGPU performance.

In fact, the most recent 8th Gen Core CPUs generally fail to catch up to AMD’s last-generation 7000/8000 series “Kaveri” APUs, which means that for the launch of AMD’s new Ryzen APUs, the manufacturer is just going to run up the lead even more. The real question we’ll be looking at on the iGPU side isn’t what’s faster – that’s obvious – but just how many times faster the 2400G & 2200G APUs are than their Intel equivalents, and how they stack up against an entry-level discrete video card, the GeForce GT 1030. With up to 11 CUs in an APU, on paper the Ryzen APUs should be able to offer 1080p@30fps gaming at maximum (or near-maximum) image quality.

During our pre-briefing, several sets of benchmarks and configurations were 'suggested', focusing on eSports titles and mid-range quality settings, to show what the platform can do. For our testing, we used our 1080p CPU Gaming suite. This suite was developed with mid-range and high-end graphics cards in mind, mostly at high or ultra quality settings, pushing beyond what was suggested. The reason we did this was twofold:

    1. In our data, it shows a sizeable difference between integrated graphics solutions that offer real potential and those that fall at the first hurdle.

    2. It offers a stark reminder that while websites and enthusiasts mostly wax lyrical about high-end performance, the data shows both how far integrated graphics has come, and how far it still has to go to qualify for those 'immersive experiences' that Intel, AMD, and NVIDIA all claim are worth reaching for, with higher resolutions and higher fidelity.
Benchmarks
[Benchmark charts]
Conclusion: Raising the Bar for Integrated Graphics
The march of integrated graphics has come in rapid spurts: the initial goal of providing enough performance for general office work has bifurcated into something that also aims to give a good gaming experience. Despite AMD and NVIDIA being the traditional gaming graphics companies, competing in this low-end space has required companies with x86 CPUs and compatible graphics IP, meaning AMD and Intel. While the two went toe-to-toe for a number of years, with Intel at various points dedicating over half of its silicon area to graphics, the battle has become one-sided: Intel in the end only produced its higher-performance solutions for specific customers willing to pay for them, while AMD marched performance upward by offering a lower-cost alternative to discrete graphics cards that served little purpose beyond driving a monitor. This has now come to a head, with a clear winner: AMD's graphics is the choice for an integrated solution, so much so that Intel is buying a custom version of AMD's Vega silicon for its own mid-range integrated graphics. For AMD, that's a win. Now with the new Ryzen APUs, AMD has rizen that low-end bar again.

If there was any doubt that AMD holds the integrated graphics crown, comparing the new Ryzen APUs against Intel's latest graphics solutions leaves a clear winner. For almost all the 1080p benchmarks, the Ryzen APUs are 2-3x better in every metric. We can conclude that Intel has effectively ceded this integrated graphics space to AMD at this point, deciding to focus on its encode/decode engines rather than raw gaming and 3D performance. With DDR4-2933 as the supported memory frequency on the APUs, and assuming memory can be found for a reasonable price, the gaming performance at this price is impressive.

When we compare the Ryzen 5 2400G with any CPU paired with the NVIDIA GT 1030, the two solutions are within a few percent of each other in all of our 1080p benchmarks. The GT 1030 is a $90 graphics card, which when paired with a CPU gives you two options: either match the combined price of the Ryzen 5 2400G, which leaves $80 for a CPU and buys a Pentium that loses to AMD in anything multi-threaded; or increase the cost of the system to get a CPU that is equivalent in performance. Except for chipset IO, the Intel + GT 1030 route offers no benefits over the AMD solution: it costs more, in a budget-constrained market, and draws more power overall. There's also the fact that the AMD APUs come with a Wraith Stealth 65W cooler, which adds value to the package that Intel doesn't seem to want to match.

For the compute benchmarks, Intel is still a clear winner in single-threaded tests, with higher IPC and a higher turbo frequency. That is something AMD might be able to catch up on with 12nm Zen+ coming later this year, which should offer higher frequencies, but Zen 2 is going to be the next real chance to bridge this gap. In the multi-threaded tests, AMD's 4C/8T and Intel's 6C/6T battle it out depending on whether a test can use multi-threading appropriately, but compared to Kaby Lake 4C/4T or 2C/4T offerings, AMD comes out ahead.

With the Ryzen 5 2400G, AMD has completely shut down the sub-$100 graphics card market. As a choice for gamers on a budget, those building systems in the region of $500, it becomes the processor to pick.
 
There is no doubt that the new Ryzen 5 2400G and Ryzen 3 2200G offer remarkable gaming performance for budget segments. But we have to consider the price of 16 GB of RAM, now costing more than the processor, which pushes these budget builds into the $500 range. At that point, for pure gaming, we should consider consoles like the Xbox One X, which can offer playable resolutions up to 4K. That said, computers still offer a wide array of functionality with business applications. I think it shows we still have a way to go before consoles are completely edged out in favor of computers on a gaming price-to-performance ratio.
 

He did reply here
http://www.tomshardware.com/forum/id-3341285/amd-naples-server-cpu-info-rumours/page-26.html#20641880

Let's just leave it at that, and get back to the news.
 


REMEMBER ME?

What did I tell you?

Do you want to be a member here or not? Because you posting this tells me you do not.

Call out juanrga or anyone else again and watch what happens.

Everyone else, keep the discussion civil or you're out. And for the record, what Juan posted about OrangeKhrush (from me) is true: he was banned from this forum; in addition to behavior issues, he was spreading massive amounts of misinformation here.
 


Really?? You think "we still have a ways to go before consoles are completely edged out"? If you try to build a PC equivalent to the Xbox One X, you will end up spending more than twice the Xbox's $495 USD price. The graphics card equivalent is the RX 580 or GTX 1060, and those retail for around $400. The Xbox is a steal at that price. Building a budget PC for just gaming is nonsense.
To match the XBOX performance you need:
1TB HDD
4K, HDR BLU-RAY, DVD Drive
RX 580 or GTX 1060
12GB RAM
Motherboard
Processor
A PSU to power all that
WINDOWS 10
Bluetooth 4.0
A Keyboard and Mouse.

Try to buy all that for $495. Sure, we all love PCs; if you want raw power, nothing beats a PC in the end. But trying to beat the Xbox One X in the budget segment won't happen; it's not even close.

 


So, process limited I take it?
 


It is the older Ryzen bug again

https://www.pcgamesn.com/amd-raven-ridge-overclocking

Those overclocks aren't real.



I think he got a bad chip. Others have pushed the chip to 4.1GHz and 4.2GHz, and AMD also demoed a 4.2GHz overclock.

https://www.reddit.com/r/Amd/comments/7s56pd/the_upcoming_ryzen_apus_seem_to_already_be_the/dt2rseq/

So I think the new 14nm+ process (aka '12nm') is giving about 200MHz extra. That also agrees with the leak about Pinnacle Ridge, which showed a qualification sample clocked 200MHz above its Summit Ridge counterpart.

https://hothardware.com/news/amd-ryzen-5-2600-12nm-zen-cpu-asus-crosshair-vii-hero-x470-motherboard
 


14nm+ shouldn't be used as a synonym for 12nm, unless you want to introduce even more confusion. Apparently there is an interim 14nm+ process in between the original 14nm and the 12nm used for the upcoming Zen+. At least that is what Tom's reported.
According to AMD, its 14nm+ process is denser and more power-efficient than the 14nm node it was using previously. However, the company isn't sharing much beyond those claims. To be clear, this is not the GlobalFoundries 12nm LP process that AMD will transition to in April when the Zen+ processors are expected to launch. That new process will provide even more of a performance boost over the current 14nm+ LPP FinFET.
That may explain why it is easier to get into the 4.0-4.2GHz overclock range with Raven Ridge.

 