AMD Details Lower-Power Ryzen 2400GE, 2200GE

Status
Not open for further replies.

MCMunroe

Distinguished
Jun 15, 2006
283
1
18,865
38
I have had and made a bunch of 1 Liter desktops with OptiPlex 3020 and Lenovo m92p's. These chips would make great "normal" folk and HTPCs.
 

pensive69

Commendable
Apr 12, 2016
37
0
1,540
1
The Ryzen 5 2400G has a nice boosted top end.
It would be nice if AMD had some actual gamer reactions to playing with
them in suitable platforms, apart from higher-end rigs.
 

alextheblue

Distinguished
Apr 3, 2001
3,078
106
20,970
2
The 2200G ITX cube rig I built for my dad is already pretty darn power efficient and quiet. For a slimmer system I bet these are great all-around performers.

The 2500U and 2700U are already good fits for that. You don't see as many oldschool chunky 15.6" units anymore, and the U series TDP is configurable from 12-25W. With that being said I personally would like to see more laptops configured with these APUs at 25W.
 

DerekA_C

Prominent
Mar 1, 2017
177
0
690
1
Yeah, and this isn't even on 10nm to match Intel, nor is it on 7nm, which is coming at the end of this year or the beginning of next. It's going to be exciting to see top-end chips dropping down to 65W with 8 cores @ 4GHz+, and 105W for 10 cores @ 3.7GHz+.
 

chaz_music

Distinguished
Dec 12, 2009
57
5
18,545
4
Leon,

These sound like they might be interesting for a FreeNAS box. My main interest is the low power and ECC support. Do these CPUs have ECC, and what about the chipsets? The first-gen Ryzen ECC support was a market launch fail, with little data and vague information from AMD. There was a ton of confusion as to which motherboards could work with ECC. Why not all of them? At the point where we are with computers, why wouldn't all CPUs have ECC, including Intel's?
 

bit_user

Splendid
Ambassador

I know at least some AM4 chipsets support it. Here's a board which supports it and supports these CPUs. However, I don't know if it supports both at the same time.

https://us.msi.com/Motherboard/X470-GAMING-PLUS/Specification


In reviewing the X470 boards on the market, they seem pretty clear about which support it, and which allow you to use it but treat it as non-ECC RAM. If a board doesn't explicitly list support for it, then you can bet it doesn't (or will at least treat it as non-ECC).

You have to check the product specs on the manufacturers' sites. Right now, there's only like 4 brands of X470 boards. Note that not all boards from all brands support it. Gigabyte and MSI each support it on only one of their X470 boards. ASRock supports it on all of theirs, but only with Ryzen Pro processors (my guess is they just wanted to reduce the set of configurations they had to test).


Because there's limited market demand and it costs money to:

  • route the extra traces on the circuit board
  • test the ECC functionality of the board & BIOS
  • validate additional memory SKUs on the boards that support it
Also, it tends to lag in speed - whether for valid technical reasons or simply market demand.


Around 2005 or 2006, Microsoft was encouraging the use of ECC in their Windows Vista hardware spec. I think they were worried about people with flaky memory blaming the OS for blue screens. Starting with the first i5/i7 CPUs, Intel took a different approach. They decided to use it as a key differentiator between their mainstream and Xeon platforms, and this further reduced demand.

Also, if you use good quality memory and run it at specified (or at least conservative) clocks, then it doesn't seem to cause an unacceptable degree of system instability. And, while it's worth using for a substantial set of tasks, it's certainly overkill for web browsing and gaming. So, that's a lot of people who don't need it, don't want it, and certainly don't want to pay for it (even though the additional cost would be small).
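To make the correction mechanism concrete, here's a toy sketch of the single-error-correcting Hamming code that ECC memory is built on. This is purely illustrative: real ECC DIMMs use a 72-bit SECDED variant with an extra overall parity bit for double-error detection, and this is not AMD's (or anyone's) actual hardware implementation.

```python
# Toy sketch: Hamming code over a 64-bit word, parity bits at
# power-of-two positions (1-indexed). Illustrative only.

def hamming_encode(data: int, data_bits: int = 64) -> list:
    """Return a codeword (list of bits, index 0 unused)."""
    # number of check bits r such that 2^r >= data_bits + r + 1
    r = 0
    while (1 << r) < data_bits + r + 1:
        r += 1
    n = data_bits + r
    code = [0] * (n + 1)
    # place data bits, skipping power-of-two (parity) positions
    j = 0
    for i in range(1, n + 1):
        if i & (i - 1):          # not a power of two -> data position
            code[i] = (data >> j) & 1
            j += 1
    # parity bit at 2^p covers every position with bit p set
    for p in range(r):
        pos = 1 << p
        parity = 0
        for i in range(1, n + 1):
            if i & pos:
                parity ^= code[i]
        code[pos] = parity
    return code

def hamming_syndrome(code: list) -> int:
    """XOR of the positions of all set bits: 0 if clean,
    otherwise the position of a single flipped bit."""
    syndrome = 0
    for i in range(1, len(code)):
        if code[i]:
            syndrome ^= i
    return syndrome

word = 0xDEADBEEFDEADBEEF
code = hamming_encode(word)
assert hamming_syndrome(code) == 0   # clean word: zero syndrome
code[13] ^= 1                        # simulate a single-bit upset
assert hamming_syndrome(code) == 13  # syndrome points at the flipped bit
code[13] ^= 1                        # ...so the controller can correct it
```

The nice property is that the syndrome directly names the flipped position, so correction is a single bit-flip, which is cheap enough that (as noted above) most CPUs already do it inside their caches.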
 

chaz_music

Distinguished
Dec 12, 2009
57
5
18,545
4
BIT_USER,

Thanks for the detailed answer. The only topic that I did not know about was that Microsoft had pushed for ECC on Vista. That really would stop many BSOD events. My interest in this is on multiple fronts, with the initial discussion here being with FreeNAS and ZFS. ZFS has several advances to help catch bit rot in the disk and file system, but the guys at FreeNAS are quick to note that DRAM errors can still get through. So ... they highly recommend ECC DRAM.

As an engineer, I know that the DRAM can start to have errors due to many conditions, and as a rule, I never overclock. The rationale is that CPU and DRAM companies would never leave that much money on the table if they could safely raise the clock speeds.

The typical failure modes I would think of for CMOS are ESD and ionizing radiation (gamma rays, X-rays). ESD failure can be instant or can take an amazing amount of time before a hard failure occurs. Radiation-induced trouble can result in (1) instantaneous single-event failures or (2) a slow accumulation over time of multiple events on the gate oxide, causing the MOSFET thresholds to drift lower and lower until the MOSFETs no longer act as enhancement CMOS (the transistors are on at 0V = oh crap!). In addition, as any IC ages, the physical features in the die can also slowly migrate and diffuse.

All of these failure modes can take quite a long time to surface. In the meantime, you have corrupted your OS and data (BSOD and "bad files"). The irreplaceable photos, MP3 files, term paper, etc. are in trouble.

So, with the ZFS file system and even Microsoft's ReFS going after drive-based bit errors, and SSD technology doing the same, there is no longer much of an argument against having ECC. I agree that there is a cost delta, but that would go to zero after just a short while as ECC became the de facto memory architecture. And for the enthusiasts, allow ECC to be turned off for overclocking. Everybody gets what they want, except Intel. And I would say for Intel: shame on them. Follow the voice of the customer. If you manhandle the customer too long, they will drop you like a rock when something better comes along (no reason for loyalty). Now that AMD has the Ryzen platform, that is exactly what is happening. Including me!
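To illustrate why the FreeNAS folks still insist on ECC even with ZFS checksumming, here's a toy Python sketch of ZFS-style end-to-end checksums. This is not real ZFS code (ZFS actually uses fletcher4 or SHA-256 checksums stored in its block-pointer tree); it just shows the principle.

```python
# Toy sketch: the checksum lives with the block *pointer* in parent
# metadata, not with the block itself, so a block that silently rots
# on disk fails verification on the next read.
import hashlib

storage = {}   # simulated disk: block_id -> bytes
pointers = {}  # parent metadata: block_id -> checksum

def write_block(block_id: int, data: bytes) -> None:
    storage[block_id] = data
    pointers[block_id] = hashlib.sha256(data).digest()

def read_block(block_id: int) -> bytes:
    data = storage[block_id]
    if hashlib.sha256(data).digest() != pointers[block_id]:
        raise IOError(f"checksum mismatch on block {block_id}: bit rot detected")
    return data

write_block(0, b"irreplaceable photo bytes")
assert read_block(0) == b"irreplaceable photo bytes"

# simulate bit rot: flip one bit in the stored copy
corrupt = bytearray(storage[0])
corrupt[3] ^= 0x01
storage[0] = bytes(corrupt)
try:
    read_block(0)
except IOError:
    # the filesystem catches on-disk rot, but a flip in DRAM *before*
    # write_block() gets checksummed as-is, which is exactly why
    # FreeNAS recommends ECC RAM
    pass
```

Note the gap this leaves: the checksum is computed over whatever is in memory at write time, so a DRAM error upstream of the write is faithfully preserved. Only ECC closes that hole.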

- Charles
 

bit_user

Splendid
Ambassador

Something like 15 years ago, I saw a review of PSUs where they modified memtest86 to run a 12-hour bit-fade test. They found a statistically-significant difference in the error rate, across the range of different power supplies tested. I can no longer seem to locate the article, however.
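For reference, the core of such a bit-fade test is simple: fill memory with a known pattern, wait a long time, then count bits that no longer match. A rough Python sketch (with an injected flip, since a user-space buffer can't demonstrate real DRAM fade the way memtest86 running on bare metal can):

```python
# Rough sketch of a bit-fade style check. A real test fills all of
# physical RAM, idles for hours (12 in the test described above),
# then re-reads; here we just simulate one faded bit.

PATTERN = 0xAA  # 10101010

def count_flipped_bits(buf: bytearray, pattern: int = PATTERN) -> int:
    """Count bits that differ from the fill pattern."""
    return sum(bin(b ^ pattern).count("1") for b in buf)

buf = bytearray([PATTERN]) * (1 << 20)  # 1 MiB test buffer
# real test: sleep for hours here; marginal DRAM or a noisy PSU
# lets some cells fade
buf[12345] ^= 0x04                      # simulate one faded bit
assert count_flipped_bits(buf) == 1
```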


If you're keeping them on a local filesystem that's heavily used, or if you're just copying them around or modifying them enough.

With cloud-based storage, the filesystem is hosted on a server with ECC and (aside from the initial upload) most of the accesses by the client PC are reads. So, very little risk, with a web-type access pattern. Using a network-based file server is similar.

With either type of remote storage, the risk is for users with large CAD or other datasets that they're frequently loading, modifying, and saving. This is somewhat mitigated by using some sort of server-side source control system (or backups). I say "somewhat" as, on more than one occasion, I've seen a collaboratively edited MS Word doc become increasingly flaky and "broken". To go back to the last known good version (and you don't even know if that's before the original error), you would lose a lot of edits. So, even backups are not full protection against data corruption.

I do emphasize the point about dataset size, since small datasets are unlikely to be corrupted by infrequent memory errors. If the memory errors are very frequent, you will experience basic system stability issues. So, the area for silent risk on a client PC with reliable network storage is mostly limited to large and/or frequently-modified, complex datasets.


SSDs and HDDs have had some level of error correction for a long time.


It's more than just architecture. It's extra PCB traces and extra chips on the DIMMs. It also requires a small amount of energy to compute & check the ECC codes. It's truly not free, but most CPUs have ECC-protected caches, showing that the additional cost can indeed get rather small.


Or leave it on, depending on how much instability you're willing to accept.
 
May 11, 2018
1
0
10
0
So, could we still have a chance to see another Ryzen 3 variant, after 2200G and 2200GE? I mean the one without iGPU just like previous Ryzen 3 1200.
 
