News: Vendors Show Off First X670, X670E AM5 Motherboards for Zen 4 CPUs

old_rager

Honorable
Jan 8, 2018
I'm struggling to come to grips with the fact that my 10-year-old PC config, which has an x16 (GPU) and an x8 (RAID card) slot, will not be supported on the new AM5 mobos?!? 😲 If I understand correctly, according to the statement made in the article, "...with up to 16 Zen 4 cores, up to 24 PCIe 5.0 lanes (x16 for a graphics card, x4 for an SSD, and x4 to connect to the chipset)", this would mean that only 20 PCIe 5.0 lanes are available to the user - as the chipset is taking up 4 of those 24 PCIe 5.0 lanes, yes?

And here I was hoping that I could run my current config (GPU (x16) and external RAID card (x8)) with at least 2x NVMe drives in a RAID 0 config - all in a PCIe 5.0 environment... 😭

Having said all of the above, I notice in Biostar's product specs (https://drive.google.com/file/d/1tljBzrCW1gimWiBNSim42DjnRd4sehF0/view?usp=sharing) that you can run two x16 slots (in either x16 or x8 mode) as well as two M.2 slots, all in PCIe 5.0 mode - somehow, I don't think these will all be available at the same time?!? So, I'm not sure what to think here, as I've literally been waiting years now to upgrade my PC, and in my opinion this is now the perfect time to do so.

Will AM5 serve me well, or will I need to look at the Threadripper environment - which is something I really don't want to do due to the extreme pricing?!? 🤔
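
As a rough sanity check on that lane math (just a sketch, assuming the article's figures of 24 CPU lanes with 4 reserved for the chipset link, and that the RAID card really needs its full x8):

```python
# Back-of-the-envelope CPU lane budget for the config I'd like to run.
# Assumed figures from the article: 24 PCIe 5.0 lanes total, 4 reserved for the chipset link.
cpu_lanes_total = 24
chipset_link = 4
usable = cpu_lanes_total - chipset_link          # 20 lanes left for slots/M.2

wanted = {"GPU": 16, "RAID card": 8, "NVMe #1": 4, "NVMe #2": 4}
needed = sum(wanted.values())                    # 32 lanes

print(f"Usable CPU lanes: {usable}")             # 20
print(f"Lanes wanted:     {needed}")             # 32
print(f"Shortfall:        {needed - usable}")    # 12 lanes short
```

So even counting all 24 lanes, that wish list comes up well short on CPU-attached PCIe 5.0 lanes.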
 
I'm struggling to come to grips with the fact that my 10-year-old PC config, which has an x16 (GPU) and an x8 (RAID card) slot, will not be supported on the new AM5 mobos?!? 😲 If I understand correctly, according to the statement made in the article, "...with up to 16 Zen 4 cores, up to 24 PCIe 5.0 lanes (x16 for a graphics card, x4 for an SSD, and x4 to connect to the chipset)", this would mean that only 20 PCIe 5.0 lanes are available to the user - as the chipset is taking up 4 of those 24 PCIe 5.0 lanes, yes?

And here I was hoping that I could run my current config (GPU (x16) and external RAID card (x8)) with at least 2x NVMe drives in a RAID 0 config - all in a PCIe 5.0 environment... 😭

I'm not that knowledgeable about this, but the bandwidth is definitely there. It just depends on how motherboard makers decide to split those lanes. I'm almost certain you could run one GPU plus one RAID card and get perfectly fine speeds, seeing as the top-end motherboards - at least X670/X670E - should support dual-GPU setups (even if those aren't that popular), which would definitely use more lanes than a single GPU plus a RAID card.

The article also says: "In contrast, the X670E (dual-chip design) features 'unparalleled' expandability, extreme overclocking, and PCIe 5.0 connectivity for up to two graphics cards and an M.2 slot for an NVMe SSD."

If it can support two GPUs and an NVMe SSD, I'm sure you can swap one of those GPUs for the RAID card. And the PCIe generation matters too. PCIe 5.0 bandwidth is significantly higher than what's in a 10-year-old PC (probably Gen 3?), so potentially an x4 or x8 PCIe 5.0 slot could substitute for an x16 PCIe 3.0 slot, depending on the physical slot size and how the motherboard makers split up the lanes from the CPU and chipset(s).
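
For a rough sense of scale, here's a per-generation comparison (approximate per-lane figures after 128b/130b encoding; real-world throughput will be a bit lower and depends on the board and the card):

```python
# Approximate usable bandwidth per lane after 128b/130b encoding (GB/s).
per_lane = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969, "PCIe 5.0": 3.938}

def link_bw(gen: str, lanes: int) -> float:
    """Rough link bandwidth in GB/s for a given generation and lane count."""
    return per_lane[gen] * lanes

print(f"PCIe 3.0 x16: {link_bw('PCIe 3.0', 16):5.1f} GB/s")   # ~15.8 GB/s
print(f"PCIe 5.0 x4:  {link_bw('PCIe 5.0', 4):5.1f} GB/s")    # ~15.8 GB/s
print(f"PCIe 5.0 x8:  {link_bw('PCIe 5.0', 8):5.1f} GB/s")    # ~31.5 GB/s
```

In raw bandwidth terms a PCIe 5.0 x4 link is in the same ballpark as PCIe 3.0 x16, which is where that substitution idea comes from.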
 

old_rager

Honorable
Jan 8, 2018
TCA, thanks for your thoughts. But if I want to run 1x GPU (16 lanes), 1x RAID card (x8) and 2x NVMe drives in RAID 0 (2 × x4 lanes), there's no way I'm going to be able to do this - even if all 24 lanes were available to me, yes?

BTW, I'm not sure if you saw the Biostar update to my post before you sent your response.

Again, thank you for your thoughts - and yes, I'm running Gen 3. Well picked!!! 🤣
 

escksu

Reputable
BANNED
Aug 8, 2019
Having said all of the above, I notice in Biostar's product specs (https://drive.google.com/file/d/1tljBzrCW1gimWiBNSim42DjnRd4sehF0/view?usp=sharing) that you can run two x16 slots (in either x16 or x8 mode) as well as two M.2 slots, all in PCIe 5.0 mode - somehow, I don't think these will all be available at the same time?!? So, I'm not sure what to think here, as I've literally been waiting years now to upgrade my PC, and in my opinion this is now the perfect time to do so.

Will AM5 serve me well, or will I need to look at the Threadripper environment - which is something I really don't want to do due to the extreme pricing?!? 🤔

For the 2nd NVMe slot, it has to get 4 lanes from the PCIe x16 slot.

I am not sure how the configuration works, but it's possible that the 2nd PCIe slot will not work once both NVMe slots are populated.

It's also possible that the 1st PCIe slot is x16 while the 2nd slot is strictly x4 only. In that case, if you populate the 2nd NVMe slot, the 1st PCIe slot would drop to x8 while the 2nd slot stays at x4.

I don't think there will be any PCIe 5.0 from the chipset - the chipset may be strictly PCIe 4.0 only. BTW, there are only 4 lanes to the chipset, so running PCIe 5.0 NVMe drives off the chipset will likely result in a bottleneck.
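
Rough numbers, assuming the chipset uplink really does run at PCIe 4.0 x4 as I suspect (approximate per-lane figures after encoding overhead):

```python
# Why a PCIe 5.0 NVMe drive hanging off the chipset would be choked by a PCIe 4.0 x4 uplink.
GEN4_PER_LANE = 1.969     # GB/s per lane, approx.
GEN5_PER_LANE = 3.938

uplink_gen4_x4 = 4 * GEN4_PER_LANE    # ~7.9 GB/s, shared by everything behind the chipset
gen5_x4_ssd    = 4 * GEN5_PER_LANE    # ~15.8 GB/s a Gen5 x4 drive could ask for

print(f"Chipset uplink (PCIe 4.0 x4): ~{uplink_gen4_x4:.1f} GB/s")
print(f"One PCIe 5.0 x4 SSD:          ~{gen5_x4_ssd:.1f} GB/s")
print(f"Best case behind the chipset: ~{min(uplink_gen4_x4, gen5_x4_ssd):.1f} GB/s")
```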
 

escksu

Reputable
BANNED
Aug 8, 2019
TCA, thanks for your thoughts. But if I want to run 1x GPU (16 lanes), 1x RAID card (x8) and 2x NVMe drives in RAID 0 (2 × x4 lanes), there's no way I'm going to be able to do this - even if all 24 lanes were available to me, yes?

BTW, I'm not sure if you saw the Biostar update to my post before you sent your response.

Again, thank you for your thoughts - and yes, I'm running Gen 3. Well picked!!! 🤣

No, that's not possible. Only 20 lanes are available to the user. Also, if you are using a RAID card, you need to use the 2nd slot. Some boards run x8/x8, while others run x16 (dropping to x8 when the 2nd slot is populated)/x4.

Currently there are no consumer PCIe 5.0 RAID cards. Perhaps they will start appearing this year or next year. PCIe 5.0 PLX chips have been available since 2021.
 
Aug 4, 2022
I am stoked about finally building a new machine after 14 years for these new chips, but whoa, a lot has changed. Where have all the expansion slots gone? My P6T SE has 6, vs. these boards' 2-3. What happened to SATA RAID 5? I don't see any SATA ports on these guys; my current P6T SE has 6 internal ports and 1 external PATA port.
EDIT: I see the SATA ports now going off to the right of the boards, along with other connectors. 👍
 

old_rager

Honorable
Jan 8, 2018
Escksu, my RAID card is a PCIe 3.0 x8 Adaptec 2277000-R Single 8885 RAID controller supporting 12Gb/s. So regardless, it's going to take up 8 lanes, I'm assuming - unless there is some way the system would only use 2 of the PCIe 5.0 lanes to actually transfer the data, which should equal what 8 lanes would do in the PCIe 3.0 world. As I'm running 120TB of storage, my RAID card is a mandatory requirement.
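
A rough sketch of the bandwidth involved (approximate per-lane figures; and, as far as I understand it, a PCIe 3.0 card negotiates a PCIe 3.0 link even in a 5.0 slot, so only the lane count helps an older card, not the slot's generation):

```python
# Assumption: a PCIe 3.0 card links at PCIe 3.0 signalling regardless of the slot's
# generation, so its bandwidth scales only with the number of lanes it gets.
GEN3_PER_LANE = 0.985                 # GB/s per lane, approx., after 128b/130b encoding

for lanes in (8, 4, 2):
    print(f"PCIe 3.0 x{lanes}: ~{lanes * GEN3_PER_LANE:.1f} GB/s")

# Each 12 Gb/s SAS port is roughly 1.2 GB/s after encoding, so a loaded 12G SAS
# RAID card really can make use of a full x8 Gen3 link (~7.9 GB/s).
print(f"One 12 Gb/s SAS port: ~{12 / 10:.1f} GB/s")
```

So, if that assumption holds, two PCIe 5.0 lanes wouldn't substitute for eight PCIe 3.0 lanes - the link would just train at PCIe 3.0 x2 (~2 GB/s).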
 
Could be wrong, but my understanding is that the X670E boards will have 2 chipsets/southbridges, each with an x4 connection to the CPU and 24 PCIe lanes available. So lots of connection possibilities for the board makers.
 

shady28

Distinguished
Jan 29, 2007
Escksu, my RAID card is a PCIe 3.0 x8 Adaptec 2277000-R Single 8885 RAID controller supporting 12Gb/s. So regardless, it's going to take up 8 lanes, I'm assuming - unless there is some way the system would only use 2 of the PCIe 5.0 lanes to actually transfer the data, which should equal what 8 lanes would do in the PCIe 3.0 world. As I'm running 120TB of storage, my RAID card is a mandatory requirement.

You would probably put this in a system with two x16 PCIe slots. They would be running at x8 when both are populated; some boards may allow them to run at x16, but they will be bank-switching back and forth, which adds latency. What I don't know is whether putting a PCIe 3.0 card into one of those slots would cause them both to drop to PCIe 3.0 speeds. That might be bad if you are using a high-powered GPU that needs the bandwidth. However, I'm sure it would 'work'.

These are still desktop chipsets, though. They're fine for a high-powered GPU and a couple of fast x4 M.2 SSDs, but go much past that and compromises will be made. For example, the PCIe 5.0 x4 link going to a chipset could become a bottleneck if you have two PCIe 5.0 x4 M.2 SSDs and a couple of USB 3.2 2x2 ports running off that chipset, all loaded at the same time.

Obviously, if those devices can saturate their respective buses, you can't pump 8 PCIe 5.0 lanes' worth of traffic from a pair of M.2 drives plus some USB 3.2 2x2 traffic across the 4 PCIe 5.0 lanes going to the chipset. The chipset will manage it, but you'll see performance degrade.
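
To put rough numbers on that scenario (approximate figures; raw port speeds, not real-world throughput):

```python
# Peak demand behind the chipset vs. the x4 PCIe 5.0 uplink in the example above.
GEN5_PER_LANE = 3.938          # GB/s per lane, approx.
USB32_2X2     = 20 / 8         # 20 Gb/s per port -> 2.5 GB/s raw

uplink = 4 * GEN5_PER_LANE                        # ~15.8 GB/s to the CPU
demand = 2 * (4 * GEN5_PER_LANE) + 2 * USB32_2X2  # two Gen5 x4 SSDs + two USB 3.2 2x2 ports

print(f"Chipset uplink:   ~{uplink:.1f} GB/s")
print(f"Peak demand:      ~{demand:.1f} GB/s")
print(f"Oversubscription: ~{demand / uplink:.1f}x")
```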

It doesn't sound like you are going to have that situation, but these aren't HEDT systems. That's why HEDT chips like the 10980XE have 48 PCIe lanes and some of the Threadrippers have 88 PCIe lanes.
 
TCA, thanks for your thoughts. But if I want to run 1x GPU (16 lanes), 1x RAID card (x8) and 2x NVMe drives in RAID 0 (2 × x4 lanes), there's no way I'm going to be able to do this - even if all 24 lanes were available to me, yes?

BTW, I'm not sure if you saw the Biostar update to my post before you sent your response.

Again, thank you for your thoughts - and yes, I'm running Gen 3. Well picked!!! 🤣
Sorry mate, didn't see your Biostar update until just now. I'm still just as lost as you on the specifics. There are some other commenters here with a lot more knowledge than I have on this - @shady28 seems to know what he's talking about. You could also try asking for more info on the forums, probably the hardware section's motherboard or storage board.
 
They are definitely not new to it. They had some nice boards back in the AM2/AM2+ days. They just haven't done much in recent years.

Maybe it's because I'm older, but I remember the trash that was Biostar back in the day! Perhaps they have gotten better over the decades (I would hope so), but having been burned by their boards in the past, I'd have to really want to try them out again.

Of course, with electronics companies, a lot had terrible products in the past and are doing better now. It would be really hard to find a well-established company that hasn't had a bad product in the past.
 
Escksu, my RAID card is a PCIe 3.0 x8 Adaptec 2277000-R Single 8885 RAID controller supporting 12Gb/s. So regardless, it's going to take up 8 lanes, I'm assuming - unless there is some way the system would only use 2 of the PCIe 5.0 lanes to actually transfer the data, which should equal what 8 lanes would do in the PCIe 3.0 world. As I'm running 120TB of storage, my RAID card is a mandatory requirement.

You have more money sunk into the storage, so why bother with desktop boards? Something like a Threadripper board would give you all the PCIe lanes you want; you could also find a server board (even used) that would give you more lanes. Even better, why not go to a dedicated storage array (NAS or small SAN) with a 10GbE NIC, and just move the heavier storage requirements off your local desktop?
 
You have more money sunk into the storage, so why bother with desktop boards? Something like a Threadripper board would give you all the PCIe lanes you want; you could also find a server board (even used) that would give you more lanes. Even better, why not go to a dedicated storage array (NAS or small SAN) with a 10GbE NIC, and just move the heavier storage requirements off your local desktop?

It's not like the move to TR is a cheap one, and while what you mention is true in terms of having more lanes than he'd be likely to use, the pricing could make it a non-starter from the jump.
 

abufrejoval

Reputable
Jun 19, 2020
You have more money sunk into the storage, so why bother with desktop boards? Something like a Threadripper board would give you all the PCIe lanes you want; you could also find a server board (even used) that would give you more lanes. Even better, why not go to a dedicated storage array (NAS or small SAN) with a 10GbE NIC, and just move the heavier storage requirements off your local desktop?

I am in a very similar situation. This system was actually built more or less from scratch for the purpose I'll expand on below, but that's rare for me. Most of the time it's a continuous replacement of parts or rebuilding a rig from cast-offs for kids & family.

And then it's rare that I have a single device for a single purpose. Quite often single devices serve multiple purposes (e.g. work & gaming), or even multiple devices serve a single one (cluster servers). But I prefer using standard parts to remain flexible and reuse things elsewhere: flexibility has been my main motivation for choosing PCs over any of the other personal computers since 1986.

Some ten years ago, when both the home lab and the family came to require more permanent and higher-performance facilities than an older notebook could provide, I had to come up with a solution that wouldn't require too many machines yet would provide all the facilities needed with as little power and noise as possible, because we can't just run network cables from the 2nd floor to the cellar in a protected building that is more than 170 years old. Also, remote-management-capable hardware comes at a stiff premium in price, power and other constraints.

Kit description

I built on a Haswell Xeon E3-1276 v3 base (essentially an i7-4770 with ECC support) using an ASUS P9D WS board that supports PCIe 3.0 8+4+4 trifurcation. It is an all-in-one 24x7 server and desktop that combines near-silent operation and "low" power consumption with enough OOMPH! to run ordinary desktop workloads directly or via terminal services, THD remote gaming via Steam Remote, and space for a couple of VMs if need be.

It runs Windows Server 2019, sports 32GB of ECC DDR3 RAM, an Nvidia GTX 1060 6GB, an LSI MegaRAID SAS 9261-8i and an Aquantia AQC107 10Gbit Ethernet NIC. It boots off a SATA SSD, has a hot-swap drive bay for a 3.5" backup HDD, 4x 1TB SATA SSDs as RAID 0 from onboard ports for a games cache, and 6x 6TB HDDs as RAID 6 off the SmartRAID controller. The GPU isn't seriously slowed down by only having 8 lanes, and the RAID controller and the NIC are well enough served with 4 lanes each.
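
For the curious, a rough look at the headroom in that 8+4+4 split (the per-drive HDD figure is just an assumption of ~250 MB/s sequential):

```python
# PCIe 3.0 link bandwidth vs. what the RAID controller and the NIC can actually push.
# Assumptions: ~250 MB/s sequential per HDD; the GTX 1060 rarely saturates even x8 Gen3.
GEN3_PER_LANE = 0.985                  # GB/s per lane, approx.

raid_link = 4 * GEN3_PER_LANE          # ~3.9 GB/s
raid_load = 6 * 0.25                   # 6 HDDs in RAID 6 -> ~1.5 GB/s sequential
nic_link  = 4 * GEN3_PER_LANE          # ~3.9 GB/s
nic_load  = 10 / 8                     # 10 Gbit/s -> ~1.25 GB/s

print(f"RAID: link ~{raid_link:.1f} GB/s vs. ~{raid_load:.1f} GB/s from the disks")
print(f"NIC:  link ~{nic_link:.1f} GB/s vs. ~{nic_load:.2f} GB/s of 10GbE traffic")
```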

It has run 24x7 for almost a decade in a Sharkoon tower with a huge but slow (430 rpm) side fan for the 5k HDDs, which are all in Sharkoon drive silencers (rubber-band construction), and it is completely unnoticeable even at arm's length underneath the table, consuming perhaps 40-50 Watts from the socket at idle. You won't squeeze a Threadripper into the same power and noise envelope, nor most office SANs. The CPU cooler is a big Noctua (940 rpm), and the MSI GPU keeps its fans off even during lighter gaming workloads; even at full power it wouldn't distract from working.

Apart from being the home-lab & family file server, it also acts as a multi-user Windows terminal server for all those special apps I don't want to install and maintain on all the other machines (~20 desktops and notebooks in the house). The 10Gbit network connects to the much bigger workstations and a backup server, which don't run 24x7 but only when needed, as those tend to get far hotter and noisier (no air conditioning around here). It also connects to a couple of oVirt clusters, but that's a different story.

On the desk there are three monitors, the principal one being a 43" 4K screen, while the others are THD. A set of KVM switches allows driving the screens either from the same machine or from different ones, using a light stow-away keyboard for "the other".

Ideally I'd have run Linux underneath with Windows in a VM, using PCI pass-through for the GPU to enable gaming. It would have enabled things like ZFS for the storage, and that setup worked, but it turned out too complex to maintain, and in the case of a "remote hands by kids" operation ("power off - wait - power on" was their initial skill level) it wouldn't have agreed with my frequent business trips. That's why I settled for Windows as the base OS and a RAID controller to handle HDD failures.

It's been extremely stable - no hardware failures except for HDDs, which were handled by the RAID. The most disruptive events have been the OS upgrade from Windows 2013 as well as Microsoft force-rebooting the machine while it was running VMs, even after I tried just about every means of telling it not to do automatic reboots (that worked just fine with Windows 2013). BTW, I am using VMware Workstation for the VMs, simply because I've been using their products since 1999 and never felt motivated to switch to Hyper-V. Not sure that the forced patch-day reboots would respect Hyper-V VMs any better...

How to upgrade?

There is nothing wrong with the current setup, no serious shortcoming. But current hardware offers twice the scalar speed, twice the cores and twice the RAM at a similar wattage. And at that point, there is a bit of an itch constantly looking for an excuse to shuffle hardware.

I have no idea if ECC has ever saved my bacon, but it's given me peace of mind. Actually, Intel didn't really charge a big premium for E3 Xeons, and even for the RAM it was ok. I had to pay disproportionately more for ECC DIMMs on the 128GB DDR4 workstations, but once you have a billion bits in a computer you use professionally, it just helps to reduce stress. I don't overclock either, except for a bit of burn-in testing for new hardware.

With AMD, ECC is easy again, except for the APUs, which I was eyeing for a potential replacement: still am, even with Ryzen 7000 looming. But at a €50 premium for an ECC APU, that's still an option on the table... except that Ryzen 7000 also has an iGPU, which can help with idle power and PCIe pass-through experiments... once kernels and hypervisors catch up and know how to manage a mix of iGPU and dGPU from AMD and Nvidia. I do run lots of CUDA stuff, not that much on this machine, but an AMD dGPU is not an option.

With Intel, the near-complete lack of W680 mainboards, especially DDR4 ECC variants, means the money just couldn't be spent: the announced boards simply aren't available for purchase.

DDR5 with full ECC is new trouble, but I am close to giving in and sticking with the internal (on-die) DDR5 ECC for this build. However, that's also why the 5750G APU isn't quite off the table yet...

10Gbit nightmare revisited

Gigabit networking was pretty cool when it became available in the last millennium. But even a single piece of spinning rust can saturate that bandwidth. The wait for affordable 10Gbit switch hardware was far too long, but with near-noiseless desktop NBase-T switches at >€50/port, the biggest hurdle seemed gone. And an Aquantia AQC107-based NIC isn't too pricey at €90 when it saves you from having big Steam games on a separate SSD in every PC instead of on the network. I also keep pushing big VMs around, and backups run much faster on 10Gbit/s Ethernet, too.

But where in pre-NVMe times there was hardly any other meaningful use for the 4 extra PCIe lanes the usual southbridge provided to the single non-GPU slot, today sacrificing 4 lanes of PCIe 4.0 or 5.0 for a 10Gbit NIC is horrible, when a single 4.0 lane should suffice. And there is that AQC113 chip, which will use 1x 4.0, 2x 3.0 or 4x 2.0 lanes depending on how it is wired on an add-in card or a motherboard.
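
Roughly, any of those AQC113 link options clears 10 Gbit/s, which is why burning four Gen4/Gen5 lanes on a 10Gbit NIC feels so wasteful (approximate per-lane figures):

```python
# What a 10GbE NIC needs vs. the link widths the AQC113 can be wired for.
per_lane = {"PCIe 2.0": 0.5, "PCIe 3.0": 0.985, "PCIe 4.0": 1.969}   # GB/s, approx.
need     = 10 / 8                                                    # 10 Gbit/s ~= 1.25 GB/s

# The last entry shows why a PCIe 3.0 x1 link (as on some chipset block diagrams)
# is a real limitation for 10GbE, while a single 4.0 lane is already enough.
for gen, lanes in (("PCIe 4.0", 1), ("PCIe 3.0", 2), ("PCIe 2.0", 4), ("PCIe 3.0", 1)):
    bw = per_lane[gen] * lanes
    verdict = "enough" if bw >= need else "too slow"
    print(f"{gen} x{lanes}: ~{bw:.2f} GB/s -> {verdict} for 10GbE")
```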

Except that nobody is selling matching add-in cards, and motherboard vendors only add the ($20?) chip on $500 mainboards. And for some reason even the single-lane PCIe slots that would be just perfect for an AQC113 card are now gone. I can't plug my AQC107 PCIe 2.0 x4 cards into a PCIe 4.0/5.0 x16 slot without it hurting, especially if there is still a RAID controller that needs similar bandwidth and likes to use 8 lanes. Of course the RAID controller is actually more than a decade old, but it's still perfect at managing HDDs with fault tolerance (and I've got spare controllers). Perhaps I'll have to switch to a RAID 10 with 4 helium HDDs for the file-server part of the system, but managing drive faults won't be nearly as comfortable, I'm afraid. And the 6TB drives had just been replaced for free by Seagate last year, after the previous 8x 4TB 2.5" drives were discovered to be shingled media, which don't play nice with RAID 6.

Insane Expansion

The current breed of mainboards looks quite "unusual": while they are supposed to have more lanes than ever, your options for managing the bandwidth to suit your needs are getting smaller. Most lanes seem tied to M.2 slots on the mainboard, which are about the least flexible medium for anything, including storage expansion.
The block diagram on Angstronomics puts the x1 link intended for onboard Ethernet at PCIe 4.0 on the B650, which would be good enough for 10Gbit Ethernet, but the X670 variant has it pegged as PCIe 3.0 x1, which would be a disastrous limitation for 10Gbit Ethernet. There is around 80Gbit/s of collective USB bandwidth on the ASMedia chip, but you can't network with that.

That leaves bifurcation, which is a bit painful when you put a PCIe 2.0 controller into 8 lanes of PCIe 5.0 goodness. Unfortunately it seems that 8+4+4 or even 4+4+4+4 splits aren't supported, which I've learned to appreciate almost as much as true PCIe switches.

We know that the Zen 4 CCDs will speak PCIe 5.0, Infinity Fabric and CXL, either directly or via the IODs, and that CXL 1 server processors will be built from those CCDs. What's missing is switching on the motherboard end. For Zen 4, AMD evidently decided to go cheap (granted, early Ryzen 7000 customers might never notice). The ASMedia chips are completely generic PCIe 4.0, ignorant of IF or CXL, quite unlike the cut-down IOD chips in the X570.

GPU-less variants of those IODs, perhaps with a 1:4 fan-out into PCIe, SATA, USB or even NBase-T Ethernet ports and a PCIe 5.0 uplink, could be fitted to any of the 7 groups of PCIe x4 lanes that I see leaving the Ryzen SoC. And they could even speak CXL!

They could be put on extension boards themselves instead of the mainboard, so you could choose between a 4x M.2 board with easy access for swapping media, other variants that are more USB- or SATA-heavy, or a CXL uplink.

Currently mainboards are running out of slot space because of M.2 sprawl and are running into cooling issues for high-performance drives. A dual-width slot M.2 expander with such a switch could run with a quiet fan and still provide much easier storage upgrades than what you can squeeze onto mainboards today.
 
Currently mainboards are running out of slot space because of M.2 sprawl and are running into cooling issues for high-performance drives. A dual-width slot M.2 expander with such a switch could run with a quiet fan and still provide much easier storage upgrades than what you can squeeze onto mainboards today.

Have you checked out some of ASRock Rack's (or SuperMicro/Tyan's) server boards for consumer purchase? I've used them for some small server builds, and they have a number of different slot options available. The X570D4U-2L2T/BCM in particular might be of use to you, with both dual 10GbE onboard and multiple slots with different numbers of PCIe lanes. Since it's X570-based, you'll only be going up to 5000-series CPUs/PCIe 4.0, but it might be a usable upgrade for you.
 

abufrejoval

Reputable
Jun 19, 2020
Yes, from the variants that they offer, that one would fit the requirements, if not the bill. I also quite like the X570D4I-2T.

But the only ones available at retail seem to be those without 10Gbase-T, and even those are scratching four digits for the board alone. I've got quite a few SuperMicros at work and I like them well enough there, where I don't have to foot the bill.

The choice of Broadcom NICs in your first suggestion is an interesting one. I remember killing a few of those with firmware upgrades perhaps 10 years back. It might have had something to do with the fact that I was doing those upgrades in a 990FX system with a Phenom II X6 CPU instead of the target Intel servers: I had to do six NICs, and I only noticed that they went dead after the second one was done... We went with Intel X540 NICs after that: dumber, but less trouble.

I should still have several BCM dual 10Gbase-T NICs around in a box, but apart from the firmware troubles, they eat quite a bit of power, around 10 Watts per port. They get really hot in a desktop without server-class ventilation, even with their chunky passive coolers.

I used them in a direct-connect mesh until I finally got affordable NBase-T desktop switches. That's why I then went with the Aquantia AQC107 for my home-lab machines: only 4 lanes and around 3 Watts per port. Very dumb plain Ethernet, none of the offload features (which I wouldn't know how to use on KVM/oVirt anyway), but really easy on the money and well supported on Linux. With a 9k MTU they scream even on Windows.

I've got plenty of NICs, so mostly I'd need slots to put them into. But I'd take an onboard AQC113 for €90 extra, too, if it saved me a bigger slot for grander things. It's paying €400 extra for an onboard 10Gbit NIC chip that probably costs less than €20 which I find disproportionate.

And then I am a bit disappointed that all these mergers by Avago/Marvell have made affordable PLX PCIe switches disappear, because I'd sure like the flexibility they could offer. A HighPoint SSD7540 controller allows aggregating 8 NVMe drives into a single slot, but the switch chip alone probably accounts for more than €1000 of its €1400 price tag. When you can get an entire Ryzen CPU with more switching capability for less, you wonder if AMD made the right choice going with ASMedia. Using a Zen 3 IOD for the X570 wasn't very cost-effective (except for fulfilling the GF wafer deals), and a Zen 4 IOD would be even less so - unless they made one without a GPU.

CXL switches shouldn't be any cheaper than PCIe switches, given their complexity, but perhaps open competition and volume will help anyway...

Looking at launch prices, I may just stick with a last-gen X570S build, getting full ECC DRAM and compatibility with every current OS for only 500MHz less in average clocks. I might even throw in a 5800X3D instead of the more sensible 5700X, just because I can recycle a 10Gbit card I'd otherwise have to purchase for crazy money. Or a 5900X, because VMs might profit more from 12 cores with 64MB of cache than from 8 cores with 96MB of cache...

Then again, for idle power perhaps the 5750G would be better, if the Nvidia GPU drivers were able to cooperate with the iGPU like they do with Intel iGPUs on notebooks. Somehow I doubt that will work, or get better, with Ryzen 7000 and Nvidia dGPUs.

Choices, pitfalls...