News PCBye: Researchers Want to Ditch the Motherboard

Aug 29, 2019
With advancements in mobile technology, especially with SoCs, the motherboard is almost gone. The days of the desktop are numbered; it needs to evolve or become a very niche market.

What I think, instead of expanding the cores on a single chip, is to make a system with pluggable cores that you add to the system. So you have multiple CPU slots and even GPU slots - you start out with a base system with, say, 4 or 8 cores, and if you want more you add another card in another slot. It would be really cool to keep the existing generation when a new generation comes out. So the motherboard becomes a container of slots, including a slot for I/O, which can all be upgraded. Imagine buying a system, a new CPU or GPU comes out, and you're still able to use the existing hardware on the same system. GPUs may be more complicated.
 

artk2219

Distinguished
With advancements in mobile technology, especially with SoCs, the motherboard is almost gone. The days of the desktop are numbered; it needs to evolve or become a very niche market.

What I think, instead of expanding the cores on a single chip, is to make a system with pluggable cores that you add to the system. So you have multiple CPU slots and even GPU slots - you start out with a base system with, say, 4 or 8 cores, and if you want more you add another card in another slot. It would be really cool to keep the existing generation when a new generation comes out. So the motherboard becomes a container of slots, including a slot for I/O, which can all be upgraded. Imagine buying a system, a new CPU or GPU comes out, and you're still able to use the existing hardware on the same system. GPUs may be more complicated.

The blade server chassis says hullo. I agree it could be nice to bring that to a more consumer level though.
 

MasterMadBones

Distinguished
Sounds cool, but the material itself is very fragile. It's also disastrous from a serviceability point of view. I understand the motivation behind it, because it can significantly improve performance and power consumption, but for some markets the tradeoffs are just unacceptable.
 

vaughn2k

Distinguished
It's been used for 20 years now. Ever heard of SoC/CSP?
Though it can't be implemented across all platforms due to cost, operation, and application.
You still need the motherboard to connect everything.
It's all about the landscape.
 

bit_user

Polypheme
Ambassador
With advancements in mobile technology, especially with SoCs, the motherboard is almost gone.
Yeah, I'm thinking once cell phones switch to using some form of stacked DRAM, there probably wouldn't be much left on the PCB except power and I/O. And some phones are down to just a USB port for I/O (and concept phones exist without even USB). I think ARM even has a way of getting rid of physical SIMs.

Imagine buying a system, a new CPU or GPU comes out
Yeah, and if you could somehow swap out the old CPU or GPU and replace them with a new one... what a concept!

I know you wanted to add, instead of replace. Well, back in the days of multi-CPU motherboards, it was certainly possible to run a multi-CPU config with different CPUs. Operating systems didn't like it, but there's no reason it can't work. Maybe they had to be the same clock speed, but that wouldn't be so hard to address if there were demand.

And motherboards with multiple x16 (at least physically) PCIe slots can let you plug in a newer GPU alongside your old one! Genius!
 

bit_user

Polypheme
Ambassador
What I'd worry about is ESD. Wouldn't you need some larger-scale components to offer ESD protection, at least at I/O ports?

And wouldn't you need some PCB-mounted connectors to deal with the stress & strain placed on them?
 

Giroro

Splendid
Saying a PCB should be replaced with silicon is the Electronics Engineering equivalent of saying an entire airplane should be built like its indestructible black box (nonsense to the point that I doubt that's what they actually proposed).

Making an SoC using multiple chiplets that can be mounted on a much simpler PCB - no issue with that, and it already exists... innovating on current chiplet interconnects is probably what they are actually saying.
...But if not, do they have any concept whatsoever of how much a silicon wafer costs to produce, or how one would mount connectors for any kind of human interface, display, antennas, Ethernet, etc.? How do they propose to transmit large amounts of current to the different components, or even attach a power supply in the first place? How are you going to mount that SoC in a chassis without socketing it into something?
We use traditional PCBs for some pretty obvious reasons: They're effective, durable, and cheap.

Also to be pedantic, I would argue that if a current system doesn't have multiple PCBs, then it already doesn't have a motherboard.
 

Egladios

Reputable
I am not in any way a specialist in the field, but my experience with single-board computers is that if one part becomes defective, the whole thing becomes inoperable - unless you want all consumers to be technicians. And as previous comments have pointed out, the wafer size makes it easily breakable. You would also limit the business to chip manufacturers, and hence create a monopoly. Sadly, many of the board manufacturers would venture into chip manufacturing, and the main companies would end up losing market share. The IBM PC went down because they wanted to do it all.
 

bloodroses

Distinguished
I am not in any way a specialist in the field, but my experience with single-board computers is that if one part becomes defective, the whole thing becomes inoperable.

Unfortunately, that's the way most electronics are going these days. It's cheaper for them to manufacture overall, and they can charge more to replace the entire unit instead of just a part of it. In a business sense, it's a win/win - unless you're Radio Shack.

I shudder to think of the day of having to replace an entire all-in-one unit costing the equivalent of an i9-7980XE + 1080 Ti, just because a cheap $0.10 Realtek audio part quits....
 

g-unit1111

Titan
Moderator
It's an interesting idea, I'll give them that. But wearable tech and data center tech are definitely not the same as desktop PC tech. There's a reason why all-in-one PCs don't perform on the same level as a 12-core desktop PC that you build yourself. And there's a reason why a 12-core desktop PC doesn't perform the same as a 32-core data center processor.
 

TJ Hooker

Titan
Ambassador
Saying a PCB should be replaced with silicon is the Electronics Engineering equivalent of saying an entire airplane should be built like its indestructible black box (nonsense to the point that I doubt that's what they actually proposed).
The article being referenced is linked in the Tom's article; you can look for yourself. It certainly seems to me that's what they're proposing. What I'm not sure about is whether the authors imagine this approach being used in regular desktop systems, or just in more specialized applications like mobile and HPC, as mentioned.

https://spectrum.ieee.org/computing/hardware/goodbye-motherboard-hello-siliconinterconnect-fabric
 

InvalidError

Titan
Moderator
What I'd worry about is ESD. Wouldn't you need some larger-scale components to offer ESD protection, at least at I/O ports?
No, most chips already have some degree of intrinsic ESD protection in the form of the parasitic body diode in the IO driver and active termination transistors. With the speed of modern IO busses though, we're at the point where achieving the highest speeds will require a buffer chip between the CPU/chipset and connector for signal integrity reasons anyway, so supplemental ESD protection would need to be built into those chips instead.

My vision of future CPUs, as integration gets further along, is the CPU becoming its own self-contained assembly with fiber-optic connections to core components and legacy IO (PCIe, electrical Thunderbolt, USB, SATA, etc.) break-out boards.

BTW, some modern smartphones and tablets already throw DRAM and/or eMMC under the SoC to save space and power. With 2.5D tech becoming more pervasive, this is about to reach mainstream and will be the bane of data recovery.
 

Deleted member 14196

Guest
This reminds me of kicking the old peanut around in the playground with what-ifs.... lol. Also that episode of Sealab 2021 - "I, Robot" - where they truly contemplate 'wearable' tech.
 

bit_user

Polypheme
Ambassador
I need a working prototype video, not some company's dream posted as a headline. How about the headline "XYZ company wants world peace"?
This article is a summary of one in IEEE Spectrum - an engineering journal oriented towards a broader range of hardware-focused interests. See TJ's link, above.

Its authors (both senior electrical engineering professors at UCLA) are not just "dreaming of world peace", but discussing various practical issues and work that's been done towards this end. It's worth a look, if you're interested in the subject.
 

InvalidError

Titan
Moderator
This article is a summary of one in IEEE Spectrum - an engineering journal oriented towards a broader range of hardware-focused interests. See TJ's link, above.
One has to wonder about the cost-effectiveness and mechanical toughness of a 1 mm thick, 400 cm^2 (200x200 mm) silicon motherboard-not-motherboard-thingy... you can only make one of those per $1000+ 300 mm wafer, which makes me scratch my head about how this could ever compete on cost-performance against PCBs unless you are absolutely desperate for density. This stuff isn't going to make it beyond very high-end systems.

Another issue with putting a whole lot more semiconductor-grade silicon in systems is that we're starting to have sand shortages, which means we can expect silicon ingots to get significantly more expensive and the pressure to move towards thinner chips and interposers to increase. Also, as someone pointed out earlier, I'd be extremely wary of putting user IO connectors on a glass substrate; you're still going to want a PCB for things that need to be more mechanically tolerant.
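A quick back-of-envelope on that one-slab-per-wafer point, in Python. The wafer price and board dimensions are my own rough assumptions to show the arithmetic, not figures from the article:

```
import math

# Assumed figures, order-of-magnitude only
wafer_diameter_mm = 300
wafer_cost_usd = 1000        # assumed cost of a processed 300 mm wafer
board_side_mm = 200          # the proposed 200x200 mm silicon "motherboard"

# A square fits inside a circular wafer only if its half-diagonal fits the radius
half_diagonal_mm = board_side_mm * math.sqrt(2) / 2
fits = half_diagonal_mm <= wafer_diameter_mm / 2

board_area_cm2 = (board_side_mm / 10) ** 2
wafer_area_cm2 = math.pi * (wafer_diameter_mm / 20) ** 2

print(f"200x200 mm slab fits on a 300 mm wafer: {fits}")  # True, barely
print(f"Uses {board_area_cm2:.0f} of {wafer_area_cm2:.0f} cm^2 "
      f"({board_area_cm2 / wafer_area_cm2:.0%} of the wafer)")
print(f"Implied substrate cost: ${wafer_cost_usd / board_area_cm2:.2f} per cm^2")
```

So yes, one slab per wafer, with over 40% of the wafer wasted before you even pattern anything.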

PCBs aren't going anywhere any time soon. We'll still need 'em even after entire core systems get integrated into a single 20x20mm SoC stack.
 

bit_user

Polypheme
Ambassador
One has to wonder about the cost-effectiveness and mechanical toughness of a 1 mm thick, 400 cm^2 (200x200 mm) silicon motherboard-not-motherboard-thingy... you can only make one of those per $1000+ 300 mm wafer, which makes me scratch my head about how this could ever compete on cost-performance against PCBs unless you are absolutely desperate for density.
Well, this certainly isn't my area, but I'd recommend you give the original article a read-through. Here's their general statement about cost:
There’s no getting around the fact that the material cost of crystalline silicon is higher than that of FR-4. Although there are many factors that contribute to cost, the cost per square millimeter of an 8-layer PCB can be about one-tenth that of a 4-layer Si-IF wafer. However, our analysis indicates that when you remove the cost of packaging and complex circuit-board construction and factor in the space savings of Si-IF, the difference in cost is negligible, and in many cases Si-IF comes out ahead.
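To see how those factors could offset each other, here's a toy model. The ~10x per-area ratio comes from the quote above; every other number (areas, packaging cost) is a placeholder I made up purely for illustration:

```
# Toy model of the quoted trade-off: Si-IF costs ~10x more per mm^2 than an
# 8-layer PCB, but removes chip-packaging cost and shrinks the footprint.
# All specific numbers are invented placeholders, not figures from the paper.

def system_cost(area_mm2, cost_per_mm2, packaging_usd):
    """Substrate cost plus whatever packaging the approach still needs."""
    return area_mm2 * cost_per_mm2 + packaging_usd

PCB_COST_PER_MM2 = 0.0004                    # assumed 8-layer PCB cost, USD/mm^2
SIIF_COST_PER_MM2 = 10 * PCB_COST_PER_MM2    # the ~10:1 ratio cited above

pcb = system_cost(area_mm2=150_000, cost_per_mm2=PCB_COST_PER_MM2,
                  packaging_usd=45)          # large board + many packaged chips
siif = system_cost(area_mm2=25_000, cost_per_mm2=SIIF_COST_PER_MM2,
                   packaging_usd=0)          # bare dielets bonded to the wafer

print(f"PCB system:   ${pcb:.2f}")           # ~$105
print(f"Si-IF system: ${siif:.2f}")          # ~$100
```

Whether the real numbers land that close is exactly what's up for debate, but that's the shape of their argument.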

Another issue with putting a whole lot more semiconductor-grade silicon in systems is that we're starting to have sand shortages, which means we can expect silicon ingots to get significantly more expensive
I know that river sand is being depleted. That's needed for structural concrete, since it has the right shape for the grains to interlock. You can't even substitute it with ocean sand, believe it or not, much less desert sand.

Since semiconductor materials need to be of extremely high purity, I don't imagine ordinary construction sand can be used directly anyway. And silicon is one of the most abundant elements in the Earth's crust. So, unless you know differently, I wouldn't assume the sand shortage is affecting semiconductor manufacturing.

Also, as someone pointed out earlier, I'd be extremely wary of putting user IO connectors on a glass substrate, still going to want a PCB for things that need to be more mechanically tolerant.
Yeah, that was one of my concerns. They merely acknowledge the issue:
when Si-IF–based systems are properly anchored and processed, we expect them to meet or exceed most reliability tests, including resistance to shock, thermal cycling, and environmental stresses.
In addition, the chassis, mounts, connectors, and cabling for silicon wafers need to be engineered to enable complete systems.

And while I'm quoting stuff...
I shudder to think of the day of having to replace an entire all-in-one unit costing the equivalent of an i9-7980XE + 1080 Ti, just because a cheap $0.10 Realtek audio part quits....
Here's what they say:
We also need to consider system reliability. If a dielet is found to be faulty after bonding or fails during operation, it will be very difficult to replace. Therefore, SoIFs, especially large ones, need to have fault tolerance built in. Fault tolerance could be implemented at the network level or at the dielet level. At the network level, interdielet routing will need to be able to bypass faulty dielets. At the dielet level, we can consider physical redundancy tricks like using multiple copper pillars for each I/O port.
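Just to make the network-level idea concrete, here's a toy sketch (mine, not theirs) of hop-by-hop routing across a grid of dielets that goes around dead ones:

```
from collections import deque

# Toy illustration of interdielet routing that bypasses faulty dielets:
# breadth-first search over a grid of dielet positions, treating faulty
# dielets as holes the route has to go around. Purely illustrative.
def route(grid_w, grid_h, src, dst, faulty):
    """Return a shortest hop path from src to dst, or None if unreachable."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        x, y = queue.popleft()
        if (x, y) == dst:
            path, node = [], dst
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = step
            if 0 <= nx < grid_w and 0 <= ny < grid_h \
                    and step not in faulty and step not in prev:
                prev[step] = (x, y)
                queue.append(step)
    return None  # destination cut off - fault tolerance has limits

# A 4x4 fabric with two dead dielets; traffic reroutes around them.
print(route(4, 4, src=(0, 0), dst=(3, 3), faulty={(1, 1), (2, 1)}))
```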
 

bit_user

Polypheme
Ambassador
When you look at the original article, one thing that's striking is how they almost seem to be talking about something FPGA-like, but using building blocks at a different scale:
Of course, the benefit of dielet assembly depends heavily on having useful dielets to integrate into new systems. At this stage, the industry is still figuring out which dielets to make. You can’t simply make a dielet for every subsystem of an SoC, because some of the individual dielets would be too tiny to handle. One promising approach is to use statistical mining of existing SoC and PCB designs to identify which functions “like” to be physically close to each other. If these functions involve the same manufacturing technologies and follow similar upgrade cycles as well, then they should remain integrated on the same dielet.

That said, I feel that we already have a pretty good idea of what these stable, reusable blocks look like. They're things like DSP cores, video codecs, caches, memory controllers, and GPU processing arrays.

And whatever random functional blocks are too small to make as separate dielets would get pooled together and put in a "South Bridge" - or, that's what it's typically called on motherboards. I suppose there are also Super I/O chips that collect a bunch of the really slow stuff that even the South Bridge can't be bothered with.
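The "statistical mining" bit is easy enough to picture. Here's a toy sketch (my own invention, with made-up block names along the lines of the ones above): count how often pairs of blocks show up together across existing designs, and treat the most frequent pairs as candidates for sharing a dielet:

```
from collections import Counter
from itertools import combinations

# Invented example data: which functional blocks appear in which designs.
designs = [
    {"cpu_cores", "l3_cache", "memory_ctrl", "video_codec"},
    {"cpu_cores", "l3_cache", "memory_ctrl", "gpu_array"},
    {"dsp_core", "video_codec", "memory_ctrl"},
    {"cpu_cores", "l3_cache", "gpu_array", "video_codec"},
]

# Count pairwise co-occurrence; frequently paired blocks "like" to be close.
pair_counts = Counter()
for blocks in designs:
    for pair in combinations(sorted(blocks), 2):
        pair_counts[pair] += 1

for pair, count in pair_counts.most_common(3):
    print(f"{pair}: together in {count} of {len(designs)} designs")
```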
 

InvalidError

Titan
Moderator
Well, this certainly isn't my area, but I'd recommend you give the original article a read-through. Here's their general statement about cost:
Their cost projection seems excessively optimistic and ignores 2.5/3D-stacking. Their main claim to fame is reducing the amount of chip packaging, but 2.5/3D achieves that in a much smaller footprint than planar integration. Once you have the CPU, GPU, VRM and a decent amount of system RAM in a single package, you can easily make do with a four-layer PCB for everything else.
 

bit_user

Polypheme
Ambassador
Their cost projection seems excessively optimistic and ignores 2.5/3D-stacking.
Well, your argument is with them, but they at least make a passing mention of 3D:
The industry is already making steady progress in bare die testing as chipmakers begin to move toward chiplets in advanced packages and 3D integration.

BTW, anyone who's not read a lot of technology research papers might not know this, but it's not uncommon to see the author artificially restrict the scope of their problem to some subset where they can show a demonstrable gain or benefit.

Sometimes, it can feel like they're really over-constraining the problem to an unreasonable degree, but the flip side is that it's often too big an ask for a handful of grad students to consistently and repeatedly make meaningful progress on a problem in its full generality. Often, constraining the problem allows you to "chip away" at different aspects of it. Either way, papers must be published, and deadlines sometimes force uncomfortable compromises.
 

InvalidError

Titan
Moderator
Sometimes, it can feel like they're really over-constraining the problem to an unreasonable degree, but the flip side is that it's often too big an ask for a handful of grad students to consistently and repeatedly make meaningful progress on a problem in its full generality.
Still does not change the fact that pitching a 400 cm^2 silicon motherboard as a "solution" to 1,000 cm^2 fiberglass motherboards, when 3D-stacking could drop that to somewhere in the neighborhood of 10 cm^2, sounds highly contrived to me. The only place where I can imagine this making some sort of sense is for wafer-scale CPUs, should those ever become a practical thing.

Preventing a 200x200x0.5 mm brittle glass pane from spontaneously shattering is also going to require some considerable packaging engineering. The simplest solution would likely involve soldering the silicon motherboard to a PCB for additional stiffness and protection, then epoxying or soldering an IHS on top of the whole thing for additional support and top-side protection.
 

bit_user

Polypheme
Ambassador
Still does not change the fact that pitching a 400 cm^2 silicon motherboard as a "solution" to 1,000 cm^2 fiberglass motherboards, when 3D-stacking could drop that to somewhere in the neighborhood of 10 cm^2, sounds highly contrived to me.
Why do you assume the dielets on their wafer are not using the same 3D stacking? They don't seem to exclude that anywhere.

Also, your idea of using 3D stacking to shrink their example of a server board by 100x seems pretty absurd. How did you arrive at that?

The only place where I can imagine this making some sort of sense is for wafer-scale CPUs, should those ever become a practical thing.
They also gave an example of an IoT device they think could be reduced from 20 g to 8 g.

Preventing a 200x200x0.5 mm brittle glass pane from spontaneously shattering is also going to require some considerable packaging engineering. The simplest solution would likely involve soldering the silicon motherboard to a PCB for additional stiffness and protection, then epoxying or soldering an IHS on top of the whole thing for additional support and top-side protection.
First, they gave a range of 0.5 to 1.0 mm thickness. For a larger example, I think it'd be towards the upper end.

Second, if you read the article, they talk about sandwiching the wafer in a pair of heatsinks. There's even a pretty picture, and a catchy name: PowerTherm.

But I guess my main point is this: read the article before arguing with it. Most of us didn't go back to the source article at first. But once it's been pointed out that it has some details Mott omitted (surprising, I know), why would you keep commenting without going back and reading it? That's just bad. It makes you look bad, and it sets a bad example. I can't answer for their points, nor should I have to. You should just read the article - that would save your time by not raising points they've already addressed, and save mine, since I wouldn't have to point them out. Then we can all focus on the stuff they didn't cover.

Anyway, the authors acknowledge that not all problems have yet been solved, and they list some low-volume applications where it could have significant design-cost advantages vs. the traditional SoC + board approach. However, I don't think you need to worry about PCBs disappearing anytime soon. Certainly not PC motherboards - that was just the bait that got us here.
 

InvalidError

Titan
Moderator
They also gave an example of an IoT device they think could be reduced from 20 g to 8 g.
They are comparing bare silicon to lead-frame packages, which is not exactly a fair comparison when CSP options are also available.

As for how I arrived at my reduction, simple: once CPUs have enough on-package RAM and enough on-package GPU for baseline use, the minimum viable system size will be reduced to little more than the CPU package size. Same for servers.