PC Doldrums: Quarterly Shipments Hit Lowest Levels Since 2007


dstarr3

Distinguished
I'm going to blame the usual culprit: simply a lack of necessity. Computers have been marching along as usual, getting more and more powerful. But the software that most people run just doesn't demand all the power we've been adding. That 8-year-old eMachine isn't going to run Crysis so hot, but if you just need e-mail, YouTube, spreadsheets, and docs, it's not a problem. Software demands have not kept up with hardware potential for the average user. Which is a good problem to have, really. But it just means that it takes a lot longer for that old PC to reach obsolescence.
 

gangrel

Distinguished
Jun 4, 2012
And even the next, slight workload increase...videos and streaming. I'll start streaming the WSOP main event here shortly, on a 5th gen Intel NUC...but just an i3. Not 8 years old, but a very basic little box that's, what, 3 years old now.

And even if we don't go back 8 years...Ivy Bridge is now 5 years old.
 

10TACLE

Two things. First, as the guy said above, there's not a need to upgrade like in years past. I recently built a new i7 Kaby Lake for a friend who had been running a Sandy Bridge i7 Dell XPS for six years, having only upgraded his GPU once, from an Nvidia GTX 570 to a GTX 970. He needed nothing more for gaming at 1080p.

Second, I'd like to see a comparison of individual component sales, specifically CPU, motherboard, GPU, and memory. I know that's difficult to obtain because resellers generally don't like to give that out for competitive reasons. It's also difficult to define what a "PC shipment" actually is versus, say, an individual bad-component replacement or a complete CPU/motherboard/memory upgrade, which IMO would count as a "PC shipment." I'd leave GPUs out of the mix due to the extreme sales spikes driven by cryptocurrency miners.
 

InvalidError

Titan
Moderator
Entry-level PCs are a dying breed - the vast majority of non-gamers/non-enthusiasts/non-professionals I know don't need a PC; they can already do everything they need to do on their phone, tablet, Chromebook, or equivalent. If I didn't do PC gaming and engineering, I'd still be using my Core 2 E8400 and HD 3650 today instead of an i5 with a GTX 1050.
 

bit_user

Polypheme
Ambassador
Actually... somehow, 2007 levels doesn't sound so bad.

Apple also suffered a 9.6% decline in Mac sales
Ah, so there is a silver lining to this story!

while Asus suffered the biggest quarterly drop. Asus' shipments declined by 40.7% last quarter.
Ouch! So, is this only for complete PCs, and not including their components business? Maybe they saw the writing on the wall and are retrenching to focus on the more profitable market niches.
 

bit_user

Polypheme
Ambassador

I'm still rocking dual Sandybridge i7's. One with integrated graphics, the other is Extreme (E5 Xeon, actually) with a GTX 980 Ti. Both have SSDs - the second one is using an Intel 750 NVMe SSD (Data Center equivalent).

I plan to upgrade the second one to either a Kaby-X i9 (if they switch to solder) or a Ryzen+ and also when SSDs based on Intel/Micron's 3D XPoint are cheap & big enough to compete with the high end of NAND storage.

I have no plans to upgrade the first Sandybridge. It does everything I need, with no signs of lagging. The last upgrade it had was from 4 GB to 8 GB of RAM. It replaced a 3.2 GHz Pentium 4. So, I've been known to rock some old hardware.

I'm still trying to hold off on any monitor upgrades until OLED hits the mainstream. Then, probably a GTX 1180 Ti to make it sing @ 4k.
 
Regarding Asus, I have worked with about 5 Asus laptops over the past 6 years. Of those, exactly one is still working - and it's the oldest one. All of the newer ones failed within two years (the 3-4 year old units) or within one year (the 0-2 year old units).

It's Asus' own fault for having such depressing numbers.
 

Kennyy Evony

Reputable
Aug 12, 2014
I still have my dual-core Centrino laptop from HP. Its Wi-Fi card is outdated, it can't play any of the new games, and it can't handle a lot of the HD streams from the web, but it works well for everything else. I've never had any issues with it.
 

bit_user

Polypheme
Ambassador

My old HP zt3000 (based on the Compaq x1000) was the bomb, BITD. I tried upgrading the HDD to an SSD, upgrading the CPU, and maxing out the RAM. Sadly, it's hampered by poor video acceleration in Linux. Plus, the HDD is still PATA.

So, I finally upgraded and got a Skylake laptop with integrated graphics. I figure that should be supported much better, for longer. Other benefits: it cost half as much and weighs half as much as my old laptop. Screens on both are 1920 wide, but the old one was 15.4" and the new one is 13.3", in the interest of portability.

Also, my new Thinkpad's wireless works flawlessly on Linux. Never could get the old laptop's wireless to work in Linux, but its Ethernet actually worked much better in Linux.
 

bigdragon

Distinguished
Oct 19, 2011
The only companies producing worthwhile desktop upgrades are AMD and Nvidia. There's very little point in buying a whole desktop computer when all you need to buy is individual components from those two companies. No surprise that the desktop PC is declining. I think 10TACLE is right -- we need to see component sales to tell just how well or poorly the desktop is faring right now.

I'm not sure that the desktop PC is going to recover in its current form. I think we're looking at a future where dock-able laptops and tablets get paired up with eGPUs and continue to erode the desktop market.
 

InvalidError

Titan
Moderator

The conventional PC market is not going to recover, ever. PC gamers and enthusiasts account for only 10-15% of the whole x86 PC market as it used to be; the bulk of the remainder is office, mainstream, and other relatively light-duty PCs which aren't particularly geared for gaming or high-end productivity. A large chunk of that market is migrating to non-x86 platforms and won't be coming back regardless of what AMD/Intel/Nvidia do. At least not on a remotely regular basis.
 

bit_user

Polypheme
Ambassador

A few years ago, when rumors started swirling that Intel was going to stop making socketed processors, I thought the buildable/upgradable PC sector would be in far worse shape, by now. Maybe I was off by 5 years? 10?

I think there will be some market for workstations, on into the foreseeable future. But I do see laptops, chromebooks, NUCs, and consoles continuing to encroach on PC's traditional market. As the market shrinks, prices will rise, which will further accelerate the decline.

I do think PCs have one big finale, which is when CPUs get HBM2 and DIMMs switch to non-volatile technology, like 3D XPoint. Along with the transition to 7 nm, this should create a big enough step change that most of those who've been holding back will finally upgrade. It might even be enough to reverse the decline, for a year or so.
 

InvalidError

Titan
Moderator

That is pretty much what I meant by the top ~15% (gamers, enthusiasts, professionals, etc.) of the market - people who have serious reasons to require a fully-fledged computer. The rest of the market, which doesn't require anything of that sort, will fade away over time, and no amount of whiz-bang new tech is going to bring it back by a significant amount: the bulk of the consumer market simply doesn't care about the cool new tech of the day if it isn't in shiny phone or iPad format. Even phone/tablet sales, which had explosive growth for years, are running out of breath due to market saturation with "good enough" devices.

Ryzen's launch is one of the most exciting things that has happened in the PC space in the past 10 years, and even that appears to have failed to generate bumper sales in Q2. NV-DIMM on consumer platforms is pitched as a cheaper alternative to an equivalent amount of DRAM, which is a net downgrade if your existing system already has all the RAM you need. HBM2 on the CPU doesn't make much sense either, as modern CPUs aren't starved for data on DDR4-2666 or better enough to justify the added cost, unless it is to feed the IGP. I cannot imagine either of those features generating anywhere near enough sales to reverse the decade-old downward trend.

As for the loss of socketable CPUs, Intel never said it would no longer make socketed CPUs. Broadwell was aimed at mobile and embedded applications, which is why Intel intended to make it available only as BGA. When enthusiasts whined loudly about the lack of socketed options, Intel paper-launched the i7-5775C with its ludicrous pricing and nearly nonexistent availability. Even if Intel decided to go all-BGA, most consumers and companies buy pre-built systems they will likely throw away as-is a few years later and would never know about the change anyway.

I wouldn't put much stock in the transition to 7nm either: many people are currently running 5+ year old "still more than good enough" PCs and will continue skipping generations until they hit a brick wall of some sort - that's the logical way to decide when and what to upgrade. I'm still perfectly happy with my i5-3470 and currently cannot foresee the year when I may decide to upgrade - nothing currently on the market feels like it has sufficient cost-benefit to bother with. I'm sure there are plenty of Sandy owners harboring similar feelings. It will take more than a die shrink and unnecessary/unwanted features to change that.
 

bit_user

Polypheme
Ambassador

I disagree. Ryzen only broke new ground in highly multi-threaded performance, which affects only a small market segment. Its impact on the industry will be seen in multi-year trend lines, perhaps, but I think it was foolish to expect it to change much, given its inferior single-thread performance.


First, that won't happen, since the endurance of 3D XPoint is well short of what Intel initially quoted. Second, it represents a significant potential boost in storage performance per dollar of system cost. Third, there's the potential for it to impact applications and OSes in new ways. As an example, imagine game assets that never have to be loaded, because they can be accessed directly from persistent storage. I don't claim to be able to see all the impacts it could have, but I see it as another potential piece of the puzzle in improving the PC's value proposition.


The biggest impact is surely enabling much more powerful integrated graphics. But the impact of effectively increasing L3 cache by a few thousand times will definitely have a measurable effect on application performance, both by improving bandwidth and by significantly reducing latency. Furthermore, you get power savings and CPU die-size savings you can plow back into higher CPU performance. That's more than just a drop in the bucket. I think it could be a bigger generational improvement than we even saw with Sandybridge.


All I'm saying is maybe it'll be enough to stop the decline for a few quarters.


You're looking at this with all the clarity of hindsight. When the rumors started spreading about Broadwell being a BGA-only part, it set off a flurry of speculation that got me thinking. That's all. I never seriously thought Intel was going to stop selling socketed CPUs cold turkey. Not so soon.


You're assuming that the software and the things people do with their PCs will remain static. You might be right, but it's not always easy to predict the next big trend.

For one thing, I think PC-based VR is a long way from its peak, even though I'd agree that most people will experience VR on some kind of console or stand-alone platform. But even a surge of VR on non-PC platforms can drive a lot of adoption on PCs as both a premium client and the preferred development and authoring platform.
 

InvalidError

Titan
Moderator

All "major" new tech in the past 10 years only affected a small part of the market due to most of the rest of the market not caring enough to pay a premium for it.


A 500GB NV-DIMM isn't going to be cheap compared to a 500GB NVMe drive if you want to "install" your games to NV-DIMM.


That isn't what happened to Broadwell, which performed significantly worse clock-for-clock compared to Haswell in many workloads. While a bigger cache reduces the likelihood of a miss, the larger tag-RAM it requires to keep track of what is resident in cache gets slower with every CAM bit you add, increasing the performance penalty on misses by that much. Also, I doubt we're going to see APUs with 8GB of HBM any time soon, and even that would only be a 512X increase compared to Ryzen - less if you count L2 as well. Not thousands of times.


My prediction for NV-DIMM, X-Point, etc. is near-zero impact on PC sales. For the bulk of the market, there will be slow, gradual adoption as they become built in by default for no or negligible additional cost.


The vast majority of home PCs are used for trivial stuff like streaming media consumption for which even a Core2Duo is still more than enough today, which happens to be the point where the PC sales slump started. I believe this is far more than mere coincidence. The bulk of home users quit upgrading their PCs when even entry-level PCs became good enough for video streaming.

As for VR, I am highly skeptical about that, and if sales numbers are anything to go by, the VR headset frenzy died just about as quickly as early adopters got their units; now all the HMD manufacturers are struggling to stimulate sales. Most critics I have read or heard who kept doing VR gaming beyond the first-impressions hands-on review said that VR is worth trying as an experience but currently doesn't offer anything compelling enough to replace conventional gaming.

If VR manages to break into the mainstream, it'll be another of those long-winded adoption stories. I haven't tried VR yet but I foresee nausea in that future as I am already prone to nausea when playing FPS-style games.
 

bit_user

Polypheme
Ambassador

Well, that's not how they originally pitched it. But since pretty much everything else in their original pitch (except maybe access latency) has been revised, I guess we'll have to wait and see how cheap they ultimately get it.


Okay, when I said "effectively increasing L3 cache", I didn't mean "actually increasing L3 cache". What I meant was that all of RAM would have a latency not much greater than that of current CPUs' L3 cache. I'm imagining they'll just leave off L3, altogether. The HBM2 stacks will be your RAM. Cheaper CPUs might have only 4 or 8 GB of it, but swapping out to the NVDIMM will be so painless you won't even care.


For instance, who would've predicted crypto-currencies, 10 years ago? Almost no one. While they're not mainstream, they've surely gotten big enough to have an impact on PC component sales. And I mean beyond just the current Ethereum craze.
 

InvalidError

Titan
Moderator

NV-DIMM is currently ~50X slower than DRAM. You won't be gaming on NV-DIMM and 8GB of HBM2 as your only working RAM any time soon, now that games exceed 8GB as their recommended RAM. L3 is also ~10X faster than HBM, so I wouldn't expect L3 to go away either, but the balance between L2 and L3 could shift towards being more L2-heavy, like what Intel did with Skylake-X. No matter how much faster off-die memory gets, some amount of fast L3 will remain essential to pass data between cores.
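
Rough arithmetic with those ratios, using the usual average-access-time formula (the absolute latencies and hit rates below are assumed round numbers, not measurements):

```c
/* Average memory access time for a two-level hierarchy:
 * AMAT = fast_tier_latency + miss_rate * slow_tier_penalty.
 * The point: the slower the backing tier, the more every percent of
 * miss rate costs you. All figures are illustrative assumptions. */
#include <stdio.h>

static double amat_ns(double hit_rate, double fast_ns, double slow_ns)
{
    return fast_ns + (1.0 - hit_rate) * slow_ns;
}

int main(void)
{
    /* Today: ~10 ns L3 backed by ~70 ns DRAM. */
    printf("L3 + DRAM,    95%% hits: %5.1f ns\n", amat_ns(0.95, 10.0, 70.0));
    /* Hypothetical: ~100 ns HBM pool backed by NVDIMM at ~50X DRAM latency. */
    printf("HBM + NVDIMM, 95%% hits: %5.1f ns\n", amat_ns(0.95, 100.0, 3500.0));
    printf("HBM + NVDIMM, 99%% hits: %5.1f ns\n", amat_ns(0.99, 100.0, 3500.0));
    return 0;
}
```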


If you have seen the setups some serious miners are using, then you'd know that many of those responsible for buying GPUs by the hundreds aren't even running x86 rigs: it makes no sense to waste $200+ and 50W on a PC-based host when a $20 Raspberry Pi or similar platform with a custom board or USB-to-PCIe cables can do the same job.
 

bit_user

Polypheme
Ambassador

First, I'm not sure where you got the 50x figure, but I'm talking about Intel's 3D XPoint, which they've already demonstrated as a straight replacement for DRAM. Of course, I'm talking about HBM2 as a replacement for DRAM, and 3D XPoint DIMMs as a replacement for SSD.


No. Maybe if you hold the data bus width constant, but nobody uses it like that.


Perhaps. Maybe you don't need as much of it?


And L2 can't be used for this because... ?


Sure, people have gotten into ASICs as currencies matured, but cryptominers have bought a lot of PC hardware over the past 8 years or so. Anyway, it was just meant as an example of a trend few would've predicted.
 

InvalidError

Titan
Moderator

HBM is still DRAM, which means you have CAS, RAS, precharge and a handful of other actions that need to happen in sequence every time you close one memory row before opening another while SRAM has no such overhead, you just put your data and address on the SRAM's bus and you're done. The worst-case row-to-row latency on DRAM can exceed 60ns but with SRAM, the only latency you have is the trip through the pipeline. DRAM is never going to overcome that limitation and low latency is crucial to mitigating the impact of branch mispredicts.
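
Rough back-of-the-envelope numbers, assuming plain DDR4-2666 at 19-19-19 timings (illustrative values, not any specific kit):

```c
/* Worst-case DRAM row switch: precharge the open row (tRP), activate the
 * new row (tRCD), then issue the column read (CL). Timings are counted in
 * memory-clock cycles; DDR4-2666 runs its I/O clock at 1333 MHz. */
#include <stdio.h>

int main(void)
{
    const double clock_mhz = 1333.0;         /* DDR4-2666 I/O clock */
    const double ns_per_cycle = 1000.0 / clock_mhz;
    const int tRP = 19, tRCD = 19, CL = 19;  /* example 19-19-19 timings */

    double row_miss_ns = (tRP + tRCD + CL) * ns_per_cycle;
    printf("row-to-row penalty: ~%.1f ns (before controller/queueing overhead)\n",
           row_miss_ns);
    /* ~42.8 ns from the DRAM itself; add the memory controller and on-die
     * interconnect and the 60+ ns figure above is easy to reach. */
    return 0;
}
```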


The L2 is closely coupled to its core to provide the lowest latency possible and to minimize the cost of transfers between L1 and L2. Having to share the L2's read/write port with multiple other cores would severely degrade performance due to frequent port contention, and adding extra read/write ports would slow down the cache through some combination of increased pipelining depth and lower clocks from the extra control logic. You could hypothetically evict data from one core's cache straight to another core's, but you'd just end up trashing the other core's cache if you don't know beforehand that it is ready to process that data. And if you have nowhere to evict data from L2 to, your options are to suffer reduced performance because your L2 is being hogged by data held for another core to use later, or to dump it to RAM. Much simpler to evict it to L3, get on with your life, and let the other cores pick it up from there whenever they are ready, or have the L3 controller dump it to RAM if that takes too long.

It is for a similar reason that IPC is stagnant: extracting more IPC out of x86 code requires more complex instruction look-ahead, prediction, prefetch and reorder logic, but such logic would be slower and use more power, so Intel has to balance IPC gains against the impact the logic required to achieve them has on attainable clocks and power draw.

If getting rid of L3 (and L2 too while you're at it, just make L1 bigger, right?) was so easily done with no adverse effects, they (AMD, Intel, IBM, etc.) would have done it a long time ago or wouldn't have bothered introducing the concept of tiered cache hierarchies in the first place. Each cache tier has its own purposes that no other cache tier can cost-effectively cope with.
 

bit_user

Polypheme
Ambassador

...after you just got done lecturing about CAM overhead. Some of what you lose on DRAM overhead, you could regain by avoiding the cache lookup.


It seems like Intel did exactly this, by making the L3 in Skylake-X non-inclusive.


Now you're just taking my idea out of context.

Time will tell whether in-package, off-chip memory can enable CPUs to shrink or eliminate L3. I'm not saying it's a sure thing, but it's one way I think HBM2 or HMC might have an outsized impact, in the CPU market.

Whatever effect it has likely won't be limited to socketed CPUs, however. The biggest beneficiaries will be consoles, laptops, mobile phones, and maybe even gaming or VR on NUCs. All I'm saying is that it might be one factor contributing to enough of a jump that a lot of complacent PC owners finally upgrade.

Sometimes, I get the sense that you like to be pessimistic and contrary, then search for technical grounds to justify your position. None of us knows the future, but I'm trying to paint a "best case scenario". It's far from a certain thing. None of us disagrees that the overall trend will continue throughout the foreseeable future.
 

bit_user

Polypheme
Ambassador
BTW, as I mentioned, 3D XPoint DIMMs can enable software like game engines to read content directly from storage. Operating systems can do away with disk caching and read/write buffering. And web browsers can eliminate separate in-memory caches. These changes can go a long way towards reducing memory requirements and cutting down on the amount of paging you'd have to do with 4 or 8 GB of HBM2 as your only RAM.
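
A minimal sketch of what "never loading" might look like - assuming an asset file sitting on a DAX-mounted persistent-memory filesystem; the path and header layout here are made up for illustration:

```c
/* Map an asset file that lives on persistent memory and use it in place.
 * With a DAX mount, the mapping goes straight to the NVDIMM - no page
 * cache copy, no explicit "load" step. Path and struct are hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

struct asset_header {          /* made-up on-media layout */
    unsigned int magic;
    unsigned int mesh_count;
};

int main(void)
{
    int fd = open("/mnt/pmem/game_assets.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Read-only, shared mapping: the pointer references the media directly. */
    const struct asset_header *hdr =
        mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (hdr == MAP_FAILED) { perror("mmap"); return 1; }

    printf("assets mapped: %u meshes, %lld bytes, zero bytes copied up front\n",
           hdr->mesh_count, (long long)st.st_size);

    munmap((void *)hdr, st.st_size);
    close(fd);
    return 0;
}
```

Whether real engines would restructure their asset pipelines around that is another question, but mechanically it's all existing mmap plumbing.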

Again, I'm just trying to see how much mileage we could possibly get from these developments.
 

InvalidError

Titan
Moderator

SRAM is pipelined: there is no additional performance penalty for accessing any random address within it beyond the constant pipeline depth, even if that depth needs to increase by a cycle or two when you double the cache size. DRAM, on the other hand, incurs a 50+ns latency penalty whenever an active row switch is necessary.


No need to know the future: recent history has repeatedly shown that seemingly major innovations and new OS releases have failed to have any positive impact on PC sales or to produce significant performance improvements for most day-to-day computing.

I'd say you are being excessively optimistic and setting yourself up for massive disappointment by attributing more benefits to these technologies than they can realistically deliver over existing ones, and by underestimating the performance impact of relying more heavily on slower memory tiers. NVDIMMs currently have two orders of magnitude worse latency than DDR4, which is going to hurt performance badly the moment you need more memory than your CPU's built-in HBM2, so HBM2 and X-Point aren't going to replace DDR3/4/5 DRAM any time soon.

Another problem with the idea of installing software to NV-DIMM/X-point is that most software's load time is dictated by processing done on data while loading it - that's why most software shows little to no benefit from NVMe over the fastest SATA3 SSDs. If you installed software to NV-DIMM, you would need to store it in unpacked/uncompressed directly usable format to avoid incurring the load-time processing, which could consume many times more space. If you don't, you end up saddled with load-time unpacking and the need to store unpacked copies as well.
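
To put rough numbers on that (the drive and decode throughputs below are illustrative assumptions, and the model naively serializes reading and unpacking):

```c
/* Load time = time to read the packed data + time to unpack/parse it.
 * When the CPU-bound decode step is the bottleneck, a much faster drive
 * barely moves the total. All figures are illustrative round numbers. */
#include <stdio.h>

static double load_seconds(double packed_mb, double drive_mbps, double decode_mbps)
{
    return packed_mb / drive_mbps + packed_mb / decode_mbps;
}

int main(void)
{
    const double packed_mb = 500.0;   /* packed asset size */
    const double decode_mbps = 300.0; /* assumed unpack/parse rate */

    printf("SATA SSD  (~550 MB/s): %.1f s\n", load_seconds(packed_mb, 550.0, decode_mbps));
    printf("NVMe SSD (~3000 MB/s): %.1f s\n", load_seconds(packed_mb, 3000.0, decode_mbps));
    /* ~2.6 s vs ~1.8 s: the drive got over 5X faster, the load time didn't. */
    return 0;
}
```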

Everything comes with trade-offs and pitfalls. The biggest bottleneck right now is software. Making everything else 100X faster and adding new tiers of memory or cache won't help much until more software gets written to make meaningful and efficient use of modern multi-core, multi-threaded CPUs.
 

bit_user

Polypheme
Ambassador

You missed my point. I'm not talking about a DRAM-based cache. If someone substituted HBM2 for L3, there'd be no cache lookup stage.


Depends on what it is, but if there were such a big space savings by decompressing it, then you could just have a persistent cache of some decompressed subset of it.


The silver lining is that when 3D XPoint hits mainstream DIMM slots, a few optimizations in the OS, a couple of web browsers, and a couple of game engines can deliver substantial returns for users.


Threads are often used to hide latencies. So, reducing I/O latencies actually reduces the need for concurrency in the software.
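
Little's law makes that concrete: the concurrency needed to keep a device busy is roughly bandwidth × latency. A quick sketch, with assumed (not measured) device figures:

```c
/* Requests that must be kept in flight to sustain a target throughput:
 * in_flight ~= (bandwidth * latency) / request_size  (Little's law).
 * Lower latency means fewer outstanding requests - i.e. fewer threads or
 * less async machinery - for the same throughput. Figures are assumed. */
#include <stdio.h>

static double requests_in_flight(double bw_mbps, double latency_us, double req_kb)
{
    double bytes_per_sec = bw_mbps * 1e6;
    double latency_s = latency_us * 1e-6;
    return bytes_per_sec * latency_s / (req_kb * 1e3);
}

int main(void)
{
    /* 4 KB random reads at a 2000 MB/s target. */
    printf("NAND NVMe,    ~80 us: ~%.1f requests in flight\n",
           requests_in_flight(2000.0, 80.0, 4.0));
    printf("XPoint-class, ~10 us: ~%.1f requests in flight\n",
           requests_in_flight(2000.0, 10.0, 4.0));
    printf("NVDIMM-ish,  ~0.3 us: ~%.1f requests in flight\n",
           requests_in_flight(2000.0, 0.3, 4.0));
    return 0;
}
```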
 