News Raspberry Pi 5 squares off against a scrawny Intel CPU — Intel N100 quad-core Alder Lake-N chip proves to be a strong competitor


bit_user

I got it for a $100 price: case, power supply, the whole PC. And it is a PC, nice design, sturdy, silent. For $107 they were selling me an option with a 256GB SSD and Windows 11. So, $107 (dollars!!)
$7 for an SSD and a Windows 11 license? Can't be. They must mean you get a pre-installed Win 11 image, but you need to supply the license key.

Pi 5 for ~140€ (euro!) when completed with case, power and SD card.
As I already said, the equivalent Pi cost is $110, without storage.

Yes, the Pi 5 currently has availability problems, but let's see if they can get on top of that problem, as they claimed they're on track to do (see link in my above post).

And sure you can argue this is for 100 pieces. ...
I'm not here to argue about what you should do. I just want to get the facts straight. I honestly don't care what you do.

PCI Express is another thing, storage being limited to an SD card, etc. An N100 PC will usually come with an M.2 slot. Even if it's SATA M.2, it still beats an SD card in my book.
The Pi 5 has a PCIe 3.0-capable x1 lane. Yes, it's way inferior to the N100's 9 lanes, and the Pi needs a HAT to actually use it for an M.2 drive, but it's there.

So while in the early days the Pi offered little performance for less money ($25), today it's a $140 computer that has nothing to offer vs. the other $140 computers.
That's an incorrect statement, to the point of being disingenuous.

Your quoted price for the original Pi is the MSRP of a bare Model A, which shipped with reduced specs, while you're quoting a current market price for a top-spec, fully-outfitted Pi 5. The equivalent Pi 5 price is $60 for the 4 GB version. They've suggested they plan to build a 2 GB version, which we could infer might have a price of just $50.

Even then, it still has features, like Ethernet and Wi-Fi, which the Pi Model A did not! For newer Pis, the equivalent of the Model A is actually their Compute Modules. AFAIK, they have yet to announce a Compute Module for the Pi 5, so we can't yet see how the prices compare. However, the minimum-spec Pi 4 Compute Module is available for $30.

If you have to exaggerate the facts to support your argument, maybe your argument just isn't that strong! Now, I don't think that's the case, here. In your shoes, I'd probably do the exact same thing. So, let's try not to get defensive and just stick to the facts.

Pi still has its use, and as I said I love the concept for those uses. But it's getting more expensive and less available, while Intel opened up the market again with N100.
Availability is a solvable problem, and one where even Intel has a less than stellar track record. Price is partially due to more features, where the Compute Module provides a viable alternative for those not needing the extra functionality.

There's another factor you might not be considering, and that's the China factor. China's economy is currently experiencing deflation. The prices you got on N100 systems could be assuming a China market, where OEMs were able to extract very favorable prices, due to weak demand. Perhaps Intel even overproduced N-series chips, overestimating China's demand, and is now selling them in China at discounted prices. Anyway, my point is that I think that pricing is artificially depressed and you could be in for a surprise, down the road.

If I look at US retail pricing of an N100 board, it's more like $130 for the bare board (no RAM).
 

bit_user

Gemini Lake and Jasper Lake support up to 32GB and 64GB of memory.
Be careful about this. Apollo Lake (and I think also Gemini Lake) is super picky about memory. I had an Apollo Lake board that I thought was just broken, since I bought RAM for it that even the memory vendor claimed would work. Turns out, the system would boot, but only if I installed memory just in DIMM slot B. Using just slot A or A+B didn't work.

A Raspberry PI 5 trails behind an RK3588-based SBC (e.g. Radxa ROCK 5B).
True. The Orange Pi 5 was almost a year earlier to market, faster, and more power-efficient. Both use the same P-core, though (A76). That's because the RK3588 is made on Samsung 8nm (I think), while the Pi 5 uses either a 20 nm or 14 nm node, I believe.

Pi enthusiasts will tell you the Pi has better support (i.e. both better software support, and a much larger community). They're not wrong. Linux on the RK3588 has been a somewhat rough experience, for the early adopters.

The x86 and ARM software libraries are not comparable, the x86 one is much larger in terms of ready to use solutions for a "normal" user.
This is not really true, for anyone who would consider running a Pi, in the first place. The cross-platform distros available for it are exactly the same, between ARM and x86. Even the packages with hand-tuned x86 SIMD optimizations mostly include similar optimizations for ARM.

Finally, with x86 future Windows or Linux OS versions are almost guaranteed to work. With ARM, the OEM must for instance adapt the upstream Linux kernel to each SBC model.
I'm foggy on the details, but I think ARM has addressed this with a solution similar to x86's. I don't know (wouldn't expect) that the Pi (yet) conforms to it.

Anyway, it's not relevant to the current discussion, since Raspberry Pi's are the best-supported among ARM SBCs. If anything supports ARM, it's almost guaranteed to support R.Pis.
 

DavidC1

The single channel memory in Alderlake-N doesn't really matter. It's a 32EU GPU with 6-15W TDP depending on the config.

Game tests on Youtube show that Alderlake-N with single channel DDR4 beats Jasperlake with dual channel DDR4 most of the time, and significantly in some cases, because the CPU is vastly better on ADL-N. Also, you can get DDR5 ADL-N, which nullifies the single-channel deficit.

@bit_user The guys already responded with their use case and why ADL-N is better for it. It is true, the software support is way better on x86. That alone is enough to choose it. But you said in your first post that ADL-N also packs a much faster CPU and GPU.

Not to mention that you can often get better availability and cheaper pricing. Seems like a no brainer to me. Price = Equal or better. Availability = Equal or better. Performance = much better. Compatibility = much better.
 

neojack

I am in the market for a small PC for the living room. I previously used old second-hand laptops around $100-200, but the performance, noise and reliability were really bad.

Looks like this N100 is a perfect match! I could also reuse my RX 480 GPU that's sitting in a closet, for basic gaming too!

I checked some benchmarks; it compares to a 4-core Skylake i5.
 

DavidC1

I wish people wouldn't keep repeating this myth. The N100 has a PL2 of 25 W. That's when you hit 3.4 GHz. At 6 W (PL1), they only guarantee base clocks of 800 MHz.
Nonsense.

Unlike their main Core lineup, their Atom and N series chips have a very stable performance profile: https://www.notebookcheck.net/Geeko...n-a-well-known-Intel-NUC-design.771494.0.html

[Chart from the linked review: Cinebench R15 loop, CPU clock and package power over time]

PL1 = 15W, PL2 = 25W, yet it doesn't exceed 12W in Cinebench R15, with a score of 472, comparable to a 20W Tigerlake-U i3 and the original Zen-based Ryzen 3 1200.

Compare that to Raptorlake:
[Chart from the linked review: Cinebench R15 loop on a Raptorlake system]

Look how the clocks and power stay at a consistent level, while Raptorlake reaches peak clocks for about 5 seconds and then plummets.

If that doesn't convince you, the 30W peak system power consumption, which quickly drops under 20W, should.

2900MHz @ 11.7W running Cinebench R15, 2800MHz @ 14W running Prime95. The CPU is indeed very efficient. You have to run unrealistic tests like Prime95 and FurMark together to have any chance of reaching "800MHz".

The vast majority of applications can run at near full speed without hitting TDP limits, because power demands vary widely from application to application.
 

DavidC1

Your example is being limited by either design or by settings. You should have noticed it's not boosting to 3.4ghz which is the limit on the N100.
You might want to look at other Atom-based examples. They all behave this way.

Also, yes, it does boost to 3.4GHz. Scroll down below to see the stress-test screenshots. The Prime95 tests are multi-threaded, yet with all 4 cores active it's at 2.8GHz. Reaching 3.4GHz on a mere single core is of no real consequence.

I spent more than 30 minutes comparing different SKUs and reviews to come to an accurate conclusion; I'm pretty sure you can at least skim through them.
 
You might want to look at other Atom-based examples. They all behave this way.

Also, yes, it does boost to 3.4GHz. Scroll down below to see the stress-test screenshots. The Prime95 tests are multi-threaded, yet with all 4 cores active it's at 2.8GHz. Reaching 3.4GHz on a mere single core is of no real consequence.

I spent more than 30 minutes comparing different SKUs and reviews to come to an accurate conclusion; I'm pretty sure you can at least skim through them.
I assumed the ADL-N CPUs behaved like Tremont or the E-cores, since there's no Intel documentation stating otherwise (they only have one boost setting), but after checking different models it seems anything but single-core runs 500MHz below max boost (I couldn't find any N300/305 results that weren't thermally limited, but the N100/N200 behave this way).

The system you linked uses a PL1 of 15W (N100 stock is 6W), which is why it has mostly stable clocks, and it does not use PL2 when throttled, which I assume is a BIOS bug. If the N200 they tested with a 6W PL1 is an accurate indicator, it seems 1.5GHz is the best 4-core loads can do at stock settings.
 

bit_user

The single channel memory in Alderlake-N doesn't really matter. It's a 32EU GPU with 6-15W TDP depending on the config.

Game tests on Youtube show that Alderlake-N with single channel DDR4 beats Jasperlake with dual channel DDR4 most of the time, and significantly in some cases, because the CPU is vastly better on ADL-N.
None of that proves it's not suffering from being single-channel. I'll grant that with a 6 W TDP, it's going to matter less than the higher-TDP and 8-core models.

The way to know for sure would be to take an Alder Lake-S and set a custom frequency limit on the E-cores which matches what Alder Lake-N could achieve. Then, use processor affinity to run a benchmark on either 4 or 8 of the E-cores and repeat with one and two memory channels populated.
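For anyone who wants to try that at home, here's a rough Python sketch of the pinned-run part. It assumes the E-cores are logical CPUs 16-23 (purely hypothetical; check `lscpu --extended` on your own chip) and uses stress-ng only as a stand-in workload:

```python
#!/usr/bin/env python3
# Rough sketch: time a fixed amount of work while pinned to a chosen set of
# E-cores. The CPU IDs below are hypothetical; check `lscpu --extended` to see
# which logical CPUs are E-cores on your particular chip.
import os
import subprocess
import time

E_CORES = list(range(16, 24))   # assumed E-core IDs on a hypothetical Alder Lake-S

def run_pinned(cmd, cpus):
    """Run cmd with its CPU affinity restricted to cpus; return wall-clock seconds."""
    start = time.monotonic()
    subprocess.run(cmd, check=True,
                   preexec_fn=lambda: os.sched_setaffinity(0, set(cpus)))
    return time.monotonic() - start

if __name__ == "__main__":
    for n in (4, 8):
        # Fixed work: n matrix workers, 200k bogo-ops total (stress-ng as a stand-in benchmark).
        cmd = ["stress-ng", "--matrix", str(n), "--matrix-ops", "200000"]
        print(f"{n} E-cores: {run_pinned(cmd, E_CORES[:n]):.1f} s")
    # Re-run the whole script with one DIMM installed, then with two, to isolate
    # the memory-channel effect described above.
```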

I think the main reason Intel ditched the second channel was that they saw lots of customers implementing single-channel designs anyhow, and decided to cut the costs of having a second channel that mostly sits unused. It's not that the second channel wasn't beneficial, just that most customers use these SoCs for low-performance devices and care more about cost savings than the extra performance.

Also, you can get DDR5 ADL-N, which nullifies the single-channel deficit.
This is not accurate. The N6005 (Jasper Lake) supports dual-channel (128-bit) DDR4-2933. That has a nominal bandwidth of 46.9 GB/s. Compare that to Alder Lake-N's max (64-bit) DDR5-4800's 38.4 GB/s and tell me which number is bigger. Plus, DDR5 has higher latency, which is a slight downside of going that route.
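Just to spell out the arithmetic behind those two numbers (nominal peak is simply transfer rate times bus width in bytes):

```python
def peak_gb_s(mt_per_s: int, bus_bits: int) -> float:
    """Nominal DRAM bandwidth: transfers per second times bytes per transfer."""
    return mt_per_s * (bus_bits // 8) / 1000

print(peak_gb_s(2933, 128))  # Jasper Lake, dual-channel DDR4-2933 -> 46.9 GB/s
print(peak_gb_s(4800, 64))   # Alder Lake-N, single-channel DDR5-4800 -> 38.4 GB/s
```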
 

bit_user

Nonsense.
It's not nonsense. I cited the Intel specs. The base frequency is the minimum guaranteed frequency when running at the base power (PL1). If you actually run a heavy all-core workload, like Blender, and actually implement a 6 W PL1, then it really could drop that low!

Unlike their main Core lineup, their Atom and N series chips have a very stable performance profile:
You just showed a N100 running at 11.7 W, which is out-of-spec. The base frequency doesn't apply because the system didn't implement the SoC at the specified PL1 of 6 W.

I have no problem with systems not adhering to the 6 W PL1, but then you don't get to pretend that the resulting performance reflects a mere 6 W of dissipation, like the poster I replied to.

Cinebench R15 with a score of 472,
Which isn't terribly impressive, BTW. They measured a 35W i5-6500T at 466 points. That it has to run at nearly twice its specified TDP to equal that 35 W Skylake is actually worse than I thought!

You might want to look at other Atom-based examples. They all behave this way.
Yes, but that's a little off-topic, because the original poster was presuming the advertised 6 W PL1.

As I think I mentioned, I have an Apollo Lake board which indeed boosts beyond its TDP for indefinite amounts of time.
 

abufrejoval

None of that proves it's not suffering from being single-channel. I'll grant that with a 6 W TDP, it's going to matter less than the higher-TDP and 8-core models.
I've been operating N5005 with 32GB in dual channel (Intel "permitted" 8GB) and N6005 with 64GB (Intel "permitted" 16GB) in dual-channel without issue using Kingston Value DDR4 SO-DIMMs on ASrock Mini-ITX (N5005) and Intel NUC11 (N6005).

I've also had various previous Atoms down to a J1900 on DDR3 running with 16GB, when Intel "permitted" max 4GB.

While Braswell to Jasper Lake supported dual channel, that made next to no difference on all except Jasper Lake: JL was the only SoC to have measurably almost twice the RAM bandwidth with both channels, but it was still half what the same RAM sticks would deliver on a Tiger-Lake NUC.

Very roughly, using Kingston Value DDR4-3200 SO-DIMMs, I got 10GB/s on JL single-channel, 20GB/s on JL dual-channel and Tiger-Lake single-channel, and 40GB/s on Tiger-Lake (i7-1167G5) dual-channel.

On the N5005 GoldMont+ the difference between single and dual channel DDR4 was more like 10 vs 12GB/s.

I didn't do extensive iGPU tests on the JL, but I do remember that it did quite ok on 4k. For me the main benchmark to qualify a "desktop" is to have Google Maps render in 3D Globe view on a Chrome based browser: if that allows smooth turning with the Ctrl-key held and the mouse to move, it's ok in my book. There the N6005 felt like twice the speed of the N5005 and quite snappy.

It regularly puts Microsoft's flight simulator to shame and the results on these recent Atoms are quite awe-inspiring.

And it's a test that just killed my RP4 8GB, even at THD etc.

It's way better on my Jetson Nano with only 4GB, but that has an iGPU to match, while the CPU cores are worse.

I was tempted to buy an RP5 just for 'completeness' but that iGPU can't be anywhere near acceptable with only a 300MHz speed bump.

Intel's Bay Trail J1900 was probably as bad as the RP4 in terms of iGPU, perhaps even worse. I remember it being painful even at 2D in THD. But they've pushed the GPUs much harder than the CPUs on subsequent generations, at least until JL.

I have Gracemonts as E-cores in my Alder-Lake i7 systems and I've always wanted to bench them there using numactl on Linux at various PL1/PL2 settings which fortunately can be set via the command line on Linux.

It won't exactly replicate the N305 because the memory interface would still be dual-channel (ok, that could be 'fixed') nor can I infer N305 iGPU performance from a 96EU Xe on the i7.
The way to know for sure would be to take an Alder Lake-S and set a custom frequency limit on the E-cores which matches what Alder Lake-N could achieve. Then, use processor affinity to run a benchmark on either 4 or 8 of the E-cores and repeat with one and two memory channels populated.
Yes, I've been planning a detailed benchmark list like that, too, with lots of permutations on core counts and PLx settings, also to validate Intel's claim that at certain Wattage points the E-cores deliver more performance than P-cores.

Basically I'm trying to test optimizing the infra to a given transactional load.

Now I just need to find the time.

Although I don't think I'd need to set frequency limits, as I believe Gracemont cores max frequencies would be the same or very close between the N305 and the i7. Just making sure via numactl that your benchmark is only using the E-cores should be enough.

You can't eliminate the kernel using the P-cores, nor can the P-cores be deactivated completely (1 must be active to boot, only E-cores can be turned off entirely), but a userland benchmark shouldn't be exercising the kernel much.

Of course, my AD i7 have the 96EU Xe iGPU and that is likely to suffer badly from a single channel. But as I've mentioned above, Atoms so far haven't gotten anywhere near the performance from the very same RAM sticks that Gen8-11 (P-) cores were able to get from them. And actually I wonder if and how much memory bandwidth on E vs P cores actually differs in hybrid CPUs.

Generally I'd say that for anything but very synthetic workloads (or gene sequencing) the DRAM bandwidth won't be much of a problem on CPU workloads, since even Atoms have loads of cache these days.

For the GPU I am a lot less optimistic, because at least on Jasper Lake GPU performance took a very noticeable beating with only a single stick.
I think the main reason Intel ditched the second channel was that they saw lots of customers implementing single-channel designs anyhow, and decided to cut the costs of having a second channel that mostly sits unused. It's not that the second channel wasn't beneficial, just that most customers use these SoCs for low-performance devices and care more about cost savings than the extra performance.
I'm much more inclined to think that Intel wanted to protect its market segmentation.

First of all: I cannot believe that Intel produces different dies for the N50-N305 range. The core section in these chips is tiny and if it failed at any significant rate, Intel could not afford to throw these cores as E-cores into their big i5-i9 SoCs, as a single broken Atom E-core could reduce an i9 to an i5 on perhaps perfectly good P-cores.

There are roughly 4 E-cores to the area of a single P-core on the bigger chips, which means an individual E-core has roughly 1/4 the probability of catching a defect.

Actually, with the much smaller Atom-only dies vs. the big hybrid dies, the chances of a defective die are much lower.

Intel isn't talking, but I actually believe that they are violating a court ruling from California, which prohibits them from selling intentionally castrated hardware.

Then single channel doesn't mean single socket as any desktop board will readily attest. Even if Intel only implemented single channel memory on the Alder Lake Atoms, there should be no reason why vendors couldn't add a second socket to easily enable 64GB of RAM even with DDR4.

But evidently Intel told them in no uncertain terms, not to do that. Because it would eat into low-end server territory, for which Intel wants to protect high margins. It's not an accident that Intel has quoted far less RAM capacity for Atoms than those could support for generations. As far as Intel is concerned, nobody should be permitted to have 64GB of RAM and perhaps 10 VMs running on a €100 mainboard!

Which is exactly what I've been doing for years now.
This is not accurate. The N6005 (Jasper Lake) supports dual-channel (128-bit) DDR4-2933. That has a nominal bandwidth of 46.9 GB/s. Compare that to Alder Lake-N's max (64-bit) DDR5-4800's 38.4 GB/s and tell me which number is bigger. Plus, DDR5 has higher latency, which is a slight downside of going that route.
Nominal isn't what you get. 22GB/s is what I measured on my N6005 with dual channel DDR4-3200, running at 2933 (Kingston Value SO-DIMM RAMs tend to have matching SPD settings for all speeds down to 2400). The Tiger Lakes do match that bandwidth at 3200.

I'm perhaps going out on a limb here, but:

Anything x86 sold at that price point near €100 doesn't make money for anyone. Not Intel, which is using these chips mostly to keep ARM out of that market, and not the OEMs who are forced by Intel to create these boards and systems if they also want the big chips where at least some money is made.

Intel produces a single die at relatively low cost and near perfect yields but still at a loss (or far from their usual margins) except perhaps for those they sell as eight-cores, only to splatter the 'competitive landscape' with products, so no ARM small home server eco-system can grow.

Firefly selling a Rockchip ARM ITX-3588J 8K AI Mini-ITX with 32GB of RAM at $800 is obviously a prohibitive price owed to the need to make a profit at tiny scales. It shows that Intel's low-end sea wall is holding, even without AMD adding sandbags.

To consumers, Atoms obviously represent a much better value than what ARM vendors can deliver at this price point of just above $100, because those vendors need to be profitable with what they sell.

Qualcomm, MediaTek and Samsung could change that, offering older or lesser mobile phone SoCs as a µ-server base, but for them there is also no money and lots of effort in that market. They'd much rather clip your money bag selling those as 'robotics' chips, or put extra chips into a landfill.

I'd just love to see die shots from N50 or N100 chips compared to N305, because I am so sure they are in fact eight core chips that have been fused into cripples, that I'd bet a case of beer on it (but not the international delivery).

I guess even more I'd love someone to hack their firmware and turn N50 chips into fully fledged N305, just in case they chose soft fusing to save money.

Even when it comes to dual-channel and [inline] ECC support, I'm not sure it's left off the die; there is a much better chance it's simply not wired to the BGA pins. Because the extra logic required may be so small, that just doing a cut & paste from the dual channel IP block that they use on hybrid SoCs or Grand Ridge may be cheaper than doing a separate single channel design.
 
Nominal isn't what you get. 22GB/s is what I measured on my N6005 with dual channel DDR4-3200, running at 2933 (Kingston Value SO-DIMM RAMs tend to have matching SPD settings for all speeds down to 2400). The Tiger Lakes do match that bandwidth at 3200.
Just to let you know, the memory controller in ADL-N doesn't seem to be any more effective than JL's, at least with SO-DIMMs. It also has different memory support than the mobile, embedded or desktop chips, and the communications version only supports LPDDR, but at a higher clock, so I don't think they're reusing a controller design.
 

bit_user

Wow, that's a long post! Fortunately, I know your posts are always worth reading...

I've been operating N5005 with 32GB in dual channel (Intel "permitted" 8GB) and N6005 with 64GB (Intel "permitted" 16GB) in dual-channel without issue using Kingston Value DDR4 SO-DIMMs on ASrock Mini-ITX (N5005) and Intel NUC11 (N6005).
Thanks for sharing your experiences with memory configurations on these SoCs.

And it's a test that just killed my RP4 8GB, even at THD etc.
Did you check if it was using swrast? I'll bet it was... not that native iGPU acceleration would be up to your standards, anyhow.

Look for llvmpipe, in top, while trying your experiment.
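If you'd rather script the check, something like this works on most distros, assuming glxinfo (from mesa-utils) is installed:

```python
import subprocess

# Print the active OpenGL renderer; "llvmpipe" means Mesa's software rasterizer (swrast) is in use.
out = subprocess.run(["glxinfo", "-B"], capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    if "renderer string" in line:
        print(line.strip())  # e.g. "OpenGL renderer string: llvmpipe (LLVM 15.0.6, 128 bits)"
```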

BTW, what's THD?

I was tempted to buy an RP5 just for 'completeness' but that iGPU can't be anywhere near acceptable with only a 300MHz speed bump.
Do you know anything else about it? At launch, they disclosed virtually nothing about it, but I believe it was supposed to be larger and have enhancements to its microarchitecture. I think it's unlikely to come anywhere close to the N100's iGPU, no matter what they did to it.

Yes, I've been planning a detailed benchmark list like that, too, with lots of permutations on core counts and PLx settings, also to validate Intel's claim that at certain Wattage points the E-cores deliver more performance than P-cores.
Well, there's this:


However, I differ slightly on their interpretation of the data. I think the crossover points are a distraction. All you need to see is the continual negative curvature of the perf/W plot. That tells you more performance can be had by increasing the core count and running them at lower frequencies. Because E-cores are the most area-efficient way to add performance, they provide the best scaling option.

a userland benchmark shouldn't be exercising the kernel much.
Depends on what it is. If it's a benchmark of something like sqlite or a webserver, then it sure will! Even GPU-intensive benchmarks should probably spend a nontrivial amount of time in the kernel.

as I've mentioned above, Atoms so far haven't gotten anywhere near the performance from the very same RAM sticks that Gen8-11 (P-) cores were able to get from them. And actually I wonder if and how much memory bandwidth on E vs P cores actually differs in hybrid CPUs.
I wonder if the reason you didn't see CPU cores getting more benefit from DRAM bandwidth in those older SoCs was possibly because they intended the iGPU to use quite a large share of the bandwidth, and therefore didn't bother connecting the CPU cores to the memory controller with a particularly fast connection.

As for Alder Lake, each quad E-core cluster has a link to the ring bus, but it runs at a lower frequency than the P-Cores' (reportedly fixed, in Raptor Lake). Chips & Cheese measured it.

There are roughly 4 E-cores to the area of a single P-core on the bigger chips, which means an individual E-core has roughly 1/4 the probability of catching a defect.
IIRC, the Alder Lake E-cores are each about 30% the size of an Alder Lake P-core. I know both got an L2 cache boost in Raptor Lake, but I'm not sure if that ratio held.

Nominal isn't what you get. 22GB/s is what I measured on my N6005 with dual channel DDR4-3200, running at 2933 (Kingston Value SO-DIMM RAMs tend to have matching SPD settings for all speeds down to 2400). The Tiger Lakes do match that bandwidth at 3200.
There's an OpenCL benchmark which can measure memory bandwidth (CLPeak?). You'd want to simultaneously run that + a CPU memory bandwidth test, in order to assess the impact on the entire SoC - not just the CPU cores.
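A crude way to do both at once from a single script might look like this; clpeak is assumed to be installed, and the numpy copy loop is only a rough stand-in for a proper CPU-side bandwidth benchmark:

```python
import subprocess
import time
import numpy as np

# Start clpeak (its global memory bandwidth test is part of its default run)...
gpu = subprocess.Popen(["clpeak"])

# ...and hammer DRAM from the CPU side at the same time with a big copy loop.
src = np.ones(32 * 1024 * 1024)        # ~256 MB of float64
dst = np.empty_like(src)
moved, t0 = 0, time.monotonic()
while gpu.poll() is None:              # keep copying while clpeak is running
    np.copyto(dst, src)
    moved += src.nbytes * 2            # count the read and the write
elapsed = time.monotonic() - t0
print(f"CPU-side streaming rate under contention: {moved / elapsed / 1e9:.1f} GB/s")
```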

BTW, I found that the utility intel_gpu_top can provide some interesting statistics on iGPU power consumption and memory controller bandwidth figures. I haven't tried it on these low-end SoCs, but it's interesting to run on other Intel CPUs.

the OEMs who are forced by Intel to create these boards and systems if they also want the big chips where at least some money is made.
No... I seriously doubt Intel is forcing anyone to use Alder Lake-N in anything other than Chromebooks. The mini-ITX market hasn't gotten much love for these SoCs since Gemini Lake. I had wanted a Jasper/Elkhart Lake board, but they were virtually nonexistent, outside of industrial models. And even for Alder Lake-N, only two mainstream manufacturers have non-industrial models on the market.

Intel produces a single die at relatively low cost and near perfect yields but still at a loss (or far from their usual margins) except perhaps for those they sell as eight-cores, only to splatter the 'competitive landscape' with products, so no ARM small home server eco-system can grow.
You're too paranoid. That's a tiny market. I can't imagine Intel cares about it, at all.

Firefly selling a Rockchip ARM ITX-3588J 8K AI Mini-ITX with 32GB of RAM at $800 is obviously a prohibitive price owed to the need to make a profit at tiny scales. It shows that Intel's low-end sea wall is holding,
No, it shows just what a tiny and irrelevant market it is. IMO, that mini-ITX board is mainly a development kit for their SoMs, which is probably the real thing they're selling.

Trust me: if Orange Pi thought there was a good market in people wanting RK3588's on a mini-ITX board, they could easily make one and sell it for not much more than the Orange Pi 5 Plus. The market just isn't there for it.

To consumers, Atoms obviously represent a much better value than what ARM vendors can deliver at this price point of just above $100, because those vendors need to be profitable with what they sell.
If this were true, ARM couldn't compete in the Chromebook market, either.

Qualcomm, MediaTek and Samsung could change that, offering older or lesser mobile phone SoCs as a µ-server base, but for them there is also no money and lots of effort in that market. They'd much rather clip your money bag selling those as 'robotics' chips, or put extra chips into a landfill.
Microsoft makes a nice Snapdragon 8cx Gen 3 mini-PC for ~$600. Its main problem is just that the CPU cores aren't terribly competitive with what the same money can buy you in a x86-based mini-PC.

Because the extra logic required may be so small, that just doing a cut & paste from the dual channel IP block that they use on hybrid SoCs or Grand Ridge may be cheaper than doing a separate single channel design.
I highly doubt it. The IMC on Alder Lake-S isn't small, no thanks (I'm sure) to its support for both DDR4 and DDR5.

According to this analysis, the combined size of the memory controller & PHY is larger than an E-core cluster:
 

abufrejoval

Did you check if it was using swrast? I'll bet it was... not that native iGPU acceleration would be up to your standards, anyhow.
It's been awhile...

But I remember that I did spend quite a few hours trying to make sure it was actually using the GPU for Chrome and the 3D rendering in there, and that it wasn't easy. Something that stuck in my mind was that if the desktop used the GPU for window management, Chrome would be slower, as if it could only be one or the other.

I also tried HW accelerated video decoding and that was also rather painful.

What stuck in my head is that the Jetson did way better, and with a software stack that hadn't seen updates in much longer, while its CPU cores are significantly weaker than the RP4's and RAM was really constrained at only 4GB.
BTW, what's THD?
TrueHD or FullHD, 1920x1080, I keep baulking at just using 1080p, because it's not been interlaced for ages...
Do you know anything else about it? At launch, they disclosed virtually nothing about it, but I believe it was supposed to be larger and have enhancements to its microarchitecture. I think it's unlikely to come anywhere close to the N100's iGPU, no matter what they did to it.
The only reported change was a speed bump in the clock. I'm pretty sure they'd have talked aplenty if they had done more.

From what I've gathered over the years, the RP GPU is a very unique design, very nerdy and academically interesting. But that may also mean it's hard to change and speed up at the square scale typically required or without breaking things. I'd guess they have locked themselves in an architectural corner there, another reason I don't want to follow.
However, I differ slightly on their interpretation of the data. I think the crossover points are a distraction. All you need to see is the continual negative curvature of the perf/W plot. That tells you more performance can be had by increasing the core count and running them at lower frequencies. Because E-cores are the most area-efficient way to add performance, they provide the best scaling option.
What I can see is that there would be tons of variables and definitely performance overlaps between running x P and y E cores, while switching between those configurations as such would have a cost and latency impact, too.

My interest was mostly doing a PoC of adjusting data center resources to match transactional loads for a bit of cloud elasticity when you're tied to on-premise, so it wouldn't be exhaustive anyway.

But yeah, going for additional E-cores instead of frequencies in P-cores is what everybody seems to have measured for enough use cases to come out with these 'cloud optimized' SoCs or the C-variants of the Zen 4 cores.
Depends on what it is. If it's a benchmark of something like sqlite or a webserver, then it sure will! Even GPU-intensive benchmarks should probably spend a nontrivial amount of time in the kernel.
The biggest problem I face is that the instrumentation for precise measurements simply isn't there. perf will only give you the choice of measuring Joules for very major parts of the SoCs or very finely grained information about instructions counts and branching efficiency, but never both at the same time. Nor is there any per-process accounting for power consumption for lack of hardware and kernel support.
I wonder if the reason you didn't see CPU cores getting more benefit from DRAM bandwidth in those older SoCs was possibly because they intended the iGPU to use quite a large share of the bandwidth, and therefore didn't bother connecting the CPU cores to the memory controller with a particularly fast connection.
I'd say that for the CPU part they just didn't need that much bandwidth. All of these designs are incremental and Atoms started very low. The earliest Atoms used single channel designs with very weak CPU cores. And for a long time CPUs haven't really been DRAM bandwidth limited. If doubling RAM bandwidth doesn't do much for the CPU, they won't do it, especially since it always costs power.

I think that increasing the GPU power even faster on Atoms than CPU power was a bit of a paradigm shift at Intel, driven by the signage market going for Full HD panels or beyond, but on Jasper Lake 20GB/s was enough to feed the 32EUs at 900MHz GPU clock, and more just could not be fit into the 5 Watt GPU budget on an Atom. On a Tiger Lake with 96EUs at 1300MHz GPU you need more bandwidth for balance.

Regulating GPU vs CPU power budget allocation is a non-trivial problem not made easier by GPUs which tend to starve RAM bandwidth. But I don't think they are managing it via bandwidth or that GPU and CPU parts of the SoC have distinct memory connections. I think they are doing it via clocks based on a fixed Wattage budget limit because it's the easiest solution.

I have observed something like a 5 Watt GPU power limit imposed on Atoms and a 10 Watt GPU power limit on Xe-based SoCs. When I push overall TDP power limits below that on a Tiger Lake, it becomes quite wobbly, because the CPUs get pushed away from the Wattage feeding trough by the GPU, until the GPU needs CPU directions and halts, allowing the CPUs a slice or two.

And that might have driven Intel to turn to Hybrid SoCs, because finding the right balance between down-clocking the GPU to leave enough power budget to have at least one P-core still turn over became a bit of a challenge: E-cores just gave a smoother slowdown when overall power budgets drop from 15 to 8 Watts. Of course it also helped for paper and benchmark gains against the 8-core Zens.

The single channel on the Gracemont is probably uncritical for most CPU workloads, and at 1/3 of the Tiger Lake EUs it might just be ok even if the GPU clocks on the N305 reach Tiger Lake levels.

But the single socket story is a different issue, that's where I believe my paranoia is a good match to what Intel has done for decades.
A very good read, thanks!
IIRC, the Alder Lake E-cores are each about 30% the size of an Alder Lake P-core. I know both got an L2 cache boost in Raptor Lake, but I'm not sure if that ratio held.
Not quite 25%, but the idea that N50 or N200 chips are somehow made from N305 dies that are actually defective is ludicrous. Regularly having 4 or even 6 cores out of 8 truly broken is just unbelievable.

But I did notice that a broken E-core on a hybrid Alder Lake doesn't quite relegate that chip to a bottom feeder; Intel is selling i7 and i5 variants with only 4 active E-cores.

There's an OpenCL benchmark which can measure memory bandwidth (CLPeak?). You'd want to simultaneously run that + a CPU memory bandwidth test, in order to assess the impact on the entire SoC - not just the CPU cores.
I'll keep that in mind, thanks!
BTW, I found that the utility intel_gpu_top can provide some interesting statistics on iGPU power consumption and memory controller bandwidth figures. I haven't tried it on these low-end SoCs, but it's interesting to run on other Intel CPUs.
The hardware instrumentation for power consumption on Intel CPUs doesn't seem to have changed much since Sandy Bridge. At most you only get CPU, DRAM and iGPU measurements, but on hybrid cores CPU consumption is split between E and P cores. But if I'm not mistaken, it's actually missing the DRAM power consumption on some systems, while it's present e.g. on Xeons.

On Zen HWinfo and Ryzen Master seem able to get much more detailed Wattage figures e.g. per CPU core, but I don't see it available via Linux instrumentation or in 'perf'.

I couldn't find any freely available documentation and I wouldn't try to go for an NDA with AMD.
No... I seriously doubt Intel is forcing anyone to use Alder Lake-N in anything other than Chromebooks. The mini-ITX market hasn't gotten much love for these SoCs since Gemini Lake. I had wanted a Jasper/Elkhart Lake board, but they were virtually nonexistent, outside of industrial models. And even for Alder Lake-N, only two mainstream manufacturers have non-industrial models on the market.
I've read reports of OEMs using very strong language and nearly going on strike when Intel kept forcing Atoms on them in bundle deals. And a Mini-ITX board with an Atom on it is a lot of hardware for €100, even with a four-layer PCB. They still have lots of gold contacts and I just can't believe anyone gets rich on them.

It was mostly because of the foundry capacity limits that Intel relieved the Atom pressure a little during Covid: they sure don't want Mediatek or Chinese RISC-V designs to take over the Atom territory.
You're too paranoid. That's a tiny market. I can't imagine Intel cares about it, at all.
With Intel, it's hard to be too paranoid. Their old sins are well documented even if not well known.
Expecting new pressures not to create new sins, when sinning is regarded as the virtue of disruption, seems naive to me.

The market is tiny for DIY Mini-ITX, but NUCs, signage, set-top boxes, µ-servers etc. are volume. Intel can see enemies from afar and is on record for doing nearly everything to keep the competition away.

I stand by my accusation that the vast majority of all 2-4 core Gracemont Atom SoCs are intentionally crippled 8-core dies.
No, it shows just what a tiny and irrelevant market it is. IMO, that mini-ITX board is mainly a development kit for their SoMs, which is probably the real thing they're selling.

Trust me: if Orange Pi thought there was a good market in people wanting RK3588's on a mini-ITX board, they could easily make one and sell it for not much more than the Orange Pi 5 Plus. The market just isn't there for it.
I wouldn't mind the smaller form factor if the Orange Pi 5 Plus was a reasonable value proposition. It's just way too expensive and the 32GB variant is a theoretical option, only. It needs a DIMM socket variant with a competitive price.
If this were true, ARM couldn't compete in the Chromebook market, either.
I have a very dim view on Chromebooks, so I honestly don't know because I don't look that way.
Microsoft makes a nice Snapdragon 8cx Gen 3 mini-PC for ~$600. Its main problem is just that the CPU cores aren't terribly competitive with what the same money can buy you in a x86-based mini-PC.
Thanks a ton, that really looks good. I wonder if it can run Linux? There must be a snag, somewhere, because it's almost too good to believe!
I highly doubt it. The IMC on Alder Lake-S isn't small, no thanks (I'm sure) to its support for both DDR4 and DDR5.

According to this analysis, the combined size of the memory controller & PHY is larger than an E-core cluster:
[Annotated Alder Lake die shot from the linked Locuza die-walkthrough]
First of all: thanks for digging up that die shot, I've wanted to have one for a long time!

You're probably right that the transistors for the 2nd channel aren't there (which still doesn't justify the missing 2nd socket).

I forgot that the size of the memory controller isn't dictated by the logic (which would be small) but by the power amplifiers that are needed to push those signals across the PCB and DIMM sockets: those don't scale, and it's the reason AMD uses IODs on older nodes. It would be similar with the display control and the PCIe area.

I'd hazard that the red regular structures at the right are iGPU while the less structured orange part on the right might hold all the video transcode stuff that must after all be somewhere... some answers, more questions!

But you can see that random surface defects wiping out 4-6 cores on these chips in significant numbers matching retail volumes, while leaving all other areas functional, is pure fiction, when overall yields are surely > 90%.

The N305 would be the obvious fabbing target, much like Zen CCDs, yet it isn't the big volume seller; the N100 seems to be, from what I see on sale.

AMD had a terrible time trying to fill that niche when they weren't producing enough "bad" chips; only Intel decides not to care and spends on protecting the wall at all price points.

So I stand with my claim that N50 to N200 are intentionally crippled to keep all niches plastered with Intel product without resorting to separate dies.

And what I only heard from a guy very deep in the industry is that a California court decision actually made that intentional crippling illegal more than 10 years back...

It would probably take an Anand or Ian Cutress to follow up on that story, though...
 

bit_user

Microsoft DevKit:

Ok, just learned that this is an ARM 8cx gen 3 not a Snapdragon 8 Gen 3...

And Linux is a no-go, so I lost all interest.
Yes, I did say 8cx Gen 3. I didn't know about the Linux situation, but I'm not surprised. There's still Lenovo's Thinkpad X13S, which they're reportedly working on supporting under Linux.

Anyway, I take the MS dev kit more as a sign of what's possible and might be coming, than an end in itself.
 

domih

Be careful about this. Apollo Lake (and I think also Gemini Lake) is super picky about memory. I had an Apollo Lake board that I thought was just broken, since I bought RAM for it that even the memory vendor claimed would work. Turns out, the system would boot, but only if I installed memory just in DIMM slot B. Using just slot A or A+B didn't work.


True. The Orange Pi 5 was almost a year earlier to market, faster, and more power-efficient. Both use the same P-core, though (A76). That's because the RK3588 is made on Samsung 8nm (I think), while the Pi 5 uses either a 20 nm or 14 nm node, I believe.

Pi enthusiasts will tell you the Pi has better support (i.e. both better software support, and a much larger community). They're not wrong. Linux on the RK3588 has been a somewhat rough experience, for the early adopters.


This is not really true, for anyone who would consider running a Pi, in the first place. The cross-platform distros available for it are exactly the same, between ARM and x86. Even the packages with hand-tuned x86 SIMD optimizations mostly include similar optimizations for ARM.


I'm foggy on the details, but I think ARM has addressed this with a solution similar to x86's. I don't know (wouldn't expect) that the Pi (yet) conforms to it.

Anyway, it's not relevant to the current discussion, since Raspberry Pi's are the best-supported among ARM SBCs. If anything supports ARM, it's almost guaranteed to support R.Pis.

Be careful about this. Apollo Lake (and I think also Gemini Lake) is super picky about memory. I had an Apollo Lake board that I thought was just broken, since I bought RAM for it that even the memory vendor claimed would work. Turns out, the system would boot, but only if I installed memory just in DIMM slot B. Using just slot A or A+B didn't work.

I'm careful. I have several ODROID H2+ and H3 running with 32GB. They went through all the loads I usually run with no problem. Same thing with several H3+ with 64GB.

Pi enthusiasts will tell you the Pi has better support (i.e. both better software support, and a much larger community). They're not wrong. Linux on the RK3588 has been a somewhat rough experience, for the early adopters.

I concur. However, while the Hardkernel forum, for instance, is much smaller, you usually get support in a timely manner, without the noise of too many people. Regarding the RK3588, yep, it was a rodeo. I have a Radxa Rock 5B. I would gently say that they started to deliver the SBC without complete custom support testing, meaning the upstream Linux was not fully adapted. Personally, I prefer to use the Armbian distro. For Hardkernel, I use their ARM distros, which usually work fine (XU4, N2+, C4, M1, M1-S), with only a few issues in a few dark corners. For other ARM-based SBCs, I use Armbian.

I basically standardized on ODROID (Hardkernel) because their hardware works, their downstream distros work and are maintained. They usually offer more customization and connectivity than other SBCs. Their forums are on point. The one thing Hardkernel does not do is plenty of beautiful web pages (which I don't miss anyway).

I'm not a typical SBC user (games, video and so on). I rather use them for Enterprise oriented solutions. In this domain, the Ubuntu server .debs for ARM are not always available. This means make or cmake, plus systemd scripts if necessary, log integration, etc. On x86, there is always a .deb from the official source.
 

abufrejoval

Be careful about this. Apollo Lake (and I think also Gemini Lake) is super picky about memory. I had an Apollo Lake board that I thought was just broken, since I bought RAM for it that even the memory vendor claimed would work. Turns out, the system would boot, but only if I installed memory just in DIMM slot B. Using just slot A or A+B didn't work.

I'm careful. I have several ODROID H2+ and H3 running with 32GB. They went through all the loads I usually run with no problem. Same thing with several H3+ with 64GB.
I believe that DRAM support on Atoms is less an issue of 'pickiness' than lack of flexibility: they'd just want a standard set of timings and would fail if the SPDs didn't support that.

All those overclocking negotiations which are part of the DIY mainboard repertoire are BIOS goodies that their manufacturers are likely to sell as an extra, while they also need validation from the vendor. There simply isn't budget for that on Atoms.

And since I tend to prefer moving RAM between systems (which are also constantly rebuilt) rather than buying bespoke DIMMs for each, I always prefer flexible DIMMs with support for the biggest range of speed settings over highly tuned overclocking variants (also ECC for the big machines).

For some reason most of them wind up being from Kingston, because everybody else just prefers to sell you single-speed DIMMs that won't work in any other system down the road.

Now putting 64GB on an Atom was nearly insane when RAM was way more expensive than the rest of the system. But 64GB fell below €100 along the way and on a µ-server running mostly functional VMs running 24x7, RAM still beats paging even with SSDs, while its power budget is hard to even measure.

Quite a different story on my Xeons...

I got into Atoms mostly because I wanted something completely noiseless to run 24x7 in the home lab, and Atoms just came with passive Mini-ITX boards. Intel systems with big P-class cores simply couldn't do that unless you bought something like an Akasa chassis for more money than the system itself. AMD was just a gaping hole in that niche at the time.

ARMs and Raspberries were also attractive when fully passive yet with enough performance to get some real work done.

For me that changed when I tried some NUCs and found that I could get them quiet enough with the proper fan controls: They don't need to be fully passive if I don't notice them.

And the speed difference can be felt in productivity.

So having at least one or two P-cores in such a small system, is something I'd always prefer, if it doesn't cost an arm and a leg and can be kept quiet and low power.

At official Intel prices when new, that's not the case.

Near the end of their life cycle, as Chinese vendors like Erying push Intel stockpiles out really cheap, they become a €50-100 upgrade over an N305, in the form of its i5-1230U cousin. And that just changes things and leaves the available ARM alternatives behind.

In the last couple of years the price/performance/noise/power consumption points have moved much more significantly in the Intel or x86 space than on the Raspberry or other PIs.

While the RP5 has mostly just upgraded to active cooling and double peak CPU power, the competition has eked a lot more out of their silicon and IP. Intels may still eat quite a bit more power at peak, yet their idle and light load consumption in a NUC or even Mini-ITX form factor is equal or better.

It makes me feel that the niche for the RP5, and even ARM in the µ-server space, has gotten smaller, when the cloud service providers prove that it should have grown. But the cloud guys won't do PCs to scale their IP.

In the meantime I'll keep checking whether the Zen APUs with 4c cores continue to creep into my favorite price range. Too bad they won't support my DDR4 RAM sticks, even if DDR5 is no longer insane...
 

bit_user

TrueHD or FullHD, 1920x1080, I keep baulking at just using 1080p, because it's not been interlaced for ages...
But the "p" is a signifier that you're referring to the vertical resolution. Anyway, FullHD wouldn't have confused me, FWIW.

The only reported change was a speed bump in the clock. I'm pretty sure they'd have talked aplenty if they had done more.
Well, they were also tight-lipped about GPU enhancements for the Pi 4, and that did have some rather more substantial changes.

From what I've gathered over the years, the RP GPU is a very unique design, very nerdy and academically interesting.
Do you know of any good docs or overview?

But that may also mean it's hard to change and speed up at the square scale typically required or without breaking things.
I believe their biggest issue is that they're using older process nodes and keep pushing the limit on what CPU cores they can fit, leaving too little transistor budget for a bigger iGPU without exceeding their price targets.

What I can see is that there would be tons of variables and definitely performance overlaps between running x P and y E cores, while switching between those configurations as such would have a cost and latency impact, too.
Nah, you're over-complicating it. Just spend your power budget to maximize the aggregate throughput across all of the loaded cores. Treat thread-scheduling separate from that.

I scraped their data and did a little simulation of the aggregate performance for different core combinations:

[Charts: simulated aggregate throughput for different P-core/E-core combinations, based on the scraped data]
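For anyone curious, a toy sketch of what that kind of aggregation looks like; the per-core operating points below are invented placeholders, not the scraped Chips & Cheese data behind the charts above:

```python
from itertools import product

# Hypothetical (frequency, perf per core, watts per core) operating points.
P_POINTS = [("3.0 GHz", 10.0, 8.0), ("4.0 GHz", 13.0, 15.0), ("5.0 GHz", 15.0, 25.0)]
E_POINTS = [("2.0 GHz", 5.0, 2.5), ("3.0 GHz", 7.0, 5.0), ("3.8 GHz", 8.5, 9.0)]

def best_config(p_cores: int, e_cores: int, budget_w: float):
    """Pick the per-core-type frequency that maximizes aggregate throughput under a power budget."""
    best = None
    for (pf, pperf, pw), (ef, eperf, ew) in product(P_POINTS, E_POINTS):
        watts = p_cores * pw + e_cores * ew
        if watts > budget_w:
            continue
        perf = p_cores * pperf + e_cores * eperf
        if best is None or perf > best[0]:
            best = (perf, pf, ef, watts)
    return best

for cores in [(8, 0), (6, 8), (8, 16)]:
    print(cores, best_config(*cores, 125.0))
```

With numbers like these, the configurations with more (slower) cores win on aggregate throughput, which is the point about the negative curvature of the perf/W plot.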


The biggest problem I face is that the instrumentation for precise measurements simply isn't there. perf will only give you the choice of measuring Joules for very major parts of the SoCs or very finely grained information about instructions counts and branching efficiency, but never both at the same time. Nor is there any per-process accounting for power consumption for lack of hardware and kernel support.
Well, Chips & Cheese took the approach of using builtin power-limiting and affinity masks, then measuring performance within those constraints.
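On Linux, that power-limiting side of it maps onto the powercap (RAPL) sysfs interface; a hedged sketch, assuming the usual intel-rapl:0 package domain (paths vary, and both reading and writing typically need root):

```python
from pathlib import Path
import subprocess
import time

PKG = Path("/sys/class/powercap/intel-rapl:0")   # package power domain (usually)

def read_energy_uj() -> int:
    return int((PKG / "energy_uj").read_text())

def set_pl1(watts: float) -> None:
    # constraint_0 is typically the long-term (PL1) limit; writing needs root.
    (PKG / "constraint_0_power_limit_uw").write_text(str(int(watts * 1_000_000)))

if __name__ == "__main__":
    set_pl1(15.0)                                 # e.g. cap the package at 15 W
    e0, t0 = read_energy_uj(), time.monotonic()
    subprocess.run(["stress-ng", "--matrix", "0", "--timeout", "30s"], check=True)
    e1, t1 = read_energy_uj(), time.monotonic()
    joules = (e1 - e0) / 1e6                      # counter can wrap on long runs
    print(f"{joules:.1f} J over {t1 - t0:.1f} s ({joules / (t1 - t0):.1f} W average)")
```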

I'd say that for the CPU part they just didn't need that much bandwidth.
If you use a tool like intel_gpu_top, you can actually see how much main memory bandwidth is being used, regardless of whether it's by the CPU cores or the GPU. I haven't looked into how it gets that information from the integrated memory controller (IMC), but it's open source.

for a long time CPUs haven't really been DRAM bandwidth limited.
Depends on the CPU and the workload. I believe one of the reasons the 7950X is so much faster than the 5950X is due to the additional bandwidth from DDR5. On SPECint 2017 rate-n, it's about 43.5% faster, which is way bigger than the difference in single-threaded performance!

Not quite 25%
I should've cited my source:

"Depending on how you do it and with or without power banks, you may land between 1.6 to 1.7 mm² for one efficiency core.
Based on the smallest numbers they just take 30% of the area of a Golden Cove core without its L2 cache and clock circuitry, respectively Golden Cove is 3.4x times larger."

Source: https://locuza.substack.com/p/die-walkthrough-alder-lake-sp-and
If you have a better source, let's see it.

First of all: thanks for digging up that die shot, I've wanted to have one for a long time!
Oh, it's easy to share what's not mine!
; )

I hope you've had a chance to at least skim the writeup I got it from. Plenty of interesting details, in there. Too bad that author hasn't posted anything new, in a while. I'd consider subscribing, if I knew more content would be forthcoming.

So I stand with my claim that N50 to N200 are intentionally crippled to keep all niches plastered with Intel product without resorting to separate dies.
Do we know whether any of the N-series have 2 cores enabled in each quad? I guess that should be reflected in the quantity of L2 cache, if so.

And what I only heard from a guy very deep in the industry is that a California court decision actually made that intentional crippling illegal more than 10 years back...
If Intel disabled cores to support different core vs. L2 cache ratios, maybe that could squeeze through. AMD does that, in some of their more specialized EPYC CPU models.
 
But the "p" is a signifier that you're referring to the vertical resolution. Anyway, FullHD wouldn't have confused me, FWIW.
The p is actually for progressive scan, but people use it as a shorthand these days. Just one of those annoying things that has a real definition, but has taken on an entirely new meaning. I'd never heard THD either, but you asked first so I didn't have to. I'm the same regarding FHD.
 

bit_user

Personally, I prefer to use the Armbian distro. For Hardkernel, I use their ARM distros, which usually work fine (XU4, N2+, C4, M1, M1-S), with only a few issues in a few dark corners. For other ARM-based SBCs, I use Armbian.
Do you use a custom kernel on your ODROID ARM devices? AFAIK, that's the only way to get 3D or video acceleration, because the proprietary drivers won't work with the super-old stock kernel.

I basically standardized on ODROID (Hardkernel) because their hardware works, their downstream distros work and are maintained.
I like HardKernel and agree their stuff works pretty well. I have the ODROID-N2+, but have been disappointed about the lack of out-of-the-box hardware acceleration.

Also, I can no longer run a web browser on it, after upgrading from Ubuntu 20.04LTS to 22.04LTS. I could reinstall, but have been too lazy.

BTW, the main reason I haven't bought a RK3588 board is that I've been waiting for HardKernel to release one, since I figured they'd do it right. At this point, I'm starting to doubt they ever will.

I'm not a typical SBC user (games, video and so on). I rather use them for Enterprise oriented solutions.
I used to have a Raspberry Pi as an always-on media streaming server. I got an Apollo Lake mini-ITX board as an upgrade to that, but still maintained an interest in ARM SBCs. I'll admit my ODROID-N2+ was basically just a toy that I used to test out and experiment with various things on ARM. It's come in handy, a few times.
 

bit_user

The p is actually for progressive scan, but people use it as a shorthand these days.
Back in 2002, I got a 29" presentation monitor to use as a progressive-scan TV. It could handle an XGA (1024x768) signal, but the shadow mask was too coarse to resolve much detail above SVGA (800x600). I had to buy an outboard deinterlacer for it (DVDO iScan Pro, IIRC), since the one it came with was garbage. I also bought a YPbPr -> VGA converter, so I could connect my Panasonic progressive-scan DVD player directly to its other input. That DVD player had a Faroudja deinterlacer with DCDi (Directional-Correlational Deinterlacing), which typically worked pretty well.

I kept my little setup until I got a Panasonic 1080p plasma, about 10 years later. I bought it for the superlatives people claimed about its panel, but what I grew to love most about that Panasonic TV was its motion interpolation. At one point, I even bought an outboard video processor with a Teranex chip, but found it really didn't do anything better than the TV, when fed by my Oppo blu-ray player's "Pure Direct" output mode (which locked the HDMI link to the colorspace and sample structure of the video content).

I love that sort of stuff. It's the same reason I'd consider getting a HTPC with a Nvidia GPU, just to use their VSR upscaling technology on streaming content.
 

bit_user

While the RP5 has mostly just upgraded to active cooling and double peak CPU power, the competition has eked a lot more out of their silicon and IP.
That's because Pi is using ever bigger cores on trailing nodes and cranking up clock speeds to deliver ever more performance.

Intels may still eat quite a bit more power at peak, yet their idle and light load consumption in a NUC or even Mini-ITX form factor is equal or better.
No, I don't think so.

The ODROID-N2L idles at just 1.5 W, system-wide. It has 4x A73 + 2x A53 cores. Show me a modern x86 mini-PC which does that.

If you can get by with even less compute power, their M1S idles at just 0.7 W:

The N2L's SoC was made on a 12 nm process node. The M1S' was made on a 22 nm node, but its cores are just 4x A55 and its GPU is worse. Now, just imagine what kind of efficiency newer SoCs can deliver, on even better nodes! ARM is far from being out of the efficiency game. Just don't focus so much on the Raspberry Pi.
 
Back in 2002, I got a 29" presentation monitor to use as a progressive-scan TV. It could handle an XGA (1024x768) signal, but the shadow mask was too coarse to resolve much detail above SVGA (800x600). I had to buy an outboard deinterlacer for it (DVDO iScan Pro, IIRC), since the one it came with was garbage. I also bought a YPbPr -> VGA converter, so I could connect my Panasonic progressive-scan DVD player directly to its other input. That DVD player had a Faroudja deinterlacer with DCDi (Directional-Correlational Deinterlacing), which typically worked pretty well.
I wish I could remember what my last CRT television was. From the early-mid 00s to early 2010 I used a Pentium 3 system as my everything-multimedia box, and until I got rid of the CRT it was plugged in via S-Video because that was better than composite. I always thought that era was more interesting, as it was about figuring out how to get stuff to work right rather than mostly just dealing with the purposeful incompatibility and DRM of today.
I love that sort of stuff. It's the same reason I'd consider getting a HTPC with a Nvidia GPU, just to use their VSR upscaling technology on streaming content.
I'm really curious about the upscaling they're doing, but the television I'd be most likely to use it on has a really good scaler so it wouldn't likely be an improvement.
 

bit_user

I wish I could remember what my last CRT television was.
Television? I was still using a pair of Sony GDM-FW900 24" CRT monitors on my PC, until 2020! I almost bought an AMD Fury graphics card, until I discovered it had no analog output. That's when I bought my first Nvidia card: an EVGA GTX 980 Ti, which was the last generation of theirs ever to support analog out.
 