Raspberry Pi 4 (8GB) Tested: Double the RAM, New 64-Bit OS

The biggest improvement will come when there is an easy and effective way to make code more parallel - this has been an issue since the first DP/MP supercomputers and persists to this day. Today there is A LOT of hand tuning of various libraries to make sure they can make full use of however many CPU cores there are. When businesses started virtualizing their machines, it achieved a lot of the same goals - more complete utilization of the hardware.
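For illustration, here is a minimal sketch of the kind of hand parallelization being described, using Python's multiprocessing module to spread independent chunks of work across however many cores the machine reports. The work() function is just a placeholder for a real CPU-bound task.

# Minimal sketch: split independent work across all available CPU cores.
# Gains depend entirely on the workload dividing into independent chunks.
import os
from multiprocessing import Pool

def work(n):
    # Stand-in for a CPU-bound task.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 16
    with Pool(processes=os.cpu_count()) as pool:
        results = pool.map(work, jobs)
    print(f"{len(results)} chunks done across {os.cpu_count()} cores")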

IPC gains are fine - but the average enthusiast PC has more than enough raw capability to do whatever we would want in our wildest fever dreams. Maybe the current programming paradigm is not up to the task - maybe it should be more of an OS thing.

While CPUs have slowly trickled along, parallelization and particularly the utilization of GPU cores has felt like warp speed innovation in comparison. I remember the first time using GPU acceleration and realizing that preview renders just became obsolete. That is real change.
 
You can use up 4GB of RAM pretty quickly when compiling. And because the Raspberry Pi swaps to flash, adding swap is not a solution.
If you connect a reasonably fast SATA SSD via USB3, swapping to it probably wouldn't be too bad.

FWIW, I use an SD card on an older Pi and always make a swap partition on it. Not that I want it to swap, but allowing for a bit of swapping can free up RAM when things get tight.
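To see how tight things actually get during something like a big compile, here is a rough sketch (assuming Linux's /proc/meminfo, as on Raspberry Pi OS) that prints available RAM and free swap; run it alongside the build and watch the numbers.

# Sketch: report available RAM and free swap from /proc/meminfo.
# Values in the file are in kB; printed here in MiB.
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # kB
    return info

m = meminfo()
print(f"MemAvailable: {m['MemAvailable'] // 1024} MiB")
print(f"SwapTotal:    {m['SwapTotal'] // 1024} MiB")
print(f"SwapFree:     {m['SwapFree'] // 1024} MiB")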
 
What I would really like to see is an updated A+ form factor that would lower the physical profile but still get the new Pi 4 hardware otherwise. I would think using a USB-C connector for external docks for Ethernet and more USB ports would enable that nicely, and keep it down to a single micro HDMI connector. Just pack on the RAM and 64-bit CPU for the win.
 
While CPUs have slowly trickled along, parallelization and particularly the utilization of GPU cores has felt like warp speed innovation in comparison. I remember the first time using GPU acceleration and realizing that preview renders just became obsolete. That is real change.
Yeah, in an inherently parallel architecture like a GPU - yes. In CPUs, no.
 
Yeah, but he didn't say whether he was using any kind of ad blocker, what site, or how long he left the tabs loaded.

No ad blocker. Several minutes, maybe 10 to 15, but I wasn't timing it. In an attempt to use more RAM, I had many of the tabs playing 4K video trailers from hd-trailers.net. This was not a rated review; hence, we did not give it a star rating. The RAM testing was meant to be anecdotal; I apologize if that wasn't clear from the copy.

No, it really doesn't. For the most part, he didn't give enough detail that anyone could expect to repeat his tests and get the same results. As such, this is not a proper hardware review.

It's not meant to be a formal review. We did a formal review of the Raspberry Pi 4 when it came out in June 2019 and we are revising that text a bit to bring it up to date. I ran several synthetic benchmarks on the Pi 4 8GB and, almost without fail, it has similar performance to other Raspberry Pi 4 capacities (1, 2 and 4GB); I mentioned several of these tests in the article. I've also talked with other folks who have tested it and their experiences are similar: it's a Pi 4 but with 8GB of RAM. That should not surprise anyone. As such, this should be considered an additional configuration of the same product, not an entirely new product.

The ultimate question that any review needs to answer -- and though this is not a rated review I did try to answer it -- is whether one should buy the product. As I said in the article, the answer really depends on your needs. Few people who already own a Raspberry Pi 4 have a compelling reason to upgrade at present. Those who don't own one yet and want to future-proof by spending an extra $20 over the $55 Pi 4 (4GB) are making a solid decision.
 
Hi Avram. There's an error in this text: "a 64-bit operating system allows 64-bit apps that can use more than 4GB in a single process". 32-bit Raspbian can run a user space process of up to 3GB, not 4GB. The kernel needs some address space, and is assigned 1GB of virtual addressing.
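A quick way to see that ceiling for yourself is to keep allocating until the process hits it. A hedged sketch (it will deliberately eat memory, so only run it on a machine you can afford to bog down): on a 32-bit userland it typically fails around the ~3GB mark regardless of installed RAM, while on a 64-bit userland it keeps going until RAM and swap run out.

# Sketch: allocate 64 MiB chunks until the process can't get any more.
# On 32-bit userland this usually stops near ~3 GB of address space;
# on 64-bit it only stops when physical RAM plus swap is exhausted.
chunks = []
try:
    while True:
        chunks.append(bytearray(64 * 1024 * 1024))
except MemoryError:
    pass
print(f"Allocated about {len(chunks) * 64} MiB before the first failure")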

The article could have added the caveat that the KMS driver for the RPi4 doesn't currently support acceleration used for displaying videos, such as from VLC or a web browser.

The main reason to move to a 64-bit operating system is simultaneously simple and obscure: the increased address space.

The reduced address space of 32-bit Linux particularly bites around the usage of "kernel low memory" -- this memory has a 1:1 mapping of virtual address to physical address, which allows for fast DMA. With a VMSPLIT_3G the lowmem on ARM is 512MB. If you are doing a lot of I/O then this lowmem becomes fragmented, meaning that it takes a long time to make space when a request for a large allocation arrives, with throughput suffering as a result. Before 64-bit Linux on x86_64, this fragmentation was frequently experienced on busy 32-bit mirror servers. The specifications of the Raspberry Pi 4 4GB exceed those of the servers of that era, which just increases the likelihood of lowmem fragmentation under load. You can see how, on a system with this sort of high I/O load, a 32-bit operating system will underperform compared to the same operating system on the same hardware using a 64-bit address space and thus much less fragmentation of its much larger lowmem.
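You can watch this sort of fragmentation on a live system: /proc/buddyinfo lists how many free blocks of each power-of-two order remain in each memory zone, and under heavy I/O the higher-order counts in the lowmem zones drain away. A small sketch to print it in a friendlier form; zone names vary with architecture and kernel config.

# Sketch: summarize /proc/buddyinfo. Column N is the number of free
# blocks of order N (2^N pages). If the lowmem zone has no high-order
# blocks left, large contiguous allocations will be slow.
with open("/proc/buddyinfo") as f:
    for line in f:
        parts = line.split()
        node, zone = parts[1].rstrip(","), parts[3]
        counts = [int(c) for c in parts[4:]]
        largest = max((i for i, c in enumerate(counts) if c > 0), default=-1)
        print(f"node {node} zone {zone:8s} largest free block order: {largest}")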

I've highlighted one area of operating system performance, but there are several other bottlenecks that show similar gaps between the 32-bit and 64-bit address spaces.

The point about LibreOffice is also a good one. Microsoft Office users are familiar with the same 32-bit issue: once you reach the maximum number of cells available within Windows' 2GB user-space addressing, there is no simple way to continue; installing more RAM won't help. The only solutions are to re-write the analysis (expensive) or to use the 64-bit versions of LibreOffice or Microsoft Office, which require 64-bit operating systems.

Edit: The article's recommendation of 32GB of RAM for not-throwaway x86_64 laptops is defensible. The question is: which unalterable characteristics set the lifetime of a laptop? Batteries are replaceable. Spare keyboards probably set some limit, 5-10 years depending on make. Spare fans reach back a decade. Screen resolution certainly sets a limit, but 1920x1080 is so popular that it will be supported for a decade. The limits seem to be CPU speed and RAM. If the RAM is soldered then the question isn't how much RAM an operating system needs today, but how much RAM it will need in five years' time. 16GB seems like it would be the minimum, and 32GB would be comfortable. Avoiding laptops with soldered secondary storage would also seem to be wise.
 
Thanks for taking the time to respond.

No ad blocker. Several minutes, maybe 10 to 15, but I wasn't timing it. In an attempt to use more RAM, I had many of the tabs playing 4K video trailers from hd-trailers.net. This was not a rated review; hence, we did not give it a star rating. The RAM testing was meant to be anecdotal; I apologize if that wasn't clear from the copy.
I understand that it was an informal test.

For future reference, Firefox has a cool feature: if you type about:performance into your address bar, you can see the CPU utilization and memory consumption of different tabs. In Chrome, Shift-Esc will bring up the task manager. In both cases, you can click the column heading to sort by memory utilization. I expect you'll find that playing 4K videos doesn't use as much memory as you think, while you might be surprised how much RAM is consumed by simply browsing sites like tomshardware.com - especially the article pages. I should add that I'm only basing this on my PC-based browsing experiences - my own Pi is too old and slow to provide a usable browsing experience.
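If you want a number outside the browser's own tooling, here is a rough Linux-only sketch that totals the resident memory of every browser process by scanning /proc; the process names it matches on are assumptions, so adjust them for whatever build is installed.

# Sketch: total the resident set size (VmRSS) of browser processes.
# Names matched below ("chromium", "chrome", "firefox") are assumptions.
# RSS double-counts shared pages, so treat the total as a rough figure.
import os

BROWSERS = ("chromium", "chrome", "firefox")
total_kb = 0
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/comm") as f:
            name = f.read().strip().lower()
        if not any(b in name for b in BROWSERS):
            continue
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    total_kb += int(line.split()[1])  # kB
    except OSError:
        continue  # process exited while we were reading
print(f"Browser processes: about {total_kb // 1024} MiB resident")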

Also, note that modern browsers tend to unload inactive tabs in order to conserve resources. Because of this, browser tabs might not be the best stress test of memory usage. It's not fool-proof, though: recently, I had Chrome pointed at this page.


Even while the browser was minimized, it used over 4 GB of RAM, triggering my desktop to run out of memory and kill Firefox (which wasn't even the browser at fault) before I noticed.

Using separate, open browser windows might be the way to go. I'm not sure if open browser windows get unloaded as aggressively as tabs.

The ultimate question that any review needs to answer -- and though this is not a rated review I did try to answer it -- is whether one should buy the product. As I said in the article, the answer really depends on your needs.
As @rugupiruvu pointed out, compiling software (especially C++ code) is a use case where people are likely to significantly benefit from > 4 GB. Might be worth keeping in mind, for future reviews.

Anyway, thanks for the review & thanks, again, for the reply.
 
The main reason to move to a 64-bit operating system is simultaneously simple and obscure: the increased address space.
You make an interesting case, but I disagree. The ARM A64 ISA is much more advanced than A32. You get 31 GP registers instead of 13, double-width SIMD registers, and a number of other improvements.
For the typical app, those ISA improvements would probably matter more than the reduction in I/O-related memory fragmentation you cite... if 64-bit Raspberry Pi OS would actually use A64 for userspace!

Even if it doesn't, I think the benchmarks show some benefit from at least the kernel being compiled for AArch64 mode.
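On that note, it's easy to check which combination a given install is actually running. A small sketch (assuming a Linux Python install): a 64-bit kernel with a 32-bit userland typically reports 'aarch64' from uname while the interpreter's own pointers are still 4 bytes.

# Sketch: report the kernel architecture and the bitness of this process.
import os
import struct

kernel_arch = os.uname().machine          # e.g. 'armv7l' or 'aarch64'
userland_bits = struct.calcsize("P") * 8  # pointer size of this interpreter
print(f"kernel: {kernel_arch}, userland process: {userland_bits}-bit")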
 
The biggest improvement will come when there is an easy and effective way to make code more parallel - this has been an issue since the first DP/MP supercomputers and persists to this day.

I agree - better threading and parallelization of code should offer a lot of opportunities, and this would be better done at the OS level for both PC and ARM.

I just found out that the new world's largest supercomputer is running ARM - I'm sure there's a lot of custom work, but it would be awesome to be able to build a somewhat modular machine with many parallel cores. It seems to me that ARM could be great for that, though it would depend on the nature of the problems (or the way the computational software is coded).
 
In my experience, a properly configured computer can read all of RAM in about 1 second. I just don't think the Raspberry Pi 4 can do that with 8GB of RAM. Amazon instances today allocate 2GB of RAM to each processor, which is 4x faster than the Raspberry Pi.

But just FYI, swapping/VM with an SSD doesn't work; there are too few write cycles (10,000) per block on an SSD.
 
In my experience, a properly configured computer can read all of RAM in about 1 second.

But just FYI, swapping/VM with an SSD doesn't work; there are too few write cycles (10,000) per block on an SSD.
How long it takes to read the entire memory is a direct function of bandwidth. Bandwidth and memory size are two completely different things. A "properly configured computer" has whatever amount of memory capacity its workload requires and whatever bandwidth is needed to service that load. Cache appliances need tons of RAM to achieve a high hit rate but nowhere near enough bandwidth to go through the whole thing every second.
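For a ballpark on the bandwidth side, here is a sketch (assuming NumPy is installed; shrink the buffer on smaller boards) that times a large block copy. The copy reads the source and writes the destination, so roughly twice the buffer size moves through the memory system per pass, giving a rough floor for sequential bandwidth.

# Sketch: rough sequential memory bandwidth estimate via a large copy.
# A 512 MiB buffer keeps the test well outside any CPU cache.
import time
import numpy as np

size = 512 * 1024 * 1024  # bytes
src = np.ones(size, dtype=np.uint8)
dst = np.empty_like(src)

start = time.perf_counter()
np.copyto(dst, src)
elapsed = time.perf_counter() - start

moved_gib = 2 * size / 2**30  # read + write
print(f"~{moved_gib / elapsed:.1f} GiB/s effective copy bandwidth")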

As for SSDs, Intel's XPoint SSDs have a 60 DWPD warranty, which is a minimum of 100k writes over five years.
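The arithmetic behind that figure, as a quick check:

# 60 drive writes per day, sustained over a 5-year warranty period.
dwpd, years = 60, 5
print(dwpd * 365 * years)  # 109,500 full-drive writes -- comfortably over 100k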