News Russian-Made Elbrus CPU's Gaming Benchmarks Posted

Russia doesn't have many homegrown processors — the Elbrus and Baikal are probably the two most popular chips in the country.
If you are trying to imply that Elbrus and Baikal are popular gaming CPUs in Russia, then you are completely out of touch with reality. Most gamers have never even heard those names... Those are CPUs for state agencies and no one else.

As for actually popular gaming CPUs in Russia, I think it's the Xeons and ES chips from AliExpress. They have good enough performance and absolutely destroy the shilled "best CPUs" in terms of performance/price.
 
Yeah, it went from being 20 years behind to only 15 years behind everyone else.
The main reason I'm interested is that it incorporates certain ideas from VLIW. I want to see how that progresses, as I consider it one possible alternative timeline that Itanium could've followed, had a few decisions been made differently and Intel not cut it off before it could mature. Another such architecture is Tachyum.

We should be open-minded to variations on the dominant ISA paradigms, as we continue to seek faster, cheaper processors in the face of rising semiconductor costs and decelerating improvements from new manufacturing nodes.
 
Last edited:
The real questions are: is this chip strong enough to create night vision? Can it quickly and effectively operate automated tank defenses? Is it enough for Russian satellites? How about missiles, can their operating systems be accommodated? I wonder how the rest of the Russian high-tech industry is holding up. In G-d I trust.
 
The real questions are: is this chip strong enough to create night vision? Can it quickly and effectively operate automated tank defenses? Is it enough for Russian satellites? How about missiles, can their operating systems be accommodated?
Based on bits and pieces I've heard from people who seem to have some clue about this stuff (I don't), what I've gathered is those sorts of things typically use FPGAs + microcontrollers. Not that you wouldn't have a general-purpose CPU in some military equipment, but I rather think this CPU is more oriented towards server applications.
 
  • Like
Reactions: KyaraM
My 12-year-old i7-2600K still outperforms that dog.
What's weird is they weren't clear whether this is in emulation, but that's what I assume. Otherwise, you'd have to recompile these games, and I don't think any of them are open source.

In terms of raw compute power, it seems each core can issue 48 fp32 ops per cycle (or half that, for fp64). I'm guessing they arrive there by having 2 ports that issue 16-element (512-bit) vector ops, with one of them capable of FMA (which counts as two ops per element, so 32 + 16 = 48) and the other probably add/sub. Or maybe 4 ports and 256-bit vectors. Either way, its vector engine seems multiple times as powerful as that of a Sandy Bridge core.

Before you get too impressed, the iGPU of a Skylake desktop CPU can manage 441 fp32 GFLOPS, or about 76% as much. And the ELBRUS-8SV's fp64 throughput can be surpassed by that of a modest Radeon RX 6500 XT dGPU. So, it's not bad for a CPU, but its raw compute power isn't even competitive with 28 nm-era dGPUs.
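If you want to sanity-check those numbers, here's the back-of-the-envelope math. The 8 cores and 1.5 GHz clock are my assumptions based on the published ELBRUS-8SV specs; the 48 ops/cycle figure is from above:

```python
# Theoretical peak throughput estimate for the ELBRUS-8SV.
# Assumptions: 8 cores at 1.5 GHz (published specs); 48 fp32 ops/cycle/core.
ops_per_cycle_per_core = 48   # fp32; half this for fp64
cores = 8
clock_hz = 1.5e9

peak_gflops = ops_per_cycle_per_core * cores * clock_hz / 1e9
print(f"peak fp32: {peak_gflops:.0f} GFLOPS")                # 576 GFLOPS
print(f"Skylake iGPU (441) ratio: {441 / peak_gflops:.3f}")  # 0.766, i.e. ~76%
```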
 
Last edited:
  • Like
Reactions: bolweval and KyaraM
The main reason I'm interested is that it incorporates certain ideas from VLIW. I want to see how that progresses, as I consider it one possible alternative timeline that Itanium could've followed, had a few decisions been made differently and Intel not cut it off before it could mature. Another such architecture is Tachyum.

We should be open-minded to variations on the dominant ISA paradigms, as we continue to seek faster, cheaper processors in the face of rising semiconductor costs and decelerating improvements from new manufacturing nodes.

The issue is going to be whether or not they have the full instruction set of a modern x86-64 CPU. You can make a chip with tons of cores and high clocks that will still perform like a dog in games if it's missing instructions.

It's just like how people with GPUs that don't have DX12 Ultimate tier 2 compliance complain about game performance or visuals and say the game is trash or unoptimized, when they have a GPU that's missing instructions needed for an optimization or a visual feature.
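On Linux, you can see exactly which x86 extensions a CPU (or an x86 emulation layer) actually advertises. A minimal sketch, assuming a Linux system with /proc/cpuinfo; the set of required flags is just an example:

```python
# Check which x86 SIMD extensions the CPU advertises via /proc/cpuinfo (Linux).
# Game engines' code paths often require specific flags (SSE4.2, AVX, AVX2...).
def cpu_flags() -> set[str]:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

required = {"sse4_2", "avx", "avx2"}   # example requirement set
missing = required - cpu_flags()
print("missing extensions:", ", ".join(sorted(missing)) or "none")
```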
 
I think the reason they never made DX12 available for Windows 7 is because they knew we'd all still be using it....
The issue is going to be whether or not they have the full instruction set of a modern x86-64 CPU. You can make a chip with tons of cores and high clocks that will still perform like a dog in games if it's missing instructions.

It's just like how people with GPUs that don't have DX12 Ultimate tier 2 compliance complain about game performance or visuals and say the game is trash or unoptimized, when they have a GPU that's missing instructions needed for an optimization or a visual feature.
I can think of another issue...
Secure systems.
Is anyone aware if it has any Israeli spy hardware built into the CPU, like the Intel ARC processor built into every Intel CPU and the AMD equivalent, whatever it's called?
 
I think the reason they never made DX12 available for Windows 7 is because they knew we'd all still be using it....
Didn't they, though? I'm sure Microsoft shipped DX12 support on Win 7 for some game or another... was it WoW?

spy hardware built into the CPU, like the Intel ARC processor built into every Intel CPU and the AMD equivalent, whatever it's called?
You mean ARC as in the Argonaut RISC Core? It's more commonly known as part of the IME (Intel Management Engine), which ran on an ARC core in its earlier versions. AMD calls it their PSP (Platform Security Processor), and they used an ARM core.

Every general-purpose CPU seems to have something like it, these days. So, it wouldn't surprise me.

If you're really paranoid, you can always buy a POWER-based system. Raptor Computing Systems sells machines with 100% open-source firmware and zero binary blobs. The downside is they're starting to get rather dated (though still faster than these ELBRUS-8SV CPUs).
 
Last edited:
What's weird is they weren't clear whether this is in emulation, but that's what I assume. Otherwise, you'd have to recompile these games, and I don't think any of them are open source.
You did hear about the Steam Deck, though, right?!
Gaming on Linux (Windows games, that is) isn't emulation; it uses translation layers, and those have practically zero overhead. Otherwise the Steam Deck wouldn't be able to play anything other than easy 2D games.
 
You did hear about the Steam Deck, though, right?!
Gaming on Linux (Windows games, that is) isn't emulation; it uses translation layers, and those have practically zero overhead. Otherwise the Steam Deck wouldn't be able to play anything other than easy 2D games.
Right, but this CPU is non-x86. That means they're probably using an API emulator atop an ISA emulator. For running x86 apps on ARM CPUs, there's a project called FEX-Emu. However, my guess is they're probably using Box64. Or maybe their own thing.
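To give a feel for why that isn't hopeless: here's a purely illustrative toy of dynamic binary translation, the core trick projects like Box64 and FEX-Emu use (their real implementations are vastly more sophisticated). Translate a guest block once, cache it, and re-run the cached native version for hot code:

```python
# Toy dynamic binary translator: translate each guest "instruction" to a native
# (Python) callable once, cache it, then execute from the cache. Real x86
# translators work on basic blocks of machine code, but the caching idea is the
# same, and it's why hot loops can run near native speed under "emulation".
guest_program = [("mov", "a", 5), ("add", "a", 3), ("mul", "a", 2)]

def translate(instr):
    op, reg, val = instr
    if op == "mov":
        return lambda regs: regs.update({reg: val})
    if op == "add":
        return lambda regs: regs.update({reg: regs[reg] + val})
    if op == "mul":
        return lambda regs: regs.update({reg: regs[reg] * val})
    raise ValueError(f"unhandled op: {op}")

translation_cache = {}
regs = {"a": 0}
for pc, instr in enumerate(guest_program):
    if pc not in translation_cache:     # translate only on first execution
        translation_cache[pc] = translate(instr)
    translation_cache[pc](regs)
print(regs)  # {'a': 16}
```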

BTW, I wouldn't say WINE has "practically zero overhead". All of the Windows API calls an app makes need to be re-implemented by the WINE runtime. While WINE is pretty good, it's actually different code, and therefore the concept of "overhead" is a bit misleading; you should really talk about comparative performance. In particular, the Linux GPU device drivers don't implement Direct3D natively, so WINE actually has to implement Direct3D itself (I believe atop Mesa's Gallium 3D state tracker).

Or, you can also now run it atop vendor-specific Vulkan drivers, via DXVK. I think the main beneficiaries are Nvidia users, since they've pretty much had to use the proprietary drivers to get any kind of decent performance or functionality.
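To make "re-implemented, not mapped 1:1" concrete, here's a minimal sketch, assuming nothing about WINE's actual code: a Windows-shaped entry point whose body is real translation logic over POSIX, not a table lookup:

```python
# Illustrative only -- not WINE source. A Windows-style CreateFileA() built
# atop POSIX open(): access masks, flags, and path conventions all need real
# translation code, which is what "re-implementing the API" means in practice.
import os

GENERIC_READ  = 0x80000000   # actual Win32 access-mask constants
GENERIC_WRITE = 0x40000000

def CreateFileA(path: bytes, desired_access: int) -> int:
    if desired_access & GENERIC_WRITE:
        flags = os.O_RDWR | os.O_CREAT
    elif desired_access & GENERIC_READ:
        flags = os.O_RDONLY
    else:
        raise ValueError("unsupported access mask")
    # Real WINE also handles drive letters, case-insensitivity, UNC paths...
    return os.open(path.replace(b"\\", b"/"), flags, 0o644)

fd = CreateFileA(b"example.txt", GENERIC_WRITE)
os.close(fd)
```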
 
BTW, I wouldn't say WINE has "practically zero overhead". All of the Windows API calls an app makes need to be re-implemented by the WINE runtime.
It's basically a list of Linux functions that correspond to Windows functions.
WINE doesn't reimplement them, it just translates them into native Linux API calls (hence "translation layer" and not "reimplementation layer").
The same goes for the GPU as well: if there isn't a native Linux API that supports a command WINE uses, that command/function won't work.
If the Linux API is more optimized, it can end up being faster than running it on Windows "natively".
 
It's basically a list of Linux functions that correspond to Windows functions.
No, it's not. It has to be API-compatible, so that you can run a Windows binary without modification. This presents some interesting challenges, because the Windows executable file format and calling conventions differ from Linux's.

WINE doesn't reimplement them, it just translates them into native Linux API calls (hence "translation layer" and not "reimplementation layer").
It runs atop Linux, so it obviously uses Linux API calls to implement the Windows API, but there often aren't exact 1:1 equivalents.

A fairly high-profile example is Windows' WaitForMultipleObjects() function, which had no direct equivalent on Linux until fairly recently. It actually motivated the development of a new kernel synchronization construct, called futex2.
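Here's a rough sketch of why that one is hard, assuming nothing about WINE's real implementation: without a kernel primitive for "wait on any of N objects" (what futex2's futex_waitv() provides), a userspace emulation needs something like one watcher thread per object:

```python
# Illustrative emulation of WaitForMultipleObjects(..., bWaitAll=FALSE): wait
# until ANY of N events fires. Built only from single-object waits, it costs a
# helper thread per object -- the overhead a dedicated kernel primitive avoids.
import threading

def wait_for_any(events, timeout=None):
    """Return the index of the first event to fire, or None on timeout."""
    fired = threading.Event()
    winner = []
    lock = threading.Lock()

    def watch(i, ev):
        ev.wait()
        with lock:
            if not winner:
                winner.append(i)
        fired.set()

    for i, ev in enumerate(events):
        threading.Thread(target=watch, args=(i, ev), daemon=True).start()
    fired.wait(timeout)
    with lock:
        return winner[0] if winner else None

# Usage: fire the second of three events from another thread.
evs = [threading.Event() for _ in range(3)]
threading.Timer(0.1, evs[1].set).start()
print(wait_for_any(evs, timeout=1.0))  # -> 1
```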

The same goes for the GPU as well: if there isn't a native Linux API that supports a command WINE uses, that command/function won't work.
No, that's not how Direct3D is emulated. Yes, at some level the hardware and its device driver need to support whatever D3D features a program is trying to use, but it's not a trivial 1:1 translation. For one thing, you're glossing completely over the fact that a lot of GPU code runs on the GPU itself and has to be translated, since Linux GPU drivers don't natively understand D3D's HLSL.
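Even at the source level, the shading languages don't line up token-for-token. A toy illustration of my own (real translators like DXVK compile shader bytecode to SPIR-V and must remap semantics, bindings, and intrinsics, not just rename types):

```python
# Toy HLSL -> GLSL type renaming, purely to show the languages differ even in
# their basic vocabulary. Actual D3D-on-Vulkan translation happens at the
# bytecode level (DXBC/DXIL -> SPIR-V) and is far more involved.
HLSL_TO_GLSL = {
    "float4x4": "mat4",   # longest keys first so "float4" doesn't clobber them
    "float4": "vec4",
    "float3": "vec3",
    "float2": "vec2",
}

def naive_translate(hlsl_src: str) -> str:
    for hlsl_ty, glsl_ty in HLSL_TO_GLSL.items():
        hlsl_src = hlsl_src.replace(hlsl_ty, glsl_ty)
    return hlsl_src

print(naive_translate("float4 c = float4(1.0, 0.0, 0.0, 1.0);"))
# -> vec4 c = vec4(1.0, 0.0, 0.0, 1.0);
```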

If the Linux API is more optimized, it can end up being faster than running it on Windows "natively".
I think it's pretty rare for a game to run faster under WINE than natively on Windows. It's happened, but it's not the norm.
 
Last edited:
  • Like
Reactions: greenreaper
Elbrus seems to rely on something similar to a hotspot compiler for x86 code. So they can probably simulate SSE/2/3/4 and AVX/2/-512 instructions, but whether and to what extent they will be fast depends on how well they map to the underlying hardware - which, at the end of the day, looks to be a 64-bit-word device with 32 global and 64 windowed registers (of 224), judging by the programming guide linked from Anandtech's overview and this instruction overview. Similarly for AES: do they actually have hardware to accelerate it? Probably not, though maybe there is a library that can run it faster than a naive implementation using other instructions.
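As a concrete picture of what "mapping SIMD onto a 64-bit-word machine" means, here's a simplified sketch of my own (not MCST's actual translator): a 128-bit SSE operation can always be decomposed into 64-bit lane operations, so translated code stays correct even where it isn't fast:

```python
# Emulate SSE2's PADDQ (two packed 64-bit adds on a 128-bit register) using
# only 64-bit arithmetic, the way a translator targeting a 64-bit-word machine
# might. One guest instruction becomes several host operations.
MASK64 = (1 << 64) - 1

def paddq(xmm_a: int, xmm_b: int) -> int:
    lo = ((xmm_a & MASK64) + (xmm_b & MASK64)) & MASK64
    hi = ((xmm_a >> 64) + (xmm_b >> 64)) & MASK64
    return (hi << 64) | lo

a = (7 << 64) | 1
b = (9 << 64) | 2
assert paddq(a, b) == (16 << 64) | 3   # lanes add independently: 7+9, 1+2
```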
 
  • Like
Reactions: bit_user
Based on bits and pieces I've heard from people who seem to have some clue about this stuff (I don't), what I've gathered is those sorts of things typically use FPGAs + microcontrollers. Not that you wouldn't have a general-purpose CPU in some military equipment, but I rather think this CPU is more oriented towards server applications.
The real questions are: is this chip strong enough to create night vision? Can it quickly and effectively operate automated tank defenses? Is it enough for Russian satellites? How about missiles, can their operating systems be accommodated? I wonder how the rest of the Russian high-tech industry is holding up. In G-d I trust.
Night vision doesn't need CPUs.
 
I'll bet the "The Dark Mod" benchmark was done with the default capped FPS, which performs horribly on Linux. When I watched a video of game testing on the Elbrus, the player never bothered to change the default language to their native language or to go into the Video > Advanced Settings area (where you can set uncapped FPS). They were barely able to enable the FPS display, trying a bunch of commands in the console until they hit the right syntax.
 
  • Like
Reactions: bit_user