jnjnilson6
It seems we hadn't had such a fundamentally huge upgrade (core count / performance) since Sandy Bridge until Alder Lake came along. Looking at 13th and 14th Gen CPUs, it seems they will definitely not deliver a notably bigger leap than the one Alder Lake provided.

What do you think? Will there ever be something like Sandy Bridge again on the market? Has Alder Lake come closest to this?

I've had an Ivy Bridge CPU myself (an i7-3770K, in 2012), yet I remember that back in 2011 Sandy Bridge was pure gold in regard to almost everything. It was definitely powerful. And it would still be very good today for an assortment of daunting tasks if equipped with enough RAM and an SSD.

Write up what you think and let's gracefully sink into the world of forgotten memories.

PS: I have had a Celeron G530, which melted all the Core 2 Duos out there, and a Core i7-2630QM (laptop), which was very fast.

In my previous paragraphs I am talking about higher-end Sandy Bridge CPUs like the Core i7-2600K and the i7-2700K.
 

Order 66

I was too young for Sandy Bridge. The first computer I remember was an old-school laptop with an Intel Celeron N3350 that could barely run Roblox at 30 FPS. Then I moved to an i5-6500 and RX 550 PC, which ran Roblox at the same settings as the laptop but at 120 FPS. After that I went to my current system with a 7700X and RX 6800; talk about an upgrade compared to the 6500 and RX 550!
 
Sandy Bridge? Probably. Conroe? That I'd like to see. In 20+ years of PC hardware, I believe Conroe was the most important x86 launch I've experienced; it was a complete paradigm shift.

As for the future, right now I don't see the incentive for Intel or AMD to take risky moves on x86. The main threat to this market is ARM; if it keeps growing in support, taking over workloads that were historically x86-only, we might see some more creativity from AMD and Intel.
 
So far the attempt by the OS vendors to get us to switch from natively compiled programs to pseudo-code "Apps" has failed. As long as users stick with natively compiled programs, neither Intel nor AMD has anything to fear from ARM in the desktop / gaming segment. You absolutely cannot plunk Cyberpunk 2077 onto an ARM chip and have it actually execute. Same with Pathfinder and every other game on Steam.
 
The thing that made Sandy Bridge a well-remembered chip was that you could get almost a 1.0 GHz overclock on an air cooler. And not something like a Noctua NH-D15, but a Cooler Master Hyper 212. Otherwise, clock for clock, it was a pretty average upgrade.

I think the only x86 chip that really disrupted the market was Conroe. One could make the argument for Zen (or maybe Zen 2), but I don't think either had quite the impact Conroe did.

You absolutely cannot plunk Cyberpunk 2077 onto an ARM chip and have it actually execute. Same with Pathfinder and every other game on Steam.
Sure you could. It's just a question of whether or not devs want to target it. Even then, running x86 apps in Rosetta 2 isn't too shabby in terms of performance. You might not be able to get 300+ FPS in Counter-Strike, but 120 FPS doesn't seem out of the realm of possibility.

The ISA does not determine the performance of a part; the implementation does.
 
Sure you could. It's just a question of whether or not devs want to target it.

The ISA does not determine the performance of a part; the implementation does.

No, you cannot; the compiled machine code is a different language and not remotely binary compatible.

Something like this could not possibly execute on an ARM CPU.

Code:
section .text
; Export the entry point to the ELF linker or loader.  The conventional
; entry point is "_start". Use "ld -e foo" to override the default.
    global _start

section .data
msg db  'Hello, world!',0xa ;our dear string
len equ $ - msg         ;length of our dear string

section .text

; linker puts the entry point here:
_start:

; Write the string to stdout:
    mov edx,len ;message length
    mov ecx,msg ;message to write
    mov ebx,1   ;file descriptor (stdout)
    mov eax,4   ;system call number (sys_write)
    int 0x80    ;call kernel

; Exit via the kernel:

    mov ebx,0   ;process' exit code
    mov eax,1   ;system call number (sys_exit)
    int 0x80    ;call kernel - this interrupt won't return

Instead you would have to use dynamic recompilation combined with emulation to make it work, and it would incur a terrible performance penalty. ARM uses general-purpose registers while x86 has special-purpose registers.

ISAs are the machine language that code is compiled to. This is why I mentioned pseudo-code apps, which are not compiled to machine code but to a middle-ground byte code that is then run dynamically by a runtime executive. Trying to run x86_64 code on an ARMv9 system is like trying to give instructions in German to a French dolphin.
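
For contrast, here is roughly what the same program looks like when written for a 64-bit ARM Linux system (a sketch in GNU assembler syntax; the registers, syscall numbers and calling convention all differ from the 32-bit x86 listing above):

Code:
.data
msg:    .ascii  "Hello, world!\n"       // our dear string again
len =   . - msg                         // length of our dear string

.text
    .global _start

// linker puts the entry point here:
_start:

// Write the string to stdout:
    mov     x2, #len    // message length
    ldr     x1, =msg    // message to write
    mov     x0, #1      // file descriptor (stdout)
    mov     x8, #64     // system call number (sys_write on AArch64)
    svc     #0          // call kernel

// Exit via the kernel:
    mov     x0, #0      // process exit code
    mov     x8, #93     // system call number (sys_exit on AArch64)
    svc     #0          // call kernel - this call won't return

Same task, same kernel, yet not a single byte of the resulting machine code matches the x86 version; that is the binary incompatibility in question.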
 
Instead you would have to use dynamic recompilation combined with emulation to make it work, and it would incur a terrible performance penalty. ARM uses general-purpose registers while x86 has special-purpose registers.
Yes, you do have to brute-force it. But you can brute-force it. These are early days, and you can already run some older games and less demanding newer ones.
I do agree with you in general that ARM is no danger to x86, but that doesn't mean you can't run x86 stuff on ARM; even if it's a lot slower, as long as it's usable it's fair game.
https://www.youtube.com/watch?v=Rd30iUUkJ1A
 
When I mentioned ARM as a threat, I meant mostly the data center. It can hurt the client computing business too, as Apple's M1 did, but that is a much more complicated move.

In the data center business, however, every hyperscale cloud provider already offers ARM CPUs (AWS Graviton, Ampere for Azure and Oracle, GCP Tau). Since many of the applications are managed by the providers themselves, they control which architecture to use. One example is database offerings, where we can already run Oracle databases or AWS RDS on ARM. This requires near-zero adoption effort from the end user.
 
When I mentioned ARM as a threat, I meant mostly the data center.

People have been claiming "x86 is dead, all hail our new [insert name here] overlords" since Itanium, and every time they have been dead wrong. It always comes down to the same reason Windows is dominant in the desktop market: backwards compatibility. That binary compatibility is a huge incentive to stick with the same OS + ISA combination, and nobody has been able to defeat it. When tablets were created, every industry expert loudly claimed "PC is dead" and that everyone would be ditching Windows and Intel for Apple/Android and ARM. That didn't happen; instead people kept their PCs and just added a new gadget to their lives. After observing how smartphones and tablets leveraged Java-like pseudo-code programs to maintain backwards compatibility while switching architectures, everyone swore "Apps" would be "the new thing" on desktop and would finally let everyone ditch x86. So when was the last time we downloaded Cyberpunk, Pathfinder, Call of Duty or even Photoshop from the Microsoft Store? Yeah, that didn't happen either.

The data center really depends on use case and purpose. Business is all about ROI and business requirements. We don't use technology X just because it's technology X: the business identifies a requirement, we find a technology solution that meets it, then we architect and implement that solution. Brand loyalty doesn't really exist; it's all cold, hard Excel logic. x86 infrastructure is used mostly because it's really cheap, like really cheap. People who think it's expensive have never engineered a solution on SPARC or POWER before. ARM is also cheap, but that binary-compatibility issue shows up again, meaning you can't just move an existing solution over and instead need to build a new one. There is cost and risk involved, but if the numbers show it's worth it, then it'll get done.
 
People have been claiming "x86 is dead, all hail our new [insert name here] overlords" since Itanium, and every time they have been dead wrong. It always comes down to the same reason Windows is dominant in the desktop market: backwards compatibility. That binary compatibility is a huge incentive to stick with the same OS + ISA combination, and nobody has been able to defeat it.
I never said x86 was dead; I said ARM is a threat to the x86 server market, and it is. Unlike Itanium, it is already the most manufactured architecture on the planet. As for OS compatibility, estimates are about 70% Windows for desktops and 80% Linux for servers. Binary compatibility between server and desktop OSes is of little concern these days when everything is web based (and Android is the largest web-client OS anyway).

ARM is also cheap, but that binary-compatibility issue shows up again, meaning you can't just move an existing solution over and instead need to build a new one.
That's the thing: cloud providers have done just that. For some reference, I do this for a living. You can absolutely move a PostgreSQL, MySQL or Oracle database to ARM with near-zero effort (the "near" is for checking fringe features that may not yet be supported). I've done this myself for clients moving from x86-based AWS RDS databases to Graviton. It's as simple as selecting it from a dropdown menu; it works the same and costs less.

There are many other services where ARM just makes sense, like web servers and the providers' own infrastructure. Routers, NAT instances, firewalls and storage servers are all examples of virtualized workloads that can run on ARM and are fully controlled by the provider. It makes a lot of sense for them to put some effort into this and save a lot by buying ARM instead of x86 at the scale they operate.
 
No, you cannot; the compiled machine code is a different language and not remotely binary compatible.
The wording of your post made it sound like ARM is incapable of running the games, period, hence why I said "It's just a question of whether or not devs want to target it."

Instead you would have to use dynamic recompilation combined with emulation to make it work, and it would incur a terrible performance penalty. ARM uses general-purpose registers while x86 has special-purpose registers.
That's why I also mentioned Rosetta 2, which does exactly what you're describing. And while Rosetta 2 gets around 60-70% of native performance in at least Cinebench and Geekbench, an M1 running x86 code under Rosetta 2 was still competitive against Intel's offerings at the time.

Also, the main registers in x86 as of IA-32 and later x64 are considered general purpose. You can use them for anything, despite what their names may suggest.
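
A quick sketch to illustrate (NASM syntax to match the listing above; the labels a, b and result are hypothetical 32-bit variables):

Code:
    mov  ecx, [a]       ; load a into ECX, historically the "counter" register
    imul ecx, [b]       ; ECX = a * b - plain arithmetic, no counting involved
    mov  ebx, ecx       ; EBX, the "base" register, used as a scratch copy
    add  ebx, 42        ; EBX = a * b + 42
    mov  [result], ebx  ; both behaved like any other general-purpose register

Only a handful of instructions (LOOP, the string operations, shifts by CL) still tie specific registers to their historical roles.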

ISAs are the machine language that code is compiled to. This is why I mentioned pseudo-code apps, which are not compiled to machine code but to a middle-ground byte code that is then run dynamically by a runtime executive. Trying to run x86_64 code on an ARMv9 system is like trying to give instructions in German to a French dolphin.
Uh, I know what an ISA is. And I know how JIT compiling works.

If you're trying to give me a lesson on computers and how they work, you may as well tell an F1 mechanic how an internal combustion engine works.
 
That's the thing: cloud providers have done just that. You can absolutely move a PostgreSQL, MySQL or Oracle database to ARM with near-zero effort. It's as simple as selecting it from a dropdown menu; it works the same and costs less.

You absolutely cannot move any of those servers; instead you build a new solution and migrate the data over. Just reading what you've written, you bought into the "Intel is dying" Kool-Aid. Let me guess: you think VMware / hypervisors are dead too. Tech companies do not rule the world; corporate legacy IT greatly outweighs tech startups and moves at a glacial pace with extreme aversion to risk. ARM is ~5% of the server market on a good day; nobody is paying it much attention.

Just to highlight this: a financial entity I work with is currently moving billions of dollars a year in industrial loans using a piece of software built in the '90s whose vendor went defunct years ago. They are paying hundreds of thousands of dollars a year to the last few developers of that software as a retainer, just in case a problem crops up. The first attempt to switch off that platform was made over a decade ago; it cost tens of millions of dollars and was cancelled at the eight-year mark, already four years past due with no sign of ever actually working. They started a second attempt two years ago with a very expensive and very well-known outside firm that specializes in financial software. It's looking good, but they're spending a few million per year on development, and the final licensing costs will be over a million a year. They are looking to do a similar thing with the treasury system in another year or two, as that is also aging.

The only way you're going to sell anyone on ARM is if it's a managed SaaS-type solution where they don't even see the infrastructure and don't care. The managing company then becomes the one responsible for taking on the risk of working around a new architecture.
 
To the OP: there is only so wide you can make an end-user device for general computing before you smash into diminishing returns. We hit that point a while back, and "moar cores" has since been a way to put higher sticker prices on processors. Real performance increases will come from higher clock rates and higher IPC, not from adding a 17th core.
 