This is a deeply flawed argument and ignores each team's belief in their own technology. In particular, the Firefox team tries to write everything in Rust, while Chrome is C++ (last I checked) and would switch to Go if anything. Also, both browsers have their own JavaScript engines. Firefox has SpiderMonkey, which
explicitly says it also handles WebAssembly.
SpiderMonkey is Mozilla’s JavaScript and WebAssembly Engine, used in Firefox, Servo and various other projects. It is written in C++ and Rust.
spidermonkey.dev
Chrome uses the V8 engine, which likewise
explicitly says it handles WebAssembly:
V8 is Google’s open source high-performance JavaScript and WebAssembly engine, written in C++.
v8.dev
It's right in the name!!! WebAssembly!
Names can be misleading or not tell the full story. In this case it's perhaps a bit of both.
Assembly language is human-readable machine code. An assembler turns it into binary machine code in a strict 1:1 mapping of mnemonic to instruction, because the two share the same machine abstraction level; it is not a high-level language.
Famously there have been different assembly language formats, e.g. for the 8080/Z80: Intel and Zilog mnemonics, which resulted in culture wars back then, but the resulting machine code was identical if the assembly code used the same instructions, even with different mnemonics.
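To make that 1:1 point concrete, here's a toy sketch in Python. The opcode values are the real 8080/Z80 encodings; the three-entry "assembler" itself is of course just an illustration:

```python
# Two mnemonic dialects, one machine code: the Intel 8080 spellings and the
# Zilog Z80 spellings below assemble to the exact same opcode bytes.
INTEL = {"MOV A,B": 0x78, "MOV B,A": 0x47, "INR A": 0x3C}
ZILOG = {"LD A,B": 0x78, "LD B,A": 0x47, "INC A": 0x3C}

def assemble(table, source):
    """Strict 1:1 mapping: one mnemonic in, one opcode byte out."""
    return bytes(table[line] for line in source)

intel_code = assemble(INTEL, ["MOV A,B", "INR A"])
zilog_code = assemble(ZILOG, ["LD A,B", "INC A"])
assert intel_code == zilog_code == b"\x78\x3c"  # identical binaries
```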
Now in the case of WASM, it typically isn't represented in such a human-readable form, mostly for efficiency, even though it encodes machine code instructions for the WASM abstract machine. And of course, one could translate a single WASM instruction into several functionally identical target variants that perform differently, if only by inserting random NOPs all over the place.
But if that were to produce vastly different performance results, it would violate the purpose of WASM, and the developers should take a look at the differences.
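To make the binary-vs-text point concrete, here's a small Python sketch. The bytes are the real encoding of a tiny WASM function body (opcode values from the WASM spec); the decoder is a toy that assumes single-byte immediates, not a real disassembler:

```python
# The real binary encoding of a tiny WASM function body that adds its two
# i32 parameters: local.get 0, local.get 1, i32.add, end. This byte string,
# not readable text, is what actually travels over the wire.
body = bytes([0x20, 0x00, 0x20, 0x01, 0x6A, 0x0B])

# Toy decode table for just these opcodes (values from the WASM spec):
# opcode byte -> (mnemonic, number of immediate bytes).
OPCODES = {0x20: ("local.get", 1), 0x6A: ("i32.add", 0), 0x0B: ("end", 0)}

i = 0
while i < len(body):
    name, n_imm = OPCODES[body[i]]
    imms = list(body[i + 1 : i + 1 + n_imm])  # assumes 1-byte LEB128 immediates
    print(name, *imms)
    i += 1 + n_imm
# Prints: local.get 0 / local.get 1 / i32.add / end
```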
Wikipedia even cites a source to support the claim that its
primary stated goal is to accelerate scripted content in
web pages!
"The main goal of WebAssembly is to facilitate high-performance applications on web pages, but it is also designed to be usable in non-web environments."
This is
exactly the sort of case that a good compiler should help with. Hence, no reason to assume all implementations perform equally.
Compiler has an even bigger range of meanings than assembler. But WASM is typically the target result of compilation, not its source. The goal is to get as close as possible to binary target machine language without actually
being the target ISA.
If there are vastly different performance figures for the same WASM on the same target, somebody didn't do their job properly.
WASM uses around 200 opcodes for an abstract ISA, and yes, it now also includes some SIMD vector operations.
These opcodes sit at the level of a typical machine opcode, so that in most cases they can be translated near 1:1 to a target ISA. Of course, one could still do that job badly, but WASM as such is designed to make it straightforward and efficient.
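Here's a hedged sketch of that mapping-table idea in Python. The WASM opcodes are real; the x86-64 pairings are my simplified illustrations, since a real baseline compiler also has to allocate registers, handle spills and so on:

```python
# Near-1:1 lowering sketch: each WASM opcode maps to one target instruction
# via a lookup table. Pairings are illustrative, not a real code generator.
WASM_TO_X86 = {
    "i32.const": "mov    eax, {imm}",
    "i32.add":   "add    eax, ecx",
    "i64.mul":   "imul   rax, rcx",
    "f64.sqrt":  "sqrtsd xmm0, xmm0",
}

def lower(instrs):
    """Single forward pass, one table lookup per instruction."""
    return [WASM_TO_X86[op].format(imm=imm) for op, imm in instrs]

print("\n".join(lower([("i32.const", 42), ("i32.add", None)])))
```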
Where I was clearly wrong is the assumption that the browsers all use the same assembler/transpiler/compiler code. Evidently they each do their own variant, which might have a measurable impact on the speed of the compiler, not on the resulting code.
But again, WASM to my understanding was designed to operate with little more than a single pass to both translate and verify that it doesn't contain harmful side effects: it isn't duplicating LLVM with all its optimization logic, but transforming the WASM into x86, ARM or whatnot with little more than a mapping table. The bigger effort is then in checking the resulting code for safety
before running it, via static validations.
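A minimal sketch of what such a single-pass static validation can look like, on a toy instruction subset (a real validator also tracks control flow, locals, memory bounds and the declared function signature):

```python
# Single-pass static validation sketch: one forward walk over the
# instruction stream that type-checks the operand stack as it goes.
EFFECTS = {  # opcode -> (types popped, types pushed)
    "i32.const": ([], ["i32"]),
    "i32.add":   (["i32", "i32"], ["i32"]),
    "f64.mul":   (["f64", "f64"], ["f64"]),
}

def validate(instrs):
    stack = []
    for op in instrs:
        pops, pushes = EFFECTS[op]
        for want in pops:
            if not stack or stack.pop() != want:
                raise ValueError(f"type error at {op}")
        stack.extend(pushes)
    return stack  # must end up matching the declared result type

print(validate(["i32.const", "i32.const", "i32.add"]))  # ['i32']
```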
Because that's not how the software world works. Every browser has its own infrastructure, environment, security model, JIT engine, and a way for scripts to invoke platform library functionality. The incremental cost of adding a front end for WebAssembly is small, while the lift of trying to integrate a foreign, third-party implementation in your browser is quite large; meanwhile you've now externalized a good deal of your attack surface. Not to mention the whole issue of language wars that I mentioned above.
Whether you code a WASM translator in JavaScript, Go, Rust, C or whatever shouldn't have much of an impact on the run-time performance of the generated machine code: it's the transformation logic that dictates the performance, and since the result is native machine code, it doesn't matter whose sandbox it runs in.
That said, WASM has seen relatively recent evolution in its support for multi-threading, which was considered potentially harmful in light of SPECTRE, and for SIMD, which is an area where CPU vendors go bonkers to catch up with GPUs. So if one variant already supports multi-threading or SIMD while another doesn't, that should have a measurable impact.
Where there could also be significant differences between implementations is the I/O model, where WASM interacts with the outside world, receives and sends data etc.: there the capability-based security model may see vastly different implementations in terms of security and performance.
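To illustrate the capability idea (the class and method names here are my invention, not a real WASI API): the guest can only ever touch handles the host explicitly granted, so there is no ambient authority to leak.

```python
# Capability-style I/O sketch: the guest never names files directly; it only
# receives opaque handles the host chose to grant at startup.
class Host:
    def __init__(self, granted_paths):
        self._handles = {i: open(p) for i, p in enumerate(granted_paths)}

    def fd_read(self, handle, n):
        if handle not in self._handles:
            raise PermissionError(f"handle {handle} was never granted")
        return self._handles[handle].read(n)

with open("input.txt", "w") as f:      # demo file for the sketch
    f.write("data the host decided to expose")

host = Host(["input.txt"])             # the only file the guest may ever see
print(host.fd_read(0, 64))             # ok
# host.fd_read(1, 64)                  # PermissionError: no ambient authority
```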
A compute performance benchmark might want to stay away from that; on the other hand, not everyone is happy to just compute pi all day.
You wouldn't say that about CPUs. GCC and LLVM don't consider anything less than 2x insignificant. The SpiderMonkey and V8 teams surely care about performance differences smaller than 2x.
You just pulled this figure out of nowhere, and I can pretty much guarantee that you wouldn't accept a 2x performance difference in many performance contexts. Like, even your Google Earth benchmark, which you admit isn't based on any fundamental need to use Google Earth as part of your job, but just an arbitrary line in the sand you decided to draw.
Sure, it's personal; I don't claim to be 100% objective in what I buy or use. But a lot of it is based on feedback from the people who get IT services from me, both corporate and family/friends.
Twice the performance/value at the same price usually is a call for at least an investigation; anything less, and I might be looking for excuses to stay put, especially if the financial or human impact isn't felt. Halving the effort is a computer science classic, really: logarithmic instead of linear, or god forbid, exponential.
And 2x seems to work for quite a few more people, too; it's an easy sell. When I propose a 2-5% performance increase, my bosses rarely care, unless it also shows up on the bottom line. That's because we're not a hyperscaler. For AWS, a 1% difference can pay a million-dollar bonus to whoever makes it happen.
Firefox went Rust because globally those inner loops in a browser are the most executed code on the planet. A 1% difference could light up a small country. But mostly it needed to be safe and fast.
Absolutely not. It's basically just a modern alternative to Java bytecode.
Both were driven by very similar motives and in my view the precursor is UCSD p-code....
...while these days not even x86 CPUs execute their machine code directly, but do a similar WASM-like transformation on the fly.
The "modern" aspect of WASM is just that, the need for native binary speed.
If they were happy with what Java or JavaScript can offer, they'd have stuck to it. Few people get paid if software just looks more modern.
"absolutely wrong" clearly isn't right, when native speed was the stated primary goal of WASM and portability the necessity.
I think you should stick to supportable facts, and not try to make factual claims that simply align with your values and imagination.
Like any good LLM (and evidently they all learned from me), I also hallucinate: they can't help but learn our habits.
Which means I use System 1, as per "Thinking, Fast and Slow" by Daniel Kahneman, when System 2 would have given the more objective answer but costs too much time and energy.
It also means I abuse you to check my facts, ...which you actually seem to enjoy a bit.
Unfortunately it only means we might reach a consensus, not necessarily discover the real truth.