It offers native x86, Windows on ARM, and Apple Silicon versions.
I think this is incorrect. Specifically the Windows ARM support. Official hardware support page indicates that the Windows version requires x64. I unfortunately don’t have the hardware to confirm for myself. But Blizzard is the kind of company that would have made a blog post about that.
https://us.support.blizzard.com/en/article/76459
This is neat, and exciting that Windows emulation tooling is progressing! It seems like there’s a lot of work hardware vendors would need to do in order to make Win/Arm viable for game devs. I really wonder if that’s ever going to happen.
> I think this is incorrect. Specifically the Windows ARM support. Official hardware support page indicates that the Windows version requires x64. I unfortunately don’t have the hardware to confirm for myself. But Blizzard is the kind of company that would have made a blog post about that.
It has been around for a while, circa 2021. They made a forum post when they released it.
For reasons unknown the link no longer works, but here it is on the Wayback Machine: https://web.archive.org/web/20210512205620/https://us.forums...
Never tested it myself, as I'm more of a runescape/ragnarok online sorta guy...
Interesting, I didn't know they'd added a Windows/ARM build. Makes some sense though, WoW has always been one of the best games for broad platform/arch support… macOS/PPC and Windows/x86 versions were both available from day one and have stayed in lockstep the whole time, and it was among the first games to make both the PPC → x86 and x86 → ARM jumps on Macs. I've heard that they had Linux builds internally early on too.
I’m curious whether WoW is using any newer x86 instruction sets like AVX. I’ve been running some math benchmarks on ARM emulating x64 and saw very little performance improvement with the AVX2+FMA builds compared to the SSE4.x level (x86-64-v2 to v3). It was unexpected.
It’s the first Windows build with Prism and the first time they’ve introduced AVX(2) support, so I wonder if the performance simply isn’t there yet. I’ve found very little info online about this.
Besides, I’d look forward to testing with the new ARM emulation layer Valve is developing.
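If anyone wants to reproduce that kind of comparison, here's a minimal sketch in C: compile the same kernel at both feature levels and time each binary under the emulator. The file name, flags, and problem size are my own illustrative choices, not from the parent's benchmark.

```c
/* dot.c: build this twice at different x86-64 feature levels, e.g.
 *   cc -O2 -ffast-math -march=x86-64-v2 dot.c -o dot_v2    (SSE4.x)
 *   cc -O2 -ffast-math -march=x86-64-v3 dot.c -o dot_v3    (AVX2+FMA)
 * -ffast-math lets the compiler reassociate and vectorize the reduction. */
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 24)

int main(void) {
    float *a = malloc(N * sizeof *a);
    float *b = malloc(N * sizeof *b);
    if (!a || !b) return 1;
    for (int i = 0; i < N; i++) {
        a[i] = (float)i * 0.5f;
        b[i] = (float)(N - i) * 0.25f;
    }

    /* With -march=x86-64-v3 this loop can compile to 256-bit FMA
     * instructions; with -v2 it gets 128-bit SSE multiplies and adds. */
    float sum = 0.0f;
    for (int i = 0; i < N; i++)
        sum += a[i] * b[i];

    printf("%f\n", sum); /* keep the result live so the loop survives -O2 */
    free(a);
    free(b);
    return 0;
}
```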
This is also why I don't get why people are so enthusiastic about ARM. While there is nothing technically preventing manufacturers from using extensible standards and technologies like ACPI, UEFI, etc. with ARM, pretty much no one does. Meaning these devices are closed and pretty unusable with Linux.
ARM has not had an 'IBM PC clone' moment. There is no one that anyone wants to rally behind. The market is fragmented in an interesting way, but a way that is hard for people to target. This fragmentation has existed since the start of the microprocessor era: there were hundreds of different x86, 6502, SH, MIPS, and ARM style computers as well. Even the 'IBM PC' was just one of them, but everyone kinda said 'that one'. All of the standards you mentioned exist on some ARM boards; it is a really mixed bag. Of all the ARM systems, the Raspberry Pi came closest to being a standard.
All I want is something like a MacBook Air, but running Linux - long battery life, acceptable performance, an OS that respects me.
QCOM in this case probably could make a standard ARM PC. The problem is that QCOM's corporate structure will probably strangle it. They will want to create a patent license stream, and the interesting bits would be behind NDAs. It is their bread and butter.
The reality is no one wants to be IBM in the IBM PC clone market: the one who did the expensive work to make the board, only for everyone else to copy it.
Linux had the chance to fight against what GNU called "TiVoisation" but Linus said TiVo did nothing wrong.
So now we have a world full of devices running open software but locked down, and a Linux Foundation run by corporations.
To me it is clear this was indeed a huge problem. Linux might not have become as big as it is now if it had taken GPLv3 on board, but it would have made it a lot harder for manufacturers to do what they're doing.
I strongly agree with you. What little experience I have tinkering with ARM-based systems has been frustrating; I don't want to touch ARM ever again until it has some kind of standard boot process. Same for RISC-V.
ARM support is still mediocre, but it's improving with the new Snapdragons. Every generation seems to make running normal Linux on those things a little bit less of a pain.
Funnily enough, the device with probably the best "normal" desktop protocol support is the old Windows RT tablet, which features an ARM SoC that runs UEFI and ACPI and everything. It's locked down to Microsoft's secure boot keys, unfortunately, but thanks to Microsoft abandoning the device, there are exploits you can use to bypass that.
It’s not their fault per se, but it’s discouraging.
The ThinkPad X13s is pretty good for a mostly normal ARM Linux laptop. The SoC is still pretty usable and its functionality on Linux is also pretty complete. The main thing lacking is that the speakers are kept very quiet to protect them from blowing out without a proper driver.
The anti-cheat streams executable code into the client, and that code is mostly for detecting tampering with the game, injected modules, etc.
Not sure they care about it running in an emulated environment.
They do effectively allocate an executable memory region, copy the machine code that was streamed into it, and jump to it.
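Mechanically that's the standard JIT dance. Here's a minimal sketch in C on Windows of the allocate/copy/jump sequence being described (a generic illustration, not the actual anti-cheat code); an emulator like Prism has to spot these freshly written bytes and translate them before the call lands:

```c
#include <stdio.h>
#include <string.h>
#include <windows.h>

int main(void) {
    /* x86-64 machine code for: mov eax, 42; ret */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* Allocate a writable region and copy the streamed bytes in. */
    void *buf = VirtualAlloc(NULL, sizeof code,
                             MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!buf) return 1;
    memcpy(buf, code, sizeof code);

    /* Make the page executable, then jump to it. */
    DWORD old;
    VirtualProtect(buf, sizeof code, PAGE_EXECUTE_READ, &old);
    FlushInstructionCache(GetCurrentProcess(), buf, sizeof code);

    int (*fn)(void) = (int (*)(void))buf;
    printf("%d\n", fn()); /* prints 42 */

    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}
```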
I guess in this case the emulation is an actual VM, rather than "rewrite x86 instructions into ARM" (don't know much about this subject, but assumed that was how Rosetta worked)
Rosetta 2 rewrites x86 instructions into ARM, but it does this on the fly for generated instructions too. When you put x86 machine code into a buffer and then jump to execute it, Rosetta 2 dynamically translates those generated instructions into ARM before executing them.
At least that's what I gathered around the time it was released. It seems to hold up; JITed x86 applications work great under Rosetta 2.
I'm tentatively excited for the new Snapdragon X2 Elite. Or I would be, if any of us could ever afford RAM again.
The high-end model has a 192-bit memory bus in a 3-channel design. 12+6 cores, but more big/big than big/little. 53MB of L3 cache is quite healthy. 80 TOPS NPU (int8). 9533 MT/s memory on that 192-bit bus gives 228GB/s, which is nipping right at Strix Halo's & Nvidia Spark's heels. 12x PCIe 5.0 lanes, 4x PCIe 4.0, and "3x USB-C" at 40Gb/s (hopefully not some shared-bandwidth cop-out). And some kind of pretty big GPU. The specs here are quite promising.
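For reference, the 228GB/s figure is just bus width times transfer rate:
192 bit / 8 = 24 B per transfer, and 24 B × 9533 MT/s = 228,792 MB/s ≈ 228.8 GB/s.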
And Qualcomm has started taking Linux drivers somewhat more seriously. Linux & mesa drivers are arriving now for previous Snapdragon X Elite & looking pretty promising. That said, this whole Device Tree world is hell, and never going to be good, and Qualcomm direly needs to get religion there & get some ServerReady type ACPI + UEFI compatibility standardized in the products, and stop OEMs from shipping these awful embedded-style non-PC things.
I'm excited to see ARM finally actually show up with something competitive. Alas though, those RAM prices. What a sad buzzkill, and man this is going to take forever to work out.
I have always had trouble acquiring the actual devices at a competitive price. It is cheaper to get an M-series Mac Mini than a Snapdragon X Elite box, and the former smokes the latter. The one advantage of the non-Macs is usually that their Linux support is good, but the IdeaCentres or whatever that ship the S X E don't have much support. Despite being fairly eager to try out this device, I could not bring myself to spend the money on something that would end up in the closet after I failed to boot a Linux, any Linux.
TL;DR the author attempts to measure the ratio between native and emulated performance using Microsoft Prism on Windows. His measurements suggest that the emulated performance is very close to native performance.
Though I am not skeptical of the author's methodology, I do suspect that the ARM64 build of WoW may not be as "optimized" as the x64 build. This is because in some of his workloads the emulated game actually outperformed the native game.
But to answer your question as if they had not yet: it's never "just" recompiling. It's:
* recompile (and fix any warnings/errors indicated by the compiler)
* re-test ... the entire game
* fix the bugs that are encountered in test
* release/publish the game for Windows ARM64
Whatever this effort is, it's much much more than "just" recompiling. You could imagine motivated individuals on the engineering team chipping away at this list of their own accord. But following through with a product requires significant effort and coordination that often is weighed against the potential revenue that this new market could provide.
A really fun thing is that ARM and x86 have different char signedness: char is signed on x86, unsigned on ARM. This has tripped me up more than a few times when I try to compile software that's only been tested on x86 for an ARM machine.
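A minimal sketch of the pitfall (note the 0x80 conversion is implementation-defined, and the defaults are really per-ABI rather than strictly per-architecture, so treat this as the typical gcc/clang behavior):

```c
#include <stdio.h>

int main(void) {
    char c = 0x80; /* wraps to -128 where char is signed; stays 128 where unsigned */
    if (c < 0)
        printf("char is signed here (typical x86 default)\n");
    else
        printf("char is unsigned here (typical ARM default)\n");
    return 0;
}
```

GCC and Clang let you pin the behavior down with -fsigned-char or -funsigned-char, which is often the quickest fix when porting.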
Now WoW already supports Apple Silicon on macOS so most of that was already taken care of, but I'm betting there's a lot of Windows-specific code in there too.
That seems a bit absurd. Surely many parts of the game are unlikely to have bits of code that interact with the architecture in unique ways. Especially if you wrote the game in relatively portable code to begin with (as WoW almost certainly was).
I mean, idk, maybe Windows ARM64 is a uniquely nasty target. But I'm skeptical.
> Surely many parts of the game are unlikely to have bits of code that interact with the architecture in unique ways.
I came across a performance-killing bug that made the game unplayable (less than 1fps on a Mac Studio). It happened in a couple of dungeons (I spotted 2). From my tests it was caused by a specific texture in the field of view at a certain distance. There was no problem on Intel Macs, AFAICT. My old MBP was terrible but did not get any performance hit.
This is what can happen any time you don’t test even a tiny corner of the game. Also, bear in mind that this depends on graphics settings and you get a nightmare of a test matrix.
It seems that "native vs. emulated" here means "arm64 binaries vs. x86 binaries, both running on Windows". So the comparison that the OP is making wouldn't be possible if Blizzard didn't already support aarch64.
https://news.ycombinator.com/item?id=46009962
WoW was released 21 years ago AFAIK
Specifically, it's comparing the native ARM64 version against the emulated x86_64 version, both running on an ARM64 CPU.