• teawrecks@sopuli.xyz · 7 months ago

    An app running on SDL which targets OGL/Vulkan goes through all the same levels of abstraction on Windows as it does on Linux. The work needed at runtime is the same regardless of platform, so we say it natively supports both platforms.
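    To make that concrete, here's a minimal Python sketch (all names are invented for illustration) of what such an abstraction layer looks like: the backend is chosen once, and after that every call takes the same path on every platform.

```python
# Hypothetical sketch of an SDL-style abstraction layer. The backend is
# selected once, up front; afterwards every draw call goes through the same
# single layer of indirection regardless of platform.

def vulkan_draw(verts):
    return f"vk_draw({len(verts)} verts)"

def opengl_draw(verts):
    return f"gl_draw({len(verts)} verts)"

def select_backend(platform):
    # Windows and Linux pick the same backend, so the runtime call chain
    # is identical on both -- this is what "native on both" means here.
    return vulkan_draw if platform in ("windows", "linux") else opengl_draw

draw = select_backend("linux")
```

    On either platform, `draw` ends up bound to the same function, with no per-call translation happening at runtime.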

    But for an app using DX: on Windows, the DX calls talk directly to the DX driver for the GPU, which we call native; on Linux, the DX calls are translated at runtime to Vulkan calls, and then the Vulkan calls go to the driver, which goes to the hardware. There is an extra level of translation required on one platform that isn’t required on the other, so we call that non-native.
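    A toy model of the two call paths (function names made up; the string wrapping just stands in for each hop):

```python
# Hypothetical model of the two paths described above. The only difference
# is the extra runtime translation step on the Linux path.

def dx_driver(call):
    return f"gpu<{call}>"

def vk_driver(call):
    return f"gpu<{call}>"

def dx_to_vk(call):
    # Runtime translation step, present only on the Linux path.
    return call.replace("Dx", "Vk")

def run_dx_call_on_windows(call):
    return dx_driver(call)                # DX call -> DX driver

def run_dx_call_on_linux(call):
    return vk_driver(dx_to_vk(call))      # DX call -> translate -> VK driver
```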

    Shader compilation has its own quirks. DX apps don’t ship with HLSL; they precompile their shaders to DXIL, which is passed to the next layer. On Windows, the DXIL then gets translated directly to native ISA to be executed on the GPU’s EUs/CUs/whatever you wanna call them. On Linux, the DXIL gets translated to SPIR-V, which is then passed to the Vulkan driver, where it is translated again to the native ISA.
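    The two shader paths, as a sketch (stage names mirror the description above; none of this is real compiler code):

```python
# Hypothetical sketch of the two shader-compilation paths. The Linux path
# has one extra translation stage.

def dxil_to_isa(dxil):
    return f"isa({dxil})"        # Windows driver: DXIL -> native ISA

def dxil_to_spirv(dxil):
    return f"spirv({dxil})"      # DXVK: DXIL -> SPIR-V

def spirv_to_isa(spirv):
    return f"isa({spirv})"       # Vulkan driver: SPIR-V -> native ISA

def compile_on_windows(dxil):
    return dxil_to_isa(dxil)                    # one translation

def compile_on_linux(dxil):
    return spirv_to_isa(dxil_to_spirv(dxil))    # two translations
```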

    But also, the native ISA can be serialized out to a file and saved so it doesn’t have to be done every time the game runs. So this is only really a problem the first time a given shader is encountered (or until you update the app or your drivers).

    Finally, this extra translation of DXIL through SPIR-V often has to be more conservative to ensure correct behavior, which can add overhead. That is to say, even though you might be running on the same GPU, the native ISA generated through the two paths is unlikely to be identical, and one will likely perform better; it’s more likely to be the DXIL->ISA path, because that’s the one that gets more attention from driver devs (e.g. Nvidia/AMD engineers optimizing their compilers).

    • Chobbes@lemmy.world · 7 months ago

      You’re not wrong, and the translation layers definitely do make a difference for performance. Still, it’s not all that different from a slightly slow, slightly odd “native” implementation of the APIs. The division is more obvious when it’s something like Rosetta translating between entirely different ISAs.

      • teawrecks@sopuli.xyz · 7 months ago

        SDL isn’t adding any runtime translation overhead; that’s the difference. SDL is an abstraction layer, just like UE’s RHI or Unity’s render backends. All the translation is figured out at compile time; there’s no runtime JITing of instructions for the given platform.

        It’s a similar situation with dynamic libraries: using a DLL or .so doesn’t mean you’re not running code natively on the CPU. But the Java or .NET runtimes are JITing bytecode to the CPU ISA at runtime; they are not native.

        I’m sorry if I’m not explaining myself well enough; I’m not sure where the confusion still lies, but using just SDL does not make an app non-native. As a Linux gamer, I would love it if more indie games used SDL, since it is more than capable for most titles and would support both Windows and Linux natively.

        • Chobbes@lemmy.world · edited · 7 months ago

          You’re explaining yourself fine, I just don’t necessarily agree with the distinction. It’s like when people say a language is “a compiled language” when that doesn’t really have much to do with the language; it’s more of an implementation detail. It’s a mostly arbitrary distinction that makes sense to talk about sometimes in practice, but it’s not necessarily meaningful philosophically.

          That said, SDL isn’t really any different. It’s not translating languages, but you still have additional function calls and overhead wrapping lower level libraries, just the same as Wine. DXVK has an additional problem where shaders have to be converted to SPIR-V or something, which arguably makes it “more non-native”, but I think that’s not as obvious of a distinction to make either. You probably wouldn’t consider C code non-native, even though it’s translated to several different languages before you get native code, and usually you consider compilers that use C as a backend to be native code compilers too, so why would you consider HLSL -> SPIR-V to be any different? There are reasons why you might make these distinctions, but my point is just that it’s more arbitrary than you might think.

          • teawrecks@sopuli.xyz · 7 months ago

            you still have additional function calls and overhead wrapping lower level libraries

            But it all happens at compile time. That’s the difference.

            You probably wouldn’t consider C code non-native

            This goes back to your point above:

            It’s like when people say a language is “a compiled language” when that doesn’t really have much to do with the language

            C is just a language, it’s not native. Native means the binary that will execute on hardware is decided at compile time, in other words, it’s not jitted for the platform it’s running on.

            usually you consider compilers that use C as a backend to be native code compilers too

            I assume you’re not talking about a compiler that generates C code here, right? If it’s outputting C, then no, it’s not native code yet.

            so why would you consider HLSL -> SPIR-V to be any different?

            Well first off, games don’t ship with their HLSL (unlike OGL where older games DID have to ship with GLSL), they ship with DXBC/DXIL, which is the DX analog to spir-v (or, more accurately, vice versa).

            Shader code is jitted on all PC platforms, yes. This is why I said above that shader code has its own quirks, but on platforms where the graphics API effectively needs to be interpreted at runtime, the shaders have to be jitted twice.

            • Tempy@lemmy.temporus.me · edited · 7 months ago

              I’d just point out that, for running an executable, Wine isn’t JITing anything, at least as far as I’m aware. They’ve implemented the code necessary to read .exe files and link them, and written replacement libraries for typical Windows DLLs, implemented on top of typical Linux/POSIX functions. And since, in most cases, Linux and Windows run on the same target CPU instruction set, most of the Windows code is runnable as-is, with some minor shim code when jumping between Linux calling conventions and Windows calling conventions and back again.

              Of course, this may be different when Wine isn’t running on the same target CPU as the Windows executable. Then there might be JITing involved. But I’ve never tested Wine in such a situation, though I’d expect it to just not work in that case.

              • teawrecks@sopuli.xyz · 6 months ago

                Yes, the JITing is specific to the graphics APIs. DXVK is doing runtime translation from DX to VK. When possible, it is certainly just making a 1:1 call, but since the APIs aren’t orthogonal, in some cases it will need to store state and “emulate” certain API behavior using multiple VK calls. This is much more the case when translating DX9/11.
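                A toy sketch of why that can’t always be 1:1 (class and call names invented; this is not how DXVK is actually structured): a DX9-style API sets state through separate calls, while the Vulkan-style target wants everything supplied at draw time, so the translator has to buffer state and replay it.

```python
# Hypothetical sketch of stateful API translation: one incoming call may
# become several outgoing ones, because the two APIs don't line up.

class DxToVkTranslator:
    def __init__(self):
        self.state = {}       # buffered DX state, not yet sent anywhere
        self.vk_calls = []    # what actually gets emitted to the VK driver

    def SetRenderState(self, key, value):
        self.state[key] = value          # no VK call yet, just remember it

    def DrawPrimitive(self, vertex_count):
        # Flush the buffered state, then draw: multiple VK calls for one
        # DX draw call.
        for key, value in sorted(self.state.items()):
            self.vk_calls.append(f"vkCmdSet{key}({value})")
        self.vk_calls.append(f"vkCmdDraw({vertex_count})")
```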

            • Chobbes@lemmy.world · edited · 7 months ago

              But it all happens at compile time. That’s the difference.

              No: when you have a library like SDL, you have functions that wrap lower level libraries for interacting with the screen and devices. At SDL’s compile time you may have preprocessor macros or whatever that select the implementation of these functions based on the platform, but at run time you still have the extra overhead of these SDL function calls when using the library. The definitions won’t be inlined, and there will be extra overhead to provide a consistent higher level interface, since it won’t exactly match the lower level APIs. It doesn’t matter that it’s compiled; there’s still overhead.

              C is just a language, it’s not native. Native means the binary that will execute on hardware is decided at compile time, in other words, it’s not jitted for the platform it’s running on.

              Wine doesn’t really involve any jitting, though, it’s just an implementation of the Windows APIs in the Linux userspace… So, arguably it’s as native as anything else. The main place where JIT will occur is for shader compilation in DXVK, where the results will be cached, and there is still JIT going on on the “native windows” side anyway.

              If you don’t consider C code compiled to native assembly to be native, then this is all moot, and pretty much nothing is native! I agree that C is just a language so it’s not necessarily compiled down to native assembly, but if you don’t consider it native code when it is… Then what does it mean to be native?

              the binary that will execute on hardware is decided at compile time

              This is true for interpreted languages. The interpreter is a fixed binary that executes on hardware, and you can even bake in the program being interpreted into an executable! You could argue that control flow is determined dynamically by data stored in memory, so maybe that’s what makes it “non-native”, but this is technically true for any natively compiled binary program too :). There’s a sense in which every program that manipulates data is really just an interpreter, so why consider one to be native and not the other? Even native assembly code isn’t really what’s running on the processor due to things like microcode, and arguably speculative execution is a fancy kind of JIT that happens in hardware which essentially dynamically performs optimizations like loop unrolling… It’s more of a grey area than you might think, and nailing down a precise mathematical definition of “native code” is tricky!
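              That blurry line can be made concrete with a toy interpreter (opcodes invented): the program below is a fixed piece of code, yet what actually “runs” is decided entirely by the data it’s fed.

```python
# Sketch of a minimal bytecode interpreter: fixed binary, data-driven
# control flow. Opcodes are invented for illustration.

def run(bytecode):
    stack = []
    pc = 0
    while pc < len(bytecode):
        op, arg = bytecode[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "JNZ" and stack[-1] != 0:
            pc = arg              # control flow determined by data
            continue
        pc += 1
    return stack

# (PUSH 2, PUSH 3, ADD) leaves 5 on the stack
```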

              I assume you’re not talking about a compiler that generates C code here, right? If it’s outputting C, then no, it’s not native code yet.

              But it will be native code :). Pretty much all compilers go through several translation steps between intermediate languages, and it’s not uncommon for compilers to use C as an intermediate language; Vala does this, for instance, and even compilers for languages like Haskell have done so in the past. C is a less common target these days, as many compiler front ends spit out LLVM IR instead, but it’s still around. Plus, there are often more restricted C-like languages in the middle: Haskell’s GHC still uses Cmm, a C-like intermediate language, during compilation, for example.

              Well first off, games don’t ship with their HLSL (unlike OGL where older games DID have to ship with GLSL), they ship with DXBC/DXIL, which is the DX analog to spir-v (or, more accurately, vice versa).

              Sure, and arguably it’s a little different to ship a lower level representation, but there will still be a compilation step for this, so you’re arguably not really introducing a new compilation step anyway, just a different one for a different backend. If you consider a binary that you get from a C compiler to be native code, why shouldn’t we consider this to be native code :)? It might not be as optimized as it could have been otherwise, but there’s plenty of native programs where that’s the case anyway, so why consider this to be any different?

              Ultimately the native vs. non-native distinction doesn’t really matter, and arguably this distinction doesn’t even really exist — it’s not easy to settle on a satisfying formal definition. The only thing that matters is performance, and people often use things like “it’s a compiled language” and “it has to go through fewer translation layers / layers of indirection” as rules of thumb to guess whether something is less efficient than it could be, but that doesn’t always hold up and doesn’t always matter. Arguably this is a case where it doesn’t really matter. There’s some overhead with Wine and DXVK, but it clearly performs really well (and supposedly better in some cases), and it’s hard to truly compare because the platforms are so different in the first place, so maybe it’s all close enough anyway :).

              Also to be clear, it’s not that I don’t see your points, and in a sense you’re correct! But I don’t believe these distinctions are as mathematically precise as you do, which is my main point :). Anyway, I hope you have a happy holidays!

              • teawrecks@sopuli.xyz · 7 months ago

                Ultimately the native vs. non-native distinction doesn’t really matter, and arguably this distinction doesn’t even really exist

                Alright. Just letting you know you’re going to have a hard time communicating with people in this industry if you continue rejecting widely accepted terminology. Cheers.