  • How does that change what I said? Remote X is massively more bandwidth hungry than all the others. I mean, things like TeamViewer Tensor exist, and from what I’ve used, it’s massively stable. RHEL works perfectly with it. So I don’t want to hear that you can’t get a commercially supported… there are tons of vendors that will do thin clients for you.

    X is a terrible protocol for modern widgets because modern toolkits do their best to work around X; that’s literally in the code. Look at GTK or Qt: both actively avoid going through X wherever they can and just render directly, because by every metric it’s better to work directly with the hardware than to go through a slow middle layer that just spins and wastes cycles.

    Heck, even the X developers have left X, because it’s done. It’s a dead technology. It doesn’t matter how many people are deploying it in enterprise environments, or how well they are deploying it. There are no devs on the project and GPUs keep changing. There are only so many ways you can keep band-aiding a GPU into pretending it’s a giant framebuffer; at some point there will be a break in the underlying architecture of GPUs where treating them as just VRAM to dump data into no longer works. The amount of die space given to the backwards-compatible VGA and SHM paths is minuscule on today’s cards.

    Heck, using MIT-SHM on X11 on a Pi is terrible. You usually get worse results, because the underlying hardware simply isn’t built to be treated like an old video card. You actually do better using hardware acceleration. The usual mantra for X11 apps on a Pi is: if you get good results with shared memory, use that and never upgrade your underlying Pi; otherwise, always use the hardware where possible.
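    For reference, this is roughly what that MIT-SHM path looks like from the client side: the app renders into a shared-memory segment on the CPU and the X server blits it out with XShmPutImage. A minimal sketch, not from the thread, assuming a local X display with the Xlib/XShm headers installed (build guess: gcc shm_put.c -lX11 -lXext):

    ```c
    /* Minimal MIT-SHM sketch: CPU renders into a shared segment, server blits it. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <X11/Xlib.h>
    #include <X11/Xutil.h>
    #include <X11/extensions/XShm.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy || !XShmQueryExtension(dpy)) {
            fprintf(stderr, "no display or no MIT-SHM\n");
            return 1;
        }
        int scr = DefaultScreen(dpy);
        int w = 640, h = 480;

        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, w, h,
                                         0, 0, BlackPixel(dpy, scr));
        XSelectInput(dpy, win, ExposureMask);
        XMapWindow(dpy, win);
        GC gc = XCreateGC(dpy, win, 0, NULL);

        /* The XImage's pixel storage lives in a SysV shared segment, so the
         * pixels don't have to be pushed through the X socket. */
        XShmSegmentInfo shminfo;
        XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                      DefaultDepth(dpy, scr), ZPixmap, NULL,
                                      &shminfo, w, h);
        shminfo.shmid = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                               IPC_CREAT | 0600);
        shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
        shminfo.readOnly = False;
        XShmAttach(dpy, &shminfo);
        XSync(dpy, False);

        /* "Render" purely on the CPU: a flat grey frame. */
        memset(img->data, 0x80, (size_t)img->bytes_per_line * img->height);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose) {
                /* Server copies straight out of the shared segment. */
                XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, w, h, False);
                XFlush(dpy);
            }
        }
    }
    ```

    Every frame still goes through the CPU and a server-side copy, which is exactly the “treat the GPU like a dumb framebuffer” pattern being criticized here.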

    > Also, unlike X, Wayland generally expects a GPU in your remote desktop servers, and have you seen the prices for those lately?

    You don’t even need a good one by today’s standards. At most, a compositor just needs to convert a pixmap into a texture. Anything that supports GLX_EXT_texture_from_pixmap will be enough, and at low resolutions you can just hand the work to your CPU; we’re not talking about intense operations. Literally any GPU from the last fifteen years has enough power to do this comfortably. Shoot, if your thin client is a Pi, the Pi itself has vastly more than enough resources. You could literally run a cluster of Pis if you wanted; labwc is a completely fine compositor for basic thin clients and is basically the replacement for X on the Pi, because X11 was so badly misaligned with how modern GPUs actually work.
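    To give an idea of how small that “pixmap into texture” step is, here’s a rough sketch of the GLX_EXT_texture_from_pixmap dance: pick an FBConfig that can bind pixmaps as RGBA textures, wrap an X pixmap in a GLXPixmap, and bind it as a GL_TEXTURE_2D. Again not from the thread; it assumes a local GLX-capable display, and the 256x256 size is arbitrary (build guess: gcc tfp.c -lGL -lX11):

    ```c
    /* Sketch of GLX_EXT_texture_from_pixmap: an X pixmap becomes a GL texture. */
    #include <stdio.h>
    #include <string.h>
    #include <X11/Xlib.h>
    #include <GL/gl.h>
    #include <GL/glx.h>
    #include <GL/glxext.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;
        int scr = DefaultScreen(dpy);

        /* Bail out if the server doesn't advertise the extension. */
        const char *exts = glXQueryExtensionsString(dpy, scr);
        if (!exts || !strstr(exts, "GLX_EXT_texture_from_pixmap")) {
            fprintf(stderr, "GLX_EXT_texture_from_pixmap not available\n");
            return 1;
        }

        /* Pick an FBConfig whose pixmaps can be bound as RGBA textures. */
        int cfg_attribs[] = {
            GLX_DRAWABLE_TYPE, GLX_PIXMAP_BIT,
            GLX_BIND_TO_TEXTURE_RGBA_EXT, True,
            GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8, GLX_ALPHA_SIZE, 8,
            None
        };
        int n = 0;
        GLXFBConfig *fbc = glXChooseFBConfig(dpy, scr, cfg_attribs, &n);
        if (!fbc || n == 0) return 1;

        /* An ordinary X pixmap: imagine a client window's contents living here. */
        XVisualInfo *vi = glXGetVisualFromFBConfig(dpy, fbc[0]);
        if (!vi) return 1;
        Pixmap xpix = XCreatePixmap(dpy, RootWindow(dpy, scr), 256, 256, vi->depth);

        /* Wrap it so GL can treat it as a 2D RGBA texture source. */
        int pix_attribs[] = {
            GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
            GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
            None
        };
        GLXPixmap glxpix = glXCreatePixmap(dpy, fbc[0], xpix, pix_attribs);

        GLXContext ctx = glXCreateNewContext(dpy, fbc[0], GLX_RGBA_TYPE, NULL, True);
        glXMakeCurrent(dpy, glxpix, ctx);

        PFNGLXBINDTEXIMAGEEXTPROC glXBindTexImageEXT =
            (PFNGLXBINDTEXIMAGEEXTPROC)
                glXGetProcAddressARB((const GLubyte *)"glXBindTexImageEXT");

        /* From here on the pixmap's pixels are sampled by the GPU directly;
         * nothing is copied back through the client. */
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glXBindTexImageEXT(dpy, glxpix, GLX_FRONT_LEFT_EXT, NULL);

        printf("pixmap bound as texture %u\n", tex);
        return 0;
    }
    ```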

    What I am saying is X can be whatever in “enterprise deployment”, but X has stopped matching what modern machines look like. Video cards have become more than a bunch of bits dumped into VRAM. No matter how many deployments you’ve done, that doesn’t change that fact. X barely resembles what systems of the last twenty-five years look like. Nobody is working on it. You can have 100 deployments under your belt; nobody is still working on it. No matter how you slice the attributes of X, nobody is actively coding for X any longer. And as for damage and whatnot, lots of implementations behind wl_surface_damage_buffer hand the work to the hardware via EGL/DMABUF, because GPUs have been smart enough to do that on their own for the last fifteen years, and most compositors take advantage of it.

    Again, it doesn’t matter how many deployments you might have: the hardware does it better than X will ever do it. It’s impossible for X to do it better, because there’s nobody there to write anything better. And it will stay that way until the heat death of the universe, unless someone (or several someones) picks up the massive task of taking care of Xorg. Nothing changes any of this reality.

    Does this mean you need to drop X11 tomorrow? No. That’s the entire point of Xorg being open: you can keep it until someone rips it from your cold dead hands. But your stubbornness does not change the fact that X is absolute garbage on the network, is massively inefficient, and that most things these days actively avoid using X directly; when they have to, they just stuff uncompressed bits into a massive packet with zero optimization.

    You can totally mill grain with a stone wheel today, nobody stops you. But you’re not going to convince many people that that is the best way to mill grain. I don’t know what else to say. I don’t want you to stop using X, but your usage of it doesn’t change any fact that I’ve stated. It’s a very fat, very unoptimized, very slow protocol, and there are indeed commercial solutions that are better. I’ve just named one, but there are many. That is just reality; the world has moved past dual channel RAM and buffers. I’ve built VGA video cards, I know how to build a RAMDAC from logic gates, and all of that is gone in today’s hardware, yet X still carries these silly assumptions about hardware that doesn’t even exist anymore.


  • IHeartBadCode@fedia.io to linuxmemes@lemmy.world · Preference · 5 days ago

    And the network transparency argument is long gone. While you can indeed send windows over the wire, most toolkits do client-side rendering and decorations, so you’re just shipping bloated pixmaps across the network, when things like RDP, VNC, etc. handle compression, window damage, and so on far better. And anything relying on, or accelerated by, DRI3 is just NOT network transparent.

    Most modern toolkits have moved past X11 because the X protocol was severely lacking, and there wasn’t a good way for a committee to modify the protocol in a unified manner. I mean, look at how much earth had to move just to get the XFixes and Damage extensions in. Toolkits wanted deep access to the underlying hardware, so they would go out of their way to work around X, because it just could not keep up.