nwellnhof a day ago

Removing XSLT from browsers was long overdue, and I'm saying that as an ex-maintainer of libxslt who probably triggered (not caused) this removal. What's more interesting is that Chromium plans to switch to a Rust-based XML parser. Currently, they seem to favor xml-rs, which only implements a subset of XML. So apparently, Google is willing to remove standards-compliant XML support as well. This is a lot more concerning.

  • xmcp123 a day ago

    It’s interesting to see the casual slide of Google towards almost Internet Explorer 5.1-style behavior, where standards can just be ignored “because market share”.

    Having flashbacks of “<!--[if IE 6]> <script src="fix-ie6.js"></script> <![endif]-->”

    • granzymes a day ago

      The standards body is deprecating XSLT with support from Mozilla and Safari (Mozilla first proposed the removal).

      Not sure how you got from that to “Google is ignoring standards”.

      • _heimdall 21 hours ago

        There's a lot of history behind WHATWG that revolves around XML.

        WHATWG is focused on maintaining specs that browsers intend to implement and maintain. When Chrome, Firefox, and Safari agree to remove XSLT, that effectively settles WHATWG's removal of the spec.

        I wouldn't put too much weight behind who originally proposed the removal. It's a pretty small world when it comes to web specifications, the discussions likely started between vendors before one decided to propose it.

        • NewsaHackO 20 hours ago

          The issue is that you can’t tell people to put little weight on who originally proposed the removal when the other poster is putting all the weight on Google, who didn’t even propose it in the first place.

          • _heimdall 20 hours ago

            I wouldn't put weight on the initial proposer either way. As best I've been able to keep up with the topic, Google has been the party leading the charge arguing for the removal. I thought they were also the first to announce their decision, though maybe my timing is off there.

            • akerl_ 19 hours ago

              It doesn't seem like much of a charge to be led. The decision appears to have been pretty unanimous.

              • _heimdall 18 hours ago

                By browser vendors, you mean? Yes, it seems like they were in agreement, and many here seem to think that was largely driven by Google, though that's speculation.

                Users and web developers seemed much less on board though[1][2], enough that Google referenced that in their announcement.

                [1] https://github.com/whatwg/html/issues/11578 [2] https://github.com/whatwg/html/issues/11523

                • akerl_ 18 hours ago

                  Yes, that's what I mean. In this comment tree, you've said:

                  > google has been the party leading the charge arguing for the removal.

                  and

                  > many here seem to think that was largely driven by google though that's speculation

                  I'm saying that I don't see any evidence that this was "driven by google". All the evidence I see is that Google, Mozilla, and Apple were all pretty immediately in agreement that removing XSLT was the move they all wanted to make.

                  You're telling us that we shouldn't think too hard about the fact that a Mozilla staffer opened the request for removal, and that we should notice that Google "led the charge". It would be interesting if somebody could back that up with something besides vibes, because I don't even see how there was a charge to lead. Among the groups that agreed, that agreement appears to have been quick and unanimous.

                  • _heimdall 15 hours ago

                    In the github issues I have followed, including those linked above, I primarily saw Google engineers arguing for removing XSLT from the spec. I'm not saying they are the sole architects of the spec removal, and I'm not claiming to have seen all related discussions.

                    I am sharing my view, though, that Google engineers have made up the majority of the browser-engineer comments I've seen arguing for removing XSLT.

      • andrewl-hn a day ago

        Probably, if Mozilla hadn't pushed for it initially, XSLT would have stayed around for another decade or longer.

        Their board syphons the little money that is left out of their "foundation + corporation" combo, and they keep cutting people from the Firefox dev team every year. Of course they don't want to maintain pieces of web standards if it means an extra million for their board members.

        • echelon 21 hours ago

          Mozilla's board are basically Google yes-people.

          I'm convinced Mozilla is purposefully engineered to be rudderless: the C-suite draws down huge salaries and approves dumb, mission-orthogonal objectives in order to keep Mozilla itself impotent, never a threat to Google.

          Mozilla is Google's antitrust litigation sponge. But it's also kept dumb and obedient. Google would never want Mozilla to actually be a threat.

          If Mozilla had ever wanted a healthy side business, it wasn't in Pocket, XR/VR, or AI. It would have been in building a DevEx platform around MDN and Rust. It would have synergized with their core web mission. Those people have since been let go.

          • cxr 18 hours ago

            > If Mozilla had ever wanted a healthy side business, it wasn't in Pocket, XR/VR, or AI. It would have been in building a DevEx platform around MDN and Rust[…] Those people have since been let go.

            The first sentence isn't wrong, but the last sentence is confused in the same way that people who assume that Wikimedia employees have been largely responsible for the content on Wikipedia are confused about how stuff actually makes it into Wikipedia. In reality, WMF's biggest contribution is covering infrastructure costs and paying engineers to develop the MediaWiki platform that Wikipedia uses.

            Likewise, a bunch of the people who built up MDN weren't and never could be "let go", because they were never employed by Mozilla to work on MDN to begin with.

            (There's another problem, too, which is that in addition to selling short a lot of people who are responsible for making MDN as useful as it is but never got paid for it, it presupposes that those who were being paid to work on MDN shouldn't have been let go.)

          • akerl_ 18 hours ago

            So the idea is that some group has been perpetuating a decade or so's worth of ongoing conspiracy to ensure that Mozilla continues to exist but makes decisions that "keep Mozilla itself impotent"?

            That seems to fail Occam's razor pretty hard, given that the competing hypotheses for each of their decisions include "Mozilla staff think they're doing a smart thing but they're wrong" and "Mozilla staff are doing a smart thing, it's just not what you would have done".

            • cxr 17 hours ago

              You're not wrong.

              And where philosophical razors are concerned, the most apt characterization of the source of Mozilla's decay is the one that Hanlon gave us.

      • lenkite an hour ago

        > The standards body is deprecating XSLT

        The "CORPO CARTEL body" is deprecating XSLT. WhatWG is a not really a standards body like the W3C.

      • mtillman 18 hours ago

        I think the person you’re replying to was referring to the partial support for XML rather than to the XSLT part.

      • echelon 21 hours ago

        The standards body is Google and a bunch of companies consuming Google engine code.

        • dewey 21 hours ago

          I guess you mean except Mozilla and Safari... which are the two other competing browser engines? It's not like it's a room full of Chromium-based browsers.

          • themafia 18 hours ago

            Do Mozilla and Safari _not_ take money from Google?

          • BolexNOLA 20 hours ago

            Safari yes

            Mozilla…are they actually competing? Like really and truly.

            • bigyabai 20 hours ago

              Mozilla has proven they can exist in a free market; really and truly, they do compete.

              Safari is what I'm concerned about. Without Apple's monopoly control, Safari is guaranteed to be a dead engine. WebKit isn't well-enough supported on Linux and Windows to compete against Blink and Gecko, which suggests that Safari is the most expendable engine of the three.

              • noosphr 19 hours ago

                If your main competitor is giving you 90% of your revenue they aren't a competitor.

              • meindnoch 20 hours ago

                >Mozilla has proven they can exist in a free market; really and truly, they do compete.

                This gave me a superb belly laugh.

                • oblio 16 hours ago

                  Mozilla used to compete well but that ended... at least 10 years ago?

              • BolexNOLA 13 hours ago

                I really can’t imagine Safari is going anywhere. Meanwhile the Mozilla Foundation has been very poorly steering the ship for several years and has rightfully earned the reputation it has garnered as a result. There’s a reason there are so many superior forks. They waste their time on the strangest pet projects.

                Honestly the one thing I don’t begrudge them is taking Google’s money to make them the default search engine. That’s a very easy deal with the devil to make especially because it’s so trivial to change your default search engine which I imagine a large percentage of Firefox users do with glee. But what they have focused on over the last couple of years has been very strange to watch.

                I know Proton gets mixed feelings around here, but to me it’s always seemed like Proton and Mozilla should be more coordinated. Feel like they could do a lot of interesting things together

    • Aurornis a day ago

      I don’t get the comparison. The XSLT deprecation has support beyond Google.

      • amarant 21 hours ago

        It's just ill-informed ideological thinking. People see Google doing anything and automatically assume it's a bad thing and that it's only happening because Google are evil.

        HN has historically been relatively free of such dogma, but it seems times are changing, even here

        • hn_throwaway_99 20 hours ago

          Completely agree. You see this all the time in online discourse. I call it the "two things can be true at the same time" problem, where a lot of people seem unable to believe that 2 things can simultaneously be true, in this case:

          1. Google has engaged in a lot of anticompetitive behavior to maintain and extend their web monopoly.

          2. Removing XSLT support from browsers is a good idea that is widely supported by all major browser vendors.

        • pmontra 20 hours ago

          Maybe free of the "evil Google" dogma, but not free from dogma. The few who dared to express one tenth of the disapproval we usually express about Apple nowadays were downvoted to transparent ink in a matter of minutes. Microsoft had its honeymoon period with HN after their pro-open-source campaign, WSL, VSCode, etc. People who prudently remembered the Microsoft of the 90s and the 2000s did get their fair share of downvotes. Then Windows 11 happened. Surprise. Actually, I thought there had been a consensus about Google being evil for at least ten years, but I might be wrong.

          • amarant 19 hours ago

            "relatively" is meant to be doing a lot of work in my previous comment. Allow me to clarify: Obviously some amount was always there, but it used to be so much less than it is now, and, more importantly, the difference between HN and other social media, such as Reddit, used to be bigger, in terms of amount of dogma.

            HN still has less dogma than Reddit, but it's closer than it used to be in my estimation. Reddit is still getting more dogma each day, but HN is slowly catching up.

            I don't know where to turn to for online discourse that is at least mostly free from dogma these days. This used to be it.

        • cxr 17 hours ago

          > It's just ill-informed ideological thinking.

          > People see Google doing anything and automatically assume it's a bad thing and that it's only happening because Google are evil.

          Sure, but a person also needs to be conscious of the role that this perception plays in securing premature dismissal of anyone who ventures to criticize.

          (In quoting your comment above, I've deliberately separated the first sentence from the second. Notice how easily the observation of the phenomenon described in the second sentence can be used to undergird the first claim, even though the first claim doesn't actually follow as a necessary consequence from the second.)

        • troupo 16 hours ago

          Safari is "cautiously supportive", waiting for someone else to remove support.

          Google does lead the charge on it, having immediately opened a PR to remove it from Chromium and stated an intent to remove, even though the engineer pushing it didn't know about existing XSLT uses before opening either of them.

          XSLT is a symptom of how browser vendors approach the web these days. And yes, Google are the worst of them.

    • otabdeveloper4 21 hours ago

      So-called "standards" on the Google (c) Internet (c) network are but a formality.

  • jillesvangurp a day ago

    > This is a lot more concerning.

    I'm not so sure that's problematic. Browsers probably just aren't a great platform for doing a lot of XML processing at this point.

    Preserving the half-implemented, frozen state of the early 2000s really doesn't serve anyone except those maintaining legacy applications from that era. I can see why they are pulling out complex C++ code related to all this.

    It's the natural conclusion of XHTML being sidelined in favor of HTML5 about 15-20 years ago. The whole web-services bubble, bloated namespace processing, and all the other complexity that came with it left behind a lot of gnarly libraries. The world has kind of moved on since then.

    From a security point of view it's probably a good idea to reduce the attack surface a bit by moving to a Rust-based implementation. What use cases remain for XML parsing in a browser if XSLT support is removed? I guess some parsing from JavaScript. In which case you could argue that the usual solution in the JS world of polyfills and e.g. WASM libraries might provide a valid, good-enough alternative or migration path.

  • zetafunction 21 hours ago

    https://issues.chromium.org/issues/451401343 tracks work needed in the upstream xml-rs repository, so it seems like the team is working on addressing issues that would affect standards compliance.

    Disclaimer: I work on Chrome and have occasionally dabbled in libxml2/libxslt in the past, but I'm not directly involved in any of the current work.

    • inejge 21 hours ago

      I hope they will also work on speeding it up a bit. I needed to go through 25-30 MB SAML metadata dumps, and an xml-rs pull parser took 3x more time than the equivalent in Python (using libxml2 internally, I think.) I rewrote it all with quick-xml and got a 7-8x speedup over Python, i.e., at least 20x over xml-rs.

      • nwellnhof 18 hours ago

        Python ElementTree uses Expat, only lxml uses libxml2. Right now, I'm working on SIMD acceleration in my not yet released, GPL-licensed fork of libxml2. If you have lots of character data or large attribute values like in SVG, you will see tremendous speed improvements (gigabytes per second). Unfortunately, this is unlikely to make it into web browsers.

    • Ygg2 21 hours ago

      Wait. They are going along with an XML parser that supports DOCTYPEs? I get that XSLT is ancient and full of exploits, but so is DOCTYPE. It's literally the poster child for the billion laughs attack (among other vectors).
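
      For reference, the classic shape of the attack (truncated to three levels here; the usual ten levels expands to roughly a billion "lol"s):

          <?xml version="1.0"?>
          <!DOCTYPE lolz [
            <!ENTITY lol "lol">
            <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
            <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
          ]>
          <lolz>&lol3;</lolz>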

      • mananaysiempre 21 hours ago

        You don't need an external DTD for that; you can put an ENTITY declaration straight in your source file (the "internal subset"), and the XML spec says it needs to be processed. (I seem to recall someone saying that Adobe tools are fond of putting those in their exported SVG files.)
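
        Something roughly like this, say (a benign example; the entity name is invented):

            <?xml version="1.0"?>
            <!DOCTYPE svg [
              <!ENTITY ns_svg "http://www.w3.org/2000/svg">
            ]>
            <svg xmlns="&ns_svg;" width="10" height="10"/>

        No external file is involved; the entity lives in the internal subset, and a conforming parser has to expand it.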

      • Mikhail_Edoshin 20 hours ago

        The billion laughs bug was fixed in libxml2 in 2008. (As far as I understand, in .NET this bug was fixed in 2014 with .NET 4.5.2. In 2019 a bug similar to billion laughs was found in the Go YAML parser, although it was explicitly mentioned and forbidden by the YAML spec. Among other products it affected Kubernetes.)

        Other vectors probably mean a single vector: external entities, where a) you process untrusted XML on a server and b) allow the processor to read external entities. This is not a bug, but early versions of XML processors may lack an option to disallow access to external entities. This also has been fixed.

        XSLT has no exploits at all, that is, no features that can be misused.

        • Ygg2 5 hours ago

          > Other vectors probably mean a single vector: external entities,

          XXE injection (which comes in several flavors), remote DTD retrieval, and quadratic blowup (a sort of twin to the billion laughs attack).

          You aren't wrong, though. They all live in the <!DOCTYPE> declaration. Hence my puzzlement.

          Why process it at all? If this is as security-focused as Google claims, fill the DOCTYPE with molten tungsten and throw it into the Mariana Trench. The external entities mechanism makes XSLT look well designed in comparison.

      • fabrice_d 21 hours ago

        The billion laughs attack has well known solutions (basically, don't recurse too deep). It's not a reason to not implement DOCTYPE support.

        • Ygg2 5 hours ago

          > The billion laughs attack has well known solutions (basically, don't recurse too deep)

          You can then recurse wide. In theory it's best to allow only X entity expansions of up to Y total size.

          The point is, DOCTYPE/external entities do a similar thing to XSLT/XSD (replacing elements with other elements), but in a positively ancient way.

  • svieira a day ago

    > Removing XSLT from browsers was long overdue

    > Google is willing to remove standards-compliant XML support as well.

    > They're the same picture.

    To spell it out, "if it's inconvenient, it goes" is something that the _owner_ does. The culture of the web was "the owners are those who run the web sites; the servants are the software that provides an entry point to the web (read or publish or both)". This kind of "well, it's dashed inconvenient to maintain a WASM layer for a dependency that is no longer safe to vendor as a C dependency" is not the kind of servant-oriented mentality that made the web great, not just as a platform to build on but as a platform to emulate.

    • akerl_ a day ago

      Can you cite where this "servant-oriented" mentality is from? I don't recall a part of the web where browser developers were viewed as not having agency about what code they ship in their software.

      • svieira 18 hours ago

        A nice recent example is "smooshgate", wherein it was determined that breaking websites with an older version of MooTools installed was not an acceptable way to move the web forward, so we got `Array.prototype.flat` instead of `Array.prototype.flatten`: https://news.ycombinator.com/item?id=17141024

        > I don't recall a part of the web where browser developers were viewed as not having agency

        Being a servant isn't "not having agency", it's "who do I exercise my agency on behalf of". Tools don't have agency, servants do.

        • akerl_ 18 hours ago

          I think you're reading way too much into that. For one thing, that's a proposal for JavaScript, whose controlling body is TC39. For another, this was a bog-standard example of a draft proposal where a bug was discovered and the rollout was adjusted. If that's having a "servant-oriented mindset", so do 99% of software projects.

      • crabmusket 16 hours ago

        https://datatracker.ietf.org/doc/html/rfc8890

        > The Internet is for End Users

        > This document explains why the IAB believes that, when there is a conflict between the interests of end users of the Internet and other parties, IETF decisions should favor end users. It also explores how the IETF can more effectively achieve this.

        • akerl_ 16 hours ago

          It feels like maybe the disconnect here is with what "servant" means, and with this quote: "the servants are the software that provides an entry point to the web (read or publish or both)".

          RFC 8890 doesn't suggest anything that overlaps with my understanding of what the word "servant" means or implies. The library in my town endeavors to make decisions that promote the knowledge and education of people in my town. But I wouldn't characterize them as having a "servant mindset". Maybe the person above meant "service"?

          FWIW, Google/Mozilla/Apple appear to believe they're making the correct decision for the benefit of end users, by removing code that is infrequently used, unmaintained, and thus primarily a security risk for the majority of their users.

      • troupo 16 hours ago

        It's literal W3C policy: https://www.w3.org/TR/html-design-principles/#priority-of-co...

        --- start quote ---

        In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors; which in turn should be given more weight than costs to implementors; which should be given more weight than costs to authors of the spec itself, which should be given more weight than those proposing changes for theoretical reasons alone. Of course, it is preferred to make things better for multiple constituencies at once.

        --- end quote ---

        However, the needs of browser implementers have long been the one and only priority.

        Oh. It's also Google's own policy for deprecation: https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...

        --- start quote ---

        First and foremost we have a responsibility to users of Chromium-based browsers to ensure they can expect the web at large to continue to work correctly.

        The primary signal we use is the fraction of page views impacted in Chrome, usually computed via Blink’s UseCounter UMA metrics. As a general rule of thumb, 0.1% of PageVisits (1 in 1000) is large, while 0.001% is considered small but non-trivial. Anything below about 0.00001% (1 in 10 million) is generally considered trivial. There are around 771 billion web pages viewed in Chrome every month (not counting other Chromium-based browsers). So seriously breaking even 0.0001% still results in someone being frustrated every 3 seconds, and so not to be taken lightly!

        --- end quote ---
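
        (Sanity check on that last figure: 0.0001% of 771 billion is 771,000 affected page views per month, and a 30-day month is about 2.6 million seconds, so that works out to roughly one frustrated visit every 3.4 seconds.)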

        • akerl_ 16 hours ago

          I put this in a parallel thread, but maybe this is a linguistic gap between "servant", a person who does what they are told and has very limited agency within the bounds of their instructions, and "service", where you do things for the benefit of another entity.

          None of the above reads like a "servant-oriented mindset". It reads like "this is the framework by which we decide what's valuable". And by that framework, they're saying that keeping XSLT around is not the right call. You can disagree with that, but nothing you've quoted suggests that they're trying to prioritize any group over the majority of their users.

          • troupo 8 hours ago

            Nowhere does it say "majority of users".

            Moreover, Google's own doc says that even 0.0001% shouldn't be taken lightly.

            As I keep saying, the person pushing for XSLT removal didn't even know about existing XSLT uses until after he had posted the "intent to remove" and the PR to remove it from Chrome. And the usage stats he used have been questioned: https://news.ycombinator.com/item?id=45958966

        • dpark 14 hours ago

          I could argue that the W3C didn’t follow that policy when they attempted to push XHTML, which completely inverts that priority order, as XHTML is bad for users and great for purity.

          But instead I’ll point out that the W3C no longer maintains the HTML spec. They ceded that to the WHATWG, which was spun up by the major browser developers in response to the stagnation and what amounted to abandonment of HTML by the W3C.

          • troupo 8 hours ago

            Ah, that's true. While the W3C still maintains a lot of standards, the intent to remove XSLT was sent to the WHATWG.

            I didn't look at all the documents, but the Working Mode document, which describes how specs are added or removed, doesn't mention users even once. It's all about implementors: https://whatwg.org/working-mode

            • dpark 7 hours ago

              The principles document covers users more. But it still does not set the same priority hierarchy as the W3C.

              https://whatwg.org/principles

              I’m not surprised they focus on implementors in “working mode”, though. WHATWG specifically started because implementers felt like the W3C was holding back web apps. And it kind of was.

              WHATWG seemed to be created with an intent to return to the earlier days of browser development, where implementors would build the stuff they felt was important and tell other implementors how to be compatible. Less talking and more shipping.

      • dpark 21 hours ago

        It’s utter nonsense. Development of the web has always been advanced by the browser side, as it necessarily must. It’s meaningless for a server/web app to ship a feature that no browser supports.

      • hluska 18 hours ago

        I’ve never heard of "servant-oriented", but I understand the point. Browsers process and render whatever the server returns. Whether it's advertisements that download malware or a long rambling page on whatever I'm interested in now, browsers really don't have much control over what they run.

        • akerl_ 18 hours ago

          I'm not sure what you're talking about.

          1. As we're seeing here, browser developers determine what content the browser will parse and process. This happens in both directions: tons of what is now common JS/CSS shipped first as browser-specific behavior that was then standardized, and browsers have also dropped support for Gopher, SSLv2, and Flash, among other things.

          2. Browsers often explicitly provide a transformation point where users can modify content. Ad blockers work specifically because the browser is not a "servant" of whatever the server returns.

          3. Plenty of content can be hosted on servers but not understood or rendered by browsers. Elsewhere in the thread I joked about Opera, which notably included a torrent client; Chrome/Firefox/Safari did not, so torrent files served by a server weren't handled in those browsers.

      • etchalon a day ago

        I cannot imagine a time when browsers were "servant-oriented".

        Every browser I can think of was/is subservient to some big-big-company's big-big-strategy.

        • akerl_ 21 hours ago

          There have been plenty of browsers that were not part of a big company, either for part or all of their history. They don't tend to have massive market share, in part because browsers are amazingly complex and when they break, users get pissed because their browsing is affected.

          Even the browsers created by individuals or small groups don't have, as far as I've ever seen, a "servant-oriented mindset": like all software projects, they are ultimately developed and supported at the discretion of their developer(s).

          This is how you get interesting quirks like Opera including torrent support natively, or Brave bundling its own advertising/cryptocurrency thing.

          • etchalon 21 hours ago

            Both of those are strategies aimed at capturing a niche market segment in hopes of attracting them away from the big browsers.

            • akerl_ 20 hours ago

              I guess? I don't get the sense that when the Opera devs added torrents a couple decades ago, they were necessarily doing it to steal users so much as because the developers thought it was a useful feature.

              But it doesn't really make a difference to my broader point that browser devs have never had a "servant mindset".

              • etchalon 19 hours ago

                I agree. They've never had that mindset.

        • trinsic2 18 hours ago

          I don't remember it this way. My understanding was that browsers were designed to browse servers, and that servers, or websites, designed themselves around web standards that grew out of specs the browsers created as part of the browsing experience.

    • Aurornis 21 hours ago

      > The culture of the web was "the owners are those who run the web sites, the servants are the software that provides an entry point to the web (read or publish or both)".

      This is an attempt to rewrite history.

      Early browsers like NCSA Mosaic were never even released as open-source software.

      Netscape Navigator made headlines by offering a free version for academic or non-profit use, but they wanted to charge as much as $99 (in 1995 dollars!) for the browser.

      Microsoft got in trouble for bundling a web browser with their operating system.

      The current world, where we have true open-source browser options like Chromium, is probably closer to a true open web than what some people have retconned the early days of the web as being.

      • glenstein 21 hours ago

        Chromium commits are controlled by a pool of Google developers, so it's not open in the sense that anyone can contribute or steer the direction of the project.

        It's also 32 million lines of code, which is borderline prohibitive to maintain if you're planning any importantly different browser architecture without a business plan or significant funding.

        There are lots of things that are perfectly forkable and maintainable, and the world is better for them (shoutout to Nextcloud and the various Syncthing forks). But Chromium, insofar as it's a test of the health and openness of the software ecosystem, is not much of a positive signal, I think, on account of what it would realistically require to fork and maintain for any non-trivial repurposing.

        • dpark 20 hours ago

          > Chromium commits are controlled by a pool of Google developers, so it's not open in the sense that anyone can contribute or steer the direction of the project.

          By these criteria no software is open source.

          • glenstein 19 hours ago

            I would disagree. Corporate open source involves corporate dominance over governance to fit internal priorities. It meets the legal definition rather than the cultural model, which is community-driven and often multi-stakeholder. I would put Debian, VLC, and LibreOffice in the latter camp.

            • akerl_ 18 hours ago

              Is it often multi-stakeholder? Debian has bureaucracy and a set group of people with commit permissions. VLC likewise has the VideoLAN organization. LibreOffice has The Document Foundation.

              It seems like most open source projects either have:

              1. A singular developer, who controls what contributions are accepted and sets the direction of the project, or

              2. An in-group / foundation / organization / etc. that does the same.

              Do you have an example of an open source project whose roadmap is community-driven, any more than Google or Mozilla accept bug reports, feature requests, and patches and then decide if they want to merge them?

              • glenstein 17 hours ago

                A lot of the governance structures with "foundation" in their name, e.g. Apache Foundation, Linux Foundation, Rust Foundation, involve some combination of corporate parties, maintainers, independent contributors without any singularly corporate heavy hand responsible for their momentum.

                I don't know that roadmaps are any more or less "community driven" than anything else, given the nature of their structures, but one can draw a distinction between those and projects with a high degree of corporate alignment, like React (Facebook) or Swift (Apple).

                I'm agreeable enough to your characterization of open source projects. It's broad but, I think, charitably interpreted, true enough. But I think you can look at the range of projects and see ones that are multi stakeholder vs those with consolidated control and their degree of alignment with specific corporate missions.

                When Google tries to, or is able to, muscle through Manifest v3, or FLoC, or AMP, it's not trying to model a benevolent actor standing on open source principles.

                • akerl_ 17 hours ago

                  My argument is that "open source principles" do not suggest anything about how the maintainers have to handle input from users.

                  Open source principles have to do with the source being available and users being able to access/use/modify the source. Chrome is an open source project.

                  To try to expand "open source principles" to suggest that if the guiding entity is a corporation and they have a heavy hand in how they steer their own project, they're not meeting those principles, is just incorrect.

                  The average open source project is run by a person or group with a set of goals/intentions for the project, and they make decisions about the project based on those goals. That includes sometimes taking input from users and sometimes ignoring it.

                • pas 16 hours ago

                  Chromium can be forked (there are probably already a bunch of degoogled ones) to keep Manifest v2.

                  what's missing is social infrastructure to direct attention to this (and maybe it's missing because people are too dumb when it comes to adblockers, or they are not bothered that much, or ...)

                  and of course, maintaining a fork that keeps the usual convenience features/services that Google couples to Chrome is hard, and obviously this has antitrust implications, but nowadays not enough people care about this either

      • croes 20 hours ago

        The web wasn’t the browser, it was the protocols.

        • dpark 20 hours ago

          That’s not an accurate statement. The web was not just the protocols. It was the protocols and the servers that served them and the browsers that supported them and the web sites that were built with them. There is no web without browsers just like there is no web without websites.

          • hluska 18 hours ago

            I can’t understand why you’re splitting hairs to this extent. The web is protocols; some are implemented at server side whereas others are implemented at browser side. They’re all still protocols with a big dollop of marketing.

            That statement was accurate enough if you’re willing to read actively and provide people with the most minimal benefit of the doubt.

            • dpark 18 hours ago

              My response is in a chain discussing browsers in response to someone who literally said “The web wasn’t the browser it was the protocols.”

              I responded essentially “it was indeed also the browser”, which it seems you agree with so I don’t know what you’re even trying to argue about.

              > willing to read actively and provide people with the most minimal benefit of the doubt.

              Indeed

        • akerl_ 20 hours ago

          Most of the protocol specs were written retroactively to match functionality that browsers were already using in the wild.

  • zzo38computer 19 hours ago

    I think it might make more sense to use WebAssembly and make these features extensions that are included by default (many other things should possibly also be made extensions rather than built-in functions). The same can be done for picture formats, etc. This would improve security while also improving versatility (since you can replace parts of things), if the extension mechanism has these capabilities.

    (However, I also think that generally you should not require too many features if it can be avoided, whether those features are JavaScript, TLS, WebAssembly, CSS, or XSLT. However, they can be useful in many circumstances despite that.)

    • jjkaczor 13 hours ago

      Yeah, when I first heard about this a month or so ago, my thoughts were exactly this - a WebAssembly polyfill.

  • dietr1ch 21 hours ago

    > Currently, they seem to favor xml-rs which only implements a subset of XML.

    Which seems to be a sane decision given that the XML language allows for data blow-ups[^0]. I'm not sure what specific subset of XML `xml-rs` implements, but to me it seems insane to fully implement XML because of this.

    [^0]: https://en.wikipedia.org/wiki/Billion_laughs_attack

  • _heimdall 21 hours ago

    Given that you have experience working on libxslt, why do you think they should have removed it from the spec entirely rather than improving the current implementation or moving towards modern XSLT 3?

  • gnatolf 19 hours ago

    I was somewhat confused and irritated by the lack of a clear frontrunner crate for XML support in Rust. I get that XML isn't sexy, but still.

  • cptskippy 18 hours ago

    > Currently, they seem to favor xml-rs which only implements a subset of XML.

    What in particular do you find objectionable about this implementation? It's only claiming to be an XML parser; it isn't claiming to validate against a DTD or schema.

    The XML standard is very complex and broad. I would be surprised if anyone has implemented it in its entirety beyond a company like Microsoft or Oracle. Even then I would question it.

    At the end of the day, much of XML is hard if not impossible to use or maintain. A lot of it was defined without much thought given to practicality, and most developers will never have to deal with most of its eccentricities.

  • James_K 21 hours ago

    What's long overdue is them updating to a modern version of XSLT.

dfabulich a day ago

In part 1 of this article, the author wrote, "XSLT is an essential companion to RSS, as it allows the feed itself to be perused in the browser"

Actually, you can make an RSS feed user-browsable by using JavaScript instead. You can even run XSLT in JavaScript, which is what Google's polyfill does.
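
For example, a rough sketch in a module script, using the XSLTProcessor API (the same API the polyfill provides; the file names are hypothetical):

    const parser = new DOMParser();
    // Fetch the feed and its stylesheet, and parse both as XML.
    const [xml, xsl] = await Promise.all(["feed.xml", "feed.xsl"].map(
      async (url) => parser.parseFromString(await (await fetch(url)).text(), "application/xml")));
    const proc = new XSLTProcessor();
    proc.importStylesheet(xsl);
    // Render the transformed feed in place of the page body.
    document.body.replaceChildren(proc.transformToFragment(xml, document));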

I've written thousands of lines of XSLT. JavaScript is better than XSLT in every way, which is why JavaScript has thrived and XSLT has dwindled.

This is why XSLT has got to go: https://www.offensivecon.org/speakers/2025/ivan-fratric.html

  • ndriscoll a day ago

    > JavaScript is better than XSLT in every way

    Obviously not in every way. XSLT is declarative and builds pretty naturally off of HTML for someone who doesn't know any programming languages. It gives a very low-effort but fairly high power (especially considering its neglect) on-ramp to templated web pages with no build steps or special server software (e.g. PHP, Ruby) that you need to maintain. It's an extremely natural fit if you want to add new custom HTML elements. You link a template just like you link a CSS file to reuse styles. Obvious.
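
    To make that concrete: hooking a stylesheet up to a page is one processing instruction at the top of the XML (file names hypothetical):

        <?xml version="1.0"?>
        <?xml-stylesheet type="text/xsl" href="site.xsl"?>
        <page>
          <title>My first page</title>
        </page>

    The browser fetches site.xsl and renders its output, the same way a linked CSS file restyles a page.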

    The equivalent Javascript functionality's documentation[0] starts going on about classes and callbacks and shadow DOM, which is by contrast not at all approachable for someone who just wants to make a web page. Obviously Javascript is necessary if you want to make a web application, but those are incredibly rare, and it's expected that you'll need a programmer if you need to make an application.

    Part of the death of the open web is that the companies that control the web's direction don't care about empowering individuals to do simple things in a simple way without their involvement. Since there's no simple, open way to make your own page that people can subscribe to (RSS support having been removed from browsers instead of expanded upon for e.g. a live home page), everyone needs to be on e.g. Facebook.

    It's the same with how they make it a pain to just copy your music onto your phone or backup your photos off of it, but instead you can pay them monthly for streaming and cloud storage.

    [0] https://developer.mozilla.org/en-US/docs/Web/API/Web_compone...

    • munificent 21 hours ago

      > XSLT is declarative and builds pretty naturally off of HTML for someone who doesn't know any programming languages.

      Have you ever met a single non-programmer who successfully picked up XSLT of their own volition and used it productively?

      I'd be willing to bet good money that the Venn diagram of users that fit the intersection of "authoring content for the web", "care about separating content from HTML", "comfortable with HTML", "not comfortable with JavaScript", and "able to ramp up on XSLT" is pretty small.

      At some point, we have to just decide "sorry, this use case is too marginal for every browser to maintain this complexity forever".

      • basscomm 18 hours ago

        > Have you ever met a single non-programmer who successfully picked up XSLT of their own volition and used it productively?

        Hi! I'm a non-programmer who picked up XSLT of my own volition and spent the last five-ish years using it to write a website. I even put all the code up on GitHub: https://github.com/zmodemorg/wyrm.org

        I spent a few weeks converting the site to use a static site generator, and there were a lot of things I could do in XSLT that I can't really do in the generator, which sucks. I'd revert the entire website in a heartbeat if I knew that XSLT support would actually stick around (in fact, that's one of the reasons I started with XSLT in the first place: I didn't think that support would go away any time soon, but here we are).

        • ndriscoll 18 hours ago

          For what it's worth, you can still run an XSL processor as a static generator. You of course lose some power, like using document() to include information for a logged-in user, but if it's a static site then that's fine.

          • basscomm 17 hours ago

            Users don't log in to my site.

            I eventually started using server-side XSL processing (https://nginx.org/en/docs/http/ngx_http_xslt_module.html) because I wanted my site to be viewable in text-based browsers too, but it uses the same XSLT library that the browsers use, and I don't know how long it's going to be around.
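
            The nginx side is only a couple of lines, roughly like this (paths hypothetical; the module has to be compiled in):

                location ~ \.xml$ {
                    xslt_stylesheet /etc/nginx/xsl/site.xsl;
                }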

      • matwood 20 hours ago

        > Have you ever met a single non-programmer who successfully picked up XSLT of their own volition and used it productively?

        Admittedly this was 20ish years ago, but I used to teach the business analysts XSLT so they could create/edit/format their own reports.

      At the time Crystal Reports had become crazy expensive, so I developed a system that would send the data to the browser as XML along with an XSLT to format the report. It provided basic interactivity and could be edited by people other than me. Also, if I remember correctly, at the time it only worked in IE because it was the only browser with the transform function.

      • a456463 20 hours ago

      I did. Just because the herd says it's dead doesn't mean XSLT is dead or "bad".

      • ndriscoll 21 hours ago

      I was such a non-programmer as a child, yes. Back when XSLT was new, if you read a library book on HTML and making web pages, it would tell you about things like separating content from styles and layout, yes. Things that blew my mind were that you could install Apache on your own computer and your desktop could be a website, or (as I learned many years later) that you could make a server application (or these days Javascript code) that calls a function based on the requested path instead of paths being 1:1 with files. By contrast, like I said, XSLT was just a natural extension of HTML for something that everyone who's written a couple of web pages wants to do.

        The fact that the web's new owners have decided that making web pages is too marginal a use-case for the Web Platform is my point.

        • ErroneousBosh 21 hours ago

          > it would tell you about things like separating content from styles and layout, yes.

          That's what CSS does.

          • antod 19 hours ago

            XSLT is really separating (XML) data from markup in the case of the web. More generally it's transforming between different XML formats.

          But in the case of docs (e.g. XSL-FO for DocBook, DITA, etc.) XSLT does actually separate content from styling.

          • ndriscoll 21 hours ago

            Yes that's why XSLT is such a natural fit when you learn about HTML+CSS. It's the same idea, but applied to HTML templates, which is something you immediately want when you hand-write HTML (e.g. navbars, headers, and footers that you can include on every page).

            • ErroneousBosh 21 hours ago

              Your problem here is that you're hand-writing HTML including all the templates. This wasn't a good way to do it 30 years ago and it's not a good way to do it now.

              See all these "static site generators" everyone's into these days? We used those in the mid-90s. They were called "Makefiles".

              • ndriscoll 21 hours ago

                Yeah because I was 11 and didn't know what a Makefile was. That's my point. I wanted to make web pages, and didn't know any programming. HTML is designed to be hand-written. You just write text, and when you want it to look different, you wrap it in a thing. When doing this, you'll quickly want to re-use snippets/invent your own tags. XSLT gives a solution to this without saying "okay let's back up and go learn how to use a command line now, and probably use an entirely different document format" (SSGs) or "okay let's back up and learn about functions, variables, classes, and callbacks, and maybe a compiler" (Javascript). It just says "when you want to make your own tags, extract them into a 'template' tag, then include your templates just like you include a CSS file for styles".
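
                For instance, a hypothetical custom <navbar/> tag takes one template:

                    <xsl:template match="navbar">
                      <ul class="nav">
                        <li><a href="/">Home</a></li>
                        <li><a href="/links">Links</a></li>
                      </ul>
                    </xsl:template>

                and every <navbar/> in your pages gets replaced with that markup.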

      • rendaw 21 hours ago

        I've seen non-programmers learn SQL, and SQL is far more inconsistent, complex, non-orthogonal, fragmented, footgunny, and user hostile than most programming languages.

        I'm not sure what I mean by this, WRT XSLT vs Javascript.

      • jeffbee 21 hours ago

        Funnily enough, XSLT is one of those things that I don't know very well but LLMs do. I find that I can ask Gemini to blurt out an XSLT implementation of my requirements given a snippet of example doc, and I have used this to good effect in some web scrapers/robots.

      • righthand 19 hours ago

        I did after reading about it. I immediately moved my personal site to it and got rid of the crap JS site I had.

    • dfabulich 21 hours ago

      XSL is a Turing-complete functional programming language, not a declarative language. When you xsl:apply-template, you're calling a function.

      Functional programming languages can often feel declarative. When XSL is doing trivial, functional transformations, and you keep your hands off of xsl:for-each, XSL feels declarative and doesn't feel that bad.

      The problem is: no clean API is perfectly shaped for UI, so you always wind up having to do arbitrary, non-trivial transformations with tricky uses of for-each to make the output HTML satisfy user requirements.

      XSL's "escape hatch" is to allow arbitrary Turing-complete transformations, with <xsl:variable>, <xsl:for-each>, and <xsl:if>. This makes easy transformations easy and hard transformations possible.

      XSL's escape hatch is always needed, but it's absolutely terrible, especially compared to JS, especially compared to modern frameworks. This is why JS remained popular, but XSL dwindled.

      > It gives a low-effort but fairly high power (especially considering its neglect) on-ramp to templated web pages with no build steps or special server software (e.g. PHP, Ruby) that you need to maintain. It's an extremely natural fit if you want to add new custom HTML elements.

      JavaScript is a much better low-effort high-power on-ramp to templated web pages with no build steps or server software. JavaScript is the natural fit for adding custom HTML elements (web components).

      Seriously, XSLT is worse than JavaScript in every way, even at the stuff that XSLT is best at. Performance/bloat? Worse. Security? MUCH worse. Learnability / language design? Unimaginably worse.

      EDIT: You edited your post, but the Custom Element API is for interactive client-side components. If you just want to transform some HTML on the page into other HTML as the page loads, you can use querySelectorAll, the jQuery way.
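
      Something like this sketch (the tag name is made up):

          // Replace each <fancy-quote> on the page with styled markup.
          document.querySelectorAll("fancy-quote").forEach((el) => {
            const q = document.createElement("blockquote");
            q.className = "fancy";
            q.append(...el.childNodes); // move the children across
            el.replaceWith(q);
          });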

      • Mikhail_Edoshin 19 hours ago

        Come on. With XSLT you write a rule and then write a fragment of the resulting document.

            <xsl:template match="abc">
              <def ghi="jkl"/>
            </xsl:template>
        
        This is one of the simplest ways to do things. With JavaScript you what? Call methods?

            CreateElement("def").setAttribute("def", "jkl")
        
        There is a ton of "template engines" (all strictly worse than XSLT); why do people keep writing them? Why did people invent JSX, with all its complicated machinery, if plain JavaScript is better?
      • James_K 21 hours ago

        > Security? MUCH worse.

        This is patently false. It is much better for security if you use one of the many memory-safe implementations of it. This is like saying “SSL is insecure because I use an implementation with bugs”. No, the technology is fine. It's your buggy implementation that's the problem.

        • ndriscoll 21 hours ago

          XSLT used as a pre-processor is obviously also a fundamentally better model for security because... it's used as a preprocessor. It cannot spy on you and exfiltrate information after page load because it's not running anymore (so you can't do voyeuristic stuff like capture user mouse movements or watch where they scroll on the page). It also doesn't really have the massive surface Javascript does for extracting information from the user's computer. It wasn't designed for that; it was designed to transform documents.

    • ErroneousBosh 21 hours ago

      > not at all approachable for someone who just wants to make a web page

      If someone wants to make a web page they need to learn HTML and CSS.

      Why would adding a fragile and little-used technology like XSLT help?

      • Mikhail_Edoshin 18 hours ago

        Because you do not want to create web pages, but to render some information in the form of web pages. And as you write that information you make distinctions unique to a) this information and b) your approach to it. And one of the best ways to do this is to come up with a custom set of XML tags. You write about chess? Fine: invent tags to describe games, positions, and moves. Or maybe a tutorial on Esperanto? Fine: invent a notation to highlight the lexical structure and the grammar. You can be as detailed as you want, and at the same time you can ignore anything you do not care about.

        And then you want to merely render this semantically rich document into HTML. This is where XSLT comes in.
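
        For the chess example, that might look like (tags invented on the spot):

            <game white="Morphy" black="Allies">
              <move n="1" w="e4" b="e5"/>
              <move n="2" w="Nf3" b="d6"/>
            </game>

        with one XSLT template per distinction you care about, e.g. a template matching "move" that renders a table row.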

      • basscomm 18 hours ago

        > Why would adding a fragile and little-used technology like XSLT help?

        A few years ago I bought a bunch of Skylanders for practically nothing when the toys-to-life fad faded away. To keep track of everything I made a quick-and-dirty XSLT script that sorted and organized the list of figures and formatted each one based on its 'element'. That would have been murderous to do in plain HTML and CSS: https://wyrm.org/inventory/skylanders.xml

        • dfabulich 8 hours ago

          It would have been murderous with just CSS, but it would have been trivial to do with JS, much easier than the hundreds of lines of XSL you wrote. https://wyrm.org/inventory/skylanders.xsl

          • basscomm 2 hours ago

            > but it would have been trivial to do with JS

            Maybe! How much JavaScript would I have to learn before I could come up with a 'trivial' solution?

            > the hundreds of lines of XSL you wrote.

            Those hundreds of lines are the same copy/pasted if statement with 5 different conditions. For each game, I create a table by alphabetizing the XML, going through the list searching for figures that match the game, and, each time I find one, going through the color list to find the color to use for the table row. There are 10 color choices per game, which means I repeated a 10-choice if statement 5 times.

            There's nothing difficult here, it's just verbose.
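
            The shape of it is roughly this, repeated once per game (names and colors illustrative, not the real ones):

                <xsl:choose>
                  <xsl:when test="element = 'Fire'">
                    <xsl:attribute name="bgcolor">#ffd9b3</xsl:attribute>
                  </xsl:when>
                  <xsl:when test="element = 'Water'">
                    <xsl:attribute name="bgcolor">#b3d9ff</xsl:attribute>
                  </xsl:when>
                  <!-- ...and so on for the other eight elements... -->
                </xsl:choose>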

    • spankalee 21 hours ago

      I'm a web components guy myself, but that's not the equivalent JavaScript functionality at all, as XSLT doesn't even have components.

      XSLT is a functional transform language. The equivalent JavaScript would be something like a registry of pure functions of Node -> Node with associated selectors, and a TreeWalker that walks the XML document, invokes matching functions, and emits the result into a new document.
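
      A rough sketch of that idea (plain recursion instead of a TreeWalker; the element names are hypothetical):

          // Registry: element name -> function producing the output node.
          const rules = {
            item:  () => document.createElement("article"),
            title: () => document.createElement("h2"),
          };

          function transform(node) {
            if (node.nodeType === Node.TEXT_NODE) return node.cloneNode();
            // Unmatched elements fall back to a neutral container.
            const make = rules[node.localName] ?? (() => document.createElement("div"));
            const out = make();
            for (const child of node.childNodes) {
              if (child.nodeType === Node.ELEMENT_NODE || child.nodeType === Node.TEXT_NODE) {
                out.append(transform(child));
              }
            }
            return out;
          }

          // e.g. document.body.append(transform(xmlDoc.documentElement));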

      Or you could consume the XML as data into a set of React functions.

    • dist-epoch 19 hours ago

      Nobody learned web programming by putting XSLT on top of XML.

      This is a fantasy world that does not exist.

      People used PHP, or a tool that created HTML (Dreamweaver), or a website builder, or maybe an LLM today.

  • Pet_Ant a day ago

    JavaScript is ever-evolving, which means you need to stick to one of the two big engines (WebKit or Firefox) and keep upgrading. XSLT hasn't changed in years. It's an actual standard instead of an evolving one.

    I know that other independent browsers that I used to use back in the day just gave up because the pace of divergence pushed by the major implementations meant that it wasn't feasible to keep up independently.

    I still miss Konqueror.

    • pitaj 20 hours ago

      JavaScript is backwards compatible. You can use an older standard supported by everything if you wish.

      • Pet_Ant 19 hours ago

        Really? Because I have an old iPad (4th gen?) that no longer works on many sites. If it were backwards compatible, they'd still function.

        • O4epegb 19 hours ago

          You are confusing backwards and forwards compatibility. Those sites may have added features that your iPad does not support, which is why they broke; if they had not added those, they might still work.

          However, JS is not 100% backwards compatible either. It is largely backwards compatible in most cases, but there are rare bug fixes or deprecated APIs that get removed and break old code, and this is not even JS itself, it's more the web/engine standards.

        • demurgos 19 hours ago

          You are talking about forward compatibility.

          JS is backwards compatible: new engines support code using old features.

          JS is not forward compatible: old engines don't support code using new features.

          Regarding your iPad woes, the problem is not the engine but websites breaking compat with it.

          The distinction matters as it means that once a website is published it will keep working. Usually the only way to break an existing website is to publish a new version. The XSLT situation is noteworthy as it's an exception to this rule.

  • skobes 21 hours ago

    Your link is just the abstract; I had to hunt for the full talk:

    https://www.youtube.com/watch?v=U1kc7fcF5Ao

    But it is quite interesting, and learning about the security problems of the document() function in particular (described at 19:40-25:38) made me more convinced that removing XSLT is a good decision.

  • kuschku 20 hours ago

    > Actually, you can make an RSS feed user-browsable by using JavaScript instead

    Say I have an XML document that uses XSLT; how do I modify it to apply your suggestion?

    I've previously suggested the XML stylesheet tag should allow

        <?xml-stylesheet type="application/javascript" href="https://example.org/script.js"?>
    
    which would then allow the script to use the service-worker APIs to intercept and transform the request.

    But with the implementation available today, I see no way to provide a first-class XSLT-like experience with JS.

    • dfabulich 15 hours ago

      For RSS/Atom, you put this in the XML, right inside the document element (the <feed> element or the <rss> element):

          <script src="https://example.org/script.js"
             xmlns="http://www.w3.org/1999/xhtml"></script>
      
      You can also put CSS in there, like this:

          <style xmlns="http://www.w3.org/1999/xhtml">
            * { color: red; }
          </style>
      
      Or like this:

          <link href="https://example.org/style.css"
             rel="stylesheet" xmlns="http://www.w3.org/1999/xhtml"/>
  • ErroneousBosh 21 hours ago

    > In part 1 of this article, the author wrote, "XSLT is an essential companion to RSS, as it allows the feed itself to be perused in the browser"

    Wow. I can see the proposed scrapping of XSLT being a huge problem for all of the seven people who do this.

  • throw_m239339 17 hours ago

    > by using JavaScript instead

    I think you're entirely missing the point of RSS by saying that. RSS doesn't and shouldn't require JavaScript.

    Now, feeds could somehow be written in some bastard HTML5 directly, but please don't bring JavaScript into that debate.

    XSLT allows transforming an XML document into an HTML presentation without the need for JavaScript; that's its purpose.

thayne a day ago

I don't disagree that Google is killing the open web. But XSLT is a pretty weak argument for showing that. It is an extremely complicated feature that is very seldom used. I am very doubtful dropping support is some evil political decision. It is much more likely they just don't want to sink resources into maintaining something that is almost never used.

For the specific use case of showing RSS and Atom feeds in the browser, it seems like a better solution would be to have built-in support in the browser, rather than relying on the use of XSLT.

  • AlotOfReading 20 hours ago

    The sites that will be broken are disproportionately important, though: Congress.gov/govinfo.gov, weather.gov, europa.eu, plus dozens of library and university sites.

    Looking only at how many sites use a feature gives you an incomplete view. If a feature were only used by Wikipedia, it'd still be inappropriate to deprecate it with a breaking change and a short (1yr) migration window. You work with the important users to retire it and then start pulling the plug publicly to notify everyone you might have missed.

  • Fileformat 21 hours ago

    Of course built-in support for RSS would be better. But what are the chances of that happening?

    • thayne 20 hours ago

      Probably better than browser makers committing to maintaining an xslt library.

      • righthand 18 hours ago

        They didn’t have to maintain it. There was a simpler solution: switch to a library that wasn’t broken.

    • homebrewer 18 hours ago

      We already had it, both Firefox and the old Opera supported viewing (and subscribing to) RSS feeds.

  • wpm 10 hours ago

    I’m on mobile so it’s not easy for me to find the source, but Google’s own tracking stats showed more sites use XSLT than use WebUSB.

dpark 21 hours ago

This has nothing to do with the “open web”. I don’t know if the people saying this just don’t have a meaningful definition of what open means or what. “Open” doesn’t mean “supports everything anyone has ever shipped in a browser”. (Chrome should support Gopher, really? Gopher was literally never part of the World Wide Web.)

What’s happening is that Google (along with Mozilla and Safari) is changing the HTML spec to drop support for XSLT. If you want to argue that this is bad because it “breaks the web”, that’s fine, but it has nothing at all to do with whether the web is “open”. The open web means anyone can run a web server. Anyone can write a web site. Anyone can build their own compatible browser (hypothetically; this has become prohibitively expensive). It means anyone can use the tech, not that the tech includes everything possible.

If you want to complain about Google harming the open web, there are some real examples out there. Google Reader deprecation probably hurt RSS more than anything else. AMP was/is an attempt to give Google tighter control over more web traffic. Chrome extension changes were pushed through seemingly to give Google tighter control over ad blockers. Gemini in the search results is an attempt to keep Google users from ever actually clicking through to web sites for information.

XSLT in the browser has been dead for years. The reality is that no browser developer has cared about XSLT since 1.0. Don’t blame Google for the death of XSLT when XSLT 2.0 was standardized before Chrome was even released and no one else cared enough to implement it. The removal of XSLT doesn’t change the openness of the web, and the reality is that it breaks very little while eliminating a source of real security errors.

  • shadowgovt 20 hours ago

    > Google Reader deprecation probably hurt RSS more than anything else

    And, indeed, if the protocol was one killer app deprecation and removal away from being obsolete, the problem was the use case, not the protocol.

    (Personally, I don't think RSS is dead; it's very much alive in podcasting. What's dead is people consuming content from specific sites as a subscription model instead of getting most of their input slop-melanged in through their social media feeds; they don't care about the source of the info, they just want the info. I don't think that's something we fix with improved RSS support; it's a behavior issue looking for a better experience than Facebook, not for everyone to wake up one day and decide to install their own feed reader and stop browsing Facebook or Twitter or even Mastodon for links all day).

    • ndriscoll 19 hours ago

      It wasn't just one killer app deprecation/removal away. RSS was also integrated into browsers at one point, and then removed. You wouldn't need a social media feed if your browser home page already gave you your timeline, and if it were trivial for any web page to add a "subscribe" button. But instead of known, proven use-cases that have clear demand, we get Javascript APIs for niche stuff like flashing firmware onto USB devices.

  • righthand 15 hours ago

    Open web doesn’t mean WHATWG gets to decide what is and isn’t useful in the browser.

    > What’s happening is that Google (along with Mozilla and Safari) are changing the html spec to drop support for xslt. If you want to argue that this is bad because it “breaks the web”, that’s fine,

    Not only does it break the web, they are flat out lying about the reason they’re doing it. That is also very dangerous.

    You’re doing a lot of sideways handwaving to say killing off this specific technology is not killing the open web, but others are.

    XSLT is not a source of security errors, and this is the same disingenuous argument you made last time (please state if you work for any of these companies). Libxslt has security vulnerabilities, not XSLT itself. Furthermore, there are replacement processors they could contribute to and implement, and a myriad of other solutions, but they have chosen to kill it instead.

    That is killing the open web.

    • dpark 14 hours ago

      In the last thread about this, I tried to have a constructive conversation with you, and you jumped to ad hominem attacks multiple times and then when I tried to actually get clarity on what it means to be part of the “open web”, you explicitly said you didn’t want to engage anymore (and then continued your accusations elsewhere in the thread). Now you’ve chimed in here to essentially call me a paid shill and to repeat your baseless “killing the open web” soundbite.

      Your definition of “open web” appears to be “never deprecating a feature ever”. And it’s fine that you want browsers to support features forever. I don’t think that has anything to do with the open web though. Exactly like the author of this blog post, you believe things that were never even part of the “web”, such as gopher, should be supported in the name of an “open web”.

      > Not only does it not break the web, they are flat out lying about that being the reason they’re doing it.

      The library is known to have multiple security vulnerabilities. They have declared that it is not sustainable to maintain this dependency. And they have also declared that it’s not worth replacing it. I don’t see the lie in that. I don’t think anyone is claiming that they actually cannot support xslt. They are saying that it requires more investment to support, and the ROI is too low.

      I also clarified this exact point last time. You are willfully misunderstanding the messaging because acknowledging the engineering trade offs here would force you to consider that this isn’t just an issue of lazy developers or evil PMs as you also claimed.

      > please state if you work for any of these companies

      I work for Microsoft who I don’t believe has chimed in on this conversation, though if Chromium removes it, Edge presumably will too. I have no visibility into the Edge position on this feature, though.

Aurornis a day ago

I have yet to read an article complaining about XSLT deprecation from someone who can explain why they actually used it and why it’s important to them.

> I will keep using XSLT, and in fact will look for new opportunities to rely on it.

This is the closest I’ve seen, but it’s not an explanation of why it was important before the deprecation. It’s a declaration that they’re using it as an act of rebellion.

  • ndiddy 21 hours ago

    My guess is that a lot of the controversy is simply because this is one of the first times that a major web feature has been removed from the web standards. For the past 20+ years, people have grown to expect that any page they make will remain viewable indefinitely. It doesn't matter that most people don't like XSLT, or that barely any sites use it. Removing XSLT does break some websites and that violates their expectation, so they get mad at it reflexively.

    As someone who's interested in sustainable open source development, I also find the circumstances around the deprecation to be interesting and worth talking about. The XSLT implementation used by all the browsers is a 25 year old C library whose maintainer recently resigned due to having to constantly deal with security bugs reported by large companies who don't provide any financial contribution or meaningful assistance to the project. It seems like the browser vendors were fine with the status quo of having XSLT support as long as they didn't have to contribute any resources to it. As soon as that free maintenance went away and they were faced with either paying someone to continue maintenance or writing a new XSLT library in a safer language, they weren't willing to pay the market value for what it would cost to do this and decided to drop the feature instead.

    • rerdavies 11 hours ago

      Sounds like EVERYBODY agrees that there isn't sufficient market value, then. Even the original maintainer. And that is indeed why the feature is being dropped: insufficient market value. Happy happy happy!

  • jerf a day ago

    What a horrible technology to wrap around your neck for rebellion's sake. XSLT didn't succeed, because it's fundamentally terrible and was a bad idea from the very beginning.

    But I suppose forcing one's self to use XSLT just to spite Google would constitute its own punishment.

    • veeti 12 hours ago

      It has nothing to do with the specifics of the technology. As a consumer of online content, I don't care one bit if it is styled with XSLT or CSS (though as a developer my condolences are with the author, if they worked with XSLT).

      However, what I do care about is that it _remains viewable and usable_. Imagine if Microsoft Word one day decided you couldn't open .doc or .rtf files from the early 2000s. The browser vendors have decided that the web is now an application delivery platform where developers must polyfill backwards compatibility, past documents be damned.

      And just as the article drives the point home, it doesn't have to be this way. They could just provide the polyfill within the browser, negating any purported security issues with ancient XML libraries.

  • crazygringo a day ago

    Yeah, the idea that it's some kind of foundation of the "open web" is quite silly.

    I've used XSLT plenty for transforming XML data for enterprises but that's all backend stuff.

    Until this whole kerfuffle I never knew there was support for it in the browser in the first place. Nor, it seems, did most people.

    If there's some enterprise software that uses it to transform some XML that an API produces into something else client-side, relying on a polyfill seems perfectly reasonable. Or just move that data transformation to the back-end.

  • zekica a day ago

    I used it. It's an (ugly) functional programming language that can transform one XML into another - think of it as Lisp for XML processing but even less readable.

    It can work great when you have XML you want to present nicely in a browser by transforming it into XHTML while still serving the browser the original XML. One use I had was to show the contents of RSS/Atom feeds as a nice page in a browser.
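
    For anyone who hasn't seen it, the flavor looks like this (a trimmed sketch for an Atom feed, not a complete stylesheet):

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:atom="http://www.w3.org/2005/Atom">
          <xsl:template match="/atom:feed">
            <html><body>
              <h1><xsl:value-of select="atom:title"/></h1>
              <xsl:apply-templates select="atom:entry"/>
            </body></html>
          </xsl:template>
          <xsl:template match="atom:entry">
            <p><a href="{atom:link/@href}"><xsl:value-of select="atom:title"/></a></p>
          </xsl:template>
        </xsl:stylesheet>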

    • rwmj 20 hours ago

      I would just do this on the server side. You can even do it statically when generating the XML. In fact until all the stuff about XSLT in browsers appeared recently, I didn't even know that browsers could do it.

      • wizzwizz4 18 hours ago

        Converting the contents of an Atom feed into (X)HTML means it's no longer a valid Atom feed. The same is true for many other document formats, such as flattened ODF.

        • rerdavies 10 hours ago

          Is an XSLT page a valid Atom feed? Is it really so terrible to have two different pages, one for the human-readable version and one for the XML version?

    • fuzzzerd a day ago

      I have done the same thing with sitemap.xml.

  • Fileformat 21 hours ago

    Making RSS/Atom feeds friendly to new users is key for their adoption, and for the open web. XSLT is the best way to do that.

    I made a website to promote using XSLT for RSS/Atom feeds. Look at the before/after screenshots: which one will scare off a non-techie user?

    https://www.rss.style/

    • shadowgovt 20 hours ago

      RSS and Atom feeds are at this point a solution looking for a problem.

      I use RSS all the time... To keep up-to-date on podcasts. But for keeping up to date on news, people use social media. RSS isn't the missing piece of the puzzle for changing that, an app on top of RSS is. And in the absence of Reader, nothing has shown up to fill that role that can compete with just trading gossip on Facebook.

      • basscomm 18 hours ago

        > But for keeping up to date on news, people use social media. RSS isn't the missing piece of the puzzle for changing that, an app on top of RSS is. And in the absence of Reader, nothing has shown up to fill that role that can compete with just trading gossip on Facebook.

        I guess if you don't use social media or facebook you're out of luck?

        • shadowgovt 17 hours ago

          I don't see why. You can always subscribe to a newspaper. Or just use RSS and a subscription tool since it didn't just go away.

          What I'm saying, though, is that if you don't use social media at this point you're already an outlier (I am, it should be noted, using the term broadly: you are using social media. Right now. Hacker News is in the same category as Facebook, Twitter, Mastodon, et al. in this context: it's a place you go to get information instead of using a collection of RSS feeds, and I think the reason people do this instead of that may be instructive as to the ultimate fate of RSS for that use case).

          • basscomm 17 hours ago

            > You can always subscribe to a newspaper.

            The circulation for my local newspaper is so small that they now get printed at a press a hundred miles away and are shipped in every morning to the handful of subscribers who are left. I don't even know the last time I saw a physical newspaper in person.

            > Hacker News... it's a place you go to get information instead of using a collection of RSS feeds

            No, it's a place I go to _in addition_ to RSS feeds. An anonymous news aggregator with web forum attached isn't really social media. Maybe some people hang out here to socialize, but that's not a use case for me

            • shadowgovt 17 hours ago

              The relevant use case is you come here to see links people share and comment on them. That's sufficiently "social" in this context.

              Contrast that with the other use case you dabble in (the one that makes you an outlier): pulling content from specific sources via RSS (I'm going to assume sources generating original content, not themselves link aggregators, otherwise this topic is moot). Most people see that as redundant if they have access to something like HN, or Fark, or Reddit, or Facebook. RSS readers alone, in general, don't let you share your thoughts with other people reading the article, so it's not as popular a tool.

              • basscomm 16 hours ago

                > The relevant use case is you come here to see links people share and comment on them. That's sufficiently "social" in this context.

                Just having users submit links that other users can comment on doesn't make it social media. I can't follow particular users or topics, I can't leave myself a note about some user that I've had a positive or negative experience with, I can't ignore someone who I don't want to read, etc. Heck, usernames are so de-emphasized on this site that I almost always forget that they're there.

                • shadowgovt 16 hours ago

                  A rose by any other name. If you'd prefer, I'd have said

                  "But for keeping up to date on news, people use link aggregation boards where other users post links to stuff on the web and then talk to each other about them. RSS isn't the missing piece of the puzzle for changing that, an app on top of RSS is. And in the absence of Reader, nothing has shown up to fill that role that can compete with just trading gossip on Hacker News."

                  ... that would be the same point. RSS, by itself, is a protocol for finding out some site created new content, and is just not particularly compelling by itself for the average user when they can use "link aggregation boards where other users post links to stuff on the web and then talk to each other about them" instead.

                  • righthand 15 hours ago

                    Do you work for one of the companies involved in deprecating XSLT?

                    • shadowgovt 14 hours ago

                      I do not. Why do you ask?

          • righthand 15 hours ago

            > since it didn't just go away.

            But do you see how removing a feature from a major browser makes it seem like RSS did just go away, and how RSS will eventually go away?

            What a terribly disingenuous argument. Anyone not in line with big tech deserves to be pushed aside, eh?

            • shadowgovt 13 hours ago

              RSS hasn't gone anywhere. Every podcast my podcast player downloads is announced to it either via RSS or Atom feeds. It has just fallen by the wayside as the way people become aware of updates to websites with serial publication of content (in general: because most people get that information from peer-to-peer link sharing, like Facebook, Twitter, Mastodon, Fark, Reddit, Slashdot, or even this website).

              They're not even removing the ability for the browser to render XML. They're just removing an in-browser formatter for XML (a feature that can be supported by server-side rendering or client-side polyfill).

              • righthand 13 hours ago

                Yes, while the formats they've chosen, the ones directly aligned with their business, get first-class citizenship despite suffering many larger and well-known security issues. XML will be next, just wait.

                • shadowgovt 11 hours ago

                  What would that mean? XML is just text on the wire. If a browser stops supporting it... It's text on the wire. I slurp it in with JavaScript and parse it how I want.

                  ... Actually, that seems like a fine idea...
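
                  Roughly (a sketch; the feed URL is illustrative, and this would live in a module so top-level await works):

                      const response = await fetch('/feed.xml');
                      const doc = new DOMParser().parseFromString(
                        await response.text(), 'application/xml');
                      // walk `doc` and render whatever HTML you want from it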

    • cpill 17 hours ago

      Yes, but why??? You're on the website, and you have a link to the syndicated feed for the website you're on, and you want to make the feed look good in the browser... so they can click the link to the website _you are already on_??? The argument that you should be looking at the feed XML in the browser instead of the website is bonkers. Feeds are not meant to replace the website, because if they were, why have the website?!

      • Fileformat 16 hours ago

        But you are tech-savvy and know about RSS & feed readers and such like!

        Think about it from a non-technical user's perspective: they click on an RSS link and get a wall of XML text. What are they going to do? Back button and move on. How are they ever going to get introduced to RSS and feed readers and such like?

        I think a lot of feeds never get hit by a browser because there isn't a hyperlink to them. For example: HN has feeds, but no link in the HTML body, so I'm pretty confident they don't get browser hits. And no one who doesn't already know about feeds will ever use them.

      • kstrauser 17 hours ago

        I just checked and I’ve had 3 hits for my blog’s RSS feed from a legit-looking browser user agent string this year. Almost literally no one reads my site via RSS in the browser. Quite a few people fetch the feed from separate clients.

        I wouldn’t spend 5 minutes making that feed look pretty for browser users because no one will ever see it. I don’t know who these mythical visitors are who 1) know what RSS is and 2) want to look at it in Chrome or Safari or Firefox.

        • Fileformat 16 hours ago

          You are absolutely right!!! But...

          What about people who don't "1) Know what RSS is"???

          And what if you could make it friendly for them in 4 minutes? You could, by dropping in an XSLT file and adding a single line to the XML file, as below. I bet you could do it in 3 minutes.
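
          The single line is just a processing instruction at the top of the feed (the filename is illustrative):

              <?xml-stylesheet type="text/xsl" href="feed.xsl"?>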

  • roywashere 21 hours ago

    All browsers ever implemented was XSLT 1.0, from 1999. There were 2.0 and 3.0, for which there is an open-source Java-based implementation (Saxon), but these never made it into libxslt and/or browsers!

  • danwilsonthomas 19 hours ago

    Imagine you have users that want to view an XML document as a report of some kind. You can easily do this right now by having them upload a document and attaching a stylesheet to it. I do this to let people view after-game reports for a video game (Nebulous: Fleet Command). They come in as XML and I transform them to HTML. I do this all client-side using the browser support for XSLT and about 10 lines of JavaScript, because I don't want to pay for and run a server for file uploads. The XSLT support in the browser makes this truly trivial to do.

    Now this obviously isn't critical infrastructure, but it sucks getting stepped on and I'm getting stepped on by the removal of XSLT.
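
    For reference, the client side really is about that small with the standard XSLTProcessor API (a sketch; xmlText and xslText stand in for the uploaded report and the stylesheet source):

        const parse = (text) => new DOMParser().parseFromString(text, 'application/xml');
        const processor = new XSLTProcessor();
        processor.importStylesheet(parse(xslText));   // the report stylesheet
        const fragment = processor.transformToFragment(parse(xmlText), document);
        document.body.replaceChildren(fragment);      // show the rendered report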

  • basscomm 18 hours ago

    > I have yet to read an article complaining about XSLT deprecation from someone who can explain why they actually used it and why it’s important to them.

    I used it to develop a website because I'm not a programmer, but I still want to have some basic templates on my webpage without having to set up a dev environment or a static site generator. XML and XSLT extend HTML _just enough_ to let me do some fun things without me having to become a full-on programmer.

  • James_K 21 hours ago

    I use XSLT because I want my website to work for users with JavaScript disabled and I want to present my Atom feed link as an HTML document on a statically hosted site without breaking standards compliance. Hope this helps.

    • matthews3 21 hours ago

      Could you run XSLT as part of your build process, and serve the generated HTML?

      • kuschku 20 hours ago

        I have Arduinos with sensors providing their measurements as XML, with an external XSLT stylesheet to make them user-friendly. The Arduinos have 2 KB of RAM and 16 MIPS.

        Which build process are you talking about? Which XSLT library would you recommend for running on microcontrollers?

        • matthews3 19 hours ago

          > Which build process are you talking about?

          The one in the comment I replied to.

          • kuschku 19 hours ago

            Fair, but that shows the issue at hand, doesn't it? XSLT is a general solution, while most alternatives are relatively specific solutions.

            (Though I've written repeatedly about my preferred alternative to XSLT)

            • righthand 16 hours ago

              > (Though I've written repeatedly about my preferred alternative to XSLT)

              Link to example?

              • kuschku 14 hours ago

                I've previously suggested the XML stylesheet tag should allow

                    <?xml-stylesheet type="application/javascript" href="https://example.org/script.js"?>
                
                which would then allow the script to use the service-worker APIs to intercept and transform the request.

                • righthand 11 hours ago

                  Oh yes sorry I thought you meant you had a blog post or something on it.

      • Fileformat 16 hours ago

        That is not the point: I already have the blog's HTML pages. I want the RSS feed to be an RSS feed, not another version of the HTML.

        The XSLT view of the RSS feed exists so people (especially newcomers) aren't met with a wall of XML text. It should still be a valid XML feed.

        Plus it needs to work with static site generators.

      • bilog 21 hours ago

        XML source + XSLT can be considerably more compact than the resulting transformation, saving on hosting and bandwidth.

        • zetanor 20 hours ago

          The Internet saves a lot more on storage and bandwidth costs by not shipping an XSLT implementation with every browser than it does by allowing Joe's Blog to present XML as an index.

          • LtWorf 19 hours ago

            You redownload your browser every request‽

      • James_K 21 hours ago

        No, because then it would not be an Atom feed. Atom is a syndication format, the successor to RSS. I must provide users with a link to a valid Atom XML document, and I want them to see a web page when this link is clicked.

        This is why so many people find this objectionable. If you want to have a basic blog, you need some HTML documents and an RSS/Atom feed. The technologies required to do this are HTML for the documents and XSLT to format the feed. Google is now removing one of those technologies, which makes it essentially impossible to serve a truly static website.

        • ErroneousBosh 21 hours ago

          > Google is now removing one of those technologies, which makes it essentially impossible to serve a truly static website.

          How so? You're just generating static pages. Generate ones that work.

          • James_K 21 hours ago

            You cannot generate a valid RSS/Atom document which also renders as HTML.

            • shadowgovt 20 hours ago

              So put them on separate pages because they are separate protocols (HTML for the browser and XML for a feed reader), with a link on the HTML page to be copied and pasted into a feed reader.

              It really feels like the developer has over-constrained the problem to work with browsers as they are right now in this context.

              • kuschku 20 hours ago

                > So put them on separate pages because they are separate protocols

                Would you also suggest I use separate URLs for HTTP/2 and HTTP/1.1? Maybe for a gzipped response vs a raw response?

                It's the same content, just supplied in a different format. It should be the same URL.

                • zzo38computer 19 hours ago

                  There are separate URLs for "https:" vs "http:", although they usually serve the same content when both are available (I have seen some where they don't), while compression (and some other stuff) is decided by headers. However, it might make sense to optionally include some of these things within the URL (within the authority section and/or scheme section somehow): compression, version of the internet, version of the protocol, certificate pinning, etc., delimited in a way that a program which understands this convention can easily ignore. However, that might make a mess.

                  I had also defined a "hashed:" scheme for specifying the hash of the file that is referenced by the URL, and this is a scheme that includes another URL. (The "jar:" scheme is another one that also includes another URL, and is used for referencing files within a ZIP archive.)

                • ErroneousBosh 17 hours ago

                  > Would you also suggest I use separate URLs for HTTP/2 and HTTP/1.1? Maybe for a gzipped response vs a raw response?

                  The difference between HTTP/2 and HTTP/1.1 is exactly like the difference between plugging your PC in with a green cable or a red cable. The client neither knows nor cares.

                  > It's the same content, just supplied in a different format. It should be the same URL.

                  So what do I put as the URL of an MP3 and an Ogg of the same song? It's the same content, just supplied in a different format.

                  • kuschku 17 hours ago

                    > The difference between HTTP/2 and HTTP/1.1 is exactly like the difference between plugging your PC in with a green cable or a red cable. The client neither knows nor cares.

                    Just like protocol negotiation, HTTP has format negotiation and XML postprocessing for exactly the same reason.

                    > So what do I put as the URL of an MP3 and an Ogg of the same song? It's the same content, just supplied in a different format

                    Whatever you want? If I access example.org/example.png, most websites will return a webp or avif instead if my browser supports it.

                    Similarly, it makes sense to return an XML with XSLT for most browsers and a degraded experience with just a simple text file for legacy browsers such as NCSA Mosaic or 2027's Google Chrome.

                    • ErroneousBosh 6 hours ago

                      > Whatever you want? If I access example.org/example.png, most websites will return a webp or avif instead if my browser supports it.

                      So, you need a lot of cleverness on the server to detect which format the client needs, and return the correct thing?

                      Kind of not the same situation as emitting an XML file and a chunk of XSLT with it, really.

                      If you're going to make the server clever, why not just make the server clever enough to return either an RSS feed or an HTML page depending on what it guesses the client wants?

                      • kuschku 4 hours ago

                        > If you're going to make the server clever, why not just make the server clever enough to return either an RSS feed or an HTML page depending on what it guesses the client wants?

                        There's no cleverness involved; this is an inherent part of the HTTP protocol. But Chrome still advertises full support for XHTML and XML:

                            Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
                        
                        But importantly, for audio/video files, that's still just serving static files, which is very different from having to dynamically generate different files.

                • shadowgovt 16 hours ago

                  Then the server should supply the right format based on the `Accept` header, be it `application/rss+xml` or `application/atom+xml` or `text/xml` or `text/html`.

                  Even cheaper than shipping the client an XML and an XSLT is just shipping them the HTML the XSLT would output in the first place.
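
                  A sketch of that negotiation in plain Node (paths and port illustrative; browsers send text/html in their Accept header, most feed readers don't):

                      import { createServer } from 'node:http';
                      import { readFileSync } from 'node:fs';

                      createServer((req, res) => {
                        const accept = req.headers.accept ?? '';
                        if (accept.includes('text/html')) {
                          // pre-rendered at build time by the same XSLT
                          res.writeHead(200, { 'Content-Type': 'text/html' });
                          res.end(readFileSync('feed.html'));
                        } else {
                          res.writeHead(200, { 'Content-Type': 'application/atom+xml' });
                          res.end(readFileSync('feed.xml'));
                        }
                      }).listen(8080);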

                  • kuschku 14 hours ago

                    That's not exactly cheap on an Arduino Uno 3 with 2 KB of RAM.

                    But regardless, someone suggested including a script tag with an XHTML xmlns as an alternative, which should work well enough (though it's not ideal).

                    • ErroneousBosh 3 hours ago

                      How many people out of the world's nearly eight billion population, would you estimate, are attempting to host their blog including HTML posts and RSS feeds on an Arduino?

                      • kuschku 3 hours ago

                        A lot of IoT devices use this strategy, actually. A lot. Significantly more than are using e.g. WebUSB.

                        Nonetheless, by that same argument you could just kill HN off. A lot of projects have a benefit that far outweighs their raw usage numbers.

        • gldrk 20 hours ago

          >I must provide users with a link to a valid Atom XML document, and I want them to see a web page when this link is clicked.

          Do RSS readers and browsers send the same Accept header?

    • cpill 17 hours ago

      Yeah, but WHY? If they are on the website, why would they want to look at the feed for the website, on the website, in the browser, instead of just looking at the website? If the feed is so amazing, why have the website in the first place? Oh yeah, you need something to make the feed of :D

      • Fileformat 16 hours ago

        I don't want the feed to look amazing. I just don't want to present a wall of XML text to non-technical users who don't know what an RSS feed is!

  • 6510 20 hours ago

    If you have a lot of XML data and need a UI that does complex operations that scream XPath, it would be rather spectacular if it could be done without much of a back end, in the browser, without JS.

    I'm not good enough with XSLT to know if it is worth creating the problem that fits the solution.

andsoitis a day ago

I don’t know. The author makes some arguments I could entertain and get behind, but they also enumerate the immense complexity that they want web browsers to support (incl. Gopher).

Whether or not Google deprecating XSLT is a “political” decision (in the author’s words), I don’t know for sure, but I can imagine running the Chrome project and steering for more simplicity.

  • coldpie a day ago

    The drama around the XSLT stuff is ridiculous. It's a dead format that no one uses[1], no one will miss, no one wants to maintain, and that provides significant complexity and attack surface. It's unambiguously the right thing to do to remove it. No one who actually works in the web space disagrees.

    Yes, it's a problem that Chrome has too much market share, but XSLT's removal isn't a good demonstration of that.

    [1] Yes, I already know about your one European law example that you only found out exists because of this drama.

    • lunar_mycroft a day ago

      The fact that people didn't realize that a site used XSLT before the recent drama is meaningless. Even as a developer, I don't know how most of the sites I visit work under the hood. Unless I have a reason to go poking around, I would probably never know whether a site used react, solid, svelte, or jquery.

      But it ultimately doesn't matter either way. A major selling point/part of the "contract" the web platform has with web developers is backwards compatibility. If you make a web site which only relies on web standards (i.e. not vendor specific features or 3rd party plugins), you can/could expect it to keep working forever. Browser makers choosing to break that "contract" is bad for the internet regardless of how popular XSLT is.

      Oh, and as the linked article points out, the attack surface concerns are obviously bad faith. The polyfill means browser makers could choose to sandbox it in a way that would be no less robust than their existing JS runtime.

      • coldpie a day ago

        > Browser makers choosing to break that "contract" is bad for the internet regardless of how popular XSLT is.

        No, this is wrong.

        Maintaining XSLT support has a cost, both in providing an attack surface and in employee-hours just to keep it around. If it were not used at all, then removing it would be unquestionably good, as cost and attack surface would go down with no downside. Obviously it's not the case that it has zero usage, so it comes down to a cost-benefit question, which is where popularity comes in.

        • lunar_mycroft 21 hours ago

          I want to start out by noting that despite both the linked article and the very comment you're replying to pointing out that the security excuse is transparently bad faith, you still trotted it out, again.

          And no, it really isn't a cost-benefit question. Or if you'd prefer, the _indirect_ costs of breaking backwards compatibility are much higher than the _direct_ costs. As it stood, as a web developer you only needed to make sure that your code followed standards and it would continue to work. If the browser makers can decide to deprecate those standards, developers have to instead attempt to divine whether or not the features they want to use will remain popular (or rather, whether browser makers will continue to _think_ they're popular, which is very much not the same thing).

          • coldpie 21 hours ago

            > security excuse is transparently bad faith, you still trotted it out

            I don't see any evidence supporting your assertion of them acting in bad faith, so I didn't reply to the point. Sandboxes are not perfect, they don't transform insecure code into perfectly secure code. And as I've said, it's not only a security risk, it's also a maintenance cost: maintaining the integration, building the software, and testing it, is not free either.

            It's fine to disagree on the costs/benefits and where you draw the line on supporting the removal, but fundamentally it's just a cost-benefit question. I don't see anyone at Chrome acting in bad faith with regards to XSLT removal. The drama here is really overblown.

            > the _indirect_ costs of breaking backwards compatibility are much higher than the _direct_ cost ... If the browser makers can decide to deprecate those standards, developers have to instead attempt to divine whether or not the features they want to use will remain popular.

            This seems overly dramatic. It's a small streamlining of an important software, by removing an expensive feature with almost zero usage. No one actually cares about this feature, they just like screaming at Google. (To be fair, so do I! But you gotta pick your battles, and this particular argument is a dud.)

            • lunar_mycroft 19 hours ago

              > It's fine to disagree on the costs/benefits and where you draw the line on supporting the removal, but fundamentally it's just a cost-benefit question

              If browser makers had simply said that maintaining all the web standards was too much work and they were opting to deprecate parts of it, I'd likely still object, but I wouldn't be calling it bad faith. As it stands, however, they and their defenders continue to cite alleged security problems as one of, if not the, primary reason to remove XSLT. This alleged security justification is a lie. We know it's a lie because there exists a trivial way to almost completely remove the security burden XSLT places on browser maintainers without deprecating it, and the Chrome team is well aware of this option. There is no significant difference in security between "shipping an existing polyfill which implements XSLT inside the browser's sandbox instead of outside it" and "removing all support for XSLT", so security isn't the reason they're very deliberately choosing the latter over the former.

              > This seems overly dramatic. It's a small streamlining of an important software, by removing an expensive feature with almost zero usage

              This isn't a counterargument; you've just repeated your point that XSLT (allegedly) isn't sufficiently well used to justify maintaining it, ignoring the fact that said tradeoff being made by browser maintainers in the first place is the problem.

      • gspencley 21 hours ago

        > But it ultimately doesn't matter either way. A major selling point/part of the "contract" the web platform has with web developers is backwards compatibility.

        The fact that you put "contract" in quotes suggests that you know there really is no such thing.

        Backwards compatibility is a feature. One that needs to be actively valued, developed and maintained. It requires resources. There really is no "the web platform." We have web browsers, servers, client devices, telecommunications infrastructure - including routers and data centres, protocols... all produced and maintained by individual parties that are trying to achieve various degrees of interoperability between each other and all of which have their own priorities, values and interests.

        The fact that the Internet has been able to become what it is, despite the foundational technologies that it was built upon - none of which anticipated the usage requirements placed on their current versions - really ought to be labelled one of the wonders of the world.

        I learned to program in the early to mid 1990s. Back then, there was no "cloud", we didn't call anything a "web application" but I cut my teeth doing the 1990s equivalent of building online tools and "web apps." Because everything was self-hosted, the companies I worked for valued portability because there was customer demand. Standardization was sought as a way to streamline business efficiency. As a young developer, I came to value standardization for the benefits that it offered me as a developer.

        But back then, as well as today, if you looked at the very recent history of computing; you had big endian vs little endian CPUs to support, you had a dozen flavours of proprietary UNIX operating systems - each with their own vendor-lock-in features; while SQL was standard, every single RDBMS vendor had their own proprietary features that they were all too happy for you to use in order to try and lock consumers into their systems.

        It can be argued that part of what has made Microsoft Windows so popular throughout the ages is the tremendous amount of effort that Microsoft goes through to support backwards compatibility. But even despite that effort, backwards compatibility with applications built for earlier version of Windows can still be hit or miss.

        For better or worse, breaking changes are just part and parcel of computing. To try and impose some concept of a "contract" on the Internet to support backwards compatibility, even if you mean it purely figuratively, is a bit silly. The reason we have as much backwards compatibility as we do is largely historical and always driven by business goals and requirements, as dictated by customers. If only an extreme minority of "customers" require native xslt support in the web browser, to use today's example, it makes zero business sense to pour resources into maintaining it.

        • lunar_mycroft 20 hours ago

          > The fact that you put "contract" in quotes suggests that you know there really is no such thing.

          It's in quotes because people seem keen to remind everyone that there's no legal obligation on the part of the browser makers not to break backwards compatibility. The reasoning seems to be that if we can't sue google for a given action, that action must be fine and the people objecting to it must be wrong. I take a rather dim view of this line of reasoning.

          > The reason we have as much backwards compatibility as we do is largely historical and always driven by business goals and requirements, as dictated by customers.

          As you yourself pointed out, the web is a giant pile of cobbled-together technologies that all seemed like a good idea at the time. If breaking changes were an option, there is a _long_ list of potential deprecations to pick from which would greatly simplify development of both browsers and websites/apps. Further, new features/standards could be added with much less care, since if problems were found in those standards they could be removed/reworked. Despite those huge benefits, no such changes are or should be made, because the costs of breaking backwards compatibility are just that high. Maintaining the implied promise that software written for the web will continue to work is a business requirement, because it's crucial for the long-term health of the ecosystem.

    • basscomm 18 hours ago

      I've been running a small hobby site using XML and XSLT for the last five or so years, but Google refused to index it because Googlebot doesn't execute XSLT. I can't be the only one, but good luck Googling it

    • Analemma_ a day ago

      Another bit of ridiculousness is pinning the removal on Google. Removing XSLT was proposed by Mozilla and unanimously supported, with no objections, by the rest of the WHATWG. Go blame Mozilla if you want somebody to get mad at, or at least blame all the browser vendors equally. This has nothing to do with Chrome’s market share.

      • basscomm 17 hours ago

        Shouldn't the users of the Web also get a say? There's been a lot of blowback on this decision, so this isn't as cut and dried as it's being made out to be

      • troupo a day ago

        Google are the ones immediately springing into action. They only started collecting feedback on which sites may break after they had already pushed an "intent to remove" and prepared a PR to remove it from Chromium.

        • hn_throwaway_99 a day ago

          > Google are the ones immediately springing into action.

          You say that like it's a bad thing. The proposal was already accepted. The most useful way to get feedback about which sites would break is to actually make a build without XSLT support and see what breaks.

    • troupo a day ago

      > It's a dead format that no one uses[1],

      This has to be proven by Google (and other browser vendors), not by people coming up with examples. The guy pushing "intent to deprecate" didn't even know about the most popular current usage (displaying podcast RSS feeds) until after posting the issue and until after people started posting examples: https://github.com/whatwg/html/issues/11523#issuecomment-315...

      Meanwhile Google's own document says that's not how you approach deprecation: https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...

      Also, "no one uses it" is rich considering that XSLT's usage is 10x the usage of features Google has no trouble shoving into the browser and maintaining. Compare XSLT https://chromestatus.com/metrics/feature/timeline/popularity... with USB https://chromestatus.com/metrics/feature/timeline/popularity... or WebTransport: https://chromestatus.com/metrics/feature/timeline/popularity... or even MIDI (also supported by Firerox) https://chromestatus.com/metrics/feature/timeline/popularity....

      XSLT deprecation is a symptom of how browser vendors, and especially Google, couldn't give two shits about the stated purposes of the web.

      To quote Rich Harris from the time when Google rushed to remove alert/confirm: "the needs of users and authors (i.e. developers) should be treated as higher priority than those of implementors (i.e. browser vendors), yet the higher priority constituencies are at the mercy of the lower priority ones" https://dev.to/richharris/stay-alert-d

      • Aurornis a day ago

        > Also, "no one uses it" is rich considering that XSLT's usage is 10x the usage of features Google has no trouble shoving into the browser and maintaining. Compare XSLT https://chromestatus.com/metrics/feature/timeline/popularity... with …

        Comparing absolute usage of an old standard to newer niche features isn’t useful. The USB feature is niche, but very useful and helpful for pages setting up a device. I wouldn’t expect it to show up on a large percentage of page loads.

        XSLT was supposed to be a broad standard with applications beyond single setup pages. The fact that those two features are used similarly despite one supposedly being a broad standard and the other being a niche feature that only gets used in unique cases (device setup or debugging) is only supportive of deprecating XSLT, IMO

        • kstrauser a day ago

          Furthermore, you can’t polyfill USB support. It’s something that the browser itself must support if it’s going to be used at all, as by definition it can’t run entirely inside the browser.

          That’s not true for XSLT, except in the super-niche case of formatting RSS prettily via linking to XSLT like a stylesheet, and the intersection of “people who consume RSS” and “people who regularly consume it directly through the browser” has to be vanishingly small.

          • troupo 16 hours ago

            > Furthermore, you can’t polyfill USB support.

            You can't polyfill many things. Should we just dump everything into the browser? Well, Google certainly thinks so. But that makes the question about "but this feature is unused, why support it" moot.

            And Google has no intention to support a polyfill, or ship it with the browser. The same person who didn't even know that XSLT is used on podcast sites scribbled together some code, said "here, it's easy", and that's it.

            And the main metric they use for deprecations is the number of sites/page uses. So even that doesn't work in favor of all the hardware APIs (and a few hundred others) that Google just shoved into the browser.

            At least there's consensus on removing XSLT, right? But there are many, many objections about USB, HID, etc. And still that doesn't stop Google from developing, shipping and maintaining them.

            Basically, the entire discussion around XSLT struck a nerve partly because all of the arguments can immediately be applied to any number of APIs that browsers, and especially Chrome, have no trouble shipping. And that comes on top of the mismanaged disaster that was the attempt to remove alert/confirm several years ago (also, "used on few sites", "security risk", "simpler code", "full browser consensus" etc.)

            • kstrauser 15 hours ago

              The distinction in my mind is that if a browser doesn’t ship with XSLT, then devs have to go through the hassle of adding support for it themselves, but if a browser doesn’t support a device driver, it’s completely impossible for devs to do that themselves.

              Without built-in support, XSLT is inconvenient. Without built-in support, things like WebUSB cannot possibly exist.

              That’s why I think they can’t be compared directly.

              • wpm 10 hours ago

                What? If the browser doesn’t support directly accessing USB hardware, it’s impossible to write a driver for it?

                Someone should tell the 25+ year old USB devices I use that their drivers are actually impossible.

                • kstrauser 9 hours ago

                  The context of this thread was “in the browser”. For example, I use a web page to configure my Meshtastic radios which are connected to my laptop via USB. If the browser did not provide an API for web pages to talk to USB devices, no amount of clever JS programming would make it possible for that radio config page to work.

        • troupo a day ago

          > Comparing absolute usage of an old standard to newer niche features isn’t useful. The USB feature is niche, but very useful and helpful for pages

          So, if XSLT sees 10x the usage of USB, we can consider it a "niche technology that is 10x as useful as USB".

          > The fact that those two features are used similarly

          You mean USB is used on 10x fewer pages than XSLT, despite HN telling me every time that it is an absolutely essential technology for PWAs or something.

      • coldpie 21 hours ago

        > This has to be proven by Google (and other browser vendors), not by people coming up with examples

        What, to you, would constitute sufficient proof? Is it feasible to gather the evidence your suggestion would require?

        • troupo 16 hours ago

          > What, to you, would constitute sufficient proof? Is it feasible to gather the evidence your suggestion would require?

          Let me quote from my comment, again:

          --- start quote ---

          The guy pushing "intent to deprecate" didn't even know about the most popular current usage (displaying podcast RSS feeds) until after posting the issue and until after people started posting examples

          --- end quote ---

          I would like to see more evidence than "we couldn't care less, remove it" before a consensus on removal, before an "intent to deprecate" and before opening a PR to Chrome removing the feature.

          Funnily enough, even the "browser consensus" looks like this: "WebKit is cautiously supportive. We'd probably wait for one implementation to fully remove support": https://github.com/whatwg/html/issues/11523#issuecomment-314...

          BTW. Literally the only "evidence" originally presented was "nearly 100% of sites use JS, while 1/10000 of those use XSLT.": https://github.com/whatwg/html/issues/11523#issuecomment-315... which was immediately called into question: https://github.com/whatwg/html/issues/11523#issuecomment-315... and https://github.com/whatwg/html/issues/11523#issuecomment-315... and that's before we account for google's own docs saying they have a blind spot in the enterprise/corporate setting where people suspect the usage may be higher.

          Also, as I say. I think the main issue isn't XSLT itself. XSLT is a symptom.

  • PaulHoule a day ago

    The case for JPEG XL is much better than that for XSLT. On the other hand, people who program in C will always be a little terrified of XML and everything around it since the parsing code will be complex and vulnerable.

    • pcleague a day ago

      Having a background in C/C++, that was the problem I ran into when I had to learn XSLT at a translation company that used it to style documents across multiple formats. The upside of using XML was that you could store semantically rich info in the tags for the translators and designers. The downside, of course, with all the metadata, was that the files could be really large, and the XSLT was usually programmed specifically for that particular document and so verbose that a template might only be used a couple of times.

      • PaulHoule a day ago

        XSLT is really strange in that it's not really what people think it is. It's really a pattern-matching and production rules system right out of the golden age of AI but people think it is just an overcomplicated Jinja2 or handlebars.

  • zzo38computer 18 hours ago

    If you really want to improve the simplicity, there are better ways to do so than excluding Gopher.

    (Also, they could make XSLT (and many other things that are built in) into an extension instead, therefore making the core system simpler.)

    • joshuamorton 15 hours ago

      > (Also, they could make XSLT (and many other things that are built-in) into an extension instead, therefore making the core system more simpler.)

      This appears to be what they are doing, in fact!

  • ablob a day ago

    "Steering for more simplicity" would be a political decision. Keeping it is also a political decision.

    Removing a feature that is used, while possibly making Chrome more "simple", also forces all the users of that feature to react to it, lest their efforts be lost to incompatibility. There is no way this cannot be a political decision, given that either way one side will have to cope with the downsides of whatever is (or isn't) done.

    PS: I don't know how much the feature is actually used, but my rationale should apply to any X where X is a feature considered to be pruned.

    • crazygringo a day ago

      No, the idea is that "political decision" is used in opposition to a decision based on rational tradeoffs.

      If there isn't enough usage of a feature to justify prioritizing engineering hours for it over other features, and it's removed on that basis, that's just a regular business-as-usual decision. Nothing "political" about it. It's straightforward cost-benefit.

      However, if the decision is based on factors beyond simple cost-benefit -- maintaining or removing a feature because it makes some influential group happy, or because it's part of a larger strategic plan to help or harm something else -- then we call that a political decision.

      That's how the term "political decision" is used in this kind of context, and what it means.

      • troupo a day ago

        > If there isn't enough usage of a feature to justify prioritizing engineering hours to it instead of other features, so it's removed, that's just a regular business-as-usual decision. Nothing "political" about it. It's straightforward cost-benefit.

        Then why is Google actively shoving multiple hardware APIs into the browser (against the objections of other vendors) if their usage is 10x less than that of XSLT?

        They have no trouble finding the resources to develop and maintain those.

        • crazygringo a day ago

          You have to keep developing new things to see what proves useful in the long-run.

          When you have something that's been around for a long time and still shows virtually no usage, it's fine to pull the plug. It's a kind of evolution. You can kill things that are proven to be unpopular, while building things and giving them the time to see if they become popular.

          That's what product feature iteration is.

        • Attrecomet 20 hours ago

          WebSerial and WebUSB are the best thing to happen to browsers since sliced bread. Just because you can't see why it's amazing that users no longer need to give some random, badly supported driver SYSTEM/root privileges to run their specialized hardware -- encompassing hobbyist, educational, and professional uses -- doesn't mean it's not obviously useful. Mozilla's stance on keeping it out of Firefox will just harm their market share in these areas, education probably being where it hurts most.

          From what I gather here, XSLT's functionality OTOH is easily replaced and, unlike the useful hardware support you're raging against, is a behemoth to support.

    • tracker1 a day ago

      I would argue that FTP and Gopher were far more broadly used in browsers than XSLT ever was... but they still removed them. They also likely didn't present nearly the support burden that XSLT does, either.

  • ForHackernews 21 hours ago

    The company that invented "Web Bluetooth" doesn't have a leg to stand on whining about "immense complexity" in having to maintain old stable features in their browser implementation.

charcircuit 21 hours ago

>Mozilla bent over to Google's pressure to kill off RSS by removing the “Live Bookmarks” features from the browser

They both were just responding to similar market demands because end users didn't want to use RSS. Users want to use social media instead.

>This is a trillion-dollar ad company who has been actively destroying the open web for over a decade

Google has both done more for and invested more into progressing the open web than anyone else.

>The WHATWG aim is to turn the Web into an application delivery platform

This is what web developers want, and browsers are reacting to the natural demands of developers, who are reacting to the demands of users. It was an evolutionary process that got it to that state.

>but with their dependency on the Blink rendering engine, controlled by Google, they won't be able to do anything but cave

Blink is open source and modular. Maintaining a fork is much less effort than the alternative of maintaining a different browser engine.

  • Fileformat 21 hours ago

    I think that "market demands" is a bit of a misnomer. RSS was (and remains) too tech-y for the mainstream.

    If browser vendors had made it easy for mainstream users, would there have been as much "market demand"?

    Between killing off Google Reader and failing to support RSS/Atom, Google handed social media to Facebook et al.

    • glenstein 20 hours ago

      Exactly. Those changes were, I believe, made at the time to create space for Google Plus (which, in an alternate reality with some different choices and different execution, could very well have been a relevant entrant in the social media space).

      It involved driving a stake through the heart of Google Reader, perhaps the most widely used RSS reader on the planet, with ripple effects that led to the de-emphasis of RSS across the internet. Starting the historical timeline after those choices and summarizing it as an absence of market demand overlooks the fact that intentional choices were made on this front to roll RSS back rather than to emphasize it and make it accessible.

      • charcircuit 20 hours ago

        The writing was already on the wall by the time Google Reader shutdown.

        >usage of Google Reader has declined

        https://googlereader.blogspot.com/2013/03/powering-down-goog...

        • glenstein 19 hours ago

          I would respectfully disagree in the following sense: I think the choice to shut down Google Reader and deprioritize RSS across the Google ecosystem (including the browser) did more to impact the trajectory of RSS than whatever was already in motion prior to the Reader shutdown.

          And the same is true in the other direction: I want RSS to be a success, but that would hinge on affirmative choices by major actors in the space to sustain it.

  • glenstein 21 hours ago

    >Google has both done more for and invested more into progressing the open web than anyone else.

    One could also make that case about Microsoft with Microsoft Office in the '90s. Embrace, extend, extinguish always involves being a contributor in the beginning.

    >Blink is open source and modular. Maintaining a fork is much less effort than the alternative of maintaining a different browser engine.

    Yeah and winning Asia Physical 100 is easier than winning a World's Strongest Man competition, and standing in a frying pan is preferable to jumping in a fire.

    I'm baffled by appeals to the open source nature of Blink and Chromium to suggest that they're positive indicators of an open web that any random Joe could jump in and participate in. That's only the case if you're capable of the monumental weightlifting that comes with the task.

  • gbalduzzi 21 hours ago

    I agree with everything, but just to be clear:

    > This is what web developers want

    I don't think it is what web developers want, it is what customers expect.

    Of course there are plenty of situation where the page is totally bloated and could be much leaner, but the overall trend to build web applications instead of web pages is dictated by user expectations and, as a consequence, requirements.

    • LtWorf 20 hours ago

      Users say "the page shall not load in less than 15 seconds and shall not use less than 5% of my monthly dataplan"?

      Odd… are these people with us?

  • carlosjobim 21 hours ago

    > They both were just responding to similar market demands because end users didn't want to use RSS. Users want to use social media instead.

    How does that become a market demand to remove RSS? There are tons of features within browsers which most users don't use. But they do no harm staying there.

wryoak a day ago

I think imma convert my blog to XML/XSLT. Nobody reads it anyway, but now I’ll be able to blame my lack of audience on chrome.
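
For anyone who wants to try the same thing, the whole browser-side trick is one processing instruction (file names here are made up):

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="blog.xsl"?>
    <!-- The browser fetches blog.xsl, runs the transform, and renders
         the result instead of this raw XML. -->
    <posts>
      <post date="2025-11-01">
        <title>Why nobody reads this</title>
        <body>Definitely Chrome's fault.</body>
      </post>
    </posts>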

et1337 21 hours ago

I’m no Google fan, but deprecating XSLT is a rare opportunity to shrink the surface area of the web’s “API” without upsetting too many people. It would be one less thing for independent browsers like Ladybird to worry about. Thus actually weakening Google’s chokehold on the browser market.

  • basscomm 17 hours ago

    > but deprecating XSLT is a rare opportunity to shrink the surface area of the web’s “API” without upsetting too many people

    There's a lot of back and forth on every discussion about XSLT removal. I don't know if I would categorize that as 'without upsetting too many people'

    • kstrauser 16 hours ago

      We are largely the nerds that other nerds picked on for being too nerdy. I’d bet that a hugely disproportionate share of all the people in the world who care about this subject at all are here in these conversations.

      • wpm 10 hours ago

        Actual normies don’t think of the Internet at all. They open Facebook The App on their iPads and smartphones and that’s the internet for them.

        Passionate nerds giving a shit can build a far more rosy world than whatever that represents, so I don’t see why anyone should give a damn if this happens to be somewhat niche.

gwbas1c 21 hours ago

For the past 10-15 years, every time I look at web standards, it always feels like someone is trying to make browsers support their specific niche use case.

Seems like removing XSLT (and offering a polyfill replacement) is just a move in the direction of stopping applications from pushing their complexity into the browser.

rldjbpin 4 hours ago

the op keeps highlighting how these actions affect rss feeds, including references to previous instances such as:

> when Mozilla bent over to Google's pressure to kill off RSS by removing the “Live Bookmarks” features from the browser

if hypothetically the browsers stop supporting the format, nothing stops dedicated rss/atom/json feed readers from working as normal. might be my myopic point of view, but most users who still use the standard have predominantly used this approach since google reader days.

dang 20 hours ago

Prequel:

Google is killing the open web - https://news.ycombinator.com/item?id=44949857 - Aug 2025 (181 comments)

Also related. Others?

XSLT RIP - https://news.ycombinator.com/item?id=45873434 - Nov 2025 (459 comments)

Removing XSLT for a more secure browser - https://news.ycombinator.com/item?id=45823059 - Nov 2025 (337 comments)

Intent to Deprecate and Remove XSLT - https://news.ycombinator.com/item?id=45779261 - Nov 2025 (149 comments)

XSLT removal will break multiple government and regulatory sites - https://news.ycombinator.com/item?id=44987346 - Aug 2025 (146 comments)

Google did not unilaterally decide to kill XSLT - https://news.ycombinator.com/item?id=44987239 - Aug 2025 (128 comments)

"Remove mentions of XSLT from the html spec" - https://news.ycombinator.com/item?id=44952185 - Aug 2025 (535 comments)

Should we remove XSLT from the web platform? - https://news.ycombinator.com/item?id=44909599 - Aug 2025 (96 comments)

jamesbelchamber a day ago

Do the up-and-coming new browsers/engines (Servo, Ladybird.. others?) plan to support XSLT? If they do already, do they want to remove it?

  • righthand 19 hours ago

    Yes they are going to support it because there are modern libraries that do.

pmdr 19 hours ago

Google is just one of the companies killing the open web. None of them will say it outright, but they'll just scrounge up enough "security" reasons for their decisions to seem palatable, even to the HN crowd.

They're just turning up the heat, even more so since AI became a thing.

pjmlp a day ago

It is Chrome OS Platform nowadays, powered by Chrome market share, and helped by everyone shipping Electron garbage.

spankalee a day ago

This page makes some wild claims, like Google wants to deprecate MathML, even though it basically just landed. Yeah, the Chrome team wasn't prioritizing the work and it came through Igalia, but the best time for Chrome to kill MathML would have been before it was actually usable on the web.

The post also fails to mention that all browsers want to remove XSLT. The topic was brought up in several meetings by Firefox reps. It's not a Google conspiracy.

I also see that the site is written in XHTML, and I think the author must just really love XML and doesn't realize that most browser maintainers think XHTML was a mistake and a failure. Being strict on input and failing to render anything on an error is antithetical to the "user agent" philosophy, which says the browser should try to render something useful to the user anyway. Forgiving HTML is just better suited to the messy web. I bet this fuels some of their anger here.

  • kstrauser a day ago

    I was all in on the concept of XHTML back in the day because it seemed obviously superior to chaotic, messy HTML. Nothing got me off that bandwagon as effectively as me converting a web app to emit pristine, validated XHTML and learning that no 2 browsers could process it the same way. Forget pixel-perfect layout and all that jazz. I couldn’t even get them to display the whole page reliably.

  • zzo38computer 17 hours ago

    XHTML does have some advantages compared with ordinary HTML, such as the parsing being more consistent, since the file will specify where literal text is used and which commands are or are not a block that is expected to contain other things.

    (It could still try to render in case of an error, but display the error message as well, perhaps.)

    • spankalee 16 hours ago

      HTML parsing is specified, including what to do for various errors, and very consistent across browsers. XML parsing may be more regular, but that's not really an advantage to users in any way, while HTML's resiliency is.
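
      A quick way to see the difference from the console; a sketch only, with deliberately broken markup:

        const parser = new DOMParser();

        // HTML never fails to parse; error recovery is fully specified,
        // so every browser builds the same tree from this broken input.
        const html = parser.parseFromString('<p>unclosed <b>markup', 'text/html');
        console.log(html.body.innerHTML); // "<p>unclosed <b>markup</b></p>"

        // XML parsing is draconian: the same kind of input produces an
        // error document instead of a recovered tree.
        const xml = parser.parseFromString('<p>unclosed <b>markup', 'application/xml');
        console.log(xml.documentElement.nodeName); // typically "parsererror";
                                                   // exact shape varies by browser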

yegle a day ago

Isn't the decision made by all the browser vendors (including Apple and Mozilla)?

  • etchalon 21 hours ago

    They're obviously in on it. /s

apeters a day ago

The day will come when DRM is used to protect the whole HTTP body.

  • silon42 21 hours ago

    Cutting us Linux users off the Web.

    • doublerabbit 21 hours ago

      Probably a good thing. Allows us to use it as an opportunity to make a new "web" without the mess of HTTP.

downsplat 13 hours ago

I'm sure there are plenty of problems with the open web, and that Google is not entirely a stranger to them... but removing an ancient language that basically failed to get traction is not one of them. No matter how elegant and advanced a certain class of nostalgic XML programmers find it.

And no, XSLT doesn't have much to do with how much RSS thrives or not. RSS is basically consumed by RSS reader backends, not directly by users on their browser.

One of the web platform's problems is that it accumulates untold cruft from every failed experiment. The entire XHTML exercise turned out to be an expensive mistake, but we can't remove that because too many pages depend on it, and it ended up in a whole lot of places, including the EPUB definition. But at least XSLT could get removed. Yay for that.

koakuma-chan 21 hours ago

I didn't know XSLT existed before this drama.

  • righthand 21 hours ago

    That’s because they didn’t want you to know about it. Hence letting it languish for 20 years and 2 major versions. The players doing this have been intentionally doing it for a few decades.

altmind a day ago

Do you remember that Chrome lost FTP support recently? The protocol was widely used and simple enough.

  • ErroneousBosh 21 hours ago

    "Was" is the key here. FTP has been obsolete for 20 years.

    • altmind 16 hours ago

      People are confusing obsolete with stable and feature complete.

      • ErroneousBosh 6 hours ago

        Right, but ftp is neither of these things.

        Can you describe any real-world application where ftp is the best solution for a problem anyone has right now?

        Consider the impact of an internet-exposed service that allows unauthenticated clients to remotely run code as root on your server.

  • chb a day ago

    Widely used? By whom? Devs who don't understand rsync or scp? Give me a practical scenario where a box is running FTP but not SSH.

    Edit: then account for the fact that this rare breed of content uploader doesn't use an FTP client... there's absolutely no reason to have FTP client code in a browser. It's an attack surface that is utterly unnecessary.

    • Demiurge a day ago

      Also, the protocol is pretty much a holdover from the earliest days, before encryption or complicated NATs. I remember using it with just telnet a few times. It's pretty cool, but absolutely nobody should be using FTP these days. I remember saying this back in 2005, and here we are 20 years later, with someone still lamenting dropping FTP support from a browser. I think we're decades overdue.

      • tracker1 a day ago

        I'm not lamenting it being removed... but I will say that it was probably many times more popular and widely used than XSLT is in the browser.

        • Demiurge a day ago

          I'm genuinely curious about that. But this says a lot more about how different these standards are. FTP really needed a good successor, which it never really got. So there is a strong use case, but a technical deficiency in the protocol. FTP was overcome by a myriad of web forms and web-drive sites, as a way to fill the gap. Still, resumable chunked uploads are really hard to implement from scratch, even now.

          Dropping XSLT is about something different. It's not bad in an obvious way. It's things like code complexity vs. applicability. It's definitely not as clear an argument to me, and I haven't touched XSLT in the past 20 years of web development, so I am not sure about the trade-offs.

      • grumbel 21 hours ago

        The problem wasn't that FTP got deprecated, but that we never got a proper successor. With FTP you could browse a directory tree like it was a real file system. With HTTP you can't; it has no concept of a directory. rsync is the closest thing to a real successor, but no web browser supports that either.

        • Demiurge 20 hours ago

          I agree that we should get a successor, and if FTP had been deprecated way back, I think we would have been more likely to get one. For just downloads, I have used Apache and nginx directory and file listing functionality with ease.

        • catdog 15 hours ago

          There would be WebDAV, which adds such features to HTTP, but that's also not supported by web browsers.

      • koakuma-chan 21 hours ago

        I worked for a company where I had to take screenshots every minute and upload them via FTP for review to get paid. If there were multiple screenshots with the same thing on the screen, there would be questions.

        • ErroneousBosh 21 hours ago

          Did you do any work besides taking screenshots and trying to figure out why FTP was broken this time?

          Your old job's broken workflow is not a good reason for keeping a fundamentally broken protocol that relies on allowing Remote Code Execution as a privileged user around.

          • koakuma-chan 17 hours ago

            I wrote a tool that took screenshots automatically and used FileZilla to upload :) And my comment is in support of removing FTP because it was lame.

            • ErroneousBosh 6 hours ago

              Aha, fair. Why the hell did they need you to do that?

              I used to work in a web dev job where when they brought in "time tracking" they wanted everyone to update a spreadsheet with what they were doing every half an hour. A spreadsheet, as literally a .xls, on a shared Windows drive. Everyone spent more time waiting for access to the spreadsheet than they did doing any work.

              This situation persisted for about two weeks, and the manager who came up with the genius idea lasted about two weeks longer than that, before we eventually told the other managers we were downing tools and leaving if he didn't either get "promoted to customer" or lay off the charlie during work hours.

    • altmind 16 hours ago

      People who navigate ftp storage maybe? Like Linux repos?

    • tracker1 a day ago

      Linking to an FTP file from a web page.

    • superkuh 15 hours ago

      By many scientific and educational organizations, for distribution of data. Places where the outcome matters and the way to achieve it doesn't. An FTP client in a browser is an incomparably smaller attack surface than, say, executing every random program sent to you by arbitrary third parties (JavaScript).

Evidlo 18 hours ago

Why can't the polyfill be enabled by default? It would fix the security issues and we wouldn't have to worry about breaking websites.

The JS polyfill also makes supporting modern XSLT feasible.
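
For pages that drive it from script, opting in could be as small as a feature check. A sketch only; the polyfill path below is a placeholder, not the real package:

    // If the native class is gone, pull in a script that defines a
    // compatible XSLTProcessor. The URL is hypothetical.
    if (typeof XSLTProcessor === 'undefined') {
      const s = document.createElement('script');
      s.src = '/vendor/xslt-polyfill.js'; // placeholder path
      document.head.appendChild(s);
    }

The harder case is declarative <?xml-stylesheet?> usage, where the XML document itself would have to pull the script in before anything renders.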

  • basscomm 17 hours ago

    I tried the JS polyfill on some of the basic XSLT that I wrote, and it only kinda worked. I can only imagine how it would fail on anything with real complexity.

overgard 20 hours ago

This guy seems pretty focused on XML-based standards, but I think the reason XML-based standards are dying is that people don't like working with XML.

  • rhdunn 16 hours ago

    The entire publishing and standards industries are built around XML (JATS and other XML formats). They use XSLT to generate HTML, PDF, EPUB, and other output formats.

    I don't see the XML-based SVG image format going anywhere.

    The ODF, EPUB, and other formats also use XML. Those are not dying.

tiffanyh 21 hours ago

Isn't Google one of the few (if not the only) major tech companies that would want to keep the open web alive, given their business model?

  • bilog 21 hours ago

    Their business model is selling ads. They don't give a rat's ass about the open web.

shadowgovt 20 hours ago

Okay, I was entertaining the author's position to a point, but I have to get off the train where they sing the praises of NPAPI.

Hey fam. I remember NPAPI. I wrote a very large NPAPI plugin.

The problem with NPAPI is that it lets people run arbitrary code as your browser. It was barely sandboxed. At best, it let any plugin do its level best to crash your browser session. At worst, it's a third-party binary blob you can't inspect running in the same thing you use to control your bank account.

NPAPI died for a good reason, and it has little to do with someone wanting to control your experience and everything to do with protecting you, the user, from bad actors. I think the author tips their hand a little too far here; the world they're envisioning is one where the elite hackers among us get to keep using the web and everyone else just gets owned by mechanisms they can't understand, and that's fine because it lets us be "wild" and "free" like we were in the nineties and early aughts again. Coupled with the author's downplaying of the security concerns in the XSLT lib, the author seems comfortable with the notion that security is less important than features, and I think there's a good reason that the major browser creators and maintainers disagree.

The author's dream, at the bottom, "a mesh of building blocks," is asking dozens upon dozens upon dozens of independent operators to put binary blobs in your browser outside the security sandbox. We stopped doing that for very, very good reasons.

  • zzo38computer 18 hours ago

    > put binary blobs in your browser outside the security sandbox

    There are reasons to do this sometimes, but usually it would be better to put them inside of the security sandbox (if the security sandbox can be designed in a good way).

    The user (or system administrator) could manually install and configure any native code extensions (without needing to recompile the entire browser), but sandboxed VM codes would also be available and would be used for most stuff, rather than the native code.

    • shadowgovt 18 hours ago

      We already have two infrastructures to do that: the JavaScript engine and wasm.

      And, indeed, part of the deprecation of XSLT proposal involves, in essence, moving XSLT processing from the browser-native layer to wasm as a polyfill that a site author can opt into.

      • zzo38computer 17 hours ago

        Yes, what I meant (one way to handle what the author proposed; possibly not exactly what they meant) is that many of these "building blocks" can be made from wasm (although I have some criticism of that too; nevertheless, it will do). Many would be included by default, and others would be set up by the user if desired. Native code extensions (e.g. .so files) would also be possible but are not needed for most things, and if you set things up from the app store, or from stuff specified by the document or server, then only sandboxed VM codes would be possible and native code would not be allowed in those circumstances.

kellengreen 21 hours ago

Today I Learned: There's a built-in class called XSLTProcessor.
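
For the similarly uninitiated, a minimal sketch of what it does (the input strings here are invented for illustration):

    const parser = new DOMParser();
    const xml = parser.parseFromString(
      '<posts><post>hello</post></posts>', 'application/xml');
    const xsl = parser.parseFromString(
      `<xsl:stylesheet version="1.0"
           xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
         <xsl:template match="post"><p><xsl:value-of select="."/></p></xsl:template>
       </xsl:stylesheet>`, 'application/xml');

    // Compile the stylesheet, run it over the document, and get back a
    // DOM fragment that can be inserted into the page.
    const processor = new XSLTProcessor();
    processor.importStylesheet(xsl);
    const fragment = processor.transformToFragment(xml, document);
    document.body.appendChild(fragment); // renders <p>hello</p>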

tehjoker 15 hours ago

I don't get it, is XSLT used for anything important currently? I learned about it in uni, but I've basically never encountered it since.

zzo38computer 18 hours ago

I like the idea they mentioned of "a browser made up of composable components, protocol handlers separate from primary document renderers separate from attachment handlers", and I had the same idea. (Not all browsers will have to be implemented in this way, and they are not necessarily all the same, but this can be helpful when you want this.)

There can be two kinds of extensions, sandboxed VM codes (e.g. WebAssembly) and native codes; the app store will only allow sandboxed VM codes, and any native codes that you might want must be installed and configured manually.

There is also the issue of such things as: identification of file formats (such as MIME), character sets, proxies, etc.

I had made up the Scorpion protocol and file format, which is intended to be between Gemini and "WWW as it should be if it was designed better". This uses ULFI rather than MIME (to avoid some of the issues of MIME), and supports TRON character code, and the Scorpion conversion file can be used to specify a way to handle unknown file formats (there are several ways that this can be specified, including by a uxn code).

So, an implementation can be versatile to support things that can be useful beyond only MIME and Unicode etc.

Adding some additional optional specifications to WWW might also help, e.g. a way to specify that certain parts of the document are supposed to be overridden by the user specifications in the client when they are available (although in some cases the client could guess, e.g. if a CSS only selects by HTML commands and media queries and not by anything else (or no CSS at all), then it should be considered unnecessary and the user's specifications of CSS can be used instead if they have been specified). Something like the Scorpion conversion file would be another possibility to have, possibly by adding a response header.

The previous "Google is killing the open web" article also mentions some similar things, but also a few others:

> in 2015, the WHATWG introduces the Fetch API, purportedly intended as the modern replacement for the old XMLHttpRequest; prominently missing from the new specification is any mention or methods to manage XML documents, in favor of JSON

Handling XML or JSON should probably be a separate function from the function for downloading files, though. (Also, DER is better for many things.)

> in 2024, Google discontinues the possibility to submit RSS feeds for review to be included in Google News

This is not an issue having to do with web browsers, although it is related to the issues that do have to do with web browsers (not) handling RSS.

> in 2025, Google announces a change in their Chrome Root Program Policy that within 2026 they will stop supporting certificate with an Extended Key Usage that includes any usage other than server [...]; this effectively kills certificates commonly used for mutual authentication

While I think they should not have stopped supporting such certificates (whoever the certificate is issued to should probably make their own decision), it is usually helpful to use different certificates for client authentication anyways, so this is not quite as bad as they say, although it is still bad.

(X.509 client authentication would also have many other benefits, which I had described several times in the past.)

> in 2021, Google tried to remove [alert(), prompt(), and confirm()], again citing “security” as reason, despite the proposed changes being much more extensive than the purported security threat, and better solutions being proposed

Another issue is blocking events and JavaScript execution (which can sometimes be desirable, although in the case of frames it would be better to block only one frame), and modal dialog boxes potentially blocking other functions in the browser (which is undesirable). For the former case, there are other things that can be done, though, such as a JavaScript object that controls the execution of another JavaScript context, which can then be suspended like a generator function (without needing to be a generator function).

jll29 a day ago

Let's all move to Ladybird next August.

  • recursive 21 hours ago

    Have to get everyone off Windows first. If you can do that, switching to Ladybird should be easy.

  • GalaxyNova a day ago

    the article doesn't say kind things about it..

  • pessimizer a day ago

    Just in time for Apple to buy it.

shadowgovt 21 hours ago

I don't think I'm plugged into the side of the Internet that considers XML "the backbone of an independent web."

I think XML has some good features, but in general the infatuation with it as either a key representation or a key transmission protocol has waned over the years. Everything I see on the wire these days is JSON or some flavor of binary RPC like protobuf; I hardly ever see XML on the wire anymore.

  • zzo38computer 18 hours ago

    XML is not so good for most of the things it was used for, and JSON has some problems too (I prefer DER), but Google is doing many bad things with WWW and not only things relating to XML, whether or not XML is good.

1vuio0pswjnm7 20 hours ago

"The WHATWG aim is to turn the Web into an application delivery platform, a profit-making machine for corporations where the computer (and the browser through it) are a means for them to make money off you rather than for you to gain access to services you may be interested in."

"Such vision is in direct contrast with that of the Web as a repository of knowledge, a vast vault of interconnected documents whose value emerges from organic connections, personalization, variety, curation and user control. But who in the WHATWG today would defend such vision?"

"Maybe what we need is a new browser war. Not one of corporation versus corporation -doubly more so when all currently involved parties are allied in their efforts to enclose the Web than in fostering an open and independent one- but one of users versus corporations, a war to take back control of the Web and its tools."

It should be up to the www user, not the web developer, to determine how they prefer documents to appear on their screen

Contrast this with one or a few software programs, i.e., essentially a predetermined selection (no choice), that purport to offer all possible preferences to all www users, i.e., the so-called "modern" browser. These programs are distributed by companies that sell ad services and their business partners (Mozilla)

Documents can be published in a "neutral" format, JSON or whatever, and users can choose to convert this, if desired, to whatever format they prefer. This is more or less the direction the web has taken; however, at present the conversion is generally performed by web developers using (frequently obfuscated) JavaScript, intended to be outside the control of the user

Although from a technical standpoint, there is nothing that requires (a) document retrieval and (b) document display to be performed by the same program, commercial interests have tried to force users toward using one program for everything (a "do everything program")^1

When users run "do everything programs" from companies selling ad services and their business partners to perform both (a) and (b), they end up receiving "documents" they never requested (ads) and getting tracked

If users want such "do everything" corporate browsers, if they prefer "do everything programs", then they are free to choose them, but there should be other choices and it should be illegal to discriminate against other software as long as rules of "netiquette" are followed. A requirement to use some "do everything program" is not a valid rule

"There's more to the Internet than the World Wide Web built around the HTTP protocol and the HTML file format. There used to be a lot of the Internet beyond the Web, and while much of it still remains as little more than a shadow of the past, largely eclipsed by the Web and what has been built on top of it (not all of it good) outside of some modest revivals, there's also new parts of it that have tried to learn from the past, and build towards something different."

Internet subscribers pay a relatively high price for access in many countries

According to one RFC author, the www became "the new waist"

But to use expensive internet access only for "the web", especially a 100% commercial, obsessively surveilled one filled with ads, is also a "waste", IMHO

1. Perhaps the opposite of "do one thing well". America's top trillionaire wants to create another of these "do everything programs", one to rule them all. These "do everything programs" will always exist but they should never be the only viable options. They should never be "required"

rendall 19 hours ago

> ...just in case the questionable “no politics” policies —which consistently prove to be weasel words for “we're right-wingers but too chicken to come out as such”— weren't enough to stay away from it.

I am sympathetic to the stance of the article, but this line really turned me off and made me wonder if I was giving the writer too much credit. This kind of "if you're not with me, then you suck" outlook is childish and off-putting.

I know it's hard for some terminally political people to understand, but some of us really, really think it's a strength to work with teammates who hold different opinions than our own.

jeffbee 21 hours ago

"Nobody wants my nerd bullshit, part 42"

pessimizer a day ago

What you actually want is a web that isn't decided by the whims of massive monopolies, not XSLT. XSLT is not good. Google will not care that you don't comply and that you don't install their polyfill; it's some real vote-with-your-wallet, middle-class-style consumer activism. It's an illusion of control. If you don't eat the bugs, you'll starve; then everyone is eating the bugs.

Try having an opposition party that isn't appointing judges like Amit Mehta. Or pardoning torturers, and people who engineered the financial crash, and people who illegally spied on everyone, etc., etc. But good luck with that, we can't even break up a frozen potato monopoly.