Browser monoculture bad
"Browser monoculture" is often bemoaned as a threat to the web. According to Statscounter, which tracks browser use, over 70 per cent of the market is made up of people using Google Chrome or another browser based on the underlying Chromium project.
What web advocates worry about when they say this is bad is that Google can effectively determine the future of the web by deciding which features to support and which to drop. That's a lot of power for a single company that also has an effective monopoly on search and advertising.
What would happen if Chrome decided to break fundamental features of the web and didn't even feel the need to tell anyone?
Well, we can answer that question because that's what Chrome did.
Earlier this year Chrome developers decided that the browser should no longer support JavaScript dialogs and alert windows when they're called by third-party iframes.
That means that when content is embedded from another website, let's say a YouTube video, Chrome wants to stop that embedded content from calling the JavaScript alert() function, which pops up a small alert window. Eventually Chrome aims to get rid of alert windows altogether.
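To make that concrete, here is a minimal sketch of the pattern Chrome wants to block (the domain names are made up for illustration): a page on one site embeds a frame served from another origin, and a script inside that frame calls alert().

    <!-- Page at https://example.com embeds a widget from another site -->
    <iframe src="https://widgets.example.net/player.html"></iframe>

    <!-- Inside player.html, which is served from the other origin -->
    <script>
      // Under the change Chrome wants to make, this call would be ignored
      // because it comes from a cross-origin (third-party) iframe.
      alert("Hello from the embedded widget");
    </script>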
So what happens when Chrome does this? At first, nothing, because it's just an obscure bug in a bug tracker that only Chromium developers read. Then a Chrome developer happens to mention it in passing on Twitter. That raises a riot of angry developers, in the face of which Chrome postpones the move until January 2022, when it will again try to remove the features and hope that this time no one notices.
You know what isn't happening here? No standards bodies are being consulted, and no public discussion is happening with other browser vendors (Mozilla still makes a web browser, believe it or not). No, what happens is that Google gets to do what it wants and the web breaks.
"Big company bad" is hardly news at this point, especially if the big company is Google, but there's more going on here than that and it's worth picking apart a little.
Discontinuing a feature is rare. Part of what's amazing about the web is that you can still go to the very first web page and view it in any browser. The web is the web in large part because of this high level of backwards compatibility. To their credit, browser makers have generally been very good about making sure changes don't break the web.
That said, change happens. Most browsers don't support the blink tag anymore. Try using applet or AppCache – both are gone. That is, they're gone from the official web standard. Individual browsers may still support them, but they are no longer valid HTML. Therein lies the key. This is exactly why we have standards bodies like the World Wide Web Consortium (W3C) and the Web Hypertext Application Technology Working Group (WHATWG).
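As a rough illustration (not a recommendation), you can see this split between "gone from the standard" and "maybe still in your browser" by feature-detecting a retired API such as AppCache:

    <script>
      // AppCache has been removed from the standard, but any given
      // browser may or may not still expose the old API object.
      if ("applicationCache" in window) {
        console.log("This browser still exposes the legacy AppCache API");
      } else {
        console.log("AppCache is gone here, as the standard now intends");
      }
    </script>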
These are the groups where decisions about what should and should not be part of HTML happen, and those decisions usually come after lengthy discussion and testing. The WHATWG FAQ even addresses how this process should work, calling it "a very tricky effort, involving the coordination among multiple implementations and extensive telemetry to quantify how many web pages would have their behavior changed."
Google, notorious for the amount of data it collects before making changes to its own web properties, has not, as far as we can tell, done any telemetry here, nor does it appear to have the slightest idea how many web pages would be affected by removing support for alert and dialog. Google just wants them gone, so gone they are. That's a monopoly for you.
The WHATWG FAQ goes on to say that "when the feature is sufficiently insecure, harmful to users, or is used very rarely, this can be done. And once implementers have agreed to remove the feature from their browsers, we can work together to remove it from the standard."
Part of the problem is the lack of communication. When the developer community finds out through a tweet that Google is going to break a ton of websites, you know communication has failed. But there was a follow-up tweet that's actually far more disturbing than the news of alert() disappearing.
The tweet comes from Chrome software engineer and manager Emily Stark, who is, of course, speaking for herself rather than for Chrome, but it seems safe to assume that this thinking is prevalent at Google. She writes: "Breaking changes happen often on the web, and as a developer it's good practice to test against early release channels of major browsers to learn about any compatibility issues upfront."
First, she is flat-out wrong – breaking changes happen very rarely on the web and, as noted, there is a process for making sure they go smoothly and are worth the "cost" of breaking things. But second, and far more disturbing, is the notion that web developers should be continually testing their websites against early releases of major browsers.
That's actually why there are web standards – so developers don't have to do ridiculous things like continually test their websites to make sure they're still working. You build the site using the agreed-upon standard and it works for as long as the web does. Full stop. That is the point of standards. That someone of considerable stature in the Chrome project would think otherwise should be a red flag.
Web developer and advocate Jeremy Keith points out something else that's wrong with this idea. "There was an unspoken assumption that the web is built by professional web developers," he writes. "That gave me a cold chill."
What's chilling about the assumption is just that: it's assumed. The idea that there might be someone sitting down right now to write their first tentative lines of HTML so that they can launch a webpage dedicated to ostriches is not even considered.
What we are forced to assume in turn is that Chrome is built by professional developers working for an ad agency, with the primary goal of building a web browser that serves the needs of other professional developers working for the ad agency's prospective clients.
As Keith points out, this assumption that everyone is a professional fits the currently popular narrative of web development, which is that "web development has become more complex; so complex, in fact, that only an elite priesthood are capable of making websites today."
That is, as Keith puts it, "absolute bollocks."
I've been teaching people to build things on the web (in one form or another) for almost 20 years now, and you know what? It's no harder to write HTML now than it was 20 years ago. There's no more need for the supposed complexity of the modern web than there ever was. In fact, I think it's actually the opposite.
I find myself increasingly turned off by sites that are so obviously overengineered. I've started to notice the beautiful simplicity of an HTML page. Just the simple fact that it loads without a spinning circle makes it stand out on the web today.
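For what it's worth, a complete, standards-valid page, the kind that loads instantly with no spinner and no build step, really can be this small (a sketch, using the hypothetical ostrich page from earlier):

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8">
        <title>Ostriches</title>
      </head>
      <body>
        <h1>Ostriches</h1>
        <p>My first tentative lines of HTML.</p>
      </body>
    </html>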
You'd be forgiven for thinking that the most common content on the web these days is that little spinning circle you see while simple text content passes through several layers of unnecessary complexity on its way to your screen.
The complexity of the modern web seems like the law of diminishing returns in action. Developers keep pouring on the JavaScript and we keep getting... less of what we actually want.
That's not to say there isn't a time and place for complexity. Building turn-by-turn navigation with real-time map updates calls for some complex JavaScript, and it's great that the modern web has the standards to make that possible. But not every webpage needs to be that. The web is not a place just for professional developers; it's a place where anyone can build pretty darn near anything, and it certainly isn't a place where Chrome gets to dictate the tools we use or who can participate. The web is for everyone, not just developers.
Just a friendly reminder: Firefox is an excellent web browser. ®