The Open Web is no more

by Rudd-O published 2015/07/30 17:00:00 GMT+0, last modified 2015-07-29T19:54:58+00:00
Why the demise of the Open Web matters.

The Open Web was a concept.  It was the idea that everyone should be able to publish to, and use, the Web, without anyone having the privilege to prohibit that activity on discriminatory grounds, whether based on gender, race, socioeconomic status, choice of operating system, or any other sort of bigotry or favoritism.  This principle, if I may call it that, was the very reason for the creation of the W3C, for their incessant and feverish work on open standards that ended the nightmare that was the Internet Explorer-only Web, and for their policy of ensuring that their members provide royalty-free licensing of any patents that apply to Web technologies.  When, a few years ago, the W3C briefly decided to consider what is called "RAND" ("reasonable and non-discriminatory") licensing for patents, the entire Web community erupted in anger, for those "reasonable" and "non-discriminatory" terms would be, in effect, a discriminatory practice against whoever did not have the money to pay for the patent licenses required to avoid prison for publishing content on the Web, and would thus violate the very principle that the W3C was supposed to exist for.  When the W3C considered adopting patented media codecs that required royalties to use, a very similar outrage took place.

Unfortunately, as is the case with many organizations, corrupt people tend to corrupt decent institutions for self-serving purposes.  The W3C no longer stands behind the user, for it abandoned that principle years ago by carving out a magical loophole.

This magical exception instructs browser makers to provide a method for a computer program to run within their users' browsers, where this program is not under the control of the end user (at best, the user can only choose to switch it on or off).  This method can be used by some groups who manufacture, distribute, and goad the users to install programs that control what their own browsers may or may not do (at least initially with respect to video, but the technology as specified is generic enough that it could apply to anything).  This loophole is what is called "Encrypted Media Extensions", and its very purpose is to sabotage the end user's attempts to copy or transform anything delivered to their own computer through the extensions.  This sabotage is profitable to the peddlers of EME because they can then charge money to users of the Web to remotely permit their computers to do what those computers were already perfectly capable of doing without EME: playing videos.
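To make the mechanism concrete, here is a minimal sketch of the handshake a Web page performs with this browser-hosted machinery, using the EME JavaScript API as specified.  The key-system string and the capability configuration are illustrative examples, and the navigator object is passed in as a parameter only to keep the sketch self-contained; a real page would use the global navigator:

```javascript
// Sketch of the EME handshake a page performs before "protected" playback.
// Note who is in control at each step: the page and the remote license
// server, never the user.
async function setupEncryptedPlayback(video, nav) {
  // Ask the browser whether a DRM module ("key system") is available.
  // "com.widevine.alpha" is one example of a vendor-specific key system.
  const access = await nav.requestMediaKeySystemAccess("com.widevine.alpha", [
    { videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }] },
  ]);

  // Create the opaque MediaKeys object and attach it to the video element.
  // The user cannot inspect or alter what this module does.
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // When the stream signals that it is encrypted, open a license session.
  video.addEventListener("encrypted", async (event) => {
    const session = mediaKeys.createSession();
    // This produces a license request; the session then fires "message"
    // events that the page forwards to the vendor's remote license server,
    // which decides whether this computer may play the video.
    await session.generateRequest(event.initDataType, event.initData);
  });

  return mediaKeys;
}
```

Note that every decision point in this flow (which key system, which license server, what the module permits) belongs to the publisher and the DRM vendor; the user's only lever is whether to enable the module at all.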

In other words, it's the bad old story of "let's prohibit you from doing something you were able to do earlier, then charge you money for the privilege of doing it".  This is really a fool's errand because a knowledgeable-enough user can always circumvent and gain control over such malicious functionality-reducing programs running on their own computer... but when has impossibility stopped rent-seeking control freaks from trying?

As a result, W3C's contemporary stance towards Web publishers is more like this: "well, you can publish whatever you want using our standards, except if you want to use this Encrypted Media Extensions part of the standard; in that case, should you want to publish certain things through the Web by using it, you must go through some monopolist gatekeeper who controls the EME".  It should be obvious that this new stance is highly discriminatory, because it gives veto power to a small handful of organizations who control the EME: Microsoft, Netflix, the MPAA (ardent control freaks with respect to what people may do with their own property), and a few other well-known oligopoloids.

Now, at this point, a devil's advocate may retort "but the user is free to publish any sort of videos they want, and they don't have to use the Encrypted Media Extensions to do so!".  And that's technically correct, except that the retort only holds in a static world.

In a dynamic world — say, like our beloved reality — where incentives change people's behaviors over time, the endorsement of Encrypted Media Extensions as a Web standard will mean that quite a few people publishing video, in order to protect themselves from lawsuits, will have no choice but to go through the gatekeepers of EME and publish encrypted video.  As more and more people are funneled (no doubt against their will, for if they could just publish for free and without impediment, they would choose that) into the oligopoloids for delivery of video, those who choose not to allow EME on their computers will be left out from watching video, while those who choose to use EME will inevitably limit what their computers — and thus they themselves — may do with what's accessible on the Web.

Where does this leave people who would like to use the Web as it exists, but can't (because their computers won't work with EME) or won't (because control freakery is detestable)?  It leaves them in a ghetto of illegality, of course: use alternative sources for video like torrents, or circumvent EME, both activities illegal almost everywhere (by the way, "thank you", United States rulers, for having coerced other parts of the world into adopting more authoritarianism).

This game-theoretic trend represents a net proportional reduction in the nondiscriminatory ability to publish on and use the Web, forcing people to choose between ghettoizing themselves into criminality or excluding themselves from culture even if they could pay for it.  And, let's not forget, video is only the beginning.  We have already established that the standard is generic enough to sabotage more than just certain uses of video, so there is no telling what else will become harder to do without EME in the future.

I will now reprint what Jamie Zawinski quoted from BoingBoing when they wrote about this, and then I will reprint the EFF's take on this issue:

Here's the bad news: the World Wide Web Consortium is going ahead with its plan to add DRM to HTML5, setting the stage for browsers that are designed to disobey their owners and to keep secrets from them so they can't be forced to do as they're told. Here's the (much) worse news: the decision to go forward with the project of standardizing DRM for the Web came from Tim Berners-Lee himself, who seems to have bought into the lie that Hollywood will abandon the Web and move somewhere else (AOL?) if they don't get to redesign the open Internet to suit their latest profit-maximization scheme.

Danny O'Brien from the Electronic Frontier Foundation explains the wrangle at the W3C and predicts that, now that it's kosher to contemplate locking up browsers against their owners, we'll see every kind of control-freakery come out of the woodwork, from flags that prevent "View Source" to restricting embedded fonts to preventing image downloading to Javascript that you can't save and run offline. Indeed, some of this stuff is already underway at W3C, spurred into existence by a huge shift in the Web from open platform to a place where DRM-hobbled browsers are "in-scope" for the W3C.


EFF's take on this:

On Monday, the W3C announced that its Director, Tim Berners-Lee, had determined that the "playback of protected content" was in scope for the W3C HTML Working Group's new charter, overriding EFF's formal objection against its inclusion. This means the controversial Encrypted Media Extension (EME) proposal will continue to be part of that group's work product, and may be included in the W3C's HTML5.1 standard. If EME goes through to become part of a W3C recommendation, you can expect to hear DRM vendors, DRM-locked content providers like Netflix, and browser makers like Microsoft, Opera, and Google stating that they can now offer W3C standards compliant "content protection" for Web video.

We're deeply disappointed. We've argued before as to why EME and other protected media proposals are different from other standards. By approving this idea, the W3C has ceded control of the "user agent" (the term for a Web browser in W3C parlance) to a third party, the content distributor. That breaks a (perhaps until now unspoken) assurance about who has the final say in your Web experience, and indeed who has ultimate control over your computing device.

EFF believes that's a dangerous step for an organization that is seen by many as the guardian of the open Web to take. We have rehashed this argument many times before, in person with Tim Berners-Lee, with staff members and, along with hundreds of others, in online interactions with the W3C's other participants.

But there's another argument that we've made more privately. It's an argument that is less about the damage that sanctioning restricted media does to users, and more about the damage it will do to the W3C.

At the W3C's advisory council meeting in Tokyo, EFF spoke to many technologists working on Web standards. It's clear to us that the engineering consensus at the consortium is the same as within the Web community, which is the same almost anywhere else: that DRM is a pain to design, does little to prevent piracy, and is, by its nature, user-unfriendly. Nonetheless, many technologists have resigned themselves to believing that until the dominant rightsholders in Hollywood finally give up on it (as much of the software and music industry already has), we're stuck with implementing it.

The EME, they said, was a reasonable compromise between what these contracts demand, and the reality of the Web. A Web where movies are fenced away in EME's DRM-ridden binary blobs is, the W3C's pragmatists say, no worse than the current environment where Silverlight and Flash serve the purpose of preventing unauthorized behavior.

We pointed out that EME would by no means be the last "protected content" proposal to be put forward for the W3C's consideration. EME is exclusively concerned with video content, because EME's primary advocate, Netflix, is still required to wrap some of its film and TV offerings in DRM as part of its legacy contracts with Hollywood. But there are plenty of other rightsholders beyond Hollywood who would like to impose controls on how their content is consumed.

Just five years ago, font companies tried to demand DRM-like standards for embedded Web fonts. These Web typography wars fizzled out without the adoption of these restrictions, but now that such technical restrictions are clearly "in scope," why wouldn't typographers come back with an argument for new limits on what browsers can do?

Indeed, within a few weeks of EME hitting the headlines, a community group within W3C formed around the idea of locking away Web code, so that Web applications could only be executed but not examined online. Static image creators such as photographers are eager for the W3C to help lock down embedded images. Shortly after our Tokyo discussions, another group proposed their new W3C use-case: "protecting" content that had been saved locally from a Web page from being accessed without further restrictions. Meanwhile, publishers have advocated that HTML textual content should have DRM features for many years.

In our conversations with the W3C, we argued that the W3C needed to develop a clearly defined line against the wave of DRM systems it will now be encouraged to adopt.

A Web where you cannot cut and paste text; where your browser can't "Save As..." an image; where the "allowed" uses of saved files are monitored beyond the browser; where JavaScript is sealed away in opaque tombs; and maybe even where we can no longer effectively "View Source" on some sites, is a very different Web from the one we have today. It's a Web where user agents (browsers) must navigate a nest of enforced duties every time they visit a page. It's a place where the next Tim Berners-Lee or Mozilla, if they were building a new browser from scratch, couldn't just look up the details of all the "Web" technologies. They'd have to negotiate and sign compliance agreements with a raft of DRM providers just to be fully standards-compliant and interoperable.

To be clear, we don't think all of these proposals will come to fruition. We appreciate that there's no great hunger for DRM at the W3C. Many W3C participants held their nose to accept even the EME draft, which was carefully drafted to position itself as far away from the taint of DRM as was possible for a standard solely intended to be used for DRM systems.

But the W3C has now accepted "content protection". By discarding the principle that users should be in charge of user agents, as well as the principle that all the information needed to interoperate with a standard should be open to all prospective implementers, they've opened the door for the many rightsholders who would like the same control for themselves.

The W3C is now in an unenviable position. It can either limit its "content protection" efforts to the aims of a privileged few, like Hollywood. Or it can let a thousand "content protection systems" bloom, and allow any rightsholder group to chip away at software interoperability and users' control.

EFF is still a W3C member, and we'll do our best to work with other organizations within and without the consortium to help it fight off the worse consequences of accepting DRM. But it's not easy to defend a king who has already invited his attackers across the moat.

Still, even if the W3C has made the wrong decision, that doesn't mean the Web will. The W3C has parted ways with the wider Web before: in the early 2000s, its choice to promote XHTML (an unpopular and restrictive variant of HTML) as the future led to Mozilla, Apple and Opera forming the independent WHATWG. It was WHATWG's vision of a dynamic, application-oriented Web that won — so decisively, in fact, that the W3C later re-adopted it and made it the W3C's own HTML5 deliverable.

Recently, WHATWG has diplomatically parted with the W3C again. Its "HTML Living Standard" continues to be developed in tandem with the W3C's version of the HTML standard, and does not contain EME or any other such DRM-enabling proposals.

By contrast, W3C has now put its weight behind a restrictive future: let's call it "DRM-HTML". Others have certainly bet against open, interoperable standards and user control before. It's just surprising and disappointing to see the W3C and its Director gamble against the precedent of their own success, as well as the fears and consciences of so many of their colleagues.