Results tagged “browsers”
March 19, 2013
One of my favorite aspects of the infrastructure of the web is the way we refer to web browsers in a technical context: User Agents. Divorced from its geeky context, the simple phrase seems laden with social, even political, implications.
The idea captured in the phrase "user agent" is a powerful one: that this software we run on our computers or our phones acts with agency on behalf of us as users, doing our bidding and following our wishes. But as the web evolves, we're in fundamental tension with that history and legacy, because the powerful companies that today exert overwhelming control over the web are going to try to make web browsers less an agent of users and more an agent of those corporations. This is especially true for Google Chrome, Microsoft Internet Explorer and Apple Safari, though Mozilla's Firefox may be headed down this path as well.
Traditionally, the ostensible protection against browsers undermining the agency of the user has been that some of the most popular browsers (Firefox, Chrome, Safari's browser engine) are open source, and could thus be prevented from being subverted by their corporate owners, because technically savvy users could wrest control of the code from their sponsors. What's more, all popular desktop browsers have supported some form of user scripting, whether as plugins (which began to wane in importance a decade ago), as extensions and add-ons (in Firefox and Chrome, notably), or as bookmarklets, which let arbitrary scripts run on pages in almost every browser.
That era of truly effective user control over user agents may be rapidly ending, for a few reasons:
- Legitimate security and performance issues have led to the death of the traditional browser plugin; Flash was perhaps the last successful browser plugin that will ever exist. As browsers become more deeply tied to operating systems, and those operating systems try to shed their dependencies on particular chip architectures or system designs, plugins implemented as native code are rapidly becoming obsolete.
- Increasingly large parts of the core functionality of browsers are being connected to the cloud infrastructure of the companies that create them. From bookmark sync in Chrome, Safari and Firefox to past and future efforts around browser-integrated authentication by Microsoft and Mozilla, more and more of the features we use to browse the web are plugged in by default to centralized web services. Today's browsers can certainly function without signing in to those services, but increasingly that level of convenience will be expected from any browser that hopes to compete.
- Google, Apple and Opera have all coalesced around the extremely popular (and currently very technically strong) WebKit browser engine, which is overwhelmingly dominant in mobile web browsing. As we've seen before, a browser engine gaining over 90% share of a market leads to technological stagnation, from security problems to diminished innovation and customizability. It's possible the three (well, two and a half) competitors all relying on the platform will be enough to keep it moving forward, but that's far from certain.
Though this may sound alarmist or dire, for the most part these developments aren't egregiously bad news for the web or for consumers. In exchange for these compromises, we've seen enormous advances in browser performance, standards conformance and capability. Centrally-connected services like bookmark syncing are generally easy to disable, and not particularly intrusive even when enabled. Competition has pushed platforms forward enough that even the formerly-reviled Internet Explorer can make knowing jokes at the expense of its old versions, since the new ones are quite good.
But the idea that a browser can be controlled by a user is still fundamentally in danger. Google just removed the Adblock Plus extension from its Play store for Android devices. This isn't that surprising — an advertising company is prohibiting the distribution of an extension that blocks advertising. But it starts to highlight the larger issue that the straightforward ability to have user agents be, well, agents for users is now being mediated through the business concerns of the companies which create the browsers.
We need to be advocates for extremism in the name of user agent empowerment. There should be no constraint about what user agents can do on our behalf to present, transform, remix, combine, format, reformat and display the content we view on the web. If we want to make a browser or browser add-on that strips away ads from a page, that's our right. If I want to have a browser show everything in black and white? Let me as the user have that agency. Print everything upside down and in blinking text? Absolutely. Transform every mention of "the cloud" into the phrase "my butt"? You bet your... well, you know.
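Any of these transformations is trivial for a user agent to perform; the famous "cloud-to-butt" rewrite is just a string substitution over the page text. A minimal sketch in Python (the real thing would be a browser extension in JavaScript operating on the DOM; this merely illustrates the transformation a user agent applies on the user's behalf):

```python
import re


def user_agent_rewrite(page_text: str) -> str:
    """Rewrite page text on the user's behalf, as a hypothetical
    content-transforming browser extension might do before rendering."""

    def swap(match: re.Match) -> str:
        # Preserve the capitalization of the leading word.
        word = "My" if match.group(1)[0].isupper() else "my"
        return f"{word} butt"

    # Replace every mention of "the cloud" with "my butt".
    return re.sub(r"([Tt]he) cloud", swap, page_text)


print(user_agent_rewrite("Store your files in the cloud."))
# → Store your files in my butt.
```

The point is not the joke: it's that the user, not the site or the browser vendor, decides whether a rewrite like this runs.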
Why is this important? Aren't these examples just trivial transformations of content? Doesn't the existence of a "Cloud-to-Butt" extension prove that these concerns are overblown? Not necessarily.
First, distribution matters when it comes to browser customizations and add-ons. The ease of installing an add-on fundamentally changes the impact that its code has on user experience, compared to something that is merely theoretically possible. Google and others saying "well, you can distribute that plugin, but not through a method integrated into the browser" is the difference between a piece of code being a feature for normal people and it being an art project. This is the same issue we see with app stores, but with the added consequence of affecting the open web — the same open web that's supposed to be the alternative to the wrongs of those app stores.
Second, if we follow the historical pattern of these advancements in other areas of the tech industry, we'll see the big tech companies capitulate to the desires of the legacy content industry to trump IP law and practice with private contracts that constrain our legal rights around content use and transformation. We've had our right to make backup copies of our own media in formats like DVD criminalized by their actions. We've seen the ability to route video streams to our own devices constrained by HDCP, again limiting our ability to make our own copies of content or to transform or sample that content in ways that are legally permitted.
It is obvious that the biggest companies which make web browsers all want to curry favor with media companies on the web in the same way they curried favor with those media companies in video and music.
Google, Apple and Microsoft each share a few traits:
- They want to prove they're the biggest friends to big media companies.
- They each have advertising businesses they don't want users to block.
- They each already enforce HDCP and other technical constraints that take away IP rights that citizens have always had.
- They each have closed app stores which they heavily moderate to decide which forms of customization are permitted on their platforms.
- They have each hemmed in even powerful third-party platforms like Flash, taking control over distribution and implementation of the most popular extensions/customizations.
There is no reason to believe that web browsers won't start to aggressively block capabilities that historically have been assumed to be part of user agents. We can expect messages like "this page prohibits printing for non-registered users", or "You don't have sufficient permissions to click the 'Pin It' button for Pinterest on this site", or "unauthorized bookmarklet detected; content from this site is blocked".
How It Happens
Here's where the Pollyannas in the tech industry, or those too young to have seen how the patterns repeat, say with faith and certainty, "That won't happen! My favorite browser is open source!" But imagine if this same set of features were marketed by a smart communications team at one of these companies. Instead of saying "our browser shuts off the print button", they say "we offer a pay gate feature with deep integration into the browser for subscribers". Instead of saying "We neuter competing social networks by disabling their sharing buttons" they say "We've launched a preferred partner program to enable deep browser integration from a set of verified social networks that offer the features our users want". Instead of saying "We block content from displaying if you haven't signed in with our cloud service and had your extensions approved by us", they say "Customers who sign in with their account get access to exclusive content from our partner sites."
Hey, the friendlier phrasing sounds pretty familiar, right? That's not evil at all! Except that it's the exact same constraint being introduced to your web browser, presented in a much more appetizing way. Think of how indispensable features like Instapaper or Pocket or Readability are on mobile browsers. Now understand those are seen as problematic exceptions to the model that Apple (and Google, and everyone else) would prefer to see for mobile browser usage. There's no technical reason that Adblock couldn't be enabled on mobile versions of Safari, and doing so would allow that community to begin optimizing its performance for mobile devices. Does anybody think that will ever happen?
So, I'm a user agent extremist. We should work constructively together within the tech community (perhaps led by the EFF) to create a list of capabilities in web browsers and user agents that we consider inviolate. We should take language that ordinary consumers understand, like "unlocking" in the context of a mobile phone, and apply it to our browsers. Then we can propose simple guidelines that should be enshrined in policy — every web browser should be "unlocked" by default. We need to educate all three branches of government at federal, state and local levels to expect that media companies are going to start prosecuting ordinary citizens for using user agent capabilities that we've taken for granted for twenty years.
Otherwise we can soon expect to find that the "View Source" button which has enabled the web so far is mysteriously grayed out on certain sites. Because there are companies that are going to realize that giving users agency is a really powerful thing.
November 21, 2011
Facebook has moved from merely being a walled garden into openly attacking its users' ability and willingness to navigate the rest of the web. The evidence that this is true even for sites which embrace Facebook technologies is overwhelming, and the net result is that Facebook is gaslighting users into believing that visiting the web is dangerous or threatening.
In this post I intend to not only document the practices which enable this attack, but to also propose a remedy.
1. You Cannot Bring Your Content In To Facebook
This warning appeared on Facebook two weeks ago to advise publishers (including this site) that syndicate their content to Facebook Notes via RSS that the capability would be removed starting tomorrow. Facebook's proposed remedy involves either completely recreating one's content within Facebook's own Notes feature, or manually creating status updates which link to each post on the original blog. Remember that second option, linking to each post manually — we'll return to it later.
2. Publishers Whose Content Is Captive Are Privileged
Over at CNET, Molly Wood made a powerful case against the proliferation of Facebook apps that enable ongoing, automated sharing of behavior data after only a single approval from a user. In her words:
Now, it's tempting to blame your friends for installing or using these apps in the first place, and the publications like the Post that are developing them and insisting you view their stories that way. But don't be distracted. Facebook is to blame here. These apps and their auto-sharing (and intercepts) are all part of the Open Graph master plan.
When Facebook unveiled Open Graph at the f8 developer conference this year, it was clear that the goal of the initiative is to quantify just about everything you do on Facebook. All your shares are automatic, and both Facebook and publishers can track them, use them to develop personalization tools, and apply some kind of metric to them.
As Molly's piece eloquently explains, what Facebook is calling "frictionless" sharing is actually placing an extremely high barrier to the sharing of links to sites on the web. Ordinary hyperlinks to the rest of the web are stuck in the lower reaches of a user's news feed, competing for bottom position on a news feed whose prioritization algorithm is completely opaque. Meanwhile, sites that foolishly and shortsightedly trust all of their content to live within Facebook's walls are privileged, at the cost of no longer controlling their presence on the web.
3. Web sites are deemed unsafe, even if Facebook monitors them
As you'll notice below, I use Facebook comments on this site, to make it convenient for many people to comment, and to make sure I fully understand the choices they are making as a platform provider. Sometimes I get a handful of comments, but on occasion I see some very active comment threads. When a commenter left a comment on my post about Readability last week, I got a notification message in the top bar of my Facebook page to let me know. Clicking on that notification yielded this warning message:
What's remarkable about this warning message is not merely that an ordinary, simple web content page is being presented as a danger to a user. No, it's far worse:
- Facebook is warning its users about the safety of a page which incorporates Facebook's own commenting features, meaning even web sites that embrace Facebook's technologies can be marginalized
- Facebook is displaying this warning despite the fact that Facebook's own systems have indexed the page and found that it incorporates their own Open Graph information.
To illustrate this second point, I'll include what is a fairly nerdy illustration for those interested. If you're sufficiently interested in the technical side of this, what's being shown is Facebook's own URL linter, as viewed through the social plugins area in the developer console for a site. In this view, it verifies not only that the Open Graph meta tags are in place (minus an image placeholder, as the referenced post has no images), but that Facebook has crawled the site and verified enough of the content of the page to know their own comment system is in place on the page. (Click to view the whole page, with only the app ID numbers redacted.)
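What the linter checks can be approximated with a short script: scan a page's markup for the `og:` meta tags that Facebook's crawler indexes. A simplified sketch in Python using only the standard library (the `og:` property names are real Open Graph conventions; the sample markup and parsing are illustrative, not Facebook's actual implementation):

```python
from html.parser import HTMLParser


class OpenGraphParser(HTMLParser):
    """Collect Open Graph <meta property="og:..."> tags from a page,
    roughly as a URL linter would before declaring the page indexed."""

    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:"):
            self.og[prop] = attrs.get("content", "")


# Hypothetical page markup with standard Open Graph meta tags.
sample_page = """<head>
<meta property="og:title" content="User Agent Extremism" />
<meta property="og:type" content="article" />
<meta property="og:url" content="http://example.com/post" />
</head>"""

parser = OpenGraphParser()
parser.feed(sample_page)
print(parser.og["og:title"])
# → User Agent Extremism
```

A page that passes a check like this has, by definition, already been crawled and understood by Facebook's own systems, which is what makes the "unsafe page" warning so hard to justify.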
How to Address This Attack
Now, we've shown that Facebook promotes captive content on its network ahead of content on the web, prohibits users from bringing open content into their network, warns users not to visit web content, and places obstacles in front of visits to web sites even if they've embraced Facebook's technologies and registered in Facebook's centralized database of sites on the web.
Fortunately, the overwhelming majority of web users visit Facebook through relatively open web browsers. For these users, there is a remedy which could effectively communicate the danger that Facebook represents to their web browsing habits, and it would be available to nearly every user except those using Facebook's own clients on mobile platforms.
This is the network of services designed to warn users about dangers on the web, one of the most prominent of which is Stop Badware. From that site comes this description:
Some badware is not malicious in its intent, but still fails to put the user in control. Consider, for example, a browser toolbar that helps you shop online more effectively but neglects to mention that it will send a list of everything you buy online to the company that provides the toolbar.
I believe this description clearly describes Facebook's behavior, and I strongly urge Stop Badware partners such as Google (whose Safe Browsing service is also used by Mozilla and Apple), as well as Microsoft's similar SmartScreen filter, to warn web users when they visit Facebook. Given that Facebook is consistently misleading users about the nature of the web links they visit, and placing barriers in front of visits to sites through ordinary web links on its network, this seems an appropriate and necessary remedy for its behavior.
Part of my motivation for recommending this remedy is to demonstrate that our technology industry is capable of regulating and balancing itself when individual companies act in ways that are not in the best interest of the public. It is my sincere hope that this is the case.
Many aspects of this conversation are not, of course, new topics. Some key pieces you may be interested in:
- As I was researching this piece, Marshall Kirkpatrick published Why Facebook's Seamless Sharing is Wrong over on ReadWriteWeb, articulating many of these same concerns. His piece is well worth reading.
- Albert Wenger of Union Square Ventures makes a strong case for the long-term goal of a network of networks. I fully share his vision here, and hope most in our industry will endorse this idea as well.
- Molly Wood's excellent look at Facebook sharing which I referenced above is worth reading in its entirety.
- Blackbird, Rainman, Facebook and the Watery Web was a more optimistic look at how web platforms evolve that I wrote four years ago when Facebook was much less dominant.
- The Facebook Reckoning a year ago offered a perspective on the values and privilege that inform Facebook's decision-making.
- My ruminations on ThinkUp and Software With Purpose last week also explored the related danger of Facebook deleting everything you've ever created on their site.
September 1, 2008
Today, in a surprisingly botched announcement, Google announced Chrome, their upcoming open source web browser. The subject of a Google browser is something I've opined on a few times over the years, but Jason Kottke's compiled an even more comprehensive overview of the conversations a few of us have been having for almost seven years.
If that's up your alley, you might want to check out:
- Stories and Tools, which at six years old is a little dated, but offered up some thoughts on the presentation of web applications that I thought connected nicely with the Google Chrome comic book.
- Google and Theory of Mind, about Google's great weakness in the insularity of the company's culture.
- Google Web History - Good and Scary, which at the launch of Google's Web History feature examined some of the implications of the new tracking system.
- The Circle of (Web) Life, which described a cycle of web businesses supporting each other, based on Google's support for Mozilla.
- How Matt Haughey Beat Google, challenging the inevitability of Google's domination of markets by pointing out how they weren't able to compete with a self-funded, passionate person and his community.
- Google Office: Google Apps for Your Domain, which put the launch of Google Apps in the context of both the office suite competition and Google's other offerings.
- The Microcontent Client, an outline of ideas about the evolution of browsers and information management applications from 2002.
- Finally, Google's First Mistake, my rumination on Google's acquisition of Pyra Labs, a post whose accuracy has both increased and decreased in the years since I wrote it.
July 16, 2007
Before it was called Firefox, or Firebird, Mozilla’s lightweight browser was known as Phoenix. An appropriate name, given that it rose from the ashes of Netscape. Read/WriteWeb has a nice retrospective pegged to the fourth anniversary of the creation of the Mozilla Foundation. It quotes me writing upon the demise of Netscape, and I thought it was useful to also mention the circle of web life that the Mozilla/Netscape browsers have been part of.
If you weren’t reading blogs back then, or missed the posts, some interesting related reading is John Rhodes’ seminal essay about a Google client from 2001, as well as Jason Kottke’s two posts from 2004.
As Richard says in his R/WW post, “Life is all about cycles though, so whether the Google/Mozilla romance turns out to be comedy or tragedy in 4 more years time — that is the question.”
July 15, 2003
Now that Netscape's more or less officially dead, it occurs to me that it might be worthwhile for Google to bankroll the Mozilla Foundation, either by donating a substantial sum or by hiring several of the browser engineers. Google's shown a penchant not just for being "not evil" but for supporting products and companies (ahem) that contribute to the web even if it's not directly in the area of search.
Since Google's all but announced that they're no longer "just search", I'd probably amend my qualms about lack of focus and say that if Google wants to own the entire area of information innovation, they need to be significant contributors to the evolution of Mozilla.
Firebird is, finally, a usable browser, and damn close to being the best in the world, if it isn't already. Google's shown the ability to get an installable client onto millions of desktops around the world. And they have a user experience focus that would nicely shore up the critical weakness that's dogged Mozilla from day one. If the goal is now organizing and presenting information instead of just being the best search engine, then a browser client focused on information retrieval, search, and management is a great first step. And I'd give them better than even odds at being able to grow that application into a full microcontent client if they were so inclined.
What would be the business model? My mind tells me that a free, open-source browser with built-in hooks to Google services and APIs would be good enough to push increased usage of Google's revenue-generating services and advertising. Microsoft has publicly conceded that they're going for Google's market, and Yahoo threw more than a billion and a half dollars at the Google problem earlier this week. Against those challenges, I'd say the onus is on Google to embrace and extend with a free product that's better than anything the competition can offer: That's what works.
So, a Google browser, based on Mozilla. An easily-justified commitment to cross-platform support and outstanding user experience, based on Google's history of honoring those tenets and the Mozilla organization's inherent preference for them. Culturally, hiring the core members of the Mozilla dev team would be an extraordinarily easy fit. And, frankly, it'd probably require little more development resources, bandwidth, or staffing than the Pyra acquisition did.
I'd pay $500 for a Google-branded microcontent management platform based on the Mozilla core if it were scriptable, stable, and integrated API-neutral blogging and aggregation tools. Or I'd pay $150 annually. So, Google, are you guys game for taking your position as a platform vendor seriously?