Results tagged “web”
March 19, 2013
One of my favorite aspects of the infrastructure of the web is the way we refer to web browsers in a technical context: User Agents. Divorced from its geeky context, the simple phrase seems laden with social, even political, implications.
The idea captured in the phrase "user agent" is a powerful one: that this software we run on our computers or our phones acts with agency on behalf of us as users, doing our bidding and following our wishes. But as the web evolves, we're in fundamental tension with that history and legacy, because the powerful companies that today exert overwhelming control over the web are going to try to make web browsers less an agent of users and more an agent of those corporations. This is especially true for Google Chrome, Microsoft Internet Explorer and Apple Safari, though Mozilla's Firefox may be headed down this path as well.
Traditionally, the ostensible protection against browsers undermining the agency of the user has been that some of the most popular browsers (Firefox, Chrome, Safari's browser engine) are open source, and could thus in theory be protected from subversion by their corporate owners, because technically savvy users could wrest control of the code from their sponsors. What's more, all popular desktop browsers have supported some form of user scripting, whether in the form of plugins (which began to wane in importance a decade ago), extensions and add-ons (in Firefox and Chrome, notably) or bookmarklets, which let arbitrary scripts run on pages in almost every browser.
That era of truly effective user control over user agents may be rapidly ending, for a few reasons:
- Legitimate security and performance issues have led to the death of the traditional browser plugin; Flash was perhaps the last successful browser plugin that will ever exist. As browsers get tied more deeply to operating systems, and those operating systems try to shed their dependencies on particular chip architectures or system designs, plugins implemented as native code are rapidly becoming obsolete.
- Increasingly large parts of the core functionality of browsers are being connected to the cloud infrastructure of the companies that create them. From bookmark sync in Chrome, Safari and Firefox to past and future efforts around browser-integrated authentication by Microsoft and Mozilla, more and more of the features we use to browse the web are plugged in by default to centralized web services. Today's browsers can certainly function without signing in to those services, but increasingly that level of convenience will be expected of any browser that hopes to compete.
- Google, Apple and Opera have all coalesced around the extremely popular (and currently very technically strong) WebKit browser engine, which is overwhelmingly dominant in mobile web browsing. As we've seen before, a browser engine gaining over 90% share of a market leads to technological stagnation, from security vulnerabilities to less innovation in customizability. It's possible the three (well, two and a half) competitors all relying on the platform will be enough to keep it moving forward, but that's far from certain.
Though this may sound alarmist, for the most part these developments aren't egregiously bad news for the web or for consumers. In exchange for these compromises, we've seen enormous advances in browser performance, standards conformance and capability. The centrally-connected services like bookmark syncing are generally easy to disable, and not particularly intrusive even when enabled. Competition has pushed platforms forward enough that even the formerly-reviled Internet Explorer can make knowing jokes at the expense of its old versions, since new ones are quite good.
But the idea that a browser can be controlled by a user is still fundamentally in danger. Google just removed the Adblock Plus extension from its Play store for Android devices. This isn't that surprising — an advertising company is prohibiting the distribution of an extension that blocks advertising. But it starts to highlight the larger issue that the straightforward ability to have user agents be, well, agents for users is now being mediated through the business concerns of the companies which create the browsers.
We need to be advocates for extremism in the name of user agent empowerment. There should be no constraint on what user agents can do on our behalf to present, transform, remix, combine, format, reformat and display the content we view on the web. If we want to make a browser or browser add-on that strips away ads from a page, that's our right. If I want to have a browser show everything in black and white? Let me as the user have that agency. Print everything upside down and in blinking text? Absolutely. Transform every mention of "the cloud" into the phrase "my butt"? You bet your... well, you know.
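To make the point concrete, here's a minimal sketch of that last transformation as a user script, in the spirit of the "Cloud-to-Butt" joke. The `transformText` helper and its phrase mapping are purely illustrative, not any real extension's code:

```javascript
// A sketch of the kind of content transformation a user agent can perform
// on the user's behalf; the same basic pattern underlies ad-stripping,
// restyling, and reformatting extensions.
function transformText(text) {
  // Replace every mention of "the cloud", case-insensitively.
  return text.replace(/the cloud/gi, "my butt");
}

// In a browser, a bookmarklet or extension would apply this to every
// text node on the page (guarded so the sketch is harmless elsewhere):
if (typeof document !== "undefined") {
  const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  let node;
  while ((node = walker.nextNode())) {
    node.nodeValue = transformText(node.nodeValue);
  }
}
```

Trivial as it is, everything in the argument that follows applies to this handful of lines: what matters is whether the user is allowed to run it at all.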
Why is this important? Aren't these examples just trivial transformations of content? Doesn't the existence of a "Cloud-to-Butt" extension prove that these concerns are overblown? Not necessarily.
First, distribution matters when it comes to browser customizations and add-ons. Being able to easily install an add-on changes the fundamental impact that code has on user experience, as compared to something merely being theoretically possible. Google and others saying "well, you can distribute that plugin, but not through a method integrated into the browser" is the difference between a piece of code being a feature for normal people and it being an art project. This is the same issue we see with app stores, but with the added harm of undermining the open web — the same open web that's supposed to be the alternative to the wrongs of those app stores.
Second, if we follow the historical pattern of these advancements in other areas of the tech industry, we'll see the big tech companies capitulate to the desires of the legacy content industry to trump IP law and practice with private contracts that constrain our legal rights around content use and transformation. We've had our right to make backup copies of our own media in formats like DVD criminalized by their actions. We've seen the ability to route video streams to our own devices constrained by HDCP, again limiting our ability to make our own copies of content or to transform or sample that content in ways that are legally permitted.
It is obvious that the biggest companies which make web browsers all want to curry favor with media companies on the web in the same way they curried favor with those media companies in video and music.
Google, Apple and Microsoft each share a few traits:
- They want to prove they're the biggest friends to big media companies.
- They each have advertising businesses they don't want users to block.
- They each already enforce HDCP and other technical constraints that take away IP rights that citizens have always had.
- They each have closed app stores which they heavily moderate to decide which forms of customization are permitted on their platforms.
- They have each hemmed in even powerful third-party platforms like Flash, taking control over distribution and implementation of the most popular extensions/customizations.
There is no reason to believe that web browsers won't start to aggressively block capabilities that historically have been assumed to be part of user agents. We can expect messages like "this page prohibits printing for non-registered users", or "You don't have sufficient permissions to click the 'Pin It' button for Pinterest on this site", or "unauthorized bookmarklet detected; content from this site is blocked".
How It Happens
Here's where the Pollyannas in the tech industry, or those too young to have seen how the patterns repeat, say with faith and certainty, "That won't happen! My favorite browser is open source!" But imagine if this same set of features were marketed by a smart communications team at one of these companies. Instead of saying "our browser shuts off the print button", they say "we offer a pay gate feature with deep integration into the browser for subscribers". Instead of saying "We neuter competing social networks by disabling their sharing buttons" they say "We've launched a preferred partner program to enable deep browser integration from a set of verified social networks that offer the features our users want". Instead of saying "We block content from displaying if you haven't signed in with our cloud service and had your extensions approved by us", they say "Customers who sign in with their account get access to exclusive content from our partner sites."
Hey, the friendlier phrasing sounds pretty familiar, right? That's not evil at all! Except that it's the exact same constraint being introduced to your web browser, presented in a much more appetizing way. Think of how indispensable features like Instapaper or Pocket or Readability are on mobile browsers. Now understand those are seen as problematic exceptions to the model that Apple (and Google, and everyone else) would prefer to see for mobile browser usage. There's no technical reason that Adblock couldn't be enabled on mobile versions of Safari, and doing so would allow that community to begin optimizing its performance for mobile devices. Does anybody think that will ever happen?
So, I'm a user agent extremist. We should work constructively together within the tech community (perhaps led by the EFF) to create a list of capabilities in web browsers and user agents that we consider inviolate. We should take language that ordinary consumers understand, like "unlocking" in the context of a mobile phone, and apply it to our browsers. Then we can propose simple guidelines that should be enshrined in policy — every web browser should be "unlocked" by default. We need to educate all three branches of government at federal, state and local levels to expect that media companies are going to start prosecuting ordinary citizens for using user agent capabilities that we've taken for granted for twenty years.
Otherwise we can soon expect to find that the "View Source" button which has enabled the web so far is mysteriously grayed out on certain sites. Because there are companies that are going to realize that giving users agency is a really powerful thing.
December 20, 2012
(First, thanks/apologies to Andy Baio for listening to my musings on a mullet-style app strategy and coining "Mullet-platform". It's horrible and wonderful.)
So, here's the theory: The web is good at driving big audiences of engaged users, and enables awesome stuff like viral adoption and social sharing and all the other benefits we get from permanent, easily-shared links. Apps are great at driving paid user revenues, and creating experiences that stay with people wherever they go, and connecting to smart sensors and networks.
Thus, drive the business up front with the paid revenues from mobile apps, and the party in the back with the unfettered social sharing of the web. Mullet! With apologies to Jonah Peretti's mullet strategy for content, this seems to be as appropriate an analogy for revenue models as it is for the HuffPo/BuzzFeed method of publishing a site.
Let each platform do what it's good at! Now of course, there's some overlap between what each platform can do, but this is about making a rational justification of how the platforms relate to each other. And if you follow the premise behind the idea that you should stop publishing web pages, you very quickly arrive at the straightforward conclusion that you'll have to have a platform-neutral cloud service that powers these two different experiences for apps and web in a way that lets each be fully native. It should also make clear how to assign resources to the two different types of platforms, since each has a distinct role to play in helping your app or game or content succeed.
December 18, 2012
We have the obligation to never speak of our concerns without suggesting our solutions. I've been truly gratified to watch the response to The Web We Lost over the last few days; it's become one of the most popular things I've ever written, and it has inspired great responses.
But the most important question we can ask is: How do we rebuild the positive aspects of the web we lost? There are a few starting points, building on conversations we've been having for years. Let's look at the responsibilities we must accept if we're going to return the web to the values that a generation of creators cared about.
- Take responsibility and accept blame. The biggest reason the social web drifted from many of the core values of that early era was the insularity and arrogance of many of us who created the tools of the time. I was certainly guilty of this, and many of my peers were as well. We took it as a self-evident and obvious goal that people would even want to participate in this medium, instead of doing the hard work necessary to make it a welcoming and rewarding place for the rest of the world. We favored obscure internecine battles about technical minutia over the hard, humbling work of engaging a billion people in connecting online, and setting the stage for the billions to come. To surpass the current generation of dominant social networks and apps, which have unsurprisingly become arrogant and inflexible during their own era of success, we'll have to return to being as hungry and as humble as we were when the web was young. Because last time, we were both naive and self-absorbed enough that we deserved to fail.
- Don't just meet the UX standards, raise the bar. Obviously, the single biggest reason the new era of social apps and sites has succeeded where the early efforts did not is its massively superior user experience, from front-end user interfaces to back-end performance. The expected thing to do would be to hope that a new generation of user-respecting apps comes along and matches the best that Facebook and Twitter and Pinterest have to offer. But actually, due to the profound entrenchment these platforms already have across culture, the new apps will have to be an order of magnitude better in user experience. The good news is, as the rest of the web transitions from making pages to making streams, everyone will be revisiting the tools and technologies they use to connect, and that'll create a big opportunity for new players to participate.
- Rethink funding fundamentals. As we've seen over and over, the giant social networks seem to inevitably piss off their user bases by changing product features and terms of service in ways that catalyze huge waves of user discontent. But the fundamental reason these sites refuse to accommodate so many user demands is economics: those sites make their revenues on models dictated by the terms of funding from the firms that backed them. But as we've discussed before, it's possible to fund contemporary startups either without venture capital, or with a level of efficiency that allows mom and pop startups to reach web scale. To be clear, venture funding powered much of the first wave of social startups and was a big reason they were able to achieve many of their successes, so VC will be part of the ecosystem in the next wave as well. But the terms and dynamics can be profoundly different, supporting startups that are intentionally less efficient, perhaps even making use of the skills of blue collar coders to provide a lot of people with good, solid middle-class jobs instead of optimizing, as current companies do, for making a small number of people enormously wealthy.
- Explore architectural changes. One of the fundamental reasons that the economics of doing a startup at web scale are different is the proliferation of cloud computing and of very high-performance, reliable open-source components that provide advanced functionality which was prohibitively expensive a decade ago. Instead of backing a truckload of Dell servers up to a data center and then installing a few hundred thousand dollars worth of Oracle software, we can pick and choose a few components off the shelf to get started. More importantly, consumers will start to be able to use the cloud themselves, which removes the current constraint of having to build single, centralized services to provide a great consumer experience. Today, big social apps have to spend millions of dollars handling DMCA takedown requests and FBI investigations into illegal content, and in general fighting the web's fundamental desire to be decentralized. New apps don't need to obey those constraints.
- Outflank by pursuing talent outside the obvious. The current wave of the social web doesn't just demonstrate its arrogance through its product decisions. The people who create these platforms are hired from a narrow band of privileged graduates of a small number of top-tier schools, overwhelmingly male and focused narrowly on the traditional Silicon Valley geography. By contrast, the next wave of apps can harken back to many of the best of the early social startups, which often featured mixed-gender founding teams, attracted talent from geographically diverse regions (Flickr was born in Canada!) and were often created by people with liberal arts degrees or even no degree at all. Aside from being the responsible thing to do, having a diverse team generates a variety of unexpected product features and innovations that don't come from the groupthink of homogeneous cultures.
- Exploit their weakness: Insularity. Another way of looking at the exclusionary tendencies of typical Silicon Valley startups is by considering the extraordinary privilege of most tech tycoons as a weakness to be exploited. Whether it's Mark Zuckerberg's unique level of privilege limiting his ability to understand why a single, universal public identity might ruin people's lives, or the tendency to launch apps first to a small, clubby circle of insiders, new startups don't have to repeat these mistakes. And by broadening their appeal from the start, new apps and networks can outflank the big players, paying attention to audiences that hadn't been properly respected last time. That insularity even extends to the tech industry typically ignoring the world of policy and regulations and government until it's too late. While the big tech players have formed their own RIAA, the best case is that they'll focus on overall issues like spectrum policy and net neutrality, ignoring the coming reality of policy changes that will try to protect regular users.
- Don't trust the trade press. Another essential step for breaking out of the current tech industry's predictable patterns will be for entrepreneurs and creators to educate themselves about the true history of the tech industry and its products. Our business tends to follow a few simple, repeating cycles, like moving from centralization to decentralization and back, or from interoperable communications to silos and back. But as we've discussed, you can't trust the tech press to teach you about the tech industry, so you'll have to know your shit. Fortunately, a lot of us old-timers are still around, and still answer our emails sometimes, so it's possible to just ask. Imagine if Instagram had simply asked the folks who used to work at Flickr, "Did you ever change your terms of service? What freaked people out?" And even better, we can blog our own progress, because if you didn't blog it, it didn't happen. In that way, we form our own community of practice, our own new peer review process for what we learn about making the web work the right way.
- Create public spaces. Right now, all of the places we can assemble on the web in any kind of numbers are privately owned. And privately-owned public spaces aren't real public spaces. They don't allow for the play and the chaos and the creativity and brilliance that only arise in spaces that don't exist purely to generate profit. And they're susceptible to being gradually gaslighted by the companies that own them.
Overall, there are lots of ways that the current generation of social sites are vulnerable. There are users that the current tech industry considers undesirable, and technology choices that are considered taboo, and traditions around hiring and product strategy that force them to concede huge opportunities right out of the gate.
As is obvious from the responses I've gotten, many, many people care about a social web that honors certain human and creative values. As I've spent years thinking about the right way to write for this blog, and to build ThinkUp, and to sit on the board at Stack Exchange, and to advise clients at Activate, and to work on all the other stuff I do, I just keep running into the fact that there's a huge opportunity to make a great new generation of human-friendly apps with positive social values.
These new companies will be recognizable in that they'll impact culture and media and government and society, and that they'll invent great new technologies. They'll still make a bunch of money for the people who found them. But they'll look different, both in terms of the people who make them, and the people they serve. And they'll be more durable, not optimized based on current fashions in financing, but because they're built on the accurate belief that there are people who care deeply about the web they use, the works they create, the connections they make, and the humans on the other side of those connections.
November 21, 2011
Facebook has moved from merely being a walled garden into openly attacking its users' ability and willingness to navigate the rest of the web. The evidence that this is true even for sites which embrace Facebook technologies is overwhelming, and the net result is that Facebook is gaslighting users into believing that visiting the web is dangerous or threatening.
In this post I intend to not only document the practices which enable this attack, but to also propose a remedy.
1. You Cannot Bring Your Content In To Facebook
This warning appeared on Facebook two weeks ago to advise publishers (including this site) that syndicate their content to Facebook Notes via RSS that the capability would be removed starting tomorrow. Facebook's proposed remedy involves either completely recreating one's content within Facebook's own Notes feature, or manually creating status updates which link to each post on the original blog. Remember that second option, linking to each post manually — we'll return to it later.
2. Publishers Whose Content Is Captive Are Privileged
Over at CNET, Molly Wood made a powerful case against the proliferation of Facebook apps that enable ongoing, automated sharing of behavior data after only a single approval from a user. In her words:
Now, it's tempting to blame your friends for installing or using these apps in the first place, and the publications like the Post that are developing them and insisting you view their stories that way. But don't be distracted. Facebook is to blame here. These apps and their auto-sharing (and intercepts) are all part of the Open Graph master plan.
When Facebook unveiled Open Graph at the f8 developer conference this year, it was clear that the goal of the initiative is to quantify just about everything you do on Facebook. All your shares are automatic, and both Facebook and publishers can track them, use them to develop personalization tools, and apply some kind of metric to them.
As Molly's piece eloquently explains, what Facebook is calling "frictionless" sharing is actually placing an extremely high barrier to the sharing of links to sites on the web. Ordinary hyperlinks to the rest of the web are stuck in the lower reaches of a user's news feed, competing for bottom position on a news feed whose prioritization algorithm is completely opaque. Meanwhile, sites that foolishly and shortsightedly trust all of their content to live within Facebook's walls are privileged, at the cost of no longer controlling their presence on the web.
3. Web Sites Are Deemed Unsafe, Even If Facebook Monitors Them
As you'll notice below, I use Facebook comments on this site, to make it convenient for many people to comment, and to make sure I fully understand the choices they are making as a platform provider. Sometimes I get a handful of comments, but on occasion I see some very active comment threads. When a commenter left a comment on my post about Readability last week, I got a notification message in the top bar of my Facebook page to let me know. Clicking on that notification yielded this warning message:
What's remarkable about this warning message is not merely that an ordinary, simple web content page is being presented as a danger to a user. No, it's far worse:
- Facebook is warning its users about the safety of a page which incorporates Facebook's own commenting features, meaning even web sites that embrace Facebook's technologies can be marginalized
- Facebook is displaying this warning despite the fact that Facebook's own systems have indexed the page and found that it incorporates their own Open Graph information.
To illustrate this second point, I'll include a fairly nerdy illustration for those interested in the technical side. What's being shown is Facebook's own URL linter, as viewed through the social plugins area of a site's developer console. In this view, it verifies not only that the Open Graph meta tags are in place (minus an image placeholder, as the referenced post has no images), but that Facebook has crawled the site and verified enough of the page's content to know their own comment system is present on the page. (Click to view the whole page, with only the app ID numbers redacted.)
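For readers who haven't seen it, Open Graph markup is just a handful of meta tags in a page's head, which is what the linter is checking for. A minimal sketch follows; the titles and URLs are illustrative placeholders, not this site's actual markup:

```html
<!-- Minimal Open Graph tags; all values here are illustrative placeholders -->
<meta property="og:title" content="Example Post Title" />
<meta property="og:type" content="article" />
<meta property="og:url" content="http://example.com/posts/example-post" />
<meta property="og:site_name" content="Example Blog" />
```

In other words, registering a page with Facebook's database takes four lines of markup, which makes it all the more striking that a page carrying them still triggers a safety warning.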
How to Address This Attack
Now, we've shown that Facebook promotes captive content on its network ahead of content on the web, prohibits users from bringing open content into their network, warns users not to visit web content, and places obstacles in front of visits to web sites even if they've embraced Facebook's technologies and registered in Facebook's centralized database of sites on the web.
Fortunately, the overwhelming majority of web users visit Facebook through relatively open web browsers. For these users, there is a remedy which could effectively communicate the danger that Facebook represents to their web browsing habits, and it would be available to nearly every user except those using Facebook's own clients on mobile platforms.
This is the network of services designed to warn users about dangers on the web, one of the most prominent of which is Stop Badware. From that site comes this description:
Some badware is not malicious in its intent, but still fails to put the user in control. Consider, for example, a browser toolbar that helps you shop online more effectively but neglects to mention that it will send a list of everything you buy online to the company that provides the toolbar.
I believe this description clearly fits Facebook's behavior, and I strongly urge Stop Badware partners such as Google (whose Safe Browsing service is also used by Mozilla and Apple), as well as Microsoft's similar SmartScreen filter, to warn web users when they visit Facebook. Given that Facebook is consistently misleading users about the nature of web links they visit, and placing barriers in front of web sites being reachable through ordinary links on its network, this seems an appropriate and necessary remedy for its behavior.
Part of my motivation for recommending this remedy is to demonstrate that our technology industry is capable of regulating and balancing itself when individual companies act in ways that are not in the best interest of the public. It is my sincere hope that this is the case.
Many aspects of this conversation are not, of course, new topics. Some key pieces you may be interested in:
- As I was researching this piece, Marshall Kirkpatrick published Why Facebook's Seamless Sharing is Wrong over on ReadWriteWeb, articulating many of these same concerns. His piece is well worth reading.
- Albert Wenger of Union Square Ventures makes a strong case for the long-term goal of a network of networks. I fully share his vision here, and hope most in our industry will endorse this idea as well.
- Molly Wood's excellent look at Facebook sharing which I referenced above is worth reading in its entirety.
- Blackbird, Rainman, Facebook and the Watery Web was a more optimistic look at how web platforms evolve that I wrote four years ago when Facebook was much less dominant.
- The Facebook Reckoning a year ago offered a perspective on the values and privilege that inform Facebook's decision-making.
- My ruminations on ThinkUp and Software With Purpose last week also explored the related danger of Facebook deleting everything you've ever created on their site.
July 20, 2011
We're twenty years into this world wide web thing. Today, I myself celebrate twelve years of writing this blog. And yet those of us who love this medium, who've had our lives changed by the possibility of publishing our words to the world without having to ask permission, are constantly charged with defending this wonderful, expressive medium in a way that creators in every other discipline seldom find themselves obligated to do.
Some of this is because the medium is new, of course. But in large part, it's because so many of the most visible, prominent, and popular places on the web are full of unkindness and hateful behavior.
The examples are already part of pop culture mythology: We can post a harmless video of a child's birthday party and be treated to profoundly racist non-sequiturs in the comments. We can read about a minor local traffic accident on a newspaper's website and see vicious personal attacks on the parties involved. A popular blog can write about harmless topics like real estate, restaurants or sports and see dozens of vitriolic, hate-filled spewings within just a few hours.
But that's just the web, right? Shouldn't we just keep shrugging our shoulders and shaking our heads and being disappointed in how terrible our fellow humans are?
This is a solved problem
As it turns out, we have a way to prevent gangs of humans from acting like savage packs of animals. In fact, we've developed entire disciplines based around this goal over thousands of years. We just ignore most of the lessons that have been learned when we create our communities online. But, by simply learning from disciplines like urban planning, zoning regulations, crowd control, effective and humane policing, and the simple practices it takes to stage an effective public event, we can come up with a set of principles to prevent the overwhelming majority of the worst behaviors on the Internet.
If you run a website, you need to follow these steps. If you don't, you're making the web, and the world, a worse place. And it's your fault. Put another way: take some goddamn responsibility for what you unleash on the world.
How many times have you seen a website say "We're not responsible for the content of our comments"? I know that when you webmasters put that up on your sites, you're trying to address your legal obligation. Well, let me tell you about your moral obligation: Hell yes, you are responsible. You absolutely are. When people are saying ruinously cruel things about each other, and you're the person who made it possible, it's 100% your fault. If you aren't willing to be a grown-up about that, then that's okay, but you're not ready to have a web business. Businesses that run cruise ships have to buy life preservers. Companies that sell alcohol have to keep it away from kids. And people who make communities on the web have to moderate them.
- You should have real humans dedicated to monitoring and responding to your community. One of the easiest ways to ensure valuable contributions on your site is to make people responsible by having dedicated, engaged, involved community moderators who have the power to delete comments and ban users (in the worst case) but also to answer questions and guide conversations for people who are unsure of appropriate behavior (in the best cases). Sites that do this, like MetaFilter and Stack Exchange sites (disclosure, I'm a proud board member of Stack Exchange) get good results. Those that don't, don't. If you can't afford to invest the time or money in grooming and rewarding good community moderators? Then maybe don't have comments. And keep in mind: You need lots of these moderators. The sites with the best communities have a really low ratio of community members to moderators.
- You should have community policies about what is and isn't acceptable behavior. Your community policy should be short, written in plain language, easily accessible, and phrased in flexible terms so people aren't trying to nitpick the details of the rules when they break them. And then back them up with significant consequences when people break them: Either temporary or permanent bans on participation.
- Your site should have accountable identities. No, people don't have to use their real names, or log in with Google or Facebook or Twitter unless you want them to. But truly anonymous commenting often makes it really easy to have a pile of shit on your website, especially if you don't have dedicated community moderators. When do newspapers publish anonymous sources? When the journalists know the actual identity and credibility of the person, and decide it is a public good to protect their identity. You may wish to follow the same principles, or you can embrace one of my favorite methods of identity: Persistent pseudonyms. Let users pick a handle that is attached to all of their contributions in a consistent way where other people can see what they've done on the site. Don't make reputation a number or a score, make it an actual representation of the person's behavior. And of course, if appropriate, don't be afraid to attach people's real names to their comments and contributions. But you'll find "real" identities are no cure for assholes showing up in your comments if you aren't following the rest of the principles described here.
- You should have the technology to easily identify and stop bad behaviors. If you have a community that's of decent size, it can be hard for even a sufficient number of moderators to read every single conversation thread. So a way for people to flag behavior that violates guidelines, and a simple set of tools for allowing moderators to respond quickly and appropriately, are a must-have so that people don't get overwhelmed.
- You should make a budget that supports having a good community, or you should find another line of work. Every single person who's going to object to these ideas is going to talk about how they can't afford to hire a community manager, or how it's so expensive to develop good tools for managing comments. Okay, then save money by turning off your web server. Or enjoy your city where you presumably don't want to pay for police because they're so expensive.
Just a start
Those are, of course, just a few starting points for how to have a successful community. You need many more key factors for a community to truly thrive, and I hope others can suggest them in the comments. (Yep, I know I'm asking for it by having comments on this post.)
But as I reflected on the wonderful, meaningful conversations I've had in the last dozen years of this blog, I realized that one of the reasons people don't understand how I've gotten such a wonderful response from all of you over the years is that they simply don't believe great conversations can happen on the web. Fortunately, I have seen so much proof to the contrary.
Why are they so cynical about conversation on the web? Because a company like Google thinks it's okay to sell video ads on YouTube above conversations that are filled with vile, anonymous comments. Because almost every great newspaper in America believes that it's more important to get a few more page views on their website than to encourage meaningful discourse about current events within their community, even if the content of the comments generating those page views is off-putting to good people. And because lots of publishers think that any conversation is good if it boosts traffic stats.
Well, the odds are I've been doing this blogging thing longer than you, so let me tell you what I've learned: When you engage with a community online in a constructive way, it can be one of the most meaningful experiences of your life. It doesn't have to be polite, or neat and tidy, or full of everyone agreeing with each other. It just has to not be hateful and destructive.
In that spirit, I've tried to hold off on actually naming names of people who run sites that encourage hateful horrible communities. Mostly because the people actually running the sites aren't being granted the resources or power to make the choices they need to make to have a fruitful community. But I'm lucky enough after all these years that my words sometimes get in front of those who do have the power to fix the web's worst communities.
So, I beseech you: Fix your communities. Stop allowing and excusing destructive and pointless conversations to be the fuel for your business. Advertisers, hold sites accountable if your advertising appears next to this hateful stuff. Take accountability for this medium so we can save it from the vilification that it still faces in our culture.
Because if your website is full of assholes, it's your fault. And if you have the power to fix it and don't do something about it, you're one of them.
Thank you to John Fraissinet for the image.
June 9, 2011
By request, a bit of explanation of how and why I favorite things on the internet. (Or favor them. Or like them. Whatever.)
First, where do I favorite? On Twitter, certainly: I love lots of tweets! On Facebook! That's mostly for liking things outside of Facebook, around the web. I like lots of videos on YouTube and on Vimeo, the latter of which probably has the most satisfying like/favoriting animation on the web. I judiciously like things on MLKSHK. I suppose I still favorite things on Google Reader from time to time, which always involves me starring, sharing, +1ing and clicking 10 other buttons in their UI, since I don't really know which one does what. YouTube has both liking and favoriting, too, but somehow that redundancy doesn't bother me as much.
And, perhaps more visibly than anywhere else, I star all kinds of things on Stellar, which is also where many of these favorites get aggregated and shared with others; it's my, erm, somewhat enthusiastic use of favoriting on that service (I'm by far the most prolific star-giver in these early days of the awesome little site) that has inspired the most recent "dude, what the hell?" responses from many of my friends. About six weeks ago, Jason showed me stats indicating I had about a third more favorites than the next-most-prolific person on the site.
Why am I so prolific with the stars? Well, one part is that I am just an enthusiastic person: I like lots of stuff! There's also social expectation; my favorite (see what I did there?) friend David Jacobs is a master of favoriting and taught me the wonders of the form years ago. In the early days of (now-defunct) Vox, David was specifically called out when the app added favoriting:
By popular demand, we've introduced the ability for users to mark posts, photos, audio, video and books -- from their own blog as well as other Vox blogs -- as favorites. We've nicknamed this feature the "David Jacobs" after friend and Vox user, who, at last count has favorited 1,677 photos on Flickr. It's a great way to keep track of good stuff you've seen on Vox, as well as keep a record of your own things that you particularly like.
Do me a favor
For all my enthusiasm, my habit of clicking stars and thumbs-ups all over the web is not unconsidered. My intention is fairly consistent, though I'm aware the semantics of these functions differ slightly across all these various services. A few common themes:
- Acknowledging good work: When someone writes a tweet that makes me laugh or think, or produces a video that's worth the time to watch it, I favorite it or like it as a "reward" of sorts to them. I don't know anyone who doesn't check the number of likes/faves on a work they've made at least some of the time, and that way they know I was rooting for them.
- Retaining for the future: Favoriting items increases my ability to retrieve them later. I've got Instapaper and Readability and Pinboard all hooked up together so that things I star get saved as bookmarks that I can retrieve later. Similarly, ThinkUp can show me a rough version of the links that were shared in tweets that I've favorited. Basically, I'm more likely to favorite something if I think it's worthwhile enough to return to later.
- Implicit sharing: These days, this may be my main motivation for favoriting lots of stuff on the web. Truth is, I often miss the curation and editorial fun of the link blog that I used to publish on this site. (Give me a shout if you remember that — it's been seven years since I stopped doing it, old-timer!) By judiciously favoriting good things across the web, I can share them with my friends, assuming they're on services like Stellar and Favstar and Facebook with me.
Now, there are a couple of factors that make my favoriting behavior unusual, compared to normal web users. (Beyond the fact that I probably waste even more time on the web than most people.) First, my social graph is extremely distorted. I have a lot of Twitter followers, so many apps and services that use "popular" Twitter accounts as fodder for link/tweet popularity factor in my favoriting behavior disproportionately. I'm not quite a suggested user on Stellar the way I am on Twitter (since Stellar doesn't have that concept), but I do have an exaggeratedly prominent placement on that site, too, so the impact of my favoriting is amplified.
In short, favoriting or liking things for me is a performative act, but one that's accessible to me with the low threshold of a simple gesture. It's the sort of thing that can only happen online, but if I could smile at a person in the real world in a way that would radically increase the likelihood that others would smile at that person, too, then I'd be doing that all day long.
- ToRead is To Be Human, from 2007, was about the fundamental optimism people have when they tag an article as something they intend to read in the future. Many people use favoriting this way today.
- An Interview with Paul Bausch that I did on the old Six Apart blog back in 2003. I've assigned the epithet "father of the permalink" to Paul for years, but in reality, just before Paul was implementing permalinks in Blogger, Jason was experimenting with them on Kottke.org. I think it's no accident that both are innovating on favoriting, Jason with Stellar and pb with continued experiments (some inane) on MetaFilter. Favoriting is the most fundamental, natural action to perform on the permalink, which is the atomic unit of content on the web.
- The Power of the Audience, from early last year, was the first time I really explored the idea of favorites as social, gestural feedback for creators. The situation here hasn't gotten much better since then.
- Actions are the Body Language. Back in 2008, I'd made a page to capture my social actions like favoriting, and wrote a bit about why. (The page of those actions is totally broken now, sadly, but being able to archive those gestures is one of the reasons I'm so passionate about making ThinkUp work well.)
- Matt Haughey's post on the feedback loops he relies on online, from early 2010.
- And finally, last year at Web 2.0 Expo NYC, I asked API head Ryan Sarver why favoriting is an afterthought on Twitter, at 7:27 in this interview video.
I wish there were a website that just had "favorite" (or "like") buttons you could embed, without it being all tied in to all the other crazy stuff Facebook does. But I'd settle for someone hacking ThinkUp to better support archiving my Facebook "likes" so I'd have a record of all the things I enjoy on the web. Actually, what the hell: $500 to the charity of your choice if somebody wants to make that work. Plus, if you tweet about doing it, I'll favorite your tweet.
June 16, 2009
FAIL is over. Fail is dead. Because it marks a lack of human empathy, and signifies an absence of intellectual curiosity, it is an unacceptable response to creative efforts in our culture. "Fail!" is the cry of someone who doesn't create, doesn't ship, doesn't launch, who doesn't make things. And because these people don't make things, they don't understand the context of those who do. They can't understand that nobody is more self-critical or more aware of the shortcomings of a creation than the person or people who made it.
When someone says "FAIL", what they're really saying is, "I'm failing to understand a creative person's constraints."
Of course, I'm not the first to point out that "Fail" sucks. Andy Baio articulated the case quite well, and I even touched on it in my Battledecks presentation a few years ago.
But we know that people who cry "FAIL!" are assholes — so why do we have to deem their petulant cry completely unacceptable? It's because of the Law of Fail:
Once a web community has decided to dislike a person, topic, or idea, the conversation will shift from criticizing the idea to become a competition about who can be most scathing in their condemnation.
It is in this way that the obnoxious jerks who offer an unthinking, uncritical belch in response to others' efforts kick off an even worse mob-minded pile-on. And what I want to make clear is that those who begin these conversations are, it must be said, the true failures. They choose a reflexive shorthand instead of a reasoned critique, and they bring out the worst in a community. I care deeply about people being creative on the web, and I care almost as much about people having thoughtful and productive conversations on the web.
So, fail is dead. I won't accept it in dialogue from those I communicate with, I won't permit those I'm connected to on social networks to use it around me, and no, you're not the first to think you're clever enough to use it as a comment here. If you have the urge to say it and you're a good person, then go do something creative instead. If you have the urge to say "Fail" and you haven't done anything? Well, then your statement speaks for itself.
September 22, 2008
One of the most frequent questions I get when I talk to people who are unfamiliar with social media on the web is, "Who writes all these blogs or Wikipedia? Who has the time?"
The answer, at least in this case, is me.
People who are skeptical about the web never seem to believe that anyone has the time to spend writing or collaborating on something original there. But they do understand the idea that people might be eager and excited to write about topics they're passionate about.
So when I returned to a topic that's been an interest of mine for quite a while, I saw an opportunity to create a new Wikipedia page to promote the work of someone whom I admire and respect, and who inexplicably lacked a Wikipedia profile.
I wrote a simple page about Alan Leeds, whose role as a behind-the-scenes force in the popularization, promotion, and success of funk music truly can't be overstated. I admire his acumen, his taste, and the thoughtfulness of his work over the years. But, as is the nature of people who work in music but aren't performers, his achievements thus far don't get enough attention outside of the respect for his work within the industry. I wouldn't argue a Wikipedia page alone is going to fix that, but it can help by being a useful resource for those of us who might want to make the case in the future. I have no doubt that I'm missing some of the subtle nuances that Wikipedia's moderators prefer (mostly because I don't really want to learn that much of the details of editing wikis), but the substance of the article is largely correct.
To my mind, that's a perfect motivation for the creation of a resource that people can use as a reference. Better yet, I am fairly confident I can draw the attention of friends and acquaintances who might have much more expertise about Mr. Leeds, and hopefully inspire them to point out resources or information that can improve the quality of the new page.
So, here's the brand new Wikipedia page about Alan Leeds. If you think you've got something to add, revise the article, pass along any relevant source materials, or add your voice in the comments. And if you're unfamiliar with his work, check it out — there's almost nobody else in the music business who's been so right, so many times, about the past, present and future of the funk.
January 2, 2007
I'd explained how to kill a personality a few weeks ago. Perhaps I was too pessimistic when I said, "[W]hat I see right now is the depressing reality that everybody can be completely reasonable, and the end result is that nobody is allowed to show the most engaging, interesting and unique parts of their personality."
John Furrier was actually at the dinner that inspired the entire conversation, the article in Fortune, and the ensuing hubbub, and he adds some much-needed facts to the discussion. Call me old-fashioned, but I prefer a witness's account over my third-party conjecture any day. John says, "I was at this 'famous crap porn comment dinner' that Seagate put on for bloggers and press. I sat with Bill Watkins and was there with Jeffrey O'Brien from Fortune."
His conclusion? I was worrying needlessly:
I disagree with Anil ... Bill has a vibrant and dynamic personality - he is viewed within Seagate as a great leader. His comment was part of a bigger conversation - let me translate for people not aware of the slang - "crap = stuff" and "porn = early adopter rich multi media". Everyone in the tech business knows porn is the bellweather for all tech trends. Shame on Fortune because either way they look bad. One they know porn is an early adopter of all tech media, so they look bad for misquoting the CEO of Seagate. Secondly, if they didn’t know porn is the early adopter of media, then they look bad as a publication trying to cover tech with any credibility.
For a church going person then the quotes put forth by Fortune seem offensive - I was there at the dinner Bill Watkins was taken out of context. Fortune owes Bill Watkins big time for slamming him. Does it matter Bill Watkin and his crediblity was positive in the blogosphere and to the intelligent users.
This sentiment is echoed by Eric Eggertson over on Common Sense PR:
Straight shooters may occasionally apologize for things they’ve said, and they may temper their comments sometimes. But in my experience, the urge to speak plainly and openly is hard to overcome, once an executive has had success with that approach.
The business world would be a greyer place without some mavericks who are willing to make comments that haven’t been vetted by a committee.
So maybe there's still some hope yet for executives who speak their mind in public. I would just like to make sure I never see the phrase "famous crap porn comment dinner" again. Call me old-fashioned again, but that seems somewhat... unappetizing.
December 22, 2006
About a month ago, Fortune's Jeffrey O'Brien interviewed Seagate CEO Bill Watkins, and pulled the conversation's most memorable quote for the headline: "Let's face it, we're not changing the world. We're building a product that helps people buy more crap - and watch porn."
In the course of one particular conversation with a Fortune Magazine blogger, in which we discussed a number of topics including sports, business and politics, I also explained how the proliferation of digital content and e-commerce were benefiting the storage business. In illustrating both the positive and negative impacts that the Internet and "we" as technology companies have on the world, unfortunately, and unwisely, I also used pornography as an example to illustrate a point. Fortune Magazine chose to focus narrowly on this example in their headline. I did not state this as our "mission." They are in the news business and eager to get their reader's attention and I should have known better. Even though I believe Fortune's headline writers took my comments out of context, I want you to know that I am sorry if this has in any way offended anyone. Clearly, I value everyone who works at Seagate and the culture we have built together.
Here we have a chain of perfectly reasonable behaviors leading to a result that's unsatisfactory for everyone involved. Watkins reasonably said the quote in the context of a dinner conversation with a number of bloggers, where I'm sure a lot of jokes were being exchanged. O'Brien reasonably included the line in the story because it's a good hook for presenting the company as down-to-earth. O'Brien's editor Jim Ledbetter reasonably used the line in the story's headline because, in his own words, "as O'Brien's editor on this story, I moved the quote high up in the story, and also turned it into a headline that, yes, I thought would grab the reader's attention."
And some Seagate employees in Minnesota reasonably thought, "Hey, my work is more important than just letting somebody store porn."
But the net result is that Seagate's CEO is going to work extra hard to never show any personality or have a sense of humor again when he's on the record. Jeff O'Brien will be a little more reluctant to include the killer line in a story. Jim Ledbetter is going to be more sensitive to charges he's being sensational in his headlines. And Seagate employees are going to spend more time worrying about whether their CEO represents them accurately, or if their work is meaningful.
There must be some lesson to be learned here, about the telephone game. Or about the fact that any of us, as public figures, can be quoted out of context at virtually any time. But what I see right now is the depressing reality that everybody can be completely reasonable, and the end result is that nobody is allowed to show the most engaging, interesting and unique parts of their personality. I want to blame the Minnesotans, but it's really not their fault.