Results tagged “privacy”

Facebook makes it official: You have no say

November 28, 2012

anil-dash-wired-tos-column.jpg

Late on Wednesday, just as Americans were taking off for the Thanksgiving holiday, Facebook announced its intention to change the feedback process for the policies which govern use of its service.

For the last few years, as I'd mentioned in Wired a few months ago, Facebook held sham elections where people could ostensibly vote on its policy changes. Despite lots of responses (the most recent Site Governance vote got far more people participating than signed the secession petitions on the White House website), Facebook never promoted these policy change discussions to users, and the public has never made a substantive impact on site governance.

Now, Facebook follows in the steps of most tyrants, quietly moving from sham elections to an official policy under which users have no vote in site governance.

Intentions

I'd like to give Facebook the benefit of the doubt on their change to site governance, as the company pinkie-swears that they'll listen to users now. Facebook even offers up their well-intentioned Chief Privacy Officer, Erin Egan, to lead conversations designed to engage with the public about site policies:

As a result of this review, we are proposing to restructure our site governance process. We deeply value the feedback we receive from you during our comment period. In the past, your substantive feedback has led to changes to the proposals we made. However, we found that the voting mechanism, which is triggered by a specific number of comments, actually resulted in a system that incentivized the quantity of comments over their quality. Therefore, we’re proposing to end the voting component of the process in favor of a system that leads to more meaningful feedback and engagement.

But if Facebook believed in this move, and thought it would be embraced as positive by users, it wouldn't have been announced late on the day before Thanksgiving, with a deadline for responses just a few days later. No matter how earnestly Egan wants to hear from the public, this effort is structured in a way where public feedback on site governance will almost inevitably be futile.

Copy and Paste Panic

Though this policy change from Facebook attracted very little attention, as was no doubt intended by its release just before a holiday weekend, a separate panic about Facebook's terms of service reared its head in the last few days. A copy-and-paste meme inspired thousands of Facebook users to post a message to their profiles asserting their copyright over their content, with explicit calls for Facebook not to exploit their posted data, tied to a bigger perceived threat due to Facebook's recent listing as a publicly-traded company.

Facebook offered a terse refutation of the need for the meme, explaining correctly that users retain copyright on their works by default and thus have no need to share this declaration. (Especially true as posting this message to a Facebook wall would be ineffective for this purpose regardless.)

But Facebook's one-paragraph response to hundreds of thousands, perhaps millions of users expressing concern about their personal data and content shows exactly why their site governance process is unacceptable.

A brief "fact check" about the site's copyright policy assumes that simply correcting the factual error in the premise of postings solves the issue that's being raised. One can almost hear the condescension behind the Facebook response ("they spelled 'Berne Convention' wrong!"), but there's a glaring absence of any effort at addressing the emotional motivation behind so many users crying out.

Facebook sees a large-scale user protest as a problem to be solved, rather than as an opportunity to serve their community. And as a result, they offer a curt legal or technical dismissal of users' concerns over their content, rather than empowering them with simple, clear controls over the way their information is used.

What Facebook Could Do

Facebook and its apologists will say, "but we already have good privacy controls!" and will point to their settings page, which, to their credit, has been admirably simplified.

Now imagine if, instead of posting a "fact check", Facebook had responded to the rapidly-spread cries for intellectual property control on the site by leading and guiding their community in a way that was better for users while also being better for the web.

The same brief explanation that users retain copyright on their content could be followed by two simple controls, the first reiterating the site's existing privacy controls:

fb-privacy-controls.png

And then a second one (this is just a quick, silly mockup I made) could default to the existing rights and protections, but offer a simple interface for applying a Creative Commons or similar license to shared content.

fb-rights-controls.png
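To make the idea concrete, here is a rough Python sketch (the names PostRights and ContentLicense are invented for illustration, not anything Facebook actually exposes) of a per-post rights setting that defaults to full copyright and only changes when the user explicitly opts into a Creative Commons license:

from dataclasses import dataclass
from enum import Enum


class ContentLicense(Enum):
    ALL_RIGHTS_RESERVED = "all-rights-reserved"  # today's default: the user keeps full copyright
    CC_BY = "cc-by"                              # Creative Commons Attribution
    CC_BY_NC = "cc-by-nc"                        # Attribution-NonCommercial
    CC_BY_ND = "cc-by-nd"                        # Attribution-NoDerivatives


@dataclass
class PostRights:
    author: str
    license: ContentLicense = ContentLicense.ALL_RIGHTS_RESERVED

    def summary(self) -> str:
        if self.license is ContentLicense.ALL_RIGHTS_RESERVED:
            return f"{self.author} retains all rights to this post."
        return f"{self.author} shares this post under {self.license.value}."


# By default, nothing changes for the user:
print(PostRights(author="anil").summary())
# Opting in is a single, explicit choice:
print(PostRights(author="anil", license=ContentLicense.CC_BY).summary())

The point of the default is that the status quo stays exactly as it is; the control only adds a choice for people who want to grant more than the law already grants on their behalf.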

"But wait!" you cry. "Isn't this much more complicated? Isn't it a bad UX to force a choice on a user?" To which I reply: Not when a desire for control is what they're expressing.

Because the emotional underpinning to the hue and cry over copyright and permissions on Facebook isn't some newly-discovered weird mass fixation on intellectual property rights. It's a simple expression of a lack of trust in Facebook: users feel that, as a publicly-traded company accountable to shareholders to maximize value, Facebook is less accountable to their preferences on privacy and permissions.

Think about the feelings behind an ordinary Facebook user updating their status to say, in part, "By the present communiqué, I notify Facebook that it is strictly forbidden to disclose, copy, distribute, disseminate, or take any other action against me on the basis of this profile and/or its contents". They're expressing the fear that Facebook is going to disclose their personal thoughts, and exploit them for commercial gain.

You don't solve that level of concern by offering occasional web chats with a Chief Privacy Officer.

Being Of Service

Facebook needs to change its culture to one where it's determined to be of service to users who are worried, even if those users have some misunderstandings about technical details or esoteric legal concepts. Because the fundamental issues of trust and vulnerability are legitimate, and users deserve to have them addressed.

There's also a huge long-term liability for Facebook if these issues of trust aren't addressed. Companies face the wrath of regulators and the vagaries of policy changes not just because of lobbyists and wranglings in the corridors of government, but often because ordinary people have a gut sense that these huge corporations aren't working in their interests. That sentiment can express itself in a million different ways, all of which serve to slow down the innovation of a company, limit its opportunities to reach new audiences, and eventually come to cripple its relevance in the market. Microsoft should offer a sobering example to Facebook of a company that, in addition to breaking the law (which Facebook seems on course to do in a few years with its constantly-shifting policies), had separately earned such mistrust and animosity from the industry and from users that decisive legal action against the company was all but inescapable.

When Facebook went public, Mark Zuckerberg wrote a great letter to investors, which began with a simple statement:

Facebook was not originally created to be a company. It was built to accomplish a social mission — to make the world more open and connected.

...

We hope to strengthen how people relate to each other.

When Mark says that, I believe him. I do sincerely think that's what he intends to do, and what he hopes for his employees to do. But from the micro-level decisions, where a panic over content rights is handled in a perfunctory, dismissive way, to the macro level, where the fundamental negotiation with users over their empowerment as the lifeblood of the service goes unaddressed, Facebook has made it obvious that they're not culturally ready to meet the mission Mark has laid before them.

It's not too late to change, but Facebook has the obligation to truly embrace empathy for its users. Right now, it's sneaking out the warnings in the dark of night that things are going to get worse before they get better.

Facebook and Skeletons

November 7, 2010

I'm quoted in today's New York Times, talking about how politicians in this year's election have had to confront their pasts, as shared through social networks:

“I think all of us know that politicians would have to confront the Facebook skeletons in their closet, but that it would be in 20 years, not in two years,” said Anil Dash, a technology consultant and pioneer of the blogosphere when it was just beginning in the late 1990s. “By the time the next generation comes into power, they’ll just assume this is how it’s always been.”

I feel pretty solid in saying that we all knew this reckoning was coming; I wrote about it myself in 2002 ("We're all celebrities now, in a sense. Everything that we say or do is on the record. And everything that's on the record is recorded for posterity, and indexed far better than any file photo or PR bio ever was.") and lots of other folks got there earlier than that.

But as I was trying to make clear in the Facebook Reckoning two months ago, this is a problem that disproportionately affects those with fewer social privileges. The rich can often hide their misadventures, and Ivy League graduates can inoculate themselves, as we saw George W. Bush do by calling all of his life before he turned 40 off limits, and as Barack Obama has done by writing a book that mentions the worst of his transgressions so that they'd be boring or considered "old news". Both politicians found great success in having their pasts ignored, if not erased. Frankly, I'm glad for that — I think the tradition of pre-emptive disclosure via analog methods sets a great precedent for others to have their digital pasts ignored as well.

There's also a real danger, though, in the vulnerability that digitally-savvy political candidates have here, that luddite candidates do not. I think it's no coincidence that every single pro-net neutrality candidate lost in last week's elections. There may have been many causes, but if push comes to shove, who's going to be the candidate with an embarrassing Facebook photo: The candidate who is an extensive user of social networks with a long-time history dating back to their young days of poor judgement? Or the one who knows nothing about technology, mistrusts it, and sees it only as a source of potential vulnerabilities? Simply not being afraid of technology may become a political liability if we continue to allow those who resent and fear technology to set the rules of engagement.

There's also the interesting, and consistent, media habit of blaming social networks for every unfortunate indiscretion that is brought to people's attention, even if social media wasn't involved at all. Take Blake Farenthold (please!). The Texas Republican was photographed in an unfortunate set of duckie pajamas, as illustrated in this brilliant Joe Coscarelli article and slideshow in the Village Voice, which collects all of the damning episodes outlined in the Times story into one perfect piece of linkbait.

But Farenthold didn't post the picture on any social network. As far as we know, no one did. It just got leaked directly to the press. And, given enough privilege, that image can be suppressed quite effectively from the most prominent media venues around, just as Farenthold's incriminating picture never appeared in the New York Times. Farenthold won his district by 799 votes.

DRM and Friends

January 19, 2009

This one's been kicking around in my head for a while, and maybe you can all help me understand it. With any contemporary social networking site, I can control who has access to the things I share, and I can update or change or revoke the relationships that enable that access at any time.

For example, I can share a photo on Flickr with just my friends, or a post on Vox with just my family, or display my profile on Facebook to just my contacts. And then, if somebody ceases to be my friend, I can change their status and they no longer have access to that information. It's a unilateral, technologically enforced restriction, and circumventing the restriction would be tantamount to hacking and likely to get you banned from any of these services.
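To make the mechanics concrete, here is a minimal Python sketch (the names and data are made up, and this is not any real site's internals) of that kind of relationship-based, revocable access control:

friends = {("anil", "alice")}             # the owner unilaterally controls this set
item_audience = {"photo-123": "friends"}  # each shared item carries an audience setting


def can_view(viewer: str, owner: str, item: str) -> bool:
    # Access is re-evaluated on every request against the *current* relationship,
    # so revoking the friendship instantly revokes access.
    audience = item_audience.get(item, "only-me")
    if viewer == owner:
        return True
    if audience == "friends":
        return (owner, viewer) in friends
    return False  # "only-me" or an unknown audience: no access


print(can_view("alice", "anil", "photo-123"))  # True while the friendship stands
friends.discard(("anil", "alice"))             # the owner revokes the relationship
print(can_view("alice", "anil", "photo-123"))  # False: access is gone, much like a DRM revocation

Nothing gets re-encrypted when the relationship ends; the gate simply closes, which is exactly the property that makes the comparison to DRM feel apt.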

So, with all of that being said, how are privacy settings on social networks different than DRM restrictions placed on media content files from companies? Is it because I'm not a corporation? Is it because the DRM technology is provided by Flickr or Facebook instead of by Apple's iTunes or Microsoft's Windows Media? Is it because I only (theoretically) grant permissions to dozens or hundreds of people, instead of millions?

This is a genuine question, because it's something I'm not sure I know how to articulate. I can certainly identify the difference in intent, but I am not sure I can explain the difference in definition. Feel free to comment here, or post a link or reply to @anildash on Twitter and I'll collect the best explanations I get.

Web History's History

April 21, 2007

I found some really interesting responses to the launch of Google Web History that are all well worth visiting.

  • CNET's Margaret Kane has a roundup of news on their news blog.
  • Mark Blair's SMOblog (which stands for "Social Media Optimization", a term I kinda like) says Google is organizing the world's conversation. It's a fairly generous variation on Google's original mission of "organizing the world’s information", which I think Google abandoned long ago, but it's well worth the read.
  • The Globe and Mail's Mathew Ingram asks How much do you love Google? I believe Tina said it best: What's love got to do with it?
  • Adobe's John Dowdell, whom I'm a huge fan of, always has a great perspective. This time on proprietary data:
Microsoft's hyperintegration of code and functionality led to their well-known security problems over the past ten years... Google seems similarly vulnerable these days, with their hyperintegration of user data. It looks like they're trying to handle it correctly, but it's a heavy weight to accept. I suspect that eventually we'll see a counter-pressure, towards decentralized data services rather than private, opaque, and centralized data silos.
  • Aliza Sherman has some nice words that get at exactly why I like blogging about these things -- hopefully a good blog post can provide perspective that's useful for those too busy to do the research themselves.
  • Rex Hammock offers a more personal look at Web History, focusing on the attention implications of the new service.
  • Geek and Poke already has a comic strip up about Web History.
  • And, winning the "Best Headline" award, is Good Morning Silicon Valley, with Those who do not purge history are condemned to reread it. Aaaand I think nobody's topping that one today, folks.

Google Web History - Good and Scary

April 20, 2007

Many years ago, when the web was a simpler place, one of the scariest monsters conjured up to describe the privacy threats that lurked on the Internet was the DoubleClick cookie, used for tying your ad-viewing behavior on the web to your real-world identity. USA Today said it was Orwellian, and set off a half-decade of worries for web surfers, many of whom didn't even have the foggiest notion what they were worried about.

Today, Google's released Google Web History. It's a brilliant, powerful, even insightful tool that will undoubtedly worry those who were concerned about privacy in the early days of the web's popularity. It doesn't help that Google now owns DoubleClick, and all those worries about cookies are amplified by the fact that Google actually stores all of this data on its computers, not yours, tied to an identity that might well also be linked to your email, your office documents, your instant messages, and of course your browser history itself, courtesy of the browser toolbar.

Google Web History

Services For Your Web History

From a technical standpoint, Google Web History is one of those tools that's so well-executed it seems simple, or even obvious, the first time you see it. There's a basic timeline of your search history, with the ability to drill into specific search result histories for Google properties like web search, image search, news, Froogle (now renamed Google Product Search, though the UI for Web History shows the old name), Video, and Maps. There's even, astoundingly, a history of which AdSense Ads you've clicked on.

Some Google properties are missing -- Google Apps documents don't show up in your history, and the more loosely-connected services like Blogger, Reader, and Picasa are nowhere to be found. Plus, there's a peculiar disconnect with the Google Desktop Search tool's services -- the Timeline feature shared between both applications appears completely different, and your desktop history isn't integrated into the new service.

As you'd expect, there's a prominent and simple way to remove those scurrilous bits from your web history. And the improved presentation of an item as mundane as one's browser history highlights a recent strength of Google's: revealing data you already have access to. The Google Desktop Search tool on Windows made smart use of a disk indexing system that Microsoft had already built into Windows. In a similar way, the Web History service makes use of the Google Toolbar history to take old data and turn it into useful information through smart presentation.

There's a promising, but (for me, at least) still blank area titled "Interesting Items", and the reappearance of a feature that first showed up in the excellent Google Reader: Trends.

Google Web History's Trends Display

Now, Google's data for my own history is slightly skewed; I tend to use Blingo for a lot of basic searches on my computers, and Google's toolbar doesn't track that. But the fundamental underpinnings for a remarkably deep look into behavior on the web are already present.

The Real World

Google Web History's Web Activity Chart

Outside of the world of users who gawk at every shiny new thing on the web, though, this is going to give people the heebie-jeebies in a way that we're probably only used to getting from Microsoft. In fact, it's probably safe to say that no other major web company could release this product today; the backlash from the user community of players like Microsoft, Yahoo, or AOL would simply be too strong.

Google is still in a period where most users on the web feel they are a relatively benevolent company. And it helps that the new product is excellent, useful, and unique. But with the release of Web History, especially in the context of its recent acquisitions and announcements, Google may have crossed the line where regular users start to react with skepticism and caution instead of unabashed enthusiasm.

This product is all about web history. We've already learned some lessons from the history of the web about what happens to companies once users start to question their trust in the intentions or implications of new products. It may serve Google well to revisit those lessons.

Some Links

Here are a few useful links to add to your own web history:

Fired for Wording!

February 19, 2007

Sure, Microsoft Word is fine for kids who want to write papers for school, but serious professionals should be very worried about using this dangerous tool! Just a few weeks ago, I found out about this poor Des Moines woman fired for Wording at work!

Three hundred single-spaced pages of irresponsibility. Her name is Emmalee Bauer, so perhaps we should give her a verb -- be careful with your word processor, or you could get Emmaleed™.

In completely, totally unrelated news, there was a nice story about Vox in the New York Times.

privacy through identity control

December 17, 2002

Every time there's a resurgence in general-audience (non-techie) interest in Google, as after Newsweek's recent Google fawning, the issue of privacy in the presence of a pervasive and permanent record rears its ugly head. People who aren't technologically savvy don't realize that statements don't fade away or remain in confidence on the web; the things we say only get louder and more widely known, unless they're completely trivial.

We're all celebrities now, in a sense. Everything that we say or do is on the record. And everything that's on the record is recorded for posterity, and indexed far better than any file photo or PR bio ever was. It used to be that only those who chose career paths that resulted in notoriety or celebrity would face having to censor themselves or be forced to consciously control the image that they project. But that exclusivity has faded as celebrity culture has grown and as individuals are increasingly marketed as brands, even products.

Naturally, this affects larger groups of people. First it was actors, then musicians, then entertainers of all stripes. We count politicians as celebrities now, too. I realized this when I ran into Rudy Giuliani this weekend. I took a moment to thank him for the work he did for this city, but I realized from the reaction of his body language that he was much more used to being approached as a celebrity than as a politician. I've met a few prominent examples of both over the years, both more and less well-known than Giuliani, and one of the constants that I've seen is that they're the only people who pay as much attention to the phrase "on the record" as do journalists.

Trent Lott comes to mind, when we consider the permanence of a celebrity/politician's statements, of course. Few of us who were alive in 1980 have to be concerned that any of our statements from that year will come back to haunt us, let alone some of our more obscure comments, aimed at audiences that we feel might be sympathetic. But that won't be the expectation of the generation of kids growing up today. Even their most casual instant messages will be "on the record". And it's not the sort of record that suffers the vagaries of our files today, where the audio to that reel might be lost, or the words on the original obscured by an errant coffee cup's ring.

So what to do? Well, first, of course, social expectations will change. The fear everyone has is that we'll all have to be nice all the time. And niceness sucks. It's the valid part of the backlash against "political correctness". Except that most of the people who object to political correctness do so because they resent that they've lost the chance to be coarse and offensive in public. They're resenting the loss of social control that they used to have, when calling a person or a group by an offensive name was acceptable because there wasn't any social or political cost to doing so.

But if we're not going to become nice while all our words are for the record, what will we do? Well, we'll adapt and become more reasonable in our expectations of people in the public eye. Instead of expecting that Britney Spears never acknowledge the loss of her virginity so that she might preserve a marketing message, we'll either accept that she tells the truth, or not require her to discuss it at all. One or two generations from now, the impossibility of scrubbing every private utterance for the demands of permanent public presentation will lead to a society much more accepting of occasional flubs, faults, and flaws. Behold, the triumph of context. Metadata about a person, and hyperlinks to their lifelong record, will inform the decisions made by a public used to an informal, non-governmental version of Total Information Awareness.

So do we have to, as Scott McNealy said, "get over" our desire for privacy? Do we have to permanently filter our thoughts and expressions, lest they be thrown back at us at some inopportune moment in the future? What do we do until people are used to seeking out context, until meta is intrinsic? Well, you have to own your name.

Go look me up. Googlism's use of Google searches to define a topic was so addictive that Google's WebQuotes was created as a virtual clone. And the phrases that pop out of those services aren't entirely inaccurate. But if you do a simple Google search on my name, what do you get? This site.

I own my name. I am the first, and definitive, source of information on me.

One of the biggest benefits of that reality is that I now have control. The information I choose to reveal on my site sets the biggest boundaries for my privacy on the web. Granted, I'll never have total control. But look at most people, especially novice Internet users, who are concerned with privacy. They're fighting a losing battle, trying to prevent their personal information from being available on the web at all. If you recognize that it's going to happen, your best bet is to choose how, when, and where it shows up.

That's the future. Own your name. Buy the domain name, get yourself linked to, and put up a page. Make it a blank page, if you want. Fill it with disinformation or gibberish. Plug other random people's names into Googlism and paste their realities into your own. Or, just reveal the parts of your life that you feel represent you most effectively on the web. Publish things that advance your career or your love life or that document your travels around the world. But if you care about your privacy, and you care about your identity, take the steps to control it now.

In a few years, it won't be as critical. There will be a reasonably trustworthy system of identity and authorship verification. Finding a person's words and thoughts across different media and time periods will be relatively easy. Getting a "true" picture of that person might be possible, even simple. But that's years away. For now, recognize that you're a celebrity, treat your likeness and personal information with that gravity, and choose which statements and facts are going to represent your presence in the global media universe. Any adult in an industrialized society who hasn't taken these steps is forfeiting opportunity and security, out of either laziness or ignorance. Maintaining privacy in the face of corporations and governments that wish to violate it requires a bit of identity judo, neutralizing their desire for everything by freely giving away just a little bit.

So, who owns your identity right now?
