Gaslighting: The Response
Well, it seems like my post on how Facebook is gaslighting the web struck a nerve with a lot of folks. I have to give first priority to publishing the responses I’ve gotten directly from Facebook employees, to be fair to their perspective.
- Louis Brandy, a Facebook engineer, responded in the comments on my site:
I work at facebook on the team that generates the warning in question (site integrity). This warning appears to me to be a bug and we are currently trying to repro and fix. Continuing, though, to say that the warning is disingenuous is simply not correct. I do not agree with your premise that because you use a social plugin we should automatically whitelist you and exempt you from security checks. Malicious pages do that stuff too.
In this particular case, though, in my opinion so far, this would appear to be a false positive (a bug) from the way the comment widget generates notifications. Those notifications seem to wrongly trip a particular security check.
- Louis also left what is substantially the same comment on the (surprisingly thoughtful!) Hacker News thread about my post.
- Christopher Palow, another Facebook engineer, emailed me privately to address many of the same issues as Louis. Christopher explained that what he called the “linkshim” (the redirect which handles outbound links) performs a few key functions: it blocks access to known spam links, it preserves privacy by keeping your Facebook user ID from potentially being passed along as a referrer, and it lets referrer logs show that traffic is coming from Facebook, which otherwise wouldn’t happen when a Facebook user reaches a site via HTTPS. Christopher offered a detailed perspective on the linkshim redirect which I found interesting even outside the context of my particular post; I’ve included a rough sketch of how that kind of signed redirect might work after this list:
Every external link clicked on Facebook and sent by Facebook in an email goes through the linkshim (if it doesn’t, that’s a bug). Each of these links is generated on the fly for the intended viewer and is cryptographically signed for only that viewer. We do this to prevent our linkshim from being abused by spammers as an open redirector. You saw the warning message that occurs when this signature is either missing or you are neither the user who generated the link nor one of that viewer’s friends. This happens when our linkshim links get passed around outside of Facebook via IM or email. [Functional example of reproducing this behavior omitted.] In addition to other checks, we added a grab all your friends and check if the signature matches exception in order to mitigate abuse false positives from friends sharing links over IM/email. Only a very tiny fraction of users of the linkshim see the warning you saw.
I feel the language of the warning is pretty benign but I am open to your suggestions on how to improve it. Just keep in mind we have to balance false positives such as the one you saw with the damage that can occur if spammers can exploit our users’ trust of Facebook URLs.
- More compelling to me was this thread on Les Orchard’s Facebook profile, where he’d shared a link to my post. In that thread, Mike Shaver offered his perspective on the post. This is particularly notable because Mike is both a (brand new) Facebook employee and a board member for StopBadware. That’s an extraordinary combination, and potentially an extraordinary conflict, but Mike’s thoughts are worth a read. A highlight:
Facebook is not saying that your site is unsafe, and the text is bog-standard “hey, be careful where you put your password” motherhood and Apple-pie advice. It does not block the load like Google and Mozilla’s malware interposition, and the experience is entirely different. Comparing them as you have is frankly fatuous, and I suspect pretty disingenuous as well. Do you really think that FB set out to put that screen up for any reason other than trying to protect users? You’re going to be pretty much calling people straight-up liars, based on what they’ve said publicly about it.
(I’m on the board of StopBadware, and have some idea of what happens to sites when they get on the malware-block list, and what the false positive rate is.)
- MetaFilter’s discussion of my post was also fairly thoughtful, if a bit one-sided, and it was nice to have my ideas discussed on the site without the thread being a referendum on me personally.
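To make the mechanics Christopher describes a bit more concrete, here’s a minimal sketch (in Python) of how a per-viewer, cryptographically signed redirect link could work. To be clear, this is my own illustration and not Facebook’s actual code: the domain, the parameter names, the HMAC construction, and the friend check are all assumptions on my part.

```python
import hashlib
import hmac
from urllib.parse import urlencode

# Hypothetical server-side secret; the real scheme is not public.
SECRET_KEY = b"server-side-secret"

def sign_link(destination: str, viewer_id: str) -> str:
    """Build a redirect URL signed for one specific viewer."""
    payload = f"{destination}|{viewer_id}".encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    query = urlencode({"u": destination, "v": viewer_id, "h": signature})
    return f"https://example.social/l.php?{query}"

def check_click(destination: str, viewer_id: str, signature: str,
                requester_id: str, friends_of_requester: set[str]) -> str:
    """Decide what happens when someone follows a signed link."""
    payload = f"{destination}|{viewer_id}".encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return "block"      # missing or forged signature: refuse to redirect
    if requester_id == viewer_id or viewer_id in friends_of_requester:
        return "redirect"   # the intended viewer (or a friend) clicked it
    return "warn"           # link was passed around: show the interstitial
```

In a scheme like this, a link copied out of one person’s session and clicked by someone who is neither that viewer nor one of their friends fails the check and gets the interstitial warning, which matches the false-positive path Christopher describes for links passed around over IM or email.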
I also wanted to address a few key issues that have surfaced since the post first started getting responses:
- Holy shit, one of the board members of StopBadware works for Facebook! That kind of blew my mind. Now, Mike’s a nice guy, and the StopBadware folks are both trustworthy and well-intentioned. But as an industry, we in tech effectively delegate much of our policing to volunteer organizations such as StopBadware, and that leaves the potential for extraordinary conflicts when someone requests (as I did) policing actions against major players which employ members of those organizations.
- “But you have Facebook comments on this page!” Yep, I do. I’m not some anti-Facebook zealot, and I don’t like to criticize companies or products without making a sincere effort to use and understand them. I like using Facebook for things like sharing what I’m listening to on Spotify, or finding my friends on Mixel, and I have no objection to it providing services such as commenting in some contexts. It’s important to me to communicate that my misgivings about Facebook’s relationship with the web are not the rantings of an extremist.
- “You’re saying sites should just be whitelisted and marked as safe simply for using Facebook plugins!” Nope, that’s not what I said at all. What I was saying is that, since Facebook is already making the effort to index sites when they use social plugins, it could cross-reference that index against databases such as StopBadware’s, which do give feedback on whether a site is safe or not. (See the sketch after this list for what that kind of cross-reference might look like.)
- “These were just honest bugs (or explainable but unfortunate features) on Facebook’s part.” Let’s grant that this is the case for the engineers who work on systems like Facebook’s link warning. First, I’m glad if my post encourages them to either fix the bugs or update the systems so that spurious warnings aren’t issued; there is no mechanism by which an ordinary publisher could request such a review. But second, even if these are just simple bugs, the impact on the sites caught by them is still the same.
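For what it’s worth, here’s a minimal sketch of the kind of cross-reference I have in mind. It assumes two inputs that I’ve invented for illustration: a list of domains seen using social plugins, and an exported blocklist of known-bad domains from a service like StopBadware. Neither the file names nor the formats reflect any real feed.

```python
def load_domains(path: str) -> set[str]:
    """Read one domain per line, ignoring blanks and comments."""
    with open(path) as f:
        return {line.strip().lower() for line in f
                if line.strip() and not line.startswith("#")}

def classify(plugin_domains: set[str], badware_domains: set[str]) -> dict[str, str]:
    """Label each plugin-using domain instead of treating them all as unknown."""
    return {
        domain: ("flagged" if domain in badware_domains else "no known issues")
        for domain in plugin_domains
    }

if __name__ == "__main__":
    # Hypothetical exports, purely for illustration.
    plugins = load_domains("social_plugin_domains.txt")
    badware = load_domains("badware_blocklist.txt")
    for domain, status in sorted(classify(plugins, badware).items()):
        print(f"{domain}\t{status}")
```

The code itself is trivial; the point is that Facebook already has the first list and databases like the second already exist, so treating a plugin-using site as wholly unknown is a choice, not a necessity.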
Overall, I don’t ascribe evil or malicious intent to any of the earnest and passionate coders whose responses I’ve quoted above. But I think some of the seemingly-innocuous features they work on can function as part of an overall strategy at Facebook that’s in tension with the web, and I urge them to consider those implications as broadly as possible. All software has bugs, and that’s no big deal. Facebook, though, has a unique burden to ensure it’s not accidentally trampling on the web, as an obligation of its dominant position in the web ecosystem, even if that simply means evaluating how bugs or unusual edge cases in its features could end up marginalizing content on the web.
Finally, I am very aware of the privilege I enjoy in having an audience that both sees and responds to pieces like the one I wrote yesterday. Having had so many of my concerns addressed so quickly is gratifying. But to those who think Facebook got a bum rap: the only thing Facebook was facing as a result of my post was the threat of an unnecessary security warning being placed as a gateway to their site. The rest of us face that threat from Facebook every day.