Results tagged “search”
December 13, 2012
Update: A few months after this piece was published, I was invited by Harvard's Berkman Center to speak about this topic in more detail. Though the final talk is an hour long, it offers much more insight into the topic, and I hope you'll give it a look.
The tech industry and its press have treated the rise of billion-scale social networks and ubiquitous smartphone apps as an unadulterated win for regular people, a triumph of usability and empowerment. They seldom talk about what we've lost along the way in this transition, and I find that younger folks may not even know how the web used to be.
So here are a few glimpses of a web that's mostly faded away:
- Five years ago, most social photos were uploaded to Flickr, where they could be tagged by humans or even by apps and services, using machine tags. Images were easily discoverable on the public web using simple RSS feeds. And the photos people uploaded could easily be licensed under permissive licenses like those provided by Creative Commons, allowing remixing and reuse in all manner of creative ways by artists, businesses, and individuals.
- A decade ago, Technorati let you search most of the social web in real-time (though the search tended to be awfully slow in presenting results), with tags that worked as hashtags do on Twitter today. You could find the sites that had linked to your content with a simple search, and find out who was talking about a topic regardless of what tools or platforms they were using to publish their thoughts. At the time, this was so exciting that when Technorati failed to keep up with the growth of the blogosphere, people were so disappointed that even the usually-circumspect Jason Kottke flamed the site for letting him down. At the first blush of its early success, though, Technorati elicited effusive praise from the likes of John Gruber:
[Y]ou could, in theory, write software to examine the source code of a few hundred thousand weblogs, and create a database of the links between these weblogs. If your software was clever enough, it could refresh its information every few hours, adding new links to the database nearly in real time. This is, in fact, exactly what Dave Sifry has created with his amazing Technorati. At this writing, Technorati is watching over 375,000 weblogs, and has tracked over 38 million links. If you haven’t played with Technorati, you’re missing out.
- Ten years ago, you could allow people to post links on your site, or to show a list of links which were driving inbound traffic to your site. Because Google hadn't yet broadly introduced AdWords and AdSense, links weren't about generating revenue, they were just a tool for expression or editorializing. The web was an interesting and different place before links got monetized, but by 2007 it was clear that Google had changed the web forever, and for the worse, by corrupting links.
- In 2003, if you introduced a single-sign-in service that was run by a company, even if you documented the protocol and encouraged others to clone the service, you'd be described as introducing a tracking system worthy of the PATRIOT Act. There was such distrust of consistent authentication services that even Microsoft had to give up on their attempts to create such a sign-in. Though their user experience was not as simple as today's ubiquitous ability to sign in with Facebook or Twitter, the TypeKey service introduced then had much more restrictive terms of service about sharing data. And almost every system which provided identity to users allowed for pseudonyms, respecting the need that people have to not always use their legal names.
- In the early part of this century, if you made a service that let users create or share content, the expectation was that they could easily download a full-fidelity copy of their data, or import that data into other competitive services, with no restrictions. Vendors spent years working on interoperability around data exchange purely for the benefit of their users, despite theoretically lowering the barrier to entry for competitors.
- In the early days of the social web, there was a broad expectation that regular people might own their own identities by having their own websites, instead of being dependent on a few big sites to host their online identity. In this vision, you would own your own domain name and have complete control over its contents, rather than having a handle tacked on to the end of a huge company's site. This was a sensible reaction to the realization that big sites rise and fall in popularity, but that regular people need an identity that persists longer than those sites do.
- Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren't subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.
- A dozen years ago, when people wanted to support publishing tools that epitomized all of these traits, they'd crowd-fund the costs of the servers and technology needed to support them, even though things cost a lot more in that era before cloud computing and cheap bandwidth. Their peers in the technology world, though ostensibly competitors, would even contribute to those efforts.
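The machine tags mentioned in the Flickr example above follow a simple, documented `namespace:predicate=value` convention, which is what made photos discoverable by apps and services without any business-development deal. A minimal sketch of parsing that convention (the helper function and sample tags here are illustrative, not Flickr's actual API):

```python
# Parse Flickr-style machine tags of the form namespace:predicate=value.
# The tag strings below are illustrative examples, not live Flickr data.

def parse_machine_tag(tag):
    """Split a machine tag into (namespace, predicate, value).

    Returns None when the tag doesn't follow the
    namespace:predicate=value convention (i.e. it's a plain tag).
    """
    head, sep, value = tag.partition("=")
    if not sep:
        return None
    namespace, sep, predicate = head.partition(":")
    if not sep or not namespace or not predicate:
        return None
    return (namespace, predicate, value)

# A mix of machine tags and an ordinary human-applied tag:
tags = ["upcoming:event=123", "geo:lat=37.7749", "sunset"]
for t in tags:
    parsed = parse_machine_tag(t)
    print(parsed if parsed else f"plain tag: {t}")
```

Because the structure is self-describing, any third-party service could filter a public RSS feed of photos down to, say, every image tagged with a given `upcoming:event` value, with no API key or contract required.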
This isn't our web today. We've lost key features that we used to rely on, and worse, we've abandoned core values that used to be fundamental to the web world. To the credit of today's social networks, they've brought in hundreds of millions of new participants to these networks, and they've certainly made a small number of people rich.
But they haven't shown the web itself the respect and care it deserves, as a medium which has enabled them to succeed. And they've now narrowed the possibilities of the web for an entire generation of users who don't realize how much more innovative and meaningful their experience could be.
Back To The Future
When you see interesting data mash-ups today, they are often still using Flickr photos because Instagram's meager metadata sucks, and the app is only reluctantly on the web at all. We get excuses about why we can't search for old tweets or our own relevant Facebook content, though we got more comprehensive results from a Technorati search that was cobbled together on the feeble software platforms of its era. We get bullshit turf battles like Tumblr not being able to find your Twitter friends or Facebook not letting Instagram photos show up on Twitter because of giant companies pursuing their agendas instead of collaborating in a way that would serve users. And we get a generation of entrepreneurs encouraged to make more narrow-minded, web-hostile products like these because it continues to make a small number of wealthy people even more wealthy, instead of letting lots of people build innovative new opportunities for themselves on top of the web itself.
We'll fix these things; I don't worry about that. The technology industry, like all industries, follows cycles, and the pendulum is swinging back to the broad, empowering philosophies that underpinned the early social web. But we're going to face a big challenge with re-educating a billion people about what the web means, akin to the years we spent as everyone moved off of AOL a decade ago, teaching them that there was so much more to the experience of the Internet than what they know.
This isn't some standard polemic about "those stupid walled-garden networks are bad!" I know that Facebook and Twitter and Pinterest and LinkedIn and the rest are great sites, and they give their users a lot of value. They're amazing achievements, from a pure software perspective. But they're based on a few assumptions that aren't necessarily correct. The primary fallacy that underpins many of their mistakes is that user flexibility and control necessarily lead to a user experience complexity that hurts growth. And the second, more grave fallacy, is the thinking that exerting extreme control over users is the best way to maximize the profitability and sustainability of their networks.
The first step to disabusing them of this notion is for the people creating the next generation of social applications to learn a little bit of history, to know your shit, whether that's about Twitter's business model or Google's social features or anything else. We have to know what's been tried and failed, what good ideas were simply ahead of their time, and what opportunities have been lost in the current generation of dominant social networks.
So what did I miss? What else have we lost on the social web?
A follow-up: How we rebuild the web we lost.
January 3, 2011
Noticing a pattern here?
Paul Kedrosky, Dishwashers, and How Google Eats Its Own Tail:
Google has become a snake that too readily consumes its own keyword tail. Identify some words that show up in profitable searches -- from appliances, to mesothelioma suits, to kayak lessons -- churn out content cheaply and regularly, and you're done. On the web, no-one knows you're a content-grinder.
The result, however, is awful. Pages and pages of Google results that are just, for practical purposes, advertisements in the loose guise of articles, original or re-purposed. It hearkens back to the dark days of 1999, before Google arrived, when search had become largely useless, with results completely overwhelmed by spam and info-clutter.
Alan Patrick, On the increasing uselessness of Google:
The lead up to the Christmas and New Year holidays required researching a number of consumer goods to buy, which of course meant using Google to search for them and ratings reviews thereof. But this year it really hit home just how badly Google's systems have been spammed, as typically anything on Page 1 of the search results was some form of SEO spam - most typically a site that doesn't actually sell you anything, just points to other sites (often doing the same thing) while slipping you some Ads (no doubt sold as "relevant").
Google is like a monoculture, and thus parasites have a major impact once they have adapted to it - especially if Google has "lost the war". If search was more heterogenous, spamsites would find it more costly to scam every site. That is a very interesting argument against the level of Google market dominance.
And finally, Jeff Atwood, Trouble in the House of Google:
Throughout my investigation I had nagging doubts that we were seeing serious cracks in the algorithmic search foundations of the house that Google built. But I was afraid to write an article about it for fear I'd be claimed an incompetent kook. I wasn't comfortable sharing that opinion widely, because we might be doing something obviously wrong. Which we tend to do frequently and often. Gravity can't be wrong. We're just clumsy … right?
I can't help noticing that we're not the only site to have serious problems with Google search results in the last few months. In fact, the drum beat of deteriorating Google search quality has been practically deafening of late.
From there, Jeff links to several more examples, including the ones I mentioned above. As Alan alludes to in his post, the threat here is that Google has become a monoculture, a threat I've written about many times.
Now, is all this anecdotal evidence reliable? Perhaps not. What is worth noting now is that, half a decade after so many people began unquestioningly modifying their sites to serve Google's needs better, there may start to be enough critical mass for the pendulum to swing back to earlier days, when Google modified its workings to suit the web's existing behaviors.
July 27, 2004
A few months ago, two companies in the search optimization space teamed up to start a contest, based on a challenge to see who could be the first result for the gibberish phrase "Nigritude Ultramarine". Winning the contest consisted of being the top result on Google for that search either on June 7 (the "player" prize) or a month later, on July 7 (the "stayer" prize).
I've had a fairly poor impression of the Search Engine Optimization industry, so I entered the contest on June 4. My site became the number one search result late on June 8, so I missed winning the first round, but I held the position for the rest of the month (and my site is still the first result, as of this writing) and won the Stayer's Prize.
My prize was a beautiful Sony monitor. Michael Robertson and the other folks involved with the contest were cordial and prompt, and the monitor arrived in the middle of last week. In fact, I've been in the middle of moving, and just after we'd settled in, the first ring at our doorbell was from UPS, bearing a big Sony box courtesy of Amazon. Now that's a housewarming gift.
But more interesting to me has been the reaction people have had, first to my entry in the contest, second to my ranking in the search results (the term people seem to favor in email is "dominance" but that doesn't sound very humble) and finally their response to my win of the second-stage prize.
There are a significant number of really supportive emails, of course. People generously linked to my original Nigritude Ultramarine post, and I think they felt a sense of accomplishment in helping me win. There's nothing the blogosphere loves more than angry mob justice, and I probably benefitted from tapping into a bit of angry mob antipathy towards the SEO industry. Though many, perhaps even most, people in the SEO industry behave ethically, the reality is that much of the SEO industry has treated the weblog medium with an attitude ranging from crass opportunism or exploitation to downright abuse, in the form of comment spam, referral spam, and fake, content-free blogs.
June 4, 2004
Update: The contest is over, and this entry did pretty well but didn't win the initial prize. So the best purpose this page can serve is to direct you to The Hunger Site. Go give it a click.
Update 2: With one day remaining, it looks like this page will win the contest for July. See more on the contest in my follow-up post.
I've always had a pretty low opinion of the Search Engine Optimization industry. Though there are of course legitimate experts in the field, it seems chock full of people who are barely above spammers, and they taint the image of the whole group.
That being said, I do watch what they do from time to time, especially as they've become enchanted with the power of blogs, both from a comment-spamming perspective as well as their envy of bloggers' PageRank.
But they've been doing something interesting of late that I'm actually curious about. An affiliate network called DarkBlue and a forum called Search Guild have started SEO Challenge, a contest to see who is the first Google result for the (previously unlinked) phrase Nigritude Ultramarine. Everyone from link spammers to legitimate optimizers has popped up to enter the contest, displaying the requisite contest entry image (see below) and crossing their fingers.
I suspect, though, that those of us who've made content even when there weren't bribes involved have an advantage. For all the back-and-forth about how Google is or isn't evil, the end result of PageRank is that it's a hell of a lot more work to fake your way into being a top result than it is to have high ranking as a fringe benefit of just being a person who loves writing. That's a good thing.
So, in order to prove that real content trumps all the shady optimization tricks that someone can figure out, and because I figure I deserve an iPod at least as much as the Star Wars Kid, I'm entering the contest. Do me a favor: Link to this post with the phrase Nigritude Ultramarine. I'd rather see a real blog win than any of the fake sites that show up on that search result right now.