When you search the Internet for news or information, do you find what you need to know … or what a sophisticated algorithm has decided you want to see?

In a recent TED talk, Eli Pariser (one of MoveOn’s founders) raised concerns about the increasing use of algorithms to “personalize” internet content. He fears that these filters could easily blind us to ideas, information, events, and people that algorithms decide are not important or relevant.

I confess that I have not thought much about data-driven content manipulation on the internet beyond advertising. I have grudgingly accepted the fact that ad content is pushed to me based on IP address location, search terms, and current page content. However, something Pariser said jolted me out of complacency.

But a couple of weeks ago, I asked a bunch of friends to Google “Egypt” and to send me screen shots of what they got. So here’s my friend Scott’s screen shot. And here’s my friend Daniel’s screen shot. When you put them side-by-side, you don’t even have to read the links to see how different these two pages are. But when you do read the links, it’s really quite remarkable. Daniel didn’t get anything about the protests in Egypt at all in his first page of Google results. Scott’s results were full of them. And this was the big story of the day at that time. That’s how different these results are becoming.

It never occurred to me that search engines like Google or Bing would deliver content based on anything other than the search terms, at least not here in the land of the free.

Pariser goes on to describe just how pervasive content filtering has become. Search engines, social media, news media, and, of course, commercial sites are all mining data and pushing "personalized" content. He assumes that the invisible hand of the filtering algorithms is ethically neutral.

What we’re seeing is more of a passing of the torch from human gatekeepers to algorithmic ones. And the thing is that the algorithms don’t yet have the kind of embedded ethics that the editors did. So if algorithms are going to curate the world for us, if they’re going to decide what we get to see and what we don’t get to see, then we need to make sure that they’re not just keyed to relevance. We need to make sure that they also show us things that are uncomfortable or challenging or important — this is what TED does — other points of view.

I decided to look at the evolution of these personalization filters for search engines. This timeline will give you a sense of how far and how fast these filters have come in a short period of time for Google.

2003 Personalization is limited but sought after by search engines, including Google.

2005 Google rolls out its first generation of personalization filters.

2007 Google ramps up personalization, expanding scope and complexity.

2009 Google personalizes every search unless you opt out.

2010 Google blurs the line between public and private information with Buzz.

Google is not alone. Bing debuted in 2009, billing itself as a "decision engine," which means it aggressively filters content. That artificial intelligence has paid dividends as Bing has taken market share from Google and Yahoo. It also scores higher than Google on clicks after search.

The argument for search personalization centers around disambiguation and efficiency. If the user enters a fairly broad search term, the chances are pretty high that the first few pages will not contain anything worth clicking. In the search engine world, clicks are what define success, so ambiguous searches will hurt the engine’s stats as well as frustrate the user. Digging into search history and other personal variables, the algorithm can make better guesses and deliver clickable content even in the face of nonspecific search terms.
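To make the disambiguation argument concrete, here is a toy sketch of history-based re-ranking. This is purely illustrative, with made-up data and function names; no search engine publishes its actual ranking code, and real systems weigh hundreds of signals, not one.

```python
# Hypothetical illustration: re-rank results for an ambiguous query ("egypt")
# by overlap between each result's topic tags and the user's click history.
def personalize(results, user_history):
    """Sort results so pages matching the user's history come first;
    ties fall back to the engine's baseline rank."""
    def score(result):
        overlap = len(result["tags"] & user_history)
        # Higher history overlap wins; lower base_rank breaks ties.
        return (overlap, -result["base_rank"])
    return sorted(results, key=score, reverse=True)

results = [
    {"url": "egypt-travel.example",   "tags": {"travel"},           "base_rank": 1},
    {"url": "egypt-protests.example", "tags": {"news", "politics"}, "base_rank": 2},
]

# A news-reading user (like Scott) sees the protests first ...
for r in personalize(results, {"news", "politics"}):
    print(r["url"])
# ... while a user with no news history (like Daniel) sees travel pages first.
```

The same two-result index, filtered through two different histories, yields two different first pages, which is exactly the divergence Pariser's "Egypt" experiment exposed.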

Pariser argues that as these personalization algorithms become more efficient, we will increasingly live in our own little information bubbles, exposed to vastly different material despite identical starting points. For example, my search for "climate change" would push citations from Science, Nature, and NASA based on my search, click, and site histories. James Inhofe searching for "climate change" would cough up fur balls from Anthony Watts, the Heartland Institute, and Koch Industries. Some like living in a bubble. Others, like Pariser, like to keep tabs on the bubbleheads, which makes having content filtered out less desirable.

While Pariser suggests that the machines are ethically neutral, I am not so sure. We know China and other countries have tried to aggressively censor the internet, primarily by making some topics taboo for search engines and websites. As the protests in Egypt escalated, Hosni Mubarak tried to kill internet access. We know the U.S. Chamber of Commerce was negotiating with tech security firms to spy on and harass critics of the Chamber. The various versions of the Patriot Act have made secretly following your internet footprints legal. If governments and corporate interests can violate your privacy, what is to stop them from feeding you disinformation?

Pariser is right to be concerned about the direction of internet content filtering, which has rapidly evolved over the past decade. He articulates a very utopian view of what the internet should look like. Namely, surfing the web should come with three protections: ethical conduct, transparency, and user control.

But we really need you to make sure that these algorithms have encoded in them a sense of the public life, a sense of civic responsibility. We need you to make sure that they’re transparent enough that we can see what the rules are that determine what gets through our filters. And we need you to give us some control, so that we can decide what gets through and what doesn’t. Because I think we really need the Internet to be that thing that we all dreamed of it being. We need it to connect us all together. We need it to introduce us to new ideas and new people and different perspectives. And it’s not going to do that if it leaves us all isolated in a Web of one.

While I enthusiastically endorse those ideals, I doubt governments and corporations share those same values. I cannot help but think of HAL from Stanley Kubrick’s 2001: A Space Odyssey.

Interviewer: HAL, you have an enormous responsibility on this mission, in many ways perhaps the greatest responsibility of any single mission element. You’re the brain, and central nervous system of the ship, and your responsibilities include watching over the men in hibernation. Does this ever cause you any lack of confidence?

HAL: Let me put it this way, Mr. Amer. The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.

Pariser discusses the filtering problem in more detail in his new book, "The Filter Bubble," and you can visit his website for more information.