So there you have it: Facebook deliberately filters the information shown to its users for the express purpose of conducting psychology experiments on them.

The paper, “Experimental evidence of massive-scale emotional contagion through social networks,” was published in the Proceedings of the National Academy of Sciences. It shows how Facebook data scientists tweaked the algorithm that determines which posts appear on users’ news feeds: specifically, researchers skewed the number of positive or negative terms seen by randomly selected users. Facebook then analyzed those users’ postings over the following week to see if they responded with increased positivity or negativity of their own, thus answering the question of whether emotional states can be transmitted across a social network.

So in other words, they skewed the stories shown to users to see whether people who were shown lots of negative stories in turn produced lots of negative posts, and vice versa. Turns out that they did. Surprise.
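The manipulation itself is simple to picture in code. Here is a minimal sketch in Python of the general idea: classify posts by crude word counting, randomly drop posts of one emotional valence from a "treatment" user's feed, then measure the tone of what that user writes afterwards. The word lists, omission probability, and function names below are all made up for illustration; the actual study used its own sentiment lexicon and parameters, none of which are given in this post.

```python
import random

# Hypothetical word lists standing in for whatever sentiment lexicon
# the researchers actually used.
POSITIVE_WORDS = {"great", "happy", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "angry", "terrible", "awful", "hate"}

def post_valence(text):
    """Classify a post as 'positive', 'negative', or 'neutral'
    by simple word counting."""
    words = text.lower().split()
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def skew_feed(posts, suppress="negative", omit_probability=0.3, rng=random):
    """Return a filtered feed in which posts of the suppressed valence
    are randomly omitted -- the kind of manipulation described in the paper."""
    return [
        p for p in posts
        if post_valence(p) != suppress or rng.random() > omit_probability
    ]

def emotional_tone(posts):
    """Fraction of a user's posts in each valence: the kind of outcome
    measure you would compare between treatment and control groups."""
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for p in posts:
        counts[post_valence(p)] += 1
    total = max(len(posts), 1)
    return {k: v / total for k, v in counts.items()}

# Example: suppress negative posts for a "treatment" user, then
# measure the tone of what they write over the following week.
feed = [
    "I love this wonderful day",
    "traffic was awful and I hate it",
    "just a regular update",
    "so happy and excited for the weekend",
]
print(skew_feed(feed, suppress="negative"))
print(emotional_tone(["feeling great", "what a terrible week"]))
```

At massive scale, even a weak effect in that last measurement shows up as statistically significant, which is essentially what the paper reports.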

It might also be news to people that what they see on Facebook isn’t an unmanipulated stream in the first place.

If I actually used Facebook I would be utterly outraged at this, but then again, I would have trustingly clicked “agree” like everyone else.

In order to sign up for Facebook, users must click a box saying they agree to the Facebook Data Use Policy, giving the company the right to access and use the information posted on the site. The policy lists a variety of potential uses for your data, most of them related to advertising, but there’s also a bit about “internal operations, including troubleshooting, data analysis, testing, research and service improvement.”

So I’m wondering: the next time there is a bombing, or a school shooting, or some other horrific event in your country, will your Facebook news feed suddenly become more positive? Or the next time the financial industry collapses on itself? Or there is a student protest? Or a cop shoots a teenager by accident?