January 11–18, 2012. This was the week that Facebook ran an experiment on its users. Facebook's researchers modified roughly 689,000 users' News Feeds to manipulate their emotions. The result? Facebook can, and in that experiment did, make people sad at will by modifying the News Feed.

This is a clear violation of the rules of human experimentation. You have to get explicit permission, not just passive permission ("By entering this building, you will be part of an experiment..."), to conduct an experiment on humans, especially one that remotely modifies people's moods at mass scale. Many researchers and journalists complained. Yet Facebook conducted another (unrelated) experiment to determine whether it could increase voter turnout by doing things like modifying the News Feed. It could.

Now, this has many implications. However, I'm not going to discuss those here. Instead, I'll talk about a time when Google did the same kind of thing. If you didn't know, doubleclick.net is an ad and tracker domain owned by Google. The Electronic Frontier Foundation posted an urgent update on its website: its extension, Privacy Badger, had a vulnerability. Privacy Badger detects trackers automatically by learning which domains track you, but this learning mechanism had a flaw, discovered by Google researchers. Here is the excerpt from the blog post that worried me:
One attack could go something like this: a Privacy Badger user visits a malicious webpage. The attacker then uses a script to cause the user’s Privacy Badger to learn to block a unique combination of domains like fp-1-example.com and fp-24-example.com. If the attacker can embed code on other websites, they can read back this fingerprint to track the user on those websites.
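To make the attack in that excerpt concrete, here is a toy model of how it could work. The `fp-N-example.com` domain names come from the excerpt; the bit-per-domain encoding scheme and the function names are my own illustrative assumptions, not Privacy Badger's actual internals or the researchers' actual exploit.

```python
# Toy model of the fingerprinting attack from the EFF excerpt.
# Idea: a malicious page tricks Privacy Badger into "learning" to block
# a unique combination of throwaway domains. That combination is a
# per-user identifier that any page able to probe the block list can
# read back. The encoding below (one bit per domain) is an assumption
# made for illustration.

N_DOMAINS = 24  # one bit per fp-N-example.com domain, as in the excerpt

def domains_for_fingerprint(fingerprint: int) -> set[str]:
    """The malicious page embeds tracking behavior only on these
    domains, so the extension learns to block exactly this set."""
    return {
        f"fp-{i}-example.com"
        for i in range(N_DOMAINS)
        if fingerprint & (1 << i)
    }

def read_back_fingerprint(blocked: set[str]) -> int:
    """On a different site, attacker code probes which fp-N domains
    are blocked and reassembles the same identifier."""
    fp = 0
    for i in range(N_DOMAINS):
        if f"fp-{i}-example.com" in blocked:
            fp |= 1 << i
    return fp

# The "fingerprint" survives across unrelated sites because it lives
# in the extension's learned block list, not in cookies.
user_id = 0xBEEF42
blocked_set = domains_for_fingerprint(user_id)
assert read_back_fingerprint(blocked_set) == user_id
```

The unsettling part, which this sketch captures, is that the privacy tool's own memory becomes the tracking cookie: clearing cookies does nothing, because the identifier is stored as learned blocking behavior.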
A little before I read that post, I happened to be browsing through Privacy Badger's settings. I filtered for subdomains of doubleclick.net, and some had suspicious names like 23.doubleclick.net, just like the domains the blog post described. Sure enough, after I updated, those domains were gone. This suggests that Google may have quietly tested the vulnerability on real users. Exploiting a vulnerability without consent would be bad on its own, though perhaps not a violation of scientific ethics by itself. But they didn't tell us. Their writeup never said they tested it in the wild. That's the problem: a lack of transparency. If only it were free software, then I could look at the source code and know for sure.
If only everything were free software.