Despite few people actually searching for that

Here’s what Google looked like for months at the end of 2020, and on January 6th before the invasion of the Capitol (all searches were done in incognito mode from NYC):

Autocomplete results on January 6th

The first thought that came to my mind was: maybe users are looking for these terms, and the algorithm is just a “mirror of society”? But Google’s own data tells another story. According to Google Trends, in the month before the Capitol invasion, “civil war is what” was searched:

  • 17 times more than “civil war is coming”

After France’s soccer World Cup victory, millions took to the streets to celebrate. Many came from the suburbs, often proudly waving French flags as they partied. Later in the evening, a fraction of this celebration turned violent: at least 3 stores were looted, 24 cars were destroyed, and 292 people were arrested.

Online, disinformation campaigns started.

Fox News

Fox News had a very ambiguous headline:

There were 2 deaths, and the celebration did turn violent in some places. But the 2 deaths had nothing to do with the violence, contrary to what this headline suggests. As Fox News itself reports:


Defamation is efficient, and AIs may have already figured it out


During a long bus ride across France, my neighbor watched YouTube for hours, letting the auto-play recommendations run at the end of each video. Since I had worked on the algorithm that computes these recommendations, I was curious what they were about. In one of these videos, the topic was the extermination of a quarter of the world’s population. I joked: “So who wants us dead?” He explained: “There is a secret plan from the government. Hundreds of videos say so! The media is hiding it from you. Go to YouTube, and you’ll discover the truth!”

YouTube’s recommendation A.I. is designed to maximize the time users spend online. Fiction often outperforms reality.

We have all heard about conspiracy theories, alternative facts and fake news circulating on the internet. How do they become so popular? What is the impact of state-of-the-art algorithms on their success?
Having worked on YouTube’s recommendation algorithm, I started investigating, and came to the conclusion that the powerful algorithm I helped build plays an active role in the propagation of false information.

In order to see what YouTube is currently promoting the most, I wrote an open-source recommendation explorer which extracts the most frequently recommended videos for a given query. …
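The article doesn’t detail the explorer’s internals here, but the core idea can be sketched as a breadth-first “snowball” over recommendations: start from seed videos, follow each one’s recommendations a few hops out, and count how often each video appears. Everything below is illustrative, not the actual tool; `get_recommendations` is a hypothetical helper standing in for fetching YouTube’s “up next” list.

```python
from collections import Counter

def explore(seed_videos, get_recommendations, depth=2, per_video=3):
    """Snowball outward from seed videos, counting how often each
    video shows up among the recommendations encountered."""
    counts = Counter()
    frontier = list(seed_videos)
    for _ in range(depth):
        next_frontier = []
        for video in frontier:
            # Take only the top few recommendations, as a viewer would see
            recs = get_recommendations(video)[:per_video]
            counts.update(recs)
            next_frontier.extend(recs)
        frontier = next_frontier
    # Most frequently recommended videos first
    return counts.most_common()
```

With a stubbed recommendation graph (e.g. `{"a": ["b", "c"], "b": ["c"], "c": ["b"]}` and `lambda v: graph.get(v, [])`), videos that many others point to quickly rise to the top of the frequency list, which is the signal the explorer surfaces.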

More than one fourth of videos recommended on election eve have since been removed. Why?

Visualization of some YouTube recommendations on Election Eve.

On the eve of the US presidential election, we used this recommendation explorer to extract a sample of 800 unique videos recommended by YouTube in response to searches about the presidential candidates.

More than 200 of these videos have since been removed, some of them by YouTube, and others by the content creators themselves.

The most viewed videos that disappeared were:

“This video will get Trump Elected” (10 million views)

That video was not only widely viewed, but also in the top 1% of the videos most recommended by YouTube’s A.I. in our sample. Surprisingly, the creator of the video removed it after the election.

“Must Watch!! Hillary Clinton tried to ban this video” (5 million views)

The account associated with that video was…

YouTube’s Artificial Intelligence (A.I.) recommends tens of billions of videos every single day, yielding billions of views. On the eve of the US Presidential Election, we gathered recommendation data on the two main candidates, and found that more than 80% of recommended videos were favorable to Trump, whether the initial query was “Trump” or “Clinton”. A large proportion of these recommendations were divisive and fake news.
We propose two transparency metrics to elucidate the impact of A.I. on the propagation of political opinions and fake news.
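One way to make such a metric concrete is the share of sampled recommendations that favor each side. The sketch below is illustrative only: it assumes recommendations have already been hand-labeled, and the labels and example counts are hypothetical, not the article’s data.

```python
from collections import Counter

def favorability_share(labels):
    """Given one label per recommended video (e.g. 'favors_trump',
    'favors_clinton', 'neutral'), return each label's share of the sample."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}
```

Applied to a labeled sample, the output is a simple breakdown such as `{'favors_trump': 0.8, ...}`, which a platform could publish as a transparency report without revealing the recommender’s internals.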

Following the recommendations

To measure which candidate was recommended the most by YouTube’s A.I. during the US…

Guillaume Chaslot / Ex-Googler / AI for Good
