

This is how YouTube uses neural networks to choose what content it recommends you see next

William Rux


One of the most common uses of machine learning is the recommendation system behind online platforms: every time Facebook ranks one post above another in your news feed, or Twitter highlights a tweet in the "In case you missed it" section of your timeline, what we see is the output of an AI-based recommendation system.

Google researchers recently published an academic article ("Recommending what video to watch next: a multitask ranking system") offering relevant details about how YouTube's video recommendation system works. It is one of the most advanced in the industry, and it stands out for its effectiveness at retaining users' attention.

As a platform on which huge numbers of users upload hundreds of hours of video every minute, YouTube's recommendation system must necessarily work differently from those of streaming platforms such as Spotify or Netflix, which have a stable, centralized catalog: evaluating data and generating recommendations in real time matters far more for Google's video portal.

The key: two deep neural networks

Until 2016, YouTube used algorithms that simply recommended videos based on a set of criteria: the video's length, the channel's number of subscribers, the number of times it had been shared, and so on. Three years ago, however, YouTube began to adopt neural networks.

Today, YouTube's video recommendation system works as a funnel structured in two stages, each handled by a different neural network:

1) Candidate generation: in this phase, the options are narrowed from millions to thousands. The network uses data extracted from the user's history and applies collaborative filtering (for example: which other videos have attracted the attention of people who watch videos similar to this user's?).

[Figure excerpted from 'Recommending what video to watch next: a multitask ranking system'.]

2) Ranking: in this phase, the options are narrowed from thousands to tens. The network assigns each video a score, which determines how prominently it appears among the recommendations we see while using YouTube.
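The two-stage funnel can be illustrated with a toy sketch. Everything here is invented for illustration (the topic-matching heuristic, the field names, the placeholder score): YouTube's real stages are deep neural networks, not the simple filters below.

```python
import random

random.seed(0)  # deterministic toy data

def generate_candidates(catalog, user_history, n=1000):
    """Stage 1: narrow millions of videos down to thousands.
    Here we naively keep videos that share a topic with the user's history;
    the real system uses a neural network with collaborative-filtering signals."""
    watched_topics = {v["topic"] for v in user_history}
    return [v for v in catalog if v["topic"] in watched_topics][:n]

def rank(candidates, n=10):
    """Stage 2: narrow thousands down to tens by scoring each candidate.
    The real ranker is a second neural network; we use a placeholder score."""
    return sorted(candidates, key=lambda v: v["score"], reverse=True)[:n]

# A fake catalog standing in for YouTube's millions of videos.
catalog = [{"id": i,
            "topic": random.choice(["music", "news", "gaming"]),
            "score": random.random()}
           for i in range(100_000)]
history = [{"id": -1, "topic": "music"}]

recommendations = rank(generate_candidates(catalog, history))
print(len(recommendations))  # at most 10 videos survive the funnel
```

The point of the funnel design is cost: the cheap first stage prunes the catalog so that the expensive, more accurate ranking model only has to score a few thousand items per request.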

Aspects such as a video's similarity to content we have watched before increase its probability of appearing among the top positions, while that probability drops if the video was already recommended and the user passed on it.

Another influential factor is the video's 'age': to avoid a bias in favor of older content (which, after all, accumulates the most views and 'likes'), the recommendation system favors the presence of fresh content among the recommendations.
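These ranking factors might combine along the following lines. This is a hedged sketch: the multipliers, field names, and thresholds are hypothetical and are not taken from the paper, which learns such trade-offs inside a neural network rather than as hand-written rules.

```python
def ranking_score(video, user):
    """Toy combination of the ranking signals described in the article.
    All weights are invented for illustration."""
    score = video["base_relevance"]  # stand-in for the neural ranker's output
    if video["topic"] in user["watched_topics"]:
        score *= 1.5   # similarity to previously watched content boosts it
    if video["id"] in user["skipped_ids"]:
        score *= 0.3   # demote videos the user was already shown and passed on
    if video["age_days"] < 7:
        score *= 1.2   # freshness boost to counter the bias toward old content
    return score

user = {"watched_topics": {"science"}, "skipped_ids": {42}}
fresh = {"id": 1, "topic": "science", "base_relevance": 0.5, "age_days": 2}
skipped = {"id": 42, "topic": "science", "base_relevance": 0.5, "age_days": 30}

# The fresh, never-skipped video outranks the one the user already passed on.
print(ranking_score(fresh, user) > ranking_score(skipped, user))  # True
```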

Engagement and biases

However, even knowing all the factors YouTube takes into account, it is impossible to predict its recommendations accurately, because deep neural networks learn as they go, slightly altering their output to meet the basic objective they were built for: in this case, increasing 'engagement' (that is, keeping the user in front of the screen).

In fact, YouTube has had to make changes to its AI in recent times because of the perverse incentive this pursuit of engagement at all costs turned out to be. A few months ago we discussed how this policy had led many users to get hooked on pseudoscientific and conspiracy content.

Guillaume Chaslot, a former Google employee and advisor to the Center for Humane Technology, tells the story of an acquaintance, "Brian", who found himself in this situation:

"For his parents, family and friends, his story is heartbreaking. But from the point of view of YouTube's AI, it is a huge success. We engineered YouTube's AI to increase the time people spend online, because that means more ads. The AI sees Brian as a model to be replicated."

"How many people like Brian are seduced by those 'rabbit holes' every day? By design, the AI will try to capture as many as possible. [...] So if 'the Earth is flat' keeps users online longer than 'the Earth is round', the recommendation algorithm will favor that theory."