In March 2017, a scandal erupted in the digital industry, gaining a sense of urgency after a published investigation revealed that YouTube was placing brands' pre-roll video ads alongside content promoting terrorism, racism, pedophilia, and other material encouraging hate and violence.
This triggered a boycott by more than 250 brands worldwide, including P&G, J&J, PepsiCo, Volkswagen, Toyota, and Walmart, which cut their advertising budgets on the platform over Brand Safety concerns.
Since that episode, companies have demanded transparency (mainly from Google and Facebook) and access to third-party sources to audit campaigns. Slowly but surely, digital platforms have started implementing controls and solutions, yet new ways of transmitting harmful messages are constantly being created.
In 2019, another wave of scandals emerged when an investigation found that YouTube's recommendation algorithm was facilitating a network of pedophiles, who left predatory comments on children's videos, and that YouTube was also monetizing this content.
Platforms and verifiers appear to be taking careful measures, but new forms of harmful messaging keep emerging, systematically putting brands at risk.
On this point, Facebook reinforced its human review of content, but the Facebook Business Help Center states: “Facebook will try to make sure that your ad doesn’t appear in the categories that you excluded. However, we cannot guarantee that it will be successful 100% of the time.” Thus, Brand Safety cannot be guaranteed 100%.