As the presidential election heats up, advertisers must remain vigilant in ensuring their ads do not appear next to polarizing political content or misinformation. Recent incidents involving major brands like Mazda and Adobe highlight the risks associated with automated ad placements on platforms like YouTube, where ads appeared alongside videos promoting politically motivated falsehoods. Let’s take a look at recent developments and implications for advertisers.
Brand Safety Concerns on YouTube
YouTube recently experienced backlash when the platform displayed a Mazda ad before a video falsely claiming that Haitian migrants were eating ducks in Ohio. Similarly, YouTube placed an Adobe ad before a video spreading baseless claims about migrants abducting pets. This is not the first time brands have been embarrassed by their ads appearing next to dubious content. As we have blogged, YouTube has endured numerous such incidents over the years, drawing the ire of the brands involved.
Such incidents underscore the difficulties advertisers face in ensuring their ads are not placed next to harmful or misleading content. Misinformation can damage customer perceptions and public trust in a brand, especially when it involves sensitive political issues. Advertisers must remain cautious and proactive to avoid unwanted associations with controversial content.
The Role of Algorithms in Ad Placement
Much of the problem stems from the reliance on algorithms to automatically place ads across platforms like YouTube. These algorithms, while efficient, lack the nuanced judgment required to avoid harmful content. According to a report by Eko, over a dozen large brands had ads placed alongside xenophobic content, including videos spreading falsehoods about Haitian immigrants. These videos garnered millions of views, amplifying the misinformation while generating ad revenue for its creators.
YouTube, for its part, claims to have systems in place to restrict monetization of videos that violate its advertiser-friendly guidelines. However, as these incidents show, those systems are far from foolproof. While YouTube removed some of the flagged videos, others remained under review, raising questions about the platform’s ability to enforce its policies consistently.
Misinformation and Financial Consequences
Ads placed next to misleading or harmful content tend to receive fewer clicks, with click-through rates 46% lower than for ads placed alongside neutral content. This drop in engagement not only hurts advertisers' return on investment but also tarnishes a brand's image in the eyes of consumers.
The issue isn’t limited to YouTube. Other platforms, such as X (formerly Twitter), have seen a sharp decline in advertiser confidence due to similar concerns. Under Elon Musk’s leadership, trust in X has plummeted, with only 4% of marketers believing ads on the platform are safe for brands. As a result, many brands are cutting their ad spending on the platform.
The Limits of Keyword Blocklisting
To mitigate the risks, many advertisers use keyword blocklisting, which prevents ads from appearing next to content containing specific terms. However, this method is not without its flaws. Automated keyword blocklisting often leads to false positives, where high-quality content is misclassified, resulting in lost revenue opportunities for both advertisers and publishers.
For example, an article about filmmaker David Lynch was flagged for containing the term “lynch,” which in that context had no connection to harmful content. Misclassifications like this suppress premium content, pushing advertisers toward lower-quality sites that slip past brand safety filters.
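To see how easily this happens, consider a minimal sketch of naive keyword blocklisting in Python. The blocklist terms and headlines are hypothetical, but the logic mirrors what a simple substring filter does: it cannot distinguish the filmmaker’s surname from the violent term it was meant to catch.

```python
# A minimal sketch of naive keyword blocklisting. The blocklist and
# headlines are hypothetical examples, not any vendor's actual list.

BLOCKLIST = {"lynch", "shooting", "attack"}

def is_blocked(text: str) -> bool:
    """Flag a page if any blocklisted term appears anywhere in its text."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

headlines = [
    "David Lynch retrospective opens at the film festival",  # harmless
    "Photographer discusses shooting portraits on film",     # harmless
    "Violent attack reported downtown",                      # genuinely unsafe
]

for headline in headlines:
    label = "BLOCKED" if is_blocked(headline) else "allowed"
    print(f"{label}: {headline}")
```

Running this sketch blocks all three headlines, even though only the last is genuinely unsafe. The first two are exactly the kind of false positives that suppress premium content.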
The rise of made-for-advertising (MFA) sites has further complicated the landscape. These sites avoid blocklisted terms, so they appear brand-safe to existing technology, yet they often host low-quality content designed to game the system. As a result, advertisers are left exposed to potentially damaging content despite their efforts to safeguard their brands.
AI’s Role in Brand Safety
Many brand safety solutions rely on AI to identify and block harmful content. However, recent reports have raised concerns about the effectiveness of AI in ensuring brand safety.
A report by Adalytics found that hundreds of brands appeared next to unsafe content on user-generated content (UGC) websites, despite using AI-driven brand safety technology from providers like DoubleVerify and Integral Ad Science (IAS).
Brands have questioned whether these AI systems can truly deliver the level of protection promised. The report revealed inconsistencies in how AI tools categorized web pages: some pages containing violent or racist content were labeled low risk, while reputable sources like The Washington Post were flagged as high risk. These findings suggest that AI technology may not be as reliable as many advertisers have been led to believe.
The Need for Greater Transparency
Advertisers and brand safety experts are calling for more transparency from tech platforms and brand safety providers. Brands want to understand how these systems work and why they sometimes fail to catch harmful content. Without transparency, advertisers cannot make informed decisions about their ad placements, nor can they trust that their brand safety investments are worthwhile.
A Way Forward: Advanced Brand Safety Solutions
As the election season intensifies, brands need to go beyond traditional keyword blocklisting and embrace more advanced brand safety measures. Contextual analysis, which evaluates the source, intent, and environment of content, can help advertisers make more informed decisions about where their ads appear. Understanding the mindset of the audience—whether they are seeking information, entertainment, or news—can also improve the suitability of ad placements.
Large language models (LLMs) offer promising advancements in this area, as they can interpret content with greater nuance than simple keyword detection. By incorporating contextual analysis and mindset considerations, advertisers can better navigate the complex media landscape and ensure their ads appear in safe, suitable environments.
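As a rough illustration, here is a hedged sketch of what LLM-based contextual classification might look like. It assumes the OpenAI Python client; the model choice, prompt, and three-label risk taxonomy are our own illustrative assumptions, and any LLM provider or label set could be substituted. A production system would add caching, batch processing, and human review.

```python
# A sketch of LLM-based contextual analysis, assuming the OpenAI Python
# client (pip install openai) and an OPENAI_API_KEY in the environment.
# The prompt and labels below are illustrative, not a production taxonomy.

from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """You are a brand safety classifier. Given the text of a
web page, classify it as SAFE, SENSITIVE, or UNSAFE for a mainstream
consumer brand, considering the source, intent, and likely audience
mindset. Respond with the label on the first line and a one-sentence
explanation on the second."""

def classify_page(page_text: str) -> str:
    """Ask the model for a brand-suitability label for one page."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in any capable model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": page_text[:4000]},  # truncate long pages
        ],
        temperature=0,  # deterministic labels aid auditing
    )
    return response.choices[0].message.content

print(classify_page("David Lynch retrospective opens at the film festival..."))
```

Unlike a substring filter, a model prompted this way can judge that a page about filmmaker David Lynch is safe for most brands while still flagging genuinely violent or hateful content.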
In addition, platforms like YouTube must provide advertisers with more granular reporting, allowing them to track ad placements at the URL level rather than just the domain level. This level of transparency would empower advertisers to make more informed decisions and hold platforms accountable for their brand safety practices.
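The difference in granularity is easy to demonstrate. The toy placement log below is fabricated, but it shows why domain-level reporting can hide an unsafe placement that URL-level reporting would expose.

```python
# A toy illustration of domain-level vs. URL-level placement reporting.
# The placement log and URLs are fabricated for demonstration.

from collections import Counter
from urllib.parse import urlparse

placement_log = [
    "https://video.example.com/watch?v=cooking-tutorial",
    "https://video.example.com/watch?v=misinformation-clip",
    "https://video.example.com/watch?v=product-review",
]

# Domain-level reporting collapses everything into one opaque bucket,
# so the unsafe placement is invisible to the advertiser.
print(Counter(urlparse(url).netloc for url in placement_log))
# Counter({'video.example.com': 3})

# URL-level reporting exposes the specific pages, so the unsafe
# placement can be identified, excluded, and disputed.
for url in placement_log:
    print(url)
```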
True Interactive Can Help
As brands face increasing pressure to protect their reputation during a highly charged election season, it is clear that current brand safety measures are not enough. Businesses must adopt more sophisticated strategies, including contextual analysis and greater transparency, to ensure their ads appear in environments that align with their values. Platforms like YouTube must also step up their efforts to protect brands from harmful content, or risk losing the trust of marketers altogether.
Meanwhile, at True Interactive, we’re helping our clients ensure brand safety. For instance, we take precautions that go beyond excluding “sensitive content”: we have expanded our exclusions to cover Politics & News, World Music, and several other topics and audiences we prefer to avoid.
Visit our website to learn more about our capabilities.
Image by Gerd Altmann from Pixabay