Inaccurate brand safety measures can under-monetise diverse podcast content, research finds

Report by Sounder and Urban One highlights inaccuracies in keyword-based brand safety measures

Traditional keyword-based brand safety and suitability measures are often inaccurate and can lead to under-monetisation of diverse podcast content, according to new research.

Podcast hosting platform Sounder partnered with Black-owned media organisation Urban One’s audio divisions Radio One & Reach Media and its podcast network to release a whitepaper on traditional keyword-based vs AI/ML-driven brand safety and suitability in podcasting.

The research aims to help the industry understand diverse creators’ concerns about AI/ML-driven brand safety and suitability models. These concerns include a “one-size-fits-all approach” that isn’t inclusive of cultural nuances, poor transcription quality, and the over-blocking by legacy solutions of content categories discussed by marginalised voices.

“This research is a significant step forward in ensuring equal monetisation opportunities for diverse creators in podcasting,” said Sounder CEO and co-founder Kal Amin. “By addressing concerns about AI/ML-driven brand safety and suitability models, Sounder’s AI-based technology supports authentic and inclusive content representation.”

The research project found that traditional keyword-based brand safety and suitability measures may not take into account the context of podcast episodes and can inaccurately flag content as unsafe. The research demonstrated this by testing the method on Urban One’s podcast content: a standard keyword blocklist solution removed 92% of all episodes from available inventory.

By comparison, contextually focused AI/ML-based models, such as Sounder’s solution and AI measurement platform Barometer, flagged only 10% of episodes as potentially unavailable for monetisation. Additionally, the report found that Black English (AAVE) content that is transcribed incorrectly can also lead to misclassifications.

Sounder also tested this on several other podcast cohorts, including a specific selection of podcasts by Black creators outside Urban One’s podcast network, a randomised selection of top podcasts by Black creators, and a randomised selection of podcasts with no indication of creator background, representative of the general podcasting population.

The report concluded that AI/ML-led brand safety and suitability solutions can be developed without bias and can prevent over-indexing on safety by considering topics, intention, tone and more, rather than keywords alone. These solutions can also detect changes in content risk, adapt to cultural shifts, and capture new risks early.

“At Urban One, we are passionate about having real, authentic dialogue with our audience around topics that matter to them, while also creating equitable conversations with brands and agencies, ensuring that they can invest confidently in brand safe, relevant environments,” said Urban One audio division chief revenue officer Josh Rahmani. “This research is a step in the right direction to help advertisers see the bias that exists in legacy brand safety solutions which disproportionately impact Black-owned and Black-targeted media.”