On Content Standards
Magnite Team
July 16, 2020 | 3 min read
Inventory quality and brand safety are sprawling topics. This post focuses on how we decide whether a site or app can use our core technologies and what we do to enforce those standards across our platforms.
Introduction
It may help to start with the basics: Magnite isn’t a publisher, so we don’t create or host content such as news articles, videos, or podcasts. Publishers hire us to help them sell ad space, and we provide technology that connects them to advertisers.
Advertisers rely on us to ensure their ads are placed on sites they deem "brand safe." We enforce baseline content standards across all our publishers, ingest and honor the site blocklists advertisers pass to us through their buying platforms, and provide complete transparency into where their campaigns run. In this way, ultimate control rests in the advertiser's hands.
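Mechanically, honoring an advertiser's blocklist comes down to checking each site against the domains that advertiser has blocked before an ad can serve there. The sketch below illustrates the idea in Python; the class, function names, and matching rules are illustrative assumptions, not Magnite's actual implementation.

```python
# Hypothetical sketch of advertiser blocklist enforcement.
# All names and matching rules here are illustrative assumptions.

def normalize_domain(domain: str) -> str:
    """Lowercase and strip a leading 'www.' so entries compare consistently."""
    domain = domain.strip().lower()
    return domain[4:] if domain.startswith("www.") else domain


class BlocklistFilter:
    def __init__(self, blocked_domains):
        # A set of normalized domains gives O(1) membership checks.
        self.blocked = {normalize_domain(d) for d in blocked_domains}

    def is_eligible(self, site_domain: str) -> bool:
        """Return True if this advertiser's ads may serve on the site."""
        d = normalize_domain(site_domain)
        # Block the domain itself and any subdomain of a blocked domain.
        parts = d.split(".")
        suffixes = (".".join(parts[i:]) for i in range(len(parts)))
        return not any(s in self.blocked for s in suffixes)


# Example: a blocklist an advertiser might pass via their buying platform.
f = BlocklistFilter(["badsite.example", "www.spam.example"])
```

Treating subdomains as blocked when their parent domain is blocked is one common design choice; real systems also have to handle apps, bundle IDs, and category-level blocks.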
In the end, we’re incentivized to meet advertisers’ needs, not just because it’s our responsibility, but because if we don’t, our publishers (and therefore we) may not get paid.
Baseline Content Standards
These are the standards our advertisers expect us to enforce across all our platforms. The bedrock of this policy is the prohibition of extreme content. As we define it, extreme content is relatively easy to recognize. It includes hateful supremacist speech, direct calls for violence or harassment, gratuitous depictions of violence, pornography, and materials that advocate illegal activities such as sexual abuse, fraud, and piracy.
We also prohibit harmful disinformation, which we define as the repeated distribution of deceptive content that’s reasonably likely to cause offline harm. In other words, publishing content that’s merely false isn’t enough for us to remove a publisher under this standard. We need to see a pattern of deception around topics that are likely to get people hurt.
Enforcement
Enforcing these standards starts with our marketplace quality & brand protection team. The team comprises dozens of analysts, researchers, engineers, and testers, many with deep experience identifying sites with dubious provenance, content, and authorship. Before we work with a publisher, this team must clear it through a series of checks that are repeated throughout the life of the relationship.
Monitoring thousands of publishers in real-time is an enormous technical challenge, and the global proliferation of deceptive and inflammatory content has made the task much more critical. As a result, our team uses third-party monitoring and classification services to flag content for possible removal, and we will continue to invest in ways to address this issue. This combination of human and technical resources provides us with multiple perspectives on each site and allows us to be more thoughtful about identifying prohibited content.
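One way to picture combining several third-party classification services is to flag a site for analyst review only when multiple independent services agree. The sketch below is a hypothetical illustration of that routing step; the service names, threshold, and data shapes are invented for the example and do not describe Magnite's actual system.

```python
# Hypothetical illustration of routing third-party classifier flags to
# human review. Threshold, service names, and structures are assumptions.

REVIEW_THRESHOLD = 2  # queue for human review when this many services agree


def sites_for_review(classifications):
    """classifications: {site: {service_name: flagged_bool}} from third parties.

    Returns sites flagged by enough independent services to merit review,
    sorted for stable output.
    """
    return sorted(
        site
        for site, verdicts in classifications.items()
        if sum(verdicts.values()) >= REVIEW_THRESHOLD
    )


# Example signals from three hypothetical monitoring services.
signals = {
    "siteA.example": {"svc1": True, "svc2": True, "svc3": False},
    "siteB.example": {"svc1": False, "svc2": True, "svc3": False},
}
```

Requiring agreement across services is what gives the "multiple perspectives" described above: no single classifier's verdict triggers removal on its own, and a human analyst makes the final call.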
Though this work is challenging and nuanced, our goal is to be as consistent and fair as possible. This is why we encourage our clients and partners to report potential violations and help us refine and better articulate our policies. We thoroughly investigate all concerns and follow up with corrective action when warranted.
If you have any questions or feedback about any of the above, don’t hesitate to reach out to your Magnite representative.
March 2022: We’ve updated this post to provide more clarity on our disinformation policy and enforcement efforts, with a focus on our Sell-Side Platform (SSP) business, which represents the overwhelming majority of our revenue. You can read the old version of this post here.