Social media platforms fall short on enforcing ads policies

31 October 2024

By: Max Read and Jessica Mahoney

This Dispatch is part of a series assessing platform enforcement of guidelines ahead of the 2024 US elections. Our Election Scorecard provides a comprehensive analysis of platform community standards and enforcement. You can find the report and a platform-by-platform comparative analysis on our website.  


As the US general election approaches, ads will continue to be a key tool for political campaigns and outside groups to fundraise, promote their agendas, and activate voters. According to the Brennan Center for Justice, US political ad spending on Google (including YouTube and search) and Meta totaled more than $619 million for the period between January 2023 and August 2024. X took in over $15.5 million in political ad spending during the same period, and Snap earned over $16 million over the first 10 months of 2024.[1] Unlike organic content, ads can be targeted to users based on geography, interest areas, and a range of demographic factors. This targeting ability makes them a powerful vector for spreading political messages – and disinformation – to key voters.  

ISD’s Platform Preparedness Analysis explored the policies which four major social media companies (YouTube, Meta, Snap, and X) have in place to govern transparency and moderation of political advertisements. These policies cover issues including ad funding disclosures, fact-checking of political ads, and voter suppression content. 

Researchers conducted a cross-platform analysis of ads that ran on X, Facebook, Instagram, Snapchat, and YouTube between April 1 and September 30, 2024, to assess how effectively these policies were enforced against false claims about election integrity during a six-month period when many voters were tuning into election news.  

Methodology 

As ISD’s prior analysis shows, researchers’ access to political ads data varies across platforms, and each platform’s ad library (where available) contains different information and functionality. This variability makes a fully identical analysis across platforms impossible. ISD therefore took different approaches to accessing and analyzing platforms’ ad data:  

  • Meta (Facebook and Instagram): ISD used Meta’s Ads API to search for ads that included keywords related to election integrity.
  • X (formerly Twitter): ISD used a third-party tool to query X’s publicly available political ads repository for ads that contained keywords related to election integrity. 
  • Google (YouTube and search): Analysts manually searched for ads run by groups known to spread false claims about election integrity (Google’s ad library only enables searches by advertiser name, not keywords). 
  • Snap: ISD reviewed a sample of ads in Snap’s publicly available political ads database (Snap’s database does allow for keyword queries).  

Researchers reviewed ads in each dataset for false claims about election integrity and other policy violations.  
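ISD does not publish its query code, but the Meta step described above can be sketched roughly as follows, using the `ads_archive` endpoint of Meta's public Ad Library API. The keyword list, date-window parameters, and requested fields here are illustrative assumptions rather than ISD's actual query, and a real Ad Library access token is required for live requests.

```python
# Rough sketch (not ISD's actual code) of a keyword query against Meta's
# Ad Library API. The endpoint and core parameter names follow Meta's
# public documentation; keywords, dates, and fields are assumptions.
import json
import urllib.parse
import urllib.request

AD_LIBRARY_URL = "https://graph.facebook.com/v18.0/ads_archive"

def build_query(keywords, access_token, country="US"):
    """Assemble the query parameters for one keyword search."""
    return {
        "search_terms": " ".join(keywords),    # e.g. "mail-in ballot fraud"
        "ad_type": "POLITICAL_AND_ISSUE_ADS",  # restrict to political/issue ads
        "ad_reached_countries": country,
        "ad_active_status": "ALL",             # include ads no longer running
        "ad_delivery_date_min": "2024-04-01",  # assumed study window
        "ad_delivery_date_max": "2024-09-30",
        "fields": "page_name,ad_creative_bodies,spend,impressions",
        "access_token": access_token,
    }

def fetch_ads(keywords, access_token):
    """Yield matching ads page by page; each response links to the next."""
    url = AD_LIBRARY_URL + "?" + urllib.parse.urlencode(
        build_query(keywords, access_token)
    )
    while url:
        with urllib.request.urlopen(url) as resp:
            payload = json.load(resp)
        yield from payload.get("data", [])
        # The "next" URL already carries the full query string.
        url = payload.get("paging", {}).get("next")
```

Results returned this way would then be reviewed manually, as the keyword match alone cannot determine whether an ad actually makes a false claim.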

Key Findings 

ISD found that while each platform has policies to prevent paid amplification of false or misleading claims about election processes, most allowed ads that included false claims likely violating civic integrity policies. These include ads baselessly claiming that mail-in voting is insecure and that the 2024 elections will likely be “rigged” or “stolen.” 

  • X, Meta, and Google all allowed ads that misleadingly tied concerns over immigration enforcement to election integrity. Most prominent were false claims about the prevalence of noncitizen voting, made in ads that likely violated prohibitions against ads “intended to undermine public confidence in an election” (X) or that “attempt to delegitimize the election” (Meta). 
  • Five advertisers – The Heritage Foundation, Judicial Watch, TheBlaze, Honest Elections Project Fund and The Daily Caller – spent over $600,000 in total on ads including election integrity keywords on Meta platforms. While not all the ads crossed the threshold of violating Meta’s civic integrity policy, the scale of spending reflects a significant investment in casting doubt on election security.  
  • On X, at least one news outlet ran ads that promoted election denial narratives. Because the advertiser is a news publisher, the ads are exempt from the disclosures required for political ads. 
  • Snap was the only platform on which researchers did not identify any ads that violated platform policies.   

Platforms Allow False Election Claims in Ads 

Meta remains a dominant force in digital advertising and is projected to reach over $61 billion in annual ad revenue by the end of 2024. According to an analysis by the Brennan Center, advertisers who spent at least $5,000 on Google and Meta spent roughly $619 million in political ads, with at least $248 million spent on the presidential race alone. A small but significant share of Meta’s ad profits has come courtesy of ads that promote false and misleading claims about election integrity.  

Meta’s policies prohibit ads that “discourage people from voting or call into question the legitimacy of an upcoming or ongoing election.” Yet the company has taken hundreds of thousands of dollars from groups and individuals running ads that contradict this policy, specifically the clause prohibiting ads that attempt to delegitimize the election. These include both broad misleading claims about the prevalence of election fraud – specifically, the prevalence of noncitizens voting – and ads that falsely claim certain voting methods are prone to fraud.  

For example, Americans for Legal Immigration PAC ran an ad claiming 42,000 noncitizens are “poised to steal the 2024 elections” in Arizona. The Heritage Foundation, the conservative think tank responsible for Project 2025, has run similar ads claiming, “illegal immigrants threaten the integrity of our elections.” Heritage spent roughly $164,000 on over 300 ads promoting election fraud claims in the six-month period studied, reflecting the group’s investment in the issue and the scale of Meta’s profits from it. 

Meta has also allowed ads that cast doubt on the security of specific voting methods. In July, the Conservative Political Action Conference (CPAC) paid to promote an Instagram post claiming that “ballot harvesting, early voting, mail-in ballots, and counting ballots after election day… have [been] used and are likely to [be] use[d] again to rig the election.” TheBlaze, a partisan news and media company, ran the same ad, promoting a video titled “Voter Fraud Exposed: How Elections Can be Stolen,” seven times between June and September of this year. In one post, the outlet claims that Google “rejected [the ad] for ‘Unreliable Claims’ even though we brought the receipts…”. 

Figures 1-3: Top left: CPAC promoted an Instagram post including false claims about mail-in voting. Top right: Americans for Legal Immigration PAC Facebook ad claiming noncitizens are poised to steal elections. Bottom: TheBlaze ad claiming elections can be stolen.

X is a comparatively small player for political ads. According to the platform’s political ad disclosure – which does not capture all spending on political ads, as discussed below – the platform earned roughly $15 million from these ads between January 1 and October 26, 2024. Even with these relatively meager revenues, X has allowed several ads that violate its advertiser policies.  

X’s political ads policy prohibits “false or misleading information intended to undermine public confidence in an election.” Researchers identified several ads that included false claims about noncitizens voting, including ads claiming Democrats “IMPORTED millions of ILLEGALS to vote in our elections,” and that “Dem poll workers in battleground states BLATANTLY changed ballots from Trump to Biden.” One ad campaign from the National Republican Senatorial Committee (NRSC) cost only $600 and garnered over 20 million views, according to X’s metrics. 

Figures 4 and 5: Left, an ad from the National Republican Senatorial Committee (NRSC) claiming that Democrats are importing noncitizens to vote; Right, an ad from the group American Mission claiming that Democratic poll workers are changing votes.

X’s news publisher exemption enables media outlets to run ads promoting misleading claims about elections. Because these ads are not included in the platform’s political ad disclosure report, news publishers can run ads with false or misleading information about elections without disclosing targeting or spending data. The Washington Times, for example, ran ads with unfounded claims that unspecified government offices are encouraging noncitizens to register to vote. These ads, which have been viewed at least 66 million times, are not considered political ads under X’s advertising policies, and the platform does not disclose how much was spent on them or user targeting data.  

Figure 6: An ad run by the Washington Times on X amplifying the false claim that noncitizens are allowed to vote.

Google’s political advertising policy claims the company supports “responsible political advertising” and expects “all political ads and destinations to comply with local legal requirements.” In its advertising policies section on “Misrepresentation,” Google cites “unreliable claims” as one form of misrepresentation that violates its policies, such as if an advertiser makes claims “that are demonstrably false and could significantly undermine participation or trust in an electoral or democratic process.” 

Figures 7-8: Ads from Judicial Watch claiming that elected Democrats are purposefully negligent in their duties to carry out elections fairly.

Because Google’s ads API does not allow keyword searches, researchers could not fully gauge the volume of election denial ads across Google properties. Even so, ISD researchers manually identified several advertisements that violated Google’s stated policy against undermining participation and trust in US elections. The right-wing “watchdog” group Judicial Watch ran multiple ads which accused “the Left” of purposefully refusing to adopt election integrity laws or “clean” voter rolls in order to rig elections. The ads ran on YouTube and clearly violated both Google’s and YouTube’s stated policies against election misinformation and undermining trust in the democratic process. 

Conclusion 

Despite claiming to hold political advertisers to content policies on misinformation that are as stringent as, or more stringent than, those for organic content, Meta, X, and Google have all profited from ads promoting false claims of election fraud or noncitizen voting. While limited ads data disclosure restricts the depth of possible analysis, this research demonstrates negligent policy enforcement and a symbiotic relationship between the platforms and advertisers aiming to sow distrust in elections. 

Both X and Meta allowed ads that baselessly used the specter of noncitizens voting to undermine confidence in US elections. Meta further allowed ads that disparaged mail-in voting, and Google allowed ads claiming voter rolls are intentionally unmaintained in order to enable election fraud. In all cases, these ads would appear to run against the platforms’ civic integrity and political ads policies. But all three platforms’ enforcement failures suggest a step back from commitments to enforce policies on undermining confidence in democratic processes.  

As Americans go to the polls, voters are polarized in their views on the security of the election system. As this research shows, social media platforms may be exacerbating that trend – and profiting from it – by allowing advertisers to target users with election denial content.  

End Notes

[1] Data from Snap and X political ad disclosure reports.

The images in this article may include memes and screenshots of content produced by actors including violent extremists, information operations and conspiracy theorists. These are reproduced to analyse and critique the images and educate our audience. We are not able to identify whether the components of these images may include copyrighted work and so are unable to acknowledge ownership under fair use doctrine. If you are the owner of an image that has been included, in part or in whole, in this article, and wish to be acknowledged, please get in touch at info@isdglobal.org.