Facebook owner Meta (META.O) is barring political campaigns and advertisers in other regulated industries from using its new generative AI advertising products, a company spokesperson said on Tuesday, denying access to tools that lawmakers have warned could accelerate the spread of false information ahead of elections.
Meta publicly disclosed the decision on Monday night, following the publication of this story, in updates to its help center. While the company’s advertising rules include no restrictions specific to AI, they prohibit ads containing content that its fact-checking partners have debunked.
In a note appended to several pages outlining the functionality of the tools, the company stated, “As we continue to test new Generative AI ads creation tools in Ads Manager, advertisers running campaigns that qualify as ads for Housing, Employment or Credit or Social Issues, Elections or Politics, or related to Health, Pharmaceuticals or Financial Services aren’t currently permitted to use these Generative AI features.”
“We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of Generative AI in ads that relate to potentially sensitive topics in regulated industries,” it continued.
The policy change was made a month after Meta, the second-largest digital ad marketplace in the world, declared it would begin to give advertisers more access to AI-powered advertising tools. These tools allow advertisers to instantly create backgrounds, edit images, and change ad copy in response to simple text prompts.
Access to the tools was initially limited to a small group of advertisers starting in the spring. The company said at the time that it was on schedule to roll them out to all advertisers worldwide by next year.
Following the excitement surrounding the release of OpenAI’s ChatGPT chatbot last year—which can respond to queries and other prompts with written replies that resemble those of a human—Meta and other tech firms have hurried to introduce generative AI ad solutions and virtual assistants in recent months.
The companies have not yet publicly detailed the safety guardrails they plan to impose on such systems, making Meta’s decision on political ads one of the industry’s most significant AI policy choices to date.
The largest digital advertising provider, Alphabet’s (GOOGL.O) Google, revealed last week the release of comparable image-customizing generative AI ad technologies. According to a Google representative who spoke to Reuters, the company intends to keep politics out of its products by prohibiting a list of “political keywords” as suggestions.
Additionally, Google has scheduled a policy update for mid-November that would mandate that any election-related advertisements carry a disclaimer if they use “synthetic content that inauthentically depicts real or realistic-looking people or events.”
Political advertisements are banned on TikTok, and Snapchat owner Snap (SNAP.N) blocks them from its AI chatbot. Snapchat also uses human reviewers to check the accuracy of all political ads, guarding against deceptive uses of AI. X, formerly Twitter, has not released any generative AI advertising tools.
Last month, Nick Clegg, the chief policy executive at Meta, declared that generative AI usage in political advertising was “clearly an area where we need to update our rules.”
In advance of the recent AI safety summit in the UK, he urged governments and tech firms to prepare for the possibility that the technology could be used to interfere with the 2024 elections, and called for a special focus on election-related content that “moves from one platform to the other.”

Clegg told Reuters that Meta was blocking its user-facing Meta AI virtual assistant from creating lifelike images of public figures. This summer, Meta pledged to develop a system to “watermark” AI-generated content. Except for parody and satire, Meta prohibits misleadingly manipulated AI-generated videos in all content, including organic, unpaid posts.
The company’s independent Oversight Board said last month that it would examine the wisdom of such a strategy, taking up a case involving a manipulated video of US President Joe Biden that Meta said it had left up because it was not AI-generated.