With election season just around the corner, Google and YouTube are keeping a close eye on AI-altered political advertising — a growing problem as campaigning intensifies and political candidates turn to generative AI.
According to a new update to Google’s political content guidelines, any advertising materials that contain “synthetic” or artificially altered people, voices or other events must “prominently disclose” their use within the advertising itself.
Google already bans the use of deepfake content in advertising, but the expanded disclosure rules now apply to all AI alterations beyond inconsequential changes, The Washington Post reported. The policy exempts synthetic content that has been modified or generated in ways that “do not affect advertising claims,” and AI may still be used for minor video and photo edits, such as image resizing, cropping, color correction, error correction and background editing.
Political advertising and its intersection with Big Tech is becoming a major issue ahead of the 2024 election. Elon Musk recently announced that X would lift its ban on political ads — just as the platform’s users report a rise in unlabeled ads appearing in their feeds.
A September report from Media Matters for America found that Meta was failing to enforce its own policies on political ads, citing unlabeled right-wing ads on Facebook and Instagram.
Google’s new policy takes effect in November and applies to election advertising across Google’s platforms, including YouTube, as well as third-party websites that are part of the company’s advertising network.