Dec 3 (Reuters) - Despite widespread concern that generative AI could interfere with major elections around the globe this year, the technology had limited impact across Meta Platforms' apps, the tech company said on Tuesday.

Coordinated networks of accounts seeking to spread propaganda or false content largely failed to build a significant audience on Facebook and Instagram or use AI effectively, Nick Clegg, Meta's president of global affairs, told a press briefing. The volume of AI-generated misinformation was low and Meta was able to quickly label or remove the content, he said.

The snapshot from Meta comes as misinformation experts say AI content has so far failed to significantly sway public opinion, as notable deepfake videos and audio, including of President Joe Biden's voice, have been quickly debunked.

Coordinated networks of accounts attempting to spread false content are increasingly shifting their activities to other social media and messaging apps with fewer safety guardrails, or are operating their own websites, in order to stay online, Clegg said. He added that Meta took down about 20 covert influence operations on its platform this year.

Clegg said Meta was overly stringent in its content moderation decisions during the COVID-19 pandemic, resulting in content that was mistakenly removed.

The company heard feedback from users who complained that their content had been removed unfairly, and Meta will aim to protect free expression and be more precise in…