Facebook whistleblower Frances Haugen issues a frightening warning about AI, saying it could soon have “civilization-changing effects.”

Advances in artificial intelligence could have “civilization-changing effects” and rapidly increase the amount of dangerous misinformation spread online, a former Facebook employee has warned.
Whistleblower Frances Haugen said that as AI becomes more widespread and the economy becomes more reliant on software in data centers, an “era of opacity” will dawn around the world.
The former engineer and product manager, who left Facebook in 2021 after leaking thousands of internal documents showing the platform knowingly spread toxic content, said that without stricter regulation there would be “a repeat of what we saw on social media,” on a much larger scale.
“As we start with scalable systems running in data centers, a very small number of people can have a level of power that impacts civilization,” Ms. Haugen told the National Press Club on Tuesday.
“There are very few people at Facebook who really understand how these algorithms work, and yet it still impacts what everyone sees in the news.”
“When we see the extent of the impact this can have, the consequences can be really serious.”

Millions of people around the world use artificial intelligence programs every day.
ChatGPT alone has more than 100 million users worldwide and recorded 1.6 billion visits in June.
Recent surveys suggest that almost one in four Australians use AI programs in their workplace and up to 70 percent of children aged 14 to 17 have used AI software at least once.
Ms. Haugen – who built a successful tech career in California’s Silicon Valley before testifying that Facebook promoted hate speech and distributed eating disorder content to teenagers on Instagram – said newer algorithms promote “the most divisive, polarizing content” to generate profit.
“It used to be that the content that kept you engaged was considered the best. They said, ‘If you reshare it, get a like or a comment, then that’s good content,’” she said.
“Now our most controversial, polarizing and worst content gets the most distribution.”
“If we don’t pass laws that say, ‘Hey, we’re not going to repeat the mistakes of social media, we need to have transparency… we need to protect whistleblowers,’ we’re going to see repeats of what we saw with Facebook.”
“Any time there’s a profit motive and there’s no feedback cycle to correct those lies, we’re going to see those gaps get bigger and bigger.”

The number of ChatGPT users increased from 100 million in January to 180.5 million in August. Image: Marco Bertorello/AFP
Calls for greater regulation of online hate speech and misinformation are falling short in Australia, according to a leading free speech advocate, as is the protection of whistleblowers and the media.
Award-winning journalist Peter Greste said Australia needed a mechanism that took into account the role of journalists and whistleblowers.
He also said the government and the ADF needed to be “far more transparent” about their work, especially amid a growing crisis in the Middle East.
“I just think that we as a country, and particularly our security and defense agencies, are too focused on secrecy at the moment,” Professor Greste told the National Press Club.
“I think it causes a lot of problems for everyone.”