Highly convincing hoaxes involving images, audio and video are being created with a type of artificial intelligence known as generative AI and spread online.
Australia's eSafety Commissioner Julie Inman Grant said the technology had evolved more quickly than policy and was not properly regulated.
She said it also posed a risk to safety, with generative AI being used for manipulation, misinformation and extortion.
"The genie is out of the bottle there," Ms Inman Grant told the Connecting Up Conference in Melbourne.
"The risk and the harm to humanity is just too great."
Ms Inman Grant said the platforms benefiting from generative AI needed to ensure it was safe for users and the broader community.
She reflected on how seatbelts were introduced to cars in the 1970s, saying car companies originally pushed back against the changes but now compete on their safety record.
"Why is it that the technology industry, with all of their collective brilliance, isn't really prioritising the safety of people?" Ms Inman Grant asked the conference.
"Nobody wants to go onto a platform that's toxic or harmful or where they're being attacked all the time.
"The industry needs their seatbelt moment and I think it's starting to happen as more governments look at addressing this."
Ms Inman Grant called on the industry to slow down and address the issues around artificial intelligence before they become worse.
"We need to move a little bit more mindfully before we unleash these powerful technologies and put them into the hands of the everyday person," she said.
"We're going to get to the point where we don't know what's real and what's fake anymore because the deep fakes are becoming so realistic.
"If we can't have an easy way to detect digital provenance, misinformation is going to be an everyday experience."