The government said it would make tech platforms set codes of conduct governing how they stop dangerous falsehoods spreading, to be approved by a regulator. The regulator would set its own standard if a platform failed to do so, then fine companies for non-compliance.
The legislation, to be introduced in parliament on Thursday, targets false content that hurts election integrity or public health, calls for denouncing a group or injuring a person, or risks disrupting key infrastructure or emergency services.
The bill is part of a wide-ranging regulatory crackdown by Australia, where leaders have complained that foreign-domiciled tech platforms are overriding the country's sovereignty, and comes ahead of a federal election due within a year.
Meta, Facebook's owner, has already said it may block professional news content if it is forced to pay royalties, while X, formerly Twitter, has removed most content moderation since being bought by billionaire Elon Musk in 2022.
"Misinformation and disinformation pose a serious threat to the safety and wellbeing of Australians, as well as to our democracy, society and economy," said Communications Minister Michelle Rowland in a statement.
"Doing nothing and allowing this problem to fester is not an option."
An initial version of the bill was criticised in 2023 for giving the Australian Communications and Media Authority too much power to determine what constituted misinformation and disinformation, the term for intentionally spreading lies.
Rowland said the new bill specified that the media regulator would not have the power to force the takedown of individual pieces of content or user accounts. The new version of the bill protected professional news, artistic and religious content, but not government-authorised content.