Traditionally, regulators have existed to protect the public from the actions of companies or institutions.
Perhaps for the first time, broadcast regulator Ofcom is to be charged with protecting members of the public from each other.
Culture Secretary Nicky Morgan and Home Secretary Priti Patel released a joint statement following a consultation on how to protect the public from so-called ‘online harms’ - and said they were “minded” to expand Ofcom’s remit to do the job.
The watchdog is to be put in charge of ensuring internet companies like Facebook, Twitter and YouTube maintain a “duty of care” to their users - ensuring illegal content posted by users is removed and monitoring for issues like cyberbullying and self-harm images.
Firms could face fines - or worse - if they fail to protect their users from seeing harmful material.
Here’s how the proposed watchdog will work - and some of the potential pitfalls.
What websites will it cover?

In theory, any website which publishes ‘user generated content’.
So that covers Facebook, Twitter, Instagram, YouTube and Reddit for starters.
But it’ll also cover smaller, ‘niche’ online forums - although it’s thought Ofcom’s role policing and regulating them will be proportionate to their size.
And it’s likely to end up covering the comment sections of news websites too - because comments are also ‘user generated content’.
Does this mean I’ll have to watch what I say when I’m talking about politics online?

Probably not.
Ofcom’s main focus, it seems, will be on illegal material - child sexual abuse images and terrorist propaganda.
But they’ll also expect firms to have basic standards for what kind of (legal) behaviour and content is acceptable on their platforms - and will regulate the firms to ensure that is the case.
Crucially, they’ll have a “duty of care” towards their customers, especially children and vulnerable people.
Ofcom will be drawing up their own guidelines to require firms to tackle things like cyberbullying and self-harm.
But it seems they won’t be directly involved in policing things like catfishing - where people pretend to be someone else to deceive others - or other kinds of disinformation.
We won’t know exactly where the line will be drawn until Ofcom draws up its guidelines. And the point of regulating through Ofcom rather than the government doing it through the law is that an independent regulator will be able to react more quickly to “emerging threats.”
Or, as critics would see it, they’ll be able to react more quickly to “emerging moral panics”.
In the meantime, firms will be expected to follow their own ‘voluntary’ guidelines.
Does this mean I can complain to Ofcom about something I’ve seen posted on Twitter?

Complaints will still be handled by the social network, rather than handed to the regulator.
And today’s statement says: “Recognising concerns about freedom of expression, the regulator will not investigate or adjudicate on individual complaints.”
So no, you won’t be able to complain to Ofcom about someone’s bad tweets, sorry.
What about fake news?
The statement appears to suggest Ofcom won’t be in charge of anything to do with ‘electoral integrity’ - which will be tackled in the Defending Democracy programme currently underway at the Cabinet Office.
That probably means things like online political disinformation, advertising and propaganda (other than terrorist propaganda) won’t be within their remit.
That has the added benefit of ruling out an Ofcom investigation into the Conservative Party rebranding its own Twitter account as a fake fact checking service during the election campaign.
What about…porn?

Here’s where it gets a bit more vague.
The statement says: “We will not prevent adults from accessing or posting legal content, nor require companies to remove specific pieces of legal content.”
That said, it does suggest placing a responsibility on firms to “use a proportionate range of tools including age assurance, and age verification technologies to prevent children from accessing age-inappropriate content and to protect them from other harms.”
The government quietly shelved its statutory online age verification plan last year, after it had been bungled and delayed for years.
So it seems the government wants to make companies do it themselves, although it remains unclear whether this will be regulated in the same way as other content - with fines and warnings.
What if companies don’t keep up their ‘duty of care’ - and what if they’re based overseas?
The watchdog will be able to issue “notices and warnings, fines, business disruption measures, but also senior manager liability, and Internet Service Provider (ISP) blocking in the most egregious cases.”
The Mirror understands “business disruption measures” could mean ordering firms like PayPal not to allow payments to a certain company.
And “Internet Service Provider (ISP) blocking” means ordering all UK ISPs to block websites if they refuse to comply.
All of these are to be confirmed in legislation which will follow before the regulator takes control.
But the Mirror understands both are “on the table” as potential solutions to sites like 8Chan or independent pornography sites which are based overseas, have no representation in the UK and refuse to remove illegal content or implement age verification.
It also raises familiar concerns about what happens when a user of an online forum posts illegally obtained material - say, hacked emails published by Wikileaks.
Would the new regulator - or the government - have the power to order UK ISPs to block such a website on that basis?
What if something gets taken down wrongly?
Today’s statement says the regulator will introduce a system allowing “the opportunity for users to appeal” against content removal decisions.
And the previously published White Paper on online harms says there must be a mechanism for companies to appeal against enforcement decisions.
But the details of both have yet to be confirmed, and will be laid out later.