https://www.spectator.co.uk/article/the-hidden-harms-in-the-online-safety-bill
The Hidden Harms in the Online Safety Bill
Jonathan Sumption
The Spectator, 20 August.
Weighing in at 218 pages, with 197 sections and 15 schedules, the Online Safety Bill is a clunking attempt to regulate content on the internet. Its internal contradictions and exceptions, its complex paper chase of definitions, its weasel language suggesting more than it says, all positively invite misunderstanding. Parts of it are so obscure that its promoters and critics cannot even agree on what it does.
Nadine Dorries, the Culture Secretary, says that it is all about protecting children and vulnerable adults. She claims it does nothing to limit free speech. Technically, she is right: her bill does not directly censor the internet. It instead seeks to impose on media companies an opaque and intrusive culture of self-censorship – which will have the same effect.
As things stand, the law distinguishes between online publishers (like The Spectator), which generate content and can be held responsible for it, and online intermediaries (Google, Facebook, etc), which merely provide online facilities and have no significant editorial function. Mere intermediaries have no obligation to monitor content and are only required to take down illegal material of which they are aware.
The Online Safety Bill will change all this. The basic idea is that editorial responsibility for material generated by internet users will be imposed on all online platforms: social media and search engines. They will have a duty to ‘mitigate and manage the risks of harm to individuals’ arising from internet use.
A small proportion of the material available on the internet is truly nasty stuff. There is a strong case for carefully targeted rules requiring the moderation or removal of the worst examples. The difficulty is to devise a way of doing this without accidentally suppressing swaths of other material. So the material targeted must be precisely defined and identifiable. This is where the Online Safety Bill falls down.
Some of the material targeted by the bill is obviously unacceptable. Illegal content, such as material promoting terrorism or the sexual exploitation of children, must be moderated or taken down. Such content is already banned under existing legislation. It is defined by law and can be identified with a fair degree of accuracy. Some material, notably pornographic images, must be restricted to adults: in practice, this requires online age verification. So far, so good.
The real vice of the bill is that its provisions are not limited to material capable of being defined and identified. It creates a new category of speech which is legal but ‘harmful’. The range of material covered is almost infinite, the only limitation being that it must be liable to cause ‘harm’ to some people. Unfortunately, that is not much of a limitation. Harm is defined in the bill in circular language of stratospheric vagueness. It means any ‘physical or psychological harm’. As if that were not general enough, ‘harm’ also extends to anything that may increase the likelihood of someone acting in a way that is harmful to themselves, either because they have encountered it on the internet or because someone has told them about it.
This test is almost entirely subjective. Many things which are harmless to the overwhelming majority of users may be harmful to sufficiently sensitive, fearful or vulnerable minorities, or may be presented as such by manipulative pressure groups. At a time when even universities are warning adult students against exposure to material such as Chaucer with his rumbustious references to sex, or historical or literary material dealing with slavery or other forms of cruelty, the harmful propensity of any material whatever is a matter of opinion. It will vary from one internet user to the next.
If the bill is passed in its current form, internet giants will have to identify categories of material which are potentially harmful to adults and provide them with options to cut it out or alert them to its potentially harmful nature. This is easier said than done. The internet is vast. At the last count, 300,000 status updates are uploaded to Facebook every minute, with 500,000 comments left in that same minute. YouTube adds 500 hours of video every minute. Faced with the need to find unidentifiable categories of material liable to inflict unidentifiable categories of harm on unidentifiable categories of people, and threatened with criminal sanctions and enormous regulatory fines (up to 10 per cent of global revenue), what is a media company to do?
The only way to cope will be to take the course involving the least risk: if in doubt, cut it out. This will involve a huge measure of regulatory overkill. A new era of intensive internet self-censorship will have dawned.
The problem is aggravated by the inevitable use of what the bill calls ‘content moderation technology’, i.e. algorithms. They are necessarily indiscriminate because they operate by reference to trigger text or images. They are insensitive to context. They do not cater for nuance or irony. They cannot distinguish between mischief-making and serious debate. They will be programmed to err on the side of caution. The pious injunctions in the bill to protect ‘content of democratic importance’ and ‘journalistic content’ and to ‘have regard to’ the implications for privacy and freedom of expression are unlikely to make much difference.
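The point about context-blindness can be made concrete with a small sketch (my own illustration, not anything drawn from the bill or from any platform's actual systems): a naive filter that flags any post containing a listed trigger term will treat a charity's helpline announcement, a novelist's research note and genuinely harmful material exactly alike.

# Purely illustrative sketch of a naive trigger-word filter of the kind
# described above. The term list and examples are hypothetical.

TRIGGER_TERMS = {"self-harm", "overdose", "extremist"}  # hypothetical trigger list

def flag_for_review(text: str) -> bool:
    """Return True if any trigger term appears anywhere in the text."""
    lowered = text.lower()
    return any(term in lowered for term in TRIGGER_TERMS)

# A charity announcement, a novelist's note and genuinely harmful content
# are all flagged alike: the filter cannot tell them apart.
examples = [
    "Charity launches new helpline for people recovering from an overdose.",
    "My novel's villain is an extremist; here is how I researched the role.",
    "Detailed instructions encouraging self-harm.",
]
for text in examples:
    print(flag_for_review(text), "-", text)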
As applied to adults, the whole concept of restricting material which is entirely legal is a patronising abuse of legislative power. If the law allows me to receive, retain or communicate some item of information in writing or by word of mouth, how can it rationally prevent me from doing the same thing through the internet? Why should adult internet users be infantilised by applying to them tests directed to the protection of the most sensitive minorities? There are surely better ways of looking after the few who cannot look after themselves.
It is bad enough to be patronised by law, but worse to be patronised by official discretion. The bill will empower Ofcom, the regulator, to publish codes of practice with ‘guidance’ and ‘recommendations’, which will become the benchmark for regulatory action against internet intermediaries. All this will happen under the beady eyes of ministers. Ultimate power lies with the secretary of state, who can direct Ofcom to change its guidance and specify categories of material which she regards as harmful.
What might these categories be? The government’s White Paper and public statements by the Department for Digital, Culture, Media and Sport suggest that they will include ‘misinformation and disinformation’. There have been suggestions that this might include climate change denial and Covid disinformation. Ministers will say, citing section 190(4), that their policies are aimed at the public good, so that material which undermines them causes harm. It is no good saying that Ms Dorries is a nice lady who would never do anything so horrid. Her successors may not be. In a society which has always valued freedom of expression and dissent, these are powers which no public officer ought to have.
We had a glimpse of this brave new world during the pandemic. Facebook, YouTube and the like were keen to curry favour with the government and stave off statutory regulation by taking a ‘responsible’ view of controversial questions. YouTube’s self-censorship policy was designed to exclude ‘medical misinformation’, which it defined as any content which ‘contradicts guidance from the World Health Organisation or local health authorities’. Criticism of government policy by David Davis MP and by Talk Radio was temporarily taken down. The Royal Society, Britain’s premier scientific society, proposed ‘legislation and punishment of those who produced and disseminated false information’ about vaccines. This kind of thing is based on the notion that intellectual enquiry and the dissemination of ideas should be subordinated to authority. What the Royal Society meant by ‘false information’ was information inconsistent with the scientific consensus as defined by some recognised scientific authority, such as themselves.
The Online Safety Bill has been put on hold until the new prime minister takes office. So it is worth reminding the successful candidate why Britain has traditionally rejected attempts by the state to control the flow of information. In part, it is an instinctive attachment to personal freedom. And in part it is a recognition of the politically dangerous and culturally destructive results of such control.
All statements of fact or opinion are provisional. They reflect the current state of knowledge and experience. But knowledge and experience are not closed or immutable categories. They are inherently liable to change. Once upon a time, the scientific consensus was that the sun moved around the Earth and that blood did not circulate around the body. These propositions were refuted only because orthodoxy was challenged by people once thought to be dangerous heretics. Knowledge advances by confronting contrary arguments, not by hiding them away. Any system for regulating the expression of opinion or the transmission of information will end up by privileging the anodyne, the uncontroversial, the conventional and the officially approved.
We have to accept the implications of human curiosity. Some of what people say will be wrong. Some of it may even be harmful. But we cannot discover truth without accommodating error. It is the price that we pay for allowing knowledge and understanding to develop and human civilisation to progress.
Jonathan Sumption is an author, medieval historian and former Supreme Court judge