Internet censorship | content suppression methods

Content suppression methods

Technical censorship

Various parties use different technical methods to prevent public access to undesirable resources, with varying effectiveness, cost, and side effects.

Blacklists

Entities mandating and implementing censorship usually identify the resources to be blocked by one of the following items: keywords, domain names, or IP addresses. Lists are populated from various sources, ranging from private suppliers through courts to specialized government agencies (the Ministry of Industry and Information Technology in China, the Ministry of Culture and Islamic Guidance in Iran).[12]

According to Hoffmann, different methods are used to block certain websites or pages, including DNS poisoning, blocking access to IP addresses, analyzing and filtering URLs, inspecting and filtering packets, and resetting connections.[13]
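The domain-name component of such a blacklist is typically matched against a hostname and all of its parent domains, so a single blacklist entry covers every subdomain. A minimal sketch of that matching logic, using hypothetical names (real filters also handle internationalized domains, wildcards, and URL paths):

```python
def is_blocked(hostname: str, blacklist: set[str]) -> bool:
    """Return True if the hostname or any of its parent domains
    appears in the blacklist, as a domain-based filter would match.
    Normalizes case and a trailing dot before comparing."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check "video.example.com", then "example.com", then "com".
    return any(".".join(labels[i:]) in blacklist for i in range(len(labels)))
```

With a blacklist containing only `example.com`, the check blocks `video.example.com` as well, which is one reason blacklist-based filtering tends toward over-blocking.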

Points of control

Censor-nominated technologies can be enforced at various levels of a country's Internet infrastructure:[12]

  • Internet backbone, including Internet exchange points (IXPs) with international networks (autonomous systems), operators of submarine communications cables, satellite Internet access points, international optical fibre links, etc. In addition to facing huge performance challenges due to the large bandwidths involved, blocking at this level does not give censors access to information exchanged within the country.
  • Internet service providers, which involves the installation of voluntary (as in the UK) or mandatory (as in Russia) Internet surveillance and blocking equipment.
  • Individual institutions, which in most cases implement some form of Internet access controls to enforce their own policies but, especially in the case of public or educational institutions, may be requested or coerced to do so at the government's request.
  • Personal devices, whose manufacturers or vendors may be required by law to install censorship software.
  • Application service providers (e.g. social media companies), which may be legally required to remove particular content. Foreign providers with a business presence in a given country may also be coerced into restricting access to specific content for visitors from the requesting country.
  • Certificate authorities, which may be required to issue counterfeit X.509 certificates controlled by the government, allowing man-in-the-middle surveillance of TLS-encrypted connections.
  • Content delivery network providers, which tend to aggregate large amounts of content (e.g. images) and may therefore be an attractive target for censorship authorities.
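The certificate-authority risk above is one reason some clients pin certificates: a locally stored fingerprint of the expected certificate is compared against whatever the server presents. A minimal sketch of fingerprint pinning, assuming the DER-encoded certificate bytes have already been obtained (in Python, for example, via `ssl.SSLSocket.getpeercert(binary_form=True)`):

```python
import hashlib
import hmac

def cert_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded X.509 certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def matches_pin(der_cert: bytes, pinned_fingerprint: str) -> bool:
    """Compare an observed certificate against a locally pinned fingerprint.
    A mismatch for a site whose certificate has not legitimately rotated can
    indicate a substituted certificate issued by a coerced authority.
    Uses a constant-time comparison to avoid leaking match position."""
    return hmac.compare_digest(cert_fingerprint(der_cert), pinned_fingerprint)
```

Pinning trades flexibility for safety: legitimate certificate rotation also triggers a mismatch, so pins must be updated alongside the certificate.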

Approaches

Internet content is subject to technical censorship methods, including:[2][5]

  • Internet Protocol (IP) address blocking: Access to a certain IP address is denied. If the target website is hosted on a shared hosting server, all websites on the same server will be blocked. This affects IP-based protocols such as HTTP, FTP and POP. A typical circumvention method is to find proxies that have access to the target websites, but proxies may be jammed or blocked, and some websites, such as Wikipedia (when editing), also block proxies. Some large websites, such as Google, have allocated additional IP addresses to circumvent the block, but blocks have later been extended to cover the new addresses. Owing to the challenges of geolocation, geo-blocking is normally implemented via IP address blocking.
  • Domain name system (DNS) filtering and redirection: Blocked domain names are not resolved, or an incorrect IP address is returned via DNS hijacking or other means. This affects all IP-based protocols such as HTTP, FTP and POP. A typical circumvention method is to find an alternative DNS resolver that resolves domain names correctly, but such resolvers are subject to blockage as well, especially via IP address blocking. Another workaround is to bypass DNS when the IP address is obtainable from other sources and is not itself blocked, for example by modifying the Hosts file or typing the IP address instead of the domain name into a Web browser.
  • Uniform Resource Locator (URL) filtering: URL strings are scanned for target keywords regardless of the domain name specified in the URL. This affects the HTTP protocol. Typical circumvention methods are to use escaped characters in the URL, or to use encrypted protocols such as VPN and TLS/SSL.[14]
  • Packet filtering: TCP packet transmissions are terminated when a certain number of controversial keywords are detected. This affects all TCP-based protocols such as HTTP, FTP and POP, but search engine results pages are more likely to be censored. Typical circumvention methods are to use encrypted connections – such as VPN and TLS/SSL – to hide the content from the filter, or to reduce the TCP/IP stack's MTU/MSS so that less text is contained in any given packet.
  • Connection reset: If a previous TCP connection is blocked by the filter, future connection attempts from both sides can also be blocked for some variable amount of time. Depending on the location of the block, other users or websites may also be blocked, if the communication is routed through the blocking location. A circumvention method is to ignore the reset packet sent by the firewall.[15]
  • Network disconnection: A technically simpler method of Internet censorship is to completely cut off all routers, either by software or by hardware (turning off machines, pulling out cables). A circumvention method could be to use a satellite ISP to access the Internet.[16]
  • Portal censorship and search result removal: Major portals, including search engines, may exclude web sites that they would ordinarily include. This renders a site invisible to people who do not know where to find it. When a major portal does this, it has a similar effect as censorship. Sometimes this exclusion is done to satisfy a legal or other requirement, other times it is purely at the discretion of the portal. For example, Google.de and Google.fr remove Neo-Nazi and other listings in compliance with German and French law.[17]
  • Computer network attacks: Denial-of-service attacks and attacks that deface opposition websites can produce the same result as other blocking techniques, preventing or limiting access to certain websites or other online services, although only for a limited period of time. This technique might be used during the lead up to an election or some other sensitive period. It is more frequently used by non-state actors seeking to disrupt services.[18]
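DNS filtering and redirection, described above, can sometimes be detected by comparing a resolver's answers against a trusted reference resolver. A rough heuristic sketch, using hypothetical documentation-range addresses; it is illustrative only, since CDNs legitimately return different addresses to different resolvers:

```python
def dns_answers_suspect(answers: set[str], reference: set[str],
                        known_block_ips: set[str]) -> bool:
    """Heuristic check of a resolver's answers for signs of DNS tampering:
    either an answer matches an address the censor is known to inject,
    or the answer set is completely disjoint from that of a trusted
    reference resolver. Not authoritative: CDN geo-routing can also
    produce disjoint answer sets."""
    if answers & known_block_ips:
        return True  # resolver returned a known injected address
    return bool(reference) and not (answers & reference)
```

In practice such checks are one input among many; projects that measure censorship combine resolver comparison with TLS and HTTP-level probes.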
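The MTU/MSS circumvention mentioned under packet filtering works because a filter that inspects each packet in isolation cannot see a keyword split across two payloads. A toy illustration of that blind spot (deliberately naive; real DPI systems may reassemble TCP streams before matching):

```python
def naive_keyword_filter(packets: list[bytes], keyword: bytes) -> bool:
    """A simplistic per-packet keyword filter that scans each payload
    in isolation, without reassembling the TCP stream."""
    return any(keyword in payload for payload in packets)

# Keyword contained in a single packet: detected.
single = [b"GET /falun HTTP/1.1"]

# The same bytes split across two packets, as happens when the sender
# lowers its MSS: neither fragment alone contains the keyword.
split = [b"GET /fa", b"lun HTTP/1.1"]
```

This is also why more capable censorship systems perform stream reassembly, at a significant cost in memory and processing per connection.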

Over and under blocking

Technical censorship techniques are subject to both over- and under-blocking, since it is often impossible to block exactly the targeted content without also blocking other permissible material or leaving some targeted material accessible; filtering therefore provides more or less protection than desired.[5] An example is blocking the IP address of a server that hosts multiple websites, which prevents access to all of the websites rather than just those containing content deemed offensive.[19]
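The shared-hosting case can be made concrete: blocking one IP address takes down every site served from it. A toy sketch with hypothetical site names and a documentation-range address:

```python
# Hypothetical mapping of a server IP address to the sites it hosts.
hosting = {
    "203.0.113.10": ["target-site.example",
                     "innocent-blog.example",
                     "shop.example"],
}

def collateral_blocks(blocked_ip: str, intended_target: str) -> list[str]:
    """Sites made unreachable by an IP-address block, besides the
    site the censor actually intended to block."""
    return [site for site in hosting.get(blocked_ip, [])
            if site != intended_target]
```

Here blocking `203.0.113.10` to reach `target-site.example` also cuts off the two unrelated sites, which is over-blocking in the sense described above.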

Use of commercial filtering software

Screenshot of Websense blocking Facebook in an organisation where it has been configured to block a category named "Personals and Dating"

Writing in 2009, Ronald Deibert, professor of political science at the University of Toronto and co-founder and a principal investigator of the OpenNet Initiative, and, writing in 2011, Evgeny Morozov, a visiting scholar at Stanford University and an op-ed contributor to The New York Times, explain that companies in the United States, Finland, France, Germany, Britain, Canada, and South Africa are in part responsible for the increasing sophistication of online content filtering worldwide. While the off-the-shelf filtering software sold by Internet security companies is primarily marketed to businesses and individuals seeking to protect themselves, their employees, and their families, it is also used by governments to block what they consider sensitive content.[20][21]

Among the most popular filtering software programs is SmartFilter by Secure Computing in California, which was bought by McAfee in 2008. SmartFilter has been used by Tunisia, Saudi Arabia, Sudan, the UAE, Kuwait, Bahrain, Iran, and Oman, as well as the United States and the UK.[22] Myanmar and Yemen have used filtering software from Websense. The Canadian-made commercial filter Netsweeper[23] is used in Qatar, the UAE, and Yemen.[24] The Canadian organization Citizen Lab has reported that Sandvine and Procera products are used in Turkey and Egypt.[25]

On 12 March 2013 in a Special report on Internet Surveillance, Reporters Without Borders named five "Corporate Enemies of the Internet": Amesys (France), Blue Coat Systems (U.S.), Gamma (UK and Germany), Hacking Team (Italy), and Trovicor (Germany). The companies sell products that are liable to be used by governments to violate human rights and freedom of information. RWB said that the list is not exhaustive and will be expanded in the coming months.[26]

In a U.S. lawsuit filed in May 2011, Cisco Systems is accused of helping the Chinese Government build a firewall, known widely as the Golden Shield, to censor the Internet and keep tabs on dissidents.[27] Cisco said it had made nothing special for China. Cisco is also accused of aiding the Chinese government in monitoring and apprehending members of the banned Falun Gong group.[28]

Many filtering programs allow blocking to be configured based on dozens of categories and sub-categories, such as these from Websense: "abortion" (pro-life, pro-choice), "adult material" (adult content, lingerie and swimsuit, nudity, sex, sex education), "advocacy groups" (sites that promote change or reform in public policy, public opinion, social practice, economic activities, and relationships), "drugs" (abused drugs, marijuana, prescribed medications, supplements and unregulated compounds), "religion" (non-traditional religions, occult and folklore; traditional religions), ....[24] The blocking categories used by the filtering programs may contain errors leading to the unintended blocking of websites.[20] The blocking of Dailymotion in early 2007 by Tunisian authorities was, according to the OpenNet Initiative, due to Secure Computing wrongly categorizing Dailymotion as pornography for its SmartFilter filtering software. It was initially thought that Tunisia had blocked Dailymotion because of satirical videos about human rights violations in Tunisia, but after Secure Computing corrected the mistake, access to Dailymotion was gradually restored in Tunisia.[29]

Organizations such as the Global Network Initiative, the Electronic Frontier Foundation, Amnesty International, and the American Civil Liberties Union have successfully lobbied some vendors such as Websense to make changes to their software, to refrain from doing business with repressive governments, and to educate schools that have inadvertently configured their filtering software too strictly.[30][31][32] Nevertheless, regulations and accountability related to the use of commercial filters and services are often non-existent, and there is relatively little oversight from civil society or other independent groups. Vendors often consider information about which sites and content are blocked to be valuable intellectual property that is not made available outside the company, sometimes not even to the organizations purchasing the filters. Thus, by relying upon out-of-the-box filtering systems, the detailed task of deciding what is or is not acceptable speech may be outsourced to commercial vendors.[24]

Non-technical censorship

PDF about countries that criminalize free speech

Internet content is also subject to censorship methods similar to those used with more traditional media. For example:[5]

  • Laws and regulations may prohibit various types of content and/or require that content be removed or blocked either proactively or in response to requests.
  • Publishers, authors, and ISPs may receive formal and informal requests to remove, alter, slant, or block access to specific sites or content.
  • Publishers and authors may accept bribes to include, withdraw, or slant the information they present.
  • Publishers, authors, and ISPs may be subject to arrest, criminal prosecution, fines, and imprisonment.
  • Publishers, authors, and ISPs may be subject to civil lawsuits.
  • Equipment may be confiscated and/or destroyed.
  • Publishers and ISPs may be closed or required licenses may be withheld or revoked.
  • Publishers, authors, and ISPs may be subject to boycotts.
  • Publishers, authors, and their families may be subject to threats, attacks, beatings, and even murder.[33]
  • Publishers, authors, and their families may be threatened with or actually lose their jobs.
  • Individuals may be paid to write articles and comments in support of particular positions or attacking opposition positions, usually without acknowledging the payments to readers and viewers.[34][35]
  • Censors may create their own online publications and Web sites to guide online opinion.[34]
  • Access to the Internet may be limited due to restrictive licensing policies or high costs.
  • Access to the Internet may be limited due to a lack of the necessary infrastructure, deliberate or not.
  • Access to search results may be restricted due to government involvement in the censorship of specific search terms, or content may be excluded under terms agreed with search engines. To be allowed to operate in a new territory, a search engine may have to agree to abide by the censorship standards set by the government of that country.[36]

Self-censorship by web service operators

Removal of user accounts based on controversial content

Deplatforming is a form of Internet censorship in which controversial speakers or speech are suspended, banned, or otherwise shut down by social media platforms and other service providers that generally provide a venue for free speech or expression.[37] Banking and financial service providers, among other companies, have also denied services to controversial activists or organizations, a practice known as "financial deplatforming".

Law professor Glenn Reynolds dubbed 2018 the "Year of Deplatforming", in an August 2018 article in The Wall Street Journal.[37] According to Reynolds, in 2018 "the internet giants decided to slam the gates on a number of people and ideas they don't like. If you rely on someone else's platform to express unpopular ideas, especially ideas on the right, you're now at risk."[37] On August 6, 2018, for example, several major platforms, including YouTube and Facebook, executed a coordinated, permanent ban on all accounts and media associated with conservative talk show host Alex Jones and his media platform InfoWars, citing "hate speech" and "glorifying violence."[38] Reynolds also cited Gavin McInnes and Dennis Prager as prominent 2018 victims of deplatforming based on their political views, noting, "Extremists and controversialists on the left have been relatively safe from deplatforming."[37]

Official statements regarding site and content removal

Most major web service operators reserve broad rights to remove or pre-screen content and to suspend or terminate user accounts, sometimes giving only a vague general list of the reasons allowing removal, or no specific list at all. The phrases "at our sole discretion", "without prior notice", and "for other reasons" are common in Terms of Service agreements.

  • Facebook: Among other things, the Facebook Statement of Rights and Responsibilities says: "You will not post content that: is hateful, threatening, or pornographic; incites violence; or contains nudity or graphic or gratuitous violence", "You will not use Facebook to do anything unlawful, misleading, malicious, or discriminatory", "We can remove any content or information you post on Facebook if we believe that it violates this Statement", and "If you are located in a country embargoed by the United States, or are on the U.S. Treasury Department's list of Specially Designated Nationals you will not engage in commercial activities on Facebook (such as advertising or payments) or operate a Platform application or website".[39]
  • Google: Google's general Terms of Service, which were updated on 1 March 2012, state: "We may suspend or stop providing our Services to you if you do not comply with our terms or policies or if we are investigating suspected misconduct", "We may review content to determine whether it is illegal or violates our policies, and we may remove or refuse to display content that we reasonably believe violates our policies or the law", and "We respond to notices of alleged copyright infringement and terminate accounts of repeat infringers according to the process set out in the U.S. Digital Millennium Copyright Act".[40]
    • Google Search: Google's Webmaster Tools help includes the following statement: "Google may temporarily or permanently remove sites from its index and search results if it believes it is obligated to do so by law, if the sites do not meet Google's quality guidelines, or for other reasons, such as if the sites detract from users' ability to locate relevant information."[41]
  • Twitter: The Twitter Terms of Service state: "We reserve the right at all times (but will not have an obligation) to remove or refuse to distribute any Content on the Services and to terminate users or reclaim usernames" and "We reserve the right to remove Content alleged to be [copyright] infringing without prior notice and at our sole discretion".[42]
  • YouTube: The YouTube Terms of Service include the statements: "YouTube reserves the right to decide whether Content violates these Terms of Service for reasons other than copyright infringement, such as, but not limited to, pornography, obscenity, or excessive length. YouTube may at any time, without prior notice and in its sole discretion, remove such Content and/or terminate a user's account for submitting such material in violation of these Terms of Service", "YouTube will remove all Content if properly notified that such Content infringes on another's intellectual property rights", and "YouTube reserves the right to remove Content without prior notice".[43]
  • Wikipedia: Content within a Wikipedia article may be modified or deleted by any editor as part of the normal process of editing and updating articles. All editing decisions are open to discussion and review. The Wikipedia Deletion policy outlines the circumstances in which entire articles can be deleted. Any editor who believes a page does not belong in an encyclopedia can propose its deletion. Such a page can be deleted by any administrator if, after seven days, no one objects to the proposed deletion. Speedy deletion allows for the deletion of articles without discussion and is used to remove pages that are so obviously inappropriate for Wikipedia that they have no chance of surviving a deletion discussion. All deletion decisions may be reviewed, either informally or formally.[44]
  • Yahoo!: Yahoo!'s Terms of Service (TOS) state: "You acknowledge that Yahoo! may or may not pre-screen Content, but that Yahoo! and its designees shall have the right (but not the obligation) in their sole discretion to pre-screen, refuse, or remove any Content that is available via the Yahoo! Services. Without limiting the foregoing, Yahoo! and its designees shall have the right to remove any Content that violates the TOS or is otherwise objectionable."[45]