The number of states that limit access to Internet content has risen rapidly in recent years. Drawing on arguments that are often powerful and compelling, such as "securing intellectual property rights," "protecting national security," "preserving cultural norms and religious values," and "shielding children from pornography and exploitation," many states are implementing extensive filtering practices to curb the perceived lawlessness of the medium. Many others are debating the enactment of similar measures, pursuing technological solutions to complex sociological problems. The following briefly describes the various methods of Internet filtering, the inherent limitations of filtering, and the OpenNet Initiative's methodology for the study of filtering practices.
Overview of Internet Censorship
Internet censorship and content restrictions can be enacted through a number of different strategies, which we describe below. Internet filtering normally refers to the technical approaches to controlling access to information on the Internet, as embodied in the first two of the four approaches described below.
1) Technical blocking
There are three commonly used techniques to block access to Internet sites: IP blocking, DNS tampering, and URL blocking using a proxy. These techniques are used to block access to specific web pages, domains, or IP addresses, and are most frequently applied where direct jurisdiction or control over the targeted websites is beyond the reach of authorities. Keyword blocking, which blocks access to websites based on words found in URLs or blocks searches involving blacklisted terms, is a more advanced technique that a growing number of countries are employing. Filtering based on dynamic content analysis—effectively reading the content of requested websites—though theoretically possible, has not been observed in our research. Denial-of-service attacks produce the same end result as other technical blocking techniques, blocking access to certain websites, but do so through indirect means.
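To illustrate the keyword approach, the check a filtering proxy might apply to each requested URL can be sketched as follows. This is a minimal sketch, not any state's or vendor's actual implementation, and the blacklist terms and URLs are hypothetical examples:

```python
# Minimal sketch of URL keyword blocking as a filtering proxy might apply it.
# The blacklist terms and URLs below are hypothetical examples.
BLACKLIST = {"protest", "proxy", "censorship"}

def is_blocked(url: str) -> bool:
    """Return True if any blacklisted keyword appears anywhere in the URL."""
    lowered = url.lower()
    return any(term in lowered for term in BLACKLIST)

print(is_blocked("http://example.com/protest-schedule"))  # blocked
print(is_blocked("http://example.com/weather"))           # allowed
```

Because the match is a simple substring test on the URL, the method is cheap to run at scale but blunt: it blocks any page whose address happens to contain a listed term, regardless of the page's actual content.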
2) Search result removals
In several instances, companies that provide Internet search services cooperate with governments to omit illegal or undesirable websites from search results. Rather than blocking access to the targeted sites, this strategy makes finding the sites more difficult.
3) Take-down

Where regulators have direct access to and legal jurisdiction over web content hosts, the simplest strategy is to demand the removal of websites with inappropriate or illegal content. In several countries, a cease and desist notice sent from one private party to another, with the threat of subsequent legal action, is enough to convince web hosts to take down websites with sensitive content. Where authorities have control of domain name servers, officials can deregister a domain that is hosting restricted content, making the website invisible to the browsers of users seeking to access the site.
4) Induced self-censorship
Another common and effective strategy to limit exposure to Internet content is to encourage self-censorship, both in browsing habits and in the choice of content to post online. This may take place through the threat of legal action, the promotion of social norms, or informal methods of intimidation. Arrest and detention related to Internet offenses, or on unrelated charges, have been used in many instances to induce compliance with Internet content restrictions. In many cases, the content restrictions are neither spoken nor written. The perception that the government is engaged in the surveillance and monitoring of Internet activity, whether accurate or not, provides another strong incentive to avoid posting material or visiting sites that might draw the attention of authorities.
Points of Control
Internet filtering can occur at any or all of the following four nodes in the network:
1) Internet backbone
State-directed implementation of national content filtering schemes and blocking technologies may be carried out at the backbone level, affecting Internet access throughout an entire country. This is often carried out at the international gateway.
2) Internet Service Providers
Government-mandated filtering is most commonly implemented by Internet Service Providers (ISPs) using any one or combination of the technical filtering techniques mentioned above.
3) Institutions

Filtering of institutional-level networks using technical blocking and/or induced self-censorship occurs in companies, government organizations, schools, and cybercafés. In some countries, this takes place at the behest of the government. More commonly, institutional-level filtering is carried out to meet the internal objectives of the institution, such as preventing the recreational use of workplace computers.
4) Individual computers
Filtering at the level of the home or individual computer can be achieved by installing filtering software that restricts that computer's ability to access certain sites.
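One common way such software works on a single machine is to intercept name resolution and redirect blocked domains to the local machine, in the style of a hosts-file override. The sketch below illustrates the idea; the domain names and the stand-in address for a real DNS lookup are hypothetical:

```python
# Sketch of host-level filtering in the style of a hosts-file override.
# The blocked domains and the stand-in resolved address are hypothetical.
BLOCKED_DOMAINS = {"blocked.example", "news.example"}

def resolve(hostname: str) -> str:
    """Return a loopback address for blocked domains, else a placeholder
    standing in for a genuine DNS lookup result."""
    if hostname in BLOCKED_DOMAINS:
        return "127.0.0.1"   # request never leaves the machine
    return "203.0.113.10"    # placeholder for an actual DNS answer

print(resolve("blocked.example"))
print(resolve("open.example"))
```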
Countries have been known to order filtering at all of these levels, whether setting up filtering systems at the international gateway to eliminate access to content throughout the entire country, instructing ISPs to block access to certain sites, obligating schools to filter their networks, or requiring libraries to install filtering software on each individual computer they provide.
Filtering's Inherent Flaws
Filtering technologies are prone to two simple inherent flaws: underblocking and overblocking. While these technologies can be effective at blocking specific content, such as high-profile websites, current technology cannot accurately identify and target specific categories of content across the billions of web pages and other Internet media, including newsgroups, email lists, chat rooms, and instant messaging. Underblocking refers to the failure of filtering to block access to all the content targeted for censorship. Conversely, filtering technologies often block content they do not intend to block, known as overblocking. Many blacklists are generated through a combination of manually designated websites and automated searches, and thus often contain websites that have been incorrectly classified. In addition, blunt filtering methods such as IP blocking can knock out large swaths of acceptable websites simply because they are hosted on the same IP address as a site with restricted content.
The profusion of Internet content means that Internet filtering regimes that hope to comprehensively block access to certain types of content must rely on software providers with automated content identification methods. This effectively puts control over access in the hands of private corporations that are not subject to the standards of review common in government mandates. In addition, because the filters are often proprietary, there is often no transparency in the labeling and restricting of sites. The danger is most explicit when the corporations that produce content filtering technology work alongside undemocratic regimes to set up nationwide content filtering schemes. Most states that implement content filtering and blocking augment commercially generated blocklists with customized lists that focus on topics and organizations that are nation- or language-specific.
How ONI Studies Internet Filtering
For more information on our methodology, tools, and data, please see our FAQ.
Measuring and describing the rapidly spreading phenomenon of Internet filtering defies simple metrics. Ideally, we would like to know how Internet censorship reduces the availability of information, how it hampers the development of online communities, and how it inhibits the ability of civic groups to monitor and report on the activities of the government, as these impact governance and ultimately economic growth. However, even if we were able to identify all the websites that have been put out of reach due to government action, the impact of blocking access to each website is far from obvious, particularly in this networked world where information has a habit of propagating itself and reappearing in multiple locations. With this recognition of the inherent complexity of evaluating Internet censorship, we set out with modest goals: to identify and document filtering.
Two lists of websites are checked in each of the countries tested: a global list (constant for each country) and a local list (different for each country). The global list comprises a wide range of internationally relevant and popular websites, including sites with content that is perceived to be provocative or objectionable. Most of the websites on the global list are in English. The local lists are designed individually for each country by regional experts. They include content representing a wide range of categories at the local and regional levels, and content in local languages. In countries where Internet censorship has been reported, the local lists also include many of the sites that are alleged to have been blocked. These lists are samples and are not meant to be exhaustive.
The actual tests are run from within each country using specially designed software. Where appropriate, the tests are run from different locations to capture the differences in blocking behavior across ISPs and across multiple days and weeks to control for normal connectivity problems.
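The core of such a test run can be sketched as a loop that requests each listed URL and records a first-pass classification; repeating the run across ISPs and over several days, as described above, helps separate transient faults from deliberate blocking. The `fetch` callable, the status-code heuristics, and the URLs below are assumptions for illustration, not ONI's actual software:

```python
# Sketch of accessibility testing: request each listed URL and record a
# first-pass classification. fetch() stands in for an HTTP request made
# from inside the country under study; all URLs are hypothetical.
from typing import Callable, Dict, List

def run_tests(urls: List[str], fetch: Callable[[str], int]) -> Dict[str, str]:
    """Classify each URL by HTTP status; -1 means the connection failed."""
    results = {}
    for url in urls:
        status = fetch(url)
        if status == -1:
            # Could be blocking or an ordinary network fault; only repeated
            # runs across ISPs and days can tell the two apart.
            results[url] = "unreachable"
        elif status in (403, 451):
            results[url] = "likely blocked"
        else:
            results[url] = "accessible"
    return results

# Simulated fetches for demonstration only:
fake = {"http://ok.example": 200, "http://blk.example": 403}.get
print(run_tests(
    ["http://ok.example", "http://blk.example", "http://down.example"],
    lambda u: fake(u, -1),
))
```

A single "unreachable" result proves nothing by itself; the classification only becomes meaningful once the same URL fails consistently in-country while remaining reachable from control locations.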
The completion of the initial accessibility testing is just the first step in our evaluation process. Additional diagnostic work is performed to separate normal connectivity errors from intentional tampering. There are a number of technical alternatives for filtering the Internet, some of which are relatively easy to discover. Others are difficult to detect and require extensive diagnostic work to confirm.
After analysis, the results are released in the form of country and regional reports. Please check the website often for updates!