Internet Filtering in Europe 2006-2007

Introduction

In less than a decade, the Internet in Europe has evolved from a virtually unfettered environment to one in which filtering in most countries, particularly within the European Union (EU), is the norm rather than the exception. Compared with many of the countries in other regions that block Internet content, the rise of filtering in Europe is notable because it departs from a strong tradition of democratic processes and a commitment to free expression. Filtering takes place in a variety of forms, including the state-ordered takedown of illegal content on domestically hosted Web sites; the blocking of illegal content hosted abroad; and the filtering by search engines of results pertaining to illegal content. As in most countries around the world that engage in filtering, the distinction between voluntary and state-mandated filtering is somewhat blurred in Europe. In many instances filtering by Internet service providers (ISPs), search engines, and content providers in Europe is termed “voluntary,” but is carried out with the implicit understanding that cooperation with state authorities will prevent further legislation on the matter.

The scope of illegal content that is filtered in Europe pertains largely to child pornography, racism, and material that promotes hatred and terrorism, although more recently some countries have proposed or revised laws that extend filtering to other areas such as copyright and gambling. Filtering also takes place on account of defamation laws, a practice that has been criticized, particularly in the UK, for curtailing lawful online behavior and promoting an overly aggressive notice and takedown policy, under which ISPs remove content immediately for fear of legal action. ISPs in Europe have no general obligation to monitor Internet use and are protected from liability for illegal content by regulations at the EU level, but must filter such content once it is brought to their notice. The degree of filtering in member states therefore depends on the efforts of governments, police, advocacy groups, and the general public in identifying and reporting illegal content.

Efforts have been underway over the past decade to create a set of common policies and practices on Internet regulation at the EU level. This is viewed as necessary to promote regional competitiveness and commerce, to counter Internet crime and terrorism, and to provide a platform for sharing best practices among nations. Notable advances in regulation at the EU level, although not directly in the area of filtering, include the definition of ISP liability for illegal content and obligations regarding data retention.

Regional regulation

A recurring theme throughout this overview is the overlapping nature of individual country-level law and regionwide regulation. Countering criminal activity on the Internet and promoting the overall competitiveness of the Internet industry have been the primary reasons cited for developing a regional regulatory framework.1 The regional approach in Europe has its beginnings in April 1996, when the European Council asked the European Commission to produce “a summary of problems posed by the rapid development of the Internet” and to assess the need for regulation. In response, the Commission produced a report titled “Illegal and Harmful Content on the Internet” and a Green Paper on “The Protection of Minors and Human Dignity in Audiovisual Services.” Based on these documents, “a common framework for self-regulation (of the Internet) at the European level” was drafted, culminating in an Action Plan on Promoting Safe Use of the Internet. The plan, adopted on January 25, 1999, and operational until 2002, outlines the basic principles underlying Internet content regulation at the European level.2 Broadly, undesirable content on the Internet is classified as either “illegal” or “harmful.”

The scope of “illegal” content tends to vary between countries, although there is greater consensus on certain issues, such as child pornography, trafficking in human beings, racist material, material promoting terrorism, and all forms of Internet fraud (such as credit card fraud).3,4 “Harmful” material, as defined in the plan, is that which might offend the values and sentiments of others; it could pertain to politics, religion, or racial matters, and could vary significantly between cultures.

The plan emphasizes the need for action in five broad areas in order to curb illegal and harmful content on the Internet:5

1. promoting voluntary industry self-regulation and content monitoring schemes, including the use of hotlines for the public to report illegal or harmful content;

2. providing filtering tools and rating systems that enable parents or teachers to regulate children’s access to Internet content while allowing adults access to legal content;

3. raising users’ awareness of services offered by industry so that they can use the Internet more fully;

4. exploring the legal implications of promoting the safer use of the Internet; and

5. encouraging international cooperation in the area of regulation.

Europe also maintains a regional policy that is generous in limiting ISP liability under the Electronic Commerce Directive, 2000/31/EC. Article 12, the “mere conduit” exception provision, absolves ISPs from liability for information transmitted over their networks as long as they did not initiate the message, select or modify the information, or select the intended recipients. The exemption also extends to the “automatic, intermediate and transient” storage of information, provided it is for a “reasonable period.” The latter is left to be specified by member states. Article 13 deals with caching—granting exemption from liability for the “automatic, intermediate and temporary storage of information” that is carried for the exclusive purpose of making onward transmission more efficient. Article 14 addresses the liability associated with hosting content, stating that ISPs “will not be liable for hosting information, provided they do not have actual knowledge that the activity is illegal and, upon obtaining such knowledge, act quickly to remove it.”6 Finally, Article 15 precludes ISPs from any general obligation to monitor content or data transmitted or stored through their services. Further, ISPs are not required to actively seek facts that might indicate illegal activity.7 These provisions, which grant ISPs substantial immunity from liability for illegal content, are consistent with the law and practice of many other countries around the world that seek to expand Internet use and promote freedom of expression.

Social filtering

Action to regulate obscene content started with individual countries and the implementation of voluntary ISP-level filtering programs. The landmark model of large-scale voluntary ISP filtering in Europe originated in the UK.8 BT, Britain’s largest ISP, serving about a third of the country’s home Internet users, launched Project Cleanfeed in June 2004,9 in consultation with the British Home Office. Under the auspices of this project, BT filters Internet content based on a blacklist of Web sites, hosted anywhere in the world, that contain images of child abuse as defined by the amended Protection of Children Act, 1978.10 The list is compiled by the Internet Watch Foundation (IWF), a not-for-profit organization, in consultation with government, industry, the police, and the public. IWF provides the list to its members, which today include ISPs, mobile network operators, content providers, and search engines such as Google and Yahoo!11 Those attempting to access the illegal content hosted abroad receive an error message as if the particular page were unavailable as a result of ordinary connectivity problems.12 Illegal content that is hosted within the UK, including child abuse images and content that is criminally obscene or incites racial hatred, must be taken down by ISPs and content providers under a notice and takedown regime.13 Although this form of filtering is termed “voluntary,” by the end of 2007 all broadband consumer ISPs in Britain are expected to have implemented a similar system, failing which regulatory enforcement might be considered.14,15 Other countries, such as Norway, Sweden, Denmark, and Italy, have implemented similar programs, while Finland is currently considering doing so.16
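Cleanfeed is reported to work as a two-stage “hybrid” system: routers first divert only traffic bound for IP addresses known to host blacklisted material to a filtering proxy, which then checks the full URL and returns a generic error on an exact match. The sketch below illustrates that design in rough terms, assuming a simple set lookup at both stages; the IP addresses, URLs, and matching rules are invented for illustration and do not reflect BT’s actual implementation or the confidential IWF list.

```python
# Illustrative sketch of a two-stage "hybrid" blocking system of the kind
# Cleanfeed is reported to use. All addresses and URLs here are invented.

BLOCKED_URLS = {                      # stand-in for the confidential blacklist
    "bad.example.com/abuse/page1",
    "bad.example.net/images/page2",
}
SUSPECT_IPS = {"192.0.2.10", "192.0.2.22"}   # IPs hosting at least one entry

def route_packet(dest_ip: str) -> str:
    """Stage 1: coarse routing decision made in the ISP core network.

    Only traffic headed for suspect IPs is diverted to the proxy;
    everything else passes through untouched.
    """
    return "proxy" if dest_ip in SUSPECT_IPS else "direct"

def proxy_check(host: str, path: str) -> int:
    """Stage 2: exact URL match, applied only to the diverted traffic.

    Returning a generic 404 makes a blocked page indistinguishable from
    an ordinary dead link, matching how the error messages are described.
    """
    if f"{host}{path}" in BLOCKED_URLS:
        return 404            # pretend the page simply does not exist
    return 200                # unrelated content on the same server passes

if __name__ == "__main__":
    assert route_packet("198.51.100.5") == "direct"        # ordinary traffic
    print(proxy_check("bad.example.com", "/abuse/page1"))  # 404: blocked
    print(proxy_check("bad.example.com", "/innocent"))     # 200: allowed
```

The split design keeps the relatively expensive URL inspection off the path of ordinary traffic, which is what makes blacklist filtering feasible at the scale of a national ISP.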

Filtering also takes place through “voluntary self-regulation” by search engines. By early 2005 all major search engines in Germany — Google, Lycos Europe, MSN Deutschland, AOL Deutschland, Yahoo, T-Online, and t-info — had come together to form an organization that filters search results deemed harmful to minors, based on a list provided by a government agency in charge of media classification. The move is seen as a response to pressure at the EU level for voluntary self-regulation by industry, and arguably to the fear among industry that a failure to comply would result in increased legislation. The system has been criticized, however, for a lack of transparency,17 since the search engines cannot disclose the list of Web sites to the public under the code of conduct they have signed.18 In addition, disclosure would defeat the purpose of filtering search results, as the sites are removed only from the search results, not from the Internet.
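Mechanically, this form of self-regulation is a post-processing step: results are produced as usual and then screened against the confidential list before being shown to the user. The minimal sketch below assumes a simple hostname match against hypothetical blocklist entries; the real list and its matching rules are not public.

```python
# Minimal sketch of search-result filtering against a confidential blocklist.
# The domains below are hypothetical placeholders.

from urllib.parse import urlparse

BLOCKLIST = {"harmful.example.de", "banned.example.net"}

def filter_results(results: list[dict]) -> list[dict]:
    """Drop results whose host appears on the blocklist.

    The pages themselves stay online; they merely vanish from the result
    list, which is why publishing the list would defeat the scheme.
    """
    return [r for r in results
            if urlparse(r["url"]).hostname not in BLOCKLIST]

results = [
    {"title": "Ordinary page", "url": "https://ok.example.org/a"},
    {"title": "Indexed but suppressed", "url": "https://harmful.example.de/x"},
]
print(filter_results(results))  # only the first result survives
```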

Internet content is also monitored through online surveillance by authorities in the UK. The Child Exploitation and Online Protection Centre (launched in April 2006) made thirteen arrests in July 2006 after beginning investigations into pay-per-view Internet services.19 The police in Britain have also been vested with the power to pass on to banks the personal details of those who use credit cards to access illegal content online, based on an amendment to the Data Protection Act (1998).20 The banks can then cancel the cards as a breach of their terms of service.

Members of the public in nineteen European countries assist in identifying and reporting illegal content, particularly in the area of child pornography, through a network of hotlines implemented on the basis of a recommendation at the EU level.21 In Austria, authorities were able to uncover a “child-pornography ring” involving seventy-seven countries in February 2007, based on a report by a man working for a Vienna-based Internet file-hosting service.22 Recent reports show that the Save the Children Denmark Hotline, financed jointly by the European Commission’s Safer Internet Plus Programme, received nearly 9,000 reports of child abuse images in 2006 alone.23 The police in Spain were able to arrest ninety people in 2004 in the country’s largest operation against the distribution of child pornography, facilitated by the hotlines. The INHOPE Association coordinates the network of hotlines, which extends to countries outside Europe such as Australia, Brazil, Canada, South Korea, Taiwan, and the United States.24

Although early filtering efforts had fairly limited agendas, proposals and laws are emerging in many nations that extend filtering to other social realms, such as gambling and betting. A proposal drafted in 2002 to revise Swiss federal laws on lotteries and betting would have subjected those providing access to games considered illegal to fines of up to 1 million Swiss francs or up to a year of imprisonment; the effort was suspended in 2004, and no further action has been taken since. As of February 2006 ISPs in Italy are not allowed to provide access to Web sites that offer online gambling. The list of Web sites to be blocked is compiled by the Autonomous Administration of State Monopolies (AAMS, a part of the Ministry of Economy and Finances), which issued the decree.25 The most broad-based proposal yet for filtering comes from Norway, where the government is considering blocking access to foreign gambling sites; Web sites that “desecrate the Flag or Coat of Arms of a foreign nation”; sites that promote hatred toward public authorities, contain hate speech, or promote racism; offensive pornography sites; and peer-to-peer sites that offer illegal downloads of music, movies, or television shows.26

Nationalistic filtering

There are no examples in Europe of filtering carried out to silence political opposition of the kind the ONI has documented in other regions. There are, however, examples of filtering that seeks to maintain the legitimacy of government institutions and preserve national identity. In December 2002 a local Swiss magistrate, Françoise Dessaux, ordered several Swiss ISPs to block access to three Web sites hosted in the United States that were strongly critical of Swiss courts,27 and to modify their DNS servers to block the domain appel-au-people.org.28 The Swiss Internet User Group and the Swiss Network Operators Group protested that the blocks could easily be bypassed and that the move was contrary to the Swiss constitution, which guarantees every person “the right to receive information freely, to gather it from generally accessible sources and to disseminate it.” Enforcement was nonetheless strict: the directors of noncompliant ISPs were ordered to appear personally in court or face charges of disobedience.
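Blocking a domain at the DNS level, as ordered in this case, means the ISP’s resolvers simply stop answering for the banned name, so the site appears not to exist to the ISP’s own customers. The toy resolver below shows the idea; the records and the blocking logic are illustrative assumptions, not the configuration the Swiss ISPs actually used.

```python
# Toy illustration of DNS-level blocking: the resolver answers normally for
# ordinary names but pretends blocked domains do not exist. The records
# below stand in for a genuine upstream lookup.

BLOCKED_DOMAINS = {"appel-au-people.org"}

REAL_RECORDS = {
    "example.ch": "192.0.2.50",
    "appel-au-people.org": "192.0.2.99",
}

def resolve(name: str):
    """Return an IP address, or None (the equivalent of NXDOMAIN)."""
    labels = name.lower().rstrip(".").split(".")
    # Block the domain itself and any subdomain of it.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKED_DOMAINS:
            return None       # behaves as if the domain did not exist
    return REAL_RECORDS.get(name)

print(resolve("example.ch"))               # 192.0.2.50
print(resolve("appel-au-people.org"))      # None: blocked
print(resolve("www.appel-au-people.org"))  # None: subdomains blocked too
```

The protesters’ point about circumvention follows directly from this design: a user who switches to a third-party resolver, or who already knows the site’s IP address, bypasses the block entirely.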

On March 7, 2007, the video-sharing Web site YouTube was blocked in Turkey by court order, following the posting of videos on the site that were found to be derogatory toward Turkey’s founding father, Mustafa Kemal Ataturk, the Turkish people in general, and the Turkish flag. The blocking invoked Article 301 of the Turkish Penal Code, widely described as the main obstacle to freedom of speech in the country, which makes insults toward Ataturk as well as “Turkishness” a crime. Turkey’s leading ISP, Türk Telekom, complied with the order but petitioned the court to allow access to the site to be restored. The court agreed on the condition that the particular videos were removed. The two-day blocking was heavily criticized both within Turkey and abroad and likened to “closing a library because of a single book that was found to be improper.”29

Hate speech

European states are also increasingly taking action against online hate speech, applying their offline policies to the Internet. Some of these efforts raise important issues, such as jurisdiction over material on the Internet. For example, a French court ruled in 2000 that U.S.-based Yahoo! Inc. was liable under French law for allowing the people of France access to auction sites that included Nazi memorabilia, and ordered Yahoo! to ensure that this content was impossible to access from France or face fines.30 The issue was raised by two French not-for-profit organizations31 dedicated to fighting anti-Semitism.32 Yahoo! brought suit in a U.S. District Court in San Francisco, claiming that the French court’s ruling was unenforceable in the United States. The U.S. court ruled in Yahoo!’s favor in November 2001, but in 2004 a panel of the 9th U.S. Circuit Court of Appeals overturned the lower court’s ruling on the grounds that it “did not have sufficient jurisdiction over the French parties.”33 After reconsidering the decision, the 9th U.S. Circuit Court of Appeals dismissed Yahoo!’s case in January 2006 despite claiming jurisdiction over the matter, because Yahoo! had already removed the materials and the requirement to block would therefore not have done any actual First Amendment harm.34

Similarly, the German Federal Court of Justice ruled in December 2000 that material glorifying the Nazis and denying the Holocaust must be censored under German law regardless of where it is hosted, in a case involving an Australia-based Holocaust revisionist who was using the Internet to spread his message denying the atrocities of World War II.35 In another case, seventy-eight ISPs in Nordrhein-Westfalen were ordered in 2002 to block access to two foreign Web sites that contained neo-Nazi content.36 The same regional government in Düsseldorf also took an anti-censorship activist to court for posting on his Web site hyperlinks to radical right-wing content that had been censored.37

Other European countries also have laws against Holocaust denial and ban material that promotes racial hatred. These have been “harmonized” in a protocol to the Council of Europe’s cybercrime treaty, which requires signatories to make illegal “any written material, any image or any other representation of ideas or theories, which advocates, promotes or incites hatred, discrimination or violence, against any individual or group of individuals, based on race, color, descent or national or ethnic origin, as well as religion if used as pretext for any of these factors” and “material which denies, minimizes, approves of or justifies crimes of genocide or crimes against humanity.”38 As with all illegal content, once brought to their attention, ISPs must either take down or block the relevant Web sites, depending on whether they are hosted within the country or abroad.

Defamation

Member states of the EU have expressed the need for a simplified framework governing defamation by media or publications via the Internet and other electronic networks. The principle generally applied in defamation cases concerning the media, that the law of the country where the defamed person lives is applicable, implies that media organizations must know the privacy and defamation laws of every European country, a requirement criticized as impractical. In Italy, for example, a man involved in “a trans-border custodial battle” claimed in 2000 that his ex-wife, by then resident in Israel, was responsible for posting statements and images on the Internet that defamed him and denigrated his ability to care for their two daughters. The Italian Supreme Court, the Suprema Corte di Cassazione, overturned a prior verdict from a lower court, affirming that Italy’s libel laws apply to content on foreign Web sites accessible by Internet users in the country,39 reasoning that although the offending statements were posted outside Italy, their effects were felt within the country and were therefore subject to national law.

The need for a unified framework was brought to the fore once more in February 2007, as part of the European Parliament’s second reading of the Rome II Regulation, which seeks to establish rules on the law applicable to noncontractual obligations arising from publications via the Internet and other electronic networks. The Parliament’s proposed amendment is that the applicable law should be that of the country to which “the publication or broadcast is most directed,” which is to be determined “by the language of the publication or broadcast, or by sales or audience size in a given country as a proportion of total sales or audience size, or by a combination of these factors.” Further, the amendment suggests that if these are not easy to determine, “the relevant law will be the one of the country where editorial control is exercised.” With regard to the right of reply, it is suggested that the applicable law should be that of the country in which the publisher or broadcaster has its “habitual residence.” The text, which has been adopted by the Parliament, is not expected to find easy favor with the European Council and must undergo a standard conciliation procedure in which member states and Members of the European Parliament (MEPs), in equal representation, debate the proposal; it will be approved as a regulation if an acceptable compromise is reached.40

In their current form, defamation laws at the country level, particularly in Britain, have been criticized for fostering a “Web takedown” culture in which ISPs, for fear of facing lawsuits, immediately remove content that is allegedly defamatory when it is brought to their notice. The concern in Britain, as in other nations, is that this can have a “chilling effect” on lawful online content and behavior.41

A landmark precedent in the UK led the way for the establishment of a “notice and takedown” system. In Laurence Godfrey v. Demon Internet Limited, a defamatory statement was made in a posting to a newsgroup called “soc.culture.thai,” available on a server of the provider Demon Internet Limited. The posting was forged so that it merely appeared to come from Godfrey. Despite a request by Godfrey to take down the content as defamatory of him, the ISP did not comply. Godfrey consequently claimed damages for libel under §1 of the Defamation Act, 1996, and settled with Demon out of court.42

Libel law in Britain is known to be particularly sympathetic to plaintiffs, and is often contrasted with the law in the United States in this respect, such that many individuals from other countries have sued publications in the UK, despite their relatively small circulation there, for a better chance of winning. Following the Jameel v. Wall Street Journal Europe case,43 the law was loosened in October 2006. There has also been debate over whether the protection of the reputation of individuals conflicts with the Human Rights Act of 1998, insofar as it might infringe upon the right to free speech.44

Copyright

A few countries in Europe have begun to employ Internet filtering to combat copyright infringement, evolving toward the notice and takedown approach used in the United States. In Denmark, following a ruling of the Copenhagen City Court in October 2006, TDC, the country’s largest ISP, blocked access to a Web site that distributes illegally copied music;45 and in February 2007, as mentioned earlier, Norway proposed filtering on a much larger scale that would include blocking of peer-to-peer sites offering illegal downloads of music, movies, and television shows.46

In an example of the application of copyright law to the Internet, on March 16, 2007, the police arrested the owner of Arenabg.com, one of Bulgaria’s largest BitTorrent trackers and among the country’s ten most popular Web sites,47 for providing links to copyrighted music, movies, and software.48 Although the owner was released within twenty-four hours, the Web site was blocked from March 16 to 19 by police order, on the grounds that blocking was “necessary to prevent foreign interference with the torrent trackers.”49 The order to filter the site was lifted by the General Office for Fighting Organized Crime (GDBOP), but the episode provoked considerable citizen protest over what was seen as unjust treatment of the owners and operators of torrent sites.50 Following the arrest, other tracker Web sites have reportedly closed, some under threat of confiscation of property by the police, or have moved their servers abroad to avoid prosecution under the Bulgarian Copyright Law. The extent of actual filtering of these sites in the country is not known because there are differing reports regarding accessibility by various ISPs’ subscribers. Given that BitTorrent trackers point to content but do not host it, the legal recourse for dealing with the copyright violations associated with these Web sites is especially unclear.51
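The source of this ambiguity is visible in the data a tracker actually handles. A .torrent file contains only a tracker URL and cryptographic hashes of the file’s pieces, and the tracker’s reply to a client is just a list of peers; the copyrighted bytes travel directly between users and never pass through the tracker. The schematic below uses invented values to show roughly what these structures contain.

```python
# Schematic of the data involved in BitTorrent tracking, to illustrate why
# liability is murky: neither structure carries the copyrighted content.
# All values are invented placeholders.

torrent_metainfo = {
    "announce": "http://tracker.example.bg/announce",  # tracker URL only
    "info": {
        "name": "some-file.avi",
        "piece length": 262144,
        "pieces": b"<SHA-1 hashes of the file's pieces, not the file>",
        "length": 734003200,
    },
}

# A tracker's "announce" response is just a peer list with a polling interval.
tracker_response = {
    "interval": 1800,
    "peers": [
        {"ip": "198.51.100.7", "port": 6881},
        {"ip": "203.0.113.40", "port": 51413},
    ],
}

print(torrent_metainfo["announce"])
print([p["ip"] for p in tracker_response["peers"]])
```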

Lawsuits concerning alleged copyright infringement by search engines have been raised in a few countries, with recent rulings in favor of a notice-and-takedown policy that could arguably serve as a precedent for other countries in the region. In February 2007 the Brussels Tribunal found Google Inc. to be in violation of national copyright laws in a case raised by Copiepresse of Belgium, a trade group representing seventeen of Belgium’s French- and German-language newspapers, and the company was fined 2.4 million pounds retrospectively for the breach.52 As per a translation of the ruling, “the reproduction and publication of headlines as well as short extracts, and the use of Google's cache, the publicly available data storage of articles and documents, violate the law on authors' rights.”53 The former refers to the Google News service,54 the latter to Google Web Search. The outcome is that Google cannot include references to articles, pictures, or drawings of Copiepresse members in its Google News service without prior agreement, and must remove Belgian newspaper content from its search results. Failure to comply will result in fines of 25,000 euros a day.

Google intends to appeal the judgment, stating that Web search results and the news service in fact drive more traffic toward the newspaper Web sites, and that Google News earns no advertising revenue from them. Copiepresse, however, holds that by allowing users to bypass the front pages of newspapers and link directly to articles, newspapers lose advertising revenue. In addition, by making old newspaper material available through its cache, Google effectively deprives newspapers of the ability to charge customers for access to their archives, while Google Web Search does earn advertising revenue from this service. The court ruling also states that any copyright holder can notify Google in case of infringement, and the search engine must remove the content within twenty-four hours or pay a 1,000 euro daily fine.55 This could lead to an attitude of risk aversion and immediate compliance on the part of ISPs, content providers, and search engines, similar to instances of alleged defamation, in the face of potential lawsuits.

Google had run into similar difficulty in France with respect to its news service when Paris-based Agence France Presse (AFP) sued the company for USD 17.5 million in 2005. The suit was dropped in April 2007, following a licensing agreement under which Google is allowed to use stories and photographs from AFP for its news aggregator and for other Google services, including products that the company is expected to launch in the future. The financial terms of this arrangement have not been publicly disclosed.56 It is argued that out-of-court settlements for copyright infringement in Europe should not be surprising, because the legal defenses available in the region in this respect are relatively weak.57

At the regional level, intellectual property rights (IPR) pertaining to Internet content are dealt with by two directives: the Directive on Copyright and Related Rights in the Information Society, adopted on April 9, 2001, and the Electronic Commerce Directive 2000/31/EC, which came into force on June 8, 2000. Article 5(1) of the Copyright Directive exempts ISPs from liability for copyright infringement where “reproduction is transient or incidental,” when copies are an integral part of a technological process “whose sole purpose is to enable onward transmission in a network between third parties by an intermediary or a lawful use of a work or other subject-matter to be made.” The second condition for exemption is where the copies have “no independent economic significance,” which is left to be adjudged independently by courts in the respective member states. Under the first condition, ISPs and telecommunications operators do not need to request permission to transmit transient copies across their networks. The second condition, however, implies that ISPs still face differing degrees of liability across the member states of the EU, and the directive has been criticized in this regard.58 The E-Commerce Directive deals with ISP liability for content more generally, but with important implications for copyright. In particular, the directive provides a “mere conduit” exception, limits liability for content associated with the caching and hosting functions, and exempts ISPs from any general obligation to monitor.

Security

Security concerns in Europe have resulted in legislation concerning the surveillance and monitoring of Internet use. Although distinct from filtering, such legislation has many parallels in its potential impact upon online freedom of speech. A recent and controversial piece of legislation at the EU level in this regard pertains to the surveillance and retention of traffic data; it was passed in March 2006 and must be put into effect for Internet traffic by March 2009.59 Under the European Data Retention Directive, ISPs in the various nations are required to retain specific data pertaining to communications, in particular with regard to Internet access, e-mail, and telephony, for a minimum period of six months and no longer than two years. The data to be retained do not concern the content of communications. The aim is to bring about a “common code” of data retention in order to facilitate the tracing of illegal content and the sources of attacks against information systems, and to identify those who use electronic communications networks for terrorist activities and organized crime.60 As the directive is implemented across the member states, privacy groups are concerned about the ability of ISPs, search engines,61 and Web companies to retain data and monitor people’s online habits. Moreover, the retention period of up to twenty-four months has been argued to be an unjustifiable length of time.62
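To make concrete what “data pertaining to communications” means in practice, the sketch below models the kind of record the directive contemplates: who communicated with whom, when, and over which service, with no message content. The field names, the 180-day period chosen from the permitted range, and the purge routine are illustrative assumptions, not a format the directive prescribes.

```python
# Sketch of a retention record under the Data Retention Directive: traffic
# data only, no payload. Field names and the chosen period are illustrative.

from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION = timedelta(days=180)   # directive permits 6 to 24 months

@dataclass
class RetainedRecord:
    user_id: str        # subscriber identity
    source_ip: str
    destination: str    # e.g., mail server contacted or number dialed
    start: datetime
    end: datetime
    service: str        # "internet", "email", or "telephony"
    # Deliberately absent: message bodies, page contents, search terms.

def purge_expired(records, now: datetime):
    """Drop records older than the retention period."""
    return [r for r in records if now - r.end <= RETENTION]

log = [RetainedRecord("u42", "203.0.113.5", "mail.example.net",
                      datetime(2007, 1, 1, 9, 0), datetime(2007, 1, 1, 9, 5),
                      "email")]
print(len(purge_expired(log, datetime(2007, 5, 1))))  # 1: still retained
print(len(purge_expired(log, datetime(2008, 5, 1))))  # 0: past the period
```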

An example of security legislation at the country level is a law proposed in March 2007 in Sweden, which would give the national defense intelligence agency the power to monitor all cross-border phone calls and e-mail traffic without a court order. The monitoring would be carried out by the National Defence Radio Establishment (FRA) in the form of software searches for sensitive keywords. With some suggested amendments, the Swedish Legislative Council has approved the proposal to go forward. Concerns for privacy have been raised, including for communications within the country, which are often routed via servers hosted abroad.63 Critics include the country’s national security police agency, SAPO, which considers the proposal a violation of “personal integrity.”

Conclusion

Filtering of online content takes a variety of forms among the nations of Europe. Examples include orders issued by states to ISPs to take down Web sites that contain illegal content if they are hosted within the country, blocking orders by enforcement authorities for illegal content hosted abroad, and search engines that filter results pertaining to illegal content as a form of self-regulation. Although forms of filtering by search engines and ISPs are often referred to as “voluntary self-regulation” in some countries, there appears to be an implicit understanding that cooperation with government orders will forestall further legislation.

Filtering in European countries has also given rise to several legal disputes over the question of jurisdiction involving content that is hosted abroad. While the degree of filtering that takes place tends to vary among nations, there is a concern in many countries over an apparent increase in the overall extent of filtering, as manifested in recent proposals and revisions in laws. Filtering in European nations has, however, largely been confined to content that is illegal, and the extent has been tempered by public dialogue, adherence to law, and commitment to free speech, although the latter is more constrained than it is in the United States.

At the EU level there have been efforts over the past decade to create a common platform of “harmonized” Internet regulation. With regard to the filtering of online content, the emphasis has been on greater cooperation among industry, the public, and enforcement authorities within nations, and on increased voluntary industry self-regulation. Although EU-level discussions initially focused on the various forms of illegal content online (in particular child pornography, racism, and xenophobia), attention has increasingly turned in recent years toward the use of the Internet for terrorism and organized crime. The latter has spurred legislation in the area of data retention, and much debate on the need for greater security measures versus the associated implications for privacy. There have also been recent advances in regulation at the EU level in the areas of defamation, copyright, and ISP liability for online content. Creating a common platform for legislation at the regional level remains a slow and complex process, given the significant differences in culture and existing legislation among the countries of the European Union.
