United States and Canada

Note: a previous version of this profile is available at United States and Canada, 2006-2007.

Introduction

The Internet in the United States and Canada is highly regulated, supported by a complex set of legally binding and privately mediated mechanisms. Technical filtering plays a minor role in this regulation. The first wave of regulatory actions in the 1990s in the United States came about in response to the profusion of sexually explicit material on the Internet within easy reach of minors. Since that time, several legislative attempts at creating a mandatory system of content controls in the United States have failed to produce a comprehensive solution for those pushing for tighter controls. At the same time, the legislative attempts to control the distribution of socially objectionable material on the Internet in the United States have given rise to a robust system that limits liability over content for Internet intermediaries such as Internet service providers (ISPs) and content hosting companies. Proponents of protecting intellectual property online in the United States have been much more successful, producing a system to remove infringing materials that many feel errs on the side of inhibiting legally protected speech. National security concerns have spurred on efforts to expand surveillance of digital communications and fueled proposals for making Internet communication more traceable.

After a decade and a half of contentious debate over content regulation in the United States, the country is still very far from reaching political consensus on the acceptable limits of free speech and the best means of protecting minors and policing illegal activity on the Internet. Gambling, cyber security, and the dangers—real and perceived—to children who frequent social networking sites remain the subjects of ongoing debate. Canadian legislators have been less aggressive than their U.S. counterparts in proposing specific legislative remedies for problems arising from Internet use. Canadians have been more inclined to employ existing regimes developed for regulating offline speech and less apt to propose broad solutions. Canadians do not currently pursue copyright infringement online with the same zeal as their U.S. counterparts, nor does Canadian law provide the same formal protection for intermediaries. Unlike in the United States, publication of hate speech is restricted in Canada: under section 320.1 of the Canadian Criminal Code, a judge can issue a warrant authorizing the deletion of publicly available online hate propaganda from computer systems located within the jurisdiction of the court.

Public dialogue, legislative debate, and judicial review have produced filtering strategies in the United States and Canada that are different from those described elsewhere in this volume. In the United States, many government-mandated attempts to regulate content have been barred on First Amendment grounds, often after lengthy legal battles.1 However, the United States government has been able to exert pressure indirectly where it cannot directly censor. In Canada, the focus has been on government-facilitated industry self-regulation. With the exception of child pornography, Canadian and U.S. content restrictions tend to rely more on the removal of content than on blocking; most often these controls rely upon the involvement of private parties, backed by state encouragement or the threat of legal action.2 In contrast to much of the world, where ISPs are subject to state mandates, most content regulation in the United States and Canada occurs at the private level.

The United States and Canada both have relatively high Internet penetration rates. In each country, nearly three-quarters of the population has access to the Internet.3 Despite such high Internet penetration rates, the two countries have relatively low broadband subscription rates, with the United States at 23 percent and Canada at 28 percent. Internet subscription rates on the whole are only slightly higher: the United States has a 24 percent subscription rate, while Canada’s rests at 31 percent.4 The broadband stimulus push of President Barack Obama’s administration in early 2009 may improve these rates in the United States.

These high rates of Internet usage increase the ability of citizens to publish and widely distribute dissenting points of view. At the same time, Internet users engage in a large number of other online activities, such as accessing pornography, that test a society’s dedication to free expression and privacy.

Regulating Obscene and Explicit Content

The United States Congress passed the Communications Decency Act (CDA) as part of the Telecommunications Act of 1996. Signed into law by President Bill Clinton in February 1996, the CDA was designed to criminalize the transmission of "indecent" material to persons under 18 and the display to minors of "patently offensive" content and communications.5 The CDA took aim not only at the authors of "indecent" material but also at their Internet service providers, although it offered them each safe harbor if they imposed technical barriers to minors' access.6

Prior to taking effect, the CDA was challenged in federal court by a group of civil liberties and public interest organizations and publishers who argued their speech would be chilled by fear of the CDA's enforcement. The three-judge district court panel concluded that the terms "indecent" and "patently offensive" were so vague that enforcement of either prohibition would violate the First Amendment.7 "As the most participatory form of mass speech yet developed," Judge Stewart Dalzell wrote in a concurring opinion, "the Internet deserves the highest protection from governmental intrusion."8 The U.S. Supreme Court affirmed this holding in 1997, invalidating the CDA's "indecency" and "patently offensive" content prohibitions.9 In the landmark case Reno v. ACLU, the Court held that the CDA was not the "least restrictive alternative" by which to protect children from harm: parent-imposed filtering could effectively block children's access to indecent material without preventing adults from speaking and receiving this lawful speech.10 Other sections of the CDA remain in force, including Section 230, which provides immunity to ISPs for content that third-party users place online.11 Section 230 has had an undeniably powerful impact in promoting free speech in the United States, and a growing body of case law suggests that ISPs are using it to settle or quickly dismiss claims brought against them.12 Many question whether Section 230's sweeping protections in fact shelter online speech too broadly and excessively limit the ability of victims and the state to suppress harmful speech.13

Lawmakers responded to the Supreme Court's decision in Reno v. ACLU by enacting the Child Online Protection Act (COPA)—a second attempt at speaker-based content regulation. In COPA, the U.S. Congress directed its regulation at commercial distributors of materials "harmful to minors."14 COPA's slightly narrower focus did not solve the constitutional problems that doomed the CDA. The district court enjoined COPA on First Amendment grounds.15 After a few trips to the Supreme Court and back for fact-finding, the district court issued its ruling in March 2007, finding COPA void for vagueness and not narrowly tailored to the government's interest in protecting minors. Once again, the court held that criminal liability for speakers and service providers was not the "least restrictive means" to accomplish the government's purpose because the private use of filtering technologies could more effectively keep harmful materials from children. The U.S. Court of Appeals for the Third Circuit later affirmed this decision, and, in January 2009, the Supreme Court put the legislation to rest—at least for now—by refusing to hear the case.

Plaintiffs successfully argued that the CDA and COPA would chill the provision and transmission of lawful Internet content in the United States. Faced with the impossible task of accurately identifying "indecent" material and preemptively blocking its diffusion, ISPs would have been prompted to filter arbitrarily and extensively in order to avoid the threat of criminal liability, while writers and publishers would have felt compelled to self-censor.

Stymied in restricting the publication of explicit material, congressional leaders shifted their focus to regulating what listeners might hear rather than what speakers say. The Children's Internet Protection Act (CIPA) of 2000 required public schools and libraries to use Internet filtering technology as a condition of receiving federal E-Rate funding. A school or library seeking to receive or retain federal funds for Internet access must certify to the FCC that it has installed or will install technology that filters or blocks material deemed to be obscene, child pornography, or "harmful to minors."16 The Supreme Court rejected First Amendment challenges to CIPA, holding that speakers had no right of access to libraries and that patrons could request unblocking.17 In response, some libraries and schools have rejected E-Rate funding,18 but most have felt financially compelled to install the filters.

In the aftermath of CDA, COPA, and CIPA, Internet filtering in the United States is carried out largely by private manufacturers. These companies compete for market share in a lucrative business area. Schools, businesses, parents, and other parties wishing to block access to certain content have a broad range of software packages available to them.19 While some programs filter heavily, permitting access only to a "white list" of preapproved sites (for example, those appropriate for young children), others generate blacklists of blocked sites through a combination of automated screenings of the Web, staff members who "rate" sites on appropriateness, and user complaints. Although CIPA mandates the presence of filtering technology in schools and libraries receiving subsidized Internet access, it effectively delegates blocking discretion to the developers and operators of that technology. The criteria "obscene," "child pornography," and "harmful to minors" are defined by CIPA and other existing legislation, but strict adherence to these rather vague legal definitions is beyond the capacity of filters and inherently subject to the normative and technological choices made during the software design process. Moreover, while CIPA permits the disabling of filters for adults and, in some instances, minors "for bona fide research or other lawful purposes,"20 it entrusts school and library administrators with deactivating the filters, giving them considerable power over access to online content. Once FCC certification requirements have been met, it is these individuals who shoulder the burden of ensuring access to constitutionally protected material.21
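To make the two filtering modes concrete, here is a minimal Python sketch of whitelist- and blacklist-style URL screening of the kind these packages implement. The hostnames, category labels, and function names are hypothetical illustrations, not drawn from any actual product.

```python
# Minimal sketch of the two filtering modes described above: a strict
# "white list" that permits only preapproved sites, and a blacklist
# built from automated scans, human raters, and user complaints.
# All URLs and category labels here are hypothetical.

from urllib.parse import urlparse

WHITELIST = {"kids.example.org", "schoollibrary.example.edu"}

# A blacklist maps hostnames to the category a rater assigned them.
BLACKLIST = {
    "adult-content.example.com": "harmful to minors",
    "obscene.example.net": "obscene",
}

def is_blocked(url: str, mode: str = "blacklist") -> tuple[bool, str]:
    """Return (blocked?, reason) for a URL under the given mode."""
    host = urlparse(url).hostname or ""
    if mode == "whitelist":
        # Heavy filtering: anything not preapproved is blocked.
        if host not in WHITELIST:
            return True, "not on preapproved list"
        return False, "preapproved"
    # Blacklist mode: block only sites a rater or scanner flagged.
    if host in BLACKLIST:
        return True, BLACKLIST[host]
    return False, "unrated"

print(is_blocked("http://adult-content.example.com/page"))
# -> (True, 'harmful to minors')
print(is_blocked("http://news.example.com/", mode="whitelist"))
# -> (True, 'not on preapproved list')
```

Where the legal definitions are vague, the decisive choices live in how the lists are compiled, which is exactly the discretion CIPA delegates to vendors.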

Attempts to filter Internet content in the United States have also reached the state level. A Pennsylvania law enacted in 2002 authorized the state attorney general's office to force ISPs to block Pennsylvania residents' access to sites that the office identified as child pornography.22 In 2004, a district court struck down the law as an unconstitutional regulation of activity occurring wholly outside the state's borders, although it declined to invalidate the act for overbreadth.23 The court noted that "there is an abundance of evidence that implementation of the Act has resulted in massive suppression of speech protected by the First Amendment."24

The complexities of government-led efforts to restrict online speech have given rise to quasi-voluntary initiatives supported by the force of law. Since possession and distribution of child pornography are criminal acts in the United States, service providers respond to removal requests and report such requests to the National Center for Missing and Exploited Children. In June 2008, the New York state attorney general signed an agreement with Comcast, AT&T, Inc., AOL, Verizon Communications, Inc., Time Warner Cable, and Sprint to purge their servers of child pornography identified by the National Center for Missing and Exploited Children.25 The agreement attempts to curtail access to child pornography by implementing a new system to rapidly identify offending images and respond to user complaints. In addition, several ISPs agreed to stop supporting access to Usenet newsgroups, which the attorney general's office identified as a source of child pornography.

The desire to protect children from harm online continues to drive efforts at content-based restrictions on the Internet. Law enforcement agencies use pressure to convince private companies to take on voluntary Internet regulatory initiatives. Concerns over child safety online have focused attention on the potential risks associated with time spent on social network sites such as Facebook and MySpace, where children may come into contact with sexual predators and be subject to cyberbullying by their peers. Law enforcement officials in the United States have been vocal in promoting age and identity verification systems in order to better police online sites frequented by minors.26 The Internet Safety Technical Task Force, a group of technology companies, Internet businesses, nongovernmental organizations, and academics, was brought together by agreement with 49 U.S. state attorneys general to study the use of technologies by industry and end users to promote Internet safety for minors. The task force's January 2009 report recommended a model of collaboration among industry groups, law enforcement, and others rather than the implementation of a series of mandatory technical controls to protect children online.

Another U.S. legislative attempt to control online speech, the Megan Meier Cyberbullying Prevention Act, would criminalize "severe, repeated and hostile" speech online.27 This proposed legislation, named after a girl whose suicide was thought to have been induced by online harassment, has been harshly criticized as unnecessary, given the existing off-line remedies for harassment, and for its potential impact on protected online speech, as it could be applied to many incidents of online speech far beyond the cyberbullying targeted by the legislation.28 Seventeen of the 50 states have passed laws against cyberbullying.29

While legislators in the United States have pursued broader definitions of offenses and mandates on Internet filtering, Canada has tended to act conservatively in response to online obscenity. In its response to online sexually explicit material, Canada has made only de minimis amendments to preexisting law.30 Legislators have simply revised existing obscenity provisions to encompass online offenses. For example, the passage of the Criminal Law Amendment Act of 2001 established online acts of distributing and accessing child pornography and luring a child as crimes.31 The Criminal Code mandates a system for judicial review of material (including online material) alleged to be child pornography. It does not, however, require ISPs to judge the legality of content posted on their servers or to take corrective action prior to a judicial determination.32 If a judge determines that the material in question is illegal, ISPs may be required to take it down and help the court identify and locate the person who posted it.33

There have been instances in Canada of ISPs attempting to filter content hosted outside of Canada despite regulatory uncertainty in the area. For three days in July 2005, during a labor dispute, the Canadian ISP Telus blocked access to a Web site run by members of the Telecommunications Workers Union that contained what Telus argued was proprietary information and photographs that threatened the security and privacy of its employees.34 This unilateral action by Telus deviated from the general practice of Canadian ISPs of passing on any and all information without regard for content in exchange for immunity from liability over content.35 The action also conflicted with Section 36 of the Canadian Telecommunications Act, which states that, without the approval of the Canadian Radio-Television and Telecommunications Commission (CRTC), a "Canadian carrier shall not control the content or influence the meaning or purpose of telecommunications carried by it for the public."36 Telus's blocking also affected the customers of other ISPs that connect via Telus.37 The matter was resolved when Telus obtained court orders from Alberta and British Columbia requiring the Web site operator, who lives and works in Canada, to remove the offending materials (the site was hosted in the United States).38

In August 2006, Canadian human rights lawyer Richard Warman filed an application with the CRTC to authorize Canadian ISPs to block access to two hate speech sites hosted outside of Canada.39 The CRTC denied the application, but its decision recognized that although the CRTC cannot require Canadian ISPs to block content, it could authorize them to do so, noting that the "scope of this power has yet to be explored."40 In a 2009 decision by an Ontario court, Warman succeeded in obtaining an order requiring a Web site to disclose the identities of eight of its anonymous contributors.41 The decision has been appealed by the defendants.42 The rules the court relied on were general duty-of-disclosure rules in Ontario civil procedure that were not written with this situation in mind. The state of court involvement in online speech therefore remains uncertain.

In November 2006, Canada's largest ISPs launched Project Cleanfeed Canada in partnership with Cybertip.ca, the nation's child sexual exploitation tipline. The project, modeled after a similar initiative in the United Kingdom, is intended to protect ISP customers "from inadvertently visiting foreign Web sites that contain images of children being sexually abused and that are beyond the jurisdiction of Canadian legal authorities."43 Acting on complaints from Canadians about images found online, Cybertip.ca analysts assess the reported information and forward potentially illegal material to the appropriate foreign jurisdiction. If a URL is approved for blocking by two analysts, it may be added to the Cleanfeed distribution list. Each of the participating ISPs voluntarily blocks this list without knowledge of the sites it contains, precluding ISP involvement in the evaluation of URLs. Blocked sites fail to load, but attempts to access them are not monitored and users are not tracked.44

Since Project Cleanfeed Canada is a voluntary program, the blocking mechanism is left to the discretion of the ISPs. SaskTel, Bell Canada, and Telus all claim to block only specific URLs, not IP addresses, in an attempt to avoid overblocking.45 Besides provoking the significant public outcry that would most likely follow, overblocking itself may be illegal under the Telecommunications Act mentioned previously.
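The overblocking concern is easiest to see in code. The sketch below, using hypothetical hosts and addresses, contrasts IP-level blocking, which silences every site sharing an address, with the URL-level matching these ISPs describe.

```python
# Sketch of why the ISPs named above block specific URLs rather than
# IP addresses. Many unrelated sites can share one IP address
# (virtual hosting), so an IP-level block removes them all, while a
# URL-level block touches only the listed page. Hosts and addresses
# are hypothetical.

SHARED_IP = "203.0.113.7"
HOSTED_ON_IP = {
    SHARED_IP: ["illegal.example.com", "innocent-blog.example.org",
                "small-business.example.net"],
}
URL_BLOCKLIST = {"http://illegal.example.com/abuse/page.html"}

def blocked_by_ip(ip: str) -> list[str]:
    # Every site behind the address goes dark: overblocking.
    return HOSTED_ON_IP.get(ip, [])

def blocked_by_url(url: str) -> bool:
    # Only an exact listed URL is withheld.
    return url in URL_BLOCKLIST

print(blocked_by_ip(SHARED_IP))   # three sites blocked, two wrongly
print(blocked_by_url("http://innocent-blog.example.org/"))  # False
```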

Under Section 163.1 of the Canadian Criminal Code, accessing child pornography—as well as making it accessible—is unlawful.46 Therefore, the filtering of such content does not infringe on the rights of access or speech afforded by the Canadian Charter of Rights and Freedoms within Canada's constitution. Moreover, because ISP participation in Project Cleanfeed is voluntary, the blocking of sites through the project cannot be said to be state sponsored. However, the project remains controversial for other reasons. First, Project Cleanfeed has not yet sought or received authorization from the CRTC. Second, the blacklist maintained by Cybertip.ca remains secret, as publishing a "directory" of child pornography would itself be illegal. This lack of transparency inevitably generates distrust of the list and the process by which it is compiled. Third, the procedure for appealing the blocking of a site may have implications for anonymity.47 A content owner or ISP customer may complain to the ISP or directly to Cybertip.ca, which will reassess the site and, if necessary, obtain an independent and binding judgment from the National Child Exploitation Coordination Centre. It is unclear whether this process might expose the complainant's identity and create a potential for abuse of that individual's rights by the ISP or perhaps even by the authorities.

Canada’s response to online obscenity and its voluntary filtering initiative are minimal in contrast to the more vigorous regulatory efforts of the United States.

Regulation of Online Gambling

In 2006, the United States House of Representatives passed legislation designed to limit online gambling by prohibiting the transfer of funds to gambling sites. The Unlawful Internet Gambling Enforcement Act (UIGEA), which was slipped into the SAFE Port Act,48 prohibited online poker sites and other betting companies from "knowingly accepting" money from United States-based customers and encouraged financial institutions to deny Internet gambling transactions. Since the act's inception, its legality has been in question.49

Two U.S. states have attempted to further limit online gambling. In October 2008, a circuit court judge in Kentucky granted a request by the governor to have 141 Web sites used by online gaming operations transferred to state control.50 In January 2009, following a petition filed by members of the Center for Democracy and Technology, the Electronic Frontier Foundation, and the American Civil Liberties Union of Kentucky,51 a Kentucky appeals court overturned the judge's order.52 In May 2009, John Willems, director of the Alcohol and Gambling Enforcement Division (AGED) of Minnesota's Department of Public Safety (DPS), filed an order requiring that 11 ISPs, including Comcast, Charter, and Verizon Wireless, prevent state residents from reaching approximately 200 gambling sites.53 iMEGA (Interactive Media, Entertainment, and Gaming Association) filed a lawsuit against Willems seeking an injunction against implementation of the AGED order,54 and the order was later dropped when the Minnesota DPS reached a settlement with iMEGA; ISPs are no longer required to block state residents' access to gambling sites.55

In 2008, Representative Barney Frank (Democrat, Massachusetts) again announced plans to introduce legislation aimed at overturning the UIGEA.56 A previous attempt, the Internet Gambling Regulation and Enforcement Act of 2007, had failed.57

The legality of online gambling in Canada is unclear, as few gaming cases exist to provide guidelines, although persons running online gaming operations can be subject to criminal liability.58 Offshore gambling sites are currently legal to use in Canada,59 though advertising such services is generally held to be illegal.

Defamation

As in other countries, the potential for legal liability for civil violations, including defamation and copyright infringement, constrains the publishers of Internet content and certain service providers in the United States and Canada. These pressures can have a "chilling effect" on lawful online content and conduct, and can threaten the anonymity of users. The content and court adjudication of such laws constitute state action, even when the lawsuits and threats are brought by private individuals or entities. One crucial factor in determining liability for defamation is the provider's relation to the content—whether the provider functioned as a carrier, distributor, or publisher of the defamatory content. In the United States, the common law has been overridden by a federal statute: Section 230 of the CDA (47 U.S.C. § 230), a key part of the act that survived judicial scrutiny. Section 230 immunizes ISPs for many of their users' actions, including defamation (copyright and criminal activity are excluded): "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."60 Moreover, the First Amendment shields speakers from liability for much speech about public figures.61

Canada has no statutory equivalent to the protection ISPs enjoy under CDA 230. However, Canadian case law suggests that ISPs are entitled to a certain degree of immunity: in June 2004, the Supreme Court of Canada unanimously held that ISPs cannot be held liable for violations of Canadian copyright law committed by their subscribers.62 The decision ruled that the act of caching content does not by itself make an ISP liable and that an ISP's knowledge of potential infringements by subscribers is not necessarily sufficient to create liability either.63 In Canada, ISPs are therefore able to escape liability if they prove that they are merely acting as "conduits."64 They may, however, face liability as publishers if they exercise editorial control over material. This situation stands in contrast to the United States, where CDA 230 provides publisher immunity to ISPs, limited only where the provider or host has acted as an "information content provider" and actually created some or all of the content.65 An important caveat is that the U.S. immunity does not apply to intellectual property law, while the Canadian immunity exemplified in the case described earlier does extend to intellectual property matters such as copyright.66 Overall, both Canadian and U.S. service providers receive legal protections that favor the protection of free speech online. Canadian ISPs, however, lack the clearly set out statutory protection that exists in the United States and may feel compelled to take down allegedly defamatory content (e.g., postings to message boards) when threatened with the possibility of costly lawsuits.

Copyright

U.S. copyright law has evolved more quickly than Canadian law both in addressing the issue of ISP liability and in encouraging removal of infringing material. The Online Copyright Infringement Liability Limitation Act, a part of the Digital Millennium Copyright Act (DMCA) of 1998,67 gives service providers a "safe harbor" from liability for their users' copyright infringement, provided they implement copyright policies, and supplies the legal basis for a notice-and-takedown regime. Where a service provider unknowingly transmits, caches, retains, or furnishes a link to infringing material by means of an automatic technical process, it is protected from liability so long as it promptly removes or blocks access to the material upon notice of a claimed infringement.68 Section 512(c) of the DMCA69 provides that "a service provider shall not be liable for monetary relief, . . . , for injunctive or other equitable relief, for infringement of copyright by reason of the storage at the direction of a user of material that resides on a system or network . . . if the service provider

  • does not have actual knowledge that the material or an activity using the material on the system or network is infringing;
  • in the absence of such actual knowledge, is not aware of facts or circumstances from which infringing activity is apparent; or
  • upon obtaining such knowledge or awareness, acts expeditiously to remove, or disable access to, the material;
  • does not receive a financial benefit directly attributable to the infringing activity, in a case in which the service provider has the right and ability to control such activity; and
  • upon notification, . . . responds expeditiously to remove, or disable access to, the material that is claimed to be infringing or to be the subject of infringing activity."

The notice-and-takedown provisions of the DMCA have been put to broad use and have proven to be an effective instrument for combating copyright infringement online. They have also been seen as giving copyright owners—potentially anyone who has fixed an "original work of authorship"—unwarranted leverage over service providers and their subscribers. When a provider is notified of an alleged infringement, risk aversion encourages it to remove or disable access to the specified material, probably without first informing the subscriber. The subscriber may file a counternotice and have the content restored if the copyright owner does not file a claim in court, but such challenges are rare.70 Subscribers, like the providers hosting their Web sites, are more likely to concede to takedown pressures, even when an infringement may not actually be occurring. If a subscriber is sued, his or her identity may be subpoenaed, as in cases of defamation, and with similarly little judicial scrutiny.71 Major search engines such as Google comply with hundreds of removal requests a month, even though it is not clear that providing a hyperlink would incur copyright liability.72
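As a rough illustration of the incentives just described, the following sketch models the takedown and counternotice sequence. The class and function names are hypothetical, and the restoration window in the comment paraphrases 17 U.S.C. § 512(g); this is a schematic of the statute's outline, not any provider's actual system.

```python
# Schematic sketch of the Section 512 notice-and-takedown sequence:
# takedown on notice, optional counternotice, restoration if no court
# claim is filed within the statutory window.

from dataclasses import dataclass, field

@dataclass
class HostedItem:
    url: str
    online: bool = True
    history: list = field(default_factory=list)

def receive_takedown_notice(item: HostedItem) -> None:
    # Risk aversion: the provider removes promptly to keep its safe
    # harbor, typically before informing the subscriber.
    item.online = False
    item.history.append("removed on copyright owner's notice")

def receive_counternotice(item: HostedItem, owner_filed_suit: bool) -> None:
    if owner_filed_suit:
        item.history.append("owner filed court claim; material stays down")
    else:
        # Restore within the 10-14 business-day window if no claim
        # is filed (per 512(g); counternotices are rare in practice).
        item.online = True
        item.history.append("restored after statutory window")

page = HostedItem("http://example.com/user/essay.html")
receive_takedown_notice(page)
receive_counternotice(page, owner_filed_suit=False)
print(page.online, page.history)
```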

When Canada began to consider amending its copyright laws, it appeared to be following in the footsteps of the United States. In 2004, the House of Commons Standing Committee on Canadian Heritage retabled its Interim Report on Copyright Reform, which proposed a "notice and takedown" policy similar to that of the DMCA, under which Canadian service providers would be compelled to remove content immediately upon receiving notice of an alleged infringement from a professed copyright holder. The report came under fire from the Canadian Internet Policy and Public Interest Clinic (CIPPIC), Digital Copyright Canada, and the Public Interest Advocacy Centre (PIAC); numerous petitions and critiques followed, calling for balance between the rights of content creators and fair public use.73 A "Canadian DMCA" has since been proposed, in the form of Bill C-61 in 2008, which appears to be even more restrictive than the U.S. DMCA.74 The consensus is that the bill is unlikely to pass, although it continues to be a priority of the Conservative government.75

With no legislation yet enacted, Canadian ISPs have implemented a "notice and notice" policy for handling copyright infringement, a policy that would be continued under Bill C-61.76 "Notice and notice" was originally proposed in the now-defunct Bill C-60, which was dropped from the legislative agenda in 2005 with the collapse of the Liberal government.77 Under this policy, copyright owners send notices to ISPs regarding possible copyright infringement by subscribers. Providers then forward these notices to their subscribers instead of being obligated to remove the content themselves.78 Even though the notices do not mean that immediate legal action will follow if infringing activities do not cease, they have been successful in getting a significant portion of infringing subscribers to remove their materials.79
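The contrast with the takedown flow sketched earlier is that the Canadian provider never touches the content. A minimal sketch, with hypothetical names and data, of the forwarding step:

```python
# Sketch of the Canadian "notice and notice" practice described above:
# the ISP forwards the copyright owner's notice to the subscriber and
# removes nothing itself. Names, addresses, and fields are hypothetical.

def handle_notice_and_notice(notice: dict, subscribers: dict) -> str:
    """Forward a copyright owner's notice; take no content down."""
    subscriber_email = subscribers.get(notice["subscriber_ip"])
    if subscriber_email is None:
        return "no matching subscriber; notice logged"
    # The ISP acts as a conduit: the content stays up unless the
    # subscriber chooses to remove it (as many reportedly do).
    return f"notice forwarded to {subscriber_email}"

subs = {"198.51.100.23": "user@example.ca"}
print(handle_notice_and_notice(
    {"subscriber_ip": "198.51.100.23", "work": "song.mp3"}, subs))
```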

Legal protections against defamation and copyright infringement afforded under U.S. and Canadian law are in tension with the rights of service providers and Internet users. This often gives rise to the censoring and self-censoring of material. Canadian service providers erring on the side of caution may remove content from subscribers’ sites, as U.S. providers do when informed of alleged copyright violations. User material is therefore subject to censorship based on unsubstantiated claims. Moreover, because subpoenas offer plaintiffs an avenue for ascertaining subscribers’ identities without scrutiny, the potential for misuse of these subpoenas can instill a fear of improper discovery in subscribers that leads to self-censorship. These chilling effects have been well documented,80 and while they are indirect rather than direct state-mandated filtering, they constitute real censorship of online speech.81

Computer Security

Security concerns drive many of the state-mandated limitations on the speech and privacy interests of citizens. These security concerns in the United States and Canada take two forms: national security and computer security.

Computer security has led to certain content restrictions in the United States and Canada. Concerns about unwanted messages reaching computers, in various flavors of spam, have prompted content-based restrictions such as the CAN-SPAM Act of 2003 in the United States. In Canada, a National Task Force on Spam was convened in 2005 to study the spam problem.82 While some laws, such as the Personal Information Protection and Electronic Documents Act, were found to at least tangentially apply to spam, the task force found a need for legislation directly limiting spam that originates in Canada.83 The "Anti-Spam Bill" was finally tabled by the Canadian Government on April 24, 2009, as the Electronic Commerce Protection Act (Bill C-27) and is headed for committee review.84 Government materials accompanying the release of Canada's ECPA point to plans to establish a Spam Reporting Centre similar to the U.S. FTC reporting mechanism.85 The U.S. Congress has considered a range of options for limiting the free flow of bits across the Internet to address the problem of malicious software infecting computers, though most of the efforts to filter information based upon content deemed to be computing security risks are carried out by private firms or individuals on a voluntary basis.86 Calls are also being made to promote greater responsibility among ISPs for malicious software spread over their networks in order to contain the worst of "zombie" computers sending spam and distributing malware, in the interest of preserving network safety for other connected PCs. In sum, there is still an active, ongoing discussion about how and why regulation of the flow of obviously malicious code over the Internet might take place.87
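A minimal sketch of the kind of voluntary, content-based screening attributed above to private firms: messages whose bodies match known-bad patterns are quarantined or diverted rather than delivered. The signatures, phrases, and thresholds are hypothetical stand-ins, not any vendor's actual rules.

```python
# Toy content-based screen: quarantine on a malware signature match,
# divert to a spam folder on multiple spam phrases, deliver otherwise.
# All patterns below are hypothetical.

MALWARE_SIGNATURES = {"x5o!p%@ap", "eicar-test"}   # stand-in byte patterns
SPAM_PHRASES = {"act now", "free prize"}

def classify_message(body: str) -> str:
    lowered = body.lower()
    if any(sig in lowered for sig in MALWARE_SIGNATURES):
        return "quarantine: malware signature"
    if sum(phrase in lowered for phrase in SPAM_PHRASES) >= 2:
        return "spam folder"
    return "deliver"

print(classify_message("ACT NOW to claim your free prize!"))  # spam folder
print(classify_message("Meeting moved to 3pm."))              # deliver
```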

Network Neutrality

As a new Federal Communications Commission begins its work in the Obama Administration, network neutrality and the problem of bandwidth throttling are near the top of the list of issues it must tackle. One common mode of filtering Internet traffic is for ISPs to discriminate based upon the type or amount of data sent or requested through the network. Many people have had the experience of seeking to send an e-mail to a colleague with a large attachment, such as a photo or a video, only to have the e-mail bounce back with a note stating that an e-mail server along the way had rejected the message because of its size. Writ large, this same issue arises for ISPs and their users. Providers practice various forms of network management, deciding to favor some data packets over others, often to combat network scourges like spam and malware. Some ISPs, for instance, allow users only a certain amount of bandwidth for certain activities. In August 2008, the FCC ruled that Comcast, a large ISP, had violated federal network neutrality rules when it practiced bandwidth throttling to impede use of the BitTorrent file-sharing protocol.88 The Comcast decision—a vote of 3–2 by the commission—marked the first such intervention by the FCC, but by no means resolved the issue of what kind of reasonable network management ISPs are permitted to practice. The new Obama administration FCC will likely be called upon to consider new legislation by Congress, new regulatory systems, and new allegations of infractions of the sort carried out by Comcast.
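One common traffic-shaping technique behind such per-activity bandwidth caps is a token bucket, which caps the sustained rate of a traffic class while allowing short bursts. The sketch below uses hypothetical rates and class names; it illustrates the general technique, not any particular ISP's configuration (Comcast's actual methods were disputed in the proceeding).

```python
# Token-bucket sketch: each traffic class may send `rate` bytes per
# second on average, with bursts up to `capacity` bytes. Packets that
# exceed the budget are delayed or dropped, i.e., throttled.

import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # refill rate, bytes/second
        self.capacity = capacity    # maximum burst size, bytes
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False                # over budget: packet throttled

# Hypothetical policy: peer-to-peer traffic gets a tight budget,
# ordinary web traffic a loose one.
shapers = {"p2p": TokenBucket(rate=50_000, capacity=100_000),
           "web": TokenBucket(rate=5_000_000, capacity=10_000_000)}

def forward(packet_class: str, size: int) -> bool:
    return shapers[packet_class].allow(size)

print(forward("p2p", 80_000))   # True (within the burst allowance)
print(forward("p2p", 80_000))   # False (budget exhausted: throttled)
```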

Surveillance

Concerns related to national security in the United States have contributed to the development of an extensive and technologically sophisticated online surveillance system. The U.S. surveillance system was expanded significantly under the Bush administration following the attacks of September 11, 2001. Government wiretaps are reported to have included taps on major Internet interconnect points and data mining of Internet communications.89 Tapping these interconnect points would give the government the ability to intercept every overseas communication and many domestic ones. The U.S. government has moved to dismiss lawsuits filed against it and against AT&T by asserting the state secrets privilege; district courts in California and Michigan have refused to dismiss the lawsuits. If the allegations prove to be true, they show that the United States maintains the world's most sophisticated Internet surveillance regime. The Bush administration also pushed to expand the Communications Assistance for Law Enforcement Act (CALEA) to force providers to give law enforcement wiretap access to electronic communications networks. Alberto Gonzales, attorney general under the Bush administration, called for data retention laws to force ISPs to keep and potentially produce data that could link Internet subscribers to their otherwise anonymous communications.90 During his election campaign, Barack Obama criticized both the Bush administration's use of warrantless surveillance and its reliance on the state secrets privilege, yet in January 2009 he defended congressional legislation immunizing telecommunications companies from lawsuits over their participation in the Bush administration's surveillance programs.91

The U.S. government is required to produce annual reports on the number of wiretaps it conducts under Title III of the Omnibus Safe Streets and Crime Control Act of 1968 (the "Wiretap Act"), as well as communication interceptions conducted under the Foreign Intelligence Surveillance Act (FISA) and the Pen Register and Trap and Trace statute (Pen/Trap statute).92 No reports have been provided under the Pen/Trap statute since 1998.93

In Canada, Part VI of the Criminal Code governs the powers of law enforcement to engage in electronic surveillance of private communications when conducting criminal investigations. The Criminal Code requires the production of annual reports detailing the interceptions that occur.94 Canadian electronic surveillance for foreign intelligence is primarily undertaken by the Department of National Defence's secretive Communications Security Establishment (CSE), which operates in close cooperation with its U.S. counterpart and other allied intelligence networks. A commissioner is appointed to review the actions of the CSE and produce annual reports commenting on the agency's adherence to its legislative mandate under the National Defence Act.95 The commissioner's annual reports, while providing some oversight, offer little additional transparency, as no statistics on the number of communications interceptions are reported.

Conclusion

While there is little technical filtering in either country, the Internet is subject to substantial state regulation in the United States and Canada. With respect to surveillance, the United States is believed to be among the most aggressive countries in the world in terms of listening to online conversations.

Legislators in both countries have imposed Internet-specific regulation that limits their citizens’ access to Internet content. In addition, lawmakers have empowered private entities to press Internet intermediaries, including ISPs, for content removal or to carry out filtering. Although the laws are subject to legislative and judicial debate, these private actions may be less transparent. Governments in both countries, however, have experienced significant resistance to their content restriction policies, and, as a result, the extreme measures carried out in some of the more repressive countries of the world have not taken hold in North America.

Notes