

The Dangers Inherent in Web 2.0

Posted: April 15, 2008

Remarks by Christopher Wolf
Chair, ADL Internet Task Force and Chair, International Network Against Cyber-Hate (INACH)

To the Westminster eForum Keynote Seminar:
Web 2.0 -- Meeting the Policy Challenges of User-Generated Content, Social Networking and Beyond

London, UK, April 2008

Thirty-four years ago, I lived here in London and attended the London School of Economics for my American junior year abroad.  In university, the only online community I knew was the one waiting online for tea at Passfield Hall, near Russell Square, where room and board was had for £9 a week.  (You can just imagine the quality of the cuisine at that price; I had a lot of curry take away that year).

Communications home were patchy, with a weekly or bi-weekly call being cut short by my father who urgently reminded me that "these calls cost money."  An alternative was the pay phone at the Baker Street tube which occasionally gave callers unlimited free long-distance through a glitch in the system.

Today, university students around the world are using Skype, have homepages on Facebook and MySpace, are sharing music and movies on peer-to-peer networks, and are posting videos to YouTube at an astonishing rate.  The Internet today is providing incredible opportunities for communication, entertainment, commerce and education.

And Professor Murray, at my alma mater, the LSE, is helping students understand the impact computers and the Internet are having on the substantive law of the United Kingdom, Europe and the United States.  Professor Murray's analyses of the effects of regulatory structures on the development of the Internet community are enormously important.  Thank you, Professor Murray for your work.

While the Internet is a powerful engine for worldwide connection, it also has proved to be a breeding ground for intolerance, hate and even terrorism.  For 20 years, hatemongers have had technological tools available to them to spread globally their messages of intolerance, conspiracy, historical distortions and denials, and calls for violence.

As an Internet lawyer vitally concerned with the public policy issues affecting the development of the Internet, I have focused on the issue of Internet hate speech for more than a decade.  I currently chair the Internet Task Force of the Anti-Defamation League in the United States, and also serve as chair of the International Network Against Cyber-Hate (INACH), an NGO based in Amsterdam comprised of other NGOs fighting online hate speech.  Ironically, I have found that online privacy – a major focus of my day-to-day legal work for clients at Proskauer Rose – serves to shield online haters with a veil of anonymity that allows them to remain unidentifiable and unaccountable for their actions.

Monitoring the evolving use – or, I should say, misuse – of the Internet by hate groups has been a challenge.  In the early days of the Internet, even before the birth of the World Wide Web, some organized hate groups recognized the potential of technology to disseminate their messages and further their goals, using computerized bulletin boards – which seem antique by today's standards.  Whereas previously hatemongers met in dingy basements and distributed their propaganda in plain brown wrappers, the Internet gave them a new place to meet, largely undetected.

The evolution of the Internet into the World Wide Web provided people, including extremists, with new ways to communicate with each other and with a vast new potential audience, using not only words, but also pictures, graphics, sound, and animation.  Recently, The New York Times had a front page headline that read "An Internet Jihad Aims at U.S. Viewers," and the story beneath chronicled how terrorists routinely use the web in all languages to recruit and to incite violence.

Our revulsion at hate speech is not just because it is painful to see and hear.  Despite the schoolyard adage about "sticks and stones", words do have a lasting, injurious effect.  And words of hate often are the seeds of actual violence, witness the Internet Jihad.  Following an online hate speech conference in Poland last year, I went to Auschwitz, and if ever there was an illustration of words of hate leading to real-world hate and death, Auschwitz was it.

Over the past couple of years, the Internet toolbox available to hatemongers has had several new items added to it.  Previously, those of us addressing online hate speech have focused on the proliferation of web sites advocating hate and violence.  Today, such hateful and dangerous web sites still exist.

But today, we also are in the world of what is called "Web 2.0," which has transformed the way the Internet is being used.  Certainly, the problem of hate-filled web sites still exists, and in fact is getting worse.  But more problematic is the sudden and rapidly increasing deployment of Web 2.0 technologies to spread messages, sounds and images of hate across the Internet and around the world.

As we know, "Web 2.0" refers to a second generation of web-based communities and hosted services – such as social-networking sites and user-generated video sites – whose purpose is to promote new connections, collaboration and sharing between users.  MySpace, Facebook and YouTube are the most prominent examples of Web 2.0 technologies.

An illustration of how fast these new technologies are growing: videos on YouTube created more traffic on the Internet in 2006 than existed on the entire Internet in the year 2000.  MySpace and YouTube, along with Facebook, are what are known as "killer apps" on the Internet today, used by millions.

And the virus of hate has infected these new technologies.  On YouTube, for example, thousands of hate videos have been uploaded, with messages of racism, anti-Semitism, homophobia, and intolerance towards minorities generally.

Videos that glorify Hitler and Nazism, and that deny the Holocaust, are often found online.  If offered in an educational context, with explanation of their hateful origins and of the role they played in the deaths of millions, perhaps such material would serve history.  But they are not offered in that context; they are posted to provoke hate and to recruit haters.  The "comments" section on YouTube, which allows users to post their reactions to the videos, makes clear that the purpose and effect of the videos is to inspire hate and violence.

Videos are not restricted to the so-called user-generated content sites, of course.  Just a few weeks ago, the anti-Muslim Dutch politician Geert Wilders posted to his own web site a film entitled Fitna, condemning the Koran and the Muslim religion.  So controversial was the film that the YouTube trailers for it posted in February led the government of Pakistan to block the YouTube site in its entirety.

Wilders also created a stir when he announced that he would premiere the film on his Web site hosted by the United States company Network Solutions (best known for its domain name services).  Network Solutions suspended the site, saying that it did so to investigate whether the site's content violated its "acceptable use policy," which prohibits hate propaganda.

On the blogosphere in the United States, Network Solutions was called a censor and a coward, with scores of posts praising the Internet for its "anything goes" culture. Some argued that if Wilders's movie is offensive and prompts violence, so be it - that's the price of Internet freedom.

I wrote an opinion piece in the International Herald Tribune explaining that it is simply not the case that in the United States anything is permitted on the Internet. Child pornography and child predators are not allowed online. Threats of harm directed at specific individuals are illegal. Certain invasions of privacy are disallowed. And copyrighted text, music and movies may not be posted legally without the owner's permission.

I explained in the newspaper piece, as I will elaborate in a moment, that freedom of speech does not mean that Internet companies have to publish anything others want them to display. Indeed, it would likely be a violation of that freedom to require such publication.

I also explained that while some may disagree with the decision of Network Solutions and argue that the best way to address the content of Wilders's film is to post an analysis or rebuttal, Network Solutions was well within its rights not to host hate-filled content, especially that which people feared would set off riots around the world and cause emotional turmoil.

     *  *  *

Let me turn from online hate videos and focus for a moment on social networking sites.  It probably will not surprise you to learn that there are Facebook and MySpace sites demonizing Muslims and Jews alike, and attacking minorities of all kinds.  Such hate material violates the terms of use of virtually all of the mainstream Web 2.0 sites.  YouTube, MySpace and Facebook prohibit content that is harmful, offensive or illegal or that violates the rights or threatens the safety of any person.  On all three sites users have the right to report material violating the Terms of Use.  However, such reports often are ignored and the content proliferates faster than conscientious users can report it.

And I also must include in my recitation of online hate the issue of cyberbullying.  According to the Pew Internet & American Life Project, almost 90% of youth in the U.S. are online.  Fifty percent have a cell phone.  For the current generation of young people, e-mailing, IM-ing, text messaging, chatting and blogging are a vital means of self-expression and a central part of their social lives.  There are increasing reports, however, that some youth are misusing Internet and cell phone technology to bully and harass others, and even to incite violence against them.  The organization Fight Crime: Invest in Kids reports that more than 53% of young people have been the targets of cyberbullying.  In another survey, almost 80% of Internet-using adolescents report being aware of cyberbullying that occurs online, and over one-third report that they have seen their friends bully others online.  For some of these youth, online cruelty may be a precursor to more destructive behavior, including involvement in hate groups and bias-related violence.

You may be familiar with the case of Megan Meier, a 13-year-old Missouri girl driven to suicide by relentless online bullying by a neighbor posing as a teenage boy.

And then there is the site called JuicyCampus, which some universities have blocked because the site promotes anonymous posting of hateful attacks on students.

So, what is the solution to the proliferation of hate speech online – on web sites, on YouTube, on social networking sites, and through cyberbullying?

As a lawyer, you might expect me to say "there ought to be a law".   One response to the presence of such vile material online is through lawmaking and law enforcement.  Legislatures around the world have heeded the call for new laws aimed at Internet hate, except notably in the United States where the First Amendment prohibits broad regulation of speech.  The Internet hate protocol to the Cybercrime Treaty is a prime example of a heralded legal solution to the problem.  And even in the United States, while there are not new Internet-specific laws, existing laws against direct threats or incitements to violence or terrorism have been used against online miscreants.

There are those who think laws are the way to regulate hate speech, and there are those, like me, who think laws often are futile and ineffective, and that those concerned about hate speech should focus primarily on the other tools available to fight online hate, such as the voluntary involvement of the Internet industry and the powerful tool of education.

It may surprise you to hear my reaction to the chorus demanding new laws, given that I am an Internet lawyer.  In my professional life, I regularly employ an array of laws to go after violations of the law that appear online.  I was one of the first lawyers, if not the first, to go after illegal downloading and file sharing of music way back in 1996, long before Napster.

But my response to the visceral calls for new laws to deal with hate speech is "Not necessarily."  I might even put it more strongly:  "Laws addressed at Internet hate are perhaps the least effective way to deal with the problem, and create a sense of false security promoting inaction and underuse of the other tools available to fight online hate."

To be sure, there are clear cases where legal enforcement is absolutely required, such as where an individual or identifiable group is targeted for harm. And there also are cases where legal action serves to express decent society's outrage against speech that goes well beyond the pale of what is acceptable in normal discourse, especially in light of recent history.  In countries like Germany and Austria, the enforcement of laws against Holocaust deniers – given the bitterly sad history of those countries – serves as a message to all citizens (especially impressionable children) that it is literally unspeakable to deny the Holocaust given the horrors of genocide inflicted on those countries.  With that said, there are many who believe that prosecutions such as that of David Irving do more to promote his visibility, and to stir up his benighted supporters, than they do to truly quell future hate speech and enlighten the public.

But the reflexive use of the law as the tool of first resort to deal with online hate speech threatens to weaken respect for the law if such attempted law enforcement fails or is used against minor violations.  The case brought against Yahoo! to enforce the French law that prohibits the selling or display of neo-Nazi memorabilia in the end trivialized the speech codes directed at Holocaust deniers, and created a series of precedents that could prove unhelpful in future, more serious prosecutions.

Likewise, prosecutions in the U.S. against persons accused of maintaining web sites that promoted terrorism failed when it was demonstrated that the content that triggered the prosecutions appeared elsewhere, unchallenged, on more respectable academic sites.  Those cases demonstrated perfectly that deciding what speech is in or out of bounds can be extremely difficult, especially when on the Internet the very same content can appear in a variety of locations.

Which brings me to my chief objection to the use of the law as the primary enforcement tool:  Given that the U.S., with our First Amendment, essentially is a safe haven for virtually all Web content, shutting down a web site in Europe or Canada through legal channels is far from a guarantee that the contents have been censored for all time.  The borderless nature of the Internet means that, like chasing cockroaches, squashing one does not solve the problem when there are many more waiting behind the walls – or across the border.  Many see prosecution of Internet speech in one country as a futile gesture when the speech can re-appear on the Internet almost instantaneously, hosted by an ISP in the United States.

Chief Judge Alex Kozinski of the U.S. Court of Appeals for the Ninth Circuit recently observed at a forum at Pepperdine University that, in a day when Internet speech is not capable of suppression, the ability of the law to have a moderating effect is now gone.  Essentially he asked with respect to the First Amendment:  What use does a constitutional limitation have on government restrictions on speech when the government no longer has the practical ability to control speech at all?

Judge Kozinski argued that today we live in an age when whistleblowers are unknowable, documents are leaked without consequence, blogger journalists are anonymous and judgment proof, and the mainstream media is in financial peril. Any attempt to restrict speech results in that speech being replicated a thousand times over. As such, the First Amendment jurisprudence that allows for at least some control of speech in areas such as defamation, copyright, online threats, and the like, is now obsolete.

But Judge Kozinski did not address the symbolic importance of the law in some cases.  Certainly the prosecutions under German law of the notorious anti-Semites and Holocaust deniers Ernst Zundel and Frederick Toeben sent a message of deterrence to people who make it their life's work to spread hate around the world: they may well go to jail.  And, again, such prosecutions expressed society's outrage at the messages.  But all one need do is insert the names of those criminals into a Google search, and you will find web sites of supporters paying homage to them as martyrs and republishing their messages.

And it must be noted that the cross-border prosecutions give repressive regimes like China support to request international assistance in enforcing their own laws, which they justify as being as important as the laws against Holocaust denial but which in fact squelch the free expression of ideas.

I am not saying that law has no role to play in fighting online hate speech – far from it.  I am saying that countries with speech codes should make sure that proper discretion is employed in using those laws against Internet hate speech, lest the enforcement be seen as ineffectual, resulting in diminished respect for the law.  And I am saying that the realities of the Internet are such that shutting down a web site through legal means in one country is far from a guarantee that the web site is shuttered for all time.  Certainly in absolute terms, new laws have not stemmed the tide of new web sites and social networking sites containing hate speech.

I should note here that there have been interesting developments very recently in the United States where courts have begun to chip away, just a little, at the immunity ISPs and web sites enjoy from liability for the postings of their users.  The day may come, if a trend continues, where the potential for legal liability for tortious speech of others may compel ISPs and web sites to more actively monitor what goes out through their service.  We will have to wait and see.

Obviously my view is that the law is but one tool in the fight against online hate.  It may sound like a cliché, but I firmly believe that the best antidote to hate speech is counter-speech – exposing hate speech for its deceitful and false content, setting the record straight, and promoting the values of tolerance and diversity.  To paraphrase U.S. Supreme Court Justice Brandeis, sunlight is still the best disinfectant – it is always better to expose hate to the light of day than to let it fester in the darkness.  The best answer to bad speech is more speech.

In February, I spoke on Internet hate speech in Jerusalem at Israel's Foreign Minister's Global Conference on Anti-Semitism.  Coinciding with the conference, Shimon Peres, the 84-year-old leader of Israel, challenged a group of international students to use their time on Facebook to counter the spread of hate and bullying.  "Anti-Semitism is a disease of everyone. Persecuting minorities, discrimination, xenophobia and violence exist in many countries in the world," Peres told the group. "You have the opportunity to teach your friends about the memory of the Holocaust so that these horrors will never be forgotten and will never be repeated."  Peres doesn't have his own Facebook account yet as far as I know, but I wouldn't be surprised if he gets one soon.  And he is right about the power of viral speech online – if kids see their peers repeatedly speaking out against hate and intolerance, and reminding others of the effects of hate speech historically, it will make a difference.

In addition to counter-speech, education is hugely important, because kids are the most impressionable, susceptible victims of hate speech.  I commend to you the anti-cyberbullying curriculum of the Anti-Defamation League.

At the ADL, as well as at INACH, through its member organizations, we seek the voluntary cooperation of the Internet community – ISPs and others – to join in the campaign against hate speech.  That may mean enforcement of Terms of Service against offensive content; if more ISPs, in the U.S. especially, block hateful content as Network Solutions did in the Geert Wilders film example, it will at least be more difficult for haters to gain access through respectable hosts.  Likewise, perhaps more universities will put their foot down when it comes to sites like JuicyCampus, whose only purpose is to humiliate and harass students.

But in the era of search engines as the primary portals for Internet users, cooperation from the Googles of the world is an increasingly important goal.  Our experience at the ADL with Google and the site "Jew Watch" is a good example.  The high ranking of the hate site Jew Watch in response to a search inquiry using the word "Jew" was not due to a conscious choice by Google, but was solely a result of an automated system of ranking.  Google placed text on its site that apologized for the ranking, and gave users a clear explanation of how search results are obtained, to refute the impression that Jew Watch was a reliable source of information.

I am convinced that if much of the time and energy spent in purported law enforcement against hate speech were used in collaborating and uniting with the online industry to fight the scourge of online hate, we would be making more gains in the fight.  That is not to say that the law should be discarded as a tool.  But it should be regarded more as a silver bullet reserved for egregious cases where the outcome can make a difference, rather than a shotgun scattering pellets to marginal effect.

Nearly four years ago, in June of 2004, I spoke in Paris at a meeting of the OSCE, the Organization for Security and Cooperation in Europe.  The OSCE meeting focused on the misuse of the Internet by hate groups and its connection with real-world hate crimes and terrorism.  The tenor of my remarks was not unlike my remarks today, especially with respect to the utility of legal regulation of speech.  I concluded my Paris remarks, as I have in similar programs in Sweden, Poland, Germany, Israel and in my own country, by calling for global cooperation in fighting pernicious Internet hate speech.

Following me was the former French Minister of Justice, who directed a small rant in the direction of American lawyers like me when he exclaimed "Stop hiding behind the First Amendment!"  What he meant was that while European countries make it illegal for racists, anti-Semites and xenophobes to display Nazi symbols, to deny the Holocaust and otherwise to demean minorities with words and images of hate, the First Amendment to the United States Constitution prevents such laws in America.  Essentially, he was blaming the US for the global epidemic of online hate speech.

I did not have a chance to respond to the Minister of Justice, but had I the chance I would have explained the reality of the Internet in terms much like those of Judge Kozinski.  Even if somehow Americans could be convinced that the First Amendment must yield on the Internet – and the Supreme Court has made it plain that will never happen – even European-style speech codes online will not turn the tide against online hate speech, whether on web sites, in posted videos or on social networking sites.  We must deal with the new reality of law taking a back seat to other remedies – to the use of counter-speech, education, and the involvement of Internet companies to combat the scourge of hate speech online.

Thank you for your attention.


© 2008 Anti-Defamation League