Combating Anti-Semitism in Cyberspace
Posted: February 25, 2008
Remarks of Christopher Wolf
Chair, Internet Task Force
of the Global Forum
for Combating Anti-Semitism
Israeli Ministry of Foreign Affairs
February 24-25, 2008
It is a great honor for me to be included in this extremely important conference. The scourge of anti-Semitism is growing, and nowhere is that more evident than online, on the Internet. I chair the Internet Task Force of the Anti-Defamation League, as well as the International Network Against Cyber-Hate (INACH). In those roles, I routinely see the incitements to hatred and, often, violence against Jews. The Internet allows instant publication to millions with the click of a mouse. Some of the Internet sites are blatant and graphic; others are more insidious because of their subtlety or facial appearance of legitimacy.
An Example of Online Anti-Semitism
One recent example of online anti-Semitism appeared on a web site where a participant wrote:
Jewish identity in the past has been locked into the Holocaust experience. . . . It is a very good example of [how] a community can overplay a historic experience to the point that it begins to repulse friends. . . . The world did feel sorry for the episode but when an individual or a nation refuses to forgive and move on the regret turns into anger. . . . The Jewish identity in the future appears bleak. . . . We have created a culture of violence (Israel and the Jews are the biggest players) and that Culture of Violence is eventually going to destroy humanity.
In one short scholarly-sounding passage, the author manages to blame the Jews for a culture of violence in the 21st Century world and to trivialize the Holocaust, where Jews were the victims of the worst violence of the 20th Century.
One reaction to the just-quoted online posting came from Judea Pearl, father of the late Wall Street Journal reporter Daniel Pearl, who was murdered by Muslim extremists in Pakistan in 2002 because he was Jewish. Mr. Pearl said: "Too many people were killed, abused or dispossessed in the past century by words of irresponsible authors, often disguised as scholars or humanitarians, who pointed fingers at, and blamed one segment of society for the ills and maladies in the world."
You probably want to know on what web site such vitriol appeared. There are literally thousands of web sites of hate groups online. Was it the web site of Stormfront, the "white nationalist community?" Or the American Nazi Party? Or the Ku Klux Klan? Or Jew Watch (a highly-ranked site on Google)? Or some unknown site masquerading as authoritative?
No, it was none of those sites.
The just-quoted vitriol was contained in a January 7, 2008 essay on Jewish identity, published by The Washington Post on the popular "On Faith" portion of its washingtonpost.com web site. Indeed, the posting was advertised prominently on The Washington Post web site's home page, viewed by millions of visitors.
On Faith's moderators Sally Quinn, author and Post writer, and Newsweek editor Jon Meacham had posted a question to a panel of religion experts concerning the television series on public broadcasting entitled "The Jewish Americans." They wrote: "We know what 'Jewish identity' has meant in the past. What will it mean in the future? How does a minority religion retain its roots and embrace change?"
Amazingly, panelist Arun Gandhi, a grandson of pacifist Indian leader Mahatma Gandhi, wrote the just-quoted anti-Semitic diatribe in response.
Equally amazing is the fact that the primary Washington Post editor didn't have a problem with Gandhi's words, which he viewed simply as criticism of the Israeli government. One level up, the Post editor's boss said, "I read the piece as being a pacifist's critique of Israeli policy, not an anti-Semite's criticism of Jews – and as both a Jew and an editor, I take anti-Semitism seriously." Grudgingly, the senior editor added, "We should have asked Gandhi to clarify."
Like many web sites, the Post site allowed people to post their reactions. Among the many who responded was Emory University Medical Professor Jack Arbiser, who wrote: "I was astounded that you chose to run the overtly racist rant by Arun Gandhi. . . . Judaism and Jewish identity are not based upon the Holocaust. Jewish identity is based upon the Torah, a God-given document to humanity. . . . His screed would have fit in well with Nazi philosophy, except for the inconvenience that Gandhi would have not been awarded Aryan status."
The On Faith site's hosts, Meacham and Quinn, finally apologized on January 18th. And on January 25th, Gandhi was forced to resign as president of the board of the M.K. Gandhi Institute for Nonviolence, housed at the University of Rochester, because of what he wrote on the Post's web site.
His excuse for the essay was that it was written in a hurry in India while he was leading Americans on a tour through "Gandhi's India." He wished it had been "more careful, more dispassionate, diplomatic and not so harsh." He does not regret what he wrote about Israeli policies, "but I know many Jewish people who work for peace," a phrase one step removed from the bromide, "some of my best friends are Jews."
Still, despite the recognition that what Gandhi wrote was tantamount to raw anti-Semitism, he will stay on The Washington Post religion panel and his posting containing his rant is still online. Gandhi is being asked to write another piece about what he has learned from this experience of posting an anti-Semitic rant, and the online panel will discuss the issue. Many say that one of the advantages of the Internet is that hate speech can be responded to with counter-speech, and that is the case with the Gandhi posting. The "On Faith" web site is filled with responses by prominent thinkers, including David Saperstein, Director of the Religious Action Center of Reform Judaism, and hundreds of responses from ordinary readers. But, like letters to the editor in newspapers, responses never receive the attention or prominence of the original writing.
This example of Internet hate speech is cited here because as profoundly troubling as extremist hate sites are, the fact that raw anti-Semitism found its way so easily onto the web site of one of America's most prominent newspapers is cause for alarm. The point is that the Internet is a vehicle for hate in many ways.
The History of Online Hate Speech
For twenty years, hatemongers have had technological tools available to them to spread globally their messages of intolerance, conspiracy, historical distortions and denials, and calls for violence. Even before the birth of the World Wide Web, some organized hate groups recognized the potential of technology to disseminate their messages and further their goals.
In the 1980s, a leader of the Ku Klux Klan and a neo-Nazi publisher collaborated to create a computerized bulletin board accessible to anyone with a computer, phone line, and modem. The bulletin board, "Aryan Nation Liberty Net," was subscription-based and designed to recruit young people, raise money, and incite hatred against the "enemies" of white supremacy.
In the early 1990s, many bigots gravitated to organized online discussion groups on Usenet. Usenet newsgroups were similar to the "Aryan Nation Liberty Net" but were more easily accessible to anyone with Internet access. The newsgroups were free and provided a venue for participants to write, read, and respond to messages of hate.
The Internet also has become a distribution point for information on how like-minded haters can cause real-world harm. For example, Alexander James Curtis, who faced criminal charges for harassing civil rights leaders and vandalizing two synagogues, published an Internet guide in 2000 called "Biology for Aryans" that described the use of botulism, anthrax and typhoid for terror. Prior to that, on March 23, 1996, the so-called "Terrorist's Handbook" was posted on the web, including instructions on how to make a powerful bomb. The same kind of bomb had been used in the Oklahoma City bombing of the U.S. government office building, killing and injuring scores of people.
The evolution of the Internet into the World Wide Web, with its easily accessible and inviting graphic interface, provided people, including extremists, with new ways to communicate with each other and with a vast new potential audience, using not only words, but also pictures, graphics, sound, and animation. Recently, the New York Times had a front page headline that read "An Internet Jihad Aims at U.S. Viewers," and the story beneath chronicled how terrorists routinely use the web in all languages to recruit and to incite violence.
Web 2.0 and Online Hate
Over the past couple of years, the Internet toolbox available to hate mongers has had several new items added to it. Previously, those of us addressing online hate speech have focused on the proliferation of web sites advocating hate and violence. Today, such hateful and dangerous web sites still exist.
But today, we also are in the world of what is called "Web 2.0," which has transformed the way the Internet is being used. Even more problematic than the hate-filled web sites, which in fact are multiplying, is the sudden and rapidly increasing deployment of Web 2.0 technologies to spread messages, sounds and images of hate across the Internet and around the world.
"Web 2.0" refers to a second generation of web-based communities and hosted services – such as social-networking sites and user-generated video sites – whose purpose is to promote new connections, collaboration and sharing between users. MySpace, Facebook and YouTube are the most prominent examples of Web 2.0 technologies.
MySpace and Facebook are popular social networking web sites offering interactive, user-submitted networks of friends, personal profiles, blogs, groups, photos, music and videos. MySpace was started in 2003, and purchased by Rupert Murdoch in 2005 for $580 million. YouTube is a video sharing web site where users can upload, view and share video clips. YouTube was created in mid-February 2005. In November 2006, Google acquired the company for $1.65 billion.
To show you how fast these new technologies are growing, videos on YouTube created more traffic on the Internet in 2006 than existed on the entire Internet in the year 2000. MySpace and YouTube, along with Facebook, are what are known as "killer apps" on the Internet today, used by millions.
And the virus of hate has infected these new technologies. On YouTube, for example, hundreds of hate videos have been uploaded. The BBC recently reported on a video which appeared on YouTube showing uniformed soldiers exchanging Hitler salutes. And British neo-Nazi groups post videos hoping to recruit kids to their cult. Another video portrayed Zyklon-B tests on humans, purportedly to show that gassing at death camps did not really happen.
YouTube also recently has included music videos from the neo-Nazi heavy metal band "Landser," which contain images of Hitler and swastikas. One of the band's hits is a tribute to Rudolf Hess, a top Nazi deputy of Adolf Hitler.
YouTube also features clips from the 1940 anti-Semitic Nazi film "Jud Suess" made under the supervision of Joseph Goebbels to justify anti-Semitism. It is considered one of the most hateful depictions of Jews on film.
If offered in an educational context, with explanation of their hateful origins and of how they glorified or played a role in the deaths of millions, perhaps such materials would serve history. But they are not offered in that context; they are posted to provoke hate and to recruit haters. The "comments" section, which allows users to post their reactions to the videos, makes clear that the purpose and effect of the videos is to inspire hate and violence.
The situation is no better on social networking sites. The New York Times recently reported that an anti-Islamic group with a profane name using the "F" word has formed on Facebook with the purpose of bashing Islam and its followers, and inspiring hatred. At last count, the group had more than 750 members. The creator of the anti-Islam group denied that his group engages in hate speech and claimed that his attack on the religion is covered by his right to free speech. It now appears that the group has been deleted from Facebook.
But Facebook's editorial control seems inconsistent. There have been repeated instances on Facebook of Jews depicted in stereotypical and hateful ways, portrayed as spiders and rattlesnakes, and referred to as dirty Zionists. Yet, apparently Facebook decided not to act with respect to that hateful content, allowing it to remain online.
In August 2007, complaints about the Nazi propaganda were lodged with Google – parent of YouTube – by a German government-sponsored Internet watchdog group, Jugendschutz.de. The complaints were not responded to, although some of the complained-about videos disappeared. The Central Council of Jews in Germany threatened legal action against Google, given the German laws against the display of Nazi propaganda.
The International Network Against Cyber-Hate (INACH) now reports that YouTube is more responsive to complaints about hate-filled videos that violate its Terms of Service. Still, it is impossible to monitor and remove all hate-filled videos (just as it is impossible to use a complaints process to protect all copyrighted material that might be posted online). It is even harder to address hateful comments posted alongside hate videos, and even alongside innocuous videos, which the haters use as a vehicle to vent their rage.
Is the Problem of Online Hate Out of Control, and Should We Just Give Up?
One may ask, given the vast wave of information contained on the Internet, why even bother trying to control hate speech online? Well, here's why: The effect of such content on people – especially children – and on society is profoundly troubling. As a matter of principle, society must take a stand about what is right and wrong. And, in addition, although little empirical study exists, there is no question that there is a link between hate speech online and real world violence.
Perhaps the best way to illustrate how social networking sites facilitate hate speech on the Internet, and the connection between online and real-world manifestations of hate is to tell you about a recent episode originating in Manhasset, New York, a bedroom community of New York City on the North Shore of Long Island. A 2005 Wall Street Journal article ranked it as the best town for raising a family in the New York metropolitan area.
An individual named John Rocissano is a graduate of Manhasset High School, someone described by a neighbor as a "good kid." After graduation, Rocissano attended community college and found a job at the local Staples office supply store.
By day, Rocissano helped customers find printer cartridges and copier paper. By night, like many young adults, he used the social networking site, MySpace, to connect online with people sharing his interests. On MySpace, he became a group leader of the National Alliance discussion site.
The name, by itself, does not say much. National Alliance could be a well-intentioned group. But it is not.
National Alliance is a neo-Nazi, white supremacist hate group recognized for decades as one of the most formidable white supremacist groups in the country. The founder of the National Alliance was William Pierce, the author of The Turner Diaries, a novel calling for the violent overthrow of the federal government and the systematic killing of Jews and nonwhites in order to establish an "Aryan" society. The Turner Diaries is thought to be the inspiration behind Timothy McVeigh's bombing of the federal building in Oklahoma City, which resulted in the deaths of 168 people.
Rocissano also was inspired by the teachings of the National Alliance. On MySpace, he listed The Turner Diaries as his "favorite book." At first, he merely handed out fliers for the National Alliance, to recruit new members.
But in the late summer of 2007, he and a friend went on a hate crime spree in Manhasset, home to a large Jewish community, including Holocaust survivors. Over the Labor Day holiday weekend in the United States, the pair painted red swastikas and other graffiti espousing hate on an elementary school, on a school bus in a high school parking lot, at a synagogue (where they also smashed windows), on a home in nearby Roslyn Estates, New York, and on a street sign in a residential neighborhood. Before their violence escalated any further, they were arrested and charged with misdemeanors – first offenses for each. Ironically, in the "About Me" section of his MySpace home page, Rocissano wrote: "Don't judge me until you get to know me."
On MySpace, as well as on the social networking site Facebook.com, there are hundreds of groups featuring the words "Hitler" or "Nazi," many established to promote neo-Nazism and other anti-Semitic sentiment. The "virtual community" of haters no doubt gave Rocissano the feeling that his views were mainstream and acceptable, and that it was OK to act on them.
Had the police searched the computer of the newly apprehended hate criminal, they likely would have seen evidence of visits to YouTube and viewings of hate videos posted there.
The MySpace, Facebook and YouTube materials join the thousands of web sites that deny the Holocaust and that espouse virulent anti-Semitism; others portray gays and lesbians as subhuman in the guise of promoting so-called "family values;" and still other web sites contain racial epithets and caricatures. As new technologies for information become available over the Internet, members of hate groups have proven themselves to be "early adopters."
Another example: online gaming is popular, and hate-filled online games now are available at lightning-fast speeds thanks to broadband technology. There are numerous games that celebrate in gory detail the random killing of minorities.
Online "Direct Marketing" of Hate Speech
Some call Internet hate speech the "direct marketing" of racism and violence. And as bad as the directly-racist and violent web sites and Internet content may be, perhaps more troubling are the hate sites masquerading as scholarly and reliable sites.
Stormfront, which describes itself as the "White Nationalist Community" hosts a site about Martin Luther King that appears to be legitimate but, in fact, contains racist propaganda. To a schoolchild doing homework research, the site is terribly misleading and has the potential for instilling biased and hateful preconceptions in young minds.
Before the Internet, hate speech largely was available only in plain brown envelopes and down dark alleys, and its reach was limited. Rallies rarely attracted large crowds. Now, on the Internet, hate is on display for all to see, and the potential audience is vast.
The dawn of hate on the Internet has wreaked havoc on American society with a marked increase in hate crimes. Online recruiting has aided many hate groups linked to violence against Jews, African-Americans, gays and lesbians in their efforts to increase their membership. In fact, Don Black, former Grand Dragon of the Ku Klux Klan, noted that, "as far as recruiting, [the internet has] been the biggest breakthrough I've seen in the 30 years I've been involved in [white nationalism]."
The Role of Law in Addressing Online Hate Speech
An understandable immediate reaction to the hate found on the Internet is "there ought to be a law." But, in the United States, the First Amendment to the United States Constitution applies with full force to the Internet, the Supreme Court has ruled. And that freedom of expression protection means most speech is permissible unless it threatens imminent violence directed at identifiable victims. To be sure, there also are laws against pornographic content and against online violations of intellectual property rights. And in the U.S., hate speech, online or off, can be used in some jurisdictions as evidence to show a prohibited motivation for a crime.
In Europe and elsewhere around the world, by contrast, there are laws prohibiting online hate speech and images. Why the difference in approach? Although freedom of expression is a valued principle in most modern democracies, it is counterbalanced by the belief that government has a role in protecting its citizens from the effects of hate and intolerance. Nowhere is this belief stronger than in Germany and its neighbors, countries that less than a century ago witnessed how words of hate against Jews and other minorities exploded into the Holocaust, with the attendant murder of more than six million people.
As a result, there are laws in Germany and elsewhere in Europe that prohibit words and images attacking religious, racial and sexual minorities, and that revive the words and images of the Nazi era. In Germany, Volksverhetzung (incitement of hatred against a minority) is a punishable offense under Section 130 of Germany's criminal code and can lead to up to five years imprisonment. Volksverhetzung is punishable in Germany even if committed abroad and even if committed by non-German citizens, if the sentiment was made accessible in Germany.
A famous instance of German prosecution of someone whose hate speech was launched from abroad but was available in Germany is Ernst Zundel. Zundel is a Holocaust denier who published "The Hitler We Loved and Why" and "Did Six Million Really Die" while he lived in North America. Zundel was deported from the U.S. to Canada and onward to Germany, and tried criminally in the state court of Mannheim on outstanding charges of incitement for Holocaust denial dating from the early 1990s, including for materials disseminated over the Internet. On February 15th, 2007, he was convicted and sentenced to the maximum term of five years in prison.
Similarly, an Australian Holocaust denier, Frederick Toben, used his Australia-based web site to publish his benighted views. Upon visiting Germany, he was arrested, tried, and convicted of violating German law as a result of his Australian-based web site that was viewable in Germany. The conviction and subsequent jailing made Toben a hero of sorts among Holocaust deniers, so much so that he was a featured speaker at the infamous conference sponsored by the Iranian government on whether the Holocaust really happened. And the convictions did not do much to silence either man's hate speech. All one need do is insert the names of Toben and Zundel in a Google search bar to find web sites of supporters paying homage to them as martyrs and republishing their messages.
There is of course a danger, beyond the scope of our focus here today, of nations squelching political speech in the name of eradicating hate speech. So the power to control in the hands of reasonable state actors may be appropriate, but it is a power that can be abused by less responsible regimes.
In addition to national laws like that in Germany used to convict Toben and Zundel, the Council of Europe has included in the Cybercrime Treaty a prohibition against online hate speech. Specifically, the provision bans "any written material, any image or any other representation of ideas or theories, which advocates, promotes or incites hatred, discrimination or violence, against any individual or group of individuals, based on race, color, descent or national or ethnic origin, as well as religion if used as pretext for any of these factors." It also outlaws sites that deny, minimize, approve or justify crimes against humanity, particularly the Holocaust.
The treaty is beginning to be implemented through legislation among European member countries. The United States is a signatory to the Cybercrime Treaty but did not sign the protocol on online hate speech, in light of its invalidity domestically under the First Amendment. And the European Union recently passed legislation extending to the Internet its "broadcast rules" that restrict hateful and other content deemed inappropriate.
The Effect of the First Amendment to the United States Constitution
There is a fundamental difference in approach in the United States to hate speech. The framework of the First Amendment presupposes that just as hate speech is permissible, so too is speech intended to counter and negate such hate speech. Simply put, for every hurtful lie told about a group of people, someone can tell the truth about the falsity of stereotypes and about how important it is to judge people as individuals. But in the Internet era, it appears there are more people interested in spewing hate than in countering it. On the social networking sites and on YouTube, inflammatory, hate-filled content overwhelms the limited efforts to promote tolerance and to teach diversity. And, as we have seen, hate speech inspires violence.
What does that mean for the Internet worldwide? We have seen that countries – like Germany – criminalize Internet hate speech and issue orders requiring people to take down web pages and video that would be illegal in the United States. Indeed, people have been arrested and jailed because of their online content. Does that mean that the laws in Europe result in a "cleaning up" of the Internet? The answer is no.
The borderless nature of the Internet means that if placing certain content on the Internet is illegal in one place, all one needs to do is place the prohibited content on the Internet in a jurisdiction where it is legal. That means the United States, which is the most permissive nation in the world when it comes to allowable speech, can serve as host to hate-filled content that is illegal elsewhere. Once launched from the United States, it is viewable worldwide, except in certain situations where there is massive censorship blocking incoming Internet content, such as China. And one need not be physically present in the United States to launch content from an Internet server here. Telecommunication lines make remote Internet hosting simple for someone overseas.
So laws addressed at Internet hate, even though understandable in light of a nation's history, are perhaps the least effective way to deal with the problem. There may be symbolic value in prosecuting hate speech online, to show that a country such as Germany will not sit idly by and allow speech that is contrary to its values of tolerance and personal respect. But the reflexive use of the law as the tool of first resort threatens to weaken respect for the law, because such enforcement will most often fail to stop the content from appearing online (it can simply be re-posted in the United States once taken down abroad), or will be spent on minor violations.
I do not mean to suggest that the First Amendment in the United States is a license for haters to post anything online. There are legal lines that can be crossed that subject individuals to criminal liability for what they say online. These include specific threats targeted at specific individuals, speech that provides support for and promotion of terrorism, or speech that constitutes a conspiracy. As the United States Supreme Court stated in a 1963 decision, "[W]hile the Constitution protects against invasion of individual rights, it is not a suicide pact," and thus the First Amendment yields when speech directly threatens the public safety. But, of course, most hate speech is abstract in its rantings.
Non-Legal Approaches to Countering Hate Speech
The law is but one tool in the fight against online hate. Indeed, perhaps the best antidote to hate speech is counter-speech – exposing hate speech for its deceitful and false content, setting the record straight, and promoting the values of tolerance and diversity. To paraphrase U.S. Supreme Court Justice Brandeis, sunlight is still the best disinfectant – it is always better to expose hate to the light of day than to let it fester in the darkness. The best answer to bad speech is more speech. Regrettably, it is not fashionable to promote tolerance and diversity, and to counter hate speech, on the Internet. Hate sites far outnumber sites with messages to counter hate speech.
So what are other possible antidotes to hate speech online? The voluntary cooperation of the Internet community – Internet Service Providers (ISPs) and others – to join in the campaign against hate speech is urgently needed. If more ISPs, especially in the U.S., block hateful content by enforcing their Terms of Service, it will at least be more difficult for haters to gain access through respectable hosts.
The latest social networking and video sites go to great pains to eliminate obscene (but not legally pornographic) content because of the anticipated public outcry over the appearance of such material. That is why YouTube's videos all are "G rated." A similar effort could help eliminate hate content, but it appears that public demand for such editing is needed to prompt adequate attention.
But in the era of Search Engines as the primary portals for Internet users, cooperation from the Googles of the world is an even more important goal. The experience with Google concerning the hate site "Jew Watch" shows how Search Engine companies can help. When users entered the search term "Jew," the top result in Google was the hate site "Jew Watch." The high ranking of Jew Watch in response to a search inquiry was not due to a conscious choice by Google, but was solely a result of an automated system of ranking. In response to contacts from the Anti-Defamation League, Google placed text on its site that apologized for the ranking, and gave users a clear explanation of how search results are obtained, to refute the impression that Jew Watch was a reliable source of information.
INACH has reported that over a recent four year period, it received complaints on fifteen thousand cases of online hate. By forwarding the complaints to ISPs and search engines, more than five thousand hate sites, discussion threads, videos and music files were removed. Still, requests for removal frequently are not acted upon, as evidenced by the recent case of Germany's Jugendschutz.de complaining to YouTube but receiving no response.
For the time being, YouTube is the single major video-sharing portal. So its decisions on what content appears do make a difference. But where there are multiple outlets for content, as is the norm on the web, the effectiveness of the take-down remedy is limited. For example, a subscriber to an ISP who loses his or her account for violating that ISP's regulations against hate speech may resume propagating hate by subsequently signing up with any of the dozens of more permissive ISPs in the marketplace.
The problem of hate speech on the Internet is not one that is easily solved. The law has a limited role to play, especially in light of the permissive rules in the United States, which allow hate speech to be launched for viewing worldwide. The ISP and search engine operators could, if they wished, play a greater role in controlling hate speech, but even their efforts, unless coordinated, may have limited impact. Thus, Justice Brandeis' remedy of more and truthful speech to counter the harmful effects of hate speech may, in the end, be the most enduring solution. Just as words do motivate people to act – in the context of hate speech, to act criminally – perhaps words of tolerance and understanding will motivate people to control their basest instincts. In the end, right-minded people saying and doing the right things may be better than any technological or legal approach.