There is something mildly unsettling about the cyberpolice’s fixation on child pornography. At the Internet Content Summit, held last week in Munich and hosted by the Bertelsmann Stiftung, kiddie porn was repeatedly denounced by participants. To judge from the general tone of the comments, it embodied the ultimate evolution of evil.
I don’t mean to dismiss the problem. Child porn is grim stuff, pedophilia even worse, and there is evidence that the Internet has provided new opportunities for pedophiles and purveyors of kiddie smut. Rachel O’Connor, a psychologist at the University of Cork, has said that “one of the most significant factors influencing the growth of child pornography on the Internet is the ease of dissemination and collection. Anonymity and convenience have revealed an extraordinary level of sexual interest in children. Presumably, this interest was either dormant or latent in the past.”
Still, let’s not lose perspective. A UNESCO report cites a U.S. nongovernmental organization that found at least 21,000 pedophilia sites on the Net. Parry Aftab, a leading authority on the subject, found 30,000 sites relating to child abuse or pedophilia — out of 4.3 million sites. O’Connor’s research shows that child pornography accounts for only 0.07 percent of 40,000 newsgroups worldwide.
The uproar over kiddie porn also frequently blends imperceptibly into another issue: concern that children are seeing pornography on the Net. While parents are right to want to control what their children see, or what is brought into their homes, this is another topic altogether. Blurring the two issues makes finding a workable solution to either that much harder.
This kind of fuzzy thinking is all too common when it comes to discussing the Net. There are several reasons for the lack of intellectual rigor: laziness or sloppy thinking is most frequent, but sometimes something more insidious is at work. Individuals with their own agendas try to piggyback their issue onto other issues. Politicians are often to blame, looking for ways to score quick points on the strength of (usually heartbreaking) news. Sadly, journalists are ready accomplices, always looking for a salacious angle to a story.
Fixing the problems takes discipline. Recognizing that there are separate problems is the first step. So let’s start at the beginning: What are we talking about?
A key distinction has to be drawn between illegal content and harmful content. To take one example, just about all of us agree that pedophilia is bad. But more precisely, pedophilia is illegal in the real world. Enforcing those laws — and strengthening international law-enforcement cooperation — is the best response to that problem.
Other solutions have to weigh potential costs and benefits. Imposing penalties on Internet Service Providers for transmitting porn images or messages sounds good, but it may be too burdensome for ISPs. Will they monitor — or, to use a slightly more inflammatory, but no less accurate word, eavesdrop on — other, unrelated messages and images, or shut down potentially dangerous sites out of fear of liability?
When you have an answer to that one, try this: Governments are obliged to provide protections to individuals accused of violating laws. ISPs are not. Ensuring that ISP decisions will be open, transparent and fair will force the government to secure rights indirectly. Why bother? My gut instinct is that it is far easier to let governments bear the burden through their normal legal and administrative processes.
Harmful content is another matter (even though it does invite all the due-process considerations just mentioned). The real problem with harmful content is that the definition of harmful varies from community to community. Ponder this especially revealing example: Americans consider sex and nudity particularly offensive. Those two subjects top their list of inappropriate material for minors. Europeans are less concerned about sexual content; their main beef is violence, a topic that Americans rarely get hot and bothered about. (There are signs, however, that Americans are waking up to the fact that inordinate amounts of bloodshed can be a disruptive influence on adolescent development.)
It is essential that we be sensitive to those cultural differences (and use the word cultural verrry loosely). Let me rephrase that: Without sensitivity to those differences, no scheme has a chance of working. No scheme that imposes a single set of value judgments about content will work. That means that a multidimensional framework is needed.
At the Internet Content Summit in Munich mentioned earlier, systems dubbed “layer cakes” were all the rage. It doesn’t take a whole lot of imagination to figure them out. They integrate filters, rating systems and even hotlines — dedicated communications systems that link users and ISPs and sometimes the authorities — to give individual users some control over what goes into their homes and on their computers.
This integrated approach goes a long way toward finding a balance between user needs and free expression. The buzzword these days is self-regulation, and while it too is intuitively appealing, it is more ambiguous than it seems at first glance.
Who is the self? The user? The content provider? The ISP? The manufacturer? A case can be made for each.
I put my money on the user. Who else can best decide what is appropriate for his or her life and is capable of making those judgments stick? Of course, it is impossible to protect children 24 hours a day, seven days a week, but that is life as we know it and the Internet makes only a marginal difference.
Parents worried about the medium’s impact have a simple task: to educate themselves. Learn about filters and ratings systems. Surf the Net. Most important, spend time with your kids while they are surfing. See what appeals to them and why. Find ways to channel their energies into constructive pursuits. And at the end of the day, turn the computer off and remind them why cyberspace is a pale reflection of the real world. Parenting is a verb, not an adjective.