Extremist Groups Use Games and Social Media to Recruit Teens

From Gamergate to Ivory Tower Academicians to Parents: The Pathway to Grooming and Recruitment of Teens


“We use algorithms in many parts of our lives in order to make decisions about us, and with the internet, the ad technology, and the ways in which the internet has been commercialized, algorithms are being used in the very same way to monetize our attention.”

Dr. Michelle Ferrier, Founder, TrollBusters

INTRODUCTION

Dr. Joel Beeson, co-producer of “Raised by Wolves,” interviewed Dr. Michelle Ferrier on the recruitment of rural teens to domestic terrorism and other radicalized groups through social media and online technologies. Clips from this two-hour interview appear in the forthcoming documentary. The interview here has been edited and organized to address the broad themes of Gamergate and online recruitment, Appalachia and media deserts, and doing journalism at the intersection of multiple, competing identities. This transcript was exported on July 14, 2020. Read more about the documentary “Raised by Wolves” here.

Dr. Joel Beeson, professor, West Virginia University

Part 1: Gamergate and Recruitment of Rural Youth

Beeson: In the years of Gamergate and your founding of TrollBusters, it also coincided with this mass adoption of Minecraft and online gaming for youth around the world. And teachers and parents adopted Minecraft as a beneficial STEM activity. So hundreds of thousands of youth were mainstreamed into gaming culture, which was pretty populated with racist, misogynistic, anti-Semitic and anti-Muslim content. So can you comment on this disconnect between what was there, or maybe even explain: what was this gaming space populated with?

Dr. Michelle Ferrier, founder of TrollBusters and executive director of the Media Innovation Collaboratory. Ferrier has been conducting content and network analysis and monitoring online hate since 2006. Ferrier was interviewed by Dr. Joel Beeson, co-director of the forthcoming documentary “Raised by Wolves.”

Ferrier: Sure. Let me step back actually a little bit further than Gamergate, because while I started TrollBusters at about the time of Gamergate, as you may know, I’d been doing work in and around online spaces for many years before that. What actually preceded Minecraft as a gateway for young men into the gaming culture was Runescape, and other early programs like that, which connected young teens with each other around the globe in online forums. This was years before Minecraft and Gamergate. I was a columnist at a paper here in Florida, writing as an African-American woman about my life as a professional here in Florida and as a mother. And that’s actually when I began TrollBusters; it wasn’t called TrollBusters then, it was called Spot Hate. And my goal was to track the rise of white supremacist rhetoric online across the United States. That was 10 years before Gamergate, and I watched as my own family and sons struggled with getting online, and with the use of social spaces from MySpace to Facebook, to TikTok, and Vine, and other types of technologies.

When you talk about young teens, and particularly young teen men, I can tell the story in my own family, because one of my young children, who was about 10 at the time, was very much into these online games and Runescape. And this was the very early days, as I said, even before MySpace and some of the other social media. What I noticed during that time in Runescape was that he was learning a craft. Runescape, like Minecraft, is a game in which you take on certain talents and skills. You build your reputation in-world, selling, trading, or finding objects and going on missions, perhaps with other people, to find objects and things like that. So at the age of 10, my son became a world-renowned fletcher, somebody who creates arrows, and he would sell these in Runescape. Being tech savvy, he learned how to create these arrows, was able to sell them worldwide, and became an expert in those technologies. I think it’s important for us to understand that at the age of 10, to be recognized worldwide as a global expert is a very heady position for a young person to be in.

He went from those games and Runescape to other multiplayer online shooter games and things like that. Now, he was not allowed to play those games; this happened behind the scenes, unbeknownst to his parents, for quite a while. But again, at the age of 10, 11, 12, he was commanding large units, going on military forays online, and staying up till two, three, four o’clock in the morning to play with comrades around the world in these games.

So you have to understand the mindset of what it’s like as a young person, not only to be cultivated in what are seemingly, and can be, very innocent ways, to play a game, and then devolve into online spaces through these games that are grooming you in a particular way for leadership, for coordinated activity online, and drawing you into spaces where you are listening to Joe Rogan, where you are on subreddits, listening and watching and hearing other points of view, and perhaps being exposed to terrorists and extremists online who are trying to cultivate you into a certain mindset. So the online spaces have always been there since their development.

Gamergate really brought those activities to the forefront, because there were women who were calling out those behaviors that were happening in these spaces. So it’s been a long trajectory in terms of really looking at how our young people have been cultivated online, and the ways in which we’ve tried, through Section 230 of the Communications Decency Act, or through blocking or other kinds of technologies, to limit young people’s exposure to this kind of violent content. But we’ve had some real challenges in that area.

The algorithms and filtering mechanisms, the blocking tools and the parental tools that may be in place, may not be sufficient algorithmically for us to be able to catch all of the content in the filters. It may not be sufficient for us to catch the kind of content in human moderation. Advertisements and other kinds of recommendations may also be juxtaposed against innocent content in ways that we can’t predict. 

Dr. Michelle Ferrier, Founder, TrollBusters

Beeson: Can you talk about those challenges? 

Ferrier: So the work that I’ve done with TrollBusters has really been looking at this online space. And as I said, almost 15 years ago I began to look at and map the trajectory and the growth of online white supremacists and the spaces and places in which they were operating. One of the things that we began to look at, obviously, was the ways in which the laws, at least here in the United States, have shaped these platforms and the ways that they develop their technologies. The Communications Decency Act, when it was created, was designed to help put some legal teeth into protections for young people online. Section 230 of the CDA specifically prohibits us from being able to sue platforms, or actually fight platforms, over the content that is promoted on their platforms. And so Section 230 asks us to hold platforms harmless, that they are just the conduit for this information, and that the individuals and users whose content travels on these platforms are the ones who should be held liable for that content, not the platform itself.

What that has done, at its very base level, in the structure of the internet and how the internet has grown, is create multibillion-dollar giants who, in effect, through their algorithms and other things, do regulate the space of speech online. Whether it’s on Facebook or Twitter or Reddit or Gab, or some of the other more walled gardens that have been created online for more niche conversations, they’ve created a space where victims or targets of online harassment have no way to get redress for the harms that have come against them. And so we see platforms that have grown from national to international, global entities, that have been moderating the town hall, the town square, and the space for free speech all over the globe.

Beeson: So, I think it’s important that Section 230… One of the challenges we have is, as we wrote, understanding all of these parts, right? And trying to unpack that for parents and teachers who haven’t dug deep into it and done the research as you have.

Ferrier: Mm-hmm (affirmative).

Beeson: I think it was a real challenge, and I think one of the assumptions is that there’s someone minding the store, right? When a parent gives their child a device and maybe puts on parental controls, I think they assume that their child is not going to be exposed to extreme content, pornography, gore, violence, sexual content, and those kinds of things, because of their experience with broadcast and film media and how these platforms are presented as sort of similar. Even my students, who were studying this and had lessons on it, have a difficult time sort of understanding that. So I think about what makes it so difficult, then, for someone who is defamed online; you have people that are trying to ruin their reputation, doxx them, all the things that happened to you. What is the difficulty with dealing with this online versus, let’s say, if a newspaper published something about you?

Ferrier: Right. So online harassment in particular is really pernicious because of the speed and the anonymity of the actors online. The harassment that journalists have received really is longstanding; we can go back to the beginnings of journalism and recognize and see the ways in which journalists have been attacked. The online space, however, is one that really is dangerous for a couple of reasons. One, when somebody is under initial attack online, it’s difficult for anyone else to jump into that space to try and be a support to them. So the activity itself can be very fierce, can be very violent, can be very fast and furious. And the same kinds of game networks that developed are the same online communities that are used to recruit people, to be bad actors online, and to target particular users online. Those back channels, whether it’s Gab, or Reddit, or Facebook groups, are also used to send out messages and campaign information about how to attack targets online. Ultimately, what we see is the individual being focused on, and receiving, these very hateful messages. It could be in “public,” through tweets that are visible, but oftentimes they may also be getting emails. They may be getting direct messages. They may be getting other back-channel correspondence that doesn’t even touch the kind of violence that we’re already seeing on the surface, visible and in public.

Journalists have received videos of beheadings. They’ve received signage and posters saying we’re going to come kill you, we’re going to attack you in physical space. So when we talk about online hate in particular, and online harassment, we see a significant effect because of the mechanisms of the online space. One, as we talked about with Section 230, the platforms themselves have some immunity from the types of harms that may come out of the content that comes across their platforms.

These journalists, in turn, have become targets of disinformation and online harassment campaigns themselves. Journalists who cover information operations and investigative journalists scrutinizing those in power are particularly at risk for coordinated disinformation and harassment campaigns because of their reporting of matters that officials would rather keep quiet. Gendered disinformation and its weaponization against journalists who cover information operations has become a central aspect of how these campaigns work. Organizations that track online harassment have identified a significant increase year over year, alongside the expansion of these new reporting beats.

Courtney Radsch, report for OSCE

So while there is some moderation of the content that goes on behind the scenes, the volume of content obviously on all of these platforms is significant, and it’s global, and it’s in different languages, it’s in text, it’s in video. The algorithms and filtering mechanisms, the blocking tools and the parental tools that may be in place, may not be sufficient algorithmically for us to be able to catch all of the content in the filters. It may not be sufficient for us to catch the kind of content in human moderation. Advertisements and other kinds of recommendations may also be juxtaposed against innocent content in ways that we can’t predict. 

Therefore we really don’t have the kind of controls that I think we’d like to believe we do over how we see the internet. And the internet that we do see is very different for every single person, based on the browser they’re using; the cookies that have been gathered on that browser about their behaviors and their habits; the ad server software that’s using all of the clues of their digital footprint to analyze and put content up against what they’re seeing; and search recommendations that are drawing from prior searches, as well as other searches from people in their region, to suggest content to them. All of these are drivers and ways of gatekeeping that we have put into our technologies, but they aren’t visible to parents and others who think that there are people behind the scenes who are actually controlling and moderating the content that comes to people. It is not a cohesive system, and we’ve made it look as if we have control, when in fact we don’t.

Beeson: And why is it, then, that, for instance, a teacher or parent can be looking at their social media feed and saying, “What’s wrong with this?” while their child might have a completely different feed?

Ferrier: Right. You’re looking really at the algorithmic level, at the code level. The machinery, the technologies, have been designed in such a way that they are taking every single clue of your behaviors online: what you click on, how long you linger in those spaces, when you log on, who else you interact with or tag, what kind of content you post online. They’re using all of those pieces of information, as well as the actual content that you deposit, to make decisions about who you are, what kind of content you might want to see, and how to maximize your attention online so that they can sell things to you.

We don’t know how those algorithms are built or how they’re making their calculations. And these are the same kinds of algorithms that are not only driving our content and what we see on the web, but also driving our credit scores, or our risk pool in terms of our eligibility for certain kinds of insurance, or looking at our housing and loan information and trying to determine what our risk is as an applicant. We use algorithms in many parts of our lives in order to make decisions about us, and with the internet, the ad technology, and the ways in which the internet has been commercialized, algorithms are being used in the very same way to monetize our attention. I don’t think we have helped our students and our parents understand the ways in which the technology at the code, the content, and the conduit layers are all complicit in driving our students down a funnel towards content that is not as wholesome as perhaps we’d like it to be.
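[Editor’s note: To illustrate the kind of attention-driven decision-making Ferrier describes, here is a minimal, hypothetical sketch of a feed ranker ordered purely by predicted engagement. The signal names, weights, and threshold are invented for illustration and do not represent any platform’s actual code.]

```python
# Hypothetical sketch of an engagement-driven feed ranker (illustration only).
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    topic: str
    emotional_charge: float  # 0.0 (neutral) to 1.0 (highly charged)

@dataclass
class UserSignals:
    clicks_by_topic: dict = field(default_factory=dict)         # what the user clicks on
    dwell_seconds_by_topic: dict = field(default_factory=dict)  # how long they linger

def predicted_engagement(post: Post, user: UserSignals) -> float:
    """Score a post by how likely it is to hold this user's attention."""
    affinity = (user.clicks_by_topic.get(post.topic, 0)
                + user.dwell_seconds_by_topic.get(post.topic, 0) / 60.0)
    # Emotionally charged content is weighted up because it keeps people online longer.
    return (1.0 + affinity) * (1.0 + post.emotional_charge)

def rank_feed(posts: list, user: UserSignals) -> list:
    # The feed is ordered purely by predicted attention, not by accuracy,
    # safety, or age-appropriateness. Two users with different histories
    # see completely different feeds drawn from the same pool of posts.
    return sorted(posts, key=lambda p: predicted_engagement(p, user), reverse=True)
```

Even this toy version shows why a parent’s feed and a child’s feed can diverge so sharply: the ordering depends entirely on each user’s accumulated behavioral signals, and the most emotionally charged content rises for whoever has lingered on it before.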

Beeson: Yeah, because there’s a saying that we used to say: if it bleeds, it leads, right?

Ferrier: Right.

Beeson: So graphic, sensational stories are the ones that bring eyes to it. Why do they put the National Enquirer at the checkout stand, like with the candy? So what this is, is monetizing extreme content, and I suppose that means keeping people’s eyes on the screen. What does that mean for children or young people, for our next generation?

Ferrier: So, early on, I guess this is probably close to 20 years or so ago, there was an organization called the Coalition for Quality Children’s Media. I was one of their early judges, a parent who would work with them to receive video media, children’s games, and things like that. I would convene a group of young kids and we would demo the software or preview the video, etc. And then I would work with the young people to talk about what they felt, what they saw, what they heard, and how they felt about that content. That information was then used to go back out to parents to help them make choices about what kind of media they wanted to see: whether it was age appropriate, whether the content was reasonable, whether there were stereotypes in the content, etc. So there have been people who have been trying, on the content side, as you indicate, to give some kind of markers as to what is appropriate for particular ages. And that includes content.

But as I mentioned, when we look at this problem of online harassment and the cultivation of young people, we have to look at multiple layers. The complexity of the problem is situated around three areas. The first is the content layer, which is where we put those labels and ratings, whether it’s for movies, or TV content, etc. Those are indicators that somebody has reviewed the content and has provided some kind of a label for parents, to help them understand what’s appropriate. There’s no way we can, or have been able to, on the content layer, provide that same kind of signaling on news content, on social content, on all the types of material and artifacts that go on the web. And so you see that very recently in the challenges of both Twitter and Facebook to [Former] President Donald Trump’s tweets and the concerns about fake news, misinformation, and disinformation, and trying to label those for their audiences so that they know.

So, on the content layer, we can see we’ve already got challenges in being able to determine what is appropriate speech, what is the appropriate platform, and how we control for the view of those kinds of pieces of information when they are disaggregated and removed from their initial context and spread on social media in a variety of different ways. And that’s just on the content layer. We’re not even talking about the code or algorithmic layer, which is also making choices and decisions about that content. And we’re also not talking about the conduit layer, which is really those distribution channels and access. We know that in rural areas there’s very limited access to broadband, to cellular, etc. And so that does limit the ability of young people to get other kinds of information that they might need to counteract the information they may be getting in more niche spaces online.

PARENT VIDEO: https://vimeo.com/732079253/fa15c9691e

Beeson: Wow. Interesting. Yeah. In addition to your research on various online ecosystems and community building, you’re also of Appalachia yourself. So you have a unique perspective on these communities and their distrust of elite and mainstream media. Can you comment on the failures of media, on communities who do not see themselves in the media, and how that contributes to susceptibility to grooming in online spaces?

Ferrier: Sure. Oh gosh, I’m unpacking that one. So I’ve lived on both ends of Appalachia, both in Southeast Ohio and then down in Southwest Virginia, near the Tennessee border. And I’ve lived in both of those spaces while examining the news and information ecosystem and trying to look at where to go from here, particularly in the past 10 years, as we’ve seen the rise of social media and the use of social media to provide some immediacy, some timeliness, and also to insert some other voices into the narrative. We’ve seen the social media channels really push back against the mainstream media narratives that have cropped up around particular issues, whether it has been Hurricane Katrina and mainstream media’s representations of people of color in New Orleans, or the representation of rural America around the opioid epidemic. We have lost the capacity for news and information at the local level. And in some places we never really had it.

And the struggle I think we have right now as a profession is that we are struggling on all three levels, the code level, the content level, and the conduit level, to break through the noise of social media and misinformation that is out there and to create a place and a space where we can share news and information with our audiences. And unfortunately we really have lost a lot of that battle over the past 10 years, as news and information has been disaggregated from the brand, from the media website and platform, into discrete digital artifacts that float on the web and lose their context, their authority, their veracity. And so we have parents, readers, audience members, politicians, and others who see information on the web and have to make critical choices about whether that information is true, whether the sender is true, whether there might be bias in it. Those are very sophisticated and difficult choices to make. Even journalists have shared fake news. And so it’s not a special skill or gene or something that everybody just has. It’s a muscle that has to be worked and cultivated for people to make those choices about what content might be true, and to do the due diligence to look at other points of view and come to their own opinions.

Beeson: Yeah, I think that is the central difficulty here. No social media platform prioritizes news, because it’s in their benefit to feed people things that outrage them.

Ferrier: Right. You have to understand that the platforms themselves are designed for attention. The platforms are built based on the psychology of our brain and how it works, and so they’re designed for stickiness. They are designed to grab our attention and hold our attention and keep us online. They have studied the research about how our brains operate, and they recognize that very strong content, whether it’s very positive or very negative, draws people’s attention and causes people to share that content more; those tightly held passions are the things that inflame people and keep them online.

And so around Gamergate, we found that Facebook was actually experimenting with people’s feeds to test this brain research, and was feeding certain kinds of content into people’s feeds, either positive or negative, to determine how people would act and behave based on that. And we discovered… I say we discovered this; journalists discovered this because our feeds turned very dark, particularly journalists of color, many of whom were my friends on social media. We discovered our feeds turned very dark. And actually, I ended up shuttering my account for several months because I didn’t like the way I was feeling when I logged onto Facebook. And I began to ask a lot of my other friends on Facebook if they had noticed the turn in their feeds to darkness. Now, as you said, as journalists, we normally would look at our news values, “If it bleeds, it leads,” and other kinds of news values of what we determine is newsworthy. But this was violence and negativity beyond what I had seen in my 20-plus years online and in online spaces, and it was consistent with what others were seeing. And we recognized, and it came out about a month later, that Facebook had actually been experimenting with people’s feeds to see whether or not they could tweak the algorithms to capitalize on emotionally volatile content and people’s attention to that content. And in fact, that’s exactly what they did.

Ferrier: What we’ve seen in the past month alone is Facebook wavering on some of those borders of what they consider free speech, and beginning to remove and label some of that content. We’ve seen Twitter also try to put some constraints on that speech. But ultimately, this is going to come down to a regulatory issue of privacy that Congress is going to have to take up more fully, and really wrestle with Section 230 and the immunity that has been granted to these platforms to operate. We’ve seen, globally, a pushback against these international corporations, and other countries get a very different social media experience than we do here in the U.S. because of that. And so I think we’re going to see here in the U.S. increasing pressure on those platforms to be held liable for the harms caused on those platforms, and for the use of those platforms to foment hate against individuals and groups of people.


Beeson: Yes, I see that very clearly. I don’t know if the will to regulate is there yet. So now, then, we’re trying to help parents and teachers understand how adjacency works online. There are no functional boundaries between benign and toxic content. This content porousness is exploited by bad actors and bad actions, from trolls and bots to AI and economic engines.

Ferrier: Absolutely. And there’s the difficulty in moderating that, right? Because these groups are very sophisticated at gaming any kind of moderation or filters.

Beeson: It’s like the Boogaloo Boys now. It’s “the big igloo” at the ice house and that kind of thing. Could you talk about how it’s not that kids are literally contacted by the KKK and asked to join or be a Nazi, but that there is a process of being groomed by hearing this adjacent content?

Ferrier: Right. So what I call that is juxtaposition. When we talk about digital rhetoric and digital advocacy and how some of the tools work, juxtaposition is one of the ways of inserting yourself into the narrative. What we’ve done over time on the content layer, what we try to do as journalists, is that “objectivity”: we try to bring the perspectives of multiple sources into our content, to bring that juxtaposition and those multiple points of view into one particular space. Because of the ways the social media algorithms are designed, they are making the choices about what that juxtaposed content might look like, whether it is ads and what we’re going to try and sell you, whether it is recommendations of who you might want to follow or be aligned with online, or whether it’s recommendations about the next piece of content you might want to see or hear. Those automated functions are operating based on decisions that are baked into those algorithms. And until we have more transparency, as well as representation in the construction of those algorithms, we’re going to continue to see these types of problems in being able to filter and block this type of information.

What we’ve seen over time, and you hinted at this, is that these groups are very sophisticated rhetorically. And what I mean by that is not only rhetorically, but technologically. So we have this real stereotype of the teen boy in a basement who’s hacking away on his computer, who’s down in these subreddits talking about white supremacist issues and race wars and things like that.

But I don’t think we really recognize, as you said, the ways in which the slippery slope, the grooming, occurs online in very subtle ways. It draws people down this funnel towards this content. And it is a slow and slippery slope that uses a move from textual content to visual content. So what we’ve seen is the use of emojis, combinations of emojis, memes, visuals like TikTok dances or short videos, explainers, etc. They’re using visual content that can’t be caught by our algorithms, which aren’t sophisticated enough to read the photograph or the video and the content and tell us if this is something that should be blocked. And so these groups have very sophisticated ways of manipulating those algorithms to good effect.

For example, one of the techniques that we’ve seen used against journalists, and that would not be very visible to anyone, is the mass reporting of a journalist’s account to a platform, Twitter, Facebook, whatever it might be. The algorithms will only note that multiple people have flagged a particular piece of content, and after a certain number will take that piece of content down and then send a report to the user saying, “You have been suspended because of this content. You need to contact us to get your account reinstated.” The content that has been flagged can be completely innocuous. What we see are users who have been coordinated in other spaces to go after and target a particular person and report them. And then that journalist’s account is taken offline for 24, 48, 72 or more hours while they work with the platform and the humans to get their account reinstated.

So we’ve seen significant ways in which these groups have used their technological advantage to subvert the filters, to use technology and the very tools designed to help us deal with this kind of content: the filtering and blocking tools. And they’re many steps ahead of us in terms of our being able to design for and filter for the ways in which they’re bastardizing our language, creating their own visual language through hand signs, as I said, combinations of emojis and other code that they’re using to cultivate a language and a group of people online in and around that rhetoric. So it is a very dangerous and coordinated activity that we see happening.

And we would be remiss if we did not look, very deeply, at the coordinated ways in which white supremacist groups have used write-in campaigns and other types of fear and intimidation tactics like this before. They’ve just moved it online. This is not new. And by focusing only on content, or only on ratings, or only on filters, and not looking at the code, content, and conduit layers of this whole issue online, we are basically putting our fingers in the holes of a dam and missing the forest for the trees. We are focusing in on filtering, etc., and not recognizing that this is a coordinated activity by bad actors, including white supremacist groups, government entities, and government actors here and abroad, who are manipulating our conduits as well as the content to sow dissent between people, to create confusion, and to do exactly what they’ve done here in the United States, which is create an environment of fear and intimidation and violence, where people are afraid to speak out.
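[Editor’s note: The mass-reporting tactic Ferrier describes exploits threshold-based automation. The sketch below is a hypothetical, simplified version of such a rule, with an invented threshold and function names, showing why coordinated flagging can knock an account offline over innocuous content before any human review.]

```python
# Hypothetical sketch of a naive report-threshold moderation rule (illustration only).
REPORT_THRESHOLD = 25  # invented number: act once this many distinct users flag a post

reporters_by_post = {}     # post_id -> set of user ids who flagged it
suspended_accounts = set()

def handle_report(post_id: str, author_id: str, reporter_id: str) -> None:
    """Record a flag; suspend the post's author once the threshold is crossed."""
    reporters = reporters_by_post.setdefault(post_id, set())
    reporters.add(reporter_id)
    # The rule counts only the volume of flags, not whether the post is actually
    # abusive. A brigade coordinated in a back channel can push an innocuous post
    # over the threshold and take its author offline until a human reviews the
    # appeal, which can take days.
    if len(reporters) >= REPORT_THRESHOLD:
        suspended_accounts.add(author_id)
```

The design choice at issue is that the automated rule reacts to report volume alone, which is exactly the signal a coordinated group can manufacture; the slower human appeal process then becomes the punishment.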

Beeson: For people who don’t know about this, why is it that the online gaming world has been a sort of locus where white supremacy, anti-immigrant sentiment, misogyny, and all of those things really have gotten the groundwork that this has sprung from?

Ferrier: If you go back and look, you can see how the military has used video games as a way to cultivate talent for our armed forces. The use of video to recruit people for armed forces, for police, for military action, etc., is longstanding. And we saw, with the growth of online multiplayer games, that what the gaming world did was create a technological space for conversations that could not happen in public, right? In the back channels of these online multiplayer games, you’ve got a headset on. You’re connected and talking to people all over the world, coordinating military campaigns, developing an expertise in communication, and using distributed communication tools to get the job done. So the gaming world created a technological mechanism that allowed for the coordination of activity globally that could remain hidden. It also gave a place and a space for the young people who were using those games to be recruited in a violent way. If they’re already in a multiplayer game that’s violent, you can find and seed that space with adjacent content that’s violent, language that’s violent, “being the adult in the room,” and then use that space to groom talent. It is exactly why I think gaming has shown up as the recruitment space, really: they’ve recognized the value of visual content. They’ve recognized the power of video and visuals as a recruitment tool and have capitalized on that, wielding it both to recruit and to instill fear and intimidation in people who are trying to speak against the violence.

Beeson: You see the figures of the Alt-Right, like the Proud Boys and other folks who were involved in Gamergate, certainly emerging as these celebrities who have become the heroes of young people. Sort of the way we look at some of this stuff online, especially with teenagers, is blaming the individual, right? Well, they have bad parents, right? They don’t try to keep them from this content. And we’re in this right now, in this kind of white nationalist nation. It’s part of our culture, and we’re at a point of flattening, but with the pandemic, this sort of intersection of economic failures and systemic failures of our health care system, whenever these kinds of crises happen, it blows the cover off of this sort of notion that is promoted politically, that we are a color-blind nation.

Ferrier: Yeah.

Beeson: Yeah, it seems as though, and I forget who said this, but they said platforms have all the power without the responsibility. That seems to sum it up, to encapsulate it. One of the issues, I think, is that we need to look at who enables the system and who benefits from it, rather than thinking about, “What can a parent do to set limits for their kids?” That’s certainly important, and I spend too much time doing it; I still spend too much time.

Ferrier: I think it’s frustrating for us to put this responsibility on parents. I think this whole pandemic experiment has told us there is no way, as parents, that we can moderate where our children are in all spaces on an ongoing basis. There’s just no way we can do it. We cannot be attentive to the casual ways in which they might even see screens in a storefront, or overhear something on airport TVs, etc. We have too many places and spaces where screens have taken over; accidental contact with inappropriate content can happen. There’s just no way. So if we recognize from the start that we are behind the eight ball in trying to keep up with and manage this content, there are really two things we can do. One is to prepare parents to have these conversations with their children about the sexual, violent, and other kinds of grooming activity early on, having those age-appropriate conversations and helping our students and our children recognize it when they see it, and feel comfortable reporting it to us so that we can look at solutions at that time.

So parents really do have a very limited ability to effect change, and I think we beat up on them too much to try and create this envelope around their children, which is impossible to do. The responsibility, I think, is twofold: one part lies with the platforms and the second lies with our laws. Our current cyberstalking laws are untested. We don’t have enough case law to push back against the cyberstalking that’s happening, nor do we have the digital forensic skills to investigate these types of harms fully and get the type of support that we need. So we need, first of all, our laws to be tested. And we need law enforcement, because right now they are the agents who enforce those laws, to be educated about the places and spaces where this happens, and about the digital forensic tools that they might be able to use to find out who the perpetrators are and bring them to justice.

The Generation Zeitgeist program was developed by TrollBusters for the Rural Digital Youth Resiliency Project for young creators and digital natives. Our curriculum moves teens from users to creators, defining the digital landscape and its uses and harms. Through 12 educational posters and short video explainers, we empower young people to engage in digital spaces on their terms. Find the module here.

Right now, we have seen these operators be able to operate with impunity. They have no fear. They operate with no fear of retribution whatsoever, because we have no teeth to the cyberstalking laws that are in place. We have law enforcement that has no clue about how social media works or how to investigate these types of crimes. And they are difficult to investigate. And so we have no supports in place for the time it takes from the immediate moment of an attack to the three, four, six weeks, and sometimes months, later when the police may get around to investigating a case like this. Because there is no direct physical harm, most of the time they will not take up cyberstalking cases at all. And so they will ask you to come back to them when something has harmed you, which, if you’re receiving death threats, is not a very comforting thought, that you would come back only after you’ve been harmed.

Beeson: Do I have to wait until my kid becomes a neo-Nazi and shoots up a black church or a mosque or a synagogue?

Ferrier: Right. So let’s take that example. Say you’re the parent of somebody, and you suspect that your child has devolved into this content. What can you do? One, if you don’t have control over your child’s accounts, etc., and you can’t collect the evidence yourself to show that this content is juxtaposed against their game content, or that there are people online, etc., you have no evidence. And so where do you go? Do you go to the police? How do you share that with them? Is this a mental health issue? Do you take your child to a counselor and talk about some of what they’ve seen online? That might be an avenue. But right now, those are really the only two avenues you have as a parent, because the school isn’t going to help you in dealing with this, unless maybe it involves a child who’s in school with your child. There are very few places that can deal with online harassment of this type and help walk through some solutions with people.

And that’s part of why TrollBusters has continued, because there’s no continuity of care. What happens once you’re initially attacked? The week after, the month after? What happens to your reputation that’s been destroyed online? What happens when the activity is consistent and people continue to send you these things? How do you do your work as a journalist? So there are implications for us as journalists, for the profession and the work that we do. But as for children, I think we need to recognize that whether we’re talking about children or adults, the online environment is going to become a much more difficult one for us to suss out reality in. We’re already being told to discount what we see on video in front of us. So we are always going to be in a position of having to critically think about and assess the veracity of what we see in these online spaces. That means we have to train everybody, not just children but adults as well, and recognize that we may not know what truth is.

Beeson: Yeah. One last question. Why is it for people… What I heard is that misogyny is often the glue that unites all of these extremist groups. Can you comment on that? 

Ferrier: Misogyny is at its core, because if you look at our country in particular, look at Congress, look at our presidential races, look at how women politicians are treated online versus in other spaces, you will see some significant differences in how they are treated, both online by the populace as well as by the pundits themselves. There is a deep misogyny online, and it is a backlash, really, to the women who went online and were using these original technologies, the internet, the World Wide Web, social media, etc., as spaces and safe spaces. So it was built: we thought this was a place where we might be safe, and that by using anonymity and other tools we would be able to create those safe spaces. As I said, technology is somewhat neutral; it can be used for good or evil, so any technology can turn and be weaponized. But we saw women adopt these spaces and places to have freedom of expression, and we saw the backlash by Gamergate and others to women moving into these spaces to reclaim some power and the power of speech. So I do believe misogyny is at the base of this, and that women have been using online spaces to collaborate and gain power. What we’re seeing is a real information war online, wrestling over, really, the issue of power and women, because it’s important worldwide, really, to continue to subjugate women. We’re going to see that play out in these online spaces in very violent and sustained ways. So I do believe there is a core of misogyny. But ultimately, this is a question of power and who has it. And as you mentioned earlier, who benefits from the system being built the way it is? I believe that it’s set up to benefit the 10%, and the 90% of us are fodder for the machinery to profit off of our pain on these platforms.

So in order for us to really move forward, we have to make some choices. We can make choices as individuals about whether we’re going to use these tools or not, whether to disclose information on a minute-by-minute, tweet-by-tweet basis, and try to moderate our own digital footprint. But ultimately, our democracy is at risk, and these platforms are not as egalitarian and neutral as they would have you believe. We will have to develop the legal muscle to ensure that there are consequences for the types of harms that are happening on these social media platforms.

Can you tell I’m angry? I was rage-tweeting last night on Twitter. Totally different issue. We have no clue. I was railing online yesterday about sexual harassment, and the sexual harassment of students particularly, and I was describing a particular story of cyberstalking that happened when I was a dean at Florida A&M University. We have no clue. We’ve not done a good job in physical space of dealing with sexual assault. We have not dealt with the online environment at all in terms of its effects on our students and how we handle the investigation, support, and resolution of those types of cases. And so we are woefully unprepared for what we’re seeing happening online right now, unfortunately.

Beeson: Okay. Is there anything else you’d like to say? 

Ferrier: I’m deep-sighing over here, because I’m struggling to think: is there a solution? There is no magic bullet. There is no one-size-fits-all solution for families, for communities, for the United States as a country, or even for a platform. I think what’s critical is to bring folks together around this conversation and help people understand the implications and the ripple effects of their work. I’m trying to share with photojournalists, for example, to help them understand that there is another side, and that there may be some ways in which they can do things differently that would serve their journalistic purposes as well as do no harm.

So ultimately, I feel like with this online grooming, raising awareness and helping people understand what it is when they see it, feeling comfortable with being able to report it, and having agents, whether it’s researchers, service providers, human rights defenders, or others, there to catch them if they are in a place where they’ve been targeted, can help. Those agents can help assess what people need, whether it’s technological support or emotional support, etc. And then they can help put them on a pathway to learning, and basically open up their filters, whether in physical or online spaces, to be curious and learn about different viewpoints and make up their own minds about what they’re seeing in the world.

Dr. Beeson is a professor at West Virginia University and co-director of the forthcoming documentary “Raised by Wolves.”

Dr. Michelle Ferrier is the founder of TrollBusters and the executive director of the Media Innovation Collaboratory. She has been researching and creating in online spaces since before there was a World Wide Web.
