The Social Media Clarity Podcast


Sep 26, 2017

Ban the Banhammer - Episode 28

Scott and Randy discuss the (mis)use of the various forms of the "ban" tool, and provide alternative techniques.


Transcript

Randy: Ban the banhammer.

Scott: What?

Randy: Ban the banhammer.

Scott: Wait a minute. We gotta talk about this.

Randy: Welcome to the Social Media Clarity Podcast. 15 minutes of concentrated analysis and advice about social media in platform and product design.

Scott: So in this episode we're going to focus on what seems to be the moderator's tool of choice, the ban.

Randy: And how it is, most often in our experience, the wrong tool.

Scott: Yeah, it could be the right tool in the right circumstance, but it's mostly misapplied.

Randy: Yeah, if you're reaching for it first, it's probably the wrong tool, but we'll talk about that in detail. We can set this up by talking a little bit about our experiences encountering other people talking about the banhammer. Scott, you found a wonderful reference post. Do you want to tell us a little bit about it?

Scott: Sure. So, this is just an example: Steve Brock, the director of moderation services at Mzinga, wrote about the difficulties of rehabilitating online trolls. In it he talks a lot about bans: how to identify trolls, how to ban, what the different bans are, how to apply them, and whether or not they're effective. So this is a good example of how a lot of people tend to think about dealing with misbehavior in online communities.

Randy: Although we're going to be picking a little bit on Mr. Brock, he is by no means unique in most of these positions. In fact, we're going to talk a little bit about how each point has its own challenges and carries forward the error of the previous one, and how it leads you to a place that is both undesirable and expensive.

Scott: So what are the steps, Randy?

Randy: The steps are: first, you identify the troll, figure out who it is you want to take action on, someone who is doing harm to your site. Then you perma-ban them; we'll explain the different bans in a few minutes, but the idea is to kick them off the site and make their identity no longer accessible. He also suggests removing all of their content, even though that doesn't mean it was all bad. And if they return with a new account, you immediately ban that as soon as you detect it. You could try hell-banning; this is number five. He says, "But they'll find out," and he's actually right about that, and then he says, "The abuse will get worse once they figure out you're pulling tricks on them." It turns out, number six, you have to assume that they can't be reformed, so you've got to stay vigilant. You have to stay on it all the time. And for his final step he says, "Therefore, you need 24/7 coverage, and so you need to hire enough moderators to cover your site." It's our contention that this entire list, which leads to outsourced 24/7 coverage of your site and a constant battle to remove people, follows from errors starting at the very beginning of the list.

Scott: But before we get into that, let's actually define a few of the terms that have come up already. The perma-ban: it's a ban based on identity, banning the account. There's no fixed time out; that's it, you ban them, they're gone. You can ban based on their account, you can hit their IP address, or, if yours is a paid service, you can try to ban them based on their credit card so they can't start a new account. There's also a kind of nuclear option of removing all of their content: regardless of whether some of the content was actually good, you banned the person, therefore their content must go too. So, that's the perma-ban.

Randy: I'll talk a little bit about hell-banning, which I mentioned earlier. This is also known as shadow-banning, stealth-banning, or ghost-banning. It's strange: it's hiding content from the community, except from the creator of the content, the person you're hell-banning. The idea is, you'll see that you posted, but no one else will see the post. It's meant to be discouraging, or meant to just let you burn out your energy. It goes all the way back to The WELL, which had a method where people would selectively stop reading content from other people, and this led to destroyed threads where no one knew what was going on, because you could never tell who was actually reading what. Even though I know we're just making a list, I want to go ahead and shoot hell-banning in the head-

Scott: Yup.

Randy: So we don't have to talk about it much more. Because this involves a bunch of complicated technology, which is trivially defeated by anyone who is malicious and confusing to those who aren't.

Scott: Yup.

Randy: The community doesn't know what they're missing, and when someone finds out that a person they know is hell-banned and talks about it, you end up with the community talking about hell-banning instead of whatever your content is.

Scott: Right.

Randy: Scott and I are unified on this, and so are a lot of moderators like us: do not waste any time on technology that hides from your users the fact that their behavior is unacceptable.

Scott: It wastes a lot of time because it leads inevitably to two results. Steve Brock calls it out perfectly: they'll find out, and then they abuse even more and even harder because they feel like they've been cheated in some way. And the other one is that anyone who isn't hell-banned or ghost-banned gets paranoid about whether they've been ghost-banned. If any technical glitch occurs, they suddenly think that some action has been taken against them. This is not healthy for your community. You'll spend more time assuaging people's paranoia than you will building actual community and trust. It destroys trust. So those are two types of bans. There's another type of ban, if you will, and it's the time out: a temporary suspension from being able to contribute in some particular way. This breaks up in a couple of ways. You can limit somebody's permissions so they can't post, or can't reply. You can even limit their ability to log into the system, but the idea is that it's a time out, so that you can communicate with the person. Or you can degrade their service. Randy, you have some good stories about degrading service.

Randy: There are often reputation systems for detecting egregious behaviors, and here I'm talking specifically about spamming behavior. I worked for Yahoo for five years. When they would detect either a mail-spamming bot, or a bot hitting the search engine to scrape results for SEO, what they didn't do was ban IP addresses. What they did instead was build a reputation database and degrade service. What that meant was, when a request came in from a highly suspected spamming robot, they would serve it, they would just serve it very slowly. It's a kind of low-level taxing. What happens if you ban them all instead? We saw it on one of the few days Yahoo was actually down: they made a change to their interface for search, and all the spamming robots in the world that were hitting Yahoo started failing instantly. They were getting instant errors back from the web servers. This created a denial-of-service attack, because all the robots, which were never used to failing, now retried instantaneously. Hundreds of thousands of robots were suddenly sending hundreds of requests per minute.

Scott: Yeah, that's bad.

Randy: They put the interface back, because there was a kind of détente built into the degraded-service design.

Scott: So that's not the same thing as ghost-banning; it's just degrading somebody's service, because it's targeted. Spammers want to spread their spam as quickly as possible and move on to the next target, and if you slow them down, you're actually costing them money.
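
Here is a minimal sketch of the degraded-service pattern Randy and Scott describe, assuming a per-client suspicion score between 0 and 1; the names, store, and delay ceiling are hypothetical illustrations, not Yahoo's actual system:

```python
import time

# Hypothetical reputation store: higher score = more likely a spam robot.
bot_suspicion = {}  # client_ip -> float in [0.0, 1.0]

MAX_DELAY_SECONDS = 8.0  # illustrative ceiling on the tarpit delay

def handle_request(client_ip, serve_fn):
    """Serve every request, but slow suspected robots down.

    Unlike an outright ban, the robot still gets a normal response,
    so it never hard-fails and never falls into an instant-retry storm.
    """
    suspicion = bot_suspicion.get(client_ip, 0.0)
    time.sleep(suspicion * MAX_DELAY_SECONDS)  # cost scales with suspicion
    return serve_fn()
```

The design point is that a tarpitted robot still succeeds, just slowly, so it never triggers the instant-retry stampede described above; a banned one fails fast and hammers the servers.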

Randy: Spamming behavior is different from whatever trolling behavior is. The reason we say "ban the banhammer" is that cases like the ones we've outlined here miss the key point. The category error is the difference between a troll and trolling, the difference between a spammer, a person, and spamming. When we have a problem with online social contributions, it isn't people, it's behaviors. The only thing you can really evaluate is the content. It's trolling that's the problem, not trolls.

Scott: Right. It's really important, and we've talked about this in the past, and I talk about it when I give workshops: you focus on the behavior, not on the person. In social psychology, there's a thing called the fundamental attribution error, which is basically when you take a behavior and ascribe it to a person as a personality trait. So if somebody does something that violates your terms of service, say they post something that is borderline racist, they are not necessarily a racist. They are not necessarily a troll. They've done something, and that's a specific behavior that can be addressed, as opposed to simply assuming this is who they are and they'll never, ever be different. You wind up at exactly that "trolls can never be reformed" idea if you assume that their behavior is tied intrinsically to their personality. We just know that's not true.

Randy: We even know that IDs aren't people. Back to the post: the person comes back over and over with multiple IDs, so an ID-banning solution is no solution at all. But sometimes it's the reverse; sometimes there's no person. When we start talking about spamming, the spammer, the mythical person who is doing the spamming, is not reachable. He's got a hundred thousand robots doing stuff. You don't even know where he is. You don't know how to reach him. You can't reach back through those robots. It's the robots that are exhibiting the behavior, so you have to deal with the robots in that case. In the case of trolling, you have to deal with the trolling posts. What are the things that are causing a problem? What's against your terms of service or your community guidelines?

Scott: We're saying ban the banhammer. When you're reaching for it as your first tool, it's probably the wrong thing to reach for, but there are times when we do need to use this kind of tool in specific instances, and spamming is one of those instances.

Scott: Let's define it a little bit better, because a lot of people will call all kinds of things spamming, including just an off-color comment.

Scott: Spam posts have zero or even negative value to your community. They make absolutely no contribution at all; they're not even part of the discussion. There are a lot of them, or they're coming really fast, and there's not a human behind those particular posts. At that point, what we're doing is throttling the input: instead of treating it as a community problem, we're treating it as a bandwidth problem.

Randy: And bans are not my first tool for dealing with that. My first tool for dealing with that is content hiding, described at the end of my book, "Building Web Reputation Systems." In the final chapter we talk about how we enabled users on Yahoo Answers to mark items as spam, and how we started to trust them. We came up with a method by which we could trust them, and literally, within 30 seconds of a piece of spam coming up, it would be hidden from the network. "Hidden" here is kind of the opposite of hell-banning: the item disappears for everybody, and a notice is sent back to the author. This deals with the robot problem, because if the author is just a robot, the author can't contest it. It won't send back a note saying, "No, no, no, this is my real content, I got ganged up on," or something. This is why, when we turned on this mechanism on Yahoo Answers, spamming vanished, literally within two weeks. The spammers picked up, and they left and went somewhere else.
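
As a sketch of that content-hiding mechanism, assume each reporter carries an earned trust score and an item is hidden once the combined trust of its reporters crosses a threshold; the class, names, and threshold here are hypothetical, not the actual Yahoo Answers code:

```python
HIDE_THRESHOLD = 1.0  # hypothetical: combined reporter trust needed to hide

class Item:
    def __init__(self, author, body):
        self.author = author
        self.body = body
        self.hidden = False
        self.report_weight = 0.0  # running sum of reporters' trust

def report_spam(item, reporter_trust, notify_author):
    """Accumulate trust-weighted spam reports and hide at the threshold.

    The opposite of hell-banning: the item disappears for everybody,
    and the author is told exactly what happened."""
    if item.hidden:
        return
    item.report_weight += reporter_trust
    if item.report_weight >= HIDE_THRESHOLD:
        item.hidden = True
        notify_author(item.author,
                      "Your post was hidden after being reported as spam.")
```

A couple of highly trusted reporters can hide spam within seconds, while it takes many low-trust reports to do the same, which is how takedowns can happen in 30 seconds without waiting for a moderator.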

Scott: And that's because you were using the crowd to surgically remove the bad content.

Randy: Yes. So the point there is there was no banning of the user account. It wasn't necessary. The user account became inactive because it no longer could successfully post.

Scott: You didn't have to ban anybody. They abandoned their effort.

Randy: That same process is used on accounts that are more tightly tied to people, people who care about their postings. If their posts get reported through the same mechanism, not for spamming but for tastelessness or some other breaking of the rules, the same mechanism triggers: the content gets hidden, and they receive a note explaining the feedback the community gave them about what needs to change, and they can change it. They weren't banned. The problem with the ban is that when it does tie to a person, it's an ending, an invitation either to an escalation or to an ending. It should be the last thing you ever do, if you do it at all. If the first thing you do is ban someone, they can't correct the behavior, and you come off very poorly.

Scott: It's a slap in the face.

Randy: The customer is lost forever.

Scott: At Schwab Learning, I had the ability to ban people, but we never did. We dealt with spam through an escalation process. I would evaluate what came in, and I would either pull the content or hold the content, but I would always contact the person. I had different levels of contact. There was the, "Oh, you made a mistake. What you wrote looks like spam. I'd love to hear more about you." That's not an ending; I was opening up a bigger beginning: "Tell me more. Please participate more, and show me that this isn't just spam. It looks like spam, so I'm worried about it." Then there were the self-promoters. There was a gray area around solicitation in that particular community, so we'd have some people who were very well-meaning, who made their own products and wanted to promote them to other parents, and I would say, "Hey, you know, I'm really sorry, but self-promotion is not okay. If you want to talk about other things, though, and you put this on your profile, then when you talk about other things in our community, people will see your profile, and they'll see what you're trying to advertise. We're giving you that space to be able to do that." Then there was, "That's it. You're a spammer. I've pulled your content. You violated my terms of service. Please don't come back. I've canceled things." That always gave somebody a chance to give me a response back. We didn't have a huge amount of spam, but invariably, if it was that bad, nobody responded. I would usually get some kind of response to the other messages, though, anything from "Oops, I'm sorry" to "How dare you," and we took it from there. But that was a discussion.

Randy: Yeah. So what you want is beginnings, right? You want dialogues, as much as you can afford them. If you're going to pay people to moderate, they should be having conversations, not just destroying them.

Scott: Ideally. Unfortunately, a lot of moderation services aren't really set up that way. They're set up to remove content based on terms of service. It's difficult to find moderation services that you can pay to take the time to actually help foster communities. It's a shame.

Randy: It's mostly a scaling problem. If you've got user-generated content in great quantity, as I mentioned earlier, and you're going to invest in tools, don't invest in hell-banning. Invest instead in community feedback tools, so that users can tell each other how to behave and reinforce that behavior. That's one way to increase the leverage you get out of your paid moderation people, so they can spend their time on the specific cases that need their attention. If the community is keeping the new kid who shows up and doesn't know how to behave from posting a crappy question or answer on Stack Overflow, then you don't need paid moderators for that. In fact, Stack Overflow is one of the largest, richest communities with the highest-quality content of its type in the world, and it has very, very few top-level moderators. Almost all moderation tasks are done by ordinary contributors who care enough to have generated enough content, of high enough quality, on the site. If you ever want to see what a site looks like when it doesn't live with the banhammer as its first line of defense, Stack Overflow or any of the Stack Exchange sites is really interesting: they show how you're incrementally given authority as you succeed in contributing to the community.

Scott: At Schwab Learning, there was a point where I started to teach the other community members exactly what I was doing. I decided, "I'm just going to be transparent about this and start doing it in public," especially when it was the nice stuff, and I started showing people how I approached potential spammers by addressing their behavior and saying, "Hey, maybe this is a mistake. We'd like to hear more from you." The community picked up on it. I was no longer the first line of defense against spam; the community became the first line of defense. They would engage with anyone who looked like a spammer and try to talk to them and draw them out, and if they weren't able to draw them out, they would bump it up to me and say, "I think this person's actually trying to spam us."

Randy: I do consulting on social media product design, and discussions about moderation are a critical part of what I consult on, so new clients often give me administrator or moderator access to their communities so that I can see what's going on behind the scenes. At one of my clients, I was looking around for moderation information and discovered a user's profile with a field, visible to administrators, that recorded how often they'd been banned. This person had been banned six times. This is an example of the banhammer gone insane. These were perma-bans: you prevent them from participating, then apparently they could appeal and be put back on, and then they'd be banned again for similar behaviors, and so on. The banhammer was the wrong tool. In fact, every offense was the same, and it was a minor offense. Changes to the software to discourage that behavior would have been far more effective at changing it.

Scott: So we're saying ban the banhammer, and we've been hinting at things to do instead, ways to think about behavior online that won't cause you to think, "I've got to use that banhammer right away." So let's get really detailed and talk about exactly what you can do instead of using the banhammer.

Randy: Number one, start by defining the behaviors you want to encourage in your contributors and your community, and the ones you're trying to discourage. This should be the baseline for choosing the actions you take going forward.

Scott: These behaviors are not people. Yes, you have your community guidelines, but understand that if somebody violates a community guideline, you don't punish the person. You give them an opportunity to correct the behavior.

Randy: Amen. Then, based on your available resources, you can develop tools that let the community mark content and deliver feedback privately to the contributor, so that they know they should make some changes.

Scott: Giving them a chance means that you're focusing on the content they're producing. If a piece of content is clearly violating your terms of service, or is clearly being generated by a bot, or is clearly spam pointing off to somewhere else, or is illegal, then yes, you're going to want to remove the content.

Randy: If you're at scale, you need tools to find those things. Sometimes your community is small, and a personal conversation is the right choice. Other times, your community is huge, and you have to have tools that scale or you will never solve the problem. People have tried to buy the solution with human moderation alone, and they've all given up. At scale, you need help: you need tools, and if you're lucky, you can get tools that enable your community to do a lot of the basic work.

Scott: Even if you're not at scale, don't overlook the ability to enlist your community in helping you identify and correct the behaviors of people who are coming into your community. We're talking about avoiding the banhammer, which is a tool, and we're talking about all the other ways you can reduce damaging behavior in your community, and these are skills that anybody can employ, including your community. So you can teach your community the same things. Teach them to engage and try to suss out the difference: is this person actually trying to harm you, or is this person just making a misstep with their behavior? If they can't handle it, then you become the escalation process. You support your community as a community, and you can get a small amount of scale out of this even with a small community.

Randy: Very true. And you might be able to get incremental tool development to support the community. For example, if your community platform doesn't have a "report abuse" button on content, that much incremental tool development may be significantly cheaper than you'd expect if all you want to do is count the number of people who mark a thing as violating the terms of service. Stack Overflow has a tool I'm not a big fan of, but it's functional: you can spend one of your points to give someone a negative score. I don't like the math of it, but if enough negatives go in fast enough, the post is immediately rendered as hidden content. So the contributor gets community feedback immediately, and then there's an escalation process for appeal. They recently improved that initial negative-feedback pattern by changing the label from "closed" to "on hold," which invites a conversation, and by building a community practice that if you leave a negative one for anything other than the most obvious spamming behavior, you should leave a comment about how to improve the post. It's a social system that they've evolved to go along with their mechanical system. A mechanical system doesn't have to be complicated, but it provides a mechanism for social evolution.
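
The cheap counting mechanism Randy describes fits in a few lines. Here is a minimal sketch; the threshold, names, and in-memory storage are hypothetical, not any particular platform's implementation:

```python
FLAG_LIMIT = 3  # hypothetical: hide after this many distinct reporters

flags = {}  # post_id -> set of user ids who flagged the post

def flag_post(post_id, user_id, hide_post):
    """Minimal report-abuse tool: count distinct flaggers and hide the
    post once enough people agree. No reputation math required."""
    reporters = flags.setdefault(post_id, set())
    reporters.add(user_id)
    if len(reporters) >= FLAG_LIMIT:
        hide_post(post_id)
```

Tracking reporters in a set means repeated flags from the same account only count once, which is the first abuse case even this minimal tool has to handle.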

Scott: Reframing the idea of flagging away from "this is bad, it shouldn't be here" toward "this is problematic, and we want to fix it."

Randy: I consulted on discourse.org's moderation mechanism, and it does just that. When several people mark a thing as a problem, and the problem is not illegality or spam but a content problem, the content still gets hidden, but the message that goes to the user invites them to edit it, to fix it based on the feedback from the community, and if they do edit it, it can be re-posted immediately with no flags on it. So we say, "You can fix it. You can go back to square zero with this post, immediately. Give it a try." We presume that, short of the most egregious kinds of errors, the content hiding will be temporary until the problem is resolved. This is how people can learn the behavior that is expected of them in the community.
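
The hide-then-edit-to-restore loop Randy describes can be sketched by extending the flag counter above; this is an illustration of the pattern, not Discourse's actual code, and the threshold, names, and notification text are assumptions:

```python
FLAGS_TO_HIDE = 3  # hypothetical community-flag threshold

class Post:
    def __init__(self, author, body):
        self.author = author
        self.body = body
        self.hidden = False
        self.flaggers = set()  # distinct users who flagged this post

def flag(post, user_id, notify):
    """Hide the post once enough distinct people flag it, and invite
    the author to fix it rather than punishing them."""
    post.flaggers.add(user_id)
    if len(post.flaggers) >= FLAGS_TO_HIDE and not post.hidden:
        post.hidden = True
        notify(post.author,
               "The community hid your post. Edit it to address their "
               "feedback and it will be visible again immediately.")

def edit(post, new_body):
    """Editing clears the flags and restores the post: 'square zero'."""
    post.body = new_body
    post.flaggers.clear()
    post.hidden = False
```

The key design choice is that the hidden state is fully reversible by the author's own action, which turns moderation into feedback rather than an ending.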

Scott: I would like to see a lot more systems offering something like that. All too often, it's a post-and-punish model: you post it, and either it goes away and you're punished somehow, or you succeeded and it stays. What's missing from a lot of these systems is that we're not giving people enough chances, and not giving them the agency and the respect to actually change their behavior.

Randy: This leads to the kind of thinking that was in the article when it said that trolls are irredeemable. But think about what happens if you never accepted their bad stuff from the beginning: if the community says, "If you want to post here, please don't be a dick," and there's a dick button, they will learn to conform, or they will leave. You don't have to kick them out, because their content never appears. And by the way, it turns out to be the same pattern everywhere. The pattern is, "Do I post things that are only to my benefit and to the harm of others, or do I contribute to this community?" The definitions of those vary from place to place, and it is the community who can help you enforce them, as well as your moderators, so your moderators can focus on the real exceptions.

Randy: Ban the banhammer.

Scott: Ban the banhammer.

Randy: Alright, we should say goodbye though.

Scott: Oh yes. We should say goodbye. Thank you very much for listening. We hope this has been some help. So, don't reach for the banhammer.

Randy: Yes, people are not nails. Catch you later.

Randy: For links, transcripts, and more episodes, go to socialmediaclarity.net. Thanks for listening!