The Red Couch: Review of Mira Grant’s Newsflesh Trilogy, Part 1

[Edited to add: Spoiler alert towards the end of this post. No complaining.]

I want to state at the outset that I’m neither a lover nor an avid consumer of zombie media. What zombie films I’ve watched have been either to humor friends of mine or as part of a general fondness for the horror genre which can, on occasion, involve zombies. Feed was the first zombie novel I ever read.

With that disclaimer out of the way, I have to admit that I really rather enjoyed the Newsflesh trilogy. I know, I was surprised, too. Part of my indifference to the zombie subgenre arises from a plot standpoint – zombies just aren’t very compelling antagonists. They’re mindless. They’re homogeneous. They lack motivation, plans, goals, personalities – in short, zombies fail because they are, quite simply, too stupid to be interesting. Zombies are, in effect, an ambulatory disease, a background condition that renders the world more dangerous and tenuous as our protagonists stumble about trying not to die.

Attempts to move past this (the first of two major issues I have with the zombie genre) have cast zombies as the result of sinister governments or corporations, a form of biological warfare taken to the extreme. The Resident Evil franchise falls into this category – at this point, zombies are just window dressing for a plot about the evils of corporate greed, overzealous nationalism, or any of a number of other standard tropes. They represent either a deliberate weapon, or the result of human folly on a massive scale. Again, they are no more than props.

My second difficulty with the zombie genre is the presumed fragility of civilization. Zombie films are, for the most part, stories about the collapse of human order used to demonstrate a variety of morality tales, the details of which aren’t particularly necessary for this review. As such, they’re a form of apocalypse literature, and show us how our dependence on law and order, our belief in civilization, are always one step away from disintegration. Which can be interesting, if handled very carefully, but until recently rested on very tenuous logic – civilization is normal, zombies appear, civilization collapses, humans are reduced to hardscrabble existence.

Which is kind of an interesting morality tale if you want to show people how necessary civilization is (lest we devolve into cannibalism and wanton destruction), or to demonstrate that people are better off without the sort of modern civilization and governance we’ve come to take for granted – but it doesn’t make a whole lot of sense. Modern resources and infrastructure are a lot more adaptable than these doomsday prophets seem to think. Widespread animation of the dead would certainly disrupt the way we live now. It would almost definitely require a restructuring of custom and paradigm to deal with a new problem. But civilization is unlikely to simply disappear entirely, so zombie films largely felt like parody to me.

Enter Newsflesh. The trilogy is set about two and a half decades after the initial Rising, and its zombies are established to be neither the result of nefarious plotting nor the unfortunate side effect of nefarious experimentation. Instead, human efforts to improve medical technology by harnessing viruses that will defeat both cancer and the common cold interact in an unexpected way to produce a hybrid virus capable of seizing control of a mammalian nervous system and animating it for a period after death. At first, it seems like a tragic accident, one that has resulted in a restructuring of social norms – children raised after the Rising grew up in tightly secured and controlled environments. Leaving the home is always dangerous, and large gatherings of humans are rare, because if one individual suddenly dies (asthmatic attack, brain aneurysm, heart attack, &c.), everyone in the vicinity is at risk. Onset of full infection is extremely, almost cartoonishly, rapid; it happens within a matter of minutes or hours, depending on a variety of host conditions.

What makes this so pernicious, so ubiquitous, is that pretty much all mammals carry the hybrid virus that causes zombiedom. It renders humans largely immune to the common cold and to cancers of all kinds, crosses the placental barrier so that all infants are born with the virus, and resists all attempts to clean it out. So far, we have a tragic story of human ingenuity gone wrong in an unforeseeable way.

We also have the story from the perspective, initially, of Georgia Mason, an online journalist. A blogger. Her heroes include Edward R. Murrow and Hunter S. Thompson, and her obsession with “The Truth” reminds me of a much more strait-laced Spider Jerusalem, though that might just be because she’s flanked by a tech genius fiction writer and her brother, an adventurous news blogger in the tradition of Steve Irwin. Like Jerusalem, she and her compatriots follow along on the campaign trail of a Presidential hopeful; like Jerusalem, they begin to encounter a dangerous conspiracy that eventually infects Georgia with the amplified hybrid virus. We lose her train of narration near the end of the first book when her beloved brother, fellow blogger Shaun, shoots her before she loses her faculties entirely…though not before she writes a final, impassioned plea to the world to “Rise Up.”

The scenes where the bloggers encounter espionage, danger, and officious government assholes dovetail nicely with the inclusion of excerpts from their respective blogs and the description of their frantic efforts to “tell the truth” through quick-return journalism, all of which calls to mind the scene where Spider Jerusalem reports on the Angels 8 riots, perched on a rooftop with his laptop on his knees, surrounded by exotic dancers. The scene where Georgia interviews Republican competitor Tate and describes his cold, empty stare calls to mind Jerusalem’s opinion of the Smiler. All of which sets us up for a grand finale where the intrepid bloggers topple Tate and his conspiracy.

But we discover that the conspiracy goes much farther than Tate, and we begin to see a lot more possibilities for drawing parallels as the situation unfolds in Deadline.


Through Bars: Lesson Plans

I am a procrastinator. If there were a “Procrastinators Anonymous,” I would probably feel pressured to attend meetings. (And then I would flip tables and leave in a huff, because “I am powerless against my addiction” is just about the worst possible lesson to teach people trying to recover their lives. I’ll probably write a whole post about that at some point.) I have developed some techniques to remain productive with school work despite my proclivities, but ultimately it never seemed like all that big of a deal – I always managed to turn in homework on time, even if it might not have been the absolute best quality work that I could have produced.

But when the work I’m doing is not for my own benefit, but for someone else’s, this tendency to put things off becomes an issue. Now that I’m teaching a class, I find myself struggling against the procrastination habit while grading and writing lesson plans, and then feeling guilty about it.

I’ve mostly been fairly good about grading; usually I return assignments with grades and feedback by the next class period. Where I have the most trouble with procrastination is actually in my lesson plans, and I think this is the result of habituated stress responses. When I’m stressed about something, I often put it off. Even when I try to do work on such a project, I struggle to get anything done. Between distractions and the occasional overwhelming sense of panic, even starting can be a nightmare. Though I’ve improved on this front in the last few years, it’s still a fight, every time.

The first thing I did to prepare for teaching this class was assemble a basic syllabus, including a course calendar. So far, the assignment due dates have stayed the same, but what I cover in lecture/discussion bears only a passing resemblance to the original calendar. For one thing, I really had no idea how to gauge time requirements for topics. I didn’t know how long it would take to lecture, or how long discussions would last. I didn’t know what exercises I would have the students do. I just listed concepts I knew would be important to cover, about three per class period, and arranged them based on their applicability to the speeches and papers I was assigning.

So what usually winds up happening is that I will start a lesson plan (an outline/list of things to talk about or exercises to do) for the coming class at the end of class, while I’m waiting for clearance to leave the building. I sometimes add to it during the week, but mostly the night before/morning of class I am updating, rearranging, and cleaning up the previous plan. If I come up with exercises during the week, I’ll jot them down.

I think some of this is lack of experience. One of the things I’m doing right now is experimenting and building a repertoire; I warned my students early on that they were sort of guinea pigs and that I was figuring things out as we went, because I think it’s only fair that they know what they’re getting into. Some of this is also that I am very comfortable, and in fact prefer, speaking extemporaneously. Give me an outline and I can roll with it. More than that, I can rock it. Extemporaneous lecture allows me to respond more organically to questions and discussion points from students.

One thing I try to do is go beyond any assigned reading. Either I will talk about concepts or tools that are from book chapters I’m not assigning, or I will take what they’ve read and walk them through application. Sometimes what I do is give more context to what they’ve read, as with the ethics chapters, where I expanded on the readings’ coverage of theories. I always try to have some discussion (often mixed small group or partners followed by full-class) in every class so that it’s not me lecturing all the time, though I joke about liking the sound of my own voice and my love of writing on the board.

Basically, I ask myself, “What do my students need to know about this topic? What would make this class enjoyable?” and then I play with methods for reaching those outcomes. As I work with my students, I develop a better understanding of what they value out of our meetings – usually practical applications. So two classes ago, we talked about syllogisms. This last class meeting, I had them write a syllogism using the thesis for their upcoming speeches as a claim – basically coming up with the major and minor premises underlying their thesis statements. Then we used the audience analysis we had talked about to discuss what types of people would probably be easier or harder to convince using those premises as a starting point, and how we could gauge that likelihood using audience analysis tools. Then we added types of proof to the mix, discussing when and why we use certain types of evidence. During the last bit of class, we started talking about using language to strengthen arguments or impacts.

Practical application of theory made the theory more accessible and more meaningful. Using their own thesis statements made it immediate and let them see results of that application. Explaining why I was covering certain things the way I was helped them understand that my focus is not just for them to present good speeches and write good papers in this class – my goal is to equip them with tools that help them make better sense of the world and communicate their ideas in a variety of contexts. So far extemporaneous lesson plans seem to be going well, though working in this fashion is sometimes stressful for me, and I feel like I could probably put together even better classes if I was better at planning. So that’s still on my to-do list.

But my students come to class excited and walk out thoughtful. And that’s a wonderful starting point.


Through Bars: Study Hall & the Fragility of Opportunity

Every weekend, the institution where I teach holds a study hall in the activities building, where prisoners can gather to work on homework and receive assistance from fellow inmates or outside tutors. My introduction to volunteering in the prison came through attending study hall sessions, where I was able to put my rusty Spanish skills to work helping a man with homework from the introductory Spanish class taught by a fellow prisoner.

Now that I teach a class myself, the study hall period is a time for me to meet with my students and give them more individual support on assignments; this week, their first paper for the class is due, so I went in over the weekend to help those who were struggling with their essays.

The limitations of teaching in a prison environment mean that I cannot hold office hours as instructors at most universities do. Instead, I meet with my students in a large room full of tables, chairs, and other men studying all manner of topics – everything from oceanography to calculus to law to philosophy. It’s a bit chaotic, but the opportunity to discuss with a handful of students their thesis statements and the hypothetical ethical dilemma they had to come up with for their paper was valuable. Through individual feedback and group discussion, they were able to move toward stronger thesis statements, clearer understandings of the ethical frameworks I had asked them to learn, and a better grasp of how to apply, compare, and contrast those frameworks with their own ethical systems.

Another, unexpected, benefit of study hall was the chance to be a little less formal. Since my primary pedagogical training comes from working as a peer tutor in a writing center, I prefer for this sort of interaction to be comfortable, egalitarian, supportive, and informal. In addition to discussing the paper, we also talked about math, video games, music, and the process of personal growth. Though I try for our class meetings to have a balance between lecture and discussion, since I do spend much of the class standing in front of the room and writing on a whiteboard, an uneven power dynamic builds – through the use of space, if nothing else. Of course, maintaining some degree of authority is necessary to keep a class moving forward, but I valued this opportunity to sit at a small table with some of my students and just talk.

One thing I have noticed in the nearly half a year that I’ve spent volunteering in the prison is how starved many of the men are for conversation. I’ve encountered many individuals who have fascinating stories and perspectives; unlike many conversations I have on the outside, the men inside bring an intensity and focus to even casual discussions that I have rarely seen. The opportunity to openly voice opinions and explore complex ideas seems to be a rare one in prison life, and for the most part, the men I have worked with value those chances that do come their way very highly. This was most apparent in the mixed-enrollment class I took in the spring; almost half of the students in that course came from the university where I study. All of the students from outside were female, though the instructor was male.

Every class period I would participate in and observe conversations between incarcerated men and free young women. The men were respectful, polite, and very interested. The sort of polite disinterest that so often permeates conversations between classmates elsewhere was almost nonexistent. When someone spoke, whether to a conversation partner, a small group, or in a full-class discussion, everyone else involved listened carefully and responded thoughtfully. It was refreshingly positive and affirming.

This description might sound idealistic. I am very aware that not all conversations in prison proceed in this manner; having observed some interactions between prisoners and guards, or between prisoners and other prisoners, I can say that those conversations carry an entirely different tone and weight.

But honestly, I think that’s exactly the point. Most interactions within prison are fundamentally about power – who has it, who doesn’t, and how it can be manipulated from both sides of that divide. Prisoners tease each other, wheedle favors from guards and each other, and watch everyone, both prisoner and guard, with a wary eye. Even relatively positive prisoner/guard interactions carry that undertone of fundamental power disparity, and interactions between prisoners occur within an atmosphere of surveillance.

In a classroom, such concerns seem to be shifted to the back burner. The focus there is on taking advantage of the opportunity to learn, an opportunity these prisoners seem to value much more highly than their free counterparts in university systems around the country. Like the student who is the first in their family to attend college, these incarcerated students view their education as something precious. They carry a burden of desire and need that privileged college students simply don’t.

All of this has helped shift my own perspective on education. I have always felt that education was important and valuable, that it represented an opportunity for betterment and growth on intellectual, emotional, and economic levels. But because my parents told me from a young age that I would be going to college, it was also an opportunity that I took for granted. Returning to college after a hiatus spent struggling to learn how to cope with mental illness changed my perspective; because I spent years convinced I would never be able to return, I was enthusiastic and eager when I did. I also treated it like something I could lose, because for me it was something I had lost. I don’t think most students, particularly ones who are white and middle- or upper-class, really grasp that. Like me, they take educational opportunities for granted.

Incarcerated students don’t. They understand that attending class is a privilege they can easily lose – through infractions, through transfers to other institutions, through scheduling conflicts or paperwork mixups…through many avenues of change inside the prison, both within and beyond their control. To them, education is something precious, not just because of its potential for change and self-improvement, but because of its fragility, and the tenacity with which they grip it while they have it is something I admire and will always remember.


Hannibal Lecter Hour Part 1: The Man Himself

I have an unhealthy fascination with serial killers. I know this, I know it’s a bizarre thing to find fascinating, but a lot of my academic interests involve the excavation of taboo subjects (sex, sexuality, mental illness, crime & punishment, death & bereavement, etc.), so I guess it’s not that weird for me. Ever since that first meeting with Nancy Drew I’ve loved whodunits and crime novels, and eventually I grew out of Nancy Drew and the Hardy Boys and discovered Agatha Christie and Patricia Cornwell and a host of other authors. Crime novels were one of the few things my mother and I could share when I was growing up, and a large part of my love of the Scarpetta novels and the Stephanie Plum series comes from reading them after my mom finished them. All of this is just context, lead-in for the meat of the matter, which is a trend that a friend of mine suggested I explore on here after a FB post that got a lot more responses than I expected.

That trend is tied to the depiction of serial killers and psychopaths in popular media, and in my mind it links to Hannibal Lecter as the iconic urbane neat-freak murderer with serious culinary skill. This first occurred to me when watching the new series Hannibal, and has kind of stuck with me. I’ve termed it the pathologization of masculine neatness, and because of his starring role in the trope, Lecter gets his name in the series title.

When we first meet Lecter in Silence of the Lambs, Clarice walks past a gauntlet of hooting and hollering men to reach him. He’s standing in the center of a neat, spotless cell, eerily silent and smiling; in contrast to his fellow inmates, his uniform appears not only clean but pressed. He greets her politely. The other men in the nearby cells represent what we expect of imprisoned men – a descent not just into madness, but into primitive behavior. They treat Clarice like an object; it’s offensive and off-putting, but it’s not strange. Hannibal, however, treats her like a person. In a place meant to degrade the human animal and draw him closer to his basest nature, to wear him down, he retains a certain attention to detail that exemplifies rationality.

Perhaps this is what makes him so frightening. The sort of horrors that Buffalo Bill is committing, that we are told through hints and suggestions Hannibal himself has committed, are deeply disturbing…monstrous. To distance ourselves from this problem, to avoid asking if we or someone we know could be capable of such terrible things, we say that someone has to be crazy to do them. When we call them crazy, we also push them farther away from being human; we make these criminals into alien beings and we tell ourselves that their supposed evil must be clearly visible because they are, after all, inhuman monsters.

But what makes Hannibal, and those like him, so terrifying, is that they don’t appear “crazy.” They don’t fit neatly into the little boxes we make for them, where they display comfortingly familiar pathologies – throwing feces (animalistic); masturbating in public (primitive); blurring and violation of gender lines (because trans*ness is coded as insanity far too often). No, Hannibal’s neatness and his demeanor mark him as rational. And the only thing more terrifying than a monster who brutally kills people is a rational human being who brutally kills people.

Some of this also comes from contradiction – Hannibal is treated with far more care and precautions than the more obviously disturbed prisoners…yet he displays no outward signs of a deranged mind. The contradiction here keys a sense of dissonance and makes us uncomfortable. What have the guards seen that we have not? What do they know that we do not? In addition to the dissonance of the rational monster, this lack of knowledge taps into our fear of the unknown, giving us a rational yet unpredictable beast that appears all too human, all too much like us, at least on the surface.

Next time on the Hannibal Lecter Hour: we analyze the star of his own serial killer soap opera, Dexter.


Tower Troubles: The Author Authority Myth

This whole tabula rasa issue (discussed in this post) is also very closely connected to what I like to term “the myth of author authority,” or the tendency in scholarly literature to pretend that the author doesn’t exist, and therefore that whatever is on the page exists somehow devoid of any bias or personal experience possessed by the person writing it. It is the idea that by linguistic tricks we can pretend there is no human author, only words of unvarnished truth magically placed on a page.

These efforts tap into a sort of modern Jungian archetype, that of the Objective Researcher or Objective Observer, who can cross the world to live among savages, record their strange behaviors, and return to civilization to publish a book which renders primitive ceremonies intelligible to educated white folk, all without allowing himself (yes, traditionally the Objective Observer is male, for a variety of reasons) to stain any of this observation, recording, or interpretation with any but the clearest of eyes and mind.

It should come as no surprise that this myth rests firmly on a colonialist foundation. That foundation is largely white and primarily male, because it is power that allows us to believe these things about ourselves. White male academics, rich by the standards of much of the world, could get away with this approach to knowledge because they held the power to interpret phenomena. Their experiences alone could go unquestioned, because they set the standard by which experience was judged. In doing so, they consistently created an exoticized Other in their travelogues, ethnographies, and studies.

Yes, I claim that they created this Other. By uncritically filtering their observations through their own beliefs and expectations, what they produced bore as much resemblance to the lived reality of the people they studied as historical fiction does to real events. All of us, no matter how careful, ultimately have to acknowledge that our understanding of the world is imperfect because it is always subject to interpretation in a process framed by our prior experiences. That process of interpretation feels completely natural and reasonable when we are surrounded by people whose experiences are similar to our own – people of the same race, gender, class, language, and culture, a position inhabited by many academic researchers.

Does this mean that all academic research is useless or worthless? No, far from it. What it means is that there is a troubling history of appropriation and colonization in academia, one that those who conduct or use academic research need to be aware of, especially in the Humanities disciplines. The sort of work we do is fundamentally subjective in a way that the hard sciences are not. Which is not to suggest that the hard sciences are immune to this sort of problem – phrenology being a prime example of (pseudo)science being used to justify terrible beliefs and practices because it was filtered through human bias – but rather to acknowledge that the work of interpretation in humanities disciplines like anthropology, sociology, history, communication studies, and cultural studies is different from other types of research. Our data frequently involve words and stories, and it is our task to look for unifying themes and to interpret sometimes very divergent data.

One of our primary responsibilities in this task is to place our own biases and experiences under the microscope at the same time that we analyze other people’s experiences. This does several things:
- it decreases the power distance, because in doing so we acknowledge that there is nothing inherently superior or different about our own experience that means it should go unquestioned;
- it demonstrates a willingness to “show our work” so that others have a clearer view of how we reached our conclusions;
- it forces us to perform vulnerability in a way that draws us closer to our so-called research subjects by subjecting our experiences to the same scrutiny we give theirs;
- it helps us surface potential bias or conflicts of interest;
- it allows others to see where our conclusions might be influenced by our experiences;
- and it gives us practice in the ongoing project of understanding ourselves. After all, how can we expect to be able to explain others’ experiences if we can’t even adequately explain our own?

Ultimately, by treating white (frequently male, frequently middle- or upper-class) educated experience as a sort of default category with a unique access to objective reality, academics have perpetuated a lie. By refusing to question our own experiences, we have perpetuated the illusion that the experiences of people who don’t fit into these narrow categories are somehow other, lesser, deviant, deficient, alien, primitive, even subhuman. When people who are not part of this small elite group read that research, what they encounter is a systematic devaluing of their experiences, their thoughts, their reality. They see people like them dissected, subject to a penetrating gaze, over and over and over again; and what do we dissect except dead things, objects?

There are researchers who are pushing back against this myth, however, particularly as the academy grudgingly expands to include more persons of color and people of “developing” or “underdeveloped” countries. Linda Tuhiwai Smith, in particular, in her book Decolonizing Methodologies, lays out some fantastic suggestions for ways to conduct research without falling into this trap.

One of the things that makes pushing back against this difficult is fear. Fear that by rendering yourself vulnerable in this way, you also render your status as an academic and a researcher vulnerable. You risk being laughed at, or dismissed, or treated as a silly postmodernist, or not getting published. Defying the conventions of a genre can be challenging. But ultimately, working toward a more honest, more accessible, more just academy is to everyone’s benefit. I want to be an academic because I want to have a positive impact on the world. I can’t do that through a career based on the sleight-of-hand that underpins this myth of author authority.


Gamer’s Corner: Morally Repugnant Acts

This past spring, I worked on a research project investigating tabletop roleplaying games. Since it was a preliminary project, the end result of it was a question that I hope will fuel future research endeavors for me (potentially to be featured on this blog!): How do players (in or out of character personae) regulate and negotiate moral and ethical behavior at the table? I mainly bring this up because of some experiences I had in recent sessions of the Exalted game in which I am currently playing. 

Exalted is a particularly interesting game in regard to this question. For one thing, it lacks the formal alignment system of D&D and its direct derivatives. For another, the system that sort of fills a similar role is much more dynamic and complex. Instead of an alignment determined by your placement along a bivariate system (Law/Chaos, Good/Evil), each character has four Virtues (Compassion, Conviction, Temperance, and Valor), each rated from one to five. Virtues are rolled on two occasions – to reinforce your ability to perform aligned actions (for example, rolling Conviction to resist torture, or Valor to avoid running from a fight against a superior opponent), and to limit your ability to perform contrary actions. In order to execute a bound, helpless prisoner, for instance, you must fail a Compassion check. To do something that contravenes your motivations or violates an intimacy, you have to fail a Conviction check. To give in to an addiction or temptation, you have to fail a Temperance check. And to run from a fight (any fight), you have to fail a Valor check.
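For readers who have never rolled a die in anger, the Virtue mechanic is simple enough to sketch in a few lines of code. This is my own toy illustration, not the published rules: I’m assuming the common d10 dice-pool convention (a die showing 7 or higher counts as a success, with a natural 10 counting double), and the function names are made up for the example.

```python
import random

def roll_virtue(rating, rng=None):
    """Roll a dice pool equal to the Virtue rating (1-5).

    Assumed convention: each die is a d10; 7+ counts as one
    success, and a natural 10 counts as two. Returns the total
    number of successes.
    """
    rng = rng or random.Random()
    successes = 0
    for _ in range(rating):
        die = rng.randint(1, 10)
        if die == 10:
            successes += 2
        elif die >= 7:
            successes += 1
    return successes

def can_act_against_virtue(rating, rng=None):
    """A contrary action (e.g. executing a helpless prisoner,
    against Compassion) is only possible if the check fails,
    i.e. rolls zero successes."""
    return roll_virtue(rating, rng) == 0
```

The upshot: the higher your Compassion rating, the more dice you roll, and the less likely you are to be mechanically *able* to do something cruel, which is what makes the system feel more dynamic than a static alignment label.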

Moreover, the core conceit of Exalted is that you were an ordinary person until you suddenly became the recipient of godlike powers. Unfortunately, the godlike powers were tainted by the Primordials (not Titans we swear) that the first Exalted overthrew. And because these powers (referred to as exaltations) get handed off to new people each time their wielders die, receiving an exaltation can give you some serious flashbacks. By the end of the First Age, the Exalts had started going mad; many of them were egomaniacs, doing crazy things with their power and often abusing it. So all that glorious power comes with some serious baggage – you’re likely to have some seriously messed up voices in your head, which can even take control of you if you possess certain flaws. 

In the game I mentioned above, I play a Night Caste Solar – a thief. This one’s a little more Indiana Jones dungeon-delver with some education than your average pickpocket or roofwalker, but her power comes from a long line of thieves, assassins, secret police, and spies. In our second session, we inadvertently ended up freeing a collection of slaves and setting them up in a village of their own. Two sessions ago, we visited the village only to find that pirates were trying to extort tribute. We wound up fighting the pirates; we captured a couple of the crew, but their leader, unfortunately, escaped. 

When none of the other characters stepped forward to question our captives, I took it on myself to interrogate them. Neither would answer my questions with any useful information, so my character ended up torturing them by cutting off pieces of them to get them to talk. Of course, because torture isn’t nearly as effective as certain people would like you to believe, while they did talk, very little of it was useful, and my character wound up killing them. And then, because the whole thing was thoroughly disgusting, vomiting and hiding in a tree with her familiar for a few days. 

As I relate this, I can’t help but notice the dissociation that occurred as I typed up this account; in the previous paragraph, I switch from the first person (I took it on myself; my questions) to the more distant third (my character ended up torturing them). The actions of my character occur at my direction, so they are ultimately my actions; in this case, however, both I and, by extension, my character, felt so disgusted by those actions that even in this sort of account I feel the need to distance myself from what my character “did.” 

The other characters were also shaken by these events, but none of them made any effort to intervene. When my character informed them that she was done dealing with prisoners and they were in charge of the next ones, none of them argued. The only flak I caught was from one player who (questionably in or out of character) observed that I hadn’t gained any new information from the torture; all I had was dubious confirmation of information we mostly already possessed. 

By contrast, last session saw a duel between another player’s character and an NPC seeking revenge for his dead wife. Because this antagonist survived the encounter (barely), he was bound and kept on our ship as we traveled until he regained consciousness – at which point the PC he had attacked told him to “Give my regards to your wife,” shot him, and threw him overboard, all while he was still bound and helpless. The other two characters present protested his choice to execute a helpless prisoner; they exhibited far more evidence of upset at this situation than at the torture. 

I’m still thinking through the implications of this (was it that people were more energetic in the second session? does it have to do with the fact that I was trying to accomplish something beneficial? did the gender of the perpetrating players play a role?) but it does raise interesting questions. How do we, as players, make sense of situations where we portray evil behavior? What factors play into the way we do so? How do we respond to the morally ambiguous behavior of other players’ characters? Why do we let some activities go and start in-character arguments over others? How do we incorporate these reactions into the characterization of our character-personae? If one of the major attractants of role-playing games is to get to indulge in behavior that is, in the real world, usually met with legal or social rebuke, why do we become so upset when we do finally indulge?


Through Bars: Ethics Discussions

So the last two weeks in my public speaking & debate class have been our unit on ethics. I included an ethics unit for a couple of reasons – because my previous speaking classes have done so; because I thought the DOC would be more inclined to approve a course proposal for a debate class if it included such a unit; but mostly because I understand that teaching people to monitor, control, and improve not just their ability to speak publicly, but their ability to persuade others of things is a potentially problematic area of instruction when you talk about people who have been convicted of crimes.

I don’t know what the majority of my students are in prison for. For most of them, I don’t even know if this is their first time through the system or if they’re repeat offenders. I know one student is in for life, that a different student comes from a profoundly troublesome and abusive childhood, and that at least one student has a history of some form of substance abuse. What I do know is that being charming and persuasive can create a lot of opportunities to cause other people harm – we need look no farther than our own political system and the long history of abusive entertainers for evidence of this.

I also know that one of the major concerns (at least, on paper) of corrections systems has to do with the regulation not just of prisoners’ behavior, but also of their cognition. The very concept of the panopticon (those towers in prisons that guards sit in to watch prisoners) is rooted in a desire for prisoners to internalize the watchful gaze of society in order to drive them toward self-regulation. The original design for these towers called for a system where guards could watch prisoners at any time while going unobserved themselves – thus, prisoners would never know for sure whether they were being watched, and would begin to act as though they were being watched at all times, modifying their behavior accordingly. The same concept is at work behind the use of traffic cameras in Portland, Oregon; because there are so few cameras, they are rotated between multiple intersections so that drivers cannot be sure whether they will be caught.

The problem with this approach (now streamlined through electronic surveillance) is that it depends on an external imposition into internal processes. It doesn’t change the way a person thinks, not on a deep level at least, and its success depends heavily on the willingness to resort to other forms of punishment, which ultimately depend on the threat of force. Being observed may affect behavior of which we are ashamed, and it may affect behavior for which we have been punished in the past, but it doesn’t change an individual’s analysis of “good” and “bad” behavior, only “punishable” and “unpunishable” behavior.

A particular goal of mine in this unit was to respect the agency of my students. That meant never calling them bad people (directly or by implication), never suggesting that they lacked a moral or ethical code, and never being condescending to them. My approach was to acknowledge that everyone already has a personal ethical code, consciously or unconsciously, and to use the readings I assigned to familiarize them with a variety of systems for evaluating behavior, along with the vocabulary used to describe those systems. Their assignment for this unit is a reflection paper, in which they have to select an ethical paradigm they consider different from their own, apply both it and their own paradigm to a hypothetical situation, and analyze the differences and similarities. The idea, which I explained to them (because I believe that as an instructor I should have a good reason to assign work and that I should be willing and able to explain my reasoning to those I ask to complete an assignment), is for them to test their ethical codes.

We talked about establishing a series of questions to ask ourselves to determine the rightness and necessity of specific behaviors, which I illustrated by discussing some ways I have applied this approach to my own behavior. In partners or in small groups, they discussed the ethics of hypothetical communication situations I provided. Several of them struggled to apply unfamiliar ways of thinking when I asked them to evaluate situations based on specific paradigms, but they all worked at the problems, and were generally willing to admit when they had difficulties and to try to work out why.

As I was making the rounds between groups to listen in and occasionally offer advice (I'm starting to get used to reflecting questions back for students who are afraid to be wrong), I heard one student tell his groupmates, "I've never really sat and thought about this kind of thing before." And that's what I was looking for – not just to get them thinking, but to give them the space and tools to think through complex situations without being judged. The fact that they all seemed willing, even eager, to wrestle with these questions means that I'm very much looking forward to reading their essays when they turn them in.

