Tuesday, June 7, 2016

You Are Listening To Los Santos

I’m sure I don’t need to explain the appeal of aimless, nighttime driving to anyone with a reasonable amount of wanderlust, a healthy number of anxieties, and access to a car.
Likewise, I’m sure I don’t need to explain the way that experiences had in a virtual gameworld can create emotional resonances and impart significant meaning in the place of missing “real-world” experiences (or, maybe, in spite of existing-but-less-than-satisfying “real-world” experiences) to anyone who’s ever been deeply engrossed in a great video game.
Here’s the part where I bewilder myself by arguing that Grand Theft Auto: San Andreas is a great video game. And five or so years since I last played it, no less.
 A game can be great in a lot of different ways, for sure, but here’s one.
For most of my time in college, I rode out the rolling cascade of existential panics, romantic panics, financial panics, pragmatic panics, and artistic panics in my car. Whether it was driving to the woods, driving to another city, or just driving in a circle around town until the gas gauge slipped dangerously close to the point from which the cash in my back pocket would be unable to revive it sufficiently until the next payday, my car was my home-away-from-home, or maybe just my home.
I first played San Andreas after I'd moved to Washington for grad school. A different time, a different place. A different car. A car that chose to be reliable on its own terms. A little scary in a new place where getting stuck between towns at night might mean not seeing another car on the road until the next afternoon, or the next next afternoon. Night driving was (mostly) out of the equation. Sort of.
I got a huge kick out of Grand Theft Auto III, the first of the “modern” GTAs*. I loved the neon and narrative of Vice City. I spent a lot of time in both games doing the stereotypical GTA things. Wreaking havoc. Trying to set up daisy chains of exploding cars. Trying to outrun the tanks and helicopters for as long as possible. Ignoring the story missions. Driving the ambulance (for some damn reason, I loved driving the ambulance missions). Listening to the radio, cruising the interstate. It was all good. Then San Andreas hit, and it was so fucking huge. Laughable now, sure, in the face of many AAA games’ open worlds, but back then it seemed enormous. And it wasn’t just huge; there was so much space.
Coming from the Ohio suburbs, living on the Palouse was strange at first. Surrounded by farmlands and rolling plains, the town just…ended at a certain point. At night, beyond the lights at the edge of Pullman’s limits, there was just darkness and the howls of coyotes. During one of my first days on the WSU campus, I got up high enough on a hill to see out across the campus to where the town ended and Nothingness (the wheat fields) began, and beyond them…space. I had a moment of vertigo and had to sit down with my head between my knees to catch my breath. It was so disorienting I tried to avoid raising my eyes to the horizon at all for days afterward.
My reaction to San Andreas’s space was the complete opposite. The (virtual) world was still bounded by rules, the screen, memory restrictions, and somehow it felt more grounded. Despite there being a lot of things to do in San Andreas, the geographical areas in its world that held a traditional game-y meaning, its places, seemed dwarfed and outnumbered by the spaces in the game that seemed to exist just for you to pass through them, to create distance and time between the “meaningful” areas. So you see immediately what I did, of course. I spent almost all my time in the empty spaces. Cruising along the highways and dirtbike trails (but highways especially) that served as liminal spaces between the three urban hubs that housed most of the game’s ludic content, cranking the expansive soundtrack** through my car’s radio, I unwittingly and perhaps autoimmunologically captured the experience of night driving. And I don’t mean that in a making-the-best-of-a-bad-situation way; I mean it literally. Memories of cruising up and down the highways of San Andreas at night (night both in-game and outside my apartment, simultaneously) are, upon recollection, qualitatively indistinguishable from a lot of the cruising I did around Ohio in my younger days.
Once during my time in Pullman, I flew down to San Francisco for a conference and rented a car for the weekend. Stressed out and overcaffeinated, rocking around in my too-small hotel room at 1am, I decided to risk taking the rental out around the city/island with nothing but an AAA atlas on the passenger seat to guide my way, because a long time ago we didn’t all have GPS and as such were subject to the whims of our more romantic yearnings without a robot to argue us out of them. I ended up on some damn interstate-y road almost immediately, and even though it was a big city, that late at night the roads were nearly empty because in my experience, San Francisco is a city that has the grace to chill the fuck out after about 11pm or so. Anyway, I'm driving this road, lit only occasionally by a traffic light or the headlights of a passing car, CD player cranked, zoning out and letting anxiety and the city bleed away into the space behind me, salt smell from the sea blowing in through my open windows…and amidst this totally sublime experience, my first honest, coherent thought was “Holy shit, I’m in San Fierro!” San Fierro, of course, being the name of the virtual San Francisco from Grand Theft Auto: San Andreas.
I’m still not sure if there’s poetry in gameplay, but there certainly can be in gameworlds.
[Embedded video: one of my favorite cruising songs from the San Andreas soundtrack; it has a droning quality that suits night driving.]
* I put “modern” in quotes because even though this is true in some genre-historical sense, I was playing GTA III when the kids who rave about the series these days hadn’t yet mastered pooping outside of their pants.
** The soundtrack in San Andreas was also an order of magnitude larger than that of the previous games. Okay, not literally. But, this soundtrack was finally large enough and diverse enough, I think, that it stopped the in-game radio from being a novel-but-repetitive virtual-radio gimmick and allowed it instead to function as a part of a virtual world that was for the first time trying (and for the first time, often succeeding) to be a replacement for the real thing rather than just an approximation of it.

Wednesday, December 9, 2015

On Grad School And Mental Health

I originally wrote this post as a response to this article.

This is a great article, and an important one, but I would add something. You shouldn’t in any way take this to imply that I think anyone who’s suffering serious psychological stress because of a doctoral program should just “toughen up,” because that’s not what I mean. But, I think there’s something to be said, in addition to the article’s recommendations, for preparing yourself for the reality of what a doctoral program is going to be like before jumping in because “Yay, free tuition!” or “It’d be cool to be a doctor!” or because you’re not sure what to do with your life, since you’ve always been in school and don’t know where to go next.

I’ve experienced a lot of the stresses the article mentions and have known many, many other people who have, but I also remember from my own experience in grad school that there were a significant number of grad students in and around my program who had begun their doctoral program simply because teaching for their tuition seemed like a good financial deal and they were at a point in their 20s when they weren’t quite sure what else to do with themselves. Four years of relative financial stability sounded great! Hell, that’s why I accepted a position in my M.A. program, so I get it. But you can get away with that with an M.A. program in some cases. I did. Less so with a doctoral program.
I was aware from the get-go how difficult my doctoral program was going to be. I came in with a strong idea of what I wanted to study and what my dissertation was going to be about. I knew (almost) from day one who I wanted on my dissertation committee. I was prepared to put in godawful hours for four years and had accepted that I would not be likely to even get a decent-paying job at the end. It was still really hard. Many people came in with absolutely none of those things figured out. I can’t imagine how hard it was for them. Many of those people never finished, and suffered needlessly (it seemed to me) in the process.

So, a few things. First, if you want to take on a doctoral program for the value of the knowledge and skills you’ll gain and the experiences you’ll have while completing it, then go for it. Don’t sign up for the sake of some assumed job you hope to land later and make that your motivation. In the kind of job market we’re facing now, you’ll almost certainly be disappointed. I had a stressful and ultimately great time in my program, but remained realistic throughout about my likelihood of getting a tenure-track job or even an adjunct job after getting the degree. It made a huge difference, I think, in how stressed I was on a daily basis for those four years. Accepting that what I was doing then might be the extent to which I ever progressed in the field helped me appreciate what I was working on at the time and kept me from worrying about the future overmuch. In short, if a doctoral program is just a means to an end for you, you should probably avoid it.

I was lucky enough to get a tenure-track job after two years of searching, and let me tell you it does not get any easier. Worrying about committee service, teaching an absurd load of classes, playing department politics, staying current in your field and giving presentations, seeking out opportunities for initiatives that will separate you from the rest of the pack come promotion time…well, let’s just say that I’ve worked harder in the last two years at this job than I ever did as a graduate student. It’s only because of the mental structures I built as a grad student, the ones that helped me learn to minimize stress and manage tasks reliably, that I can function in this job without being miserable. Imagining a future tenure-track position as a relatively relaxing paradise is deluding yourself; if the doctoral program is more stressful than fulfilling, the “real” job will be as well. To use an appropriate analogy, if you were in a psychologically abusive relationship with another person, you wouldn’t expect it to suddenly become fulfilling and supportive when you reached your fourth anniversary, would you? Why would this be any different? It’s not.

Which leads me to my last point: if you are struggling in the midst of a program, seriously consider getting out. There’s no shame in moving on to something else if it isn’t working out for you. I was able to get through because of my aforementioned preparation and a certain degree of mental flexibility. If I hadn’t been allowed to do things on my own terms, if I hadn’t had a large amount of control over my schedule, over my course content, and over the content of my research and the makeup of my committee; if I had been paid less; if I hadn’t had a largely supportive cohort of friends and a lot of great mentors, I would have quit. It wouldn’t have been worth it. You need to know where your threshold is and decide ahead of time what you’re going to do if you’re driven past it, not after you’re already at your wits’ end.

Simply put: know what you’re getting into ahead of time and don’t be afraid to back out if it’s not working for you. Certainly, all of the things recommended in the article are useful suggestions, but you can also better avoid falling into a psychologically abusive cycle in the first place by honestly confronting the reality of doctoral programs rather than embracing the romanticized and ultimately destructive myth of “the scholarly life” that we’ve collectively inflated for so long. Nobody does this on purpose, of course, but it’s an easy thing to do without realizing it and it is at least partially to blame for the huge gulf between expectations and reality that leads many grad students down the dark paths described in the original article.

Wednesday, November 25, 2015

On Living In Places Nobody Likes

I’ve been wanting to write a post like this for a while, in part from a desire to constructively critique what I see as an unhelpful attitude in hopes of making it better, but also in part from a desire to, if I’m being honest, just straight-up complain about something that bugs me. Hey, this is the internet, after all.

In Wanderlust: A History of Walking, Rebecca Solnit writes that “When you give yourself to places, they give you yourself back.” I haven’t lived in a lot of places in my life, but I’ve moved around more than most people I know well, which is to say that within the context of my friends and family, at least, I see myself as a bit of a vagabond. And Solnit’s is probably the best single-sentence explanation of what I’ve learned through my relocations about the relationship between people and places.

I like it because it captures both the good and the bad. Like any relationship, if you give very little to a place, you’re likely to get little in return. If you give a lot in the form of negativity (distrust, discontent, etc.), you’re likely to get those feelings echoed back to you. On the other hand, the more willing you are to give a place a chance, to assume the best (or at least hope for the best), the more likely that place will be to give you a chance to find a meaningful niche within it.
Why bring this up? Well, the last few places I’ve lived are places that are seen as being...umm...not ideal by many of their residents. So I’ve had a lot of time over the last decade to both a) listen to other locals talk about how much the place where I live sucks, and b) think about why, if said place’s sucking is so apparently obvious, I still enjoy living there.

Lindsey and I have talked about this a lot, especially when we lived in Pullman together. In my eyes, Pullman is a pretty typical college town that commits the Apparently Unforgivable Sin of not existing within a half-hour’s drive of a thriving metropolitan area. In a country where gentrification, suburban sprawl, and exurban sprawl have made it increasingly difficult to find places with legitimate local character, apparently what a charming city nestled within the geographically-fascinating Palouse and featuring a functional downtown needs is to be closer to a mall. Go figure.
Anyway, what Lindsey said once during one of these conversations has always stuck with me. I’m paraphrasing, but it was something along the lines of “Pullman is a great place, you just have to make your own fun.” I think that’s been true of most places I’ve lived in my life, and I think that it’s just as much an explanation of how and why I’ve come to appreciate each of those places in their own ways as it is an explanation of why others find it easy to hate them.

Pretty much every time someone complains about small-town life, the complaint ends with “Well, if it was just more like Chicago...” or “I used to live in Portland before I came here, and...”. I’m generalizing, obviously, but there’s typically an implication that if Small Town X was more like Big City Y, it wouldn’t have anything wrong with it. Now I admit to being in the apparent minority of people who enjoy small town life over big city life, so I’m biased, but this implication seems to be built on shaky foundations. Typically, it’s based on “culture”: the small town has no culture, and the big city does. If there was more “culture” in the small town, it would be better. I can appreciate this line of thought, to a degree. I like live music. I like the arts. I like seeing these things, and not having to travel five hours to do so. But more often than not, “having no culture” really seems to boil down to “not having places where I can go to spend lots of money.”

Complaints about there not being a fancy restaurant in town seem to be less about the quality of the food and more about not being able to publicly drop a hundred bucks on a meal every Friday night. Complaints about there not being a cocktail lounge in town are less about the cocktails and more about not being able to be seen at a cocktail lounge. We’re all working within the cultural assumption that to be successful is to be able to have (and be able to spend money on) particular types of experiences, and when our living situation doesn’t allow for those types of experiences, we get nervous. In the absence of the established narrative, it’s hard to demonstrate that we’re successful members of society in the ways that we’ve been taught to.

That probably sounds a little snarky, so I should say that I don’t mean this point as an attack on any particular person or people, but instead as a general observation: complaints about small town life by people who are accustomed (or want to be accustomed) to big city life often revolve around the lack of “culture,” and the shortage of options for an “art scene” or a “nightlife,” both of which are tied pretty inextricably to conspicuous consumption, whether particular individuals recognize that connection consciously or not. Generally speaking, what you’re really asking for when you’re asking for “more culture” is more ways to spend more money, and more ways to be seen spending that money. As much as I love going to concerts and drinking martinis, living in small towns has really helped me appreciate how just as much fun can be had at a much lower price point, with the only real loss being that you might not feel quite as cool posting on Facebook about the night you spent marathoning Full House episodes and eating cheap pizza as you might uploading photos of eating architecturally-unsound hors d’oeuvres at a $50-a-plate Greek restaurant. But who cares?

Well, you probably do, at least a little bit. I do, at least a little bit. There’s the rub.

And this is where it all links back to Lindsey’s point. Cities, and other large urban areas that have the economic infrastructure for it, thrive on letting you trade your money for a sense of identity. They tell you what makes you a successful, happy member of their community, and that usually (though not always) happens to involve spending lots of money on stuff. I’ve spent a few weeks in Portland. I’ve spent a few weeks in Seattle. I’ve spent a few weeks in Chicago. I’ve spent a lot of time in Cleveland. And I could easily tell you how people in those cities behave. I could tell you where you get your donuts in Portland. Where to get a drink in Seattle. Where to get pizza in Chicago. I’d be generalizing, sure, but that’s sort of the point: there’s a template for these places, and if you’ve been there even a few times, you’re familiar with at least some of it. If you asked me what a Chicagoan does, I’d have a ready answer, despite never having lived there. If you asked me what someone from Klamath Falls does, I’d have no idea what to tell you, and I’ve lived here for almost three years. We...go look at the falls, I guess?
Cities tell you who to be. Do you have to be that person? Of course not, but having a prefabricated sense of place provided for you can sure be nice. And I think that is, in many cases, what people are upset about when they decide that they hate a place like Pullman or Klamath Falls: take that ready-made identity away, and people have to actually work to find their place in a community; they actually have to think about building an identity. Hell, they might even have to do something uncomfortable to find meaning, like going to moon rock bowling night with a weird coworker or talking to a Republican. The horror!

Despite what my above italics might imply, I totally understand the allure of having all of these things sorted out for you by a place; however, I strongly believe that it’s actually a very valuable experience to figure these sorts of things out on your own. Like with most things in life, the journey is just as important as the destination, if not more so. Figuring out who you are in the context of a new place seems much more beneficial and self-edifying to me than just plopping down in a new place that’s full of opportunities to do things that you would have done anyway, in the same way you would have done them anyway, like shopping and eating at the same old chains. Two of the things that really helped me grow into Klamath Falls when I first moved here were “shopping around” for a Mexican restaurant I liked by trying a bunch of the local places and creating new running routes while learning about the geography of my neighborhood and the nearby parks in the process. There were some duds in both cases (turns out taco trucks generally don’t have vegetarian tacos), but overall it was way more fun and more productive than just having my old patterns reinforced.

This is the cycle that Solnit describes in Wanderlust in action. In a place where I wouldn’t have had to work to find my own fun, so to speak, I would never have bothered. Giving nothing, I would have received little, if anything, in return. The city itself pushed me, in this instance, and I’m glad it did. But it also took some effort on my part, of not just throwing up my hands and sighing and thinking “No taco trucks with veggie options? What a backwards-ass shithole!” which is the equivalent of many people’s reactions to places like Pullman and Klamath Falls.

Anyway, I’m repeating myself at this point. So let’s wrap this up. There is one other thing that seems to frequently factor into people’s complaints about particular places that I think is worth addressing here, even if it’s sort of a minefield: politics. POLITICS!
Politics.

Everywhere I’ve lived since I started grad school (eastern Washington and southern Oregon) has been predominantly conservative politically. And yet, because of what I do for a job and because of my own personality, beliefs, political leanings, etc., most of my friends and acquaintances are pretty liberal-minded. Not all, by far (and that’s part of the point of this...point), but most. So the other predominant complaint I’ve often heard about these small towns is that they’re filled with intolerant, unempathetic conservative shitheads (I’m paraphrasing). A few problems with this:
  1. While it’s demographically true that most people in, say, Klamath Falls lean toward the conservative end of the political spectrum, I’ve seen little evidence that the percentage of that majority that are intolerant, unempathetic shitheads significantly exceeds the percentage of people who are shitheads pretty much anywhere else. And in places where I’ve lived where the liberals are the majority, there are just as many intolerant, unempathetic, liberal shitheads grumbling about the conservatives. Which brings me to my second point...
  2. Politics in this country these days has become a team sport in the worst way. My personal beliefs would probably be categorized by most as swinging hard to the left, but the truth is, I try to see where everyone is coming from. When I think back to getting together with a bunch of liberal-minded grad students at a bar and griping about how the conservatives are ruining the country (which I’ve done many a time), now I realize two things. First, that when you complain about someone else being intolerant and unempathetic based solely on their professed political team, you yourself are being pretty obviously intolerant and unempathetic. Second, that these sorts of judgmental get-togethers absolutely affirm the stereotypes conservatives often hold about “ivory tower” liberals...and reinforcing that particular stereotype is not useful in any way.
  3. We all have to live together locally before we can function well globally (or, in this case, nationally or even state-ly). As I hinted at above,  I have a lot of friends and family who lean more toward the conservative side of the spectrum. But they’re nice people, so who cares? There are a lot of people I’ve met who share my political leanings but they aren’t nice people, and so I’m not particularly inclined to associate with them. To expand this out a bit, in terms of on-the-street, day-to-day interaction with the community, Klamath Falls is far and away the friendliest and most welcoming place I’ve ever lived. I’m aware that many of the people I meet and chat with on the street might (gasp!) disagree with me on the legality of abortion, or might (oh no!) be from the other side of the tracks, or might even (my god!) currently be homeless, but they’re all pretty goddamn friendly, and I like to think that that counts for something.
This is all to say that while I certainly understand the desire to live in a community of like-minded individuals (and believe that that’s an imperative if you’re from an oppressed group who is more likely to be treated poorly in a politically unfriendly environment), in my experience I’ve found living among difference to be challenging and often instructive. Sure, it would be nice to live somewhere with more vegetarian options, or to live in a place where open mic nights and poetry readings were the norm instead of the exception, but there’s also a value to living among people who don’t share your values. Those people aren’t going anywhere, ever. So what do we gain by continually trying to create “communities” that make sure “Us” stays separate from “Them”? Evidence shows overwhelmingly that the best way to understand and accept someone of a different race, orientation, nationality, or political perspective is to spend time getting to know them. And yet being in a community that forces us to do this is seen by so many as undesirable. I certainly understand this knee-jerk reaction, but there are already so many ways these days to turn our little corners of the world into echo chambers where we’re the scrappy underdog who sees through the bullshit to the truth. This is especially possible online, and I can’t help but suspect that social media has something to do with our increasing distaste for having to live near people who are unlike us in the physical world. Extending this echo-chamber mentality to our real-life neighborhoods, to our towns, to our cities, is disastrous to our sense of community. And yet it seems to be what most people want.

So, I live in a place that is not much like me, but I work at putting myself into it. And the self that I get back is a self that’s a little more willing to accept difference every day. I try to understand, for a small example, that a taco truck not selling vegetarian tacos isn’t a political statement or a personal assault on me, but just an acknowledgment of the local demographic. I try to understand this and just find a different place to eat instead of posting a negative Yelp! review and patting myself on the back for scoring another point against The Conservatives in “the culture war,” whatever that is.

Despite globalization, despite the internet, I believe that we live locally first. And, locally, people are people first, and they are ideologies,  moralities, and politics second. There are some good people here, and I’m trying to learn how to be one of them. That’s all.

Sunday, November 22, 2015

Why It's Hard To Be Excited About The Nationally-Ranked Cougs

I’ve been meaning to write a post on this issue for a long time. I’m not feeling especially articulate at this exact moment, and I haven’t learned anything new recently that’s changed my understanding of it, but reading the linked article above coupled with watching the growing enthusiasm of many of my Facebook friends as WSU’s football team climbs up in the rankings made me feel like maybe it was time to finally get some of my thoughts down.

As someone who has enjoyed watching college sports (especially football) for thirty-four years and someone who has taught college students for over a decade, it’s become increasingly clear to me over time that college sports (especially football) function mostly as a business designed to benefit a select few financially while hurting universities, university students, university faculty, the cities universities are located in, and most of all, “student-athletes” in many different ways. College sports (especially football) do this by providing a product that’s really goddamn fun to watch, a product that’s created on the backs of extremely cheap labor. Actually, “extremely cheap” doesn’t really capture it, since student-athletes are forbidden by the very people who could, in theory, pay them to receive any sort of recompense for their performance. There are a lot of dimensions to this problem, and it exists in a lot of sports, but in the interest of not turning this into a mandatory TL;DR, I’m going to focus on one sport - football - and three big problems I see with the system that student-athletes play that sport within.

For one thing, the effect that college football has on universities financially is ten different ways of messed up. Perhaps the most insidious dimension of this is the one the early part of the article focuses on: funding athletics and new athletic facilities using student fees, and particularly student fees that non-athlete students are charged (at least somewhat) surreptitiously when tuition across the country is already skyrocketing, and students are already undertaking decades’ worth of loans just to attend average-quality schools that are better known for their football teams than their academic programs. Now, it seems, the universities themselves are in on this debt-juggling tightrope act, gambling their future financial stability (i.e., the value of their students’ tuition) on the hope that upgrades to their sports programs will successfully fund future, hopefully endless growth in enrollment and alumni donations. My thoughts on pretty much every university’s willingness to set “Infinite Growth, Forever” as its only “strategy” are a post for another day, but let’s just say that it’s pretty much impossible for any university to achieve this, and that universities that bank on their “plan” for infinite growth working based on money they might pull in some day from a football team that doesn’t exist yet (or a 70,000-person stadium that doesn’t exist yet, etc.) are just digging themselves a deeper hole to fall into when they do inevitably fall.

The physical toll that football in particular takes on student-athletes has been well documented. Why we wring our hands (if we still do) over concussions and the resulting mental illness and suicides in the NFL but don’t do the same for college players, who tend to be much more impressionable, less informed, and less capable of building a multi-million-dollar nest egg before injury drives them off the field for good, is beyond me. But there it is.

But the facet of this issue that’s closest to my heart is the fiction that we’re talking about “student-athletes” rather than “athlete-students,” or even just “athletes” when we talk about college football players. These kids are brought to these schools, often on full-ride scholarships, not to learn, not to get an education so that they can make their way in the world after they almost inevitably fail to graduate up to the NFL, but to play football for the school more or less for free so the school can make more money and gain more prestige in the eyes of possible donors. Granted, I only have experience at a few universities, but from what I’ve seen, the notion that student-athletes are supposed to, or are even able to, take their education seriously while juggling it with football is a joke.

The linked articles in the previous paragraph affirm what I saw again and again in my years as an instructor at WSU: students unable to pursue the education that supposedly comes part-and-parcel with their coming to the school to play football because football ends up overshadowing everything else. It’s especially heartbreaking because many of these students are being recruited from foreign countries and/or low-SES situations to play, and in many of these cases, getting a free education is ultimately more important to them than playing football. Unfortunately, it seems, you get what you pay for.

This was brought home to me in particular one summer when I taught a class almost entirely full of incoming students who were also undergoing their first summer of training and practice with the WSU football team. Many of these kids had never dreamed of getting to go to college except maybe on the back of a football scholarship, and, as you might imagine, they were just as excited about attending college as they were unprepared for it. Their writing skills were atrocious, and by and large they needed a lot of remedial help with “simple” college-level skills like time management and note-taking. Many were absolutely ESL students (or whatever the acronym is these days) that would never receive the level of ESL assistance that their non-sport-playing peers would. Generally speaking, WSU actually had a pretty fantastic infrastructure for getting disadvantaged students the help they needed, but this just wasn’t a possibility for the football players. They were too busy with football.

Time and time again, I had last-minute cancellations from students scheduled to meet with me to get extra help, students who couldn’t make it to appointments with tutors because the tutors were only available during the hours that football was happening, students who missed class because they’d been to six hours of class and eight hours of practice the day before and just couldn’t stay awake long enough to make it to an 8am class, a few hulking, male students near to tears in my office because they felt that they just didn’t have time to devote any attention to their education, and suspected that if they complained, they might have their scholarships taken away (how likely this really was, I don’t know). It was explained to me by team staff in no uncertain terms a few times during this course that struggling student-athletes’ grades should just “get better” or else, as if I were an undercover detective caught in illegal intrigue instead of an educator. I didn’t give in to these “requests,” and at least a few of the students ultimately failed the class partially as a result. I heard from colleagues later, though I can’t vouch for these reports, that those students more or less magically ended up with passing grades in the course after some discussions between the football team’s staff and the registrar, which not only undermines the university’s supposed values but is also a disservice to the students who would have benefited hugely from gaining the knowledge needed to legitimately pass those classes before moving on to football-less careers, whether it be right after college or (for a lucky few) after a stint in the NFL.*

Where I work now, we have no football team, which, while it’s a constant point of complaint for locals who “have to” root for either UO or OSU instead, makes education a lot more central to the mission of the school. We have strong teams (both men’s and women’s) in many other sports, but I find that their coaches and related staff are nearly always interested in the success of their players as students first and as athletes second. Might this have something to do with the fact that Oregon Tech isn’t part of the NCAA? I don’t know, but my guess would be a big fat “yes.”

So, yes, while there’s a part of me (an old, entrenched, Canton, Ohio part of me) that’s excited to see WSU in the national rankings this week, every time I see another former student or a former or current faculty member give a shout-out to the Cougs’ football team on social media, it’s hard not to wince. I was born five miles from the Pro Football Hall of Fame. Within a few minutes of being born, I had a blue plastic football placed in my hand and had my picture taken with it. I “get” football about as much as one can without ever having played it in an official capacity. But damn if college football isn’t messing up these kids’ lives, and by extension the workings of many otherwise great universities across the country that could be even greater if they could be bothered to value education over the supposed money-making machine that is college sports.

I’m sure I’ll watch a few games over this Christmas holiday, but it’s hard to be as excited about Bowl Week as I used to be when I was younger. I always feel a bit dirty watching a college football game now. And that’s how it should be. For all of us.

* I’m sorry to be so spectacularly vague in this paragraph, but 1) this was nearly four years ago at this point and 2) I’m hesitant to be too specific because…well, for obvious reasons.

Saturday, May 3, 2014

Procedural Generation and Emergent Gameplay as Storytelling Devices

I've been sitting on this one for a while, with a pile of hyperlinks to various articles about procedurally-generated worlds in gaming stacked up in a post titled "Drafty Draft," but it was starting to play Minecraft seriously again this week, after not playing much since, oh, 2010, that finally helped me figure out what I really wanted to say. Which is, briefly, that procedural generation in gaming is mostly awesome.

Not entirely awesome, mind you. As I ranted about at length previously in my now-locked personal blog, "procedural generation" has recently gotten roped in with "retro aesthetic," "roguelike," and "RPG elements" in the quadfecta (?) of current gaming buzz-qualities that ensure that it ends up in a lot of games that don't really use it effectively, just because it looks good on a feature list. For instance, taking out procedurally generated worlds in something like Teleglitch might actually add to the tension of the game, because you'd have some idea, at least, of what's coming, precisely what to be afraid of. Not procedurally generating the worlds of Don't Starve might actually lead to the player's accumulation of knowledge about a world's consistent geography and how to use that to their advantage when they next respawn, creating a situation not unlike the amazingly effective respawn mechanic at play in Rogue Legacy. But then there are games like Terraria and Minecraft, which arguably wouldn't work at all without the procedurally-generated worlds that their play takes place in.
[Image caption: The game that taught me not only to embrace death, but to look forward to it.]
But what I wanted to write about today has less to do with how procedural generation plays into a game's mechanics and more to do with how it might factor into a game's story.

I got my start writing legitimately (if you want to call it that) about games by writing about how narratives and stories function (and fail to function) in gameworlds. Given that most modern video games are a complex combination of pre-scripted "storytelling" through cutscenes and/or "events" that eliminate 99% of player agency paired with spur-of-the-moment player interactivity, I believe that games create unprecedented possibilities for cooperative storytelling/worldbuilding (they're the same, I'd argue) between the "authors" (developers) and their audience, but that they also contain unprecedented pitfalls. The only reason most people would scoff at the idea of a "Citizen Kane of video games" is because currently the pitfalls are like sinkholes and potential peaks are fragile, windswept pinnacles.
[Image caption: Like this one.]
In this article, Beraldo argues that "the main point is that games are a shared authorship experience. You will better remember that time you managed to evade cops opening fire at you by diving into an air duct in Deus Ex than the fact that one of the bosses was an ubercyborg with an accent." And this is what really got me thinking. I'm a huge Deus Ex fan, and this describes me completely. I remember, and appreciate, the game's Gibson-influenced cyberpunk story as something engaging and interesting and fairly audacious for a video game, especially in 2000. But I remember and appreciate it precisely because it was a very effective wrapper within which my role as a player interacting with and reacting to the gameworld around me played out. I cared about the story because the unexpected moments of air-duct-diving that emerged through gameplay were so compelling. I cared more about the air-duct-diving than I would have otherwise because the story wrapped those moments in a sense of fictional gravitas that is lacking in a game like, say, Tetris or Super Mario Bros. So it's a symbiosis, in a way. But it's a lopsided symbiosis that's unevenly reliant on the strength of the gameplay. A game without a compelling story can still be a fun game. A good story without compelling gameplay is just a slow, QTE-ridden low-budget movie.
[Image caption: It's unequivocally true.]
But I'm oversimplifying. The reason Beraldo's example works so well for me is that he's not just talking about Deus Ex as a game that contains both story and gameplay, but a game that contains both story and emergent gameplay. The reason diving into an air duct to escape gunfire in the game is such a tense, compelling, and ultimately fun experience is precisely because there is no particular "level" or scripted scene in the game in which this occurs. Instead, it occurs as a result of the player's behavior in the world, the reactions of a guard or enemy robot to that behavior, and then the by-the-seat-of-your-pants improvisation the player must undergo to escape what is likely a much better armed and armored (and thus unbeatable by main force) enemy. The scenario emerges from the way the game's rules and its world are designed, not as a discrete, pre-plotted encounter, the outcome(s) of which are already predetermined by the game's code (at least, not in a direct sense).

Warren Spector believes that games that enable these types of player-driven experiences are "the only thing that sets us apart in any meaningful way from other mediums." But it's not just a matter of being able to claim a niche: he refers to games that depend on emergent play as "engines of perpetual novelty." Not only do these games offer novel play scenarios in the short term (like Beraldo's example), they also focus more in general on the player's interaction with and experimentation within the gameworld than on herding the player along one developer-curated path, the only possible "right" experience. As Spector says, developers interested in emergent gameplay should "[e]mbrace the idea that your job is to bound the player experience, to put a sort of creative box around it -- but you don't determine the player experience. It's not about 'here's where every player does X.'"
[Image caption: Yeah, no more of this, please.]
To me, emergent gameplay adds not just moment-to-moment novelty, but potentially limitless replay value (in games like Don't Starve, for example) that isn't reliant on new DLC and achievements that require you to replay the exact same content over and over again to achieve some esoteric level of mastery. Replay value comes not from the desire to hone your skills in a vacuum to become really good at doing one specific thing, but from the near-limitless possible permutations of your gameplay and its results and your reactions to those results that can occur in the gameworld.

Of course, non-emergent types of games are just fine (Spector and I agree on this, and I recently had a hell of a good time playing ), but I've also found his claim that "once players get a taste of [an emergence-based] game, it's very hard for them to go back" to be very true in my experience. This really started for me with Minecraft way back in 2010. There were a lot of other things that were unique about the game to me at the time, but what really stuck out (and what still does, honestly) was the experience of trying to survive the game's nights early on while slowly learning the rules of the world and building a primitive shelter. This was emergent gameplay at its finest, determined entirely by the rules of the world and my actions within them rather than the limited gameplay dimensions of reaching a particular, required goal or the need to complete a particular level.

I think that this type of emergent gameplay is something that procedural generation can mesh with quite well. Where the effect of procedural generation is perhaps blunted in something like Risk of Rain, where the ways in which you interact with the world around you are necessarily very limited (in this case by a Contra-esque sort of gameplay), in a game like Minecraft (or Deus Ex) there are enough gameplay possibilities that randomized worlds, and randomized content within those worlds, actually force a much wider variation in gameplay experiences than "Oh, I'm the purple guy that shoots the big green guy with the grey gun" or "Oh, I'm the red girl that shoots the little black dogs with the yellow bow...so that's different!"
[Image caption: I hope you like finding the teleporter.]
And this is where story comes back in. In games with emergent gameplay, procedurally-generated content does even more to encourage players to make their own compelling experiences with the "creative box" by making sure that every player's experience is unique in its details while taking place in a gameworld that would still be recognizable to other players who are familiar with the particular game.
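To make that "unique in its details, recognizable in its broad strokes" point a little more concrete, here's a deliberately tiny sketch in Python (a toy of my own, with made-up names like generate_world and TILES; it bears no resemblance to the actual generation code in Minecraft or any other game mentioned here). One shared rule set turns a seed into a small tile map: different seeds produce different worlds, the same seed always rebuilds the same world, and every world is built from the same vocabulary of tiles, which is roughly why two players' stories can be personally distinct and still mutually recognizable.

import random

TILES = ["water", "sand", "grass", "forest", "mountain"]  # the shared "vocabulary" every world draws from

def generate_world(seed, width=8, height=4):
    """Build a tiny tile map from a seed using one fixed set of rules."""
    rng = random.Random(seed)            # every bit of randomness flows from the seed
    elevation_bias = rng.uniform(-1, 1)  # gives each world its own overall character
    world = []
    for y in range(height):
        row = []
        for x in range(width):
            # The "rules": elevation is noise plus the world-wide bias,
            # and elevation deterministically picks the tile type.
            elevation = rng.gauss(elevation_bias, 1.0)
            index = min(len(TILES) - 1, max(0, int((elevation + 2.5) / 5 * len(TILES))))
            row.append(TILES[index])
        world.append(row)
    return world

if __name__ == "__main__":
    for seed in (42, 43):
        print("--- world for seed", seed, "---")
        for row in generate_world(seed):
            print(" ".join(tile[:4].ljust(4) for tile in row))
    # Re-running with seed 42 reproduces the first world exactly, which is
    # why players can swap seeds and still end up with their own stories.

Same rules, different seed, different story; that's more or less the whole trick.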

This is how you end up with pieces like Quintin Smith's "Mine The Gap" or Tom Francis' "The Minecraft Experiment," both "virtual travel narratives" that I examined during my dissertation work.

When a player's experience is unique in the details but relatable more generically, it makes their story simultaneously more personal and more shareable, makes it seem more like a compelling narrative that they are invested in and have helped create instead of just some game that they're playing simply to pass the time. Any Deus Ex player understands Beraldo's example immediately, and likely has a similar example of their own. But, because of emergent gameplay, their own example is also importantly different, importantly personally theirs. Any Minecraft player understands the broad strokes of Smith's narrative immediately, but because of emergent gameplay and procedural generation, they necessarily have importantly different, importantly personal stories, because it's not only their gameplay experience that's fundamentally different, it's also the world that that gameplay takes place in.

And, well, I think that's neat. 

Friday, May 2, 2014

What We Have Here Is a Failure To Communicate

So, when I started this new blog, I was really excited by the prospect of being able to write publicly and semi-officially (though not too officially) about teaching, researching, and so on, in the hope that what I had to say would both help other people who might be wrestling with the same or similar problems and encourage other people to suggest solutions for me.

Well, for the last four months, apparently I haven't actually had anything to say. Well, that's not true. Let's just say instead that expecting to be able to blog about academia regularly while also taking on a load of committee service work, teaching four classes a term, and trying to get an entirely new minor program off the ground in my first two terms as a faculty member was a fool's errand.

My workload is a bit lighter now, but I'm not going to guarantee it's going to stay that way for long. In the meantime, though, I wanted to write briefly about one of the biggest stumbling blocks I've hit in my first few terms here; again, both in the hope that it might be helpful for potential readers to read and in the hope that someone out there might have some solutions of their own that can help me reframe and better understand the issue(s) I'm facing.

So, I'm currently teaching a lot of different courses. In two terms, I've introduced six entirely new courses, and next term (starting in a few weeks!) I'm going to introduce three more classes, for a total of nine new classes in my first year. Logistically, this is a ridiculous amount of work to juggle, but thematically, it's pretty coherent. At least in my head it is.

All my courses, however they are listed in the catalogue, are under the aegis of "the humanities," and this, combined with the specificity of my own training and research experience, means that all the courses have some elements in common. Students read stuff, and write stuff. They work in groups to discuss the issues that arise in the processes of reading and writing. They make arguments, and (hopefully) back those arguments up with evidence, source synthesis, and critical thinking. Whether we're reading novels or poetry or comic books, playing video games, or watching documentaries, our approach to the material is, generally speaking, a pretty straightforward literary analysis one. This is, at least for now, a limitation of my training, but also an expression of my belief that literary analysis gives one the tools to basically and fundamentally analyze and question all works that fall under the rubric of what I call "culturally expressive media." People write novels, film movies, and make video games to be expressions of their personal and cultural experience of the world (sometimes more intentionally than others). By studying these works through this lens, I believe we can simultaneously better understand the creators' cultural situations and how various media can implicitly and explicitly make arguments, while interrogating our own assumptions about society, or what-have-you.

Pretty straightforward cultural-studies-literary-analysis-humanities-type stuff, right?

Well, it's totally not working. Which is weird, because it worked before.

I find that my students are asking variations of "Why do we have to do this?" much more frequently than they ever did at WSU and I'm finding it much harder to convince them (on average) to become invested in what they see as squishy, non-objective, non-quantifiable work beyond the level of "If I don't work sort of hard, I'll get a bad grade and hurt my GPA."

Now, the problem certainly isn't that I'm incapable of addressing the value of the humanities in today's university environment. In fact, sometimes that seems like all I do. Sometimes it feels like 90% of my job is to explain to other people why my job is important. So, I've gotten pretty good at it.

I also realize that when you're teaching at a technical institute, you're going to get a certain kind of student, for whom learning about the cultural significance of The Iliad might pale in comparison to the things they do in their other classes, like building robots (no, seriously, they do).

But. I guess it's difficult for me to understand precisely how a student goes from thinking one of those things is preferable to the other to thinking that one is useful and the other one is pointless. I can throw all the statistics I want in the face of many of my students, proving that graduates with humanities knowledge are going to go further down their career paths than those who don't have it, and thus make more money in the long term. That employers look for critical thinking skills and cultural awareness in addition to your "piece of paper." That many of the professional associations that eagerly await our graduates to fill their open positions specifically contact us regularly to make sure that their future hires are getting an appropriate education in ethics, logic, critical thinking, and global citizenship. And, yes, that there's really not that big of a gap in employment between those who have humanities degrees and those who have science degrees.

None of it seems to make any difference, though. Which leads me to believe that there may well be a profound shift occurring right now, not just away from liberal arts education, but from education in general. The kind of "education" many of my students seem to want to get at this school is actually what we'd call "vocational training" if there wasn't such a stigma attached to that term. And that's fine, I guess: college has become such an expensive, loophole-ridden process while simultaneously becoming so necessary for employment that it's easy to see why students want to know what they're paying for before they pay for it, and a $70,000/year job at the end of four years is a lot better return on an investment (one that lands you in massive debt) than "being a better person" or "understanding other cultures."

So I get it. But, I think there are a lot of hidden costs to replacing universities with huge job-training factories, and if we continue to go down that road as a culture, I think there are a lot of potential dangers specifically in continuing to call what we're doing "education." Because I believe education should extend beyond just job training, regardless of whether students can immediately see the value in learning more ephemeral subjects and skills or not. And I think every student who can afford it will reap the vast benefits of this broader kind of education throughout the rest of their lives whether they go on to manage a McDonald's or work as the CEO of a corporation, or do anything in between.

But when I say these things to many of my students, they just roll their eyes and settle in to suffer through 10 of their 30 hours (30 in-class hours, out of four years of school!) of humanities education.

So, I guess my question is: with so little time in the classroom, how do we convince students that there is more to education than just skill-based training and certification? How do we convince students that sometimes it's worthwhile to invest money in something other than getting more money?

I have a lot of reasons that I could share convincingly with, say, the audience of The Chronicle, but I don't have any reasons that 18-year-olds who have already chained themselves to massive, possibly lifelong debt want to hear.

Tuesday, April 1, 2014

Social Media Might Just Be Impossible To Teach (For Me)

I've taught a few variations now on a class that I'm currently calling "Digital Diversity." It's a class that I really enjoy teaching, and that focuses on a topic very close to my heart: discussing all of the insidious and wonderful little ways that our day-to-day lives are tied seemingly inextricably to the internet and computing technology nowadays.

Though most of my classes get a better reception the more times I teach them (which I like to think indicates my ability to adjust and improve according to feedback and more pedagogical reflection) or at the very least balance out at a pretty acceptable level of success, Digital Diversity has been pretty distinctly and obviously less successful each time I've taught it. I've been trying to figure out why for a long time, in the process pouring a lot of thought and extra work into solving the problem (including writing a few more blog posts about it, which are currently sitting at the top of my "Drafts" folder), and I haven't come to any solid conclusions or discovered any obvious solutions yet. However, stumbling across an old email this morning might have helped me understand what's going on a little bit better.

The email in question was from an old colleague of mine and it popped up in gmail as one of the results of a totally unrelated search, a weird quirk in the search algorithm. It was a reminder about a project said colleague was working on as part of a class we were taking together. The colleague was interested in studying Twitter, and so, for one particular week of the seminar, she was requesting that we all install the Twitter app on our phones, enable all notifications, and then be prepared to discuss the experience of being "always-on" through the app when class reconvened.

That's it. Today, a discussion about how annoying Twitter notifications are is like a discussion about how annoying it is when the weather calls for sun and it starts raining. Everyone gets it. It's hardly a topic that needs the environment of a graduate-level seminar in cultural studies to happen. But I distinctly remember being extremely interested in the topic from an academic standpoint when this seminar was going on. I remember my colleagues and the professor being extremely interested. And I remember the discussion being engaging and enlightening. That seminar took place maybe six years ago. In the last six years, social networking over the internet has become so tightly integrated into most of our lives that to speak of it analytically, critically, is, for want of a better word, passé.

I was a graduate student in an environment in which talking about the politics of identity creation on Facebook, on Twitter, and so on, was a burgeoning topic of interest and (apparently) intense importance, at least in the discourses of cultural and media studies. And I was learning about these subjects through research and dissertation work as they were being spoken about and written about in that same way. And I was learning to teach about these subjects as they were being spoken about and written about in that same way.

Flash forward six years, and it's beginning to feel like my pedagogical approach is flawed. I learned (in part) how to teach while working under the assumption that these technologies and our applications of them were exceptional and that they should be presented, discussed, and deconstructed as such. In 2014, there is nothing more reflexive (or at least, so it seems) to college-age students than using Facebook and Twitter as one of their main ways (if not their primary way) of interacting socioculturally with the world outside themselves. In my class, when I want them to think about whether or not Facebook's interface inherently and importantly limits their ability to express themselves to their friends in ways that might not exist in face-to-face conversation or conversation over the telephone, they are bored.

This is, of course, nobody's fault, but I'm not sure how to overcome the difficulty nonetheless. Every time I change the course, I update readings, I change assignments, I alter notes, I approach subjects from different angles. On a day-to-day basis I read at least 2-3 articles regarding technology use and its effects on our culture, not just to "stay current" but because it's what I'm interested in as a person, not just as a teacher. It's not as if I'm coming to class with anecdotes about Friendster and screenshots of my old GeoCities page (don't try to find it; you can't). But I wonder if there's a chronological or maybe even generational gap that's affecting my pedagogy here: to me, even fifty years from now, a lot of these technologies and our applications of them will always be a) fascinating and b) worthy of critique, because I remember what it was like when they first came into being, how dramatically things changed. It's possible that all the research, revision, and pedagogy workshops in the world aren't going to be able to close that gap between me and students who take classes like Digital Diversity.

Thoughts?

Thursday, November 21, 2013

So, Getting Used To Working At A Teaching School Is Weird, But Good.

I want to start this post by mentioning some things that I like. I'm going to do this because after that, I'm going to talk about how I don't like some of them as much as I like others, or, at least, how some have become more important to me than others, their value more obvious, and I don't want to give the impression that I dislike the less-liked things...I just like them less than maybe I once did.

When I switched my undergraduate major from Computer Science to English Literature way back in 2001 (or maybe 2002), I did it because over the first 2.5 or so years of college, I'd realized that I fundamentally, absolutely loved the lifestyle higher education allowed me to have. I was perfectly happy working 12 hours a day if that work was reading, learning, questioning, writing, exploring, and if that time was spent with and around others who also valued those activities. I was happy enough with this lifestyle, and wanted to live it badly enough, to give up a major that would have easily (in the early 2000s) landed me a job right out of college that would have paid significantly more from the outset than my current tenure-track job will ever pay even at the full Professor level.*

My enthusiasm for this lifestyle has waxed rather than waned over the years. It's become less and less financially practical to actually make a living by working in higher education as an actual educator (it's another story if you want to be an administrator), but that continues to not really matter much to me. I still love writing, I still love reading. During my time at WSU, where I was required to teach courses while pursuing my graduate degrees in order to receive funding, I even discovered that I actually really like teaching, something I would never have imagined about myself back in the KSU days, when I was just desperately looking for an excuse to stay enmeshed in the intellectual environment of a large public university after getting my bachelor's degree. That said, some of my favorite memories of my graduate career involve participating in great seminar discussions after wading through hundreds of pages of theory, and condensing that theory and tweaking it to plot and write up a dissertation that I thoroughly enjoyed and am (mostly) proud of. So, by the time I received my Ph.D., I had, usefully, realized that I loved both teaching and researching.

Interestingly, my new job wants me to do one of these and not the other. It's 2013, so I'm pretty sure you can guess which is which.

When I first learned this, during a post-interview phone call with my then-potential employer, my heart sank a little. Taking a job where I taught a heavy course load and wasn't expected to engage in research (read: "We won't give you time or money to do any research") seemed utterly depressing to me. I loved the research I was doing, and couldn't imagine giving it up to "just" teach more classes.

Well, as you can probably guess, this is the part where I decide that I don't like some things as much as others: flash forward about seven months, and I feel completely different about the whole teaching vs. research thing. I realize that my opinion isn't for everybody, and I'm not writing it here to force it on anyone else, but just to share my own experience. So.

Working at a teaching-oriented university is awesome.

See, the always-underlying issue I had with the work I was doing before was that it was pulling me in two opposing directions basically constantly. I love teaching and I love doing research, but when you're doing both at the same time, you're always doing both to about half of your ability. Or, at least that's how I've always felt. When I was able to just sit and read and write for a week, I was able to create some of my best-ever work. When I was able to just focus on lesson planning and teaching for a week, I was able to teach some of my best-ever classes. Out of eight years of work at WSU, how many times did I get these "one-only" opportunities? Maybe 2-3 times apiece. That's not an indictment of any particular person or department or policy, just an observation of how the general expectations that come down from on high at a huge public research university force you to mix, mingle, and often ultimately shortchange your priorities.

Here, it's easy: I work on teaching all the time, except for when there's a committee meeting or another service-related opportunity, and then I put the teaching aside for an hour and focus on that. The transition back is much easier, because helping to guide the course of a small department has a lot in common with teaching (and learning). Research, though it can certainly inform one's teaching, doesn't have a lot in common with pedagogy itself. Unless, I suppose, you are researching pedagogy. Which I was not.

So, practically speaking, my job's not nearly as confusing. I'm switching gears much less often, and though I work just as hard and often get just as tired, I'm not burdened with that feeling of being stretched too thin, of knowing I could teach better if I didn't have to attend my fourth conference of the year or knowing that I could really knock this article out of the park if I didn't have to teach three classes this semester, and so on.

Existentially, focusing just on teaching is a lot nicer, too. This is what's really surprised me, and what's possibly going to piss off some of the gentle readers out there. Surprisingly, "service" is no longer a bad word to me. Once, it was that pesky thing that you half-assed on your CV, the grad student equivalent of putting Who's Who Of American High School Students on your college application. Now, it's things that I do naturally every day as a result of being part of a small department, in a small town, with a recognizable and manageable community of students and faculty and administrators.

Because this is such a small school and the only thing that's more important on faculty evaluations than "service" is teaching, there's (generally speaking, at least) a positive attitude toward university- and department-related activities built into the way most people approach their work here. And this is great for me. Since I started in September, I've already made progress on helping to introduce a new academic program to the school, created six new courses that should all be seeing the light of day over the next 2 years or so, made inroads toward getting placed on two faculty committees whose work I think of as very important to the university and community at large, and met informally with a large number of faculty members to discuss how we can improve our Humanities offerings to more directly suit their programs' needs. I have a lot more time to meet with students individually, I've talked to every member of my department at length, both about work and more mundane things, and I'm setting up one-on-one, face-to-face meetings with the Provost and President of the university to discuss my thoughts on revamping the Humanities offerings and general education requirements.

I don't list all of those things to brag, but to make the points that a) I'm doing lots of things that are going to (hopefully) be directly valuable to the university and its student population and b) all of these things are way more meaningful to me than getting another article published in some obscure journal after months of toiling away alone in my office.

I once read an article (and I really wish I could remember where, because it would help this post out a lot) that suggested, pretty compellingly, that the "research" requirement in the humanities was something colleges and universities just sort of started imposing on mid-level faculty as a condition of tenure for the sole reason that there wasn't much else for them to do besides teach classes. In larger universities, departments got so big that there was really no meaningful way to include all faculty in the processes of governance and upper-level decision-making, so they were, essentially, given something else to do instead to hide the fact that the system had become too cumbersome to care about their input anymore. Now, I don't know how accurate that is (like I said, the article was pretty convincing, but I also didn't follow up by investigating its claims in detail or anything), but it rings fairly true based on my own experience.

Like I said at the outset, I love doing research and writing articles, and writing my dissertation is one of the most rewarding things I've ever done academically. But it's hard to regularly put so much time and effort into publications that are likely to be read only by a few people who are already hanging out in the same echo chamber as you, especially when the time spent writing could be applied to teaching or to working to make a positive change in your immediate department or university. Research is great, but if given the choice between working on an essay of mine for an hour or serving on a committee that's working toward a positive infrastructural change at my university for an hour, I'll always choose the latter. It's more immediate, and to me, more ultimately meaningful.

Certainly, when you write to publish, you're learning a lot about your field and then you're "creating new knowledge." But I can still do that by reading a bunch of books before choosing one for a class (which I do every semester) and then discussing that text in class, requiring my students (and thus myself) to critically think about the material in it for a couple of weeks. I can blog, tweet, talk to my colleagues, and engage in other forms of meaning-making based off of the things I'm reading and the concepts I'm pondering, all without having to resort to the draconian and ultimately somewhat hollow experience of hurling myself into the maw of academic publishing again and again. I haven't written any new article material yet this year and I don't really plan to, but I don't feel dumber as a result. I feel both smarter and more like my energies and talents are being applied to tasks that have a more immediate benefit both for me and for the rest of the university.

Of course, I don't mean this as a condemnation of humanities research or anything dramatic like that; it's just that it's been striking to me to realize how much more focused and simultaneously more relevant and productive I feel without having to worry about it as much anymore. I've been surprised at how hollow it seems (again, just to me) now that I have an actually well-defined and relatively powerful role to play in a small department that's excited about my input in a way that a larger department just couldn't be. It's a combination of factors that obviously isn't available to everyone who is in this line of work, and even if it was, I imagine a lot of people wouldn't want it. But it's working out really well for me by allowing me to stop focusing on the things that I've been told Matter Greatly and focus instead on things that matter to me intrinsically.

* At least two friends of mine who majored in CS around this time immediately started jobs with six-figure salaries after receiving their B.S.

Monday, November 18, 2013

Low-Attendance Classes Are Awesome. I Think.

This is something I think about and talk about face-to-face, in meatspace, with friends and colleagues a lot. But I've always been hesitant to post about it on the internet for fear that it will get a lot of those knee-jerk, judgmental responses the internet is so famous for. I've had a few good discussions arise from my blog posts lately, though, and very few heads have been bitten off. So I thought I'd try.

I take attendance when it's required that I do so by the school. Insofar as I'm allowed to formulate my own attendance policy, I have none. I make it clear to my students up front that a big portion of my class is discussion, participation, and "meaning-making" in the classroom, on a day-to-day basis. If they choose not to show up for this, I don't directly penalize them, because I believe (and have seen, after teaching over 1,000 students) that they are indirectly penalized and almost entirely across the board perform worse than students who have perfect or near-perfect attendance and participation.
For better or worse, this is generally my approach to teaching at a university: you, the student, are (nominally, at least) an adult, and you're free to make your own decisions about attending class, about doing the work, about putting in the time, etc. I make my expectations clear up front, and I make myself readily available to you if you want to do the work but maybe don't know how to do the work. Even if you don't yet know how to want to do the work, I'm willing to help. But I'm hesitant to force students to put in a good effort, because then it won't be a good effort...at least that's how it seems to me. This has been my approach for a long time, and I'm mostly happy with it, though sometimes I feel bad about not being more prescriptive, as in "Student A wouldn't have failed if I'd forced him to turn in more drafts of his essay!" But that brings its own set of problems, I suppose.

Anyway, the point I'm trying to get at is that I find myself frequently happy when I have a low-attendance day or week in class rather than frustrated (which seems to be the "normal" faculty reaction to such things). I think it's because it reinforces the implicit assumptions I'm making in my attendance policy as I've described it above: if the students who show up are choosing to show up, and their motivation for showing up is interest, if not in the subject matter then at least in GPA-based success, then a low-attendance day actually raises the percentage of interested students in the classroom.

As someone whose courses are dialogically based 95% of the time, I'm all in favor of smaller class sizes as a rule, but even courses with a twenty-student cap are typically going to have around ten students who would just barely not rather be dead than be in my classroom. On the other hand, every single class session I've had where there were 30-40 students enrolled and only 3-4 showed up (something that happens depressingly frequently) has been absolutely brilliant.

There's a part of my brain that feels like I should feel guilty for thinking this, but I can't tell if it's a logical part, or the part that occasionally insists that we should give a trophy to every child, even if they come in last place, because trophies make people feel good and sad people are sad.

Monday, November 11, 2013

An Ode To Instagram

So, thus far on my new work-related blog, I've had three posts. One is arguably work-related and the other two not so much.

But I realized something awesome about Instagram the other day and wanted to share, and since I at least nominally research and write on new media for a living, I don't think it's too far off the mark to post it here as well as on my Super-Secret Blog of Feelings and Secrets.

As if it wasn't clear from my original post on this blog, I have a pretty ambivalent relationship with social media. I used MySpace for a while, but I was never quite sure what it was for besides streaming other people's music. I started using Facebook because my friend Toria signed me up using my laptop while I was in the bathroom one day at work. I started using Twitter because it was required for a class. I started using Foursquare because I needed another example of digital storytelling to talk about in my dissertation. I started using Google+ because I was just curious if it was actually better than Facebook.*

In short, I didn't really start using any of these services for what they were meant for; namely, for socializing. Like a dog with a type of food he's never encountered before, I've always been content to just take a lick here and there and then spit the thing back out, because, frankly, the food I usually eat has always been good enough.** I started researching and writing on Americans' use of social media not because I was a huge fan, but because it was so fascinating to me that we could collectively become obsessed with a form of expression that was simultaneously so full of promise and yet so empty in execution.

I post a lot on Facebook, but it feels fairly pointless, unless I'm sharing news that I know will interest or entertain someone specific. I post a lot on Twitter, but it's the internet equivalent of having "conversations" with the colleagues that speed past my open door en route to their class: it's nice for a quip or shout-out here or there, but ultimately not much beyond that. I still use Foursquare from time to time, but only because it's fun from a gaming perspective: I like earning new badges and mayorships in that shallow Unlock Steam Achievement way. If each Foursquare interaction took about a second longer, I'd probably stop using it entirely.

Instagram, though...Instagram is cool. I realized this the other day as I was waiting for one of those things you occasionally find yourself waiting for, like someone to call you back or your girlfriend to use the restaurant bathroom. During those little spaces nowadays, when you know you don't have enough time to Accomplish Something but there's also not enough time to Fall Into A Deep Reverie And Reflect On The Meaning Of Life, I usually pull out my phone and tool around. In this case, I went to my Instagram feed and started looking at the pictures posted recently by the people I follow. And I realized after about four pictures that it was just really awesome, interesting, and creative in a way that all those other social networks rarely are.

Part of it, of course, is the requirement that you use visuality to express yourself on Instagram. There's a huge gap between posting "I ate veal tonight" on Facebook or Twitter and turning that written expression of mundane fact into poetry.*** However, even if you take the most soulless, robotlike photograph of your food and post it to Instagram, you're still making decisions about how to frame the photo, what light you're taking it in, the angle you're taking it at, etc., etc. You are forced, at the options screen, to make choices about the frame that goes around your picture, whether you want a caption, whether you want to apply a filter, and so on. Even if you opt out of all of these choices, that itself is a choice. It's a social media app that requires you to express yourself artistically and creatively, even if only in the smallest of ways, and for that I love it.

Of course, this also democratizes interesting content creation in the sense that you don't need to be able to describe your plate of veal like Whitman or Keats to make it interesting: visually, to me at least, it's really exciting to be able to actually see what other people I know find important in their lives. Written language is a fun tool to play with, but not everybody likes to play with it. Not everybody is comfortable enough using it to play with it. A neat photo on Instagram is just a click away (a few more if you want to apply the Amaro filter first, which you should probably do).

There hasn't been a single time over the last year or so that I've checked my Instagram feed and not found something that struck me as a legitimate piece of point-and-click art. I'm lucky if that happens once every week or three on Facebook or Twitter. Other social media networks feel a bit like I'm living on a street with everyone I know, and we all leave our curtains open and our windows open and talk really loudly, even when we're just talking to ourselves. Instagram, instead, feels a little bit like those times in my undergrad Creative Writing classes when we'd all write some super-sloppy, semi-outrageous, but totally earnest poem and read it out loud to the rest of the class. There was always something a little bit silly about everyone's poem (including your own), but the fact that everyone was sharing made it so everyone deserved to be taken seriously.

I think this happens with Instagram at least in part because the platform forces you to share in this way if you choose to use it at all, but I think I'm okay with that.

* It is, but nobody uses it, so it's sort of a hollow victory.
** This is a strained metaphor, but it's also happening on the carpet in front of me right now, so it's what came to mind.
*** And here I mean lowercase-"p" poetry, not uppercase-"P" Poetry.