Blog

Reading “Amusing Ourselves to Death” in 2025 Part 2

The cover of Amusing Ourselves to Death by Neil Postman

A rejoinder one might make to Amusing Ourselves to Death forty years on is that text (if not print) has come back in a very big way. Texting has become a primary means of communication. Many social media platforms — despite all the various turns to video — remain text-based. These two phenomena alone mean that many of us read and write much more than we would have had we been the same age in 1985. Add in generative AI tools, the most popular of which are based on text chat, and we live in the midst of a huge resurgence of reading and writing (of a particular kind). At the same time, television as envisioned by Postman is no longer what it was. Many of us don’t channel-surf these days and instead choose — often guided by an algorithm — what and when we are going to watch. Commercial breaks, while clearly not a thing of the past, are often optional if one is willing to pay to avoid them. About 40% of Americans still watch broadcast or cable TV, but that share has been in steady decline since the rise of streaming. It is quite possible we are either in a fundamentally new media environment, requiring a new kind of analysis along the lines of Postman’s critique (one I am sure is ongoing in the various media-related disciplines), or that text itself can just as easily carry on the legacy of the fragmented world of TV.

Indeed, for Postman, the problem facing us cannot be reduced to the precise medium through which we consume information; it is rather the entire media environment that shapes how that consumption is accomplished. The rise of text in the age of TV (or, more accurately for today, the internet) will only ever be reshaped by the dominant medium: “Television arranges our communications environment for us in ways that no other medium has the power to do” (78). So I get it. And yet, I can’t help but consider the historical narrative drawn by the book, which contrasts the world of print of the eighteenth and nineteenth centuries with the world of television of the twentieth, and ask: how different are these, really? Is the era of print as rational as Postman would have us believe? The book is not a work of history. Indeed, for all its academic trappings it is, in many ways, a polemic. It isn’t meant to get into the nitty-gritty historical details. But we might reverse the narrative and, by asking what aspects of the TV environment existed prior to TV, witness as much continuity between the two as difference.

“From Erasmus in the sixteenth century to Elizabeth Eisenstein in the twentieth,” Postman declares, “almost every scholar who has grappled with the question of what reading does to one’s habits of mind has concluded that the process encourages rationality…To engage the written word means to follow a line of thought, which requires considerable powers of classifying, inference-making and reasoning” (51). I am not a scholar of print and media, but this strikes me as naive. Also in the sixteenth century, Martin Luther published On the Jews and their Lies (1543), among other texts. His antisemitic publications led, pretty directly, to restrictions on Jewish life and to the expulsion of Jews from Protestant areas of Germany, to say nothing of his broader influence on political antisemitism in the nineteenth and twentieth centuries. During the French Revolution, print was used to rile up the crowds, spread conspiracy theories, and destabilize various political systems and experiments. These two examples seem to belie the idea that “in a culture dominated by print, public discourse tends to be characterized by a coherent, orderly arrangement of facts and ideas. The public for whom it is intended is generally competent to manage such discourse” (51). Print, just like TV and social media, is perfectly capable of presenting disordered ideas in the service of whoever is able to shape the discourse.

Other historians, in particular Robert Darnton, have shown how the “Age of Reason” (as Postman refers to the Enlightenment) was undergirded by what we might today call slop or trash (or, more charitably, popular works that mixed their politics with pleasure). Titillation and pleasure were as much a part of the Age of Print as reasoned discourse. Later, in the nineteenth century, the newspaper ushered in the fragmentation that Postman lays at the feet of TV. The French mass daily or penny press featured the fait divers, the chronicling of random assortments of events from real life in a short format that, as Vanessa Schwartz has argued, aped the conventions of flânerie and “implied that the everyday might be transformed into the shocking and sensational and ordinary people lifted from the anonymity of urban life and into the realm of spectacle” (36). The classifieds, as several recent books have shown, posed their own problem to efforts to create a media environment premised on rationality, reason, and respectability.

The popularity and success of the mass press was undergirded, moreover, by advertisements and classifieds, a history on which Postman does provide his own gloss: “As late as 1890, advertising, still understood to consist of words, was regarded as an essentially serious and rational enterprise whose purpose was to convey information and make claims in propositional form” (59-60). This, I would say, would come as a surprise to many late nineteenth-century commentators who feared the effect of advertising and mass consumption on the population, especially in democracies. Émile Zola’s Au bonheur des dames (1883), for instance, traces the ways the new department stores of the 1860s manipulated their customers into buying not what they needed, nor even what they wanted, but what they didn’t even know they desired. I was lucky enough to be in Paris during a recent exhibition of nineteenth- and early twentieth-century poster art at the Musée d’Orsay; certainly few of those posters conveyed much information. Included on the wall text was an 1897 quote from the French artist Henri Jossot: “The poster on the wall should scream out; it must violate the gaze of passers-by.”

I am not going to claim that nothing changed with the arrival of TV, but whatever changed was not as big a break with what came before as depicted in Amusing Ourselves to Death. This sense of continuity might slightly reshape how we wrestle with Postman’s conclusion — the one I ended my last post with: “The problem…does not reside in what people watch. The problem is in that we watch. The solution must be found in how we watch” (160). In my previous post, I said that perhaps we might apply this appeal to our approach to social media. Here, I might put the conclusion into greater question. If watching and reading are less fundamentally different than the book presents them, then the problem is not just that we watch; it may also lie in what we are watching as well as in how we watch. The medium is the message, but no medium is or will be innocent of our own susceptibility to disinformation and emotion.

Reading “Amusing Ourselves To Death” in 2025 Part 1

Cover image to Amusing Ourselves to Death by Neil Postman

My book club decided to read Neil Postman’s Amusing Ourselves to Death (1985) this month. I confess that I knew very little about it, but the person who chose it described it as apt for the current moment and, now having read it, I not only agree but am surprised that it hasn’t come up more in the wake of Trump’s 2016 election. (That said, a quick Google search shows that I’m far from the only person to see its relevance today.) The book is not only prescient, but in some ways works better today than it might have when it was first published. At the same time, I think it overstates some of the transformations that have occurred in the television era and creates a flawed (if not wholly false) dichotomy between print and other kinds of media.

I’ve decided to jot down some thoughts on the book. In this post, I’ll focus on Postman’s central claim and why the book so struck me reading it in 2025. Next, I’ll lay out some of the things that the book, for all its brilliance, misses and how we might nonetheless use it to think about our present predicament.

Continue reading “Reading “Amusing Ourselves To Death” in 2025 Part 1”

The Master of Horror

I watched John Carpenter’s The Thing (1982) for the first time about a year ago and just couldn’t believe that I had missed out for so long. I then randomly watched his The Fog (1980) and Assault on Precinct 13 (1976) and was officially obsessed, even though I’m not a huge follower of horror films. Since Blank Check had done a series on him, I decided to dive right in and watch all of his directorial theatrical films. What a run! I can’t recommend engaging with a filmmaker like this more (my next will be a much more manageable watch of the Wachowskis’ films).

Here’s my ranking, completed before listening to the last episode of Blank Check’s miniseries on his films:

  1. The Thing (1982)
  2. They Live (1988) (Basically a tie)
  3. Halloween (1978)
  4. The Fog (1980)
  5. Escape from New York (1981)
  6. Starman (1984)
  7. Assault on Precinct 13 (1976)
  8. Prince of Darkness (1987)
  9. Big Trouble in Little China (1986)
  10. Christine (1983)
  11. In the Mouth of Madness (1994)
  12. Dark Star (1974)

Everything above this line is worth watching.

  13. Memoirs of an Invisible Man (1992)
  14. Escape from L.A. (1996)
  15. The Ward (2010)
  16. Village of the Damned (1995)
  17. Vampires (1998)
  18. Ghosts of Mars (2001)

Follow me on Letterboxd!

AI in the History Classroom

I am very much on record as being extremely tired of having the same discussion about AI over and over again. Again and again, colleagues both on my campus and more broadly express deep anxiety, frustration, and anger about the ways that “AI” has been deployed by various tech companies, along with disappointment in students who have taken to it both innocently and for less appropriate reasons. All of which I share. Indeed, I have noted with increasing dismay how difficult students find reading and understanding relatively short texts, a problem exacerbated (though almost certainly not simply caused) by the use of AI to summarize for them. However, these conversations almost always remain gripe sessions, ending without any real solutions or advice about what to do in the classroom. My own policy, thus far, has been to ban its use, clearly explain why I am doing so (in short: the goal of a history class is to learn to think and write on one’s own), and sometimes devote some class time to discussing it. I am 100% sure that such bans have not been entirely effective, though I do think that taking time out of class to talk about it and explain my reasoning has lessened its more nefarious uses. That said, it is obvious to anyone reading about AI in higher education, and to anyone in the classroom, that students are using it all the time, and I am somewhat at a loss as to what to do about it.

So I had a bit of a different response than a lot of people to the recent publication of “Guiding Principles for Artificial Intelligence in History Education” from the American Historical Association. Most people on Bluesky dismissed it outright for accepting what should not be accepted in the first place: that AI is here to stay. (I’ll note here that I hate referring to ChatGPT and tools of its kind as “AI,” which they are not, but that seems to be the terminology.) On the other hand, I’ve seen a few responses lamenting that so many historians are dismissing AI out of hand, especially its possible uses in research (this I saw on a private forum of AHA members, so no link). I think both miss what the document is trying to address: what should educators, some of whom are going to be entering the classroom in two weeks, actually do about these tools as they exist in the world right now and are being used by our students? What practical advice might be helpful for instructors developing syllabi right now? Taken on those terms, I find its advice somewhat helpful, if occasionally less clear than it might be. It ends with a serious “wtf?” So some thoughts. (I also, as an aside, wish folks commenting on these kinds of documents would remember that they were produced by their colleagues who, I try to believe, deserve grace and the assumption that they are not shills for tech companies.)

First, I read the document as premised on different assumptions than those animating AI-boosters in both tech and higher education. The recent Microsoft-produced list of professions most likely to be replaced by AI laughably included “historian” near the top, which of course simply means that whoever (or whatever) made the list doesn’t know what a historian actually does. As the “Guiding Principles” explains, “Generative AI tools risk promoting an illusion that the past is fully knowable.” The Microsoft list speaks to a broadly shared misunderstanding about what historians actually do. Historians seek out new knowledge, interpretations, sources, and ideas; they do not simply recreate (as generative AI does) what is already there. The past does not exist independently of our interpretation of it, ready for us to simply discover. My department — prior to the advent of generative AI — redesigned our introductory history course to focus on precisely this point: teaching students that history is an interpretative discipline. Doing so, one might hope, will show the deep limitations of AI in doing the work of history.

Second, when addressing some of these limitations, I wish that the “Guiding Principles” had been more forceful. Having read — and taught — a recent article describing AI “hallucinations” as “bullshit,” I think it worth asking whether it is worth using AI at all when it has a significant chance of bullshitting, rather than simply “work[ing] to counter these hallucinations when they appear.” It seems to me, rather, that AI tools might be best suited to use cases with a clear, user-defined dataset and/or to purposes of refinement and formatting rather than search and/or text generation. In my own life, I admit that I have found generative AI useful in making a schedule of habits and tasks that I had some trouble getting my head around and in planning a road trip, both of which involved me feeding it the data and it then working through a problem that would have taken me a great deal of time. Any tool that bullshits its results does not seem suited to the kinds of tasks we set ourselves or our students in our professional lives.

Third, I am sympathetic to why the “Guiding Principles” declare that “Banning generative AI is not a long-term solution,” even as it has been my own solution thus far. On Bluesky, I’ve seen a number of comments arguing that the AHA has betrayed historians, with people saying that they’d blackball any researcher who submitted anything written with the help of AI and that we should hold the line. A lot of this, I think, comes from a place of true distress at how tech companies have, without our consent, fundamentally changed our relationship to the internet, to research, to writing, and, most importantly, to our students. But I do not think, based on what I have read and seen, it is realistic to hope that this is simply going to go away or that the AHA is in a position to stop its spread. I agree that we should have clear standards about the use of AI in research (and I agree that no one should be using it to write their articles), but that was not the purpose of this document. The tools are out there, and basically every single one of our students is already using them.

Screen shot of a table from the "Guiding Principles" discussed in the post

In that sense, finally, I am both annoyed and gratified by the practical advice that the “Guiding Principles” provides regarding the need for “concrete and transparent policies.” Breaking down the various ways that students are using these tools in the Appendix helped me better understand some of the questions I need to ask about my assignments and how I will approach AI conversations in the classroom. Indeed, while I do not think my policies will change now that I have read the document — I still find the use of generative AI to be counter to the goals of my courses — I do think I can better answer students when they inevitably ask about specific use cases. For instance, I actually do not mind if a student uses AI to help them format a footnote. I already allow them to use (and use myself) citation managers, and I see little difference in transferring that work (especially for a short paper) to a different tool. I appreciate being prodded to think clearly about the various ways that students are going to use these tools so that I can formulate a response and a policy in advance.

What concerns me — and I think this is where the Committee really needed to rethink its approach (my “wtf?” moment) — is that the document does not just provide a sample template for an AI policy but also one that is already filled out. I do not think this was the intent of the Committee, but it reads as a set of recommendations for an AI policy rather than an example of a completed syllabus policy. And so, when readers come across an AHA-branded document that claims that it is acceptable to “ask generative AI to identify or summarize key points in an article before you read it,” people are rightfully alarmed. One of the points of a history class is to read the article. Even worse is suggesting that it is okay for students to generate a historical image, which seems wildly inappropriate even if the student cites such use. Such language has been circulating on social media and is shaping how people are responding to the document as a whole.

I, by and large, hate these tools. I hate how Google is basically unusable now. I hate how tech bros think they know better than those with expertise. I hate how these companies have reshaped our world without our consent. I hate how the actual use cases for these tools seem much narrower than people think. I hate how the widespread use of these tools is going to lead to a much dumber world, where new ideas have much more difficulty getting out there. But I also don’t think I’m in a position to stop it. Instead, we need new strategies to get students (to say nothing of the broader public) to value the purpose of learning itself, to get excited about the process, and to recognize the importance of the skills that AI-boosters claim (read: lie about) will be replaced. I am doubtful that the AI bubble is going to just burst, as I see some people claim on social media. I hope I am wrong, but am planning for being right.

Neverending Fantasy: A Final Fantasy Replay Journal Part 1

The cover image of Final Fantasy on the Nintendo Entertainment System.

I don’t think I’ve taken enough advantage of one aspect of tenure, which is that I can now write about some of my interests without worrying about whether doing so will turn off a job committee. One of these interests is my lifelong love of video games, and I hope to use this space to write a bit more about them (and some other culture I love). Indeed, I have a few long-term projects ongoing, one of which is to very, very slowly read all the Hugo Award winners (even those written by bad people). I’ve recently opened a StoryGraph account and have added my remaining Hugo books to my “to-read” shelf.

A new project is to slowly play or replay all of the Final Fantasy games through the PS3-era now that I’ve purchased the Pixel Remaster versions. I may skip Mystic Quest (which I feel like would just be a slog without a remake available), 7 (which I replayed shortly before Remake was released), 11 (just can’t see myself playing an aging MMO), and 12 (which I replayed when the Zodiac Age was released), but I will plan on posting some thoughts when I get to those. I have never played FFII or FFIII, so those will be completely new experiences for me.

The original Final Fantasy holds an awkward place in my journey as a gamer. It was the first RPG I ever played, but I never owned it. My memories of it are vague and hazy, since I never played it for very long when it was first released. In fact, I think I played it after encountering what was then Final Fantasy II (IV) and Mystic Quest at my friend Thomas’s house. Even at the time, we considered it a curio, something pretty basic compared to the sweep of Final Fantasy IV. There’s no plot to speak of, the characters are just blank slates, and NPCs either say nothing or cryptically send you to your next destination. Even Mystic Quest had more to grab the player in terms of its plotting and characterizations, the latter of which has always been the highlight of the series. Indeed, what I missed most from this game was the silliness and humor of later entries (something completely missing from the latest game as well).

That said, what strikes me most returning to the game decades after its release is just how playable it remains. The Pixel Remaster’s quality-of-life additions go some way to easing the modern player into the game. The most important and significant change (beyond the map and auto-battle) is that you now auto-target a new enemy when one is defeated (I assume this was introduced in other re-releases before the Pixel Remasters). In the original, if you had targeted an enemy that disappeared, you would strike the empty space. This actually required quite a bit of strategy that has been lost in the new versions. At the same time, I found the game fairly breezy. I did use a guide to help direct me when I didn’t want to just meander the world looking for the next destination, but there is sufficient signposting that I didn’t have to do so every second. Such wandering leveled me up enough to move through the game without much difficulty (though that did make me wonder if the difficulty had been turned down in this edition). It’s no wonder this game became a template not only for its series but for an entire genre. Thirty-five years later it’s still perfectly enjoyable to play.

Spoilers, such as they are, after the break.

Continue reading “Neverending Fantasy: A Final Fantasy Replay Journal Part 1”

My 2024 Reading List

One of my resolutions last year was to just read more for pleasure, and indeed fiction proved one of my most joyful escapes in what was a pretty rough year. I kept things fairly light, with a lot of genre fiction I had been meaning to get to for a while now (I have the long-running goal of reading all the Hugo Award winners). I think I had wanted to read a couple dozen books, and I got close. If we count Moon Witch, Spider King as a few books, considering its length, I think I can say I achieved my goal.

Here’s my list of books I read (not counting any for work), with some scattered thoughts.

  1. Nettle and Bone by T. Kingfisher – My hot take is that while the shenanigans around Babel at the 2023 Hugo Awards were deplorable, the better book actually won.
  2. Network Effect by Martha Wells – How many series just get better and better?
  3. Orlando: A Biography by Virginia Woolf – Finally knocked this one off my to-read list and am looking forward to watching the Tilda Swinton adaptation this year.
  4. Pietre le Letton by Georges Simenon – First in a couple of old-school mysteries I read this year.
  5. The Big Sleep by Raymond Chandler – The second.
  6. Number Go Up by Zeke Faux – I normally avoid pop history/reporting turned into books, but this was a funny, lucid explanation of the bs behind crypto.
  7. Red Team Blues by Cory Doctorow
  8. Count Zero by William Gibson – Will finish this trilogy this year.
  9. Tripping on Utopia: Margaret Mead, the Cold War, and the Troubled Birth of Psychedelic Science by Benjamin Breen – I blogged about this one.
  10. Stand on Zanzibar by John Brunner
  11. To Your Scattered Bodies Go by Philip José Farmer
  12. Everyone Knows Your Mother is a Witch by Rivka Galchen – The best historical fiction I’ve read in a good while.
  13. Just Kids by Patti Smith – As memorable as everyone said it would be.
  14. The Deep Sky by Yume Kitasei
  15. Tomorrow and Tomorrow and Tomorrow by Gabrielle Zevin – Entertaining, but my gamer brain kept telling me that the central creation of the story couldn’t have been made before the Indie game surge of the 2000s.
  16. The Kamogawa Food Detectives by Hisashi Kashiwai – Made me so hungry.
  17. Moon Witch, Spider King by Marlon James
  18. Near Strangers by Marian Crotty
  19. Slouching Toward Bethlehem by Joan Didion
  20. The Talented Mr. Ripley by Patricia Highsmith

Paris and the Public Urinal

The “Homewood Privy, c. 1801,” Johns Hopkins University Homewood Campus, Personal Photograph.

One of my ongoing academic obsessions, since they were the subject of my very first publication, is the history of public urinals. I even got to help out with a memorial plaque that’s going up in Paris at one of the last remaining pissotières in the city (not sure if it’s up yet). I can’t help but notice them, especially historic ones like the one on Johns Hopkins’s campus in Baltimore pictured above, when wandering around a city. But I notice them when they pop up elsewhere, too. Right now I’m reading Patricia Highsmith’s The Talented Mr. Ripley for the first time, and when Tom first arrives in Paris he describes what he notices:

It was the atmosphere of the city that he loved, the atmosphere that he had always heard about, crooked streets, gray-fronted houses with skylights, noisy car horns, and everywhere public urinals and columns with brightly colored theater notices on them.

The public urinal indelibly marked the city, here as one part of its very modernity. The Talented Mr. Ripley first appeared in 1955; by the 1980s, the classic pissotières were removed in favor of the self-cleaning (and pretty gross) facilities that now dot the Parisian landscape.

They Rule Our World

On my flight back from the annual meeting of the Western Society for French History, I was seated next to a woman who struck me as the quintessential representative of San Francisco. I didn’t quite catch all the details, but needless to say she is quite wealthy and lives near Jack Dorsey, runs a foundation, is involved in multimillion dollar research and charitable endeavors, and is quite enthusiastic about both spiritualism and the possibilities of AI. Some of my least favorite words — “influencer,” “thought leader” — were used un-ironically. She was very nice and, though I am not someone who wants to chat with strangers on a plane, seemed genuinely interested in my work and my experience in San Francisco. But she also expressed surprise when I described as “creepy” the idea of putting my research into an AI chatbot so that readers might have a “conversation” with AI-me, the implication being that I was this weird luddite behind the times. The confidence she expressed not only that this kind of tech was the future, but that it could be harnessed by her and her cohort to solve both our material and our spiritual problems typified what I know of the world of Silicon Valley and especially its current role in our politics.

I was reminded of my chat after reading a recent article in The Atlantic, going around Bluesky, on various issues facing business school research. The article focuses on the aftermath of the discovery that a major figure in the world of business psychology had used fraudulent data in their research. With a subtitle that read, “The rot runs deeper than almost anyone has guessed,” my initial impulse was to just quip “I could have guessed” and move on. To anyone with a passing familiarity with the difficulties and problems of behavioral psychology in general and of business schools specifically (or just to listeners of If Books Could Kill), the idea that many of the conclusions of this world are, to be generous, a bit suspect is not that surprising.

One of the research conclusions that the Atlantic article describes as now being put into question is that doing a small routine (or “ritual”) before a presentation can help the performance of the presentation. As the Atlantic documents, though this idea is regularly cited in the literature, the data underpinning it has now been shown to have been manipulated.

When I got to this part of the article, I came up a bit short. That’s because one of the things my airplane neighbor told me — and that she suggested should be spread far and wide — was that research showed that teachers who did just a small act of meditation or reflection every day before entering the classroom saw huge gains in their students’ performance. Students who had teachers who did this, she told me, had their GPAs rise by something like two points.

Obviously, this doesn’t exactly sound right! But it’s the kind of “life hack” — as the Atlantic terms it — that is so central to these kinds of studies, ones we now know not only to be empirically suspect but also often to rest on fraud. And here’s the thing: the people who believe it, like this philanthropist, are the ones with the money, means, and ability to shape our world. They are the ones crafting solutions — she was on her way to pitch San Francisco as the site of a major study on homelessness — that rest on essentially made-up conclusions. The story, it seems to me, is not simply that these fields need, like their peers in psychology, to enact methodological reform and to rethink their research incentives, but the influence the simple answers they provide hold over policymakers and others with widespread influence over our society. We’re about to see this at its most extreme with Musk and Ramaswamy, but it’s not as if Democrats are immune. The model described in the Atlantic quite literally rules our world.

New Article: Josephine Butler in Paris

I’ve been working on an article examining Josephine Butler’s campaign against morals policing in Paris for quite a while now and it’s finally out. This article, the first published section of my new book project, explores how Butler’s advocacy against regulated prostitution shaped and was shaped by her time in Paris during the 1870s. It attempts to connect her arguments against regulated prostitution to a nascent critique of policing in democratic society more broadly, while also highlighting her indebtedness to a multivalent discourse around race in the early Third Republic.

Please feel free to reach out if you would like a copy of the article and are not able to access it via the link above.

Digital Exhibits on “Gender, Race, and Class in Modern Europe”

The last time I used WordPress for an assignment was in my old job at the University of Southern Mississippi where I taught a course I called “History in the Digital Age.” I decided to try to incorporate the use of WordPress for a simple digital exhibit into a more traditional course this semester, “Gender, Race, and Class in Modern Europe.” This majors-level course covered selected themes in European history through the perspective of marginalized peoples, primarily as it related to changing definitions of citizenship. I needed an assignment that incorporated independent research and writing, but I did not want to assign a standard paper. This was mostly just to vary things for my students, many of whom had already taken a course with me where a standard research paper was the assignment. I also think that basic knowledge of WordPress is a useful skill for everyone to have.

Continue reading “Digital Exhibits on “Gender, Race, and Class in Modern Europe””