Praxis: Topic Modeling of Historical Newspapers

“What Is Distant Reading?”, the title of a NY Times article by Kathryn Schulz, provides one of the simplest ways to understand the topic: “understanding literature not by studying particular texts, but by aggregating and analyzing massive amounts of data.” One might wonder how to put distant reading to use. In this praxis assignment, I have used topic modeling, a distant reading approach, to analyze historical newspapers.

Newspapers that have survived the course of time are among the most valuable sources of information available to academics researching the civilizations and cultures of the past, especially in the context of historical research. Virtually all significant dialogues and disputes in society and culture throughout history were brought to light in newspapers. As early as the mid-nineteenth century, almost every town, regardless of its size, saw the establishment of at least one newspaper. Within the newspaper, every facet of the social and daily life of the people was covered, from political debate to the promotion of goods and services. To this date, no other repository has been found that gathers in one place scholarly editorials on controversial political issues, advertisements for fashionable clothing, news of major sporting events, and poetry by a local poet. In a nutshell, for contemporary historians, newspapers record the entire spectrum of human experience better than any other source, giving them access to the past unlike any other medium.

However, despite their importance, newspapers have long remained one of the most underused historical resources. Historians have found it daunting, and sometimes impossible, to study historical newspapers page by page for a specific research topic because of the enormous amount and range of material they contain. To address a single research question, a historian might have to go through hundreds of thousands of newspaper articles. Ironically, after all that effort there is still no guarantee of finding the required information.

In this praxis, an attempt will be made to uncover the most interesting and potentially important topics from a period of time by applying topic modeling to the Papers Past database from DigitalNZ. Rather than relying on historians’ expertise at the outset to identify relevant topics by going through an abundance of newspapers, this praxis takes the reverse approach: a computational method clusters the data into topics, which are then evaluated from a historian’s point of view to identify specific patterns in the dataset.

The Experiment

Dataset

Papers Past on DigitalNZ comprises the following four sections:

  1. Newspapers

This section contains digitized newspaper issues from the eighteenth, nineteenth, and twentieth centuries from the New Zealand and Pacific regions. Each newspaper has a page dedicated to details about the publication, such as the period for which it is available online. There is also an Explore all newspapers page listing the URLs of all the newspapers. Papers Past contains only a small sample of New Zealand’s total newspaper output during the period covered by the site, but it is more than sufficient for the intended term paper.

In 2015, the National Library of New Zealand incorporated into its collection a compilation of historical newspapers, published between 1842 and 1935, that was predominantly targeted at a Māori audience. To carry this out, the University of Waikato’s Computer Science Department used the digital Niupepa Archive, which was created and made accessible by the New Zealand Digital Library Project in 2000.

  2. Magazines and Journals
  3. Letters and Diaries
  4. Parliamentary Papers

Newspaper articles are the particular focus of this praxis. More specifically, the praxis will build a topic model from newspaper articles ranging from 1830 to 1845. This timeframe was selected because New Zealand announced its declaration of independence in 1835, and this praxis is particularly targeted at finding the topics that emerged in society during the pre- and post-declaration periods. Papers Past provides an excellent API that is open to all. I gathered the 12,596 newspaper articles available in the Papers Past database using the API. The data was loaded into a pandas DataFrame for further processing and for building a topic model on top of it.
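A harvest like the one described above might be sketched as follows. This is a hypothetical illustration, not the actual script: the response structure, field names, and endpoint shown in the comments are assumptions modeled loosely on the DigitalNZ v3 records API, and the sample pages are mock data standing in for real API responses.

```python
import pandas as pd

# Hypothetical sketch of flattening paged API responses into a pandas
# DataFrame. A real harvest would loop over pages with urllib or requests,
# e.g. https://api.digitalnz.org/v3/records.json?api_key=KEY&text=...&page=N
# (endpoint and field names are assumptions, not verified output).

def records_to_dataframe(pages):
    """Flatten a list of paged API responses into one DataFrame."""
    rows = []
    for page in pages:
        for rec in page["search"]["results"]:
            rows.append({
                "title": rec.get("title"),
                "date": rec.get("date"),
                "fulltext": rec.get("fulltext"),
            })
    return pd.DataFrame(rows)

# Two mock pages standing in for real API responses.
sample_pages = [
    {"search": {"results": [
        {"title": "Shipping News", "date": "1840-03-02", "fulltext": "The brig arrived..."},
        {"title": "Land Sale", "date": "1841-07-15", "fulltext": "Acres for sale..."},
    ]}},
    {"search": {"results": [
        {"title": "Court Report", "date": "1843-01-09", "fulltext": "The prisoner was tried..."},
    ]}},
]

df = records_to_dataframe(sample_pages)
print(df.shape)  # (3, 3)
```

With the full harvest, the resulting DataFrame would hold one row per article, ready for cleaning and topic modeling.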

I will not discuss the nitty-gritty of model building and other technical details in this article. Instead, I will focus on evaluation and discussion.

Topic Visualization

The visualization is interactive. If you want to check out the visualization, please follow the URL below.

https://zicoabhidey.github.io/pyldavis-topic-modeling-visualization#topic=0&lambda=1&term=

Evaluation and Discussion

The topic model results in this praxis were evaluated by browsing online for the history of New Zealand during the period 1830–1845, along with general intuition; a historian with specialist knowledge of New Zealand’s history might have judged them better. Some of the topic groups that emerged from the results are presented, with explanations, in Table 1 for the gensim LDA model and Table 2 for the gensim Mallet model.

Topics | Explanation
“gun”, “heavy”, “colonial_secretary”, “news”, “urge”, “tax”, “thank”, “mail”, “night” | Implying political movement and communication during the pre-independence declaration period.
“bill”, “payment”, “say”, “issue”, “sum”, “notice”, “pay”, “deed”, “amount”, “person” | Business-related affairs after the independence declaration.
“distance”, “iron”, “firm”, “dress”, “black”, “mill”, “cloth”, “box”, “wool”, “bar” | Representing industrial affairs, mostly related to garments.
“vessel”, “day”, “take”, “place”, “leave”, “fire”, “ship”, “native”, “water”, “captain” | Representing maritime activities or war from a port city like Wellington.
“land”, “acre”, “company”, “town”, “sale”, “road”, “country”, “plan”, “district”, “section” | Representing real-estate-related activities.
“year”, “make”, “receive”, “take”, “last”, “state”, “new”, “colony”, “great”, “give” | No clear association.
“sail”, “master”, “day”, “passage”, “auckland”, “port”, “brig”, “passenger”, “agent”, “freight” | Representing shipping activities related to Auckland port.
“say”, “go”, “court”, “take”, “kill”, “prisoner”, “try”, “come”, “witness”, “give” | Representing judicial activities and crime news.
“boy”, “pull”, “flag_staff”, “mount_albert”, “white_pendant”, “descriptive_signal”, “lip”, “battle”, “bride”, “signals_use” | Representing traditional stories about Māori myth and legend regarding Mount Albert.

Table 1: Some of the topics and explanations from the gensim LDA model

Topics | Explanation
“land”, “company”, “purchase”, “colony”, “claim”, “price”, “acre”, “make”, “system”, “title” | Representing real-estate-related activities.
“native”, “man”, “fire”, “captain”, “leave”, “place”, “officer”, “arrive”, “chief”, “make” | Representing news regarding the New Zealand Wars.
“government”, “native”, “country”, “settler”, “colony”, “man”, “act”, “people”, “law” | Representing news about the sovereignty treaty signed in 1835.
“mile”, “water”, “river”, “vessel”, “foot”, “island”, “native”, “side”, “boat”, “harbour” | Representing maritime activities from a port city like Wellington.
“settlement”, “company”, “make”, “war”, “place”, “port_nicholson”, “settler”, “state”, “colonist”, “colony” | Representing news about Port Nicholson during the war in Wellington, 1839.

Table 2: Some of the topics and explanations from the gensim Mallet model
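For readers curious about the pipeline the tables above summarize, a minimal sketch of the preprocessing that typically precedes a gensim topic model is shown below. The stopword list and articles are toy stand-ins for the real corpus; the gensim calls that would follow are shown only as comments, since the post deliberately skips the model-building details.

```python
# A minimal sketch of topic-model preprocessing: lowercase, tokenize,
# and drop stopwords, producing the token lists a gensim LDA/Mallet
# model would consume. Stopwords and articles are toy stand-ins.

STOPWORDS = {"the", "a", "of", "to", "and", "in", "was", "for"}

def preprocess(text):
    """Lowercase, tokenize on whitespace, drop stopwords and short tokens."""
    tokens = [t.strip('.,;:"\'').lower() for t in text.split()]
    return [t for t in tokens if t not in STOPWORDS and len(t) > 2]

articles = [
    "The vessel arrived in the harbour and the captain went ashore.",
    "Acres of land for sale in the district, apply to the company.",
]

docs = [preprocess(a) for a in articles]

# With gensim, the next steps would look roughly like:
#   from gensim.corpora import Dictionary
#   from gensim.models import LdaModel
#   dictionary = Dictionary(docs)
#   corpus = [dictionary.doc2bow(d) for d in docs]
#   lda = LdaModel(corpus, num_topics=10, id2word=dictionary)

print(docs[0])  # ['vessel', 'arrived', 'harbour', 'captain', 'went', 'ashore']
```

The topic-term lists in Tables 1 and 2 are exactly the kind of output the commented `LdaModel` step would produce once trained on the full corpus.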

PRAXIS: Tinkering and Text Mining

Since starting my academic journey in DH over two years ago, I’ve been awaiting the moment when I’d get to learn about text mining and analysis tools. I’ve worked in the “content” space my entire career, and I’ve always been interested in the myriad tools out there that allow for new ways to look at the written word. I spent nearly a decade as an editor in the publishing world, and though I never leveraged an actual text analysis tool, I jerry-rigged my own approach for scouring the web for proper usage whenever I was confused about how best to render a phrase. My go-to text analysis “hack” has always been to search Google for a phrase I’m unsure of in quotes, adding “nytimes.com” to the query. This is based on my trust that the copyeditors at the NYT are top notch, and that whatever usage they use most often is likely correct. For instance, if I encounter the usage of “effect change” in some copy I’m editing and I’m not sure whether it should be “affect change,” I do two separate searches in Google.

  1. “affect change” nytimes.com
  2. “effect change” nytimes.com

The first search result comes up with 72,000 results. The second result comes up with 412,000 results. Thus, after combing through the way the term is used in some of the top results, I can confidently STET the use of effect change and move on without worrying that I’ve let an error fly in the text. I’ve used this trick for years and it’s served me well, and it’s really as far as my experiments in text mining have gone until this Praxis assignment.

Diving into this Praxis assignment, I was immediately excited to see the Google NGram Viewer. I had never heard of this tool despite working in a fairly adjacent space for years. Obviously, the most exciting aspect of this tool is its absolute ease of use. It runs on simple Boolean logic and spits out digestible data visualizations immediately. I decided to test it out by using some “new” words to see how they’ve gained in published usage over the years. I follow the OED on Twitter and recall their annual new words list announcement, which for 2022 was produced as a blog post doing its best to leverage the newest additions in its text. You can read the post here: https://public.oed.com/blog/oed-september-2022-release-notes-new-words/

The NGram has a maximum number of terms you can input, so I chose the words and phrases that jumped out at me as most interesting.

The words I chose from the post were (in order of their recent frequency as spit out by NGram): jabbed, influencer, energy poverty, side hustle, top banana, Damfino, mandem, and medical indigency. As you can see, most of these terms are quite new to the published lexicon — all but “jabbed.” However, jabbed in the early 20th century likely had more to do with boxing literature than vaccinations.

Moving along in this vein, I then looked up the “word of the year” winners dating back the last decade. These words were: omnishambles, selfie, vape, post-truth, youthquake, toxic, climate emergency, and vax. 2020 did not have a word of the year for reasons I suspect have to do with the global pandemic. Looking at the prominence of these words in published literature over the years showed a fairly similar result as the “new” words list.

What I found surprising is that these words and phrases are actually “newer” than the ones I pulled from the new words list. There’s barely a ripple for all these words outside of “toxic,” which has held popular usage for over a century now according to NGram.

Needless to say, as a person who routinely looks up usages for professional purposes, I’m elated to discover this tool. It will not only help me in my DH studies, but will also assist me in editorial work as I look for the more popular usage of terms. Instead of having to use Google’s own search engine and discern the results myself, I can now see simple visualizations that prove one usage’s prominence over another.

NGram is well and good, but I could tell this was a bit of a cop-out when it came to learning the ins and outs of text mining. So I decided to test out Voyant Tools to see if I could get a handle on that. As was noted in the documentation, it is best to use a text I am familiar with so I can make some qualitative observations on the data it spits out. I decided to use my recently submitted thesis in educational psychology, as there’s likely not much else I’m more familiar with. My thesis is titled “User Experience Content Strategies for Mitigating the Digital Divide on Digitally Based Assessments.” Voyant spat out a word cloud that basically spelled out my title via word vomit in a pretty gratifying manner.

This honestly would have been a wonderful tool to leverage for the thesis itself. As I tested 200 students on their ability to identify what certain accessibility tools offered on digital exams do, I had a ton of written data from these students and I could have created some highly interesting visualizations of all the different descriptive words the students used when trying to describe what a screen reader button does.

I’ve always known that text analysis tools existed and were somewhat at my disposal, yet I’ve never even ventured to read about them until this assignment. I’m surprised by how easy they are to get started with and am excited about leveraging more throughout my DH studies.

Praxis assignment: how to ‘read’ a book without reading it

For my text mining praxis assignment, I decided to use Python’s Natural Language Toolkit (NLTK), a natural language processing (NLP) library. Further to last month’s text analysis workshop, I thought it would be a good idea to put into practice what I had learnt.

I picked Jane Eyre, a book I have read a few times in both the original language and a couple of translations, to ensure I could review the results with a critical eye. The idea was to use NLP tools to get an understanding of the sentiment of the book and a summary of its contents.

In an attempt to practice as much as possible, I opted for an exploratory approach to trial different features and libraries. At the bottom of this post, you will find my Jupyter notebook (uploaded on Google Colab) – please note that some of the outputs exceed the output space limit, so you might need to open them in your editor. In terms of steps undertaken, I created a way to “assess” words (positive, negative, neutral) and ran a cumulative analysis on large parts of the text to get a sense of how the story develops. Separately, I wrote a function to summarise the book, which brought its length down from 351 pages to 217 (preface, credits, and references included) – although I am not sure about the quality of the result! Here is this “magic” summary in PDF!
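The word-assessment and cumulative analysis described above might look roughly like this. This is a simplified stand-in, not the notebook’s actual code: the tiny positive/negative word lists are invented for illustration (a real run would use a full lexicon such as NLTK’s VADER), and the chunks are toy sentences rather than passages from Jane Eyre.

```python
# Simplified sketch of cumulative sentiment analysis: score each word
# against tiny hand-made lexicons, then track a running total chunk by
# chunk to see how the "sentiment" of a story develops.

POSITIVE = {"love", "happy", "joy", "kind", "bright"}
NEGATIVE = {"cruel", "sad", "fear", "dark", "cold"}

def score_word(word):
    """Return +1 for positive, -1 for negative, 0 for neutral words."""
    w = word.lower().strip('.,;:!?"\'')
    if w in POSITIVE:
        return 1
    if w in NEGATIVE:
        return -1
    return 0

def cumulative_sentiment(chunks):
    """Running sentiment total after each chunk of text."""
    totals, running = [], 0
    for chunk in chunks:
        running += sum(score_word(w) for w in chunk.split())
        totals.append(running)
    return totals

chunks = [
    "A cold and cruel morning filled her with fear.",
    "Yet a kind word brought joy and a bright, happy hour.",
]
print(cumulative_sentiment(chunks))  # [-3, 1]
```

Plotting those running totals over a whole novel gives the kind of “how the story develops” curve the notebook produces.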

Clearly the title of my blog post is meant to be provocative, but I can see how these algorithms could be used as shortcuts!

Before diving into the notebook, though, I would like to share a few thoughts on my takeaways from text analysis. On the one hand, it is impressive to witness the extent to which the latest technologies can take unstructured data, interpret it, translate it into a machine-readable format, and then make it available to the user for further analysis or manipulation. Machines can now perform information extraction (IE) tasks by mining meaningful pieces of information from texts, with the aim of identifying specific data, or targets, in contexts where the language used is natural and the starting dataset is either semi- or fully unstructured. On the other hand, I personally have concerns about the fact that text mining software performs semantic analyses that often leverage only a subsection of the broader spectrum of knowledge. This is to say that the results produced by these technologies can certainly be valid; however, they could be limited by the inputs and related pre-coded semantics, hence potentially translating into ambiguous outputs.

There is a chance that the HTML below will not render the notebook; if so, you can download it directly from my GitHub Gist.

Cumulative distribution analysis
Positive vs negative “sentiments” as the story develops

Praxis Assignment: Text Mining with Map Lemon

A/N: This post contains a lot of information about my project, Map Lemon. If you don’t want to be deeply confused about what Map Lemon is and why it is, you can head on over to my blog at https://noveldrawl.commons.gc.cuny.edu/research/, as I’m not explaining it for the sake of brevity in this post. The corpus itself is not yet publicly available, so you’ll just have to trust me on the docs I’m using for now.

I’ll start this post with a bit of introduction on my background with text mining. I’m a Linguist. I’m a Computational Linguist. And more importantly than either of those two really nebulous phrases that I’m still convinced don’t mean much, I made a corpus. I am making a corpus, rather (everything is constantly moving, changing, and growing). It happened by accident, out of necessity—it wasn’t my original research interest but now that I’m deep in it I love it.

My corpus, Map Lemon, is #NotLikeOtherCorpuses (I’m sorry for that joke). It’s not text mined. A LOT of linguistic corpuses are text mined these days and that gets on my nerves in a real bad way. Here’s why:

Let’s use the example that you’re text mining on Twitter to get a better idea of jargon used within a niche community, since this use-case is quite common.

  1. Text mining often takes phrases out of their contexts because of the way platforms like Twitter are structured.
  2. These aren’t phrases that, generally speaking, are used in natural speech or writing. While cataloging internet speak is important, especially to understand how it affects natural S&W, we’re not cataloging as much natural S&W as a result, and I don’t think I need to explain why that’s important.
  3. It’s not situational. You’re not going to find recipes for lemonade, or directions to a lemonade stand (yes, I’m making a joke about my own research here), on Twitter.
  4. You’re often missing demographics that are unknown that can affect the content of the corpus.

I chose to do this praxis assignment to debunk, or at least attempt to, all of those things. I want text mining to work for me. It probably won’t for my use-case, but I should at least be versed in doing it!

Now let’s get into it.

I decided to text mine my own corpus. Yup. I’m at a stand-still with the results I’ve been getting and need material for an abstract submission. Here we go.

So, since my data has already been cleaned before, I went ahead and just plopped it into Voyant. The following ensued:

  • Oh, rats, I forgot I can’t just do that with all the demographics and stuff in there; that’ll confuse Voyant.
  • Okay, copy the file. Take out all the non-responses. Might as well separate them into their respective experiments while I’m at it.

So, the final version of what I’m analyzing is: 1) Just the directions to the lemonade stand 2) Just the recipes for lemonade. I’m not analyzing the entire corpus together since it wouldn’t yield coherent results for this specific purpose due to the difference in terminology used for these two tasks and lack of context for that terminology.

The results from the directions were really neat in that you can follow the correlations and word counts as directions given, basically. Here’s the map from Experiment I so you can follow along:

Here are the most common phrases in the directions given, according to analysis with Voyant:

  • “before the water fountain”; count: 4
  • “take the first left”; count: 4
  • “a carousel on your left”; count: 3
  • Some other phrases that are all count 3 and not as interesting until…
  • “at the water fountain”; count: 3
  • “between the carousel and the pond”; count: 3

Now, okay, these numbers aren’t impressive at first glance. Map Lemon only has 185 responses at present, so numbers like this maybe aren’t all that significant, but they sure are interesting. Map Lemon contains exclusively responses from North Americans, so from this we could postulate that North Americans tend to call “that thing over yonder” a water fountain or a carousel. But also from this we can see the directions Chad gets most commonly: people often send him down the first left on the street; of the group that does not, and has him cut through the park, they let him know that he should pass the carousel on the left; and the lemonade stand is just before the water fountain. All these directions are reiterated in two different ways, so it seems. That sure is neat! Not particularly helpful, but neat.

So let’s look at those cool correlations I mentioned.

  • ‘gym’ and ‘jungle’ – correlation: 1 (strongest)
  • ‘clearing’ and ‘paved’ – correlation: 1
  • This one I’m unsure what is really meant by it, if that makes sense, but it was ‘enter’ and ‘fork’ corr. 1
  • ‘home’ and ‘passed’ – correlation: 1

These look like directions alright! Okay, of course there’s the phrase ‘jungle gym’, but we do see, okay, there’s a paved clearing. I’m sure at some point Chad has to enter a fork, although I’m a bit confused by that result, and yes, many people did have Chad pass the house. Neat!

I’m a bit skeptical of some of these correlations as well, because it’s correlating words strongly that only appear once, and that’s just not a helpful strong correlation. But that’s just how the tool works.
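That skepticism is well founded, and the mechanics are easy to demonstrate. Voyant-style correlations compare term-frequency vectors across document segments; two words that each occur exactly once, in the same segment, have identical frequency profiles and so correlate perfectly. A sketch (the word vectors below are invented for illustration, not from Map Lemon):

```python
from math import sqrt

# Why rare words can hit correlation 1.0: correlation compares
# term-frequency vectors across document segments, and two words that
# each occur once, in the same segment, have identical profiles.

def pearson(xs, ys):
    """Pearson correlation of two equal-length frequency vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Frequencies of two rare words across five hypothetical text segments:
gym    = [0, 0, 1, 0, 0]
jungle = [0, 0, 1, 0, 0]
print(pearson(gym, jungle))  # correlation of 1 (up to float rounding)
```

A single shared occurrence thus produces the same “strongest” score as a pair of words that genuinely track each other across hundreds of responses, which is why small counts make these correlations hard to trust.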

Looking at contexts wasn’t particularly helpful for the directions, as a lot of the contexts were for the words ‘right’ and ‘left’.

Now, here’s what was really freakin’ cool: the links. Voyant made this cool lil graphic where I can see all the most common words and their links. And it actually shows… directions! The 2/3 most common paths, all right there, distilled down. Try giving Chad directions for yourself and see what I mean, ‘cause it’ll probably look something like this:

Voyant’s link map for Map Lemon Experiment I

Okay, so the directions didn’t show anything revolutionary, but it was pretty darn cool. Let’s move onto the recipe analysis.

NOW THIS IS FREAKIN’ COOL!!! According to the phrase count tool, everybody makes lemonade about the same! Including using a lot of the same amounts of ingredients and even the same filler phrases!

Ingredients:

  • 1 cup sugar; count: 3 (the semantics of this compared to the other two is really interesting!)
  • 3 cups of water; count: 3
  • 4 lemons; count: 3

Filler phrases:

  • “a lot of”; count: 5
  • “make sure you have”; count: 5
  • “kind of”; count: 4 (context for this one is tricky)

Perhaps that’s a recipe I could try!

Now, okay. If we’re skinning cats, there’s not a lot of ways to skin this one, actually. We specifically chose lemonade for this experiment because it’s ubiquitous in North America and really easy. And the correlations feature wasn’t really helpful this time around for that exact reason (lack of distinguishing words). But look at this cool link map thingy!!

Voyant’s link map for Experiment II

Very helpful! You either squeeze, cut, juice, or halve your lemons—not all four (and two of those are only different out of semantics). Add several cups of water to a pitcher, and stir in (or add—again, semantics) at least a cup of sugar. Boom! There’s our lemonade recipe. This was so cool to synthesize!

At the end of this little project, I am still as annoyed with the inability to use demographics with text mining as I was before. This means that demographics for text mining for my purposes would have to be very carefully limited to control for individual linguistic variants. However, I also see the benefit in this tech; although, I definitely think it’d be better for either larger datasets or datasets that have a very controlled demographic (so that small numbers like 3 and 4 are actually statistically significant). Mostly, for now, it seems that the results just make me say “cool!”. I love the visual links a lot; it’s a really good example of graphics that are both informative and useful. I think it would be a fun side project to try and synthesize a true “All-American Man” using text mining like this. (Side note, that exact sentence reminds me that in my Computational Linguistics class in undergrad, for our final project about half the class teamed up and scraped our professor’s Twitter account and made a bot that wrote Tweets like he did. It actually wasn’t all that bad!)

I think this could potentially be valuable in future applications of my research, but again, I think I need to really narrow down the demographics and amount of data I feed Voyant. I’m going to keep working with this, and maybe also try Google Ngram Viewer.

Voyant Tools & the Paris of Benjamin and Harvey

A quick note: This post will likely be edited and refined. I’m still attempting to find a better way to share my work in Voyant Tools on here but, despite the embedding function working when I preview my custom HTML block, it never seems to function in the same way once I preview the page as a whole. Anyway, if anyone has any tips or tricks as to how to successfully integrate interactive Voyant Tools’ embeddable tools, I’d greatly appreciate it.

Having spent last semester, in the second portion of this course, clumsily employing Python in a frantic attempt to introduce text analysis, machine learning, and word embedding to the philosophy of Bernard Stiegler (specifically his Nanjing Lectures, The Neganthropocene, the unpublished Technics & Time, 4, and the collective publication Bifurcate: There Is No Alternative), I was thrilled to approach text analysis with a singular focus on the “frontend” hermeneutic experience of experimenting with a text rather than having to build out the backend to support my basic ability to do so. After waffling between a few of the provided examples of open-source text analysis applications – receiving unsolvable error codes from the JSTOR Labs Text Analyzer, and SameDiff notifications suggesting that my corpus was too large when trying to evaluate the cosine similarities between Walter Benjamin’s The Arcades Project and David Harvey’s Paris, Capital of Modernity (weird, right?) – I eventually landed on the wildly intuitive Voyant Tools, largely because it functions as suggested and allowed me a wide range of investigative potential without trudging through troubleshooting to get there.

As a result of a recently whipped-up blog post for my “Doing Things with Novels” course discussing Benjamin’s Arcades as an ideal subject for digital hypertext projects similar to those around Joyce’s Ulysses (despite each hosting its own digital graveyard of DH projects), I proceeded to explore Voyant Tools via Benjamin’s text, primarily due to its size (1,073 pages) and scope of subject matter (its convolutes ranging from the iron industry and Parisian fashion to early street lighting and Marx). After converting my PDF of The Arcades Project to a .txt file, I – ever-avoidant of my Pythonic script employing NLTK that is intended to do this very thing – scrounged around for an open-source application that allows for the removal of stop words, dabbling with sites such as Tools.FromDev before I realized that such a function could be done (with ease) in Voyant Tools. Though I had to edit the stoplist to include the French stopwords that appear throughout the piece (and initially dominated the Cirrus), such as le, la, and du, I eventually cleaned the text in such a way that its word cloud (used as somewhat of an indicator of cleanliness at this point) showed ‘Paris (3856),’ ‘Baudelaire (1319),’ and ‘time (803)’ as Benjamin’s most frequently used terms rather than ‘les.’
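The stoplist editing described above is essentially this: an English stoplist extended with the French words that dominated the Cirrus. A minimal sketch (the word lists are tiny stand-ins; NLTK ships fuller ones via `nltk.corpus.stopwords.words("english")` and `.words("french")`, and the sample sentence is invented):

```python
# Sketch of stop-word removal with a combined English + French stoplist,
# the manual step the post performed inside Voyant's stoplist editor.

ENGLISH_STOPS = {"the", "a", "of", "and", "in", "is", "to"}
FRENCH_STOPS = {"le", "la", "les", "du", "de", "des", "et"}
STOPLIST = ENGLISH_STOPS | FRENCH_STOPS

def clean(text):
    """Lowercase, tokenize, and drop words on the combined stoplist."""
    tokens = [t.strip('.,;:"\'').lower() for t in text.split()]
    return [t for t in tokens if t and t not in STOPLIST]

sample = "The arcades of Paris, le passage et la rue, in the time of Baudelaire."
print(clean(sample))  # ['arcades', 'paris', 'passage', 'rue', 'time', 'baudelaire']
```

Once the French function words are filtered, frequency counts surface content terms like ‘Paris’ and ‘Baudelaire’ instead of ‘les’, which is exactly the cleanliness check the word cloud provided.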

Somewhat disappointingly, when provided with a numerically indexed list of most-used terms, little was illuminated or surprising by such insight into the text. Beyond providing a cute and colorful arrangement of a book’s most salient and central components, I fail to really understand what “analysis” could be conducted based on a word cloud. Having read The Arcades Project, the Cirrus above did little more than confirm that a book about Paris, Baudelaire, and the experience of time amidst 19th-century capitalism was (surprise, surprise) precisely about those very things. The TermsBerry, though certainly offering more interaction and interpretation, provided similarly unremarkable results. Rather than determining the collocate frequency of obvious words such as ‘Paris,’ I chose to explore the connections that one of my favorite Benjaminian terms, phantasmagoria, has within the text, primarily due to a curiosity about the concept’s vague and only partially-formulated application within the similarly incomplete Arcades Project (Benjamin died before this “theatre of all his struggles and all his ideas” could be fully achieved). Despite Benjamin’s usage of this term being varied, pervasive, and at times lucid (as a savoring of false consciousness for the bourgeoisie, as the spectacle operating as replacement for reality, in the manifestations of and our experience with products amidst commodity culture), the phantas* TermsBerry offered few connections (besides maybe ‘Time’ and ‘Marx’) that would grant any novel insight into the application of this concept that reading the text (or scholarly analyses of the text) wouldn’t work to more effectively provide.

Inspired by my comparative attempt made earlier in SameDiff, I thought these two tools (Cirrus & TermsBerry) might prove to be more useful when analyzing the use of terms across two distinct texts. As an ideal model for comparative analysis with The Arcades Project, David Harvey’s Paris: Capital of Modernity is a similarly Marxist analysis of Haussmann-era Paris in the 19th century, employing similar cultural and political figures, objects, and phenomena – the poet-apostle of modernity, Baudelaire; the spatial forms of the Arcades; the time-space compression mentioned in my last blog; along with the transformations in public life via consumerism and the spectacle. In Harvey’s own words, his aim is, “…quite different from Benjamin’s. It is to reconstruct, as best I can, how Second Empire Paris worked, how capital and modernity came together in a particular place and time, and how social relations and political imaginations were animated by this encounter…” (2003, p. 18). All of this considered, the differences between these two theoretical frameworks begin to reveal themselves even through something as simple as a Cirrus word cloud. As one can see, Paris still remains the dominant term, but words such as ‘workers,’ ‘labor,’ ‘class,’ and ‘capital’ have risen through the ranks, illuminating Harvey’s more orthodox-Marxist analysis.

Whereas phantasmagoria had a distant assortment of useless connections under Benjamin’s TermsBerry, Harvey’s use of phantasmagoria, when the TermsBerry is expanded to include 250 terms, reveals only two collocate connections: empire and capital. Given phantasmagoria’s limited usage in Harvey’s text, we can return to the text to see exactly how these connections might have occurred:

(18-19): “Benjamin also insists that we do not merely live in a material world but that our imaginations, our dreams, our conceptions, and our representations mediate that materiality in powerful ways; hence his fascination with spectacle, representations, and phantasmagoria.”

(109): “The phantasmagoria of universal capitalist culture and its space relations incorporated in the Universal Exposition blinded even him to the significance and power of loyalties to and identifications with place.”

In each instance, ‘empire’ is located in a nearby sentence but is not directly related to the meaning of the concept. While ‘capital’ makes a certain amount of sense, it is still not enough to determine the core components of a concept purely through the development of a TermBerry. However, it is interesting that, despite Benjamin operating as a primary developer in the conceptual production and employment of phantasmagoria, I feel as if this TermBerry exercise indicates that Harvey’s application more clearly reveals a concise and understandable instance of the concept’s application.
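The collocate counting behind a TermsBerry-style display can be sketched roughly as follows. The window size and tokenization here are simplifications (Voyant’s actual parameters differ), and the sample text is a paraphrase assembled for illustration, not a quotation from Harvey:

```python
from collections import Counter

# Rough sketch of collocate counting: tally which words fall within a
# fixed window of tokens around each occurrence of a target term.

def collocates(text, target, window=3):
    """Count words within `window` tokens of each occurrence of `target`."""
    tokens = [t.strip('.,;:"\'').lower() for t in text.split()]
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[tokens[j]] += 1
    return counts

text = ("the phantasmagoria of universal capitalist culture blinded even him "
        "the empire and its capital staged a phantasmagoria of empire")
c = collocates(text, "phantasmagoria")
print(c["of"])  # 2
```

As the sketch makes plain, a word merely near the target gets counted, which is exactly how ‘empire’ can surface as a collocate without being directly related to the concept’s meaning.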

It was around this time that I realized that I could combine the two texts in the same Voyant Tools workspace. Hold your applause. ‘Paris’ and ‘Baudelaire’ still reigned supreme but Harvey’s addition to Benjamin’s tome brought the aforementioned labor-centric terms (‘work,’ ‘workers,’ and ‘class’) into the Cirrus. More than anything, this development provided me with the fun ability to compare the usage of concepts between two texts, as seen below.

For example, while each thinker employs ‘Paris’ to a similar degree, the distinction in their approaches can be seen in Benjamin’s far greater invocation of art in The Arcades Project via his usage of ‘Baudelaire’ and Harvey’s historical-materialist approach via his heightened focus on ‘city.’

This can be made clearer through the trend graph above. From this, we can assume that Harvey’s application of Marxist theory far outweighs Benjamin’s usage of such theoretical frameworks and terminology. Thus far, this comparative tool presents the most scholarly potential in analyzing the ideological, political, and theoretical underpinnings of texts. Additionally, the graph below offers insight into the stylistic methodology each thinker takes in approaching similar subjects – Benjamin, ever the flirt with terminology that invokes a sense of mysticism, is seen using such language (‘dream,’ ‘awaken,’ ‘phantasmagoria,’ ‘reality,’ and ‘enchantment’) to a far greater degree, illuminating again the material approach of Harvey and the uniquely Benjaminian style of analysis that exists in The Arcades Project.

As a last little tidbit, another interesting application of the Trends tool comes through the ability to view a text’s relative frequency of a term’s usage throughout the document’s segments. For example, if I wanted to understand, address, and analyze how both Benjamin and Harvey approached their application of Baudelaire (or how they constructed an argument using Baudelaire as a central element), I could look to the line graphs below to view the rate at which Baudelaire was mentioned and where in the text might be relevant to my investigations of the French poet. I imagine that this could be employed in a multitude of ways and the very basic functionality of this feature represented here barely scratches the surface.
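The Trends tool’s segment-by-segment line graphs can be approximated in a few lines: split the document into equal word-count segments and compute the term’s relative frequency in each. This is a minimal sketch under my own assumptions (the function name, segment count, and tokenization are mine, not Voyant’s):

```python
import re

def relative_frequency(text, term, segments=10):
    """Split `text` into `segments` equal word-count chunks and return
    the relative frequency of `term` in each -- roughly the series that
    a Trends-style line graph plots (trailing remainder words dropped)."""
    words = re.findall(r"[a-z]+", text.lower())
    size = max(1, len(words) // segments)
    freqs = []
    for i in range(0, size * segments, size):
        chunk = words[i:i + size]
        freqs.append(chunk.count(term) / len(chunk) if chunk else 0.0)
    return freqs
```

Plotting the returned list against segment index gives the shape of the graph; a spike late in the series, for instance, would point you to the portion of the text where Baudelaire dominates the discussion.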

Paris, Capital of Modernity – Baudelaire
The Arcades Project – Baudelaire

Though I feel compelled to conclude this reflection for the sake of the reader, I come to this conclusion shortly after actually realizing the potential of this tool. Having initially approached this blog critical of the insipid simplicity of a word cloud (and haunted by my semi-successful text analysis project last semester), through my further exploration of Voyant Tools (literally as I wrote this) I came to recognize the potential it offers in the comparative analysis of texts. While working with one text presents many yeah-I-already-know-thats, Voyant Tools’ open-source gift of instantaneous intertextual analysis feels like something I could not only dabble with for hours but also utilize in developing critical approaches to arguments and analyses in the future. In short, I’m excited and this tool is cool.

If I’m able, I’ll attach my Voyant Tools workspace with both The Arcades Project and Paris, Capital of Modernity here.

If that doesn’t work, click below to download Benjamin’s and Harvey’s work. You can easily upload each into Voyant Tools and enthusiastically peruse the endless possibilities that come with the text analysis of two irrelevant texts elucidating how one city briefly functioned two hundred years ago. You’re welcome.

[Colin + Gemma + Zico] on how to hook your audience: the latest in-person workshop experience

@zico @cgeraghty @gemma5teva

Last week we attended the workshop “How to Hook Your Audience”, an event held in person at the Graduate Center aimed at sharing helpful tools and strategies designed to better craft research narratives in both an informative and engaging way. The session was led by Dr. Machulak, the founder of a company that supports both scholars and professionals in bringing their research and ideas to either new or different contexts. Dr. Machulak is also a writer and editor.

The three of us individually decided to join this seminar, but immediately agreed to try something different by co-writing this post. The idea is to share our views and takeaways while avoiding repetition in describing the contents and dynamics of the seminar.

In an effort to allow the reader to compare and contrast our personal takeaways and learning experience, we came up with four questions that we decided to answer separately before fine-tuning the “interview style” blog post below.

Premise: the workshop revolved around three key points:

  1. how to deliver the same message, or argument, to different people;
  2. ways of studying and approaching these audiences;
  3. suitability of communication channels based on the above.

Questions we asked ourselves:

1. The moderator described the perfect ‘hook’ as the intersection of logos, ethos, and pathos or, in plain English, rationale (or argument), credibility, and audience. Do you agree? If so, in your view, what are the main challenges?

[Colin answers] I sort of agree. Incorporating rationale, audience, and credibility into your hook is great. But I also think the main challenge could be overthinking the product: a hook is a hook! It doesn’t necessarily have to sound too clever or anything – that can come later, possibly after you’ve already grabbed an audience’s attention. Even if the hook seems like unsophisticated clickbait, most people will still take the bait – even though they would never admit it!

[Gemma answers] Similarly to Colin, I only sort of agree. Certainly, reading the audience is important, along with ensuring that the argument presented is solid; however, credibility could be an issue. Credibility is built over years, which creates difficulties for students who might have undeniably valid theses to present but not yet the immediate confidence that would translate into authority in the eyes of the audience. Other challenges might revolve around non-native English speakers or international individuals who might find the current lingua franca an impediment to their credibility.

[Zico answers] I kind of agree. However, the challenge lies within the answer. At a glance, questions around logos, ethos, and pathos seem straightforward; nevertheless, answering these questions is rather difficult. At some point, to cater to or hook the audience, one might have to pivot and present ideas in a different way, and this might be extremely challenging.

2. How can what you have learnt at the workshop be applied to Digital Humanities?

[Gemma answers] Digital humanists, by nature, rely heavily on online platforms, which inevitably entails exposure to many different types of audiences. As with everything, the keys are the message and who the message is intended for. Hence, the intention and the crafting process are both crucial. It might sound obvious, but one’s message goes hand in hand with the communication style and the distribution channel(s) chosen. In brief: do not assume your audience understands you; do your due diligence, spend some time preparing, and do not be afraid to tailor your research to meet your listeners’ or viewers’ needs.

[Colin answers] Hooking your audience with DH is a different, but exciting, challenge. And that’s why we’re here! For instance, Dr. Machulak showed us the photograph below to showcase what a good hook looks like; in this case, the cougar was portrayed to visually represent what 2 meters (6 ft) looks like in the context of social distancing. This sign is also a great example of incorporating Aristotle’s principles of persuasion. An image, or some other form of multimedia, done right, can be a more powerful hook than words alone.

[Zico answers] The approach introduced by Dr. Machulak might be extremely helpful in pushing DH projects outside the academic universe. Since asking questions like “who have we left behind?” is at the core of DH, I believe that, by following the logos-ethos-pathos structure, serving a broader audience becomes achievable.

3. How could the tension between public and academia be addressed when trying to hook multiple communities outside academia with your research?

[Zico answers] In my opinion, the tension between the public and academia lies in expectations. The public expects results, while academic research not only has to produce them but must also address the ethics and morals surrounding the approach chosen to produce those results. To address this tension, academic research often has to rephrase its message by adopting an audience-first approach, in which results overshadow critical topics like ethics and morals. I am not in favor of walling off these critical aspects, but I do believe that a higher level of abstraction is often required.

[Colin answers] Identify the specific tension your research creates between academia and the public: most of the time, the tension is just a misunderstanding between the two parties, so addressing the miscommunication in your hook can be a great way to bridge the gap. Nevertheless, there will be times when you won’t be able to get beyond stubbornness. In those cases, you can use the Context / Audience (Broad) / Audience (Specific) approach learned at the workshop, which helps tailor your hook to your public and ensures the latter actively engages with your research and ideas.

[Gemma answers] There should be no tension in the first place; sadly, however, some form of gap exists. An academic audience might need far fewer details when it comes to technical explanations of terminology and content but could require a higher level of information on a work’s limitations and methodology. On the other hand, an audience made up of non-technical individuals might not even be aware of issues related to the methodology used, and the hook should reflect that.

With this in mind, it was interesting when a CUNY neuroscientist attending the workshop with us described how she was struggling to explain to the general public how brain waves work. It was thought-provoking, as she confessed to us how “easy” it had been to present her thesis to a purely academic audience, while she now had trouble handling non-experts’ expectations and questions.

4. Overall, what’s the most important thing you’ve learnt?

[Colin answers] Overall, Dr. Machulak presented her material well, and her qualifications on the topic were evident. I particularly liked how she asked us not to disclose any personal projects any of us would choose to share with the group. The most important thing I have learned was the idea of incorporating Aristotle’s principles of persuasion into a hook. Easier said than done, but that will stick with me.

[Gemma answers] My main takeaway revolves around the importance of being able to situate any work I would like to present in the right context, which includes ensuring that my rationale (what are my core claims and evidence?), credibility (why am I the best person to make this argument?), and audience (how will I connect with my target audience?) intersect at a point where my hook becomes effective and memorable.

[Zico answers] How to introduce academic research into a project and attract a diverse public has always been a difficult question for me to answer. In my opinion, Dr. Machulak’s idea of structuring a project by asking specific questions on the argument presented is extremely helpful. She specifically introduced the terms logos (will it support my immediate argument?), ethos (is it within my areas of expertise?), and pathos (will it resonate with my target audience?). These are all important questions to identify how to hook broader audiences.

Media in Life

As usual, this week’s readings were very thought-provoking. The readings that resonated with me were by Chatelain and Deuze et al. I appreciated Chatelain’s insight about how time is compartmentalized for most of us yet rarely brought to the forefront of our minds. It was insightful of her to point out that in the academy one is evaluated by one’s use of time, which is unusual compared with most modes of work. How time is spent, evaluated, and treated was the criterion in her academic career. She was able to shift the dialogue to a more expansive view that includes social media. Her success in using Twitter to engage many people to become actively involved in their community, as well as to connect with other communities across physical boundaries, was a successful use of social media to stimulate change. Her act of rebellion against staying in the Ivory Tower caused a cascade of actions in which scholars connected with a wide variety of fields not usually tapped, such as elementary school teachers. This dual aspect of social media, forging global connections while simultaneously stimulating the local community, is also seen in the Deuze et al. article. As they state, “…media that our children experience are […] a mixture of national, regional and global. These media can serve to maintain national allegiances and offer a view of the world that reconnects children with another history or opens a window to a new world.” It makes sense that a young adult watching a TikTok video of a person dancing or playing a prank on the other side of the world will connect to the video as well as to the person. “Ah, they are another person just like me” is the unconscious idea, no matter that they are Asian, Indian, Nigerian, or Swedish. A connection is made on a purely human level, eroding a bias and thus causing one more iota of an ism, whether racism, sexism, or any other, to fade away. Both the global and the local are affected.
Yet I balk at the human being negated, at my ontological existence being subsumed by media such that I am unable to “live a life without my wireless.” There is also an assumption that one cannot live without being fully immersed in media, to the point where my body is media. This is overkill, to say the least. “…a life in media is at once connected and isolated, requiring each and every individual to rely on their own creativity to make something out of life: not just to give it, but to symbolically produce it,” is a stretch of a statement. It reeks of being patronizing and deterministic because it assumes that we, as individuals, are helpless without media to develop ideas, identities, goals, and aspirations. Tools determine my upbringing and consequently my being; not my parents, my socialization, or my learning, but the tools. While media tools do in fact expand one’s connection to the world, they do not form the individual. People use media, and let’s hope the converse will never be true.

Reflections on “A Life Lived in Media”

Below is a deep dive (possibly complemented with periods of floundering) into Deuze, Blank, and Speers’ A Life Lived in Media. My initial aim was to investigate/interrogate their use of David Harvey’s The Condition of Postmodernity but this led to a jumble of elsewheres. I recall the issue of scope being mentioned in class recently?

In A Life Lived in Media, Deuze, Blank, and Speers advance the notion that an additional ontological turn is necessary for how we understand, interact, and individuate through media. Approaching the question of the “media life perspective,” or the “realization that the whole of the world and our lived experience” are framed by and mediated through media, through four terms, or perspectives (invisibility, creativity, selectivity, and sociability), the authors argue on behalf of the artistic autonomy afforded to us via “media life” and the “endless alternatives and versions” of self-creation that become possible should one learn to position oneself within media networks and the “always-available global connectivity” that they allow (2012, pp. 1, 36). Aware that their piece could be read as a reductive argument made unsuccessfully against the “existential contemplations” of the “panoptic fortresses of governments and corporations that seek to construct a relatively cohesive and thus controllable reality,” Deuze, Blank, and Speers seem satisfied with such a slight misreading, ultimately concluding their piece with a vapid case of “life being art” bolstered by cherry-picked quotes from the likes of Foucault, Bauman, and Nietzsche (2012, p. 37). With that said, I did like this article.

Much of what Deuze, Blank, and Speers discuss in this piece struck me as salient, familiar, and interesting, such as Harvey’s notion of flexible accumulation and Hearn’s discussion of compulsive outer-directed self-presentation. However, I feel as if whatever conclusive argument was being attempted lacked both clarity and an applied awareness of the severity of the inevitable “loss of self” through the increasingly imperceptible “Mediapolis” (Deuze, et al., 2012, p. 3). In what follows, I will attempt to outline, critique, and expand on Deuze, Blank, and Speers’ four provided terms (invisibility, creativity, selectivity, and sociability), addressing both the arguments made in their respective sections and the appropriate or misplaced employment of thinkers therein.

Invisibility

In addressing media’s ontological possibilities, the term invisibility is used by Deuze et al. to represent “the disappearance of media from active awareness” (2012, p. 1). This disappearance of massive forms of psychological power into our societal, cultural, and political background despite their ongoing creation of the world, to paraphrase Brian Arthur, is discussed in reference to David Harvey’s work on space-time relationships (Bourdieu’s symbolic violence might have found apt application here as well) and their ongoing compression through the transition to “flexible accumulation” and the “rapid deployment of new organization forms” and new technologies of production (Harvey, 1989, p. 284). This brings me to the first point of contention I had with the authors’ approach and development of their argument. In Harvey’s The Condition of Postmodernity, flexible accumulation is said to accentuate the “volatility and ephemerality of fashions, products, production techniques, labor processes, ideas and ideologies, values and establishing practices,” perhaps acting as the groundwork for Deuze et al.’s referencing of it as something of an origin for the fragmentation of self-identity that (ultimately/supposedly) leads to the “potential power of people to shape their lives and identities” to be found in the ever-evolving forms of media available amidst such precarity (Harvey, 1989, p. 285; Deuze, et al., 2012, p. 5). This suggestion that a life lived in media, that the media life perspective, offers some political, economic, artistic, or spiritual program of agency, expression, or self-realization contra the will of the market and in opposition to the logic of production will operate as the crux of my critique throughout what follows.
Harvey’s employment of flexible accumulation doesn’t operate as a function in the formation of “fragmented identities” but rather as an economic condition that begets “capital flight, deindustrialization of some regions, and the industrialization of others, the destruction of traditional working-class communities as power bases in class struggle” impacting everything from “local networks of influence and power” to the “accumulation strategies of ruling elites” (1989, p. 295). As flexible accumulation found new virtual materials of accumulation in the Digital Age, the precarity found in its nascent neoliberal form has been exacerbated by the hyper-speed at which technologies of power have advanced and continue to accelerate far beyond our capacity of understanding, leading to what Baudrillard described as a crisis of explanatory logic (1986) and what Bernard Stiegler has simply described as disruption. Perhaps I included references to these thinkers and their decontextualized concepts primarily to highlight the hazy methodology of Deuze et al. here and the occasionally ineffectual nature of academic namedropping in advancing a point. Perhaps I just did it to be cool. Moreover, in a statement intended to align their scholarly approach with Harvey’s, the authors suggest that, along with him, “we do not see people as hapless victims of this seemingly disjointed worldview”; I fail to see or understand how Harvey’s The Condition of Postmodernity distinctly participates in this notion. In fact, Harvey laments this disjointed worldview via the “fragmentation which a mobile capitalism and flexible accumulation can feed upon,” suggesting that it is difficult to “maintain any sense of historical continuity in the face of all the flux and ephemerality of flexible accumulation… [resulting in] the search for roots [ending] up at worst being produced and marketed as an image, as a simulacrum or pastiche…” (1989, p. 303).

While I know these authors are attempting to draw optimistic attention to the potentialities of individuation still existent through digital networks, their borrowing of Hjarvard’s notion of mediatization, suggesting that “media may no longer be conceived of as being separate from cultural and other social institutions,” points to what I perceive to be the complete opposite outcome. The production of the self (and each other), as advocated by Deuze et al., occurs in a compressed space in which the interweaving of simulacra into the quotidian forges a cohesive world of life and commodity such that it “conceals almost perfectly any trace of origin, of the labor processes that produced them, or of the social relations implicated in their production,” rendering any individuation of the self solely mobilized by invisible market forces (Harvey, 1989, p. 300). As I will touch on shortly, this degree of immersion in mediatization does not allow for resistance to anything but the outward protocol of these mediated structures of power, rather than addressing the way in which such protocol clandestinely sculpts life itself (Galloway & Thacker, 2007, p. 78).

Creativity

Deuze, Blank, and Speers, as they do throughout the majority of this piece, fluctuate between being on the mark and offering a hodgepodge of tech-optimism-somehow-effectuated-via-quotes-from-tech-pessimists. For example, the authors state, “When the organizing categories and principles of life are in constant motion, uncertainty reigns” (2012, p. 10). Though I would trace this uncertainty as leading to that which was mentioned briefly above (i.e., the “perfected completion of nihilism” posited by Stiegler as being the effective accomplishment of computational capitalism, or the hyperreality of Baudrillard – cool, huh?), the authors here, despite their acknowledgment (or celebration?) of the dissolving distinctions between man and machine, posit that a “life lived in media” inspires a “creative” outlook on one’s world (2012, p. 13). Deuze et al. briefly root this force of creation in James Carey’s emphasis on “the ritualistic nature of the way people use media and technology to make sense of the world,” drawing the emphasis on this potential for creation away from the “categories of media production and consumption within the parameters of the capitalist project” and shifting focus to how such technology impacts the creative potentials of those interacting with that which is produced from above. However, if we are to understand rituals as symbolic acts that stabilize and structure time, then Deuze et al.’s subsequent advocacy for increased production via widespread multimedia literacy directly opposes this notion of the ritual. The “relentless consumption” of rapidly produced and disseminated media, such that we exist within it and are unable to notice, surrounds us with “disappearance, thus destabilizing life,” to borrow the words of Byung-Chul Han (2019, p. 4). Han goes on to note that “rituals produce a distance from the self, a self-transcendence,” rather than a production-of-the-self via “a life lived in media” as advanced by Deuze et al. (2019, p. 7).
The authors’ arguments on behalf of data and information networks, positioned throughout this piece to advance the necessity of engagement with media platforms in order to continue “existence in a networked digital age,” not only negate the symbolic incapacity of such platforms to meaningfully bind people together (Stiegler’s symbolic misery, anyone?) and to restore a solid structure to time, but also seemingly fail to recognize that their political program is doing little more than advocating the melding of oneself into the digital under the “threat” (hyperbolic, sure) of nonexistence.

Perhaps the oddest element of this section is that Deuze and friends knowingly include quotes from Alison Hearn, who suggests that “social media are forms of self-branding mandated by a flexible corporate capitalist project that ‘has subsumed all areas of human life…’”, and Zygmunt Bauman, who states that people “recast themselves as commodities: that is, products capable of catching the attention and attracting demand and customers,” and yet fail to do anything substantial with these provided frameworks of thought (2012, p. 18). Rather, the authors advance a “pull yourself up by your bootstraps” program to “take advantage” of the potential of creativity provided by the media and the golden possibility of sustained existence should one find success in doing so. To semi-conclusively return to Harvey, he states presciently in The Condition of Postmodernity, “Images have… themselves become commodities. This phenomenon has led Baudrillard to argue that Marx’s analysis of commodity production is outdated because capitalism is now predominantly concerned with the production of signs, images, and sign systems rather than with commodities themselves” (1989, p. 287). As Deuze et al. advance the mastery of creative production in opposition to the forces they seem so frustratingly aware of, one must recognize that that which is created through the “life lived in media” is little more than a baby gazelle being born under the gaze of a pack of lions. To paraphrase Gramsci, when incurable structural contradictions have revealed themselves (thanks, Netflix’s The Social Dilemma!), and through their incessant and persistent efforts to maintain power despite the growing acknowledgment of their toxicity, a new “terrain of the conjunctural” will form, and “it is upon this terrain the forces of opposition organize” (Gramsci, 1971, p. 178).
The terrain of resistance to the exploitative nature of digital capitalism cannot be formed on the virtual terrain that such institutions have created and continue to maintain.

Selectivity & Sociability

For the sake of whoever has made it this far, I’ll condense these two critiques and attempt to keep it brief. In their analysis of the ways in which social systems or institutions are depicted via the media, Deuze et al. note, “All institutions are dependent on societal representation… This means that an institution’s success in the media becomes necessary for the exertion of influence in other areas of society. Therefore, all functional areas within society have learned to look at themselves through media glasses” (2012, p. 20). Through a process of exaptation, institutions and organizations, regardless of their ethos, have adopted a methodological amalgamation of market strategies, public relations campaigns, and propagandistic approaches, prioritizing (by necessity) their status and position amidst digital networks to the same degree (or perhaps even greater) that they must in the “real world.” Regarding the subsumption of societal institutions into the wider networks of media, the article does a great job detailing stances on the “non-neutrality” of such networks and their deindividuating effects, primarily through Bauman’s suggestion that “benevolent readings of networked potential of contemporary media life” can quickly lead one to engage in fallacious “internet fetishism” (2012, p. 22) and Žižek’s (optimistically framed by the authors) “being together alone” (ibid, p. 23). Seeing as I have neither the blog-space nor the comprehensive understanding of networks as such to tackle each of the authors’ rattled-off references individually, I’ll briefly detail what I found in reference to networks in outside texts I came across while trying to grapple with the arguments presented here.

In Robert Hassan’s The Condition of Digitality: A Post-Modern Marxism for the Practice of Digital Life, the author resembles Harvey in his assertion that “there is no meaningful past or future in the network, only the digital present,” suggesting that such a resultant time-space compression (à la The Condition of Postmodernity), amplified by the hyper-industrial nature of capitalism and the culture industry, has destroyed the capacity of cultural signs and symbols to linger, producing what Han has described as serial perception, or “a constant registering of the new incapable of producing the experience of duration… instead rushing from one piece of information to the next” (Hassan, 2020, p. 174; Han, 2019, p. 7). Through this process, the marketization and distribution of commodified symbols are accelerated, creating a logic in which the aforementioned institutions and their associated cultural forms are “marked by an inherent lack of originality… where culture ‘eats its tail…’”, creating an assimilated sameness that operates fluidly in an “Otherless” market – Han discusses this in The Expulsion of the Other (Hassan, 2020, p. 163). Considering this (and many more salient points made in this work that I won’t include here), networked systems of computational capitalism work to facilitate the flexible accumulation that Harvey described, rather than to act as an element in the evolution of “media as a playground for the search of meaning and belonging,” as advanced by Deuze et al. (2012, p. 5). Sure, networks might allow for novel forms of individuation and transindividuation across networked digital communities, but there is no possibility of this occurring without such connections producing raw material for programs of virtual accumulation that allow for the individuation of the network as an entity itself.

So, what’s to be done?

As per usual: I dunno.

However, I did find and partially read a wildly interesting and topical book called The Exploit: A Theory of Networks that provided some unconventional approaches to addressing the issues at hand. To keep things short, I’ll provide a quote from the work rather than trying to slyly incorporate it into a greater discussion:

“When existence becomes a measurable science of control, then nonexistence must become a tactic for any thing wishing to avoid control. ‘A being radically devoid of any representational identity,’ Agamben wrote, ‘would be absolutely irrelevant to the State.’ Thus we should become devoid of any representable identity. Anything measurable might be fatal. These strategies could consist of nonexistent action (bonding); unmeasurable or not-yet-measurable human traits; or the promotion of measurable data of negligible importance. Allowing to be measured now and again for false behaviors, thereby attracting incongruent and ineffective control responses, can’t hurt. A driven exodus or a pointless desertion are equally virtuous in the quest for nonexistence. The band, the negligible, the featureless are its only evident traits. The nonexistent is that which cannot be cast into any available data types. The nonexistent is that which cannot be parsed by any available algorithms. This is not nihilism; it is the purest form of love” (Galloway & Thacker, 2007, p. 136).

I found this bizarre, Schopenhauerian “denial-of-the-digital-will” approach to becoming-through-a-digital-unbecoming totally fascinating. As Thacker and Galloway later note, John Arquilla and David Ronfeldt, the prescient authors of Networks and Netwars, observed that where programs of resistance once focused on “bringing down the system,” many current network-based political movements have shifted their focus to developing and maintaining connections, to hyper-communication via “a life lived in media,” rather than to addressing material mechanisms of control. Much in the same way Deuze et al. advocate for individuation through media forms, Thacker and Galloway note that networks are similarly continuously expressing their “own modes of individuation, multiplicity, movements, and levels of connectivity,” developing with a rapidity that increasingly surpasses the human, creating a sense of malaise, impotency, and disempowerment. As Nietzsche notes (since the authors also enjoyed employing his thought in their finale), mankind has always “mercilessly employ[ed] every individual for heating its great machines,” degrading him to a mere “instrument of general utility” (Nietzsche, 1986, pp. 585, 593). Perhaps continuing to operate these networked machines, despite the different attitudes, protocols, and programs applied from within in an attempt to simultaneously resist and exploit the digital tools afforded to us via media, does not offer the effective, fulfilling means of self-creation that the authors suggest? Perhaps the tech-pessimists and network-skeptics Deuze, Blank, and Speers reference throughout their piece have more to offer than whatever point it is that they are really trying to get at? I’m not sure. However, to “round out” a quote from Thacker and Galloway used above (as a Deleuzian motion to “look for new weapons”):
“The set of procedures for monitoring, regulating, and modulating networks as living networks is geared, at the most fundamental level, toward the production of life, in its biological, social, and political capacities. So the target is not simply protocol; to be more precise, the target of resistance is the way in which protocol inflects and sculpts life itself” (2007, p. 78).

References

Baudrillard, J. (1986). America. Verso.

Deuze, M., Blank, P., & Speers, L. (2012). A Life Lived in Media. Digital Humanities Quarterly, 6(1).

Galloway, A. R., & Thacker, E. (2007). The Exploit: A Theory of Networks. University of Minnesota Press.

Gramsci, A. (1971). Selections from the Prison Notebooks. International Publishers.

Han, B.-C. (2019). The Disappearance of Rituals: A Topology of the Present. Polity.

Harvey, D. (1989). The Condition of Postmodernity: An Enquiry into the Origins of Cultural Change. Wiley-Blackwell.

Hassan, R. (2020). The Condition of Digitality: A Post-Modern Marxism for the Practice of Digital Life. University of Westminster Press.

Nietzsche, F. (1986). Human, All Too Human: A Book for Free Spirits. Cambridge University Press.

Yes But

The readings for this week were a bit less connected than I had anticipated: not bad, just surprising, since the other weeks’ readings revolved around one topic. The Risam reading pointed out simple concepts with profound ramifications. The design principle of ‘less is more’ applies in DH as well: without chasing the latest and most powerful tools, one can still get the work done and even make it more egalitarian. However, I noticed several points that seemed inconsistent. I appreciated the reading’s observation that “documentary culture… has been profoundly shaped by colonialism,” and its acknowledgment that being a postcolonialist while advocating for a universal implementation of computing is contradictory, yet it does not satisfactorily respond to that contradiction. It goes on to state, “At the heart of this state of affairs is the role of capital in the control of scholarly production.” Yes, but doesn’t that have to be so? Isn’t capital also a necessity for having access to digital computing or archives? Something seemed missing from the conversation.

The Michael et al. article reminded me of a Drucker article I read, The Virtual Codex from Page Space to E-space. We are in the digital realm, and it is expanding to engage us in the realm of books. Drucker argued that books are defined not by what they are but by what they do, and that to continue along that line of thinking we must access digital realms that extend what books do. The way a book works is best described with the architectural metaphor of a ‘program,’ which constitutes activities: the ‘program’ of a book is the set of activities derived from it. So it makes natural sense that Manifold, or hybrid publishing, is the next iteration of that activity.

The readings regarding Open Access gave me pause. I appreciate the concept of Open Access as a means to a more egalitarian society. However, again, something seemed missing, something I can’t put my finger on yet, a big blind spot. I appreciated the concept of allowing everyone to access anything. I think OER is a good choice for some classes, and breaking down barriers to education is important. But questions come up, some slightly below the surface where I can’t articulate them yet. For example, how are we to pay a writer? How would a writer sustain a family on that sort of livelihood? The person or people who write textbooks take several years to write one, and the sales and circulation of such books are not substantial. How will they support a family if the book is made free of charge? What if they are not full-time or tenured? Are we eradicating the profession of the writer? What happens when everything is free? Does the adage ‘too much of a good thing is not good’ apply here? There isn’t a person who knows me who has not heard me laud the tremendous wonders of free courses, from iTunes University (just closed) to the open courses at many major universities, yet of the hundreds (if not more) of people I have told, how many actually took one or listened to one? Again, something is missing, but I’m not sure what. The concept seems nice, but I have questions about the implementation and results. The readings gave me many ideas about open access, but also just as many questions.

For those interested in public access sites, OER or Open Courses here are a few:

https://www.edx.org

https://oyc.yale.edu

https://ocw.mit.edu

https://online.princeton.edu

https://online.stanford.edu/free-courses

http://pressbooks.oer.hawaii.edu/oertraining2018/

https://openpedagogy.org

Blog post (The Remix)

That word ‘remix’ keeps popping up in our readings, and every time it does, the synthesizer intro to Donna Summer’s “I Feel Love” starts playing in my head. Specifically, a remix of the song: Patrick Cowley’s 1978, nearly 16-minute version. Cowley adds his bold synthesizer, electric guitar (I think), and aggressive extra percussion riffs over the song’s unmistakable synth rhythm. The product is a musical flirt between Cowley and Summer. His additions run wild over the original track, only for Donna Summer’s warm, smooth voice to mellow things out. It was a pure labor of love for Cowley. Being a bootleg recording, his version didn’t earn him a penny, and only a handful of copies were ever pressed on vinyl. Yet it’s known today as one of the best remixes ever, and I think it is an excellent example of the approach we should take when attempting to remix a thing. In other words, you must love the thing you want to remix.

Cowley didn’t have access to the 16-track original version of “I Feel Love” that Giorgio Moroder produced in 1977. Instead, he worked with his own vinyl copy of the record. Lauren Martin, a collaborator of Cowley’s, recalls in an interview with Mixmag: “I used to stand there and watch over Patrick’s shoulder while he worked on these electronic boxes and patch-boards and I just had no idea what he could be doing…now…I realize that he didn’t have sequencers and he didn’t have MIDI. He was doing it the hardest way possible: by hand.” It cannot be overstated how painstakingly slow and tedious this process must have been compared to creating something even remotely similar with the digital tools available today. And yet he produced something that sounded like it was plucked directly from the future. Sadly, Cowley would not live long enough to see how influential his “I Feel Love” remix, along with his other productions, would become.

In the early 1970s, Cowley studied music at City College of San Francisco and made the city his home afterward. Later in the decade, he met the musical artist Sylvester, and the two quickly became collaborators and friends. Cowley played synthesizer on Sylvester’s 1978 album Step II, which includes the hugely inspirational hit “You Make Me Feel (Mighty Real).” Cowley was instrumental in creating Sylvester’s signature pulsing disco sound; probably the best example is their collaboration on the Cowley-written song “Do You Wanna Funk.” In late 1981, Cowley became ill, and doctors couldn’t figure out what was wrong with him. In truth, he was dying of undiagnosed AIDS. Patrick Cowley passed away at his home in the Castro District in November 1982. His friend Sylvester died of the same disease in 1988.

Patrick Cowley’s death from AIDS and his work on “I Feel Love” are two separate things. All of his musical work is an incredible gift to the world. But there is something about Patrick Cowley being one of the beautifully creative people cut down in their prime by a cruel, stigmatized disease that makes his remix of “I Feel Love” extra special. Not that it needs to be, because it’s so good.