Something personal

*Disclaimer*: this is not an academic post.

I would like to share with you something personal which I think has some relevance to this week’s readings on the “datalogical” turn, the “digitization of everything” and, eventually, the digitization of us.

My partner was asked to attend an online cryptocurrency course (how original these days!) and, as part of the terms of reference, he had to consent to the use of software that would analyse his interactions with other participants.

As his first video call with the course group was on speaker, I could hear most of the conversation (I promise I was not being nosy!), which was pretty uneventful: the class members introduced themselves, cracked some jokes, laughed, and chatted lightly about the newly introduced workload, probably in an effort to deflect the burden of those additional tasks that would stretch everyone’s already-stretched capacity. I ended up not paying too much attention; it seemed to me like a relatively friendly dialogue among people who did not really know each other. How wrong of me!

Shortly after the call, a report was circulated with a detailed analysis of the participants’ interactions. What I read was scary. The report quantified the number of words said by each person, the “energy” and “sentiment” attached to each word (whatever that means), the reactivity to other speakers’ statements, and some corrections to improve the “quality” of the conversation (i.e. add some breathing, pause more often between sentences, etc.). And, to stretch the word “outrageous” to its full extent, the software also rated participants. Thus, for example, a shy individual would get a very low mark (you need to speak up!), whilst a witty person would be assessed as “too engaging” (sorry mate, you’re trying too hard!).

What is the moral of this anecdote? Well, I found this, something I would have implored be left confined to an experimental setting, utterly shocking. Who defines whether a word is positive or negative? The dictionary? History? Talk shows? Arbitrary use can drive almost any word, or sentence, in either direction. Figures of speech also have nuances, and their meaning depends on how they are used; irony and sarcasm are frequently an expression of scepticism; an uncertainty in the voice can represent an only partially formed opinion... and I guess the list of examples of ambiguity could be endless.
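To make the ambiguity concrete: the simplest “sentiment” scorers are little more than word lists with no sense of context. Here is a minimal sketch of that approach (purely hypothetical; I have no idea what the course’s software actually does, and every word list and sentence below is made up for illustration):

```python
# Hypothetical illustration: a toy lexicon-based sentiment scorer.
# A fixed word list assigns polarity; context, tone and sarcasm are invisible to it.

POSITIVE = {"great", "love", "wonderful"}
NEGATIVE = {"bad", "hate", "awful"}

def naive_sentiment(sentence: str) -> int:
    """Score = (# positive words) - (# negative words), with no context."""
    words = sentence.lower().replace(",", " ").replace(".", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# A sarcastic complaint scores as glowing praise:
print(naive_sentiment("Oh great, another wonderful mandatory course."))  # -> 2
# Mixed irony lands on "neutral":
print(naive_sentiment("I hate how much I love this."))                   # -> 0
```

Real tools are of course more sophisticated than this, but the underlying question stands: somewhere, someone had to decide which words count as positive.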

So, this is the day of reckoning, this is it: the augmented version of text analysis, speech analysis, that everyone was waiting for, delivered in real time (it took the software virtually no time to produce this report)! This is AI at its core.

Why do I care? Simple: this is my very personal Room 101. AI’ed conversations that would reveal one’s insecurities in speaking, unveil gesticulation as a form of protection, highlight badly translated jokes and expose those insular cortexes, like mine, that have not yet found their full identity in one language rather than another. This AI application is what I would call forced Darwinism, where the wide adoption of a certain technology, especially in everyday life, might result in a mechanism aimed at standardizing conversations, flows and, ultimately (and sadly), thoughts.

I still have hope, though. I hope people will be “equipped” with the freedom to refuse the adoption of such a discriminatory use of speech analysis. And I hope people will be able to use this right.

4 thoughts on “Something personal”

  1. kristina

    Wow, that sounds like a disturbing experience for you both, Gemma! It is concerning that AI has also found its place in education, mental health, correctional facilities, etc., settings whose populations don’t typically have the freedom of refusal. I was reading an article about correctional facilities that keep databases of voice prints (if an inmate wants to use the phone, they agree to this). The data collection includes pretrial detainees as well. Thanks for sharing :-)

    1. Gemma S. Post author

      Hey Kristina, it sounds like speech analysis technologies have really been adopted as an almost standard practice in many aspects of people’s lives. I found it surprising how quickly businesses (from corporates to education bodies) have started to adopt AI in their BAU. In this sense, I’m not sure what might come next. I guess my question is whether this is really needed; I acknowledge that there are positive and helpful uses of AI but, at this stage, it looks like a huge data collection/“big brothering” exercise to me.

  2. Majel Peters (She/her)

    Thank you for sharing this, Gemma. It’s a fascinating example of overapplying technology because we can, not because it’s needed or even helpful, and it ends up being quite hurtful instead. Hurtful not only in the personal sense, but also in the larger context of all of the things you brought up in your blog: who gets to decide and qualify what behavior is acceptable and desirable? There’s a great Ezra Klein podcast that discusses the theory of status and how it impacts all of our interactions. It comes to mind because this AI example seems like a very blatant attempt to codify status cues based on one narrow definition of what merits status.

    https://www.nytimes.com/2022/09/13/opinion/ezra-klein-podcast-cecilia-ridgeway.html

    1. Gemma S. Post author

      Hey Majel, I agree with your view that this specific application of speech analysis technology was embedded into the learning module as a “nice-to-have” rather than a required feature. Yet, a “nice-to-have” might end up shaping something as vital as people’s way of interacting.
      And thanks for sharing the podcast. I do see the parallel between the AI anecdote and some of the aspects described in status theory, especially when the guest touches on how status exists in other people’s opinions of us, and how this makes us self-aware of our own “position” in the social hierarchy whilst working as a distorted incentive to mold our approach and social interactions.
