Book review: May I Ask a Technical Question? by Jeff Krinock and Matt Hoff
The authors ground their discussion in research and terminology, and offer a useful appendix with nearly 40 key terms in the context of larger technological issues. While the book is self-published and seemingly unreviewed elsewhere, it is worth reading, as it encourages a more mindful, cautious approach when it comes to embracing new digital technologies.
In the book, Krinock and Hoff say they
are not trying to paint digital technology as being in and of itself a ‘bad’ thing; [they]’re focused on encouraging people to consider where, when, how and why we embrace all things digital, and [they]’re especially concerned about the countless situations in which we’re not even given a chance to weigh for ourselves the merits versus the costs of moving human tasks into the digital realm (83).
In other words, they want to encourage a more careful, thoughtful evaluation of the technology we incorporate into our lives.
Krinock draws on his past experience as a pilot to illustrate some of the potential drawbacks of technology. As pilots rely increasingly on “digital nannies” to monitor and detect issues during flight, or even hand control over to the autopilot entirely, they lose situational awareness and the ability to act skillfully when things go wrong. Krinock writes:
Digital technology gave me “eyes” in the night that no human eyes could match and took away elements of my inherent situational awareness—a type of awareness that needs regular care and feeding (through constant practice) in order to be effective at times of emergencies (37).
Krinock and Hoff introduce the term digital reliability, “a new definition that goes beyond statistical analysis of computer task failures and includes an analysis of how, where, and when digital technology causes human pain or displacement” (42). In other words, digital reliability asks you to assess not just how reliably a new digital technology fulfills its intended purpose, but also whether it carries unintended consequences that might detract from, rather than enhance, our humanity, freedom, or sense of self.
Instead of assuming that new technologies always take us one step forward in human progress, Krinock and Hoff say “we need to measure the digital wonders showing up continually in our lives not simply by their abilities and the tasks they perform for us, but also … measure and consider what human and social tasks, abilities, traditions, skillsets, and opportunities they displace” (39).
In a relatively short essay (112 pages, followed by a 40-page terminology appendix), the authors don’t delve into many concrete examples. More examples would have brought the text to life, but the authors do ground the discussion in some valuable classics, such as Mary Shelley’s Frankenstein, and in the work of authors including Neil Postman, Langdon Winner, Jean-Pierre Dupuy, Joseph Weizenbaum, and others. Much of the emphasis falls on the dangers of surveillance, including “sousveillance,” the recording of activity from the participant’s own perspective rather than through top-down monitoring.
Even without more specific examples, the ever-burgeoning list of current issues provides plenty of conversation topics: social media fueling low self-esteem, anxiety, and isolation; AI replacing workers; cars damaging the climate and destroying pedestrian-friendly cities; algorithms leading to gender and racial discrimination; content-creation tools amplifying marginalized voices while at the same time exacerbating disinformation. The list goes on.
Amid such an awakening to the double-edged influence of tech, May I Ask a Technical Question? is a welcome encouragement to be more mindful and cautious about every new digital innovation. For example, a discussion of the latest technological novelty, generative AI tools like ChatGPT, would undoubtedly be enriched by the theoretical grounding found in this book.
Admittedly, while reading the book, I found myself wondering about the line between tech paranoia and cyber-skepticism. The former entails assuming malicious intent on the part of tech companies; the latter involves being inquisitive and weighing the repercussions of their actions. Working in the tech industry myself, I find it difficult to assess my own role in it all.
I do think the authors could provide a more balanced discussion in places. But I understand that, as cyber-skeptic literature, the book aims to raise awareness and “jolt those of us who work in science, technology, or engineering to see a bigger picture regarding potential consequences of failed or misused technology” (111).
The appendix at the end of the book is especially useful for introducing readers to the terminology of the cyber-skeptic genre. It defines and discusses nearly 40 key terms in the context of larger technological issues. This section is invaluable for empowering readers to investigate the potential drawbacks of digital technology with more precision. The following are terms that stood out most to me; I’ve included brief summaries (my own) of each.
- Control centrism:
- Belief system that advocates for centralized decision-making through digital technology to regulate and govern society and environment.
- Cybernetics:
- Science of communication and control theory, examining ways people, machines, and natural systems interact, with potential to transform us into mechanized automatons.
- Conviction that human events and movements are ruled by an unseen and impersonal force, leading to a sense of absolved responsibility.
- Digital dementia:
- Detrimental impact of over-reliance on digital devices on cognitive abilities and brain volume, causing us to forget how to do tasks that were once second nature.
- Digital reliability:
- Trustworthiness of digital technology and its ability to function without causing harm, disruption, or control of individuals and society.
- Situational awareness:
- Capacity to recognize, process, and comprehend essential elements of data about our surroundings.
- Social engineering:
- Act of attempting to shape popular attitudes and social behavior on a large scale.
- Gnomic will:
- The process of decision-making when a person does not know what they want, contrasted with natural will, which is the movement towards fulfillment without deliberation. Will is discussed in relation to digital technology and the idea that humans may have other forms of knowing and decision-making beyond the logic of digital algorithms.
- Hegelian dialectic:
- Philosophy using triadic dialectic (thesis → antithesis → synthesis) to understand reality, often used in social engineering and visible in combination of digital technology and planned obsolescence.
- Incrementalism:
- Approach of introducing political or social change in small steps, leading to unanticipated consequences such as subscription-based services and bloatware.
- Lateral surveillance:
- A form of peer-to-peer monitoring that involves the observation and reporting of the actions of others without consideration of social or political standing.
- Cyborg:
- A hypothetical human being whose nature has been improved through the use of applied science and other rational methods, creating a composite being that is part biological, part mechanical, part electronic.
- Panopticon:
- Physical structure proposed by philosopher Jeremy Bentham as an ideal way to monitor prisoners, mirrored in today’s digital technology by widespread collection of personal information.
- Participatory Panopticon:
- Metaphor for when citizens willingly subject themselves to perpetual surveillance, leading to loss of privacy and mystery and creating a system in which users are both guards and prisoners.
- Reification:
- The conversion of a concept or idea into a concrete thing, such that the human beings behind the technology are obscured and absolved of their responsibility for the technology’s success or failure. For example, in the legal system, judges may refer to the law when making a ruling, making it appear as though the law is a separate entity from the judge.
- Chilling effect:
- Effect of being aware of possibility of being watched, leading to silencing of minority opinions and weakening of internet’s capacity to be an impartial platform for open discussion.
- Technological solutionism:
- An intellectual pathology that recognizes problems based on just one criterion: whether they are “solvable” with a technological solution at our disposal. This may lead to a self-feeding cycle of technology adoption where we neglect real-life problems and focus too much on the faux solutions.
- Sousveillance:
- Act of recording activities from a participant’s perspective, as opposed to that of an observer, monitor, controller, or regulator (surveillance).
- Whig Theory of History:
- The belief that history is an inevitable upward march into the light, with progress guaranteed.
(Note: These summaries were largely assisted by OpenAI’s Playground.)
You can get a copy of May I Ask a Technical Question? on Amazon and elsewhere.
Note: Jeff Krinock sent me a copy of his book to review.