The Harrisonburg Citizen

Community Perspective: The Life Raft of Truth and the Ocean of Lies

A Four Part Series by C. David Pruett

Photo Credit: Mike Miriello, James Madison University

PART II: The Social Dilemma

“A lie can travel halfway around the world while the truth is still putting on its shoes.”—attributed to Mark Twain

On the recommendation of friends, I recently watched the Netflix documentary The Social Dilemma. It should be required viewing for all Americans—because it exposes the roots of our national dysfunction, the paralysis that blocks solving our most pressing problems: the neglect of rural America, the loss of manufacturing jobs, structural racism, extreme economic inequity, climate change, endless wars, decaying infrastructure, you name it.

The chilling 90-minute exposé features interviews with the movers and shakers of Silicon Valley, the brilliant visionaries, programmers, and executives of the world’s tech giants: Google, Facebook, Twitter, Apple, Reddit, etc. Almost without exception, these idealists were drawn into the industry by the Internet’s promise to link the entire world in a golden age of information. And to a person, each gradually grew disillusioned—then terrified—at the digital Frankenstein they’d helped create. Where did it all go wrong?

Google, Gmail, Facebook, Twitter: these are all “free” services. But there’s no free lunch. So these companies engage in fierce competition for the attention of users.  Every incremental fraction of a second that a user remains “engaged” translates into greater ad revenues. Obscenely rich, these companies have amassed fortunes by perfecting the dark art of subtle psychological persuasion, gradually modifying your behavior and mine without our conscious awareness.

There’s a saying in the industry: “If you are not paying for the product, you are the product.” These ubiquitous digital platforms exploit sophisticated AI (artificial intelligence) algorithms to track each and every keystroke you and I make, and how long we linger on a particular link. From thousands of such seemingly innocent snippets of data, the algorithms construct a psychological profile of each “user”: our individual habits, preferences, and vulnerabilities. Put another way, the algorithms know how to push our buttons, and they do so to nudge our behaviors in the direction desired by the advertiser. All this happens without human intervention and normally without intentional malevolence. It’s all just electronic manipulation to make money. But sometimes, as revealed in another Netflix documentary, The Great Hack, “weaponized” data has been used for genuinely nefarious purposes: to tip the Brexit referendum in the United Kingdom and to sway the outcome of the 2016 election in the U.S.

Consider, for the sake of illustration, the radically different business models of Wikipedia and Google. Like public radio, Wikipedia is crowd-funded by the voluntary contributions of its readers. In contrast, Google depends on ad revenues. Any user, regardless of political persuasion or psychological profile, will see the same display on Wikipedia. Not so on Google (or any ad-driven search engine). When you search with Google, the algorithm ranks the results according to your individual profile. For example, suppose I enter “voter fraud” into the search window. Depending upon my profile, I may be directed first to the Washington Post, or to Fox News, or to QAnon; that is, to whatever site—regardless of legitimacy—will hold my attention longest and generate the most ad revenue. Over time, the accumulation of gentle algorithmic nudges can push users further and further down conspiratorial rabbit holes. To put a fine point on it, the business model of Wikipedia promotes consensual, commonly accepted truth, while that of Google, Facebook, and other advertisement-based platforms promotes alternate versions of reality. No wonder our politics is so polarized. We literally inhabit alternative “factual” universes.

There’s another saying in the industry: “Only two industries call their customers ‘users’: drugs and software.” The analogy is fitting. Outrage and indignation, emotions incited by incendiary posts, have been shown to be addictive. This too is exploited—psychological manipulation in the service of ad revenues. A 2018 MIT study published in Science revealed that fake news travels through the bloodstream of the Internet six times faster than real news. Why? Because fake news is more novel, and it elicits stronger emotions, chiefly surprise and disgust. In short, fake news is addictive, like a tiny hit of cocaine.

Feigned outrage similarly taints 24/7 cable news, whose networks compete for ratings and market share, some with little to no regard for truth.  In sum, money and outrage have poisoned the noosphere, and much of the American populace is addicted to disinformation.

“Democracy Dies in Darkness” is the sobering slogan of the Washington Post, darkness being a euphemism for ignorance. If the events culminating at the U.S. Capitol on January 6, 2021, have taught us anything, it’s that disinformation can kill democracy far faster than “darkness.” It behooves us all, for the sake of the nation, to get better at ascertaining truth. Stay tuned for Part III: Discerning Fact from Fiction.

Dave Pruett is Professor Emeritus of Mathematics at James Madison University (JMU). In addition to three decades of teaching mathematics at various levels, he worked for a decade in NASA-related aerospace research. Dave is also the author of Reason and Wonder (Praeger, 2012), the outgrowth of an award-winning JMU Honors seminar exploring the nexus of science and spirituality, which was the subject of a JMU TEDx talk in 2018.

An earlier version of this article ran in Like the Dew (https://likethedew.com).
