A Drop of Ink: A Great Promise Turns Into Great Threat

By Reed Anfinson
Publisher
“Achieving a more transparent and less manipulative media may well be the defining political battle of the 21st century,” Stephan Lewandowsky and Anastasia Kozyreva write in an article published by OpenMind magazine. 
At its inception, the internet was seen as a source of news that would free millions of people in repressive countries from state-controlled information, giving them a voice to break oppression’s bonds. In America, it would free us from the gatekeepers at major news outlets and democratize the spread of information with each person having an equal voice, strengthening democracy’s base. It’s proven to be a false promise.
“Today, democracy is in retreat, and the internet’s role as driver is palpably clear,” Lewandowsky and Kozyreva write. “From fake news bots to misinformation to conspiracy theories, social media has commandeered mindsets, evoking the sense of a dark force that must be countered by authoritarian, top-down controls.” They see the internet as “both savior and executioner of democracy.”
Its role as executioner is built into its profit model: earning profits off the profiles of its users. The “platforms exist to sell information about their users to advertisers, thus serving the needs of advertisers rather than consumers,” Lewandowsky and Kozyreva write.
We pay for our access by sacrificing our privacy. We pay by allowing internet companies to build complex profiles on each of us that they then use to sell us goods, services, and self-serving information. We are targeted and manipulated, subtly and overtly, and not just to buy things. Our political beliefs are entrenched, and too often inflamed, by half-truths and outright lies spread for profit.
As their technological sophistication improves, their profiles of each of us deepen. We are willing victims of advertising and messaging by platforms “incentivized to align their interests with advertisers, often at the expense of users’ interests or even their well-being,” Lewandowsky and Kozyreva write. With each purchase, each friend requested and each one granted, each click on a link, and each share, they fine-tune our profiles.
The internet platforms know how to capture our attention and hold it with their sophisticated algorithms. They provide information that rewards our need for approval, reinforces what we already believe, and feeds what makes us angry.
“Following recent revelations by a whistle-blower, we now know that Facebook’s newsfeed curation algorithm gave content eliciting anger five times as much weight as content evoking happiness,” Lewandowsky and Kozyreva write. Its tactics exposed, Facebook has apparently changed its algorithms, but none of us knows for sure; we don’t get to verify what its technology does or doesn’t do.
 “A study by Mozilla researchers confirms that YouTube not only hosts but actively recommends videos that violate its own policies concerning political and medical misinformation, hate speech, and inappropriate content,” Lewandowsky and Kozyreva write. Mozilla is the not-for-profit organization behind the Firefox browser. “In pursuit of our attention, digital platforms have become paved with misinformation, particularly the kind that feeds outrage and anger,” they write.
Our politicians then use the same attention-grabbing techniques, feeding us fear and outrage to gather support. In the process, they degrade our democracy. Their twisting of citizens’ beliefs extends into the farthest corners of American life, infecting the city dweller surrounded by millions and the rural resident alone on a farm.
Lewandowsky and Kozyreva argue that social media and internet platforms must give us the tools to judge whether they act fairly and in the public interest.
“Protecting citizens from manipulation and misinformation, and protecting democracy itself, requires a redesign of the current online ‘attention economy’ that has misaligned the interests of platforms and consumers,” Lewandowsky and Kozyreva write.
There must be greater transparency and more individual control of personal data. When asked, most people support their right to privacy online. They agree their information should be shared only with their permission. Further, they believe that permission can’t be coerced as a condition of access to a site.
We must not be manipulated based on the “health, sexual orientation, or religious and political beliefs” that internet companies have gathered about us online.
We need to know if what we are reading is from a trustworthy site or if we are being manipulated by emotionally charged false content. We have the right to know who is putting out the information: Is it a political campaign, a religious organization, or a lobbying group for guns, immigration policy, climate action, or healthcare?
“Democracy is based on a free marketplace of ideas in which political proposals can be scrutinized and rebutted by opponents; paid ads masquerading as independent opinions distort that marketplace,” Lewandowsky and Kozyreva write. 
We have the right to know “exactly how algorithms curate and rank information” they provide us. Too often, we only find out how we are manipulated when one of their employees becomes a whistle-blower or when independent researchers discover troubling patterns.
Lewandowsky and Kozyreva argue that independent auditing agencies should be allowed to examine the algorithms of online companies to identify the harm they can cause. “Outside audits would not only identify potential biases in algorithms but also help platforms maintain public trust by not seeking to control content themselves,” they say.
“In liberal democracies, regulations must not only be proportionate to the threat of harmful misinformation but also respectful of fundamental human rights,” Lewandowsky and Kozyreva write. “The best solution lies in shifting control of social media from unaccountable corporations to democratic agencies that operate openly, under public oversight,” they write.
Lewandowsky is a cognitive scientist at the University of Bristol in England. Kozyreva is a philosopher and cognitive scientist at the Max Planck Institute for Human Development in Berlin, where she studies the cognitive and ethical implications of digital technologies and artificial intelligence for society.
