What can be done about our modern-day Frankensteins?

Can technology be tamed? Or have we already lost complete control?
Tom Simpson, CC BY-NC-ND

Adam Briggle, University of North Texas

In 1797, at the dawn of the industrial age, Goethe wrote “The Sorcerer’s Apprentice,” a poem about a magician in training who, through his arrogance and half-baked powers, unleashes a chain of events he cannot control.

About 20 years later, a young Mary Shelley answered a dare to write a ghost story, which she shared at a small gathering at Lake Geneva. Her story would go on to be published as a novel, “Frankenstein; or, the Modern Prometheus,” on Jan. 1, 1818.

Both are stories about our powers to create things that take on a life of their own.

Goethe’s poem comes to a climax when the apprentice calls out in a panic:

    Master, come to my assistance!
    Wrong I was in calling 
    Spirits, I avow, 
    For I find them galling,
    Cannot rule them now.

While the master fortunately returns just in time to cancel the treacherous spell, Shelley’s tale doesn’t end so nicely: Victor Frankenstein’s monster goes on a murderous rampage, and his creator is unable to put a stop to the carnage.

Who foretold our fate: Goethe or Shelley?

That’s the question we face on the 200th anniversary of “Frankenstein,” as we grapple with the unintended consequences of our creations, from Facebook to artificial intelligence and human genetic engineering. Will we sail through safely or will we, like Victor Frankenstein, witness “destruction and infallible misery”?

Will science save us?

In Goethe’s poem, disaster is averted through a more skillful application of the same magic that conjured the problem in the first place. The term for this nowadays is “reflexive modernity” – the idea that whatever problems technoscience creates, we can fix with more technoscience. In environmentalism, this is known as ecomodernism. In transhumanist circles, it is called the proactionary principle, which “involves not only anticipating before acting, but learning by acting.”

“Frankenstein,” by contrast, is a precautionary tale. Imbued with the impulse to transform nature, humans risk extending beyond their proper reach. Victor Frankenstein comes to rue the ambition to become “greater than his nature will allow.”

He laments: “Learn from me…how dangerous is the acquirement of knowledge and how much happier that man is who believes his native town to be the world.”

Hubris, he seems to warn, will be the death of us all.

The rise of the Silicon Valley refuseniks

This same worry over hubris appears to be taking hold among today’s scientists, engineers and entrepreneurs, many of whom seem to be getting cold feet. Having built something, they’ve turned around and denounced their own creations.

Are they like the apprentice calling for the master to rescue him? Or are they, like Frankenstein, engaged in a futile quest to squelch something that is already beyond our control?

Sean Parker has now dubbed himself a ‘conscientious objector’ to Facebook, the company he helped spawn.
Paul Sakuma/AP Photo

Consider Sean Parker. The co-founder of Napster and an early investor in Facebook recently announced his status as a social media “conscientious objector.” Facebook, he claims, is likely damaging children’s brains and definitely exploiting human psychological weaknesses.

There are more Silicon Valley refuseniks. Justin Rosenstein, the inventor of the Facebook “like” button, has deleted the app from his phone, citing worries about addiction, continuous partial attention disorder and the demise of democracy at the hands of social media. Former Google employee Tristan Harris and Loren Brichter, who invented the slot machine-like, pull-to-refresh mechanism for Twitter feeds, are both warning us about the dangers of their creatures.

Anthony Ingraffea spent the first 25 years of his engineering career trying to figure out how to get more fossil fuels out of rocks. From 1978 to 2003, he worked on both government and industry grants to improve hydraulic fracturing. His own research never panned out, but when he learned of the success of others and the magnitude of chemicals and water required, he was “aghast” and said, “It was as if [I’d] been working on something [my] whole life and somebody comes and turns it into Frankenstein.” Over the past 10 years he has become one of the nation’s leading fracking opponents. The industry that once funded him now regularly trolls and attacks him.

Jennifer Doudna is one of the main scientists behind the gene-editing technique known as CRISPR. In her new book, “A Crack in Creation,” she writes that CRISPR could eliminate several diseases and improve lives, but it could also be used in ways similar to Nazi eugenics. Doudna has revealed that she has nightmares where Hitler asks her to explain “the uses and implications of this amazing technology.”

Elon Musk worries that with artificial intelligence we are “summoning the demon.” AI is, for him, “our greatest existential threat.” Musk has supercharged Victor Frankenstein’s initial impulse to flee his abominable creation: He is working on interplanetary colonization so that we can run all the way to Mars when AI goes rogue on planet Earth.

Treating technology like a child

The anthropologist Bruno Latour chastised Musk for this kind of thing. The way Latour sees it, the moral of Frankenstein is not that we should stop making monsters but that we should love our monsters. The problem wasn’t Victor Frankenstein’s hubris but his lack of care: he abandoned his “child” rather than educating it so that it could learn how to behave.

Latour’s point is that no amount of technological advance will give us total control and a blissful detachment from the world. Instead, technology, like parenting, will always require constant tending, fretting and caring as new developments unfold.

Musk’s initiative OpenAI, which seeks to develop safer AI technologies, is closer to what Latour has in mind.

As it turns out, Latour is putting his own advice to the test. He is the creator-in-chief of the scariest monster of our times. This creature is not actually a product of science, but rather a way of thinking about science. Latour spent his career showing how scientific facts are socially constructed, and that there is no such thing as unbiased access to truth.

In short, he argues that objectivity is a sham and science is never really settled or certain.

Now, of course, he’s watching in horror as this spirit of deconstruction and distrust takes root in our post-truth age of alternative facts, climate change denialists and partisan media bubbles.

In a recent interview, Latour admitted that he now regrets his earlier “juvenile enthusiasm” in attacking science and vows to reverse course:

“We will have to regain some of the authority of science. That is the complete opposite from where we started doing science studies.”

In order to love our monsters, we have to have some basic agreement about when they are misbehaving and what to do about it. That agreement comes through widespread trust in the traditional institutions of truth: science, the media and universities. Latour sought to liberate us from the paternalism of the experts inhabiting these institutions, and it was a noble quest.

But his acid, combined with the chaos of social media and the greed of big money, has corroded things more deeply than he imagined. Now it is bias all the way down; everything is susceptible to a knee-jerk accusation of “fake news.” Climate change may be the ultimate abomination, or maybe it’s a hoax. Who can tell? The skepticism-induced paralysis is hardly conducive to chasing monsters.

Adam Briggle, Assistant Professor of Philosophy and Religion Studies, University of North Texas

This article was originally published on The Conversation. Read the original article.
