Is it time for a presidential technoethics commission?


Daniel N. Rockmore, Dartmouth College

A recent New York Times article highlighted the growing integration of technology into textiles, featuring a photograph of a delicate golden nest of optical fiber. The article reported that this new “functional fabric” has the added quality that it “acts as an optical bar code to identify who is wearing it.”

Is this a feature or a bug? This smart material would certainly mark a new milestone in the steady erosion of personal privacy by technology and the marketplace. Would a suit made of this material need to come with a warning label? Just because we have the technological capability to do something like this, should we?

Similar questions could have been asked about putting GPS technology in our mobile phones, sending drones into the air and placing “cookies” on our devices to dutifully record and transmit our online activity. Right now, those conversations happen in corporate boardrooms, the media, fiction and film, and academic settings. But there isn’t a broad national conversation about the ethics of the steady digital encroachment on our lives. Is it time to create a presidential commission on technoethics?

Elevating the discussion

Such a commission might take its cue from the presidential committees that have been convened over the past 50 years to study the ethical issues raised by biological research. In 2001, President George W. Bush created the President’s Council on Bioethics (PCBE) to address concerns about genomics work and genetic engineering, largely inspired by advances in stem cell research and cloning.

The PCBE couldn’t halt research, and neither can its successor, the current Presidential Commission for the Study of Bioethical Issues. But such a body does provide a high-profile forum for important conversations among a broad and changing group of scientists, ethicists and humanists. Those discussions in turn inform local, state and national policymakers about the possible ethical implications of groundbreaking research in biology.

Results take the form of commission votes and reports. For example, after six months of lively public debate, the PCBE ultimately voted 10-7 to recommend a moratorium on, rather than an outright ban of, biomedical cloning based on stem cells, an outcome that seemed to have great influence in the national conversation.

University groups and other review boards overseeing research projects look to the commission’s reports as indicators of the best thinking about moral, social and ethical considerations. At Dartmouth, for example, we have a Committee for the Protection of Human Subjects that regularly discusses wider issues during its review of research proposals that involve people. As corresponding groups at universities nationwide consider approving or suggesting modifications to proposed study designs, they can guide, or at least nudge, researchers toward the norms identified by the commission.

Turning to technology

When it comes to modern technologies, those types of conversations seem to be less dialogue and more broad pronouncement. For example, various scientists increasingly warn us in the news about the dangers of artificial intelligence.

They express concern about the coming of “the singularity,” the term popularized by inventor and technology pioneer Ray Kurzweil to denote the time at which machine intelligence surpasses human intelligence. It’s unclear how that moment would be determined: for example, while some forms of intelligence are ready-made for a “Terminator Test” (machines are already better than humans at doing arithmetic and playing Go), others (such as artistic creation or social intelligence) seem to lie outside that kind of competitive context. But the fact remains that we are thoughtlessly deploying technologies with little concern for, or debate around, their context and implications for society.

At what point does concern trump convenience? For a little thought experiment, consider the “mind-reading” technologies being researched in neuroscience labs around the world. These systems aim to recognize the brain activity associated with identifying images and words.

One day, a technology could automatically take that activity measurement, interpret it and store it on your computer or in the cloud. (Imagine the marketing potential of such a service: “With DayDreamer, you’ll never lose a single great idea!”)

How does society decide who owns those thoughts? Right now it might depend on whether you had them at work or at home. Maybe your employer would want you to use this software for fear of losing a good idea you might otherwise have dismissed. It’s easy to imagine government agencies having similar desires.


Literature and film seem to be the formats that have lately done the most to expose the possibilities of our new technologies. Movies like “Ex Machina” and “Her” provide much to think about regarding our interactions with machine intelligence. Books like “Super Sad True Love Story” and “The Circle” (coming out soon as a movie) raise all kinds of questions about privacy and transparency. Works like these continue a long tradition of art as a spur to broad societal (and classroom) debate. Indeed, I made written responses to those books part of a network-data analysis course I taught last winter at Dartmouth. Why not couple such informal and episodic exposures with formal, open and considered debate?

A presidential technoethics commission would provide that opportunity. Worries about “the singularity” might lead to a report on the implications of an unregulated Internet of Things, or of a growing robotic workforce. Privacy presents another obvious topic of conversation. As more of our lives are lived online, knowingly or unknowingly, the basic definition of “private life” has been slowly transformed by both security concerns and corporate interests. The steady erosion of “self” in this regard has been subtle but pervasive, cloaked in the language of “enhanced user experiences” and “security concerns.”

With initiatives such as “the right to be forgotten” and recent antitrust actions against Google over its search results, the European Union has taken its discussion of the societal implications of information collection, and of who controls that information, to the courts. In the U.S., too, there are obvious and significant legal considerations, especially around civil liberties and perhaps even the pursuit of knowledge. The Apple-FBI clash over phone unlocking suggests that a significant fraction of the American public trusts industry more than the government when it comes to digital policies.

If history is any guide, corporate interests are not always well aligned with social goods (consider, among others, the auto and tobacco industries, and social media), although of course sometimes they are. Regardless, a commission with representation from a range of constituencies, engaged in open conversation, might serve to illuminate the various interests and concerns.

Discussion, not restriction

All that said, it is important that the formation of a commission not serve as administrative and corporate cover for the imposition of controls and policies. The act of creating a forum for debate should not serve as justification for issuing orders.

A former colleague who served on the 2001 President’s Council on Bioethics tells me that, in general, it was formed to consider the implications of advances in bioengineering related to “tampering with the human condition.” Even without going so far as considering personal thoughts, surely questions around wearables, privacy, transparency, information ownership and access, workplace transformation, and the attendant implications for self-definition, self-improvement and human interaction are foundational to any consideration of the human condition.

New and imagined digital technologies carry important ethical implications. People are beginning to address those implications head-on. But we should not leave the decisions to disconnected venues of limited scope. Rather, as with the inner workings of human biology, we should arrive at social norms through a high-profile, public, collaborative process.


Daniel N. Rockmore, Professor, Department of Mathematics, Computational Science, and Computer Science, Dartmouth College

This article was originally published on The Conversation. Read the original article.
