Ethics and Technology

Greg Adamson

The year 2018 marks the bicentennial of Mary Shelley’s Frankenstein; or, The Modern Prometheus. Even the worst of the movie renditions retain her key ethical question: What responsibilities do we have for the technologies we create? The “we” includes both technologists and the community as a whole. Shelley chose to focus on the loneliness of a new being. The artificial intelligence (AI) devices we create today fall far short of sentience, so our challenges tend to relate to their social impact and the ways these technologies will change us.

Ethics and technology have been constant, if not always agreeable, partners for the past century. The twentieth century itself was split in two. Prior to World War II (WWII), a sense of confidence in the narrative of progress guided scientists, technologists, governments, and society in general. Technology would lift us out of poverty, ignorance, and hardship, and provide a world of abundance.

The development and use of nuclear weapons during WWII created a sense of shock, first in the scientific and technical communities, and then in the general community. We now, demonstrably, had the capacity to destroy our world. In the aftermath of WWII, technology and community values followed different paths. In 1959, C. P. Snow’s The Two Cultures [1] emphasized the difference in a provocative style. Environmental concerns arose in the 1960s, along with other challenges including the “digital divide,” a gap between those who gained benefits from technology and those who didn’t.

Seeking to address those societal concerns from the technical perspective, IEEE established the forerunner to the Society on Social Implications of Technology (SSIT) in 1972. The 1980s and 1990s saw the rise of new fields of engineering, including environmental engineering and engineering for development. By the early 2000s, many issues related to ethics and technology had gained broad public awareness, and IEEE adopted the tagline “Advancing Technology for Humanity.” Today, SSIT has five areas of focus: (1) technology ethics, (2) development technology, (3) technology sustainability, (4) access to technology, and (5) the impact of emerging technologies.

The twentieth-century discussions of ethics were dialogues about what we should do, given the options available. In this century, the discussion has become much more pressing, examining what we have to do. There are several reasons for this, including the following three:

  1. Technology advances. Whether we speak of Moore’s Law, the Singularity, or simply technological change, the cumulative impact of technological innovation is changing the face of the world from year to year. For example, autonomous vehicles are being built and tested today. The discussion is no longer speculative.
  2. The growing impact of these advances. Consider just three examples: (1) autonomous machines and weapons, (2) the human-machine interface, and (3) the future of work. While it is generally agreed that AI is not currently “intelligent” in a human sense, the rapidly expanding capacity of machines, weapons, and algorithms is leaving us behind. Whereas in the past a bridge failure could be attributed to a design or material flaw, technologists can no longer explain exactly how an AI device has reached a conclusion, which means we must develop new tools to manage the risk of this uncertainty. The development of brain technologies that allow users to control devices simply by thinking, and that may allow machines to reverse this process, shows that the traditional field of medical ethics must now inform the development of such technologies. While there is furious debate over whether automation will create permanent unemployment for a large part of society, there is no doubt that the rate at which work, including that of skilled professionals, is being displaced is dramatic.
  3. Expectations of company behaviour. Regulatory focus on poor financial compliance behaviour is now spilling over into the technology field. Over the past decade, total fines and remediation imposed on financial services organizations have approached $400 billion. Since 2010, the equivalent cost for just two technology failures (BP’s Gulf of Mexico spill and VW’s emissions concealment) is approaching $60 billion and is expected to rise further. These penalties are being imposed in many jurisdictions, so they can be expected to continue.

Another way to think about technology and ethics, one that cuts across the professional and development categories, is the distinction between “microethics” and “macroethics.” Microethics addresses the local and immediate: making sure products are safe and reliable, creating a culture of trust in technologists’ work, rejecting bribery and corruption, and related concerns. Macroethics asks broader questions, such as: what are the risks in developing this technology? This is particularly important in fields such as biomedicine and AI.

With these areas of change, now is a good time to re-examine the connection between ethics and technology. Broadly speaking, there are two separate ways that ethics and technology intersect from the perspective of a technologist.

One relates to the behaviour of individual technologists in their professional activity. This is the world of codes of ethics. In October 2016, the White House and New York University’s Information Law Institute issued a report from a July 2016 workshop on AI. The report made specific mention of IEEE and other professional organizations working in the AI field, calling on them “to update (or create) professional codes of ethics that better reflect the complexity of deploying AI and automated systems within social and economic domains.” IEEE is currently considering how best to respond to this input.

A separate but related aspect of the ethics landscape is the consideration of ethical and societal impacts in the process of developing new technologies. Action here involves technology professionals, but also company policies and culture, government regulation, and the broader community (particularly if a technology becomes unpopular). IEEE has recently increased its engagement in this aspect of the landscape. The launch of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems (AS) and the subsequent development of the P7000 family of standards (which includes P7001 and P7002) have spearheaded the building of consensus on key facets of the AI/AS conversation. This complements existing content from Technology & Society Magazine and other publications, as well as events and other projects across IEEE. All of these activities contribute to IEEE TechEthics™, a broader program being launched to foster open, broad, and inclusive conversation about ethics in technology.

While ethical behaviour is about doing the right thing, it doesn’t follow that the right thing is intuitively obvious. Just as technologists learn to assess risks in their work, they need to learn how to identify ethically challenging circumstances. For students, this is usually part of the curriculum. The growing difficulty of the challenges, however, means that there is a universal need for greater ethics education. While some coursework is provided to students, in the typical workplace there is only a weak tradition of in-service ethics training. Such training is important both to reinforce university education and because, until a technologist enters the workforce, it is difficult to gain a practical sense of the ethics challenges one will face. Government workers and technologists in the not-for-profit sector have a similar responsibility for their activities and decisions. For entrepreneurs building new companies, managers in medium-size enterprises, and executives in corporations, the responsibility is heavier still, as a poor ethical “tone at the top” is widely recognized as a leading cause of ethics breakdowns. Here, responsibility extends both to the product produced and to the training of staff.

As is evident from these examples, IEEE is involved in all aspects of the ethics and technology discussion. We encourage all interested people to get involved. A simple way to start is by contacting us for more information.

References

  1.  Snow, C. P., The Two Cultures. London: Cambridge University Press, 1959.

Greg Adamson • Chair, IEEE Ad Hoc Committee on Ethics Programs; Chair, IEEE Technical Activities Ad Hoc Committee on Design for Ethics • g.adamson@ieee.org

Greg Adamson is chair of the IEEE Ad Hoc Committee on Ethics Programs and the IEEE Technical Activities Ad Hoc Committee on Design for Ethics. He is a past president of the IEEE Society on Social Implications of Technology and is active in the IEEE TechEthics™ program. He is also an honorary associate professor at the University of Melbourne and chair of the IEEE conference series on Norbert Wiener in the 21st Century. His research interests include professional ethics frameworks and Norbert Wiener’s contribution to an understanding of technology and society.