If you had to pick one, would you rather:
- see an ad for a consumer device you’ve looked at beside every web search you do,
- own a doll that reports recordings of its interactions with your child, or
- drive a car that could sacrifice its driver to avoid harming others?
At the moment, all three could be true, but you don’t get to vote.
Autonomy is upon us, and the sad reality is that while we as human beings (and consumers) are materially affected by systems that stage our thinking and intentionally bias our decisions, we have no insight into the rules by which that Autonomy operates. In this context, Autonomy is any robot or machine, digital or physical, that interacts with a human being according to a set of rules without being operated or piloted in real time by another human being. We may dismiss the intrusion as mere marketing, but as an ever higher percentage of our daily experience is delivered digitally (and is therefore open to manipulation at scale), the need for transparency into how this Autonomy makes decisions grows accordingly.
It feels easy to dismiss today. A targeted ad for a pair of running shoes showing up alongside a search doesn’t seem important. But when the digital/autonomous and physical worlds intersect, the risk grows materially. In 2015, an interactive Barbie doll was launched that recorded its users’ (children’s) interactions, and privacy advocates were aghast at the intrusion. Yet every day, more Google Homes and Amazon Alexas find their way into our homes and record our activities. Another obvious example arises in the philosophical debate over whether an autonomous car should be able to sacrifice its driver rather than hit a school bus or run into a crowd of people.
Every technological breakthrough requires infrastructure supporting its safe integration into society. The horseless carriage would not have achieved ubiquity without infrastructure such as roads, fueling stations, auto insurance, traffic laws and signals, and so many other ecosystem services. Artificial intelligence and Autonomy are in their infancy and currently lack the infrastructure required to be readily adopted by society.
There are multiple initiatives and standards within the IEEE seeking to address this issue:
- IEEE P7000™: Model Process for Addressing Ethical Concerns During System Design (Working Group already in progress)
- IEEE P7001™: Transparency of Autonomous Systems (Working Group already in progress)
- IEEE P7002™: Data Privacy Process (Working Group already in progress)
Our challenge now, as we move toward an Autonomy economy, is to define the ethical infrastructure that will enable an entirely new class of AI-enhanced jobs, services, and capabilities.
The authors believe an “On Purpose” infrastructure will build trust and offer transparency into the operation of Autonomy. An “On Purpose” infrastructure registers and maintains an overt, specific intention or goal for the Autonomy, enabling transparency and auditability of autonomous actions against the registered intent.
Standards can play a direct role by establishing a registry of clearly defined purposes to which designers of autonomous systems could subscribe, much as SIC (Standard Industrial Classification) codes are used across industries today. Such purpose categories might include package delivery, marketing, entertainment, self-help, health care, and fitness tracking, with a level of granularity beneath the broad categories that allows for auditability. A transparent standard and registry of this kind would let designers, operators, and users share a common expectation of how systems should operate, and flag discrepancies when systems move beyond their stated purpose. Standards for acceptable deviation from these purposes should also be established, enabling crowdsourced monitoring of autonomous actions and an industry-standard reporting mechanism for validating the efficacy of autonomous systems.
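As a thought experiment, such a hierarchical purpose registry might look like the following sketch. The category codes, names, and prefix-matching scheme are all invented for illustration; they are not part of any actual IEEE standard or the real SIC code list.

```python
from dataclasses import dataclass

# Illustrative sketch only: codes and categories are invented, not drawn
# from any real standard or registry.
@dataclass(frozen=True)
class Purpose:
    code: str          # hierarchical, SIC-style code, e.g. "52.10"
    description: str

class PurposeRegistry:
    def __init__(self):
        self._purposes = {}

    def register(self, purpose):
        self._purposes[purpose.code] = purpose

    def lookup(self, code):
        return self._purposes.get(code)

    def is_within(self, observed_code, declared_code):
        """An observed action stays 'on purpose' if its code equals the
        declared purpose or falls under it in the hierarchy."""
        return (observed_code == declared_code
                or observed_code.startswith(declared_code + "."))

registry = PurposeRegistry()
registry.register(Purpose("52", "Package delivery"))
registry.register(Purpose("52.10", "Last-mile residential delivery"))
registry.register(Purpose("73", "Marketing"))

# A system declared under "52" performing "52.10" is within purpose;
# performing "73" (marketing) would be a flagged discrepancy.
print(registry.is_within("52.10", "52"))  # True
print(registry.is_within("73", "52"))     # False
```

The prefix check is what makes the registry auditable: any observer who knows a system’s declared code can test an observed action against it without needing access to the system’s internals.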
Those who study the nature of human decision making may be disappointed by the malleability of the heuristics we use to govern most actions and the inherent bias exhibited in our behavior. Therefore, we should seek a model for Autonomy which enhances human ethical decision making and transparency. Autonomy should act “on purpose” where the full extent of the purpose is clearly articulated and therefore actions (and the associated decision-making process) are auditable against the stated purpose. If your companion robot reached out to hold your hand, wouldn’t it be nice to be able to press the “WHY” button and have the machine tell you why it did what it did?
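The “WHY” button implies that every autonomous action is logged together with the rule that triggered it and the declared purpose it served. A minimal sketch of such an audit log follows; the rule names, purpose text, and explanation format are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative sketch of a "WHY button": each autonomous action is recorded
# with the rule that fired and the declared purpose it served. All names
# and strings below are invented for illustration.
@dataclass
class ActionRecord:
    timestamp: datetime
    action: str
    triggering_rule: str
    declared_purpose: str

class AuditLog:
    def __init__(self):
        self._records = []

    def record(self, action, rule, purpose):
        self._records.append(
            ActionRecord(datetime.now(timezone.utc), action, rule, purpose))

    def why(self):
        """Explain the most recent action in plain language."""
        if not self._records:
            return "No actions recorded."
        r = self._records[-1]
        return (f"I did '{r.action}' because rule '{r.triggering_rule}' "
                f"fired, in service of my purpose: {r.declared_purpose}.")

log = AuditLog()
log.record("hold hand",
           "comfort-when-heart-rate-elevated",
           "companionship and emotional support")
print(log.why())
```

Because every record carries both the rule and the registered purpose, the same log that answers the user’s “WHY” also supports after-the-fact auditing against the stated intent.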
One firm, Precision Autonomy, has begun building this “On Purpose” infrastructure and is applying it to drones/UAVs. At present, drones have an intent that is easily understood and overtly expressed in the form of a mission plan. And they can be tracked in real time to detect any diversion from this mission plan (purpose). This type of transparent operation will continue to build confidence in the adoption of drones while establishing baseline industry infrastructure for more complex Autonomy.
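Detecting diversion from a mission plan can be reduced to a simple geometric check: how far is the drone’s reported position from the nearest leg of its declared route? The sketch below illustrates the idea; the waypoints, coordinate system, and 50 m tolerance are assumptions for illustration, not Precision Autonomy’s actual method.

```python
import math

# Illustrative sketch: a declared mission plan as a list of waypoints, and a
# check flagging any live position that strays beyond a tolerance from the
# planned path. Coordinates (metres) and the threshold are invented.

def distance_to_segment(p, a, b):
    """Euclidean distance from point p to line segment a-b (2D)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def deviation_from_plan(position, waypoints):
    """Smallest distance from the current position to any leg of the plan."""
    return min(distance_to_segment(position, waypoints[i], waypoints[i + 1])
               for i in range(len(waypoints) - 1))

plan = [(0, 0), (0, 1000), (500, 1000)]  # planned legs, metres
TOLERANCE_M = 50                          # assumed audit threshold

print(deviation_from_plan((10, 500), plan) <= TOLERANCE_M)   # on course: True
print(deviation_from_plan((300, 500), plan) <= TOLERANCE_M)  # off course: False
```

The appeal of the drone case is exactly this simplicity: because the declared purpose is a geometric object, “on purpose” versus “off purpose” can be computed continuously and published to any observer.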
In contrast, much Autonomy has entered our lives and homes without visibility into its purpose and rules for decision making. In the case of the Barbie doll, was it clear that “interactive” meant that it was going to report interactions with children? In the case of Google Home, is it clear that all sounds within a certain proximity can be reported? We need to create transparency for some simple and important concepts by asking the following:
- What is the stated purpose of the Autonomy?
- How do I verify that the Autonomy is following its purpose?
- Who is the beneficiary, and what is the customer value proposition?
- Who is the beneficiary, and what is the underlying business model?
- Can I opt-in or opt-out with reasonable knowledge of the autonomous actions?
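These questions could be answered in a machine-readable disclosure that a device publishes before any interaction. The sketch below shows one possible shape; the field names and example values are invented, not drawn from any existing standard.

```python
# Illustrative sketch: the transparency questions expressed as a
# machine-readable disclosure. Field names and values are invented
# for illustration, not part of any real standard.
disclosure = {
    "stated_purpose": "interactive conversation with children",
    "verification": "audit log of all recordings, reviewable by the owner",
    "beneficiary_value_proposition": "personalized play experience",
    "beneficiary_business_model": "recordings analyzed to improve products",
    "opt_out": True,
}

def is_transparent(d):
    """A device counts as transparent only if every question has a declared
    answer and the user can knowingly opt out."""
    required = ["stated_purpose", "verification",
                "beneficiary_value_proposition", "beneficiary_business_model"]
    return all(d.get(key) for key in required) and d.get("opt_out") is True

print(is_transparent(disclosure))  # True
```

A disclosure like this makes the checklist enforceable: a missing or empty answer is detectable by software, not just by a diligent reader of fine print.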
For example, we suspect few people would knowingly interact with a service whose stated purpose was to record their interactions and sell them to those seeking to manipulate their purchasing decisions. Buyer beware: when the service is free, you are often the product.
With appropriate “On Purpose” infrastructure in place, the human condition will be enhanced. People will interact with, and extend themselves through, Autonomy in ways that have yet to be imagined. With new human-centered “On Purpose” capabilities, entire new segments of the economy will form, driving new jobs while enabling the upward march of humanity. IEEE members should seek to get involved in initiatives such as P7000 to help shape the categories and definitions of purposes, which can form the basis of a new class of registry akin to the Standard Industrial Classification (SIC) structure.
Mark Halverson is the CEO of Precision Autonomy, whose mission is to enable the safe commercial and social integration of autonomous technologies. Precision Autonomy operates at the intersection of artificial intelligence and robotics to allow UAVs and other unmanned vehicles to operate more autonomously. Precision Autonomy has developed an “On Purpose” infrastructure, ensuring machines operate in transparent, predictable, and auditable ways while always keeping human needs at the center.
Mark has over 25 years of consulting experience, working with the world’s largest corporations in shaping strategies to embrace innovation and disruption.
Leanne Seeto is a Co-Founder of Precision Autonomy, whose mission is to make autonomous IoT services a safe reality by integrating government, corporations, education, and people into the Autonomy economy. Leanne has over 20 years’ experience working in Sydney, London, Tokyo, and the US, in organizations ranging from startups to multinational corporations. She has worked with large corporations on developing new market strategies and commercializing disruptive technologies. She holds a BSc in Applied Mathematics and Computer Science and is an alumna of Singularity University. She is the Communications Committee Co-Lead for the IEEE Global Initiative for Ethically Aligned Design of Autonomous Systems.