Crises usually arise from hazards and disasters and end up impacting a community or a whole society. Even though hazards are mostly unavoidable, being prepared for them helps in alleviating and handling crises better. The crisis situation we chose for our study is the COVID-19 pandemic, with an emphasis on contact tracing applications. Maimuna and Divya explored the ethical issues relating to contact tracing applications. Our goal was to identify human-centered design practices and recommendations that could assist in building ethically sound contact tracing applications.
Role of Human-Centered Design (HCD) in crisis
The challenges that emerge in a crisis context can be addressed with technical solutions grounded in a human-centered research approach (Sawhney, 2020). In a crisis such as the current COVID-19 pandemic, human-centered research helps to capture challenges relating to physical, psychological, ethical, and professional perspectives.
For the COVID-19 pandemic, human-centered researchers and experts are contributing in different ways. Supporting and developing new forms of remote work, supporting communication and combating misinformation, building interfaces and infrastructures for tracing and limiting the spread of the virus, promoting healthy behaviors through digital nudging, and mitigating social isolation and social risks are some of the key areas on which human-centered research and design are currently focusing (Dalsgaard, 2020).
Example context – Contact tracing application
“Contact tracing is the process of identifying, assessing, and managing people who have been exposed to disease to prevent onwards transmission.” —(WHO, 2020).
In recent times, proximity tracking apps have been created in many countries for the surveillance of the COVID-19 pandemic. Manual or traditional contact tracing consumes an enormous amount of time and resources, and its accuracy depends on how much a person can recollect about their contacts. Proximity tracking apps typically measure the closeness of devices through signal strength (e.g. via Bluetooth) in order to identify potentially exposed people. For example, Koronavilkku is a smartphone app developed in Finland to identify the user’s exposure to coronavirus. The aim of this app is to support healthcare officials in carrying out contact tracing efficiently and effectively.
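To make the signal-strength idea concrete, here is a minimal sketch of how an app might translate a Bluetooth RSSI reading into an approximate distance using the common log-distance path-loss model. The calibration constants and the 2-metre close-contact threshold are illustrative assumptions, not values taken from Koronavilkku or any other real app.

```python
# Sketch: estimating distance from Bluetooth signal strength (RSSI)
# with the log-distance path-loss model. tx_power (expected RSSI at
# 1 m) and the path-loss exponent n are illustrative assumptions;
# real apps calibrate these per device model.

def estimate_distance(rssi: float, tx_power: float = -59.0, n: float = 2.0) -> float:
    """Approximate distance in metres from a single RSSI reading."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def is_close_contact(rssi: float, threshold_m: float = 2.0) -> bool:
    """Flag an encounter as a potential close contact."""
    return estimate_distance(rssi) <= threshold_m

print(round(estimate_distance(-59.0), 2))  # 1.0 (at the calibration point)
print(is_close_contact(-80.0))             # False: weak signal, likely far away
```

In practice, RSSI is noisy (reflections, body attenuation, device orientation), so real systems aggregate many readings over a time window rather than trusting a single sample as this sketch does.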
Reframing HCD based on the challenges of contact tracing applications
In the first part of the course, Kiko and Divya worked together to identify the potential challenges relating to the design of contact tracing applications (Part 1 summary blog post). The stakeholder mapping, which included both human and non-human factors, gave us an opportunity to look more closely at the virus and its origins. It also helped us reflect on what can be done beyond contact tracing.
To address the complex and interconnected problems arising from the contact tracing technologies that emerged during the COVID-19 pandemic, we employed Nitin and Anh-Ton’s ‘Framework of ecologies that influence contestations in Participatory Design’ as a methodology to map out the overall ecology of this crisis domain. The following figure presents the key challenges of building contact tracing applications, mapped to the opportunities that help in dealing with them.
The design criteria that aid in building contact tracing applications are:
- Build trust and empower individuals
- Convey the values to users
- Be transparent about data collection
- Highlight the explainability of the application
- Be proactive about data protection
Why AI ethics?
The significance and role of artificial intelligence have risen sharply in the past few decades, and AI is already deeply ingrained in everyday digital life. Software today relies on it for suggestions, predictions, object recognition, decision making, automation, transportation, and myriad other applications. If superintelligence eventually emerges, it would impact the world on a grander scale, embedding itself far more deeply into the fabric of our society. With the inexorable march of AI, however, comes disruption to societal values, and we need to intervene decisively before the impending AI revolution.
AI is often credited with being unbiased and fair because algorithms carry no explicitly encoded social or moral code behind their outputs. However, like all human-made artifacts, and particularly so, AI can exacerbate human biases against marginalized communities. This is evident in numerous cases where IT organizations have produced biased software results and faced backlash from users.
Supporting AI is conducive to the advancement of our economy and modernization. However, it is imperative to bridge the gap between value-laden propositions and fairness in AI. This can be done by reorienting its contribution from individualistic values to societal ones via a speculative design approach. Shifting to a human-oriented approach encompasses multiple areas of scrutiny. According to the European Group on Ethics in Science and New Technologies (EGE)’s Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems, it requires (each summarized below):
- Autonomy – To provide the people autonomy over the technology being used
- Human dignity – To make users aware of whether their interaction is with a human or a machine
- Responsibility – To have AI attuned to serving the social good via democratic development processes
- Justice – To exclude biased dataset and support the fair distribution of benefits
- Democracy – To regulate crucial decisions on policies
- Sustainability – To account for the environment as a stakeholder
- Data protection and privacy – To allow user complete control over their data
- Law and Accountability – To hold parties accountable for infringing on human rights
- Security and mental integrity – To ensure the product is safe for humans in all capacities via test trials and iteration phases (European Commission, 2018)
Considering these fundamental ethical principles in the development of an AI ethics framework is a step toward creating people-oriented solutions and socio-technical systems. With further feedback and testing from its enforcement, the undetected nuances and conflicts among these principles can be remediated and the statement revised.
Technological advancements that ease and facilitate our daily lives have opened the market for innovation in diverse domains. But they have also unsettled moral boundaries by making autonomous decisions and keeping humans out of the loop (Royakkers et al., 2018). The development and implementation of AI solutions have raised ethical issues relating to privacy, security, safety, lack of transparency, lack of fairness, and so on, all of which influence users’ trust. To tackle these ethical issues when developing AI solutions, various public entities and private organizations have developed ethical guidelines for AI.
In 2018, companies such as Google and SAP released their frameworks on AI ethics in society. The common principles in publicly released frameworks pose no conflicts and generally address fairness, accountability, transparency, safety, privacy, security, and inclusiveness. However, with myriad such frameworks, diligence is required regarding the influence of AI systems in different organizations. To date, monopolistic tendencies are inherent in the majority of undesirable results from AI-based systems (Safiya U.N., 2018). It can be argued that a more inclusive framework is needed, authorized by an organization that represents and serves society. In volatile times of crisis, however, it is best for all parties to collaborate for knowledge sharing.
The EU published a framework on Trustworthy AI in July 2020. As demonstrated in the image below, the framework addresses the interdependence of its principles, showing that neglecting any one of them undermines the structure of the entire ethical framework. This entails prioritizing all aspects of the framework. Furthermore, it states the importance of constant evaluation and iteration of the framework. It targets the different stakeholders engaged in an AI-based system: developers are required to implement the requirements in the development process, deployers are to follow them, and end-users are to be informed about them and retain autonomy over them. In summary, the framework covers systemic, individual, and societal aspects:
- Human agency and oversight
Users should be able to make informed, autonomous decisions regarding AI systems.
- Technical robustness and safety
AI systems are developed with a preventative approach to risks and unacceptable harm.
- Privacy and data governance
AI systems must guarantee privacy and data protection throughout the system’s entire lifecycle.
- Transparency
The system provides an explanation of its functionality and points out where any unwanted results may have occurred, in both expert and layman terms.
- Diversity, non-discrimination, and fairness
Ensure that any dataset used is clear of unfair biases to prevent discrimination and prejudice. Include larger classes of stakeholders as prosumers instead of consumers, and ensure that people with disabilities can access the system and that usability is catered to them.
- Societal and environmental wellbeing
The environment should be considered a stakeholder, assessing the system’s effects from both environmental and societal perspectives, and ensuring the system is sustainable for these stakeholders.
- Accountability
Establish who is responsible for the system’s development and whom to address if there are queries about it. (ec.europa, 2020)
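The fairness requirement above calls for datasets that are clear of unfair biases. As a minimal sketch of what such a check might look like in practice, the snippet below computes the demographic parity difference: the gap in positive-outcome rates between two groups. The field names, toy data, and any flagging threshold are illustrative assumptions on our part, not part of the EU guidelines.

```python
# Sketch: a minimal dataset-bias check in the spirit of the
# "diversity, non-discrimination, and fairness" requirement.
# Computes the demographic parity difference between two groups.
# Field names and the toy data are illustrative assumptions.

def positive_rate(records: list, group: str) -> float:
    """Share of records in a group with a positive outcome (1)."""
    members = [r for r in records if r["group"] == group]
    return sum(r["outcome"] for r in members) / len(members)

def demographic_parity_diff(records: list, a: str, b: str) -> float:
    """Absolute gap in positive-outcome rates between groups a and b."""
    return abs(positive_rate(records, a) - positive_rate(records, b))

data = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0}, {"group": "B", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]
diff = demographic_parity_diff(data, "A", "B")
print(f"parity difference: {diff:.2f}")  # parity difference: 0.33
```

A large gap like this would be a signal to audit the dataset before training; demographic parity is only one of several fairness notions, and which one applies depends on the context of the system.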
Despite the positive initiatives by various organizations toward creating AI ethics principles, they have yet to be concretized. Debates continue over what constitutes ‘ethical AI’ and which ethical and technical standards are required for its realization. The frameworks lack actionable mechanisms for the practical application of ethics and do not legally bind IT producers to prioritize ethics in their solutions. There is a further need to enforce morality without trading off innovation in ICT solutions. Thus, ethics in ICT and AI will remain open-ended projects and will likely require educated iteration over the years.
Potential ethical challenges in Contact tracing apps
We discerned the ethical issues relating to contact tracing applications from the literature, from qualitative research through a brief workshop session, and from quantitative research through survey work. We also compared the identified ethical issues with the European Commission’s guidelines for Trustworthy AI to see the similarities and how the guidelines correlate with the ethical issues.
The workshop was conducted on the Miro platform. We attempted to identify potential issues regarding the human-centric context that materialized in the first publicly released version of the corona tracing app. We referred to Koronavilkku as the example since it is a Finland-based application. During the workshop, an overview of Divya and Kiko’s project on “Reframing Human-Centered Design based on the challenges of contact tracing applications” was shared with the audience, followed by a discussion of the project’s resultant design criteria that aid in building contact tracing applications. These criteria were laid out as seen in the image above. The participants then tested the application and reflected on how well it upheld the criteria. The criteria are in accordance with the EU’s Trustworthy AI framework, streamlined to certain principles conveying the broader context of HCD.
The workshop outcome demonstrated that the application’s development has covered the criteria relating to technology and privacy extremely well. The application explains its technical functionality clearly to end-users and directs them to the right websites for more detailed information. The application and its backend’s use of the exposure notification API are also open source on GitHub for public evaluation, providing transparency, human agency, self-empowerment, and accountability (THL 2020). The application’s decentralized architecture ensures the privacy of users’ data by avoiding the collection of any personal information or location tracking, relying instead on proximity protocols through which devices detect each other. In addition to technical details, the app provides information on what action to take when end-users suspect an infection, whom to contact, and how to alert others in the app via a notification. End-users are also given access to health staff for any health-related advice. The application has earned a high degree of trust, which is also aided by the high trust in the Finnish government.
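The privacy benefit of the decentralized architecture described above comes from each phone broadcasting short-lived anonymous identifiers and performing exposure matching locally, so no central server ever sees who met whom. The sketch below illustrates that idea; the function names, key sizes, and derivation scheme are our simplifications, not the actual Exposure Notification protocol used by Koronavilkku.

```python
# Sketch of decentralized exposure matching: each phone derives
# rotating broadcast identifiers from a daily key and remembers the
# identifiers it hears nearby. Matching against keys published by
# infected users happens on the device itself. Names, key sizes, and
# the derivation below are illustrative, not the real protocol.

import hashlib
import secrets

def daily_key() -> bytes:
    """Random per-day key; stays on the device unless the user
    voluntarily reports an infection."""
    return secrets.token_bytes(16)

def rolling_ids(key: bytes, intervals: int = 144) -> list:
    """Derive short-lived broadcast identifiers from the daily key."""
    return [hashlib.sha256(key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(intervals)]

def find_exposures(heard_ids: set, infected_keys: list) -> bool:
    """Re-derive identifiers from published keys of infected users
    and check locally whether any were observed nearby."""
    for key in infected_keys:
        if any(rid in heard_ids for rid in rolling_ids(key)):
            return True
    return False

# Usage: Alice's phone heard one of Bob's identifiers; Bob later
# reports an infection and his daily key is published.
bob_key = daily_key()
alice_heard = {rolling_ids(bob_key)[42]}
print(find_exposures(alice_heard, [bob_key]))  # True
```

The design choice to publish only the keys of infected users, and to let every other device do the matching itself, is what lets such apps avoid central collection of personal or location data.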
However, the application had shortcomings with regard to the human-centric aspect. It supported only one language (Finnish), thereby ignoring and marginalizing minority communities. Though the instructions are clear, they are incomprehensible to users not fluent in the local language. English language support is slated to be added soon, but the app’s crowd-based nature undermines its effectiveness if a substantial percentage of society is disregarded at an early stage. Urgency and prioritization have also become factors to be considered in this case.
There is also a digital divide issue concerning how the developers have catered to users without Android or iOS phones. As per the framework’s value interrelations, building trust relies heavily on diversity and fairness, human agency and oversight, and societal wellbeing. As a result, the lack of these app features has undermined the other values. The value measurements from the survey are consistent with the qualitative research assessment. The workshop also raised other psychological challenges that should be considered in app development in times of crisis, such as how users’ mental health and anxiety are managed when they discover an infection.
Koronavilkku is a very new application and is far from finished. According to the website, the developers have a backlog of features to be added in the near future, including multiple language support, EU data compatibility, and so on. We trust that they are following the EU’s AI ethics framework and that, through multiple iterations, they will become fully compliant on ethics in the near future.
Role of design in overcoming challenges
The COVID-19 pandemic has thrust designers into a scenario where they must consider how urgent, innovative technical solutions can comply with AI ethics. The AI ethics field has only recently started to be taken seriously by ICT-based organizations, owing to the prevalence of human empowerment via technology itself. Users participate in the technology market as prosumers, and technologies thrive off their constant interaction and communication. The role of human-centered design is imperative to bridge the gap between technology and humans.
From the workshop, we surmised that the application is a short-term response to the current circumstances. The ethical designer should be asking broader questions about the long-term effects of the virus and how the application can scale to counteract them. There should be an understanding of how societal resiliency can be fortified to live with the virus in the long run. Stakeholders other than humans, such as animals and the environment, should also be considered: how the virus affects them and how technology can support them.
In the next stages of reconstructing the ethical design approach for COVID-19 app development, the stakeholder map should be revisited to include environmental entities, overlooked communities, and animals. Research should be carried out to empathize with these stakeholders regarding their responses to the virus and its infections. The results can then be used to map an ecosystem that defines their needs and wants. The map helps to ideate and prototype solutions, with or without technology, that aid the beneficiaries in their predicaments.
The key requirements highlighted by the EU’s guidelines for trustworthy AI, such as accountability, fairness, privacy, security, and societal and environmental wellbeing, should be considered during application development. With respect to the application domain, these requirements can also be prioritized on an urgency basis. Furthermore, it is vital to analyze the ethical aspects during the design phases rather than bolting ethics on in the later stages of development.
This course shed light on the importance of carrying out human-centered research and design in crises. Through our project work, we were able to comprehend the role of ethics when designing systems for a crisis and dig deeper into the nuances of AI ethics. Moreover, we attempted to reframe HCD practices and incorporate a trust-driven approach for building contact tracing applications.
WHO, Ethical considerations to guide the use of digital proximity tracking technologies for COVID-19 contact tracing, Interim guidelines (2020)
Royakkers, L., Timmer, J., Kool, L., and Est, R.V. (2018), “Societal and ethical issues of digitization,” Ethics and Information Technology, Vol.20, pp.127-142.
Peter Dalsgaard. HCI and Interaction Design versus COVID-19. Blog Post. ACM Interactions. Sep 06, 2020. Work-in-Progress Google Doc: How researchers and experts in Human-Computer Interaction and Interaction Design can contribute to the COVID-19 crisis.
Nitin Sawhney, Human-Centered Research and Design in Crisis, CS-E4002, Lecture slides, Aalto University, Summer 2020.
Noah Blier, Bias in AI and Machine Learning: Sources and Solutions, Article, Lexalytics.com, Summer 2019. Available: https://www.lexalytics.com/lexablog/bias-in-ai-machine-learning
European Commission, Ethics Guidelines for Trustworthy AI, July 17, 2020 Available: https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1
European Commission, Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems, March 9, 2018. Available: http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf
Safiya Umoja Noble, Algorithms of Oppression, Book, January 2018, Available: https://www.amazon.com/Algorithms-Oppression-Search-Engines-Reinforce/dp/1479837245