Enforcing Ethics

The use of artificial intelligence is growing exponentially as the world digitizes further. As addressed in the lectures, AI, like all human-made systems, can amplify human biases against marginalized groups. Myriad human biases remain prevalent and are inadvertently encoded into the algorithms driving the digital services and products the public uses. This is partly attributable to AI being disproportionately governed by a small, socially dominant segment of society.

Currently, fixes to this issue are introduced late in the development cycle and are targeted solutions to a narrow pool of problems, ignoring the wider concerns being voiced. In Algorithms of Oppression, Safiya Umoja Noble demonstrates this emphatically with examples of Google searches in which innocuous terms used to produce objectionable results, and of how those results were patched [1].

To mitigate this digital redlining of fairness, discrimination and trust, many IT companies have adopted AI ethics frameworks intended to safeguard against bias and to ensure accountability, transparency, privacy and value alignment. Issues remain, however, in enforcing these principles stringently and in staying mindful of the outcomes of autonomous systems; some of the biggest names in tech have fallen foul of their own principles [2]. Ensuring that an AI system adheres to stated principles requires taking its outcomes into account, and that cannot be done unless more diverse voices are included during the development and testing phases, voices the software industry is still struggling to recruit. In addition, technological fixes have their limits: they require precise mathematical notions of fairness, which are difficult to pin down at today's stage.
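To illustrate what one such "mathematical notion of fairness" looks like in practice, the sketch below computes the demographic parity difference, the gap in positive-prediction rates between two demographic groups. The group labels and toy data are hypothetical, and real fairness audits involve many competing metrics that cannot all be satisfied at once, which is part of the difficulty the paragraph above describes.

```python
# Minimal sketch of one formal fairness notion: demographic parity.
# A model satisfies demographic parity when its positive-prediction
# rate is the same across groups; the difference measures the gap.

def demographic_parity_diff(predictions, groups):
    """Difference in positive-prediction rates between groups 'a' and 'b'.

    predictions: list of 0/1 model outputs
    groups:      list of group labels ('a' or 'b'), same length
    """
    rate = {}
    for g in ("a", "b"):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return rate["a"] - rate["b"]

# Toy example: the model approves group 'a' 75% of the time, group 'b' 25%.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_diff(preds, groups))  # 0.5 — far from parity (0.0)
```

Even this simple metric embeds a value judgment (that equal approval rates are the right target), which is why competing definitions such as equalized odds exist and why no purely technical fix settles the question.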

Imposing transparency brings about another issue. AI models today are managed by for-profit companies, which regard them as intellectual property to be kept in a black box; opening them up also risks introducing vulnerabilities to attack and exposing companies to lawsuits. To steer around this transparency paradox, companies need to reorganize how they profit from, manage the risks of and protect their algorithms. They must also consider the ethical implications of their AI's outputs and consistently apply ethical design principles throughout all phases of development.

It can be said that ethical problems with AI will persist and continue to shape our world for as long as humans harbor discriminatory preconceptions. There are significant technical efforts to detect and remove bias from AI systems, but it is fair to say these are still in their early stages.

References:

[1] S. U. Noble, Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.

[2] Microsoft's Tay chatbot: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

Blog image from https://towardsdatascience.com/https-medium-com-mauriziosantamicone-is-artificial-intelligence-racist-66ea8f67c7de
