Skills for the AI future

As part of my doctoral research I am interested in understanding the future of work and the skills that future professionals will need. I wanted to tie the recent lectures and discussions on AI and ethics (Ethics & Politics of AI in Society, AI Ethics in Practice: Designing for Ecosystems, Decolonizing AI & Rethinking Resistance) to this context, and started wondering how these questions are being addressed by the practitioners and leaders who invest in and harness new technologies. This is a brief practice-based investigation into the topic, from the viewpoint of organisations, leadership and the future of work.

What skills and capabilities do professionals and leaders need for the future, and does ‘AI understanding’ make the list?

It’s no news that technology is augmenting, transforming and disrupting work and societies at greater speed and scale than ever before. AI is seen as a top trend driving industry growth (WEF, 2018), and several reports offer different estimates of how many jobs will be replaced by machines or disappear altogether (Kang, 2019), highlighting the need to upskill and reskill professionals to stay ahead (Bruce-Lockhart, 2020; WEF, 2018). ‘Inherently human skills’ such as empathy, emotional intelligence, collaboration, creativity and critical thinking are brought to the forefront as the most important and fastest-rising skills for the future (WEF, 2018).

The same lists and reports often also highlight how technical skills, especially for leaders, will become less important as artificial intelligence and other technologies take over many subject-specific and technical tasks. However, given that this is the case across many industries, it is surprising how little emphasis and discussion there seems to be on skills such as data literacy and governance, ethical understanding, or ‘AI literacy’.

AI can certainly provide new opportunities, drive change and help make processes and systems more productive, leaving leaders and professionals more time to focus on high-value tasks, but it is also seen as a threat (e.g. Crawford, 2018; WEF, 2018). The use of AI can produce awry outcomes and faulty conclusions, with severe implications for societies and individuals. For example, in an interview with NPR, Kate Crawford describes how Amazon’s automated CV-screening algorithms introduced serious faults into recruitment processes, and because the creators did not fully understand the system, they were not able to fix it (NPR, 2019). The risks and uncertainties related to AI have stalled adoption of the technology in many organisations (Ammanath et al., 2020), but stalling or slowing down is certainly not a solution in the long run.

Data: what, by whom and why?

Data is the fuel for AI, and big data seems to be the answer to everything. Chris Anderson, a former Wired editor-in-chief, said more than a decade ago that “with enough data, the numbers speak for themselves” (Anderson, 2008), and this notion still seems to hold true in the minds of many leaders and decision-makers. Kate Crawford contests this notion and points out that data is not objective: the creators and organisers of the data give it meaning and definitions:

“Data and data sets are not objective; they are creations of human design. We give numbers their voice, draw inferences from them, and define their meaning through our interpretations. Hidden biases in both the collection and analysis stages present considerable risks, and are as important to the big-data equation as the numbers themselves.” (Crawford, 2013)

She calls for data scientists to learn from social scientists, who inherently apply a critical, qualitative lens to their data and examine the cognitive biases they might bring to collecting, analysing and interpreting it (Crawford, 2013).
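Crawford’s point that bias can enter at the collection stage, before any analysis happens, can be made concrete with a deliberately simple sketch. The groups, numbers and survey scenario below are invented for illustration, not drawn from any cited study:

```python
# Hypothetical population: two groups with different average commute times.
# Group A (60% of the population) averages 20 min; group B (40%) averages 50 min.
population = [("A", 20)] * 600 + [("B", 50)] * 400

# What "the numbers" would say with full coverage of the population.
true_mean = sum(t for _, t in population) / len(population)

# Biased collection: the survey only reaches group A (e.g. it was run
# through a channel group B rarely uses) -- a hidden bias introduced
# at the *collection* stage, invisible in the resulting data set itself.
biased_sample = [t for group, t in population if group == "A"]
biased_mean = sum(biased_sample) / len(biased_sample)

print(true_mean)    # 32.0 -- the population's actual average
print(biased_mean)  # 20.0 -- the conclusion the biased data set supports
```

Nothing in the biased data set flags that 40% of the population is missing; only someone who questions how the data was gathered, as Crawford urges, would catch the gap.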

How could topics such as data biases, the need to educate the people who design and program algorithms, or issues with data gaps and missing data sets, as highlighted e.g. by D’Ignazio (2019) in her talk, be brought to the forefront in organisations?

Is AI for everyone or just for a select few? How might we bring everyone on board?

One of the challenges of even discussing these issues is that AI and many advanced technologies still seem to be a topic for a select few: the engineers, scientists and programmers developing these algorithms and systems. The field feels complex to grasp, as if it required special education and ‘beautiful mind’-level thinking, which makes it hard for others to care about and engage in these conversations.

However, based on the discussions and examples shared in this course, it is quite evident that we all should care, and should be able to engage in the practice and discourse about where, how, when, why and by whom AI is being used and deployed.

There are a number of articles and reports aimed at helping leaders better understand AI and the disruption it is creating – all with the underlying message that no one will be left out and everyone will be impacted (e.g. Ammanath et al., 2020; PWC, 2017; Jones, 2018). The focus of these narratives is very much on the impact of AI on efficiency, productivity, outcomes and the bottom line for organisations, with little or no mention of societal and ethical principles, connection to values, or understanding the ‘thinking’ of AI.

Leaders are being called on to understand AI better so that they can manage and supervise their investment (Oesch, 2017; Smith & Green, 2018). Smith & Green (2018) discuss roboethics and challenge programmer training to include a deeper understanding of ethical practices and implications, while Martinho-Truswell (2018) claims that to get the most from AI, not only programmers and leaders but all employees should understand the technology better, and organisations should invest in educating everyone.

Aligning organisational efforts and enabling understanding across the board might not only increase preparedness to identify and solve ethical issues and mitigate some of the risks and uncertainties related to AI (Ammanath et al, 2020), but also create new opportunities for its adoption and use, as professionals across the organisation can identify where AI could provide value.

“Understanding machine learning can make an employee more likely to spot potential applications in her own work.” (Martinho-Truswell, 2018)

Martinho-Truswell (2018) claims that all employees should be able to answer three questions: “How does artificial intelligence work? What is it good at? And what should it never do?” She highlights the importance of understanding how humans learn and how machines ‘learn’. The key difference is that humans use heuristics, assumptions and simplifications to make sense of large amounts of data or complex issues, whereas a machine learning algorithm uses all available data points to detect patterns and produce an output. The decisions the machine makes are based on predefined parameters and categories that have been designed by people (Martinho-Truswell, 2018).
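The last point – that machine ‘decisions’ rest on human-designed parameters – can be illustrated with a deliberately trivial sketch. The screening function and the thresholds below are hypothetical, not taken from any system discussed in the sources:

```python
def screen_applicant(years_experience, min_years):
    """A trivial automated 'decision': advance an applicant if their
    experience meets a threshold. The machine applies the rule
    mechanically; a person chose the rule and its parameter."""
    return "advance" if years_experience >= min_years else "reject"

applicant = 4  # years of experience

# Same data, same algorithm -- a different human-defined parameter
# produces the opposite outcome. The decision reflects the designer's
# choices as much as it reflects the applicant's data.
print(screen_applicant(applicant, min_years=3))  # advance
print(screen_applicant(applicant, min_years=5))  # reject
```

Real machine learning systems bury such choices far deeper (in training data, feature selection and objective functions), but the principle is the same: somewhere, people defined the parameters and categories.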

What do we need to create the world we want to create? 

Even though the lack of transparency and explainability in AI is recognised as a concern and a challenge by practitioners (Ammanath et al., 2020), there seems to be little action in practice to resolve and advance it. The frameworks and committees many organisations already have in place to ensure the ethical use of AI may be a good start, but a deeper understanding of the opportunities and limitations of AI could help organisations achieve more transparency, explainability and auditability. If more employees across the board have this understanding, the frameworks and values are more likely to be fostered, enforced and questioned. What, then, are the skills, approaches and principles that organisations need to embrace, and future professionals need to contribute, towards this?

“If employees have thought about proper ethical limitations of AI, they can be important guards against its misuse.” (Martinho-Truswell, 2018)

Instead of industries fighting over a small pool of highly skilled data scientists and programmers, might we see more emphasis on these technology skills across the board in the near future? Jones (2018) calls for an “AI culture”, but what would that mean and include, and how would it manifest itself? Will there be more room for deep questioning, value-based alignment and reflection amidst business pressures and the focus on the bottom line? Or will we default to seeing AI as a “technical inevitability”, where we can only fix the symptoms and “tweak the edges”, as Kate Crawford (2018) put it, rather than tackling the root causes and problems?

AI is just another technology that will disrupt and transform our organisations and societies, and yet at the same time it seems unlike anything we’ve known or seen in the past. As we keep developing AI, will we be able to ‘reverse-engineer’ its decisions – to retrace the steps and understand how, why and on what information certain outcomes and conclusions were based? What does AI accountability look like? How can we practise questioning not only what data we have, but what is missing, to reveal biases and gaps in the outcomes it creates? Could AI support us in revealing and unpacking our biases, rather than reinforcing them?
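What ‘reverse-engineering’ a decision might look like can be sketched with a toy scoring model. Because the weights are explicit, every output can be walked back to the inputs that produced it; the features and weights here are invented purely for illustration:

```python
# Toy "explainable" scoring model: explicit weights mean each decision
# can be decomposed into per-feature contributions and audited.
# (Feature names and weights are hypothetical.)
weights = {"experience": 2.0, "gap_in_cv": -3.0, "referral": 1.5}

def score(applicant):
    """Return the total score and the contribution of each feature,
    so the decision can be retraced rather than taken on faith."""
    contributions = {f: weights[f] * applicant.get(f, 0) for f in weights}
    return sum(contributions.values()), contributions

total, why = score({"experience": 5, "gap_in_cv": 1, "referral": 0})
print(total)  # 7.0
print(why)    # {'experience': 10.0, 'gap_in_cv': -3.0, 'referral': 0.0}
```

The contrast with opaque models is the point: when the Amazon-style CV system described earlier went wrong, its creators could not produce a breakdown like `why` above, which is one reason the faults could not be fixed.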

As we’ve been discussing in the context of crisis, each disruption comes with opportunity, and this is again an opportunity for organisations and societies to consider, in Kate Crawford’s (2018) words: “What kind of world do we want and how can technologies serve that vision rather than driving it?” So what are the skills and capabilities we need to create that world?


Ammanath, B. et al. (2020). Thriving in the era of pervasive AI. Deloitte. Accessed 16 August 2020.

Anderson, C. (2008). The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Wired. Accessed 16 August 2020.

Bruce-Lockhart, A. (2020). Davos 2020: Here’s what you need to know about the future of work. World Economic Forum. Accessed 16 August 2020.

Crawford, K. (2018). You and AI – the politics of AI. Royal Society 2018 series: You and AI. Accessed 16 August 2020.

Crawford, K. (2013). The hidden biases in big data. Harvard Business Review, 1(4).

D’Ignazio, C. (2019). Feminist Data, Feminist Futures. Eyeo Festival. Accessed 16 August 2020.

Jones, M. (2018). Why Companies Will Need To Create An AI Culture To Achieve Success. Forbes. Accessed 16 August 2020.

Kang, S. (2019). To build the workforce of the future, we need to revolutionize how we learn. World Economic Forum. Accessed 16 August 2020.

Martinho-Truswell, E. (2018). 3 questions about AI that nontechnical employees should be able to answer. Harvard Business Review Digital Articles, pp. 2–4.

NPR (2019). Artificial Intelligence Can Make Our Lives Easier, But It Can Also Go Awry. National Public Radio. Accessed 16 August 2020.

Oesch, T. (2017). What Do You Need To Teach Leaders About Artificial Intelligence? Training Industry. Accessed 16 August 2020.

PWC (2017). Sizing the prize: What’s the real value of AI for your business and how can you capitalise? Accessed 16 August 2020.

Smith, A. M., & Green, M. (2018). Artificial Intelligence and the Role of Leadership. Journal of Leadership Studies, 12(3), 85–87.

WEF (2018). Future of Jobs 2018 Report. World Economic Forum. Accessed 16 August 2020.
