Ethical considerations for AI strategies

For almost a decade I have been talking to clients about how AI shapes user experience and CX, but I am also interested in how strategic choices could alter AI's impact on our industry and society as a whole. I was recently certified in AI ethics by the London School of Economics and Political Science (LSE), which informs how I guide organisations towards more ethically aware strategies.

AI helps us to organise overwhelming amounts of information, automate inefficient tasks and enable innovative business models. However, AI can also create informational asymmetry, in which organisations and governments wield unwanted power over people. The ethical use of AI is strongly connected to the legitimate use of informational power, which relies on transparency about how and why data and AI are being used. The nature of AI and machine learning is that it can analyse large data sets and make connections that we cannot, but this sometimes produces black boxes whose algorithmic outputs cannot be explained in terms most humans would understand. As a result, organisations can feel as though they have little control over AI's broader ethical impacts.

However, every organisation still has the power to make conscious decisions about how they design and deploy AI to mitigate harm, while being as transparent as possible about the choices they make and what they do and don’t know.

In addition to thinking about the granular ways they will work with and on AI, organisations can also consider how their AI strategy could impact the following key ethical challenges:

  • Equality and discrimination

  • Workplace fairness

  • The right to privacy

  • The local economy

  • The local environment

  • Social cohesion in the community 

Equality and discrimination

Creative destruction happens when new business models replace old and unproductive ones, but it can also affect people's resources, opportunities and wellbeing unequally. One example is the use of AI to predict consumer demand, which pushes gig workers into precarious and unstable schedules. AI can also reproduce and worsen discrimination, whether through choices in the learning algorithm or through the data itself: statistically biased because it excludes some demographics, or systemically biased because it reflects a reality that is itself discriminatory, further entrenching those patterns. For example, a study by the Berkeley Haas Center for Equity, Gender and Leadership analysed 133 AI systems across different industries and found that about 44 per cent of them showed gender bias, and 25 per cent exhibited both gender and racial bias.

Does your AI strategy consider auditing algorithms for bias and involving stakeholders in the design process with participatory AI?
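To make this question concrete, here is a minimal sketch of one kind of bias audit: comparing the rate of positive model outcomes across demographic groups, a simple demographic parity check. The data and group labels below are hypothetical placeholders, and a real audit would examine several fairness metrics alongside input from affected stakeholders.

```python
# Minimal demographic parity check: compare rates of positive model
# outcomes across demographic groups. All data here is hypothetical.
from collections import defaultdict

# (group, model_decision) pairs -- illustrative placeholders only
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    positives[group] += decision

rates = {g: positives[g] / totals[g] for g in totals}
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}")

# A deliberately crude heuristic: flag the model if any group's rate
# falls below 80 per cent of the highest rate (the 'four-fifths rule'
# used in US employment-discrimination guidance).
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential disparate impact for {group} (ratio {rate / best:.2f})")
```

A failing check like this is a prompt for human investigation, not a verdict; participatory AI means bringing the affected groups into that investigation.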

Workplace fairness

As mentioned above, employment can be unequal in terms of the distribution of money and opportunities. In the workplace, AI recruitment models can exacerbate this issue: opaque algorithms can replicate statistical and systemic biases and so discriminate against candidates. AI is also triggering a degree of de-professionalisation, in which it outperforms workers at the key tasks that make up their professions.

Does your AI strategy consider how you will upskill your teams equally, and whether you have inclusive AI governance frameworks? Do you have a human-centric plan for the employees and departments who could be displaced?

The right to privacy

Digital economy business models rely on large amounts of personal data, making it nearly impossible for individuals to track or remove sensitive information once it has been re-sold by data brokers. Differential privacy is built around the idea of privacy as anonymity, but it still allows statistical learning from datasets, meaning models can still be used to exercise power over people by learning about a group to which they belong, revealing personal information or surfacing patterns people did not want to disclose. Generative AI has also created ‘privacy in public’ challenges for visual artists and authors, who must share their work publicly yet find that LLMs can use that content without consent.

Does your AI strategy consider transparency, data minimisation, user consent, access and correction, and responsible AI development and governance?
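To illustrate what differential privacy looks like in practice, here is a minimal sketch of the classic Laplace mechanism: a count query answered with calibrated noise, so that aggregate statistics stay useful while any single individual's presence in the data is obscured. The records and epsilon values are illustrative.

```python
# Differentially private count query via the Laplace mechanism.
# A count query has sensitivity 1 (adding or removing one person changes
# the count by at most 1), so Laplace noise with scale 1/epsilon
# satisfies epsilon-differential privacy.
import random

def dp_count(records, predicate, epsilon):
    true_count = sum(1 for r in records if predicate(r))
    # Difference of two Exponential(epsilon) draws is Laplace(0, 1/epsilon)
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 52, 38, 61, 27]  # hypothetical records

# Smaller epsilon = stronger privacy guarantee = noisier answer
for eps in (0.1, 1.0):
    answer = dp_count(ages, lambda a: a > 30, epsilon=eps)
    print(f"epsilon={eps}: noisy count of people over 30 = {answer:.1f}")
```

Note the trade-off described above: even a rigorously anonymised, differentially private dataset still supports statistical learning about the groups those individuals belong to.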

The local economy

AI innovation can increase economic opportunities, but AI could also create market failures if left unmonitored. Economic theory holds that under perfect competition no single actor in the market has the power to fix prices or impose terms of exchange, and both sellers and buyers need as much information as possible. Joseph Heath (2014:15) outlines the role of the market in contemporary societies: “The ultimate goal of the economy as a whole is to satisfy human needs. The demand for various goods is an expression, however imperfect, of the intensity of these needs. The function of the price system is to channel resources toward the satisfaction of the most important of these needs. Market competition produces an efficient allocation of resources only if a market respects a set of idealised Pareto conditions.” Yet many large tech platforms currently do have this outsized power and do not abide by the traditional rules of the marketplace.

Does your AI strategy consider the role of open data sharing and interoperability standards, collaborative AI platforms, strengthening intellectual property protections, and advocating for more rigorous antitrust scrutiny of mergers and acquisitions in the AI industry? Are you working with local AI experts and home-grown tools?

The local environment

Training and operating large language models requires vast amounts of energy and water, driving up emissions, resource extraction and e-waste generation. AI hardware depends on mining that is often unsustainable, and the disposal of outdated AI technology creates e-waste that can contaminate soil and water. AI can also be used to suppress environmental activism.

Does your AI strategy consider energy-efficient practices? Do you prioritise sustainability by educating employees and partnering with vendors committed to sustainability? Are you displacing or adversely impacting Indigenous peoples as a consequence of your AI practices?
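One way to ground the energy question is a back-of-envelope emissions estimate for a training run: energy is GPU count times average power draw times hours, scaled by the data centre's power usage effectiveness (PUE), and emissions are energy times the local grid's carbon intensity. Every figure below is an assumed placeholder; real values vary enormously by hardware and region.

```python
# Back-of-envelope CO2e estimate for a model training run.
# Every input figure below is an illustrative assumption.
gpu_count = 64           # GPUs used for training (assumed)
gpu_power_kw = 0.4       # average draw per GPU, kW (assumed)
training_hours = 200     # wall-clock training time (assumed)
pue = 1.2                # data-centre power usage effectiveness (assumed)
carbon_intensity = 0.4   # kg CO2e per kWh of grid electricity (assumed)

energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
emissions_kg = energy_kwh * carbon_intensity

print(f"Estimated energy use: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {emissions_kg:,.0f} kg CO2e")
```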

Social cohesion

Misinformation (the unintentional spreading of false information) and disinformation (the intentional spreading of false information) have long been a challenge to social cohesion, yet AI increases our ability to produce both. Products powered by generative AI enable any individual to develop realistic-looking text, video and audio portraying fake events, and the design of social platforms lets people pass on false information faster and more effectively. In pluralistic societies people will always disagree. However, a cohesive society requires some level of value alignment, which disinformation and misinformation erode.

Does your AI strategy consider how you will analyse the language, sentiment, and structure of content to detect patterns and identify potentially misleading or false content? 
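As a sketch of the shape such analysis can take, the toy classifier below scores text using TF-IDF features and logistic regression. The handful of labelled examples is invented purely for illustration; a real system needs large, carefully curated corpora, and any flag it raises should route to human review rather than automatic removal.

```python
# Toy misleading-content detector: TF-IDF features + logistic
# regression. The labelled examples are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Scientists confirm the study was peer reviewed and replicated",
    "Official figures released by the statistics office this quarter",
    "SHOCKING secret cure THEY don't want you to know about",
    "Share before it's deleted!!! The truth they are hiding exposed",
]
labels = [0, 0, 1, 1]  # 0 = likely reliable, 1 = potentially misleading

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

new_post = "Doctors HATE this one weird trick, share now before it's banned"
score = model.predict_proba([new_post])[0][1]
print(f"Misleading-content score: {score:.2f}")  # a flag for review, not a verdict
```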

Conclusion

In the coming years, every organisation has the opportunity to be defined by the role it plays in the development and distribution of this world-changing technology. State regulations will evolve, but how people work on and with AI will have the biggest impact on the nature of our future societies. Asking the right ethical questions now is a small investment in the positive impact your strategy could have on our collective experiences in the future.
