Practicing ‘responsible AI’ for programme sustainability

Per Sedihn, CTO at Proact IT Group AB

In May 2019, San Francisco, a city at the heart of the digital and technology revolution, took the unprecedented step of banning law enforcement agencies from using facial recognition. The decision, which blocks a tool that police and other agencies use to apprehend criminals, comes amid unease that the technology could turn the US into an oppressive surveillance state.

Facial recognition has been widely held up as one of the most prominent use cases of Artificial Intelligence (AI), so San Francisco’s decision raises serious questions about the technology. This is especially the case as organisations move beyond the experimental phase of AI, with Accenture research suggesting that almost three-quarters (72%) of businesses have already adopted it. Meanwhile, McKinsey claims the number of companies that have embedded at least one AI capability into business processes more than doubled between 2017 and 2018.

AI is a vital tool for helping businesses drive profitability, but it requires careful management to avoid inflicting damage on brand reputation, employees and customers.

Why AI ethics are being questioned

The machine learning algorithms that drive AI decisions detect patterns and provide recommendations based on their training data and experience. They don’t have the benefit of context or an understanding of the implications of the results they provide. For example, an algorithm trained to detect shoes in stock photos can perform poorly on photos of shoes taken with mobile phones. Similarly, in October 2018 Amazon had to abandon an AI tool that had been trained to vet job applications: it had taught itself that male candidates were preferable to female ones, a result of male dominance across the tech industry over the previous decade.
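To make the mechanism concrete, here is a minimal sketch (synthetic data, scikit-learn assumed; the scenario and variable names are hypothetical) of how a model trained on historically skewed outcomes absorbs that skew as if it were a genuine pattern:

```python
# A minimal sketch: a classifier trained on historically biased hiring
# decisions learns an irrelevant attribute as a predictor. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Skill is what *should* matter, but past decisions also favoured
# one group (gender == 1), so the outcome encodes that preference.
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)  # an attribute that should be irrelevant
hired = (skill + 0.8 * gender + rng.normal(scale=0.5, size=n)) > 0.8

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The model assigns real weight to gender, reproducing the historical bias.
print(dict(zip(["skill", "gender"], model.coef_[0].round(2))))
```

Nothing in the training process flags this as a problem; the model is faithfully reproducing its data, which is exactly why human review matters.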

Avoiding AI dangers

It can, therefore, be dangerous to rely on decisions made by AI alone. Businesses need to approach the possibilities of AI with cautious optimism and consider the practical implications of AI projects before implementing them. Of the businesses surveyed in Accenture’s research, only 63% have an ethics committee that reviews AI usage, and 30% don’t offer ethics training to employees.

Business leaders must ensure AI is used responsibly and ethically, whether their main concern is reliability or, for senior directors, risk management and ROI. This means confirming that AI outputs are fair, that personalisation doesn’t cross into discrimination, that data acquisition doesn’t come at the expense of consumer privacy, and that system performance is balanced with transparency when AI is used to make predictions.
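On the first of those checks, fairness can at least be measured before it is debated. As a minimal sketch (pandas assumed; the column names and threshold are hypothetical), a demographic-parity check compares positive-outcome rates across groups:

```python
# A minimal demographic-parity check: compare the rate of positive model
# outcomes across groups. Large gaps warrant human investigation.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Difference between the highest and lowest positive-outcome rates."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical decision log: 1 = approved, 0 = rejected.
decisions = pd.DataFrame({
    "group": ["a", "a", "b", "b", "b", "a"],
    "approved": [1, 1, 0, 1, 0, 1],
})
print(f"parity gap: {demographic_parity_gap(decisions, 'group', 'approved'):.2f}")
```

A check like this doesn’t settle whether a gap is justified; it simply makes the gap visible so that people, not the model, decide.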

It’s important to communicate any limitations of the AI where possible and then carry out rigorous testing to build up the capabilities of your system. Here are a few pointers on where to start:

Create a framework for ethical AI: As the San Francisco decision shows, debate is well under way about what’s right and wrong when it comes to AI deployment. The biggest questions concern how we marry AI systems with human judgement, particularly for ethical decisions. It’s therefore vital to have a framework in place that keeps pace with the development of your project and ensures AI systems are secure, transparent and accountable.

Appoint an AI ethics officer: This emerging role will be crucial to governing your AI development process and ensuring bias and diversity are properly considered. The individual appointed should be able to work across departments to ensure that AI governance is representative of the business as a whole.

Senior exec buy-in remains crucial: While instinct may suggest that data science teams and other technical experts are the people best positioned to ensure AI outputs are fair, senior business leaders have a major role to play. The CEO, in particular, is vital to delivering consistently responsible AI systems, so it’s crucial they are kept up to date with the development of AI to ensure they can ask the right questions and provide the right guidance to prevent potential ethical issues.

Ensure AI data initiatives meet compliance regulations: The emergence of stricter compliance regulations, such as the General Data Protection Regulation (GDPR), has seen businesses re-examine how they use customer data. Issues like the right to be forgotten and the right to human review of automated decisions mean that businesses need to make provision for the handling of personally identifiable information (PII) and put clear metrics in place for AI initiatives; the sketch below illustrates one such provision.
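As a hedged illustration (not legal advice; the key and field names are hypothetical), one common provision is pseudonymising direct identifiers before data reaches an AI pipeline, so records can still be linked internally without exposing PII:

```python
# A minimal sketch of pseudonymising PII before it enters a training
# pipeline: replace direct identifiers with keyed hashes. Destroying the
# key is one way to support erasure ("right to be forgotten") requests.
import hashlib
import hmac

SECRET_KEY = b"store-me-in-a-secrets-manager"  # hypothetical; never hard-code

def pseudonymise(value: str) -> str:
    """Keyed hash of an identifier; unlinkable without the key."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "purchase_total": 42.50}
record["email"] = pseudonymise(record["email"])
print(record)
```

A keyed hash, rather than a plain one, matters here: without the key, an attacker can’t confirm guesses against common values such as email addresses.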

Become a responsible AI leader

Enterprise adoption of AI is gathering pace, and business leaders need to think now about what this means for their organisation. There’s no template for how AI programmes should be established or governed, so every business needs to develop its own framework with ethical considerations woven deep into its fabric.

Successful implementation of AI will depend on establishing trust with both employees and customers, which is why responsible deployment is so crucial. As employees build trust in the insight AI provides, they will become more likely to use it in their daily routines. Fostering customer trust will encourage customers to use your products, which in turn will improve your brand reputation, the effectiveness of your AI projects and your capacity to innovate.

Governance also extends to control of data. Organisations may have specific reasons to keep data outside of the public cloud, such as data privacy or sensitivity. We work with our partner NetApp to make this process as smooth as possible, helping our customers onboard with the NetApp Private Storage (NPS) solution, which provides the architecture needed to link to public clouds over fibre connections. This means their data can reside in their own data centre while connecting seamlessly to other data centres. Our customers can use NPS to strike the balance between public and private data storage they require, with the flexibility to optimise for performance, scale and cost.

By putting dedicated roles in place to direct AI development and compliance, securing buy-in from senior leadership and supporting the effort with best-in-class technology, you can lay the foundations for sustainable growth. This will help to make AI a valuable part of how your organisation operates and delivers services to customers.
