It seems like everyone is excited about the potential of Artificial Intelligence (AI). The hype has become intense, and understandably so. The technology has become part of everyday life, whether it’s speaking to Siri on your iPhone, Netflix predicting what shows you might like based on your previous choices, or the Grab app finding the fastest way home.
In business and especially in the world of financial services, banks are reshaping their business models using AI for a widening range of functions. They are using it to better identify suspicious transactions and catch money laundering, deploying it in credit risk assessments or using it to improve the customer experience by predicting what services you might like to use next.
While the technology promises to help streamline and improve our lives, we also need to be aware of the decisions AI is making that might impact people’s lives significantly. For example, there are more than 110 million underbanked people in Southeast Asia; you don’t want credit decisions made in a black box that no one can explain, or that starts to show bias in its decision making. To engender trust in the technology, it is therefore vital that AI is both ethical and explainable.
To address this, regulators in the region are developing frameworks to help guide the safe and reliable development of AI. Across Asia Pacific, Singapore seems to have made the biggest strides in this direction – the country’s AI governance and ethics initiatives have won an international award, and include Asia’s first Model AI Governance Framework, released in January 2019. For financial institutions, the Monetary Authority of Singapore (MAS) has specifically issued a set of principles to promote fairness, ethics, accountability and transparency (FEAT) in the use of AI in finance.
Australia and New Zealand launched similar initiatives in 2018. The federal government in Australia pledged close to A$30 million over four years to enhance local AI capabilities, including the development of a national AI ethics framework. In New Zealand, the AI Forum is seeking to act on the recommendations of its report, ‘Artificial Intelligence: Shaping a Future New Zealand’, in partnership with industry, government and academia. This includes adapting to the effects of AI on law, ethics and society.
In India, the federal government has also adopted a national AI strategy that recommends setting up a Centre for Studies on Technological Sustainability to address issues related to ethics.
Such efforts raise the bar for both explainable AI and ethical AI, which I predict will be the tech industry’s biggest development in 2019.
Racing to Contribute Toward Ethical AI
As a chief analytics officer, I’m deeply involved in finding ways to help support the industry’s embrace of ethical AI. Here’s how our recent patent work tackles the challenge:
Blockchain: Even though Bitcoin, the most famous instantiation of the blockchain, had a lousy year in 2018, the underlying technology is on fire in novel business applications such as car rentals. In 2018 I turned my thoughts on blockchain inward, producing a patent application around using blockchain to ensure that all of the decisions made about an AI model are recorded and auditable. These include the model’s variables, its design, the data utilised and the features selected, as well as visibility into its latent features and a record of every scientist who built portions of the variable sets and model weights. The complete record of these decisions provides the visibility required to effectively govern models and satisfy regulators.
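To make the idea concrete, here is a minimal sketch (my illustration only, not the patented method) of an append-only, hash-linked log of model-development decisions, where each entry embeds the hash of the previous one so any later tampering breaks the chain:

```python
import hashlib
import json

class ModelAuditChain:
    """Append-only, hash-linked log of model development decisions.

    Each entry records a decision (feature selection, model design,
    a weight update, etc.) plus the hash of the previous entry, so
    altering any past entry invalidates every hash that follows it.
    """

    def __init__(self):
        self.entries = []

    def record(self, author, decision, details):
        """Append one decision and return its hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "author": author,        # who made the decision
            "decision": decision,    # e.g. "feature_selection"
            "details": details,      # free-form description
            "prev_hash": prev_hash,  # link to the previous entry
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry["hash"]

    def verify(self):
        """Re-hash every entry and check the links; True if untampered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if expected != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```

An auditor can replay `verify()` at any time; if a data scientist quietly edits an earlier decision, the recomputed hashes no longer match and the chain fails.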
Explainable latent features: Another patent addresses the immaturity of the AI industry overall—which is painfully evident when the conversation turns to machine learning algorithms’ explainability. Specifically, for all of data scientists’ talk about deep learning being game-changing technology, questions about the details of learned patterns in a shallow or deep neural network are usually answered with quizzical silence, even at the largest companies. This is completely unacceptable for anyone who has to talk to a customer about the model or represent it to a regulator.
My patent for explainable latent features “explodes” a neural network model into a sparsely connected multi-layered model, such that each hidden node can be explained succinctly. I recently talked about explainable latent features at an innovation workshop, and the audience was very enthusiastic about building transparency into models in this way.
Bias removal: This ethical AI topic is a little broader. It looks at restricting the type of data that can go into a model build, to prevent the introduction of bias—a conceptual cornerstone of ethical AI. I’ve filed two patents to facilitate decision-making on whether particular data and derived variables are suitable for a model or not. For example, a model that factors in a person’s height would be useful in calculating the production cost of a pair of blue jeans (which typically have the same price, irrespective of inseam length), but not in assessing a loan applicant’s earning potential or creditworthiness.
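A drastically simplified version of such a screening step (again my own illustration, not the patented method; the attribute and proxy lists are hypothetical) could gate each candidate variable by decision type before it ever reaches the model build:

```python
# Attributes deemed off-limits for credit decisions, plus variables
# known to act as proxies for them. Both lists are illustrative only.
PROTECTED = {"gender", "ethnicity", "religion", "marital_status"}
KNOWN_PROXIES = {
    "height": {"gender"},                  # height correlates with gender
    "first_name": {"gender", "ethnicity"},
}

def screen_variable(name, decision_type):
    """Return (allowed, reason) for using `name` in a model build."""
    if decision_type == "credit":
        if name in PROTECTED:
            return False, f"'{name}' is a protected attribute"
        if name in KNOWN_PROXIES:
            proxies = ", ".join(sorted(KNOWN_PROXIES[name]))
            return False, f"'{name}' can act as a proxy for {proxies}"
    return True, "no objection recorded"
```

This mirrors the jeans example above: `screen_variable("height", "garment_costing")` passes, while `screen_variable("height", "credit")` is rejected because the same variable is a potential proxy for a protected attribute in that context.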
All of these patents fit neatly under an umbrella of regulatory embrace of ethical AI, which is accelerating dramatically as governing bodies enforce the requirement to understand and explain the AI models set to power today’s financial decisions. A common set of principles serves as a good industry benchmark as AI uptake grows among financial institutions in Asia Pacific, be it for fraud analysis, customer service or more informed business decisions. Putting these principles into practice ensures that AI deployments are considered more thoroughly and holistically. The confidence and trust that this fosters in the industry will empower more ethical and innovative use of AI technologies.
Written by Scott Zoldi, Chief Analytics Officer, FICO