IBM intros cloud service and toolkits to detect AI bias

Image credit: Tatiana Shepeleva / Shutterstock.com

IBM has introduced a new software service that it says gives businesses new transparency into AI by automatically detecting bias and explaining how AI makes decisions as those decisions are being made.

The ‘Trust and Transparency’ service runs on the IBM Cloud and works with models built in a wide variety of machine learning frameworks and AI build environments, such as Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML. This means organizations can take advantage of these new controls with most of the popular AI frameworks used by enterprises.

The service can also be programmed to monitor the unique decision factors of any business workflow, enabling it to be customized to each organization's specific use.

The automated software service explains decision-making and detects bias in AI models at runtime, capturing potentially unfair outcomes as they occur. It also automatically recommends data to add to the model to help mitigate any bias it has detected.
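IBM has not published the service's internals, but the core idea of runtime bias detection can be illustrated with a minimal sketch: track outcomes per protected group as decisions stream in, and flag the model when a standard fairness metric such as disparate impact drifts below an acceptable threshold. Everything below (class and group names, the 0.8 cutoff) is a hypothetical illustration, not IBM's implementation.

```python
from collections import defaultdict

# Hypothetical sketch of runtime bias monitoring: count favorable
# outcomes per protected group and compute disparate impact
# (ratio of favorable-outcome rates) as decisions stream in.
class FairnessMonitor:
    def __init__(self, threshold=0.8):
        self.threshold = threshold          # common "80% rule" cutoff
        self.favorable = defaultdict(int)   # favorable outcomes per group
        self.total = defaultdict(int)       # total decisions per group

    def record(self, group, favorable):
        self.total[group] += 1
        if favorable:
            self.favorable[group] += 1

    def disparate_impact(self, unprivileged, privileged):
        rate = lambda g: self.favorable[g] / max(self.total[g], 1)
        priv_rate = rate(privileged)
        return rate(unprivileged) / priv_rate if priv_rate else float("nan")

    def is_biased(self, unprivileged, privileged):
        return self.disparate_impact(unprivileged, privileged) < self.threshold

monitor = FairnessMonitor()
monitor.record("group_a", favorable=False)
monitor.record("group_b", favorable=True)
if monitor.is_biased("group_a", "group_b"):
    print("Potential bias detected at runtime")
```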

Explanations show which factors weighed the decision in one direction vs. another, the confidence in the recommendation, and the factors behind that confidence. Records of the model's accuracy, performance, and fairness, along with the lineage of the AI systems, can also be traced and recalled for customer service, regulatory, or compliance reasons (such as GDPR compliance).

Users of IBM's Trust and Transparency capabilities for AI obtain an explanation of why a recommendation was made. (Image credit: IBM)
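The article does not say how the service computes these factor-level explanations, but for a simple linear model the idea can be sketched directly: each feature's contribution to a decision is its learned weight times its value, and the signed contributions show which factors pushed the decision toward each outcome. The model, data, and feature names below are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative only: train a tiny model and decompose one prediction
# into per-feature contributions (coefficient x feature value).
feature_names = ["income", "debt_ratio", "years_employed"]
X = np.array([[55, 0.30, 4], [72, 0.10, 9], [38, 0.55, 1], [90, 0.20, 12]])
y = np.array([1, 1, 0, 1])  # 1 = loan approved

model = LogisticRegression().fit(X, y)

applicant = np.array([60, 0.40, 3])
contributions = model.coef_[0] * applicant          # signed per-feature pull
confidence = model.predict_proba([applicant])[0][1]  # model's confidence

print(f"Approval confidence: {confidence:.2f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    direction = "toward approval" if c > 0 else "toward denial"
    print(f"  {name}: {c:+.2f} ({direction})")
```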

IBM says it is also making available new consulting services to help companies design business processes and human-AI interfaces to further minimize the impact of bias in decision making.

In addition, IBM Research will release an open-source AI bias detection and mitigation toolkit, bringing forward tools and education to encourage global collaboration around addressing bias in AI.

The AI Fairness 360 toolkit is a library of novel algorithms, code, and tutorials that will give academics, researchers, and data scientists tools and knowledge to integrate bias detection as they build and deploy machine learning models. While other open-source resources have focused solely on checking for bias in training data, the Fairness 360 kit will help check for and mitigate bias in AI models.
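The toolkit has since been published as the open-source Python package aif360. As a minimal sketch of the workflow it supports, the snippet below measures the gap in favorable-outcome rates between groups on a labeled dataset and applies the Reweighing pre-processing algorithm to mitigate it; the data and column names are placeholders.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Placeholder data: 'sex' is the protected attribute, 'hired' the label.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1],
    "score": [0.4, 0.7, 0.3, 0.9, 0.6, 0.5],
    "hired": [0, 1, 0, 1, 1, 0],
})

dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"])
unpriv, priv = [{"sex": 0}], [{"sex": 1}]

# Detect: difference in favorable-outcome rates between the groups.
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print("Mean difference before:", metric.mean_difference())

# Mitigate: Reweighing assigns instance weights that balance outcomes
# across groups before a model is trained on the data.
reweighed = Reweighing(unprivileged_groups=unpriv,
                       privileged_groups=priv).fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(reweighed,
                                        unprivileged_groups=unpriv,
                                        privileged_groups=priv)
print("Mean difference after:", metric_after.mean_difference())
```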

These announcements were accompanied by new research by IBM’s Institute for Business Value, which reveals that while 82% of enterprises are considering AI deployments, 60% fear liability issues and 63% lack the in-house talent to confidently manage the technology.
