Artificial Intelligence

As the American AI Initiative advances AI, what can companies do about bias?

  • Artificial intelligence and machine learning will automate much of finance.
  • But to mitigate biases in their algorithms, financial institutions (FIs) need a real plan.

This article was contributed by Sanjay Srivastava, Chief Digital Officer at Genpact, as part of Tearsheet’s new Thought Leaders contributor program.

In February 2019, President Donald Trump signed the American AI Initiative, an executive order that aims to guide research and development, federal resources, workforce training, and standards for the ethical use of artificial intelligence. This initiative presents an opportunity for American enterprises and financial institutions to be world leaders in AI, if they can establish a solid foundation for data, the right talent, and governance for issues like bias. Let us look specifically at AI bias, which is now a top concern for financial institutions and consumers alike as AI adoption grows.

In the second edition of Genpact’s global AI research study, we find that more than two-thirds of consumers are concerned about robots discriminating against them in decision-making. Among senior executives, nearly all (95 percent) say their companies are taking steps toward mitigating bias. However, only 34 percent are addressing the problem holistically, with governance and internal controls.

To combat bias and ease concerns, organizations can take measures into their own hands, starting with identifying what causes bias.

Biases in data samples and training

One big cause of bias is a lack of diversity in the data samples used to train algorithms. For instance, some lending institutions use AI to sift through large amounts of consumer data to automate and shorten the loan approval process. But if an organization trains an algorithm only on available data about borrowers from affluent neighborhoods, then the system is likely to discriminate against future applicants from other areas, because they fall outside the model’s parameters.
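
This sampling problem can be shown with a minimal sketch. The "model" below is a deliberately naive approval rule, and all names and figures are hypothetical illustrations, not any real lender's method: fit a cutoff only on affluent-area borrowers, and every applicant from outside that range is rejected, regardless of creditworthiness.

```python
# Minimal sketch of sampling bias: a toy approval rule fit only on
# affluent-area borrowers. All names and numbers are hypothetical.

def fit_threshold(training_incomes):
    """Learn an approval cutoff from training data: the lowest income
    observed among past approved borrowers."""
    return min(training_incomes)

# Training data drawn only from affluent neighborhoods (annual income, $k).
affluent_borrowers = [95, 110, 88, 102, 130]
cutoff = fit_threshold(affluent_borrowers)  # 88

# New applicants from other areas, assumed to have solid repayment histories.
other_area_applicants = [52, 61, 47, 70]

decisions = [income >= cutoff for income in other_area_applicants]
print(decisions)  # [False, False, False, False] -- every applicant rejected,
                  # simply because they fall outside the training range
```

The rule itself contains no explicit reference to neighborhood, yet the skewed sample makes its decisions discriminatory in effect.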

Another source of bias is training, namely incomplete or improper use of algorithms. For example, chatbots are now a common fixture among banks. Ideally, a chatbot learns from conversations and becomes more personable with customers over time, leading to better experiences. But just as chatbots can learn the right things to say, they can also learn bad things, such as offensive language, unless trained not to do so. When companies rush training and neglect comprehensive planning and design, these types of issues arise.
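
One common safeguard is an output guardrail that screens a chatbot's learned responses before they reach customers. The sketch below is a hypothetical illustration of that idea, with placeholder terms rather than any bank's real filter:

```python
# Minimal sketch of a chatbot output guardrail: block learned responses
# containing disallowed terms before they reach customers.
# The blocklist entries and replies are hypothetical placeholders.

BLOCKLIST = {"offensive_term", "slur_placeholder"}

def safe_reply(candidate, fallback="Let me connect you with an agent."):
    """Return the candidate reply unless it contains a blocked term,
    in which case fall back to a safe handoff message."""
    words = set(candidate.lower().split())
    return fallback if words & BLOCKLIST else candidate

print(safe_reply("Happy to help with your balance!"))        # passes through
print(safe_reply("that is an offensive_term response"))      # replaced by fallback
```

A filter like this is a backstop, not a substitute for careful training data and design, which is the article's broader point.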

Diversity is the key to combatting bias

The best way to combat AI bias is with diversity in both datasets and the teams working with AI. Companies need broad datasets that can address all use cases. If an organization has only homogeneous internal data, it can look to external sources to gain a more complete picture. Synthetic data, generated via machine learning, is also gaining popularity for testing and validating models.
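
To make the dataset-diversity idea concrete, here is a minimal sketch of balancing a skewed dataset with synthetic records. Real synthetic-data tools use generative models; the jittering below is a crude stand-in, and all fields and figures are hypothetical:

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical skewed dataset: many affluent-area records, few from elsewhere.
records = [{"area": "affluent", "income": random.gauss(100, 10)} for _ in range(90)]
records += [{"area": "other", "income": random.gauss(55, 8)} for _ in range(10)]

def synthesize(minority, n):
    """Create n synthetic records by jittering real minority examples
    (a crude stand-in for model-based synthetic data generation)."""
    out = []
    for _ in range(n):
        base = random.choice(minority)
        out.append({"area": base["area"],
                    "income": base["income"] + random.gauss(0, 2)})
    return out

minority = [r for r in records if r["area"] == "other"]
balanced = records + synthesize(minority, 80)

counts = {"affluent": 0, "other": 0}
for r in balanced:
    counts[r["area"]] += 1
print(counts)  # {'affluent': 90, 'other': 90}
```

The point is the before/after shape of the data, not the generation method: models trained or validated on the balanced set see both groups in comparable numbers.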

Diverse teams can solve for training bias. When there is only a small group working with a system, it becomes partial to the ideas of a select few. Bringing in a group with different skills, thinking, and approaches leads to more holistic solutions. Armed with industry and process knowledge, these domain experts can think through potential biases, train the models accordingly, and provide governance frameworks to monitor for biases and promote trust in the technology.
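
Governance frameworks that "monitor for biases" often start with a simple disparity check. The sketch below applies the widely cited four-fifths guideline to approval rates across two groups; the data and the choice of threshold are illustrative assumptions, not a compliance standard:

```python
# Sketch of a simple bias monitor: compare approval rates across groups
# and flag the model if the ratio falls below the common "four-fifths"
# guideline. Data and threshold are illustrative assumptions.

def approval_rate(decisions):
    """Fraction of approved decisions in a group."""
    return sum(decisions) / len(decisions)

group_a = [True, True, True, False, True]    # 80% approved
group_b = [True, False, False, False, True]  # 40% approved

ratio = approval_rate(group_b) / approval_rate(group_a)
flagged = ratio < 0.8
print(round(ratio, 2), flagged)  # 0.5 True -> send the model for review
```

In practice such checks run continuously as part of model governance, with flagged models routed back to the kind of cross-functional team described above.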

One bank used AI to automate 80 percent of its financial spreading process, including extracting figures from documents and formatting them into templates. To train the AI so that it would pull the right data while avoiding bias, the bank relied on a diverse team of experts with data science, customer experience, and credit decision expertise. Now, it applies AI to spreading on 45,000 customer accounts across 35 countries.

As AI adoption grows and national interests increase, bias will continue to be a matter of concern. Enterprises will have to take it upon themselves to proactively mitigate bias through diverse datasets, teams with domain expertise, and the proper governance frameworks.
