Artificial Intelligence

As the American AI Initiative advances AI, what can companies do about bias?

  • Artificial intelligence and machine learning will automate much of finance.
  • But to mitigate biases in the algos, FIs must have a real plan.

This article was contributed by Sanjay Srivastava, Chief Digital Officer at Genpact, as part of Tearsheet's Thought Leaders contributor program.

In February 2019, President Donald Trump signed the American AI Initiative, an executive order that aims to guide research and development, federal resources, workforce training, and standards for the ethical use of artificial intelligence. The initiative presents an opportunity for American enterprises and financial institutions to become world leaders in AI, provided they can establish a solid foundation for data, the right talent, and governance for issues like bias.

Let us look specifically at AI bias, now a top concern for financial institutions and consumers alike as AI adoption grows. In the second edition of Genpact's global AI research study, we found that more than two-thirds of consumers are concerned about robots discriminating against them in decision-making. Among senior executives, nearly all (95 percent) say their companies are taking steps to mitigate bias. However, only 34 percent are addressing the problem holistically with governance and internal controls. To combat bias and ease these concerns, organizations can take matters into their own hands, starting by identifying what causes bias.

Biases in data samples and training

One big cause of bias is a lack of diversity in the data samples used to train algorithms. For instance, some lending institutions use AI to sift through large amounts of consumer data to automate and shorten the loan approval process. But if an organization trains an algorithm only on available data about borrowers from affluent neighborhoods, the system is likely to discriminate against future applicants from other areas because they fall outside the model's parameters.

Another source of bias is training itself, namely incomplete or improper use of algorithms. For example, chatbots are now a common fixture among banks.
Ideally, a chatbot learns from conversations and becomes more personable with customers over time, leading to better experiences. But just as chatbots can learn the right things to say, they can also pick up bad habits, such as politically incorrect language, unless they are trained not to. These types of issues arise when companies rush training and neglect comprehensive planning and design.

Diversity is the key to combating bias

The best way to combat AI bias is with diversity, in both datasets and the teams working with AI. Companies need broad datasets that can address all use cases. If an organization has only homogeneous internal data, it can look to external sources to gain a more complete picture. Synthetic data is even available now and is gaining popularity for testing and validating machine learning models.

Diverse teams can solve for training bias. When only a small group works with a system, it becomes partial to the ideas of a select few. Bringing in a group with different skills, thinking, and approaches leads to more holistic solutions. Armed with industry and process knowledge, these domain experts can think through potential biases, train models accordingly, and provide governance frameworks to monitor for bias and promote trust in the technology.

One bank used AI to automate 80 percent of its financial spreading process, including extracting figures from documents and formatting them into templates. To train the AI to pull the right data while avoiding bias, the bank relied on a diverse team with expertise in data science, customer experience, and credit decisioning. Today, it applies AI to spreading across 45,000 customer accounts in 35 countries.

As AI adoption grows and national interest increases, bias will remain a matter of concern.
Enterprises will have to take it upon themselves to proactively mitigate bias through diverse datasets, teams with domain expertise, and the proper governance frameworks.
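To make the lending example concrete, the kind of check a governance framework might run on training data can be sketched in a few lines: measure how well each group is represented in the sample, and compare approval rates across groups. This is a minimal illustration, not a production fairness audit; the data, group labels, and function names are all hypothetical.

```python
# Minimal sketch: checking a loan-approval training set for representation
# and outcome gaps across groups. All data and field names are hypothetical.
from collections import Counter

# Toy training records: (neighborhood_tier, approved)
records = [
    ("affluent", True), ("affluent", True), ("affluent", False),
    ("affluent", True), ("affluent", True), ("affluent", True),
    ("other", False), ("other", True),
]

def representation(rows):
    """Share of training examples per group."""
    counts = Counter(group for group, _ in rows)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def approval_rates(rows):
    """Approval rate per group."""
    rates = {}
    for group in {g for g, _ in rows}:
        outcomes = [ok for g, ok in rows if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

rep = representation(records)
rates = approval_rates(records)
# Demographic parity difference: gap between highest and lowest approval rate.
parity_gap = max(rates.values()) - min(rates.values())

print(rep)         # a skewed sample (e.g. 75% from one group) flags sampling bias
print(parity_gap)  # a large gap flags the training signal itself for review
```

A skewed representation share or a large parity gap does not prove the model is biased, but either one is exactly the kind of signal that should trigger a review by a diverse team before the model reaches production.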
