Transforming Risk Management with Automated Decision-Making
With Stephen Taylor, Chief Information Officer at Vast Bank
Ankur’s note: Hey there, and welcome back to Pioneers.
Today marks the beginning of our new 4-episode mini-series of Pioneers. Over the next month, we’ll focus on four key areas where AI is reshaping the banking sector.
Each edition will distill the top takeaways from its episode into concise, actionable advice.
Today’s first deep dive is entitled “Transforming Risk Management with Automated Decision-Making”, with Stephen Taylor, CIO of Vast Bank.
Check out this tweet thread for a quick look:
You can no longer scale your bank’s services simply by hiring more people. Efficiency ratios remain stagnant as banks struggle to keep up with growth.

By applying decision AI and generative AI, banks can: [THREAD]

— Ankur A. Patel (@aapatel09), Apr 8, 2024
Here’s a quick TLDR of the highlights:
- AI hallucinations occur when AI systems produce incorrect, nonsensical, or misleading outputs, posing significant risks in high-stakes applications such as healthcare and finance.
- Real-world examples of AI hallucinations, like IBM Watson Health's unsafe cancer treatment recommendations and biased facial recognition systems, highlight the need for proactive measures to mitigate risks.
- Factors contributing to AI hallucinations include biased or incomplete training data, unexpected inputs, model complexity, data noise, and a lack of causal understanding.
- Strategies for mitigating AI hallucinations include task decomposition, leveraging different AI tools for specific sub-tasks, and combining human oversight with AI systems.
- Enhancing explainability and transparency in AI systems through techniques like feature importance analysis and adhering to standardized frameworks can help build trust and ensure responsible AI development and deployment (see the sketch after this list).
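To make the explainability point concrete, here is a minimal sketch of feature importance analysis using scikit-learn's permutation importance. The dataset and model are illustrative stand-ins, not anything from the episode:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a bank's decision dataset (illustrative only).
X, y = make_classification(n_samples=1_000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much the test score drops; large drops mark features the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, mean_importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {mean_importance:.3f}")
```

Surfacing which inputs actually drive a model's decisions is one practical way to spot when a system is leaning on spurious signals before those errors reach production.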
Transforming Risk Management with Automated Decision-Making: A Conversation with Stephen Taylor, CIO at Vast Bank
As AI systems become more advanced and ubiquitous, a growing concern has emerged: AI hallucinations. Hallucinations occur when AI systems confidently produce incorrect, nonsensical, or misleading outputs, which can have serious consequences in high-stakes applications.
But hallucinations are not merely a theoretical problem; they have already manifested in real-world scenarios. In 2016, Microsoft's chatbot Tay began generating racist and offensive tweets within hours of its launch, forcing the company to shut it down. More recently, OpenAI's GPT-3 language model has been shown to generate plausible but factually incorrect information, highlighting the potential for AI to spread misinformation. And who could forget Google Gemini’s much-publicized gaffe of depicting the founding fathers of the United States as people of color.
As AI continues to permeate various industries, from healthcare and finance to legal and transportation, addressing the issue of AI hallucinations is paramount. Building trust and reliability in AI systems is crucial for widespread adoption and safe deployment. In this article, we will explore the risks and consequences of AI hallucinations, examine the factors contributing to their occurrence, and discuss strategies for mitigating their impact. By understanding and addressing this challenge, we can work towards developing AI systems that are not only powerful but also trustworthy and reliable.
The Risks and Consequences of AI Hallucinations
AI hallucinations pose significant risks, particularly in high-stakes applications where the consequences of incorrect or misleading outputs can be severe. In healthcare, for example, an AI system that hallucinates could lead to misdiagnoses or inappropriate treatment recommendations, potentially jeopardizing patient safety. A real-life example of this risk is the case of IBM Watson Health, which was designed to assist doctors in making cancer treatment decisions. However, internal documents revealed that the system sometimes gave "unsafe and incorrect" recommendations, such as suggesting that a cancer patient with severe bleeding should be given a drug that could worsen the bleeding.
The impact of AI hallucinations extends beyond individual instances and can have far-reaching consequences. As Stephen Taylor, CIO at Vast Bank, points out in the recent episode of Pioneers, "The biggest issues in mission-critical content creation are making sure that the data is right." Inaccurate or misleading outputs generated by AI systems can erode trust in the technology, hindering its adoption and potential benefits. This erosion of trust can be particularly damaging in sectors such as healthcare, where patient trust is crucial for effective treatment and care.
Real-world examples of AI hallucinations underscore the need for proactive measures to mitigate their risks. In addition to the aforementioned cases of Microsoft's Tay chatbot and OpenAI's GPT-3, there have been instances where AI systems have generated biased or discriminatory outputs. For example, a study by MIT researchers found that facial recognition systems exhibited higher error rates for people with darker skin tones, raising concerns about fairness and bias. Similarly, a ProPublica investigation revealed that an AI system used by U.S. courts to assess the risk of recidivism was biased against black defendants, falsely labeling them as more likely to re-offend than white defendants.
The consequences of AI hallucinations can also extend to the realm of public safety and security. In one alarming example, an AI-powered facial recognition system used by the Detroit Police Department wrongfully identified a man as a suspect in a shoplifting case, leading to his arrest. This case illustrates the potential for AI hallucinations to contribute to wrongful arrests and infringements on civil liberties.
To address the risks and consequences of AI hallucinations, it is crucial to develop robust frameworks and guidelines for AI development and deployment. This includes establishing rigorous testing and validation processes, implementing safeguards against biased or misleading outputs, and ensuring human oversight and intervention in critical applications. By proactively addressing these issues and adhering to ethical principles, we can work towards building AI systems that are reliable, trustworthy, and beneficial to society.
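As one concrete illustration of the "human oversight and intervention" safeguard, here is a minimal sketch of a confidence-based review gate. The threshold, names, and scores are all illustrative assumptions, not a reference implementation:

```python
from dataclasses import dataclass

# Assumed cutoff; in practice this would be tuned per application and
# validated against historical outcomes.
REVIEW_THRESHOLD = 0.90

@dataclass
class Decision:
    label: str
    confidence: float
    needs_human_review: bool

def route_decision(label: str, confidence: float) -> Decision:
    """Auto-apply only high-confidence outputs; escalate everything else
    to a human reviewer instead of acting on it automatically."""
    return Decision(label, confidence,
                    needs_human_review=confidence < REVIEW_THRESHOLD)

# Example: a risk model flags a transaction as fraud with 72% confidence.
# That falls below the threshold, so the case is queued for an analyst.
print(route_decision("fraud", 0.72))
# Decision(label='fraud', confidence=0.72, needs_human_review=True)
```

The design point is simple: the model never gets the final word on low-confidence, high-stakes calls, which directly limits how far a hallucinated output can propagate.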

Every business leader wants to know if investing in AI will pay off. I don’t have a crystal ball, but I do have a step-by-step process that helps my team and me estimate ROI for clients.
We’ve shared it on our blog to help you calculate ROI for your business.
Read it, implement it, convince stakeholders with it.
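If you want a starting point before reading the full post, here is a generic back-of-the-envelope ROI calculation. This is not Multimodal's step-by-step process, and every figure is an illustrative assumption:

```python
# Not Multimodal's methodology (see the blog post for that); every
# figure below is an illustrative assumption.
hours_saved_per_year = 4_000        # analyst hours automated away
loaded_hourly_cost = 75             # USD: salary plus overhead
annual_benefit = hours_saved_per_year * loaded_hourly_cost  # 300,000
annual_cost = 120_000               # licensing, integration, upkeep

roi = (annual_benefit - annual_cost) / annual_cost
print(f"Estimated first-year ROI: {roi:.0%}")  # -> 150%
```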
P.S. We still calculate ROI for leaders interested in investing in our AI solutions. If that's you, please book a free consultation with us to discuss the next steps.