
Why GenAI is More than Hype

Finance has had more than its fair share of hype cycles, so you’d be forgiven for thinking that Generative AI was just one more. But, according to Mastercard, the technology is on the brink of significant economic disruption. 

“Unlike other technologies that have seen hype cycles, generative AI exhibits clear use cases, has led to the creation of robust solutions, and is developing swiftly,” said Mastercard in its Signals Report, published this month. 

Ken Moore, Mastercard’s Chief Innovation Officer and Head of Foundry

“The excitement and rapid advancement of generative AI exhibits similar characteristics to technologies that have proved their staying power, like the internet or the smartphone,” said Ken Moore, Mastercard’s Chief Innovation Officer and Head of Foundry. “For instance, it’s not something that impacts one industry or geographic region. Instead, it can be applied cross-functionally.”

Its development, however, is the result of several converging vectors that could supercharge the sector beyond the usual hype cycle.

Democratisation of AI

Less than a year ago, industry leader OpenAI launched its ChatGPT interface, hurtling the average consumer into a futuristic world. No longer did one have to be technically competent to experience the power of GenAI.

ChatGPT’s plug-in feature allowed companies to utilize the chatbot as a value-added service. Klarna, Expedia, and Instacart are among the many that have used the function to enhance the consumer experience. Converted into a practical tool, ChatGPT has been used to search for information and create personalized recommendations.

While OpenAI demonstrated the power of GenAI, ChatGPT has been available primarily as a hosted model, giving OpenAI access and storage rights to any data input into the system. This made it impossible for those who handle private and customer data to use it, forcing them to build their own models instead.

In response, Meta launched its own Large Language Model (LLM), dubbed LLaMA, as open-source GenAI. “The competitive landscape of AI is going to completely change in the coming months, in the coming weeks maybe, when there will be open source platforms that are actually as good as the ones that are not,” said Yann LeCun, Vice-President and Chief AI Scientist at Meta, soon after the launch.

Yann LeCun, Vice-President and Chief AI Scientist at Meta

“The ubiquity of models coming to market, from GPT-4 to Llama 2, has democratized the technology to enable individuals with no technical expertise to harness its capabilities easily,” said Moore. 

Mastercard noted that a shift to open source could empower institutions to use the technology safely without compromising their proprietary data and that of their customers. This could supercharge adoption, making GenAI accessible for enterprises both large and small.

“Models that can access and learn from specific data, such as transaction history, can provide better banking interactions,” stated the report. 

Open Banking and GenAI’s symbiotic relationship

Data plays a significant role in the technology’s ability to develop further, and the timing of generative AI’s launch is critical. Coinciding with the global mass acceptance of open banking, access to a wider network of data could also push the technology forward. 

While Europe is at the forefront of open banking, countries around the world are in the process of adopting policies to enhance local adoption. In the U.S., fintechs have so far taken the helm, but the CFPB recently announced its intention to finalize a proposal for market-wide access by 2024.

According to Mastercard, the influx of open banking data, in parallel with the launch of open-source generative AI, could spark a revolution in financial services. 

“Through open banking, generative AI can access a broader dataset and consequently create more sophisticated models in specific verticals,” stated the Mastercard Signals report. 

“Open banking platforms’ existing privacy and security frameworks will let consumers control the use of their data by these AI models.”

Moore elaborated, explaining that increased data availability could unlock the technology’s full potential due to the sheer volume needed to train LLMs efficiently. “As the amount of data grows, AI models can become increasingly better at their designed tasks,” he said.

Evolution of an existing tool

Although many experts indicate GenAI is still in its early stages, AI has already been deployed across the verticals of the digital ecosystem for quite some time.

“While its full potential will take a few years to materialize, the financial services industry has been harnessing other forms of AI for years,” said Moore. “This experience will help accelerate the adoption of Gen AI as the initial use cases are easier to understand, and many of the skills and capabilities are in place.”

Mastercard is one of many institutions that has deployed AI across its solutions, buying into the efficiency and processing power of the technology even before large language models like ChatGPT were caught in their cycle of hype.

The development of AI into its GPT form gives the technology a tangibility, despite its nascency, that is driving adoption. Already, companies can see its applicability even while understanding that GenAI isn’t yet in its most powerful form.

“Financial institutions will leverage it for personalized banking services, fraud detection, and regulatory compliance,” the Mastercard report stated. “Gen AI is instrumental in risk mitigation, as it can generate synthetic data sets to test the efficacy of fraud detection algorithms.”
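The report’s point about synthetic data can be made concrete with a small sketch. The Python example below is purely illustrative and not Mastercard’s method: it generates a toy synthetic transaction set with a known fraud rate, then checks how well a simple off-the-shelf anomaly detector flags the fraudulent records. The feature choices, rates, and use of scikit-learn’s IsolationForest are all assumptions for demonstration.

```python
# Illustrative sketch only: synthetic transaction data used to test a fraud detector.
# Column choices, fraud rate, and model are assumptions, not a production design.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
n = 10_000
fraud_rate = 0.02  # assumed share of fraudulent transactions

# Legitimate transactions: modest amounts, mostly daytime hours
legit = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=int(n * (1 - fraud_rate))),  # amount
    rng.normal(loc=14, scale=4, size=int(n * (1 - fraud_rate))),         # hour of day
])
# Fraudulent transactions: larger amounts, skewed toward night-time
fraud = np.column_stack([
    rng.lognormal(mean=5.5, sigma=0.8, size=int(n * fraud_rate)),
    rng.normal(loc=3, scale=2, size=int(n * fraud_rate)),
])

X = np.vstack([legit, fraud])
y = np.concatenate([np.zeros(len(legit)), np.ones(len(fraud))])  # 1 = fraud

# Score the synthetic data with an unsupervised anomaly detector
model = IsolationForest(contamination=fraud_rate, random_state=0).fit(X)
flagged = model.predict(X) == -1  # -1 marks records the model considers anomalous

recall = (flagged & (y == 1)).sum() / (y == 1).sum()
print(f"Share of synthetic fraud cases flagged: {recall:.1%}")
```

Because the fraud labels in a synthetic set are known by construction, this kind of test can report exactly how much of the injected fraud a detection algorithm catches, which is the “efficacy” check the report describes.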

However, for now, the technology is limited, relying on a symbiosis with humans for oversight. 

“Human oversight will likely be an essential function of gen AI for the foreseeable future,” said Moore. “One of the current challenges we call out is around fake information and what are known as hallucinations, which will require some level of development before we can remove human checks and balances.” 

Robert Antoniades, Co-Founder and General Partner of Information Venture Partners
Robert Antoniades, Co-Founder and General Partner of Information Venture Partners

“AI systems are typically trained on historical data and patterns, making them reliant on dated information. What if something unprecedented happens? What if social norms or environments evolve? What if laws change? What if the data it’s trained on is inaccurate or biased? While AI can automate many tasks and assist with decision-making, it should be seen as a tool to enhance human capabilities rather than replace human judgment entirely.”

These “hallucinations” and concerns over biased outcomes may limit the technology for the foreseeable future. 

“You have to understand that in financial services, if it’s anything important, it has to be 100% accurate,” said Robert Antoniades, Co-Founder and General Partner of Information Venture Partners. “There’s no room for hallucinations. There’s no room for errors. AI-generated answers are fascinating to see because they’re actually decent, but they are not accurate.”

“For the purposes of prospecting or marketing, it’s fine. But for financial advice, no, absolutely not. For record keeping? Absolutely not.”

RELATED: Generative AI in fintech goes far beyond the ChatBot

AI-Specific Regulation still to come 

While these concerns are becoming ever more prevalent with the increased use of AI, the space still operates under light regulation. While laws, such as data protection rules, may touch the technology, AI-specific laws are yet to come, and it may be a long time before they are implemented. 

“Introducing any powerful new technology raises ethical questions and warnings about potential misuse,” said Moore. “Generative AI is no exception, and we see the need to create standards, governance, and a set of principles which underpin its responsible use.”

“This can support generative AI’s long-term viability by preventing products or solutions from going to market if they are untested or not secure, protecting the consumers and companies that build them… While some anxieties may be excessive, job disruption and the breach of privacy rights are real risks associated with AI that need to be addressed to ensure responsible usage.” 

An example can already be seen in the entertainment industry, where writers, artists, and actors have raised questions about copyright infringement and the future security of their jobs.

Until regulation is set, the industry could face similar vilification to that of the crypto industry, which continues to lose trust as bad actors come to light. 

“It’s not a question of if generative AI will be regulated, it’s a question of when,” said Moore. “As new opportunities emerge, balancing the technology’s revolutionary potential against a formidable set of risks and challenges is essential to its adoption and longevity… It will make the space easier to navigate for good actors while helping to prevent bad actors from engaging in fraudulent activities.”

Until that point, trust in the technology rides on financial companies’ ability to self-regulate, carefully considering the implications of any innovations they implement. 

  • Isabelle Castro Margaroli

    Isabelle is a journalist for Fintech Nexus News and leads the Fintech Coffee Break podcast.

    Isabelle's interest in fintech comes from a yearning to understand society's rapid digitalization and its potential, a topic she has often addressed during her academic pursuits and journalistic career.