
The Fintech Coffee Break – Joe Robinson, Hummingbird

Hi guys, welcome to the Fintech Coffee Break. I’m your host, Isabelle Castro. This week I sat down with Joe Robinson, co-founder and CEO of Hummingbird, to talk about generative AI, ChatGPT, and fighting financial crime. Hummingbird is on a mission to fight financial crime using better communications and technology, and AI is a huge component. We spoke about how advancements in AI are shaping the way companies can detect fraud, but also how the technology is being used by the criminals themselves, requiring out-of-the-box thinking to stop their attempts.

Isabelle Castro 0:40

Hi, Joe, how are you?

Joe Robinson 0:43
I’m doing well. Good morning.

Isabelle Castro 0:45
Good morning. Nice to have you on the show.

Joe Robinson 0:48
Thank you. Thanks for having me.

Isabelle Castro 0:51
You’re welcome. So to begin with, what gets you up in the morning?

Joe Robinson 0:56
Well, coffee and fighting financial crime. I’m Joe, and I’m the CEO of Hummingbird, a company focused on a mission to fight financial crime. I really believe in that, and we’re very mission-driven about what we do.

Isabelle Castro 1:11
Okay, nice, Joe. And what about your journey to Hummingbird, your career journey to it? And what led you to found the company?

Joe Robinson 1:23
I’m first and foremost a tech enthusiast. I got into the tech industry in 2006 at an online video company called Brightcove, and I worked a few different jobs there, started a few different startups after that, and then landed in my first job in finance at Square in 2011. Square taught me a lot about the payments industry and building products, and it also exposed me to some of the financial crime considerations that payments companies have. Following that, I went into the early crypto industry, and at Circle I became the Vice President of Risk and Data Science. This is really where I got my first view of what the anti-fraud world looks like; I started working as an operator in that space. I left that, took a little time away from work, and then decided to start a company focused on fighting financial crime, and Hummingbird was born.

Isabelle Castro 2:29
Nice, you’ve got a prestigious record of past placements. Really great. According to your website, Hummingbird is on a mission to fight financial crime using better communications and technology. AI is at the front of everyone’s mind, and I’m sure you use it to some capacity. So what, in your eyes, has been the most disruptive factor for the financial industry?

Joe Robinson 3:03
Well, Hummingbird is the first company that’s really focused on the investigation experience. We complement other technologies that our customers use to detect anomalous behaviour and things like that. Then, when they put that in front of a professional investigator, we help facilitate that experience: the data collection, and some of the communication those investigators actually do with law enforcement. As AI has become more of a topic this year, with large language models and ChatGPT and others, what we’re seeing is both use cases in the fraud world and a lot of potential in the anti-crime world as well, to help move information more efficiently between investigators and law enforcement. Specifically the communications: writing some parts of the reports, communicating what happened, and potentially better summaries of information about the subjects involved in the crimes. I’m a tech optimist. It’s funny to talk about this topic, because I’m actually very optimistic about most new technology, but my role also requires me to think about how it might be used for crime. So I have a fun kind of bifurcated perspective on it.

Isabelle Castro 4:28
Yeah, it sounds quite fun. Like you have to see both sides before kind of coming up with a solution, right?

Joe Robinson 4:34
Yeah, absolutely. And I think probably the first thing professional investigators will need to think about is what new crime typologies might emerge or be boosted by large language models and AI tech. And then, hopefully, there are companies like ours also working on providing better tooling for them to do their work as well.

Isabelle Castro 4:59
That’s really, really interesting. So how can financial institutions counter bad actors by using AI tools to detect and prevent fraud and money laundering, would you say?

Joe Robinson 5:25
I think of it in two themes: one is detection, and the other is efficiency. We’ve seen the rise of machine learning as a technology applied to detection for a few years now. Machine learning can be used to spot strange behaviour, anomalous transactions, customer profiles that don’t quite look right, patterns of activity, things like that. All of that detection helps protect our credit cards from fraud, for example. On the efficiency side, once financial crime has occurred, particularly well-organised, large-scale financial crimes like money laundering, there’s a system of communication and collaboration that has to happen between the financial industry and law enforcement. Usually the financial industry is the first to see things and investigate further, but they then provide reports and information to law enforcement, who can actually prosecute these crimes. All of that takes time. It takes effort, documentation, writing, research, and analysis. Large language models and other forms of AI are great tools for that: they can make the process a lot less tedious and a lot less manual, and they can help with the flow of information between those two teams.

Isabelle Castro 6:50
Okay. I ask because instant payments are front of mind at the moment, with FedNow being introduced. In that instantaneous capacity, is AI going to be a defining technology for your ability to fight fraud?

Joe Robinson 7:13
Absolutely, it can certainly help. On the detection side, smarter models that are more precise are going to help with instant payment mechanisms, regardless of the channel or backbone that they’re using. A lot of more complex crime only emerges after patterns of activity spanning weeks, months, or even years. That type of crime tends to represent large money flows and sophisticated organisations that are doing a lot to hide. So I think the application of AI and LLMs there is really: how do you summarise information across hundreds of thousands, if not millions, of data points? How do you concisely summarise that information in a way that people can understand what’s happened, understand how it’s related to the purpose of the business or the people involved, and then communicate that information over to law enforcement?

Isabelle Castro 8:16
I’m gonna go into generative AI and ChatGPT in a second, but I just want to stay on that point. There’s this whole issue of explainability. I’ve talked to other people about AI, and they say that this is quite a roadblock to being able to develop AI solutions. What’s your take on this?

Joe Robinson 8:43
Absolutely, particularly for sensitive practice areas like financial compliance. If you think about the purpose of financial compliance, it’s to keep our financial system safe. That means safe from criminals, but also safe for consumers to use and free from discrimination against consumers. So any sort of machine learning or large language model or neural network that’s making decisions about the behaviour of a consumer needs to be explainable. There’s significant risk that a model could make predictions or flag activity based on variables that are discriminatory, or on things that we wouldn’t want to account for in our detection models. So model explainability is really important. It’s very important for both the financial institution and their regulators to understand how they arrived at certain decisions. And that’s mostly a consumer protection for all of us, right? There’s a need to know where it came from and why.

Isabelle Castro 9:54
I can imagine this gets really, really difficult when you go into more complicated things, as is the case with generative AI. I mean, I’m new to this.

Joe Robinson 10:06
Absolutely. And, big disclaimer, I’m not a data scientist who has personally developed a large language model or generative AI before. But I believe there are ways to do it. One of the things Hummingbird does is facilitate workflows, and we believe that different tools can be used as part of those workflows. If there’s always what we call a human in the loop, seeing what the model is doing and understanding it at every step of the way, it reduces the potential for a model to move through a large-scale decision with no supervision and create a discriminatory outcome or something like that. So keeping that human in the loop, and keeping an expert reviewing things, is one of the ways we believe these tools can be used appropriately.

Isabelle Castro 11:00
That seems very sensible, so they don’t get lost a few steps down the line, and they know exactly what’s going on. Right?

Joe Robinson 11:10
Yeah, absolutely. I mean, if you think about some of the emergent models today, they’re trained on billions, if not trillions, of different data points and documents. So it’s really quite difficult. In the words of Sundar Pichai, and this is not a direct quote, it’s really quite difficult to know exactly how a model arrived at certain generative AI responses. But we can always have a human reviewing what’s coming out, providing a quality assurance check against that information, and then making an informed decision about what to do with a particular case. So we’re very optimistic about that as a path for investigators to use these tools.

Isabelle Castro 12:00
And I mean, within generative AI there’s a lot of things, but one major focus has been ChatGPT. Is this focus just a phase? Is it a fad? Or is ChatGPT gonna make a significant impact in financial services?

Joe Robinson 12:22
It’s gonna make a very significant impact. Yeah. Every time I learn about something new, which is almost weekly, the things that people are doing with generative AI are mind-blowing. It might be interesting to speak about the crime use cases, where criminals are using these things. Do you want to start there?

Isabelle Castro 12:44
Yeah, definitely. That was one of my next questions: how do they use it? Because I’ve seen notes that financial criminals do use ChatGPT, but I really don’t understand how. Please explain.

Joe Robinson 13:00
Yeah. Well, a lot of ways, and one of the things about criminals is they’re very, very creative and very sophisticated. A caveat before I go into this: again, I’m a technology optimist, so I think it’s good to be aware of these things, but I’m more focused on optimistic use cases, and I believe these tools will bring a lot of good into the world. One of the things that I’m most worried about is the combination of generative text models and generative speech models with deepfakes: the ability to recreate somebody’s likeness and image in video format, and have them speaking and saying something that they never actually said. We’re seeing the beginning of this in legitimate use cases. In fact, there are TV shows, movies, and things like that coming out now where some of the characters are actually generated as part of the CGI, basically. There’s actually a new Black Mirror episode on this as well, which is quite interesting. But think about that used by a malicious government or some sort of attacker. You could generate volumes and volumes of fake video and information using real people’s likenesses, and essentially flood the market with data and information that isn’t real, making it very difficult for people to distinguish what’s real from what’s fake. So I think we need some emergent models around how to give provenance to video content and media, so that we know what’s real and we know what the human actually said. That’s one way. And in finance, that means impersonation of customers, right? Impersonation of me or you, opening accounts in our names, and using our voice or our likeness to get around some of the security checks and the know-your-customer requirements that financial institutions have.

Isabelle Castro 15:16
Okay, I’m interested now. I know you don’t currently target this, but how would you go up against these kinds of deepfakes, and even more sophistication in generative AI from criminals? How would you even start to go against that?

Joe Robinson 15:42
I actually think, for the use case we’re describing, that you can’t. Detection and labelling of deepfakes might actually be the wrong path, and the correct path might be ways to stamp, legitimise, or verify the source of real information. Adding that label of provenance to real videos and real communications might be more common in the future. So if you’re not seeing that label, you’re aware that the content might not be real, might be a deepfake; and if you are, then you can trust that it’s from the source. Think of verified checkmarks and things like that, although I know there’s some baggage around that now.

Isabelle Castro 16:35
Yeah, I know. Carry on, carry on.

Joe Robinson 16:39
No, I think it’s gonna be a huge problem, though. The issue with generative models is that they generate, right? So you have to think about these things in massive volumes, because they will generate volumes and volumes and volumes of text and video and audio.

Isabelle Castro 16:57
Yeah, it does sound like a very difficult threat to go up against. On a general scale, would you, at Hummingbird, implement generative AI? Are you planning to? And if so, where would be your first target point?

Joe Robinson 17:22
Yes, absolutely. We think there’s a lot of great potential in that efficiency gain. One part of investigating financial crime is actually writing up the behaviour, and that takes time; it’s quite tedious. We know from our internal metrics that when somebody’s doing one of these reports, they’re spending about 40% of their time just writing. That could all be reduced dramatically, the result of which would be faster communication between the financial industry and law enforcement. Now, that’s all got to happen with appropriate privacy and data security considerations, which makes the application of generative AI more difficult in finance than it is in other industries. But I believe we’ll get there. There are a lot of great companies already working on that, including both of the main infrastructure providers, Amazon and Microsoft. So people are aware of the issue and working towards it. But it needs to happen, and it needs to be very secure and very privacy-preserving before we can use that kind of thing.

Isabelle Castro 18:31
Okay, I guess fraudsters don’t have to think about this kind of thing. So they kind of have a head start.

Joe Robinson 18:38
Yeah. Another use for fraud that’s interesting to think about: very common fraud scams are things like elder exploitation, or romance scams on dating apps. And this is nothing new; for many years, fraudsters have been able to buy your private information and personally identifying information, in spreadsheets of thousands or tens of thousands of people, on the dark web. Now imagine generative AI, trained on that personally identifiable information, creating highly personalised attacks and scams at scale: running thousands or tens of thousands of scams in parallel, versus what they’re doing today, which is largely manual, typing things out themselves and trying to be as personal as possible, but often failing and sounding very weird or scammy. So the sophistication of the attacks actually goes up quite a bit through the use of some of these tools.

Isabelle Castro 19:44
I mean, the fraud economy, do you call it a fraud economy? It’s huge, right? I was talking to someone the other day, and they said that it’s the largest economy after America and China. Is that what you’re up against?

Joe Robinson 20:02
I haven’t heard that stat. The United Nations published a relatively well-regarded stat in 2016 that the world sees around $1.5 to $2 trillion in illicit flows of funds, so fraud, money laundering, and things like that, and that’s trillion per year. That represents somewhere between two and five percent of overall real GDP. So yeah, it’s a big industry, sadly.

Isabelle Castro 20:34
Yeah. Well, it’s good that there are companies like Hummingbird going up against it, because when I heard that, I thought, well, we’ve got a very big mountain to climb.

Joe Robinson 20:49
We tell our new employees in their onboarding that working at Hummingbird will change their worldview when they just go outside and walk around. Financial crime, in its various forms, is literally all around us; it’s very much embedded in every part of our society.

Isabelle Castro 21:10
Okay, no, I’m good. I’m gonna go out of the house today, and I’m gonna be very scared. Okay, so some people are concerned with ChatGPT’s development, especially some industry leaders. I mean, they were getting behind a petition to stop its development not that long ago. What’s your opinion on this? I know you said that you’re an optimist, but are their concerns unfounded?

Joe Robinson 21:38
I think they’re well-founded, and I think their voice provides a balance, right? Because we don’t want people pursuing the technology for the technology’s sake, without thinking about the ethics and implications. But I don’t think we can put this one back in the box. At this point, the models are out there, they’re open source, and there are probably hundreds of thousands of different projects now based on some of the foundational models that came out earlier this year. So now I think we have to assume that it’s going to be part of society moving forward, and think about the appropriate ways to use these things, to regulate them, to guide development, and to support people that might be impacted by them. First and foremost, writers and people whose likeness is used. I was reading the news this morning about the Writers Guild strike in Hollywood, and I guess the actors’ association is also considering a strike over deepfakes and things like that. They’re really just on the frontlines. I think many different industries will probably be changed by this, and many different workers will be impacted, so we need to have that conversation as well. I think the people calling for more regulation, including Sam Altman, are right: we need to be looking at this and thinking about thoughtful ways to apply regulation to this technology.

Isabelle Castro 23:13
No, I agree. I mean, AI isn’t the first emerging tech to hit the financial industry in this way. How can financial institutions learn from past instances of emerging tech, like mobile and crypto? How can they use that to prepare for this AI moment?

Joe Robinson 23:38
It’s good to be paying attention. It’s good for traditional institutions, which are large and foundational to society, to be looking at these things, experimenting with them, running safe experiments, and thinking about how they might impact their customer base, their own operations, things like that. It’s good for regulators to be doing the same, and globally, we see regulators beginning to experiment with these things and think about the implications. So it’s just a matter of taking it seriously. It’s not happening tomorrow or in the future; it’s happening right now, right in front of us. And I think the financial industry needs to be a part of the conversation, as well as regulators.

Isabelle Castro 24:24
Okay, cool. That’s a good bit of advice. Now I’m gonna go to your curveball question: if you could live in any decade or era, what would it be, and why?

Joe Robinson 24:38
This might be a sappy answer: I would live now. I think we’re living in a golden age of humanity. I know that there are many things to be stressed about, but I think, statistically, on most measures, we’re better off than any humans in the history of the world before us. And I’m excited for everything that we’re doing at Hummingbird, and for what folks are doing in innovation and tech. I just think it’s a great time to be alive. So there’s my campy answer to that.

Isabelle Castro 25:15
No, I like that. I like that. I agree, it’s a good time to be alive. It’s a little bit difficult as well, but every era has its difficulties. How can people find you online, follow you, or maybe get in contact?

Joe Robinson 25:30
Well, you can reach me through Hummingbird. My email address is Joe at, I won’t say the rest. And LinkedIn is a great way to get in touch as well.

Isabelle Castro 25:42
okay, great. Thank you so much. You’ve been a really, really good guest. I’ve really enjoyed having you on the show.

Joe Robinson 25:49
Thank you so much. Appreciate it.

Isabelle Castro 25:51
You too. Have a good rest of your day. As always, you can reach out and chat with me on my personal LinkedIn or Twitter @IZYcastrowrites. For access to great daily content, check out Fintech Nexus on LinkedIn, Twitter, Facebook, or Instagram. You can also sign up for our daily newsletter, bringing news straight to your inbox. For more fintech podcast fun, check out the website, where you can find more fascinating conversations hosted by Peter Renton and Todd Anderson. That’s it from me. Until next time, enjoy your downtime.


  • Isabelle Castro Margaroli

    Isabelle is a journalist for Fintech Nexus News and leads the Fintech Coffee Break podcast.

    Isabelle's interest in fintech comes from a yearning to understand society's rapid digitalization and its potential, a topic she has often addressed during her academic pursuits and journalistic career.