🎙 F-Squared Podcast #7 – Kamal Budhabhatti on Voice, AI, and the Future of Digital Banking
From Craft Silicon to Little: How One Engineer is Rewiring Africa’s Fintech Infrastructure
Listen on Spotify or Apple Podcasts
Make sure to Subscribe to the YouTube channel to get notifications when a new episode is out.
🚨🚨 New Sponsor Kit Just Released 🚨🚨
Want to reach Africa’s top fintech decision-makers? We’ve updated our offering with premium content, executive visibility, and strategic advisory options.
Reach out at samora@frontierfintech.io or click below to explore how we can work together.
Why I Wanted to Have This Conversation
I’ve written before about Kamal Budhabhatti’s journey: how a formerly deported factory worker became one of the most important technical architects in Africa’s digital banking history. While others entered through the boardroom, Kamal came in through the back door of the server room. From building Bankers Realm for Equity Bank to scaling Craft Silicon into a core banking powerhouse serving over 150 million users, Kamal didn’t just witness the rise of Africa’s fintech wave; he wrote much of the code that enabled it. And when banks began in-sourcing their tech, he preemptively launched Little Cab, both as a hedge and as a laboratory for product experimentation. That instinct to always think two steps ahead, technically, commercially, and strategically, is what makes him a rare voice worth listening to.
This conversation picks up where the last deep dive left off. That piece explored Craft Silicon’s evolving AI strategy: how Kamal and his team view AI not as a plugin but as a foundational platform shift with existential implications for digital banking. In this episode, I wanted to go deeper. What’s it like to build an LLM strategy inside a core banking provider? What does it mean to experiment with voice, agentic interfaces, and intent-based APIs in a fintech context? What are the hard constraints (data, compute, talent) that Africa needs to solve for if it’s to avoid becoming a mere reseller of foreign AI products? This isn’t just a story about a new feature or a shiny tech demo; it’s a sober, technically grounded discussion with someone who’s built serious infrastructure and knows exactly what’s at stake. Kamal is not here to hype AI. He’s here to figure out how we make it useful, sustainable, and ours.
In This Episode, You Will Hear
The moment Kamal realised traditional machine learning locked bias into credit models and why a large language model changed the game;
How Little is piloting voice commands that let riders say “Get me a cab to Industrial Area” and see a car arrive;
Intent plus Model Context Protocol explained in plain English and why it matters for African APIs;
The hidden cost of global infrastructure: 37% of Little’s revenue flows to Google Maps and why local alternatives are overdue;
Why Africa’s AI talent gap is a bigger barrier than regulation and how Kamal now hires with an AI-first principle;
A candid take on entry-level jobs, re-skilling, and why frictionless user experience is his single KPI for any AI rollout;
🔑 Key Lessons for Fintech Builders
Technology and Product
Large language models outperform traditional ML by removing bias and surfacing new, unexpected insights;
Voice interfaces lower the barrier to entry by bypassing literacy and UI complexity;
Intent-based APIs are faster, safer, and more scalable than ad hoc middleware integrations;
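The voice and intent ideas above can be sketched minimally in code. This is a hypothetical illustration, not Little’s or Craft Silicon’s actual implementation: every name here (`Tool`, `resolve_intent`, `book_ride`) is invented, and the keyword matching stands in for the language model that, in an MCP-style setup, would interpret the utterance against the exposed tool descriptions and choose a call.

```python
# Hypothetical sketch of an intent-based API: instead of the client knowing
# which endpoint to hit, the platform exposes described capabilities and a
# model maps a natural-language request onto one of them.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    """A capability the platform exposes, described so a model can choose it."""
    name: str
    description: str
    handler: Callable[..., str]


def book_ride(pickup: str, destination: str) -> str:
    # In a real system this would call the ride-hailing backend.
    return f"Ride booked from {pickup} to {destination}"


TOOLS = {
    "book_ride": Tool(
        name="book_ride",
        description="Book a cab given a pickup point and a destination.",
        handler=book_ride,
    ),
}


def resolve_intent(utterance: str) -> str:
    """Stand-in for the language model: map a spoken request to a tool call.

    A production system would send the utterance plus the tool descriptions
    to an LLM and execute whichever call it selects.
    """
    if "cab" in utterance.lower() and " to " in utterance.lower():
        destination = utterance.rsplit(" to ", 1)[1].rstrip(".")
        return TOOLS["book_ride"].handler(
            pickup="current location", destination=destination
        )
    return "Sorry, I did not understand that request."


print(resolve_intent("Get me a cab to Industrial Area"))
```

The design point is that new capabilities are added by registering another described tool, not by teaching every client a new endpoint, which is what makes this pattern more scalable than ad hoc middleware.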
Platform Strategy
Relying on global infrastructure like Google Maps can drain margins; local stacks are a strategic necessity;
Kamal’s failed attempt to get banks to collaborate on a shared app reveals how institutional inertia kills even well-loved ideas;
Products that become verbs (like Fuliza or “Take a Little”) show the value of building deep user habits;
There’s a great opportunity for a financial AI that understands your entire context, but the struggle will be connecting it into real financial plumbing: a case for open banking;
Talent and Culture
AI-first hiring is now a practical filter: if a human can't outperform AI, don’t make the hire;
Platform builders need hybrid fluency: understanding compliance, operations, UX, and engineering simultaneously;
🧠 Strategic Takeaways
For Operators
Think beyond features: build primitives others can plug into. If collaboration stalls, build your own wedge product and scale from there.
For Investors
Forget vanity metrics. Fintechs controlling data, infrastructure, and distribution are positioned for durable advantage.
For Policymakers
Local infrastructure matters. Africa’s digital future can’t be outsourced; invest in the rails, not just the apps.
🧾 If You Are Interested In
Why fintech collaboration often fails and what to build instead;
The real-world economics of AI deployment in African digital banking;
How voice, agentic UX, and intent-based systems are reshaping fintech platforms;
How to build for scale without becoming a reseller for someone else's AI;
→ then this episode is a must-listen.
Transcript
Samora Kariuki: Today on the F Squared podcast, I sit down with Kamal Budhabhatti, the CEO of Little and one of Africa's most prolific tech entrepreneurs. Over six months ago, we explored the early signals of AI's impact. But in AI years, that might as well be a decade. In this episode, we zoom in on what's changed, how Kamal is experimenting with AI at both Little and Craft Silicon, and what banks, fintechs, and startups should really know about AI today. From core infrastructure to customer experience, this is a grounded local take on one of the biggest technological shifts of our time. So Kamal, welcome.
Kamal Budhabhatti: Thank you.
Samora Kariuki: We did our article around six months ago, and it was quite popular. I got so much inbound. I shared some with you, of course, but a lot I didn't share. The whole thing was, I think, it was refreshing to see a number of things. One, a lot of people found it refreshing to see that Little Cab actually has almost 5 million users, and it's a super app, a homegrown super app. So a lot of people meet you and speak about business and your story, and I think that story has been told. But I really value your insights around technology and where everything is going. So, welcome.
Kamal Budhabhatti: Thank you very much. Thank you.
Samora Kariuki: So again, it was six months ago, and we talked about AI, and in AI, six months is a lifetime. Things are moving so fast. You had explained some of the decisions you're making at the time. But one thing I found interesting in my conversations with a lot of banks and bankers is, when you speak about AI, the first thing that bankers tell you is that we've been using AI for a long time. But when you drill deeper, it's machine learning. Whereas when we spoke, you mentioned that you were doing credit models, you've built a lot of data using machine learning, but then now when you use this LLM kind of framework, the results were just extremely different. Could you just walk us through specifically what that meant, what was happening, and what was the shift in the results from your machine learning models to your LLM?
Kamal Budhabhatti: So what we felt is when you do machine learning on a certain set of data, you actually enter with certain biases because the way you have designed a machine learning model is you are the one who is designing. So obviously, when you are designing as a company, as an individual, you are already starting with a biased approach because you have a perception in the mind that this is what you think.
Samora Kariuki: Exactly.
Kamal Budhabhatti: With a large language model, I think that's not the case, because first of all, you are not touching the base model. So you already have a base model which, to a certain extent, is considered to be not biased, especially to your business. It's a general model. So when you start anything with a general model, there is nothing biased that comes in, and so the results, we feel, are more interesting, are more factual than a biased decision that you might have. On machine learning internally, we also find it many times; as programmers, we know that this is what it should be. So when you start getting the results, you keep pushing the results to be biased towards your internal perceptions. So we felt large language models, because of the models available, are easier to deal with now than having your own data and using a machine learning algorithm to train from nowhere.
Samora Kariuki: Exactly. And I think this was the experience at Stripe, a massive payments company globally. They built a foundation model for the entire business, and they found that there was this card fraud, card-testing fraud. When they moved to their LLM-based approach, the ability to detect fraud moved from fifty-something percent to ninety-something percent. It was a large shift. And I guess maybe the underlying thing there is that they, as payments people, thought that this is how this fraud should be detected and this is what you should check for. Whereas when they moved to a general model, it starts looking for patterns that most people would ignore. Did you find a specific data point, for instance, when you were doing credit models, where you realised we were missing this thing entirely?
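Kamal’s point about designer bias can be made concrete with a toy sketch. Everything here is invented for illustration (the applicant record, the feature list, the weights); the structural point is that a hand-built model can only weigh the features its designer chose, while a general model that receives the whole record as context can pick up on signals the designer never thought to encode.

```python
# Toy illustration of designer bias in hand-engineered credit scoring.
# All data, features, and weights are made up for this sketch.

# Raw application record: the designer decided only income and arrears matter,
# so the repayment note below never reaches the model.
applicant = {
    "income": 45_000,
    "arrears_count": 0,
    "repayment_notes": "pays early every month via mobile money",
}

ENGINEERED_FEATURES = ["income", "arrears_count"]  # the designer's prior


def engineered_score(record: dict) -> float:
    """Traditional approach: score from a fixed, hand-picked feature list."""
    features = {k: record[k] for k in ENGINEERED_FEATURES}
    # Toy linear model; the weights also encode the designer's assumptions.
    return 0.00001 * features["income"] - 0.2 * features["arrears_count"]


# A general model instead receives the *whole* record as context, so signals
# outside the engineered feature list can still influence the result.
full_context = "\n".join(f"{k}: {v}" for k, v in applicant.items())

print(engineered_score(applicant))        # only two fields ever mattered
print("repayment_notes" in full_context)  # the LLM prompt keeps the rest
```

The contrast is deliberately crude: the engineered pipeline discards information before scoring, which is exactly where the bias Kamal describes gets locked in.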