COLUMN – OLAF RANSOME | 95% of AI PoCs fail, according to MIT research. Of the remaining 5%, the majority fail at the project stage. I asked a project manager what is and isn’t working with AI for their clients.
Why even bother trying to be part of the 1% or so of AI projects that are successfully implemented?
I would bet that for most readers AI is a topic about which, at the very least, you are curious. I’d also bet that plenty of readers are like me: I see it, I see what it claims it can do, but nothing has yet triggered me to dive deeper.
Why bother? AI is driving the Fourth Industrial Revolution. Its potential seems huge, and it may ultimately be even bigger than we can imagine. In whatever you are doing right now, it might just be that there is no place for AI. Yet.
AI is a topic for several people in my network. My friend Mark Bolton is a programme and project manager who is knee-deep in all things AI with a diverse set of clients – an ideal person to ask about what works and what doesn’t.
How to think about AI
Mark started by offering me a view on where AI is at right now: “It’s more like a human than like a machine. It’s like having an intern. Super enthusiastic and willing to work all hours but doesn’t really know enough to be immediately useful.”
Then he reminded me of what we typically expect from the systems we use. We expect them to be “deterministic”: we have rules and expect the same outcome every time. I am Swiss, and as the Bankers’ Plumber I love a good process. AI models are different; they are “probabilistic”, i.e. they construct a likely output, and they are prone to hallucinate. They want to provide an answer and may simply misinterpret patterns.
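The distinction can be made concrete with a toy Python sketch. This is not any real AI model – the “probabilistic” function here just adds random noise to stand in for a model that samples a likely answer rather than applying a fixed rule:

```python
import random

def deterministic_total(amounts):
    """A rules-based system: same input, same output, every single time."""
    return round(sum(amounts), 2)

def probabilistic_total(amounts, seed=None):
    """A crude stand-in for a generative model: it produces a *likely*
    answer, but sampling noise means the output can vary run to run."""
    rng = random.Random(seed)
    noise = rng.gauss(0, 0.5)  # model uncertainty, simulated as Gaussian noise
    return round(sum(amounts) + noise, 2)

payments = [100.00, 250.50, 75.25]

# The rules-based answer never changes...
assert deterministic_total(payments) == deterministic_total(payments)
# ...while the model-style answer is only *close* to right, and may
# differ between unseeded calls.
```

The point of the sketch: a deterministic process can be signed off once; a probabilistic one must be checked every time it runs.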
Mark’s next input was on what is easy to review and digest versus what isn’t. He cited the examples of tax filings and visa applications. In both of these use cases there is always a mix of structured and unstructured data. As an industry, financial services is really good with the former: many readers will have had the dubious “pleasure” of knowing exactly which values can appear in which field of a SWIFT, aka ISO, message. I have to smile at memories of debating how to put certain codes in field 72 of an MT202 message to persuade a correspondent bank to make a timed payment to CLS no later than a certain time. Now imagine you are filing taxes in the US for a client who has bank and broker statements from the US, the UK and Azerbaijan. The first two will likely have been seen before, and you would have rules for interpreting them. The last one, from Azerbaijan, will be less familiar and maybe unstructured. This is where there is both potential and peril.
AI processes are normally private, but they will often augment the private data with things that can be found on the internet – often quite successfully, Mark told me. He offered a real-life example: for an immigration filing in Singapore for a family moving from Canada, he saw AI tools do 95% of the work accurately. The art is to understand that limitation, then have the discipline to check the 95% and do the further research to get to 100%.
Long ago in ancient Greece, Plato warned that the introduction of writing would weaken our memories. There is a known phenomenon called the “Dunning–Kruger effect”, which describes how we may develop “a false sense of mastery over the automated tools and techniques being employed”.
So, in applying AI, success is a function of accepting that 95% automation is pretty good and then having the discipline to check the outputs and complete the last 5%.
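One common way to operationalise that discipline is a human-in-the-loop triage: accept the model’s output where its own confidence is high, and queue the rest for a person. The sketch below is a minimal, hypothetical illustration of that pattern – the field names, confidence scores and the 0.9 threshold are all invented for the example:

```python
def triage(extracted_fields, threshold=0.9):
    """Split model output into auto-accepted items and a human review queue.

    extracted_fields: list of (field_name, value, confidence) tuples,
    where confidence is the model's own score in [0, 1].
    """
    accepted, needs_review = [], []
    for name, value, conf in extracted_fields:
        target = accepted if conf >= threshold else needs_review
        target.append((name, value))
    return accepted, needs_review

# Hypothetical extraction from a client's bank statements:
fields = [
    ("account_holder", "J. Smith", 0.98),
    ("bank_country", "Azerbaijan", 0.97),
    ("interest_income", "1,234.56", 0.62),  # unfamiliar statement format
]
done, queue = triage(fields)
# The low-confidence field lands in the human review queue.
```

The design choice here mirrors Mark’s point: the automation is not trusted to finish the job, only to shrink the pile a person has to work through.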
About 10 years ago, someone else in my network was in a very senior position at Google, looking after large swathes of research, including AI. Even then, he was hiring clever AI folks who were being paid multiple millions for a 50% engagement. Once inside the gates, he found they had to spend the majority of their time scrubbing data to make it usable.
Let’s assume that a typical PoC is something a department wants to do. Mark’s findings build on my Google friend’s experience. PoCs can be made to work, establishing a “ground truth”. Then comes the assessment of what would be needed to take the next step. A common theme in many failure cases is that the data needed is outside the department and/or not in good shape.
This gets us to a crucial point about what might work and what likely won’t. Mark’s recommendation is that before doing a PoC, you should understand what data sources you are going to need. In the case of tax filings, it is relatively easy to get sample data from standard documents. At the other end of the spectrum, your idea for an online credit-approval process might well need risk data. I have never yet found a bank with a single, centralised, enterprise-wide risk data system. That is a lot of data to source and scrub. Now, just maybe your firm has a data lake and everything is super easy to access. If so, lucky you. Simple mantra: homework comes before the PoC.
Start in the middle
This brings us to what is most likely the most important ingredient for successful AI deployment: central rather than decentralised local efforts are key. With that comes the role of the Chief Data Officer (CDO) and the Chief AI Officer. It might be one role, it might be two, albeit two who work very closely together. Done right, some central control will avoid the disappointment of doing PoCs only to find that the data is not available, not usable, or both. One benefit of centralisation here is that if data is poor or unavailable, the evidence is plainly visible. Maybe that gives the CDO the leverage to say: “If we want to reap the benefits of AI, then we have to get the foundations right, which means we need to execute on data strategy.”
I think I will end here with something that Steve Jobs said, well nearly: “Stay hungry, stay foolish and stay curious”.
Referring to himself as The Bankers’ Plumber, Olaf Ransome is founder of 3C Advisory LLC – drawing on decades of senior operational experience from large banks. To connect, find his LinkedIn page here.