If you would like to earn CPE credit for listening to the show, visit earmarkcpe.com slash FPA. Download the app, take a short quiz, and get your CPE certificate. Finally, if you enjoy listening to FP&A Today, please go to your podcast platform of choice, click the subscribe button, and leave a rating and review of the show. And now, on to the show. From Data Rails, this is FP&A Today. ♪
Welcome everyone to this special live edition of FP&A Today brought to you by Data Rails as part of FP&A Con. I'm Glenn Hopper, host of the FP&A Today podcast and author of AI Mastery for Finance Professionals. Today's session is titled Data Analytics and AI for FP&A Teams. This will be a fast-paced, practical conversation focused on real use cases, hands-on tools, and how finance teams can thrive in the AI era.
This session is being recorded and will be released as a special episode of the podcast. And I'm joined today by three outstanding guests. First, Nathan Bell, co-founder and managing partner at Vi Consulting. Nathan works with CFOs to build integrated finance, technology, and data strategies. And he's led enterprise transformations at Gartner, Embark, and Native American Bank.
He holds degrees from Harvard, DePaul, and the Graduate School of Banking at Colorado. Next, Anna Tiomina, fractional CFO and founder of Blend to Balance. She has held executive finance roles, including global CFO at SoftTech and cluster CFO at Sandoz. Today, she helps small and mid-sized businesses apply AI to everyday finance operations and writes the Balanced AI Insights newsletter. And finally, Ana Yamashita, manager of solutions consulting at DataRails.
Ana has worked across SaaS and finance previously at Vena and now at Data Rails, where she helps FP&A teams implement modern performance management solutions. She has a background in psychology and a passion for data storytelling, enablement, and financial modeling. I love the psychology background. I feel like in this era of AI, psychology probably helps, you know, you know how to talk to the bots.
So, well, I guess let's go ahead and dive in here. I want to start off with how AI is being used in FP&A today. So maybe, Nathan, you want to start us off and tell us what are some of the ways you're seeing AI applied in real FP&A workflows today? Yeah, thanks, Glenn. And I have a couple of examples here. One is just my own personal example. When I was CFO of Digital Trends Media Group,
And we had, and I joked at the time, it was really an FP&R team, not really doing much analysis, right? We were lucky to ship the financials out on time and maybe do a mini variance analysis, but yeah.
Once we were able to really get the data we needed and do some automation, and you hear automation intermingled with AI a lot these days. I think there's a lot of confusion around automation and RPA, which has been around a long time. But also just how that can be super useful for FP&A teams, and what we were doing with it, from just trying to get the reports out, to variance analysis, to ultimately doing predictive analytics, which to me is the gold standard of where most people want to be these days.
And then once you're able to make that move, then it's true prescriptive analytics, which is where I get the most excited about. But for us, it was just getting
a common lingo together, you know, with multiple versions of truth. We had that situation at Digital Trends Media Group where my team would put the reports together, we'd go to the quarterly offsite, I would do, you know, the board pack and presentation. I'd go first and I'd say, "Here's the numbers, here's how this looks." And then you'd have the head of sales go after me, or head of marketing or HR or operations.
And they would all say, I see Nathan's numbers, but mine's slightly different. Here's what my forecast looks like. What Nathan doesn't have and his team doesn't have is this information about a certain client or it's not in the CRM, it's just in our head or this is how we look at it or how we calculate this metric in our group. And those multiple versions of truth was something that for FP&A is real. I'm sure most people are probably nodding their head right now and they're like, yes, I've been there. We think our numbers are
you know, should be what goes in the board packet, but everybody else would have a variation of that.
And I think AI can really help solve that. It's going to be near impossible to get to one single version of truth. When I worked at Gartner, we used to joke that it was like chasing Bigfoot: you hear about it, but nobody's ever actually seen one, right? Like a company that has a single version of truth with metrics. But if you can get to what we call sufficient truth, where there is a common data dictionary and glossary, where everybody is operating from the same metrics and KPI standpoint, then you can start automating things and start shipping out
normal variance-type analysis and freeing up that time to be a true business finance partner, which I think FP&A really wants to be. Well said. And so we have two Anas on the panel. So we're going to go Ana T and Ana Y. So I guess Ana T, we'll go to you next. What are some common misconceptions about AI that come up when you start working with finance teams?
First of all, thanks for having me here. And second, the biggest misconception that I hear is that AI is going to take our jobs. This is the thing that comes up every time we talk about AI or when I show the AI capabilities. And this is not what is actually happening. So AI, at least from the point that I see today, is not going to take our jobs. It is taking the part of
our jobs that is manual, repetitive, and involves a lot of data processing. Yes, it maybe makes our jobs more meaningful.
Yes, research actually shows that people are starting to work more efficiently, but they are not working less. This is important, right? So we are spending our time on something more valuable, becoming better partners to our teams. So I don't think AI will change our jobs. I think that the standards will change and that we will need AI to meet these new standards.
Isn't that amazing? Every time efficiency goes up and productivity goes up, it's never like we get more work-life balance. We just get more and more work for the company. Like you, I do a lot of speaking around AI and I do a lot of live demos, which it's always scary if you're using generative AI to be trying to do financial analysis live, but I'm almost...
Like it used to really stress me out, but these days I'm almost happy when the analysis does something wrong, because everybody kind of breathes a sigh of relief. Maybe I don't want them to be too comfortable, because there are, you know, a lot of automations that are coming. But at the same time, we use this stuff every day, and nothing right now suggests we're ready to pull the human out of the loop. So. Which is also good, which means that you will not lose your job. Yep. Yep. Yeah.
So, Ana Y, in your work with Data Rails clients... And by the way, plug, I guess it's self-promotion or self-interest, but I love what Data Rails is doing with AI in the platform right now. So if anybody hasn't seen that, get a demo of the AI that's in the Data Rails platform. Super cool stuff. But back to my question before I interrupted myself.
In your work with Data Rails clients, how are you seeing smaller and mid-sized teams starting to use AI? What does that early AI adoption look like to you? Yeah. I think that to Ana T's point, a lot of the prospects that come to us are not saying that they need AI because they want to do something to give themselves free time. It's just that they already have all of these different things that now are competing interests of what they can get done on a daily basis.
I think that AI is really great to help with that point. I think one way that a lot of people use it, which is honestly very low hanging fruit, is to answer questions from different stakeholders. If someone's coming in saying, "What did I spend last month?"
rather than some of the prospects having to go in, pull a report, having to send that email out, which maybe takes 15 minutes of time, but it's a lot of switching between different tasks. Then it's interrupting something, it's interrupting a thought process. Now, these end-users can just ask Data Rails itself. What did I spend last month? What was I supposed to spend last month? Those are really easy surefire ways that people get use out of different AI tools really quickly.
I think that that's great for the end user, but in terms of maybe more in the finance department, everyone is very, very excited by our storyboarding. Using AI to actually create some of our board decks immediately to enable you to have a really good starting point when it comes to the commentary that you want to show on these board decks.
And my favorite part of that is that everything can be overwritten. So just like you were saying, Glenn, sometimes it's like almost a sigh of relief to see like, okay, like it did something wrong. But being able to go in and actually override or kind of add in your own commentary as well, you're really using the best of both worlds. You're getting a lot of time back because that starting point is being made for you. But then also you're not losing any accuracy because you still need that human intervention to take it from like 95 to 100.
Yeah, and I work on AI solutions for clients all the time. And a lot of times SMBs can be left out of this because they don't have the budgets. And a lot of times they don't have the data to build these bespoke projects. So something like Data Rails, having it built in, and I'm seeing more and more now, even like in Snowflake, there's the Snowflake Cortex where you can interact with your data.
And then there's just more and more tools. And I think for a lot of businesses, their first experience with AI where it's truly integrated in the system, not just doing stuff in ChatGPT at the employee level, is going to be when the software companies start doing it. And I know a lot of them, you know, it's going to be built into your ERPs, your CRMs, your billing systems and all that. And it's just a matter of time. But to our previous points on the human in the loop,
nobody's ready to turn it over and say, oh, we're going to let the bot put together our board package. But if it can get you 80% there and you become more of an editor than a doer, that's super exciting. Yeah, this is a really good topic and I'd love to chime in here.
I was at Gartner in the finance practice and the data analytics team. And I got to see and advise CFOs from all size companies. One of the most common things that they had me do was review their board pack, review their PowerPoint presentations and try to figure out how much do I put in there? What's the narrative? What should I talk about? You know, they're taking snapshots and embedding Excel spreadsheets into PowerPoints and trying to drop the whole P&L in there and, you know, all that type of stuff. And then
What would happen is they would try to get in front of me like, "I think I know all the questions that are gonna come my way. "I'm gonna send out this board pack "and hope somebody has questions ahead of the meeting "because I don't wanna get ambushed." But inevitably, there's gonna be a ton of questions real time in those board meetings, exec offsites, exec meetings. They're not gonna be able to answer these one-offs. But now what I'm seeing is folks able to real time in that meeting, give me five minutes,
and then just ask, right? You're talking about the Data Rails solution or embedded NLQ/NLP-type solutions. And all of a sudden you do have that answer, because you want to keep engaging in those meetings. You don't want to disappear because you're stuck behind a spreadsheet trying to answer a question from 20 minutes ago. So now the CFO can still be present, right? And still get that answer. So then the conversation keeps moving. Yeah, that's great. And we, so we have a couple more minutes before I want to get to the next segment. So we have a couple of
questions here. And I want to... So Dinesh had a question, does AI learn from what we override and change the comments in the future? And I don't want to keep everything just limited to Data Rails here. Obviously, they're the sponsor, but we're not trying to just do a Data Rails commercial here. So I would say that the way...
It's going to depend on the interface. So, you know, ChatGPT has a memory. If you were coming out of the system and using just ChatGPT, it's got memory, and you can sort of build things into the prompt by directing that memory over time. AI in general, it just depends on the application. So if you're talking about changing the model's response because of the way it was trained, that's not going to happen. But when you give feedback, you know, there is a feedback loop in the training of the models, but that all happens before we're using it here with generative AI. So that feedback could happen more in a memory that changes the prompts going forward. So it's kind of a vague answer on my part, if anybody wants to add any color to it, but that's
sort of a blanket statement; it's tough to respond to that. In general, I mean, it's no, but the prompt can be changed a bit in the memory of the system, if that makes sense. Ana Y, I'll let you address the question on AI prompts in Data Rails. Are they secure? Yes.
So in terms of within Data Rails, every individual tenant is kind of its own little ecosystem, right? It's completely closed. And we do that for security reasons. We want to make sure that you guys have your data completely secured. We have all of the security certifications that we would want and need. And so...
In terms of the prompts that happen in Data Rails, everything is kind of secured in your own tenant, which is great from a security perspective. But I would say that that closed idea means that sometimes we have to think about, when we want things like market data, how are we going to get it into that system? Which is always just kind of another conversation around how we want to get that done. Okay.
Okay, let's get to some overview of tools in action, stuff that you guys are using today. So I know each of you has helped companies implement AI or advanced analytics and finance, and we'll have some experiences here. Maybe Ana T, we'll start with you. And if you've got like just a three to five minute story from your fractional CFO work, you know, identify the challenge, what tool you used, if there's any kind of way to show ROI, because ROI is a question we get all the time, right? So if you've got that, that would be great too. Yeah.
Yeah, that's really a great question. ROI is like it always gets on the table when you're discussing any kind of AI based automation. And I
So I have some examples where ROI was above 1000% and these were mostly custom solutions. And I unfortunately cannot share the details. I can only share that the executives that were presented with this project said, we're stupid not to be doing that. So sometimes the use cases are very, very impressive. It doesn't happen all the time. And the tool that I'm using mostly
in my fractional CFO work is ChatGPT, just because it's so versatile. I used to have a lot of various LLMs. I used to have Claude, ChatGPT, Gemini. But now, because ChatGPT has this memory feature, it has learned so much about me that all the answers I get are already tailored to what I need. And I keep a lot of information about my projects in it. So this is something I cannot work without.
In my clients' situations, so sometimes just getting a corporate access to ChatGPT, putting the policy around it, and teaching the team how to use it brings a great improvement in productivity. Now, ROI in this case is a little bit harder to measure, right? Because it's a lot of
small things you do every day, which you do faster. And it's not like you're working less, like we discussed a couple of minutes ago. It's like you are bringing more value. You are able to be a part of the discussions where previously you were like crunching numbers to keep this discussion running. You are bringing more insights to your leadership team when previously it took you, I don't know, several days to do a data analysis or to put some report.
together. So there is definitely ROI. There are companies who are measuring this ROI. I've seen numbers of 30%, 50%, but it's really hard to say, and it depends on the role a lot. So if the role includes a lot of manual data processing, then the time saving will be higher. If the role is executive and you spend a lot of time in meetings, maybe not so much, but still a lot of productivity gains.
I have some experience with implementing some custom AI tools. I would say that my experience is mixed here. Even with tools like Data Rails, which is an industry standard in some ways, a lot depends on what data the company has and what quality of data they have. So to be able to use these data processing AI tools, you really need to look at the data first, and maybe at the quality of processing.
So, unfortunately, I cannot share the details of this project, but I'm also curious to hear what you, Glenn, have. I'm sure you have these examples of like over a thousand percent ROI.
Right. So, and this is a really, really exciting thing. Yeah. And I think Nathan and I can probably speak to very similar use cases; I'm probably going to let him take that one. But I do. And there's a lot of questions here. I think several of the panelists, if not all of us, are sort of champing at the bit
to talk about them. But around security and data privacy, we've got some time at the end for FAQs, and maybe we can save some time there. And I know, Ana T, that's something you wanted to talk about as well, so I'm going to actually push that down the road. But Ana Y, I'm going to ask you to answer one of these questions as you answer yours. I'm looking at the chat now; there was one about the model.
Yeah, what LLM does Data Rails use? Yeah, if you can answer that, and then I actually have a panel question for you as well. For sure. So in terms of the LLM, we use the OpenAI LLM, so the same backing as ChatGPT. That being said, the way that we use the LLM is to help us understand how you are speaking, right? To understand the context of words, what sentences mean. In terms of the actual data that we're using in order to answer your questions, it is the data that is within your specific environment. So I think that's a very
necessary distinction that can kind of get mixed up sometimes, especially when we start thinking about the security questions that have been popping up a little bit too. Yeah, exactly. And as sort of a follow-up to that, the question I really wanted to ask you from this section: I'd love to hear a client story from Data Rails that showcases how AI has helped with planning, automation, or reporting, any of the new functionality that's built into the tool. Yeah.
So I would say in terms of reporting, like I was talking about with that storyboard feature, I think that is something people are using all of the time for reporting. One of our newer things that has come up more recently is Data Rails Cache. What Data Rails Cache does is allow us to pull in data from banks, and from there we can make it a lot easier to do the categorization, so then we can make our reporting a lot easier. Where does AI fit into this? Well, that's going to be in the categorization piece.
Right. So we want to make it easier for Data Rails to actually do the work of some of this bank categorization, so that we can say, okay, this transaction goes with this vendor; this memo means that this is going to go into this kind of expense, things like that. So then, just like with the other pieces of the AI that I've talked about before, it makes these categorization things more of a review process than a tedious task that you're dreading to do.
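For readers who want to picture the pattern Ana Y is describing, here is a minimal sketch in Python; it is illustrative only, not DataRails' actual implementation. A hypothetical `llm_categorize` call suggests a vendor and expense account for each bank transaction with a confidence score, and anything below a threshold goes to a human review queue rather than being booked automatically.

```python
# Sketch of AI-assisted bank-transaction categorization with a human review step.
# `llm_categorize` is a hypothetical stand-in for whatever model or service you use;
# it is NOT a DataRails API.
from dataclasses import dataclass

@dataclass
class Suggestion:
    vendor: str
    expense_account: str
    confidence: float  # 0.0 - 1.0

def llm_categorize(memo: str, amount: float) -> Suggestion:
    """Placeholder: in practice this would call an LLM or a trained classifier."""
    if "AWS" in memo.upper():
        return Suggestion("Amazon Web Services", "Cloud Hosting", 0.97)
    return Suggestion("Unknown", "Uncategorized", 0.40)

REVIEW_THRESHOLD = 0.90

def process(transactions: list[dict]) -> tuple[list[dict], list[dict]]:
    auto_booked, needs_review = [], []
    for txn in transactions:
        s = llm_categorize(txn["memo"], txn["amount"])
        record = {**txn, "vendor": s.vendor,
                  "expense_account": s.expense_account,
                  "confidence": s.confidence}
        # Low-confidence suggestions go to a human instead of straight to the GL.
        (auto_booked if s.confidence >= REVIEW_THRESHOLD else needs_review).append(record)
    return auto_booked, needs_review

booked, review_queue = process([
    {"date": "2024-05-03", "memo": "AWS EMEA 1234", "amount": -1200.50},
    {"date": "2024-05-04", "memo": "ACME CONSULTING", "amount": -800.00},
])
print(len(booked), "auto-booked;", len(review_queue), "for review")
```

The point of the sketch is the split at the end: the analyst reviews exceptions rather than keying every transaction.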
That's great. You know, Nathan, I think you and I are spending a lot of time in this space here. I really want to hear from you about
kind of on the enterprise level and bonus points if it involves master data, automation, or like a significant digital transformation. - Yeah, absolutely. And I think to what Ana was just talking about, I've got a lot of examples and to a certain extent, I can't mention the names 'cause a lot of the clients were at Gartner, as you can imagine. But this idea of mapping your data to your GL, your chart of accounts, this is just a universal problem and so much time is burnt there.
But a lot of this starts, on a lot of the engagements that I would go on or advise on, with kind of the boring side of things: to your point, the data governance, the master data management, having a data governance committee where you're agreeing on the metrics and the KPIs. Because you can't go tell AI, train AI, to go map it all, right? Unless you know and agree internally what you're going to call something. And I call it, you know, the algorithm or calculation, but the data ingredients: where are they coming from? And if you can't sit down with your CTO, head of marketing, head of
sales, and say, this is how we put things in our system and this is how we're doing ours, and we can't get any kind of agreement, then it doesn't matter what tool or technology you roll out. It's going to fail. And the amount of clients I saw, hundreds of them, who would talk to me two years into huge technology investments that are still in the red on their ROI because they had
no type of structure, no data governance, no master data management. It was incredible. On the enterprise side, typically what I would see with some of my larger clients, the bigger problems, was: hey, we've acquired these multiple companies over the years. We've got three ERPs now running at the same time, or it's a holdco with subs and we're trying to do a roll-up. These ERPs are all capturing things in different ways.
We don't know. This is all manual still. We might even have, one might have a Data Rails, they might have, you know, other systems, but it's not going to be helpful when you've got multiple ERPs running. And I think for me, you kind of start with that data governance, master data management, and then we will go and set up, as you know, the data warehouse or data lake. I like a lakehouse because,
if you're doing any kind of advanced analytics, you have that flexibility. And then you need a data hub. And that's the data semantic layer where your glossary, dictionary, all of those things are going to live, all those things that you agree on. I like to do data governance watermarking. So when you're looking at reports coming from a BI tool, Power BI, Looker, or Tableau, you'll see a mark that says this has passed the internal governance standards for going out in external reporting, that kind of thing. Yeah.
I want to interject just because you and I have talked about this before: that watermarking is great, especially if you're using defined KPIs, but self-serve data marts where people... So do you watermark if somebody makes their own? Yeah, so there is a problem. We tried to roll out self-serve. We were trying to solve the problem that we had so many incoming questions about the reports, on the P&L, balance sheet, from different departments, and
we're like, oh, I wonder if we rolled out self-service analytics, would this just solve that, and now the finance team won't have to be constantly hit up for all these questions. It actually just created more problems for us. Like you said, we went from having 50 standard reports and BI dashboards to 200, 250, 300. And they were showing up in all these meetings and other high-level reports, and nobody knew what
was the truth with those. I've worked with all the major BI players. I've worked with Snowflake, which I'm a big fan of, as you know. I've worked with AWS. We've worked with data clean rooms; I think AWS has a really good solution there. I've worked with Snowflake's data clean room as well. I'm kind of agnostic. I believe in a composable tech stack. A lot of CFOs like the bundled package: oh, we're an all-Microsoft shop, or an AWS shop, or a Google
Cloud shop. But I think you need that flexibility to choose the right tool, not just be stuck with everything from one vendor, you know, Microsoft. You know, as you were talking about the data definitions, it kind of made me think about schema and how we interact with all this data. And then different companies have different levels of data, but I'm not going to go too far down this rabbit hole for any of the crew here today
who's tuned in today that is more on the technical side, or wants to go back and have your team research something: right now, I'm losing my mind over Model Context Protocol, the ability to interact with your own data. It replaces APIs; it takes all this stuff we used to have to do through these weird RAG solutions to get to data. Model Context Protocol is making it so that, whether you have data in your SQL database or Snowflake or whatever it is, you're able to interact with that data, or get stuff from the web, and the tool use. Then the other one right now is n8n. So n8n with Model Context Protocol, the ability to integrate and interact with data, is really opening up all kinds of new pathways for automation and for just interacting with the data.
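As a concrete illustration of what Glenn is describing, here is a minimal sketch of an MCP tool server, assuming the official MCP Python SDK (`pip install mcp`) and its FastMCP helper; method names may differ by SDK version, and the database, table, and row limit are made up for the example. The idea is that an MCP-aware LLM client (a desktop assistant, an n8n workflow, and so on) can call the exposed tool instead of you hand-building API or RAG plumbing.

```python
# Minimal sketch of an MCP server exposing read-only finance data to an LLM client.
# Assumes the official MCP Python SDK's FastMCP helper; treat names as illustrative.
import sqlite3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("finance-data")

DB_PATH = "finance.db"  # hypothetical local database with a GL transactions table

@mcp.tool()
def query_gl(sql: str) -> list[dict]:
    """Run a read-only SQL query against the GL database and return rows."""
    if not sql.strip().lower().startswith("select"):
        raise ValueError("Only SELECT statements are allowed.")
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    try:
        rows = conn.execute(sql).fetchmany(200)  # cap result size
        return [dict(r) for r in rows]
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP-aware client can connect
```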
And if we have time, then we can go further; I don't wanna go too far down it, but I did wanna mention those as tools in action. And if we think we're okay on time, let's just do a lightning round. Maybe one answer from each of you, and we'll go Ana T, Ana Y, and then Nathan. Most valuable tool you used in the past year for FP&A, and why? ChatGPT for me,
just because it's so versatile and I work with a lot of clients. So, Ana Y? I feel like I know what your answer is going to be. I feel like my answer is pretty biased, so I'll give two. But I would say Data Rails is great; we're a great tool. I would say ChatGPT is really useful, specifically around definitions and stuff, right? Because then we don't need to remember all these different formulas or different things anymore. Now, instead of an encyclopedia, I have a very easy reference to go to as well.
Nathan, what you got? I use ChatGPT all day, every day, of course, as you know, and create a lot of custom GPTs. I'm a huge fan of NotebookLM. And one of the most recent things I've done with that is create a prompt vault, where I can now keep all of my prompts in one place and easily access them, so I'm not having to redo my work again and again and again. Great. Yeah, great tool. When NotebookLM came out, that was one of those things. I mean, the arms race in AI is just so fast, it's hard to keep up with everything, but every couple of months something comes out that just kind of blows your mind, and NotebookLM, creating an audio podcast from your notes, was mind-blowing to me: to get, you know, a 17-minute podcast that you didn't prompt in any way. And so for our audience who doesn't know what NotebookLM is, it is from Google, and it's notebooklm.google.com or whatever. But you can take data from multiple different sources, PDFs, research papers, URLs, videos, aggregate it all, and it puts it into a single document folder that you can interact with: create FAQs, create quizzes. It's a super cool... and it does those audio overviews, but a super cool product. And everybody who's on this, I'm sure, knows what I mean. We don't need to define ChatGPT anymore, but...
I want to just add: when you make a really good prompt that you've worked on for 30 minutes to an hour and you're really proud of it, you obviously don't want to lose it. Everybody probably knows what I'm talking about. When you finally get it right and it works, 99 percent of the time when you use it, it gets you what you need out of it. Having a way to store and archive that and access it really quickly is why I think NotebookLM is really fantastic there.
FP&A Today is brought to you by DataRails, the world's number one FP&A solution. DataRails is the artificial intelligence-powered financial planning and analysis platform built for Excel users. That's right, you can stay in Excel. But instead of facing hell for every budget, month-end close, or forecast, you can enjoy a paradise of data consolidation, advanced visualization, reporting, and AI capabilities.
plus game-changing insights giving you instant answers and your story created in seconds. Find out why more than a thousand finance teams use Data Rails to uncover their company's real story. Don't replace Excel, embrace Excel. Learn more at datarails.com.
All right, this one. So I think you guys, and myself included here, I'm about as big of an AI evangelist as there could be out there. But those of us who are power users of the tools know that it also has limitations. So, no shortage of hype around AI, but I want to talk about what's real. And I don't know why, because Data Rails came so quick to market with their tool, whereas if you look at the big ERP players out there, they keep talking about it, but they're slow to roll it out.
And I think that they just haven't figured out how to sort of solve for hallucinations and all the issues that come up. But I guess, Ana Y, from you, since you guys are using it today, and I'm sure you get a million questions from customers and potential customers: what do people often misunderstand about hallucinations,
where we are right now with generative AI, and the tools available for FP&A? And is there somewhere where we need to kind of recalibrate expectations? I would say, to Ana T's point before, I think a lot of people think that, okay, this is going to do a job completely end-to-end for me; it's going to replace a person that we already have.
Right. I think a big thing to understand about AI, from my perspective, is that it has to learn. When you kind of start from scratch, you can almost think of it as a baby, in that you have to create its world. You have to tell it what it needs to know. In the FP&A context, you have to teach it about functions. You have to teach it what a variance is, what the budget is, what we need to look for.
And so with that, it is crucial that we have people who are good at communicating, in order to help actually make AI functionality very useful for your team. And I think that once people understand that better, then one, there's a bit of a sigh of relief, like, okay, cool, I'm not in competition here, which is always great. But also there's an understanding of, oh, okay, this is an incremental way to make my life better, right? Those increments might happen very quickly, but it is still incremental. It's not like, in a snap, you're not going to understand what you're looking at anymore, because you are a massive partner in terms of utilizing AI functionality in order to get your board decks, to get your plans, to get all of those things done in a more efficient way.
Nathan, any trends or technology? You mentioned Gartner, but we're like rocketing over the Gartner hype cycle here. But are there any trends or technologies right now that you think are actually overhyped? Or maybe on the flip side of that, is there something that we're perhaps underhyping in the midst of all this? And are there some out there that you are already saying this is going to be a fundamental change?
There's a lot to unpack there, but a couple of things. One is all these AI solutions that claim to be fully autonomous out of the box. That's just not true. And I think, unfortunately, there is also a lot of AI washing going on with vendors, where they're like, I've got AI now, right?
And it's never going to work out of the box. It's what I was just talking about: there's no context. It has no domain knowledge, no business knowledge. It's going to have to train. When you see software solutions out there, and there's a lot I like, claiming it's just going to be rolling weeks after deployment, it's a deception. Finance is messy. Data isn't clean. You still need that human oversight and
governance, if you will. But there are some cool tools that are happening that to me are addressing some huge needs, which is dealing with those multiple versions of truth, dealing with that data governance, obviously the privacy. And I'm a huge fan. You heard me mention earlier with data clean rooms. I think they're extremely underutilized.
I think Snowflake was kind of one of the first ones. I worked with Haboo, which is now part of LiveRamp. But AWS has a really good solution there. And if you're in an industry that's highly regulated, healthcare, medical, those types of things, you need to be looking at these solutions as well. And then there's some things just for the FP&A folks that are just really going to make your life easier
And one I think you're familiar with, Glenn, Amalgam, is one example of making your GL entries just that much quicker and easier, where it can be trained pretty quickly. You tell it, this is what this is, this maps to here, and it makes suggestions, and
then you're just reviewing and approving most of the time instead of actually making those entries. Another one that I'm pretty excited about is anomaly detection, and there are so many applications within finance and FP&A for that. Whether it's finding revenue leakage situations (everybody does a leaky bucket exercise when times are tough), finding those areas. Are there advantages that we're not taking when it comes to invoicing and paying early to get discounts? Our clients with our invoices: are they not paying the penalties they're supposed to be paying for being late? Fraud: are we seeing multiple invoices for different companies coming from the same address? What's going on there? Those types of things get you pretty quick ROIs as well with those solutions.
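To make the anomaly-detection ideas Nathan lists a bit more concrete, here is a small pandas sketch with made-up invoice data, not any particular vendor's product: it flags the later invoice of a same-vendor, same-amount pair dated within a week (a possible duplicate) and statistically unusual invoice amounts per vendor. Real systems use richer features and learned models, but the output, a short review list of exceptions, has the same shape.

```python
# Toy anomaly checks on an invoice table: possible duplicates and unusual amounts.
import pandas as pd

invoices = pd.DataFrame({
    "vendor": ["Acme", "Acme", "Acme", "Globex", "Globex", "Globex", "Globex", "Globex"],
    "invoice_id": ["A-101", "A-102", "A-103", "G-201", "G-202", "G-203", "G-204", "G-205"],
    "date": pd.to_datetime(["2024-04-01", "2024-04-03", "2024-04-10", "2024-04-02",
                            "2024-04-15", "2024-04-20", "2024-04-25", "2024-04-28"]),
    "amount": [5000.00, 5000.00, 4800.00, 1200.00, 1180.00, 1220.00, 1190.00, 9800.00],
})

# 1) Possible duplicates: same vendor + same amount, with the later invoice dated
#    within 7 days of the earlier one (the first of each pair is left unflagged).
inv = invoices.sort_values(["vendor", "amount", "date"]).copy()
same_pair = inv.duplicated(subset=["vendor", "amount"], keep=False)
close_to_previous = inv.groupby(["vendor", "amount"])["date"].diff().dt.days.le(7)
inv["possible_duplicate"] = same_pair & close_to_previous

# 2) Unusual amounts: z-score of amount within each vendor.
stats = inv.groupby("vendor")["amount"]
inv["z"] = (inv["amount"] - stats.transform("mean")) / stats.transform("std")
inv["unusual_amount"] = inv["z"].abs() > 1.5  # crude cutoff for a toy example

print(inv[inv["possible_duplicate"] | inv["unusual_amount"]])
```

On this toy data, invoice A-102 surfaces as a possible duplicate and G-205 as an unusually large amount; in practice those rows become the review list someone works through.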
So, a lot of people tuning in here, you know, prior to generative AI, if we'd been having this panel on AI in finance, like you mentioned, we would have been talking about a lot of, you know, machine learning; we'll call that classical AI, pre-generative AI. It's nerd land, where you have to write Python and you have to be a coder to use all of it. And now generative AI is doing all this. So a lot of times, with all the examples you were just talking about, I think some people in the audience might be thinking, can I do that in ChatGPT? And I think there's a thing we need to clarify here. When we talk about AI, we're talking about the broad field of AI, which includes the traditional machine learning that you were talking about, and the new generative AI also falls in that. It is an important distinction, and for things like fraud detection, anomaly detection, and all the things you were just talking about, maybe generative AI is a top layer that lets you interact with those things in natural language, but the real machine learning and deep learning stuff is happening in the background. Just a point of clarification around the differences in the types of AI there. I guess, Ana T,
I know when you come into a client, they may just be, they may have this FOMO around everybody's doing AI. I've got to do AI. I don't know. I don't even know how to AI. Where do I start? So when you're helping clients pick which tools to use, how do you guide them? Do you give them like red flags to watch out for? Or where do you go with that? I mean, it's not only about picking the tools, right? Before you even get to picking the tools, I try to build some kind of roadmap
and to identify the cases where they don't even need the tools, and having corporate access to an LLM with a secure environment, policy, and training around it is enough. Sometimes the process that they are planning to improve is so critical for them that we would go with something really custom, because sometimes
the security, maybe some, I don't know, like regulatory landscape that they operate in demands very high attention to this aspect.
And then in the middle are the out-of-the-box tools, right? So it's always a mixture of things that you are implementing. As for red flags in the tools, if we get to the tools: I would say if the AI tool is saying that it is everything to everyone, this is a huge red flag for me. Whether it is forecasting or, I don't know, some kind of analysis, whatever finance process it covers, it cannot be everything for everyone. So it should be focusing on some kind of company: enterprise, startup. They all operate in very different environments, right? So this is one of the red flags. And another one is if the company is not able to answer how the data is handled, who owns the data, and how security and data access are managed. If they're not able to answer this question, this is a no-go.
I'm watching the clock go by and it's flying. I really want to leave time to Q&A. So I'm going to go lightning round on this. One question every finance leader should ask before signing a contract with an AI vendor. I would just ask, like, A, can you explain to me in very simple words what your AI functionality is? Keyword is simple words. I don't want buzzwords. Nice, nice. Nathan?
First question would be, how long is this contract? Second is, you know, obviously, what is your engine? Is it, you know, ChatGPT? Are we talking Claude? Is it custom? Is it Llama? Like, obviously, uncover that quickly. And then the third one, I know you said one, but: how clean does my data have to be out of the box for this to work? Ana T? Yeah, who owns the data and how it is managed.
All right. So I do want to talk about what everybody's struggling with around this, which is building data fluency in FP&A teams, because we've been talking about digital transformation for three decades. We've been talking about, you know, data democratization for 15-plus years at this point, everybody getting their data fluency higher. But it wasn't really until generative AI came along that it became like the clarion call for this. It's like, oh, now is the time. So
AI is only part of the equation. The other part is our people. So we're all scrambling right now trying to figure out how to use this. But I want to know from your perspective what it takes to build real data fluency inside FP&A because we know to use AI, it has to start with the data. So Ana T, we'll start with you. How do you help teams upskill and what does that look like in practice? Yeah, so my...
practice is maybe unusual at least for this team we have here. I work mostly with smaller companies who sometimes don't even have like a dedicated FP&A team and I usually don't start with the data assessment because everybody thinks their data is perfect or at least ready for AI. So we start with
okay, how can you apply AI, and in which processes can you apply it? And then, if everything goes well, the finance team gets access to various tools and they start experimenting. And this is where they understand that their data is not perfect at all, and that the answers they are getting from AI are not great, not because they are asking the wrong questions, but because their data is kind of dirty. So I don't start with data usually, although you're very right to say that the people aspect is a very important aspect of that. But I start with the tools, and I bring the teams to the understanding that the data needs to be clean and ready. And sometimes they need to take a step back
and to take care of the data, implement some kind of like a data warehouse before they even can move forward with any kind of like AI implementation, like at scale, right? And so, yeah, everybody understands that
they need data, but not a lot of people understand what it actually means to have this clean data. What is the definition of clean data? Where do you need to pay attention to the sources of the data, et cetera, et cetera? So it's a long process. And the organizations I work with, they do have a lot of bad data, a lot of...
Nathan, you've worked across finance and tech, and I'm wondering in the two areas, and maybe everything is merging more with tech, but finance and tech really seem to be joining more and more together. But I'm wondering, how do you bridge that gap between the business teams, the domain experts on that side, and then the technical teams and domain expertise that they have? I mean, we have to work together to the point where
There's almost an expectation that everybody has a bit of both those skill sets in them, which is hard to come by. But I don't know what you're seeing out there with that. Yeah, that's right. I think, just listening to Anna talk as well, on the many calls I went on at Gartner with CFOs, inevitably we would ultimately end up bringing a CTO or CIO onto the call because, to your point, the finance team is data illiterate and the CTO and tech engineering team is finance illiterate. Right.
There are so many issues where, hey, IT just stood up all of our BI dashboards and we gave them the specifications, but the moment they deployed them, we all got in and started trying to use the dashboards and none of it made sense. Even the labels of the data made no sense, and we couldn't build anything. We couldn't use it.
And all this time and money was wasted. It really does start by building a shared language between the two. And that goes back to what I was saying on the data governance side of things: getting together and not just mapping the data. IT knows how to do a data map. It's mapping it to decisions and knowing who's creating the data, who's using the data, consuming the data, who can update the data, who's actually controlling it. If I need a field changed or added in my CRM or my ERP, who's going to do that?
And do they know what that field is being used for? They might just see a Jira ticket come by, right? IT sees this thing, like, hey, we need to make this field structured, moving from unstructured, and required. Well, why? What's finance trying to do with this? And that bridge between them, it's that common language, as I mentioned before, when you agree on the metrics and KPIs and say, hey, here are the ingredients that are going to go into this, here's how finance is using it, here's how it shows up business-decision-wise in the company. And then, on the IT side, finance doesn't understand how complicated it can be for them from a data engineering standpoint to go actually get that data and have it, you know, wrangled in a way that's useful.
Ana Y, what advice would you give to a finance pro who's not technical, but wants to become more data savvy, more AI literate, and sort of get up to speed with what we need to know for the AI-driven world we're in now? Yeah, I would say that, if possible, with kind of the AI technology that you're using, I would try to find an answer to a question that I already know the answer to, one that took several steps.
So for example, if we're thinking of like, okay, I want to understand like where did a variance come from? I want to see it like split up by, sorry, if I'm looking at like my variance in terms of budget versus actuals for revenue.
in total, right? And then I want to see that split up by customer. I want to see which customer was the one that contributed to that variance the most. I want to see then that customer, all of the information split out over months, right? You can always get to that kind of answer by, I'm assuming, pulling reports or asking different people. And so then you get to that final answer there. And if you're working with a tool that is supposed to be able to give you a similar kind of answer,
try to get it to get you that answer, right? Whether maybe it's a chatbot that you're prompting, right? Try to get down to that level of granularity. And what you'll do is one, probably get a little frustrated because it will probably take some tinkering in terms of what your prompts are or different things like that. You'll also see if there are limitations, what do those limitations look like? How can I actually use this tool moving forward? And then
Also, because you already know the answer that you're looking for, once you get to that final answer, you will know, okay, did this actually help me or not? And again, you've gained so much information because you already knew where you're trying to get to, but now you've filled out all of that context around it about how this tool is actually going to help you moving forward.
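If you want a concrete version of the exercise Ana Y suggests, here is a small pandas sketch with illustrative numbers that produces the "known answer": the total revenue variance versus budget, broken down by customer and by month. You would compute something like this yourself first, then ask the AI tool the same question and compare.

```python
# Compute a budget-vs-actual revenue variance by customer: the "known answer"
# you can use to sanity-check whatever an AI assistant tells you.
import pandas as pd

data = pd.DataFrame({
    "customer": ["Acme", "Acme", "Globex", "Globex", "Initech", "Initech"],
    "month":    ["2024-01", "2024-02"] * 3,
    "budget":   [100_000, 100_000, 80_000, 80_000, 50_000, 50_000],
    "actual":   [ 92_000, 104_000, 81_000, 60_000, 55_000, 49_000],
})
data["variance"] = data["actual"] - data["budget"]

total_variance = data["variance"].sum()

by_customer = (data.groupby("customer")["variance"]
                   .sum()
                   .sort_values())          # biggest negative contributors first
by_customer_month = data.pivot_table(index="customer", columns="month",
                                     values="variance", aggfunc="sum")

print(f"Total variance: {total_variance:,.0f}")
print(by_customer)          # who drove the miss
print(by_customer_month)    # when it happened, by month
```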
I'd say you make a good point. I just want to add, you know, at Vi Consulting, we design AI projects around real business questions: can we predict churn by product line, or where is cash burn outpacing our plan? And that anchors data fluency in impact.
Security and compliance: with generative AI, with all of these tools, you know, taking your data and uploading it into the cloud when you're using them, there are a lot of questions around that. So before we go to the broader Q&A: Ana T, you mentioned this in the prep. What should finance leaders be thinking about as they integrate AI into sensitive financial processes? Yeah. So I...
So when I raised up this security question, I wanted to say that I see companies that on the one hand think that AI and security cannot be together. And then they block all the initiatives that are even remotely AI related for specifically finance teams. And in reality, there is a way to handle data security. It's not a blocker.
On the other hand, I see teams who just go forward with AI without even thinking about all the security implications. So I just wanted to
bring it to the attention of finance leaders and explain that, on one hand, this is not a blocker, but on the other hand, this is a very important aspect of AI implementation. That's it. I don't want to spend more time on that, but I just see a lot of bad stuff happening there, especially with this kind of breakneck implementation.
All right. So when I get on my soapbox in our after-hours special that's coming up, I'm going to start a fight with CISOs is what I'm going to do. But we'll save that for later. So, Nathan or Ana Y, anything you'd add, especially from a system or implementation standpoint? Yeah, I just want to add to it.
You know, I always see it through the finance lens of audits. We always begin with access controls and audit trails, especially when AI is touching sensitive financial data. We encourage finance leaders to ask vendors one key question: can your outputs be traced, tested, and explained in plain English to my auditors? And if not, it's not enterprise ready.
Ana Y, anything from your end? I would say, from an implementation standpoint, just like Ana T was saying, there are trade-offs when you want security like you need on the finance team. And some of those limitations are going to be: can we do an automatic feed of market data? Can we do an automatic feed of currency exchange rates? Can we do an automatic feed of anything else? And I think that just knowing that you're balancing those two is important to realize when you're going into your implementations. Okay. Eight minutes for Q&A. A couple of questions on Microsoft Copilot. I've been pretty vocal about my thoughts on it. Yeah, I've heard some of your feedback, and I have not met a person who would prefer Copilot to ChatGPT.
But there are a lot of organizations who have Copilot on default. So my answer to that, at least get a good training. There is a way to improve your interaction with Copilot.
It is different. It's not the same as having ChatGPT. But if this is your situation, at least get the training and try to get the most out of this tool. Nathan or Ana Y? Some of our listeners probably don't have a choice, because corporate has locked out ChatGPT and maybe they're on Copilot. I often try to use Copilot to do the things I use ChatGPT for, and it just always disappoints me. And I keep waiting for it to somehow catch up or be better. It should be better integrated with Excel, and embedded AI in Excel is a dream. But just from my personal experience, it's not there yet.
Yeah. So I will say I'm bullish long-term on Microsoft. They put, what, 10, 15 billion into OpenAI. They've done their acqui-hire thing. They're going to get it. We're going to see Clippy's revenge: Clippy's coming back, and he's going to be helpful. Actually, there were a lot of questions around prompts. I think we could probably go round-robin here: Nathan, Ana Y, Ana T, prompt tips.
For me, start with context. The hardest part, and why I need a prompt library, is writing that context. You know, once it gets to know you, as Anna mentioned earlier on the call, it's much easier, but always give it the context. And you don't want to rewrite that, because you don't know how inconsistently you might rewrite it every time you have to do that. So write it the one time. Embed as much of your company and your industry jargon and knowledge and domain expertise as you can in there, and then keep it consistent every time. Keep your outputs consistent with your organization, because even within an industry, organizations talk about things differently. And keeping that context is so critical in your prompting. Yeah, I would say to use keywords, if you know the keywords are going to be, I don't know, used kind of more broadly. So I think that that's really useful. And also, if you're trying to get consistent answers, ask consistent kinds of questions. I think that's where having a prompt library is so useful, because then you can kind of treat it like Mad Libs: take out the parts that you don't need, but you have that same sort of structure. So then you're going to get an answer in a structure that you probably are going to like.
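A prompt library can be as simple as a handful of templates with named blanks, which is essentially the Mad Libs idea Ana Y describes. Here is a minimal Python sketch; the template wording and field names are just illustrative.

```python
# A tiny "prompt library": reusable templates with named blanks you fill in per use.
from string import Template

PROMPT_LIBRARY = {
    "variance_commentary": Template(
        "You are an FP&A analyst at $company in the $industry industry.\n"
        "Write $length of commentary on the $period $metric variance of $variance "
        "versus budget, for an audience of $audience. Use our internal term "
        "'$metric' exactly as written and do not speculate beyond the data given."
    ),
}

prompt = PROMPT_LIBRARY["variance_commentary"].substitute(
    company="ExampleCo",
    industry="SaaS",
    length="three bullet points",
    period="Q2",
    metric="Net New ARR",
    variance="-4.2%",
    audience="the board",
)
print(prompt)  # paste into ChatGPT/Claude, or send via an API call
```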
Yeah, well, my favorite tip or trick is to use AI to write prompts for AI. Like, explain what you're trying to do, let ChatGPT handle that, and then tweak what you get. This is especially great for long prompts, so you don't have to type a lot of things into the window every time. And it works most of the time. And in terms of consistency, ChatGPT has this amazing memory feature. So this is important to understand:
check what it has in memory about you, and manage that. So tell it: forget that I'm traveling next week, focus on my work environment, if, for example, you have the same account for your work and for your leisure requests. And the best way to do it is to ask it: based on everything you know about me, list the five most important things. Then tell it to forget whatever you think is not relevant and add whatever is relevant for your work. So the answers that you will be getting will be much higher quality, if you consistently manage it. I love that tip. I just want to add, I agree with that, Anna. And I personally like to use Claude to write my more complex prompts.
I find it's more useful there. Jessica Jarduli had some good advice in the chat. She said: I start all my prompts with context, e.g., you're a CFO and an expert in finance specific to the fintech industry; explain ASC 606, and your target audience is a high school accounting class. And I might add to that: you might say, give me two paragraphs on this, give me five bullet points. Let it know how much output you're looking for, because sometimes, if ChatGPT is feeling frisky, you'll think you're getting a paragraph and suddenly you've got a 2,000-word essay that it just spit out. So
I like to give it those guidelines. So we do have a couple of other questions and about two more minutes. This is a great one from Vernon: integrating machine learning requests to an outside system like R or TensorFlow for machine learning. The best example of that I could think of would be Cortex in Snowflake. And really, it's about integrating all these different data sources and having generative AI go in. Instead of having to write SQL queries or whatever, you have that layer of generative AI where you can query directly, whether it's a data lake or directly into a system through an API or through MCP. That's how you could work generative AI into your ML pipeline. Nathan, any other thoughts on that, as far as how you could actually... it's almost like low-code, no-code machine learning, but instead you're using a generative, a natural language prompt in order to do that.
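Here is a hedged sketch of the pattern Glenn describes, generative AI as a natural-language layer over your data, using a hypothetical `llm_generate_sql` function rather than any specific vendor's API (Snowflake Cortex, for example, exposes its own functions for this). The important part is the guardrail: the generated SQL is checked to be read-only before it touches the data.

```python
# Natural-language question -> LLM-drafted SQL -> guardrail check -> run read-only.
# `llm_generate_sql` is a hypothetical stand-in for your text-to-SQL model or service.
import re
import sqlite3

SCHEMA_HINT = "Table gl(account TEXT, department TEXT, month TEXT, amount REAL)"

def llm_generate_sql(question: str, schema: str) -> str:
    """Placeholder for an LLM call that drafts SQL from a question and a schema hint."""
    return ("SELECT department, SUM(amount) AS spend "
            "FROM gl WHERE month = '2024-04' GROUP BY department")

FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|create)\b", re.IGNORECASE)

def answer(question: str, conn: sqlite3.Connection):
    sql = llm_generate_sql(question, SCHEMA_HINT)
    if FORBIDDEN.search(sql) or not sql.lstrip().lower().startswith("select"):
        raise ValueError(f"Refusing to run generated SQL: {sql}")
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gl(account TEXT, department TEXT, month TEXT, amount REAL)")
conn.executemany("INSERT INTO gl VALUES (?, ?, ?, ?)", [
    ("6100", "Marketing", "2024-04", 42_000.0),
    ("6200", "Sales",     "2024-04", 55_500.0),
])
print(answer("What did each department spend last month?", conn))
```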
Yeah, I love your example as well. And the integrations and APIs and MCPs we were mentioning earlier, you and I had a good conversation on that. As that gets more advanced, I think things are just going to be so much easier. And you see, like, I don't know about everybody else, but we're on Google, and now that I can give ChatGPT access to my Google Drive and have data and shared folders just move in and out of there, that's really nice. But yeah, the integrations, and then there's the whole separate conversation of integrations and how to use what and all the add-ons, and there's just a lot. It can be overwhelming.
So, anytime your data is not in a Faraday cage, if you are connected to a network anywhere, your data is not secure. If you are uploading your data into Snowflake, if you're uploading your data into AWS, if you're uploading it into your cloud-based ERP, those are all threat vectors for your data. If you're uploading your data into Gemini, right?
Claude, ChatGPT, Manus or DeepSeek, anywhere where you're uploading your data, it is a new threat vector for you. And all the models right now, so all those companies, I don't know about the Chinese-owned ones, but all the companies that I mentioned are SOC 2 compliant. So if you trust Google to not leak your emails and your Google Sheets and all that, then why would you not trust Gemini with your data that you put into there? Now...
There's also, on all of these models, and someone mentioned it in the chat: OpenAI, ChatGPT defaults to, if you don't go into the settings and change it, it's going to upload your data to help train the models. Some default to we're not going to use your data to train the models; I don't know where Gemini is. All the different ones have different settings. So first off, look at that setting. Now, this is where I will fight the CISOs. I'm not a security specialist. However, I'm something of a specialist on LLMs. LLMs do not learn facts.
They learn probabilities. So if I go to my favorite LLM and type in "Michael Jordan is a blank," 999 times out of a thousand it's going to say basketball player, because there are thousands and thousands of articles about Michael Jordan being a basketball player. However, he also played baseball for a couple of seasons. So maybe one out of a thousand times it'll say Michael Jordan is a baseball player. But it doesn't have a memory of it; it doesn't know everything about Michael Jordan other than how much it's been exposed to that. So conversely, as much as I like to think I'm out there and well known, if I went to ChatGPT right now and typed "Glenn Hopper is a blank," it doesn't know. There are like a hundred other Glenn Hoppers in the world that I know of, and it doesn't know anything about me. So if I had it where my data was not turned off, where my data is going up and training the model, and I put in my social security number, my blood pressure, what medications I'm taking, the chances of that leaking through all the training and coming out to a person are slim to none. Now, that's not saying be flippant with your company data or with your client data. But the way that these LLMs work, with one mention in one instance, the security risk is disproportionately overblown around all this. Companies have to figure out a way around this. I know that there are ways you can wall it off. You can run smaller models locally and all that. There's the Azure solutions or the OpenAI Enterprise. There are all kinds of ways to wall this data off. But okay, I just rambled a whole lot. I would love to hear additional thoughts from you guys on that and how you're handling it. Because
I say all that, but when I'm dealing with client data, I have to tell them, look, this is what we're doing. Is it okay if we use your data in this way? So I know how I treat my own data, but it's different with company data and client information. How good is your cyber insurance?
Go ahead, Anna. I agree with you, but it's hard to sell to security guys. Oh, yeah. Oh, 100%. Yeah. But the important thing is, okay, if...
Even if you are concerned that your data will be used for training models, I mean, by all means, don't do it on a free account. It's not a fortune to pay for a paid account and switch this setting off, the one that says don't use my data to train the model. This is already good enough, I would say. This is the standard. We don't have any standards around that, but I would say this is a good standard.
But then if you are concerned about some kinds of data, explicitly clarify it in the policy and train your employees what is okay, what is not okay. It is much worse when people don't know what is good for the company and what the company's tolerance is, right? And going back to what you said,
I mean, yes, probably my data will not show up, but if I'm handling sensitive client data, if I'm handling medical records, I would be very, very cautious about how these things work. And nobody knows exactly how, right? So you don't want to be responsible for leaking your client data or for leaking your patient's data, right? But for a lot of data that the company handles,
it doesn't really matter that much. And having a corporate account and some training and some guardrails is good enough. So don't be too restrictive. This is what I'm trying to get across to CISOs: don't be too restrictive. They tend to be over-restrictive, like any level of data confidentiality is a no-go for an LLM. And for finance teams, that always means you can use it on nothing. Even an email, even a bank email, is that something I can use it on? Contracts, no. So, like, really have a conversation around that. Have a conversation about why you think this is dangerous. In many cases, CISOs are not able to answer this question. Like, what is going to happen? They don't know. But, like, even if they have very low tolerance for sharing data with an LLM, define what is still okay. Give us something. Nathan, you were starting to say something? Yeah, I just had a bunch of thoughts. I work with a lot of CISOs, and of course they're going to be conservative. That's their job. That's their main job, if you think about it. I like to laugh because I've worked with a lot of companies, and I've seen everything from having no policy at all to having a very restricted policy. But
policy is not always common practice. They might have this document, and everybody's like, oh, I know it's there, but everybody's doing their own thing and it's not locked down. How do you account for every individual that has access to ChatGPT or Claude or whatever it is they're using? How do you know? It's really hard. IT is just kind of playing whack-a-mole as these different things pop up. It's really important to do training and have a policy. But unless you're dealing with, I think, kind of the big three, which is, you know, SharePoint, SOC 2 compliance, medical data, PII, those types of things, don't overthink it. Nobody really wants your data as badly as you think they do. I mean, I'd be terrified if I went to ChatGPT and said, what is Nathan Bell's social security number, and had it accurately respond back because somebody hacked it and put it in there. Or I'm trying to do insider trading and find out what the big market movers are doing, and somehow their data got into ChatGPT, and all of a sudden ChatGPT has it and is answering people that are querying it. That's terrifying, right? That could happen. All right. Well, that brings us to the end of our session. So thank you to our incredible panel: Nathan Bell, Anna Tiomina, Ana Yamashita. We covered a lot today: real use cases, tools, strategies. And if you want to continue the conversation, I'm sure anyone
on this panel, myself included. Connect with us on LinkedIn or reach out through the various websites. We all love talking about this stuff. And this session will also be available on demand and as a podcast episode on FP&A Today. And thanks again to Data Rails and to all of you for being part of the conversation.