Diligence, Decisions and Building on Shifting Ground
This e-book includes edited excerpts from a conversation between Ed Brandman, Founder & CEO, ToltIQ, and Desmond Fleming, Host of The Longest View. Ed shares his perspectives on AI, private markets, and what it takes to build for real PE workflows. To watch the full episode please visit: https://www.youtube.com/watch?v=bAZe2piPuo4
PART ONE: THE OPPORTUNITY
The Known Quantity
The problem with applying AI to a corporate document repository comes down to one thing: you don't know what you actually have. There are five versions of the financials, seven versions of the client contract, two versions of the same presentation. Nothing wrong with that in a corporate environment. Different people have different needs, and zero-trust environments exist for a reason. But when you want to find a needle in the haystack or correlate information across documents, recency and relevancy are so intertwined that you can't be confident you're working from the most current version of anything.
Diligence is completely different. The word people use to describe applying AI to proprietary content is RAG — retrieval augmented generation. But what makes diligence so interesting as an application is that you're working over a known quantity of documents, a corpus you can have high confidence is the right set. Maybe the seller only provided three years of financials instead of five. Maybe they gave you 20 of the top 50 customer contracts. But what all parties to a transaction agree on is that the virtual data room is the repository. Everything you need to execute your analysis is either in there, or you go back to the other side and ask for it. That agreement doesn't exist in a corporate setting.
That's the fundamental difference. In a corporate environment, AI has to make assumptions about relevancy and recency it can't reliably make. In diligence, you define the universe. The model works within a bounded, agreed-upon set of documents, and that changes what it can do for you.
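The bounded-corpus idea can be made concrete. Here is a minimal sketch of retrieval-augmented generation over a fixed, agreed-upon document set — all names and the toy keyword-overlap scoring are illustrative assumptions, not ToltIQ's implementation:

```python
# Minimal RAG sketch over a bounded corpus (a virtual data room).
# Illustrative only: real systems use embeddings, not keyword overlap.

def retrieve(query, corpus, k=2):
    """Rank documents from the fixed corpus by naive keyword overlap."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, corpus):
    """Assemble a prompt strictly from the top-ranked sources."""
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(query, corpus))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

# Hypothetical data-room contents.
vdr = {
    "financials_fy23.pdf": "Revenue grew 12 percent year over year.",
    "customer_contract_07.pdf": "Termination requires 90 days written notice.",
    "cim_overview.pdf": "The company is a market leader in three segments.",
}
prompt = build_prompt("What was revenue growth?", vdr)
```

The point of the sketch is the boundary: the model only ever sees content drawn from the agreed corpus, so relevancy and recency assumptions the corporate case requires simply don't arise.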
The CIM Is Not the Whole Story
Every CIM (confidential information memorandum) tells the best possible version of a story. The bankers built it that way. The company is a market leader in segments one, two, and three. The growth rates are benchmarked against a carefully selected peer group. The narrative is clean.
Then you get VDR access, and you have the actual financials and the actual customer contracts. In a pre-AI world, reconciling those two things was a people problem. You spent hours every day, you staffed up support teams, and you still left work on the table.
Now I can load the CIM and the underlying documents and ask the model directly: compare what's presented in the CIM against what's in the contracts. Is this consistent? Take sustainability — a company might be making explicit claims about their sustainability performance and goals. With three years of sustainability reports in the data room, I can ask: are they actually getting better, or are they getting worse? Because every year they're going to tell a great story, and every year they're probably not going to revisit what they didn't deliver on last year.
The time window for diligence isn't going to expand. You sign the NDA, you have a funding date, you're either in exclusivity or running against other bidders. The clock is fixed. What we're focused on is how you do more within that window — go wider on topics you'd normally skip, go deeper on topics that deserve it. The constraint is time, not intelligence.
Trust, But Verify
A foundation of the ToltIQ platform is that everything has to be cited. Every response, in every part of the platform, comes with references to page numbers and hyperlinks that take you to the exact location in the source document. That's not a feature we added — it's a principle we built around.
People will become more trusting of AI output over time. That's a reasonable expectation, and the progress on reducing hallucination risk has been real. But right now, in a business context, when you're writing a check at the end of the day, citations are what build confidence. If you're referencing financials, a tax policy, an operational structure, an org chart, you need to know the exact page and the exact document the model pulled from. You should be able to go verify it yourself.
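The citation principle can be sketched as a data shape: every answer carries document-and-page provenance, and anything without it fails a verification check. These structures are hypothetical illustrations, not the platform's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical structures illustrating page-level provenance.

@dataclass
class Citation:
    document: str  # source file in the data room
    page: int      # exact page the claim was pulled from

@dataclass
class CitedAnswer:
    text: str
    citations: list = field(default_factory=list)

    def is_verifiable(self) -> bool:
        # Trust-but-verify: no citations, no confidence in the answer.
        return len(self.citations) > 0

answer = CitedAnswer(
    text="EBITDA margin was 18 percent in FY23.",
    citations=[Citation(document="financials_fy23.pdf", page=4)],
)
```

The design choice is that verifiability is structural, not optional: an uncited claim is rejected before a human ever reads it.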
That's where the industry is right now: trust but verify. I want to see the output. I want to test it. And then I want the trail that lets me validate it. That approach isn't a sign of distrust in the technology — it's how you build trust in it over time.
The Honest Reckoning
The augmentation value is real. AI will meaningfully expand what teams are able to do. But I don't discount what this means for the labor force. At some point the economics kick in: if one person with AI can do the work of three, what happens to the other two? Maybe demand expands. Maybe it opens up new research tracks that weren't possible before.
Those are possibilities. But there's a genuine challenge coming in the next two to three years. As AI tools advance and agentic architecture matures, a lot of companies are going to be asking hard questions about workforce structure and what true productivity looks like with AI in the loop.
On the model side, we focus most closely on OpenAI and Anthropic. Both have a healthy mix of consumer-facing feedback and enterprise revenue, which matters. You want models being stress-tested at scale by real users, not just benchmarked in a lab. They take meaningfully different approaches to training and model design.
One example: if you're a free user of ChatGPT, your data can be used for training unless you opt out. Anthropic's Claude, including the free tier, never uses your data for training. It's not an option. That's a fundamentally different philosophy. OpenAI publishes broad safety guidelines. Anthropic has built around what they call constitutional AI — a specific, published set of rules that governs how the model behaves, designed to make it as helpful and as harmless as possible at every interaction.
We follow both closely. How they evolve influences how we build.
Building on Shifting Ground
What most people miss about building at the application layer is that the models keep changing, and that's not a minor issue. In theory there's backward compatibility, but models also make leapfrog moves.
Something you tried to do three months ago suddenly just works. Something that worked beautifully three months ago now needs a completely different approach. How you generate a table, how citations behave, how the model interacts with external content, all of it shifts as models get smarter and add new capabilities.
Every technology platform built before AI — Salesforce, Workday, SAP — was designed around deterministic outcomes. You run a report, you get X. Every time. That's the assumption baked into the whole SaaS stack. AI doesn't work that way. You're building on top of models that are getting more capable, while also managing outputs that aren't fully predictable. That's a genuinely new challenge, and there aren't enough people in the industry who know how to navigate it.
I don't think non-deterministic outcomes are inherently bad. Ask three people in an office to build the same presentation and you'll get three different versions. That's fine as long as you're confident in the source material and the logic. The problem is when you can't explain why the output looks the way it does. We'll get a result and a client will ask, why is it doing that? And sometimes the honest answer is that the model changed its guardrails and we're catching up.
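The deterministic-versus-sampled contrast can be shown in a few lines. This is a toy: the seeded randomness stands in for sampled generation, and real model outputs vary for different reasons (temperature, model updates, changed guardrails):

```python
import random

def run_report(rows):
    """Deterministic SaaS-style report: same input, same output, every time."""
    return sum(rows)

def generate(prompt, seed):
    """Stand-in for sampled generation: output depends on the draw."""
    rng = random.Random(seed)
    phrasing = ["Revenue grew 12%.", "Top line rose 12%.", "Sales were up 12%."]
    return rng.choice(phrasing)

# The report is identical on every run; the generations are not.
reports = {run_report([3, 4, 5]) for _ in range(20)}
generations = {generate("summarize growth", seed) for seed in range(20)}
```

All three toy phrasings are defensible summaries of the same fact — the analogue of three people building three different versions of the same presentation.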
You have to build with the assumption that the ground will keep shifting. And you have to be comfortable with that.
The Models Are as Dumb as They'll Ever Be
Part of working effectively with AI is reframing your expectations around speed. We're conditioned to instant feedback loops. Search returns results in milliseconds. Traditional SaaS is deterministic and fast. AI doesn't always work that way. The output sometimes takes time, and the connection between what AI generates and the downstream systems you need isn't always seamless.
The leading-edge firms are running into this constantly. People want the benefit, but they also want it now. And when you combine that expectation with non-deterministic outputs, traditional approaches like A/B testing stop working. You can't systematically test variants when you don't know what the outcome will be.
You have to be comfortable in a fluid, iterative process where some days things work exactly as expected and other days they don't.
The firms that win are making a fundamental bet that the models keep improving. My view is simple: at any given point in time, the models are as dumb as they're ever going to be. Every release makes them smarter. Every month that passes, the floor moves up. The question isn't whether the technology is good enough today — it's whether you're building the reflexes to take advantage of it as it gets better.
Industry Knowledge Is the Moat
I'd describe myself as a very nontraditional entrepreneur at 57. I'm not a startup guy in the traditional sense. My edge is relationships and deep knowledge of how this industry actually works. You know the CTOs. You know the CIOs. You know the workflows. You can hire people quickly because you've spent decades in the same world. Shortcut, shortcut, shortcut.
But what matters more than any of that is that I lived inside the business. At J.P. Morgan, I worked in trading, capital markets, and operations. At Robertson Stephens on the West Coast, it was electronic trading. At KKR, I had operational responsibility as CIO — sitting with LPs to understand what they needed, talking to the CTOs of portfolio companies, working with investment professionals at the diligence table. Every step of my career, people let me go deep into the work. I didn't just support it. I understood it.
That shapes everything about how we build. Our moat isn't just the technology — it's the fact that the people building it have worked inside private equity, private markets, and private credit operations. They're not entrepreneurs who identified a market opportunity from the outside. They're practitioners who felt the problem. They know what pre-AI diligence actually looked like, where the pain was, where the hours went.
The challenge we're designing for is real: the vast majority of private equity, real estate, and infrastructure investors have never used sophisticated technology beyond Excel and Bloomberg. They're expert analysts working in a world that has historically required very little from software. Building for that user, and doing it right, is a different problem than building another SaaS tool.