March 24, 2026

All the Knowledge is in the Documents

This eBook contains edited excerpts from a conversation between Ed Brandman, Founder & CEO of ToltIQ, and Edwin Yun, Host of the Road to Carry Podcast.


Ed Brandman on AI, documents, and the future of private equity

All the Knowledge Is in the Documents

When you're using ChatGPT Enterprise or Claude Enterprise, you're using what's called a multimodal model — one that handles text, images, voice, and video. There's a real trade-off at the heart of that breadth: how deeply you can engage with document content versus how quickly you can respond to the user. For most use cases, that's fine. In PE due diligence, it's a problem.

Business documents — the ones that define a deal — are a different category entirely. Credit agreements. Confidential information memorandums. Vendor agreements. Purchase and sale legal docs. Investor presentations. Regulatory filings. Lawsuits. There's so much complexity in how knowledge is organized inside these documents that models — while brilliant across many categories — struggle with the relationship between images and text, or indentation within a chart or table.

Take a stacked bar chart with three different shades of blue. You can read it easily as a human being. A model might not be able to differentiate those shades at all — it sees them as black. Those are subtle things. And then add a credit agreement with multiple amendments followed by new credit agreements that get signed. All of that has to fit together correctly. The model doesn't automatically know how.

What happens during diligence is that nobody is cleaning up the documents and creating well-structured data sets you can run analytics on. That's where associates come into play — ripping apart documents, building Excel models, looking at financials. If it's a public company, the financials are probably clean. If it's private, they could be messy — maybe a tax-based business rather than GAAP-based, with financials that only get updated once a year. There are a lot of hard problems in that.

At its core, this is a document problem — not a data problem. In the PE world, many of the assets being purchased are long-dated: around for 5, 10, 15, 20 years or more. Maybe family-owned. Maybe a public-to-private. Either way, all the knowledge is in the documents. The challenge in a data room is scale — tens of millions of tokens, rarely contained cleanly in any one document. Trying to piece together the story of knowledge from all of that is a genuinely hard problem. It's the reason we built what we built.

Built for Everyone Means Built for No One

What we're doing differently starts with ingestion. We ingest the actual documents, deconstruct them into their parts, and build a vector database from embeddings of those parts. That creates rich, usable knowledge: when you ask a question, we pull curated content on retrieval and feed a specific subset of that knowledge into a model to get a really good response back. And importantly, with citations for every paragraph, and in some cases for individual table sections or bullet points in a response. That's a fundamentally different way of working with data.
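The ingest-deconstruct-retrieve-cite pattern described above can be sketched in a few lines. This is a toy illustration, not ToltIQ's implementation: the bag-of-words "embedding," the chunking by paragraph, and all names (`DocumentIndex`, `credit_agreement.pdf`) are illustrative stand-ins, and a real system would use learned embedding vectors and an actual vector store.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; production systems use learned vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class DocumentIndex:
    """Deconstruct documents into chunks, embed each, retrieve with citations."""
    def __init__(self):
        self.chunks = []  # (doc_name, paragraph_no, text, embedding)

    def ingest(self, doc_name: str, text: str):
        for i, para in enumerate(p for p in text.split("\n\n") if p.strip()):
            self.chunks.append((doc_name, i + 1, para, embed(para)))

    def retrieve(self, query: str, k: int = 3):
        q = embed(query)
        ranked = sorted(self.chunks, key=lambda c: cosine(q, c[3]), reverse=True)
        # Return the curated subset plus a citation for each passage.
        return [{"citation": f"{doc} ¶{n}", "text": t} for doc, n, t, _ in ranked[:k]]

index = DocumentIndex()
index.ingest("credit_agreement.pdf",
             "The facility bears interest at SOFR plus 3.25%.\n\n"
             "Amendment No. 2 extends the maturity date to 2029.")
hits = index.retrieve("what is the maturity date?", k=1)
print(hits[0]["citation"])  # → credit_agreement.pdf ¶2
```

The point of the sketch is the shape of the pipeline: only the top-scoring chunks go to the model, and every passage carries a citation back to its source paragraph.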

Most of our clients run ChatGPT Enterprise or Claude Enterprise alongside us — sometimes both. We're not replacing those tools. But what we're doing with documents is something those platforms weren't built to do at this depth. The security model is different too. If you upload documents into Claude or ChatGPT, they've got solid security controls — table-stakes stuff you'd expect. But they're not under NDA with you. And in most cases when you're working on a deal, you are under NDA.

We're unique in that we sign NDAs with each of our clients, which effectively creates the equivalent of a tri-party agreement from a protection standpoint. The documents you load into our environment are under NDA. They're secured in your own holding pen — effectively an S3 bucket in Amazon terms — segregated from anything happening on the model provider side.

The arms race between model providers is real too, and we've seen the leader change three or four times in the last 18 months alone. That's exactly why we've been model-agnostic from day one. We use models from OpenAI, Anthropic, Google, and Cohere, each with unique capabilities for different parts of our software. As AI has become a core part of PE firms' future state, those firms are building their own AI teams and running their own model testing. We don't want to be the platform that tells them which model they can or can't use.
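One common way to stay model-agnostic is to route each task type to whichever provider currently handles it best, behind a single interface. A minimal sketch under stated assumptions: the task names, model names, and lambda stubs below are hypothetical placeholders for real SDK calls, not ToltIQ's actual routing logic.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ModelRoute:
    provider: str
    model: str
    call: Callable[[str], str]  # stands in for a real provider SDK call

class ModelRegistry:
    """Route each task type to whichever provider currently does it best."""
    def __init__(self):
        self.routes: Dict[str, ModelRoute] = {}

    def register(self, task: str, route: ModelRoute):
        self.routes[task] = route

    def run(self, task: str, prompt: str) -> str:
        if task not in self.routes:
            raise KeyError(f"no model registered for task {task!r}")
        return self.routes[task].call(prompt)

# Stub callables stand in for real OpenAI / Anthropic / Google / Cohere clients.
registry = ModelRegistry()
registry.register("table_extraction",
                  ModelRoute("anthropic", "claude-x", lambda p: f"[claude] {p}"))
registry.register("summarization",
                  ModelRoute("openai", "gpt-x", lambda p: f"[gpt] {p}"))

print(registry.run("table_extraction", "extract the covenant table"))
# → [claude] extract the covenant table
```

The design choice this illustrates: when the leaderboard changes, only the `register` calls change; nothing downstream has to know which provider is behind a task.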

How We Actually Work

On top of that security foundation, we've built a set of standardized workflows. Not overly restrictive or prescriptive in how you use them — but specific. If you're working on the extraction of vendor data, credit terms, or a lease agreement, that's a specific workflow in our platform. It's different from a chat conversation back and forth.

It's also different from a blueprint, where maybe you want to model your IC memo off of a prior one. Just like we deconstruct documents, we'll deconstruct your memo and build the components that make the new one fit together well. The workflow logic we've developed, combined with the security architecture and the model agnosticism, means our clients are getting a system that fits how PE teams actually work, not a general-purpose tool that was adapted after the fact.

That distinction matters more than it might sound. The document types in PE due diligence are specific. The workflows are specific. The stakes around accuracy and confidentiality are specific. Building something that actually addresses those specifics — rather than assuming a general tool can be bent to fit — is a different kind of bet. It's a harder one to make early, and a harder one to build. But it's the right one for this problem.

The Work Is Already Changing

If 2025 was an experimentation year, 2026 is rapidly becoming a year where people are using these tools in real-world scenarios to solve real-world problems: diligence, sourcing, closing, marketing, sales. We use them ourselves in running ToltIQ. We're a 30-person firm, and I'll probably never be more than 50 people, because the AI agent capabilities for my sales team, my marketing team, and my engineers are giving me a two to three times multiplier, or more, on those resources.

Think about what a Claude for Excel plugin actually does. Think about all the modeling that investment professionals spend their time engineering and re-engineering — building out that customer cube 17 different ways. What if you still do the initial work, but now you can use AI to iterate on it to get better output? And not just the raw data — it'll have the formulas and the cells. It'll handle circular references, which anyone who's dealt with Excel will find funny, but those are exactly the things that were tripping up models six months ago. The models are getting really good at solving those problems.
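The circular-reference point above is worth making concrete. Spreadsheets resolve circular formulas by fixed-point iteration (the strategy behind Excel's "iterative calculation" setting), and that is the kind of structure models used to stumble on. A minimal sketch, with hypothetical cell names, of the classic circular pair where a bonus depends on profit and profit depends on the bonus:

```python
def solve_circular(formulas, values, tol=1e-9, max_iter=100):
    """Resolve circular cell references by fixed-point iteration:
    recompute every formula cell until the values stop changing."""
    for _ in range(max_iter):
        new = {cell: f(values) for cell, f in formulas.items()}
        converged = all(abs(new[c] - values[c]) < tol for c in new)
        values.update(new)
        if converged:
            return values
    raise RuntimeError("did not converge; check the circular chain")

# Classic circular pair: bonus depends on profit, profit depends on bonus.
cells = {"gross": 1000.0, "bonus": 0.0, "profit": 0.0}
formulas = {
    "bonus":  lambda v: 0.10 * v["profit"],
    "profit": lambda v: v["gross"] - v["bonus"],
}
result = solve_circular(formulas, cells)
print(round(result["profit"], 2))  # → 909.09
```

Analytically, profit = gross − 0.1 × profit gives profit = 1000 / 1.1 ≈ 909.09, which is exactly where the iteration settles.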

There's also real risk accumulating for the McKinseys and the Bains and the BCGs of the world. There's an enormous amount of information on the internet, and a lot of it is good. Some of it is slop, sure. But there is a lot of good information that companies publish, that research and marketing sites publish. The problem before AI was that it was very hard to scrape the web when the websites themselves changed constantly.

With the vision capabilities being deployed and the new ways web agents have been built, that information has been democratized. You could be a one-man wrecking crew building a research report that you used to go out and spend a quarter of a million dollars on. That's not a hypothetical — that's happening right now.

The bang for the buck, the multiplier effect on speed, the depth at which you can do research on the sourcing side and the diligence side — that's where the impact is the highest right now. And it's not slowing down.

2027 Is Closer Than You Think

Mid-market and lower mid-market firms are taking a very similar approach to AI, and they're actually adopting it much faster, perhaps out of necessity. You're handicapped by the number of investment professionals on your team, the number of operating partners you may have. You don't have the same scale, and you don't necessarily have the balance sheet to invest in building things internally the way a large GP could. The larger firms — KKR, Blackstone, Carlyle, the top 15 or 20 — are on a different trajectory. They're trying to build a lot themselves, which has resulted in slower uptake.

What a platform like ours does, especially for mid-market and lower mid-market, is democratize the ability to do in-depth diligence and accelerate the time in which you can do it. You've got dry powder, you're looking to deploy it, you're seeing a lot of opportunities — how do you make the best use of the resources you have? Can you get a two to three times multiplier out of your team without people working until 3 in the morning? Because they're probably already doing that. We shouldn't make it worse.

The biggest bang for the buck right now is on activities that are document-intensive rather than data-intensive. The finance team, compliance, credit, risk management, asset management — those are data-intensive functions with tools like Snowflake and Databricks. That turns out to be a much harder problem for AI to crack. The main reason: documents have something that data doesn't. Documents have context.

There is a story to be told in looking at a CIM, a regulatory filing, or an investor presentation — beyond just the numbers. The story is in the chart itself: what's on the X and Y axis, what's the overlay, what are the three paragraphs surrounding that chart or graph. You put all that together and it's a far more powerful set of information to feed into a language model than pure data alone.

On the operational side of the house — carry calculations, IRR, LP reporting — you can't afford to get those wrong. Those are the books and records of your firm. AI will help there over time. But on the front end of the business, from sourcing all the way through closing, that is a document-driven problem with data that augments it — not the other way around.

Start Before You're Ready

The advice is simple, and it's honestly how I started. You're all making enough money in private equity that you can afford a $20 or even a $200 a month license for either ChatGPT or Claude. Get one — you don't need both. And start experimenting with it outside of work.

Follow a public company. Download their investor presentations from the last two years and try to assess what's changed. Look at some earnings transcripts. Pull down research reports on something that's a hobby of yours. If you play Dungeons and Dragons, there are actually really complex documents related to how the game gets played — what can you learn from analyzing those? Take a picture of the food on your plate and see how well the model identifies it. Take a picture of something in a museum and see what it can teach you. Those exercises help you understand how vision works in these models. The way to get less anxious about AI at work is to use it in your daily life.

There are so many entry points. You're planning a two-week trip in the Caribbean — what are the best beach bars on the three islands you're going to visit? That sounds silly, but it'll start a conversation with the model. It'll lead to follow-ups, and you'll start to understand how that works. Start simple. The sophistication follows.

The other thing I can't stress enough for people in a work context is this: you have to be okay being iterative. And that runs counter to how every PE professional thinks: "I have to get it absolutely right the first time. If that means staying up until 4 in the morning, then that's what it takes." You're dealing with something that doesn't care how often you ask it questions. There's no penalty for asking again.

That whole approach — the pressure to nail it on the first pass — is something you need to consciously let go of when you're working with AI. It's changing the way you have to think about problem solving.

The One-and-Done Mistake

The single biggest mistake people run the risk of making right now is the one-and-done. They try it once, or a handful of times — whether that's ChatGPT, Claude, or a vendor solution — and they don't get the answer they expected. Or in the worst case, they get a hallucinated answer, although that happens far less frequently now. Either way, they don't get what they believe is the definition of great.

Most private equity professionals have a very low tolerance for inaccuracy and a very high expectation of quality. I get that.

But the conclusion they draw — I tried it, it didn't work for this specific use case in this specific scenario, therefore AI is useless in PE — is simply the wrong way to approach the technology. The models are continuously improving. The vendor landscape is continuously changing. Deciding to wait until it gets better means you're not leveling up the skill set of your team. And that puts you behind the curve of competitors who are much more willing to iterate, experiment, and build that capability now.

The PE instinct is to be sure before you act. That instinct has served the industry well in a lot of contexts. This isn't one of them. The firms that are pulling ahead aren't the ones who waited for the technology to be perfect. They're the ones who started experimenting while it was still imperfect — and kept going.