March 24, 2026

Earn the Room: Lessons in technology, trust, and private markets

This e-book includes edited excerpts from a conversation between Ed Brandman, Founder and CEO of ToltIQ, and Marc Andrew, Founder of The Private Markets Forum and Host of the Modern Capital Podcast. To listen to the audio version of the full conversation, please visit: www.thebrief.private-markets.com/p/the-20-trillion-pdf-problem-and-the-man-who-s-fixing-it




PART ONE: THE FOUNDATION

Learn the Business First

When I joined KKR, I brought lessons from JP Morgan, Robertson Stephens, and a short stint at PwC. But the senior partners were direct about something from the start: my experience was in public markets. This was private. Different pace, different regulatory framework, different culture.

So I listened. A lot. I had the CFO, Bill Janicek; my boss, Perry Gulkin; ultimately Todd Fisher; and several investment professionals—Mike Michelson, Jamie Green, and others—who were willing to talk to me about deals over lunch and answer questions when I walked into their offices. I had to do my own homework on top of that.

One of the things that still stands out to me from those early years: I walked into a world that wasn't zero trust. It was the opposite. You trusted everyone and only restricted access when you had a reason to. If you were inside a PE firm in 2007, you could sit in on an IC meeting, read the portfolio company reports, ask questions. I used that.

I spent one to two hours every day just keeping up with what KKR was doing on the product side, the client side, understanding how the finance team worked.

That's how I built my credibility—and why I never went to the business with a proposal before I understood what they were actually trying to do.


No Upside to Waiting

When I got the basics running—video conferencing, consistent email, phones upgraded from copper-wired to IP—that was when I could go to the business and say, here's how I think technology can actually help you. Not before.

We were very deliberate. We didn't try to solve everything at once. There was a team in capital raising that wanted a better way to track institutional investors—how many conversations they'd had, how the whole CRM process worked. Those were the earliest days of what Salesforce would eventually become. We didn't like what was in the market, so we built our own. That gave me eyes and ears into what the business was actually doing, and it opened the door.

One of the senior partners who had grilled me hardest during the interview process—Paul—ended up being one of the most collaborative. He came to me with 30 portfolio companies that needed quarterly analysis before reports went to LPs. He wanted to streamline that process and asked me to partner with him on it.

That set the tone. I was largely successful at KKR because of the partnerships I built with senior people who weren't necessarily technophiles but were willing to embrace change.

Over eleven years, deal teams started pulling me into diligence exercises. I got named to the boards of two portfolio companies. You earn that access by being useful, not by showing up with a roadmap.


PART TWO: THE REAL PICTURE

Waiting Is Not a Strategy

I hear a version of the same argument a lot right now: this AI stuff is moving so fast, I'll wait six to nine months until the dust settles and then make a decision. I understand the instinct. But the reality is that a firm waiting even six to nine months isn't building up new muscle memory. It's not experimenting, it doesn't know where the real pitfalls are, and its whole organization isn't leveling up.

You hear words like hallucinations and security risk in the news, but if you're not actually using these tools, you don't really know what those terms mean in practice. You're managing a perception, not a reality.

We went through this with the cloud. KKR was an early adopter of Box. We were competing with the largest investment banks for tech talent. Maintaining file servers is a thankless job—I wanted those people working on problems that actually moved the business forward. It wasn't perfect—cloud security was a whole new challenge compared to four-wall physical security. But the willingness of our team, and frankly of the users, to try it paid off quickly. People stopped carrying USB drives around. They could self-provision document access. Compliance had an audit trail they could check themselves. Early adoption of meaningful technology puts you ahead of the curve and tends to keep you there longer than most people expect.

The lesson carries straight to AI. There's no real upside to delaying. If anything, the longer you wait, the further behind you fall.


It's Not a Search Engine

The first thing most people do with AI—and I get it, it's a logical starting point—is use it the way they use Google. Type in a question, get an answer. But if you stay at that level, you're barely scratching the surface.

AI isn't just about information retrieval. It's about connecting dots you can't see yourself. It's about generating content that augments your work. And when you start putting all of that together—whether you're in marketing, sales, or product design—there's a leveling up that happens across your entire organization. It permanently changes how you work.

People have to wrap their heads around the fact that their jobs will look different. That train has left the station. Their value to an organization isn't just their own knowledge anymore. It's how they complement that knowledge with what they can do alongside an AI tool.

I think about the BlackBerry. If you'd told someone back in the BlackBerry days that it was going to change the nature of the way we buy things, interact, schedule, and communicate—they would have laughed at you. It had a terrible camera. But that's exactly what happened, because the underlying capability kept improving and the friction kept dropping. We're in that same moment with AI right now. If we're not already at the tipping point, I think it plays out fairly significantly over the next 24 months.


The Security Conversation Is Real—But Not Paralyzing

I don't envy anyone sitting in the CTO or CIO seat right now. The security questions around AI are legitimate. But broadly speaking, the enterprise-grade solutions from the major model providers are world-class. You'll pay more for them, you'll need some administrative setup, and you'll have to manage connectors to your internal data. That's all manageable.

The mistake a lot of firms made early on was that they moved too slowly on the enterprise side—so people just started using the free versions on their personal accounts. They wanted to learn the technology, which is understandable, but they weren't thinking about what was happening in the background. Models were training on data in the free tier. Connectors were opening up backdoors into company environments. Those were real gaps.

The good news is that AI security, broadly, isn't fundamentally harder than what we dealt with during the cloud transition or the shift to mobile. Those felt scary at the time too.

The exception is agents. Autonomous agents introduce a genuinely new risk category that previous technologies didn't. When an agent has tools to connect to the outside world and is operating with some degree of autonomy, you can no longer guarantee exactly what it's doing at any given moment. That's not a reason to avoid agents—but it's a reason to be clear-eyed going in.

What people were calling agents eighteen months ago were not real autonomous agents. What the latest reasoning models are producing now is something different.


PART THREE: THE MARKET LENS

Vertical Wins. Horizontal Gets Absorbed.

The question I keep coming back to when I think about what's defensible in the age of AI is pretty simple: are you doing something the major model providers can't or won't do for you?

If you're a horizontal solution—essentially a smarter enterprise search with an AI layer on top—I genuinely struggle to see how you hold off Microsoft, Google, Anthropic, or OpenAI over time. They have the distribution, the capital, and the models.

What I think is defensible is deep vertical focus. Marrying specific, complex business workflows to AI capabilities in a way that requires real domain knowledge to do well.

In our case, that means understanding what a SIM actually looks like for a GP stakes deal versus an industrial company, knowing how scanned legacy documents behave in a VDR, knowing what a credit agreement looks like when a covenant is being bent. That's not something a general-purpose tool is going to get right on day one.

I'm also skeptical of the pure open-source thesis right now. There are solid use cases for open-source models, but the leading proprietary models are differentiated not just at the LLM level—they combine vision, code generation, artifact creation, and a growing toolset around all of it. Meta's Llama effort has been good, but it hasn't displaced the frontier models.

Staying at the leading edge of the model providers while going deep on a specific industry problem—that's the combination I'd bet on.


Peak Seats: The SaaS Model Is Changing

I'll probably regret saying this publicly, but I'll say it anyway: I think we've hit peak per-seat, per-month SaaS revenue. If you looked at the total universe of SaaS seats today, I don't think that number is going up. Maybe the chairs move around, but the headcount of people paying for software subscriptions isn't growing the way it was.

The reason SaaS valuations were what they were is that these businesses were both cash flow machines and growth machines.

If AI connectors from OpenAI, Anthropic, and Google make it easy to interact with SaaS platforms without adding seats—because you're consuming data, not inputting it—that changes the fundamental growth assumption.

Some core SaaS revenue is sticky. The workflow is embedded deeply enough that it's not going anywhere soon. But the premium for growth may not hold at the same levels.

We don't price on a per-user basis. ToltIQ is usage-based. We look at a firm's deal volume over the course of a year and price based on that tier. A small, highly active team can pay as much as a large team with lower volume. That's intentional—I care about usage, and at the end of the day, I'm paying for tokens. That cost doesn't go away regardless of how many people are logged in.
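The tiered, volume-based model described above can be sketched in a few lines. This is a minimal illustration, assuming pricing keyed to annual deal volume rather than seat count; the tier boundaries and dollar figures are invented for the example and are not ToltIQ's actual rates:

```python
# Hypothetical usage-based pricing: tiers keyed to annual deal volume,
# not to seat count. All boundaries and prices are invented for illustration.

TIERS = [
    # (max_deals_per_year, annual_price_usd)
    (10, 50_000),     # small, lower-volume firm
    (40, 150_000),    # mid-volume
    (100, 300_000),   # high-volume
]
OVERFLOW_PRICE = 500_000  # above the top listed tier


def annual_price(deals_per_year: int) -> int:
    """Price is driven by deal volume, so a small, highly active team
    can pay as much as (or more than) a large team with lower volume."""
    for max_deals, price in TIERS:
        if deals_per_year <= max_deals:
            return price
    return OVERFLOW_PRICE


print(annual_price(8))    # large team, low volume -> bottom tier
print(annual_price(60))   # small active team, high volume -> higher tier
```

The point of the structure is that the cost driver is token consumption, which scales with deals processed, not with how many people are logged in.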


PART FOUR: THE FOUNDING STORY

Why Due Diligence, and Why Now

My son Matthew brought me back from retirement. He was running cybersecurity at Duolingo and called me one day after one of my national park road trips. He said he thought generative AI was going to change the nature of how people work. I was skeptical—my closest prior exposure to AI was algo trading back in the NASDAQ market-making days. But he said he'd put in the sweat equity on nights and weekends if I was willing to explore it.

So I pulled together a few former KKR colleagues and we started experimenting. This was GPT-3.5, 4,000 tokens at a time. We'd get excited about 8,000. It's almost hard to reconcile that with where we are today.

Matthew pushed me to think about what problem to solve, and kept pointing me back to diligence. My first reaction was that it was a hard problem—document-heavy, complex, hard to even get documents into a model. But we tried it, and it worked.

A lot of people assume the biggest AI opportunity in finance is back-office automation. I'd push back on that. Operational processes are hard to automate because the data isn't clean, the rules don't all hold, and introducing AI into a tech stack that wasn't built for it sets you up for a bar you're unlikely to clear—and may not even come close to.

Diligence is different. It hasn't changed since the beginning of time. The first deal I was ever brought over the wall on at JP Morgan, we literally walked into a room full of boxes of documents. Decades later, the workflow is essentially the same—just with a VDR instead of a cardboard box.


The Bet We Made on Documents

Here's what actually happens in the field. The whole world moved to virtual data rooms—Intralinks, Datasite—but what does every investment professional do even today? They zip up the VDR contents, pull it into a local drive, and work from there. Because the VDR isn't where the rest of their content lives. The expert network calls are somewhere else. The research reports are somewhere else. The models they've built are somewhere else.

We spent a lot of time on that problem—building a pipeline for VDRs that could hold up to 5,000 documents. The question we kept coming back to was: what if a smart investment professional, with a head full of things to check and dig into, could just talk to all of those documents regardless of where the knowledge was?

The firms I'd call competitive in our space mostly started with connectors and web scraping. We made a different bet. My read was that the major model providers were going to solve connectors and web search on their own because their consumer products would demand it. So we focused on the hard document problem—the one they weren't going to solve for us.

And then the models kept getting better. When we started, we had no vision capabilities. Charts and graphs were invisible. Then a model came out three months later, and then another three months after that. A million tokens on day one was $25. Today that same million tokens can cost anywhere from a few dollars to $5 depending on the model—roughly a 90% compression, with meaningfully smarter models doing the work. Faster, smarter, and cheaper at the same time. You normally never get all three.


PART FIVE: THE PLATFORM

Due Diligence Used to Be a Reading Assignment

Marc Andrew described it well: due diligence used to be a reading assignment. You'd log into the VDR, download what you could, read everything you could get your hands on, synthesize it in your head or across a committee, and try to build a picture of the company.

With ToltIQ, that zip file goes into the platform. Maybe you're also connecting SharePoint. Maybe you're uploading expert network call transcripts related to the sector or the company. All of that drops into a deal folder. We auto-ingest the documents, auto-tag them, and build the connections between the content when we construct the vector database.
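The flow described above can be sketched roughly as follows. This is a hypothetical illustration, not ToltIQ's implementation: the tagger is a naive keyword lookup standing in for model-driven tagging, and simple word overlap stands in for the vector-database similarity the real platform builds.

```python
# Illustrative sketch of a deal-folder pipeline: documents from a VDR,
# expert-call transcripts, and other sources land in one folder, get
# auto-tagged, and become jointly queryable. All names and logic here
# are invented for illustration.
import re


def tokenize(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))


def auto_tag(text: str) -> list[str]:
    """Naive keyword tagger (stand-in for model-driven tagging)."""
    keywords = {"lease": "real-estate", "covenant": "credit",
                "transcript": "expert-call", "ebitda": "financials"}
    words = tokenize(text)
    return sorted({tag for kw, tag in keywords.items() if kw in words})


class DealFolder:
    def __init__(self) -> None:
        self.docs: list[tuple[str, set[str], list[str]]] = []

    def ingest(self, source: str, text: str) -> None:
        self.docs.append((source, tokenize(text), auto_tag(text)))

    def query(self, question: str) -> str:
        """Return the source most relevant to the question. Word overlap
        here; a real system would rank by embedding similarity."""
        q = tokenize(question)
        return max(self.docs,
                   key=lambda d: len(q & d[1]) / len(q | d[1]))[0]


folder = DealFolder()
folder.ingest("vdr/credit_agreement.pdf",
              "Covenant terms and EBITDA definitions for the borrower")
folder.ingest("calls/expert_call_3.txt",
              "Expert call transcript on market share and competition")
print(folder.query("which document covers covenant terms"))
# prints "vdr/credit_agreement.pdf"
```

The key idea the sketch captures is that once everything lives in one queryable folder, the professional asks questions across sources rather than reading them one at a time.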

Then the investment professional has a set of paths to work from—all designed to be collaborative.

You could be starting a team chat for financial or commercial diligence. You could be iterating on a purchase and sale agreement. You could be a private credit investor pulling data for your downstream credit platform, or a real estate investor managing leases. We try to meet the professional where they actually are.

The other thing that's now possible: if the CEO of a company you're evaluating has been on the podcast circuit, you can load those transcripts alongside the strategic plan and compare what they're saying publicly with what they're committing to internally. You couldn't do that before. Nobody was sitting down to transcribe and cross-reference hours of audio. Now it's just part of the workflow.


The Secret Sauce Isn't the Model

Different asset classes look very different from a document standpoint. Secondaries diligence doesn't look like primary diligence. Infrastructure deals come with geospatial documents. Real estate has leases that behave differently than credit agreements. What you're dealing with document-by-document matters a lot.

The thing I think people are still underestimating as they build out their AI ecosystems is how much knowledge lives inside documents that goes well beyond the raw data you can extract. A 10-K has interesting tables. But what's more powerful is when you connect those tables to the surrounding text, to the chart on the same page, to the footnote three pages later that qualifies the number. That's a different kind of intelligence.

How you deconstruct a document, reassemble it, and tag it correctly—that's our secret sauce. It's where we put a disproportionate amount of time, and it's what we keep evolving. A vision model can see a geospatial document, but knowing how that document fits into the broader story of the deal is a harder problem.

We've continued to double down on being excellent at the thing we think we add the most value to, rather than spreading across everything a deal team touches. That focus is intentional.


The Models Keep Surprising Us

The smarter the models get, the more useful they become—not just because they're processing the same content better, but because they keep training on more of the world's knowledge.

Early on, one of our clients was looking at a set of credit agreements and asked if they could query the model about something called the Chewy Phantom Equity Guarantee. I had no idea what that was. The early model did what early models did—it hallucinated a plausible-sounding answer. Nine months later, a new model came out. Same question, spot-on answer. The model had clearly been trained on HoldCo versus OpCo debt structures, on equity guarantees, on the specific mechanics of these instruments.

That's a remarkable shift. You can now ask a model, with a document loaded in front of it, whether a specific industry-standard term is followed or broken in that agreement. A year ago that wasn't reliable. Now it is.

There's a ceiling argument that keeps getting made—that models will plateau, that the scaling laws will run out. All the research to date says that hasn't happened. And most of the training so far has been done on US content. Imagine when the models are just as fluent in French financial documents, Egyptian legal contracts, Saudi deal structures. The universe of knowledge these models will be able to draw on is still expanding. We're not close to the edge of it.


Stay on the Train. Don't Pick the Engine.

We made a deliberate decision early on not to get married to any one model provider. We don't want to be a GPT-only shop or an Anthropic-only shop. Let them compete. Our job is to stay current with whoever is building the best tools and give our clients the flexibility to choose if they want to.

What's been more interesting to me is how our clients actually use the platform. I'd estimate that only about half the use cases are ones I would have anticipated. People are creative when they get real access to AI tools and start exploring problems on their own terms.

I would have assumed that investment teams doing similar types of diligence in similar sectors would behave roughly the same way. They don't.

The way a firm thinks about its IC process, how long it intends to hold an asset, the specialization of the team—all of it shapes what they look for, how they search, and what they consider relevant when assessing a company's future.

That's taught us something about the product too. The platform needs to be flexible enough to accommodate workflows we haven't fully anticipated yet, not just the ones we designed for.


PART SIX: THE CLOSE

Build the Ecosystem. Don't Try to Own It.

The DealEngine partnership gets at something I feel strongly about: there is no single platform right now that solves everything for a deal team, and anyone telling you otherwise isn't being straight with you.

DealEngine has built a focused outside-in sourcing workflow—connectors, strong technology, and a clear point of view on how deals get found and initially evaluated. What we do takes over once you're inside the VDR. There's natural handoff in both directions—outputs from sourcing feed into diligence, and things that surface in diligence can feed back into how you think about the next sourcing cycle.

My philosophy is straightforward: I want to be excellent at a focused set of things, not mediocre across many. I have no interest in building a 100-person team trying to do everything. I'd rather do fewer things better, regardless of how much capital I have available.

That means being open to partnerships with firms that are better at their part of the deal lifecycle than I am, and building the infrastructure—API, MCP—that makes connecting with us easy for vendors who want to.

What I tell my team is: if we can be a trusted, high-quality part of the ecosystem rather than trying to be the whole ecosystem, we'll be a more important part of how the deal lifecycle evolves in the age of AI than if we tried to own everything and did none of it well.