LBO Modelling Automation in PE: A Practical Guide in 2026
There's a telling moment that happens in most private equity firms when a new deal drops. The CIM lands in the analyst's inbox at 4pm. The VP wants a first-pass model by 8am. Within minutes, the analyst has ten browser tabs open, half a dozen Excel workbooks, and about twelve hours to build something an investment committee will scrutinise in a room full of people who have spent decades doing exactly this.
That pressure is real, and the work it produces is often messier than anyone admits.
Models get rebuilt from templates that were already rebuilt from older templates.
Assumptions get transcribed from PDFs manually. Circular references get patched at midnight. Error checks get skipped because there isn't time.
AI doesn't eliminate this situation. But it does change the shape of it—substantially, if you use it correctly.
This article covers what that looks like in practice: the specific tools available today, where LLMs fit in the stack, and why the teams getting the most out of this aren't trying to take humans out of the process. They're restructuring what the humans spend their time on.
What the Bottleneck Actually Is
Before looking at solutions, it helps to be clear about where the time actually goes in an LBO model. There are two distinct layers of work, and they respond to automation very differently.
The first is mechanical: sources and uses, debt tranches, cash sweep mechanics, PIK toggles, management rollover, returns waterfall, IRR and MOIC calculations. This follows a mostly predictable structure. An experienced VP could sketch it on a whiteboard in twenty minutes. Building it cleanly in Excel—with properly structured formulas, working error checks, no hardcoded numbers in formulas, sensible scenario architecture—takes several hours.
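To make the mechanical layer concrete, the two headline return metrics can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation, and the deal figures are invented.

```python
# Minimal sketch of the mechanical layer's end point: MOIC and IRR
# from a set of equity cash flows. All figures are illustrative.

def moic(invested: float, distributions: list[float]) -> float:
    """Multiple on invested capital: total distributions / equity invested."""
    return sum(distributions) / invested

def irr(cash_flows: list[float], low: float = -0.99, high: float = 10.0) -> float:
    """Annual IRR via bisection on the NPV function.

    cash_flows[t] is the net equity flow in year t (outflows negative).
    """
    def npv(rate: float) -> float:
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    for _ in range(100):
        mid = (low + high) / 2
        if npv(mid) > 0:     # NPV still positive: the root lies above mid
            low = mid
        else:
            high = mid
    return (low + high) / 2

# Illustrative deal: 100m of equity in, 250m back at exit in year 5.
flows = [-100.0, 0, 0, 0, 0, 250.0]
print(round(moic(100.0, [250.0]), 2))   # 2.5 (a 2.5x MOIC)
print(round(irr(flows), 4))             # 0.2011 (about a 20.1% IRR)
```

The point of the example is how little judgment lives in this layer: once the cash flows are agreed, the arithmetic is fixed, which is exactly why it automates well.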
The second is analytical: translating a CIM or management accounts into realistic assumptions, stress-testing revenue drivers against sector benchmarks, identifying where the debt package creates stress at different EBITDA trajectories, and forming a view on what the business is actually worth at what price. This requires judgment. It cannot be automated, and you wouldn't want it to be.
The mistake most teams make when first approaching AI tools is conflating the two. The goal isn't to automate the judgment—it's to compress the mechanical work so that analysts and associates have more time and headspace for the parts that actually differentiate one firm's analysis from another.
The Stack: Where LLMs Fit
There are several entry points where LLMs can be applied to LBO work (this article uses Claude, as the current frontier model, for its examples). They are not all equivalent, and the right choice depends on what your team needs.
Claude in Excel
The most immediately accessible option for most deal teams is now built directly into the tool they already spend most of their day in. Claude is available as an Excel add-in through the Microsoft 365 suite, and for PE work the use cases are concrete.
Within a live model, you can highlight a block of cells and ask Claude to audit the formula logic, flag circular reference risks, or explain what a complex nested formula is actually doing. For a VP reviewing an associate's model before an IC meeting, this alone is meaningful—catching a mis-linked cell in a debt schedule that propagates incorrectly through to the returns waterfall is exactly the kind of error that slips through manual review at midnight.
More substantively, Claude in Excel can build model components from a brief description of what you need. Describe the amortisation schedule for a €300m TLB at EURIBOR + 425bps with a 1% annual cash sweep and a 5-year maturity, and Claude will generate the formula structure. You review it, test it, and drop it into the model. The analyst understands what it does—they just didn't spend forty minutes writing it.
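For a sense of what sits behind a schedule like that, here is a hedged Python sketch of the TLB described above. It makes several assumptions not stated in the brief: the 1% is read as mandatory annual amortisation of face value, excess free cash flow is swept in full, and the EURIBOR level and cash flow figures are invented.

```python
# Sketch of the TLB described above: 300m at EURIBOR + 425bps, 1% mandatory
# annual amortisation of face value, 5-year bullet maturity, excess cash
# swept against the balance. EURIBOR and FCF figures are assumptions.

EURIBOR = 0.03          # assumed flat 3% EURIBOR
MARGIN = 0.0425         # 425bps

def tlb_schedule(principal: float, years: int,
                 fcf_by_year: list[float]) -> list[dict]:
    """Year-by-year rows: opening balance, interest, amortisation, sweep, closing."""
    rows, balance = [], principal
    for year in range(1, years + 1):
        interest = balance * (EURIBOR + MARGIN)
        mandatory = min(principal * 0.01, balance)           # 1% of face per year
        excess = max(fcf_by_year[year - 1] - interest - mandatory, 0.0)
        sweep = min(excess, balance - mandatory)             # 100% cash sweep
        closing = balance - mandatory - sweep
        if year == years:                                    # bullet: repay remainder
            sweep += closing
            closing = 0.0
        rows.append({"year": year, "opening": balance, "interest": interest,
                     "mandatory": mandatory, "sweep": sweep, "closing": closing})
        balance = closing
    return rows

schedule = tlb_schedule(300.0, 5, fcf_by_year=[40, 45, 50, 55, 60])
```

The review step the article describes is checking exactly these mechanics: that interest accrues on the opening balance, that the sweep cannot overpay the tranche, and that the bullet clears at maturity.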
For sensitivity tables, Claude can set up the range definitions, link them to the correct driver cells, and format the output. For scenario architecture—base, downside, and stressed cases—Claude can replicate the structure across sheets and flag where assumptions are hardcoded versus formula-driven.
One practical note: Claude in Excel works best when you treat it as a rigorous co-pilot rather than an autopilot. The output needs to be read and tested, not just accepted. But the discipline of reviewing AI-generated formulas is generally faster than writing them from scratch, and it forces a level of documentation that self-built models often lack.
Claude via the API (for Custom Workflows)
For firms that want something more integrated than an Excel add-in, Claude's API is where purpose-built deal workflows get built. The most common applications at PE firms today:
Document ingestion and assumption extraction. Upload the CIM, audited accounts, and management presentation. Claude reads the documents, pulls out historical revenue, EBITDA, working capital days, capex, existing debt quantum and terms, and produces a structured assumption sheet. Crucially, it can flag where numbers differ across sources—management accounts showing one EBITDA figure, the CIM showing another—and note where assumptions rely on projections rather than historical fact. The analyst reviews the output and challenges what doesn't look right. They don't spend three hours transcribing numbers.
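The cross-source flagging step needs no AI at all once the figures are extracted. A hypothetical post-processing check, with invented document names and numbers, might look like this:

```python
# Hypothetical cross-source consistency check over extracted figures:
# flag any metric whose values diverge across source documents by more
# than a relative tolerance. All names and numbers are invented.

def flag_discrepancies(extracted: dict[str, dict[str, float]],
                       tolerance: float = 0.02) -> list[str]:
    """extracted maps metric -> {source_document: value}; returns warnings
    for metrics whose values diverge by more than `tolerance` (relative)."""
    warnings = []
    for metric, by_source in extracted.items():
        values = list(by_source.values())
        lo, hi = min(values), max(values)
        if lo and (hi - lo) / abs(lo) > tolerance:
            srcs = ", ".join(f"{s}={v}" for s, v in by_source.items())
            warnings.append(f"{metric}: sources disagree ({srcs})")
    return warnings

sample = {
    "EBITDA_FY24_m": {"CIM": 52.0, "management_accounts": 48.5},
    "revenue_FY24_m": {"CIM": 310.0, "management_accounts": 309.0},
}
print(flag_discrepancies(sample))   # flags the EBITDA gap, not revenue
```

A 7% gap between the CIM's EBITDA and the management accounts is exactly the kind of discrepancy the analyst should be challenging rather than transcribing.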
Debt schedule and model scaffolding generation. Once assumptions are agreed, Claude can generate Python or VBA code for the core model structure: income statement, balance sheet, cash flow, debt schedules, returns waterfall. This isn't magic—it's pattern recognition applied to a well-defined problem. But the code is consistent, commented, and easier to audit than something built under time pressure at 2am.
Interpretive memos. After the model is built and scenarios are run, Claude can produce a first draft of the written investment case: what the returns are sensitive to, where the deal breaks, what the implied valuation is at different exit multiples. The analyst edits it rather than writing it from scratch.
Claude in PowerPoint and Word
Anthropic's Microsoft 365 integration extends beyond Excel. Claude in PowerPoint can take a finished model and draft the IC presentation—slide structure, return summary, scenario tables formatted for a committee audience. Claude in Word can draft the investment memo against a firm's standard template, pulling in the numbers and the narrative from the model and the diligence process.
The practical implication: the 48 hours between "model finished" and "IC pack ready" compresses significantly. The analyst's job becomes editing a coherent first draft rather than assembling one from scratch.
Claude Code
For firms with engineering resource or technically capable analysts, Claude Code enables more ambitious build-outs: automated model generation pipelines, scripts that pull data directly from financial data providers (Moody's, Dun & Bradstreet, S&P, and others now have MCP connectors that work with Claude), and internal tooling that standardises assumption sets across deal teams.
This is the layer where firms start building durable competitive advantage rather than just saving hours on individual deals.
The Human-in-the-Loop Principle
This is the part most AI vendor conversations gloss over, so it's worth being direct: every meaningful application of Claude to LBO modelling requires active human oversight, and not as a compliance formality.
There are three specific reasons for this.
AI models can be confidently wrong. Claude produces fluent, well-structured output. That makes errors easy to miss if you're not checking closely. A debt schedule that looks correct and is formatted properly can still contain a formula logic error that a qualified analyst would catch in thirty seconds on review. The fluency of the output is not a proxy for its accuracy.
Judgment cannot be delegated. The model is a representation of a thesis about a business. Whether that thesis is correct—whether the revenue assumptions are credible, whether the management team has the track record to execute the plan, whether the entry price is sensible given the risks—is a human responsibility. Claude can tell you what the IRR is at 8x exit. It cannot tell you whether 8x is a reasonable exit assumption for this particular business in this particular market cycle.
IC accountability is personal. When a deal goes to committee, the analysts and VPs presenting it are accountable for the numbers and the reasoning. AI tools change where the time goes, not who owns the outcome.
In practice, the teams implementing this most effectively have settled on a clear division: Claude produces drafts, analysts review and own. The review step isn't optional or perfunctory—it's the point. The AI compresses the drafting time; the human applies the judgment.
A useful check before any Claude-generated model output goes to IC review: can every assumption in the model be traced to a source document, and can the analyst explain why each number is the right one? If not, the review hasn't happened yet.
Ready Solutions: The Market Today
For firms that don't want to build workflows from scratch, there are several commercial platforms worth knowing about.
Mosaic
Mosaic is the most purpose-built option in the market and the one with the widest institutional adoption. Used by Warburg Pincus, CVC, Bridgepoint, New Mountain, Evercore, and others, it raised an $18m Series A in April 2026 and is probably the closest thing to a standard platform for AI-assisted LBO modelling.
What makes Mosaic different from general-purpose AI tools is its deliberate choice to use deterministic, rules-based algorithms for the model mathematics rather than generative AI. Because the mathematics is rules-based, the output can't hallucinate: the debt schedule arithmetic is always correct, the formulas follow best-practice conventions, and the Excel download has working formulas with proper colour-coding and no hardcodes in formula cells.
Mosaic Autopilot can initiate model creation from an email prompt. Mosaic Vision reads a screenshot of financial projections from a CIM and immediately allows you to adjust growth rates and add transaction assumptions. The platform reports customers achieving up to 20x faster completion of core deal analyses.
The transparency angle matters too: everything is downloadable to Excel. There's no black box, and the output is fully customisable. For senior professionals who want to verify the maths before it goes anywhere near an IC, that auditability is not a minor point.
AlphaSense
AlphaSense doesn't do LBO model construction, but it sits at the front of the deal workflow in a way that feeds directly into the modelling process. It aggregates broker research, expert call transcripts, SEC filings, earnings calls, and news into a single searchable interface, with AI-powered summarisation and the ability to query across thousands of documents at once.
For the market intelligence and sector benchmarking that underpins a credible set of model assumptions, AlphaSense is the tool most serious PE and banking teams use. Its Excel add-in pulls data directly into models.
Dili
Dili focuses specifically on extracting financial data from unstructured documents—audited financials, tax returns, management accounts. The core use case is financial spreading: taking a stack of messy source documents and producing a clean, structured financial history.
For firms where the bottleneck is the early data extraction stage rather than model construction itself, Dili addresses something that Mosaic doesn't focus on.
ToltIQ
ToltIQ takes a data room perspective: you upload the entire VDR and query it with natural language. For operational due diligence—working through legal documents, customer contracts, HR records, compliance materials alongside the financial analysis—ToltIQ covers ground that pure modelling tools don't touch. SOC 2 Type II certified with zero data retention, which matters for confidential deal processes.
The Custom Build Option
The platforms above cover most common PE workflows well. But some firms—particularly those where proprietary modelling methodology is itself part of the investment edge—are building on Claude's API directly, integrating with internal data sources and developing tooling that reflects exactly how their deal teams work rather than adapting to a vendor's structure.
The argument for this approach is that it produces something genuinely proprietary. The argument against it is that it takes engineering resource and time to build, and most PE firms, especially small and mid-sized ones, don't have the capacity or the know-how to do so. The honest answer is that for most mid-market firms, one of the platforms above plus Claude in Excel is probably the right starting point.
A Practical Workflow: What Actually Changes
Putting this together, here's what an AI-assisted deal workflow looks like for a team that has integrated these tools thoughtfully. The time estimates are illustrative, not guaranteed—they vary by deal complexity and team experience with the tools.
Stage 1: Data room ingestion (was 3–4 hours; now 30–45 minutes)
The analyst uploads the CIM, management presentation, and financial statements to the document extraction tool (Dili, AlphaSense, or a Claude API workflow configured for this purpose). The output is a structured assumption sheet with sourcing notes and flagged inconsistencies. The analyst reviews, challenges numbers that look off, and adds sector context that the tool doesn't have.
Stage 2: First-pass model (was 5–7 hours; now 1.5–2 hours)
Using reviewed assumptions, the analyst either uses Mosaic to generate the model scaffold or uses Claude in Excel to build components incrementally. The analyst runs the model, checks the debt schedule mechanics, tests the returns at various scenarios, and verifies that the formula logic is correct. This review step is non-negotiable and takes time—but an hour of reviewing a clean draft is faster than six hours of building from scratch.
Stage 3: Scenario analysis and sensitivity tables (was 2–3 hours; now 30–45 minutes)
Claude generates the scenario structure and sensitivity tables. The analyst adds the case-specific assumptions, runs the scenarios, and produces the summary outputs. Claude drafts the written interpretation: which assumptions drive returns most, where the deal breaks, and how the implied valuation stacks up against exit comparables.
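A back-of-envelope version of such a sensitivity grid, with every input invented and a deliberately simplified capital structure (constant debt, no cash sweep), could look like:

```python
# Illustrative sketch of a Stage 3 sensitivity grid: equity IRR across
# exit multiples and EBITDA CAGRs. All inputs are invented, and the
# capital structure is simplified (debt held constant, no cash sweep).

def equity_irr(entry_ebitda: float, ebitda_cagr: float, exit_multiple: float,
               entry_multiple: float = 10.0, net_debt_pct: float = 0.6,
               years: int = 5) -> float:
    """Back-of-envelope IRR: all enterprise value above debt goes to equity at exit."""
    ev_entry = entry_ebitda * entry_multiple
    debt = ev_entry * net_debt_pct
    equity_in = ev_entry - debt
    exit_ebitda = entry_ebitda * (1 + ebitda_cagr) ** years
    equity_out = max(exit_ebitda * exit_multiple - debt, 0.0)
    return (equity_out / equity_in) ** (1 / years) - 1

# 3x3 grid: exit multiple (x-axis) against EBITDA CAGR (y-axis).
grid = {(m, g): round(equity_irr(50.0, g, m), 3)
        for m in (8.0, 10.0, 12.0)
        for g in (0.03, 0.06, 0.09)}
```

Even this toy version makes the interpretive point the article describes: the grid shows immediately whether the returns case rests on operational growth or on multiple expansion.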
Stage 4: IC preparation (compressed but still substantial)
Claude in PowerPoint drafts the presentation structure. Claude in Word drafts the investment memo against the firm's template. The VP's job becomes editing and challenging a coherent draft rather than assembling one over a weekend. The time saved here is real, and it goes to the things that actually matter: management reference calls, customer conversations, competitive analysis.
Total time from CIM to IC-ready pack: for a reasonably standard buyout, teams using this workflow are getting from 60–70 hours down to 25–35. That's not a marginal improvement. At a firm running 30–40 deal processes a year, it's the difference between stretching the team and running a properly staffed process on every deal.
What Still Doesn't Change
A few things AI tools don't solve, and won't any time soon.
Sector expertise has to come from somewhere. A healthcare services model requires understanding of reimbursement risk, CMS rate changes, and labour market dynamics. A software model requires a view on net revenue retention, sales efficiency, and churn assumptions that are specific to the business, not generic. That expertise lives in your team, in expert networks, and in management conversations—not in any AI tool.
Commercial due diligence is still human work. Talking to former executives, customers, and competitors; forming a view on management quality; understanding why the business has the competitive position it claims to have—none of this gets automated. If anything, the time saved in model construction should go here.
And the investment decision is still a judgment call. The model is a framework for thinking about a business. Whether to buy it, at what price, with what capital structure, is a decision that an investment committee makes based on everything they know—the quantitative analysis and the qualitative picture together. Claude can sharpen the quantitative work. The qualitative judgment remains entirely human.
Where to Start
For a deal team that hasn't yet integrated AI tools into LBO workflows, the most practical starting point is usually Claude in Excel—it requires no new platform procurement, no integration work, and no change to existing model templates. Spend a few weeks using it to audit formulas and generate individual model components. Build intuition for where it helps and where it requires more careful review.
From there, Mosaic is the natural next step for teams that want a purpose-built platform rather than a general-purpose tool. Its Excel output compatibility means it fits into existing workflows without requiring teams to abandon their modelling conventions.
For firms with more ambition—and the appetite to build something that reflects their specific investment process—Claude's API, combined with connectors to financial data providers and the Microsoft 365 add-ins, offers the infrastructure to build something genuinely proprietary.
The goal in all cases is the same: more time on the judgment, less time on the mechanics. The AI does the drafting. The analyst does the thinking. That's a better job, and—done right—it's a better process.
If you're exploring how to implement an AI strategy for your company, or an AI-assisted modelling workflow within your deal team, we'd be happy to walk through what's worked for other firms at different stages of AI adoption.