Federico Negro · 2026-03 · 5 min read

Most AEC AI Apps Are Just Claude Wrappers

The only defensible AI product in AEC has proprietary data or a proprietary algorithm. Everything else is a wrapper that won't last.

The wrapper problem

Every week there’s a new AI startup for architects. A chatbot that reads your specs. A tool that generates space programs. An app that parses submittals or writes meeting minutes.

Most of them are wrappers — a user interface on top of a large language model, with some prompts baked in. The LLM does the work. The app just passes your input through.

This isn’t a business. It’s a feature — and one that’s getting easier to replicate every month. The build-or-buy calculus is shifting quickly toward build, in the user’s favor.

The two-part test

Before you buy an AI tool for your practice, ask two questions:

1. Is the data proprietary?

Does the product sit on top of a dataset you genuinely can’t access yourself? A curated, licensed, or internally generated body of information that the model can’t reproduce from public knowledge?

UpCodes passes this test. They aggregated building codes from every jurisdiction in the country — something that was technically public but practically inaccessible. No one was going to download, parse, and cross-reference thousands of municipal amendments on their own. The data is the product. And the mass user base creates a network effect: the more people use it, the more feedback loops catch errors and fill gaps. That’s a real moat.

Material Bank passes it too. They built a logistics network for physical product samples — a database of materials tied to a fulfillment operation that ships samples overnight, for free. The data layer (what’s available, what designers are specifying, trend signals from order patterns) is proprietary because it’s generated by the platform itself. You can’t replicate that with an API call.

Now consider a product that doesn’t pass. An “AI-powered spec writer” that reads your project documents and generates outline specifications. That’s a prompt. The IBC is public. CSI MasterFormat is well-known. The model already knows how to structure a three-part spec section. There’s no dataset underneath that you couldn’t point Claude at yourself.

2. Is the algorithm proprietary?

Does the product perform a computation that a general-purpose model can’t? A real algorithm — not a prompt chain, not a fine-tuned personality, not a branded UX around a standard LLM call.

A structural solver that calculates load paths — that’s a real algorithm. An energy simulation with validated thermodynamic models — real. A generative layout engine with actual constraint satisfaction and optimization — real.

But most AEC AI tools don’t have algorithms. They have prompts. “Intelligent document parsing” is the LLM reading a PDF. “Smart space programming” is a prompt with a table template. If you could replicate the output by writing a SKILL.md file and pointing Claude at the same data, it’s not an algorithm. It’s a workflow.
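To make that replication test concrete, here is a sketch of what such a file might contain. Everything below is hypothetical — the file name, the steps, and the output format are invented for illustration, not taken from any real product:

```markdown
# SKILL: Outline Spec Writer (hypothetical example)

When asked to draft an outline specification:

1. Read the project documents the user provides (drawings, design narrative, program).
2. Organize the output by CSI MasterFormat division and section number.
3. Write each relevant section in three-part format: General, Products, Execution.
4. Flag any assembly that needs a code check against the governing IBC edition.
5. Return a table: Section Number | Section Title | Outline Text.
```

If a vendor’s “AI spec writer” produces output you could get from a file like this plus a general-purpose model, it fails the algorithm test.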

Workflow-only tools are the most vulnerable category. Think of Canoa in furniture specification, or Webflow in web development. They built polished interfaces around tasks that are increasingly trivial for a model to perform directly. The workflow was the product when the underlying task was hard. Now that the task is easy, the workflow is just overhead — a middleman between you and the model.

If the answer to both questions is no — no proprietary data, no proprietary algorithm — then the product is a wrapper. It’s a prompt and a UI. And the moment the underlying model improves or a competitor writes a better prompt, the product has no moat.

What changed

Two things made the wrapper model obsolete:

The models got good enough. Claude, GPT, Gemini — they can read drawings, parse specs, write structured output, and follow multi-step workflows. A year ago you needed custom pipelines to do this reliably. Now a well-written prompt handles it.

The tools opened up. Claude Code, skills, MCP servers — these let you connect a model directly to your data and your workflows without writing an app. You can build a spec writer, a submittal parser, or a site analyst in an afternoon, with a markdown file and an API connection. No vendor, no subscription, no lock-in.

The combination means that the value proposition of most AEC AI tools — “we made the LLM easier to use for your specific task” — is no longer defensible. The model is already easy to use. The interface is the terminal, or a chat window, or your IDE. You don’t need a middleman.

Why this matters for firms

If you’re evaluating AI tools for your practice, the wrapper test saves you time and money:

|           | Wrapper                                          | Real product                                         |
| --------- | ------------------------------------------------ | ---------------------------------------------------- |
| Data      | Public or easily scraped                         | Licensed, generated, or proprietary                  |
| Algorithm | Prompt chain                                     | Validated computation                                |
| Moat      | None — replicable in a day                       | Years of data collection or R&D                      |
| Risk      | Disappears when the model catches up             | Durable as long as the data or algorithm stays ahead |
| Cost      | Subscription for something you could do yourself | Subscription for something you can’t                 |

The firms that will get the most out of AI aren’t the ones buying the most tools. They’re the ones that understand what the models can already do — and only pay for what the models can’t.

Build skills, not dependencies

Instead of subscribing to five wrapper apps, build skills that encode your firm’s actual knowledge:

  • Your project naming conventions
  • Your QA/QC checklists
  • Your deliverable templates
  • Your fee calculation logic
  • Your internal standards

That’s the real proprietary layer — your practice’s accumulated judgment, encoded in a format the model can follow. A SKILL.md file that reflects how your firm actually works is more valuable than a startup’s generic “AI for architects” tool. And once you’ve built them, distributing skills across your team is straightforward.
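As a sketch of what that encoded judgment might look like, here is a minimal firm-specific skill file. The conventions shown — the naming pattern, the checklist items — are invented placeholders; your own standards would replace them:

```markdown
# SKILL: Drawing Set QA/QC (hypothetical example)

Before any deliverable goes out:

- File names follow [PROJ#]-[PHASE]-[SHEET].pdf
  (e.g. 2412-DD-A101.pdf — an invented convention).
- Every sheet carries the current issue date, and revision clouds mark changes.
- Cross-check door and window tags against the schedules.
- Confirm the cover sheet index matches the sheets actually in the set.
- Log any exception in the project QA record before issuing.
```

None of this is clever prompting. It’s institutional knowledge, and that’s exactly why no vendor can sell it to you.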

The wrapper apps will consolidate, pivot, or shut down. Your skills stay with you.