Build, Buy, or Borrow? The AI Decision Your Team Is Getting Wrong
The classic build vs. buy framework doesn't hold up in the age of foundation models. Here's how to think about it now — and the expensive mistake most teams make.
Every technology decision eventually comes back to the same question: should we build it, buy it, or find another way?
For most of software history, "another way" wasn't really a thing. You built custom software, or you bought a SaaS tool. That was the whole menu.
AI broke the menu.
There's now a third option — and most businesses either don't know it exists or don't know when to use it. That missing option is costing teams months of wasted effort and hundreds of thousands in misallocated budget.
The Three Options, Actually Defined
Build means training or fine-tuning your own AI model. You own the weights, the data pipeline, the infrastructure. You have maximum control and maximum cost. Historically this meant hiring a team of ML researchers and spending 12–18 months on a problem. Today it still means significant engineering investment, just on a shorter timeline.
Buy means purchasing a vertical AI SaaS product. An AI-powered CRM feature, a document processing vendor, a scheduling tool with AI built in. You get a finished product, take it or leave it, at a subscription price.
Borrow means using a foundation model API (OpenAI, Anthropic, Google, Mistral) as the intelligence layer in something you build. You write the application logic, the prompts, and the integrations. The actual reasoning happens in someone else's model, accessed by the token.
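To make "borrow" concrete, here's a minimal sketch of what that intelligence layer looks like from your side: you assemble a prompt around your own logic and send it to a hosted model. The model name, prompt wording, and document-review task below are illustrative assumptions, not a recommendation; the request shape follows the common chat-completions pattern, and the actual network call is left out so the sketch stands alone.

```python
import json

def build_borrow_request(document: str, criteria: list[str]) -> dict:
    """Assemble a chat-style request for a hosted foundation model.

    Everything here is the part YOU build: the prompt, the checklist
    format, the settings. The reasoning itself is borrowed by the token.
    """
    checklist = "\n".join(f"- {c}" for c in criteria)
    return {
        "model": "gpt-4o-mini",  # hypothetical choice; swap in your provider's model
        "messages": [
            {
                "role": "system",
                "content": "You review documents against a checklist and "
                           "reply PASS or FAIL per item, with reasons.",
            },
            {
                "role": "user",
                "content": f"Checklist:\n{checklist}\n\nDocument:\n{document}",
            },
        ],
        "temperature": 0,  # keep review output as deterministic as possible
    }

request = build_borrow_request(
    document="Contract text goes here...",
    criteria=["Has a termination clause", "Liability is capped"],
)
print(json.dumps(request, indent=2))
```

That's the whole "borrow" surface area: a payload and an HTTP call. Everything of value you add lives in the prompt and the surrounding application logic.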
The "borrow" option barely existed three years ago. Now it's often the right answer, and most enterprise teams are still reasoning as if it doesn't exist.
Why Most Teams Get This Wrong
The classic build vs. buy decision was essentially: do we have the time and talent to build this ourselves, or is there a product that does it well enough?
That's still a valid question. But it doesn't account for the new economics of AI.
Here's what changed:
Fine-tuning a model used to require 100K+ examples and a team of ML engineers. Now it can mean 13 examples and a weekend — as I wrote about with TinyLoRA. The cost of "build" dropped by an order of magnitude for certain tasks.
Foundation models can do things no SaaS product existed to do. If you want to automate document review against custom criteria, analyze customer calls for non-obvious patterns, or generate compliance-ready summaries from unstructured data, there often isn't a "buy" option. The product doesn't exist. "Borrow" is the only path.
The "borrow" option compounds. When a new model drops (GPT-5.3, Claude Opus 4.6, Gemini 3.1 Pro), your system gets smarter automatically. You didn't build the intelligence; you just called it. Your competitive position improves without additional investment.
A Practical Decision Framework
Here's how I walk clients through the decision:
Buy when:
- The problem is commodity (scheduling, note-taking, basic CRM intelligence)
- You need it this month, not in three
- The vendor has vertical-specific training data you couldn't easily replicate (medical coding, legal clauses, financial compliance)
- You don't care about owning the logic
Build (train/fine-tune) when:
- You have proprietary data that gives you a real competitive advantage
- The model needs to learn your domain's nuances in ways general models don't handle well
- You're operating at scale where API costs become prohibitive
- You need full control over the model for regulatory or security reasons
Borrow when:
- You need reasoning, language, or vision capabilities beyond what you'd build
- Your use case is novel enough that no SaaS product covers it
- You want to ship in weeks, not quarters
- You're experimenting — you don't yet know if the use case has ROI
The hybrid reality:
Most good AI systems end up being a combination of all three. Borrow the intelligence (foundation model API), build the wrapper (your application logic, prompts, integrations), and buy the plumbing (vector database, observability, deployment infrastructure).
That's not a cop-out — it's genuinely how well-architected AI products are built in 2026.
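The borrow/build/buy layering above can be sketched in a few lines. The class names, the retrieval-augmented question-answering use case, and the stub behavior are all hypothetical; the point is only where the seams fall between what you call, what you write, and what you purchase.

```python
from dataclasses import dataclass

# Borrowed layer: someone else's model behind an API (stubbed here).
class ModelClient:
    def complete(self, prompt: str) -> str:
        return f"[model answer to: {prompt[:40]}...]"  # placeholder response

# Bought layer: off-the-shelf plumbing, e.g. a managed vector database (stubbed).
class VectorStore:
    def __init__(self):
        self._docs: list[str] = []

    def add(self, doc: str) -> None:
        self._docs.append(doc)

    def search(self, query: str, k: int = 3) -> list[str]:
        # Real products rank by embedding similarity; this fake just
        # keyword-matches so the sketch runs without any vendor.
        words = [w.lower() for w in query.split()]
        return [d for d in self._docs if any(w in d.lower() for w in words)][:k]

# Built layer: your application logic, prompts, and integrations.
@dataclass
class AnswerService:
    model: ModelClient
    store: VectorStore

    def answer(self, question: str) -> str:
        context = "\n".join(self.store.search(question))
        prompt = f"Context:\n{context}\n\nQuestion: {question}"
        return self.model.complete(prompt)

svc = AnswerService(ModelClient(), VectorStore())
svc.store.add("Refunds are processed within 14 days.")
print(svc.answer("How long do refunds take?"))
```

Notice the asymmetry: the built layer is small, but it's the only part that encodes your business. The borrowed and bought layers are swappable, which is exactly why the hybrid holds up as models and vendors churn.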
The Expensive Mistake
The most common mistake I see: teams treat AI like they'd treat any other software problem and default to buy.
They sign a $150K/year contract with a vendor who has an "AI-powered" product. Six months later, they realize the vendor's AI doesn't handle their edge cases, can't be customized, and is basically a wrapper around the same OpenAI API they could have accessed themselves for $3,000.
Or worse: they go full build, spinning up a data science team to train a custom model — for a use case that a prompt and a few API calls would have solved in an afternoon.
Both mistakes share the same root cause: the decision-maker didn't know "borrow" was on the table.
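The buy-vs-borrow gap is easy to sanity-check with back-of-the-envelope arithmetic before anyone signs anything. Every figure below is an illustrative assumption (token prices, workload, contract size); plug in your own numbers.

```python
# Back-of-the-envelope: annual SaaS contract vs. direct API usage.
# All figures are hypothetical placeholders, not real vendor prices.

SAAS_ANNUAL = 150_000          # assumed vendor contract ($/year)

price_per_1m_input = 3.00      # assumed $ per 1M input tokens
price_per_1m_output = 15.00    # assumed $ per 1M output tokens
docs_per_year = 100_000        # assumed workload
tokens_in_per_doc = 8_000      # assumed prompt + document size
tokens_out_per_doc = 500       # assumed output size

api_annual = docs_per_year * (
    tokens_in_per_doc / 1e6 * price_per_1m_input
    + tokens_out_per_doc / 1e6 * price_per_1m_output
)

print(f"API cost/year:  ${api_annual:,.0f}")
print(f"SaaS cost/year: ${SAAS_ANNUAL:,.0f}")
print(f"Ratio: {SAAS_ANNUAL / api_annual:.0f}x")
```

Under these assumptions the direct API bill lands in the low thousands per year against a six-figure contract. The numbers will differ for your workload, but the exercise takes five minutes and should precede any vendor negotiation.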
Questions to Ask Before You Decide
Before committing to any AI initiative, I ask clients four questions:
- Is there already a SaaS product that solves exactly this problem with enough flexibility for your use case? If yes, seriously consider buying, but validate the flexibility claim before signing.
- Do you have proprietary data that would meaningfully improve a model beyond what a foundation model already does? If not, fine-tuning probably isn't worth the cost.
- Can this be solved with a well-designed prompt and an API call? You'd be surprised how often the answer is yes.
- What's the cost of being wrong? If it's a core business process, start smaller than you think you need to. Validate with a "borrow" prototype before committing to a build.
The Bottom Line
The AI decision framework isn't build vs. buy anymore. It's three-way, and the right answer changes depending on your timeline, your data, your scale, and what already exists in the market.
The teams winning with AI right now aren't the ones with the biggest models or the most sophisticated infrastructure. They're the ones who figured out the right option for each problem — and moved fast.
If you're staring down an AI initiative and aren't sure which path makes sense, that's exactly the conversation I have with clients. It usually takes a half-day sprint to get to a clear answer.
