While the AI world obsesses over benchmark scores and which model can write better poetry, Mistral just made a move that says everything about where the real money is: enterprise.
At NVIDIA GTC this week, the French AI startup announced Mistral Forge, a platform that lets companies build custom AI models trained on their own data. Not fine-tuned. Not RAG’d up. Actually trained from scratch.
And if you think that sounds incremental, you’re missing the point.
The Enterprise AI Problem Nobody Talks About
Here’s a dirty secret: most enterprise AI projects fail. Not because the technology isn’t there, but because these models don’t understand your business. They were trained on the internet — Reddit arguments, Wikipedia articles, Stack Overflow rants. Not your 20 years of internal documentation, your compliance workflows, or that tribal knowledge locked in Susan from Accounting’s head.
The current “solutions” to this problem are band-aids:
- Fine-tuning: Adjusts model behavior slightly, doesn’t fundamentally change what it knows
- RAG (Retrieval-Augmented Generation): Bolts your data on at query time; works great until retrieval misses the document that actually matters
- Prompt engineering: Hoping the model will behave if you ask nicely enough
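To make the "band-aid" point concrete, here's a minimal sketch of the RAG pattern. Everything in it (the naive keyword retriever, the document list, the prompt template) is illustrative, not any vendor's actual implementation. The key thing to notice: the model's weights never change; your data is just pasted into the prompt at query time.

```python
import re

# Minimal RAG sketch: the base model is untouched; internal documents
# are only consulted at query time. All names here are illustrative.

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q = tokens(query)
    ranked = sorted(docs, key=lambda d: -len(q & tokens(d)))
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Bolt retrieved context onto the prompt. The model still 'knows'
    nothing about these documents; if retrieval misses, the answer does too."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

internal_docs = [
    "Claims over $10k require sign-off from the regional compliance lead.",
    "The cafeteria is closed on Fridays.",
    "Fraud flags are escalated to the risk desk within 24 hours.",
]

prompt = build_prompt("Who signs off on large claims?", internal_docs)
print(prompt)
```

Training on your data instead means that compliance rule ends up encoded in the weights themselves, with no retrieval step to miss.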
Mistral’s pitch with Forge is different: actually train the damn model on your data.
What Forge Actually Does
Forge gives enterprises access to Mistral’s library of open-weight models — including the recently released Mistral Small 4 — and the infrastructure to train custom versions using their own proprietary data.
The key differentiators:
1. Training from scratch, not fine-tuning. This matters for non-English languages, domain-specific jargon, and any use case where the base model's training data is actively unhelpful.
2. No third-party model dependency. When you're running on OpenAI's API and they deprecate a model or change pricing, you're at their mercy. Train your own model, keep it in-house, and that problem goes away.
3. Forward-deployed engineers. Taking a page from Palantir's playbook, Mistral is embedding engineers directly with enterprise customers. They call them FDEs — people who actually understand both AI and your business domain, helping surface the right data and build proper evaluation frameworks.
“Understanding how to build the right evals and making sure that you have the right amount of data is something that enterprises usually don’t have the right expertise for,” Elisa Salamanca, Mistral’s head of product, told TechCrunch.
She’s not wrong.
The $1 Billion Validation
Mistral claims they’re on track to exceed $1 billion in annual recurring revenue this year. For a company that’s barely two years old, that’s remarkable — and it validates their contrarian bet.
While OpenAI chased viral consumer moments with ChatGPT and Anthropic positioned itself as the “safety-first” option for API developers, Mistral went straight for the enterprise throat. No consumer app. No chatbot for the masses. Just B2B, all day.
The early Forge customers tell the story:
- Ericsson — telecom giant with specific infrastructure needs
- European Space Agency — government work, sovereignty requirements
- ASML — the Dutch chip equipment monopoly, which led Mistral’s Series C at an €11.7B valuation
- Reply — Italian consulting firm embedding AI for clients
Notice a pattern? European institutions that care deeply about data sovereignty, compliance, and not sending their sensitive data to American cloud providers.
Why This Might Actually Work
The boring AI use cases are often the most valuable ones.
When every startup is racing to build the next consumer AI product, there’s a massive greenfield opportunity in just… making enterprise AI work. Helping the insurance company process claims faster. Helping the bank detect fraud better. Helping the manufacturer predict equipment failures.
These aren’t sexy. They won’t get you on the cover of Wired. But they pay — really, really well.
Mistral’s bet is that enterprises will pay premium prices for:
- Models that actually understand their domain
- Full control over their data and infrastructure
- Human experts who can guide implementation
- No vendor lock-in or deprecation surprises
That’s a compelling package.
The Catch
Of course, “train your own model” isn’t free.
The compute costs are substantial. The data pipeline work is substantial. The evaluation framework development is substantial. You need good data, and most enterprises don’t have good data — they have 30 years of accumulated garbage in inconsistent formats across 47 different systems.
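What "data pipeline work" means in practice: before any training run, those inconsistent exports have to be normalized and deduplicated. A hedged sketch of the first step, assuming the simplest possible approach (exact-match dedup after normalization); real pipelines add fuzzy dedup, PII scrubbing, and format conversion on top:

```python
import hashlib
import unicodedata

# Illustrative corpus-cleaning step, not any vendor's actual tooling:
# unify text pulled from many systems, then drop exact duplicates.

def normalize(text: str) -> str:
    """Unify unicode form, whitespace, and case across source systems."""
    text = unicodedata.normalize("NFKC", text)
    return " ".join(text.split()).lower()

def dedupe(docs: list[str]) -> list[str]:
    """Keep the first copy of each normalized document."""
    seen: set[str] = set()
    kept = []
    for doc in docs:
        digest = hashlib.sha256(normalize(doc).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept

raw = [
    "Claim process v2:  approve within 5 days.",
    "claim process v2: approve within 5 days.",  # same doc, different system
    "Equipment maintenance schedule, rev 7.",
]
clean = dedupe(raw)
print(len(raw), "->", len(clean))  # prints: 3 -> 2
```

Multiply that by 47 systems and 30 years, and the appeal of having an FDE on-site becomes obvious.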
Mistral’s forward-deployed engineers are the secret weapon here, but they’re also the bottleneck. This model doesn’t scale infinitely. You can only embed so many humans.
The question is whether Forge can productize enough of the process to make custom model training accessible beyond the Ericssons and ASMLs of the world. Can a mid-sized company afford this? Will Mistral build enough tooling that fewer engineer-hours are needed per deployment?
The Bigger Picture
The AI industry is bifurcating.
On one side: the foundation model labs racing toward AGI, burning billions on training runs, fighting over benchmark supremacy.
On the other side: companies building the picks and shovels for actually deploying AI in the real world.
Mistral is trying to straddle both — they still build frontier models, but their business model is increasingly about helping others build models too. That’s a smart hedge.
The irony is that the “boring” enterprise market might end up being more defensible than the consumer race. OpenAI can ship a better chat interface overnight. Replacing the custom model powering a bank’s fraud detection system? That takes years.
Mistral seems to understand this. And with $1B ARR on the horizon, so does the market.
Forge is available now for enterprise customers. If you’re curious about custom model training and have deep pockets, reach out to Mistral.