Which open-source LLM do I pick if I want to run on-prem or in my own cloud?

Llama, Mistral and Qwen are the three families that matter for SMBs. For general Dutch and English work the larger variants come close to commercial quality, but the right choice mostly depends on license, hardware and the languages you serve.

Try this first

  1. Llama (Meta): good quality, decent multilingual support, and a generous but not fully open license. Read the license carefully if you build commercial AI for customers; some variants carry usage caps.
  2. Mistral (Mistral AI): EU company, strong performance, with models like Mistral Large and open Mixtral variants. A strong option for SMBs where EU data residency is a hard requirement.
  3. Qwen (Alibaba): very strong, especially in multilingual work and code, with Apache 2.0 on the open variants. Be aware that some commercial customers treat the geopolitical perception as a showstopper.
  4. Match the model to your hardware: a 7B runs on a good laptop, a 32B needs a workstation, and 70B+ needs a real GPU server or a cloud instance with A100/H100 GPUs. Size your compute first, choose the model second.
  5. Test on your own Dutch content. Benchmarks are often English-only and miss the full picture. Comparing twenty real questions takes an hour and tells you more than any leaderboard.
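The hardware matching in step 4 can be sanity-checked with a back-of-envelope VRAM estimate: the weights take roughly parameter count times bytes per weight at your chosen quantization, plus overhead for the KV cache and activations. A minimal sketch (the 20% overhead factor is a rough assumption, not a measured figure):

```python
def vram_estimate_gb(params_billion: float, quant_bits: int = 4) -> float:
    """Rough VRAM needed to serve a model: weights at the given
    quantization plus ~20% for KV cache and activations (assumption)."""
    weight_gb = params_billion * quant_bits / 8  # 1B params at 8-bit ~ 1 GB
    return round(weight_gb * 1.2, 1)

for size in (7, 32, 70):
    print(f"{size}B at 4-bit: ~{vram_estimate_gb(size)} GB VRAM")
# 7B lands around 4 GB (laptop), 32B around 19 GB (workstation GPU),
# 70B around 42 GB (server-class or multi-GPU) -- matching step 4.
```

Longer contexts and bigger batches push the real number up, so treat this as a floor, not a budget.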
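The twenty-question comparison in step 5 is easy to script. A minimal harness, assuming a pluggable `ask(model, question)` callable (the model names and the `fake_ask` stub below are placeholders; swap in whatever client your local runtime exposes, e.g. an HTTP call to an Ollama or vLLM endpoint):

```python
import time

def run_bench(models, questions, ask):
    """Run every question against every model, recording answer and latency."""
    rows = []
    for q in questions:
        row = {"question": q}
        for m in models:
            t0 = time.perf_counter()
            row[m] = ask(m, q)                              # the model's answer
            row[f"{m}_s"] = round(time.perf_counter() - t0, 2)  # seconds taken
        rows.append(row)
    return rows

# Stub backend for illustration only; replace with a real client call.
def fake_ask(model, question):
    return f"[{model}] answer to: {question}"

rows = run_bench(["llama-8b", "mistral-7b", "qwen-7b"],
                 ["What is our notice period?", "Summarise this contract."],
                 fake_ask)
```

Dump `rows` to a spreadsheet and score the answers side by side; with twenty real questions from your own documents this is the hour that beats any leaderboard.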

When to bring us in

Want us to test three candidates head to head on your hardware and content? We can set up the bench in half a day.

None of the above fits?

Describe your situation below. We pass your input plus the steps you already saw to our AI and return tailored next-step advice. If it's too risky to DIY, we'll say so.

Who are you?

For the AI question we need your email address and company name, so we can follow up if the AI gets stuck and to prevent abuse.

Limited to 2 questions per hour and 5 per day, kept lean so the AI stays useful. For more, contacting us directly works better for both of us.

Or skip the DIY entirely

Our Managed IT clients do not have to look these things up: one point of contact, a fixed monthly price, and issues resolved within working hours.