
Your AI model choice is a security decision

2026-02-05


We're building an AI voice agent — a system where customers can call us and an AI agent looks up their support tickets, schedules appointments, and resolves issues. Not a chatbot on a website, but a real phone connection through uWebChat Voice.

During the architecture design, I arrived at an insight I want to share, because I think many companies getting started with AI overlook this point.

The model sees your customer data

Most discussions about AI model choice focus on speed, cost, and quality. Which model gives the best answers? Which model is cheapest per token?

But as soon as you build an AI agent that *performs actions* — looks up tickets, reads customer data, schedules appointments — the game changes. The model gets customer data in its context window. Not because you're careless, but because it's necessary. The agent needs to see that data to help the customer.

And at that moment, your model choice becomes a security decision.

What it comes down to

With our voice agent, we've set up the architecture so that the AI model can never determine *which* customer data it sees. All data scoping happens in our application layer. The model cannot request data from another customer — the parameter simply doesn't exist.
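To make that scoping principle concrete, here is a minimal sketch of the idea, not our actual implementation. The tool name, schema, and helpers are hypothetical; the point is that the tool schema the model sees contains no customer identifier, so there is nothing for the model (or a manipulative caller) to tamper with:

```python
from dataclasses import dataclass

# Hypothetical in-memory ticket store standing in for the real backend.
_TICKETS = {
    "cust-001": [{"id": "T-1", "status": "open"}],
    "cust-002": [{"id": "T-9", "status": "open"}],
}

@dataclass
class CallSession:
    customer_id: str  # established when the caller is authenticated

# The tool schema exposed to the model: note there is no customer_id
# parameter at all -- data scoping is not the model's job.
LOOKUP_TICKETS_TOOL = {
    "name": "lookup_tickets",
    "description": "Look up the caller's support tickets.",
    "parameters": {
        "type": "object",
        "properties": {
            "status": {"type": "string", "enum": ["open", "closed", "all"]},
        },
    },
}

def fetch_tickets(customer_id, status="open"):
    tickets = _TICKETS.get(customer_id, [])
    return tickets if status == "all" else [t for t in tickets if t["status"] == status]

def handle_tool_call(session, tool_name, args):
    """Dispatch a model tool call, scoped to the authenticated caller."""
    if tool_name == "lookup_tickets":
        # customer_id comes from the verified phone session, never from
        # the model's arguments -- any smuggled-in value is simply ignored.
        return fetch_tickets(session.customer_id, args.get("status", "open"))
    raise ValueError(f"unknown tool: {tool_name}")
```

Because the identifier is injected by the application layer from the authenticated session, cross-customer access is impossible by construction rather than by instruction.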

But once the right data is in the context, you're trusting the model to:

  • Respect the system prompt and not be circumvented through manipulation by the caller
  • Not leak sensitive information it shouldn't share
  • Not combine or reuse data in unexpected ways
  • Behave as instructed, every single time

That trust cannot be enforced with code. It has to come from the model.

Not all models are equal

There is an enormous difference between models when it comes to this trust question, and that difference isn't captured in benchmarks.

With a provider like Anthropic (Claude) or OpenAI, you know there is a published hierarchy governing how system prompts and user input are prioritized. Extensive research has been done on prompt injection resistance. Data processing agreements are available. API traffic is not used for model training. And there are independent security audits.

With a model from a jurisdiction without comparable privacy legislation — and I deliberately won't name names, but you know which models I mean — you face multiple uncertainties simultaneously. Is your API traffic being logged? By whom? Is it used for training? How robust is the model against prompt injection? You don't know. And you can't verify it.

The GDPR dimension

For us as a Dutch company with Dutch customers, GDPR adds another layer. As soon as you send customer data through an AI model, that's data processing. You need a data processing agreement with your model provider. You must be able to justify where the data goes and how it's processed.

Try getting that data processing agreement from some providers. And even if you get one — can you then justify to your customers that their support tickets, contract details, and contact information pass through servers in a jurisdiction where you have zero control over what happens with it?

What this means in practice

When designing our voice agent, we explicitly established the model choice as an architectural decision, not a configuration option. For all conversations involving customer data, we exclusively use a model with:

  • A verifiable data processing agreement
  • Published policies on data retention and training
  • Proven prompt injection resistance
  • A track record of transparency on security

This means we might pay a bit more per API call. But the alternative — sending your customer data through a black box to save a few cents — is not a trade-off I'm willing to make.
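One way to treat the model choice as architecture rather than configuration is to fail closed in code. The sketch below is illustrative only; the model names and function are hypothetical, and the key design choice is that the allow-list lives in reviewed source code, not in an environment variable someone can quietly change:

```python
# Hypothetical sketch: the approved-model list is code, not configuration.
# Changing it requires a code review, like any other architectural change.
APPROVED_MODELS_FOR_CUSTOMER_DATA = frozenset({
    "claude-sonnet-4",  # illustrative names, not a recommendation
    "gpt-4o",
})

def select_model(model: str, handles_customer_data: bool) -> str:
    """Fail closed: conversations touching customer data may only be
    routed to models that meet the criteria above."""
    if handles_customer_data and model not in APPROVED_MODELS_FOR_CUSTOMER_DATA:
        raise PermissionError(f"model {model!r} is not approved for customer data")
    return model
```

For internal experiments without customer data, `handles_customer_data=False` leaves the choice open; the guard only bites where it matters.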

The lesson for everyone building with AI

If you're using AI for internal experiments or public content, it may not matter much which model you choose. But as soon as customer data flows through your AI pipeline, every model choice becomes a security decision, a compliance decision, and a trust decision towards your customers.

Treat it accordingly.

---

*I build AI-driven applications for SMBs with my company Universal.cloud. If you're thinking about AI agents in your own organisation and want to discuss architecture and security, feel free to reach out.*
