Enterprise Considerations

Shipping AI in a startup is mostly an engineering problem. Shipping AI inside a Fortune 500 is mostly an organizational, legal, and compliance problem, with engineering as one sub-component. This article is a tour of what changes.

Data boundaries

The single most important enterprise concern: which data is allowed to leave the company?

Categories:

  • Public data: marketing, docs, public web. Can go anywhere.
  • Internal data: company info not intended for the public. Often must stay within approved vendors.
  • Confidential / regulated data: customer PII, financial, health, IP. Strict controls.
  • Restricted / classified: very limited handling.

For each category:

  • Who can it be sent to? (Vendor X yes, Vendor Y no.)
  • What can be done with it? (Train? Fine-tune? Cache?)
  • How long can it be retained?
  • Does the user have to consent first?
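
One way to make those answers enforceable is to encode them as a policy table checked before any payload leaves your systems. A minimal sketch in Python, with illustrative category names, vendor IDs, and retention values (the real entries come out of legal and security review):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataPolicy:
    allowed_vendors: frozenset[str]  # who this category may be sent to
    may_train: bool                  # may the vendor train on it?
    retention_days: int              # how long the vendor may retain it
    requires_consent: bool           # must the end user consent first?

# Illustrative policy table; real values come from legal/security review.
POLICIES = {
    "public":       DataPolicy(frozenset({"vendor_a", "vendor_b"}), True, 365, False),
    "internal":     DataPolicy(frozenset({"vendor_a"}), False, 30, False),
    "confidential": DataPolicy(frozenset({"vendor_a"}), False, 0, True),
    "restricted":   DataPolicy(frozenset(), False, 0, True),
}

def check_egress(category: str, vendor: str, user_consented: bool) -> DataPolicy:
    """Call this before any payload of the given category is sent to the vendor."""
    policy = POLICIES[category]
    if vendor not in policy.allowed_vendors:
        raise PermissionError(f"{category} data may not be sent to {vendor}")
    if policy.requires_consent and not user_consented:
        raise PermissionError(f"{category} data requires user consent first")
    return policy  # caller applies retention_days / may_train when configuring the request
```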

Large vendors (Anthropic, OpenAI, Google, AWS) offer enterprise tiers with strong data handling: no training on customer data, configurable retention, regional residency, audit logs.

Read the data processing addenda. Rinse and repeat for every vendor.

Vendor evaluation

For each AI vendor, expect a security/legal review:

  • SOC 2 Type II: standard for SaaS.
  • ISO 27001 / 27017 / 27018: information security management, plus the cloud-security and cloud-PII extensions.
  • HIPAA BAA (if healthcare).
  • PCI DSS (if payments).
  • FedRAMP (US government).
  • GDPR DPA: EU data processing.
  • Sub-processor list: who else touches the data downstream.

Also:

  • Penetration test reports.
  • Insurance (cyber liability).
  • Incident response history.
  • Data residency options.

This process takes weeks to months. Plan accordingly.

Compliance regimes

GDPR (EU, and the template for privacy laws elsewhere)

  • Lawful basis for processing.
  • Data minimization: don’t collect/log more than needed.
  • Right to access / delete / port user data.
  • DPA with vendors.
  • Breach notification: 72 hours.

For LLM apps: be careful what you log, where your vendors process data, and whether user data can end up in training sets.
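
Data minimization in practice usually means scrubbing prompts and responses before they hit your logs. A minimal sketch; the regexes below are illustrative only, not a complete PII detector (production systems typically use a dedicated redaction service):

```python
import logging
import re

# Illustrative patterns only; real PII detection needs a dedicated service.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD>"),
]

def redact(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

def log_llm_call(logger: logging.Logger, prompt: str, response: str) -> None:
    # Log only redacted content, and only what you actually need for debugging.
    logger.info("llm_call prompt=%r response=%r", redact(prompt), redact(response))
```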

HIPAA (US healthcare)

  • PHI can only be processed by HIPAA-compliant systems with Business Associate Agreements (BAAs).
  • Most major LLM providers offer HIPAA-eligible tiers with BAAs (Claude, GPT-4 via Azure OpenAI, etc.).
  • Audit logs, access controls, encryption.

SOX (US public companies)

  • Internal controls over financial reporting.
  • AI assisting financial workflows must have controls + audit trail.

CCPA / CPRA (California) and US state privacy laws

  • Right to know, delete, opt out of “sales/sharing.”
  • Using an LLM vendor as a service provider is usually fine; anything that counts as a data “sale” or “sharing” is not.

EU AI Act

  • Risk-tiered regulation.
  • High-risk AI systems: extensive documentation, human oversight, transparency.
  • Some prohibited use cases (e.g. social scoring; real-time remote biometric identification, with narrow exceptions).
  • Enforcement ramping through 2026–2027.

Sector-specific

  • Finance: model risk management (SR 11-7 in US), explainability requirements.
  • Insurance: NAIC AI guidance.
  • Defense / government: CMMC, FedRAMP, ITAR.

In every one of these regimes, compliance is not just a checkbox; it shapes the architecture.

Auditability

Enterprise customers and auditors require you to be able to answer:

  • Why did the system give this output to this user at this time?
  • Who/what made decisions about model selection, prompt design, fine-tuning data?
  • What data was used to train any custom model?
  • What is the provenance of any decision the AI made?

This implies:

  • Model registry: every model version, training data, training config.
  • Prompt registry: every prompt version.
  • Trace retention: long enough to answer audits.
  • Data lineage: where each piece of data came from.
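
Concretely, every request can emit an audit record tying the output back to the exact model version, prompt version, caller, and source data. A minimal sketch; the field names are illustrative, and hashes stand in for raw text where retention rules forbid storing it:

```python
import hashlib
import json
import time
import uuid
from dataclasses import asdict, dataclass

@dataclass
class AuditRecord:
    trace_id: str            # unique per request
    timestamp: float
    user_id: str             # who asked
    model_id: str            # e.g. "provider/model@version" from the model registry
    prompt_version: str      # version tag from the prompt registry
    input_hash: str          # hash instead of raw text where retention forbids it
    output_hash: str
    data_sources: list[str]  # lineage: which documents/records fed the context

def make_audit_record(user_id, model_id, prompt_version, prompt, output, sources):
    sha = lambda s: hashlib.sha256(s.encode()).hexdigest()
    record = AuditRecord(str(uuid.uuid4()), time.time(), user_id, model_id,
                         prompt_version, sha(prompt), sha(output), sources)
    return json.dumps(asdict(record))  # ship to the audit store with long retention
```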

Security

Beyond standard SaaS security:

Prompt injection

Adversaries can manipulate the model through inputs (Stage 11). Layer defenses.

Data exfiltration

A compromised AI system can leak training data, system prompts, or other users’ data. Test for it.

Denial of service

Generation is expensive; an attacker can DoS your wallet. Rate limit, cap budgets per user.
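
A per-user request and budget cap is often enough to stop an attacker, or a runaway internal script, from draining the account. A minimal in-memory sketch; the limits and the cost estimate are illustrative, and a real deployment would back this with a shared store:

```python
import time
from collections import defaultdict

MAX_REQUESTS_PER_MINUTE = 30   # illustrative limits
MAX_DAILY_SPEND_USD = 25.0

_request_times = defaultdict(list)   # user_id -> recent request timestamps
_daily_spend = defaultdict(float)    # user_id -> spend so far (reset by a daily job)

def admit(user_id: str, estimated_cost_usd: float) -> bool:
    """Return False (and refuse the call) if the user is over rate or budget."""
    now = time.time()
    recent = [t for t in _request_times[user_id] if now - t < 60]
    if len(recent) >= MAX_REQUESTS_PER_MINUTE:
        return False                                   # rate limited
    if _daily_spend[user_id] + estimated_cost_usd > MAX_DAILY_SPEND_USD:
        return False                                   # budget cap hit
    recent.append(now)
    _request_times[user_id] = recent
    _daily_spend[user_id] += estimated_cost_usd
    return True
```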

Model supply chain

Open-source models from random repos may have backdoors (rare but real). Verify checksums, prefer well-known sources.
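
Verifying the published checksum before loading weights is cheap insurance; a minimal sketch, where the expected digest is assumed to come from the model publisher's release notes:

```python
import hashlib
from pathlib import Path

def verify_weights(path: str, expected_sha256: str) -> None:
    """Refuse to load model weights whose digest doesn't match the published one."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
            digest.update(chunk)
    if digest.hexdigest() != expected_sha256:
        raise RuntimeError(f"checksum mismatch for {path}; refusing to load")
```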

Secrets in prompts

Don’t put API keys or passwords in prompts. Use scoped tokens and environment variables.
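
A small sketch of the pattern: credentials come from the deployment environment and travel in request headers, while the prompt carries only the task (the variable name is illustrative):

```python
import os

def get_llm_key() -> str:
    # Lives in the deployment environment or a secrets manager,
    # and is sent in request headers, never inside prompt text.
    return os.environ["LLM_API_KEY"]  # illustrative variable name

def build_prompt(user_request: str) -> str:
    # The prompt carries only the task and the data the task needs.
    return f"Summarize this request for the support team:\n{user_request}"
```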

Identity, authn, authz

Standard enterprise auth applies:

  • SSO (SAML, OIDC) into AI products.
  • Role-based access: who can use which models, see which data.
  • Audit logs of all access.
  • MFA for sensitive operations.

The AI part: ensure tool authorization respects user identity (Stage 11).
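
The failure mode is a tool running with the service's broad credentials instead of the caller's. A minimal sketch of checking the end user's roles before a model-requested tool call executes; the tool names and roles are illustrative:

```python
def search_tickets(query: str) -> list[str]:
    return []  # placeholder tool implementation

TOOLS = {"search_tickets": search_tickets}

# Illustrative mapping: which roles may invoke which tools.
TOOL_PERMISSIONS = {
    "search_tickets": {"support", "admin"},
    "refund_order": {"admin"},
}

def execute_tool(tool_name: str, args: dict, user_roles: set[str]):
    """Run a model-requested tool only if the end user behind the request is entitled."""
    allowed = TOOL_PERMISSIONS.get(tool_name, set())
    if not (user_roles & allowed):
        raise PermissionError(f"user lacks permission for tool {tool_name}")
    return TOOLS[tool_name](**args)
```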

Cost controls

Enterprise IT wants:

  • Budget caps per team / project / cost center.
  • Predictable monthly billing (committed-use, reserved capacity).
  • Charge-back / show-back to internal teams.
  • Approval workflows for new models, vendors, increases.

Many AI platforms offer enterprise billing dashboards, but you may need to roll your own attribution.
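
Rolling your own attribution usually means tagging every call with a cost center and accumulating token cost. A minimal sketch; the per-token prices are illustrative, use your vendor's current rate card:

```python
from collections import defaultdict

# Illustrative per-1K-token prices in USD; use your vendor's actual rate card.
PRICE_PER_1K = {
    "model-small": {"in": 0.0005, "out": 0.0015},
    "model-large": {"in": 0.0030, "out": 0.0150},
}

spend_by_cost_center = defaultdict(float)

def record_usage(cost_center: str, model: str, tokens_in: int, tokens_out: int) -> float:
    price = PRICE_PER_1K[model]
    cost = tokens_in / 1000 * price["in"] + tokens_out / 1000 * price["out"]
    spend_by_cost_center[cost_center] += cost
    return cost

def monthly_showback() -> dict[str, float]:
    # Spend per team, highest first, for the charge-back / show-back report.
    return dict(sorted(spend_by_cost_center.items(), key=lambda kv: -kv[1]))
```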

Change management

Enterprise users hate breaking changes. Plan for:

  • Versioning of prompts and models.
  • Deprecation notice periods.
  • Backward compatibility for APIs.
  • Communicated rollouts for capability changes.

Support and SLAs

Enterprise contracts include:

  • SLAs: uptime, latency.
  • Support tiers: 24/7, named contact, response times.
  • Escalation paths.
  • Dedicated engineering hours at higher tiers.

API providers offer these at their enterprise tiers; if you are the one selling to enterprises, you need to offer the equivalent.

Procurement and contracting

Multi-month sales cycles. Plan accordingly:

  • Master Service Agreement (MSA): legal framework.
  • Statements of Work (SOWs): scope of specific engagements.
  • Order forms: pricing, terms.
  • Security questionnaires (CAIQ, SIG, custom): hundreds of questions; expect to fill them out.

If you’re a startup selling to enterprise: get help. Contracts and security reviews kill startup velocity.

Bring-your-own-key (BYOK)

Some enterprises want to provide their own LLM API key:

  • Their AI usage is on their account; they control billing, retention, training opt-outs.
  • Your product does the orchestration; the model relationship is theirs.

Architecturally, your software has to support per-tenant API keys, model choice, and endpoints; a minimal routing sketch follows. This arrangement is common in governance-sensitive segments.
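
In the sketch below, the tenant's own key reference and approved model are looked up per request, so billing, retention, and training opt-outs stay on their account. The tenant names, env-var references, and provider dispatch are illustrative; the actual SDK calls are stubbed out:

```python
import os
from dataclasses import dataclass

@dataclass
class TenantLLMConfig:
    provider: str     # e.g. "openai", "anthropic", "self_hosted"
    key_env_var: str  # where the tenant's own key is injected, never hard-coded
    model: str        # the model this tenant has approved

# Illustrative tenant table; real entries live in an encrypted per-tenant store.
TENANTS = {
    "acme": TenantLLMConfig("openai", "ACME_LLM_KEY", "gpt-4o"),
    "globex": TenantLLMConfig("anthropic", "GLOBEX_LLM_KEY", "claude-sonnet-4"),
}

def complete(tenant_id: str, prompt: str) -> str:
    cfg = TENANTS[tenant_id]
    api_key = os.environ[cfg.key_env_var]  # the tenant's key: their account, their terms
    # Dispatch to the right provider SDK; stubbed here to keep the sketch short.
    if cfg.provider == "openai":
        raise NotImplementedError("call the OpenAI SDK with api_key and cfg.model")
    if cfg.provider == "anthropic":
        raise NotImplementedError("call the Anthropic SDK with api_key and cfg.model")
    raise ValueError(f"unknown provider: {cfg.provider}")
```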

On-prem and air-gapped

Some enterprises (defense, health, finance) require fully on-prem or air-gapped:

  • Self-hosted open-weights models.
  • No data leaves the customer’s network.
  • Heavier infrastructure burden on the customer side.

Vendors offering pieces of an “on-prem AI” stack:

  • Anthropic: limited deployment options.
  • OpenAI: Azure Government, Azure US Sovereign Cloud.
  • NVIDIA NIM: containerized models for on-prem.
  • Google: Vertex AI on-prem via Google Distributed Cloud (GDC).
  • Open ecosystem: vLLM/TGI deployments.
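
On the open-ecosystem route the application code looks like any other HTTP call, just pointed at a host inside the customer's network. A sketch against vLLM's OpenAI-compatible server; the hostname and model name are assumptions about a particular deployment:

```python
import requests

# Assumes vLLM is serving its OpenAI-compatible API inside the customer's network,
# e.g. started with: vllm serve meta-llama/Llama-3.1-8B-Instruct
VLLM_URL = "http://llm.internal:8000/v1/chat/completions"

def complete(prompt: str) -> str:
    resp = requests.post(
        VLLM_URL,
        json={
            "model": "meta-llama/Llama-3.1-8B-Instruct",
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 512,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```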

Procurement-friendly architecture

Design for enterprise from day 1 if that’s your market:

  • Multi-tenant with strict isolation.
  • Configurable model providers (let customers pick).
  • Clear data flow diagrams (where does data go, when, why).
  • Logging and audit from day 1.
  • Role-based access built in.
  • Configurable retention per tenant.

Retrofitting enterprise compliance is harder than building it in.

The org change

Not technical, but real:

  • AI projects involve legal, security, privacy, compliance, business stakeholders, end users.
  • Build relationships early.
  • Define success criteria and risk tolerances collaboratively.
  • Don’t try to ship “ahead of legal” — you’ll lose.

Pitfalls

  • Underestimating compliance lead times.
  • Choosing a vendor without DPA / BAA.
  • Logging PII without policy.
  • Cost overruns from unbounded usage.
  • No model versioning: can’t reproduce a past audit-relevant decision.
  • Over-promising on AI accuracy: enterprise contracts often have penalties.

See also