
Design AI for Enterprises
Many enterprises are considering ways to design AI initiatives and gain a business advantage. Usually, people start with basic AI and then want to upgrade to enterprise AI. This article summarizes the approaches that Curity recommends to help you plan your future enterprise AI direction.
Basic AI
Many people at enterprises use AI tools and technologies and gain a positive first impression.
- A business analyst could ask a chat client for a report on public data on trends for their industry. The analyst could then review and adapt the output to quickly generate a report for stakeholders.
- A data scientist could use an AI agent to connect to structured and unstructured data sources that contain customer data. The agent could create a new data source that stakeholders connect to with a chat client.
- A developer could use an AI coding assistant that helps to build a web application and API. The developer can then review the generated code and adapt it to the enterprise's coding standards.
As people become more skilled at using AI tools, they learn how to instruct the agent with clear guidance about inputs and outputs, which reduces the need for manual adaptation. The human remains the responsible party and ensures that AI-assisted work eventually produces correct results.
Enterprise AI Business Opportunities
Basic AI usage typically involves using free public data and services, or using sensitive data only internally. To gain maximum value, decision makers will need to integrate AI with their digital business and internet users.
- Enable AI agents to use enterprise data and services in dynamic conversational customer experiences.
- Innovate with a faster time to market, to outperform competitors.
- Make your enterprise services easier to find, to extend your business reach.
There are many new use cases for which enterprises could build AI solutions, such as the following examples.
- Users define conditions and the AI agent creates orders at a future time, when those conditions are met.
- Users ask a support bot questions about their resources, such as insurance policies or investments.
- AI agents create and run backend jobs to reconcile customer data or produce reports.
AI agents can also collaborate, to provide new ways to find your data and services. For example, an AI agent might pay for a flight and then locate and call another AI agent that pays for an event at the destination and applies a membership discount.
APIs Control AI Agent Access
Before you can implement internet business use cases, you need foundations to enable AI agents to access your enterprise data. AI agents use interoperable protocols to call APIs when they need to access backend data or services. The Model Context Protocol (MCP) is the current mainstream option.
Although you can implement all API logic within MCP servers, many enterprises will instead prefer MCP servers to be a thin layer in front of existing APIs. MCP servers can expose a targeted subset of existing API endpoints to AI agents as MCP tools. Such deployments enable enterprises to reuse their existing investments in APIs.
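One way to picture the thin-layer approach is a small dispatch table that exposes only a targeted subset of an existing API as MCP tools. The sketch below is a hypothetical illustration in Python: the tool names, paths and the stocks API itself are assumptions, and a real MCP server would forward the request (and the caller's access token) to the upstream API.

```python
# Hypothetical sketch: an MCP server as a thin layer over an existing API.
# Tool names, paths, and the stocks API itself are illustrative assumptions.

# Full API surface of an existing stocks API.
API_ENDPOINTS = {
    "GET /stocks": "List stock information",
    "GET /stocks/{id}": "Get details for one stock",
    "POST /orders": "Create an order",
    "DELETE /orders/{id}": "Cancel an order",
}

# The MCP server exposes only a targeted, read-only subset as tools.
MCP_TOOLS = {
    "list_stocks": {"method": "GET", "path": "/stocks"},
    "get_stock": {"method": "GET", "path": "/stocks/{id}"},
}

def call_tool(tool_name: str, args: dict) -> dict:
    """Translate an MCP tool call into a request against the existing API."""
    tool = MCP_TOOLS.get(tool_name)
    if tool is None:
        raise ValueError(f"Unknown tool: {tool_name}")
    path = tool["path"].format(**args)
    # A real MCP server would forward the request and the access token
    # to the upstream API here, using an HTTP client.
    return {"method": tool["method"], "path": path}

print(call_tool("get_stock", {"id": "ABC123"}))
# {'method': 'GET', 'path': '/stocks/ABC123'}
```

Note how the write endpoints of the API exist but are deliberately absent from the tool subset, so AI agents cannot reach them through this MCP server at all.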
Enterprise AI Must Integrate Identity
Enterprise AI use cases operate on customer or corporate resources, so they must execute more accurately than basic AI. In particular, you must ensure that identities, user attributes, money and other sensitive resources are handled correctly and safely. You therefore need to understand how to safely expose enterprise data to AI agents.
MCP defines interoperable ways to secure AI agent access. Internet users typically run an AI agent with a built-in MCP client that can call enterprise MCP servers. Administrators can restrict the AI agents that can call APIs and the users who can gain access via approved AI agents.
MCP authorization is an OAuth Profile that can use many security standards. You apply end-to-end security to authenticate users, authenticate AI agents, implement human approvals and return short-lived least-privilege HTTP credentials to AI agents, with which the agent can access APIs.
Access Token Design is Critical
The HTTP credential that AI agents send to APIs is the OAuth access token, which contains security context. APIs cryptographically verify the integrity of access tokens before using the received context. You must design access tokens to grant minimal API privileges to AI agents and to provide sufficient context to allow APIs to restrict access.
The following example access token only allows read access to stock information and clearly informs APIs that an AI agent is present.
```json
{
  "jti": "31b921b8-b166-4173-b633-7480bab89456",
  "delegationId": "d94e9d67-b426-4cff-8613-f7cf2b1ca154",
  "exp": 1762337303,
  "nbf": 1762336403,
  "scope": "stocks/read",
  "iss": "https://login.demo.example/oauth/v2/oauth-anonymous",
  "sub": "john.doe@demo.example",
  "aud": "https://api.demo.example",
  "iat": 1762336403,
  "purpose": "access_token",
  "client_type": "ai-agent",
  "client_assurance_level": 1,
  "region": "USA"
}
```
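The API can enforce such claims with coarse-grained checks before any business logic runs. The following Python sketch is illustrative only: a real API first verifies the JWT signature with a JWT library, and the claim names simply follow the example token above.

```python
import time

# Illustrative sketch of API-side checks on received access token claims.
# In production, cryptographically verify the JWT with a library first;
# the claim names below follow the example access token in this article.

def authorize_request(claims: dict, required_scope: str, audience: str) -> bool:
    """Apply coarse-grained authorization checks before business logic runs."""
    now = int(time.time())
    if not (claims.get("nbf", 0) <= now < claims.get("exp", 0)):
        return False                      # token expired or not yet valid
    if claims.get("aud") != audience:
        return False                      # token was issued for a different API
    scopes = claims.get("scope", "").split()
    return required_scope in scopes       # least-privilege scope check

claims = {
    "exp": int(time.time()) + 900,
    "nbf": int(time.time()) - 10,
    "scope": "stocks/read",
    "aud": "https://api.demo.example",
    "client_type": "ai-agent",
}
assert authorize_request(claims, "stocks/read", "https://api.demo.example")
assert not authorize_request(claims, "stocks/write", "https://api.demo.example")
```

With this design, an AI agent that holds the example token can read stock information but is denied any write operation, whatever its instructions say.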
The first enterprise AI milestone should consist of the following main steps. While learning these steps, prefer experimenting with low sensitivity data or read-only operations.
- Integrate an end-to-end flow that uses the OAuth security standards required for MCP.
- Issue least-privilege access tokens that deliver security context to APIs.
- Implement API authorization using the access token, to restrict how AI agents use enterprise resources.
- Implement API auditing of AI agent access, so that security teams can govern AI enterprise resource access.
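The auditing step can be sketched as a structured record keyed by the token's `jti` claim, so security teams can trace every AI agent operation back to a user and a token. This is a hypothetical shape, not a prescribed format; in practice such records ship to a log pipeline or SIEM.

```python
import json
import time

# Hypothetical sketch of API auditing for AI agent access: record who acted,
# through which kind of client, and on what resource, keyed by the token's
# jti claim so each operation is traceable to one issued token.

def audit_record(claims: dict, method: str, path: str, allowed: bool) -> str:
    entry = {
        "time": int(time.time()),
        "token_id": claims.get("jti"),
        "user": claims.get("sub"),
        "client_type": claims.get("client_type"),
        "operation": f"{method} {path}",
        "allowed": allowed,
    }
    return json.dumps(entry)  # in practice, ship to a log pipeline or SIEM

record = audit_record(
    {"jti": "31b921b8", "sub": "john.doe@demo.example", "client_type": "ai-agent"},
    "GET", "/stocks", True,
)
```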
The Design MCP Authorization for APIs article explains how to implement an end-to-end flow. The steps can include trust configuration to enable agent and user onboarding, user authentication, user consent and token issuance. You can run the Implement MCP Authorization code example on a local computer to get connected.
AI Authorization Server Requirements
To implement AI flows, organizations need the OAuth authorization server to support up-to-date OAuth security standards. In particular, you need control over access token data and token exchange. One way to enable AI integrations is to run an authorization server with the required behaviors alongside existing identity systems.
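Token exchange follows RFC 8693, where a component such as an MCP server swaps an incoming token for one scoped to a specific API. The sketch below only builds the form-encoded request body; the endpoint URL and token values are assumptions, and client authentication is omitted.

```python
from urllib.parse import urlencode

# Sketch of an RFC 8693 token exchange request body. The endpoint URL and
# token values are assumptions. An MCP server or gateway could POST this
# to exchange an incoming token for one scoped to a single downstream API.

TOKEN_ENDPOINT = "https://login.demo.example/oauth/v2/oauth-token"  # assumed

def token_exchange_body(subject_token: str, audience: str, scope: str) -> str:
    params = {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": audience,
        "scope": scope,
    }
    # POST as application/x-www-form-urlencoded, with client authentication.
    return urlencode(params)

body = token_exchange_body("eyJhbGci...", "https://api.demo.example", "stocks/read")
```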
AI Threat Mitigations
During initial AI agent to API integrations, enterprises should pay close attention to threats like prompt injection, where an attacker supplies the agent with malicious instructions. AI agents can also hallucinate and perform unintended actions, such as instructing MCP clients to call unexpected tools or use incorrect input parameters, and they could misuse data returned from APIs. Consider the following mechanisms to mitigate AI threats.
- Configure boundaries on the API endpoints where AI agents can use access tokens.
- Configure short-lived tokens so that AI agent access expires quickly, moving toward zero standing privilege.
- Present end users with clear consent screens to enable approval of the AI agent's requested level of API access.
- Consider the use of a CIBA flow to remotely authorize backend AI agent jobs.
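The first mitigation above, endpoint boundaries, can be sketched as an allow-list that the API applies when the token identifies an AI agent client, even when the token's scopes would otherwise permit more. The policy contents here are assumptions for illustration.

```python
# Hypothetical sketch of endpoint boundaries: restrict where AI agents may
# use their access tokens, even when scopes would allow more. The allow-list
# contents and the client_type claim value are illustrative assumptions.

AI_AGENT_ALLOWED = {("GET", "/stocks"), ("GET", "/reports")}   # assumed policy

def within_boundary(claims: dict, method: str, path: str) -> bool:
    if claims.get("client_type") != "ai-agent":
        return True                  # non-agent clients follow normal rules
    return (method, path) in AI_AGENT_ALLOWED

assert within_boundary({"client_type": "ai-agent"}, "GET", "/stocks")
assert not within_boundary({"client_type": "ai-agent"}, "POST", "/orders")
```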
The API Security Best Practices for AI Agents article provides further detail on threats and mitigations. For example, AI agents in some environments can use strong OAuth client credentials to prevent impersonation by a malicious program.
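One form of strong client credential is a JWT client assertion (RFC 7523). The sketch below builds only the assertion claims; signing with the client's private key requires a JWT library and is omitted, and the client ID and endpoint URL are assumptions.

```python
import time
import uuid

# Illustrative claims for a JWT client assertion (RFC 7523), one form of
# strong OAuth client credential. Signing with the client's private key
# requires a JWT library and is omitted; names and URLs are assumptions.

def client_assertion_claims(client_id: str, token_endpoint: str) -> dict:
    now = int(time.time())
    return {
        "iss": client_id,          # the client asserts its own identity
        "sub": client_id,
        "aud": token_endpoint,     # binds the assertion to the token endpoint
        "jti": str(uuid.uuid4()),  # unique ID prevents replay
        "iat": now,
        "exp": now + 300,          # a short lifetime limits misuse
    }

claims = client_assertion_claims(
    "ai-agent-client", "https://login.demo.example/oauth/v2/oauth-token"
)
```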
High Privilege Operations
To enable an AI agent to perform a high-privilege task, such as a payment, APIs can use the OAuth 2.0 Step Up Authentication Challenge Protocol from RFC 9470. An API can return a challenge response, with an HTTP 401 status code and a WWW-Authenticate header, to indicate that the MCP client must get a new access token with a high-privilege OAuth scope.
Administrators can configure the authorization server to require strong customer authentication (SCA) before issuing an access token with the high-privilege scope. Consent to a high-privilege event such as a payment can also be recorded in a non-repudiable manner. To run an MCP step-up flow, check out the ChatGPT Step-Up Code Example.
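A simplified sketch of the challenge flow is shown below. The scope name and `acr_values` content are assumptions; RFC 9470 defines the `insufficient_user_authentication` error in the WWW-Authenticate header, and a production API would build on its full challenge semantics.

```python
# Simplified sketch of an API issuing a step-up challenge when a request
# needs a higher-privilege scope. The "payments" scope and the acr value
# are illustrative assumptions; RFC 9470 defines the error code used here.

def check_payment_request(claims: dict) -> tuple[int, dict]:
    scopes = claims.get("scope", "").split()
    if "payments" not in scopes:
        # Challenge the MCP client to get a new token after step-up (SCA).
        challenge = (
            'Bearer error="insufficient_user_authentication", '
            'acr_values="urn:example:sca"'
        )
        return 401, {"WWW-Authenticate": challenge}
    return 200, {}

status, headers = check_payment_request({"scope": "stocks/read"})
```

On receiving the challenge, the MCP client reruns the OAuth flow, the authorization server enforces strong customer authentication, and the retried request then carries a token with the high-privilege scope.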
AI Federation
Once you have the OAuth foundations to enable enterprise AI, you are well-placed to extend your business reach. When multiple enterprises have AI foundations, they can implement B2B AI use cases. Administrators at each enterprise use mechanisms to establish trust and grant AI agents and users partial access to their data and services.
As agents evolve, they will implement more intricate workflows. Agents will act autonomously to call other agents that specialize in particular subtasks within complex workflows. The Agent2Agent (A2A) Protocol defines a way for agents to use natural language to send commands to other agents. Typically, each agent in an A2A flow will use MCP tools to call APIs.
AI federation should separate concerns so that enterprises do not need to implement low-level details themselves. For example, Google's Agent Payments Protocol (AP2) is an initiative to extend A2A to enable secure online payments and provide PCI-compliant endpoints that merchants can call.
AI Federation Security Patterns
In federated use cases, data accuracy, security and trust become even more critical. AI technologies use many recent OAuth security standards, and AI initiatives have led to the creation of some new standards.
As AI use cases become more complex, the user experience must remain manageable, with fewer user prompts even as AI agents call many MCP servers to perform their tasks. Administrator approval will be a key factor in reducing prompts and enabling Single Sign-On to APIs across trust domains. The SSO for AI Agents with OpenID Connect article explains how that will work.
Another important capability for high security transactions will be to record sensitive AI agent commands in non-repudiable ways. For that, expect agents to integrate with digital wallets that issue Verifiable Credentials, to store the user's runtime intent in a cryptographically signed assertion that backend components like APIs can verify.
Summary
Many enterprises have AI teams who are gaining skills in AI technologies. However, enterprises typically lack an effective AI strategy and unintentionally limit themselves to basic AI. Enterprise AI initiatives often fail due to conflicts between teams. For example, an AI team might spend time developing a customer solution, only for a compliance team to block it from going live due to security and privacy concerns.
As this article shows, the steps to safely enable enterprise AI are logical. The only development work needed is to follow long-established API principles, centred on a future-proof authorization strategy. The enterprise AI use case clearly shows how, with sound decision making, security can be a business enabler and help to build agreement across enterprise teams.

Gary Archer
Product Marketing Engineer at Curity