
Design AI for Enterprises


Many enterprises are considering ways to design AI initiatives and gain a business advantage. Usually, people start with basic AI and then want to upgrade to enterprise AI. This article summarizes the approaches that Curity recommends to help you plan your future enterprise AI direction.

Basic AI

Many people at enterprises use AI tools and technologies and gain a positive first impression.

  • A business analyst could ask a chat client for a report on public data on trends for their industry. The analyst could then review and adapt the output to quickly generate a report for stakeholders.

  • A data scientist could use an AI agent to connect to structured and unstructured data sources that contain customer data. The agent could create a new data source that stakeholders connect to with a chat client.

  • A developer could use an AI coding assistant that helps to build a web application and API. The developer can then review the generated code and adapt it to the enterprise's coding standards.

As people become more skilled with AI tools, they learn how to guide the agent with clear instructions about inputs and outputs, which reduces the need to adapt its output. The human remains the responsible party, ensuring that work that uses AI ultimately produces correct results.

Enterprise AI Business Opportunities

Basic AI usage typically involves using free public data and services, or using sensitive data only internally. To gain maximum value, decision makers will need to integrate AI with their digital business and internet users.

  • Enable AI agents to use enterprise data and services in dynamic conversational customer experiences.
  • Innovate with a faster time to market, to out-perform competitors.
  • Make your enterprise services easier to find, to extend your business reach.

There are many new use cases for which enterprises could build AI solutions, such as the following examples.

  • Users define conditions and the AI agent creates orders at a future time, when those conditions are met.
  • Users ask a support bot questions about their resources, such as insurance policies or investments.
  • AI agents create and run backend jobs to reconcile customer data or easily produce reports.

AI agents can also collaborate, to provide new ways to find your data and services. For example, an AI agent might pay for a flight and then locate and call another AI agent that pays for an event at the destination and applies a membership discount.

APIs Control AI Agent Access

Before you can implement internet business use cases, you need API foundations to enable AI agents to access your enterprise data. In some cases this can involve actions like refactoring monolithic websites to provide API endpoints. If required, introduce an API gateway and follow the steps from the Identity and Access Management Primer article. APIs should follow a Zero Trust API Architecture so that they reject any unauthorized requests from AI agents.

Model Context Protocol

AI agents use interoperable protocols to call APIs when they need to access backend data or services. The first such protocol that you are likely to use is Model Context Protocol (MCP). An early use case might be for employee users to run a chat agent where they register an MCP server URL such as https://mcp.example.com. The agent connects to registered MCP servers and downloads metadata, which includes a list of MCP tools and their descriptions.
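The metadata download can be sketched as a JSON-RPC `tools/list` response, following the MCP specification. The tool name, description and input schema below are illustrative assumptions, not details from a specific MCP server.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "list_transactions",
        "description": "List the user's transactions, optionally filtered by amount and date range.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "min_amount": { "type": "number" },
            "from_date": { "type": "string", "format": "date" }
          }
        }
      }
    ]
  }
}
```

The agent supplies these tool descriptions to the LLM, which can then decide when a tool is relevant to a user's command.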

Users operating the agent issue natural language commands, such as the following example.

```text
Generate me a PDF report with all transactions over 100 USD from the last 3 months.
```

The agent typically forwards the command to a Large Language Model (LLM) that uses Natural Language Processing (NLP). When the LLM detects that a registered MCP server tool can help to process the user's command, the LLM instructs the agent to call a tool endpoint. MCP tools can return raw data that the LLM can manipulate in flexible ways.
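For example, to process a command like the one above, the LLM might instruct the agent to send a JSON-RPC `tools/call` request such as the following sketch. The tool name and arguments are illustrative assumptions.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "list_transactions",
    "arguments": {
      "min_amount": 100,
      "from_date": "2025-08-01"
    }
  }
}
```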

Although you can implement all API logic within MCP servers, many enterprises will instead prefer MCP servers to be a thin layer in front of existing APIs. MCP servers can expose a targeted subset of existing API endpoints to AI agents as MCP tools. Such deployments enable enterprises to reuse their existing investments in APIs.
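A thin-layer MCP server then mainly routes tool calls to existing API endpoints. The following minimal Python sketch illustrates the idea; the tool names, endpoint paths and `call_api` client are hypothetical, and a real server would use an MCP SDK and forward the caller's access token.

```python
# Hypothetical sketch: an MCP server as a thin layer over existing APIs.
# Each exposed MCP tool maps to a targeted subset of API endpoints.
TOOL_ROUTES = {
    "list_transactions": ("GET", "/transactions"),
    "create_report": ("POST", "/reports"),
}

def handle_tool_call(tool_name, arguments, call_api):
    """Translate an MCP tool call into a request against the upstream API."""
    if tool_name not in TOOL_ROUTES:
        raise ValueError(f"Unknown tool: {tool_name}")
    method, path = TOOL_ROUTES[tool_name]
    # call_api stands in for an authenticated HTTP client.
    return call_api(method, path, arguments)

# Example with a stubbed API client:
def fake_api(method, path, params):
    return {"method": method, "path": path, "params": params}

result = handle_tool_call("list_transactions", {"min_amount": 100}, fake_api)
# result -> {"method": "GET", "path": "/transactions", "params": {"min_amount": 100}}
```

Because the mapping is explicit, administrators control exactly which API operations AI agents can reach, rather than exposing every endpoint.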

A diagram showing different types of clients accessing APIs via a gateway and how the MCP server functions as a new API entry point for the MCP client by forwarding requests to the APIs.

Agent2Agent Protocol

Another interoperable protocol is the Agent2Agent (A2A) Protocol, which enables natural language commands to be sent between A2A clients and A2A servers. AI agents can act as A2A clients to call other agents, which act as A2A servers. A2A enables complex and sometimes long-running tasks, as the following example illustrates. Agents that specialize in particular sets of tasks can collaborate to complete the overall work.

```text
Book me a flight to New York on Tuesday night and a baseball game the following evening, using my membership.
```

A2A clients can be any internet application, such as a customer support web portal. The protocol therefore provides a mechanism to seamlessly expose flexible natural language features to customers. Rather than a customer requesting features that require the enterprise to complete development work, the customer can just change their command.
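On the wire, an A2A client sends the natural language command as a JSON-RPC `message/send` request. The following sketch follows the A2A specification's message shape at the time of writing; treat the `messageId` and exact field details as illustrative.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "parts": [
        {
          "kind": "text",
          "text": "Book me a flight to New York on Tuesday night and a baseball game the following evening, using my membership."
        }
      ],
      "messageId": "9b6a1c2e-0f47-4b1d-8a5e-3c2d1e0f4a6b"
    }
  }
}
```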

A2A enables enterprises to run AI agents in backend environments, as internet APIs that provide A2A server entry points. Backend agents use MCP clients to connect to MCP servers and get enterprise data. By running A2A in backend environments, you can organize agent data access in various ways, such as routing all agent requests for enterprise data through an internal API gateway.

A diagram showing how to run agents in backend environments that receive commands from internet applications.

Enterprise AI Must Integrate Identity

Enterprise AI use cases operate on customer or corporate resources, so they must execute more accurately than basic AI. In particular, you must ensure that identities, user attributes, money and other sensitive resources are handled correctly and safely. You must therefore understand how to safely expose enterprise data to AI agents.

MCP defines interoperable ways to secure AI agent access. Internet users should authenticate, consent to LLMs using their data, and then approve any sensitive agent actions. When enterprise data is involved, administrators must be able to restrict allowed agents and control their level of access. A2A's design can use the same security mechanisms.

MCP authorization is an OAuth Profile that can use many security standards. You apply end-to-end security to authenticate users, authenticate AI agents, implement human approvals and return short-lived least-privilege HTTP credentials to AI agents, with which the agent can securely access MCP servers, APIs and other agents.

Access Token Design is Critical

The HTTP credential that AI agents receive is the OAuth access token, which contains security context for resource servers (APIs), to enable the correct business authorization. MCP servers and A2A servers are resource server entry points that you must secure. Often, upstream APIs implement the main resource server authorization logic.

All APIs must cryptographically verify the integrity of access tokens before using the received context. You should design access tokens to grant minimal API privileges to AI agents and to provide sufficient context to allow APIs to restrict access. The following example access token only allows read access to stock information and clearly informs APIs that an AI agent is present.

```json
{
  "jti": "31b921b8-b166-4173-b633-7480bab89456",
  "delegationId": "d94e9d67-b426-4cff-8613-f7cf2b1ca154",
  "exp": 1762337303,
  "nbf": 1762336403,
  "scope": "stocks/read",
  "iss": "https://login.demo.example/oauth/v2/oauth-anonymous",
  "sub": "john.doe@demo.example",
  "aud": "https://api.demo.example",
  "iat": 1762336403,
  "purpose": "access_token",
  "client_type": "ai-agent",
  "client_assurance_level": 1,
  "region": "USA"
}
```
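An API could enforce this token design with checks like the following Python sketch. It assumes a JWT library has already verified the token's signature, and the policy rule restricting AI agents to read-only scopes is an illustrative assumption.

```python
# Hypothetical sketch of resource server authorization using access token
# claims like those in the example token. A real API must first verify the
# token's signature cryptographically with a JWT library.
import time

def authorize(claims, required_scope, now=None):
    """Return True only if the token grants the required access."""
    now = now if now is not None else int(time.time())
    # Reject expired or not-yet-valid tokens.
    if not (claims.get("nbf", 0) <= now < claims.get("exp", 0)):
        return False
    # Enforce least privilege: the token must carry the required scope.
    if required_scope not in claims.get("scope", "").split():
        return False
    # Example policy: AI agent clients may only use read scopes.
    if claims.get("client_type") == "ai-agent" and not required_scope.endswith("/read"):
        return False
    return True

claims = {
    "exp": 1762337303,
    "nbf": 1762336403,
    "scope": "stocks/read",
    "client_type": "ai-agent",
}

assert authorize(claims, "stocks/read", now=1762336500)       # allowed
assert not authorize(claims, "stocks/write", now=1762336500)  # scope not granted
assert not authorize(claims, "stocks/read", now=1762337400)   # token expired
```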

The first enterprise AI milestone should consist of the following main steps. While learning these steps, prefer experimenting with low sensitivity data or read-only operations.

  • Integrate an end-to-end flow that uses the OAuth security standards required for MCP.
  • Issue least-privilege access tokens that deliver security context to APIs.
  • Implement API authorization using the access token, to restrict how AI agents use enterprise resources.
  • Implement API auditing of AI agent access, so that security teams can govern AI enterprise resource access.

The Design MCP Authorization for APIs article explains how to implement the MCP authorization flow end-to-end. The steps can include trust configuration to enable agent and user onboarding, user authentication, user consent, token issuance and token exchange.
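As an illustration of the token exchange step, a client might send an OAuth 2.0 Token Exchange (RFC 8693) request such as the following. The endpoint path, client identifier and token values are illustrative assumptions, not details from the article.

```text
POST /oauth/v2/oauth-token HTTP/1.1
Host: login.demo.example
Content-Type: application/x-www-form-urlencoded

grant_type=urn:ietf:params:oauth:grant-type:token-exchange
&subject_token=eyJraWQiOi...
&subject_token_type=urn:ietf:params:oauth:token-type:access_token
&audience=https://api.demo.example
&scope=stocks/read
```

The authorization server can then issue a new, narrowly scoped access token for the downstream API, so that each resource server receives only the privileges it needs.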

AI Authorization Server Requirements

To implement AI flows, organizations need the OAuth authorization server to support up-to-date OAuth security standards. In particular, you need control over access token data and token exchange. One way to enable AI integrations is to run an authorization server with the required behaviors alongside existing identity systems.

AI Threat Mitigations

During initial AI agent to API integrations, enterprises should pay particular attention to threats like prompt injection, where an attacker supplies the agent with malicious instructions. AI agents can also hallucinate and perform actions like instructing MCP clients to call unexpected tools or use incorrect input parameters, and they could misuse data returned from APIs. Consider the following mitigations, including human approval mechanisms, to reduce AI threats.

  • Configure boundaries on the API endpoints where AI agents can use access tokens.
  • Configure short-lived tokens so that AI agent access expires quickly, moving toward zero standing privilege.
  • Present end users with clear consent screens to enable approval of the AI agent's requested level of API access.
  • Consider the use of a CIBA flow to remotely authorize backend AI agent jobs.

The API Security Best Practices for AI Agents article provides further detail on threats and mitigations. For example, AI agents in some environments can use strong OAuth client credentials to prevent impersonation by a malicious program.

High Privilege Operations

To enable an OAuth agent to perform a high-privilege task, such as placing a money order, APIs can use the OAuth 2.0 Step Up Authentication Challenge Protocol from RFC 9470. An API can return a challenge response, with an HTTP 401 status code and a WWW-Authenticate header, to indicate that the MCP client must get a new access token with a high-privilege OAuth scope.
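A step-up challenge might look like the following sketch, using the WWW-Authenticate parameters that RFC 9470 defines; the `acr_values` identifier is an illustrative assumption.

```text
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer error="insufficient_user_authentication",
  error_description="A higher authentication level is required",
  acr_values="urn:example:sca"
```

On receiving the challenge, the client runs a new authorization request that meets the indicated authentication requirements, then retries the API call with the new access token.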

Administrators can configure the authorization server to require strong customer authentication (SCA) before issuing an access token with the high-privilege scope. Consent to a high-privilege event such as a payment can also be recorded in a non-repudiable manner. To run an MCP step-up flow, check out the ChatGPT Step-Up Code Example.

In some AI use cases, the user initiates a long-running task and is no longer present to complete a step-up flow. In such cases, you can use the Client-Initiated Backchannel Authentication (CIBA) flow. A typical use case would send a mobile notification to inform the user of final conditions and to collect user approval.

AI Federation

In federated use cases, data accuracy, security and trust requirements become even more critical. Once you have the OAuth foundations to enable enterprise AI, you are well-placed to meet such requirements and extend your business reach. When multiple enterprises have AI foundations, they can implement B2B AI use cases. Administrators at each enterprise use mechanisms to establish trust and grant AI agents and users partial access to their data and services.

As AI use cases become more complex, aim to deliver optimal access tokens to each resource server. Also reduce the number of user prompts, since AI agents call many MCP servers to perform their tasks. Administrator approval will be a key factor to enable Single Sign-On to APIs across trust domains. The SSO for AI Agents with OpenID Connect article explains data integration with trusted partners.

A key ingredient in your enterprise AI strategy should be a future-proof API deployment design that accounts for agents, MCP servers and A2A servers. When agents access enterprise data, prefer backend agent deployments, where you can put in place multiple security controls. As a prerequisite for backend agent deployments, ensure that APIs correctly reject unauthorized agent requests. You can then add security hardening, like assigning agents workload identities.

AI end-to-end flows require token exchanges, and you should audit agent access so that you can monitor it at scale. API gateways can implement those responsibilities. For example, an internal API gateway could use access token attributes to implement coarse-grained authorization and auditing of all agent requests for secured resources. The following diagram shows how agent flows can use tokens that communicate security policy.

A diagram showing federated AI access across organizations.

In some sectors, like payments, AI federation can enable new business opportunities and user experiences. Users would typically consent to limits with which agents operate, or the exact details of a transaction. Merchants would receive verifiable access tokens that contain the details of the user's consent, while payment providers continue to handle lower-level aspects, like PCI-compliance.

For high security AI transactions, the user's consent should be non-repudiable. To enable that, expect agents to integrate with digital wallets that issue Verifiable Credentials, to store the user's runtime intent in a cryptographically signed assertion that backend components like APIs can verify.

Token Intelligence

AI use cases highlight the need for a specialist token issuer, to enable optimal access tokens, with federation and human approvals, so that agents can complete complex tasks. In an OAuth architecture, you can upgrade the token issuer but continue to use your existing identity system for user account storage and all user authentication screens. The existing identity system then assumes the External Identity Provider (IDP) role.

Example Integrations

Curity provides a number of secured AI code examples that you can run end-to-end on a development computer. You can inspect the code to understand various security design patterns for AI agent access.

The Backend Agent with A2A Authorization example also demonstrates the integration of a specialist token issuer. It provides an Azure deployment that uses Entra ID for user account storage and user authentication, with the Curity Identity Server as the specialist token issuer.

Summary

Many enterprises have AI teams who are gaining skills in AI technologies. However, enterprises typically lack an effective AI strategy and unintentionally limit themselves to basic AI. Enterprise AI initiatives often fail due to conflicts between teams. For example, an AI team might spend time developing a customer solution, but a compliance team could block it from going live, due to security and privacy concerns.

As this article shows, the steps to safely enable enterprise AI are logical. With modern identity foundations, development work only needs to follow long-established API principles, centered on a future-proof authorization strategy. The enterprise AI use case clearly shows how, with logical decision making, security can be a business enabler and help teams across the enterprise reach agreement.


Gary Archer

Product Marketing Engineer at Curity
