Apigee – Scaling

Our goal is to provide high-performance, reliable APIs, and we have to do this whether we have just five clients or the number of clients rises to five hundred thousand: we have to keep our APIs working correctly by scaling.



If an API serves static data, or data that does not change over a period of time, a cache can be an ally.

Why should we use caching?

  • Improves performance by reducing network latency and eliminating redundant requests,
  • Reduces the load on the backend services,
  • Makes the API highly scalable, supporting more transactions without additional hardware,
  • Can be used to persist session data for reuse across HTTP transactions,
  • Supports security,

Caches are built on a two-level system:

  • In-memory level (L1): fast access. Each node has a percentage of memory reserved for use by the cache; when the memory limit is reached, Apigee Edge removes cache entries from memory in order of time since last access, oldest entries first.
  • Persistent level (L2): all message processing nodes share a cache data store (Cassandra) for persisting cache entries. Entries remain persisted even after being removed from L1, and there is no limit on the number of cache entries, only on the size of each entry.

Cache entries expire only on the basis of their expiration settings.

Apigee Edge provides a few cache policies: Populate Cache, Lookup Cache, Invalidate Cache, and Response Cache.


Populate Cache/Lookup Cache/Invalidate Cache: use these to store custom data objects or information that persists across multiple API transactions.

With these policies, we can add or remove cache entries using separate policies.

The flow should be: first the Lookup Cache policy, then the policies needed to build the value when the cache is empty, and finally the Populate Cache policy.
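As a sketch of that ordering (policy names are illustrative), the request pre-flow of a proxy endpoint could chain the policies, using the built-in `lookupcache.{policy-name}.cachehit` variable so the value is built and stored only on a cache miss:

```xml
<PreFlow name="PreFlow">
    <Request>
        <!-- 1. Try to read the entry from the cache first -->
        <Step>
            <Name>Lookup-Cache</Name>
        </Step>
        <!-- 2. Build the value only when the lookup missed -->
        <Step>
            <Name>Build-Logging-Value</Name>
            <Condition>lookupcache.Lookup-Cache.cachehit == false</Condition>
        </Step>
        <!-- 3. Store the freshly built value for later transactions -->
        <Step>
            <Name>Populate-Cache</Name>
            <Condition>lookupcache.Lookup-Cache.cachehit == false</Condition>
        </Step>
    </Request>
</PreFlow>
```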

For instance, in the following Lookup Cache implementation, we look up the value stored under the 'cachekey' entry and assign it to the variable 'logging'.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<LookupCache async="false" continueOnError="false" enabled="true" name="Lookup-Cache">
    <CacheKey>
        <KeyFragment>cachekey</KeyFragment>
    </CacheKey>
    <AssignTo>logging</AssignTo>
</LookupCache>


In the Populate Cache policy, we populate a new entry with the key 'cacheKey' with the value taken from the 'logging' variable.


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<PopulateCache async="false" continueOnError="false" enabled="true" name="Populate-Cache">
    <CacheKey>
        <KeyFragment>cacheKey</KeyFragment>
    </CacheKey>
    <Source>logging</Source>
</PopulateCache>


To create the cache resource, go to the environment configuration board and, on the first tab, 'Cache', add a new entry. From this board it is also possible to clear the cache.
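Once the cache resource exists in the environment, any of the cache policies can reference it by name through the `<CacheResource>` element (the resource name here is illustrative):

```xml
<!-- Inside a Lookup Cache, Populate Cache, or Invalidate Cache policy -->
<CacheResource>myCache</CacheResource>
```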


Response Cache: caches data from a backend resource, reducing the number of requests to that resource. Apigee supports only a subset of the directives from the HTTP/1.1 cache-control specification on responses from origin servers, so several standards associated with HTTP cache control cannot be used.
To implement this type of cache, add a new Response Cache policy on the request that you want to cache. The code to cache the 'GET /cities' request:


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ResponseCache async="false" continueOnError="false" enabled="true" name="RC-cacheCities">
    <CacheKey>
        <KeyFragment ref="request.uri" type="string"/>
    </CacheKey>
    <ExpirySettings>
        <TimeoutInSec>3600</TimeoutInSec>
    </ExpirySettings>
</ResponseCache>
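To take effect, a Response Cache policy also has to be attached to the flow that handles the request, on both the request side (the lookup) and the response side (the store). A sketch of such a conditional flow in the proxy endpoint (flow name and condition are illustrative):

```xml
<Flow name="getCities">
    <Request>
        <!-- Check the cache before the request is forwarded -->
        <Step>
            <Name>RC-cacheCities</Name>
        </Step>
    </Request>
    <Response>
        <!-- Store the backend response on the way back -->
        <Step>
            <Name>RC-cacheCities</Name>
        </Step>
    </Response>
    <Condition>(proxy.pathsuffix MatchesPath "/cities") and (request.verb = "GET")</Condition>
</Flow>
```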


Load Balancer


The purpose of a load balancer is to improve responsiveness and increase the availability of applications by distributing network or application traffic across several services.
Configuring the Apigee load balancer is easy: we just need to configure one or more named TargetServers and choose one of the available algorithms: RoundRobin, Weighted, or LeastConnections.
We can also define a fallback server. It is also possible to test whether a server is running, and remove it from the load balancer if it is not.
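A minimal sketch of such a configuration in the target endpoint, assuming TargetServers named target1, target2, and fallback1 have already been created in the environment:

```xml
<TargetEndpoint name="default">
    <HTTPTargetConnection>
        <LoadBalancer>
            <!-- Distribute requests evenly across the named servers -->
            <Algorithm>RoundRobin</Algorithm>
            <Server name="target1"/>
            <Server name="target2"/>
            <!-- Used only when the other servers are unavailable -->
            <Server name="fallback1">
                <IsFallback>true</IsFallback>
            </Server>
        </LoadBalancer>
        <Path>/cities</Path>
    </HTTPTargetConnection>
</TargetEndpoint>
```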


Response Cache supports only a subset of the directives from the HTTP/1.1 cache-control specification on responses from origin servers, and this can be an obstacle because developers are used to working with the HTTP specifications and rely on their benefits.






Apigee – Getting started with Apigee

Apigee is a full lifecycle API management platform that enables API providers to design, secure, deploy, monitor and scale APIs, managing the entire API lifecycle. How easy is it to start using Apigee?

I decided to find out by trying a proof of concept where the objective was to configure a simple API using just the Apigee Edge management UI.

In this POC I wanted to explore the following features:

  • API Design
  • OAuth 2.0 authentication
  • Security Rules
  • Interaction with External Services
  • Cache
  • Scaling
  • Maintenance
  • Logging
  • Deploy and Version strategy

To test Apigee I used a sample 'Get City' REST API service available on Google Cloud. This service returns and saves cities and points of interest. The final API will support the following list of requests:

GET /cities
GET /cities/{city_id}
GET /cities/{city_id}/pointsofinterest
GET /cities/{city_id}/pointsofinterest/{point_of_interest_id}
POST /cities/{city_id}/pointsofinterest

Target Endpoint:


Let’s first define some keywords and explain how Apigee works.

When you first create an account, you are assigned to an organization. Apigee provides you with one or more organizations, also called orgs. An org contains developers, developer apps, API products, API proxies, and the other items needed to configure the team's APIs.

API Proxy

Typically, an API proxy is a facade for one or more generic APIs, services, or applications. With the API proxy we have one extra layer between the client and the services, but we also gain an additional control layer where we can manage our services and configure all the policies and rules, so we can:

  • verify security tokens
  • collect analytics information
  • serve requests from the cache  
  • perform traffic management

The proxy endpoints follow RESTful principles. The HTTP verbs GET, POST, PUT, and DELETE are supported; the verb PATCH is not.

A proxy is responsible for handling requests from the client, executing all the configured policies, and forwarding the request to the back-end server.

The proxy has two kinds of endpoints:

  • Proxy endpoint: includes client-specific policies and can have three types of flow: a pre-flow, one or more conditional flows, and a post-flow.
  • Target endpoint: includes policies related to a particular back-end server and can have the same three types of flow (pre-flow, conditional flows, and post-flow), plus an extra post-client flow, which executes after the response is sent to the client.

The proxy and target endpoints consist of XML flows, or paths. A proxy has a request path, which is the path the request takes from the client to the back-end server, and a response path, which is the path the response takes from the target back to the client.
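The structure described above can be sketched as a proxy endpoint skeleton (base path, flow names, and policy names are placeholders):

```xml
<ProxyEndpoint name="default">
    <!-- Pre-flow: runs on every request, before any conditional flow -->
    <PreFlow name="PreFlow">
        <Request>
            <Step>
                <Name>Verify-Token</Name>
            </Step>
        </Request>
    </PreFlow>
    <!-- Conditional flows: each runs only when its condition matches -->
    <Flows>
        <Flow name="getCities">
            <Condition>(proxy.pathsuffix MatchesPath "/cities") and (request.verb = "GET")</Condition>
        </Flow>
    </Flows>
    <!-- Post-flow: runs on every request, after the conditional flows -->
    <PostFlow name="PostFlow"/>
    <HTTPProxyConnection>
        <BasePath>/citiesinfo</BasePath>
    </HTTPProxyConnection>
    <RouteRule name="default">
        <TargetEndpoint>default</TargetEndpoint>
    </RouteRule>
</ProxyEndpoint>
```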



A policy is like a module that implements a specific, limited management function. All policies are written in XML.
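For example, a Quota policy is just such an XML module (the limits here are illustrative):

```xml
<Quota async="false" continueOnError="false" enabled="true" name="Quota-1">
    <!-- Allow at most 100 requests per one-minute interval -->
    <Allow count="100"/>
    <Interval>1</Interval>
    <TimeUnit>minute</TimeUnit>
</Quota>
```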

Design APIs

From a developer's and manager's perspective, the Apigee Edge Spec Editor can be helpful for building the API structure.

Apigee Edge enables us to model our APIs by creating OpenAPI Specifications with the Spec Editor.

The OpenAPI Specification is a vendor-neutral API description format based on Swagger, created and promoted by the OpenAPI Initiative. To learn more about the OpenAPI Specification: https://www.openapis.org/blog

Developers and clients can use the Spec Editor to add new features, create and update a new proxy automatically, create documentation or just consult the API specifications.

One negative point is that we can't use the Swagger interface to access our APIs through the web browser, but the Apigee team is working to add this feature, and it should appear soon.

My First proxy

In this experiment, I used my API specs to create a new API proxy. I named the API CitiesInfo, and all my requests work without any extra configuration. To add a new proxy through Apigee Edge, just follow these steps:

  1. Open the Develop menu,
  2. Select the API Proxy menu,
  3. Select the button ‘+ Proxy’,
  4. At this point, you can choose between six kinds of proxies. To create the simplest proxy that just makes requests to a backend server, select the Reverse Proxy,
  5. To build a proxy using pre-built specs, select the button ‘Use OpenAPI’ and select the intended specs,
  6. Fill in the details if they are incomplete or you are not using specs. ‘Existing API’ is the backend endpoint,
  7. Click Next,
  8. Select which operations you want to use; you can add more later,
  9. On the Authentication menu, select ‘Pass through (none)’,
  10. Select Next. Here you can see, and enable or disable, the virtual hosts that this proxy will bind to when deployed,
  11. After you select Next, you have your first proxy deployed to a test environment and a board where you can configure and test your proxy,
  12. Test your proxy using the endpoint that you can find under ‘Deployments’.

Continue reading “Apigee – Getting started with Apigee”