Apigee – Scaling

Our goal is to provide high-performance, reliable APIs. Whether we have just five clients or the number rises to five hundred thousand, we have to keep our APIs working correctly by scaling.

 

Cache

If an API provides the same static data or data that does not change over a period of time, a cache can be an ally.

Why should we use caching?

  • Improves performance by reducing network latency and eliminating redundant requests.
  • Reduces the load on backend services.
  • Makes the API highly scalable, supporting more transactions without additional hardware.
  • Can be used to persist session data for reuse across HTTP transactions.
  • Supports security.

Caches are built on a two-level system:

  • In-memory level (L1): fast access. Each node has a percentage of memory reserved for use by the cache; when the memory limit is reached, Apigee Edge removes entries from memory in order of time since last access, oldest entries first.
  • Persistent level (L2): all message processing nodes share a cache data store (Cassandra) for persisting cache entries. Entries are persisted even if removed from L1, and there is no limit on the number of cache entries, only on the size of each entry.

The cache expires only on the basis of expiration settings.

Apigee Edge provides a few cache policies: Populate Cache, Lookup Cache, Invalidate Cache, and Response Cache.

 

Populate Cache/Lookup Cache/Invalidate Cache: use these to store custom data objects or information that persists across multiple API transactions.

With these policies, we can add or remove cache entries using separate policies.

The flow should be: first the Lookup Cache policy, then the policies needed to build the value when the cache is empty, and finally the Populate Cache policy.

In the following Lookup Cache implementation, we look up the value stored under the 'cacheKey' entry and assign it to the variable 'logging'.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<LookupCache async="false" continueOnError="false" enabled="true" name="Lookup-Cache">
   <DisplayName>Lookup-Cache</DisplayName>
   <Properties/>
   <CacheKey>
       <Prefix/>
       <KeyFragment>cacheKey</KeyFragment>
   </CacheKey>
   <CacheResource>CacheKey</CacheResource>
   <Scope>Exclusive</Scope>
   <AssignTo>logging</AssignTo>
</LookupCache>

 

In the Populate Cache policy, we populate a new entry with the key 'cacheKey' using the value from the 'logging' variable.

 

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<PopulateCache async="false" continueOnError="false" enabled="true" name="Populate-Cache">
   <DisplayName>Populate-Cache</DisplayName>
   <Properties/>
   <CacheKey>
       <Prefix/>
       <KeyFragment>cacheKey</KeyFragment>
   </CacheKey>
   <CacheResource>CacheKey</CacheResource>
   <Scope>Exclusive</Scope>
   <ExpirySettings>
       <TimeoutInSec>3600</TimeoutInSec>
   </ExpirySettings>
   <Source>logging</Source>
</PopulateCache>

 

To create the cache resource, like CacheKey, go to the environment configuration board and add a new entry on the first tab, 'Cache'. On this board it is also possible to clear the cache.
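The Invalidate Cache policy mentioned earlier can remove an entry explicitly instead of waiting for it to expire. A minimal sketch, assuming the same 'cacheKey' fragment and 'CacheKey' cache resource used in the examples above:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<InvalidateCache async="false" continueOnError="false" enabled="true" name="Invalidate-Cache">
   <DisplayName>Invalidate-Cache</DisplayName>
   <Properties/>
   <CacheKey>
       <Prefix/>
       <KeyFragment>cacheKey</KeyFragment>
   </CacheKey>
   <CacheResource>CacheKey</CacheResource>
   <Scope>Exclusive</Scope>
   <!-- false: invalidate only this entry, not entries sharing the key prefix -->
   <PurgeChildEntries>false</PurgeChildEntries>
</InvalidateCache>
```

Attaching this policy, for example, to a write endpoint ensures stale data is removed as soon as the underlying resource changes.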

 

Response Cache: caches data from a backend resource, reducing the number of requests to the resource. Apigee supports only a subset of directives from the HTTP/1.1 cache control specification on responses from origin servers, so we cannot use several standards associated with HTTP cache control.
To implement this type of cache, add a new Response Cache policy on the request that you want to cache. The code to cache the 'GET /cities' request:

 

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ResponseCache async="false" continueOnError="false" enabled="true" name="RC-cacheCities">
   <DisplayName>RC-cacheCities</DisplayName>
   <Properties/>
   <CacheKey>
       <Prefix/>
       <KeyFragment ref="request.uri" type="string"/>
   </CacheKey>
   <Scope>Exclusive</Scope>
   <ExpirySettings>
       <ExpiryDate/>
       <TimeOfDay/>
       <TimeoutInSec ref="">3600</TimeoutInSec>
   </ExpirySettings>
   <SkipCacheLookup/>
   <SkipCachePopulation/>
   <UseResponseCacheHeaders>true</UseResponseCacheHeaders>
   <UseAcceptHeader>true</UseAcceptHeader>
</ResponseCache>

 

Load Balancer

 

The purpose of a load balancer is to improve responsiveness and increase the availability of applications by distributing network or application traffic across several servers.
Configuring the Apigee load balancer is really easy: we just need to configure one or more named TargetServers and choose one of the available algorithms, which are RoundRobin, Weighted, and LeastConnections.
We can also define a fallback server. It is also possible to check whether a server is still running, and remove it from the load balancer if it is not.
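As a sketch of the configuration described above (the server names 'target1', 'target2', and 'fallback' are assumptions; each must be created beforehand as a named TargetServer in the environment), the load balancer is declared inside the TargetEndpoint's HTTPTargetConnection:

```xml
<TargetEndpoint name="default">
   <HTTPTargetConnection>
       <LoadBalancer>
           <!-- One of: RoundRobin, Weighted, LeastConnections -->
           <Algorithm>RoundRobin</Algorithm>
           <Server name="target1"/>
           <Server name="target2"/>
           <!-- Only receives traffic when the other servers are unavailable -->
           <Server name="fallback">
               <IsFallback>true</IsFallback>
           </Server>
       </LoadBalancer>
       <!-- Path appended to each TargetServer's host and port -->
       <Path>/cities</Path>
   </HTTPTargetConnection>
</TargetEndpoint>
```

With RoundRobin, requests are spread evenly across 'target1' and 'target2'; the fallback server is used only when both are down.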

Conclusion:

Response Cache supports only a subset of directives from the HTTP/1.1 cache control specification on responses from origin servers, and this can be an obstacle because developers are used to working with the HTTP specifications and counting on their benefits.

 

References:

https://github.com/anil614sagar/advanceddevjam/blob/master/lab4.md

https://en.wikipedia.org/wiki/Load_balancing_(computing)

 
