POC – TeamCity

Continuous integration and continuous deployment are increasingly common practices. Continuous integration is the practice of continuously and automatically testing every change to the code and merging those changes into the main branch, using tools like TeamCity or Jenkins.
This practice has shown several advantages:
Reduced integration risk: with smaller merges we avoid our code colliding with the code from our coworkers.
Higher code quality: we will have more tests and more time to focus on our code.
With version control, we can easily find and fix a branch with a bug. This is easier with small branches.
The testers’ work becomes easier: with different versions and branches they can quickly isolate bugs.
Deployment will be faster, and developers will trust their code more.

For this POC, which explores TeamCity as a continuous integration tool, I configured the process for a small project named ‘CityInfo’.

First things first:
Build Agent. Build agents are responsible for building and testing the project. A build agent is configured through the TeamCity server and uses the software and hardware of the machine it is installed on.

TeamCity Server. This service is responsible for managing the builds, agents, users, and reports.

Build Queue. A list of triggered builds waiting for a compatible agent. The TeamCity server manages these builds.

Build Artifacts. Files produced by a build, for example log files, zip archives, or NuGet packages.

Project. A project represents one software application/project. One project can have multiple build configurations.

Build configuration. The settings used to build an application. In a build configuration we create build steps, set triggers, and attach a version control system, for example.

Build step. A build configuration typically consists of several build steps. Each build step performs a specific part of the build process. For instance, in one build step you can compile the source code and in another run tests.

Build trigger. The rule that establishes when a build should start. A build can be set to run on new commits or at specific times of the day.

The process to install TeamCity is straightforward. Just download it from the JetBrains TeamCity page and follow the steps.

  1. Run the executable:
  2. Select the components to install: build agent and server.
  3. Confirm the configurations:
  4. Specify the user to run the server.
  5. Specify the user to run the agent:
  6. Finish the installation.
  7. And that’s it. We have our TeamCity Server running on http://localhost:9191/. The instruction page to start the TeamCity configuration appears:
  8. On the second page, we should choose a database. In a production environment we should have a dedicated database, but in this case I will choose the default internal database.
  9. Accept the terms and conditions:
  10. Create the admin user:
  11. We already have access to the TeamCity configuration but don’t have an agent. On the agents’ page we can verify that the agent is unauthorized.
  12. We just have to authorize the agent and it will be ready to start running builds:
  13. Now we can create our first project. To do this, I used a small .NET Core project named CityInfo and chose to create a new project.
  14. For now, I will define my source code from a repository URL.
  15. I have to select the project name and the build configuration name.
  16. The TeamCity server will suggest various build step configurations that match my kind of project. You can accept or discard them as you want. I will choose them all to try.
  17. To start I will just enable the build step. 
  18. It doesn’t need any changes to the build step configuration. Just run a new build to test.
  19. Success, we have our first build green.
  20. The next, and very important, step is to run unit tests. To do this I just enabled the tests step.
  21. And without any changes, I triggered a new build and it’s green again. But this time I have more information about the tests: I can use the build log and the tests report to see how the build proceeded and inspect the tests in more detail. If I want to see code coverage, I need to change this step configuration and set the Code Coverage property to JetBrains dotCover. It only works on Windows machines and produces a report where we can see the code coverage and analyze where the tests are not covering our code.
  22. The next step to implement is the NuGet restore. To have this I added a new step with the following configurations:
  23. At this point I cannot run any build because I have an error:
  24. The problem is that I don’t have NuGet installed on my agent. We can install new tools in the administration settings, under the Tools tab.
  25. At this point, we are ready to add Jest tests to our build configuration. First, we have to install Node and npm on the agent machine. Then we have to create a new build step with the "Command Line" runner. We need to specify the folder where it will run and the following custom script:
    npm install

    npm test

    If we retrigger the build, we will see that it’s green. Success. Analyzing the build log, we can see the test results.

  26. At this point we have our Jest tests running, but we don’t have fast feedback in the first view of our build. To get this, we need to install the package "jest-teamcity-reporter" in our project and change package.json to include the following configuration:
    "jest": {
     "testResultsProcessor": "jest-teamcity-reporter"
    }

    Now, in addition to the NUnit unit tests, we have the Jest tests too.

Plan to implement continuous deployment and run integration tests with TeamCity

Visual Studio:

  1. Add a new configuration using the Visual Studio Configuration Manager. Name this configuration Deploy-Dev and copy it from the Debug settings.
  2. On the web.config file, add Config Transforms for the new configuration. Populate the new file with the parameters needed on the server.
  3. In the project configuration, on the Package/Publish Web tab, define the following parameters:
    Configuration: Deploy-Dev
    IIS Web Site: the site name on IIS.
  4. You can test these configurations using Web.deploy.cmd

TeamCity:

  1. Create a new Build Configuration and add two environment variables to the project.
    1. Configuration – Deploy-Dev: the name of the configuration that you want to use,
    2. TargetServer – the remote server where you want to deploy.
  2. Add the first build step with the MSBuild runner, set the build file path to the .csproj file and, in Command line parameters, pass the parameters needed to do the publish. For instance:
    /P:Configuration=%env.Configuration%
    /P:DeployTarget=MSDeployPublish
    /P:MsDeployServiceUrl=https://%env.TargetServer%/MsDeploy.axd
    /P:DeployOnBuild=True
    /P:AllowUntrustedCertificate=True
    /P:MSDeployPublishMethod=WMSvc
    /P:CreatePackageOnPublish=True
    /P:UserName=AutoDeploy\Administrator
    /P:Password=Passw0rd

    You can see that we are using the environment variables here.

  3. In the Dependencies tab of our build configuration, we can define the order of our build configurations. For instance, this build configuration should run only after the tests build, and only if the tests build succeeded.

To run the integration tests:

Create a new build configuration just for the integration tests.

If the integration tests pass, create new build configurations for all the servers and connect everything with dependencies.

SonarQube integration.

SonarQube is a powerful tool capable of creating helpful reports about the health of our projects. To have SonarQube working with our projects we need to:

  1. Download SonarQube server from this link.
  2. Select your OS version and run the file StartSonar.bat. In my case it is in the path “sonarqube-7.3\bin\windows-x86-64”. It should be running on port 9000. You can access it at localhost:9000.
  3. Download the plugin SonarQube for MSBuild. Use this link.
  4. Copy the plugin to the SonarQube plugins folder. It should be in the path “\sonarqube-7.3\extensions\plugins”.
  5. Add a new “Command Line” step configuration with the following commands:
    SonarScanner.MSBuild.exe begin /k:"project-key"
    MSBuild.exe /t:Rebuild
    SonarScanner.MSBuild.exe end

    Run a new build; you should now have new reports on your SonarQube server.

Conclusion:

References:

https://www.troyhunt.com/you-deploying-it-wrong-teamcity_26/

https://confluence.jetbrains.com/display/TCD18/Concepts

Unit Testing – Guidelines

All of us agree that testing is good and there are numerous advantages to writing unit tests. However, sometimes we disagree on how to do our tests: TDD or BDD, what our code coverage goal should be, and how to reach it.

The purpose of unit tests is to validate that each unit of code performs as expected. They are the first line of tests, or the first line of defense, for developers. They are implemented and run by software developers from the early stages of the development process, using several tools throughout the process.

 

Advantages of unit testing:
  1. Fewer bugs are deployed, so the product delivered to the client has better quality. Happy clients and decreased reliance on customer service, quality assurance teams, and bug reports.
  2. Unit testing ensures that we don’t break anything when refactoring is needed, because we can always find opportunities to improve our code and sometimes we really have to.
  3. You don’t have to test everything manually every time you make a change or add a new feature.
  4. Developers have a way to quickly verify the behavior of their code between edits. The feedback is much faster than with functional and integration tests.
Rules of thumb:

All the tests should respect a list of principles:

  1. Unit tests ensure that individual components work appropriately in isolation from the rest of the code. A unit test should focus on a single ‘unit of code’;
  2. Unit tests must be isolated from dependencies: no network access and no database requests.
  3. The tests should provide a clear description of the feature being tested. It should be provided in the test name using the following template:
    MethodUnderTest_inputOrScenarioUnderTest_expectedResult
    Bad example:

    [TestMethod]
    public void UnitTest_ClosetEnemy()
    {…}

    Good example:

    [TestMethod]
    public void ClosestEnemy_Matrix2X2WithoutTwoElement_ReturnZero()
    {
    // Arrange
    const int EXPECTED_NEGATIVE_VALUE = 0;
    var positions = new int[][] { new[] { 1, 0 }, new[] { 0, 0 } };

    // Act
    var actualValue = ClosestEnemy(positions);

    // Assert
    Assert.AreEqual(expected: EXPECTED_NEGATIVE_VALUE, actual: actualValue, message: $"ClosestEnemy expected value when there isn't one element was {EXPECTED_NEGATIVE_VALUE} and the actual value is {actualValue}");
    }

  4. The tests should be arranged with the common pattern Arrange, Act, Assert.
    • Arrange: creating objects and setting them up as necessary;
    • Act: act on an object;
    • Assert: assert what is expected;

    Separating all of these actions highlights the test’s dependencies and what the test is trying to assert. The principal advantage is readability.

    Bad example:

    [TestMethod]
    public void ClosestEnemy_Matrix2X2WithoutTwoElement_ReturnZero()
    {
    var positions = new int[][] { new[] { 1, 0 }, new[] { 0, 0 } };

    Assert.AreEqual(expected: 0, actual: ClosestEnemy(positions), message: $"ClosestEnemy expected value when there isn't one element was 0 and the actual value is {ClosestEnemy(positions)}");
    }

    Good example:

    [TestMethod]
    public void ClosestEnemy_Matrix2X2WithoutTwoElement_ReturnZero()
    {
    // Arrange
    const int EXPECTED_NEGATIVE_VALUE = 0;
    var positions = new int[][] { new[] { 1, 0 }, new[] { 0, 0 } };

    // Act
    var actualValue = ClosestEnemy(positions);

    // Assert
    Assert.AreEqual(expected: EXPECTED_NEGATIVE_VALUE, actual: actualValue, message: $"ClosestEnemy expected value when there isn't one element was {EXPECTED_NEGATIVE_VALUE} and the actual value is {actualValue}");
    }

  5. The input should be as minimal as possible and just what is necessary to test the current scenario. When creating a new test we want to focus on the behavior and avoid unnecessary information that can introduce errors in our tests and make the test hard to read.
  6. The tests should be part of the delivery pipeline and, on failure, should provide an explicit message or report with information about what you were testing, what it should do, what the output was and what the expected result was.
  7. Avoid undeclared variables. An undefined variable can confuse the reader of the test and can cause errors. Bad example:
    // Act
    var actualValue = GetCityName();

    // Assert
    Assert.AreEqual(expected: "cityName", actual: actualValue, message: $"CityName expected value was cityName and the actual value is {actualValue}");

    Good example:

    // Arrange
    const string EXPECTED_NAME = "Porto";

    // Act
    var actualValue = GetCityName();

    // Assert
    Assert.AreEqual(expected: EXPECTED_NAME, actual: actualValue, message: $"CityName expected value was {EXPECTED_NAME} and the actual value is {actualValue}");

  8. Avoid multiple asserts in the same test. With multiple asserts per test, if one assert fails, the subsequent asserts will not be evaluated and we cannot see everything that is really failing in a single view, as in the sketch below.
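    For illustration only, a sketch of splitting the checks into separate tests instead of stacking asserts (GetCity is a hypothetical method returning a city with Name and Country properties):

    [TestMethod]
    public void GetCity_DefaultCity_ReturnExpectedName()
    {
    // Arrange
    const string EXPECTED_NAME = "Porto";

    // Act
    var actualCity = GetCity();

    // Assert
    Assert.AreEqual(expected: EXPECTED_NAME, actual: actualCity.Name);
    }

    [TestMethod]
    public void GetCity_DefaultCity_ReturnExpectedCountry()
    {
    // Arrange
    const string EXPECTED_COUNTRY = "Portugal";

    // Act
    var actualCity = GetCity();

    // Assert
    Assert.AreEqual(expected: EXPECTED_COUNTRY, actual: actualCity.Country);
    }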
  9. Don’t generate random data. Usually, random data is irrelevant to the test’s purpose. If the data is irrelevant, why should we generate it? It just makes our code more unreadable, and if the test fails we don’t know the generated value or why it failed unless we log it.

Bad example:

// Arrange
const string ANY_NAME = RandomStringUtils.random(8);

// Act
var actualValue = GetCityName();

// Assert
Assert.AreEqual(expected: ANY_NAME, actual: actualValue, message: $"CityName expected value was {ANY_NAME} and the actual value is {actualValue}");

Good Example:

// Arrange
const string EXPECTED_NAME = "Porto";

// Act
var actualValue = GetCityName();

// Assert
Assert.AreEqual(expected: EXPECTED_NAME, actual: actualValue, message: $"CityName expected value was {EXPECTED_NAME} and the actual value is {actualValue}");

Mocks

Mocking is necessary for unit testing when the unit being tested has external dependencies. The goal is to replace the behavior or state of the external dependency.
Most languages now have frameworks that make it easy to create mock objects, which is why using interfaces for your dependencies makes your code ideal for testing. A mocking framework can easily create a mock of an interface, simulating the behavior of the real implementation.
We can also implement fakes with the intent of replacing the dependencies’ behavior. Fakes replace the original code by implementing the same interface. The disadvantage is that fakes are hard to code to return different results in order to test several use cases, and the tests can become difficult to understand and maintain with a lot of “elses”.

Example:

Suppose we have the class PizzaBuilder, and this class has one dependency, the BreadBuilder. We need an interface IBreadBuilder and to resolve this dependency with dependency injection.

Our class will be:

public class PizzaBuilder
{
private readonly IBreadBuilder breadBuilder;

public PizzaBuilder(IBreadBuilder breadBuilder)
{
this.breadBuilder = breadBuilder;
}

public Pizza GetPizza()
{ ... }
}

Using interfaces we can easily mock their methods and manipulate just the data needed to our unit tests:

var mockedBread = "Bread";
var mockBreadBuilder = new Mock<IBreadBuilder>();
mockBreadBuilder.Setup(x => x.GetBread(It.IsAny<int>())).Returns(mockedBread);
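
For contrast, a fake is a hand-written implementation of the same interface. A minimal sketch, assuming IBreadBuilder exposes a GetBread(int) method returning a string, as the mock setup above suggests:

public class FakeBreadBuilder : IBreadBuilder
{
// Always returns the same bread, whatever size is requested.
public string GetBread(int size)
{
return "Bread";
}
}

The fake can then be injected directly: var pizzaBuilder = new PizzaBuilder(new FakeBreadBuilder());
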
TestInitialize

The method with the attribute [TestInitialize] will be executed before every test. It’s the perfect place to define and allocate resources needed by all the tests. But be careful when using this attribute, because it is not a bag to put all the Arrange steps in; sometimes the purpose of the test is clearer with all the Arrange steps inside the test.

Example:

[TestInitialize]
public void Initialize()
{
var mockedBread = "Bread";
var mockBreadBuilder = new Mock<IBreadBuilder>();
mockBreadBuilder.Setup(x => x.GetBread(It.IsAny<int>())).Returns(mockedBread);

}

Code coverage

Code coverage refers to the quantity of code covered by test cases. Usually, developers try to produce a high level of coverage, but 100% code coverage doesn’t mean we know with 100% assurance that the code does what it should do, because there are two kinds of coverage:

  • Code coverage: how much of the code is executed;
  • Case coverage: how many of the use cases are covered by the tests.

Case coverage refers to how the code behaves in different real-world scenarios, and it can depend on so many situations that covering all the cases is impossible. 100% code coverage does not ensure 100% case coverage.
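
A tiny sketch of the difference, using a hypothetical Divide method: the test below executes every line of the method (100% code coverage) yet never exercises the divide-by-zero case, so case coverage is incomplete.

public static int Divide(int a, int b)
{
return a / b;
}

[TestMethod]
public void Divide_FourByTwo_ReturnTwo()
{
// Arrange
const int EXPECTED_QUOTIENT = 2;

// Act
var actualValue = Divide(4, 2);

// Assert
Assert.AreEqual(expected: EXPECTED_QUOTIENT, actual: actualValue);
}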

Should I do unit tests for everything?
If your code doesn’t have any logic, you can have 0% code coverage or exclude it from the code coverage. If your code relies on I/O dependencies, like querying a database, capturing user input, and so on, it will not be easy to mock and test. If you have to do a bunch of mocking to create a decent unit test, perhaps that code doesn’t need unit tests at all.

TDD – Test-Driven Development

TDD focuses on writing tests first to drive how the code is implemented. TDD pushes developers to focus on product requirements before writing code, contrary to standard programming where developers write unit tests after developing the code. Following it, no logic is written without unit tests, which makes it possible to have very high test coverage.

The process is always:

  • Write one test;
  • Watch it fail;
  • Implement the code;
  • Watch the test pass;
  • Repeat;

Step by Step process:

To explain the TDD process, I will use a challenge named Closest Enemy. The input is a 2D matrix that includes just the numbers 1, 0, and 2. From the position in the matrix where the element “1” is, we return the number of spaces, either left, right, down, or up, we have to move to reach an enemy, which is represented by a 2. In a second phase, we should also be able to wrap around from one side of the matrix to the other. For example, if our matrix is [“0000”, “1000”, “0002”, “0002”] then it looks like the following:

0 0 0 0
1 0 0 0
0 0 0 2
0 0 0 2

For this input our function should return 2, because the closest enemy is 2 spaces away from the 1 by moving left to wrap to the other side and then moving down. The array will contain any number of 0’s and 2’s, but only a single 1. If we cannot find any 1, the function should return -1. It may also not contain any 2’s at all, in which case the function should return 0.

  1. Decide the inputs and outputs and the function signature. Our input will be an int matrix and the output the number of spaces between the 1 element and the closest enemy.
    public int ClosestEnemy(int[][] positions)
    {
    return 0;
    }
  2. Implement one test that fails. To start, we should test the basics of what the function should do, the first one or two lines. TDD is about focusing on tiny things only. So, first validate that the 1 element exists in the matrix. If it doesn’t exist, the function should return -1. The test will fail because at this point we still don’t have the code needed for the feature.
    [TestMethod]
    public void ClosestEnemy_Matrix2X2WithoutOneElement_ReturnNegativeOne()
    {
    // Arrange
    const int EXPECTED_NEGATIVE_VALUE = -1;
    var positions = new int[][] {new [] { 0, 0 }, new []{ 0, 2 } };

    // Act
    var actualValue = ClosestEnemy(positions);

    // Assert
    Assert.AreEqual(expected: EXPECTED_NEGATIVE_VALUE, actual: actualValue, message: $"ClosestEnemy expected value when there isn't one element was {EXPECTED_NEGATIVE_VALUE} and the actual value is {actualValue}");
    }

  3. Implement the code to fix the test:
    public int ClosestEnemy(int[][] positions)
    {
    var onePosition = this.GetOnePosition(positions);
    if (onePosition == null)
    {
    return -1;
    }
    return 0;
    }
  4. Implement another test. Given a matrix without any 2, the function should return 0. Watch the test fail.
    [TestMethod]
    public void ClosestEnemy_Matrix2X2WithoutTwoElement_ReturnZero()
    {
    // Arrange
    const int EXPECTED_NEGATIVE_VALUE = 0;
    var positions = new int[][] { new[] { 1, 0 }, new[] { 0, 0 } };

    // Act
    var actualValue = ClosestEnemy(positions);

    // Assert
    Assert.AreEqual(expected: EXPECTED_NEGATIVE_VALUE, actual: actualValue, message: $"ClosestEnemy expected value when there isn't one element was {EXPECTED_NEGATIVE_VALUE} and the actual value is {actualValue}");
    }

  5. Fix the code and watch the test passing.
    public static int ClosestEnemy(int[][] positions){
    var onePosition = GetOnePosition(positions);
    if (onePosition == null)
    {
    return -1;
    }
    var listOfEnemies = GetAllTheEnemies(positions);
    if (!listOfEnemies.Any())
    {
    return 0;
    }
    return 1;
    }
  6. Implement another test. Given a valid matrix, the function should return the minimum number of spaces between the 1 element and the closest 2 element. Watch the test fail:
    [TestMethod]
    public void ClosestEnemy_Matrix2X2_ReturnMinimumValueSpace()
    {
    // Arrange
    const int EXPECTED_EMPTY_SPACE_VALUE = 2;
    var positions = new int[][] { new[] { 1, 0 }, new[] { 0, 2 } };
    // Act
    var actualValue = ClosestEnemy(positions);
    // Assert
    Assert.AreEqual(expected: EXPECTED_EMPTY_SPACE_VALUE, actual: actualValue, message: $"ClosestEnemy expected value was {EXPECTED_EMPTY_SPACE_VALUE} and the actual value is {actualValue}");
    }
  7. Develop the code to fix the test:
    public int ClosestEnemy(int[][] positions)
    {
    var onePosition = this.GetOnePosition(positions);
    if (onePosition == null)
    {
    return -1;
    }
    var listOfEnemies = this.GetAllTheEnemies(positions);
    if (!listOfEnemies.Any())
    {
    return 0;
    }
    var listOfSpaces = listOfEnemies.Select(enemy => CalculeSpaces(onePosition, enemy));
    return listOfSpaces.Min();
    }

    private int CalculeSpaces((int, int)? onePosition, (int, int) enemy)
    {
    var Xspaces = Math.Abs(onePosition.Value.Item1 - enemy.Item1);
    var Yspaces = Math.Abs(onePosition.Value.Item2 - enemy.Item2);
    return Xspaces + Yspaces;
    }

  8. Implement a test to guarantee that we can wrap around from one side of the matrix to the other. Watch the test fail.
    [TestMethod]
    public void ClosestEnemy_Matrix4x4_ReturnAroundEmptySpace()
    {
    // Arrange
    const int EXPECTED_EMPTY_SPACE_VALUE = 2;
    var positions = new int[][] { new[] { 0, 0, 0, 0, }, new[] { 1, 0, 0, 0 }, new[] { 0, 0, 0, 2 }, new[] { 0, 0, 0, 2 } };

    // Act
    var actualValue = ClosestEnemy(positions);

    // Assert
    Assert.AreEqual(expected: EXPECTED_EMPTY_SPACE_VALUE, actual: actualValue, message: $"ClosestEnemy expected value was {EXPECTED_EMPTY_SPACE_VALUE} and the actual value is {actualValue}");
    }

  9. Implement the code to fix the test.
    public static int ClosestEnemy(int[][] positions)
    {
    var onePosition = GetOnePosition(positions);
    if (onePosition == null)
    {
    return -1;
    }
    var listOfEnemies = GetAllTheEnemies(positions);
    if (!listOfEnemies.Any())
    {
    return 0;
    }
    var listOfSpaces = listOfEnemies.Select(enemy => CalculateAllTheSpaces(onePosition, enemy, positions.Length));
    return listOfSpaces.Min();
    }

    private static int CalculateAllTheSpaces((int, int)? onePosition, (int, int) enemy, int matrixSize)
    {
    var Xspaces = Math.Abs(onePosition.Value.Item1 - enemy.Item1);
    var Yspaces = Math.Abs(onePosition.Value.Item2 - enemy.Item2);
    var space = ShorterPath(Xspaces, matrixSize) + ShorterPath(Yspaces, matrixSize);
    return space;
    }

    private static int ShorterPath(int spaces, int matrixSize)
    {
    return spaces <= matrixSize - spaces ? spaces : matrixSize - spaces;
    }
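
The walkthrough above calls two helper methods, GetOnePosition and GetAllTheEnemies, that are not shown. A minimal sketch of what they could look like (an assumption for illustration, not the original implementation; it relies on System.Collections.Generic):

private static (int, int)? GetOnePosition(int[][] positions)
{
// Returns the (row, column) of the single 1, or null when it is missing.
for (var row = 0; row < positions.Length; row++)
{
for (var column = 0; column < positions[row].Length; column++)
{
if (positions[row][column] == 1)
{
return (row, column);
}
}
}
return null;
}

private static List<(int, int)> GetAllTheEnemies(int[][] positions)
{
// Collects the (row, column) coordinates of every 2 in the matrix.
var enemies = new List<(int, int)>();
for (var row = 0; row < positions.Length; row++)
{
for (var column = 0; column < positions[row].Length; column++)
{
if (positions[row][column] == 2)
{
enemies.Add((row, column));
}
}
}
return enemies;
}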


BDD – Behavior Driven Development

BDD focuses on testing the behavior that is related to business outcomes. Instead of thinking only about how the code is implemented, we spend some time thinking about what the scenario is. The language used to define the tests should be more generic and understandable by everyone involved in the project, including stakeholders, for example.

BDD and TDD are not enemies. In truth, BDD can extend the process of TDD with better guidelines.
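
As an illustration only (plain MSTest naming rather than a full BDD framework such as SpecFlow), the earlier ClosestEnemy test could be phrased in Given/When/Then language:

[TestMethod]
public void GivenAMatrixWithNoEnemies_WhenLookingForTheClosestEnemy_ThenZeroIsReturned()
{
// Given a 2x2 matrix that contains the player (1) and no enemies (2)
var positions = new int[][] { new[] { 1, 0 }, new[] { 0, 0 } };

// When the closest enemy distance is calculated
var actualValue = ClosestEnemy(positions);

// Then the distance is zero
Assert.AreEqual(expected: 0, actual: actualValue);
}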

Conclusion:

Yes, you have to spend more time to implement TDD in your project, but I assure you that your client will find far fewer bugs and you will be able to refactor your code without any fear. It’s all about priorities: the cost of a bug that makes it into production can be many times larger than the cost of the time spent implementing unit tests.

References:

https://medium.com/javascript-scene/5-common-misconceptions-about-tdd-unit-tests-863d5beb3ce9

https://www.computer.org/csdl/mags/so/2007/03/s3024.pdf

https://medium.com/javascript-scene/what-every-unit-test-needs-f6cd34d9836d

https://www.sitepoint.com/javascript-testing-unit-functional-integration/

https://docs.microsoft.com/en-us/dotnet/core/testing/unit-testing-best-practices

Apigee – Scaling

Our goal is to provide high-performance and reliable APIs, and we have to do this whether we have just five clients or the number of clients rises to five hundred thousand; we have to keep our APIs working correctly by scaling.

 

Cache

If an API provides the same static data or data that does not change over a period of time, a cache can be an ally.

Why should we use caching?

  • Improves performance by reducing network latency and eliminating redundant requests,
  • Reduces the amount of load on the backend services,
  • Makes it highly scalable to support more transactions without additional hardware,
  • Can be used to process session data for reuse across HTTP transactions,
  • Supports security,

Caches are built on a two-level system:

  • In-memory level (L1): fast access; each node has a percentage of memory reserved for use by the cache. When the memory limit is reached, Apigee Edge removes cache entries from memory in order of time since last access, with the oldest entries removed first.
  • Persistent level (L2): all message-processing nodes share a cache data store (Cassandra) for persisting cache entries. Entries are persisted even if removed from L1, and there is no limit on the number of cache entries, only on the size of each entry.

The cache expires only on the basis of expiration settings.

Apigee Edge provides a few cache policies: Populate Cache, Lookup Cache, Invalidate Cache and Response Cache.

 

Populate Cache/Lookup Cache/Invalidate Cache: use these to store custom data objects or information that persists across multiple API transactions.

With these policies, we can add or remove cache entries using separate policies.

The flow should be: first the Lookup Cache policy, then the policies needed to produce the value when the cache is empty, and finally the Populate Cache policy.

For instance, in the following Lookup Cache implementation, we look up the value in the ‘cachedKey’ entry and assign it to the variable ‘logging’.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<LookupCache async="false" continueOnError="false" enabled="true" name="Lookup-Cache">
   <DisplayName>Lookup-Cache</DisplayName>
   <Properties/>
   <CacheKey>
       <Prefix/>
       <KeyFragment>cachedKey</KeyFragment>
   </CacheKey>
   <CacheResource>Cache</CacheResource>
   <Scope>Exclusive</Scope>
   <AssignTo>logging</AssignTo>
</LookupCache>

 

In the Populate Cache policy, we populate a new entry with the key ‘cacheKey’ with the value from the ‘logging’ variable.

 

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<PopulateCache async="false" continueOnError="false" enabled="true" name="Populate-Cache">
   <DisplayName>Populate-Cache</DisplayName>
   <Properties/>
   <CacheKey>
       <Prefix/>
       <KeyFragment>cacheKey</KeyFragment>
   </CacheKey>
   <CacheResource>CacheKey</CacheResource>
   <Scope>Exclusive</Scope>
   <ExpirySettings>
       <TimeoutInSec>3600</TimeoutInSec>
   </ExpirySettings>
   <Source>logging</Source>
</PopulateCache>

 

To create the cache resource, like CacheKey, go to the environment configuration board and, on the first tab, ‘Cache’, add a new entry. On this board it is also possible to clear the cache.

 

Response Cache: caches data from a backend resource, reducing the number of requests to the resource. Apigee supports only a subset of directives from the HTTP/1.1 cache-control specification on responses from origin servers, so we cannot use several standards associated with HTTP cache control.
To implement this type of cache, add a new Response Cache policy on the request that you want to cache. The code to cache the ‘GET /cities’ request:

 

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ResponseCache async="false" continueOnError="false" enabled="true" name="RC-cacheCities">
   <DisplayName>RC-cacheCities</DisplayName>
   <Properties/>
   <CacheKey>
       <Prefix/>
       <KeyFragment ref="request.uri" type="string"/>
   </CacheKey>
   <Scope>Exclusive</Scope>
   <ExpirySettings>
       <ExpiryDate/>
       <TimeOfDay/>
       <TimeoutInSec ref="">3600</TimeoutInSec>
   </ExpirySettings>
   <SkipCacheLookup/>
   <SkipCachePopulation/>
   <UseResponseCacheHeaders>true</UseResponseCacheHeaders>
   <UseAcceptHeader>true</UseAcceptHeader>
</ResponseCache>

 

Load Balancer

 

The purpose of a load balancer is to improve responsiveness and increase the availability of applications by distributing network or application traffic across several servers.
Configuring the Apigee load balancer is really easy: we just need to configure one or more named TargetServers and choose one of the available algorithms, which are RoundRobin, Weighted, and LeastConnections.
We can also define a fallback server, and it is possible to check whether a server is still responding (a health check) and remove it from the load balancer, as in the sketch below.
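
A hedged sketch of what such a target endpoint configuration could look like (the server names, path and health-monitor values are illustrative only, not taken from this POC):

<TargetEndpoint name="default">
   <HTTPTargetConnection>
       <LoadBalancer>
           <Algorithm>RoundRobin</Algorithm>
           <Server name="citiesTarget1"/>
           <Server name="citiesTarget2"/>
           <Server name="citiesFallback">
               <IsFallback>true</IsFallback>
           </Server>
       </LoadBalancer>
       <Path>/api/cities</Path>
       <HealthMonitor>
           <IsEnabled>true</IsEnabled>
           <IntervalInSec>5</IntervalInSec>
           <TCPMonitor>
               <ConnectTimeoutInSec>10</ConnectTimeoutInSec>
               <Port>443</Port>
           </TCPMonitor>
       </HealthMonitor>
   </HTTPTargetConnection>
</TargetEndpoint>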

Conclusion:

Response Cache supports only a subset of directives from the HTTP/1.1 cache-control specification on responses from origin servers, and this can be an obstacle, because developers are used to working with the HTTP specifications and counting on their benefits.

 

References:

https://github.com/anil614sagar/advanceddevjam/blob/master/lab4.md

https://en.wikipedia.org/wiki/Load_balancing_(computing)

 

Apigee – Getting started with Apigee

Apigee is a full lifecycle API management platform that enables API providers to design, secure, deploy, monitor and scale APIs, managing the entire API lifecycle. How easy is it to start using Apigee?

I decided to find out by trying a Proof of Concept where the objective was to configure a simple API using just the management UI Apigee Edge.

In this POC I wanted to explore the following features:

  • API Design
  • OAuth 2.0 authentication
  • Security Rules
  • Interaction with External Services
  • Cache
  • Scaling
  • Maintenance
  • Logging
  • Deploy and Version strategy

To test Apigee I used a sample CityInfo REST API service available on Google Cloud. This service returns and saves cities and points of interest. The final API will support the following list of requests:

GET /cities
GET /cities/{city_id}
GET /cities/{city_id}/pointsofinterest
GET /cities/{city_id}/pointsofinterest/{point_of_interest_id}
POST /cities/{city_id}/pointsofinterest

Target Endpoint:

https://city-info-214013.appspot.com/api/cities

Let’s first define some keywords and explain how Apigee works.

When you first create an account, you are assigned to an organization. Apigee provides you with one or more organizations, also called orgs. Orgs contain developers, developer applications, API products, API proxies and other items needed to configure the team’s APIs.

API Proxy

Typically an API proxy is a facade for one or more APIs, services or applications. With the API proxy we have one extra layer between the client and the services, but we also have an additional control layer to manage our services and configure all the policies and rules, where we can:

  • verify security tokens
  • collect analytics information
  • serve requests from the cache  
  • perform traffic management

The proxy endpoints follow RESTful principles. The HTTP verbs GET, POST, PUT and DELETE are used; the verb PATCH is not.

A proxy is responsible for handling requests from the client, executing all the configured policies and forwarding the request to the back-end server.

The proxy has two kinds of endpoints:

  • Proxy endpoint: includes client-specific policies and it can have three types of flow: a pre-flow, one or more conditional flows, and a post-flow.
  • Target endpoint: includes policies related to a particular back-end server and it can have the same three types of flow (pre-flow, conditional flow, and post-flow), plus an extra post-client flow, which executes after the response is sent to the client.

The target endpoint and proxy endpoint consist of XML flows, or paths. A proxy has a request path, which is the path the request takes from the client to the back-end server, and a response path, which is the path the response takes from the target back to the client.

 

Policies

A policy is like a module that implements a specific, limited management function. All the policies are written in XML.

Design APIs

From a developer’s and manager’s perspective, the Apigee Edge Spec Editor can be helpful to build the API structure.

Apigee Edge enables us to model our APIs by creating OpenAPI Specifications with the Spec Editor.

The OpenAPI Specification is a vendor-neutral API description format, based on Swagger, that the OpenAPI Initiative creates, evolves, and promotes. To learn more about the OpenAPI Specification: https://www.openapis.org/blog

Developers and clients can use the Spec Editor to add new features, create and update a new proxy automatically, create documentation or just consult the API specifications.

One negative point is that we can’t request the Swagger interface to access our APIs through the web browser, but the Apigee team is working to add this feature, and it should appear soon.

My First proxy

In this experiment, I used my API specs to create a new API proxy. I named the API CitiesInfo, and all my requests are working without any extra configuration. To add a new proxy through Apigee Edge you just have to follow these steps:

  1. Open the Develop menu,
  2. Select the API Proxy menu,
  3. Select the button ‘+ Proxy’,
     
  4. At this point, you can choose between six kinds of proxies. To create the simplest proxy that just makes requests to a backend server, select the Reverse Proxy,
  5. To build a proxy using pre-built specs, select the button ‘Use OpenAPI’ and select the intended specs,
  6. Fill in the details if they are incomplete or you are not using specs. Existing API is the backend endpoint,
  7. Click Next,
  8. Select which operations you want to use; you can add more later.
  9. On Authentication menu select ‘Pass through (none)’,
  10. Select Next. Here you can see and enable or disable the virtual hosts that this proxy will bind to when deployed,
  11. After you select Next, you have your first proxy deployed to a test environment and a board where you can configure and test your proxy.
  12. Test your proxy using the endpoint that you can find on ‘Deployments’.


Startup tips: Why you should consider using Firebase

Any startup looking to develop a mobile app for either iOS or Android, a web app or any application that requires a backend, should seriously consider using Firebase as a backend. Why? There are so many reasons for it, but I will try to explain as succinctly as possible.

Initial Zero Cost

Firebase comes at an initial zero cost with the spark plan. This allows you to do a lot. Once you hit the limitations of the free plan, then you have the option of the Flame plan at a fixed cost of $25/month or the pay as you go plan. This type of pricing is great because it allows you to keep costs to a minimum during your development phase.

Includes a NoSQL Database

It comes with a NoSQL database included. You can opt either for the Realtime Database or Firestore. There are pros and cons, so you need to decide which one fits your requirements best. There is also built-in database security, so you can prevent data from being improperly accessed. You can configure the security rules using a JavaScript-like language via the Firebase console. This is a really strong point, because most mobile or web apps nowadays will require a database.

Built-in federated authentication support

Firebase has built-in authentication support with passwords, phone numbers and federated identity providers such as Facebook, Twitter, Google, etc. There are SDKs for iOS, Android, Web (JavaScript), Node and Java. If you don’t want to create your own login screen, you can just drop the Firebase Auth UI component into your app.

Built-in messaging support

Firebase has built-in messaging support. This is one of the most popular features and very easy to use. In this day of social networks we all understand the power of connecting with your users to keep them engaged.

There is an API for sending programmatic notification messages, for instance when new content is available. And if you are looking to send ad-hoc messages to your users, it is also possible to send custom messages using Firebase’s Compose Message function.

Remote configuration repository

You can keep all your configuration in a central location, outside your application’s source code. This is a big deal because you don’t want to have to release a new version of the application every time you change a configuration parameter. There is also the hidden power of setting configuration parameters based on rules. You could, for instance, decide to configure a parameter differently based on geographical location, device type or any custom parameter that you decide to create. With this feature, one can keep a lot of logic out of the code. Who likes to create lots of if statements?

Crashlytics – Know when your apps crash and why

No serious app developer can go without this functionality.

Crashlytics (previously Fabric) is integrated into the Firebase console. With it, you will know if a release of your new app crashes, and you will get stack traces.

You can also host your HTML/JS, or even a Node.js web app, using Firebase’s hosting solution.

Go Serverless

In this day and age, the fewer servers you have to maintain, the better. With servers come maintenance costs to keep them patched, and security risks, as they can easily be hacked if you don’t patch them often enough. Also, a server can quickly become a performance bottleneck.

To avoid all of the above, consider going serverless with Cloud Functions. Cloud Functions (CF) can do operations in the backend and allow you to create integration points with third-party systems (e.g. PayPal payment notifications). CF can be triggered by URL or by specific events such as a user signing up, buying a subscription, etc. If money is an issue, the only downside is that if you create a cloud function that connects to third-party services outside of Google Cloud, then you have to immediately start paying for the Flame plan.

Firebase has more features that we are yet to try, such as the ML Kit (image recognition, text detection, image labeling, landmark recognition, etc.) or the automated testing solutions.

The most compelling reason to use Firebase is that all the things it provides come at such a low adoption price. It helps you go serverless, free of maintenance costs, and it does scale.

For more detailed information please visit the Firebase website (https://firebase.google.com/), where you can see more details on the available features and the pricing.

Startup tips: Use Gitlab for your code and for Continuous Integration

I would like to recommend Gitlab as a great free tool for startups and charities. When you are a tech startup or a charity, money is tight and resources limited. Consequently, any money/time saved will make a huge difference.

If you go for the free version of Gitlab, what do you get?

  • The Cloud version is free and has no ownership costs.
  • Ability to create as many Git repositories as you need, and they are all private.
  • Manage a team without limitation on the number of team members
  • You can create granular access permissions
  • A nice user interface for peer review and to control merges of code
  • Free wiki for each project
  • Free CI/CD without needing Jenkins! You can set up build pipelines and use a pool of free Docker containers to run your builds.

We have therefore used Gitlab extensively for our builds: for instance, building Node.js applications, building Android apps, our own website, etc.
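
As an example, a minimal .gitlab-ci.yml for a Node.js project could look like the sketch below (it assumes the project defines npm "test" and "build" scripts; adjust it to your own pipeline):

image: node:lts

stages:
  - test
  - build

test:
  stage: test
  script:
    - npm ci
    - npm test

build:
  stage: build
  script:
    - npm ci
    - npm run build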

 

 

Oracle Commerce Cloud Initial Settings

This is the first of a series of articles that Techbiosis dedicates to Oracle Commerce Cloud Services (OCCS). We will describe, step by step, how to do the basic configuration of the platform using the Back Office Tool. This tool is the same one merchants use to administer their webstore, including managing the catalog, defining the design or creating promotions, all without relying on the IT team. As Oracle Commerce Cloud is a cloud service, Oracle runs all the infrastructure and provides you only with the interfaces necessary to work with your instance. You can find detailed information on the official website.

Accessing Back Office

The first thing you need is access to an instance of Oracle Commerce Cloud. After registration, Oracle provides you with the URLs and access credentials for several interfaces. Use the URL of the Back Office Tool and you will be asked to enter your login and password.

Oracle Commerce Cloud - Login

After login, you see the Dashboard which summarises relevant information about your instance.

Oracle Commerce Cloud - Dashboard

In this case, the instance is not configured yet so there is nothing to display.

In the top banner you find several tab selectors reflecting the operations available in this administrative interface. For now, we are interested in the Settings tab.

Oracle Commerce Cloud Settings

After choosing the Settings tab, you see a screen with two panes: on the left you have several settings options, while on the right you find the corresponding configuration area. For this article, we choose Site Settings, which contains the general instance settings, most of which are only set during the initial configuration.

Oracle Commerce Cloud - Setting Name

Setting the Site Name

The first thing is to name your site. In our case, we called it Techbiosis. Hit the Save button and you will get a pop-up confirming the operation.

Setting the Location

Below the top tab selectors you have another set of tabs, where you choose Location. Here you find the timezone, language and reporting currency settings.

Oracle Commerce Cloud - Setting Location

Choosing Timezone

Since OCCS is a cloud service, it can run anywhere in the world. But emails sent automatically to users, or your reports, need a time reference, usually consistent with your physical location.

You choose your timezone using the drop-down menu. The search box helps you find your location. In our case, we chose London – GMT.

Oracle Commerce Cloud - Setting London Location

You get a warning since this change can lead to reports mixing events in different timezones. In this case, you can safely choose Save since you are configuring the system for the first time.

Oracle Commerce Cloud - Setting London Location Warning

Handling Languages

The Oracle Commerce Cloud supports multi language store fronts out of the box. Currently, 29 languages are available and Oracle plans to extend this list. By default, the store front offers only one language, but adding more is simple.

When accessing a multi language store, the locale configuration of the user’s browser is checked against the list of offered languages. If the preferred one is available, it is used to display pages and pre-configured emails, also determining the format of numbers, dates and currency. Otherwise, the system uses a fallback mechanism, first trying a parent language (e.g. trying generic English when US English is unavailable) before resorting to the default one. The user can override this mechanism by manually selecting a language from a menu in the store.

You can choose your default language from the drop-down menu Store Default Language. We kept English as our default language.

Oracle Commerce Cloud - Setting languages

After the initial configuration, it is not advisable to change the default language.

For configuring a multi language store, you have to click on Additional Store Languages and add more languages.

Oracle Commerce Cloud - Setting Additional Languages

To add Spanish start typing until it appears selected and hit enter, or scroll down and click on it.

Oracle Commerce Cloud - Setting Spanish Language

In our case, we also added Portuguese (Portugal).

Oracle Commerce Cloud - Setting Portuguese Language

It’s that easy to configure a multi language store front!

Working with Price Groups

You may want to handle several currencies in your store, while having distinct price policies and promotions for each one. For this, OCCS offers price groups, where each group presents a currency option to the user in the store. The currency is independent of the language.

To access the price groups configuration, click on the Price Groups tab next to the Location tab. You see the Support Price Groups list. OCCS comes configured with a default price group set to the US Dollar.

Oracle Commerce Cloud - Setting Price Groups

In our system, we have already added the Euro to the list. To add a new price group, click on Add Price Group and a box pops up.

Oracle Commerce Cloud - Adding a price group

You have to provide a name for your new price group. We called it Pound. The ID is automatically generated from the name, but you can change it provided that it is unique in the system.

Oracle Commerce Cloud - Setting pounds as price group

Click on the Currency drop-down menu to see the list of all available currencies. We chose GBP – British Pound Sterling.

Oracle Commerce Cloud - Adding a currency from the list as price group

Click Save.

Oracle Commerce Cloud - Setting the currency of the price group as pounds

The new Pound price group appears in the list and you can make it available in the system by clicking on Activate.

Oracle Commerce Cloud - Price groups list

Now, the store offers three options to the user: US Dollar (the default), British Pound and Euro.

Oracle Commerce Cloud - Activated price groups list

Changing the Default Price Group

The default price group determines the currency of the prices that users see by default when accessing the store. For changing the default price group, click on the price group name and a box will pop up. Check the Make Default Price Group box.

Oracle Commerce Cloud - Setting a price group as default

The list reflects this change.

Oracle Commerce Cloud - List with new price group as default

Changing Reporting Currency

For reporting purposes, you can choose any of the currencies configured in the Price Groups. Go to the Location tab and click on the Reporting Currency drop-down box. In our case, we changed the default from US Dollar to Euro.

Oracle Commerce Cloud - Setting report currency

Again, you get a warning since this change may have an impact on reports once there is data in the system. Click on Save.

Oracle Commerce Cloud - Setting report currency warning

Conclusion

The basic initial configuration of your Oracle Commerce Cloud instance is that easy! Multi language and multi currency stores are available out of the box with minimum configuration effort. This reduces the time and costs to launch your business online.

The next step is configuring the design of your store front so you can see these features in action. That’s the subject for another article.