Unit Testing – Guidelines

All of us agree that testing is good and that there are numerous advantages to writing unit tests. However, we sometimes disagree on how to do our tests: TDD or BDD? What should our code coverage goal be, and how do we reach it?

The purpose of unit tests is to validate that each unit of code performs as expected. They are the first line of tests, the first line of defense for developers. They are implemented and run by software developers from the earliest stages of the development process and, with the help of several tools, throughout the whole process.


Advantages of unit testing:
  1. Fewer bugs are deployed, so the product delivered to the client has better quality: happier clients and a decreased reliance on customer service, quality assurance teams, and bug reports.
  2. Unit testing ensures that we don't break anything when refactoring, because we can always find opportunities to improve our code and sometimes we really have to take them.
  3. You don't have to test everything manually every time you make a change or add a new feature.
  4. Developers have a way to rapidly verify the behavior of their code between edits. The feedback is much faster than with functional and integration tests.
Rules of thumb:

All tests should respect the following principles:

  1. Unit tests ensure that individual components work appropriately in isolation from the rest of the code. A unit test should focus on a single 'unit of code';
  2. Unit tests must be isolated from dependencies: no network access and no database requests.
  3. Each test name should provide a clear description of the feature being tested, following this template:
    MethodUnderTest_inputOrScenarioUnderTest_expectedResult
    Bad example:

    [TestMethod]
    public void UnitTest_ClosestEnemy()
    {…}

    Good example:

    [TestMethod]
    public void ClosestEnemy_Matrix2X2WithoutTwoElement_ReturnZero()
    {
        // Arrange
        const int EXPECTED_ZERO_VALUE = 0;
        var positions = new int[][] { new[] { 1, 0 }, new[] { 0, 0 } };

        // Act
        var actualValue = ClosestEnemy(positions);

        // Assert
        Assert.AreEqual(expected: EXPECTED_ZERO_VALUE, actual: actualValue, message: $"ClosestEnemy expected value when there is no enemy was {EXPECTED_ZERO_VALUE} and the actual value is {actualValue}");
    }

  4. The tests should be arranged with the common pattern Arrange, Act, Assert.
    • Arrange: creating objects and setting them up as necessary;
    • Act: act on an object;
    • Assert: assert what is expected;

    Separating these actions highlights the test's dependencies and what the test is trying to assert. The main advantage is readability.

    Bad example:

    [TestMethod]
    public void ClosestEnemy_Matrix2X2WithoutTwoElement_ReturnZero()
    {
        var positions = new int[][] { new[] { 1, 0 }, new[] { 0, 0 } };

        Assert.AreEqual(expected: 0, actual: ClosestEnemy(positions), message: $"ClosestEnemy expected value when there is no enemy was 0 and the actual value is {ClosestEnemy(positions)}");
    }

    Good example:

    [TestMethod]
    public void ClosestEnemy_Matrix2X2WithoutTwoElement_ReturnZero()
    {
        // Arrange
        const int EXPECTED_ZERO_VALUE = 0;
        var positions = new int[][] { new[] { 1, 0 }, new[] { 0, 0 } };

        // Act
        var actualValue = ClosestEnemy(positions);

        // Assert
        Assert.AreEqual(expected: EXPECTED_ZERO_VALUE, actual: actualValue, message: $"ClosestEnemy expected value when there is no enemy was {EXPECTED_ZERO_VALUE} and the actual value is {actualValue}");
    }

  5. The input should be the simplest possible and just what is necessary to test the current scenario. When creating a new test we want to focus on the behavior and avoid unnecessary information that can introduce errors and make the test hard to read, as the sketch below illustrates.
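    As an illustration (the City type and the GetGreeting method are hypothetical, not part of the guideline): if the method under test only reads the city name, arrange only the name.

    [TestMethod]
    public void GetGreeting_CityWithName_ReturnGreetingWithName()
    {
        // Arrange: only the Name property matters here; Population, Country,
        // and other fields would just be noise in this scenario.
        const string EXPECTED_GREETING = "Welcome to Porto";
        var city = new City { Name = "Porto" };

        // Act
        var actualValue = GetGreeting(city);

        // Assert
        Assert.AreEqual(expected: EXPECTED_GREETING, actual: actualValue);
    }
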
  6. The tests should be part of the delivery pipeline, and on failure they should provide an explicit message or report with information about what was being tested, what it should do, what the output was, and what the expected result was.
  7. Avoid undeclared variables. An undeclared variable can confuse the reader of the test and can cause errors.

    Bad example:
    // Act
    var actualValue = GetCityName();

    // Assert
    Assert.AreEqual(expected: "cityName", actual: actualValue, message: $"CityName expected value was cityName and the actual value is {actualValue}");

    Good example:

    // Arrange
    const string EXPECTED_NAME = "Porto";

    // Act
    var actualValue = GetCityName();

    // Assert
    Assert.AreEqual(expected: EXPECTED_NAME, actual: actualValue, message: $"CityName expected value was {EXPECTED_NAME} and the actual value is {actualValue}");

  8. Avoid multiple asserts in the same test. With multiple asserts per test, if one assert fails the subsequent asserts are not evaluated, and we cannot see everything that is failing in a single run, as illustrated below.
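    For example (GetCity and the City type are hypothetical), instead of asserting several properties in one test:

    [TestMethod]
    public void GetCity_ValidId_ReturnNameAndCountry()
    {
        // Act
        var actualCity = GetCity(1);

        // Assert: if the first assert fails, the second one is never evaluated.
        Assert.AreEqual(expected: "Porto", actual: actualCity.Name);
        Assert.AreEqual(expected: "Portugal", actual: actualCity.Country);
    }

    prefer one focused test per behavior:

    [TestMethod]
    public void GetCity_ValidId_ReturnCityName()
    {…}

    [TestMethod]
    public void GetCity_ValidId_ReturnCityCountry()
    {…}
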
  9. Don't generate random data. Usually, random data is irrelevant to the test purpose, and if the data is irrelevant, why generate it? It just makes our code more unreadable, and if the test fails we don't know which value was generated or why it failed, unless we log it.

    Bad example:

    // Arrange
    var ANY_NAME = RandomStringUtils.random(8); // random, and irrelevant to the test

    // Act
    var actualValue = GetCityName();

    // Assert
    Assert.AreEqual(expected: ANY_NAME, actual: actualValue, message: $"CityName expected value was {ANY_NAME} and the actual value is {actualValue}");

    Good example:

    // Arrange
    const string EXPECTED_NAME = "Porto";

    // Act
    var actualValue = GetCityName();

    // Assert
    Assert.AreEqual(expected: EXPECTED_NAME, actual: actualValue, message: $"CityName expected value was {EXPECTED_NAME} and the actual value is {actualValue}");

Mocks

Mocking is necessary for unit testing when the unit being tested has external dependencies. The goal is to replace the behavior or state of the external dependency.
Most languages now have frameworks that make it easy to create mock objects, which is why using interfaces for your dependencies makes them ideal for testing: the mocking framework can easily create a mock of an interface that simulates the behavior of the real implementation.
We can also implement Fakes with the intent of replacing the dependency's behavior. Fakes replace the original code by implementing the same interface. The disadvantage is that fakes are hard to code to return different results in order to test several use cases, and tests can become difficult to understand and maintain with a lot of "elses", as the sketch below suggests.
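A minimal sketch of a fake, assuming the IBreadBuilder interface from the mock example below (the GetBread signature is inferred from the mock setup):

public class FakeBreadBuilder : IBreadBuilder
{
    // Hard-coded behavior replacing the real dependency; covering more
    // scenarios means more branches here, which is the maintenance cost
    // mentioned above.
    public string GetBread(int size)
    {
        return size > 0 ? "Bread" : string.Empty;
    }
}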

Mock example:

Suppose we have the class PizzaBuilder, and this class has one dependency, the BreadBuilder. We need an IBreadBuilder interface and to resolve this dependency with dependency injection.

Our class will be:

public class PizzaBuilder
{
    private readonly IBreadBuilder breadBuilder;

    public PizzaBuilder(IBreadBuilder breadBuilder)
    {
        this.breadBuilder = breadBuilder;
    }

    public Pizza GetPizza() { … }
}

Using interfaces, we can easily mock their methods and manipulate just the data needed for our unit tests:

var mockedBread = "Bread";
var mockBreadBuilder = new Mock<IBreadBuilder>();
mockBreadBuilder.Setup(x => x.GetBread(It.IsAny<int>())).Returns(mockedBread);
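The mock is then injected into the class under test through its Object property; a minimal usage sketch:

var pizzaBuilder = new PizzaBuilder(mockBreadBuilder.Object);
var pizza = pizzaBuilder.GetPizza(); // GetPizza receives "Bread" from the mocked builder
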
TestInitialize

The method with the [TestInitialize] attribute will be executed before every test. It's the perfect place to define and allocate resources needed by all the tests. But be careful with this attribute: it is not a bag to drop every Arrange step into; sometimes the test's purpose is clearer with all the Arrange steps inside the test itself.

Example:

private Mock<IBreadBuilder> mockBreadBuilder;

[TestInitialize]
public void Initialize()
{
    // Stored in a field so that every test can use the configured mock.
    var mockedBread = "Bread";
    this.mockBreadBuilder = new Mock<IBreadBuilder>();
    this.mockBreadBuilder.Setup(x => x.GetBread(It.IsAny<int>())).Returns(mockedBread);
}
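A test can then use the field prepared in Initialize; a minimal sketch, reusing the PizzaBuilder from the mock example above:

[TestMethod]
public void GetPizza_MockedBread_ReturnPizza()
{
    // Arrange: the mock was already configured in Initialize.
    var pizzaBuilder = new PizzaBuilder(this.mockBreadBuilder.Object);

    // Act
    var pizza = pizzaBuilder.GetPizza();

    // Assert
    Assert.IsNotNull(pizza);
}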

Code coverage

Code coverage refers to the quantity of code covered by test cases. Usually, developers try to achieve a high level of coverage, but 100% code coverage doesn't mean we know with 100% assurance that the code does what it should, because there are two kinds of coverage:

  • Code coverage: how much of the code is executed;
  • Case coverage: how many of the use cases are covered by the tests.

Case coverage refers to how the code behaves in different real-world scenarios, and those can depend on so many situations that covering all of them is impossible. 100% code coverage does not ensure 100% case coverage.
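A small hypothetical illustration of the gap between the two: the single test below executes every line of Divide, achieving 100% code coverage, yet the division-by-zero case is never exercised.

public static int Divide(int a, int b)
{
    return a / b; // fully covered by the single test below
}

[TestMethod]
public void Divide_FourByTwo_ReturnTwo()
{
    // 100% code coverage, but the b == 0 case (DivideByZeroException) is untested.
    Assert.AreEqual(expected: 2, actual: Divide(4, 2));
}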

Should I write unit tests for everything?
If your code doesn't have any logic, it can have 0% code coverage or be excluded from the coverage metrics. If your code relies on I/O dependencies, like querying a database or capturing user input, it will not be easy to mock and test. If you have to do a lot of mocking to create a decent unit test, perhaps that code doesn't need unit tests at all.

TDD – Test-Driven Development

TDD focuses on creating the tests before the code they exercise is implemented. It drives developers to focus on the product requirements before writing code, contrary to standard programming, where developers write unit tests after developing the code. Following it, no logic is written without unit tests, which makes very high test coverage possible.

The process is always:

  • Write one test;
  • Watch it fail;
  • Implement the code;
  • Watch the test pass;
  • Repeat;

Step by Step process:

To explain the TDD process, I will use a challenge named Closest Enemy. We are given a 2D matrix that contains only the numbers 0, 1, and 2. From the position in the matrix where the element "1" is, we must return the number of spaces, moving either left, right, down, or up, needed to reach an enemy, which is represented by a 2. In a second phase, we should be able to wrap around from one side of the matrix to the other as well. For example: if our matrix is ["0000", "1000", "0002", "0002"] then this looks like the following:

0 0 0 0
1 0 0 0
0 0 0 2
0 0 0 2

For this input our function should return 2, because the closest enemy is 2 spaces away from the 1: moving left to wrap to the other side and then moving down. The array will contain any number of 0's and 2's, but only a single 1. If we cannot find any 1, the function should return -1. The array may also not contain any 2's at all, in which case the function should return 0.

  1. Decide the inputs and outputs and the function signature. Our input will be an int matrix and the output the number of spaces between the 1 element and the closest enemy.
    public int ClosestEnemy(int[][] positions)
    {
        return 0;
    }
  2. Implement one test and watch it fail. To start, we should test the basics of what the function should do, the first one or two lines: TDD is about focusing on tiny things only. So, first validate whether the 1 element exists in the matrix; if it doesn't, the function should return -1. The test will fail because at this point we still don't have the code needed for the feature.
    [TestMethod]
    public void ClosestEnemy_Matrix2X2WithoutOneElement_ReturnNegativeOne()
    {
        // Arrange
        const int EXPECTED_NEGATIVE_VALUE = -1;
        var positions = new int[][] { new[] { 0, 0 }, new[] { 0, 2 } };

        // Act
        var actualValue = ClosestEnemy(positions);

        // Assert
        Assert.AreEqual(expected: EXPECTED_NEGATIVE_VALUE, actual: actualValue, message: $"ClosestEnemy expected value when there is no 1 element was {EXPECTED_NEGATIVE_VALUE} and the actual value is {actualValue}");
    }

  3. Implement the code to make the test pass:
    public static int ClosestEnemy(int[][] positions)
    {
        var onePosition = GetOnePosition(positions);
        if (onePosition == null)
        {
            return -1;
        }

        return 0;
    }
  4. Implement another test: given a matrix without any 2, the function should return 0. Watch the test fail.
    [TestMethod]
    public void ClosestEnemy_Matrix2X2WithoutTwoElement_ReturnZero()
    {
        // Arrange
        const int EXPECTED_ZERO_VALUE = 0;
        var positions = new int[][] { new[] { 1, 0 }, new[] { 0, 0 } };

        // Act
        var actualValue = ClosestEnemy(positions);

        // Assert
        Assert.AreEqual(expected: EXPECTED_ZERO_VALUE, actual: actualValue, message: $"ClosestEnemy expected value when there is no enemy was {EXPECTED_ZERO_VALUE} and the actual value is {actualValue}");
    }

  5. Fix the code and watch the test pass.
    public static int ClosestEnemy(int[][] positions)
    {
        var onePosition = GetOnePosition(positions);
        if (onePosition == null)
        {
            return -1;
        }

        var listOfEnemies = GetAllTheEnemies(positions);
        if (!listOfEnemies.Any())
        {
            return 0;
        }

        return 1;
    }
  6. Implement another test: given a valid matrix, the function should return the minimum space between the 1 element and a 2 element. Watch the test fail:
    [TestMethod]
    public void ClosestEnemy_Matrix2X2_ReturnMinimumValueSpace()
    {
        // Arrange
        const int EXPECTED_EMPTY_SPACE_VALUE = 2;
        var positions = new int[][] { new[] { 1, 0 }, new[] { 0, 2 } };

        // Act
        var actualValue = ClosestEnemy(positions);

        // Assert
        Assert.AreEqual(expected: EXPECTED_EMPTY_SPACE_VALUE, actual: actualValue, message: $"ClosestEnemy expected value was {EXPECTED_EMPTY_SPACE_VALUE} and the actual value is {actualValue}");
    }
  7. Develop the code to make the test pass:
    public static int ClosestEnemy(int[][] positions)
    {
        var onePosition = GetOnePosition(positions);
        if (onePosition == null)
        {
            return -1;
        }

        var listOfEnemies = GetAllTheEnemies(positions);
        if (!listOfEnemies.Any())
        {
            return 0;
        }

        var listOfSpaces = listOfEnemies.Select(enemy => CalculateSpaces(onePosition, enemy));
        return listOfSpaces.Min();
    }

    private static int CalculateSpaces((int, int)? onePosition, (int, int) enemy)
    {
        var xSpaces = Math.Abs(onePosition.Value.Item1 - enemy.Item1);
        var ySpaces = Math.Abs(onePosition.Value.Item2 - enemy.Item2);
        return xSpaces + ySpaces;
    }

  8. Implement a test to guarantee that we can wrap around from one side of the matrix to the other as well. Watch the test fail.
    [TestMethod]
    public void ClosestEnemy_Matrix4x4_ReturnAroundEmptySpace()
    {
        // Arrange
        const int EXPECTED_EMPTY_SPACE_VALUE = 2;
        var positions = new int[][] { new[] { 0, 0, 0, 0 }, new[] { 1, 0, 0, 0 }, new[] { 0, 0, 0, 2 }, new[] { 0, 0, 0, 2 } };

        // Act
        var actualValue = ClosestEnemy(positions);

        // Assert
        Assert.AreEqual(expected: EXPECTED_EMPTY_SPACE_VALUE, actual: actualValue, message: $"ClosestEnemy expected value was {EXPECTED_EMPTY_SPACE_VALUE} and the actual value is {actualValue}");
    }

  9. Implement the code to make the test pass.
    public static int ClosestEnemy(int[][] positions)
    {
        var onePosition = GetOnePosition(positions);
        if (onePosition == null)
        {
            return -1;
        }

        var listOfEnemies = GetAllTheEnemies(positions);
        if (!listOfEnemies.Any())
        {
            return 0;
        }

        var listOfSpaces = listOfEnemies.Select(enemy => CalculateAllTheSpaces(onePosition, enemy, positions.Length));
        return listOfSpaces.Min();
    }

    private static int CalculateAllTheSpaces((int, int)? onePosition, (int, int) enemy, int matrixSize)
    {
        var xSpaces = Math.Abs(onePosition.Value.Item1 - enemy.Item1);
        var ySpaces = Math.Abs(onePosition.Value.Item2 - enemy.Item2);
        var space = ShorterPath(xSpaces, matrixSize) + ShorterPath(ySpaces, matrixSize);
        return space;
    }

    private static int ShorterPath(int spaces, int matrixSize)
    {
        return spaces <= matrixSize - spaces ? spaces : matrixSize - spaces;
    }
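    The helper methods GetOnePosition and GetAllTheEnemies are used above but never shown in the walkthrough; a possible sketch, assuming the matrix contains at most one 1 element and that System.Collections.Generic and System.Linq are imported:

    private static (int, int)? GetOnePosition(int[][] positions)
    {
        for (var i = 0; i < positions.Length; i++)
        {
            for (var j = 0; j < positions[i].Length; j++)
            {
                if (positions[i][j] == 1)
                {
                    return (i, j); // position of the single 1 element
                }
            }
        }

        return null; // there is no 1 element in the matrix
    }

    private static List<(int, int)> GetAllTheEnemies(int[][] positions)
    {
        var enemies = new List<(int, int)>();
        for (var i = 0; i < positions.Length; i++)
        {
            for (var j = 0; j < positions[i].Length; j++)
            {
                if (positions[i][j] == 2)
                {
                    enemies.Add((i, j));
                }
            }
        }

        return enemies;
    }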


BDD – Behavior Driven Development

BDD focuses on testing the behavior that relates to business outcomes. Instead of thinking only about how the code is implemented, we spend some time thinking about what the scenario is. The language used to define the tests should be more generic and understandable by everyone involved in the project, including stakeholders.

BDD and TDD are not enemies. In truth, BDD can extend the TDD process with better guidelines, as the sketch below suggests.
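As a hedged sketch of the stylistic difference (the scenario wording is illustrative, not a prescribed convention), a BDD-flavoured test reads as a business scenario:

[TestMethod]
public void GivenAMatrixWithNoEnemies_WhenSearchingForTheClosestEnemy_ThenZeroIsReturned()
{
    // Arrange
    var positions = new int[][] { new[] { 1, 0 }, new[] { 0, 0 } };

    // Act
    var actualValue = ClosestEnemy(positions);

    // Assert: the name, not the structure, is what changes in BDD style.
    Assert.AreEqual(expected: 0, actual: actualValue);
}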

Conclusion:

Yes, you have to spend more time to implement TDD in your project, but I assure you that your client will find far fewer bugs and that you will be able to refactor your code without fear. It's all about priorities: the cost of a bug that makes it into production can be many times larger than the cost of the time spent implementing unit tests.


Apigee – Scaling

Our goal is to provide high-performance, reliable APIs, and whether we have just five clients or the number of clients rises to five hundred thousand, we have to keep our APIs working correctly by scaling.


Cache

If an API serves the same static data, or data that does not change over a period of time, a cache can be an ally.

Why should we use caching?

  • Improves performance by reducing network latency and eliminating redundant requests;
  • Reduces the load on the backend services;
  • Makes the API highly scalable, supporting more transactions without additional hardware;
  • Can be used to persist session data for reuse across HTTP transactions;
  • Supports security.

Caches are built on a two-level system:

  • In-memory level (L1): fast access. Each node has a percentage of memory reserved for use by the cache; when the memory limit is reached, Apigee Edge removes entries from memory in order of time since last access, oldest entries first.
  • Persistent level (L2): all message processing nodes share a cache data store (Cassandra) for persisting cache entries. Entries are persisted even when removed from L1, and there is no limit on the number of cache entries, only on the size of each entry.

The cache expires only on the basis of expiration settings.

Apigee Edge provides a few cache policies: Populate Cache, Lookup Cache, Invalidate Cache, and Response Cache.


Populate Cache/Lookup Cache/Invalidate Cache: use these to store custom data objects or information persisted across multiple API transactions.

With these policies, we can add or remove cache entries using separate policies.

The flow should be: first the Lookup Cache policy, then the policies needed to produce the value when the cache is empty, and finally the Populate Cache policy.

For instance, in the following Lookup Cache implementation, we look up the value stored under the 'cacheKey' entry and assign it to the variable 'logging'.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<LookupCache async="false" continueOnError="false" enabled="true" name="Lookup-Cache">
   <DisplayName>Lookup-Cache</DisplayName>
   <Properties/>
   <CacheKey>
       <Prefix/>
       <KeyFragment>cacheKey</KeyFragment>
   </CacheKey>
   <CacheResource>Cache</CacheResource>
   <Scope>Exclusive</Scope>
   <AssignTo>logging</AssignTo>
</LookupCache>


In the Populate Cache policy, we populate a new entry with the key 'cacheKey' with the value from the 'logging' variable.


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<PopulateCache async="false" continueOnError="false" enabled="true" name="Populate-Cache">
   <DisplayName>Populate-Cache</DisplayName>
   <Properties/>
   <CacheKey>
       <Prefix/>
       <KeyFragment>cacheKey</KeyFragment>
   </CacheKey>
   <CacheResource>Cache</CacheResource>
   <Scope>Exclusive</Scope>
   <ExpirySettings>
       <TimeoutInSec>3600</TimeoutInSec>
   </ExpirySettings>
   <Source>logging</Source>
</PopulateCache>


To create the cache resource ('Cache' in the examples above), go to the environment configuration board and add a new entry on the first tab, 'Cache'. On this board it is also possible to clear the cache.


Response Cache: caches data from a backend resource, reducing the number of requests to the resource. Apigee supports only a subset of the directives from the HTTP/1.1 cache-control specification on responses from origin servers, so we cannot rely on several of the standards associated with HTTP cache control.
To implement this type of cache, add a new Response Cache policy on the request that you want to cache. The code to cache the 'GET /cities' request:


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ResponseCache async="false" continueOnError="false" enabled="true" name="RC-cacheCities">
   <DisplayName>RC-cacheCities</DisplayName>
   <Properties/>
   <CacheKey>
       <Prefix/>
       <KeyFragment ref="request.uri" type="string"/>
   </CacheKey>
   <Scope>Exclusive</Scope>
   <ExpirySettings>
       <ExpiryDate/>
       <TimeOfDay/>
       <TimeoutInSec ref="">3600</TimeoutInSec>
   </ExpirySettings>
   <SkipCacheLookup/>
   <SkipCachePopulation/>
   <UseResponseCacheHeaders>true</UseResponseCacheHeaders>
   <UseAcceptHeader>true</UseAcceptHeader>
</ResponseCache>


Load Balancer


The purpose of a load balancer is to improve responsiveness and increase the availability of applications by distributing network or application traffic across several servers.
Configuring the Apigee load balancer is really easy: we just need to configure one or more named TargetServers and choose one of the available algorithms: RoundRobin, Weighted, or LeastConnections.
We can also define a fallback server. It is also possible to check that a server is up with a health-check (ping) request and remove the server from the load balancer if it is not.

Conclusion:

The Response Cache policy supports only a subset of directives from the HTTP/1.1 cache-control specification on responses from origin servers, and this can be an obstacle, because developers are used to working with the HTTP specifications and counting on their benefits.


Apigee – Getting started with Apigee

Apigee is a full-lifecycle API management platform that enables API providers to design, secure, deploy, monitor, and scale APIs. How easy is it to start using Apigee?

I decided to find out by trying a proof of concept whose objective was to configure a simple API using just the Apigee Edge management UI.

In this POC I wanted to explore the following features:

  • API Design
  • OAuth 2.0 authentication
  • Security Rules
  • Interaction with External Services
  • Cache
  • Scaling
  • Maintenance
  • Logging
  • Deploy and Version strategy

To test Apigee I used a sample City API REST service available on Google Cloud. This service returns and saves cities and points of interest. The final API will support the following list of requests:

GET /cities
GET /cities/{city_id}
GET /cities/{city_id}/pointsofinterest
GET /cities/{city_id}/pointsofinterest/{point_of_interest_id}
POST /cities/{city_id}/pointsofinterest

Target Endpoint:

https://city-info-214013.appspot.com/api/cities

Let’s first define some keywords and explain how Apigee works.

When you first create an account, you are assigned to an organization. Apigee provides you with one or more organizations, also called orgs. Orgs contain developers, developer applications, API products, API proxies, and the other items needed to configure the team's APIs.

API Proxy

Typically an API proxy is a facade for one or more generic APIs, services, or applications. With an API proxy, we have one more layer between the client and the services, but we also gain an additional control layer to manage our services and configure all the policies and rules, so we can:

  • verify security tokens
  • collect analytics information
  • serve requests from the cache  
  • perform traffic management

The proxy endpoints follow RESTful principles. The HTTP verbs GET, POST, PUT, and DELETE are used; the verb PATCH is not.

A proxy is responsible for handling requests from the client, executing all the configured policies, and forwarding the request to the back-end server.

The proxy has two kinds of endpoints:

  • Proxy endpoint: includes client-specific policies and can have three types of flow: a pre-flow, one or more conditional flows, and a post-flow.
  • Target endpoint: includes policies related to a particular back-end server and can have the same three types of flow (pre-flow, conditional flows, and post-flow), plus an extra post-client flow, which executes after the response is sent to the client.

Both the proxy endpoint and the target endpoint consist of XML flows, or paths. A proxy has a request path, which the request takes from the client to the back-end server, and a response path, which the response takes from the target back to the client.


Policies

A policy is like a module that implements a specific, limited management function. All the policies are written in XML.

Design APIs

From a developer’s and manager’s perspective, the Apigee Edge Spec Editor can be helpful to build the API structure.

Apigee Edge enables us to model our APIs by creating OpenAPI Specifications with the Spec Editor.

The OpenAPI Specification is an Open API Initiative project focused on creating, evolving, and promoting a vendor-neutral API description format based on Swagger. To learn more about the OpenAPI Specification: https://www.openapis.org/blog

Developers and clients can use the Spec Editor to add new features, create and update a proxy automatically, create documentation, or just consult the API specifications.

One negative point is that we can't get a Swagger interface to access our APIs through the web browser, but the Apigee team is working on adding this feature, and it should appear soon.

My First proxy

In this experiment, I used my API specs to create a new API proxy. I named the API CitiesInfo, and all my requests worked without any extra configuration. To add a new proxy through Apigee Edge you just have to follow these steps:

  1. Open the Develop menu,
  2. Select the API Proxy menu,
  3. Select the button '+ Proxy',
  4. At this point, you can choose between six kinds of proxies. To create the simplest proxy that just makes requests to a backend server, select the Reverse Proxy,
  5. To build a proxy using pre-built specs, select the button 'Use OpenApi' and select the intended specs,
  6. Fill in the details if they are incomplete or you are not using specs. Existing API is the backend endpoint,
  7. Click Next,
  8. Select which operations you want to expose; you can select more later,
  9. On the Authentication menu select 'Pass through (none)',
  10. Select Next. Here you can see and enable or disable the virtual hosts that this proxy will bind to when deployed,
  11. Select Next again, and you have your first proxy deployed to a test environment, plus a board where you can configure and test your proxy,
  12. Test your proxy using the endpoint that you can find under 'Deployments'.


Startup tips: Why you should consider using Firebase

Any startup looking to develop a mobile app for iOS or Android, a web app, or any application that requires a backend should seriously consider using Firebase as a backend. Why? There are so many reasons, but I will try to explain as succinctly as possible.

Initial Zero Cost

Firebase comes at zero initial cost with the Spark plan, which allows you to do a lot. Once you hit the limitations of the free plan, you have the option of the Flame plan, at a fixed cost of $25/month, or the pay-as-you-go plan. This type of pricing is great because it allows you to keep costs to a minimum during your development phase.

Includes a NoSQL Database

It comes with a NoSQL database included. You can opt either for the Realtime Database or for Firestore; there are pros and cons to each, so you need to decide which one fits your requirements best. There is also built-in database security, so you can prevent data from being improperly accessed: you configure the security rules using a JavaScript-like language via the Firebase console. This is a really strong point, because most mobile or web apps nowadays require a database.

Built-in federated authentication support

Firebase has built-in authentication support with passwords, phone numbers, and federated identity providers such as Facebook, Twitter, Google, etc. There are SDKs for iOS, Android, Web (JavaScript), Node, and Java. If you don't want to create your own login screen, you can just drop the Firebase Auth UI component into your app.

Built-in messaging support

Firebase has built-in messaging support. This is one of the most popular features and it is very easy to use. In this day of social networks, we all understand the power of connecting with your users to keep them engaged.

There is an API for sending programmatic notification messages, for instance based on new content becoming available. If you are looking to send ad-hoc messages to your users, it is also possible to send custom messages using Firebase's Compose Message function.

Remote configuration repository

You can keep all your configuration in a central location, outside your application's source code. This is a big deal because you don't want to release a new version of the application every time you change a configuration parameter. There is also the hidden power of setting configuration parameters based on rules: you could, for instance, configure a parameter differently based on geographical location, device type, or any custom parameter that you decide to create. With this feature, one can keep a lot of logic out of the code. Who likes to create lots of if statements?

Crashlytics – Know when your apps crash and why

No serious app developer can go without this functionality: Crashlytics (previously Fabric) is integrated into the Firebase console. With this, you will know if a release of your new app crashes, and you will get stack traces.

Hosting

You can host your HTML/JS, or even a Node.js web app, using Firebase's hosting solution.

Go Serverless

In this day and age, the fewer servers you have to maintain, the better. With servers come maintenance costs, to keep them patched, and security risks, since a server can easily be hacked if you don't patch it often enough. A server can also quickly become a performance bottleneck.

To avoid all of the above, consider going serverless with Cloud Functions. Cloud Functions (CF) can perform operations in the backend and allow you to create integration points with third-party systems (e.g. PayPal payment notifications). CFs can be triggered by URL or by specific events, such as a user signing up, buying a subscription, etc. If money is an issue, the only downside is that if you create a cloud function that connects to third-party services outside of Google Cloud, you immediately have to start paying for the Flame plan.

Firebase has more features that we have yet to try, such as the ML Kit (image recognition, text detection, image labeling, landmark recognition, etc.) and the automated testing solutions.

The most compelling reason to use Firebase is that all the things it provides come at such a low adoption price. It helps you to go serverless, free of maintenance costs, and it does scale.

For more detailed information, please visit the Firebase website (https://firebase.google.com/), where you can see more details on the available features and pricing.

Startup tips: Use Gitlab for your code and for Continuous Integration

I would like to recommend Gitlab as a great free tool for startups and charities. When you are a tech startup or a charity, money is tight and resources are limited; consequently, any money or time saved will make a huge difference.

If you go for the free version of Gitlab, what do you get?

  • The Cloud version is free and has no ownership costs.
  • The ability to create as many Git repositories as you need, all of them private.
  • Team management without limits on the number of team members.
  • Granular access permissions.
  • A nice user interface for peer review and for controlling merges of code.
  • A free wiki for each project.
  • Free CI/CD without needing Jenkins! You can set up build pipelines and use a pool of free Docker containers to run your builds.

We have therefore used Gitlab extensively for our builds: for instance, building Node.js applications, building Android apps, building our own website, etc.