Bounded Context in my view

In this article, I will share my view of Bounded Context:
What does it mean?
Why is it required?
How does it connect to Microservices?
I will try to keep it simple. This article targets readers who keep hearing the term Bounded Context while developing Microservices but have a hard time understanding the concept.
Before we deep dive into Bounded Context in terms of DDD (Domain-Driven Design), let's understand what it means in the real world.
Humans, the most intelligent species, created different countries to govern themselves. But the question is: why did they create different countries, or logical boundaries? On what basis is a boundary drawn between two countries? Before human civilization, the Earth only had land; there was no concept of a country.
One compelling answer that comes to my mind is the need to separate administration, culture, laws, and economics. Each country abstracts its people from other countries, because it would be an impossible task to create one unified culture and language for everyone; even in ancient times, different tribes had their own languages and cultures.
Within a country, there are predefined rules, a language, and a style that every citizen follows. Citizens understand the common language and are aware of the laws, the currency, and the customs. In brief, the citizens of a country are in sync with each other, and the country has one unified culture that everyone follows and understands.
In programming, we apply Bounded Context in the same manner: to separate different models/subdomains from each other within a domain/problem space. In the strategic design part of Domain-Driven Design, we introduce Bounded Context so that, within a certain context, a model has a certain meaning, just as in a particular country a certain language or currency has a specific meaning, while in a different country that currency or language may not be understood at all, or may mean something else. Take the word "fool": in English it means a stupid person, but in Bengali it means a flower!
So if we consider a country as a Bounded Context, and its language and currency as models inside that context, we can easily map the concept of a domain model to a certain bounded context. A model has a specific meaning within one Bounded Context, but the same model may have no meaning, or a different meaning, in another Bounded Context. With a Bounded Context we create a logical boundary inside which the model and the business terms have a well-defined meaning, and the Bounded Context hides its models from the outside world; all communication happens via an API. It follows that, inside a Bounded Context, the model and business logic obey their own rules and maintain their own persistence storage, which is not directly accessible to other Bounded Contexts.

Bounded Context Communication: Any design has two common parts: abstraction of the data model, and communication with other parts of the system. With Bounded Contexts we separate the data models, in simple terms abstracting the commonalities in the business. But how does one Bounded Context communicate with the others?
Here the concept of the Context Map steps in. Using a Context Map we can discover how one Bounded Context depends on another: whether two contexts have strong dependencies, whether one domain simply conforms to messages from another (Conformist), or whether they share a kernel/shared model. I will talk about Context Maps in a separate article; for now, you can think of the Context Map as the way Bounded Contexts communicate with each other clearly.
At this point, I believe you have a fair idea of what a Bounded Context is. If you still wonder how it fits into an architecture, read on.
Let's say we have to design an online student management system where a student can register on the site, choose a course, and pay the course fee; the student is then assigned to a batch, and both teacher and student are notified about the batch start date and time slot.
As an architect, you have to identify the bounded contexts of the different subdomains related to this business.
If we divide the business logic based on related functionality, we find four basic areas of functionality:
1. Registration process: Takes care of student registration.
2. Payment system: Processes the course fee and publishes the online payment status.
3. Batch scheduling: Upon confirmation of payment, checks teacher availability and batch availability, and based on that either creates a batch and assigns the candidate to it or adds the candidate to an existing batch.
4. Notification system: Notifies teacher and student about the timing and slot information.

So there are four bounded contexts: Registration, Payment, Scheduling, and Notification.
Now let's dig into how each context represents the Teacher, Student, and Course models.
Registration process: This context only cares about the student. It needs the student's demographic information, such as name, age, sex, and address, and which course the student has chosen.
The concept of a Teacher has no meaning in this context.
Payment system: The payment context treats the student as a candidate. Here only the candidate's financial information is required, such as the account number, and based on the course fee the system deducts the amount from the given account. The perspective on a student is completely different: the information the payment service needs is very different from what registration needs, even though it may still require a few minimal pieces of information such as the student's name and address.
The term "Teacher" is not used here either. In the payment context a teacher is treated as a faculty member, who can be permanent or a contractor; based on the faculty type, the payment system chooses the payment calculation, either per hour or per annum.
Batch scheduling: The scheduling context needs only bare-minimum information about teachers and students, such as name and address, but it needs detailed information about courses and the batches under each course.
Notification system: The notification context just needs the teacher's and student's email address or phone number and name, nothing else, plus the name of the student management system and the course details. Here the student management site acts as the sender, and the teacher and student are the receivers.
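To make this concrete, here is a minimal, hypothetical Java sketch (the class and field names are my own illustration, not taken from any real codebase) of how the same real-world person could be modeled differently in the Registration and Payment contexts:

// Registration context: cares about demographics and the chosen course.
class Student {
    String name;
    int age;
    String sex;
    String address;
    String chosenCourse;
}

// Payment context: the same person is a "Candidate"; only billing data matters.
class Candidate {
    String name;          // minimal overlap with the Registration model
    String accountNumber; // used to deduct the course fee
    java.math.BigDecimal courseFee;
}

Each class lives in its own bounded context's codebase and persistence store; neither context reaches into the other's model directly.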



So far we have seen that the same domain objects, Teacher, Student, and Course, have different meanings and use cases in different contexts. This is the beauty of Bounded Context: we have multiple canonical models for the same domain object, based on context. Developers, business people, and users are always on the same page when they are talking about a context; this is where the concept of the ubiquitous language is woven in. Using a ubiquitous language, DDD creates a unified picture in which every participant understands the terms based on the context.
Now, the common question: why is the term Bounded Context so popular in Microservices?
To answer this question, we first have to understand that DDD applies to monoliths as well as to microservices. In a monolith, however, the boundaries are vaguer and more a matter of logical segregation, so only disciplined developers keep them intact. The main reason is that a monolith is a single, giant codebase; we may break it into multiple modules along DDD lines and create an anti-corruption layer/translator when one module talks to another, but each module still needs the others as dependent jars in order to invoke their methods. Also, because it is a single codebase with many coders working on it, a less skilled coder can pollute a boundary or a domain object, and the architect cannot enforce a physical boundary based on the Bounded Context. In a microservice architecture this boundary is inherent: rather than one large codebase, we create small services, each with its own codebase, which talk to each other through APIs or messaging. So the guidance is clear: understand the business domain, break the business logic into multiple Bounded Contexts, give each Bounded Context its own codebase, and let them communicate through the Context Map. To design the Context Map you have to design your APIs carefully (a ports-and-adapters style architecture can help), so the code inside a Bounded Context does not leak to the outside world and is never polluted. Microservices offer this strong segregation of Bounded Contexts, which is why the Bounded Context is far more visible and understandable in the context of microservices.
Conclusion: A Bounded Context is a basic need when you are trying to break up a large body of business logic: it helps you understand how different parts of the system use domain objects in different ways and with different terminology. A Bounded Context is a view that properly organizes the business logic by functionality; to make the business work end to end, the Bounded Contexts still need to communicate, and the Context Map is used for that. In my next article, I will discuss the Context Map.

File Upload Using Angular 4/Microservice

Uploading a file is a regular feature of web programming; every business needs this facility. We know how to upload a file using JSP/HTML as the front end and Servlet/Struts/Spring MVC on the server side.


But how do we achieve the same thing with an Angular 4/microservice combination?
In this tutorial, I will show you the same, step by step. Before that, let me clarify one thing: I am assuming you have a basic understanding of Angular 4 and microservices.
Now let's jump straight to the problem statement:
I want to create an upload function that invokes a FileUpload microservice and stores the profile picture of an employee.
Let's create an Angular 4 project using the Angular.io plugin in Eclipse.
After creating the application, modify the app.component.ts file under the app module.
import { UploadFileService } from './fileUpload.service';
import { Component } from '@angular/core';
import { HttpResponse } from '@angular/common/http';

@Component({
  selector: 'app-root',
  templateUrl: './view.component.html',
  styleUrls: ['./app.component.css'],
  providers: [UploadFileService]
})
export class AppComponent {
  selectedFiles: FileList;
  currentFileUpload: File;

  constructor(private uploadService: UploadFileService) {}

  selectFile(event) {
    this.selectedFiles = event.target.files;
  }

  upload() {
    this.currentFileUpload = this.selectedFiles.item(0);
    this.uploadService.pushFileToStorage(this.currentFileUpload).subscribe(event => {
      if (event instanceof HttpResponse) {
        console.log('File is completely uploaded!');
      }
    });

    this.selectedFiles = undefined;
  }
}
In the @Component decorator, I changed the template URL to view.component.html, which holds the file upload form components. After that, I added UploadFileService as a provider; this service posts the selected files to the microservice.
Next, I defined a method called selectFile, which captures an event (the onchange event of the file upload form field) and extracts the selected files from the event target, in this case the file input field.
Then I added another method called upload, which calls the file upload service and subscribes to the Observable it returns, logging a message once an HttpResponse event arrives.

Here is the view.component.html file:
<div style="text-align:center">
  <label>
    <input type="file" (change)="selectFile($event)">
  </label>
  <button [disabled]="!selectedFiles" (click)="upload()">Upload</button>
</div>
Here I just added a file upload field; when we select a file, an onchange event is fired, which calls the selectFile method and passes the event to it.
The Upload button then calls the upload method.
Let's see the file upload service.
import {Injectable} from '@angular/core';
import {HttpClient, HttpRequest, HttpEvent} from '@angular/common/http';
import {Observable} from 'rxjs/Observable';

@Injectable()
export class UploadFileService {

  constructor(private http: HttpClient) {}

  pushFileToStorage(file: File): Observable<HttpEvent<{}>> {
    const formdata: FormData = new FormData();
    formdata.append('file', file);

    const req = new HttpRequest('POST', 'http://localhost:8085/profile/uploadpicture', formdata, {
      reportProgress: true,
      responseType: 'text'
    });

    return this.http.request(req);
  }

}
Here I create a FormData object, add the uploaded file to it, and, using Angular's HttpClient, POST the form data to a microservice running on port 8085 that publishes a REST endpoint called /profile/uploadpicture.
Hooray, we have successfully written the UI part of file upload using Angular 4.
If you start the Angular app (ng serve), it will look like the following:

Let's build the microservice part. Create a project called EmployeeFileUploadService in STS or via start.spring.io, and select the Eureka client module to register this microservice with Eureka.
After that, rename application.properties to bootstrap.properties and add the following entries:
spring.application.name=EmployeePhotoStorageService
eureka.client.serviceUrl.defaultZone=http://localhost:9091/eureka
server.port=8085
security.basic.enabled=false
management.security.enabled=false
My Eureka server runs on port 9091. I give this microservice the logical name EmployeePhotoStorageService, and it runs on port 8085.
Now I am going to create a REST controller that accepts the request from Angular and binds the multipart form data.
Let's see the code of the FileController:
package com.example.EmployeePhotoStorageService.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class FileController {

    @Autowired
    FileService fileservice;

    @CrossOrigin(origins = "http://localhost:4200") // called from the local Angular dev server
    @PostMapping("/profile/uploadpicture")
    public ResponseEntity<String> handleFileUpload(@RequestParam("file") MultipartFile file) {
        String message = "";
        try {
            fileservice.store(file);
            message = "You successfully uploaded " + file.getOriginalFilename() + "!";
            return ResponseEntity.status(HttpStatus.OK).body(message);
        } catch (Exception e) {
            message = "Failed to upload profile picture " + file.getOriginalFilename() + "!";
            return ResponseEntity.status(HttpStatus.EXPECTATION_FAILED).body(message);
        }
    }
}

A few things to notice here: I use the @CrossOrigin annotation to instruct Spring to allow requests coming from localhost:4200. In production, the microservice and the Angular app are hosted on different domains, so to allow requests from another origin we must provide this cross-origin configuration. I also autowire a FileService, which actually writes the file content to disk.
Let's see the FileService code:
package com.example.EmployeePhotoStorageService.controller;

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.springframework.stereotype.Service;
import org.springframework.web.multipart.MultipartFile;

@Service
public class FileService {

    private final Path rootLocation = Paths.get("ProfilePictureStore");

    public void store(MultipartFile file) {
        try {
            System.out.println(file.getOriginalFilename());
            System.out.println(rootLocation.toUri());
            Files.copy(file.getInputStream(), this.rootLocation.resolve(file.getOriginalFilename()));
        } catch (Exception e) {
            throw new RuntimeException("Failed to store file " + file.getOriginalFilename(), e);
        }
    }
}

Here I create a directory called ProfilePictureStore under the project, at the same level as the src folder, and copy the uploaded file's input stream to that location using java.nio's Files.copy() static method.
Now, to run this microservice, I have to write the Spring Boot application class. Let's see the code:
package com.example.EmployeePhotoStorageService;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@EnableDiscoveryClient
@SpringBootApplication
public class EmployeePhotoStorageService {

    public static void main(String[] args) {
        SpringApplication.run(EmployeePhotoStorageService.class, args);
    }
}

OK, we are almost set. The only piece missing from this tutorial is the pom.xml file.


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>com.example</groupId>
<artifactId>EmployeeFileUploadService</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>

<name>EmployeeFileUploadService</name>
<description>Demo project for Spring Boot</description>

<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>1.5.4.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>

<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<java.version>1.8</java.version>
<spring-cloud.version>Dalston.SR1</spring-cloud.version>
</properties>

<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-config</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-eureka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-jersey</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>

<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-feign</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-ribbon</artifactId>
</dependency>
 <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-hystrix</artifactId>
   </dependency>
</dependencies>


<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>${spring-cloud.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>

<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>

Yeah, that's it. If we run the microservice and upload a file from Angular, we can see that the file is stored in the ProfilePictureStore folder. Very easy, isn't it?
Conclusion: This is a very simple example, or I could say a prototype, of file upload: there is no validation, and no information is passed from the UI apart from the raw file (no comments, file name, tags, etc.). You can enrich this basic example using the FormData object in Angular. Another observation: I call the microservice instance directly from Angular. That would not be the case in production; there you would introduce a Zuul API gateway, which accepts the request from Angular, performs some security checks, then consults Eureka and routes the request to the actual microservice. For simplicity I skipped that part.

5 Best Practices for Building and Deploying Microservices

Many organizations that are going to adopt, or have already adopted, a microservices culture often think microservices simply means breaking a big business function into small independent services that can be scaled automatically. But that is only half of the story. The other half is often ignored, yet it is equally important: if you do not implement it, please go back and sit with your architects, because you are not getting the full advantages of the microservices culture. That other half is an efficient way to handle build and deployment, so that developer code is always production ready.

Today I will discuss five best practices for build and deployment of microservices.



  1. Independent deployment: When an organization adopts a microservices culture, the first rule to set is that each and every microservice should be independent. By independent I mean that each microservice should have a separate codebase, publish/consume well-defined APIs, have a separate persistence layer, separate build and deployment scripts, and separate deployable artifacts. The developers of a microservice then own the process end to end; they must have complete control over their service, both technically and managerially, which protects them from external impediments. When you build a microservice, you should know the APIs of the other microservices you are going to consume, and publish your own API so other microservices can consume your service. By doing that you become independent: because you know the request/response contracts of the services you consume, you can mock them and work independently, and because you publish your API with a well-defined method signature, you can do whatever you like with your codebase, deploying it a thousand times a day if you wish, without affecting others, as long as you do not change the API signature. From a management perspective, the team must own the build and deployment pipeline and have administrator access in each and every environment, even production, with complete control over everything related to administering the service. If your organization does not allow this, then you are not really doing microservices and will not get their essence. Sometimes a dependency on other services is unavoidable (unlike management dependencies, which are in our hands), or you need to revamp your API to cope with future changes, but those are exception cases; I talked about how to handle them in a previous article. The rule remains: every service is independent and has its sole identity, and the team behind the service owns everything about it. This brings speed of development, and as an organization you can publish features to end users easily.

  2. CI/CD: One of the main causes of slowness in publishing features to production is the dependency between the dev team and the ops team: the dev team holds the technical part and ops holds the environments. This creates barriers (communication, availability of environments and artifacts, etc.) because they depend on each other, and in a competitive market even a one-hour delay matters. This is not the way to adopt a microservices culture. In an ideal microservices environment, any code from a developer's environment can be production ready at any time: if the code passes all criteria, it can be published to production in no time, perhaps with one manual approval step, but everything else should be automated. The microservices culture welcomes a DevOps culture automatically; without DevOps, an organization does not utilize the power of microservices to the fullest. There should be a build pipeline: whenever a developer checks in code, it triggers a build; the artifact is deployed to a UAT environment where unit tests run, then moves to SIT where integration tests fire, and finally it can be published to production with or without manual intervention.

  3. Environment replication: In the traditional project lifecycle, we create different environments such as UAT, SIT, and PROD. This is a good approach, but the problem is that the environments are not identical. In SIT we have a few servers behind a load balancer, but in production the infrastructure is totally different: there may be primary and secondary pools of servers for blue/green deployment, or different data centers based on geographical affinity. So we have one artifact but different topologies, and because the artifact is tested against environments that do not look like production, we cannot predict how it will behave in production. The most common class of production bug is how an artifact behaves under heavy load. To find out how the code really behaves, we need an environment identical to production, with load-testing capability. This can be done if the organization adopts IaaS, where we can spawn machines seamlessly and clone environments. Our artifacts should therefore not be only jars, wars, or gems: an artifact should contain the whole environment, i.e., the OS, the application server, the persistence layer, and all ancillary settings. Nowadays we can achieve this through IaaS (Infrastructure as a Service). Docker is a very popular option for spawning containers, and with a Dockerfile we can describe what should be packaged inside the container; Puppet and Chef are also good options. If you want it managed by a third party, you can go for AWS, Cloud Foundry, Bluemix, etc.



  4. Artifact repository: Although this comes with CI/CD, I want to highlight it separately. Why does an organization adopt a microservices architecture, from a business perspective?
The answers are:
  1. They want to release features quickly to end users to stay ahead of competitors.
  2. No downtime, which increases their reputation.

    To achieve this, the organization has to have a robust setup where it stores all artifacts with their version numbers, so that if a production issue occurs with the current release we can either patch the fix and deploy it, or, if the fix takes time, easily revert to and redeploy the old artifact, so users do not have to wait for the next release. It is therefore extremely important to maintain an artifact repository, and deploying to production should be very easy, with no cumbersome process requiring special expertise; anyone should be able to do it, ideally with a single CLI command such as:

deploy <ServiceName> <VersionId> <Environment>

where:
ServiceName: the logical name of the microservice.
VersionId: the version you want to deploy.
Environment: a logical name such as prod1, prod2, or sit1; internally it maps to all the information about the servers, and the artifact is applied to all of them.



  5. Scrum of Scrums: We know that one microservice holds one domain's information. So when an organization wants to build a feature that is distributed over multiple domains, the work is naturally distributed across multiple agile teams. Although in a microservices culture each team is independent, because it can mock the other services, making the functionality work fully still requires integration. It is therefore extremely important that every team's work items stay in sync, so that within a defined timeframe all teams finish their jobs and the work can be integrated and tested in the integration environment. The organization must conduct a Scrum of Scrums, where each team's scrum master and product owner sit together and decide the priority items, so that the business functionality is published to production according to the announced release.



Conclusion: Breaking a complex business domain into small moving parts, each working independently, is just one side of the coin. To build and publish new features in a hassle-free manner, we also need to focus on the management structure of the organization.

I am curious to hear from organizations that have adopted, or are about to adopt, a microservices culture: how did they change their management policies, and what other management-related challenges did they face when moving to a microservices culture?


5 Best Practices for REST-Based Microservices

In this tutorial, we will discuss five best practices that make your microservices architecture developer friendly, so developers can manage services and track errors easily.

  1. User-Agent: It is very important to provide a meaningful name in the request header, so that if any problem occurs, such as slowness, huge memory consumption, or a traffic spike, developers can understand from which microservice the request originated. It is best practice to put the logical name/{service id} in the User-Agent property of the request header (see the interceptor sketch after this list).
Example: User-Agent: EmployeeSearchService



  2. API version control: In a REST-based microservices architecture, one microservice accesses another microservice's resources via its API, so the API acts as a facade for the rest of the system. The most important part is to write the API judiciously, so that it does not change frequently; other microservices consume it, and any change in an API method signature breaks their code. The irony is that change is inevitable: we don't know the future, and today's business strategy may become outdated in a few days, so the API must eventually be revamped. As an architect, the main challenge is how to cope with these changes. The answer is versioning: for a major change you publish a new API version and notify your consumers that the new version has arrived and that they should migrate within a defined timeframe. During that timeframe, as the API provider, you maintain both versions and keep sending high-importance reminders to your consumers to migrate their calls to the new version; after that period you can decommission the old version.

  3. Correlation ID: A business function in a microservices architecture is distributed over multiple microservices, so one request from a client internally fans out into many separate requests, and it is quite likely that one of the services is slow or down. But as a developer, how do you track which service is slow in the microservice forest? We need some way to group the requests logically. It is therefore good practice to generate a random UUID for each client request and pass that UUID on to every internal request, so that in the logs we can search by the UUID and reconstruct the call trace (the sketch after this list shows one way to propagate it).

  4. ELK implementation: Microservices are meant for autoscaling, so in a complex business domain it is very hard to manage log files. Say a system involves 50 microservices and each has 10 instances; that is 50*10 = 500 log files, so as a developer it is not possible to log into each instance and collect logs to investigate an issue. We need a centralized mechanism where all logs are shipped and where we can run intelligent searches, such as finding a particular error or exception, or searching by host or correlation ID. The ELK stack provides exactly this: E stands for Elasticsearch, L for Logstash, and K for Kibana. Elasticsearch stores the logs and provides fuzzy search capability, Logstash collects the logs from different sources and transforms them, and Kibana is a graphical user interface where developers can search the logs as they need. Alternatively, you can use Splunk or another open-source framework for centralizing and analyzing the logs.

  5. Resiliency implementation: As I said earlier, in a microservices architecture many microservices cooperate to complete a business function, so it is very common for one of the services to stop responding, which would otherwise stall the whole flow. To handle such scenarios, resiliency is extremely important, and each and every service should implement it to provide a seamless experience to the end user. When a service does not respond within a certain amount of time, there should be a fallback path, so users are not left waiting but are immediately notified about the internal problem with a default response. That is the essence of a responsive design (a minimal fallback sketch follows below).
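To illustrate the User-Agent and correlation ID practices (points 1 and 3 above), here is a minimal sketch using Spring's RestTemplate with a ClientHttpRequestInterceptor. The service name EmployeeSearchService and the header name X-Correlation-Id are illustrative conventions I chose, not a prescribed standard:

import java.io.IOException;
import java.util.UUID;

import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;

public class TracingInterceptor implements ClientHttpRequestInterceptor {

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body,
            ClientHttpRequestExecution execution) throws IOException {
        // Identify the calling microservice so the callee's logs show who originated the request.
        request.getHeaders().set("User-Agent", "EmployeeSearchService");
        // Propagate (or create) a correlation ID so one client request can be traced across services.
        if (!request.getHeaders().containsKey("X-Correlation-Id")) {
            request.getHeaders().set("X-Correlation-Id", UUID.randomUUID().toString());
        }
        return execution.execute(request, body);
    }
}

You would register the interceptor once on the RestTemplate used for inter-service calls, for example restTemplate.getInterceptors().add(new TracingInterceptor()).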
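And for the resiliency point (5), here is a minimal fallback sketch using Hystrix, which the spring-cloud-starter-hystrix dependency in the earlier pom.xml already supports. It assumes @EnableCircuitBreaker on the Spring Boot application class, and the client class, URL, and default response are hypothetical illustrations:

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class EmployeeClient {

    @Autowired
    private RestTemplate restTemplate;

    // If the downstream service does not answer in time, Hystrix opens the circuit
    // and calls the fallback instead of keeping the user waiting.
    @HystrixCommand(fallbackMethod = "defaultEmployeeName")
    public String findEmployeeName(String id) {
        return restTemplate.getForObject("http://localhost:8081/employee/" + id + "/name", String.class);
    }

    public String defaultEmployeeName(String id) {
        // Default response returned when the downstream service is slow or down.
        return "Employee name temporarily unavailable";
    }
}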




Conclusion: When building REST-based microservices, think about two perspectives:
1. The user experience.
2. The developer/support perspective.
A user does not like to wait, and s/he needs a clear, non-technical message about what is going wrong when an internal problem occurs, so s/he can talk to the help desk properly. On the other hand, when it is time to support the microservice, we need all the important information in the logs, because the incident has already happened and the logs are all we have to analyze it. So always think about the support perspective while developing a microservice, so you can track a flow and get sufficient information from the logs.

Deep Dive into the Distributed Service Registry

In my previous article, I discussed how to maintain resiliency in a microservice/distributed architecture. In this tutorial, I will discuss the distributed service registry.


What is a Distributed Service Registry?

In the service registry pattern, all services are registered in a registry (essentially a map data structure). If a service needs an instance of another service, it asks the registry and gets a service instance back. Very simple, isn't it? But when we deal with a distributed environment, it becomes highly complex.

Let's discuss why it gets so complex in a distributed environment. When we talk about a distributed environment, we assume that each service is deployed on a separate web/application server; to keep it very simple, each service runs in a separate JVM, one service per JVM. The service registry holds the service name and its instances as key/value pairs, so we need a centralized component that maintains this registry, performs CRUD operations on it, and is contacted by every service that needs to look another service up. So far so good. But by making the service registry centralized, i.e., a separate service in its own JVM, we introduce a single point of failure into our architecture. Think about the scenario where the service registry is unavailable: the whole application goes down, because no service can contact the registry to discover the services it needs. So making the service registry a single centralized instance is a bad idea.
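To make the "map data structure" idea concrete, here is a minimal, illustrative Java sketch of the kind of in-memory structure a registry node maintains (the names and methods are my own, not any specific product's API such as Eureka or Consul):

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Logical service name -> list of live instance addresses (host:port).
public class ServiceRegistry {

    private final Map<String, List<String>> registry = new ConcurrentHashMap<>();

    public void register(String serviceName, String instanceAddress) {
        registry.computeIfAbsent(serviceName, k -> new CopyOnWriteArrayList<>()).add(instanceAddress);
    }

    public void deregister(String serviceName, String instanceAddress) {
        List<String> instances = registry.get(serviceName);
        if (instances != null) {
            instances.remove(instanceAddress);
        }
    }

    public List<String> lookup(String serviceName) {
        return registry.getOrDefault(serviceName, Collections.emptyList());
    }
}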

Look at the following picture and think about how we can improve the architecture.

Service Registry
 
One thing we can do to improve the architecture is to maintain multiple instances of the service registry, so that if one is unavailable another can serve the purpose and there is no SPOF (single point of failure).

Distributed Service Registry Algorithm

The idea is that there is one leader node (service registry) to which all CRUD operations are applied, and it is the leader node's duty to distribute the updated state of the registry to the other service registries (peer nodes). On paper this seems like an easy and robust solution, but it is complex to implement, and a new set of challenges comes with this architecture.

Let me walk through the challenges one by one.

Leader election: With the service registries connected to each other, how do we select one node as the leader, and if that node fails, on what basis do we elect another leader to take over the responsibility?

State synchronization: With multiple service registry nodes, the question is: when the leader registry is updated by adding or deleting an instance, how does the leader distribute that state to the others, and how does it know that all nodes have the latest state?


To solve these problems, the distributed registry takes help from well-known distributed algorithms.




Bully algorithm for leader election: In the Bully algorithm, each node/service registry has a process ID, and the node with the greatest process ID is selected as the leader. Suppose the leader node becomes unavailable; how does the Bully algorithm select the next leader?

Step 1: When a node discovers that the leader is unavailable, it starts an election by sending an election (claim) message to all nodes with a higher ID than its own.

Step 2: When the higher nodes receive this message, they either respond or stay silent. If a higher node is alive, it replies to the originating node telling it to stop, takes over the election, and in turn sends an election message to all nodes with an ID higher than its own.

Step 3: If a node receives no response from any higher node, it selects itself as the leader and announces to all other nodes that it is the new leader.

Bully Algorithm
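Here is a highly simplified, single-process Java sketch of the election step just described; in a real system the "messages" would be network calls with timeouts and the node set would be dynamic, so treat the names and structure as illustrative only:

import java.util.List;

public class BullyElection {

    // IDs of all currently alive registry nodes; in reality each node only
    // knows its peers over the network, not a shared in-memory list.
    private final List<Integer> aliveNodeIds;

    public BullyElection(List<Integer> aliveNodeIds) {
        this.aliveNodeIds = aliveNodeIds;
    }

    // Called by the node 'selfId' when it notices the current leader is down.
    public int electLeader(int selfId) {
        // Step 1: "send" an election message to every alive node with a higher ID.
        boolean higherNodeAlive = aliveNodeIds.stream().anyMatch(id -> id > selfId);

        if (!higherNodeAlive) {
            // Step 3: no higher node answered, so this node becomes the leader.
            return selfId;
        }
        // Step 2: a higher node took over the election; the highest alive ID eventually wins.
        return aliveNodeIds.stream().max(Integer::compareTo).get();
    }
}

For example, with alive nodes {1, 3, 5} and node 3 detecting the failure of the old leader, electLeader(3) returns 5.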

State synchronization by consensus: If any instance is added to or deleted from the leader node, the change has to be reflected on the other nodes, so that every service registry ends up with the same value. To decide the correct state of the registry, a majority of the nodes has to agree on a certain (latest) value; faulty nodes then update their state to get back in sync with the others, and the system becomes eventually consistent.

Consensus works on two major criteria:

Value proposition: Each node/service registry proposes a value.
Agreement: A majority of the nodes should agree on the proposed value.

A consensus algorithm must satisfy the following properties:


Termination
    Every correct node eventually decides some value.
Validity
    If all registries propose the same state, say X, then all correct nodes decide X as the updated state.
Integrity
    Every correct node decides at most one value, and if it decides some value X, then X must have been proposed by the leader node.
Agreement
    Every correct node must agree on the same value X.
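
As a rough illustration of the "majority agreement" idea (not any particular consensus protocol such as Raft or Paxos), a leader might only consider a new registry state committed once more than half of the nodes have acknowledged it. A toy sketch:

public class MajorityCommit {

    // Returns true if the proposed state can be committed, i.e. a majority of all
    // nodes (including the leader itself) acknowledged the proposed value.
    public static boolean isCommitted(int acknowledgements, int totalNodes) {
        return acknowledgements > totalNodes / 2;
    }

    public static void main(String[] args) {
        // 5 registry nodes: 3 acknowledgements form a majority, 2 do not.
        System.out.println(isCommitted(3, 5)); // true  -> state becomes the agreed value
        System.out.println(isCommitted(2, 5)); // false -> leader must retry or wait
    }
}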


Conclusion: To maintain a highly available and fault-tolerant service registry in a distributed system, leader election and consensus are the main techniques to follow.