10 Commandments of Microservice Decomposition


When we talk about Microservices, we talk a lot about Domain-Driven Design, Event-Driven Architecture, core domains, subdomains, Bounded Contexts, the Anti-Corruption Layer, and so on.

Whether you are working on a brownfield or a greenfield project, if your organization wants to adopt Microservices (assuming it has a compelling reason to do so, as Microservices are not a free lunch), then you need to understand the above terms in detail so you can properly decompose your business domain logic (the business space) and map it onto a Microservices architecture (the code space), and thereby gain the benefits of Microservice traits.


In this article, I will cover the purpose of each of the above-mentioned terms in the context of decomposing Microservices and try to fit them all under one umbrella concept.


Covering each term from the ground up would take one or more articles of its own. Having said that, I will condense them here and give you pointers to apply while executing a Microservice decomposition strategy in your organization.


Let’s begin with the 10 Commandments of Microservice Decomposition.


1. View Your Business Domain in Terms of Bounded Contexts and Ubiquitous Language::


Before taking any step toward decomposition, the first task is to reduce the gap between product owners and developers. Product owners do not understand technical terms, and the technical team does not understand what a term means to the business or how the business interprets it. It is like a Portuguese speaker and an American each talking in their native language: neither understands the conversation. To bridge the gap, we need to take the steps below.


1. Gather in front of a drawing board and start a discussion with the product owners: what is the objective of the business, who are the actors in a particular feature, and which terms do they use while defining the feature? At every step, ask more questions until you figure out where the conflicting terms are; for example, a Customer in the Order context is different from a Customer in the Infrastructure Support context.

2. Once you understand the conflicting terms and have grouped the related features, draw a context boundary so that within each context every domain entity name is unambiguous.

3. Define a Ubiquitous Language for each context, so the business team and the tech team stay in sync and use a common language when they communicate.

4. Start with a coarse-grained Bounded Context, and divide it later only if a compelling reason emerges; I would recommend against splitting unless there is a business reason to do so. (A minimal sketch of context-specific models follows this list.)
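To make step 1 concrete, here is a minimal Java sketch (all names invented for illustration): the same business word "Customer" becomes two different models, one per bounded context, instead of one shared class.

```java
// Hypothetical sketch: the word "Customer" maps to two different models,
// one per bounded context. All names are illustrative.

class OrderContext {
    // In the Order context, a customer is a buyer with a delivery address.
    record Customer(String customerId, String shippingAddress, String paymentMethod) {}
}

class InfrastructureSupportContext {
    // In the Infrastructure Support context, a customer is an account
    // holder with a support plan and an SLA tier.
    record Customer(String accountId, String supportPlan, String slaTier) {}
}
```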



2. Identify the Core Domain and Apply Innovative Ideas::


The core domain is the domain that brings revenue to your business. For an online shopping site, the shopping cart module is the core domain: it gives the business and consumers the platform to buy and sell (B2C). Understand what your core module is and think about how you can improve it in ways your competitors have not; any automation or innovation there adds an advantage and boosts your revenue. So pay attention, do R&D, and invest money in the core domain to stay ahead of the competition.


3. Do Cost Optimization on Generic Domains:: 


Generic domains are the domains that are common to every business in your niche, and for which various third parties already provide commercial solutions in the market, like your Notification module or an Ad Campaign module. The best strategy is not to spend money on an in-house project to build such a module and reinvent the wheel; unless you have a compelling reason, adopt a third-party solution at a lower price.


4. Think About Support Domains::


The core domain needs the help of support domains to enrich itself, and in some cases a support domain can drive revenue and become a core domain in the future. So it is important to think carefully and decide where to invest in support domains so they can generate revenue. In a shopping cart domain, for example, Inventory Management is a support domain, but it is important to invest money in expanding inventory locations to cut shipping costs, and in algorithms that identify the nearest inventory location for a customer order to reduce shipping costs further.


5. Introduce an Anti-Corruption Layer::


The Anti-Corruption Layer (ACL) is an integral part of Microservice design: it protects Microservices from malformation by the outside world. In a real-life legacy project you will always encounter old systems built on a mainframe or in another language; while you are decommissioning them, they remain an important source of input data for your Microservices and live side by side with the Microservices architecture, and you cannot decompose them for various reasons.


So it is a good idea to create a facade between the legacy system and Microservice communication, rather than consuming data directly from the legacy system and coupling the Microservice and legacy architectures together.


Also think about generic domains: since they are backed by third-party products, rather than consuming or publishing data directly according to the third party's contract, introduce an anti-corruption layer that insulates your Microservices from the outside world's contract (the Ports and Adapters pattern). Rather than being driven by their contract, we define our own contract, and the ACL acts as a translator between the Microservice and the third-party contract. This also makes it easier to adopt a different third-party library in the future.
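As a minimal illustration of such a translator, here is a hypothetical Java sketch; the legacy field names and the Payment model are invented for the example.

```java
// A minimal anti-corruption-layer sketch (all names hypothetical).
// The legacy payload never leaks past the translator: the rest of the
// microservice depends only on its own Payment model.

import java.math.BigDecimal;

// The data exactly as the legacy/third-party system exposes it.
record LegacyPaymentRecord(String custNo, String amtStr, String curCd) {}

// Our own model, named in the service's ubiquitous language.
record Payment(String customerId, BigDecimal amount, String currency) {}

// The ACL: translates the external contract into our contract.
class PaymentTranslator {
    Payment toDomain(LegacyPaymentRecord rec) {
        return new Payment(
                rec.custNo(),
                new BigDecimal(rec.amtStr()), // legacy sends amounts as strings
                rec.curCd());
    }
}
```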


6. Identify the Data Communication Patterns::


Once you have decomposed the Microservices based on features, and each core service encapsulates its own database/persistence layer (database per service), the next important thing is to understand how, to complete a feature, your UI views/components and services will communicate with each other. Is it a sequential flow, where the user must complete the whole feature in one shot? Or can it be asynchronous, where the user performs partial functionality that creates an intermediate state, and another system acts on that intermediate state and later calls back or notifies the user to resume the action?
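As a rough sketch of the asynchronous style, assuming an invented order workflow: the user's partial action parks the order in an intermediate state, and a later callback resumes it.

```java
// Hypothetical sketch of the "intermediate state" flow. The persistence
// and notification helpers are assumed, not real APIs.

enum OrderStatus { DRAFT, AWAITING_APPROVAL, CONFIRMED }

class Order {
    final String orderId;
    OrderStatus status = OrderStatus.DRAFT;
    Order(String orderId) { this.orderId = orderId; }
}

class OrderWorkflow {
    // Step 1: the user submits a partial order; we persist the intermediate state.
    void submitForApproval(Order order) {
        order.status = OrderStatus.AWAITING_APPROVAL;
        // persist(order); notifyApprovalSystem(order.orderId);  <- assumed helpers
    }

    // Step 2: the approval system calls back later and the flow resumes.
    void onApprovalCallback(Order order, boolean approved) {
        if (approved) {
            order.status = OrderStatus.CONFIRMED;
            // notifyUser(order.orderId);  <- assumed helper
        }
    }
}
```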


7. Introduce Event-Driven Architecture (EDA)::


In a real-world application, your business cases have complex workflows with many branches that depend on the state of the data; on each state change, the workflow takes a different path. If you try to expose all of this via REST APIs, you will see that it creates a chatty network, and moreover each Microservice becomes coupled to the others, producing spaghetti code and a distributed big ball of mud.


So we need a clean architecture in which each Microservice can operate independently without creating coupling, and here Event-Driven Architecture plays a vital role. Each event wraps a change of state, and the Microservices follow a pub/sub model: one Microservice records its state change and wraps the necessary data in an event, and other Microservices listen to that event and choose their strategy based on the data it carries. Because events are immutable, they also hold the history of an entity or aggregate, so if you adopt an event store and event sourcing, you can generate statistics and reports from the events.
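Here is a minimal, broker-agnostic pub/sub sketch in plain Java; in a real system the in-memory bus would be a broker such as Kafka or RabbitMQ, and all names are illustrative.

```java
// One service wraps its state change in an immutable event; other
// services subscribe and react without coupling to the producer.

import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Immutable event wrapping one state change of the Order aggregate.
record OrderPlacedEvent(String orderId, String customerId, Instant occurredAt) {}

class EventBus {
    private final List<Consumer<OrderPlacedEvent>> subscribers = new ArrayList<>();
    void subscribe(Consumer<OrderPlacedEvent> s) { subscribers.add(s); }
    void publish(OrderPlacedEvent e) { subscribers.forEach(s -> s.accept(e)); }
}

public class EdaDemo {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        // Inventory reacts to the event without knowing about Ordering.
        bus.subscribe(e -> System.out.println("Inventory: reserve stock for " + e.orderId()));
        // Notification reacts to the same event independently.
        bus.subscribe(e -> System.out.println("Notification: email customer " + e.customerId()));
        bus.publish(new OrderPlacedEvent("ord-42", "cust-7", Instant.now()));
    }
}
```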


8. Make API Contracts Clean and Concise::


In Microservices you publish APIs that act as contracts. While publishing an API, make sure it does not expose internal state; think about encapsulation, and think about network calls. Publish the API in such a way that other services get enough information to carry on their flow and do not have to come back multiple times for derived information. Also think about your events: which ones you should publish, and which must remain internal. Often you can publish one coarse-grained event rather than several small internal events.


Example: say that internally you have an Address Changed event and a Personal Info Changed event. Rather than publishing both in the API contract, publish a single coarse-grained event called CustomerUpdateEvent, as sketched below.
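In code, the idea might look like this hypothetical sketch, where the fine-grained events never leave the service and only the consolidated event is published:

```java
// Internal, fine-grained events: never published outside the service.
record AddressChangedEvent(String customerId, String newAddress) {}
record PersonalInfoChangedEvent(String customerId, String newName) {}

// The single coarse-grained event exposed to other microservices.
// Field choices are illustrative.
record CustomerUpdateEvent(String customerId, String newAddress, String newName) {}
```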


9. Merge Related Microservices into a Bigger Service::


If, after decomposing, you observe that a few Microservices always change together whenever a feature is added or updated, then you know you decomposed them the wrong way: they should not have been segregated into small services, because they are part of the same logical unit. It is wise to merge them into a single service; this reduces unnecessary coupling and network calls.


10. Introduce Supporting Tools for Seamless Development::


Microservices are not a free lunch. If you adopt Microservices, first of all be ready to spend on supporting software. A Microservice system is distributed: we adopt it for scaling, resiliency, high availability, and reduced time to market, but because it works over the network, failure is inevitable, and you need to catch failures as early as possible, which is not possible without spending on infrastructure.


Only if you invest well will Microservices let you buy into those different options and help your organization grow further.


So spend on a CI/CD pipeline, adopt cloud infrastructure, use a tracing tool, use a log aggregator to search logs, use chaos tools to check how prepared you are for failure, and so on.






Conclusion ::

The points above are essential while you are decomposing Microservices. I will write an article on each topic covering how it plays a pivotal role in adopting a Microservices architecture.


I would also like to hear more from you about the challenges you faced while decomposing to Microservices.


What is a Microservice?


Although the question is simple, it is tough to answer, because Microservices have no de facto standard; many people have many perspectives on Microservices, and hence there are many definitions.




The most compelling definition is:

Microservices are a suite of services where each service is bounded by a Bounded Context and can run, be deployed, and scale independently without impacting other services.

So, to make the above statement true in reality, organizations that have adopted Microservices follow a few common characteristics.

Javaonfly recommends going by characteristics rather than by definition.

Characteristics to follow for creating a successful Microservices architecture:

Componentization of services:: A service can be upgraded, evolved, deployed, scaled, and tested independently.

Database per core service:: Take the coupling out of the database and make it private to each core service; create an API contract through which other services get the data.

Test cases are first-class citizens:: As a Microservice system is a collaboration of multiple services, unit tests and integration tests are needed often to identify issues. A fail-fast strategy is key to success, so you need full unit test and integration test coverage, with automation.

CI/CD pipeline and automation:: To succeed, we need a CI/CD pipeline with provisions to deploy to UAT and SIT, and maybe automated deployment to PROD as well. Also, if we use PaaS or containerization and treat resources like cattle, not pets, we can spin up servers on the fly for X-axis scaling.

You build it, you run it:: Make sure teams are mixed-bag Agile teams (UI, backend, DBA, QA) and each team is responsible for implementing a business capability, like a Registration team, Login team, Order team, or Payment team. One Scrum team may handle the aggregator and the core services for that functionality, depending on the organization's team strength, but once a feature is built, that team is its sole owner: all bugs, support, deployment, delivery, database tuning, and testing are done by that team.

A strategy for failure is a must:: A Microservice architecture deals with chains of service calls over a network that is not in the team's control, so it is bound to fail; but we build Microservices for high availability, so a failure strategy is a must, not a nice-to-have. Always design plans A, B, C, and D for failure, and lastly a fallback.

No single point of failure:: Never design a component that cannot be scaled along the X-axis; replication is a must in a Microservice system. Apply the CAP theorem to your business needs, whether the component is a service, database, cache, configuration store, or load balancer.

Reduce chattiness with an aggregator:: As Microservices communicate over the network and a capability spans multiple services, design an aggregator that collects information from the core services and returns a UI-specific response. Use the aggregator carefully; it can turn into a God Service anti-pattern.

Tracing and logging:: As Microservices communicate over the network and a capability spans multiple services, think about how developers will debug if something goes wrong, so use a tracing mechanism. Also, with X-axis scaling it is not possible to check every server, so you need a log aggregator like ELK or Splunk; using a correlation ID, a developer can trace the call.

Smart endpoints, dumb pipes:: The pipe, i.e. the network, only carries the payload; all logic lives in the Microservice itself, unlike a service bus, where the middleware does all the work for you.

Externalization of configuration:: Configuration must be extracted out of the Microservice so we can change it without restarting the services; this helps achieve high uptime for the service. (A hedged sketch follows this list.)

If all of the above characteristics are present, we can safely say it is a Microservice architecture; if one or many are missing, then it is not quite a Microservice system yet, but we can say it tends toward Microservices.
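As one concrete illustration of the externalized-configuration characteristic, here is a hedged Spring Boot sketch. The property names are invented, and changing values without any restart at all would additionally need something like Spring Cloud Config with @RefreshScope.

```java
// Hedged Spring Boot sketch: the values live in application.yml or
// environment variables, not in code, so each environment can differ
// without rebuilding the artifact. Property names are illustrative.

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class PaymentGatewayConfig {

    // Resolved at startup from application.yml or an environment variable.
    @Value("${payment.gateway.url}")
    private String gatewayUrl;

    // Falls back to 5000 ms when the property is not set.
    @Value("${payment.gateway.timeout-ms:5000}")
    private int timeoutMs;

    public String gatewayUrl() { return gatewayUrl; }
    public int timeoutMs() { return timeoutMs; }
}
```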


Dilemma on a Utility module: make it a jar or a separate Microservice?

In my previous article, I talked about how you can reach a conclusion about what to choose for your new project: Microservices or a monolith.
As an architect, perhaps you followed the points I mentioned in the previous article and concluded that you are going to use a Microservice architecture. A big cheer for you: you promoted yourself to a first-class citizen of the new era of the digital world. But (and when "but" is used, it means the previous part has little value until you solve what comes after it :)) what's next? You have heard that Microservice architecture demands componentization of services, but what actually is a component in the Microservice world?
In this article, I briefly discuss what componentization means, and what problems we face when we need to componentize a utility module.
Componentization of services:: From the word "Microservice" we can understand that it is a suite of small services, so the main objective is breaking a project into multiple services. But what do we mean by services, and what kind of services are they? Are we talking about the service layer in a layered architecture, or a service wrapped into a jar, a so-called library, or something published through a REST API? What is it?
To understand, always remember: "Microservice" means an independent thing (a component or service, whatever you call it) that can be deployed independently, updated independently, and whose lifecycle is managed independently.
When you create a Microservice, be very clear about that; there should not be any confusion. Many tricky situations may come up, but stick to the basics: a Microservice is an independently deployable thing.
One of the classic dilemmas is this: suppose you have a utility module, and you are now confused about what to do with it. Should you wrap the utility module into a jar that all other Microservices use as a dependency, so there is no code duplication? Or should you expose the utility as a service with an API (a separate Microservice) so others can consume it?
One may think that making the utility module a jar seems severely wrong in the Microservice world: Microservices means several independent small services, so the utility module should be published as a service, i.e. the utility module must be a Microservice.
Why is the utility module as a jar a bad idea?
When you import the utility module as a jar, you limit yourself. Your service is now dependent on the utility module packaged as that jar: if the utility module's Java version is upgraded, you have to upgrade your service too, or else your program cannot use the jar. Both must now stick to the same language, Java (although once a service is written in one language, I have rarely seen it rewritten in another); having said that, you should not confine your Microservice to a language. Also, if your utility module is used by many Microservices as a jar, any upgrade must be backward compatible: say you want to write functions that would be easy with Java 9, but you cannot upgrade the utility because the other Microservices have not moved to Java 9.
Why is the utility module as a jar a good idea?
As counter-logic, many can argue: to build a Microservice itself, we import jars/libraries like the Spring Boot starter parent, Gson, or Jackson, so why should we not package our utility as a jar? Why is it a bad idea? Let me tell you, many architects think it is a brilliant idea, as it serves many purposes.
If we do not use the utility module as a jar, we have two options:
1. Duplicate the functionality of the utility module in all Microservices.
2. Create a Microservice called Utility Service and publish an API to invoke the utility methods.
Both options have their own demerits.
1. Duplicating the functionality of the utility module in all Microservices:: Here the idea is to duplicate the code into every Microservice that consumes the utility module, so there is no utility module as such; everything is part of the local codebase. But this is against the DRY principle and a bad idea. If the utility module holds complex code involving many classes, all of it gets copied, unnecessarily, into every Microservice, and any future change has to be propagated to all of them. So there is a maintainability problem. Say we have a utility module that calls an external service, gets a complex response, parses it, applies some business logic, and produces analytical data exposed through public methods (a Java API). If you copy that complex algorithm into multiple services, it is certainly a very bad idea: think how many times the duplication happens, and if a new piece of analytical data is needed, you have to add it in every Microservice. What a pain; even while writing this, I am afraid to imagine the scenario. So it does not work, unless you have a very small piece of utility code that never changes, say a generic class dealing with dates, time zones, and formats (a single class with a few static methods); copying that class into every Microservice is a one-time effort.
2. Creating a Microservice called Utility Service and publishing an API to invoke the utility methods::
So you create one Microservice where all utility methods are dumped. As this is a utility module, all or most Microservices communicate with it, so every Microservice has a link to (a HAS-A dependency on) this service. Now imagine the picture: if that service goes down due to erroneous code, or all of its instances go down due to some major cause, every service goes down and the whole Microservice architecture is doomed. How is that different from a monolith?
In a Microservice architecture, partial failure may happen, but the system as a whole should never go down; it aims for 100% uptime.
So creating a utility Microservice does not look very promising, either.
As an architect, what should you do now? It is like a double-edged sword: whichever option you choose, you have to deal with its demerits.
Taking a decision:: As architects, we always deal with demerits and try to choose the option that has the fewest of them in a given case. Our favourite answer is "it depends on the scenario", and we are hated for that answer; even juniors mock us: "he never has a direct answer, he always says it depends".
Here I will also answer the same way: it depends :) on how your utility methods are written. In one line: if your utility functions are stateless, use a jar; if your utility functions deal with state, use a Microservice.
If you observe closely, you can divide your utility methods into two categories.
One type of utility takes an input, performs some operation on it, and returns a result. There are no side effects, and the result is independent of any external state: every time you pass the same parameters, you get the same result. In this case, do not publish it as a separate Microservice; it does not deal with state, so there is no need to publish it as a REST resource, because no create, put, or delete operation is done by it. You can package it as a jar and use it in every Microservice; think of utility jars like Jackson or Gson, which take a request, do some operation, and return a data structure. But if your utility takes a request, fetches data from a database, performs some operation, and then returns the result or saves state back to the database, it is ideal to publish it as a Microservice.
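A minimal sketch of that dividing line, with invented names:

```java
// Stateless: same input, same output, no I/O. Safe to ship as a shared jar.
final class DateFormatUtil {
    private DateFormatUtil() {}

    static String toIsoDate(java.time.LocalDate date) {
        return date.format(java.time.format.DateTimeFormatter.ISO_LOCAL_DATE);
    }
}

// Stateful: reads and writes persistent data, so it deserves its own
// service behind an API rather than a copy inside every consumer.
interface TransactionAnalyticsService {
    // e.g. exposed over REST as GET /analytics/credit-points/{customerId}
    int creditPointsFor(String customerId); // backed by the service's database
}
```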
Also think about failure here: if you call the utility synchronously, then a failure of the utility module fails the whole chain of Microservices. So ask whether you can make the utility operation asynchronous; does it really need to be called synchronously? Take storing events or audit information as an example: these are cross-cutting concerns and utility operations, not part of the main business flow, so we can make them async. Then, if storing the audit information or the event fails, it does not block the business flow, and your service does not go down because of it.
If you have to call a utility operation synchronously, then implement a circuit breaker and return a default path, so that if the operation fails, it does not block all the Microservices; at minimum, show the user a default message and let them perform other operations instead of that function. Say your transaction function calls a utility function to check the transaction amount, based on which the bank gives the user some credit points; if that function fails, we can show a default message like "at this time we can't process the transaction", while the user can still do other things like a balance check. So at no moment is the service totally down.
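Here is a minimal fallback sketch in plain Java; a library such as Resilience4j or Hystrix would normally provide the full circuit breaker, and the remote call below is a hypothetical stand-in.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

class CreditPointsClient {

    // Stand-in for the remote call to the utility service (hypothetical).
    private int fetchCreditPoints(String transactionId) {
        // ... real code would make an HTTP call to the utility service ...
        return 10;
    }

    // Returns the utility's answer, or a safe default if it is slow or
    // failing, so the main transaction flow is never blocked.
    int creditPointsOrDefault(String transactionId) {
        return CompletableFuture
                .supplyAsync(() -> fetchCreditPoints(transactionId))
                .completeOnTimeout(0, 2, TimeUnit.SECONDS) // fallback after 2s
                .exceptionally(ex -> 0)                    // fallback on error
                .join();
    }
}
```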
Conclusion:: There are also different shades of utility: some are mostly read-only but involve database operations, and some may use in-memory caching. So decide wisely how you want to build each utility: as a jar or as a Microservice.

DDD: Thinking in terms of Context Map

In my previous article, I discussed Bounded Contexts in detail, and we learned that the best way to tackle the complexity of a domain is to divide it into several subdomains and map them to different Bounded Contexts, where each business entity/value object has a precise meaning in its context. Every stakeholder of the business, like product owners, developers, architects, and sponsors, then understands the context and refers to each entity by its proper name; there should be no confusion about naming when we discuss terminology within a context. A Ubiquitous Language creates this unified language among business stakeholders.
With Bounded Contexts, we properly define the business model and create different contexts based on the business domains. But a piece of functionality often spans multiple business entities, and those entities lie in different Bounded Contexts/domains, so it is vitally important to understand the relationships between the Bounded Contexts in order to architect the business solution. A Context Map is a technique by which we can visualize the relationships between different contexts, so an integration architect can choose the best integration patterns for communicating with other contexts.

Why is a Context Map so important while designing a solution?
From a UML diagram, an architect can understand how different parts will communicate with each other; it gives the architect a view of the communication between contexts. That is fine, but the Context Map steps in before the UML diagram: it helps to visualize the nature of each relationship, and based on that nature, architects can decide what kind of technical solution to adopt.
The best part of Context Map visualization is that it captures the nature of the relationship. It not only tells you whether a relationship is upstream or downstream, publisher or subscriber; it also tells you how teams depend on one another, their motives, even their politics. Taking all of that into account, the architect is in a position to choose the optimal solution and minimize the risk of integrating with another context.
In the era of Microservices, the Context Map is a key player: before designing a holistic Microservice architecture where every team owns a Microservice, it is important to understand how one team depends on another, which teams are in a commanding position, and which teams only seek help; then you can architect the solution in the best possible way.
Think of our online student tutoring app. As a full-grown app, say it has more than fifty Microservices deployed in production, so fifty teams come into play. To develop a piece of functionality, say student registration, multiple contexts are affected, so multiple teams are involved. What is their relationship? While designing this feature, which pivot service's data is most needed? Obviously that service is in a commanding position: if its team is not ready, the other teams can't do anything, so all teams must align with it and sync their product backlogs with its backlog. Here internal politics enters the picture. If the data comes from an external team outside the organization, the solution is far more complex, because you can't force them; the only way is to request changes and wait for them. Based on these different scenarios and politics, the Context Map offers different solutions. I will cover the most important ones here.
1. Shared Kernel: A Shared Kernel describes a partnership relation where two or more teams share a common data model/value object. It reduces code duplication, as the different contexts use the common model, but that common model is very sensitive: any change, major or minor, must be agreed upon by all the parties, or it will break someone's code. So more communication and synchronization is needed among those teams. Say one team needs a change in the common model but another team is not ready: the first team has to wait for its partner to be ready, or the other partner has to change its code unnecessarily, not because it helps them, but just to stay in sync with the partnership. Different shades of problems are woven into maintaining a partnership relationship, so choose this pattern only when the common model stays constant, or changes once in an era.
In our example, say we develop an analytics module that analyses which courses students choose most, which students have chosen more than five courses, and so on. That module works with the Student and Course models, so our Analytics module can share the Registration module's Student model, with both teams agreeing on any changes to it.

Customer/Supplier: This is generally the most common relationship between two contexts: one context consumes or depends on data from another. The context that produces data is marked upstream, and the context that consumes it is called downstream. When we look at this relationship in terms of politics, we can see that the power distribution has many shades, like the following.
Upstream as the leader: In this type of relationship, the upstream team is in a commanding position: it does not care about the downstream team. The upstream produces the data, and the downstream team is at its mercy, always changing its model to match the data structures the upstream produces.
In our student registration app, the relationship between the Payment module and the Notification module is this kind of upstream/downstream: the Payment module decides what information it provides and in which structure, and the Notification module consumes that data structure.

Downstream as the leader: In some cases, the relationship is reversed: although the upstream produces the data, it must follow rules, namely the data structure demanded by the downstream. In this scenario, the downstream is in a commanding position.
Say our student registration system needs to submit Form 16 to the government as a taxpayer, so our Payment module has to submit Form 16 data to a government-exposed API. The government API has strict rules and data structures for submitting Form 16 data, so although the government API is downstream, it has total control: our Payment module must communicate with the downstream in a way that fulfils the downstream's rules.

The customer/supplier relationship works best when both parties, upstream and downstream, are aligned on the work: both agree on the interfaces and on changes to the structures. In case of any change in the contract, both parties discuss it, synchronize their priority backlogs, and agree on the change. If one party does not care about the other, the contract will be broken every time, and it becomes tough to maintain a customer/supplier relationship.
Conformist: Sometimes the relation between two parties is such that the downstream team always depends on the upstream team but cannot reach a mutual agreement with it about requirements. The upstream is not aligned with the downstream and does not care; it is free to change its published endpoints or contracts at any time, and it takes no requests from downstream. This happens when the upstream team belongs to an external system or a different management hierarchy, and many downstream systems are registered with it, so it cannot give priority to any one of them; instead, all downstream systems must align themselves with the upstream contracts and data structures. If the upstream contracts or data structures change, it is the downstream's responsibility to change accordingly.
Say our online student registration app has a free tutorial module, and any students or other applications can consume our free tutorials and embed them in their own applications. Here our free tutorial module acts as the upstream, independent of any third-party app that consumes the tutorials; we can't give priority to any of them, and we have no contract with them. If we change our contracts or data structures, it is the third parties' duty to change their applications accordingly in order to keep consuming our free tutorials. The other parties act as conformists.

Anti-Corruption Layer: When two systems interact, if we consume the data directly from the upstream, we pollute our downstream system, because the upstream data structure leaks into the downstream; if the upstream becomes polluted, so does our downstream, as it imitates the upstream data it consumes. So when consuming data from a third party or a legacy application, it is a good idea to always use a translation layer, where the upstream data is translated into the downstream data structure before being fed into the downstream. That way we resist data leakage from the upstream: if the upstream contract changes, it does not pollute the downstream's internals; only the translation layer has to change to adopt the new upstream structure and convert it into the downstream structure. This technique is called the Anti-Corruption Layer, and it saves the downstream system from upstream changes.
In our application, the Notification module can implement an ACL while consuming data from the Payment module, so if the Payment module's data structure changes, only the ACL layer is affected.

Open Host: In some cases, your domain's API needs to be accessed by many other services, like our free tutorial publisher module. Many external or internal domains want to consume this service, so as the upstream it should be hosted as a service that maintains a protocol and a service contract, such as REST with a JSON structure, so other systems can consume the data.
Published Language: Often two or more systems send and receive messages among themselves; in that case, a common language is needed for transforming data from one system to another, like XML or JSON. We call that structure a published language.
(Figure: the holistic view of our student online registration app in terms of a Context Map.)

Conclusion: The Context Map is a very important exercise for realizing how one domain communicates with another. It gives a proper view of the organization's structure: how the different domains are distributed, how domain owners depend on each other, and what the relationships between the teams are. Can they stay aligned while developing a feature? Based on all these parameters, an integration architect can adopt a suitable integration pattern to integrate the domains. Before designing the integration solution, the architect should always define the context map to understand the relationships and the structure of the teams, and based on that, choose the best possible solution.

5 Best Practices for Building and Deploying Microservices

Many organizations that are going to adopt, or have already adopted, a Microservices culture often think Microservices just means breaking a big business functionality into small independent services that can be scaled automatically. But this is only half of the story. The other half is often ignored, yet it is equally important, and if you do not implement it, please go back and sit with your architects, because you are not getting the full advantage of the Microservice culture. The other half is an efficient way to handle build and deployment, so that developer code is always production-ready.

Today I will discuss 5 best practices for building and deploying Microservices.



1. Independent deployment: When an organization adopts a Microservice culture, the first rule to set is that each and every Microservice must be independent. By independent I mean: each Microservice should have a separate codebase, publish and consume well-defined APIs, and have a separate persistence layer, separate build and deployment scripts, and separate deployable artifacts. The developers of a Microservice then own the process end to end; they must have complete control over their service, both technically and managerially, which protects them from external impediments. When you build a Microservice, you should know the APIs of the other Microservices you are going to consume, and you should publish an API so that other Microservices can consume your service. By doing that, you become independent: since you know the request/response of the services you consume, you can mock them and work independently, and since you publish your API with a well-defined method signature, you can do whatever you like with your codebase, even deploy it a thousand times a day, without affecting others, as long as you do not change the API signature. From a management perspective, the team must own the build and deployment pipeline, with administrator access in each and every environment, even production! If your organization does not allow this, then you are not doing Microservices and will not get their essence. Sometimes dependency on other services is unavoidable (unlike the management issues, which are in our hands), or you need to revamp your API to cope with future changes; but those are exceptional cases, and I talked about how to handle them in my previous article. By rule, every service is independent and has its own sole identity, and the team behind the service owns everything about it. This brings development speed, and as an organization, you can publish features to the end user easily.

2. CI/CD: One of the main causes of slowness in publishing a feature to production is the dependency between the Dev team and the Ops team: Dev holds the technical part, and Ops holds the environments. This creates barriers (communication, availability of environments and artifacts, etc.) because they depend on each other, and in a competitive market even a one-hour delay matters. This is not the way to adopt a Microservice culture. In an ideal Microservice environment, any code from a developer's environment can be production-ready at any time: if the code passes all criteria, it can be published to production in no time. There may be a manual step to approve the push to production, but everything else should be automated. Microservice culture welcomes DevOps culture naturally; without DevOps, an organization does not utilize the power of Microservices to the fullest. There should be a build pipeline: whenever a user checks in code, it triggers a build; the artifacts go to the UAT environment, where unit testing is done; then they move to SIT, where the integration tests fire; and at last they can be published to production, with or without manual intervention.

3. Environment replication: In the traditional project management lifecycle, our approach is to create different environments: a UAT environment, a SIT environment, a PROD environment, etc. This is a good approach, but the problem is that the environments are not identical. In SIT we have a few servers managed by a load balancer, but in production the infrastructure is totally different: there may be primary and secondary pools of servers for blue/green deployment, or different data centers based on geographic affinity. So we have one artifact but different topologies: one artifact tested against multiple environments. As a result, we cannot predict how the artifact will behave in production, because we do not have production-like infrastructure to test on; the most common production bugs concern how artifacts behave under heavy load. To learn how the code really behaves, we need an environment identical to production, with load-testing capability. This can be done if the organization adopts IaaS, where we can spawn machines seamlessly and clone environments. So our artifacts should not just be jars, wars, or gems; an artifact should contain the whole environment, i.e. the OS, the application server, the persistence layer, and all ancillary settings. Nowadays we can achieve this through IaaS (Infrastructure as a Service). Docker is also a very popular option: it spawns a container, and through a Dockerfile we can instruct what should be packaged inside the container. Puppet and Chef are good options too. If you want it managed by a third party, you can go for AWS, Cloud Foundry, Bluemix, etc.



4. Artifact repository: Although this comes with CI/CD, I want to highlight it separately. Why does an organization adopt a Microservice architecture from a business perspective?
The answers are:
1. They want to release features quickly to end users, to stay ahead of competitors.
2. No downtime, which increases reputation.

To achieve this, the organization has to have a robust framework where it stores all artifacts with their version numbers, so that if a production issue occurs with the current release, we can either patch the fix and deploy it to production or, if the fix takes time, easily revert the change and deploy the old artifact, so users do not have to wait until the next release for a fix. So it is vitally important to maintain an artifact repository, and deploying to production should be very easy, with no cumbersome process requiring special expertise; anyone should be able to do it, ideally with a single CLI command.

Like: deploy <ServiceName> <VersionId> <Environment>, where
ServiceName: the logical name of the Microservice.
VersionId: the version you want to deploy.
Environment: a logical name like prod1, prod2, or sit1; internally it contains all the server information and applies the artifacts to all the servers.



5. Scrum of Scrums: We know that one Microservice holds one domain's information. So when an organization wants to build a feature that spans multiple domains, the work is naturally distributed across multiple Agile teams. Although in a Microservice culture each team is independent, because they can mock the other services, making the functionality work fully requires integration. So it is vitally important that every team's work items stay in sync, so that within the defined timeframe all teams finish their jobs and all the work can be integrated and tested in the integration environment. The organization must therefore conduct a Scrum of Scrums, where each team's Scrum Master and Product Owner sit together and decide the priority items, so the business functionality is published to production according to the announced release.



Conclusion: Breaking a complex business domain into small moving parts, each working independently, is just one side of the coin. To build and publish new features in a hassle-free manner, we need to focus on the management structure of the organization as well.

I am curious to hear from organizations that have adopted, or are about to adopt, a Microservice culture: how did they change their management policies, and what other management-related challenges did they face when they moved to a Microservice culture?


5 Best Practices for REST-based Microservices

In this tutorial, we will discuss 5 best practices that make your Microservice architecture developer-friendly, so developers can manage services and track errors easily.

1. User-Agent: It is very important to provide a meaningful name in the request header, so that if any problem occurs (slowness, huge memory consumption, a spike, etc.), developers can understand which Microservice the request originated from. It is best practice to provide the logical name / {service id} in the User-Agent property of the request header; a sketch follows this list.
Ex: User-Agent: EmployeeSearchService



2. API version control: In a REST-based Microservices architecture, one Microservice accesses another Microservice's resources via its API, so the API acts as a facade to the other Microservices. The crucial part is writing the API judiciously, so that it does not change frequently: other Microservices consume it, so any change in an API method signature breaks their code. But the irony is that change is inevitable; we don't know the future, and today's business strategy may become stale in a few days, so the API must be revamped. As an architect, the main challenge is coping with these changes, and the answer is maintaining versions. For a major change, bump the API version and notify your consumers that a new version has arrived, asking them to migrate within a defined timeframe. During that timeframe you, as the API provider, maintain both versions and keep sending high-importance notices to your consumers to migrate their calls to the new version; after that period, you can decommission the old version. A sketch follows this list.

3. Correlation ID: We know that a business functionality in a Microservice architecture is distributed over multiple Microservices, so one request from a client internally fans out into many separate requests, and it is highly probable that one of the services is slow or down. But as a developer, how do you track which service is slow in the Microservice forest? Somehow we need to group the requests logically. So it is always good practice to generate a random UUID for each client request and pass that UUID to every internal request, so that in the logs we can search by the UUID and reconstruct the call trace. A sketch follows this list.

4. ELK implementation: Microservices are meant for autoscaling, so in a complex business domain it is very hard to manage their log files. Say a system involves 50 Microservices and each has 10 instances: there will be 50 * 10 = 500 log files, so as a developer it is not possible to log into each instance and collect logs to investigate an issue. We need a centralized mechanism where all logs are dumped and we can do intelligent searches over them, like finding an error or a particular exception, or searching by host, correlation ID, etc. ELK provides exactly this, where E stands for Elasticsearch, L for Logstash, and K for Kibana: Elasticsearch stores the logs and provides fuzzy search capability, Logstash collects logs from different sources and transforms them, and Kibana is a graphical user interface where developers can search the logs as they need. Alternatively, you can use Splunk or any other open-source framework for centralizing and analysing logs.

5. Resiliency implementation: As I said earlier, in a Microservice architecture many Microservices are involved in completing a business functionality, so it is very common for one of the services to stop responding, which can stall the whole flow. To handle such scenarios, resiliency is of the utmost importance, and each and every service should implement it to provide a seamless experience to the end user. When a service does not respond within a certain amount of time, there should be a fallback path, so users do not wait for the response but are immediately notified about the internal problem with a default response. That is the essence of responsive design.
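To make item 1 concrete, here is a hedged sketch with the Java 11 HttpClient; the URL and the service name are illustrative.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EmployeeSearchClient {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    HttpResponse<String> search(String query) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://employee-service/api/employees?q=" + query))
                .header("User-Agent", "EmployeeSearchService") // logical service name
                .GET()
                .build();
        return CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
    }
}
```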

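For item 2, a hedged Spring sketch of URI-based versioning, where v1 stays alive while consumers migrate to v2; all types and paths are invented for the example.

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class EmployeeController {

    // Old contract: kept alive (and marked deprecated) while consumers migrate.
    @Deprecated
    @GetMapping("/api/v1/employees/{id}")
    public EmployeeV1 getV1(@PathVariable String id) {
        return new EmployeeV1(id, "Jane Doe"); // stubbed lookup
    }

    // Revamped contract published as v2.
    @GetMapping("/api/v2/employees/{id}")
    public EmployeeV2 getV2(@PathVariable String id) {
        return new EmployeeV2(id, "Jane", "Doe"); // stubbed lookup
    }
}

record EmployeeV1(String id, String name) {}
record EmployeeV2(String id, String firstName, String lastName) {}
```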

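And for item 3, a hedged sketch of a servlet filter that assigns or propagates a correlation ID and exposes it to every log line via SLF4J's MDC; the header name and MDC key are conventions, not requirements.

```java
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.slf4j.MDC;

import java.io.IOException;
import java.util.UUID;

public class CorrelationIdFilter implements Filter {
    static final String HEADER = "X-Correlation-Id";

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String id = ((HttpServletRequest) req).getHeader(HEADER);
        if (id == null || id.isBlank()) {
            id = UUID.randomUUID().toString(); // fresh trace for a new client call
        }
        MDC.put("correlationId", id); // every log line on this thread carries the id
        try {
            chain.doFilter(req, res);
        } finally {
            MDC.remove("correlationId"); // avoid leaking into pooled threads
        }
    }
}
```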


Conclusion: when building REST-based Microservices, think about two perspectives:
1. The user experience.
2. The developer (support) perspective.
A user does not like to wait, and s/he needs a clear, non-technical message about what went wrong when an internal problem occurs, so s/he can communicate with the help desk properly. On the other hand, when it is time to support the Microservice, we need all the important information in the logs, because the incident has already happened and we analyse it from the logs. So always think about the support perspective while developing a Microservice, so you can trace a flow and get sufficient information from the logs.