Monday, December 30, 2019

Interview Question Resources for Spring | Spring Boot | Java 8 | Hibernate | Microservices

For a long time I have been meaning to pen down a post where I can recommend good resources for particular topics. In this post I will be listing those resources, and I will keep updating the list from time to time.

For Java/Java 8
http://www.java2novice.com/java_interview_questions/

Database :
http://codethataint.com/blog/category/database-2/interview-questions-database-2/

Spring boot :
https://www.javaguides.net/p/spring-annotations-examples.html

Thursday, December 26, 2019

Event Driven Architecture vs Message Driven Architecture

Message Passing

I'm going to start by guessing that when you say a "message passing" system, you are talking about a system in which one object sends a message to a specific other object. When I think of a system based on this paradigm, I more generally think of a system where an object that detects something knows who needs to be told about it. (I'm not specifying how it knows, just that it knows.)

This type of architecture is very good for systems where the producers and consumers are well known: either the producer of a message knows who must receive it, or the consumer knows whom to get the message from.

If you are writing a banking application, one would expect you really want to know who you are sending your transactions to and who they are coming from.

Event Based

The other system I believe you are thinking about when you say an "event based" system is one where an object raises an "event" without knowing who (if anyone) will respond to it.

This type of event driven architecture is very good for systems where the producer does not care about who consumes the event or where the consumer doesn't really care about who produced the event.

In general, these systems are great where you do not know the relationship between consumers and producers, and where you expect the relationship to be dynamic.

One system I have used this in was a system where the application was actually composed of dynamically configured modules (plug-ins) that were loaded at run time. When a module was loaded, it would register for the events it cared about. The result was a system in which it was very easy to extend the functionality.

For instance, let's say condition A raised Event EA which normally caused response RA. The object that caused response RA simply registered to receive event EA and acted on it when it arrived. Now, let's say we want to add a new response to EA, called RA_1. To do this, we simply add a new object that looks for EA and generates response RA_1.
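
A minimal sketch of that plug-in style registration (class and event names are hypothetical; no framework is assumed):

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// A tiny in-process event bus: producers raise events without knowing the consumers.
public class EventBus {

    private final Map<String, List<Consumer<Object>>> listeners = new HashMap<>();

    // A module registers interest in an event type when it is loaded.
    public void register(String eventType, Consumer<Object> handler) {
        listeners.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    // The producer raises the event without knowing who (if anyone) responds.
    public void publish(String eventType, Object payload) {
        for (Consumer<Object> handler : listeners.getOrDefault(eventType, Collections.emptyList())) {
            handler.accept(payload);
        }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        bus.register("EA", p -> System.out.println("RA handled: " + p));   // original response RA
        bus.register("EA", p -> System.out.println("RA_1 handled: " + p)); // new response added later
        bus.publish("EA", "condition A occurred");
    }
}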

Here are a couple of examples (using your terminology):

"message passing": Your boss tells you to fill out your time sheet.
"event driven": The department secretary sends out an email to everyone reminding them that their time sheets are due today.

More Info :

https://www.linkedin.com/pulse/android-event-driven-architecture-vs-message-ahmed-adel/

Microservices - Part 8 | Building a Microservice using Spring Cloud Config Server + Spring Cloud Bus + Feign Rest Client + Ribbon + Zuul + Spring Cloud Sleuth and Zipkin + Eureka + Hystrix

Ports

Application                             Port
Limits Service                          8080, 8081, ...
Spring Cloud Config Server              8888
Currency Exchange Service               8000, 8001, 8002, ...
Currency Conversion Service             8100, 8101, 8102, ...
Netflix Eureka Naming Server            8761
Netflix Zuul API Gateway Server         8765
Zipkin Distributed Tracing Server       9411

URLs

Limits Service :
http://localhost:8080/limits
POST -> http://localhost:8080/actuator/refresh

Spring Cloud Config Server :
http://localhost:8888/limits-service/default
http://localhost:8888/limits-service/dev

Currency Converter Service - Direct Call :
http://localhost:8100/currency-converter/from/USD/to/INR/quantity/10

Currency Converter Service - Feign :
http://localhost:8100/currency-converter-feign/from/EUR/to/INR/quantity/10000

Currency Exchange Service :
http://localhost:8000/currency-exchange/from/EUR/to/INR
http://localhost:8001/currency-exchange/from/USD/to/INR

Eureka :
http://localhost:8761/

Zuul - Currency Exchange & Conversion Services :
http://localhost:8765/currency-exchange-service/currency-exchange/from/EUR/to/INR
http://localhost:8765/currency-conversion-service/currency-converter-feign/from/USD/to/INR/quantity/10

Zipkin :
http://localhost:9411/zipkin/

Spring Cloud Bus Refresh :
http://localhost:8080/bus/refresh
Git :

/03.microservices/git-localconfig-repo/limits-service.properties
limits-service.minimum=8
limits-service.maximum=888
management.endpoints.web.exposure.include=*


Steps to Create a Microservice :

Step 1 : Create a Centralized Microservice Configuration with Spring Cloud Config Server
Dependency : spring-cloud-config-server
Annotation : @EnableConfigServer
application.properties :
spring.application.name=spring-cloud-config-server
server.port=8888
spring.cloud.config.server.git.uri=file:///in28Minutes/git/spring-micro-services/03.microservices/git-localconfig-repo

To fetch configuration from this config server, add the following to bootstrap.properties in the respective service :
spring.application.name=limits-service
spring.cloud.config.uri=http://localhost:8888
spring.profiles.active=qa
management.endpoints.web.exposure.include=*
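
A minimal config-server application class for Step 1 might look like the following (a sketch; the package and class names are placeholders, and only @EnableConfigServer and the properties above come from the setup described here):

package com.example.configserver; // placeholder package name

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

// Serves the Git-backed configuration, e.g. http://localhost:8888/limits-service/default
@SpringBootApplication
@EnableConfigServer
public class SpringCloudConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringCloudConfigServerApplication.class, args);
    }
}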


Step 2 : Using Spring Cloud Bus to exchange messages about Configuration updates
Implementing a refresh URL to dynamically load properties without restarting respective services.
Dependency : spring-cloud-starter-bus-amqp (to be included in all projects that need to be interconnected)
URL :  http://localhost:8080/bus/refresh

Refresh one URL and all the rest will receive the refresh message from RabbitMQ via Spring Cloud Bus.

Step 3 : Implementing Eureka Server
Dependency : spring-cloud-starter-netflix-eureka-server
Annotation : @EnableEurekaServer
application.properties :
spring.application.name=netflix-eureka-naming-server
server.port=8761
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false
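
For reference, the Eureka server itself is just an annotated Spring Boot application; a minimal sketch (package and class names are placeholders):

package com.example.eurekaserver; // placeholder package name

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// Starts the Eureka registry and dashboard at http://localhost:8761
@SpringBootApplication
@EnableEurekaServer
public class NetflixEurekaNamingServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(NetflixEurekaNamingServerApplication.class, args);
    }
}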

Step 4: Implementing API Gateway with Zuul
Dependency : spring-cloud-starter-netflix-zuul
Annotation : 
@EnableZuulProxy
@EnableDiscoveryClient
application.properties :
spring.application.name=netflix-zuul-api-gateway-server
server.port=8765
eureka.client.service-url.default-zone=http://localhost:8761/eureka


Step 5 : Implementing Services : Conversion Service -> Exchange Service
Annotation: @EnableDiscoveryClient , @EnableFeignClients("com.in28minutes.microservices.currencyconversionservice")

PS: Create a proxy for calling the exchange service from the conversion service with automatic instance detection :
@FeignClient(name="netflix-zuul-api-gateway-server")
@RibbonClient(name="currency-exchange-service")
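
A hedged sketch of such a proxy interface (the interface name, path and the Map return type are illustrative, and the Feign import path can differ between Spring Cloud versions):

package com.in28minutes.microservices.currencyconversionservice; // base package scanned by @EnableFeignClients

import java.util.Map;

import org.springframework.cloud.netflix.ribbon.RibbonClient;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Requests go to the Zuul gateway; Ribbon load-balances across the
// currency-exchange-service instances registered in Eureka.
@FeignClient(name = "netflix-zuul-api-gateway-server")
@RibbonClient(name = "currency-exchange-service")
public interface CurrencyExchangeServiceProxy {

    @GetMapping("/currency-exchange-service/currency-exchange/from/{from}/to/{to}")
    Map<String, Object> retrieveExchangeValue(
            @PathVariable("from") String from,
            @PathVariable("to") String to);
}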

Step 6 : Enable Distributed Tracing via Zipkin and Sleuth
Add the following bean in the gateway and all services :
@Bean
public Sampler defaultSampler() {
    return Sampler.ALWAYS_SAMPLE;
}

Plus the following dependencies :
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-bus-amqp</artifactId>
</dependency>

Step 6.1 : Install RabbitMQ and the Zipkin server.

Step 6.2 : Start a Zipkin server :

Annotation : @EnableZipkinServer
dependencies :
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin-stream</artifactId>
</dependency>

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>

<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-autoconfigure-ui</artifactId>
    <scope>runtime</scope>
</dependency>

 application.properties :

spring.application.name=zipkin-distributed-tracing-server
server.port=9411

Step 7: Implementing Fault Tolerance with Hystrix
Annotation : @EnableHystrix
@HystrixCommand(fallbackMethod="fallbackRetrieveConfiguration")
dependency :
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
</dependency>
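
A hedged sketch of how the fallback from Step 7 could be wired into a controller (the endpoint, class name and return values are illustrative):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;

@RestController
public class FaultToleranceController {

    // If the protected method throws, Hystrix routes the caller to the fallback;
    // repeated failures eventually trip the circuit open.
    @GetMapping("/fault-tolerance-example")
    @HystrixCommand(fallbackMethod = "fallbackRetrieveConfiguration")
    public String retrieveConfiguration() {
        throw new RuntimeException("Downstream service not available");
    }

    // The fallback must be compatible with the protected method's signature.
    public String fallbackRetrieveConfiguration() {
        return "fallback-response";
    }
}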


Ref : https://github.com/in28minutes/spring-microservices/tree/master/03.microservices

Microservices - Part 7 | Securing your Microservices | Security in Microservices

Some articles related to security:

https://www.baeldung.com/spring-security-zuul-oauth-jwt


Wednesday, December 25, 2019

Microservices - Part 6 - Design Patterns

Principles that microservice architecture has been built upon:

Scalability
Availability
Resiliency
Independent, autonomous
Decentralized governance
Failure isolation
Auto-Provisioning
Continuous delivery through DevOps

Applying all these principles brings several challenges and issues. Let's discuss those problems and their solutions.

1. Decomposition Patterns

a. Decompose by Business Capability

Each business capability can be thought of as a service, except it’s business-oriented rather than technical.

b. Decompose by Subdomain.

 "God Classes" which will not be easy to decompose. These classes will be common among multiple services.For the "God Classes" issue, DDD (Domain-Driven Design) comes to the rescue. It uses subdomains and bounded context concepts to solve this problem.

c. Strangler Pattern

This creates two separate applications that live side by side in the same URI space. Eventually, the newly refactored application “strangles” or replaces the original application until finally you can shut off the monolithic application.

2. Integration Patterns

a. API Gateway Pattern

An API Gateway helps to address many concerns raised by a microservice implementation, such as:
Single point of entry, acting as a proxy service to route a request to the concerned microservice
Aggregating the results from different microservices
Converting a request from one protocol (e.g. AMQP) to another (e.g. HTTP) and vice versa so that the producer and consumer can handle it
Authentication/authorization

b. Aggregator Pattern

The responsibility of aggregation cannot be left with the consumer, as then it might need to understand the internal implementation of the producer application. Either a composite microservice or an API Gateway can be used for this.
It is recommended to choose a composite microservice if any business logic is to be applied; otherwise, the API Gateway is the established solution.

c. Client-Side UI Composition Pattern

With business capabilities/subdomains decomposed, the UI has to be designed as a skeleton with multiple sections/regions of the screen/page. Each section makes a call to an individual backend microservice to pull the data; this is called composing UI components specific to a service. Frameworks like AngularJS and ReactJS help to do that easily. These screens are known as Single Page Applications (SPAs), which enables the app to refresh a particular region of the screen instead of the whole page.

3. Database Patterns

a. Database per Service
To solve microservice-related concerns, one database per microservice must be designed; it must be private to that service only. It should be accessed through the microservice API only and cannot be accessed by other services directly. For example, for relational databases, we can use private-tables-per-service, schema-per-service, or database-server-per-service.

b. Shared Database per Service

We have talked about one database per service being ideal for microservices, but that is possible when the application is greenfield and developed with DDD. If the application is a monolith being broken into microservices, denormalization is not that easy.
A shared database per service is not ideal, but that is the working solution for the above scenario. Most people consider this an anti-pattern for microservices, but for brownfield applications, this is a good start to break the application into smaller logical pieces. This should not be applied for greenfield applications. In this pattern, one database can be aligned with more than one microservice, but it has to be restricted to 2-3 maximum, otherwise scaling, autonomy, and independence will be challenging to execute.

c. Command Query Responsibility Segregation (CQRS)
Problem
Once we implement database-per-service, there is a requirement for queries that need joined data from multiple services, which is not possible with private databases. So how do we implement queries in a microservice architecture?

Solution
CQRS suggests splitting the application into two parts — the command side and the query side. The command side handles the Create, Update, and Delete requests. The query side handles the query part by using the materialized views. The event sourcing pattern is generally used along with it to create events for any data change. Materialized views are kept updated by subscribing to the stream of events.

d. Saga Pattern
Problem
When each service has its own database and a business transaction spans multiple services, how do we ensure data consistency across services? For example, for an e-commerce application where customers have a credit limit, the application must ensure that a new order will not exceed the customer’s credit limit. Since Orders and Customers are in different databases, the application cannot simply use a local ACID transaction.

Solution
A Saga represents a high-level business process that consists of several sub-requests, each of which updates data within a single service. Each request has a compensating request that is executed when the request fails. It can be implemented in two ways:

Choreography — When there is no central coordination, each service produces and listens to another service’s events and decides if an action should be taken or not.

Orchestration — An orchestrator (object) takes responsibility for a saga’s decision making and sequencing business logic.

4. Observability Patterns

a. Log Aggregation
We need a centralized logging service that aggregates logs from each service instance. Users can search and analyze the logs

b. Performance Metrics
A metrics service is required to gather statistics about individual operations. It should aggregate the metrics of an application service, which provides reporting and alerting. There are two models for aggregating metrics:
Push — the service pushes metrics to the metrics service e.g. NewRelic, AppDynamics
Pull — the metrics services pulls metrics from the service e.g. Prometheus

c. Distributed Tracing
To trace a request end-to-end and to troubleshoot the problem.
Spring Cloud Sleuth, along with the Zipkin server, is a common implementation.

d. Health Check
Each service needs to have an endpoint which can be used to check the health of the application, such as /health. This API should check the status of the host, the connection to other services/infrastructure, and any specific logic.
Spring Boot Actuator does implement a /health endpoint and the implementation can be customized, as well.

5. Cross-Cutting Concern Patterns

a. External Configuration
Externalize all the configuration, including endpoint URLs and credentials. The application should load them either at startup or on the fly.
Spring Cloud config server provides the option to externalize the properties to GitHub and load them as environment properties. These can be accessed by the application on startup or can be refreshed without a server restart.

b. Service Discovery Pattern
A service registry needs to be created which will keep the metadata of each producer service. A service instance should register to the registry when starting and should de-register when shutting down. The consumer or router should query the registry and find out the location of the service. The registry also needs to do a health check of the producer service to ensure that only working instances of the services are available to be consumed through it. There are two types of service discovery: client-side and server-side. An example of client-side discovery is Netflix Eureka and an example of server-side discovery is AWS ALB.

c. Circuit Breaker Pattern
The consumer should invoke a remote service via a proxy that behaves in a similar fashion to an electrical circuit breaker. When the number of consecutive failures crosses a threshold, the circuit breaker trips, and for the duration of a timeout period, all attempts to invoke the remote service will fail immediately. After the timeout expires the circuit breaker allows a limited number of test requests to pass through. If those requests succeed, the circuit breaker resumes normal operation. Otherwise, if there is a failure, the timeout period begins again.
Netflix Hystrix is a good implementation of the circuit breaker pattern. It also helps you to define a fallback mechanism which can be used when the circuit breaker trips. That provides a better user experience.

d. Blue-Green Deployment Pattern
To avoid or reduce downtime of the services during deployment. It achieves this by running two identical production environments, Blue and Green. Let's assume Green is the existing live instance and Blue is the new version of the application. At any time, only one of the environments is live, with the live environment serving all production traffic. There are many other patterns used with microservice architecture, like Sidecar, Chained Microservice, Branch Microservice, Event Sourcing Pattern, Continuous Delivery Patterns, and more.


Ref: https://dzone.com/articles/design-patterns-for-microservices

What are greenfield and brownfield applications?

Greenfield

In other disciplines like software engineering, a greenfield is also a project which lacks any constraints imposed by prior work. The analogy is to that of construction on greenfield land where there is no need to remodel or demolish an existing structure.

(from http://en.wikipedia.org/wiki/Greenfield_project)

Brownfield
Brownfield development is a term commonly used in the IT industry to describe problem spaces needing the development and deployment of new software systems in the immediate presence of existing (legacy) software applications/systems. This implies that any new software architecture must take into account and coexist with live software already in situ.

(from http://en.wikipedia.org/wiki/Brownfield_(software_development))

Microservices - Part 5 -Patterns for distributed transactions within a microservices architecture

What is a distributed transaction?
When a microservice architecture decomposes a monolithic system into self-encapsulated services, it can break transactions. This means a local transaction in the monolithic system is now distributed into multiple services that will be called in a sequence.

Here is a customer order example with a monolithic system using a local transaction:

Diagram of customer order example with a monolithic system using a local transaction

In the customer order example above, if a user sends a Put Order action to a monolithic system, the system will create a  local database transaction that works over multiple database tables. If any step fails, the transaction can roll back. This is known as ACID (Atomicity, Consistency, Isolation, Durability), which is guaranteed by the database system.

When we decompose this system, we create both the CustomerMicroservice and the OrderMicroservice, which have separate databases. Here is a customer order example with microservices:

Diagram of customer order example with microservices

When a Put Order request comes from the user, both microservices will be called to apply changes into their own database. Because the transaction is now across multiple databases, it is now considered a distributed transaction.



What is the problem?
In a monolithic system, we have a database system to ensure ACIDity. We now need to clarify the following key problems.

How do we keep the transaction atomic?
In a database system, atomicity means that in a transaction either all steps complete or no steps complete. The microservice-based system does not have a global transaction coordinator by default. In the example above, if the CreateOrder method fails, how do we roll back the changes we applied by the CustomerMicroservice?

Do we isolate user actions for concurrent requests?
If an object is written by a transaction and at the same time (before the transaction ends), it is read by another request, should the object return old data or updated data? In the example above, once UpdateCustomerFund succeeds but is still waiting for a response from CreateOrder, should requests for the current customer’s fund return the updated amount or not?

Possible solutions
The problems above are important for microservice-based systems. Otherwise, there is no way to tell if a transaction has completed successfully. The following two patterns can resolve the problem:

2pc (two-phase commit)
Saga
Two-phase commit (2pc) pattern
2pc is widely used in database systems. For some situations, you can use 2pc for microservices. Just be careful; not all situations suit 2pc and, in fact, 2pc is considered impractical within a microservice architecture (explained below).

So what is a two-phase commit?

As its name hints, 2pc has two phases: A prepare phase and a commit phase. In the prepare phase, all microservices will be asked to prepare for some data change that could be done atomically. Once all microservices are prepared, the commit phase will ask all the microservices to make the actual changes.

Normally, there needs to be a global coordinator to maintain the lifecycle of the transaction, and the coordinator will need to call the microservices in the prepare and commit phases.

Here is a 2pc implementation for the customer order example:

Diagram of 2pc implementation for the customer order example

In the example above, when a user sends a put order request, the Coordinator will first create a global transaction with all the context information. It will then tell CustomerMicroservice to prepare for updating a customer fund with the created transaction. The CustomerMicroservice will then check, for example, if the customer has enough funds to proceed with the transaction. Once CustomerMicroservice is OK to perform the change, it will lock down the object from further changes and tell the Coordinator that it is prepared. The same thing happens while creating the order in the OrderMicroservice. Once the Coordinator has confirmed all microservices are ready to apply their changes, it will then ask them to apply their changes by requesting a commit with the transaction. At this point, all objects will be unlocked.

If at any point a single microservice fails to prepare, the Coordinator will abort the transaction and begin the rollback process. Here is a diagram of a 2pc rollback for the customer order example:

Diagram of a 2pc rollback for the customer order example


In the above example, the CustomerMicroservice failed to prepare for some reason, but the OrderMicroservice has replied that it is prepared to create the order. The Coordinator will request an abort on the OrderMicroservice with the transaction and the OrderMicroservice will then roll back any changes made and unlock the database objects.

Benefits of using 2pc
2pc is a very strong consistency protocol. First, the prepare and commit phases guarantee that the transaction is atomic. The transaction will end with either all microservices returning successfully or all microservices having changed nothing. Secondly, 2pc allows read-write isolation. This means the changes on a field are not visible until the coordinator commits the changes.

Disadvantages of using 2pc
While 2pc has solved the problem, it is not really recommended for many microservice-based systems because 2pc is synchronous (blocking). The protocol will need to lock the object that will be changed before the transaction completes. In the example above, if a customer places an order, the “fund” field will be locked for the customer. This prevents the customer from applying new orders. This makes sense because if a “prepared” object changed after it claims it is “prepared,” then the commit phase could possibly not work.

This is not good. In a database system, transactions tend to be fast—normally within 50 ms. However, microservices have long delays with RPC calls, especially when integrating with external services such as a payment service. The lock could become a system performance bottleneck. Also, it is possible to have two transactions mutually lock each other (deadlock) when each transaction requests a lock on a resource the other requires.



Saga pattern
The Saga pattern is another widely used pattern for distributed transactions. It is different from 2pc, which is synchronous. The Saga pattern is asynchronous and reactive. In a Saga pattern, the distributed transaction is fulfilled by asynchronous local transactions on all related microservices. The microservices communicate with each other through an event bus.

Here is a diagram of the Saga pattern for the customer order example:

Diagram of the Saga pattern for the customer order example

In the example above, the OrderMicroservice receives a request to place an order. It first starts a local transaction to create an order and then emits an OrderCreated event. The CustomerMicroservice listens for this event and updates a customer fund once the event is received. If a deduction is successfully made from a fund, a CustomerFundUpdated event will then be emitted, which in this example means the end of the transaction.

If any microservice fails to complete its local transaction, the other microservices will run compensation transactions to roll back the changes. Here is a diagram of the Saga pattern for a compensation transaction:



Diagram of the Saga pattern for a compensation transaction

In the above example, the UpdateCustomerFund failed for some reason and it then emitted a CustomerFundUpdateFailed event. The OrderMicroservice listens for the event and starts its compensation transaction to revert the order that was created.
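
A rough, framework-free sketch of this choreography (class and event names are hypothetical; a real system would publish events to a broker such as Kafka or RabbitMQ rather than an in-memory list):

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Choreography-style saga sketch: each service runs its local transaction and
// publishes an event; a failure event triggers the compensating transaction.
public class SagaChoreographySketch {

    private static final List<Consumer<String>> SUBSCRIBERS = new ArrayList<>();

    private static void publish(String event) {
        for (Consumer<String> subscriber : new ArrayList<>(SUBSCRIBERS)) {
            subscriber.accept(event);
        }
    }

    public static void main(String[] args) {
        // CustomerMicroservice: reacts to OrderCreated by trying to deduct the fund.
        SUBSCRIBERS.add(event -> {
            if ("OrderCreated".equals(event)) {
                boolean deducted = false; // simulate UpdateCustomerFund failing
                publish(deducted ? "CustomerFundUpdated" : "CustomerFundUpdateFailed");
            }
        });
        // OrderMicroservice: compensates by reverting the order when the fund update fails.
        SUBSCRIBERS.add(event -> {
            if ("CustomerFundUpdateFailed".equals(event)) {
                System.out.println("Compensating transaction: order reverted");
            }
        });

        // OrderMicroservice local transaction: create the order, then emit the event.
        publish("OrderCreated");
    }
}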

Advantages of the Saga pattern
One big advantage of the Saga pattern is its support for long-lived transactions. Because each microservice focuses only on its own local atomic transaction, other microservices are not blocked if a microservice is running for a long time. This also allows transactions to continue waiting for user input. Also, because all local transactions are happening in parallel, there is no lock on any object.

Disadvantages of the Saga pattern
The Saga pattern is difficult to debug, especially when many microservices are involved. Also, the event messages could become difficult to maintain if the system gets complex. Another disadvantage of the Saga pattern is it does not have read isolation. For example, the customer could see the order being created, but in the next second, the order is removed due to a compensation transaction.

Adding a process manager
To address the complexity issue of the Saga pattern, it is quite normal to add a process manager as an orchestrator. The process manager is responsible for listening to events and triggering endpoints.

Conclusion
The Saga pattern is a preferable way of solving distributed transaction problems for a microservice-based architecture. However, it also introduces a new set of problems, such as how to atomically update the database and emit an event. Adoption of the Saga pattern requires a change in mindset for both development and testing. It could be a challenge for a team that is not familiar with this pattern. There are many variants that simplify its implementation. Therefore, it is important to choose the proper way to implement it for a project.


Ref : https://developers.redhat.com/blog/2018/10/01/patterns-for-distributed-transactions-within-a-microservices-architecture/

Tuesday, December 24, 2019

Microservices - Part 4 -Client Side Load Balancing | Ribbon as a Load Balancer

Server Side Load Balancing
In Java EE architecture, we deploy our war/ear files into multiple application servers, then we create a pool of servers and put a load balancer (e.g. NetScaler) in front of it, which has a public IP. The client makes a request using that public IP, and the load balancer decides which internal application server to forward the request to, using a round-robin or sticky-session algorithm. We call this server side load balancing.

server side Load Balancing

Problem: The problem with server side load balancing is that if one or more servers stop responding, we have to manually remove those servers from the load balancer by updating its IP table.

Another problem is that we have to implement a failover policy to provide the client with a seamless experience.

But microservices don't use server side load balancing. They use client side load balancing.

Client Side Load Balancing
To understand client side load balancing, let's recap microservices architecture. We generally create a service discovery server like Eureka or Consul, where each service instance registers when it bootstraps. The Eureka server maintains a service registry; it keeps all the instances of each service as a key/value map, where the {service id} of your microservice serves as the key and its instances serve as the value. Now, if one microservice wants to communicate with another microservice, it looks up the service registry using DiscoveryClient, and the Eureka server returns all the instances of the target microservice to the caller service. It is then the caller service's job to decide which instance to call. This is where client side load balancing steps in: the client side load balancer uses an algorithm, like round robin or zone-specific, to pick which instance of the target service to invoke. The advantage is that the service registry always keeps itself up to date; if one instance goes down, it is removed from the registry, so when the client side load balancer talks to the Eureka server it always gets the current list. There is no manual intervention, unlike server side load balancing, to remove an instance.

Another advantage is that, since the load balancer sits on the client side, you can control its load balancing algorithm programmatically. Ribbon provides this facility, so we will use Ribbon for client side load balancing.
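
With Spring Cloud, one common way to get this is a @LoadBalanced RestTemplate, so the caller addresses the target by its Eureka service id rather than a host and port (a sketch; the service id and path are illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
public class RibbonClientApplication {

    // @LoadBalanced makes Ribbon resolve the logical service name against the
    // Eureka registry and pick an instance for each call.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(RibbonClientApplication.class, args);
    }
}

// Usage elsewhere: the host part is a service id, not a real hostname.
// String response = restTemplate.getForObject(
//         "http://currency-exchange-service/currency-exchange/from/USD/to/INR", String.class);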

client side load balancing

Ref: https://dzone.com/articles/microservices-tutorial-ribbon-as-a-load-balancer-1

Monday, December 23, 2019

Microservices -The Ultimate list of Interview Questions Part 4

How to handle versioning of microservices?

There are different ways to handle the versioning of your REST API to allow older consumers to still consume the older endpoints. The ideal practice is that any non-backward-compatible change in a given REST endpoint shall lead to a new versioned endpoint.

Different mechanisms of versioning are:

Add version in the URL itself
Add version in API request header
The most common approach is URL versioning itself. A versioned URL looks like the following:

Versioned URL

  https://<host>:<port>/api/v1/...

  https://<host>:<port>/api/v2/...

As an API developer you must ensure that only backward-compatible changes are accommodated in a single version of URL. Consumer-Driven-Tests can help identify potential issues with API upgrades at an early stage.
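
A minimal sketch of URL versioning with two controllers living side by side (the endpoint and payloads are hypothetical):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// v1 keeps serving existing consumers unchanged.
@RestController
@RequestMapping("/api/v1/customers")
class CustomerControllerV1 {

    @GetMapping("/{id}")
    public String getCustomer(@PathVariable String id) {
        return "{\"id\": \"" + id + "\", \"name\": \"John Doe\"}";
    }
}

// v2 introduces the non-backward-compatible change behind a new version prefix.
@RestController
@RequestMapping("/api/v2/customers")
class CustomerControllerV2 {

    @GetMapping("/{id}")
    public String getCustomer(@PathVariable String id) {
        return "{\"id\": \"" + id + "\", \"name\": {\"first\": \"John\", \"last\": \"Doe\"}}";
    }
}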


How to refresh configuration changes on the fly in Spring Cloud environment?

Using config-server, it's possible to refresh the configuration on the fly. The configuration changes will only be picked up by beans that are declared with the @RefreshScope annotation.

The following code illustrates the same. The property message is defined in the config-server and changes to this property can be made at runtime without restarting the microservices.

package hello;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class ConfigClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConfigClientApplication.class, args);
    }
}

@RefreshScope   // (1)
@RestController
class MessageRestController {

    @Value("${message:Hello World}")
    private String message;

    @RequestMapping("/message")
    String getMessage() {
        return this.message;
    }
}

(1) @RefreshScope makes it possible to dynamically reload the configuration for this bean.


How will you ignore certain exceptions in Hystrix fallback execution?

The @HystrixCommand annotation provides an ignoreExceptions attribute that can be used to provide a list of ignored exceptions.

Code

import java.net.URI;

import org.springframework.beans.TypeMismatchException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.loadbalancer.LoadBalancerClient;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.MissingServletRequestParameterException;
import org.springframework.web.client.RestTemplate;

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;

@Service
public class HystrixService {

    @Autowired
    private LoadBalancerClient loadBalancer;

    @Autowired
    private RestTemplate restTemplate;

    @HystrixCommand(fallbackMethod = "reliable",
            ignoreExceptions = {IllegalStateException.class,
                    MissingServletRequestParameterException.class,
                    TypeMismatchException.class})
    public String readingList() {
        ServiceInstance instance = loadBalancer.choose("product-service");
        URI uri = URI.create("http://product-service/product/recommended");
        return this.restTemplate.getForObject(uri, String.class);
    }

    public String reliable(Throwable e) {
        return "Cloud Native Java (O'Reilly)";
    }
}

In the above example, if the actual method call throws IllegalStateException, MissingServletRequestParameterException or TypeMismatchException, then Hystrix will not trigger the fallback logic (the reliable method); instead, the actual exception will be wrapped inside HystrixBadRequestException and re-thrown to the caller. This is taken care of by the javanica library under the hood.


Is it a good idea to share a common database across multiple microservices?

In a microservices architecture, each microservice shall own its private data, which can only be accessed by the outside world through the owning service. If we start sharing a microservice’s private datastore with other services, then we will violate the principle of Bounded Context.

Practically we have three approaches -

Database server per microservice - Each microservice will have its own database server instance. This approach has the overhead of maintaining a database instance and its replication/backup, hence it's rarely used in a practical environment.
Schema per microservice - Each microservice owns a private database schema which is not accessible to other services. It's the most preferred approach for RDBMS databases (MySQL, Postgres, etc.)
Private Table per microservice - Each microservice owns a set of tables that must only be accessed by that service. It’s a logical separation of data. This approach is mostly used for the hosted database as a service solution (Amazon RDS).


What are best practices for microservices architecture?

Microservices Architecture can become cumbersome & unmanageable if not done properly. There are best practices that help design a resilient & highly scalable system. The most important ones are

Partition correctly

Get to know the domain of your business, that's very very important. Only then you will be able to define the bounded context and partition your microservice correctly based on business capabilities.

DevOps culture

Typically, everything from continuous integration all the way to continuous delivery and deployment should be automated. Otherwise, it is a big pain to manage a large fleet of microservices.

Design for stateless operations

We never know where a new instance of a particular microservice will be spun up for scaling out or for handling failure, so maintaining a state inside service instance is a very bad idea.

Design for failures

Failures are inevitable in distributed systems, so we must design our system to handle failures gracefully. Failures can be of different types and must be dealt with accordingly, for example -

Failure could be transient due to inherent brittle nature of the network, and the next retry may succeed. Such failures must be protected using retry operations.
Failure may be due to a hung service which can have cascading effects on the calling service. Such failures must be protected using Circuit Breaker Patterns. A fallback mechanism can be used to provide degraded functionality in this case.
A single component may fail and affect the health of the entire system; the bulkhead pattern must be used to prevent the entire system from failing.
Design for versioning

We should try to make our services backward compatible; explicit versioning must be used to cater to different versions of the REST endpoints.

Design for asynchronous communication b/w services

Asynchronous communication should be preferred over synchronous communication in inter microservice communication. One of the biggest advantages of using asynchronous messaging is that the service does not block while waiting for a response from another service.

Design for eventual consistency

Eventual consistency is a consistency model used in distributed computing to achieve high availability that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value.

Design for idempotent operations

Since networks are brittle, we should always design our services to accept repeated calls without any side effects. We can add some unique identifier to each request so that service can ignore the duplicate request sent over the network due to network failure/retry logic.
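
A small sketch of this idea, assuming a client-supplied request id (the class is hypothetical and uses an in-memory set; a real service would persist the processed ids):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of an idempotent payment operation keyed by a client-supplied request id.
public class PaymentService {

    private final Set<String> processedRequestIds = ConcurrentHashMap.newKeySet();

    public String processPayment(String requestId, long amount) {
        // A retried request with the same id is acknowledged but not re-applied.
        if (!processedRequestIds.add(requestId)) {
            return "ALREADY_PROCESSED";
        }
        // ... debit the account exactly once ...
        return "PROCESSED";
    }
}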

Share as little as possible

In monolithic applications, sharing is considered to be a best practice but that's not the case with Microservices. Sharing results in a violation of Bounded Context Principle, so we shall refrain from creating any single unified shared model that works across microservices. For example, if different services need a common Customer model, then we should create one for each microservice with just the required fields for a given bounded context rather than creating a big model class that is shared in all services. The more dependencies we have between services, the harder it is to isolate the service changes, making it difficult to make a change in a single service without affecting other services. Also, creating a unified model that works in all services brings complexity and ambiguity to the model itself, making it hard for anyone to understand the model.
In a way, we want to violate the DRY principle in microservices architecture when it comes to domain models.


How will you implement caching for microservices?

Caching is a technique of performance improvement for getting query results from a service. It helps minimize the calls to network, database, etc. We can use caching at multiple levels in microservices architecture -

Server-Side Caching - Distributed caching software like Redis/MemCache/etc are used to cache the results of business operations. The cache is distributed so all instances of a microservice can see the values from the shared cache. This type of caching is opaque to clients.
Gateway Cache - central API gateway can cache the query results as per business needs and provide improved performance. This way we can achieve caching for multiple services at one place. Distributed caching software like Redis or Memcache can be used in this case.
Client-Side Caching - We can set cache-headers in the http response and allow clients to cache the results for a pre-defined time. This will drastically reduce the load on servers since the client will not make repeated calls to the same resource. Servers can inform the clients when information is changed, thereby any changes in the query result can also be handled. E-Tags can be used for client-side caching. If the end client is a microservice itself, then Spring Cache support can be used to cache the results locally, as sketched below.
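
A minimal sketch of that local caching with Spring Cache (the service and cache names are illustrative; it assumes @EnableCaching is declared on a configuration class and a cache provider is on the classpath):

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProductService {

    // The first call for a given id hits the downstream service/database;
    // subsequent calls with the same id are served from the "products" cache.
    @Cacheable("products")
    public String findProduct(String id) {
        return callDownstreamService(id);
    }

    private String callDownstreamService(String id) {
        // placeholder for the expensive remote call
        return "product-" + id;
    }
}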

What is a good tool for documenting Microservices?

Swagger is a very good open-source tool for documenting APIs provided by microservices. It provides very easy-to-use interactive documentation.

By the use of Swagger annotations on REST endpoints, API documentation can be auto-generated and exposed over a web interface. Internal and external teams can use the web interface to see the list of APIs and their inputs & error codes. They can even invoke the endpoints directly from the web interface to get the results.

Swagger UI is a very powerful tool for your microservices consumers to help them understand the set of endpoints provided by a given microservice.


Why Basic Authentication is not suitable in the Microservices Context?

Basic Authentication is natively supported by almost all servers and clients; even Spring Security has very good support for it, and it's configured out of the box. But it is not a good fit for microservices due to many reasons, including -

We need credentials (username and password) every time we authenticate. This may be fine where all the participants can share the secrets securely, but Users may not be willing to share their credentials with all the applications.
There is no distinction between Users and Client Apps (an application that is making a request). In a realistic environment, we often need to know if a real user is making a request or a client app is making a request (for inter-service communication).
It only covers authentication. What about scopes and authorizations? Basic Auth does not support adding additional attributes to the authentication headers; there is no concept of tokens in Basic Auth.
Performance reasons for BCrypt matching. Passwords are often stored in the database using a one-way hash, i.e. BCrypt. It takes a lot of CPU cycles, depending upon the strength (a.k.a. log rounds in BCrypt), to compare the user’s plain password with the db-saved BCrypt password, so it may not be efficient to match the password on every request. The larger the strength parameter, the more work will have to be done (exponentially) to hash the passwords. If you set the strength to 12, then in total 2^12 iterations will be done in the BCrypt logic. Usually, 4-8 passwords can be matched per second on a T2.Micro instance on Amazon AWS. See BCryptPasswordEncoder for more info.
If we use Basic Auth for a mobile application client, then we might have to store the user’s credentials on the device to allow a remember-me feature. This is quite risky as anyone getting access to the device may steal the plain credentials.


How does JWT look like?

There are 3 parts in every JWT - the header, the claims (payload) and the signature. These 3 parts are separated by dots. The entire JWT is encoded in Base64 format.

JWT = {header}.{payload}.{signature}

A typical JWT is shown here for reference.

Encoded JSON Web Token
Entire JWT is encoded in Base64 format to make it compatible with HTTP protocol. Encoded JWT looks like the following:



Decoded JSON Web Token

Header

Header contains algorithm information e.g. HS256 and type e.g. JWT

{
  "alg": "HS256",
  "typ": "JWT"
}

Claim

The claims part has an expiry, issuer, user_id, scope, roles, client_id, etc. It is encoded as a JSON object. You can add custom attributes to the claim. This is the information that you want to exchange with the third party.

{
  "uid": "2ce35360-ef8e-4f69-a8d7-b5d1aec78759",
  "user_name": "user@mail.com",
  "scope": ["read"],
  "exp": 1520017228,
  "authorities": ["ROLE_USER", "ROLE_ADMIN"],
  "jti": "5b42ca29-8b61-4a3a-8502-53c21e85a117",
  "client_id": "acme-app"
}

Signature

The signature is typically a one-way hash of (header + payload), calculated using the HMAC SHA256 algorithm. The secret used for signing the claim should be kept private. A public/private key pair can also be used to sign the claim instead of using symmetric cryptography.

HMACSHA256(base64(header) + "." + base64(payload), "secret")
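
The signature step can be reproduced with plain JDK classes; here is a sketch (the header, payload and secret values are illustrative only):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Reproduces the JWT signature: HMAC-SHA256 over base64url(header).base64url(payload).
public class JwtSignatureSketch {

    public static void main(String[] args) throws Exception {
        String header = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
        String payload = "{\"user_name\":\"user@mail.com\",\"scope\":[\"read\"]}";
        String secret = "secret"; // illustrative only; use a strong key in practice

        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String signingInput = enc.encodeToString(header.getBytes(StandardCharsets.UTF_8))
                + "." + enc.encodeToString(payload.getBytes(StandardCharsets.UTF_8));

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String signature = enc.encodeToString(mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));

        System.out.println(signingInput + "." + signature); // the complete JWT
    }
}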



How OAuth2 Works?

OAuth2.0 is a delegation protocol where the Client (Mobile App or web app) does not need to know about the credentials of Resource Owner (end-user).

Oauth2 defines four roles.

Resource Owner - The person or the application that owns the data to be shared. When a resource owner is a person, it is called as an end-user.
Resource Server - The application that holds the protected resources. It is usually a microservice.
Authorization Server - the application that verifies the identity of the resource owner (users/clients). These servers issue access tokens after obtaining the authorization.
Client - the application that makes a request to Resource Server on behalf of Resource Owner. It could be a mobile app or a web app (like stackoverflow).


What are the tools and libraries available for testing microservices?

Important Tools and Libraries for testing Spring-based Microservices are -

JUnit

the standard test runners

TestNG
the next generation test runner

Hamcrest
declarative matchers and assertions

Rest-assured
for writing REST API driven end-to-end tests

Mockito
for mocking dependencies

Wiremock
for stubbing third-party services

Hoverfly
Create API simulation for end-to-end tests.

Spring Test and Spring Boot Test
for writing Spring Integration Tests - includes MockMVC, TestRestTemplate, Webclient like features.

JSONassert
An assertion library for JSON.

Pact
The Pact family of frameworks provide support for Consumer Driven Contracts testing.

Selenium
Selenium automates browsers. It's used for end-to-end automated UI testing.

Gradle
Gradle helps build, automate and deliver software, faster.

IntelliJ IDEA
IDE for Java Development

Using spring-boot-starter-test
We can just add the below dependency in project’s build.gradle

testCompile('org.springframework.boot:spring-boot-starter-test')

This starter will import two Spring Boot test modules - spring-boot-test & spring-boot-test-autoconfigure - as well as JUnit, AssertJ, Hamcrest, Mockito, JSONassert, Spring Test, Spring Boot Test and a number of other useful libraries.
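
With that starter on the test classpath, a minimal integration test against the /message endpoint shown earlier might look like this (a sketch using the JUnit 4 runner and MockMvc):

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.servlet.MockMvc;

// Boots the application context and exercises the /message endpoint in-process.
@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMockMvc
public class MessageRestControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    public void messageEndpointReturnsOk() throws Exception {
        mockMvc.perform(get("/message"))
               .andExpect(status().isOk());
    }
}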

What are the use cases for JWT?

There are many useful scenarios for leveraging the power of JWT-
Authentication

Authentication is one of the most common scenarios for using JWT, specifically in microservices architecture (but not limited to it). In microservices, the OAuth2 server generates a JWT at the time of login, and all subsequent requests can include the JWT access token as the means of authentication. JWT can also be used to implement Single Sign-On by sharing it between different applications hosted in different domains.
Information Exchange

JWT can be signed using public/private key pairs, so you can be sure that the senders are who they say they are. Hence JWT is a good way of sharing information between two parties. An example use case could be -

Generating Single Click Action Emails e.g. Activate your account, delete this comment, add this item to favorites, Reset your password, etc. All required information for the action can be put into JWT.
Timed sharing of a file download using a JWT link. A timestamp can be part of the claim, so when the server time is past the time encoded in the JWT, the link will automatically expire.

Microservices -The Ultimate list of Interview Questions Part 3

How will you implement service discovery in microservices architecture?

Servers come and go in a cloud environment, and new instances of the same services can be deployed to cater to an increasing load of requests. So it becomes absolutely essential to have a service registry & discovery mechanism that can be queried to find the address (host, port & protocol) of a given server. We may also need to locate servers for the purpose of client-side load balancing (Ribbon) and handling failover gracefully (Hystrix).

Spring Cloud solves this problem by providing a few ready-made solutions for this challenge. There are mainly two options available for the service discovery - Netflix Eureka Server and Consul. Let's discuss both of these briefly:


Netflix Eureka Server
Eureka is a REST (Representational State Transfer) based service that is primarily used in the AWS cloud for locating services for the purpose of load balancing and failover of middle-tier servers. The main features of Netflix Eureka are:

It provides service-registry.
zone aware service lookup is possible.
eureka-client (used by microservices) can cache the registry locally for faster lookup. The client also has a built-in load balancer that does basic round-robin load balancing.
Spring Cloud provides two dependencies - eureka-server and eureka-client. Eureka server dependency is only required in eureka server’s build.gradle

build.gradle - Eureka Server

compile('org.springframework.cloud:spring-cloud-starter-netflix-eureka-server')

On the other hand, each microservice needs to include the eureka-client dependency to enable Eureka discovery.

 build.gradle - Eureka Client (to be included in all microservices)

  compile('org.springframework.cloud:spring-cloud-starter-netflix-eureka-client')
Eureka server provides a basic dashboard for monitoring various instances and their health in the service registry. The UI is written in FreeMarker and provided out of the box without any extra configuration. A screenshot of the Eureka Server dashboard looks like the following.



It contains a list of all services that are registered with Eureka Server. Each server has information like zone, host, port, and protocol.

Consul Server

It is a REST-based tool for dynamic service registry. It can be used for registering a new service, locating a service and health checkup of a service.

You have the option to choose any one of the above in your Spring Cloud-based distributed application. Here, we will focus more on the Netflix Eureka Server option.


How will you use config-server for your development, stage and production environment?

If you have 3 different environments (develop/stage/production) in your project setup, then you need to create three different config storage projects. So in total, you will have four projects:

config-server
It is the config-server that can be deployed in each environment. It is the Java Code without configuration storage.

config-dev
It is the git storage for your development configuration. Each microservice in the development environment will fetch its config from this storage. This project has no Java code, and it is meant to be used with config-server.

config-qa
Same as config-dev, but it's meant to be used only in the QA environment.

config-prod
Same as config-dev, but meant for the production environment.

So depending upon the environment, we will use config-server with either config-dev, config-qa or config-prod.


How does Eureka Server work?

There are two main components in Eureka project: eureka-server and eureka-client.

Eureka Server
The central server (one per zone) that acts as a service registry. All microservices register with this eureka server during app bootstrap.

Eureka Client
Eureka also comes with a Java-based client component, the eureka-client, which makes interactions with the service much easier. The client also has a built-in load balancer that does basic round-robin load balancing. Each microservice in the distributed ecosystem must include this client to communicate and register with eureka-server.

Typical use case for Eureka
There is usually one eureka server cluster per region (US, Asia, Europe, Australia) which knows only about instances in its region. Services register with Eureka and then send heartbeats to renew their leases every 30 seconds. If a service cannot renew its lease a few times, it is taken out of the server registry in about 90 seconds. The registration information and the renewals are replicated to all the eureka nodes in the cluster. The clients from any zone can look up the registry information (this happens every 30 seconds) to locate their services (which could be in any zone) and make remote calls.

Eureka clients are built to handle the failure of one or more Eureka servers. Since Eureka clients have the registry cache information in them, they can operate reasonably well, even when all of the eureka servers go down.


What is Circuit Breaker Pattern?

Microservices often need to make remote network calls to other microservices running in different processes. Network calls can fail due to many reasons, including-

Brittle nature of the network itself
Remote process is hung or
Too much traffic on the target microservices than it can handle
This can lead to cascading failures in the calling service due to threads being blocked in the hung remote calls. A circuit breaker is a piece of software that is used to solve this problem. The basic idea is very simple - wrap a potentially failing remote call in a circuit breaker object that will monitor for failures/timeouts. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return with an error, without the protected call being made at all. This mechanism can protect the cascading effects of a single component failure in the system and provide the option to gracefully downgrade the functionality.

A typical use of circuit breaker in microservices architecture looks like the following diagram-



Typical Circuit Breaker Implementation

Here a REST client calls the Recommendation Service, which further communicates with the Books Service using a circuit breaker call wrapper. As soon as the books-service API calls start to fail, the circuit breaker will trip (open) the circuit and will not make any further calls to books-service until the circuit is closed again.

Martin Fowler has beautifully explained this phenomenon in detail on his blog.

Martin Fowler on Circuit Breaker Pattern : https://martinfowler.com/bliki/CircuitBreaker.html


What are Open, Closed and Half-Open states of Circuit Breaker?

Circuit Breaker wraps the original remote calls inside it and if any of these calls fails, the failure is counted. When the service dependency is healthy and no issues are detected, the circuit breaker is in Closed state. All invocations are passed through to the remote service.

If the failure count exceeds a specified threshold within a specified time period, the circuit trips into the Open State. In the Open State, calls always fail immediately without even invoking the actual remote call. The following factors are considered for tripping the circuit to Open State -

An Exception thrown (HTTP 500 error, can not connect)
Call takes longer than the configured timeout (default 1 second)
The internal thread pool (or semaphore depending on configuration) used by hystrix for the command execution rejects the execution due to exhausted resource pool.
After a predetermined period of time (by default 5 seconds), the circuit transitions into a half-open state. In this state, calls are again attempted to the remote dependency. Thereafter the successful calls transition the circuit breaker back into the closed state, while the failed calls return the circuit breaker into the open state.
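
These thresholds can be tuned per command; a hedged sketch using standard Hystrix command properties (the class, values and fallback are illustrative):

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixProperty;

public class BookClient {

    @HystrixCommand(
        fallbackMethod = "recommendedFallback",
        commandProperties = {
            // at least 10 requests in the rolling window before the circuit can trip
            @HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "10"),
            // trip the circuit when 50% or more of those requests fail
            @HystrixProperty(name = "circuitBreaker.errorThresholdPercentage", value = "50"),
            // stay Open for 5 seconds before allowing a Half-Open test request
            @HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "5000"),
            // treat calls slower than 1 second as failures
            @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "1000")
        })
    public String recommendedBooks() {
        // the remote call to books-service would go here
        throw new IllegalStateException("books-service unavailable");
    }

    public String recommendedFallback() {
        return "Cloud Native Java (O'Reilly)";
    }
}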


What are use-cases for Circuit Breaker Pattern and benefits of using Circuit Breaker Pattern?

Synchronous communication over the network that is likely to fail is a potential candidate for circuit breaker.
A circuit breaker is a valuable place for monitoring; any change in the breaker state should be logged so as to enable deep monitoring of microservices, and it makes it easy to troubleshoot the root cause of failures.
All places where a degraded functionality can be acceptable to the caller if the actual server is struggling/down.
Benefits:-

The circuit breaker can prevent a single service from failing the entire system by tripping off the circuit to the faulty microservice.
The circuit breaker can help to offload requests from a struggling server by tripping the circuit, thereby giving it a time to recover.
It provides a fallback mechanism where stale data can be served if the real service is down.


What is the difference between config first bootstrap and discovery first bootstrap in context of Spring Cloud Config client?

Config first bootstrap and discovery first bootstrap are two different approaches for using Spring Cloud Config client in Spring Cloud-powered microservices. Let’s discuss both of them:

Config First Bootstrap

This is the default behavior for any spring boot application where Spring Cloud Config client is on the classpath. When a config client starts up it binds to the Config Server using the bootstrap configuration property and initializes Spring Environment with remote property sources.

Config-first approach

The only configuration that each microservice (except config-server) needs to provide is the following:

File:-  /src/main/resources/bootstrap.yml

spring.cloud.config.uri: http://localhost:8888

In the config-first approach, even the eureka-server can fetch its own configuration from the config-server. A point worth noting here is that the config-server must be the first service to boot up in the entire ecosystem, because each service fetches its configuration from the config-server.

Discovery First Bootstrap

If you are using Spring Cloud Netflix and Eureka Service Discovery then you can have Config Server register with the Discovery Service and let all clients get access to config server via discovery service.

Discovery-first approach

This is not the default configuration in Spring Cloud applications, so we need to manually enable it using the below property in bootstrap.yml

/src/main/resources/bootstrap.yml

spring:
  cloud:
    config:
      discovery:
        enabled: true

This property should be provided by all microservices so that they can take advantage of discovery first approach.

The benefit of this approach is that now config-server can change its host/port without other microservices knowing about it since each microservice can get the configuration via eureka service now. The downside of this approach is that an extra network round trip is required to locate the service registration at app startup.


What is Strangulation Pattern in microservices architecture?

Strangulation is used to slowly decommission an older system and migrate the functionality to a newer version of microservices.

Normally one endpoint is strangled at a time, slowly replacing all of them with the newer implementation. Zuul Proxy (API Gateway) is a useful tool for this because we can use it to handle all traffic from clients of the old endpoints, but redirect only selected requests to the new ones.

Let’s take an example use-case:

/src/main/resources/application.yml

zuul:
  routes:
    first:
      path: /first/**
      url: http://first.example.com    # (1)
    legacy:
      path: /**
      url: http://legacy.example.com   # (2)

(1) Paths matching /first/** have been extracted into a new service with the external URL http://first.example.com.

(2) The legacy app is mapped to handle all requests that do not match any other pattern (/first/**).

This configuration is for the API Gateway (Zuul reverse proxy): we are strangling the selected endpoints under /first/** away from the legacy app hosted at http://legacy.example.com and routing them to the newly created microservice with the external URL http://first.example.com.


What is Hystrix?

Hystrix is Netflix's implementation of the circuit breaker pattern. It also employs the bulkhead design pattern by operating each circuit breaker within its own thread pool, and it collects many useful metrics about the circuit breaker's internal state, including -

Traffic volume.
Request volume.
Error percentage.
Hosts reporting.
Latency percentiles.
Successes, failures, and rejections.
All these metrics can be aggregated using another Netflix OSS project called Turbine. Hystrix dashboard can be used to visualize these aggregated metrics, providing excellent visibility into the overall health of the distributed system.
Hystrix can be used to specify the fallback method for execution in case the actual method call fails. This can be useful for graceful degradation of functionality in case of failure in remote invocation.

Add the Hystrix library to build.gradle:

dependencies {
    compile('org.springframework.cloud:spring-cloud-starter-hystrix')
}

1) Enable Circuit Breaker in the main application:

@EnableCircuitBreaker
@RestController
@SpringBootApplication
public class ReadingApplication {
    ...
}

2) Using @HystrixCommand fallback method execution:

@HystrixCommand(fallbackMethod = "reliable")
public String readingList() {
    URI uri = URI.create("http://localhost:8090/recommended");
    return this.restTemplate.getForObject(uri, String.class);
}

public String reliable() {
    return "Cached recommended response";
}

Using the @HystrixCommand annotation, we specify the fallback method to execute in case of an exception.
The fallback method should have the same signature (return type) as the original method. It provides graceful fallback behavior while the circuit is in the Open or Half-Open state.


What are the main features of the Hystrix library?

The Hystrix library makes our distributed system resilient (adaptable & quick to recover) to failures. It provides three main features:

Latency and fault-tolerance

It helps stop cascading failures, provides decent fallbacks and gracefully degrades service functionality to confine failures. It works on the ideas of fail-fast and rapid recovery. Two different options, thread isolation and semaphore isolation, are available to confine failures.

Real-time operations

Using real-time metrics, you can remain alert, make decisions, affect changes and see results.

Concurrency

Parallel execution, concurrency-aware request caching, and automated batching through request collapsing improve the concurrency performance of your application.

More information on Netflix hystrix library:

https://github.com/Netflix/Hystrix/
https://github.com/Netflix/Hystrix/wiki#principles
https://github.com/Netflix/Hystrix/wiki/How-it-Works


What is the difference between using a Circuit Breaker and a naive approach where we try/catch a remote method call and protect for failures?

Let's say we want to handle service-to-service failure gracefully without using the Circuit Breaker pattern. The naive approach would be to wrap the REST call in a try-catch clause. But a Circuit Breaker does a lot more than a try-catch can accomplish -

A Circuit Breaker does not even attempt calls once the failure threshold is reached; this reduces the number of network calls and frees up the threads that would otherwise be consumed making faulty calls.
A Circuit Breaker provides fallback method execution for gracefully degrading behavior; a try-catch approach will not do this out of the box without additional boilerplate code.
A Circuit Breaker can be configured to use a limited number of threads for a particular host/API, which brings all the benefits of the bulkhead design pattern.
So instead of wrapping service-to-service calls in try/catch clauses, we should use the circuit breaker pattern to make our system resilient to failures.


How does Hystrix implement Bulkhead Design Pattern?

The bulkhead implementation in Hystrix limits the number of concurrent calls to a component/service. This way, the number of resources (typically threads) that are waiting for a reply from the component/service is limited.

Let's assume we have a fictitious e-commerce web application in which the WebFront communicates with 3 different components using remote network calls (REST over HTTP):

Product catalogue Service
Product Reviews Service
Order Service
Now let's say that, due to some problem in the Product Reviews Service, all requests to this service start to hang (or time out), eventually causing all request-handling threads in the WebFront application to hang while waiting for an answer from the Reviews Service. This would make the entire WebFront application non-responsive. The resulting behavior would be the same if the request volume is high and the Reviews Service is taking a long time to respond to each request.

The Hystrix Solution

Hystrix's implementation of the bulkhead pattern limits the number of concurrent calls to a component, and it would have saved the application in this case by gracefully degrading functionality. Assume we have 30 request-handling threads in total and there is a limit of 10 concurrent calls to the Reviews Service. Then at most 10 request-handling threads can hang when calling the Reviews Service; the other 20 threads can still handle requests and use the Products and Orders Services. This approach keeps the WebFront responsive even if there is a failure in the Reviews Service.
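A minimal sketch of how such a limit could be expressed with Hystrix's thread-pool isolation is shown below; the ReviewsClient class, the reviewsPool key, the pool sizes and the reviews-service URL are illustrative assumptions.

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixProperty;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class ReviewsClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // A dedicated thread pool ("bulkhead") of 10 threads for Reviews Service calls;
    // the remaining request-handling threads are never blocked on this dependency.
    @HystrixCommand(
        threadPoolKey = "reviewsPool",
        threadPoolProperties = {
            @HystrixProperty(name = "coreSize", value = "10"),
            @HystrixProperty(name = "maxQueueSize", value = "5")
        },
        fallbackMethod = "noReviews")
    public String reviews(String productId) {
        return restTemplate.getForObject(
                "http://reviews-service/reviews/{id}", String.class, productId);
    }

    public String noReviews(String productId) {
        return "[]"; // degraded: render the product page without reviews
    }
}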


In a microservices architecture, what are smart endpoints and dumb pipes?

Martin Fowler introduced the concept of "smart endpoints & dumb pipes" while describing microservices architecture.

To give context, one of the main characteristics of a Unix-based system is to build small utilities and connect them using pipes. For example, a very popular way of finding all Java processes on a Linux system is the shell command pipeline ps -elf | grep java.

Here two commands are separated by a pipe. The pipe's job is to forward the output of the first command as input to the second command, nothing more. It acts like a dumb pipe which has no business logic except the routing of data from one utility to another.

In his article, Martin Fowler compares the Enterprise Service Bus (ESB) to ZeroMQ/RabbitMQ: an ESB is a pipe that has a lot of logic inside it, while ZeroMQ has no logic except the persistence/routing of messages. An ESB is a fat layer that does a lot of things like security checks, routing, business flow & validations, data transformations, etc. So an ESB is a kind of smart pipe that does a lot of work before passing data to the next endpoint (service). Smart endpoints & dumb pipes advocates exactly the opposite idea: the communication channel should be stripped of any business-specific logic and should only distribute messages between components. The components (endpoints/services) should do all the data validation, business processing, security checks, etc. on those incoming messages.

Microservices teams should follow the principles and protocols that the world wide web and Unix are built on.


Microservices -The Ultimate list of Interview Questions Part 2

How to stop a Spring Boot based microservices at startup if it can not connect to the Config server during bootstrap?

If you want to halt the service when it is not able to locate the config-server during bootstrap, then you need to configure the following property in microservice’s bootstrap.yml:

spring:
  cloud:
    config:
      fail-fast: true

Using this configuration will make microservice startup fail with an exception when config-server is not reachable during bootstrap.

We can enable a retry mechanism where microservice will retry 6 times before throwing an exception. We just need to add spring-retry and spring-boot-starter-aop to the classpath to enable this feature.

build.gradle:-

...
 dependencies {
   compile('org.springframework.boot:spring-boot-starter-aop')
   compile('org.springframework.retry:spring-retry')
   ...
}


How big a single microservice should be?

A good, albeit non-specific, rule of thumb: as small as possible but as big as necessary to represent the domain concept it owns, as Martin Fowler puts it.

Size should not be a determining factor in microservices, instead bounded context principle and single responsibility principle should be used to isolate a business capability into a single microservice boundary.

Microservices are usually small but not all small services are microservices. If any service is not following the Bounded Context Principle, Single Responsibility Principle, etc. then it is not a microservice irrespective of its size. So the size is not the only eligibility criteria for a service to become microservice.

In fact, the size of a microservice also depends on the language you choose (Java, Scala, PHP), as some languages are more verbose than others.


How do microservices communicate with each other?

Microservices are often integrated using a simple protocol like REST over HTTP. Other communication protocols can also be used for integration like AMQP, JMS, Kafka, etc.

The communication protocol can be broadly divided into two categories- synchronous communication and asynchronous communication.

Synchronous Communication

RestTemplate, WebClient and FeignClient can be used for synchronous communication between two microservices. Ideally, we should minimize the number of synchronous calls between microservices because networks are brittle and they introduce latency. Ribbon, a client-side load balancer, can be used on top of RestTemplate for better utilization of resources. A Hystrix circuit breaker can be used to handle partial failures gracefully without a cascading effect on the entire ecosystem. Distributed commits should be avoided at all costs; instead, we should opt for eventual consistency using asynchronous communication.
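For example, a declarative Feign client for a synchronous call might look like the sketch below; the service name currency-exchange-service matches the services mentioned earlier in this post, but the endpoint, method and return type are illustrative assumptions (and, depending on the Spring Cloud version, the annotation may live in org.springframework.cloud.netflix.feign instead of org.springframework.cloud.openfeign).

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Resolved via Eureka and load balanced by Ribbon; no hard-coded host or port.
@FeignClient(name = "currency-exchange-service")
public interface CurrencyExchangeClient {

    @GetMapping("/currency-exchange/from/{from}/to/{to}")
    String retrieveExchangeValue(@PathVariable("from") String from,
                                 @PathVariable("to") String to);
}

The client application would also enable Feign scanning with @EnableFeignClients on its main class.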

Asynchronous Communication

In this type of communication, the client does not wait for a response; instead, it just sends the message to a message broker. An AMQP broker (like RabbitMQ) or Kafka can be used for asynchronous communication across microservices to achieve eventual consistency.
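A minimal sketch of the producing side with Spring AMQP is shown below; the OrderEventsPublisher class, the order-events exchange and the order.placed routing key are illustrative assumptions.

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderEventsPublisher {

    private final RabbitTemplate rabbitTemplate;

    public OrderEventsPublisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Fire-and-forget: the caller does not wait for consumers to process the event.
    public void publishOrderPlaced(String orderId) {
        rabbitTemplate.convertAndSend("order-events", "order.placed", orderId);
    }
}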


What should be preferred communication style in microservices: synchronous or asynchronous?

You should use asynchronous communication while handling HTTP POST/PUT requests (anything that modifies the data), using some reliable queue mechanism (RabbitMQ, AMQP, etc.).
It is fine to use synchronous communication for the Aggregation pattern at the API Gateway level, but this aggregation should not include any business logic other than aggregation. Data values must not be transformed at the aggregator, otherwise it defeats the purpose of Bounded Context. In asynchronous communication, events should be published to a queue; events contain data about the domain, they should not tell consumers what action to take on that data.
If microservice-to-microservice communication still requires synchronous calls for GET operations, then seriously reconsider the partitioning of your microservices with respect to bounded context, and create some tasks in the backlog/technical debt.

What is the difference between Orchestration and Choreography in microservices context?

In Orchestration, we rely on a central system to control and call other Microservices in a certain fashion to complete a given task. The central system maintains the state of each step and sequence of the overall workflow. In Choreography, each Microservice works like a State Machine and reacts based on the input from other parts. Each service knows how to react to different events from other systems. There is no central command in this case.

Orchestration is a tightly coupled approach and is considered an anti-pattern in a microservices architecture, whereas Choreography's loosely coupled approach should be adopted wherever possible.

Example

Let's say we want to develop a microservice that sends product recommendation emails in a fictitious e-shop. In order to send recommendations, we need access to the user's order history, which lives in a different microservice.

In the Orchestration approach, this new recommendation microservice would make synchronous calls to the order service and fetch the relevant data, then calculate the recommendations based on the user's past purchases. Doing this for a million users would become cumbersome and would tightly couple the two microservices.

In the Choreography approach, we use event-based asynchronous communication: whenever a user makes a purchase, an event is published by the order service. The recommendation service listens to this event and starts building the user's recommendations. This is a loosely coupled and highly scalable approach. The event, in this case, does not dictate the action; it just carries the data.
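A sketch of the consuming side of this choreography is shown below, assuming the hypothetical order-events exchange from the earlier snippet is bound to a queue named recommendation.order-placed; all names are illustrative.

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class OrderPlacedListener {

    // The Recommendation Service reacts to the event; the Order Service knows nothing about it.
    @RabbitListener(queues = "recommendation.order-placed")
    public void onOrderPlaced(String orderId) {
        // rebuild/refresh recommendations for the user who placed this order
        System.out.println("Updating recommendations for order " + orderId);
    }
}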


What is API Gateway?

An API Gateway is a special class of microservice that meets the needs of a single client application (such as an Android app, web app, Angular JS app, iPhone app, etc.) and provides it with a single entry point to the backend resources (microservices), while handling cross-cutting concerns such as security, monitoring/metrics and resiliency.

With each client request, the gateway can access tens or hundreds of microservices concurrently, aggregating the responses and transforming them to meet the client application's needs. An API Gateway can use a client-side load balancer library (Ribbon) to distribute load across instances in a round-robin fashion. It can also do protocol translation, i.e. HTTP to AMQP, if necessary, and it can handle security for protected resources as well (a filter sketch is shown after the feature list below).

Features of API Gateway

Spring Cloud DiscoveryClient integration
Request Rate Limiting (available in Spring Boot 2.x)
Path Rewriting
Hystrix Circuit Breaker integration for resiliency
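Cross-cutting concerns like the ones above are typically implemented as gateway filters. A minimal sketch of a Zuul pre-filter is shown below; the GatewayHeaderFilter class and the X-Gateway header are illustrative assumptions.

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import org.springframework.stereotype.Component;

@Component
public class GatewayHeaderFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "pre"; // run before the request is routed to a microservice
    }

    @Override
    public int filterOrder() {
        return 1;
    }

    @Override
    public boolean shouldFilter() {
        return true; // apply to every request
    }

    @Override
    public Object run() {
        // add a header that downstream microservices can log or act upon
        RequestContext.getCurrentContext()
                .addZuulRequestHeader("X-Gateway", "zuul-api-gateway");
        return null;
    }
}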


How to achieve zero-downtime during the deployments?

As the name suggests, zero-downtime deployments do not bring outage in a production environment. It is a clever way of deploying your changes to production, where at any given point in time, at least one service will remain available to customers.

Blue-green deployment

One way of achieving this is blue/green deployment. In this approach, two versions of a single microservice are deployed at a time, but only one version takes real requests. Once the newer version is tested to the required satisfaction level, you switch traffic from the older version to the newer version.

You can run a smoke-test suite to verify that the functionality is running correctly in the newly deployed version. Based on the results of smoke-test, newer version can be released to become the live version.

Changes required in client code to handle zero-downtime

Let's say you have two instances of a service running at the same time, and both are registered in the Eureka registry. Further, the two instances are deployed using two distinct hostnames:

/src/main/resources/application.yml

spring.application.name: ticketBooks-service

---
spring.profiles: blue
eureka.instance.hostname: ticketBooks-service-blue.example.com

---
spring.profiles: green
eureka.instance.hostname: ticketBooks-service-green.example.com

Now the client app that needs to make API calls to ticketBooks-service may look like below:

@RestController
@SpringBootApplication
@EnableDiscoveryClient
public class ClientApp {

    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    @RequestMapping("/hit-some-api")
    public Object hitSomeApi() {
        return restTemplate().getForObject("https://ticketBooks-service/some-uri", Object.class);
    }
}

Now, when ticketBooks-service-green.example.com goes down for an upgrade, it gracefully shuts down and deletes its entry from the Eureka registry. But these changes will not be reflected in ClientApp until it fetches the registry again (which happens every 30 seconds). So for up to 30 seconds, ClientApp's @LoadBalanced RestTemplate may send requests to ticketBooks-service-green.example.com even though it is down.

To fix this, we can use Spring Retry support in Ribbon client-side load balancer. To enable Spring Retry, we need to follow the below steps:

Add spring-retry to build.gradle dependencies

compile("org.springframework.boot:spring-boot-starter-aop")

compile("org.springframework.retry:spring-retry")

Now enable spring-retry mechanism in ClientApp using @EnableRetry annotation, as shown below:

@EnableRetry
@RestController
@SpringBootApplication
@EnableDiscoveryClient
public class ClientApp {
    ...
}

Once this is done, Ribbon will automatically configure itself to use retry logic, and any failed request to ticketBooks-service-green.example.com will be retried against the next available instance (in round-robin fashion). You can customize this behaviour using the properties below:

/src/main/resources/application.yml

ribbon:
  MaxAutoRetries: 5
  MaxAutoRetriesNextServer: 5
  OkToRetryOnAllOperations: true
  OkToRetryOnAllErrors: true


How to achieve zero-downtime deployment(blue/green) when there is a database change?

The deployment scenario becomes complex when there are database changes during the upgrade. There can be two different scenarios:

1. The database change is backward compatible (e.g. adding a new table column).
2. The database change is not compatible with the older version of the application (e.g. renaming an existing table column).

Backward compatible change: This scenario is easy to implement and can be fully automated using Flyway. We can add the script to create the new column, and it will be executed at deployment time. During the blue/green deployment, two versions of the application (say v1 and v2) will be connected to the same database, so we need to make sure that the newly added columns allow null values (that is part of what makes the change backward compatible). If everything goes well, we can switch off the older version v1; otherwise, version v2 can be taken off.
Non-compatible database change: This is a tricky scenario and may require manual intervention in case of rollback. Let's say we want to rename the first_name column to fname in the database. Instead of renaming it directly, we can create a new column fname and copy all existing values of first_name into fname, keeping the first_name column as it is in the database. We can defer non-null checks on fname until after the deployment succeeds. If the deployment goes well, we migrate the data written to first_name by v1 into the new fname column manually after bringing down v1. If the deployment of v2 fails, we do the reverse.
Complexity can be much higher in a realistic production app; such discussions are beyond the scope of this post.


How to maintain ACID in microservice architecture?

ACID is an acronym for four primary attributes namely atomicity, consistency, isolation, and durability ensured by the database transaction manager.

Atomicity

In a transaction involving two or more entities, either all of the records are committed or none are.

Consistency

A database transaction must change affected data only in allowed ways following specific rules including constraints/triggers etc.

Isolation

Any transaction in progress (not yet committed) must remain isolated from any other transaction.

Durability

Committed records are saved by a database such that even in case of a failure or database restart, the data is available in its correct state.

In a distributed system involving multiple databases, we have two options to achieve ACID compliance:

One way to achieve ACID compliance is to use a two-phase commit (a.k.a. 2PC), which ensures that either all involved services commit the transaction or all of them roll it back.
Use eventual consistency, where multiple databases owned by different microservices become eventually consistent through asynchronous messaging. Eventual consistency is a specific form of weak consistency.
Two-phase commit should ideally be discouraged in microservices architecture due to its fragile and complex nature; achieving the required level of consistency through eventual consistency is usually the right approach in distributed systems.

What is Spring Cloud?

The Spring team has integrated a number of battle-tested open-source projects from companies like Pivotal and Netflix into a Spring project known as Spring Cloud. Spring Cloud provides libraries & tools to quickly build some of the common design patterns of a distributed system, including the following:

Pattern Type | Pattern Name | Spring Cloud Library
Development Pattern | Distributed/versioned configuration management | Spring Cloud Config Server
Development Pattern | Core microservices patterns | Spring Boot
Development Pattern | Asynchronous/distributed messaging | Spring Cloud Stream (AMQP and Kafka)
Development Pattern | Inter-service communication | RestTemplate and Spring Cloud Feign
Routing Pattern | Service registration & discovery | Spring Cloud Netflix Eureka & Consul
Routing Pattern | Service routing / API Gateway pattern | Spring Cloud Netflix Zuul
Resiliency Pattern | Client-side load balancing | Spring Cloud Netflix Ribbon
Resiliency Pattern | Circuit breaker & fallback pattern | Spring Cloud Netflix Hystrix
Resiliency Pattern | Bulkhead pattern | Spring Cloud / Spring Cloud Netflix Hystrix
Logging Pattern | Log correlation | Spring Cloud Sleuth
Logging Pattern | Microservice tracing | Spring Cloud Sleuth / Zipkin
Security Pattern | Authorization and authentication | Spring Cloud Security OAuth2
Security Pattern | Credentials management | Spring Cloud Security OAuth2 / JWT
Security Pattern | Distributed sessions | Spring Cloud OAuth2 and Redis

Spring Cloud makes it really easy to develop, deploy and operate JVM applications for the Cloud.

What kind of challenges do distributed systems introduce?

When you implement a microservices architecture, there are challenges you need to deal with in every single microservice. Moreover, the interaction between services creates a lot of additional challenges. If you plan ahead to overcome some of them and standardize the solutions across all microservices, it also becomes easier for developers to maintain the services.

Some of the most challenging areas are testing, debugging, security, version management, communication (sync or async) and state maintenance. Some of the cross-cutting concerns which should be standardized are monitoring, logging, performance improvement, deployment and security.

On what basis should microservices be defined?

It is a very subjective question, but to the best of my knowledge it should be based on the following criteria:

i) Business functionalities that change together in a bounded context.

ii) The service should be testable independently.

iii) Changes can be made without affecting clients or dependent services.

iv) It should be small enough that it can be maintained by 2-5 developers.

v) Reusability of the service.


How to tackle service failures when there are dependent services?

In a real production environment, it can happen that a particular service is down while the other services are functioning as expected. Under such conditions, that particular service and the services that depend on it are affected by the downtime.

In order to solve this issue, there is a concept in the microservices architecture pattern called the circuit breaker. Any service calling a remote service can go through a proxy layer which acts like an electrical circuit breaker. If the remote service is slow or down for 'n' attempts, then the proxy layer should fail fast and keep checking the remote service for its availability. The calling services should also handle the errors and provide retry logic. Once the remote service resumes, it starts being called again and the circuit closes.

This way, all other functionalities work as expected; only the failing service and its dependent services are affected.


How can one achieve automation in microservice based architecture?

This is related to automation for cross-cutting concerns. We can standardize some of the concerns like monitoring strategy, deployment strategy, review and commit strategy, branching and merging strategy, testing strategy, code structure strategy, etc.

For standards, we can follow the 12-factor application guidelines; if we follow them, we can achieve great productivity from day one. We can also containerize our application to utilize the latest DevOps practices like dockerization, and use Mesos, Marathon or Kubernetes for orchestrating the Docker images. Once the source code is dockerized, we can use a CI/CD pipeline to deploy the newly created codebase. Within the pipeline, we can add mechanisms to test the application and make sure we measure the required metrics before deploying the code. We can use strategies like blue-green deployment or canary deployment to roll out the code, so that we know the impact of changes before they go live on all of the servers at the same time, and we can do A/B testing to make sure that nothing is broken when live. In order to reduce the burden on the IT team, we can use AWS or Google Cloud to deploy our solutions and keep them on autoscale to make sure that we have enough resources available to serve the traffic we are receiving.


What should one do so that troubleshooting becomes easier in microservice based architecture?

This is a very interesting question. In a monolith, where an HTTP request waits for a response, the processing happens in memory, and it ensures that the transaction across all modules works at its best and that everything is done according to expectation. But it becomes challenging in the case of microservices because all services are running independently, their datastores can be independent, and REST APIs can be deployed on different endpoints. Each service does a small part of the work without knowing the context of the other microservices.

In this case, we can use the following measures to make sure we are able to trace the errors easily.

Services should log, and log aggregators should push the logs to centralized logging servers, for example an ELK stack, for analysis.
A unique value per client request (correlation id) should be logged in all the microservices so that errors can be traced on the central logging server (see the sketch after this list).
One should have good monitoring in place for each and every microservice in the ecosystem, which can record application metrics and health checks of the services, traffic patterns and service failures.
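In practice, Spring Cloud Sleuth adds and propagates such identifiers automatically, but a hand-rolled sketch of the idea could look like the filter below; the CorrelationIdFilter class and the X-Correlation-Id header name are illustrative assumptions.

import java.io.IOException;
import java.util.UUID;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

@Component
public class CorrelationIdFilter implements Filter {

    private static final String HEADER = "X-Correlation-Id";

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        String correlationId = httpRequest.getHeader(HEADER);
        if (correlationId == null || correlationId.isEmpty()) {
            correlationId = UUID.randomUUID().toString(); // first service in the chain creates it
        }
        MDC.put("correlationId", correlationId); // every log line for this request carries the id
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove("correlationId");
        }
    }

    @Override
    public void destroy() { }
}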


How should microservices communicate with each other?

It is an important design decision. The communication between services might or might not be necessary. It can happen synchronously or asynchronously. It can happen sequentially or it can happen in parallel. So, once we have decided what should be our communication mechanism, we can decide the technology which suits the best.

Here are some of the examples which you can consider.

A. Communication can be done using a queuing service like RabbitMQ, ActiveMQ or Kafka. This is called asynchronous communication.

B. Direct API calls can also be made to a microservice. With this approach, inter-service dependency increases. This is called synchronous communication.

C. Webhooks can be used to push data to connected clients/services.


How would you implement authentication in microservice architecture?

There are mainly two ways to achieve authentication in microservices architecture.

A. Centralized sessions

All the microservices can use a central session store, and user authentication can be achieved through it. This approach works but has many drawbacks: the centralized session store must be protected, and services must connect to it securely. The application needs to manage the state of the user, so this is called a stateful session.

B. Token-based authentication/authorization

In this approach, unlike the traditional way, information in the form of a token is held by the client, and the token is passed along with each request. A server can check the token and verify its validity (expiry, signature, etc.). Once the token is validated, the identity of the user can be obtained from it. However, signing/encryption is required for security reasons. JWT (JSON Web Token) is the open standard widely used for this, mainly in stateless applications. Alternatively, you can use OAuth-based authentication mechanisms.
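A minimal sketch of token verification on the server side, assuming the jjwt library (io.jsonwebtoken) and an HMAC secret shared with the token issuer; the class, secret handling and claim usage are illustrative, not part of the original post.

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;

public class JwtVerifier {

    private final String base64Secret; // assumed shared secret, loaded from secure configuration

    public JwtVerifier(String base64Secret) {
        this.base64Secret = base64Secret;
    }

    // Returns the user id (subject) if the token is valid; throws io.jsonwebtoken.JwtException otherwise.
    public String verify(String token) {
        Claims claims = Jwts.parser()
                .setSigningKey(base64Secret)
                .parseClaimsJws(token)   // verifies the signature and expiry
                .getBody();
        return claims.getSubject();
    }
}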


What would be your logging strategy in a microservice architecture?

Logging is a very important aspect of any application. If proper logging is in place, it becomes easy to support the other aspects of the application as well: in order to debug issues or understand what business logic was executed, it is critical to log the important details.

Ideally, you should follow the following practices for logging.

A. In a microservice architecture, each request should have a unique value (correlation id), and this value should be passed to each and every microservice so the correlation id can be logged across the services. Thus requests can be traced end to end.

B. Logs generated by all the services should be aggregated in a single location so that searching becomes easier. Generally, people use the ELK stack for this, which makes it easy for support teams to debug issues.


How does docker help in microservice architecture?

Docker helps in many ways for microservices architecture.

A. In a microservice architecture, there can be many different services written in different languages, so a developer might have to set up a few services along with their dependencies and platform requirements. This becomes difficult as the number of services in the ecosystem grows. However, it becomes very easy if these services run inside Docker containers.

B. Running services inside containers also gives a similar setup across all environments, i.e. development, staging and production.

C. Docker also helps in scaling, along with container orchestration.

D. Docker makes it easy to upgrade the underlying language or runtime version, saving many man-hours.

E. Docker helps to onboard engineers fast.

F. Docker also helps to reduce dependencies on IT teams to set up and manage the different kinds of environments required.


How would you manage application configuration in microservice running in a container?

As container based deployment involves a single image per microservice, it is a bad idea to bundle the configuration along with the image.

This approach does not scale, because we might have multiple environments and we might also have to take care of geographically distributed deployments where the configuration differs as well.

Also, when a web application and a cron application are part of the same codebase, additional care may be needed in production, as configuration changes can have repercussions on how the crons behave.

To solve this, we can put all our configuration in a centralized config service which the application queries at runtime for all its configuration. Spring Cloud Config Server is one example of a service that provides this facility.

It also helps to secure the information, as the configuration might have passwords or access to reports or database access controls. Only trusted parties should be allowed to access these details for security reasons.
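Once the configuration comes from a central server, a service can consume it like any other Spring property and refresh it at runtime. A minimal sketch is shown below; the property name app.feature.recommendations-enabled and the endpoint are illustrative assumptions.

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RefreshScope // re-reads the value when a refresh event is received (e.g. via /actuator/refresh)
@RestController
public class FeatureController {

    // served by the central config server at runtime, with a local default
    @Value("${app.feature.recommendations-enabled:false}")
    private boolean recommendationsEnabled;

    @GetMapping("/features/recommendations")
    public boolean recommendationsEnabled() {
        return recommendationsEnabled;
    }
}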


What is container orchestration and how does it helps in a microservice architecture?

In a production environment, you don’t just deal with the application code/application server. You need to deal with API Gateway, Proxy Servers, SSL terminators, Application Servers, Database Servers, Caching Services, and other dependent services.

As in a modern microservice architecture each microservice runs in a separate container, deploying and managing these containers is challenging and might be error-prone.

Container orchestration solves this problem by managing the life cycle of a container and allows us to automate the container deployments.

It also helps in scaling the application: it can easily bring up additional containers whenever there is high load on the application, and scale down again by removing containers once the load goes down. This helps adjust cost based on requirements.

Also in some cases, it takes care of internal networking between services so that you need not make any extra effort to do so. It also helps us to replicate or deploy the docker images at runtime without worrying about the resources. If you need more resources, you can configure that in orchestration services and it will be available/deployed on production servers within minutes.

Example: Kubernetes.


Explain the API gateway and why one should use it?

An API Gateway is a service which sits in front of the exposed APIs and acts as an entry point for a group of microservices. The gateway can also hold minimal logic for routing calls to microservices and aggregating their responses.

A. A gateway can authenticate requests by verifying the identity of the user: it routes each request to the authentication service before routing it to the target microservice, passing the authorization details along in the token.

B. Gateways are also responsible for load balancing requests.

C. API Gateways are responsible for rate limiting certain types of requests, protecting themselves and downstream services from several kinds of attacks.

D. API Gateways can whitelist or blacklist the source IP addresses or domains which can initiate calls.

E. API Gateways can also provide plugins to cache certain types of API responses to boost the performance of the application.


How will you ensure data consistency in microservice based architecture?

One should avoid sharing a database between microservices; instead, APIs should be exposed to perform changes.

If there is a dependency between microservices, then the service holding the data should publish messages for any change in the data, which other services can consume to update their local state.

If strong consistency is required, then microservices should not maintain local state and should instead pull the data whenever required from the source of truth by making an API call.


What is event sourcing in microservices architecture?

In a microservices architecture, it is possible that, due to service boundaries, you often need to update one or more entities when the state of another entity changes. In that case, one publishes a message, and a new event gets created and appended to the already executed events. In case of failure, one can replay all the events in the same sequence to arrive at the desired state. You can think of event sourcing as your bank account statement.

You start your account with an initial deposit. Then all of the credit and debit events happen, and the latest state is generated by replaying all of the events one by one. In cases where there are too many events, the application can create a periodic snapshot of the state so that there is no need to replay all of the events again and again.
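A toy sketch of the bank-account analogy in Java, replaying events to rebuild the current state; the class and method names are illustrative, and a real event store would persist the events durably.

import java.util.ArrayList;
import java.util.List;

public class Account {

    // the append-only event log is the source of truth (+amount = credit, -amount = debit)
    private final List<Integer> events = new ArrayList<>();

    public void credit(int amount) {
        events.add(amount);
    }

    public void debit(int amount) {
        events.add(-amount);
    }

    // the current state is derived by replaying every event in order
    public int balance() {
        return events.stream().mapToInt(Integer::intValue).sum();
    }
}

For example, crediting 100, debiting 30 and crediting 10 replays to a balance of 80; a periodic snapshot would simply store that intermediate sum so that older events need not be replayed.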