9 things to consider when designing microservices
Microservices have become the de facto architectural approach over the last couple of years, and for good reason. I'm not going to tell you why microservices should be used; I assume you already know that. What I'd like to share are best practices for designing and deploying microservices, with some specific recommendations for AWS.
So let's get to it!
Things to consider when designing microservices
Stateless
Microservices should never rely on data stored in memory across multiple requests. Traditional applications often store session data in memory for fast access during the life of the session. For microservices this is an anti-pattern. Any “session” data that needs to be used by other requests must be saved in a persistent, non-ephemeral store or a distributed cache. This matters because if your service instance fails (and instances often do fail in the cloud) and the next request goes to a different instance (thanks to load balancers or service discovery), that instance can simply retrieve the “session” data from the database or distributed cache and carry on.
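As a minimal sketch of the pattern, the snippet below keeps session state in an external store rather than in instance memory. A plain dict stands in for a distributed cache such as Redis or ElastiCache; the function names are hypothetical, not from any particular framework.

```python
import json

# Stand-in for a distributed cache (e.g. Redis / ElastiCache).
# In production this dict would be a networked store shared by
# every instance of the service.
_cache = {}

def save_session(session_id, data):
    """Persist session state externally so any instance can read it."""
    _cache[session_id] = json.dumps(data)

def load_session(session_id):
    """Retrieve session state; works even if the original instance died."""
    raw = _cache.get(session_id)
    return json.loads(raw) if raw is not None else {}

# Instance A handles the login request...
save_session("sess-42", {"user": "alice", "cart": ["sku-1"]})
# ...instance B handles the next request and still sees the same state.
restored = load_session("sess-42")
print(restored["user"])  # prints alice
```

The point of the indirection is that no request ever depends on which instance served the previous one.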
Single responsibility
A microservice should be responsible for a specific business feature or bounded context (e.g. Sales Orders, Customers, Market Data). It's impossible to know in advance exactly how big or small a service should be, which is why developing microservices (like any other software) is an iterative process. Single responsibility encapsulates the business logic in a single service and allows that service to change, evolve and scale independently of other services.
No DATA SHARING
To ensure accuracy and consistency of data, data managed by a microservice should never be accessed directly by other services. In other words, if the Sales Orders service needs Customer data, it contacts the Customers service for that data instead of reading directly from the Customers database. In some cases the Sales Orders service may subscribe to Customer events and build its own internal view of Customer data as a performance optimization, but it still relies on the Customers service for all updates.
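The internal-view idea can be sketched as follows: the Sales Orders service maintains a local, read-only projection of customer data that is kept fresh by Customer events, so it never queries the Customers database directly. The event shapes and class name here are hypothetical, for illustration only.

```python
# The Sales Orders service's local view of customers. The Customers
# service remains the single source of truth; this view is only a
# cache rebuilt from the events it publishes.
class CustomerView:
    def __init__(self):
        self._customers = {}  # customer_id -> latest known attributes

    def apply(self, event):
        """Update the local view from a published Customer event."""
        if event["type"] in ("CustomerCreated", "CustomerUpdated"):
            self._customers.setdefault(event["customer_id"], {}).update(event["data"])

    def get(self, customer_id):
        return self._customers.get(customer_id)

view = CustomerView()
# Events as they might arrive from the Customers service.
view.apply({"type": "CustomerCreated", "customer_id": "c1",
            "data": {"name": "Acme Corp", "tier": "gold"}})
view.apply({"type": "CustomerUpdated", "customer_id": "c1",
            "data": {"tier": "platinum"}})

print(view.get("c1"))  # prints {'name': 'Acme Corp', 'tier': 'platinum'}
```

Sales Orders reads only from its own view; all writes still flow through the Customers service.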
Prefer asynchronous service-to-service communication
Another important characteristic of microservices is the physical decoupling of service-to-service communication. This can be achieved with messaging platforms. It requires a different design approach than traditional request/response communication, but it ensures that failures in other services won't affect the performance and availability of the calling service.
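To make the decoupling concrete, here is a toy sketch where a local queue stands in for a messaging platform (e.g. SQS, SNS or Kafka). The publisher returns immediately and never waits on, or even knows about, the consumer; the function names are assumptions for this sketch.

```python
import queue

# Local stand-in for a message broker; in production this would be a
# managed messaging service, durable and shared across services.
broker = queue.Queue()

def publish(topic, message):
    """Fire-and-forget: the caller is not blocked by any consumer."""
    broker.put((topic, message))

def drain(handler):
    """Consumer side: process whatever has accumulated on the queue."""
    processed = []
    while not broker.empty():
        topic, message = broker.get()
        processed.append(handler(topic, message))
    return processed

# Sales Orders publishes and moves on, even if the downstream
# Shipping service happens to be down at this moment.
publish("orders", {"order_id": "o-1", "status": "created"})
publish("orders", {"order_id": "o-2", "status": "created"})

# Later (or elsewhere), the consumer catches up on the backlog.
results = drain(lambda topic, msg: msg["order_id"])
print(results)  # prints ['o-1', 'o-2']
```

Because the broker buffers messages, a slow or failed consumer degrades only its own processing, not the publisher's availability.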
Idempotency
The more moving parts a platform has, the more chances there are for failure. Unlike traditional monolithic applications, microservices do not use distributed transactions, so it is possible for parts of a logical transaction to succeed while others fail. If the transaction is retried, we must ensure idempotency for the parts that already succeeded.
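A common way to achieve this is an idempotency key: the handler records which request ids it has already applied, so a retry of a request that partially succeeded is a no-op for the parts that went through. In this sketch the seen-ids map is in memory; in production it would live in a durable store (e.g. DynamoDB). All names are illustrative.

```python
# request_id -> result of the first successful execution
_applied = {}
balance = {"acct": 100}

def credit(request_id, account, amount):
    """Safe to retry: the same request_id is only applied once."""
    if request_id in _applied:
        return _applied[request_id]      # replay: return the cached result
    balance[account] += amount           # first time: apply the change
    _applied[request_id] = balance[account]
    return balance[account]

credit("req-1", "acct", 50)
credit("req-1", "acct", 50)   # retry of the same logical request
print(balance["acct"])        # prints 150, not 200
```

The key must identify the logical request (not the HTTP attempt), so that every retry path carries the same id.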
DO NOT USE ephemeral storage, except for per-request temporary needs
We all remember the days when we saved data to the file system, in a nice folder structure, so our application had the fastest possible access to its files. For microservices this is a big “no”. It goes together with the stateless pattern, and for the same reasons. The only exception is saving data temporarily during the lifespan of a single request.
Service aggregator
One consequence of the single responsibility model is that a web or mobile application may need to make multiple requests to gather all the data for a specific page. It's not unusual to call 5, 10 or 15 endpoints to collect all the information. A good design pattern for this use case is the “service aggregator”. The mobile application makes a single call to an aggregator service, which then calls those 5, 10 or 15 services (preferably in parallel), collects and aggregates the results into a single view object, and returns it to the mobile application. This makes UI development much simpler and improves overall response time, because instead of making 10 service calls over the Internet, those 10 calls are made within your network (e.g. an AWS VPC).
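The fan-out-in-parallel part of the aggregator can be sketched with a thread pool. The three downstream functions here are stand-ins for HTTP calls to other microservices inside the VPC; their names and payloads are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical downstream services; each would really be an HTTP
# request to another microservice inside the network.
def get_profile(user_id):  return {"name": "alice"}
def get_orders(user_id):   return [{"order_id": "o-1"}]
def get_recs(user_id):     return ["sku-7", "sku-9"]

def aggregate(user_id):
    """Fan out to the downstream services in parallel, then merge
    the results into a single view object for the client."""
    with ThreadPoolExecutor() as pool:
        profile = pool.submit(get_profile, user_id)
        orders = pool.submit(get_orders, user_id)
        recs = pool.submit(get_recs, user_id)
        return {
            "profile": profile.result(),
            "orders": orders.result(),
            "recommendations": recs.result(),
        }

page = aggregate("u-1")
print(page["profile"]["name"])  # prints alice
```

Because the calls run concurrently, the page latency approaches that of the slowest downstream service rather than the sum of all of them.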
CQRS
Many applications have a disproportionate number of writes vs. reads. Traditional software design puts write and read code within the same service, so the writes may slow down the reads and vice versa. The best way to remove this risk is the CQRS (Command Query Responsibility Segregation) design pattern. As the name suggests, it separates commands (write/update/delete) from queries (reads). This separation lets you deploy and scale a heavy read service independently of a light write service, or vice versa. It is also an important design approach for event sourcing.
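A minimal CQRS sketch, with both sides collapsed into one file for brevity: commands mutate a write model, while queries read from a separately shaped read model. In a real deployment the two sides would be separate services with their own data stores, and the propagation step would usually be asynchronous; all names here are illustrative.

```python
write_model = {}   # order_id -> full order record (normalized)
read_model = []    # denormalized rows, shaped for fast queries

def handle_create_order(order_id, customer, total):
    """Command side: validate and persist the change."""
    write_model[order_id] = {"customer": customer, "total": total}
    # Propagate to the read side (often done asynchronously via events).
    read_model.append({"order_id": order_id, "customer": customer, "total": total})

def query_orders_for(customer):
    """Query side: reads never touch the write model."""
    return [o for o in read_model if o["customer"] == customer]

handle_create_order("o-1", "acme", 250)
handle_create_order("o-2", "globex", 90)
print(query_orders_for("acme"))
```

Because the query side owns its own model, it can be indexed, cached and scaled for read traffic without touching the write path.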
Event sourcing
Have you ever had a requirement to keep track of all activity within the platform and allow a complete audit of that activity? Is your business constantly changing, and do you struggle to refactor the existing database structure and code? Did you find a flaw in your code and now have 6 months' worth of data that was incorrectly derived because of it? These are all legitimate reasons to consider event sourcing for your next application. Event sourcing stores all platform events in a read-only event store (e.g. a database) and uses those events to construct the materialized views that serve various parts of the application. Storing all events (i) allows easy auditing of all business activities; and (ii) lets you replay events from a specific date if you need to implement new business logic or fix a bug in your code. A good practice with event sourcing is to use CQRS, because event sourcing by its nature separates events from data views.
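The replay idea can be shown in a few lines: an append-only event store is the source of truth, and a materialized view is just a function over the events. If a projection turns out to be buggy, you fix the function and replay the full history to re-derive correct state. The event shapes and names below are assumptions for the sketch.

```python
event_store = []  # append-only; events are never updated or deleted

def record(event):
    event_store.append(event)

def project_balances(events):
    """Materialized view: account balances derived from the events.
    Rebuildable at any time by replaying the full history."""
    balances = {}
    for e in events:
        delta = e["amount"] if e["type"] == "Deposited" else -e["amount"]
        balances[e["account"]] = balances.get(e["account"], 0) + delta
    return balances

record({"type": "Deposited", "account": "a1", "amount": 100})
record({"type": "Withdrawn", "account": "a1", "amount": 30})
record({"type": "Deposited", "account": "a1", "amount": 5})

print(project_balances(event_store))  # prints {'a1': 75}
```

Note that the store also doubles as the audit log: every business fact is a recorded event, not a destructive update.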
Ways to deploy microservices on AWS
Now let's review some of the recommended ways to deploy microservices on AWS.
docker
This should go without saying: deploy all microservices as Docker containers, regardless of the underlying container orchestration service. This allows easy testing on local development machines and easy deployment in lower and production environments, while ensuring the same code is tested and deployed in each case. So, regardless of the following recommendations, make sure to build a Docker image for your microservice, test that image locally, and push it to your private Docker registry (e.g. ECR - EC2 Container Registry).
ec2 / auto scaling group / elb
A more traditional and straightforward way to deploy coarse-grained microservices is via an AWS Auto Scaling group. This entails a few simple steps:
- Create an AWS ELB (Elastic Load Balancer) to provide a static DNS name and multi-AZ load balancing.
- Create a simple bash script as your EC2 “user data”. In a nutshell, this script will update your Amazon AMI with the latest OS patches, pull the appropriate Docker image and run it.
- Create an AWS Launch Configuration and use the above “user data”. Use the smallest EC2 instance type that fits your use case, to provide a consistent experience and utilize most of the instance's resources.
- Create an AWS Auto Scaling Group with the above Launch Configuration and ELB. It will launch and maintain the appropriate number of instances, based on the scaling conditions you set for the service.
ecs / docker / alb
It will usually be more cost effective and easier to manage your microservices with AWS ECS (EC2 Container Service). ECS is a highly scalable, high performance Docker container management service. ECS runs on a cluster of instances, allowing you to deploy multiple Docker containers across multiple instances. This is especially beneficial if your microservices are more granular. ECS integrates with other AWS services out of the box. An additional benefit of ECS is the ability to schedule containers and run batch jobs.
What I hope you take away from this post is that a good microservices architecture requires changing the way applications are designed and deployed. And regardless of which cloud you use, follow best practices for microservice design and deploy using cloud-specific services.
Igor Royzis is a co-founder and Chief Architect of Kinect Consulting. Igor specializes in highly complex solutions architecture and strategy, with specific emphasis on cloud-native microservices, event sourcing, big data and analytics. Igor is an AWS Certified Architect Professional.