Blog: MultiCloud (Continued), 02 Oct 2023
Nowadays, there is no way to avoid talking about microservices during architectural calls, especially if you want to design cloud or multi-cloud, modular, scalable, and multi-user applications. In this article, I will explain microservices and how to design applications for a multi-cloud scenario. I will walk you through the main microservice design patterns and wrap this information into an architectural example.
In this section, we will look at several cloud services that can be used to build a microservices architecture, so that we know which service or cloud component to choose during design and implementation.
Container engines are essential to building microservices, as they allow for separation, orchestration, and management of microservices across various cloud providers. Docker is a widely used container engine that lets you wrap each microservice into a container and either spin up the application directly or hand it to a cloud-based container orchestration system like Kubernetes (AKS, EKS). containerd serves a similar purpose but is not the same as Docker: it is a lightweight, lower-level container runtime (Docker itself builds on it) with a smaller footprint and attack surface.
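As a minimal sketch, wrapping a microservice in a container starts with a Dockerfile like the one below; the base image, file names, and port are hypothetical:

```dockerfile
# Hypothetical Python-based booking microservice
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# The service is assumed to listen on port 8080
EXPOSE 8080
CMD ["python", "main.py"]
```

The resulting image can be run locally with `docker run` or pushed to a registry for an orchestrator to pull.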
Kubernetes is a popular open-source system for orchestrating containerized applications, automating their deployment, and scaling them. Azure, AWS, and Google Cloud each offer a managed Kubernetes service (AKS, EKS, and GKE, respectively) that already includes load balancing, autoscaling, workload management, monitoring, and service mesh integration.
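As a hedged sketch of what such an orchestrated deployment looks like, a Kubernetes Deployment keeps a set of replicas of a containerized microservice running; the service name, image, and replica count below are invented for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: booking-service            # hypothetical microservice
spec:
  replicas: 3                      # Kubernetes keeps three instances running
  selector:
    matchLabels:
      app: booking-service
  template:
    metadata:
      labels:
        app: booking-service
    spec:
      containers:
        - name: booking-service
          image: myregistry.azurecr.io/booking-service:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Applied with `kubectl apply -f`, the control plane reconciles the cluster toward this declared state.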
Azure Service Fabric is a distributed systems platform for orchestrating microservices. For several years, its main goal was to provide the best support for .NET Core/Framework and Windows-based microservices, as can be seen in the service selection flowchart. Microsoft states that Service Fabric also supports other languages and platforms, including Linux.
A queue is a service based on the FIFO (first in, first out) principle, and all message bus systems build on it. For example, Azure offers Queue Storage, which suits simple business logic: if your architecture just needs centralized message storage, you can rely on a queue. AWS and Google Cloud offer Simple Queue Service (SQS) and Cloud Pub/Sub, respectively. These allow you to send, store, and receive messages between microservices and software components.
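The FIFO behavior that all of these queue services share can be illustrated locally with Python's standard library; the message names below are invented, and a real cloud queue would be accessed through the provider's SDK rather than an in-process object:

```python
from queue import Queue

# In-memory stand-in for a cloud queue service: producers enqueue,
# consumers dequeue in first-in, first-out order.
orders = Queue()

orders.put("book-flight")
orders.put("book-hotel")
orders.put("book-car")

# Messages come back in the order they were sent.
first = orders.get()   # "book-flight"
second = orders.get()  # "book-hotel"
```

The cloud services add durability, visibility timeouts, and at-least-once delivery on top of this basic ordering contract.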
Service Bus/Message Bus is based on the same approach as a queue but adds more features on top: a dead-letter queue, scheduled delivery, message deferral, transactions, duplicate detection, and many others. For example, Azure Service Bus and Amazon MSK (Managed Streaming for Apache Kafka) are highly available message brokers for enterprise-level applications that can deal with thousands of messages.
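To make two of those extra features concrete, here is a small in-memory sketch, not a real broker client, of duplicate detection and dead-lettering; the class name and retry threshold are invented for illustration:

```python
from collections import deque

class MiniBus:
    """Toy message bus sketch: dedupes by message id and dead-letters
    messages that fail processing too many times."""

    def __init__(self, max_attempts=3):
        self.queue = deque()
        self.dead_letter = deque()
        self.seen_ids = set()
        self.max_attempts = max_attempts

    def send(self, msg_id, body):
        # Duplicate detection: silently drop ids already seen.
        if msg_id in self.seen_ids:
            return False
        self.seen_ids.add(msg_id)
        self.queue.append({"id": msg_id, "body": body, "attempts": 0})
        return True

    def process(self, handler):
        msg = self.queue.popleft()
        try:
            handler(msg["body"])
        except Exception:
            msg["attempts"] += 1
            if msg["attempts"] >= self.max_attempts:
                self.dead_letter.append(msg)   # give up: dead-letter it
            else:
                self.queue.append(msg)         # requeue for a retry
```

A managed broker performs the same bookkeeping durably and across processes, which is what makes it suitable for enterprise workloads.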
Serverless allows us to build a microservices architecture with a purely event-driven approach. It is a cloud service unit available on demand, intended to let you build microservices directly in the cloud without thinking about which container engine, orchestrator, or cloud service to use. AWS and Azure offer Lambda and Azure Functions, respectively, and Google Cloud Functions follows the same principle.
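The serverless programming model can be sketched conceptually in plain Python: you write only the handler, and the platform wires it to an event source and dispatches events on demand. The registry, event names, and event shape below are illustrative, not a real SDK:

```python
# Conceptual sketch of event-driven serverless: handlers are registered
# against event types, and "the platform" (Lambda, Azure Functions,
# Cloud Functions) performs the dispatch and scaling for you.
handlers = {}

def on_event(event_type):
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

@on_event("booking.created")
def send_confirmation(event):
    # Business logic only; no server, container, or orchestrator code.
    return f"confirmation sent to {event['email']}"

def dispatch(event_type, event):
    # In a real platform this dispatch is done by the cloud runtime.
    return handlers[event_type](event)
```

The key point is that the unit of deployment is the handler itself, not a container or a VM.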
Now that we understand microservices architecture, its benefits, and how to refactor applications to support it, we can start developing an application with microservices. The best place to start building the app is with the microservices design patterns.
The microservices domain model (part of domain-driven design) is more than a pattern — it is a set of principles to design and scope a microservice. A microservices domain should be designed using the following rules:
A single microservice should be an independent business function. Therefore, the overall service should be scoped to a single business concept.
Business logic should be encapsulated inside an API that is based on REST.
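As an illustration of both rules, a hypothetical booking microservice keeps its business logic for one business concept behind a single REST-shaped entry point; all names and routes below are invented:

```python
import json

# Business logic for a single business concept (bookings), reachable
# only through the REST-style routing layer below.
_bookings = [{"id": 1, "destination": "Lisbon"}]

def list_bookings():
    return list(_bookings)

def handle_request(method, path):
    """Minimal REST routing: the only way into the business logic."""
    if method == "GET" and path == "/bookings":
        return 200, json.dumps(list_bookings())
    return 404, json.dumps({"error": "not found"})
```

In a real service the routing layer would be a web framework, but the scoping rule is the same: one business concept, one API boundary.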
A legacy system may have unmaintainable code and an overall poor design, yet we still rely on the data that comes from it. An anti-corruption layer provides a façade, or bridge, between the new microservices architecture and the legacy one, allowing us to avoid manipulating legacy code and to focus on feature development.
Figure 1: Anticorruption layer
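A minimal sketch of that façade in Python; the legacy field names and the target domain model are invented for illustration:

```python
# Anti-corruption layer sketch: the façade translates the legacy
# module's awkward data shape into the clean domain model that the
# new microservices expect.
def legacy_fetch_customer():
    # Imagine this comes from the legacy system and cannot be changed.
    return {"CUST_NM": "Ada Lovelace", "CUST_CTRY_CD": "UK", "ACTV_FLG": "Y"}

def customer_facade():
    raw = legacy_fetch_customer()
    # Translation happens here, so new services never see legacy fields.
    return {
        "name": raw["CUST_NM"],
        "country": raw["CUST_CTRY_CD"],
        "active": raw["ACTV_FLG"] == "Y",
    }
```

If the legacy format ever changes, only the façade needs updating; the new services keep their clean model.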
The circuit breaker provides a mechanism to stop a cascading failure and automatically recover to a normal state. Imagine we have services A and B that rely on data from service C, and a bug introduced into service C affects both A and B. In a well-designed microservices architecture, each service should be independent of the others.
However, dependencies may occur while refactoring from a monolith to microservices. In this case, you need to implement a circuit breaker to prevent a cascading failure. A circuit breaker acts as a state machine: it monitors the number of recent failures and decides what to do next, either allowing the operation to proceed (closed state) or returning an exception immediately (open state), with a half-open state used to probe whether the downstream service has recovered.
Figure 2: Circuit breaker states
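Those states can be sketched as a small state machine; the threshold and timeout below are illustrative, and production resilience libraries add more nuance:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: closed -> open after repeated
    failures, open -> half-open after a cooldown, half-open -> closed
    on one success (or back to open on failure)."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, fn):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.reset_timeout:
                self.state = "half-open"   # allow a single probe request
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"        # trip the breaker
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        self.state = "closed"
        return result
```

Callers wrap each downstream request in `breaker.call(...)`, so once the breaker trips, services A and B fail fast instead of piling load onto the broken service C.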
A service mesh implements the communication layer for microservices architecture. It ensures the delivery of service requests, provides load balancing, encrypts data (with TLS), and provides the discovery of other services. A service mesh also:
Provides circuit breaker logic
Provides the process to manage traffic control, network resilience, security, and authentication
Has a proxy that integrates with microservices using a sidecar pattern
A service mesh allows you not only to manage services but also to collect telemetry data and send it to the control plane. Istio is the most popular service mesh framework for managing microservices in Kubernetes.
Figure 3: Service mesh algorithm
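As a hedged example of service mesh traffic control, an Istio VirtualService can split traffic between two versions of a service; the service name and weights below are hypothetical, and the v1/v2 subsets would additionally need a matching DestinationRule:

```yaml
# Hypothetical canary rollout: the sidecar proxies route 90% of traffic
# to v1 of the booking service and 10% to v2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: booking-service
spec:
  hosts:
    - booking-service
  http:
    - route:
        - destination:
            host: booking-service
            subset: v1
          weight: 90
        - destination:
            host: booking-service
            subset: v2
          weight: 10
```

Because the routing lives in the mesh configuration, the split can be changed without touching or redeploying the services themselves.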
A sidecar is a utility application that is deployed alongside the main service. A sidecar helps with:
Controlling connection to the service
Figure 4: Sidecar
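A hedged sketch of the sidecar pattern as a Kubernetes Pod, with a log-forwarding agent running next to the main container and sharing its log volume; the image names are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: booking-service
spec:
  containers:
    - name: app
      image: myregistry.azurecr.io/booking-service:1.0   # main service
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-agent                                    # the sidecar
      image: fluent/fluent-bit:2.2
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}
```

The service mesh proxies mentioned above are deployed in exactly this way: one proxy container injected beside each service container.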
To demonstrate the power of microservices, we will migrate our monolithic travel application (see Figure 1) to the microservices architecture.
We will use the serverless approach and Azure cloud.
API Gateway — Using API Management to expose the endpoints of the back-end services so the client application can consume them securely.
Entry points — The public-facing APIs that the client application will be using, powered by Azure Functions responding to HTTP requests.
Async queue — Messaging service to handle services intercommunication and pass along information and data between the different services, represented by Azure Event Grid.
Backend services — The services that are directly operating with the data layer and other components of the solution, isolated from the rest, and easily replaceable if needed.
Figure 5: Travel booking microservices
Building highly available, scalable, and performant applications can be challenging. Microservices architecture gives us the option not only to build independent services but also to create a dedicated team to support each service and to introduce the DevOps approach. Microservices, supported by all the popular cloud providers, let us build a multi-cloud architecture. This can save money, as providers have different pricing strategies, but be sure to choose the service best suited to each microservice's domain. For example, we can use AKS with an integrated service mesh, or a serverless approach based on AWS Lambda. Multi-cloud also allows us to apply cloud-native DevOps to deliver services independently.
This is an article from DZone’s 2022 Microservices and Containerization Trend Report.