I recently wrote about the differences between Cloud Native and cloud native. In that post I explained that Cloud Native is an approach to developing applications that uses microservice architecture. In this post I’ll talk about microservices.
In the Beginning…
To understand microservices we first have to understand the nature of software development and how it has changed in recent years.
In the beginning god invented servers…
Wait, what about mainframe?
Mainframe is like the shark or alligator: prehistoric, but evolved over millions of years into the pinnacle of what it is today, and now happily swimming along at the apex without any predators. Mainframe enables services that need masses of compute power, typically large-scale transaction processing such as financial services, supporting thousands of users or concurrent connections. A very specific use case. Microservices were not born through the evolution of software development for mainframe.
So, in the beginning god invented servers. First they were physical tin with an Operating System (OS). They provided an environment that had all the functions necessary to support one or more applications.
Servers were, and still are, a far more cost-effective way than mainframe of providing applications that don’t need lots of compute power to the masses. A different use case to mainframe. Use of servers exploded and they became the backbone of any and all business networks. This is still true today.
Applications were, and often still are, built as a self-contained tower on a server. This is typically described as a monolithic application. Some developed into two- or three-tier apps with a front end (often web), a middle tier (processing) and a back-end database system.
Then came virtualisation: instead of every server being physical, servers became virtual, and you could run hundreds of virtual servers on a single physical server. The OS and application architecture, however, remained the same.
Then Came Distributed Architecture
By this time, applications were becoming more and more comprehensive, providing ever more complex functionality in line with demand from an ever more connected user base. This imposed a glass ceiling on what the development team responsible for the app could achieve. Sure, you could build lots of dev teams, each working on their part of the overall app, but doing so introduced very long release cycles, because compiling and integration testing take time, which in turn raised the risk of bugs. It was all one codebase that lots of people worked on, and it had to be committed and tested to destruction.

Either you had multiple teams, each committing to the same codebase and hoping their commits didn’t break the application, or you had fewer people each having to hold more code in their heads. Either way, the release cycles were extremely long.
Thus microservices were born. Microservice architecture enables teams of developers to focus on, and hold in their heads, everything necessary for one function of an application: instead of being monolithic, the application is now distributed across much smaller parts that are loosely coupled together. Each part is a self-contained mini app, or microservice, that, combined with the surrounding microservices, delivers the functions and features of the application. Developers now have a much narrower focus and only have to worry about connecting to other parts of the app, not tightly integrating with them. Code commits are less risky because they are only relevant to one microservice.
You’re probably thinking right now, this doesn’t sound much different to multiple teams working on a monolithic app. The following illustration should help.

The picture on the left depicts how tightly integrated the code is in a monolithic app. All code has to be committed back to the master branch in the repository. Dev teams are aligned to the whole app, or a portion of it, and all of them have to coordinate their code commits collectively.
The picture on the right depicts loosely coupled microservices. Each service has its own codebase and is committed independently of its neighbours, significantly reducing the risk of human error introducing bugs, or of a serious fault affecting other parts of the application. Dev teams can be aligned directly to a service, making scaling much more practical.
So, microservices were invented to solve a human problem, rather than a technical problem.
Complexity, New Thinking and New(ish) Tech
Microservice architecture was first talked about at a cloud computing conference in 2005, but only became prevalent with the wider development community around 2011.
Microservices significantly increase the velocity of releases because the risk of each release is reduced. Instead of your code having to be compiled and integration-tested with everyone else’s who works on the app, that’s now only true of the microservice you are working on, since services are loosely coupled and communicate through APIs.
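To make the loose coupling concrete, here is a minimal sketch of the idea using only Python’s standard library. The service and item names ("pricing", "widget") are hypothetical, and a real microservice would use a proper framework; the point is simply that a consuming service depends only on the HTTP API contract, never on the other service’s codebase.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A hypothetical "pricing" microservice: it owns one narrow function
# (looking up a price) and exposes it over a small HTTP API.
PRICES = {"widget": 9.99, "gadget": 24.50}

class PricingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expected path shape: /price/<item>
        item = self.path.rsplit("/", 1)[-1]
        if item in PRICES:
            body = json.dumps({"item": item, "price": PRICES[item]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown item"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Suppress per-request logging to keep output clean.
        pass

# Bind to port 0 so the OS picks a free port, then serve in the background.
server = HTTPServer(("127.0.0.1", 0), PricingHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service (say, a "checkout" service) calls the API. It needs
# only the URL and the response shape -- nothing is compiled together.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/price/widget") as resp:
    result = json.loads(resp.read())

server.shutdown()
print(result)  # {'item': 'widget', 'price': 9.99}
```

Because the only shared artifact is the API contract, the pricing team can rewrite their service internally and redeploy without the checkout team recompiling or retesting anything.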
Great! We’ve unlocked the ability to release in days instead of years. Watch our business take off!
However, from an infrastructure and operations perspective we’ve just hiked up the complexity tenfold. Think about it for a second: server systems we know very well.
- We know how to network them
- We know how to control network traffic
- We know how to patch them
- We know what DR needs to look like
- We understand how to control user access for different roles
- We understand the file systems
- We understand the scope of antivirus
- We understand how to monitor their health
- We understand how to harden them
- and the list goes on
A typical application might utilise two or three servers, and as server architecture hasn’t changed much in decades we can keep applying the same management knowledge. With microservice architecture, an app can easily be spread across ten microservices, and each of those needs to be managed from a security, backup, access control, networking, storage and monitoring perspective. Added to that, a server is always a server, but one microservice can differ wildly from the next, so each can have very different considerations and requirements. On top of all that, applications are increasingly internet facing, exposing public endpoints or connecting to them.
Getting brain ache yet?
Containers
Containers have existed in some form since the late 1970s, when chroot was baked into Unix. Their use, though, has only become prominent in the last decade, particularly with the advent of Docker (2013) and Kubernetes (2014).
Container systems are like mini servers. They carry just enough of an operating system and functionality to support their feature set, and their features are fixed at creation. If you want your containers to have new features, you have to update your stored images, delete your existing containers and rebuild from the new images. That sounds crazy on the face of it, but because containers are purposefully small you can do this in minutes, unlike servers, where the same action could take days. It’s a halfway house between servers and fully cloud native technology (not to be confused with Cloud Native).
Container images are like server images: they have everything they need to support their functionality, and they lend themselves well to automation.
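As a rough sketch of that rebuild cycle with Docker (the image and container names here are purely illustrative), the workflow looks like this:

```shell
# Build a new image from an updated Dockerfile
docker build -t myapp/pricing:v2 .

# Remove the existing container rather than patching it in place
docker rm -f pricing

# Recreate the container from the new image -- done in minutes
docker run -d --name pricing -p 8080:8080 myapp/pricing:v2
```

The container itself is disposable; the image is the thing you maintain, which is exactly why the whole cycle automates so well.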
With containers being so purposefully tiny (a fraction of the size of a server), it’s significantly cheaper and more efficient to spread your application across three dozen containers than across two or three servers. Not only that, you can also group containers with the same features together in clusters to support the different features of your application. Sound familiar?
So ultimately containers are a natural infrastructure technology for microservices, and the management complexities of the two go hand in hand.
Summary
Microservices help increase the velocity of releases by breaking a monolithic application down into loosely coupled services that act like miniature apps themselves. Dev teams can then be aligned in small groups to a particular microservice without having to worry about code integration with other teams. Each service ends up with its own release cycle, meaning different teams can move at different velocities and only need to hold in their heads the knowledge relevant to the microservice they are assigned to. This also makes scaling teams with your app much easier.
However, microservices bring new challenges for operations teams, infrastructure architects and security professionals as the app architecture is “network distributed”, increasingly interacts with the internet, and breaks away from traditional skills.
Servers are definitely becoming history for multiple reasons, and in cloud computing much effort has gone into demystifying cloud native services and re-skilling for them. Many apps are now created directly in the cloud, or are transformed, rather than simply lifted, and optimised on their way there. Cloud computing vendors have also created a number of supporting services, such as Azure Defender, Azure Security Center, Traffic Manager, DDoS protection and policy as code, to help manage cloud native applications.