How I learned to work with microservices: Part I – The opportunity

After 15 years of experience as a full-stack developer of Windows applications in C# and C/C++, I wanted to try something new. In 2018, I spent six months developing my first web module with ASP.NET Core 2.1, IIS, and Angular. After that, I went back to working on Windows applications for a while. In September 2019 (last year), I was given an opportunity to work with microservices. So I decided to write about what that experience meant for me and the lessons I learned.

Generally, I wouldn’t say I am passionate about new technologies. I’m more attracted to the complex scenarios a client needs to solve, and curious about the business side of building a new solution. So, when I was offered the opportunity to work on a microservices project, I wasn’t all that enthusiastic about it.

Still, I was aware that the software market and its technologies are always evolving, and that as a programmer I needed to keep up. That’s why I decided to accept the challenge and step out of my comfort zone.

This first part of our microservices saga will walk you through the following topics:

Technologies, libraries, platforms

Aside from the microservices themselves – based on ASP.NET Core 3.0 and IIS – we agreed to use, for the first time, several other new technologies and platforms: Team Build (the Microsoft TFS continuous integration tool), a preview version of Visual Studio 2019 that was compatible with .NET Core 3.0, NuGet packages (not only consuming existing ones but also creating our own), and the Lamar dependency injection library.
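For readers unfamiliar with Lamar, its registration API looks roughly like this. This is a minimal sketch, not code from our project; the service names are hypothetical:

```csharp
using Lamar;

// Hypothetical service abstraction and implementation, for illustration only.
public interface IMailScanner { void Scan(); }

public class ImapMailScanner : IMailScanner
{
    public void Scan() { /* ... */ }
}

public static class Program
{
    public static void Main()
    {
        // Lamar containers are configured through a registry lambda,
        // in the StructureMap style it inherited.
        var container = new Container(x =>
        {
            x.For<IMailScanner>().Use<ImapMailScanner>().Singleton();
        });

        var scanner = container.GetInstance<IMailScanner>();
        scanner.Scan();
    }
}
```

Lamar also plugs into the standard `IServiceCollection` registrations of ASP.NET Core, which is what made it attractive for service projects like ours.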

Also, we took advantage of the fact that another team in RomSoft had recently developed an infrastructure application for microservices – we got to use it and to benefit from their experience.

The application facilitated, in a centralized way, operations like registration, logging, and inter-service communication, and also included a web interface for managing basic microservice operations: displaying status, displaying log messages, activating/deactivating debug logs, and reloading caches. For inter-service communication, the application used the RabbitMQ message broker platform and library, widely considered performant and stable by the programming community.
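To give a flavor of the messaging style involved, publishing to a queue with the RabbitMQ .NET client looks roughly like this. This is a minimal sketch assuming the RabbitMQ.Client 6.x API; the host and queue names are made up, and our infrastructure application wrapped all of this behind its own abstractions:

```csharp
using System.Text;
using RabbitMQ.Client;

public static class Publisher
{
    public static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };

        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        // Declare a durable queue so messages survive a broker restart.
        channel.QueueDeclare(queue: "email-processing",
                             durable: true,
                             exclusive: false,
                             autoDelete: false,
                             arguments: null);

        var body = Encoding.UTF8.GetBytes("{\"messageId\": 42}");
        channel.BasicPublish(exchange: "",
                             routingKey: "email-processing",
                             basicProperties: null,
                             body: body);
    }
}
```

A consumer on the other side subscribes to the same queue and acknowledges messages as it processes them, which is what decouples the microservices from one another.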

The version of the infrastructure application that we decided to use had already been tested and run in production for another product. Reason enough for us to consider that we had the situation under control and that we wouldn’t hit too many unexpected bugs.

Reverse engineering


The application we were going to build was in fact a re-implementation of an older one, written in VB.NET and Windows Forms. That one had its business code mixed across different layers, most of it in the UI layer – more specifically, in a single class.

The application had no installer, and its configuration was probably the biggest, most constant challenge the client faced, as it had to read simultaneously from four or five external configuration sources: the Windows registry, a .config file, an .ini file, an ODBC configuration, and the database. Not to mention that some of the settings were hard-coded.
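For contrast, .NET Core’s configuration system can consolidate several such sources behind a single interface, which is one of the things the rewrite would later benefit from. A minimal sketch using Microsoft.Extensions.Configuration (the file names and key are illustrative; INI support comes from the separate Microsoft.Extensions.Configuration.Ini package):

```csharp
using Microsoft.Extensions.Configuration;

public static class ConfigDemo
{
    public static void Main()
    {
        // Later sources override earlier ones; everything is read
        // through one IConfiguration instead of scattered ad-hoc readers.
        IConfiguration config = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: true)
            .AddIniFile("legacy-settings.ini", optional: true)
            .AddEnvironmentVariables()
            .Build();

        var smtpHost = config["Mail:SmtpHost"]; // null if not configured anywhere
    }
}
```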

Requirements documentation existed, but it covered only about 85 to 90% of the actual functionality. For this reason, we couldn’t trust it entirely, and we had to go back to the code to confirm each piece of behavior. Practically, we were forced to do reverse engineering.

Still, one advantage on our side was the experience of the project manager, a colleague who had taken part in the early development and maintenance of the application. She knew about many problems that had arisen at the client’s side over the years, and about the business challenges faced along the way. She also was – and still is – a detail-oriented person with good communication skills, so from this perspective, a good collaboration was expected.

I should mention that we were a team of three developers, one tester, and the project manager I mentioned before – who had advanced technical knowledge, but whose involvement in this project was almost exclusively on the requirements side, not in the actual development.

Continuously running server application

A huge challenge, one I had never fully appreciated, was the very nature of the application: a server that had to process e-mails continuously, without frequent intervention from the user. The old application had a graphical user interface and was launched every 10 minutes by a Windows scheduled task. We considered that execution model unnatural and outdated.

The new application not only had to process data without any external intervention, but in its first stage it would have no graphical interface at all. The only monitoring available was that of the microservices themselves, done through the infrastructure application. Testing was to be done by monitoring, in real time, the e-mail servers, the database, the RabbitMQ interface, and the log files. This was perhaps the greatest challenge – out of the many that were yet to come.

Study and analysis stage

We started with two activities in parallel: studying the principles of working with microservices on one hand, and reverse engineering the old application, in as much detail as possible, on the other. As we reverse engineered, we also created UML diagrams to help us understand the functionality as deeply as we could.

We split the application into four parts among the team members, to ease the analysis process. These two activities took us about two weeks.

Architecture and design


When designing the architecture, we decided to split the application into eight functional microservices, two auxiliary ones, and one more for database maintenance. We created UML diagrams to describe the internal connections between the microservices, and between the microservices and the external resources.

We wrote a document describing the architecture. Then we met to discuss it together with technical leads and seniors from other teams, in order to validate it. That meeting went well, even though we collected various opinions and suggestions on how the architecture could be improved. We discussed those suggestions again within the team and concluded that most of them were good ideas in principle, but we wouldn’t adopt any of them, for specific reasons that we didn’t have time (or forgot) to communicate to the others. The general conclusion, however, was that we were on the right track.

Writing the code outline

Afterwards, we started creating the structure of the microservices’ solutions and projects. We strove to satisfy several constraints simultaneously: correctly referencing the NuGet packages from the infrastructure application, digitally signing the assemblies and versioning them according to Microsoft standards, and properly separating the code into application layers.

Additionally, we decided to follow some very strict rules for folder structure, class names, file names, and namespaces, and to reuse code by moving it into the NuGet packages. For some developers, such rules come naturally, but for us, following them required strong discipline.

The database issue


A detail we still needed to cover was maintaining the database. The old application worked on an extremely chaotic database that was badly designed and hard to maintain.

On the one hand, there were a bunch of other applications writing to it, and that data was then used as configuration or routine input data by our application. We had no idea how the writing and reading processes were synchronized, because the client didn’t offer us this piece of information – and we did not think to ask.

On the other hand, the names of the database tables and columns were superficially chosen and not always intuitive.

Another strange issue was that the database had no primary keys, no indexes, and no strongly defined relationships between tables. Relationships existed, but only at the logical level, without foreign keys.

Finally, the application’s users had created manual copies of some tables, in the same database – probably for backup or testing purposes – but without telling the development team.

The project manager knew about this, and about the purpose of each of these tables, which proved extremely useful. She compiled a list of all configuration and routine tables for us. Going through this list, we concluded that most of the routine output tables were used for logging.

Given the chaos in that database, a colleague suggested simply creating a new, better-organized one for the new application. The new database would have its own logging tables, so we would not need to write into the old ones; in fact, only one table in the old database still needed to be written to.

The configuration tables from the old database would still be read. Also, the configuration data that the old application kept in multiple sources (the registry, files, etc.), some of it hard-coded, would all be moved into the new database and read from there.

Maintaining the internal database

Next, we needed to decide how to maintain our internal database.

Considering that we were working with microservices, we weren’t sure whether to create a single internal database for all microservices, or to split the data into several. Microservices principles recommend that each microservice administer its own resources; however, there is no strict rule making one database per microservice mandatory. We read multiple articles on the subject, and practically all of them said this was a subjective decision, specific to each project. So, considering ease of maintenance, and acknowledging that everything was new to us, we finally decided on a single database.

Then we realized there were two alternatives for database maintenance: the migration system offered by Entity Framework, or a custom SQL script execution and versioning mechanism, first implemented successfully by colleagues from another team and also used successfully by us in another application that our team maintained and developed.

One of our colleagues spent about a week studying the Entity Framework option. The conclusion was that it didn’t suit our needs. The main reason was that it implied centralizing all the entity classes (DTOs) belonging to a database in one project. This conflicted with our prior decision to unify all internal tables in one database: we had multiple solutions – one per microservice – and we would have had to create such a project in each of them. And even then, in several cases more than one microservice (project) would access the same table.

So, in the end, we went for the custom solution.
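We haven’t shown our colleagues’ implementation here, but the general idea behind a script-based migration mechanism can be sketched as follows. This is a hypothetical, simplified version using plain ADO.NET; the table and folder names are made up:

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

public static class ScriptMigrator
{
    // Applies every .sql file from scriptsFolder, in name order,
    // skipping scripts already recorded in a bookkeeping table.
    public static void Migrate(string connectionString, string scriptsFolder)
    {
        using var connection = new SqlConnection(connectionString);
        connection.Open();

        Execute(connection,
            @"IF OBJECT_ID('AppliedScripts') IS NULL
              CREATE TABLE AppliedScripts (Name NVARCHAR(260) PRIMARY KEY)");

        foreach (var path in Directory.GetFiles(scriptsFolder, "*.sql").OrderBy(p => p))
        {
            var name = Path.GetFileName(path);

            using var check = new SqlCommand(
                "SELECT COUNT(*) FROM AppliedScripts WHERE Name = @name", connection);
            check.Parameters.AddWithValue("@name", name);
            if ((int)check.ExecuteScalar() > 0) continue; // already applied

            Execute(connection, File.ReadAllText(path));

            using var record = new SqlCommand(
                "INSERT INTO AppliedScripts (Name) VALUES (@name)", connection);
            record.Parameters.AddWithValue("@name", name);
            record.ExecuteNonQuery();
        }
    }

    private static void Execute(SqlConnection connection, string sql)
    {
        using var command = new SqlCommand(sql, connection);
        command.ExecuteNonQuery();
    }
}
```

A real implementation would also need to handle transactions, `GO` batch separators, and ideally script checksums, but the bookkeeping-table pattern above is the core of the approach.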

Next up

Thanks for reading this far. I don’t want to leave you hanging, so here’s more: Working with microservices: Part II – A string of challenges and The microservices saga: Part III – Out of the woods. I hope you enjoy them 🙂