Microservices or modular systems? How a customer can choose an IT architecture for a product

Microservice and modular systems are two approaches to IT solution architecture.

When working with modules, we extend the boxed version of an existing IT product.

By the boxed version we mean a monolith, a ready-made system with a core that is delivered to all customers the same way, "as is."

The customization consists of creating modules that supply the missing functionality.

We obtain new modules by reusing parts of the monolith (core or other modules).
Business logic is written inside the monolith: for a program (application, site, portal) there is one entry point and one exit point.

When working with microservices, we create an IT product from scratch, composing it from “bricks” - atomic microservices that are responsible for a separate small process (send a letter, receive order information, change the order status, create a client, etc.).
A set of such blocks is combined by business logic into a common system (for example, using BPMS). Despite the presence of connections, each block is autonomous and has its own entry and exit points.
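The "bricks plus business logic" idea can be sketched in a few lines. This is a hypothetical illustration, not tied to any real BPMS; all names are invented:

```python
# Atomic "brick" services, each with its own entry and exit point,
# composed by a separate business-logic layer (illustrative names).

def create_client(name):
    """Atomic service: create a client record."""
    return {"client": name, "id": 1}

def create_order(client, items):
    """Atomic service: register an order for a client."""
    return {"order_id": 101, "client_id": client["id"], "items": items}

def send_email(client, text):
    """Atomic service: send a notification (stubbed here)."""
    return f"to {client['client']}: {text}"

def place_order_process(name, items):
    """Business logic: wires the autonomous bricks into one process."""
    client = create_client(name)
    order = create_order(client, items)
    receipt = send_email(client, f"order {order['order_id']} accepted")
    return order, receipt
```

Each brick knows nothing about the others; only the process function decides the sequence, which is exactly the role a BPMS takes over at scale.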

Most IT products for our customers begin with modular development. Some of them evolve toward microservices over time; others never need them. In this article we will examine why this is so and what criteria help determine whether you need to implement microservices or should stick with modules.

image

Benefits of Modular Architecture


All CMS platforms (Bitrix, Magento, Drupal, Hybris, etc.), CRM, ERP, WMS, and many other products have boxed versions. They sell well and are in high demand.
Let's look at the objective reasons why customers most often choose to work with a modular architecture and willingly purchase boxed solutions.

  1. High speed of implementation

    Installing, configuring, and populating reference data for such software takes little time. For a medium-sized company it is realistic to start working with a box three to four months after kickoff, sometimes even earlier.

    For small businesses, this period can be only a few days.


  2. Proven reliability

    The solution has been tested for years on a large number of projects, including enterprise-scale ones.


  3. Rich out-of-the-box functionality

    The box already covers the typical scenarios: catalogs, orders, payments, deliveries, reports.

  4. Predictable budget

    The cost of the license and of a standard implementation is known in advance, so the project budget is easy to plan.

There are other subjective factors that can be misleading and influence the decision to use boxes and modules.

1. The vendor race

Software vendors warmly assure customers that their out-of-the-box solution is the right one: time-tested, fashionable, enterprise-grade, popular, well-marketed... Every vendor, be it Bitrix, Magento, SAP, Oracle, OpenCart, Django, or anyone else, works hard on marketing and sales techniques.

2. Misconceptions about the complexity of customization

Customers are often full of optimism. They choose boxed software and think: "Yes, some customization will be needed. But that's easy: we don't have to invent anything new. We have a popular product, and millions of users cannot all be wrong and buy a bad one."
In their view, the customization process looks like this: there is the main (boxed) functionality; to "finish" something in it, developers "just" override a module or quickly write their own. Nothing has to be reinvented, because everything is supposedly thought out in the monolith: common methods for calculating taxes are provided, there are clear rules for writing delivery and payment methods, a clear order workflow, and so on.

In real life, things are different, and after the pleasant emotions of an easy start with the box, customers run into harsh reality. Most often this happens to medium and large businesses whose projects have unique business logic and need large-scale customization.

If your company is a small business and software is not your core asset, then most likely a popular boxed (or better, cloud) solution will suit you.

Let's look at what problems you may encounter when working with a modular architecture and how microservices help to avoid this.

Problems of modular systems


The main problem is that modular systems are simply not designed for serious redefinition of functionality. There is a box with ready-made modules, and you are expected to use them as they are.

The closer the project's size and the complexity of its customizations get to the enterprise level, the more problems there will be with extending the modules. Let's talk about the main ones.

Problem No. 1. The core of the system becomes a bottleneck, and modularity becomes an unnecessary complication.


Let's say your project involves complex warehouse logic. If you choose a modular architecture, developers cannot simply create functionality to manage these warehouses: they have to override or extend the multi-warehouse module, which in turn uses kernel methods.

In this case, it is necessary to take into account the complex logic of returns to warehouses: dependence on events from the CRM system, movement of goods between catalogs, etc. It is also worth considering the hidden logic that is associated with the return of funds, bonus points, etc.

When this many overrides accumulate, the monolith changes significantly. It is important to remember that the relationship between the volume of new functionality and the number of affected modules is non-linear: to add one function, you must either change several modules, each of which alters the behavior of the others, or override a large number of other modules' methods in the new module, which does not change the essence.

After all these changes, the system becomes so complicated that adding the next batch of customizations takes an indecent number of hours.

image

Problem No. 2. The principle of self-documentation is not supported in modular systems.


The documentation for modular systems is hard to keep up to date. There is a lot of it, and it goes stale with every change. Reworking one module entails changes in several documents (user documentation, technical documentation), and all of them have to be rewritten.

As a rule, there is no one to do this work: spending valuable IT specialists' time on it simply drains the budget. Even keeping documentation in the code (PHPDoc) does not guarantee its reliability. In the end, if the documentation can diverge from the implementation, it inevitably will.

Problem No. 3. High code coupling is the path to regression: "changed it here, it broke there"


The classic problem of modular systems is the fight against regression.

TDD is difficult to apply to monoliths because of the tight coupling between methods (you can easily spend 30 lines of tests, plus fixtures, on five lines of code).
So in the fight against regression you have to cover the functionality with integration tests.
But given the already slow pace of development (after all, you have to develop carefully to account for the many overrides), customers do not want to pay for complex integration tests as well.
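To illustrate the test overhead, here is a hypothetical five-line monolith method (all names are invented for this sketch): because it touches three other modules, the test must stub every coupled dependency before a single assertion can run.

```python
# Why unit-testing a coupled monolith method is costly: five lines of
# business code make three cross-module calls, so the test has to stub
# the kernel's customer, tax, and stock modules (all names invented).
from unittest.mock import MagicMock

def checkout(kernel, order):
    customer = kernel.customers.load(order["customer_id"])
    tax = kernel.taxes.for_region(customer["region"], order["total"])
    kernel.stock.reserve(order["items"])
    return order["total"] + tax

# The test needs a fixture for every coupled dependency:
kernel = MagicMock()
kernel.customers.load.return_value = {"region": "EU"}
kernel.taxes.for_region.return_value = 20.0
assert checkout(kernel, {"customer_id": 7, "total": 100.0, "items": ["x"]}) == 120.0
```

Even in this toy case the setup is longer than the method under test; in a real monolith the ratio is far worse.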

Functional test suites become as large as they are meaningless. They run for hours, even in parallel.

Yes, a modern frontend such as a PWA can be tested functionally through the API. But the tests often depend on data from external systems, so they start to fail if, for example, the test SAP instance lags behind production by N months and the test 1C sends incorrect data.

When a small fix needs to be shipped for some module, developers must choose between two evils: start a full CI run and spend a lot of time on the deploy, or roll out a hotfix without running the tests and risk breaking something. It is especially dramatic when such fixes arrive from the marketing department on Black Friday. Sooner or later, regression and human error will happen. Sound familiar?

Ultimately, to meet business goals, the team switches to emergency mode, skillfully juggles tests, and keeps a careful eye on log dashboards in Kibana, Grafana, Zabbix... And what do we get in the end? Burnout.

You must admit, such a regression situation looks nothing like the "stable enterprise" of the customer's dreams.

Problem No. 4. Code coupling and platform updates



Another consequence of tight coupling is the difficulty of updating the platform.

For example, Magento contains two million lines of code; wherever you look, there is a lot of code (Akeneo, Pimcore, Bitrix). When the vendor adds functionality to the kernel, those changes have to be reconciled with your custom modules.

Here is a real example from Magento.
At the end of 2018, version 2.3 of the platform was released. Multi-warehouse support and Elasticsearch were added to the Open Source edition; in addition, thousands of kernel bugs were fixed and some OMS goodies were added.

What did e-commerce projects that had already implemented their own multi-warehouse logic on Magento 2.2 face? To use the boxed functionality, they had to rewrite a pile of logic in order processing, checkout, and product cards. And "rightly so": why duplicate boxed functionality in custom modules? Reducing the volume of custom code in a large project is always useful; after all, the boxed methods already account for multiple warehouses, and upgrading the box without such refactoring can be pointless (we leave security issues aside for simplicity, especially since security patches can be applied without a full upgrade).

Now imagine: how much time will be spent on such an update and how can this be tested without integration tests, which are difficult to write?

It is not surprising that for many teams a platform update happens either without refactoring but with growing duplication, or (if the team wants to do everything by the book) with a long detour into refactoring and tidying up.

Problem No. 5. Opacity of business processes


One of the most important problems in project management is that the customer does not see all the logic and all the business processes of the project. They can only be reconstructed from the code or from the documentation (whose relevance, as we said earlier, is hard to maintain in modular systems).

Yes, Bitrix has a BPM component, and Pimcore has workflow visualization. But this attempt to manage modules through business processes always conflicts with the presence of a kernel. Besides, events, complex timers, transactional operations: none of this lives in the BPM of monoliths.

I repeat: this applies to medium and large companies. For a small company, the capabilities of modular systems are enough. But in the enterprise segment, such a solution lacks a single control center where you can open the diagram of any process, see any status, and understand exactly how things happen: what the exceptions, timers, events, and cron jobs are. What is missing is the ability to manage business processes rather than modules. Process management of the project drowns in the pace of change and the coupling of the logic.

Problem No. 6. Complexity of system scaling


If you deploy a monolith, it is deployed in its entirety, with all modules, on every application server. That is, you cannot scale up just the order-processing and bonus services for the high season, separately from the rest of the code.

You need more memory and CPUs across the board, which greatly increases the cost of the cluster.

How microservices save customers from the flaws of modular development: microservice orchestration in Camunda and jBPM


Spoiler: the problems listed in the previous section can be solved with BPMS and orchestration of microservice systems.

A BPMS (business process management system) is software for managing a company's business processes. The popular BPMS we work with are Camunda and jBPM.

Orchestration describes how services interact with each other through messaging, including the business logic and the sequence of actions. With BPMS we do not just draw abstract diagrams: the business process is executed exactly as drawn. What we see in the diagram is guaranteed to match how the process actually runs: which microservices are called, with which parameters, and which decision tables select a particular branch of logic.
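A toy sketch of this property (a real engine such as Camunda executes BPMN XML; here a plain Python list stands in for the diagram, and all step names are invented): the "drawn" sequence is the executable definition, so documentation cannot drift from behavior.

```python
# The process definition and the executed process are the same object:
# whatever steps appear in PROCESS, in that order, is what runs.

def pick_courier(ctx):
    # illustrative rule: light parcels go by courier, heavy by freight
    ctx["delivery"] = "courier" if ctx["weight"] < 10 else "freight"

def notify(ctx):
    ctx["log"].append(f"delivery={ctx['delivery']}")

PROCESS = [pick_courier, notify]  # the "drawn" sequence of steps

def run(ctx):
    for step in PROCESS:          # executed exactly as listed
        step(ctx)
    return ctx
```

Reordering or removing a step in `PROCESS` changes both "the diagram" and the behavior at once, which is the guarantee a BPMS gives at full scale.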



As an example, take a very common process: handing an order over for delivery.

On a message or a direct call, we start order processing with the step that chooses a delivery method. The selection logic is defined in advance.
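The selection logic can be pictured as a DMN-style decision table. The sketch below is illustrative data, not a real DMN engine, and every rule in it is invented: the first matching row wins.

```python
# A decision table as data: ordered rules mapping an order's attributes
# to a delivery method (all thresholds and methods are made up).

RULES = [
    # (predicate over the order, delivery method)
    (lambda o: o["weight"] > 30,      "freight"),
    (lambda o: o["city"] == "Moscow", "own courier"),
    (lambda o: True,                  "pickup point"),  # default row
]

def choose_delivery(order):
    """Return the delivery method from the first matching rule."""
    for predicate, method in RULES:
        if predicate(order):
            return method
```

Keeping the rules as a table rather than nested `if`s is what lets a BPMS render them as an editable DMN diagram.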

As a result, processes, services and development:

  • become quickly readable;
  • self-documenting (they work exactly as drawn, and there is no desynchronization between the documentation and the actual behavior of the code);
  • easy to debug (it is easy to see how a given process went and understand where the error is).

Let's get acquainted with the principles by which business process management systems work.

BPMS Principle No. 1. Development becomes visual and process-driven


BPMS allows you to create a business process in which the project team (developer or business user) determines the sequence of launch of microservices, as well as the conditions and branches along which it moves. In this case, one business process (sequence of actions) may be included in another business process.

All of this is presented visually in BPMS: you can watch these diagrams in real time, edit them, and deploy them to production. The principle of a self-documenting environment is fulfilled here to the maximum: the process works exactly as it is visualized.

All microservices become process building blocks that even a business user can drop onto a diagram in a couple of clicks. The business manages the process, while the developer is responsible for the availability and correct operation of each microservice. And all parties understand the overall logic and purpose of the process.

BPMS Principle No. 2. Each service has clear inputs and outputs.


The principle sounds very simple, and an inexperienced developer or business user may think that BPMS does not improve the way microservices are written at all: surely decent microservices can be written without BPMS.

Yes, it is possible, but hard. When a developer writes microservices without BPMS, the temptation to cut corners on abstraction inevitably appears. Microservices become frankly large, and sometimes even start reusing one another. There is also a temptation to skimp on the transparency of passing results from one microservice to the next.

BPMS pushes you to write more abstractly. Development becomes genuinely process-oriented, with explicitly defined inputs and outputs.
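A minimal sketch of this discipline (all names and types are illustrative): each microservice declares its input and output explicitly, so any process can reuse it without relying on hidden state.

```python
# Explicit input/output contracts for a microservice, sketched with
# dataclasses (service name, fields, and the qty<=10 rule are invented).
from dataclasses import dataclass

@dataclass
class ReserveInput:
    order_id: int
    sku: str
    qty: int

@dataclass
class ReserveOutput:
    order_id: int
    reserved: bool

def reserve_stock(inp: ReserveInput) -> ReserveOutput:
    # One small responsibility; no reads from other services' internals.
    return ReserveOutput(order_id=inp.order_id, reserved=inp.qty <= 10)
```

Because the contract is the whole interface, an orchestrator can plug `reserve_stock` into any process that can produce a `ReserveInput`.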

BPMS Principle No. 3. Parallel queue processing


Imagine order processing: we need to go to some marketplace, collect all the new orders, and start processing them.

image

Look at the diagram (a fragment of it). It specifies that every 10 minutes we check all the marketplace's orders, then process each order in parallel (as indicated by the vertical three-bar marker on Process Order, the BPMN multi-instance marker). On success, we transfer all the data to the ERP and finish processing.

If we suddenly need to pull up the logs for a specific order, then in Camunda, jBPM, or any other BPMS we can fully restore all the data and see which queue it was in and with what input and output parameters.
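The multi-instance step and the recoverable per-order log can be sketched like this (a simplified stand-in for what a BPMS records; all names are invented):

```python
# Each order instance runs in parallel and records its input and output,
# so a single order's run can be reconstructed from `history` later.
from concurrent.futures import ThreadPoolExecutor

history = {}  # order_id -> (input, output): the "process log"

def process_order(order):
    result = {"order_id": order["id"], "status": "sent_to_erp"}
    history[order["id"]] = (order, result)
    return result

orders = [{"id": i, "total": 100 * i} for i in range(1, 4)]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(process_order, orders))
```

A real engine persists this history in its database, which is what makes the "restore any order's run" query possible.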

BPMS Principle No. 4. Errors and escalations


Imagine that an error occurred during delivery. For example (a real case, by the way): the transport company picked up the order, and then the warehouse burned down. Another true story: in the New Year rush a product was first delayed and then, apparently, lost.

In BPMS, such events are wired up with a few mouse clicks: for example, notifying the client when the delivery deadline has passed. If an error arrives from the transport company, you can branch into a dedicated flow and interrupt everything else: notify the client, give a discount on the next order, refund the money.

All such exceptions are difficult not only to program outside BPMS (a timer inside a timer, for example) but also to comprehend in the context of the whole process.

BPMS Principle No. 5. Choosing an action based on one of several events, and interprocess communication


Imagine the same order out for delivery.

In total, we have three scenarios:

  • the goods were delivered as expected;
  • the goods were not delivered as expected;
  • item has been lost.

Directly in BPMS, we can define the procedure for shipping the goods to the transport company and then wait for one of two events on the order:

  • successful delivery (a message from the delivery process that everything was delivered);
  • or a deadline passing.

If the deadline passes without confirmation, another service has to be started: an operator investigates this specific order (a task is created for them in the OMS/CRM system to find out where the order is), followed by notifying the client.

But if the order is delivered while the investigation is in progress, the investigation must be interrupted and the order processing completed.
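The "message event versus timer event" race can be sketched with the standard library (a real BPMS models this with BPMN message and timer events; here a `threading.Event` with a timeout plays both roles, and the branch names are invented):

```python
# Wait for a delivery confirmation OR a timeout; on timeout, escalate
# to an operator but keep listening so a late delivery interrupts the
# investigation.
import threading

def wait_for_order(delivered: threading.Event, timeout: float) -> str:
    if delivered.wait(timeout):
        return "complete order"  # the message event won the race
    # the timer event won: start the operator investigation, keep waiting
    if delivered.wait(timeout):
        return "interrupt investigation, complete order"
    return "order lost: notify client, refund"
```

In BPMS this whole race lives in the diagram, not in the microservices, which is the point of Principle No. 5.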

With BPMS, all these interrupts and exceptions stay on the BPMS side. You do not overload the code with this logic (and the very presence of such logic in the code would make microservices large and poorly reusable in other processes).

BPMS Principle No. 6. In your Camunda you will see all the logs


Using events and interprocess communication, you:

  • see the whole sequence of events in one window (what is happening with the order, which exception branch it went down, and so on);
  • can build all your BI analytics on BPMS logs alone (without overloading the microservices with event logging).

As a result, you can gather statistics on processing problems, conversion between steps, and all of the company's processes. The logged information is unified: it is easy to link a delivery event with an operator's action or with an event in any other information system.
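Because every step passes through the orchestrator, one uniform event log is enough for BI. A minimal sketch with made-up sample data:

```python
# Per-branch statistics computed from a single uniform list of
# (order_id, activity) events, the way BI would query BPMS history.
from collections import Counter

events = [
    (1, "delivered"), (2, "delivered"),
    (3, "investigation"), (3, "delivered"),
    (4, "lost"),
]

def branch_stats(log):
    """Count how many times each process branch was taken."""
    return Counter(activity for _, activity in log)
```

No microservice had to emit analytics events of its own; the orchestrator's history is the single source.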

Note the difference from a modular system: universal logs can be made there too, but when interacting with other systems you would have to unify their logging as well, and that is not always possible.

Conclusions: a comparison of microservice and modular architecture


Each type of architecture has its advantages and disadvantages. There is no universal solution.

We do not advocate a mass migration to microservices. On the contrary, for a small business, or when very few customizations are needed, the modular approach is the better fit.

Nor are we against any particular IT solution (Bitrix, Magento, frameworks such as Symfony or Django, etc.): every month we deliver more than six thousand hours of development on these platforms alone, and as much again on frontend and microservices. We are therefore convinced that the point is to find the right technical solution, not to promote a specific platform (which, alas, is what a significant share of IT sales degenerates into).

In the previous sections you learned about the advantages and disadvantages of modular architecture. We hope this has already helped you judge whether customizing a boxed version or building microservices from scratch suits you better. If you still cannot decide, let's look at how the two types of architecture behave as the project lives on.

At the start of the project:

  • with microservices, you start with zero functionality and must build all of it before you can work;
  • with a modular system, a large amount of functionality is available to you immediately from the box, and you can start using the product soon after purchase.

After the first 3-4 months of development (the average time to a first MVP release) and beyond:

  • with microservices, the volume of functionality gradually draws level with the boxed version. For a medium-sized business the microservice architecture catches up with the modular one fairly quickly, and for a large one almost instantly. From then on, the cost of maintaining and developing a modular system per unit of functionality keeps growing;
  • with a modular system, the pace of delivering new functionality becomes noticeably lower than with microservices.

image

In conclusion, let's look at how orchestration of microservices looks with specific examples.

Services orchestration visualization examples


Consider service orchestration using Camunda. The following images show how convenient it is to manage microservices with a BPMS as the orchestrator. All processes are visual; the logic is obvious.

Business processes look like this:
image

Example (order, availability service):

image

You can see that this order went down the "No goods" branch.

Another copy of the order (it went to assembly):

image

The order went further and, according to a decision table (DMN), went to the processing branch of a specific delivery service (Boxberry):

image

Entering the nested process:

image

The nested process completed:

image

History of business processes:

image

Properties of this visualization:

  • business processes are easy to read even by an unprepared user;
  • they are executable, that is, they work exactly as drawn; there is no desynchronization between the "documentation" and the actual behavior of the code;
  • the processes are transparent: it is easy to see how a particular import, order, processing went, it is easy to see where the mistake was made.

Recall that we at kt.team use both modular and microservice development, choosing the right option for each product individually. But if the choice has already been made in favor of microservice architecture, then we are firmly convinced that it is impossible to do without BPM systems like Camunda or jBPM.

See also: video on the topic “Microservice or monolithic architecture: what to choose?”
