Mobile application reference service

Ruslan Aromatov, chief developer, MKB



Good afternoon, Habr! I work as a backend developer at Moscow Credit Bank (MKB), and this time I would like to talk about how we organized the delivery of runtime content to our MKB Online mobile application. This article may be useful to those who design and develop front servers for mobile applications that constantly need a variety of updates delivered to them, be it bank documents, geolocation points, or updated icons, without updating the application itself in the stores. It will not hurt those who develop mobile applications either. What follows is mostly a discussion of the approach.

Background


I think any mobile application developer has run into the problem of updating some part of their application's content: for example, changing a clause of the user agreement, an icon, or the coordinates of a store belonging to a customer who has suddenly moved. What could be easier? We rebuild the application and put it in the store. Clients update, everyone is happy.

But this simple scheme does not work, for one simple reason: not all clients update. And judging by the statistics, there are a lot of such clients.

In the case of a banking application, failure to deliver up-to-date information can cost both money and customer goodwill. For example, on the first day of the next month card tariffs change, new bonus program rules come into force, or new types of payment recipients are added. If the client launches the application at exactly 00:01, he should see the updated content.

"Elementary!" you say. "Download this data from the server and you will be happy."

And you will be right. That is exactly what we do. And that could be the end of it.

However, it is not all that simple. We have applications for both iOS and Android, and each platform has several versions in the wild, with different functionality and APIs.
As a result, it may happen that we need to update a file for Android applications with an API level above 27, but not touch iOS or earlier versions.

It gets even more interesting when, for example, we need to update the icons of payment recipients or add new items with new icons. We draw each icon in seven different resolutions, one for each specific type of screen: four for Android (hdpi, xhdpi, xxhdpi, xxxhdpi) and three for iOS (1x, 2x, 3x). Which one should be sent to a specific application?
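The screen-to-variant matching could be sketched roughly as below. This is purely illustrative (the class and method names are mine, not the actual MKB Online code): the client reports its platform and screen descriptor, and the service picks a known variant, falling back to the densest one, since downscaling a large icon looks better than upscaling a small one.

```java
// Hypothetical sketch: map a device's platform and screen descriptor to
// the icon variant the service should serve. Names are illustrative.
public class IconVariantResolver {
    // Android density buckets supported by the service, coarsest first.
    private static final String[] ANDROID = {"hdpi", "xhdpi", "xxhdpi", "xxxhdpi"};
    // iOS scale factors supported by the service.
    private static final String[] IOS = {"1x", "2x", "3x"};

    /** Returns the variant the client should request, or a dense fallback. */
    public static String resolve(String os, String screen) {
        String[] known = os.equals("android") ? ANDROID : IOS;
        for (String s : known) {
            if (s.equals(screen)) return s;
        }
        // Unknown screen descriptor: fall back to the densest variant.
        return known[known.length - 1];
    }
}
```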

"Well then send the file parameters that are needed by a particular application."

Correct! Nobody knows which file the application needs except the application itself.
However, that is not all. Applications contain quite a few files that are interconnected. For example, the list of payees (one json file) is linked to the details of those payees (another json file). If we receive the first file but for some reason cannot receive the second, customers will not be able to pay for the service. And that, frankly, is not very good.

The second case: we update the entire set of payment recipient icons (there are more than a hundred of them) when the user enters the payment page. Depending on connection speed, this can take from 10 seconds to several minutes. What should the page do in the meantime? One option is to simply display the previous version of the icons, download the new ones in the background, cache them, and show them only the next time the client visits the page. Not great, right?

Another option is to replace already rendered icons with new ones on the fly. Not very pretty either. And what if some icon does not download at all? Then we get a beautiful row of new icons with a piece of the old design in the middle.

Operation icons

"Then download the entire set of icons in one archive at application startup."

A good thought. Seriously. But there is a nuance.

It often happens that the designer redraws only a couple of icons out of hundreds, and only they need to be replaced. They weigh 200 bytes, while the entire archive weighs 200 kilobytes. Should the client really re-download what he already has?

And we have not yet counted the cost of such work on the server. Say 10,000 clients per hour come to us (that is the average; it can be more). Application startup triggers a background update of the directories (yes, now you know what we call them). If each client needs to update 1 kilobyte, the server will serve more than 10 megabytes in an hour. Pennies, right? But if the set of updates weighs 1 megabyte, we already have to serve 10 gigabytes. At some point we came to the conclusion that traffic has to be counted.
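The arithmetic above is simple but worth making explicit: served traffic scales linearly with both client count and per-client update size. A one-line helper (illustrative, not real service code):

```java
// Back-of-the-envelope traffic estimate from the text: clients per hour
// multiplied by the update size each one downloads. Purely illustrative.
public class TrafficEstimate {
    /** Total bytes served per hour for a given per-client update size. */
    public static long bytesPerHour(long clientsPerHour, long updateBytes) {
        return clientsPerHour * updateBytes;
    }
}
```

With 10,000 clients and 1 KB each that is about 10 MB per hour; at 1 MB each it is already about 10 GB.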

"Then you need to learn to detect which files have changed and which have not, and download only the necessary ones."

Right. But how do we tell which files have changed and which have not? We compute a hash for that. Thus a file cache appears in the application, containing a set of reference files. These files are used as resources when needed. And on the server side, this eventually gave birth to the...
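The hash check itself is unremarkable; a minimal sketch might look like this (the class is mine for illustration, but MD5 matches the "hashType" the service actually uses, as the index example further down shows):

```java
import java.security.MessageDigest;

// Sketch of the change check: the app hashes a cached file's bytes and
// compares the hex digest with the hash recorded in the index.
public class FileHash {
    public static String md5Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    /** True when the cached bytes already match the index entry. */
    public static boolean upToDate(byte[] cached, String indexHash) throws Exception {
        return md5Hex(cached).equals(indexHash);
    }
}
```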

Directory Service


In essence, this is a regular web service that serves files over HTTP, taking into account all the requirements of the applications. It consists of a number of docker containers, inside which a Java application runs with the Jetty web server on board. The backend is the Tarantool database on the vinyl engine (there was no painful choice here: we simply already had all the binding for this database; you can read about it in my previous article, Smart cache service based on ZeroMQ and Tarantool) with master-slave replication. To manage the files there is a service web interface, also completely self-written.



The technical implementation details are not particularly significant for the topic of this article. It could be PHP + Apache + MySQL, C# + IIS + MSSQL, or any other stack, including one without a database at all.

The diagram below shows how the service, which we called Woodside, works. Mobile clients go through the balancer to the web service instances, which in turn get the necessary files from the database.

Scheme of work

But in this article I will only talk about the structure of the directory system and how we use it in the applications.

We divide the files needed by the applications into 3 different types.

  1. Files that must always be present in the application, regardless of the type of operating system. For example, a pdf file with the banking service agreement.
  2. Files that must also always be present in the application, but that depend on the operating system and (or) the type of screen. For example, the icons of payment recipients.
  3. Files that are not needed right away, but only on demand. They do not go through the update mechanism and are downloaded at runtime when required.

The update mechanism

The first 2 types of files, in the form of archives, are put into the application assembly right away: a fresh release by default includes the newest set of directories. After that they are handled by the automatic update system, which runs in the background when the application starts and works as follows.

1. The directory service automatically receives part of the data from various places: databases, related services, network shares. This is important general banking information that is updated by other departments. The other part consists of directories created within our team via the web interface, containing files intended only for the mobile applications.

2. On a schedule (or at the push of a button), the service runs through all the files of all the directories and builds a set of index files (json inside) from them: both for files of the first type (2 versions, for iOS and Android) and for resource files of the second type (7 versions, one for each type of screen).
An index looks something like this:

{
  "version": "43",
  "date": "04 Apr 2020 12:31:59",
  "os": "android",
  "screen": "any",
  "hashType": "md5",
  "ts": 1585992719,
  "files": [
    {
      "id": "WBRbDUlWhhhj",
      "name": "action-in-rhythm-of-life.json",
      "dir": "actions",
      "ts": 1544607853,
      "hash": "68c589c4fa8a44ded4d897c3d8b24e5c"
    },
    {
      "id": "o3K4mmPOOnxu",
      "name": "banks.json",
      "dir": "banks",
      "ts": 1583524710,
      "hash": "c136d7be420b31f65627f4200c646e0b"
    }
  ]
}

The indexes contain information about all files of a given type; the mechanism for updating directories in the applications is built on top of them.
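Building such an index is essentially a walk over the files, hashing each one. A minimal sketch of the idea (file contents are passed in as a map to keep it runnable; the real service reads its own storage, and the id, version, and timestamp fields from the example are omitted here):

```java
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of step 2: turn a set of directory files into
// path -> md5 entries, mirroring the "files" array of the index above.
public class IndexBuilder {
    static String md5Hex(byte[] data) throws Exception {
        StringBuilder sb = new StringBuilder();
        for (byte b : MessageDigest.getInstance("MD5").digest(data)) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    /** One entry per file: "dir/name" -> md5 hex digest of its content. */
    public static Map<String, String> build(Map<String, byte[]> files) throws Exception {
        Map<String, String> index = new TreeMap<>();
        for (var e : files.entrySet()) index.put(e.getKey(), md5Hex(e.getValue()));
        return index;
    }
}
```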

3. The first thing the applications do at startup is download the index files into the /new directory inside their file cache. The /current directory holds the indexes for the current set of files, along with the files themselves.

4. Based on the new and old index files (and the hashes computed from the current files), lists of files to update or delete are built, and it is determined whether an update is needed at all.
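The diff in step 4 boils down to comparing two path-to-hash maps. A sketch under that assumption (names are mine, not the application's actual code):

```java
import java.util.*;

// Sketch of step 4: given the current and fresh index as maps from
// relative path to hash, decide what to download and what to delete.
public class IndexDiff {
    /** Files that are new or whose hash changed must be downloaded. */
    public static List<String> toDownload(Map<String, String> current, Map<String, String> fresh) {
        List<String> out = new ArrayList<>();
        for (var e : fresh.entrySet()) {
            String have = current.get(e.getKey());
            if (have == null || !have.equals(e.getValue())) out.add(e.getKey());
        }
        Collections.sort(out);
        return out;
    }

    /** Files no longer present in the fresh index must be deleted. */
    public static List<String> toDelete(Map<String, String> current, Map<String, String> fresh) {
        List<String> out = new ArrayList<>();
        for (String path : current.keySet()) {
            if (!fresh.containsKey(path)) out.add(path);
        }
        Collections.sort(out);
        return out;
    }
}
```

An update is needed at all only when at least one of the two lists is non-empty.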

5. After that, the applications download the necessary files from the server into the /new directory via a direct link (the file id in the index is used for this). Files already present in /new, and their hashes, are taken into account, because this may be a resumed download.

6. As soon as the entire set of files has arrived in the /new directory, it is checked against the index file (it has happened that files were not downloaded completely).

7. If the check succeeds, the entire file tree is moved, with replacement, into the /current directory. The fresh index file becomes the current one.

8. If the check fails, no files are moved, and the application continues to use the current set of directories. On the next application start the update mechanism will try again. If a global failure occurs while moving files, we are forced to roll back to the very first version of the directories, the one that shipped with the assembly. So far there have been no such incidents.
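Steps 6 and 7 can be sketched as verify-then-promote: check every downloaded file against the index hashes, and only if the whole set checks out, move /new over /current in one operation (keeping the old tree around for rollback). This is an assumed sketch, not the actual application code:

```java
import java.nio.file.*;
import java.security.MessageDigest;
import java.util.Map;

// Sketch of steps 6-7: verify /new against the index, then promote it.
public class CachePromoter {
    static String md5Hex(byte[] data) throws Exception {
        StringBuilder sb = new StringBuilder();
        for (byte b : MessageDigest.getInstance("MD5").digest(data)) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    /** Step 6: true only when every indexed file is present with the right hash. */
    public static boolean verify(Path newDir, Map<String, String> indexHashes) throws Exception {
        for (var e : indexHashes.entrySet()) {
            Path f = newDir.resolve(e.getKey());
            if (!Files.exists(f) || !md5Hex(Files.readAllBytes(f)).equals(e.getValue())) return false;
        }
        return true;
    }

    /** Step 7: promote the verified tree; the old tree is kept for rollback. */
    public static void promote(Path newDir, Path currentDir, Path backupDir) throws Exception {
        if (Files.exists(currentDir)) Files.move(currentDir, backupDir);
        Files.move(newDir, currentDir);
    }
}
```

Promoting the whole tree with a single move is what makes the update look atomic to the rest of the application: readers see either the old set or the new set, never a mix.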

But why is it so difficult?

In reality, it is not that complicated. But we constantly have to experiment and look for compromises: between the number of constantly updated files and load times, between saving traffic and speed. An important role in choosing a file's type is played by exactly when it is needed in the application. If, say, an icon must be displayed on the main page immediately after login, the application can load such a file at runtime right away instead of putting it through the long update mechanism. Right now the total size of the archive with just the main files is 12 megabytes, not counting screen-dependent resources. And since the update is essentially an atomic operation, we have to wait until it finishes; with a poor connection and many new files this can take up to several minutes.

Saving traffic is an important point. There were times when we completely saturated a 100 megabit channel after fat updates; we had to expand to 300, which has been enough so far. On average, metrics show that during the day clients download from 25 to 50 gigabytes per hour (we have quite large files that are updated daily). There is still room for further savings, but the business side is not standing still either: they keep adding all sorts of new beauty.

In conclusion, I can add that the front servers themselves also use the service: at startup they download the data they need to process client requests.

And how do you deliver content updates to your applications?
