How I earned $1,000,000 without experience or connections, and then spent it on making my own translator

How it all began


This story began 15 years ago. While working as a programmer in the capital, I saved up some money and quit my job to create my own projects. To save money, I moved back to my small hometown, where I worked on a website for students, a trading program, and mobile phone games. But without business experience, none of this generated income, and soon the projects were closed. I had to go back to the capital and get a job again. This story repeated itself several times.

When my money ran out again, a crisis hit. I could not find a job, and the situation became critical. It was time to take a sober look at everything. I had to honestly admit to myself that I did not know which niches to choose for a business. Creating projects just because you like them is a road to nowhere.

The only thing I knew how to do was iOS apps. Several years of work in IT companies had given me some experience, so I decided to build many simple, fundamentally different applications (games, music, drawing, healthy lifestyle, language learning) and test which niches had little competition. I prepared a set of classes and libraries that made it possible to quickly create simple applications on various topics (2D games, GPS trackers, simple utilities, and so on). Most of them had a few pictures, two buttons, and only one function. But that was enough to test an idea and see how easy it would be to make money on it. For example, a running app tracked a person's speed, distance traveled, and calories burned. I spent a year and a half creating hundreds of simple applications. This speed was possible thanks to buying graphics on stock sites and reusing source code.



At first the applications were free. Then I added ads and in-app purchases and picked out keywords and bright icons. People began to download the applications.

When my income reached $30 thousand a month, I decided to tell a friend who worked at a large product company that I had reached that figure on test applications, and suggested creating them together. He replied that his company had just one application, a game, with $60 thousand in revenue and 25 thousand users per month, against my $30 thousand in revenue and 200 thousand users. It completely changed my views: it turned out to be better to create one high-quality application than a hundred low-quality ones.

I understood that with high-quality applications you can earn tens of times more, but I was alone in a small town, without experience or a team of designers and marketers. I still needed to pay rent and earn a living. The test applications existed simply to probe market niches and advertising strategies, to learn which applications to create and how. It just so happened that some of them began to bring in good income. Now the niche of simple applications has long since died, and there is no longer much money to be made there.

Some applications stood out sharply in terms of profit: translators, transportation apps, music programs (simulating the piano, drums, or guitar chords, plus players), and simple logic games.

While testing various kinds of games, I realized that games with long session lengths and high user involvement (such as 2048) can bring in good money over a long time span. But at first this was not obvious. So I created test apps such as GPS trackers for skiers and put the names of popular ski resorts like Courchevel in the keywords. Back then I was delighted that a single click on an ad brought in $2. But it was a short-term, non-scalable strategy.

Then I noticed that in just a month my translators were downloaded more than 1 million times, even though they sat at around the 100th position in their category's ranking.

Music applications brought just as many downloads, but considering user acquisition, they were less promising. Users have to be attracted through high-frequency keywords, and there are not many of them in this niche: people looking for guitar apps search for "guitar", "bass guitar", "chords", and so on. It is hard to find many synonyms for such topics. So users cluster around a few high-frequency queries, and sooner or later acquiring them becomes expensive. Translators are different.

There are hundreds of languages in the world, and people type queries not only with the general word "translator" but as a few words describing their problem: "translate into French", "translator from Chinese". With so many distinct queries, you can attract users simply through mid-frequency keywords (ASO). The niche turned out to be promising, especially since I liked the topic of translation.

Later, I created about 40 simple translators using translation provided by the Google API. It cost $20 per 1 million characters translated. Gradually, improved versions of the applications appeared, where I added ads, in-app purchases, and voice translation.
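For a sense of the economics, here is a rough sketch of how such API costs scale (the usage numbers below are made up purely for illustration):

```python
# Rough estimate of paid translation API costs (illustrative numbers only).
PRICE_PER_MILLION_CHARS = 20.0  # USD, Google API pricing at the time


def monthly_api_cost(users: int, chars_per_user: int) -> float:
    """Estimate the monthly translation bill for a given audience."""
    total_chars = users * chars_per_user
    return total_chars / 1_000_000 * PRICE_PER_MILLION_CHARS


# e.g. 200,000 monthly users, each translating ~10,000 characters:
print(monthly_api_cost(200_000, 10_000))  # 40000.0, i.e. $40,000 / month
```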

Having earned some money, I moved to Minsk and bought a house. By then I had 50-70 translation applications and 5 million downloads. But as the user base grew, so did the cost of the paid Google Translate API, and the profitability of the business dropped seriously. Paying users translated blocks of 1,000 characters at a time, which forced me to introduce limits on requests. When they hit the translation limit, they wrote bad reviews and demanded refunds. The point came when 70% of revenue went to expenses. At large translation volumes, this business was not so promising. To recoup the costs, I would have had to add a lot of advertising to the applications, and that always scares users away. What was needed was my own translation API, and that would most likely not be cheap.

I tried to ask the startup and IT communities for advice and investment, but I found no support. Most people did not understand why anyone would work in a market that already had a leader: Google Translate.

Besides Google, several companies provided translation APIs. I was ready to pay $30 thousand for licenses to their translation technology in 40 languages. That would have let me translate an unlimited amount of text for a fixed price and serve any number of users on my own servers. But in response they quoted a price several times higher. It was too expensive. So I decided to try building my own translation technology. I tried to bring friends in on the development, but by then most of them had families, small children, and loans. Everyone wanted stability and a comfortable life on a good salary, not to join a startup. They also did not understand why build a translator when Google already had a polished, sophisticated translation application and API. I had no public speaking experience, charisma, or impressive prototype applications to get people interested. Analytics showing $300 thousand earned on test translation applications surprised no one.

I turned to a friend who owns an outsourcing company in Minsk, and at the end of 2016 he allocated a team for me. I expected to solve the problem within six months on the basis of open-source projects, so as not to depend on the Google API.

On the way to my translator


The work began. In 2016, we found several open-source projects: Apertium, Joshua, and Moses. These were statistical machine translation systems, suitable for simple texts. Each project was supported by somewhere between 3 and 40 people, and it took a long time to get answers to questions about them. After we figured them out and got them running for tests, it became clear that we needed powerful servers and high-quality datasets, which are expensive. Even after we spent money on hardware and a quality dataset for one of the translation pairs, the quality left much to be desired.

Technically, it did not boil down to a "download a dataset and train" scheme. It turned out there were a million nuances we were not even aware of. We tried a few more resources but did not achieve good results. And Google and Microsoft do not disclose how they do it. Nevertheless, the work continued, with freelancers joining in periodically.

In March 2017, we stumbled upon a project called OpenNMT, a joint development of Systran, one of the leaders in the machine translation market, and Harvard University. The project had just launched and offered translation based on a new technology: neural networks.

Modern machine translation technologies belong to large companies and are closed. Small players, realizing how difficult it is to break into this world, do not even try, which holds back market development. For a long time, the translation quality of the leaders did not differ much from one another. Evidently, large companies also faced a shortage of enthusiasts, scientific papers, startups, and open-source projects from which to take new ideas and hire people.

So Systran made a fundamentally new move: it released its work as open source so that enthusiasts like me could get involved. It created a forum where its experts began to help newcomers for free. And it paid off: startups and scientific papers on translation began to appear, since anyone could take the foundation and run their own experiments on top of it. Systran led this community, and other companies later joined.

At that time neural translation was not yet ubiquitous, and OpenNMT offered groundwork in this area, outperforming statistical machine translation in quality. I and other people around the world could take these technologies and ask the experts for advice. They willingly shared their experience, and this helped me understand which direction to move in.

We took OpenNMT as our basis. This was in early 2017, when it was still raw and contained nothing beyond basic functions. It was all written in Lua (Torch), purely for academic research. We found many bugs; everything worked slowly and unstably and crashed under light load. It was not suitable for production at all. Then, in a shared chat, we all tested it together, caught errors, and shared ideas, gradually improving stability (there were about 100 of us at the time). At first I wondered: why would Systran grow its own competitors? But over time I understood the rules of the game, as more and more companies began to open-source their natural language processing work.

Even when everyone has the computing power to handle large datasets, the market has an acute shortage of NLP (natural language processing) specialists. In 2017, the field was much less developed than image and video processing: fewer datasets, scientific papers, specialists, frameworks, and so on. Even fewer people are capable of building a business out of NLP research papers and closing one of the local niches. Both top-tier companies like Google and smaller players like Systran need a competitive advantage over the players in their category.

How do they solve this issue?

At first glance this seems strange, but to compete with each other, they decided to bring new players (competitors) into the market, and for those players to appear, the market has to be stimulated. The entry threshold is still high, while demand for language processing technologies is growing fast (voice assistants, chatbots, translation, speech recognition and analysis, and so on). There are still not enough startups of the kind that can be bought to strengthen one's position.

Scientific papers from the teams at Google, Facebook, and Alibaba are published openly. Their frameworks and datasets are released as open source, and forums are created where questions get answered.

Large companies are interested in startups like ours developing, capturing new niches, and showing maximum growth. They are happy to buy NLP startups to strengthen themselves.

Indeed, even if you have all the datasets and algorithms in hand and someone explains them to you, that does not mean you will build a high-quality translator or any other NLP startup. And even if you do, it is far from certain you will bite off a large piece of the market. So outside help is needed, and if someone succeeds, you buy them or merge.

In March 2018, Systran invited the entire community to Paris to exchange experience, and also hosted a free master class on the main problems facing translation startups. Everyone was curious to see each other in person.

Everyone had different projects. Someone had created a bot for learning English that you could talk to like a person. Others used OpenNMT to summarize text. A significant share of the startups were plugins for SDL Trados Studio, tailored to a specific domain (medicine, construction, metallurgy, and so on) or language, to help translators save time editing machine-translated text.

Besides the enthusiasts, people from eBay and Booking.com came to Paris; they were building translators on the same platform as us, but optimized for translating auction listings and hotel descriptions.

Also, back in May 2017, Facebook had released its machine translation framework Fairseq as open source, along with trained models for testing. But we decided to stay with OpenNMT, watching how its community grew.

History of DeepL


In September 2017, while analyzing competitors, I learned about DeepL. At the time they had just started and offered translation in only 7 languages. DeepL was positioned as a tool for professional translators, helping them spend less time proofreading after machine translation. Even a small improvement in translation quality saves translation companies a lot of money. They constantly monitor machine translation APIs from different suppliers, because quality varies across language pairs and no single vendor leads everywhere, although Google supports the most languages.

To demonstrate its translation quality, DeepL ran tests in several languages:

techcrunch.com/2017/08/29/deepl-schools-other-online-translators-with-clever-machine-learning

Quality was assessed by blind testing, in which professional translators chose the best translation from Google, Microsoft, DeepL, and Facebook without knowing which was which. DeepL won; the jury rated its translations as the most "literary".

How did it happen?

The founders of DeepL also own Linguee, the largest database of links to translated texts. Most likely, they had a huge number of datasets collected by crawlers, and to train on them they needed a lot of computing power.

In 2017, they published an article saying they had assembled a 5-petaflop supercomputer in Iceland (at the time, the 23rd most powerful in the world). Training a large, high-quality model was only a matter of time. At that moment, it seemed that even if we bought high-quality datasets, we would never be able to compete with them without such a supercomputer.

But then Nvidia launched the DGX-2, a computer the size of a nightstand with 2 petaflops of performance (FP16), which can now be leased from $5,000 a month.

www.nvidia.com/en-us/data-center/dgx-2

With such a computer, you can quickly train models on giant datasets and also hold a large load on an API. This dramatically changes the balance of power across the entire machine-learning startup market and lets small companies compete with the giants in big data. It was the best offer on the market in terms of price-performance.

I started looking for statistics on DeepL. As of 2018, Google had 500 million monthly users; DeepL had 50 million (per an article dated December 12, 2018).

slator.com/ma-and-funding/benchmark-capital-takes-13-6-stake-in-deepl-as-usage-explodes

So at the end of 2018, DeepL's monthly audience was 10% the size of Google's, and they barely advertised anywhere. In just over a year, they captured roughly 10% of the market by word of mouth.

That got me thinking. If DeepL beat Google with a team of 20 people and a 5-petaflop machine in 2017, and now you can cheaply rent a 2-petaflop machine and buy high-quality datasets, how hard would it be to reach Google-level quality?

Lingvanex control panel


To handle translation tasks quickly instead of running tests from the console, we built a dashboard that covered everything from preparing and filtering data to deploying translation tests to production. In the picture below: on the right is the list of tasks and the GPU servers on which models are being trained; in the center are the neural network parameters; and below are the datasets that will be used for training.



Work on a new language began with preparing a dataset. We took them from open sources such as Wikipedia, European Parliament proceedings, Paracrawl, Tatoeba, and others. To reach average translation quality, 5 million translated sentence pairs is enough.
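Such corpora typically come as two line-aligned text files, one per language, where line N in one file is the translation of line N in the other. A minimal loading sketch (the file names here are hypothetical):

```python
# A parallel corpus as two line-aligned files, e.g. corpus.en / corpus.de:
# line N of one file is the translation of line N of the other.

def load_parallel(src_path: str, tgt_path: str):
    """Yield (source, target) sentence pairs from two aligned files."""
    with open(src_path, encoding="utf-8") as fs, \
         open(tgt_path, encoding="utf-8") as ft:
        for src, tgt in zip(fs, ft):
            yield src.strip(), tgt.strip()

pairs = list(load_parallel("corpus.en", "corpus.de"))
print(len(pairs), pairs[0])
```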



Datasets are lines of text translated from one language to another. A tokenizer then splits the text into tokens and builds dictionaries from them, sorted by token frequency. A token can be a single character, a syllable, or a whole word.
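For illustration, here is roughly how a subword vocabulary of a fixed size can be trained with the open-source SentencePiece library (a sketch, not necessarily the exact tokenizer we used):

```python
import sentencepiece as spm

# Train a BPE subword model: tokens land somewhere between characters
# and whole words, and the vocabulary is built from the most frequent
# units in the corpus.
spm.SentencePieceTrainer.train(
    input="corpus.en",      # one sentence per line
    model_prefix="en_bpe",
    vocab_size=30000,       # e.g. a 30-thousand-token dictionary
    model_type="bpe",
)

sp = spm.SentencePieceProcessor(model_file="en_bpe.model")
print(sp.encode("The quick brown fox", out_type=str))
# e.g. ['▁The', '▁quick', '▁bro', 'wn', '▁fox']
```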



Once the datasets were loaded into the database, it turned out they contained many words with errors or poor translations. To achieve good quality, they have to be filtered heavily. You can also buy datasets that have already been filtered for quality.
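Such filtering starts with simple heuristics like these (the thresholds below are arbitrary examples):

```python
def keep_pair(src: str, tgt: str) -> bool:
    """Basic noise filters for one parallel sentence pair."""
    if not src or not tgt:
        return False
    # Drop extremely long sentences.
    if len(src) > 500 or len(tgt) > 500:
        return False
    # Source and target lengths should be roughly comparable.
    ratio = len(src) / len(tgt)
    if ratio > 2.5 or ratio < 0.4:
        return False
    # Drop pairs where the "translation" is just a copy of the source.
    if src == tgt:
        return False
    return True

demo = [("Hello world", "Hallo Welt"), ("Hello", "Hello")]
clean = [(s, t) for s, t in demo if keep_pair(s, t)]
print(clean)  # the copied pair is dropped
```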



When a language is deployed to the API, you need to define the list of functions available for it (voice recognition, speech synthesis, image recognition, file and website parsing, and so on). These functions rely partly on open source and partly on third-party APIs.

Then it is all deployed to the API. Over time, we added a cache. It works well on one- and two-word phrases and can save up to 30% of queries.
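The idea of the cache in sketch form, keyed by language pair and normalized text (the model call here is a placeholder, not our real API):

```python
from functools import lru_cache


def translate_model(src_lang: str, tgt_lang: str, text: str) -> str:
    # Placeholder: in production this calls the neural model.
    return f"[{src_lang}->{tgt_lang}] {text}"


@lru_cache(maxsize=1_000_000)
def cached_translate(src_lang: str, tgt_lang: str, text: str) -> str:
    # Only reached on a cache miss; short 1-2 word phrases
    # ("hello", "thank you") repeat constantly, so a large share
    # of requests never reaches the model at all.
    return translate_model(src_lang, tgt_lang, text)


def translate(src_lang: str, tgt_lang: str, text: str) -> str:
    # Normalize so that "hello" and "hello " share one cache entry.
    return cached_translate(src_lang, tgt_lang, text.strip())


print(translate("en", "de", "hello"))
print(translate("en", "de", "hello "))  # served from the cache
```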

We continue to work


I spent all of 2018 solving the problem of high-quality translation for the main European languages. I kept thinking that another six months and everything would work out. I was very limited in resources; only two people handled the Data Science tasks. I had to move fast. The solution seemed like it should be something simple, but the bright moment never came, and I was not satisfied with the translation quality. By then about $450 thousand earned on the old translators had been spent, and I had to decide what to do next. Launching this project alone and without investment, I realized how many management mistakes I had made. But the decision was made: see it through to the end!

Around this time, I noticed that our community had started talking about a new neural network architecture, the Transformer. Everyone rushed to train neural networks based on this Transformer model and began switching to Python (TensorFlow) instead of the old Lua (Torch). I decided to try it too.
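For reference, the Transformer is now available out of the box in modern frameworks. A minimal sketch with the "base" hyperparameters from the original "Attention Is All You Need" paper (shown in PyTorch for brevity; we ourselves worked through the TensorFlow-based stack):

```python
import torch
import torch.nn as nn

# "Base" Transformer configuration from the original paper.
model = nn.Transformer(
    d_model=512,            # embedding / hidden size
    nhead=8,                # attention heads
    num_encoder_layers=6,
    num_decoder_layers=6,
    dim_feedforward=2048,
    dropout=0.1,
)

src = torch.rand(10, 32, 512)  # (source length, batch, d_model)
tgt = torch.rand(20, 32, 512)  # (target length, batch, d_model)
out = model(src, tgt)
print(out.shape)               # torch.Size([20, 32, 512])
```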

We also took a new tokenizer, preprocessed the text differently, started filtering and labeling the data differently, and changed how we post-processed the translated text to correct errors. The rule of 10,000 hours worked: there were many steps toward the goal, and at some point I realized the translation quality was already good enough to use in the API for my own applications. Each change added 2-4% in quality, which by itself was not enough for the critical mass at which people keep using a product instead of leaving for the competition.

Then we started connecting various tools to further improve translation quality: a named-entity recognizer, transliteration, thematic dictionaries, and a system for correcting spelling errors. After 5 months of this work, quality in some languages became much better and people began to complain less. It was a turning point. We could now sell the program, and with our own translation API we could greatly cut costs. We could grow sales or the number of users, because the only remaining cost would be the servers.
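To give an idea of how such tools wrap around the model, here is a simplified sketch of masking named entities before translation and restoring them afterwards, so the network does not mangle them (a toy regex stands in for a real NER model; this is not our production code):

```python
import re


def protect_entities(text: str):
    """Replace named entities with placeholders before translation."""
    # Toy rule: treat capitalized multi-word spans as entities.
    entities = re.findall(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)+\b", text)
    for i, ent in enumerate(entities):
        text = text.replace(ent, f"__ENT{i}__")
    return text, entities


def restore_entities(text: str, entities) -> str:
    """Put the original entities back after translation."""
    for i, ent in enumerate(entities):
        text = text.replace(f"__ENT{i}__", ent)
    return text


masked, ents = protect_entities("Send it to John Smith in New York")
print(masked)  # "Send it to __ENT0__ in __ENT1__"
# ... translate `masked` with the model, then:
print(restore_entities(masked, ents))
```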

Training a neural network requires a good computer, but we economized. First, we rented 20 ordinary computers (each with a single GTX 1080) and launched 20 simple experiments on them in parallel through the Lingvanex Control Panel. Each experiment took a week, which was a long time. To get better quality, we needed to run with other parameters that required more resources. We needed a cloud and more video cards in one machine. We decided to rent Amazon cloud machines with 8 V100 GPUs. They are fast, but very expensive: we started a test at night, and in the morning there was a bill for $1,200. Back then there were very few options for renting powerful GPU servers besides Amazon. I had to abandon that idea and look for cheaper options. Maybe build our own?

Calls to companies ended with us having to send a detailed configuration ourselves, which they would then assemble; no one could tell us which configuration was best in terms of performance per dollar for our tasks. We tried ordering from Moscow and stumbled onto a suspicious company: the site looked high quality and the sales department knew the subject, but they did not accept bank transfers, and the only payment option was sending money to their accountant's personal card. We conferred with the team and decided we could build a computer ourselves from several powerful GPUs, for up to $10 thousand, that would solve our problems and pay for itself within a month. We scraped the components together bit by bit: some by calling around Moscow, some ordered from China, some from Amsterdam. A month later, everything was ready.

At the beginning of 2019, I finally assembled this computer at home and began running many experiments without worrying about rental bills. On Spanish, I began noticing that our translation was close to Google's by the BLEU metric. But I did not understand that language, so I set an English-Russian model to train overnight to gauge where I stood. The computer buzzed and roasted all night; it was impossible to sleep. I had to make sure no errors appeared in the console, since everything periodically froze. In the morning I ran a test translating 100 sentences of 1 to 100 words in length and saw that the translation was good, including on long sentences. That night changed everything. I saw the light at the end of the tunnel: it really was possible to someday reach good translation quality.
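For reference, BLEU scores a system's output against reference translations by n-gram overlap; with the standard sacrebleu library it is computed like this (toy sentences for illustration):

```python
import sacrebleu

# System outputs and one aligned set of reference translations.
hyps = ["The cat sits on the mat.", "It is raining today."]
refs = [["The cat is sitting on the mat.", "It rains today."]]

bleu = sacrebleu.corpus_bleu(hyps, refs)
print(bleu.score)  # corpus-level BLEU on a 0..100 scale
```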

Improving application quality


Having earned money on an iOS translator with one button and one function, I decided to improve its quality and make versions for Android, Mac OS, and Windows desktop. I hoped that once I had my own API, I would finish developing the applications and enter other markets. In the time I spent solving the API problem, competitors had gone much further. I needed features that would make users download my translator in particular.

The first thing I decided to do was voice translation for mobile applications without Internet access. This was a personal pain point. Say you travel to Germany: you download just the German package to your phone (400 MB) and get translation from English to German and back. Internet access abroad is a real problem: Wi-Fi is often unavailable, password-protected, or simply too slow to use. Yet even in 2017, there were thousands of high-quality translator applications that worked only online through the Google API.

Since many people had problems with the Lua (Torch) version of OpenNMT because of the language's limited popularity, the founders ported the logic of the translate.lua script to C++ (CTranslate), which made translation experiments more convenient. In the Lua version you could train models; in the C++ version you could run them. By May 2017, it could already more or less serve as a production basis for applications.

We ported CTranslate to work in our applications and released it all as open source.

Here is a link to this thread:

github.com/hunter-packages/onmt

Porting CTranslate to different platforms was only the first step. We had to figure out how to make offline models small enough, and of decent quality, to run on phones and computers. The first versions of the translation models took up 2 GB of the phone's RAM, which was completely useless.

I found people in Spain with good experience in machine translation projects. For about 3 months we jointly did R&D on shrinking the neural translation model, aiming at 150 MB per language pair so it could run on mobile phones. The size had to be reduced in a way that packed as many translation options as possible, for words of different lengths and topics, into a fixed dictionary size (for example, 30 thousand words).
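One of the standard levers for shrinking a model is lowering the numeric precision of its weights; here is a sketch of just that part of the arithmetic (an illustration of the general principle, not the method from our paper):

```python
import numpy as np

# A weight matrix stored in float32: 4 bytes per parameter.
W = np.random.randn(30000, 512).astype(np.float32)
print(W.nbytes / 2**20)  # ~58.6 MB for one embedding matrix alone

# Quantize to int8: 1 byte per parameter, ~4x smaller,
# at the cost of a small loss in precision.
scale = np.abs(W).max() / 127
W_q = np.round(W / scale).astype(np.int8)
print(W_q.nbytes / 2**20)  # ~14.6 MB

# Approximate reconstruction at inference time:
W_approx = W_q.astype(np.float32) * scale
```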

Later, we made the results of our research public and presented them at the European Association for Machine Translation conference in Alicante, Spain, in May 2018, and one of the team members defended a PhD thesis based on the work.

rua.ua.es/dspace/bitstream/10045/76108/1/EAMT2018-Proceedings_33.pdf?fbclid=IwAR1BxipmZMR8Rt0d32gcJ7BaFt1Tf1UEm9LkJCYytBJLgdtx3ujAPFCwE80

At the conference, many people wanted to buy the product, but so far only one language pair was ready (English-Spanish). Offline neural translation on phones was ready in March 2018, and all the other languages could have been done by summer. But under the contract I could not obtain the source code and tools used in the work, which I needed to build the offline translator for the other languages. I should have read the contract more carefully. Alone, I could not quickly reproduce the results for other languages, so I had to pause this feature. A year later, I came back to it and finished it.

Besides translating text, voice, and pictures, we decided to add translation of phone calls with transcription, which competitors did not have. The bet was that people often call support lines or businesses in different countries, on mobile or landline phones, and the person being called does not need to install any application. This feature demanded a lot of time and money, so we later decided to put it into a separate application. That is how the Phone Call Translator came about.

Translation applications had one problem: they are not used every day. There are not many situations in life that require daily translation. But if you are studying a language, a translator becomes a frequent tool. For language learning we created a flashcard feature: words are added to bookmarks on the site through a browser extension or from movie subtitles, and then the knowledge is reinforced through the mobile chatbot application or the smart speaker application, which quizzes you on the chosen words. All Lingvanex applications are connected through a single account, so you can start translating on your phone and continue on your computer.

We also added voice chats with translation. This is useful for tourist groups, where the guide can speak their own language and each visitor listens to a translation. And finally, translation of large files on a phone or computer.

Backenster project


Over 7 years I got 35 million downloads without any advertising spend and earned more than $1 million; almost half of that came from translators. These were test applications for learning mobile marketing. Because of the many mistakes, millions of users came and went. Having gained the necessary experience, I decided to create a small internal subproject, Backenster, for managing applications, advertising, and analytics, so as not to repeat past mistakes on the high-quality translators and to earn as much as possible.

Through this system, I plan to redirect users of my old translator applications to the new ones, since there is no money to buy traffic. Some 5-10 million installs of the old applications still remain on people's phones. When the new applications are ready, all that remains is to press "Start". This will cost many times less than paid acquisition of the same number of users. Gradually the system gained management of tests, subscriptions, updates, configuration, and notifications, an advertising mediator, and the ability to cross-promote between mobile applications, browser extensions, chatbots, desktop apps, voice assistants, and back. I tried to anticipate every problem that had come up with applications over the years.



Prospects and Strategy


When creating an API for your applications and investing a lot of money in it, you need to understand the size and prospects of the machine translation market. In 2017, the forecast was that this market would reach $1.5 billion by 2023, while the market for all translation work would be $70 billion (also for 2023).

Why such a gap, roughly 50-fold?

Say the best machine translator today translates 80% of text well. The remaining 20% must be edited by a human. The biggest translation costs are proofreading, that is, people's salaries.

An increase in translation quality of even 1% (to 81% in our example) can, figuratively, reduce the cost of proofreading text by 1%. 1% of the total translation market minus the machine one (70 - 1.5 = $68.5 billion) is already $685 million. The numbers and calculation above are approximate, to convey the essence.
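The same back-of-the-envelope estimate in code form:

```python
# Forecast market sizes for 2023, USD (approximate figures from above).
total_translation_market = 70e9
machine_translation_market = 1.5e9

# The part still done (and paid for) by humans:
human_part = total_translation_market - machine_translation_market  # $68.5B

# Each extra 1% of text that machines translate well shifts roughly
# 1% of that human-translation spend:
print(human_part * 0.01)  # ~$685 million per percentage point of quality
```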

In other words, improving quality by even 1% can save large companies significantly on translation services. As machine translation quality grows, it will replace more and more of the manual translation market and save on salary costs. You do not have to cover all languages: you can pick one popular pair (English-Spanish) and one domain (medicine, metallurgy, petrochemistry, and so on). 100% quality, meaning perfect machine translation across all subjects, is unattainable in the near future, and each subsequent percentage point of quality will be harder to gain.

However, this does not prevent machine translation from taking a significant share of the total market by 2023 (just as DeepL imperceptibly grabbed an audience 10% the size of Google's), since large companies test various translator APIs every day, and improving one of them by a percentage point (for any language) will save them many millions of dollars.

The strategy of large companies releasing their groundwork as open source has begun to bear fruit. There are more startups, scientific papers, and people in the industry, which has energized the market and keeps improving translation quality, raising the forecast for the machine translation market.

Every year, competitions on NLP tasks are held, where corporations, startups, and universities compete over who has the best translation for certain language pairs.

http://statmt.org/wmt18/

Analyzing the list of winners gives confidence that excellent results can be achieved with small resources.

Opening a company


Over several years, the project grew many times over. Applications appeared not only for mobile platforms but also for computers, wearable devices, instant messengers, browsers, and voice assistants. Besides text, we built translation of voice, pictures, files, websites, and phone calls. Initially I planned to use my translation API only for my own applications, but then I decided to offer it to everyone. Competitors had moved ahead, and I had to keep up.

Until then I had managed everything alone as a sole proprietor, outsourcing the work to contractors. But the complexity of the product and the number of tasks began to grow rapidly, and it became obvious that I needed to delegate and quickly hire people into my own team, in my own office. I called a friend; he quit his job, and in March 2019 we decided to open the Lingvanex company.

Until that moment I had built the project without advertising it anywhere, and when I decided to assemble a team, I ran into a hiring problem. Nobody believed it could be done at all, and nobody understood why do it. I had to interview many people, and each time spend 3 hours talking about thousands of non-obvious details. When the first article about the project came out, it became easier. Even so, one question always came first:

"How are you better than Google?"

At the moment, our goal is to reach the quality of Google's general-domain translation for the main European and Asian languages, and after that to offer:

1) Text and website translation through our API at a third of competitors' prices, with excellent support and easy integration. For example, Google's translation costs $20 per million characters, which becomes very expensive at significant volumes.

2) High-quality thematic translation of documents in specific domains (medicine, metallurgy, law, and so on) via the API, including integration into tools for professional translators (such as SDL Trados).

3) Integration into enterprise business processes: running translation models on the client's own servers under our license. This preserves data confidentiality, removes dependence on the volume of translated text, and allows the translation to be tuned to a particular company's specifics.

You can make translation quality better than competitors' for specific language pairs or topics. You can do almost anything; it is a matter of company resources. With sufficient investment there is no fundamental problem: what to do and how to do it is known, you just need working hands and money.

The NLP market as a whole is growing very rapidly as recognition, speech analysis, and machine translation improve, and it can bring good profit even to a small team. The real hype will begin in 2-3 years, when today's market cultivation by large companies bears fruit and a wave of mergers and acquisitions begins. The main thing is to have a good product with an audience that can be sold at that moment.

Summary


In all, the test applications earned more than $1 million, most of which was spent on making my own translator. It is now obvious that everything could have been done much cheaper and better. Many management mistakes were made, but that is experience; at the time there was no one to ask for advice. The article describes only a small part of this story, and it may sometimes be unclear why certain decisions were made. Ask questions in the comments.

At the moment we have not reached the quality of Google's translation, but I see no problem getting there if the team has at least a few Natural Language Processing specialists. Right now our translator works best from English into German, Spanish, and French.

Below are links to the new programs that were developed over 3 years and into which the money was invested. If anyone wants to see the old test applications discussed at the beginning of the article (the ones that earned the money and 35 million downloads), message me privately.

Translator for iOS


Translator for Android


Translator for Mac OS


Translator for Windows


Translator for Chrome


Translator for Telegram



A demo of the translation API can be found here:

Translation API Demo



The team also needs a product manager (mobile applications) and a Python programmer with experience in NLP projects.

If you have ideas for partnerships or offers, write me a private message or add me on Facebook or LinkedIn.
