A selection of articles on machine learning: cases, guides, and studies for March 2020



It seems that not a single post these days can do without mentioning the coronavirus, and this collection is no exception.

Since the end of January, hundreds of open repositories mentioning COVID-19 have appeared. In them you can find datasets, models, and visualizations.

There are plenty of publications on using machine learning algorithms to combat the spread of COVID-19, but few of them let you examine the source code.

Those materials are not included here: as in the previous two issues, this selection focuses on publications that lower the barrier to entry into ML, with particular attention to tools that abstract the behavior of complex models behind high-level APIs you can start using right away.

Computational predictions of protein structures associated with COVID-19

Google DeepMind published the results of its study on predicting the structure of the virus's proteins, using its open-source deep neural network AlphaFold. This information may be useful for developing new drugs. However, as DeepMind makes clear on its website, the data has not been verified experimentally, and the accuracy of the structures cannot be guaranteed.

Machine learning to determine COVID-19 by chest x-ray

One of the creators of COVID-CXR explains how to start using machine learning to predict severe cases of coronavirus infection from chest X-rays. The post walks through preparing a dataset, preprocessing it, and training a model. Particular emphasis is placed on explaining the predictions the neural network makes: each explanation consists of two paired images in which areas are highlighted in green or red to indicate what contributed to the prediction.

5 COVID-19 datasets that you can use right now

Here you can find patient data, geographic distribution data, and even a collection of millions of tweets mentioning the virus.



The materials below are not related to the coronavirus


Real-time face and hand tracking

Google Research has introduced two lightweight tools that run entirely in the browser, so the data never leaves the user's device and stays private.

Facemesh derives the approximate three-dimensional geometry of the facial surface from an image or video stream, which means it can work with a regular camera without a depth sensor (demo).
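Facemesh ships as a small JavaScript package, so it takes only a few lines to try. The sketch below assumes the browser setup from the `@tensorflow-models/facemesh` package at the time of writing; the `boundingBoxArea` helper is our own illustration, not part of the library:

```javascript
// Browser setup via script tags (package: @tensorflow-models/facemesh):
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/facemesh"></script>

// Pure helper (ours): area of the bounding box facemesh reports per face.
function boundingBoxArea(face) {
  const [x1, y1] = face.boundingBox.topLeft;
  const [x2, y2] = face.boundingBox.bottomRight;
  return Math.abs(x2 - x1) * Math.abs(y2 - y1);
}

async function detectFaces(videoElement) {
  const model = await facemesh.load();          // `facemesh` is the script-tag global
  const faces = await model.estimateFaces(videoElement);
  for (const face of faces) {
    // `scaledMesh` holds ~468 [x, y, z] surface landmarks.
    console.log('landmarks:', face.scaledMesh.length,
                'box area:', boundingBoxArea(face));
  }
  return faces;
}
```

Because the model runs on tfjs, the same code works with WebGL acceleration where available and falls back to CPU otherwise.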

Handpose detects hands in a video stream and, based on twenty-one landmarks (the finger joints and palm), determines the position of each part of the hand (demo).
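Handpose has a matching API. A minimal sketch, assuming the `@tensorflow-models/handpose` package and its 21-landmark layout (index 0 is the wrist, index 8 the index fingertip); the `landmarkDistance` helper is our own illustration:

```javascript
// Browser setup via script tags (package: @tensorflow-models/handpose):
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/handpose"></script>

// Pure helper (ours): Euclidean distance between two [x, y, z] landmarks.
function landmarkDistance(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

async function trackHands(videoElement) {
  const model = await handpose.load();          // `handpose` is the script-tag global
  const hands = await model.estimateHands(videoElement);
  for (const hand of hands) {
    // `landmarks` is an array of 21 [x, y, z] points.
    const wrist = hand.landmarks[0];
    const indexTip = hand.landmarks[8];
    console.log('index-to-wrist distance:', landmarkDistance(wrist, indexTip));
  }
  return hands;
}
```

Distances between landmarks like this are the raw material for simple gesture heuristics (e.g. detecting a pinch when two fingertips come close together).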

Further development of these tools could enable emotion and gesture recognition, and may change how we interact with content on the web.

Real-time volume recognition

Most object-recognition research focuses on two-dimensional predictions, while 3D prediction opens up a wide range of applications, from self-driving vehicles to augmented reality.

The creators of the open-source framework MediaPipe have introduced a new tool, Objectron, which computes three-dimensional bounding boxes for objects in real time on mobile devices. You can already try the mobile app, with models trained to recognize chairs and shoes.

Using BERT in the browser with TensorFlow.js

Based on a MobileBERT Q&A model, the authors of the article created a Chrome extension that works like on-page search, except that you type a question and the extension tries to find an answer to it on the page.

For example, in an article about crabs the authors asked "How do crabs move?", and the algorithm highlighted the text fragment "Usually crabs move sideways". On a page with a lasagna recipe, they asked how long it takes to bake and received the answer: 25 minutes.
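The same question-answering can be tried outside the extension, since the model is published as the `@tensorflow-models/qna` package. A minimal sketch (API names per the package README; the `bestAnswer` helper is our own illustration):

```javascript
// Browser setup via script tags (package: @tensorflow-models/qna):
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/qna"></script>

// Pure helper (ours): pick the highest-scoring answer, or null if none found.
function bestAnswer(answers) {
  if (!answers.length) return null;
  return answers.reduce((best, a) => (a.score > best.score ? a : best));
}

async function ask(question, passageText) {
  const model = await qna.load();               // `qna` is the script-tag global
  // Each answer has { text, score, startIndex, endIndex } relative to the passage.
  const answers = await model.findAnswers(question, passageText);
  return bestAnswer(answers);
}
```

In the extension, `passageText` would be the visible text of the current page, and `startIndex`/`endIndex` are what make it possible to highlight the answer fragment in place.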

Less successful examples are also given, but the potential of this model is already evident.



That's all, thanks for reading!
