Zero Trust Information

Google recently moved all of its North American employees to remote work as one of the measures to limit the spread of SARS-CoV-2, the virus that causes COVID-19. This is the right call for any company that can manage it. Moreover, Google and a number of other major technology companies plan to continue paying the army of contractors who normally service their employees.

However, Google took an even more significant step five years ago, when it led the shift to zero trust networking for its internal applications; most other technology companies have since followed suit. Although the move was not made so that employees could work from home, it is what now makes it possible to send people remote on short notice.

Zero Trust Network


In 1974, Vint Cerf, Yogen Dalal, and Carl Sunshine published a groundbreaking paper called Specification of Internet Transmission Control Program. Technologically, it laid the foundation for the TCP protocol on which the Internet is built. No less remarkable is that the paper introduced the term "internet," and it did so almost in passing: most of the document deals with the "internetwork" transmission control program and "internetwork" packets. The networks themselves already existed; what was needed was a way to connect them to each other.

Initially, most of those networks were commercial ones. In the 1980s, Novell built a "network operating system" consisting of local servers, network cards, and PC client software; it let large companies create internal networks for exchanging files and sharing printers and other resources. Novell's market position was eventually undermined by the arrival of networking in client operating systems, the commoditization of network cards, missteps in channel management, and the full-scale offensive of Microsoft, but the model of a corporate intranet with shared resources survived.

What also survived, though, was the problem posed by the Internet itself: connecting even one computer on the local network to the Internet effectively meant connecting every computer and server on that intranet to the Internet. The answer was perimeter-based security, the so-called castle-and-moat approach: companies deployed firewalls to keep outsiders away from the internal network. The assumption was binary: if you were on the intranet you were trusted; if you were outside, you were not.
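
To make the model concrete, here is a minimal sketch in Python (my own illustration, with made-up addresses, not anything from the article): under the perimeter approach, trust is decided entirely by whether a request originates inside the corporate address space.

    # Toy illustration of castle-and-moat thinking (hypothetical addresses):
    # trust is granted or denied purely by network location.
    import ipaddress

    CORPORATE_NETWORK = ipaddress.ip_network("10.0.0.0/8")  # assumed intranet range

    def is_trusted(source_ip: str) -> bool:
        """Perimeter model: anyone inside the moat is trusted, everyone outside is not."""
        return ipaddress.ip_address(source_ip) in CORPORATE_NETWORK

    print(is_trusted("10.1.2.3"))     # True:  inside the castle, full access
    print(is_trusted("203.0.113.7"))  # False: outside the moat, no access

Everything hinges on that single check, which is why one breach of the perimeter, or one employee outside it, breaks the model.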


But there were two problems:

  1. If an attacker gets past the firewall, they gain access to the entire network.
  2. Any employee who is not physically in the office has no access to the intranet.

The second problem was addressed with VPNs (virtual private networks), which use encryption to let a remote worker's computer behave as if it were physically on the corporate network. More important, though, is the fundamental contradiction these two problems expose: you need to grant access from the outside while keeping outsiders out.

These problems have been dramatically exacerbated by three major trends of the last decade: smartphones, SaaS (software-as-a-service), and cloud computing. Today, instead of the occasional salesperson or traveling executive needing to connect a laptop to the corporate network, practically every employee carries a mobile device with a permanent connection to the intranet. Instead of accessing applications hosted on the internal network, employees need access to applications hosted by SaaS providers. And instead of being deployed on premises, corporate resources increasingly live in public clouds. What moat can possibly cover all of these scenarios?

The answer is not to try: instead of putting everything inside the castle, put everything outside the moat and treat every user as a potential threat. Hence the name: zero trust networking.


In this model, trust attaches to the verified individual: access (usually) depends on multi-factor authentication, such as a password plus a trusted device or a one-time code. Even after authenticating, an individual gets access to only a narrowly scoped set of resources or applications. This resolves all of the problems inherent to the castle-and-moat approach (a toy sketch of such an access check follows the list below):

  • If there is no internal network, there is no longer any concept of an outside intruder or of a remote worker.
  • Individual verification scales from the user down to devices and applications, and outward from on-premises resources to SaaS applications and public clouds (especially when built on single-sign-on identity providers such as Okta or Azure Active Directory).
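
By way of contrast with the perimeter sketch above, here is a minimal zero trust sketch in Python; the policy, user names, and application names are hypothetical, invented purely for illustration and not drawn from Google or any identity provider.

    # Minimal zero trust sketch (hypothetical policy): access depends on who you
    # are, how you authenticated, and what you are entitled to touch, never on
    # whether the request came from "inside" the network.
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user: str
        mfa_passed: bool       # e.g. password plus a one-time code
        device_trusted: bool   # device is enrolled and considered healthy
        application: str       # the specific application being requested

    # Per-application allowlist; illustrative only.
    APP_PERMISSIONS = {
        "expense-reports": {"alice", "bob"},
        "source-code": {"alice"},
    }

    def is_allowed(req: AccessRequest) -> bool:
        """Grant access only to a verified user on a trusted device, and only to
        an application that user is explicitly entitled to; deny everything else."""
        return (
            req.mfa_passed
            and req.device_trusted
            and req.user in APP_PERMISSIONS.get(req.application, set())
        )

    print(is_allowed(AccessRequest("alice", True, True, "source-code")))   # True
    print(is_allowed(AccessRequest("bob", True, True, "source-code")))     # False: not entitled
    print(is_allowed(AccessRequest("alice", False, True, "source-code")))  # False: no MFA

The point of the contrast is that the decision is made per request and per application, so it works identically for an employee at headquarters, at home, or on a phone.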

In short, zero trust computing starts from Internet assumptions: everyone and everything is connected, both good and bad. It leverages the zero transaction costs of digital verification to make continuous access decisions at a far more distributed and granular level than physical security could ever manage, rendering moot the fundamental contradiction at the core of the castle-and-moat approach.

Castles and moats


The castle-and-moat model is not limited to corporate data. It is the paradigm in which society has thought about information since, well, the days of actual castles and moats. Last fall I wrote in The Internet and the Third Estate:

In the Middle Ages, the primary organizing entity in Europe was the Catholic Church, which held a de facto monopoly on the distribution of information: most books were in Latin and were copied by hand by monks. There was some degree of ethnic kinship between members of the nobility and the commoners on their lands, but under the umbrella of the Catholic Church Europe consisted mainly of independent city-states.

Castles and moats, if you will!

The printing press changed everything. Suddenly Martin Luther, whose critique of the Catholic Church was strikingly similar to that of Jan Hus a hundred years earlier, was no longer confined to a small local area when spreading his views (in Hus's case it was Prague); his ideas could reach all of Europe. The nobility seized the opportunity to interpret the Bible in ways that suited their own interests, gradually pushing the Catholic Church out of governance.

This led to the emergence of new gatekeepers:

Just as the Catholic Church had ensured its control over information, so did the modern meritocracy, not so much by controlling the press as by incorporating it into a broader national consensus.

Again, economics played a role: although books were still sold for profit, over the past century and a half newspapers came to be the more widely read medium, and then television became dominant. All of these, however, were delivery vehicles for "the press," generally funded by advertising that was inextricably tied to large companies ... In a broader sense, the press, big business, and politicians all operated within a general national consensus.

The Internet, however, threatens this second class of gatekeepers by making everyone a publisher.

It is hard to overstate what an understatement that is. I have just recounted how the printing press effectively overthrew the First Estate, leading to the emergence of nation-states and the creation and empowerment of a new nobility. The implications of overthrowing the Second Estate by empowering commoners are almost impossible to imagine.

Today's gatekeepers see this as a disaster, and the word they use is "misinformation." Everything from Macedonian teenagers to Russian intelligence operatives, partisans, and politicians is treated as an existential threat, and the reasons are understandable: the traditional media model rests on being the primary source of information, and if false information circulates freely, then society, in this telling, faces a crisis of disinformation.

Consequences of increased information


The problem, of course, is that focusing on misinformation (which certainly exists) loses sight of the other half of the "everyone is a publisher" equation: there has been an explosive increase in the amount of information, both true and false. Suppose, purely for illustration, that all published information follows a normal distribution (I am not claiming this is actually the case; if anything, because misinformation is easier to generate, its share is probably larger):


Before the Internet, the total amount of misinformation was low in absolute terms, simply because the total amount of information was small:


But the Internet has increased the total amount of information so much that even if the share of misinformation has stayed roughly the same, its absolute amount has grown enormously:
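
To make the arithmetic concrete, here is a toy sketch; the shares and totals below are invented purely for illustration, and the sketch also anticipates the point made further down about valuable information.

    # Toy arithmetic (invented numbers): hold the shares of the two tails constant
    # and grow the total amount of information. The absolute amount of
    # misinformation explodes, but so does the absolute amount of valuable
    # information.
    MISINFORMATION_SHARE = 0.05  # assumed constant share of the low-quality tail
    VALUABLE_SHARE = 0.05        # assumed constant share of the high-quality tail

    for label, total_items in [("Before the Internet", 10_000),
                               ("With the Internet", 10_000_000)]:
        misinformation = int(total_items * MISINFORMATION_SHARE)
        valuable = int(total_items * VALUABLE_SHARE)
        print(f"{label}: {misinformation:,} misinformation items, {valuable:,} valuable items")

The proportions never change in this toy model; only the absolute quantities do, which is exactly the point of the illustration.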


Therefore, it is far easier to find misinformation today, and search tools make it easier still. That, in turn, makes it easy to write stories like this one in The New York Times:

As the coronavirus spreads around the world, so does misinformation about it, despite aggressive countermeasures from the social media companies. Facebook, Google and Twitter said they were removing false information about the coronavirus as soon as it was found, and were working with the World Health Organization and other government organizations to protect people from inaccurate information.

Yet a review by The New York Times found dozens of videos, photographs and written posts on each of the social media platforms that appeared to have slipped past their screeners. The posts were not limited to English: they ranged from Hindi and Urdu to Hebrew and Farsi, reflecting the trajectory of the virus as it traveled around the world. The spread of false and malicious information about the coronavirus has been a stark reminder of the uphill battle fought by researchers and Internet companies. Even when the companies are determined to protect the truth, they are often outrun by Internet liars and cheats. There is so much inaccurate information about the virus that the WHO says it is facing an "infodemic."

I also wrote in the Daily Update:

That phrase, "a review by The New York Times," is telling: the power of search over the world's abundance of information is that you can find whatever you are looking for. Unsurprisingly, The New York Times went looking for misinformation on the major technology platforms, and even less surprisingly, its journalists found it.

What I find far more interesting, though, is the other side of the distribution. Yes, the fact that the Internet lets anyone be a publisher has increased the absolute amount of misinformation, but the same is true of valuable information that previously would never have been available:


It is hard to find a better example than the last two months of the spread of COVID-19. Since January an enormous amount of information about SARS-CoV-2 and COVID-19 has appeared on Twitter, including first-hand reports and links to medical papers published at remarkable speed, often contradicting the traditional media. Many experts, including epidemiologists and public health officials, have weighed in as well.

Over the past few weeks this teeming network has begun to sound the alarm about the crisis bearing down on the United States. It is thanks to Twitter alone that we learned the crisis had begun long before (to return to the normal-distribution illustration, the impact of any single piece of information falls as the total amount grows).

Seattle Flu Study Story


Perhaps the most important information about the COVID-19 crisis in the United States came from the reports of Trevor Bedford, a member of the Seattle Flu Study team:





You can draw a direct line from these messages to the widespread social distancing that followed, especially on the West Coast: many companies switched to remote work, travel ground to a halt, conferences were canceled. Yes, there should be far more of it, but every bit helps, and information that came not from officials or gatekeepers but from Twitter will have saved lives.

What is notable, though, is that these decisions were made in the absence of official data. For weeks the president downplayed the impending crisis, and the CDC and FDA tied the hands of public and private laboratories even as their own test kits, which could have identified a significant and rapidly growing number of cases, were badly botched. Incredibly, as documented in an article in The New York Times, Bedford's team was obstructed as well:

[In late January] the Washington State Department of Health began discussing the Seattle Flu Study already underway in the state. But there was a hitch: the project relied mainly on research laboratories, not clinical ones, and its coronavirus test had not been approved by the Food and Drug Administration. The group was therefore not allowed to share test results with anyone other than the researchers themselves ...

CDC officials repeatedly insisted that this [testing for the coronavirus] was not possible. "If you want to use your test as a screening tool, you would have to check with F.D.A.," Gayle Langley, an officer at the National Center for Immunization and Respiratory Diseases, wrote on February 16. But the F.D.A. could not grant approval, because the laboratory was not certified as a clinical lab under regulations established by the Centers for Medicare & Medicaid Services, a certification process that could take months.

As a result, the Seattle Flu Study researchers, led by Dr. Helen Chu, decided to ignore the CDC and begin testing their samples for coronavirus anyway.

The alarming find changed officials' minds about the epidemic: the Seattle Flu Study participants quickly sequenced the virus's genome and discovered a genetic variation that was also present in the country's first confirmed case of coronavirus infection.

That is what set off Bedford's alarm, and the response from private companies and individuals. Even if they reacted a few weeks later than they should have, it was still far earlier than it would have happened in a world of information gatekeepers.

The Internet and Individual Verification


As is well known, the Internet grew out of ARPANET, a US Department of Defense project; that was the network for which Cerf, Dalal, and Sunshine developed TCP. Contrary to popular myth, however, the goal was not to build a communications network that could survive a nuclear attack. The reality was more prosaic: researchers had access to only a handful of high-performance computers, and the Advanced Research Projects Agency (ARPA) wanted to make it easier to share them.

Still, the nuclear-attack theory is popular for a reason. First, surviving such an attack was part of the motivation for the theoretical work on packet switching that eventually became TCP/IP. Second, the Internet really is resilient: despite the best efforts of the gatekeepers, information of every kind flows freely across the network (China excepted), misinformation included, but extremely valuable information too. In the case of COVID-19, that has made a very serious problem somewhat better.

None of this means that the openness of the Internet will solve every problem, whether in the world at large or in the story of the coronavirus. But when we come through this crisis, we should remember the story of Twitter and the heroic Seattle Flu Study researchers: excessive centralization and red tape held back critical research, and it was the scientists' sense of duty, together with the fact that anyone can publish on the Internet, that got the findings out to people and companies across the country so quickly.

So instead of fighting the Internet by building castles and moats around information, with all the crazy trade-offs that entails, it is worth asking what embracing the situation might gain us. Everything suggests that young people, at least, understand the importance of individual verification. Consider, for instance, this study from the Reuters Institute at Oxford:

Our interviews did not find, among young people, the crisis of trust in the media that we so often hear about. There is a general wariness of politicized opinion, but also high regard for favorite brands. Fake news itself is seen more as a nuisance than as a meltdown of democracy, especially since the scale of the problem does not match the attention paid to it, and users feel empowered to deal with it themselves.

The same study found that social media expose users to a wider range of viewpoints than offline news, and the authors of another study conclude that political polarization is greatest among the older generations, who use the Internet the least.

Again, I am not saying that everything is fine, neither with the coronavirus story in the short term nor with social media and the unmediated flow of information in the medium term. But there are reasons for optimism, and the situation will improve faster the sooner we accept that fewer gatekeepers and more information mean more innovation and good ideas, in proportion to a flood of misinformation that young people who grew up with the Internet are already learning to tune out.
