MIP* = RE: a landmark proof from computer science that set off a domino effect in physics and mathematics

Computer scientists have reached a new frontier in verifying the solutions to computational problems. In doing so, they have also answered major open questions in quantum mechanics and pure mathematics.

In 1935, Albert Einstein, working with Boris Podolsky and Nathan Rosen, explored a possibility opened up by the new laws of quantum physics: two particles can be in an entangled state, their behavior correlated even across enormous distances. The following year, Alan Turing formulated the first general theory of computing and proved that there are problems computers can never solve. These two ideas revolutionized their respective fields, and they seemed to have nothing to do with each other. But now the proof MIP* = RE has combined them, leading to the solution of a host of problems in computer science, physics, and mathematics.

Quantum computers perform calculations on entangled quantum bits (qubits) rather than classical 0s and 1s. The new proof shows that, in theory, such computers can be used to verify the solutions to an enormous range of problems. The connection between quantum entanglement and traditional computing came as a big surprise to many researchers.

“It was a complete surprise,” said Miguel Navascués, who studies quantum physics at the Institute for Quantum Optics and Quantum Information in Vienna, commenting on the proof.

The authors of the proof set out to determine the limits of an approach to verifying solutions to computational problems, one that relies on quantum entanglement. Pinning down those limits led the researchers, almost as a byproduct, to the resolution of two other problems: Tsirelson's problem in physics, concerning the mathematical modeling of quantum entanglement, and a related problem in pure mathematics, the Connes embedding problem in the theory of von Neumann algebras.

The consequences of the proof rippled through mathematics and physics like falling dominoes.

“All the ideas come from the same era. It's nice that they've come back together again in such a spectacular way,” says Henry Yuen of the University of Toronto, one of the proof's co-authors. The other authors are Zhengfeng Ji of the University of Technology Sydney, John Wright of the University of Texas at Austin, and Anand Natarajan and Thomas Vidick of the California Institute of Technology. All five are computer scientists.

Unsolvable problems


Even before computers existed, Turing laid the foundation on which all thinking about computation rests. At the same time, he showed that there is a problem that computers provably cannot solve: the so-called halting problem.

Typically, computer programs take an input and produce an output. But sometimes they get stuck in infinite loops, meaning they never halt. When that happens, there is only one way out. As Henry Yuen puts it, you have to kill the program manually: you simply put an end to its run.

Turing proved that there is no universal algorithm that can determine whether a given program will ever halt. The only way to find out is to run the program and wait.
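To see why, here is a minimal Python sketch of Turing's diagonal argument; the function names are illustrative, not taken from any paper. Any candidate "halting oracle" can be defeated by a program built to contradict it.

```python
def make_paradox(halts):
    """Given any claimed halting oracle halts(prog, arg) -> bool,
    build a program that defeats it (Turing's diagonal argument)."""
    def paradox(arg):
        if halts(paradox, arg):
            while True:        # the oracle said "halts": loop forever
                pass
        return                 # the oracle said "loops": halt at once
    return paradox

# Any concrete candidate oracle fails on its own diagonal input:
candidate = lambda prog, arg: True   # a (necessarily wrong) guess
p = make_paradox(candidate)
print(candidate(p, p))  # True: the candidate claims p(p) halts...
# ...but by construction p(p) would then loop forever. Whichever way
# the oracle answers, it is wrong here, so no universal decider exists.
```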


Henry Yuen, Thomas Vidick, Zhengfeng Ji, Anand Natarajan, John Wright

“You've waited a million years, and the program hasn't halted. Do you just need to wait two million years? There's no way to tell in advance,” says William Slofstra, a mathematician at the University of Waterloo.

In technical terms, Turing proved that the halting problem is undecidable. Even the most powerful computer imaginable cannot solve it.

After Turing, computer scientists began classifying other problems by their difficulty. Harder problems require more computational resources: more processor time and more memory. This is the study of computational complexity.

Ultimately, every problem poses two big questions for researchers: “How hard is it to solve?” and “How hard is it to verify that a given solution is correct?”

Verifying solutions through interrogation


If problems are relatively simple, their solutions can be checked independently. But as problems get harder, even checking a solution can become incredibly difficult. Despite this, in 1985 computer scientists realized that it is possible to develop confidence that an answer is correct even when you cannot confirm it yourself.

The verification method follows the logic of a police interrogation.

If a suspect tells a carefully constructed story, an investigator probably cannot just go out and confirm every detail of it. But by asking the right questions, he can either catch the suspect in a lie or confirm that the story holds up.

In computer science terms, the two sides of the interrogation are two computers. The first is a powerful machine that proposes a solution to the problem; it is called the prover. The second is a much weaker device that asks the prover questions to determine whether the answer is correct; it is called the verifier.

Consider a simple example. Suppose someone (the verifier) is color-blind. Someone else (the prover) claims that two balls are different colors but otherwise identical. The verifier cannot check this claim on his own, but with a cleverly constructed interrogation he can still find out whether it is true.

The verifier can hide the balls behind his back and shuffle them, then ask the prover which ball is in which hand. If the balls really are different colors, the prover can always answer correctly. If they are the same color, and thus look exactly alike, half of the prover's answers will be wrong.

Thomas Vidick notes that if far more than half of the prover's answers turn out to be correct, you can be quite confident the balls really are different colors.
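A minimal Python sketch of this logic (the setup is hypothetical and only illustrates the statistics): an honest prover passes every round, while a liar is exposed exponentially fast.

```python
import random

def interrogate(balls_really_differ, rounds=20):
    """Sketch of the color-blind verifier's test: hide the balls,
    shuffle them, and ask the prover which is which, `rounds` times."""
    for _ in range(rounds):
        if balls_really_differ:
            correct = True                    # the prover can genuinely tell
        else:
            correct = random.random() < 0.5   # identical balls: a pure guess
        if not correct:
            return False                      # one wrong answer exposes the lie
    return True                               # all answers consistent: accept

# An honest prover always passes; a liar survives 20 rounds with
# probability (1/2)**20, roughly one chance in a million.
print(interrogate(True), interrogate(False))
```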

By interrogating the prover, you can verify solutions to a much wider class of problems than you could check on your own.

In 1988, computer scientists considered a setup with two provers offering solutions to the same problem. After all, having two suspects to interrogate makes it easier to crack the case, or verify the solution, because they can be played against each other.

According to Thomas Vidick, this gives the verifier more leverage. The verifier interrogates both, asks related questions, and cross-checks the answers. If the suspects are telling the truth, their answers should agree most of the time. If they are lying, their answers will frequently conflict.

Accordingly, researchers showed that by interrogating the two provers separately about the solutions they found, a verifier can quickly check an even broader class of problems than with a single prover.

The study of computational complexity may look purely theoretical, but it is tightly bound to the real world. The resources computers need to solve problems and verify solutions, time and memory, are physical resources. For this reason, new discoveries in physics can change the study of computational complexity.

Anand Natarajan puts it this way: if you abandon the classical physics underlying computation and choose something entirely different, such as quantum mechanics, you also get a new theory of computational complexity.

The new proof is the end result of 21st-century computer scientists colliding with one of the strangest ideas in the physics of the previous century: quantum entanglement.

The Connes problem


When two particles are entangled, neither actually affects the other; there is no causal relationship between their behavior. Einstein and his co-authors developed this idea in their 1935 article. Afterward, physicists and mathematicians tried to work out a mathematical way to describe what entanglement really means.

Yet these efforts produced a muddled result: scientists came up with two different mathematical models of entanglement, and it was unclear whether the two models were equivalent.

In a roundabout way, this potential dissonance gave rise to an important problem in pure mathematics: the Connes problem in the theory of von Neumann algebras, also known as the Connes embedding problem. In the end, it served as the opening that the five computer scientists exploited in their proof.

The first way to model quantum entanglement was to treat the particles as spatially isolated from each other. Suppose one particle is on Earth and the other is on Mars; the distance between them rules out any causal relationship (they are causally separated). This is called the tensor product model.

But in some situations it is not obvious how to treat two entities as causally separated. So mathematicians devised a second, more general way to describe causal independence.

When the order in which two operations are performed does not affect the result, the operations are said to commute: 3 × 2 is the same as 2 × 3. In this second model, particles are entangled when their properties are correlated but the order of measurements does not matter. You can measure particle A to predict the momentum of particle B, or the other way around; either way, the result is the same. This is called the commuting operator model of entanglement.

Both descriptions of entanglement use arrays of numbers called matrices, organized into rows and columns. The tensor product model uses matrices with a finite number of rows and columns. The commuting operator model uses more general objects that behave like matrices with infinitely many rows and columns.
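The relationship between the two models can be illustrated with a small numerical sketch (using numpy, with random observables chosen purely for illustration). In the tensor product model, Alice's operator acts only on her factor and Bob's only on his, so the two automatically commute; the commuting operator model keeps the commutation requirement but drops the demand that the space split into factors at all.

```python
import numpy as np

# Hermitian observables for Alice and Bob, each on a 2-dimensional system.
A = np.random.randn(2, 2); A = A + A.T
B = np.random.randn(2, 2); B = B + B.T
I = np.eye(2)

# Tensor product model: A acts on Alice's factor, B on Bob's.
A_full = np.kron(A, I)   # A (x) I on the joint 4-dimensional space
B_full = np.kron(I, B)   # I (x) B on the joint space

print(np.allclose(A_full @ B_full, B_full @ A_full))  # True: they commute
```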

Over time, mathematicians began to study such matrices as objects of independent interest, with no connection to the physical world. In the course of this work, the mathematician Alain Connes conjectured in 1976 that it should be possible to approximate many infinite-dimensional matrices with finite-dimensional ones. This is one implication of the Connes embedding problem in the theory of von Neumann algebras.

In the following decade, the physicist Boris Tsirelson formulated a version of the problem that brought it back into physics. Tsirelson conjectured that the tensor product and commuting operator models are approximately equivalent. This makes sense, since in theory the two models are just different ways of describing the same physical phenomenon. Subsequent work showed that, because of the connection between matrices and the physical models that use them, the Connes and Tsirelson problems are linked: solve one, and you solve the other.

But the solution to both of these problems came from a completely unexpected place.

The physics of games


In the 1960s, the physicist John Bell devised a test to determine whether quantum entanglement is a real physical phenomenon rather than a theoretical notion. The test involved something like a game whose outcome reveals whether something beyond ordinary, classical physics is at work.

Computer scientists would later realize that this entanglement test can also be used as a tool for verifying solutions to very complex problems.

But first, the game. Imagine two players, Alice and Bob, and a 3×3 grid. A referee assigns Alice a row and tells her to fill each of its cells with a 0 or a 1 so that the entries sum to an odd number. Bob is assigned a column, which he must fill with 0s and 1s summing to an even number. They win if they put the same number in the cell where the row and column intersect. The two are not allowed to communicate.

Under ordinary, classical circumstances, the best they can do is win 89% of the time.
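That 89% figure can be checked directly with a short brute-force enumeration (a sketch, assuming exactly the rules described above). It runs over all deterministic classical strategies; shared randomness cannot do better, since it merely averages over deterministic strategies.

```python
from itertools import product

# Alice must fill her row with odd parity, Bob his column with even parity.
odd_rows  = [r for r in product([0, 1], repeat=3) if sum(r) % 2 == 1]
even_cols = [c for c in product([0, 1], repeat=3) if sum(c) % 2 == 0]

best = 0.0
# A deterministic strategy fixes one filling per possible question:
# Alice picks a filling for each of the 3 rows, Bob for each of the 3 columns.
for alice in product(odd_rows, repeat=3):
    for bob in product(even_cols, repeat=3):
        wins = sum(alice[r][c] == bob[c][r]      # agree on the intersection?
                   for r in range(3) for c in range(3))
        best = max(best, wins / 9)

print(best)  # 0.888... = 8/9: no classical strategy wins more than ~89%
```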

Now imagine that Alice and Bob share entangled quantum particles. Each measures their particles and uses the results to decide whether to write 0 or 1 in each cell. Because the particles are entangled, the measurement outcomes are correlated, which means the players' answers are correlated too. And that turns out to be enough for them to win the game every single time.


If the numbers in the cell where the row and column intersect differ, the game is lost. If they match, it is won.

So if two players prove surprisingly successful at this game, we can conclude that they are using something classical physics cannot provide. Such Bell-type experiments are now called “nonlocal” games, in reference to the players' separation, and physicists actually run them in laboratories.

Henry Yuen notes that over the years scientists have run such experiments to confirm the reality of these spooky phenomena.

As with any game, one may want to know how often players can win a nonlocal game, assuming they play as well as possible. For solitaire, for example, you can calculate how often a perfect player wins.

But in 2016, William Slofstra proved that there is no general algorithm for computing the exact maximum winning probability for all nonlocal games. So researchers asked: can the maximum winning probability at least be computed approximately?

Computer scientists arrived at an answer using the two models of quantum entanglement described above. An algorithm based on the tensor product model establishes a floor, a lower bound on the approximate maximum winning probability of a nonlocal game. Another algorithm, based on the commuting operator model, establishes a ceiling, an upper bound.

The longer these algorithms run, the more accurate their answers become. If Tsirelson's conjecture is true and the two models are equivalent, the lower and upper bounds should squeeze ever closer together, converging to a single value: the approximate maximum winning probability.

But if Tsirelson's conjecture is false and the two models are not equivalent, then, as Henry Yuen explains, the upper and lower bounds will remain separated forever, and there will be no way to compute even an approximate maximum winning probability for nonlocal games.
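In symbols (the notation here is mine, not the paper's): write $\omega_t(G)$ and $\omega_c(G)$ for the optimal winning probabilities of a game $G$ in the tensor product and commuting operator models. The two algorithms produce a squeeze:

```latex
\alpha_1 \le \alpha_2 \le \cdots \le \omega_t(G)
  \;\le\; \omega_c(G) \le \cdots \le \beta_2 \le \beta_1
```

The lower bounds $\alpha_k$ come from searching over ever-larger finite-dimensional strategies and converge upward to $\omega_t(G)$; the upper bounds $\beta_k$ come from a hierarchy of relaxations and converge downward to $\omega_c(G)$. Tsirelson's conjecture asserts $\omega_t(G) = \omega_c(G)$ for every game, in which case the sandwich pins down the value to any desired accuracy.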

In their new work, the five researchers used this question, whether the bounds converge and whether Tsirelson's conjecture is true or false, to attack a different one: when is it possible to verify solutions to computational problems?

Help from entanglement


In the early 2000s, computer scientists began to wonder how the range of verifiable problems would change if the two provers being “interrogated” shared entangled particles.

Most assumed entanglement would hurt verification. After all, two “suspects” can more easily coordinate false testimony if they have some way of correlating their answers.

But over the next few years it became clear that the opposite is true: by interrogating provers who share entangled particles, one can verify a much broader class of problems than with provers who have none.

Thomas Vidick explains that entanglement is a way of creating correlations that, you might think, would help the provers lie. In fact, the phenomenon can be turned to the verifier's advantage.

To understand how, you first need to grasp the almost otherworldly scale of the problems whose solutions can be verified through this interactive procedure.

Imagine a graph: a set of points (vertices) connected by lines (edges). You want to know whether the vertices can be painted with three colors so that no two vertices connected by an edge share a color. If they can, the graph is “three-colorable.”

If you hand a pair of provers a very large graph and they report that it is three-colorable, you might wonder: is there a way to verify their answer?

For very large graphs, checking the answer directly would be impossible. Instead, you can ask each prover to report the color of one of the two endpoints of some edge. If the two report different colors, and keep giving consistent answers every time they are asked, you gain confidence that the graph really is three-colorable.
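Here is a minimal Python sketch of this classical two-prover test (the protocol is simplified and the names are illustrative): on each round the verifier either cross-checks the provers against each other on the same vertex or tests the endpoints of a random edge.

```python
import random

def verify_round(vertices, edges, prover1, prover2):
    """One round of a simplified two-prover test for 3-colorability.
    prover1 and prover2 map a vertex to a color in {0, 1, 2}; they are
    interrogated separately and cannot communicate."""
    if random.random() < 0.5:
        # Consistency check: both provers must report the same color
        # for the same vertex, or they are clearly improvising.
        v = random.choice(vertices)
        return prover1(v) == prover2(v)
    # Edge check: the two endpoints of an edge must differ in color.
    u, v = random.choice(edges)
    return prover1(u) != prover2(v)

# Honest provers sharing a genuine 3-coloring pass every round; if the
# graph is not 3-colorable, every claimed coloring violates some edge,
# so repeated rounds expose the lie with overwhelming probability.
coloring = {"a": 0, "b": 1, "c": 2}
answer = lambda v: coloring[v]
edges = [("a", "b"), ("b", "c"), ("a", "c")]
print(all(verify_round(list(coloring), edges, answer, answer)
          for _ in range(1000)))  # True
```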

But even this interrogation strategy fails on huge graphs with more edges and vertices than there are atoms in the universe. Even posing a specific question (“Tell me the color of vertex XYZ”) becomes more than a verifier can manage: the amount of data needed merely to name a single vertex exceeds all the memory the verifier has.

But quantum entanglement makes possible a scheme in which the provers formulate the questions themselves.

“The verifier doesn't need to pose the questions. The verifier forces the provers to formulate the questions themselves and answer them,” says John Wright.

The verifier needs the provers to report the colors of connected vertices. If the vertices are not connected, the answers reveal nothing about whether the graph is three-colorable. In other words, the verifier needs the provers to ask correlated questions: one prover asks itself about vertex ABC, the other about vertex XYZ, and the two vertices must be connected, even though neither prover knows which vertex the other is “thinking” of. (This is like Alice and Bob hoping to write the same number in the same cell even though neither knows which row or column the other has been given.)

If the two provers formulated such questions completely independently, there would be no way to force them to pick connected, correlated vertices in a manner that lets the verifier check their answers. But that kind of correlation is exactly what quantum entanglement can achieve.

Thomas Vidick says the idea is to use entanglement to shift nearly everything onto the provers: the provers are made to choose the questions themselves.

At the end of the procedure, each prover reports a color, and the verifier checks whether the colors differ. If the graph really is three-colorable, the provers should never report the same color.

According to Henry Yuen, if the graph is three-colorable, the provers will be able to convince the verifier that this is indeed the case.

As it turns out, this verification procedure is yet another example of a nonlocal game: the provers “win” if they convince the verifier that their solution is correct.

In 2012, Thomas Vidick and Tsuyoshi Ito proved that a wide variety of nonlocal games can be played with entangled provers to verify solutions, covering at least as many problems as can be verified by interrogating two classical computers. So using entangled provers does not hurt verification. And last year, Anand Natarajan and John Wright proved that interacting with entangled provers actually expands the class of problems whose solutions can be verified.

What computer scientists did not know was the full range of problems that can be verified this way. Now they do.

Domino effect


In their new work, the five scientists proved that interrogating entangled provers makes it possible to verify solutions to undecidable problems, including the halting problem.

Henry Yuen says the verification power of this kind of model is unimaginable.

But the halting problem is undecidable, and that fact is the engine of the final proof.

Imagine handing a program to a pair of entangled provers and asking whether it ever halts. You are prepared to verify their answer through a kind of nonlocal game: the provers generate the questions and “win” by giving coordinated answers.

If the program really does halt, the provers should be able to win the game 100% of the time, just as, in the graph example, entangled provers never report the same color for connected vertices when the graph truly is three-colorable. If the program never halts, the provers should win only by chance: 50% of the time.

This means that if someone asks you to determine the approximate maximum winning probability of that particular nonlocal game, you would first have to solve the halting problem. But that is impossible. Therefore, finding the approximate maximum winning probability of nonlocal games is just as impossible as the halting problem itself.
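Put compactly (the notation is mine): the construction turns a program $M$ into a nonlocal game $G_M$ whose optimal entangled winning probability $\omega^*(G_M)$ encodes halting:

```latex
\omega^*(G_M) =
\begin{cases}
1 & \text{if } M \text{ halts,} \\
\le \tfrac{1}{2} & \text{if } M \text{ runs forever.}
\end{cases}
```

Any algorithm that approximated $\omega^*$ to within, say, $1/4$ would therefore decide the halting problem, which Turing ruled out.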

And this, in turn, means that Tsirelson's conjecture is false: the two models of quantum entanglement are not equivalent. If they were, the lower and upper bounds would squeeze together into a single value, and the approximate maximum winning probability could be computed.

David Pérez-García of the Complutense University of Madrid says that such an algorithm cannot exist, and therefore the two models must be different.

The new work proves that the class of problems whose solutions can be verified with the help of entangled quantum provers, a class called MIP*, is exactly equal to the class of problems no harder than the halting problem, the class RE. The paper's title states this concisely: “MIP* = RE.”

In proving the two complexity classes equal, the scientists proved Tsirelson's conjecture false, which, thanks to the earlier work, means the Connes embedding conjecture is false as well.

Researchers in the affected fields were stunned that solutions to such major problems emerged from a proof in computer science that, on its face, has nothing to do with mathematics or physics.

Miguel Navascués says that if he had seen a paper titled MIP* = RE, he would never have guessed it had anything to do with his own work. He co-authored the earlier work linking the Tsirelson and Connes problems; for him, the result was a complete surprise.

Quantum physicists and mathematicians are only beginning to digest the new proof. Mathematicians had long wondered whether they could approximate infinite-dimensional matrices by using large finite-dimensional ones instead. Now, with the Connes embedding conjecture shown false, they know they cannot.

“Their results indicate that this is not possible,” said William Slofstra.

The computer scientists themselves did not set out to tackle the Connes embedding conjecture, so they are not in the best position to explain the consequences of settling it.

“Personally, I'm not a mathematician. I don't understand the original formulation of the Connes embedding problem very well,” says Anand Natarajan.

He and his co-authors expect that mathematicians will translate the new result into the language of their own field. In a publication about the proof, Thomas Vidick wrote: “I have no doubt that, in the end, the theory of computational complexity will not be needed to obtain the purely mathematical results.”

Still, as other researchers absorb the proof, the line of research that produced it is reaching its conclusion. For more than three decades, computer scientists have simply been trying to find out how far interactive verification of solutions would take them. Now the answer lies before them, in the form of a long paper with a simple title and with echoes of Turing's ideas.

“There is a long line of work whose authors only wondered what a verification procedure with two entangled quantum provers could do. Now we know how powerful it is. That story has come to an end,” says Anand Natarajan.

Dear readers! What does the proof of MIP* = RE mean for the field you work in?

