On concurrent coroutinism (using reactive programming as an example)

1. Introduction


The competition for the minds, moods, and aspirations of programmers is, it seems to me, a modern trend in the development of programming. And this when almost nothing new is actually being proposed, although everything is done under the banner of the struggle for the new. It is very difficult to recognize, in the crush of software paradigms, something genuinely new; what is offered often turns out to be quite well known, and sometimes simply outdated. Everything is washed away by terminological flourishes, verbose analysis, and multi-line examples in many programming languages. At the same time, requests to reveal and/or consider the background of a solution and the essence of the innovations are stubbornly avoided, and attempts to find out how much of this is needed, what it will give in the end, and what qualitatively distinguishes the innovation from already known approaches and programming tools are nipped in the bud.

I appeared on Habr, as was aptly noted in one of the discussions, after a certain freeze. I will not even object: the impression, apparently, is exactly that. So I agree, I confess, although if it is my fault, it is only partially. I admit, I live by ideas about parallel programming formed in the 80s of the last century. Antiquity? Maybe. But tell me what is new that the science of [parallel] programming did not already know back then (see details in [1]). At that time, parallel programs were divided into two classes: serial-parallel and asynchronous. The former were already considered archaic; the latter, advanced and truly parallel. Among the latter, programming with event-driven control (or simply event programming), dataflow control, and dynamic control were singled out. That is all, in general terms. The rest is details.

And what does current programming offer beyond what was already known at least 40 years ago? To my "frostbitten eye", nothing. What used to be called co-routines, it turns out, is now called coroutines or even goroutines; the terms "concurrency" and "competition" stump, it seems, not only translators. And such examples are not hard to find. For instance, what is the difference between reactive programming (RP) and event programming or dataflow programming? Which of the known categories and/or classifications does it fall into? Nobody seems interested in this, and nobody can clarify it. Or can one now classify by name alone? Then, indeed, coroutines and co-routines are different things, and parallel programming is simply obliged to differ from concurrent programming. And what about state machines? What kind of miracle technique is that?

The "spaghetti" in the head arises from forgetting the theory, in which, when a new model is introduced, it is compared with models already known and well studied. Whether this is done well is another matter, but at least one can figure things out, because the process is formalized. But how can you get to the bottom of it when coroutines are given a new nickname and then the "code under the hood" is picked at in five languages simultaneously, with an assessment, on top of that, of the prospects of migrating to threads? And these are only coroutines, which, frankly, should already have been forgotten because of their elementary nature and limited usefulness (here, of course, I speak from my own experience).

2. Reactive programming and everything, everything, everything


We will not set ourselves the goal of thoroughly dissecting the concept of "reactive programming", although we will take a "reactive example" as the basis for the further discussion. Its formal model will be created on the basis of a well-known formal model. And this, I hope, will allow us to understand the interpretation and operation of the original program clearly, accurately, and in detail. How "reactive" the created model and its implementation turn out to be is for the apologists of this type of programming to decide. For the moment, it is enough that the new model must implement/model all the nuances of the original example. If something is not taken into account, I hope there are those who will correct me.

So, in [2], an example of a reactive program was considered, the code of which is shown in Listing 1.

Listing 1. Reactive program code
1. X1 = 2
2. X2 = 3
3. X3 = X1 + X2
4. print X1, X2, X3
5. X1 = 4
6. print X1, X2, X3


In the world of reactive programming, the result of running this code will differ from the result of an ordinary program of the same form. This alone is bad, not to say ugly, because the result of a program should be unambiguous and should not depend on the implementation. But something else is more confusing. First, by its appearance it is hardly possible to distinguish ordinary code of this kind from reactive code. Second, the author himself is apparently not entirely sure how the reactive program works, speaking of its result as "most likely". And third, which of the results should be considered correct?
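Since the language of the original snippet is not specified, here is a minimal, hypothetical C++ transcription of its conventional (non-reactive) reading; the names and the print format are illustrative:

```cpp
#include <string>
#include <vector>

// A sequential (imperative) reading of Listing 1: assignment is an
// action performed once, not a dependency that keeps propagating.
std::vector<std::string> runListing1() {
    std::vector<std::string> printed;
    int x1 = 2;                    // 1. X1 = 2
    int x2 = 3;                    // 2. X2 = 3
    int x3 = x1 + x2;              // 3. X3 = X1 + X2
    auto print = [&] {             // 4./6. print X1, X2, X3
        printed.push_back("X1=" + std::to_string(x1) +
                          ", X2=" + std::to_string(x2) +
                          ", X3=" + std::to_string(x3));
    };
    print();
    x1 = 4;                        // 5. X1 = 4: x3 is NOT recomputed here
    print();
    return printed;
}
```

The second printed line reads X3=5; under reactive semantics the change of X1 in step 5 would propagate into X3 and it would read X3=7. That is exactly the ambiguity discussed above.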

This ambiguity in the interpretation of the code meant that it was not possible to "get into" it right away. But then, as often happens, everything turned out to be much simpler than one might have expected. Figure 1 shows two structural diagrams that, I hope, correspond to the structure and explain the operation of the example. In the upper diagram, blocks X1 and X2 handle data input, signaling block X3 about changes. The latter performs the summation and allows block Pr to print the current values of the variables. Having printed, block Pr signals to block X3, and to it only, that it is ready to print new values.

Fig. 1. Two structural models of the example
image

The second scheme, compared with the first, is quite elementary. Within a single block, it inputs data and sequentially implements: 1) computing the sum of the input data and 2) printing it. The internal content of the block is not disclosed at this level of presentation. It can be said, though, that at the structural level it may be a "black box" containing the four-block scheme. Still, its [algorithmic] design is assumed to be different.

Comment. Treating a program as a black box essentially reflects the user's attitude toward it. The user is interested not in its implementation but in the result of its work. Whether it is a reactive program, an event program, or something else, the result, in accordance with the theory of algorithms, should be unambiguous and predictable.

Fig. 2 presents algorithmic models that clarify in detail the internal [algorithmic] structure of the blocks of the diagrams. The upper model is represented by a network of automata, where each automaton is an algorithmic model of a separate block. The connections between automata, shown by dash-dotted arcs, correspond to the connections of the diagram. The single-automaton model describes the operation algorithm of the block diagram consisting of one block (see the separate Pr block in Fig. 1).

Fig. 2. Algorithmic models for structural schemes
image

The automata X1 and X2 (the names of the automata and blocks coincide with the names of their variables) detect changes and, if automaton X3 is ready to perform the addition operation (is in state "s0"), go to state "s1", remembering the current value of the variable. Automaton X3, having received permission, enters state "s1", performs the addition, and, if necessary, waits for the printing of the variables to complete. The "printing automaton" Pr, having finished printing, returns to the initial state "p0", where it waits for the next command. Note that its state "p1" starts a chain of reverse transitions: automaton X3 to state "s0", and then X1 and X2 to state "s0". After that, the analysis of the input data, their summation, and the subsequent printing are repeated.

Compared with the automata network, the algorithm of the single Pr automaton is quite simple, but, we note, it does the same job, and perhaps even faster. Its predicates detect changes in the variables. If a change occurs, the transition to state "p1" is performed together with action y1 (see Fig. 2), which sums the current values of the variables while remembering them. Then, on the unconditional transition from state "p1" to state "p0", action y2 prints the variables. After that, the process returns to analyzing the input data. The implementation code for this latter model is shown in Listing 2.

Listing 2. Implementation of the Pr automaton
#include "lfsaappl.h"
#include "fsynch.h"
extern LArc TBL_PlusX3[];
class FPlusX3 : public LFsaAppl
{
public:
    LFsaAppl* Create(CVarFSA *pCVF) { Q_UNUSED(pCVF)return new FPlusX3(nameFsa, pCVarFsaLibrary); }
    bool FCreationOfLinksForVariables();

    FPlusX3(string strNam, CVarFsaLibrary *pCVFL): LFsaAppl(TBL_PlusX3, strNam, nullptr, pCVFL) { }

    CVar *pVarY;        		// output print string
    CVar *pVarX1;        		// input variable X1
    CVar *pVarX2;        		// input variable X2
    CVar *pVarX3;        		// output variable X3
    CVar *pVarStrNameX1;		// name of external variable X1
    CVar *pVarStrNameX2;		// name of external variable X2
    CVar *pVarStrNameX3;		// name of external variable X3
protected:
    int x1(); int x2();
    int x12() { return pVarX1 != nullptr && pVarX2 && pVarX3; };
    void y1();
    void y12() { FInit(); };
    double dSaveX1{0};
    double dSaveX2{0};
};

#include "stdafx.h"
#include "fplusx3.h"

LArc TBL_PlusX3[] = {
    LArc("st",		"st","^x12","y12"), 		// wait until links are initialized
    LArc("st",		"p0","x12",	"--"),			// links ready
    LArc("p0",		"p1","x1",  "y1"),			// X1 changed: sum and print
    LArc("p0",		"p1","x2",  "y1"),			// X2 changed: sum and print
    LArc("p1",		"p0","--",  "--"),			// unconditional return
    LArc()
};

// creating local variables and initialization of pointers
bool FPlusX3::FCreationOfLinksForVariables() {
// creating local variables
    pVarY = CreateLocVar("strY", CLocVar::vtString, "print of output string");			// output string
    pVarX1 = CreateLocVar("dX1", CLocVar::vtDouble, "");			// local variable X1
    pVarX2 = CreateLocVar("dX2", CLocVar::vtDouble, "");			// local variable X2
    pVarX3 = CreateLocVar("dX3", CLocVar::vtDouble, "");			// local variable X3
    pVarStrNameX1 = CreateLocVar("strNameX1", CLocVar::vtString, "");			// name of external variable X1
    pVarStrNameX2 = CreateLocVar("strNameX2", CLocVar::vtString, "");			// name of external variable X2
    pVarStrNameX3 = CreateLocVar("strNameX3", CLocVar::vtString, "");			// name of external variable X3
// initialization of pointers
    string str;
    str = pVarStrNameX1->strGetDataSrc();
    if (str != "") { pVarX1 = pTAppCore->GetAddressVar(str.c_str(), this);	}
    str = pVarStrNameX2->strGetDataSrc();
    if (str != "") { pVarX2 = pTAppCore->GetAddressVar(str.c_str(), this);	}
    str = pVarStrNameX3->strGetDataSrc();
    if (str != "") { pVarX3 = pTAppCore->GetAddressVar(str.c_str(), this);	}
    return true;
}

int FPlusX3::x1() { return pVarX1->GetDataSrc() != dSaveX1; }
int FPlusX3::x2() { return pVarX2->GetDataSrc() != dSaveX2; }

void FPlusX3::y1() {
// X3 = X1 + X2
    double dX1 = pVarX1->GetDataSrc(); double dX2 = pVarX2->GetDataSrc();
    double dX3 = dX1 + dX2;
    pVarX3->SetDataSrc(this, dX3);
    dSaveX1 = dX1; dSaveX2 = dX2;
//  1, 2, 3
    QString strX1; strX1.setNum(dX1); QString strX2; strX2.setNum(dX2);
    QString strX3; strX3.setNum(dX3);
    QString qstr = "X1=" + strX1 + ", X2=" + strX2 + ", X3=" + strX3;
    pVarY->SetDataSrc(nullptr, qstr.toStdString(), nullptr);
}


The amount of code is clearly incomparably larger than in the original example. But, note, it is not by code alone. The new solution removes all questions about functioning, leaving no room for fantasies in interpreting the program. An example that looks compact and elegant, but about which one can only say "most likely", does not evoke, let us say, positive emotions or a desire to work with it. It should also be noted that, strictly speaking, the original example should be compared only with the code of the automaton's action y1.
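To separate the algorithm itself from the framework scaffolding, the single Pr automaton of Fig. 2 can be sketched framework-free. The following is a minimal illustrative model with hypothetical names, not the VKPa implementation:

```cpp
#include <string>
#include <vector>

// A framework-free sketch of the single Pr automaton: state p0 waits
// for a change of X1 or X2; the transition p0->p1 runs y1 (sum and
// remember), and the unconditional p1->p0 runs y2 ("print").
struct PrAutomaton {
    double x1 = 0, x2 = 0;                 // live input variables
    double savedX1 = 0, savedX2 = 0, x3 = 0;
    std::string state = "p0";
    std::vector<double> printLog;          // stands in for printing

    bool inputChanged() const {            // predicates x1, x2 of Fig. 2
        return x1 != savedX1 || x2 != savedX2;
    }
    void y1() { x3 = x1 + x2; savedX1 = x1; savedX2 = x2; }  // sum, remember
    void y2() { printLog.push_back(x3); }                    // "print" X3
    void step() {                          // one discrete automaton step
        if (state == "p0") {
            if (inputChanged()) { y1(); state = "p1"; }      // p0 -> p1
        } else {
            y2(); state = "p0";                              // p1 -> p0, always
        }
    }
};
```

Setting x1=2, x2=3 and stepping twice sums and then "prints" 5; a later change of x1 to 4 triggers the next cycle with 7. The whole algorithm is the transition table; everything else in Listing 2 serves the environment.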

The rest of the code is dictated by the requirements of the "automata environment", which, I note, the original code says nothing about. Thus, the FCreationOfLinksForVariables method of the base automaton class LFsaAppl creates local variables for the automaton and links to them, when the symbolic names of the other environment variables associated with them are specified at the level of the VKPa environment. It runs first when the automaton is created, and then again within the FInit method (see action y12), because not all links are known when the object is created. The automaton stays in the "st" state until all the necessary links, checked by the x12 predicate, are initialized. The GetAddressVar method returns a reference to a variable given its name.

To remove any remaining questions, we present the code of the automata network. It is shown in Listing 3 and includes the code of three automaton classes. It is on their basis that the objects are created that correspond to the structural diagram of the network shown in Fig. 1. Note that the objects X1 and X2 are instances of the common class FSynch.

Listing 3. Automata network classes
#include "lfsaappl.h"

extern LArc TBL_Synch[];
class FSynch : public LFsaAppl
{
public:
    double dGetData() { return pVarX->GetDataSrc(); };
    LFsaAppl* Create(CVarFSA *pCVF) { Q_UNUSED(pCVF)return new FSynch(nameFsa, pCVarFsaLibrary); }
    bool FCreationOfLinksForVariables();

    FSynch(string strNam, CVarFsaLibrary *pCVFL): LFsaAppl(TBL_Synch, strNam, nullptr, pCVFL) { }

    CVar *pVarX;			// input variable
    CVar *pVarStrNameX;		// name of external input variable
    CVar *pVarStrNameObject;// name of the automaton object to monitor
    LFsaAppl *pL {nullptr};
protected:
    int x1() { return pVarX->GetDataSrc() != dSaveX; }
    int x2() { return pL->FGetState() == "s1"; }
    int x12() { return pL != nullptr; };
    void y1() { dSaveX = pVarX->GetDataSrc(); }
    void y12() { FInit(); };
    double dSaveX{0};
};

#include "stdafx.h"
#include "fsynch.h"

LArc TBL_Synch[] = {
    LArc("st",		"st","^x12","y12"), 		// wait until links are initialized
    LArc("st",		"s0","x12",	"y1"),			// links ready: remember input
    LArc("s0",		"s1","x1",  "y1"),			// input changed: remember it
    LArc("s1",		"s0","x2",	"--"),			// monitored automaton in s1: return
    LArc()
};

// creating local variables and initialization of pointers
bool FSynch::FCreationOfLinksForVariables() {
// creating local variables
    pVarX = CreateLocVar("x", CLocVar::vtDouble, " ");
    pVarStrNameX = CreateLocVar("strNameX1", CLocVar::vtString, "name of external input variable(x1)");			//   
    pVarStrNameObject = CreateLocVar("strNameObject", CLocVar::vtString, "name of function");                   //  
// initialization of pointers
    string str;
    if (pVarStrNameX) {
        str = pVarStrNameX->strGetDataSrc();
        if (str != "") { pVarX = pTAppCore->GetAddressVar(str.c_str(), this);	}
    }
    str = pVarStrNameObject->strGetDataSrc();
    if (str != "") { pL = FGetPtrFsaAppl(str);	}
    return true;
}

#include "lfsaappl.h"
#include "fsynch.h"

extern LArc TBL_X1X2X3[];
class FX1X2X3 : public LFsaAppl
{
public:
    double dGetData() { return pVarX3->GetDataSrc(); };
    LFsaAppl* Create(CVarFSA *pCVF) { Q_UNUSED(pCVF)return new FX1X2X3(nameFsa, pCVarFsaLibrary); }
    bool FCreationOfLinksForVariables();

    FX1X2X3(string strNam, CVarFsaLibrary *pCVFL): LFsaAppl(TBL_X1X2X3, strNam, nullptr, pCVFL) { }

    CVar *pVarX1{nullptr};			// input variable X1
    CVar *pVarX2{nullptr};			// input variable X2
    CVar *pVarX3{nullptr};			// output variable X3
    CVar *pVarStrNameFX1;		// name of automaton object X1
    CVar *pVarStrNameFX2;		// name of automaton object X2
    CVar *pVarStrNameFPr;		// name of automaton object Pr
    CVar *pVarStrNameX3;		// name of external variable X3
    FSynch *pLX1 {nullptr};
    FSynch *pLX2 {nullptr};
    LFsaAppl *pLPr {nullptr};
protected:
    int x1() { return pLX1->FGetState() == "s1"; }
    int x2() { return pLX2->FGetState() == "s1"; }
    int x3() { return pLPr->FGetState() == "p1"; }
    int x12() { return pLPr != nullptr && pLX1 && pLX2 && pVarX3; };
    void y1() { pVarX3->SetDataSrc(this, pLX1->dGetData() + pLX2->dGetData()); }
    void y12() { FInit(); };
};
#include "stdafx.h"
#include "fx1x2x3.h"

LArc TBL_X1X2X3[] = {
    LArc("st",		"st","^x12","y12"), 		// wait until links are initialized
    LArc("st",		"s0","x12",	"--"),			// links ready
    LArc("s0",		"s1","x1",  "y1"),			// X1 changed: sum
    LArc("s0",		"s1","x2",  "y1"),			// X2 changed: sum
    LArc("s1",		"s0","x3",	"--"),			// Pr has started printing: return
    LArc()
};
// creating local variables and initialization of pointers
bool FX1X2X3::FCreationOfLinksForVariables() {
// creating local variables
    pVarX3 = CreateLocVar("x", CLocVar::vtDouble, " ");
    pVarStrNameFX1 = CreateLocVar("strNameFX1", CLocVar::vtString, "");
    pVarStrNameFX2 = CreateLocVar("strNameFX2", CLocVar::vtString, "");
    pVarStrNameFPr = CreateLocVar("strNameFPr", CLocVar::vtString, "");
    pVarStrNameX3 = CreateLocVar("strNameX3", CLocVar::vtString, "");
// initialization of pointers
    string str; str = pVarStrNameFX1->strGetDataSrc();
    if (str != "") { pLX1 = (FSynch*)FGetPtrFsaAppl(str);	}
    str = pVarStrNameFX2->strGetDataSrc();
    if (str != "") { pLX2 = (FSynch*)FGetPtrFsaAppl(str);	}
    str = pVarStrNameFPr->strGetDataSrc();
    if (str != "") { pLPr = FGetPtrFsaAppl(str);	}
    return true;
}
#include "lfsaappl.h"
#include "fsynch.h"

extern LArc TBL_Print[];
class FX1X2X3;
class FPrint : public LFsaAppl
{
public:
    LFsaAppl* Create(CVarFSA *pCVF) { Q_UNUSED(pCVF)return new FPrint(nameFsa, pCVarFsaLibrary); }
    bool FCreationOfLinksForVariables();

    FPrint(string strNam, CVarFsaLibrary *pCVFL): LFsaAppl(TBL_Print, strNam, nullptr, pCVFL) { }

    CVar *pVarY;        		// output print string
    CVar *pVarStrNameFX1;		// name of automaton object X1
    CVar *pVarStrNameFX2;		// name of automaton object X2
    CVar *pVarStrNameFX3;		// name of automaton object X3
    FSynch *pLX1 {nullptr};     // pointer to automaton object X1
    FSynch *pLX2 {nullptr};     // pointer to automaton object X2
    FX1X2X3 *pLX3 {nullptr};    // pointer to automaton object X3
protected:
    int x1();
    int x12() { return pLX1 != nullptr && pLX2 && pLX3; };
    void y1();
    void y12() { FInit(); };
};
#include "stdafx.h"
#include "fprint.h"
#include "fx1x2x3.h"

LArc TBL_Print[] = {
    LArc("st",		"st","^x12","y12"), 		// wait until links are initialized
    LArc("st",		"p0","x12",	"--"),			// links ready
    LArc("p0",		"p1","x1",  "y1"),			// X3 computed the sum: print
    LArc("p1",		"p0","--",	"--"),			// unconditional return
    LArc()
};
// creating local variables and initialization of pointers
bool FPrint::FCreationOfLinksForVariables() {
// creating local variables
    pVarY = CreateLocVar("strY", CLocVar::vtString, "print of output string");			//  
    pVarStrNameFX1 = CreateLocVar("strNameFX1", CLocVar::vtString, "name of external input object(x1)");			//   
    pVarStrNameFX2 = CreateLocVar("strNameFX2", CLocVar::vtString, "name of external input object(x2)");			//   
    pVarStrNameFX3 = CreateLocVar("strNameFX3", CLocVar::vtString, "name of external input object(pr)");			//   
// initialization of pointers
    string str;
    str = pVarStrNameFX1->strGetDataSrc();
    if (str != "") { pLX1 = (FSynch*)FGetPtrFsaAppl(str);	}
    str = pVarStrNameFX2->strGetDataSrc();
    if (str != "") { pLX2 = (FSynch*)FGetPtrFsaAppl(str);	}
    str = pVarStrNameFX3->strGetDataSrc();
    if (str != "") { pLX3 = (FX1X2X3*)FGetPtrFsaAppl(str);	}
    return true;
}

int FPrint::x1() { return pLX3->FGetState() == "s1"; }

void FPrint::y1() {
    QString strX1; strX1.setNum(pLX1->dGetData());
    QString strX2; strX2.setNum(pLX2->dGetData());
    QString strX3; strX3.setNum(pLX3->dGetData());
    QString qstr = "X1=" + strX1 + ", X2=" + strX2 + ", X3=" + strX3;
    pVarY->SetDataSrc(nullptr, qstr.toStdString(), nullptr);
}


This code differs from Listing 1 the way a picture of an airplane differs from its design documentation. But, I think, we are first of all programmers and, no offense intended, designers of a sort. Our "design code" should be easy to understand and unambiguous to interpret, so that our "airplane" does not crash on its first flight. And if such a misfortune does happen, and with programs it happens more often than with airplanes, the cause should be easy and quick to find.

Therefore, when considering Listing 3, one must keep in mind that the number of classes is not directly related to the number of corresponding objects in the parallel program. The code does not reflect the relationships between objects; it contains the mechanisms that create them. Thus, the FSynch class contains a pointer pL to an object of type LFsaAppl. The name of this object is determined by a local variable, which in the VKPa environment will correspond to an automaton variable named strNameObject. The pointer is needed so that the FGetState method can monitor the current state of another automaton object (see the code of predicate x2). The other classes contain similar pointers to objects, variables for specifying object names, and the predicates needed to organize the relationships.

Now a few words about the "construction" of a parallel program in the VKPa environment. It is created while the program configuration is being loaded. First, objects are created from classes in thematic dynamic libraries of automaton type (their set is determined by the configuration of the application/program). The created objects are identified by their names (let us call them automaton variables). Then the necessary values are written to the local variables of the automata. In our case, string-typed variables are set to the names of variables of other objects and/or the names of the objects themselves. In this way, the connections between the objects of a parallel automata program are established (see Fig. 1). Further, by changing the values of the input variables (using individual object control dialogs or the standard environment dialogs for setting variable values), we observe the result. It can be seen using a standard environment dialog that displays the values of variables.
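This wiring-by-names mechanism can be illustrated outside VKPa (whose API is not reproduced here) by a hypothetical registry that maps string names to objects; the names Node and Registry are illustrative:

```cpp
#include <map>
#include <string>

// Illustration of configuration-time wiring: objects are created first,
// registered under string names, and only then linked by looking those
// names up. The code contains the linking mechanism, not the links.
struct Node {
    std::string name;
    Node* peer = nullptr;   // link resolved from a string at load time
};

struct Registry {
    std::map<std::string, Node*> byName;
    void add(Node* n) { byName[n->name] = n; }
    Node* find(const std::string& s) {
        auto it = byName.find(s);
        return it == byName.end() ? nullptr : it->second;
    }
};
```

The same pair of objects can be rewired into a different topology purely by changing the configuration strings, which is exactly why the class count in Listing 3 says little about the object structure of the running program.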

3. To the analysis of parallel programs


It is very difficult to say anything concrete about the functioning of a parallel program, unless it is a fairly simple serial-parallel one. The automata network under consideration is no exception. Below we will see this for ourselves as we work out what can be expected from it.

The resulting automaton and the network for which it is built are shown in Fig. 3. The network differs from the one in Fig. 2, apart from the renaming of its elements (automata, input and output signals), by the absence of the variable-printing automaton. The latter is not essential to the operation of the network, and the renaming makes it possible to use the composition operation to build the resulting automaton. In addition, to create shorter names, a coding was introduced whereby, for example, state "a0" of automaton A is represented by the symbol "0" and "a1" by the symbol "1"; similarly for the other automata. A component state of the network such as "a1b0c1" is then assigned the name "101". Names are formed in the same way for all component states of the network, whose number is the product of the numbers of states of the component automata.

Fig. 3. The resulting network automaton
image

The resulting automaton can, of course, be computed in a purely formal way, but that requires an appropriate "calculator". If there is none, a fairly simple intuitive algorithm can be used. Within it, one component state of the network after another is fixed, and then, by going through all possible input situations, the target component states are determined "by hand". Thus, having fixed the state "000", corresponding to the current states of the component automata "a0", "b0", "c0", we determine the transitions for the conjunctions of input variables ^x1^x2, ^x1x2, x1^x2, x1x2. We obtain transitions, respectively, to the states "a0b0c0", "a0b1c0", "a1b0c0", "a1b1c0", which are marked "000", "010", "100", and "110" on the resulting automaton. This operation must be repeated sequentially for all reachable states. Loops that are not loaded with actions can be excluded from the graph.
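This "intuitive algorithm" is ordinary breadth-first reachability over the composite state space. The sketch below is generic; the toy transition rule in it is purely illustrative and is not the network of Fig. 3 (which is not reproduced here):

```cpp
#include <functional>
#include <queue>
#include <set>
#include <string>

// Fix a composite state, try every input conjunction, record the target
// composite states, and repeat for each newly reached state.
using State = std::string;                     // e.g. "010"
using Step  = std::function<State(const State&, int /*input bit mask*/)>;

std::set<State> reachable(State start, int nInputs, const Step& step) {
    std::set<State> seen{start};
    std::queue<State> work; work.push(start);
    while (!work.empty()) {
        State s = work.front(); work.pop();
        for (int in = 0; in < (1 << nInputs); ++in) {  // all conjunctions
            State t = step(s, in);
            if (seen.insert(t).second) work.push(t);   // new state: explore
        }
    }
    return seen;
}

// Toy rule for three 1-bit components: c fires only when a and b fire.
const Step toy = [](const State& s, int in) {
    char a = (in & 1) ? '1' : '0';
    char b = (in & 2) ? '1' : '0';
    char c = (a == '1' && b == '1') ? '1' : '0';
    (void)s;   // in this toy, the next state depends only on the input
    return State{a, b, c};
};
```

Running reachable("000", 2, toy) shows that only 4 of the 8 composite states are reachable; in particular "001" is isolated, the same kind of conclusion drawn for the network below.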

So what do we have as the bottom line? We achieved the main thing: we obtained the resulting automaton, which accurately describes the operation of the network. We found out that of the eight possible network states, one is unreachable (isolated): state "001". This means that the summation operation will under no circumstances be triggered for input variables that have not changed their current value.

What is worrying, even though testing revealed no errors: on the graph of the resulting automaton, transitions were found whose output actions conflict. They are marked with the combinations of actions y1y3 and y2y3. Actions y1 and y2 are triggered when the input data changes, while in parallel with them another action, y3, computes the sum of the variables. Which values will it operate on: the old ones, or the ones just changed? To eliminate the ambiguity, it is enough to change the actions y3 and y4 so that their code becomes X3 = X1Sav + X2Sav and print(X1Sav, X2Sav, X3), respectively.
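The proposed fix can be sketched as follows (hypothetical names mirroring X1Sav/X2Sav): the change-detecting actions snapshot the inputs, and y3/y4 work only with the snapshots, so a concurrent update of X1 or X2 cannot slip in between the summation and the printing:

```cpp
#include <string>

// Sketch of the conflict fix: y1/y2 remember the inputs, y3 computes
// X3 = X1Sav + X2Sav, and y4 prints the saved copies, never the live
// variables that may change in parallel.
struct Snapshot {
    double x1Sav = 0, x2Sav = 0, x3 = 0;

    void y1(double liveX1) { x1Sav = liveX1; }   // remember X1 on change
    void y2(double liveX2) { x2Sav = liveX2; }   // remember X2 on change
    void y3() { x3 = x1Sav + x2Sav; }            // X3 = X1Sav + X2Sav
    std::string y4() const {                     // print(X1Sav, X2Sav, X3)
        return "X1=" + std::to_string(x1Sav) +
               ", X2=" + std::to_string(x2Sav) +
               ", X3=" + std::to_string(x3);
    }
};
```

If X1 changes again after the snapshot was taken, the already-formed sum is unaffected; the new value simply starts the next cycle.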

Thus, the construction of the resulting automaton revealed obvious problems in the created parallel model. Whether they show up in the reactive program is an open question. Everything will apparently depend on how parallelism is implemented within the reactive paradigm. In any case, such a dependence must be taken into account and somehow eliminated. In the case of the automata network, it is easier to keep the modified version than to try to change the network. It is no tragedy if the "old" data that initiated the network's operation is printed first, and the current data is printed in the next cycle.

4. Conclusions


Each of the solutions considered has its pros and cons. The original one is very simple; the network is more complicated; and the single-automaton solution will begin analyzing the input data only after it has been visualized. Thanks to its parallelism, the automata network will begin analyzing the input data before the printing procedure completes. And if the visualization time is long, and it will be, compared with the summation operation, then the network is faster from the point of view of input monitoring. That is, in the case of parallel programs, an assessment based on the amount of code is not always objective. To put it more simply, the network is parallel, while the single-component solution is largely sequential (only its predicates and actions run in parallel). And we are, first of all, talking about parallel programs.

The network model is also an example of a flexible solution. First, the components can be designed independently of one another. Second, any component can be replaced by another. And third, any network component can become an element of a library of automaton processes and be reused in another network solution. And these are only the most obvious benefits of a parallel solution.

But back to reactive programming. Does RP consider all program statements to be parallel from the outset? We can only assume so, since without this it is difficult to speak of a programming paradigm "oriented toward data flows and the propagation of changes" (see the definition of reactive programming in [3]). But then how does it differ from programming with dataflow control (for more details see [1])? So we return to where we started: how should reactive programming be classified within the known classifications? And if RP is some special kind of programming, then how does it differ from the known programming paradigms?

Now, about theory. Without it, the analysis of parallel algorithms would be not merely difficult but impossible. The analysis process sometimes reveals problems that cannot be guessed at even by a careful and thoughtful look at the program, or, for that matter, at the "design document". In any case, I am for airplanes, in the figurative and in every other sense, not crashing. My point is that one should, of course, strive for simplicity and elegance of form, but without loss of quality. We programmers do not just "draw" programs; we often control what is hidden behind them, including airplanes!

Ah yes, I almost forgot. I would classify automata-based programming (AP) as programming with dynamic control. As for asynchrony, I would argue. Given that the basis of the AP control model is a network in unified time, i.e. synchronous networks of automata, it is synchronous. But since the VKPa environment also implements multiple networks through the concept of "automaton worlds", it is asynchronous as well. In general, I am against any overly rigid classification framework, but not for anarchy. In this sense, VKPa has, I hope, reached a certain compromise between the rigidity of serial-parallel programming and a certain asynchronous anarchism. Given that automata-based programming also covers the class of event programs (see [4]), and dataflow programs are easily modeled within it, what other kind of programming could one dream of? Certainly not I.

Literature
1. [Monograph on parallel programming; bibliographic details lost in extraction.] 1983. 240 pp.
2. [Habr article presenting the reactive program example.] habr.com/ru/post/486632 (accessed 07.02.2020).
3. Reactive programming. Wikipedia. ru.wikipedia.org/wiki/_ (accessed 07.02.2020).
4. [Habr article on event programming.] habr.com/ru/post/483610 (accessed 07.02.2020).
