The practice of state supervision of algorithmic systems: the example of Canada and the EU


Transparency and accountability of algorithmic decisions, together with access to an algorithm's significant components and the ability to correct them, are tools for fairly resolving the social, technical and regulatory problems that arise from the use of algorithmic systems. The monitoring and supervision of such systems is being actively studied in Canada and the European Union.

Canada has advanced the furthest in addressing this issue, although it has focused solely on government systems. In April 2019, the country's government approved the Directive on Automated Decision-Making to ensure the management, supervision and audit of state algorithmic systems. Compliance with the directive became mandatory on April 1, 2020, with reviews required every six months. The consequences of non-compliance range from preventative measures up to a requirement to "freeze" the system and significant restrictions on the organization.

How will the regulation of algorithmic systems work?


According to the directive, government websites and services that use algorithmic systems to make decisions must notify users of this "in a visible place and in understandable language" and regularly publish information about the effectiveness and efficiency of their automated decision-making systems. The owner of the algorithm is also obliged, upon request, to explain why the algorithm made a particular decision and to describe the available options for recourse (in case the decision is contested).
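The explanation-on-request obligation implies that the algorithm owner keeps, for each automated decision, enough information to reconstruct why it was made and what recourse exists. The directive does not prescribe a data format; the following is a minimal hypothetical sketch (all names and fields are illustrative, not taken from the directive) of what such a decision record might look like:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """Hypothetical minimal record an algorithm owner could keep so that
    an explanation and recourse options can be produced on request."""
    decision_id: str
    outcome: str                               # the decision communicated to the user
    key_factors: list[str]                     # inputs that drove the outcome
    recourse_options: list[str] = field(default_factory=list)

    def explain(self) -> str:
        """Render a plain-language explanation of the decision."""
        factors = "; ".join(self.key_factors)
        recourse = "; ".join(self.recourse_options) or "none listed"
        return (f"Decision {self.decision_id}: {self.outcome}. "
                f"Key factors: {factors}. Recourse: {recourse}.")

record = DecisionRecord(
    decision_id="2020-0415",
    outcome="benefit application declined",
    key_factors=["declared income above threshold"],
    recourse_options=["request human review", "file an appeal"],
)
print(record.explain())
```

The point of the sketch is only that explainability is a record-keeping requirement: the factors and recourse options must be captured at decision time, not reconstructed later.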

To determine the level of impact of algorithmic systems, the Government of Canada has developed and published an open-source assessment tool (which, it is claimed, may also be used by other countries). To evaluate the impact of Canada's state algorithmic systems, algorithm owners are asked to answer 60 questions. Based on an analysis of the responses, the algorithmic system is assigned a level from 1 to 4. The key factors for determining the level are socio-economic impact, impact on government processes (services), data management, (methodological) transparency, and system complexity (an assessment of the impact of each indicator is attached).
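The scheme described above — questionnaire answers aggregated per factor and mapped to a level from 1 to 4 — can be sketched in a few lines. The factor names follow the article; the scoring thresholds below are purely illustrative assumptions, not the weights used by Canada's official tool:

```python
def impact_level(scores: dict[str, int], max_per_factor: int = 10) -> int:
    """Map per-factor questionnaire scores to an impact level 1-4.

    `scores` holds one aggregate score per key factor (socio-economic
    impact, impact on government processes, data management,
    transparency, system complexity). The quartile thresholds are
    illustrative, not those of the official assessment tool.
    """
    total = sum(scores.values())
    maximum = max_per_factor * len(scores)
    ratio = total / maximum
    if ratio < 0.25:
        return 1          # minimal impact
    if ratio < 0.50:
        return 2          # moderate impact
    if ratio < 0.75:
        return 3          # high impact
    return 4              # very high impact

level = impact_level({
    "socio_economic_impact": 7,
    "government_processes": 5,
    "data_management": 4,
    "transparency": 3,
    "system_complexity": 6,
})
print(level)  # total 25 of 50 -> ratio 0.50 -> level 3
```

A higher level triggers stricter requirements under the directive, so the mapping from answers to level is the pivot of the whole regime.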

The European Union has not yet developed practical steps or mandatory measures, but on the whole it is moving in the same direction as Canada. Unlike Canada, however, the EU plans to exercise control over all algorithms, not just state-owned ones.
The problem of evaluating the impact of algorithmic systems is being investigated by the European Parliamentary Research Service.

In particular, two analytical documents were prepared: "Understanding Algorithmic Decision-Making: Opportunities and Challenges" (March 2019) and "A Governance Framework for Algorithmic Accountability and Transparency" (April 2019).
Both documents were prepared as reference material for Members of the European Parliament.

According to the researchers, the following measures may be effective:

  • Creation of a regulatory body to control and supervise the activity of algorithmic systems. Its tasks would include: assessing the risks of using algorithms by their degree of impact on a person; classifying types of algorithms; investigating algorithms in cases of human rights violations; advising regulatory authorities on algorithmic systems; setting standards and best practices; auditing algorithmic systems; and assisting users in protecting their rights when algorithms are used inappropriately;
  • Development of standards and certification for algorithmic systems, for example IEEE P7000 «Model Process for Addressing Ethical Concerns During System Design» and IEEE P7001 «Transparency of Autonomous Systems»;
  • , . , , , , , ;
  • ( - ) ;
  • ( ) , ;
  • Providing "algorithmic literacy" to raise public awareness of how algorithmic systems work and their impact, and to foster critical assessment of them;
  • Providing a standardized, mandatory notification to users about ongoing algorithmic processing (explanatory information that could affect an individual user's decision-making or the wider public's understanding of the overall system's behavior).
