Pope: Ethics for AI and Unmanned Vehicles


The Pope fears that AI may harm people instead of becoming a tool that complements society and improves our lives.

According to the Vatican, a recently released document called the Rome Call for AI Ethics sets out the direction AI should take as a technology meant to improve society, while guarding society against the prospect of harm and death.

Scientists are sometimes inclined to roll out innovations without explicitly studying the possible consequences. Hence the many calls to pay attention to AI ethics amid the race to develop and deploy intelligent systems (especially systems based on machine learning and deep learning).

Here is a key quote from a document issued by the Pope:

“Now, more than ever, we must guarantee an outlook in which AI is developed with a focus not on technology, but on the good of humanity and of the environment, of our common home and of its inhabitants, who are inextricably interconnected.”

The Catholic Church asks everyone involved in the creation of AI to remember what these systems may end up doing. This includes an appeal to pay attention to both the anticipated and the potentially unintended consequences.

Some AI developers turn a blind eye to the fact that their creations can cause unintended adverse effects.

Remember the following aspects of social dynamics:

  • Some engineers and scientists believe that as long as the AI system does at least what they expected, any other consequences are not their responsibility, and so they try to wash their hands of any such hardships.


The document published by the Catholic Church serves as a kind of pledge, and everyone involved in creating AI systems is politely asked to sign it.

Indeed, this document has already been signed by several players, among them IBM and Microsoft.

What does this document ask of its signatories?

I will briefly cover some of the details. The document describes three basic principles. An AI system should:

"... be inclusive toward everyone and discriminate against no one; have at its heart the good of the human race as a whole and of each person in particular; and, finally, be mindful of the complex structure of our ecosystem and be characterized by the way it cares for our common home. An intelligent system 'must take the most rational approach, and that approach should include the use of AI to ensure sustainable food systems in the future.'"

So in short:

  1. AI must not discriminate against anyone
  2. AI should only have good intentions
  3. AI must take care of environmental sustainability

Keep in mind that AI does not determine its own future, however much it may seem that these systems are already autonomous and shaping their own destiny.

All of this is wrong, at least for now (and it will remain so for the near future).

In other words, only we humans create these systems. Therefore, only humans are responsible for what AI systems do.

I bring this up because blaming the technology is an easy escape for AI engineers and developers who pretend they are not to blame for their product’s errors.

You are probably familiar with the situation: you contact the motor vehicle administration to renew your driver’s license, their information system crashes, and the agent at the counter just shrugs and complains that this always happens with technology.

To be precise, it does not always happen, and not with all technology.

If a computer system fails, the blame lies with the people who set it up: those who failed to make proper backups and did not provide a way to regain access to the system when needed.

Do not fall into the logical trap of accepting that computer systems (including AI systems) have minds of their own, and that when they go haywire, well, that is just something that “always happens with technology.”

The truth is that the people who developed and deployed the computer system bear full responsibility, because they did not take sufficient measures and did not give their work enough attention.

As for Pope Francis’s call to join the discussion of these vital ethical principles, the theses might seem so simple as to be beyond criticism (as obvious as the assertion that the sky is blue), and yet there are parties who reject the Vatican’s proposals.

How so?

First, some critics point out that the pledge is not binding: a company that signs it faces no punishment for violating its principles, so it can sign without fear of accountability. A pledge without consequences strikes many as empty.

Secondly, some are troubled by the fact that it was the Catholic Church that issued this call for AI ethics, since the Church’s involvement brings religion into a subject that, some believe, religion has no place in.

The question arises: will other religions issue similar calls? And if so, which will prevail? And what will we do with a fragmented and disparate set of AI ethics declarations?

Thirdly, some believe that the pledge is too soft and overly general.

Arguably, the document contains no substantive directives that could be implemented in any practical way.

Moreover, those developers who want to dodge responsibility may claim that they misunderstood the general provisions, or interpreted them differently than intended.

So, given this flurry of criticism, should we dismiss the Vatican’s call for AI algoethics?

A brief aside: algoethics is the name given to questions of algorithmic ethics. It may be a good name, but it is not very catchy and is unlikely to become generally accepted. Time will tell.

Returning to the topic - should we pay attention to this call for AI ethics or not?

Yes, we must.

Admittedly, the document provides no financial sanctions for violating its principles, but there is another important consideration. Firms that sign the call can be held accountable by the general public. If they break their promises and the Church points it out, the bad publicity can damage those companies, costing them business and reputation in the industry.

You could say that violating these principles does carry a price, even if a hidden one.

As for religion appearing in a topic where it seemingly does not belong: understand that this is a separate question, worthy of considerable discussion and debate. But also acknowledge that there are now many ethical frameworks and calls for AI ethics from a wide variety of sources.

In other words, this call is not the first.

If you study this document carefully, you will see that it contains no theses tied to religious doctrine. That means you could not distinguish it from a similar document written without any religious context.

Finally, as for the document’s vagueness: yes, you could easily drive a Mack truck through its numerous loopholes, but those who want to cheat will cheat anyway.

I assume that further steps will be taken in the field of AI ethics, and that they will complement the Vatican’s document with specific guidance to fill its gaps and omissions.

It is also possible that anyone who tries to cheat will be condemned publicly, and claims of having “misunderstood” the document will be seen as obvious tricks intended only to dodge compliance with the reasonably formulated and quite obvious AI ethical standards it proclaims.

Speaking of the belief that everyone understands the principles of AI ethics, consider the myriad uses of AI.

There is an interesting example in the field of applied AI related to the topic of ethics: should unmanned vehicles be developed and put on the road in accordance with the appeal of the Vatican?

My answer is yes, definitely so.

In fact, I urge all car and vehicle manufacturers to sign this document.

In any case, let's look at this issue and see how this appeal applies to real unmanned vehicles.

Unmanned vehicle levels


It’s important to clarify what I mean when I talk about fully unmanned vehicles with AI.

Real unmanned vehicles are vehicles in which the AI drives entirely on its own, without any human assistance.

Such vehicles are classified as Level 4 or 5, while cars that require a human to share the driving are usually classified as Level 2 or 3. Cars driven with human assistance are called semi-autonomous, and they usually include many add-on features referred to as ADAS (advanced driver-assistance systems).

So far, no fully unmanned Level 5 vehicle exists. Today we do not even know whether this can be achieved, or how long it will take.

Meanwhile, work is underway at Level 4. Very narrow and selective tests are carried out on public roads, although there is debate over whether such tests should be permitted at all (some believe the people involved in tests on roads and highways are serving as guinea pigs who may live or die in each test).

Since semi-autonomous cars require a human driver, their mass adoption will not differ much from driving ordinary cars, so there is little new to say about them in the context of our topic (although, as you will soon see, the points considered below apply to them as well).

With semi-autonomous cars, it is important that the public be warned about an alarming trend: despite people continuing to post videos of themselves falling asleep at the wheel of Level 2 and 3 cars, we all need to remember that the driver of a semi-autonomous car cannot step away from the task of driving.

You are responsible for controlling a Level 2 or 3 vehicle, regardless of the degree of automation.
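The level distinctions above can be summarized in a short sketch. This is purely illustrative (the enum and helper function are hypothetical, with numbering following the SAE J3016 convention the industry uses), not a real automotive API:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Driving-automation levels as described above (SAE J3016 numbering)."""
    PARTIAL_AUTOMATION = 2      # semi-autonomous: ADAS features, human co-drives
    CONDITIONAL_AUTOMATION = 3  # semi-autonomous: human must stay ready to take over
    HIGH_AUTOMATION = 4         # truly driverless within a limited operating domain
    FULL_AUTOMATION = 5         # truly driverless everywhere (not yet achieved)

def human_driver_required(level: SAELevel) -> bool:
    """Levels 2-3 keep a responsible human driver; Levels 4-5 do not."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(human_driver_required(SAELevel.CONDITIONAL_AUTOMATION))  # True
print(human_driver_required(SAELevel.HIGH_AUTOMATION))         # False
```

The point of the sketch is the hard boundary between Level 3 and Level 4: below it, responsibility always stays with the human, which is exactly the warning above about Level 2 and 3 videos.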

Unmanned vehicles and AI ethics


In Level 4 and 5 vehicles, no human takes part in the driving: everyone inside is a passenger, and the AI does the driving.

At first glance, you can assume that there is no need to consider AI ethics issues.

An unmanned vehicle is a vehicle that can transport you without the driver.

You might think there is nothing to it, and that no ethical considerations arise in how the AI driver handles the driving.

Sorry, but anyone who believes all this has nothing to do with AI ethics needs a knock on the head or a bucket of cold water (I am against violence; it is just a metaphor).

There are many considerations related to AI ethics (you can read my review at this link; an article on the global aspects of AI ethics is here, and an article on the potential of expert advice in this area is here).

Let's take a brief look at each of the six basic principles outlined in the Rome Call for AI Ethics.

I will strive to keep this article concise, and therefore I will offer links to my other publications, which contain more detailed information on the following important topics:

Principle 1: “Transparency: AI systems must be inherently explainable”


You get into an unmanned car, and it refuses to go where you told it.

Nowadays, most unmanned driving systems are developed without any explanation of their behavior.

The reason for the refusal may be, for example, a tornado, or a request that cannot be fulfilled (for example, there are no passable roads nearby).

However, without XAI (short for explainable AI), you will not understand what is happening (more on this here).
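As a hedged sketch of what an explainable refusal might look like, consider a hypothetical request handler that always returns a machine- and human-readable reason instead of failing silently. All names here (`Decision`, `handle_request`, its parameters) are invented for illustration; real XAI for driving systems is far harder:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    accepted: bool
    reason: str  # the human-readable explanation: the whole point of XAI

def handle_request(destination: str, severe_weather: bool,
                   reachable: set[str]) -> Decision:
    """Refuse a ride with an explicit reason instead of failing silently."""
    if severe_weather:
        return Decision(False, "Refused: severe weather (e.g. a tornado) on the route.")
    if destination not in reachable:
        return Decision(False, f"Refused: no passable road to '{destination}'.")
    return Decision(True, "Route accepted.")

d = handle_request("mountain cabin", severe_weather=False,
                   reachable={"airport", "downtown"})
print(d.reason)  # Refused: no passable road to 'mountain cabin'.
```

The design choice is simply that every refusal carries its cause with it, so the passenger is never left guessing.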

Principle 2: “Inclusiveness: the needs of all people must be taken into account so that the benefits are the same for all, and the best conditions for self-expression and development should be offered to all people”


Some worry that unmanned vehicles will be available only to the rich, bringing no benefit to the rest of the population (see the link here).

What do we get - mobility for all, or for a few?

Principle 3: “Responsibility: those who develop and implement AI systems must act with responsibility, and their work must be transparent”


What will the AI do in a situation where it must choose between hitting a child who has run into the road and crashing into a tree, harming the passengers?

This dilemma is also known as the trolley problem (see the link). Many demand that automakers and unmanned-vehicle technology companies be transparent about how their systems make life-or-death decisions.

Principle 4: “Impartiality: build AI systems and act without prejudice, thereby guaranteeing justice and human dignity”


Suppose a self-driving car reacts to pedestrians differently depending on their race.

Or suppose that a fleet of unmanned vehicles learns to avoid driving in certain areas, which will deprive residents of free access to cars that do not need a driver.

The biases that creep into machine learning and deep learning systems are a major concern.
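One way such disparities can at least be detected is a simple audit of the dispatch log. The data and names below are made up for illustration; the point is only that per-area service rates are measurable:

```python
from collections import Counter

# Hypothetical dispatch log: (area of pickup request, was it served?)
requests = [
    ("district_a", True), ("district_a", True), ("district_a", True),
    ("district_b", True), ("district_b", False), ("district_b", False),
]

served = Counter(area for area, ok in requests if ok)
total = Counter(area for area, _ in requests)

for area in total:
    rate = served[area] / total[area]
    print(f"{area}: {rate:.0%} of requests served")
# A large gap between areas is a red flag worth investigating.
```

A fleet operator who never computes such rates cannot honestly claim its service is impartial.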

Principle 5: “Reliability: AI systems must work reliably”


You are riding in a self-driving car when it suddenly pulls over to the side of the road and stops. Why? You may not have a clue.

The AI may have reached the limits of its operational domain.

Or perhaps the AI ​​system crashed, an error occurred, or something else like that.

Principle 6: “Security and Confidentiality: AI Systems Must Work Safely and Respect User Confidentiality”


You get in an unmanned vehicle after having fun in the bars.

It turns out that this machine has cameras not only outside but also inside.

In fact, the cabin cameras are there to catch passengers who draw graffiti or damage the car’s interior.

Either way, these cameras record you the entire time you ride in the unmanned car, and your incoherent ramblings will end up on the video too, since you are in a drunken stupor.

Who owns this video, and what can they do with it?

Must the owner of the unmanned car provide you with the video, and is there anything to stop them from uploading it to the Internet?

There are many privacy issues that still need to be addressed.

From a security perspective, there are many hacking options for intelligent driving systems.

Imagine an attacker who hacks an AI-driven car. They could take control, or at least order the car to take you to a place where kidnappers are waiting.

There are many creepy scenarios involving security holes in unmanned-vehicle systems or their cloud back ends, and car manufacturers, along with technology companies, must apply the necessary protections and precautions.
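As one hedged illustration of such a precaution, remote commands to a vehicle can be authenticated with a shared-secret MAC so that spoofed commands are rejected. This sketch uses Python's standard `hmac` module; key provisioning and the transport layer are deliberately out of scope, and the key shown is a placeholder:

```python
import hashlib
import hmac

SECRET_KEY = b"per-vehicle secret provisioned at manufacture"  # placeholder

def sign_command(command: bytes, key: bytes = SECRET_KEY) -> bytes:
    """Tag a command so the vehicle can check who really sent it."""
    return hmac.new(key, command, hashlib.sha256).digest()

def verify_command(command: bytes, tag: bytes, key: bytes = SECRET_KEY) -> bool:
    """Constant-time check of the tag, rejecting spoofed or altered commands."""
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

cmd = b"navigate:47.60,-122.33"
tag = sign_command(cmd)
print(verify_command(cmd, tag))               # True: genuine command
print(verify_command(b"navigate:evil", tag))  # False: spoofed command rejected
```

Authentication alone is not a full defense (replay protection, key rotation, and secure storage all matter too), but it illustrates the kind of baseline measure manufacturers owe their passengers.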

Conclusion


If you compare this call to the ethics of AI with many others, you will find that they have a lot in common.

The criticism that we will end up with a countless multitude of such appeals holds some truth, but it can also be argued that covering all the fundamentals does no harm.

The situation is starting to get a little confusing, and AI makers will use the excuse that, since there is no single accepted standard, they will wait until one appears.

It sounds like a reasonable excuse, but I don’t buy it.

Manufacturers who say they will wait until some universally agreed version, drafted by some big shot, is approved are in fact ready to postpone adopting any AI code of ethics, and their wait may well turn out to be a wait for Godot.

I urge you to pick any AI ethics guideline that contains something substantive and comes from an authoritative source, and get on with implementing it.

Better sooner than later.


