US Department of Defense: Ethics for AI and Self-Driving Cars


AI ethics is a hot and relevant topic these days, and rightly so.

With AI systems proliferating so rapidly, there are reasonable concerns that these systems are simply being thrown into the world without an understanding of their possible consequences for society.

There is serious concern that new artificial intelligence systems have biases built into them and are prone to actions that are ethically and socially toxic.

When it comes to AI ethics, I argue that there are at least two main aspects (or rules):

  1. The behavior of an AI system should be acceptable to society.
  2. The designers of AI systems, and the systems themselves, must enforce rule 1.

The essence of the first rule is perhaps obvious: the point of AI ethics is that the behavior of these systems should comply with society's ethical standards.

As I will show below, complying with the first rule is more complicated than it might seem.

The point of the second rule is to make clear that the parties responsible for ensuring AI ethics compliance are the people who design, build, and deploy these systems.

If this seems obvious to you, don't be too quick to rely on "common sense" here.

There is a kind of aura around AI systems, as if they were autonomous and capable of self-determination. So when an AI system exhibits something like racial bias, some people make the logical mistake of saying that the AI did it, without blaming those who created and deployed the system.

Some developers and manufacturers of these systems would certainly prefer that you not look in their direction when an AI system does something wrong.

As a result, those who bear the true responsibility can put on a surprised face and slyly dodge their duties, saying that the AI is to blame. These people behave as if they are merely watching a train wreck, even though they are the ones who put the train on the rails and set it going without any oversight or verification.

Of course, some will defend their system after it fails. They will say they tried to prevent every failure, but, alas, something slipped through. In a sense, these people will try to cast themselves as "victims," placing themselves alongside those actually harmed by the AI.

Don't be fooled by this.

In fact, please don't be fooled by this.

In short, the second rule is as important as the first.

Without holding people accountable for the actions of the AI they create, we will all end up in serious trouble.

I'm sure that some creators will easily slip free of the promises they made while developing their new AI systems. These people will praise their products and hope for their success, while pretending to have nothing to do with the applications whose ethical failures land on all of us. All of this encourages the creation of AI systems that carry no moral accountability.

Another area in which AI ethics can be particularly difficult is AI for military purposes.

Most people will agree that we need to abide by certain AI ethical standards when creating and using military AI systems. The knowledge and experience we gain there can then be applied to commercial and industrial uses of AI.

The U.S. Department of Defense recently released a set of AI ethics principles.

If you look at these principles in isolation (which is what I will do in this article), that is, without regard to the military and defense context, you can see that they apply equally well to any commercial or industrial AI system.

I will walk through these principles using my favorite domain for this: the emergence of true self-driving cars. This will let me illustrate just how applicable this latest set of AI ethics principles is, including to non-military uses.

According to media reports, when the Pentagon published this latest set of AI ethical standards, Air Force Lieutenant General Jack Shanahan, director of the Joint Artificial Intelligence Center, aptly stated: “AI is a powerful and effective emerging technology that is rapidly transforming culture, society, and ultimately even warfare. Whether these changes are for good or ill depends on our approach to adopting and using these systems.”

As I have said repeatedly, society's approach to developing AI will determine the fate of these systems: whether they work for good, or whether they are (or over time become) failures with deplorable consequences.

Serious pressure must be kept on the developers and manufacturers of AI systems.

For clarity, I note that AI systems will not work for good simply by their nature (there are no axioms or preconditions that could guarantee this).

To study this issue fully, let's consider a concept of mine: a four-quadrant layout that I call the AI ethics consequences scheme.

Conventionally, we label the upper half of the vertical axis “AI for good” and the lower half “AI for harm.” Similarly, we divide the horizontal axis: the left side is labeled “intentional” and the right side “unintentional.”

This gives us four quadrants:

  1. AI for good - intentional
  2. AI for harm - intentional
  3. AI for good - unintentional
  4. AI for harm - unintentional

I understand that some of you may be horrified by the reduction of such a weighty issue to a simple four-square layout, but sometimes a global perspective is useful: it lets you see the forest, not just the trees.
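To make the layout concrete, here is the scheme expressed as a tiny data structure. This is a minimal illustrative sketch in Python: the quadrant labels come straight from the list above, and everything else is just scaffolding, not any formal model.

```python
# A minimal sketch of the four-quadrant AI ethics consequences scheme.
# The labels mirror the list above; the structure itself is illustrative.
from enum import Enum

class Outcome(Enum):
    GOOD = "AI for good"
    HARM = "AI for harm"

class Intent(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

QUADRANTS = {
    1: (Outcome.GOOD, Intent.INTENTIONAL),    # the standard to strive for
    2: (Outcome.HARM, Intent.INTENTIONAL),    # malicious AI
    3: (Outcome.GOOD, Intent.UNINTENTIONAL),  # accidental good fortune
    4: (Outcome.HARM, Intent.UNINTENTIONAL),  # the most insidious case
}

for number, (outcome, intent) in QUADRANTS.items():
    print(f"Quadrant {number}: {outcome.value}, {intent.value}")
```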

The first quadrant describes good AI created with good intentions. This AI achieves its goals without ethical violations, and I believe this quadrant is the standard all intelligent systems should strive for.

The second quadrant is malicious AI. Unfortunately, such systems will be created and released (by criminals or terrorists), but I hope we will be ready for them, and that AI ethics principles will help us detect their appearance.

The third quadrant is accidental good fortune. It describes a situation in which someone builds an intelligent system that unexpectedly turns out to do good. Its creator may not have been aiming for a positive outcome, but even if it is just a lucky combination of circumstances, it still counts as a success, because what matters is the result.

The fourth quadrant is the most insidious, and it is the one that causes the most anxiety (apart from the sinister second quadrant). It typically captures systems that were meant to work for good, but that accumulate negative qualities during development which outweigh all the good.

In general, we will be surrounded by systems in which good and bad traits are mixed in varying degrees. The situation can be described as follows: developers strive to create good, useful AI, but in the process unintentional errors accumulate in the system and lead to adverse consequences.

Consider two important questions:

  • How can we recognize or discover that an intelligent system may carry the traits of bad AI?
  • How can developers know how to build such systems so that errors are caught and systems whose operation leads to negative consequences never emerge?

The answer: strictly follow the rules drawn from a sound and substantive set of AI ethics principles.

Speaking of AI ethics guidelines, the document issued by the Department of Defense contains five basic principles, each of which expresses important ideas that every AI developer and distributor should pay attention to, as should everyone who uses or will use these systems.

In general, AI ethics guidelines are useful to all interested parties, not just developers.

The best way to understand the principles in the Department of Defense document is to show how they apply to existing intelligent systems.

So it is worth asking the pressing question: are the principles from the Department of Defense's set applicable to true self-driving cars? And if so, how should they be applied?

They are indeed applicable; let's take a look at how they work.

Levels of self-driving cars



It’s important to clarify what I mean when I talk about fully self-driving cars with AI.

True self-driving cars are vehicles in which the AI does all the driving without any human assistance.

Such vehicles fall under Levels 4 and 5, while cars that require a human to share the driving are usually assigned to Level 2 or 3. Cars in which a human shares the driving are called semi-autonomous, and they usually contain many add-on features referred to as ADAS (advanced driver-assistance systems).

So far, there is no fully self-driving Level 5 car. Today we don’t even know whether that will be achieved, or how long it will take.

Meanwhile, work is underway in the Level 4 area. Very narrow and selective tests are being carried out on public roads, although there is debate over whether such tests should be allowed at all (some believe that the people taking part in tests on roads and highways are serving as guinea pigs who may survive or die in each test).

Since semi-autonomous cars require a human driver, their mass adoption will not differ much from driving ordinary cars, so there is little new to say about them in the context of our topic (although, as you will soon see, the points considered below also apply to them).

In the case of semi-autonomous cars, the public needs to be warned about an alarming trend that has emerged recently: despite the people who keep posting videos of themselves falling asleep at the wheel of Level 2 or 3 cars, we all need to remember that the driver of a semi-autonomous car cannot withdraw their attention from driving.

You are responsible for the driving actions of a Level 2 or 3 vehicle, regardless of its level of automation.
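As a rough summary of the levels discussed above, here is a minimal Python sketch expressing them as a lookup table. The descriptions are informal paraphrases of this article's text, not the official SAE J3016 definitions.

```python
# Informal summary of the automation levels discussed in the text.
# "human_drives" marks whether a human still shares driving duty.
AUTOMATION_LEVELS = {
    2: {"human_drives": True,  "note": "semi-autonomous, ADAS features"},
    3: {"human_drives": True,  "note": "semi-autonomous, human must stay ready"},
    4: {"human_drives": False, "note": "self-driving within limited conditions"},
    5: {"human_drives": False, "note": "self-driving everywhere (not yet achieved)"},
}

def driver_responsible(level: int) -> bool:
    """Per the text: at Levels 2-3 the human remains responsible."""
    return AUTOMATION_LEVELS[level]["human_drives"]

print(driver_responsible(2))  # True: you must stay attentive
print(driver_responsible(4))  # False: the AI does all the driving
```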

Self-driving cars and AI ethics


In Level 4 and 5 vehicles, humans take no part in the driving: everyone inside is a passenger, and the AI does the driving. As a result, these vehicles will incorporate many intelligent systems - an indisputable fact.

This raises the question: will the AI that drives a self-driving car be bound by any AI ethics guidelines or requirements? (You can look at my analysis of AI ethics for self-driving cars at this link, as well as tips on developing such an ethics.)

Recall that there is no default mechanism that requires AI to comply with ethical standards or a moral contract with society.

People themselves must embed such beliefs or principles into intelligent systems, and AI developers must do so explicitly, with their eyes wide open.

Unfortunately, some do the opposite: they shut their eyes tight, or act as if hypnotized, and instead stare in fascination at the potential prize their project will bring (or its supposedly noble goal).

I have warned that some AI systems will produce a kind of "noble-cause corruption" in those who develop them. The result is this: the people involved in development become so absorbed in the potential of well-intentioned AI that they selectively ignore, miss, or downplay AI ethics considerations (see link).

Here are the five principles in the US Department of Defense's set of AI ethics principles:

Principle One: Responsible


“DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.”

To begin, take the phrase “DoD personnel” in this wording and replace it with “all personnel.” You get guidance that suits any organization working with AI, especially a commercial one.

Automakers and technology firms working on self-driving cars should read the wording of this principle closely and follow it carefully.

I say this because some continue to argue that an AI-driven car will act on its own, and that there will therefore be much anxiety over who bears liability for errors in the vehicle's behavior that lead to injury or death.

In short, people will be held responsible.

That may be the people who developed the AI-based self-driving car: an automaker or a technology firm.

Understand also that, as I keep saying, the responsible parties include both the developers and those who deploy these AI systems, which means that the people who operate and install these systems matter just as much as the developers.

Why?

Because people who deploy intelligent systems can also make mistakes that lead to adverse consequences.

For example, suppose that a ridesharing company has acquired a fleet of self-driving cars and uses them to carry its customers.

So far, so good.

But let's say this company fails to service its self-driving cars properly. Due to lack of maintenance, or neglect of the instructions, the vehicles malfunction. In this case we could find fault with the original developer, but the harder questions belong to those who deployed the AI system and failed to exercise due care and responsibility in their work.

Principle Two: Equitable


“The Department will take deliberate steps to minimize unintended bias in AI capabilities.”

Once again, take “the Department” in this wording and replace it with “developers and the staff who deploy AI systems.” You get a principle that is universal and suits everyone.

In the case of self-driving cars, there is a fear that the AI system may react differently to pedestrians depending on their race, subtly and inexorably exhibiting racial bias.

Another concern is that a fleet of self-driving cars might be made to avoid certain areas. That would lead to geographic exclusion, ultimately depriving people with mobility problems of free access to self-driving cars.

As a result, there is a significant chance that full-fledged self-driving cars will become mired in bias of one form or another, and rather than throwing up their hands and saying that's life, developers and deployers should strive to minimize, mitigate, or reduce any such bias.
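To show what "striving to minimize bias" can look like in practice, here is a minimal Python sketch of one common approach: measuring a pedestrian detector's hit rate per demographic group on an annotated evaluation set and flagging large gaps. All names (EvalRecord, the groups, the 0.02 threshold) are hypothetical illustrations, not any real vendor's pipeline.

```python
# Minimal sketch: auditing a pedestrian detector for group-level
# disparities on a labeled evaluation set.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class EvalRecord:
    group: str        # demographic label from the annotated test set
    detected: bool    # did the model detect this pedestrian?

def detection_rates(records):
    """Per-group detection rate: detected / total for each group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r.group] += 1
        hits[r.group] += r.detected
    return {g: hits[g] / totals[g] for g in totals}

def flag_disparity(records, max_gap=0.02):
    """Flag if the gap between best- and worst-served groups exceeds max_gap."""
    rates = detection_rates(records)
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, rates

# Usage: run the detector over an annotated test set, then audit.
records = [EvalRecord("group_a", True), EvalRecord("group_a", True),
           EvalRecord("group_b", True), EvalRecord("group_b", False)]
biased, rates = flag_disparity(records)
print(rates, "disparity flagged:", biased)
```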

Principle Three: Traceable


“The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.”

As before, remove the references to the Department from the wording, or, if you prefer, read it as any department in any company or organization; that makes the principle broadly applicable to everyone.

There is a lot packed into this principle, but for brevity I will focus on just one aspect that you might otherwise overlook.

The mention of “data sources” points at the spread of machine learning and deep learning.

When you work with machine or deep learning, you typically collect a large dataset and then apply algorithms and models to find patterns in it.

As with the hidden bias discussed in the previous section, if the collected data is biased in some way, the discovered patterns will potentially reflect those biases. And when the AI system then operates in real time, it will carry those biases into reality.

Even more frightening is that AI developers may not understand what is happening, and those who install and deploy these systems will not be able to figure it out either.

A self-driving car trained on the crazy, hostile traffic of, say, New York could well absorb the aggressive driving style typical of a New York driver. Now imagine taking that system and deploying it in US cities where the driving style is calmer and more measured.
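One practical countermeasure, in the spirit of this principle, is to audit the composition of the training data before any patterns are learned from it. Below is a minimal Python sketch, assuming a hypothetical metadata record per training sample; the field names and values are invented for illustration.

```python
# Minimal sketch: auditing where a driving dataset's samples came from.
from collections import Counter

def audit_sources(samples):
    """Count training samples per source attribute (city, weather, ...)."""
    by_city = Counter(s["city"] for s in samples)
    by_weather = Counter(s["weather"] for s in samples)
    return by_city, by_weather

samples = [
    {"city": "new_york", "weather": "clear"},
    {"city": "new_york", "weather": "rain"},
    {"city": "new_york", "weather": "clear"},
    {"city": "phoenix", "weather": "clear"},
]
by_city, by_weather = audit_sources(samples)
# 3 of 4 samples come from one city: patterns learned from this data
# will over-represent that city's driving conditions and habits.
print(by_city, by_weather)
```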

This principle makes clear that developers and the staff responsible for installation and deployment must pay careful attention to data sources, transparency, and the auditability of such systems.

Principle Four: Reliable


“The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.”

Do the same as in the previous sections: remove “the Department” and apply the wording to any organization and its departments.

Most (or perhaps all) will agree that the success or failure of self-driving cars hinges on their safety, security, and effectiveness.

I hope this principle's relevance is obvious, given how directly it applies to AI-based cars.
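To illustrate what "explicit, well-defined uses" can mean in code, here is a minimal Python sketch of an operational design domain (ODD) gate: the system refuses autonomous operation outside the envelope it was actually tested in. All fields and limits are hypothetical assumptions, not any real system's parameters.

```python
# Minimal sketch: refusing autonomous operation outside the tested ODD.
from dataclasses import dataclass

@dataclass
class Conditions:
    weather: str        # e.g. "clear", "rain", "snow"
    max_speed_kmh: int  # speed limit on the current road
    mapped_area: bool   # inside the HD-mapped service area?

# The envelope the system was validated for during testing.
TESTED_ODD = {"weathers": {"clear", "rain"}, "speed_limit": 80}

def within_odd(c: Conditions) -> bool:
    """True only if current conditions fall inside the tested envelope."""
    return (c.weather in TESTED_ODD["weathers"]
            and c.max_speed_kmh <= TESTED_ODD["speed_limit"]
            and c.mapped_area)

# Usage: snow was never tested, so autonomous mode must be refused.
print(within_odd(Conditions("snow", 50, True)))   # False
print(within_odd(Conditions("clear", 60, True)))  # True
```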

Principle Five: Governable


“The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”

As usual, remove “the Department” and apply the wording to any organization.

The first part of this principle echoes my theme of detecting and preventing unintended consequences, especially adverse ones.

This is fairly obvious in the case of self-driving cars.

There is also a subtle point here for self-driving cars: the ability to disengage or deactivate a deployed AI system that has exhibited unintended behavior. There are many options and many difficulties, and it is not simply a matter of pressing a red shutdown button on the outside of the car (as some people imagine).
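To hint at what deactivation can involve beyond a red button, here is a minimal Python sketch of a runtime watchdog that escalates to a pre-verified minimal-risk maneuver (pull over and stop) when behavior looks abnormal; cutting power to a moving vehicle is not a safe option. The telemetry fields and thresholds are hypothetical assumptions for illustration.

```python
# Minimal sketch: a watchdog that disengages the main driving stack
# gracefully instead of killing power to a moving vehicle.
from dataclasses import dataclass

@dataclass
class Telemetry:
    lateral_error_m: float   # deviation from the planned path
    command_rate_hz: float   # planner output frequency

def is_anomalous(t: Telemetry) -> bool:
    """Hypothetical anomaly test: path deviation or a stalled planner."""
    return t.lateral_error_m > 1.5 or t.command_rate_hz < 5.0

def supervise(t: Telemetry) -> str:
    """Keep driving, or hand off to a pre-verified fallback behavior."""
    if is_anomalous(t):
        # Disengage the main stack and run a fallback that pulls over
        # and stops (a minimal-risk condition).
        return "execute_minimal_risk_maneuver"
    return "continue_autonomous_driving"

print(supervise(Telemetry(lateral_error_m=0.2, command_rate_hz=20.0)))
print(supervise(Telemetry(lateral_error_m=2.4, command_rate_hz=20.0)))
```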

Conclusion


We usually expect the Department of Defense not to show its cards.

In this case, though, an AI ethics guide should not be kept secret. Indeed, it is very welcome and commendable that the Department of Defense published this document.

In the end, it is one thing to set principles down on paper, and quite another to see them followed in the real world.

All developers and all staff responsible for installation and deployment should study this AI ethics guide carefully and thoughtfully, along with the many similar sets of ethical principles from other sources.


