Vladyslav Shcherbatiuk, ME-110i, KNEU
From the pages of science fiction, the predictions of futurists and the guesses of ordinary people, artificial intelligence is confidently entering our lives. And the closer the day when AI stands on the same level of consciousness as people, the more today's philosophers, lawyers and the interested public enter into discussions about possible future AI rights, and in particular the legal side of this question.
First of all, let's define what artificial intelligence is.
Artificial intelligence is the property of intelligent systems to perform creative functions that are traditionally considered the prerogative of a human.
In English, the phrase "artificial intelligence" does not have the anthropomorphic coloration that it acquired in the traditional Russian/Ukrainian translation: the word "intelligence" in this context means "the ability to reason sensibly" rather than "intellect" (for which English has a separate word). So artificial intelligence is a machine that is able to think and, therefore, act like a human. Through this definition we have established what AI is, to avoid confusion.
But how would we know that a specific machine is an AI? The most popular answer to this question is probably the Turing test. So what is it? This empirical test was proposed by Alan Turing in his article "Computing Machinery and Intelligence", published in 1950 in the philosophical journal Mind. Its purpose is to determine whether artificial thinking close to the human kind is possible.
The standard interpretation of the test is as follows: "A person interacts with one computer and one person. Based on the answers to his questions, he must determine whom he is talking to: a person or a computer program. The task of the computer program is to mislead the person into making the wrong choice." None of the test participants can see each other.
The most general approach assumes that AI will be able to exhibit behavior that does not differ from human behavior, including in ordinary situations. This idea generalizes the Turing test, which claims that a machine will become intelligent when it is able to maintain a conversation with an ordinary person, and that person is unable to tell that he is talking to a machine (the conversation is conducted in writing).
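The protocol just described can be sketched in a few lines of Python. Everything here is invented for illustration: the `imitation_game` function, the toy participants and the naive judge are not a real test harness, only a model of the blind, text-only setup Turing proposed.

```python
import random

def imitation_game(ask_human, ask_machine, judge, rounds=3):
    """One blind session of Turing's imitation game (text-only).

    The judge sees answers from two anonymous channels, "A" and "B",
    and must say which channel hides the machine. The machine "passes"
    the session if the judge guesses wrong.
    """
    channels = {"A": ask_human, "B": ask_machine}
    order = ["A", "B"]
    random.shuffle(order)  # the judge must not know which channel answers first

    transcript = {"A": [], "B": []}
    for i in range(rounds):
        question = f"question {i}"
        for label in order:
            transcript[label].append(channels[label](question))

    guess = judge(transcript)                  # judge returns "A" or "B"
    return channels[guess] is not ask_machine  # True means the machine passed

# Toy participants: both echo the question, so answers alone reveal nothing.
human = lambda q: f"I think: {q}"
machine = lambda q: f"I think: {q}"
naive_judge = lambda t: random.choice(["A", "B"])

print(imitation_game(human, machine, naive_judge))  # True or False, at random
```

The point of the sketch is structural: the judge works only with the transcript, never with the participants themselves, which is exactly the "all test participants cannot see each other" condition.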
Science fiction writers often suggest another approach: AI will emerge when a machine is able to feel and to create. Thus the owner of the robot butler Andrew Martin in "Bicentennial Man" begins to treat him like a person when Andrew carves a toy of his own design. But, fortunately or unfortunately, none of the proposed ways to test for AI has been accepted by the scientific community.
Let's recall the most famous list that comes to mind when we talk about AI and the law. Of course, I mean the Three Laws of Robotics. What are they?
The Three Laws of Robotics are mandatory rules of behavior for robots in science fiction, first formulated by Isaac Asimov in his story "Runaround" (1942).
The laws state:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
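The strict priority ordering of the Laws can be made concrete with a small sketch. The function and all its field names (`harms_human`, `disobeys_order`, and so on) are invented for this illustration; the only thing it faithfully models is that a lower-numbered law always overrides a higher-numbered one.

```python
def permitted(action):
    """Evaluate a proposed action against the Three Laws in priority order.

    `action` is a dict of boolean flags describing the action's consequences.
    All flag names are invented for this sketch.
    """
    # First Law: never harm a human, by action or by inaction.
    if action.get("harms_human") or action.get("inaction_allows_harm"):
        return False
    # Second Law: obey humans, unless obedience would violate the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: self-preservation, unless it conflicts with Laws One or Two.
    if action.get("endangers_self") and not (
        action.get("needed_to_protect_human") or action.get("needed_to_obey_order")
    ):
        return False
    return True

# A robot may refuse an order precisely when obeying it would harm a human:
print(permitted({"disobeys_order": True, "order_would_harm_human": True}))  # True
```

Even this toy version hints at the practical problem Asimov's stories explore: everything hinges on flags like `harms_human` being computed correctly, which is exactly the part no one knows how to implement.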
Of course, you may say that these laws are suitable only for science fiction and robots. But for most people, AI in its final form is a robot with the ability to reason sensibly, as mentioned earlier. As for the science fiction objection, I can only say that in contemporary discussions of robotics and AI we often hear the Three Laws mentioned.
Work on artificial intelligence tends to view the Laws of Robotics as an ideal for the future: it would take a true genius to find a way to put them into practice. And within the field of artificial intelligence itself, serious research may be required before robots can even understand the Laws.
However, the more sophisticated robots become, the more interest there is in developing guidelines and safety measures for them. On the other hand, Asimov's later novels ("The Robots of Dawn", "Robots and Empire", "Foundation and Earth") show that robots can do even more long-term harm by obeying the Laws and thereby depriving people of the freedom to take inventive or risky actions.
Modern roboticists recognize that Asimov's Laws are good for storytelling but useless in practice. Some argue that the Laws are unlikely ever to be implemented in robots, because the military structures that are the main source of funding for research in this area have no need of them. Science fiction writer Robert J. Sawyer has generalized this argument to all industries.
A major part of legal theory is devoted to the ethical side of laws, and the Three Laws of Robotics raise controversial ethical issues of their own. For example, Asimov later modified his list, adding a Zeroth Law that takes precedence over the First, Second and Third. This law asserts that a robot should act in the interests of all mankind, not just of an individual.
Towards the end of "The Caves of Steel", Elijah Baley notes that the First Law forbids a robot from harming a person, unless there is certainty that this will be useful for him in the future. In the French translation ("Les Cavernes d'acier", 1956), Baley's thought is conveyed somewhat differently:
"A robot cannot harm a person unless it can prove that this will ultimately benefit all of humanity." After these examples we can recall the so-called trolley problem, in which one must decide whether to kill one person in order to save five others.
And at this point we finally come to the legal aspects of future AI.
There are examples in the real world. On March 18, 2018, one of Uber's self-driving cars struck and killed a 49-year-old resident of Tempe, Arizona. This was the first and, fortunately, so far the only fatal incident involving a pedestrian and a self-driving car.
The obvious question is: who is to blame? The pedestrian? The driver? The engineers who developed the autopilot? Or perhaps the people who helped assemble the dataset for training it? Car manufacturers with autopilot functions are insured against liability in such cases.
The point is that, with the autopilot on, the driver must always keep his hands on the steering wheel, or place them there at the car's first request. Thus the driver can, and must, respond to an emergency on the road.
Let's define what the legal regulation of AI can include.
Where artificial intelligence needs jurisprudence right now:
- protection of personal data (the basis of AI is the collection of petabytes of personal data);
- regulation of economic activities for the production of robots or software;
- issues of civil and criminal liability;
- copyright for works created by artificial intelligence;
- cybersecurity and AI applications;
- “Mixed justice” and “artificial intelligence justice”, that is, the application of AI in justice;
- the role of AI in combating climate change and the spread of fakes;
- human rights and discrimination.
It is important to understand that AI is, first of all, simply another new technology that needs to be regulated from a legal point of view. Separate issues, of course, are the legal personality of AI and the (criminal) liability of AI.
Here is another example: in China, more than 3 million lawsuits have already been resolved using an automated service based on artificial intelligence, mainly civil property disputes. There have been no high-profile cases of error yet. However, the system is positioned as a universal assistant to the judge, not a replacement; that is, "your hands should still be on the steering wheel." After all, the court works with human rights.
This raises the question of data privacy, namely the preservation of a person's right to private life (Article 8 of the Convention for the Protection of Human Rights and Fundamental Freedoms). The use of AI is known to require big (real) data. The reader may object that the data can be anonymized, and of course he will be right. To dive deeper into this issue, I propose a thought experiment.
Suppose there is a system that assesses a customer's wealth by his appearance, installed in one of your local pharmacies. The pharmacy already runs an algorithm that recommends drugs to you, taking, among other things, this system's output into account. All shoppers entering the pharmacy take coupons and enter their symptoms.
Two men stand in line, both complaining of a cough, and they are offered different medicines: one costs 1,500 hryvnias, the other 350. Now, reader, let us sum up the experiment. Have their privacy rights been violated? Formally, no: neither man gave his name or any other details. But, in fact, the system inferred the material status of each of them. This raises the metadata dilemma.
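The thought experiment can be sketched in code. Every name here is invented: the feature list, the weights and the two drugs are stand-ins. The sketch only demonstrates the mechanism by which a system that never asks for a name can still act on an inferred wealth estimate.

```python
def estimate_wealth(features):
    """Score a customer's apparent wealth from anonymous appearance cues.

    The cues and weights are invented for this illustration.
    """
    weights = {"branded_clothing": 2.0, "smartwatch": 1.5, "worn_shoes": -2.0}
    return sum(weights.get(f, 0.0) for f in features)

def recommend(symptom, features, catalog):
    """Pick the priciest drug for the symptom the customer seems able to afford."""
    budget = 1500 if estimate_wealth(features) > 0 else 350
    options = [d for d in catalog if d["treats"] == symptom and d["price"] <= budget]
    return max(options, key=lambda d: d["price"])

catalog = [
    {"name": "generic syrup", "treats": "cough", "price": 350},
    {"name": "premium syrup", "treats": "cough", "price": 1500},
]

# Two anonymous customers, the same symptom, different recommendations:
print(recommend("cough", ["branded_clothing", "smartwatch"], catalog)["price"])  # 1500
print(recommend("cough", ["worn_shoes"], catalog)["price"])                      # 350
```

No personal data in the legal sense is collected at any point, yet the output reveals what the system concluded about each person's income: this is the gap between formal anonymization and actual privacy that the experiment illustrates.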
Metadata is auxiliary information about the data being used: in other words, information about information.
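The definition can be made concrete with the most ordinary case: file metadata. A minimal sketch, using only the standard library; the `file_metadata` helper is written for this illustration.

```python
import os
import tempfile
from datetime import datetime, timezone

def file_metadata(path):
    """Return a few pieces of a file's metadata: data about the data,
    obtained without reading the file's contents."""
    info = os.stat(path)
    return {
        "size_bytes": info.st_size,
        "modified_utc": datetime.fromtimestamp(info.st_mtime, tz=timezone.utc).isoformat(),
        "format": os.path.splitext(path)[1],  # crude format hint from the extension
    }

# Demo on a throwaway file:
with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as f:
    f.write(b"hello")
    demo = f.name
print(file_metadata(demo))  # e.g. {'size_bytes': 5, 'modified_utc': '...', 'format': '.txt'}
os.remove(demo)
```

Notice that nothing in the returned dictionary is the file's content, yet together the fields already say a great deal about it; the same asymmetry drives the pharmacy dilemma above.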
An example of a file's metadata might be its size, format, and so on. Suppose that a sample of several hundred thousand people was used to train the aforementioned AI. They all signed agreements consenting to the processing of their personal data and metadata. The resulting dataset was kept strictly confidential and never shared. The pharmacy that paid for this black box, which reveals a person's income, did not violate its customers' rights; the personal data company did not violate anyone's rights either. Yet, in effect, rights have been violated: anyone can pay and buy a "salary calculator".

In this regard, the Committee of Ministers of the Council of Europe (CoE) established the Ad hoc Committee on Artificial Intelligence (CAHAI) in September 2019. The main task of this expert group is to develop a legal framework for the design and application of artificial intelligence based on the organization's standards in the field of human rights, democracy and the rule of law.
The Council of Europe's committee on artificial intelligence was tasked with examining the feasibility and possible structure of a regulatory framework for AI, and it recently submitted an interim report.
Below are some interesting takeaways from the report:
– The need to regulate the development and application of AI stems from the protection of human rights, democracy and the rule of law, and from ensuring that AI benefits both individuals and society as a whole.
– One result of the committee's work was an initiative to create a map of national AI initiatives. Some interesting ideas under discussion: an analogue of the Hippocratic Oath for AI engineers, an analogue of a "driver's license" for AI developers, and a self-governing professional organization for data scientists.
– The report also envisions verification/certification of AI developments by independent bodies, as well as running AI algorithms in so-called sandboxes, that is, in isolation to protect them from viruses, for reasons of cybersecurity. The regulation is planned to be developed by the end of 2021.
Until now, artificial intelligence (AI) technology has been practically unregulated, and many questions arise:
- who can own the rights to objects created by AI?
- how to reliably anonymize and protect personal data, especially related to face recognition?
- how can big data and public data be accessed so that AI can fully evolve?
- who is responsible for the actions of the AI, and how can this responsibility be proven?
Artificial intelligence and responsibility
The law does not yet answer whether AI can be a subject of law, bear responsibility for incidents, or earn a profit from its works. Reportedly, an AI named Nikolai Ironov (a neural network) creates logos for companies at the studio of Artemy Lebedev, a famous Russian designer. In this case, the profit from the logos is received by the legal entity.
The AI algorithm belongs to its creator; the rights to the results of its use are a separate question. Those results depend heavily on the input data, and the developer cannot always predict them. Here, attributing copyright to a legal entity may be justified, because an AI can be configured by a whole team of employees.
"Everything is moving towards works created by AI being protected on the basis of the creative result, not the creative process," reflects Irina Shurmina, senior lawyer in the IP/Digital practice at CMS Russia.
Precedents in the US and Australia show that copyright does not protect art not created by a human, be it a selfie taken by a monkey with a photographer's camera or HTML code written by an AI.
The European Patent Office, referring to the European Patent Convention, refused to grant DABUS a patent for its inventions, since a patent application must name the inventor, which is one of the guarantees of the exercise of rights. A European Parliament report indicates that AI cannot be the copyright holder; the rights will be owned by the person who prepared the object.
However, in China a 2020 court ruling recognized copyright in a financial report written by Tencent's Dreamwriter robot after it was copied without permission.
Lawyer Irina Shurmina recalls that in European practice responsibility can be borne by the developer (backend operator), the administrator/customizer (deployer), or the end user. Typically, for risky systems such as medical technology or self-driving cars, the deployer is responsible; in this sense he is equated to the owner of a car or a pet. For damage and injury to health, fines range from 2 to 10 million euros.
CMS Russia proposes looking at AI's output from the standpoint of related rights, which are not necessarily associated with creative work and may arise when working with objects not subject to copyright. This is how phonograms, databases and TV broadcasts are protected.
In this essay I have tried to introduce you to the still largely unexplored relationship between artificial intelligence and the law. We have considered the concept of artificial intelligence, the Three Laws of Robotics, the ethical side of the issue, and real-world cases. From my point of view, this is enough information to form an opinion on the topic. As we can see, there are as yet no clear or generally accepted laws dedicated to AI regulation, but some progress in this field can already be seen.
I think we have the great honor and good fortune to observe the formation of artificial intelligence as, perhaps, the apogee of humanity's scientific progress. Just think what questions of ethics, law, morality and simple human interest have fallen to our age and the age of our children. Is that not luck? I hope you are as interested as I am in the discussions, reasoning and observation that lie ahead as artificial intelligence develops, and that my work has helped you to "touch" this topic or learn more about it.