“You’re not human” – USPTO.

This is, in effect, what the U.S. Patent and Trademark Office has said to all artificial intelligence.

The U.S. Patent Office has ruled that for any product independently designed and invented by an artificial intelligence, the AI has no right to apply for a patent or be named as an inventor, because an AI is not a natural person.

To date, no country has clearly stipulated who ultimately owns the patent rights to an AI-generated invention.

A water cup opens the AI ethics debate

In 2018, Stephen Thaler, an American artificial intelligence researcher, submitted two patent applications to the U.S. Patent Office and the European Patent Office (EPO): one for a deformable beverage container designed to be easy for a robotic gripper to grasp, and the other for an emergency light.

Both products were independently designed and invented by DABUS, an artificial intelligence system developed by Thaler.


△ The beverage container invented by DABUS

The USPTO and the EPO rejected the patent applications for these two AI inventions in December of last year and February of this year, respectively, for the same reason: under current law, only a human being can apply for and hold a patent.

A document issued by the U.S. Patent Office on April 27 formally stipulates that no artificial intelligence may apply for or hold a patent as an inventor.

Three parties talking past one another: the AI patent problem may never be solved

The U.S. government's ruling has not stopped researchers from fighting for "human rights" for AI.

To enable DABUS to legally apply for and hold patents, Stephen Thaler founded the "Artificial Inventor" project, which circulates documents and advocacy among scientists, philosophers, and ethicists who support "rights" for AI. It has also assembled top patent lawyers in Europe and America to argue the case with government patent offices. Moreover, the project's legal resources are open to the world: anyone whose AI-generated invention needs help with a patent application can contact it for assistance.

The U.S. Patent Office proposed a compromise: the two product patents would belong to Stephen Thaler himself. But Thaler and his Artificial Inventor legal team flatly refused.

The core contention of the Artificial Inventor lawyers is that Thaler did not participate in any part of the invention process and knows nothing about the design of containers or emergency lights. All of the intellectual contribution came from DABUS, so the patents can only belong to the artificial intelligence, not to the person who developed it.

The U.S. Patent Office and the European Patent Office did not respond to Thaler's argument at all. In their view, the key issue is not the source of the intellectual contribution, but that a non-human agent has no legal standing to apply for a patent.

Neither patent office answered the original question of the debate: "Whose invention is this?"

European and American legal experts are ambivalent about "AI patents". When asked, most first recite the current legal provisions, then list their concerns about AI ethics.

They cannot give convincing answers to the crucial questions: if an AI's invention cannot belong to the AI, does it belong to a human? And has the legal definition of "invention" fallen behind the development of society and technology?

The World Intellectual Property Organization (WIPO) has begun research into "AI patent rights" and plans to hold a seminar this year, hoping to propose improvements to current patent law.

The AI ethics debate

The AI patent dilemma is, at its core, humanity's anxiety about AI's impact on human values and rules.

△ HAL in 2001: A Space Odyssey made humanity truly feel the danger of artificial intelligence for the first time

As early as 1950, Norbert Wiener, the father of cybernetics, clearly expressed his concern about intelligent machines in his famous book The Human Use of Human Beings: Cybernetics and Society: "The tendency of these machines is to replace human beings at every level, not merely in terms of machine energy and power. It is clear that this new replacement will have a far-reaching impact on our lives."

Today, as AI technology matures, that concern has become more real and visible.

To keep technological development aligned with social understanding, many researchers in the AI field have worked to articulate interpretations and norms of AI ethics.

Stanford officially established the Institute for Human-Centered Artificial Intelligence (HAI), co-directed by Fei-Fei Li and the philosopher John Etchemendy.

Fei-Fei Li proposed three rules to be followed in AI development:

1. AI technology should be inspired by human intelligence.

2. The development of artificial intelligence must be guided by its impact on humans.

3. The application of AI should enhance humans, not replace them; that is, artificial intelligence should augment and empower human ability.

Google also announced seven principles for using AI:

1. Be beneficial to society.

2. Avoid creating or reinforcing unfair social bias.

3. Test in advance to ensure safety.

4. Be accountable to people; that is, AI technology remains under appropriate human guidance and control.

5. Ensure privacy.

6. Adhere to high scientific standards.

7. Weigh the value of each use case against its primary purpose, technical uniqueness, scale, and similar factors.

The AI ethics statements of these industry leaders are macro-level visions. Their core is that AI must benefit humanity and must not threaten the value of human existence.

The difficulty of the patent problem above is this: if the application is granted, machines are recognized as having legal status equal to humans, erasing the uniqueness and value of human beings; if the patent is simply given to the person who wrote the AI algorithm, it violates the legal definition of "invention".

How to apply the guiding principles of AI ethics to concrete cases like this will require continued exploration by policymakers and the legal community; no answer can be given at present.

Editor in charge: PJ
