
Six commandments of artificial intelligence

The coming years will be marked by the rapid development of artificial intelligence technologies. Is it time to introduce a clearly defined framework for the creation and use of AI? Is six months of experience enough to regulate a technology that has only just left the lab? Experts and journalists increasingly ask these questions when it comes to artificial intelligence, and calls to regulate it – both at the user level and at the level of project development – are growing louder. The history of such calls began quite a while ago.

The ubiquitous presence of artificial intelligence in the digital space, and above all of models capable of creating content indistinguishable from that created by humans, evokes very different emotions. On one side are the enthusiasts who see the future in AI and, despite its relatively limited capabilities (after all, AI does not think for itself, but often simply retrieves information from the Internet), are not afraid to entrust it with many tasks. On the other side of the barricade is a group that is sceptical and concerned about the current direction of artificial intelligence development.

The bridge between the two groups is the artificial intelligence researchers, who on the one hand provide numerous examples of how artificial intelligence has improved the world around us, while on the other hand understand that it is too early to rest on their laurels and that a huge technological leap brings many challenges and great responsibility. A striking example of this attitude is an international group of artificial intelligence and machine learning researchers led by Dr Özlem Garibay from the University of Central Florida. Their 47-page publication, written by 26 scientists from around the world, identifies and describes six issues that research institutions, companies, and corporations must address to make their models (and the software that uses them) safe.

Yes, this is a serious scientific work with important insights into the future of artificial intelligence, and anyone interested can read the report and draw their own conclusions. In simple terms, the scientists have identified six commandments of artificial intelligence, which all AI developments and actions must obey in order to be safe for people and the world.

In this article, written on the basis of that scientific work, I will try to formulate the basic postulates – the laws by which artificial intelligence should exist and develop. Yes, this is my rather free interpretation of the scientists’ conclusions and an attempt to present them in a quasi-biblical form. But that is how I wanted to introduce you to this work by respected scientists.

Read also: The creation of AI: Who is leading the race?

The first law: Human welfare

The first postulate of the researchers is to orientate the work of artificial intelligence towards human well-being. Due to the lack of “human values, common sense, and ethics,” artificial intelligence can act in ways that lead to a significant deterioration in human well-being. Problems can result from the superhuman capabilities of artificial intelligence (e.g., how easily AI beats humans – and not only in chess), as well as the fact that AI does not think for itself and is therefore unable to “filter out” biases or obvious errors.

Researchers note that excessive trust in artificial intelligence technologies can have a negative impact on human well-being. A society that has little understanding of how AI algorithms actually work tends to overly trust them, or, conversely, to have a negative attitude towards content generated by a particular model, such as chatbots. Given these and other factors, Garibay’s team calls for putting human well-being at the centre of future AI-human interaction.

Read also: ChatGPT: Simple instructions for use

The second law: Responsibility

Responsibility is a term that keeps popping up in the AI world in the context of what we use machine learning for and how AI models and algorithms are developed and trained. The international team emphasises that the design, development and implementation of AI should be done with good intentions.

In their view, responsibility should be viewed not only in a technical but also in a legal and ethical context. Technology should be considered not only in terms of its efficiency, but also in the context of its use.

“With the introduction of advanced machine learning techniques, it is becoming increasingly important to understand how a decision was made and who is responsible for it,” the researchers write.

The third law: Privacy

Privacy is a topic that comes back like a boomerang in every discussion of technology, especially now that so much of our lives is shared on social media. For artificial intelligence, however, privacy is absolutely critical, because AI does not exist without data. And what exactly is that data?

Scientists describe data as “an abstraction of the basic building blocks that make up the way we see the world”. These blocks are usually mundane values: colours, shapes, textures, distances, time. Narrow artificial intelligence focused on a single goal – say, deciding how far to open the blinds at a given light intensity – uses publicly available, objective data. Artificial intelligence in broader applications (for example, text-to-image models such as Midjourney or language models such as ChatGPT) can use data about people and the content those people create: press articles, books, illustrations, and photos published on the Internet. Artificial intelligence algorithms have access to all of it because we have given it to them; otherwise they would know nothing and would be unable to answer any questions.
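To make the contrast concrete, here is a minimal sketch (in Python, with hypothetical names) of the kind of narrow, single-purpose system mentioned above: a controller that maps a light reading to a blind opening level. Its only input is an objective sensor value; no personal data is involved, which is precisely what separates it from broad generative models:

```python
def blind_opening(lux: float, max_lux: float = 10_000.0) -> float:
    """Map an ambient light reading (in lux) to a blind closing fraction.

    0.0 = blinds fully open (dark outside), 1.0 = fully closed (bright sun).
    The only input is an objective sensor value; no personal data is used.
    """
    if lux < 0:
        raise ValueError("light intensity cannot be negative")
    return min(lux / max_lux, 1.0)

# Example: at 2,500 lux the blinds close to 25%.
print(blind_opening(2_500.0))  # 0.25
```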

Data about users fundamentally affects both the people about whom the data is collected and the people in the system where the AI algorithms will be implemented.

Therefore, the third challenge concerns a broad understanding of privacy and the protection of rights such as the right to be let alone, the right to restrict access to oneself, the right to privacy in personal life and business, and the right to control one’s personal information – in short, the right to protect one’s personality, individuality and dignity. All of this must be built into the algorithms; otherwise privacy will simply cease to exist, and artificial intelligence algorithms could end up being used in fraudulent schemes and criminal offences.

Read also: 7 coolest ways to use ChatGPT

The fourth law: Project structure

Artificial intelligence can be extremely simple and single-purpose, but in the case of larger models with a broad and multitasking nature, the problem is not only data privacy, but also the structure of the project.

For example, GPT-4, the latest artificial intelligence model from OpenAI, does not have fully public documentation, despite its size and its impact on the AI world (and beyond). That is, we do not know what ultimate goals the developers set for themselves or what they want to achieve in the end, so we cannot fully assess the risks associated with using this model. GPT-4chan, on the other hand – a model fine-tuned on data from the 4chan forum – is one you would definitely not want to interact with. 4chan is one of the most curious phenomena on the Internet: an example of absolute, total anarchy, in practice unrestricted by any framework. It is where hacker groups such as Anonymous and LulzSec were born; it is the source of many of the most popular memes and a place to discuss controversial topics and publish even more controversial opinions. Although the English-language imageboard’s only stated rule is “as long as it’s legal”, this is somewhat questionable, given that 4chan has repeatedly attracted media attention for hosting racist, Nazi, and sexist content.

Professor Garibay’s team wants every AI model to operate within a clearly defined framework – not only for the well-being of the people the AI interacts with, but also so that the risks of using the model can be assessed. The structure of any project should respect the needs, values, and desires of different cultural groups and stakeholders. The process of creating, training, and fine-tuning AI should focus on human well-being, and the end product – the AI model – should enhance and improve the productivity of the human community. Models whose risks cannot be identified should have limited or controlled access. Such models should not pose a threat to humanity, but rather contribute to the development of humans and society as a whole.

Read also: Twitter in Elon Musk’s Hands — A Threat or an “Improvement”?

The fifth law: Governance and independent oversight

Artificial intelligence algorithms have literally changed the world in just a year. The launches of Google’s Bard and Microsoft’s AI-powered Bing had a significant impact on the share prices of both giants, helping them grow even against the backdrop of Apple’s stock. Schoolchildren and students actively use ChatGPT: they chat with it, prepare for exams with it, and ask it questions; most importantly, it can learn and correct its mistakes. Artificial intelligence is even starting to work in some governments: Romanian Prime Minister Nicolae Ciucă, for example, has adopted a virtual assistant to inform him about the needs of society. In other words, artificial intelligence is playing an increasingly important role in our lives.

Given the ever-increasing interdependence between artificial intelligence, humans, and the environment, the scientists consider it necessary to establish governing and independent oversight bodies for its development. These bodies would control the entire life cycle of artificial intelligence, from idea to development and implementation: they would properly classify different AI models and adjudicate cases involving artificial intelligence and social actors. In other words, artificial intelligence may become the subject of court cases and litigation – though, of course, it is the developers who would stand trial, not the AI itself.

Read also: All About Neuralink: A Beginning Of Cyberpunk Madness?

The sixth law: Interaction between humans and artificial intelligence

There is something for everyone in artificial intelligence applications: generating text, detecting content in images, answering questions, generating images, recognising people in photos, analysing data. But this multitude of uses is not the only concern of people trying to fit artificial intelligence into legal and ethical standards. Many fear being pushed out of the labour market by AI models, since AI algorithms will be able to do the same work faster, cheaper, and perhaps even better than humans. At the same time, there are people who already rely on AI in their work, for whom artificial intelligence has become an indispensable assistant.

But the research cited by the scientists makes it clear that we are still quite far from replacing people with cheap artificial labour. Nevertheless, the researchers already insist on establishing a strict hierarchy in the interaction between humans and artificial intelligence. In their view, humans must be placed above artificial intelligence: AI should be created with respect for human cognitive abilities, taking into account human emotions, social interactions, ideas, planning, and interaction with objects. In every situation, it is the human who must control the behaviour of the model and the content it creates, and who must take responsibility for it. Simply put, even the most advanced AI must obey humans and not go beyond the limits of what is permitted, so as not to harm its creator.

Read also: How Ukraine Uses And Adapts Starlink In Wartime

Conclusions

Some might say that the scientists did not report anything important or new – everyone has been talking about this for a long time. But now AI needs to be placed within some kind of legal framework. Reaching for GPT-4 is like grabbing a knife blindfolded: key information is hidden from us. All artificial intelligence developments, including OpenAI’s ChatGPT project, often remind me of raising a small child. Sometimes this child seems to be of alien origin. Yes, it is alien, but it is still a child that learns, makes mistakes, sometimes behaves inappropriately, is capricious, and argues with its parents – and yet it grows and develops very quickly.

Humanity may not be able to keep up with its development, and everything may get out of control. That is why humanity needs to understand why we are developing all this, to know the ultimate goals, to be “responsible parents”, because otherwise the “child” may simply destroy its “parents”.


Yuri Svitlyk
