The Role Of Regulations In Establishing The Benefit-Risk Balance In Artificial Intelligence
(My article, originally published in Inc. Türkiye)
Today, rapidly developing artificial intelligence (AI) technologies are finding their way into many areas of our lives. This spread of AI offers tremendous opportunities, but it also brings challenges such as privacy breaches, ethical dilemmas, workforce disruption and security risks. Hence the need for legal regulations and restrictions aimed at mitigating the potential risks of AI technologies.
There is no denying that AI technology has made significant contributions to industrial transformation, efficiency gains and the solution of complex problems. Yet serious concerns, such as algorithmic errors, systems misled by flawed training data and the risk of misuse, should not be ignored. The copyright controversies sparked by generative AI have likewise raised the question of how to develop responsible AI that complies with ethical values.
Legal regulations are needed to ensure that AI technologies are developed and used ethically and safely. These regulations must cover a wide range of areas, from the security of the large data sets AI systems collect to the transparency of their decision-making processes, in order to reduce the risks of bias and discrimination. What, then, should be required of AI systems? First, their decisions need to be understandable and open to challenge. In addition, rules, rigorous testing and ethical standards against bias in data sets and AI decisions should be established without delay.
Of course, some have already thought these issues through and taken concrete steps. The regulations pioneered by the European Union, the United States, China and the OECD, for example, stand out as important moves to promote the ethical and safe use of AI technologies.
European Union Artificial Intelligence Act
This regulation, proposed by the European Commission, provides a comprehensive framework that categorizes AI systems by risk level and imposes the strictest obligations on the highest-risk applications. For example, the untargeted scraping of facial images to build facial-recognition databases, as well as the use of emotion-recognition systems in workplaces and schools, will be banned. The Act also prohibits social-scoring systems that reward or punish people for their behavior, and AI systems designed to manipulate human behavior. In my opinion, one of its most important requirements is that users must be told clearly that they are interacting with a machine when using AI systems such as chatbots. Agreed by the European Parliament and the member states in 2023, the Act is expected to take effect in stages starting in 2025.
United States Algorithmic Accountability Act
This bill, first introduced in the US Congress in 2019 and reintroduced in 2023, would impose a series of responsibilities on companies, including conducting impact assessments of the AI systems they deploy and evaluating those systems' performance on issues such as privacy, security and bias. It also proposes measures to protect personal data and reduce the risk of disinformation. Because the bill has not yet been enacted, no entry-into-force date has been announced.
China's Next Generation Artificial Intelligence Development Plan
With this plan, issued in 2017, China set out a series of principles and policies to promote the ethical and safe use of AI technology. In truth, the plan is primarily a blueprint for how China intends to compete in the AI race, but it also includes commitments to legal regulation for fairer, more transparent and safer use of AI.
OECD's Artificial Intelligence Principles
The OECD Principles on Artificial Intelligence, published in 2019, set out values endorsed by member countries for developing and using AI in a way that is innovative, trustworthy and respectful of human rights. They include ethical criteria such as making AI systems transparent and explainable, respecting human-centered values and fairness, promoting social welfare and environmental sustainability, and pursuing international cooperation for trustworthy AI.
These regulations not only shape the development and use of AI technologies but also advance ethical standards, security protocols and respect for human rights. Given the pace of technological progress, however, they need to be reviewed and updated continuously. Nor should responsible AI rest on legal regulation alone: it requires an ethical, responsible approach built through cooperation among technology developers, users and all segments of society. Only then can we make the most of the opportunities artificial intelligence brings while minimizing the risk of harm to social values.
Mustafa İÇİL