With artificial intelligence now in widespread use across fields such as manufacturing, health care, finance, education, transportation and security, clearly defining the rules for this technology has become critically important.
The EU does not yet have a regulation covering ChatGPT or similar artificial intelligence systems, but two years ago the European Commission prepared the first legislative proposal setting out a framework of new rules on artificial intelligence and submitted it to the member states and the European Parliament (EP).
The proposal introduces limitations and transparency rules for the use of artificial intelligence systems. If it becomes law, systems such as ChatGPT will also have to be operated in accordance with these rules.
The new rules, which are expected to apply uniformly across all member states, take a risk-based approach.
In the Commission's proposal, artificial intelligence systems are divided into four main groups: unacceptable risk, high risk, limited risk and minimal risk.
Systems considered a clear threat to people's safety, livelihoods and rights fall into the unacceptable-risk group; their use is expected to be banned.
Systems or applications that override individuals' free will, manipulate human behavior or perform social scoring would also be prohibited.
The high-risk group covers critical infrastructure, education, safety components of products (such as AI used in robot-assisted surgery), CV screening in recruitment, credit scoring, the evaluation of evidence in law enforcement, migration, asylum and border management (including the verification of travel documents), biometric identification systems, and judicial and democratic processes.
Artificial intelligence systems in this group are subject to strict requirements before they can be placed on the market: they must be non-discriminatory, their results must be traceable, and they must remain under adequate human oversight.
Under the rules, security forces will be able to use biometric identification systems in public spaces only in special cases such as terrorism and serious crime, and even then such uses will be limited and subject to judicial authorization.
Artificial intelligence systems in the limited risk group will also have to comply with certain transparency obligations.
The proposal places chatbots in the limited-risk group; the aim is to ensure that users know they are interacting with a machine when conversing with a chatbot.
Applications such as AI-supported video games or spam filters fall into the minimal-risk group. Because systems in this group pose little or no risk to people's rights or safety, they will face no additional intervention.
Heavy fines for violations
The proposal provides for fines of up to 30 million euros or 6 percent of global annual turnover for violations of the AI law.
Work on the artificial intelligence law, which requires the approval of the EP and the member states to enter into force, is still under way. The EU member states agreed on a common position in this area at the end of last year.
EU countries sought changes to the list of banned AI applications, extending the ban on using AI for social scoring to cover the private sector. They also broadened the provision prohibiting AI systems that exploit the vulnerabilities of specific groups.
They carved out an exception permitting law enforcement agencies to use real-time remote biometric identification systems in public places, and agreed to exclude national security, defense and military purposes from the scope of the AI law.
The European Parliament will take a common position
The EP, for its part, has not yet settled on a common stance. Members of parliament are still working on the artificial intelligence law, including the question of which risk group systems such as ChatGPT should fall into.
Members of the EP currently believe that artificial intelligence systems that generate complex text without human oversight should be added to the high-risk list.
After the Parliament has determined its position on the law, it will sit at the negotiating table with the member states.
Negotiations on the AI law are expected to begin this year.
Once the EP and the member states agree on a common text, the approved law is expected to include a two-year transition period to allow the sector to comply.
Big technology companies such as Microsoft and Google, which have made serious investments in this field, are following the new rules in the field of artificial intelligence closely.
The EU scrutinizes ChatGPT
The European Data Protection Board, which brings together the national data protection authorities of EU countries, discussed ChatGPT, the artificial intelligence chatbot developed by OpenAI, at its most recent meeting.
At the meeting, it decided to establish a dedicated task force to improve cooperation and information sharing among the ongoing investigations into ChatGPT.
Last month, the Italian data protection authority launched an investigation into ChatGPT on suspicion that it violates rules on the collection of personal data, and blocked access to the application.
Authorities in Germany also said that they could, in principle, block ChatGPT if necessary over data security concerns.
Meanwhile, data protection authorities in France and Ireland have contacted their Italian counterpart to discuss its findings.
Against this backdrop, it seems inevitable that the EU will soon clarify its approach to ChatGPT and impose strict rules on similar artificial intelligence technologies.