The European Union wants to regulate artificial intelligence (AI) in order to monitor its development and ensure the proper use of the technology. AI can bring many benefits, such as better healthcare or more sustainable energy, so why is it advisable to regulate it?
The Parliament envisions this draft as a statement of general, neutral principles that can be applied to future AI systems.
AI systems used in the EU must be supervised by humans and be safe, secure, transparent, traceable, non-discriminatory and sustainable.
The newly drafted law sets out a series of guidelines depending on the level of risk of AI:
1. Unacceptable risk –
AI systems that pose a threat to people, which will be banned. They include:
– Manipulation of the behavior of vulnerable individuals or groups.
– Social scoring: classification of people based on behavior, socio-economic status or personal characteristics.
– Real-time remote biometric identification (identification after the fact will be allowed in order to prosecute serious crimes, with prior judicial approval).
2. High risk –
AI systems that threaten security or fundamental rights, with two categories:
– Those used in products subject to European safety legislation (toys, aviation, automobiles, medical devices and elevators).
– AI systems belonging to specific domains that must be registered in an EU database, such as critical infrastructure, education and employment.
These high-risk AI systems will be assessed prior to commercialization and throughout their lifecycle.
Generative AI, such as ChatGPT, will have to meet transparency requirements:
– Identify that the content has been generated by AI.
– Be designed to prevent the generation of illegal content.
– Publish summaries of the copyrighted data used for training.
3. Limited risk –
Systems that must meet minimal transparency requirements so that users can make informed decisions and know when they are interacting with AI. This category includes systems that generate or manipulate image, audio or video content (e.g. deepfakes).
On June 14, 2023, MEPs adopted their negotiating position on the AI law. Talks with the Council of the EU aim to finalize the law before the end of this year, making it a pioneer in the regulation of this booming technology.
Nubeprint, which launched its first Cloud solution in 2010, uses AI and Big Data to ensure a secure environment based on two pillars:
1. Network security and data protection, complying with up to seven security directives, including HIPAA (Health Insurance Portability and Accountability Act), FISMA (Federal Information Security Management Act) and the EU's GDPR (General Data Protection Regulation).
2. Security against cyber-attacks, avoiding security breaches: Nubeprint is non-intrusive and does not leave access ports open unnecessarily.