
AI Laws Across the Globe

source link: https://codecondo.com/ai-laws-across-the-globe/


It’s not surprising that most countries take a “wait and see” attitude to AI rules and regulations. As with each new technological wave, it’s difficult to foretell how this new technology will be utilized or abused.

When people talk about AI, they often picture humanoid robots. Artificial intelligence is much broader than that: it is a wide field of computer science concerned with building intelligent machines that can accomplish tasks that would otherwise require human intelligence.

AI serves many purposes, for example helping enterprises gain insights into operations they may not have been aware of. In some circumstances, AI can outperform humans at tasks such as quickly analyzing enormous quantities of data with few errors.

According to research conducted by the Harvard Business Review, 84% of business executives believe that using AI is necessary to achieve their growth objectives.

While AI tools provide organizations with various new capabilities, their use raises ethical concerns. An AI system will reinforce what it has already learned, for better or worse; hence, AI laws come into the picture. The biases in the data used to train an AI system require close monitoring.
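Most proposals do not spell out what that monitoring looks like in practice. As a minimal sketch, assuming a hypothetical set of decision records with made-up field names and an illustrative threshold, one common check is to compare the rate of favorable outcomes across groups:

# A minimal sketch, not from any regulation or library, of one way to monitor
# decision data for bias: compare the rate of favorable outcomes across groups.
from collections import defaultdict

def selection_rates(records, group_key, outcome_key):
    """Rate of favorable outcomes for each group in the records."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for row in records:
        totals[row[group_key]] += 1
        if row[outcome_key]:
            favorable[row[group_key]] += 1
    return {g: favorable[g] / totals[g] for g in totals}

# Hypothetical decision records; field names and values are illustrative only.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

rates = selection_rates(decisions, "group", "approved")
gap = max(rates.values()) - min(rates.values())
print(rates)
if gap > 0.2:  # illustrative threshold, not a legal standard
    print(f"Selection-rate gap of {gap:.2f} between groups; review the data")

A persistent gap like this does not prove unlawful bias on its own, but it is the kind of signal the regulations discussed below expect organizations to document and investigate.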

What Are AI Laws?

AI Law (Artificial Intelligence Law) governs the development and application of artificial intelligence. Because intelligent robots are driven by artificial intelligence, AI Law is somewhat related to robot law.

Concerns about AI’s potential misuse or unforeseen effects have led to initiatives for examining and setting standards. The US NIST initiative, for example, involves workshops and discussions with the public and private sectors to develop federal standards for reliable and robust AI systems.

Benefits of Implementing AI Laws

Over the next few years, and likely beyond, AI will become more prevalent. Businesses cannot ignore the technology because its applications are far too intelligent, efficient, and cost-effective to pass up, which means the amount of AI we interact with daily will expand across all sectors of life. The benefits of using AI will be far more practical than anything we saw in the movies.

  1. Eliminating human error – Even the best of us make mistakes, whether through a lapse in concentration or a simple blunder. An artificially intelligent machine designed to perform a specific task is devoid of these quirks.
  2. Increased efficiency – Data is now as crucial to business as oil once was, and it is essential to handle this data precisely and quickly to get real-time outcomes.
  3. Smart technology – The capacity of an AI system to analyze massive data volumes quickly and accurately will be critical to the operation of many intelligent technologies and environments. An example is already in use on many high-end smartphones, where AI constantly tweaks the phone’s settings for optimal performance or battery life.

What Are Legal Issues of AI?

Current legislation cannot adequately regulate AI’s use, manufacture, behavior, or responsibilities. Even though rules are needed, it is difficult to know how tightly to govern AI given its rapid development.

  • Is AI capable of committing a crime, and who will keep track of it? Is it necessary to include legal requirements in the code to ensure that AI follows the law? What will be the consequences of employing AI to commit a crime?
  • How will AI adapt to international humanitarian law given its contact with humans?
  • Does the AI or its developer own the intellectual property it generates? Who owns new content created by AI outside of its code?
  • Who’s liable for the harm caused by AI?
  • How do you certify something that’s fluctuating? AI isn’t static; it’s evolving.
  • How do we avoid biases?
  • How do we control the use of artificial intelligence for marketing, particularly in the context of elections?

With the rise of AI, these issues raise more questions than we can currently answer. Legislators must address the legal and ethical issues concerning AI; without appropriate legislation, these difficulties are hard to handle. Recognizing the legal challenges around AI will become easier as time goes on.

With existing or planned legislation in seven of the nine categories where regulation may apply to AI, the European Union is quite active in proposing new laws and regulations. The EU brought the General Data Protection Regulation (GDPR) into force in 2018, requiring member states to maintain a very restrictive regulatory approach to data protection and usage. When it comes to AI regulation, the United States, on the other hand, maintains a “light” regulatory approach.

No country has yet passed legislation governing ethical and responsible AI. It will take time to see whether firms will self-regulate or whether governments will step in with legal regulation. Furthermore, there is little legal or regulatory activity relating to the malicious use of AI, although the European Union, the United Kingdom, Singapore, Australia, and Germany are actively discussing such legislation.

What AI Laws Are in the Works?

With the recent surge in attention to AI, everyone is still figuring out how to implement the new laws that remain in development.

On April 21, the European Commission released its proposal for AI regulation. Non-compliance carries fines of up to 6% of a company’s annual turnover, higher than the maximum penalty of 4% of global turnover under the General Data Protection Regulation (GDPR).

First, conduct assessments of AI risks and document how those risks were mitigated or resolved as part of the effort. In regulatory frameworks, these are known as “algorithmic impact assessments” or “IA for AI.”
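The proposals do not prescribe a format for these assessments. Purely as a sketch, assuming made-up field names and an invented example system, an organization might record an assessment as structured data so that open risks stay visible:

# A minimal sketch of recording an algorithmic impact assessment as structured
# data. The fields, names, and example system are assumptions for illustration,
# not a template from any regulator.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    description: str
    severity: str      # e.g. "low", "medium", "high"
    mitigation: str
    resolved: bool

@dataclass
class ImpactAssessment:
    system_name: str
    assessed_on: date
    assessor: str
    risks: list = field(default_factory=list)

    def open_risks(self):
        """Risks that are documented but not yet mitigated or resolved."""
        return [r for r in self.risks if not r.resolved]

# Hypothetical assessment of a hypothetical screening model.
assessment = ImpactAssessment(
    system_name="resume-screening-model",
    assessed_on=date(2022, 3, 1),
    assessor="independent review team",
    risks=[
        Risk("Training data under-represents some applicant groups",
             "high", "Re-sample the data and re-check selection rates", False),
        Risk("Individual decisions are hard to explain to applicants",
             "medium", "Add per-decision feature attributions", True),
    ],
)
print(f"{len(assessment.open_risks())} open risk(s) remain")

Keeping the record in a structured form makes it straightforward to show a regulator which risks were identified, who assessed them, and what remains open.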

Second, accountability and independence. The idea is that the people evaluating an AI system, whether data scientists, attorneys, or others, should have incentives that differ from those of the frontline data scientists who built it. In practice, this means the AI is assessed and validated by people other than its creators, or that the organization pays outside specialists to evaluate the system.

Third, continuous review. AI systems are “brittle and subject to high rates of failure,” with hazards that evolve over time, which makes mitigating risk at a single point in time difficult. “Lawmakers and regulators alike are sending the message that risk management is a continual process,” said Burt, Managing Partner of bnh.ai.
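What continuous review means in code is likewise left to practitioners. As a rough sketch under assumed data, one ingredient is checking whether the inputs a deployed model sees still resemble the data it was trained on, and triggering a fresh risk review when they drift:

# A minimal sketch of continuous review, assuming one numeric input feature
# whose production values are compared against a training-time baseline. The
# drift statistic (mean shift in baseline standard deviations) and the
# threshold are illustrative choices only.
import statistics

def drift_score(baseline, recent):
    """How far the recent mean has moved, in baseline standard deviations."""
    shift = abs(statistics.mean(recent) - statistics.mean(baseline))
    return shift / statistics.stdev(baseline)

# Hypothetical feature values seen at training time vs. in production.
training_values = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
production_values = [11.4, 11.9, 12.1, 11.6, 11.8]

score = drift_score(training_values, production_values)
if score > 3.0:  # illustrative threshold only
    print(f"Drift score {score:.1f}: inputs have shifted, trigger a new risk review")
else:
    print(f"Drift score {score:.1f}: inputs still resemble the training data")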

The evolving legislation will inevitably affect every industry that works closely with AI, and companies are not yet clear on how to comply. Yet these three core trends connect all current and proposed AI standards, which means enterprises can take concrete measures right now to ensure their systems do not break any existing or future laws and regulations.

Wrapping Up

New international law is required to keep pace with the evolution of AI regulation around the globe. Many organizations look forward to using AI to benefit their customers while providing the openness and explanation people need to feel confident that it is being used for good.

According to Deloitte Global, there will be a lot of talk in 2022 about regulating AI more systematically, with various proposals on the table. However, enacting these proposals into enforceable legislation will most likely not happen until 2023 or later.

Also Read: AI In Marketing Strategy

