Dr Asress Gikay is a Senior Lecturer in AI, Disruptive Innovation and Law at Brunel Law School
The latest generation of artificial intelligence (AI), such as ChatGPT, will revolutionise the way we live and work. AI technologies could significantly improve education, healthcare, transport and welfare. But there are downsides, too: these systems carry risks of harm, including in sensitive areas such as law enforcement.
There’s general agreement that AI needs to be regulated, given its awesome potential for good and harm. The EU has proposed one approach, based on potential problems. The UK is proposing a different, more flexible, approach.
This year, the 成人直播app government published a white paper (a policy document setting out plans for future legislation) unveiling how it intends to regulate AI, with an emphasis on flexibility to avoid stifling innovation. The document favours voluntary compliance, with five principles meant to tackle AI risks.
Strict enforcement of these principles by regulators could be added later if it’s required. But is such an approach too lenient given the risks?
Crucial components
The UK approach differs from the EU’s risk-based regulation. The EU’s proposed AI Act prohibits certain AI uses, such as real-time facial recognition, where people shown on a camera feed are compared against police “watch lists”, in public spaces.
The EU approach creates compliance obligations for so-called “high-risk” AI systems. These include systems used to evaluate job applications, student admissions, eligibility for loans and public services.
I believe the 成人直播app’s approach better balances AI’s risks and benefits, fostering innovation that benefits the economy and society. However, critical challenges need to be addressed.
The UK approach to AI regulation has three crucial components. First, it relies on existing legal frameworks such as privacy, data protection and product liability laws, rather than implementing new AI-specific legislation.
Second, five general principles – each consisting of several components – would be applied by regulators in conjunction with existing laws. These principles are (1) “safety, security and robustness”, (2) “appropriate transparency and explainability”, (3) “fairness”, (4) “accountability and governance”, and (5) “contestability and redress”.
During initial implementation, regulators would not be legally required to enforce the principles. A statute imposing these obligations would be enacted later, if considered necessary. Organisations would therefore be expected to comply with the principles voluntarily in the first instance.
Third, regulators could adapt the five principles to the subjects they cover, with support from a central coordinating body. So, there will not be a single enforcement authority.
Promising approach?
The UK’s regime is promising for three reasons. First, it promises to use evidence about AI in its correct context, rather than applying an example from one area to another inappropriately.
Second, it is designed so that rules can be easily tailored to the requirements of AI used in different areas of everyday life. Third, there are advantages to its decentralised approach. Under a centralised alternative, a single regulatory organisation, were it to underperform, would affect AI use across the board.
Let’s look at how it would use evidence about AI. As AI’s risks are yet to be fully understood, predicting future problems involves guesswork. To fill the gap, evidence with no relevance to a specific use of AI could be appropriated to propose drastic and inappropriate regulatory solutions.
For instance, some US internet companies use algorithms to classify people’s gender based on facial features. These algorithms showed poor performance when presented with photos of dark-skinned women.
This finding has been cited in support of a ban on police use of live facial recognition in the 成人直播app. However, the two areas are quite different and problems with gender classification do not imply a similar issue with facial recognition in law enforcement.
These US gender-classification algorithms operate under relatively lower legal standards. Facial recognition used by UK law enforcement undergoes more rigorous testing, and is deployed under stricter legal safeguards and oversight.
Another advantage of the 成人直播app approach is its flexibility. It can be difficult to predict potential risks, particularly with adaptive machine learning systems, which improve their performance over time.
The framework allows regulators to quickly address risks as they arise, avoiding lengthy debates in parliament. Responsibilities would be spread between different organisations. Centralising AI oversight under a single national regulator could lead to inefficient enforcement.
Regulators with expertise in specific areas such as transport, aviation and financial markets are well placed to regulate the use of AI within their fields of interest.
This decentralised approach could minimise the effects of corruption, of regulators becoming preoccupied with concerns other than the public interest, and of differing approaches to enforcement. It also avoids a single point of enforcement failure.
Enforcement and coordination
Some businesses could resist voluntary standards, so, if and when regulators are granted enforcement powers, they should be able to issue fines. The public should also have the right to seek compensation for harms caused by AI systems.
Enforcement needn’t undermine flexibility. Regulators can still tighten or loosen standards as required. However, the 成人直播app framework could encounter difficulties where AI systems fall under the jurisdiction of multiple regulators, resulting in overlaps. For example, transport, insurance, and data protection authorities could all issue conflicting guidelines for self-driving cars.
To tackle this, the white paper suggests establishing a central coordinating body, which would ensure the harmonious implementation of guidance. It’s vital to compel the different regulators to consult this organisation rather than leaving the decision up to them.
The UK approach shows promise for fostering innovation and addressing risks. But to strengthen the country’s position as a leader in the area, the framework must be aligned with regulation elsewhere, especially in the EU.
Fine-tuning the framework can enhance legal certainty for businesses and bolster public trust. It will also foster international confidence in the 成人直播app’s system of regulation for this transformative technology.
This article is republished from The Conversation under a Creative Commons license.
Reported by:
Press Office,
Media Relations
+44 (0)1895 266867
press-office@brunel.ac.uk