Artificial Intelligence (AI) Regulation: Exploring Perspectives and Implications

I’ve been studying AI and working with ML models a lot recently, and regulation keeps coming up in discussion. These are my thoughts.


AI (artificial intelligence), at the moment, is completely unregulated. We rely on the producers of models and the morals of the users of those models to do what is right. The problem that poses is that “right” is subjective.

You can go to an AI model and ask it to draw a person who looks like Hitler, and it will likely refuse. You can ask ChatGPT any question you like, but who decides whether it will answer?

Could these models influence political agendas and limit free speech? Could they have inbuilt biases? That’s up to the creators of the models to decide.

Could companies generating models decide to exclude data about certain groups of people? If a vegan ran a company producing language models, could they manipulate those models to be biased towards veganism?

Answering these questions requires the people who control millions in business revenue to balance growth and profit with morality.

This technology can touch areas of your life without your permission. You may use your face to unlock your phone, or your fingerprint to register your arrival at work or unlock a door. Smart devices can recognise your voice. You grant permission to have that data used in a particular way, but what if it were used without your consent?

AI allows us to identify people without their consent, make predictions based on behavioural models, associate your face with purchases and advertising, and implicate you in crimes if the AI gets it wrong.

Amazon built a shop without checkouts that tracks your movement and the things you pick up. That is very cool – it removes a huge bottleneck in the experience (checkout queues) – but you consent to it; it isn’t forced upon you.

Why AI Regulation is coming

If you take a flight, ride a train, drive a car, use a credit card, drink water or eat food, at some point a regulatory body has told those industries: you must do what we say, you can’t do that without our permission, and you must operate in a certain way.

In the UK, we have the Food Standards Agency, Civil Aviation Authority, Rail Safety & Standards Board and the Financial Conduct Authority, among others. Most would agree these authorities are there for the better.

AI will need the same controls to prevent businesses from prioritising profits over privacy and growth above ethics.

Look at the gambling industry as an example; massive revenues and incentives for bookmakers caused many societal issues that forced regulators to impose spending limits, require signage and create schemes like Gamstop.

We risk similar ethical, business, safety and privacy issues.

Job displacement

Across the world, thousands of journalists churn out thousands of news stories a day. Accountants calculate profit and loss; authors hammer keys on a typewriter, producing books. Artists, advertising executives, and graphic designers produce creative after creative.

ChatGPT risks displacing anybody working with language or numbers, and DALL-E and MidJourney risk displacing designers. At the very least, it increases the scope for competition and creates wage pressure.

Artificial intelligence will displace some workers through physical automation – autonomous cars, deliveries, robots and so on. We need to create programmes to retrain those displaced to find new work or to use AI within their jobs.

Autonomy through robotics may help reduce food costs – we have seen automated dairies and farm equipment for many years, allowing existing staff to manage more cows and acres. That may be what we will see – higher worker productivity, using AI as an additional tool.

Stop and think about what happens in a recession when unemployment grows. Supporting those without jobs is very expensive for the government, and that leads to higher taxes. If people lose their jobs to AI, where do they work? How do they buy food?

Industrial Revolution vs Artificial Intelligence

When we invented the steam engine and made horses redundant, the blacksmiths who changed the horses’ shoes became mechanics or launched new businesses to feed those industries – oil producers, tyre manufacturers and tractor parts producers.

When the internet was born, it allowed people to launch online businesses and learn things that would have been impossible before. We saw new marketing techniques and new revenue models. YouTube has enabled some companies to give away their work for free because recording it and uploading the videos to YouTube is more profitable.

The real challenge is that AI can potentially displace the jobs of a large proportion of the world’s population but with little in the way of new industries.

New industries might include:

  • AI Development
  • AI management (software updates, growth, tuning)
  • Maintenance (the hydraulic oil on burger-flipping machines needs changing, etc.)
  • Ethics/Compliance

We have little evidence to suggest where displaced people could get new jobs and careers.

Autonomous cars and tractors risk eradicating the traditional taxi driver, or turning driving into an “artisanal” business. Imagine London’s black cab drivers being replaced by autonomy – what do the 21,000 cabbies do? There doesn’t seem to be any new AI-created industry for them, and they can’t even help train the AI models; that work has already been done, or the autonomous black cabs couldn’t exist.

Do we need Artificial intelligence regulation to protect jobs & workers?

Regulation may be created to say you can’t operate a black cab autonomously, securing the jobs of the 21,000 drivers; that protection, however, might be temporary. If consumers decide they like a more consistent, cheaper service – even if it operates slower – those “protected” by AI regulation just watch their customers move to the alternatives. If you’re from the UK, you could compare it to how big shopping centres and online stores have damaged the high street.

Humans are ultimately more expensive to operate than machines, and regulation will struggle to protect manual jobs once consumers decide they can get a cheaper, more consistent service elsewhere. A few may prefer the “human” touch and be willing to pay extra, but that won’t sustain the status quo.

If AI is the tractor, what is the AI mechanic?

The question we want to answer here is whether we want to allow AI to take this risk. Just because we have autonomous cars, should we let them put taxi drivers out of work, or should we regulate this market so that governments can maintain the employment of their people and their tax base?

Ethics

We must ensure that models are generated ethically, free of bias and sensitive to local laws and customs. A language model could offer responses about topics that are illegal in some countries, and where the internet is censored, that endangers people’s freedom. Would you want to discuss overthrowing the Chinese government with ChatGPT in China?

Does AI help or hinder poorer communities? AI gives countries and people with good internet access an advantage over remote or poorer communities that might find access to AI models beyond their reach, creating a bigger gap in growth.

Education

People around the world must be educated about AI, but we can’t allow our future architects and doctors to ask ChatGPT to do the work. A standardised approach to these practices, and agreement on the boundaries of where AI can help in these industries, should be explored.

We can’t outright ban doctors from AI because of the benefits it can bring them. Architects are using AI to design and model buildings – there are books on it – but you need to ensure the qualifications were obtained through genuine study.

Industry Regulation

Facial recognition within a supermarket might not be ethical, but facial recognition could be useful in high-security situations. We can’t outright ban certain AI uses when they bring benefits in other areas. Could facial recognition work on a drone in a disaster zone after an earthquake? We might limit our innovation if we ban it because of a supermarket context.

Regulators of AI will need to define which uses are acceptable in which contexts.

Transparency

When you buy a chocolate bar, the ingredients are on the packet; you know what you are eating. If you take a flight, you know what you can and can’t take in your carry-on. Regulated industries come with a reasonable level of transparency.

People using AI should be transparent about when they are using it.

  • It’s not right that you could pay for copywriting and have AI produce it, unless that was explained to you upfront.
  • You should have a right to know if AI influenced a decision about your health or some other significant aspect of your life.
  • You should know if the information passed to you is genuine, cited or synthetic.

Transparency lets users understand how a model arrived at its answer, and that builds trust because people can be shown how a decision was reached.

Intellectual property protection

In the UK, we have a scheme called PRS for Music. It allows a business to play a radio station (in a working environment, waiting room, etc.), and the music producers receive a small fee. It’s fair.

But models might be trained on 12,000 films or 20,000 cookery books. Am I getting a recipe for free if ChatGPT provides it to me? Lawsuits have already been filed over alleged copyright infringement.

We will likely need a framework that allows everybody to be respected fairly. This may inspire a new business model where people who provide documents used to generate AI responses receive a small fee for each use.
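To make that concrete, here is a minimal sketch of how such per-use accounting might work – the fee, the function and the source names are all invented for illustration, not a description of any real scheme:

```python
# Hypothetical sketch of a PRS-style, per-use royalty scheme for
# AI training material. The fee and source names are invented.

FEE_PER_USE = 0.001  # hypothetical fee (£) each time a source informs an answer

def attribute_fees(sources_used: list[str]) -> dict[str, float]:
    """Accumulate a small fee for every source document that
    contributed to a generated response."""
    fees: dict[str, float] = {}
    for source in sources_used:
        fees[source] = fees.get(source, 0.0) + FEE_PER_USE
    return fees

# One generated recipe that drew on two (invented) cookery books:
print(attribute_fees(["cookery_book_a", "cookery_book_b"]))
# -> {'cookery_book_a': 0.001, 'cookery_book_b': 0.001}
```

The hard part, of course, isn’t the accounting – it’s attributing which sources actually contributed to a given response.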

We have to teach the model the things we know before it can answer our questions. When someone steps on the Moon again, no AI model will know about it unless we produce blog posts, news articles, books and images about the event.

What AI regulation should be

Any regulator must work within the industry to protect people from wrongdoing, help maintain innovation, and not stifle creativity.

It must be dynamic

Regulatory bodies cannot enforce regulations that are too inflexible or slow to adapt; that limits innovation. Any regulatory body must be ready to assess new developments quickly.

It’s useless to any innovator if the regulators take years to agree on policy or react to new developments.

The United Kingdom still hasn’t finalised a policy for electric scooters, and trials and research into them started before 2020. You cannot have such slow, meandering response times.

It must allow creativity

The regulation must allow, support and encourage the development of AI for scientific, educational and research purposes. The European Union recognised this and published a document outlining the progress of their regulation development. In it, they recognise the importance of open-source projects, saying:

“To boost AI innovation and support SMEs, MEPs added exemptions for research activities and AI components provided under open-source licenses.”

This encourages the open source and scientific communities to work together and collaborate openly and transparently.

It must work across borders

Ideally, we should collaborate to develop models that work across borders. Legal, compliance and business issues will arise from a lack of consistent standards across borders.

What do you do if one type of AI is permissible in one country but not another? We risk fragmented businesses and limits on growth if we cannot agree on a common framework.

It must focus on the effect of the technology, not the technology itself

A large language model may draft responses involving illegal material and tell a 13-year-old about pornography or drugs. A system may also use facial recognition to influence advertising. But this shouldn’t cause the technology itself to be banned.

Instead, the effects of the technology should be regulated – the use of facial recognition and the sharing of information with those underage should be controlled and licensed accordingly.

Autonomy for dangerous situations

We have the technology to drive a car autonomously – Tesla has been working on it for years, and Google has its own system. That technology could put people out of work, as mentioned earlier. A government should consider how best to manage that.

But banning the technology isn’t the answer.

In Fort McMurray, Canada, there is an oil sands mine. In the winter, it’s icy; in the summer, it’s muddy. It’s dangerous. The excavated material is dirty and flammable; it’s no place for humans.

That same technology allows the world’s largest dump trucks to move that material from the mine to the processing facility without human involvement. If it prevents a driver from dying, that must surely be a good thing. Learn more about these dump trucks on Arron Witt’s YouTube channel.

Regulation risk to business

A business may adopt an artificial intelligence model and solution to serve its clients.

A warehouse may want to use AI to measure staff breaks: if a staff member extends their break beyond the permitted period, the system could deduct that time from their salary or trigger an HR process that might lead to disciplinary action.
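As a minimal sketch of the decision logic such a product might automate – the thresholds and actions below are invented for illustration, not any real product’s rules:

```python
from datetime import timedelta

# Hypothetical policy values – a real system would take these from HR policy.
PERMITTED_BREAK = timedelta(minutes=30)
HR_ESCALATION_THRESHOLD = timedelta(minutes=10)

def review_break(duration: timedelta) -> str:
    """Decide what happens when a tracked break ends."""
    overrun = duration - PERMITTED_BREAK
    if overrun <= timedelta(0):
        return "no action"
    if overrun <= HR_ESCALATION_THRESHOLD:
        return f"deduct {overrun} from salary"
    return "trigger HR disciplinary process"

print(review_break(timedelta(minutes=38)))  # deduct 0:08:00 from salary
```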

There may be good reasons to do this; time sheets may not get filled in correctly, and HR may not have the resources.

The business may sell this product to warehouses, nightclubs or football stadiums.

The risk to the business is that it has built itself on AI technology that now finds itself on the wrong side of the regulation. That could mean redundancies, closure and court cases. The EU has indicated that this kind of business would not be permissible under its framework.

Environmental regulation

Artificial intelligence is a heavy energy consumer. Training models requires high-powered graphics cards like the NVIDIA A100, which has a peak power consumption of 300 W. For comparison, my MacBook charger is rated at 96 W.

According to a Reddit estimate, OpenAI used 10,000 A100 GPUs; assuming peak load for 720 hours, that gives a consumption of 2,160,000 kWh.

Maths: 720 hours in June × 0.3 kW per device × 10,000 devices = (720 × 0.3) × 10,000 = 2,160,000 kWh

According to Ofgem, a medium-sized house’s total annual domestic consumption is 2,900 kWh.

That means one month of training at peak load would use enough electricity to power 745 UK homes for a year. That is one business training a single model, with a conservative number of GPUs. The energy demands are high.
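If you want to check the arithmetic, here is the same calculation as a short Python sketch – the GPU count, power draw and hours are the estimates quoted above, not measured figures:

```python
# Back-of-envelope check of the training-energy figures quoted above.
GPUS = 10_000              # A100s reportedly used (per the Reddit estimate)
PEAK_KW_PER_GPU = 0.3      # 300 W peak draw per A100
HOURS = 720                # one 30-day month of continuous training
HOME_KWH_PER_YEAR = 2_900  # Ofgem's medium-household annual consumption

training_kwh = GPUS * PEAK_KW_PER_GPU * HOURS
homes_powered_for_a_year = training_kwh / HOME_KWH_PER_YEAR

print(f"{training_kwh:,.0f} kWh")                # 2,160,000 kWh
print(f"{homes_powered_for_a_year:,.0f} homes")  # 745 homes
```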

We must ensure that these models are trained in places with an incentive to use renewable energy sources, or with fewer GPUs. Over time the energy requirements will fall, but we may create enough extra demand to offset any savings.

Liability

You approach a doctor for help with a health condition. The doctor is assisted by AI: your medical notes are analysed, symptoms are checked, and a prescription is written.

You take the prescribed drugs, one of which is ibuprofen. Later that week, you end up in hospital because the AI didn’t pay attention to your stomach ulcer, and you now have extra health complications.

Who is liable for that mistake? Is it the doctor or the company producing the AI model? You can’t hold the model accountable for its actions, but somewhere a process failed.

Organisations adopting AI, especially in important industries like health and finance, will need consistent guidance and reassurance about liability. Organisations like the Financial Conduct Authority have been looking at this.

We will probably get AI regulation wrong… at first

Innovation within artificial intelligence is moving quicker than any regulatory body can, and during the development of these frameworks we will probably see scenarios where we get it wrong.

Many negative things may come from AI in the future, and we should be careful not to become reactionary. Regulation around gambling, terrorism, drugs and so on shouldn’t result in banning answers relating to these topics; instead, we should invest in making the models smarter.

We may see businesses leave territories or shut down entirely if they find their business model is no longer viable. That can cause skilled data scientists and developers to relocate to wherever they can operate, and a migration of skills is never good.


AI models can’t serve only their creators; they should be for everyone and respect people’s opinions. We should not have biased models, or models that refuse to answer on topics their creators dislike. Any model should be used ethically, and the data produced and collected by those models should be controlled sensibly. We need to collaborate globally on this policy. AI regulation should empower the good and balance the bad while keeping people secure and their privacy intact.

Further reading

https://www.unesco.org/en/artificial-intelligence/recommendation-ethics

https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
