Source: https://axbom.com/ai-open-letter-problematic/

Why that open letter urging an AI development pause is problematic

Per Axbom

Mar 30, 2023 • 13 min read
How to stop AI harm. Pause development of GPT-5 for six months. Boom. You're welcome!

Applied ethics isn't a checklist. It's about putting in the time and effort to understand risks to wellbeing with the express intent of avoiding, mitigating and monitoring harm. It makes sense then to assume that the open call to pause AI development is a good thing. Well, yes – but no.

Let's talk about some of the things going on with the much-touted open letter signed by the likes of Elon Musk, Steve Wozniak, Yuval Noah Harari, Andrew Yang, Gary Marcus and Tristan Harris. More than an open letter, it is in many ways a letter of misdirection. And who authored it, exactly?

I have myself extensively criticised the current hype, and on the surface it would perhaps make sense for me to applaud any call "to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4".

So before I begin listing why I don't applaud this letter, I want to bring in some concepts from the elements of digital ethics. While many of the 32 outlined elements apply here, these three cover a lot of bases:

  1. Monoculture. The homogeneity of those who have been provided the capacity to make and create in the digital space means that it is primarily their mirror-images who benefit – with little thought for the wellbeing of those not visible inside the reflection.
  2. Power concentration. When power is with a few, their own needs and concerns will naturally be top of mind and prioritized. The more their needs are prioritized, the more power they gain.
  3. Ethicswashing. Ethical codes, advisory boards, whitepapers, awards and more can be assembled to provide an appearance of careful consideration without any real substance behind the gloss.

The open letter itself could be seen as a potential example of ethicswashing. The idea is that by claiming to show attention to human wellbeing, the job of ethical consideration is done. Enough people around the world can internalise the idea that these powerful individuals are doing what they can to stop the dangers posed by AI. If they fail, the overarching message could be that "they've at least acted with the best of intent". It's a neat parlor trick that could give many powerful actors an opportunity to shed accountability.

"We tried. We have the receipts. Look at this letter".

It also conveniently leaves out of the equation any expressed interest in listening to more voices. So there is no real way of measuring any effort beyond signing the letter.

The most obvious way of measuring the success of this letter would be whether a 6-month pause on AI development beyond the capacity of GPT-4 is respected. It's still rather nebulous how this actually helps anyone. It's also unclear who should verify whether or not a system more powerful than GPT-4 is being worked on during this announced timeframe, and how.

Why the letter should be cause for concern

I admit to having started my line of argument already, but here are some more reasons why I believe further skepticism and questioning of the open letter are in order.

The host organisation

It's curious how, in times when source criticism is talked about on a daily basis, it has been lost on many journalists exactly where this open letter is hosted and what this organisation stands for. The Future of Life Institute is one of several organisations in a network of billionaire-driven technocrat fellowships that safeguard and promote a philosophy known as longtermism, which has its roots in the effective altruism movement.

You may have heard of longtermism and the idea of ensuring that what we build today safeguards the interests of future humans who have not been born yet. But you may have missed how representatives of this philosophy have been shown to prioritise lives far into the future over current lives ("they are more in number"), arguing that lives in rich countries are more important to save than lives in poor countries ("they contribute more"), and suggesting that concern about climate change can be toned down as it isn't an existential threat ("at least not to all of humanity").

As Rebecca Ackerman writes:

[Effective Altruism]’s ideas have long faced criticism from within the fields of philosophy and philanthropy that they reflect white Western saviorism and an avoidance of structural problems in favor of abstract math—not coincidentally, many of the same objections lobbed at the tech industry at large.

Understanding this philosophy gives context to some of the points that follow.

Sidenote: It was somewhat bizarre to see many people asking for confirmation that Max Tegmark had in fact signed the letter, when he is actually the president of the institute that is hosting the letter! Basic source checking still has a ways to go.

Essentially no mention of all the current harm

It's not like there isn't already harm happening due to these types of tools. Why would an open letter claiming to care for human wellbeing not acknowledge or outline the harm that is in fact happening today due to AI – and should be stopped now? It's an obvious opportunity for boosting awareness.

I am speaking for example of:

  • Security issues such as data privacy violations and breaches that have already happened
  • The fact that these tools are trained on vast amounts of biased data and serve to perpetuate that bias
  • The fact that workers in Kenya and elsewhere are being exploited to train these tools, suffering harm themselves in order to remove harmful content – a practice long employed by social media companies.
  • An increase in the capture of biometrically inferred data that will severely impact human free will, as it enables widespread personal manipulation and gives authoritarian regimes more power to suppress dissent – or encourages democracies to move towards authoritarianism by putting the disenfranchised in harm's way.
  • Risks to climate due to significant energy use in large neural network training. It's valid to note that the ones who benefit the most from AI are the rich, and the ones who suffer most from the climate crisis are the poor. The latter group don't appear to get a lot of say in what these suggested 6 months of pause mean for them.
  • How the tools are already disrupting art, literature and education (including questions about ownership of the training data) without any opportunity to address these issues in a reasoned manner
  • Exclusion of a large part of the global population simply due to the limited number of languages that these tools are trained on.
  • Unsubstantiated claims of sentience that lead to unfounded fears (a harm that the letter itself contributes to)

The letter explicitly states that training of tools more powerful than GPT-4 should be paused. As if harm is only what happens beyond this point. I would argue that time is better spent addressing the harm that is already happening than any harm that might happen.

The focus of the letter does make more sense when you understand that "a humanity-destroying AI revolt" is what is explicitly of greater concern to the technocrats within the longtermist movement. Not the here and now.

Boosting the idea of sentience

The letter does an impressive job of fearmongering when it comes to convincing the reader of a soon-to-arrive intelligence that will "outnumber, outsmart, obsolete and replace us". Their biggest concern is expressed as a "loss of control of our civilization".

It could very well be the case that the authors of this letter are truly afraid, as all of a sudden they have this sense of being abused by AI in the same way that millions of other people are already being abused by AI.

This would explain why the letter doesn't delve into the current harms of AI. Those harms simply don't apply to the authors of the letter. The implicit fear is that the authors themselves could now be negatively impacted.

From the letter:

This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

It's the next version of AI that worries the authors, not the one that is already causing harm.

Misleading citations

The very first citation in the letter is the infamous Stochastic Parrots paper. But the authors of the letter completely misrepresent the study. In the words of one of the paper's researchers herself, Dr. Timnit Gebru of DAIR Institute:

The very first citation in this stupid letter is to our #StochasticParrots Paper, "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]"

EXCEPT

that one of the main points we make in the paper is that one of the biggest harms of large language models, is caused by CLAIMING that LLMs have "human-competitive intelligence." They basically say the opposite of what we say and cite our paper?

You may want to read that again. This open letter that is selling the idea of sentience uses as its first reference a paper that explains how these types of sentience claims are one of the biggest harms. I mean, we could have stopped there.

As professor Emily M. Bender, also a Stochastic Parrots co-author, points out in her own comments on the letter, there are further issues when it comes to how well the citations support the letter's arguments.

The choice of "6 months"

You do have to wonder about the suggested timeframe in the letter:

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

Anyone who has ever worked in IT knows that 6 months is rarely enough to turn anything around, and certainly not enough to address the premises of AI harm. Why would this length of time be chosen as the baseline? What could realistically be corrected after only 6 months? A half-year may in some contexts feel like a long time, but in this context it's the blink of an eye. This truly feels like a red herring.

When the letter suggests that "governments should step in and institute a moratorium" if the authors' stated rule of 6 months is not followed, it's not clear which governments are in question and why they should abide by this instruction. But we certainly know that few countries have a say in this development, even if the impact of these tools is already significant for most.

A further concern is that a true interest in helping people would not focus on drafting a predefined premise such as the one outlined in the open letter. The authors have already decided the exact rules for what is needed (a 6-month pause, or else). There is no acknowledgement of the importance of more perspectives than the one presupposed by the authors.

Ethics requires that we involve people who are harmed. Anyone serious about ethics would emphasise the importance of involving people who are at risk and who are already suffering consequences. Anyone serious about ethics would not make prior assumptions about how that inclusion of voices should happen.

But here we are… apparently 6 months of pausing development, of a specific type of AI that hasn't been released yet, is the answer.

We vs. they

There is a lot of We-posturing when talking about AI. But it's clear that the people who are building AI are a small number of They. It's not an Us. The issue at hand is how much power the They should be allowed to wield over the rest of the world.

And from the letter (my emphasis):

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.

I do wonder if all the people suffering from the bias in these tools already today will identify themselves as part of this We, and enjoy the wonder of this "AI summer", preparing themselves to adapt. I'd suppose many are already being forced to adapt moment-to-moment, rather than being provided more autonomy. Likely watching others reap the rewards.

The way this letter ignores ongoing harm, and insinuates sentience, speaks volumes about its intent.

The main reason the technocrat billionaires are conjuring AI at the same time as they explain how afraid they are of the thing they are conjuring: they believe they have a better chance of not suffering the abuse everyone else is suffering if they can claim to be its master.

This is what they mean by the word “loyal” in that open letter.

References

Ethics advocates expressing concern about the letter


About longtermism


A curated list of relevant podcast episodes


More sources for the article



