

Facing the reality of AI
source link: https://www.stevefenton.co.uk/blog/2023/02/facing-reality-ai/

I want to explain, first, that I don’t want to talk about AI, machine learning, ChatGPT, and large language models. The nature of tech hype means we sometimes need to add a footnote to history, even if the masses will largely ignore it. I’m not someone who jumps into publishing opinions at the drop of a hat. I have strong views on things like front-end JavaScript frameworks and crypto. These are well known to my friends, but I don’t continually publish these thoughts. The hype around generative AI, though, demands a reasoned response.
AI. Yes. The term AI has lost all meaning, as the hasty rebranding of things that weren’t AI confuses everything from basic algorithms to self-aware dystopian robots. I can’t deal with that here, but people have written books on it!
In general, it’s automation
Let’s quickly talk about automation. Automation comes with responsibility. I call this the ethics of scale. When a human does some task and gets it wrong, the extent of the mistake is limited to the speed they can introduce errors.
Imagine you managed a forest and needed to thin out a population of Ash trees to reduce the spread of Ash dieback. Gerald heads out with an axe to begin the work, and you catch up to him at lunchtime. In a couple of hours, Gerald has brought down several trees and hacked off the larger branches to make them easier to pull through the forest without getting snagged. You find him sitting on a rather neat pile of beech tree logs. Gerald’s mistake is bad but limited. Had Gerald used a tree-felling machine, his mistake could have been a hundred times worse.
Automation of any kind opens up the possibility of making high-velocity mistakes. The size and speed of error is often overlooked unless it makes a headline. In the UK, mistakes in the Post Office computer system led to hundreds of sub-postmasters being wrongfully prosecuted. In my opinion, accidentally sending an email to vast numbers of people that might cause anxiety or loss of time is also wrong. It also happens with alarming frequency, as my good friend {{name}} will tell you.
You may be thinking: “But some human mistakes are catastrophic”. You’re right; there are well-known disasters caused by human error spurred on by systemic problems. The proverbial repair tag covering the warning light, if you will. Although these were huge events, each affected one big thing. Automation may have affected many more big things - it would scale the big disaster even further.
The trash heap already exists
If you are on social media, you’ll know about this rather irritating phenomenon. Someone digs up an old and factually questionable anecdote. Something like “American railroads were designed by the Romans”. They weren’t. People use the story to make some pithy point, and it gets rewarded with a bunch of attention.
The next day, you open your social media feed and discover the same story has been posted by… well… a thousand copy/paste plagiarists. Your feed is wrecked and you scroll down, quickly losing respect for anyone who shared the post or commented, “this is so true”.
This copy/paste attitude to content has increased the velocity of content far beyond our human capacity to consume it. If you try to find out what “Agile” is, for example, most of the content ranges from thin to wrong and is often both. We have never been more in need of a mechanism that can filter out plagiarism and low-quality content.
Here are two top sites that answer the question, “what is Agile?”. The first is the top result across several search engines:
Agile is an iterative approach to project management and software development that helps teams deliver value to their customers faster and with fewer headaches. Instead of betting everything on a “big bang” launch, an agile team delivers work in small, but consumable, increments. Requirements, plans, and results are evaluated continuously so teams have a natural mechanism for responding to change quickly.
And this is close behind it:
Agile is an iterative way of managing projects and developing software that makes it easier for teams to deliver value to their customers more quickly and effectively. An agile team is to deliver small but consumable increments of work rather than wagering everything on a “big bang” launch.
The first site has had this description since 2020. The second site changed to this current text in February 2023 (all according to The Internet Archive). It’s likely the second site updated its description in response to the first site outranking it. This is a shame as their previous description was a more accurate reflection of the original manifesto for Agile Software Development.
We still need to get to AI! Remember, this is real people copying and pasting the same content onto millions of web pages, making it hard to find original thoughts. Google’s search index contains hundreds of billions of web pages, with 329 million results (in 0.4 seconds) for “what is agile”. The trash heap is real. It already exists.
We utterly depend on search engines to separate the good from the bad, balancing our need for accurate information with their business models.
WALL·E gone bad
If you haven’t seen it, here’s a quick introduction to the animated film:
In the 22nd century, rampant consumerism, corporate greed, and environmental neglect had turned Earth into a garbage-strewn wasteland, and the megacorporation Buy n Large (BnL) had evacuated humanity to space on giant starliners, leaving trash compacting robots to clean up the planet. - WALL-E, from Wikipedia
WALL·E is the last remaining trash robot. He gathers junk, compacts it, and organises it. Some items he finds are useful, like spare parts he can use to repair himself. The Web needs WALL·E right now.
However, what we got instead were large language models.
Imagine if, instead of cleaning up the junk, WALL·E created a schematic model based on every object he’d ingested over 700 years and started producing more of it. Instead of carefully selecting those functional spare parts, he simply made more of everything. We needed WALL·E to clear space for sunlight to hit the ground. To make it possible for that first sprout of organic life to burst forth and save humanity. Instead, WALL·E buries every square inch with reproduction “plastic and rusted metal amorphous objects”.
Explaining the hype divide
While many people are warning of the potentially massive downside to turning The Web into a lifeless trash pile, there are equal numbers of far louder AI-hype fans steering towards the sirens without lashing themselves to the mast. This cavernous divide is one I hoped to explain in this statement:
In general, the people who think AI will write better content than they could and the people who think AI can’t write content as well as them are both right. - Steve Fenton
If you have been copying and pasting content, such as in the example provided on Agile, large language models are a brilliant boost to this process. The model is rather good at sounding convincing. Rather than taking a single source and applying a light thesaurus rewrite, it looks at many sources and generates something that appears original (in terms of words on paper).
This is different from the kind of original we need. We want original as in “I was thinking about Agile and how it’s lost its way over two decades, and I have an opinion on how we can get back on track”.
Yes, the people who were happy to copy/paste their way to engagement and page views will certainly embrace tools like ChatGPT to try and look more original. Automation is attractive when you need to churn out lots of words. It is less valuable when you want to share original thoughts and ideas.
Search engine answers
Each time we have depended on a central platform, it has proven that the best feature of The Web is decentralisation. We are re-discovering RSS feeds and realising that the Fediverse is an escape from centralised social platforms that exist to pay dividends or to serve the whims of billionaire owners.
People are even coming to terms with the fact that HTML and CSS are the best way to make websites, rather than hundreds or thousands of lines of JavaScript.
No matter how high we jump away from the terra firma that is The Web, gravity always brings us back to the original concept.
The thought of search engines abandoning the organisation of information and becoming a source of word-soup is more than disappointing. It further centralises our source of information when we should instead be decentralising it.
Don’t forget the good stuff
I don’t dislike machine assistance. Imagine you had a vast digital archive of photographs (most people have built up extensive collections). Using AI to understand what is in each picture, detect text, and categorise the images makes them immediately more accessible. There are many examples where AI could be filtering out noise and amplifying signals. I also love Visual Studio’s IntelliCode and similar assistive tools (some of which are based on the same bones as generative AI, which can make the discussion confusing).
Machine-generated content does the opposite of useful forms of AI, amplifying noise and distortion until all signal is lost.
There ought to be a directive that machines and automation should be a net saving for individual time and energy.
Getting philosophical with AI
Does it save us time? Not just the person entering a prompt, but all of us. Humanity. A frequent problem with large language models is that they save “authorship time” at the cost of significant “readership time”. This readership time includes the people who consume content and those who edit and moderate it. Thanks to AI, the volunteer moderators on Stack Overflow now have longer queues, and small publishers are shutting down story submissions.
Sadly, the ability of machines to generate text, images, and video has led to the following:
- The devaluing of real content, thanks to…
- Growing skepticism about the origin of content
Imagine if what we are facing with AI is the loss of faith in all content. To combat this, the only answer is to thoroughly reject all AI content. If you write original text, don’t top it with an AI-generated image, and don’t add AI-generated video alongside it. If you add any generated content, no matter how small, it devalues all other content alongside it. That one AI-generated image infects everything associated with it.
If someone tries to pay you with a counterfeit banknote, you don’t just view the banknote with suspicion. It affects your view of the whole person. If you use one fake image, the rest of your content is probably fake, too.
If you create content with a tool like ChatGPT, it is of no use to anyone who has access to ChatGPT.
Conclusion: AI-deas (yes, AI and ideas)
One of the popular use cases for AI is asking it to generate ideas for content. Please take a moment to think about this!
Is it a good idea for someone bereft of original ideas to be sat in contemplation of writing? Wouldn’t it be better for that person to read instead?
That way, they may be the source of original thoughts in the future (a great deal of invention is assembling disparate concepts into a new combination).
Writing isn’t simply adding words to a page. The process of writing does magical things in your brain that connects ideas and helps you form thoughts and opinions. Reducing writing to “adding words into sentences to form paragraphs” misses the whole point of humans communicating their thoughts to each other. It reduces the written word to a sordid primordial slime, where the goal is to add as many words as possible.
Blaise Pascal once wrote, “I have made this longer than usual because I have not had time to make it shorter”. Amusingly, John Locke later used a variation of this excuse when he wrote what we may imagine to be a parody of the explanation:
I will not deny, but possibly it might be reduced to a narrower Compass than it is; and that some Parts of it might be contracted: The way it has been writ in, by Catches, and many long Intervals of Interruption, being apt to cause some Repetitions. But to confess the Truth, I am now too lazy, or too busy to make it shorter.
We should hold onto the directive that machines and automation should be a net saving for individual time and energy. Any AI that fails this test must be detested.
If you have an idea, then write, dear friends. If the creative waters give forth not the fish of inspiration, take instead a swim.

Sunday, February 19, 2023
Revised Friday, September 8, 2023
Steve Fenton is an Octonaut at Octopus Deploy and six-time Microsoft MVP for developer technologies. He’s a Software Punk and writer.