
OpenText CEO - don’t send a human to do a machine’s job!

source link: https://diginomica.com/opentext-ceo-dont-send-human-do-machines-job

By Chris Middleton

April 19, 2024


(AI and machine learning concept image © MemoryStockPhoto - Canva.com)

Don’t send a human to do a machine’s job.

Those were the words of OpenText CEO Mark Barrenechea in London earlier this week, as he welcomed delegates to OpenText World, where enterprise leaders were invited to “take flight” with AI and experience how it can “elevate human potential”.

At a series of OpenText events this week - others were in Munich and Paris, with Las Vegas to follow in November - the “taking flight” message was conspicuous. With an AI-first mindset, organizations can “reach heights long thought impossible” and “soar beyond barriers”, they were told.

The subtext of all this was the $9.5 billion Ontario-headquartered firm’s new Aviator family of generative AI capabilities. These deploy Large Language Models (LLMs) and private data sets to solve industry- or task-specific use cases, within broad areas such as DevOps, ITOps, Business Networks, ‘Experience’ (customer communications), Content, and Cybersecurity.
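The pattern being described is, in essence, retrieval-augmented generation: fetch the relevant private records first, then hand them to the model as context. Here is a minimal, hypothetical sketch of that general pattern in Python - a toy keyword retriever and prompt builder, not Aviator’s actual implementation:

```python
# Toy illustration of grounding an LLM in a private data set.
# This is the generic RAG pattern only - not OpenText's implementation.

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank private documents by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    return sorted(documents,
                  key=lambda d: len(terms & set(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Assemble a grounded prompt; a real system would send this to an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return f"Answer using only this internal context:\n{context}\n\nQuestion: {question}"

private_docs = [
    "Invoice disputes must be escalated within five business days.",
    "All supplier contracts renew annually unless cancelled in writing.",
]
print(build_prompt("When do supplier contracts renew?", private_docs))
```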

But grounded beneath these AI flights of fancy was a surprisingly tough message from a company that counts 15 of the world’s governments as clients: “Let the machines do the work”. Not just some tasks, but all forms of work.

Good for IT vendors’ revenues, no doubt. But is that the right message for a world in which fears for long-term human employment are growing? Just this week, for example, UK skills provider AND Digital released research suggesting that 43% of CEOs fear AI will steal their jobs.

Shouldn’t humans always be in the loop, with oversight of what these ever-learning AIs are doing? Barely 18 months on from the generative and LLM AI explosion, should we really begin handing over control of our decision-making to machines? Isn’t that really code for the real vendor message: “Give us your money”?

Barrenechea said:

Never send a human to do a machine’s job, yeah. And let me make another stark statement. I think we need to take humans out of the loop completely – off from many processes. That's the problem [we are trying to solve]. 

There are just too many processes in which a human has to stick themselves. Like XDR [Extended Detection and Response in cybersecurity]. The machine should get all the way down until you have triage, or you need to provide the human touch. 

Now, I'm not advocating for humans not having a role anymore – far from it. What I'm saying is our role is now different; it's more creative. We should be ruthless about taking the manual work out of those steps, so we can be in a better position to add more value.
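To make the XDR point concrete, here is a minimal, hypothetical sketch in Python of an alert pipeline where the machine acts end to end and a person appears only at the triage step (all names and thresholds are invented for illustration):

```python
from dataclasses import dataclass

# Hypothetical sketch of the XDR idea above: the machine handles detection
# and response on its own, and a human is pulled in only at triage.

@dataclass
class Alert:
    source: str
    severity: str      # "low" or "high"
    confidence: float  # model confidence, 0.0 to 1.0

def handle(alert: Alert) -> str:
    """Auto-close or auto-contain where the machine is confident;
    escalate to a human analyst only when judgment is needed."""
    if alert.confidence >= 0.9:
        if alert.severity == "low":
            return "auto-closed"                # no human involved
        return "auto-contained, host isolated"  # machine acts first
    return "queued for human triage"            # the 'human touch' step

for a in [Alert("endpoint", "low", 0.97),
          Alert("network", "high", 0.95),
          Alert("email", "high", 0.55)]:
    print(a.source, "->", handle(a))
```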

But this is what vendors in automation, robotics, AI, and other Industry 4.0 technologies have been saying for decades. And yet generative AI’s Big Bang moment was companies like OpenAI, Stability AI, and others, doing the exact opposite: promising instant creativity at the touch of a button. Cue OpenAI’s annualized revenue hitting $2 billion this year, alongside an $86 billion valuation.

Even my local newspaper now employs ‘AI-assisted reporters’: journalists whose job is to make an LLM-originated report sound more human, casting them in a low-grade support role to a machine. In this new world, ‘creativity’ seems to be just about finessing a machine’s errors while it learns how to do things better. And all this has happened in less than 18 months!

I put this to Barrenechea, who said:

I haven't found an example yet where I would say no [to putting the machine first]. It may come up, but I haven't found an example to say no to yet. 

I don't want to approach a problem and say, ‘Let me automate it’. I'd rather work backwards and say, ‘I want to eliminate it.’ It’s about, how can you eliminate something with an AI-first mindset and work backwards from that?

It’s not automation with AI on top. I'd rather start with AI and work backwards to see what I can eliminate, what I can do differently.

Then he added:

Then quantum is going to come. So, 10 years from now, the computing power that is going to sit underneath all this [will be incredible]. I think the real fear should be around quantum.

Rise of the machines? 

But as others have pointed out (see diginomica, passim), that ‘fear’ is not so much about job losses today - some facile equation of ‘machines in, humans out’ that has failed to become reality in any highly automated economy - as something subtler.

It is more that, as increasingly sophisticated tasks get handed to AIs, and as we begin to rely on them for expertise, skill, insight, creativity, and more, across every sector, the ladder may be pulled up behind people who are already successful. Meanwhile, human creativity and intelligence are already becoming commoditized, with a gulf opening up between those starting out in the world of work and those who are established professionals.

Barrenechea responded:

I'm with you 99% of the way, except for that being a bad idea.

But I think that observation is absolutely right. For example, I don't need to hire junior programmers anymore. In fact, we've had to elevate - and will continue to elevate - who we hire, because our tools are so good. I'm stepping up the skill that's required to get into our company. And I think AI is going to force that across a lot of industries.

No doubt. But the deep challenge is surely in the long term, in that the education system is just not set up to cater for that fast-emerging world. 

Bear in mind that a generation of recent graduates already finds itself saddled with huge debts and unable to get onto the property ladder, not to mention having to care for their ageing parents and grandparents. 

Now AI is threatening to either take their jobs, or their ability to acquire ground-level skills – and perhaps even to learn for themselves. And creative tasks? Just pay a subscription to the most valuable companies on the planet. 

Then Barrenechea added:

Also, never send a machine to do a human’s job!

Excellent advice - even if, perhaps, it was a way of covering his own back in our discussion. But what is a human’s job in an AI-enabled world in which more and more tasks, professions, and creative pursuits can be handed to machines that have been trained - legally or otherwise - on humanity’s collected data online?

So, just what are the essential skills that, for example, coders will need to develop now for this new world of work – one for which their education and training have failed to prepare them? I put this to Executive VP and Chief Product Officer, Muhi Majzoub.

He said:

One of the biggest skills you will need is familiarity with data structure and system design. 

Internally at the company, we are not allowed to use ChatGPT or Copilot in production - it's my policy to 24,000 employees and colleagues that they are not allowed to do that, and are prohibited from putting any customer data or private data from OpenText into a public cloud. We've given them the tools they need internally to advance their skills. 

Even so, I wanted to test how good AI is becoming. And I do not know Python as a programming language, yet in 10 minutes I developed 700 lines of Python code using Copilot. But I did know about data structure and system design. And I think that's where AI is headed: it will really make life easier for us.

Even so, Majzoub revealed that, despite OpenText’s policy of not using popular tools in the public cloud, he is evaluating five different LLMs for possible adoption at the company - including OpenAI’s.
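That no-customer-data rule amounts to a gate in front of any public model. A minimal, hypothetical sketch of such a gate in Python - the patterns are invented for illustration, and this is not OpenText’s actual tooling:

```python
import re

# Hypothetical guardrail: refuse to send a prompt to a public LLM if it
# appears to contain customer or private data. Patterns are illustrative only.

PRIVATE_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),       # email addresses
    re.compile(r"\b\d{13,19}\b"),                     # long card-like numbers
    re.compile(r"(?i)\b(customer|contract)\s+id\b"),  # internal identifiers
]

def safe_for_public_llm(prompt: str) -> bool:
    """Return False if the prompt looks like it carries private data."""
    return not any(p.search(prompt) for p in PRIVATE_PATTERNS)

print(safe_for_public_llm("Explain Python decorators"))         # True
print(safe_for_public_llm("Summarize contract ID 8841 terms"))  # False
```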

Timestamp it

Back in the CEO session, Barrenechea was just getting started on tearing up industry shibboleths:

I think ‘Delete’ is the worst command ever created in Computer Science. I don't think you should delete anything! And you shouldn't waste anything, because you don't know how you’re going to use it in the future. 

Instead of ‘Delete’, put a timestamp on it. The things you query need to be temporal, to be limited by time, and other things need to keep being demoted to lesser-cost environments. But conceptually, I don't want to actually delete anything. I want to manage it, to keep pushing it towards lower cost, or to near-zero cost. And put a timestamp on it.
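The ‘timestamp, don’t delete’ idea translates naturally into a soft-delete plus tiered-storage scheme. A minimal, hypothetical sketch in Python - the tier names and age thresholds are invented, not OpenText’s:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical 'never delete' scheme: records are only stamped and demoted
# through ever-cheaper storage tiers. Tier names and thresholds are invented.

TIERS = [("hot", 30), ("warm", 365), ("cold", None)]  # (tier, max age in days)

def tier_for(created_at: datetime, now: datetime) -> str:
    """Pick the storage tier a record of this age belongs in."""
    age_days = (now - created_at).days
    for tier, max_age in TIERS:
        if max_age is None or age_days <= max_age:
            return tier

def soft_delete(record: dict, now: datetime) -> dict:
    """'Delete' just stamps the record; the data itself is kept."""
    record["deleted_at"] = now
    return record

now = datetime.now(timezone.utc)
record = {"id": 1, "created_at": now - timedelta(days=400)}
print(tier_for(record["created_at"], now))       # -> cold
print("deleted_at" in soft_delete(record, now))  # -> True
```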

However, as we reported last month, many enterprises are, if not actively deleting their AI strategies, then certainly putting them on hold, in light of fears about security, privacy, copyright, bias, ethics, compliance, and more. 

On the subject of an enterprise AI pause, Barrenechea said:

OpenAI is not a strategy. You don't want to take your source code to OpenAI. You don’t want to take your contracts and pump them into OpenAI, period. So, it wouldn't surprise me that enterprises are saying no to OpenAI. 

But I don’t see any projects being put on hold myself. We see banks building their own LLMs, for example. We see insurance companies, apparel companies, retailers building their own platforms. But for sure, there is a concern about taking your IP and handing it off to OpenAI. 

Because you can't protect it once it is in OpenAI. An LLM doesn't unlearn; even if you're experimenting, the learning remains behind. You can't use the Men in Black flashy thing on it. If you put your data in an LLM, and you shouldn’t have, you can’t take your data back. Those learnings remain.

My take

Perhaps a hypothetical Delete function is not such a bad thing after all?

