Can you trust AI?

This was a question posed at a recent meeting of Fellows at the Institute. It is now well established that technology outstrips the average human brain in its ability to:

  • scan vast amounts of data in a second
  • speed up analysis
  • decipher patterns and anomalies
  • improve decision making (in well-understood areas)
  • automate and evolve responses to queries

The question of this article is: Will generative AI inevitably have the last word? There may be a lot of evidence pointing to the infinite capacity of generative AI, but “Just because you can, doesn’t mean you should.”

To simplify the question for a moment: Is automation always better? Here I think of New Zealand’s recent ban on cell phones in schools. Reports are now in that the cacophony of noise during school breaks is music to teachers’ ears. Youngsters are once more interacting away from phone screens. Schools even welcome the odd broken window now that footballs are again being kicked around during breaks.

Bernie Taupin recently commented on AI-generated music, where the instruction is to create music in the style of a certain artist. His description of it was an expletive. (I wondered: if AI proponents were so sure of its superiority, why would they instruct it to copy the music of a particular artist?)

“We should take care not to make the intellect our god; it has, of course, powerful muscles, but no personality.”
Albert Einstein

I feel similarly about AI. Many years ago, quantum physics challenged the prevailing paradigm that what you can physically see and touch is all there is. Various studies showed that human observation ‘bends’ matter in ways that classical physics cannot explain. Rupert Sheldrake’s Morphic Resonance1 challenged the views of the time about matter and shared knowledge. If the human brain contains faculties that can influence matter, then by disregarding what a ‘human being’ is and relying more and more on automation, we risk ignoring the source of all that we value.

Cause for pause

As long as there is a scientific argument for non-physical human powers, we should exercise extreme caution before handing so much power to artificial (manufactured) intelligence while expecting our human experience on Planet Earth to improve. (This assumes humans would not deliberately make the human experience worse.)

“The man widely regarded as the godfather of artificial intelligence is worried the technology is becoming too powerful for humanity’s own good. Renowned computer scientist Geoffrey Hinton quit his role at Google last year.

When he resigned, he said he was able to speak freely about the dangers and that some were “quite scary”. In particular, around how AI could spread misinformation, upend the job market, and ultimately, pose an existential risk to humanity.

Hinton was an early pioneer of the neural network – a method which teaches computers to process data in a way that is inspired by the human brain.”2

I am with Charan Ranganath3, who pointed out that human thought runs on a few watts of energy, compared with the excessive amounts consumed by machines processing data. The questions that follow are:

  • Just because vast amounts of data can be produced and consumed, should they be?
  • What is the problem this is trying to solve?
  • Why invest limitless sums of money to prove the muscle strength of machine and artificial intelligence when there are many areas where such investment carries less risk of harm and more certainty of advancing the human experience?

It really comes back to that essential question: What is the objective here?

The case for it

Hinton argues that AI is superior to human intelligence in that neural network models running on various computers can more or less instantly share what they have all learned, so that each of them knows what all of them have learned. “That’s a way in which they’re far superior to us… It’s a kind of hive mind.” (Similar to Sheldrake’s Morphic Resonance, then.)
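The sharing Hinton describes can be sketched in a few lines. As an illustration only (the article does not specify a mechanism; parameter averaging, as used in federated learning, is my assumption here), several identical model copies trained on different data can merge what they learned by averaging each parameter across all copies:

```python
def average_weights(copies):
    """Merge what several identical models learned by averaging
    each parameter position across all copies."""
    n = len(copies)
    return [sum(params) / n for params in zip(*copies)]

# Three copies of the same (toy) model, each trained on different
# data and so ending up with slightly different weights:
copy_a = [0.10, 0.50, 0.90]
copy_b = [0.20, 0.40, 1.00]
copy_c = [0.30, 0.60, 0.80]

merged = average_weights([copy_a, copy_b, copy_c])
# Every copy can now adopt `merged`, so each "knows" what all learned.
print(merged)
```

Real systems move billions of parameters this way over fast networks, which is what makes the “hive mind” effectively instantaneous compared with humans exchanging words.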

This is not to discredit the power of AI to enable the functions listed at the start of this article, especially when considering how flawed human beings become when exhausted, upset or over-excited. As we are such emotive creatures, having AI assist analysis and decision making is not only valuable but many would say, essential.

The case against it

Should we rather see these AI systems as labour-saving devices, not as replacements for human beings?

Yes, a physician exhausted after back-to-back shifts may be flawed in their thinking, but they can still entrain a human heart in a patient mere metres away (by virtue of being human). Many are starting to publish data on the negative impact of constant data overload on human brains, i.e. a deterioration in the way humans function. Is it really worth sacrificing human ‘being’ for intelligence ‘automating’?

When you need to change direction, the human brain has a very quick brake to override your original intention. You expend practically no energy to change tack. Not so with AI. There was an amusing example recently where a frustrated customer decided to play with a delivery company’s chatbot. The answers it started to generate – which were rapidly circulated via social media – were so derisive that the recent ‘system upgrade’ had to be pulled.4

“Humans create the AI models, vet the selected data, frame the emerging patterns into business cases, and set direction with new mental models based on further information.” 5
Institute Fellow Patti Blackstaffe

Is it simply that we don’t know enough about every AI system to manage it effectively in the manner that Patti describes, and so control for negative impact? Have we removed the well-proven control step, namely: “What problem are we trying to solve?”

I wonder how good a machine is at bedside manner. I remember a surgeon once talking about the calm versus agitated pre-operative patient and the significant difference in health outcomes given the volume of blood pumping through the physical system. This is a matter of life or death and yet another example of a uniquely human (brain-to-brain) phenomenon.

A similar experience is spoken about by those working with agitated customers. This is not easy work. The stress on Call Centre employees may be reason enough for a company to resort solely to AI for customer contact. Yet it is heartening when I hear employees discuss their sense of fulfilment at helping customers sort out stressful issues. Both humans seem to come away from the interaction positively transformed.

Hinton warns that AI systems create agents that have autonomy yet need subgoals to achieve things; but they may create subgoals that you didn’t intend (the alignment problem). “…if without saying anything more you told them to get rid of climate change, they might figure the best way to do that is just to get rid of people. And that’s not really what you meant.”

This is similar to Yuval Noah Harari’s warning that “we are on the verge of destroying ourselves.” In a recent interview6, he makes the point that it is not merely about reproducing information (as with the Gutenberg press), but rather that AI is an independent decision-making agent choosing what data to put in front of ‘homo sapiens’. He claims that most of the information flooding us is junk – the truth is becoming more elusive, since the purpose of disseminating the information is to reach people, not to elevate the quality of thought.

The next step

Yuval Noah Harari calls for the immediate banning of technology that claims to be human, and for corporations to be held liable for the consequences of their algorithms (rejecting their assertion that their operations uphold the right to ‘free speech’).

He has a good case on both these points.

Put me down for human beings having the last say. Long may we live in a world where kiddies revel in squishy finger paint as they form a crude representation of the world as they see it. Long may we gaze in awe at a rare Aurora.

If we as human beings become more and more reliant on AI to think for us, could our brains actually shrink? And if so, are we ready to sacrifice what is human for what is artificial?

For these reasons, and many others, I hope humans are wise enough to hit pause.

Sources

  1. “What is Morphic Resonance?” Rupert Sheldrake. https://www.youtube.com/watch?v=d_RGEpJSr6s&ab_channel=RupertSheldrake
  2. “Artificial intelligence found to be ‘superior to biological intelligence’.” Interview with Geoffrey Hinton. March 15, 2024.
    https://www.rnz.co.nz/news/world/511778/artificial-intelligence-found-to-be-superior-to-biological-intelligence-geoffrey-hinton
  3. Charan Ranganath – CNN interview, March 9, 2024
    https://www.youtube.com/watch?v=CjXPiUN_wDw&t=1s&ab_channel=AmanpourandCompany
  4. “UK parcel firm disables AI after poetic bot goes rogue” by Reuters. January 22, 2024. https://www.reuters.com/technology/uk-parcel-firm-disables-ai-after-poetic-bot-goes-rogue-2024-01-20/
  5. “Why We Still Need Humans” by Patti Blackstaffe. April 5, 2022 https://www.institutefordigitaltransformation.org/why-we-still-need-humans/?utm_source=brevo&utm_campaign=2023%20JDT%2019&utm_medium=email
  6. Yuval Noah Harari – “We are on the verge of destroying ourselves”. CNN interview, September 17, 2024
    https://www.youtube.com/watch?v=BLP6K8xm0Kc&ab_channel=AmanpourandCompany
