
Artificial Intelligence and the Crisis of Materialism

source link: https://mitch-horowitz-nyc.medium.com/artificial-intelligence-and-the-crisis-of-materialism-73a2b478f5d2

The author in 2022 testing the psi effect on a random number generator with engineers from the Princeton University parapsychology lab.


Artificial Intelligence (AI) is back in the news after a Google engineer was recently suspended for postulating that an artificial-intelligence chatbot might have a “soul.” This dispute, and others like it, may dominate philosophy in the unfolding century.

So far, however, the AI dispute is one that Western society has proven ill-equipped to address. The problem lies somewhere between 19th-century religiosity and the 21st-century cult of “reason.”

Let me dial back the clock for a moment. As someone dedicated to probing whether the mind possesses extra-physical abilities, I determined several years ago to survey what might be considered the finest literature in the field of New Thought, or the extreme idealist school that holds to the principle that thoughts are causative.

The best literature in New Thought tends to go back to its earliest days in the late 19th and dawning 20th centuries, so I read the work of philosopher William James (who called New Thought the “religion of healthy-mindedness”), his immediate student Horatio Dresser, minister and philosopher John Herman Randall, and other metaphysicians of that era. Excluding the work of James, I came away disappointed, finding that many writers relied on familiar but undefined terms like spirit or soul, the very thing the Google engineer did.

I eschew such language for its lack of clarity. I prefer to speak more directly of results from the academic study of psychical abilities, or parapsychology; innovative placebo experiments, such as those conducted by Harvard Medical School’s Program in Placebo Studies & Therapeutic Encounter (PiPS); findings from the fields of neuroplasticity and mind-body medicine; and some of the more stirring interpretations of perception and particle mechanics. I also value the testimony of individuals, a practice instilled in me by James. Testimony, over time, assumes the form of a record. Testimony, for example, is exactly what we use to determine the efficacy of psychopharmacology.

We must bring the insights of mind causation and the study of extra-physicality, on which I will expand, to bear on the discussion of AI. The frustrations of studying AI are hardly limited to the use of terms like “soul.” The term “intelligence” itself — much like consciousness, awareness, and sentience — has no widely accepted definition. Of course, we can come up with some consensus-based ideas (language, empathy, analytic skills, evolutionary applications), but precision still eludes us. This precision grows more distant as we expand our understanding of the potentials of thought.

For example, within the field of neuroplasticity, now about a generation old, brain scans reveal that habitual thought patterns eventually alter the pathways through which electrical impulses travel in the brain. Simply put, what is called “thought” — another familiar but undefined term — alters the body at the cellular and molecular levels. No one disputes those findings. What remains controversial are the implications of the findings.

If thought demonstrably alters your body at a molecular level, is that not a defining characteristic of intelligence? And has a computer or network of computers — as yet — demonstrated the ability of programmed and evolving analysis to alter the molecular nature of its mechanism? If not, can a machine justly be called intelligent? Some colleagues have argued to me that what I am describing is in the cards for nanotechnology. But we will have to await that day. (I haven’t even touched on the tangential question of the effects of psychedelics and whether they are replicable with an “intelligent” machine.)

So long as we exclude questions of the metaphysical (or extra-physical, as I prefer) in matters of AI, we remain like the proverbial blind men trying to describe an elephant. Our insights, while they may grasp a pertinent fact here and there, will always be incomplete.

Our philosophical and technological blindness stems, in part, from the triumph of philosophical materialism, or the belief that matter creates itself. Ever since Darwin — who was not a materialist — modernist philosophy has largely embraced the idea that all of life is attributable to unseen but knowable antecedents.

For Marx, this meant economic patterns; for Freud, trauma and repressed memory; for Pasteur, germs; for Einstein, time and space; for James, self-image — and so on across myriad fields. This helpful model has, however, almost always excluded matters considered spiritual, or extra-physical. This prima facie exclusion, based on the overindulged belief that religion obfuscates rather than reveals, has degraded our capacity to evaluate contrary evidence. Hence, for generations Western culture has been locked into a repeat-loop of arguments over evidence for extrasensory perception (ESP), for example — a field slurred on Wikipedia to the point of comedy by bloggers and social medianiks who honestly have no idea of the body of juried data supporting the statistical case that humans can exchange information in a manner that exceeds commonly observed sensory experience or technology. (I review the evidence in my forthcoming book, Daydream Believer.)

I am not one to quote Marx very often, but he had an irresistible insight, which applies to the current debate. In The Eighteenth Brumaire of Louis Bonaparte, the philosopher famously observed: “Hegel remarks somewhere that all great world-historic facts and personages appear, so to speak, twice. He forgot to add: the first time as tragedy, the second time as farce.” We are living out this second-time farce when fitfully attempting to address AI under the umbrella of materialist assumptions. Indeed, modern materialism, which began as a triumph of logic (sort of), has devolved into a position of sentiment.

Let me explain. In general, the consensus materialist view of human psychology is that the individual craves power and immortality but must inevitably face the fragile and limited nature of life. Hence, he or she clings to emotionally appealing and traditionally reinforced concepts like “free will,” meaning, soul, afterlife, infinite potential, and so on, never realizing that we are, in the words of the comic strip Dilbert (a phrase embraced by noted materialists), little more than “moist robots.” We are electric sheep dreaming of ourselves.

In the 1950s and 60s, when questions of “conditioning” dominated the social sciences, materialism proved a challenging and even triumphalist (not to mention career-making) point of view. Countervailing perspectives could be minimized. Parapsychology, or psychical research, was a lightning rod for criticism (and remains so), and quantum theory was not yet popularized. Hence, the materialist outlook, in academia and media, mostly won out. This remains true today in much of academia, opinion-shaping media, and even among feted screen personalities like Rachel Maddow and Bill Maher. Any glance at Wikipedia — which urgently needs editorial ombudsmanship in areas of non-materialist scholarship — speaks to this victory.

But materialism has maintained its primacy much as 19th-century religion once did: it faced few cultural challenges despite the weight and quality of countering evidence. That countering evidence has grown to the point where philosophical materialism defends its worldview less by repetition and validation of its premises (the gold standard of science) than by polemic, catechism, and causticism. Materialism’s adherents rely upon cultural affinities within media — scholarly journals, news sources, Wiki, et al. — to maintain the status quo. Hence, you will read that experiments that have been supported by replication and meta-analysis — from the 1930s-era Duke University ESP card experiments of J.B. Rhine to the recent precognition experiments of Cornell University psychologist Daryl Bem — have never been repeated. You will read such responses in the comments to this article.
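
For readers who wonder what such statistical evidence amounts to in practice, here is a minimal sketch of how a card-guessing score is weighed against chance. A Zener deck uses five symbols, so blind guessing yields a hit roughly one time in five; the trial and hit counts below are hypothetical, chosen for illustration only, and are not figures from Rhine’s or Bem’s data.

```python
# Minimal sketch: weighing a card-guessing score against chance.
# The trial and hit counts are hypothetical, for illustration only.
from math import comb

def binomial_tail_p(n: int, k: int, p: float = 0.2) -> float:
    """One-sided probability of scoring k or more hits in n trials
    when each guess succeeds with chance probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A Zener deck has 5 symbols, so chance expectation is 1 hit in 5 guesses.
n_trials = 1000   # hypothetical number of guesses
hits = 235        # hypothetical hits; chance expectation is about 200
print(f"Probability of {hits}+ hits by chance alone: {binomial_tail_p(n_trials, hits):.4f}")
```

The smaller that probability, the harder it becomes to attribute a score to luck — which is the basic arithmetic underlying the replication and meta-analysis debates mentioned above.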

But the status quo is breaking down. AI will continue to confound us. The question of how or whether a machine is intelligent will force us — and is already forcing us — to finally define our terms. And if those definitions do not square with the insights of cumulative science and testimony, the passage of generations will ensure the gradual abandonment of philosophical materialism, just as we have abandoned the casual use of the term soul — or, at least, insisted that it be defined and defended.

Nobel Prize winner in physics Max Planck — who took seriously the extra-physical dimensions of thought — famously observed: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

This is the position in which materialism — once such a herald of reason — finds itself today. To exclude the question of the extra-physical as we approach and attempt to define AI and other predicaments of current life is to leave us as the blind men describing an elephant. And humanity, for all its foibles, will not consent to blinkers for very long.

_____________

If you wish to further explore questions of ESP, the author delivered this talk in 2022:

