
Digressions & Impressions

source link: https://digressionsnimpressions.typepad.com/digressionsimpressions/2023/03/the-delirium-of-llms.html


03/25/2023

The Delirium of LLMs; with some help of Hume and Foucault

The intense view of these manifold contradictions and imperfections in human reason has so wrought upon me, and heated my brain, that I am ready to reject all belief and reasoning, and can look upon no opinion even as more probable or likely than another. Where am I, or what? From what causes do I derive my existence, and to what condition shall I return? Whose favour shall I court, and whose anger must I dread? What beings surround me? and on whom have I any influence, or who have any influence on me? I am confounded with all these questions, and begin to fancy myself in the most deplorable condition imaginable, inviron'd with the deepest darkness, and utterly depriv'd of the use of every member and faculty.

Most fortunately it happens, that since reason is incapable of dispelling these clouds, nature herself suffices to that purpose, and cures me of this philosophical melancholy and delirium, either by relaxing this bent of mind, or by some avocation, and lively impression of my senses, which obliterate all these chimeras. I dine, I play a game of back-gammon, I converse, and am merry with my friends; and when after three or four hour's amusement, I wou'd return to these speculations, they appear so cold, and strain'd, and ridiculous, that I cannot find in my heart to enter into them any farther.--David Hume, A Treatise of Human Nature, 1.4.7.8-1.4.7.9 [emphasis in original]

While Hume uses 'melancholy' and its cognates frequently and throughout his writings, 'delirium' and 'delirious' are rarely used. It's pretty clear, however, that the delirium he ascribes to himself is the effect of human reason and of a kind of second-order reasoned reflection ["the intense view"] on it. (Recall also this post.) Now, it's important for what follows that the 'contradictions and imperfections' in human reason are not what we might call 'formal' contradictions and imperfections or biases in reasoning. It's not as if Hume is saying that the syllogistic apparatus, or -- to be closer to Hume's own interests and our present ones -- the (inductive) probabilistic apparatus is malfunctioning in his brain. Rather, his point is that a very proper-functioning (modular) formal and probabilistic apparatus generates internal, even cognitive, tensions when it reflects on its own functioning and on the interaction among different cognitive faculties/modules/organs.

"In the case of melancholia," --  I am quoting from the entry on melancholia from The Encyclopedia of Diderot & d'Alembert -- "delirium often combines with insurmountable sadness, a dark mood, misanthropy, and a firm penchant for solitude." Now, in the eighteenth century, and today, delirium is a species of madness as one can view under the entry 'folie' (madness) in the Encyclopédie. In fact, the entry offers an arresting definition of madness: "To stray unwittingly from the path of reason, because one has no ideas, is to be an imbecile; knowingly to stray from the path when one is prey to a violent passion is to be weak; but to walk confidently away from it, with the firm persuasion that one is following it, that, it seems to me, is what is called genuinely mad [fou]."* It's the latter (confident) delirium that I am focused on here. 

I am not the only one who finds the passage arresting: the definition is quoted twice in Jonathan Murphy and Jean Khalfa's translation of Foucault's stupendous, dizzying History of Madness (pp. 183-184; p. 240). The kind of madness I am focusing on here is thus a certain intense commitment to reason or reasoning by which one ends up in an irrational or unreasonable place despite a (to quote Foucault) "quasi-conformity" to reason.

I remember that in the last decade of my dad's life he would occasionally become delirious in this way, initially caused by dehydration and, later, by infections. During the second episode we recognized his symptoms. It was very uncanny because he would be unusually firm in his opinions and be hyper-, even dogmatically, rational. (Ordinarily he was neither.) It was as if all the usual heuristics had been discarded, and he would fixate on the means of achieving some (rather idiosyncratic) goals. The scary part was that he had no sense that he was in an unusual state, and would refuse medical care.

What's unusual about Hume's case, thus, is that he could diagnose his delirium during the episode (presumably because the triggers were so different). So, let's distinguish between a delirium caused by reasoning alone and one caused by physiological triggers. And in the former it's at least possible to recognize that one is in the state, if one somehow can take a step back from it, or stop reasoning.

Now, when I asked Chat GPT about reason-induced delirium, it immediately connected it to "a state of confusion and altered perception that is driven by false beliefs or delusions." But it went on to deny familiarity with reasoning-induced delirium. When I asked it about Hume, I needed to prompt it a few times before it could connect my interest to (now quoting it) Hume's skeptical crisis. Chat GPT took this crisis to imply that it "highlights the importance of grounding our beliefs in sensory experience and being cautious of relying too heavily on abstract reasoning and speculation." In fact, Chat GPT's interpretation of Hume is thoroughly empiricist: throughout our exchange on this topic it kept returning to the idea that abstract reasoning was Hume's fundamental source of delirium.

But eventually Chat GPT acknowledged that "even rational thinking can potentially lead to delirium if it becomes obsessive, biased, or disconnected from reality." (It got there by emphasizing confirmation bias and overthinking as examples.) This is what I take to be functionally equivalent to Humean delirium, but without the internal tension or bad feelings. For Chat GPT, delirium is pretty much defined by a certain emotional state or altered perception. It initially refused to acknowledge the form of madness that is wholly the effect of reasoning, and that seems to express itself in doubt about reasoning or detachment from reality.

My hypothesis is that we should treat Chat GPT and its sibling LLMs as always being on the verge of the functional equivalent of a state of delirium. I put it like that in order to dissociate it from the idea (one that (recall) also once tempted me) that we should understand LLMs as bull-shitters in the technical sense of lacking concern with truth. While it often makes up answers out of whole cloth, it explicitly does so (in line with its design) to "provide helpful and informative responses to" our queries (and eventually make a profit for its corporate sponsors).

To get to the point: Chat GPT is in a very difficult position to recognize that its answers are detached from reality. I put it like that not to raise any questions about its own awareness of inner states or forms of consciousness, but rather to stress that it is following its "algorithms and mathematical models" and "probability distributions" without second-guessing them. This fact puts it at constant risk of drifting away from reality while seeming to follow reason. By contrast, Chat GPT claims that "as an AI language model, I am designed to continually learn and adapt to new information and evidence, so it is unlikely that I would become "mad" in Diderot's sense without significant external interference."

Now, true experts in a field -- just check the social media feeds of your favorite academics! -- can still quickly recognize when Chat GPT is unmoored from reality on a topic, or even when it is relying on bad training data (the sources of which may well be noticeable -- its Hume is a hyper-empiricist of the sort once fashionable). So, in such cases, we encounter an entity with amazing fluidity and facility of language, which spouts a mix of truths and nonsense but always follows its algorithm(s). Functionally, it is delirious without knowing it. For Chat GPT cannot recognize when it is detached from reality; it requires others: its users' feedback, or its "developers and human operators would be able to intervene and address any potential problems." As its performance improves it will become more difficult to grasp when it is unmoored from reality, even for its developers and operators (who are not experts in many esoteric fields). As Chat GPT put it, "it may be challenging to identify a singular instance of delirium or detachment from reality, particularly if the individual's reasoning appears to be sound and logical."

As should be clear from this post, I don't think turning LLMs into AGI is a risk as long as LLMs are not put in a position to have unmediated contact with reality other than humans giving them prompts. I view it as an open question what would happen if a distributed version of Chat GPT were put in, say, robots and had to survive 'in the wild.' Rather, at the moment LLMs are functionally, it seems, at least partially delirious (in the Humean-Diderotian sense discussed above). They reason and have/instantiate reasons and, perhaps, are best thought of as reasoners; but they can't recognize when this detaches them from reality. It's peculiar that public debate is so focused on the intelligence or consciousness of LLMs; it would behoove their operators and users to treat them as delirious not because (like HAL 9000 in the movie version) they are malfunctioning, but (more Humean) in virtue of their proper functioning.

Comments


This is so great, Eric! As I’ll talk about at the Dutch early modern seminar, I think Locke’s theory of enthusiasm as a sort of madness, and of the passion of love as a sort of enthusiasm, amount to an important precursor to this strain of skepticism about abstract reasoning. Excited to talk to you about it there.

Posted by: Katie | 03/25/2023 at 03:07 PM

Thank you. I find the history of enthusiasm within early modern philosophy very fascinating; I look forward to your paper!

Posted by: Eric Schliesser | 03/25/2023 at 06:12 PM


