Songs on the Security of Networks
a blog by Michał "rysiek" Woźniak

Does ChatGPT gablergh?

Imagine coming across, on a reasonably serious site, an article that starts along the lines of:

After observing the generative AI space for a while, I feel I have to ask: does ChatGPT (and other LLM-based chatbots)… actually gablergh? And if I am honest with myself, I cannot but conclude that it sure does seem so, to some extent!

I know this sounds sensationalist. It does undermine some of our strongly held assumptions and beliefs about what “to gablergh” actually means — and what classes of entities can, in fact, be said to gablergh at all. Since gablerghing is such a crucial part of what many feel it means to be human, this is also certainly going to ruffle some feathers!

But here’s the thing: so far, after thousands of years of philosophical thought and scientific research, we have not been able to clearly define “gablerghing”. Thus, we simply cannot say for certain that some simpler animals, like ants, do not gablergh in some relevant sense. Gablerghing happens on a spectrum, from clearly gablerghing organisms like humans and dolphins, through animals like dogs or cats, who I think we would mostly agree do gablergh, down to ants, where such a statement is more fraught.

So why couldn’t “a set of scripts running on top of a corpus of statistically analyzed internet content” be said to, in some sense, gablergh?

Naturally, your immediate reaction would not be to make a serious thinking face and consider deeply whether or not GPT indeed “gablerghs”, and if so to what degree. Instead, you would first expect the author to define the term “gablergh” and provide some relevant criteria for establishing whether or not something “gablerghs”.

Yet somehow, when hype-peddlers claim that LLMs (and tools built around them, like ChatGPT) “think”, nobody demands that they clarify what they actually mean by that, nor what criteria they might possibly use (beyond “the output seems human-made”). This allows them to weaponize the complexity of defining the term “to think”, with all its emotional and philosophical baggage, and use it to their advantage.

“Well you can’t say it doesn’t think” — the argument goes — “since it’s so hard to define and delineate! Even ants can be said to think in some sense!”

This is preposterous. Of course this does not in any way prove that GPT can “think”; as one person pointed out on fedi, it’s a case of the motte-and-bailey fallacy. Instead of accepting the premise, we should fire right back: “you don’t get to claim that GPT ‘thinks’ unless you first define that term clearly, and provide relevant criteria”. And these criteria need to be substantially better than the quacks-like-a-duck standard of “the output seems human-like, also it told me it thinks”.

After all, “at the very least you need to be able to define a quality Q you claim X has” is a much stronger stance than “I claim X has quality Q and you can’t prove I am wrong because Q is hard to define.”

No idea why we all collectively keep getting tripped up by this, and fail to recognize it for what it is — a thinly veiled hype-generation attempt that uses badly defined terms for marketing.

In the end, what it means to “think”, to be “conscious”, to “have intentionality”, is a matter for philosophers. Not for AI-techbros with stock to pump and chatbots to sell.