Clippy, Giles, and the Reason Chat-GPT Is Impressive (It's not what you think it is.)

A picture of a woman spitting out her coffee from the meme "Spit Take Woman." This is me, reading about Bill Gates using ChatGPT for the first time. Also, Spit Take Woman is not J-Law, in case you were wondering. It's a real woman named Elissa.

I made a joke recently: what happens if OpenAI's black box is Schrödinger's box, and when you open it, you find the sad, dead carcass of...
This image represents Clippy, the cartoon character who helped us learn how to do things in Microsoft Office. Who could resist this debonair chap with his perfect paper clip form, large dark eyes, and expressive eyebrows? I think only people who did administrative work for others appreciate him.
Clippy.

One of the less intelligent people who litter my feed with posts about AI recently contradicted the skeptics and dismissers, saying that AI will affect every aspect of our lives, even if it isn't quite where it should be. This is a tell, you see. The future tense. It's adorable, really, that anyone thinks this is new in any sense, because AI has already affected every aspect of our lives, starting with how World War II was waged. The possibility for it erupted through Turing in the 1930s. The technology behind the language models we now have first emerged in 1955 and has been developing ever since. AI is what brought the socially connected web to its pinnacle; it's what perfected the Netflix streaming experience. It's been providing clothing choices in robostyling subscriptions and suggesting books for you to read. It's been directing what news you see and when. The idea that you, a LinkedIn poster looking for a new career, are still saying "will" is simply precious.

However, I can understand wanting to silence critiques when the critiques are reductive. While the hype is ludicrous to the educated, there is an innovation within ChatGPT; it simply is not what people think it is. Its limits can be stated more accurately as follows: AI's effectiveness quickly wanes without human input and fine-tuning. While AI can replicate certain aspects of human intelligence, it remains, as John Searle's Chinese Room Argument suggests, an illusory intelligence at best, slightly better than some chatbots that pre-existed it. It remains a reflection of its creators' intellect and of the intellect behind the data put into it. The "intelligence" of the system is bound by the quality of its programming and data, highlighting the ongoing challenge of distinguishing genuine intelligence from clever imitation in the digital age.

But I feel like telling you a story. Do you feel like reading one? If not, you've got the jewels right here. :) If so... here is my story.


Horror Novels and Trivial Pursuit Were My Yodas

A photo of Grogu. by Lisa Fotios: https://www.pexels.com/photo/close-up-of-yoda-toy-7829101/

I was a spontaneous reader. I read at a young age.

Mind you, I didn't always read with complete understanding, especially the explicit horror content in Stephen King, but that's for another post.

Steve, if you ever find this, I've got issues. All of us early Gen-X readers have issues. I'm grateful for your literary imagination and for elevating the status of dorky kids everywhere who read a lot, but we've got issues. Let's talk. Call me.

I digress.

At any rate, this gave me kind of a rep. I would read things at parties for entertainment. It was like I was a monkey. Or the Shirley Temple of the written word or something. As I left pre-school and entered grammar school, the stakes on my performance were raised. There was another kid in our family social circle my age--I will call him Giles--and around age 9, he was held up as a competitor to me. The men in our circle would pit us against each other, like an epic rap battle or something, and if you knew the men in question, you'd have no doubt: there was always money riding on us.

On this day, most of them seemed to have their money on Giles. His leading talent was memorizing sports data; he could tell you the records of many hockey players and baseball players. I could tell you how to lay a quinella or an exacta at the dog or horse track, and I could calculate the odds and payouts. Outside that, however, sports was a whole bunch of mishegas for me. How I learned to lay horse and dog bets and calculate jockey weights, furlongs, and speed (and to say mishegas as a shiksa) is yet another story.

I'm trying to stay on track here, I promise, pun intended. I've had a colorful life. It's sometimes distracting for me. There was one last epic rap battle between me and Giles. It taught me something really powerful.


"Um, Burkina Faso?"

This is a Dall-E-generated image of a Trivial Pursuit playing piece. It's still not too good at things like that. Perhaps if there was a copyrighted artwork that depicted it, we might have a better result. Oh, snap. SICK BURN.

On this particular occasion, we had several rounds of Trivial Pursuit with the men watching. Giles was the hands-down winner. I was humiliated. The men chuckled and said, "Well, that was a TKO in the third round." Fortunately, however, Giles' dad was an honest kind of guy. He later made Giles confess that he had taken a stack of cards, memorized them, and placed them in the front of the box. The father also told me that Giles probably would not be able to retain that information for long. What was creepy was the set of behaviors Giles had developed. When you asked him a question, he would stroke his chin, furrow his brows, and then say, "Um--Burkina Faso?" This made him compellingly persuasive. It gave the impression that he was retrieving the answer from some dark recess of long-term kid-genius memory rather than from ephemeral data in working memory.

It was a performance.
And perhaps a different variety of genius.

Do you see where I'm going with this?


Bill Gates And The Million Dollar Question

This is what a human learner of AP Bio material looks like. Photo by Ron Lach : https://www.pexels.com/photo/young-woman-sleeping-with-her-head-on-the-books-8086367/

Giles is an (imperfect) metaphor for LLMs.

I've been writing about Isaacson's bio of Musk recently, and the story that made me snarf my coffee was the one about Bill Gates being awestruck at being unable to stump ChatGPT with biology questions after it had been trained on AP Bio test data.

Remember this article? College Board was discovered to have been sharing large data sets in 2020. Look at the list. So, one assumes easy access to College Board data for OpenAI, given the "donors" who sponsored it.

What made me snarf my coffee was the "innocent" statement that Gates was impressed by this and that this should persuade me. He should know:

The model is only as good as the data you input and the data science you use to structure it.

Why wouldn't it work well if you have historical data for the AP Biology test? Who wouldn't be good at the test with that at their disposal? Why wouldn't a kid who memorized the trivia cards and stacked the box with them win a trivia contest? Any AI model--and none of them deviate from each other significantly thus far--is only as good as the data entered into it by humans, who can fine-tune the model over time as it starts to degrade through use. This would have been easy to do even with pre-existing technology. See Clippy. Yes--higher volume of data, nuance, sophisticated language, and broader answers. Yes, a more comprehensive range of ways to ask the question. But in the main?

Clippy.
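Giles's trick, which is arguably what impressed the room, can be sketched in a few lines. This is a toy illustration of memorization versus generalization (hypothetical, and nothing to do with OpenAI's actual architecture): a "model" that has memorized its test data looks brilliant on that test and helpless one inch off it.

```python
# Toy illustration (hypothetical): a "model" that has memorized its training Q&A.
# It aces any question it has already seen and flails on anything novel --
# like Giles with his pre-stacked trivia cards.

training_data = {
    "What molecule carries genetic information?": "DNA",
    "What organelle produces ATP?": "The mitochondrion",
}

def memorizer(question: str) -> str:
    """Answer purely from memorized data; no understanding involved."""
    return training_data.get(question, "Um--Burkina Faso?")

print(memorizer("What organelle produces ATP?"))   # seen in training: correct
print(memorizer("Why do cells need ATP at all?"))  # unseen: the chin-stroke
```

A real LLM interpolates far more fluidly than a lookup table, of course, but the asymmetry is the same: performance on material that resembles the training data tells you little about understanding.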

Here is the million-dollar question:

Was Bill Gates--who founded the company that made Clippy--seriously impressed by that? Maybe. Was he willing to let people think he was impressed? Absolutely. And what, if anything, really impressed him? I'm sure it is possible that Gates was like the men watching Giles stroke his chin and furrow his brows. Gates is getting up there, so it's in the realm of possible answers. Somehow, I doubt it. His reaction was likely a PR fantasy.

What was likely impressive to Gates, outside of the investment potential, was not the semantic content but the syntax of the answers. This is not nothing. It's something. Many of us have been waiting for this because NLP promises to allow us to query qualitative data intelligently and revolutionize our code. This, seen in a knowledgeable context, is quite huge. It's just far more modest than the claim of what GPT technology delivers.


It's About the Language, Stupid

It's about language and its relationship to thought and intelligence. A Photo of a Dictionary by Nothing Ahead: https://www.pexels.com/photo/lens-on-a-dictionary-4567481/

Understanding linguistics is fundamental to understanding human language production, and any language production that at least pretends to be humanoid. For years, I have talked to brilliant engineers and developers who can't tell you the difference between syntactic and semantic content--and they look at me kind of strangely when I ask about it. They are writing code for "neural nets" and all other kinds of technology magic. Still, they don't understand language: how it works neurologically, what language is, and how it differs from thought and intelligence. These engineers are seduced by the technology as well, so they may even appear to give expert testimony. However, they are missing contextual information you need to theorize LLMs well. Many of them are likely to tell you: who cares? You tinker until it works. Or that models and theories tend to fall apart in practice.
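The syntax/semantics gap those conversations keep missing can be shown in a few lines. This is a toy sketch of my own (not a neural net, and not how any real model works): a program can emit perfectly grammatical English while understanding nothing at all.

```python
import random

# Toy sketch: syntactic form without semantic grounding. Every output parses
# as a well-formed English sentence; no meaning is "known" by anything here.
subjects = ["The database", "A neural net", "Clippy"]
verbs = ["optimizes", "confabulates", "retrieves"]
objects = ["the ontology", "a furlong", "Burkina Faso"]

sentence = f"{random.choice(subjects)} {random.choice(verbs)} {random.choice(objects)}."
print(sentence)  # grammatically fine; semantically arbitrary
```

The sentences are impeccable English and mean nothing to the program that produced them. Judging the producer intelligent because the form is fluent is exactly the confusion at issue.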

At Singular XQ, we realized early on that those of us who wrote and used writing software were not quite as impressed by the grammar and style contributions of GenAI, because we'd been using software that does the same for years. Those of us who have dabbled in writing scripts for stage and screen also know the capabilities of that software, and it had already become quite assistive. Years ago. And those of us who taught university-level courses know plagiarism detectors well. The designers knew about AI automation within Figma and other AI tools like Uizard and Beautiful.ai. This is why cross-functional collaboration matters. We needed all those people in rooms talking to piece together what was happening in this moment of "advance" and identify where we saw innovation in this eruption of chatbots and where we didn't. Those among us who understand hacker culture also know a lot about bots and chatbots that the average person doesn't, which makes much of this far less startling.

Returning to that handsome devil Clippy and those of us in the working class who had to write memos and do spreadsheets for execs to survive: we know how powerful Clippy was even in 1997. Traditional relational databases with Boolean operators could deliver persuasive knowledge management swag.

Clippy even had the same eyebrows as Giles. Clippy wouldn't know Burkina Faso though. Then again.

Neither did Giles.


The Illusion of Intelligence Is a Form of Intelligence. I Guess?

It's the illusion of two fingers crushing the moon in a photograph. It's Alex Brites crushing the moon's head. He's crushing its head. Look crush, crush: https://www.pexels.com/photo/a-person-holding-the-moon-9582281/

I'm not naive enough to say that ChatGPT is Clippy. But. Well. It's Clippy, Grammarly, and a lot of back-end data being fed to it and then refined and modeled by human actors. Continuously. Without the human input, it degrades very quickly. There are other problems--confabulation, model autophagy, and model collapse. The pesky idea that some data is not yours to use. Some of these are inherent to the model and aren't likely to be resolved until a satisfying second-generation cognitive model emerges, and I think one will. Some of it is data quality and the intelligence and skill of the people pre-structuring it.

We've confused the illusion of intelligence with actual intelligence—all models are a reflection of the intelligence of:

the people who generated the data (human writers)
the people who structured it (human data scientists)
the architecture of the model (human solution architects)
the users of the chatbot (human users)

I'm generalizing, I realize. There are some things in detecting patterns that the machine does faster and better than humans. (Radiology, for one example, maybe? The data isn't all in yet.)

But only to the extent that human experts trained it and then interpret the answers. While lack of transparency prevails, I don't know for sure, and we are running our own experiments internally, to the extent we can without big gobs of sweaty VC cash. Still, I suspect it is highly likely that the model has to be maintained and fine-tuned to keep working consistently, and that it works better when live, breathing radiologists help do that work--not exploited people locked in a room for 20 hours a day at an offshore company.

Returning to Giles

My dad, whose nickname was Butch, was hopping mad and went around to the other guys screaming, "Re-match! Re-match!" But half of them went home. They probably died believing Giles was more intelligent than I was. The other half laughed, and one said,

"Doesn't matter, Butch. Giles won me my money, and I'm not giving it back."

I learned pretty early on that what you know doesn't matter as much as what you can persuade people you know. And if you make people money that they get to keep? They don't care that much. At all.

Just tinker until it works.

Or wins a bet for you.

Maybe it's the same thing?