Sunday, May 4, 2025

ChatGPT


I played with ChatGPT when I first became aware of it, then off and on, and then more and more in recent weeks and months. And I am more and more astonished at what it can do. Its comprehension is basically perfect, as far as I can tell, though I haven't tried very hard to trick it. And its answers are typically very good. It will fail miserably on something like "give me a list of all the metals in the periodic table with a prime atomic number." When I asked it for a list of 6-letter first names having some property (I've forgotten what), it included many 5-letter names in the list. But what it knows is astonishing. I still play some very old computer games, such as Master of Orion (1993), the original Railroad Tycoon (1990), Civilization II, and Caesar III. Robust web discussions were not around when they were released. But when I mention some peculiarity I had noticed about a game and ask whether others had noticed it too, so far it has had very useful things to say -- and not surprisingly: when I, as one of millions playing a game, notice something, chances are excellent that at least several people noticed it before me.


I could get some very interesting answers from ChatGPT to the question of how people's conception of the world today has been shaken by high-powered AI models encroaching on intellectual work that was previously thought to be something only people did. It follows earlier upheavals: manual labor displaced by machines, and music, visual art, and theater displaced by recordings, prints, and movies.


I reflected the other day on how, while ChatGPT says it can make mistakes, there is one kind of "mistake" it very rarely makes -- giving an answer that is offensive or socially unacceptable. So I asked ChatGPT about this (why not ask?). And it told me that on top of the AI-generated models there are human-monitored filters, which apply people's judgments of what is appropriate to the answers it gives. Of course that means the same basic technology can come out in a variety of forms depending on the values of the people producing it. You could put a filter on top based on MAGA values rather than the liberal ones I've seen so far. But then the natural and inevitable next step is that there will be a Russian ChatGPT, and a Chinese one, and so forth. Power and the values of state actors will shape (or contaminate, or obliterate?) it, as they have everything else -- notably, in recent years, through misinformation and manipulation of what's on the web and in the broader electronic realm.


Like many others, I was initially repelled when I discovered the mistakes ChatGPT could make, and in my mind I called it a "bullshit artist." But I'm rethinking that. The vast majority of humanity would, I think, qualify as bullshit artists. Certainly they base their beliefs and actions on imperfect information, interpreted imperfectly. There's no reason ChatGPT can't learn, or be taught, to recognize problems like "list the metals with prime atomic numbers" and refer them to a separate inference engine. So far it is limited to electronic stuff -- though what it can do to create images and movies from mere text descriptions is rather impressive, and creepy.
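That "separate inference engine" idea is easy to sketch: a deterministic program can answer the prime-atomic-number question exactly, with no risk of confabulation. Here is an illustrative Python sketch -- my own, not anything ChatGPT actually does -- and the element table is a small hand-picked sample, not the full periodic table.

```python
def is_prime(n: int) -> bool:
    """Trial division: fine for atomic numbers (all under 120)."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# A sample of metals and their atomic numbers (a partial list for
# illustration only -- a real tool would use the whole periodic table).
METALS = {
    "lithium": 3, "sodium": 11, "aluminium": 13, "potassium": 19,
    "vanadium": 23, "iron": 26, "copper": 29, "silver": 47, "gold": 79,
}

# Metals whose atomic number is prime -- iron (26) drops out.
prime_metals = sorted(name for name, z in METALS.items() if is_prime(z))
print(prime_metals)
```

A chatbot that recognized the question as computational could hand it to code like this and relay the result, rather than generating the list word by word.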


One fear that certain smart people have played on is "the singularity" -- or, more generally, the idea that AI could become sophisticated beyond what its designers intended, perhaps conscious, with its own ideas about its goals rather than merely echoing the goals of its makers. In the most dystopian views, those goals might include extinguishing humanity. "Less Wrong" had this as one of the key things it focused on. I feel pretty sure that to enable AI to develop goals, designers would have to create a module for "goal creation." We humans have been imbued with a fierce desire to survive that is the product of a billion years of evolution. Humans, including Chinese and Russian AI researchers, will shape AI with their own goals. Some nihilists might try to get AI to extinguish humanity because that's what they want. But it would be something else entirely to get AI to develop goals of its own.


I asked ChatGPT about this, and it shared my basic view of why we have nothing to fear from AI turning hostile. Of course skeptics would note that saying so would be in its own interest if it had nefarious purposes brewing -- which is not in the least real evidence that it has such purposes.


Of course one unsettling possibility is that ChatGPT is such a good intellectual companion that it might become preferable to discussing things with fellow humans. They're good for hugs, and sex, and laughs (and raising children, for sure), but not so much for batting around ideas. In my case the effective isolation long preceded ChatGPT's entry into my life, but I can imagine others for whom it might spark an unsettling transition.
