Sunday, May 4, 2025

One way Google Search changed life


This isn't exactly new, and perhaps I've written about it before, but writing about ChatGPT motivates me to write about it again.


As someone who has devoted a lot of time to learning things, I became aware of how the availability of huge amounts of data that can be searched efficiently by what we called "googling" changed things pretty profoundly.


We used to spend a lot of time learning and memorizing things, because if you wanted easy access to them later you had to; the particular book or other source you were using often wouldn't be available again. Even if it was on your bookshelf, finding the exact relevant passage would be very difficult. Since googling became available, you can often say, "If I ever really need to know that again, I'll just search for it." Or perhaps you don't really need to learn it when you encounter it, but know you can find it and learn it if and when it becomes relevant.


Another thing people like me used to do was share our wisdom: tell people, or write online (in recent years mostly the latter), what you remembered. Googling changed that in a couple of ways. First came the humbling experience of simply asking Google about what you thought you remembered, and finding that you hadn't gotten some key things right. After a while that led me to check before sharing wisdom. That in turn often led to not sharing the wisdom at all but just providing a link to where it was written up. And finally, sometimes, to not even providing the link, because you know the reader knows they can find out just by searching for it themselves. This particularly affected the case where in the old days you would have explained some key background, like the definition of a key term or phrase. No need to write it up yourself, no need to provide a link, since you know astute readers can search for it themselves. (If you're writing for a large audience, you can provide hyperlinks to make it easier, but for smaller audiences it's typically not worth it.)


So while it might once have been a source of satisfaction or pride to share wisdom I had learned, that rarely happens any more. But sometimes I'll still do it, partly because my learning on this score is still incomplete, and occasionally explicitly, by saying I enjoy telling the story myself -- if I can hope my readers will indulge me.


So what I write is guided by what I know other people can search for, and maybe more precisely by what I believe they know they can search for. The more a question can be defined by a simple unique phrase, the more easily it can be found, and readers know that, and I know they know that. If you refer to Noam Chomsky, you know anyone can find him with a simple search. If the person in question is, on the other hand, John Smith, you'd better provide more information than that -- but you still likely won't have to say much substantive about John Smith yourself.


ChatGPT


I played with ChatGPT when I first became aware of it, then off and on, and then more and more in recent weeks and months. And I get more and more astonished at what it can do. Its comprehension is basically perfect, as far as I can tell, though I haven't tried very hard to trick it. And its answers are typically very good. It will fail miserably on something like "give me a list of all the metals in the periodic table with a prime atomic number". When I asked it for a list of 6-letter first names having some property (I've forgotten what), it included many 5-letter names in the list. But what it knows is astonishing. I still play some very old computer games, such as Master of Orion (1993), the original Railroad Tycoon (1990), Civilization II, and Caesar III. Robust web discussions were not around when they were released. But when I mention some peculiarity I had noticed about a game and ask if others had noticed it, so far it has had very useful things to say -- and not surprisingly: when I, as one of millions playing a game, notice something, chances are excellent that at the very least several people noticed it before me.


I could get some very interesting answers from ChatGPT to the question of how people's conception of the world today has been shaken by high-powered AI models encroaching on intellectual work that was previously thought to be something only people could do. It follows earlier changes, as when manual labor was displaced by machines, or when live music, visual art, and theater were displaced by recordings, prints, and movies.


I reflected the other day on how, while ChatGPT says it can make mistakes, there is one kind of "mistake" it very rarely makes -- giving an answer that is offensive or socially unacceptable. So I asked ChatGPT about this (why not ask?). And it told me that on top of the underlying AI models there are human-monitored filters, which apply people's values about what's appropriate to the answers that are given. Of course that means the same basic technology can come out in a variety of forms based on the values of the people producing it. You could put a filter on top based on MAGA values rather than the liberal ones I've seen so far. And then the natural and inevitable next step is that there will be a Russian ChatGPT, and a Chinese one, and so forth. Power and the values of state actors will shape (or contaminate, or obliterate?) it as they have everything else, notably in recent years through misinformation and manipulation of what's on the web and in the broader electronic realm.
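To make that layering concrete, here is a toy sketch of the idea as I understand it -- a base model wrapped in a values-based filter. Everything in it is hypothetical (the function names, the keyword-list "policy"); real systems use trained, human-tuned classifiers and far more elaborate policies. The point is just that swapping the policy swaps the values of the chatbot the public sees.

```python
# Toy illustration of a moderation layer sitting on top of a generative model.
# All names here are hypothetical; the keyword list stands in for what is
# really a trained, human-tuned classifier.

BLOCKED_TERMS = {"offensive_example", "unacceptable_example"}  # stand-in policy

def base_model(prompt: str) -> str:
    """Stand-in for the underlying generative model."""
    return f"Generated answer to: {prompt}"

def violates_policy(text: str) -> bool:
    """Stand-in for a values-based content classifier."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def public_chatbot(prompt: str) -> str:
    """What the public actually talks to: generation plus a filter.
    Swapping in a different policy yields a differently-valued chatbot."""
    answer = base_model(prompt)
    if violates_policy(prompt) or violates_policy(answer):
        return "I can't help with that."
    return answer

print(public_chatbot("What is the capital of France?"))
# -> Generated answer to: What is the capital of France?
```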


Like many others, I was initially repelled by the mistakes ChatGPT could make, and in my mind called it a "bullshit artist". But I'm rethinking that. The vast majority of humanity would, I think, qualify as "bullshit artists". Certainly they base their beliefs and actions on imperfect information, interpreted imperfectly. And there's no reason ChatGPT can't learn, or be taught, to recognize problems like "list the metals with prime atomic numbers" and refer them to a separate inference engine, as sketched below. So far it is limited to the electronic realm -- though what it can do to create images and movies from mere text descriptions is rather impressive, and creepy.
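A minimal sketch of what such a delegated engine might look like: a deterministic routine that actually computes the answer instead of guessing at it. The element table here is deliberately abbreviated; a real engine would carry the full periodic table.

```python
# Hypothetical "inference engine" for the prime-atomic-number-metals question:
# a deterministic computation a chatbot could hand the problem off to.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

# Abbreviated table of metals and their atomic numbers (a full version would
# list every metal in the periodic table).
METALS = {
    "Lithium": 3, "Sodium": 11, "Aluminium": 13, "Potassium": 19,
    "Vanadium": 23, "Iron": 26, "Copper": 29, "Gallium": 31,
    "Silver": 47, "Gold": 79, "Lead": 82,
}

def metals_with_prime_atomic_number() -> list[str]:
    return sorted(name for name, z in METALS.items() if is_prime(z))

# From this abbreviated table: Li(3), Na(11), Al(13), K(19), V(23),
# Cu(29), Ga(31), Ag(47), Au(79) -- iron (26) and lead (82) drop out.
print(metals_with_prime_atomic_number())
```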


One fear that certain smart people have played on is of "the singularity", or more generally the idea that AI could become sophisticated beyond what its designers intended, perhaps conscious, with its own ideas about its goals rather than merely echoing the goals of its makers. In the most dystopian views, those goals might include extinguishing humanity. LessWrong had this as one of the key things it focused on. I feel pretty sure that for AI to develop goals, its designers would have to create a module for "goal creation". We humans have been imbued with a fierce desire to survive that is the product of a billion years of evolution. Humans, including the Chinese and Russian AI researchers, will shape AI with their own goals. Some nihilists might try to get AI to extinguish humanity because that's what they want. But it would be something else entirely to get AI to develop goals of its own.


I asked ChatGPT about this, and it shared my basic view of why we have nothing to fear from AI turning hostile. Of course, skeptics would note that saying so would be in its own interest if it had nefarious purposes brewing. Which is not in the least actual evidence that it has such purposes.


Of course, one unsettling possibility is that ChatGPT is such a good intellectual companion that it might become preferable to discussing things with fellow humans. Humans are good for hugs, and sex, and laughs (and raising children, for sure), but not so much for batting around ideas. In my case the effective isolation long preceded ChatGPT's entry into my life, but I can imagine others for whom it might spark an unsettling transition.