From Librarians to LLMs - Reflections on Our Changing Relationship with Information
Note: This is a short personal reflection on how I have witnessed the evolution of information access. You may disagree.
Introduction
The journey of how we access information has come full circle in many ways. From the guidance of knowledgeable librarians to the algorithmic precision of search engines (if you used them right), and now to the conversational intelligence of Large Language Models (LLMs), our methods have evolved while our fundamental needs seem to remain constant.
The Cyclical Nature of Information Access
Before the digital age, information was gatekept in physical repositories: printed encyclopedias sold door to door by itinerant sellers, and libraries staffed by expert navigators of knowledge - the librarians. They knew where to find everything. Using their systems, they could quickly navigate thousands of books and tell you exactly where in the library to find the one that contained the answer you were looking for. The dawn of the internet brought us Altavista, Yahoo, and MSN Messenger - platforms many people who came of age after the 2000s don't even know existed, much less mIRC (which still exists) or Microsoft Encarta (which was great!).
Website creation was done in raw HTML or with tools like Dreamweaver and Flash; there was no WordPress, Medium.com, or similar platform. There were many free hosts offering only static file hosting, where you could upload HTML files and images; some also offered PHP so you could try things out. People had to think, discuss, research, and read full books. Now I feel many are overwhelmed by the summary of the summary in an AI-generated video. People had to relate to others to get knowledge.
As technology advanced, I (we) witnessed the rise of Google, which fundamentally changed our relationship with information. "Google it" became a cultural directive, signaling the democratization of knowledge access. Google also used to have the motto "Don't be evil", which is long gone now.
Social media platforms like Instagram, TikTok, and X (formerly Twitter) later emerged as unexpected information sources, offering different perspectives than traditional search. They also changed our relationship with reality, attaching (vanity) metrics to our success and our relationships. Likes and shares became the currency, and the "real reality", for lack of better words, became unimportant. Much like some people do whatever it takes to get more money, social media created a space for people to do anything to get likes and shares, even at the cost of information. Disinformation has become the rule rather than the exception, as negativity and polemics get more likes and shares than the good old boring reality. And this got worse with ads, as that currency (likes and shares) turned into real money. But that is for another post. Let's get back to the main topic.
Now, we find ourselves in the age of LLMs - intelligent systems that once again personalize information retrieval through conversation rather than keywords. People ask ChatGPT or Perplexity or Claude or any other LLM out there. Simultaneously, static site generation is making a comeback. This evolution resembles a cycle - and if it is one, could ChatGPT be the old librarian? In the end, if you need proper information, you should research a little further than just asking an LLM - just as before, you would go to the library and search for a book, use an old Pentium desktop to search the index, or ask the librarian whether they knew of books on a specific topic. Once you have the source of the information, you need to put in the work to understand it. The more summarised the material, the less work you did. It is easy to think I understood something until I try to explain it to someone (or to write about it, which seems to have the same effect). If I just read a summary, I only get part of the information. Sometimes, the context in which information appears is as relevant as the information itself, and in a summary that context is mostly lost. (There are, of course, exceptions.)
But the way we search for information remains constant: we ask questions, whether to a librarian, a search box, or a chat.
The Human Element: Our Need for Conversation
What seems to remain consistent throughout these technological shifts is our persistent tendency to humanize our interaction with information. We didn't simply search card catalogs; we asked librarians for guidance. We often phrased Google searches as questions rather than optimized keywords. Now, we converse with LLMs in natural language, trying to explain what we want and hoping for the best.
This pattern suggests that we inherently seek not just information, but communication - a dialogue about knowledge rather than a one-way transfer. Each technological advancement has eventually bent toward accommodating this conversational instinct. But conversation is hard. It means you also have to be prepared to listen, and listening is harder. I think that is why these technological tools are so welcomed and spread so quickly: the listening element is not required. You just ask and ask and ask, without a care in the world for the other party, because it is not a person or because it has no emotions.
Would you behave the same way if ChatGPT were a real person? Or would you care a bit more about punctuation, typos, and how you write? Would you say please and thank you more often? With a person, you cannot just keep asking and receiving without giving. You would also not expect the other party to know everything, but you do with an LLM or a search engine. And you give those tools, as well as social media, credibility. I have heard a lot of people say "if it's on Instagram/Facebook/TikTok/<insert your platform here>, then it has to be true, or real." And the technology itself does not care; it just shows us the information it thinks we should see. Two interesting concepts here are the Spiral of Silence and the Negativity Bias, but that's also for another time.
New Abilities for New Technologies
Each era has demanded its own form of literacy. Understanding library classification systems gave way to crafting effective search queries, which has now evolved into the art of prompt engineering for LLMs. Access to information has consistently been accompanied by new skills needed to navigate these systems effectively, and by the unwanted responsibility and work of understanding how they work and when to use them.
Conclusion
While the mechanisms have evolved dramatically, our relationship with information seems to remain rooted in human - or humanized - connection. LLMs represent not just a technological advancement but a return to conversational knowledge-seeking - suggesting that perhaps the most effective information interfaces are those that adapt to our natural communication preferences rather than forcing us to adapt to them. A couple of examples are the web search features of ChatGPT, or the many Retrieval-Augmented Generation (RAG) approaches to "converse" or "chat" with our information, whether privately or in companies.
The future will bring new technologies, but if history is any indication, they will likely continue to evolve toward satisfying our deeply human desire to learn through dialogue and narrative.