From Librarians to LLMs - Reflections on Our Changing Relationship with Information

Note: this is a short personal reflection on how I have witnessed the evolution of information access. You may disagree.

Introduction

The journey of how we access information has come full circle in many ways. From the guidance of knowledgeable librarians to the algorithmic precision of search engines (if you used them right), and now to the conversational intelligence of Large Language Models (LLMs), our methods have evolved while our fundamental needs seem to remain constant.

The Cyclical Nature of Information Access

Before the digital age, information was gatekept in physical repositories - real printed encyclopedias sold by itinerant sellers and libraries staffed by expert navigators of knowledge: the librarians. They knew where to find everything. Using their systems, they could quickly navigate thousands of books and tell you exactly where in the library you could find the book that contained the answer you were looking for. The dawn of the internet brought us Altavista, Yahoo, and MSN Messenger - platforms many people born in the 2000s don't even know existed, much less mIRC (which still exists) or Microsoft Encarta (which was great!).

Website creation was done through raw HTML or tools like Dreamweaver and Flash, with no WordPress, Medium.com or similar platforms. There were many free hosts offering only static file hosting where you could upload HTML files and images, with some also offering PHP so you could try things out. People had to think, discuss, research, and read full books. Now I feel many are overwhelmed by the summary of the summary in an AI-generated video. People had to relate to others to get knowledge.

As technology advanced, I (we) witnessed the rise of Google, which fundamentally changed our relationship with information. "Google it" became a cultural directive, signaling the democratization of knowledge access. Google also used to have the motto "Don't be evil", which is long gone now.

Social media platforms like Instagram, TikTok, and X (formerly Twitter) later emerged as unexpected information sources, offering different perspectives than traditional search. They also changed our relationship with reality, adding (vanity) metrics to our success and relations. Likes and shares became the currency, and the "real reality", for lack of a better word, became unimportant. Much like some people do whatever it takes to get more money, social media created a space for people to do anything to get likes and shares, even at the cost of information. Disinformation has become the rule more than the exception, as negativity and polemic get more likes and shares than the good old boring reality. And this got worse with ads, as that currency (likes and shares) actually became real money. But this is for another post. Let's go back to the main topic.

Now, we find ourselves in the age of LLMs - intelligent systems that once again personalize information retrieval through conversation rather than keywords. People ask ChatGPT or Perplexity or Claude or any other LLM out there. Simultaneously, static site generation is making a comeback. This evolution resembles a cycle - and if it is one, could ChatGPT be the old librarian? In the end, if you need proper information, you should research a little further than just asking an LLM - just as before, when you would go to the library and search for a book, use an old Pentium desktop to search the index, or ask the librarian if they knew of books related to specific topics. Once you get the source of the information, you need to put in the work and understand it. The more summarised it is, the less you worked. It is easy to think I understood something until I try to explain it to someone (or to write about it, which seems to have the same effect). If I just read a summary, I only get part of the information. Sometimes, the context in which that information appears is as relevant as the information itself, and in a summary that context is mostly lost (there are, of course, exceptions).

But the way we search for information remains constant: we ask questions, whether to a librarian, a search box, or a chat.

The Human Element: Our Need for Conversation

What seems to remain consistent throughout these technological shifts is our persistent tendency to humanize our interactions with information. We didn't simply search card catalogs; we asked librarians for guidance. We often phrased Google searches as questions rather than optimized keywords. Now, we converse with LLMs in natural language, trying to explain what we want and hoping for the best.

This pattern suggests that we inherently seek not just information, but communication - a dialogue about knowledge rather than a one-way transfer. Each technological advancement has eventually bent toward accommodating this conversational instinct. But conversation is hard. It means you also have to be prepared to listen, and listening is harder. I think that is why these technological tools are so welcomed and expand so quickly: the listening element is not required. You just ask and ask and ask, without a care in the world about the other party, just because it is not a person or because it has no emotions.

Would you behave the same way if ChatGPT were a real person? Or would you care a bit more about punctuation, typos, and how you write? Would you say please and thank you more often? With a person, you cannot just keep asking and receiving without giving. You would also not expect the other party to know everything, but you do with an LLM or a search engine. And you give those tools, as well as social media, credibility. I have heard a lot of people say "if it's on Instagram/Facebook/TikTok/< Insert your platform here> then it has to be true, or real". And the technology itself does not care; it just shows the information it thinks we should see. Two interesting concepts here are the Spiral of Silence and the Negativity Bias, but that's also for another time.

New Abilities for New Technologies

Each era has demanded its own form of literacy. Understanding library classification systems gave way to crafting effective search queries, which has now evolved into the art of prompt engineering for LLMs. Access to information has consistently been accompanied by new skills needed to navigate these systems effectively, and by the unwanted responsibility and work of understanding how they work and when to use them.

Conclusion

While the mechanisms have evolved dramatically, our relationship with information seems to remain rooted in human (or humanized) connection. LLMs represent not just a technological advancement but a return to conversational knowledge-seeking - suggesting that perhaps the most effective information interfaces are those that adapt to our natural communication preferences rather than forcing us to adapt to them. A couple of examples are the web search features of ChatGPT, or the many Retrieval-Augmented Generation (RAG) approaches to "converse" or "chat" with our information, be it privately or in companies.

The future will bring new technologies, but if history is any indication, they will likely continue to evolve toward satisfying our deeply human desire to learn through dialogue and narrative.

The Power of Defaults: How Unconscious Choices Shape Our Lives

I've been thinking a lot about "defaults" lately: default settings, default prices, default processes, default habits, default choices. I find myself falling back to old default behaviors more often than I'd like to admit.

Take food, for example. I can try to eat healthy by avoiding junk food, eating on schedule, and consuming more fruits and vegetables. But if I don't have the proper snacks or ingredients at the right time, I default to eating cookies (if there are any at home) or buying bread with whatever toppings are available when I'm hungry.

This pattern applies to almost anything: when adding extra effort or thinking becomes necessary, I revert to the default. The worst part? The default isn't always mine—it could be a family default, a school default, or a friends' default. A default is whatever doesn't require extra effort. It's what we do almost without thinking, what comes naturally. It isn't necessarily good or bad; it's simply what has always been there.

When we try to change our defaults, at least two significant efforts are required:

  1. Plan the new default
  2. Make it so that doing the new default is easier than doing the old one. This also means maintaining the new default.

For healthy eating, if fruit is already washed and portioned, while cookies are either absent or difficult to reach, you'll naturally eat more fruit.

This concept is elaborated in the book "Nudge", which includes a school experiment where placing healthy snacks at eye level and junk food above resulted in children eating more fruits. Something as simple as positioning can make a tremendous difference. Imagine what you could accomplish if you deliberately designed your default choices!

But there's a catch (there's always a catch): you need to plan and maintain the change, the new default. The latter is most challenging. Planning is difficult but not as hard as maintaining. You might go to the gym daily for the first two weeks when motivation runs high, but after that it requires a conscious choice. If your gym bag isn't ready, if you forgot to prepare snacks, if you didn't block time in your calendar—you won't follow through. It is easier to default to the old choice, the old decision.

Maintenance is hard, and there are already many books that elaborate on techniques to create habits (similar to creating new defaults) such as "Atomic Habits" by James Clear and "The power of habit" by Charles Duhigg.

Note to self: I should re-read those books.

But what if we simply think about defaults and eliminate other options? What if we remove cookies from the shopping list, buy multiple sets of gym clothes, and purchase ready-to-serve healthy snacks? Would we still face the same issues?

What if we set everything up to make what we want easier to achieve?

Consider learning guitar: If my guitar is hidden in the closet, and starting lessons requires booting a computer that takes 10 minutes, I won't play. But if the friction is reduced to seconds, I'll have no excuse, and playing for 5 minutes becomes easier than not playing. Imagine if my computer powers on automatically at 5 PM, loads my learning website, and my guitar sits right beside it. Even if I wanted to play a game instead, starting the lesson would be easier because everything is already prepared (playing the game would still require more clicks and effort).

What about zombie scrolling versus learning something new? I recently reinstalled Instagram on my phone (because of reasons) and quickly realized it would consume significant time. So I adjusted the settings to block the app after 30 minutes. After that time expires, Instagram becomes inaccessible. That's helpful! (Although I'm contemplating removing it entirely again.)

I've been thinking extensively about defaults—ultimately, I'm thinking about my choices. Each default, each choice, conscious or not, has consequences that build upon each other. Eating unhealthily, remaining sedentary, and scrolling mindlessly all day negatively impact health and life. And the time wasted is unimaginable. I get angry just thinking about it.

After publishing my latest book, some people asked how I managed it with family, kids, and work responsibilities (it's a long book after all). It all comes down to choices, I said, after giving some thought (and I still need to improve my health-related choices a lot, btw). I'm almost certain that if you redirect the time spent on social media and TV toward writing a book or learning something new, you'd be amazed by what you could achieve in just a few months.

What defaults are shaping your life? Which ones could you redesign?

D2 (d2lang) Python Wrapper: Use D2Lang in Python Projects

TLDR: I created a Python wrapper for D2 diagram language that bundles D2's binaries, allowing you to render D2 diagrams in your Python projects without manual binary installation or subprocess management. You can install it with pip.

Context

I needed a way to render D2 diagrams programmatically in Python, specifically for this Nikola static site and other Python projects. While D2 is a powerful diagram scripting language (which I now love) with a CLI tool written in Go, installing the binary manually in each environment wasn't practical (and, as far as I know, not possible at all in Cloudflare Pages). After searching for "D2 lang Python" and "D2 diagram Python wrapper" without finding a solution that bundled the binaries or could render D2 code, I created d2-python-wrapper. It's a thin Python layer around D2's official binary, distributed as a Python package that includes the necessary executables for each platform.

I found D2 while exploring options for adding diagrams to my digital marketing book and I wanted to use it to add a diagram in this article about Tailscale and LXC containers. If you're familiar with Mermaid or PlantUML, D2 is similar but with a more intuitive syntax and (in my opinion) better-looking output.

Here's where it got interesting: I couldn't install D2's CLI tools directly in Cloudflare Pages. Rather than depend on third-party services (I tried Kroki), I decided to create a wrapper that bundles the binaries. That's how d2-python-wrapper was born, and now you can install it directly from PyPI.

What I Built

I wanted something simple - no complex subprocess management or command-line hassles. Just clean Python code like this:

from d2_python import D2

d2 = D2()
d2.render("x -> y", "output.svg")

Instead of installing D2 separately, I can just use pip to install the package and have everything ready to go. I can also pin a specific version of the package, which bundles a specific version of the d2 binary.

What You Get

When you install d2-python-wrapper, you get:

  • A simple Python API for D2 diagram rendering
  • The D2 binaries included in the package
  • Automatic platform detection (Linux, Windows, MacOS)
  • Multiple output formats (SVG, PNG, PDF)
  • Theme customization options
  • Layout engine selection
  • Proper cleanup of temporary files

Basically, an interface in Python for d2 bin 😁

Real-World Usage

I'm currently working on a Nikola plugin for D2 diagrams (not yet released). Here's how I use it in my site with a shortcode:

direction: down
Internet -> Server
Server -> Database

If you're interested in using D2 with Nikola, keep an eye out for the plugin release or just let me know on Mastodon or LinkedIn. In the meantime, you can still use d2-python-wrapper directly in your Python projects.

How to Get Started

  1. First, install via pip:
pip install d2-python-wrapper
  2. Then use it in your code:
from d2_python import D2

d2 = D2()

# From string input
d2.render("x -> y", "output.svg")

# Or with more options
d2.render("diagram.d2", "output.svg", 
          theme="1",
          format="svg"
          )

Testing Approach

I wanted to make sure the wrapper produces exactly the same output as the original D2 binary. Here's one of the tests that verifies this:

def test_wrapper_vs_binary_output(d2, tmp_path):
    diagram = "x -> y"
    input_file = tmp_path / "input.d2"
    input_file.write_text(diagram)
    wrapper_output = tmp_path / "wrapper.svg"
    binary_output = tmp_path / "binary.svg"

    # Render once through the wrapper and once with the bundled binary directly
    d2.render(diagram, str(wrapper_output))
    subprocess.run([d2.binary_path, str(input_file), str(binary_output)], check=True)

    # Both outputs should be byte-identical
    assert wrapper_output.read_bytes() == binary_output.read_bytes()

Build Automation

I'm using GitHub Actions to automatically bundle D2 binaries for Linux, Windows, and MacOS. While I've primarily tested this on Linux, the setup should support all major platforms:

- name: Download and extract D2 releases
  run: |
    mkdir -p d2_python/bin/{linux,win32,darwin}
    # Platform-specific binary downloads and setup...

If you're using Windows or MacOS and want to help test, feel free to give it a try and let me know how it goes!

What's Next

I built this primarily for my own use with Nikola and other Python projects, but I'm open to expanding it. If you have ideas for new features or find any bugs, please open an issue on GitHub.

When to Use This

You might find d2-python-wrapper useful if you're:

  • Generating static sites
  • Automating documentation
  • Creating reports
  • Working on any Python project where you need programmatic diagram rendering

I'm releasing this under the Mozilla Public License Version 2.0 (same as D2 itself), so feel free to use and modify it for your projects.

How to Access Multiple LXC Containers Through a Single Tailscale Connection

Context

After setting up Tailscale in an AlmaLinux LXC container, I wanted to access other containers in the same Proxmox host (subnet) without installing Tailscale on each one. This can be achieved by advertising routes through the container that already has Tailscale installed.

My setup is as follows (simplified for this example):

  • I have a Proxmox instance with 3 VMs / LXC containers
  • I use opnSense as a firewall for the internal network (IP: 10.0.10.1)
  • LXC_1 has Tailscale installed (IP: 10.0.10.5)
  • LXC_2 has no Tailscale (IP: 10.0.10.6)
  • I have DESKTOP_1 at home, in another location, that needs to access LXC_2.

Only the opnSense VM is accessible from outside the internal network. Here is a simple diagram to visualize this:

[Diagram: DESKTOP_1 (with Tailscale, at home) connects over the Tailscale VPN to LXC_1 (10.0.10.5) on the hosted Proxmox server. LXC_1 advertises the internal subnet route, so DESKTOP_1 can also reach LXC_2 (10.0.10.6), which has no Tailscale. The opnSense VM (10.0.10.1) is the only machine exposed to the public internet.]

Steps

The following steps allowed me to access LXC_2 and its services without installing tailscale on it.

1. Advertise Routes on Tailscale Host

On the container with Tailscale installed, run:

tailscale up --advertise-routes=10.0.10.0/24  # Replace with your subnet

You might see warnings about IPv6 forwarding and UDP GRO forwarding. While not critical, you can optimize these later.
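Those warnings matter if you want the subnet router to actually forward traffic. Tailscale's subnet router documentation recommends enabling IP forwarding via a small sysctl config fragment (in an unprivileged LXC you may need to apply this on the Proxmox host instead):

```
# /etc/sysctl.d/99-tailscale.conf - enable forwarding for the subnet router
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
```

Apply it with `sysctl -p /etc/sysctl.d/99-tailscale.conf` or reboot the container.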

2. Enable Route in Tailscale Admin Console

  1. Go to the Tailscale admin console
  2. Find your machine (it should show a "subnets" label)
  3. Click the three dots menu
  4. Select "Edit route settings"
  5. Enable the new subnet route
  6. Save changes

3. Accept Routes on Client Machines

On your client machine (like your desktop), run:

sudo tailscale up --accept-routes

Testing the Connection

You should now be able to:

  • Ping other containers in the subnet
  • Access services running on other containers
  • Use SSH to connect to other containers

For example: ping 10.0.10.6 should work from your client machine, even though that container doesn't have Tailscale installed.
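If ping works but a service doesn't, a quick TCP check helps separate routing problems from service problems. This is my own small Python sketch (the host and port in the usage comment come from the example setup; adjust them to yours):

```python
import socket

def service_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check SSH on LXC_2 through the advertised route
# print(service_reachable("10.0.10.6", 22))
```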

Notes

  • This approach requires only one Tailscale instance for multiple containers
  • All containers must be in the same subnet
  • The container running Tailscale acts as a gateway


How to Install Tailscale in a Proxmox CE 8.2 LXC Container (AlmaLinux 9)

Context

I recently needed to set up Tailscale in an AlmaLinux 9 LXC container running on my Proxmox 8.2 server. Following the official instructions from Tailscale's RHEL 9 guide and even trying their Linux install script, I ran into some issues. The main problem turned out to be missing TUN device support in the LXC container.

UPDATE: this is actually documented here under unprivileged LXC. Thanks to @echobot on Reddit for the info.

The Initial Problem

After following the installation steps, when I tried to start Tailscale, I got this error:

failed to connect to local tailscaled; it doesn't appear to be running (sudo systemctl start tailscaled ?)

Checking the logs with journalctl -u tailscaled -n 50 --no-pager, I found:

is CONFIG_TUN enabled in your kernel? `modprobe tun` failed with: modprobe: FATAL: Module tun not found in directory /lib/modules/6.8.8-4-pve

What Didn't Work

My first attempt was to use the older Proxmox method of enabling TUN support by adding:

features: nesting=1,tun=1

to the container configuration. This resulted in:

TASK ERROR: format error tun: property is not defined in schema and the schema does not allow additional properties

This didn't work because Proxmox 8.x handles TUN/TAP support differently than previous versions.

What Actually Worked

1. Configure TUN Support

I had to edit the LXC container configuration on the Proxmox host:

nano /etc/pve/lxc/<container-id>.conf

And add these lines:

lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file

2. Create TUN Device on Host

In my case my system already had /dev/net/tun. I checked this with ls -l /dev/net/tun.

> ls -l /dev/net/tun
crw-rw-rw- 1 root root 10, 200 Jul 27 23:02 /dev/net/tun

If your system does not have it, create the TUN device on the Proxmox host:

mkdir -p /dev/net
mknod /dev/net/tun c 10 200
chmod 666 /dev/net/tun

3. Restart Container

pct restart <container-id>

or use the WebUI for that.

4. Install Tailscale

After fixing the TUN support, I could properly install Tailscale in the container (to be fair, I could always install it; I just couldn't run it):

# Add Tailscale repository
dnf config-manager --add-repo https://pkgs.tailscale.com/stable/almalinux/9/tailscale.repo

# Install Tailscale
dnf install tailscale

# Start and enable the service
systemctl enable --now tailscaled

# Connect to Tailscale network
tailscale up

Verifying Everything Works

To make sure everything was working correctly, I ran:

# Check service status
systemctl status tailscaled

# Activating tailscale asks for auth and provides a link. Just follow the instructions.
tailscale up

What is tun anyway?

TUN/TAP provides packet reception and transmission for user space programs. It can be seen as a simple Point-to-Point or Ethernet device, which, instead of receiving packets from physical media, receives them from user space program and instead of sending packets via physical media writes them to the user space program. -- https://docs.kernel.org/networking/tuntap.html

TUN, short for "network TUNnel," is a virtual network device that operates at the IP level. It acts like a virtual network card, allowing software to send and receive network packets as if it were a physical network interface. TUN is needed for creating virtual private networks (VPNs) and other network tunneling applications, as it provides a way for programs to interact directly with network traffic at a low level.

In the context of running Tailscale in an LXC container, TUN is required because Tailscale uses it to create its secure network tunnel. LXC containers, by default, don't have access to create these virtual network devices. To run Tailscale successfully, the LXC container needs to be configured to allow the creation and use of TUN devices. This involves modifying the container's configuration to grant it the necessary permissions to access and manipulate TUN devices, enabling Tailscale to establish its encrypted network connections and route traffic securely.
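To check quickly from inside the container whether the device node is present with the expected numbers (a character device with major 10, minor 200, as used in the configuration above), here is a small Python helper of my own - a sketch, not part of Tailscale or Proxmox:

```python
import os
import stat

def tun_available(path: str = "/dev/net/tun") -> bool:
    """Return True if path exists and is the TUN character device (10, 200)."""
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return False
    return (stat.S_ISCHR(st.st_mode)
            and os.major(st.st_rdev) == 10
            and os.minor(st.st_rdev) == 200)
```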


How to recover and update Proxmox 8 firewall configuration in SQLite when you locked yourself out

TLDR

The firewall config is not in /etc/pve/firewall/cluster.fw but in an SQLite database in /var/lib/pve-cluster/config.db. You need to reboot your system into rescue mode, change enable: 1 to enable: 0, and reboot into Proxmox.

Context

I made a noob mistake and locked myself out of my server. Luckily Hetzner allows me to reboot into rescue mode. This is what happened and how I managed to get my access back.

In other words, this tutorial is for situations where you've accidentally locked yourself out of your Proxmox server due to a firewall misconfiguration (like I did). In my case, I enabled the firewall (enable: 1) with an incorrect configuration, preventing access to the server. The solution involves booting into a rescue system, mounting the Proxmox partition, and manually editing the firewall configuration in the SQLite database.

Prerequisites

  • Access to a rescue system (e.g., Hetzner Rescue System)
  • Basic knowledge of Linux commands and SQLite, although you can copy and paste these commands and it should work.

Disclaimer: I am not responsible for data loss or anything else for that matter. The following commands worked for me and nothing bad happened. I put them here in case they help someone else, as I had to research for a few hours before solving this (especially the issue of not finding the config).

Step 1: Boot into Rescue System

Boot your server into the rescue system provided by your hosting provider (e.g., Hetzner Rescue System).

Step 2: Identify the Proxmox Partition

Use the lsblk command to list all block devices:

lsblk

Identify the partition where Proxmox is installed. It's often part of a RAID array or LVM setup.

In my case the output was like this:

loop0            7:0    0   3.1G  1 loop
nvme1n1        259:0    0 476.9G  0 disk
├─nvme1n1p1    259:1    0   256M  0 part
│ └─md0          9:0    0 255.9M  0 raid1
├─nvme1n1p2    259:2    0     1G  0 part
│ └─md1          9:1    0  1022M  0 raid1
└─nvme1n1p3    259:3    0 475.7G  0 part
  └─md2          9:2    0 475.6G  0 raid1
    ├─vg0-root 253:0    0    64G  0 lvm
    ├─vg0-swap 253:1    0     8G  0 lvm
    └─vg0-data 253:2    0   402G  0 lvm
nvme0n1        259:4    0 476.9G  0 disk
├─nvme0n1p1    259:5    0   256M  0 part
│ └─md0          9:0    0 255.9M  0 raid1
├─nvme0n1p2    259:6    0     1G  0 part
│ └─md1          9:1    0  1022M  0 raid1
└─nvme0n1p3    259:7    0 475.7G  0 part
  └─md2          9:2    0 475.6G  0 raid1
    ├─vg0-root 253:0    0    64G  0 lvm
    ├─vg0-swap 253:1    0     8G  0 lvm
    └─vg0-data 253:2    0   402G  0 lvm

There I saw that I should mount vg0, and that it was in the RAID array md2.

Step 3: Assemble RAID Array (if applicable)

If your Proxmox partition is part of a RAID array, assemble it:

mdadm --assemble --scan

Step 4: Activate Volume Group

Activate the volume group (usually named vg0 in Proxmox):

vgchange -ay vg0

Step 5: Mount the Proxmox Partition

Create a mount point and mount the Proxmox root partition:

mkdir /mnt/proxmox
mount /dev/vg0/root /mnt/proxmox

Verify the mount:

ls /mnt/proxmox/

Here you should see some files and directories.

Step 6: Locate the Configuration Database

The Proxmox configuration is stored in an SQLite database. Locate it:

ls -la /mnt/proxmox/var/lib/pve-cluster

You should see a file named config.db.

Step 7: Access the SQLite Database

Open the SQLite database:

sqlite3 /mnt/proxmox/var/lib/pve-cluster/config.db

sqlite3 is already installed in the rescue system of Hetzner. You need to install it if it's not available in your system.

Step 8: Check the Current Firewall Configuration

View the current firewall configuration:

SELECT * FROM tree WHERE name = 'cluster.fw';

Note: Initially I didn't know where this entry was, or whether there was one at all, so I first listed the entries in the tree table:

SELECT name FROM tree;

Step 9: Update the enable Option

Change the enable option from 1 to 0 to disable the firewall:

UPDATE tree 
SET data = replace(data, 'enable: 1', 'enable: 0') 
WHERE name = 'cluster.fw';
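Before touching the real config.db, you can rehearse the same statement on a throwaway database (or on a copy of config.db) with Python's sqlite3 module. The minimal table below only mimics the name and data columns used by the queries in this post, not the real schema:

```python
import sqlite3

# Throwaway in-memory database mimicking the columns we query
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tree (name TEXT, data TEXT)")
conn.execute("INSERT INTO tree VALUES ('cluster.fw', '[OPTIONS]\n\nenable: 1\n')")

# Same UPDATE as Step 9
conn.execute(
    "UPDATE tree SET data = replace(data, 'enable: 1', 'enable: 0') "
    "WHERE name = 'cluster.fw'"
)

# Verify the change, as in Step 10
(data,) = conn.execute(
    "SELECT data FROM tree WHERE name = 'cluster.fw'"
).fetchone()
print(data)
```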

Step 10: Verify the Change

Confirm that the change was made successfully:

SELECT * FROM tree WHERE name = 'cluster.fw';

Step 11: Exit SQLite

Exit the SQLite prompt:

.quit

Step 12: Unmount and Reboot

Unmount the Proxmox partition and reboot the server:

umount /mnt/proxmox
reboot

Important Notes

  • Disabling the Firewall: This process disables the firewall cluster-wide. Re-enable it after properly configuring it once you regain access.
  • Security Risks: A disabled firewall may expose your system to security risks. You have been warned.
  • Backup: Always create backups before making significant changes. I have my proxmox configs in a git repository for reference.
  • Alternative Methods: When possible, use the Proxmox web interface or CLI tools for configuration changes. At least that's what I've read. I like to use config files, but I also locked myself out of my server.

References

Several sites, but I can no longer remember all of them.

Some of the sites I visited are:

  • https://forum.proxmox.com/threads/ssh-connection-no-web-interface.110702/
  • https://www.reddit.com/r/Proxmox/comments/13hyn0y/how_to_secure_proxmox_web_ui/
  • https://eulenfunk.readthedocs.io/en/stable/supernode01.html
  • and many more ...

How to manage multiple AsciiDoc Chapter Files: Copy, Merge, and Customize

TL;DR

Manage multiple AsciiDoc chapter files efficiently with a customizable bash script that supports copying and merging chapters in different languages. This script allows you to concatenate, copy, or merge chapter files with flexible options for language codes, prefixes, output destinations and with recursive file search.

Context

I work a lot with AsciiDoc files. I write documentation, books and even part of this site with it. Sometimes I need to merge some files or copy their content to a new file. This is the case with the new novel I'm writing.

Because of reasons, I decided to write it in 3 languages in parallel: Spanish, German, and English.

I use include::file.adoc[] to manage multiple files and merge them into a final version. This way I can write each chapter in each language in different files, and then include them in a final file as follows:

= title of the book
:author: Diego Carrasco
// here comes more metadata and config stuff.

include::chapter_00.adoc[]

include::chapter_01.adoc[]

include::chapter_02.adoc[]

include::chapter_03.adoc[]

I'm already at chapter 15, which means I have 45 chapter files (15 per language) plus the main ones.

Again, because of reasons, I needed to copy the content of all the chapters of a specific language to my clipboard - or at least to a new file.

To streamline this, I made a bash script that helps me automate the process of concatenating, merging, or copying chapter files. This script is designed to handle these tasks while providing support for different languages, prefixes, and output formats.

As I tend to forget this stuff, I'm making a note here.

NOTE: This script does actually work on any kind of text file, just change the prefix 😁

Steps

1. The Bash Script

Create a new file manage_chapters.sh and paste the following:

DISCLAIMER: I am in no way responsible for any damage to your files. This script is provided as is. Always read the script before using it.

#!/bin/bash

usage() {
    echo "Usage: $0 [-c xx] [-l yy] [-p prefix] [-o output_file] [-r]"
    echo "  -c xx : Specify chapter numbers (e.g., '01 02 03' or '*' for all)"
    echo "  -l yy : Specify language code (e.g., 'es' for Spanish)"
    echo "  -p prefix : Specify file prefix (default: 'chapter_')"
    echo "  -o output_file : Merge into a file instead of copying to clipboard"
    echo "  -r : Search recursively in subdirectories"
    exit 1
}

chapters=""
language=""
prefix="chapter_"
output_file=""
recursive=""

while getopts "c:l:p:o:r" opt; do
    case $opt in
        c) chapters="$OPTARG" ;;
        l) language="$OPTARG" ;;
        p) prefix="$OPTARG" ;;
        o) output_file="$OPTARG" ;;
        r) recursive="-r" ;;
        *) usage ;;
    esac
done

if [ -z "$chapters" ]; then
    usage
fi

# Build the filename suffix (".adoc", or ".<lang>.adoc" when -l is given)
suffix=".adoc"
if [ -n "$language" ]; then
    suffix=".${language}.adoc"
fi

# find's -name uses plain globs (no extended patterns like @(01|02)),
# so each requested chapter gets its own -name test, joined with -o.
if [ "$chapters" = "*" ]; then
    name_args=( -name "${prefix}*${suffix}" )
else
    name_args=( "(" -false )
    for ch in $chapters; do
        name_args+=( -o -name "${prefix}${ch}${suffix}" )
    done
    name_args+=( ")" )
fi

if [ -n "$recursive" ]; then
    depth_args=()
else
    depth_args=( -maxdepth 1 )
fi

content=$(find . "${depth_args[@]}" -type f "${name_args[@]}" | sort | xargs cat)

if [ -z "$content" ]; then
    echo "No matching files found."
    exit 1
fi

if [ -n "$output_file" ]; then
    echo "$content" > "$output_file"
    echo "All matching chapter contents merged into '$output_file'"
else
    # Check if xclip is available (Linux)
    if command -v xclip &> /dev/null; then
        echo "$content" | xclip -selection clipboard
        echo "All matching chapter contents copied to clipboard as a single text using xclip."
    # Check if pbcopy is available (macOS)
    elif command -v pbcopy &> /dev/null; then
        echo "$content" | pbcopy
        echo "All matching chapter contents copied to clipboard as a single text using pbcopy."
    else
        echo "No clipboard utility found. Please install xclip (Linux) or use pbcopy (macOS)."
        exit 1
    fi
fi

2. How It Works

  • Command-line options: You can specify chapters (-c), a language code (-l), a custom file prefix (-p), an output file (-o), and recursive search (-r).
  • Pattern construction: The script dynamically builds filename patterns based on the options provided.
  • Finding matching files: It uses the find command to locate all matching files - only the current directory by default, or subdirectories too with -r.
  • File concatenation: xargs cat concatenates the contents of the found files into one string.
  • Clipboard support: If no output file is provided, the script copies the concatenated content to the clipboard using either xclip (Linux) or pbcopy (macOS).
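The heart of the script is a three-stage pipeline: find selects the files, sort keeps them in chapter order, and xargs cat merges them. Here is a minimal, self-contained sketch of that pipeline using throwaway example files in a temp directory (the filenames are just examples):

```shell
# Create throwaway sample chapters in a temp directory (names are examples).
dir=$(mktemp -d)
printf 'one\n'  > "$dir/chapter_01.adoc"
printf 'two\n'  > "$dir/chapter_02.adoc"
printf 'tres\n' > "$dir/chapter_01.es.adoc"   # language variant, not matched below

# Select, order, and concatenate - the same pipeline the script builds.
content=$(find "$dir" -maxdepth 1 -type f -name 'chapter_0[12].adoc' | sort | xargs cat)
echo "$content"
```

Because the files sort by their zero-padded chapter number, the merged output always comes out in reading order.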

3. Usage Examples

Here are a few example scenarios where this script can be helpful:

Example 1: Copy all chapters to clipboard (default behavior)
./manage_chapters.sh -c "*"
Example 2: Merge specific chapters into a file
./manage_chapters.sh -c "01 02 03" -o merged_chapters.adoc
Example 3: Copy chapters with a different prefix
./manage_chapters.sh -c "*" -p "section_"
Example 4: Merge language-specific chapters into a file
./manage_chapters.sh -c "*" -l es -o spanish_chapters.adoc
Example 5: Copy chapters with a custom prefix and specific language
./manage_chapters.sh -c "01 02" -p "part_" -l de
Example 6: Recursive search in all subdirectories
./manage_chapters.sh -c "*" -r
Example 7: Recursive search for specific chapters and language
./manage_chapters.sh -c "01 02" -l es -r
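As a quick sanity check of the naming scheme, the options from Example 7 (-c "01 02" -l es, default prefix) should end up matching files named like this - the loop below just reconstructs those names, independently of the script:

```shell
# Mirror the naming scheme: prefix + chapter + optional language + .adoc
chapters="01 02"
language="es"
prefix="chapter_"

files=""
for ch in $chapters; do
    # ${files:+ } inserts a separating space only after the first name.
    files="${files}${files:+ }${prefix}${ch}.${language}.adoc"
done
echo "$files"
```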

Key Features

  • Flexible Chapter Selection: Select specific chapters or use wildcards to include all.
  • Language Support: Handle different language versions of chapters easily by specifying the language code.
  • Customizable File Prefix: The ability to change file prefixes allows the script to adapt to different project structures or naming conventions.
  • Clipboard or File Output: Depending on your needs, choose to merge chapters into a file or copy the concatenated content directly to the clipboard.

I hope this helps you in your AsciiDoc adventures :)

(Quick-note) Troubleshooting Dual Monitor Issues on KDE on Ubuntu/Linux Mint

TLDR

Learn how to troubleshoot and resolve issues with dual monitors on KDE, especially when one monitor stops working or remains off.

Context

By default, KDE handles dual monitors well on a new installation. However, sometimes one monitor might stop working and stay off. This happened to me, and I had to fix it using xrandr and KDE's display settings.

Steps

Here is how I resolved my issue:

  1. Run an xrandr query:
xrandr --query

This showed two connected displays.

  2. Copy the display names: I identified the display names from the xrandr output.
 xrandr --query
Screen 0: minimum 320 x 200, current 5120 x 1440, maximum 16384 x 16384
HDMI-A-0 connected 2560x1440+0+0 (normal left inverted right x axis y axis) 597mm x 336mm
   2560x1440    144.00*+ 120.00    99.95    59.95  
   3840x2160     30.00    25.00    24.00    29.97    23.98  
   1920x1200    144.00  
   1920x1080    120.00   119.88    60.00    60.00    50.00    59.94  
   1600x1200    144.00  
   1680x1050     59.88  
   1280x1024     75.02    60.02  
   1440x900      59.90  
   1280x960      60.00  
   1280x800      59.91  
   1152x864      75.00  
   1280x720      60.00    50.00    59.94  
   1024x768      75.03    70.07    60.00  
   832x624       74.55  
   800x600       72.19    75.00    60.32    56.25  
   720x576       50.00  
   720x480       60.00    59.94  
   640x480       75.00    72.81    66.67    60.00    59.94  
   720x400       70.08  
DisplayPort-0 connected primary 2560x1440+2560+0 (normal left inverted right x axis y axis) 597mm x 336mm
   2560x1440     59.95*+
   1920x1200     59.95  
   1920x1080     60.00    50.00    59.94    30.00    25.00    24.00    29.97    23.98  
   1600x1200     59.95  
   1680x1050     59.95  
   1280x1024     75.02    60.02  
   1440x900      59.89  
   1280x960      60.00  
   1280x800      59.81  
   1152x864      75.00  
   1280x720      60.00    50.00    59.94  
   1024x768      75.03    70.07    60.00  
   832x624       74.55  
   800x600       72.19    75.00    60.32    56.25  
   720x576       50.00  
   720x480       60.00    59.94  
   640x480       75.00    72.81    66.67    60.00    59.94  
   720x400       70.08  
DisplayPort-1 disconnected (normal left inverted right x axis y axis)
  3. Execute the command:
xrandr --output HDMI-A-0 --mode 1920x1080 --right-of DisplayPort-0

This set HDMI-A-0 to 1920x1080 and placed it to the right of DisplayPort-0.

  4. Adjust in KDE preferences: I went to Preferences > Display Configuration, set the desired resolutions, and applied the changes.

  5. Fix overlap: When the monitors overlapped, I changed both to a lower resolution, applied the changes, then set the preferred resolution again.

Following these steps fixed my issue, and both monitors worked correctly.
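Identifying the connected display names from the xrandr output can also be scripted. A small sketch, using an abbreviated sample of the output above - on a real system you would pipe xrandr --query into the awk instead of the sample variable:

```shell
# Abbreviated sample of the xrandr output shown above.
sample='HDMI-A-0 connected 2560x1440+0+0
DisplayPort-0 connected primary 2560x1440+2560+0
DisplayPort-1 disconnected'

# Print the first field of every line whose second field is "connected".
names=$(printf '%s\n' "$sample" | awk '$2 == "connected" {print $1}')
echo "$names"
```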

References