How to recover and update Proxmox 8 firewall configuration in SQLite when you locked yourself out

TLDR

The firewall config is not in /etc/pve/firewall/cluster.fw but in an SQLite database at /var/lib/pve-cluster/config.db. You need to reboot your system into rescue mode, change the value enable: 1 to enable: 0 and reboot into Proxmox.

Context

I made a noob mistake and locked myself out of my server. Luckily Hetzner allows me to reboot into rescue mode. This is what happened and how I managed to get my access back.

In other words, this tutorial is for situations where you've accidentally locked yourself out of your Proxmox server due to a firewall misconfiguration (like I did). In my case, I enabled the firewall (enable: 1) with an incorrect configuration, preventing access to the server. The solution involves booting into a rescue system, mounting the Proxmox partition, and manually editing the firewall configuration in the SQLite database.

Prerequisites

  • Access to a rescue system (e.g., Hetzner Rescue System)
  • Basic knowledge of Linux commands and SQLite, although you can also just copy and paste the commands below and they should work.

Disclaimer: I am not responsible for data loss or anything else for that matter. The following commands worked for me and nothing bad happened. I put them here in case they help someone else, as I had to research for a few hours before solving this (especially the issue of not finding the config).

Step 1: Boot into Rescue System

Boot your server into the rescue system provided by your hosting provider (e.g., Hetzner Rescue System).

Step 2: Identify the Proxmox Partition

Use the lsblk command to list all block devices:

lsblk

Identify the partition where Proxmox is installed. It's often part of a RAID array or LVM setup.

In my case the output was like this:

loop0            7:0    0   3.1G  1 loop
nvme1n1        259:0    0 476.9G  0 disk
├─nvme1n1p1    259:1    0   256M  0 part
│ └─md0          9:0    0 255.9M  0 raid1
├─nvme1n1p2    259:2    0     1G  0 part
│ └─md1          9:1    0  1022M  0 raid1
└─nvme1n1p3    259:3    0 475.7G  0 part
  └─md2          9:2    0 475.6G  0 raid1
    ├─vg0-root 253:0    0    64G  0 lvm
    ├─vg0-swap 253:1    0     8G  0 lvm
    └─vg0-data 253:2    0   402G  0 lvm
nvme0n1        259:4    0 476.9G  0 disk
├─nvme0n1p1    259:5    0   256M  0 part
│ └─md0          9:0    0 255.9M  0 raid1
├─nvme0n1p2    259:6    0     1G  0 part
│ └─md1          9:1    0  1022M  0 raid1
└─nvme0n1p3    259:7    0 475.7G  0 part
  └─md2          9:2    0 475.6G  0 raid1
    ├─vg0-root 253:0    0    64G  0 lvm
    ├─vg0-swap 253:1    0     8G  0 lvm
    └─vg0-data 253:2    0   402G  0 lvm

There I saw that the volumes I needed were in the vg0 volume group, which sits on the RAID array md2.

Step 3: Assemble RAID Array (if applicable)

If your Proxmox partition is part of a RAID array, assemble it:

mdadm --assemble --scan
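
You can check that the arrays came up by inspecting /proc/mdstat:

cat /proc/mdstat

The output should list the md devices (md0, md1 and md2 in my case) as active.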

Step 4: Activate Volume Group

Activate the volume group (usually named vg0 in Proxmox):

vgchange -ay vg0
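
If you are not sure what the volume group is called, vgs and lvs list the volume groups and logical volumes the rescue system can see:

vgs
lvs

This should match the vg0-root/vg0-swap/vg0-data volumes from the lsblk output above.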

Step 5: Mount the Proxmox Partition

Create a mount point and mount the Proxmox root partition:

mkdir /mnt/proxmox
mount /dev/vg0/root /mnt/proxmox

Verify the mount:

ls /mnt/proxmox/

Here you should see some files and directories.

Step 6: Locate the Configuration Database

The Proxmox configuration is stored in an SQLite database. Locate it:

ls -la /mnt/proxmox/var/lib/pve-cluster

You should see a file named config.db.

Step 7: Access the SQLite Database

Open the SQLite database:

sqlite3 /mnt/proxmox/var/lib/pve-cluster/config.db

sqlite3 is already installed in Hetzner's rescue system. You will need to install it if it's not available in yours.
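
Before editing anything, make a copy of the database so you can roll back if something goes wrong:

cp /mnt/proxmox/var/lib/pve-cluster/config.db /root/config.db.bak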

Step 8: Check the Current Firewall Configuration

View the current firewall configuration:

SELECT * FROM tree WHERE name = 'cluster.fw';

Note: Initially I didn't know where the config was stored, so I first ran a broader query to find out whether there was an entry at all:

SELECT name FROM tree WHERE name LIKE '%fw%';

Step 9: Update the enable Option

Change the enable option from 1 to 0 to disable the firewall:

UPDATE tree 
SET data = replace(data, 'enable: 1', 'enable: 0') 
WHERE name = 'cluster.fw';
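
As an alternative, the whole edit can be done non-interactively from the rescue shell with a single command (in which case you can skip the interactive prompt entirely):

sqlite3 /mnt/proxmox/var/lib/pve-cluster/config.db \
  "UPDATE tree SET data = replace(data, 'enable: 1', 'enable: 0') WHERE name = 'cluster.fw';"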

Step 10: Verify the Change

Confirm that the change was made successfully:

SELECT * FROM tree WHERE name = 'cluster.fw';

Step 11: Exit SQLite

Exit the SQLite prompt:

.quit

Step 12: Unmount and Reboot

Unmount the Proxmox partition and reboot the server:

umount /mnt/proxmox
reboot

Important Notes

  • Disabling the Firewall: This process disables the firewall cluster-wide. Re-enable it after properly configuring it once you regain access.
  • Security Risks: A disabled firewall may expose your system to security risks. You have been warned.
  • Backup: Always create backups before making significant changes. I keep my Proxmox configs in a git repository for reference.
  • Alternative Methods: When possible, use the Proxmox web interface or CLI tools for configuration changes. At least that's what I've read. I like to use config files, but I also locked myself out of my server.

References

Several sites, but I can no longer remember all of them.

Some of the sites I visited are:

  • https://forum.proxmox.com/threads/ssh-connection-no-web-interface.110702/
  • https://www.reddit.com/r/Proxmox/comments/13hyn0y/how_to_secure_proxmox_web_ui/
  • https://eulenfunk.readthedocs.io/en/stable/supernode01.html
  • and many more ...

How to manage multiple AsciiDoc Chapter Files: Copy, Merge, and Customize

TL;DR

Manage multiple AsciiDoc chapter files efficiently with a customizable bash script that supports copying and merging chapters in different languages. The script can concatenate, copy, or merge chapter files with flexible options for language codes, prefixes, output destinations and recursive file search.

Context

I work a lot with AsciiDoc files. I write documentation, books and even part of this site with it. Sometimes I need to merge some files or copy their content to a new file. This is the case with the new novel I'm writing.

Because of reasons, I decided to write it in 3 languages in parallel: Spanish, German and English.

I use include::file.adoc[] to manage multiple files and merge them into a final version. This way I can write each chapter in each language in different files, and then include them in a final file as follows:

= title of the book
:author: Diego Carrasco
// here comes more metadata and config stuff.

include::chapter_00.adoc[]

include::chapter_01.adoc[]

include::chapter_02.adoc[]

include::chapter_03.adoc[]

I'm already at chapter 15, which means I have 45 chapter files (15 per language) plus the main ones.
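
The per-language files follow a simple naming scheme, chapter number plus language code, so the folder looks something like this (made-up excerpt; the exact scheme is whatever matches the script's prefix and language options below):

chapter_01.es.adoc
chapter_01.de.adoc
chapter_01.en.adoc
chapter_02.es.adoc
...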

Again, because of reasons, I needed to copy the content of all the chapters of a specific language to my clipboard - or at least into a new file.

To streamline this, I made a bash script that helps me automate the process of concatenating, merging, or copying chapter files. This script is designed to handle these tasks while providing support for different languages, prefixes, and output formats.

As I tend to forget this stuff, I'm making a note here.

NOTE: This script actually works on any kind of text file, just change the prefix 😁

Steps

1. The Bash Script

Create a new file manage_chapters.sh and paste the following:

DISCLAIMER: I am in no way responsible for any damage to your files. This script is provided as is. Always read the script before using it.

#!/bin/bash

usage() {
    echo "Usage: $0 [-c xx] [-l yy] [-p prefix] [-o output_file] [-r]"
    echo "  -c xx : Specify chapter numbers (e.g., '01 02 03' or '*' for all)"
    echo "  -l yy : Specify language code (e.g., 'es' for Spanish)"
    echo "  -p prefix : Specify file prefix (default: 'chapter_')"
    echo "  -o output_file : Merge into a file instead of copying to clipboard"
    echo "  -r : Search recursively in subdirectories"
    exit 1
}

chapters=""
language=""
prefix="chapter_"
output_file=""
recursive=""

while getopts "c:l:p:o:r" opt; do
    case $opt in
        c) chapters="$OPTARG" ;;
        l) language="$OPTARG" ;;
        p) prefix="$OPTARG" ;;
        o) output_file="$OPTARG" ;;
        r) recursive="-r" ;;
        *) usage ;;
    esac
done

if [ -z "$chapters" ]; then
    usage
fi

if [ "$chapters" = "*" ]; then
    pattern="${prefix}*.adoc"
else
    pattern="${prefix}@(${chapters// /|}).adoc"
fi

if [ -n "$language" ]; then
    pattern="${pattern%.adoc}.${language}.adoc"
fi

if [ -n "$recursive" ]; then
    find_cmd="find . -type f -name"
else
    find_cmd="find . -maxdepth 1 -type f -name"
fi

content=$(eval "$find_cmd \"$pattern\"" | sort | xargs cat)

if [ -z "$content" ]; then
    echo "No matching files found."
    exit 1
fi

if [ -n "$output_file" ]; then
    echo "$content" > "$output_file"
    echo "All matching chapter contents merged into '$output_file'"
else
    # Check if xclip is available (Linux)
    if command -v xclip &> /dev/null; then
        echo "$content" | xclip -selection clipboard
        echo "All matching chapter contents copied to clipboard as a single text using xclip."
    # Check if pbcopy is available (macOS)
    elif command -v pbcopy &> /dev/null; then
        echo "$content" | pbcopy
        echo "All matching chapter contents copied to clipboard as a single text using pbcopy."
    else
        echo "No clipboard utility found. Please install xclip (Linux) or use pbcopy (macOS)."
        exit 1
    fi
fi

2. How It Works

  • Command-line options: You can specify chapters (-c), a language code (-l), a custom file prefix (-p), and an output file (-o).
  • Pattern construction: The script builds the find name tests dynamically based on the options provided (find matches plain globs, so specific chapters become one -name test each).
  • Finding matching files: It uses the find command to locate all matching files in the current directory, or in its subdirectories as well with -r.
  • File concatenation: The xargs cat command is used to concatenate the contents of the found files into one string.
  • Clipboard support: If no output file is provided, the script copies the concatenated content to the clipboard using either xclip (Linux) or pbcopy (macOS).
  • Option -r to enable recursive searching.

3. Usage Examples

Here are a few example scenarios where this script can be helpful:

Example 1: Copy all chapters to clipboard (default behavior)
./manage_chapters.sh -c "*"
Example 2: Merge specific chapters into a file
./manage_chapters.sh -c "01 02 03" -o merged_chapters.adoc
Example 3: Copy chapters with a different prefix
./manage_chapters.sh -c "*" -p "section_"
Example 4: Merge language-specific chapters into a file
./manage_chapters.sh -c "*" -l es -o spanish_chapters.adoc
Example 5: Copy chapters with a custom prefix and specific language
./manage_chapters.sh -c "01 02" -p "part_" -l de
Example 6: Recursive search in all subdirectories:
./manage_chapters.sh -c "*" -r
Example 7: Recursive search for specific chapters and language
./manage_chapters.sh -c "01 02" -l es -r

Key Features

  • Flexible Chapter Selection: Select specific chapters or use wildcards to include all.
  • Language Support: Handle different language versions of chapters easily by specifying the language code.
  • Customizable File Prefix: The ability to change file prefixes allows the script to adapt to different project structures or naming conventions.
  • Clipboard or File Output: Depending on your needs, choose to merge chapters into a file or copy the concatenated content directly to the clipboard.

I hope this helps you in your AsciiDoc adventures :)

(Quick-note) Troubleshooting Dual Monitor Issues on KDE on Ubuntu/Linux Mint

TLDR

Learn how to troubleshoot and resolve issues with dual monitors on KDE, especially when one monitor stops working or remains off.

Context

By default, KDE handles dual monitors well on a new installation. However, sometimes one monitor might stop working and stay off. This happened to me, and I had to fix it using xrandr and KDE's display settings.

Steps

Here is how I resolved my issue:

  1. Run xrandr Query:
xrandr --query

This showed both displays as connected.

  2. Copy Display Names: I identified the display names from the xrandr output.
 xrandr --query
Screen 0: minimum 320 x 200, current 5120 x 1440, maximum 16384 x 16384
HDMI-A-0 connected 2560x1440+0+0 (normal left inverted right x axis y axis) 597mm x 336mm
   2560x1440    144.00*+ 120.00    99.95    59.95  
   3840x2160     30.00    25.00    24.00    29.97    23.98  
   1920x1200    144.00  
   1920x1080    120.00   119.88    60.00    60.00    50.00    59.94  
   1600x1200    144.00  
   1680x1050     59.88  
   1280x1024     75.02    60.02  
   1440x900      59.90  
   1280x960      60.00  
   1280x800      59.91  
   1152x864      75.00  
   1280x720      60.00    50.00    59.94  
   1024x768      75.03    70.07    60.00  
   832x624       74.55  
   800x600       72.19    75.00    60.32    56.25  
   720x576       50.00  
   720x480       60.00    59.94  
   640x480       75.00    72.81    66.67    60.00    59.94  
   720x400       70.08  
DisplayPort-0 connected primary 2560x1440+2560+0 (normal left inverted right x axis y axis) 597mm x 336mm
   2560x1440     59.95*+
   1920x1200     59.95  
   1920x1080     60.00    50.00    59.94    30.00    25.00    24.00    29.97    23.98  
   1600x1200     59.95  
   1680x1050     59.95  
   1280x1024     75.02    60.02  
   1440x900      59.89  
   1280x960      60.00  
   1280x800      59.81  
   1152x864      75.00  
   1280x720      60.00    50.00    59.94  
   1024x768      75.03    70.07    60.00  
   832x624       74.55  
   800x600       72.19    75.00    60.32    56.25  
   720x576       50.00  
   720x480       60.00    59.94  
   640x480       75.00    72.81    66.67    60.00    59.94  
   720x400       70.08  
DisplayPort-1 disconnected (normal left inverted right x axis y axis)

  3. Execute Command:
xrandr --output HDMI-A-0 --mode 1920x1080 --right-of DisplayPort-0

This enabled HDMI-A-0 at 1920x1080, positioned to the right of DisplayPort-0.
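
If a display refuses to come back on at a specific resolution, letting xrandr pick its preferred mode first can help:

xrandr --output HDMI-A-0 --auto --right-of DisplayPort-0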

  4. Adjust in KDE Preferences: I went to Preferences > Display Configuration, set the desired resolutions and applied the changes.

  5. Fix Overlap: When there was overlap, I changed both monitors to a lower resolution, applied the changes, then set the preferred resolution again.

Following these steps fixed my issue, and both monitors worked correctly.

Introducing the FlexSearch Plugin for Nikola

TLDR

I wanted to have search functionality on this site, which is powered by the Nikola SSG. After some research I decided to create a new plugin using FlexSearch, and the plugin is now available in the Nikola plugin repository.

Context

I use this site a lot as a personal repository of stuff I forget. This means I refer to it when I have to do something and no longer remember what command it was. I also refer people here when they ask something I've already answered. My usual workflow was to go through my articles and use the browser's search feature, but that was not very efficient because sometimes I only remember one phrase of one post.

Well, this is no longer necessary because now this site has full-text search functionality, courtesy of yours truly. It is a new Nikola plugin under the MIT License (so you can use it as you want), built on the FlexSearch library.

How does this work?

Great that you ask. This is a static site. That means that whenever I post a new article I have to rebuild the site to generate HTML from my sources (markdown, rst, asciidoc and others) in my git repository. In other words, this site does not use any database. The search functionality has 3 steps:

  1. When the site is built, a search_index.json is built. This index contains the titles, slugs and bodies of all my articles.
  2. There is now a search box in the main menu. This does nothing more than call a JavaScript snippet, which in turn uses the FlexSearch library to take that search index as input and return a list of posts that match your search query.
  3. The small JavaScript snippet renders the search results in a div for you to browse.
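
For illustration, the generated index is essentially a JSON array with one entry per post. The exact field names are plugin internals, but the shape is roughly this (illustrative, not the literal output):

[
  {
    "title": "How to fix incrementing mount names on reboot",
    "slug": "/posts/fix-incrementing-mount-names/",
    "content": "Each time I restarted my computer..."
  }
]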

That's all. It is completely offline; no Google, databases, third-party services or whatever. Pretty cool, isn't it?

Of course, I'm not the first one to think about this. There are many examples out there. These are some of them:

  • https://www.stephanmiller.com/static-site-search/
  • https://snipcart.com/blog/static-site-search
  • https://plugins.getnikola.com/v7/localsearch/

If you are using Nikola for your site, you are welcome to give it a try.

To install the plugin just run

nikola plugin -i flexsearch_plugin

and follow the instructions.

How to fix incrementing mount names on reboot in Ubuntu/Linux Mint

TL;DR

To prevent your internal disks from being mounted with incrementing names (e.g., name1, name2) on each reboot, configure static mount points using the /etc/fstab file. This avoids conflicts with services like Docker that use bind volumes.

Context

Each time I restarted my computer running Ubuntu/Linux Mint, my additional disks were mounted with incrementing names. This breaks all symbolic links and history because another service, Docker in my case, uses the path before it is mounted.

Why was this happening? I had never bothered to check how the additional internal disks were being mounted. I just connected them and started to use them. To be fair, I don't restart my computer that often (around once a year, see the image below), so it took me almost a year to run into this situation.

uptime

Long story short, I restarted my computer and many things did not work anymore, and the mount points were the cause of it.

Steps to Fix the Issue

1. Find the UUID of the Disk

Use the lsblk command to list all block devices and their UUIDs:

lsblk -o NAME,FSTYPE,UUID
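
Alternatively, blkid prints the UUID of a single partition directly (replace /dev/sdb1 with your device):

sudo blkid /dev/sdb1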

You can also use the disk manager in your desktop environment to find the UUID. In my case, KDE, it looks like this:

disk manager kde uuid

Note down the UUID of the disk you want to mount.

2. Edit /etc/fstab

Open the /etc/fstab file with a text editor:

sudo nano /etc/fstab

3. Add an Entry for Your Disk

Add an entry for your disk using its UUID:

UUID=<your-uuid> /mnt/your-mount-point ext4 defaults 0 2

Replace <your-uuid> with the actual UUID and /mnt/your-mount-point with your desired mount point.
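
For example, with a made-up UUID and an ext4 data disk, the entry could look like this:

UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789 /mnt/data ext4 defaults 0 2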

4. Understand the fstab Options (optional, most of the time the defaults will work)

  • defaults option: A shorthand for standard mount options, including:
      • rw (read-write)
      • suid (allow setuid bits)
      • dev (interpret device files)
      • exec (allow execution of binaries)
      • auto (can be mounted with mount -a)
      • nouser (only root can mount)
      • async (asynchronous I/O)

  • 0 (dump): This field is used by the dump utility to decide if the filesystem needs to be dumped. 0 means ignore.

  • 2 (pass): The order in which fsck checks the filesystem for errors during boot. 0 means don't check, 1 is reserved for the root filesystem, and 2 is for all other filesystems.

5. Create Mount Points

If you don't have the mount points already, create them:

sudo mkdir -p /mnt/your-mount-point

In my case I did already have them.

6. Mount the Drives

Restart your computer or run the following command to mount the drives immediately:

sudo mount -a
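
Before rebooting, you can also let findmnt lint the fstab entries; it reports syntax errors and missing mount points without mounting anything:

sudo findmnt --verify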

In my case I had to restart the system, as that was the easiest way to get everything normal again.

Side notes

As a side note, some of the issues I had were:

  • Nextcloud Desktop could not sync anymore, as it did not find the destination folder.
  • The quick-access entries in the Dolphin file manager were no longer correct.
  • All my zoxide paths were broken.

That was enough for me to fix this, as I couldn't take it anymore. :)

Introducing the GitHub Widget Plugin for Nikola

TL;DR

The GitHub Widget Shortcode Plugin for Nikola lets you embed a customizable widget showcasing a GitHub repository's details in your site.

Context

Today, I'm excited to announce that my GitHub Widget Plugin for Nikola has been merged! This plugin allows you to embed a GitHub repository widget directly into your Nikola-generated site, displaying key details of the repository.

Nikola is a static site generator that offers great flexibility. I've already written about it a few times (https://diegocarrasco.com/categories/nikola/), and it's what powers this site. However, embedding GitHub repository details required custom solutions, until now. My new plugin provides an easy way to integrate this functionality using a simple shortcode.

Why is this important? Because somehow I've been using GitHub more for public code, such as my espanso-compatible TextExpander Android App. Thus, I plan to have a section on this site to list such projects, and I was missing a way to show updated information from each repository.

With this shortcode I can just write this in a post, as in listing-github-widget-example.md (Source):

{{% github_widget %}}dacog/textexpander_android{{% /github_widget %}}

and I get the following:

textexpander_android

This is a basic text expander app for android. Use it as a espanso companion app, as it parses the yaml files in espanso/match

Languages: Kotlin

  • ⭐ Stars: 14
  • Forks: 1
  • 👁 Watchers: 2
  • ❗ Open Issues: 0

This looks a lot better than just a link 😁

How It Works

It was a bit trial and error, as I had not used the GitHub API before.

The first approach was to use requests to call the API endpoints (such as https://api.github.com/repos/{repo}/commits), get the JSON response and use that. But then I also wanted to get the latest commits and the latest release, if available. And then I got an API rate limit error, with a nice message:

{
  "message": "API rate limit exceeded for 2.206.40.62. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
  "documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting"
}

That meant, of course, that I now had to get a token to avoid the error. I tried following the GitHub API documentation for authentication, but without success. And then it hit me... there should already be a library for this... and there was.

I installed PyGithub and refactored my code to use it. That meant I removed most of my code, as I no longer had to call each endpoint on my own.

Authentication worked instantly and I no longer got the API rate limit error.

That's the short story. Now let's explain how to use this.

Installation

You need to have a working Nikola setup. If you don't have one, you can check this article: link://slug/nikola-blog-setup

To install the plugin, run the following command:

nikola plugin -i github_widget

Optional Configuration

For enhanced API usage, add your GitHub API token to conf.py:

# Add your GitHub API token here
GITHUB_API_TOKEN = 'your_github_api_token_here'

This step is optional but recommended to avoid API rate limit issues.

Your personal token should allow read access to the repositories and their contents. Read-only permission is enough.

Using the Shortcode

Here are some examples of how to use the shortcode in your markdown files:

listing-github-widget-examples.md (Source)

// Basic Example
{{% github_widget %}}user/repo{{% /github_widget %}}

// Example with Avatar and Custom Width
{{% github_widget avatar=true max_width=400px %}}user/repo{{% /github_widget %}}

// Example with Latest Release and Commit Info
{{% github_widget avatar=true latest_release=true latest_commit=true max_width=400px %}}user/repo{{% /github_widget %}}

Customization

You can customize the widget to display various details such as the repository owner's avatar, the latest release, and the latest commit. Adjust the max_width parameter to fit the widget into your site's layout.

Visual Examples

Here's what the widgets look like with the example shortcodes:

Basic Example

Basic Example

With Avatar and Custom Width

With Avatar

With Latest Release and Commit Info

With Latest Info

CSS Customization

To style the widget, you can use the following CSS:

/* github shortcode */
.github-widget {
    display: flex;
    align-items: center;
    border: 1px solid #ddd;
    padding: 10px;
    margin: 10px 0;
    border-radius: 5px;
    background-color: #f9f9f9;
}

.github-widget-image {
    margin-right: 10px;
}

.github-widget img {
    border-radius: 50%;
}

.github-widget .repo-info {
    display: flex;
    flex-direction: column;
}

.github-widget .repo-info h3 {
    margin: 0;
    font-size: 1.2em;
}

.github-widget .repo-info p {
    margin: 5px 0;
    font-size: 0.9em;
    color: #555;
}

.github-widget .repo-info ul {
    list-style: none;
    padding: 0;
    display: flex;
    gap: 10px;
}

.github-widget .repo-info ul li {
    font-size: 0.8em;
    color: #333;
}

.github-widget h4 {
    color: black;
}

Conclusion

The GitHub Widget Plugin for Nikola makes it easy to display detailed information about any GitHub repository on your site. With simple configuration and usage, it's a great addition for any developer's blog or project page.

How to Optimize and Free Up Disk Space on Debian/Ubuntu Servers with Docker Containers

TLDR

Manage disk space on Debian/Ubuntu servers and Docker containers by removing unnecessary packages, cleaning up caches, and pruning Docker objects.

Context

I needed to free up space, as I had a small VPS with full storage, and my notebook and desktop computers also had really high disk usage even though I did not have that many files. I do, however, use Docker a lot.

After researching I did not find a guide with everything I needed (explanations included), so here it is.

Steps

Package Manager (apt)

Remove packages that are no longer required

sudo apt-get autoremove

Clean Up APT Cache

Check the space used by the APT cache:

sudo du -sh /var/cache/apt

Clean up the APT cache:

sudo apt-get autoclean
sudo apt autoclean

Delete cache files:

sudo apt-get clean
sudo apt clean

Clear Systemd Journal Logs

Check the disk usage of systemd journal logs:

journalctl --disk-usage

Clean logs older than 3 days:

sudo journalctl --vacuum-time=3d
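
You can also cap the journal by size instead of age:

sudo journalctl --vacuum-size=200M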

Docker

Docker takes a lot of space compared to vanilla servers. Check link:/slug/change-docker-data-directory-vps-optimization for a related post on the overlay2 directory and how to move the Docker data root to another volume/drive.

Check system usage

Check overall system usage:

docker system df

For more detailed information:

docker system df -v

Use docker system prune

(from documentation)

WARNING! This will remove:

  • all stopped containers
  • all networks not used by at least one container
  • all dangling images
  • all build cache

docker system prune

Use Docker Container Prune

Warning: This will remove all stopped containers. Refer to the documentation for more details.

docker container prune # Remove all stopped containers

Use Docker Image Prune

Remove unused images (Remove all dangling images. If -a is specified, will also remove all images not referenced by any container.)

Quick note:

What are Docker Dangling Images?

  • Images that have no tag and are not referenced by any container (source)
  • Untagged layers that serve no purpose but still consume disk space (source)
  • Not automatically removed by Docker; they need to be cleaned up manually (source)

docker image prune

Use docker volume prune

Remove all unused local volumes. Unused local volumes are those which are not referenced by any containers. By default, it only removes anonymous volumes.

docker volume prune # remove only anonymous (unnamed) volumes

To remove all unused volumes:

docker volume prune -a # remove all unused volumes
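
The Docker build cache can also be pruned on its own, which on build machines is often where most of the space hides:

docker builder prune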

Exciting News: My Upcoming Book "The Digital Marketer’s Playbook" is about to launch

I am thrilled to share some exciting news about my forthcoming book "The Digital Marketer’s Playbook: How to Effectively Collaborate with Agencies, Freelancers, and Digital Marketing Experts," to be published by Apress (Springer-Nature). The book is targeted for release by the end of Q3 or Q4 2024. That's this year!

If you've followed my blog, you might recall that I initially conceived this project as a short book. My idea was to write a practical guide with the main topics related to digital marketing. It has now evolved into a comprehensive resource. Designed for professionals grounded in marketing or business and for those particularly interested in the field, it guides readers through the complexities of digital marketing, emphasizing effective collaboration with various partners. I think the title of the book is also self-explanatory. 😁

The Journey and Collaboration

The book has evolved significantly over the past few months. My first post about it marked the beginning of this journey. Since then, ten months later, I've moved away from the idea of self-publishing this project, choosing instead to go with traditional publishing. This decision came after discussing the book proposal with Shivangi Ramachandran, an Apress Senior Editor (Acquisitions), which ended with me signing with Apress (Springer-Nature).

Partnering with them and working closely with Shiva and Sowmya Thodur (Production Editor) brought about substantial changes. After signing, I received a lot of guidelines regarding the structure of the book, which made me realize that the chapters were more than just a compilation of important points and that I had to elaborate on them. These changes quickly expanded the book into a comprehensive 250+ page resource. That structured approach to writing required me to rethink and rewrite many sections of the book.

Throughout this journey, I have dedicated countless hours refining the content, often late into the night. My amazing wife’s patience has been invaluable, and I cannot thank her enough for her support. Throughout the entire process, she also served as a thoughtful partner, sharing her insights and candid comments.

Additionally, the Unperfekthaus in Essen provided a perfect environment with their focus room and endless coffee, where I spent nearly three days a week until closing time last month to finish the manuscript.

I also have to thank my colleagues at MEHRKANAL, especially Miri and Mario, for enduring my questions and serving as sparring partners on several book topics. They often heard me talk about the book, perhaps more than they would have liked. Victor and Eva gave me detailed feedback on the first version of the table of contents. Damian, Tom, Holger and Christian also listened to my thoughts on the book and gave their feedback. There are many others whose names escape my memory, as it's again late into the night.

Although many of them saw this as just another item on my never-ending list of personal projects, I'm thrilled it has become a reality. 😎

I wrapped up the manuscript in early June, fueled by an impressive coffee intake—seriously, about 4-6 cups per session! Thanks to the patience of many, it's now in the capable hands of the Apress team for review. Can't wait to share it with you soon!

A quick preface before diving into the details

This project has not only deepened my grasp of familiar topics; it has also allowed me to enrich the manuscript with innovative content driven by the latest trends and technologies in digital marketing.

Some chapters turned out shorter than anticipated, while others demanded more detail. I focused on keeping the content essential and practical, but ultimately, you'll be the judge of that.

Stay tuned for more updates and sneak peeks of the book! I am excited to share this resource with those of you eager to master digital marketing.

If you want to be among the first to explore "The Digital Marketer’s Playbook," please fill out the form below.

What will you learn by reading the book?

This book is my effort to provide you with an all-inclusive resource to enhance your understanding of digital marketing and equip you with the skills and knowledge to work effectively with various partners in the field.

My goal throughout its pages is to provide you with everything you need to communicate your requirements to your partners, understand what they do and ask the right questions when you need to. I define digital marketing as a process, and that's also how the book is structured.

Within its pages, I guide you through the foundational concepts of digital marketing. You'll grasp what digital marketing is, familiarize yourself with crucial terms like digital assets, advertising channels, and customer awareness, and learn to distinguish between walled gardens and the open internet. This knowledge will be essential for discussing your goals and your campaign implementations with your partners. I also describe the landscape of taxes and laws regarding digital marketing and how they can impact your digital marketing efforts.

As you dive deeper, I'll show you how to set up and structure digital marketing campaigns effectively. You'll understand bid strategies, ad quality, and the importance of placements and inventory. We'll explore various campaign types and their significance, focusing on how to craft a briefing to get impactful creatives, messages, and copy to achieve marketing success and align your digital marketing initiatives with your business goals.

Finally, I offer practical examples tailored to various company types, detailing key factors to consider based on company size, resources, and business structure. I also share my perspective on using artificial intelligence in digital marketing, discussing both its benefits and risks. Additionally, I provide an overview of social media and its role in digital marketing.

Ready to dive deeper into digital marketing? Sign up below to receive updates on "The Digital Marketer’s Playbook" and access exclusive digital marketing resources.

Quick and simple Local WordPress Setup for Lazy Developers

I needed a fast way to set up WordPress locally. I tried various methods, but they were either too complex or didn't work. So, I created a simple, lazy solution.

This is by no means secure, but it runs on the first try 😁

The code is in this GitHub repository. Feel free to use it.

lazy-docker-compose-wordpress-setup

Set up a local WordPress development environment easily with Docker Compose using this repository. One command sets up WordPress with WP-CLI, MariaDB, and PHPMyAdmin, plus autogenerated aliases for quick management. A second command sources these aliases. Perfect for quick, lazy local development. Not secure for production use.

Languages: Shell

  • ⭐ Stars: 0
  • Forks: 0
  • 👁 Watchers: 1
  • ❗ Open Issues: 0

How It Works

This setup revolves around 4 files:

1 docker-compose.yml:

This file:

  • Uses the latest official containers for WordPress, MariaDB, PHPMyAdmin, and WP-CLI.
  • Sets up a wordpress database with user and password root.
  • Launches a WordPress instance linked to the database.

2 setup.sh:

This script:

  • Detects whether you have docker-compose or docker compose and uses the correct one.
  • Sets your UID and GID in a .env file.
  • Generates an alias file for quick commands.

3 start.sh:

This script:

  • Creates the WordPress directory.
  • Sets permissions to 777 (yes, it's insecure).
  • Starts Docker Compose.

4 alias.sh:

Autogenerated by setup.sh or start.sh. Source it to get these aliases:

  • dc: Docker Compose commands.
  • wpcli: WP-CLI commands.
  • wpbackup: Backs up the database to ./backups.
  • wpdown: Backs up and then stops and removes containers and volumes.
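
I won't reproduce the generated file here, but the aliases boil down to something like this (illustrative sketch; the real alias.sh is generated by setup.sh and resolves the compose command and paths for you):

alias dc='docker compose'
alias wpcli='docker compose run --rm wpcli wp --allow-root'
alias wpbackup='docker compose exec db mysqldump -u root -proot wordpress > ./backups/backup.sql'
alias wpdown='wpbackup && docker compose down -v'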

How to Use

First you need to clone the repository:

git clone https://github.com/dacog/lazy-docker-compose-wordpress-setup.git
cd lazy-docker-compose-wordpress-setup

Run these commands to get started:

chmod +x setup.sh
chmod +x start.sh
./start.sh
source alias.sh

Now you are good to go. You will see that a new wordpress folder has been created in the path you were in.

Now you can:

  • Check http://localhost:8000 for the WordPress site
  • Check http://localhost:8080 for PHPMyAdmin. User and password are root

You also receive this information when you run start.sh.

Using the Aliases

alias dc

  • Bring up the services:
dc up -d
  • Stop the services:
dc down
  • Stop and remove volumes:
dc down -v
  • View the status of the services:
dc ps

alias wpcli

Run WP-CLI as root (it includes --allow-root):

  • Check WordPress version:
wpcli core version
  • Install a plugin:
wpcli plugin install hello-dolly --activate
  • Update WordPress core:
wpcli core update
  • Create a new post:
wpcli post create --post_title='My New Post' --post_content='This is the content of the post.' --post_status=publish

wpbackup

Back up the database to ./backups:

wpbackup

wpdown

Back up the database and files, then stop and remove everything:

wpdown

Alternatives

Here are some other methods to consider:

  1. Local by Flywheel: User-friendly local WordPress setup with advanced features. To be fair, I completely forgot about this when I was trying to get WordPress to run locally. It is way more powerful, but you need many clicks... and you need to fill out a form to download it.
  2. DevKinsta: Offers local development with easy deployment to Kinsta hosting. I haven't tried this one.

How to change the Screen Resolution of a Guest-OS (Ubuntu) in Hyper-V with PowerShell on Windows 11

TL;DR

To change the screen resolution of a VM in Hyper-V, use PowerShell. The command requires the VM name and the desired resolution.

Context

Unlike in VirtualBox, I could not find a GUI option in Hyper-V to change the screen resolution directly. Instead, I had to use PowerShell to adjust the display settings of the virtual machine.

Steps

  1. Identify the VM Name

Find the name of your VM in Hyper-V Manager. (Search for Hyper-V Manager in the start menu to access it)

Hyper-V Manager VM Name

  2. Open PowerShell

Open a PowerShell terminal on your host machine.

  3. Execute the Command

Use the following PowerShell command, replacing "Ubuntu 22.04 LTS2" with your VM's name and adjusting the resolution as needed:

Set-VMVideo -VMName "Ubuntu 22.04 LTS2" -HorizontalResolution 1920 -VerticalResolution 1080 -ResolutionType Single

  • HorizontalResolution: The width of the screen in pixels (e.g., 1920).
  • VerticalResolution: The height of the screen in pixels (e.g., 1080).
  • ResolutionType: Set to Single for a single monitor setup.
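
To double-check the new values (the guest may need a restart to pick them up), you can query them back; Get-VMVideo sits alongside Set-VMVideo in the Hyper-V module:

Get-VMVideo -VMName "Ubuntu 22.04 LTS2"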
