Architecting an Ultra-Minimal Linux VM with Buildroot | Part 1: Build, Break, Fix

Building the smallest bootable Linux VM that passes a professor's audit script. A hands-on walkthrough of Buildroot, from compiling a custom kernel inside Docker to debugging the four failures that broke the first boot, and fixing them without a full rebuild.

Hey folks, I hope you're doing great. The blog is growing subscriber by subscriber, and it's almost April. Can you believe it? Time is flying by.

This is Part 1 of a multi-part series. Here we'll go from zero to a working VM that passes every audit check. Part 2 will cover packaging it into a bootable ISO and squeezing under the 20MB target.

My new semester just started, and it's been a wild ride so far. One of the first challenges is the reason for this post: we have a small module on operating systems, in which our professor gave us the following challenge:

Build the smallest bootable Linux VM you can. It must only pass the tests in this bash-testing-script, nothing more. Personally, I'd love to see <20MB.

Like us, he realized later on that this challenge is way harder than anticipated. Fifteen years ago, this would've been a rather quick journey: start with a small distro, something like what archlinux-tiny is nowadays, strip a few things out, and boom, <20MB.

💽
Not so today. Everything has gotten bigger, software seems bloated when compared to earlier days. Let's still try to ace the challenge, shall we?

In this post I'll show you how it can work. While writing it, I don't know yet whether we'll land <20MB, but believe me, we'll be small hehe.

Buildroot: starting from absolute zero

I am not experienced with kernels and filesystems. Before this challenge I knew some basics and that was about it. Therefore, I had no idea where to start. The challenge was limited to 4 hours, so did I really have the time to strip down a larger, existing VM? My team and I discussed how to go about it until I got a hint in the right direction :D

A good friend in my OS class introduced me to Buildroot to get the journey started.

What is Buildroot?

Buildroot is a set of Makefiles and patches that automates the process of generating a complete (and potentially tiny) Linux system. You pick exactly which packages you want, and it builds a cross-compilation toolchain, the kernel, and the root filesystem from the ground up.

We'll see and learn all about Buildroot, but first, what must our VM be able to do?


The Audit Script

Before we start building, we have to understand what we're being graded on. Our professor gave us a bash script called audit.sh. If this script doesn't return a perfect "OK" score, the VM fails, no matter how small it is.

Let's break down the requirements we have to bake into our minimal build.

1. The Essentials (Shell & File Ops)

First, the VM needs a brain and a way to move things around. The script checks for a POSIX-compliant shell and the standard "bread and butter" commands.

Shell & Basic Commands

command -v sh >/dev/null 2>&1 && ok "POSIX-Shell"
for cmd in cp mv cat rm ls; do
  command -v "$cmd" >/dev/null 2>&1 && ok "$cmd present"
done

In Buildroot, these are usually handled by BusyBox, which combines tiny versions of many common UNIX utilities into a single small executable.
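You can run the same style of check on any POSIX system to see how the audit's first block behaves. This is a sketch mirroring the audit's pattern, not the professor's script itself; on a BusyBox image each hit resolves to a symlink to the single busybox binary, but command -v doesn't care:

```shell
# Same pattern as the audit's shell/file-ops block: command -v succeeds
# if the command is resolvable, whether it's a standalone GNU binary
# or a BusyBox applet symlink.
for cmd in sh cp mv cat rm ls; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd present"
  else
    echo "$cmd MISSING"
  fi
done
```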

2. Privilege & Processes

We also need sudo for administrative tasks and a working ps command to monitor the system.

sudo & ps

command -v sudo >/dev/null 2>&1 && ok "sudo present"
if ps -e 2>/dev/null | head -n1 | grep -q 'PID'; then
  ok "ps works correctly"
fi

3. Networking (The Hard Part)

This is where it gets tricky. The VM needs a valid local network (RFC1918) and the ip command to manage it. It also checks for the lo (loopback) interface.

if ip link show 2>/dev/null | grep -q 'lo'; then
    ok "lo interface present"
fi

# Valid LAN check (Looking for 192.168.x.x, 10.x.x.x, etc.)
NET_CIDR=$(ip -o -f inet addr show | awk '{print $4}' \
    | grep -E '^(192\.168\.|10\.|172\.1[6-9]\.)')
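Worth noting: the pattern above only catches 172.16 through 172.19, while the full RFC1918 172.16.0.0/12 block runs through 172.31. The professor's script is the authority for grading, but a complete check would look like this sketch:

```shell
# Full RFC1918 coverage: 10.0.0.0/8, 172.16.0.0/12 (172.16 - 172.31),
# and 192.168.0.0/16.
rfc1918='^(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.)'

echo "172.31.0.1/16" | grep -qE "$rfc1918" && echo "private"   # in the /12
echo "172.32.0.1/16" | grep -qE "$rfc1918" || echo "public"    # just outside it
```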

4. Connectivity & Advanced Tools

The script doesn't just want a "ping"; it wants to see that we can actually interact with the web and other machines. We need:

  • OpenSSH: Both the client and the daemon (sshd) must be running.
  • Network Scanning: A tool called arp-scan to find other hosts.
  • Web Tools: curl (with both HTTP and HTTPS support) and netcat (nc) for raw TCP tests.
if pgrep -x sshd >/dev/null 2>&1 || pgrep -x dropbear >/dev/null 2>&1; then
    ok "sshd/dropbear running"
fi

command -v arp-scan >/dev/null 2>&1 && ok "arp-scan present"
command -v curl >/dev/null 2>&1 && ok "curl present"

5. Data Handling & Analysis

Finally, the script tests if we can handle archives (tar), manipulate text (awk), and even perform "Banner Grabbing", which is a fancy way of saying "ask a server what its name is."

command -v tar >/dev/null 2>&1 && ok "tar present"
RESULT=$(echo "test 42" | awk '{print $2}') # Should be 42

# HTTP Banner Grabbing
HTTP_BANNER=$(printf "HEAD / HTTP/1.0\r\n\r\n" \
    | nc -w3 example.com 80 | grep -i "^server:")

6. The "Reverse" Analysis

If you connect via SSH, the script gets even more aggressive. It identifies your IP and tries to scan you back to see which ports are open on your host machine!

The Challenge

Packages like openssh, curl, and arp-scan aren't exactly "tiny." My goal is to fit all of this into a bootable ISO that stays under that magical 20MB mark.


What are we Actually Building?

Before we touch a single config file, we need to understand the four layers that make up our minimal VM. Skip this and you'll be copy-pasting flags blindly.

Read it and the whole build will make sense.

Layer 1: Kernel

The kernel is the bridge between software and hardware. It manages the CPU, time, memory, and devices. Our output here is a file called bzImage, a compressed kernel binary. Everything else runs on top of it.

Layer 2: The Root Filesystem

The rootfs is the directory tree the kernel hands control to after boot.

/bin, /etc, /usr and so on. It contains every program the running system can use. We pack ours into a single rootfs.ext2 image (ext2, the Second Extended File System, was introduced for Linux in 1993 and designed for performance and reliability; it organizes data into block groups and uses inodes to manage files and directories).

Layer 3: BusyBox

On a standard Linux desktop, ls, cp, tar and friends are separate binaries from the GNU project: hundreds of files! BusyBox replaces most of them with a single ~1 MB binary, perfect for our minimal system.

Layer 4: Buildroot

As we said, Buildroot automates the whole thing. You tell it what packages you want via a config file; it builds a cross-compiler, compiles the kernel, compiles every package, and assembles the rootfs. All in one make command.

Wait, A Cross-Compiler?

The cross-compiler part is worth a second. Your build machine (in our case, that'll be a Debian Docker container on Kali) runs x86_64. Our target VM also runs x86_64, so why not just use the host's gcc?
Because Buildroot needs to control every compiler flag, library version, and optimization setting to guarantee a reproducible, minimal output. It builds its own isolated toolchain first, then uses that for everything else.

This is why the first build takes 30–60 minutes: it's literally compiling a compiler.

Here's what your make command does:

make
 │
 ▼
Buildroot
 ├── builds cross-compiler (host-gcc, ~20 min)
 ├── compiles kernel → bzImage
 ├── compiles packages (openssh, curl, arp-scan ...)
 └── assembles rootfs → rootfs.ext2
      │
      ▼
 ISO builder → minimal.iso ← bootable

Now for the question we already touched slightly in the beginning: why not just use a tiny existing distro like Alpine? 

You could, and for production use, you probably should.
But Buildroot gives us exact control over what ends up in the image. Alpine's package manager, init system, and base utilities add overhead we don't need.

With Buildroot, if audit.sh doesn't check for it, it doesn't ship.

Speaking of which, make sure the four layers above have really sunk in before we start running build commands; the rest of the guide builds directly on them.
πŸ‘©β€πŸš€
Ready? Set! Now let's get building!

Setting Up the Build Environment

Before we touch Buildroot, we need a clean, isolated place to build.

We use a Debian Docker container, not because Docker is special here, but because it gives us a known-good environment that won't interfere with our host (I used a Kali VM I had lying around).

Installing Docker

sudo apt install -y docker.io
sudo systemctl enable --now docker

You might get prompted for your password more than once; that's normal. sudo asks again whenever its cached credentials have timed out between commands.

Starting the Build Container

sudo docker run --privileged --dns 8.8.8.8 \
  -it --name minlinux debian:bookworm bash

Two flags worth understanding here. --privileged gives the container full access to the host kernel. We need this later when we mount our rootfs image as a loop device to embed files into it. --dns 8.8.8.8 sets Google's DNS explicitly, because without it apt inside the container silently fails to resolve hostnames.

In case you wondered: a loop device is a file acting as a block device. We want to mount a file that contains an entire filesystem, so we go through the loop driver.
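To make that concrete, here's a sketch with example paths: create a small file, format it as ext2, and it becomes mountable exactly like Buildroot's rootfs.ext2 (the mount step itself needs root, which is part of why we run --privileged):

```shell
# Create an 8 MB file and format it as an ext2 filesystem; this is what
# Buildroot does at a larger scale for rootfs.ext2.
dd if=/dev/zero of=/tmp/disk.img bs=1M count=8 status=none
mkfs.ext2 -F -q /tmp/disk.img   # -F: target is a regular file, not a device

# Mounting it turns the file into a browsable filesystem (root required):
#   mkdir -p /mnt/img && mount -o loop /tmp/disk.img /mnt/img
echo "image created: $(stat -c%s /tmp/disk.img) bytes"
```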

The above command creates the container and drops you straight into a root shell inside it. From this point forward, every command in this section runs inside that container unless stated otherwise.

Once you exit, you can get back in with:

sudo docker start -ai minlinux

Getting the Build Script Into the Container

From your Kali host (not inside the container), copy the two files over:

docker cp build.sh minlinux:/root/
docker cp audit.sh minlinux:/root/

If docker cp gives you a "no such container" error, the container isn't running yet. Run sudo docker start minlinux first, then retry.

Walking Through the Build Script

Let's view our build plan and understand what's going on.

💡
You will need at least ~10GB of (temporary) free disk space in your environment (check with df /), plus a few gigabytes of available RAM (check with free).

Step 1: Install Build Dependencies

apt-get update
apt-get install -y \
    wget make gcc g++ unzip bc \
    libncurses-dev rsync cpio xz-utils \
    bzip2 file perl patch python3 git qemu-system-x86

Buildroot compiles an entire Linux system from source, so it needs a full C/C++ toolchain, download tools, and a handful of utilities. libncurses-dev is specifically for menuconfig, the interactive configuration interface. git is required even if you're not cloning anything, because some Buildroot package scripts call it internally.

Step 2: Download Buildroot

BUILDROOT_VERSION="2026.02"
wget "https://buildroot.org/downloads/buildroot-${BUILDROOT_VERSION}.tar.xz"
mkdir -p /opt/buildroot
tar -xf "buildroot-${BUILDROOT_VERSION}.tar.xz" -C /opt/buildroot --strip-components=1

This grabs the Buildroot source tarball and extracts it to /opt/buildroot. The --strip-components=1 removes the top-level version-named directory from the archive so we get a clean path.

Step 3: Load the Base Config

cd /opt/buildroot
make qemu_x86_64_defconfig

Buildroot ships with pre-made configs for common targets. qemu_x86_64_defconfig gives us a working baseline for a 64-bit QEMU VM: kernel version, architecture flags, basic filesystem. We're not keeping it as-is; we're using it as a starting point to overwrite.

Step 4: Apply Our Configuration

cat >> .config << 'EOF'
 
# --- System ---
BR2_TARGET_GENERIC_HOSTNAME="minlinux"
BR2_TARGET_GENERIC_ROOT_PASSWD="aeb"
BR2_SYSTEM_DHCP="eth0"
 
# --- Minimize size ---
BR2_STRIP_strip=y
 
# --- Filesystem ---
BR2_TARGET_ROOTFS_EXT2=y
BR2_TARGET_ROOTFS_EXT2_SIZE="64M"
BR2_TARGET_ROOTFS_CPIO=y
BR2_TARGET_ROOTFS_CPIO_GZIP=y
 
# --- SSH ---
BR2_PACKAGE_DROPBEAR=n
BR2_PACKAGE_OPENSSH=y
BR2_PACKAGE_OPENSSH_SERVER=y
BR2_PACKAGE_OPENSSH_CLIENT=y
BR2_PACKAGE_OPENSSH_KEY_UTILS=y
 
# --- Network ---
BR2_PACKAGE_IPROUTE2=y
BR2_PACKAGE_IPUTILS=y
BR2_PACKAGE_ARP_SCAN=y
 
# --- curl + TLS ---
BR2_PACKAGE_LIBCURL=y
BR2_PACKAGE_LIBCURL_CURL=y
BR2_PACKAGE_CA_CERTIFICATES=y
 
# --- sudo ---
BR2_PACKAGE_SUDO=y
 
# --- pgrep (for the sshd-check in audit.sh) ---
BR2_PACKAGE_PROCPS_NG=y
 
# --- netcat-openbsd (audit.sh needs -z for port scan) ---
BR2_PACKAGE_NETCAT_OPENBSD=y
 
# --- busybox: sh, ls, cp, mv, cat, rm, ps, tar, awk, ping ---
BR2_PACKAGE_BUSYBOX=y
 
EOF
 
make olddefconfig

Rather than using the interactive menuconfig, the script appends our options directly to .config. The >> means append, not overwrite, so our additions layer on top of the defconfig baseline.

make olddefconfig then resolves the full dependency tree. Any option we set that requires another package to be enabled gets pulled in automatically here. This is also the step that will throw errors if you've set something contradictory, for example, setting both BR2_PACKAGE_DROPBEAR=y and BR2_PACKAGE_OPENSSH=y with conflicting SSH server flags.
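Because olddefconfig gives no feedback when it drops an option, it pays to grep the resolved .config afterwards. Here's the check in miniature, demonstrated on a stand-in file in /tmp; on the real build you'd point cfg at /opt/buildroot/.config:

```shell
# Simulate what olddefconfig can do to an appended option: the requested
# "=y" ends up as "is not set" in the resolved config.
cfg=/tmp/demo.config
printf '%s\n' \
  'BR2_PACKAGE_SUDO=y' \
  '# BR2_PACKAGE_PROCPS_NG is not set' > "$cfg"

for opt in BR2_PACKAGE_SUDO BR2_PACKAGE_PROCPS_NG; do
  if grep -q "^${opt}=y" "$cfg"; then
    echo "$opt kept"
  else
    echo "$opt DROPPED"
  fi
done
```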

Want the final build script and audit script as ready-to-run files? They're available to subscribers; it's free, and it's the best way to support the blog while I keep writing these deep dives.

Surprises and Learnings

The Dropbear Trap: Why I Had to Switch

My first version of this build used Dropbear instead of OpenSSH. It made perfect sense from a size perspective: Dropbear is a fraction of the size and does SSH just fine.

Then the audit script slapped me in the face:

if ! pgrep -x sshd >/dev/null; then
    warn "sshd inactive"
fi

See the problem? pgrep -x sshd. The -x flag means exact process name match. Dropbear runs as dropbear in the process table, not sshd. No symlink or alias changes what the kernel reports in /proc. The audit would fail every single time.
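You can see the -x behavior on any Linux box; this is a throwaway sketch with an illustrative process, not part of the audit:

```shell
# Start a short-lived process whose name in /proc is "sleep", then probe it.
sleep 5 &
pid=$!

pgrep -x sleep >/dev/null && echo "exact name matches"
pgrep -x slee  >/dev/null || echo "partial name does not match with -x"

kill "$pid"
```

This is exactly why no Dropbear symlink helps: the process table says dropbear, and -x means exact.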

Lesson learned: read the audit script before choosing packages. The system follows the test, never the other way around. OpenSSH it is. It's bigger, but the test demands it.

The HTTPS Surprise: CA Certificates

Here's another one that bit me. My first build had curl working perfectly for HTTP, but HTTPS failed every time:

curl -sL https://www.google.com/ >/dev/null 2>&1  # FAIL

curl was compiled, TLS support was there, but HTTPS still failed. Why? No root CA certificates. Without BR2_PACKAGE_CA_CERTIFICATES=y, curl has no way to verify the certificate chain for any HTTPS connection. The binary works, the protocol works, but trust doesn't. One line in the config, hours of confusion.

The netcat Dilemma: When One Tool Isn't Enough

This one is subtle. The professor's audit script uses nc (netcat) in two different ways:

# Raw TCP test β€” uses -q (quit after EOF delay)
nc -l -p 12345 -q 1 >/dev/null 2>&1 &
echo test | nc -q 1 localhost 12345
 
# Port scan β€” uses -z (zero-I/O scan mode)
nc -z -w1 "$CLIENT_IP" "$PORT"

Here's the problem: no single netcat implementation supports both flags. BusyBox nc has -q but not -z. netcat-openbsd has -z but not -q. You literally cannot satisfy both checks with one binary.

My solution: installing netcat-openbsd. It handles -z for the port scan correctly, and it silently ignores the -q flag rather than crashing on it. The raw TCP test still passes because the pipe (echo test | nc ...) closes stdin naturally anyway, which is what -q was trying to accomplish. Sometimes you get lucky with failure modes.

Package Summary

(Some of) the packages we're adding and why:

BR2_PACKAGE_OPENSSH: audit.sh checks pgrep -x sshd; Dropbear would show up as dropbear and fail
BR2_PACKAGE_NETCAT_OPENBSD: the audit needs -z for port scanning, which BusyBox nc lacks; OpenBSD nc silently ignores the -q flag used elsewhere
BR2_PACKAGE_ARP_SCAN: checked explicitly by the audit
BR2_PACKAGE_IPROUTE2: provides the ip command for the network checks
BR2_PACKAGE_PROCPS_NG: provides pgrep for the sshd process check
BR2_PACKAGE_LIBCURL + BR2_PACKAGE_CA_CERTIFICATES: curl needs the library explicitly, and CA certs are required or HTTPS silently fails
BR2_TARGET_ROOTFS_EXT2_SIZE="64M": OpenSSH is larger than Dropbear; 32M was too tight, 64M gives headroom

Step 5: The Actual Build

export FORCE_UNSAFE_CONFIGURE=1
make -j$(nproc) 2>&1 | tee /tmp/build.log

Three things happening here.

FORCE_UNSAFE_CONFIGURE=1 bypasses Buildroot's check that refuses to run as root. Inside a Docker container we're always root, so without this flag the build stops immediately.

make -j$(nproc) runs as many parallel compile jobs as you have CPU cores. A note of caution: on my first attempt with only 8 GB of RAM, this caused OOM (Out Of Memory) kills, because GCC compiling itself consumes 1–2 GB per process. If you hit OOM errors, drop down to make -j4 to cap the number of parallel jobs. With 16 GB or more you should be fine with $(nproc).
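If you're unsure where your machine lands, here's a rough pre-flight sketch; the 2 GB-per-job budget is a rule of thumb from the observation above, not a Buildroot requirement:

```shell
# Cap parallel jobs so each job has roughly 2 GB of available RAM.
jobs=$(nproc)
avail_gb=$(awk '/MemAvailable/ {printf "%d", $2/1024/1024}' /proc/meminfo)

if [ "$avail_gb" -lt $((jobs * 2)) ]; then
  jobs=$((avail_gb / 2))
fi
[ "$jobs" -lt 1 ] && jobs=1   # never go below one job

echo "using make -j$jobs"
```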

2>&1 | tee /tmp/build.log does two things at once. 2>&1 merges stderr into stdout so errors don't disappear silently. tee writes the combined output to both your terminal and a logfile simultaneously, so if the build crashes at minute 45, you can scroll build.log to find exactly where it failed without losing the output.
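The redirection pattern in isolation, with a throwaway log file (the path is just an example):

```shell
# stderr is merged into stdout (2>&1), then tee duplicates the combined
# stream to both the terminal and a log file.
{ echo "normal output"; echo "an error" >&2; } 2>&1 | tee /tmp/demo.log

echo "log contains $(wc -l < /tmp/demo.log) lines"
```

Without the 2>&1, only stdout would reach the log and compiler errors would scroll past unrecorded.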

This step takes 30–60 minutes. Buildroot is not downloading a pre-built system; it is compiling GCC from source, then using that compiler to build the kernel and every package. Go do something else.

When it finishes successfully you'll see the build output in /opt/buildroot/output/images/:

bzImage        ~6.4M
rootfs.ext2    ~60M
rootfs.cpio.gz ~11M

Step 6: Verify OpenSSH Autostart

mkdir -p /tmp/rootfs_check
mount -o loop output/images/rootfs.ext2 /tmp/rootfs_check
ls /tmp/rootfs_check/etc/init.d/

Buildroot's OpenSSH package should have generated /etc/init.d/S50sshd automatically. The script mounts the ext2 image and checks for it; if it's missing, it creates it manually. The S50 prefix is Buildroot's init convention: scripts run in alphabetical order at boot, so S50 runs after networking (S40) is up.

The init script also needs to generate host keys on first boot (ssh-keygen for rsa, ecdsa, and ed25519) and we make sure PermitRootLogin yes is set in sshd_config, since the audit runs as root.
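For reference, here's the shape of such a SysV-style init script as a stand-in demo written to /tmp; the real S50sshd that Buildroot generates differs in detail (it also handles key generation and PID files):

```shell
# Minimal start/stop skeleton in Buildroot's init-script style.
cat > /tmp/S50demo << 'EOF'
#!/bin/sh
case "$1" in
  start) echo "starting demo daemon" ;;  # the real script would exec /usr/sbin/sshd
  stop)  echo "stopping demo daemon" ;;
  *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
esac
EOF
chmod +x /tmp/S50demo

/tmp/S50demo start
```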

If sshd isn't in that init directory, pgrep sshd in the audit script will fail and we never modify the audit to work around our system. The system has to be right.


First Boot... And It Doesn't Work

So we run make, wait 45 minutes, boot the VM, run the audit and... 4 failures. Welcome to systems engineering.

[OK]  POSIX-Shell vorhanden
[OK]  cp, mv, cat, rm, ls
[OK]  ps funktioniert korrekt
[OK]  lo Interface vorhanden
[OK]  ssh (Client) vorhanden
[NOK] sshd konnte nicht gestartet werden     ← pgrep missing!
[OK]  SSH-Key-Setup korrekt
[OK]  Ping 8.8.8.8 / DNS OK
[OK]  arp-scan vorhanden
[NOK] nc (netcat) fehlt                      ← never built!
[OK]  curl vorhanden / HTTP OK
[NOK] HTTPS Download fehlgeschlagen          ← no CA certs!
[NOK] HTTP Banner nicht gefunden             ← needs nc!

Four failures. Not catastrophic, but not a pass either. Let's debug them one by one.

Failure 1: "sshd couldn't be started"

The audit runs pgrep -x sshd to check if the SSH daemon is running. But:

./audit.sh: line 70: pgrep: not found

pgrep doesn't exist. We set BR2_PACKAGE_PROCPS_NG=y in our config, but make olddefconfig silently dropped it. This is one of the most frustrating things about Buildroot; if the defconfig baseline already has an opinion about a package, your appended option might get overridden during dependency resolution without any warning.

The sshd binary itself was actually in the image (/usr/sbin/sshd existed), and the init script (S50sshd) was there too. The daemon was probably running fine. But the audit couldn't verify it was running because pgrep was missing. A tool to check the process was absent, not the process itself.

Failure 2: "nc (netcat) missing"

Same story. BR2_PACKAGE_NETCAT_OPENBSD=y was in our config, make olddefconfig ate it. The binary never got compiled, never got installed. Every netcat-dependent check (raw TCP test, port scanning, banner grabbing) failed as a cascade.

Failure 3: "HTTPS Download Not Successful"

This one was different. curl was there and HTTP worked fine. But HTTPS failed silently. The cause: no CA root certificates. Without BR2_PACKAGE_CA_CERTIFICATES=y, curl can't verify any TLS certificate chain. The binary works, the protocol works, but trust doesn't. Curl sees an untrusted cert and refuses the connection.
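A quick way to check whether a usable bundle is present on any system (a sketch; Buildroot's ca-certificates package installs under /etc/ssl/certs, though exact paths vary by distro):

```shell
# Count installed certificate files; zero means curl's TLS verification
# has nothing to anchor to and every HTTPS connection will fail.
certs=$(ls /etc/ssl/certs 2>/dev/null | wc -l)

if [ "$certs" -gt 0 ]; then
  echo "CA bundle present ($certs entries)"
else
  echo "no CA certificates installed"
fi
```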

Failure 4: "HTTP Banner Not Found"

This was just a cascade from Failure 2; banner grabbing uses nc, which wasn't installed.

The Fix: No Full Rebuild Required

Here's where it gets satisfying. We don't need to wait another 45 minutes. Buildroot already compiled the toolchain, the kernel, and all the packages that did make it in. We just need to build the missing packages and repackage the rootfs.

From inside the Docker container (not the VM), I shut down the VM and ran:

cd /opt/buildroot
export FORCE_UNSAFE_CONFIGURE=1

# Build just the two missing packages
make procps-ng-rebuild
make netcat-openbsd-rebuild

Each one took about 30 seconds. Then we verified the binaries landed:

find output/target -name "pgrep"   # β†’ output/target/bin/pgrep βœ“
find output/target -name "nc"      # β†’ output/target/usr/bin/nc βœ“

While we were at it, we fixed two more things: PermitRootLogin yes in sshd_config (the audit runs as root and needs SSH access), and confirmed that the CA certificates were already present in /etc/ssl/certs/ (turns out make olddefconfig did honor that config line; small mercies).

Then we rebuilt just the filesystem image:

make rootfs-ext2

30 seconds. Boot. Run the audit.

Second Boot: Green Across the Board

✓ POSIX Shell OK
✓ ls OK
✓ cp OK
✓ mv OK
✓ cat OK
✓ rm OK
✓ ps OK
✓ lo Interface vorhanden
✓ ssh OK
✓ SSH key setup OK
✓ Ping 8.8.8.8 OK
✓ DNS OK
Netzwerk: 10.0.2.15/24
-- Live Hosts (ARP) --
10.0.2.2  52:55:0a:00:02:02  (Unknown: locally administered)
10.0.2.3  52:55:0a:00:02:03  (Unknown: locally administered)
✓ Alle Tests abgeschlossen

Everything that the system needs to do, it does. The only "failures" left are the SSH Client Analysis and OS Fingerprinting sections; those require you to connect via SSH from an external machine, not from the VM console. When you SSH in through the port-forwarded port 2222, the script detects your client IP and runs the reverse scan against you. That's working as designed.

The OS Fingerprinting section is left as a TODO by the professor; that's the part each team implements themselves.

Lessons Learned

This build broke in ways I didn't expect, and each break taught something concrete:

1. Read the test before choosing packages

My first build used Dropbear instead of OpenSSH. Dropbear is tiny and does SSH perfectly well. But pgrep -x sshd matches exact process names, and Dropbear runs as dropbear, not sshd. The audit would never pass. Fifteen minutes of reading the audit script upfront would have saved an hour of debugging.

2. make olddefconfig can silently drop your options

We set BR2_PACKAGE_PROCPS_NG=y and BR2_PACKAGE_NETCAT_OPENBSD=y. Both were silently ignored. The defconfig baseline had its own opinions, and olddefconfig's dependency resolution overwrote ours without a single warning. The fix was to rebuild the individual packages after the fact, but the real lesson is: always verify what ended up in the image, not what you put in the config.

3. HTTPS β‰  HTTP + TLS library

Having curl compiled with TLS support is not enough. You also need BR2_PACKAGE_CA_CERTIFICATES=y: the actual root certificate bundle that lets curl verify certificate chains. Without it, every HTTPS connection fails silently. The binary works, the protocol works, trust doesn't.

4. No single netcat does everything

The professor's audit uses nc -q 1 (BusyBox supports this) and nc -z -w1 (only netcat-openbsd supports this). No single implementation handles both flags. We installed netcat-openbsd, which supports -z and silently ignores -q. The raw TCP test still passes because the pipe closes stdin naturally. Sometimes you get lucky with failure modes.

5. You can patch without rebuilding from scratch

The scariest part of a 45-minute build is the thought of starting over. But Buildroot is smarter than that. make procps-ng-rebuild compiles just that one package in seconds. make rootfs-ext2 repackages the filesystem in seconds. The kernel, the toolchain, and everything else stay cached. Incremental fixes are fast.

What's Next

This post covered the build and the debugging. In Part 2, we'll tackle:

  • Packaging it into a bootable ISO: the bzImage + rootfs combo needs to become a single .iso file
  • Squeezing under 20MB: our rootfs.cpio.gz is 11MB and bzImage is 6.4MB, so we're at ~17.4MB before ISO overhead. It's going to be tight.
  • The OS Fingerprinting challenge: implementing the TODO the professor left us
Thanks for reading. If you're attempting something similar, I hope the debugging sections save you some time. The "it compiled therefore it works" assumption is the biggest trap in embedded Linux; always verify against the actual test.

See you in Part 2.
