Fixing my Firewall Problem

In my first post, I documented how I was using a Linksys WRT54G router to serve as the router/firewall for my new home IoT lab.

The old router is intended to serve a separate subnet ( for IoT purposes and to firewall it from the rest of my home network. Unfortunately, I ran into a snag:

One of my goals (#1) is that the analysis subnet should be isolated by a firewall. Because NAT is the greatest firewall ever invented, I only have to worry about outbound packets from the lab network.

The easy way to do this would be for the lab router to drop LAN packets destined for my home network. Unfortunately, the OpenWRT build I’m using is so old that it doesn’t support this. Specifically, you can’t set net.bridge.bridge-nf-call-iptables=1 because the sysctl doesn’t exist (it’s compiled out for performance reasons), which prevents using iptables rules on bridged interfaces. I’m not really interested in compiling OpenWRT myself, and the only sensible configuration for the LAN is to bridge the Ethernet switch and WiFi interfaces, so I have to find an alternative. Until I do, I’ll just have to live with inbound filtering only.
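For context, this is exactly what the bridge-nf sysctls unlock: once the kernel supports them, frames crossing a bridge are fed through iptables, so an ordinary FORWARD rule can filter them. A minimal sketch of what that looks like on a router whose kernel does support it (br-lan and the 192.0.2.0/24 subnet are illustrative placeholders, not my actual configuration):

```shell
# Send bridged frames through iptables (requires bridge-nf support in the kernel)
sysctl -w net.bridge.bridge-nf-call-iptables=1

# Then a normal FORWARD rule can filter traffic crossing the bridge;
# br-lan and 192.0.2.0/24 stand in for the real interface and subnet.
iptables -I FORWARD -i br-lan -d 192.0.2.0/24 -j REJECT
```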

Well, after toying around with different ideas – including physically plugging both ends of a cable into the router’s switch to make a “physical loopback” (note: this doesn’t seem to work) – I decided that recompiling the kernel of the OpenWRT version I needed with “bridge-nf” support was the most elegant and correct solution to my problem.

I underestimated how difficult this would be.


In the nearly 7 years since OpenWRT 10.03.1 “Backfire” Interim Release 1 came out, the OpenWRT build system has had some large overhauls that unfortunately led to a lot of link rot.

For starters, OpenWRT switched from svn to git, and seems to have lost some of its repository history along the way. I was able to find a full archive of the old svn repo here, though, so this didn’t present a large challenge.

The more pressing problem was that the OpenWRT source tree is primarily a build system plus metadata about the packages: information about which source packages to fetch and how to build them. It turns out a few of these references were broken.

To iterate on getting the build to work in a reproducible-ish way, I decided to dockerize it. The Dockerfile uses two “stages” to do the build: the first is an Ubuntu 18.04 layer that performs some preprocessing on the dependencies, and the second is an Ubuntu 12.04 layer that does the actual compilation. The Ubuntu 18.04 layer is needed because git repositories have changed formats in the intervening time period, so old copies of git no longer work with modern repos. The 12.04 layer is used because it was a moderately contemporary operating system at the time of the release, which increased the chances that old articles about how to build OpenWRT would still be applicable.

Stage 1

Stage 1, an Ubuntu 18.04 container, runs a bunch of git clone operations to collect the dependencies required for the “builder” in Stage 2.

FROM ubuntu:18.04 as fetcher

RUN apt-get update && apt-get install -y git

# These git repos are too new to fetch with old versions of Ubuntu
RUN git clone --depth=1 -b tags/backfire_10.03.1 && \
	rm -rf archive/.git

# This contains the extra packages
RUN git clone --depth=1 -b packages_10.03.1 && \
	rm -rf packages/.git

# This contains the LuCI web interface
RUN git clone --depth=1 -b 0.10.0 && \
	rm -rf luci/.git

# This *should* exist, but seems to have link rotted
# original link was
RUN git clone git:// && \
	cd linux-firmware && \
	git checkout d543c1d98fc240267ee59fff93f7a0f36d9e2fc3 && \
	rm -rf .git && cd .. && \
	mv linux-firmware linux-firmware-d543c1d98fc240267ee59fff93f7a0f36d9e2fc3 && \
	tar jcf linux-firmware-d543c1d98fc240267ee59fff93f7a0f36d9e2fc3.tar.bz2 linux-firmware-d543c1d98fc240267ee59fff93f7a0f36d9e2fc3

This first part of the script downloads the four git repositories that I found to be required to build OpenWRT. In order, they are:

  1. The primary OpenWRT source repository, containing the pinned package versions and patches for Backfire 10.03.1
  2. The OpenWRT packages repository, which contains the metadata about how to build all the “extra” third-party packages.
  3. The LuCI (Lua Configuration Interface) repository, which contains the information for building the web interface packages.
  4. A clone of a specific snapshot of the official “linux-firmware” repository, used to recreate, from upstream, an archive that this build of OpenWRT depends on but that the OpenWRT project no longer hosts.

All repositories are downloaded shallow (if possible) and have their revision history blown away to save layer space.

Stage 2

The second stage of the docker build is responsible for building and configuring the OpenWRT toolchain, and then configuring, building, and installing the target OpenWRT image.


# Build on a "contemporary" operating system...
# NB: 14.04 did not work because gcc errored on calls to 'gets'
FROM ubuntu:12.04 as builder

# Install prerequisites...
RUN apt-get update && apt-get install -y \
	gcc \
	gettext \
	binutils \
	patch \
	bzip2 \
	flex \
	make \
	pkg-config \
	libz-dev \
	libc6-dev \
	build-essential \
	unzip \
	wget \
	subversion \
	gawk \
	python \

# OpenWRT does NOT build as root
RUN useradd -ms /bin/bash openwrt
RUN mkdir /src && chown openwrt /src

The first part of the second stage installs packages and configures the container to provide an appropriate environment for building the OpenWRT native toolchain. The requirements are documented on this page.

Notably, I first tried to build all of this on Ubuntu 14.04, but even that was too new. At some point between 12.04 and 14.04, Ubuntu made use of the unsafe C gets function a compiler error, which caused the build to fail. I decided to just use an older version of Ubuntu instead of working around One More Problem.

OpenWRT also doesn’t support building as root at all for some reason, so to appease it the build must be run as a separate user.


Next, the Dockerfile pulls in (most of) the deps that were downloaded in the first stage and updates the feeds, which involves pulling the build information about the packages exported by the ‘packages’ and ‘luci’ repositories into the main build location.

COPY --from=fetcher --chown=openwrt /archive /src/archive
COPY --from=fetcher --chown=openwrt /packages /src/packages
COPY --from=fetcher --chown=openwrt /luci /src/luci

USER openwrt
WORKDIR /src/archive

# Pull in the deps we downloaded earlier
RUN echo "src-link packages /src/packages" > feeds.conf.default && \
	echo "src-link luci /src/luci" >> feeds.conf.default && \
	./scripts/feeds update -a && \
	./scripts/feeds install -a

In a standard OpenWRT build at the time, the “packages” and “luci” feeds were instead referenced by SVN URLs, but those URLs are no longer alive. The workaround is to point the build system at them locally instead.

Next, the Dockerfile creates a .config file for the OpenWRT build. A normal build would create this interactively with make menuconfig (similar to the kernel configuration menu), but I was almost entirely satisfied with the defaults and only wanted to make a few small tweaks. The most sensible approach was therefore to download a copy of the relevant official “release” configuration for the standard images on the brcm47xx target (which covers the WRT54G) and make the desired changes. In this case, that meant disabling the extra packages and extra native tools that are intended for OpenWRT developers.

# This is the configuration used to build the standard images
# We want this, with a few modifications
RUN wget -O config.ref

# Use the reference configuration with:
#   - disable all packages being built as modules
#   - disable CONFIG_ALL (builds all packages by default)
#   - disable CONFIG_IB|CONFIG_SDK|CONFIG_MAKE_TOOLCHAIN these targets don't matter here
#   - disable CONFIG_GDB we don't need GDB
RUN cat config.ref \
	| grep -vE '^CONFIG_PACKAGE.*=m$' \
	| sed -r 's/^CONFIG_(ALL|IB|SDK|MAKE_TOOLCHAIN|GDB)=y$/# CONFIG_\1 is not set/' \
	> .config


After setting up the OpenWRT configuration (which includes the target processor), the native tools and the toolchain can be installed. The native tools are tools that are required to build the toolchain, like autoconf, automake, bison, flex, m4, etc. The toolchain consists of a GCC compiler, binutils and a libc for the target platform.

# Correct MD5 sum of binutils-2.19.1
# See
RUN sed -i 's/09a8c5821a2dfdbb20665bc0bd680791/023222f392e9546bcbb0b4c0960729be/' toolchain/binutils/Makefile

# This is a bunch of native tools -- mostly deps for the compiler
RUN make -j $(nproc) tools/install

# For some reason this seems to have link rotted in a way the other downloads didn't
RUN cd dl; wget

# This is the native build of the cross compiler
RUN make -j $(nproc) toolchain/install

Two bits of linkrot are noted here. The first is described in this ticket: there are two separate “official” MD5 hashes for the binutils-2.19.1 tarball the build system expects to download. The fix was simply to replace the expected MD5 sum with the one observed for the package hosted on the official GNU mirror.
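When substituting a mirror like this, it's worth verifying the replacement tarball against a known-good hash before trusting it. A small sketch of the pattern (the helper function and file names are hypothetical, demonstrated on a throwaway file):

```shell
# Hypothetical helper: check a downloaded file against an expected MD5
# before trusting a substitute mirror.
check_md5() {
	local file="$1" expected="$2"
	local actual
	actual=$(md5sum "$file" | cut -d' ' -f1)
	[ "$actual" = "$expected" ]
}

# Demonstrated on a throwaway file; 5d41... is the MD5 of the string "hello".
printf 'hello' > /tmp/md5-demo.txt
check_md5 /tmp/md5-demo.txt 5d41402abc4b2a76b9719d911017c592 && echo "MD5 OK"
```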

The second piece of linkrot is handled by manually downloading linux-. The build system expects to download this from an OpenWRT mirror that no longer exists, so the workaround is to pre-download it from the official kernel mirror so the build system never attempts the fetch.

Once the toolchain/install target has been run, the OpenWRT build system has compiled GCC three times(!) and has a working copy for the target platform.

Finally, the last step of setting up the toolchain is to install the Kernel headers.

# Effectively undoes;a=patch;h=919763c958e09005035c3b7a7a18d1554f0ca797
RUN echo "CONFIG_BRIDGE_NETFILTER=y" >> target/linux/brcm47xx/config-2.6.32

# Double check we're 'prepared', which builds tools/toolchain but also installs headers
RUN make -j $(nproc) prepare

The Dockerfile sets the CONFIG_BRIDGE_NETFILTER=y kernel option (which is the option I’m trying to change!) before the header installation. I’m not entirely sure this is required, but it seemed prudent and changing layer order later is a pain.

During development of the Dockerfile, it was convenient to separate out the make tools/install and make toolchain/install into separate layers, so that I could fix build issues while retaining some cached layers. This isn’t actually required, as make prepare depends on both of them.


A few pieces of housekeeping are required before the final build step.

First, the OpenWRT configuration must be finalized. This includes setting the optional packages I want (tcpdump and libpcap) and modifying /etc/sysctl.conf to have the defaults that I want.

# Re-add packages we actually want
RUN echo "CONFIG_PACKAGE_tcpdump=y" >> .config && \
	echo "CONFIG_PACKAGE_libpcap=y" >> .config

# Enable net.bridge.bridge-nf-call-* sysctl by default
# Effectively undoes;a=commitdiff;h=bac87d3042411789e61c60edfd6385c8d7f6380f
RUN sed -i 's/^\(net\.bridge\.bridge-nf-call-.*\)=0$/\1=1/;s/disable bridge/enable bridge/' package/base-files/files/etc/sysctl.conf
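A rewrite like this can be sanity-checked without a full build by running an equivalent sed against a sample file. The fragment below mirrors the stock bridge-nf defaults (reconstructed for illustration, not copied from the real sysctl.conf):

```shell
# Sample sysctl.conf fragment, mirroring the stock defaults (illustrative)
cat > /tmp/sysctl-demo.conf <<'EOF'
# disable bridge firewalling by default
net.bridge.bridge-nf-call-arptables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-ip6tables=0
EOF

# Flip the bridge-nf defaults from 0 to 1 and fix the comment to match
sed -i 's/^\(net\.bridge\.bridge-nf-call-.*\)=0$/\1=1/;s/disable bridge/enable bridge/' /tmp/sysctl-demo.conf
cat /tmp/sysctl-demo.conf
```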

Second, two additional pieces of linkrot need to be fixed before the build can be kicked off. The first is to pull in the linux-firmware tarfile created in the first stage, and the second is to fix a link to the sources for the Unified Configuration Interface (UCI), which no longer seem to be hosted on the OpenWRT archive; I was able to find a copy (with an MD5 hash matching the original) via Google.

# Fix linkrot of linux-firmware
COPY --from=fetcher /linux-firmware-d543c1d98fc240267ee59fff93f7a0f36d9e2fc3.tar.bz2 ./dl/

# hopefully relatively stable url?
RUN cd dl; wget

Finally, the actual build can be kicked off:

# Actually build all the code for the target
# Note tools and toolchain are cached from previous make invocations
RUN make -j $(nproc)

WORKDIR /src/archive/bin/brcm47xx/

This make command will compile the kernel and all the packages that are required for the base system as specified in the official release config, along with the extra packages that I added.

The WORKDIR command instructs docker to start the resulting container in the directory that contains the output files.


To flash this image, I copied the openwrt-wrt54g-squashfs.bin out of the final Docker container and used the same tftp flashing method as before to re-flash my device. The LuCI web interface provides a handy “Backup/Restore configuration” feature, so I used that to carry my configuration across installs. Once that was done, I double-checked that the new sysctl was provided by the kernel and enabled:

# ssh /sbin/sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1

With that verified, I was able to accomplish my goal of blocking outbound traffic from the lab network to my home network with a simple firewall rule:

config 'rule'
	option '_name' 'drop leaking traffic'
	option 'src' 'lan'
	option 'dest' 'wan'
	option 'proto' 'all'
	option 'src_ip' ''
	option 'dest_ip' ''
	option 'target' 'REJECT'
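UCI rules like this are rendered into iptables chains by the firewall scripts, so a quick way to confirm the rule actually took effect is to look for the REJECT entry in the generated chain (the chain name below follows Backfire-era conventions and may differ on other versions):

```shell
# Reload the firewall so the new UCI rule is rendered into iptables
/etc/init.d/firewall restart

# Backfire-era OpenWRT puts user rules in chains like 'forwarding_rule';
# look for our REJECT entry there (chain name may vary by version).
iptables -vnL forwarding_rule | grep REJECT
```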


It took me a few hours to get this fixed, but most of that was waiting on build processes in a separate terminal. Using Docker to perform software archaeology in an iterative and repeatable manner is incredibly powerful, particularly because the work is so full of trial and error. By incrementally constructing a Dockerfile, I was able to chisel away at my compilation problem until I got the end-to-end build working, and if I ever need to do this again I have a reproducible series of instructions for how to do so. I think that might be the best way to write an anti-denvercoder9 post about an issue.

With a proper firewall in place, my initial lab setup is almost complete. The only thing remaining on my initial goals is setting up TLS/HTTPS interception, which will involve generating a CA, configuring the lab tablet to trust it, configuring sslsplit on the Kali laptop and possibly setting up iptables rules on the router to intercept outbound traffic on port 443. I don’t think that will take too long to configure, so I might end up doing it when I first encounter a device that requires that capability.

You can download the final Dockerfile here.