In my first post, I documented how I was using a Linksys WRT54G router to serve as the router/firewall for my new home IoT lab.
The old router is intended to serve a separate subnet (192.168.2.0/24) for IoT devices and firewall it off from the rest of my home network. Unfortunately, I ran into a snag:
One of my goals (#1) is that the analysis subnet should be isolated by a firewall. Because NAT is the greatest firewall ever invented, I only have to worry about outbound packets from the lab network.
The easy way to do this would be for the lab router to drop packets arriving on the LAN interface that are destined for 192.168.1.0/24. Unfortunately, the OpenWRT build I’m using is so old that it doesn’t support this. Specifically, you can’t set net.bridge.bridge-nf-call-iptables=1 because the sysctl doesn’t exist (it’s compiled out for performance reasons), which prevents using iptables rules on bridged interfaces. I’m not really interested in compiling OpenWRT myself, and the only sensible configuration for the LAN is to bridge the Ethernet switch and WiFi interfaces, so I have to find an alternative. Until I do, I’ll just have to live with inbound filtering only.
Well, after toying around with different ideas – including physically plugging both ends of a cable into the router’s switch to make a “physical loopback” (note: this doesn’t seem to work) – I decided that recompiling the OpenWRT kernel version I needed with support for “bridge-nf” was the most elegant and correct solution to my problem.
I underestimated how difficult this would be.
In the nearly seven years since OpenWRT 10.03.1 “Backfire” Interim Release 1 was released, the OpenWRT build system has had some large overhauls that unfortunately led to a lot of link rot.
For starters, OpenWRT switched from svn to git, and seems to have lost some of their repository history along the way. I was able to find a full archive of the old svn repo here, though, so this didn’t present a large challenge.
The more pressing problem was that the OpenWRT source tree is primarily a build system plus metadata about the packages: information about which source packages to fetch and how to build them. It turns out a few of those downloads were broken.
To iterate on getting the build to work in a (mostly) reproducible way, I decided to dockerize it. The Dockerfile uses two “stages” to do the build: the first is an Ubuntu 18.04 layer that performs some preprocessing on the dependencies, and the second is an Ubuntu 12.04 layer that does the actual compilation. The Ubuntu 18.04 layer is used because git repositories have changed formats in the intervening time period, so old copies of git no longer work with modern repos. The 12.04 layer is used because it was a moderately contemporary operating system at the time of the release, which increases the chances that old articles about how to build OpenWRT would still be applicable.
Stage 1, an Ubuntu 18.04 container, runs a bunch of git clone operations to collect the dependencies required by the “builder” in Stage 2.
This first part of the script downloads the four git repositories that I found to be required to build OpenWRT. In order, they are:
- The primary OpenWRT source repository, containing the pinned package versions and patches for Backfire 10.03.1
- The OpenWRT packages repository, which contains the metadata about how to build all the “extra” third-party packages.
- The LuCI (Lua Configuration Interface) repository, which contains the information for building the web interface packages.
- A clone of a specific snapshot of the official “linux-firmware” repository, used to recreate from upstream an archive that this build of OpenWRT depends on but that is no longer hosted by the OpenWRT project.
All repositories are cloned shallowly (where possible) and have their revision history blown away to save layer space.
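The first stage looks roughly like this. This is a sketch, not my exact Dockerfile: the repository URLs and destination paths here are placeholders.

```dockerfile
# Stage 1: a modern Ubuntu with a modern git, used only for fetching sources.
FROM ubuntu:18.04 AS fetcher
RUN apt-get update && apt-get install -y git ca-certificates

# Clone each dependency shallowly where the server allows it, then strip the
# .git directory so history doesn't bloat the layer. URLs are illustrative.
RUN git clone --depth 1 https://example.org/openwrt-backfire.git /src/openwrt \
 && rm -rf /src/openwrt/.git
RUN git clone --depth 1 https://example.org/openwrt-packages.git /src/packages \
 && rm -rf /src/packages/.git
RUN git clone --depth 1 https://example.org/luci.git /src/luci \
 && rm -rf /src/luci/.git
# linux-firmware needs a specific historical snapshot, so it can't be shallow:
# clone it fully, then check out the pinned commit before archiving it.
RUN git clone https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git /src/linux-firmware
```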
The second stage of the docker build is responsible for the process of building and configuring the OpenWRT toolchain, then configuring, building and installing the target OpenWRT install.
The first part of the second stage installs packages and configures the container to provide an appropriate environment for building the OpenWRT native toolchain. The requirements are documented on this page.
Notably, I first tried to build all of this on Ubuntu 14.04, but even that was too new. At some point between 12.04 and 14.04, Ubuntu made use of the unsafe C gets function a compiler error, which caused the build to fail. I decided to just use an older version of Ubuntu instead of working around One More Problem.
OpenWRT also doesn’t support building as root at all for some reason, so to appease it the build must be run as a separate user.
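The start of the second stage therefore looks something like the sketch below. The package list is approximate (the Backfire build-prerequisites page has the authoritative one), and the paths are illustrative.

```dockerfile
# Stage 2: a period-appropriate Ubuntu for the actual compilation.
FROM ubuntu:12.04 AS builder
# Approximate build prerequisites; see the OpenWRT buildroot docs for the
# definitive list for this release.
RUN apt-get update && apt-get install -y \
    build-essential gawk flex bison gettext unzip \
    libncurses5-dev zlib1g-dev subversion wget

# OpenWRT refuses to build as root, so create and switch to a build user.
RUN useradd -m builder
USER builder
WORKDIR /home/builder/openwrt
```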
Next, the Dockerfile pulls in (most of) the deps that were downloaded in the first stage and updates the feeds, which involves pulling the build information about the packages exported by the ‘packages’ and ‘luci’ repositories into the main build location.
In a standard OpenWRT build at the time, the “packages” and “luci” feeds were instead referenced by SVN URLs, but those URLs are no longer alive. The workaround is to point the build system at them locally instead.
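Concretely, that means generating a feeds.conf whose entries use src-link to point at the local clones instead of the dead SVN URLs. The paths below are my assumption about the layout, not necessarily the ones I used:

```shell
# Write a feeds.conf whose entries point at local checkouts rather than the
# long-dead svn.openwrt.org URLs. Paths are illustrative.
printf 'src-link packages /home/builder/feeds/packages\n' > feeds.conf
printf 'src-link luci /home/builder/feeds/luci\n' >> feeds.conf

# In the real build, the feeds are then imported with:
#   ./scripts/feeds update -a && ./scripts/feeds install -a
cat feeds.conf
```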
Next, the Dockerfile creates a .config file for the OpenWRT build. A normal build would create this interactively with menuconfig (similar to the kernel configuration menu), but I was almost entirely satisfied with the defaults and only wanted to make a few small tweaks. The most sensible approach was therefore to download a copy of the relevant official “release” configuration for the standard images on the brcm47xx target (which covers the WRT54G) and make the desired changes. In this case, that meant disabling the extra packages and extra native tools that are intended for OpenWRT developers.
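In Dockerfile terms, that step is roughly the following. Both the URL and the exact config symbols are placeholders; the real release config uses its own option names.

```dockerfile
# Fetch the official release .config for the brcm47xx target, disable the
# developer extras, and let defconfig fill in the rest. The URL and the
# CONFIG_* symbols shown are illustrative, not the exact ones.
RUN wget -O .config https://example.org/backfire/10.03.1/brcm47xx/config \
 && sed -i 's/^CONFIG_SDK=y/# CONFIG_SDK is not set/' .config \
 && sed -i 's/^CONFIG_IB=y/# CONFIG_IB is not set/' .config \
 && make defconfig
```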
After setting up the OpenWRT configuration (which includes the target processor), the native tools and the toolchain can be built and installed. The native tools are host-side programs required to build the toolchain, like m4. The toolchain consists of a GCC compiler, binutils, and a libc for the target platform.
Two bits of linkrot are noted here. The first is described in this ticket, which is that there are two separate “official” MD5 hashes for the binutils-2.19.1.tar.gz the build system expects to download. The fix was simply to replace the expected MD5 sum with the observed one for the package hosted on the official GNU mirror.
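The fix amounts to a one-line edit to the binutils makefile before the download step. The hashes below are placeholders standing in for the two real values:

```dockerfile
# Replace the expected MD5 for binutils-2.19.1.tar.gz with the one actually
# observed on the GNU mirror. Both hashes here are placeholders.
RUN sed -i 's/PKG_MD5SUM:=<old-md5>/PKG_MD5SUM:=<observed-md5>/' \
    toolchain/binutils/Makefile
```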
The second piece of linkrot is handled by manually downloading linux-220.127.116.11.tar.bz2. The build system expects to download this from an OpenWRT mirror that no longer exists, so the workaround is to just pre-download it from the official mirror so the build system never attempts to fetch it.
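Pre-seeding works because the build system checks its dl/ directory before consulting any mirror. A sketch, with the exact version and URL left as placeholders:

```dockerfile
# Drop the kernel tarball into dl/ so the build never consults the dead
# OpenWRT mirror. The kernel.org path and version are illustrative; any
# mirror whose tarball matches the expected hash works.
RUN wget -P dl/ https://www.kernel.org/pub/linux/kernel/v2.6/linux-<version>.tar.bz2
```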
Once the toolchain/install target has been run, the OpenWRT build system has compiled GCC three times(!) and has a working copy for the target platform.
Finally, the last step of setting up the toolchain is to install the Kernel headers.
The Dockerfile sets the CONFIG_BRIDGE_NETFILTER=y kernel option (which is the option I’m trying to change!) before the header installation. I’m not entirely sure this is required, but it seemed prudent, and changing layer order later is a pain.
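A sketch of that step, assuming the target kernel config lives at the usual target/linux/&lt;target&gt;/config-&lt;kernel-version&gt; path; the exact path for Backfire’s brcm47xx target is my guess:

```shell
# Append the bridge-netfilter option to the target kernel configuration.
# In a real source tree this directory already exists; mkdir -p is a no-op.
CONFIG_FILE=target/linux/brcm47xx/config-2.6.32
mkdir -p "$(dirname "$CONFIG_FILE")"
echo 'CONFIG_BRIDGE_NETFILTER=y' >> "$CONFIG_FILE"
grep CONFIG_BRIDGE_NETFILTER "$CONFIG_FILE"
```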
During development of the Dockerfile, it was convenient to separate the make tools/install and make toolchain/install steps into separate layers, so that I could fix build issues while retaining some cached layers. This isn’t actually required, as make prepare depends on both of them.
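Splitting them gives Docker a cache boundary between the two long-running builds:

```dockerfile
# Two layers instead of one: if toolchain/install fails and needs fixing,
# the cached tools/install layer is reused on the next docker build.
RUN make tools/install
RUN make toolchain/install
```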
A few pieces of housekeeping are required before the final build step.
First, the OpenWRT configuration must be finalized. This includes adding the optional packages I want (tcpdump and libpcap) and setting up /etc/sysctl.conf to have the defaults that I want.
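The sysctl defaults can be baked into the image by appending to the copy of /etc/sysctl.conf that ships in the base files; the package/base-files/files path is my assumption about where Backfire kept it:

```shell
# Append the desired default to the sysctl.conf that ships in the image.
# In a real source tree this directory already exists; mkdir -p is a no-op.
SYSCTL=package/base-files/files/etc/sysctl.conf
mkdir -p "$(dirname "$SYSCTL")"
printf 'net.bridge.bridge-nf-call-iptables=1\n' >> "$SYSCTL"
cat "$SYSCTL"
```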
Second, two additional pieces of linkrot need to be fixed before the build can be kicked off. The first is to pull in the linux-firmware tarball created in the first stage, and the second is to fix a link to the sources for the Unified Configuration Interface (UCI), which no longer seem to be hosted on the OpenWRT archive; I was able to find a copy (with an MD5 hash matching the original) via Google.
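Those two fixes look something like the fragment below. The stage-1 tarball path and the mirror URL are stand-ins, and the UCI version is left as a placeholder:

```dockerfile
# Pull the linux-firmware tarball built in stage 1 into the download cache.
# The /src path matches the illustrative stage-1 layout above.
COPY --from=fetcher /src/linux-firmware.tar.bz2 dl/

# Pre-seed the UCI source tarball from its new home; URL is illustrative.
RUN wget -P dl/ https://example.org/mirror/uci-<version>.tar.gz
```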
Finally, the actual build can be kicked off:
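A minimal form of that final step, assuming a plain top-level make (V=99 enables verbose logging; the exact flags may differ):

```dockerfile
# Build everything: the kernel, the base system, and the extra packages.
# V=99 turns on verbose output, which helps when a package build fails.
RUN make V=99
```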
This make command will compile the kernel and all the packages that are required for the base system as specified in the official release config, along with the extra packages that I added.
A final WORKDIR command instructs Docker to start the resulting container in the directory that contains the output files.
To flash this image, I copied openwrt-wrt54g-squashfs.bin out of the final Docker container and used the same tftp flashing method to re-flash my device. The LuCI web interface provides a handy “Backup/Restore configuration” feature, so I used that to maintain my configuration across installs. Once that was done, I was able to double-check that the new sysctl was provided by the kernel and was enabled:
# ssh 192.168.2.1 /sbin/sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
With that verified, I was able to accomplish my goal of blocking outbound traffic from the lab network (192.168.2.0/24) with a simple firewall rule:
config 'rule'
	option '_name' 'drop leaking traffic'
	option 'src' 'lan'
	option 'dest' 'wan'
	option 'proto' 'all'
	option 'src_ip' '192.168.2.1/24'
	option 'dest_ip' '192.168.1.1/24'
	option 'target' 'REJECT'
It took me a few hours to get this fixed, but most of that was waiting for build processes in a separate terminal. Using Docker to perform software archaeology in an iterative and repeatable manner is incredibly powerful, particularly because the process is so full of trial and error. By incrementally constructing a Dockerfile, I was able to chisel away at my compilation problems until I got the end-to-end build working, and if I ever need to do this again I have a reproducible series of instructions for how to do so. I think that might be the best way to write an anti-denvercoder9 post about an issue.
With a proper firewall in place, my initial lab setup is almost complete. The only thing remaining on my initial goals is
setting up TLS/HTTPS interception, which will involve generating a CA, configuring the lab tablet to trust it, configuring
sslsplit on the Kali laptop and possibly
setting up iptables rules on the router to intercept outbound traffic on port 443. I don’t think that will take too long to
configure, so I might end up doing it when I first encounter a device that requires that capability.
You can download the final Dockerfile here.