I'm writing fuzzers at work, but since the internal tooling does a lot of magic fuelled by heavy automation, it has little in common with how regular people casually fuzz things. So I decided to try my hand at fuzzing OpenMW, with AFL++ and honggfuzz.
At first, I naïvely tried to fuzz the openmw binary itself, patched to exit immediately after loading all of its resources, but it is way too large, way too slow, and takes inputs way too big to be fuzzed in a meaningful way. On the bright side, the standalone tools shipped alongside it, like niftest, esmtool and bsatool, are fast and small, making them suitable targets.
I tried to fuzz with AddressSanitizer, but since OpenMW's codebase deals with too-large-to-be-true memory allocations by catching the std::bad_alloc exception, this led to a ton of false positives, since exceptions aren't supported in ASAN yet. So I had to resort to AFL_HARDEN=1 instead.
Because we're in 2021, llvm12 is packaged in modern Linux distributions, meaning that it's possible to use AFL++' LTO instrumentation, as well as laf-intel passes, without the hassle of having to compile LLVM/clang on my own.
$ export CC=~/dev/AFLplusplus/afl-clang-lto
$ export CXX=~/dev/AFLplusplus/afl-clang-lto++
$ export LD=~/dev/AFLplusplus/afl-ld-lto
$ export AFL_LLVM_LAF_ALL=1
$ export AFL_HARDEN=1
$ cmake ..
$ make -j $(nproc) niftest esmtool bsatool
$ afl-fuzz -i ../fuzzin_nif -o ./out_nif -d -f /tmp/test.nif -- ./niftest --input-file /tmp/test.nif
$ afl-fuzz -i ../fuzzin_esm -o ./out_esm -d -x ../esp.dict -- ./esmtool -C -p -q @@
$ afl-fuzz -i ../fuzzin_bsa -o ./out_bsa -d -- ./bsatool list -l @@
After a couple of days of fuzzing, my hosting provider told me that I had to reboot the machine as soon as possible, likely due to a kernel upgrade.
So I merged the output of all the AFL++ instances, ran fdupes on it, and tried to minimize the result with AFL++' corpus minimizer, but it crashed, so I used honggfuzz instead. Unfortunately, honggfuzz doesn't like AFL++' instrumentation, so I had to recompile my targets with pc-guard to be able to run:
$ honggfuzz -M -i ../fuzz_in --output ../fuzz_in_minimized -- ./esmtool dump -C -p -q ___FILE___
Also, always use --output, because if your minimizer doesn't like your instrumentation for whatever reason, odds are that it will consider all the files in your corpus to have a coverage of zero, and will thus trash everything.
I had around 20,000 files in my corpus, and since honggfuzz' minimisation doesn't take advantage of multiple cores, it took around 4 hours to minimize everything down to ~5,000 files.
After spending some time reading AFL++' documentation and tuning power schedules, I looked at FuzzBench and switched to honggfuzz, since it performs roughly the same without requiring a manually tuned fuzzer instance per core to get everything rolling the way it should.
$ honggfuzz --threads $(nproc) -i ../fuzz_in_esm -x ../esp.dict -- ./esmtool -C -p -q ___FILE___
$ honggfuzz --threads $(nproc) -i ../fuzz_in_bsa -- ./bsatool list -l ___FILE___
$ honggfuzz --threads $(nproc) -i ../fuzz_in_nif -e nif -- ./niftest --input-file ___FILE___
In the end, I used a mixture of the two, to take advantage of honggfuzz' high efficiency/complexity ratio as well as AFL++' interesting power schedules. For comparison, honggfuzz ran at around 50 execs/s, while AFL++ managed around 175 execs/s.
All of this led to a couple of bugs:
- an off-by-one
- a non-zero-terminated string
- a read heap-buffer overflow
- another read heap-buffer overflow
- a DoS
- a crash
Those are now fixed, mostly thanks to elsid handholding me into writing acceptable C++. My fuzzers' coverage hasn't increased in a couple of days, so it's time to wrap up and publish this blogpost.
If you want to fuzz OpenMW on your own, I documented everything on the wiki, and I would be happy to share my corpus.