
Fuzzing so far: Honggfuzz, libFuzzer, and diving into LibAFL

April 4, 2026
3 min read
Security

TL;DR: No bugs filed yet, but the stack is clearer: multi-process vs in-process fuzzers, why the harness matters, and how LibAFL's pieces fit together. I'm documenting as I go and contributing upstream.


Where it started

I have been interested in fuzzing for about three years. So far I have not landed a vulnerability (no shiny CVE), but I have picked up a lot along the way: how real fuzz campaigns are structured, where time actually goes, and why "just run a fuzzer" rarely tells the whole story.

Early on I used:

  • Honggfuzz: a straightforward multi-process fuzzer. It spawns and supervises worker processes for you, so you get parallel executions without building the orchestration yourself.
  • libFuzzer: LLVM's in-process, coverage-guided fuzzer. Tight loop and fast feedback, but you live inside one process, so the shape of the harness (and any state it leaks between runs) really matters.

That contrast alone taught me something important: the execution model (forking workers vs in-process) changes how you think about crashes, timeouts, and reproducibility.
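To make "the harness shape really matters" concrete, here is a plain-Rust sketch of the in-process shape: one function, one byte slice, no state carried into the next call. This is not libFuzzer's actual entry point (that is `LLVMFuzzerTestOneInput` in C/C++, or `fuzz_target!` via the libfuzzer-sys crate in Rust), and `parse_header` is a made-up stand-in for whatever code you actually want to exercise.

```rust
/// Hypothetical target: expects a 4-byte magic followed by a version byte.
fn parse_header(data: &[u8]) -> Result<u8, &'static str> {
    if data.len() < 5 {
        return Err("too short");
    }
    if &data[..4] != b"FUZZ" {
        return Err("bad magic");
    }
    Ok(data[4])
}

/// The harness: called once per input. It must be deterministic,
/// must tolerate arbitrary bytes, and must not leak state forward.
fn harness(data: &[u8]) {
    // Parse errors are expected on malformed input; only crashes
    // and panics are findings.
    let _ = parse_header(data);
}

fn main() {
    // In a real campaign the fuzzer drives this loop millions of
    // times; here we just show the harness surviving a few inputs.
    for input in [&b""[..], &b"FUZZ\x01"[..], &b"JUNKJUNK"[..]] {
        harness(input);
    }
    println!("harness ran on 3 inputs");
}
```

The in-process model buys speed precisely because nothing isolates one call from the next, which is why a leaky harness quietly corrupts every later result.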


January 2026: going deep on LibAFL

From January 2026 I started seriously learning LibAFL: a Rust framework for building custom, feedback-driven fuzzers. It's the kind of project where reading the architecture pays off before you copy-paste a tutorial harness.

What I focused on:

  1. Overall architecture: how the library splits concerns so you can swap observers, feedback, stages, and schedulers instead of forking a monolith.
  2. Writing a good harness: minimal, deterministic, fast reset between runs, and actually exercising the code you care about (not just "it compiles").
  3. StdFuzzer and friends: how the "standard" fuzzer wiring fits together in practice.
  4. Kinds of fuzzing: coverage-guided, structure-aware ideas, different ways to feed inputs depending on the target.
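The component split in point 1 and the coverage-guided idea in point 4 can be sketched in one self-contained toy. To be clear: none of the types or names below are LibAFL's real API (LibAFL's equivalents are things like observers, feedbacks, schedulers, and mutational stages); this is an invented miniature showing why splitting those concerns makes each piece swappable.

```rust
/// "Observer" + target in one: runs the input and records which of
/// four hypothetical branches fired.
fn run_target(input: &[u8], coverage: &mut [bool; 4]) {
    coverage.fill(false);
    if !input.is_empty() { coverage[0] = true; }
    if input.first() == Some(&b'F') { coverage[1] = true; }
    if input.len() > 2 && input[1] == b'U' { coverage[2] = true; }
    if input.starts_with(b"FUZ") { coverage[3] = true; }
}

/// "Feedback": an input is interesting only if it lights up an edge
/// we have never seen before.
struct NoveltyFeedback { seen: [bool; 4] }
impl NoveltyFeedback {
    fn is_interesting(&mut self, cov: &[bool; 4]) -> bool {
        let mut novel = false;
        for i in 0..4 {
            if cov[i] && !self.seen[i] { self.seen[i] = true; novel = true; }
        }
        novel
    }
}

/// "Mutator": random havoc-style edits, driven by a tiny xorshift
/// PRNG so the sketch stays dependency-free. Parents are never empty.
fn mutate(input: &[u8], rng: &mut u64) -> Vec<u8> {
    let mut next = || {
        *rng ^= *rng << 13;
        *rng ^= *rng >> 7;
        *rng ^= *rng << 17;
        *rng
    };
    let mut out = input.to_vec();
    match next() % 3 {
        0 => out.push(b"FUZ"[(next() % 3) as usize]),
        1 => {
            let i = (next() as usize) % out.len();
            out[i] ^= 1 << (next() % 8); // flip one random bit
        }
        _ => {
            let pos = (next() as usize) % (out.len() + 1);
            out.insert(pos, b"FUZ"[(next() % 3) as usize]);
        }
    }
    out
}

fn main() {
    let mut rng: u64 = 0x5eed;
    let mut corpus: Vec<Vec<u8>> = vec![b"a".to_vec()]; // seed corpus
    let mut feedback = NoveltyFeedback { seen: [false; 4] };
    let mut cov = [false; 4];
    for round in 0..2000 {
        // "Scheduler": plain round-robin over the corpus queue.
        let parent = corpus[round % corpus.len()].clone();
        let child = mutate(&parent, &mut rng);
        run_target(&child, &mut cov);     // execute + observe
        if feedback.is_interesting(&cov) {
            corpus.push(child);           // keep inputs that found new edges
        }
    }
    let covered = feedback.seen.iter().filter(|s| **s).count();
    println!("corpus: {} inputs, {}/4 edges covered", corpus.len(), covered);
}
```

Because the feedback, mutator, and scheduler only touch each other through narrow interfaces, swapping any one of them is local surgery, which is exactly the property LibAFL's architecture generalizes.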

I tried several setups along the way: not every experiment needs a blog post, but each one clarified what LibAFL expects from you as an integrator.


What actually matters as a user

For someone using a fuzzing library like LibAFL, most of the outcome is decided by:

  • Harness: defines what runs per input and how state is reset between runs. A bad harness hides bugs or wastes cycles.
  • Target: the project you attach the fuzzer to, with everything that entails: parsing logic, protocol state, sanitizer builds, corpus seeds.
  • Feedback loop: whatever your observers and feedback encode, typically coverage, crashes, and timeouts.

The library gives you engines and plumbing; you still own the semantics of the test.
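The "how state is reset" part of the harness bullet deserves a concrete sketch, because it is the classic way to own the semantics badly. The `Session` type below is invented for illustration: a stateful target where a fresh state per input keeps every run reproducible from the input bytes alone.

```rust
/// Hypothetical stateful target: "DROP" is only legal after "AUTH".
#[derive(Default)]
struct Session {
    authenticated: bool,
}

impl Session {
    fn handle(&mut self, msg: &[u8]) -> Result<(), &'static str> {
        if msg == b"AUTH" {
            self.authenticated = true;
            Ok(())
        } else if msg == b"DROP" {
            if self.authenticated { Ok(()) } else { Err("DROP before AUTH") }
        } else {
            Err("unknown message")
        }
    }
}

/// Good harness: a fresh Session per input, so the verdict on each
/// input depends on that input and nothing else.
fn harness(msg: &[u8]) -> Result<(), &'static str> {
    let mut session = Session::default(); // the reset happens here
    session.handle(msg)
}
```

The failure mode of reusing one `Session` across inputs: once any earlier input sent "AUTH", every later "DROP" succeeds, so results depend on input order and a crash no longer reproduces from a single saved file. Sometimes the cross-input state is the thing you want to fuzz, but then it should be rebuilt deterministically from the input, not inherited from the previous run.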


Current status of learning

I have been writing notes as I learn: architecture sketches, harness patterns, things that confused me once and might confuse the next person. That habit makes it easier to contribute back.

I am contributing to LibAFL now and I don't see that stopping anytime soon. Open source fuzzing moves fast; staying close to upstream is how I keep the mental model current.


Truth

No CVE yet, and that's OK for a multi-year learning path. The useful part is knowing why Honggfuzz vs libFuzzer felt different, what LibAFL is optimizing for, and where the real work sits (harness + target). Everything else is iteration.

If you are also on this road: document the boring bits, keep seeds and builds reproducible, and treat every "still no crash" week as calibration, not failure.

About the Author

muon is trying to just learn