Hacking the Virgin Media Super Hub
By Jan Mitchell and Andy Monaghan, 12 June 2017
Update: Following several queries we can confirm that all Super Hub 2 devices use 'changeme' as the default password for administrative access. The Super Hub 2AC also uses this password by default, but has the additional requirement of a per-device WPS PIN. Users are prompted, but not required, to change the default password. An attacker would not need access to an existing backup to exploit this vulnerability.
Context’s Research team have looked at a large number of off-the-shelf home routers in the past and found them to be almost universally dreadful in terms of security posture. However, flagship routers from large ISPs such as BT, Sky and Virgin Media are notably absent from the regular stream of router vulnerabilities in the press. We were curious to discover if these routers were significantly more secure than their off-the-shelf cousins, so we decided to dedicate some of our public research time into looking at one of these devices.
A straw poll of the office revealed that most of us are Virgin Media customers, so we set about looking for second-hand Super Hubs. At the time of the research (October 2016) the latest Super Hub (SH) was the SH2AC (VMDG490), but given the prevalence of the SH2 (VMDG485) we got hold of a couple of these as well. This blog details our approach to vulnerability research and our findings, which included the ability to get a root shell from the LAN.
Depending on the type of product we are asked to evaluate, Context’s Product Security Evaluations typically follow these steps:
- Determine the available attack surface, which may include network interfaces, input devices, and external ports such as USB. At this point we would look for ‘quick wins’ such as command injection via administrative interfaces.
- Understand the software running on the device. This includes looking at any open source components, attempting to obtain firmware from the vendor (if provided), enumerate versions of various components and check for existing CVEs.
- Perform a hardware teardown, including identifying any debugging interfaces, understanding the type and form factor of any storage devices (e.g. NAND flash modules). If necessary attempt to extract firmware.
- Vulnerability Research and exploit development. Using the source/firmware for the product – attempt to gain unauthorised code execution on the device.
We quickly discovered that the Super Hub’s web-based administrative interface had been subject to security testing, as input appeared to be well sanitised – a refreshing change from your standard off-the-shelf home router.
We decided to move on to a more detailed analysis of the running software. As both Super Hubs are manufactured by Netgear, the source code for all GPL components is available from their website for both the Super Hub 2 and 2AC.
While the source code was useful to understand what was running on the Hubs, we were still after specific information on the internal workings of these devices. The firmware update process is completely automated and is conducted, quite correctly, using secure channels. This meant we needed to look to the hardware in order to retrieve a representative firmware image for analysis.
Obtaining a Firmware Image
Connecting to the serial interface of most home routers using a 3.3v USB-to-Serial adaptor is fairly straightforward, particularly for those of you who follow our blog regularly, so we won’t labour the point here. Suffice to say that there are a couple of UART interfaces: one for the main System-on-Chip (SoC), an ARM-based Intel Puma 5, and another for the Atheros wireless chip (AR9344).
After a cursory look at the Atheros chip, whose serial output appeared pretty sparse, we moved on to the main SoC. While we could see output from the U-Boot bootloader, it appeared that the ‘autoboot cancel’ feature had been disabled, meaning we couldn’t interact with the bootloader.
The screenshot below shows the initial U-Boot output.
This configuration is sensible, as it would dissuade most people from continuing. However, we like to be thorough, so we considered how to interrupt the boot process without spending too much time and effort. The output in Figure 1 suggested that U-Boot was executing a boot script, which was definitely something we wanted to investigate.
The first step was to obtain a copy of the bootloader by reading the flash memory. Given we didn’t have the ability to input characters, this would be somewhat tricky to do in software, so we fired up the hot air gun and removed the Spansion (S25FL129P) SPI NOR flash chip.
There are a number of ways to read data from a flash chip, all of which we will be detailing in another blog shortly. In our case, as our preferred I2C/Serial Peripheral Interface (SPI) reader was in another office we used a BeagleBone Black and a bit of Python to manually drive the chip’s SPI bus, as seen in the photo below.
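These SPI flash parts speak a simple protocol: a READ opcode (0x03) followed by a 24-bit big-endian address, after which the chip streams out data for as long as the clock runs. The sketch below shows the command framing involved; the helper names and the `xfer` callable (standing in for something like a spidev transfer wrapper) are ours, not the tooling we actually used:

```python
def spi_read_cmd(addr: int, opcode: int = 0x03) -> bytes:
    """Build the command bytes for a standard SPI-NOR READ:
    one opcode byte followed by a 24-bit big-endian address."""
    if not 0 <= addr < (1 << 24):
        raise ValueError("address must fit in 24 bits")
    return bytes([opcode, (addr >> 16) & 0xFF, (addr >> 8) & 0xFF, addr & 0xFF])

def dump_flash(xfer, size: int, chunk: int = 4096) -> bytes:
    """Read `size` bytes by issuing one READ command per chunk.
    `xfer` is a callable that clocks the given bytes out on the SPI
    bus and returns the bytes clocked in at the same time."""
    out = bytearray()
    for addr in range(0, size, chunk):
        n = min(chunk, size - addr)
        # The chip ignores what we clock out after the address; we
        # only send n dummy bytes so that n data bytes come back.
        rx = xfer(spi_read_cmd(addr) + b"\x00" * n)
        out += rx[4:]  # discard the 4 bytes received while sending the command
    return bytes(out)
```

Reading a 32MB part this way is slow but entirely adequate for a one-off dump.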
Now that we had a 32MB flash image we could see the U-Boot boot loader in all its glory.
Both Super Hubs used U-Boot scripts to perform the second stage of the boot process and to initialise and run the Linux kernel. By making minor modifications to the U-Boot script in the flash image we were able to redirect kernel output to the UART connection and force the boot script to exit, enabling the U-Boot console. Once these modifications were made, the flash chip was re-soldered to the board. For the Super Hub 2 this was sufficient. For the Super Hub 2AC, which splits its boot process between a NAND and a NOR chip, the U-Boot console was additionally used to patch parts of the second stage loader in memory, including fixing up several checksums, before executing it during the boot sequence.
Finally, via the U-Boot console, we were able to modify the kernel command line parameters to enable verbose kernel boot messages via the serial cable.
Identifying the Vulnerability
Now that we had access to both a firmware image and kernel debug output via the UART interface, we were able to examine the cgi-bin binaries that service the administrative web interface. These binaries are responsible for implementing actions the user requests via the web interface, such as adding firewall rules or pinging IP addresses, which could be vulnerable to command injection. After a reasonably detailed examination of these interfaces we were able to confirm our suspicion that the user interface had been subject to hardening and that input was appropriately sanitised. Just as we were about to abandon this line of investigation we took a look at the backup and restore mechanism, as we had seen success in this area in previous vulnerability assessments.
The backup/restore interface is used to take a copy of the custom configuration settings a user has made to a Super Hub, for instance firewall or port forwarding rules. The user can then later re-submit this configuration file to the router to restore their settings; useful if a factory reset of the device is needed without lots of manual reconfiguration. This functionality was intriguing as it potentially allowed attacker controlled data (a malicious configuration file) to be uploaded directly into the Super Hub for parsing.
To begin our investigation we retrieved a configuration file from our Super Hub to try to discern the data format, giving us an idea of how we might craft a malicious configuration. It quickly became apparent that the exported file was encrypted: it contained 120KB of structureless binary data with normalised entropy close to 1. Our interest was piqued and we decided to investigate further.
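That entropy verdict takes only a few lines to reach; the sketch below (ours, not the tool used at the time) scales Shannon entropy to the range 0 to 1, where well-encrypted or well-compressed data scores close to 1:

```python
import math
from collections import Counter

def normalised_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, scaled to [0, 1].
    A value near 1.0 means near-uniformly random bytes, as expected
    of encrypted or compressed data; plaintext scores much lower."""
    if not data:
        return 0.0
    n = len(data)
    h = -sum((c / n) * math.log2(c / n) for c in Counter(data).values())
    return h / 8.0  # 8 bits per byte is the theoretical maximum
```

Running this over the exported blob, as opposed to a plaintext file, makes the contrast obvious at a glance.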
The binary file responsible for encrypting and decrypting the configuration blob, libunihan_utilities.so, contains two functions for dealing with the configuration blobs, des3_encrypt_backup and des3_decrypt_backup. This library is compiled against OpenSSL and uses the function DES_ede3_ofb64_encrypt to perform a Triple-DES encryption/decryption of the blob before it is sent in/out of the router. Crucially, the keys used for this process are hardcoded in the binary, which meant, with knowledge of the keys and algorithm in use, we were able to write a small C program to decrypt the configuration blobs we had retrieved from our Super Hub.
Once we were in possession of a decrypted configuration blob we attempted to determine its structure. The Linux command-line utility file rather unhelpfully identified it as “data”. Dumping the first handful of bytes using xxd and head showed that the backup appeared to be a TAR file with some unknown data prepended. Running the venerable binwalk tool confirmed these suspicions, finding a valid TAR file at offset 0x4.
After reverse engineering the file VmRgBackupCfgCgi, the binary responsible for processing the configuration file once it has been uploaded to the router, we discovered the first 4 bytes represent a CRC-32 checksum of the adjacent TAR file. The contents of the TAR file appeared to be a partial backup of Non-Volatile RAM (or NVRAM). This is a common way to store configuration information on embedded devices across power cycles. The contents of these files clearly contained various snippets of sensitive configuration information such as the credentials for the administrative interface. These files will be discussed in more detail later in this post.
As we control both the contents of the TAR file and the CRC32 checksum we have all we need to craft our own backup files which will be accepted and parsed by the router.
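Putting those observations together, crafting the plaintext portion of a backup is straightforward. A sketch in Python follows; the little-endian byte order of the checksum is our assumption here, and a real payload would still need the Triple-DES pass described above before upload:

```python
import io
import struct
import tarfile
import zlib

def build_backup(files: dict) -> bytes:
    """Pack {archive-path: content} into a TAR and prepend the CRC-32
    of the TAR bytes, matching the layout binwalk revealed (4-byte
    checksum, then the TAR at offset 0x4). Checksum byte order is
    assumed little-endian."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, content in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(content)
            tar.addfile(info, io.BytesIO(content))
    blob = buf.getvalue()
    return struct.pack("<I", zlib.crc32(blob)) + blob

def check_backup(data: bytes) -> bool:
    """Verify the leading checksum against the adjacent TAR, as the
    router does before extracting."""
    (crc,) = struct.unpack("<I", data[:4])
    return crc == zlib.crc32(data[4:])
```

With this in place, any file paths we choose can be smuggled inside a checksum-valid backup.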
At this point we know how to construct a valid configuration blob and have it processed by the router, but what can we actually do with this knowledge? Delving back into VmRgBackupCfgCgi tells us how the file is processed and is the starting point of our vulnerability. Once a valid configuration file has been verified and the TAR written to /var/tmp/backup_var on the router, the following command is run:
tar -xf /var/tmp/backup_var -C /
This command is very powerful. It will untar the TAR file we provide to the root of the Super Hub file system as the root user, giving us what appears to be an arbitrary file write/overwrite. This is more general than the command needs to be; all of the configuration files it legitimately replaces are contained within /nvram. So what can we overwrite? Whilst this appears at first glance to be an arbitrary file overwrite, we do have some restrictions. The majority of the Super Hub file system is mounted as read-only SquashFS. Only a few interesting areas are actually writeable while the hub is running:
- /tmp (symlink to /var/tmp)
- Some files within /etc (they are actually symlinks to /tmp)
So within these writeable portions of the file system we need to find a file we can write or overwrite to gain further access to the router. Fortunately for us, within /etc/init.d/ we found that the start-up script rcS provides a suitable candidate.
The rcS file is used by the Super Hub router to start key services, set kernel parameters and perform other general system setup. It is invoked every time the router powers on. Amongst the various functions of this file, one section grabbed our attention:
This section of the script checks for the existence of a file, /nvram/0/sys_setup.sh, and executes its contents if it is present on the file system. The Super Hub does not ship this file within its firmware, so the “else” clause executes and performs a few service start-up commands. This section of script is incredibly useful to us: we have write permissions to /nvram, it is persistent across reboots, and it gives reliable, guaranteed execution at every boot. We can now combine our arbitrary file write with this remnant of a script (possibly left over from debugging or per-device customisation) to gain full control of the Super Hub.
To exploit this flaw we create a local directory structure of /nvram/0/ and create the file sys_setup.sh within it. This can then be tar’d, CRC32’d and encrypted to create a valid configuration backup. Once submitted to the router it will be decrypted, validated and then untar’d to the root of the file system. Consequently our script will get execution when the Super Hub reboots, which usefully happens as part of the backup/restore procedure. To test our theory we created a script to drop firewall rules and created a little C program to properly package it for backup. Uploading it appeared to be successful as seen below.
After waiting for the system to fully boot we re-scanned the Super Hub to see if our modifications to the firewall rules had the desired effect (see below). It’s worth noting that this is a port scan of the LAN interface, as we didn’t have a cable WAN connection in our lab, but the telnet daemon is listening on all interfaces.
Attempting to connect revealed some unusual behaviour: the connection was accepted by the Super Hub but then immediately torn down before any prompt was displayed. To find out why, we turned to the customised utelnetd binary.
utelnetd is a small binary which implements the telnet protocol and connects an external TCP stream to an internal program, the default (used on the Super Hub) being /bin/sh. The version present on the Super Hub has been modified from the default open source version to add some new features. By reverse engineering the binary to find where the connection is accepted we can see one of these modifications:
The top block in this disassembly shows the connection being accepted with a call to the library function accept(). The second basic block of this disassembly is where the code has been modified. The decision as to whether the connection will proceed or be terminated is based on the return value from a function call (the BLX instruction). This function call clearly checks some sort of internal status to see if telnet access is allowed. What exactly is it checking and is there any way we can modify this setting from within our malicious script?
The implementation of the function in question resides in a shared library imported by utelnetd. The majority of this shared library consists of helper functions to get and set various persistent parameters for the Super Hub. Further reverse engineering showed that it implements an in-memory database of settings, read from a set of configuration files in the /nvram directory at start-up and persisted back on reboot/shutdown. As previously mentioned, the /nvram directory consists of a series of numbered directories and files, and is what a legitimate backup writes to disk. It turns out that each file uses a Type-Length-Value (TLV) encoding.
Looking inside some of these files we can see a number of familiar strings such as our administrative username and password.
The image above shows the partial contents of one of the configuration files; this one backs the ManagementDb (other files back other databases, e.g. PortForwardingDb, MacFilteringDb). Each file starts with a 28-byte header (in red), followed by a series of TLVs. The “type” in this case specifies the record number (in green), and the “length” value follows (in green); both are 2-byte values. Next follow length bytes of the value (in orange). Each record holds the configuration details for a certain aspect of the Super Hub. From reversing the various binaries involved in managing the nvram configuration, we know that the 10th record flags whether telnet access is available: it is a single-byte value, set to 0 to disable telnet and 1 to enable it.
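A parser for this layout takes only a few lines. The sketch below is ours; it assumes big-endian 2-byte type and length fields, which would need confirming against the binaries:

```python
import struct

HEADER_LEN = 28  # fixed header observed at the start of each nvram file

def parse_tlv(data: bytes) -> dict:
    """Parse the record-number/length/value triples that follow the
    28-byte header of an nvram configuration file. Returns a mapping
    of record number to raw value bytes. Field byte order (big-endian
    here) is an assumption."""
    records = {}
    off = HEADER_LEN
    while off + 4 <= len(data):
        rec_type, rec_len = struct.unpack_from(">HH", data, off)
        off += 4
        records[rec_type] = data[off:off + rec_len]
        off += rec_len
    return records
```

Dumping the records of each file this way makes it easy to spot single-byte flags such as the telnet setting.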
The configuration files are read at system start up and then managed in memory for the run time of the Super Hub. If we can flip the telnet enabled byte before the nvram service starts and processes the files then we can enable telnet for the device. The following line inserted into our malicious script file can achieve this:
printf '\x01' | /bin/dd of=/nvram/9/5 bs=1 seek=534 count=1 conv=notrunc
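The dd invocation overwrites the single byte at offset 534 of /nvram/9/5 with 0x01, without truncating the rest of the file (conv=notrunc). For clarity, the equivalent operation as a sketch:

```python
def set_byte(path: str, offset: int, value: int) -> None:
    """Overwrite one byte of a file in place, leaving everything
    else untouched -- the same effect as:
    printf '\\x01' | dd of=<path> bs=1 seek=<offset> count=1 conv=notrunc"""
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(bytes([value]))
```

On the router itself the byte flip naturally has to happen via dd, since the script runs under the BusyBox shell with no Python available.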
This forces telnet access on and when we drop the firewall we can finally connect to the telnet service on any interface and at last, using credentials gleaned from the configuration files, we are rewarded with our root shell:
On discovering these issues, Context reported them to Virgin Media and provided proof-of-concept code. After verifying our findings, Virgin Media worked with us to develop mitigations, which were released as part of their existing firmware patching cycle. We would like to thank Virgin Media for their professionalism and responsiveness in working with Context to fix this issue.
The following timeline shows the main events in the disclosure process:
- 20 Oct 2016: Initial disclosure via http://virginmedia.com/netreport
- 20 Oct 2016: VM’s Internet Security Team request further detail which Context provide
- 24 Oct 2016: Context and Virgin Media hold conference call to discuss disclosure in detail. Context provide proof-of-concept code.
- Nov 2016 - Feb 2017: Virgin Media work with Netgear and Context to develop and test patch across both devices
- May 2017: Virgin Media roll out patch as part of scheduled firmware update