Updating SSHD on the Cyclades ACS series

(or, changing things without really changing things)

As a user of legacy hardware that often can't handle an ethernet connection, I needed a way to get those machines online to ease the transfer of data to and from them. To solve this, I chose a "terminal server" device, and settled on an Avocent/Cyclades "ACS" series system.

There were two primary reasons for the choice. First and most important, they're well documented. Second, and just as important at least for me, they run Linux and offer a root shell to anyone in need of one. This is handy because software inevitably develops security issues, and hardware is often abandoned once newer or better hardware hits the market. This is the case, sadly, with our beloved Advanced Console Server.

I often become interested in the Open Source releases for hardware simply "because I can," but in this instance I needed something to update the SSH daemon on the system. The version included with the last firmware release for the system is 'OpenSSH_4.4p1, OpenSSL 0.9.8l 5 Nov 2009' as reported by 'sshd -V'. Anyone looking up security issues with that version of SSH would experience shivers and curled toes.

This post is intended to provide an entertaining look at the process I went through while learning not only how the system works, but also how it handles configuration updates, and how to find and use a build environment to aid those updates. The process isn't perfect yet, as the current solution is to use the Dropbear SSH daemon; I'd eventually like OpenSSH to do the job.

The information in this article comes from a handful of sources online, but due to the litigious nature of the internet and the habit of internet entities frankly concentrating on ass-covering over defending their users (cough I use a -free- WordPress account cough), I'll provide search keys for that great semi-trustworthy search engine empire that should direct us all to the sources we need. These will be included at the end of the article, but if I forget one, feel free to remind me and I'll do an update. /finally inhales.

 

Understanding what the ACS is

(or, grokking the gateway)

Console Servers are often also referred to as Terminal Servers. They’re literally made to convert a serial connection (or two, or maybe forty-eight) to a network connection of some sort. In the case of the ACS series, the options are pretty impressive considering the age of the hardware.

Besides simply putting a serial terminal on a network using TCP sockets, these can also handle modems for dialing in or out, put serial devices such as plotters or printers on the network, or even run custom applications if the 'acslinux' open source package was requested.

I've noted above that these ACS systems run Linux. They also provide a serial console, like many systems of their era, and connecting to it allows logging in as root and even resetting the system's configuration by adding 'single' to the kernel command line before booting. Beyond that step, a full root shell in a ramdisk is offered to manage and utilize the system's resources.

The challenges are getting beyond the volatile nature of a ramdisk and understanding how to do so in a way that works with the firmware implementation. The first article in this series covers understanding the system.

Further articles will cover changing the system's configuration, and thereby its behavior, and ensuring that configuration change survives a power-down or reboot.

Understanding the ACS’ configuration files

(or, a segment of an unwritten manual for the Cyclades ACS)

Like many embedded systems I've come across, the Cyclades ACS runs entirely within a ramdisk that's loaded during startup. That ensures a sane configuration is put in place to operate the hardware as intended by the manufacturer. After the ramdisk is loaded into memory, another location is used to read configuration data and update this ramdisk.

In the ACS series, a compact flash card stores the bootloader, kernel, ramdisk, and the configuration data. How exactly it’s laid out isn’t apropos to these articles and is somewhat beyond my understanding. I’m not interested (yet /grin) in changing this, so it’s not a concern. As much as I dislike ‘magic’ or ‘black boxes’ I’m going to have to let this go, for now. Grr.

So, how do we determine how this is done? It's a Linux system. Look at the startup scripts! We're dealing with a simple script system here, not anything convoluted or over-engineered like systemd. The startup process is summarized below.

A large portion of the system is based on busybox, which is modified somewhat for this system; it runs one script of interest, /bin/init_ram.

Init_ram takes care of setting up system mounts and pcmcia before loading configuration data using a script at /bin/restoreconf.

Reviewing /bin/restoreconf sheds light on how the system saves its configuration between runs: a list of files is chosen based on the contents of /etc/config_files and saved as config.tgz to a custom filesystem mounted on /mnt/flash.

The contents of config.tgz are unpacked over the root filesystem, overwriting files that have changed from the factory configuration and adding new ones.

/etc/config_files will be key to the SSHd upgrade later on.
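As a quick sketch of how the two halves fit together (assuming config.tgz lives under /mnt/flash as described above, and that your firmware's file list resembles mine), you can inspect both from the root shell:

# Files the firmware considers worth preserving across reboots
cat /etc/config_files

# What was last saved into the flash-backed archive
tar tzf /mnt/flash/config.tgz | more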

After busybox's built-in preconfiguration, /etc/rc.sysinit is run. A lot of the usual Linux init happens here, such as tuning the system, setting the hostname, and starting up networking.

After rc.sysinit, several scripts are run at once, the most important of which is /bin/daemon.sh. This script reads /etc/daemon_list, which contains a list of daemons to start after sysinit. The format is pretty simple: hashed (comment) lines are ignored via an inverse grep, and the remaining lines are in the format 'nickname dfile', where nickname is a name for the daemon and dfile is the file responsible for controlling it. These files are called via /bin/handle_daemon.sh.

/etc/daemon_list is another key to the SSHd upgrade.
The files in /etc/daemon.d/, which are referred to by /etc/daemon_list, are the last key to getting the SSHd upgrade in place.
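To make the format concrete, here's a hypothetical /etc/daemon_list entry; the nickname and script path are made up for illustration, and real entries on the ACS will differ:

# comment lines like this one are skipped by the inverse grep
# format: nickname dfile
sshd    /etc/daemon.d/sshd.sh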

Building programs for a legacy system

(or, digging for tools waaaay back in the shed where the dust bunnies live)

Building programs for a given system requires understanding a little of its hardware. Linux will abstract just about everything, but it's necessary to know what processor is at the heart of the system. This is often easily obtained via a single command:

# cat /proc/cpuinfo
processor : 0
cpu : 8xx
clock : 48MHz
bus clock : 48MHz
revision : 0.0 (pvr 0050 0000)
bogomips : 47.36

Well then. Isn't that incredibly descriptive. It's an 8-what now? And 48MHz? The system is fitted with PC133 SDRAM. I guess the memory's not going to be a bottleneck here.

Let's check a binary on a system with 'file' installed.

bin$ file busybox
busybox: ELF 32-bit MSB executable, PowerPC or cisco 4500, version 1 (SYSV), dynamically linked, interpreter /lib/ld.so.1, for GNU/Linux 2.4.17, stripped

Well, look at that. It's apparently a PowerPC 8xx. Startup messages also mention MontaVista and Hard Hat Linux. The only thing I've been able to learn about Hard Hat is that it's an apparently not-for-free Linux distribution that's carefully kept behind password-protected gateways.

With that avenue closed, a second option appears viable: DENX's ELDK, aka the Embedded Linux Development Kit. Thankfully it's still present on their site and fairly well documented! Searching for ppc_8xx on the internet finds it pretty quickly, linking to a 'working with' page. Using ELDK on a newer system, as I'm doing, does require a bit of trickery, but that's covered later and mostly impacts cross-compiling.

Cross-compiling is often the ideal way to go, but it's not without its challenges. Did I mention earlier that I've not yet succeeded in getting OpenSSH's configure script to finish? Meh. 😛

I'm taking the easy method of cross-building often used (today, anyway) by owners of small ARM systems such as Raspberry Pis and their friends. Essentially, it involves installing a system emulator in a chroot and building programs "natively" under emulation.

The ELDK installation provides a chroot-able tree that can be used to build smaller software sets, which led to the decision to use Dropbear SSH.

On Ubuntu, the qemu-user-static package provides CPU emulation while still using system calls in native space. The files in /usr/bin/qemu-ppc* need to be copied to the chroot as described below so the system can use qemu to handle launching cross-platform binaries.
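On a stock Ubuntu host that's only a package install away (a minimal sketch; package and binary names as shipped by Ubuntu):

sudo apt-get install qemu-user-static
ls /usr/bin/qemu-ppc*    # qemu-ppc-static and friends should appear here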

I followed the standard installation location of ELDK, /opt/eldk. Choosing the ppc_8xx target creates a tree at /opt/eldk/ppc_8xx. I made a backup just in case I overwrote some files during a build and needed to recover.

Starting the environment enough to run a build was fairly simple. First, get the Dropbear sources, copy them to /opt/eldk/ppc_8xx/usr/src/, and unpack them there. Next, copy /usr/bin/qemu-ppc* to /opt/eldk/ppc_8xx/usr/bin/. Once that's done, you can run 'sudo chroot /opt/eldk/ppc_8xx' and be greeted by a root prompt.
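Roughly, those steps look like this; the Dropbear release matches the one used later in this article, and the download URL is my assumption, so adjust as needed:

# Fetch and unpack the Dropbear source inside the ELDK tree
cd /opt/eldk/ppc_8xx/usr/src
sudo wget https://matt.ucc.asn.au/dropbear/releases/dropbear-2018.76.tar.bz2
sudo tar xjf dropbear-2018.76.tar.bz2

# Give the chroot the qemu user-mode binaries so PPC executables can run
sudo cp /usr/bin/qemu-ppc* /opt/eldk/ppc_8xx/usr/bin/

# Enter the emulated PowerPC environment
sudo chroot /opt/eldk/ppc_8xx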

~$ uname -a
Linux epicfail 4.15.0-29-generic #31~16.04.1-Ubuntu SMP Wed Jul 18 08:54:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
~$ sudo chroot /opt/eldk/ppc_8xx/
bash-3.00# uname -a
Linux epicfail 4.15.0-29-generic #31~16.04.1-Ubuntu SMP Wed Jul 18 08:54:04 UTC 2018 ppc ppc ppc GNU/Linux
bash-3.00#

WAT. A chroot turns an x86 into a PPC? Kind of. QEMU is using a PowerPC CPU emulator to run programs within the chroot and translating their system calls to the native host kernel. At this point, the build should be easy, right?

Right??

I chose ELDK 4.0 as it's an early version of similar age to the MontaVista 3.1 installation on the ACS. That means some newer software will rely on things that don't exist inside this chroot. Did I mention OpenSSH?

Dropbear had only one issue. Configure ran without a hitch, but make failed with a sed error. Apparently 'sed -E' didn't exist back then, but 'sed -r' did. An edit to ifndef_wrapper.sh and another make leads to success.
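The fix is a one-word substitution; something along these lines worked for me (the exact expression inside ifndef_wrapper.sh may differ between Dropbear releases, so treat this as a sketch, or simply edit the file by hand):

# Inside the chroot, in the unpacked source tree
cd /usr/src/dropbear-2018.76
# Replace the unsupported 'sed -E' with the equivalent 'sed -r'
sed -i 's/sed -E/sed -r/' ifndef_wrapper.sh
make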

'make install' is run to get a list of the files that are created, and a tarball is made of those files to transfer them to the ACS for testing.

make -j$(grep -c cpu /proc/stat)
make install
tar cfz dropbear.tar.gz /usr/local/sbin/dropbear /usr/local/bin/dbclient /usr/local/bin/dropbear* /etc/dropbear/

Now we can exit the chroot and copy the tarball to the ACS:

scp /opt/eldk/ppc_8xx/usr/src/dropbear-2018.76/dropbear.tar.gz acs:/tmp/

Note that on my system I have a 1GB CF card mounted via a pcmcia-CF adapter. This is incredibly handy at times, and it was used to store the tarball before unpacking it. The system will even automatically mount an ext2 (not ext3+) filesystem on the first partition to /mnt/ide on startup. I keep a copy of the ppc_8xx chroot there for short tests.

Next, ssh into the ACS as root and unpack the tarball. There's no need for concern with the filesystem: all files land in the /usr/local tree or /etc/dropbear, and a reboot will restore the system to where it was before the tarball was unpacked.

cd /
tar zxvf /tmp/dropbear.tar.gz
touch /var/log/lastlog
/usr/local/sbin/dropbear -FERp8022

Dropbear needs lastlog to log in without complaints, so we’ll create that ahead of time.

Next, launch dropbear in test mode. -F means 'foreground', -E means 'log to stderr', -R means 'create host keys on demand', and -p 8022 sets the sshd port to 8022.

From your system, ssh to root on port 8022 to see if it worked:

ssh -p8022 root@acs

The test run of dropbear should note an incoming connection. At this moment a healthy reminder of the 48MHz system speed is handed over: the first time I made a connection it took about two minutes (likely because the host keys were being generated on demand) and timed out before I could log in. A second connection proceeded quickly enough and successfully logged into the root account.

In all, it was frustrating but still educational and entertaining

(or, we'll just edit these expletives out.)

For me, it's always great to find ways to breathe new life into old hardware. It's also a good reminder that a little trickery to get things done the easier way keeps the mind active. Cross-compiling generally isn't something people do, even in a system administration role, so exercises like this remind me how to find resources and understand different methods. In all, a handful of weekend hours well spent.

The replacement SSH daemon is now up and running, but it will disappear the moment the ACS is rebooted or power cycled. In a following article, I'll cover making the new SSH daemon permanent, and even replacing the old and insecure daemon with something a little more secure.

See you then!
