Security cleanup of a compromised dedicated server

Disclaimer

This document is designed to give details on how to perform a basic online scan/cleanup of a compromised dedicated server.

If a host is compromised, an attacker may:

  • Run malware that you are unable to detect. The malware may not even necessarily be persistent (i.e. the files on disc may be unchanged);

  • Modify the hardware (e.g. trojan the firmware contained in the BIOS, video card, SCSI controller, NICs, the hard discs themselves, etc).

The only way to properly recover a compromised host is:

  1. Start from scratch on new hardware with known good backups; or

  2. Individually examine and repair each of the possibly compromised bits of hardware under a forensics lab style environment (outside the scope of what Anchor provides).

This procedure does NOT provide any guarantee that you will actually be able to find and clean up all of the damage. Whilst this procedure is not foolproof, it may be sufficient under many circumstances, as not all attackers use the most sophisticated of techniques (how do you think you managed to detect that the host was compromised in the first place, eh?). It trades a lower level of assurance that the host is in a non-compromised state against the data loss (everything since the last backup) and downtime involved in a full rebuild.

The clean up squad..

  • Determine IP address and MAC address of the compromised host:

    [root@server root]# host server
    server.anchor.net.au has address xxx.xxx.xxx.xxx
    [root@server root]# ip neigh show to xxx.xxx.xxx.xxx
    xxx.xxx.xxx.xxx dev eth1.839 lladdr 00:00:09:b8:0e:c2 nud delay
  • Determine switch and switch port of the compromised host. Change the switch port to 10 Mbps.
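
    For example, on a Cisco IOS switch (the switch platform and interface name here are assumptions for illustration, not details recorded from an incident) the port can be located from the MAC address learnt above and then forced down to 10 Mbps:

    switch# show mac-address-table address 0000.09b8.0ec2
    switch# configure terminal
    switch(config)# interface FastEthernet0/12
    switch(config-if)# speed 10
    switch(config-if)# end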
  • Set up NSM (network security monitoring) of the compromised host, using both the IP address and the MAC address as the traffic selector:

    [root@server root]# df -h -x none
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/md2              487M  407M   55M  89% /
    /dev/md4               11G  2.9G  7.2G  29% /data
    none                  501M     0  501M   0% /dev/shm
    /dev/md0              2.0G  1.1G  802M  59% /usr
    /dev/md3              2.0G  1.6G  325M  84% /var
    tmpfs                 500M   24K  500M   1% /tmp
    [root@server root]# tcpdump -n -s 0 -i eth1.839 -w /data/`/bin/date "+%Y%m%d-%H%M%S"`-hacked_server.lpc '(host xxx.xxx.xxx.xxx or ether host 00:00:09:b8:0e:c2) and not host xxx.xxx.xxx.xxx'
  • Stop the host from doing more damage with temporary iptables rules:
    • On the host:

      iptables -N PACKET_LIMITING
      iptables -A PACKET_LIMITING -p tcp --dport 22 --syn -j REJECT
      iptables -A PACKET_LIMITING -p tcp --dport 21 --syn -j REJECT
      iptables -A PACKET_LIMITING -m limit --limit 200/second --limit-burst 4000 -j RETURN
      iptables -A PACKET_LIMITING -j DROP
      iptables -I OUTPUT -j PACKET_LIMITING
    • Also on routers:

      iptables -N server_HACKED
      iptables -A server_HACKED -p tcp --dport 22 --syn -j REJECT
      iptables -A server_HACKED -p tcp --dport 21 --syn -j REJECT
      iptables -A server_HACKED -m limit --limit 200/second --limit-burst 4000 -j RETURN
      iptables -A server_HACKED -j DROP
      iptables -I FORWARD -i eth1.839 -m mac --mac-source 00:00:09:b8:0e:c2 -j server_HACKED
      iptables -I FORWARD -i eth1.839 -s 202.4.x.x -j server_HACKED
  • Collect a list of processes and save to your workstation:

    % mkdir ~/work/hacked_server-compromise
    % cd ~/work/hacked_server-compromise
    % ssh server ps auxfw | tee process-listing
  • Collect a list of network ports that are listening:

    % ssh server netstat -ln -p -e | tee netstat
  • Inspect the results of the process listing and network listeners
    • Is there anything unusual there?
    • Are there any daemons listening on ports that you don't expect?
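
    To tie a suspicious process or listener back to the binary behind it, /proc on the host is useful (the PID below is a hypothetical example):

    $ ssh server ls -l /proc/12345/exe /proc/12345/cwd
    $ ssh server cat /proc/12345/cmdline

    An exe link that points at a deleted file or an unexpected path is a strong hint that the process is not what it claims to be.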
  • Check what the package manager thinks about the state of the machine
    • For an RPM based system run:

      % ssh server rpm -Va | tee rpm-Va

      The flags from the output of RPM mean:

      The format of the output is a string of 8 characters, a possible
      attribute marker:
      
      c %config configuration file.
      d %doc documentation file.
      g %ghost file (i.e. the file contents are not included in the package payload).
      l %license license file.
      r %readme readme file.
      
      from the package header, followed by the file name. Each of the 8
      characters denotes the result of a comparison of attribute(s) of the
      file to the value of those attribute(s) recorded in the database. A
      single "." (period) means the test passed, while a single "?" (question
      mark) indicates the test could not be performed (e.g. file permissions
      prevent reading). Otherwise, the (mnemonically emBoldened) character
      denotes failure of the corresponding --verify test:
      
      S file Size differs
      M Mode differs (includes permissions and file type)
      5 MD5 sum differs
      D Device major/minor number mismatch
      L readLink(2) path mismatch
      U User ownership differs
      G Group ownership differs
      T mTime differs
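
      For example, a verify line like the following (a hypothetical result, not output from an actual incident) shows that /bin/ps differs in size, MD5 sum and mtime from the packaged version, which on a system binary is a strong indicator of a trojaned file:

      S.5....T    /bin/ps

      The same flags against a file marked with a c (configuration file) usually just reflect legitimate local configuration changes.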
    • If any files are modified, then reinstall the package that the file comes from:

      # up2date --get PACKAGE_NAME
      # cd /var/spool/up2date
      # rpm -K PACKAGE_FILE
      # rpm --repackage -Uvh --force --oldpackage PACKAGE_FILE
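
      To find out which package a flagged file belongs to in the first place (the path below is a hypothetical example), query the RPM database:

      # rpm -qf /bin/ps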

      Check for any binaries not managed by the package manager:

      $ ssh server
      server> for path in $(echo $PATH | tr : " "); do [ -d "$path" ] && rpm -qf $path/* | grep "not owned"; done
  • Check for the presence of rootkits with a dedicated scanner such as chkrootkit or rkhunter, for example:
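
    A sketch only, assuming chkrootkit and rkhunter are installed on the host (neither is part of a default install, and a scanner run on a compromised host can itself be subverted):

    $ ssh server chkrootkit | tee chkrootkit
    $ ssh server rkhunter --check --skip-keypress | tee rkhunter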
  • Check what files are open for anything suspicious:

    $ ssh server lsof | less
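
    In particular, look for processes holding open files that have since been deleted, a common trick for hiding a malicious binary. lsof can list these directly:

    $ ssh server lsof +L1 | tee lsof-deleted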
  • Check for any unusual user accounts:

    $ ssh server cat /etc/passwd | tee passwd
    $ ssh server cat /etc/shadow | tee shadow
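
    In particular, look for accounts other than root with UID 0 and for unexpected accounts with a valid shell. A quick check against the copy of passwd saved above:

    $ awk -F: '$3 == 0 {print $1}' passwd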
  • Check for any unexpected cron programs:

    $ ssh server cat '/var/spool/cron/*' | less
    $ ssh server cat /etc/crontab | less
    $ ssh server cat '/etc/cron.d/*' | less
    $ ssh server cat '/etc/cron.daily/*' | less
    $ ssh server cat '/etc/cron.hourly/*' | less
    $ ssh server cat '/etc/cron.weekly/*' | less
    $ ssh server cat '/etc/cron.monthly/*' | less
  • Check what users are allowed ssh access to root:

    $ ssh server cat '/root/.ssh/authori*'
  • Check for suid/sgid files:

    $ ssh server find / -type f -perm +6000 -ls | tee suid-sgid
    Check that the files belong there.
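
    One way to do that, using the suid-sgid listing saved above (the last field of the find -ls output is the path), is to ask RPM whether each file is owned by a package:

    $ awk '{print $NF}' suid-sgid | ssh server xargs rpm -qf | grep 'not owned'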
  • Check what updates need to be applied.
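
    On an RPM based host managed with up2date (as assumed elsewhere in this document), the outstanding updates can be listed with:

    $ ssh server up2date -l | tee updates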
  • Reboot.
  • Inspect the packet dump collected earlier for anything unusual that may indicate a control channel. It is useful to run snort against it.
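
    A minimal sketch of running snort over the capture (the file name follows the naming convention used for the tcpdump capture above; the configuration path is the common default and may differ):

    $ mkdir snort-logs
    $ snort -c /etc/snort/snort.conf -r YYYYMMDD-HHMMSS-hacked_server.lpc -l snort-logs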

Other resources