Configuring server file system quotas

Quotas are very useful on dedicated servers with multiple users that you don't fully trust. Naturally we use quotas on our shared web hosting servers, and sometimes you'll get customers who want quotas on their dedicated server too, which is great.

Quotas (on ext2/3) work by keeping metadata in the root of the filesystem, in the files aquota.user and/or aquota.group. Some filesystems, like XFS, have direct support for quotas. Updates to the filesystem are caught and the metadata is updated accordingly.


  1. Install the necessary packages: quota, plus quotatool if your distro has it

    apt-get install quota quotatool    # Debian
    up2date quota                      # older Red Hat
    yum install quota                  # newer Red Hat / CentOS
  2. Add the usrquota or grpquota option to the fstab entry

    • quotas are a filesystem feature, while bind-mounts work at the VFS layer, so you can only enable quotas on the root of a filesystem. The practical effect of this is that for a bind-mounted /home, the quotas have to be set on /data. Quotas on that filesystem will also count stuff like /var/lib/mysql or /usr/local/foo; keep this in mind when you set quotas, but that tends to be system stuff, and you shouldn't be setting quotas on system users.
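As a concrete sketch, a quota-enabled fstab entry looks something like this (the device and mountpoint are illustrative, matching the /data example used elsewhere on this page):

```
# /etc/fstab -- add usrquota and/or grpquota to the mount options
/dev/mapper/lvm-data  /data  ext3  defaults,usrquota,grpquota  0  2
```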

  3. Remount the filesystem with mount -o remount /home

    • It's highly likely you'll be acting on /data if this is an Anchor dedicated server

  4. Gather the initial quota metadata with quotacheck

    1. On Red Hat, quotacheck -cug /home

    2. On Debian, /etc/init.d/quota start will do this for you (recommended). quotacheck -augmv is another option; best to check that nothing is using the FS when you do it, as activity may skew the quota data.

  5. Enable quotas
    1. On Red Hat, quotaon -avug

    2. On Debian, quotaon -augv, although the initscript method should have done that for you already

Setting quotas

Filesystems (at least in the case of ext2/3) count quota in 1024-byte blocks, not bytes, so you need to do a little arithmetic. If you have the quotatool package, you can specify quantities in bytes and MB, which is sane. quotatool also allows scripted quota setting, whereas doing lots of users with edquota would be painful.
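To illustrate the arithmetic: if you're stuck with edquota and need a limit as a block count, the conversion is just multiplication. A quick sketch, using the 24MB hard limit from later on this page:

```shell
# ext2/3 quota limits are counted in 1024-byte blocks, so 1MB = 1024 blocks
mb=24
blocks=$((mb * 1024))
echo "A ${mb}MB limit is ${blocks} blocks"    # prints "A 24MB limit is 24576 blocks"
```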

  1. On a shared hosting server we have convenient aliases to set the quota to common amounts. Run

     alias | grep quota

    to see what is available.
  2. edquota is good for this, but not ideal for lots of users.

  3. RTFM for great justice and saved time
  4. A helpful shell script you can use for en-masse quota-setting

    #!/bin/sh
    if [ $# -ne 1 ]; then
            echo "Need a single username to set quota for!"
            echo "Exiting."
            exit 1
    fi
    quotatool -v -u "$1" -b -l '24MB' /data
    quotatool -v -u "$1" -b -q '20MB' /data
  5. adduser on debian has a feature that lets you run a sort of "post-add" script, which is perfect for setting quotas. It resides at /usr/local/sbin/adduser.local, and here's one I prepared earlier (the syntax and expected behaviour are detailed in the adduser manpage)

    #!/bin/sh
    if [ $# -ne 4 ]; then
            echo "Needs four args, username, uid, gid and homedir!"
            exit 1
    fi

    case "$VERBOSE" in
    0)
            ;;
    *)
            # verbose
            echo "username is $1"
            echo "uid is $2"
            echo "gid is $3"
            echo "homedir is $4"
            ;;
    esac

    # Give new users a soft quota of 20meg, and a hard limit of 24meg,
    # will help things a little if it gets to breaking point
    case "$VERBOSE" in
    0)
            # quiet
            quotatool -v -u "$1" -b -l '24MB' /data >/dev/null 2>&1
            quotatool -v -u "$1" -b -q '20MB' /data >/dev/null 2>&1
            ;;
    *)
            echo -n "Setting 24meg hard limit... "
            quotatool -v -u "$1" -b -l '24MB' /data
            echo "Done"
            echo -n "Setting 20meg soft limit... "
            quotatool -v -u "$1" -b -q '20MB' /data
            echo "Done"
            ;;
    esac


Checking for users over quota

It's good to know when you've got a user over quota. You can easily set up warnquota to do this for you, but it primarily emails the user, which may be irrelevant in some circumstances. Thankfully, you can take the script below and modify it to suit your needs. Just run it out of /etc/cron.daily/

#!/bin/sh
# Address to notify; adjust to taste
ADMIN_EMAIL="root"

# A plus-sign appears in the unlabeled status column to indicate an over-quota situation.
# This is more reliable than searching for "days" on the line, which refers to remaining grace time.
MESSAGE_BODY=`repquota -au | grep '\+'`

repquota -au | grep '\+' > /dev/null
RET=$?
if [ $RET -eq 0 ]; then
        #echo Found user over quota! Sending mail to $ADMIN_EMAIL
        /usr/sbin/sendmail -t <<EOMAIL
To: $ADMIN_EMAIL
Subject: User over quota

$MESSAGE_BODY
EOMAIL
fi
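The plus-sign test can be sanity-checked without a quota-enabled filesystem; here's a throwaway demo on a fabricated repquota-style line (the username and numbers are made up):

```shell
# "+-" in the status column means the user is over their soft block limit
line='alice     +-   24580   20480   24576          6days'
printf '%s\n' "$line" | grep -c '+'    # prints 1: the line matches
```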




Repairing quota metadata

If a machine goes down hard and needs a fsck, there's a chance that the quota metadata is corrupt or incorrect. Here's the procedure for rebuilding it.

  1. Check for a quiet time. Leverage cacti for some "business intelligence" if it's enabled for the server, and look for lulls in the disk I/O patterns.
  2. Check for any cronjobs due to run around that time. On our servers, MySQL dumps happen around midnight. PgSQL dumps happen at 00:30. Hourly cronjobs happen on the hour, daily jobs run at 04:00. Backups at 01:00.
  3. Check the processlist for anything that might need special attention. We're going to stop cron, apache, mysql and pgsql, typically. Hopefully other stuff will be okay with a read-only filesystem, but there'll ideally be no activity anyway.
  4. Schedule an hour of flexible downtime in nagios for the host, since we're going to be taking out a bunch of services. While writing this guide, nagios measured 25min of downtime, and that was with some dawdling.
  5. Kill some services, in this order. It follows a rough set of dependencies that should allow things to stop working cleanly. If there's more services to take care of, you'll have to use your head and slot them in.
    • crond
    • apache
    • mysql
    • postgresql
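The shutdown sequence is easy to loop over; this is a dry-run sketch (swap the echo for your init system's stop command, e.g. /sbin/service "$svc" stop on Red Hat, or /etc/init.d/$svc stop on Debian):

```shell
# Stop order follows the rough dependency chain above; echo makes this a dry run
for svc in crond apache mysql postgresql; do
    echo "stopping $svc"
done
```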
  6. Once services are stopped, give it a few minutes to settle down. Check the processlist for zombies and other stuff that's not cleanly finished.
  7. Run find over the filesystem. This is currently hypothetical, but it should cache inodes and make things run faster (anything to shrink the r/o mount time is good). This could be done before the service outage, too.

    find /data > /dev/null
  8. Disable quotas on the filesystem

    quotaoff -vug /data
  9. Get thyself a screen session if you aren't already using one.
  10. Do the quotacheck. We use interactive mode so it can ask us what to do if anything is corrupt.

    quotacheck -ivug /data
  11. There's a good chance quotacheck will fail to remount the filesystem read-only. If you can figure out why, that's nice; otherwise you just have to bite the bullet and hope the counts won't be too far out.

    lsof +D /data    # may help
    lsof +D /home    # chances are you've got bind-mounts keeping things open (`/home` is bind-mounted from `/data/home`)
  12. This hopefully won't take long. It took about one minute on the system I ran it on.

    [root@yoshino ~]# quotacheck -ivug /data
    Cannot remount filesystem mounted on /data read-only. Counted values might not be right.
    Should I continue [n]: y
    quotacheck: Scanning /dev/mapper/lvm-data [/data] quotacheck: Old group file not found. Usage will not be substracted.
    quotacheck: Checked 206131 directories and 1787653 files
    [root@yoshino ~]# df -i
    Filesystem            Inodes   IUsed   IFree IUse% Mounted on
    /dev/mapper/lvm-data 33193984 1993793 31200191    7% /data
    [root@yoshino ~]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/lvm-data  123G   81G   36G  70% /data
  13. Re-enable quotas

    # quotaon -vug /data
    /dev/mapper/lvm-data [/data]: user quotas turned on
  14. Re-enable the services you stopped earlier, in reverse order.
  15. Chances are something tipped you off to there being a problem with quota data. See if that's still a problem, it should be fixed now.
  16. Check your nagios to ensure everything's come back as expected.
