Engine Yard Developer Center

Filesystem Health Checks

This document describes how to handle alerts from the filesystem consistency check on the primary EBS device.


Filesystem consistency check alerts are included with the most recent version of the Engine Yard Stable-V4 and Stable-V5 stacks. These checks run weekly and will alert in three situations:

  • It has been more than 12 months since the last filesystem check.
  • The volume has been mounted more than 30 times since the last filesystem check.
  • The volume state is something other than "clean".

The alert will be displayed with your other dashboard alerts and will have wording similar to the following:

WARNING: Device primary ebs: 2:36:00 Filesystem Check Warning /data (/dev/xvdj1): Last Check: 2015-10-13 Check Due: 2016-10-13

There are currently three alert severities for this alert:

  • OKAY: the prior check was a WARNING or FAILURE and that state has since been cleared; no action is necessary.
  • WARNING: the amount of time or the number of mounts since the last check exceeds the standard limits (1 year and 30 mounts, respectively). Best practice is to schedule some time to perform checks on a copy of the underlying volume as described below.
  • FAILURE: the volume is reporting that it has not mounted cleanly due to an unknown failure. Immediate investigation is recommended.

Purpose of These Checks

Best practice for most Linux filesystems is to perform regular checks and repairs to correct any inconsistencies. These typically occur due to hardware failures or system crashes; in our experience, however, they are not especially common. When a system is configured to perform automated filesystem checks, it does so as part of its reboot cycle, and the system is inaccessible while the check is running.

Unfortunately, during a scheduled maintenance or other system restart these checks can have very poor timing (e.g. extended downtime while you are already short on resources). Depending on the size of the volumes, a check may take several hours or even days to complete. The state of the check is also not available to the API or the AWS console, so it cannot currently be exposed in the cloud dashboard; as a result, the instance simply looks unresponsive.

To prevent this, we have turned off the setting that performs these checks during startup for your primary data volumes (/data, /db). As a replacement, filesystem health checks have been added as alerts on your dashboard, allowing you to address these alerts as part of a planned maintenance schedule.
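For reference, boot-time checks on an ext filesystem are disabled by zeroing the maximum mount count and the check interval with tune2fs. On the managed stacks this has already been done for your primary data volumes; the device path below is illustrative only:

```shell
# Disable count-based and time-based boot checks (already applied by the
# stack to /data and /db; the device path is an example only).
sudo -i tune2fs -c 0 -i 0 /dev/xvdj1

# Confirm: the mount-count limit and check interval should now be disabled.
sudo -i tune2fs -l /dev/xvdj1 | egrep '^(Maximum mount count|Check interval):'
```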

Handling Alerts by Instance Role

There are several approaches to addressing these alerts. Generally the best practice is to create a "donor" instance that acts as a stand-in for the host with the volume that needs checking. If the checks come back clean on the donor, the original host can have its volume metadata adjusted with tune2fs to indicate a healthy state. The recommended procedure depends on the instance type.

  • Solo (shared app/db): Take a snapshot using the environment level snapshot button; when complete, clone the environment. Perform the checks against this host and then use tune2fs (described further below) to adjust the reported state of the original volume.
  • DB Master: Create a new db replica. Perform checks against this host and then use tune2fs (described further below) to adjust the reported state of the original volume.
  • DB Slave: Follow the instructions on checking the state of the db_master volume and update its volume state. When the master has been updated create a new replica from a fresh snapshot of that master, and terminate the existing replica.
  • Utility: Take a snapshot using the environment level snapshot button, or by running ey-snapshots --snapshot on the utility instance. When complete add an instance using that snapshot and perform the checks against the volume.

Application Instances can be created using a new volume that then syncs its data from the app_master. As a result, the recommended practice for these host types is to use this functionality so that a completely new volume is used:

  • App Master: Create a new app (slave) using a new volume instead of a snapshot. When the host is ready, initiate a takeover so the new host becomes the App Master.
  • App (slave): Create a new app slave specifying to use a new volume.


In order to perform an fsck operation the volume must be taken offline; to do this, you will need to stop any services that are using the volume. The following describes the process using an extra host, a "donor", added to your environment that is not otherwise needed or used by the application.

WARNING: Make sure to use the "donor" instance so that you don't negatively impact application performance or create downtime. If you decide to scan an application instance instead, be sure to manually remove that instance from your load balancer, or from haproxy on the application master, to prevent traffic from being routed to that host.

Stopping Services

  • Collectd (alerts)

    sudo -i sed -i 's/\(.*collectd.conf.*\)/#\1/' /etc/inittab
    sudo -i telinit q
  • MySQL

    sudo -i /etc/init.d/mysql stop
  • Postgres

    sudo -i /etc/init.d/postgresql-$(postgres -V | egrep -o '[0-9]{1,}\.[0-9]{1,}') stop

Disconnect the Volume

Note: The following steps assume you are working with a database volume; for a utility or application instance volume, change all references to /db to /data.

Volumes can be detached by their device or by their mounted name. For a database volume you would run:

sudo -i umount /db

If a service is still using this volume you can run the lsof or fuser commands to determine which processes are connected:

sudo -i lsof | grep /db
sudo -i fuser -m /db

You can then use sudo -i ps -ef to determine which process is connected to the device. The most common scenarios would be a service still in use, or your login shell being connected to that device; the latter of which can be corrected by running cd ~.
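The two lookups can be combined by feeding each PID that fuser reports into ps (the mount point and any processes shown are illustrative):

```shell
# Show the PID, owner, and command line of every process holding the /db
# mount. fuser prints only the PIDs to stdout; its labels go to stderr.
for pid in $(sudo -i fuser -m /db 2>/dev/null); do
  ps -o pid=,user=,args= -p "$pid"
done
```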

Performing the Check

To check the filesystem you need to force the check so that the volume is actually scanned instead of just reading the volume's state information:

Start a screen session so that if you become disconnected you can reconnect later (to detach: ctrl-a, then d):

screen -S fs_checks

Paste the commands to perform the check and remount the volume.

sudo -i fsck /db -f
sudo -i mount /db

You should see output similar to the following:

fsck from util-linux 2.26.2
e2fsck 1.42.13 (17-May-2015)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/xvdz2: 1785/983040 files (10.3% non-contiguous), 219124/3932160 blocks

If you detach from the screen session you can reconnect with `screen -r fs_checks` (or `screen -x fs_checks` if the session is still attached). If you become disconnected from the instance you may not be able to log in again until the filesystem check completes; this is expected.

You can confirm the state of the checked volume with the following command:

sudo -i tune2fs -l $(egrep '/data |/db ' /etc/fstab | awk '{print $1}')

Pay special attention to the values for 'Mount count' (1), 'Last checked' (recent timestamp), and 'Filesystem state' (clean).
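To pull just those three fields out of the listing, you can filter the tune2fs output (the device lookup reuses the fstab pipeline above):

```shell
# Filter the tune2fs listing down to the three fields of interest:
# Mount count, Last checked, and Filesystem state.
sudo -i tune2fs -l $(egrep '/data |/db ' /etc/fstab | awk '{print $1}') \
  | egrep '^(Mount count|Last checked|Filesystem state):'
```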


If you are using a "Donor" system you'll want to re-mount the volume and keep it around until you've adjusted the settings on the original volume and confirmed the alert has cleared.

Restarting Services (if needed)

  • Collectd (alerts)

    sudo -i sed -i 's/#\(.*collectd.conf.*\)/\1/' /etc/inittab
    sudo -i telinit q
  • MySQL

    sudo -i /etc/init.d/mysql start
  • Postgres

    sudo -i /etc/init.d/postgresql-$(postgres -V | egrep -o '[0-9]{1,}\.[0-9]{1,}') start

Adjusting the Original Volume

Now that you've checked a copy of the volume on a "Donor" instance you can reset the state of the original volume:

Set the timestamp to the current time:

sudo -i tune2fs -T now $(cat /proc/mounts|grep '/db'|awk '{print $1}')

Set the mount count back to 1:

sudo -i tune2fs -C 1 $(cat /proc/mounts|grep '/db'|awk '{print $1}')
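After both adjustments, you can verify that the new values were written (the device lookup mirrors the one used in the commands above):

```shell
# Confirm the reset took effect: Mount count should be 1 and
# Last checked should show the current time.
sudo -i tune2fs -l $(cat /proc/mounts | grep '/db' | awk '{print $1}') \
  | egrep '^(Mount count|Last checked):'
```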

Getting a Recovery (OKAY) Alert

As configured, the check runs only once a week, so depending on when you complete these steps it may be some time before you see the recovery alert. To force the next check and confirm the alert has cleared, run:

sudo -i touch -d 20140101 $(ls /tmp/check_primary-ebs_status/*)

The next collectd check will validate the state of the volume and a recovery should be displayed on the dashboard within a minute or two.

Disabling the Check

While we don't recommend disabling this check, we have built in a simple means to do so. To disable the check, create a file at /etc/engineyard/skip_fsck_check on any instance you want it turned off for. We recommend adding a comment to this file that records the reason it has been disabled, in case it is questioned at some future time. To re-enable the check later, simply remove this file.
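A minimal sketch of creating the marker file with an explanatory note (the note wording is an example, not a required format):

```shell
# Create the opt-out marker with a comment explaining why the check is
# disabled; the note text below is an example only.
sudo -i sh -c 'echo "# fsck alert disabled: volume is checked manually each quarter" \
  > /etc/engineyard/skip_fsck_check'

# To re-enable the check later:
# sudo -i rm /etc/engineyard/skip_fsck_check
```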

If you would like to disable this on multiple systems, you can add a custom Chef recipe that creates this file across those hosts.
