Passenger workers are monitored for excessive memory consumption and are killed if they exceed a pre-set memory limit. This mechanism, in conjunction with the Engine Yard-provided "Worker Counts" document, allows workers to be tuned to an application's needs and an instance's characteristics.
The default memory limit is 250MB, as stated in the aforementioned document, and that limit is passed to the recipes that set up Passenger on our stacks. However, it was found that a default value of 800MB was used instead of the expected 250MB. As a result, customers relying on the default value of "worker_memory_size" had their environments configured with the wrong value. In some cases this allowed bloated workers to reach a higher-than-intended memory limit, while in rare cases multiple workers could reach this limit simultaneously and over-utilise the instance's memory and swap.
With the release of Stack V5 stable-v5-3.0.64, the Passenger workers' default memory limit is set to 250MB in the cookbooks. This is fixed in PR https://github.com/engineyard/ey-cookbooks-stable-v5/pull/425, which is included in this release. On Stack V6, PR https://github.com/engineyard/ey-cookbooks-stable-v6/pull/138 has been merged and is included in Stack V6 release stable-v6-1.0.23.
After upgrading to stable-v5-3.0.64 (Stack V5) or stable-v6-1.0.23 (Stack V6), customers who rely on the default memory limit will find that Passenger workers are killed when they reach 250MB of memory usage. Some customers may have applications whose workers require a higher memory limit and have been consuming more memory without over-utilising the instance. Such customers may find that Passenger workers are killed unnecessarily, leading to decreased application performance and increased error rates. Ideally, worker memory usage should be assessed before the upgrade is applied to the environment(s).
You can identify the configured limit by examining the relevant cron job (as root):
# Chef Name: passenger_monitor_todo
* * * * * /engineyard/bin/passenger_monitor todo -l 250 -w 60 >/dev/null 2>&1
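The limit value can also be extracted programmatically. A minimal sketch, shown here against the sample cron line above (in practice, read it from the root crontab with "crontab -l"):

```shell
# Sketch: extract the memory limit (-l flag, in MB) from a passenger_monitor
# cron entry. The cron line below is the sample from this article; in
# practice, read it from the root crontab (crontab -l).
cron_line='* * * * * /engineyard/bin/passenger_monitor todo -l 250 -w 60 >/dev/null 2>&1'
limit=$(printf '%s\n' "$cron_line" | grep -oE '\-l [0-9]+' | awk '{print $2}')
echo "Configured worker memory limit: ${limit} MB"
```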
The command "passenger-status" can be used to examine the current memory consumption of workers. Sample output:
----------- Application groups -----------
App root: /data/todo/current
Requests in queue: 0
* PID: 18839 Sessions: 0 Processed: 143 Uptime: 11m 52s
CPU: 0% Memory : 44M Last used: 3s ago
* PID: 18849 Sessions: 0 Processed: 0 Uptime: 11m 52s
CPU: 0% Memory : 26M Last used: 11m 52s ago
* PID: 18859 Sessions: 0 Processed: 0 Uptime: 11m 52s
CPU: 0% Memory : 18M Last used: 11m 52s ago
If the Memory value is above the default 250MB limit for the majority of workers, customers should contact EY Support before applying the stack upgrade.
If the stack upgrade has already been applied, customers can monitor for workers being killed for breaching the new, lower limit. passenger_monitor records killed workers in /var/log/syslog; if you notice a large number of worker kills, please contact EY Support for guidance.
Jul 17 10:36:01 ip-10-110-54-192 passenger_monitor: Killing PID 2118 (app lsswebapp) - memory 251 MB exceeds 250 MB
Jul 17 10:36:02 ip-10-110-54-192 passenger_monitor: Killing PID 2118 - orphaned process
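A quick way to gauge how often the limit is being hit is to count the kill events. A minimal sketch using the sample log lines above (in practice, grep /var/log/syslog directly):

```shell
# Sketch: count memory-limit worker kills logged by passenger_monitor.
# Sample log lines are embedded for illustration; the orphaned-process kill
# is deliberately excluded by matching only the "memory ... exceeds" events.
log='Jul 17 10:36:01 ip-10-110-54-192 passenger_monitor: Killing PID 2118 (app lsswebapp) - memory 251 MB exceeds 250 MB
Jul 17 10:36:02 ip-10-110-54-192 passenger_monitor: Killing PID 2118 - orphaned process'
kills=$(printf '%s\n' "$log" | grep -c 'passenger_monitor: Killing.*memory .* exceeds')
echo "Memory-limit kills: $kills"
```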
The workers' memory consumption limit can be adjusted by Engine Yard Support through our platform backend. If you need the value adjusted, please open a new support ticket and our team will discuss and apply the required changes. As noted in the previously linked Worker Counts document, increasing the memory limit reduces the number of workers, in order to prevent over-consumption of memory.