On April 18, 2018, I spoke on "Containerizing Rails: Techniques, Pitfalls, and Best Practices" at RailsConf in Pittsburgh, PA. The talk discussed best practices for designing Docker containers for Rails applications.
A list of tips
Here is the full list of tips we covered in the talk.
Read and understand your base image
Base image Dockerfiles are generally available on GitHub. It’s important to understand what your base image is doing, so you can judge whether it is appropriate for your application.
Combine install commands with cleanup
Each Dockerfile instruction creates a new image layer, so temporary or unneeded files deleted in a later command still occupy space in the earlier layers. Combine installation and cleanup into a single instruction so those files never appear in any intermediate layer.
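As a sketch (assuming a Debian-based image; the package names are illustrative), installing and cleaning up in one RUN keeps the package lists out of every layer:

```dockerfile
# Install dependencies and clean up in the SAME RUN instruction.
# A separate cleanup step would run in a later layer, leaving the
# files present in the earlier install layer anyway.
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential libpq-dev \
    && rm -rf /var/lib/apt/lists/*
```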
Use a separate build stage
Multi-stage Dockerfiles are very powerful and useful for ensuring your final image has only the files you need at runtime.
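One possible shape for a Rails app (image tags, paths, and Bundler flags are illustrative, not a definitive recipe):

```dockerfile
# Build stage: has the compilers and headers needed to install gems
# and precompile assets.
FROM ruby:2.5 AS build
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install --deployment --without development test
COPY . .
RUN bundle exec rake assets:precompile

# Runtime stage: a slimmer base, with only the built application
# (including vendored gems) copied in.
FROM ruby:2.5-slim
WORKDIR /app
COPY --from=build /app /app
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
```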
Set the system locale
Some Linux distributions do not set a locale by default, which can have unexpected effects on your string encodings in Ruby.
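On Debian-based images, one way to do this is a single ENV line (C.UTF-8 is a minimal UTF-8 locale available without installing extra locale packages):

```dockerfile
# Set a UTF-8 locale so Ruby's default external encoding is UTF-8
# rather than US-ASCII.
ENV LANG=C.UTF-8
```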
Create an unprivileged user
Set up defense in depth. By default, processes in a container run as superuser, which is not what you want for Rails.
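A minimal sketch for a Debian-based image (the user and group name "app" is illustrative):

```dockerfile
# Create a system user with no login shell, then switch to it.
# Instructions after USER, and the container's main process, run
# unprivileged rather than as root.
RUN groupadd -r app && useradd -r -g app app
USER app
```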
Prefer exec form for CMD
Many reasons, including interoperability with ENTRYPOINT. But in particular, it’s needed for proper signal propagation and safe container shutdown.
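For example (the Puma command is illustrative):

```dockerfile
# Exec form: Docker runs the process directly as PID 1, so it
# receives SIGTERM on `docker stop` and can shut down cleanly.
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]

# Shell form (avoid): the command runs under `/bin/sh -c`, and the
# shell, not your server, receives the signals.
# CMD bundle exec puma -C config/puma.rb
```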
Prefix shell form with the “exec” keyword
If you must use shell form, prefix the command with the “exec” keyword so the shell replaces itself with your process and signals reach it directly.
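A sketch of when this comes up (the use of $PORT is illustrative): shell form is sometimes needed for environment variable expansion, and “exec” keeps the shutdown behavior safe.

```dockerfile
# "exec" replaces the shell with the Puma process, so the server
# becomes PID 1 and receives SIGTERM directly on `docker stop`.
CMD exec bundle exec puma -p "$PORT"
```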
Avoid ONBUILD
It’s just not that useful a feature. It makes assumptions about the needs of your app, and it involves implicit rather than explicit builds.
Always specify resource constraints
This is crucial to ensuring container behavior is stable and predictable in production. If you can’t define static constraints, it’s an indication your container may be doing too much.
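For example, with the Docker CLI (the limits and the image name my-rails-app are illustrative; tune limits to your app’s measured needs):

```shell
# Cap the container at 512 MB of memory and one CPU.
docker run --memory=512m --cpus=1.0 my-rails-app
```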
Avoid preforking in a container
Preforking has complicated interactions with resources such as file handles, and it can lead to less predictable memory behavior.
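As a sketch with Puma (flags are illustrative): omit the workers setting so Puma stays in single (non-forking) mode, and use a fixed thread pool for predictable memory behavior.

```dockerfile
# No -w/--workers flag: Puma runs in single mode, no preforked
# workers. -t 5:5 pins the thread pool to a fixed size.
CMD ["bundle", "exec", "puma", "-t", "5:5", "-p", "3000"]
```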
Scale by adding containers
Containers are best used as the unit of scaling. If you need more resources, add more containers.
I’m sorry I didn’t make this clear in the talk itself: it’s okay to have multiple processes in a single container if they work together to perform a single service, provided the container still has static, predictable resource constraints.
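With Docker Compose, for instance, scaling means running more containers of a service (the service name "web" is illustrative):

```shell
# Run three containers of the "web" service instead of resizing one.
docker-compose up --scale web=3
```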
Log to STDOUT or an external agent
Direct your production logs outside the container. The easiest way is to log to STDOUT by setting the RAILS_LOG_TO_STDOUT environment variable, which Rails 5 and later honor in the generated production configuration.
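Assuming Rails 5 or later, one way is a single line in your Dockerfile:

```dockerfile
# Rails 5+ checks this variable in the generated production config
# and, when it is set, sends logs to STDOUT.
ENV RAILS_LOG_TO_STDOUT=1
```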
Further reading
- The Docker documentation publishes its own list of Dockerfile best practices.
- Phusion’s baseimage provides support for a number of complex use cases such as controlling the init procedure and running daemons within a container. It’s well documented and very useful for learning about Docker’s edge cases, though it may be overkill for most applications.