Customizing Postgres in Docker

The Docker registry provides an official Postgres container that can be used in a wide variety of situations. However, in its default configuration, Postgres does not accept remote connections from other IP addresses.

This presents a problem when used with something like WebHelpDesk, which needs a remote Postgres database it can connect to.

To address this issue, we’re going to create a new Docker container based on Postgres that changes the appropriate settings in pg_hba.conf.

The repo for this project can be found here.

The Dockerfile

FROM postgres
ENV DB_NAME database
ENV DB_USER user
ENV DB_PASS password
# Script names below are illustrative; substitute the names of your two scripts.
ADD setupRemoteConnections.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/setupRemoteConnections.sh
ADD setup_db.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/setup_db.sh

FROM postgres
Start with the official Postgres image.

ENV DB_NAME database
ENV DB_USER user
ENV DB_PASS password

These three environment variables are going to be passed to the script. This technique was originally developed by Graham Gilbert for his Postgres docker container for the use of Sal. The purpose of these environment variables and the script are to create a starting database for us to use.

ADD setupRemoteConnections.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/setupRemoteConnections.sh

Add in the script to set up remote connections. Its destination, /docker-entrypoint-initdb.d/, is a special directory provided by the official Postgres image for extending startup behavior: every script placed in that directory is run when the container initializes its database, so we can apply Postgres configuration changes automatically. The chmod sets execute permissions on the script so it can run.

The script contains only one line of code:
sed -i '/host all all 127.0.0.1\/32 trust/a host all all 172.17.0.0\/16 trust' /var/lib/postgresql/data/pg_hba.conf
This sed statement modifies /var/lib/postgresql/data/pg_hba.conf to allow remote access connections – see the Postgres pg_hba.conf documentation for the file format. It simply adds an entry to the list of trusted addresses that are allowed to connect to the database. The /16 range covers Docker's default bridge network – thus giving any other Docker container the necessary access. This is specifically intended for use with WebHelpDesk, which requires it.
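To see exactly what the sed command does, here's a minimal sketch run against a sample pg_hba.conf in /tmp (the path and file contents are illustrative stand-ins for the real file):

```shell
# Create a sample pg_hba.conf containing the default localhost trust line.
cat > /tmp/pg_hba_sample.conf <<'EOF'
# TYPE DATABASE USER ADDRESS METHOD
host all all 127.0.0.1/32 trust
EOF

# Append a trust line for Docker's default bridge subnet (172.17.0.0/16)
# immediately after the localhost entry, as the init script does.
sed -i '/host all all 127.0.0.1\/32 trust/a host all all 172.17.0.0\/16 trust' /tmp/pg_hba_sample.conf

cat /tmp/pg_hba_sample.conf
```

After running it, the sample file contains both the localhost entry and the new Docker-subnet entry, which is what lets other containers on the bridge network connect.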

ADD setup_db.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/setup_db.sh

This script, as mentioned previously, was shamelessly stolen from Graham Gilbert – thanks for that. It sets up the initial database with correct role and privileges, using the environment variables.

The script looks like this:

#!/bin/bash
# The SQL bodies below are reconstructed from the description above:
# create the role, create the database, grant privileges.
TEST=`gosu postgres postgres --single <<- EOSQL
   SELECT 1 FROM pg_database WHERE datname='$DB_NAME';
EOSQL`
if [[ $TEST == *"1"* ]]; then
    # database exists
    # $? is 0
    exit 0
fi
gosu postgres postgres --single <<- EOSQL
   CREATE USER $DB_USER WITH ENCRYPTED PASSWORD '$DB_PASS';
EOSQL
gosu postgres postgres --single <<- EOSQL
   CREATE DATABASE $DB_NAME;
EOSQL
gosu postgres postgres --single <<- EOSQL
   GRANT ALL PRIVILEGES ON DATABASE $DB_NAME TO $DB_USER;
EOSQL
echo ""
echo "******DOCKER DATABASE CREATED******"

The script executes on startup of the Docker container, and sets up the database we need to use by reading in the environment variables we pass in using the -e argument to docker run. We’ll see this in action later.
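The "does the database already exist?" guard at the top of the script can be sketched on its own, without a running Postgres. Here the query result is a stub standing in for the real `SELECT 1 FROM pg_database` check:

```shell
# Sketch of the existence guard (no Postgres needed).
# In the real script, query_result comes from running the SELECT
# through `gosu postgres postgres --single`; here we stub it.
DB_NAME=db
query_result="1"   # stub: pretend the SELECT found the database

if [[ "$query_result" == *"1"* ]]; then
    msg="database $DB_NAME already exists; skipping creation"
else
    msg="creating database $DB_NAME"
fi
echo "$msg"
```

This is why restarting the container with the same data volume doesn't try to recreate the database: the guard exits early when the SELECT finds a match.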

With this Dockerfile, and with our scripts ready, the container is ready to be built:
docker build -t name/postgres .
Or you can pull the automated build from the registry:
docker pull macadmins/postgres

Using the new Postgres database:

Running the database container by itself without anything special is simple:
docker run -d -e DB_NAME=db -e DB_USER=admin -e DB_PASS=password macadmins/postgres

Of course, we probably want to do a bit more than that. For example, it's much easier to test and work with if we give it a name we can refer to easily:
docker run -d -e DB_NAME=db -e DB_USER=admin -e DB_PASS=password --name postgres-test macadmins/postgres

You can then check to see if your database exists the way you expect by using psql:
docker exec -it postgres-test psql --dbname db --username admin
You’ll be at the psql prompt, and you can type \l to list all the databases. You’ll see the one we created, db, in the list (which does not format well, so I haven’t tried to copy and paste it here).

Looks good, right? Now the database is ready to be used by any application or service by linking the customized Postgres container to another one. We’ll do an example of this in another post.

An Alternative

Dockerfiles are pretty neat things. They allow us to do fun stuff, like take someone else's image as a base and build more stuff on top of it. This is the basis for nearly all of the images I use – find someone else who did the hard work, like installing Nginx, or Apache, or a database like Postgres or MySQL, and then add the pieces I need to get the results I want.

I pointed out earlier that Graham Gilbert already has a great Postgres container that incorporates the database setup script. All I’m doing differently is configuring Postgres to allow remote connections as well.

So as an alternative to customizing from the base Postgres image, we can try just adding our changes to Graham’s Postgres container. It makes for a smaller Dockerfile:
FROM grahamgilbert/postgres
ENV DB_NAME database
ENV DB_PASS password
ADD setupRemoteConnections.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/setupRemoteConnections.sh

We can build this alternative version with a new tag:
docker build -t nmcspadden/postgres:alt .

Which is better?

If we look purely at image file size, we see that both our original customized version, and the version based off of Graham’s come out the same:
bash-3.2$ docker images
REPOSITORY            TAG      IMAGE ID       CREATED          VIRTUAL SIZE
nmcspadden/postgres   alt      21bbfdc5857f   49 seconds ago   213.4 MB
macadmins/postgres    latest   2e2d24011a32   2 days ago       213.4 MB

So there’s no particular size advantage to doing it the “alternative” way, except that the Dockerfile is a bit smaller and we do less wheel-reinvention.

But there are some pros and cons, and it depends on what you want to do.

By using Graham’s Postgres container, it’s easy for us to set up an automated build on the Docker registry that rebuilds ours every time Graham rebuilds his. If he ever updates his database setup script, or updates his container for any reason, ours gets rebuilt to incorporate his changes. This is both a pro and a con, because it means that our Postgres database’s behavior might change in the latest build without our knowledge (or approval). If Graham decides to change his container to do something different at startup, and we’re designing our Postgres databases around assuming a specific behavior happens at startup every time, we could be in trouble if an unexpected change occurs.

On the other hand, if it turns out we do want to incorporate Graham’s changes automatically, using his image as the basis for our build saves a lot of time – I don’t have to make any changes to my Dockerfile to upstream those changes into our builds.

The shorter answer for me is that I need to make sure the database setup occurs the way I expect it to every time I use it – and thus I manually recreate Graham’s setup in my own Dockerfile. That way Graham can make any changes to his Dockerfile he wants and it won’t affect our own builds.

Ultimately, we end up with a Postgres database (which accepts remote connections on startup) that we can easily use again and again.

5 thoughts on “Customizing Postgres in Docker”

  1. hi, thanks for the tutorial. when we run the image using
    `docker run -d -e DB_NAME=db -e DB_USER=admin -e DB_PASS=password macadmins/postgres`
    the container is exiting in no time. Also, it doesn't seem like the container is accepting any remote connections. Do we have to EXPOSE 5432 in our Dockerfile? any help is appreciated. Thank you.


  2. Yep, having this same issue with the container exiting after having completed the RUN.

    This is also happening on the official docker postgres image, so I imagine it may be related to the design of the base image.

    Any ideas why it would not stay alive as a background service anyone?

