How To – Caddy, WordPress Stack


Docker is a container management system that enables the sand-boxing and grouping together of applications which have dependencies on (or interact with) each other. Docker calls these groupings “Stacks”, which allow applications to be deployed and controlled as a single entity (as well as individually). A “Stack” also provides a way to define a public interface (exposed ports) for the group as a whole, and to hide any ports that are not essential to the outside world, thus minimising the attack surface presented to hackers.

Caddy is a web server that is simple to configure and control, and that can provide many services including (but not limited to):

  • Web server
  • Reverse proxy
  • Load balancing
  • Auto provisioning and renewal of TLS certificates

MariaDB is an SQL-based relational database server based on the widely known and used MySQL server.

WordPress is a popular, convenient and flexible Content Management System (CMS) / Blog / Website design tool.

Adminer is a Database Management System (DBMS) Graphical User Interface (GUI) that can be used to manage many different database server types including:

  • MariaDB
  • MySQL
  • PostgreSQL
  • SQLite
  • …and many more

Disclaimer

I am not an expert in any of the technologies or techniques that I use in this article. I am just feeling my way through and learning as I go, just like you probably are!

Objectives

  1. To gain some understanding of how to use the docker ecosystem to deploy and manage a simple service stack comprising multiple services.
  2. To have a locally hosted public blogging site that is secured by TLS (Transport Layer Security).
  3. To have the ability to quickly add new sites and services on different domains or sub-domains.
  4. To have a graphical interface for directly managing any back-end database that can only be accessed from hosts within the local network.

Strategy

The top level strategy is to implement an organised “stack” of connected applications, each within its own docker container. The advantages of this strategy are that:

  1. Each application is sand-boxed and is not constrained by the library version requirements of the other applications within the stack. So it doesn’t matter if two applications depend upon mutually conflicting library versions.
  2. Only the ports that need direct interaction with the outside world are exposed in the public interface of the stack. Thus, for example, the blogging application probably needs access to a database, but to external users those database accesses are completely hidden. Hence, although there is a database used internally by the stack, it is not exposed/visible from outside of the stack.
  3. Each service can be managed and upgraded independently of the others in the stack, as long as the public interface of the service remains unchanged. For example, access to the web service is purely through a port (80 for http or 443 for https), so it might initially be implemented using an “NGINX” container, but later changed to use a “Caddy” container.
  4. The entire stack could quickly be migrated to different hardware. For example, initially development may take place using a Raspberry Pi of some kind, but later on, as usage demand increases, the Pi could be replaced by a more powerful Intel based server (possibly even using a different underlying OS). As long as docker containers are available for that platform, the migration is simple.
  5. The solution can easily be scaled to meet increasing demand by implementing a docker swarm to spread the load over multiple physical servers.

Implementation

  1. Ensure that docker is installed
  2. Create a skeleton directory structure for the project
  3. Create docker-compose.yaml to define the structure of the stack
  4. Create “.env” file
  5. Create “php.ini” file
  6. Create “Caddyfile”
  7. Fire up the stack

Ensure that docker is installed

This article assumes the use of docker compose v2.x. To check if you have a compatible version open up a terminal and type “docker compose version” (without the quotes). You should get a response similar to the following:

docker compose screenshot
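For reference, the output is a single line reporting the compose version, something along these lines (the exact version number will of course differ):

Docker Compose version v2.27.0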

If the version is v2.x… then you’re good to go. If the version is less than v2, or you get a docker command unknown error, or even a “-bash: xxx command not found” error, then you need to install (or remove and then re-install) docker. To do this, follow any of the many excellent tutorials on the internet, such as the official installation guides from docker themselves.

Once you have a suitable version of docker installed, you should make sure that the user that you will be using to run docker commands has “docker” amongst the list of groups that it belongs to. To do this open up a terminal as that user and enter the command “groups”. You should get a response similar to below:

Screenshot of "groups" command

If you see “docker” amongst the list of group names output then you’re good to go. The above screenshot has been deliberately sized to demonstrate the need to be aware that names can wrap around from one line to the next (as it does for “docker” in this case).

If “docker” is missing from the list of groups then issue the command “sudo usermod -aG docker <your-user>”, where “<your-user>” is replaced by the actual username that will be issuing docker commands.

Screenshot of "sudo usermod -aG docker myuser" command

You will then need to log out and then log back in for the change to take effect. Issuing the “groups” command again should this time show “docker” as one of the groups belonged to.

Screenshot of "groups" command

Create a skeleton directory structure

Login as the user that you want to issue docker commands and open up a terminal. Execute the following commands:

cd ~
mkdir -p stacks/volumes/caddy2/config
mkdir -p stacks/volumes/caddy2/data
mkdir -p stacks/volumes/mariadb
mkdir -p stacks/volumes/www/wp_html

mkdir -p stacks/conf/caddy2
touch stacks/conf/caddy2/caddy2.env
touch stacks/conf/caddy2/Caddyfile

mkdir -p stacks/conf/mariadb
touch stacks/conf/mariadb/mariadb.env

mkdir -p stacks/conf/wp
touch stacks/conf/wp/wp.env

mkdir -p stacks/projects/caddy-stack
touch stacks/projects/caddy-stack/docker-compose.yaml
touch stacks/projects/caddy-stack/.env

That should have created the following directory structure directly beneath this user’s home directory:

tree stacks screenshot
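For reference, the layout created by the commands above looks roughly like this (note that “tree” does not list the hidden “.env” file unless you add the “-a” option):

stacks
├── conf
│   ├── caddy2
│   │   ├── Caddyfile
│   │   └── caddy2.env
│   ├── mariadb
│   │   └── mariadb.env
│   └── wp
│       └── wp.env
├── projects
│   └── caddy-stack
│       └── docker-compose.yaml
└── volumes
    ├── caddy2
    │   ├── config
    │   └── data
    ├── mariadb
    └── www
        └── wp_html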

I used the “tree” command utility to display the directories in a human-friendly form. If you don’t have this utility installed, you can install it with “sudo apt install tree”. Alternatively you could just use the standard “ls” command like this: “ls -aR stacks”, which gives an output similar to the following:

ls -aR stacks screenshot

Create docker-compose.yaml

OK. Now it’s time for the interesting bit. Here we define the “docker-compose.yaml” file, which is what the “docker compose” command uses to tell it exactly:

  • Which containers are to be created.
  • Which ports each container is to use internally.
  • Which internal ports are to be exposed on the public interface, and which public port number will be mapped to which internal port number.
  • What commands and parameters should be used to start up each container.
  • Where each container should store its data. That may be on a named volume which is never exposed outside of docker, or mapped to a real-world directory on the host system.
  • How applications will communicate with each other through virtual networks.
  • Which parameter values are to be mapped to which environment variables for ease of specifying configurations during deployment, or just specified as literal values directly in the compose file.
  • Sensitive data, such as passwords, can be supplied as “docker secrets” which are encrypted for security.

The “docker-compose.yaml” file is just a plain text, human-readable file. YAML, by the way, originally stood for “Yet Another Markup Language”, though it was later redefined as “YAML Ain’t Markup Language”. The file extension of “.yaml” will be used in this article, though a file extension of “.yml” is also commonly used: either is acceptable.

When viewed as a whole, the compose file can look complicated and overwhelming. However, the approach here will be to take a look at the docker-compose.yaml file structure and what sections and elements are available. Once some kind of appreciation of how to write a compose file has been gained, then the actual file will be built up slowly one section at a time with an explanation of why each parameter was chosen.

Background Reading

File Structure

A compose file comprises up to six top-level elements, not all of which are needed in every configuration: name, services, networks, volumes, secrets and configs (plus the now-obsolete version element, described next).

AFAIK, the order that the sections appear in the compose file is not critical. However, in this article the compose file will be built up in the order that they are described in the official docker documentation.
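For orientation, here is a bare skeleton showing just those top-level elements. It is a map of where everything discussed below will live rather than a runnable file, and the placeholder values are illustrative only:

name: caddy-stack   # optional project name

services:           # one entry per container in the stack
  # ...

networks:           # named networks shared between the services
  # ...

volumes:            # named volumes for persistent data
  # ...

configs:            # non-sensitive configuration data (not used in this stack)
  # ...

secrets:            # sensitive values such as passwords (not used here yet)
  # ...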

Version and top-level name

Actually, the “version” top-level element is no longer required. In fact it will be ignored by compose, and a warning that it is obsolete will be issued if you use it. However, it is mentioned here because the majority of files and tutorials to be found on the web still specify it. When used, it is the first thing in the compose file and takes the form version: '<version-number>', similar to this:

version: '3.7'

The top-level name is optional, but if specified it is used to set the ‘project name’ (or, if you use Portainer to manage your docker instances, it is what they use as the “stack name”). It is also made available as an environment variable called “COMPOSE_PROJECT_NAME”. If it is not specified (and many internet tutorials don’t specify it), then compose derives a default project name from the name of the directory containing the compose file.

For the purpose of this article you may have already noticed that I will be using a project name of “caddy-stack”. Thus at this point in time our “~/stacks/projects/caddy-stack/docker-compose.yaml” file contains just a single line as follows:

name: caddy-stack
Services top-level element

This is where most of the interesting stuff happens. Each container provides a service that may be used by other containers (services) and may also be exposed to the outside world (more on that later). Within the top-level “services” element is a definition for how each individual service (container) is configured and run.

All docker-compose.yaml files must contain a “services” section. Each service (container) within the “services” element is defined by a service name followed by a number of attributes, such as “image”, “container_name”, “volumes”, “networks”, and “depends_on”. There are a lot of attributes that could be used (around a hundred). However, most of them are optional, and a typical service (container) will need fewer than a dozen to be fully configured. Clearly there are far too many to cover all of them in this article (especially as many are rarely used), so only the relatively small number required for this stack (project) of containers will be covered here.

Each attribute that is required somewhere in the docker-compose.yaml file for this article will be briefly described. The file, and values used, for the actual stack of services will be built up later, once all of the relevant parts of a compose file have been investigated.
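As a taste of what is to come, here is a rough sketch of how a single service definition might look once a handful of these attributes are combined. The values used here (image tag, env file path, volume and network names) are illustrative placeholders only, not the final values for this stack:

services:
  mariadb:                                     # service name (also its hostname on the stack networks)
    image: mariadb:latest                      # which image and tag to run
    container_name: mariadb                    # fixed, human-friendly container name
    restart: unless-stopped                    # restart policy
    env_file: ../../conf/mariadb/mariadb.env   # environment variables passed to the container
    volumes:
      - db_data:/var/lib/mysql                 # named volume holding the database files
    networks:
      - backend                                # only reachable by services on the "backend" network

networks:
  backend:

volumes:
  db_data: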

Service attributes used in the caddy-stack compose file
image:

This is the docker image that should be used to create the container. It is specified using the syntax:

image: imagename:tagname

Where “imagename” is the name as specified in the repository (Docker Hub, at hub.docker.com, is often used as a source for prebuilt images) and is mandatory. Typical examples of names might be things such as “mariadb”, “wordpress”, or “bitnami/mariadb”.

And “tagname” is an optional tag that specifies an exact version of the image. This might be as simple as “latest” or “11.4”, or it might be as complex as “6.8.1-php8.1-fpm-alpine”. Projects tend to have a standard way of tagging their images, though the convention may vary between projects.
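To make that concrete, any one of the following forms would be a valid image specification (only one image line is used per service); the exact tags shown are just examples:

image: mariadb                              # no tag, which implies "latest"
image: mariadb:11.4                         # a simple version tag
image: wordpress:6.8.1-php8.1-fpm-alpine    # a detailed tag pinning the PHP variant and base OS
image: bitnami/mariadb:latest               # an image published under a user/organisation namespace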

container_name:

This is used to give a custom name by which the container can be referenced, which makes container management a lot more convenient. However, the downside to this is that compose will not be able to scale the service beyond a single container. For typical “home lab” use this is often not an issue and the advantages can far outweigh the disadvantages.

Example syntax would be:

container_name: mariadb
restart:

Specifies the policy to apply upon container termination. There are four possible values:

  • “no” – this is the default and indicates that the container has to be restarted manually. Note that this has to be in quotes to prevent the compose interpreter from changing the “no” to “False”.
  • always – the container always automatically tries to restart until it is removed.
  • on-failure[:max-retries] – Automatically restart if the exit code indicated an error. Optionally a maximum number of retries can be specified.
  • unless-stopped – Automatically restarts unless it was explicitly stopped or removed.

Example syntax would be:

restart: unless-stopped
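For completeness, the other policies would be written as follows (only one restart line is used per service); note the quotes around “no”:

restart: "no"           # quoted so the YAML parser doesn't turn it into a boolean
restart: always
restart: on-failure:3   # give up after three failed restart attempts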
ports:

Specifies the port mapping to apply between the host machine and the containers. This is important for allowing (or disallowing) external access to the service. Note that the external port used does not need to be the same as the one used internally between services. Also note that the “ports” attribute must not be used if the attribute “network_mode: host” is also specified (since that already exposes container ports directly to the host network).

There are two forms of specifying the mapping: short syntax for simple port mapping or long syntax which allows for extra parameters to be specified such as the network mode to use.

Short syntax is simply: [host_port:]container_port[/protocol] where:

  • the optional “host_port:” is either a simple port number, such as 80, or a range of port numbers such as 8000-8003
  • “container_port” is again either a simple port number such as 80, or a range of port numbers such as 8000-8003
  • the optional “/protocol” is either “tcp” which is the default, or “udp”

Example syntax might be:

ports:
  - "80"
  - "8080:80"
  - "9000-9002:80-82"
  - "6060:6060/udp"

Long form syntax allows for the following extra fields to be specified:

  • “target:” which is the container port
  • “published:” which is the publicly exposed port. If a range is given then it means the actual port will be assigned an available remaining port from the range.
  • “host_ip:” is the host IP mapping. If not specified it defaults to binding to all network interfaces (0.0.0.0).
  • “protocol:” is the protocol to use. Either “tcp” (default), or “udp”.
  • “app_protocol:” is the application layer protocol that the port is used for. For example “http”. It is used as a hint to allow compose to provide a richer behaviour for those protocols that it understands.
  • “mode:” how the port is published within a swarm setup. Options are:
    • “ingress”, which is the default, allows load balancing across the nodes of the swarm.
    • “host” publishes the port individually on every node in the swarm.
  • “name:” is a human readable name for the port, used to document port usage within the service.

Example syntax would be:

ports:
  - name: web
    target: 80
    host_ip: 127.0.0.1
    published: "8080"
    protocol: tcp
    app_protocol: http
    mode: host

  - name: web-secure
    target: 443
    host_ip: 127.0.0.1
    published: "8083-9000"
    protocol: tcp
    app_protocol: https
    mode: host
env_file:

Specifies one or more files containing environment variable definitions to be passed to the service (container). In its simplest syntactical form it is specified as:

env_file: ./my_service.env

Where “my_service.env” is a plain text file containing a list of environment variable definitions (one per line) to be passed to the service in which it has been included. The path to the file is specified relative to the location of the docker-compose.yaml file. Absolute paths are discouraged as they are not portable, and if used, compose will issue a warning to that effect.

Each line within the file is of the format var_name=var_value. An example file might look something like this:

# Primary domain for WordPress site
DOMAIN=example.com

# WordPress database credentials
WORDPRESS_DB_HOST=mariadb
WORDPRESS_DB_USER=wp

The “env_file” attribute can also be declared as a list of files like this:

env_file:
  - ./my_common.env
  - ./my_service.env

The list of files is parsed in the order given. If the same variable is defined in more than one file, then the last one to be parsed takes precedence.

In the list format of the declaration, each file can also be specified using the sub-attributes of:

  • path: which takes the same parameter as the simplest syntax
  • required: which is either “true” (which is the default), or “false”. Note that the values must be in quotes to prevent the compose interpreter changing them to boolean True and False. If false is specified and the file doesn’t exist at the given path, then the entry is silently ignored.
  • format: optional, but if specified then currently only the value “raw” is accepted. If not specified then normal compose env_file format rules are used to parse the file. If a value of “raw” is specified then the normal key=value syntax is still used, but compose will not try to parse the value for interpolation. This allows the value to be passed in exactly as written in the file, including any dollar or quotes.

An example of using the more complex list format might look like this:

env_file:
  - path: ./default.env
    required: "true" # default even if not specified

  - path: ./override.env
    required: "false" # just ignore it if file doesn't exist
    format: raw # pass in all values as-is including $
environment:

Used to directly define environment variables to be set within the container. The syntax can be either map syntax (“key: value”) or array syntax (“key=value” pairs). As elsewhere, boolean-like values such as “yes/no” and “true/false” should be written in quotes to prevent the compose YAML parser from converting them to True/False.

Map syntax example:

environment:
  USER_NAME: wp
  USER_AGE: "very old"

Array syntax example:

environment:
  - USER_NAME=wp
  - USER_AGE=very old

Note that any variable names set here take precedence over those within an “env_file:” file definition.

Finally, it should be noted that if a file with the name “.env” (no name part, just the extension) exists within the same directory as the docker-compose.yaml file, then that file will be automatically read by the interpreter before any part of the compose file is interpreted.

However, it should also be noted that this file is treated slightly differently to those specified by the “env_file” attribute. Its variables are intended for use by the parser (for interpolation) and are NOT automatically passed in to the container environments. To be passed in to a container, a variable must be explicitly listed under the service’s “environment” attribute, with the “.env” variable interpolated as its value (as in the example below).

The treatment of variables in the global “.env” file doesn’t seem to be explained very clearly in the docker documentation (at least as far as I can see). Hopefully an example will help to clarify things. Assuming that a global “.env” file exists with the following line in it:

BASE_PATH=/my/special/base/path

Then it would need the following within the service definition of any container which needed to have it passed to it:

environment:
  BASE_PATH: ${BASE_PATH}

A more typical, and proper, use is not within the container at all, but rather by the parser itself, for example for locating resources or naming images and services. For instance, assuming that a global “.env” file exists with the following line in it:

CADDY_IMAGE=caddy:2.10.0-alpine

Then a caddy container service could be created like this:

services:
  caddy:
    image: ${CADDY_IMAGE}

Note that in this case the environment variable is NOT passed in to the container, it is only used by the parser to define the service. Note also that no “environment” attribute is required to use it.
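Putting the two cases together, and assuming a single “.env” file next to the compose file containing both of the lines shown above, a fragment using them might look like this:

# .env (read automatically by the parser)
#   CADDY_IMAGE=caddy:2.10.0-alpine
#   BASE_PATH=/my/special/base/path

services:
  caddy:
    image: ${CADDY_IMAGE}       # used only by the parser; never visible inside the container
    environment:
      BASE_PATH: ${BASE_PATH}   # explicitly passed in, so the container does see this one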

volumes:

Provides a mechanism for specifying the container mount points for either physical host paths (known as bind mounts) or named docker volumes. Host paths are used where the host needs access to the persisted data from the container, while named docker volumes are used where the container has data that needs to be persisted but which only needs to be accessible from within the docker ecosystem.

Named volumes are the preferred mechanism for persisting data for a variety of reasons including:

  • Security – data within named volumes is only accessible within the docker ecosystem and not directly from the host.
  • Flexibility – Easier to backup or migrate than bind mounts.
  • Portability – completely portable between Linux and Windows containers.
  • Can be more safely shared between containers as they are completely managed by docker.

Volumes are specified within services (containers) using either the short syntax, or a longer more capable syntax.

The short syntax is very simple and is a single colon separated line of:

VOLUME:CONTAINER_PATH[:ACCESS_MODE]

where:

  • VOLUME is either a named volume defined in a top-level volumes element, or a physical host path to use in a bind mount.
  • CONTAINER_PATH is the mount point within the containers own file system.
  • ACCESS_MODE is an optional specifier which determines how the volume is to be accessed and must be one of:
    • rw – the default if not specified and indicates Read/Write access
    • ro – Read Only access
    • z – SELinux option indicating that the bind mount host content is shared among multiple containers.
    • Z – SELinux option indicating that the bind mount host content is private and unshared for other containers.

Short syntax example:

services:
  mariadb:
    image: mariadb:latest
    volumes:
      - db_data:/var/lib/mysql
      - ./mariadb/extra:/opt/shared/extra

volumes:
  db_data:

The longer form syntax allows extra configuration fields:

  • type:
    • volume – this is a named volume defined in a top-level volumes element.
    • bind – this mounts a path from the hosts own file system.
    • tmpfs – holds non-persistent data that exists only in memory.
    • image – mounts the contents of a docker image into the container (I haven’t needed this one yet).
    • npipe – a named pipe (mainly relevant on Windows hosts); I haven’t needed this one yet.
    • cluster – a Swarm cluster volume; I haven’t needed this one yet.
  • source: – the source of the mount. Either a path on the host FS, or a named docker volume.
  • target: – the path to the mount point within the container.
  • read_only: – flag to make the mounted volume read only.
  • bind:
    • propagation: – the propagation mode used for the bind.
    • create_host_path: – defaults to true. Creates a directory at the source path if it doesn’t already exist.
    • selinux: – The SELinux re-labeling option z (shared) or Z (private).
  • volume: – Additional volume options of:
    • nocopy: – Flag to disable copying of data from a container when a volume is created.
    • subpath: – Path inside a volume to mount instead of the volume root.
  • tmpfs:
    • size: –  The size for the tmpfs mount in bytes (either numeric or as bytes unit).
    • mode: – The file mode for the tmpfs mount as Unix permission bits as an octal number.
  • image: – Additional image options:
    • subpath: – Path inside the source image to mount instead of the image root.
  • consistency: – The consistency requirements of the mount. Available values are platform specific.

Long syntax example:

services:
  backend:
    image: example/backend
    volumes:
      - type: volume
        source: db-data
        target: /data
        volume:
          nocopy: true
          subpath: sub
      - type: bind
        source: /var/run/postgres/postgres.sock
        target: /var/run/postgres/postgres.sock

volumes:
  db-data:
networks:

Specifies the networks that the service container is attached to, along with optional alias names by which that service can be reached on each network. The names of these networks are defined under the top-level networks element (discussed later).

This is important as it helps to manage how services are segmented and how they can interact, both within the docker ecosystem and with the outside world. If the networks attribute is missing from a service definition, then the service will automatically be attached to the “default” network. If the intent is to deliberately isolate the service from all networks, then set “network_mode” to “none” (see the sketch below).
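As an illustration of that last point, a service that should be completely cut off from all networking could be declared like this (the service and image names are placeholders):

services:
  one-off-task:
    image: example/batch-task
    network_mode: none   # no networks at all, not even the default one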

In its simplest form, the syntax for the networks attribute is just a list of network names, like this:

services:
  some-service:
    networks:
      - some-network
      - other-network

Where “some-network” and “other-network” are defined in a top-level networks element. However, there is a longer form of the syntax which also allows one or more aliases to be defined for each network. The service can then be reached by its service name on any network to which it is attached; alternatively, if aliases have been defined, then other services attached to those specific networks can also reach it using one of the aliases defined for that network.

More complex example using the longer form syntax to create aliases:

services:
  frontend:
    image: example/webapp
    networks:
      - front-tier
      - back-tier

  monitoring:
    image: example/monitoring
    networks:
      - admin

  backend:
    image: example/backend
    networks:
      back-tier:
        aliases:
          - database
      admin:
        aliases:
          - mysql

networks:
  front-tier: {}
  back-tier: {}
  admin: {}

In the above, much more complex, example the service “frontend” is able to reach the “backend” service at either the hostname “backend” or the alias “database” on the “back-tier” network. The service “monitoring” is able to reach the same “backend” service at the hostname “backend” or the alias “mysql” on the “admin” network.

healthcheck:

The healthcheck attribute declares a check that’s run to determine whether or not the service container is “healthy”. It works in the same way, and has the same default values, as the HEALTHCHECK Dockerfile instruction which is set in the service’s Docker image. The Compose file can override the values set in the Dockerfile.

An example healthcheck for some web service could be along the lines of:

healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost"]
  interval: 1m30s
  timeout: 10s
  retries: 3
  start_period: 40s
  start_interval: 5s

Where:

  • test: specifies the command to be executed to check the health of the service. There are four forms in which the command can be specified:
    • The whole command as one string, which is run in the container’s default shell (/bin/sh), like this:
      test: "curl -f http://localhost"
    • Explicitly via the CMD-SHELL specifier, which also runs the command in the default shell (/bin/sh) so that shell features such as environment variables are available. Like this:
      test: ["CMD-SHELL", "curl -f http://localhost"]
    • Or, as in the example above, as a list using the CMD specifier. In this case the command is run directly without a shell and hence has no access to shell features such as variables. Like this:
      test: ["CMD", "curl", "-f", "http://localhost"]
    • Finally, using the NONE specifier, which disables the check. Like this:
      test: ["NONE"]
  • interval: specifies the time between successive health checks
  • timeout: specifies how long to wait for the check command to complete before treating it as failed
  • retries: specifies how many consecutive failures are needed before the container is considered unhealthy
  • start_period: specifies a start-up grace period during which failing checks do not count towards the retries limit
  • start_interval: specifies the time between checks during the initial start-up period

Alternatively the check can simply be disabled like this:

healthcheck:
  disable: true
depends_on:

Used to control the order of service startup and shutdown. It is useful if services are closely coupled, and the startup sequence impacts the application’s functionality.

The short form syntax simply provides a list of service names upon which this service depends. This service will not be started until after all those in the list have been started (more on when a service is considered started later). Similarly, this service will be stopped prior to stopping those services on which it depends.

Simple format syntax:

services:
  web:
    build: .
    depends_on:
      - db
      - redis
  redis:
    image: redis
  db:
    image: postgres

In the above example both db and redis services will be started and must be in a state of “ready” before attempting to start the web service. Similarly when stopping the project stack, the web service will be stopped before stopping the redis and db services.

Long format syntax allows additional fields to be defined that are not available in the short form syntax. Specifically:

  • restart: defaults to false, but when set to true this service will be restarted after updates have been made to the services it depends on. Note that this only applies to explicit restarts caused by a docker compose action, and not to automatic restarts after a failure.
  • condition: sets the condition under which a dependency is satisfied. Must be one of:
    • service_started – this is equivalent to the short form syntax effect
    • service_healthy – requires the dependency to have passed its healthcheck before this service is started. Note that a service can be “ready” without yet being declared “healthy”.
    • service_completed_successfully – Requires that the dependencies have run to a successful completion before this service will be started.
  • required: defaults to true, but if set to false then compose only warns you that the dependencies are not ready or available, but goes ahead and starts the service anyway.

Example of long form syntax:

services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy
        restart: true
      redis:
        condition: service_started
  redis:
    image: redis
  db:
    image: postgres

In this example compose starts both the db and redis services first. However, while it only waits for the redis service to become “ready”, it requires that the db service has also reached a state of “healthy” before starting the web service.

Also, if updates are being made to the db service (through compose), then the web service will be restarted afterwards.

When it comes to tearing down the project stack using compose, the web service will be stopped prior to stopping the db and redis services.

Networks top-level element

This is where you configure named networks that can be reused across multiple services. However, for a service to be able to use any of these networks it must have the networks attribute configured on it.

Using the short syntax, here is a simple example of how to setup multiple network segments:

services:
  frontend:
    image: example/webapp
    networks:
      - front-tier
      - back-tier

networks:
  front-tier:
  back-tier:

Here, two networks (“front-tier” and “back-tier”) are declared in the top-level networks element and the “frontend” service has been plumbed in to both of those networks.

A more complex form of the syntax is also available which provides access to a number of attributes that allow a more customised configuration to be achieved. The additional attributes are:

driver:

specifies which driver should be used for this network. Compose returns an error if the driver is not available on the platform. Possible values are:

  • bridge – The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are commonly used when your application runs in a container that needs to communicate with other containers on the same host (or just with the outside world), but not with containers on other nodes in a swarm.
  • host – Removes network isolation between the container and the Docker host. Use the host’s networking directly.
  • overlay – Overlay networks connect multiple Docker daemons together and enable Swarm services and containers to communicate across nodes. This strategy removes the need to do OS-level routing.
  • ipvlan – IPvlan networks give users total control over both IPv4 and IPv6 addressing. The VLAN driver builds on top of that in giving operators complete control of layer 2 VLAN tagging and even IPvlan L3 routing for users interested in underlay network integration.
  • macvlan – Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack.
  • none – Completely isolate a container from the host and other containers. none is, understandably, not available for Swarm services.
networks:
  db-data:
    driver: bridge
driver_opts:

specifies a list of options as key-value pairs to pass to the driver. These options are driver-dependent.

networks:
  frontend:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "127.0.0.1"
attachable:

If attachable is set to true, then standalone containers will be able to attach to this network, in addition to services within this stack. If a standalone container attaches to the network, it can communicate with services and other standalone containers that are also attached to the network (even if they are in another stack).

networks:
  mynet1:
    driver: overlay
    attachable: true
enable_ipv4:

Enables or disables IPv4 address assignment. It defaults to true, so in practice it is used to disable IPv4, as in this IPv6-only example:

networks:
  ip6net:
    enable_ipv4: false
    enable_ipv6: true
enable_ipv6:

Enables IPv6 address assignment.

networks:
  ip6net:
    enable_ipv6: true
external:

Defaults to false, but if set to true:

  • Specifies that this network’s life-cycle is maintained outside of that of the application. Compose doesn’t attempt to create these networks, and returns an error if one doesn’t exist.
  • All other attributes apart from name are irrelevant. If Compose detects any other attribute, it rejects the Compose file as invalid.

In the following example, proxy is the gateway to the outside world. Instead of attempting to create a network, Compose queries the platform for an existing network simply called outside and connects the proxy service’s containers to it. The network “outside” could, for example, have been defined in another stack with the “attachable” attribute set.

services:
  proxy:
    image: example/proxy
    networks:
      - outside
      - default
  app:
    image: example/app
    networks:
      - default

networks:
  outside:
    external: true
ipam:

Specifies a custom IPAM configuration. This is an object with several properties, each of which is optional:

  • driver: Custom IPAM driver, instead of the default.
  • config: A list with zero or more configuration elements, each containing a:
    • subnet: Subnet in CIDR format that represents a network segment.
    • ip_range: Range of IPs from which to allocate container IPs.
    • gateway: IPv4 or IPv6 gateway for the master subnet.
    • aux_addresses: Auxiliary IPv4 or IPv6 addresses used by Network driver, as a mapping from hostname to IP.
  • options: Driver-specific options as a key-value mapping.
networks:
  mynet1:
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
          ip_range: 172.28.5.0/24
          gateway: 172.28.5.254
          aux_addresses:
            host1: 172.28.1.5
            host2: 172.28.1.6
            host3: 172.28.1.7
      options:
        foo: bar
        baz: "0"
internal:

By default, Compose provides external connectivity to networks. internal, when set to true, lets you create an externally isolated network.
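Following the pattern used for the other attributes, a sketch of its use might be (assuming a “back-tier” network that should only carry traffic between the stack’s own services):

networks:
  back-tier:
    internal: true   # no route out to the host network or the internet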

labels:

Add metadata to containers using labels. You can use either an array or a dictionary.

It is recommended that you use reverse-DNS notation to prevent labels from conflicting with those used by other software.

networks:
  mynet1:
    labels:
      com.example.description: "Financial transaction network"
      com.example.department: "Finance"
      com.example.label-with-empty-value: ""
networks:
  mynet1:
    labels:
      - "com.example.description=Financial transaction network"
      - "com.example.department=Finance"
      - "com.example.label-with-empty-value"
name:

Sets a custom name for the network. The name field can be used to reference networks which contain special characters. The name is used as is and is not scoped with the project name.

networks:
  network1:
    name: my-app-net

It can also be used in conjunction with the external property to define the platform network that Compose should retrieve, typically by using a parameter so the Compose file doesn’t need to hard-code runtime specific values:

networks:
  network1:
    external: true
    name: "${NETWORK_ID}"
Volumes top-level element

This is where you configure named volumes of storage that can be reused across multiple services. However, for a service to be able to use any of these volumes it must have the volumes attribute configured on it.

Using the short syntax, here is a simple example of how to share a named volume between multiple services:

services:
  backend:
    image: example/database
    volumes:
      - db-data:/etc/data

  backup:
    image: backup-service
    volumes:
      - db-data:/var/lib/backup/data

volumes:
  db-data:

In the above imaginary example the “backend” service shares its “db-data” volume with a “backup” service so that it can have its data periodically backed up. Notice that it is quite alright for the same volume to be mounted at different locations within each of the services; the mount point should be wherever that particular service expects to find the data.

There is also a longer syntax version of the volumes element which allows for a more customised configuration of the volume. The following attributes are available:

driver:

Specifies which volume driver should be used. If the driver is not available, Compose returns an error and doesn’t deploy the application.

volumes:
  db-data:
    driver: foobar
driver_opts:

Specifies a list of options as key-value pairs to pass to the driver for this volume. The options are driver-dependent.

volumes:
  example:
    driver_opts:
      type: "nfs"
      o: "addr=10.40.0.199,nolock,soft,rw"
      device: ":/docker/example"
external:

Defaults to false, but if set to true:

  • Specifies that this volume already exists on the platform and its lifecycle is managed outside of that of the application. Compose then doesn’t create the volume and returns an error if the volume doesn’t exist.
  • All other attributes apart from name are irrelevant. If Compose detects any other attribute, it rejects the Compose file as invalid.

In the following example, instead of attempting to create a volume called {project_name}_db-data, Compose looks for an existing volume simply called db-data and mounts it into the backend service’s containers.

services:
  backend:
    image: example/database
    volumes:
      - db-data:/etc/data

volumes:
  db-data:
    external: true
labels:

Are used to add metadata to volumes. You can use either an array or a dictionary.

It’s recommended that you use reverse-DNS notation to prevent your labels from conflicting with those used by other software.

volumes:
  db-data:
    labels:
      com.example.description: "Database volume"
      com.example.department: "IT/Ops"
      com.example.label-with-empty-value: ""
volumes:
  db-data:
    labels:
      - "com.example.description=Database volume"
      - "com.example.department=IT/Ops"
      - "com.example.label-with-empty-value"

Compose sets com.docker.compose.project and com.docker.compose.volume labels.

name:

Sets a custom name for a volume. The name field can be used to reference volumes that contain special characters. The name is used as is and is not scoped with the stack name.

volumes:
  db-data:
    name: "my-app-data"

This makes it possible to make this lookup name a parameter of the Compose file, so that the model ID for the volume is hard-coded but the actual volume ID on the platform is set at runtime during deployment.

For example, if DATABASE_VOLUME=my_volume_001 is in your .env file:

volumes:
  db-data:
    name: ${DATABASE_VOLUME}

Running docker compose up uses the volume called my_volume_001.

It can also be used in conjunction with the external property. This means the name used to look up the actual volume on the platform is set separately from the name used to refer to the volume within the Compose file:

volumes:
  db-data:
    external: true
    name: actual-name-of-volume
Secrets top-level element

Currently this project doesn’t make use of secrets (though it will at some point in the future), so for now I’m just putting a link to the official docker documentation here.

Configs top-level element

It’s getting late and I am struggling to get my head around what configs are used for and how to use them. So for now I’m just putting a link in to the official docker documentation here.

Building up the docker-compose.yaml file

xx

Create “.env” file

xxx

Create “php.ini” file

xxx

Create “Caddyfile”

xxx

Fire up the stack

xxx
