How to pass environment variables to Docker containers

Jack Wallen shows you how to pass environment variables to Docker containers for a more efficient development process.

docker-with-waves.jpg

Illustration: Lisa Hornung/TechRepublic

Did you know you can pass environment variables from your host to Docker containers? By using this feature, you can make your development of those containers a bit more efficient. But before we get into the how of this, we need to address the what—as in, what are environment variables? 

SEE: The best programming languages to learn–and the worst (TechRepublic Premium)

Environment variables are dynamically named values that can be stored and then passed to services, applications or scripts. This is an easy way of storing a value in a centralized location (usually in memory) and then using it globally. To find out more about what environment variables are, check out “Linux 101: What are environment variables?”
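As a minimal sketch of that idea: a variable exported in your shell is inherited by any process the shell starts. (GREETING is a made-up name used purely for illustration.)

```shell
# GREETING is a hypothetical variable, exported so child processes inherit it
export GREETING="Hello from the environment"

# This child shell never set GREETING itself -- it inherited the value
sh -c 'echo "$GREETING"'
```

The child process prints the value even though it was set in the parent shell, which is exactly the mechanism Docker builds on.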

These types of variables are an incredibly handy trick to have up your sleeve. And if you’re a container developer, they can help make your job a bit easier.

Let me show you how.

What you’ll need

The only thing you’ll need to pass environment variables to Docker containers is a running instance of Docker and a user that is part of the docker group. That’s it. Let’s pass some variables.

How to set an environment variable

To pass an environment variable to a container, we first have to set it. I’m going to demonstrate this in Linux. If you use a different operating system for container development, you’ll need to find out how to do the same on your platform of choice.

Let’s say we want to set the variable for a database user and we plan on using that variable for different containers. We could set a variable called DB_USER, which would work for any container using any type of database. Let’s say the value for DB_USER will be TechRepublic. To set that variable, we’d issue the command:

export DB_USER=TechRepublic

To verify that variable has been set, issue the command:

echo $DB_USER

You should see TechRepublic printed in the terminal. That’s it, you’ve set your variable. Let’s take this one step further (for the sake of example) and also set a password as an environment variable. You wouldn’t do this in production, but it’s a good way to illustrate how this is done. Set an environment variable for the password with:

export DB_PWORD=T3chR3public
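If you want to confirm both variables at once rather than echoing them one at a time, you can filter the environment listing, for example:

```shell
# Set the two example variables from above
export DB_USER=TechRepublic
export DB_PWORD=T3chR3public

# List every exported variable whose name starts with DB_
env | grep '^DB_'
```

Both NAME=VALUE pairs should appear in the output.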

How to pass the variable to a container

Now that you understand how environment variables work, you can see how they can easily be passed to your containers. First I’ll demonstrate how to do it from the docker command line, and then using an .env file.

Passing a host variable to a container isn’t quite the same as using it within the host system. The docker run -e flag takes the variable’s name, not a $-expansion: if you write -e $DB_PWORD, the shell substitutes the value before Docker ever runs, so Docker sees -e T3chR3public, which defines an empty variable named T3chR3public inside the container. The correct way to forward the host variables we just set is:

docker run --name postgresql -e DB_PWORD -e DB_USER -d postgres

Even so, if you deploy the container this way, it will run but immediately exit. Why? Because unlike the Linux system, where you can pretty much define environment variables however you like, container images expect certain variables. For example, the PostgreSQL image ignores DB_PWORD and DB_USER; it expects POSTGRES_PASSWORD and POSTGRES_USER, and it exits if POSTGRES_PASSWORD isn’t set. To that end, you could set those environment variables on your Linux host with the commands:

export POSTGRES_PASSWORD=t3chr3public
export POSTGRES_USER=TechRepublic
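Here’s a quick sketch of why the dollar sign matters on the docker command line: the shell expands $VAR to its value before docker runs, so -e needs the bare variable name to forward a host variable.

```shell
export POSTGRES_PASSWORD=t3chr3public

# With a dollar sign, the shell substitutes the VALUE before docker runs:
echo "docker run -e \$POSTGRES_PASSWORD ..."  # what you typed
echo "docker run -e $POSTGRES_PASSWORD ..."   # what docker actually sees

# Without the dollar sign, -e POSTGRES_PASSWORD tells docker to copy the
# host variable (name and value) into the container's environment.
```

The second echo shows the password value leaking into the command line, which is another reason to prefer the bare-name form.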

OK, now we can run that same command with:

docker run --name postgresql -e POSTGRES_PASSWORD -e POSTGRES_USER -d postgres

The command will succeed and the container will remain running. You can test it by accessing the PostgreSQL console within the container by issuing:

docker exec -it postgresql psql -U $POSTGRES_USER

You should find yourself at the PostgreSQL console within your container.

How to pass variables with an .env file

One of the problems with passing environment variables as described above is that they linger in your shell session (until you clear them with the unset command) and the export commands land in your shell history. To avoid this, we use an environment variable file.
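For instance, clearing one of the earlier variables from the current shell looks like this:

```shell
export DB_USER=TechRepublic
unset DB_USER

# ${DB_USER:-<unset>} prints the fallback text once the variable is gone
echo "${DB_USER:-<unset>}"
```

After unset, the variable is undefined in that shell and no longer inherited by new processes.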

Let’s stick with the same variables we used above. Create a new .env file with the command:

nano .env

In that file paste the following:

POSTGRES_PASSWORD=t3chr3public
POSTGRES_USER=TechRepublic

Save and close the file.
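If you’d rather not open an editor, you can create the same file from the command line and, since it holds a credential, restrict it so only your user can read it (a precaution, not something the docker command requires):

```shell
# Write the two variables to .env (same content as above)
cat > .env <<'EOF'
POSTGRES_PASSWORD=t3chr3public
POSTGRES_USER=TechRepublic
EOF

# The file contains a password, so limit read access to the owner
chmod 600 .env
```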

Now, how do we pass those variables? Simple: we’d issue the command:

docker run --name postgresql --env-file .env -d postgres

Make sure to use the full path to your .env file (if you’re not running the docker command from the same directory housing the file). Your container will deploy and be ready to use. You can test it by accessing the PostgreSQL console. The only difference is that you have to type out the user manually, as this time we didn’t set the POSTGRES_USER variable in the host system. That command would be:

docker exec -it postgresql psql -U TechRepublic

And that’s how you pass environment variables to a Docker container, either from the command line or using an .env file. Hopefully, you can use this method in your developer workflow to make things a bit more efficient.
