Salting your LXC Container Fleet

| devops | linux |

SaltStack is an awesome configuration management system that can make managing 10 to 10,000 servers very simple. Salt can be used to deploy, manage, configure, report on, and even troubleshoot all of your servers. It can also be used to manage a fleet of LXC containers, which is what we will be doing in this blog post.

If you have been reading this blog, you know that I love Linux Containers, and I am using them for pretty much everything these days. Salt is a great way to keep track of and manage all of these containers. On my main server, I have three containers running various applications. To update the packages on these containers I would have to log into each one and run apt-get update and apt-get upgrade. This is not so bad for three containers, but you can imagine how annoying and cumbersome it gets as your container list grows. This is where Salt comes to the rescue: with Salt I can update all of these containers with a single command.

The official Salt Walkthrough is a great place to start learning about how Salt works. This short post will show you how to set up a small Salt configuration on a single server that is hosting several containers. All of my containers are pretty boring because they run Ubuntu 14.04. The best part about Salt is that it is really OS agnostic and can manage a diverse fleet of different versions and types of operating systems. For this post, my host and all of my LXC containers are running Ubuntu 14.04 LTS.

Salt works by having a master that manages a bunch of minions. Setting up the Salt master is a breeze. For the purposes of this blog post, the master is your host server and the minions are your LXC containers.

Setting up Salt Master

On your host server you will need to install salt master. First we will need to add the saltstack repo to our repository list:
sudo add-apt-repository ppa:saltstack/salt
Next we will install the salt-master:
sudo apt-get update 
sudo apt-get install salt-master
Once the Salt master is installed it will start running right away. By default it listens on ports 4505 and 4506. You can verify this by running netstat -plntu | grep python to see which ports it is currently listening on.

Setting up your Firewall

One thing I ran into during the installation was getting the firewall working. This is all running on a Linode, and I used Linode’s Securing Your Server guide to set up my firewall. If you have a similar setup you can add the following lines to /etc/iptables.firewall.rules to allow the minions to communicate with the master.
# Allow Minions from these networks
# (10.0.3.0/24 is LXC's default management network; adjust if yours differs)
-I INPUT -s 10.0.3.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT

# Allow Salt to communicate with Master on the loopback interface
-A INPUT -i lo -p tcp -m multiport --dports 4505,4506 -j ACCEPT

# Reject everything else
-A INPUT -p tcp -m multiport --dports 4505,4506 -j REJECT

LXC gives you a nice “management network” where the containers can communicate with the host using private IP addresses. The easiest way to set this up is to allow the entire range of this network (10.0.3.0/24 by default) through the firewall. For security purposes I am rejecting all other IP addresses. Once you have configured your firewall, load the new rules to enable them.
sudo iptables-restore < /etc/iptables.firewall.rules
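To sanity-check that the rules loaded, you can list the INPUT chain and filter for the Salt ports. Here is a sketch run against a captured sample line; on the real host you would pipe `sudo iptables -S INPUT` into the grep instead:

```shell
# Sample of one line `sudo iptables -S INPUT` prints once the rules are in
# place (your real output will include all of your other rules too):
sample='-A INPUT -i lo -p tcp -m multiport --dports 4505,4506 -j ACCEPT'

# Keep only the rules that mention the Salt ports:
printf '%s\n' "$sample" | grep -E '4505|4506'
```

If the grep prints nothing on your host, the rules file was not loaded.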

Setting up your Minions

Once your master is set up, running, and allows minions through the firewall we can set up the minions. Since LXC is a pretty barebones system we will need to install a couple of prerequisites first to get everything working. First we want to log into our container. I usually run the containers in a screen session so it would look something like this.
screen -dRR container1
lxc-attach -n container1
Once we are inside of our container, install the following packages:
sudo apt-get install software-properties-common  
sudo add-apt-repository ppa:saltstack/salt  
sudo apt-get update  
sudo apt-get install salt-minion
Now our minion is installed! It will need to know where to find the master. In our case we are running everything on the management network. The easiest way to get it to find the master is to add the IP address of the master to our /etc/hosts configuration. If you are not sure what the IP address of the master is, you can run ip a | grep inet on the master and look for the IP address that starts with a 10.
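If you would rather pull that address out programmatically than eyeball the grep output, something like this works. It is a sketch shown against a sample line of `ip a` output, assuming the default 10.x management network and the default lxcbr0 bridge name:

```shell
# One line of `ip a` output from the host's LXC bridge (sample text;
# run `ip a | grep inet` on your master to get the real thing):
sample='    inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0'

# Grab the address field and strip the /24 prefix length:
master_ip=$(printf '%s\n' "$sample" | awk '/inet /{sub(/\/.*/, "", $2); print $2}')
echo "$master_ip"
```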
vim /etc/hosts
# Add a line mapping the master's IP to the name "salt", e.g.:
# <master-ip>    salt
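Alternatively, instead of the hosts-file alias, a minion can be pointed straight at the master in its own configuration file. The address below is an example from the default LXC management network; use your master's actual IP:

```yaml
# /etc/salt/minion
master: 10.0.3.1
```

Restart the minion after changing this file so it picks up the new setting.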
To start it up, we will simply run:
/etc/init.d/salt-minion start
Before the minion is able to communicate with the master, its key must be accepted. Back on the Salt master you will need to run salt-key -A in order to accept the key from your minion. You should see the name of your container pop up, and you will want to answer ‘Y’ to accept its key. You can test that everything is working by running:
salt '*' test.ping
Your output should look something like this:
hci:
    True
git:
    True
usel:
    True
That’s it! This may seem like a bit of work, but it is totally worth it, because now every time we need to do anything on these containers we can simply use Salt instead of logging into each one. You can simply repeat these steps for each additional container until you have an entire fleet of salted minions.
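As a concrete example of that payoff, the package-update chore from the introduction collapses into a couple of fleet-wide commands using Salt's cross-distro pkg module. These are run on the master and need a live master and accepted minions, so treat them as a sketch:

```shell
# Refresh package lists on every accepted minion (apt-get update equivalent):
salt '*' pkg.refresh_db

# Upgrade packages everywhere (apt-get upgrade equivalent):
salt '*' pkg.upgrade
```

Targeting with '*' hits every minion; you can narrow it to a single container by using its name in place of the glob.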

Thank you for reading! Share your thoughts with me on mastodon or via email.
