Salting your LXC Container Fleet

SaltStack is an awesome configuration management system that can make managing 10 to 10,000 servers very simple. Salt can be used to deploy, manage, configure, report on, and even troubleshoot all of your servers. It can also be used to manage a fleet of LXC containers, which is what we will be doing in this blog post.

If you have been reading this blog, you know that I love Linux Containers, and I am using them for pretty much everything these days. Salt is a great way to keep track of and manage all of these containers. On my main server, I have three containers running various applications. To update the packages on these containers I would have to log into each one and run apt-get update and apt-get upgrade. This is not so bad for three containers, but you can imagine how annoying and cumbersome this gets as your container list grows. This is where Salt comes to the rescue: with Salt I can update all of these containers with a single command.

The official Salt Walkthrough is a great place to start learning about how Salt works. This short post will show you how to set up a small Salt configuration on a single server that is hosting several containers. All of my containers are pretty boring because they run Ubuntu 14.04. The best part about Salt is that it is OS agnostic and can manage a diverse fleet of different versions and types of operating systems. For this post, my host and all of my LXC containers are running Ubuntu 14.04 LTS.

Salt works by having a master that manages a bunch of minions. Setting up the Salt master is a breeze. For the purposes of this blog post, the master is your host server and the minions are your LXC containers.

Setting up Salt Master

On your host server you will need to install the Salt master. First we will need to add the SaltStack repo to our repository list:

sudo add-apt-repository ppa:saltstack/salt

Next we will install the salt-master:

sudo apt-get update 
sudo apt-get install salt-master

Once the Salt master is installed it will start running right away. By default it listens on ports 4505 and 4506. You can verify this by running netstat -plntu | grep python to see which ports it is currently listening on.
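If netstat is not handy, a quick way to check those two ports (from the master itself or from a minion) is a short Python sketch; the host value here is illustrative:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        return sock.connect_ex((host, port)) == 0
    finally:
        sock.close()

# On the Salt master itself you would expect both the publish port (4505)
# and the request port (4506) to be open.
for port in (4505, 4506):
    print(port, port_open("", port))
```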

Setting up your Firewall

One thing I ran into during the installation was getting the firewall working. This is all running on a Linode, and I used Linode’s Securing Your Server guide to set up my firewall. If you have a similar setup you can add the following lines to /etc/iptables.firewall.rules to allow the minions to communicate with the master.

# Allow Minions from these networks 
-I INPUT -s -p tcp -m multiport --dports 4505,4506 -j ACCEPT  

# Allow Salt to communicate with Master on the loopback interface 
-A INPUT -i lo -p tcp -m multiport --dports 4505,4506 -j ACCEPT  

# Reject everything else 
-A INPUT -p tcp -m multiport --dports 4505,4506 -j REJECT

LXC gives you a nice “management network” where the containers can communicate with the host using private IP addresses. The easiest way to set this up is to allow the entire range of this management network (the source network given to -s in the first rule above) through the firewall. For security purposes I am rejecting all other IP addresses. Once you have configured your firewall you will want to load the new firewall settings to enable them.

sudo iptables-restore < /etc/iptables.firewall.rules

Setting up your Minions

Once your master is set up, running, and allows minions through the firewall we can set up the minions. Since LXC is a pretty barebones system we will need to install a couple of prerequisites first to get everything working. First we want to log into our container. I usually run the containers in a screen session so it would look something like this.

screen -dRR container1
lxc-attach -n container1

Once we are inside of our container, install the following packages:

sudo apt-get install software-properties-common  
sudo add-apt-repository ppa:saltstack/salt  
sudo apt-get update  
sudo apt-get install salt-minion

Now our minion is installed! It will need to know where to find the master. In our case we are running everything on the management network. The easiest way for the minion to find the master is to add the IP address of the master to our /etc/hosts configuration. If you are not sure what the IP address of the master is, you can run ip a | grep inet on the master and look for the IP address that starts with a 10.

vim /etc/hosts
# Now add a line with the master's IP address followed by the hostname "salt"

To start it up, we will simply run:

/etc/init.d/salt-minion start

Before the minion is able to communicate with the master, its key must be accepted. Back on the salt-master you will need to run salt-key -A in order to accept the key from your minion. You should see the name of your container pop up, and you will want to answer 'Y' to accept its key. You can test that everything is working by running:

salt '*'

Your output should look something like this:

    True
    True
    True

That’s it! This may seem like a bit of work, but it is totally worth it, because now every time we need to do anything on these containers we can simply use Salt instead of logging into each one. You can simply repeat these steps for each additional container until you have an entire fleet of salted minions.


Programming Sockets in Python

EDIT: I am sorry for derping out of control. When I initially published this post it was called “Programming Web Sockets in Python”, which is just flat out wrong. What we are making here is a regular socket. Web sockets and regular sockets are similar but are certainly not the same thing. I hope you will still find this useful!

Sockets are pretty much the basis of how applications work on the Internet. Python makes it super easy to get started programming sockets. In this brief introduction we will create a simple server that greets the user when it receives incoming requests from the client application. Due to my recent obsession with Linux Containers we will also be implementing this inside of two containers. Containers make it really simple to simulate a network because you can create additional hosts in seconds.

Creating your Containers

I am running Ubuntu 14.04, so creating two additional containers can be achieved by running the following as the root user:

lxc-create -t download -n pyServer 

# Choose ubuntu, trusty, amd64 when prompted 
# Then clone the first container 

lxc-clone -o pyServer -n pyClient

Running the Server

Now that we have created our containers, let's jump into our server container and fire up our simple server application. We can start the container by issuing the following command as root: lxc-start -n pyServer -d. This starts the container as a daemon. Let's go ahead and get into it by attaching to the container. I like to do this inside of screen so that we can easily get in and out of the container. Create a screen session with screen -dRR pyServer, and once inside the screen attach to the container with lxc-attach -n pyServer. Once we are inside the container we need to install Python and launch our simple server.

apt-get install python vim

Inside of vim (or your favorite text editor) we need to enter the following simple python code.

from socket import *  

serverPort = 12000  
serverSocket = socket(AF_INET, SOCK_DGRAM)  
serverSocket.bind(('', serverPort))  
print "The server is ready to rock and roll!"  

while 1:     
    name, clientAddress = serverSocket.recvfrom(2048)     
    response = "Hello " + str(name) + "! You are really good at socket programming"     
    serverSocket.sendto(response, clientAddress)

The code should be pretty straightforward. We are creating a new serverSocket that is bound to port 12000. When it receives a request (which includes a name), it responds with an encouraging message. Save the file and fire up the server by running it with python. If all goes well you should see the message The server is ready to rock and roll! Exit the container (and the screen session) by pressing Ctrl+a and then Ctrl+d.

Running the Client

Now that we have our server up and running, let's get our client working as well. Before we move forward, let's grab the IP address of our server container because we will need it soon. You can get the IP by running lxc-ls --fancy. Launch the client container, attach to it in screen, and install Python in the same way that we did previously.

lxc-start -n pyClient -d
screen -dRR pyClient
lxc-attach -n pyClient
apt-get install python

In vim, let's create the program by entering the following code.

from socket import *  

# Replace the IP address in serverName with the IP of your container that you grabbed previously. 

serverName = '' 
serverPort = 12000 
clientSocket = socket(AF_INET, SOCK_DGRAM)  
name = raw_input('Please enter your name:')  
clientSocket.sendto(name, (serverName, serverPort)) 

response, serverAddress = clientSocket.recvfrom(2048) 
print response 

This code is also pretty straightforward. It asks the user for their name, sends it to the server, and prints the response. You can try this out now! Save the file and run it with python. After entering your name and pressing Enter you should see the encouraging response from your server. This was a pretty trivial exercise, but we can quickly see how to expand upon this basic code to create much more interesting and complex applications. We can also leverage the power and simplicity of LXC to simulate a large network for distributed applications.
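To see the whole exchange without any containers, here is a self-contained Python 3 sketch of the same request/response logic running both endpoints over the loopback interface (the code above is Python 2, so the strings become bytes here):

```python
import socket
import threading

def serve_once(server_socket):
    # Same logic as the server loop above, for a single request.
    name, client_address = server_socket.recvfrom(2048)
    response = b"Hello " + name + b"! You are really good at socket programming"
    server_socket.sendto(response, client_address)

# Bind the server to an ephemeral port on localhost.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("", 0))
port = server.getsockname()[1]

thread = threading.Thread(target=serve_once, args=(server,))

# The client side: send a name, print the reply.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"Lev", ("", port))
response, _ = client.recvfrom(2048)
print(response.decode())  # Hello Lev! You are really good at socket programming

client.close()
server.close()
```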


Proxy Everything into a Linux Container with nginx

I previously wrote about setting up Node.js + Ghost in an Ubuntu LXC container and using Apache to proxy all web requests into that container. This works pretty well for the most part, but nginx seems like a much better tool for the job since it was pretty much designed to be a proxy server.

We have a server that we are using for all Bit-Monkeys projects, and I recently set up GitLab, along with a development site for openfaqs, inside of LXC containers. The main benefit of this approach is that you can isolate the environments, manage upgrades and updates of various pieces separately, and fix issues in one environment without bringing down your entire infrastructure.

Setting this up to work with nginx is super easy. First you will need to grab the IP address of your container, which you can easily get by running the following as the root user:

lxc-ls --fancy

Once you have the IP address of the container, you will need to install nginx. We are running Ubuntu 14.04 so it is as simple as apt-get install nginx. The last step is to create a virtual host config file for your container.

vim /etc/nginx/sites-available/yoursite

The contents of this file should look something like this:

server {
  listen 80;
  server_name;                 # replace with the name of your site

  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass;       # replace with your container's IP and port
    proxy_redirect /; # same IP and port
    port_in_redirect off;
    proxy_connect_timeout 300;
  }
}

First you should replace the server_name directive with the name of your site. Next you will want to change the IP address in the proxy_pass and proxy_redirect directives to the IP address of your container. We are running Flask, which is why it is routing to port 5000; you should replace the port with whatever port your application is running on. After this has been completed you should make a symbolic link into the /etc/nginx/sites-enabled directory and restart nginx.

ln -s /etc/nginx/sites-available/yoursite /etc/nginx/sites-enabled/yoursite  

service nginx restart

If all goes well, you will now be able to enter the name of your site in the browser and be served with whatever content or application is running inside of your container. This is a really great use case for containers in my opinion, and nginx makes it easier than ever to get started.

UPDATE: You can just as easily add a server block for port 443 to proxy all HTTPS requests into the container as well. (Thanks to stmiller via reddit for the question.)

Sweet, now that you have mastered nginx proxies with LXC, check out the complete guide to nginx high performance.
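A minimal sketch of that 443 server block might look like the following; the certificate paths, server name, and container address are placeholders rather than values from this setup:

```nginx
server {
    listen 443 ssl;
    server_name;                  # placeholder

    # Placeholder certificate paths
    ssl_certificate     /etc/ssl/certs/yoursite.crt;
    ssl_certificate_key /etc/ssl/private/yoursite.key;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass;       # your container's IP and port
    }
}
```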


UbuTab Case Study: How to be Taken Seriously

I was taking a look at the UbuTab, which is supposedly an upcoming Ubuntu/Android tablet that failed to meet its Indiegogo campaign goal last month. A couple of things about this site just make me sad. I see a lot of shady businesses all the time with no SSL during checkout, no real email address, and a lazy theme for their site. I am not sure if they are just lazy, don't care, or a combination of both. This post is really a PSA: the 1990s are over. People expect a higher level of quality in your product, your website, and your brand. If you are launching a new product, here are a few tips to be taken seriously.

Use a Real Email Address

Use an email address at your own domain. It is not difficult to have a custom domain for your email address. Gandi gives them away for free when you purchase a domain. Having a custom domain instead of a free email provider makes your company seem more legitimate.

Don’t Send Customers to Paypal

Use a payment processing system, or at least something like Stripe that is integrated into your website, instead of redirecting users to a PayPal checkout page that sends their payment to a company with a different name from your own. This is just silly. I would love to have a tablet that runs Ubuntu, but this entire operation just screams scam to me. It will be interesting to see if they actually release a product later this year.


It looks like the actual site was taken down. You can see an archive of the site here.


Installing Node.js + Ghost in an Ubuntu 14.04 LXC Container

I had to set up a blog for an interaction design course that I am taking this semester. I figured this would be the perfect opportunity to play with Node.js and work with the absolutely beautiful Ghost blogging platform. I installed all of this on my primary Linode server that hosts this blog, among other things. I like to keep my server neat, so whenever I am working with a new technology that I have not used before, especially when I know it is going to install a bunch of random files and run a bunch of random scripts that I will never be able to track down, I like to put it all into an LXC container.

Aside from a few snags, the installation was pretty straightforward. I followed the installation guide on the Ghost GitHub site and had to make a couple of small changes due to this issue. The interesting part was getting the container to be accessible from the outside world. I used Apache's mod_proxy module to forward requests for the new subdomain that I created directly to Ghost, which was running in my LXC container. I have seen a couple of different approaches to making containers accessible to the outside world, but I think that this approach works well, especially if you are hosting multiple sites on the same server.

Installing Your Container

I suppose this part is optional, you can just as well run this in a regular server or VM. However, if you like to put things into tiny little boxes like me, read on! The following should be run as root.

apt-get install lxc   

lxc-create -t download -n nodejs   

# During the template selection choose ubuntu, trusty, and amd64   
# Start the Container as a Daemon  
lxc-start -n nodejs -d   

# Open a screen session to attach the container.  
# Why? Because "If it's worth doing, it's worth doing in screen"   
screen -dRR node   
lxc-attach -n nodejs

Installing Node.js and Ghost

Now that we have our shiny new container, lets get Node.js and Ghost installed. Since containers come with a very minimal set of software, we will install some additional utilities as well. The following should be run as root inside of your container. The only change I had to make from the official install guide was installing nodejs-legacy along with all the other stuff. Take a look at the issue linked to above if you are interested in more information.

apt-get install wget unzip nodejs npm nodejs-legacy   

# Create a Directory for your Ghost blog and go into it  
mkdir ghost
cd ghost

# Download and Unzip Ghost  

# Install Ghost  
npm install --production  

# Since we are in a container, in order to be able to access Ghost from the
# host machine we will need to edit config.js and change the server host
# so that Ghost listens on an address the host can reach
vim config.js

# Start Ghost   
npm start

If all went well you should now see this in the terminal.


npm start

> ghost@0.5.8 start /ghost
> node index

Migrations: Up to date at version 003
Ghost is running in development...
Listening on
Url configured as: http://localhost:2368
Ctrl+C to shut down

You can now exit the screen session by pressing Ctrl+a and then Ctrl+d to get back to the host.

Apache Proxy to Container

The last step of this is to set up the Virtual Host config file to proxy requests to our new Node.js container.

# Grab the IP address of your container   
lxc-ls --fancy   

# Load the appropriate apache proxy modules   
a2enmod proxy  
a2enmod proxy_http   

# Configure the virtual host file to set up the proxy. It should look
# something like this. Be sure to replace the IP address listed below
# with the IP address of your actual container.

<VirtualHost *:80>

  # Admin email, Server Name (domain name), and any aliases
  # (replace with your actual domain)
  ServerName

  ProxyVia full
  ProxyPreserveHost on

  <Proxy *>
    Order deny,allow
    Allow from all
  </Proxy>

  # Replace the IP address below with the IP address of your container
  ProxyPass /
  ProxyPassReverse /

</VirtualHost>


# Restart apache to clean things up   
service apache2 restart

You should now be able to access your Ghost blog by going to the domain that you set up in your virtual host config file. In order to set up Ghost for the first time, navigate to the /ghost path on that domain. Ghost is a great product! This was a fun little project because it is a good exercise in LXC and proxying requests to containers. I hope you found this useful! If you have any questions or run into any issues please let me know in the comments below.


Integrate ownCloud in Ubuntu with Symbolic Links

Ever since I switched to ownCloud last year I have pretty much stopped using my local file system for any personal documents or media. This is especially handy since I typically use several different computers in a given day. The only thing that is a bit frustrating with this setup is all the wasted links that exist in Nautilus by default: Documents, Music, Downloads, Videos, etc. Although you can simply remove these and use the bookmarks feature of Nautilus to make new folders, there is a better way: symbolic links.

Symbolic links are one of the most useful and powerful features of UNIX-based systems. Using symbolic links we can simply redirect the Documents, Music, Pictures, etc. folders to our ownCloud folder. This way Nautilus will show you your ownCloud files when you click on these existing links, which essentially integrates ownCloud with your native file system, making it pretty transparent and seamless. This can be accomplished in two steps.

  1. Remove the existing folder (this is required to get the symbolic link working). If you have any data in the existing folder, be sure to back it up and/or move it first, because this step **will delete these files!** For example, to reroute our Music folder:
    rm -rf /home/$USER/Music
  2. Create the symbolic link:
    ln -s /home/$USER/ownCloud/Music /home/$USER/Music

Now you have created a symbolic link, and assuming you have some content in your Music folder on ownCloud you will now see this content when you click on the Music link in Nautilus. In addition, if you open up a terminal and run ls ~/ you will see that the Music folder is now a lighter shade of blue. You can repeat this step for any additional folders. This will make your life easier and keep Nautilus nice and clean.
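The two steps above can also be sketched as a small Python helper; the demo below runs against a throwaway temporary directory rather than your real home folder:

```python
import os
import tempfile

def relink(target, link_path):
    """Replace link_path with a symbolic link pointing at target.

    Mirrors: rm -rf ~/Music && ln -s ~/ownCloud/Music ~/Music
    Back up any real data first -- the old folder is removed!
    """
    if os.path.islink(link_path) or os.path.isfile(link_path):
        os.remove(link_path)
    elif os.path.isdir(link_path):
        os.rmdir(link_path)  # only removes an empty directory
    os.symlink(target, link_path)

# Demo in a temporary directory instead of a real home folder.
base = tempfile.mkdtemp()
owncloud_music = os.path.join(base, "ownCloud", "Music")
os.makedirs(owncloud_music)
music = os.path.join(base, "Music")
os.makedirs(music)  # stand-in for the default empty Music folder

relink(owncloud_music, music)
print(os.path.islink(music))  # True
```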


User Interfaces

I am taking a Human Computer Interaction course this semester, which I am really looking forward to! We were assigned a whole bunch of reading, but one article that I really enjoyed was Jef Raskin's comments on user interfaces. It is an older article, but many of the things that he pointed out really resonate with the issues that we still face today. One line in the article really stands out for me: “the union of two wrong systems does not make for a single, unified, correct one”. This reminds me of the idiom “two wrongs don't make a right”, but it is interesting to think about from a UI perspective. UIs are changing all around us. Every major GUI OS is implementing some sort of convergence between the traditional desktop metaphor and touch screen capability, with endless screens of icons and interesting gestures in lieu of menu-driven interfaces. This is a very interesting time to be thinking about HCI, as more and more devices in our world are becoming computerized.


Removing a Public Facing User Page in OS X Server Wiki

OS X Server has some pretty neat tools that are easy to set up and use for team collaboration. The problem is that some of these tools are a bit quirky, especially when it comes to removing users or making sure that no data is accessible from the outside world. For example, if you edit your user profile page, this change will be visible to the public. There is no real way (that I can find) to hide it, so it is a little bit annoying. Even removing the user from the wiki does not fix this.

After doing some digging, it looks like all of this is controlled by a PostgreSQL database, which makes it feasible to figure out how to get rid of these pages. You can log into the PostgreSQL database on OS X Server by opening up a terminal and running sudo -u _postgres psql template1. You can list all of the available databases by running \list, and you should see one called collab. Connect to collab so that you can view the data inside and make some changes by running \c collab. You can see the entire schema by running \dt, and it will look something like this:

                List of relations
 Schema |            Name             | Type  | Owner
--------+-----------------------------+-------+--------
 public | blog_entity                 | table | collab
 public | document_entity             | table | collab
 public | entity                      | table | collab
 public | entity_acls                 | table | collab
 public | entity_acls_defaults        | table | collab
 public | entity_attrs                | table | collab
 public | entity_changesets           | table | collab
 public | entity_comment              | table | collab
 public | entity_lock                 | table | collab
 public | entity_preview              | table | collab
 public | entity_private_attrs        | table | collab
 public | entity_tag                  | table | collab
 public | entity_type                 | table | collab
 public | file_entity                 | table | collab
 public | filedata_entity             | table | collab
 public | filename_reservation        | table | collab
 public | global_settings             | table | collab
 public | groups                      | table | collab
 public | migration_entity            | table | collab
 public | migration_status            | table | collab
 public | migrationplaceholder_entity | table | collab
 public | notification                | table | collab
 public | page_entity                 | table | collab
 public | podcast_entity              | table | collab
 public | podcast_episode_entity      | table | collab
 public | preview_queue               | table | collab
 public | project_entity              | table | collab
 public | relationship                | table | collab
 public | savedquery_entity           | table | collab
 public | search_index                | table | collab
 public | search_stat                 | table | collab
 public | session                     | table | collab
 public | subscription                | table | collab

The schema is pretty complicated and has some really interesting relationships. My first thought was to just remove all instances of a user, but this turned out to be very complex because pretty much all of these tables depend on each other. The best way to remove a page is to fake the application out by marking the item as “deleted” in the entity table. For example, you can find the entity that you want to hide by running:

 select * from entity where long_name like 'Lev%';

This will show all of the things that I have done in the wiki. Find the specific item that you want; a user profile page has its own entity_type_fk value. You can grab the uid of the item from the first column and then run a simple UPDATE statement to mark the item as deleted.

update entity set is_deleted = 't' where uid = 'YOUR UID';

This item will no longer show up in the UI and you can have a truly “private” wiki again. The data model is pretty interesting and is worth looking at if you have nothing to do.
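The soft-delete trick generalizes nicely; here is the same pattern sketched with Python's sqlite3 (the column names are modeled on the collab schema above, not the real OS X Server definitions):

```python
import sqlite3

# A toy `entity` table using the same soft-delete idea: rows are hidden by
# flipping is_deleted rather than being removed from the database.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE entity "
    "(uid TEXT PRIMARY KEY, long_name TEXT, is_deleted TEXT DEFAULT 'f')"
)
db.execute("INSERT INTO entity VALUES ('abc-123', 'Lev Profile Page', 'f')")

# Mark the page as deleted, just like the UPDATE statement above.
db.execute("UPDATE entity SET is_deleted = 't' WHERE uid = 'abc-123'")

# A UI-style query that filters on is_deleted no longer sees the row,
# but the data itself is still there.
visible = db.execute("SELECT uid FROM entity WHERE is_deleted = 'f'").fetchall()
print(visible)  # []
```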


I am like 6 days late on my 2014 post

2014 was freaking awesome. I went to New Orleans for the first time, and then back again a few weeks later. It was an amazing city and I cannot wait to go back soon. I moved to a new department in my previous job and as a result made some awesome new friends. I continued to plow through my graduate program and am excited to continue to make progress this year. I got a great new job at Linode, uprooted my life, moved to South Jersey, and met some amazing people. I wrote more code, solved more problems, and learned more than any previous year to date. I cannot wait to see what 2015 has in store. 🙂