levlaz

Posts tagged: devops

Testing Syntax Errors in Apache Config

2017-05-01 21:13:22 ][ Tags: apache devops

If you spend any time mucking around in config files on Linux, you are likely to run into a syntax error sooner or later. Recently I was setting up cgit on Debian 8 and was banging my head against the wall for a few minutes trying to figure out why Apache was so unhappy.

Symptoms

The key issue was that when I restarted apache2, as I normally would after adding a new configuration, it spat out an angry message at me.

root@nuc:/etc/apache2# sudo service apache2 restart
Job for apache2.service failed. See 'systemctl status apache2.service' and 'journalctl -xn' for details.

Troubleshooting

The first place that I would look is the error logs. However, in this particular case they were not very helpful.

root@nuc:/etc/apache2# tail -f /var/log/apache2/error.log
[Mon May 01 21:00:11.922943 2017] [mpm_prefork:notice] [pid 20454] AH00169: caught SIGTERM, shutting down

Next, I read the error message per the suggestion from the restart command. This was also not very helpful.

root@nuc:/etc/apache2# systemctl status apache2.service
 apache2.service - LSB: Apache2 web server
 Loaded: loaded (/etc/init.d/apache2)
 Drop-In: /lib/systemd/system/apache2.service.d
 └─forking.conf
 Active: failed (Result: exit-code) since Mon 2017-05-01 21:05:58 PDT; 1min 45s ago
 Process: 20746 ExecStop=/etc/init.d/apache2 stop (code=exited, status=0/SUCCESS)
 Process: 20697 ExecReload=/etc/init.d/apache2 reload (code=exited, status=1/FAILURE)
 Process: 20920 ExecStart=/etc/init.d/apache2 start (code=exited, status=1/FAILURE)

May 01 21:05:58 nuc apache2[20920]: Starting web server: apache2 failed!
May 01 21:05:58 nuc apache2[20920]: The apache2 configtest failed. ... (warning).
May 01 21:05:58 nuc apache2[20920]: Output of config test was:
May 01 21:05:58 nuc apache2[20920]: apache2: Syntax error on line 219 of /etc/apache2/apache2.conf: Syntax error on line 22 of /etc/a... section
May 01 21:05:58 nuc apache2[20920]: Action 'configtest' failed.
May 01 21:05:58 nuc apache2[20920]: The Apache error log may have more information.
May 01 21:05:58 nuc systemd[1]: apache2.service: control process exited, code=exited status=1
May 01 21:05:58 nuc systemd[1]: Failed to start LSB: Apache2 web server.
May 01 21:05:58 nuc systemd[1]: Unit apache2.service entered failed state.
Hint: Some lines were ellipsized, use -l to show in full.

Inspecting the error message, we see that it is unhappy with line 219 of the main /etc/apache2/apache2.conf file. That line simply loads all of the other config files in sites-enabled, which means the real failure is happening inside one of the included files, almost certainly my new cgit config.
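
For reference, line 219 in a stock Debian 8 apache2.conf should just be the include directive for the virtual host configs, something like this:

# Include the virtual host configurations:
IncludeOptional sites-enabled/*.conf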

Help

So now that we have done some basic troubleshooting, it's time to dig into the manual for further information. I know that the config file is failing to load, and knowing my fat fingers it is very likely a config error on my part. Before reading 200 pages of documentation on the Apache website, we should take a look at the built-in help to see if we can find something of value.

root@nuc:/etc/apache2# apache2 -help
Usage: apache2 [-D name] [-d directory] [-f file]
 [-C "directive"] [-c "directive"]
 [-k start|restart|graceful|graceful-stop|stop]
 [-v] [-V] [-h] [-l] [-L] [-t] [-T] [-S] [-X]
Options:
 -D name : define a name for use in <IfDefine name> directives
 -d directory : specify an alternate initial ServerRoot
 -f file : specify an alternate ServerConfigFile
 -C "directive" : process directive before reading config files
 -c "directive" : process directive after reading config files
 -e level : show startup errors of level (see LogLevel)
 -E file : log startup errors to file
 -v : show version number
 -V : show compile settings
 -h : list available command line options (this page)
 -l : list compiled in modules
 -L : list available configuration directives
 -t -D DUMP_VHOSTS : show parsed vhost settings
 -t -D DUMP_RUN_CFG : show parsed run settings
 -S : a synonym for -t -D DUMP_VHOSTS -D DUMP_RUN_CFG
 -t -D DUMP_MODULES : show all loaded modules
 -M : a synonym for -t -D DUMP_MODULES
 -t : run syntax check for config files
 -T : start without DocumentRoot(s) check
 -X : debug mode (only one worker, do not detach)

Success! It turns out we can run a syntax check on a specific config file by combining the -t flag with -f to point at the file.

Solution

root@nuc:/etc/apache2# apache2 -t -f sites-available/git.levlaz.org.conf
apache2: Syntax error on line 22 of /etc/apache2/sites-available/git.levlaz.org.conf: </VirtualHost> without matching <VirtualHost> section

Doh! Such a silly mistake: the closing </VirtualHost> tag had no matching opening section. Fixing this syntax error resolved the issue. The main takeaway for me is that the best part about most Linux tools is that they usually give you everything you need in order to succeed. We were able to troubleshoot and resolve this issue without resorting to Google and running random commands that a stranger posted on the internet five years ago.
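
One last tip: to check the entire loaded configuration rather than a single file, you can use the apache2ctl wrapper, which sources /etc/apache2/envvars before running the same syntax check (invoking the apache2 binary directly can complain about undefined variables like APACHE_RUN_DIR on Debian):

apache2ctl configtest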

Upgrading and Restarting Salt on OS X

2015-08-31 18:19:18 ][ Tags: devops salt

I have been working a lot with salt this month on OS X. I have done some work in the past with Linux, and that was a much more pleasant experience. Recently I salted the installation of DataDog on a fleet of Mac Minis. My initial attempts did not work for several reasons. First, I was getting an error message from salt complaining that list indices must be integers, not str. This turned out to be a python bug that was resolved in the latest version of salt. After upgrading salt-master and salt-minion to the latest versions, I was getting yet another error stating that global name '__salt_system_encoding__' is not defined. It turns out that this error can be resolved by restarting salt-minion.

That restart turned out to be an issue on OS X, because if you attempt to simply restart the process with salt-minion restart it hangs and does nothing. If you try to run kill or killall, neither is able to match the name salt-minion, most likely because the minion shows up as a Python interpreter running the salt-minion script and killall only matches against the process name. At first I was logging into each box manually and kill -9'ing the processes, but this kind of defeated the entire purpose of salt, so I was determined to find a better way.

This is where pkill comes in to save the day. With the -f flag, pkill matches against the full command line, so it can find salt-minion and is a working solution for restarting it. On OS X salt-minion appears to run under a supervising daemon and immediately restarts if you try to kill it. This is not so bad, since all we need to do to resolve the errors above is kill all of the old salt-minion processes. The complete process to upgrade and restart salt-minion on OS X is outlined below:

  1. Upgrade Salt (Assuming you installed salt with pip)

    `salt '*' cmd.run '/usr/local/bin/pip install salt --upgrade'`
    
  2. Restart Salt Minion (Be extra careful with this command; see the preview tip after this list)

    `salt '*' cmd.run 'sudo pkill -f salt-minion'`
    
  3. Verify that you are on the latest version (salt-minion 2015.5.5 (Lithium) as of this post)

    `salt '*' cmd.run '/usr/local/bin/salt-minion --version'`
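
Since pkill -f matches anywhere in the full command line, which is why step 2 warns you to be careful, it is worth previewing exactly which processes would be matched by first running the same pattern through pgrep:

    `salt '*' cmd.run 'pgrep -fl salt-minion'`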
    

After following these steps, everything is now working! So, now I have the latest version of salt-minion, DataDog is chugging along, and I was able to do it all without resorting to writing some janky bash script or logging into each box manually. Salt on OS X has been a headache so far, but hopefully with everything on the latest version things will be a bit smoother going forward.

Salting your LXC Container Fleet

2015-01-31 02:36:27 ][ Tags: devops lxc

Saltstack is an awesome configuration management system that can make managing 10 to 10,000 servers very simple. Salt can be used to deploy, manage, configure, report on, and even troubleshoot all of your servers. It can also be used to manage a fleet of LXC containers, which is what we will be doing in this blog post.

If you have been reading this blog, you know that I love Linux Containers. I am using them for pretty much everything these days, and salt is a great way to keep track of and manage all of them. On my main server, I have three containers that are running various applications. In order to update the packages on these containers I would have to log into each one and run apt-get update and apt-get upgrade. This is not so bad for three containers, but you can imagine how annoying and cumbersome this gets as your container list grows. This is where salt comes to the rescue: with salt I can update all of these containers with a single command.

The official Salt Walkthrough is a great place to start learning about how Salt works. This short post will show you how to set up a small salt configuration on a single server that is hosting several containers. All of my containers are pretty boring because they run Ubuntu 14.04. The best part about salt is that it is OS agnostic and can manage a diverse fleet of different versions and types of operating systems. For this post, my host and all of my LXC containers are running Ubuntu 14.04 LTS.

Salt works by having a master that manages a bunch of minions. Setting up the salt master is a breeze. For the purposes of this blog post, the master is your host server and the minions are your LXC containers.

Setting up Salt Master

On your host server you will need to install salt master. First we will need to add the saltstack repo to our repository list:

sudo add-apt-repository ppa:saltstack/salt

Next we will install the salt-master:

sudo apt-get update
sudo apt-get install salt-master

Once the Salt Master is installed it will start running right away. By default it will listen on ports 4505 and 4506. You can verify this by running netstat -plntu | grep python to see which ports it is currently bound to.

Setting up your Firewall

One thing I ran into during the installation was getting the firewall working. This is all running on a Linode, and I used Linode’s Securing Your Server guide to set up my firewall. If you have a similar setup you can add the following lines to /etc/iptables.firewall.rules to allow the minions to communicate with the master.

# Allow Minions from these networks
-I INPUT -s 10.0.3.0/24 -p tcp -m multiport --dports 4505,4506 -j ACCEPT

# Allow Salt to communicate with Master on the loopback interface
-A INPUT -i lo -p tcp -m multiport --dports 4505,4506 -j ACCEPT

# Reject everything else
-A INPUT -p tcp -m multiport --dports 4505,4506 -j REJECT

LXC gives you a nice “Management Network” where the containers can communicate with the host using private IP addresses. The easiest way to set this up is to allow the entire range of this network (10.0.3.0/24 above) through the firewall. For security purposes I am rejecting all other IP addresses. Once you have configured your firewall, you will want to load the new settings to enable them.

sudo iptables-restore < /etc/iptables.firewall.rules

Setting up your Minions

Once your master is set up, running, and allows minions through the firewall we can set up the minions. Since LXC is a pretty barebones system we will need to install a couple of prerequisites first to get everything working. First we want to log into our container. I usually run the containers in a screen session so it would look something like this.

screen -dRR container1
lxc-attach -n container1

Once we are inside of our container, install the following:

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:saltstack/salt
sudo apt-get update
sudo apt-get install salt-minion

Now our minion is installed! It will need to know where to find the master. In our case we are running everything on the management network. The easiest way to get it to find the master is to add the IP address of the master to our /etc/hosts configuration. If you are not sure what the IP address of the master is you can run ip a | grep inet on the master and look for the IP address that starts with a 10.

vim /etc/hosts

# Now add the master IP
10.0.3.1    salt
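
Alternatively, you can skip the hosts file and point the minion at the master's IP directly in the minion config file; a minimal sketch:

# /etc/salt/minion
master: 10.0.3.1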

To start it up, we will simply run:

/etc/init.d/salt-minion start

Before the minion is able to communicate with the master its key must be accepted. Back on the salt-master you will need to run salt-key -A in order to accept the key from your minion. You should see the name of your container pop up and you will want to say ‘Y’ to accept its key. You can test to see that everything is working by running:

salt '*' test.ping

Your output should look something like this:

hci:
    True
git:
    True
usel:
    True

That’s it! This may seem like a bit of work, but it is totally worth it, because now every time we need to do anything on these containers we can simply use salt instead of logging into each one. You can repeat these steps for each additional container until you have an entire fleet of salted minions.
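
For example, the package updates that motivated this post now collapse into a couple of commands. Here is a sketch using salt's pkg module (running the apt-get commands through cmd.run works just as well):

# Refresh package lists and upgrade packages on every container
salt '*' pkg.refresh_db
salt '*' pkg.upgrade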

Proxy Everything into a Linux Container with nginx

2015-01-24 02:35:10 ][ Tags: devops containers

I previously wrote about setting up Node.js + Ghost in an Ubuntu LXC container and using Apache to proxy all web requests into that container. This works pretty well for the most part, but nginx seems like a much better tool for the job since it was pretty much designed to be a proxy server.

We have a server that we are using for all Bit-Monkeys projects, and I recently set up gitlab, along with a development site for openfaqs, inside of LXC containers. The main benefit of this approach is that you can isolate the environments, manage upgrades and updates of the various pieces separately, and fix issues in one environment without bringing down your entire infrastructure.

Setting this up to work with nginx is super easy. First you will need to grab the IP address of your container, which you can easily get by running the following command as the root user:

lxc-ls --fancy

Once you have the IP address of the container, you will need to install nginx. We are running Ubuntu 14.04 so it is as simple as apt-get install nginx. The last step is to create a virtual host config file for your container.

vim /etc/nginx/sites-available/yoursite

The contents of this file should look something like this:

server {
    listen 80;
    server_name dev.openfaqs.com www.dev.openfaqs.com;

    location / {
        proxy_pass http://10.0.3.194:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        port_in_redirect off;
        proxy_redirect http://10.0.3.194:5000 /;
        proxy_connect_timeout 300;
    }
}

First you should replace the server_name directive with the name of your site. Next you will want to replace the IP address in the proxy_pass and proxy_redirect directives with the IP address of your container. We are running Flask, which is why it routes to port 5000; you should replace the port with whatever port your application is running on. Once this is done, make a symbolic link in the sites-enabled directory and restart nginx.

ln -s /etc/nginx/sites-available/yoursite /etc/nginx/sites-enabled/yoursite
service nginx restart

If all goes well, you will now be able to enter the name of your site in the browser and be served with whatever content or application is running inside of your container. This is a really great use case for containers in my opinion, and nginx makes it easier than ever to get started. UPDATE: You can just as easily add a server block for 443 to proxy all HTTPS requests into the container as well. (Thanks to stmiller via reddit for the question.) Sweet, now that you have mastered nginx proxies with LXC, check out the complete guide to nginx high performance.
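
For the HTTPS update mentioned above, the 443 server block is nearly identical to the port 80 one; a minimal sketch, assuming a certificate and key at hypothetical paths:

server {
    listen 443 ssl;
    server_name dev.openfaqs.com www.dev.openfaqs.com;

    # hypothetical paths, substitute your own certificate and key
    ssl_certificate /etc/nginx/ssl/yoursite.pem;
    ssl_certificate_key /etc/nginx/ssl/yoursite.key;

    location / {
        proxy_pass http://10.0.3.194:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}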

Suspending Admin Users in GitHub Enterprise

2014-12-22 01:38:46 ][ Tags: devops ghe

GitHub Enterprise has a neat feature that allows you to suspend user accounts instead of removing them. This is really nice for when someone leaves an organization because it allows you to restrict access to their account without removing all of their work. If the user that you are trying to suspend is a Site Administrator, you will need to “demote” them prior to suspending their account. After you have demoted the user you will see that the suspend button appears and you can continue with the suspension process.

Secure Your Self Hosted WordPress

2014-08-20 00:22:14 ][ Tags: devops

Self hosting WordPress rocks. Unsecured websites do not rock. It does not matter how long or complicated your password is if it is being transmitted in plain text over HTTP. Luckily, it is easy to create a self-signed certificate and use it on your website. Keep in mind that browsers become very unhappy with self-signed certificates and tend to yell at the user. So, if you have a lot of traffic and want your users to feel safe, purchase an SSL certificate from a real Certificate Authority. In any case, at the very least you should be using a self-signed SSL certificate for all of the admin portions of your site. Here’s how to do it on Debian 7.5 running a standard LAMP stack.

  1. Create your self-signed certificate by running the following commands sequentially.

    mkdir /etc/apache2/ssl
    openssl req -new -x509 -days 365 -nodes -out /etc/apache2/ssl/wp.pem -keyout /etc/apache2/ssl/wp.key

  2. Create a virtual host for your website in /etc/apache2/conf.d/yoursite.conf

    <VirtualHost 1.2.3.4:443>
        SSLEngine on
        SSLCertificateFile /etc/apache2/ssl/wp.pem
        SSLCertificateKeyFile /etc/apache2/ssl/wp.key
        DocumentRoot /srv/www/yoursite.com/public_html

        <Directory *>
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>

  3. Enable the SSL module in Apache.

    sudo a2enmod ssl

  4. Restart Apache.

    sudo service apache2 restart
All set! Now, you can navigate to https://yourwebsite.com, confirm the security exception, and administer and view your WordPress site securely.
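
One more hardening step worth knowing about: WordPress can force HTTPS for logins and the entire admin area on its own via the FORCE_SSL_ADMIN constant. Add this to wp-config.php, above the "stop editing" line:

/* Force HTTPS for wp-login.php and the /wp-admin/ area */
define('FORCE_SSL_ADMIN', true);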