levlaz

SQLite DB Migrations with PRAGMA user_version

2017-12-06 ][ Tags: python databases sqlite

This blog is using a simple homegrown blogging engine that I wrote, backed by a SQLite database. I have a function in the Flask app that performs database migrations. My current approach has been to keep a folder full of migrations and run them sequentially whenever the app starts.

This works well for adding and removing tables since SQLite has the handy IF NOT EXISTS (and IF EXISTS) options. However, when you are altering an existing table this entire model falls apart, since ALTER TABLE has no equivalent clause.

Practically, this means that outside of a fresh install my database migrations are useless.
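To make the problem concrete, here is a minimal sketch (the table and column names are made up for the example): CREATE TABLE IF NOT EXISTS is safe to re-run, but ALTER TABLE ... ADD COLUMN fails on the second run because SQLite has no IF NOT EXISTS clause for columns.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Safe to run any number of times
conn.execute("CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY)")

# Works the first time...
conn.execute("ALTER TABLE posts ADD COLUMN title TEXT")

# ...but blows up on every run after that
try:
    conn.execute("ALTER TABLE posts ADD COLUMN title TEXT")
except sqlite3.OperationalError as e:
    print(e)  # duplicate column name: title
```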

I am still being stubborn and not using a well-written solution like Alembic (which I would highly recommend for a "serious" project) for this blog. Instead, I discovered that SQLite comes with a built-in mechanism to keep track of the user schema: the pragma statement, and specifically user_version.

Using PRAGMA user_version for DB Migrations

My migrations folder structure looks like this:

.
├── blog.db
├── blog.py
├── __init__.py
├── migrations
│   ├── 0001_initial_schema.sql
│   ├── 0002_add_unique_index_to_posts_tags.sql
│   ├── 0003_add_fts.sql
│   ├── 0004_add_column_to_post.sql
│   ├── 0005_add_comments_table.sql
│   └── 0006_add_admin_flag_to_comments.sql

As you can see, the naming convention is 000N_migration_description.sql. Each migration file contains the following statement, where N is the integer from the 000N prefix of the file name:

PRAGMA user_version=N;

This bumps the current user_version to match the version defined by the file name.
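For reference, user_version is just an integer stored in the database header; it defaults to 0 on a fresh database and can be read and set like any other pragma. A quick sketch from Python's sqlite3 module:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# A fresh database starts at version 0
print(db.execute("PRAGMA user_version").fetchone()[0])  # 0

# Note: pragmas do not support parameter binding, so the value
# is written directly into the statement
db.execute("PRAGMA user_version = 3")
print(db.execute("PRAGMA user_version").fetchone()[0])  # 3
```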

The code to do stuff with the database is shown below:

def connect_db():
    """Connects to Database."""
    rv = sqlite3.connect(
        app.config['DATABASE'],
        detect_types=sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES)
    rv.row_factory = sqlite3.Row
    return rv


def get_db():
    """Opens new db connection if there is not an
    existing one for the current app ctx.
    """
    if not hasattr(g, 'sqlite_db'):
        g.sqlite_db = connect_db()
    return g.sqlite_db


def migrate_db():
    """Run database migrations."""

    def get_script_version(path):
        # e.g. migrations/0001_initial_schema.sql -> 1
        return int(os.path.basename(path).split('_')[0])

    db = get_db()
    current_version = db.cursor().execute('pragma user_version').fetchone()[0]

    directory = os.path.dirname(__file__)
    migrations_path = os.path.join(directory, 'migrations/')
    migration_files = list(os.listdir(migrations_path))
    for migration in sorted(migration_files):
        path = "migrations/{0}".format(migration)
        migration_version = get_script_version(path)

        if migration_version > current_version:
            print("applying migration {0}".format(migration_version))
            with app.open_resource(path, mode='r') as f:
                db.cursor().executescript(f.read())
                print("database now at version {0}".format(migration_version))
        else:
            print("migration {0} already applied".format(migration_version))

The relevant part for this blog post is the migrate_db() function. Three things are happening.

  1. The get_script_version() helper function extracts the integer from the migration name.
  2. current_version gets the current value of user_version of your database.
  3. We iterate over each migration file in the migrations folder and perform a simple check. If the migration version is larger than the current_version we run the migration, otherwise it gets skipped.

This solves for most cases and allows for a smooth upgrade path if anyone ever decides to start using this blogging engine for themselves. I am still pretty happy with this approach because this is essentially a fully functional migration system in just a handful of lines of python.

Using Plex with Nextcloud

2017-12-06 ][ Tags: ubuntu plex nextcloud

After hearing about it for years, I finally got around to installing Plex on my NUC. I'm impressed with everything about Plex. It was easy to install, and mostly works out of the box. I am using it to manage my ever-growing movie collection and massive music library.

All of my files were already on the NUC since I am using Nextcloud. Rather than duplicating the files, I pointed my media library to the same directory where my files live in my Nextcloud installation.

This poses a couple of permissions problems. On Ubuntu, this directory is owned by the www-data (Apache) user and group. In order for Plex to be able to see the files at all I had to add the plex user to the www-data group and then restart the Plex service. The following commands will make that happen:

sudo usermod -aG www-data plex
sudo systemctl restart plexmediaserver.service

My biggest complaint with most "home media servers" is that once you point the files to the right place, you cannot really "manage" most of them. For instance, I have a massive (50+ GB) music collection that I have built up over the years. When I am listening on shuffle I want to prune out some of the songs that I hate. Luckily, with Plex this is very simple. The only catch is that the www-data group needs to have read/write/execute access to those files.

In order to make this happen you can run the following command against your data directory. Be sure to replace the directory I have below with whatever you are using for your own Nextcloud files.

chmod -R 775 /var/www/nextcloud/data/levlaz/files

Doing these two things makes the Plex + Nextcloud integration work very well. Now whenever I add or remove files from my many different computers everything stays in sync.

A Robot With a Soul

2017-12-03 ][ Tags: switch

OPUS: The Day We Found Earth was released on Nintendo Switch this week. I picked it up and played through the main story in a few hours. There are few other games at the $5 price point that are worth playing in the Nintendo eShop. This simple game tells a very compelling story. Like most great short stories, it quickly establishes an emotional connection with the main characters and draws you in.

Lately, I've been thinking about video games as a medium for telling compelling stories. No one does this better than indie developers and the team at SIGONO delivers with this emotional adventure.

In OPUS, you play as a tiny robot whose mission is to find the planet Earth in order to save the human race. You do this by exploring a vast galaxy from a space ship that is equipped with a powerful telescope. As you progress through the game you uncover additional parts of the space ship and begin to understand the curious circumstances in which the robot finds himself.

The game is short, the graphics are not revolutionary, and the game mechanics are very simple. However, where OPUS really shines is in the story that is told. The robot loves the woman who programmed him, he exhibits emotions, and you are quickly drawn in to feel sympathy and concern for his wellbeing. Coupled with the calming soundtrack by Triodust, you are immersed in the game and race against time to fulfill the seemingly futile task of finding Earth.

I really loved this game. I can't wait to see what comes next from SIGONO and I would love to see more games like this in the Nintendo eShop.

MacOS High Sierra Recovery "The recovery server could not be contacted"

2017-11-26 ][ Tags: macos

I was trying to reinstall High Sierra on an older MacBook Air using Internet Recovery and kept getting the following error message:

The recovery server could not be contacted

It appears that this has to do with the time on the machine not being synchronized, so when the MacBook tries to reach out to the recovery server the certificates do not validate and we get this useless error message.

To fix this:

  1. Open up a Terminal from the utilities menu
  2. Enter the following command:

    ntpdate -u time.apple.com

  3. Try to install High Sierra again

Dockerized PostgreSQL and Django for Local Development

2017-10-31 ][ Tags: hacking python docker django

Docker and docker-compose make it dead simple to avoid dependency hell and have a consistent environment for your whole team while doing local development. This post walks through setting up a new Django project from scratch to use Docker and docker-compose. It is modeled after a previous post that I wrote about doing a similar thing with Laravel and MySQL.

Dockerfile

Nothing too interesting happening here: installing Python and pip.

FROM ubuntu:16.04

# system update
RUN apt update
RUN apt upgrade -y

# python deps
RUN apt install -y python3-dev python3-pip

docker-compose.yml

version: '2'
services:
  app:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/app
    working_dir: /app
    command: bash -c "pip3 install -r requirements.txt && python3 manage.py migrate && python3 manage.py runserver 0:8000"
    depends_on:
      - db
  db:
    image: postgres:9.6.5-alpine
    environment:
      - POSTGRES_USER=feedread
      - POSTGRES_PASSWORD=feedread
    volumes:
      - ./data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

With this in place you can start your Django app with docker-compose up. Each time the app starts it will install the latest dependencies, run migrations, and start serving the app on localhost:8000.

Notes

  1. In order to do stuff with the database locally you should add the following record to your local /etc/hosts file

    # /etc/hosts
    
    127.0.0.1 db
    
  2. Since we define - .:/app as a volume, this means that all of your local changes are immediately visible in the dockerized app.

  3. If you need to access the running app or db container you can do so with docker-compose exec app bash or docker-compose exec db bash.
  4. This docker-compose file is not really suitable for production since it is not likely that you would want to build the container each time the app starts or automatically run migrations.
  5. You can add additional services like memcached, a mail server, an app server, a queue, etc., using the same method that we are using above with our database.

I Want to Become a Core Python Developer

2017-10-30 ][ Tags: hacking python

I've been tinkering with python for almost five years now. I am absolutely in love with the language. My new goal is to make enough contributions to the project to join the core team.

This post is my attempt to keep a list of all that I've done in this endeavor. I will keep this up to date on a monthly basis.

Short Term Goals

November 2017

Code

Community

October 2017

Code

Community

Python Mocks Test Helpers

2017-10-27 ][ Tags: hacking python

I've been writing a Python wrapper for the CircleCI API over the last week. I wanted to do this "the right way" with test-driven development.

I have a couple of integration tests that actually hit the CircleCI API, but most of the unit tests so far use MagicMock to ensure that the basic functions work as expected.

This generally involves the tedious process of dumping out JSON, saving it to a file, and then reloading that file later on to actually test it.

I wrote two helper functions that make this process slightly less tedious.

Load Mock

The first is a function that loads a file and overrides every request to return that file's contents (typically JSON).

    def loadMock(self, filename):
        """helper function to open mock responses"""
        filename = 'tests/mocks/{0}'.format(filename)

        with open(filename, 'r') as f:
            self.c._request = MagicMock(return_value=f.read())

Test Helper

The second is a function that runs a real request for the first time and dumps the output to a file.

    def test_helper(self):
        resp = self.c.add_circle_key()
        print(resp)
        with open('tests/mocks/mock_add_circle_key_response', 'w') as f:
            json.dump(resp, f)

Naming it test_helper allows it to be picked up and run when you run your test suite, since by default unittest will capture any methods that start with test.

Usage

An actual example is shown below.

    def test_clear_cache(self):
        self.loadMock('mock_clear_cache_response')
        resp = json.loads(self.c.clear_cache('levlaz', 'circleci-sandbox'))

        self.assertEqual('build dependency caches deleted', resp['status'])

Writing the tests is easy: we just copy and paste the name of the file that was created with test_helper and verify that the contents are what we expect them to be.

This approach has been working very well for me so far. One thing to keep in mind with writing these types of tests is that you should also include some general integration tests against the API that you are working with. This way you can catch any regressions with your library in the event that the API changes in any way. However, as a basic sanity check mocking these requests is a good practice and less prone to flakiness.
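The whole pattern boils down to something like the following self-contained sketch; the Client class and the canned response body here are stand-ins for illustration, not the real CircleCI wrapper:

```python
import json
from unittest.mock import MagicMock

class Client:
    """Stand-in API wrapper; _request would normally hit the network."""
    def _request(self, url):
        raise RuntimeError("no network access in unit tests")

    def clear_cache(self, username, project):
        return self._request("project/{0}/{1}/build-cache".format(username, project))

c = Client()

# Override the request layer with a canned response, just like loadMock does
c._request = MagicMock(return_value='{"status": "build dependency caches deleted"}')

resp = json.loads(c.clear_cache("levlaz", "circleci-sandbox"))
print(resp["status"])  # build dependency caches deleted
```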

It's A Sign

2017-08-27 ][ Tags: writing

Yesterday I read an article in the San Francisco Chronicle about the Writers' Grotto. This inspiring community of working writers makes me excited to partake in some of their upcoming activities. Upon further investigation I came across a handful of books that they have published, including a book of prompts called "642 Things To Write About".

So, while traveling to Boise for my travel blog, how excited do you think I was when I came across this very book at the Flying M Coffee house?

Now I have another 642 things to write about. I'm sorry if you read this blog.

Bicycle

2017-07-20 ][ Tags: life

In 2010, when I was living in Maryland, I purchased the new B.o.B album that had just come out on iTunes and listened to it while I walked two miles down Cherry Hill Road to Target. I don't remember exactly why I walked. Perhaps my car was in the repair shop. At Target, I bought a new bicycle for the first time in many years. Walking down meant I had to ride it up. This didn't last very long and I had an embarrassing walk of shame since I was not able to make it up the hill.

Back then, and even today, I know nothing about bikes. I am pretty sure I bought the worst possible bike for the occasion. It was slow, clunky, and felt like it would fall apart at any moment.

It's worth noting, that at the time I was working at the National Naval Medical Center which was under 10 miles from my apartment. 10 miles in beltway traffic can quickly turn into a 90 minute commute. The only other option was taking the train, which was in an inconvenient U shape. The train station was a few miles away and the bus to take you there was slow, also in some inconvenient letter shape, and overall the commute time was not much better.

Riding a bike to the train station was a viable option, and I ended up doing just that a number of times. The best part about this is that there was a bike trail directly next to my apartment that took you up to the University of Maryland campus and the metro station. This trail was beautiful, and there was even a creepy swamp straight out of a horror movie that would be filled with Silent Hill esque fog in the mornings.

The best memory that I have of this bike was the time that my Ford Focus broke down for good. I loved that car. It was the first thing I bought with my first military paycheck. I got it with around 7 miles on it brand new in 2007. I drove all over the east coast and the midwest in that egg shaped hot red car. I blew the speakers out listening to house music that Gerald introduced me to. I popped it in third gear one time and chased a woman down Wisconsin Ave in a fit of road rage when she cut me off one day. One of my friends joked to me that the moment I reached 45,000 miles the car would break down.

Damn that person. Literally the day I reached 45,000 miles my clutch went out on the beltway. It was one of the most frustrating experiences of my life. I somehow made it back to my apartment. This was also one of the most memorable moments with that old bike. I rode it, in the middle of the winter, over ice, to a Honda dealership.

Let's make one thing clear. When you show up to a dealership on a bicycle in the middle of the winter, you just made the day of whoever is lucky enough to come talk to you first because there is no way that you are leaving there without a car. I got a Honda Civic. Also brand new. No clutch this time. That car, named Chester, is still around. My dad drives it these days.

In 2011, when I was preparing to leave Maryland and move back to Ohio I sold the bike to a University of Maryland engineering student for a fraction of what I paid for it. I remember watching him ride away into the sunset. That was the last time I rode a bicycle.

Maryland, Ohio, and New Jersey, where I spent most of the last decade, are not really big bicycle towns. San Francisco on the other hand is full of bike lanes and bike shares, and every morning you can see hundreds of cyclists commuting to work like a herd of gazelles down Market Street.

I remember one of the doctors that I worked with biked to work every day. Unlike the folks around here who do it unpretentiously, it was an entire event for him. He would wear the whole tight clothes getup, take a shower before he started to work, and then change into his uniform. Must be nice, who has time for that?

Ever since I moved here, I have been wanting to get a bicycle. A few weeks ago I asked my twitter followers to recommend a bike shop. My good friend, and co-worker, Tad made me an offer I couldn't refuse. Rather than recommending a bike shop or a bike model he gave me an old bike instead. Tonight, I finally got a chance to go pick it up and take it for a spin.

It was amazing.

By far, this is the best bicycle that I have ever ridden. It has huge wheels. It takes very little effort to pick up speed. It's fast. Most of all, it's fun. I felt like a kid again riding on that thing.

We rode to Golden Gate park to watch the awesome photosynthesis light show at the Conservatory of Flowers. I learned about The Wiggle and rode home from the Haight to SoMa. There was something truly amazing and freeing about biking home tonight. I saw the city in a whole different light.

Besides living in constant fear of my front wheel or seat being stolen, I cannot wait to take this for a spin all over the city. The first thing I want to do is finally make my way all around Golden Gate Park. That place is huge and walking around would take an entire day. I am too lazy for that. Naturally I am going to join the flock of tourists and take a ride over the golden gate bridge one of these days as well.

My last bike offered me so many great memories that I have not really thought about until now. I can't wait to see what adventures this new bike will have in store for me.

I want to give a public, heartfelt, humongous THANK YOU to Tad. He really made my day.

Spring Security, Webjars, and MIME type error

2017-05-22 ][ Tags: hacking java

I volunteered to be JLO (Java Language Owner) at CircleCI and I am currently working on getting a sample Spring Framework project running on CircleCI 2.0. I made a simple app bootstrapped with the Spring Initializr. I included Spring Security for the first time and I decided to try out WebJars for static JavaScript libraries such as Bootstrap. I am using Thymeleaf for templating. The app does not actually do anything yet, but I ran into a pretty strange issue today that I wanted to write up here. My home page is pretty straightforward.

<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <title>CircleCI Spring Demo</title>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />

    <link rel="stylesheet" th:href="@{/webjars/bootstrap/3.3.7/css/bootstrap.min.css}" />

    <link rel="stylesheet" th:href="@{/css/style.css}" href="../static/css/style.css" />
</head>
<body>

    <nav class="navbar">
        <div class="container">
            <div class="navbar-header">
                <a class="navbar-brand" href="#">CircleCI Demo Spring</a>
            </div>
            <div id="navbar" class="collapse navbar-collapse">
                <ul class="nav navbar-nav">
                    <li class="active"><a href="#">Home</a></li>
                    <li><a href="#">About</a></li>
                </ul>
            </div>
        </div>
    </nav>

    <div class="container">
        <h1> CircleCI Spring Demo </h1>
    </div>

    <script th:src="@{/webjars/bootstrap/3.3.7/js/bootstrap.min.js}"></script>
</body>
</html>

However, when I tried to load up the app with mvn spring-boot:run none of the styles showed up and the console showed the following error message:

Resource interpreted as Stylesheet but transferred with MIME type text/html

It turns out that a default Spring Security config will basically block any request unless you whitelist it. The MIME type is a red herring: what is actually happening is that my Spring Security config redirects all unauthenticated users to my login page (which is login.html) instead of serving the stylesheet from the /webjars directory. The solution is to update my security configuration to whitelist anything that comes from /webjars.

package com.circleci.demojavaspring;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;

@Configuration
@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests()
                .antMatchers("/", "/home", "/webjars/**").permitAll()
                .anyRequest().authenticated()
                .and()
            .formLogin()
                .loginPage("/login")
                .permitAll()
                .and()
            .logout()
                .permitAll();
    }
}

Now, the styles load as expected.

Install Netbeans on Debian Stable

2017-05-11 ][ Tags: java debian

Netbeans is a great open source Java IDE. For some reason it is missing from the current stable repository on Debian. In order to get it installed as a regular desktop application on Debian Jessie (using GNOME) you should do the following:

  1. JDK 8 is required in order to use Netbeans. The default-jdk package on Jessie installs JDK 7, so first enable Debian backports and then install JDK 8 with sudo apt install -t jessie-backports openjdk-8-jdk
  2. Download the latest version from the releases page. There are a couple different flavors. I usually choose the one that contains everything. This will download a bash installer script.
  3. Open up a terminal and navigate to wherever you downloaded the script from Step 2. Execute the script with sh netbeans*.sh
  4. This will run some pre-flight checks and then fire up an installation wizard that will guide you through the rest of the process.
  5. Once Netbeans has been installed you can launch it by clicking on the icon that should now be on your desktop.

The Best Autotools tutorial from the Anjuta docs

2017-05-11 ][ Tags: hacking gnu autotools

Autotools is probably the most overwhelming piece of software that I have encountered. Just when I think that I get how it works, something goes wrong and I spend hours digging through man pages and info docs trying to figure out what is going on.

I wish I had found the Anjuta docs, which describe how the "magic" behind their project wizards actually works, earlier. Like most IDEs, Anjuta takes care of a lot of heavy lifting in the background. Unlike most IDEs, it has excellent documentation (in addition to the source code, of course) on how everything actually works. This is extremely valuable and I am grateful to the folks from the Anjuta project who took the time to describe how all of this works, from doing everything as a single gcc command line all the way to clicking buttons in the new project wizard.

If you are new to autotools like me, check out this doc. If you don't care about autotools but want to see what excellent documentation looks like, check out this doc as well.

Using gtk-doc with Anjuta on Debian Stable

2017-05-09 ][ Tags: gtk

gtk-doc is a library that helps extract code documentation. When you create a new project with Anjuta it asks if you wish to include gtk-doc. Unfortunately, on Debian stable there seems to be a bug because the autoconf configuration looks for the wrong version of gtk-doc.

/home/levlaz/git/librefocus/configure: line 13072: syntax error near unexpected token `1.0'
/home/levlaz/git/librefocus/configure: line 13072: `GTK_DOC_CHECK(1.0)'

On Debian stable, the version of gtk-doc that comes with the gtk-doc-tools package is 1.21. In order to resolve this error you need to update configure.ac to use the newer version of gtk-doc as shown below:

GTK_DOC_CHECK([1.21])

Then you need to regenerate the entire project and everything should work as expected.

Anjuta "You must have libtool installed"

2017-05-09 ][ Tags: debian gnu

Anjuta is an excellent IDE, specifically when it comes to writing applications for GNOME. On Debian stable, there seems to be a bug having to do with a missing dependency. When you create a project for the first time using the new project wizard and then try to execute it, Anjuta will complain that you must have libtool installed. I already had libtool installed, but Anjuta is looking specifically for some tools found in the libtool-bin package. Installing this package resolves the issue.

sudo apt-get install libtool-bin

Terminal Reader Mode with Pandoc and Less

2017-05-06 ][ Tags: hacking terminal

The other day Aosheng sent me an article to read from The Verge. When I tried to read it, it took about 5 minutes to load because of the 15 various JavaScript things that were running in addition to ads loading in the background. Firefox was unhappy, and even when I tried to turn on "Reader View" (which strips out all of the junk) it took another minute to load.

I've been on a UNIX binge lately, so I figured there had to be a clever hack to make my own reader view in a terminal. This is where pandoc comes to the rescue. I've written about this tool in the past, discussing how to easily convert Markdown to PDF. It turns out that pandoc also supports arbitrary URL arguments, which means that you can convert HTML pages on the fly without having to download them first. We can take an arbitrary URL, pass it into pandoc, and spit out plain text. Furthermore, we can pipe this into less to get a nice pager for longer documents. The full command is shown below:

pandoc -f html -t plain \
    https://www.theverge.com/2017/5/4/15547314/edward-snowden-cory-doctorow-nypl-talk-walkaway \
    | less

In the example above, -f specifies the input filetype, in this case HTML. -t specifies the conversion filetype, in this case plain text. Pandoc supports a ton of different formats, you can read the man page for more info.

The next logical step is to make a script like my wordpress mutt poster to make this even easier. You could make a simple program called reader and put it in /usr/local/bin/reader. The contents of this script are:

#!/bin/bash
# Terminal Reader Mode using Pandoc and Less

url="$1"

pandoc -f html -t plain "$url" | less

You can then use this by typing reader $URL.

Posting to Wordpress via Email with Mutt

2017-05-05 ][ Tags: hacking terminal

Sometimes you are hanging out in your terminal and you just want to be able to post something to your blog quickly. I was pretty inspired by Derek Sivers' OpenBSD post [1], where he really embraces the UNIX philosophy of having one tool to do a job correctly, and of putting together various small tools like this to solve the problem at hand. Wordpress with Jetpack makes it dead simple to post to your blog via email [2], even if you do not have a mail server configured. I was able to write a three line bash script to "automate" creating a new post from my command line.

#!/bin/bash
# Bash Utility to Post to Wordpress using Mutt

subject="$1"
WP_ADDRESS=

mutt -s "${subject}" $WP_ADDRESS

I saved this file in /usr/local/bin/wp and whenever I am inspired to fire off some quick thoughts to this blog I can run wp "Blog Post Title", which dumps me into a vim buffer that, once complete, is sent off via mutt to Wordpress.

[1] https://sivers.org/openbsd
[2] https://jetpack.com/support/post-by-email/#examples

Reading gz files with zcat

2017-05-04 ][ Tags: hacking gnu terminal

The Debian Policy Manual dictates that all packages should come with documentation. In order to save space in the debian archive these documents need to be compressed with gzip. There are a ton of these files floating around in the /usr/share/doc directory. Recently I wanted to read some of the documentation. If you try to open the file with cat it spits out binary gibberish. You can of course unzip the file as you normally would and open it up that way, but it turns out there is an easier way. Using zcat you can read the contents of compressed files just like you would with cat.

zcat is identical to gunzip -c. (On some systems, zcat may be installed as gzcat to preserve the original link to compress.) zcat uncompresses either a list of files on the command line or its standard input and writes the uncompressed data on standard output. zcat will uncompress files that have the correct magic number whether they have a .gz suffix or not. — GZIP(1) man page

By default, this will put all of the output into your terminal window, which is fine for most files. The other place where this can come in handy is when you are trying to look through compressed log files. In this case, having to scroll around the terminal may not be a great option. You can pipe the output of zcat into other programs such as less (to page through long files) or head. For example, if I wanted to read the first 10 lines of a compressed log file, I could do so with the following command:

levlaz@debvm:/var/log$ sudo zcat syslog.2.gz | head -n 10

The output of this command would look like this:

May  2 22:27:43 debvm rsyslogd: [origin software="rsyslogd" swVersion="8.4.2" x-pid="585" x-info="http://www.rsyslog.com"] start
May  2 22:27:43 debvm kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
May  2 22:27:43 debvm kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
May  2 22:27:43 debvm kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
May  2 22:27:43 debvm kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
May  2 22:27:43 debvm kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
May  2 22:27:43 debvm kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
May  2 22:27:43 debvm kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
May  2 22:27:43 debvm kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
May  2 22:27:43 debvm kernel: [    0.000000] ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
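If you ever want the same trick from a script rather than the shell, Python's standard library can do it too; gzip.open reads and writes compressed files transparently (the path below is just an example):

```python
import gzip

# Write a small compressed file, then read it back the way zcat would
with gzip.open("/tmp/example.log.gz", "wt") as f:
    f.write("line one\nline two\n")

with gzip.open("/tmp/example.log.gz", "rt") as f:
    print(f.read(), end="")
```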

Help Out With Packages You Use in Debian

2017-05-03 ][ Tags: debian

Many new and existing Debian users want to help make the distribution better but do not quite know where to begin. Debian comes with a very handy package called how-can-i-help which tells you, after each apt invocation, the current bugs that are associated with packages on your system. The "Work-Needing and Prospective Packages" (WNPP) listing is a bit overwhelming for new contributors. What better way to figure out what packages need your help than by seeing a list of them each time you use apt.

The first time you run apt after installing this package it will likely spit out a long list of packages that need your help. Each subsequent time it will only show new packages or changes. In order to see the master list again you can use the how-can-i-help --old command to see all packages that need your help. I think this is a great way to get engaged with the software that you rely on each day.

Although getting started with Debian development is not trivial, this lowers the barrier a bit and provides some clear direction on what to work on since the list includes packages that you are using every day.

Using Owncloud Client for Nextcloud Server on Debian Stable

2017-05-02 ][ Tags: debian

There is no official Debian package for the Nextcloud client. There have been a handful of RFP bugs reported, but it looks like no one has taken this on yet. I want to get more involved with Debian packaging, so this might be a great first package to maintain. For the time being, the ownCloud client is still backwards compatible with Nextcloud. Unfortunately, the version that ships with Debian stable (8, jessie at the time of writing) is a bit old. When I tried to connect to my Nextcloud instance it complained that my password was incorrect. Luckily, there is a slightly newer version available in jessie-backports which has no trouble connecting to Nextcloud. The steps to get a working version of owncloud-client with the latest stable version of Nextcloud are as follows:

  1. If you have not already, enable jessie-backports
    1. Open up /etc/apt/sources.list
    2. Append deb http://ftp.debian.org/debian jessie-backports main to that file.
  2. Run sudo apt-get update
  3. Install the latest version of owncloud-client with sudo apt-get install -t jessie-backports owncloud-client

You should now be able to connect to nextcloud without any issues.

Testing Syntax Errors in Apache Config

2017-05-01 ][ Tags: apache devops

If you spend any time mucking around config files in Linux you are likely to run into some syntax errors sooner or later. Recently I was setting up cgit on Debian 8 and was banging my head against the wall for a few minutes trying to figure out why apache was so unhappy.

Symptoms

The key issue was that when I restarted apache2 as I normally would after adding a new configuration, it spat out an angry message at me.

root@nuc:/etc/apache2# sudo service apache2 restart
Job for apache2.service failed. See 'systemctl status apache2.service' and 'journalctl -xn' for details.

Troubleshooting

The first place that I would look is the error logs. However, in this particular case they were not very helpful.

root@nuc:/etc/apache2# tail -f /var/log/apache2/error.log
[Mon May 01 21:00:11.922943 2017] [mpm_prefork:notice] [pid 20454] AH00169: caught SIGTERM, shutting down

Next, I read the error message per the suggestion from the restart command. This was also not very helpful.

root@nuc:/etc/apache2# systemctl status apache2.service
 apache2.service - LSB: Apache2 web server
 Loaded: loaded (/etc/init.d/apache2)
 Drop-In: /lib/systemd/system/apache2.service.d
 └─forking.conf
 Active: failed (Result: exit-code) since Mon 2017-05-01 21:05:58 PDT; 1min 45s ago
 Process: 20746 ExecStop=/etc/init.d/apache2 stop (code=exited, status=0/SUCCESS)
 Process: 20697 ExecReload=/etc/init.d/apache2 reload (code=exited, status=1/FAILURE)
 Process: 20920 ExecStart=/etc/init.d/apache2 start (code=exited, status=1/FAILURE)

May 01 21:05:58 nuc apache2[20920]: Starting web server: apache2 failed!
May 01 21:05:58 nuc apache2[20920]: The apache2 configtest failed. ... (warning).
May 01 21:05:58 nuc apache2[20920]: Output of config test was:
May 01 21:05:58 nuc apache2[20920]: apache2: Syntax error on line 219 of /etc/apache2/apache2.conf: Syntax error on line 22 of /etc/a... section
May 01 21:05:58 nuc apache2[20920]: Action 'configtest' failed.
May 01 21:05:58 nuc apache2[20920]: The Apache error log may have more information.
May 01 21:05:58 nuc systemd[1]: apache2.service: control process exited, code=exited status=1
May 01 21:05:58 nuc systemd[1]: Failed to start LSB: Apache2 web server.
May 01 21:05:58 nuc systemd[1]: Unit apache2.service entered failed state.
Hint: Some lines were ellipsized, use -l to show in full.

Inspecting the error message, we see that it is unhappy with line 219 of the main /etc/apache2/apache2.conf file. Looking at that line, we can see that it is simply loading all of the other config files in sites-enabled, which means that it fails before it even gets to load my new cgit config file.

Help

So now that we have done some basic troubleshooting, it's time to dig into the manual for further information. I know that the config file is failing to load, and knowing my fat fingers, it is very likely a config error on my part. Before reading 200 pages of documentation on the Apache website, we should take a look at the built-in help to see if we can find something of value.

root@nuc:/etc/apache2# apache2 -help
Usage: apache2 [-D name] [-d directory] [-f file]
 [-C "directive"] [-c "directive"]
 [-k start|restart|graceful|graceful-stop|stop]
 [-v] [-V] [-h] [-l] [-L] [-t] [-T] [-S] [-X]
Options:
 -D name : define a name for use in <IfDefine name> directives
 -d directory : specify an alternate initial ServerRoot
 -f file : specify an alternate ServerConfigFile
 -C "directive" : process directive before reading config files
 -c "directive" : process directive after reading config files
 -e level : show startup errors of level (see LogLevel)
 -E file : log startup errors to file
 -v : show version number
 -V : show compile settings
 -h : list available command line options (this page)
 -l : list compiled in modules
 -L : list available configuration directives
 -t -D DUMP_VHOSTS : show parsed vhost settings
 -t -D DUMP_RUN_CFG : show parsed run settings
 -S : a synonym for -t -D DUMP_VHOSTS -D DUMP_RUN_CFG
 -t -D DUMP_MODULES : show all loaded modules
 -M : a synonym for -t -D DUMP_MODULES
 -t : run syntax check for config files
 -T : start without DocumentRoot(s) check
 -X : debug mode (only one worker, do not detach)

Success! It turns out we can run a syntax check on a specific config file using the -t flag, pointing -f at the file in question.

Solution

root@nuc:/etc/apache2# apache2 -t -f sites-available/git.levlaz.org.conf
apache2: Syntax error on line 22 of /etc/apache2/sites-available/git.levlaz.org.conf: </VirtualHost> without matching <VirtualHost> section

Doh! Such a silly mistake: a </VirtualHost> closing tag with no matching <VirtualHost> opening tag. Fixing this syntax error resolved the issue. The main takeaway for me is that the best part about most Linux tools is that they usually give you everything you need in order to succeed. We were able to troubleshoot and resolve this issue without resorting to Google and running random commands that strangers posted on the internet 5 years ago.
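For reference, a minimal site config with a properly matched pair looks like this (the hostname and paths are illustrative, not my actual cgit config):

```apache
<VirtualHost *:80>
    ServerName git.example.com
    DocumentRoot /var/www/git

    # Every <VirtualHost> (and <Directory>) opening tag
    # needs a matching closing tag, or configtest fails.
    <Directory /var/www/git>
        Require all granted
    </Directory>
</VirtualHost>
```

Running apache2 -t -f against a file like this before restarting the service catches these mismatches up front.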

Change the Default Terminal Editor in Debian

2017-04-28 ][ Tags: debian

Debian comes with a very handy utility called update-alternatives that helps to set default tools for various tasks.

It is possible for several programs fulfilling the same or similar functions to be installed on a single system at the same time. For example, many systems have several text editors installed at once. This gives choice to the users of a system, allowing each to use a different editor, if desired, but makes it difficult for a program to make a good choice for an editor to invoke if the user has not specified a particular preference.

On Linode, it seems that the default editor is nano. I prefer to use vim for editing git commits, visudo, and other things that use the default editor, which is symbolically linked through /usr/bin/editor. The update-alternatives utility basically changes the symbolic links for you. In order to change your default editor, you simply need to run the following command:

sudo update-alternatives --config editor

The output of this command is shown below. You will see a list of all of your editors that you currently have installed and will be asked to make a choice.

There are 3 choices for the alternative editor (providing /usr/bin/editor).

Selection Path Priority Status
------------------------------------------------------------
 0 /bin/nano 40 auto mode
 1 /bin/nano 40 manual mode
 2 /usr/bin/vim.basic 30 manual mode
* 3 /usr/bin/vim.tiny 10 manual mode

Press enter to keep the current choice[*], or type selection number:

Behind the scenes you can see that all this does is update the symbolic links.

levlaz@dev:~$ ls -al /usr/bin/editor
lrwxrwxrwx 1 root root 24 Feb 10 20:49 /usr/bin/editor -> /etc/alternatives/editor
levlaz@dev:~$ ls -al /etc/alternatives/editor
lrwxrwxrwx 1 root root 17 Apr 28 18:56 /etc/alternatives/editor -> /usr/bin/vim.tiny

There are many other things that can be configured this way. For more information, the man page for update-alternatives is worth reading.

Don't forget the -i

2017-04-19 ][ Tags: gnu

I spent way too much time troubleshooting an issue I was having with sed today. This is a common theme sometimes where I spend upwards of an hour debugging something that is ridiculously obvious.

I was trying to replace a string in a file. This is super simple to do with sed.

sed 's/string/replacement_string/' $file_name

The problem is that this command just spits out the result. If you want to actually save your change you must use the -i flag.

sed -i 's/string/replacement_string/' $file_name
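To see the difference on a throwaway file (the path and strings here are illustrative):

```shell
# Create a throwaway file to edit.
echo 'hello world' > /tmp/sed_demo.txt

# Without -i, sed only prints the result; the file is untouched.
sed 's/world/there/' /tmp/sed_demo.txt

# With -i, the file itself is rewritten in place.
sed -i 's/world/there/' /tmp/sed_demo.txt
cat /tmp/sed_demo.txt
```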

For more information on how to not suck at Linux like Lev please refer to man $command ;)

Serving a Static Home Page in Rails

2017-04-17 ][ Tags: hacking rails

TIL while working on this issue that if you dump a file into public/index.html then Rails will just serve that up for you. A lot of tutorials out there talk about making a static pages controller, which seems like overkill (unless you have a lot of static pages... but if that's the case, why are you using Rails?)

Do not Install Karma Globally

2017-04-15 ][ Tags: hacking testing javascript

Wow, I spent so long trying to figure out why the hell karma was not working for me; it turns out it's because it was installed globally. For instance, in my project's package.json I had:

"scripts": { "test": "karma start karma.conf.js" } ...

When I ran npm test, it told me sh 1: karma not found. Every other possible combination also did the same thing, i.e.

node_modules/karma/bin/karma

./node_modules/karma/bin/karma

node ./node_modules/karma/bin/karma

I could totally execute this myself from the shell, so I had no idea what was wrong. Then I finally stumbled upon this GitHub Issue. After uninstalling karma globally with npm uninstall -g karma, I was able to run npm test without any issues. I still have no idea why this works or didn't work, but at this point I just want to go back to writing tests.

Injecting Stuff into your Python Path

2017-04-13 ][ Tags: hacking python

Similar to a previous post where I wrote about how to run Flask tests without installing your app, another common thing that you might want to do is import your app from some arbitrary script. This is especially useful when running your app with Apache mod_wsgi. This module expects the app to be installed globally, or at least to be on the Python path. Unless you install the app in the traditional sense, this will not be true. The solution is to inject the path prior to running your import statement, like this:

import sys

sys.path.insert(0, '/var/www/blog')
from blog import app as application

This import will actually work.