programming

Dealing With Flaky CI Commands With a Retry Loop in Bash

One of the most frustrating things to deal with in Continuous Integration is flaky commands. Whether it’s flaky tests or intermittent networking issues, when your build fails for reasons outside of your control it not only causes frustration, it erodes trust in your CI process.

One strategy for dealing with this type of issue is to introduce some retry logic into your commands. This can easily be accomplished with good old bash.

For example, pretend that I have $FLAKEY_COMMAND and I want to attempt it up to three times before finally failing my build. I could wrap the whole thing up in a bash loop like this.

counter=1
max=3
$FLAKEY_COMMAND

while [[ $? -ne 0 && $counter -lt $max ]]; do
    counter=$((counter+1))
    $FLAKEY_COMMAND
done

This script runs my command once; if the exit code (the value of $?) is non-zero (i.e. something went wrong) and the counter is less than three, it retries the command. You can increase or decrease the number of attempts by adjusting the max variable.
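If several jobs need the same pattern, the loop can be wrapped in a small reusable function. This is just a sketch: the `retry` helper and its argument layout are my own convention, not a standard CI utility.

```shell
#!/bin/sh
# retry: run a command up to N times, stopping at the first success.
# Usage: retry <max_attempts> <command> [args...]
retry() {
    max=$1
    shift
    attempt=1
    until "$@"; do
        if [ "$attempt" -ge "$max" ]; then
            echo "failed after $attempt attempts: $*" >&2
            return 1
        fi
        attempt=$((attempt + 1))
        echo "attempt $attempt/$max: $*" >&2
    done
}

# Example: retry 3 npm test
```

The until loop keys off the command’s exit code directly, so there is no need to inspect $? by hand.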

This is not a foolproof strategy, but it is one approach to handling flaky commands in your CI pipeline.

programming

Read All of Hacker News With the hanopener Extension

I’ve been reading Hacker News obsessively lately. In the past I would skim the top posts every couple of days; lately I have been reading every new and top article. To do this I would go to the main website and click on every single link on the front pages.

After doing this for a few days I realized that I should probably write some JavaScript to automate this entire process. So this afternoon I whipped up the hanopener Chrome extension.

Initially I made it a Python CLI script, but then realized that it would probably make more sense as a Chrome extension.

Be warned: this extension is super obnoxious and will open up 60 Chrome tabs every time you click on the icon, which is not always a good time depending on your computer.

It’s available on the Chrome Web Store and as a Firefox Add-On now.

UPDATE @ 5/28: Added link to the Firefox Add-On.

programming

Deploying an Angular 6 Application to Netlify

Netlify is an excellent platform for building, deploying, and managing web applications. It supports automated deployment using GitHub webhooks and also provides advanced features such as custom domains and HTTPS, all for free. Deploying a static site to Netlify is a breeze. Although it does support running Angular applications, there are a couple of gotchas in the deployment process that I had to piece together from various blog posts in order to get things working.

Enable Redirects

The first issue that I ran into was that after I deployed my site to Netlify, whenever I clicked on an Angular link I would get a 404 page.

Netlify Page Not Found

Looks like you’ve followed a broken link or entered a URL that doesn’t exist on this site.

Getting this to work is pretty simple. Ultimately you just need a file called _redirects in the root of your web project. This file sends all URLs to the root of your application, which allows the Angular router to kick in and do its thing. In order to get Angular to create this file you need to do the following.

  1. Create a _redirects file in the src directory of your Angular project.

    For most basic sites it should look something like this.

    # src/_redirects
    
    /*  /index.html 200
    
  2. Add this file to your angular.json file.

    Your angular.json file serves as the configuration for many different aspects of the Angular CLI. In order to get the _redirects file into the root of your output directory you must list it here. A snippet of my file is shown below. Update this configuration file and push all of your changes back up to GitHub.

    {
      "$schema": "./node_modules/@angular/cli/lib/config/schema.json",
      "version": 1,
      "newProjectRoot": "projects",
      "projects": {
        "flagviz": {
          "root": "",
          "sourceRoot": "src",
          "projectType": "application",
          "prefix": "app",
          "schematics": {},
          "architect": {
            "build": {
              "builder": "@angular-devkit/build-angular:browser",
              "options": {
                "outputPath": "dist/flagviz",
                "index": "src/index.html",
                "main": "src/main.ts",
                "polyfills": "src/polyfills.ts",
                "tsConfig": "src/tsconfig.app.json",
                "assets": [
                  "src/favicon.ico",
                  "src/assets",
                  "src/_redirects"

    ... rest of file
    

Configure your Netlify Project

Now that you have the redirects file in place, you can set up your project for automatic deployment with GitHub and Netlify.

Once you have logged into Netlify, click on New Site From Git and find the name of your project.

New Site from GitHub

Configure Build Settings

The last step is to configure your build settings.

For Build command you should enter ng build --prod.

For Publish directory you should enter dist/$NAME_OF_YOUR_PROJECT.

Netlify Build Settings

Be sure to replace $NAME_OF_YOUR_PROJECT with the actual name of your project.

Now you can click on Deploy site, and once the initial deployment has completed you should see your new Angular application running on Netlify with a working routing system.

programming

Slow Python Script and Using Pipenv with AWS Lambda

I’m working on improving a Python script I wrote to get a list of old posts from a WordPress website. Basically, I want to be able to see what posts I wrote X years ago on this day for any WordPress site.

This script uses the wonderful requests library and the very powerful public WordPress API.

I am also using pipenv for the first time and it’s wonderful. I wish I had started using this tool years ago.

What it Does Right Now

  1. Takes a dictionary of sites and iterates over each one
  2. Prints the results to the console

    print("1 year ago I wrote about {0} {1}".format(p['title']['rendered'], p['link']))
    if years_ago > 1:
        print("{0} years ago I wrote about {1} {2}".format(years_ago, p['title']['rendered'], p['link']))
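Those print statements come from looping over every post client-side. A sketch of how the lookup could instead lean on the WordPress API’s date filters, using the same requests library (the function names here are my own, and this assumes the public wp-json/wp/v2/posts endpoint with its `after`/`before` parameters):

```python
from datetime import datetime

import requests


def day_bounds(years_back, today=None):
    """Return ISO 8601 start/end timestamps for today's date `years_back` years ago.

    Note: replace() raises ValueError for Feb 29 in non-leap years.
    """
    today = today or datetime.now()
    day = today.replace(year=today.year - years_back)
    start = day.replace(hour=0, minute=0, second=0, microsecond=0)
    end = day.replace(hour=23, minute=59, second=59, microsecond=0)
    return start.isoformat(), end.isoformat()


def posts_on_this_day(site, years_back):
    """Ask the WordPress API for posts published on that one day.

    The API's `after`/`before` filters push the date filtering to the
    server instead of iterating over every post in the script.
    """
    after, before = day_bounds(years_back)
    resp = requests.get(
        "https://{0}/wp-json/wp/v2/posts".format(site),
        params={"after": after, "before": before},
    )
    resp.raise_for_status()
    return resp.json()
```

Each returned post carries the same fields used above, so the existing print statements work unchanged on the filtered results.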

The Script is Super Slow

You can time how long a script takes on OS X using the time command.

Levs-iMac:OldPosts levlaz$ time python old_posts.py
1 year ago I wrote about Thoughts on “Sacramento Renaissance” https://tralev.net/thoughts-on-sacramento-renaissance/

real	0m11.192s
user	0m0.589s
sys	0m0.060s

I know why it’s slow: I have something like six for loops and a bunch of other inefficiencies. In addition, the requests are not cached anywhere, so the script has to fetch the entire JSON payload every time it runs.

Plans for Optimization

  1. Use Redis (or something) to cache the results.
  2. Get rid of some of the for loops if we can.
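For the caching idea, even before reaching for Redis, a tiny disk-backed cache gets most of the benefit for a script that runs once a day. This is only a sketch; `cached_fetch` and the `.wp_cache` directory are names I made up for illustration:

```python
import hashlib
import json
import os

CACHE_DIR = ".wp_cache"


def cached_fetch(url, fetch):
    """Disk-backed memoization for JSON API responses.

    The first call stores the parsed JSON under a hash of the URL;
    subsequent runs read the file instead of hitting the API.
    `fetch` is any callable that takes a URL and returns parsed JSON.
    """
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, hashlib.sha256(url.encode()).hexdigest())
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    data = fetch(url)
    with open(path, "w") as f:
        json.dump(data, f)
    return data
```

Usage would look like `cached_fetch(url, lambda u: requests.get(u).json())`; stale entries are cleared by deleting the cache directory.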

Plans for Usage

  1. Deploy to AWS (Lambda?)
  2. Have this run on a Cron Job every day (using CloudWatch)

Plans for Additional Features

I want to share all of the posts from that day on social media. Instead of plugging in all of the various accounts that I need I am planning on using the Buffer API to post everywhere at once and queue up posts so that it does not fire off a bunch of stuff at the same time in the event that there are many posts for that day.

This will involve doing some sort of OAuth dance, because I don’t think that Buffer offers personal access tokens.

I’ll Just Use Lambda

Famous last words.

It’s not the worst thing in the world, but when you are using the amazing pipenv tool you have to go track down where the site-packages are located and zip them up in order to ship your code to AWS Lambda.

Unsurprisingly someone opened a feature request for this, but the solution in the comments works just fine.

I wrote a little bash script that is being called through a Makefile to zip up the site-packages along with the core python code in preparation to ship it off to AWS Lambda.

Bash Script to Zip Up Site-Packages

#!/bin/bash
SITE_PACKAGES=$(pipenv --venv)/lib/python3.6/site-packages
DIR=$(pwd)

# Make sure pipenv is good to go
pipenv install

# Zip up the installed dependencies
cd "$SITE_PACKAGES"
zip -r9 "$DIR/OldPosts.zip" *

# Add the script itself to the archive
cd "$DIR"
zip -g OldPosts.zip old_posts.py

Makefile

.PHONY: package

package:
	sh package.sh

This should just work™.

programming

What is GlassFish?

I jumped down another rabbit hole trying to figure out how to get started with Java EE without using an IDE. Although IDEs are very handy when it comes to Java development, they can also be a crutch. For instance, if you want to transition to CI, do you actually know what commands the IDE runs when you right-click and run tests?

First, I have no idea what Java EE actually is. There is something called GlassFish, which is an open source Java EE “reference implementation”. It is also the same thing that is installed when you go to the main Java EE website.

Java EE does not yet support the latest JDK (9). On my Mac I had a tough time trying to get two versions of Java to run at the same time.

I think 99.9% of all tutorials about getting started with Java EE involve using NetBeans or Eclipse. I wanted to write one that used the CLI. This involves using Maven.

Maven has a concept called “archetypes”, templates that create the necessary directory structure for a new Java project. The main problem is that I could not find a bare-bones archetype definition.

At the end of the day, I dug deep into the rabbit hole and came up empty. I will figure this out at some point and write a blog post about it.

programming

Dockerized PostgreSQL and Django for Local Development

Docker and docker-compose make it dead simple to avoid dependency hell and have a consistent environment for your whole team while doing local development. This post walks through setting up a new Django project from scratch to use Docker and docker-compose. It is modeled after a previous post that I wrote about doing a similar thing with Laravel and MySQL.

Dockerfile

Nothing too interesting happening here: installing Python and pip.

FROM ubuntu:16.04

# system update
RUN apt update
RUN apt upgrade -y

# python deps
RUN apt install -y python3-dev python3-pip

docker-compose.yml

version: '2'
services:
  app:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/app
    working_dir: /app
    command: bash -c "pip3 install -r requirements.txt && python3 manage.py migrate && python3 manage.py runserver 0:8000"
    depends_on:
      - db
  db:
    image: postgres:9.6.5-alpine
    environment:
      - POSTGRES_USER=feedread
      - POSTGRES_PASSWORD=feedread
    volumes:
      - ./data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

With this in place you can start your Django app with docker-compose up. Each time the app starts, it will install the latest dependencies, run migrations, and start serving the app on localhost:8000.
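For completeness, the Django side needs a DATABASES entry that matches the compose service. This is a sketch of a typical settings.py block; since POSTGRES_DB is not set above, Postgres defaults the database name to the POSTGRES_USER value, and this also assumes a Postgres driver such as psycopg2 is listed in requirements.txt.

```python
# settings.py (sketch): point Django at the `db` service from docker-compose.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'feedread',      # POSTGRES_DB defaults to POSTGRES_USER
        'USER': 'feedread',
        'PASSWORD': 'feedread',
        'HOST': 'db',            # the compose service name resolves inside the network
        'PORT': '5432',
    }
}
```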

Notes

  1. In order to do stuff with the database locally, you should add the following record to your local /etc/hosts file:
    # /etc/hosts
    
    127.0.0.1 db
    
  2. Since we define .:/app as a volume, all of your local changes are immediately visible in the dockerized app.
  3. If you need to access the running app or db container you can do so with docker-compose exec app bash or docker-compose exec db bash.
  4. This docker-compose file is not really suitable for production since it is not likely that you would want to build the container each time the app starts or automatically run migrations.
  5. You can add additional services like memcached, a mail server, an app server, a queue, etc., using the same method that we are using above with our database.
programming

I Want to Become a Core Python Developer

I’ve been tinkering with Python for almost five years now. I am absolutely in love with the language. My new goal is to make enough contributions to the project to join the core team.

This post is my attempt to keep a list of all that I’ve done in this endeavor. I will keep this up to date on a monthly basis.

Short Term Goals

  • Ship some actual code. Focus on improving test coverage.
  • Attend the next available local meetup.
  • Get this PR merged.
  • Work on some other low hanging fruit from bedevere.

November 2017

Code

  • Reported an issue with Vagrant and Ansible on the pythondotorg repo, and assisted with testing the resolution. (Note for any future newbies: reporting issues, writing docs, and testing PRs are all super valuable things that you can do to get more familiar with a project’s code base.)
  • Substantial refactoring of the dev guide merged.

Community

  • Reached out to the core workflow team to see if we could introduce CircleCI into the Python organization. This addresses the PoC showed in this PR.

October 2017

Code

Community

  • Became a PSF Member.
  • Hung out in various IRC channels, notably #python on freenode, and helped out where I could.
  • Joined the PSF Volunteers mailing list and volunteered for opportunities as they came in.
  • Signed up for all the dev-related mailing lists.
  • Joined the BAyPIGgies local Python meetup group.
programming

Python Mocks Test Helpers

I’ve been writing a Python wrapper for the CircleCI API over the last week. I wanted to do this “the right way”, with test-driven development.

I have a couple integration tests that actually hit the CircleCI API, but most of the unit tests so far are using MagicMock to ensure that the basic functions are working as expected.

This generally involves the tedious process of dumping out JSON, saving it to a file, and then reloading that file later on to actually test it.

I wrote two helper functions that make this process slightly less tedious.

Load Mock

The first is a function that loads a file and overrides every request to return that file (typically as JSON).

    def loadMock(self, filename):
        """helper function to open mock responses"""
        filename = 'tests/mocks/{0}'.format(filename)

        with open(filename, 'r') as f:
            self.c._request = MagicMock(return_value=f.read())

Test Helper

The second is a function that runs a real request for the first time and dumps the output to a file.

    def test_helper(self):
        resp = self.c.add_circle_key()
        print(resp)
        with open('tests/mocks/mock_add_circle_key_response', 'w') as f:
             json.dump(resp, f)

Naming it test_helper allows it to be picked up and run when you run your test suite, since by default unittest will capture any methods that start with test.

Usage

An actual example is shown below.

    def test_clear_cache(self):
        self.loadMock('mock_clear_cache_response')
        resp = json.loads(self.c.clear_cache('levlaz', 'circleci-sandbox'))

        self.assertEqual('build dependency caches deleted', resp['status'])

Writing the tests is easy, we just copy and paste the name of the file that was created with test_helper and verify that the contents are what we expect them to be.

This approach has been working very well for me so far. One thing to keep in mind with writing these types of tests is that you should also include some general integration tests against the API that you are working with. This way you can catch any regressions with your library in the event that the API changes in any way. However, as a basic sanity check mocking these requests is a good practice and less prone to flakiness.

programming

Spring Security, Webjars, and MIME type error

I volunteered to be JLO (Java Language Owner) at CircleCI and I am currently working on getting a sample Spring Framework project running on CircleCI 2.0. I made a simple app bootstrapped with the Spring Initializr. I included Spring Security for the first time, and I decided to try out WebJars for static JavaScript libraries such as Bootstrap. I am using Thymeleaf for templating. The app does not actually do anything yet, but I ran into a pretty strange issue today that I wanted to write up here. My home page is pretty straightforward.

<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <title>CircleCI Spring Demo</title>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />

    <link rel="stylesheet" th:href="@{/webjars/bootstrap/3.3.7/css/bootstrap.min.css}" />

    <link rel="stylesheet" th:href="@{/css/style.css}" href="../static/css/style.css" />
</head>
<body>

    <nav class="navbar">
        <div class="container">
            <div class="navbar-header">
                <a class="navbar-brand" href="#">CircleCI Demo Spring</a>
            </div>
            <div id="navbar" class="collapse navbar-collapse">
                <ul class="nav navbar-nav">
                    <li class="active"><a href="#">Home</a></li>
                    <li><a href="#">About</a></li>
                </ul>
            </div>
        </div>
    </nav>

    <div class="container">
        <h1> CircleCI Spring Demo </h1>
    </div>

    <script th:src="@{/webjars/bootstrap/3.3.7/js/bootstrap.min.js}"></script>
</body>
</html>

However, when I tried to load up the app with mvn spring-boot:run, none of the styles showed up and the console showed the following error message:

Resource interpreted as Stylesheet but transferred with MIME type text/html

It turns out that a default Spring Security config will basically block any request unless you whitelist it. The MIME type is a red herring: what is actually happening is that my Spring Security config redirects all unauthenticated users to my login page (login.html) instead of serving the stylesheet from the /webjars directory. The solution is to update my security configuration to whitelist anything that comes from /webjars.

package com.circleci.demojavaspring;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;

@Configuration
@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests()
                .antMatchers("/", "/home", "/webjars/**").permitAll()
                .anyRequest().authenticated()
                .and()
            .formLogin()
                .loginPage("/login")
                .permitAll()
                .and()
            .logout()
                .permitAll();
    }
}

Now, the styles load as expected.

debian, programming

Using gtk-doc with Anjuta on Debian Stable

gtk-doc is a tool that helps extract code documentation. When you create a new project with Anjuta, it asks if you wish to include gtk-doc. Unfortunately, on Debian stable there seems to be a bug, because the autoconf configuration looks for the wrong version of gtk-doc.

/home/levlaz/git/librefocus/configure: line 13072: syntax error near unexpected token `1.0'
/home/levlaz/git/librefocus/configure: line 13072: `GTK_DOC_CHECK(1.0)'

On Debian stable, the version of gtk-doc that comes with the gtk-doc-tools package is 1.21. In order to resolve this error you need to update configure.ac to use the newer version of gtk-doc, as shown below:

GTK_DOC_CHECK([1.21])

Then you need to regenerate the entire project and everything should work as expected.
