Read All of Hacker News With the hanopener Extension

I’ve been reading Hacker News obsessively lately. In the past I would skim the top posts every couple of days; lately I have been reading every new and top article. To do this I would go to the main website and click on every single link on the front pages.

After doing this for a few days I realized that I should probably write some JavaScript to automate the entire process. So this afternoon I whipped up the hanopener Chrome extension.

Initially I made it a Python CLI script, but then realized that it would probably make more sense as a Chrome extension.

Be warned: this extension is super obnoxious and will open up 60 Chrome tabs every time you click its icon, which is not always a good time depending on your computer.

It’s available on the Chrome Web Store now.

Deploying an Angular 6 Application to Netlify

Netlify is an excellent platform for building, deploying, and managing web applications. It supports automated deployment using GitHub webhooks and also provides advanced features such as custom domains and HTTPS, all for free. Deploying a static site to Netlify is a breeze. Although Netlify does support running Angular applications, there are a couple of gotchas in the deployment process that I had to wrangle together from various blog posts in order to get things to work.

Enable Redirects

The first issue I ran into was that after I deployed my site to Netlify, whenever I clicked on an Angular route I would get a 404 page.

Netlify Page Not Found
Looks like you’ve followed a broken link or entered a URL that doesn’t exist on this site.

Fixing this is pretty simple. Ultimately you just need a file called _redirects in the root of your deployed site. This file sends all URLs to the root of your application, which allows the Angular router to kick in and do its thing. To get Angular to put the file there, you need to do the following two things.

  1. Create a _redirects file in the src directory of your Angular project.

    For most basic sites it should look something like this.

    # src/_redirects
    
    /*  /index.html 200
    
  2. Add this file to your angular.json file.

    Your angular.json file serves as the configuration for many different aspects of the Angular CLI. To get the _redirects file into the root of your output directory, you must list it under the assets of your build. A snippet of my file is shown below. Update this configuration file and push your changes back up to GitHub.

    {
      "$schema": "./node_modules/@angular/cli/lib/config/schema.json",
      "version": 1,
      "newProjectRoot": "projects",
      "projects": {
        "flagviz": {
          "root": "",
          "sourceRoot": "src",
          "projectType": "application",
          "prefix": "app",
          "schematics": {},
          "architect": {
            "build": {
              "builder": "@angular-devkit/build-angular:browser",
              "options": {
                "outputPath": "dist/flagviz",
                "index": "src/index.html",
                "main": "src/main.ts",
                "polyfills": "src/polyfills.ts",
                "tsConfig": "src/tsconfig.app.json",
                "assets": [
                  "src/favicon.ico",
                  "src/assets",
                  "src/_redirects"

    ... rest of file
    

Configure your Netlify Project

Now that you have the redirects file in place, you can set up your project for automatic deployment with GitHub and Netlify.

Once you have logged into Netlify, click on New Site From Git and find the name of your project.

New Site from GitHub

Configure Build Settings

The last step is to configure your build settings.

For Build command you should enter ng build --prod.

For Publish directory you should enter dist/$NAME_OF_YOUR_PROJECT.

Netlify Build Settings

Be sure to replace $NAME_OF_YOUR_PROJECT with the actual name of your project.
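As an aside, and not something the steps above require: if you prefer to keep these settings in version control, Netlify will also read them from a netlify.toml file at the root of your repository. The values below mirror the UI settings, using the flagviz example project from earlier.

    [build]
    command = "ng build --prod"
    publish = "dist/flagviz"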

Now you can click on Deploy site, and once the initial deployment has completed you should see your new Angular application running on Netlify with a working routing system.

Install Terraform on an Ubuntu Server

Terraform by Hashicorp is a powerful tool that you can use to manage your infrastructure as code. It is distributed as a single binary so getting it installed on Ubuntu is a breeze.

  1. Assuming you are on a “standard” (64-bit) computer or server, copy the URL for the 64-bit Linux package from the downloads page. At the time of writing this was: https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip
  2. SSH into your Ubuntu server and execute wget https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip
  3. Unzip this file (you may need to install the unzip package with sudo apt-get install unzip)
  4. Move the terraform binary to the /usr/local/bin directory with sudo mv terraform /usr/local/bin/ (the full sequence is shown below)
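Putting those steps together, the whole installation looks something like this:

# download and extract the Terraform 0.11.7 binary for 64-bit Linux
wget https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip
sudo apt-get install unzip
unzip terraform_0.11.7_linux_amd64.zip

# move the binary onto the PATH
sudo mv terraform /usr/local/bin/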

You can confirm that this worked by running terraform -version in your terminal. Your output should look something like this.

ubuntu@ip-172-26-5-139:~$ terraform -version
Terraform v0.11.7

You should now be able to execute the terraform command from anywhere and manage your infrastructure as code.

Using Microsoft Power BI With PostgreSQL

Microsoft Power BI is an advanced business intelligence suite that allows you to perform robust data analysis from a variety of different data sources. One common data source is PostgreSQL. Although Power BI does support PostgreSQL, getting started can be a bit tricky because the documentation is sparse.

If you try to connect to PostgreSQL with a fresh installation of Power BI, you will receive the following error message.

https://www.postgresql.org/

This connector requires one or more additional components to be installed before it can be used.

If you click on the Learn more link, it will take you to the GitHub repository for the Npgsql library, which is the .NET data provider for PostgreSQL.

If you download the latest .msi file and run through the default installation, you will continue to receive the same error message in Power BI. To get this to work, you must select the Npgsql GAC Installation option as shown in the screenshot below.

Npgsql GAC Installation Option

Once you have installed Npgsql with the GAC option, restart Microsoft Power BI and you should be able to connect to a PostgreSQL database as a data source.

PostgreSQL connection window in Microsoft Power BI

Slow Python Script and Using Pipenv with AWS Lambda

I’m working on improving a Python script I wrote to get a list of old posts from a WordPress website. Basically, I want to be able to see which posts I wrote X years ago on this day, for any WordPress site.

This script uses the wonderful requests library and the very powerful public WordPress API.

I am also using pipenv for the first time, and it’s wonderful. I wish I had started using this tool years ago.

What it Does Right Now

  1. Takes a dictionary of sites and iterates over each one
  2. Prints matching posts to the console; the core of it looks like this:

    if years_ago == 1:
        print("1 year ago I wrote about {0} {1}".format(p['title']['rendered'], p['link']))
    elif years_ago > 1:
        print("{0} years ago I wrote about {1} {2}".format(years_ago, p['title']['rendered'], p['link']))

The Script is Super Slow

You can time how long a script takes on OS X using the time command.

Levs-iMac:OldPosts levlaz$ time python old_posts.py
1 year ago I wrote about Thoughts on “Sacramento Renaissance” https://tralev.net/thoughts-on-sacramento-renaissance/

real	0m11.192s
user	0m0.589s
sys	0m0.060s

I know why it’s slow: I have something like six for loops and a bunch of other inefficiencies. In addition, the responses are not cached anywhere, so the script has to fetch the entire JSON payload every time it runs.

Plans for Optimization

  1. Use Redis (or something) to cache the results (a sketch of the idea follows this list).
  2. Get rid of some of the for loops if we can.
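Nothing is implemented yet, but the caching idea might look roughly like this, assuming a local Redis instance and the redis-py package:

import json

import redis
import requests

r = redis.Redis(host="localhost", port=6379, db=0)

def get_posts(url):
    """Fetch posts for a site, caching the JSON payload in Redis for a day."""
    key = "posts:{0}".format(url)
    cached = r.get(key)
    if cached:
        return json.loads(cached)
    posts = requests.get("{0}/wp-json/wp/v2/posts?per_page=100".format(url)).json()
    r.setex(key, 86400, json.dumps(posts))  # expire after 24 hours
    return posts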

Plans for Usage

  1. Deploy to AWS (Lambda?) (see the sketch below)
  2. Have it run on a cron job every day (using a CloudWatch scheduled event)
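If it does end up on Lambda, the entry point would be a small handler along these lines; check_old_posts here is a hypothetical wrapper around the existing script logic:

def handler(event, context):
    # Runs once per scheduled CloudWatch event.
    check_old_posts()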

Plans for Additional Features

I want to share all of the posts from that day on social media. Instead of wiring up each of the various accounts individually, I plan to use the Buffer API to post everywhere at once and to queue up posts so that it does not fire off a bunch of stuff at the same time when there are many posts for a given day.

This will involve doing some sort of OAuth dance, because I don’t think that Buffer offers personal access tokens.

I’ll Just Use Lambda

Famous last words.

It’s not the worst thing in the world, but when you are using the amazing pipenv tool, you have to track down where the site-packages directory is located and zip it up in order to ship your code to AWS Lambda.

Unsurprisingly, someone opened a feature request for this, but the solution in the comments works just fine.

I wrote a little bash script that is being called through a Makefile to zip up the site-packages along with the core python code in preparation to ship it off to AWS Lambda.

Bash Script to Zip Up Site-Packages

#!/bin/sh
# package.sh: bundle the dependencies and script into a zip for AWS Lambda
set -e

SITE_PACKAGES="$(pipenv --venv)/lib/python3.6/site-packages"
DIR="$(pwd)"

# Make sure pipenv is good to go
pipenv install

# Zip the dependencies from the virtualenv's site-packages
cd "$SITE_PACKAGES"
zip -r9 "$DIR/OldPosts.zip" *

# Add the core python code to the archive
cd "$DIR"
zip -g OldPosts.zip old_posts.py

Makefile

.PHONY: package

package:
	sh package.sh
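Running make package now produces OldPosts.zip with the dependencies and old_posts.py at the root of the archive, which is the layout Lambda expects for a zip deployment.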

This should just work™.

What is GlassFish?

I jumped down another rabbit hole trying to figure out how to get started with Java EE without using an IDE. Although IDEs are very handy when it comes to Java development, they can also be a crutch. For instance, if you want to transition to CI, do you actually know what commands the IDE runs when you right-click and run tests?

First, I have no idea what Java EE actually is. There is something called GlassFish, which is an open source Java EE “reference implementation”. It is also the same thing that is installed when you go to the main Java EE website.

Java EE does not support the latest JDK (version 9). On my Mac I had a tough time trying to get two versions of Java to run at the same time.

I think 99.9% of all tutorials about getting started with Java EE involve using NetBeans or Eclipse. I wanted to write one that used the CLI. This involves using Maven.

Maven has a concept called “archetypes”, which create the necessary directory structure for a new Java project. The main problem is that I could not find a bare-bones Java EE archetype definition.
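For comparison, generating a plain (non-EE) Java skeleton from the CLI looks like this; maven-archetype-quickstart is the standard bare-bones archetype, and the group and artifact IDs are example values:

mvn archetype:generate \
  -DgroupId=com.example.app \
  -DartifactId=my-app \
  -DarchetypeArtifactId=maven-archetype-quickstart \
  -DinteractiveMode=false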

At the end of the day, I dug deep into the rabbit hole and came up empty. I will figure this out at some point and write a blog post about it.

Learn Kubernetes with Interactive Tutorials

I wanted to get a deeper understanding of how Kubernetes actually works, so I started to work through the tutorials on the Kubernetes documentation website. Kubernetes is a container orchestration system that provides standard tooling for deploying, scaling, and managing containers at scale.

The tutorials themselves are amazing.

The tutorials use Katacoda to run a virtual terminal in your web browser that runs Minikube, a small-scale local deployment of Kubernetes that can run anywhere.

At a high level, Kubernetes allows you to deploy a cluster of resources as a single unit without having to really think about the underlying individual hosts. It follows a master/node model, where a centralized control plane manages your cluster and worker nodes perform the actions that your application needs.

Kubernetes supports running both Docker containers and rkt containers. I’m pretty familiar with Docker; I learned more than I ever wanted to over the last few years of working at CircleCI. I have never used rkt, but I am looking forward to learning more in the future.

It is really neat that you can simulate a production-like cluster on your local computer using Minikube. This is a great way to learn Kubernetes, and it enables local development as well.

These interactive tutorials let you get your hands dirty with a real cluster without having to install anything, and Katacoda itself is a neat web service for learning new technologies right in your browser.

Kubernetes in your Browser

The first tutorial teaches you how to use Minikube and the kubectl CLI to create a new cluster.

One of the most amazing parts of Kubernetes to me is the self-healing aspect. For example, once you have defined what your application stack consists of, if one of its pods happens to die, Kubernetes will automatically replace it with another instance.
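As a rough sketch of that flow (these are real Minikube and kubectl commands, but the deployment name and image are example values, not the ones the tutorial uses):

# start a local single-node cluster
minikube start

# run an example deployment with two replicas
kubectl create deployment hello --image=nginx
kubectl scale deployment hello --replicas=2

# delete one pod and watch the Deployment schedule a replacement
kubectl get pods
kubectl delete pod <one-of-the-pod-names>
kubectl get pods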

Not only does the interactive online tutorial let you use a real Kubernetes cluster from within your browser, you can even preview the cluster’s web UI and view your application running.

Kubernetes Web UI

This is such a great way to learn.

Apex Triggers

I worked on the Apex Triggers module on Trailhead. Apex triggers are very similar to database triggers (remember those?). I remember that in my first job, at an enterprise healthcare company, our DB was littered with hundreds of triggers that performed various actions whenever records were inserted, updated, or removed.

Triggers are a powerful concept, but they tend to be very difficult to maintain at a large scale, especially when you have a large team. I think they are an artifact of legacy development methodologies. These days, most of the actions that triggers used to be responsible for are managed either as part of the model or as separate background tasks.

Despite this being true in most modern software development, Salesforce lets you write triggers in a first-class way that do things when records change. I think this is a case where they are still “ok” to use, because they remove a lot of the overhead of having to keep track of the state of all of your various records.

The best part about Apex triggers is that unlike DB triggers, which require you to write your code in an enhanced variant of SQL, Apex triggers let you write the code in Apex. This means that you can take full advantage of all of the built-in Salesforce libraries, as well as make HTTP callouts (the most powerful part of all of this) in a really simple way.

One thing to note is that if you do make HTTP callouts from a trigger, you must do so asynchronously.

Apex triggers have handy access to the context that fired the trigger, including both the old and new state of the affected records (Trigger.old and Trigger.new).

One great hint that the module gives is to write your code to support both single and bulk operations, as in the sketch below. While most triggers that I have written operate on only a single object at a time, there may come a day when I want to operate on multiple objects at once, for example via the bulk API. By writing the code in a way that supports bulk operations (essentially using a for loop), you can reuse the same code in the future rather than having to handle both cases separately.
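A minimal sketch of that pattern (the trigger name, object, and field defaults here are made up for illustration):

trigger BulkSafeExample on Account (before insert, before update) {
    // Trigger.new holds every record in the batch, whether the trigger
    // fired for one record or for hundreds via the bulk API, so always
    // loop instead of assuming a single record.
    for (Account a : Trigger.new) {
        if (a.Description == null) {
            a.Description = 'Set by BulkSafeExample trigger';
        }
    }
}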


Salesforce DX External Sharing Model

I was working through the Getting Started with Salesforce DX module on Trailhead, and when it came time to push the Dreamhouse app up to my scratch org, I got a dozen or so error messages complaining about all sorts of things.

PROJECT PATH                                                                                    ERROR
──────────────────────────────────────────────────────────────────────────────────────────────  ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
force-app/main/default/objects/Property__c/Property__c.object-meta.xml                          Can't specify an external sharing model for Property__c
force-app/main/default/objects/Favorite__c/fields/Property__c.field-meta.xml                    referenceTo value of 'Property__c' does not resolve to a valid sObject type (65:13)
force-app/main/default/objects/Favorite__c/listViews/All.listView-meta.xml                      In field: columns - no CustomField named Favorite__c.Property__c found (88:16)
force-app/main/default/layouts/Broker__c-Broker Layout.layout-meta.xml                          In field: relatedList - no CustomField named Property__c.Broker__c found (81:19)
force-app/main/default/layouts/Favorite__c-Favorite Layout.layout-meta.xml                      In field: field - no CustomField named Favorite__c.Property__c found (13:26)

Luckily, the error messages are pretty useful. In this case it looks like the “External Sharing Model” was not turned on in my scratch org; it appears to be turned off by default.

In order to get this step to work:

    1. Log into your scratch org with sfdx force:org:open
    2. Go to Setup
    3. In the Quick Search box, look for Sharing Settings
    4. Click on Enable External Sharing Model

Sharing Settings in Salesforce

Now you can run the push command (sfdx force:source:push) and deploy the Dreamhouse app without any issues.

Keep on trailbalazing!


Oskar’s Heavy Boots

In Extremely Loud and Incredibly Close, Jonathan Safran Foer tells the story of a young boy named Oskar who is on a quest to come to terms with the sudden death of his father in the 9/11 attacks. While rummaging through his father’s belongings a few days after the tragic events of that day, he finds a mysterious key inside a vase. Determined to find the lock that it belongs to, he travels around all of New York City in search of closure. Foer captures the voice of a nine-year-old boy perfectly. We are immediately attached to him and his terrible loss, and we spend the rest of the book hoping that he succeeds in his journey.


EXTREMELY LOUD AND INCREDIBLY CLOSE
By Jonathan Safran Foer
368 pp. Mariner Books $25

September 11th is not the only tragedy covered in this book. A generation earlier, Oskar’s grandfather survived the Bombing of Dresden. While he escaped with his life, he chose to live as a victim rather than a survivor. He leaves Oskar’s grandmother abruptly, loses the ability to speak, and spends many years writing letters to his son (Oskar’s father) that he never delivers before his death.

The book consists of intertwined segments. The main story is pushed along by Oskar’s narration, while pieces of the past are presented in the form of letters from his grandparents. It explores a wide range of emotions, including tragedy, loss, love, and regret.

I regret that it takes a life to learn how to live, Oskar. Because if I were able to live my life again, I would do things differently.

Oskar slowly finds a way to cope with his father’s death. Throughout his journey he comes up with many provocative metaphors. The one that stood out the most to me compares life to a building on fire.

Everything that’s born has to die, which means our lives are like skyscrapers. The smoke rises at different speeds, but they’re all on fire, and we’re all trapped.

It’s difficult to read this book even a decade after the terrible events of that day. Those of us who witnessed them were changed forever in one way or another. An entire generation has now grown up viewing life through the lens of everything that happened before 9/11 and everything that has happened after.

We are quickly approaching a date when everyone under the age of 18 will have been born after September 11, 2001. I imagine they will grow up to view this day much as people in their 30s and 40s think about Pearl Harbor or the bombing of Hiroshima: a terrible event that happened long ago but has little emotional connection to everyday reality. Historical fiction is important in this regard. Unlike non-fiction books that tell an objective story with facts, figures, and death tolls, fiction allows us to view the event from the perspective of a real human being. We feel something more than shock. We learn something more than a statistic or a timeline of events.

I thought if everyone could see what I saw, we would never have war anymore.

This book does not have a happy ending. We walk away feeling the same hopelessness and loss that Oskar does. Our boots become very heavy. The next 9/11, Hiroshima, Bombing of Dresden, Rape of Nanking, or < INSERT NAME OF TRAGEDY HERE > is potentially days away. I would love to live in a world where books like this one were pure fiction instead of based on a true story.