UbuTab Case Study: How to Be Taken Seriously

I was taking a look at the UbuTab, a supposedly upcoming Ubuntu/Android tablet that failed to meet its Indiegogo campaign goal last month. A couple of things about this site just make me sad. I see shady businesses all the time with no SSL during checkout, no real email address, and a lazy theme for their site. I am not sure if they are lazy, don’t care, or a combination of both. This post is really a PSA. The 1990s are over. People expect a higher level of quality in your product, your website, and your brand. If you are launching a new product, here are a few tips to be taken seriously.

Use a Real Email Address

Use an email address at your own domain. It is not difficult to set up a custom domain for your email address; Gandi gives email away for free when you purchase a domain. Having a custom domain instead of a free email provider makes your company seem more legitimate.

Don’t Send Customers to Paypal

Use a payment processing system, or at least something like Stripe that is integrated into your website, instead of redirecting users to a PayPal checkout page that sends their payment to a company with a different name from your own. This is just silly. I would love to have a tablet that runs Ubuntu, but this entire operation just screams scam to me. It will be interesting to see if they actually release a product later this year.


It looks like the actual site was taken down. You can see an archive of the site here.

Posted in tech | 2 Comments

Installing Node.js + Ghost in an Ubuntu 14.04 LXC Container

I had to set up a blog for an interaction design course that I am taking this semester. I figured this would be the perfect opportunity to play with Node.js and work with the absolutely beautiful Ghost blogging platform. I installed all of this on my primary Linode server that hosts this blog, among other things. I like to keep my server neat, so whenever I am working with a new technology that I have not used before, especially when I know it is going to install a bunch of random files and run a bunch of random scripts that I will never be able to track down, I like to put it all into an LXC container. Aside from a few snags, the installation was pretty straightforward. I followed the installation guide on the Ghost GitHub site and had to make a couple of small changes due to this issue. The interesting part was getting the container to be accessible from the outside world. I used Apache’s mod_proxy module to forward requests for the new subdomain that I created directly to Ghost, which was running in my LXC container. I have seen a couple of different approaches to making containers accessible to the outside world, but I think that this approach works well, especially if you are hosting multiple sites on the same server.

Installing Your Container

I suppose this part is optional; you can just as well run this on a regular server or VM. However, if you like to put things into tiny little boxes like me, read on! The following should be run as root.

apt-get install lxc   

lxc-create -t download -n nodejs   

# During the template selection choose ubuntu, trusty, and amd64   
# Start the Container as a Daemon  
lxc-start -n nodejs -d   

# Open a screen session to attach the container.  
# Why? Because "If it's worth doing, it's worth doing in screen"   
screen -dRR node   
lxc-attach -n nodejs

Installing Node.js and Ghost

Now that we have our shiny new container, let’s get Node.js and Ghost installed. Since containers come with a very minimal set of software, we will install some additional utilities as well. The following should be run as root inside of your container. The only change I had to make from the official install guide was installing nodejs-legacy along with everything else. Take a look at the issue linked above if you are interested in more information.

apt-get install wget unzip nodejs npm nodejs-legacy   

# Create a Directory for your Ghost blog and go into it  
mkdir ghost && cd ghost

# Download and Unzip Ghost  
wget https://ghost.org/zip/ghost-0.5.8.zip  
unzip ghost-0.5.8.zip   

# Install Ghost  
npm install --production  

# Since we are in a container, in order to be able to access Ghost from the
# host machine we will need to edit config.js and change the server host
# from 127.0.0.1 to 0.0.0.0 so that Ghost listens on all interfaces
vim config.js   

# Start Ghost   
npm start

If all went well, you should now see output like this in the terminal.


npm start

> ghost@0.5.8 start /ghost
> node index

Migrations: Up to date at version 003
Ghost is running in development...
Listening on
Url configured as: http://localhost:2368
Ctrl+C to shut down

You can now exit the screen session by pressing Ctrl+a and then Ctrl+d to get back to the host.

Apache Proxy to Container

The last step of this is to set up the Virtual Host config file to proxy requests to our new Node.js container.

# Grab the IP address of your container   
lxc-ls --fancy   

# Load the appropriate apache proxy modules   
a2enmod proxy  
a2enmod proxy_http   

# Configure the Virtual Host file to set up the proxy. It should look
# something like this. Be sure to replace the IP address listed below
# with the IP address of your actual container.

<VirtualHost *:80>

    # Admin email, Server Name (domain name), and any aliases
    ServerAdmin lev@levlaz.org
    ServerName hci.levlaz.org

    ProxyVia full
    ProxyPreserveHost on

    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    ProxyPass / http://10.0.3.100:2368/
    ProxyPassReverse / http://10.0.3.100:2368/

</VirtualHost>


# Restart apache to clean things up   
service apache2 restart

You should now be able to access your Ghost blog by going to the domain that you set up in your virtual host config file. In order to set up Ghost for the first time, you will want to navigate to http://yoursite.com/ghost. Ghost is a great product! This was a fun little project and a good exercise with LXC and proxying requests to containers. I hope you found this useful! If you have any questions or run into any issues, please let me know in the comments below.

Posted in linux | 1 Comment

Integrate ownCloud in Ubuntu with Symbolic Links

Ever since I switched to ownCloud last year I have pretty much stopped using my local file system for any personal documents or media. This is especially handy since I typically use several different computers in a given day. The only thing that is a bit frustrating with this setup is all the wasted links that exist in Nautilus by default: Documents, Music, Downloads, Videos, etc. Although you can simply remove these and use the bookmarks feature of Nautilus to make new folders, there is a better way! Enter symbolic links. Symbolic links are one of the most useful and powerful parts of UNIX based systems. Using symbolic links we can simply redirect the Documents, Music, Pictures, etc. folders to our ownCloud folder. This way Nautilus will show you your ownCloud files when you click on these existing links. This essentially integrates ownCloud with your native file system, making it pretty transparent and seamless. This can easily be accomplished in two steps.

  1. Remove the existing folder (this is required to get the symbolic link working). If you have any data in the existing folders, be sure to back it up and/or move it, because the next step **will delete these files!** For example, to reroute our Music folder:
    rm -rf /home/$USER/Music
  2. Create the Symbolic Link
    ln -s /home/$USER/ownCloud/Music /home/$USER/Music

Now you have created a symbolic link, and assuming you have some content in your Music folder on ownCloud you will now see this content when you click on the Music link in Nautilus. In addition, if you open up a terminal and run ls ~/ you will see that the Music folder is now a lighter shade of blue. You can repeat this step for any additional folders. This will make your life easier and keep Nautilus nice and clean.
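If you want to redirect several folders at once, the two steps above can be wrapped in a small shell loop. This is just a sketch of the same idea; it assumes your ownCloud directory lives at ~/ownCloud and already contains folders with matching names.

```shell
# Replace each standard folder with a symlink into ownCloud.
# WARNING: just like the manual steps above, this deletes the local
# folders first, so back up anything inside them before running!
for dir in Documents Music Pictures Videos; do
    rm -rf "$HOME/$dir"
    ln -s "$HOME/ownCloud/$dir" "$HOME/$dir"
done
```

Adjust the folder list to taste; anything not in the loop is left alone.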

Posted in linux | Leave a comment

User Interfaces

I am taking a Human Computer Interaction course this semester which I am really looking forward to! We were assigned a whole bunch of reading, but one article that I really enjoyed was Jef Raskin’s comments on User Interfaces. It is an older article, but many of the things that he pointed out really resonate with the issues that we still face today. One line in the article really stands out for me: “the union of two wrong systems does not make for a single, unified, correct one”. This reminds me of the idiom “two wrongs don’t make a right”, but it is interesting to think about from a UI perspective. UIs are changing all around us. Every major GUI OS is implementing some sort of convergence between the traditional desktop metaphor and touch screen capability, with endless screens of icons and interesting gestures in lieu of menu driven interfaces. This is a very interesting time to be thinking about HCI as more and more devices in our world become computerized.

Posted in design | Leave a comment

Removing a Public Facing User Page in OS X Server Wiki

OS X Server has some pretty neat tools that are easy to set up and use for team collaboration. The problem is that some of these tools are a bit quirky, especially when it comes to removing users or making sure that no data is accessible from the outside world. For example, if you edit your user profile page, this change will be visible to the public. There is no real way (that I can find) to hide it, so it is a little bit annoying. Even removing the user from the wiki does not fix this. After doing some digging, it looks like all of this is controlled by a PostgreSQL database, which makes it possible to figure out how to get rid of these pages. You can log into the PostgreSQL database on OS X Server by opening up a terminal and running sudo -u _postgres psql template1. You can list all of the available databases by running \list, and you should see one called collab. Connect to collab so that you can view the data inside and make some changes by running \c collab. You can see the entire schema by running \dt, and it will look something like this:

                    List of relations
 Schema |            Name             | Type  | Owner
--------+-----------------------------+-------+--------
 public | blog_entity                 | table | collab
 public | document_entity             | table | collab
 public | entity                      | table | collab
 public | entity_acls                 | table | collab
 public | entity_acls_defaults        | table | collab
 public | entity_attrs                | table | collab
 public | entity_changesets           | table | collab
 public | entity_comment              | table | collab
 public | entity_lock                 | table | collab
 public | entity_preview              | table | collab
 public | entity_private_attrs        | table | collab
 public | entity_tag                  | table | collab
 public | entity_type                 | table | collab
 public | file_entity                 | table | collab
 public | filedata_entity             | table | collab
 public | filename_reservation        | table | collab
 public | global_settings             | table | collab
 public | groups                      | table | collab
 public | migration_entity            | table | collab
 public | migration_status            | table | collab
 public | migrationplaceholder_entity | table | collab
 public | notification                | table | collab
 public | page_entity                 | table | collab
 public | podcast_entity              | table | collab
 public | podcast_episode_entity      | table | collab
 public | preview_queue               | table | collab
 public | project_entity              | table | collab
 public | relationship                | table | collab
 public | savedquery_entity           | table | collab
 public | search_index                | table | collab
 public | search_stat                 | table | collab
 public | session                     | table | collab
 public | subscription                | table | collab

The schema is pretty complicated and has some really interesting relationships. My first thought was to just remove all instances of a user, but this turned out to be very complex because pretty much all of these tables depend on each other. The best way to remove a page is to fake the application out by marking the item as “deleted” in the entity table. For example, you can find the entity that you want to hide by running:

 select * from entity where long_name like 'Lev%';

This will show all of the things that I have done in the wiki. Find the specific thing that you want to hide. If you are looking for a user profile page, it has an entity_type_fk of com.apple.entity.Page. You can grab the uid of the item from the first column and then run a simple update statement to mark it as deleted.

update entity set is_deleted = 't' where uid = 'YOUR UID';

This item will no longer show up in the UI and you can have a truly “private” wiki again. The data model is pretty interesting and is worth looking at if you have nothing to do.

Posted in databases | Leave a comment

I am like 6 days late on my 2014 post

2014 was freaking awesome. I went to New Orleans for the first time, and then back again a few weeks later. It was an amazing city and I cannot wait to go back soon. I moved to a new department in my previous job and as a result made some awesome new friends. I continued to plow through my graduate program and am excited to continue to make progress this year. I got a great new job at Linode, uprooted my life, moved to South Jersey, and met some amazing people. I wrote more code, solved more problems, and learned more than any previous year to date. I cannot wait to see what 2015 has in store. 🙂

Posted in life | Leave a comment

Diagraming Tools for Linux

In grad school, and supposedly in the real world, we make a lot of diagrams. There are a lot of tools for this.

For Windows, Visio probably works the best, but I don’t like the way Visio 2013 makes it really difficult to add new members when you are making a class or ER diagram. Visio 2013 is prettier, but a lot clunkier in my opinion.

For Mac, I am a pretty big fan of OmniGraffle. It is simple, easy to use, has good stencils, and gets the job done.

For Linux, although there are various choices, they all leave something to be desired. I recently tried out Visual Paradigm; even though it has a “free” version for non-commercial use, it puts gross watermarks on your work if you make more than one of the same type of diagram. This is not very professional, and I do not think that the software is good enough to pay for. It seems like they just took Eclipse and added some drawing functionality.

As much as I want to like Dia, it is just too clunky for everyday use. I would not recommend it for anything other than very simple diagrams.

yEd is a really neat tool; it is simple and free to use. This tool really stands out from the pack for me because it “just works”. It also gets extra points for using an open standard drawing format, which makes it compatible with other standards based software.

I think the best tool (but also the one with the highest learning curve) is Graphviz. Specifically, I am referring to using dot to make drawings. Despite the steep learning curve, it is 100% free software, standards based, flexible, and will draw exactly what you ask it to without too much trouble. Graphviz output embeds perfectly in other documents, which can otherwise be challenging when writing reports or papers. Also, if you master dot you will feel like a real hacker.
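To give a flavor of what dot looks like, here is a minimal sketch that draws a tiny class-style diagram. The node names are made-up examples, and the render step assumes the graphviz package is installed.

```shell
# Describe the diagram in dot's plain-text format...
cat > classes.dot <<'EOF'
digraph G {
    rankdir=LR;
    node [shape=box];
    User -> Order [label="places"];
    Order -> Item [label="contains"];
}
EOF

# ...then render it to a PNG (or -Tsvg, -Tpdf, etc.)
dot -Tpng classes.dot -o classes.png
```

Because the source is plain text, it also diffs nicely in version control, which is another point in its favor.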

If you make diagrams for work or school (UML, ER, etc) what tool do you use? Let me know in the comments below!

Posted in software | Leave a comment

Convert Markdown to PDF in Sublime Text

Sublime Text is an awesome text editor that has a ton of useful extensions, installable through Package Control, that make it even better.

One of my favorite parts of emacs is org-mode, which allows you to organize your life, make awesome notes, and even perform spreadsheet calculations all from within the emacs text editor. The best part about org-mode is that you can export your documents to HTML, PDF, and LaTeX.

In my opinion, Sublime Text is almost a modern successor to emacs, because a lot of the extensions that have been developed for it allow you to do fancy things like this as well. I am a huge fan of Markdown. It makes writing structured documents super simple, and it is especially useful when your documents contain code snippets. As a CS student, I am often writing papers that include code snippets, and this is not handled very well in most word processors. Org-mode does a great job of creating documents with code snippets, and I was excited to find out that with a few extensions to Sublime Text you can make beautiful documents including code with little to no hassle. The following steps were done with Sublime Text 3 running on Debian Testing; the steps may be a bit different on other operating systems. In order to make beautiful documents with code snippets in Sublime Text you will need to do the following.

  1. If you have not already, install Package Control
  2. Install MarkdownEditing in Sublime Text
  3. Install Pandoc in Sublime Text
  4. Install Pandoc in Debian
  5. Install TeX Live in Debian (this is to convert things to PDF)

sudo apt-get install pandoc texlive

We are going to be leveraging the awesome tools provided by pandoc in order to make really cool things happen with our text editor! This even supports syntax highlighting! Once you have installed all of those prerequisites, you can save a text file as Markdown using the .md extension and then convert it to HTML, PDF, or other formats using pandoc. Simply open the command palette with Ctrl+Shift+P, type in pandoc, and select your output format.
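Since the plugin ultimately hands your file to pandoc, you can do the same conversion straight from a terminal. A rough sketch (notes.md is a hypothetical input file):

```shell
# Swap the .md extension for .pdf to get the output name, then convert.
in="notes.md"
echo "# Hello" > "$in"       # stand-in Markdown content for the example
out="${in%.md}.pdf"
pandoc "$in" -o "$out"       # PDF output goes through texlive
```

This is handy for scripting conversions of many files at once, something the editor plugin does not do.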


Posted in debian, software | 2 Comments

Built In PDF Magic in Debian

Debian has some awesome PDF tools built right in via the poppler-utils package that I never knew about. In my previous post I talked about how to make beautiful documents with code snippets using various Sublime Text extensions to convert Markdown into PDF. One issue that I ran into was getting a cover page created. As far as I know, there is really no easy way to make a nice cover page in Markdown. Specifically, in GitHub Flavored Markdown (which is what I am using) there is not a good way to make a page break. The easy solution is to simply write up your entire document in Markdown and then make a separate cover page. The problem is how to merge these two files together into one document. Thanks to the built in PDF tools in Debian, this becomes very simple! The poppler-utils package has the following utilities built in:

  • pdfdetach — lists or extracts embedded files (attachments)
  • pdffonts — font analyzer
  • pdfimages — image extractor
  • pdfinfo — document information
  • pdfseparate — page extraction tool
  • pdftocairo — PDF to PNG/JPEG/PDF/PS/EPS/SVG converter using Cairo
  • pdftohtml — PDF to HTML converter
  • pdftoppm — PDF to PPM/PNG/JPEG image converter
  • pdftops — PDF to PostScript (PS) converter
  • pdftotext — text extraction
  • pdfunite — document merging tool

In order to combine a cover page with another document we can simply run the following command in a terminal.

pdfunite coverpage.pdf content.pdf final.pdf

This will create a document called final.pdf (of course, you should change coverpage.pdf and content.pdf to match your actual files). Warning: be sure to include the final output file, or pdfunite will overwrite the last file that you type with the merged contents of all the previous files!
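Given that warning, a tiny wrapper function can guard against clobbering an existing file. This is just a sketch (merge_pdfs is a made-up name, not part of poppler-utils):

```shell
# merge_pdfs IN1 IN2 ... OUT
# Refuses to run if OUT already exists, since pdfunite silently
# overwrites whatever file is listed last.
merge_pdfs() {
    for out in "$@"; do :; done  # POSIX trick: grab the last argument
    if [ -e "$out" ]; then
        echo "refusing to overwrite existing file: $out" >&2
        return 1
    fi
    pdfunite "$@"
}
```

Now merge_pdfs coverpage.pdf content.pdf final.pdf behaves like the command above, but bails out if final.pdf is already there.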

Posted in debian, software | Leave a comment

Connect LibreOffice Base to MySQL

I think LibreOffice Base has a lot of underutilized potential as a rapid application development platform, a business intelligence platform, and a general reporting platform. Not to mention the fact that registered data objects can be used in all of the other LibreOffice applications to make really amazing documents and improve the workflow of any office. Anyone who has actually used MS Access knows how powerful it used to be for these types of purposes. The most recent version of Access seems to have lost a lot of the features that made it useful, which is okay since most power users are still using Access 2003. LibreOffice Base is not nearly as powerful as Access, specifically from a usability perspective. My biggest frustration with getting started with LibreOffice Base is the obscure and somewhat cryptic documentation around the platform, which makes way too many assumptions about what someone new to LibreOffice actually knows. My hope is to provide some practical tutorials with real world use cases. So let’s get started by connecting Base to an existing MySQL database. In my opinion, the built in HSQL engine has a somewhat weird syntax and is generally not worth learning unless you are never planning on actually writing any SQL and will only use the built in wizards. I would prefer to work with MySQL databases because they are ubiquitous, use a “standard” syntax, and are very powerful. In addition, most practical office use cases will involve a central database, not a local one.

Preparing Your MySQL Server

This is the part of the documentation that I find most obscure and confusing, so here is how to do it using LibreOffice 4.3 running on Ubuntu 14.04 LTS. The steps here will be slightly different depending on whether you are developing against a local database or a remote database. If you are using a local MySQL database, please feel free to skip this section. I do most of my database development inside of Linux Containers, which essentially makes my databases “remote”. In order to allow remote connections we need to make a few changes to the default MySQL configuration. Please note that if you are doing these steps on a live production system you will need to be extra careful with users, permissions, ports that are opened, etc. This falls outside the scope of this tutorial, but the rule of thumb is that if your database accepts connections from the outside world, you should whitelist each IP address that will be connecting to it and block all others. The easiest way to do this, in my opinion, is with your firewall. By default MySQL only listens on the local host and is not accessible from remote hosts. To change this setting you need to edit the my.cnf file. 1) Open up my.cnf, which is found in /etc/mysql/my.cnf 2) Find the bind-address and change it from the local host to the IP address of the server.
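For reference, after that change the relevant section of my.cnf should look something like this (192.168.1.10 is a hypothetical server address; substitute your server's actual IP):

```
# /etc/mysql/my.cnf
[mysqld]
bind-address = 192.168.1.10
```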


3) Restart MySQL

sudo service mysql restart

Install the MySQL JDBC Driver

On Ubuntu 14.04 this is very easy and can be done by running the following command:

sudo apt-get install libmysql-java

Configure the Class Path in LibreOffice

Open up any LibreOffice app and go to Tools -> Options. In the tree on the left, navigate to LibreOffice -> Advanced. Click the Class Path… button and load the driver that was installed in the previous step: select Add Archive… and choose /usr/share/java/mysql.jar. Once this has been loaded, restart LibreOffice.

Connect to your Database

Now comes the fun part. With all of the previous steps taken care of, the rest is easy. To connect to your database, open up LibreOffice Base.

  1. In the Database Wizard select Connect an existing database and choose the MySQL option from the dropdown menu.
  2. Select Connect using JDBC and hit next
  3. Enter the database name, server IP, port number and select next
  4. Enter the username for an existing user in your database and select next
  5. If you wish to use this database in other LibreOffice applications you should select the Yes, register the database for me radio button.
  6. Select Finish

Congratulations! Now you can rock some custom queries, fancy forms, and TPS reports using Base. We will go through how to do all of that and more in future posts.

Posted in databases | Leave a comment