While researching static web hosting with Jekyll, I found that one of the most commonly suggested ways to get a site up and running for free is GitHub Pages, provided you’re happy for your site’s source code to be open source as well. However, I came across some write-ups which suggested using BitBucket Pipelines to build and subsequently publish a Jekyll site. Intrigued, I spent some time figuring out the details, and it is now implemented as of this post!

What I’ve ended up with is the source code for the site in a BitBucket git repository, a base Docker image with Ruby, Node.js and Java, a BitBucket pipeline which builds the Jekyll site and publishes it to S3 buckets for staging and production environments, and CloudFlare for DNS/CDN/SSL/etc. Just a note that this is not the most optimised solution, and is potentially (definitely..) overkill for a personal website, but it was a topic of interest and experimentation for me.

So first up, the reasoning behind the choices I made:

  • BitBucket
    • I’m sure GitHub would be the first thing that pops up when I mention a Git repository - but Atlassian’s BitBucket was picked as they offered up to 5 private repositories and 50 minutes of ‘build time’ on BitBucket Pipelines as of the time of writing.
  • Custom Docker image based on Alpine Linux 3.6 with Ruby, Node.js and Java
    • BitBucket Pipelines can use Docker Hub images. Jekyll requires Ruby (and potentially Node.js depending on the theme), the s3_website (2.x) gem requires Java due to its Scala code, and I was not able to find an image which reliably worked for this setup. Building my own image also let me pin the Alpine package versions via the Dockerfile and lock everything required for the Jekyll site and publishing via Bundler, for reliable, consistent builds (a rough sketch of such a Dockerfile is shown after this list).
  • S3
    • An object store that works very reliably, supports static website hosting and is relatively cheap to use.
  • CloudFlare
    • I’ve been using them since they were in beta, and they handle DNS, performance and security well for free.
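
To illustrate the kind of image involved, here is a minimal Dockerfile sketch. It is not the actual bchew/jekyll-build Dockerfile; the package names are assumptions for Alpine 3.6 and the exact set you need will depend on your theme and gems:

    # Rough sketch of a Jekyll build image: Ruby for Jekyll, Node.js for some
    # themes/asset pipelines, Java for the s3_website gem.
    # Package names are assumptions for Alpine 3.6 and may need adjusting.
    FROM alpine:3.6

    RUN apk add --no-cache \
        build-base \
        git \
        ruby \
        ruby-dev \
        ruby-bundler \
        nodejs \
        openjdk8-jre-base

    # Skip gem documentation to keep installs fast and the image small
    RUN echo 'gem: --no-document' > /etc/gemrc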

And here is a summary of the steps I went through to get to the point where I can write a new post in Markdown, commit and push it to a feature branch, review the changes in a staging environment, merge the pull request and have the changes published to my site:

  1. Create S3 buckets for ‘staging’ and ‘production’ (e.g. staging.yoursite.com for staging and yoursite.com for production) and configure CloudFlare accordingly; this site has a good set of setup instructions to follow for this. Check this SO answer and this site if you require 403/404 redirect rules on your site. (Example AWS CLI commands for the bucket setup are sketched after the configuration snippets in step 2.)
  2. Create a BitBucket repo for your Jekyll site and use the following code snippets for the bitbucket-pipelines.yml and s3_website.yml files:
    • bitbucket-pipelines.yml
       image: bchew/jekyll-build
      
       pipelines:
         default:
           - step:
               script:
                 - bundle install
                 - bundle exec jekyll build
                 - bundle exec s3_website push
         branches:
           master:
             - step:
                 script:
                   - bundle install
                   - bundle exec jekyll build
                   - ENV=production bundle exec s3_website push
      
    • s3_website.yml
        s3_id: <%= ENV['S3_ID'] %>
        s3_secret: <%= ENV['S3_SECRET'] %>
      
        <%
          if ENV['ENV'] == 'production'
            @s3_bucket = 'yoursite.com'
          else
            @s3_bucket = 'staging.yoursite.com'
          end
        %>
        s3_bucket: <%= @s3_bucket %>
      

    These configuration files set up a BitBucket pipeline which uses the s3_website gem to push the Jekyll site to the staging environment on pushes to any non-master branch, and to production on pushes to master. The S3_ID and S3_SECRET values are read from environment variables, which can be defined in the repository’s Pipelines settings rather than committed to the repo.
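
    To sanity-check the setup before wiring up the pipeline, the same commands can be run locally from the repository root; s3_website’s dry-run mode should report what would be uploaded without actually pushing anything. The credential values below are placeholders:

        # Locally, export the credentials (in BitBucket these are set as
        # Pipelines environment variables instead of being committed)
        export S3_ID=<access-key-id> S3_SECRET=<secret-access-key>

        bundle install
        bundle exec jekyll build
        bundle exec s3_website push --dry-run                  # staging bucket
        ENV=production bundle exec s3_website push --dry-run   # production bucket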
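
    And for the bucket creation in step 1, the AWS CLI can handle the basics; a rough sketch, where the bucket names, region and error document are placeholders (the linked setup instructions cover the rest, such as the public-read bucket policy):

        # Bucket names must match the domains they will serve
        aws s3 mb s3://yoursite.com --region ap-southeast-2
        aws s3 mb s3://staging.yoursite.com --region ap-southeast-2

        # Enable static website hosting on each bucket
        aws s3 website s3://yoursite.com --index-document index.html --error-document 404.html
        aws s3 website s3://staging.yoursite.com --index-document index.html --error-document 404.html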

Hope this post comes in useful for anyone inclined to go down this path, or even as a base for an improved process. :)

I’ve been contemplating converting my WordPress-powered blog and photo site for quite a while now, but as you can see from the amount of time since the last post, this blog has not been getting much attention (besides having to update the Ubuntu OS running on the EC2 instance and the endless WordPress updates..).

Unfortunately (or fortunately?), that smooth sailing came to an end recently when the usual apt-get update/upgrade failed to complete. After further investigation, it turned out to be due to one of the WordPress caching plugins I had installed, which had managed to use up all the inodes on the instance.

Following that issue, I decided to research the migration (evaluating Hugo as well as Jekyll) and spent time testing before making the change over the weekend. The brunt of the work was handled by the Jekyll Exporter WordPress plugin, which converted all posts and pages for use in Jekyll (this site was a good reference). Then came choosing the Jekyll theme and validating that all the posts and pages were in order. To simplify things, I decided to merge the photo site posts into the blog as well.

Overall the migration did involve quite a fair bit of work (I ended up implementing a continuous deployment workflow with Docker and BitBucket Pipelines, which probably made it a lot bigger than originally planned.. I’ll write about this in the next post), but I believe it will be beneficial in the longer term with just S3 hosting the static files (cost-wise too). :)

I’ve had the Cisco Linksys E3000 for quite a while now and it has been a reliable router as far as the internet connection goes; weak in the wireless department, but decent enough to live with.

So first off, the power adaptor blew; I quickly resolved that by switching it out with an older power adaptor which came with the Linksys WAG354G (yes, I actually did like Linksys routers.. till now – I’m eyeing the Asus RT-AC3200). Some time after, I started noticing strange behaviour with the Facebook app on my Nexus 5 – the news feed would load, but certain images would just end up with an endless loading spinner, and soon after the app would no longer refresh. Switching from WiFi to 4G/LTE got it working fine, which was baffling considering everything else worked fine on WiFi. Coincidentally, Netflix launched in Australia around then and I jumped on the free trial to test it out – however, I got a black screen on the Chromecast and a cryptic error message before being booted back to the main Chromecast screen.

I started googling around and the initial results pointed to potential IPv6 issues. However, my router admin pages had nothing on disabling IPv6, which was baffling as testipv6 said I had an IPv6 address! More digging then led me to more sites about 6to4 being enabled by default on the router: reddit, chromecast help forum, forum, blog

Go to http://your_router’s_ip/System.asp, set Vista Premium to disabled and the router will stop broadcasting 6to4.

That’s pretty much the fix to get everything working as it should. I suppose this doesn’t help with IPv6 adoption, but I would probably stick with just IPv4 and move to IPv6 when I’m able to get a native IPv6 IP from my ISP.

Seems bizarre to have a hidden admin system page for this..

Wrote dynamodump, a simple backup and restore script for Amazon DynamoDB which uses boto and works similarly to mysqldump. It is suitable for DynamoDB usage with smaller data volumes which do not warrant the use of AWS Data Pipeline for backups/restores.

It includes features to help with managing backups/restores between various environments (e.g. production, staging, dev). Comments/suggestions to further improve it are welcome.
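
For example, a typical backup and restore looks roughly like this; the region and table names are placeholders, and the flag names may change in later versions, so check the README for the full options:

    # Back up a single table (schema + data) to the local dump directory
    python dynamodump.py -m backup -r ap-southeast-2 -s mytable

    # Restore it into another environment under a different table name
    python dynamodump.py -m restore -r ap-southeast-2 -s mytable -d staging-mytable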

Perhaps a start to more open-source stuff this year! :)

Loch Ard Gorge