Update #6: I have gotten absolutely nowhere with the Debian folks. It looks like we're going to have to take this directly to the kernel developers and see what they have to say. I posted to the linux-kernel mailing list hoping somebody will be able to help. We'll see how long it takes before they call me an idiot :) https://lkml.org/lkml/2014/5/14/243

Update #5: I have decided to take another look at this issue. It lies with the 3.2+ kernels, as I can replicate it by installing a 3.2 kernel into Squeeze. This will hopefully make it easier to troubleshoot, since I can replicate and fix it by isolating what is happening specifically in the kernel. I filed a new bug report at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=742643 - Check it out and see if you're seeing the same issues as I am.

Update #4: No traction at the Debian bug report. We ended up rolling back to Squeeze. I am a big proponent of Debian, but I'm definitely a little bummed out right now. We may end up trying to build an Ubuntu 12.04 environment just to see if we run into the same issue.

Update #3: I got nowhere with the debian-users mailing list, so I submitted a bug report to Debian at http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=715269 - We'll see if anything comes of it.

Update #2: This morning I built the latest 2.6.32 kernel for Wheezy and it got rid of all the load issues, so if for some reason you have to be on Wheezy, that could be a way to go for you.

Update #1: I posted this to the debian-users mailing list, and we have a couple other people at this point who are having the same issue.

At YouVersion, we have been using Debian Squeeze to power our application servers. Personally, I'm a huge fan of Debian and have been for over 10 years now. Once Wheezy was officially released, we started getting ready for it. We built new AMIs for our developers, set up testing environments using salt-cloud and built new versions of the components that help power our API, such as nginx, PHP, Python, Gearman and Twemproxy. Everything was going well until we put Wheezy in production.

Our plan was to upgrade only two boxes to Wheezy and then compare metrics to see how we were doing. The load on our application servers is normally between 0.5 and 1 under Squeeze. Under Wheezy, we were somewhere around 3, which troubled us greatly. Worse yet, the Wheezy boxes didn't hold up under our Sunday traffic levels; php-fpm just wasn't responding quickly enough, and monit had to restart php-fpm a few times before we took them out of service.

During our troubleshooting, the first thing we noticed was that most of our stack (nginx, PHP, uWSGI/Python) was using more virtual memory on Wheezy than on Squeeze. While this isn't necessarily a big deal, it could be under the right circumstances. We decided that instead of doing an in-place upgrade to Wheezy, we'd do a fresh install. Thankfully, SoftLayer makes this super easy to do through their portal, and we had a new app server loaded with a fresh OS in less than an hour. This got rid of the virtual memory issue, but our load still remained high. The worst part was that we couldn't easily attribute the high load to anything in particular. CPU usage was the same, memory usage was smaller on Wheezy and the I/O system all checked out as fine. The only difference we found was that our interrupts were much higher on Wheezy than on Squeeze. Specifically, "Rescheduling interrupts" and "timer" were through the roof on Wheezy compared to Squeeze.
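
If you want to see this on your own hardware, /proc/interrupts is the place to look; the "RES" row is the rescheduling IPIs and "LOC" is the local timer. A quick sketch, nothing fancy, that highlights the counters as they climb:

    # watch the rescheduling (RES) and local timer (LOC) counters tick up,
    # with -d highlighting whatever changed in the last second
    watch -n1 -d 'egrep "RES|LOC" /proc/interrupts'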

If you're interested, you can check out what I found here.

We built a new server with a different board/CPU combination in hopes that the issue was somehow hardware related, but we saw the same numbers there.

Ultimately, we decided to load one server back to Squeeze and keep one with the fresh Wheezy install to see how it would hold up against our Sunday load. We use a custom C program that exports our HAProxy logs into JSON and ships them to Google's BigQuery service to allow us to easily and quickly query against them. After querying the average response time of all of our servers, despite the high load, our Wheezy box actually performed better than the rest of our app servers by about 3 ms. With the fresh reload, it was also able to stay up with no issues.
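
For the curious, the comparison itself is just a simple aggregate on the BigQuery side. The dataset, table and column names below are made up for illustration (ours come from our log exporter), but conceptually the query looked like this:

    # hypothetical dataset/table/column names, run through the bq CLI
    bq query 'SELECT server, AVG(response_time_ms) AS avg_ms FROM logs.haproxy GROUP BY server'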

So, now we have a conundrum. From a performance perspective, we seem to be in good shape with Wheezy, but the high interrupt counts and load are causing us tremendous unease about rolling it out to production. Right now, we're not exactly sure what to do. The issue is bothering us so much that we're thinking about spending the time to build out a test stack on Ubuntu Precise to see if we see the same thing there, since its kernel is much closer to Wheezy's.

Since I migrated my WordPress blog to an installation of Ghost running on a droplet at DigitalOcean, I've wanted to make sure I was backing up my data in case something should ever happen.

I decided the way I wanted to do this was to use duplicity to back up to Google Cloud Storage rather than the popular Amazon S3. Why, you ask? Mostly, it's cheaper, and I always look to minimize costs when it makes sense to. I use Durable Reduced Availability buckets at Google (the same idea as S3's Reduced Redundancy Storage) to save even more money, since I care more about my data being intact than about having the highest level of availability.

Before we begin, you'll need to create a Google Cloud Storage Project:

  1. Create a Google Cloud Storage Project. Call it whatever you like, but make sure to make note of the project ID you chose.

  2. Go to the Google Storage Console, click on your project name, then on the left side click 'Cloud Storage'. If this is the first project you've created with Google, it will show you a link where you can give Google your credit card so they can charge you for the storage space you are using.

  3. Enable Interoperable Access - Click on 'Allow' (or it may be called something different but similar) at the bottom of the page under "Interoperable Access"

  4. Grab/Generate Interoperable Access Storage Keys - Leave this web page open for now; you'll need the ACCESS KEY and SECRET for a later step (hit the Show button to reveal your secret)

Now that the project is made, it's time to log in to our droplet over SSH and get gsutil and duplicity installed and configured. I am executing these commands as root.

  1. apt-get update - This updates our Ubuntu package definitions.
  2. apt-get install gcc python-dev python-setuptools python-software-properties python-boto - This will install the prerequisites for Google Cloud Storage utilities and some other necessary packages.
  3. add-apt-repository ppa:duplicity-team/ppa - This adds the duplicity software repository so you can install the latest version of duplicity.
  4. apt-get update - Update the Ubuntu package definitions again so that it's aware of the duplicity repository we just added.
  5. apt-get install duplicity - Installs the duplicity package
  6. cd - This makes sure we are in /root (if you're running these commands as root like I am)
  7. wget http://storage.googleapis.com/pub/gsutil.tar.gz - This downloads gsutil to your current directory
  8. tar xvfz gsutil.tar.gz -C /usr/local/bin - This will uncompress and expand the gsutil file archive to /usr/local/bin/gsutil
  9. echo 'export PATH=/usr/local/bin/gsutil:$PATH' >> ~/.bashrc - This adds gsutil to your PATH, so that you can simply type 'gsutil' and have it work. (The single quotes keep $PATH from being expanded the moment you run the echo.)
  10. source ~/.bashrc - Refreshes the PATH change you just made. This makes it so you don't have to log out and log back in again to make your change active.
  11. gsutil - This command should now run successfully and print out instructions on how to use gsutil. What's important is that you see the instructions; you don't need to pay attention to them right now. If you get an error, you've done something wrong.
  12. gsutil config -a - Remember the page I asked you to keep open? You'll need to enter the access key, secret, and project ID you created when prompted. When this finishes, it should create a file called ~/.boto with your credentials (there's a sketch of what it looks like after this list).
  13. gsutil mb -c DRA gs://backups/ - This will create a Durable Reduced Availability bucket called backups. Note that bucket names are globally unique across Google Cloud Storage, so you'll likely need to pick your own name and use it consistently in the commands below.
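
As promised in step 12, here's roughly what the generated ~/.boto ends up containing. The values below are obviously placeholders; yours will hold the real access key, secret and project ID you entered:

    [Credentials]
    gs_access_key_id = YOUR_ACCESS_KEY
    gs_secret_access_key = YOUR_SECRET
    [GSUtil]
    default_project_id = YOUR_PROJECT_ID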

Now our system is all set up. You can back up whatever you want with duplicity.

For instance, to back up /var/www, you could run:
duplicity --full-if-older-than 1M /var/www gs://backups/www - This command will back up /var/www to a "folder" called 'www' in the backups bucket. It will prompt you for a GnuPG passphrase; enter a password you won't forget. You'll need it to encrypt and decrypt your backups, so don't ever lose it. Without it, you'll be unable to restore your data. When this finishes, you should see statistics printed to the screen. Please note there is currently a bug in duplicity where you cannot back up to the root of a bucket.
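
Once you've run a backup or two, it's worth checking that duplicity actually sees them in the bucket. This will print the chain of full and incremental backup sets it finds:

    # show the full/incremental backup sets stored under gs://backups/www
    duplicity collection-status gs://backups/www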

To restore, you run something like:
duplicity gs://backups/www /var/www and that will restore your latest backup to /var/www. (If the destination directory already exists, duplicity will refuse to overwrite it unless you add --force.)
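
You aren't limited to full restores of the latest backup, either. duplicity can pull out a single file or rewind to a point in time; the paths here are just examples:

    # restore a single file from the latest backup
    duplicity --file-to-restore images/logo.png gs://backups/www /tmp/logo.png

    # restore the whole site as it looked 3 days ago (to a fresh directory)
    duplicity -t 3D gs://backups/www /tmp/www-3-days-ago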

I recommend reading the awesome How To Use Duplicity tutorial for more information on how to securely set up duplicity to run automatically. If you're fairly new to this stuff, just note that the tutorial uses sftp in its examples, so wherever it has sftp://, use the gs:// syntax like I did above.
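
If you just want the short version of the automation piece: duplicity reads its GnuPG passphrase from the PASSPHRASE environment variable, so a nightly backup can be as simple as a cron entry along these lines (the filename is just an example, and make sure whatever file holds the passphrase is readable only by root):

    # /etc/cron.d/ghost-backup - nightly duplicity run at 2am
    # duplicity picks up the GnuPG passphrase from PASSPHRASE
    PASSPHRASE=your-gnupg-passphrase
    0 2 * * * root duplicity --full-if-older-than 1M /var/www gs://backups/www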

Stumbled across https://github.com/duggan/pontoon today. Looks neat for those who are heavy into DigitalOcean but not using things like SaltStack to automate their environments.

Ghost Markdown UI

When I split my blogs off, I decided to take a hard look at blogging and what I wanted to accomplish. I decided that I wanted to use something other than WordPress. Why? Because I found myself going through countless optimizations and was about to try to speed it up further by purchasing a LiteSpeed license and maybe a Varnish+nginx plugin for cPanel for more caching.

I was sitting there getting very frustrated at the amount of time and/or money I was going to have to spend to overcome the bloat that WordPress has become.

That doesn't make WordPress a bad product. It's actually a really nice CMS engine that I recommend to many clients when they want to create a website for themselves. At the same time, it has strayed very far from its roots as a blogging platform. It's still good at what it does, but as a DevOps person, I am naturally inclined to go for leaner and cleaner.

That's when I started looking at the static site generators that only real geeks use. I wanted a tool that would let me write posts in Markdown and easily publish them. I demoed projects like Pelican on the Python side and Jekyll on the Ruby side. They were all nice, and at the end of the day, a static site is alluring to me because I crave speed on the web. At the same time, I didn't want to lose a web-based dashboard.

I wanted something in between command-line site generators and the bloat of WordPress. That's when I discovered Ghost.

You may have seen Ghost in its Kickstarter Campaign. I fell in love with the awesome Markdown interface and I totally get what they are trying to build and why they are trying to build it. Ultimately, I decided to bite the bullet and try it out, and I'm very happy I did.

Here's what I liked:

  • The Markdown UI is awesome!
  • It's a new product written in Node.js and is not bloated at all.
  • It's very fast.
  • Because it's so fast and lean, it's less expensive to run. I am able to run multiple blogs off of a single 512 MB instance at DigitalOcean.
  • There was an awesome pre-built image with good directions to help people get started with Ghost quickly and easily.
  • Despite being such a young product, there is a Marketplace with some really nice themes.
  • Because the platform was made to be simple, the themes and templating engine are simple as well. I enjoyed being able to just navigate to the template files and make my changes, rather than going through a web interface and trying to figure out whether I could make the change I wanted through the UI or whether I had to manually edit some PHP functions that break every time the theme upgrades.
  • The Ghost Plugin for WordPress made it very easy to import my blog from WordPress.
  • They are very intentional about their release cycle and publicize a very detailed roadmap.

At the same time, the product isn't for everybody:

  • Node.js means that shared hosts, the most predominant way people host websites, aren't able to host it right now. That requires new users either to be more skilled in systems administration than they should have to be, or to pay for another hosting service, like the official hosted platform Ghost will be launching soon, or ghostify.io
  • It's still very much a work-in-progress and things that should be in a blogging platform, like tag clouds, aren't there yet. Take a look at the roadmap to see when features you want will be added.

So far, I'm happy. Now it's time to publish this to Hacker News and see how well it holds up :)

This is an awesome article for web developers out there who are looking to get the highest PageSpeed score possible. It's written from a Bootstrap perspective, but there's gold in here for any real web developer.

The Right Stuff: Breaking the PageSpeed Barrier with Bootstrap

Stumbled upon the Skype Collaboration Project Promotion earlier today. Supposedly, if you sign up for it, they will send you a voucher for a year's worth of free Premium service.

I'm sure it's getting very abused right now, so I'll update this once more information is available.

As it turns out, I am very unique.

Not surprisingly, there are not a lot of people who tweet about a huge mix of Systems Engineering/Operations/DevOps type stuff as well as a lot about Jesus, Church and Faith.

This has created somewhat of a problem for years. The people who find my Tech stuff useful sometimes aren't interested in my unabashed love for Jesus. At the same time, most of the people who really enjoy my Christian content stare blankly and have no idea what to do when I start talking about optimizing nginx configurations so that the time to the first byte over TLS is as fast as possible.

I don't compartmentalize my life into my "God" life and my "Tech" life, so it seemed unnatural to separate them on social media... but I am having a change of heart. Ultimately, I want my content to be relevant and useful to the person reading it. I also want to make similar content easier to find. Because of all this, I am changing the way I approach Social Media.

Blogs

I have separated my blog into two different sites:

http://www.willdurness.com - This blog focuses on Tech, Systems Engineering/Administration and DevOps stuff.

http://www.deaconwill.com - This blog focuses on Jesus, Church and Faith. I will also talk about Church and Technology stuff here.

Twitter

@DeaconWillP - This is my main Twitter account, now renamed. I will talk about Jesus all the time here, as well as my passion point where Faith and Technology meet.

@WillPlatnick - This is my Technology account, where I write about cool DevOps stuff, like making your software stack faster and awesome tools like Salt.

Facebook

I mostly use Facebook as a place to share random content and catch up on the lives of close friends. Though I am friends with plenty of "technology people", I've found they don't use Facebook as much, preferring other social networks. My posts about God are generally very well received with my group on Facebook, so that will be my emphasis.

Google+

People use Google+? In the tech world, it's actually used quite a bit, and it's the preferred social network for many IT-focused people. There is a huge amount of content here, and the Linux groups have helped me find content I wouldn't have known about otherwise. Because virtually everybody in my Google+ circles is a technologist and uses Google+ for technology-focused stuff, it makes the most sense for me to use it in a similar fashion.

Anybody else in the same boat?

How have you been wrestling with having vastly different interests and wanting your content to be helpful to as many people as possible?

I was browsing around my Google+ page today when I noticed a post saying the Debian 7 Administrator's Handbook had dropped a few weeks ago, and I hadn't even noticed!

This is an awesome reference for any Ops/DevOps person.

http://debian-handbook.info/browse/wheezy/

Optimizing NGINX TLS Time To First Byte (TTTFB) via igvita.com

22 Recommendations for Building Effective High Traffic Web Software