Double Negative

Software, code and things.

Upgrades and instances with the Ghost blogging platform

The Double Negative blog (you are reading it..) runs on the Ghost blogging platform.

The decision to use Ghost was a simple one.

  • It is made by great developers and has been built well.

  • It is written in JavaScript (Node.js) and allowed me to learn and play with some technologies that I don't regularly use.

  • It is not Wordpress.

  • It is so simple and easy to use - you just write.

I did however recently encounter a few small issues. Nothing significant. Nothing too difficult. Just some small issues that you may encounter when trying to upgrade your self-hosted version of Ghost and/or if you want to run multiple instances of Ghost on the same host.


Ghost provide a How to upgrade guide. It is pretty simple to follow, and explains exactly how to upgrade your Ghost install.

If I recall correctly (I have not touched Wordpress in five years or so), Wordpress has one click upgrades. Unfortunately Ghost does not offer this functionality.. and it's probably for the best.

Firstly.. actually back up your data. I am often 'told' by colleagues/upgrade processes/cats to back up my data, and I rarely listen. In this case however you should absolutely do it. Given that Ghost is powered by SQLite, your data (posts) is held in a single file which is oh so easy to accidentally delete.

Given that Ghost provides one click backup functionality, you'd be a fool not to.

Now.. you are probably using Forever or Supervisord to manage your Ghost process. As a result, one click upgrades with Ghost would more than likely cause more headaches than they would cure. Imagine if you upgraded your Ghost install, and the process incorrectly restarted the mechanisms that keep your blog up and running.. cries

I personally use Forever, and one thing I didn't appreciate (it is obvious in hindsight) is that processes are per user. That is to say, my forever process was originally running as user user1 and all of my ghost files were owned by user1. When I restarted the process as root, the blog would load, but I could not log in to the control panel.

I was a little bemused, but managed to debug the issue utilizing the Chrome developer tools. A quick look at the network requests being made indicated that the Ajax request processing the login was returning a failed response pertaining to my SQLite database file being read only.

This is one area where Ghost lacks a little mainstream polish - if you were not a developer you'd never be able to debug an issue like this simply because there was no user-facing error message of any sort.. I just could not get past the login screen.

Fortunately, for that situation Ghost offers their hosted solutions which might be of interest if you are not from a technical background.

Multiple instances

Having upgraded this blog, I also wanted to setup another blog for a different website.

If you are trying to do this yourself, Ghost provides a great installation guide. In fact, all their documentation is great.. and there are a lot of tutorials etc from the community too.

The only divergence for multiple blogs is that you need to proxy your HTTP requests to each site (domain) to the respective node instance.

I utilize nginx, and to do this you simply direct the request to a particular port like so: proxy_pass http://localhost:1234.

For each blog you need to utilize a different port.

Then all you need to do is specify the matching port in your Ghost config.js for the respective blog.
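The setup above can be sketched as follows. The domains and ports here are hypothetical - substitute your own, and make sure each port matches the server.port in the respective blog's config.js:

```nginx
# Hypothetical: two Ghost blogs on one host, each Node
# process listening on its own port.
server {
    listen 80;
    server_name blog-one.example.com;

    location / {
        # Forward everything to the first Ghost instance
        proxy_set_header Host $host;
        proxy_pass http://localhost:2368;
    }
}

server {
    listen 80;
    server_name blog-two.example.com;

    location / {
        # Forward everything to the second Ghost instance
        proxy_set_header Host $host;
        proxy_pass http://localhost:2369;
    }
}
```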


As alluded to above, I utilize different users/owners for each of my Ghost blogs. With the appropriate permissions you get the additional security that someone won't accidentally delete your data or stop your node server. You just need to remember that you have done that when it comes to upgrading ;)


One of the awesome things about Ghost is its theming.

A lot of our products have extremely complex automated build, testing, and release processes. Of course a blog based on well maintained open source software is unlikely to need any such processes.. I only mention this to emphasize the simplicity of Ghost.

Once you have set up your Ghost blog, the only thing you need to change is its visual appearance - the theme files. Everything else can be done from the control panel.

As such on my development machine I have a single instance of Ghost running and I build, play, and test my theming for all (two) of my blogs using it. I then push up the themes to the respective blogs and I am done.

This simplicity makes me smile.

One slight annoyance is that you do need to restart your Ghost instance for theme changes to go live, and as such you do need a 'release process' of sorts (albeit a very simple one :P )


Use Ghost.

If you like complexity and security issues go ahead and use Wordpress.

But really.. use Ghost. It is so simple and easy to use, has a great community, and.. well.. this post outlines the only 'problems' I have encountered with it, and they really are not very significant :)

FastCGI in 5 minutes

FastCGI "is a binary protocol for interfacing interactive programs with a web server" (from Wikipedia).

In the same vein as my nginx in 15 minutes post, I thought I'd outline FastCGI, and its implementation on nginx with PHP-FPM.

A FastCGI server is independent of your web server. You delegate your request to it, it processes it, and returns a response.

FastCGI is a protocol. An implementation of said protocol can be written in any language. PHP-FPM is a process manager that implements the FastCGI protocol with a number of optimizations. It is now part of the PHP core and is widely used on the web.

Whereas previous incarnations of CGI spawned a new process per request, the FastCGI protocol processes multiple requests within the same process (multiplexing). This allows for concurrency and the handling of higher loads.

The FastCGI protocol seeks to resolve many of the same issues with CGI that nginx resolves relative to earlier versions of Apache.


nginx integrates with the FastCGI protocol through its fastcgi module. That is to say it knows how to interface with a FastCGI server that implements the FastCGI protocol. This makes connecting to a FastCGI server extremely simple.

Communication occurs via interprocess communication (IPC). For a simple setup you can connect to a unix socket. For a more complex setup you might want to communicate with multiple servers using TCP sockets.
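As a sketch, the two connection styles look like this. The socket path and backend addresses are assumptions for illustration:

```nginx
# More complex setup: multiple FastCGI servers reached over TCP
upstream php_backends {
    server 10.0.0.1:9000;
    server 10.0.0.2:9000;
}

server {
    location ~ \.php$ {
        include fastcgi_params;

        # Simple setup: a single local server over a unix socket
        fastcgi_pass unix:/var/run/php5-fpm.sock;

        # ...or, for the TCP setup, pass to the upstream group instead:
        # fastcgi_pass php_backends;
    }
}
```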

The most important fastcgi_param is SCRIPT_FILENAME, which indicates where on the filesystem a file should be loaded from for a particular request.

I choose to define my server root as follows:

set $root_path '/path/to/files';  
root $root_path;  

I then utilize the $root_path variable within my location blocks for static image files as well as my location block for passing php files to my FastCGI server.

Within the latter block I use:

fastcgi_param SCRIPT_FILENAME $root_path$fastcgi_script_name;

This maps a request for myfile.php to /path/to/files/myfile.php

You can utilize the fastcgi_split_path_info directive to allow for customized URL structures.

As long as you specify a regular expression with two capturing groups you could direct a request to /path/to/files/myfile.php with relative ease.

You are only limited by your knowledge of regular expressions :)
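A minimal sketch, assuming the $root_path variable from above and a hypothetical URL scheme like /myfile.php/extra/path:

```nginx
location ~ ^.+\.php(/|$) {
    # Two capturing groups: the script itself, and the trailing path info
    fastcgi_split_path_info ^(.+\.php)(/.*)$;

    # For /myfile.php/extra/path this yields:
    #   $fastcgi_script_name = /myfile.php
    #   $fastcgi_path_info   = /extra/path
    fastcgi_param SCRIPT_FILENAME $root_path$fastcgi_script_name;
    fastcgi_param PATH_INFO       $fastcgi_path_info;

    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
```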

Another interesting tidbit regarding the nginx integration is fastcgi_intercept_errors on.

This directive allows for a response from the FastCGI server to be directed to the appropriate location block based on your defined error_page directive.

Through this directive it is easy to display a custom 404 page for example should FastCGI return an error response.
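For example (the paths here are assumptions):

```nginx
location ~ \.php$ {
    # Hand FastCGI error responses back to nginx rather than
    # passing them straight through to the client
    fastcgi_intercept_errors on;
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}

# A 404 from the FastCGI server is now routed here
error_page 404 /404.html;

location = /404.html {
    root /path/to/error/pages;
}
```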


PHP-FPM is a "process manager to manage the FastCGI SAPI (Server API) in PHP" (source).

In essence, this means that PHP-FPM manages the creation of PHP processes (as required) to process the requests sent to it by the web server.

It implements the FastCGI protocol such that for example, data sent to it as a fastcgi_param (when using nginx) can be processed and utilized appropriately.

When running PHP-FPM on a machine, it runs the server to which you direct your PHP requests.

For usage with nginx you would connect to the server through a unix socket using the fastcgi_pass directive (as outlined above).

fastcgi_pass unix:/var/run/php5-fpm.sock;  


nginx, PHP, and FastCGI are a powerful combination in web application development because they integrate so seamlessly with one another.

Configuration is simple, and they allow for deployment of dynamic websites that can scale with ease.

The curiosities of Facebook's developer offering

In the process of building multiplatform applications which integrate with social platforms I have been required to investigate Facebook's developer tools and processes.

Unfortunately the process has been somewhat painful. I thought I would document the curiosities I encountered such that anyone else encountering them can resolve them with more ease.

The review process

Whilst Facebook do provide documentation for their (relatively) new application review process, it is (at the time of writing) somewhat lacking and unclear. That said, I strongly advise anyone submitting an application for review to read all of the documentation thoroughly before attempting a submission.

For some reason Facebook have released an extremely well reasoned review process yet implemented it extremely poorly. There is/was a nice video outlining what exactly the review process is, and what it seeks to do. I cannot however find it (now), and am suspicious that it may have been removed because the gentleman's smile (in said video) did not match a realistic developer experience.

The main problem is that should you have any issues with your submission, you will more than likely receive a cryptic, generated response. The response that I received was along the lines of 'Your open graph action does not post on all platforms' which whilst strictly true could have said 'You are sharing a link rather than the open graph action under review'.

Whilst I did ask various Facebook staff members for comment, none was received. I can only assume that the tools provided to reviewers only allow for preselected responses. As such if you do have any issues it may well be a guessing game attempting to get it resolved.

Fortunately you can contact support.. right?

Nope. Facebook do not provide a support service to developers. At least not a generally available one. They do provide support to developers working under business umbrellas, but even then there are undisclosed requirements for being allowed help using their system. After playing the 'review process guessing game' for a number of rounds, I discovered this option and went through the motions of associating my business on Facebook solely for this purpose.

Depressingly, once I had access to this support option, I was helped by an extremely helpful individual and I was able to resolve the issue that had caused numerous review failures in three minutes. That is, as soon as Facebook told me what the issue was, I fixed it.

I was also able to use this platform to resolve an issue whereby my application dashboard was 'out of sync'. That is to say, I could not submit a review because the platform thought my application was already under review when it was in fact not.

Again.. I questioned various Facebook staff as to why they don't just implement a proper review process and received no response. I have told them that it would save both them, and their developers a lot of time and stress.. but nothing.

Developer community

If you cannot get any support, your only bet is the Facebook developer community. This is a closed group for developers to ask questions and discuss development related issues. Whilst there are Facebook staff members in the group, again getting appropriate support is very difficult.

The developer community does however provide a little insight. With the greatest of due respect, a lot of the questions posted within the group are from inexperienced developers asking broad questions like "How do I do Facebook with PHP". I imagine that receiving many thousands of such requests through an open support channel would be hellish to manage.

That said if you are scratching at walls, unable to get anywhere.. you may find a helpful developer here who can point you in the right direction. I also feel like I cannot post something like this, and not offer my own support. If you have an issue that you cannot get to the bottom of, I am happy to try and help.


Obviously building a solid API on such a massive scale is an extremely difficult task. The Facebook team have done a marvellous job with their various SDKs, developer tools, and debug tools. That said, I cannot write about all the positives, so I'll stick to writing about the few negatives.

I encountered two issues which bemused me. Both pertained to errors, and error handling.

In my web application I was utilizing a version of the PHP SDK that was maybe a month old. On submitting an open graph request for a custom story, I received an error response suggesting that I had not authorized the 'User Messages' capability. I had. After some futile debugging I decided to upgrade to the very latest SDK and the problem was gone. I am happy to excuse a small bug in a massive and complex product but I wouldn't call returning/processing a completely incorrect error response a 'small bug'. This occurred at a similar time to the synchronization issues with the review dashboard (outlined above), and as such the upgrade could have been a false positive. Either way it was certainly a serious issue.

The second issue was one pertaining to responses. Whilst attempting to share a link through the API some unchanged code suddenly started failing. I received the following error response:

[message] => An error occurred while processing this request. Please try again later.
[type]    => OAuthException
[code]    => 368

Not only do you have to guess your way through the review process, but you have to guess your way through error handling too ;)

Fortunately, Googling the error code suggested that it may well pertain to the link being flagged in some capacity. I was able to resolve the issue but it presented another curiosity to me.

The issue was that my link (a url shortener link) had previously pointed somewhere else (stupid I know). Facebook had cached the previous content (something slightly suspicious) and was flagging it. I'll give Facebook a break here on the basis that caching the Internet is pretty hard. I just thought I'd mention it in case someone else encounters a similar issue.


Whilst this post is primarily negative, it merely seeks to outline some curiosities with Facebook's developer offerings, and perhaps outline some resolutions for people encountering problems.

As mentioned, what Facebook are doing.. and at the scale they are doing it.. is mightily impressive. That said, I just cannot fathom why Facebook have not 'polished' such important product offerings.

You do not see many (if any) issues with the main public production Facebook website. Why has the same attention to detail not been applied to the developer offerings? In many respects the open graph, and developer integrations allow an open medium for Facebook to expand its own offering by proxy of third party offerings. Surely that is incredibly important?

Further to that, whilst Facebook operates on a massive scale, they also hire on a massive scale. They (as I understand) have a massive team of incredibly talented developers.. I cannot understand how you can build infrastructure and tooling to handle billions of status messages yet cannot provide reviewers a text box to tell people why their reviews have failed..

The only other possibility is that the reviewers have a text box, but are on some sort of devilish commission structure and have to get through 1.6 million reviews every hour ;) Either way.. not cool.

I suspect (and hope) that Facebook will at some point get around to polishing their offering. If not, I cannot help but feel that they should at least put a BETA sticker on it.

nginx in 15 minutes

I am currently tying up various loose ends on a full stack project that we have invested a lot of our development time into over the past six months.

As a general knowledge exercise, and to make sure our server setup is optimized, I have spent the past few hours going through the nginx docs with a fine-tooth comb.

In the process I learnt a number of new things, and discovered some interesting optimizations. I thought I'd post a brief 'nginx - what you need to know' kind of post. This is intended to be an overview of nginx for someone who has a generally solid knowledge of software/server architecture yet only has fifteen minutes to spare.


nginx is event-based, which allows it to handle load better than, for example, Apache (which spawns a new process per connection).

It was built with the intention of handling high concurrency (lots of simultaneous connections) whilst performing quickly and efficiently.

It consists of a master process which:

  • reads and validates your configuration files
  • manages worker processes

The worker processes accept connections on a shared 'listen' socket and are capable of handling thousands of concurrent connections.

As a general rule you should configure one worker process per CPU core. Double that if you are serving mainly static content.

You can see these respective processes by executing ps -ax | grep nginx from the command line.

If you reload your nginx configuration, worker processes are gracefully shut down.
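A minimal sketch of the master/worker configuration described above (the connection count is an arbitrary example):

```nginx
# One worker per CPU core; recent nginx versions can work this
# out themselves with 'auto'
worker_processes auto;

events {
    # Maximum simultaneous connections per worker process
    worker_connections 1024;
}
```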


nginx follows a 'c-style' configuration format.

nginx configuration allows for powerful regular expression matching and variable utilization.

server blocks define the configuration for a particular host, IP, and port combination.

default_server can be specified on the listen directive to indicate that that configuration block should be utilized for any connection on that port (should no other block match). That is to say, the below block will match a request on port 80 even if the host does not match any configured server_name.

server {
    listen       80  default_server;
    listen       8080;
}

server blocks allow for wildcard matching or regular expression matching. For example you could match both the www and non-www versions of a domain name.

Exact match server names are however more efficient than wildcards or regular expressions on the basis of how nginx stores host data in hash tables.

location blocks define the configuration for a specific location.

Location blocks only consider the URL. They do not consider the query string.

Longer matches take preference - that is to say location /images will be matched over location / were you to request a URL beginning with /images.

Regular expression matches are prioritized over the longest prefix match. If you want to match a regular expression in a location block, prepend it with ~, e.g. location ~ \.(gif|jpg|png)$

Regular expressions follow the PCRE format.

Any regular expression containing brace ({}) characters should be quoted as it would otherwise be unparseable (given nginx's usage of braces for block delimiters).

Regular expressions can use named captures. For example:

server {
    server_name   ~^(www\.)?(?<domain>.+)$;

    location / {
        root   /sites/$domain;
    }
}

Interesting features

Load balancing with nginx is really easy. There are three load balancing methodologies available in nginx:

  • round-robin
  • least-connected
  • ip-hash

Load balancing is highly configurable and allows for the intelligent direction of requests to different servers.

Health checks are built in such that if a particular server fails to respond, nginx refrains from sending requests to that server, based on configurable parameters.
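A sketch of the idea. The backend addresses, weights, and failure thresholds are assumptions for illustration:

```nginx
# Hypothetical backend pool. round-robin is the default;
# uncomment least_conn or ip_hash to switch methodology.
upstream app_servers {
    # least_conn;
    # ip_hash;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080 weight=2;

    # Basic health-check behaviour: after 3 failures within 30s,
    # stop sending requests to this server for 30s
    server 10.0.0.3:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;
    }
}
```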

HTTPS is also easy to implement with nginx. It is a case of adding listen 443 ssl to your server block and adding directives for the locations of your certificate and private key.

Given that the SSL handshake is the most expensive part of a secure offering, you can cache your SSL sessions.

ssl_session_cache   shared:SSL:10m;  
ssl_session_timeout 10m;  
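Putting those pieces together, a minimal HTTPS server block might look like this. The domain and certificate paths are assumptions - point them at your own files:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # Cache sessions to avoid repeating the expensive handshake
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;
}
```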


nginx is very modular in its nature. Although you interact with it as one unit through your configuration it is in fact made up of individual units doing different things.

nginx offers a detailed module reference - this is an overview of some interesting or less commonly discussed modules available to you.

The internal directive of the core module allows for a location to only be accessible to an internal redirect.

For example, if you want 404.html only to be accessible to a 404 response you can redirect requests from the error_page whilst not making it accessible to a user typing it directly into their browser.

error_page 404 /404.html;

location /404.html {
    internal;
}


If you don't want to show directory listings to nosy users, ensure autoindex is set to off (its default). This can be used to protect your image directory, for example.

Browser detection

You can use modern_browser and ancient_browser to show different pages dependent on the browser which your client is using. See here.

IP Conditionals

You can use the geo module to set variables based on IP ranges. You could for example set a variable based on a locale IP range and use that to send users from a specific country to a specific location.
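A sketch of the idea. The IP range and redirect target are purely illustrative:

```nginx
# Set $blocked based on the client IP
geo $blocked {
    default        0;
    192.168.0.0/24 1;
}

server {
    location / {
        # Send matched clients elsewhere
        if ($blocked) {
            return 302 https://elsewhere.example.com$request_uri;
        }
    }
}
```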

Image manipulation

nginx even offers an image filter module. This module allows you to crop, resize, and rotate images with nginx.

Connection limits

You can limit a particular IP to a particular number of concurrent connections. For example you could only allow one connection to files within your 'downloads' folder at a given time.

In addition to that, you can limit the request rate and configure the handling of request bursts greater than a configurable value.

My initial thoughts were that this would be a fantastic method of preventing people from hammering a public API for example.

More information can be found here and here.
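A sketch of both limits together. The zone names, sizes, and rates are assumptions; the zone directives belong in the http context:

```nginx
# Track connections and requests per client IP
limit_conn_zone $binary_remote_addr zone=perip:10m;
limit_req_zone  $binary_remote_addr zone=api:10m rate=10r/s;

server {
    # Only one concurrent connection per IP to the downloads folder
    location /downloads/ {
        limit_conn perip 1;
    }

    # At most 10 requests per second per IP, with bursts
    # of up to 20 requests queued
    location /api/ {
        limit_req zone=api burst=20;
    }
}
```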

Request controls

Further to the above, nginx allows you to limit the request types that a particular block will handle.

This would again be extremely useful for an API offering.

limit_except GET {
    deny  all;
}

Conditional logging

The log module allows for conditional logging.

I thought I'd give this a mention because I can see a lot of merit in only wanting to log access requests that result in bad response codes.

The example listed shows this:

map $status $loggable {
    ~^[23]  0;
    default 1;
}

access_log /path/to/access.log combined if=$loggable;

Secure links

This module is pretty awesome - it allows you to secure your links and associate validity time periods with them utilizing the nginx server software.

Personally, I have no use for it because although I have production use cases of similar functionality, I can't help but feel that you could do this in many easier ways.


There was one module that I found somewhat novel. I just cannot see a use case for it - perhaps someone can enlighten me?

nginx offers a module to show a random file from a given directory as an index file.

I need this :P


As mentioned, the above is based on a thorough read through of the nginx documentation.

The following chapter from 'The Architecture Of Open Source Applications' was also incredibly interesting. It offers a significantly more complex look at the internals of nginx. Perhaps not suitable if you really did only have 15 minutes ;)

I highly recommend reading the documentation yourself if you are interested in, or are running, nginx on a server.

If you have any questions I would be happy to answer them.