Double Negative

Software, code and things.

nginx in 15 minutes

I am currently tying up various loose ends on a full stack project that we have invested a lot of our development time into over the past six months.

As a general knowledge exercise, and to make sure our server setup is optimized, I have spent the past few hours going through the nginx docs with a fine-tooth comb.

In the process I learnt a number of new things and discovered some interesting optimizations, so I thought I'd write a brief 'nginx - what you need to know' post. It is intended as an overview of nginx for someone who has a generally solid knowledge of software/server architecture yet only has fifteen minutes to spare.


nginx is event-based, which allows it to handle load better than, for example, Apache (which spawns a new process per connection).

It was built with the intention of handling high concurrency (lots of simultaneous connections) whilst performing quickly and efficiently.

It consists of a master process which:

  • reads and validates your configuration files
  • manages worker processes

The worker processes accept connections on a shared 'listen' socket and are capable of handling thousands of concurrent connections.

As a general rule you should configure one worker process per CPU core. Double that if you are serving mainly static content.
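On a four-core machine, that rule of thumb would look like this at the top level of nginx.conf (the core count here is illustrative):

```nginx
# one worker per CPU core; 'auto' (nginx >= 1.3.8) picks this for you
worker_processes  4;
```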

You can see these respective processes by executing ps -ax | grep nginx from the command line.

If you reload your nginx configuration (e.g. with nginx -s reload), worker processes are gracefully shut down - they finish serving their current requests before exiting.


nginx follows a 'c-style' configuration format.

nginx configuration allows for powerful regular expression matching and variable utilization.

server blocks define the configuration for a particular host, IP, and port combination.

default_server can be specified on the listen directive to indicate that nginx should use that configuration block for any connection on that port, should no other block match. That is to say, the below block will match a request on port 80 even if the host does not match any other server block:

server {  
    listen       80  default_server;
    listen       8080;
}

server blocks allow for wildcard matching or regular expression matching. For example you could match both the www and non-www versions of a domain name.

Exact match server names are however more efficient than wildcards or regular expressions on the basis of how nginx stores host data in hash tables.
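For example (example.com is of course a placeholder):

```nginx
# exact names are looked up in a hash table - fastest
server {
    listen       80;
    server_name  example.com www.example.com;
}

# a wildcard also matches www.example.com, but is checked less efficiently
server {
    listen       80;
    server_name  *.example.com;
}
```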

location blocks define the configuration for a specific location.

Location blocks consider only the URI path. They do not consider the query string.

Longer matches take preference - that is to say, location /images will be matched over location / were you to request, say, /images/logo.png.

A matching regular expression is prioritized over the longest matching prefix (unless that prefix is marked with ^~). If you want to match a regular expression in a location block, prepend it with ~ (or ~* for a case-insensitive match) e.g. location ~ \.(gif|jpg|png)$

Regular expressions follow the PCRE format.
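Putting the prefix and regex rules together in one illustrative snippet:

```nginx
location / {
    # shortest prefix - the fallback
}

location /images/ {
    # longer prefix - preferred over 'location /' for /images/... requests
}

location ~ \.(gif|jpg|png)$ {
    # a matching regular expression beats both prefixes above
}
```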

Any regular expression containing brace ({}) characters should be quoted as it would otherwise be unparseable (given nginx's usage of braces for block closures).

Regular expressions can use named captures. For example:

server {  
    server_name   ~^(www\.)?(?<domain>.+)$;

    location / {
        root   /sites/$domain;
    }
}

Interesting features

Load balancing with nginx is really easy. There are three load balancing methodologies available in nginx:

  • round-robin
  • least-connected
  • ip-hash

Load balancing is highly configurable and allows for the intelligent direction of requests to different servers.

Health checks are built in, such that if a particular server fails to respond, nginx refrains from sending requests to that server based on configurable parameters.
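A minimal sketch of both ideas together (the backend hostnames are placeholders; max_fails and fail_timeout are the configurable health-check parameters mentioned above):

```nginx
upstream backend {
    least_conn;                 # or ip_hash; omit for the default round-robin
    server app1.example.com;
    server app2.example.com  max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}
```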

HTTPS is also easy to implement with nginx. It is a case of adding listen 443 ssl to your server block and adding directives for the locations of your certificate and private key.
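That looks roughly like this (the certificate paths are illustrative):

```nginx
server {
    listen              443 ssl;
    server_name         example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}
```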

Given that the SSL handshake is the most expensive part of serving content securely, you can cache your SSL sessions:

ssl_session_cache   shared:SSL:10m;  
ssl_session_timeout 10m;  


nginx is very modular in its nature. Although you interact with it as one unit through your configuration it is in fact made up of individual units doing different things.

nginx offers a detailed module reference - this is an overview of some interesting or less commonly discussed modules available to you.

The internal directive of the core module allows for a location to only be accessible via an internal redirect.

For example, if you want 404.html only to be accessible to a 404 response you can redirect requests from the error_page whilst not making it accessible to a user typing it directly into their browser.

error_page 404 /404.html;

location /404.html {  
    internal;
}

If you don't want to show directory listings to nosy users you can set autoindex to off. This can be used to protect your image directory, for example.
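For example (noting that off is already the default, so this mainly guards against an autoindex on inherited from an outer block):

```nginx
location /images/ {
    autoindex  off;
}
```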

Browser detection

You can use modern_browser and ancient_browser to show different pages dependent on the browser which your client is using. See here.

IP Conditionals

You can use the geo module to set variables based on IP ranges. You could for example set a variable based on a locale IP range and use that to send users from a specific country to a specific location.
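A sketch of that idea (the IP range and redirect target are purely illustrative):

```nginx
# map client IPs to a variable
geo $cn_visitor {
    default        0;
    203.0.113.0/24 1;
}

server {
    location / {
        if ($cn_visitor) {
            return 302 https://cn.example.com$request_uri;
        }
    }
}
```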

Image manipulation

nginx even offers an image filter module. This module allows you to crop, resize, and rotate images with nginx.

Connection limits

You can limit a particular IP to a particular number of concurrent connections. For example you could only allow one connection to files within your 'downloads' folder at a given time.

In addition to that, you can limit the request rate and configure the handling of request bursts greater than a configurable value.
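Both limits can be sketched as follows (zone names and sizes are illustrative):

```nginx
# zones live in the http block; keyed on the client IP
limit_conn_zone $binary_remote_addr zone=addr:10m;
limit_req_zone  $binary_remote_addr zone=api:10m  rate=10r/s;

server {
    location /downloads/ {
        limit_conn addr 1;            # one concurrent connection per IP
    }

    location /api/ {
        limit_req  zone=api burst=20; # queue bursts of up to 20 requests
    }
}
```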

My initial thoughts were that this would be a fantastic method of preventing people from hammering a public API for example.

More information can be found here and here.

Request controls

Further to the above, nginx allows you to limit the request types that a particular block will handle.

This would again be extremely useful for an API offering.

limit_except GET {  
    deny  all;
}

Conditional logging

The log module allows for conditional logging.

I thought I'd give this a mention because I can see a lot of merit in only wanting to log access requests that result in bad response codes.

The example listed shows this:

map $status $loggable {  
    ~^[23]  0;
    default 1;
}

access_log /path/to/access.log combined if=$loggable;  

Secure links

This module is pretty awesome - it allows you to secure your links and associate validity time periods with them utilizing the nginx server software.

Personally, I have no use for it because although I have production use cases of similar functionality, I can't help but feel that you could do this in many easier ways.


There was one module that I found somewhat novel. I just cannot see a use case for it - perhaps someone can enlighten me?

nginx offers a module to show a random file from a given directory as an index file.

I need this :P


As mentioned, the above is based on a thorough read through of the nginx documentation.

The following chapter from 'The Architecture Of Open Source Applications' was also incredibly interesting. It offers a significantly more complex look at the internals of nginx. Perhaps not suitable if you really did only have 15 minutes ;)

I highly recommend reading the documentation yourself if you are interested in nginx, or are running it on a server.

If you have any questions I would be happy to answer them.

JAXL - Connecting to Google's Cloud Connection Service (CCS)

I recently implemented the server side of a system to send notifications to iOS apps through APNS. This was extremely easy to implement and opened my eyes to the benefits of having a persistent streaming connection to Apple's servers. That is to say my backend is a constantly running service, and as/when new 'notifications' are stored in our database they are immediately sent through to APNS.

For Android notifications I was utilizing GCM and a simple HTTP connection to Google's servers (using cURL). I ran the script as a cron every x minutes and it would send out the notifications as appropriate.

Whilst this worked perfectly fine, the idea had now crossed my mind and it was inevitable that I'd have to implement a similar persistent connection to Google's servers for GCM.

Cloud Connection Service (CCS) and XMPP

Google have a service called the Cloud Connection Service to achieve this. It utilizes the XMPP protocol (originally an XML-based instant messaging protocol) to communicate in both directions with a client.

As you may have noticed (if you are a regular reader of our blog) a lot of our products are PHP based. For this particular project it made sense to execute this functionality using PHP.

Google do not provide any examples of utilizing PHP to connect to CCS and in fact there are very few well maintained, well used, generally solid implementations of XMPP communication in PHP.

A brief 'Google' of XMPP presented a lot of information about the protocol. I would assume that Google opted to utilize XMPP because it is a 'standard', and has been used and developed over 15 years. It is seemingly well known within the engineering community and it allows for two way communication such that GCM can supply 'Receipts' for messages. This is something that iOS and APNS cannot provide.

On a personal level I have absolutely no experience with XMPP. In its entirety it is quite a complicated protocol. For a GCM implementation the only relevant parts are essentially authenticating and sending messages. Even though CCS allows for bi-directional communication, this was of little interest to me.

Getting to work

I wrote a script to connect to Google's CCS servers and send messages using JAXL. This was really easy to do:

$this->client = new JAXL(array(
    'jid'       => $this->senderId . '@gcm.googleapis.com',
    'pass'      => $this->googleApiKey,
    'auth_type' => 'PLAIN',
    'host'      => $this->host,
    'port'      => $this->port,
    'strict'    => false,
    'force_tls' => true,
    'log_level' => JAXL_DEBUG,
    'protocol'  => 'tls'
));

// add a callback for authorisation success
$this->client->add_cb('on_auth_success', function() {
    $this->client->set_status("available!", "dnd", 10);

    // send your messages
    $this->sendYourMessages();
});

// start the client
$this->client->start();
Within my sendYourMessages method I was loading some notifications from my database, looping through them, and sending them utilizing $this->client->send() / $this->client->send_raw().

This worked perfectly.

Given the number of Stack Overflow questions and Google Code discussions about the difficulty of connecting to CCS with PHP I was somewhat bemused to say the least.

Unfortunately, however, I had been a little too optimistic. What I wanted to do was maintain a persistent connection to the CCS server and constantly poll for new notifications in my database.

Given that PHP and asynchronicity are rarely found in the same sentence together, this was going to be a little tougher to achieve.

My intention was to utilize a continuous while loop to continually poll my database:

while (true) {  
    //poll database
    //send messages over XMPP connection

    //take a nap
    sleep(15);
}
The problem is that this is blocking - nothing below this block will ever execute until the while loop completes (which is never).

If you look into the internals of JAXL it works in a slightly more complex yet similar way. That is to say there is a continuous blocking loop checking the connected streams to see if it can/should read/write to them, and then acting accordingly.

When you execute start() on the client it configures things and then executes JAXLLoop::run() which starts this continuous blocking loop.

The long and the short of it is that you cannot continuously poll the XMPP connection and continuously poll your own data source.

Read the source

I made the foolish mistake of going in blind and trying to hack together an appropriate resolution.
A fear of the complexities of XMPP and a smidgen of laziness ironically meant that a resolution took significantly longer than it should have.

After a number of hours of futility I decided to step back and read through the JAXL source in its entirety. At this point everything slotted into place and a suitable resolution (see below) was relatively easy to come by.

The JAXL source is a little 'hmm ok'.. but it is pretty simple to get your head around.

As for XMPP.. whilst a lot of the information on the web is very much all or nothing, I did find this: How XMPP Works Step By Step which I found to be the most concise explanation of the relevant workings.

If you utilize the JAXL_DEBUG log_level in your configuration, the output matches up almost perfectly to that outlined in the above link.

The resolution

I wanted a resolution that could work on top of JAXL without requiring a significant time investment or refactoring.

Conceptually the resolution was as follows: Implement batch data polling within the JAXL Loop.

We can remove the issue of one loop blocking the second by.. only having one loop :)

During my research into the problem I stumbled upon this StackOverflow answer which suggests using UDP sockets. This seems like complete overengineering: it would work, but the complexities and problems associated with it beg the question 'why not do something easier?' (like the below).

I have forked JAXL and committed my changes to github here.

What I have essentially done is manipulate the 'periodic jobs' concept already contained within the JAXL codebase. If you look here there is a very brief explanation.

In that same file is the following message:

"Since cron jobs are called inside main select loop, do not execute long running cron jobs using JAXLClock else the main select loop will not be able to detect any new activity on watched file descriptors. In short, these cron job callbacks are blocking."

This is important. Essentially the clock (loop) is executed every second and we ask 'has 15 seconds passed since we last got data?'. If it has, we execute the callback which passes through the message to the JAXL event handler.

Within your implementation of the handler callback you load your data. Whilst you are doing this, the clock is not ticking. That is to say the streams are not being monitored. Make sure your data loads quickly!

This is by no means ideal but it is about as good as it gets with PHP.


The usage of this setup is as follows:

  • Pass the configuration parameter batched_data when you instantiate JAXL

  • Add the get_next_batch callback to your client.

  • Within the get_next_batch callback load your data and 'send' it using send / send_raw.
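Putting those steps together, usage looks roughly like this (the batched_data parameter and get_next_batch callback are from my fork, and the data-loading helper is hypothetical):

```php
<?php
require_once 'jaxl.php';

$client = new JAXL(array(
    'jid'          => $senderId . '@gcm.googleapis.com',
    'pass'         => $googleApiKey,
    'auth_type'    => 'PLAIN',
    'force_tls'    => true,
    'batched_data' => true
));

// called from inside the main select loop every polling interval -
// remember: this blocks the loop, so load your data quickly!
$client->add_cb('get_next_batch', function() use ($client) {
    foreach (load_pending_notifications() as $stanza) { // hypothetical helper
        $client->send_raw($stanza);
    }
});

$client->start();
```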


During my research I found a number of discussions questioning how to approach doing similar things with JAXL.

For example this one, and this one.

Abhinav Singh (the creator of JAXL) historically was quite active in responding to threads about JAXL.

In a number of threads I have found he mentions jaxlctl, and utilizing pipes. I investigated both of these options and found them to be not worth pursuing. That said.. have a play, and if you produce anything interesting I would be intrigued to see it.

As for JAXL in general: a post on Google Code and the lack of commits in the past few years suggest that Abhinav has stopped working on JAXL :(


It is a pretty simple conclusion really.. JAXL can be used with Google CCS, and a solution for a persistent connection whilst polling for new data is possible. It has a few drawbacks, but this code is being utilized without issue in a production environment.

Hopefully some of the people trying to implement such functionality will stumble upon this post.

If you have any questions/comments, I would be happy to answer them :)

Escrow services review

Recently I was approached by a buyer based in China interested in purchasing one of the domain names owned by Double Negative. After a prolonged back and forth we agreed on a sales price and decided to complete the transaction through an Escrow service.

Why not

The transaction was for a significant amount of money and as such safety and security for both parties was important.

Having previously worked for Uniregistry and their domain sales platform, I had previous experience with them and knew them to be a safe bet when it comes to domain name Escrow.

Unfortunately however things did not go to plan in that the buyer could not actually pay due to various (bank related) problems. I enquired to the support team as to whether they had any Chinese speaking staff members who could aid in resolving the issue. Unfortunately they did not.

Whilst I am appreciative that in the grand scheme of things the fees being charged are not extremely high, I felt that the support team came across as particularly uninterested in helping at all.

Given this, when the buyer suggested an alternative, after a little initial reluctance I decided to give it a try.


The buyer in this transaction was Chinese and he verified to me that he had previously transacted on high value deals through the platform. Furthermore, being local meant that sending a payment was no issue for the buyer.

I was initially reluctant because the service is relatively new in the grand scheme of things. Given however its associations, I decided that things were probably by the book.

I have never personally transacted there, but I regularly hear about sales that occur through the platform and they generally seem to have a good reputation. Perhaps more importantly, I have never heard about any negative experiences with them.

One thing that did concern me somewhat was their heavy marketing - they offered various deals through the forum platform which seemed a little informal in their approach for a company dealing with potentially massive domain Escrow transactions.
That said, having conversed with Cynthia (see below), apparently this marketing effort resulted in a significant number of new customers whereby neither the buyer nor the seller was a Chinese national.

An overview

The process

The actual Escrow process was very standard.

  • Buyer submits payment
  • Seller transfers domain
  • Payment is released to the seller.

They do offer a 'Premier service' whereby they act as a middleman and the domain is actually transferred to them.


The Escrow fees are extremely competitive - better than any other service out there (that I know of). For a transaction in excess of $25,000 the fee is 0.8%. For the 'Premier Service' you pay 1.6%.

The website

The interface is clear and logical - it is patently apparent what you need to do, when you need to do it, and how you go about doing it. That said, it does lack a little polish. For example, when you press an action button there is no indication that anything is going on until the action is complete. They use a lot of Javascript behind the scenes and I suspect that if anything failed on the backend you'd be left completely bemused, unsure as to why absolutely nothing had happened :)

One thing that I found particularly worrying was that the support form doesn't/didn't work. I tried submitting various questions (prior to completing the transaction) to which I received no response. A direct email follow-up did receive a reply (see below).

These issues are really quite significant in that if your website doesn't instill confidence you may push away customers. It is somewhat bemusing that you would develop a full Escrow platform and then not put a loading spinner on a form :P

The only other issue with the website was speed. On a number of occasions I found the website to be particularly slow which once again did not fill me with confidence.

Notifications

At each step of the way you get an email so you do not need to keep logging in to see what is going on. In addition to that, Cynthia emailed me directly to verify my payment details - I liked this touch. No-one wants their money going into someone else's account because of a typo :)

Payments

Their payment management system is fantastic. You can hold currency in your account in CNY, USD, or EUR and you can convert between the three using their simple (albeit powerful) interface. Withdrawing funds was simple and painless.

History and statistics

I emailed Cynthia to find out some further information about why they built the service and how things have been going so far.

Cynthia made some incredibly interesting points. The service was built to fill a hole in the market. They are working towards connecting the Chinese market (where there is a significant amount of capital and an ever growing interest in domain names) with the international markets, where the best domain names are held (due to earlier Internet adoption).

"We want to make your first choice to make domain business with Chinese and we are going to stride further into the global market after that."

Cynthia mentioned that the market leader has resources and (unspecified) advantages that they cannot match at present. They are however targeting a niche market and hoping to serve that niche well.

Personally I think they are right up there with the market leader (see below).

Some statistics

I was very kindly provided with some statistics. They certainly make for interesting reading.

  • Since launching (last September) they have completed over 300 transactions
  • 2/3 of transactions are in USD
  • Their transactions range in value from $150 to $400,000
  • Most of their transactions are in the $1,000 to $5,000 range
  • The largest transactions tend to be conducted in CNY by domestic users
  • They have a number of clients who are 'regular' users, completing in excess of 10 transactions monthly.


Overall my experience was really good. The only problems that I had were with the visuals of the platform. The service itself was second to none. With a little bit of polish it would be my go-to choice of Escrow service.

For so long the industry has had one 'go-to' Escrow service. There is, however, certainly another big player now.

There are other Escrow services, but my personal view is that those platforms are outdated and out of touch. The smaller services would sadly never get my business because they are too small to instill confidence in the security of my money and domains.

When I was researching to decide whether I was prepared to use it, I was unable to find much in the way of significant hands-on experiences.

If you are considering utilizing it and have stumbled upon this post, I absolutely recommend it, and would be happy to answer any questions that you may have.

Don't Repeat Yourself - iOS

Don't Repeat Yourself (DRY)

The Don't Repeat Yourself principle is one of the most important concepts that a software engineer has in his/her arsenal (in my opinion).

I previously worked on a legacy PHP project where a complete disregard for DRY had resulted in a completely unmaintainable codebase. If someone found a small, insignificant bug, you would often find yourself fixing the same issue in multiple places.

On the most primitive of levels, if you isolate one piece of business logic and reuse it as appropriate, when there is an issue or you decide to change the business logic then you only need to update your code in one place.

When can you repeat yourself?

There is a very interesting Stack Overflow post on this topic here: Is violation of DRY principle always bad?.

My personal view is that DRY is a great principle to be thinking about as you code. In some circumstances however you need to do a simple cost benefit analysis. For me readability is vitally important in my codebase.

A few cases spring to mind from my current iOS project where I have explicitly chosen not to follow the DRY principle. For example:

  • I have some custom UIView subclasses which implement an interface to display a loading screen. At the moment they all show the same loading screen using the same code. By duplicating the code it makes each individual class more readable. Furthermore I intend for these implementations to diverge in the future.

  • I use a custom dependency factory pattern to abstract the dependencies of various classes. This in itself makes my codebase so much more readable and as such the fact that two similar classes have dependency factories with similar implementations is a worthwhile trade off. Any further abstraction would lose me the benefits of clarity.

iOS and the delegate pattern

The delegate pattern plays a big role in iOS development. The out-of-the-box classes provided by Apple which make up most user interfaces rely on it heavily. For example, UITableView uses the delegate pattern to implement its data source and its layout. As another example, UITextView has a delegate for indicating when various things happen - for example the textViewDidBeginEditing: delegate method.

Extending protocols

On the iOS platform you can extend a protocol (in both Objective-C and Swift). In Swift the code is exactly the same as how you would write a subclass:

protocol SubProtocol: SuperProtocol {}  

What has this got to do with DRY?

Very good point.. :P

I recently built a pull to refresh/scroll to load more piece of functionality in Swift. My conceptual premise was that anything I want to refresh/load more of will be in (or can be put in) a subclass of UIScrollView - UITableView, UICollectionView etc.

I figured that I could write a custom implementation of the various scroll view delegate methods to track the users scrolling and show the respective pull to refresh header/load more footer when appropriate.

Unfortunately it was not as easy as I had hoped.. (That is not to say that it was difficult ;) )

Apple give us these various UIView subclasses out of the box, and they are extremely powerful. One cannot really complain. That said, I don't understand why Apple didn't separate the table view delegate from the scroll view delegate in relation to table views and collection views.

UITableViewDelegate is a sub-protocol of UIScrollViewDelegate, which means that when you conform to the former you also conform to the latter. UIScrollViewDelegate has no required methods, but that is beside the point :)

The difficulty is that I want to implement some of the UIScrollViewDelegate methods but I do not want to couple them with an implementation of any UITableViewDelegate methods, because I want to use them elsewhere. I do not want to repeat myself!


The way to resolve this is simple.

I wrote a class (PTRScrollDelegate) which implements the UIScrollViewDelegate protocol. It does various fancy things to make my pull to refresh functionality work.

I then have a custom class which extends this scroll view delegate AND implements UITableViewDelegate. Simple.

If I want to use the functionality with a UICollectionView I again create a custom delegate class, extend my scroll delegate, and implement UICollectionViewDelegate.

You implement your scroll view delegate methods within your superclass whilst you implement your table view delegate methods in your subclass.
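As a sketch (class names other than PTRScrollDelegate are illustrative, and the method bodies are stubbed):

```swift
import UIKit

// Superclass: implements the pull to refresh logic via UIScrollViewDelegate
class PTRScrollDelegate: NSObject, UIScrollViewDelegate {
    func scrollViewDidScroll(_ scrollView: UIScrollView) {
        // track scrolling; show/hide the refresh header and load-more footer
    }
}

// Subclass: inherits the scroll behaviour, adds table-specific behaviour
class TablePTRDelegate: PTRScrollDelegate, UITableViewDelegate {
    func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
        // table view specific behaviour lives here
    }
}

// Reuse: the same scroll logic, now for collection views
class CollectionPTRDelegate: PTRScrollDelegate, UICollectionViewDelegate {
    // collection view specific behaviour lives here
}
```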


There is one patently apparent problem with this kind of setup (although it does not present itself in this specific case).

That is the fact that Swift does not support multiple inheritance whilst it does support the implementation of multiple protocols.

Consider the situation where you have a protocol which extends three additional protocols. You would not be able to keep the implementations of the various delegate methods in independent reusable classes in this case.

That said, I cannot think of (off the top of my head) a situation within the various provided iOS delegates where this would be a problem. Anyone know of any?

If you are in a situation where this is an issue for you, there is more than likely a more appropriate way of structuring your design.

On that note..


The above is essentially a personal case study of my approach to avoiding code repetition on the iOS platform (in one particular situation). Hopefully this will be of use to somebody.

If you have any questions, comments, or suggestions for future posts then I would love to hear from you :)