Our products

We are passionate about building niche multi-platform products that provide real utility to our customers.

Anfield
Keep up to date with Liverpool FC news from across the web.
Domain Bites
Domain name industry news and resources.
Pub Reviews
Find the best pubs near you. Share your thoughts and photos.
Mmmm
Capture your taste and share it with the world.

Blog

We like to blog about software. Our stack, engineering problems, open source developments.. you will find it all discussed here.

Work less. Work better.

It is understandable that one might view any post advocating working less as an attempted justification for laziness. Within 'software', however, I personally feel that it is imperative to take a step back and look at one's time investments more pragmatically.

This is a viewpoint shared by Itamar Turner-Trauring, who makes the case extremely succinctly over at Code Without Rules.

Uniregistry

Rolling back the clock to my time at Uniregistry, I tried to put a finger on exactly what made me leave. The answer was obvious: I was overworked, and it was my own fault.

When I was initially offered the job, it was on a trial basis. Given that I wanted the job, I worked extremely hard to show off my skill set and prove that I was up to the task. It turns out that I was. I got the job, and during my time with the company I was rewarded appropriately.

The problem was that by working 12-hour days and working weekends, I was joining an inappropriate culture of excessively hard work. Not because hard work is bad, but rather because long hours are mentally and physically exhausting whilst not necessarily being particularly productive.

Yes, the euphoria of resolving an issue that you have been struggling with for hours is fantastic. On the flip side, the stress associated with being unable to resolve such an issue is unbearable. Independent of euphoria or stress, this approach is still inefficient. It is almost always the case that these incredibly tough problems are actually extremely simple, and would be resolved in a matter of minutes after a good night's rest.

Frank Schilling (the CEO of Uniregistry) is an extremely intelligent and focused man. He has fantastic ideas, has built a fantastic team, and has created amazing products. I remember heated discussions pertaining to timelines and execution, whereby he would be unhappy with how long we had proposed a particular piece of functionality would take to produce. To be as appeasing as possible, we would provide 'realistic' timelines premised on working long hours. In hindsight, this is why I ended up burnt out and generally fatigued, and why in the end I left my position.

This case was particularly interesting because it was more psychological than anything. Personally, I never wanted to be the first to leave the office. I didn't want to be perceived by my co-workers as lazy, when in fact leaving 'early' may simply have been realistic. I suspect that my co-workers felt similarly.

This certainly isn't a unique situation. I am aware of a number of people working for companies where similar working cultures have developed. It is ridiculous - tantamount to self-harm.

Double Negative

Fast forward to today. I am my own boss, so I can dictate my own working hours. This (unsurprisingly) means that I am able to justify taking time off as needed, and to develop working habits conducive to optimal performance.

Having built three significant multi-platform products, and numerous other smaller projects, it is certainly fair to say that I have been productive. I am, however, still prone to being over-optimistic and to overworking.

Software and programming require mental capacity - intelligence and concentration. As such, whilst there are some jobs whereby one can work on autopilot for ridiculous amounts of time, I do not believe that software is one of them. Itamar states that in his current position he has negotiated a 35-hour working week. I can absolutely see an argument for working even less than this.

When programming, two things have become patently apparent to me. Firstly, my efficiency and productivity are noticeably better before lunch. Secondly, at the end of the day I have significantly more occurrences of the infamous 'why on earth won't this [insert simple thing] work!?'.

As stated by Dan Abramov:

"If you’re debugging a problem late in the evening, drop it and sleep on it. Sleep deprivation makes it far too easy to miss typos and other simple mistakes."

I would extend this. As well as sleep deprivation causing you to miss typos, having stared at a screen for the previous 10 hours has the same effect. As it happens, Itamar has dedicated a whole post to this: Go Home Already.

Going forward, it is clear to me that I need to get my programming done in the mornings, and be prepared to take breaks or leave problems until the following morning. It is a simple case of learning from experience.

Were I still working for Uniregistry (or any other software company for that matter), I would be intrigued as to how the conversation would go were I to propose only working mornings. On the face of things it looks like pure laziness, but taking a step back it is simply honest and pragmatic.
Now that I am a company owner myself, I would certainly prefer my employees to be realistic and up front with me. I am 100% certain that as long as you have good employees whom you can trust to manage themselves appropriately, productivity will increase.

At the end of the day it comes down to two words: Realistic expectations.

What others say..

Having written this post, I wanted to investigate further what others have to say on this topic. It turns out.. quite a lot.

This post from back in 2012 outlines the history of the 40-hour work week.

One quotation which resonated with me was the following:

"That output does not rise or fall in direct proportion to the number of hours worked is a lesson that seemingly has to be relearned each generation."

To me, it is not so much that each generation needs to 'relearn' this. In my own case, I am well aware of my reduced performance but choose to overwork for ideological reasons. We need to listen to our bodies.

Both the previously mentioned post and this article from the Guardian pertain to working in general, not just the software industry. The latter refers to doctors, who I agree should absolutely not be overworking - they are potentially responsible for people's lives!

Whilst not (normally) affecting human life, the software that we build is oftentimes widely used and wide-reaching. There are software tools used by millions of users daily/hourly which are responsible for extremely important systems, and which account for millions of dollars of economic output.

It turns out that the 'literature' on the topic matches my own thought processes, whilst also referencing research which supports my sentiments.

The wise boss would let his workers manage themselves (within reason) to optimise output and efficiency.

I hope my boss does.

Automatically updating your self-hosted Ghost install

Ghost version 0.9.0 has just been released, introducing awesome new functionality such as scheduled posting (which I am utilising to post this :)). This is fantastic news, but I run three different self-hosted Ghost blogs and I only just updated to 0.8.1.

Whilst updating Ghost isn't exactly that difficult, it still takes time. Time which I sadly do not have. As an engineer it struck me as being somewhat ridiculous that I hadn't already written some sort of script to manage the upgrading of these various blogs. In fact, I am even more bemused that (seemingly) no one else has done so either.

The title is not strictly correct, in that this is not actually automatic. It still requires you to execute the script, and to update multiple blogs you will need to modify it to take arguments. That said, it is a start - something to get you thinking.

Pub Reviews and Mmmm have complex build scripts which check out the latest versions of the source code, build the applications, run the test suites etc. Obviously nothing that complex is needed here, but given my experience with ssh, rsync, and shell scripts more generally, I pieced together a simple script to update a Ghost blog on an external server from my MacBook.

The code is embedded below:
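
What follows is a minimal sketch premised on the considerations outlined in the next section; the HOST, USER, STAGEDIR, and BLOGDIR values are placeholders which you will need to adjust for your own setup.

    #!/bin/bash
    # Upgrade a self-hosted Ghost 0.x blog from a local staging directory.
    # Bare bones: no safety checks and no backups - see the notes below.

    HOST="example.com"               # your external server (placeholder)
    USER="deploy"                    # your SSH user (placeholder)
    STAGEDIR="$HOME/ghost-latest"    # where you unzipped the new Ghost release
    BLOGDIR="/var/www/ghost"         # the Ghost install on the server

    # Copy the new core files across, per the official upgrade document.
    rsync -avz "$STAGEDIR/core" "$USER@$HOST:$BLOGDIR/"
    rsync -avz "$STAGEDIR/index.js" "$STAGEDIR/package.json" "$USER@$HOST:$BLOGDIR/"
    rsync -avz "$STAGEDIR/content/themes/casper" "$USER@$HOST:$BLOGDIR/content/themes/"

    # Remove the old dependencies, clean the npm cache, reinstall, and restart.
    ssh "$USER@$HOST" "cd $BLOGDIR \
        && rm -rf node_modules \
        && npm cache clean \
        && npm install --production \
        && NODE_ENV=production forever restart index.js"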

Using it

There are a few considerations to note to correctly utilise this script.

  • It is based on the official Ghost upgrade document. Ghost could fundamentally change in the future (although it is unlikely), meaning that different files would need to be updated.

  • To use the script you need to download the version of Ghost that you want to update to, and place it in a directory which you specify in the STAGEDIR variable.

  • This is a bare-bones script. It has no safety checks, and doesn't perform backups of your blog content. Make sure that you enter your host/user details correctly, and be sure to back up your data prior to utilising this script.

  • My blogs are run utilising forever such that they don't go offline when I close my terminal.

  • I remove the node_modules directory completely, clean the npm cache, and do a fresh install of the various dependencies, because I was encountering a number of dependency-based issues when I did not do so.

  • The script assumes passwordless (key-based) authentication for SSH connections to your external server. You don't strictly need this - you can modify the code as you see fit - but in the interest of making the process as quick and painless as possible it is something to consider. This answer on StackOverflow outlines how one can set it up, and the commands are sketched below.
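
For reference, the standard approach looks something like this (user@example.com is a placeholder for your own details):

    ssh-keygen -t rsa              # generate a key pair (if you don't already have one)
    ssh-copy-id user@example.com   # copy your public key across to the server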

That's it

Hopefully this is of use to you, and saves you a few vital moments every once in a while. If you have any questions, comments, or suggestions, let me know below.

The different programming paradigms

We utilise a number of different programming languages for the various products that we build here at Double Negative.

In keeping up to date with the development and progression of the various tools, frameworks, and paradigms that we use, I often find myself exposed to various 'buzzwords'.

I often find that these phrases are used in a seemingly misunderstood manner. One cannot simply be a 'functional programmer', in the sense of always explicitly writing code in a functional manner - some programming languages are not built in a way conducive to functional programming.

This post serves as a general overview of my understanding of the different programming paradigms. I make no suggestion that any one is 'better' than the others, simply because the utility of programming in a certain manner is completely context dependent. Furthermore, the paradigms below are not mutually exclusive, in the same way that a 'project' often utilises multiple languages and tools.

Pub Reviews, for example, is written in Object Orientated PHP, yet the front end functionality utilises modules written in prototypical Javascript.

Simply being aware of these paradigms is not enough - it is all in the execution. I could write purely functional PHP, but it would be a complex and arduous process not best suited to what I want to achieve.

Anyway.. onwards..

Functional Programming

Functional programming is explained very succinctly by Kris Jenkins in the context of what he terms 'side-causes' and 'side-effects'.

My interpretation of functional programming is programming whereby everything is built up from functions. Functions can take other functions as inputs, and can return functions (functions are 'first-class citizens'), in a manner such that when pieced together (like Lego bricks) you have a complete program. In the context of Kris' explanation, there are no external or independent dependencies for the executing functions - all inputs and outputs are known and discernible from the function signature.

Functional programming does not allow for mutable state, and has no need for loop constructs, instead relying on recursion.
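
By way of a brief sketch in Javascript (not a functional language, but capable of the style - the function names here are my own illustrations):

    // A pure function: its output depends only on its explicit inputs -
    // no side-causes, no side-effects.
    function add(a, b) {
        return a + b;
    }

    // Recursion in place of a loop construct, with no mutable state.
    function sum(list) {
        if (list.length === 0) {
            return 0;
        }
        return add(list[0], sum(list.slice(1)));
    }

    // Functions as first-class citizens: map takes a function as an input.
    var doubled = [1, 2, 3].map(function (n) { return n * 2; }); // [2, 4, 6]

    console.log(sum(doubled)); // 12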

Haskell is one of the most commonly referenced functional programming languages. As outlined on the linked page, it is designed and optimised with the intention of being used with functional paradigms, and it makes it inherently difficult to program in other ways.

PHP (which I regularly use) is not typically considered a functional language, but given that it is somewhat unconstrained in its nature, it can be used in a functional manner.
It is like having a toolbox - you can use the tools however you like. That said, people might give you some funny looks if you use a hammer in a manner that is not considered 'normal', or where it is overkill for the problem at hand. A hammer might get the job done, but that does not mean that it is an appropriate or reasonable way to do so. This is the reason why PHP is often looked down upon - its flexibility means that it is often used poorly.

Object Orientated Programming (OOP)

Object Orientated Programming.. it's in the name. It is premised around Objects.

I feel that this StackOverflow answer explains it fairly well: "Objects are nouns, methods are verbs.".

For example, an item of 'Food' is an object that can be broken down into more specific forms (objects) - 'Cake', 'Steak' etc.

The Food object definition describes the properties of an item of 'Food' and the things that can be done to it. For example eatIt().

OOP is associated with buzzwords such as:

  • Inheritance - the process by which 'Cake' inherits the properties and methods of its parent, 'Food'. As 'Cake' IS food, you can eatIt().

  • Interfaces - a set of methods which an implementing class must define in order to conform. The needsCooking interface could specify methods that must be implemented by conforming classes to define how one cooks the 'Food'.

  • Polymorphism - the process by which many different types of object can exhibit the same behaviours. Whilst 'Cake' and 'Steak' are inherently different, both can be eaten. By implementing a specific interface, both can have eatIt() implementations which do different things (stuff into mouth hole vs utilise knife and fork like a civilised human being) - see the sketch below.
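
A minimal sketch of all three concepts in PHP, using the illustrative names from above:

    <?php

    interface needsCooking {
        public function cook();
    }

    class Food {
        public function eatIt() {
            echo "Eating some generic food.\n";
        }
    }

    // Inheritance: Cake IS food, so you can eatIt().
    class Cake extends Food {
        public function eatIt() {
            echo "Stuff into mouth hole.\n";
        }
    }

    // Steak conforms to the needsCooking interface, so it must define cook().
    class Steak extends Food implements needsCooking {
        public function cook() {
            echo "Pan fry the steak.\n";
        }

        public function eatIt() {
            echo "Utilise knife and fork like a civilised human being.\n";
        }
    }

    // Polymorphism: the same call, different behaviours.
    foreach ([new Cake(), new Steak()] as $food) {
        $food->eatIt();
    }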

Prototypical OOP

Before ES6, Javascript did not have the class keyword. Many esteemed Javascript programmers believe that this was a good thing (that is a story for another time), relying instead on prototypical OOP for their Javascript code design.

Nate Good provides an easily accessible introduction to prototypical OOP.

Essentially POOP (tehehe) involves no separation between classes and instances. There is no explicitly differentiated blueprint of what 'Food' instances should look (or act) like, and instead everything is an object. An instance object simply delegates calls up its prototype chain as opposed to copying functionality from a blueprint.

Your blueprint IS an object, and your instances are also objects which delegate to (rather than copy from) the prototype of the blueprint object.
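
A brief sketch using Javascript's Object.create (sticking with the food example, purely for illustration):

    // The 'blueprint' is itself just an object.
    var food = {
        eatIt: function () {
            console.log('Eating some ' + this.name + '.');
        }
    };

    // An instance is an object which delegates to food via the prototype chain.
    var cake = Object.create(food);
    cake.name = 'cake';

    cake.eatIt();                          // "Eating some cake." - found on food
    console.log(food.isPrototypeOf(cake)); // true - nothing was copied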

I see no need to reinvent the wheel and explain prototypes when Sebastian Porto has done it so clearly already.

Interestingly, whilst Java is not typically considered a prototypical language, it can be utilised in such a manner. This answer gives some extremely interesting insights into the pros of prototypical design. This is also a prime example of paradigms in context. Whilst not the most obvious approach to designing a program in Java, given the context of what the author was trying to achieve, a prototypical approach served him best (for the reasons that he outlines).

Given how Javascript works, many have suggested that Javascript is an example of a true 'object orientated' language, and that what I have defined as 'OOP' (above) should more correctly be called 'class orientated programming' (or similar).

Procedural

Procedural programming is by my definition a situation whereby your code executes as it reads. That is to say, line 1 executes, then line 2, then line 3 etc.
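
In its simplest form, that looks something like this (a trivial hypothetical sketch in Javascript):

    var total = 0;         // line 1 executes first...
    total = total + 5;     // ...then line 2...
    console.log(total);    // ...then line 3 prints 5.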

Procedural programming avoids the abstraction inherent in Object Orientated design. As a result, procedural programs can easily get out of hand as complexity increases, and can become prone to repetition (going against the Don't Repeat Yourself (DRY) principle).

Again, things are not black and white: within an Object's method definition you could perform a series of procedural steps - do X, then do Y. Different aspects of your program can (and should) be programmed utilising the most appropriate paradigms for the problem.

My view is that a program should not be purely procedural, and that when procedural code is utilised within an OO design it should sit alongside/within the appropriate design patterns so as to be optimally maintainable and readable.

Modular

Modular programming isn't so much a programming style but rather a manner of organisation. It essentially involves separating the different 'parts' of your (much larger) program into modules which each provide specific functionality.

In one of our PHP based projects we have two distinct 'modules' namely 'Admin' and 'Public'. If I am looking to modify some administrative functionality.. guess where I go?

On a more micro level a Javascript project (for example) can utilise 'packages' from NPM so as to essentially 'plug-and-play' required functionality. Packages and modules are one and the same.

This allows you to decouple your software. You can minimise your file sizes by only including the modules that you require, whilst also making your codebase more manageable.

I am hesitant to use the word 'interface' (given the discussion above), but one can utilise an externally developed module and, being aware of the methods that its API exposes, write software without any underlying knowledge of its internals. Because the codebase is not tightly coupled, the module developer could completely rewrite the module (whilst maintaining the exposed interface) and your code that utilises it would still work as expected.

In the Javascript ecosystem, NodeJS allows you to require('module'), and Browserify allows you to bring this functionality to the browser space. This means that 'reinventing the wheel' is no longer excusable. Why bother building something that someone else has already built?
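
A minimal Node-style sketch (greet.js is a hypothetical module of our own, split across two files):

    // greet.js - only what is exported forms the module's interface.
    function capitalise(name) {
        // An internal helper, hidden from consumers of the module.
        return name.charAt(0).toUpperCase() + name.slice(1);
    }

    module.exports = {
        greet: function (name) {
            return 'Hello, ' + capitalise(name) + '!';
        }
    };

    // app.js - the consumer relies only on the exposed interface.
    var greeter = require('./greet');
    console.log(greeter.greet('world')); // "Hello, World!"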

Conclusions

So there you have it. An easily digestible overview of some of the more common programming paradigms.

If there is a particular paradigm that you want me to write about, or you have any questions or comments then do let me know.

Image caching on iOS with AlamofireImage

We recently released the updated and improved iOS application for Mmmm.com - a user curated visual menu for finding awesome food (and drink) in the city of Leeds, West Yorkshire.

Whilst testing the application it became apparent that the implemented image caching mechanisms were not appropriate for an inherently image heavy application.

I delved back into the source code and noted that for some reason I had attempted to reinvent the wheel - I had implemented my own caching methodology within my UICollectionView adapters (which implement the UICollectionViewDataSource protocol).

The logic was fine. I had opted to cache images in memory using NSCache. The idea was that rather than re-download an image from the server when a user scrolls back to a particular cell we would load the image directly from the cache.
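
The general pattern looked something like the following simplified sketch (Swift 2 era; this is not the original implementation, and downloadImage is a hypothetical networking helper):

    import UIKit

    let imageCache = NSCache()

    func imageForURL(urlString: String, completion: (UIImage?) -> Void) {
        // Serve the image straight from the in-memory cache when possible...
        if let cached = imageCache.objectForKey(urlString) as? UIImage {
            completion(cached)
            return
        }

        // ...otherwise download it, cache it, and hand it back.
        // downloadImage is a hypothetical helper; its signature is assumed.
        downloadImage(urlString) { image in
            if let image = image {
                imageCache.setObject(image, forKey: urlString)
            }
            completion(image)
        }
    }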

I had also implemented a filesystem cache which would save the last ten displayed photos to the filesystem and would save details of their locations to a SQLite database (interfacing with the SQLite database through the fantastic FMDB library). The intention of this caching layer was that upon reopening the app we could display the last ten photos almost instantaneously prior to sending off a query to the server to request more up to date photos. It was implemented to provide an optimal user experience such that a user sees beautiful pictures of food upon reopening the app as opposed to a boring loading screen.

The problem

There was in fact no problem. The caching worked as expected and it saved on unnecessary server requests. The problem was what my caching mechanism didn't do.

If a user quickly scrolled through the collection of photos, cached images would be shown (where appropriate) or requests would be sent off to the server to load the image for each cell - even for cells that had been scrolled past extremely quickly and were no longer on screen. Not only did this result in unnecessary data usage, but it also meant that the requests to load the images for the cells actually on screen were stuck at the end of a queue of unnecessary requests.

Fortunately I had noticed this and had implemented the concept of a request queue within my delegate. It essentially tracked ongoing requests and cancelled unnecessary ones.

This should have worked in principle, but because of the complexities associated with asynchronous requests in collection views (race conditions, reused cells etc.) it was resulting in some visual issues such as flickering images and incorrect images on certain cells.

There is an interesting and informative question on StackOverflow which touches on some of these problems. It is well worth a read.

It was this that spurred me to look back at the code and come up with a plan for refactoring it.

AlamofireImage

Developing a plan of attack wasn't difficult. As soon as I looked at the code I was completely bemused as to why I had not used the AlamofireImage extension (of which I was aware). I believe the reason is that at the time the code was originally written, AlamofireImage did not exist. As such I had not reinvented the wheel.. I had created it. Unfortunately (fortunately?) someone had since created a much better wheel.

My logic had been flawless, and what I had implemented was good. It was simply the case that here in July 2016 we have AlamofireImage, which implements what I had done and much more.

Alamofire is actively developed, and has an active community of top quality developers maintaining it. It is a perfect example of the awesomeness of open source. Alamofire is what I utilise for the network interaction across the Mmmm application and as such it seemed like a logical extension to utilise the AlamofireImage extension for my image needs.

AlamofireImage is well documented and extremely simple to use. It provides various methods for downloading, caching, and manipulating images (transitions, rounded corners etc). In addition to this it specifically provides UIImageView extension methods which utilise these methods to provide simple 'helper' functionality for displaying these images.

The best bit about these UIImageView methods is that they handle a lot of the complexity behind the scenes. One could very easily naively implement them and encounter no issues. For example, if I attempt to load an image from a URL into a UIImageView within a UICollectionViewCell utilising the af_setImageWithURL method, it will automatically handle the cancellation of unnecessary requests when the containing cell is reused.
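
A minimal sketch of that usage within a data source (Swift 2 era; PhotoCell and photoURLs are illustrative names from my own setup, not part of the library):

    import UIKit
    import AlamofireImage

    func collectionView(collectionView: UICollectionView,
                        cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell {
        let cell = collectionView.dequeueReusableCellWithReuseIdentifier(
            "PhotoCell", forIndexPath: indexPath) as! PhotoCell

        // af_setImageWithURL caches the downloaded image, and cancels the
        // request automatically if the cell is reused before it completes.
        if let url = NSURL(string: photoURLs[indexPath.item]) {
            cell.imageView.af_setImageWithURL(url,
                placeholderImage: UIImage(named: "placeholder"))
        }

        return cell
    }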

N.B. A well-written app will take advantage of the abundance of information returned by Alamofire to act on situations like this appropriately. Alamofire will return an appropriate error code when a request is cancelled, such that you can adjust the UI or execute any other behind-the-scenes actions as needed.

For further details on some of the benefits offered by the UIImageView extension methods, see here. It also outlines some of the things that the extension does not cover that you should be aware of.

Takeaways

There are a few takeaways from this:

  • Take the time to look back through old code - there may be something from 'back in the day' that is no longer suitable or optimal given the fast paced nature of software development.

  • Open source is great. Alamofire has nearly 18,000 stars on GitHub. It is clear that a lot of people use it. Don't reinvent the wheel. If you want to spend time coding networking and caching mechanisms simply contribute to Alamofire :)

  • Be careful and considerate of your user base. I avoided any issue by catching this in advance. It is patently apparent that users like a good experience within your app, and they (generally) like to keep their money. If you are needlessly sending out network requests to load images, you would be surprised at how quickly data usage can add up. Users are not going to be very happy if you drain their data caps.