Our products

We are passionate about building niche multi-platform products that provide real utility to our customers.

Anfield.com - Liverpool FC news from across the web
Keep up to date with Liverpool FC news from across the web.
Download the Anfield.com app on the App Store.

DomainBites.com - Domain industry news
Domain name industry news and resources.
Download the DomainBites.com app on the App Store.

PubReviews.com - Independent, social pub reviews
Find the best pubs near you. Share your thoughts and photos.
Download the PubReviews.com app on the App Store or Google Play.

Mmmm.com - Find great food in Leeds, West Yorkshire
Capture your taste and share it with the world.
Download the Mmmm.com app on the App Store or Google Play.


We like to blog about software. Our stack, engineering problems, open source developments... you will find it all discussed here.

Uploading images using the Facebook and Twitter APIs

I recently decided that adding image uploads to Revision.net was of vital importance.

I personally use the Revision.net tooling on a daily basis to schedule my social media posts. Recently I have found myself taking significantly more photos, and as such I wanted an easy way to post them to Facebook and Twitter at a specific time in the future.

In principle this should have been a relatively simple feature to implement. I did however encounter a few small issues which I wanted to document here.


Given the nature of the product, I wanted to allow a user to upload images directly to the Revision.net server. As the tool schedules posts to be submitted in the future this seemed like the logical approach.

Uploading photos directly to the Facebook and Twitter APIs on submission would limit what we can and cannot do with regard to future functionality. Furthermore, a brief look at the respective API documentation reveals:

  • Twitter: "The returned media_id is only valid for expires_after_secs seconds."

  • Facebook: "After you upload an unpublished photo, Facebook stores it in a temporary upload state, which means it will remain on Facebook servers for about 24 hours."

That is to say that were we to upload the images upon submission (to Revision.net), they might no longer be available when we want to actually submit the status/tweet (to the respective API) at some point in the future.
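To make the constraint concrete, here is a small hypothetical check (the helper name and timestamps are mine, not part of either API) showing why uploading at submission time cannot work for posts scheduled days ahead:

```javascript
// Hypothetical helper: decide whether a Twitter media_id obtained at
// upload time would still be valid at the scheduled posting time,
// given the expires_after_secs value returned by the upload endpoint.
function mediaStillValid(uploadedAtMs, expiresAfterSecs, postAtMs) {
    const elapsedSeconds = (postAtMs - uploadedAtMs) / 1000;
    return elapsedSeconds < expiresAfterSecs;
}

// A post scheduled a week after upload is well past a 24 hour window
const oneWeekMs = 7 * 24 * 3600 * 1000;
console.log(mediaStillValid(0, 86400, oneWeekMs)); // false
```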

The first specification requirement was thus image upload. This post will not go into how one can do that. The more crucial aspect of my implementation was the user experience - implementing image upload within the current React-based interface so that it looks good and functions seamlessly.

Uploading an image

The second part of the specification was obviously the uploading/transmission of the photos from our server to the respective APIs at/before the scheduled posting time, and then attaching them to the respective posts.


Twitter provides a guide to uploading media. Facebook provides documentation on photo uploads.

Reading these pieces of documentation you will discern that Twitter requires you to upload either a file's binary data directly to their API or a base64-encoded representation of the image. The Facebook API is more flexible: it allows uploading binary file data, but it also allows you to specify an image URL from which Facebook will 'pull' the image.
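As a concrete illustration of the base64 option, here is a minimal Node.js sketch (the helper name is mine; in practice the bytes would come from reading the uploaded file on disk):

```javascript
// Sketch: preparing image bytes for Twitter's base64 option. The
// upload endpoint accepts a base64-encoded representation of the
// file's binary data as the media_data parameter.
function encodeImageForUpload(bytes) {
    // Buffer handles the binary-to-base64 conversion in Node.js
    return Buffer.from(bytes).toString('base64');
}

console.log(encodeImageForUpload('abc')); // YWJj
```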

The implementation is largely the same across platforms - you 'upload' the images, and then you post the Tweet/status attaching the unique identifiers of the uploaded images.


For interacting with the Twitter API I have been utilising twitter-api-php by James Mallison. I had been using my own extended fork of an older version of his codebase, and encountered a few issues with submitting image identifiers with a Tweet (once uploaded) as a result of some cURL encoding issues.

The latest version of the codebase works flawlessly out of the box, and even includes example implementations within the test suite.

I chose to continue with my own fork and implement upload of binary image data. This requires specifying the Content-Type: multipart/form-data header within the cURL request.

The Twitter API upload endpoint (https://upload.twitter.com/1.1/media/upload.json) returns a media_id and a media_id_string value. These identifiers are appended to a comma-separated string which is then submitted as the media_ids parameter to the https://api.twitter.com/1.1/statuses/update.json endpoint.
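A sketch of that step (the function name is mine): collecting the identifiers from a set of upload responses and joining them into the media_ids parameter. media_id_string is the safer field to use in Javascript, where the numeric media_id can exceed the safe integer range.

```javascript
// Sketch: build the media_ids parameter for statuses/update from the
// responses returned by media/upload.
function buildMediaIdsParam(uploadResponses) {
    return uploadResponses
        .map(function (response) { return response.media_id_string; })
        .join(',');
}

const param = buildMediaIdsParam([
    { media_id_string: '710511363345354753' },
    { media_id_string: '710511363345354754' },
]);
console.log(param); // 710511363345354753,710511363345354754
```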

One of the benefits of simple, bare bones open source projects like this is that you can (and should) read all of the source code. The simplicity makes it manageable, and easily extensible.


Facebook provide their own comprehensive SDKs. I am using their PHP SDK (version 5.1).

Uploading an image simply requires firing off a POST request to the me/photos endpoint.

In my workflow (as alluded to above) I specify the published=false parameter. This prevents the photo from being shown on the user's wall - it will only show there once you attach the image to a status submission.

This call returns an id which is then posted as a JSON encoded parameter to the me/feed endpoint.

$postData['attached_media[' . $i . ']'] = json_encode(array('media_fbid' => $facebookImageId));
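The same construction expressed in Javascript (a sketch mirroring the PHP line above; the function name is mine):

```javascript
// Sketch: one attached_media[i] parameter per previously uploaded
// (unpublished) photo, each a JSON-encoded object with a media_fbid key.
function buildAttachedMedia(facebookImageIds) {
    const postData = {};
    facebookImageIds.forEach(function (facebookImageId, i) {
        postData['attached_media[' + i + ']'] = JSON.stringify({ media_fbid: facebookImageId });
    });
    return postData;
}

const postData = buildAttachedMedia(['1234567890']);
console.log(postData['attached_media[0]']); // {"media_fbid":"1234567890"}
```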

Facebook Pages

One great thing about how Facebook have built their API is that pages and user accounts are identified by a type agnostic unique identifier.

Revision.net allows scheduling posts to user accounts and user administrated pages. To achieve this we simply use the endpoints unique-identifier/photos and unique-identifier/feed regardless of whether the unique-identifier references a page or an account.


The few issues that I had when working with these APIs were relatively simple to debug.

Both twitter-api-php and the Facebook PHP SDK use cURL behind the scenes. If you cannot get things working I recommend debugging using cURL from the command line.

Twitter also provide twurl which makes the authorisation aspect of testing through the command line significantly simpler.

The Facebook SDK is feature rich, and it processes API errors and converts them into easily debuggable exceptions - relating, for example, to permissions and access token validity. Simply catch these exceptions and handle them appropriately.

The only minor time drain that I had when developing this functionality was an Exception message from Facebook stating Unsupported post request. This occurred when attaching a photo to a call to the feed endpoint when that photo had already been published (having not set published=false when uploading the image).
This error message is pretty unhelpful, and there seem to be a number of unanswered questions about it on StackOverflow. Should the above not resolve your issue, I suggest that you:

  • Make sure the image you are trying to attach was uploaded by the same user.

  • Make sure that it has not already been 'published' through another medium.

  • Make sure the user has the appropriate permissions.

  • Make sure that the image has actually been uploaded correctly and that you can read data about it by calling the /v2.8/{photo-id} endpoint.

Given the simplicity of the twitter-api-php codebase, similar exception-based error handling is not provided out of the box. That said, the Twitter API does return JSON-encoded error data which you can process and handle accordingly.

I have taken to handling all obvious errors, and logging anything that is not (currently) being handled. I can then monitor any issues that occur and implement appropriate handling down the line.

Intricacies and further considerations

It is probably apparent that my requirements are very basic - the uploading and scheduling of simple images via the Facebook and Twitter APIs.

Both APIs do however offer more complex endpoints for uploading other forms of media as well as for more complex upload requirements (resuming uploads etc).

One consideration of note is that the same user (API credentials) is used to post the tweet/status as was used to upload the photo. Twitter does allow the setting of the additional_owners property upon upload such that you can (for example) upload an image with your own site administrative credentials whilst specifying the user as the owner. This was not however appropriate for my use case - uploading and submitting as the user allows for permission verification, and error handling as part of the 'flow'.


Hopefully the above provides some pretty clear insight into how one can work with the Twitter and Facebook APIs for uploading images.

All things considered, they are very well built, and very easy to work with.

If you have any specific questions or concerns, please let me know and I will do my best to answer them.

Revisiting gulp. Realtime JSX compilation and browserification.

Today I decided that it was worth investing some time in revisiting the build tools that are used across the Double Negative web properties.

We use gulp pretty much universally across our projects. In fact not only do most projects use gulp, but most projects use very similar build processes full stop.

Why? Most of our properties are built with and run on similar technologies - React, Browserify, v8js etc.

It thus seemed pretty stupid that each project maintained its own completely independent build process, such that when I made 'big' changes and/or learned new things I would have to implement them across every project. I would inevitably forget, and then spend time down the line debugging problems that I had already fixed numerous times before.

What I wanted to achieve

I remember watching Bret Victor's Inventing on Principle and thinking that the kind of development processes he demonstrates are what I want to have in place for Double Negative.

That is to say that I want to be able to make a change and see what it does in 'as realtime as possible'.

Now... writing code in JSX, and compiling it to Javascript (giving consideration to ES6 transformations etc.) is simply not instantaneous. That said, compiling a single JSX file is pretty darn quick. Milliseconds.

I want to have 'something' occurring in the background which upon saving a JSX file recompiles just that file (OK... and any files that depend on it) to Javascript such that by the time I have switched to my browser window, and clicked refresh, the updates are ready to test.

That distinction in itself is very important. Previous 'versions' of my build processes simply recompiled all the JSX files after a single change, because it was not that slow - a few seconds. But... those few seconds really (and surprisingly) mess with your head and your workflow.


I was aware that Gulp 4 had functionality pertaining to sequential execution of tasks. Having however previously had issues with Gulp 4, I wanted a solution that worked with Gulp 3.

I stumbled upon the run-sequence package as outlined in this StackOverflow answer. It does what it says on the tin, but I had a few issues in that it was seemingly considering incomplete asynchronous tasks to be complete. My code was being browserified before it had all been compiled from JSX.

I investigated asynchronicity and noted from this answer that a return statement informs gulp that the task in question is over. This resolved my issue.
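The principle does not require gulp to demonstrate: an orchestrator can only wait on asynchronous work that the task hands back (by returning a stream or promise, or invoking its callback). A plain-promise sketch:

```javascript
// Sketch: a task that returns its asynchronous work can be sequenced;
// the next step only starts once the previous one resolves.
const order = [];

function compileJsx() {
    // Returning the promise exposes completion to the caller
    return Promise.resolve().then(function () {
        order.push('jsx');
    });
}

function browserify() {
    order.push('browserify');
}

const done = compileJsx().then(browserify).then(function () {
    console.log(order.join(',')); // jsx,browserify
});
```

Without the return statement, the caller would have nothing to wait on and would proceed immediately - exactly the premature-browserify behaviour described above.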

My next issue was what exactly to watch. I knew that when I edited a JSX file I wanted it to recompile, and then be browserified. I also wanted Javascript files to be browserified when edited directly (in the cases where they were not compiled from JSX).

This posed a problem in that each time a JSX file finished compiling it was considered a JS file edit, and as such the watch on JS files was triggered. If multiple JSX files (or files that depended on them) changed, then the second watch task would be triggered multiple times.

I resolved this in a non-perfect, but clean and effective way. Two watch tasks, and a global boolean indicating the status of ongoing tasks.

My final code looked like this:

//Watches our .jsx files and our .js. If any change, it calls the build task.

const gulp        = require('gulp');
const runSequence = require('run-sequence');
const gutil       = require('gulp-util');

//Boolean to hold the state of the JSX watch task
let taskOngoing = false;

//This task simply sets the taskOngoing boolean to false
gulp.task('jsxdone', function(callback) {

    gutil.log('JSX done executed');

    taskOngoing = false;

    callback();

});

//Does the 'watching'
gulp.task('watch', function() {

    //Watches .jsx files
    gulp.watch('../../public/assets/jsx/**/*.jsx', function(){

        //Indicate that we are running the JSX watch
        taskOngoing = true;

        gutil.log('Running JSX watch');

        //Once the jsx task is executed we will execute jsxdone
        runSequence('jsx', 'jsxdone', 'browserify');

        //THIS line is discussed below
        gutil.log('Run sequence is NOT finished');

    });

    //Watches .js files
    gulp.watch('../../public/assets/js/**/*.js', function(){

        if (!taskOngoing) {

            gutil.log('Running Javascript watch');

            //Re-browserify the directly edited Javascript
            runSequence('browserify');

        } else {

            //If our jsx - browserify watch task is executing then we will not allow this watch to execute
            gutil.log('SKIPPING Javascript watch. JSX task ongoing');

        }

    });

    //Watches .css file for changes
    gulp.watch('../../public/assets/css/simple.css', ['cssconcat']);

});
It works as expected.


This is another case whereby I feel that had I read through the gulp documentation in advance, things would have progressed a lot quicker.

That said, I achieved what I wanted to achieve and noted some of the interesting intricacies of modern day Javascript in the process.

For example:

  • I always forget that let is block scoped. That is why I define taskOngoing 'globally' outside of my 'watch' task body.

  • Javascript is great because you can do asynchronous things easily. Javascript is annoying when you do not know which things are being executed asynchronously :)

And some of the intricacies of gulp:

Take for example the line commented with 'THIS line is discussed below'. If you follow your log output (use gulp-util), this log will appear in what might seem like the wrong place. Because runSequence starts asynchronous tasks and returns immediately, this log may well be output before any of the logs contained within the sequenced tasks, which only run one after another as each previous task completes.
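This ordering is easy to reproduce without gulp at all - a sketch:

```javascript
// Sketch: a log after a call that merely *starts* asynchronous work
// runs before the work itself - just like the 'Run sequence is NOT
// finished' log in the watch task above.
const logs = [];

function sequencedTasks() {
    // Simulates tasks kicked off by runSequence
    return Promise.resolve().then(function () {
        logs.push('sequenced tasks finished');
    });
}

const pending = sequencedTasks();
logs.push('line after the call'); // executes first

pending.then(function () {
    console.log(logs.join(' | ')); // line after the call | sequenced tasks finished
});
```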


Now when I begin a development session I simply make sure that my 'watch' task is running (gulp watch). If I change a JSX file then it is recompiled, as are any other files which depend on it. Once (and only once) all the aforementioned JSX files are recompiled our 'browserify' task is called which (again) only 'browserifies' the Javascript files which have changed.

As some of our products use raw, untranspiled Javascript (gosh!) we also watch those files such that when such Javascript files are modified directly they are re-browserified.

The initial problem

I noted that one part of the initial problem that I was trying to solve was that I used similar build processes across sites.

The final part of the puzzle was implementing this 'generic' build process as a git project in and of itself and including it within our projects as a git submodule.

The obvious downside to this is that it implies that my build process for all sites is exactly the same when it is in fact not. My resolution to this is simply to maintain a branch of the build project for each site. Generic changes to the build process can be merged into the individual branches, and the status of such merges can be easily and logically tracked from the repository.

Problem solved.

Shout outs

I skipped over some implementation details of the individual JSX/browserify tasks.

Some of the NPM packages that I use are:

  • gulp-dependencies-changed - for discerning which files have dependencies that have changed.

  • gulp-newer - for discerning which JSX files are 'newer' than the JS files that they compile to.

  • gulp-babel - transforms the latest Javascript syntax into syntax which will work now.

Take a look, and have a play. If you have any issues, or questions then let me know and I will do my best to point you in the right direction.

Ethereum contract deployment and interaction

As an engineer, it is inherently of interest to me to investigate the 'languages' available to developers on the Ethereum platform.

Solidity is described as a "contract-oriented, high-level language whose syntax is similar to that of JavaScript and it is designed to target the Ethereum Virtual Machine".

It is actively being developed, and as a result there are a lot of discussions about its strengths and weaknesses.

Whilst there is some fantastic documentation, supplemented by the example code provided by the foundation, I had yet to find a clear and concise web-based interface for generically and universally submitting and interfacing with contracts on the Ethereum network.

As such, we have built this functionality over at EthTools.com.

Submitting a contract

Submitting a contract requires a user to submit a transaction to the blockchain containing the bytecode of the contract in question.

To get this bytecode one must compile (convert) their Solidity source code.

This can be done through the command line using the solc compiler, but for simplicity the Ethereum developers have created browser solidity which is a web based mechanism for source code compilation.

We have taken the functionality of browser solidity and implemented it as a step by step process that integrates with the other EthTools.com tooling to make the compilation, signing, and submission of a contract as simple as possible.

We have implemented functionality to allow quick, painless, one click linking of libraries as well as allowing a user to specify additional options to the compiler.

Submit a contract in easy steps

It really is as easy as 1, 2, 3... 4, 5 :)

Interacting with a contract

Contracts provide function definitions which when called 'do something'.

There is little point in having deployed a contract if you cannot interact with it.

Again we have implemented functionality such that any contract can be interacted with through a web based interface. You simply provide the arguments (if there are any) to the function in question, and click a button.

Interact with a contract from your browser

Reading from the Ethereum blockchain is free. As such any functions which simply return data can be called with one click. These results are returned near instantaneously.

Functions which write to the blockchain however incur costs - the costs paid to the miners who secure the blockchain. As such, once again, our interact functionality integrates directly with our wallet tools such that you can pay the gas fees associated with a write operation.

As write operations require miners to 'mine' the transaction onto the blockchain, their execution is not instantaneous. There may be a short delay whilst the transaction is confirmed as having been included in a block on the chain. As a result no return values are received (and thus displayed) instantly. You can however utilise Events to trigger a response when the transaction has been mined. We will display a log of events triggered by a given function call as/when they are received.

The long and the short of it is that using this tooling you can submit and interact with an Ethereum contract without leaving the page.

Verifying a contract

To be able to display the available methods that one can call on a deployed contract, we need to know what they are.

Compilation leaves us with bytecode from which one cannot inherently discern a contract's interface.

Whilst the Solidity compiler returns an Application Binary Interface, sadly (:P) not all contracts are deployed to the blockchain using EthTools.com.

To discern the interface for a contract submitted through another platform we require you to verify it.

This requires you to submit your contract's source code, and to select the version of the compiler with which it was originally compiled. We then recompile the code, and verify that the generated bytecode matches that found on the blockchain.
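A sketch of the comparison step (an illustrative assumption, not our actual code): deployed bytecode from recent solc versions ends with a metadata/swarm hash block that can differ between otherwise identical compilations, so a tolerant comparison may strip it before matching.

```javascript
// Sketch: compare freshly compiled bytecode against the code stored on
// chain, ignoring 0x prefixes, letter case, and (if present) the
// trailing swarm-hash metadata block appended by recent solc versions.
function normaliseBytecode(bytecode) {
    return bytecode
        .toLowerCase()
        .replace(/^0x/, '')
        .replace(/a165627a7a72305820[0-9a-f]{64}0029$/, '');
}

function bytecodeMatches(compiled, onChain) {
    return normaliseBytecode(compiled) === normaliseBytecode(onChain);
}

const metadata = 'a165627a7a72305820' + '0'.repeat(64) + '0029';
console.log(bytecodeMatches('0x6001600101' + metadata, '6001600101')); // true
```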

Not only does this allow you to interact with the contract through EthTools.com, but it also allows us to present information about your contract to our other users.

Other users can read through your source code, and verify that it does what you say it does. This mechanism provides confidence and security to your users.

Moving forward

Whilst the functionality is heavily tested, we are iterating quickly and breaking things.

Please let us know if you encounter any issues with the functionality, or have any suggestions for its further development.

Whilst tested with simple contracts, we have not tested (and don't advise) using this functionality for extremely large and complex contracts. Compilation will be slow, and reading and displaying extremely complex interfaces may not offer an optimal user experience.

This functionality is really provided to make development and testing of simple modular functionality as quick and easy as possible.

It is fantastic, for example, as a sandbox for developing your first Solidity contracts.

Enjoy !

EthTools.com - Tools for Ethereum

Today I am pleased to publicly release EthTools.com.

EthTools.com offers a selection of tools for interacting with the Ethereum network.

The idea behind the project was to focus on making extremely accessible tools that anyone (regardless of technical experience) can use and understand.

Greater adoption of Ethereum by the masses requires that we make somewhat complex concepts accessible to all.


Having read the discussions on the Ethereum reddit, the questions on the Ethereum Stack Exchange, and the conversations on Twitter we have developed the following tools.


The wallet functionality allows a user to generate an Ethereum account by following a simple step by step process.

We utilise BIP39 and generate a 12 word mnemonic that can be used to load/recover your account.

Alongside the creation functionality, you can load a previously generated wallet (by using the aforementioned mnemonic) or you can import a wallet from a Geth/Parity keyfile.


We have also developed functionality for submitting a contract to the blockchain.

Our step by step process allows compilation of Solidity code, and submission to the blockchain.

It all links in with our wallet functionality such that you can 'load' the appropriate account (for paying the submission gas costs).


To bootstrap various parts of our functionality, and in preparation for some of our future ideas we have created a basic blockchain explorer.

This functionality allows you to see blocks, and transactions in real time. You can dive in and investigate individual blocks, transactions, and addresses.

We are still pulling and collating block and transaction data from the blockchain. As such some data is currently missing. We are pulling the data as quickly as possible.


We intend to build any tools that would be useful to Ethereum users.

So far the only tool we have is our unit converter, which allows you to convert quickly and easily between amounts in the different currency denominations (Wei, Szabo, Ether etc.) of the network.
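The conversion itself is simple: each denomination is a fixed power of ten of Wei. A sketch of the logic (an illustration, not the EthTools.com implementation) using BigInt to avoid precision loss with 18-decimal Ether values:

```javascript
// Wei values of the common Ethereum denominations
const WEI_PER_UNIT = {
    wei:    1n,
    szabo:  10n ** 12n,
    finney: 10n ** 15n,
    ether:  10n ** 18n,
};

// Convert a whole-number amount of a given unit into Wei
function toWei(amount, unit) {
    return BigInt(amount) * WEI_PER_UNIT[unit];
}

console.log(toWei(1, 'ether').toString()); // 1000000000000000000
```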


We have created a number of guides which seek to explain various concepts in a clear and concise manner.

These include:

Where possible we try to link to other resources that are useful and will allow a reader to expand their knowledge of a particular subject matter.


EthTools.com uses (and has taken ideas from) a number of fantastic open source projects including:

Ethereum has a thriving developer community that is expanding every day. I highly recommend reading through the source code of these projects to get an idea of the 'behind the scenes' of interacting with the network.

What next

We are actively listening to ideas for further tools that would be useful to the Ethereum community.

If there is something that would help you interact with the Ethereum network, please do leave a comment and let us know.

Please do take a look at the EthTools.com website and try out the various pieces of functionality.

Let us know if you have any issues, comments, or suggestions.