Our products

We are passionate about building niche multi-platform products that provide real utility to our customers.

  • Anfield.com - Keep up to date with Liverpool FC news from across the web. Available on the App Store.
  • DomainBites.com - Domain name industry news and resources. Available on the App Store.
  • PubReviews.com - Independent, social pub reviews. Find the best pubs near you and share your thoughts and photos. Available on the App Store and Google Play.
  • Mmmm.com - Find great food in Leeds, West Yorkshire. Capture your taste and share it with the world. Available on the App Store and Google Play.


We like to blog about software. Our stack, engineering problems, open source developments.. you will find it all discussed here.

Verifying an Ethereum signature on the server - PHP

Ethereum has an extremely strong Javascript ecosystem. There are fantastic open source projects such as ethereumjs-util which provide out of the box functionality for signing messages with an Ethereum account.

One downside to Javascript is that in many areas it poses security issues. One such security risk became apparent as a result of my efforts to implement persistent authentication on EthTools.com (still a work in progress - you were warned).

It is fairly easy to utilise open source projects (like ethereumjs-util) to sign arbitrary data messages. What is less easy however is to tell a server that someone has successfully verified their ownership of account x.

Well.. that is not strictly true - it is really easy to do exactly that. Simply build an API endpoint and fire off a request to it upon successful authentication.

The real problem is that it is really easy to create a 'fake' request and send it off to the aforementioned (easily discernible - just look in the console) endpoint. I could easily fire off a request saying that I had verified ownership of any account.
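To make the point concrete, a forged 'success' request is a one-liner. The endpoint URL and payload fields below are purely illustrative, not the real EthTools.com API:

```javascript
// Hypothetical sketch: an attacker can claim ownership of any address by
// sending the same request the legitimate client would send on success.
// The endpoint URL and payload fields here are illustrative, not real.
const forgedPayload = JSON.stringify({
  address: '0xde0b295669a9fd93d5f28d9ec85e40f4cb697bae', // any address at all
  authenticated: true,
});

// fetch('https://example.com/api/authenticate', { method: 'POST', body: forgedPayload });
console.log(forgedPayload);
```

This is why the client's claim alone can never be trusted - the server has to verify the signature itself.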

With cutting edge technology - especially technology that 'handles' real value - it is vital that security is given the importance and respect that it deserves. This is especially the case in light of the various attack vectors that have historically been exploited.

Furthermore, in its infancy Ethereum has attracted the best of the best - the people that know what they are doing. If there is a security vulnerability, someone will find it.

Now.. whilst it is possible to secure AJAX requests and make forgery harder, it is nigh on impossible to make things 100% secure. I needed another way.

The way I eventually settled on was simple - server side authentication.

Everyone can see

One great thing about interacting with the blockchain in the client is that within reason anyone so inclined can see exactly what you are doing. They can feel confident knowing that you are not sending their private key to someone else. How? They can look in the console and see each and every outgoing request.

The console

If a service were POSTing my private key anywhere I would be extremely concerned.

Within our implemented authentication flow a user can see that we are not sending any data anywhere - everything is done in the client.

Sadly however my authentication solution does require POSTing data.. but nothing important (some may disagree).

We POST the authenticated public key to our API endpoint. Whilst you cannot verify what we do with your public key on the server, there is not really anything nefarious we can do with just your public key - that is why it is public.

On the server we utilise the submitted public key to verify that the submitted signature was created by someone with knowledge of the corresponding private key. To be explicitly clear here - we do not know your private key, yet elliptic curve cryptography allows us to verify that the signature was created using it by simply using the public key.

This is the premise behind the ecrecover methods in ethereumjs-util and Solidity, except these work in the client and on the Ethereum blockchain respectively.

On the Ethereum forum chriseth gives the following useful explanation of ecrecover:

"The idea of ecrecover is that it is possible to compute the public key corresponding to the private key that was used to create an ECDSA signature given two additional bits which are usually supplied with the signature. The signature itself is the two (encoding of the) elliptic curve points r and s and v is the two additional bits needed to recover the public key. This also explains why the return type is address: It returns the address corresponding to the recovered public key (i.e. its sha3/keccak hash). This means to actually verify the signature, you check whether the returned address is equal to the one whose corresponding private key should have signed the hash."

We want the same functionality on the server.

Note: Solidity's ecrecover returns an address whereas ethereumjs-util's ecrecover returns a public key

Note: Whilst researching I found a number of interesting StackExchange questions on the subject matter.

The web3.js API docs also provide some insight into the parameters of ecrecover noting:

After the hex prefix, characters correspond to ECDSA values like this:

r = signature[0:64]  
s = signature[64:128]  
v = signature[128:130]

Note that if you are using ecrecover, v will be either "00" or "01". As a result, in order to use this value, you will have to parse it to an integer and then add 27. This will result in either a 27 or a 28.  
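The slicing described above can be sketched in Javascript; the sample signature below is fabricated for illustration:

```javascript
// Split a hex encoded signature (0x prefix already removed) into the
// ECDSA components r, s and v, following the slicing described above.
function splitSignature(signature) {
  const r = signature.slice(0, 64);
  const s = signature.slice(64, 128);
  // v arrives as '00' or '01'; parse it and add 27 to get 27 or 28.
  const v = parseInt(signature.slice(128, 130), 16) + 27;
  return { r, s, v };
}

// A fabricated 130 character signature: 64 chars of r, 64 of s, 2 of v.
const sig = 'aa'.repeat(32) + 'bb'.repeat(32) + '01';
console.log(splitSignature(sig).v); // 28
```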


EthTools.com is built on the Phalcon PHP framework.

There is no real Ethereum PHP community, and PHP has its shortcomings when it comes to dealing with numerical representations.

Then of course there is the small issue of elliptic curve cryptography being extremely complex, and me lacking any prior knowledge of its workings..

That said, after a significant amount of research, and a significant amount of playing, I managed to implement the ecrecover functionality in PHP.

Whilst discerning how to do this I wrote some 'notes' which I have tidied up and included below on the off chance they help someone else in the right direction.

My plan of action was to sign a message using ethereumjs-util with a known Ethereum private key. I would then mimic the code path of their ecrecover method in PHP and play until the outputted public key 'recovered' from the signature matched that of the original signing account.


Within Node, Buffers are arrays of unsigned 8 bit integers. When printed, the values are shown in their base 10 (decimal) representations.

With 8 bits there are 2^8 = 256 possible values (0 to 255). When a Buffer is created from a plain string, these integers are the code points of its characters in the UTF-8 character set.

Node utilises these buffers for data manipulation of the sort required for doing these kind of computations.

On the server we have various strings (the message hash, and the signature), but PHP does not know that the characters in these strings are base 16 numerical representations (hexadecimal).

Each character is a 'nibble' which requires 4 bits of data to represent (allowed hexadecimal characters are 0-9 and A-F).
As such, two hexadecimal characters correspond to 8 bits of data.

In Node, the string '61bf09' is converted into a Buffer by taking each set of two nibbles and converting it to its decimal form.

  • 61 becomes 97
  • bf becomes 191
  • 09 becomes 9
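In Node this conversion is a one-liner; for example:

```javascript
// Each pair of hex nibbles becomes one unsigned 8 bit integer.
const buf = Buffer.from('61bf09', 'hex');
console.log(Array.from(buf)); // [ 97, 191, 9 ]
```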

To do the equivalent in PHP we execute something like the following:

$r_byte_array = unpack('C*', hex2bin($r));

We call hex2bin which converts the hexadecimal string (without 0x) to its binary representation (base 2). By calling this method we are implicitly stating that the initial format is hexadecimal.

unpack then converts the string to an array of code points - our Buffer equivalent.

Initially PHP just thinks the string is UTF-8. If we don't call hex2bin first, the first int is 54.

unpack without hex2bin

This is because unpack simply converts the first character (6) to its UTF-8 code point (54). 64 characters = 64 code points.

When we tell unpack that we are dealing with hexadecimal, it converts each two character hexadecimal set (each character representing 4 bits of data) to its decimal representation. 61 (0x61) becomes 97. Our 64 character hexadecimal string becomes 32 8 bit integers.
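The distinction is easy to demonstrate in Javascript:

```javascript
// Treated as text, the character '6' is UTF-8 code point 54.
const asText = '6'.charCodeAt(0);

// Treated as hex, the pair '61' is the single byte 0x61 = 97.
const asHexByte = parseInt('61', 16);

console.log(asText, asHexByte); // 54 97
```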

unpack with hex2bin

You can see these different representations by having a play with this converter.

Now that you have an appropriately formatted representation of the message hash and the signature you can cheat..

I like to think of myself as relatively intelligent. That said, trying to fully understand, appreciate, and implement the 'secp256k1' elliptic curve is simply not going to happen. Furthermore.. why bother? It is another case of not reinventing the wheel.

I found a few libraries pertaining to secp256k1 in PHP.

I ended up using a combination of bits from all three libraries - I like to know what I am using, and have a basic (at least) understanding of what I am pushing to our servers. Given that the above libraries are fairly feature rich/complex it seemed pragmatic to simply extract what I needed for my relatively simple functionality.

After spending a significant amount of time getting my head around what is going on, I have finally managed to achieve what I was trying to achieve - I have managed to verify that a signature created in the client was signed by a particular private key.

I will now move forward with implementing the functionality that I had in mind when I first climbed into this rabbit hole.

Shimming dependencies of dependencies when working with Browserify

This post details why I wanted to implement Semantic-UI-React into our Revision.net project.

It was not however as simple as I would have liked as a result of our build process.

So that heavy Javascript (that is regularly utilised across the site) can be cached (and served from a CDN), we build the libraries on which the site depends into a separate 'libraries' Javascript file.

Our Reactive components which depend on these libraries then utilise them. We achieve this by utilising browserify-shim within our build process.

Essentially when we require('React'), the browserify-shim transform tells browserify (executed as part of the build process) that React is defined on the window object.

When building an individual Reactive component, we do not need to include the React codebase within the built Javascript. Instead we tell browserify that it will be available globally. This makes on the fly builds significantly quicker, and makes the product much faster (from a user experience point of view).
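For reference, a browserify-shim mapping of this sort lives in package.json. A minimal sketch (the exact globals are illustrative) might look something like:

```json
{
  "browserify": {
    "transform": ["browserify-shim"]
  },
  "browserify-shim": {
    "react": "global:React",
    "react-dom": "global:ReactDOM"
  }
}
```

With this in place, require('react') within a component resolves to window.React at runtime rather than bundling React into the built file.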

Usage of Semantic-UI-React is pretty straightforward. Unfortunately however I was encountering issues whereby Semantic-UI-React was not playing nicely with the build process - React was being built into the component's output Javascript. As React was already included in the aforementioned 'libraries' Javascript, this was causing issues as a result of the presence of multiple instances of React.

The problem was that the dependency of the project (Semantic-UI-React) also depends on React, yet was not utilising the shim configuration.

After a little research I stumbled upon this StackOverflow answer which outlines how simple it is to resolve this issue. It is simply the case of specifying the global option when calling browserify.

Now, it seems to be the case that you cannot specify options from within your package.json file. It is however super simple to call the shim transform from your build script.

const bundler = browserify(file.path, {debug: true});

bundler.transform('browserify-shim', {global: true});  

With this change I was able to build Semantic-UI-React into my project 'libraries' and utilise it within my React component classes. Awesome!

React and jQuery do not play nicely

React and jQuery do not play nicely. That is by design.

Recently I was attempting to implement modal functionality into our Revision.net project.

I was in a rush, and went ahead with trying to implement Semantic UI's modal offering as quickly as possible.

Semantic UI's Javascript functionality relies heavily on jQuery. At first sight this did not seem like any kind of issue to me. I simply created a Reactive modal component which I defined a reference on. I then created an onClick handler on a button such that when clicked, the modal was launched (using jQuery).

This worked. I continued with my life.

Unfortunately however this only worked because of the specific implementation. It was not correct, and does not make sense in the context of Reactive development.

The reason why is simple - React works on the basis of monitoring a virtual DOM and manipulating the real DOM. It is efficient because it only updates the real DOM with changes - when the state of a stateful component changes only the things that need to be updated are.

This is the reason why React asks that you define key properties when iterating through an array and outputting content for example. The key property allows React to monitor the output for that particular iteration such that if the data changes it can update what is displayed to the user.

The reason why jQuery and React do not play nicely is now most likely obvious.

When we execute something like the following (the exact selector is illustrative):

$('.ui.modal').modal('show');
Semantic UI explicitly manipulates the DOM, be it adding classes, changing inline styles, or adding content. This means that the virtual DOM that React is aware of differs from what is actually presented to the user. React can no longer reconcile changes.

I only noticed this when I started to receive console error logs stating Failed to execute 'removeChild' on 'Node'. The reason I had not seen these errors previously was because I was launching the modal but not making any state changes that caused React to attempt to re-render. As soon as I did.. boom.

This is outlined pretty succinctly by Paul O'Shannessy (one of the React devs) on this issue.

This comment on another issue provides further clarity.

Um.. Semantic-UI-React

At this point you might be punching your monitor noting that various (awesome) contributors have produced Semantic-UI-React - a version of Semantic UI that does not depend on jQuery, and takes the form of a series of Reactive components.

As I mentioned.. I was in a rush (not an excuse), was not thinking, and was also aware of a number of complications pertaining to utilising Semantic-UI-React within our build process. These are discussed here: Shimming dependencies of dependencies when working with Browserify.

Regardless, my perverse scenario/example illustrates nicely as to why React and jQuery do not play nicely.


My naive attempt to implement the jQuery version of Semantic UIs modal also threw up another issue, namely the positioning within the DOM of the HTML code for my modal.

When I first realised the problems with my implementation approach, I figured that I could simply use Reactive state to add the appropriate CSS classes to my modal containers so as to show/hide them. In principle you can absolutely do this. In the case of modals however there is the additional complexity of having the HTML for your modal external to the containing parent (such that it can be positioned appropriately).

If I simply specify the modal div structure within my React component and then change its CSS classes, it is subject to the styling associated with the parent. Instead I want the modal HTML (DOM structure) external to its parent whilst still allowing it to be controlled/manipulated.

The way that Semantic-UI-React does this is by utilising the Portal pattern.

This old StackOverflow answer details a basic implementation of this.

Once again in an attempt to do things quickly I encountered simple issues that would have been patently apparent had I taken a moment prior to putting metaphorical 'pen to paper'.

Hopefully this provides a little clarity, and prevents you from wasting time when working with React and third party libraries.

Uploading images using the Facebook and Twitter APIs

I recently decided that adding image uploads to Revision.net was of vital importance.

I personally use the Revision.net tooling on a daily basis to schedule my social media posts. Recently I have found myself taking significantly more photos, and as such I wanted an easy way to post them to Facebook and Twitter at a specific time in the future.

In principle this should have been a relatively simple feature to implement. I did however encounter a few small issues which I wanted to document here.


Given the nature of the product, I wanted to allow a user to upload images directly to the Revision.net server. As the tool schedules posts to be submitted in the future this seemed like the logical approach.

Uploading photos directly to the Facebook and Twitter APIs on submission limits what we can and cannot do as regards future functionality. Furthermore, a brief look at the respective API documentation states:

  • Twitter: "The returned media_id is only valid for expires_after_secs seconds."

  • Facebook: "After you upload an unpublished photo, Facebook stores it in a temporary upload state, which means it will remain on Facebook servers for about 24 hours."

That is to say that were we to upload the images upon submission (to Revision.net), they would possibly not be available when we want to actually submit the status/tweet (to the respective API) at some point in the future.

The first specification requirement was thus image upload. This post will not go into how one can do that. The more crucial aspect of my implementation was the user experience - implementing image upload within the current React based interface so it looks good and functions seamlessly.

Uploading an image

The second part of the specification was obviously the uploading/transmission of the photos from our server to the respective APIs at/before the scheduled posting time, and then attaching them to the respective posts.


Twitter provides a guide to uploading media. Facebook provides documentation on photo uploads.

Reading these pieces of documentation you will discern that Twitter requires you to upload a file's binary data directly to their API OR a base64 encoded representation of the image. The Facebook API is more flexible in that it allows uploading binary file data, but it also allows you to specify an image URL from which Facebook will 'pull' the image.

The implementation is largely the same across platforms - you 'upload' the images, and then you post the Tweet/status attaching the unique identifiers of the uploaded images.


For interacting with the Twitter API I have been utilising twitter-api-php by James Mallison. I had been using my own extended fork of an older version of his codebase. I encountered a few problems with submitting image identifiers with a Tweet (once uploaded) as a result of some CURL encoding issues.

The latest version of the codebase works flawlessly out of the box, and even includes example implementations within the test suite.

I chose to continue with my own fork and implement upload of binary image data. This requires the specification of the Content-Type: multipart/form-data header within the CURL request.

The Twitter API upload endpoint (https://upload.twitter.com/1.1/media/upload.json) returns a media_id and a media_id_string value. These are appended to a comma separated string and then submitted as the media_ids parameter to the https://api.twitter.com/1.1/statuses/update.json endpoint.
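For illustration, building the media_ids parameter from the upload responses looks something like this (the identifier values are fabricated):

```javascript
// Hypothetical responses from the media/upload endpoint.
const uploadResponses = [
  { media_id_string: '710511363345354753' },
  { media_id_string: '710511363345354754' },
];

// media_ids is simply the comma separated list of identifiers.
const mediaIds = uploadResponses.map((r) => r.media_id_string).join(',');
console.log(mediaIds); // 710511363345354753,710511363345354754
```

Note that using media_id_string rather than the numeric media_id avoids precision loss, as the numeric form can exceed Javascript's safe integer range.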

One of the benefits of simple, bare bones open source projects like this is that you can (and should) read all of the source code. The simplicity makes it manageable, and easily extensible.


Facebook provide their own comprehensive SDKs. I am using their PHP SDK (version 5.1).

Uploading an image simply requires firing off a POST request to the me/photos endpoint.

In my workflow (as alluded to above), I specify the published=false parameter. This prevents the photo from being shown on the user's wall - it will show on their wall when you attach the image to a status submission.

This call returns an id which is then posted as a JSON encoded parameter to the me/feed endpoint.

$postData['attached_media[' . $i . ']'] = json_encode(array('media_fbid' => $facebookImageId));

Facebook Pages

One great thing about how Facebook have built their API is that pages and user accounts are identified by a type agnostic unique identifier.

Revision.net allows scheduling posts to user accounts and user administrated pages. To achieve this we simply use the endpoints unique-identifier/photos and unique-identifier/feed regardless of whether the unique-identifier references a page or an account.
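As a sketch, the endpoint construction is identical for both cases:

```javascript
// The identifier can reference either a page or a user account -
// the endpoint templates do not care which.
const photoEndpoint = (id) => `${id}/photos`;
const feedEndpoint = (id) => `${id}/feed`;

console.log(photoEndpoint('1234567890')); // 1234567890/photos
```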


The few issues that I had when working with these APIs were relatively simple to debug.

Both twitter-api-php and the Facebook PHP SDK use CURL behind the scenes. If you cannot get things working, I recommend debugging utilising CURL from the command line.

Twitter also provide twurl which makes the authorisation aspect of testing through the command line significantly simpler.

The Facebook SDK is feature rich, and it handles the processing of API errors and converts them to easily debuggable Exceptions. These relate to permissions, and access token validity for example. Simply catch these exceptions and handle them appropriately.

The only minor time drain that I had when developing this functionality was an Exception message from Facebook stating Unsupported post request. This occurred when attaching a photo that had already been published (having not set published=false when uploading the image) to a call to the feed endpoint.
This error message is pretty unhelpful, and there seem to be a number of unanswered questions about it on StackOverflow. I suggest that should the above not resolve your issue, you:

  • Make sure the image you are trying to attach was uploaded by the same user.

  • Make sure that it has not already been 'published' through another medium.

  • Make sure the user has the appropriate permissions.

  • Make sure that the image has actually been uploaded correctly and that you can read data about it by calling the /v2.8/{photo-id} endpoint.

Given the simplicity of the twitter-api-php codebase similar Exception based simplicity is not provided out of the box. That said, the Twitter API does return JSON encoded error data which you can process and handle accordingly.
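As a sketch, processing that error data is straightforward; the response body below is a representative example of Twitter's error envelope:

```javascript
// Twitter returns errors as { "errors": [ { "code": ..., "message": ... } ] }.
function extractTwitterErrors(responseBody) {
  const parsed = JSON.parse(responseBody);
  return (parsed.errors || []).map((e) => `${e.code}: ${e.message}`);
}

const body = '{"errors":[{"code":88,"message":"Rate limit exceeded"}]}';
console.log(extractTwitterErrors(body)); // [ '88: Rate limit exceeded' ]
```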

I have taken to handling all obvious errors, and logging anything that is not being (currently) handled. I can then monitor any issues that are occurring and implement appropriate handling down the line.

Intricacies and further considerations

It is probably apparent that my requirements are very basic - the uploading and scheduling of simple images via the Facebook and Twitter APIs.

Both APIs do however offer more complex endpoints for uploading other forms of media as well as for more complex upload requirements (resuming uploads etc).

One consideration of note is that the same user (API credentials) is used to post the tweet/status as was used to upload the photo. Twitter does allow the setting of the additional_owners property upon upload such that you can (for example) upload an image with your own site administrative credentials whilst specifying the user as the owner. This was not however appropriate for my use case - uploading and submitting as the user allows for permission verification, and error handling as part of the 'flow'.


Hopefully the above provides some pretty clear insight into how one can work with the Twitter and Facebook APIs for uploading images.

All things considered, they are very well built, and very easy to work with.

If you have any specific questions or concerns, please let me know and I will do my best to answer them.