Monitoring a Greenhouse with Ubuntu

credit: Creative Commons - ShareAlike 3.0

There is a wide range of solutions on the market today to help the avid gardener with their plants. Some are low-tech, battery-less meters designed to give you a simple view of how your plant is doing; others are much more high-tech (and expensive). Two of the most important variables to monitor in a greenhouse are temperature and humidity: too hot and plants start to scorch and wilt, too cold and they can be damaged, and humidity is important for a whole host of plants you may find in a greenhouse.

At home I have a modest greenhouse; at 3 metres by 1.8 metres it has just enough room to start off delicate plants early and keep the family supplied throughout the season with chillies, tomatoes, cucumbers, strawberries, herbs, and even lemons. Although I can just barely make out the digital temperature gauge hanging on the inside of the greenhouse from my office window, I thought it would be much better to have a more high-tech monitoring solution. After a bit of research I came across the Xiaomi Temperature and Humidity sensor, which can be bought for as little as £9 from several Chinese online retailers. The sensor itself is a small, ZigBee-based device powered by a CR2032 button cell battery and capable of detecting temperatures from -20 to 60 degrees Celsius as well as humidity, perfect for this use case. The sensor does also need the Xiaomi gateway to work, but I already had one of those. Another constraint is that any device you use to read the data needs to be within ZigBee range which, admittedly, is tens of metres, so not a problem here.

Xiaomi produce an Android and iOS application that you can use to read the sensor data but after the initial (required) setup I wanted to use the data elsewhere.

Necessary Hardware and Software


The initial idea was to put together a small, dedicated, low-powered device, so the obvious choice was a Raspberry Pi 3. The Ubuntu MATE project produce an image for the Raspberry Pi, so a short install later, followed by enabling SSH access (sudo systemctl enable ssh), I had a fresh Ubuntu environment to get started.

In addition to Ubuntu MATE I am using MQTT, in the form of Mosquitto, to send messages around the network; the mihome Python bindings from jon1012 on GitHub; InfluxDB to store the data; and Grafana to visualise it. Installing Mosquitto can be done via the snap (sudo snap install mosquitto) or from the .deb. Setting up InfluxDB and Grafana is a little more involved, but both have great step-by-step instructions on their respective websites.

Mosquitto needs to be running as well as the mihome software that sniffs the network for ZigBee packets. With both of these in place it’s just a matter of capturing the bits we are interested in and putting them in an InfluxDB database (warning, I’m no Python expert!).
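Before writing any code it is worth checking that readings are actually arriving on the broker. Assuming Mosquitto is running locally and the mihome bridge publishes under the xiaomi/ prefix (the topic layout used in the script below), the standard mosquitto_sub client can be used to watch the raw messages:

```shell
# Watch every message published under the xiaomi/ prefix,
# printing each topic alongside its payload (-v)
mosquitto_sub -h localhost -t 'xiaomi/#' -v
```

If nothing appears here, there is no point debugging the Python side; check the gateway and the mihome software first.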

import datetime

import paho.mqtt.client as mqtt
from influxdb import InfluxDBClient


topics = ["xiaomi/sensor_ht/158d0001ad37b7/temperature",
          "xiaomi/sensor_ht/158d0001ad37b7/humidity"]


def on_message(client, userdata, message):
    msg = message.payload.decode("utf-8")
    prefix, device, sid, prop = message.topic.split("/")

    rectime = datetime.datetime.utcnow()

    topic = prop + "_" + sid

    isfloatval = False

    # See if the message is a temperature or humidity data entry (float)
    try:
        # The sensor reports values in hundredths of a unit
        val = float(msg) / 100
        isfloatval = True
    except ValueError:
        print("Not a temperature or humidity value")

    if isfloatval:
        json_body = [{
            "measurement": topic,
            "time": rectime,
            "fields": {
                "value": val
            }
        }]
        dbclient.write_points(json_body)


# Set up mqtt client (the broker runs on the Pi itself)
client = mqtt.Client("sensors")
client.on_message = on_message
client.connect("localhost")

# Set up InfluxDB client
dbclient = InfluxDBClient('localhost', 8086, 'root', 'root', 'sensordata')

# Subscribe to topics
for t in topics:
    print("Subscribing to topic " + t)
    client.subscribe(t)

# Loop and wait until interrupted
try:
    while True:
        client.loop()
except KeyboardInterrupt:
    print("interrupted by keyboard")


The code is pretty simple: set up the MQTT and InfluxDB clients, both of which are hosted on the Raspberry Pi itself, and loop looking for the MQTT messages listed in the topics array. In the code above we only have one sensor (but two topics), and the ID of the sensor was found by looking at the output of the running mihome software.
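The message-to-point conversion is the only real logic in the script, and it can be pulled out into a small helper that is testable without a broker or database running. This is a refactoring sketch of my own (the name build_point is mine, not part of the mihome bindings); it mirrors the divide-by-100 behaviour above, since the sensor reports values in hundredths:

```python
def build_point(topic, payload):
    """Turn an MQTT topic and payload into an InfluxDB point, or None."""
    prefix, device, sid, prop = topic.split("/")
    try:
        # The sensor reports hundredths of a degree (or percent)
        value = float(payload) / 100
    except ValueError:
        return None  # not a temperature or humidity reading
    return {
        "measurement": prop + "_" + sid,
        "fields": {"value": value},
    }


# A payload of "2154" on the temperature topic is 21.54 degrees
print(build_point("xiaomi/sensor_ht/158d0001ad37b7/temperature", "2154"))
```

With this in place, on_message shrinks to decoding the payload, calling build_point, and writing the result if it is not None.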



Once you have Grafana set up and connected to the InfluxDB database it is simple to add charts visualising the data. I opted for two charts, one for temperature and another for humidity, and as you can see, here in the south of England it got pretty hot in there at the start of May, too hot in fact. I have already made modifications to the greenhouse to try to cool things down when it becomes too hot, but some manual intervention is still needed on the warmest of days.
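For reference, each Grafana panel boils down to a single InfluxQL query against the sensordata database. The query below is a sketch: the measurement name follows the property_sensorID scheme from the script above, and $timeFilter and $__interval are Grafana's standard template variables for the dashboard's time range and resolution.

```sql
SELECT mean("value")
FROM "temperature_158d0001ad37b7"
WHERE $timeFilter
GROUP BY time($__interval) fill(null)
```

The humidity panel is identical apart from the measurement name.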

To finish it all off you can use Mir Kiosk, a small, lightweight graphical server, together with a small LCD display to provide an always-on view of the data.

There are lots of things that can be done with the Xiaomi range of sensors; at the time of writing I have 9 temperature sensors around the home, smart switches on every door to monitor open/close events, and plant sensors in several indoor plants to monitor fertility, moisture, temperature, and other conditions necessary to keep them all healthy. I’ll expand on this further in a future blog post.

Using Snaps on Raspbian

Who isn’t a fan of the Raspberry Pi? The little, cost-effective single-board computer has so many uses; in fact at home I have 4 doing various things, from monitoring temperature and humidity in my greenhouse to serving up a dashboard interface for my home automation solution, both projects I will write more about in future posts. But what makes the Raspberry Pi even better? Well, Snaps of course. I run Ubuntu on my devices, which comes with Snap support baked in, but the excellent Raspbian is also a popular choice of OS. As Raspbian is based on Debian it too can run Snaps with a few simple commands.

sudo sed -i 's/main/main contrib non-free/' /etc/apt/sources.list.d/raspi.list

# Not required but silences an ERROR regarding the memset, memcpy preloaded libs for the Pi
sudo sed -i 's/^\/usr/#\/usr/' /etc/ld.so.preload

sudo apt update
sudo apt install snapd
sudo reboot

After a reboot, typing snap install hello-world should work (or any other snap name; use snap find to discover more software).

Thanks to Martin Wimpress for the above commands.

Human Computer Interaction, to the max


Interfacing with computers is inevitable in today’s developed societies, but what most people think of when asked about computer interaction is probably bashing the keyboard, one finger at a time, on a laptop or desktop or, for the more tech-savvy, phones and tablets. Fewer people talk about cars, TVs, airport ticketing machines, wearables, digital assistants like Alexa or Siri, even fridges, washing machines, and billboards. A small but increasingly vocal subset talk about a new generation of computing that will revolutionise the definition of computers and even humanity.

You see, as a species we are on a path towards more and more technology entering our lives. Digital this and electronic that; football matches are being decided by technology, AI is predicting when people are likely to die, transportation is increasingly automated, and that is only the start of it. When peripherals like your mobile phone or smart glasses become, well, less peripheral, the whole debate about how technology is shaping mankind starts to heat up. Enter Hollywood.

For several decades now technology gone awry has been the villain, and an apocalyptic future has put bums on seats at many a cinema, but often these futuristic predictions are built on a solid, if somewhat warped, foundation. Take for example Dan Brown’s latest novel, Origin, which, without giving away too many spoilers, explores the potential fate of humanity; hint: it involves a human/technology twist. Hollywood examples include the popular and lucrative Terminator franchise, which depicts a world where humanity is on the brink of extinction due to a meld of synthetic intelligence and robotics, and which even led to news outlets asking how close the Terminator films were to reality. Philip K. Dick, a popular futurist author, had a similar theme throughout several of his books, again based around an advanced computer intelligence and some form of physical manifestation. Fast forward 34 years (in the case of the original Terminator movie) and we have examples of robotics that not only look like those depicted in the movies but also behave in eerily similar ways. Just this week Boston Dynamics put out a video of a robot ‘dog’ that reminded me of the scene from Jurassic Park where a Velociraptor opens a door in pursuit of its prey. The new dystopian series from Netflix, Altered Carbon, itself an interpretation of a novel by Richard K Morgan, goes further and depicts a future full of implantable and replacement technology as part of the human species itself, a point also driven home by Dan Brown in his book.

Again just this week, popular futurist Dr Ian Pearson put out a blog post detailing his thoughts on the whole technology + human debate which, while full of superlatives and speculation, is as plausible as many of the futuristic ideas from decades ago that are coming to fruition now. The idea that a “superhuman” will become a reality in little more than two decades, melding brain and device technology that “relays signals between your organic brain and an IT replica, but by doing so essentially makes external IT just another part of your brain”, bodes well for Richard K Morgan’s idea of cortical stacks and paves the way for “trillions of AI and human-like minds in cyberspace”. Conversely, as humans evolve so too will our AI counterparts, which will become “Super-smart AI or humans with almost total capability to design whatever synthetic biology is needed to achieve any biological feature”. In short, the lines between computers and humans will inevitably blur. Even popular future-thinker and entrepreneur Elon Musk, who publicly stated that there is a 90% chance we get AI wrong, is advocating that we embrace rather than eliminate AI, while urging caution at the same time. The definition of Human Computer Interaction (HCI) will take on a new meaning in the next two decades. It will not be about how we physically touch, swipe, and tap a device, or even wear a peripheral, but about how technology is interfaced within the human body; the term HCI will more accurately stand for Human Computer Integration.

So, what will Human Computer Interaction look like in the future? I think the answer lies in a question, what is the future of humanity as we incorporate more and more technology into our very being? That question is much more interesting.

Spotify as a Snap, putting the developer in control

Just before Christmas Canonical announced that the world’s most popular music streaming service, Spotify, is now available as a Snap. What makes this significant, I hear you ask? Well, Spotify and many other developers are choosing a new way of delivering software to Linux users, and Snaps are at the forefront.

One of the important aspects of Snaps is not so much the technology, but where the software comes from. Traditionally, Linux software is packaged by a community of developers and brought together by a distribution such as Ubuntu to be consumed by everyone. There are archives of software to choose from, but typically this software needs to be maintained and updated separately from the upstream development version, causing it to become a little ‘stale’. With technology like Snaps, the upstream software developers can release their software directly to the end user as and when they want; the developer is in complete control. Using the channels mechanism built into Snaps, the developer can publish edge, beta, and candidate software separately from their stable release to ensure users get the version of software they want. And with vendors like Spotify this means that millions of users get access to the latest Spotify experience as soon as it is ready.
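As a concrete illustration of the channel mechanism (a sketch of the usual snapd workflow, not Spotify’s own release process; the commands require a system with snapd installed), a user can pick the risk level they want at install time and move between channels later:

```shell
# See which channels the publisher has released to
snap info spotify

# Install from the stable channel (the default)
sudo snap install spotify

# Opt in to the candidate channel to try the next release early
sudo snap refresh spotify --channel=candidate

# Switch back to stable at any time
sudo snap refresh spotify --channel=stable
```

Because snaps auto-refresh, a user who switches to candidate keeps receiving candidate builds until they switch back.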

Many other software vendors are choosing to deploy their software this way, not only because it puts them directly in control, but because technology like Snaps means software is more secure, robust, and relevant. I expect that 2018 will see a massive uptake in this way of delivering software to Linux users and long may it continue.

Smart Healthcare Trends: Part 4, Implantable Medical Devices and Wearables

For Part 1, 2, and 3 in this series where I look at trends in Smart Healthcare please click here, here and here.

Implantable Medical Devices (IMD) and Wearables

credit: Creative Commons - Thomas Hawk

There are many challenges to designing devices that can operate in the somewhat harsh environment of the body. Using the Medical Implant Communication Service (MICS), low-powered antennas, and programmable, configurable components that allow for changes after implantation is the way to go, but this of course comes with risks. Allowing a device so intimate in nature to be programmed after installation requires bullet-proof security and will fuel intense scrutiny of both hardware and software. Hacking the computer of a mobile phone is one thing; hacking an implantable medical device is entirely another. Security applies across the board with medical equipment but is particularly important in implantable devices due to the importance of, and control they have over, their wearer. Much has been written in the press about compromised medical devices, such as insulin pumps and pacemakers, which can lead to death.

Another field that is gaining in popularity, albeit in a somewhat niche market, is implants for convenience. Kevin Warwick has been a long-term proponent of augmenting the human body, and his 2006 paper details his experimentation with a surgically implanted RFID (Radio Frequency Identification) device. This implant allows for user identification, movement detection, and automation, and enables the Smart Environment around him to react to the presence of the user. While an extreme example, it can be argued that Warwick’s research in this area has broken new ground. It is now common to see transitory implantables in the form of ingestible devices that can measure everything from the level of acid in the stomach to blood alcohol level. This latter information could, for example, be used to enable or deny access to a vehicle by its owner. Implantable devices will continue to be the focus of attention in the coming decades, but what we are much more likely to see, in the short term at least, is an increase in the use of wearable devices.

The diagram below shows a number of implantable sensors and where they are located in the body.

Wearable sensors provide an excellent way to monitor a person continuously and provide instant results. Mukhopadhyay, in his assessment of wearables, forecasts that wearable sensors might revolutionise exercise, communication, and our lives in much the same way personal computers did. Another advantage was noted by Paolo Bonato more than a decade ago: wearable sensor platforms have progressed to produce medically accurate signals, and the devices allow a patient to be measured over an extended period, even a few months at a time, a great advantage in contrast to stationary technology.

The demand for wearable devices has increased dramatically over the last decade and, according to Juniper Research in 2014, the wearable market is set for a more than 10-fold increase in hardware revenue alone by 2019. Wearables cover a wide range of uses such as lifestyle, sports and fitness, entertainment, healthcare, and enterprise, with the current surge in awareness particularly focused on the first two, spearheaded by companies such as Fitbit, Apple, and Jawbone. While originally these devices performed simple operations, such as tracking the number of steps a wearer took throughout the day, they are now becoming more sophisticated. Most consumer wearables do not employ medical-grade sensors, but devices such as the Apple Watch can interoperate with medical-grade add-ons such as the Kardia Band, which can be used to detect atrial fibrillation, a leading cause of strokes. It is expected that the lines between traditional medical-grade devices and consumer health and fitness devices will continue to blur, especially for use cases that include monitoring heart- and blood-related data. Fitness trackers will also continue to be the subject of clinical trials focused on weight management, obesity, diabetes, cancer, and more.

The work by Zhu and others on correlating physical activity with mood is particularly interesting. The researchers use only off-the-shelf components, in this case the Pebble Smart Watch and an Android Smart Phone, to advise the wearer based on predicted mood. What makes this research unique is the fusion of activity tracking detection and mood inference engine that can use perceived mood such as “stressed” to try counter that by suggesting an anti-stress activity such as exercise based on heuristic data. Similarly, the software can correlate the activity with the mood so if “shopping” results in the wearer being in a bad mood this data can be used to predict future moods based on activity and advise accordingly. This fusion of data and interpretation holds promise for future research.

The environment could provide energy for wearables as well; for instance, it has been reported that circularly polarised textile antennas can transmit power, and sensors that harvest energy from nearby health-monitoring bands have been implemented. A radio-frequency (RF) energy-harvesting antenna in a wearable sensor was also introduced by researchers at the Massachusetts Institute of Technology. Current devices can generate energy from both the body and the environment, but further development is required to harvest more power.

Research opportunities are present in data-mining the vast quantities of information that wearables generate, especially when it comes to detecting anomalies in individuals and in statistically relevant groups. Apple’s initiatives here, with ResearchKit and CareKit, hint at innovation at massive scale, enabling field trials with millions of individuals. There will be further research opportunities around combining data from several sensors to form a holistic picture of a wearer’s health and how it is affected by external factors such as weather (seasonal affective disorder (SAD) detection), amount of exercise (exercise-related endorphin boost), social interactions (GPS, Bluetooth, proximity data, and social media), TV viewing habits, or even a person’s income level. Hardware sensors will continue to be miniaturised and become more viable as they are embedded into all-day, everyday accessories, but research needs to be done to improve the battery life of these devices. Typically, something such as an Apple Watch lasts around a day under normal usage, and a Fitbit Charge up to 5 days, but this will have to improve, especially when consumers are used to watch-like devices lasting much longer.

Smart Healthcare Trends: Part 3, Preventative Healthcare

For Part 1 and 2 in this series where I look at trends in Smart Healthcare please click here and here.

Preventative Healthcare

There is a growing trend towards preventative measures, especially as technology advances enable a new generation of smart healthcare. The vision portrayed by the University of Rochester’s “Smart Medical Home” in 2004 is one that controls everything from the “nutritious meals” to the “high-tech first aid kits that can diagnose and cure even the most life-threatening injuries and ailments”. Physician Alice Pentland, Medical Director of the centre, is quoted as saying:

“Finding a cure for disease is the best option. But making sure people don’t get sick or identifying disease early, when it can be most easily treated, are the next best options”.

This early experiment in preventative healthcare implements a personal medical advisor that the occupant can converse with using speech recognition, skin-disorder diagnosis through the use of a camera matrix, a gait monitor to detect walking abnormalities that may be a precursor to a stroke or Parkinson’s disease, and other such systems.

Building up a complete picture of an inhabitant’s medical state needs a whole host of personal data, which can lead to privacy concerns. As with any monitoring solution, the data itself needs to be closely controlled and access restricted. Perhaps some of the most intimate data is a person’s genome which, thanks to reductions in the cost of gene sequencing, is now available to anyone. Once the domain of a laboratory, a person’s genotype can be data-mined for as little as £125 to discover, according to the provider’s website, a person’s risk of inherited conditions, their response to particular drugs, genetic risk factors, and even a disposition to genetic traits such as lactose intolerance or male pattern baldness. The home can combine all these information sources to help prevent certain adverse conditions. If a home-owner has asthma or other airborne allergies, sensors could detect a high pollen count and automatically close windows or advise the occupant on the risk of venturing outside; diabetes sufferers can be instructed on the right food choices based on the home’s recognition of available foods; exercise can be advised based on a lack of detected movement; the possibilities are vast.

Unless restricted by some ailment, most patients are ambulatory, which means that using wearable sensors to track motion and other variables is a useful addition to preventative healthcare. According to Clifton et al in 2014, there has been little work on mining the data to uncover potential future patient issues, which is the main shortcoming of today’s solutions. During their research they trialled their multi-sensor solution in a medical environment with 200 patients and an average hospitalisation stay of 9 days. It should be noted that many of the 200 patients they studied actually removed their wearable sensors despite being told of the potential benefits. The main reason was that, for certain sensors:

the current technological implementation was rejected as being inconvenient, uncomfortable, or too intrusive.

According to the feedback, the ECG sensor was particularly uncomfortable for prolonged wear and finger-mounted pulse oximeters were not practical. Finally, network connectivity problems, short battery lives, massive data sets, and patients forgetting to reattach the devices once removed plagued the final results. Despite all this, the study did prove that patient deterioration detection and predictive monitoring are possible to a certain degree, and further studies in this area should be pursued. The software side of the solution, using machine learning, is relatively mature, but today’s sensor hardware is lacking.

There are commercial wearables being sold that claim to implement many of the sensors that the medical-grade devices in this study used, but Clifton claims they are just not accurate enough to avoid false-positive alerts. This study paints a promising picture of using wearables for predictive monitoring as a form of preventative healthcare, but hardware advances need to catch up with the software before more widespread adoption can occur. In the meantime, the wearables market is exploding, with near double-digit year-on-year growth and, according to IDTechEx, this trend is set to continue. As is often the case, combining data sources produces a more holistic picture, and with wearables and the Smart Home together the dataset becomes richer for preventative intervention.

Any look at preventative health should also take into consideration diet, with many of today’s ailments directly linked to an unhealthy lifestyle and eating habits. The Smart Appliances of the future, from smart fridges to smart dining trays, can help with monitoring and suggesting healthy eating habits, and a multi-faceted approach to prevention, with the Smart Home at its centre, will be the best approach. There are many use cases where the home can help the occupant make preventative decisions, and it is likely that this will be a popular future trend as more technology enters the home.

Smart Healthcare Trends: Part 2, Augmented Reality

For Part 1 in this series where I look at trends in Smart Healthcare please click here

Augmented Reality

Augmented Reality (AR) is the field of study concerned with meshing the real world with computer-generated data and images to provide an enhanced view of reality. AR has been used in rehabilitation to encourage movement in patients suffering from akinesia, a disorder characterised by a loss or absence of voluntary movement. One study by Coiera in 1996 looked at a project that “projected virtual objects on to the patients’ physical world to give them the impression that they were walking through them, therefore restoring their mobility”. Coiera cites issues with the then-current (as of 1996) hardware displays, in addition to ethical considerations, as the major limiting factors to AR adoption in healthcare. He also notes that AR can have a detrimental effect on some people, stating that “undesirable side effects such as equipment failure, fatigue, or motion sickness” were experienced in some trials.

More recently, efforts by companies such as Microsoft with HoloLens and Google with Glass have concentrated on the consumer market. Thus far, this has not proven particularly successful. Google withdrew Glass from sale in 2015 after poor sales, an immature product, and a backlash from a public concerned with potential privacy issues. It was later rebranded and targeted specifically at the enterprise market, although developers continue to find unique healthcare applications. For example, R. Győrödi in 2015 demonstrated an acquaintance reminder solution that is able to store and recall photographs of people’s faces while using an algorithm to match them to an off-device database. Once matched, information about the individual can be presented to the wearer. Applications for this solution include patients with degenerative memory disorders such as Alzheimer’s, sufferers of traumatic brain injuries, or simply those with age-related memory loss. Another innovative application is TRAVEE by Voinea, again in 2015, a system designed to aid stroke survivors during their rehabilitation period. TRAVEE uses motion detection and a Head Mounted Display (HMD) to provide a virtual therapist avatar whose role is to guide the patient through a range of movements designed to improve mobility over time. There are several emergent devices and software platforms already starting to appear, such as Microsoft HoloLens, Apple’s ARKit, Oculus Rift, HTC Vive, Google Cardboard, and Samsung Gear VR. All of these systems are targeted at the consumer market, where developers are beginning to create health-related applications.

Despite initial resistance to AR adoption, the technology does have benefits that will eventually find their way into the healthcare environment. Still, there are a lot of issues to overcome first. Concerns remain around current technological shortcomings such as battery life, medical device interoperability, and government legislation, but more importantly around the privacy of patient data. These issues are not insurmountable and, based on the rapid adoption of technologies such as the smart phone and tablet in similar environments, AR may well prove an important research area.

For more information on the content above please get in contact.

Smart Healthcare Trends: Part 1, Automation

Healthcare is going through a revolution at the moment. An increasingly aging population means that the strain on traditional healthcare resources is at breaking point, but there is hope. Technological advances mean that healthcare can often be delivered in the home, promoting a more decentralised model and, more importantly, something that can scale. The UN Department of Economic and Social Affairs Population Division predicts that 25% of the UK population will be aged over 65 by 2035, compared to less than 18% today, and this trend isn’t reversing any time soon. I have given this a lot of thought lately and have come to the conclusion that applying today’s technology can bring about a sea change in this industry.

At the same time that people are living longer and dealing with age-related illnesses there has been, and continues to be, a technological revolution. Devices are getting smaller and more powerful, software methodologies are maturing, and the population’s adoption of technology is ever increasing. It is now common to see people carry around several devices at once, all of which are more powerful than their predecessors of 20 years ago. Smart phones, tablets, smart watches, fitness trackers, and even smart eyewear, to name a few, are often part of a person’s regular carry-around items, and each is embedded with powerful processing capabilities, sensors, and software. In addition, our surroundings are becoming smarter. From washing machines to coffee-makers, wireless access points to doorbells, each has a part to play in the smart home revolution and each can contribute to making a person’s life better, especially those with healthcare problems.

Over the course of five blog posts I will take a look at a number of existing and emerging technologies that will become increasingly important in tomorrow’s delivery of healthcare.


The use of automation technology in the home is not new. From humble beginnings in the 1960s as hobbyists attempted to automate simple functionality, the term Smart Home was first coined in 1984 (almost ironic given the content of George Orwell’s novel of the same name) by the American Association of House Builders. Automation has been a key driver of consumer smart device uptake for the home, but its use is not as widespread as some may have predicted. With the advent of low-cost computing platforms such as the Raspberry Pi Zero, however, it is now more likely than ever that Mark Weiser’s vision of “integrating computers seamlessly into the world at large” will be realised. This increase in computing power integrated into today’s ‘dumb’ appliances allows for control of functionality such as lighting, heating, and security, among others. In the context of healthcare, this automation can be a key enabler for people with physical and cognitive disabilities or impairments such as those experienced with age. The ability to automate common tasks relieves the burden of having to operate knobs, switches, and various appliances around the home, and can be the difference between a patient receiving care in their own home or in a specialised care setting. Initiatives such as the Health Smart Home (HSM) use off-the-shelf, low-cost components to issue warnings to people with physical disabilities, such as visual or hearing impediments, to avoid potential hazards, allowing them to stay at home longer. Similarly, the Bluetooth communication protocol can allow people with physical disabilities to control appliances using only their smart phone or tablet.

The ability to automate and assist with tasks such as laundry, bathing, medication reminders, and cooking will become more important as our aging population increases. Being self-sufficient and independent will not just be considered optional; it will be required for non-emergency conditions due to the strain on healthcare and medical services. As life expectancy increases, more elderly and disabled people will rely on technology around the home to assist with their daily activities. Assistive technologies like the ones used in the ITEA2 GUARANTEE project were trialled in Finland for people with intellectual disabilities, and it is expected that these kinds of installations will become more widespread in the future.

Automation is particularly powerful when combined with sensor technologies embedded in the environment. Systems such as those produced by Belkin rely heavily on sensors and automation to provide assistive technologies to the elderly and it is likely that automation will be the key driver for technology embedded into tomorrow’s home. This trend can already be seen with housebuilders around the world using automated lighting, heating, and appliance control in their installations. Companies like Barratt of London in their Chandos Way development have added automation technologies as well as extensive Ethernet, wireless, and other network access devices around the home as a precursor to smart use cases.

Apple’s HomeKit and Google’s Home join other independent vendors such as Belkin, Philips, Nest, and Samsung in producing smart, automatable devices, and this trend is set to continue as consumers expect to be able to do more remotely and autonomously with their purchases. More recently the cost of these devices has plummeted as companies such as Sonoff have entered the market, cutting the price of many smart home components to a third. Research in this area will be spurred on by a need for interoperability. With so many device makers, each with their own communications protocols and implementations, vying for market share, it is imperative that the industry settles on a set of standards to enable devices to work together. Without this your thermostat will not be able to communicate with your smart radiator valve or heating controller, rendering the whole system unusable for its intended use case. It is only when components of the Smart Home work together that some of the more useful use cases become possible. The door lock that works with an indoor motion tracker to prevent a dementia sufferer leaving the house, or a telephone system that can call for help if a fall is detected in an occupant's home, is only possible if devices work together, so standards need to be defined and agreed upon. Today this is mitigated by home hub software that can speak many different protocols, but these platforms have to play catch-up with the device makers. Research into protocols, especially over wireless networks, will provide new and existing homes with a mechanism to fit, and retro-fit, capabilities in the Smart Home.

For more information on the content above please get in contact.

Why software updates matter, especially in the world of IoT

Why update?

We all know what software updates are; we see them all the time on our phones, tablets, desktops and servers. They appear as pop-ups, in the message of the day (motd), and even have dedicated applications extolling their necessity, and all for good reason. Having out-of-date software often means, at best, a degraded user experience and, at worst, a security hole exploitable by hackers. When you consider that analysts such as Gartner predict that the IoT will connect over 20 billion devices by 2020, a clean and crisp update mechanism becomes paramount for any device. Exploitable devices hit the headlines often, and most of the time this is due to a weakness that was not plugged fast enough, or not at all. Having connected devices means that, with the right update mechanism, attacks like these can be mitigated through a fast and ubiquitous response. Of course updates are only one of the security measures device makers have to take, but it is one that can potentially prove to be the most future-proof.

The IoT implies a highly-connected network of “Things” which opens up the possibility to push security patches, fix bugs, and deploy new features remotely. Devices with static software do not have this ability and are stuck with whatever software the manufacturer thought was secure at the time. It’s easy to see which is the correct approach but getting software updates right can be a challenge. Updates should be secure, robust, atomic with rollback support, they should be seamless to the user and automatic. This is why every Ubuntu Core system bakes update functionality in from the start.

How Ubuntu Core keeps you safe

All software in an Ubuntu Core system is delivered as a snap: a self-contained, digitally-signed squashfs file carrying content and metadata, mounted read-only on installation. This format is used throughout the system, including Ubuntu Core itself, and enables atomic refreshes of software across the whole stack. A small daemon communicates with a 'software store', automatically checking for updates at regular intervals. If an update is found the daemon will download it, try to apply it, and if anything fails roll back to the previous known-good version. In this way you never have a bricked device or broken software. On the store side, software is updated to fix issues, plug security holes, and provide new functionality by developers who don't have to worry about the software deployment story or whether their users have downloaded the latest essential release; this is all taken care of, effectively decoupling software development from software deployment.
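The refresh-and-rollback idea above can be sketched with nothing more than directories and a symlink. This is emphatically not snapd's implementation, just an illustration of why atomic, revision-based updates make rollback cheap (all paths below are invented for the demo):

```shell
# Each revision lives in its own immutable directory; a 'current' symlink
# points at the revision in use. Replacing a symlink is atomic, so an
# update either fully takes effect or leaves the old version in place.
mkdir -p /tmp/demo/rev1 /tmp/demo/rev2
echo "v1" > /tmp/demo/rev1/app
echo "v2" > /tmp/demo/rev2/app

ln -sfn /tmp/demo/rev1 /tmp/demo/current   # start on the known-good revision
ln -sfn /tmp/demo/rev2 /tmp/demo/current   # "refresh" to the new revision

# if a post-refresh health check failed, rollback is just another atomic flip
ln -sfn /tmp/demo/rev1 /tmp/demo/current
cat /tmp/demo/current/app                  # the known-good version is back
```

snapd layers signatures, health checks, and store communication on top, but the revision-plus-atomic-switch shape is what makes "never a bricked device" achievable.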

Let's think about this for a second. A developer who finds a security issue in their software can push a fix and have every connected device out there get that fix automatically. No old insecure versions, no out-of-date software. Software delivery is secure, robust, atomic, and seamless, just like it should be.

Updates are systematic, not an afterthought

Several components need to work together for a compelling solution. There is more to a robust update mechanism than just a software store, or an agent, or some other over-the-air (OTA) system bolted on as an afterthought; every piece needs to work in synergy. The store should manage all deployable software and understand what devices need when they ask for updates. It should be able to handle delta downloads to conserve device bandwidth, but also cater for delta uploads, reducing bandwidth for the developer too when they push new software to the store. It should be secure and transactional. Similarly, anything device-side should handle software rollbacks, system health checks, and security, be able to communicate with multiple stores, and be underpinned by an authorisation mechanism that provides trust throughout the system. Using Ubuntu Core gives you all this and more.

If you are a device maker, software developer, or just interested in the future of IoT software and devices, I would encourage you to check out Ubuntu Core for further information.

Ubuntu Core 16, a real landmark for IoT Software

Internet of Things (IoT) devices have come a long way over the last couple of decades. In 1996 I remember being in complete awe at turning on a light from my Pentium II-based desktop computer using X10, and being equally frustrated when I switched on the microwave and the same light turned off. It marveled and disappointed in equal parts, and to top it off it wasn't particularly cheap. Fast-forward a couple of decades and IoT is the new buzzword, previously known as Ubiquitous Computing, Pervasive Computing, and a whole host of other terms, but in short what it really means is that our 'things', our devices, our environments, are all getting smarter. In part this is because technology is, as promised by Mark Weiser and others, becoming more pervasive thanks to the low costs associated with microprocessors, especially from ARM. ARM has designed the technology shipped in a staggering 86 billion chips over the last 25 years, which is an amazing achievement and highlights the fact that yesterday's 'dumb' device has given birth to today's technology-emblazoned offspring. But what does that mean today? Well, there is a growing concern in the industry that what this really means, when you peel back the marketing onion, is that we are in for an interconnected mess of devices with little-to-no security and a potential nightmare for device management, control, and software updates.

Canonical have observed this problem for a long time: being part of the early effort to bring a solid Linux distribution to ARM devices, helping found Linaro to work on essential Linux/ARM projects, and most recently working on Ubuntu Core for IoT devices (and beyond). These and other efforts to shore up defenses and bring about a step-wise improvement for Linux devices really have improved the whole IoT offering today. Many companies have contributed so far and this certainly is not a one-man show; IoT, and Linux on IoT, is such a massive endeavor that the whole industry has to come together to agree on ways of working, software delivery mechanisms, device updatability, security mechanisms, and device management. The newly formed LITE group in Linaro, comprised of Canonical, ARM, Huawei, NXP, RDA, Red Hat, Spreadtrum, STMicroelectronics, Texas Instruments, and ZTE, has the lofty goal of fostering security and interoperability in this fragmented world and personally, I welcome initiatives such as this as long as together they drive the industry forward.

Last Thursday Canonical did just that with the release of Ubuntu Core 16. For those that are not familiar with this project, Ubuntu Core is an entirely new rendition of Ubuntu, stripped back and redesigned from the ground up with security and IoT at the forefront right from the very start. It uses the revolutionary new package format, snaps, to deliver self-contained software into a constrained environment, and builds upon great Linux projects such as systemd, seccomp, cgroups, Linux kernel enhancements, squashfs, and others to form a secure and extensible platform for the IoT. Ubuntu Core 16 is the latest in a line of releases that are already proven in the real world. Dell and others have been shipping gateway devices with previous versions of this OS for some time, and many others are using it for use cases such as digital signage, robotics and more. The technology is mature and the developer story, creating and running software on your desktop that is deployable on your Ubuntu Core IoT device, is compelling. In short, the team has done tremendously well to move the world of IoT forward and together, with innovative device makers, can truly deliver on the promises that the IoT has made for some time.

Fixing Git on macOS Sierra

If, like me, you have upgraded to the macOS Sierra beta you may find that Git is broken from the command line. The error I was seeing was:

xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun

It turns out that the developer tools are borked on upgrade so need reinstalling. To do that you need to run:

xcode-select --install

After a short download and installation everything should be back to normal.

Cheddar Half Marathon and Broadlands 10 Miler

A bit belated but here is a short report of two events I ran over the last month or so: the Keynsham 10 Miler and the Cheddar Gorge Half Marathon.

Keynsham 10 Miler (10 miles, 16.1 km)

This event was held on Sunday 22nd May in, you guessed it, Keynsham. The weather was fine although it had been raining the previous days, which meant the largely undulating and off-road course was muddy. Billed as a “multi-terrain” event, the run took us down steep and slippery hills (I clearly wore the wrong footwear), over decrepit wooden stiles, through long grass, up gruelling inclines, and over foot-destroying loose stones; and then you do it all again (it's a two-lap course). It was great. I must admit that due to a shoulder injury I picked up running trails in Salt Lake City my fitness has completely plummeted, so this challenging off-road 10 miler took more out of me than I expected. Towards the 8 mile mark I was experiencing hamstring cramps and by the end my pace had slowed to a crawl. Despite all this I posted a respectable time and I will definitely be back again next year.

Cheddar Gorge Half Marathon (13.1 miles, 21.1km)

This was the second time I’d run in Cheddar this year, the first being the 10km event, and again the event was run to perfection. Tom and the team do an admirable job of ensuring the runners are registered quickly, drop their bags off easily, and are on the start line vaguely knowing where they are going. But the main actor in this event is the stunning scenery. Up on top of Cheddar Gorge you get a great view of the surroundings and a glimpse of the pain to come. I started my Garmin on the walk to the start line this time and my watch read 528ft climbed, more than in most complete marathons, and this was before the run itself!

Again the run was off-road, challenging, and “undulating”, but this time it was also cold, wet, and extremely muddy. Coupled with the lack of training, this was probably my most challenging run this year. I took it easy on the first lap but with a 500ft climb in the middle, everything after that, whether flat, downhill, or uphill, gave my legs a bashing. A disappointing time (I was aiming for sub 2hrs and missed it) but a great day, so overall a success. I’ll be back again in a few weeks to complete the Cheddar Gorge challenge and look forward to running many more Relish Running events next year.

snap try: The quick way to package snaps

This week's snippet is all about improving the snap developer experience.

Since the release of snapd 2.0.8 we have added one of the most useful tools for snap developers:

snap try

snap try effectively mounts any folder containing an unpackaged snap at /snap/snapname as a writeable folder, allowing quick iteration during the packaging process. No longer do you have to create a read-only squashfs snap and install it to try out your latest changes, which speeds up the workflow tremendously. The process I use from the package directory is:

% snapcraft prime
% snap try prime
% /snap/bin/snapname
... test/hack ....
% snap remove snapname

When you are happy with your testing you can create the real snap file with:

% snapcraft

Happy snapping!

Are snaps really cross distro?

Yesterday we announced the new home for everything snaps and Snapcraft, and at the same time made available the cross-distribution work that really does mean snaps can run on virtually any Linux distribution. Today we have enabled support for Debian, Arch, Gentoo, Fedora, and of course Ubuntu and all its flavours, and enabling more, including OpenSUSE, CentOS, Elementary, Mint, OpenWrt and others, is in the works.

The announcement was met with a mostly positive response which, given that Linux packaging has been an issue for so many people and for so long, is hardly surprising. This particular problem has resulted in the community creating a few different initiatives, such as AppImage and OrbitalApps, tackling the problem in different ways, all of which have their own merits and limitations. In my opinion (of course slightly biased, but based on technical fact), snaps are the best solution for a complete cross-platform universal package format, whether that be for the Linux desktop, IoT and mobile devices, or the server. Snaps are surprisingly easy to use, encompass industry-leading security with AppArmor, Seccomp filters, and more, and all with the very familiar and popular Ubuntu development workflow. A special mention has to go to the tooling as well; Snapcraft is already pretty awesome, with new features added week after week, and snapd, the tool that runs snaps in a confined environment, is seeing so much innovative, open development that these two are great examples of projects to get your teeth into if you are a developer, tester, or just want to dabble in an open source project.

There has been some doubt that snaps really could run on multiple distributions and, as part of the team who tested this extensively, I can definitely confirm this is the case. I played around with several native installations and VMs over the course of last week and the results were super positive. As the saying goes, a picture is worth a thousand words so I took a couple of screenshots. Enjoy!

Contributing to the snapd project

There is a lot of buzz around snaps, the new packaging format created by Canonical to enable secure, transactional, and robust application updates, and rightly so. This new method of distributing applications is revolutionizing not only software on IoT devices, but on the desktop, server, and beyond. The software that actually runs the snap applications is called snapd. Hosted on GitHub, snapd uses the Go language and is actively developed by a core set of developers, but, like most projects at Canonical, we actively encourage as much community participation as possible. One of the core developers, Zygmunt, posted a great outline on how to make your first contribution to snapd and, taking up the challenge, I did exactly that.

Zygmunt’s instructions are pretty clear but I thought I would look at this from a new developer’s perspective, using a clean Ubuntu 16.04 install and a new GitHub account. What follows is a guide to setting everything up ready for your first contribution. As a side note, I have various machines I work on regularly, mainly my Lenovo laptop running Ubuntu, but also my MacBook Pro and an HP Windows all-in-one. This allows me to look at the various platforms available to developers, all of which are valid options for software development, especially in the IoT world.

Installing Ubuntu 16.04

Installing Ubuntu is very straight-forward and, depending on the way you intend to run it, the instructions vary ever so slightly. There is a comprehensive guide to installing Ubuntu as the only OS available on the Ubuntu site which will get you up and running in no time. Remember to download Ubuntu 16.04. On the Mac and Windows platforms you will need to install Ubuntu as a Virtual Machine (VM) but again, this is straight-forward. I chose to use VirtualBox as it is free, feature-rich, and runs Ubuntu well; download it from the website. The Ubuntu community is great when it comes to sharing information and there are already many tutorials on how to get an Ubuntu VM working. Although a few years old now, the answer on AskUbuntu is still valid today and is worth reading after you have installed VirtualBox. I highly recommend you also install the Guest Additions ISO to enable functionality such as drag and drop between host and guest, and window resizing.

There are a few extra packages you will need out-of-the-box for snapd development so once logged in to your shiny new Ubuntu install you will need to check for, and install, updates:

  sudo apt-get update
  sudo apt-get upgrade

In addition to the updates you will need to install git, bzr, golang, and it is useful to install snapcraft and snapcraft-examples for snapping applications later:

  sudo apt-get install git bzr golang snapcraft snapcraft-examples

So now you have Ubuntu on your development machine, what next?


Setting up GitHub

As stated previously, all snapd development occurs on GitHub. To be able to contribute to the project you will need a GitHub account, so go ahead and create one. Fortunately GitHub's documentation is easy to follow: sign up for a new account (a free account is just fine) and follow that up by setting up Git. We are going to be using SSH for cloning so generating SSH keys is also needed, but if this is a little too scary you can use the HTTPS method instead; just be aware that instructions further down and in Zygmunt's post will need to be modified slightly, but I'll point that out. For reference there is a good Git cheatsheet available from GitHub that is worth looking at.
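For the SSH route, key generation is a one-liner. The key path, comment, and empty passphrase below are illustrative choices only; GitHub's documentation covers adding the printed public key to your account:

```shell
# generate a 4096-bit RSA key pair non-interactively (example path and comment)
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -b 4096 -C "you@example.com" -f "$HOME/.ssh/github_demo" -N ""

# print the public half, ready to paste into GitHub's SSH key settings
cat "$HOME/.ssh/github_demo.pub"
```

In real use you would normally give the key a passphrase and let ssh-agent handle unlocking it.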

Now we have a working Ubuntu install, Git and GitHub all set up, it is time to get developing.

snapd development

Fortunately Zygmunt has this bit covered. He has a great tutorial on how to fork the snapd project in GitHub (just go to the project and click the Fork button in the top-right corner) and set up the Go environment variables ready for development. I highly recommend using the same directory structure as Zygmunt (mine is ~/development/src/ for snapd) as this allows you to use his devtools branch of helper scripts, and make sure you run the script in the snapd directory to ensure you have everything set up correctly. Make sure to also get the devtools branch with a git clone and have it ready somewhere in your development directory outside of the snapd directory:

  git clone

Development workflow is particularly nice. You hack on the code, run the tests, refresh the snapd code with your changes, and test them manually. The refresh-bits script from devtools starts up a separate instance of snapd with your changes and allows you to install snaps and test your code without affecting the host system. On my system this looks something like this (expanded a little for clarity):

cd ~/development/src/
** hack, hack, hack **
** iterate until the tests are happy **
cd ~/development/src/
./refresh-bits snap snapd setup run-snapd restore
** open second terminal and test changes **

What's next?

Well, that bit is easy: talk to the developers on IRC, sign up to the mailing list, go find a bug to fix, but most importantly, get involved!

Snapping an Electron-based Application: Simplenote

Snapcraft is described as a “delightful packaging tool”, enabling a developer to package their app from a single tree by aggregating the pieces from multiple places if necessary. It supports multiple build technologies and produces a .snap package with all its dependencies for Ubuntu Core and now the Ubuntu Classic Desktop (using snapd). It is the future of packaging for Linux-based systems. I encourage the reader to read the documentation on GitHub to get a flavour of what Snapcraft is and to learn more about the key concepts, setting up your system to produce snaps, and a nice first snap walkthrough example. For this post I am going to introduce a couple of concepts that served me well when snapping the Electron-based application, Simplenote.


Simplenote is a cross-platform note-taking application that uses Simperium to sync notes across Android, iOS, Mac, and of course Ubuntu (and other Linux systems). It has support for instant search, tags, note sharing, backups, and best of all it is free. This makes Simplenote a great choice if, like me, you use several systems on a daily basis.

Snapping Simplenote is relatively straight-forward but as I walk through the process below there are a few concepts that could help others snap applications. The rest of this post assumes you are using an Ubuntu 16.04-based system (VM, PC) and have installed snapcraft as per the instructions on the GitHub page.


First we need to get the latest version of Simplenote from GitHub (1.0.1 at the time of writing) and unpack it:

  tar xvzf Simplenote-linux-x64.1.0.1.tar.gz

Then we need to create our initial snapcraft.yaml file in the Simplenote directory to tell Snapcraft how to package the application.

  cd Simplenote-linux-x64
  snapcraft init

Running snapcraft init creates a barebones file ready for editing. For Simplenote you need the following, don’t worry about the contents just yet, I will point out the important bits soon.

name: simplenote
version: 1.0.1
summary: The simplest way to keep notes.
description: The simplest way to keep notes. Light, clean, and free.
apps:
  simplenote:
    command: wrapper
    plugs: [unity7, opengl, network]
parts:
  simplenote:
    plugin: copy
    stage-packages:
      - libnss3
      - fontconfig-config
      - gnome-themes-standard
    files:
      Simplenote: Simplenote
      wrapper: usr/bin/wrapper
      icudtl.dat: icudtl.dat
      snapshot_blob.bin: snapshot_blob.bin
      natives_blob.bin: natives_blob.bin
      resources*: resources
      # the two .so files bundled with Simplenote are also copied here,
      # each with a destination of usr/lib/x86_64-linux-gnu/

The first 4 lines of the file are there to describe details about the package itself including the name, version number, and a plain text summary and description. After that we get on to the sections that describe how the application is executed and what features it needs from the underlying system. command is the command to run on execution and plugs tells the system that this application wants to use the unity7, opengl, and network interfaces - all mandatory for Simplenote. You can read more about interfaces in Zygmunt’s series of interface articles.

As a side note, the command entry is a little different for this application as Simplenote (and other Electron-based apps) need a few environment variables set up before the actual binary can be called. This is accomplished by creating a wrapper script that does this set up. The contents of the wrapper file can be seen later in this post.

Back to the snapcraft.yaml file. The parts section describes what Snapcraft needs to do when creating the .snap package. In this case we rely on the copy plugin that enables, you guessed it, copying of files from the host system into the snap package during packaging. The actual files to copy are listed in the files section. This copying is done to ensure that Simplenote has all the libraries and binary blobs necessary to execute once mounted within its snap-based confined area, running on a Snappy system. The format is:

file_to_copy: destination_in_the_snap

One entry that sticks out a little is the resources line:

resources*: resources

The use of the globbing wildcard ‘*’ ensures that the whole resources directory is copied across.
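The same globbing behaviour can be seen with a plain shell copy; here is a tiny stand-alone demo (the directory layout and the app.asar filename are made up for illustration):

```shell
# 'resources*' matches the resources directory itself (plus anything else
# whose name starts with "resources"), so cp -r brings the whole tree across
mkdir -p /tmp/globdemo/src/resources /tmp/globdemo/dst
touch /tmp/globdemo/src/resources/app.asar
cp -r /tmp/globdemo/src/resources* /tmp/globdemo/dst/
ls /tmp/globdemo/dst/resources
```

Snapcraft's copy plugin applies the same pattern matching when it assembles the files for the snap.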

One section we skipped over was stage-packages. Snapcraft is a very clever tool and we benefit from its knowledge of Debian-based packages by stating in this section that we want to install libnss3, fontconfig-config, and gnome-themes-standard from the Ubuntu archive into the snap; again these are required packages for Simplenote.

Another side note. To understand what libraries are required by a binary like Simplenote we can use the ldd command:

ldd Simplenote

This produces an output that can be studied to understand where the binary is looking for its dependencies. Simplenote comes with a couple of shared libraries embedded in the package, and ldd shows these on my system as:

  => /home/jamie/snapping/simplenote/Simplenote-linux-x64/./ (0x00007fa6caa85000)
  => /home/jamie/snapping/simplenote/Simplenote-linux-x64/./ (0x00007fa6c5801000)

Notice that the entry after ‘=>’ points to local .so files in the Simplenote-linux-x64 directory, this means that we need to copy these over into the snap, hence the copy entry in the snapcraft.yaml file. All other libraries are present on the host system and will be used automatically by Snapcraft.

The wrapper file

The contents of the wrapper file discussed above are:

#!/bin/sh
export FONTCONFIG_PATH=$SNAP/etc/fonts
export FONTCONFIG_FILE=$SNAP/etc/fonts/fonts.conf
export XDG_DATA_HOME=$SNAP/usr/share
export LD_LIBRARY_PATH=$SNAP_LIBRARY_PATH:$SNAP/usr/lib/x86_64-linux-gnu/
exec "$SNAP/Simplenote" "$@"

There is nothing too exciting about this file: we set up the fonts path to be inside the snap directory itself (remember snaps are confined) as well as the share folder. We also set LD_LIBRARY_PATH to ensure that the snap is looking for its libraries in the right location, namely $SNAP/usr/lib/x86_64-linux-gnu/. Again, looking at the snapcraft.yaml file you can see that we copy the two bundled libraries into that directory to ensure Simplenote does not complain about missing libs. The last line executes the Simplenote binary.

The wrapper file needs to be executable so we do this using the chmod command:

  chmod +x wrapper

Building and Installing

All that is left to do is build and install the snap. Running snapcraft will pull down any dependencies and create the squashfs-based .snap file:

  snapcraft

After this we should have a simplenote_1.0.1_amd64.snap file in the local directory ready to be installed with the snap install command:

  sudo snap install simplenote_1.0.1_amd64.snap

One caveat at the moment is that Simplenote will expect to use dbus and with AppArmor confinement this is not possible with the application we just built. It is possible to get around this but I will leave that as an exercise for a later post.

If you try to run the application at this point, you will hit the AppArmor/dbus violation mentioned above.

What we can do, which introduces a new concept nicely, is use --devmode. devmode allows the snap to break out of its confinement during development to quickly get up and running. From there you can look at what policy violations would potentially occur and adjust your application accordingly. When you are happy that your application is working in a confined environment you can simply install without devmode to test.

To install Simplenote with devmode you can use the --devmode flag:

  sudo snap install --devmode simplenote_1.0.1_amd64.snap

Running simplenote now brings up the application.

It is not perfect but that wasn't the aim of this post. Instead we looked at packaging an application, copying files into the snap using the copy plugin, and using a wrapper file to set up the snap environment; to conclude, let's introduce another little snippet of information to help you debug your snapping process: busybox.

Debugging snaps using busybox

Snapping an application is usually pretty simple, but when you get stuck and just need to look inside the snap itself to see what is going on, there is a simple trick: adding busybox as an application to your snap gives you a shell environment right there in the snap. This allows you to poke and prod at directories, see whether files you thought you copied over are present, and debug generally (you can also add gdb and other tools in the same way if necessary).

To add busybox we would modify the snapcraft.yaml file above as follows:

name: simplenote
version: 1.0.1
summary: The simplest way to keep notes.
description: The simplest way to keep notes. Light, clean, and free.
apps:
  simplenote:
    command: wrapper
    plugs: [unity7, opengl, network]
  busybox:
    command: sh
    plugs: [unity7, opengl, network]
parts:
  simplenote:
    plugin: copy
    stage-packages:
      - libnss3
      - fontconfig-config
      - gnome-themes-standard
    files:
      Simplenote: Simplenote
      wrapper: usr/bin/wrapper
  busybox:
    plugin: nil
    stage-packages:
      - busybox

Notice the extra busybox-related lines. To access busybox once the snap is installed you can use:


Important directories are:

$SNAP - the snap's home directory, i.e. /snap/simplenote/<id>
$SNAP_DATA - the snap's data directory under /var, i.e. /var/snap/simplenote/<id>
$SNAP_USER_DATA - the snap's user data directory, i.e. /home/jamie/snap/simplenote/<id>

All of the above directories are on a Classic 16.04 Ubuntu system.

All code can be found at:

The next steps for this application are to sort out the fonts and menu spacing, add a .desktop file for dash discovery, get around the AppArmor violation, and upload to the store, but all of this is for another post.

The Pensford 10k

Another week, another race; this time the Pensford 10k event, but let's take a little step back first.

At the beginning of my ‘racing’ (very loose term for official events) calendar selection I had the aim of adding at least one, preferably two events per month to ensure that the pressure to line up against others kept me honest and provided the motivation to get my backside off the couch and training. When the event is close by and offers an “undulating, and seldom flat course” with a nice 50m and 100m steep climb in the middle, it is worth a try, and try I did.

On race day I lined up with nearly 200 other people, but amongst the normal pre-race banter I was hearing whispers from the more experienced runners: ‘it only gets going around the 5k mark, watch out’, and ‘the hill in the middle is a killer’. With this in mind, I moved myself from the 40min+ starting group to the 50min+ group, all the while wondering what I had gotten myself into.

I set off at a slow but steady pace, trying to conserve energy for ‘the hills’ although at this point I was overtaking people in the faster group which had me a little worried. Despite a few hills the first 5k was fine, up, down, up, down, and up again but rather doable. Then the 5k marker appeared and right on cue, the hill that was the focus of many a conversation at the start of the race.

Now, a 100m climb sounds rather small, and on paper it is, but when you are trying for a good race time, with tired legs, a dislocated shoulder (did I mention that?) and enjoying the best the South West has to offer you in terms of windy, cold weather, the start of this tiny little hill was not welcome. I’d set my Garmin watch to alert me when I was slower than an 8:30 pace thinking me and the beep were never going to meet but unfortunately the slow beep was my unwelcome companion the whole time I climbed this section.

Despite a slow time (51:27) I really did enjoy it. I am confident I could knock a large chunk of time off that next time, in good health, and with next year being the 30th anniversary of the race, I think I will be back to prove that.

Next on the agenda, the Keynsham 10 miler but I am sorely tempted to add the Bath Ultra Marathon in September to the list, after all, I’m still looking for races.

Running the Cheddar Gorge 10k

This week I ran the 10k race at the inaugural 2016 Cheddar Gorge Challenge event. Billed as a ‘lumpy course’, this series of runs offers “more climbing just getting to the start than you will in most other events”. With the affectionately named Hell Steps towards the end it is a tough run but, more importantly, it is a fun event. Cheddar is beautiful, steeped in history, and picturesque from the bottom of the gorge, let alone from running up and down it, so the prospect of completing 3 races (10k, half marathon, and marathon) in and around the area was too enticing to pass up.

The terrain, according to the website, is “steep in places and very steep everywhere else”, which sets the scene, but to be honest I arrived at the event fully trained yet expecting the worst. In reality there were hard sections, easy sections, a little doubt around the 5k mark that I could actually complete the event, and a ‘this is great’ moment around 7k. The distance is tiny compared to the training I have done, but the combination of the race-day ‘too fast start pace’ and the lumpiness hit my tired legs hard. There was never a time that I truly wanted to give up, but there were times when I thought about a brisk walk rather than running. Overall I am happy to say that even up the Hell Steps I broke into a run and completed the course in a not impressive, but rewarding, 58:28. I had quietly wanted a sub-1hr finish and, given the course, was a little sceptical about it, but even though the wind was against us most of the race (how does it constantly blow directly at you regardless of your orientation?) I hit my goal.

So why would I put my body through this, I hear you ask. Well, this year I am doing something out of my comfort zone, something a little crazy (for me), and something that I hope will make a difference to others. This year I am running, a lot, and for a good reason. This year I am running to raise money for Macmillan Cancer Support. To read about my story please follow the link, which is also where you can donate to this very worthy cause. I am putting myself through several challenges, including a marathon in the Himalayas of Nepal and a 45 mile Ultra Marathon, because there are people out there who just can’t, so let’s together make their lives a little easier during a really hard time.

So if you can, please donate.

A Change of Scenery

A few weeks ago I joined Canonical and, for the eagle-eyed, this is actually the second time. Previously at Canonical I spent my days with the Mobile Team, realising the goal of a good Linux on ARM experience, which eventually culminated in the foundation of Linaro. This time I am equally excited about another formative stage of technology: IoT and the possibilities of interoperable and extensible devices running a standard Linux operating system.

I personally will be working directly on Snappy Core, the technology used to provide a stable, secure, transactional, and featureful platform for the internet of things (IoT) and beyond. I believe in the future of IoT, big data, and a world full of Mark Weiser’s vision of Ubiquitous Computing and I am excited to be part of it.

Running in Nepal

Nepal is such a wonderful place. Steeped in history and culture with some of the most breath-taking sights to be seen, Nepal is home to Buddhism, the Himalayas, and of course the mighty Mt Everest. Despite all this wealth, the country and its people face a lot of challenges. Economically Nepal is considered a third-world country, with many people in utter poverty. To compound this hardship, Nepal has also experienced terrible earthquakes that have left many dead and even more without basic needs such as accommodation, access to food and clean water, and education. Last year's earthquakes were devastating and the effects are still being felt. This had me thinking: wouldn't it be great if I could do something, no matter how small, to help out in some way?

Through my love of running I got to learn about the Impact series of marathons, and right there as the inaugural run was Nepal. Impact Marathons has lofty goals; in fact they are aiming to contribute to the 17 UN Global Goals through running: “Our runners will truly see the impact of their time, money and resources by visiting and meeting all of the projects they are supporting”. As part of the week-long trip we will be helping repair a school that was badly affected by the earthquake, as well as undertaking other community projects. The actual marathon will be at the end of the week and promises to combine stunning views, up to 2300m of altitude, and 26.2 miles of running for a great cause. I am really looking forward to it.

This year I am hoping to complete several new challenges and up my running to ultra distances. At the moment, Nepal is my final planned race of the year but between now and then there is a lot of training to do.


I have a confession to make. While I have publicly supported the parkrun initiative for some time, and I wholeheartedly believe it is a great idea, I have never actually run one myself. There have been many excuses, from family conflicts at the weekends to blaming the weather, but this week I thought I would ignore all of this and partake in the spectacle. I chose the closest event, which for me was Southwick Country Park in Trowbridge, and arrived before the customary 9am start time. Nearly 300 people turned up to run the wet and muddy course and, despite not knowing anything about the logistics of parkrun (you pick up a finishing token at the end and have that scanned along with your personal parkrun barcode), I mingled into the crowd ready to run.

The course itself was great: 3 laps around a semi-gravelled path with some sections completely covered by 6 inches of rain water. On the first lap I tried to avoid the water, but this only pushed me onto the slippery surrounding mud, so for laps 2 and 3 it became obvious that the best option was to just get my feet wet.

I did not manage any personal bests but that was not the point; the event itself was great fun and the volunteers were excellent. This may have been my first parkrun but it is definitely not my last, I will be there again this Saturday, 9am, ready to do it all again.

Fujitsu ScanSnap 1300i with Ubuntu

I’ve been using the Fujitsu ScanSnap 1300i on Mac OS X for some time now in the pursuit of a paperless life but now that I am using Ubuntu more and more it was apparent that I was going to have to get this little scanner working on Linux. Searching the internet threw up a few interesting articles but nothing worked 100%. In the end the steps I used under Ubuntu 15.10 were:

Install sane and gscan2pdf:

$ sudo apt-get install sane gscan2pdf

Download the scanner firmware and copy it to the relevant directory:

$ wget && sudo mkdir /usr/share/sane/epjitsu && sudo cp 1300i_0D12.nal /usr/share/sane/epjitsu

Open up the scanner lid first then initialise it with:

$ sudo scanimage -L

Run the gscan2pdf application:

$ gscan2pdf

You can tweak some of the scanner options by clicking the scan button and playing around with the pop-up box tabs. For me the most important were to select duplex as I regularly scan double-sided documents and to select colour.

Happy scanning.

Running Ubuntu Snappy Core Virtualised on Mac OS X

Although most of the documentation out there today shows you how to run Ubuntu Snappy Core on an Ubuntu desktop, it is also pretty simple to do this on Mac OS X. In short:

Download the Ubuntu Snappy Core image from:

You will need the amd64 version of Snappy.

Unarchive the file:

unxz ubuntu-15.04-snappy-amd64-generic.img.xz 

Then convert the image into something that VirtualBox can run:

qemu-img convert -f raw -O vmdk ubuntu-15.04-snappy-amd64-generic.img ubuntu-15.04-snappy-amd64-rpi2.vmdk

At this point you want to create a new VM with VirtualBox. Make sure you create a Linux Ubuntu image, but when you get to the Hard drive section select “Use an existing virtual hard drive file”. Navigate to your .vmdk image and click create. Now, when you start the VM you should be greeted (pretty quickly) with the login prompt from Snappy.

Installing Ubuntu Snappy Core on a Raspberry Pi 2 using a Mac

This is a short guide to installing Ubuntu Snappy Core on a Raspberry Pi 2 using a Mac. It is pretty straight-forward but there are a couple of areas where you can get caught out.

First, download the Ubuntu Snappy image from:

As of writing the latest release was:

  • ubuntu-15.04-snappy-armhf-rpi2.img.xz

Insert your SD card if you haven’t done so already and use diskutil to find it.

$ diskutil list

Make sure you are confident that you know exactly which disk is your SD card before proceeding. The relevant part of my output was:

/dev/disk4 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                         31.9 GB    disk4
   1:             Windows_FAT_32 Untitled                31.9 GB    disk4s1

Unmount the disk with:

$ diskutil unmountDisk /dev/disk4

Then proceed to write your Ubuntu image to the card with:

$ unxz -c ubuntu-15.04-snappy-armhf-rpi2.img.xz | sudo dd of=/dev/rdisk4 bs=32m && sync

Notice the use of ‘r’ in front of the /dev/disk4 file.
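The ‘r’ matters for speed: /dev/rdiskN is the raw (character) device, which bypasses the buffer cache, so dd typically writes to it much faster than to /dev/diskN. A tiny helper sketch to derive the raw path from the buffered one (the function name is mine, purely illustrative, not a system tool):

```shell
# Derive the raw device path (/dev/rdiskN) from the buffered one (/dev/diskN).
# Writing through the raw device skips macOS's buffer cache, making dd faster.
raw_disk() {
  echo "$1" | sed 's|^/dev/disk|/dev/rdisk|'
}
raw_disk /dev/disk4   # prints /dev/rdisk4
```

Double-check the disk number before substituting the result into a dd command; the raw device is just as destructive as the buffered one.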

That's all there is to it. Pop the SD card into your Raspberry Pi 2 and start using Ubuntu Snappy Core.

Life with the Apple Watch

I confess, I’m a bit of a gadget hound. I own four different smart watches all with different OSs:

  • Pebble with PebbleOS
  • Samsung Gear with Tizen
  • Motorola 360 with Android Wear
  • and now Apple Watch with watchOS

When I first got the Pebble (Kickstarter model) I was instantly impressed. It was a device that lasted days, gave me notifications at a glance, and allowed me to keep my phone in my pocket unless it was really needed. The trouble is that I got bored pretty quickly with its lack of functionality, and the Samsung Gear looked enticing. With the ability to make calls, as well as everything the Pebble did and more, it seemed like a no-brainer. It originally came with a version of Android so buggy it made the device pretty much unusable. Much later this was ‘upgraded’ to Tizen which, to its credit, was better, but still limited. Enter Android Wear. In a blaze of publicity at Google I/O 2014 this wearable OS seemed perfect, so I went and purchased the Motorola 360, arguably the best looking device on the market. Unfortunately, this too was crippled: no sound (so no beeps, no notification noises), no ability to make and receive calls, no real way to get back to notifications once they were dismissed, and no really compelling stock applications. This watch just felt like a device that vibrated every time something happened and was ignored at all other times. Android Wear just wasn’t compelling enough, so I always gravitated back to Garmin watches (Forerunner 620, Forerunner 920XT). Now there is the Apple Watch.

I’m still in the honeymoon stages with it at the moment, but I have been wearing it exclusively for the past three weeks. It has proven to be a useful aid: the fitness app is a bit poor, and I’ll revisit this point in another blog post, but overall the experience of using it has been pleasant. I can read messages and email, make and receive calls, the calendar app is super useful, and I’ve found that I use it extensively for reminders. All in all it has been a success so far, but it is not without its problems. watchOS 2.0 promises to improve the device further and I am certainly looking forward to it, but for now the Apple Watch is the best smart watch in an immature market.


As a scholar of software engineering with a particular interest in the fields of ubiquitous computing and artificial intelligence, the recent series by AMC, “Humans”, really did pique my interest. Is it based on sensationalism, or could it be considered grounded in reality? Well, I believe it is a drama that reflects more of the latter than the former. I really like the concept so far, and it raises questions that previously only academia had explored in detail, concepts that movie studios love: artificial understanding, consciousness, love, and the projection of human traits onto non-human subjects (anthropomorphism).

Sure, the movie industry has toyed with many of these concepts, with many dollars flowing in at the box office, but what Humans does is ground them in such mundane, run-of-the-mill reality that the boundaries blur; it is that grey line the whole programme is toying with. What defines humanity and what distinguishes it from an imposter? What is the point in humans learning when machines can do it much better? And perhaps more importantly, do the majority of people, the Joe Bloggs of the world, care about the gap between what is possible with strong AI and what is human? It is a question that I believe will be at the forefront of minds for the next 50 years.

In short, I really do like Humans so far.

Creating bootable USB images on the Mac

Creating a bootable image for installing a Linux OS is pretty straight-forward, but on the Mac there is a specific way it needs to be done. I always use USB drives for this purpose, so what follows are the steps needed to create a bootable USB stick from a Linux .iso image.

I presume you have already downloaded your favourite Linux distribution in .iso format; below I’m using Debian Jessie.

First convert the .iso image into a .img image.

$ hdiutil convert -format UDRW -o debian-jessie-DI-rc1-amd64-netinst.img debian-jessie-DI-rc1-amd64-netinst.iso
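One quirk worth knowing, and the reason the later dd step reads a .img.dmg file: hdiutil appends a .dmg extension to whatever output name it is given, so asking for name.img actually produces name.img.dmg on disk. A small helper sketch to predict the resulting filename (the function name is mine, purely illustrative, not part of macOS):

```shell
# hdiutil appends ".dmg" to the name passed via -o, so converting "name.iso"
# with "-o name.img" creates "name.img.dmg". This helper (illustrative only)
# predicts the file hdiutil will produce for a given .iso input.
hdiutil_output_for() {
  iso="$1"
  echo "${iso%.iso}.img.dmg"
}
hdiutil_output_for debian-jessie-DI-rc1-amd64-netinst.iso
# prints debian-jessie-DI-rc1-amd64-netinst.img.dmg
```

Knowing the exact output name up front avoids the common "file not found" stumble when typing the dd command.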

You then need to find your USB drive.

$ diskutil list

Look for your USB device. I’ll use /dev/disk7 for this example. First make sure it is unmounted.

$ diskutil unmountDisk /dev/disk7

Then copy the image to the USB stick. CAUTION This will overwrite anything that is already on the drive.

$ sudo dd if=debian-jessie-DI-rc1-amd64-netinst.img.dmg of=/dev/disk7

Safely eject the USB disk before using it for booting on your target device.

$ diskutil eject /dev/disk7

And there you have it, a bootable, Linux install USB drive.

Trusted Execution Environments in Android

Continuing on from my post about TrustZone, it seems that there is a lot of interest in hardware-backed security for Android and what you can do with it. One of the most interesting things that a hardware-isolated area can do for devices, whether that be a dedicated co-processor or a technology such as TrustZone, is to provide a trusted environment dedicated to protecting your most valuable assets and the operations that are performed on them. Installing something like a micro operating system in this isolated area can give you a lot of features that the main OS just cannot gain access to, and this is the thrust of standards bodies such as Global Platform 1. This micro OS, or to use the popular parlance, a Trusted Execution Environment (TEE), is becoming more important in a world of one-click / swipe / wave-a-device payments and device authorisation, and over the coming years it will see a surge in popularity not only from independent vendors but from the large OS vendors too. But let’s take a step back.

The concept of a Trusted Execution Environment is to provide a secure area of the main processor, memory, and peripherals that can be used to perform privileged operations. First defined by the Open Mobile Terminal Platform (OMTP) forum in their Advanced Trusted Environment: OMTP TR1 standard 2 and later adopted by Global Platform in their standardisation effort, the TEE has become a bridge between pure software security mechanisms and hardware-only solutions. The TEE uses the isolation that technologies such as TrustZone enable to execute in the processor’s Secure World mode.

The TEE can be a fully-functional operating system offering software developers the opportunity to create Trusted Applications: applications that reside in the Secure World and perform security-critical functions outside of the control of the main operating system running in the Normal World. An example of such a Trusted Application can be a Trusted User Interface (TUI) - a display that is presented to the user completely protected by the Secure World and inaccessible to the main operating system such as Android. The interface could display sensitive information such as passwords and be confident that attacks such as screen scraping or video buffer capture would not reveal anything.

It is clear that the popularity of TEEs is increasing. Based on one commercial TEE vendor’s press releases, the adoption rate of the Trustonic TEE is reported to be over 100m devices every 6 months (source: - figures from February 2014 to July 2014), although widespread utilisation by third-party developers is yet to materialise. Ekberg et al 3 attribute this to a lack of access to the TEE, stating that “Despite TEE’s large-scale deployment, there’s been no widely available means for application developers to benefit from its functionality as mobile device manufacturers have restricted TEE access to their internal use cases.”, but also suggest that standardisation could solve this issue. Recent announcements by companies such as Linaro point to a more open access model 4, but we are yet to see commercial devices with OP-TEE technology.

In short, TEEs are here to stay and I expect that the likes of Apple and Android will open up access to this trusted area for more developers to enhance the security of their applications in the near future.

What are you passionate about?

I have recently been reading the book Talk Like TED by Carmine Gallo, which promises to bestow the virtues of great public speaking upon all who read it. Early on in the book there is a rather salient point that got me thinking, a point that starts with a simple question: “What are you passionate about?”. Now, there are quite a few things I am passionate about, but in the context of Software Engineering, my chosen career path, there is one thing that underpins all the great projects I have really enjoyed working on over the years. What is it? Data.

I am passionate about data, specifically the conclusions you can draw from it. This is not to say the actual gathering of data, although that can be quite interesting in itself: constructing tools and processes as you squirrel away the nuts of information that together paint a picture no individual data point can allude to. I am more passionate about the ‘Where’s Wally’ dance: the finding of that little something you’ve been looking for in a sea of noise, the epiphany, the moment, the unveiling. The answer to the puzzle, something you intrinsically know is just outside your grasp, and that with the data, that collection of measurements and information, will magically appear. The puzzle that is made up of a thousand pieces and, by putting them all together, becomes clear. That is what I’m passionate about. I guess my career has always followed that route of problem solving.

Software Engineering is a great field to be in if you enjoy problem solving: you get to create a solution from parts constructed with only your imagination, a programming language, and your favourite text editor. In my experience, the first solution you produce is often not quite what you were looking for, and the itch remains. You continue to iterate, introduce bugs, fix bugs, thinking of new and novel ways to answer your initial questions, and finally you have something that not only works, it satisfies that itch. When you employ this process to scratch a larger itch, a higher-level and more abstract problem that requires the gathering and analysis of data, I find there is satisfaction from the initial problem solving during development plus the benefit of discovering that pattern or snippet of information that you perhaps only suspected was there but is now proven by the data. Maybe this explains why I have an affinity with Pervasive Computing and its latest incarnation as a buzzword, the Internet of Things (IoT). Data inference is what I really enjoy.

I’ve gathered much data over the years: email archives and usage data, energy monitoring and the subsequent discovery of inefficient appliances, health data with Fitbit and Garmin, lifestyle monitoring with Slogger; it can all be combined to do wonderful things. But there is a tendency to gather data just for the sake of it, and I have certainly been guilty of that, so I am starting to take a step back and trust the data more, to make informed decisions based upon it; let’s see how that goes this year. Big data is definitely here, but the more important question everyone should be asking is “What do we do with all that data and how can it benefit humanity?”.

TrustZone For Android Mobile Security

Recently I was asked to provide a quick, high-level introduction to TrustZone and how it could potentially improve security on Android platforms. Any response to this is tricky: TrustZone is just a mechanism built into a platform that, if unused, does very little for device security, but when utilised to its fullest it can create a totally separate environment dedicated to protecting your most important secrets. But first, a bit of background.

According to Bloomberg 1 ARM’s chip designs are found in 99% of the world’s smartphones and tablets; 2013 alone saw ARM’s partners ship over 10 billion chips (source: ARM Strategic Report 2013). Popular devices such as the Apple iPhone and iPad, Amazon’s Kindle, and Samsung’s flagship Galaxy series all use a Central Processing Unit (CPU) based on an ARM design. In 2004 ARM released its design for a hardware-enforced parallel execution environment for the PB1176 and ARMv7 architectures that was adopted into all later application processor designs.

TrustZone itself is an implementation of device-level security utilising extensions to the CPU and the Advanced Microcontroller Bus Architecture (AMBA), or memory bus. By connecting all these components together in a homogeneous architecture it is possible to construct two distinct ‘worlds’: a “Secure World” and a “Non-Secure World” (or “Normal World”) 2. The two modes are orthogonal to each other, with the Secure World enjoying full access to all memory regions and privileged CPU areas whereas the Normal World can be restricted. This arrangement is configured during the boot process. The interface between the two worlds is governed by a special Secure Monitor Mode, accessible via an interrupt instigated with the Secure Monitor Call (SMC) instruction. Identifying which world the processor is currently executing in is possible through an extra ‘flag’ known as the NS, or Non-Secure, bit. All components that wish to use the functionality provided by TrustZone must be aware of this flag.

With TrustZone it is possible to isolate an area of the CPU, memory, and peripherals for use by a trusted software component called a Trusted Execution Environment (TEE) 2 or other such privileged software. For example, Android’s implementation of the core cryptographic keystore functionality, KeyChain, can use hardware components such as TrustZone, the SIM card, or a Trusted Platform Module (TPM) to enhance overall security. By using TrustZone a device can provide secure software functionality backed by the hardware it is running on.

It is clear that with more widespread use TrustZone could benefit an increasingly mobile society who expect to do the most secure of operations with their devices.

  1. [return]
  2. J. Winter. Trusted computing building blocks for embedded linux-based arm trust- zone platforms. In Proceedings of the 3rd ACM workshop on Scalable trusted com- puting, pages 21–30. ACM, 2008. [return]

Getting back into blogging

It’s been a while; in fact it has been around a year since I updated this site (to be fair, I did write a few posts on another blog during that period … excuses, excuses), which I attribute partly to an increasingly busy schedule but more to a lack of enthusiasm. So, in an attempt to get back into this blogging lark, I thought it would be a good opportunity to redesign the site with Hugo, a static, and more importantly Markdown-based, web engine, and put up a few articles on something dear to my heart: Software Engineering. So expect more development-related posts interspersed with running, triathlon, travel, and other randomness as I attempt to do this on a semi-regular basis.

Oh, and if you are looking for any of my past entries from 2007 onwards, they will be back up shortly as I figure out how to convert WordPress content to Hugo while keeping some resemblance to the original posts.