Off to Cyprus

It seems somewhat fitting that, almost exactly five years after I became aware that I had Lyme-like symptoms, I’ve made the big decision to travel to Cyprus for intensive treatment. My own home treatment has been good at keeping the worst symptoms at bay, but it has proven very hard to totally eradicate the infection.

Cyprus has some world-class medical clinics and seems to be leading the field in Lyme treatment using ozone. From June 20th this year I’ll be attending a newly-built clinic in Limassol (so new that it’s not mentioned on that site yet) for a month.

I will continue to work as all I need is a laptop and an Internet connection. My accommodation and the clinic are all wifi-ed up!

I’ll post more updates as I progress through this treatment, and give you all the inside info in case you are thinking of making the same trip.

Posted in Lyme, personal | Leave a comment

Random Test Inputs – part 2

If you saw my last post, Random inputs in unit testing, you’ll remember that I was advocating the benefits of using random test data in your unit tests.

One of the bits of feedback that I had (and it seems to be the main complaint of most people who are against this) is that you need reliable test input so that you can recreate failures.

I don’t think these two concepts are mutually exclusive. Let me explain.

A decent test framework will help you recreate failures

I use testtools as my preferred test framework for Python. One of its benefits is that its so-called Matchers will output useful information for failures, including the data used for input. As a very basic example:

import testtools 
 
class TestMe(testtools.TestCase): 
    def test_failure(self): 
        self.assertThat(5, testtools.matchers.Equals(4))

This produces the output:

$ python -m testtools.run fail.py       
Tests running... 
====================================================================== 
FAIL: fail.TestMe.test_failure 
---------------------------------------------------------------------- 
Traceback (most recent call last): 
  File "fail.py", line 7, in test_failure 
    self.assertThat(5, testtools.matchers.Equals(4)) 
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 435, in assertThat 
    raise mismatch_error 
testtools.matchers._impl.MismatchError: 4 != 5

You can see that the inputs are clearly listed. If this is not clear enough you can add more explanatory text that is output on failure. Changing your assertion to:

def test_failure(self):
    self.assertThat(5, testtools.matchers.Equals(4), "Expected 4, got 5")

Results in:

testtools.matchers._impl.MismatchError: 4 != 5: Expected 4, got 5

Thus, if you get a failure from a random input, it’s easy to see what data caused that.
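To make that concrete, here is a minimal sketch (my own example, not taken from the testtools documentation) of a deliberately failing test with a random input; the generated number appears directly in the mismatch output:

import random
import testtools
from testtools.matchers import Equals


class TestRandomInput(testtools.TestCase):
    def test_random_failure(self):
        value = random.randint(0, 1000)
        # Compare against a deliberately different value so the test fails
        # and the randomly generated input shows up in the failure output.
        self.assertThat(value, Equals(value + 1))

The failure message includes both values, so the offending random input is right there in the output.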

Making your data recognisable

Once you start randomising a lot of input data, it becomes hard to identify which input is which when a test with many inputs fails. Take this example, adapted from my days as the MAAS engineering lead:

from itertools import imap, islice, repeat
import testtools
import random
import string


class TestMe(testtools.TestCase):
    # An endless iterator of random letters and digits (Python 2 style,
    # hence imap and string.letters).
    random_letters = imap(
        random.choice, repeat(string.letters + string.digits))

    def make_string(self, size=10):
        """Return a random string of the given size."""
        return "".join(islice(self.random_letters, size))

    def test_large_comparison(self):
        dict1 = dict(
            person=self.make_string(),
            age=self.make_string(),
            weight=self.make_string()
        )
        expected_dict = dict(person="foo", age="10", weight="200")

        self.assertThat(dict1, testtools.matchers.Equals(expected_dict))

This results in:

====================================================================== 
FAIL: fail.TestMe.test_large_comparison 
---------------------------------------------------------------------- 
Traceback (most recent call last): 
  File "fail.py", line 26, in test_large_comparison 
    self.assertThat(dict1, testtools.matchers.Equals(expected_dict)) 
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 435, in assertThat 
    raise mismatch_error 
testtools.matchers._impl.MismatchError: !=: 
reference = {'age': '10', 'person': 'foo', 'weight': '200'} 
actual    = {'age': 'HL8qw3xnJO', 'person': 'IuhoaGzQHB', 'weight': 'Rfi3lxsiHf'}

My first question on seeing output like this is “where did that data come from?” because it could have leaked from buggy code and be totally unrelated to your intended input.

This has an easy answer if you make a small change to the test harness:

    def make_string(self, prefix="", size=10): 
        return prefix + "".join(islice(self.random_letters, size)) 
 
    def test_large_comparison(self): 
        dict1 = dict( 
            person=self.make_string("person"), 
            age=self.make_string("age"), 
            weight=self.make_string("weight") 
        ) 
        expected_dict = dict(person="foo", age="10", weight="200") 
 
        self.assertThat(dict1, testtools.matchers.Equals(expected_dict))

Here, the code has been changed so that the make_string() function will accept a fixed prefix for the string you want generated. This has the following effect on the output:

reference = {'age': '10', 'person': 'foo', 'weight': '200'} 
actual    = {'age': 'ageb8BIwdwZrM', 
 'person': 'personBSUhvYxFTU', 
 'weight': 'weightngdjfm7ef1'}

You can instantly see that your random input was in fact generated in the way you intended (or not, as the case may be), making it easy to identify its source!
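If you use this pattern across a lot of tests, it can be worth pulling the string maker out into a small shared helper. Here is a minimal sketch of that idea; the Factory name and its methods are my own illustration, not taken from MAAS or testtools:

import random
import string


class Factory(object):
    """Hypothetical helper for generating recognisable random test data."""

    def make_string(self, prefix="", size=10):
        # The prefix makes each value easy to trace back to where it was made.
        chars = string.ascii_letters + string.digits
        return prefix + "".join(random.choice(chars) for _ in range(size))

    def make_int(self, minimum=0, maximum=1000):
        return random.randint(minimum, maximum)


factory = Factory()

Tests can then call factory.make_string("hostname") or factory.make_int() instead of each test case defining its own helpers.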

I hope this was useful. Leave me feedback if this helped you at all!

Posted in tech | Leave a comment

Random inputs in unit testing

(Image credit: StockMonkeys.com, CC BY)

I recently had an upstream reviewer tell me that I should not randomise my test input because “randomness does not provide a useful input to the test and sets a bad example of practices for writing tests”.

I am going to explain here why this is wrong and why it’s actually good practice to randomise inputs. Let me start by saying that random test failures are not the same thing as spurious test failures; I’ll come back to that later.

Consider this simple code under test; it’s a contrived example, but you will get the idea:

def myfunc(thing):
    """This function just returns what it's given."""
    return "foo"

OK, so let’s consider this a stub implementation, as it has an obvious bug. If we wanted to write a test for it, we might write something like this:

import unittest


class TestMyfunc(unittest.TestCase):
    def test_myfunc_returns_same_input(self):
        returned = myfunc("foo")
        self.assertEqual(returned, "foo")

Here, I am using a fixed input of “foo”, as many people like to do in tests, as a way of saying “this value is irrelevant”.

The bug should be obvious here — the test passes when it should not because the code under test is returning the same value as used in the test. As I say a fairly contrived example, but it illustrates the point that tests should never assume anything about code under test.

Here’s a better way of writing the test:

import random

class TestMyfunc(unittest.TestCase):
    def test_myfunc_returns_same_input(self):
        expected = random.randint(0, 1000)
        returned = myfunc(expected)
        self.assertEqual(returned, expected)

(A further improvement could be to generate a random string, but I’ll leave that for a future blog entry.)

Here, we’re generating a random input and asserting that the returned value is the same as the input. This not only avoids the bug above but it is far better at demonstrating test intent. It will also never fail unless the code under test is buggy, and that brings me back to the point above about random vs spurious test failures.

A random test failure is good: it means you found a bug! A spurious test failure is one that indicates you’re not testing properly. An example is a test that depends on network connectivity to complete; networks are inherently unreliable, so that test will fail spuriously whenever the network does.

Finally, I can recommend that you look at a tool called Hypothesis, which is a property-based testing utility. My friend Jono explains it in his blog here: https://jml.io/2016/06/evolving-toward-property-based-testing-with-hypothesis.html
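To give a flavour of property-based testing, here is a minimal sketch of the same idea written with Hypothesis (assuming Hypothesis is installed and the test is run with a runner such as pytest); against the buggy stub above it fails for essentially any generated string:

from hypothesis import given, strategies as st


def myfunc(thing):
    """The buggy stub from above: it ignores its input."""
    return "foo"


@given(st.text())
def test_myfunc_returns_same_input(value):
    # Hypothesis generates many example strings and shrinks any failure
    # down to a minimal counterexample (here, probably the empty string).
    assert myfunc(value) == value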

Posted in tech | 1 Comment

Glen Rock State Forest

The latest Phantom 4 video shot at Glen Rock State Forest in South East Queensland’s Scenic Rim.

Posted in Uncategorized | Leave a comment

Montville via Phantom 4

Enjoy the spectacular scenery of the Sunshine Coast Hinterland.

Posted in Drone | Leave a comment

Super Moon

While I was dropping my mother at the airport tonight, I thought I’d try to get a nice shot of the “super moon”. I wasn’t too disappointed with this result!

It’s hard to convey just how bright the moon is. I processed this shot so the detail stands out, but in real life it’s a glowing white ball.

Supermoon

Posted in Photography | Leave a comment

Another drone video on the Brisbane River

I tried to do some colour grading here. The range is spot on now, but I think I could increase the saturation a bit. Next time.

Posted in Drone | Leave a comment

LXD on Linode servers

I was recently trying to get LXD working on my Linode server, but was getting this error:

$ lxc launch ubuntu:16.04 wordpress       
Creating wordpress
Starting wordpress
error: Error calling 'lxd forkstart wordpress /var/lib/lxd/containers /var/log/lxd/wordpress/lxc.conf': err='exit status 1'
  lxc 20161010203338.247 ERROR lxc_seccomp - seccomp.c:get_new_ctx:224 - Seccomp error -17 (File exists) adding arch: 2
  lxc 20161010203338.247 ERROR lxc_start - start.c:lxc_init:430 - failed loading seccomp policy
  lxc 20161010203338.247 ERROR lxc_start - start.c:__lxc_start:1313 - failed to initialize the container

The fantastic Stéphane Graber helped me work out that the default Linode kernel doesn’t have the right bits compiled into it, and that I should be using an Ubuntu kernel instead.

So, after following the guide at https://www.linode.com/docs/tools-reference/custom-kernels-distros/run-a-distribution-supplied-kernel-with-kvm to upgrade my kernel, LXD now works.

Posted in tech | Leave a comment

Brisbane River Drone Flight

My second drone video! A bit longer, and has a special appearance by my dog at the end.

Posted in Drone, tech | Leave a comment

Droning on!

I went and bought myself a DJI Phantom 4. Holy crap, I’m impressed: 12MP images and 4K video, not to mention all the other bells and whistles like collision avoidance and auto return-to-home. And when you put it in speed mode, it does 70km/h!

Here’s the first of a couple of videos I put together for it:

Posted in Drone, tech | 2 Comments