Self-treating Lyme Disease with ozone

Like many people with chronic Lyme Disease, I’ve struggled to get on top of treating it, since it’s a very persistent infection. Antibiotics would keep the symptoms at bay for a while, then the inevitable resistance developed and the symptoms came back.


Hocatt Sauna

Late last year I discovered how effective ozone treatment is as a replacement for antibiotics. I initially treated intravenously at a local clinic, where blood is removed, infused with ozone and then replaced. This was amazingly effective! I then followed this up at the same clinic with some sessions in a Hocatt sauna.

The Hocatt was just as effective for me as the intravenous delivery. The problem, though, is that it is a very expensive piece of equipment, and naturally the clinic charges a commensurate amount of money to use it.

DIY Time!

Because of the expense, I set about recreating my own ozone treatment. It turns out to be reasonably simple and, for me, very effective indeed. Please note that this is not general medical advice and it may not work for you; I am just explaining what worked for me.

So here’s my kit:

  • One ozone generator
  • One portable steam sauna tent

That’s literally it. The ozone machine was AUD $130 and the steam tent was AUD $80 or so, both bought from eBay. This is what it looks like ready to go.


The ozone machine is bottom left of the picture and the steam generator is bottom right.







I put a fold-up camping chair inside the tent to sit on.

You can see the steam outlet on the floor at the back.






Close-up of the ozone machine. The cylindrical object is an air dryer, which makes the machine more effective at generating ozone. This one makes 500mg of ozone an hour.







I feed the tubing from the ozone generator into the tent via the zip holes meant for hands at the front.







And here’s the boiler that generates the steam. It feeds steam via the tube at the side.






How to use it

The boiler takes about ten minutes to get hot enough to make steam, so set it going, and at the same time zip up the tent and start the ozone generator. I program it for 40 minutes, allowing a ten-minute “pre-fill” period and 30 minutes of sauna time for me.

As soon as you hear the boiler boiling, unzip the tent and get in quickly to avoid letting out the ozone gas. It will be a bit smelly; try not to breathe it in, as too much of it will irritate your lungs.

Wrap a towel around your neck and zip up the tent with your head stuck out the top, sealing off your neck as much as you can. OK, now relax for 30 minutes!


  • Really, try not to breathe in the ozone, it will damage your lungs.
  • Put the sauna tent OUTSIDE, you don’t want ozone indoors.
  • You need to be able to stand a bit of heat for this to be really effective. If you feel too hot, get out, or open the hand/arm zips to let some heat out.
  • If you feel faint, get out immediately. Your blood pressure might be too low to stand the heat. If you have a BP monitor, do use it!

What to expect

The first few times you do this, you’ll feel pretty whacked; it’s quite intense. For that reason, it’s a good idea to build up to the 30 minutes over a few sessions, or even leave out the steam initially. The last 10 minutes of the 30 are the toughest, but they are also the most effective, so try to get there gently.

I usually get a herx from this between one and four hours after getting out. I do not do another session until two days after my previous herx has stopped.

Good luck!


Posted in Lyme | Tagged | 3 Comments

Yubikey as Google Authenticator on Ubuntu

Two-factor authentication (2FA) is a fact of life these days for serious security. Many sites accept Google Authenticator, which uses a time-based code on your phone that changes every 30 seconds.

A Yubikey is another 2FA device. It works as a USB HID (it appears as a keyboard) and can send one-time codes when its button is pressed, which is loads more convenient than opening up an app on your phone.

Because it doesn’t have a clock, however, it might not seem apparent how you can use it as a Google Authenticator replacement. But there is a way!

Yubico has a few tools that you can use to program the key. On Ubuntu you can grab them by installing the yubikey-personalization package:

sudo apt-get install yubikey-personalization

You will also need a Python script that handles the interaction with the Yubikey.


Finally, you will need the Google Authenticator secret key. It’s not easy to get this from an existing configured Google Authenticator, but if you are using it for SSH it may be on your SSH host, in the first line of the $HOME/.google_authenticator file. If not, you need to talk to your admin.
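For reference, the secret stored by the google-authenticator PAM module is base32-encoded, while ykpersonalize’s -a option expects a hex key, so a conversion step is needed somewhere (I’m assuming that’s what the script’s --convert-secret option does). A minimal sketch of the conversion:

```python
import base64

def authenticator_secret_to_hex(secret):
    """Convert a base32 Google Authenticator secret (the format stored
    by the google-authenticator PAM module) into the hex key string
    that ykpersonalize's -a option expects."""
    secret = secret.strip().replace(" ", "").upper()
    secret += "=" * (-len(secret) % 8)  # base32 needs padding to a multiple of 8
    return base64.b32decode(secret).hex()
```

For example, the RFC 6238 test key “12345678901234567890” is “GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ” in base32, and this converts it back to its 40-character hex form.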

OK, now you can program your Yubikey. The Yubikey has two configuration slots. I put mine in slot 2, but you can use slot 1 if you prefer.

ykpersonalize -2 -o chal-resp -o chal-hmac -o hmac-lt64 -a $(./ --convert-secret | cat) -y

This will prompt you for the Google Authenticator secret (change the -2 to a -1 if you want to use slot 1 instead). Now you are ready to generate the 6-digit codes that Google Authenticator uses.

As I said above, the codes are time-based but the Yubikey doesn’t have a clock, so you need to use the script to send the right challenge to the key, which will respond with the code:

./ --yubi-no-sudo

If you used slot 1 instead of slot 2, you’ll need to change the hard-coded slot in the script where it constructs the ykchalresp command (around line 103).

So this is nice but we can make it more convenient by using a global shortcut. I use KDE as my desktop environment but you should be able to adapt this to other desktops.

There may be a better way of sending keystrokes to the focused window in KDE than this, but I am using a program called xte, which you can find in the xautomation package:

sudo apt-get install xautomation

Now, open up your system settings and go into the Workspace/Shortcuts section, then click on “Custom Shortcuts”. (This may be under Common Appearance and Behaviour/Shortcuts and Gestures if you’re using an old version of KDE, like the one on Trusty 14.04.)

Then click the Edit drop-down and select New → Global Shortcut → Command/URL. This will give you a new shortcut called “New Action” by default (you can click on that and rename it), which has three tabs on the right: Comment, Trigger and Action.

Under Trigger you can assign a global shortcut. I am using Ctrl-Alt-Y (Y for Yubikey).

Under Action you need to paste some code into the Command/URL text box. Assuming you put the script in /usr/local/bin:

echo str $(/usr/local/bin/ --yubi-no-sudo) | xte; echo key Return | xte

Now, when you press Ctrl-Alt-Y, a code is generated and passed to xte, along with a Return keypress. xte sends the provided input to the currently focused window.


Much quicker than opening up the Google Authenticator app every time!

(PS If someone tells me how to do this on Ubuntu desktop I’ll add the instructions here)

Posted in tech | Leave a comment

Dr Brad McKay’s response to critical analysis…

Censorship and blocking those who criticise, of course.

A number of people have posted on his Facebook page (mostly) politely questioning his article’s worth and pointing out errors. As is the usual response of someone who is unable to deal with his own cognitive dissonance, he removed all the posts and blocked those who posted them.

I even got blocked on Twitter for asking for a reply to my question about NATA’s mutual recognition of overseas labs that are returning positives for Australians.

While these scoundrels continue to trot out the tired old lines, we are winning the argument with logic and science.

Lyme is in Australia.

Posted in Lyme | Leave a comment

Open Letter to Dr Brad McKay

Dear Dr McKay,

You recently had an article entitled “The great Australian Lyme conspiracy” syndicated in various newspapers across the world, and appeared on Australian TV.

As you are most likely aware by now, this article has been extremely controversial.

I would like to address some of the inaccuracies and omissions in your article about Lyme Disease in Australia.

“No proof”

Your article states “Lyme disease is real, but there’s no scientific proof that it’s occurring in Australia.”

This is incorrect. B. queenslandica was found in rats in Richmond, North Queensland, in 1962. Additionally, other tick-borne infections associated with Lyme Disease, such as Bartonella, Babesia and Rickettsia, are found in Australia.

A more recent ongoing study at Murdoch University has also found evidence of relapsing fever Borrelia and a new type of Neoehrlichia bacterium.

“Only in Europe and North America”

Your article states “this bacteria is transmitted to humans via tick bites in North America and Europe”.

This is incorrect. Various Borrelia strains have been identified across Asia and Japan.

“Overseas labs are unaccredited”

This is incorrect. Having been pressed by various people, including myself, on Twitter, you have said that the only valid accreditation is NATA’s. To summarily dismiss overseas accreditations as bogus is highly illogical.

The two main labs that people use for overseas tests, IGeneX in California and ArminLabs in Germany, are both accredited. IGeneX is accredited under CLIA, the Clinical Laboratory Improvement Amendments of 1988: United States federal regulatory standards that apply to all clinical laboratory testing performed on humans in the United States, except clinical trials and basic research.

This is a very stringent accreditation.

ArminLabs is a German specialist Lyme testing lab run by Dr Armin Schwarzbach, formerly of Infectolab and the BCA clinic, which treats tick-borne diseases. ArminLabs works in association with Gärtner Labs in Ravensburg and so has been accredited by the Deutsche Akkreditierungsstelle GmbH (DAkkS), the German Accreditation Board; all its tests are CE-certified for use within the EU.

NATA also has reciprocal agreements with many other countries, including the two mentioned above.

Again, dismissing overseas labs with government-accredited approval is not only highly illogical, but by your own insistence that NATA is the only valid accreditation, patently wrong by virtue of the mutual recognition.

Dismissal of criticisms of NATA labs

You state “Lyme activists will tell you that NATA-accredited labs don’t detect Borrelia because their machines aren’t sensitive enough to pick it up. The truth is that unaccredited labs aren’t specific enough, and tend to deliver positive results for Borrelia whether you’ve got Lyme disease or not.”

The truth is that NATA labs in Australia only test for 2 of the 14 species of Borrelia known to cause Lyme disease, Lyme-like disease or relapsing fever. So when you talk about labs not being specific enough: being specific to the point of checking only a small number of species is certainly going to return fewer positives.


“We don’t know what it is, but we know it’s not Lyme.”

How do you know it’s not Lyme? You cannot prove a negative. For example, if I claim there are invisible pixies at the bottom of my garden, you are not going to be able to prove otherwise.

“Using up to four weeks of antibiotics is the treatment recommended to eradicate Borrelia”

This is only if you follow the outdated and discredited IDSA guidelines, which were recently dropped by the CDC. The latest peer-reviewed guidelines published by ILADS do not recommend only four weeks of antibiotics. They state:


Treatment regimens of 20 or fewer days of phenoxymethyl-penicillin, amoxicillin, cefuroxime or doxycycline and 10 or fewer days of azithromycin are not recommended for patients with EM rashes because failure rates in the clinical trials were unacceptably high. Failure to fully eradicate the infection may result in the development of a chronic form of Lyme disease


While continued observation alone is an option for patients with few manifestations, minimal QoL impairments and no evidence of disease progression, in the panel’s judgment, antibiotic retreatment will prove to be appropriate for the majority of patients who remain ill. Prior to instituting antibiotic retreatment, the original Lyme disease diagnosis should be reassessed and clinicians should evaluate the patient for other potential causes of persistent disease manifestations. The presence of other tick-borne illnesses should be investigated if that had not already been done. Additionally, clinicians and their patients should jointly define what constitutes an adequate therapeutic trial for this particular set of circumstances.

“Use ELISA as a screening test”

ELISA is known to deliver both false positives and false negatives. You said to me on Twitter that we should use ELISA as a screening test and if it’s positive, then use the more accurate Western Blot. Given the failure rate of both ELISA and Western Blot, this is a highly illogical approach.
The CDC itself says that “the diagnosis of Lyme disease is based primarily on clinical findings, and it is often appropriate to treat patients with early disease solely on the basis of objective signs and known exposure.” Based on this recommendation, the diagnosis of Lyme disease should not be contingent on a positive ELISA followed by a positive Western Blot.

Both tests rely on antibody proteins produced by the immune system, and both HIV and Borrelia are known to suppress immune response. If someone tests negative but is still symptomatic, a clinical diagnosis is valid.

You didn’t tell us how your Lyme patient fared

Your article says: “I sent her straight to hospital in an attempt to save her liver and her life.”

You didn’t tell us what your diagnosis was, if this woman does not have Lyme. Presumably she had unexplained neurological, joint, fatigue and cardiac symptoms? CFS/ME, Fibromyalgia and “I don’t know” are all neither useful nor helpful, being symptom labels rather than causative explanations.

Can you tell us about all the people you’ve successfully treated who have come to see you with these conditions? How did you treat them? What were their long-term outcomes?


Your article is not only misleading and inaccurate, it is dangerous. There are many chronically ill people in Australia who need help who may now not seek advice in the right place.

Let me propose something to you: If someone presents with symptoms known to be Lyme disease (joint pain, neurological problems, cardiac problems and fatigue, to name a few) what is the likelihood they have a number of concurrent separate issues? If that same person knows when they were bitten by a tick and experienced an EM rash shortly before onset of symptoms, what is the most likely cause? If someone spent years with doctors who cannot make this person better, who then seeks a Lyme-literate physician and undertakes a Lyme-specific protocol and recovers, what is the most likely cause of the original symptoms?

Let’s look at the balance of probabilities here.

I humbly await your response.

Edit: I’m attaching a further reference, Karen Smith’s counter-argument to the Australian Government’s denial, which goes into some scientific detail about why the study performed over 20 years ago was flawed.

Posted in Lyme, personal | Tagged | 6 Comments

WebEx in Ubuntu LXC containers

If, like me, you’ve Googled around looking for a solution to get Cisco WebEx working in Ubuntu and nothing really explained it properly, or you ended up with a messed up system, then I am here to help!

Most of the stuff I’ve seen requires a 32-bit installation of Firefox, which doesn’t help me much since I use a 64-bit OS, so I decided to put it all in a container (which is good practice anyway for anything that installs binaries).

Here, I’m installing my container as root, as it removes a load of hassle later. You can create containers as a regular user, but that needs more configuration, which overcomplicates things; I’ll leave it as an exercise to the reader to figure that out.

Create a 32-bit container, I’m calling mine “webex”:

sudo lxc-create -n webex -t download

It’ll prompt you for details; answer ‘ubuntu’, ‘trusty’, ‘i386’.

Edit the config at /var/lib/lxc/webex/config and add these lines:

lxc.cgroup.devices.allow = c 116:* rwm
lxc.mount.entry = /dev/snd dev/snd none rw,bind,create=dir 0 0

These allow the container to access the host’s sound device.

Now start up the container and access its console:

sudo lxc-start -n webex
sudo lxc-attach -n webex

The first thing I do is install openssh-server:

sudo apt-get install openssh-server

and then install firefox and a java plugin. Some blogs say you need Oracle Java, but I find that OpenJDK works fine.

sudo apt-get install firefox icedtea-7-plugin openjdk-7-jre

At this point, go ahead and set a password for the ubuntu user:

passwd ubuntu

Log out of the root console and now you can SSH into the ubuntu account like this:

ssh -Y ubuntu@webex

(I’ve left out the bit where ‘webex’ resolves to a real machine, just add it to your ssh config)

The -Y tells ssh to forward Xserver connections back to the host.

Now, we can test the sound to make sure that the config worked, try something like this:

aplay /usr/share/sounds/alsa/Front_Center.wav

If you hear the test sound, then it’s all good. If you don’t hear it, and get an error, then you’ll have to Google. In my case, the command was working without any error but there was no sound. I fixed this by adding a custom .asoundrc in the ubuntu user’s home directory:

pcm.!default {
    type plug
    slave.pcm {
        type hw
        card 1
        device 0
    }
}

defaults.ctl.card 1

It’s quite likely you’ll have to edit this for your sound hardware, but then again it may just work. I’m not an ALSA expert; do some Googling if there’s still no sound. You just need to find the right device. You can test more quickly with a line like this:

aplay -D plughw:1,0 /usr/share/sounds/alsa/Front_Center.wav

Vary the device numbers from 1,0 until you hit the right combination. Hopefully you’ll get it working eventually.

Now start up firefox and visit the WebEx test site.

Start up a test meeting, then close down firefox straight away. You did this step to get a .webex directory created, but it needs fixing. In the .webex directory you’ll see some files like this:

ubuntu@webex:~/.webex$ ls -F
1524/ remembercheckbox.bak tmpfile/

The numbered directory may have a different name, but you will have one nonetheless. Change into it and you’ll see some files, including some .so files. The problem is that these files depend on other libraries which are not present in Ubuntu’s recent releases (they used to be provided by the ia32-libs package, which no longer exists). However, we can work out what’s needed and install the packages manually.

First, we need to install a helper to find the files:

sudo apt-get install apt-file
sudo apt-file update

Now find the files that are missing:

ldd *.so | grep "not found" | sort -u

Now review what’s missing. You will see one “ => not found” line per missing library (the exact list may differ on your system), for example: => not found

Now for each missing file, we use apt-file to find out which package will install it:

apt-file search

And then install with:

sudo apt-get install -y libxmu6

After you finish this for each file, you should be all set. Start up firefox again and visit the test WebEx meeting. With any luck, the audio buttons will now be active and you can start your WebEx meeting!

Note, I am still missing one library, but things still work for me anyway. Go figure …

Posted in tech | 3 Comments

A rant on printer DRM

EDIT: I have since found a fix which works like a charm.

This post is unashamedly a total rant about printer DRM. If you don’t enjoy a good rant, you’d better stop reading now.

I have the relatively cheap Samsung ML-2240 laser printer. It recently started running out of toner so I ordered a new cartridge.

RANT ONE: I can’t just buy the damn toner to refill it, you need a whole new drum cartridge, wasting perfectly good hardware. What the fuck?

I plugged in the cartridge and turned on the printer. Its light frustratingly stayed red, which means something is wrong. I plugged the old cartridge back in to check the printer wasn’t broken, and the light went green (albeit with a low toner light).


I contacted the people who sent me the cartridge and complained. After a few back and forth emails, it turns out that my printer has got regional DRM and because I bought it in the UK it won’t accept cartridges from here in Australia.

RANT TWO: My printer has got fucking regional restrictions on where it can be used. What the fuck?

RANT THREE: I did some reading and it turns out that the chip also has a page counter in it and will lock out the cartridge when it gets to 1500 pages! What the fuck?

I ended up mail ordering a hacked cartridge chip from a UK retailer to replace the one in the Australian cartridge, so that it can be reused in the UK printer. I was shocked by what I read in the instructions:

RANT FOUR: If the chip thinks the toner cartridge has totally run out of toner, it permanently bricks the cartridge. What the fuck?


I’m done with Samsung. Here’s a message to the Samsung printer people:


Posted in tech | 1 Comment

SAML Federation with Openstack

This is a bit of a follow-up to my last post on Kerberos-based federation, so this post will make a lot more sense if you read that one first. Kerberos didn’t really suit my needs because there’s no real web sign-on to speak of, so getting hold of a Kerberos ticket in a friendly way on non-Windows platforms is problematic. The answer is to use SAML, which has some good support in Keystone, with more to come.


I’m not going to go into too much detail of how SAML works here, and assume you know a little, or are prepared to infer things as you go from this post. There’s more detailed information in the Shibboleth wiki but importantly you must know the concept of an identity provider (which holds authentication data) and a service provider (which protects a resource).

In this example, I’m going to use Shibboleth as the service provider and the TestShib service as the identity provider.

As before, I am doing all this on Ubuntu so if you’re on a different OS you’ll have to tweak things.


Shibboleth is quite solid, but its logs and error messages are extremely cryptic and not particularly helpful. There are quite a few gotchas, and it simply doesn’t tell you exactly what went wrong. The main one is that the entityID configs in Shibboleth and in Keystone MUST match up, and Apache must have its ServerName configured to the matching domain name.

Apache config

You will need the shibboleth module for Apache so go ahead and install it:

sudo apt-get install libapache2-mod-shib2

That will enable the module, so you don’t need to do that explicitly. You’ll also have a shibd daemon running after installation.

Inside your Virtualhost block in /etc/apache2/sites-enabled/keystone, you’ll need to add some Shibboleth config:

<VirtualHost *:5000>

  WSGIScriptAliasMatch ^(/v3/OS-FEDERATION/identity_providers/.*?/protocols/.*?/auth)$ /var/www/keystone/main/$1

  <Location ~ "/v3/auth/OS-FEDERATION/websso/saml2">
    ShibRequestSetting requireSession 1
    AuthType shibboleth
    # ShibRequireAll On  # Enable this if you're using Ubuntu 12.04
    ShibRequireSession On
    ShibExportAssertion Off
    Require valid-user
  </Location>
</VirtualHost>

<VirtualHost *:80>
  <Location /Shibboleth.sso>
    SetHandler shib
  </Location>
</VirtualHost>

You also need to make sure that your Apache knows what its server name is. If it complains that it doesn’t when you restart it, add an explicit ServerName directive that matches the exact domain name that you are going to give to testshib, shortly.

Now restart Apache.

sudo service apache2 restart

Testshib config

Visit the testshib website and follow the instructions carefully. It will eventually generate some Shibboleth configuration for your service provider, which you need to save as /etc/shibboleth/shibboleth2.xml

If you take a look at the config, you’ll see three important things.

<ApplicationDefaults entityID="<your service provider ID>" REMOTE_USER="eppn">

You need to remove REMOTE_USER entirely as this causes Keystone to do the wrong thing.

Inside the ApplicationDefaults you’ll see:

<SSO entityID="">

This is the part that tells Shibboleth what the ID of the identity provider is. Further down the file you’ll see something like:

<MetadataProvider type="XML" uri=""
 backingFilePath="testshib-two-idp-metadata.xml" reloadInterval="180000" />

It tells Shibboleth where to get the IdP’s metadata, which describes how to interact with it (mainly URLs and signing keys).

These three parts are the main parts of the config that describe the remote IdP. If you change the IdP for a different one, it’s unlikely you’ll need to edit anything else.

Keystone config

As in the Kerberos post, you need to enable some things in keystone.conf. Since I wrote that post, federation has become enabled by default in Kilo, so there’s much less to do now. Basically:

  • Configure trusted_dashboard, e.g. trusted_dashboard = http://$HOSTNAME/auth/websso/
  • Add saml2 to the list of protocols:
saml2 = keystone.auth.plugins.mapped.Mapped
  • Set the remote ID attribute so Keystone can tell which IdP was used:
remote_id_attribute = Shib-Identity-Provider
  • Copy the callback template to the right place:
cp /opt/stack/keystone/etc/sso_callback_template.html /etc/keystone/
  • Create the federation database tables if you haven’t already:
keystone-manage db_sync --extension federation

Keystone mapping data configuration

As before, we have to use the v3 API for federation. If you have sourced the credentials file already, you just need two more environment variables:

export OS_AUTH_URL=http://$HOSTNAME:5000/v3
export OS_USERNAME=admin

You may remember from the Kerberos post that we need a mapping file. The mapping used for kerberos can be re-used for this SAML authentication, here it is:

[
    {
        "local": [
            {
                "user": {
                    "name": "{0}",
                    "domain": {"name": "Default"}
                },
                "group": {
                    "id": "GROUP_ID"
                }
            }
        ],
        "remote": [
            {
                "type": "REMOTE_USER"
            }
        ]
    }
]
Save this as a file called add-mapping.json.  Although it can be re-used from before, I’ll re-add it here for completeness:

openstack group create samlusers
openstack role add --project demo --group samlusers member
openstack identity provider create testshib
group_id=`openstack group list|grep samlusers|awk '{print $2}'`
cat add-mapping.json|sed s^GROUP_ID^$group_id^ > /tmp/mapping.json
openstack mapping create --rules /tmp/mapping.json saml_mapping
openstack federation protocol create --identity-provider testshib --mapping saml_mapping saml2
openstack identity provider set --remote-id <your entity ID> testshib

Replace <your entity ID> with the value of the SSO entityID mentioned above in the shibboleth2.xml config. Shibboleth sets Shib-Identity-Provider in the Apache request variables to the entityID that was used, and we configured Keystone to use this in keystone.conf above. This is the “remote id” for the identity provider, and Keystone uses it to apply the correct protocol and mapping.

Horizon config

As before, a few Django config tweaks are needed. Edit the /opt/stack/horizon/openstack_dashboard/local/ settings file and add the web SSO settings:

WEBSSO_ENABLED = True
WEBSSO_CHOICES = (
    ("credentials", _("Keystone Credentials")),
    ("testshib", _("Testshib SAML")),
)
OPENSTACK_API_VERSIONS = {
    "identity": 3
}
OPENSTACK_KEYSTONE_URL = "http://$HOSTNAME:5000/v3"

Replace $HOSTNAME with your actual keystone hostname.

Now, restart apache2 and shibd:

service apache2 restart
service shibd restart

You should now be all set. After making sure “Testshib SAML” is selected in the login screen, click connect and you will be redirected to the testshib login page. It has its own fixed users and tells you what they are when you visit that page.

Good luck!

Posted in tech | 2 Comments

Federated Openstack logins using Kerberos


I recently had cause to try to get federated logins working on Openstack, using Kerberos as an identity provider. I couldn’t find anything on the Internet that described this in a simple way that is understandable by a relative newbie to Openstack, so this post is attempting to do that, because it has taken me a long time to find and digest all the info scattered around. Unfortunately the actual Openstack docs are a little incoherent at the moment.


  • I’ve tried to get this working on older versions of Openstack but the reality is that unless you’re using Kilo or above it is going to be an uphill task, as the various parts (changes in Keystone and Horizon) don’t really come together until that release.
  • I’m only covering the case of getting this working in devstack.
  • I’m assuming you know a little about Kerberos, but not too much 🙂
  • I’m assuming you already have a fairly vanilla installation of Kilo devstack in a separate VM or container.
  • I use Ubuntu server. Some things will almost certainly need tweaking for other OSes.


The federated logins in Openstack work by using Apache modules to provide a remote user ID, rather than credentials in Keystone. This allows for a lot of flexibility but also provides a lot of pain points as there is a huge amount of configuration. The changes described below show how to configure Apache, Horizon and Keystone to do all of this.

Important! Follow these instructions very carefully. Kerberos is extremely fussy, and the configuration in Openstack is rather convoluted.


If you don’t already have a Kerberos server, you can install one by following any standard MIT Kerberos setup guide.

The Kerberos server needs a service principal for Apache so that Apache can connect. You need to generate a keytab for Apache, and to do that you need to know the hostname for the container/VM where you are running devstack and Apache. Assuming it’s simply called ‘devstackhost’:

$ kadmin -p <your admin principal>
kadmin: addprinc -randkey HTTP/devstackhost
kadmin: ktadd -k keytab.devstackhost HTTP/devstackhost

This will write a file called keytab.devstackhost; copy it to your devstack host under /etc/apache2/auth/

You can test that this works with:

$ kinit -k -t /etc/apache2/auth/keytab.devstackhost HTTP/devstackhost

You may need to install the krb5-user package to get kinit. If there is no problem then the command prompt just reappears with no error. If it fails then check that you got the keytab filename right and that the principal name is correct. You can also try using kinit with a known user to see if the underlying Kerberos install is right (the realm and the key server must have been configured correctly, installing any kerberos package usually prompts to set these up).

Finally, the keytab file must be owned by www-data and read/write only by that user:

$ sudo chown www-data /etc/apache2/auth/keytab.devstackhost
$ sudo chmod 0600 /etc/apache2/auth/keytab.devstackhost

Apache Configuration

Install the Apache Kerberos module:

$ sudo apt-get install libapache2-mod-auth-kerb

Edit the /etc/apache2/sites-enabled/keystone.conf file. You need to make sure the mod_auth_kerb module is installed, and add extra Kerberos config.

LoadModule auth_kerb_module modules/

<VirtualHost *:5000>

  # KERB_ID must match the IdP set in Openstack.
  <Location ~ "kerberos">
    AuthType Kerberos
    AuthName "Kerberos Login"
    KrbMethodNegotiate on
    KrbServiceName HTTP
    KrbSaveCredentials on
    KrbLocalUserMapping on
    KrbAuthRealms MY-REALM.COM
    Krb5Keytab /etc/apache2/auth/keytab.devstackhost
    # Optional: if this is 'off', GSSAPI SPNEGO becomes a requirement
    KrbMethodK5Passwd on
    Require valid-user
  </Location>
</VirtualHost>

  • Don’t forget to edit the KrbAuthRealms setting to your own realm.
  • Don’t forget to edit Krb5Keytab to match your keytab filename
  • Most browsers don’t support SPNEGO out of the box, so KrbMethodK5Passwd is enabled here, which will make the browser pop up one of its own dialogs prompting for credentials (more on that later). If it is off, the browser must support SPNEGO, which will fetch the Kerberos credentials from your user environment, assuming the user is already authenticated.
  • If you are using Apache 2.2 (used on Ubuntu 12.04) then KrbServiceName must be configured as HTTP/devstackhost (change devstackhost to match your own host name). This config is so that Apache uses the service principal name that we set up in the Kerberos server above.

Keystone configuration

Federation must be explicitly enabled in the keystone config. The Keystone federation documentation explains this in detail, but to summarise:

Edit /etc/keystone/keystone.conf and add the driver to the [federation] section:

driver = keystone.contrib.federation.backends.sql.Federation
trusted_dashboard = http://devstackhost/auth/websso
sso_callback_template = /etc/keystone/sso_callback_template.html

(Change “devstackhost” again)

Copy the callback template to the right place:

$ cp /opt/stack/keystone/etc/sso_callback_template.html /etc/keystone/

Enable kerberos in the auth section of /etc/keystone/keystone.conf :

methods = external,password,token,saml2,kerberos
kerberos = keystone.auth.plugins.mapped.Mapped

Set the remote_id_attribute, which tells Openstack which IdP was used:

remote_id_attribute = KERB_ID

Add the middleware to keystone-paste.conf. ‘federation_extension’ should be the second last entry in the pipeline:api_v3 entry:

pipeline = sizelimit url_normalize build_auth_context token_auth admin_token_auth json_body ec2_extension_v3 s3_extension simple_cert_extension revoke_extension federation_extension service_v3
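Getting the position wrong is easy, so here’s a short Python sketch of the splice (illustrative only; the entry names are the ones from the pipeline above):

```python
# api_v3 pipeline without federation, as in stock keystone-paste.conf (illustrative)
pipeline = ("sizelimit url_normalize build_auth_context token_auth "
            "admin_token_auth json_body ec2_extension_v3 s3_extension "
            "simple_cert_extension revoke_extension service_v3").split()

# federation_extension must be second-last, just before service_v3
pipeline.insert(-1, "federation_extension")

print(" ".join(pipeline))
```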

Now we have to create the database tables for federation:

$ keystone-manage db_sync --extension federation

Openstack Configuration

Federation must use the v3 API in Keystone. Get the Openstack RC file from the API access tab of Access & Security and then source it to get the shell API credentials set up. Then:

$ export OS_AUTH_URL=http://$HOSTNAME:5000/v3
$ export OS_USERNAME=admin

Test this by trying something like:

$ openstack project list

Now we have to set up the mapping between remote and local users. I’m going to add a new local group and map all remote users to that group. The mapping is defined with a blob of json and it’s currently very badly documented (although if you delve into the keystone unit tests you’ll see a bunch of examples). Start by making a file called add-mapping.json:

        "local": [
                "user": {
                    "name": "{0}",
                    "domain": {"name": "Default"}
                "group": {
                    "id": "GROUP_ID"
        "remote": [
                "type": "REMOTE_USER"

Now we need to add this mapping using the openstack shell.

openstack group create krbusers
openstack role add --project demo --group krbusers member
openstack identity provider create kerb
group_id=$(openstack group list | grep krbusers | awk '{print $2}')
cat add-mapping.json | sed s^GROUP_ID^$group_id^ > /tmp/mapping.json
openstack mapping create --rules /tmp/mapping.json kerberos_mapping
openstack federation protocol create --identity-provider kerb --mapping kerberos_mapping kerberos
openstack identity provider set --remote-id KERB_ID kerb

(I’ve left out the command prompt so you can copy and paste this directly)

What did we just do there?

In my investigations, the part above took me the longest to figure out due to the current poor state of the docs. But basically:

  • Create a group krbusers to which all federated users will map
  • Make sure the group is in the demo project
  • Create a new identity provider which is linked to the group we just created (the API frustratingly needs the ID, not the name, hence the shell machinations)
  • Create the new mapping, then link it to a new “protocol” called kerberos which connects the mapping to the identity provider.
  • Finally, make sure the remote ID coming from Apache is linked to the identity provider. This makes sure that any requests from Apache are routed to the correct mapping. (Remember above in the Apache configuration that we set KERB_ID in the request environment? This is an arbitrary label but they need to match.)

After all this, we have a new group in Keystone called krbusers that will contain any user provided by Kerberos.
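Conceptually, the objects we just created form a small lookup chain. Here’s a Python sketch of the routing (names taken from the commands above; the logic is heavily simplified):

```python
# The objects created by the openstack commands above, as plain data (sketch)
identity_providers = {"kerb": {"remote_ids": ["KERB_ID"]}}
protocols = {("kerb", "kerberos"): "kerberos_mapping"}
mappings = {"kerberos_mapping": "put the remote user into group krbusers"}

def route(remote_id, protocol):
    """Find which mapping applies to an incoming federated request."""
    for idp, attrs in identity_providers.items():
        if remote_id in attrs["remote_ids"]:
            return mappings[protocols[(idp, protocol)]]
    raise LookupError("no identity provider for remote id %r" % remote_id)

print(route("KERB_ID", "kerberos"))
```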

Ok, we’re nearly there! Onwards to …

Horizon Configuration

Web SSO must be enabled in Horizon. Edit the config at /opt/stack/horizon/openstack_dashboard/local/ and make sure the following settings are set at the bottom:


("credentials", _("Keystone Credentials")),
("kerberos", _("Kerberos")),





"identity": 3


Make sure $HOSTNAME is actually the host name for your devstack instance.

Now, restart apache

$ sudo service apache2 restart

and you should be able to test that the federation part of Keystone is working by visiting this URL


You’ll get a load of json back if it worked OK.

You can now test the websso part of Horizon by going here:


You should get a browser dialog which asks for Kerberos credentials, and if you get through this OK you’ll see the sso_callback_template returned to the browser.

Trying it out!

If you don’t have any users in your Kerberos realm, it’s easy to add one:

$ kadmin
kadmin: addprinc -randkey <NEW USER NAME>
kadmin: cpw -pw <NEW PASSWORD> <NEW USER NAME>

Now visit your Openstack dashboard and you should see something like this:


Click “Connect” and log in and you should be all set.

Posted in tech | Tagged | 2 Comments

New MAAS features in 1.7.0

MAAS 1.7.0 is close to its release date, which is set to coincide with Ubuntu 14.10’s release.

The development team has been hard at work and knocked out some amazing new features and improvements. Let me take you through some of them!

UI-based boot image imports

Previously, MAAS required admins to configure (well, hand-hack) a yaml file on each cluster controller that specified precisely which OSes, releases and architectures to import. This has all been replaced with a very smooth new UI that lets you simply click and go.

New image import configuration page


The different images available are driven by a “simplestreams” data feed maintained by Canonical. What you see here is a representation of what’s available and supported.

Any previously-imported images also show on this page, and you can see how much space they are taking up, and how many nodes got deployed using each image. All the imported images are automatically synced across the cluster controllers.


Once a new selection is clicked, “Apply changes” kicks off the import. You can see that the progress is tracked right here.

(There’s a little more work left for us to do to track the percentage downloaded.)

Robustness and event logs

MAAS now monitors nodes as they are deploying and lets you know exactly what’s going on by showing you an event log that contains all the important events during the deployment cycle.


You can see here that this node has been allocated to a user and started up.

Previously, MAAS would have said “okay, over to you, I don’t care any more” at this point, which was pretty useless when things started going wrong (and it’s not just hardware that goes wrong; preseeds often fail).

So now, the node’s status shows “Deploying” and you can see the new event log at the bottom of the node page that shows these actions starting to take place.

After a while, more events arrive and are logged:


And eventually it’s completely deployed and ready to use:


You’ll notice how quick this process is nowadays.  Awesome!

More network support

MAAS has nascent support for tracking networks/subnets and attached devices. This release adds a couple of neat things: cluster interfaces automatically have their networks registered in the Networks tab (“master-eth0” in the image), and any node network interfaces known to be attached to these networks are automatically linked (see the “attached nodes” column). This means even less setup work for admins, and makes it easier for users to rely on networking constraints when allocating nodes over the API.


Power monitoring

MAAS is now tracking whether the power is applied or not to your nodes, right in the node listing.  Black means off, green means on, and red means there was an error trying to find out.


Bugs squashed!

With well over 100 bugs squashed, this will be a well-received release.  I’ll post again when it’s out.


Enabling KVM via VNC access on the Intel NUC and other hurdles

While setting up my new NUCs to use with MAAS as a development deployment tool, I got very, very frustrated with the initial experience so I thought I’d write up some key things here so that others may benefit — especially if you are using MAAS.

First hurdle — when you hit Ctrl-P at the boot screen, it will likely not work. This is because you need to disable num lock first.

Second hurdle — when you go and enable the AMT features it asks for a new password, but doesn’t tell you that it needs to contain upper case, lower case, numbers AND punctuation.

Third hurdle — if you want to use it headless like me, it’s a good idea to enable the VNC server.  You can do that with this script:

IP=<fill me in>
AMT_PASSWORD=<fill me in>
VNC_PASSWORD=<fill me in>
wsman put -h ${IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k RFBPassword=${VNC_PASSWORD} &&\
wsman put -h ${IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k Is5900PortEnabled=true &&\
wsman put -h ${IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k OptInPolicy=false &&\
wsman put -h ${IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k SessionTimeout=0 &&\
wsman invoke -a RequestStateChange -h ${IP} -P 16992 -u admin -p ${AMT_PASSWORD} -k RequestedState=2

(wsman comes from the wsmancli package)

But there is yet another gotcha!  The VNC_PASSWORD must be no more than 8 characters and still meet the same requirements as the AMT password.
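Those combined rules are fiddly, so here’s a quick checker sketched in Python (my own helper, not part of any Intel tooling) for the constraints described above: upper case, lower case, a digit, punctuation, and for the VNC password a maximum of 8 characters.

```python
import string

def valid_amt_password(pw, vnc=False):
    """Check the password rules described in the post.

    AMT passwords need upper case, lower case, a digit and punctuation;
    vnc=True additionally enforces the 8-character RFB limit.
    """
    if vnc and len(pw) > 8:
        return False
    return (any(c.isupper() for c in pw)
            and any(c.islower() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(c in string.punctuation for c in pw))

print(valid_amt_password("Passw0rd!"))            # fine for AMT
print(valid_amt_password("Passw0rd!", vnc=True))  # 9 chars: too long for VNC
```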

Once this is all done you should be all set to use this very fast machine with MAAS.
