Trust not ye dodgy CAs

Because I don’t follow security advisories religiously, & check my site certs only when I think I need to (i.e. a little before they expire), I missed the near-unanimous decision to distrust StartCom as a Certificate Authority. Instead, I found out by attempting to show a friend my evolution simulator. My recently updated Android Firefox showed me a certificate warning, despite the fact that my cert should have been good until 2020. Aiiee! Since I was at the 2017 Bosch Hackathon, I was too busy to do anything about it until later. What a keen site maintainer I am (not).

Referring to my earlier experiences on this topic, I see that Eric Mill is a much better man than I, and has already been advising people against using StartCom certs since last year.

I chose to use Let’s Encrypt to generate my new certs. My hosting provider offers integrated Let’s Encrypt support, but this would not cover all the subdomains I wanted out of the box. I also do not have direct ssh access to my box, having opted for a cheaper shared server. Instead, I installed certbot and used it to request a single cert covering all my subdomains.

Because certbot needs to verify my ownership of each subdomain, & some subdomains I wanted a cert for don’t have a server behind them, I used the DNS challenge, which verifies ownership via DNS TXT entries.
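The request looked something like this (the domain names here are placeholders, not my actual ones):

```shell
# Request one cert covering several subdomains, proving ownership of
# each via DNS TXT records (the dns-01 challenge). For each name,
# certbot prints a token to publish as a TXT record at
# _acme-challenge.<domain> before it will validate.
certbot certonly --manual --preferred-challenges dns \
  -d example.com -d blog.example.com -d sim.example.com
```

`certonly` asks certbot to fetch the cert without trying to install it anywhere, which suits a shared host where you upload the files yourself.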

DNS TXT entries don’t update immediately, so I used MXToolBox’s outstanding DNS TXT lookup tool to watch for propagation. This meant I could tell certbot to verify the TXT record for a given domain only after that value had been successfully published.
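The same check works from a terminal, if you have dig handy (again, the domain is a placeholder):

```shell
# Query the TXT record certbot asked for; repeat until the new
# challenge token appears in the output.
dig TXT _acme-challenge.blog.example.com +short
```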

In the future, I should be able to renew this cert with a single certbot command.
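For certbot that would presumably be the standard renew invocation (with the caveat that the manual DNS challenge will prompt for fresh TXT records at renewal time, unless an auth hook script republishes them automatically):

```shell
# Renew any managed certs that are nearing expiry.
certbot renew
```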

All that was left was a simple matter of uploading the cert using the site management tools & I’m your uncle, or something. Please don’t call me Bob.


This part is likely only of interest to me.

The DNS TXT entries confuse subdomain resolution. Prefixing them prevents this confusion, but I’m sure there must be a better way to do this. I’ll have to remember to remove the prefix the next time I want to update my cert.

A better way to do this

As I mentioned, there has to be a better way to do this. My DNS records contain a wildcard (*) entry for the base hostname. Adding an explicit TXT entry under a subdomain means that the “blog” part of that subdomain now exists in the zone, so the wildcard no longer matches it. There’s no explicit definition of the address of blog, so DNS queries for it come back unresolved. I solved this by explicitly pointing blog to my server address. This at least means that I don’t have to do anything silly, like breaking my DNS for every subdomain, next time I want to renew my cert.
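In zone-file terms (names and address invented for illustration), the situation looks something like:

```
; wildcard A record for the base domain
*.example.com.                    IN A    203.0.113.10
; challenge record published for certbot's dns-01 check
_acme-challenge.blog.example.com. IN TXT  "some-challenge-token"
; explicit A record for blog, since the wildcard no longer covers it
blog.example.com.                 IN A    203.0.113.10
```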

A better better way to do this

Instead of adding an address resolution record, I can add a canonical name record (CNAME) for blog, resolving to the root domain, which can then be resolved to the server address.
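Again as an illustrative zone snippet:

```
; blog is an alias for the root domain...
blog.example.com. IN CNAME example.com.
; ...so only this record needs changing if the server moves
example.com.      IN A     203.0.113.10
```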

This is better because if the server address ever changes, only the root record needs updating. Hooray. Probably anyone who knew anything about DNS could have figured this out way quicker than I did. Hey-ho. It works.

Eric Mill’s Switch to HTTPS Now, For Free is excellent

I just read this little gem by Eric Mill, and decided to give it a try. Despite a CA problem with the latest version of Firefox, this went surprisingly well.

I was slightly disappointed that WordPress uses absolute URLs with an explicit http scheme for its resources. Should it not use relative paths for locally hosted content like images, and let user agents sort out whether or not to use SSL? I think so.

Probably there’s something to configure. When it annoys me enough, I’ll blog about it.

Whoop, and there it is. It was easy to at least get WordPress to use HTTPS.
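I won’t claim this is exactly what I changed, but the usual knobs are WordPress’s home and siteurl options, which can be switched to https from the dashboard or with WP-CLI (the URL is a placeholder):

```shell
# Point WordPress's two base-URL options at the https origin, so it
# stops emitting absolute http:// links to its own resources.
wp option update home 'https://example.com'
wp option update siteurl 'https://example.com'
```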

Censorship and searching

As the UK begins the fight to force ISPs to implement filternets, I’m reminded of the four horsemen of the infocalypse. Currently, paedophiles and a 5th horseman, armchair child psychology, are the tools of choice.

The argument goes that children should be technologically prevented from stumbling across porn or other “harmful material” by forcing DNS filters on service providers.

Currently, one problem I have with the proposal is that “child” appears to be defined as “any internet user”. The default for filtering is “on”, so if I’m using a UK endpoint in my quest for utter filth, extreme views (whatever the hell that means), or anything else frowned upon by the government, I need to tell my ISP about it (or just route around it with Tor).

Here, I’ll focus on porn because it’s controversial and ubiquitous, and because it seems to be receiving the lion’s share of the debate.

Since many people aren’t savvy enough or disciplined enough (that’s me) to use other methods for domain name resolution, or indeed to proxy their internet traffic, this will prevent many people from accessing a variety of legal content. This is a form of restriction of information – an activity extremely useful if you want to control a population. For this reason, any attempt at filtering the internet, even one as easily circumvented as this, should have a cast-iron, peer-reviewed scientific justification, and should be subject to continuous independent public oversight to prevent the usual misuses. To be clear, freedom of information is so crucial to a democratic civilization that even if it caused (it doesn’t) harm to children, there’s still a strong case for keeping it and finding another way to deal with the resulting harm (if any).

As you can probably infer from the last paragraph, I’m unconvinced that any harm comes from children, or other groups we tend to feel entitled to curb, accessing controversial content. The bar of evidence should be astronomically high before any attempt is made to regulate what we or our children can read. I don’t think the evidence is even close now. What little evidence about the effects of pornography does exist actually contradicts the received wisdom that porn is bad.

Another danger area is interpretation. If we agree that porn should be hard to reach, what do we call porn? Does satire involving sex count? What about sex education? Art? What the heck is art? That’s another loosely interpreted word that could be (and has been) used for censorship. I think the whole idea from concept to implementation is at best ineffective, and at worst, dangerous.