Deleted Facebook

So I’ve done it.  I’ve deleted Facebook from my life.  Sorry Zuck, you’ve breached my trust one last time and you’re never getting another chance, ever.

This latest breach of trust is enormous in its breadth and scale.  The empty promises made by Facebook after each past breach of trust and/or data are just that: empty.

I’ve watched every minute of the two days of testimony and the hours of Christopher Wylie’s testimony and it makes for some very sobering listening. I’ve now permanently deleted my presence on the platform and removed the app and messenger from my iPad.

Around 2 years ago I had the app on my Android phone.  When the listening and other intrusions into my life became apparent I decided to stop using it.  I uninstalled the application and was horrified to discover that the majority of that application – the “background services” – remained on my phone!  Apparently, for convenience reasons.  Whose convenience is obvious.

In order to get these background services uninstalled I had to delve down into the Linux terminal on the phone and force-uninstall them using sudo commands.

How many users are going to go looking for these services?  Meanwhile the average user thinks they’ve uninstalled Facebook, yet the bulk of the application is still installed and running on the device! This is the kind of behaviour that landed Microsoft in the courts in the 1990s. It seems Facebook was getting away with it, or at least had been until recently.

Way back in 2011 I was having conversations with my techie friends on how we were waiting for the backlash to arrive. I personally didn’t think it would take as long as it has. I also didn’t think the sheer audacity and arrogance of the company would stretch as far as it clearly has.  We were waiting … that wait seems finally over.

For them to have known about the Cambridge Analytica issue as far back as 2015 yet stayed silent is really the crux of the problem, separate from the obvious technical ones. If you send data to a 3rd party there simply has to be a level of trust. Zuckerberg can apologise all he likes.  He can also appear as sorry as he likes; it’s no longer enough.  Trust has left the building. Some very serious political collateral damage seems inevitable, if not outright direct manipulation through the use of the data obtained from the Facebook Graph API.

I’ve been reminded why I never used the option to “Log in with Facebook”, so I’ve not even had anything to migrate in terms of security.  I just downloaded my data and then requested deletion.  For anyone reading this, deactivating is not deleting.  In order to request that Facebook delete your data you have to find this very hard to find page.

I couldn’t find a link to this page anywhere in the Facebook support pages; if it’s there at all, it’s very well hidden.

Some people I’ve spoken to think this is a little over-reaction. They may well be right, but this isn’t an isolated incident.  Let’s not forget the psychological experiments conducted on users in secret.  But it’s OK, apparently they were allowed to manipulate users because it was in the T&Cs.  Er … what?

Honestly, just go delete your account now.  Do it.


Dear Procurement Department

I understand that you like to standardise the computers in your business. The ease of maintenance and support is a huge topic, I get that. I also get that costs are always an issue for business. Save where you can etc …

BUT, please bear in mind that a software developer has a completely different set of requirements to the receptionist or the account manager. A poxy little i5 laptop with 8GB of RAM is fine if all you’re doing is writing Word documents and replying to some email. For a developer, you’ve just shot him in the foot and then told him to go dancin’.


Would you employ a gardener and then demand that all they can use is a teaspoon to dig your garden? No, so why do it to your software engineers?

False economy.


Meltdown / Spectre Speculation Issues

So, we are in Meltdown!  Thanks for this, Intel.  Intel has been having a very hard time lately.  AMD has more or less just blind-sided them with Ryzen.  They’ve been rushing new platforms to market as a result and compressing their release schedules in order to respond.

It’s worth stating here that a lot of media spots on this keep citing individual companies like Apple or Intel.  The truth is these issues affect virtually every modern processor that uses speculative execution, regardless of vendor or instruction set – Intel, AMD, ARM and so on.  That covers the vast majority of processors in the entire world.  Meltdown in particular hits Intel’s designs hardest, but the Spectre variants reach across vendors and architectures.

Having said all that, Intel is now facing one of the worst hardware issues ever to hit any technology.  If you think that sounds over-dramatic, you’re wrong.  I think it is completely fair to hold Intel responsible for this.  Some security researchers are referring to Meltdown as an industry-wide catastrophe.  It’s genuinely a complete nightmare.  I dread to think of the industry-wide costs this is going to incur; it would be next to impossible to calculate on a global scale.

Since so many CPUs rely on the same speculative-execution techniques, chips from every major vendor are affected in some way.  Basically the entire world is about to lose a potentially significant portion of processing power.  Everyone.  It boggles the mind.

The issue is basically a problem at the boundary between the CPU’s two modes: kernel mode and user mode.  The kernel is trusted and therefore has access to lots of sensitive data, whereas user mode has a lower security status.  Normally this separation protects sensitive data by preventing user mode processes from reading the contents of protected kernel memory.  Meltdown allows a user mode process to read kernel memory anyway, so operating systems now have to unmap kernel memory and flush the associated caches far more often than would otherwise be required – every time the context switches from kernel to user mode.  This way, even if a nefarious user mode process tries to read that memory, there isn’t anything useful left lying around.

This switching and cache flushing is the source of the potential performance impact.  Obviously, if you are performing tasks that don’t cause much context switching the impact will be minimal.  However, if the task performs many context switches, such as virtualisation, the performance hit will increase accordingly.

If I were a large-scale hosting provider or cloud computing platform I’d be really concerned.  Imagine the worst-case scenario where 30% of your computing resources vanished overnight.  Poof!  Gone.  If you had been running your platform with anything less than around 50% headroom over the required processing power, you’re now looking at a significant investment in more hardware.

But what is really incredible is that this issue isn’t new.  What?  Yes.  This issue was first discussed in some depth in 1995, in a white paper called “The Intel 80x86 Processor Architecture: Pitfalls for Secure Systems”.  That paper in turn referenced older papers from 1992 – over 25 years ago.

Yup, this issue has been kicking around for a very, very long time.


No Comp Sci degree? == No Interview

Well really?

I’ve only been turned down for an interview once with this stated as the reason.  With 20+ years in IT behind me that’s pretty good going from a personal perspective.  However, I know this happens to others more frequently, and it’s a problem.

What has prompted this article is that I recently picked up a CTO role for a budding startup.  They’d already spent time and money on producing the basis for a tech startup.  They’d also engaged a company to develop the mobile applications.  That relationship had only managed to produce what they described as a working prototype, which needed finishing along with some additional features.

Once I’d started digging around in the code my heart sank.  The company that wrote this code had been recommended to them by no less than a Microsoft MVP, so they should have been in good shape to get things built.  The main developer had a Masters degree in Computer Science from a prestigious university, so should know a few things about technology, right? … Wrong.

This developer had managed to produce the most garbled and confused code base I have ever had to work with; the quality of the code is indescribably poor.  He displayed a complete lack of understanding of even the most basic principles of object orientation and a complete, fundamental lack of understanding of how the internet works.  Yup, a mobile apps developer that doesn’t understand the basics of the internet.

I can forgive some programming language hiccups and idiosyncrasies, but to not even understand what you’re doing at such a basic level is inexcusable.  Especially when you are selling yourself as a mobile application expert with a Masters degree in comp sci and taking people’s money.

I cannot fathom where to start in describing how bad the code is.  Executing POST requests when they should be GET (or PUT sometimes, or any variation thereof, never the right one).  Thinking that authentication is getting an Id from a local device database and considering that as “logged in”.  The method called GetPeople that gets locations (inside an object called People) but is actually pointing at the endpoint that gets Activities (using a POST, of course).  The droves of test data being generated in the body of production code methods … sheesh, I don’t know.  Or is it one of the 35 dialogs that could be shown between login and the first application page being displayed?  Does a user really want to be informed every time you make a web request?  I literally don’t know where to start.  There isn’t a single line of code worth keeping.  Not one.

The architecture isn’t even worth mentioning as there simply isn’t any.  Need to make a web request?  Fine, create a new one and copy all the code from somewhere else and change the URL.  In an application with just a few pages there are 262 instances of “new HttpClient(“.  For the uninitiated that number should be between 0 and x (x being the number of HTTP verbs you need), but in reality this should probably be 1 as the HttpClient is designed to be reused.

Unfortunately, this isn’t the first time I have encountered a situation like this.  I once worked with a guy who had equally tied himself, and consequently his entire development team, in knots.  This chap had a Masters degree in Mathematics and Computer Science from Oxford University.  Yes, the Oxford University.  He had equally ballsed up in a completely different way and cost his employer a substantial amount of money in real terms.

The point I’m making here is just because someone has a fancy sounding qualification, don’t assume they won’t or can’t ruin your application development efforts.  “But, but, but he’s from OXFORD!”  And conversely don’t assume that someone without a fancy sounding qualification can’t get the job done.  That is a complete and utter fallacy.

I know it’s anecdotal but the two biggest messes I’ve encountered in my career were both entirely created by the “qualified” developer.


2018 Update

So here we are in 2018 and I’ve just read some very interesting news.  A who’s who of the technology world (Google, Apple, Microsoft, IBM etc) have all dropped their degree requirements.  Yes, you read that right – DROPPED THE REQUIREMENT.

This gives me a warm fuzzy feeling actually as it utterly vindicates my original post.

So, if you are a tech company and refuse to interview me because I don’t have a degree, I think you need to take a long look at yourself as an organisation. Basically because you’re going to have a fucking hard time convincing me that what you do is more involved/difficult/hard-core than what the companies in that list above do.

In fact, further to this.  If you tell me that you don’t want to interview me due to not having a degree, don’t worry, I don’t want to work for you anymore anyway. Bye!


Shell Overlay Icons – The Space Wars

For some of us there has been a quiet war raging inside the Windows registry. The fight is over your shell overlay icons and their priority. I fought back!

The Problem – Shell Overlay Icons Limit

Amazingly, even going into the modern era of Windows 10 in 2017 this is still an issue.  The issue is that many tools want to make use of shell overlay icons but Windows only has 15 “slots”.  It’s safe to assume that many of these tools use this Windows feature as an “at a glance” method of displaying state.  This also means that a single tool won’t just use one icon but many, to display the various states of files and folders.  For example, the little screenshot below shows two shell overlay icons: one green denoting a committed state and one red highlighting an uncommitted state.

an example showing shell overlay icons

As our ever-more digitally connected world evolves, more and more tools want to make use of this feature.  I’m sure most of you reading this have at least a couple of tools that do this.  However, the list for me goes on and on.  Tools like OneDrive, Dropbox, Git and SVN and … you get the picture.

Windows only uses the top 15 entries in the registry, yet I have over 30 listed.  Dropbox alone brings 10 to the table, so what are we supposed to do?  As you can see, 15 isn’t going to go particularly far in this scenario.  Whilst these shell overlay icons are useful in some scenarios, they probably aren’t the best solution to the problem anyway.  There is a good discussion of the problem from Raymond Chen, who works on the Shell team at Microsoft.

The Space Wars

What the hell am I on about anyway – space wars?  Many of these tool vendors, Dropbox I’m looking at you in particular, have started a kind of war inside your registry.  Aside from any ’nix users chortling about the fact Windows even has something as insane as a registry, we still have to deal with it.  Actually, to be fair, a lot of the issues with the registry are down to my fellow developers abusing it, but that’s a whole other story for another time.

So, what exactly is the problem?  When you take a registry key name like “DropboxExt01” you’d expect it to come before “DropboxExt02”, right?  Well, kinda.  If I rename “DropboxExt02” to “ DropboxExt02” (notice the leading space), “ DropboxExt02” now comes before “DropboxExt01”.  And thus were born the space wars …
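A quick sketch makes the trick obvious.  This is Python purely for illustration; the assumption is simply that key names are compared in an ordinal fashion, space-before-letters, which is what the leading-space trick exploits:

```python
# In an ordinal sort the space character (0x20) comes before every
# letter and digit, so a leading space pushes a key name up the list.
keys = ["DropboxExt01", "DropboxExt02", " DropboxExt02"]
print(sorted(keys))
# The padded " DropboxExt02" now sorts ahead of both unpadded names.
```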

Each vendor thinks their tool is the most important, obviously.  So they’ve taken it upon themselves to prefix their shell overlay icon key names with ALL THE SPACES, ARRRRR!!  Forcing their entries to the top of the tiny selection that Windows will actually bother to use.

I Fought Back!

I’ve lost count of the number of times I’ve fired up regedit in order to fix this insane situation, countering a vendor’s update that “fixes” that tool’s entries (read: prepends more spaces).  “Why wouldn’t you want our icons to work?”  Well, dear Dropbox, your tool isn’t the centre of my world; in fact I’ve nearly uninstalled you as this is such an annoyance.

Anyway, I’d just had enough yesterday.  So I wrote a tool.  Say hello to Overlay Ninja …

shell overlay icons - overlay ninja

Okay, okay it doesn’t look that great (yet) but I knocked it up in a few hours.  Now I can fix this problem easily in a couple of clicks and without going anywhere near regedit.  The source code for this is all up on my GitHub page.  It’s under GPLv3 so if you make any improvements please do submit a pull request so we can all benefit.

You can set priorities by application or by each individual shell overlay icon entry, which adds a lot of flexibility.  I’ve tested this as far as I can and all is working as expected.  As ever, when doing anything in the registry, make a backup first.  If you’re reading this and don’t even know what the registry is, what the hell are you doing reading this? 🙂  Go have a read of this before doing anything with the tool.

x86 or x64?

Inside the GitHub repository I’ve also uploaded pre-compiled versions of the tool so users without the required build tools can still use it.  Due to some architecture redirection foibles within the Windows registry you will need to use either the x86 or the x64 version of the tool for it to actually work as expected.  If your OS is 32-bit, use the x86 version, if it’s 64-bit use the x64 version (here’s how to check that).

Happy, er … Ninjaing, Ninjaning?  Ninjining?


.NET Geographic Searches

I recently had to implement a localised search mechanism on an API.  Maths and trigonometry aren’t my strengths, it has to be said.  So I started looking online for pointers and background info.

Initially it might seem like you have to do some complicated maths to work out things like the size of the circle you’re dealing with.  But really the solution is very simple.  You know the starting point’s latitude and longitude, and you have a radius in a known unit, such as miles, metres or kilometres.  You’ll also have a dataset of potential target locations complete with their latitude and longitude points.

The only measurement you need is the distance from your starting point to a potential target location.  If that distance is less than or equal to the radius, the target is within the circle the radius designates.
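To make that check concrete, here’s a minimal sketch in Python (the function names and structure are mine for illustration, not from any particular library):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6372.8  # a commonly used mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two latitude/longitude points.
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return EARTH_RADIUS_KM * 2 * asin(sqrt(a))

def within_radius(start, targets, radius_km):
    # Keep every target whose distance from the start point is
    # less than or equal to the radius.
    return [t for t in targets
            if haversine_km(start[0], start[1], t[0], t[1]) <= radius_km]
```

The search itself then reduces to one distance comparison per candidate location.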

A popular and suitably accurate algorithm for working this out is the Haversine formula.  The .NET Framework has this built in via the GeoCoordinate class in the System.Device.Location namespace.

To use this:

var startingPoint = new GeoCoordinate(51.212213, -2.122312);
var targetPoint = new GeoCoordinate(52.212213, -1.122312);
var distanceInMeters = startingPoint.GetDistanceTo(targetPoint);

You can then convert the number of meters into whatever unit you’d like to work with. Pretty simple really. The Haversine formula accounts for the curvature of the planet and has a small margin of error (it assumes a spherical Earth).
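The conversion is just a constant factor per unit; a trivial sketch (Python for illustration):

```python
METERS_PER_MILE = 1609.344  # metres per statute mile
METERS_PER_KM = 1000.0

def meters_to_miles(m):
    return m / METERS_PER_MILE

def meters_to_km(m):
    return m / METERS_PER_KM

print(meters_to_miles(1609.344))  # 1.0
print(meters_to_km(2500.0))       # 2.5
```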

You can find lots of implementations of this online, such as this one:

public static class Haversine {
  public static double calculate(double lat1, double lon1, double lat2, double lon2) {
    var R = 6372.8; // Earth's mean radius, in kilometers
    var dLat = toRadians(lat2 - lat1);
    var dLon = toRadians(lon2 - lon1);
    lat1 = toRadians(lat1);
    lat2 = toRadians(lat2);

    var a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) + Math.Sin(dLon / 2) * Math.Sin(dLon / 2) * Math.Cos(lat1) * Math.Cos(lat2);
    var c = 2 * Math.Asin(Math.Sqrt(a));
    return R * c;
  }

  public static double toRadians(double angle) {
    return Math.PI * angle / 180.0;
  }
}

void Main() {
  Console.WriteLine(String.Format("The distance between coordinates {0},{1} and {2},{3} is: {4}", 36.12, -86.67, 33.94, -118.40, Haversine.calculate(36.12, -86.67, 33.94, -118.40)));
}

// Returns: The distance between coordinates 36.12,-86.67 and 33.94,-118.4 is: 2887.25995060711

I would definitely recommend using the .NET version though as this will be thoroughly battle tested.


Docker Gotcha

I’ve just had my first Docker gotcha moment.  I was happily working away using the setup detailed in my first Docker article.  All was going well with some development work migrating a site to a new WordPress implementation.  Boom!  I had the first Windows 10 BSOD I’ve ever experienced.

The machine restarted fine without any problems.  I saw a notification that I needed to reset the password on my Microsoft account, which I dutifully did.  I then restarted Docker to carry on development, and when I visited my local URL the site was brand new.  The development work I’d completed so far was all “gone”.  Visiting the WordPress directory I could see everything in there as it should be.  Since the database had apparently gone I decided to rip it all down and start afresh.  I hadn’t done much work, and what I had done was all still in the file system anyway.

So I cleared everything down and executed the docker-compose up -d command again.  Everything seemed to go fine but the database and ui directories were still empty.  After some poking around and Googling it suddenly hit me.

When you share drives with Docker it needs your Windows credentials to mount them.  I’d just changed my password, which meant the credentials Docker had stored were no longer valid.

An error message or warning would have been helpful Docker.


Password Sensibilities

I’ve lost count of the number of organisations I’ve worked for that have adhered to the NIST (National Institute of Standards and Technology) password advice from back in 2003.  I’ve scoffed every time I’ve looked around their offices to see a sea of post-it notes with various passwords written down.  Honestly, every 90 days?

Clearly they have people in charge of making security decisions who simply are not qualified to make them.

Finally, the author of those guidelines, NIST manager Bill Burr, has admitted that he was completely wrong.


PlasticSCM – Just Don’t

If you’re considering this source control tool.  Stop.  If you’re considering this for your development team, have mercy on them and stop.

I haven’t been confused by, annoyed by, dumbfounded by or angry at a source control tool this much since I stopped using Visual SourceSafe 10 years ago.  This is one of the absolute worst development tools I’ve ever had forced on me.

The hilarious thing is, it isn’t even free – people are paying real money for this!  Believe me, if it were my choice in tooling they’d have to pay me to use it.  If you are charging for source control these days you have to be pretty damn special, and PlasticSCM really is not special at anything at all.

I’m not even going to go into the details of why you shouldn’t use it as I simply cannot be bothered to have this tool waste any more of my time, but I feel the need to warn others.  This tool is incoherent.  Just use Git.  There is no benefit in this tool over Git, plus Git is free.


Git, Git, Git, Git, Git, Git.


Docker + Windows + WordPress + MySQL + PHPMyAdmin = Nirvana

The last time I did any WordPress development was over a year ago, so I no longer have PHP and MySQL installed.  I started marching off down my well-beaten path: download PHP and MySQL, install on my machine, do dev work.  Just after clicking on the MySQL download I suddenly thought, what the hell am I doing?  DOCKER!!!!  I already have Hyper-V installed so all I need is a container and some YUMMY YAML.

I’ve used Docker once before, for around 5 minutes, so I’m a complete noob really.  So going through this process to work it all out for myself was initially extremely painful.  I failed on my first session and ended up having to walk away from it all as I just wasn’t getting it.

I did a lot of reading and Googling and just wasn’t finding the explanation I needed to grok it all properly.  There are lots of explanations of how to get this working but they all seemed to stop at the crucial point for me.  They covered off some yml to get MySQL and WordPress containers up and running but stopped there.  What about persistence?  Or deployment?  Where the hell are all the WordPress files?

Some of them seemed to demo the solution I wanted but seemed to miss out on how they achieved it, or were doing it on different platforms.  I needed a noobs Docker for Dummies walk through.  So I’m going to document what I’ve found out in the hope that it crosses some of the Ts and dots the Is for others getting started.

Docker is The New VM

Don’t get me wrong, virtual machines are great and very necessary, but they’re also a bit overkill a lot of the time.  This is where Docker comes in.  It still requires virtualisation technology under the hood, but it’s now transparent and not directly controlled.

Microsoft’s own virtualisation technology is Hyper-V.  Docker on Windows uses this by default, but it can be used with VirtualBox from Oracle as well.  I’ve had lots of success running virtualised OSes on top of Hyper-V and more or less utter failure using it for Microsoft’s own emulators; the irony here isn’t lost on me, by the way.

Docker is a container technology that wraps up specific services, such as databases, runtimes or any other dependencies your tasks require.  It lets you run just what you need for a given task without installing those services on the host OS. Fantastic.  Let’s dig in.

Installing Docker

Dead easy.  Make sure Hyper-V is installed on your Windows box (Home users you’re out of luck here btw).  Go here, download for your OS and architecture, install.  Done.

The Docker installation is a very painless process.

Check Installation

Once installed, open a command line (Windows + X -> Command Prompt (Admin)) and execute:

docker version

You should then see some version information:

Client:
 Version:      17.06.0-ce
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   02c1d87
 Built:        Fri Jun 23 21:30:30 2017
 OS/Arch:      windows/amd64

Server:
 Version:      17.06.0-ce
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   02c1d87
 Built:        Fri Jun 23 21:51:55 2017
 OS/Arch:      linux/amd64
 Experimental: true

If you see an error mentioning the daemon, Docker may well still be setting itself up in the background.  You can also create a Docker ID on the Docker site and configure your install to use it, though I’ve not needed to so far so cannot comment on this aspect.

Next run this:

docker info

This gives you some useful info about your Docker environment, like so:

C:\WINDOWS\system32>docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 17.06.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfb82a876ecc11b5ca0977d1733adbe58599088a
runc version: 2d41c047c83e09a6d61d464906feb2a2f3c52aa4
init version: 949e6fa
Security Options:
  Profile: default
Kernel Version: 4.9.36-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.837GiB
Name: moby
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 15
 Goroutines: 25
 System Time: 2017-08-06T15:06:13.8983022Z
 EventsListeners: 0
Experimental: true
Insecure Registries:
Live Restore Enabled: false


It’s probably worth having a look over your Docker settings before doing anything else.  You can right-click on the task tray Docker icon to see various options.  Click on Settings first and have a mooch about to see if things are set up as you’d like.

The General tab is pretty self-explanatory and controls how Docker starts up and some security settings.  You can probably leave most of these alone.

The Shared Drives page is also pretty self-explanatory.  Using this page you can control which host machine drives are available to services running within containers.  Since I’m using my H drive for my projects I’ve shared this drive with Docker.  This allows processes running inside your Docker containers to access stuff on the host machine’s drives.

docker shared drives settings

The Advanced tab is worth reviewing.  If you have used Hyper-V before and have customised its setup, you’ll find that Docker automatically picks up some of these settings.  I’d configured Hyper-V to put VM images on one of my large external drives, so Docker will install its Moby Linux VM there.

docker advanced settings

I’ve also upped the available RAM Docker can use to 4GB; my dev box has 24GB so I’ve plenty to throw at Docker.  Since it’s running Linux, 4GB of RAM should be more than enough to keep things running at a decent speed.

The Network, Proxies and Daemon pages are fine in their default state for now.

My Requirements

I wanted a WordPress development environment with persisted data.  If Docker is restarted or the host machine reboots, I want the state of both stored.  Both in terms of the WordPress state such as themes, plugins and so on and the database.  I’m fast with WordPress development but not that fast!

So Docker should accept all the responsibility of hosting Apache, PHP, MySQL and phpMyAdmin, and take care of all the networking and configuration of those services.  The host Windows machine exposes a drive and hosts the WordPress files and MySQL databases on its “normal” file system.  These files are stored in directories that are mapped into the containers, which allows for simple deployment once the development is complete.

I’m sure in time and as I learn more about Docker I’ll find a lot of this can be handled better.  For now this is where I am in the learning curve and it’s working.

Yummy YAML & Docker Compose

The idea here is to produce a file that tells Docker what you want to do.  It specifies one or more images to use to create one or more containers that provide the services you need.  Images are templates for containers, much in the same way classes can be thought of as templates for objects in an OO sense.  There is a terminology wrinkle here that I don’t get: although we are creating containers, in the .yml files they are called services.  It’s probably my limited knowledge, but it would be clearer to new users if the same terms were used throughout.

What I was struggling to understand and configure was volumes.  I’m still a little in the dark to be honest and I’m not entirely sure I have this configured in the best way.  But what I’m showing here is working and it suits the requirements I mentioned above.

Directory Structure

As I showed above, I’ve shared the host machine’s H drive.  Within this drive I have the following directory structure:


In the root directory (H:\js2017) I have created a Docker compose YML file called docker-compose.yml.  This is where the magic happens.  The file contains this YAML:

version: '2'

services:
  wordpress:
    depends_on:
      - db
    image: wordpress
    restart: always
    volumes:
      - ./ui:/var/www/html
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: p4ssw0rd!
    ports:
      - 8082:80
    networks:
      - back
  db:
    image: mysql
    restart: always
    volumes:
      - ./database:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: p4ssw0rd!
    networks:
      - back
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    restart: always
    ports:
      - 8083:80
    environment:
      PMA_HOST: db
      MYSQL_ROOT_PASSWORD: p4ssw0rd!
    networks:
      - back

networks:
  back:

So in this file we have 3 services defined: wordpress, db and phpmyadmin.  We also have a network aliased back (backend).  You could almost certainly take this file, alter the ports and volume entries if need be, and have this up and running pretty quickly.

You can see the references to the database and ui directories in the volumes declarations for the wordpress and db services.  The “.” notation is the relative path from the docker-compose.yml file in the project root directory.  These are mapped by Docker into the container’s internal file system, running on top of Linux within the Docker virtual machine.  So anything written to the /var/www/html directory within the container ends up in the ui directory on the host machine, and the same goes for the /var/lib/mysql directory and the databases.

The port mappings for the services map host machine ports to container ports (hostmachineport:containerport).  So to view the WordPress site I navigate to localhost:8082 in my host machine’s browser; this is forwarded to port 80 in the container, which serves the page.


Going over the deployment of the MySQL database to a live server is beyond the scope of this article but to deploy the WordPress site it’s just a case of taking the contents of the ui directory and uploading it to the public html directory of your web server.

I’m sure there is a better way of managing this but for now, with my limited understanding of the finer details of Docker and its container model, this works for me.  Hopefully this has gotten you started and I’m sure I’ll revisit this again in the not too distant future with some better solutions and tips.

Happy Docking!!


The State of Login in 2017 – LastPass to the Rescue!!

I’ve been putting this off for far, far too long.  I knew migrating to a password manager was going to be a long boring task and now I’ve started, I wasn’t wrong. Migrating to LastPass itself hasn’t been the painful part, the complete mess of all the myriad login implementations has.

The free solution is pretty comprehensive and covers all the popular browser, OS and platform combinations you’re probably using.  However, if you also want help with desktop applications you’ll need to purchase a subscription.  The paid subscription is incredibly affordable at around £9.  I can hear some of you crying:

“Why LastPass???  Are you mad?  They’ve been hacked!!”

Well sure, they have been, like many other organisations have.  The online world is a complex one and security is probably its most complex, subtle and poorly understood area.  The reason I trust LastPass is that in each instance of an issue being found they have responded immediately and, more importantly, in a sensible fashion.  They’ve been candid with their users, issuing advisories and getting fixes implemented and deployed as quickly as possible.  They don’t hide behind a facade of faked perfection; they “get it”.

Rather paradoxically, security is best handled in an open fashion.  When the likes of Tavis Ormandy are poking around you have to be proactive.

LastPass Security Challenge

Migrating was utterly painless; LastPass imports from pretty much any source you could imagine.  Once imported you can analyse your stored accounts for issues such as password strength, duplication or any that may have been compromised.  How this compromised analysis works is something I haven’t looked into yet, but I’m assuming it’s a lookup against previously compromised authentication databases.
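I haven’t verified LastPass’s actual mechanism, but one plausible approach, popularised by Have I Been Pwned, is a k-anonymity hash lookup.  A hypothetical offline sketch in Python (the tiny corpus and function names are mine, not LastPass’s):

```python
import hashlib

# Tiny stand-in for a breached-password corpus; a real service such
# as Have I Been Pwned holds hundreds of millions of entries. This
# is a guess at the style of check, not LastPass's actual mechanism.
BREACHED_HASHES = {
    hashlib.sha1(p.encode()).hexdigest().upper()
    for p in ("password", "letmein", "123456")
}

def is_compromised(password):
    """k-anonymity style lookup: only the 5-character hash prefix
    would ever leave the client; suffixes are compared locally."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # A real server would return every stored suffix for this prefix.
    candidates = {h[5:] for h in BREACHED_HASHES if h[:5] == prefix}
    return suffix in candidates
```

The nice property is that the full password, and even the full hash, never leaves the client.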

Luckily the only issues I found were some duplication (yes, I was guilty of this too) and some older passwords not being particularly strong.

Updating Passwords

So as I’ve been going through the process of addressing these issues I’ve gotten quite alarmed to be honest.  It’s incredible what a sorry state security, and particularly login, is in.

I’ve not done any investigation into how LastPass works in terms of form completion, but boy has it got its work cut out.  Most of the time it succeeds in getting the passwords into the right boxes and the password updating process goes without a hitch.

The problems I’ve seen in many instances are inexcusable and frankly baffling at a technical level.  It’s amazing how many sites consider a password of more than 16 or 20 characters as too long.  Whoever made these technical decisions absolutely does not understand security and should be removed from the technical decision making process.  Many other sites don’t allow special characters which further weakens the potential complexity of the passwords they accept.
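To put rough numbers on that, here is a back-of-envelope sketch in Python (not how any particular site works) of how much search space those caps throw away, assuming uniformly random passwords:

```python
import math

# Rough search-space comparison for uniformly random passwords.
# Pool sizes are approximate: ~95 printable ASCII characters vs
# 62 alphanumerics when special characters are banned.
def bits_of_entropy(pool_size: int, length: int) -> float:
    """Entropy in bits of a uniformly random password."""
    return length * math.log2(pool_size)

full = bits_of_entropy(95, 25)    # 25 chars, specials allowed
capped = bits_of_entropy(62, 16)  # 16 chars, alphanumeric only
# capped comes out around 95 bits vs around 164 for the full case.
```

Nearly 70 bits of potential strength thrown away by two arbitrary policy decisions.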

Other sites have prevented LastPass from auto-filling the password fields in their web forms and then also block copy+paste, in the hugely misguided idea that this somehow makes things more secure.  It doesn’t, and yes, I am looking at you.

The argument here is that the clipboard, the memory used to store copied items, can be compromised.  Well, here’s the thing: if your user’s clipboard is compromised there isn’t a damn thing you can do about it.  The user has far bigger problems to deal with than your web login form.  The clipboard argument is plain stupid; if you hear it from someone in a technical meeting, calmly explain how misguided an idea it is.  We need to kill these ideas.

Online Forums

Then there are the myriad options for online forum software – things like vBulletin, phpBB et al.  They have horrifically bad login processes and are generally hosted on sites without SSL/TLS, which means credentials are sent in plain text – the worst security offence a site can commit.  It makes a complete mockery of the entire concept of security, since there simply isn’t any.

They are also usually hosted and configured by people that haven’t got a clue what they are doing and no doubt find that the whole security issue just gets in the way of their big idea anyway.  I dread to think of what is happening between the submit button, database and the page reload.

If you can, get someone that knows what they are doing to set things up or use a hosted solution.

Dynamic Forms

Sites also sometimes dynamically load the login or password change textboxes.  LastPass can sometimes detect this automatically, or you can use the advanced options to re-scan the page and pick them up, although in many instances this fails as well.  Again it seems a lot of the time these are incorrectly configured or somehow non-standard implementations.

LastPass at Fault?

It’s worth saying here that in many instances I might be inclined to point the finger at LastPass, but in all honesty it’s the myriad implementations and mistakes on the part of site developers that are to blame.

It’s clear that many form fields themselves are incorrectly setup, badly named or otherwise implemented in non-standard ways.

Orphaned Credentials

It’s also amazing how many large companies seem to ruin their relationship with users by altering login processes and getting it completely wrong.  As I’ve been going through this migration and updating process I’ve found it incredible how many accounts have just vanished.

Granted, some of these accounts haven’t been used for a while, but surely these companies would prefer to keep these details rather than just obliterate a history with a customer?  If an account has gone, it’s highly likely my order history followed it down the digital plughole.

I’ve encountered all manner of account migration issues as well: migration forms that are thoroughly broken despite going over the instructions repeatedly.  In some instances I’ve had to capitulate and contact their support departments for assistance.  That is precisely what companies should do all they can to avoid – support staff aren’t free; that costs them time and money.

Over-posting Nightmares

This one is also incredibly stupid, and worryingly common.  For some reason sites often need you to send other details along with your password reset – details like first name, last name, address or date of birth, even when you aren’t changing them.

The problem here stems from man in the middle (MITM) attacks.  Combined with some of the other issues described here, accounts could easily be borked if the POST is changed in transit.  Site developers should always require the absolute minimum of data in any request and response cycle.  This is even more important when dealing with security tasks.
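As a sketch of the fix, a password-change handler can whitelist exactly the fields it needs and reject everything else.  This is illustrative Python with made-up field names, not any specific framework’s API:

```python
# Hypothetical password-change handler that accepts exactly the
# fields it needs and rejects anything extra, so a tampered POST
# can't silently overwrite unrelated account data.
ALLOWED_FIELDS = {"current_password", "new_password"}

def validate_change_request(form):
    extra = set(form) - ALLOWED_FIELDS
    if extra:
        raise ValueError("unexpected fields: " + ", ".join(sorted(extra)))
    missing = ALLOWED_FIELDS - set(form)
    if missing:
        raise ValueError("missing fields: " + ", ".join(sorted(missing)))
    # Return only the whitelisted fields, nothing else.
    return {k: form[k] for k in ALLOWED_FIELDS}
```

Anything a MITM bolts onto the request gets the whole request rejected rather than silently applied.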

If it isn’t made obvious from the form layout, you can end up hitting the submit button and LastPass thinks it all went well – no doubt because the site returned an HTTP 200 OK status while the response body detailed errors.  Boom, the details in your password vault are now out of sync with the details the server has.  Your login is now broken for that site: the server still has your old password and LastPass has what it thinks is your new one.
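The server-side fix is cheap: put the failure in the status code so clients, password managers included, can detect it without parsing the body.  A hedged sketch (the 422 choice and response shape are illustrative):

```python
# Sketch: surface validation failures in the HTTP status code rather
# than burying them in a 200 OK body, so a client (or a password
# manager) can tell the change failed. Not any framework's real API.
def password_change_response(errors):
    if errors:
        # 422 Unprocessable Entity: request understood, content invalid.
        return 422, {"ok": False, "errors": list(errors)}
    return 200, {"ok": True}
```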

Password Rules Roulette

This problem is completely inexcusable on the part of site developers.  Enter a strong 25 character password, hit the submit button and then it errors telling you that the password needs to be at least 6 characters long.

Well, er … OK.  I could try to figure out what your password rules are, but this really should have been caught by adequate testing.  The form should advise of any password rules before you allow it to be posted, or at least have the validation occur on the client side rather than waiting for the server to respond with an error and only then giving the user a clue about your password rules.

This leads on to another problem: when a site decides to change its password rules and neglects the fact that all its legacy users have passwords that no longer match the new rules.  I had one scenario where the login process allowed me in but the password change form rejected the old password as it didn’t match the new rules.  Utterly ridiculous.

The obvious and simple solution to this is to implement the new rules only on the New and Confirm New password boxes.  Let the accounts slowly migrate from there and, if it’s a more serious issue, force a password change cycle on the next login.
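A minimal sketch of that migration approach, with the rules and names invented for illustration: state every rule up front and apply the new policy only to newly chosen passwords.

```python
# Illustrative policy checker: every rule is stated up front and the
# new rules are applied only to newly chosen passwords, so legacy
# accounts keep working and migrate at their own pace.
NEW_RULES = [
    ("at least 12 characters", lambda p: len(p) >= 12),
    ("contains a digit", lambda p: any(c.isdigit() for c in p)),
]

def failed_rules(password):
    """Return the descriptions of every rule the password breaks."""
    return [desc for desc, check in NEW_RULES if not check(password)]

def change_password(old_password, new_password):
    """Validate only the *new* password against the new policy; the
    old one merely has to match what's stored."""
    return failed_rules(new_password)
```

Because `failed_rules` returns human-readable descriptions, the same list can drive the up-front advice on the form.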

Please Send Us Your Password … Again?

The other old chestnut is situations where you log into a site to manage your account.  For some reason some sites seem to think that in order for you to manage your account it makes sense to ask for your password again.  Not constantly sending passwords over the wire is one of the core issues OAuth attempted to solve, so asking us to do it twice is less than optimal.  It doesn’t increase security in any meaningful way; in fact it doubles the risk of interception by definition.

Do you somehow not trust your own initial login result?  You have already established the user’s identity during the initial login process; the only reason they can even get to the account management screen is precisely because you HAVE identified them.  If you are feeling the need to do this, maybe you have an issue elsewhere?  If you’re concerned about long sessions, maybe have an inactivity timeout on the server session – most modern web servers have this ability baked in.  Although I would argue that some aspects of security have to be accepted by the user as well as by site developers.
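An inactivity timeout is only a few lines of server-side logic.  A hypothetical sketch (the 15-minute window and class shape are arbitrary choices, not any server’s real API):

```python
import time

# Minimal inactivity-timeout sketch: re-check elapsed idle time on
# every request instead of re-prompting for the password.
IDLE_LIMIT_SECONDS = 15 * 60  # illustrative window

class Session:
    def __init__(self, now=None):
        self.last_activity = now if now is not None else time.time()

    def touch_or_expire(self, now=None):
        """Return True if the session is still valid, refreshing the
        idle timer; return False once the idle limit is exceeded."""
        now = now if now is not None else time.time()
        if now - self.last_activity > IDLE_LIMIT_SECONDS:
            return False
        self.last_activity = now
        return True
```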

Hints? LOL

Please provide a hint so you can remember your password …


Er … seriously folks, hints are stupid at the best of times, don’t make them mandatory … ever.

I actually encountered a local government site that believed it was sufficient to show me the hint text rather than having any kind of real password reset/recovery process.  Seriously, my jaw hit the floor at that one.  Incredibly stupid.  You have to email them to reset a password.  Literally unbelievable.

Another completely awful idea that some hint implementations use is often called “security questions”.  You’re presented with a drop down of fixed questions which you choose from and then provide an answer to.  This isn’t a good idea on any level.  Some of these questions can appear pretty innocuous, but some of the answers could provide useful metadata should the databases be compromised and published by a nefarious party.  I also think it would be safe to assume that these question and answer combinations aren’t encrypted and would be trivial for a hacker to piece together with data from other sources.
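If a site must have security questions, the answers should at least be treated like passwords: normalised, salted and hashed.  A sketch of what that might look like, with PBKDF2 chosen purely for illustration:

```python
import hashlib
import hmac
import os

# Sketch: treat security-question answers like passwords. Normalise
# the answer first (users won't reproduce exact spacing or case),
# then salt and hash it instead of storing plain text.
def normalise(answer):
    return " ".join(answer.casefold().split())

def hash_answer(answer, salt=None):
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", normalise(answer).encode(), salt, 100_000
    )
    return salt, digest

def verify_answer(answer, salt, stored):
    candidate = hashlib.pbkdf2_hmac(
        "sha256", normalise(answer).encode(), salt, 100_000
    )
    # Constant-time comparison avoids leaking via timing.
    return hmac.compare_digest(candidate, stored)
```

The normalisation step matters: users will not remember whether they typed “Fluffy The Cat” or “fluffy the cat”.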

Emailing Passwords!??  Just … no

I’m amazed at how often I’ve seen this one.  You change your password on a site to something nice and strong.  Excellent.  Then the nice developers email it back to you IN PLAIN TEXT to help you remember it.  Guys, you just crapped a big one.  You’ve shafted any security you think you have.

I recently had a scenario on a site I was building where, during a licence purchase, a management account was auto generated.  Since this involved the creation of a special kind of user account, the database also required a non-null password.  In the background the server would auto generate a password, do the usual hashing and salting process and immediately throw the plain text version away.

That left a user account with a password that no one had ever known.  The email address associated with the new account was then sent a time-restricted, one-time-use tokenised reset link, not the password itself.  The account was also marked to force a password reset, so the server would never accept the generated password anyway.  If the token expired before the user attempted their first login, they could use the forgot password process to generate another token.
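A minimal sketch of that flow, with all names and the one-hour expiry invented for illustration:

```python
import hashlib
import os
import secrets
import time

# Sketch of the flow above: generate a password nobody ever sees,
# store only its salted hash, and send a time-limited, single-use
# reset token instead of the password itself.
TOKEN_TTL_SECONDS = 60 * 60  # illustrative one-hour window

def create_managed_account(email):
    throwaway = secrets.token_urlsafe(32)  # plain text discarded after hashing
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", throwaway.encode(), salt, 100_000)
    return {
        "email": email,
        "salt": salt,
        "password_hash": pw_hash,
        "must_reset": True,  # server refuses this password anyway
        "reset_token": secrets.token_urlsafe(32),
        "token_expires": time.time() + TOKEN_TTL_SECONDS,
    }

def redeem_reset_token(account, token, now=None):
    """One-time redemption: succeeds at most once, never after expiry."""
    now = now if now is not None else time.time()
    if account["reset_token"] is None or now > account["token_expires"]:
        return False
    if not secrets.compare_digest(token, account["reset_token"]):
        return False
    account["reset_token"] = None  # single use
    return True
```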

Case Sensitivity

Usernames and passwords are usually case sensitive, meaning “hello” is not the same as “HELLO”, or even “Hello”.  Upper and lower case characters are not equal, nor are the strings that contain them – obvious for a developer, less so for a general user.  I’ve had a number of scenarios where I’ve ended up with “split records” in LastPass.  It seems that just prior to a login POST request, some JavaScript on the page upper-cased my username before sending it over the wire to the server.  This confused LastPass and, instead of updating an existing record in my vault, it created a new one.  This smacks of some odd code smell in the site code or database in my opinion, especially considering the username was an email address.
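Where the username is an email address, normalising it once on the server would avoid these split records entirely.  In practice almost every provider treats addresses case-insensitively (strictly, RFC 5321 allows case-sensitive local parts, hence the hedge).  A sketch:

```python
# Sketch: normalise email usernames once, server side, so client
# JavaScript upper-casing the field can't create a "second" account.
# Passwords stay strictly case sensitive and are untouched here.
def normalise_username(username):
    return username.strip().lower()

def same_account(stored, submitted):
    return normalise_username(stored) == normalise_username(submitted)
```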

To Login or Not Login … Registration?

This is an odd scenario: the login and registration forms are on the same page/form.  This is confusing, certainly for a password manager.  As I’ve been going through my stored credentials I’ve found many instances where the credentials were stored multiple times, often with a sub-domain prefix like “register.”.  Account creation and logging an existing user into your site are two completely different tasks and shouldn’t be mixed.  Adding pages/urls to sites for such core functions is hardly a major task for a developer.  Having forms on pages with different actions is all well and good, but it shouldn’t make a developer “page shy” – go on, create another page, you know it makes sense!

Code Project!!!?? Noooo Surely not?

One of the password change processes that I was truly shocked at was Code Project’s.  The password change form is in a stupid tool-tip attached to a piece of text styled like a link.  When I entered the LastPass generated password I was informed that it was very strong.  Not surprising.  I was then greeted with an odd message about my current password being wrong.  I was then also unable to click any links on the site and needed to ditch the tab to get the site working again.  All very odd.


OAuth was an attempt at getting login right.  Whilst it isn’t an authentication mechanism per se, it attempts to federate login, reducing the number of passwords a user has and also the number of times a password is sent over the wire.  Anyone that has followed the history of OAuth knows that its inventor walked away from OAuth2 during the production of the specification, for reasons too numerous and complex for this article.

Suffice to say we need some new standardisation efforts, badly.

I’ll Just Leave this here … yes this is from a real site …

… and no, you shouldn’t … ever.

The Future??

Firstly, I don’t consider myself a security expert by any stretch of the imagination.  I’ll leave that moniker to the likes of Tavis Ormandy, Troy Hunt, Steve Gibson and Moxie Marlinspike.  Having said that, I’m not a security noob either.

Everyone should take their online security seriously.  Developers have a huge responsibility to ensure we don’t make silly mistakes, but users also have to accept some of the responsibility; that’s inescapable.  Use a password manager – it’s really not difficult and it empowers you to be more secure online.

The idea of a password protected system is now over 50 years old.  There are lots of efforts to improve things, such as OAuth and SQRL.  OAuth has its own issues and none of the things on the immediate horizon appear to be the panacea we need.

One interesting idea I’ve encountered recently is the password-less email sign-in that Medium uses.  I was initially sceptical, but having used it and thought about it, there is a lot of merit in that scenario.  It doesn’t suit every situation, but we need that kind of original thinking to tackle this problem.

So, what are you waiting for, go download a password manager … go on … off you pop.


SQLite Extensions for .NetStandard NuGet

I just created a couple of NuGet packages for helping to consume SQLite in .NetStandard1.5 applications.  Includes all the nice Async stuff you’d need as well.

It’s based on the TwinCoders SQLite-Net Extensions that provide entity relationships within the database, so it makes your data tasks easier and wotnot.  All the code is on GitHub here:

And you can get the NuGet packages like this:

Install-Package SQLiteNetExtensionsCore

Install-Package SQLiteNetExtensionsCore.Async

Let me know if you encounter any issues, or open a support/bug report on GitHub.


Oh Dear, Borked Your Registry Key Permissions?

I recently got so fed up with Dropbox trouncing all over my shell icon overlays that I decided to attempt a drastic action: remove write access to the registry key that handles the shell icon overlays.  Specifically this one:


However, I was a tad overzealous with ticking the Deny checkbox and completely borked access to the key for everything, including the Administrators group.  Even running RegEdit as Administrator wasn’t working.

So, how do you fix this on Windows 10?  SysInternals strikes again.  Pop over here and download PsExec.  Then, basically, we’re going to launch RegEdit, not as Administrator but as SYSTEM.  To do this, open a command window as Administrator and cd to the directory where PsExec is now living on your system.  The command you want to run is:

psexec -i -d -s c:\windows\regedit.exe

And voilà, you can see all your stuff again. Yay!


Windows 10 UWP – Emulator- The Revenge

I’ll tell you why UWP is suffering.  The UWP flavour of XAML is a mere shadow of WPF and the emulator is shockingly hard to get working at all, let alone reliably.  For all the warts and problems Xamarin has, at least its emulators work.  At this point I’m approaching 10 straight days, over the course of three months, doing nothing but attempting to debug a UWP application in an emulator running on my development machine.  That’s an inordinate waste of my own life and valuable development time.

Imagine that: a Windows machine with a Windows IDE and a Windows VM engine fails to just work.  WTF??  Microsoft, there is no excuse for this.  From all my hours of reading it seems droves of developers are failing to make meaningful use of their own freakin emulator.  I’ve been plugging away at trying to get this to work, on and off, for over three months.  Every so often I find some regained enthusiasm for the problem and start trying again, go through the same pain and exasperation, and give up again in complete and utter frustration.

Utterly ridiculous.

I’ve tried every solution I’ve found online and nothing has fixed the problem.  You can see long posts on StackOverflow discussing a plethora of potential solutions.  The very fact there is such a vast array of potential solutions points to this entire feature of UWP and Windows Phone development needing a lot of TLC from Microsoft.

Why can I debug an iOS application crossing not just the network for compilation but also across operating systems with ease yet I cannot do the same internally to my development machine?  This is a seriously stupid situation.  Some of these discussions span years and multiple Windows OS versions and emulator versions.  Threads starting in 2012 and running through to today, with developer after developer failing to get a proper solution.  Or finding a solution that works until you reboot the machine and have to perform a load of random steps to get it working again.  I’ve wasted a day here and a day there trying to get this to work and it looks like I’m very close to just throwing in the towel – permanently.

In fact this is making me completely reconsider continuing any work with UWP at all.

You can see from threads like this one and this one and this one and this one (you can see a pattern here?!) that lots of people either had (the lucky ones) or are still having issues.  This post in particular was originally written in September 2012, it has comments up to January 2016, and here I am looking at it in March 2017 trying to solve the same problems.  Five years later people are still struggling with Hyper-V and emulating a Windows Phone.  This smacks of a stupid and hugely counter-productive situation to put your developers in.

It also shows that it is a severe and very real point of pain for many, and far from an isolated obscure networking issue.  At least give us some better, useful exceptions, or things to Google (sorry, Bing).  At this point I’m seriously underwhelmed with UWP development, to the point where I really don’t care about it being a success or not.  Is that really where Microsoft wants me or other developers to be?

I can emulate ALL THE OTHER PLATFORMS with nothing more than a press of a button, but using a completely Microsoft owned tool-chain fails miserably.  It’s no wonder the Windows Store shelves are threadbare and the UWP platform is thus-far an utter flop.

I guess you can tell from the tone in this post that I’m far from happy with this situation, it’s an insanely frustrating situation all round.  I don’t give up on problems easily but this one has me utterly beaten.  I would list all the solutions and voodoo that I’ve tried in order to get this working but frankly I’m struggling to even keep the enthusiasm to finish this blog post explaining my own personal failure.

I guess I’m going to leave this rant alone now unless I find more energy or enthusiasm for the task; this may be my first and last ever post on UWP.  Frankly, I’ve got better things to do with my time and, let’s face it, even if I could be arsed to dive into this again and finish my UWP application it’s not going to make me a millionaire anytime soon …


MacBook Prolapse

Apple, seriously … you fucked up.

Are you seriously telling your MacBook Pro & iPhone users that they’ll have to buy a dongle to connect their flagship Apple phone to their Flagship Apple laptop?  What’s that all about?

Your core user base (in your own words) are creatives.  I’ve been a pro photographer and a recording studio owner in some of my previous lives and you’ve just alienated all of them too.  All the graphics packages I use lean on Nvidia CUDA hardware acceleration.  This is KEY technology for these folks – Blender, Premiere Pro etc. – so why did you throw out Nvidia and welcome AMD?  All our pro cameras use SD cards and you’ve thrown that port away as well.  And we can’t even connect them via USB since you threw all of those ports away too.

What the hell are you doing?

I like to run VMs on my Mac and I was seriously hopeful that this “upgrade” would include a 32GB RAM option.  It doesn’t.  Why?  Why oh why?  I don’t care what you want me to think, this is NOT an upgrade.

Then the price … holy shit.  Just no.  Bye bye Apple.  When even Apple fanboys are crying over the price you know you have a problem on your hands.  It’s going to be interesting watching the sales figures fall off a cliff.

No ESC key you say … oh really?  WTF?

£2,349  Fuck Off … just FUCK OFF Apple you have lost the plot.


Windows 7 Updates Roll-up

There’s really no doubt that Windows 7 updates have become a bit of a nightmare.  Microsoft have been aggressively pushing Windows 10, and some of the stories that have been rolling around have clearly highlighted this to anyone watching.  Despite all the statements coming from Microsoft themselves, they do seem to have employed a form of plausible deniability tactics to increase Windows 10 adoption.  It even drove Steve Gibson to write Never10.

Whilst I can understand that getting as many people as possible onto the new Windows 10 OS has lots of advantages for everyone – it would make my development life easier, so in some ways I have a vested interest – you can’t arbitrarily push a new OS on someone.  There are so many dependencies and potential issues in some scenarios that it just isn’t smart to force this.  Taking someone’s fully working system, having it fall foul of these tactics and leaving them with lots of issues is the worst thing you can do to a customer.

I can’t think of another situation where you can buy something from a company, use it successfully for years, and then five years later the company you originally bought it from comes to your house and breaks it.  Then offers to give you the new all singing, all dancing replacement “thing” vNext.  Can you imagine the builder who built your new dining room 5 years ago coming back and taking a wall down, then offering you Wall V2?

Me neither …

It seems this happened more times than Microsoft would ever admit to.

Windows 7 Updates

With all the goings on, Windows 7 updates seem to be in a bit of a mess.  Installing Windows 7 updates has become a bit ridiculous and takes an incredibly long time.  To be fair, we are dealing with 5 years (Windows 7 SP1 was released February 22, 2011) worth of OS updates on the most popular desktop OS on the planet.  In its defence, Windows Update is a complex piece of software doing lots of sophisticated dependency checking and wotnot.

Either way, it’s gotten to the point where applying Windows 7 updates has become really painful.  Urgh …

Good News! Update Roll-up

The good news is that MS have produced a Windows 7 SP1+ update roll-up.  They have taken all the updates applied to Windows 7 since SP1 in 2011 and packaged them up for easier deployment.  To make use of these packages you need to do a bit of manual work on a new install.

I was setting up a new VM in VirtualBox using my old Windows 7 licence and got fed up with watching the indeterminate “progress bar” (that’s an oxymoron if ever there was one), so I decided to look for this update.

The New Windows 7 Updates Process

It’s actually really easy.  In my case, as I was going from fresh install to latest I did this:

  1. Created VM and installed Windows 7
  2. Applied updates normally to get Windows 7 to SP1
  3. Go and grab KB3020369 and install it
  4. Get the Windows 7 Updates roll-up here (requires IE) and update away!

Alas, even after going through this process the last check for updates process I ran took around 5 hours.