Docker + Windows + WordPress + MySQL + PHPMyAdmin = Nirvana

The last time I did any WordPress development was over a year ago, so I no longer have PHP and MySQL installed. I started marching off down my well-beaten path: download PHP and MySQL, install on my machine, do dev work. Just after clicking on the MySQL download I suddenly thought, what the hell am I doing? DOCKER!!!! I already have Hyper-V installed so all I need is a container and some YUMMY YAML.

I've used Docker once before, for around 5 minutes, so I'm a complete noob really. Going through this process to work it all out for myself was initially extremely painful. I failed on my first session and ended up having to walk away from it all as I just wasn't getting it.

I did a lot of reading and Googling and just wasn’t finding the explanation I needed to grok it all properly.  There are lots of explanations of how to get this working but they all seemed to stop at the crucial point for me.  They covered off some yml to get MySQL and WordPress containers up and running but stopped there.  What about persistence?  Or deployment?  Where the hell are all the WordPress files?

Some of them seemed to demo the solution I wanted but missed out how they achieved it, or were doing it on different platforms. I needed a noob's Docker for Dummies walkthrough. So I'm going to document what I've found out in the hope that it crosses some of the Ts and dots the Is for others getting started.

Docker is The New VM

Don't get me wrong, virtual machines are great and very necessary, but they're also a bit overkill a lot of the time. This is where Docker comes in. It still requires virtualisation technology under the hood, but it's now transparent and not directly controlled.

Microsoft's own virtualisation technology is Hyper-V. Docker on Windows uses this by default, but it can be used with Oracle's VirtualBox as well. I've had lots of success running virtualised OSes on top of Hyper-V and more or less utter failure using it for Microsoft's own emulators; the irony here isn't lost on me, by the way.

Docker is a container technology that wraps up specific services, such as databases, runtimes or any other dependencies your tasks require. It lets you run just what you need for a given task without installing these services on the host OS. Fantastic. Let's dig in.

Installing Docker

Dead easy. Make sure Hyper-V is installed on your Windows box (Home users, you're out of luck here btw). Go here, download for your OS and architecture, install. Done.

The Docker installation is a very painless process.
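If you're not sure whether Hyper-V is actually enabled, an elevated PowerShell prompt can tell you and switch it on. This is just the standard Windows optional feature, nothing Docker specific, and a reboot is needed after enabling it:

Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

The first command reports whether the feature is enabled; the second enables it (run as admin, then reboot).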

Check Installation

Once installed, open a command line (Windows + X -> Command Prompt (Admin)) and execute:

docker version

You should then see some version information:

Client:
 Version:      17.06.0-ce
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   02c1d87
 Built:        Fri Jun 23 21:30:30 2017
 OS/Arch:      windows/amd64

Server:
 Version:      17.06.0-ce
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   02c1d87
 Built:        Fri Jun 23 21:51:55 2017
 OS/Arch:      linux/amd64
 Experimental: true

If you see an error mentioning the daemon, Docker may well still be setting itself up in the background. You can also create a Docker ID on the Docker site and configure your install to use this ID, though I've not needed to so far so I can't comment on this aspect.
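If you want a quick sanity check that the daemon is up and able to pull and run images, the standard test is the hello-world image:

docker run hello-world

This pulls a tiny image from Docker Hub, runs it in a container and prints a confirmation message, which is all you need to know everything is wired up correctly.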

Next run this:

docker info

This gives you some useful info about your Docker environment, like so:

C:\WINDOWS\system32>docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 17.06.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfb82a876ecc11b5ca0977d1733adbe58599088a
runc version: 2d41c047c83e09a6d61d464906feb2a2f3c52aa4
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.36-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.837GiB
Name: moby
ID: 3RBI:664X:UXGI:FB6Y:3K7K:LEMA:BWRR:6SLX:5M7J:P66D:T4XN:L7XH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 15
 Goroutines: 25
 System Time: 2017-08-06T15:06:13.8983022Z
 EventsListeners: 0
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Settings

It's probably worth having a look over your Docker settings before doing anything else. You can right-click on the Docker icon in the task tray to see various options. Click on Settings first and have a mooch about to see if things are set up as you'd like.

The General tab is pretty self-explanatory and controls how Docker starts up and some security settings.  You can probably leave most of these alone.

The Shared Drives page is also pretty self-explanatory. Using this page you can control which host machine drives are available to services running within containers. Since I'm using my H drive for my projects, I've shared this drive with Docker. This allows processes running inside your Docker containers to access stuff on the host machine's drives.

docker shared drives settings

The Advanced tab is worth reviewing. If you have used Hyper-V before and have customised its setup, you'll find that Docker automatically picks up some of these settings. I'd configured Hyper-V to put VM images on one of my large external drives, so Docker will install its Moby Linux VM there.

docker advanced settings

I've also upped the available RAM Docker can use to 4GB; my dev box has 24GB so I've plenty to throw at Docker. Since it's running on Linux, 4GB of RAM should be more than enough to keep things running at a decent speed.

The Network, Proxies and Daemon pages are fine in their default state for now.

My Requirements

I wanted a WordPress development environment with persisted data. If Docker is restarted or the host machine reboots, I want the state preserved: both the WordPress state, such as themes, plugins and so on, and the database. I'm fast with WordPress development but not that fast!

So Docker should accept all the responsibility of hosting Apache, PHP, MySQL and phpMyAdmin. Docker also takes care of all the networking and configuration of those services. The host Windows machine exposes a drive and hosts the WordPress files and MySQL databases on its "normal" file system. These files are stored in directories that are mapped into the containers, which allows for simple deployment once the development is complete.

I'm sure in time, as I learn more about Docker, I'll find a lot of this can be handled better. For now this is where I am on the learning curve and it's working.

Yummy YAML & Docker Compose

The idea here is to produce a file that tells Docker what you want to do. It specifies one or more images to use to create one or more containers that provide the services you need. Images are templates for containers, much in the same way classes can be thought of as templates for objects in an OO sense. There are some terminology issues here which I don't quite get. Although we are creating containers, in the .yml files they are called services. It's probably my limited knowledge here, but it would be clearer to new users if they just stuck to the same terms.

What I was struggling to understand and configure was volumes.  I’m still a little in the dark to be honest and I’m not entirely sure I have this configured in the best way.  But what I’m showing here is working and it suits the requirements I mentioned above.

Directory Structure

As I showed above, I've shared the host machine's H drive. Within this drive I have the following directory structure:

H:\js2017
H:\js2017\database
H:\js2017\ui

In the root directory (H:\js2017) I have created a Docker Compose file called docker-compose.yml. This is where the magic happens. The file contains this YAML:

version: '2'
services:
  wordpress:
    depends_on:
      - db
    image: wordpress
    restart: always
    volumes:
      - ./ui:/var/www/html
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: p4ssw0rd!
    ports:
      - 8082:80
    networks:
      - back
  db:
    image: mysql
    restart: always
    volumes:
      - ./database:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: p4ssw0rd!
    networks:
      - back
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    restart: always
    ports:
      - 8083:80
    environment:
      PMA_HOST: db
      MYSQL_ROOT_PASSWORD: p4ssw0rd!
    networks:
      - back
networks:
  back:

So in this file we have three services defined: wordpress, db and phpmyadmin. We also have a network aliased back (for backend). You could almost certainly take this file, alter the ports and volume entries if need be, and have it up and running pretty quickly.

You can see the references to the database and ui directories in the volumes declarations for the wordpress and db services. The "." notation is the relative path from the docker-compose.yml file in the project root directory. These are mapped by Docker into the container's internal file system, running on top of Linux within the Docker virtual machine. So anything written to the /var/www/html directory within the container ends up in the ui directory on the host machine, and the same goes for the /var/lib/mysql directory for the databases.

The port mappings for the services map host machine ports to container ports (hostmachineport:containerport). So to view the WordPress site, I navigate to localhost:8082 in my host machine's browser; this is forwarded to port 80 in the container, which serves the page.
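To actually bring the stack up, open a command prompt in the project root (the directory containing docker-compose.yml) and run:

docker-compose up -d

The first time you run this, Docker pulls the wordpress, mysql and phpmyadmin/phpmyadmin images, then creates and starts the containers in the background. docker-compose ps shows what's running and the port mappings, and docker-compose down stops and removes the containers; because the data lives in the mapped ui and database directories on the host, it survives the teardown.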

Deployment

Going over the deployment of the MySQL database to a live server is beyond the scope of this article, but to deploy the WordPress site it's just a case of taking the contents of the ui directory and uploading it to the public html directory of your web server.

I'm sure there is a better way of managing this, but for now, with my limited understanding of the finer details of Docker and its container model, this works for me. Hopefully this has gotten you started and I'm sure I'll revisit this again in the not too distant future with some better solutions and tips.

Happy Docking!!


Package Dependencies & Deploying .NET

I recently had a requirement to completely package up a .NET application so that it was extremely portable beyond its initial installation. By bundling all the dependencies into the main executable package you can move the application around with minimal fuss once installed. The goal was to keep the application as independent as possible.

The application is actually fairly simple, with only a handful of required library files to package. Once all the dependencies had been packaged, the entire application executable was still under 600k. The process detailed below also enables the package to contain non-.NET assemblies, like C++ components and other files. What's really good is that by default the dependencies are never extracted onto the file system, which is ideal.

Initially this might sound like a complicated process. However … all you need is one NuGet package: Fody/Costura (Source). That's it! Just install the package and you're done!

The NuGet Package Solution

Make sure that you have your start-up project selected in the Package Manager Console and execute this command:

Install-Package Costura.Fody

You don’t even have to configure anything.  Just build …
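For reference, installing the package should also drop a FodyWeavers.xml file into the project root; if you ever need to recreate it, a minimal one looks something like this (the Costura element is what switches the weaver on):

<Weavers>
  <Costura />
</Weavers>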

The only thing that might need doing, in order to make sure your package contains exactly what's needed, is to add a new target to your *.csproj file.

When you install the package it will automatically add a new import:

<Import Project="packages\Fody.1.26.1\build\Fody.targets" Condition="Exists('packages\Fody.1.26.1\build\Fody.targets')" />

Then, to make sure you get the package you really want, you can add:

<Target AfterTargets="AfterBuild;NonWinFodyTarget" Name="CleanReferenceCopyLocalPaths">
  <Delete Files="@(ReferenceCopyLocalPaths->'$(OutDir)%(DestinationSubDirectory)%(Filename)%(Extension)')" />
</Target>

Done. Very nice and a must-know trick. The only thing I now want to tackle is bundling in the base configuration file.


View IISExpress Hosted Sites On Your Local Network

This is a useful little tidbit of knowledge to have.  Suppose you can browse to your new spanky site using IISExpress at http://localhost:54275/ … now you want to look at it on your phone?  Or your laptop? Or your … you get the picture …

Pre Visual Studio 2015

  1. Open up C:\Users\<yourname>\Documents\IISExpress\config\applicationhost.config
  2. Find your site definition and add in a new binding <binding protocol="http" bindingInformation="*:54275:<your-ip-address>" />
  3. Open Command Prompt (as admin) and run netsh http add urlacl url=http://<your-ip-address>:54275/ user=everyone
  4. Then execute netsh advfirewall firewall add rule name="IISExpressWeb" dir=in protocol=tcp localport=54275 profile=private remoteip=localsubnet action=allow
  5. Then point your remote machines to http://<your-ip-address>:54275
  6. Voila!

That wasn’t so hard eh!

Visual Studio 2015

You need to complete steps 2 to 5 above, but Visual Studio 2015 by default doesn't use the global configuration file for these IISExpress bindings. In order to configure this you now have a couple of options.

The first option is to configure your project to use the global configuration file by adding this to your *.csproj file:

<UseGlobalApplicationHostFile>True</UseGlobalApplicationHostFile>

Or you can add your additional bindings to the solution-specific configuration file Visual Studio 2015 generates here:

<solution_dir>\.vs\config\applicationhost.config

Now running your solution under Visual Studio 2015 will behave as required.

Potential Errors

There are obviously too many potential errors to keep track of on a single blog post but I thought I’d detail a few fixes to issues I’ve personally experienced.

Access Denied

Sometimes you may see this message when trying to launch your solution in Visual Studio. To get around this, close everything down and re-launch Visual Studio "as admin". This should fix the issue, and subsequent launches should then work without running Visual Studio as an admin.

Failed to register URL "http://192.168.0.8:51258/" for site "<name>" application "/"
Error Description: The Network location cannot be reached.

This was a particularly annoying issue and took quite some time to track down.  It seems that the Threshold 2 update to Windows 10 removed all my listening IP address entries!  You can check that by executing this command in a Command Window:

netsh http show iplisten

If your own IP address isn’t listed here you need to add it.  You can do that by using this command (use your own IP address obviously):

netsh http add iplisten 192.168.0.8

You can see more information about netsh here.


Compile Time Checking for MVC Views

One advantage of MVC Razor could also be deemed a disadvantage: no compile-time checking of views. I personally love the dynamic nature of building Razor views, but you can get caught out occasionally and find yourself navigating to a view that is actually broken, generates an error and crashes the web site.

To help alleviate this problem there is a project-level setting that you can use to pre-compile the Razor views, which will help highlight issues like this before they hit production. Yes, yes, we should all be picking up issues like this well in advance of production, and generally that is true, but even the hardiest and most comprehensive set of UI tests can miss things or not be quite as comprehensive as one would imagine.

Anyway, the project setting is called MvcBuildViews and you can add it to a project-level property group:

<MvcBuildViews>true</MvcBuildViews>

However, use with caution: this can dramatically increase build times and for any large project really is impractical to use in Debug mode. I only ever add this to Release mode, meaning it will get checked by your build process on its way to production, if not staging before that.
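A sketch of how that might look in the .csproj, assuming you only want the check in Release builds (the Condition attribute scopes the property group to that configuration):

<PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
  <MvcBuildViews>true</MvcBuildViews>
</PropertyGroup>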


Nuget Strikes Again

Honestly, for a tool that's supposed to make life easier and more robust, I find myself battling with it often.

I’ve been upgrading some of the packages in one of my projects and ran into this pretty error:

The schema version of 'Microsoft.Bcl' is incompatible with version 2.1.31002.9028 of NuGet.
The schema version of 'Microsoft.Bcl.Build' is incompatible with version 2.1.31002.9028 of NuGet.
The schema version of 'Microsoft.Net.Http' is incompatible with version 2.1.31002.9028 of NuGet.
etc ...

Er …

OK, so initially I thought I'd update the version of NuGet that TeamCity is running. Updated that, reran the build … same problem. Odd, I thought … did some Googling around and found out that you have to MANUALLY … yes, you read that right, MANUALLY update NuGet … What the very fuck? This tool gets worse in my opinion the more I learn about it.

SO, go to your .nuget directory in your solution and run this:

nuget.exe update -self

So just what the hell is happening when VS tells me NuGet Package Manager needs an update? … yet my nuget.exes in the solution were still languishing around at version 2.1 … honestly, this is shit!

Now I’ve gotten my Debug build to work, but I keep getting these errors on my release build:

The build restored NuGet packages. Build the project again to include these packages in the build.

WHAT????

This is EXACTLY what NuGet is supposed to bloody do! That is one of its core functions … run the build again? Can you tell that I'm seriously pissed off with this tool right now?


CI Process – Why?

Why is having a CI process good?

There are so many reasons why CI is good. I use it on everything I do. For my recent web-based project it's simply a must-have. Forcing yourself to take an application and pipe it through a series of standardised processes is invaluable. As regards web applications, I go through the full monty. The process I use is:

  • Dev & Debug in Visual Studio
  • run unit tests locally
  • commit to SVN
  • TeamCity Debug Build (Unit Tests included)
  • TeamCity Debug Build with ReSharper Code Inspections and FxCop
  • TeamCity Release Build with NuGet Artifacts
  • Octopus Deploy to local IIS for Staging

I do this even though it's only me on the project. A web application should be sturdy enough to make it through this process and flourish at the other end. It helps keep things robust, transferable, repeatable, tracked and traceable. All good things for software!

Obviously the quality and coverage of your unit tests is a key element in the process, but the pipeline described above is also key to ensuring that an engineered process is always taking place; the randomness is removed entirely. Ideally the build process should be occurring on a separate machine, as the environment it's being integrated in is the same environment it's designed in. That isn't ideal, but it's arguably less troublesome than doing it with a desktop application complete with a Windows installer.

The next step to be added to this process is something along the lines of Selenium automated tests to run against the website itself.


MVC Deployment – JavaScript – ReferenceError is not defined …

This had me stumped for a moment.

All the JavaScript was there (linkable from bundle links in the page source), all the files were there, everything worked in dev (doesn't it always?).

Anyway, if you’re seeing this error in your deployed applications there are a couple of steps to take.

Firstly, you should download and install the Visual Studio tool JSLint.VS2012. Then set it up to show warnings and don't set it up to fail the build. JSLint is not kind and is very strict about things (your adherence to best practice is your choice, but recommended for sure!).

So, you deploy your app and BANG, all your lovely JS is twatted. Never fear … pop over to your web app's Global.asax file and in the Application_Start method include this:

BundleTable.EnableOptimizations = true;

Now, with this setting in place, you can launch your app in debug mode with all the optimisations forced on, to test minification and more closely model your testing on the deployed version.

Once you have that working and are seeing the problems more clearly you can start to work through the potential issues using the reports from your new JSLint VS plugin to fix the syntax and other formatting issues.


MVC – Display Version Number in Your UI

If you ever want to display the version of your app in the UI (useful in development with lots of environments) you can do the following.

In your Global.asax start method you can add:

Version version = Assembly.GetExecutingAssembly().GetName().Version;
Application["Version"] = string.Format("{0}.{1}.{2}.{3}", version.Major, version.Minor, version.Build, version.Revision);

Then somewhere in your site UI you can do this:

@HttpContext.Current.ApplicationInstance.Application["Version"]

Build Process – TeamCity, NUnit, dotCover & Octopus Deploy

I've blogged about TC a bit in the past, but I've just set up a whole new build process for my current project.

Added into the mix these days are dotCover and Octopus Deploy. I have to say that I'm seriously impressed with the simplicity of this flow to release. There are still a few elements of Octopus Deploy to get my head around (deceptively simple tool!). Anyway, as ever, getting some of the reports configured took a while …

Get your coverage filters right in TeamCity:

+:<SomeString>*
-:*Test

If, for instance, your project is called JamSoft and all your application DLLs are called JamSoft.blah or JamSoft.blah.blah, then <SomeString> should be "JamSoft", and of course you've suffixed all your test libraries with .Test, haven't you … 🙂
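So for that hypothetical JamSoft solution the coverage filters would end up looking like this:

+:JamSoft*
-:*Test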

Unless you already have a tool you're using, I found the TeamCity internal instance of dotCover a good solution. I tried to get NCover 1.5.8 working and just gave up in the end; it's old and getting problematic as far as I'm concerned, and since the TC internal solution is available, use it. It saves some potential headaches.

I also had a few teething problems getting NUnit working this time around. I was using an installed version of NUnit 2.6.2; however, on first build in TeamCity it couldn't complete the initial compile process as it couldn't find the DLLs. I ended up switching over to using a NuGet NUnit package and then the compilation steps were fine. A bit odd, since I was running the TC builds on my development machine, so NUnit was definitely available.

As soon as I've got a couple of things ironed out with my use of Octopus Deploy I'll no doubt blog about that as well; however, for now I'm a total noob so …


WiX UI Not Updating As Expected

During my recent forays into the world of WiX I've been slowly hacking away at the steep learning curve. This really is a huge framework that is dealing with an intrinsically complicated process. For my own projects I've always stuck with Inno Setup and, to be honest, I don't have any compelling reason to move them away from Inno at the moment as they are functioning as expected.

Anyway, the last problem I encountered that took a question to the WiX mailing list to answer was regarding showing messages conditionally in the UI based on custom property values changing. What was confusing was that the log file showed all the custom actions and properties being correctly processed, but the UI never actually updated. Sound familiar? If you're having this problem, simply try hitting the back or next button and then going back to the dialog that should have updated. I bet you now see the updated value in the UI (provided all your WiX source is correct, of course!).

The issue is that dialogs are never redrawn. This is actually a limitation in the MSI UI implementation. Whilst I haven't looked into alternatives like replacing the MSI UI (lots of work, apparently), there is a hacky solution: twin dialogs. Basically you create an exact copy of the dialog that should be updated, and then once the property changes you simply show this twin dialog. It will appear as though the UI has updated when in fact it's showing an entirely new dialog.

In the snippet below you can see that the last published event is the call to the current dialog's twin. You will need to give the twin dialog a new Id, but everything else can remain the same. It's not a nice solution, as you now have two dialogs to maintain with any changes, but it is a solution nonetheless.

<Control Id="TestDbConnection" Type="PushButton" Width="100" Height="17" X="19" Y="202" Text="Test Connection" TabSkip="no">
    <Publish Event="DoAction" Value="SetGeneratedConnectionString">1</Publish>
    <Publish Event="DoAction" Value="CheckDataAccessCa">1</Publish>
    <Publish Event="NewDialog" Value="DatabaseConfigTwinDlg">1</Publish>
</Control>

Managed Custom Actions Failing in WiX?

Well, after many hours wondering why my WiX custom actions were failing to run, I made an interesting discovery. The library that I had created to hold my custom actions was being referenced correctly from the main WiX project code in a separate WiX fragment, like this:

  <Fragment Id="CheckDatabaseAccessCa">
    <CustomAction Id="CheckDataAccessCa" BinaryKey="ProjectName.dll" DllEntry="CheckDatabaseConnection" Execute="immediate" Return="check" />
    <Binary Id="ProjectName.dll" SourceFile="$(var.ProjectName.TargetDir)$(var.ProjectName.TargetName).CA.dll"/>
  </Fragment>

Changing any part of this made the build process fail, so it was finding the DLL and adding it to the MSI package correctly. The first line of the custom action being called was a call to launch a debugger so that I could step through the method and see it in action; this was never hit, so it wasn't even getting that far.

System.Diagnostics.Debugger.Launch();

Anyway, I have now cracked this particular problem.  As it happens the library containing the custom action was compiled for .NET 4.0 … as soon as I switched this to either .NET 2.0 or .NET 3.5 all is happy.  All I can assume is that since WiX itself is .NET 3.5 that is the version that is loaded into the process by default.  So when the MSI tried to access my custom action in a .NET 4.0 library it simply bombed the whole process.
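If you want to make the same switch, changing the target framework on the custom action project's properties page just flips this property in the .csproj (shown here for .NET 3.5, the value being the bit to change):

<TargetFrameworkVersion>v3.5</TargetFrameworkVersion>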


Building Installers

If you are ever in a situation where you are building an installer for an application and things have gone a bit awry with the uninstall process (tut, tut) there is a nice simple way to get yourself back on track.  Simply run the command below and this will force an uninstall of the product matching the GUID supplied in the command.

Msiexec /x {your-product-guid-code} IGNORE_PRE_CHECK=1

I've not needed this yet, but no doubt it's worth blogging about as it will rear its ugly head at some point!
