Bookmarks Toolbar in Firefox Full-Screen Mode

This is a neat trick that I use in Firefox.  I like to use Firefox in full-screen mode and I use the bookmarks toolbar a lot.  The trouble begins when using full-screen mode (toggled with F11, for the uninitiated).

Using full-screen mode really improves the browsing experience.

It gets rid of UI clutter and lets you focus on the web site you’re browsing.  Unfortunately it also hides useful things, such as your bookmarks toolbar.

You still have access to your tab bar by placing your cursor at the top of the viewport.

You can also use CTRL+B to toggle your sidebar into view, which is useful.  Unfortunately, the bookmarks toolbar will disappear … BOO!  You’re left with this:

[Image: Missing bookmarks toolbar]

To bring it back whilst in full-screen mode, you need a spot of profile CSS jiggery-pokery, but it’s really straightforward.

How to Show your Bookmarks Toolbar

Firstly, locate your active profile:

  1. Open the Run dialog by one of the following:
    1. Pressing Windows Key+R;
    2. Clicking on Start, then typing “run”;
    3. Pressing the Windows Key to open the Start menu and then typing “run”.
  2. In the Open command textbox in the Run dialog, type the following command:
    firefox.exe -P

This will open the Profile Manager view showing your active profile.  To view the location on disk, hover over the active profile entry shown on the right-hand side of the dialog; a tooltip will pop up showing the path.

[Image: The Firefox Profile Manager]

  1. Navigate to this profile directory in Explorer, then look for a “chrome” directory containing a userChrome.css file.  An example location might look like this:
    %APPDATA%\Mozilla\Firefox\Profiles\<your profile directory>\chrome\userChrome.css
  2. If none of these exist, create what you need:
    1. Create a “chrome” directory if it doesn’t exist
    2. Create a new text file inside the new chrome directory (Right-click -> New -> Text Document)
    3. Name it “userChrome.css”.  Click Yes when asked about changing the file extension.
  3. Edit userChrome.css by selecting the file and hitting Enter on the keyboard or double-clicking on the file name.
  4. Paste in the following code:

    @namespace url(http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul);
    #navigator-toolbox[inFullscreen] #PersonalToolbar {
      visibility: visible !important;
    }

  5. Save the file

Now restart Firefox and test full-screen mode (F11).  Going into full-screen mode should now keep your bookmarks toolbar accessible.

[Image: The bookmarks toolbar shown in full-screen mode]

Useful Full-Screen Shortcuts

When in full-screen mode, you can use these shortcuts:

  • Open new tab – Hold CTRL + T
  • Scroll a page – Press PgUp or PgDn
  • Open Search – Hold CTRL + K
  • Refresh Page – Press F5 or Hold CTRL + R
  • Go to saved Home Page – Hold ALT + Home
  • Cycle through open tabs – Forwards – Hold CTRL + TAB – Backwards – Hold CTRL, SHIFT + TAB
  • Close current tab – Hold CTRL + W
  • Undo Close Tab – Hold CTRL, SHIFT + T
  • Navigate history – Hold ALT and then tap an Arrow Key
  • Find in page – Hold CTRL + F, then press F3 to continue looking
  • Open developer tools – Press F12
  • Open mobile viewport – Hold CTRL, SHIFT + M

You can find many more useful keyboard shortcuts in the Firefox documentation.

 


Apple Music – A Disaster for Users

This is the worst thing I’ve ever read regarding an online service.  The Apple Music service hijacks ALL your content – even your own created works – and holds you to ransom for access to it.  Literally unbelievable.  Not only that, if you have nice high-bandwidth WAV files, it’ll convert them to MP3 or AAC at the same time.  This is just about the worst thing you could possibly do to a composer’s own work.  I would be livid if it were my stuff.

“The software is functioning as intended,” said Amber.
“Wait,” I asked, “so it’s supposed to delete my personal files from my internal hard drive without asking my permission?”
“Yes,” she replied.

https://blog.vellumatlanta.com/2016/05/04/apple-stole-my-music-no-seriously/

I’ve never read anything like it.

The author – James Pinkstone – is also a composer and has had his own material taken from his hard drive.  This is theft.  I hope that the attention this is drawing will force a rethink, as this is probably one of the worst content-hijacking stories, and in my opinion one of the worst consumer-relations scenarios, I’ve ever read about – bar none.

I have a MacBook Pro, an iPad and an iPhone.  However, all of these devices are dedicated to testing software I make.  I remember becoming so frustrated and annoyed, not to mention confused, with iCloud “sync” that I turned it all off.  I hate it.  It seems like the new music “service” from Apple has taken that premise, given it a limitless stash of steroids and unleashed it onto an unwitting public.

Draconian is the only word that fits.

Apple, you already were an incredibly arrogant company, but this is a whole new level of arrogance.  This is criminal in my opinion.


Blender GTX980ti

So last year I upgraded from my old EVGA GTX580 Classified to a Palit GTX980ti JetStream.  I expected a massive increase in performance in Blender, both in the preview rendering whilst working on models and also in final render times.  Since I’ve been incredibly busy on other projects I’ve not had much time for working in Blender.

I ran a few benchmarks and was more than a little disappointed in the results.  Using the BMW Blender benchmark file I was getting render times that were actually slower than with the old card.  Since I wasn’t doing any serious work at the time it wasn’t a huge problem; however, I’ve now got a bit of time to look at this again.  It turns out there is a known compatibility/performance issue with this card.  The issue is not limited to a single manufacturer and seems to be related to the GM200 chip used in the 980ti and Titan X cards.

This issue is now being investigated by the Blender developers and you can keep up with the developments on their Blender bug reporting site here.

Update!

OK, so we’ve actually seen quite a bit of movement on this issue, both from Blender and from NVidia.  NVidia have reproduced the issue and are looking into it.  That said, things are already looking very good indeed, and if they do make additional improvements, better still.

I’ve just performed a new test with Mike Pan’s BMW benchmark file.

With the very latest NVidia driver (368.22) and using the latest Blender nightly (2.77.0 – nightly – Thu May 26 04:46:46 2016) my time has drastically improved from 2:10 down to 1:01.  I think with the rapidly moving world of NVidia drivers we may well see further improvements over these times.


The Dangers of Self Build Web Site Platforms

This might at first appear like a form of scare mongering, or a thinly veiled sales pitch on my part but I can assure you it isn’t.  I have too much work to do already!

For a small business owner the attractions are many and varied.  Obviously cost is always a very large consideration; it’s probably at the top of your list of reasons for not engaging an agency or web site company to do the work for you.  And that’s before you even consider the cost of a designer and a developer (yes, they are more often separate roles than one all-powerful individual!).

The allure of being able to build your own web site is a strong one for a small business.  After all, how hard can it be?

There are now many platforms offering a kind of DIY e-commerce solution, selling themselves as a panacea for all web site requirements.  Some are good and some are outright dreadful, but all of them will allow you to make very big mistakes along the way.  If it were easy, we would all be experts.  The pitfalls are incalculable in number.  Even after 20 years in the business I’m still learning.  I don’t work on my car for the same kinds of reasons.  Sure, I can use a spanner, but that doesn’t make me a car mechanic with oodles of experience under my belt.

If you aren’t in the software engineering game by profession, what makes you think, other than the aforementioned sales pitches, that you can do it?  Even when you have it “working”, do you really know it’s working?  Do you know it’s secure?  Nine times out of ten, when I see people doing these self-build web sites, it’s for their business and it’s also an e-commerce system.  In which case you’re dealing with some very sensitive data belonging to your customers.

Non-professionals often assume that a web site is somehow not software.  It is software.  It’s an application like any other, such as the web browser you are viewing this with; it’s just hosted on someone else’s computer somewhere “out there” on the internet.

The issues I often see in these sites include things like:

  • Very poor UX (if you don’t know what UX is there’s one reason you shouldn’t build a web site right there)
  • Confusing navigation
  • Security holes
  • Poor SEO planning
  • Awful design and page layout
  • Untested features
  • Incompatible plugins
  • Poor performance
  • etc.

Granted, some of the platforms negate some of these issues since you are sometimes forced down particular routes by design.  However, all of the platforms I have used will still allow the uninitiated to make huge blunders.  Whilst you are “saving money” by not engaging with a professional you could be dooming your site to complete inactivity due to any of these issues.  How can you calculate that cost to your business?  The short answer to that is, you can’t.

Of the platforms I’ve used personally, Shopify and Squarespace are probably the only two I would recommend.

You have to remember that any time a user comes to a web site with issues, that’s the first experience of you and your business they get.  First impressions do last when it comes to technology, and if you don’t fill visitors with confidence they are not going to convert into a sale.  The point here is that most visits to most sites turn into a bounce statistic: they come to your site, spend 10 seconds (if you’re lucky) on the home page and leave again.  Boom, lost business, sad face.

Don’t make these mistakes: do your research, and my advice is to at least speak to a professional before doing anything or making a decision either way.  You might learn something useful even from a chat.


ReSharper 10 Speed Woes

I love ReSharper, I really do.  I feel slightly lost when using a copy of Visual Studio where hitting Alt+Enter doesn’t offer the same code sugar rush I’ve come to expect.  But I’ve gotta say that VS2015 Update 2 with ReSharper 10.0.2 is just starting to get to a point where the marriage seems to be stagnating.

For the first time since I started using ReSharper all those years ago, I’m reaching for the suspend button.  I’m regularly seeing it use such a vast amount of my CPU cycles that I just cannot justify leaving ReSharper active all the time.  There are certain projects I work on where the syntax structure can see the instance of Visual Studio running ReSharper start using 80+% of my i7.

That’s a LOT of CPU grunt.  I should also say that, as far as I can tell, Visual Studio 2015 seems to be the buggiest RTM version I have used.  I have multiple issues at the moment with no solid workaround, or even much help/input from Microsoft.

The versions of tools I have here are:

  • Visual Studio 2015 (14.0.25029.00 Update 2 RC)
  • .NET 4.6.01038
  • ReSharper 10.0.2 (104.0.20151218.120627)

There is a guide provided by JetBrains for speeding up ReSharper.  You could argue that having a guide like this acknowledges there is an issue, which I think is both fair and slightly unfair.  Either way, I’m a die-hard fan and even I’ve had to resort to looking over this guide and trying to arrive at a feature/usefulness balance that I can live with day-to-day.  Delays of 2 seconds between key presses are the makings of pure frustration, but that is what I have been seeing on occasion; at times like that ReSharper is anything but a productivity tool.

I was working on a solution the other day and having some serious performance issues.  I switched from ReSharper IntelliSense to the built-in Visual Studio IntelliSense and was genuinely shocked by the immediate difference in speed.

This is actually the first thing I would suggest people try in order to speed up their system.  It appears ReSharper is doing an awful lot of text chugging in order to arrive at some of the suggestions and naming options.  Whilst useful, in the particular section of code I was writing (View constraints in a Xamarin iOS solution) the processing was just killing my machine.

Will keep this up-to-date with anything useful I find along the way.


App Store App Submission – Continual Failure and Obfuscation

The App Store has to be one of the flakiest “services” I’ve ever had the displeasure of having forced on me.  App submissions seem to fail constantly, more often than not with little or no real error information.  I am genuinely in shock about how bad the entire iOS software distribution process is.  It is thoroughly, thoroughly broken.

The first submission I did failed 3 times just to get the build up onto the Apple servers.  When it finally made it onto their server, it took an overnight wait to get out of the “Processing” status and into a state where I could push it to our testers.  The second time I wanted to push a new build, it uploaded on all 4 attempts but failed “Processing” 4 times as well.  Two days and 5 builds (from the same source code) and I’m no nearer getting this build out to testers.

Local validations pass, then an upload fails, try again, upload works … processing fails … ad nauseam.

Episode IV – A New Build

I uploaded a new version of that same app to the App Store last night, for instance.  I waited nearly an hour watching the “(Processing)” text again, got thoroughly bored and went to bed.  I woke up this morning to the incredibly helpful message:

There was an error importing this build

No error information, no reasoning, nothing to work from, nada, zilch …

And now, since Apple refuse to accept another upload with the same build number, I have to push EXACTLY the same code-base through my build pipeline just to artificially bump the version number to suit Apple.  Shambolic.

According to this thread I am not alone.  This seems to be quite an issue in fact.  And it’s not related to Xamarin since it’s happening to Obj-C and Swift app developers alike.

Now that I have made a new build with a new version number, I cannot even get this build to upload through Xcode.  I now get:

This action could not be completed.  Try Again. (-22421)

I’m PAYING Apple for this service … real cash money … I expect more, a LOT more.  You can see from this SO thread that the problem happens often.  If you get an error from a system like this and the “fix” is to just repeat the same steps until it works, that is absolutely 100% proof of a piss-poor architecture behind the scenes.

No wonder Apple use Microsoft Azure as the real back-end powering iCloud; they simply couldn’t build Azure.

Fixes like “as with everything Xcode, a reboot fixed the problem” are hilarious.  It just proves to me that all these people who make out that Apple stuff “just works” really aren’t using their systems for anything other than looking at cat videos.

Shambolic.

Another error I get when validating the archive before submission is:

iTunes Store operation failed

Seriously, Apple, you need to clear up this embarrassing mess of a process.  This is Xcode 7.2.1, not Xcode 0.5.

Application Loader vs Xcode

Confusingly, there are two methods to get (I say “get” generously, as neither works reliably) apps into the App Store.  Xcode only deals with xcarchive files, via the Organizer in Xcode, whereas Application Loader only deals with .ipa files.  Confused?  Yes, well, confusion is the name of the Apple game.  Application Loader always fails on the API analysis stage, but don’t worry, the message is only an advisory note that something else in your distribution tool chain is frankly shit.

The fact that these two methods seem to behave so differently and support completely different workflows suggests to my developer mind that they aren’t even using the same code to perform what should be a standardised process.  It’s not like the App Store is new …

Xcode has told me every time that my binary is valid, but so far it has always failed once it hits the Apple servers and the LONG 4+ hour wait while they are “Processing” it.  Jeez, people moaning about Microsoft should have a look at Apple if you wanna see a mess at work.

This from the richest company in the world, literally inexcusable.  I wouldn’t mind so much if it actually gave any hint at all as to what the issue might be.  “It Failed” is literally useless.

Error Message At Last

So on my third attempt to get my app up to Apple, I only had to wait 4 hours to be told that processing had failed again.  However, this time I actually got an error message, hidden under the little red exclamation mark:

itc.apps.prereleasebuild.errors.processingfailed.

However, what that actually means is anyone’s guess.  I cannot find much about it online, which leads me to believe this isn’t an error that should ever make it to the UI.  Half an hour later that little error message had disappeared anyway.  It’s now back to the useless “It Failed” crap.  I haven’t received any other info from Apple.  Nothing, no emails, no explanation, no advice and, obviously, nothing in my TestFlight dashboard.

I literally give up, I’m waiting for a few days until I even bother trying again.

Seriously, Apple this is a shambles.

Apple System Status

Due to all these App Store frustrations, I found a useful link on my journey of failure: the Apple System Status page.  You can see from there that iTunes Connect is experiencing issues.


The Decline of eBay

eBay used to be great.  Today, however, was the final nail in the coffin for me, either as a seller or a buyer.

When perusing the site these days I regularly see items that are wildly overpriced, even when compared to “normal” retailers, and often not just by a small margin but by orders of magnitude.  The original core attraction of eBay was not only its auction format but the chance to get a good deal.  After the auction novelty wears off and the eBay business ego inflates along with its fees, what are you left with?  A massively over-complicated and expensive e-commerce site.

Finding items has become a hit-and-miss chore.  Then, when you do find what you’re looking for, the price is either the same as on other sites or wildly over-priced (no doubt desperate sellers trying to recoup the inflated final value fees).  The search feature is something lots of sellers have been complaining about since eBay took the hosting in-house during 2015 and started “tinkering”.

Anyway, I completed my last eBay sale last month; never again will I darken your site.  I sold an item for £145 – the cost to me: £11 shipping and £14.50 final value fees.  £14.50 – 10%!!!!!  This makes eBay utterly irrelevant for me ever selling another item.  Free listings my arse, you’ve just upped the final value fees.  So after 13 years: bye, bye eBay.  You just don’t make sense any more.

Searching through the discussion forums makes for some sobering reading for any seller.  Droves of messages from sellers detail how their sales literally fell off a cliff over the summer of 2015.

Messages along the lines of:

we were doing $250k a month until the summer, now we’re lucky to do $50k a month – we have 20 staff and are looking at bankruptcy within months at this rate

It seems this is incredibly common and is utterly unsustainable for either eBay or the sellers.  I think this is truly the final decline of eBay.  My personal usage has ground to a halt over the last 12 months in any capacity, and it seems this is true for a very large number of people.

You can read some of them here:

https://community.ebay.com/t5/eBay-Chat/The-decline-of-eBay/td-p/2921049

RIP, eBay.


My Life With a MacBook Pro – 3 Years In

Hmm  … where to start?

I had to buy a MacBook; I need to write and compile apps for iPhones and iPads.  I would have preferred to spend the extortionate cost on something else, like keeping it as an inheritance for my kids or buying a house.  But, well, my hand was forced by Apple.

I’d love to be able to say it’s been a pleasure to own, but frankly it hasn’t.  I know it’s fine for the average user who wants to browse the web, send a few emails and “FaceTime”, but for my needs it’s frankly a pig.  I’ve had more weird issues with this one laptop than in my previous 18 years of Windows-based PC ownership; I’m not exaggerating.

It just works? If it “just works”, why the need for genius bars?

I’m a power user, a software engineer and I build desktop PCs and mission critical servers so I know stuff about hardware as well as software.  I just do not rate these machines.  They look pretty but my god when something goes wrong you end up in a mess.

Hardware Issues

Great hardware, I hear people saying?  In these three years I’ve had to replace the power supply twice (£65 a pop – £195 on power supplies!!!).  I’ve never replaced a power supply on anything in my life, let alone a laptop.  I’ve also never had a port on a machine die.  This is a £2000 laptop and I’ve experienced both, multiple times.

The first Thunderbolt port became less than reliable after the first 12 months, and this weekend the one that did still work also suddenly stopped.  Now, to continue using my Ethernet network connection, I have to purchase more hardware just to keep using what has been a computing standard for decades – the RJ45 connector.

Add to those problems the completely unserviceable build methodology and it seems I’m slowly cruising towards owning a very expensive brick.

I’ve also got to make an appointment with the “genius bar” folks, as this laptop is also afflicted with the screen coating issue Apple have issued a recall over.  Luckily my MacBook Pro doesn’t seem to have the issue with the video card (which also affects early 2013 MBPs).

Remember, this is a PREMIUM £2000 laptop … I’m really not impressed to be honest.  Two major recalls (there may be more I don’t know about) on a product like this isn’t good.

Bluetooth has also been a constant source of hate for me.  For the majority of the time I have owned this laptop I couldn’t use the Magic Mouse & Wi-Fi at the same time.  The Magic Mouse, at the best of times, drops its connection for a hobby.  How dare I expect to use the mouse and internet at the same time!  But it doesn’t matter, since I stopped using the Magic Mouse completely because it would make my Wi-Fi either painfully slow or not work at all.  So there’s another £59 pissed up the wall.

Software Issues

OSX is horrific.  I hate it.  I hate its design and I hate the way it operates.  Hiding loads of things in the UI until you press the Option key is a usability disaster.  Finder is a joke of a disk navigation tool.

When I first got my hands on OSX (Mavericks) I was literally blown away by what it couldn’t do out of the box.  Finding that I had to buy additional software tools to do proper window management was a joke.

“But OSX doesn’t get viruses”, I hear everyone yelling.  Well, in the past 20 years I’ve had … let’s see … 2 viruses on a PC that I had to deal with.  Both back when I was using Windows 98SE.  So shut up, this is a moot issue.  If people will click on every link or dodgy web site they’re sent, they should expect to get viruses.  A bad workman always blames his tools …

Every time I’ve done an upgrade on the OS I’m left with crap to deal with: resetting the PRAM or the SMC or both because some issue has crept into the system, either Wi-Fi not working or the Bluetooth connectivity going nuts.  For the longest time after the Yosemite update I couldn’t use my UE Boom at all, as the audio was never in sync with the video.

One of the reasons Windows is so pervasive is backwards compatibility.  I have programs from 1995 that I can happily run on my Windows 10 box.  It seems with every update of OSX something stops working.  The classic example is Parallels.  They seem to capitalise on this fact with their marketing and will scare users into upgrading, even when their app would carry on working.  But for me Parallels 8 genuinely stopped working on El Capitan, so I switched to using VirtualBox (which is free).

All in all, I won’t be recommending Apple stuff anymore.  To those to whom I have recommended an Apple product, I apologise.  The problem is that now I’m an iOS developer I’ll always need a Mac around for code compilation duties.  But no more MacBook for me; I’ll get a Mini and hide it away somewhere so I don’t have to look at it.

Crisp Retina displays on the MacBook are no compensation for the issues I have, particularly when the hardware is all glued together.  That fact renders this gadget basically throwaway tech.  £2000 throwaway tech.  How Apple can boast of being green is beyond my comprehension.

As for developing software for the Apple platforms?  I’ll leave that to a future post as that is even worse than dealing with their hardware …

The one thing that Apple gets absolutely correct – marketing.


TeamCity – Sharing Build Properties

Sharing parameters and properties can be a bit confusing for new users of TeamCity.  Let’s use a simple example to illustrate how you can do this.

For the sake of this explanation, let’s say that you have two build configurations that are essentially building the same thing but for different distribution uses; the source code is the same, which would result in the same revision or commit of the source repository.  The only material difference is maybe some certificates or binary labels or identifiers.

Another scenario might be integrating with an external deployment system like Octopus Deploy, which you want to kick off in its own build configuration.  In either case you want the two build configurations to have the same build number.

Let’s say you have an application called MyApp with Debug, Release and Deploy build configurations, and you want the Deploy configuration to use the Release build number.

So, to do this:

  1. Navigate to the Release build configuration and note down the buildTypeId from the browser URL (for this example let’s say this is webrelease).
  2. Navigate to the Deploy build configuration
  3. Click on Edit Configuration Settings, then Dependencies, then Add Snapshot Dependency
  4. In the dialog that pops up select the check-box that corresponds to the Release build configuration and set any other relevant settings
  5. You have now “linked” the two build configurations, which means you can use the %dep.x% parameter notation
  6. Now, to set the build number, navigate to the General Settings of the same Deploy build configuration
  7. In the Build Number Format text box we can enter %dep.webrelease.build.number%

Voila!  The next execution of the Deploy build configuration will have the same build number as the Release configuration.  Go nuts with your params!
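Note that the same notation reaches beyond the build number: once the snapshot dependency is in place you can reference other parameters of the linked configuration too.  A couple of hedged illustrations (the property names below are typical examples, not taken from a real project):

    %dep.webrelease.build.vcs.number%    (the VCS revision the Release build used)
    %dep.webrelease.system.PackageId%    (a system property defined on the Release configuration)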

 


Xamarin iOS TeamCity Build

So this turned into a complete nightmare for a while.

TeamCity Mac Agent Push Installation

I started by trying to get TeamCity to do a push install of a new agent on my MacBook Pro and very quickly things got very, very ugly.  First off it couldn’t find a JDK (which was installed) so I was using bash to update my ~/.bash_profile with the JAVA_HOME variable, but nothing seemed to work.  I fixed that by simply downloading the latest JDK release package and installing it.

Once the JDK errors and my ~/.bash_profile were fixed, I then got the error (this is it in its entirety) “su: Sorry” … at that point I threw in the towel.  I’ve got better things to do than battle with this crap.  I abandoned the plan to push-install the agent, as I simply couldn’t figure out what was wrong with no information; the error text above was all that appeared in the agent’s log file as well, so it’s anyone’s guess what the problem really was.

NOTE: Make sure you can browse to the TeamCity site on your network using the agent machine

So I dialled into my TeamCity server, grabbed the zip installer and did it all manually.  This worked first time (after amending the buildAgent properties file), so whatever the issues were with the push installation I don’t know and frankly don’t care; JetBrains have some tidying up to do on this.

Build Steps

Since the build for the iOS application is now happening using Xamarin Studio installed on the Mac the build process is much simpler.  At the moment my unit tests are all run during debug builds only and these occur on a PC running another build agent.  I will revisit this to start executing test runs on the Mac agent to make for a more holistic testing process.

Updating Plist Files

There are a number of ways you can update plist data on OSX.  Some people suggest using the terminal defaults command, but I found that didn’t work when presented with an iOS-style Info.plist.  Chiefly it seemed to be confused by the internal layouts having arrays and a dict element.

By far the simplest way I found was making use of the standard OSX tool PlistBuddy, which can deal with our iOS plist data.  So I added a new build step (executed before the compile step) and have configured it to let me update the plist’s internal version numbers and bundle identifiers based on the build configuration.

I normally use a 4-element version number for .NET – (major).(minor).(build).(revision) – which gives full traceability back to source control for an individual release.  But if you use this in all cases you’ll have problems, as iOS uses the semantic version number format (major).(minor).(build) for the “CFBundleShortVersionString” version number.

But you can also use arbitrary values in the “CFBundleVersion”.  Lots of people seem to ignore the AssemblyInformationalVersionAttribute in .NET, but it’s a great feature, as you can use arbitrary strings as well as the more formal version format in your .NET assemblies.
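On the .NET side, the three version attributes look something like this (a quick hedged sketch; the version values are invented for illustration):

    // AssemblyInfo.cs – illustrative values only
    using System.Reflection;

    [assembly: AssemblyVersion("2.1.347.0")]                    // (major).(minor).(build).(revision)
    [assembly: AssemblyFileVersion("2.1.347.0")]
    [assembly: AssemblyInformationalVersion("2.1.347-beta+sha.1a2b3c4")]  // arbitrary string is allowed here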

    $ /usr/libexec/PlistBuddy -c "Set :CFBundleIdentifier (yourbundleid)" Info.plist
    $ /usr/libexec/PlistBuddy -c "Set :CFBundleShortVersionString %build.number%" Info.plist
    $ /usr/libexec/PlistBuddy -c "Set :CFBundleVersion %build.number%.%build.vcs.number%" Info.plist

Compile – Obsolete

The build step for compiling the iOS configuration is executed using a CommandLine build runner with a simple command like this:

    /Applications/Xamarin\ Studio.app/Contents/MacOS/mdtool build "--configuration:Debug|iPhone" /path/to/solution.sln

Archiving Builds – Obsolete

Once the build is complete you’ll want to automatically create your archive.  When you come to distribute your application to the App Store, or to TestFlight via Xcode, you’ll want to use an xcarchive.  You can use IPA files from the build step with the Application Loader, but as far as I can tell there is no way to upload dSYM files for any symbolication processes, which is a bit crazy (unless I’ve missed something).

Create another build step and use:

    /Applications/Xamarin\ Studio.app/Contents/MacOS/mdtool -v archive "--configuration:Debug|iPhone" /path/to/solution.sln

This will place an xcarchive file into ~/Library/Developer/Xcode/Archives/.  You can then validate this archive via Xcode -> Window -> Organizer, on the Archives panel.

Bye Bye MDTOOL Hello XBUILD

So, it seems the recent versions of Xamarin Studio and the MDTOOL method of compiling and archiving have become problematic.  I started to see builds failing with some odd errors relating to my Droid project inside the build logs of the iOS builds.  Hugely confusing.  I never got to the bottom of why this was occurring and decided to review the whole process.  Both the compile and archive steps are now combined into one step and one command.  The command I’m now using is:

    /Library/Frameworks/Mono.framework/Commands/xbuild /p:Configuration=AppStore /p:Platform=iPhone /p:BuildIpa=true /p:ArchiveOnBuild=true /target:Build /path/to/solution.sln

Obviously you can change any of the configuration or platform parameters to suit your requirements / project setup.

As I work through the next steps of this process I’ll keep this blog post updated – you can read the Android version here.


Source Control – Learning Git

Over the years I’ve used many source control systems.  Most of them seemed pretty simple to use after a short period of time.  However they all have their idiosyncrasies that will occasionally catch you out and some of them have been next to useless at their intended purpose.

The worst of these systems I have had personal experience with was Visual SourceSafe from Microsoft; this shouldn’t be used by anyone, ever.  I’ve seen it do the one thing a source control system should never do – lose your work.  I must admit that all my experiences with Microsoft solutions for this particular task have been poor.  I’ve only had TFS inflicted on me once; that experience was extremely lumpy and I really never want to repeat it.  Certainly after reading that at the core of TFS is a re-written Visual SourceSafe … shudder.

There are many source control systems to choose from, which can make it a bewildering decision.  They range in price from free to eye-wateringly expensive “corporate solutions”.  Source control is source control; it either does it or it doesn’t, and you can keep your bells and whistles, thank you!  I haven’t had the opportunity to try any of those, since I haven’t worked for a massive organisation that was ever prepared to spend that kind of cash – yes, they really can be THAT expensive; think six-figure sums of cold hard cash, just on source control.  Your product has to be worth millions before you’d even consider them.

Subversion Source Control

For most of that time, including my home projects and business source control tasks, I have been using Subversion, particularly SVNServer and TortoiseSVN.  Since these projects are a one-developer affair, the extra functionality of a distributed architecture for collaboration was never a consideration, and since I was already very familiar with the system it was a no-brainer decision.  Install, set up a repository, get on with code.

To be honest SVN still isn’t a terrible choice (unless your name is Linus then you are stupid and ugly by default).  It does the job, it doesn’t get in my way (that often) and since everything is local to my machine it also performs well.  As soon as you have to start going over the network the performance can degrade dramatically though.

Enter Git Source Control

So last year (yes, I know, I’m very late to the party) I had my first commercial requirement to use Git, and I also started using GitHub to host some of my own projects related to my Prism articles (part1, part2) and other bits and bobs.  I think one of the reasons I had kept Git at arm’s length until that point was that I hadn’t been given a compelling reason NOT to carry on using Subversion.  I also hadn’t been given a commercial reason to NEED to use Git, and I was already busy enough writing code without changing core elements of my workflow.

Another issue was that during my first commercial use of Git, when I hit a problem, the guys who owned the repository didn’t know how to solve it either, which wasn’t very reassuring.  That just put a flag up for me, hammering home that “Git is inherently complicated” – avoid.  I had also struggled to get some of my own stuff up onto GitHub using their own Git client application (which I no longer use, as I don’t think it’s very good to be honest).

There was no denying the rise and rise of GitHub as a platform and the fact that all my fellow developers seemed to be flocking to Git and loving it.  The final straw that led to this blog post was that I really wanted to help out on some open source projects – chiefly MvvmCross.

Now I had to learn Git and boy am I glad I did.

The initial confusion is absolutely related to the way you already think about source control: Git borrows verbs like Add, Commit and Branch (among others) but thinks very differently about them, and does very different things in response to them on the command line.  BUT once you get it, you’ll realise the shortcomings of all the source control systems you’ve used before.

Git solves problems you didn’t even know you had.

The biggest difference that WILL confuse you is going from a traditional centralised repository model to Git’s distributed model.  Don’t assume you will just get this; I thought I would, I didn’t, it hurt.  The best graphic I’ve seen that encapsulates this is …

[Image: Git’s distributed model]

The basic gist of this is that there is no single master repository of the kind you are used to interacting with.  There are many, but most importantly you have both a local “working copy” and a local repository with a “staging area” which you commit to.  You can link to an arbitrary number of “remotes” (other repositories) and push to and pull from any of them as if they were all master repositories (in centralised parlance).  It couldn’t be more different from something like Subversion.

In the graphic above the origin can be thought of as a traditional central repository but you can have lots of these if you want.  More importantly you can also pull changes from anyone else and they can pull (update in SVN terms) from you.  All very cool, and very powerful.

Switching to Git Source Control

To get started with Git you first need to install Git itself (none of the GUI downloads include Git).  A good GUI client (none of them are perfect) is SourceTree from Atlassian.  They also provide great resources here.  There is also a completely free Pro Git book.

I’m not going to go into any more details about that here as this is what the videos and links are for.  All these resources and the main video above will give you some context on Git.  Once you’ve watched that video this is the next one you should watch to get to grips with the nuts and bolts of Git.


Bye, Bye Parallels – Hello VirtualBox

So I finally did it.  When I first purchased my MacBook Pro I bought a copy of Parallels 8.  What a mistake that turned out to be.  Don’t get me wrong, the software did what it should have done when I bought it, whilst running Mountain Lion.  Then their marketing kicked in, banging on about how Parallels won’t work after each upgrade of OSX.  You can read some of that story here, so I won’t go over it again.

I finally upgraded to El Capitan and, boom, Parallels networking issues.  I tried a few fixes published on various sites, none of which worked.  Frankly, I refuse to give them any more cash, so – bye, bye Parallels, hello VirtualBox.

I’ll keep this thread up-to-date with anything relevant or potentially interesting that happens during the switch.

Installing VirtualBox & a Win7 Virtual Machine

There’s practically nothing to write about this; it was ridiculously easy.  VirtualBox installed without any issues at all.  I plugged the SuperDrive into the Mac, launched VirtualBox, went through the VM wizard and Win7 installed without any issues.

Thumbs up all round frankly.  And no cash changed hands!


MvvmCross iOS Support and Samples

A while ago Martijn van Dijk and I had a very brief chat about creating an MvvmCross iOS Support library for use with MvvmCross.  Just to add in some general usage bits and bobs to make iOS development with MvvmCross a little bit easier.  I had been creating some useful bits for my own projects and could definitely see the value in them for other people to use in their projects.  But … as ever, my time is stupidly short.

Turns out it’s not as short as I thought and to be honest it didn’t take much effort to pull some things out and wrap them up in a self-contained fashion.  So today I spent a bit of time doing just that.  The results are now available in the MvvmCross iOS Support GitHub repository along with a newly minted MvvmCross iOS Support sample application.

MvvmCross iOS Support

The library really only contains one “helper” element at the moment, in the shape of a pretty nifty presenter called MvxSidePanelsPresenter.  Mvx presenters are a kind of “glue class” that orchestrates what happens between a view model navigation request and the presentation of its view in the application UI.  So any call like ShowViewModel<TViewModel>() will eventually end up being processed by a presenter, which decides what to do with it.

The presenter provides 3 panels as view “targets”: a main central panel, a right panel and a left panel.  Where a view is placed is controlled by a class-level attribute.

Panel Presentation

    [Register("CenterPanelView")]
    [MvxPanelPresentation(MvxPanelEnum.Center, MvxPanelHintType.ActivePanel, true)]
    public class CenterPanelView : BaseViewController<CenterPanelViewModel>
    {
    }

So, to explain this example: the attribute tells the presenter that the view wants to be displayed in the center panel, as the active panel, and shown immediately.  If it requested MvxPanelEnum.Left, for instance, the view would be shown in the left-hand panel, and the left panel would also immediately slide into view.  Pretty neat.
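Nothing special is needed at the view model end to drive this; a plain navigation call does it.  A minimal sketch (the view model and view names here are invented for illustration):

    // Hypothetical example: if SettingsView is decorated with
    // [MvxPanelPresentation(MvxPanelEnum.Left, ...)], this call will show it
    // in the left panel and slide that panel into view.
    public class MenuViewModel : MvxViewModel
    {
        public void ShowSettings()
        {
            ShowViewModel<SettingsViewModel>();
        }
    }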

Combining a UISplitViewController for Master/Detail Views

Another neat feature of the presenter is that it can present views in a split view if the presentation attribute includes a specific request to do so (by specifying MvxSplitViewBehaviour) and the application is running on an iPad.  Where split view behaviours are added to the attribute but the application is running on an iPhone, these behaviours are ignored and the views are presented in whichever panel has been specified; so there is a graceful fallback to the default behaviour when needed.  You can specify the split view behaviour like this:

    [Register("MasterView")]
    [MvxPanelPresentation(MvxPanelEnum.Center, MvxPanelHintType.ActivePanel, true, MvxSplitViewBehaviour.Master)]
    public class MasterView : BaseViewController<MasterViewModel>
    {
    }

    [Register("DetailView")]
    [MvxPanelPresentation(MvxPanelEnum.Center, MvxPanelHintType.ActivePanel, true, MvxSplitViewBehaviour.Detail)]
    public class DetailView : BaseViewController<DetailViewModel>
    {
    }

The master view will be shown in the left-hand portion of the split view and any detail views will automatically be shown in the right-hand portion of the same split view.

Let me know if you find this stuff useful and, obviously, open pull requests or raise issues for any bugs.

You can see the code here on GitHub.  And you can read the more official documentation for this library on the MvvmCross documentation pages here.


SSD Windows 10 and The Dreaded INACCESSIBLE_BOOT_DEVICE

So, this has happened to me a couple of times now and frankly it’s a sham, it really is.  I honestly do not know the inner workings of how this situation occurs – I doubt many people outside of Microsoft truly understand it, since Windows isn’t open source – but I’m sure it’s catching lots of people out and causing all manner of headaches.

I made the simple mistake of hitting “sleep” instead of “shutdown” earlier today – nightmare ensues.  The machine would not wake from sleep, so I had to pull the plug (hold the power button down).  Not a pleasant thing to be doing.  I worry about the state of SSDs in situations like this and apparently so does Windows, but it’s not terribly smart at recovering from them, and this is where my beef is.

If you end up in this situation you are highly likely to end up in a boot -> failed boot (INACCESSIBLE_BOOT_DEVICE) -> restart -> repair cycle that frankly would continue until the planet vanishes.  Not helpful and … not entirely correct either.  Windows will insist on repeating this indefinitely.

What you need to do is boot into Safe Mode and all will be well.

  1. When you get to the Restart screen, hit Advanced options
  2. Select the “Troubleshoot” button
  3. Then select Advanced Options
  4. Then select Startup Settings and restart
  5. The screen will fade to almost invisible and then a list is shown
  6. Select 4 to Enable Safe Mode and restart

Windows should now boot into Safe Mode, and once it has you can safely reboot into Windows properly.  Why on earth it insists on cycling through a useless and ultimately pointless repair/restart process, I don’t know.


Abstracting APIs – Abstract Potentially Insane?

There’s no doubt about it, things that help are good.  This topic of abstraction came up in a chatroom the other day and I found myself advocating not abstracting the API in the client.  Having had more time to think about it, I finally worked out where my own position on it comes from.  It just crept up from my subconscious.

At the moment I’m writing my own mobile apps, consuming an API that I’ve also written and maintain for use by mobile apps and web sites.  So naturally, anything that helps is always worth considering.  The basic premise of my thinking is that in my particular situation, where I’m developing the API, dogfooding is the way to go.

Having to manually integrate with the API could be beneficial in a number of ways.  Having to deal with its security, data and inevitable idiosyncrasies can all potentially reveal design improvements.  If you immediately abstract your API behind someone else’s logic and expectations, you’ll miss these opportunities during development.  You could argue the reverse is also true: if Refit has trouble integrating with your API, that could also reveal other problems or design improvements.

The particular client abstraction we were discussing is Refit.  It’s really very cool, and if you aren’t in control of the API you’re interacting with it’s ideal; I’ll definitely be using it.  But I’m sure the authors of that API are at least testing it, if not also manually integrating with it at some point, for some of the reasons outlined above.
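For anyone who hasn’t seen Refit, the idea is that you declare the API surface as an annotated interface and it generates the HTTP plumbing for you.  A minimal sketch (the endpoint, interface and types here are invented for illustration):

    using System.Threading.Tasks;
    using Refit;

    // Hypothetical API surface, declared as a Refit interface
    public interface IPeopleApi
    {
        [Get("/people/{id}")]
        Task<Person> GetPersonAsync(int id);
    }

    // Refit builds the implementation at runtime:
    var api = RestService.For<IPeopleApi>("https://api.example.com");
    var person = await api.GetPersonAsync(42);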


View Model Value Change Tracking in MvvmCross & Xamarin

Change-tracking an object’s contents might at first seem like an easy or straightforward task, but there are lots of pitfalls and potential issues with many solutions.  What works in your situation might not work in others due to implementation details.

The most obvious use of change tracking is at the heart of many ORM frameworks.  These frameworks manage data, often through proxy classes, in order to track what needs to be written to the data store when inserts or updates are applied.  These are often very granular and can be significant performance drains if misused or used in situations when not really required.

Most solutions you’ll find simply add an IsDirty property to a base class or interface and then just set it to true in any property setter in order to mark the class as dirty.  Other solutions may be a little smarter and have a SetProperty() method that compares the new value to the current value and only executes the set if the value has actually changed.  Whilst this isn’t change tracking, it is at least efficient in data-bound view models.  An example method might look like this:

    protected void SetProperty<T>(ref T backingfield, T value, Expression<Func<T>> property)
    {
        if (Equals(backingfield, value))
        {
            return;
        }

        var mem = property.Body as MemberExpression;
        if (mem != null)
        {
            var prop = mem.Member as PropertyInfo;
            if (prop != null && prop.CanWrite)
            {
                // take advantage of any setter logic
                prop.SetValue(this, value);
            }
            else
            {
                // update backing store
                backingfield = value;
            }
        }

        RaisePropertyChanged(property);
    }

If you’re trying to compare against original values, you could start tracking each change to each property and work out whether the current value has been used at any point in the past, but this could very easily become a very bloated object graph.  You’re also still left in a situation where developers have to remember to call the SetProperty() method and not just set a property directly, or bypass the public class API altogether and set the private backing field, negating any setter logic anyway.  That said, if you don’t want property-changed code in all your setters, this can be a good approach, if a little more expensive.

Negating all of these issues in one solution is extremely hard but minimising them is a must.

My Solution

I wanted fairly simple IsDirty semantics at the class level.  I wanted my solution to just take care of itself, and I didn’t want to do too much work to get it functioning reliably.  I also wanted it to be efficient and smart enough to accurately report that the class data really had changed and wasn’t the same as the original values.  My requirements were:

  • Entirely PCL (Portable Class Library) based implementation
  • Behaves in data binding situations
  • Attribute-driven Opt-In
  • Complex reference & value types included in object graph
  • Public only tracking (this is a view model after all!)
  • Filtering properties on Type or Name (string fuzzy matched)
  • iOS, Android, WP & MvvmCross friendly
  • As performance & memory efficient as possible
  • Class level monitoring only
  • Minimal extra library dependencies

So, given these change tracking requirements, I have coded my solution and I’m including all the code in a GitHub Gist below.

Usage

I wanted this to be as simple as possible and to pollute the code as little as possible.  I think it’s OK at the moment, but it will no doubt change and hopefully get better.

Property Attribute Example

If you use the attribute, only the marked properties will be monitored; all other properties will be ignored.

    public class PersonViewModel : MvxViewModel
    {
        // ignored
        public string Name { get; set; }

        [IsDirtyMonitoring]
        public string DisplayName { get; set; }
    }

IsMonitoring Example

To start the monitoring, at some point after you can guarantee that all the values you regard as “clean” are available in the object, set IsMonitoring to true.  This forces the calculation of an MD5 value over all the monitored properties; any subsequent call to the IsDirty getter will compare this hash value to a newly created one built from the current data.

You can reset the clean hash at any point by simply setting IsMonitoring to true again.

    public async Task Init(int id)
    {
        var data = await _myapiClient.GetPersonAsync(id);
        Mapper.Map(data, this);
        IsMonitoring = true;
    }

Dependencies & PCL

The basic design has two dependencies.  Since I was already using Newtonsoft.Json in my project I have made use of parts of it, and in order to implement the MD5 object hashing I have had to add a library – xBrainLab.Security.Cryptography, available on CodePlex.  Weighing in at 9,728 bytes, this isn’t really much of a concern.

Basic Change Tracking Design

The basic design is that when IsMonitoring is set to true, the object immediately generates an MD5 hash of all the relevant properties (by default all public properties minus the filters, or just the explicitly opted-in ones).  Each subsequent call to IsDirty then generates a new MD5 hash of the current values and compares it to the stored “clean hash”.

Hash Values (MD5)

Hashing in PCLs isn’t the norm, so we have to use a library.  When generating the hash, the object is first serialised to a memory stream using a JSON.NET serialiser.  This is then MD5 hashed and either stored or compared in order to calculate whether the object has changed since monitoring began.

You can also set a newly minted clean hash by simply setting IsMonitoring to true on live objects.
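Stripped of the detail, the heart of the design looks something like this.  To be clear, this is an illustrative sketch rather than the code from the Gist; GetHash(), the serialiser settings field and the exact xBrainLab call shown are assumptions:

    // Illustrative sketch only – the real implementation lives in the Gist.
    // Assumes System.Linq, System.Text, Newtonsoft.Json and
    // xBrainLab.Security.Cryptography are referenced.
    private byte[] _cleanHash;

    public bool IsMonitoring
    {
        get { return _cleanHash != null; }
        set { _cleanHash = value ? GetHash() : null; }  // (re)baseline the "clean" state
    }

    public bool IsDirty
    {
        // compare the stored clean hash against a hash of the current values
        get { return IsMonitoring && !GetHash().SequenceEqual(_cleanHash); }
    }

    private byte[] GetHash()
    {
        // serialise the tracked public properties with JSON.NET (the custom
        // contract resolver applies the opt-in/filtering rules), then MD5 the bytes
        var json = JsonConvert.SerializeObject(this, _serialiserSettings);
        return MD5.GetHash(Encoding.UTF8.GetBytes(json));  // assumed xBrainLab API
    }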

Class Serialisation

The serialisation is all handled by JSON.NET and its built-in serialiser.  I did try using the BSON serialiser, as it might have offered a performance gain, but it seemed to error too often so I may have to revisit that.  There is a custom contract resolver that controls what is or isn’t to be tracked.  If a class has any properties marked with the IsDirtyMonitoringAttribute then only those marked properties will be tracked.  If none of the properties are marked with the attribute then all public properties will be included in the change tracking, minus any properties matching the filter types or the property name masks in the name filter.
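Roughly, the resolver idea looks like the sketch below, building on JSON.NET’s DefaultContractResolver (again an illustration; the class name and the IsFilteredOut helper are invented, not the Gist’s):

    // Opt-in resolver: if any property carries the attribute, serialise (and
    // therefore track) only those; otherwise take all public properties minus
    // the filtered ones.
    public class IsDirtyContractResolver : DefaultContractResolver
    {
        protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
        {
            var all = base.CreateProperties(type, memberSerialization);

            var optIn = all.Where(p => p.AttributeProvider
                .GetAttributes(typeof(IsDirtyMonitoringAttribute), true)
                .Any()).ToList();

            if (optIn.Count > 0)
            {
                return optIn;
            }

            return all.Where(p => !IsFilteredOut(p)).ToList();  // IsFilteredOut: assumed type/name filters
        }
    }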

Performance

It’s not particularly scientific, or probably that accurate, but I have added a unit test that monitors its execution time.  The test performs 100,000 IsDirty calls in a loop and will fail if it takes more than a second to complete; based on the built-in Stopwatch class in System.Diagnostics, it’s about as accurate as I need.  Considering this will be called pretty infrequently, and usually on a background thread, this performance doesn’t concern me.  I have now implemented this on a few classes in an iOS application and I haven’t noticed any runtime issues (as yet!).
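Something along these lines, assuming NUnit and the PersonViewModel from the earlier example (a sketch, not the actual test from the Gist):

    [Test]
    public void IsDirty_OneHundredThousandCalls_CompletesWithinOneSecond()
    {
        var vm = new PersonViewModel { DisplayName = "clean" };
        vm.IsMonitoring = true;

        var stopwatch = Stopwatch.StartNew();
        for (var i = 0; i < 100000; i++)
        {
            var unused = vm.IsDirty;  // exercise the hash-and-compare path
        }
        stopwatch.Stop();

        Assert.Less(stopwatch.ElapsedMilliseconds, 1000);
    }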

Anyway, enough wibbling as I should be out Christmas shopping … here’s the code … any comments or improvements really welcome.


Whitelist SSL Certificates in .NET for Xamarin / MvvmCross Apps

I have a fairly complex development environment for my current project and needed to allow some self-signed certificates to pass through the SSL certificate chaining process when calling from mobile platforms.  Obviously, this potentially breaks a major security feature, so it needs to be done safely.

In a ‘normal’ MvvmCross or Xamarin.Forms application you want to include as much as possible in the core PCL project for portability and code-sharing.  Unfortunately there is no PCL implementation of the ServicePointManager class, but it is defined in both Xamarin.iOS and Xamarin.Android.  Strangely, this is another instance where Windows Phone is problematic, as the class isn’t defined at all in WP.

Given that I have another shared library hosting my API client that is compiled into platform-specific versions, I can do this there and have it shared.  Otherwise your best option is to include this code in the platform-specific projects, somewhere like the Setup.cs class.

Bypass Certificate Checking

    #if DEBUG
    ServicePointManager.ServerCertificateValidationCallback += new RemoteCertificateValidationCallback(
        (sender, certificate, chain, policyErrors) =>
        {
            // certificates that pass normal validation are always accepted
            if (policyErrors == SslPolicyErrors.None)
            {
                return true;
            }

            // otherwise only allow the known development certificate through
            var certThumbprint = string.Join("", certificate.GetCertHash().Select(h => h.ToString("X2")));
            var thumbprint = "<YOUR CERT THUMBPRINT>";
            if (certThumbprint == thumbprint)
            {
                return true;
            }

            return false;
        });
    #endif
