BeagleBone Black Serial Debug Connection

Connector 5 - Connected

The BeagleBone Black has a built-in debug header allowing you to connect via a serial terminal emulator. Not only does it allow you to view low-level firmware messages pre-boot and post-shutdown/reboot, but it also acts as a fully interactive Linux console once the operating system has started. This post demonstrates, with pictures and software links, how to quickly set up such a connection.


First of all you need a TTL-UART serial adapter/cable to connect the device to your PC, where you’ll run a serial terminal emulator program to view messages and enter commands. There are many available, so make sure you get the right type. For debugging hardware we need an old-fashioned “TTL” style chipset, which is not the same as a more modern RS232 serial adapter! These adapters have up to six cables, but only three will be used: GND (Ground), RXD (Receive Data) and TXD (Transmit Data).

Connector 1 - USB to UART Adapter

They also come in different voltages, 5V or 3.3V, and with varying cable-to-pin mappings. Basically you need an adapter with a USB port and serial pins/cables, with a small chip inside to do the conversion. TTL-compatible adapters are commonly based on the CP210x (e.g. CP2101 or the newer CP2102) or a popular chipset known as “FTDI”, made by “Future Technology Devices International”. If like me you wish to buy an adapter to use with other devices, then you’ll also want to find one with jumper wires at the end rather than a fixed connector. This allows you to easily re-map the pins from the adapter to your device as required.

The device I bought was a cheap CP2102 mini adapter with jumper wires included. In addition I purchased a USB extension cable to make it easier to work with. It has seven pins; the extra pin is a choice between a 5V or 3.3V supply. But in this case we don’t need power, just ground, so you can ignore (or disconnect) cables from all pins of the adapter shown in the photo other than GND, RXD and TXD. By the way, if we did need power it’s important to mention again that these small devices often need the lower-powered 3.3V connection, not 5V. Accidentally connecting the higher voltage pin to a lower-powered device would most likely damage it. So be careful and double-check all wiring, board and adapter documentation before powering on or connecting the USB cable!

Serial Connection

First of all, locate the debug header pins on the BeagleBone Black. They’re on the right side (looking towards the Ethernet port), in the middle just inside the long cape connector on that side. You can see this near the top of the photo below: the six pins sticking up, with the upside-down letters “J1” below the first pin.

Connector 2 - BeagleBone Debug Header

Take note of where J1 is located; the other pins are J2, J3, J4, J5 and J6, going from right to left in the photo above. As documented in the BeagleBone Black Serial support wiki, the pin numbers we need to connect are as follows:

  • J1 = GND (Ground)
  • J4 = RXD (Receive Data)
  • J5 = TXD (Transmit Data)

Shutdown and disconnect all power (DC and USB) from the BeagleBone Black before starting to connect the wires!

Connector 3 - Jumper Wire Pins 1

Depending on your USB to serial adapter, you may or may not have to cross over the RX and TX connections. The original wiki documentation from BeagleBone suggests it is not necessary because the board handles this itself, which worked for me. Also, you should ignore the cable colours mentioned in the wiki, because your specific adapter/cable bundle is likely different.

Connector 4 - Jumper Wire Pins 2

When ready, you can connect the adapter to your PC.

Driver Installation

If you are lucky your version of Windows or other operating system will already have detected a USB Serial Port device and configured it. However in my case, on 64-bit Windows 8.1, it was not detected. To check, open “Device Manager”. Unknown hardware will usually be listed under “Other devices”; successfully installed devices will be under “Ports (COM & LPT)”. To open Device Manager on Windows 8.1, right-click the desktop start button then choose the “Device Manager” menu option; on Windows 7 and earlier versions look for advanced hardware settings in the Control Panel.

Driver 1 - Unknown Device

This can be a pain sometimes because these cheap OEM devices commonly have no drivers or support web site. However, by searching the web for “CP2102 Windows Drivers” I came across the Silicon Labs CP210x USB to UART Bridge VCP Drivers page, with signed drivers for all major operating systems and processors. This download came not only with the signed drivers but also a convenient installation executable.

Driver 2 - Install Driver

By the way, if you didn’t have an installer you would have to right-click the device in Device Manager, choose “Update Driver Software” then browse for the “.inf” file in the downloaded driver directory.

Driver 3 - Installed Successfully

Luckily this driver worked perfectly with my device. Maybe it is from Silicon Labs, maybe it’s a copy. Anyway it works 🙂 Open Device Manager again to check the device is now recognized and working. When it is running you’ll see the communications (COM) port number next to it in brackets. Make a note of this number as you’ll need it later when connecting with a terminal program, e.g. mine is COM4 as shown below.

Driver 4 - COM Port Number Visible

Now we have to check the settings to make sure they are compatible. Open the properties of the device and edit the port settings so that the speed is 115200 baud, with 8 data bits, no parity, 1 stop bit and no flow control:

Driver 5 - Port Settings

Click OK to save the settings and close Device Manager.

Terminal Connection

Now we have all the hardware connected and drivers installed, we just need to run any serial-capable terminal program to get access to the debug console of the BeagleBone Black. There are many available, but I chose one of the most popular free programs, PuTTY. The latest version can be downloaded from the official PuTTY web site. Just download and run PUTTY.EXE.

In the start-up “Session” category, choose the “Serial” connection type (not the default SSH or Telnet) then enter the communications port number and set the speed to 115200:

Terminal 1 - PuTTY Connect

Click the “Connection-Serial” category then check the settings are the same as the serial port: “bits=8”, “parity=none”, “stop bits=1” and “handshake=none”.

Terminal 2 - PuTTY Connect Serial Settings

Finally click “Open” to launch the terminal window. You are now ready and waiting for data on the serial cable! Power-on the BeagleBone Black to see the first console messages appear during BIOS/firmware start-up, or hit ENTER if it is already running to see a command prompt response.

Terminal 3 - Open Then Power On BeagleBone

The operating system (in my case the default Angstrom distribution booting off the eMMC) will continue to load until the usual Linux terminal login prompt appears.

Terminal 4 - Booted To Logon Prompt

You’re now connected to a debug console via serial cable, without any network, SSH server or USB cable required! You can continue to logon and enter console commands or just leave it running to see firmware debug messages as they are generated.
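As an aside, if you prefer a command line over the PuTTY GUI, its companion tool plink can open the same serial session with the settings used above (COM4 is this example’s port number; substitute your own):

```
plink -serial COM4 -sercfg 115200,8,n,1,N
```

The -sercfg argument packs the speed, data bits, parity, stop bits and flow control into one comma-separated list.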


The serial connector of the BeagleBone Black is fairly straightforward to connect with a generic TTL-UART adapter, or even easier if you find the right hard-wired connector. Although you may not normally need it (with the default SSH/network management capability already available via USB), it’s certainly useful to know how to connect it and have the cable ready in case you do need it.

It can also be used to correct HDMI settings in case your monitor is not compatible with the standard EDID data detection mechanism, to diagnose the detection phase and tweak the resolutions. It may also be useful in locations where you don’t have or don’t want wired or wireless network connectivity but still wish to monitor or issue commands. In theory it could also be used to transmit small amounts of data.

Many (maybe even most) people with BeagleBone Black devices like to tinker with hardware and settings. If you’re one of those people I’d recommend completing this exercise in case you need it in the future and so you know your toolbox includes the right debug adapter/cable.

Setting-Up Raspberry Pi for Headless Mode with X11VNC

If you own a Raspberry Pi and want to use it in “headless” mode (without a display) you’ll probably want more than just an SSH command shell to administer it.

Many people install TightVNC; however, this doesn’t provide connectivity to the root display interface, only to virtual secondary interfaces. Only the root interface behaves like a real remote-control session, and the secondary interfaces have limitations, e.g. you cannot connect before a user has logged on and the desktop has loaded.

After searching around a bit I found a better solution which provides root display connectivity, called X11VNC. Here is how you install and configure it:

  1. Logon as the default user “pi”.
  2. Download and install X11VNC using the following command:
    sudo apt-get install x11vnc
  3. Set the password required for VNC clients to connect by entering the following command. It’s important you do NOT run this command elevated (do not use sudo) because it writes the encrypted password in the current user’s home directory, which must be the default user if you want to connect before logon and desktop start-up:
    x11vnc -storepasswd
  4. Create/edit the VNC start-up configuration file, stored in your home directory:
    nano ~/.xsessionrc
  5. Enter/edit the text as follows:
    # Start X11VNC
    x11vnc -bg -nevershared -forever -tightfilexfer -usepw -display :0
  6. Press CTRL+O then ENTER to write (save) the file then CTRL+X to exit.
  7. Make the file executable:
    chmod 775 ~/.xsessionrc
  8. Edit the Raspberry Pi boot configuration to set HDMI as the standard output and set the default resolution, used when no physical display is detected. If you do not do this the default is the analogue output, which has an extremely low resolution.
    sudo nano /boot/config.txt
  9. Set the following line to force HDMI to be the only detected connection, i.e. disable the analogue video default:
    hdmi_force_hotplug=1
  10. Set the HDMI “group” and “mode” numbers to select the default resolution. You can find these codes on Wikipedia or via internet searches. To start with, a couple of useful modes:
    1. XGA 1024×768@60Hz:
      hdmi_group=2
      hdmi_mode=16
    2. Full HD 1920×1080@60Hz:
      hdmi_group=1
      hdmi_mode=16
  11. There are other useful settings here you may wish to play with, such as overscan and CPU overclocking. 800MHz works well, at least with my board, which has custom heat sinks stuck on top of the three main chips.
  12. Press CTRL+O then ENTER to write (save) the file then CTRL+X to exit.
  13. Reboot:
    sudo reboot
  14. Test connectivity with any VNC client, e.g. TightVNC client (even though the server does not satisfy our needs it’s still a great client).
  15. Power-off and disconnect the monitor. Power-on and after a few seconds you should still be able to connect via VNC with the correct display resolution.
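Steps 4 to 7 can also be scripted in one go. This sketch writes the start-up file into a scratch directory so you can inspect the result first; on the Pi you would target your real home directory instead:

```shell
# Create the X11VNC start-up file (steps 4-7), here in a scratch directory.
DEMO=$(mktemp -d)
cat > "$DEMO/.xsessionrc" <<'EOF'
# Start X11VNC
x11vnc -bg -nevershared -forever -tightfilexfer -usepw -display :0
EOF
# Make the file executable, as in step 7.
chmod 775 "$DEMO/.xsessionrc"
ls -l "$DEMO/.xsessionrc"
```

Remember x11vnc must already be installed and a password stored with `x11vnc -storepasswd` before this file will work at logon.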

Running a Raspberry Pi in headless mode really demonstrates the power of these little devices. You can hide them away with no monitor or keyboard, just provide a little power and network connectivity.


Add a USB wireless adapter to eliminate the network cable. Add a battery pack to go mobile. You can run them anywhere, even on the go! I’d recommend setting up headless mode even if you don’t plan to use it immediately, because it also provides a great way to quickly get GUI access from your other systems. You can even download VNC clients for mobile phones, allowing you to fully control them from anywhere.

De-Bricking a Buffalo WiFi Router with an Arduino

My wireless router (Buffalo WZR-D1800H) failed the other day, when an apparent firmware bug caused it to go into a continuous reboot cycle. After going through the various reset options it seemed like it had been turned into an expensive plastic brick. Luckily there was a last-resort method to gain debug access, see what was going on and hopefully bring it back to life. Internally there is a basic serial port; you just have to crack open the case and connect it to a PC somehow. However the serial port is not the usual RS232 standard, but a simplified TTL variant used more commonly for hardware debugging and firmware updates. That was the theory; next was to try it out…

Opening The Case

The pictures below are from the DD-WRT forum post by “Magnetron1.1”, which shows how to open the case:

Tools, Opening 1, Opening 2, Opening 3, Jumper

Connecting to a PC via an Arduino

After realizing I couldn’t just hook this up to my PC via a USB to serial (RS232) adapter, and before giving up and ordering a USB to TTL adapter, I stumbled upon this Electrical Engineering blog post that suggested my Arduino could do the same job. This was confirmed directly on the Arduino forum as being a valid configuration (one that wouldn’t break my Arduino). Here’s how it looks when it is connected properly (using the “tri-state reset method” from the Arduino forum):

Hardware 1, Hardware 2, Hardware 3, Hardware 4, Hardware 5, Hardware 6, Hardware 7, Hardware 8, Hardware 9, Hardware 10

At first I only had the router→Arduino TX→RX and RX→TX connections and the Arduino Reset→Ground. But I was getting corrupt messages on the serial console of the PC. To correct this I found a few things were necessary:

  1. The baud rate must match the router and there is no flow control so they must be in sync. Setting the port speed to 115200 fixed most of the corruption.
  2. The ground cable totally eliminated the rest of the corruption of the messages coming back from the router (black cable in photo above).
  3. Although the output (received data) was now perfect, I still had to hit each key more than once to get it to go through. That may be something to do with my serial console program or other buffer settings. But since I only wanted to issue some simple commands I just put up with it.

At this point I could see the problem, “broken firmware”, followed by a reboot:

Serial 1 Failed Boot

Fixing The Router

The DD-WRT firmware start-up sequence will temporarily start with an IP address of (regardless of your configuration) then check for the auto-pair (“AOSS”) button press. When pressed it will attempt to download a file called “firmware.ram” from via TFTP. So you set up a free TFTP server, then download and rename the firmware you want to try. I did that, it downloaded, but failed to start. Not good… As a last resort, there is a command line you can use to fix it. To get into it you have to hit CTRL+C and hold it during the reboot:

Serial 2 Break

This breaks you into the “CFE” boot loader command line. The next step is to issue some commands to clear the non-volatile RAM (NVRAM) then reload the flash from the downloaded copy. But this failed as there was some issue with the download mechanism or format of the internal storage:

Serial 3 Auto Flash Download Fail

After a lot of searching through a sparsely documented command system, I worked out a command to download the same firmware again from my TFTP server. I wanted to see what the “timeout” was about (because all timeout settings were correct on the server). And to my amazement this forced download fixed the problem:

    flash -noheader -size= nflash1.trx

Serial 4 Manual Flash Download Success

After that I was able to get to the default HTTP configuration page at the default user IP address of and reconfigure my router. It works fine now.


Forcing a manual download with the “size” parameter specified worked around the failure of the “automatic” firmware recovery download. There must be some bug in the boot loader with auto-download, perhaps not flushing a download buffer or something like that.

I still think DD-WRT and open source routers/devices are great, but I’d recommend sticking to the older/stable firmware.

If anyone can see why the serial input (keyboard) was out of sync (having to hit each key more than once) even though the output (received data) was perfect, please tell me! Maybe it should be cabled differently or different settings are required? I plan to look into the TTL serial standard more closely sometime.

One thing I avoided was connecting the 3.3v power pin, because according to the Arduino forum that’s the most common way to damage an Arduino.

2014.04.21 UPDATE: I recently found the following article which has CTS and RTS connected to the 5V and RESET instead of a loopback jumper; perhaps that’s the synchronization fix? Needs further experimentation…

References & Acknowledgements

Thanks to “Magnetron1.1” on the DD-WRT forum for providing the information to help me fix my router. Visit the DD-WRT organization web site to download custom software for your compatible router! Thanks to Arduino for making such a universal device!

Extracting VMware Web Service Proxies for .NET

The current VMware 5.1 SDK no longer redistributes pre-built .NET proxies, just the source WSDL. If you follow the documentation you are directed to complete a complicated procedure to generate and sign your own proxies.

Fortunately, there is an easier way to extract official signed assemblies from a VMware PowerCLI (PowerShell extensions) installation. This is much better than the previous method as you are using the original DLLs provided (and supported) by VMware.

  1. Download and install the current VMware PowerCLI.
  2. The PowerCLI setup also installs the VMware VIX infrastructure components.
  3. Although the PowerCLI documentation suggests you reference VMware.Vim.dll in the program files infrastructure path, we won’t do this because some additional files are missing and we want a full stand-alone copy. The VMware VIX setup installs everything primarily in the GAC, so we will run one command to extract the files we need from the GAC and copy them into the source tree so they can be referenced locally and checked in. Change the “.” at the end of the following command to your own target path, otherwise it copies into the current directory:
    for /f %i in ('dir %WinDir%\assembly\GAC_MSIL\*Vim*.dll /s /b') do copy %i .
  4. This will find several files, of which the following are necessary to call VMware web services:
  5. Add a reference to VMware.Vim.dll then use the proxy classes there to call the web services, according to the VMware Web Services SDK documentation. Note that the main VIM assembly will load the relevant VimService## proxy according to the version it detects when connecting to the server (factory pattern), so you need them all including the XML serialization assemblies (it’s hard-coded, they are not optional)!

As a best practice I like to check in all stand-alone dependencies into a “Dependencies” folder of the solution/checked-in source tree. Make sure you also redistribute these files with your application. I prefer this “local copy” method because it enables your code to compile on any developer PC or build server and be deployed as a stand-alone package.

Versioning Visual Studio Solutions The Easy Way

Versioning your assemblies on each test or release build is an essential best practice to ensure identifiable releases (for testing and support) and comply with Windows Installer standards if you produce MSIs (for example using WIX).

However many developers won’t have access to a fully set-up and maintained “continuous integration” build environment, which typically provides versioning support built-in. And in any case there are the interim builds you may want to make quickly on a local developer PC, for example during WIX development; for convenience and speed they probably won’t be done via a build server.

This brings a requirement for a simple local method to quickly increment the build number across all projects and various configuration files. Here is a solution involving just a PowerShell script and a text file, which you can add to your solution. You could also integrate this with a build server, getting the best of both approaches.

Firstly, here are the files you add to the root of your solution (in “Solution Items”):

  • Version.txt   – Stores the major and minor version parts which you will manually increment according to your target product and sprint/release versions, plus the build and revision parts which are automatically set by the script to the year, month, day and build number.
  • Version.ps1 – PowerShell script which reads the old Version.txt, generates a new version number then writes it back. Then it edits all necessary project files (e.g. AssemblyInfo.cs, Global.wxi, App.config, Web.config).
  • Version.cmd – Windows Command script file which calls the PowerShell script with the necessary parameters. Makes it easy for the developer or build script to invoke the PowerShell script.

So you start by creating a Version.txt file with your major and minor versions and any initial build and revision, for example:

    2.5.0.0

Then create the Version.ps1 containing the following script, editing or extending the final subroutine calls to cover all of your relevant project files:

Write-Output "Version"
Write-Output "======="
Write-Output "Increments the version number stored in the Version.txt file,"
Write-Output "then applies it to all relevant source files in the solution."
Write-Output "Build is set to the UTC year and month in ""yyMM"" format."
Write-Output "Revision is set to the UTC day * 1000 plus a three digit incrementing number." 
Write-Output ""

trap
{
    Write-Error $_
    exit 1
}

function Update-Version ([Version]$Version)
{
    $date = (Get-Date).ToUniversalTime()
    $newBuild = $date.ToString("yyMM")
    $dayRevisionMin = $date.Day * 1000
    if (($Version.Build -lt $newBuild) -or ($Version.Revision -lt $dayRevisionMin)) { $newRevision = $dayRevisionMin + 1 } else { $newRevision = $Version.Revision + 1 }
    New-Object -TypeName System.Version -ArgumentList $Version.Major, $Version.Minor, $newBuild, $newRevision
}

function Get-VersionFile ([String]$File)
{
    Write-Host ("Reading version file " + $File)
    $versionString = [System.IO.File]::ReadAllText($File).Trim()
    New-Object -TypeName System.Version -ArgumentList $versionString
}

function Set-VersionFile ([String]$File, [Version]$Version)
{
    Write-Host ("Writing version file " + $File)
    [System.IO.File]::WriteAllText($File, $Version.ToString())
}

function Set-VersionInAssemblyInfo ([String]$File, [Version]$Version)
{
    Write-Host ("Setting version in assembly info file " + $File)
    $contents = [System.IO.File]::ReadAllText($File)
    $contents = [RegEx]::Replace($contents, "(AssemblyVersion\("")(?:\d+\.\d+\.\d+\.\d+)(""\))", ("`${1}" + $Version.ToString() + "`${2}"))
    $contents = [RegEx]::Replace($contents, "(AssemblyFileVersion\("")(?:\d+\.\d+\.\d+\.\d+)(""\))", ("`${1}" + $Version.ToString() + "`${2}"))
    [System.IO.File]::WriteAllText($File, $contents)
}

function Set-VersionInWixGlobal ([String]$File, [Version]$Version)
{
    Write-Host ("Setting version in WIX global file " + $File)
    $contents = [System.IO.File]::ReadAllText($File)
    $contents = [RegEx]::Replace($contents, "(\<\?define\s*ProductVersion\s*=\s*"")(?:\d+\.\d+\.\d+\.\d+)(""\s*\?\>)", ("`${1}" + $Version.ToString() + "`${2}"))
    [System.IO.File]::WriteAllText($File, $contents)
}

function Set-VersionInAssemblyReference ([String]$File, [String]$AssemblyName, [Version]$Version)
{
    Write-Host ("Setting version in assembly references of " + $File)
    $contents = [System.IO.File]::ReadAllText($File)
    $contents = [RegEx]::Replace($contents, "([""&gt;](?:\S+,\s+){0,1}" + $AssemblyName + ",\s+Version=)(?:\d+\.\d+\.\d+\.\d+)([,""&lt;])", ("`${1}" + $Version.ToString() + "`${2}"))
    [System.IO.File]::WriteAllText($File, $contents)
}

function Set-VersionInBindingRedirect ([String]$File, [String]$AssemblyName, [Version]$Version)
{
    Write-Host ("Setting version in binding redirects of " + $File)
    $contents = [System.IO.File]::ReadAllText($File)
    $oldVersionMax = New-Object -TypeName "System.Version" -ArgumentList $Version.Major, $Version.Minor, $Version.Build, ($Version.Revision - 1)
    $pattern = "([\s\S]*?<assemblyIdentity\s+name=""" + $AssemblyName + """[\s\S]+?/>[\s\S]*?<bindingRedirect\s+oldVersion=""\d+\.\d+\.\d+\.\d+-)(?:\d+\.\d+\.\d+\.\d+)(""\s+newVersion="")(?:\d+\.\d+\.\d+\.\d+)(""[\s\S]*?/>)"
    $contents = [RegEx]::Replace($contents, $pattern, ("`${1}" + $oldVersionMax.ToString() + "`${2}" + $Version.ToString() + "`${3}"))
    [System.IO.File]::WriteAllText($File, $contents)
}

$scriptDirectory =  [System.IO.Path]::GetDirectoryName($MyInvocation.MyCommand.Definition)

$versionFilePath = $scriptDirectory + "\Version.txt"
$version = Get-VersionFile -File $versionFilePath
Write-Host ("Old Version: " + $version.ToString())

$newVersion = Update-Version -Version $version
Write-Host ("New Version: " + $newVersion.ToString())
Set-VersionFile -File $versionFilePath -Version $newVersion

$otherVersionFilePath = $scriptDirectory + "\Dependencies\Other.Component.Version.txt"
$otherVersion = Get-VersionFile -File $otherVersionFilePath
Write-Host ("Other Component Reference Version: " + $otherVersion.ToString())

Set-VersionInAssemblyInfo -File ($scriptDirectory + "\MyProduct.Assembly1\Properties\AssemblyInfo.cs") -Version $newVersion
Set-VersionInAssemblyInfo -File ($scriptDirectory + "\MyProduct.Assembly2\Properties\AssemblyInfo.cs") -Version $newVersion
Set-VersionInAssemblyInfo -File ($scriptDirectory + "\MyProduct.Assembly3\Properties\AssemblyInfo.cs") -Version $newVersion
Set-VersionInAssemblyReference -File ($scriptDirectory + "\MyProduct.Assembly3\App.config") -AssemblyName "MyProduct.Assembly2" -Version $newVersion
Set-VersionInAssemblyInfo -File ($scriptDirectory + "\MyProduct.Assembly4\Properties\AssemblyInfo.cs") -Version $newVersion
Set-VersionInBindingRedirect -File ($scriptDirectory + "\MyProduct.Assembly4\Web.config") -AssemblyName "Other.Component" -Version $otherVersion
Set-VersionInWixGlobal -File ($scriptDirectory + "\MyProduct.Setup\Global.wxi") -Version $newVersion

exit 0

Lastly, Version.cmd invokes the PowerShell Version.ps1 correctly from the command line. For example, I usually open a command prompt using the Visual Studio Power Tools toolbox menu item, then run either my build script (which calls Version.cmd) or just Version.cmd to increment the version alone.

@powershell -File "%~dp0Version.ps1"

Most of this script remains static. The functions provided in this example will edit the “AssemblyVersion” attribute in an “AssemblyInfo.cs” or a WIX “ProductVersion” variable in a WIX file (typically stored in a shared include file named Global.wxi). Finally the last lines of the script are customized for your actual solution, calling the relevant functions to update the version number in files of your solution.
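As an illustration of the replace-by-pattern approach (Python here just for a quick demonstration; the attribute line is a typical AssemblyInfo.cs entry, not taken from a real project):

```python
import re

line = '[assembly: AssemblyVersion("2.5.1303.23002")]'

# Keep the attribute text either side, swap only the four-part version number.
updated = re.sub(r'(AssemblyVersion\(")\d+\.\d+\.\d+\.\d+("\))',
                 lambda m: m.group(1) + "2.5.1303.23003" + m.group(2),
                 line)
print(updated)  # [assembly: AssemblyVersion("2.5.1303.23003")]
```

The capture groups preserve the surrounding syntax so the same pattern works regardless of the version value currently in the file.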

Now each time you build your solution you have an identifiable version which is exactly the same across all assemblies and even the MSI. For example, the third build of the above solution on the 23rd March 2013 will produce the following version:

    2.5.1303.23003

This is broken down as follows:

  • 2.5 comes from the static major and minor version you typed in the Version.txt.
  • 1303 comes from the year and month, 2013 and 03 for March.
  • 23003 is the day, the 23rd, multiplied by 1000 to give us three predictable digits for the build counter; 003 is the third build on this day, with up to 999 builds possible before clashing with the next day (which should never occur).
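The build and revision arithmetic can be sketched outside PowerShell too; this hypothetical Python helper mirrors the Update-Version logic above:

```python
from datetime import date

def next_version(major, minor, old_build, old_revision, today):
    """Next version using the yyMM build / day*1000+counter revision scheme."""
    new_build = int(today.strftime("%y%m"))   # e.g. March 2013 -> 1303
    day_revision_min = today.day * 1000       # e.g. the 23rd -> 23000
    if old_build < new_build or old_revision < day_revision_min:
        new_revision = day_revision_min + 1   # first build of the day
    else:
        new_revision = old_revision + 1       # increment within the same day
    return f"{major}.{minor}.{new_build}.{new_revision}"

# Third build on 23 March 2013, starting the day from 2.5.1303.23002:
print(next_version(2, 5, 1303, 23002, date(2013, 3, 23)))  # -> 2.5.1303.23003
```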

Following this pattern, you could add the version files to every solution you create and maintain them simply by adding a line or two each time you add projects, calling the script from any build system you may have. If you have new types of files to edit, add your own functions and call them at the end. The general approach of regular expression replacement should suffice for most use cases. More complex requirements could use an XPath query to edit configuration files where it is not possible to identify the version by string patterns alone (when a structured XML path is more appropriate to accurately locate the version entries).
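For instance, a structured edit could look like this sketch (the configuration fragment and the “ProductVersion” key are hypothetical, not part of the script above):

```python
import xml.etree.ElementTree as ET

# Hypothetical config fragment where the version lives in an attribute value.
config = """<configuration>
  <appSettings>
    <add key="ProductVersion" value="2.5.1303.23002" />
  </appSettings>
</configuration>"""

root = ET.fromstring(config)
# Locate the entry by its key, not by matching a version-shaped string.
node = root.find(".//add[@key='ProductVersion']")
node.set("value", "2.5.1303.23003")
updated = ET.tostring(root, encoding="unicode")
print(updated)
```

Locating the node by a structured path avoids accidentally rewriting any other dotted number that happens to look like a version.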

I like this solution because it allows you to use best-practice versioning without the overhead. And even if you do have a proper continuous build system in place, using this in place of a proprietary versioning add-on or Visual Studio extension is simpler, more flexible (e.g. usable offline or when the build server is out of service) and easier to maintain, reducing your 3rd party product dependencies.

How to Setup Windows 8 with UEFI BIOS in UEFI Mode

People who bought new computers in the last six months probably have a UEFI-compatible BIOS; however, it is probably not enabled. Unfortunately the Windows setup procedure is obscure: it defaults to non-UEFI mode, and you have to manually activate UEFI in your BIOS settings first.

The simple way to tell if you have UEFI boot mode or not is to watch Windows starting up. If you see the Windows logo when it boots, you are in old fashioned BIOS boot mode. However if you see your manufacturer or custom BIOS logo remaining whilst Windows starts you are in UEFI mode!

4 UEFI boots with BIOS logo instead of Windows 2
4 UEFI boots with BIOS logo instead of Windows 1

Why do this? Mainly because it enables exclusive UEFI and Windows boot features, such as:

  • Booting from drives larger than 2TB (for example 3TB drives are now cheap).
  • Super-fast start mode, booting in a few seconds including hardware (BIOS) initialization.
  • Additional protection for BitLocker (should you have a Professional edition Windows license), e.g. safe PIN-only start-up (a new feature in Windows 8).

Other benefits are less well documented, but basically it’s going to give you the best hardware integration possible (depending on your manufacturer, BIOS upgrade and device driver versions).

So it’s something which is only “nice to have” if you’ve already finished installing your PC, however a must for people with large drives or corporations wishing to maximize the security of their computers. Most importantly it’s something you have to get right at the start, because you have to reinstall your PC completely (no upgrade possible) to switch from BIOS to UEFI. Shame Microsoft didn’t think of that, but then they didn’t allow 32bit to 64bit upgrades either, which is disappointing because they should promote support for new hardware technologies.


  1. Make sure your Windows setup USB stick is formatted as FAT32. The UEFI boot of Windows setup does not support NTFS! Again strange, because it is able to boot in UEFI mode from NTFS on the hard drive after installation!
    1 UEFI USB Disk must be FAT32
    Then copy all the files from the Windows setup ISO or DVD onto the USB stick, including most importantly the “EFI” subdirectory and boot files. You can double-click an ISO in an existing Windows 8 machine to mount it as a drive letter, for easy access to copy. You cannot use the Windows 7 USB Boot Tool from Microsoft.
  2. Ensure your BIOS has been configured for UEFI boot. Here’s an example of the necessary settings on my ASUS P9X79 motherboard:
    2 UEFI BIOS settings 1
    2 UEFI BIOS settings 2
    2 UEFI BIOS settings 3
    2 UEFI BIOS settings 4
    2 UEFI BIOS settings 5
    2 UEFI BIOS settings 6
  3. Boot Windows setup in UEFI mode, causing Windows to automatically install with UEFI boot drivers. To do this you have to select your USB device in UEFI mode. The ASUS BIOS shows two entries when a device supports UEFI, one with and one without. So make sure you choose the one with UEFI in the name when two entries are displayed! You usually select the start-up device from the BIOS setup menu or a mini-start-up menu (e.g. F8 on ASUS machines):
    3 Force UEFI boot of Windows setup USB from F8 menu
    3 Force UEFI boot of Windows setup USB from BIOS config
  4. Optional: Convert your disk to GPT (GUID Partition Table). Do this if your disk is empty, you want to boot from a disk more than 2TB or you don’t mind losing any data on the disk.
    Select advanced options from the Windows 8 setup main menu.
    Open a command prompt, enter “diskpart”.
    Enter “list disk” then identify the disk to install on (the size is a good guide) and its number.
    Enter “select disk #” where # is the disk number.
    When you have selected the correct disk (be 100% sure), enter “clean”, which deletes EVERYTHING on that disk (!).
    Finally, the most important part, enter “convert gpt” to prepare the disk with a GUID Partition Table, which the UEFI boot mode can use to access the full (greater than 2TB) storage.
    Exit and reboot, selecting UEFI mode again, following on through the normal installation to setup Windows in UEFI mode 🙂
    Note we do not bother creating a primary partition or formatting because Windows will do that, we just need the bare disk with the right partition table.
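The diskpart steps above, run from the setup command prompt, look something like the session below (disk number 0 is just an example; be absolutely certain you have selected the right disk before entering “clean”):

```
X:\Sources> diskpart

DISKPART> list disk
DISKPART> select disk 0
DISKPART> clean
DISKPART> convert gpt
DISKPART> exit
```

After “convert gpt” the disk has only the bare GUID Partition Table; Windows setup creates the EFI system partition and the Windows partition itself when you continue the installation.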


This could have been easier. I think the main problem is that there is no visible information to tell the user their machine has UEFI capability, nor any option to explicitly install Windows in UEFI mode. I also wonder whether it would be technically possible to write the correct data to the disk for a UEFI installation even from a non-UEFI boot stick. For example, Microsoft could detect your hardware and then show a warning message that you are not installing Windows in the optimal configuration. They could have done that with 32-bit installations on 64-bit hardware too.

So you probably don’t have UEFI installed, but following a straightforward procedure as demonstrated here could bring some benefits for you. With the price of hard disks falling and the annoying 2TB limit in a traditional (non-UEFI) Windows installation, I think more people will be searching for a UEFI solution now. Further, now that Windows 8 is generally available, manufacturers appear to be rolling out UEFI as a standard. Actually UEFI was supported in Windows 7, but nobody really knew about it or had hardware for it. Now is a good time to adopt this technology in its second generation (Windows stable) form.

Further Reference

Installing Windows on UEFI Systems

Firmware and Boot Environment

Script To Get Specific DHCP Subnet

Sometimes, especially in corporate networks, you may find yourself in a situation where you receive multiple DHCP offers for different subnets from one or more DHCP servers. The worst case I experienced included different firewall rules depending on which DHCP subnet you were issued, making network connectivity like a lottery!

Unfortunately there is no way to tell the Windows DHCP Client to select specific DHCP offers in preference to others. You can add firewall filters but that only works if the DHCP offers are received from entirely different DHCP servers and does not work for everybody. So I wrote this script to check the current lease subnet and go into a release and renew loop until a DHCP lease on the desired subnet is achieved:

“Get Specific DHCP Subnet.cmd”

@echo off
rem *** Parameters
rem Could pass as arguments but we want to double-click this
rem as an icon with fixed parameters for a known subnet.
set AdapterName=Local Area Connection
set RequiredSubnet=10.20.30.

rem *** Get current lease information
:loop
ipconfig /all >"%~dp0DHCP Lease.txt"

rem *** Loop until correct subnet is provided...
type "%~dp0DHCP Lease.txt"
find /c "%RequiredSubnet%" "%~dp0DHCP Lease.txt" >NUL
if %errorlevel% == 0 goto found
echo *** Wrong subnet or no lease.
echo *** Trying again to get lease on subnet "%RequiredSubnet%"...
ipconfig /release "%AdapterName%" >NUL
ipconfig /renew "%AdapterName%" >NUL
goto loop

rem *** Exit successfully when found
:found
echo *** DHCP lease on subnet "%RequiredSubnet%" obtained!
exit /b 0


To use this you need to know the name of the network adapter on your machine and the first part of the subnet you want (usually the first three parts of the four-part dotted IP address, e.g. “1.2.3.” from “1.2.3.4”). You could leave the adapter name blank to release all network adapters, but that just causes a delay and could interrupt other valid connections on a machine with multiple network adapters. It’s best to give your adapters a name which is easy to remember, so go to the control panel, network and sharing center, adapter settings, select the adapter then hit F2 (or right-click then choose “Rename”), then type a good name, e.g. “Company”.
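You can also list and rename adapters from an elevated command prompt instead of the control panel; the “Company” name below is just an example:

```
rem List adapter names (see the "Interface Name" column)
netsh interface show interface

rem Rename the adapter to something memorable
netsh interface set interface name="Local Area Connection" newname="Company"
```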

Once you have set the parameters correctly in the script, you can just double-click it from Windows Explorer when you need it. Note the use of the “%~dp0” path prefixes and “” quotes throughout to ensure this script works from any location and with spaces in the path or connection name.
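As a quick illustration of the “%~dp0” expansion (the path shown is just an example):

```
rem If this script is saved as "C:\Tools\Get Specific DHCP Subnet.cmd":
rem   %0    expands to the script path as it was invoked
rem   %~dp0 expands to the drive and directory, e.g. "C:\Tools\"
rem So "%~dp0DHCP Lease.txt" always refers to a file next to the script,
rem regardless of the current working directory.
echo %~dp0
```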

The script also creates a temporary file called “DHCP Lease.txt” in the same directory as the script, which is useful for reference too. You may want to alter this script in two ways: first, change it so the parameters are passed as command line arguments (though then it doesn’t work so easily from the desktop); second, delete the temporary file when finished or store it in a different place such as “%TEMP%”.

Remember quality software? A bit more time upfront saves a ton of trouble later! Agile is cool but not an excuse for slack development.