
A few weeks back there was a Hackaday article about someone who had built themselves a Mandelbrot Set generator using an FPGA to accelerate the iterative calculations this requires.  Fractal generation can be a very CPU-intensive process, but it lends itself reasonably well to being parallelized since each pixel can be calculated independently of all others.  There are various PC-based apps that will do this on a GPU or using the CPU for floating point, such as Ultra Fractal, among many others.

The version in the article targets a much older FPGA than the one I have on hand (the author used a Spartan-3, while mine is an Artix-7, both from Xilinx).  The author had written their version entirely in VHDL, but I thought it a little strange to write the code at that low a level when Xilinx now offers their High-Level Synthesis tool for free. HLS allows you to write an algorithm in C, C++, or SystemC, and the tool will analyze the code, convert it to optimized and pipelined Verilog or VHDL for you, and then package it up as a black-box “Intellectual Property” block that you can easily interface with the rest of an FPGA design.

I was curious about mucking around with HLS due to a pending career move I’m making into a team that specializes in FPGA and ASIC IP design, so I decided I would reinvent the wheel again and try this out for myself.

The HLS part:

The good news:  The C code for the algorithm that generates the Mandelbrot set values for given X/Y coordinates is almost laughably trivial, and there is pseudo-code describing it on the Wikipedia page.  There is even some additional code there that optimizes the algorithm to bail out in certain cases before performing lots of expensive calculations. Translating all of this into working C code that compiles in the HLS tool was a fairly quick and painless process.
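For reference, the core of it looks roughly like the following.  This is a minimal sketch of the standard escape-time algorithm along the lines of the Wikipedia pseudo-code, not my exact HLS source; the early bail-out tests for the main cardioid and the period-2 bulb are the kind of “additional code” mentioned above.

// Sketch of the escape-time algorithm (a reconstruction, not the exact project code).
// Returns the iteration count at which (x0, y0) escapes, or max_iter if it never does.
unsigned mandelbrot_iters(double x0, double y0, unsigned max_iter)
{
    // Early bail-out: points inside the main cardioid or the period-2 bulb
    // never escape, so skip the expensive loop entirely for them.
    double q = (x0 - 0.25) * (x0 - 0.25) + y0 * y0;
    if (q * (q + (x0 - 0.25)) <= 0.25 * y0 * y0)       // main cardioid
        return max_iter;
    if ((x0 + 1.0) * (x0 + 1.0) + y0 * y0 <= 0.0625)   // period-2 bulb
        return max_iter;

    double x = 0.0, y = 0.0;
    unsigned i = 0;
    while (x * x + y * y <= 4.0 && i < max_iter) {     // |z| <= 2 escape test
        double xtemp = x * x - y * y + x0;             // z = z^2 + c  (real part)
        y = 2.0 * x * y + y0;                          //              (imaginary part)
        x = xtemp;
        i++;
    }
    return i;
}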


The bad news:  Getting good output from the HLS tool that maximally utilizes the capabilities of the FPGA requires a lot of finicky experimentation with your original code.  Some of the key problems to solve are:

  • Floating-point math can be done by the FPGA, but it takes a lot more room on the chip and is slower than integer math.  The algorithm is much faster and more hardware-efficient if you use fixed-point math instead, which represents fractional values with plain integers and so sidesteps the complexities of floating point.  However, you then need to make more design decisions: how many bits to allocate to the integer portion, how many to the fractional portion, how the system should behave in an overflow/underflow condition (e.g. “saturate” at the min/max values rather than rolling over), and then modify the code to use the proper types.  (A rough sketch of what this looks like in HLS code follows after this list.)  Using more bits than needed slows things down and wastes hardware; not using enough limits the depth to which you can zoom in on the fractal.
  • Which loops can be unrolled, and how far? By unrolling loops, multiple iterations of a loop can execute in parallel, provided the FPGA can fit all of the required hardware.  However, since this algorithm can potentially run through thousands of iterations per pixel, you’d never be able to fit that many copies on the chip.  (For this particular design, unrolling the main loop by a factor of four turned out to be all I could fit and still include the optimization code discussed above.)
  • How does memory need to be organized?  If you have four pixels all being calculated at once and they need to write the results to memory for later retrieval, you don’t want all four instances trying to write to a single memory port at the same time.  So now you have to explain to the compiler that you want the memory access split across four different memories as well, with the addresses interleaved to look like one big memory.
  • Which parts of the code should the compiler attempt to pipeline? (i.e. be working on a later stage of the problem simultaneously with an earlier part to get more results out per unit time.)  With this algorithm, unfortunately, there’s not all that much deep pipelining that can be done, since each iteration working on a single pixel is dependent on the results of the prior iteration, implying that an iteration must be completed before you can move on to the next one.  However, the tool does do a pretty good job of scheduling the operations within each iteration to take the minimum possible time by doing non-dependent sub-operations simultaneously.
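To give a flavor of what these decisions look like in source form, here is a rough sketch using the Xilinx ap_fixed type and a couple of the relevant HLS pragmas.  The bit widths, the unroll factor of four, the array sizes, and the names are all illustrative guesses for this write-up, not necessarily the values used in the final design.

#include <ap_fixed.h>

// Illustrative fixed-point type: 32 bits total, 4 integer bits (including sign),
// round-to-nearest, and saturate on overflow instead of wrapping around.
typedef ap_fixed<32, 4, AP_RND, AP_SAT> coord_t;

#define MAX_ITER 2000

void calc_row(coord_t x0[1024], coord_t y0, unsigned short out[1024])
{
    // Split the arrays across four physical memories so the four unrolled
    // copies of the loop body are not fighting over a single RAM port.
    #pragma HLS ARRAY_PARTITION variable=x0  cyclic factor=4
    #pragma HLS ARRAY_PARTITION variable=out cyclic factor=4

    pixel_loop:
    for (int px = 0; px < 1024; px++) {
        // Replicate the per-pixel engine four times.
        #pragma HLS UNROLL factor=4
        coord_t x = 0, y = 0;
        unsigned short i = 0;
        while ((x * x + y * y) <= coord_t(4) && i < MAX_ITER) {
            // Each trip depends on the previous one, so this inner loop cannot be
            // deeply pipelined; the tool just schedules the independent multiplies
            // and adds within a single iteration in parallel.
            coord_t xt = x * x - y * y + x0[px];
            y = coord_t(2) * x * y + y0;
            x = xt;
            i++;
        }
        out[px] = i;
    }
}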

This optimization portion of the coding took a lot longer than I expected it to.  I spent at least a few weeks on it, an hour or two at a time, dealing with some mildly frustrating tool bugs, poor documentation, and other things I would have expected to be better in a tool that’s been around for several years, as this one has, before I finally had a design with a reasonable amount of parallel computation capability that would reach the desired level of performance.  Long story short, the sales pitch for HLS is that you don’t have to be a hardware engineer: you just plop your C algorithm into the tool and out comes beautifully optimized hardware.  Nothing could be further from the truth.  Getting a good result requires an intimate knowledge of the algorithm you are attempting to accelerate, of how it can be massaged to suit the needs of the tool (rather than vice versa), and of what kind of hardware you want to get out of it and what will fit into your target FPGA.

Wrapping a microprocessor around the calculator:

The next portion of the project was to get the data this calculator produces out to something you can see on the screen of a PC.  The only (easy) way on and off the FPGA circuit board is via a USB port.  This is a bit like the XMODEM transfers of yore, sending bits over a telephone-line modem back to the PC, if somewhat faster – around 3Mbps, which isn’t network speed but is certainly better than a phone line. The common solution, which I used here, is to build a computer-in-the-FPGA: one that has direct access to the memory the Mandelbrot calculator stores its results in, and that can program the calculator with the coordinates the user wants to focus on.


This icky looking thing is the Xilinx Vivado tool’s block-diagram description of said computer, with a MicroBlaze CPU, the “Calc” module output from the HLS tool, a serial port emulating an old 16550 UART chip, external memory interface, clock generator, interrupt controller and various other necessary bits and pieces.

This is all slowly and painstakingly compiled into a final bitfile, which is used to program the FPGA with the final hardware design.  Slow, because the tools have to figure out just where on the chip everything needs to be placed and how all of the connections must be routed so that every internal signal reaches its destination in time for the upcoming clock pulse, which is what ultimately limits the clock frequency of the final design.  With a validated hardware design in hand, writing the software part of the project can actually begin.

Software for the FPGA:

This part wasn’t too bad, if slightly tedious.  Xilinx provides an Eclipse-based SDK for writing C or C++ code to drive your custom computer.  A chip like this isn’t beefy enough to run Linux, so what you get instead is a relatively barebones set of drivers and headers to control the hardware you’ve created, or an instance of FreeRTOS (which I haven’t tried yet).

The basic design of this program is fairly simple: after setting up the serial port, it announces its presence to the connected PC and waits for any of a few simple commands that set up the starting values for the calculator.  Once it receives the “go” command from the PC, it gives the calculator a little kick to do its thing, and waits for it to signal that the calculation has completed.  A basic “fully zoomed out” Mandelbrot set image at 1024×768, with 2000 maximum iterations per pixel, takes less than a second with the calculator IP running at 160MHz.  Then the CPU sends the output bytes back to the PC in chunks of about 32K, with a simple CRC calculated and tacked onto the end for some basic transmission error detection by the receiver.
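As a rough illustration of that last part, the chunk-and-checksum loop amounts to something like the code below.  This is a sketch rather than the real firmware: uart_send_byte() is a placeholder standing in for whatever the Xilinx UART driver actually provides, and the CRC-16 shown (appended to each chunk here) is just one plausible choice of “simple CRC”, not necessarily the one I used.

#include <stdint.h>
#include <stddef.h>

extern void uart_send_byte(uint8_t b);   /* placeholder for the real UART driver call */

#define CHUNK_SIZE 32768u

/* Simple bitwise CRC-16 (CCITT polynomial), shown purely for illustration. */
static uint16_t crc16_update(uint16_t crc, uint8_t data)
{
    crc ^= (uint16_t)data << 8;
    for (int i = 0; i < 8; i++)
        crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021) : (uint16_t)(crc << 1);
    return crc;
}

/* Send the result buffer to the PC in ~32K chunks, each followed by a 16-bit
 * CRC so the receiver can do basic transmission error detection. */
void send_results(const uint8_t *buf, size_t len)
{
    while (len > 0) {
        size_t n = (len < CHUNK_SIZE) ? len : CHUNK_SIZE;
        uint16_t crc = 0xFFFF;
        for (size_t i = 0; i < n; i++) {
            uart_send_byte(buf[i]);
            crc = crc16_update(crc, buf[i]);
        }
        uart_send_byte((uint8_t)(crc >> 8));
        uart_send_byte((uint8_t)(crc & 0xFF));
        buf += n;
        len -= n;
    }
}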

My first working stab at this sent the raw data back, which is 1.5MBytes for a 1024x768x16-bit array.  At 3Mbits per second over the serial line, waiting for the image data to arrive was still annoyingly slow (5-6 seconds), so my most recent iteration uses the open-source LZ4 “fast compression” algorithm to squish the raw data before sending.  In most cases the algorithm achieves anywhere from a 2:1 to a 30:1 compression ratio at a speed that is better than break-even: the time saved in transmission more than offsets the time spent compressing, so it’s definitely a win over sending the data raw. The FPGA CPU is only running at 80MHz, which is pitiful by modern standards, so it’s no speed demon even with “fast” compression, and the physical limitations of this particular FPGA keep it from running any faster.  Still, the speed increase in transmission allows you to zoom around the image much more interactively.
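The compression call itself is about as simple as it gets; the LZ4 C library boils down to a single function per frame, roughly like this (buffer handling simplified compared to what the real firmware needs):

#include <lz4.h>   /* the open-source LZ4 "fast compression" library */

/* Sketch: compress the raw framebuffer before sending it over the serial link.
 * Returns the compressed size in bytes, or 0 on failure. */
int compress_frame(const char *raw, int raw_len, char *out, int out_capacity)
{
    /* LZ4_compressBound() gives the worst-case compressed size, so the caller
     * can size 'out' safely even for completely incompressible data. */
    if (out_capacity < LZ4_compressBound(raw_len))
        return 0;
    return LZ4_compress_default(raw, out, raw_len, out_capacity);
}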

Software for the PC:

This is my first WPF application, as the last thing I wrote in C# used Windows Forms.  There were some differences to get used to but they’re not terribly important here.  The .NET Framework has an API for everything, which made the app relatively simple to finish outside of my own incompetence and re-learning curve.  There’s a class to manage the serial port and buffer incoming characters, a class to let you write pixels directly into an onscreen image control, classes (from NuGet) to handle the decompression of the LZ4-compressed data… not too bad all things considered.

Coloring Mandelbrot images is also something for which there are many different algorithms to choose from; I started with a fairly simple linear mapping between white, green, and black. Here’s an example of an output image from the entire system:



After completing a fourth batch of SP-4 SCSI cards for some nice patient folks who still needed one, I wanted to find a new electronics project to keep me busy.  I’ve had an old TRS-80 Model III rotting at the bottom of a closet for about the last twenty years or so which I originally purchased secondhand from a guy at work.  Even though it was stored in dry conditions, I’ve read a fair number of horror stories about ancient electrolytic capacitors that go up in smoke when you actually power these things up after sitting idle for so long.  So my new project was to “re-cap”, or replace all the failure-prone capacitors, in the old grey box.

  • Find the Repair Manual. Several years ago I found a large online trove of TRS-80 book scans and virtually all of the interesting software images for the system, which I promptly robo-downloaded and now keep safe for posterity. You never know when sites like this are going to fold up shop unexpectedly, so my feeling is that it’s usually best to grab stuff while you can in such cases.  The archive includes the Model III Technical manual among many other goodies.
  • Take The Thing Apart.  The manual includes good disassembly instructions, which is nice to have because these things are a mass of hand-soldered wires, sheet metal shielding, and odd-sized screws.  They weren’t exactly designed for manufacturing efficiency and near-snap-together construction like most modern PCs.  There’s also a 12″ CRT tube for the screen in there, which is highly fragile and could easily be broken by whacking it against some other internal component.
  • Get All The PCBs out. (Printed circuit boards, not polychlorinated biphenyls.) To remove the old components and replace them requires being able to get at the undersides to desolder the leads.  Below are six of the seven boards in the Model III.  Clockwise from the top, these are:
    • The RS-232 port (a whole board!)
    • The CPU board, which includes three ROMs at lower left, 48K of DRAM at lower right, the Z80 CPU (bottom center), and the video generator (much of the upper section of the board, including 1K of SRAM and the character generator ROM, the largish chip at top center).
    • The floppy disk controller, with edge connectors at the top for internal drives and at the bottom for external ones. The large chip is the WD1793 controller.
    • Two identical power supplies putting out +12, -12, and +5V.  One powers the CPU board, the CRT control board (not shown), and the serial port.  The second powers the floppy controller and the internal floppy drives.  Each supply is rated for about 40W of output.
  • Figure Out Replacement Parts:  The caps in question are all those little blue or black round cans sprinkled across the boards, and a few others. The caps most prone to “sudden spontaneous disassembly” are the two rectangular green ones on the power supplies – the leftmost board and the identical one in the middle.  These keep RF noise generated inside the computer from getting back out onto the power lines.  Figuring out the right replacements was a fairly time-consuming task. The Technical Manual actually includes complete schematics and part lists for the entire computer, something that’s completely unheard of today.  (It’s an enormously interesting and educational read, as it explains the theory of operation behind all of the major sections of the design.)  However, the part lists are not 100% accurate, and occasional board revisions resulted in changes to what’s actually populated.  So it was necessary to cross-reference the listed parts with what was actually there, and note the changes.  Then it was a long session with the Digi-Key online part finder to choose appropriate replacements.  The most important thing to ensure is that the approximate size of the part you’re choosing matches what was there previously so that it fits in the available space – very frequently, the same nominal capacitor (in uF capacity and rated voltage) will be available in a wide range of physical sizes, some of which might be far too small or too large to be a suitable replacement.  Furthermore, since many of these are in the power supplies and exposed to mains voltage and its fluctuations, choosing a suitable replacement which can handle significant ripple current and is also rated for multiple thousands of hours of useful life is an important consideration. Ideally, I’d like the system to be able to go another twenty or thirty years (of light usage) before needing to consider repeating the exercise, sometime around my eightieth birthday. 🙂
  • Order Parts, and Install Them.  Big box o’ caps arrives in the mail, and then it’s just a long session with the solder wick and the iron, removing the old and reinstalling the new.

Interestingly, this turned out to be the easy part of the rehab, because several additional age-related issues (if not really wear per se) made themselves evident during the reassembly.

First, the floppy controller and the serial port board are connected to the CPU board via some very early examples of flat flex cables.  These are ubiquitous today and are ideal for interconnection of parts in mobile phones when space considerations require components to be split across multiple boards (or not part of a board at all, like iPhone camera modules, for example.)  However, these flex cables were really primitive – they amounted to flat strips of aluminum or tin (something shiny but not copper) adhered to a base plastic layer with some kind of adhesive and then sandwiched under another layer of plastic to protect the conductors, except at the ends where they connect to the circuit boards.  While it was necessary to remove these cables in order to take the boards out, putting them back in proved to be problematic.  The adhesive used to hold the conducting strips in place had completely disintegrated with age, and any attempt to reinsert them in their connecting terminals just resulted in the whole thing delaminating and leaving me with something like shredded tin foil. 😦  Luckily, you can buy modern off-the-shelf replacements made from Kapton (polyimide) from Digi-Key with nice robust solder tabs on the ends, so this wasn’t too big a deal.

The other minor repair that turned out to be an unexpected pain was in the latch of one of the floppy drives.  As soon as I had the whole thing back together and working, the latch of floppy #2 IMMEDIATELY broke when I opened the drive, and came off in my hand!  Ugh.  This turns out to be a known weak point of these Texas Peripherals drives. (TP was evidently a joint manufacturing venture between Tandy and Tandon.)  There is a little bracket that connects the floppy door to the internal mechanism that clamps down on the disk, and the whole thing relies on some tiny plastic pins that have to withstand what is likely several pounds of spring force from the latch trying to re-open itself.  The plastic of the latch obviously got brittle with age and gave way instantly.

Some folks on the VCFED.ORG (Vintage Computer Forum) site recommended epoxy to hold the weak piece back in place, but several tries at this proved that it was never going to withstand the stress.  This is where having a 3D printer and a micrometer comes in handy. I’m kinda proud of myself, because after giving up on the epoxy route, I whipped up a replacement piece in Fusion 360 by measuring the original, and printed it out.  Worked perfectly the first time.  The STL files are now posted on Thingiverse, but if you want a few printed and shipped just drop me a note in the comments.

It looks big in the screenshot, but it’s only about 3cm across, and the holes for the pins are only about 2.5mm.  Incidentally, the yellow plane cutting the thing down the middle is something I put there to allow me to export it in halves and print it in two pieces. This way, the outside edges would lay flat on the printer bed without having to print with “overhangs.” 3D printers don’t do so well printing parts hanging in empty space, which is what would’ve happened if I printed it all in one piece.  The two halves glue together with acrylic cement in just a few seconds. It’ll hold up just fine, since there are two big screws holding the flat part down through the oval holes, and there’s no stress at all placed on the glued seam.

Here are the pins, the original piece after the ill-fated epoxying, and the final 3D-printed replacement after installation, just visible toward the top of the interior behind the open door. It looks a little warped because of the funny angle and the close-up shot; it’s actually pretty straight in reality.  And yes, the glued gap between the two halves looks a little ugly, but it’s quite strong and invisible inside the drive, whereas the epoxied original didn’t withstand more than a few seconds in place before going Sproiinnggg.


So, that’s it!  Model III refurbishment complete!

In my last post I discussed creating some low-cost Azure virtual infrastructure to handle failover duties for Active Directory domain controller roles using OpenVPN to create a site-to-site encrypted tunnel.

Managing some of these roles, including OpenVPN Access Server’s management website and Remote Desktop Gateway, requires exposing a web site to the public internet and using SSL (https://…) to encrypt the traffic between the client and the server.  Out of the box, OpenVPN AS runs on Linux with a self-signed certificate from OpenVPN, but this inevitably causes security warnings from any web browser, since OpenVPN is not a recognized public CA and trust is therefore unverified.  In order to rid yourself of the warnings, but more importantly, to ensure that the web site you think you’re connecting to really is that site and not an impostor, you need to either add the OpenVPN CA to the Windows certificate store on every machine you might connect from, or set up your own CA on Windows Server.

An Enterprise PKI in Windows Server creates a self-signed root certificate which can then easily be distributed to all machines in the domain with group policy, allowing any child certificates created by that root authority to be automatically trusted by web browsers.  Running your own CA also creates other benefits such as allowing a higher degree of security and trust between domain controllers and clients, limiting remote access to clients with a valid certificate, and so forth.

Unfortunately, it turned out not to be such a slam dunk to generate a web server certificate for the Linux VM running the OpenVPN Access Server, due to a combination of tools and processes that weren’t really designed to work together seamlessly, and it becomes more complex if you want the cert to be valid for multiple names for the same site, e.g. “https://internalname” plus one or more external DNS names.  I ultimately found two ways to get the Windows CA to issue certs of this sort:

The All-Windows method:

  1. Using the Certificate Templates tool on the CA machine, make a duplicate of the stock Web Server template, give it a new name, and enable the checkbox that allows export of the private key.  This is a must, since we need to be able to extract the private key from a Windows certificate store later.  Under the Security tab, ensure that domain admins, or another suitably restricted user group, is able to read, write, and enroll this certificate type.  Auto-enrollment is not recommended here.
  2. In the CA management snap-in, add the new template you just created to the list of available templates, and restart the CA so that it will be available for issuance.
  3. Make an .INF file that acts as the template for the certificate request. This process is described here and here, the latter providing an example INF with multiple Subject Alternative Names. Also note that you can delete parts of the sample INF as mentioned in the comments if your CA is set up as an Enterprise CA rather than Standalone, and also assuming you are running on Windows Server 2016 rather than an older version.
  4. Run
    certreq.exe -new yourinf.inf output.req

    This step, critically, creates both the private and public keys that the certificate will use, and creates the signed request which we will next send to the CA to generate as a child of the CA’s root certificate.

  5. Replace yourCAserver in the command below with the CA server’s network name, and CA_name with the name visible in the Certificate Authority management snap-in.  If you omit this parameter, you should get a pop-up asking you to select the CA from a list, which works just as well.
    certreq -submit -config "yourCAserver\CA_name" "output.req" "output.cer"
  6. You should get back a request ID (typically a number), and the message “Certificate retrieved(Issued) Issued” and the files output.cer and output.rsp (response) should have been created in the directory where you ran certreq.
  7. Another critical step: You must run
    certreq -accept output.rsp

    This step installs the certificate (temporarily) in the Local Machine’s “Personal” or “My” certificate store, with the private key. As I discovered the hard way, simply double clicking on the .cer file and attempting to import it or export it only results in the public key being available. In other words, the .cer file can be freely distributed (and will be, by the web server), but we need to have the private key as well in order to be able to use the certificate to decrypt the incoming traffic to the site later.  Running certreq with the -accept command is what causes the private key to be installed in a location where it can then be exported.

  8. Run “mmc” and use Ctrl-M to choose the Certificates snap-in.  At the prompt, select the Local Computer store rather than the user’s store.  Open up the “Personal” folder within the store and select Certificates. You should now be able to locate the newly-generated certificate issued by your CA within the list.  However, we’ve got it on the wrong machine – now we need to put it in a format where we can move it over to the Linux machine.
  9. Highlight the certificate, then choose All Tasks->Export from the right-click menu.  Select “Yes” on the option to export the private key, choose the PFX file type, and select the box to export all extended properties.  Also, select the option to protect the exported certificate with a password, rather than a user/group.  For the filename, just choose a path and filename; the .PFX extension will be added automatically.
  10. On completion of the wizard you will now have the cert, including its private key, in a .pfx file in the selected directory.  At this point you can delete the certificate from the Certificates snap-in and close it; you won’t need to come back here.
  11. Now you need to have the Windows version of OpenSSL installed.  The latest, at the time of writing, is available here, although you may want to check if a newer version is available.
  12. From the command prompt, change to the location where you extracted the OpenSSL binaries and run the following command to extract the private key from the certificate:
    openssl pkcs12 -in yourcert.pfx -nocerts -out key.pem
  13. You will be asked for the password you created during the export step, and then you will be prompted to create another password (twice) to encrypt the key.pem file as it’s created.
  14. Now run one more command:
    openssl rsa -in key.pem -out server.key
  15. You will be prompted for the password of the key file, and the final result is the actual private key in a raw, unprotected format.  Take very good care of this file, and do not send it over unencrypted channels as you transfer it to the Linux machine.  Delete it as soon as practical.

If you are using the Linux OpenVPN Access Server, you can now upload your certs and the private key through its admin web interface (Export your domain’s CA certificate from the Trusted Root Authorities of any machine that received it via group policy, or from the CA management snap-in, and also upload the .crt file and the server.key you just created.)  The admin web interface will show you the validation results of your certs and key to ensure that the chaining is correct and that the private key matches the public key provided in the certificate, plus the names of any alternate subject names you may have provided.  After installation, you should now be able to browse the admin interface without errors, and with the lock icons that signal that the https:// connection is using a trusted certificate (assuming that your browser trusts your local domain’s root certificate, which is a manual option in third party browsers like Firefox, and built-in to Microsoft’s browsers like Edge and IE.)

The second technique: Create the cert request on the Linux machine and issue it from the Windows CA.

This is actually the simpler technique overall, and also avoids ever having to transport the private key between a Windows machine where the issuance takes place and the server it’s ultimately used on.

In general, you want to follow the processes documented here.  The .cnf file you create in this process is the equivalent of the Windows .INF file we created in the previous procedure.  The private key is created locally to a .key file, and then a .CSR file is produced with the certificate request.

After creating the CSR, rather than self-signing it on the Linux machine (which would still be untrusted by most browsers), you move the .CSR file to the Windows machine and pick things up at step 5 above.  Move the .cer and .rsp files back to the Linux machine, and then you can install the .crt, the CA’s .crt, and the private key using the OpenVPN admin scripts per their documentation, or into whatever web server package you may be using.  For OpenVPN AS, specifically, the sudo commands are:

./sacli --key "cs.priv_key" --value_file "private.key" ConfigPut
./sacli --key "cs.cert" --value_file "" ConfigPut
./sacli --key "cs.ca_bundle" --value_file "intermediary_bundle.pem" ConfigPut
./sacli start

You may need to convert your private root CA certificate to PEM format as noted in the OpenVPN documentation, where they suggest some tools that can accomplish that.

I use the term “home” loosely.

My idea of a home network may be a little different from yours – back in 2003, when Microsoft released the first version of Windows Small Business Server, they handed out free CDs and licenses to anyone in the company who wanted one.


About as great an example of Microsoft product naming cliches as has ever existed.

In hindsight, it seems like a bit of a commitment – Server Management has, over the long haul, consumed a lot of hours that probably could have been spent more productively elsewhere.  However, I took the bait. Although the SBS edition is long gone, I’ve been steadily upgrading Windows Server over the years.  As a result, my home network is more like a small business, partially for the added security and control over system maintenance that kind of infrastructure can bring to the table, and also because it’s like “continuing ed” for Microsoft product familiarity. Perhaps most importantly, keeping it all running smoothly is, for me anyway, a slightly sick and twisted Business Simulator videogame.  I have domain controllers, a variety of group policies applied (which was actually a great way to keep my kids from doing really dumb things when they were small), a Remote Desktop gateway for access to home machines from offsite, and an OpenVPN server.

A lot of the roles in an Active Directory Domain will be more robust if you have additional domain controllers to handle failover duty if the primary goes down.  Some of the typical server roles such as DNS, DHCP, etc., are usually handled by the router/wifi access point you get from your ISP, but once you go the domain route some of these jobs are better handled by a more powerful computer running Windows Server.  One slightly out-of-date server machine is about all I’m willing to invest in the way of physical computing infrastructure, though, so being able to deploy some of these backup and failover roles in the cloud offers a good opportunity to increase the reliability of the overall system with no additional outlay of space, electricity, or money on my part.

“Free Azure,” you say? Well, with certain MSDN account types, you get $150 worth of monthly credit to play around with and develop on Azure, which is actually enough to set up quite a bit of low-cost infrastructure. You won’t be running GPU workloads or 8-core virtual machines, but setting up a few low-memory, low-disk, single-core virtual machines on a virtual network and keeping them backed up for safety can easily be accomplished “for free” with that kind of allowance, with change to spare.  However, the main trick is how to seamlessly connect the private home LAN with the Azure machines over the public internet and still maintain the security of your private information.

The answer is nothing new to network administrators:  This is the classic scenario for a site-to-site VPN (virtual private network), frequently used to connect a branch office to the “headquarters network” over the public internet, using an encrypted tunnel. The traffic in transit remains securely encrypted once it leaves the building, while the machines at either end see each other as residing on a (mildly laggy) transparent extension of the local network.

The standard Azure answer to this scenario is to set up an IPSEC/L2TP tunnel using Windows Server at the local end and a “network gateway” at the Azure end. Setting up the required local server behind a typical home NAT appliance isn’t supported, although I hear it’s possible if you forward the right ports.  I didn’t really want to go this route for a different reason, though: if the whole point of the exercise is to keep things on the network running smoothly when the onsite server machine (and any VMs it may host) goes down, then the VPN would go down with it unless it was running on yet another physical machine, which is exactly what I’m trying to avoid needing.  Luckily, my home router runs the third-party ASUSWRT “Merlin” Linux firmware, which is powerful enough to run a variety of useful extra features, including both an OpenVPN server (for external connections inbound to my network when I’m not home) and a simultaneous OpenVPN client to connect the tunnel to a corresponding OpenVPN server sitting in my private network in Azure.

For the Azure end, instead of the standard Network Gateway, there is a very convenient alternative: the OpenVPN Access Server appliance that’s available for point-and-click setup in Azure.  It runs quite efficiently on a single core under Ubuntu Xenial Linux with only 0.75GB of memory and a small disk. Even better, there’s no charge for the software itself if you’re using only one or two simultaneous connections, and in the site-to-site case you’ll only ever be using one. The total cost to run this server in the cloud full time is only a few bucks a month.


The setup is fairly simple and wizard-driven; however, afterwards you do need to create the right private subnets and Azure routing tables to direct the traffic from your Azure subnet to the tunnel (and thence to your home LAN), and the reverse is required as well to direct the outbound traffic correctly.  I found a nice article online that was of great help in walking through the process: Dinesh Sharma’s Blog. I followed most but not all of its recommendations.  Pinging each machine in the chain to ensure the packets were getting where they were supposed to go helped with troubleshooting some early connectivity problems and figuring out where things were getting stuck.  The connection process between the OpenVPN endpoints was quite simple, but getting the traffic to flow turned out to be more difficult.

In truth, getting all the network settings right took me quite a while (I needed to keep that “game” mentality I mentioned earlier to keep from defenestrating the laptop at a few points :-).  The main issue that took a while to find was that the client-side OpenVPN settings on the ASUS router needed to be mucked with a bit to get the right routing to take place (i.e. “LAN” traffic should go to the VPN address, and “everything else” to the internet gateway, i.e. the router). The setting for this was a little bit buried and not especially well documented.

The highlighted setting here was not the default, nor does “Redirect Internet Traffic” necessarily describe the intent, in my opinion.  After all, it was “local” traffic I was looking to redirect.  Also “Policy Rules” is more than a little vague.  However, once you choose that setting, the rules list pops up below and then you can set up the right rules to say which source/destination networks should be routed via the VPN tunnel.  Any traffic not covered in the list is routed to the internet normally.

I got it all sorted out in the end, so now I’m up and running with a backup domain controller/DNS server in the cloud visible full time to all machines on the home network via the OpenVPN server. In the next installment, I’ll talk about setting up a simple public-key infrastructure (PKI) using an Active Directory CA to issue certificates for security and authentication purposes.


Bye bye Diesel, Hello Volt

About 7 years ago, I began the search for a new car to replace my aging Volvo S60. I really wanted to go greener than a straight-up gas car, but I wasn’t terribly happy with most of the hybrid options that were available at the time. On the other hand, since most of my personal driving is just a couple of miles back and forth to work every day, I couldn’t really justify something very expensive/luxurious; it just wouldn’t get used enough. Audi, however, was promising the release of an A3 with this revolutionary new “Clean Diesel” technology that would push over 40MPG on the highway with drastically reduced emissions. Long story short, that’s what I wound up buying.
Fast forward to today, and we all know how that ended, with one of the biggest class action settlements in history finalized just last week, for Volkswagen/Audi’s blatantly fraudulent advertising claims and NOx emissions up to 40x the legal limit. Not only were they not “clean”, but far dirtier than anyone could possibly have imagined.
With my personal settlement payment pending shortly, I decided the replacement for the A3 SuperPolluter needed to go full electric to try to exact a little bit of karmic payback for my unwitting years of complicity in the Dieselgate disaster. There are lots of better hybrids now, but still – what about something that could go full electric for most of my daily driving? Sure, I’d love a Tesla Model S, but the luxury pricing again makes no sense for someone who only drives 10-20 miles a day most of the time. How about a Leaf? Well, yes, it’d work great most of the time…but… there’s still that nagging range anxiety. What about those days where I really do need to go somewhere, or make multiple short trips that will exceed the battery-only range? It was starting to seem like I should consider researching the one car living squarely in feared territory: The American alternative, the Chevy Volt.
When I was a kid, my Dad owned a succession of Buicks. Evidently he got good pricing from one of his medical patients, who owned the local dealership and was willing to cut him a deal. My recollection of those Buicks, though, is classic 1970’s GM: Heavy, stodgy, gas guzzling, and thoroughly unreliable. He drove his last Buick with a gaping hole in the dashboard for years after the speedometer broke, and the dealership was somehow unable to successfully fix or replace it. My only recent experience with GM cars in the last two decades has been the occasional car rental on vacation, where I’ve usually been left relatively unimpressed by the experience. Nothing bad happened, but “boring”, “plastic”, and “unimaginative” are the words that leap most readily to mind.
So it was really on somewhat of a lark that I headed to the local dealer a few months back to take a look at the Volt. I definitely went in with low expectations, but a short test drive left me pleasantly surprised. The 2017 car was quiet even with the engine running, and had snappy acceleration, comfortable leather seats, and crisp handling. It wasn’t thoroughly ugly, nor as unappealingly “concept”-like as the original Volt body style from 2011, and while it still suffered a bit by using too much plastic in the interior fittings, it wasn’t so much that the balance was tipped towards an overly cheapened feeling. I came away far more intrigued than I expected to be. This was not, to steal a line from a different GM division, my father’s Buick.
The Volt holds a somewhat unique position as the only car on the market today that has a significant all-electric range (53 miles in the 2016/17 model years), but also has a gas engine whose purpose is not to drive the wheels directly (in most cases), but rather to run a generator that extends the electric range of the car as needed, even if the primary battery’s capacity has been essentially fully exhausted. Unlike most hybrids, where the gas engine generally can be expected to kick in after at most a few miles of ordinary driving, the Volt will happily run its full 53 electric miles before the engine ever even starts up. So on my typical workdays, with a full charge overnight, I would still be using zero gas, just as if I had settled on the Leaf or the (overpriced, in my opinion) BMW i3. But if I need to go further, or even take a bit of a roadtrip, then I’ll have no limits to how far I can go, as long as I’m willing to put some gas in the tank on those rare occasions.
I did finally pull the trigger on the Volt, choosing to make a detailed selection of options from the dealer’s menu and have a car manufactured to my particular specs, rather than picking one off the lot.  For example, lots of the available inventory included the $500 in-car GPS option, but I’ve essentially given up using the GPS in my Audi in favor of running Waze on my iPhone, for its much better local traffic reporting. Since I saw no particular reason to change that behavior, why would I want to spend the money for a GPS I’d never use? Further, the Volt includes a CarPlay-capable touchscreen, so if I really need to have the map in view, and I can stomach using Apple’s own Maps app (which is the only CarPlay-compatible map option at the moment), I can still do that. I also wanted to load the car up with the optional safety and convenience features. The 2017 Volt has many of the recent technical advancements, such as:

  • Automatic forward braking if it detects that a crash is imminent.
  • Obstacle and rear-crossing sensors
  • Lane-change and fast-approaching-from-the-rear warnings in the side mirrors.
  • Adaptive cruise control and computer-vision lane-keeping assistance. You barely have to steer on the highway at all if you’re not changing lanes, and the car will seamlessly maintain a steady distance behind the car in front of you if your cruise speed exceeds what the car in front is doing.
  • The usual ABS, traction control, and directional stability systems.
  • Auto-dimming bright headlights, again based on computer-vision detection of cars in front of you travelling either direction.

I picked up the car a few days ago and have put a bit over 100 miles on it since then, and I don’t regret it for a moment. No surprises or disappointments in my choice have made themselves obvious yet, so I’m looking forward to several years of nearly emissionless driving. Bon voyage!

I have 4 unclaimed boards completed and ready to ship. This is also likely to be the last build I make, so when they’re gone, they’re gone.

Be one of the lucky owners and contact me to get yours.

Several prospective buyers have been non responsive, so they’re all open for purchase now.  $160USD via PayPal gets you the completed board and complete installation and usage instructions, delivered worldwide. Leave a comment here and I’ll get in touch via email with account information.

In our last installment, I mentioned that XCOM 2 should have been called “Tom Cruise’s Live, Die, Repeat:  The Game” for all the save-scumming required to beat the thing.  Last night I finally reached the end of the road and successfully completed the final mission, which sure felt like an achievement (of sorts.)

In the end, though, I had to restart the campaign three times to finally find the right mix of balanced buildup of squad armor, weapons, and miscellaneous capabilities to be able to survive into the later stages of the game where some of the nastier enemies start to show up, like the Sectopods and Gatekeepers.  A critical element of mission strategy was to always have a grenadier on hand with EMP Bombs or Gas Bombs gained from Experimental Grenade research.  These were good for inflicting large amounts of damage from range to the nastier groups as they arrived, rather than having to do protracted gun engagements with them to wear them down over time.  The latter was frequently a recipe for someone in the squad to wind up dead or heavily wounded.  I also didn’t start to train Psionic soldiers until relatively late in the game, but they became pretty indispensable for mind control once I did have them, allowing me to frequently pit enemies against each other and keep my own guys out of harm’s way.

The final game stats showed that I had won somewhat more slowly than average, in terms of simulated days of game time, and having spent nearly a thousand supplies (credits) less than average along the way, so I’ll consider that a compliment of extreme efficiency and thriftiness on my part.  The problem, in reality, was that I was slow to establish contact with new areas, which led to a low monthly income, and I wound up spending a lot of intel to purchase supplies on the black market.

So final verdict? Maddeningly frustrating, but very satisfying to finally beat, even if I did have to rely too much on restoring the game in the early stages, after particularly unlucky pronouncements from RNGesus.  As the troops leveled up, I found that I was being significantly more successful in-mission and not relying on restores nearly as much.

XCOM 2 is awfully hard!

I’ve played virtually all of the XCOM games, going back to a “lost weekend” in the early 90’s playing the original DOS version.  The new XCOM 2 re-creates the urgency and “just one more mission” crack-addict feeling better than any installment since the original, with just one small difference:  this one is freakin’ hard.

I consider myself a reasonably good player, but they really could have called this one “Tom Cruise: Live. Die. Repeat: The Game.”  Even on just the “veteran” level, one up from “Rookie”, I’ve found myself having to play many missions out exactly as the movie unfolded:  Tom takes a turn.  Dies horribly.  Reloads.  Takes a slightly different turn.  Dies horribly.  Reload.  Repeat… until finally I’m able to just barely survive an encounter without half my squad being rendered unconscious or dying in the process, at which point I save again and edge forward to the next encounter.  Maybe several hours later, I can finally complete the mission in a reasonably successful manner after countless reloads.

Furthermore, the game heavily penalizes “mostly succeeding” in a mission, because it enforces lengthy game-clock delays to heal gravely wounded soldiers back to usable status for future missions.  Combined with the high “supplies” (currency) cost to add new soldiers to your roster, the penalty for allowing yourself to complete missions with wounded, killed,  or captured soldiers is very steep.  The overall result is that allowing soldiers to be killed or captured is a recipe for having to take on increasingly difficult missions with an understrength squad, and you dig yourself into an unretrievable deficit in the overall campaign.

I’m enjoying the game, but overall I just feel that the balance is tipped a smidge too far in favor of true diehard players – I honestly can’t imagine trying to play this game at the higher difficulty levels or in Ironman mode where there are no restores allowed.  I know there are 22-year-olds who must laugh at my pain, but I think Firaxis missed the mark here.  #LoveHateRelationship

So far, I only have two firmly interested parties in an SP-4 card from the second batch, but I’m going to make at least 3, and I’ve ordered enough of the critical SCSI chip part to build up to 6 total.  That way I’d have a few on hand for ad-hoc orders in the future, or to sell on eBay or something.

Feel free to comment on this post or the previous one, if you’d like to add your name to the list.

Some parts are coming from China via The Slow Boat, and combined with PCB fabrication it takes roughly a month to get everything together, so I’ll post again when I’m about to start actually assembling the new batch.
