In my last post I discussed creating some low-cost Azure virtual infrastructure to handle failover duties for Active Directory domain controller roles using OpenVPN to create a site-to-site encrypted tunnel.

Managing some of these roles, including OpenVPN Access Server's management website and Remote Desktop Gateway, requires exposing a web site to the public internet and using SSL (https://…) to encrypt the traffic between the client and the server.  Out of the box, OpenVPN AS runs on Linux with a self-signed certificate from OpenVPN, which inevitably causes security warnings in any web browser, since OpenVPN is not a recognized public CA and trust therefore can't be verified.  To rid yourself of the warnings, but more importantly to ensure that the web site you think you're connecting to really is that site and not an impostor, you need to either add the OpenVPN CA to the Windows certificate store on every machine you might connect from, or set up your own CA on Windows Server.

An Enterprise PKI in Windows Server creates a self-signed root certificate which can then easily be distributed to all machines in the domain with group policy, so that any child certificates issued by that root authority are automatically trusted by web browsers.  Running your own CA brings other benefits as well, such as a higher degree of security and trust between domain controllers and clients, the ability to limit remote access to clients holding a valid certificate, and so forth.

Unfortunately, it turned out not to be such a slam dunk to generate a web server certificate for the Linux VM running the OpenVPN Access Server, due to a combination of tools and processes that weren't really designed to work together seamlessly, and it gets more complex if you want the cert to be valid for multiple names for the same site, e.g. "https://internalname", "https://external.domain.com", and "https://name.cloudapp.azure.com".  I ultimately found two ways to get the Windows CA to issue certs of this sort:

The All-Windows method:

  1. Using the Certificate Templates tool on the CA machine, make a duplicate of the stock Web Server template, give it a new name, and enable the checkbox that allows export of the private key.  This is a must, since we need to be able to extract the private key from a Windows certificate store later.  Under the Security tab, ensure that domain admins, or another suitably restricted user group, is able to read, write, and enroll this certificate type.  Auto-enrollment is not recommended here.
  2. In the CA management snap-in, add the new template you just created to the list of available templates, and restart the CA so that it will be available for issuance.
  3. Make an .INF file that acts as the template for the certificate request. This process is described here and here, the latter providing an example INF with multiple Subject Alternative Names (a rough sketch of such an INF also follows this list). Also note that you can delete parts of the sample INF, as mentioned in the comments, if your CA is set up as an Enterprise CA rather than a Standalone CA, and assuming you are running Windows Server 2016 rather than an older version.
  4. Run
    certreq.exe -new yourinf.inf output.req

    This step, critically, creates both the private and public keys that the certificate will use, and produces the signed request which we will next submit to the CA so it can issue a certificate chained to the CA's root certificate.

  5. Submit the request to the CA. Replace yourCAserver below with the CA server's network name, and CA_name with the name visible in the Certification Authority management snap-in.  If you omit the -config parameter, you should get a pop-up asking you to select the CA from a list, which works just as well.
    certreq -submit -config "yourCAserver\CA_name" "output.req" "output.cer"
  6. You should get back a request ID (typically a number) and the message "Certificate retrieved(Issued) Issued", and the files output.cer and output.rsp (the response) should have been created in the directory where you ran certreq.
  7. Another critical step: You must run
    certreq -accept output.rsp

    This step installs the certificate (temporarily) in the Local Machine’s “Personal” or “My” certificate store, with the private key. As I discovered the hard way, simply double clicking on the .cer file and attempting to import it or export it only results in the public key being available. In other words, the .cer file can be freely distributed (and will be, by the web server), but we need to have the private key as well in order to be able to use the certificate to decrypt the incoming traffic to the site later.  Running certreq with the -accept command is what causes the private key to be installed in a location where it can then be exported.

  8. Run "mmc" and use Ctrl-M to add the Certificates snap-in.  At the prompt, select the Local Computer store rather than the user's store.  Open the "Personal" folder within the store and select Certificates. You should now be able to locate the newly-generated certificate issued by your CA in the list.  However, we've got it on the wrong machine; now we need to put it in a format that lets us move it over to the Linux machine.
  9. Highlight the certificate, then choose All Tasks->Export from the right-click menu.  Select "Yes" on the option to export the private key, choose the PFX file type, and check the box to export all extended properties.  Also, select the option to protect the exported certificate with a password rather than a user/group.  For the filename, just choose a path and name; the .PFX extension will be added automatically.
  10. On completion of the wizard you will now have the cert, including its private key, in a .pfx file in the selected directory.  At this point you can delete the certificate from the Certificates snap-in and close it; you won't need to come back here.
  11. Now you need to have the Windows version of OpenSSL installed.  The latest, at the time of writing, is available here, although you may want to check if a newer version is available.
  12. From the command prompt, change to the location where you extracted the OpenSSL binaries and run the following command to extract the private key from the certificate:
    openssl pkcs12 -in yourcert.pfx -nocerts -out key.pem
  13. You will be asked for the password you created during the export step, and then you will be prompted to create another password (entered twice) to encrypt the key.pem file as it's created.
  14. Now run one more command:
    openssl rsa -in key.pem -out server.key
  15. You will be prompted for the password of the key file, and the final result is the actual private key in a raw, unprotected format.  Take very good care of this file, do not send it over unencrypted channels as you transfer it to the Linux machine, and delete it as soon as practical.
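
For reference, here's a rough sketch of what the INF from step 3 might look like for the multi-name case. It's an illustration rather than the exact file I used: the subject and DNS names are placeholders, and "WebServerExportable" stands in for whatever you named the duplicated template in step 1.

[Version]
Signature = "$Windows NT$"

[NewRequest]
; Placeholder subject; use your server's primary name
Subject = "CN=vpn.yourdomain.com"
KeyLength = 2048
; Must be TRUE so the private key can be exported later (matches the template setting)
Exportable = TRUE
MachineKeySet = TRUE
RequestType = PKCS10

[EnhancedKeyUsageExtension]
; Server Authentication
OID = 1.3.6.1.5.5.7.3.1

[Extensions]
; 2.5.29.17 is the Subject Alternative Name extension
2.5.29.17 = "{text}"
_continue_ = "dns=vpn.yourdomain.com&"
_continue_ = "dns=internalname&"
_continue_ = "dns=name.cloudapp.azure.com"

[RequestAttributes]
; The name of the duplicated Web Server template from step 1
CertificateTemplate = WebServerExportable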

If you are using the Linux OpenVPN Access Server, you can now upload your certs and the private key through its admin web interface.  (Export your domain's CA certificate from the Trusted Root Certification Authorities store of any machine that received it via group policy, or from the CA management snap-in, and upload it along with the issued certificate file and the server.key you just created.)  The admin interface shows the validation results for your certs and key, confirming that the chain is correct and that the private key matches the public key in the certificate, and lists any subject alternative names you provided.  After installation, you should be able to browse the admin interface without errors, and with the lock icon that signals the https:// connection is using a trusted certificate (assuming your browser trusts your local domain's root certificate, which is a manual option in third-party browsers like Firefox and built in to Microsoft's browsers like Edge and IE, since they use the Windows certificate store).
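
As an aside, if you'd rather pull the CA certificate straight from the CA machine than export it from a client's Trusted Root store, certutil can write it out directly; the output filename here is just a placeholder:

certutil -ca.cert rootca.cer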

The second technique: Create the cert request on the Linux machine and issue it from the Windows CA.

This is actually the simpler technique overall, and also avoids ever having to transport the private key between a Windows machine where the issuance takes place and the server it’s ultimately used on.

In general, you want to follow the process documented here.  The .cnf file you create in this process is the equivalent of the Windows .INF file from the previous procedure.  The private key is created locally as a .key file, and then a .CSR file is produced containing the certificate request.
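
As a rough sketch (the guide you follow may differ in the details, and the filenames and DNS names here are placeholders), the .cnf and the commands to generate the key and CSR look something like this:

# request.cnf -- the OpenSSL equivalent of the Windows INF
[ req ]
default_bits       = 2048
default_md         = sha256
prompt             = no
distinguished_name = dn
req_extensions     = req_ext

[ dn ]
CN = vpn.yourdomain.com

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = vpn.yourdomain.com
DNS.2 = internalname
DNS.3 = name.cloudapp.azure.com

Then, on the Linux machine:

# generate the private key locally, then the CSR to hand to the Windows CA
openssl genrsa -out private.key 2048
openssl req -new -key private.key -out request.csr -config request.cnf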

After creating the CSR, rather than self-signing it on the Linux machine (which would still be untrusted by most browsers), you move the .CSR file to the Windows machine and pick things up at step 5 above.  Move the issued certificate file back to the Linux machine, and then you can install the server certificate, the CA's certificate, and the private key using the OpenVPN admin scripts per their documentation, or into whatever web server package you may be using.  For OpenVPN AS specifically, the commands (run with sudo from the Access Server's scripts directory, typically /usr/local/openvpn_as/scripts) are:

./sacli --key "cs.priv_key" --value_file "private.key" ConfigPut
./sacli --key "cs.cert" --value_file "vpn.yourdomain.com.crt" ConfigPut
./sacli --key "cs.ca_bundle" --value_file "intermediary_bundle.pem" ConfigPut
./sacli start

You may need to convert your private root CA certificate to PEM format as noted in the OpenVPN documentation, where they suggest some tools that can accomplish that.
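
If the exported CA certificate is in binary DER form (a typical .cer export from Windows), openssl itself can do the conversion; the filenames here are just placeholders:

openssl x509 -inform der -in rootca.cer -out rootca.pem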


I use the term “home” loosely.

My idea of a home network may be a little different from yours – back in 2003, when Microsoft released the first version of Windows Small Business Server, they handed out free CDs and licenses to anyone in the company who wanted one.

About as great an example of Microsoft product naming cliches as has ever existed.

In hindsight, it seems like a bit of a commitment – Server Management has, over the long haul, consumed a lot of hours that probably could have been spent more productively elsewhere.  However, I took the bait. Although the SBS edition is long gone, I’ve been steadily upgrading Windows Server over the years.  As a result, my home network is more like a small business, partially for the added security and control over system maintenance that kind of infrastructure can bring to the table, and also because it’s like “continuing ed” for Microsoft product familiarity. Perhaps most importantly, keeping it all running smoothly is, for me anyway, a slightly sick and twisted Business Simulator videogame.  I have domain controllers, a variety of group policies applied (which was actually a great way to keep my kids from doing really dumb things when they were small), a Remote Desktop gateway for access to home machines from offsite, and an OpenVPN server.

A lot of the roles in an Active Directory domain are more robust if you have additional domain controllers to take over when the primary goes down.  Some of the typical server roles such as DNS and DHCP are usually handled by the router/wifi access point you get from your ISP, but once you go the domain route, some of those jobs are better handled by a more capable machine running Windows Server.  One slightly out-of-date server machine is about all I'm willing to invest in the way of physical computing infrastructure, though, so being able to deploy some of these backup and failover roles in the cloud is a good opportunity to increase the reliability of the overall system with no additional outlay of space, electricity, or money on my part.

"Free Azure," you say? Well, with certain MSDN account types, you get $150 worth of monthly credit to play around with and develop on Azure, which is actually enough to set up quite a bit of low-cost infrastructure. You won't be running GPU workloads or 8-core virtual machines, but setting up a few low-memory, low-disk, single-core virtual machines on a virtual network and keeping them backed up for safety can easily be accomplished "for free" within that kind of allowance, with change to spare.  The main trick is how to seamlessly connect the private home LAN with the Azure machines over the public internet while maintaining the security of your private information.

The answer is nothing new to network administrators:  This is the classic scenario for a site-to-site VPN (virtual private network), frequently used to connect a branch office to the “headquarters network” over the public internet, using an encrypted tunnel. The traffic in transit remains securely encrypted once it leaves the building, while the machines at either end see each other as residing on a (mildly laggy) transparent extension of the local network.

The standard Azure answer to this scenario is to set up an IPsec tunnel using Windows Server at the local end and a "network gateway" at the Azure end. However, setting up the required local server behind a typical home NAT appliance isn't supported, although I hear it's possible if you forward the right ports.  In any case, I didn't really want to go that route for a different reason: if the whole point of the exercise is to keep things on the network running smoothly when the onsite server machine (and any VMs it may host) goes down, then the VPN would go down with it unless it ran on yet another physical machine, which is exactly what I'm trying to avoid.  Luckily, my home router runs the third-party ASUSWRT "Merlin" Linux firmware, which is powerful enough to run a variety of useful extra features, including both an OpenVPN server (for external connections inbound to my network when I'm not home) and a simultaneous OpenVPN client to connect the tunnel to a corresponding OpenVPN server sitting in my private network in Azure.

For the Azure end, instead of the standard network gateway, there is a very convenient alternative: the OpenVPN Access Server appliance that's available for point-and-click setup in Azure.  It runs quite efficiently on a single core under Ubuntu Xenial Linux with only 0.75 GB of memory and a small disk. Even better, there's no charge for the software itself if you're using only one or two simultaneous connections, and in the site-to-site case you'll only ever be using one. The total cost to run this server in the cloud full time is only a few bucks a month.

[Screenshot: azvpn]

The setup is fairly simple and wizard-driven; afterwards, however, you do need to create the right private subnets and Azure routing tables to direct traffic from your Azure subnet into the tunnel (and thence to your home LAN), and the reverse is required as well to direct the outbound traffic correctly.  I found a nice article online that walked through the process: Dinesh Sharma's blog. I followed most, but not all, of its recommendations, and it was a big help.  The connection between the OpenVPN endpoints came up easily; getting the traffic to actually flow was the harder part, and pinging each machine in the chain to see where packets were getting stuck helped with troubleshooting the early connectivity problems.
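
For what it's worth, the Azure routing can also be scripted with the Azure CLI instead of clicked through the portal. This is only a sketch under assumed names and addresses (MyRG, MyVNet, 192.168.1.0/24 as the home LAN, 10.0.0.4 as the Access Server's private IP), not my literal configuration:

# Route the home-LAN prefix through the OpenVPN Access Server VM,
# then attach the route table to the Azure subnet.
az network route-table create --resource-group MyRG --name ToHomeLAN
az network route-table route create --resource-group MyRG --route-table-name ToHomeLAN \
    --name HomeLanViaVPN --address-prefix 192.168.1.0/24 \
    --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.0.4
az network vnet subnet update --resource-group MyRG --vnet-name MyVNet \
    --name default --route-table ToHomeLAN
# The Access Server VM's NIC also needs IP forwarding enabled to pass traffic along:
az network nic update --resource-group MyRG --name vpn-server-nic --ip-forwarding true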

In truth, getting all the network settings right took me quite a while (I needed to keep that "game" mentality I mentioned earlier to keep from defenestrating the laptop at a few points :-).  The main issue that took a while to find was that the client-side OpenVPN settings on the ASUS router needed some fiddling to get the right routing in place (i.e. "LAN" traffic should go to the VPN address, and "everything else" to the internet gateway, i.e. the router). The setting for this was a little buried and not especially well documented.

The highlighted setting in the screenshot below was not the default, nor does "Redirect Internet Traffic" really describe the intent, in my opinion; after all, it was "local" traffic I was looking to redirect.  "Policy Rules" is also more than a little vague.  However, once you choose that setting, the rules list appears below it, and you can then set up rules specifying which source/destination networks should be routed via the VPN tunnel.  Any traffic not covered in the list is routed to the internet normally.

[Screenshot: azvpn3]

I got it all sorted out in the end, so now I’m up and running with a backup domain controller/DNS server in the cloud visible full time to all machines on the home network via the OpenVPN server. In the next installment, I’ll talk about setting up a simple public-key infrastructure (PKI) using an Active Directory CA to issue certificates for security and authentication purposes.

[Screenshot: azvpn2]

Linux setup nightmares

So in that last post, I hinted that getting the user-programmable joystick working properly on Raspberry Pi’s Linux OS was, well, less than ideal.  Here’s just how non-ideal it was:

Windows:

  1. Download and install tools.
  2. Run tools and play around to your heart’s content.
  3. Walk around with a silly glow on your face.

Linux:

  1. Linux joystick configuration utility has a link on the manufacturer’s download page.
  2. Tool must be compiled on the Pi’s particular Linux release. (This is pretty common since Linux runs on all kinds of different processors, and this tool is not part of the “standard” OS.)
  3. Tool requires libhid library to compile.  Locate and download libhid library source.  Account must be created on a Debian archive site before download is possible.
  4. Libhid requires libusb and lib-usbcompat libraries to compile.  Locate and download libusb/lib-usbcompat sources.
  5. Libhid needs to be manually patched before it will compile properly.  Locate blog post from other unlucky user who, luckily for me, has already endured and documented all this pain, with the appropriate fixes.
  6. With libraries compiled and installed, now the joystick utility can be compiled.
  7. Discover another recommended manual patch, this one to the joystick utility’s source code, apply patch and compile again.
  8. Running the utility to reconfigure the joystick works successfully, except the joystick is dropped by the Linux kernel’s USB HID driver after reconfiguration, leaving it inoperative.
  9. Follow instructions from previous blogger to create manual scripts to allow manual re-binding of the stick to the HID driver after uploading revised configurations.
  10. Discover that the Linux kernel driver also filters out half of the valid joystick location values with any of the customized configurations.  Anonymous comment on previous blog indicates a line in the kernel HID driver that can be commented out to disable overzealous validation of stick’s output values.
  11. Download entire RPi Raspbian Linux kernel source.
  12. Modify 3 lines of code.
  13. Recompile entire Linux kernel with Adafruit’s virtual-machine kernel-o-matic. (A breath of fresh air, something easy in a tedious process).
  14. Install new kernel and cross fingers.  Luckily, it boots correctly and doesn’t brick the device. (If you’re attempting to reproduce my pain, make sure you choose the branch that matches your existing kernel!)
  15. Validate that joystick now functions properly.
  16. Discover that during some bootups, the joystick is detected as the "second" joystick rather than the first, causing the rickety old emulator program to fail to detect it.
  17. Research and create "udev rules," which allow particular devices to be forced to have specific, stable names in the OS, so as not to make the rickety old emulator grumpy (a rough example is sketched just below).
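
For anyone curious, the udev rule itself ends up being a one-liner. This is an illustrative sketch only: the vendor/product IDs are placeholders you'd look up with lsusb for your own stick, and the symlink name is whatever your emulator expects to find.

# /etc/udev/rules.d/99-arcade-joystick.rules
# Give this particular USB stick a stable symlink regardless of detection order.
SUBSYSTEM=="input", KERNEL=="js[0-9]*", ATTRS{idVendor}=="d209", ATTRS{idProduct}=="0511", SYMLINK+="input/arcadestick"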

Linux:  It’s the way of the future, you know.

I'm late to the party, but I started building a retro arcade machine using a $35 Raspberry Pi as the computer. The various buttons, joysticks, and other bits of control hardware will likely cost at least 5-6 times that amount by the time the whole thing is done.

I'm doing the Linux setup and prototyping control layouts (more about that tomorrow), but I'm hoping my buddy Raman, with the awesome wood shop, will help build a real cabinet for it when we're ready. What I want to do differently from most emulator builds is to have the control panel lift out like a fridge shelf and be replaceable. That way, when you want to play a game like Tempest or Centipede that needs a spinner or trackball, you just swap in the right panel. The rest of the time you can have a typical one- or two-stick-and-buttons setup, and the cabinet doesn't need to be a mile wide to accommodate all the controls at once.

At the moment, though, it looks more like, well, “3rdWorldCade.” But it works great. Thanks to Amazon for the flexible and sturdy “building materials framework!”

[Photo: IMG_1101]

[Photo: IMG_1103]

Today I received an antique in the mail.  A rather ancient SCSI CD-ROM drive, in an external enclosure, and an equally crusty 50-pin to DB-25 “Mac style” SCSI cable from SamplerZone.com.  It takes me back to the good old days of somewhere around 1993-1995, when I had boatloads of external hard drives in cases a lot like this one. But the good news: After hooking it up to my Ensoniq keyboard with the SCSI board I described in the last post, success! I can load lots of different sounds from CD-ROMs now.  It’s nice when things work the first time. OK, there was one little glitch – the memory expansion I installed a month or so ago turned out to be faulty.  After loading up sampled sounds I was hearing pops and clicks where I was sure there weren’t supposed to be any.  I was a little panicky at first that somehow the SCSI interface was corrupting the data or there was a noisy signal, but then I remembered that I’d never really tested the sample RAM after installing it.  I popped the factory SIMMs back in and all the noise problems immediately went away.  So it’s just bad memory from eBay. Phew.  Time to go see if I can get the seller to exchange it for a different set.

I needed to add some new storage to my server at home, which I recently upgraded to Windows Server 2012. The chassis has no room for new drives but it had several open PCIe slots, so I thought I’d just add a USB3 controller and a couple of external drives, shouldn’t be too hard?

As with most things PC, it's never as easy as it's supposed to be. I Newegg'd a controller from SIIG that claimed to have a TI chipset, and a few 2TB external drives from Fantom, from whom I already have a nice eSATA drive. Things went well until the second day, when I realized the drives were failing to show up after warm reboots. If I power-cycled them after the system was already up, or if it was cold-booted, everything was fine, but after a warm boot, no dice. That was clearly a deal breaker for a largely unattended, remotely managed system.

I tried everything: BIOS settings, obscure registry flags, nada.

Today I decided to try the cheapest potential fix: a different controller. I made a quick run to the local retail hardware place, and found a Vantec card that did the same thing but with a NEC/Renesas chip. Went home, made the swap, powered up, and…

New problem! Yellow bang on the controller in Device Manager. Seriously? Hit the interwebs… It turns out the controller chip has upgradable firmware; after a quick download from some random French website (why doesn't the manufacturer post this stuff?), the problem was luckily solved in short order.

Thankfully, this was the end of the story: the drives now work normally under all conditions. I'll be RMA'ing the TI controller back to Newegg.

Now I have them set up with the new Storage Spaces feature as multiple virtual drives: some mirrored for safety, some striped for speed, and one with the ReFS file system for reliability. And all thin-provisioned: when space starts running out, you just add more physical capacity to the pool and the virtual drives seamlessly grow to take advantage of it. Now that's cool!
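
For the curious, the same setup can be scripted in PowerShell. This is a minimal sketch with made-up pool and disk names rather than the exact commands I ran; note that the storage subsystem's friendly name varies a bit by Windows version:

# Pool every eligible drive, then carve out a thin-provisioned mirrored space.
New-StoragePool -FriendlyName "UsbPool" -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
New-VirtualDisk -StoragePoolFriendlyName "UsbPool" -FriendlyName "MirrorSpace" `
    -ResiliencySettingName Mirror -ProvisioningType Thin -Size 2TB
# Then initialize, partition, and format the new disk (NTFS or ReFS) as you normally would.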
