
In my last post I discussed creating some low-cost Azure virtual infrastructure to handle failover duties for Active Directory domain controller roles, using OpenVPN to create a site-to-site encrypted tunnel.

Managing some of these roles, including OpenVPN Access Server’s management website and Remote Desktop Gateway, requires exposing a web site to the public internet and using SSL (https://…) to encrypt the traffic between the client and the server.  Out of the box, OpenVPN AS runs on Linux with a self-signed certificate from OpenVPN, but this inevitably causes security warnings from any web browser, since OpenVPN is not a recognized public CA and trust is therefore unverified.  To rid yourself of the warnings, but more importantly, to ensure that the web site you think you’re connecting to really is that site and not an impostor, you need either to add the OpenVPN CA to the Windows certificate store on every machine you might connect from, or to set up your own CA on Windows Server.

An Enterprise PKI in Windows Server creates a self-signed root certificate which can then easily be distributed to all machines in the domain with group policy, allowing any child certificates issued by that root authority to be automatically trusted by web browsers.  Running your own CA brings other benefits as well, such as a higher degree of security and trust between domain controllers and clients, the ability to limit remote access to clients with a valid certificate, and so forth.

Unfortunately, it turned out to not be such a slam dunk to generate a web server certificate for the Linux VM running the OpenVPN Access Server due to a combination of tools and processes that weren’t really designed to work together seamlessly, and it becomes more complex if you want the cert to be valid for multiple names for the same site, e.g. “https://internalname” and “https://external.domain.com” and “https://name.cloudapp.azure.com”.  I ultimately found two ways to get the Windows CA to issue certs of this sort:

The All-Windows method:

  1. Using the Certificate Templates tool on the CA machine, make a duplicate of the stock Web Server template, give it a new name, and enable the checkbox that allows export of the private key.  This is a must, since we need to be able to extract the private key from a Windows certificate store later.  Under the Security tab, ensure that domain admins, or another suitably restricted user group, is able to read, write, and enroll this certificate type.  Auto-enrollment is not recommended here.
  2. In the CA management snap-in, add the new template you just created to the list of available templates, and restart the CA so that it will be available for issuance.
  3. Make an .INF file that acts as the template for the certificate request. This process is described here and here, the latter providing an example INF with multiple Subject Alternative Names (a rough example is also sketched just after this list). Note that you can delete parts of the sample INF, as mentioned in the comments, if your CA is set up as an Enterprise CA rather than Standalone, and assuming you are running Windows Server 2016 rather than an older version.
  4. Run
    certreq.exe -new yourinf.inf output.req

    This step, critically, creates both the private and public keys that the certificate will use, and creates the signed request which we will next send to the CA to generate as a child of the CA’s root certificate.

  5. Submit the request to the CA.  Replace yourCAserver below with the CA server’s network name, and CA_name with the name visible in the Certificate Authority management snap-in.  If you omit the -config parameter, you should get a pop-up asking you to select the CA from a list, which works just as well.
    certreq -submit -config "yourCAserver\CA_name" "output.req" "output.cer"
  6. You should get back a request ID (typically a number), and the message “Certificate retrieved(Issued) Issued” and the files output.cer and output.rsp (response) should have been created in the directory where you ran certreq.
  7. Another critical step: You must run
    certreq -accept output.rsp

    This step installs the certificate (temporarily) in the Local Machine’s “Personal” or “My” certificate store, with the private key. As I discovered the hard way, simply double clicking on the .cer file and attempting to import it or export it only results in the public key being available. In other words, the .cer file can be freely distributed (and will be, by the web server), but we need to have the private key as well in order to be able to use the certificate to decrypt the incoming traffic to the site later.  Running certreq with the -accept command is what causes the private key to be installed in a location where it can then be exported.

  8. Run “mmc” and use Ctrl-M to choose the Certificates snap-in.  At the prompt, select the Local Computer store rather than the user’s store.  Open the “Personal” folder within the store and select Certificates. You should now be able to locate the newly-generated certificate issued by your CA in the list.  However, we’ve got it on the wrong machine – now we need to put it in a format where we can move it over to the Linux machine.
  9. Highlight the certificate, then choose All Tasks->Export from the right-click menu.  Select “Yes” on the option to export the private key, choose the PFX file type, and select the box to export all extended properties.  Also, select the option to protect the exported certificate with a password, rather than a user/group.  For the filename, just choose a path and name; the .PFX extension will be added automatically.
  10. On completion of the wizard you will now have the cert, including its private key, in a .pfx file in the selected directory.  At this point you can delete the certificate from the Certificates snap-in and close it; you won’t need to come back here.
  11. Now you need to have the Windows version of OpenSSL installed.  The latest, at the time of writing, is available here, although you may want to check if a newer version is available.
  12. From the command prompt, change to the location where you extracted the OpenSSL binaries and run the following command to extract the private key from the .pfx file:
    openssl pkcs12 -in yourcert.pfx -nocerts -out key.pem
  13. You will be asked for the password you created during the export step, and then you will be prompted to create another password (twice) to encrypt the key.pem file as it’s created.
  14. Now run one more command:
    openssl rsa -in key.pem -out server.key
  15. You will be prompted for the password of the key file, and the final result is the actual private key in a raw, unprotected format.  Take very good care of this file, and do not send it over unencrypted channels as you transfer it to the Linux machine.  Delete it as soon as practical.
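For reference, here is a rough sketch of what the .INF from step 3 might look like for a cert covering the three names mentioned earlier. Treat it as an illustration rather than a drop-in file: the host names and the “YourWebServerTemplate” template name are placeholders for your own values, and the CertificateTemplate attribute assumes an Enterprise CA using the template duplicated in step 1.

[Version]
Signature = "$Windows NT$"

[NewRequest]
Subject = "CN=external.domain.com"
; we need to be able to export the private key later
Exportable = TRUE
KeyLength = 2048
KeySpec = 1
; digital signature + key encipherment
KeyUsage = 0xA0
MachineKeySet = TRUE
RequestType = PKCS10

[RequestAttributes]
CertificateTemplate = "YourWebServerTemplate"

[Extensions]
; OID 2.5.29.17 = Subject Alternative Name
2.5.29.17 = "{text}"
_continue_ = "dns=external.domain.com&"
_continue_ = "dns=internalname&"
_continue_ = "dns=name.cloudapp.azure.com"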

If you are using the Linux OpenVPN Access Server, you can now upload your certs and the private key through its admin web interface.  (Export your domain’s CA certificate from the Trusted Root Certification Authorities store of any machine that received it via group policy, or from the CA management snap-in, and also upload the .cer/.crt file and the server.key you just created.)  The admin web interface will show you the validation results of your certs and key, confirming that the chaining is correct and that the private key matches the public key in the certificate, and listing any subject alternative names you provided.  After installation, you should be able to browse the admin interface without errors, and with the lock icon that signals that the https:// connection is using a trusted certificate (assuming that your browser trusts your local domain’s root certificate, which is a manual option in third-party browsers like Firefox, and built in to Microsoft’s browsers like Edge and IE).
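If you want to sanity-check the pairing yourself before uploading, one quick way (assuming OpenSSL is handy and the key is RSA) is to compare the modulus of the issued certificate and the exported key – the two hashes should match.  Add -inform der to the first command if your .cer file is binary (DER) rather than Base64:

openssl x509 -noout -modulus -in output.cer | openssl md5
openssl rsa -noout -modulus -in server.key | openssl md5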

The second technique: Create the cert request on the Linux machine and issue it from the Windows CA.

This is actually the simpler technique overall, and also avoids ever having to transport the private key between a Windows machine where the issuance takes place and the server it’s ultimately used on.

In general, you want to follow the processes documented here.  The .cnf file you create in this process is the equivalent of the Windows .INF file we created in the previous procedure.  The private key is created locally as a .key file, and then a .CSR file is produced containing the certificate request.
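As a rough sketch only – the guide you follow will have its own variations – the .cnf boils down to a distinguished name plus the same Subject Alternative Names.  The file names (san.cnf, server.key, server.csr) and host names below are placeholders:

[req]
prompt = no
distinguished_name = dn
req_extensions = v3_req

[dn]
CN = external.domain.com

[v3_req]
subjectAltName = DNS:external.domain.com, DNS:internalname, DNS:name.cloudapp.azure.com

A single command then generates a new 2048-bit key and a CSR carrying the SAN extension:

openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr -config san.cnf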

After creating the CSR, rather than self-signing it on the Linux machine (which would still be untrusted by most browsers), you move the .CSR file to the Windows machine and pick things up at step 5.  Move the .cer and .rsp files back to the Linux machine, and then you can install the issued certificate, the CA’s certificate, and the private key using the OpenVPN admin scripts per their documentation, or into whatever web server package you may be using.  For OpenVPN AS, specifically, the sudo commands are:

./sacli --key "cs.priv_key" --value_file "private.key" ConfigPut
./sacli --key "cs.cert" --value_file "vpn.yourdomain.com.crt" ConfigPut
./sacli --key "cs.ca_bundle" --value_file "intermediary_bundle.pem" ConfigPut
./sacli start

You may need to convert your private root CA certificate to PEM format as noted in the OpenVPN documentation, where they suggest some tools that can accomplish that.
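For example, if you exported the root certificate from Windows in the binary DER encoding, converting it with OpenSSL is a one-liner (the file names here are just placeholders):

openssl x509 -inform der -in rootca.cer -out rootca.pem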


I use the term “home” loosely.

My idea of a home network may be a little different from yours – back in 2003, when Microsoft released the first version of Windows Small Business Server, they handed out free CDs and licenses to anyone in the company who wanted one.

About as great an example of Microsoft product naming cliches as has ever existed.

In hindsight, it seems like a bit of a commitment – server management has, over the long haul, consumed a lot of hours that probably could have been spent more productively elsewhere.  However, I took the bait. Although the SBS edition is long gone, I’ve been steadily upgrading Windows Server over the years.  As a result, my home network is more like a small business network, partly for the added security and control over system maintenance that kind of infrastructure can bring to the table, and also because it’s like “continuing ed” for Microsoft product familiarity. Perhaps most importantly, keeping it all running smoothly is, for me anyway, a slightly sick and twisted Business Simulator videogame.  I have domain controllers, a variety of group policies applied (which was actually a great way to keep my kids from doing really dumb things when they were small), a Remote Desktop gateway for access to home machines from offsite, and an OpenVPN server.

A lot of the roles in an Active Directory domain are more robust if you have additional domain controllers to handle failover duty when the primary goes down.  Some of the typical server roles, such as DNS and DHCP, are usually handled by the router/wifi access point you get from your ISP, but once you go the domain route, some of these jobs are better handled by a more capable machine running Windows Server.  One slightly out-of-date server machine is about all I’m willing to invest in the way of physical computing infrastructure, though, so being able to deploy some of these backup and failover roles in the cloud offers a good opportunity to increase the reliability of the overall system with no additional outlay of space, electricity, or money on my part.

“Free Azure” you say? Well, with certain MSDN account types, you get $150 worth of monthly credit to play around with and develop on Azure, which is actually enough to set up quite a bit of low-cost infrastructure. You won’t be running GPU workloads or 8-core virtual machines, but setting up a few low-memory, low-disk, single-core virtual machines on a virtual network and keeping them backed up for safety can easily be accomplished “for free” with that kind of allowance, with change to spare.  The main trick, however, is how to seamlessly connect the private home LAN with the Azure machines over the public internet and still maintain the security of your private information.

The answer is nothing new to network administrators:  This is the classic scenario for a site-to-site VPN (virtual private network), frequently used to connect a branch office to the “headquarters network” over the public internet, using an encrypted tunnel. The traffic in transit remains securely encrypted once it leaves the building, while the machines at either end see each other as residing on a (mildly laggy) transparent extension of the local network.

The standard Azure answer to this scenario is to set up an IPSEC/L2TP tunnel using Windows Server at the local end and a “network gateway” at the Azure end. However, setting up the required local server behind a typical home NAT appliance isn’t supported, although I hear it’s possible if you forward the right ports.  In any case, I didn’t really want to go this route for a different reason: if the whole point of the exercise is to keep things on the network running smoothly when the onsite server machine (and any VMs it may host) goes down, then the VPN would go down with it unless it was running on yet another physical machine – which is exactly what I’m trying to avoid needing.  Luckily, my home router runs the third-party ASUSWRT “Merlin” Linux firmware, which is powerful enough to run a variety of useful extra features, including both an OpenVPN server (for external connections inbound to my network when I’m not home) and a simultaneous OpenVPN client to connect the tunnel to a corresponding OpenVPN server sitting in my private network in Azure.

For the Azure end, instead of the standard Network Gateway, there is a very convenient alternative: the OpenVPN Access Server appliance that’s available for point-and-click setup in Azure.  It runs quite efficiently on a single core under Ubuntu Xenial Linux with only 0.75 GB of memory and a small disk. Even better, there’s no charge for the software itself if you’re using only one or two simultaneous connections, and in the site-to-site case you’ll only ever be using one. The total cost to run this server in the cloud full time is only a few bucks a month.


The setup is fairly simple and wizard-driven; however, afterwards you do need to create the right private subnets and Azure routing tables to direct the traffic from your Azure subnet to the tunnel (and thence to your home LAN), and the reverse is required as well to direct the outbound traffic correctly.  I found a nice article online that was a great help in walking through the process: Dinesh Sharma’s Blog. I followed most, but not all, of its recommendations.  Pinging each machine in the chain to ensure the packets were getting where they were supposed to go helped with troubleshooting some early connectivity problems and figuring out where traffic was getting stuck.  The connection process between the OpenVPN endpoints was quite simple, but getting the traffic to flow turned out to be more difficult.
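For what it’s worth, the routing piece can also be scripted.  The sketch below shows roughly what the Azure CLI equivalent looks like – the resource group, VNet/subnet names, address prefixes, and the VPN appliance’s private IP are all placeholder values, not my actual configuration:

# allow the OpenVPN VM's NIC to forward traffic that isn't addressed to it
az network nic update --resource-group MyRG --name vpnserver-nic --ip-forwarding true

# route the home LAN prefix through the OpenVPN appliance
az network route-table create --resource-group MyRG --name to-home-lan
az network route-table route create --resource-group MyRG --route-table-name to-home-lan \
  --name home-lan-via-vpn --address-prefix 192.168.1.0/24 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.0.4

# attach the route table to the Azure subnet
az network vnet subnet update --resource-group MyRG --vnet-name MyVnet --name default \
  --route-table to-home-lan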

In truth, getting all the network settings right took me quite a while (I needed that “game” mentality I mentioned earlier to keep from defenestrating the laptop at a few points :-).  The main issue that took a while to find was that the client-side OpenVPN settings on the ASUS router needed to be mucked with a bit to get the right routing to take place (i.e. “LAN” traffic should go to the VPN address, and “everything else” to the internet gateway, i.e. the router). The setting for this was a bit buried and not especially well documented.

The highlighted setting here was not the default, nor does “Redirect Internet Traffic” necessarily describe the intent, in my opinion.  After all, it was “local” traffic I was looking to redirect.  Also, “Policy Rules” is more than a little vague.  However, once you choose that setting, the rules list pops up below and then you can set up the right rules to say which source/destination networks should be routed via the VPN tunnel.  Any traffic not covered in the list is routed to the internet normally.
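To make that concrete, the rule I’m describing amounts to a single entry in that list – the subnets here are made up for illustration, not my actual addressing:

Source IP:       192.168.1.0/24   (the home LAN)
Destination IP:  10.0.0.0/16      (the Azure virtual network)
Interface:       VPN

Anything that doesn’t match a rule keeps going out through the WAN as usual.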

I got it all sorted out in the end, so now I’m up and running with a backup domain controller/DNS server in the cloud visible full time to all machines on the home network via the OpenVPN server. In the next installment, I’ll talk about setting up a simple public-key infrastructure (PKI) using an Active Directory CA to issue certificates for security and authentication purposes.


Bye bye Diesel, Hello Volt

About 7 years ago, I began the search for a new car to replace my aging Volvo S60. I really wanted to go greener than a straight-up gas car, but I wasn’t terribly happy with most of the hybrid options that were available at the time. On the other hand, since most of my personal driving is just a couple of miles back and forth to work every day, I couldn’t really justify something very expensive or luxurious – it just wouldn’t get used enough. Audi, however, was promising the release of an A3 with this revolutionary new “Clean Diesel” technology that would push over 40MPG on the highway with drastically reduced emissions. Long story short, that’s what I wound up buying.
Fast forward to today, and we all know how that ended, with one of the biggest class action settlements in history finalized just last week, for Volkswagen/Audi’s blatantly fraudulent advertising claims and NOx emissions up to 40x the legal limit. Not only were they not “clean”, but far dirtier than anyone could possibly have imagined.
With my personal settlement payment pending shortly, I decided the replacement for the A3 SuperPolluter needed to go full electric to try to exact a little bit of karmic payback for my unwitting years of complicity in the Dieselgate disaster. There are lots of better hybrids now, but still – what about something that could go full electric for most of my daily driving? Sure, I’d love a Tesla Model S, but the luxury pricing again makes no sense for someone who only drives 10-20 miles a day most of the time. How about a Leaf? Well, yes, it’d work great most of the time…but… there’s still that nagging range anxiety. What about those days where I really do need to go somewhere, or make multiple short trips that will exceed the battery-only range? It was starting to seem like I should consider researching the one car living squarely in feared territory: The American alternative, the Chevy Volt.
When I was a kid, my Dad owned a succession of Buicks. Evidently he got good pricing from one of his medical patients, who owned the local dealership and was willing to cut him a deal. My recollection of those Buicks, though, is classic 1970’s GM: Heavy, stodgy, gas guzzling, and thoroughly unreliable. He drove his last Buick with a gaping hole in the dashboard for years after the speedometer broke, and the dealership was somehow unable to successfully fix or replace it. My only recent experience with GM cars in the last two decades has been the occasional car rental on vacation, where I’ve usually been left relatively unimpressed by the experience. Nothing bad happened, but “boring”, “plastic”, and “unimaginative” are the words that leap most readily to mind.
So it was really on somewhat of a lark that I headed to the local dealer a few months back to take a look at the Volt. I definitely went in with low expectations, but a short test drive left me pleasantly surprised. The 2017 car was quiet even with the engine running, and had snappy acceleration, comfortable leather seats, and crisp handling. It wasn’t thoroughly ugly, nor as unappealingly “concept”-like as the original Volt body style from 2011, and while it still suffered a bit by using too much plastic in the interior fittings, it wasn’t so much that the balance was tipped towards an overly cheapened feeling. I came away far more intrigued than I expected to be. This was not, to steal a line from a different GM division, my father’s Buick.
The Volt holds a somewhat unique position as the only car on the market today that has a significant all-electric range (53 miles in the 2016/17 model years), but also has a gas engine whose purpose is not to drive the wheels directly (in most cases), but rather to run a generator that extends the electric range of the car as needed, even if the primary battery’s capacity has been essentially fully exhausted. Unlike most hybrids, where the gas engine generally can be expected to kick in after at most a few miles of ordinary driving, the Volt will happily run its full 53 electric miles before the engine ever even starts up. So on my typical workdays, with a full charge overnight, I would still be using zero gas, just as if I had settled on the Leaf or the (overpriced, in my opinion) BMW i3. But if I need to go further, or even take a bit of a roadtrip, then I’ll have no limits to how far I can go, as long as I’m willing to put some gas in the tank on those rare occasions.
I did finally pull the trigger on the Volt, choosing to make a detailed selection of options from the dealer’s menu and have a car manufactured to my particular specs, rather than picking one off the lot.  For example, lots of the available inventory included the $500 in-car GPS option, but I’ve essentially given up using the GPS in my Audi in favor of running Waze on my iPhone, for its much better local traffic reporting. Since I saw no particular reason to change that behavior, why would I want to spend the money for a GPS I’d never use? Further, the Volt includes a CarPlay-capable touchscreen, so if I really need to have the map in view, and I can stomach using Apple’s own Maps app (which is the only CarPlay-compatible map option at the moment), I can still do that. I also wanted to load the car up with the optional safety and convenience features. The 2017 Volt has many of the recent technical advancements, such as:

  • Automatic forward braking if it detects that a crash is imminent.
  • Obstacle and rear-crossing sensors.
  • Lane-change and fast-approaching-from-the-rear warnings in the side mirrors.
  • Adaptive cruise control and computer-vision lane-keeping assistance. You barely have to steer on the highway at all if you’re not changing lanes, and the car will seamlessly maintain a steady distance behind the car in front of you if your cruise speed exceeds what the car in front is doing.
  • The usual ABS, traction control, and directional stability systems.
  • Auto-dimming bright headlights, again based on computer-vision detection of cars in front of you travelling in either direction.

I picked up the car a few days ago and have put a bit over 100 miles on it since then, and I don’t regret it for a moment. No surprises or disappointments in my choice have made themselves obvious yet, so I’m looking forward to several years of nearly emissionless driving. Bon voyage!

I have 4 unclaimed boards completed and ready to ship. This is also likely to be the last build I make, so when they’re gone, they’re gone.

Be one of the lucky owners and contact me to get yours.

Several prospective buyers have been non-responsive, so they’re all open for purchase now.  $160 USD via PayPal gets you the completed board and complete installation and usage instructions, delivered worldwide. Leave a comment here and I’ll get in touch via email with account information.

In our last installment, I mentioned that XCOM 2 should have been called “Tom Cruise’s Live, Die, Repeat:  The Game” for all the save-scumming required to beat the thing.  Last night I finally reached the end of the road and successfully completed the final mission, which sure felt like an achievement (of sorts).

In the end, though, I had to restart the campaign three times to finally find the right mix of balanced buildup of squad armor, weapons, and miscellaneous capabilities to be able to survive into the later stages of the game where some of the nastier enemies start to show up, like the Sectopods and Gatekeepers.  A critical element of mission strategy was to always have a grenadier on hand with EMP Bombs or Gas Bombs gained from Experimental Grenade research.  These were good for inflicting large amounts of damage from range to the nastier groups as they arrived, rather than having to do protracted gun engagements with them to wear them down over time.  The latter was frequently a recipe for someone in the squad to wind up dead or heavily wounded.  I also didn’t start to train Psionic soldiers until relatively late in the game, but they became pretty indispensable for mind control once I did have them, allowing me to frequently pit enemies against each other and keep my own guys out of harm’s way.

The final game stats showed that I had won somewhat more slowly than average, in terms of simulated days of game time, and had spent nearly a thousand supplies (credits) less than average along the way, so I’ll take that as a testament to extreme efficiency and thriftiness on my part.  The problem, in reality, was that I was slow to establish contact with new areas, which led to a low monthly income, and I wound up spending a lot of intel to purchase supplies on the black market.

So final verdict? Maddeningly frustrating, but very satisfying to finally beat, even if I did have to rely too much on restoring the game in the early stages, after particularly unlucky pronouncements from RNGesus.  As the troops leveled up, I found that I was being significantly more successful in-mission and not relying on restores nearly as much.

XCOM 2 is awfully hard!

I’ve played virtually all of the XCOM games, going back to a “lost weekend” in the early 90’s playing the original DOS version.  The new XCOM 2 re-creates the urgency and “just one more mission” crack-addict feeling better than any installment since the original, with just one small difference:  this one is freakin’ hard.

I consider myself a reasonably good player, but they really could have called this one “Tom Cruise: Live. Die. Repeat: The Game.”  Even on just the “veteran” level, one up from “Rookie”, I’ve found myself having to play many missions out exactly as the movie unfolded:  Tom takes a turn.  Dies horribly.  Reloads.  Takes a slightly different turn.  Dies horribly.  Reload.  Repeat… until finally I’m able to just barely survive an encounter without half my squad being rendered unconscious or dying in the process, at which point I save again and edge forward to the next encounter.  Maybe several hours later, I can finally complete the mission in a reasonably successful manner after countless reloads.

Furthermore, the game heavily penalizes “mostly succeeding” in a mission, because it enforces lengthy game-clock delays to heal gravely wounded soldiers back to usable status for future missions.  Combined with the high “supplies” (currency) cost to add new soldiers to your roster, the penalty for completing missions with wounded, killed, or captured soldiers is very steep.  The overall result is that allowing soldiers to be killed or captured is a recipe for having to take on increasingly difficult missions with an understrength squad, digging yourself into an unrecoverable deficit in the overall campaign.

I’m enjoying the game, but overall I just feel that the balance is tipped a smidge too far in favor of true diehard players – I honestly can’t imagine trying to play this game at the higher difficulty levels or in Ironman mode, where no restores are allowed.  I know there are 22-year-olds who must laugh at my pain, but I think Firaxis missed the mark here.  #LoveHateRelationship

So far, I only have two firmly interested parties in an SP-4 card from the second batch, but I’m going to make at least 3, and I’ve ordered enough of the critical SCSI chip part to build up to 6 total.  That way I’d have a few on hand for ad-hoc orders in the future, or to sell on eBay or something.

Feel free to comment on this post or the previous one, if you’d like to add your name to the list.

Some parts are coming from China via The Slow Boat, and combined with PCB fabrication it takes roughly a month to get everything together, so I’ll post again when I’m about to start actually assembling the new batch.

Based on comments I’ve received over the last few months, I think I now have enough takers to build a second batch of Ensoniq-compatible “SP-4 Rewind” SCSI cards.  If you are interested in purchasing, what I need you to do is to post a comment in response to this post containing an email address that I can contact you at.  I will moderate the comments so that your email address doesn’t get posted publicly to the actual comment section, but that way I’ll have a way to get in touch with you about cost and timeframe for manufacturing the next batch.

Hope to hear from you all soon-

Thanks!
Andy


A few posts back I was whining heavily about the gyrations it was taking to get an arcade emulator running well on a Raspberry Pi.  This had nothing at all to do with the cpu capability of the Pi (especially since upgrading to a Pi 2), but was all about the nightmares of Linux configuration, and to some extent, the less than stellar support for Linux by the makers of the arcade control hardware.

The guy who runs the joystick company had convinced me to send back the stick’s circuit board so that it could be upgraded (a one-time process) to support user-reflashable firmware and better Linux support: no need to recompile a custom kernel and all that.  Well, it took a month for the round trip to England and back, but it did what he advertised. Of course, there’s a catch: the new firmware uses a completely different technique to upload the joystick control maps than the old firmware version.

This is where it gets technical and ugly: the old version sent USB control setup packets to the default control channel, while the new one uses HID output reports to the third of three separate endpoints and HID interfaces.  Worse, the guy who wrote the original Linux utility was no longer available to lend assistance.

I know a fair amount about USB, but this went way deeper than I was used to. I also hadn’t monkeyed with any C or C++ in a LONG time.  I had to download and read significant hunks of the USB spec, the HID spec, and some pretty sparse documentation for some abandoned Linux libraries, and restructure and rewrite some decent hunks of the original app.

Shortening this long story, in a few hours of tinkering on and off over the course of about a day, I was able to completely upgrade the original app to enumerate the correct HID interface, navigate the path to the correct report descriptor, and output the correct sequences of bytes to successfully reconfigure the stick on demand. At the same time, I preserved the original logic for anyone using the older stick firmware and made the whole thing completely transparent to the user, while at the same time adding more robust error checking and fixing one or two other small nits.

Then I wrapped the whole thing back up and posted it to github. 🙂 Not too shabby for this longtime program manager.  I have to say, it’s been teaching the APCS class for the last two years that has given me the confidence in my own coding skills again to tackle stuff like this.  Yes, my degree was in CS, but that was almost 25 years ago and boy, those muscles were a bit rusty.
