Steel /stēl/ Verb: Mentally prepare (oneself) to do or face something difficult.

20 Aug 2018

On $releasever and rpmdb

A few days ago a coworker inquired about an odd error:

https://cdn.redhat.com/content/dist/rhel/server/7/%24releasever/x86_64/optional/os/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found

The inclusion of '%24releasever' (the URL-encoded form of '$releasever') in the URL was extremely weird; the question was how it got there. The answer is a bit complicated.

As most folks reading this will probably be aware, yum can substitute certain variables into a repo definition. From man yum.conf:

REPO VARIABLES
       Right side of every repo option can be enriched by the following  variables:

       $arch
          Refers to the system's CPU architecture e.g., aarch64, i586, i686 and x86_64.

       $basearch
          Refers to the base architecture of the system. For example, i686 and i586  machines  both have a base architecture of i386, and AMD64 and Intel64 machines have a base architecture of x86_64.

       $releasever
          Refers to the release version of operating system which DNF derives from information available in RPMDB.

But where exactly does $releasever come from? It turns out the answer is this query:

rpm -q --provides $(rpm -q --whatprovides "system-release(releasever)") | grep "^system-release(releasever)"

So on a Red Hat Enterprise Linux 7.5 server system, system-release(releasever) is provided by the redhat-release-server package, and its value is set to 7Server.
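
For illustration, here is roughly what that looks like broken into its two steps on a RHEL 7.5 server (the package version and output shown are illustrative):

$ rpm -q --whatprovides "system-release(releasever)"
redhat-release-server-7.5-8.el7.x86_64
$ rpm -q --provides redhat-release-server | grep "^system-release(releasever)"
system-release(releasever) = 7Server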

After reinstalling that package, RPM was again able to provide that information to yum, and the error went away since the variable had a value. It was certainly a strange error to run into...

29 Oct 2017

On Password Protecting Kickstarts

Kickstarting a Red Hat Enterprise Linux (or derivative) system is an extremely handy way to get a system off the ground quickly. Even in today's world of configuration management systems, it is still an extremely handy tool for initial bootstrapping.

However, one thing has bothered me for some time: kickstarts may contain potentially sensitive information. As well, to be useful, kickstarts generally need to be available to your entire organization, or even worse the entire internet, and all of this seems to happen without any form of authentication in front of the kickstart file.

Yes, it would be possible to mitigate this issue with advanced networking, and kickstarts themselves try to mitigate it by, for instance, hashing the root and GRUB passwords.

However, if your organization is anything like mine, the left hand often doesn't even know the right hand exists, and so we must work with networking that is not that advanced. Further, outside of the hashed passwords for GRUB and root, there are other areas of the kickstart that can be sensitive, for instance your Red Hat Subscription Manager (RHSM) activation key, or perhaps you are doing something special in the %post section of your kickstart.

Wouldn't it be nice to at a minimum provide basic password authentication in front of your kickstart?

It turns out you can; it is just poorly documented, if documented at all. I couldn't find anything about this, and ended up guessing based off of an old Bugzilla report.

Enabling Basic Authentication Using HTTPS with Kickstart:

Simply place your kickstart on an Apache server or equivalent, and configure Basic Authentication for the location of the kickstart.
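
A minimal sketch of the Apache side, assuming the kickstart lives under /var/www/html/ks/ and a stock httpd layout (the paths, file names, and user name are all illustrative):

# Create the password file (illustrative user name):
$ sudo htpasswd -c /etc/httpd/conf/ks.htpasswd user

# /etc/httpd/conf.d/kickstart.conf
<Directory "/var/www/html/ks">
    AuthType Basic
    AuthName "Kickstart"
    AuthUserFile /etc/httpd/conf/ks.htpasswd
    Require valid-user
</Directory>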

Anaconda appears to rely on libcurl underneath, and as such you can use curl's syntax for usernames and passwords from the kernel command line like so:

ks=https://user:passwd@kickstart.example.com

And that right there is really all there is to it, as long as your TLS certificate is recognized by the default trust store.
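
You can sanity check the setup from any client with curl before booting the installer (the URL here is illustrative):

$ curl -u user:passwd https://kickstart.example.com/ks/rhel7.cfg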

27 Jun 2016

On Building a Quick TLS Server with Flask

I have the need, at the moment, to have a TLS site running from my local system, served only to my local system, for development purposes. This is needed for the OAuth flow so that I can gather the token returned, but that is another story...

I didn't want to deal with generating a CA, server certs, etc. I wanted the code to be fully portable and to just work, ignoring of course the security implications of not having a 'legit' certificate.

It turns out setting up a quick TLS enabled site with Flask is ridiculously easy. This was one of those 'thank the deity' moments: someone had already thought of and implemented this particular use case.

Anyway, on to the code:

from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    # 'adhoc' generates a throwaway self-signed certificate at startup
    # (requires pyOpenSSL to be installed).
    app.run(debug=True, port=8443, ssl_context='adhoc')

Make sure you have pyOpenSSL installed, and that is it! Amazing really. Clearly you don't need to set the port if you don't want to; it will default to 5000 if none is supplied.

The ssl_context is the interesting bit. 'adhoc' means that a new certificate will be generated on each app start. This is not secure, since the certs are not based off of any trusted CA, but it will get the job done. You can also supply your own cert, key, and CA via an SSL context object if you need to.
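
For instance, if you already have a certificate and key on disk, a (cert, key) tuple also works (the file names here are illustrative):

app.run(debug=True, port=8443, ssl_context=('cert.pem', 'key.pem'))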

To reinforce this point: this is a quick setup using the development server built into Flask. DO NOT use this in production; this is for testing only. The certs themselves should be secure; however, you have no CA chain, thus no trust. Again, this should NEVER be used in production.

15 Feb 2016

On Red Hat Satellite 6.1 and HTTP Strict Transport Security

Red Hat's Satellite product is, to put it frankly, in a pretty sorry state of affairs. I understand large projects are complex, but the sheer number of bugs I have tripped across while trying to use and configure this product is absurd, especially for a paid product.

Anyway, one of the many issues I have run across is that the satellite server is advertising that it supports HSTS, yet a large amount of content from satellite is available only via HTTP.

This effectively means that if a user visits the satellite web page, and then tries to download an ISO or RPM from the satellite server that is only exposed via HTTP, the browser will block them from going to the HTTP site because of the HSTS headers it received.

I have filed a bug report about the issue, but in the meantime, in order to work around the issue, you can do the following:

  1. Create the file /etc/httpd/conf.d/01-headers-hack.conf
  2. Place the following in the file:

     <IfModule mod_headers.c>
       Header unset Strict-Transport-Security
       Header always set Strict-Transport-Security "max-age=0;includeSubDomains"
     </IfModule>

  3. Restart apache: sudo systemctl restart httpd

What we are doing here is taking the max-age down to zero for the HSTS header being sent by apache. This will override the HSTS header that is set via passenger and allow browsers to get both HTTP and HTTPS content from your satellite server.
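
You can verify the override took effect from any client (the hostname is illustrative; the header should match what was configured above):

$ curl -sI https://satellite.example.com | grep -i strict-transport-security
Strict-Transport-Security: max-age=0;includeSubDomains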

Hopefully this won't be necessary for too long, however I have had little luck gaining much traction with Red Hat support or the satellite developers.

14 Jan 2016

On Vagrant, Red Hat Satellite, and Red Hat Subscription Manager

Red Hat provides scripts for subscribing a system via subscription manager for Vagrant. However, these mysterious scripts are paywalled: you either need to be a Technology Partner with Red Hat (which means being an ISV, OEM, or IHV) or you need to pay them $99 a year for the Red Hat Developer Suite subscription. Personally, I believe paywalling this stuff was an extremely poor decision on Red Hat's part, and I would encourage any other affected users to communicate their displeasure with Red Hat as I have.

Nevertheless, we deal with what we are given, and in this case we are given nothing. So how do we subscribe a Vagrant box via subscription manager so we can easily test against it? Enter Vagrant Triggers.

Assumptions:

I am assuming you are running on a Linux(ish) platform for your development host.

I am further assuming that you have a Red Hat Satellite server that is >= 6.0, and that on that Satellite server you have configured an activation key in an organization appropriate for your Vagrant images.

Get the Bits:

You will need Vagrant installed and working in order for any of this to work.

After Vagrant is installed and verified, install Vagrant Triggers:

$ vagrant plugin install vagrant-triggers

Configure Vagrant Triggers:

Vagrant Triggers allows you to run any action based off of any vagrant command. Essentially, all we are interested in is running subscription-manager register after a Vagrant image is brought up, and subscription-manager unregister before an image is destroyed.

Add the following to your ~/.vagrant.d/Vagrantfile (create it if it does not exist):

Vagrant.configure("2") do |config|

  if Vagrant.has_plugin?("vagrant-triggers")
    config.trigger.before :destroy do
      info "Removing system from RHSM if it is registered."
      run_remote "/usr/sbin/subscription-manager unregister"
    end

    config.trigger.after :up do
      info "Registering system to CU Boulder Satellite in the Vagrant Organization."
      run_remote "rpm -Uvh http:///pub/katello-ca-consumer-latest.noarch.rpm && subscription-manager register --org '' --activationkey ''"
    end
  end
end

You will obviously need to substitute in the address of your Satellite server (note the URL above is missing its host), the name of your organization, and the appropriate activation key that you have created.
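
For illustration only, with entirely hypothetical values substituted in, the registration line ends up looking something like:

run_remote "rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm && subscription-manager register --org 'Example_Org' --activationkey 'vagrant-key'"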

Gotchas:

This method is not without its drawbacks. I'll list the ones that I know.

All commands must exit with a 0 status:

If they don't, the vagrant up or destroy process will be halted immediately. This effectively means that, lacking networking, you won't be able to either create or destroy a RHEL Vagrant image. You can manually delete the image, but that is about all.

This Runs for ALL Vagrant Images:

Bringing up an Ubuntu image? Ain't going to fly: it is going to try to run the subscription-manager command, and when that fails, execution will be halted.

You Cannot Destroy a Halted Image:

Ever shut down your system and left behind some running Vagrant images? They get paused, right? Well, vagrant destroy is not going to work, because it needs to run the subscription-manager command and the system is not actually up. You can start the image and then run the destroy, but a straight destroy will not work.

Fin:

That is it, you should now be able to bring up RHEL images and have them registered so that you can pull down packages. Pay attention to the gotchas; I suspect many of these can be worked around with more error checking etc., but this was a first pass to get things going. Thoughts are welcome.

24 Dec 2015

On Building Red Hat Enterprise Linux Vagrant Boxes

Vagrant is an extremely powerful tool; the streamlining of testing it provides is potent and has probably saved me much trouble, and probably created more trouble too (as with all steps forward, there are often steps back).

Red Hat does, in fact, provide official Vagrant images as part of its Container Development Kit (CDK). However, you either need to be a technology partner with Red Hat, or you need to purchase a Red Hat Developer Suite subscription, starting at $99 a year, and be a part of the Red Hat Developer program (which is free).

Put simply, unless you are at a for-profit company that is also an IHV, OEM, or ISV, you are going to have to pay for the privilege of getting access to Red Hat's official Vagrant images.

I'll be blunt on this fact: I believe Red Hat has made a very poor decision in paywalling these images. They are, in essence, making it more difficult for us to deploy their OS into our environments. I have spoken with them; maybe something will change, maybe it won't. I would encourage any folks reading this to do the same.

Anyway, because of Red Hat's decisions on the matter, I needed to build Vagrant box files for RHEL 6 and RHEL 7. I wanted to do this in the most efficient manner possible (read: laziest). After a number of hours of research, here is what I came up with; it is the quickest method toward that goal that I could find, and any improvements will be posted here.

Prerequisites:

You will need Packer installed and working. As well, an Atlas account (which is free) will ease your burdens in terms of distribution, if you are working with a team.

VirtualBox is not strictly required in general; however, we will be building the box images for the VirtualBox provider, and thus VirtualBox is required here. If you are building for a different provider, well, you probably know what is needed.

Get the Bits:

Download the RHEL ISOs you desire to build; personally, I always pull the latest point release, 6.7 and 7.2 at the time of this writing. Obviously, you need an active RHEL subscription to get the ISOs.

Clone the packer build templates from the kind folks over at Chef:

$ git clone https://github.com/opscode/bento.git
$ cd bento

In the directory are packer templates for a huge number of flavors. We are clearly interested in the RHEL templates:

$ ls -1 rhel*
rhel-5.11-i386.json
rhel-5.11-x86_64.json
rhel-6.6-i386.json
rhel-6.6-x86_64.json
rhel-6.7-i386.json
rhel-6.7-x86_64.json
rhel-7.2-x86_64.json

Create an iso directory inside the bento directory and copy the ISOs that you obtained into it:

$ mkdir -p iso/rhel/
$ cp /some/location/*.iso iso/rhel/

Build the RHEL Images with Packer:

We are going to build box images for VirtualBox only. You can certainly build the image using other builders; just check out the packer documentation.

$ packer build -only=virtualbox-iso -var "mirror=file:///$(pwd)/iso" rhel-7.2-x86_64.json

You are going to see a lot of output go by, along with VirtualBox windows opening as the image is prepared. Wait, and at the end you will have a shiny new RHEL 7.2 box image for Vagrant located in the 'builds/' directory.

Fin:

Congratulations, you now have a RHEL Vagrant box with a minimum of work having been done by you, truly standing on the shoulders of giants.
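
From there, adding the box to Vagrant is a one-liner (the exact box file name under builds/ will vary with your build):

$ vagrant box add --name rhel-7.2 builds/rhel-7.2-x86_64.virtualbox.box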

I would recommend that folks take a look at Atlas for group collaboration. It is very likely that distributing the RHEL bits publicly is a violation of some legalese, so keep 'em private.

8 Apr 2015

On Cisco ASA and Secure Logging to Rsyslog

Rsyslog has the ability to receive logs securely via TCP; there is a very good tutorial available in the rsyslog documentation on how to set this up, so I will not cover that aspect of it.

A Cisco ASA device running at least version 8.0(2) is able to support secure logging. However, documentation for this on the ground is thin at best. Having just worked through the entire process with Cisco support, I figured I would put this documentation up for the use of other folks.

Ensure Your Certificates are in Order:

Probably the single most important thing to do is to ensure that your entire certificate chain is saved on the ASA. You cannot, for instance, have a root CA and two subordinate CAs, one subordinate issuing certificates to your ASA and the other issuing certificates to your rsyslog server, and expect things to work. All certificates in both chains must be loaded on the ASA.

Further, you will need to associate a certificate with the interface from which you will be sending the syslog packets. This certificate will have to come from the same root CA or subordinate CA as the certificate configured on the rsyslog server.
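
A minimal sketch of loading one CA certificate onto the ASA from the CLI (the trustpoint name is illustrative; repeat for each CA in both chains):

crypto ca trustpoint ROOT-CA
 enrollment terminal
 exit
crypto ca authenticate ROOT-CA
! paste the PEM-encoded CA certificate when prompted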

Configure the ASA for Logging:

I will walk through each of the following commands after the fact, but here is the basic configuration:

logging enable
logging timestamp
logging facility 20
logging host interface IP TCP/6514 secure
logging permit-hostdown

  • logging enable: The command enables the logging facilities of the ASA.
  • logging timestamp: This enables time stamps for the syslog messages. This is not required, but it is often nice to know when something happened, assuming your clock is set.
  • logging facility 20: This sets the logging facility that the syslog messages will be sent as, 20 equates to local4, 21 local5, 22 local6, and so forth.
  • logging host interface IP TCP/6514 secure: This sets up the syslog logging host, sending messages out of the defined interface to the defined IP. TCP/6514 defines TCP as the protocol, which is required for TLS secure logging and 6514 defines the port of the remote rsyslog server. secure states that the messages are to be sent encrypted. Please note: when you run 'show running-config logging', TCP/6514 will be shown as 6/6514 instead, this is a bug in the display of the configuration.
  • logging permit-hostdown: This command specifies that traffic through the ASA will still flow if the ASA is unable to send messages to the syslog server; otherwise, traffic will be blocked.

As long as the certificate chain is correct and the ASA can reach the rsyslog server, the above should be all that you need. If, however, you are running into issues, continue on to the troubleshooting section, as Cisco's errors for this particular issue are rather opaque.

Troubleshooting:

Enable the following debug messages:

debug ssl 255
debug crypto ca 255
debug crypto ca messages 255
debug crypto ca transactions 255

You may end up with debug messages like the following:

CERT-C: I pkixpath.c(1167) : Error #72eh
CRYPTO_PKI: Certificate validation: Failed, status: 1838
CRYPTO_PKI:PKI Verify Certificate Check Cert Revocation unknown error 1838

This highly informative error message can mean a few different things:

  1. The intermediate CA is not installed, or not installed correctly. Make sure you check that the entire chain is in place.
  2. The certificate is not installed on the logging server. Ensure that you have the right certificates set up for rsyslog and that they go to the same intermediate CA or root.
  3. Finally, the CA certificate may not be correct. If this is a self-generated certificate, make sure the constraints etc. for the CA certificate are correct.
2 Jun 2014

On Building the Gemalto .NET PKCS 11 Module for Linux

Gemalto manufactures a number of different smart cards. However, one that seems to be rather popular is the Gemalto IDPrime .NET series of cards. Gemalto has also kindly developed an LGPL licensed PKCS 11 module for this series of cards that can be used in Linux. However, tracking the module down can be a real pain.

Gemalto's website seems to be only sporadically maintained: lots of dead links, pointers to drivers and tools that no longer exist, etc. As such, I had to chase this down a little, and in the spirit of helping others, and myself, out in the future, here is what needs to be done to find, compile, and install the module.

Get the Module Source Code:

I was able to locate the module source code in the SmartCardServices repository on Mac OS Forge. Probably the simplest way to get the source is to click on the 'Zip Archive' link under 'Download in other formats:'; this will give you a zip file of the entire archive at its current revision.

You can also use Subversion to pull down the repository if you like:

svn checkout https://svn.macosforge.org/repository/smartcardservices

Build the Module:

If you downloaded the zip file you will have a file like 'trunk-REVISION.zip'; the REVISION number will change as the repository changes. Unzip the file and you should end up with a 'trunk' folder. Now cd into the module's folder:

$ unzip trunk-160.zip
$ cd trunk/SmartCardServices/src/PKCS11dotNetV2/

If you used Subversion to check out the repository, you should have a 'smartcardservices' directory, inside of which are all the branches and the trunk. We will be working against the trunk only, and only building a very small part of that.

$ cd smartcardservices/trunk/SmartCardServices/src/PKCS11dotNetV2/

At this point you need to run autogen.sh for the PKCS11dotNetV2 module in order to create the configure files etc.

$ chmod 755 autogen.sh
$ ./autogen.sh

Hopefully everything goes smoothly; now we run through the general configure, make, make install steps. Note that you will need to build against the system Boost libraries, so you need to pass a flag to configure for that.

$ ./configure --enable-system-boost
$ make

By default the module will be installed into /usr/local/lib/. If this is a problem, you will need to adjust your configure flags to set the location; read through './configure --help' for more information.

Now we install the module:

$ make install

The module is now installed at /usr/local/lib/pkcs11/libgtop11dotnet.so; this is the location you will need to point programs like Thunderbird and Firefox at in order to use the module.
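
For example, to register the module with a Firefox profile's NSS database from the command line, something like the following should work (the profile path and module name are illustrative; the same can also be done through the browser's security devices dialog):

$ modutil -dbdir ~/.mozilla/firefox/<profile>/ -add "Gemalto .NET" -libfile /usr/local/lib/pkcs11/libgtop11dotnet.so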

18 Feb 2014

On Cisco ASA High Availability Remote Upgrades

Problem:

One of the issues with setting up the Cisco ASA in a High Availability configuration is that, though most things are replicated, installation of the AnyConnect software, ASDM, and the ASA base software itself is not. For folks sitting behind the ASA this is not a problem; you can just connect to one ASA and then the other to update the software. However, if you are connecting remotely via the VPN this is simply not possible. There is a feature request open with Cisco for this ability, but since it has been open for a few years I am not holding out hope.

So how does one upgrade the software on both the primary and secondary units remotely?

Solutions:

There are two generally accepted solutions to this conundrum.

Upgrade and Fail Over Method:

In this method you upgrade the primary using whatever means you usually use, such as ASDM or the CLI. However, after the ASDM and ASA software is installed on the system, you do not activate the software; instead you fail over to the secondary, reconnect using ASDM or SSH, and do the install on the secondary system. You then fail back to the primary, activate the software, and reboot the systems in whatever order you please.

Upgrade via TFTP Method:

Unfortunately, in my particular circumstance I simply can't fail over, as the failover is not working due to IPv6 issues. In order to use this method you are going to need a TFTP server on a network segment that is accessible to the ASAs. I won't go through setting up a TFTP server, as it is a pretty simple process.

The following is all done from the CLI on the primary:

Copy over the ASA software to the secondary:
failover exec standby copy /noconfirm tftp://asa-smp-<version>-k8.bin flash:/

Copy over the ASDM software to the secondary:
failover exec standby copy /noconfirm tftp://asdm-<version>.bin flash:/

If necessary copy over the new versions of the AnyConnect client to the secondary:
failover exec standby copy /noconfirm tftp://anyconnect-<platform>-<architecture>-<version>.pkg flash:/

Your secondary/standby unit is now prepped up for the upgrade. At this point repeat the same steps for the primary unit:

Copy over the ASA software to the primary:
copy /noconfirm tftp://asa-smp-<version>-k8.bin flash:/

Copy over the ASDM software to the primary:
copy /noconfirm tftp://asdm-<version>.bin flash:/

If necessary copy over the new versions of the AnyConnect client to the primary:
copy /noconfirm tftp://anyconnect-<platform>-<architecture>-<version>.pkg flash:/

Now issue the commands on the primary to configure the ASA, ASDM, and AnyConnect (if needed) images as the defaults. Because these commands are replicated automatically to the secondary, execution is only required on the primary:

On the primary set the system to boot using the new ASA software:
boot system disk0:/asa-smp-<version>-k8.bin

On the primary set the new version of ASDM to be used:
asdm image disk0:/asdm-<version>.bin

If necessary on the primary set the new version of AnyConnect to be used:
webvpn
anyconnect image disk0:/anyconnect-win-<architecture>-<version>.pkg 1 regex "Windows NT"
anyconnect image disk0:/anyconnect-macosx-i386-<version>.pkg 2 regex "Intel Mac OS X"

At this point your configuration is set for the new versions of the software on both the primary and secondary, all that is left is to write it to memory and reboot either the primary or the secondary depending on how you like to gamble. I prefer to reboot the secondary first and then the primary:

Write the configuration to memory on the primary:
write mem

Reboot the secondary:
failover reload-standby

After the secondary unit reboots (assuming nothing goes wrong) check its status:
show failover state

Reboot the primary:
reload noconfirm save-config

You should now have the latest version of the software installed on both the primary and secondary units.

12 Jul 2013

On FreeIPA, PKI, and Exporting the CA

FreeIPA, by default, generates a certificate for each host that is joined to an IPA domain. It also copies the CA certificate for the domain over to the client system. However, both of these certificates are held in an NSS DB, which may or may not be the most useful location for your needs.

In my particular case, I needed the CA certificate for Postfix. Postfix is unable to use NSS DBs (as far as I know). So I needed to extract the CA certificate from the NSS DB and add it to the /etc/pki/tls/certs/ca-bundle.crt file. This allows other programs, such as Postfix, that don't understand NSS to use the CA certificate for verification.
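
As an aside, once the CA is in the bundle, pointing Postfix at it is a single main.cf parameter. Which parameter applies depends on which direction of TLS you are verifying, but for outbound (client-side) verification it is:

smtp_tls_CAfile = /etc/pki/tls/certs/ca-bundle.crt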

Here is what I did to export the certificate and add it to the ca-bundle.crt file:

Environment:

All work was done in a RHEL 6.4 x86_64 environment, your mileage on other platforms may vary.

Find the Correct Certificate:

The NSS DB that FreeIPA uses is located in /etc/pki/nssdb/. The first step is to take a look at that location and find out what certificates are in the DB.


certutil -L -d /etc/pki/nssdb/

Certificate Nickname                                         Trust Attributes
                                                             SSL,S/MIME,JAR/XPI

IPA CA                                                       CT,C,C
IPA Machine Certificate - host.example.com                   u,u,u

Let's cover the flags there:

  • L: List all certs.
  • d: Certificate directory; this should be followed by the location to use (the default is ~/.netscape).

The CA certificate is helpfully labelled IPA CA, though it might also be listed as FreeIPA CA. What really gives it away is the trust attributes: the CT,C,C flags indicate that this is a CA certificate. You can find more information in this Oracle blog post.

Export the CA Certificate:

Now that we have identified the CA certificate, it is time to export it. We are going to export this as an ASCII encoded certificate into a separate file. At that point we will use openssl to add some metadata about the certificate for convenience. Then we will append the contents of that certificate to the /etc/pki/tls/certs/ca-bundle.crt file.

In order to export the certificate run:

certutil -L -d /etc/pki/nssdb/ -a -n 'IPA CA' > IPA_CA.crt

Let's cover the new flags:

  • a: For single certificate, print ASCII encoding.
  • n: Pretty print named certificate.

So basically we are searching for the certificate labelled 'IPA CA' (substitute your own name here if necessary) and exporting it in ASCII format, redirecting said output to a file of course.

Add Metadata to SSL Certificate:

This is in no way a requirement; you could simply take the contents of the file you created and add it to ca-bundle.crt. However, if you look in the ca-bundle.crt file you will find that almost all the certificates are preceded by information about them in (relatively) clear text. This is handy for the poor human that has to read through the file and try to discern what is what. It might even be useful for something else, but I don't know what that is.

In order to get that metadata about the certificate you need to use openssl as follows:

openssl x509 -in IPA_CA.crt -out IPA_CA.pem -text

Again, we will cover the flags being used; these are pretty obvious, but for completeness' sake:

  • in: The name of the input file.
  • out: The name of the output file.
  • text: "Prints out the certificate in text form. Full details are output including the public key, signature algorithms, issuer and subject names, serial number any extensions present and any trust settings."

If you take a look at the IPA_CA.pem file you now have your cert and associated metadata. Finally, you will want to append this to the ca-bundle.crt file.

Append the CA:

This is a very simple step:

cat IPA_CA.pem >> /etc/pki/tls/certs/ca-bundle.crt

In One Line:

One line, sort of. You are going to need to know the name of the certificate before you can export it, but assuming you know that, then this should work:

certutil -L -d /etc/pki/nssdb/ -a -n 'IPA CA' | openssl x509 -text >> /etc/pki/tls/certs/ca-bundle.crt

Caveats:

The /etc/pki/tls/certs/ca-bundle.crt file is part of the ca-certificates RPM, which does get updated. Updates come in the form of a ca-bundle.crt.rpmnew file that should be moved into place as ca-bundle.crt; the trouble is, of course, that you will then need to re-add your CA. Setting up an automated process to do this is necessary, as is a process for moving the .rpmnew file into place.

As well, there is no easy, automated way to handle this file: inserting is easy enough, but what happens when your CA certificate expires and needs to be replaced? Removal is not as easy. Coming along in the Fedora 20 timeframe there will hopefully be a solution to all of this, in the form of the Shared System Certificates feature.

This work by Erinn Looney-Triggs is licensed under a Creative Commons Attribution-ShareAlike 3.0 United States.