Steel /stēl/ Verb: Mentally prepare (oneself) to do or face something difficult.

27Jun/16

On Building a Quick TLS Server with Flask

I currently need a TLS site running on my local system, served only to my local system, for development purposes. I need it for an OAuth flow so that I can capture the returned token, but that is another story...

I didn't want to deal with generating a CA, server certs, and so on. I wanted the code to be fully portable and just work, ignoring, of course, the security implications of not having a 'legit' certificate.

It turns out that setting up a quick TLS-enabled site with Flask is ridiculously easy. This was one of those 'thank the deity that someone had already thought of and implemented this particular use case' moments.

Anyway, on to the code:

from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    app.run(debug=True, port=8443, ssl_context='adhoc')

Make sure you have pyOpenSSL installed, and that is it! Amazing, really. Clearly you don't need to set the port if you don't want to; it will default to 5000 if none is supplied.

The ssl_context is the interesting bit. 'adhoc' means that a new certificate will be generated on each app start. This is not secure, since the certs are not issued by any trusted CA, but it will get the job done. You can also supply the cert, key, and CA as an SSL context object if you need to, more info here.
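If you want to go one step beyond 'adhoc' and keep the same certificate across restarts, Flask also accepts a (cert, key) tuple for ssl_context. A minimal sketch, with filenames of my own choosing:

```shell
# Generate a throwaway self-signed cert/key pair (still not trusted by any CA):
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout key.pem -out cert.pem -days 30 -subj '/CN=localhost'

# Then in the Flask app:
#   app.run(debug=True, port=8443, ssl_context=('cert.pem', 'key.pem'))
```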

To reinforce the point: this is a quick setup using the development server built into Flask. DO NOT use this in production; it is for testing only. The certs themselves should be sound, but you have no CA chain and thus no trust. Again, this should NEVER be used in production.

15Feb/16

On Red Hat Satellite 6.1 and HTTP Strict Transport Security

Red Hat's Satellite product is, to put it frankly, in a pretty sorry state of affairs. I understand large projects are complex, but the sheer number of bugs I have tripped across while trying to use and configure this product is absurd, especially for a paid product.

Anyway, one of the many issues I have run across is that the satellite server is advertising that it supports HSTS, yet a large amount of content from satellite is available via HTTP.

This effectively means that if a user visits the satellite web page, and then tries to download an ISO or RPM from the satellite server that is only exposed via HTTP, the browser will block them from going to the HTTP site because of the HSTS headers it received.

I have filed a bug report about the issue, but in the meantime, in order to work around the issue, you can do the following:

  1. Create the file /etc/httpd/conf.d/01-headers-hack.conf
  2. Place the following in the file:

     <IfModule mod_headers.c>
       Header unset Strict-Transport-Security
       Header always set Strict-Transport-Security "max-age=0;includeSubDomains"
     </IfModule>

  3. Restart Apache: sudo systemctl restart httpd

What we are doing here is taking the max-age down to zero for the HSTS headers sent by Apache. This overrides the HSTS headers set via Passenger and allows browsers to get both HTTP and HTTPS content from your satellite server.
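To confirm the override is actually being served, inspect the response headers. Since I can't publish my satellite's hostname, satellite.example.com below is a stand-in, and the filter is demonstrated against a canned response:

```shell
# Against a live server you would run:
#   curl -skI https://satellite.example.com/ | grep -i strict-transport-security
# With the hack in place it should print a zeroed max-age. The same filter,
# shown against a canned response header:
printf 'HTTP/1.1 200 OK\r\nStrict-Transport-Security: max-age=0;includeSubDomains\r\n' \
    | grep -i 'strict-transport-security: max-age=0'
```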

Hopefully this won't be necessary for too long, however I have had little luck gaining much traction with Red Hat support or the satellite developers.

14Jan/16

On Vagrant, Red Hat Satellite, and Red Hat Subscription Manager

Red Hat provides scripts for subscribing a Vagrant system via subscription manager. However, these mysterious scripts are paywalled: you either need to be a Technology Partner with Red Hat (which means being an ISV, OEM, or IHV) or you need to pay $99 a year for the Red Hat Developer Suite subscription. Personally, I believe paywalling this stuff was an extremely poor decision on Red Hat's part, and I would encourage any other affected users to communicate their displeasure to Red Hat as I have.

Nevertheless, we deal with what we are given, and in this case we are given nothing. So how do we subscribe a Vagrant box via subscription manager so we can easily test against it? Enter Vagrant Triggers.

Assumptions:

I am assuming you are running on a Linux(ish) platform for your development host.

I am further assuming that you have a Red Hat Satellite server that is >= 6.0, and that on that satellite server you have configured an activation key, in an appropriate organization, for use with your vagrant images.

Get the Bits:

You will need Vagrant installed and working in order for any of this to work.

After Vagrant is installed and verified, install Vagrant Triggers:

$ vagrant plugin install vagrant-triggers

Configure Vagrant Triggers:

Vagrant Triggers allows you to run arbitrary actions around any vagrant command. Essentially, all we are interested in is running subscription-manager register after a vagrant image is brought up, and subscription-manager unregister before an image is destroyed.

Add the following to your ~/.vagrant.d/Vagrantfile (create it if it does not exist):

Vagrant.configure("2") do |config|

  if Vagrant.has_plugin?("vagrant-triggers")
    config.trigger.before :destroy do
      info "Removing system from RHSM if it is registered."
      run_remote "/usr/sbin/subscription-manager unregister"
    end

    config.trigger.after :up do
      info "Registering system to CU Boulder Satellite in the Vagrant Organization."
      run_remote "rpm -Uvh http://<satellite-server>/pub/katello-ca-consumer-latest.noarch.rpm && subscription-manager register --org '<org>' --activationkey '<activation-key>'"
    end
  end
end

You will obviously need to substitute the address of your satellite server in, place the name of your organization in, and enter the appropriate activation key that you have created.

Gotchas:

This method is not without its drawbacks. I'll list the ones that I know.

All commands must exit with a 0 status:

If they don't, the vagrant up or destroy process will be halted immediately. This effectively means that, lacking networking, you won't be able to create or destroy a RHEL vagrant image. You can manually delete the image, but that is about all.
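One way to soften this, untested beyond the happy path: append `|| true` inside the run_remote string so a failing (or missing) subscription-manager never halts vagrant. The guard behaves like this:

```shell
# Even when the command is absent (e.g. on a non-RHEL guest), '|| true'
# forces a zero exit status, so the trigger would not halt the run:
/usr/sbin/subscription-manager unregister >/dev/null 2>&1 || true
echo "exit status: $?"
```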

This Runs for ALL Vagrant Images:

Bringing up an Ubuntu image? Ain't going to fly: it is going to try to run the subscription-manager command, and when that fails, execution will be halted.

You Cannot Destroy a Halted Image:

Ever shut down your system and left behind some running Vagrant images? They get suspended, right? Well, vagrant destroy is not going to work, because it needs to run the subscription-manager command and the system is not actually up. You can start the image and then run the destroy, but a straight destroy will not work.

Fin:

That is it; you should now be able to bring up RHEL images and have them registered so that you can pull down packages. Pay attention to the gotchas. I suspect many of these can be worked around with more error checking, but this was a first pass to get things going. Thoughts are welcome.

24Dec/15

On Building Red Hat Enterprise Linux Vagrant Boxes

Vagrant is an extremely powerful tool; the streamlining of testing is potent. It has probably saved me much trouble, and probably created some more too (as with all steps forward, there are often steps back).

Red Hat does, in fact, provide official vagrant images as part of its Container Development Kit (CDK). However, you either need to be a technology partner with Red Hat, or you need to purchase a Red Hat Developer Suite subscription, starting at $99 a year, and be a part of the Red Hat Developer program (which is free).

Put simply: unless you are at a for-profit company that is also an IHV, OEM, or ISV, you are going to have to pay for the privilege of getting access to Red Hat's official vagrant images.

I'll be blunt: I believe Red Hat has made a very poor decision in paywalling these images. They are, in essence, making it more difficult for us to deploy their OS into our environments. I have spoken with them; maybe something will change, maybe it won't. I would encourage any folks reading this to do the same.

Anyway, because of Red Hat's decisions on the matter, I needed to build vagrant box files for RHEL 6 and RHEL 7. I wanted to do this in the most efficient manner possible (read: laziest). After a number of hours of research, here is what I came up with. It is the quickest method toward that goal that I could find; any improvements will be posted here.

Prerequisites:

You will need Packer installed, as it drives the image builds. As well, an Atlas account (which is free) will ease your burdens in terms of distribution, if you are working with a team.

VirtualBox is not strictly required in general; however, since we will be building the box images for the VirtualBox provider, VirtualBox is required here. If you are building for a different provider, well, you probably know what is needed.

Get the Bits:

Download the RHEL ISOs you desire to build. Personally, I always pull the latest point release, 6.7 and 7.2 at the time of this writing. Obviously, you need an active RHEL subscription to get the ISOs.

Clone the packer build templates from the kind folks over at Chef:

$ git clone https://github.com/opscode/bento.git
$ cd bento

The directory contains packer templates for a huge number of flavors. We are clearly interested in the RHEL templates:

$ ls -1 rhel*
rhel-5.11-i386.json
rhel-5.11-x86_64.json
rhel-6.6-i386.json
rhel-6.6-x86_64.json
rhel-6.7-i386.json
rhel-6.7-x86_64.json
rhel-7.2-x86_64.json

Create an iso directory inside the bento directory and copy the ISOs that you obtained into it:

$ mkdir -p iso/rhel/
$ cp /some/location/*.iso iso/rhel/

Build the RHEL Images with Packer:

We are going to build box images for VirtualBox only. You can certainly build the image using other builders, just check out the packer documentation.

$ packer build -only=virtualbox-iso -var "mirror=file:///$(pwd)/iso" rhel-7.2-x86_64.json

You are going to see a lot of output go by, along with VirtualBox windows opening as the image is prepared. Wait, and at the end you will have a shiny new RHEL 7.2 box image for vagrant located in the 'builds/' directory.

Fin:

Congratulations, you now have a RHEL vagrant box with a minimum of work done by you; truly, standing on the shoulders of giants.

I would recommend that folks take a look at Atlas for group collaboration. It is very likely that distributing the RHEL bits publicly is a violation of some legalese, so keep 'em private.

8Apr/15

On Cisco ASA and Secure Logging to Rsyslog

Rsyslog has the ability to receive logs securely via TCP; there is a very good tutorial in the rsyslog documentation on how to set this up, so I will not cover that aspect here.

A Cisco ASA device running at least version 8.0(2) is able to support secure logging. However, documentation for this on the ground is thin at best. Having just worked through the entire process with Cisco support, I figured I would put this documentation up for the use of other folks.

Ensure Your Certificates are in Order:

Probably the most important thing to do is to ensure that your entire certificate chain is saved on the ASA. You cannot, for instance, have a root CA and two subordinate CAs, one subordinate issuing certificates to your ASA and the other issuing certificates to your rsyslog server, and expect things to work. All certificates in both chains must be loaded on the ASA.

Further, you will need to associate a certificate with the interface from which you will be sending the syslog packets. This certificate will have to come from the same root CA or subordinate CA as the certificate configured on the rsyslog server.
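The chain requirement can be sanity-checked off-box with openssl before you ever touch the ASA. Here is a toy reproduction (all names invented) of the invariant "the server cert must validate against the CA material you load":

```shell
# Create a toy root CA:
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
    -days 30 -subj '/CN=Toy Root CA'

# Create a server key/CSR and sign it with the toy CA:
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
    -subj '/CN=rsyslog.example.com'
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 30 -out server.crt

# The check the ASA must be able to perform; should print: server.crt: OK
openssl verify -CAfile ca.crt server.crt
```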

Configure the ASA for Logging:

I will walk through each of the following commands after the fact, but here is the basic configuration:

logging enable
logging timestamp
logging facility 20
logging host interface IP TCP/6514 secure
logging permit-hostdown

  • logging enable: This command enables the logging facilities of the ASA.
  • logging timestamp: This enables time stamps for the syslog messages. This is not required, but it is often nice to know when something happened, assuming your clock is set.
  • logging facility 20: This sets the logging facility that the syslog messages will be sent as, 20 equates to local4, 21 local5, 22 local6, and so forth.
  • logging host interface IP TCP/6514 secure: This sets up the syslog logging host, sending messages out of the defined interface to the defined IP. TCP/6514 defines TCP as the protocol, which is required for TLS secure logging and 6514 defines the port of the remote rsyslog server. secure states that the messages are to be sent encrypted. Please note: when you run 'show running-config logging', TCP/6514 will be shown as 6/6514 instead, this is a bug in the display of the configuration.
  • logging permit-hostdown: This command specifies that traffic through the ASA will still flow if the ASA is unable to send messages to the syslog server; otherwise, traffic will be blocked.
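The facility numbering above follows the standard syslog convention that local0 starts at 16, so localN = facility - 16. A trivial conversion (pure arithmetic, nothing ASA-specific):

```shell
# facility 20 -> local4, facility 21 -> local5, and so forth:
facility=20
echo "local$((facility - 16))"
```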

As long as the certificate chain is correct and the ASA can reach the rsyslog server, the above should be all that you need. If however you are running into issues continue on to the troubleshooting section, as Cisco's errors for this particular issue are rather opaque.

Troubleshooting:

Enable the following debug messages:

debug ssl 255
debug crypto ca 255
debug crypto ca messages 255
debug crypto ca transactions 255

You may end up with debug messages like the following:

CERT-C: I pkixpath.c(1167) : Error #72eh
CRYPTO_PKI: Certificate validation: Failed, status: 1838
CRYPTO_PKI:PKI Verify Certificate Check Cert Revocation unknown error 1838

This highly informative error message can mean a few different things:

  1. The intermediate CA is not installed, or not installed correctly. Make sure you check that the entire chain is in place.
  2. The certificate is not installed on the logging server. Ensure that you have the right certificates set up for rsyslog and that they go to the same intermediate CA or root.
  3. Finally, the CA certificate may not be correct. If this is a self-generated certificate, make sure the constraints, etc., for the CA certificate are correct.
Filed under: Cisco, Networking
2Jun/14

On Building the Gemalto .NET PKCS 11 Module for Linux

Gemalto manufactures a number of different smart cards. However, one that seems to be rather popular is the Gemalto IDPrime .NET series of cards. Gemalto has also kindly developed an LGPL licensed PKCS 11 module for this series of cards that can be used in Linux. However, tracking the module down can be a real pain.

Gemalto's website seems to be only sporadically maintained: lots of dead links, pointers to drivers and tools that no longer exist, etc. As such, I had to chase this down a little, and in the spirit of helping others (and my future self) out, here is what needs to be done to find, compile, and install the module.

Get the Module Source Code:

I was able to locate the module source code here. Probably the simplest way to get the source is simply to click on the 'Zip Archive' link under 'Download in other formats:' this will give you a zip file of the entire archive at its current revision.

You can also use Subversion to pull down the repository if you like:

svn checkout https://svn.macosforge.org/repository/smartcardservices

Build the Module:

If you downloaded the zip file you will have a file like 'trunk-REVISION.zip'; the REVISION number will change as the repository changes. Unzip the file and you should end up with a 'trunk' folder, then cd into the module's folder:

$ unzip trunk-160.zip
$ cd trunk/SmartCardServices/src/PKCS11dotNetV2/

If you used Subversion to check out the repository, you should have a 'smartcardservices' directory, inside of which are all the branches and the trunk. We will be working against the trunk only, and only building a very small part of it.

$ cd smartcardservices/trunk/SmartCardServices/src/PKCS11dotNetV2/

At this point you need to run autogen.sh for the PKCS11dotNetV2 module in order to create the configure files etc.

$ chmod 755 autogen.sh
$ ./autogen.sh

Hopefully everything goes smoothly; now we run through the general configure, make, make install steps. Note that you will need to build against the system boost libraries, so you need to pass a flag to configure for that.

$ ./configure --enable-system-boost
$ make

By default the module will be installed into /usr/local/lib/. If this is a problem, you will need to adjust your configure flags to set the location; read through './configure --help' for more information.

Now we install the module:

$ make install

The module is now installed as /usr/local/lib/pkcs11/libgtop11dotnet.so; this is the location you will need to point programs like Thunderbird and Firefox at in order to use the module.

18Feb/14

On Cisco ASA High Availability Remote Upgrades

Problem:

One of the issues with setting up the Cisco ASA in a High Availability configuration is that, though most things are replicated, installation of the AnyConnect software, ASDM, and the ASA base software itself is not replicated. For folks sitting behind the ASA this is not a problem, you can just connect to one ASA and then the other to update the software. However, if you are connecting remotely via the VPN this is simply not possible. There is a feature request open with Cisco for this ability, but since it has been open for a few years I am not holding out hope.

So how does one upgrade the software on both the primary and secondary units remotely?

Solutions:

There are two generally accepted solutions to this conundrum.

Upgrade and Fail Over Method:

In this method you upgrade the primary using whatever means you usually use, such as ASDM or the CLI. However, after the ASDM and ASA software is installed on the system, you do not activate the software; instead, you fail over to the secondary, reconnect using ASDM or SSH, and do the install on the secondary system. You then fail back to the primary, activate the software, and reboot the systems in whatever order you please.

Upgrade via TFTP Method:

Unfortunately, in my particular circumstance I simply can't fail over, as failover is not working due to IPv6 issues. In order to do this you are going to need a TFTP server on a network segment that is accessible to the ASAs. I won't go through setting up a TFTP server, as it is a pretty simple process.

The following is all done from the CLI on the primary:

Copy over the ASA software to the secondary:
failover exec standby copy /noconfirm tftp://<tftp-server>/asa-smp-<version>-k8.bin flash:/

Copy over the ASDM software to the secondary:
failover exec standby copy /noconfirm tftp://<tftp-server>/asdm-<version>.bin flash:/

If necessary copy over the new versions of the AnyConnect client to the secondary:
failover exec standby copy /noconfirm tftp://<tftp-server>/anyconnect-<platform>-<architecture>-<version>.pkg flash:/

Your secondary/standby unit is now prepped for the upgrade. At this point repeat the same steps for the primary unit:

Copy over the ASA software to the primary:
copy /noconfirm tftp://<tftp-server>/asa-smp-<version>-k8.bin flash:/

Copy over the ASDM software to the primary:
copy /noconfirm tftp://<tftp-server>/asdm-<version>.bin flash:/

If necessary copy over the new versions of the AnyConnect client to the primary:
copy /noconfirm tftp://<tftp-server>/anyconnect-<platform>-<architecture>-<version>.pkg flash:/

Now issue the command on the primary to configure the ASA, ASDM, and AnyConnect (if needed) images as the defaults. Because these commands are replicated automatically to the secondary execution is only required on the primary:

On the primary set the system to boot using the new ASA software:
boot system disk0:/asa-smp-<version>-k8.bin

On the primary set the new version of ASDM to be used:
asdm image disk0:/asdm-<version>.bin

If necessary on the primary set the new version of AnyConnect to be used:
webvpn
anyconnect image disk0:/anyconnect-win-<architecture>-<version>.pkg 1 regex "Windows NT"
anyconnect image disk0:/anyconnect-macosx-i386-<version>.pkg 2 regex "Intel Mac OS X"

At this point your configuration is set for the new versions of the software on both the primary and secondary, all that is left is to write it to memory and reboot either the primary or the secondary depending on how you like to gamble. I prefer to reboot the secondary first and then the primary:

Write the configuration to memory on the primary:
write mem

Reboot the secondary:
failover reload-standby

After the secondary unit reboots (assuming nothing goes wrong) check its status:
show failover state

Reboot the primary:
reload noconfirm save-config

You should now have the latest version of the software installed on both the primary and secondary units.

12Jul/13

On FreeIPA, PKI, and Exporting the CA

FreeIPA, by default, generates a certificate for each host that is joined to an IPA domain. It also copies the domain's CA certificate over to the client system. However, both of these certificates are held in an NSS DB, which may or may not be the most useful location for your needs.

In my particular case, I needed the CA certificate for Postfix. Postfix is unable to use NSS DBs (as far as I know). So I needed to extract the CA certificate from the NSS DB and add it to the /etc/pki/tls/certs/ca-bundle.crt file. This allows other programs, such as Postfix, that don't understand NSS to use the CA certificate for verification.

Here is what I did to export the certificate and add it to the ca-bundle.crt file:

Environment:

All work was done in a RHEL 6.4 x86_64 environment, your mileage on other platforms may vary.

Find the Correct Certificate:

The NSS DB that FreeIPA uses is located in /etc/pki/nssdb/. The first step is to take a look at that location and find out what certificates are in the DB.


certutil -L -d /etc/pki/nssdb/

Certificate Nickname                                         Trust Attributes
                                                             SSL,S/MIME,JAR/XPI

IPA CA                                                       CT,C,C
IPA Machine Certificate - host.example.com                   u,u,u

Let's cover the flags there:

  • L: List all certs.
  • d: Certificate directory, this should be followed by the location to use, the default is ~/.netscape.

The CA certificate is helpfully labelled IPA CA; it might also be listed as FreeIPA CA. What really gives it away, though, is the trust attributes. The CT,C,C fields indicate that this is a CA certificate; you can find more information in this Oracle blog post.

Export the CA Certificate:

Now that we have identified the CA certificate, it is time to export it. We are going to export this as an ASCII encoded certificate into a separate file. At that point we will use openssl to add some metadata about the certificate for convenience. Then we will append the contents of that certificate to the /etc/pki/tls/certs/ca-bundle.crt file.

In order to export the certificate run:

certutil -L -d /etc/pki/nssdb/ -a -n 'IPA CA' > IPA_CA.crt

Let's cover the new flags:

  • a: For single certificate, print ASCII encoding.
  • n: Specify the nickname of the certificate to operate on.

So basically we are searching for the certificate labelled 'IPA CA' (substitute your own name here if necessary) and exporting it in ASCII format, redirecting the output to a file of course.

Add Metadata to SSL Certificate:

This is in no way a requirement; you could simply take the contents of the file you created and add it to ca-bundle.crt. However, if you look in the ca-bundle.crt file, you will find that almost all the certificates are preceded by information about them in (relatively) clear text. This is handy for the poor human who has to read through the file and try to discern what is what. It might even be useful for something else, but I don't know what that is.

In order to get that metadata about the certificate you need to use openssl as follows:

openssl x509 -in IPA_CA.crt -out IPA_CA.pem -text

Again, we will cover the flags being used; these are pretty obvious, but for completeness' sake:

  • in: The name of the input file.
  • out: The name of the output file.
  • text: "Prints out the certificate in text form. Full details are output including the public key, signature algorithms, issuer and subject names, serial number any extensions present and any trust settings."

If you take a look at the IPA_CA.pem file you now have your cert and associated metadata. Finally, you will want to append this to the ca-bundle.crt file.

Append the CA:

This is a very simple step:

cat IPA_CA.pem >> /etc/pki/tls/certs/ca-bundle.crt

In One Line:

One line, sort of. You are going to need to know the name of the certificate before you can export it, but assuming you know that, then this should work:

certutil -L -d /etc/pki/nssdb/ -a -n 'IPA CA' | openssl x509 -text >> /etc/pki/tls/certs/ca-bundle.crt

Caveats:

The /etc/pki/tls/certs/ca-bundle.crt file is part of the ca-certificates rpm, which does get updated. Updates arrive as a ca-bundle.crt.rpmnew file that should be moved to ca-bundle.crt; the trouble is, of course, that you will then need to re-add your CA. Setting up an automated process to re-add the CA is necessary, as is a process for moving the .rpmnew file into place.

As well, there is no easy, automated way to handle this file. Inserting is easy enough, but what happens if your CA certificate expires and needs to be replaced? Removal is not as easy. In the Fedora 20 timeframe there will hopefully be a solution to all of this in the form of the Shared System Certificates feature.
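A sketch of what the re-add automation might look like. The function name is mine, the fingerprint-as-marker idea is my own convention, and it should be exercised against scratch copies before being pointed at the real /etc/pki/tls/certs/ca-bundle.crt:

```shell
# Append a CA cert (with its -text metadata, as above) to a bundle only if it
# is not already there, keyed on the certificate's SHA-256 fingerprint, which
# is stored as a comment line above the cert:
ensure_ca_in_bundle() {
    bundle=$1 cert=$2
    fp=$(openssl x509 -in "$cert" -noout -fingerprint -sha256)
    if ! grep -qF "$fp" "$bundle" 2>/dev/null; then
        { echo "# $fp"; openssl x509 -in "$cert" -text; } >> "$bundle"
    fi
}
```

Running it twice against the same cert leaves a single copy in the bundle, so it is safe to call from cron after every yum update.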

9Jul/13

On FreeIPA, Postfix and a Relaying SMTP Client

I wanted to configure a number of my hosts to relay all their e-mail through an SMTP gateway that was already configured to allow Kerberos authentication (GSSAPI). The problem was, the SMTP clients were all over the internet IP space (both IPv4 and IPv6). Opening up subnets to allow relaying didn't seem like the smartest idea, as in some locations I didn't control all of the subnet, and opening up for each individual IP was a real pain and would not scale well. So I turned to using authentication in the Postfix SMTP client.

Relaying using authentication in the Postfix client is possible using name/password maps. However, I wanted to use Kerberos (GSSAPI) for the client authentication. Using Kerberos allows me to centrally control the resources. If a machine were to walk off my network, I can revoke all the Kerberos credentials associated with that machine, and it will no longer function. Besides, Kerberos is just sexy.

However, Kerberos being Kerberos, this configuration is a bit trickier and less straightforward than most. It took some back and forth with some helpful users on the Postfix e-mail list who eventually pointed me in the right direction. The following documentation is the result of that work.

Be forewarned, there may in fact be more efficient ways to do this. If you know of some, please post them up in the comments section.

Environment:

  • RHEL 6.4 x86_64 systems.
  • Red Hat IPA Server (FreeIPA) version 3.0 (although just a straight Kerberos back end would work as well).
  • Postfix 2.6.6
  • SELinux is enabled and in enforcing mode

Our infrastructure will be very basic, an SMTP client host, an SMTP server, and an IPA server. We will have client.example.com, smtp.example.com, and ipa.example.com. The IPA server is just acting, in this instance, as a simple Kerberos server, so you can substitute your own Kerberos server there and not use IPA if you need.

smtp.example.com and client.example.com are already tied into the IPA infrastructure. For our purposes this just means that they can easily use Kerberos.

Your mileage may vary with other environments; the general principles should be the same, but the specifics of the Postfix configuration and file locations are apt to change.

Server Configuration:

Configuring a Postfix mail server to use Kerberos is a well documented process, so I will not cover the steps required for that configuration here. With my specific server configuration all authentication must be done over TLS. This is probably the only thing that is different than most standard configurations.

A regular connection to the SMTP server does not advertise authentication:

telnet smtp.example.com 587
Trying 192.168.1.100...
Connected to smtp.example.com.
Escape character is '^]'.
220 smtp.example.com ESMTP Postfix
EHLO example.com
250-smtp.example.com
250-PIPELINING
250-SIZE 10240000
250-ETRN
250-STARTTLS
250-ENHANCEDSTATUSCODES
250-8BITMIME
250 DSN

However using STARTTLS you get the authentication advertisements:

openssl s_client -host smtp.example.com -port 587 -starttls smtp -quiet
depth=0 O = EXAMPLE.COM, CN = smtp.example.com
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 O = EXAMPLE.COM, CN = smtp.example.com
verify error:num=27:certificate not trusted
verify return:1
depth=0 O = EXAMPLE.COM, CN = smtp.example.com
verify error:num=21:unable to verify the first certificate
verify return:1
250 DSN
EHLO example.com
250-smtp.example.com
250-PIPELINING
250-SIZE 10240000
250-ETRN
250-AUTH PLAIN GSSAPI LOGIN
250-ENHANCEDSTATUSCODES
250-8BITMIME
250 DSN

Client Configuration:

Install Postfix:

Postfix must, of course, be installed and running on the client host.

sudo yum install postfix

Modify Postfix's Configuration:

After installation, we modify the /etc/postfix/main.cf file and add the directives needed for the SMTP client configuration:

smtp_tls_CAfile = /etc/pki/tls/certs/ca-bundle.crt
smtp_tls_session_cache_database = btree:${data_directory}/smtp_tls_session_cache
smtp_tls_security_level = secure
smtp_tls_mandatory_ciphers = high
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_mechanism_filter = gssapi
relayhost = smtp.example.com
import_environment =
                     MAIL_CONFIG MAIL_DEBUG MAIL_LOGTAG TZ LANG=C
                     KRB5CCNAME=FILE:${queue_directory}/kerberos/krb5_ccache

We will go through these directives individually:

  • smtp_tls_CAfile: This points to a file containing the Certificate Authority (CA) certificates. By default the location listed above is the central CA location for RHEL systems. This will allow the SMTP client to verify the authenticity of the cert presented to it upon connection if the certificate is signed by one of the well known CAs in the file. If you are using self signed certificates issued by the IPA CA you will need to add the CA certificate to the ca-bundle in order for TLS verification to work.
  • smtp_tls_session_cache_database: This btree file keeps a cache of the TLS information for connections allowing easier build up and tear down of connections.
  • smtp_tls_security_level: This directive sets the security level for the TLS session. We choose secure here: because we control both the client and the server, certificate verification should work. We are also forced to use secure because GSSAPI inside of TLS does not perform channel binding, and a weaker setting could allow session hijacking.
  • smtp_tls_mandatory_ciphers: This directive sets the ciphers we choose to use for communication between the client and the server. We set this to high on the assumption that because we control both the client and server these ciphers will work. Because we don't have to inter-operate with the larger internet we can be much more restrictive in terms of what we will allow.
  • smtp_sasl_auth_enable: This enables SASL authentication for the SMTP client, it also necessitates the next directive being in place.
  • smtp_sasl_password_maps: This directive points to the hash mapped location of the SASL password file, we will configure that file in the next step.
  • smtp_sasl_mechanism_filter: This directive is probably not strictly necessary. The client and the server will look for authentication methods that they both share by default, however this will assure that only GSSAPI (Kerberos) is used.
  • relayhost: Of course this points to the location for e-mail to be relayed through.
  • import_environment: This directive sets a number of environmental variables to be imported and passed to the SMTP client. Most important is the KRB5CCNAME, which points to the location of the Kerberos credential cache. The other variables are just the defaults that are normally imported.
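The smtp_tls_CAfile point above mentions adding the IPA CA certificate to the system bundle but gives no command. On a RHEL system with the ca-trust tooling, something like the following should work; /etc/ipa/ca.crt is where ipa-client-install normally drops the CA certificate, so adjust the path if your client was enrolled differently:

```shell
# Copy the IPA CA certificate into the trust anchors directory
sudo cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa-ca.crt

# Regenerate the consolidated bundles, including the
# /etc/pki/tls/certs/ca-bundle.crt file that smtp_tls_CAfile points at
sudo update-ca-trust
```

On older releases without update-ca-trust you would instead append the certificate to the bundle file directly, though that change can be lost on package updates.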

Create the SASL Password Map:

We now create a mapping of host names to associated credentials. This file needs to exist; however, in our case the actual credentials don't matter, as authentication is handled by Kerberos.

Edit /etc/postfix/sasl_passwd:

#Destination                Credentials
smtp.example.com            gssapi:nopassword
smtp.example.com:submission gssapi:nopassword

Though there is no real sensitive information in this file, it is always best to restrict access unless needed. Because Postfix reads this file before it drops privileges, the file can be owned by root. So we set the ownership to root:root and the mode to 640:

sudo chmod 640 /etc/postfix/sasl_passwd
sudo chown 0:0 /etc/postfix/sasl_passwd

Run postmap to convert the file to a hash DB:

sudo postmap hash:/etc/postfix/sasl_passwd
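To confirm the map was built correctly, you can query it back; postmap -q looks a key up in the compiled map (the hostname here is the example relayhost from above):

```shell
# Query the compiled map for the relayhost entry; this should
# print the credentials string from the source file
sudo postmap -q smtp.example.com hash:/etc/postfix/sasl_passwd
```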

Create and Retrieve the keytab:

There is nothing really special about using IPA in this section; I am simply relying on the IPA convenience tools. If you are running a plain Kerberos environment you can do the same thing just as easily with kadmin.

Make sure that you have a ticket as a user with admin privileges in IPA (or as the admin user), then create the service principal:

ipa service-add smtp/client.example.com

Retrieve the keytab:

ipa-getkeytab -s ipa.example.com -p smtp/client.example.com -k smtp.keytab

Move the keytab into the proper location and set the ownership and permissions:
There is no standardized Kerberos keytab location on the system (that I know of), so I tend to create an /etc/keytabs directory and store keytabs there. You can place your keytabs wherever you like, as long as the permissions are correct and SELinux doesn't deny access.

sudo mkdir -m 755 /etc/keytabs/
sudo mv smtp.keytab /etc/keytabs/
sudo chown postfix:postfix /etc/keytabs/smtp.keytab
sudo chmod 440 /etc/keytabs/smtp.keytab
sudo restorecon -Rv /etc/keytabs/
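Before wiring the keytab into cron, it is worth a quick sanity check; klist -k lists the principals a keytab contains (the principal shown assumes the example client host):

```shell
# List the keys stored in the keytab; you should see entries for
# smtp/client.example.com@EXAMPLE.COM with their key version numbers
sudo klist -k -t /etc/keytabs/smtp.keytab
```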

Create the Credential Cache:

Create a location for the Kerberos credential cache. Placing it in /var/spool/postfix/kerberos/ works with the current SELinux policy. Below are the policy rules allowing postfix_smtp_t access to the postfix_spool_t label:

sesearch -s postfix_smtp_t -t postfix_spool_t --allow
Found 2 semantic av rules:
allow postfix_smtp_t postfix_spool_t : file { ioctl read write getattr lock append open } ;
allow postfix_smtp_t postfix_spool_t : dir { ioctl read getattr lock search open } ;

So we create /var/spool/postfix/kerberos/, owned by postfix:postfix, mode 755:

sudo mkdir -m 755 /var/spool/postfix/kerberos/
sudo chown postfix:postfix /var/spool/postfix/kerberos/

Create the Kerberos Cron Job:

You now need to add a cron job, running as the postfix user, that renews the ticket using the keytab:

sudo crontab -u postfix -e

This cron job fetches a ticket granting ticket (TGT) at system reboot and once every 12 hours. My policy allows Kerberos tickets to be valid for 24 hours, so this is a bit of overkill. cron contents:

@reboot      kinit -c FILE:/var/spool/postfix/kerberos/krb5_ccache -k -t /etc/keytabs/smtp.keytab smtp/$(uname -n)
0 */12 * * * kinit -c FILE:/var/spool/postfix/kerberos/krb5_ccache -k -t /etc/keytabs/smtp.keytab smtp/$(uname -n)

At this point you should have a /var/spool/postfix/kerberos/krb5_ccache file owned by the postfix user. If you don't, you are going to need to troubleshoot what happened. Try running the kinit command above using sudo and see if the file appears, and check /var/log/messages for any errors.
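A quick way to confirm the cache is usable is to inspect it as the postfix user; a missing or expired TGT will show up immediately:

```shell
# Show the tickets in Postfix's credential cache; expect a krbtgt
# entry whose expiry time has not yet passed
sudo -u postfix klist -c FILE:/var/spool/postfix/kerberos/krb5_ccache
```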

All the constituent parts are now in place. Restart Postfix on the client system and test.

sudo service postfix restart

Test:

At this point your mail client should be configured to use Kerberos for authentication against your mail server. Assuming your mail server is properly configured, things should just work.

Attempt to send a test e-mail from the client host and check /var/log/maillog on both the client and the server for any messages. If Kerberos authentication is working properly, the SMTP server will log the SASL method as GSSAPI.

If things are not working, a handy way to get a low-level view of exactly what is going on is the debug_peer_list directive in /etc/postfix/main.cf. Add the directive and reload Postfix: the server side will log exactly what commands are being received, and vice versa on the client side:

#For the client
debug_peer_list = smtp.example.com
#For the server
debug_peer_list = client.example.com

Fin.

4Jan/13

On Python, LDAP, and Kerberos

Currently I am working on a program that needs to bind to an LDAP server. Of course I want to do this securely, and I need to authenticate using Kerberos in the process to have the appropriate rights to extract the information I need.

Below is a very simple program (as in, no exception checking) to securely bind and authenticate to an LDAP server using TLS. Examples like this are a bit few and far between, so I figured another one could help. This was created on a Fedora 18 system using Python 2.7 and python-ldap 2.4.6.

import ldap
import ldap.sasl

#Initialize your connection and force it to use TLS
con = ldap.initialize('ldap://yourserver.com')
con.set_option(ldap.OPT_X_TLS_DEMAND, True)
con.start_tls_s()

#Configure auth to use GSSAPI
auth = ldap.sasl.gssapi("")

#Actually make the connection
con.sasl_interactive_bind_s("", auth)

#Your operation here

This assumes, of course, that you have a Kerberos infrastructure in place and a valid TGT. In addition, OpenLDAP will need to be configured with the appropriate Certificate Authority in order to securely bind to the LDAP server.
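The snippet above deliberately omits error handling; a slightly more defensive sketch might look like the following. The exception classes are part of python-ldap, but which ones you care about is a judgment call, and the server URI is a placeholder:

```python
import ldap
import ldap.sasl

def gssapi_bind(uri):
    """Bind to uri over StartTLS, authenticating with the current Kerberos TGT."""
    con = ldap.initialize(uri)
    con.set_option(ldap.OPT_X_TLS_DEMAND, True)
    try:
        con.start_tls_s()
        con.sasl_interactive_bind_s("", ldap.sasl.gssapi(""))
    except ldap.SERVER_DOWN:
        #Server unreachable, or the TLS negotiation failed outright
        raise SystemExit("Could not reach %s" % uri)
    except ldap.LOCAL_ERROR:
        #Typically means there is no valid TGT; run kinit first
        raise SystemExit("GSSAPI bind failed; check klist output")
    return con

con = gssapi_bind('ldap://yourserver.com')
#Your operation here
```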

This work by Erinn Looney-Triggs is licensed under a Creative Commons Attribution-ShareAlike 3.0 United States.