New universal forwarder won’t connect to another forwarder

Error on the sender’s end ($SPLUNK_HOME/var/log/splunk/splunkd.log):

Error on the receiving end ($SPLUNK_HOME/var/log/splunk/splunkd.log):

I checked the certificate to make sure the common name is right:

So, the common name on the certificate is different from the name presented by the server. I reissued the cert for the new server name and restarted, but still no good.

This was in the logs on the sender’s end:

Turns out, the CA cert was wrong. It presents as a “file not found” error, but the file is there; it just doesn’t work. It needs to include both certificates from the chain, which is different from how the chain is normally exported from the Windows CA server.
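As a sketch of the fix (the filenames are assumptions, and the first two commands just generate stand-in certs for illustration — in practice you’d use the intermediate and root CA certs exported from the Windows CA server, in PEM format):

```shell
# Stand-in certificates for illustration only - replace these with the
# real intermediate and root CA certs from your CA server.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout int.key -out intermediate-ca.pem -subj "/CN=Example Intermediate CA"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout root.key -out root-ca.pem -subj "/CN=Example Root CA"

# The fix: the CA cert file Splunk points at must contain BOTH certs
# from the chain, concatenated in PEM format.
cat intermediate-ca.pem root-ca.pem > cacert.pem

# Sanity check: both certificates should be listed in the combined file.
openssl crl2pkcs7 -nocrl -certfile cacert.pem | openssl pkcs7 -print_certs -noout
```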

Once I fixed that, all good:

Sometimes it pays to check both logs :)

IPtables logging firewall blocks

Basically, we’ll set up another chain for packets to be forwarded to, add a filter rule that moves matching packets to that chain, then add a logging rule so that anything landing in the chain gets logged.

Enabling logging

We’ll need to know where to put the filter for the redirection:

In this example, use line 9 on the second command.
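The listing commands were presumably along these lines (chain names are assumptions; run as root):

```shell
# Show the INPUT chain with line numbers, so we know where to insert
# the jump to the logging chain.
iptables -L INPUT --line-numbers -n

# Same for the FORWARD chain, if the box is routing traffic.
iptables -L FORWARD --line-numbers -n
```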

Create the rules:

Basically: create the new chain, then redirect packets to it. Add a LOG rule to the logging chain, then drop the packets to be sure.
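A minimal sketch of those steps (the chain name, line number, and the tcp/23 match are assumptions for illustration; run as root):

```shell
# Create the new chain.
iptables -N LOGGING

# Insert a jump into INPUT at the line number found earlier (9 here),
# so matching packets are handed to the LOGGING chain. The match is
# an example - adjust it to whatever traffic you want logged.
iptables -I INPUT 9 -p tcp --dport 23 -j LOGGING

# Log anything that lands in the chain, rate-limited to keep the logs
# sane, with a searchable prefix.
iptables -A LOGGING -m limit --limit 5/min -j LOG \
  --log-prefix "IPTables-Dropped: " --log-level 4

# Then drop the packets to be sure.
iptables -A LOGGING -j DROP
```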

Reverting/Disabling the changes

To undo it, first check the current line number of the redirect rule (it may have moved if you’ve made other changes):

Remove the rule:

Delete any rules on the LOGGING chain:

Delete the chain:
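The teardown, as a sketch matching the assumed rules above (use whatever line number the listing actually shows; run as root):

```shell
# Find the current line number of the jump rule.
iptables -L INPUT --line-numbers -n

# Remove the jump rule (9 here - substitute the number from the listing).
iptables -D INPUT 9

# Flush all rules from the LOGGING chain, then delete the chain itself.
iptables -F LOGGING
iptables -X LOGGING
```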

Relevant links (where I got the main part of the info from):

Troubleshooting Ironport HTTPS Certificate Issues

SSL is great, except when you’re trying to audit access or filter things, let alone do simple troubleshooting. Long story short, we run a man-in-the-middle-style system where our proxies are the HTTPS clients, and they have an SSL certificate which all of our clients trust.

This relies on the proxies trusting the certificate chain, and these chains need to be updated periodically. Here’s an example of how to fix it when it goes wrong.

  1. Open the site in a browser, which fails with a certificate trust issue.
  2. Looks like the certificate’s trust chain is wrong
  3. What now? Two things:
    1. Test it on a machine that isn’t using the SSL MITM
    2. Test with the Qualys SSL test.
  4. Once you’ve established that it only happens when the Ironport is involved, find the certificate chain.
    • Here’s what it looks like on Safari:
    • Or from the Qualys site:
  5. The certificates stored in the Ironports are shown by logging in to each WSA individually, then clicking
    Security services -> HTTPS Proxy -> Manage Trusted Root Certificates
  6. To verify we have the correct CA certificates, we need to compare the fingerprint to what’s in the results above. Starting with the root certificate, do the following.
    1. Find it by name in the Ironport interface
    2. Click the arrow next to the name, then right click on the “Download Certificate” link and download it.
    3. To check the fingerprint matches, open a command prompt and run “openssl x509 -fingerprint -noout -in [filename]”, where [filename] is the certificate file you downloaded before.
    4. This will show the fingerprint of the file, which you can match against the corresponding information above. It’ll be in hexadecimal; ignore any punctuation. If it’s correct, work up the chain until you find the one that’s wrong.
  7. Once you’ve found the missing/outdated certificate, you’ll need to find the updated one online. Google’s your friend.
  8. Download the certificate you require
  9. Install the intermediate certificate into each Ironport in your cluster individually
    1. Under “Custom Trusted Root Certificates” click “Import”
    2. Select the file you need to import
    3. Commit the changes
    4. Check that the certificate’s in the list
  10. Now try to access the site. You may need to clear your cache or reopen the browser (I’m looking at you, Internet Explorer)
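The fingerprint check in step 6 can be sketched like this (the first command just generates a stand-in cert for illustration — in practice the file is the one downloaded from the Ironport’s “Download Certificate” link):

```shell
# Stand-in certificate for illustration only.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out downloaded-ca.pem -subj "/CN=Example CA"

# Show the SHA-1 fingerprint of the downloaded file...
openssl x509 -sha1 -fingerprint -noout -in downloaded-ca.pem

# ...and normalise it (lowercase, no colons) so it can be compared
# directly against the value shown by Qualys or the browser.
openssl x509 -sha1 -fingerprint -noout -in downloaded-ca.pem \
  | cut -d= -f2 | tr -d ':' | tr 'A-F' 'a-f'
```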