Error on the sender's end ($SPLUNK_HOME/var/log/splunk/splunkd.log):
02-18-2015 12:32:06.160 +1000 ERROR TcpOutputFd - Read error. Connection reset by peer
Error on the receiving end's $SPLUNK_HOME/var/log/splunk/splunkd.log:
02-18-2015 12:31:14.423 +1000 ERROR TcpInputProc - Error encountered for connection from src=senderip:47960. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol
I checked the certificate to make sure the common name is right:
sudo openssl x509 -text -noout -in $SPLUNK_HOME/etc/auth/mycerts/sender.pem | grep Subj
Subject: C=AU, ST=Queensland, L=Brisbane, O=sender, OU=Company, CN=sender.example.com/[email protected]

Then I compared that against the names in the forwarder's config:

/opt/splunkforwarder/etc$ sudo grep -i sender * -R
etc/system/local/server.conf:serverName = sender
etc/system/local/inputs.conf:host = sender
etc/system/local/outputs.conf:sslCertPath = $SPLUNK_HOME/etc/auth/mycerts/sender.pem
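For context, the sender-side SSL settings live in the [tcpout] stanza of outputs.conf. A minimal sketch, assuming Splunk 6.x key names (the stanza name, password placeholder, and CA file path here are invented for illustration):

```ini
[tcpout:ssl_group]
server = receiver.example.com:9997
sslCertPath = $SPLUNK_HOME/etc/auth/mycerts/sender.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/cacert.pem
sslPassword = <certificate password>
sslVerifyServerCert = true
```

Note that sslRootCAPath is the CA cert file that turns out to matter below.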
So the common name on the certificate is different to the name presented by the server. I reissued the cert for the new server name and restarted, but still no good:
02-18-2015 12:44:44.163 +1000 ERROR TcpInputProc - Error encountered for connection from src=sender:47997. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol
This was in the logs on the sender’s end:
02-18-2015 12:48:19.271 +1000 ERROR TcpOutputProc - Error initializing SSL context - invalid sslCertPath for server receiver.example.com:9997
Turns out, the CA cert was wrong. The error reads like "file not found", but the file was there; it just didn't work. The CA cert file needs to include both certificates from the chain, which is different to how the chain is normally exported from a Windows CA server.
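In other words, the CA file has to carry every certificate in the chain, typically by concatenating the intermediate and root PEMs into one file. A toy sketch with openssl (all filenames and CNs below are invented, not from the original setup) shows why a single-cert CA file fails:

```shell
# Self-signed root CA:
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.pem \
  -subj "/CN=ToyRootCA" -days 1

# Intermediate CA, signed by the root (CA:TRUE so it may sign certs):
openssl req -newkey rsa:2048 -nodes -keyout inter.key -out inter.csr \
  -subj "/CN=ToyIntermediateCA"
printf 'basicConstraints=CA:TRUE\n' > ca.ext
openssl x509 -req -in inter.csr -CA root.pem -CAkey root.key \
  -CAcreateserial -extfile ca.ext -out inter.pem -days 1

# Server cert, signed by the intermediate:
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=receiver.example.com"
openssl x509 -req -in server.csr -CA inter.pem -CAkey inter.key \
  -CAcreateserial -out server.pem -days 1

# With only the intermediate in the CA file, verification fails
# (chain can't be built up to a trusted self-signed root):
openssl verify -CAfile inter.pem server.pem || true

# Concatenate both chain certs into one PEM and it verifies:
cat inter.pem root.pem > cacert.pem
openssl verify -CAfile cacert.pem server.pem
```

The concatenated cacert.pem is the shape Splunk wants for its CA cert file; an export that contains only one certificate from the chain produces the misleading "invalid sslCertPath" error above.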
Once I fixed that, all good:
02-18-2015 12:53:09.862 +1000 INFO TcpOutputProc - Connected to idx=receiver.example.com:9997 using ACK.
Sometimes it pays to check both logs 🙂