ELK + Pfsense 2/2

You have now installed Elasticsearch, but you still need Logstash and Kibana. These two steps and the necessary configuration are covered on this page. The last step will be the configuration in your Pfsense.

Installing Kibana

First, make sure you're in the home directory. If you don't know how that works, just type "cd" and you're there.

Now we're downloading Kibana.

wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.3-x86_64.rpm

This will install Kibana; it's the same procedure as for Elasticsearch.

rpm -ivh kibana-6.2.3-x86_64.rpm

Then we edit the Kibana config file.

vi /etc/kibana/kibana.yml

You have to change the following lines and make sure they look like this:

server.port: 5601 (uncomment this line, i.e. remove the leading #)
server.host: "localhost" (uncomment this line, i.e. remove the leading #)
elasticsearch.url: "http://localhost:9200" (uncomment this line, i.e. remove the leading #)
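If you prefer not to edit the file by hand, the three lines can also be uncommented with sed. This is just a sketch: it works on a scratch copy in /tmp, and the sample file below only mimics the relevant commented-out lines of the real /etc/kibana/kibana.yml.

```shell
# Sample file mimicking the three commented-out lines in /etc/kibana/kibana.yml
cat > /tmp/kibana.yml <<'EOF'
#server.port: 5601
#server.host: "localhost"
#elasticsearch.url: "http://localhost:9200"
EOF

# Strip the leading '#' from exactly these three settings
sed -i \
  -e 's|^#server.port:|server.port:|' \
  -e 's|^#server.host:|server.host:|' \
  -e 's|^#elasticsearch.url:|elasticsearch.url:|' \
  /tmp/kibana.yml

# Show the result
grep -E '^(server\.|elasticsearch\.)' /tmp/kibana.yml
```

Once the output looks right, you can run the same sed against /etc/kibana/kibana.yml.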

Now you're already done with Kibana; that was a simple one. Here we do the same as with Elasticsearch.

sudo systemctl enable kibana
sudo systemctl start kibana

Installing Nginx with EPEL

Of course it gets a little more complicated now. We use Nginx as our web server, configure it as a reverse proxy, and set a password for accessing Kibana later. This gives us a little bit of security. I seriously do not recommend exposing this setup to the internet; you would need more security, otherwise you will run into trouble later.

First we need to install EPEL; you can use this command.

yum -y install epel-release

We also need HTTPD Tools.

yum -y install nginx httpd-tools

We change into the Nginx directory.

cd /etc/nginx/

Now we change the configuration; it should look like the image I have here. You have to remove some lines.

vi nginx.conf

Nginx Config 1

It has to look like this after the modification.

Nginx Config 2
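In case the images don't render, here is a rough sketch of what the trimmed nginx.conf typically ends up looking like. This is an assumption based on a stock CentOS nginx.conf, not an exact copy of the screenshots: the point is that the default server block inside http { } is removed, so only our kibana.conf in conf.d answers requests.

```nginx
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    access_log /var/log/nginx/access.log;

    sendfile on;
    keepalive_timeout 65;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # The stock "server { listen 80 default_server; ... }" block is removed here;
    # our reverse proxy lives in /etc/nginx/conf.d/kibana.conf instead.
    include /etc/nginx/conf.d/*.conf;
}
```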


Now we create a new config file for Kibana.

vi /etc/nginx/conf.d/kibana.conf

Paste the following lines into the new file kibana.conf. No other lines are required; please be careful, all of these lines are needed for the setup.

server {
    listen 80;
    server_name elk-stack.co;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Now we can set the password for the user "admin". This is for accessing Kibana through the web interface. Please set a strong password so that you at least have a little bit of security there.

sudo htpasswd -c /etc/nginx/.kibana-user admin
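If htpasswd is not available for some reason, an equivalent entry can also be generated with openssl. This is just a sketch: it writes to /tmp first, and 'S3cret!' is a placeholder password you should replace.

```shell
# Generate an htpasswd-style line for user "admin" using Apache's MD5 (apr1)
# scheme, which nginx's auth_basic understands; 'S3cret!' is a placeholder.
printf 'admin:%s\n' "$(openssl passwd -apr1 'S3cret!')" > /tmp/.kibana-user

# Inspect the result (one line of the form admin:$apr1$...)
cat /tmp/.kibana-user
```

Copy the file to /etc/nginx/.kibana-user once you're happy with it.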

Finally, we run the following commands to finish up.

nginx -t
systemctl enable nginx
systemctl start nginx

When you run those commands you will get the following result if you have done everything correctly.

Nginx test

Installing Logstash

Now we're installing Logstash. For this we again download a package.

wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.3.rpm

Then we install Logstash.

rpm -ivh logstash-6.2.3.rpm

Now we go into the TLS folder.

cd /etc/pki/tls

We're opening the openssl.cnf file.

vi openssl.cnf

Now we add the following line under [ v3_ca ]. You can see it in the picture. Please adjust the IP address to your Elasticsearch server's IP.

[ v3_ca ]

subjectAltName = IP: XXX.XXX.XXX.XXX


Generate the self-signed OpenSSL certificate.

openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt
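To check that the subjectAltName actually ended up in the certificate, you can inspect it with openssl x509. The sketch below runs entirely in /tmp with the placeholder IP 192.0.2.10; on your server you would point the same inspection command at /etc/pki/tls/certs/logstash-forwarder.crt.

```shell
# Minimal config with a SAN, mirroring the [ v3_ca ] edit (192.0.2.10 is a placeholder IP)
cat > /tmp/openssl-san.cnf <<'EOF'
[ req ]
distinguished_name = dn
x509_extensions    = v3_ca
prompt             = no
[ dn ]
CN = elk-stack
[ v3_ca ]
subjectAltName = IP: 192.0.2.10
EOF

# Generate a throwaway self-signed cert the same way as above, but in /tmp
openssl req -config /tmp/openssl-san.cnf -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 -keyout /tmp/test.key -out /tmp/test.crt

# The SAN should show up in the X509v3 extensions
openssl x509 -in /tmp/test.crt -noout -text | grep -A1 'Subject Alternative Name'
```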

Now we go into the logstash folder, just for fun ;=).

cd /etc/logstash/

We run the final commands for Logstash.

sudo systemctl enable logstash
sudo systemctl start logstash

This was the installation of Logstash.

Installing the MaxMind GeoIP Database

For this we need to go to the logstash folder, as I wrote above. If you're already there, you can skip the first command.

cd /etc/logstash
sudo wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz
sudo gunzip GeoLite2-City.mmdb.gz

Logstash Configuration: Grok Patterns and Config Files

Now you need to add the Grok patterns and the conf.d files for Logstash. If you don't know how to upload them, you can google WinSCP, or you can create them via the CLI using "vi".

I provide them as a download; you have to put them into /etc/logstash/conf.d.

Don't forget to create or upload the folder "patterns". Please do not change anything else if you don't know what you're doing.

You only have to modify 10-syslog.conf and change the IP address to your Pfsense IP. If you're not located in Europe, you have to modify the timezone as well. You will find the timezone in 11-pfsense.conf on line 12.

On the following website you will find your timezone: http://joda-time.sourceforge.net/timezones.html

If you later receive messages containing <134> in Kibana, you have probably entered the wrong IP in 10-syslog.conf!
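For orientation, here is a heavily simplified sketch of what the input side of the downloadable config roughly does. Everything in it is an assumption (the port 5140 and the IP 192.0.2.1 are placeholders, and the downloaded files remain the authoritative version); it only illustrates why the IP in 10-syslog.conf matters.

```
input {
  # Pfsense sends its logs as syslog over UDP; 5140 is a placeholder port
  udp {
    port => 5140
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    # Only events coming from the firewall get tagged; 192.0.2.1 stands in
    # for your Pfsense IP - this is the value you adjust in 10-syslog.conf
    if [host] =~ /192\.0\.2\.1/ {
      mutate {
        add_tag => ["PFSense"]
      }
    }
  }
}
```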

You will find the files here.

After you have added those files to the "conf.d" folder, please restart all services. You will find the commands at the bottom.

Pfsense Configuration

Go to Status -> System Logs -> Settings and configure it as shown in the images. Please change the IP to your Elasticsearch server IP. Of course, you have to save the changes you made.

 Pfsense configurations


Pfsense configurations 2


Kibana Configuration

Now it's time to open a web browser, type in the IP address of your ELK stack, and log in with the admin password which you set during the Nginx setup. If everything worked fine, you should see the Kibana website for the first time.

Follow these steps to configure Kibana and add the dashboards which you can download here.

Go to "Dev Tools -> Get to work"

On the left side you have to add this command:

PUT _template/logstash-
{
  "template" : "logstash-*",
  "version" : 60001,
  "settings" : {
    "index.refresh_interval" : "5s",
    "number_of_shards" : 1
  },
  "mappings" : {
    "doc" : {
      "dynamic_templates" : [ {
        "message_field" : {
          "path_match" : "message",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "text",
            "norms" : false
          }
        }
      }, {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "text",
            "norms" : false,
            "fields" : {
              "keyword" : { "type" : "keyword", "ignore_above" : 256 }
            }
          }
        }
      } ],
      "properties" : {
        "@timestamp" : { "type" : "date" },
        "@version" : { "type" : "keyword" },
        "geoip" : {
          "dynamic" : true,
          "properties" : {
            "ip" : { "type" : "ip" },
            "location" : { "type" : "geo_point" },
            "latitude" : { "type" : "half_float" },
            "longitude" : { "type" : "half_float" }
          }
        }
      }
    }
  }
}

Go to "Management -> Index Patterns" and enter "logstash-*" there. After that, choose @timestamp as the time filter field. I made some images; when you're done, you can click on "Create index pattern".


Kibana 1

Now you can go to "Management -> Saved Objects -> Searches" (click on Searches), and then at the upper right you can import the dashboards. After this step you should be fine and can view the dashboards with your data. Yes, you will get an error, because my dashboards contain some fields from a Synology NAS which you probably don't have. You can just skip them, or add them and delete them later. I have not included the Synology configuration for Logstash because I don't want you to get into even more trouble ;=).


Important commands

You can check whether the services are up with these commands.

systemctl status elasticsearch.service
curl -X GET http://localhost:9200
systemctl status kibana.service
systemctl status logstash.service

If you want to stop your ELK Stack you can use those commands.

sudo systemctl stop logstash
sudo systemctl stop kibana
sudo systemctl stop elasticsearch

When you want to start the services manually.

sudo systemctl start elasticsearch
sudo systemctl start kibana
sudo systemctl start logstash

If you want to see what Logstash is doing (this is where it gets very advanced...):

tail -f /var/log/logstash/logstash-plain.log


Back to Top