Redis-Sentinel cluster on CentOS 7



I looked left, right, and center for a good source explaining this from head to toe and failed to find one, so I decided to put up this walk-through, share the whys and hows, and hopefully save someone else the burden.

  • Most examples out there run the compiled redis/redis-sentinel binaries by hand, while I wanted a properly packaged install from the repositories, controlled by systemd.
  • I needed some sort of protection at the firewall level; I explain below how I achieved it.
  • I needed the kernel parameters applied persistently, either via sysconfig or, in later stages, grub hooks. For the latter I had to fall back on a self-composed systemd unit, which I provide here along with the reasoning.
  • Almost all examples run several instances on the same node without really separated role groups, a matter that needs extra-fine granulation to avoid production disasters. A great, eye-opening source is the comments from #The_real_bill here.


The environment is CentOS 7 with firewalld and SELinux enabled. The packages were grabbed from the EPEL repository at installation time. The redis servers have 100GB of RAM; the sentinels are tiny, with 1 core and 1GB of RAM. All settings are done on a vanilla installation. A word of caution: if you mess up the permissions, a port can no longer be bound, or the service stops starting, do reinstall a fresh instance. I learned this the hard way and the time I spent was too dear.

The diagram



Record the IPs and, if needed, set static addresses with nmtui. Install the redis package on both servers:

# yum install redis

Add the necessary firewall rules:

# firewall-cmd --new-zone=redis --permanent

# firewall-cmd --zone=redis --add-source={x.x.x.x/32,x.x.x.x/32,…} --permanent

# firewall-cmd --zone=redis --add-service=redis --permanent

# firewall-cmd --add-port=26379/tcp --permanent --zone=redis

# firewall-cmd --reload

# firewall-cmd --list-all --zone=redis

Use the last command to check that all the rules are set: look for the redis entry in the services list, and for the ports and source IPs in their respective sections.

Now it is time for the redis config changes.

On redis1 server, edit the below:


protected-mode no          <- the stock file ships "protected-mode yes"; disable it so the servers and sentinels can talk to each other (the firewall zone above is what protects you)

On redis2 server:


protected-mode no          <- same change as on redis1

slaveof <redis1-ip> 6379          <- point the replica at redis1; substitute the IP from the diagram

On both servers, start the service, check the replication status, and test that replication is taking effect:

# systemctl start redis && systemctl status redis

# redis-cli info replication && redis-cli set foo bar && redis-cli get foo

Now, if the replication info is what you expect and the master and slave roles appear, you are done with the master-slave redis config. Time to jump to the redis-sentinel configuration.
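If you want to script that replication check rather than eyeball it, the role field can be extracted from the INFO output. A minimal sketch (the `get_role` helper is my own, not part of redis):

```shell
# Hypothetical helper: extract the "role:" field from `redis-cli info replication`
get_role() {
  grep '^role:' | tr -d '\r' | cut -d: -f2
}

# Demo against a canned INFO dump; on a live box pipe `redis-cli info replication` in
printf 'role:master\nconnected_slaves:1\n' | get_role   # prints "master"
```

On the replica the same pipe should print slave.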

A minimal redis-sentinel configuration, based on the IP info in the diagram above, can be set as:

bind <this-node-ip>             <- this is the tricky bit that cost me a lot of debugging: even when sentinel and redis run on the same server, do not put the loopback interface here
port 26379                      <- that is the default and we keep it that way
sentinel monitor mymaster <master-ip> 6379 2      <- some clarification: mymaster is the default cluster name. The tricky bit is that we do not provide a loopback IP here, even for the sentinel on the master node. I learned this the hard way after a long debug
sentinel down-after-milliseconds mymaster 6000    <- agree the master is down (and start re-election) after 6 seconds
sentinel failover-timeout mymaster 6000           <- in a nutshell, it fails over after only 6 seconds; since the servers are on the same LAN, 6 seconds is pretty fine
supervised systemd              <- change this to systemd
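Why a quorum of 2? With three sentinels in this setup, 2 is the smallest majority, so a failover can still be agreed on with one sentinel down. The rule of thumb can be sketched in shell (the variable names are mine):

```shell
# Majority quorum for a sentinel group: more than half must agree
sentinels=3
quorum=$(( sentinels / 2 + 1 ))
echo "$quorum"   # prints 2
```

With five sentinels the same arithmetic gives 3, and so on.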

After redis-sentinel starts, it appends some lines to this file itself: the id number, the node's view of itself and the rest of the nodes, and the redis node roles, i.e. who is master, who is slave, and the consensus of the redis-sentinel nodes.

Here is an example after the redis-sentinel service has started on all nodes and convergence is done:

port 26379
dir “/tmp”
sentinel myid 801d529ccc1ded9762c096faff3d66ea5ee1ce04
sentinel monitor mymaster 6379 2
sentinel down-after-milliseconds mymaster 6000
sentinel failover-timeout mymaster 6000
logfile “/var/log/redis/sentinel.log”
supervised systemd
sentinel config-epoch mymaster 2
sentinel leader-epoch mymaster 2
sentinel known-slave mymaster 6379
sentinel known-sentinel mymaster 26379 b2466fa5d687182e96decca1076f2d2d4ae7d781
sentinel known-sentinel mymaster 26379 c61da1e439edca0b67cb6c177ebd07b0fa16b9ac
sentinel current-epoch 2

The last bits and tricks:

Install the tuned package and switch the server profile to a throughput-oriented one, especially in a VM environment. This is imperative: benchmarking the servers shows a many-fold improvement.

# yum install tuned

# tuned-adm profile throughput-performance

The commands above essentially fine-tune the performance-related settings of the server. Check the Red Hat documentation for further details.

Create and enable a systemd service to disable transparent huge pages, which hurt redis performance:

# vim /etc/systemd/system/hpf_disable.service

And the content reads:

[Unit]
Description=Disable Transparent Huge Pages (THP)

[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo never > /sys/kernel/mm/transparent_hugepage/enabled && echo never > /sys/kernel/mm/transparent_hugepage/defrag"
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target


Enable and start the service:

# systemctl enable hpf_disable && systemctl start hpf_disable

A last word: if you did not go with the tuned profile, there are essentially two kernel parameters to set:

vm.swappiness = 10
vm.overcommit_memory = 1
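The same two parameters can be persisted in a sysctl drop-in; the file name below is my choice, not from the original post:

```ini
# /etc/sysctl.d/90-redis.conf  (standard drop-in path on CentOS 7)
vm.swappiness = 10
vm.overcommit_memory = 1
```

Apply it without a reboot via `sysctl --system`.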

A minimal HAProxy config for the redis backend, as a sample:

backend BE_redis
    mode tcp
    option tcp-check
    option tcpka
    tcp-check connect
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    fullconn 30000
    server red0 <redis1-ip>:6379 weight 1 check inter 3000 rise 3 fall 3
    server red1 10.0.0.54:6379 weight 1 check inter 3000 rise 3 fall 3 backup



Unbound: A great tool for a secure fast local DNS


This article is a quick howto for installing Unbound. While the Arch wiki page has it all, there were little tweaks needed to get further, explained here in layman's terms for a successful set-up.


The following are presumed to be present, though the unbound package itself is versatile and distro-agnostic:

  • Arch Linux (your laptop, in this case)
  • Access to the AUR via yaourt (all major distros include the package in their base repositories)
  • The packages we deal with: unbound, dnssec-trigger, openresolv, drill and dig

The Goal

To have a caching, recursive DNS service which circumvents MITM attacks and the ISP's meddling with DNS queries in censorship-prone countries like mine: Iran. Oh yes, believe me, it is a great deal to have such a tool. It is fast and easy to set up, and once done you can forget that such a service even exists. The service will serve all queries for localhost, in my case my laptop. Let's get started.


Install unbound and dnssec-trigger with:

$ yaourt -S unbound dnssec-trigger

Do not worry about openresolv; it will be installed and used by dnssec-trigger to modify /etc/resolv.conf.

Go to /etc/NetworkManager/NetworkManager.conf and add this snippet:


Once done, we need to do some modification on the unbound config file, which resides at /etc/unbound/unbound.conf. There is a fully commented reference file at /etc/unbound/unbound.conf.example for further reading. Here is my config, explained to get you going:

$ cat /etc/unbound/unbound.conf

use-syslog: yes                      # send the logs to syslog
username: "unbound"                  # app user account created at installation; you can further chroot it for security if you want
directory: "/etc/unbound"            # root directory of the app
trust-anchor-file: trusted-key.key   # location of the trusted root servers key; the file is auto-generated

root-hints: root.hints               # root servers list
interface: 127.0.0.1                 # unbound listens on this interface for DNS queries. A word of caution: if you are like me with KVM and Docker services, better not to set this to 0.0.0.0, otherwise you need more config changes for dnsmasq and the port it listens on

prefer-ip6: no                       # learned the hard way: this entry does a great deal of good, as most out-there networks are not yet IPv6 enabled, so keep it off
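Since this resolver only serves the laptop itself, it is also worth restricting who may query it. A hedged fragment for the same unbound.conf server section (the values are my suggestion, not from the original config):

```conf
# answer queries from loopback only, refuse everyone else
access-control: 127.0.0.0/8 allow
access-control: 0.0.0.0/0 refuse
```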

It is now time to fetch the root hints file:

$ curl -o /etc/unbound/root.hints https://www.internic.net/domain/named.root

We now set up a systemd service and timer for a monthly update of this file:

$ sudo vim /etc/systemd/system/roothints.service

[Unit]
Description=Update root hints for unbound

[Service]
Type=oneshot
ExecStart=/usr/bin/curl -o /etc/unbound/root.hints https://www.internic.net/domain/named.root

$ sudo vim /etc/systemd/system/roothints.timer

[Unit]
Description=Run root.hints update monthly

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target

$ sudo systemctl enable roothints.timer && sudo systemctl start roothints.timer


Time to save everything, then enable and start the services:

$ sudo systemctl enable dnssec-triggerd && sudo systemctl start dnssec-triggerd

$ sudo systemctl enable unbound && sudo systemctl start unbound

$ sudo systemctl restart NetworkManager

Check that everything is fine with the commands below. The DNSSEC test pair sigfail/sigok.verteiltesysteme.net is a common choice (any DNSSEC test domain you trust works): validation of the deliberately broken signature must fail, while the correctly signed record must succeed:

$ drill sigfail.verteiltesysteme.net

$ drill sigok.verteiltesysteme.net

$ dig example.com





A poorman’s monitoring solution for personal website


Are you like me running a personal website from scratch? Is it on the tight budget pinching minimal resources and runs on Nginx? Have you ever wondered if there was a monitoring solution that helped you out on the normal web stats, performance and traffic?

Behold: amplify is what you must definitely give a shot. It is a SaaS application, currently hosted on AWS. One word of caution: not all the graphs work out of the box, but it holds enough to sustain a good long look into site performance.

The advantages of such a tool, to me, were these: it does not force me into setting up a separate VPS; it addresses availability if anything happens to the server itself; and finally, it is cost-free. How does it all work? Pretty straightforward: it is a python script that, after you sign up for free, you get from your profile.
It checks your server and installs the amplify plugin and a couple of other requirements after you pass it your generated UUID. That is all. You get yourself a good dashboard with reasonably great insight into your web server's performance and the host itself. Enjoy!

JVM settings on Elasticsearch >=5


I am sure that if your data ingestion into elasticsearch exceeds values like 400,000 documents daily, and you visualize the pattern in a kibana interface, then after some time, no matter how beefy your server is, you will see the visualization get hampered once the time gauge exceeds the week view. Investigating this issue revealed the culprit to be the JVM heap size. Historically, in elasticsearch <5, the value was changed in the service's environment/sysconfig files.

Surprised, like me, to find that those changes no longer take effect? Digging through the resources on the elastic website, I found that in elasticsearch >=5 the JVM parameters live in a dedicated config file, jvm.options:

Changing the Xms1g and Xmx1g values there to half your actual RAM fixed the kibana visualization failures for larger spans like 1 or 6 months.
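The "half your actual RAM" rule can be computed rather than guessed. A small sketch, assuming a Linux host with /proc/meminfo (the variable names are mine):

```shell
# Derive half of physical RAM, in MB, for the -Xms/-Xmx lines in jvm.options
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
half_mb=$(( total_kb / 1024 / 2 ))
echo "-Xms${half_mb}m -Xmx${half_mb}m"
```

Set both flags to the same value so the heap never has to grow or shrink at runtime.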

How to reach social media safely in "wolf at the door" times


After the government's latest moves to block every kind of VPN in Iran, and users turning to ever simpler tools, I wanted to do a small hack on top of the Tor Browser proxy, so that you can enjoy its benefits while also solving the problems this tool poses for ordinary users, such as a browser that keeps no history and does not remember your passwords.
Tor Browser is a powerful tool, used by people like Edward Snowden. In its normal, recommended mode it ships an internal browser whose traffic is not only encrypted, but encrypted in a way that the censors do not even notice; like an octopus, it keeps changing shape. Now, in the most critical conditions, when all VPNs are down and Telegram does not work, you can benefit from this tool with a simple hack. Like this:
Launch Tor Browser and make sure the obfs4proxy transport is in use (which it is in 99% of cases). Once Tor Browser is up, leave it alone; you can minimize it.
The hack starts here: the tor service inside Tor Browser listens by default on localhost, port 9150. All you have to do is give your own browser (Firefox or Chrome, say) a SOCKS proxy with the following settings:

type: socks5    ip address: 127.0.0.1    port: 9150

If you use the Firefox add-on called FoxyProxy, which is even more convenient, it looks like this:

Now head to Telegram and put the same settings under Settings → Network settings. That is it! Simply, and in the safest possible way, you have passed through every filter. We owe this to the Tor Project team that quietly helps the voiceless.

502 BAD GATEWAY Nginx proxy-pass error


Ever had this error while proxy-passing from nginx to your back-end application? Strangely enough, there are few sources out there stating how it arises and its solution.

The environment: CentOS 7 with enforcing SELinux, where an application (mattermost) is being proxy-passed to by nginx.
After setting up nginx to proxy-pass the application and opening up the http service in the firewall, you hit the error: 502 Bad Gateway.
Tailing the nginx error log gets you:

tail /var/log/nginx/error.log prints something vague:
2018/01/31 07:55:25 [error] 2470#0: *1 no live upstreams while connecting to upstream, client:, server:, request: "GET /favicon.ico HTTP/1.1", upstream: "http://backend/favicon.ico", host: "", referrer: ""

It is only after tailing /var/log/audit/audit.log and grepping for the denied keyword that you see something like:
type=AVC msg=audit(1517401834.352:180): avc: denied { name_connect } for pid=1678 comm="nginx" dest=8065 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket

a. Install the policycoreutils python utilities:
# yum install policycoreutils-python
b. Generate a local policy module from the denials:
# grep nginx /var/log/audit/audit.log | grep denied | audit2allow -M nginx

c. Load it permanently:
# semodule -i nginx.pp

Reload the page again and …Hey presto…

Nimble little bash tools you stumble upon while drunk…


The other day, out of curiosity, I typed the two letters pw at the bash prompt and hit tab to see what bash suggests. The result was completely astonishing.

Trust me, once you get into higher-level configuration and administration, you tend to lose sight of the tools at your feet. I guess this is shared by many, many veteran system engineers, so read on.

The first little guy, pwscore (present out of the box on at least CentOS 7 and Arch Linux, from the libpwquality package), can tell you immediately how strong your password is, reading it from the prompt. You simply type the command and press enter; it waits for your input password and prints a score, apparently on a scale of 100, with anything above 50 considered a fairly strong password. What a nimble tool to tell you, without pulling your feet out of bash 🙂, exactly where the strength of your password stands.

Its sibling pwmake is another wonder. I know there are millions of tools out there for the same purpose, but a bash tool that creates passwords from the entropy bits you state for it, spitting out random, relatively easily pronounceable passwords... that is really, really cool, like Kate Moss never losing her charm.

Jumping to today and thinking of the good old hdparm: I was trying to think of a way to make a USB stick read-only, and I knew from years ago that it is doable. No problem: lsblk to get the drive device, and then as simple as:
$ sudo hdparm -r 1 /dev/sdX
/dev/sdX:
 setting readonly to 1 (on)
 readonly      =  1 (on)

If you ever want to turn writability back on, swap the 1 with a 0. That is:
$ sudo hdparm -r 0 /dev/sdX

Logstash grok pattern for nginx


Do you use the ELK stack in your environment? If so, have you noticed there is no really nice built-in way to integrate nginx logs into it? Last time, I went as far as submitting a ticket and visiting the IRC channel asking for a proven way of parsing nginx logs. Without a proper grok filter, your log ends up a string of unrecognized, unpatterned data that you cannot manipulate later. There is also a fully custom route: creating your own grok pattern with the online grok pattern testing tools.
The short howto
Install filebeat and configure it to ship data to the logstash server. There is a newer method of segregating and labeling logs, but I prefer the good old:
document_type: nginx_access
Enable and start the filebeat service and make sure the logs state a successful connection. On the nginx side we are dealing with the combined log format, which is the default when nothing is mentioned, as in the config directives below:
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
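For reference, the combined format nginx falls back to is predefined; written out explicitly it is:

```nginx
log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';
```

which is exactly the shape that %{COMBINEDAPACHELOG} expects on the logstash side.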

Time now to head to the logstash server.
Here is the tricky bit, since the new method tidies logstash into an input > filter > output discipline. We need to follow it, and basically it boils down to an input.conf that listens for any data on the logstash port, an output that sends the filtered data to an elasticsearch server, and the filter segment, where all our interest resides. Here is the sample of my nginx filter that does the trick:
[root@elk 0]# cat /etc/logstash/conf.d/11-nginx-filter.conf
filter {
  if [type] == "nginx_access" {
    grok {
      match => [ "message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}" ]
      overwrite => [ "message" ]
    }
  }
}

If you happen to use geoip, then a geoip section needs to be added inside the same filter block:
geoip {
  source => "clientip"
}

Remember, the source field names the actual field that holds the info required to geolocate. Note also that the geoip plugin needs to be installed in your elastic stack:
/usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip

Not without my say so….


Live in an embargoed state? Then you know the common symptom: you are barred from accessing resources freely available to every soul around the world. I am not here to lecture whether that is right or wrong, but pragmatism advises me to circumvent the bar with no burden on my conscience. Do not take me wrong: there is no wrongdoing in fetching open-source content that happens to be hosted on US soil and, unlike 90%+ of other resources, is suddenly unavailable to you. Examples? Docker Hub images; elastic resources, from elasticsearch itself to its plugins; virtualbox; or oracle resources like OCFS or their unbreakable kernel, hehehe.

Now, normally, what we do is create a vpn tunnel to a VPS halfway across the world and get the things we want through it. Openvpn is a fantastic solution and works like a charm; actually, for people oppressed like us by our regime, that is how we securely get through to social media and uncensored content on the Internet from our PCs. However, working on servers remotely (over ssh) and running openvpn gets our ssh session dropped, i.e. everything is passed to the newly created tunnel:
default via <vpn-gateway> dev tun0 proto static metric 50
and you have to devise other methods of access, like host-to-guest console access, be it kvm, esx or any other. The issue with a remote console, as you may guess, is the inability to copy and paste from your machine, which makes it cumbersome.
Recently I addressed this through an easier and cheaper method, even with a bonus bit of security in there 🙂 bear with me while I explain:

Basically it boils down to two packages: proxychains and tor. Proxychains is a SOCKS proxy wrapper that lets processes run in a terminal talk through the tor daemon on localhost:9050.
Here is the nitty-gritty of the process(Instructions for Centos7 run as root):
a. install the dev tools:
yum groupinstall "Development Tools"
b. install git, wget and vim as the tools to use
yum install epel-release && yum update && yum install git wget vim
c. install tor and edit the torrc to log
yum install tor && vim /etc/tor/torrc
uncomment below line:
Log notice file /var/log/tor/notices.log
d. enable the tor service and start it:
systemctl enable tor && systemctl start tor && tail /var/log/tor/notices.log
The latter will tell you if the circuit is functional.
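Before wiring proxychains up, a quick sanity check that tor's SOCKS port is actually listening can be done from bash alone, via its /dev/tcp pseudo-device (the `port_open` helper name is mine):

```shell
# Probe a TCP port on loopback; bash opens /dev/tcp/<host>/<port> as a connect()
# attempt, so the subshell succeeds only if something is listening there.
port_open() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_open 9050; then
  echo "tor SOCKS port open"
else
  echo "tor SOCKS port closed"
fi
```

No nc or nmap needed, which is handy on a minimal server.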
Now time to compile proxychains (the -ng fork):
a. cd to the source directory and fetch proxychains
cd /usr/local/src && git clone https://github.com/rofl0r/proxychains-ng.git
b. configure, make and make install
cd proxychains-ng && ./configure && make && make install
c. Now you have the proxychain binary and conf in related dirs as:
/usr/local/bin/proxychains4 and /usr/local/etc/proxychains.conf
I tend to make a symbolic link to make things easier for myself by:
ln -s /usr/local/bin/proxychains4 /usr/local/bin/proxychains
Lastly, you want to change just one line in the conf file so proxychains talks to tor over socks5:
vim /usr/local/etc/proxychains.conf
socks5 127.0.0.1 9050

That is basically all. Now any command in bash that needs circumvention is simply preceded by proxychains, for instance fetching and running an install script:
proxychains curl -fsSL <script-url> | proxychains sh

Voila enjoy… 😉

Elasticsearch: deadly gorgeous


Do you have elasticsearch somewhere in your infra? I do, and extensively, but this post is not about how gorgeous it is to use, rather how ugly it can be after the first six months or year. We use the tool in both graylog and the elk stack. If you have used it, you know it is quite a headache to manage and monitor: administration of on-disk size, the various indices, and snapshotting.
Let's be honest, the graylog interface got retention/rotation right in the system/indices menu; a thumbs up for the graylog guys. This is where the elk stack fails poorly, particularly the newest and brightest of them all: version 5.6.
I admit to spending many days and weeks in search of a better tool than simply running "curl -XDELETE http://localhost:9200/the_target_indices -u username:password".

There is a tool called elasticsearch-HQ where, with a tweak to the elasticsearch.yml file, adding the entries below:
http.cors.allow-origin: "*"
http.cors.enabled: true
node.master: true

you may connect to it in a browser and encounter massive information about every part of your elasticsearch. However, it does not allow you any administration; it is rather a pretty GUI for the things under the hood.

This is where we need to turn to a python tool called elasticsearch-curator, which does the job. It was acquired by the elastic team lately and is maintained by them, which shows its importance; you can see the full documentation on the elastic site.
Here is the impatient guide to installing and running it on the elasticsearch node.

a. Install pip and virtualenv:
yum install python-pip python-virtualenv
We need a python virtual env to keep the host environment separate from where we run curator, so the versioning of a heck of a lot of python tools does not get in your hair.
b. create a dir:
mkdir curator-virt
c. create the environment:
virtualenv curator-virt
d. enter the environment:
. curator-virt/bin/activate
e. install elasticsearch-curator:
pip install elasticsearch-curator
f. check the version, and note the compatibility matrix between curator and your elasticsearch version in the docs:
curator --version

The rest of this article assumes you successfully got to the stage where you have the latest curator, i.e. 5.3, and the latest elasticsearch, i.e. 5.6, which happens to be secured by x-pack. Basically you need two files, preferably in the same dir as your working env for ease of use and reference: config.yml and action_delete.yml.
-config.yml sample:
client:
  hosts: ["127.0.0.1"]      # the elasticsearch listening interface
  port: 9200
  use_ssl: False
  ssl_no_validate: False
  http_auth: "uname:password"
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logformat: default
  blacklist: ['urllib3']

-action_delete.yml example (caution: please do read the descriptions and change them accordingly):
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 30 days (based on index name), for metricbeat-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      continue_if_exception: False
      # if you use this action as a template, be sure to set this back to
      # False after copying it
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: metricbeat-      # filter only the metricbeat-* indices
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 30

Now let's get dirty. First things first, run a read-only command to list the indices and make sure all is good:
curator_cli --config config.yml show_indices
It should return all the indices. Now run the action itself (adding --dry-run first, to preview what would be deleted, is a good habit):
curator --config config.yml action_delete.yml
which will delete the matching indices older than 30 days.