Unbound: A great tool for a secure, fast local DNS


This article is a quick howto on installing Unbound. While the Arch Wiki page has it all, there were a few little tweaks needed to get further, explained here in layman's terms for a successful set-up.

Prerequisites

The following are presumed to be present, although the unbound package itself is versatile and distribution-agnostic:

  • Archlinux OS (your laptop in this case)
  • Access to the yaourt repo (all major distros include the package in their base repositories)
  • Packages to deal with: unbound, dnssec-trigger, openresolv, drill and dig

The Goal

To have a caching, recursive DNS service that circumvents MITM attacks and ISP meddling with DNS queries in censorship-heavy countries like mine: Iran. Oh yes, believe me, it is a great deal to have such a tool. It is fast and easy to set up, and once done you can forget that such a service even exists. Let's get started. The service will serve all queries for localhost, in my case my laptop.

Installation

Install unbound and dnssec-trigger with:

$ yaourt -S unbound dnssec-trigger

Do not worry about openresolv, which will be installed and used by dnssec-trigger to modify /etc/resolv.conf.

Go to /etc/NetworkManager/NetworkManager.conf and add this snippet:

[main]
dns=unbound

Once done we need to make some modifications to the unbound config file, which resides at /etc/unbound/unbound.conf; the config lines below are fully explained FYI. There is also a reference file, /etc/unbound/unbound.conf.example, for further reading. Here is my config, explained to get you going:

$ cat /etc/unbound/unbound.conf

server:
    use-syslog: yes                     # send the logs to syslog
    username: "unbound"                 # app user account created at installation; you can further chroot it for security if you want
    directory: "/etc/unbound"           # working directory of the app
    trust-anchor-file: trusted-key.key  # location of the trusted root servers key; the file is auto-generated
    root-hints: root.hints              # root servers list
    interface: 127.0.0.1                # unbound listens on this interface for DNS queries; a word of caution: if you are like me running KVM and Docker services, better not set this to 0.0.0.0, otherwise you need more config changes for dnsmasq and the port it listens on
    prefer-ip6: no                      # I learned the hard way that this entry helps a great deal, as most networks out there are not yet IPv6-enabled, so keep it off
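Before enabling anything, it may be worth validating the syntax; the unbound package ships a checker for exactly this:

$ unbound-checkconf /etc/unbound/unbound.conf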

It is now time to populate the root hints file:

$ sudo curl -o /etc/unbound/root.hints https://www.internic.net/domain/named.cache

We now set up a systemd service and timer to update this file monthly, like below:

$ sudo vim /etc/systemd/system/roothints.service

[Unit]
Description=Update root hints for unbound
After=network.target

[Service]
ExecStart=/usr/bin/curl -o /etc/unbound/root.hints https://www.internic.net/domain/named.cache

$ sudo vim /etc/systemd/system/roothints.timer

[Unit]
Description=Run root.hints monthly

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target
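Note that the timer itself must be enabled and started for the monthly refresh to actually fire; assuming the unit names above:

$ sudo systemctl enable roothints.timer && sudo systemctl start roothints.timer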

Time to save everything, then enable and start the services:

$ sudo systemctl enable dnssec-triggerd && sudo systemctl start dnssec-triggerd

$ sudo systemctl enable unbound && sudo systemctl start unbound

$ sudo systemctl restart NetworkManager

Check that everything is fine with the commands below. The first must fail (DNSSEC validation rejects it, so no address is returned), while the second and third must succeed:

$ drill sigfail.verteiltesysteme.net

$ drill sigok.verteiltesysteme.net

$ dig google.com

A poor man's monitoring solution for a personal website



Are you, like me, running a personal website from scratch? Is it on a tight budget, pinching minimal resources, and running on Nginx? Have you ever wondered if there was a monitoring solution to help you out with the usual web stats, performance and traffic?

Behold: NGINX Amplify is what you should definitely give a shot. It is a SaaS application, currently hosted in AWS. One word of caution: not all the graphs work out of the box, but it holds enough to sustain a good, bloody long look into site performance.

The advantages of such a tool, to me, were these: a. it does not force me into setting up a separate VPS, b. it addresses the issue of availability if anything happens to the server itself, and finally it is cost-free. How does it all work? Pretty straightforward: it is a python script that, after you sign up for free, you get from your profile at:
https://www.nginx.com/products/nginx-amplify/
It checks your server and installs the Amplify plugin and a couple of other requirements after you pass it your generated UUID… that is all. You get yourself a good dashboard with reasonably great input regarding your web server performance and the host itself. Enjoy!
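For reference, the install step handed out on the profile page looks roughly like the lines below; treat the URL and the API key placeholder as assumptions and copy the exact command from your own Amplify account:

$ curl -L -O https://github.com/nginxinc/nginx-amplify-agent/raw/master/packages/install.sh
$ API_KEY='YOUR_AMPLIFY_API_KEY' sh ./install.sh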

JVM settings on Elasticsearch >=5

Posted Leave a comment

I am sure that if your data ingestion into Elasticsearch is above values like 400,000 documents daily, and you are visualizing the pattern in a Kibana interface, then after some time, no matter how beefy your server is, you will see the visualizations get hampered once the time gauge exceeds a week view. Further investigation revealed the culprit to be the JVM heap size. Historically, in Elasticsearch <5, the value was changed in either of these files:
/etc/sysconfig/elasticsearch
/usr/lib/systemd/system/elasticsearch.service

Surprised, like me, to find that the conf changes do not take effect? Digging through resources on Elastic's website, I found that in Elasticsearch >=5 the conf file for the Java parameters is:
/etc/elasticsearch/jvm.options

Changing the values of -Xms1g and -Xmx1g to half of your actual RAM allowed me to fix the Kibana visualization failures for larger spans like 1 month or 6 months.
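As a minimal sketch, on a box with 16 GB of RAM (the 8g figure below is just an illustration; Elastic also advises staying under roughly 32 GB so compressed object pointers remain enabled) the relevant lines in /etc/elasticsearch/jvm.options become:

-Xms8g
-Xmx8g

Restart the elasticsearch service afterwards for the new heap size to take effect.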
Ref: the heap size section of the Elasticsearch documentation.

How to safely access social networks in an age of "the wolf at the door"


After the government's latest push to block every kind of VPN in Iran, with users turning to ever simpler tools, I wanted to do a small hack on top of the Tor Browser proxy so that you can both enjoy its benefits and get around the complaints users have with it, such as a browser that keeps no history and does not remember your passwords.
Tor Browser is a powerful tool used by people like Edward Snowden. In its normal, recommended mode it ships a built-in browser whose traffic is not only encrypted, but encrypted in such a way that the censors and the "brothers" do not notice it; like an octopus, it keeps changing shape. Now, in the most critical conditions, when all the VPNs are down and Telegram does not work, you can profit from this tool with a simple hack. Like this:
Launch Tor Browser and make sure the obfs4proxy transport is being used (which in 99% of cases it is). Once Tor Browser is up, leave it alone; you can simply minimize it.
The hack starts here: the Tor service inside Tor Browser listens by default on localhost, port 9150. It is enough to simply give your regular browser (for example Firefox or Chrome) a SOCKS proxy with the following settings:

type:socks5 ipaddress:127.0.0.1 port:9150
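A quick way to verify the proxy from the command line, assuming curl is installed (check.torproject.org is the Tor Project's own test page and reports whether your traffic exits through Tor):

$ curl --socks5-hostname 127.0.0.1:9150 https://check.torproject.org/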

If you use the Firefox add-on called FoxyProxy, it is even more convenient; you just enter the same settings there.

Now head over to Telegram and, under Settings > network settings, enter the same proxy settings. That is it! Simply, and in the safest way possible, you have gotten past every filter. We owe this to the team at the Tor Project quietly helping the voiceless.

502 BAD GATEWAY Nginx proxy-pass error


Ever had this error while proxy-passing nginx to your back-end application? Strangely enough, there are few sources out there explaining how it is raised and how to solve it.

The environment: CentOS 7 + enforcing SELinux, where an application (Mattermost) is being proxy-passed to by nginx.
After setting up nginx to proxy-pass the application and opening up the http service in the firewall, you are greeted with the error: 502 Bad Gateway.
Tailing the nginx error log (tail /var/log/nginx/error.log) prints out something vague:
2018/01/31 07:55:25 [error] 2470#0: *1 no live upstreams while connecting to upstream, client: 192.168.122.1, server: mattermost.devopt.net, request: "GET /favicon.ico HTTP/1.1", upstream: "http://backend/favicon.ico", host: "192.168.122.254", referrer: "http://192.168.122.254/"

It is only after tailing /var/log/audit/audit.log and grepping for the denied keyword that you see something like:
type=AVC msg=audit(1517401834.352:180): avc: denied { name_connect } for pid=1678 comm="nginx" dest=8065 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket

Solution:
a. Install the python module:
# yum install policycoreutils-python-2.5-17.1.el7.x86_64
b. Generate the policy module from the logged denials:
# cat /var/log/audit/audit.log |grep nginx |grep denied |audit2allow -M nginx

c. Make it permanent by:
# semodule -i nginx.pp

Reload the page again and …Hey presto…
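As an aside, if you would rather not carry a custom policy module around, the stock SELinux boolean that allows httpd-labelled processes (nginx included) to make outbound network connections covers this same name_connect denial:

# setsebool -P httpd_can_network_connect 1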

Nimble little bash tools you stumble upon while drunk…


The other day, out of curiosity, I just typed the two letters pw at the bash prompt and then hit Tab to see what bash would suggest… the result was completely astonishing…

Trust me, once you get into most of the higher-level configuration and administration, you tend to lose track of the tools right at your feet. I guess this is shared by many, many veteran system engineers, so read on…

pwscore
This little guy (by the way, I presume it is present out of the box, as part of libpwquality, at least on both CentOS 7 and Arch Linux) can tell you immediately how strong your password is by accepting input from the command prompt. You simply type the command and hit Enter; it waits for your input password and gives the score, apparently on a scale of 100, with anything above 50 considered a fairly strong password. What a nimble tool to tell you immediately, without pulling your foot out of bash 🙂, where the strength of your password stands.
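A minimal illustration (the password is just an example; pwscore reads it from stdin and prints either a numeric score or the reason it was rejected):

$ echo 'purple-Tractor_42!' | pwscore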

pwmake
This little guy is another wonder; I know there are millions of tools out there for the same purpose, but a bash tool that creates passwords based on the entropy you state, spitting out random and relatively pronounceable passwords for you… that is really, really cool, like Kate Moss never losing her charm…
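A small sketch of its use; the argument is the number of bits of entropy, and pwmake wants at least 56 of them:

$ pwmake 64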

hdparm
Jumping to today and thinking of good old hdparm: I was trying to think of a way to make a USB stick read-only, and I knew from years ago that it is doable. No problem: just lsblk to get the drive partition (sdX below stands in for whatever it reports) and then it is as simple as:
$ sudo hdparm -r 1 /dev/sdX
/dev/sdX:
setting readonly to 1 (on)
readonly = 1 (on)

If you ever want to reverse this and make the device writable again, swap the 1 with a 0.
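That is, with the same sdX placeholder as above:

$ sudo hdparm -r 0 /dev/sdX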

Logstash grok pattern for nginx


Do you use the ELK stack in your environment? If so, have you noticed that there is no really nice way to integrate nginx logs into it? Last time I even tried submitting a ticket and going to the IRC channel to ask for a proven way of parsing nginx logs into it. I need to mention that, if you do not parse them, your log ends up as a string of unrecognized, unpatterned data that you cannot manipulate later. There is also the fully customized route of creating your own grok pattern using the online grok pattern testing tools.
The short howto
Install filebeat and configure it to ship the data to the logstash server. There is a newer method of segregating and labeling logs, but I prefer the good old method, being:
document_type: nginx_access
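For context, a minimal filebeat 5.x prospector carrying that line might look like the sketch below; the log path and the logstash host are assumptions to adjust for your setup:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/access.log
  document_type: nginx_access

output.logstash:
  hosts: ["your-logstash-host:5044"]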
Enable and start the filebeat service and make sure the logs report a successful connection. Now on the nginx side we are dealing with the combined log format, which is the default if nothing is mentioned in your config directives, as in the example below:
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;

Time now to head to logstash server.
Here is the tricky bit, since the new layout tidies logstash up into an input > filter > output sort of discipline. We need to follow it, and basically it boils down to an input.conf that listens for any data on the logstash port, an output.conf that sends the filtered data to an elasticsearch server, and the filter segment, which is where all our interest resides. Here is the sample of my nginx filter that does the trick:
[root@elk 0]# cat /etc/logstash/conf.d/11-nginx-filter.conf
filter {
  if [type] == "nginx_access" {
    grok {
      match => [ "message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}"]
      overwrite => [ "message" ]
    }
  }
}

If you happen to use geoip, then a geoip section needs to be added to the filter as well:
geoip {
  source => "clientip"
}

Remember, the source field name is the actual field that holds the address to geolocate. Note also that the geoip plugin needs to be installed in your ELK stack:
/usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip
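For completeness, the input and output pieces mentioned earlier are short; a sketch of what they boil down to, with the port and the elasticsearch host being assumptions for your environment:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}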

Not without my say so….



Living in an embargoed state? Then this is the common symptom: you are barred from accessing resources freely available to any soul around the world. I am not here to lecture on whether it is right or wrong, but my pragmatic side advises me to circumvent the bar with no burden on my conscience. Do not take me wrong: there is no wrongdoing in wanting open-source content that happens to be hosted on US soil and, unlike 90%+ of other resources, is suddenly not available to you. Examples? Consider Docker Hub images; Elastic resources, from Elasticsearch itself to its plugins; or resources related to VirtualBox, or Oracle bits like OCFS or their unbreakable kernel hehehe…

Now, normally what we do is create a VPN tunnel to a VPS halfway across the world and then get the things we want through it. OpenVPN is a fantastic solution and works like a charm… actually, for people oppressed by our regime, that is how we securely get through to social media and uncensored content on the Internet from our PCs. However, working on servers remotely (ssh access) and running OpenVPN gets our ssh session dropped, i.e.
everything is passed to the newly created tunnel, which forwards it all to the other end:
default via 10.8.0.1 dev tun0 proto static metric 50
and you have to devise other methods of access, like host-to-guest console access, be it KVM, ESXi or anything else. Now the issue with the remote console, as you may guess, is the inability to copy and paste things from your machine, which makes it cumbersome.
Recently I addressed this through an easier and cheaper method, with even a bonus of security in there 🙂 bear with me while I explain:

Basically it boils down to two packages: proxychains and tor. Proxychains is essentially a SOCKS proxy wrapper for processes run in the terminal, pointing them at the Tor daemon on localhost:9050.
Here is the nitty-gritty of the process (instructions for CentOS 7, run as root):
a. install the dev tools:
yum groupinstall "Development Tools"
b. install git, wget and vim as the tools to use:
yum install epel-release && yum update && yum install git wget vim
c. install tor and edit the torrc to log:
yum install tor && vim /etc/tor/torrc
uncomment below line:
Log notice file /var/log/tor/notices.log
d. enable the tor service and start it:
systemctl enable tor && systemctl start tor && tail /var/log/tor/notices.log
The latter will tell you if the circuit is functional.
Now it is time to compile proxychains.
a. cd to the directory and clone proxychains:
cd /usr/local/src && git clone https://github.com/rofl0r/proxychains-ng.git
b. configure, make and make install:
./configure && make && make install
c. Now you have the proxychains binary and conf in their respective dirs:
/usr/local/bin/proxychains4 and /usr/local/etc/proxychains.conf
I tend to make a symbolic link to make things easier for myself:
ln -s /usr/local/bin/proxychains4 /usr/local/bin/proxychains
Lastly, you may want to change just one line in the conf file to make proxychains talk to tor over socks5:
vim /usr/local/etc/proxychains.conf
socks5 127.0.0.1 9050

That is basically all. Now any command in bash that needs circumvention just has to be preceded by proxychains, like:
proxychains curl -fsSL https://get.docker.com/ |proxychains sh
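A quick sanity check that traffic really leaves through Tor (check.torproject.org is the Tor Project's own test page; when a Tor exit is detected, the returned HTML contains a congratulations message):

proxychains curl -s https://check.torproject.org/ | grep -i congratulations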

Voila enjoy… 😉

Elasticsearch: deadly gorgeous



Do you have Elasticsearch somewhere in your infra? I do, and extensively so, but this post is not about how gorgeous it is to use, rather how ugly it can get after the first six months or the first year. We use the tool in both Graylog and the ELK stack. If you have ever used it, you know it is quite a headache to manage and monitor, i.e. administering the size on disk, the various indices, and snapshotting.
Let's be honest, the Graylog interface has got retention/rotation right in the System/Indices menu; a thumbs up for the Graylog guys. This is where the ELK stack fails poorly, particularly with the newest and brightest of them all: version 5.6.
I admit to having spent many days and weeks in search of a better tool than simply resorting to curl -XDELETE http://localhost:9200/the_target_indices -u "username:password".

There is a tool called elasticsearch-HQ where, with a tweak to the elasticsearch.yml file and the entries below added:
http.cors.allow-origin: "*"
http.cors.enabled: true
node.master: true

you may connect to it in a browser and find a massive amount of information about every part of your Elasticsearch. However, it does not allow you any administration… it is rather a pretty GUI for the things under the hood.

This is where we need to turn to a python tool called elasticsearch-curator, which does the job. It was recently acquired by the Elastic team and is maintained by them, which shows the importance of such a tool. You can find the full documentation on the Elastic website.
Here is the impatient guide to installing and running it on the Elasticsearch node.

Installation
a. Install pip
yum install python-pip python-virtualenv
We need a python virtual env to keep the host environment separate from where we run curator, so that the versioning of a heck of a lot of python tools does not get in your hair.
b. create a dir
mkdir curator-virt
c. create the environment
virtualenv curator-virt
d. get to the environment
. curator-virt/bin/activate
e. Install elasticsearch-curator
pip install elasticsearch-curator
f. check the version, and note the compatibility of your curator version with your Elasticsearch version in the compatibility matrix of the curator docs
curator --version

The rest of this article assumes that you successfully got to the stage where you have the latest curator, i.e. 5.3, and the latest Elasticsearch, i.e. 5.6, which happens to be secured by X-Pack. Basically you need two files, preferably in the same dir as your working env for ease of use and reference: config.yml and action_delete.yml
-config.yml sample:
client:
  hosts:
    - 127.0.0.1             # the interface elasticsearch listens on
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth: "uname:password"
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['urllib3']

-action_delete.yml example (caution: please do read the descriptions and change them accordingly):
# If you want to use this action as a template, be sure to set disable_action
# to False after copying it.
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 30 days (based on index name), for metricbeat-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: metricbeat-    # trying to filter only the *metricbeat* indices
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 30
      exclude:

Now let's get dirty: first things first, run the CLI to list the indices and make sure all is good:
curator_cli --config config.yml show_indices
It should return all your indices… and now for the action:
curator --config config.yml action_delete_metricbeat.yml
which will delete metricbeat indices older than 30 days.
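To run this automatically, a nightly cron entry pointing at the virtualenv's copy of curator is enough; a sketch, with the paths being assumptions for wherever you created curator-virt and the two YAML files:

0 1 * * * /root/curator-virt/bin/curator --config /root/curator-virt/config.yml /root/curator-virt/action_delete.yml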

Sweet firewalld and selinux with no sweat


How many times have you heard the cliché of turning off firewalld and disabling SELinux right after spinning up any fresh machine?
This post is a minimal how-to on not doing so, kept as a personal reference for me and the lookalikes stumbling in here….
Let's get dirty:
Note: all the instructions presume that you are on CentOS 7 with root privileges.
First things first: install the EPEL repository by:
yum install epel-release && yum update

SELinux
SELinux is important as it blocks processes from running outside their defined dirs or from utilizing unexpected ports. Having said that, I'll get to the core scenarios you need to know to smooth out its operation.
Scenario 1: Due to a new requirement, you add a new drive, partition it and assign it entirely to /var/log, as the box generates a lot of logs or acts as a centralized rsyslog for other boxes. Once it shows up in blkid and is mounted to a temp location like /mnt/temp, you rsync -av /var/log/ /mnt/temp, then modify /etc/fstab so the new drive is mounted to /var/log and reboot. You are spat out with a lot of errors at boot and your box even drops into emergency mode…
Solution to scenario 1
Give the root password at the emergency prompt, comment out the new /var/log drive in /etc/fstab and reboot.
Install policycoreutils by:
yum install policycoreutils-python ; which, by the way, is in the EPEL repo
Issue the command below to reset all the necessary SELinux values for the new dir:
chcon -v -R --reference /the/model/dir /the/target/dir/or/file
In our case that will be:
chcon -v -R --reference /var/log/ /mnt/temp ; presuming that the new drive is mounted to /mnt/temp. Uncomment the entry in /etc/fstab and it is time to reboot, normally this time….
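A related approach worth knowing: instead of copying contexts from a reference dir, restorecon can reset them straight from the loaded policy (shown here against the same temporary mount point assumed above):

restorecon -R -v /mnt/temp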
Scenario 2
You have nginx installed to proxy-pass your application. However, browsing to the port nginx serves shows nothing. You check the firewall and all of nginx and everything is pristine… tailing /var/log/audit/audit.log is where you catch some interesting output:
cat /var/log/audit/audit.log | grep nginx | grep denied
Solution to scenario 2
Just run this command, presuming you have already installed policycoreutils-python:
cat /var/log/audit/audit.log | grep nginx | grep denied | audit2allow -M nginx
It asks you to activate the generated module, so run this as well:
semodule -i nginx.pp

Firewalld

You may want a certain set of IPs to be trusted outside the scope of zones, for reasons that arise in clusters where the applications are complex and communicate over multiple different ports, such as an NDB cluster.

Create an ipset (here we are targeting individual IPs, not networks):
$firewall-cmd --new-ipset=databaseNodes --type=hash:ip --permanent
$firewall-cmd --reload

Add an individual IP, or a group of them, to the created ipset, here called databaseNodes:
$firewall-cmd --ipset=databaseNodes --add-entry=192.168.122.1 --permanent
or
$firewall-cmd --ipset=databaseNodes --add-entry={192.168.122.2,192.168.122.3} --permanent

Create a rich rule referring to the ipset to accept all connections from the group:
$firewall-cmd --add-rich-rule 'rule family="ipv4" source ipset=databaseNodes accept' --permanent

Check the firewalld status for the rules:
$firewall-cmd --list-all
Finally, list the specific database ipset and its entries:
$firewall-cmd --info-ipset=databaseNodes
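Since everything above was added with --permanent, one final reload is needed for the runtime configuration to pick the rules up:

$firewall-cmd --reload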

I'll add to the above scenarios as I go; this is a work in progress, since the tricks in the field are many…