How To Protect your Linux Server Against the GHOST Vulnerability

On January 27, 2015, a GNU C Library (glibc) vulnerability, referred to as the GHOST vulnerability, was announced to the general public. In summary, the vulnerability allows remote attackers to take complete control of a system by exploiting a buffer overflow bug in glibc’s gethostbyname functions (hence the name). Like Shellshock and Heartbleed, this vulnerability is serious and affects many servers.

The GHOST vulnerability can be exploited on Linux systems that use versions of the GNU C Library prior to glibc-2.18. That is, systems that use an unpatched version of glibc from versions 2.2 to 2.17 are at risk. Many Linux distributions including, but not limited to, the following are potentially vulnerable to GHOST and should be patched:

  • CentOS 6 & 7
  • Debian 7
  • Red Hat Enterprise Linux 6 & 7
  • Ubuntu 10.04 & 12.04
  • End of Life Linux Distributions

It is highly recommended that you update and reboot all of your affected Linux servers. We will show you how to test if your systems are vulnerable and, if they are, how to update glibc to fix the vulnerability.

Check System Vulnerability

The easiest way to test if your servers are vulnerable to GHOST is to check the version of glibc that is in use. We will cover how to do this in Ubuntu, Debian, CentOS, and RHEL.

Note that binaries that are statically linked to the vulnerable glibc must be recompiled to be made safe—this test does not cover these cases, only the system’s GNU C Library.

Ubuntu & Debian

Check the version of glibc by looking up the version of ldd (which uses glibc) like this:

ldd --version

The first line of the output will contain the version of eglibc, the variant of glibc that Ubuntu and Debian use. It might look like this, for example, with the version at the end of the first line:

ldd (Ubuntu EGLIBC 2.15-0ubuntu10.7) 2.15
Copyright (C) 2012 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.

If the version of eglibc matches, or is more recent than, the ones listed here, you are safe from the GHOST vulnerability:

  • Ubuntu 12.04 LTS: 2.15-0ubuntu10.10
  • Ubuntu 10.04 LTS: 2.11.1-0ubuntu7.20
  • Debian 7 LTS: 2.13-38+deb7u7

If the version of eglibc is older than the ones listed here, your system is vulnerable to GHOST and should be updated.
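
Comparing package version strings by eye is error-prone, so here is a minimal sketch of an automated comparison. It assumes GNU sort with the -V (version sort) option is available; the version strings are examples taken from the lists above and would be replaced with your own:

```shell
installed="2.15-0ubuntu10.7"     # example: parsed from `ldd --version` output
fixed="2.15-0ubuntu10.10"        # first fixed version for Ubuntu 12.04 LTS

# With version sort, the older of the two strings sorts first.
oldest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n 1)

if [ "$installed" = "$fixed" ] || [ "$oldest" = "$fixed" ]; then
    echo "glibc looks patched"
else
    echo "glibc is VULNERABLE to GHOST"
fi
```

With the example values above, the installed version sorts before the fixed one, so the script reports the system as vulnerable.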

CentOS & RHEL

Check the version of glibc with rpm:

rpm -q glibc

The output should look like this, with the package name followed by version information:

glibc-2.12-1.132.el6_5.4.x86_64

If the version of glibc matches, or is more recent than, the ones listed here, you are safe from the GHOST vulnerability:

  • CentOS 6: glibc-2.12-1.149.el6_6.5
  • CentOS 7: glibc-2.17-55.el7_0.5
  • RHEL 5: glibc-2.5-123.el5_11.1
  • RHEL 6: glibc-2.12-1.149.el6_6.5
  • RHEL 7: glibc-2.17-55.el7_0.5

If the version of glibc is older than the ones listed here, your system is vulnerable to GHOST and should be updated.

Fix Vulnerability

The easiest way to fix the GHOST vulnerability is to use your default package manager to update the version of glibc. The following subsections cover updating glibc on various Linux distributions, including Ubuntu, Debian, CentOS, and Red Hat.

APT-GET: Ubuntu / Debian

For currently supported versions of Ubuntu or Debian, update all of your packages to the latest version available via apt-get dist-upgrade:

sudo apt-get update && sudo apt-get dist-upgrade

Then respond to the confirmation prompt with y.

When the update is complete, reboot the server with this command:

sudo reboot

A reboot is necessary since the GNU C Library is used by many applications that must be restarted to use the updated library.

Now verify that your system is no longer vulnerable by following the instructions in the previous section (Check System Vulnerability).
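
To see why the reboot matters, you can list the processes that still have the old, now-deleted libc mapped in memory; anything shown keeps running the vulnerable code until it is restarted. This is a sketch, assuming lsof is installed (deleted-but-mapped files are flagged "DEL" in its output):

```shell
# List processes still mapping a deleted (pre-update) libc; each of these
# must be restarted (or the machine rebooted) to pick up the patched library.
lsof 2>/dev/null | awk '/libc-/ && /DEL/ {print $1, $2}' | sort -u
```

An empty result means no running process is still using the old library.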

YUM: CentOS / RHEL

Update glibc to the latest version available via yum:

sudo yum update glibc

Then respond to the confirmation prompt with y.

When the update is complete, reboot the server with this command:

sudo reboot

A reboot is necessary since the GNU C Library is used by many applications that must be restarted to use the updated library.

Now verify that your system is no longer vulnerable by following the instructions in the previous section (Check System Vulnerability).

Posted in Linux How-To | Leave a comment

Amanda Cheat Sheet

Config

The workings of Amanda all center around one or more config directories you set up. We currently have only one configuration, which dumps all the workstations; it is called all.

All of the Amanda commands need the name of the config so they know how to handle your request. One item of interest is the tape device you are using for Amanda:

	# fgrep 'tapedev' /usr/local/amanda/config/all/amanda.conf
	tapedev "/dev/rmt/tps3d1nrvc"   # or use the (no-rewind!) tape device directly

You will need to know this to unmount tapes and do restores. One handy way to get this set correctly is to put something like this in the /bin/.cshrc file:

	setenv TAPE `grep tapedev /usr/local/amanda/config/all/amanda.conf | tr '"' ' ' | nawk '{print $2}'`

Then the TAPE variable is set whenever you log in as bin to do Amanda work.

Preparing a Tape

Before you can use a tape in Amanda you must label it. Put the tape in the drive you usually use for Amanda and get the label string from the config file, so you give the tape a label that matches the pattern chosen for the config you are using. For example:

	# fgrep 'labelstr' /usr/local/amanda/config/all/amanda.conf
	labelstr "^ARCall[0-9][0-9]*$"  # label constraint regex:
	# amlabel all ARCall99

We label our tapes something like ARCall04 or ARCall999.
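
Since amlabel will refuse a label that doesn't match the labelstr pattern, you can sanity-check a candidate label first. A small sketch, using the regex shown in the config excerpt above:

```shell
# Check a candidate tape label against this config's labelstr regex
# before labeling the tape with it.
label="ARCall99"
if echo "$label" | grep -q '^ARCall[0-9][0-9]*$'; then
    echo "label ok: $label"
else
    echo "label does not match labelstr: $label"
fi
```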

Preparing for an Amanda Dump

Amanda also provides a handy command to check and make sure things are correctly prepared for the next dump. The following will check the server to make sure there is enough room on the holding disk to buffer the dumps before they are written to tape, and make sure the correct tape is in the drive:

	# /usr/local/amanda/bin/amcheck -s all
	Amanda Tape Server Host Check
	-----------------------------
	/export/backups/amanda/all: 8882376 KB disk space available, that's plenty.
	Tape ARCall152 label ok.
	NOTE: skipping tape-writeable test.
	Server check took 0.088 seconds.
	(brought to you by Amanda 2.3.0.4)

Errors from the above command are pretty self-explanatory. Here are a few examples of what you might see:

	ERROR: cannot overwrite active tape ARCatRICE23.
		(expecting tape ARCatRICE24 or a new tape)
	ERROR: /dev/rmt/tps3d1nrvc: rewinding tape: Resource temporarily unavailable.
		(expecting tape ARCall24 or a new tape)

This will check to see that all the clients are up and running and able to talk to Amanda:

	# /usr/local/amanda/bin/amcheck -c all
	Amanda Backup Client Hosts Check
	--------------------------------
	Client check: 46 hosts checked in 10.390 seconds, 0 problems found.
	(brought to you by Amanda 2.3.0.4)

Running an Amanda Dump

The version of Amanda we run uses the native Unix dump program to back up the filesystems. The following will do the night's dumps:

	# /usr/local/amanda/bin/amdump all

and send email to the people/alias listed in the config file. The output would look something like this:

	To: arc_amanda@arc.umn.edu
	Subject: AHPCRC AMANDA MAIL REPORT FOR April 28, 1999

	These dumps were to tape ARCall92.
	Tonight's dumps should go onto 1 tape: ARCall24.

	FAILURE AND STRANGE DUMP SUMMARY:
  	in7        / lev 0 FAILED [no estimate]

	STATISTICS:
                          	Total     Full    Daily
                       	-------- -------- --------
	Dump Time (hrs:min)       1:57     1:21     0:18 (0:07 start, 0:11 idle)
	Output Size (meg)       4043.0   3393.5    649.5
	Original Size (meg)     4043.0   3393.5    649.5
	Avg Compressed Size (%)    --       --       -- 
	Tape Used (%)             19.3     16.2      3.1 (level:#disks ...)
	Filesystems Dumped          57        5       52 (1:31 2:3 3:15 4:2 5:1)
	Avg Dump Rate (k/s)      559.1    713.2    262.6
	Avg Tp Write Rate (k/s)  698.3    712.8    631.1
	NOTES:
  	planner: Request to in7 timed out.
  	planner: Incremental of is:/usr/home bumped to level 4.
	DUMP SUMMARY:
                             	DUMPER STATS                  TAPER STATS
	HOSTNAME  DISK  L  ORIG-KB   OUT-KB COMP%  MMM:SS   KB/s  MMM:SS   KB/s
	----------------- -------------------------------------- --------------
	i0        /     1    38784    38784   --     0:58  666.0    0:25 1570.6
	i10       /     1    33632    33632   --     2:09  259.9    2:10  259.1
	i2        /     3     5120     5120   --     0:37  137.1    0:02 2903.5
	...
	(brought to you by Amanda version 2.3.0.4)

Flushing Amanda

Occasionally a normal dump won’t run because Amanda can’t access the tape drive. This can happen for a variety of reasons: the drive might need cleaning and may have promptly kicked out the tape you thought you loaded, the SCSI bus is getting reset, the wrong tape was loaded, the previous night’s tape was never unloaded, and so on. In any case, Amanda will try to at least dump the incremental changes that occurred on each system and put them on the holding disk. To get them onto tape, just mount the tape Amanda really wanted to write to and run a flush:

	# amadmin all tape
	The next Amanda run should go onto tape ARCall20 or a new tape.
	# amflush -f all
	Scanning /export/backups/amanda/all...
  	19990421: found non-empty Amanda directory.

	Flushing dumps in 19990421 to tape drive /dev/rmt/tps3d1nrvc.
	Expecting tape ARCall20 or a new tape.  (The last dumps were to tape ARCall19)
	Are you sure you want to do this? y
	taper: pid 11000 executable taper version 2.3.0.4
	taper: read label `ARCall20' date `19981223'
	taper: wrote label `ARCall20' date `19990421'
	taper: reader-side: got label ARCall20 filenum 1
	taper: reader-side: got label ARCall20 filenum 2
	taper: reader-side: got label ARCall20 filenum 3
	...
	taper: reader-side: got label ARCall20 filenum 18
	taper: DONE [idle wait: 2.002 secs]
	taper: writing end marker.

Locating the Right Tape for a Restore

Let’s assume a user lost a file on machine ivie in the filesystem /usr/people. To find out what dump levels were done on what tapes on what days, use this command:

	# amadmin all find ivie /usr/people | head -6
	date        host disk lv tape  file status
	1999-08-31  ivie /usr/people 2  ARCatRICE178  19 OK
	1999-08-27  ivie /usr/people 2  ARCatRICE177  18 OK
	1999-08-26  ivie /usr/people 1  ARCatRICE176  12 OK
	1999-08-25  ivie /usr/people 1  ARCatRICE175  14 OK
	1999-08-24  ivie /usr/people 0  ARCatRICE174  50 OK
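
For a full restore you generally want the most recent level 0 tape plus the later incrementals. A sketch of pulling the most recent level 0 tape out of that listing with awk, using a canned copy of the output above so it runs standalone (on a live system you would pipe `amadmin all find ...` in directly):

```shell
# Columns in `amadmin ... find` output: date host disk level tape file status.
find_output='1999-08-31  ivie /usr/people 2  ARCatRICE178  19 OK
1999-08-26  ivie /usr/people 1  ARCatRICE176  12 OK
1999-08-24  ivie /usr/people 0  ARCatRICE174  50 OK'

# Print the tape name of the first (most recent) level 0 entry.
level0_tape=$(echo "$find_output" | awk '$4 == 0 {print $5; exit}')
echo "$level0_tape"
```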

Performing a Local Restore

Now that you know what tape to load and restore from, you simply need to load the tape on the usual Amanda device. Since Amanda writes a tape label file on the front of a tape, and then a series of files for each filesystem it dumps, it is easier to restore if you use Amanda commands to extract the info from it.

	# mkdir /tmp/restore ; cd /tmp/restore
	# amrestore -p /dev/rmt/tps0d5nrv ivie /usr/people | restore -ivf - .
	Verify tape and initialize maps
	amrestore:   0: skipping start of tape: date 19990610 label ARCall122
	amrestore:   1: skipping kosh._.19990610.1
	amrestore:   2: skipping lupo._.19990610.1
	amrestore:   3: skipping s99._.19990610.1
	...
	amrestore:  54: restoring ivie._usr_people.19990610.0
	Dump   date: Thu Jun 10 20:56:01 1999
	Dumped from: the epoch
	Level 0 dump of / on ivie.arc.umn.edu:/dev/sd0s1a
	Label: none
	Extract directories from tape
	Initialize symbol table.
	restore >

Performing a Local XFS Restore

Beware! On an SGI there are several different types of filesystems. The EFS filesystem is the native one that the normal Unix dump/restore works on. SGIs also have XFS filesystems, and those must be built, checked, dumped, and restored with a special set of xfs commands. Thus you need to use xfsrestore to pull files and directories out of an xfsdump file. Note too that you can’t pipe amrestore output straight into an interactive xfsrestore, so you have to do a bit more work to restore from an XFS filesystem. Here is an example:

	# amrestore -p $TAPE in10 '/usr' > /tmp/dumpfile
	amrestore:   0: skipping start of tape: date 19990819 label ARCall171
	amrestore:   1: skipping i10._usr_staff_Images.19990819.1
	amrestore:   2: skipping kosh._.19990819.1
	amrestore:   3: skipping i4._.19990819.2
	...
	amrestore:  32: restoring in10._usr_people.19990819.2
	# xfsrestore -i -v verbose -f /tmp/dumpfile .
	xfsrestore: version 2.0 - type ^C for status and control
	xfsrestore: searching media for dump
	xfsrestore: examining media file 0
	xfsrestore: dump description: 
	xfsrestore: hostname: in10
	xfsrestore: mount point: /usr/people
	xfsrestore: volume: /dev/dsk/dks0d3s6
	xfsrestore: session time: Thu Aug 19 20:20:04 1999
	xfsrestore: level: 2
	...
	xfsrestore: directory post-processing
	========== subtree selection dialog =======================
	the following commands are available:
	        pwd 
	        ls [ <path> ]
	        cd [ <path> ]
	        add [ <path> ]
	        delete [ <path> ]
	        extract 
	        quit 
	        help 
	
	 ->

Performing a Remote Restore

What if your Amanda server and the client you want to restore are different architectures, say an SGI server and a FreeBSD PC? Chances are the SGI machine won’t understand the FreeBSD machine’s dump. Try remote shelling over to the server from the client and piping the output back to the client. Also, don’t forget the “-n” option to rsh; otherwise you will lose control of the standard input to the restore command. This becomes obvious when you can’t enter anything at the interactive prompt! Here’s an example:

	# mkdir /tmp/restore ; cd /tmp/restore
	# rsh -n i0 "/usr/local/amanda/bin/amrestore -p $TAPE valen '^/$'" | restore -ivf - .
	...

A Kerberized variant looks much the same (the DES notice below comes from krsh):

# kinit root
# krsh i10 -n /usr/local/amanda/bin/amrestore -p /dev/rmt/tps3d1nrvc delenn /home5 | /sbin/restore -ivf - .
Verify tape and initialize maps
This rsh session is using DES encryption for all data transmissions.
amrestore:   0: skipping start of tape: date 20000321 label ARCall107
amrestore:   1: skipping valen._.20000321.1
amrestore:   2: skipping delenn._.20000321.1
...
amrestore:  56: skipping kosh._var_mail.20000321.1
amrestore:  57: skipping delenn._home2.20000321.3
amrestore:  58: restoring delenn._home5.20000321.0
Dump   date: Wed Mar 22 00:02:24 2000
Dumped from: the epoch
Level 0 dump of /home5 on delenn.arc.umn.edu:/dev/da2s1a
Label: none
Extract directories from tape
Initialize symbol table.
restore >  ls 
.:
     2 ./                 508349 herbert/           444864 nrowe-WES_DELETE/
     2 ../                301590 hinman/                 3 quota.user 
119211 avr/               507996 jrm/               484450 rannow/
 31744 bbryan-WES_DELETE/ 547593 kjm/               135105 ray-WES_DELETE/
325839 ewing/             531876 lbuhse/             47991 sko/
436493 frank/             452852 lost+found/        373477 tdavis/
103245 gumby/             214588 maier-WES_DELETE/     608 users.dat 

restore > cd kjm
restore > ls
./kjm:
547593 ./                     547653 .webspace-preferences 
     2 ../                    547654 .wm_style 
547594 ...cshrc               547655 .wshttymode 
547595 ...signature             8049 .xauth/
547596 ..cshrc                547656 .xcontactPrefs 
547597 ..disableDesktop       547657 .xinitrc 
547598 ..login                547658 .xinitrc.bak 
547599 ..mwmrc                547659 .xmodmap 
547601 .4Dwmrc                547660 .xrn 
restore > add .cshrc
Make node ./kjm
restore > extract
Extract requested files
extract file ./kjm/.cshrc
Add links
Set directory mode, owner, and times.
set owner/mode for '.'? [yn] n
restore > quit




Cron Entry

We run Amanda Monday through Friday every week on i0 via cron. Here are the cron entries we use; note that it runs as user bin, not root:

	# id
	uid=2(bin) gid=2(bin)
	# crontab -l |grep amanda
	0 15 * * 1-5 /usr/local/amanda/bin/amcheck -m all
	0 20 * * 1-5 /usr/local/amanda/bin/amdump all

The above commands help remind us to load the day’s tapes and flush any dumps that may be clogging up the holding disk, then run the dump automagically later at night when we are gone and the system and network are more lightly used.

Common Problems and Fixes

If you use a non-root account to do your dumps (which is highly recommended), you will find that machines with recent OS upgrades and new machines may not dump. If this is the case, you may need to make group and mode changes to your raw filesystem devices and your dump/restore binaries. In the examples below we use the “bin” account as the non-root account. We can confirm that it is a permissions problem by connecting to the machine having trouble and looking at the /tmp/sendsize.debug file:

	# less /tmp/sendsize.debug
	sendsize: debug 1 pid 13027 ruid 2 euid 2 start time Thu Dec 16 20:00:05 1999
	/usr/local/amanda/libexec/sendsize: version 2.3.0.4
	calculating for amname '/', dirname '/'
	sendsize: getting size via dump for / level 0
	sendsize: running "/sbin/dump 0sf 100000 - /"
	 DUMP: Cannot open/stat /dev/rroot, Permission denied <- This is the problem!
	.....
	(no size line match in above dump output)
	.....
	sendsize: pid 13027 finish time Thu Dec 16 20:00:17 1999
	

Next check the permissions on all the raw disk devices. They should all be group “bin” and mode 640:

	# ls -ld `df -l | tail +2 | sed 's?^/dev/?/dev/r?' | awk '{printf "%s ", $1}'`
	crw-------    2 root     root      128, 16 Feb  6 05:00 /dev/rroot
	crw-------    1 root     root      128,279 Feb  6 04:59 /dev/rdsk/dks1d1s7
	crw-------    1 root     root      128,310 Feb  6 03:26 /dev/rdsk/dks1d3s6

	# chgrp bin `df -l | tail +2 | sed 's?^/dev/?/dev/r?' | awk '{printf "%s ", $1}'`
	# chmod 640 `df -l | tail +2 | sed 's?^/dev/?/dev/r?' | awk '{printf "%s ", $1}'`

	# ls -ld `df -l | tail +2 | sed 's?^/dev/?/dev/r?' | awk '{printf "%s ", $1}'`
	crw-r-----    2 root     bin      128, 16 Feb  6 05:00 /dev/rroot
	crw-r-----    1 root     bin      128,279 Feb  6 04:59 /dev/rdsk/dks1d1s7
	crw-r-----    1 root     bin      128,310 Feb  6 03:26 /dev/rdsk/dks1d3s6
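
The backquoted df/sed/awk pipeline used above simply rewrites each mounted block device path into its raw-device twin. Shown in isolation on canned input (device names from the listing above), so the transformation is easy to verify:

```shell
# Turn block device paths into raw device paths, as the pipeline does
# with live `df -l` output.
printf '/dev/root\n/dev/dsk/dks1d1s7\n' | sed 's?^/dev/?/dev/r?'
```

This prints /dev/rroot and /dev/rdsk/dks1d1s7, matching the raw devices in the `ls -ld` output above.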

Finally, you may have problems with the dump/restore or xfsdump/xfsrestore binaries. We had problems with the xfs commands on IRIX, so we changed the group to “bin” and made the binaries set-uid root:

	# ls -ld /usr/sbin/xfsdump /sbin/xfsrestore
	-rwxr-xr-x    1 root     root      304084 Jan  6 10:34 /sbin/xfsrestore*
	-rwxr-xr-x    1 root     root      242644 Jan  6 10:37 /usr/sbin/xfsdump*

	# chgrp bin  /usr/sbin/xfsdump /sbin/xfsrestore
	# chmod 4750 /usr/sbin/xfsdump /sbin/xfsrestore
	# ls -ld /usr/sbin/xfsdump /sbin/xfsrestore
	-rwsr-x---    1 root     bin       304084 Jan  6 10:34 /sbin/xfsrestore*
	-rwsr-x---    1 root     bin       242644 Jan  6 10:37 /usr/sbin/xfsdump*

Commands to Know

Here is a short list of the Amanda commands and how to use them:

amadmin (8)        - administrative interface to control Amanda backups
amanda (8)         - Advanced Maryland Automatic Network Disk Archiver
amcheck (8)        - Amanda pre-run self-check
amcleanup (8)      - runs the Amanda cleanup process after a failure
amdump (8)         - backs up all disks in an Amanda configuration
amflush (8)        - flushes Amanda backup files from holding disk to tape
amlabel (8)        - labels an Amanda tape
amrestore (8)      - extract files from an Amanda tape
xfsrestore (1M)    - XFS filesystem incremental restore utility

Here are some examples to jump start your use of Amanda:

	amlabel all ARCatRice99
	amadmin all tape
	amflush -f all
	amcheck all
	amcheck -c all
	amcheck -s all
	amadmin all force
	amadmin all unforce
	amadmin all balance
	amadmin all find ivie /usr/people
	amrestore -p /dev/rmt/tps0d5nrv ivie /usr/people | restore -ivf - .
Posted in Linux How-To | Leave a comment

How to Protect your Server Against the Shellshock Bash Vulnerability

On September 24, 2014, a GNU Bash vulnerability, referred to as Shellshock or the “Bash Bug”, was disclosed. In short, the vulnerability allows remote attackers to execute arbitrary code under certain conditions, by passing strings of code following environment variable assignments. Because of Bash’s ubiquitous status amongst Linux, BSD, and Mac OS X distributions, many computers are vulnerable to Shellshock; all unpatched Bash versions from 1.14 through 4.3 (i.e. all releases to date) are at risk.

The Shellshock vulnerability can be exploited on systems that are running services or applications that allow unauthorized remote users to assign Bash environment variables. Examples of exploitable systems include the following:

  • Apache HTTP Servers that use CGI scripts (via mod_cgi and mod_cgid) that are written in Bash or that launch Bash subshells
  • Certain DHCP clients
  • OpenSSH servers that use the ForceCommand capability
  • Various network-exposed services that use Bash

A detailed description of the bug can be found at CVE-2014-6271, CVE-2014-7169, CVE-2014-7186, and CVE-2014-7187.

Because the Shellshock vulnerability is very widespread (even more so than the OpenSSL Heartbleed bug) and particularly easy to exploit, it is highly recommended that affected systems be updated to fix or mitigate the vulnerability as soon as possible. We will show you how to test if your machines are vulnerable and, if they are, how to update Bash to remove the vulnerability.

Check System Vulnerability

On each of your systems that run Bash, you may check for Shellshock vulnerability by running the following command at the bash prompt:

env 'VAR=() { :;}; echo Bash is vulnerable!' 'FUNCTION()=() { :;}; echo Bash is vulnerable!' bash -c "echo Bash Test"

The echo Bash is vulnerable! portion of the command represents where a remote attacker could inject malicious code: arbitrary code following a function definition within an environment variable assignment. Therefore, if you see the following output, your version of Bash is vulnerable and should be updated:

Bash is vulnerable!
Bash Test

If your output does not include the simulated attacker’s payload, i.e. “Bash is vulnerable” is not printed as output, you are protected against at least the first vulnerability (CVE-2014-6271), but you may be vulnerable to the other CVEs that were discovered later. If there are any bash warnings or errors in the output, you should update Bash to its latest version; this process is described in the next section.

If the only thing that is output from the test command is the following, your Bash is safe from Shellshock:

Bash Test
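
Because the first patch for CVE-2014-6271 was incomplete, a related parser bug (CVE-2014-7169) got its own widely circulated test. A sketch; run it in a scratch directory, since on a vulnerable Bash it creates a file named echo:

```shell
# CVE-2014-7169 check. On a vulnerable bash, the mangled function definition
# confuses the parser so that "date" is executed and its output is written
# to a file called "echo". On a patched bash, the word "date" is simply
# printed and no file appears.
cd "$(mktemp -d)"
env X='() { (a)=>\' bash -c "echo date"
cat echo 2>/dev/null || echo "no file named echo - patched"
```

If the cat prints a timestamp instead of the "patched" message, update Bash immediately.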

Test Remote Sites

If you simply want to test if websites or specific CGI scripts are vulnerable, use this link: ‘ShellShock’ Bash Vulnerability CVE-2014-6271 Test Tool.

Simply enter the URL of the website or CGI script you want to test in the appropriate form and submit.
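
Under the hood, such tools send the Shellshock payload in an HTTP header and look for the injected output in the response. A hedged sketch of the same idea with curl (the URL is a placeholder; substitute a CGI script on a server you own, and only test systems you are authorized to test):

```shell
# Send the function-definition payload via the User-Agent header; a vulnerable
# CGI endpoint executes the trailing echo, so the marker appears in the body.
url="http://example.com/cgi-bin/test.cgi"   # placeholder - use your own CGI script
if curl -s -H 'User-Agent: () { :; }; echo; echo SHELLSHOCK-MARKER' "$url" \
        | grep -q SHELLSHOCK-MARKER; then
    echo "vulnerable: payload was executed"
else
    echo "no payload echoed"
fi
```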

Fix Vulnerability: Update Bash

The easiest way to fix the vulnerability is to use your default package manager to update the version of Bash. The following subsections cover updating Bash on various Linux distributions, including Ubuntu, Debian, CentOS, Red Hat, and Fedora.

APT-GET: Ubuntu / Debian

For currently supported versions of Ubuntu or Debian, update Bash to the latest version available via apt-get:

sudo apt-get update && sudo apt-get install --only-upgrade bash

Now check your system vulnerability again by running the command in the previous section (Check System Vulnerability).

End of Life Ubuntu / Debian Releases

If you are running a release of Ubuntu / Debian that is considered end of life status, you will have to upgrade to a supported release to use the package manager to update Bash. The following command can be used to upgrade to a new release (it is recommended that you back up your server and important data first, in case you run into any issues):

sudo do-release-upgrade

After the upgrade is complete, ensure that you update Bash.

YUM: CentOS / Red Hat / Fedora

Update Bash to the latest version available via yum:

sudo yum update bash

Now check your system vulnerability again by running the command in the previous section (Check System Vulnerability).

End of Life CentOS / Red Hat / Fedora Releases

If you are running a release of CentOS / Red Hat / Fedora that is considered end of life status, you will have to upgrade to a supported release to use the package manager to update Bash. The following command can be used to upgrade to a new release (it is recommended that you back up your server and important data first, in case you run into any issues):

sudo yum update

After the upgrade is complete, ensure that you update Bash.

Posted in Linux How-To | Leave a comment

Blocking a DNS DDOS using the fail2ban package

This article shows how you can reject these DDOS attempts via the fail2ban package.

These events look something like this:

System Events
=-=-=-=-=-=-=
Jan 21 06:02:13 www named[32410]: client 66.230.128.15#15333: query (cache)
+'./NS/IN' denied

Tired of your DNS server being used as someone’s DOS amplifier weapon? Try Debian’s fail2ban package. The homepage for fail2ban is http://www.fail2ban.org

First install the Debian fail2ban package. By default it only watches and bans ssh. That is probably a good idea, further discussion of which is somewhat beyond the scope of this article.

apt-get install fail2ban

Then inspect the contents of /etc/fail2ban/jail.conf.
As per the notes at the end of that file, you’ll need to modify your BIND logging so fail2ban can understand it.

First make the directory for the bind log file.

mkdir /var/log/named
chmod a+w /var/log/named

I’m sure a reader will complain about making a log file a+w, but it is the simplest way to make this demo work. In your spare time, once everything works, find a better way.

Next, edit /etc/bind/named.conf.local and add the following lines

logging {
    channel security_file {
        file "/var/log/named/security.log" versions 3 size 30m;
        severity dynamic;
        print-time yes;
    };
    category security {
        security_file;
    };
};

Restart BIND using /etc/init.d/bind9 restart
Test BIND to make sure it’s still working, and verify that the log file /var/log/named/security.log is filling up with lines like this:

21-Jan-2009 07:19:54.835 client 66.230.160.1#28310: query (cache) './NS/IN' denied

OK, now to set up fail2ban. Edit the /etc/fail2ban/jail.conf file and change from:

[named-refused-udp]

enabled  = false

to:

[named-refused-udp]

enabled  = true

and from:

[named-refused-tcp]

enabled  = false

to:

[named-refused-tcp]

enabled  = true

Then restart fail2ban in the usual manner,

/etc/init.d/fail2ban restart

Now verify that fail2ban is doing something by checking the log file located at /var/log/fail2ban.log. It should contain something like:

2009-01-21 07:34:32,800 fail2ban.actions: WARNING [named-refused-udp] Ban 76.9.16.171
2009-01-21 07:34:32,902 fail2ban.actions: WARNING [named-refused-tcp] Ban 76.9.16.171
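
To summarize which addresses have been banned so far, the Ban lines can be pulled apart with awk. A sketch using a canned copy of the log excerpt above so it runs standalone (on a live system you would read /var/log/fail2ban.log instead):

```shell
# Unique IPs fail2ban has banned, parsed from fail2ban.log-style lines.
log='2009-01-21 07:34:32,800 fail2ban.actions: WARNING [named-refused-udp] Ban 76.9.16.171
2009-01-21 07:34:32,902 fail2ban.actions: WARNING [named-refused-tcp] Ban 76.9.16.171'

echo "$log" | awk '/ Ban /{print $NF}' | sort -u
```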

Verify that fail2ban is modifying the iptables rules

iptables -L

Now verify that fail2ban’s iptables rules are actually stopping access

tail -f /var/log/named/security.log

DNS error messages should be several minutes apart rather than multiple per second.

Now for some fine tuning.

First we have to modify logcheck to look at the new location of named error messages. Edit /etc/logcheck/logcheck.logfiles and add this to the end of the file:

/var/log/named/security.log

Next, modify logcheck to report what fail2ban is doing. Edit the same file, /etc/logcheck/logcheck.logfiles, and add this line to the end of the file:

/var/log/fail2ban.log

Now verify you are getting both named and fail2ban messages in your hourly logcheck emails.

Posted in Linux How-To | Leave a comment

How to Flush DNS cache

HowTo: Flush nscd dns cache

Nscd caches libc-issued requests to the Name Service. If retrieving NSS data is fairly expensive, nscd is able to speed up consecutive access to the same data dramatically and increase overall system performance. Just restart nscd:
$ sudo /etc/init.d/nscd restart
OR
# service nscd restart
OR
# service nscd reload
This daemon provides a cache for the most common name service requests. The default configuration file, /etc/nscd.conf, determines the behavior of the cache daemon.

Flush dnsmasq dns cache

dnsmasq is a lightweight DNS, TFTP and DHCP server. It is intended to provide coupled DNS and DHCP service to a LAN. Dnsmasq accepts DNS queries and either answers them from a small, local cache or forwards them to a real, recursive DNS server. This software is also installed on many cheap routers to cache dns queries. Just restart the dnsmasq service to flush out its dns cache:
$ sudo /etc/init.d/dnsmasq restart
OR
# service dnsmasq restart

Flush caching BIND server dns cache

A caching BIND server obtains information from another server (a Zone Master) in response to a host query and then saves (caches) the data locally. All you have to do is restart bind to clear its cache:
# /etc/init.d/named restart
You can also use the rndc command to flush out all of the cache:
# rndc flush
BIND v9.3.0 and above support flushing all of the records attached to a particular domain name with the rndc flushname command. In this example, flush all records related to the cyberciti.biz domain:
# rndc flushname cyberciti.biz
It is also possible to flush out BIND views. For example, the lan and wan views can be flushed using the following commands:
# rndc flush lan
# rndc flush wan

A note about Mac OS X Unix users

Type the following command as root user:
# dscacheutil -flushcache
OR
$ sudo dscacheutil -flushcache
If you are using OS X 10.5 or earlier try the following command:
lookupd -flushcache

A note about /etc/hosts file

/etc/hosts acts as a static lookup table for hostnames. You need to remove and/or update its records as per your requirements under Unix-like operating systems:
# vi /etc/hosts
Sample outputs:

127.0.0.1	localhost
127.0.1.1	wks01.WAG160N	wks01
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.37.34.2     build
192.168.1.10	nas01
192.168.1.11	nas02
192.168.1.12	nas03
#192.168.2.50	nfs2.nixcraft.net.in nfs2
#192.168.2.51	nfs1.nixcraft.net.in nfs1
172.168.232.50  nfs1.nixcraft.net.in nfs1
172.168.232.51  nfs2.nixcraft.net.in nfs2
192.168.1.101	vm01

Posted in Linux How-To | Leave a comment

How to setup NRPE monitoring with Nagios

1 – Summary

The NRPE (Nagios Remote Plugin Executor) add-on allows you to monitor any number of remote network devices and services using Nagios. Installing and configuring a Nagios server is not part of this HowTo; you will need a Nagios machine already in place on your internal network to monitor the host we install the nrpe daemon on.

2 – Install nrpe and nagios-plugins-all

You will need the yum EPEL repository enabled on your Red Hat/CentOS machine. Check out this guide: Enable EPEL Repository

sudo yum install nrpe
sudo yum install nagios-plugins-all

3 – Edit nrpe.cfg to allow your Nagios server

Edit nrpe configuration file:

vim /etc/nagios/nrpe.cfg

Find the allowed_hosts line. It is a comma-separated list; add your Nagios server’s IP to the list:

allowed_hosts=127.0.0.1,192.168.1.100

4 – IPTables

The nrpe daemon binds to port 5666. Edit your iptables filter to accept connections from your Nagios server:

-A RH-Firewall-1-INPUT -s 192.168.1.100 -p tcp --dport 5666 -j ACCEPT

5 – Hosts.allow

Now, open the /etc/hosts.allow file and add an entry for the IP address of your remote monitoring server.

nrpe: 192.168.1.100   nagios.example.edu

6 – Start nrpe service

Start the nrpe service:

sudo /sbin/service nrpe start

7 – Test Connection

Test the connection from your Nagios box and see if you can connect to the nrpe daemon on the remote host (use the nrpe host’s IP address, not the Nagios server’s own):

telnet 192.168.1.100 5666

If the connection immediately closes, something isn't right. If the socket opens and you are met with the following:

Escape character is '^]'.

Then you're ready to move on. If you've got problems at this point, go back through each of the steps above and check for any errors in configuration.

8 – Start nrpe service on system start up

Enable the nrpe service so that it will start when the system starts up.

sudo /sbin/chkconfig nrpe on 
sudo /sbin/chkconfig --list nrpe 
nrpe 0:off 1:off 2:on 3:on 4:on 5:on 6:off

9 – Define Command Definition for check_nrpe

Now that the NRPE service is installed and running, let's make sure there is a command definition for check_nrpe; if there is none, please add the code below. These definitions are specified in the $NAGIOSHOME/etc/checkcommands.cfg file. Where there are parameters available for a command, these can be passed through from services.cfg. My checkcommands.cfg is located in /usr/local/nagios/etc/objects/checkcommands.cfg.

When monitoring remote services, we issue a check_nrpe command followed by a ! and the command to run on the remote machine. This means we are going to need an instance of check_nrpe on our Nagios server. check_nrpe should be under /usr/local/nagios/libexec; if you can't find it, you will need to compile or install it for your Nagios installation.

define command{
        command_name check_nrpe
        command_line $USER10$/check_nrpe -H $HOSTADDRESS$ -t 30 -c $ARG1$
}

10 – Add New Host & Service

We are now ready to add our new host to our primary Nagios installation. This is very straight forward and should only take a moment.

Back on the primary Nagios installation server, we need to edit our hosts.cfg configuration file. The file is located in /usr/local/nagios/etc/hosts.cfg. This may change depending on your installation and organization of configuration files. Read the first part of this whitepaper for organization advice.

In the hosts.cfg file, add your new host object:

define host{
        use generic-host
	#Hostname of remote system
	host_name host.domain.com
	# A friendly name for this server
	alias Friendly name
	# Remote host IP address
	address 127.0.0.1
	check_command check-host-alive
	max_check_attempts 10
	notification_interval 30
	notification_period 24x7
	notification_options d,r
	# Your defined contact group name
	contact_groups admins
}

At this time our hosts.cfg file contains two hosts objects, the localhost which is running the Nagios application and our remote host which we will be monitoring.

We now want to add the service objects to our services.cfg file located in the same directory. Add the following single service to your services.cfg file:

define service{
	use generic-service
	# Hostname of remote system
	host_name host.domain.com
	service_description Primary Disk Usage
	is_volatile 0
	check_period 24x7
	max_check_attempts 3
	normal_check_interval 5
	retry_check_interval 1
	# Change to your contact group
	contact_groups admins
	notification_options w,u,c,r
	notification_interval 10
	notification_period 24x7
	check_command check_nrpe!check_disk1
}
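This service calls check_disk1 through NRPE, so that command must also be defined on the remote host's nrpe.cfg. A sketch is below; the plugin path, thresholds, and partition are assumptions to adjust for your system:

```
command[check_disk1]=/usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /
```

After editing nrpe.cfg on the remote host, restart the nrpe service there for the change to take effect.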

Posted in Linux How-To | Leave a comment

Nagios check_http Plugin Examples for HTTP / HTTPS

1. Check HTTP

Check whether Apache HTTP is running on a remote server using check_http.

$ check_http -H 192.168.1.50
HTTP OK HTTP/1.1 200 OK - 332 bytes in 0.004 seconds |time=0.004144s;;;0.000000 size=332B;;;0

2. Check HTTPS

Check whether Apache HTTPS is running on a remote server using check_http.

$ check_http -H 192.168.1.50 -S
HTTP OK HTTP/1.1 200 OK - 332 bytes in 0.004 seconds |time=0.004144s;;;0.000000 size=332B;;;0

If the remote server runs only HTTP and not HTTPS, you'll get an "HTTP CRITICAL - Unable to open TCP socket" message as shown below.

$ check_http -H 192.168.1.50 -S
Connection refused
HTTP CRITICAL - Unable to open TCP socket

3. Check HTTP (or HTTPS) on different port

You can check a Tomcat, Apache, GlassFish, or any other server running on a different port by specifying the port number as shown below.

$ check_http -H 192.168.1.50 -p 8080
HTTP OK HTTP/1.1 200 OK - 332 bytes in 0.004 seconds |time=0.004144s;;;0.000000 size=332B;;;0

For HTTPS running on a different port, do the following.

$ check_http -H 192.168.1.50 -S -p 8443
HTTP OK HTTP/1.1 200 OK - 332 bytes in 0.004 seconds |time=0.004144s;;;0.000000 size=332B;;;0

4. Check Specific URL

To check whether a specific webpage is available, use the -u option as shown below.

$ check_http -H 101hacks.com -u http://test.com/test

5. Check SSL Certificate Expiry

You can check whether an SSL certificate of a website expires within the next X days as shown below. In the following example, we check whether the website's certificate expires in the next 365 days. The output indicates that it expires in 300 days.

$ check_http -H test.com -C 365
WARNING - Certificate expires in 300 day(s) (01/01/2011 10:10).

Syntax and Options

check_http -H hostname (or) -I ip-address {optional options}

Short Option  Long Option  Description
-H  --hostname  Host name of the server where the HTTP (or HTTPS) daemon is running
-I  --IP-address  IP address of the HTTP (or HTTPS) server
-p  --port  Port number where the HTTP server runs. Default is 80
-4  --use-ipv4  Use an IPv4 connection
-6  --use-ipv6  Use an IPv6 connection
-S  --ssl  Use HTTPS on the default port 443
-C  --certificate  Minimum number of days an SSL certificate must be valid
-e  --expect  Expected response string. Default is HTTP/1
-s  --string  Expected content string
-u  --url  URL to check
-P  --post  URL-encoded HTTP POST data
-N  --no-body  Do not wait for the whole document body to download; stop once the headers are downloaded
-M  --max-age  Check whether a document is older than x seconds. Use 5 for 5 seconds, 5m for 5 minutes, 5h for 5 hours, 5d for 5 days
-T  --content-type  Indicate the content type in the header for a POST request
-l  --linespan  Allow the regular expression to span new lines (use with -r or -R)
-r  --regex, --ereg  Use this regular expression to search for a string in the HTTP page
-R  --eregi  Same as above, but case-insensitive
-a  --authorization  If the site uses basic authentication, send uid and password in the format uid:pwd
-A  --useragent  Pass the specified string as the "User Agent" in the HTTP header
-k  --header  Add additional tags to be sent in the HTTP header
-L  --link  Wrap the output as an HTML link
-f  --onredirect  When a URL is redirected, either follow the URL or return ok, warning, or critical
-m  --pagesize  Minimum and maximum page size expected in bytes. Format is minimum:maximum
-w  --warning  Response time in seconds for the warning state
-c  --critical  Response time in seconds for the critical state
-t  --timeout  Number of seconds to wait before the connection times out. Default is 10 seconds
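These options can be combined in a Nagios command definition. A sketch is below; the command name check_https_cert is hypothetical, and $USER1$ is assumed to point at your plugin directory:

```
define command{
        command_name check_https_cert
        command_line $USER1$/check_http -H $HOSTADDRESS$ -S -C 30
}
```

A service using this command raises an alert when a host's certificate has fewer than 30 days of validity left.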
Posted in Linux How-To | Leave a comment

NTP Server using Ubuntu 14.04

Network Time Protocol (NTP) is a networking protocol for time and date synchronisation between computers. By default, Windows 7 provides five servers (the default being time.windows.com) to synchronise with. Accuracy varies with network latency, however: typically tens of milliseconds over the Internet and around one millisecond on a LAN. Having an NTP server also reduces the number of calls to the Internet made by hosts and achieves a better system time for all computers that rely on performance, integration and timeliness.

Prerequisites

  • Internet connection
  • Ubuntu 14.04
  • Networking

NTP Installation Guide

1. Install Ubuntu 14.04 LTS with roughly:

    • 1 CPU
    • 256MB RAM
    • 5GB HDD

This will be all you need.

2. Install the NTP daemon using the command:

sudo apt-get install ntp

3. Let's configure the NTP servers we are going to retrieve from. Edit the ntp.conf using the command:

sudo nano /etc/ntp.conf

Here are the default servers that the service is currently retrieving the time from:

server 0.ubuntu.pool.ntp.org
server 1.ubuntu.pool.ntp.org
server 2.ubuntu.pool.ntp.org
server 3.ubuntu.pool.ntp.org

This is something you should change to a local/country pool instead of the Ubuntu pools. You can find these at the NTP Pool Project. I will be using the Australian pools, so change the lines as necessary:

server 0.au.pool.ntp.org
server 1.au.pool.ntp.org
server 2.au.pool.ntp.org
server 3.au.pool.ntp.org

Place the word iburst on one pool line to indicate you want to retrieve from it as soon as possible. This makes the daemon synchronise with that server right after starting up; otherwise it can take up to 20 minutes before the first synchronisation.

4. Add a fallback server. Ubuntu already provides its own fallback, but we will use the current server's time as the default. Otherwise you can specify any other server you know of:
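One common way to fall back to the machine's own clock is ntpd's local clock driver; this is a sketch of the usual pattern, with stratum 10 being a conventional value that marks it as a last resort:

```
server 127.127.1.0
fudge 127.127.1.0 stratum 10
```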


5. Save and exit: hit CTRL+X, enter Y to confirm and hit Enter.

6. Restart the daemon service using the command:

sudo service ntp restart

7. Monitor the log to see when it starts synchronising using the command:

tail -f /var/log/syslog

(Ctrl + C to exit)

8. If nothing comes up (which usually happens to me), run the command ntpq -p and it should show you all the time servers you are currently connected with. This is enough to know whether it is synchronised for now.

9. Find the hostname of the server (hostname -A) or its IP address (ifconfig) and start synchronising everything!

 

Posted in Linux How-To | Leave a comment

How to shut down or restart the computer with a batch file

Below are steps on how to restart, shutdown, and hibernate a Windows computer from a batch file or the command line.

Windows Vista, 7, and 8 users
Windows XP users
Windows 95, 98, and ME users
MS-DOS users

Windows Vista, 7, and 8 users

Microsoft Windows Vista, 7, and 8 include a shutdown command, similar to the one in XP, that shuts down the computer through the command line, a shortcut, or batch files. Below are the steps required for creating a shutdown, restart, or hibernate shortcut.

1. Create a new shortcut.

2. For the location of the shortcut type one of the below commands depending on what you want to do.

To shut down the computer, type the below line in the location text field.

shutdown.exe /s /t 00

To restart the computer, type the below line in the location text field.

shutdown.exe /r /t 00

To hibernate the computer, type the below line in the location text field.

shutdown.exe /h

3. Click Next, and then for the name of the shortcut type either Shut down, Restart, or Hibernate and then click Finish.

After completing the above steps, double-click the shortcut icon to shut down, restart, or put the computer into hibernation.

Additional information and options about the shutdown command are available on our shutdown command page.

Windows XP users

Microsoft Windows XP includes a shutdown command that allows users to shut down the computer through the command line, shortcuts, or batch files. Below are the steps required for creating a shutdown or restart shortcut.

1. Create a new shortcut.

2. For the location of the shortcut type one of the below commands depending on what you want to do.

To shut down your computer, type the below line in the location.

shutdown.exe -s -t 00

To restart the computer, type the below line in the location.

shutdown.exe -r -t 00

3. Click Next, and then for the name of the shortcut type either Shut down or Restart and then click Finish.

After completing the above steps, double-click the shortcut icon to shut down or restart the computer.

Additional information and options about the shutdown command are available on our shutdown command page.

Windows 95, 98, and ME users

Create a batch file with the lines mentioned below for the action you want to perform.

Restarting the computer

START C:\Windows\RUNDLL.EXE user.exe,exitwindowsexec
exit

Shut down the computer

C:\Windows\RUNDLL32.EXE user,exitwindows
exit

Note: When typing the above two lines, spacing is important. Also, make sure to enter the exit line at the bottom of the batch file in case Windows cannot restart the computer because of the open MS-DOS window.

Microsoft Windows 98 and Windows ME users can also run the below command to perform different forms of rebooting or shutting down.

rundll32.exe shell32.dll,SHExitWindowsEx n

Where n is equal to one of the numbers below, depending on the action you want the computer to perform.

  • 0 – LOGOFF
  • 1 – SHUTDOWN
  • 2 – REBOOT
  • 4 – FORCE
  • 8 – POWEROFF

MS-DOS users

If you need to restart from MS-DOS, see the debug page for steps on how to write a debug routine to restart these computers.

Posted in Windows How-To | Leave a comment

Batch file to copy files from one folder to another folder

xcopy.exe is built into Windows, so it costs nothing.

Just xcopy /s c:\source d:\target

You’d probably want to tweak a few things; some of the options we also add include these:

  • /s/e – recursive copy, including copying empty directories.
  • /v – add this to verify the copy against the original. slower, but for the paranoid.
  • /h – copy system and hidden files.
  • /k – copy read-only attributes along with files. otherwise, all files become read-write.
  • /x – if you care about permissions, you might want /o or /x.
  • /y – don’t prompt before overwriting existing files.
  • /z – if you think the copy might fail and you want to restart it, use this. It places a marker on each file as it copies, so you can rerun the xcopy command to pick up from where it left off.

If you think the xcopy might fail partway through or that you have to stop it and want to continue it later, you can use xcopy /s/z c:\source d:\target.

Posted in Windows How-To | Leave a comment