Friday, December 30, 2011

What is SUID and how to set it in Linux?

SUID (Set owner User ID upon execution) is a special type of file permission given to a file. Normally in Linux/Unix, when a program runs it inherits access permissions from the logged-in user. SUID grants temporary permissions to a user to run a program/file with the permissions of the file owner rather than the user who is running it. In simple words, when executing the file the process runs with the file owner's effective UID and permissions instead of the user's own.

Consider the passwd command we use to change our password, which is owned by root, as shown below. When changing a password, passwd edits system configuration files such as /etc/passwd and /etc/shadow. These files cannot be opened or modified by a normal user; only root has permission. So if the SUID bit were removed from passwd, it could not open files such as /etc/shadow to save the changes, and a normal user would get a "permission denied" or similar error when running the command. The passwd command is therefore set with SUID so that a normal user temporarily gains root's permissions and the command can update /etc/shadow and related files.
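On most distributions you can see the bit directly on the passwd binary (the path may vary by system), and the shell's test -u operator checks the SUID bit on any file. A small self-contained sketch:

```shell
# The passwd binary is typically SUID root; the "s" in the owner's
# execute position marks the bit (path may vary by distribution).
ls -l /usr/bin/passwd 2>/dev/null

# test -u (or [ -u ... ]) checks the SUID bit on any file; here is a
# self-contained demonstration on a temporary file.
tmpfile=$(mktemp)
chmod u+s,u+x "$tmpfile"
[ -u "$tmpfile" ] && echo "SUID bit is set on $tmpfile"
rm -f "$tmpfile"
```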

How can I setup SUID for a file?
SUID can be set in two ways:
1) Symbolic way (s, stands for "set")
2) Numerical/octal way (4)
Use chmod command to set SUID on file: file1.txt
Symbolic way:
chmod u+s file1.txt
Here +s sets the SUID bit on the owner's execute position.
Numerical way:
chmod 4750 file1.txt
Here in 4750, 4 indicates the SUID bit, 7 gives full permissions to the owner, 5 gives read and execute permissions to the group, and 0 gives no permissions to others.
How can I check if a file is set with SUID bit or not?
Use ls -l to check whether the x in the owner permissions field is replaced by s or S.
For example: file1.txt listing before and after SUID set
Before SUID set:
ls -l
total 8

-rwxr--r-- 1 xyz xyzgroup 148 Dec 22 03:46 file1.txt
After SUID set:
ls -l
total 8

-rwsr--r-- 1 xyz xyzgroup 148 Dec 22 03:46 file1.txt
Some FAQ’s related to SUID:
A) Where is SUID used?
1) Where root permission is required to execute some commands/programs/scripts.
2) Where you don't want to give the credentials of a particular user but still want to run some programs as that owner.
3) Where you don't want to use the sudo command but still want to give execute permission for a file/script, etc.
B) I am seeing "S", i.e. a capital "s", in the file permissions. What's that?
After setting SUID on a file/folder, a capital 'S' in the permission field indicates that the file/folder does not have the execute permission set for the owner.
For example:
chmod u+s file1.txt
ls -l
-rwSrwxr-x 1 surendra surendra 0 Dec 27 11:24 file1.txt
If you want to convert this S to s, add execute permission to the file as shown below:
chmod u+x file1.txt
ls -l
-rwsrwxr-x 1 surendra surendra 0 Dec 5 11:24 file1.txt
You should see a lowercase 's' in the execute permission position now.
C) How can I find all the SUID set files in Linux/Unix?
find / -perm +4000
The above find command lists all files that have the SUID bit (4000) set. Note that newer versions of GNU find have dropped the +4000 form; use find / -perm -4000 (or -perm /4000) instead.
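As a self-contained sketch against a temporary directory (the paths here are just for illustration), the -4000 form matches files that have at least the SUID bit set, and stat prints the octal mode including the special-bit digit:

```shell
# Create a test file with the SUID bit set (octal 4750).
tmpdir=$(mktemp -d)
touch "$tmpdir/suidfile"
chmod 4750 "$tmpdir/suidfile"

# -perm -4000 matches files that have at least the SUID bit set.
find "$tmpdir" -type f -perm -4000

# stat prints the octal mode, including the leading special-bit digit.
stat -c '%a %n' "$tmpdir/suidfile"

rm -rf "$tmpdir"
```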
D) Can I set SUID for folders?
Yes, you can if it's required (remember that Linux treats everything as a file).
E) What is SUID numerical value?
It has the value 4 for SUID.
What is SGID?
 
SGID (Set Group ID upon execution) is a special type of file permission given to a file or folder. Normally in Linux/Unix, when a program runs it inherits access permissions from the logged-in user. SGID grants temporary permissions to a user to run a program/file with the permissions of the file's group, as if the user were a member of that group. In simple words, users get the file group's permissions when executing a folder/file/program/command.
SGID is similar to SUID. The difference is that SUID makes the process assume the file owner's permissions, while SGID makes it assume the group's permissions when executing the file, instead of inheriting the logged-in user's permissions.

When implementing Linux group quota for a group of people, SGID plays an important role in the quota accounting. With the SGID bit set on a folder, files written there inherit the folder's group instead of the writer's primary group, so it is as if a single user were dumping all the data. Whichever group member writes data, the data is written with the group's ownership, and the quota is reduced centrally for all the users. Without SGID, group quota will not be effective.
How can I setup SGID for a file?
SGID can be set in two ways:
1) Symbolic way (s)
2) Numerical/octal way (2; the SGID bit has the value 2)
Use chmod command to set SGID on file: file1.txt
Symbolic way:
chmod g+s file1.txt
In the above command, +s sets the SGID bit for the group that owns the file.
Numerical way:
chmod 2750 file1.txt
Here in 2750, 2 indicates the SGID bit, 7 gives full permissions to the owner, 5 gives read and execute permissions to the group, and 0 gives no permissions to others.
How can I check if a file is set with SGID bit or not?
Use ls -l to check whether the x in the group permissions field is replaced by s or S.
For example: file1.txt listing before and after SGID set
Before SGID set:
ls -l

total 8

-rwxr--r-- 1 xyz xyzgroup 148 Dec 22 03:46 file1.txt
After SGID set:
ls -l

total 8

-rwxr-sr-- 1 xyz xyzgroup 148 Dec 22 03:46 file1.txt
Some FAQ’s related to SGID:
Where is SGID used?
1) When implementing Linux group disk quota.
I am seeing "S", i.e. a capital "s", in the file permissions. What's that?
After setting SUID or SGID on a file/folder, a capital 'S' in the permission field indicates that the file/folder does not have the execute permission set for the owner or group, respectively.
chmod g+s file1.txt
output:
-rwxrwSr-x 1 surendra surendra 0 Dec 27 11:24 file1.txt

So if you want execute permission too, apply it to the file:
chmod g+x file1.txt
output:
-rwxrwsr-x 1 surendra surendra 0 Dec 5 11:24 file1.txt

You should see a lowercase 's' in the execute permission position.
How can I find all the SGID set files in Linux/Unix?
find / -perm +2000
The above find command lists all files that have the SGID bit (2000) set (on newer versions of GNU find, use -perm -2000 or -perm /2000).
Can I set SGID for folders?
Yes, you can if it's required (remember that Linux treats everything as a file).
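A quick self-contained sketch of SGID on a directory (the temporary paths are just for illustration); the key behaviour is that files created inside inherit the directory's group:

```shell
# Set the SGID bit on a directory (octal 2xxx) and verify it with stat.
tmpdir=$(mktemp -d)
chmod 2750 "$tmpdir"
stat -c '%a %n' "$tmpdir"        # mode shows as 2750

# Files created inside an SGID directory inherit the directory's
# group rather than the creating user's primary group.
touch "$tmpdir/newfile"
stat -c '%G %n' "$tmpdir/newfile"
rm -rf "$tmpdir"
```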
How can I remove SGID bit on a file/folder?
chmod g-s file1.txt

Thursday, December 29, 2011

Server Configuration Syntax error check

SSHD server check for syntax error
 

# sshd -t

FTP server check for syntax error
# vsftpd
(vsftpd has no dedicated check flag; starting it in the foreground will report configuration file errors immediately)

DNS server check for syntax error
 
For checking syntax errors in main configuration file.
# named-checkconf named.conf 
Syntax OK


# named-checkzone example.com /var/named/chroot/var/named/exaple-zone.frd

 
SAMBA server check for syntax error
# testparm

APACHE server check for syntax error
# httpd -t

For virtual hosts
# httpd -t -D DUMP_VHOSTS


TCP Wrappers check for syntax error
# tcpdchk
# tcpdchk -v

Postfix server check for syntax error

# postfix check
# postfix -vv

LIGHTTPD server check for syntax error

# lighttpd -t -f /etc/lighttpd/lighttpd.conf

 
Squid server check for syntax error

# squid -k check
# squid -k parse

NAGIOS server check for syntax error

# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
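The individual checks above can be collected into one script. This is a sketch: the commands and flags are the ones listed in this post, the config paths (such as /etc/named.conf and /etc/lighttpd/lighttpd.conf) are assumptions for a typical layout, and any daemon that is not installed is simply skipped:

```shell
#!/bin/sh
# Run a syntax check only when the corresponding binary is installed.
run_check() {
    name=$1
    shift
    if command -v "$1" >/dev/null 2>&1; then
        echo "== $name =="
        "$@"
    else
        echo "== $name: not installed, skipping =="
    fi
}

run_check "sshd"     sshd -t
run_check "bind"     named-checkconf /etc/named.conf
run_check "samba"    testparm -s
run_check "apache"   httpd -t
run_check "lighttpd" lighttpd -t -f /etc/lighttpd/lighttpd.conf
run_check "squid"    squid -k parse
```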

Monday, December 19, 2011

Nagios server configuration

Prerequisites
During portions of the installation you'll need to have root access to your machine.
Make sure you've installed the following packages on your Fedora installation before continuing.
  • Apache
  • PHP
  • GCC compiler
  • GD development libraries
You can use yum to install these packages by running the following commands (as root):
yum install httpd php
yum install gcc glibc glibc-common
yum install gd gd-devel
1) Create Account Information
Become the root user.
su -l
Create a new nagios user account and give it a password.
/usr/sbin/useradd -m nagios
passwd nagios
Create a new nagcmd group for allowing external commands to be submitted through the web interface. Add both the nagios user and the apache user to the group.
/usr/sbin/groupadd nagcmd
/usr/sbin/usermod -a -G nagcmd nagios
/usr/sbin/usermod -a -G nagcmd apache

2) Download Nagios and the Plugins
Create a directory for storing the downloads.
mkdir ~/downloads
cd ~/downloads
Download the source code tarballs of both Nagios and the Nagios plugins (visit http://www.nagios.org/download/ for links to the latest versions). These directions were tested with Nagios 3.2.3 and Nagios Plugins 1.4.11.
wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.2.3.tar.gz
wget http://prdownloads.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.11.tar.gz  
3) Compile and Install Nagios
Extract the Nagios source code tarball.
cd ~/downloads
tar xzf nagios-3.2.3.tar.gz
cd nagios-3.2.3
Run the Nagios configure script, passing the name of the group you created earlier like so:
./configure --with-command-group=nagcmd  
Compile the Nagios source code.
make all  
Install binaries, init script, sample config files and set permissions on the external command directory.
make install 
make install-init 
make install-config 
make install-commandmode  
Don't start Nagios yet - there's still more that needs to be done...
4) Customize Configuration
Sample configuration files have now been installed in the /usr/local/nagios/etc directory. These sample files should work fine for getting started with Nagios. You'll need to make just one change before you proceed...
Edit the /usr/local/nagios/etc/objects/contacts.cfg config file with your favorite editor and change the email address associated with the nagiosadmin contact definition to the address you'd like to use for receiving alerts.
vi /usr/local/nagios/etc/objects/contacts.cfg
5) Configure the Web Interface
Install the Nagios web config file in the Apache conf.d directory.
make install-webconf
Create a nagiosadmin account for logging into the Nagios web interface. Remember the password you assign to this account - you'll need it later.
htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
Restart Apache to make the new settings take effect.
service httpd restart

6) Compile and Install the Nagios Plugins
Extract the Nagios plugins source code tarball.
cd ~/downloads 
tar xzf nagios-plugins-1.4.11.tar.gz 
cd nagios-plugins-1.4.11  
Compile and install the plugins.
./configure --with-nagios-user=nagios --with-nagios-group=nagios 
make 
make install

7) Start Nagios  
Add Nagios to the list of system services and have it automatically start when the system boots.
chkconfig --add nagios
chkconfig nagios on
Verify the sample Nagios configuration files.
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
If there are no errors, start Nagios.
service nagios start

8) Modify SELinux Settings
Fedora ships with SELinux (Security Enhanced Linux) installed and in Enforcing mode by default. This can result in "Internal Server Error" messages when you attempt to access the Nagios CGIs.
See if SELinux is in Enforcing mode.
getenforce  
Put SELinux into Permissive mode.
setenforce 0  
To make this change permanent, you'll have to modify the settings in /etc/selinux/config and reboot.
Instead of disabling SELinux or setting it to permissive mode, you can use the following command to run the CGIs under SELinux enforcing/targeted mode:
chcon -R -t httpd_sys_content_t /usr/local/nagios/sbin/ 
chcon -R -t httpd_sys_content_t /usr/local/nagios/share/

9) Login to the Web Interface  
You should now be able to access the Nagios web interface at the URL below. You'll be prompted for the username (nagiosadmin) and password you specified earlier.
http://localhost/nagios/





Wednesday, December 14, 2011

Using VMware Converter to convert XenServer virtual machines to VMware virtual machines

Technical Specifications

The following product versions were used to write this article:
  • Citrix XenServer version 4 Enterprise Edition
  • VMware Converter version 3.0.3 Enterprise Edition or later

Guest operating systems supported

All Windows guests supported by XenServer and Converter.

Prerequisites

  1. You must be using VMware Converter 3.0.3 or later.
  2. The source virtual machine must not be para-virtualized. Normally none of the previously listed guest operating systems are para-virtualized.

Methods of Conversion

Cold Cloning Process

For this process use the VMware Converter Boot CD to clone the virtual machine. To download the CD image, see: http://www.vmware.com/download/download.do?downloadGroup=CONVERTER3

To cold clone:

  1. Restart the source virtual machine on the originating platform and boot into the VMware Converter Boot CD. For more information on assigning the VMware Converter Boot CD to the virtual machine, see the section below.

    Note: The virtual machine must be assigned at least 264MB of memory to successfully run the Converter Boot CD.

  2. When the VMware Converter application is launched, use it to convert the virtual machine into a VMware Workstation virtual machine or VMware ESX virtual machine. For more information on completing this step, see the VMware Converter 3.0.3 User's Manual, the VMware vCenter Converter Standalone 4.x User's Guide, or the VMware vCenter Converter 4.x Installation and Administration Guide.

Booting the Converter Boot CD from the virtual machine

There are several ways of associating the CD to the XenServer virtual machine.
  • Using physical media
    Burn the Converter Boot CD image (ISO) onto physical media and insert it in the XenServer physical server.
    Use the XenCenter GUI to associate the CD to a virtual machine and restart it.

    By default, virtual machines that use Xen Paravirtualization (PV) do not allow you to choose the boot order on XenCenter. To be able to choose the boot order, login to the Xen host via SSH and perform these steps:
    1. Run this command to list the virtual machines on the Xen server:

      xe vm-list

    2. Find your Xen virtual machine name in the format [name-label] and obtain the uuid.
    3. Run this command:

      xe vm-param-set HVM-boot-policy="BIOS order" uuid=""

      Where uuid is the virtual machine uuid discovered using the xe vm-list command.

      You can now select the boot order using VM properties on XenCenter by selecting Startup Options.

  • Using a CIFS share
    Share the folder containing the Boot CD image ISO. Use XenCenter GUI to access the CIFS share and assign the ISO image to the virtual machine and then restart it.

  • Using xe command line interface
    Copy the Boot CD ISO to the XenServer using SSH WinSCP. Use the xe command to enumerate the virtual machines. Attach the CD ISO to the virtual machine and restart it. For detailed list of xe commands, see the Xen wiki.

Ref: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005298


Tuesday, December 13, 2011

Installation of Apache tomcat


Installing Apache Tomcat on Linux

This article is a step by step guide for installing Apache Tomcat 6.0 (6.0.18) on 64-bit Debian Linux 4.0. It covers the setup of multiple Tomcat JVM instances on a single Linux server. The instructions in this guide are applicable to most other Linux distributions.


Introduction

This article discusses how to install Apache Tomcat 6.0 (6.0.18) on 64-bit Debian Linux 4.0. Additionally it shows how to setup multiple Tomcat JVM instances on a single Linux server. For each Tomcat JVM instance a web application and Java servlet example is configured. The Tomcat installation steps outlined in this article are also applicable to most other Linux distributions.

Note that this document comes without warranty of any kind. But every effort has been made to make the information as accurate as possible. I welcome emails from any readers with comments, suggestions, and corrections at webmaster_at_puschitz.com.

Installing Java Runtime Environment

To run Tomcat, you need Java Standard Edition (Java SE), also known as the JDK.

For the Tomcat installation I used SUN's latest Java SE JDK that was available at the time of this writing: Java SE Development Kit (JDK) 6 Update 10 (6u10). Regarding Java SE 6, Platform Name and Version Numbers, see http://java.sun.com/javase/6/webnotes/version-6.html. And for the whole Java version history I recommend the Wiki article http://en.wikipedia.org/wiki/Java_version_history.

You can download SUN's latest Java JDKs at: http://java.sun.com/javase/downloads/index.jsp.

For my 64-bit Debian system I selected the 64-bit JDK multiplatform binary for Linux: jdk-6u10-linux-x64.bin.
I downloaded the binary file to /tmp and installed it as follows as root:
# mkdir -p /usr/java
# cd /usr/java
# chmod 700 /tmp/jdk-6u10-linux-x64.bin
# /tmp/jdk-6u10-linux-x64.bin ...
  creating: jdk1.6.0_10/
  creating: jdk1.6.0_10/db/
  creating: jdk1.6.0_10/db/bin/
 inflating: jdk1.6.0_10/db/bin/ij
 inflating: jdk1.6.0_10/db/bin/NetworkServerControl
 inflating: jdk1.6.0_10/db/bin/setNetworkClientCP.bat
 inflating: jdk1.6.0_10/db/bin/derby_common.sh
...
Done.
# export JAVA_HOME=/usr/java/jdk1.6.0_10
# export PATH=$JAVA_HOME/bin:$PATH
# which java
/usr/java/jdk1.6.0_10/bin/java
# java -version
java version "1.6.0_10"
Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
Java HotSpot(TM) 64-Bit Server VM (build 11.0-b15, mixed mode) 

Installing Tomcat Software

Download the latest Tomcat 6.x version from http://tomcat.apache.org/download-60.cgi. For Debian I downloaded the Binary Core Distribution file apache-tomcat-6.0.18.tar.gz which was the latest version at the time of this writing.

Once you downloaded the tar file make sure the MD5 checksum matches the value posted on Tomcat's web site, see http://www.apache.org/dist/tomcat/tomcat-6/v6.0.18/bin/apache-tomcat-6.0.18.tar.gz.md5:
# md5sum /tmp/apache-tomcat-6.0.18.tar.gz
8354e156f097158f8d7b699078fd39c1  /tmp/apache-tomcat-6.0.18.tar.gz  
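The comparison can also be automated with md5sum -c, which reads "hash  filename" lines and reports OK or FAILED. A self-contained sketch (the file name and contents below are illustrative; normally the .md5 file is downloaded from the project's site):

```shell
# Generate a checksum file for a download, then verify it.
tmpdir=$(mktemp -d)
echo "pretend this is the tarball" > "$tmpdir/apache-tomcat-6.0.18.tar.gz"

# Normally this .md5 file would come from the project's download site.
md5sum "$tmpdir/apache-tomcat-6.0.18.tar.gz" > "$tmpdir/apache-tomcat-6.0.18.tar.gz.md5"

# md5sum -c recomputes the hash and compares; prints "...: OK" on success.
md5sum -c "$tmpdir/apache-tomcat-6.0.18.tar.gz.md5"
rm -rf "$tmpdir"
```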
Installing Tomcat from a binary release (tar file) requires manual creation of the Tomcat user account. This is not necessary if you install the Tomcat RPM package on a Linux system that supports RPMs.

For security reasons I created a user account with no login shell for running the Tomcat server:
# groupadd tomcat
# useradd -g tomcat -s /usr/sbin/nologin -m -d /home/tomcat tomcat 
(It should be noted that some other Linux systems have nologin under /sbin, not /usr/sbin.)

Next I extracted the tar file to /var/lib and changed the ownership of all files and directories to tomcat:
# cd /var/lib
# tar zxvf /tmp/apache-tomcat-6.0.18.tar.gz
# chown -R tomcat.tomcat /var/lib/apache-tomcat-6.0.18  
To get the version of the newly installed Tomcat, run:
# /var/lib/apache-tomcat-6.0.18/bin/version.sh
Using CATALINA_BASE:   /var/lib/apache-tomcat-6.0.18
Using CATALINA_HOME:   /var/lib/apache-tomcat-6.0.18
Using CATALINA_TMPDIR: /var/lib/apache-tomcat-6.0.18/temp
Using JRE_HOME:       /usr
Server version: Apache Tomcat/6.0.18
Server built:   Jul 22 2008 02:00:36
Server number:  6.0.18.0
OS Name:        Linux
OS Version:     2.6.18-6-amd64
Architecture:   x86_64
JVM Version:    1.4.2
JVM Vendor:     Free Software Foundation, Inc.
Starting/Stopping Tomcat

Now try to startup the Tomcat server to see whether the default Tomcat home page is being displayed.

For security reasons I don't run the Tomcat server as user root but as tomcat, which was created with no login shell. Therefore, to run Tomcat, use the su command with the -p option to preserve all the environment variables when switching to tomcat (more on the Tomcat environment variables later). And since the tomcat account has no login shell, a shell needs to be specified with the -s option. (You may want to use this su command if you plan on writing and implementing a system startup and shutdown script for system reboots.)
# export JAVA_HOME=/usr/java/jdk1.6.0_10
# export PATH=$JAVA_HOME/bin:$PATH
# export CATALINA_HOME=/var/lib/apache-tomcat-6.0.18
# export CATALINA_BASE=/var/lib/apache-tomcat-6.0.18
# su -p -s /bin/sh tomcat $CATALINA_HOME/bin/startup.sh
Using CATALINA_BASE:   /var/lib/apache-tomcat-6.0.18
Using CATALINA_HOME:   /var/lib/apache-tomcat-6.0.18
Using CATALINA_TMPDIR: /var/lib/apache-tomcat-6.0.18/temp
Using JRE_HOME:       /usr/java/jdk1.6.0_10
Now verify that Tomcat was started successfully by opening the URL http://localhost:8080 (Port number 8080 is the default port used by Tomcat). Note that you should also be able to use the name of your server instead of localhost. Once you opened the URL in your browser you should see Tomcat's Congratulation page. If you don't see the page, check the log files under $CATALINA_HOME/logs (/var/lib/apache-tomcat-6.0.18/logs).

Before you continue with the next steps, make sure to shut down Tomcat since we want to run the Tomcat server out of a separate application directory which is covered in the next chapter.
# su -p -s /bin/sh tomcat $CATALINA_HOME/bin/shutdown.sh
Using CATALINA_BASE:   /var/lib/apache-tomcat-6.0.18
Using CATALINA_HOME:   /var/lib/apache-tomcat-6.0.18
Using CATALINA_TMPDIR: /var/lib/apache-tomcat-6.0.18/temp
Using JRE_HOME:       /usr/java/jdk1.6.0_10 
Switching to Tomcat User Account

Most of the next steps in this article assume that you switched to the tomcat user account. If you see a '$' prompt, then the steps in this article are executed as the tomcat user. If you see a '#' prompt, then the steps are executed as root.

Since for security reasons the tomcat user has no login shell, it needs to be specified with the -s option when switching from root to tomcat:
# su - -s /bin/sh tomcat
$ id
uid=1001(tomcat) gid=1001(tomcat) groups=1001(tomcat) 
Note that non-root users cannot switch to the tomcat account.

Setting Up First Tomcat JVM Instance

It is recommended not to store the web application's files in Tomcat's distribution directory tree. For example, having a separate directory makes Tomcat upgrades easier since it won't overwrite configuration files like server.xml. And since this tutorial shows how to run two Tomcat instances concurrently on a single Linux server, two separate directories are needed anyway. It should be noted here that it's also possible to run multiple web applications per Tomcat JVM instance. This HOWTO shows the creation and configuration of one web application for each Tomcat instance.

Setting up Directories and Files

In the following example I setup the first Tomcat JVM instance under the base directory /opt/tomcat-instance/sales.example.com. It's a good practice to name the base directory after the site name, in this example sales.example.com.

Creating a new base directory for a new instance requires the creation and copying of various directories and configuration files. Execute the following commands as root:
# mkdir -p /opt/tomcat-instance/sales.example.com
# cd /opt/tomcat-instance/sales.example.com
# cp -a /var/lib/apache-tomcat-6.0.18/conf .
# mkdir common logs temp server shared webapps work
# chown -R tomcat.tomcat /opt/tomcat-instance  
Most of the remaining steps are executed as the tomcat user. So make sure you switch from root to tomcat:
# su - -s /bin/sh tomcat
$ id
uid=1001(tomcat) gid=1001(tomcat) groups=1001(tomcat)
Next I created an environment file for the new Tomcat instance. This will be useful for easily setting the environment variables when starting/stopping the new Tomcat instance:
$ cat > /opt/tomcat-instance/sales.env << EOF
export JAVA_HOME=/usr/java/jdk1.6.0_10
export PATH=\$JAVA_HOME/bin:\$PATH
export CATALINA_HOME=/var/lib/apache-tomcat-6.0.18
export CATALINA_BASE=/opt/tomcat-instance/sales.example.com
EOF
$ cat /opt/tomcat-instance/sales.env
export JAVA_HOME=/usr/java/jdk1.6.0_10
export PATH=$JAVA_HOME/bin:$PATH
export CATALINA_HOME=/var/lib/apache-tomcat-6.0.18
export CATALINA_BASE=/opt/tomcat-instance/sales.example.com 
CATALINA_HOME is the base directory of Tomcat that contains all the libraries, scripts etc.
for Tomcat. This is the parent directory of the extracted Tomcat tar file.


CATALINA_BASE
is the base directory of the new Tomcat instance, which in this example points to /opt/tomcat-instance/sales.example.com.


Configuring Tomcat Network Ports

Since this is the first Tomcat instance that's being created here, the default port numbers can be left unchanged in $CATALINA_BASE/conf/server.xml (/opt/tomcat-instance/sales.example.com/conf/server.xml). The stock HTTP Connector looks like this:

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />

However, these port numbers will have to be changed for the second Tomcat instance, see Steps for Second Tomcat JVM Instance and Application.

Starting First Tomcat Instance

To start the newly created Tomcat JVM instance, ensure that the environment variables are set for the new instance and execute the startup script:
$ source /opt/tomcat-instance/sales.env
$ $CATALINA_HOME/bin/startup.sh
Using CATALINA_BASE:   /opt/tomcat-instance/sales.example.com
Using CATALINA_HOME:   /var/lib/apache-tomcat-6.0.18
Using CATALINA_TMPDIR: /opt/tomcat-instance/sales.example.com/temp
Using JRE_HOME:       /usr/java/jdk1.6.0_10
If everything has been configured correctly, you should now see an empty white page when opening the URL http://localhost:8080. Note that instead of localhost you should also be able to use the name of your server.
If you get an error in the browser instead of an empty page, check the log files under $CATALINA_BASE/logs (/opt/tomcat-instance/sales.example.com/logs). Note that since CATALINA_BASE has been changed for the new Tomcat instance, the logs are no longer written to /var/lib/apache-tomcat-6.0.18/logs.
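The source-then-startup pattern can be wrapped in a small helper. This is a sketch: ENV_FILE and TOMCAT_USER default to the values used in this article and are assumptions about your layout, not part of the Tomcat distribution:

```shell
#!/bin/sh
# Helper for starting/stopping one Tomcat instance. ENV_FILE and
# TOMCAT_USER default to the paths/names used in this article (assumptions).
ENV_FILE=${ENV_FILE:-/opt/tomcat-instance/sales.env}
TOMCAT_USER=${TOMCAT_USER:-tomcat}

tomcat_ctl() {
    action=$1                 # "startup" or "shutdown"
    . "$ENV_FILE"             # sets JAVA_HOME, CATALINA_HOME, CATALINA_BASE
    if [ "$(id -u)" -eq 0 ]; then
        # As root, drop to the tomcat user, preserving the environment.
        su -p -s /bin/sh "$TOMCAT_USER" "$CATALINA_HOME/bin/$action.sh"
    else
        "$CATALINA_HOME/bin/$action.sh"
    fi
}

# Usage: tomcat_ctl startup    (or: tomcat_ctl shutdown)
```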

Relaying HTTP Port 80 Connections to Tomcat Port 8080

By default, Tomcat listens on port 8080. To have the Tomcat server itself listen on HTTP port 80, Tomcat would have to run as root since only root can listen on ports below 1024 on Linux. But for security reasons this is not recommended. The solution I prefer is to relay port 80 TCP connections to port 8080 using the Netfilter package that comes with Linux. An alternate solution would be to use a service wrapper like jsvc from the Jakarta Commons Daemon project. But this solution would require the installation and maintenance of another piece of software on my system that I want to avoid.

The Netfilter package that comes already with Linux is transparent to Tomcat. The following steps show how to relay port 80 TCP connections to Tomcat's port 8080 using the iptables command from the Netfilter package. Note that these steps must be executed as root:
# iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
# iptables -t nat -I OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 8080 
The first rule redirects incoming requests on port 80 generated from other computer nodes, and the second rule redirects incoming requests on port 80 generated from the local node where Tomcat is running.

To see the newly configured rules, run:
# iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
REDIRECT   tcp  --  anywhere             anywhere            tcp dpt:www redir ports 8080

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
REDIRECT   tcp  --  anywhere             anywhere            tcp dpt:www redir ports 8080
To remove the NAT rules we just created, you can run the iptables -t nat -F command which flushes and deletes the rules. Note that this will also flush any other rules that may have been configured on your system! For more information on iptables, see netfilter/iptables documentation.
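If you only want to undo the two relay rules, -D accepts the same rule specification that -I took, so they can be deleted individually without flushing anything else. These commands require root and a kernel with Netfilter, so treat them as a sketch of firewall configuration rather than something to run blindly:

```shell
# Delete just the two REDIRECT rules added earlier, leaving any other
# nat-table rules in place (-D matches the rule spec given to -I).
iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
iptables -t nat -D OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 8080

# Alternatively, list the rules with line numbers and delete by number:
iptables -t nat -L --line-numbers
# iptables -t nat -D PREROUTING <number>
```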

To make the rules permanent for reboots, you can use the following option outlined here for Debian (other Linux distributions have other methods). First save the newly created rules in a file:
# iptables-save > /etc/iptables.conf 
Then edit the /etc/network/interfaces file and add the pre-up line shown below for the public network interface. For example:
iface eth0 inet static
        address 192.168.1.23
        netmask 255.255.255.0
        network 192.168.1.0
        broadcast 192.168.1.255
        gateway 192.168.1.1
        pre-up iptables-restore < /etc/iptables.conf
The pre-up configuration in this example activates the iptables rules on my system before the public interface eth0 comes up. So the rules can be seen with iptables -t nat -L after each reboot. Note that for security reasons it's important that firewall rules are established before the network interfaces come up. Even though this is not an issue for relaying Tomcat connections, as a matter of good practice, the iptables rules should always be established before the network comes up.

It should be noted here that there is one Tomcat configuration parameter that you may or may not want to change: the proxyPort parameter in the server.xml file. Since Tomcat still receives requests on port 8080 as they are relayed by the Linux Netfilter system from port 80, Tomcat may display port 8080 in the URL depending on the application's content. So if you want it to display port 80, the proxyPort parameter would need to be added to the Connector for port 8080 in $CATALINA_BASE/conf/server.xml (/opt/tomcat-instance/sales.example.com/conf/server.xml):
<Connector port="8080" protocol="HTTP/1.1"
           proxyPort="80"
           connectionTimeout="20000"
           redirectPort="8443" />
After that you need to restart Tomcat to make this change effective.

Connecting to First Tomcat Instance Using Default HTTP Port

If the iptables rules have been configured correctly, you should now be able to open the URL http://localhost and see an empty white page. You could also use the URL http://localhost:80 (port 80 is the default port used by browsers) or the name of your server. If you get an error in the browser instead of an empty page, check the iptables configuration and check the log files under $CATALINA_BASE/logs (/opt/tomcat-instance/sales.example.com/logs). Note that since CATALINA_BASE was changed for the new Tomcat instance, the logs are no longer written to /var/lib/apache-tomcat-6.0.18/logs.

Setting Up a Web Application for First Tomcat JVM Instance

You can setup multiple web applications for each Tomcat JVM instance. In this guide we are setting up one web application for each Tomcat JVM instance.

First make sure to switch to the tomcat user account and source in the environment variables for the remaining steps:
# su - -s /bin/sh tomcat
$ source /opt/tomcat-instance/sales.env
Setting up Web Application Layout

In the previous chapter the first Tomcat JVM instance was setup under the base directory $CATALINA_BASE (/opt/tomcat-instance/sales.example.com). In the following example I create a new directory called "sales" under $CATALINA_BASE/webapps which will become the root directory for the first web application, that is $CATALINA_BASE/webapps/sales. In Tomcat web application root directories are created under $CATALINA_BASE/webapps by default.
$ mkdir $CATALINA_BASE/webapps/sales  
Configuring Web Application

To configure Tomcat to recognize the new web application under $CATALINA_BASE/webapps/sales (/opt/tomcat-instance/sales.example.com/webapps/sales), the $CATALINA_BASE/conf/server.xml file needs to be edited. This is done by adding a new Context element with the path and docBase attributes. Note that Tomcat refers to web applications as "contexts", so a Context here represents the configuration of a web application. The path attribute is the application name used within the URL, and the docBase attribute is the directory of the web application root, relative to $CATALINA_BASE/webapps:

<Host name="localhost" appBase="webapps"
      unpackWARs="true" autoDeploy="true"
      xmlValidation="false" xmlNamespaceAware="false">

    <Context path="/mysales" docBase="sales" />

</Host>

In this example you can see that appBase already points to webapps by default, that is $CATALINA_BASE/webapps. The docBase attribute points to the sales directory under $CATALINA_BASE/webapps, which is the location of the new application. And the path attribute is set to /mysales, which becomes the application name within the URL, i.e. "http://localhost/mysales" or "http://localhost:8080/mysales". Make sure to add the new Context element inside the Host container element for 'localhost', which is the default host name.

Home Page for Web Application

To have a starting page for the new web application, you can simply create an index.html file under the web application's root directory $CATALINA_BASE/webapps/sales (/opt/tomcat-instance/sales.example.com/webapps/sales). You could also create your own JSP page here. For testing purposes, here is a simple index.html for the new application:
$ cat > $CATALINA_BASE/webapps/sales/index.html << EOF
<html>
<body>
<h1>Apache Tomcat Sales Home Page</h1>
</body>
</html>
EOF
$

Restarting First Tomcat Instance

Now check whether the new web application has been configured correctly. To do that, run the following commands to restart the new Tomcat JVM instance:
$ source /opt/tomcat-instance/sales.env
$ $CATALINA_HOME/bin/shutdown.sh
$ $CATALINA_HOME/bin/startup.sh
If everything was configured correctly, you should now see the default home page for the new web application when opening the URL http://localhost/mysales or http://localhost:8080/mysales. Instead of localhost you should also be able to use the name of your server. If you get the error 'java.net.ConnectException: Connection refused' when you shut down Tomcat, then Tomcat was probably not running. If you don't see the home page, check the log files under $CATALINA_BASE/logs.

Deploying Java Servlet for Web Application in First Tomcat JVM Instance

Setting up Java Servlet Layout

To follow the Java Servlet Specification for the new "sales" web application, I created the classes directory for the Java class files under the new directory $CATALINA_BASE/webapps/sales/WEB-INF; see also Packaging Web Components. The WEB-INF directory is protected from access by browsers, meaning its contents cannot be served directly to clients. The classes directory under WEB-INF is where web components and server-side utility classes should go. To create the WEB-INF and classes directories, run the following command:
$ mkdir -p $CATALINA_BASE/webapps/sales/WEB-INF/classes  
JAR Files

Most Java servlets also need JAR (Java ARchive) files which should be put under the lib directory. Since it's a good practice to keep the application separate from the Tomcat distribution directory tree, I created a new lib directory under $CATALINA_BASE/webapps/sales/WEB-INF which is consistent with WAR's hierarchical directory structure.
$ mkdir $CATALINA_BASE/webapps/sales/WEB-INF/lib  
The Java servlet example below requires the servlet-api.jar JAR file. This JAR is already available in the Tomcat distribution directory tree $CATALINA_HOME/lib. You could copy this JAR file to the application's new lib directory $CATALINA_BASE/webapps/sales/WEB-INF/lib, but then you would get the following warning in the $CATALINA_BASE/logs/catalina.out log file when you startup Tomcat:

INFO: validateJarFile(/opt/tomcat-instance/sales.example.com/webapps/sales/WEB-INF/lib/servlet-api.jar) - jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class

Tomcat shows this warning since it now tries to load the JAR file twice, first from $CATALINA_HOME/lib and then from $CATALINA_BASE/webapps/sales/WEB-INF/lib. Even though this is not a problem for Tomcat, it's better not to keep JARs in two places. Since the servlet-api.jar JAR file already exists in the Tomcat distribution directory, I did not copy it to the $CATALINA_BASE/webapps/sales/WEB-INF/lib directory. I use this directory for application-specific JARs that don't come with the Tomcat distribution. You could also remove the JAR in $CATALINA_HOME/lib, but remember that it will reappear the next time you upgrade the Tomcat software.

Creating a Java Servlet

Since server-side classes are supposed to go to the WEB-INF/classes directory, I created the following class file example under $CATALINA_BASE/webapps/sales/WEB-INF/classes (/opt/tomcat-instance/sales.example.com/webapps/sales/WEB-INF/classes) and saved it as Sales.java:
$ cat $CATALINA_BASE/webapps/sales/WEB-INF/classes/Sales.java
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class Sales extends HttpServlet {

    public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws IOException, ServletException
    {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html>");
        out.println("<head>");
        out.println("<title><b>Sales</b> Page</title>");
        out.println("</head>");
        out.println("<body>");
        out.println("<h3>Executing Sales ...</h3>");
        out.println("</body>");
        out.println("</html>");
    }
}
To compile the new Java servlet, the servlet-api.jar JAR file is needed, which can be specified with either the -classpath option or the CLASSPATH environment variable. The -classpath option for JDK tools is preferred over the CLASSPATH environment variable since it can be set individually for each application without affecting others. In the following example I specify the path of the library directory followed by the wildcard '*'. This is equivalent to specifying all files with the extension .jar or .JAR in that directory, so individual JAR files like servlet-api.jar don't need to be listed.

The following command should now compile the Java servlet without errors:
$ cd $CATALINA_BASE/webapps/sales/WEB-INF/classes
$ javac -classpath "$CATALINA_HOME/lib/*" Sales.java
$ ls
Sales.class  Sales.java
$
Configuring the Java Servlet

To configure servlets and other components for an application, an XML file called web.xml needs to be configured. The format of this file is defined in the Java Servlet Specification. In Tomcat, this file exists in two places:

  $CATALINA_BASE/conf/web.xml
  $CATALINA_BASE/webapps/{your-appname}/WEB-INF/web.xml

The first one is the default web.xml file, which is the base for all web applications in a Tomcat JVM instance; the second one belongs to the web application under WEB-INF and overrides application-specific settings.

For the newly created Java servlet "Sales" I created a new web.xml file under $CATALINA_BASE/webapps/sales/WEB-INF:
$ cat $CATALINA_BASE/webapps/sales/WEB-INF/web.xml
<web-app>
  <servlet>
    <servlet-name>servlet_sales</servlet-name>
    <servlet-class>Sales</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>servlet_sales</servlet-name>
    <url-pattern>/execute</url-pattern>
  </servlet-mapping>
</web-app>
For each servlet there is a <servlet> element. It identifies the servlet name (<servlet-name>) and the Java class name (<servlet-class>). The servlet mapping (<servlet-mapping>) maps a URL pattern (<url-pattern>) to the servlet name (<servlet-name>). In the above example "/execute" in "http://localhost:8080/mysales/execute" maps to "servlet_sales", which points to the "Sales" servlet class. Note that the order of these elements is important. So when you open the URL "http://localhost:8080/mysales/execute", the "Sales" Java servlet will be executed.

In the following example I updated the $CATALINA_BASE/webapps/sales/index.html file to provide an entry point to the new Java servlet:
$ cat $CATALINA_BASE/webapps/sales/index.html
<html>
<body>
<h1>Apache Tomcat Sales Home Page</h1>
<a href="/mysales/execute">Execute Sales</a>
</body>
</html>
$

Testing and Executing the Java Servlet

Note that if you set the CLASSPATH environment variable in the shell in which you start up Tomcat, you may get java.lang.NoClassDefFoundError / java.lang.ClassNotFoundException errors in your browser when you execute a servlet. To avoid this, simply re-login as the tomcat user before you start up Tomcat:
# su - -s /bin/sh tomcat
$ source /opt/tomcat-instance/sales.env
$ $CATALINA_HOME/bin/shutdown.sh
$ $CATALINA_HOME/bin/startup.sh
After Tomcat has restarted, open the URL http://localhost/mysales (or use the server name instead of localhost) and you should see the "Execute Sales" link. Clicking this link should invoke the Java servlet and display "Executing Sales ..." in your browser. If you are presented with an empty page instead, review the above steps and make sure you didn't miss anything. Also check the log files under $CATALINA_BASE/logs.

Setting Up Second Tomcat JVM Instance

General

If you've gone through all the previous steps in this HOWTO, then the following steps should be very easy to follow and to understand without much explanation. Therefore, I'll provide just the steps for setting up a second Tomcat JVM instance and an application called "Order".

Steps for Second Tomcat JVM Instance and Application

Login as root and execute the following steps to setup the second Tomcat JVM instance:
# mkdir -p /opt/tomcat-instance/order.example.com
# cd /opt/tomcat-instance/order.example.com
#
# cp -a /var/lib/apache-tomcat-6.0.18/conf .
# mkdir common logs temp server shared webapps work
#
# chown -R tomcat.tomcat /opt/tomcat-instance/order.example.com
#
# su - -s /bin/sh tomcat
$ cat > /opt/tomcat-instance/order.env << EOF
export JAVA_HOME=/usr/java/jdk1.6.0_10
export PATH=\$JAVA_HOME/bin:\$PATH
export CATALINA_HOME=/var/lib/apache-tomcat-6.0.18
export CATALINA_BASE=/opt/tomcat-instance/order.example.com
EOF
$
$ source /opt/tomcat-instance/order.env
$
For the second Tomcat JVM instance the default port numbers need to be changed in $CATALINA_BASE/conf/server.xml (/opt/tomcat-instance/order.example.com/conf/server.xml). In the following example I increased the port numbers by one:
<Server port="8006" shutdown="SHUTDOWN">
...
<Connector port="8081" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8444" />
...
<Connector port="8010" protocol="AJP/1.3" redirectPort="8444" />
Create a new application root directory:
$ mkdir $CATALINA_BASE/webapps/order  
To configure the new web application, edit $CATALINA_BASE/conf/server.xml (/opt/tomcat-instance/order.example.com/conf/server.xml) and add the following Context entry:
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
    <Context path="/myorder" docBase="order" />
    ...
</Host>
Create a new home page for the new "Order" application and include a link to the Java servlet that will be setup next:
$ cat > $CATALINA_BASE/webapps/order/index.html << EOF
<html>
<body>
<h1>Apache Tomcat Order Home Page</h1>
<a href="/myorder/execute">Execute Order</a>
</body>
</html>
EOF
$
Now set up the directories and create a new Java servlet:
$ mkdir -p $CATALINA_BASE/webapps/order/WEB-INF/classes
$ mkdir $CATALINA_BASE/webapps/order/WEB-INF/lib
$ cat $CATALINA_BASE/webapps/order/WEB-INF/classes/Order.java
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class Order extends HttpServlet {

    public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws IOException, ServletException
    {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html>");
        out.println("<head>");
        out.println("<title><b>Order</b> Page</title>");
        out.println("</head>");
        out.println("<body>");
        out.println("<h3>Executing Order ...</h3>");
        out.println("</body>");
        out.println("</html>");
    }
}
Compile the new Java servlet:
$ cd $CATALINA_BASE/webapps/order/WEB-INF/classes
$ javac -classpath "$CATALINA_HOME/lib/*" Order.java
$ ls
Order.class  Order.java
$
Configure the Java servlet:
$ cat $CATALINA_BASE/webapps/order/WEB-INF/web.xml
<web-app>
  <servlet>
    <servlet-name>servlet_order</servlet-name>
    <servlet-class>Order</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>servlet_order</servlet-name>
    <url-pattern>/execute</url-pattern>
  </servlet-mapping>
</web-app>
Now make sure to re-login as the tomcat user and start the second Tomcat JVM instance:
# su - -s /bin/sh tomcat
$ source /opt/tomcat-instance/order.env
$ $CATALINA_HOME/bin/startup.sh
After the second Tomcat JVM instance has started, open the URL http://localhost:8081/myorder (or use the server name instead of localhost) and you should see the "Execute Order" link. Clicking this link should invoke the Java servlet and display "Executing Order ..." in your browser. If you are presented with an empty page instead, review the above steps and make sure you didn't miss anything. Also check the log files under $CATALINA_BASE/logs.

Friday, November 11, 2011

Migration from NFSv3 to NFSv4

Comparison Between NFSv3 and NFSv4

1. Transport protocols

For NFSv3, the MOUNT service is normally supported over both the TCP and UDP protocols.
For NFSv4, only the TCP protocol is supported.
NFSv4 is designed for internet use: it uses a single, fixed, well-known network port, 2049 by default, so using NFSv4 through firewalls is easier than with earlier NFS versions.
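Because NFSv4 needs only a single TCP port, opening it through a firewall reduces to one rule. A sketch for an iptables-style rules file, assuming the default port 2049:

```shell
# /etc/sysconfig/iptables fragment (sketch): allow inbound NFSv4
-A INPUT -m state --state NEW -p tcp --dport 2049 -j ACCEPT
```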

2. Locking operation

The NFSv3 protocol is stateless, so an additional protocol, the Network Lock Manager (NLM), is required to support locking of NFS-mounted files. NLM itself is stateful: the server keeps track of locks.

NFSv4 is stateful. Locking operations (OPEN/READ/WRITE/LOCK/LOCKU/CLOSE) are part of the protocol proper, and NLM is not used by NFSv4.

3. Required Services

NFSv3 relies on Remote Procedure Calls (RPC) to encode and decode requests between clients and servers. NFSv3 depends on portmapper, rpc.mountd, rpc.lockd, rpc.statd.

NFSv4 has no interaction with portmapper, rpc.mountd, rpc.lockd, and rpc.statd, since their functions have been incorporated into the v4 protocol itself. NFSv4 listens on a well-known TCP port (2049), which eliminates the need for the portmapper, and the mounting and locking operations are part of the protocol, which eliminates the need for rpc.mountd and rpc.lockd.

4. Security
NFSv3 security is based on export/mount access control: it is the host that makes the mount request, not an individual user of the file system.

With NFSv4, the mandatory security mechanisms are oriented towards authenticating individual users, e.g. by configuring the Kerberos version 5 GSS-API or other security mechanism.

Migration From NFSv3 to NFSv4

Migrating a system from NFSv3 to NFSv4 is a five-step process:

1. Listing the data to export

The first step is to list all data to be exported by NFSv3. Usually, there are several directories to be exported. The following directories are used in the example below.

# showmount -e
Export list for jch-lnx:
/myshare *
/distros *

2. Choosing the NFSv4 virtual root

To define the NFSv4 virtual root, either:
a) Create a new NFSv4 root directory, for example /exports/, or
b) Choose an existing directory to be used as the virtual root, for example /home/.
In the following examples, /exports is used as the virtual root.

3. Data migration to the virtual root

Once we have a virtual root, we need to make the data exported by NFSv3 available under it. There are two main possibilities:

A. Copy or move the data from its original path to the virtual root:

# mv /myshare /exports/myshare_v4
# mv /distros /exports/distros_v4

B. Bind-mount the data path sub-tree into the virtual root (recommended, since the data stays in place):

# mkdir /exports/myshare_v4
# mount --bind /myshare /exports/myshare_v4/
# mkdir /exports/distros_v4/
# mount --bind /distros /exports/distros_v4/

4. Modifying export options

The goal is to convert the NFSv3 export options into NFSv4 ones. Most of the NFSv3 and NFSv4 options are the same (though NFSv4 requires an additional option to exportfs to provide the virtual root).

1) Virtual root export

# exportfs -o fsid=0,insecure,no_subtree_check *:/exports

2) Exporting subdirectories of the virtual root

# exportfs -o rw,nohide,insecure,no_subtree_check *:/exports/myshare_v4
# exportfs -o rw,nohide,insecure,no_subtree_check *:/exports/distros_v4
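The exportfs commands above take effect immediately but do not survive a reboot. A rough /etc/exports equivalent, sketched with the same paths and options:

```shell
# /etc/exports (sketch): fsid=0 marks the NFSv4 virtual root
/exports             *(fsid=0,insecure,no_subtree_check)
/exports/myshare_v4  *(rw,nohide,insecure,no_subtree_check)
/exports/distros_v4  *(rw,nohide,insecure,no_subtree_check)
```

After editing /etc/exports, run exportfs -r to re-export the list.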


5. Mounting

With the previous configuration and examples, the following commands would be used:

# mkdir /mnt/nfs_v4_root
# mount -t nfs4 10.182.121.238:/ /mnt/nfs_v4_root
# echo $?
0
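To make the client mount persistent across reboots, the same mount can be expressed as an /etc/fstab line (a sketch using the example server address from above):

```shell
# /etc/fstab (sketch)
10.182.121.238:/   /mnt/nfs_v4_root   nfs4   defaults   0 0
```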

Friday, August 26, 2011

skel directory

/etc/skel directory to push configuration to user

By default, all files in /etc/skel are copied to a new user's home directory when the account is created. A few files are included in /etc/skel/ by default. To push a custom script, or any other file you want every new user to have, just copy it into /etc/skel/.

  • /etc/skel/.bash_logout
  • /etc/skel/.bashrc
  • /etc/skel/.profile
  • /etc/skel/.cshrc
  • /etc/skel/.exrc (/etc/skel/.vimrc)
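The copy step that useradd performs can be sketched without root by simulating the skeleton and home directories (the .bash_aliases file and all directory names here are made up for illustration):

```shell
# Simulate what 'useradd -m' does with /etc/skel, using throwaway directories
skel=$(mktemp -d)      # stand-in for /etc/skel
home=$(mktemp -d)      # stand-in for /home

echo 'alias ll="ls -l"' > "$skel/.bash_aliases"   # custom file pushed to new users

mkdir "$home/newuser"
cp -a "$skel/." "$home/newuser/"   # useradd copies the skeleton contents like this

ls -A "$home/newuser"              # the new home now contains .bash_aliases
```

On the real system, the equivalent would simply be placing the file in /etc/skel and then running useradd -m.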

Friday, August 12, 2011

How to open port on Linux

Open port 8080
Open the file /etc/sysconfig/iptables:
# vi /etc/sysconfig/iptables


Append a rule as follows (this file contains raw rules, without the leading iptables command):
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT

Save and close the file. Restart iptables:
# /etc/init.d/iptables restart
Verify that port 8080 is open by running the following command:

netstat -tulpn | less
Make sure iptables is allowing port 8080
iptables -L -n

Refer to iptables man page for more information about iptables usage and syntax:
man iptables
Since you should not give up your firewall, you will have to add a rule to open this port instead.

Do:
cd /etc/sysconfig
cp iptables iptables.save_it
vi iptables

You will find existing ACCEPT rules in this file. Enter a line right after them to open port 8080:

-A INPUT -p tcp -m tcp --dport 8080 --syn -j ACCEPT

Save it and restart the service "iptables" as described above and your port 8080 will work.


https://help.ubuntu.com/community/IptablesHowTo

Thursday, August 4, 2011

Linux User Disk quota implementation

What is disk quota?
Ans :
Disk quota is nothing but restricting disk-space usage for users. One thing to remember when dealing with disk quota: it can be applied only to disks/partitions, not to files and folders.

So how can we implement disk quota?
Disk quota can be implemented in two ways:

a) On inodes
b) On blocks

What is an INODE?
Ans :
In Linux every object is considered a file, and every file has an inode number associated with it, which makes it easy for the system to locate the file on disk.

Inode stands for Index Node, and is the focus of all file activities in the UNIX file-system.
Each file has one inode that defines the file's type (regular, directory, device, etc.), its location on disk, its size, its access permissions, and its access times.

Note that the file’s name is not stored in the inode.

So how to know what is your file Inode number?

Ans: It's simple, just execute ls -i on your file:

ls -i xmls.txt

13662 xmls.txt
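One way to see that the name lives outside the inode is a hard link: two names, one inode (the file names below are made up for illustration):

```shell
echo hello > a.txt
ln a.txt b.txt            # creates a second name for the same inode
ls -i a.txt b.txt         # both names show the identical inode number
stat -c %i a.txt b.txt    # same numbers again (GNU coreutils stat assumed)
```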

Now that inodes are clear, let's move on to blocks.

A block is the smallest unit of allocation on a disk; usually one block equals 1 KB. Some terms used in disk quota:

Soft limit : This is the disk limit at which the user just gets a warning message saying that the disk quota is about to be exceeded. This is only a warning; no restriction on data creation occurs at this point.

Hard limit : This is the disk limit at which the user gets an error message stating that no more data can be created.

Implementing QUOTA :
Step 1 : Select/prepare the partition for quota. Most of the time disk quota is implemented to keep users from creating unwanted data on servers, so we will implement disk quota on the /home mount point.

#vi /etc/fstab

Edit the /home mount point as follows
Before editing

/dev/hda2 /home ext3 defaults 0 0


after editing

/dev/hda2 /home ext3 defaults,usrquota 0 0


Step 2 : Remount the partition (this is done because the updated mount table has to be passed to the kernel). Otherwise you could also reboot the system to update the mount table, which is not preferred for live servers.

#mount -o remount,rw /home

Here -o specifies options, with remounting /home partition with read and write options.

Step 3 : Creating the quota database

#quotacheck -cu /home

The option -c is for creating the disk quota DB and -u is for user quotas.
Check whether the user database was created: when you run ls /home you should see an aquota.user file in the /home directory, which contains the user database.

Step 4 : Switching on quota

#quotaon /home

Now get the report of the current quota values for the surendra users:

#repquota -a | grep surendra
surendra_anne --   4 0 0 1 0 0
surendra_a -- 4 0 0 1 0 0
surendra_test -- 16 0 0 4 0 0

Step 5 : Now implement disk quota for user surendra_anne on the /home mount point (/dev/hda2)

#setquota -u surendra_anne 100 110 0 0 /dev/hda2

Step 6 : To check whether quota is implemented, log in as user surendra_anne and execute one of these commands:

#repquota -a 

or

#quota 

Step 7 : Keep creating data. Once the soft limit (100 blocks in the setquota command above) is reached, the user gets a warning message, and once the hard limit (110 blocks) is reached he cannot create any more data.

Hint : To create a data file you can use the seq command as below:

#seq 1 10000 > test.txt

This command creates a file with 10000 lines, each containing a number.
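A quick sanity check of the hint, and of the size you are actually producing (du -k reports 1 KB blocks, the unit quota tools count in):

```shell
seq 1 10000 > test.txt    # 10000 lines: "1" through "10000"
wc -l < test.txt          # prints 10000
du -k test.txt            # size of the file in 1 KB blocks
rm test.txt
```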

Removing quota :
To do this, all users should be logged out of the system, so it is better to do it in single-user mode (runlevel 1).

Step 8 : Stop the disk quota

#quotaoff /home

Step 9 : Remove the quota database located in /home

#rm /home/aquota.user

Step 10 : Edit the fstab file and remove usrquota from the /home line

#vi /etc/fstab

Before editing

/dev/hda2 /home ext3 defaults,usrquota 0 0

After editing

/dev/hda2 /home ext3 defaults 0 0

Step 11 : Remount the /home partition

#mount -o remount,rw /home
That’s it, you are done with disk quota implementation in Linux. Now test yourself by creating a Linux user disk quota on your own.

Wednesday, July 6, 2011

SSH DSA key login without password

How do you make sure that your passwords are safe? You can make them longer, complicate them by adding odd characters, or make sure to use different passwords for each user account that you have. Or, you can simply skip them altogether.

The secure shell, ssh, is a key tool in any Linux user's toolbox. As soon as you have more than one machine to interact with, ssh is the obvious choice.

When logging into a remote machine through ssh, you are usually prompted with the remote user's password. An alternative to this is to use an asymmetric key pair.

An asymmetric key pair consists of a private and public key. These are generated from an algorithm - either RSA or DSA. RSA has been around for a long time and is widely supported, even by old legacy implementations. DSA is safer, but requires v2 of the ssh protocol. This is not much of an issue in an open source world - keeping the ssh daemon implementation up to date is not a problem, but rather a requirement. Thus, DSA is the recommended choice, unless you have any specific reason to pick RSA.

The generated keys are larger than a common user password. RSA keys are at least 768 bits long, 2048 bits by default. DSA keys are 1024 bits, as the standard specifies.

To generate a DSA key, use the following command:

$ ssh-keygen -t dsa

This generates the files ~/.ssh/id_dsa and ~/.ssh/id_dsa.pub. You can specify a passphrase in the key generation process. This means that the key can only be used in combination with the passphrase, adding to the security.

The generated file ending with pub is the public half of the pair. This can be shared with remote hosts that you want to ssh to. The other file, id_dsa, is the private half of the pair. This must be protected just like your password, i.e. do not mail it, do not store it on untrusted machines, etc.

Having a 1024-bit key can be thought of as having a 128-character password. This means that the key pair method is safer than most passwords that you can remember. Keys are also completely random, so they cannot be cracked using dictionary attacks. This means that you can increase the safety of your remote host by disabling logins using passwords, thus forcing all users to use key pairs.
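Disabling password logins, as suggested above, is done in the SSH daemon's configuration. A sketch of the relevant /etc/ssh/sshd_config lines; restart sshd afterwards, and only after verifying that key login works:

```shell
# /etc/ssh/sshd_config fragment (sketch)
PasswordAuthentication no
ChallengeResponseAuthentication no
```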

Having generated your key pair, all that is left is copying the public half of the key to the remote machine. You do that using the ssh-copy-id command.

$ ssh-copy-id -i ~/.ssh/id_dsa.pub user@remote

This adds your key to the remote machine's list of authorized keys. Just to be on the safe side, it is also good to ensure that the ~/.ssh and ~/.ssh/authorized_keys aren't writable by any other user than you. You might have to fix this using chmod.

Having added the key to the remote machine, you should now be able to ssh to it without using a password.

$ ssh user@remote

This applies to all sshd-based mechanisms. So you can scp freely, as well as mount parts of the remote file system using sshfs.

One potential catch-22 here is if the remote machine does not allow password-based logins. Then the ssh-copy-id command will not work. Instead you will have to take the contents of the public key half and manually add it as a new line to the ~/.ssh/authorized_keys file on the remote machine. This is what the ssh-copy-id command does for you.
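A manual equivalent of ssh-copy-id can be sketched as a one-liner ('user@remote' is a placeholder); the quoted part runs on the remote side and appends the key while keeping safe permissions:

```shell
cat ~/.ssh/id_dsa.pub | ssh user@remote \
    'mkdir -p ~/.ssh && chmod 700 ~/.ssh && \
     cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'
```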

This also tells you what to do if a key is compromised, or simply falls into disuse. Simply remove the corresponding line from the remote's list of authorized keys. You can usually recognize the key in question from the end of the line where it reads username@hostname.

So, until next time, no more passwords!

Thursday, June 16, 2011

/bin/rm: Argument list too long.

root@mx /var/virusmails # ls
razor-agent.log
spam-3398a20c9a59797df9b57fbe34feeace-20040519-084342-19051-05.gz
spam-57e230b6d1dca0dadf83d858d0b10788-20040519-084400-19144-03.gz
spam-6f3be6d2304f90e418db23443916101a-20040519-082357-18227-10.gz
virus-20040419-091017-12544-01
virus-20040419-130621-14993-07
virus-20040421-120113-57877-07
virus-20040421-165651-61698-07
virus-20040423-020850-90966-03
virus-20040423-090733-97665-04
virus-20040427-211030-99133-07
virus-20040427-225312-01622-01
virus-20040428-190241-18845-05
virus-20040505-103654-59956-10

root@mx /var/virusmails # rm spam-*
/bin/rm: Argument list too long.
How many files was I dealing with here?
root@mx /var/virusmails # ls -1 | grep virus | wc -l
1667

This is not a limitation of the rm command, but a kernel limitation on the size of the arguments of a command. Since I was performing shell globbing (selecting all the files starting with spam-), the size of the command-line arguments grew with the number of files involved. For the curious, the limit is defined by:
egrep ARG_MAX /usr/include/linux/limits.h

#define ARG_MAX 131072 /* # bytes of args + environ for exec() */
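Instead of reading the header you can also query the limit at runtime (the exact number varies by kernel and configuration, so no specific value is assumed here):

```shell
getconf ARG_MAX    # byte limit for arguments plus environment of exec()
```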


Solution is to remove file through following find command

root@mx /var/virusmails # find . -name 'spam-*' | xargs rm
It works like a charm.
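Two variants worth knowing: the plain pipe breaks on file names containing spaces or newlines, which the NUL-delimited form handles, and GNU find can delete without spawning rm at all:

```shell
find . -name 'spam-*' -print0 | xargs -0 rm   # safe with odd file names
find . -name 'spam-*' -delete                 # GNU find: no external rm needed
```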

Wednesday, June 15, 2011

LVM

Logical volume management is a widely-used technique for deploying logical rather than physical storage. With LVM, «logical» partitions can span across physical hard drives and can be resized. A physical disk is divided into one or more physical volumes (PVs), and logical volume groups (VGs) are created by combining PVs. Notice that a VG can be an aggregate of PVs from multiple physical disks.

Example Configuration

This article describes a Linux logical volume manager by showing an example of configuration and usage. We use RedHat Linux for this example.

Physical Volumes PV
With LVM, physical partitions are simply called «physical volumes» or «PVs». These PVs are usually entire disks but may be disk partitions, for example /dev/sda3 in the fdisk listing further below. PVs are created with pvcreate to initialize a disk or partition.
Command     Remarks
pvcreate    Initialize a disk or partition for use by LVM
pvchange    Change attributes of a physical volume
pvdisplay   Display attributes of a physical volume
pvmove      Move physical extents
pvremove    Remove a physical volume
pvresize    Resize a disk or partition in use by LVM2
pvs         Report information about physical volumes
pvscan      Scan all disks for physical volumes
Example: pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Physical Volume Groups VG
The PVs in turn are combined to create one or more large virtual disks called «volume groups» or «VGs». While you can create many VGs, one may be sufficient. A VG can grow or shrink by adding or removing PVs from it.
The command vgcreate creates a new volume using the block special device previously configured with pvcreate.
Command       Remarks
vgcreate      Create a volume group
vgchange      Change attributes of a volume group
vgdisplay     Display attributes of volume groups
vgcfgbackup   Backup volume group descriptor area
vgcfgrestore  Restore volume group descriptor area
vgck          Check volume group metadata
vgconvert     Convert volume group metadata format
vgexport      Make volume groups unknown to the system
vgextend      Add physical volumes to a volume group
vgimport      Make exported volume groups known to the system
vgmerge       Merge two volume groups
vgmknodes     Recreate volume group directory and logical volume special files
vgreduce      Reduce a volume group
vgremove      Remove a volume group
vgrename      Rename a volume group
vgs           Report information about volume groups
vgscan        Scan all disks for volume groups and rebuild caches
vgsplit       Split a volume group into two
Example: vgcreate VGb1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Logical Volumes LV
Once you have one or more physical volume groups you can create one or more virtual partitions called «logical volumes» or «LVs». Note each LV must fit entirely within a single VG.
The command lvcreate creates a new logical volume by allocating logical extents from the free physical extent pool of that volume group.
Command      Remarks
lvcreate     Create a logical volume in an existing volume group
lvchange     Change attributes of a logical volume
lvdisplay    Display attributes of a logical volume
lvextend     Extend the size of a logical volume
lvmchange    Change attributes of the logical volume manager
lvmdiskscan  Scan for all devices visible to LVM2
lvreduce     Reduce the size of a logical volume
lvremove     Remove a logical volume
lvrename     Rename a logical volume
lvresize     Resize a logical volume
lvscan       Scan (all disks) for logical volumes
Example: lvcreate -L 400 -n LVb1 VGb1
This creates a logical volume, named «LVb1», with a size of 400 MB from the virtual group «VGb1».
Filesystems

Finally, you can create any type of filesystem you wish on the logical volume, including swap space. Note that some filesystems are more useful with LVM than others: not all filesystems support growing and shrinking. ext2, ext3, xfs, and reiserfs do support such operations and would be good choices.

Creating the Root Logical Volume «LVa1» during Installation

The physical volumes are combined into logical volume groups, with the exception of the /boot partition. The /boot partition (/dev/sda1) cannot be
on a logical volume group because the boot loader cannot read it. If the root partition is on a logical volume, create a separate /boot partition which is not a part of a volume group.
In this example the swap space (/dev/sda2) is also created on a normal ext3 partition. The setup of the LVM for the root filesystem (/dev/sda3) is done during the installation of RedHat Linux.
After creating the /boot filesystem and the swap space, select the free space and create the physical volume for /dev/sda3 as shown in the next figure.
  1. Select New.
  2. Select physical volume (LVM) from the File System Type pulldown menu.
  3. You cannot enter a mount point yet.
  4. A physical volume must be constrained to one drive.
  5. Enter the size that you want the physical volume to be.
  6. Select Fixed size to make the physical volume the specified size, select Fill all space up to (MB) and enter a size in MBs to give a range for the physical volume size, or select Fill to maximum allowable size to make it grow to fill all available space on the hard disk.
  7. Select Force to be a primary partition if you want the partition to be a primary partition.
  8. Click OK to return to the main screen.
The result is shown in the next figure, the physical volume PV is located on /dev/sda3.
Once all the physical volumes are created, the volume groups can be created:
  1. Click the LVM button to collect the physical volumes into volume groups. A volume group is basically a collection of physical volumes. You can have multiple volume groups, but a physical volume can only be in one volume group.
  2. Change the Volume Group Name if desired.
  3. Select which physical volumes to use for the volume group.
Enter the name for the logical volume group as shown in the next figure.
The result is the logical volume group VGa1 located on the physical volume /dev/sda3.

Creating the Logical Volume «LVb1» manually

Create Partitions
For this LVM example you need an unpartitioned hard disk /dev/sdb. First you need to create physical volumes. To do this you need partitions or a whole disk. It is possible to run the pvcreate command on a whole disk, but I prefer to use partitions, and from partitions I later create physical volumes.


fdisk -l
....
Device Boot Start End Blocks Id System
/dev/sda1 * 1 127 1020096 83 Linux
/dev/sda2 128 382 2048287+ 82 Linux swap / Solaris
/dev/sda3 383 2610 17896410 8e Linux LVM
....

The partition type for LVM is 8e.

fdisk /dev/sdb

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4):
1
First cylinder (1-2136, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2136, default 2136):
Using default value 2136

Command (m for help):
t
Selected partition 1
Hex code (type L to list codes):
8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help):
w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
This is done for the other disks (/dev/sdc, /dev/sdd, /dev/sde) as well.
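The interactive fdisk session above can also be scripted, which is handy when preparing several disks. One hedged alternative uses parted; this is destructive, so run it only on an empty disk:

```shell
parted -s /dev/sdb mklabel msdos           # write a new, empty partition table
parted -s /dev/sdb mkpart primary 0% 100%  # one primary partition spanning the disk
parted -s /dev/sdb set 1 lvm on            # flag partition 1 as Linux LVM (type 8e)
```

Repeat for /dev/sdc, /dev/sdd, and /dev/sde before running pvcreate.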

Create physical volumes

Use the pvcreate command to create physical volumes.
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdc1" successfully created
Physical volume "/dev/sdd1" successfully created
Physical volume "/dev/sde1" successfully created

Create volume group VGb1
At this stage you need to create a volume group, which serves as a container for your physical volumes. To create a volume group with the name «VGb1» that includes all four partitions, issue the following command.
vgcreate VGb1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

Volume group "VGb1" successfully created

vgdisplay

--- Volume group ---
VG Name VGb1
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 4
Act PV 4
VG Size 65.44 GB
PE Size 4.00 MB
Total PE 16752
Alloc PE / Size 16717 / 65.30 GB
Free PE / Size 35 / 140.00 MB
VG UUID 2iSIeo-dw0Q-NA07-HUt0-Pjxq-m3gh-f33lAh
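The vgdisplay numbers are internally consistent: the VG size is just the extent count times the extent size. A quick sketch using the values from the output above:

```shell
# Relate "Total PE" and "PE Size" from the vgdisplay output above
pe_size_mb=4        # PE Size   4.00 MB
total_pe=16752      # Total PE  16752
echo $(( total_pe * pe_size_mb ))   # volume group size in MB
```

67008 MB divided by 1024 is about 65.44 GB, matching the reported VG Size.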
Create logical volume LVb1
To create a logical volume named «LVb1» from the volume group «VGb1», using nearly all of its free space (65.3 GB), use the following command.
lvcreate -L 65.3G -n LVb1 VGb1

Rounding up size to full physical extent 65.30 GB
Logical volume "LVb1" created
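If you would rather not compute the size by hand, lvcreate can also allocate by extents. A hedged alternative, reusing the VG and LV names from above:

```shell
# Allocate every free extent in the VG instead of giving a size in GB
lvcreate -l 100%FREE -n LVb1 VGb1
# Verify the result
lvdisplay /dev/VGb1/LVb1
```

This avoids the rounding message, since the size is already a whole number of physical extents.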
Create a file system on the logical volume
The logical volume is almost ready to use. All you need to do is create a filesystem.
mke2fs -j /dev/VGb1/LVb1

mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
8568832 inodes, 17118208 blocks
855910 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
523 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
After adding an entry for the new filesystem to /etc/fstab, mount it:
mount -a
You can now use the filesystem. For maintenance, use the LVM commands shown above.
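For mount -a to pick up the new filesystem, it needs an /etc/fstab entry. A minimal sketch; the mount point /data is an assumption, so substitute your own:

```shell
# Hypothetical fstab entry for the new logical volume (mount point /data assumed)
mkdir -p /data
echo '/dev/VGb1/LVb1  /data  ext3  defaults  1 2' >> /etc/fstab
mount -a   # mounts everything listed in /etc/fstab, including the new LV
```

The trailing "1 2" tells dump to back the filesystem up and fsck to check it after the root filesystem.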
Moving a VG to another server:

To do this we use the vgexport and vgimport commands.

vgexport and vgimport are not necessary to move disk drives from one server to another.
They are an administrative policy tool that prevents access to the volumes during the time it takes to move them.

1. Unmount the file system
First, make sure that no users are accessing files on the active volume, then unmount it:

# umount /appdata

2. Mark the volume group inactive
Marking the volume group inactive removes it from the kernel and prevents any further activity on it.

# vgchange -an appvg
vgchange -- volume group "appvg" successfully deactivated


3. Export the volume group

Now export the volume group. This prevents it from being accessed on the old server and prepares it to be removed.

# vgexport appvg
vgexport -- volume group "appvg" successfully exported

Now, when the machine is next shut down, the disk can be unplugged and connected to its new machine.

4. Import the volume group

When plugged into the new server, the disk may appear as, say, /dev/sdc (the exact name depends on the system), so an initial pvscan shows:

# pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- inactive PV "/dev/sdc1" is in EXPORTED VG "appvg" [996 MB / 996 MB free]
pvscan -- inactive PV "/dev/sdc2" is in EXPORTED VG "appvg" [996 MB / 244 MB free]
pvscan -- total: 2 [1.95 GB] / in use: 2 [1.95 GB] / in no VG: 0 [0]

We can now import the volume group, activate it, and mount the file system.

If you are importing on an LVM 2 system, run:

# vgimport appvg
Volume group "appvg" successfully imported

5. Activate the volume group

You must activate the volume group before you can access it.

# vgchange -ay appvg

Mount the file system

# mkdir -p /appdata
# mount /dev/appvg/appdata /appdata

The file system is now available for use.