Friday, August 28, 2015

How to Install Tomcat 7.0.63 Server on CentOS/RHEL 7/6 -- 64bit

Step 1: Check Java Version
# java -version

java version "1.8.0_31"
Java(TM) SE Runtime Environment (build 1.8.0_31-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.31-b07, mixed mode)

If Java is not installed yet, install a JDK first, e.g. jdk-7u79-linux-x64.rpm or a newer release.

Downloading Tomcat 7 Archive
# cd /tmp
# wget http://www.us.apache.org/dist/tomcat/tomcat-7/v7.0.63/bin/apache-tomcat-7.0.63.tar.gz
# tar xzf apache-tomcat-7.0.63.tar.gz
# mv apache-tomcat-7.0.63 /usr/local/tomcat7

Starting Tomcat
# cd /usr/local/tomcat7
# ./bin/startup.sh

Using CATALINA_BASE:   /usr/local/tomcat7
Using CATALINA_HOME:   /usr/local/tomcat7
Using CATALINA_TMPDIR: /usr/local/tomcat7/temp
Using JRE_HOME:        /opt/jdk1.8.0_31
Using CLASSPATH:       /usr/local/tomcat7/bin/bootstrap.jar:/usr/local/tomcat7/bin/tomcat-juli.jar
Tomcat started.

Access Tomcat in Browser
http://svr1.tecadmin.net:8080
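
If the page does not load, it helps to confirm from the server itself that something is listening on port 8080. A small helper function (a hypothetical sketch using bash's /dev/tcp redirection; nothing here is part of Tomcat) can check any host/port:

```shell
# port_open HOST PORT: print "open" if a TCP connection succeeds,
# otherwise print "closed" (uses bash's built-in /dev/tcp redirection)
port_open() {
    if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}

port_open localhost 8080    # prints "open" once Tomcat is running
```

If the port reports closed, check the startup logs under /usr/local/tomcat7/logs/ and any firewall rules in between.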

Setup User Accounts

Edit the conf/tomcat-users.xml file in your editor and paste the following inside the <tomcat-users> </tomcat-users> tags. Restart Tomcat afterwards so the changes take effect.
<!-- user manager can access only manager section -->
<role rolename="manager-gui" />
<user username="manager" password="_SECRET_PASSWORD_" roles="manager-gui" />

<!-- user admin can access manager and admin section both -->
<role rolename="admin-gui" />
<user username="admin" password="_SECRET_PASSWORD_" roles="manager-gui,admin-gui" />

Stop Tomcat
# ./bin/shutdown.sh

Thursday, August 6, 2015

Linux Command To Find the System Configuration And Hardware Information

On Linux-based systems, most hardware information can be extracted from the /proc file system. For example, to display CPU and memory information, enter:

cat /proc/meminfo
cat /proc/cpuinfo

Linux cpu/hardware information

Use any one of the following commands:
# less /proc/cpuinfo

OR
# lscpu

Linux show free and used memory in the system

Use any one of the following commands:
# cat /proc/meminfo

OR
# free
# free -m
# free -mt
# free -gt

Linux find out the current running kernel version

Type the following command:
# cat /proc/version

OR use the following command:
# uname -mrs
# uname -a

Find out information about the Linux distribution and version
$ cat /etc/*release*

List all PCI devices

# lspci

List all USB devices

# lsusb

List all block devices (hard disks, cdrom, and others)

# lsblk

Display installed hard disk and size

# fdisk -l | grep '^Disk /dev/'

Dump all hardware information

Type the following command to see your motherboard, cpu, vendor, serial-numbers, RAM, disks, and other information directly from the system BIOS:

# dmidecode | less
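
The individual commands above can be combined into a quick summary script. This is just a convenience sketch (the sysinfo name is made up, and dmidecode is left out because it needs root):

```shell
#!/bin/bash
# sysinfo: one-screen hardware summary built from /proc and lsblk
sysinfo() {
    echo "== Kernel =="
    uname -r
    echo "== CPU =="
    grep -m1 'model name' /proc/cpuinfo
    echo "== Memory =="
    grep MemTotal /proc/meminfo
    echo "== Disks =="
    lsblk -d -o NAME,SIZE,TYPE 2>/dev/null
}

sysinfo
```
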
Understand TOP command

1st Row — top

This first line indicates in order:

current time (11:37:19)
uptime of the machine (up 1 day, 1:25)
users sessions logged in (3 users)
average load on the system (load average: 0.02, 0.12, 0.07) the 3 values refer to the last minute, five minutes and 15 minutes.
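
These three load averages are also exposed directly in /proc/loadavg, which is convenient for scripts (the fourth and fifth fields are running/total tasks and the last PID used):

```shell
# Read the 1, 5 and 15 minute load averages without running top
read one five fifteen rest < /proc/loadavg
echo "1min=$one 5min=$five 15min=$fifteen"
```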

2nd Row – tasks


The second row gives the following information:

Total number of processes (73 total)
Processes running (2 running)
Processes sleeping (71 sleeping)
Processes stopped (0 stopped)
Zombie processes waiting to be reaped by the parent process (0 zombie)

3rd Row – cpu

The third line indicates how the cpu is used. If you sum up all the percentages the total will be 100% of the cpu. Let’s see what these values indicate in order:

Percentage of the CPU used by user processes (0.3%us)
Percentage of the CPU used by system processes (0.0%sy)
Percentage of the CPU used by processes with a modified ("nice") priority (0.0%ni)
Percentage of the CPU idle (99.4%id)
Percentage of the CPU waiting for I/O operations (0.0%wa)
Percentage of the CPU serving hardware interrupts (0.3%hi — Hardware IRQ)
Percentage of the CPU serving software interrupts (0.0%si — Software Interrupts)
The amount of CPU "stolen" from this virtual machine by the hypervisor for other tasks, such as running another virtual machine; this will be 0 on desktops and servers without virtual machines (0.0%st — Steal Time)

4th and 5th Rows – memory usage

The fourth and fifth rows indicate the use of physical memory (RAM) and swap, respectively. In this order: total memory, memory in use, free memory, and buffers/cached.

Following Rows — Processes list

Finally, ordered by CPU usage by default, come the processes currently running. Let's see what information we can get from the different columns:

PID – the ID of the process (4522)
USER – The user that is the owner of the process (root)
PR – priority of the process (15)
NI – The “NICE” value of the process (0)
VIRT – virtual memory used by the process (132m)
RES – physical memory used by the process (14m)
SHR – shared memory of the process (3204)
S – indicates the status of the process: S=sleep R=running Z=zombie (S)
%CPU – This is the percentage of CPU used by this process (0.3)
%MEM – This is the percentage of RAM used by the process (0.7)
TIME+ –This is the total time of activity of this process (0:17.75)
COMMAND – And this is the name of the process (bb_monitor.pl)
Conclusions
Now that we have seen in detail all the information that the command “top” returns, it will be easier to understand the reason of excessive load and/or the slowing of the system.
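
As a side note, top also has a non-interactive batch mode, which is useful for logging or for capturing a snapshot from a script:

```shell
# -b = batch mode (no interactive screen), -n 1 = one iteration only
top -b -n 1 | head -15
```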

ZFS Filesystem: ARC and L2ARC

ZFS includes two exciting features that dramatically improve the performance of read operations. I’m talking about ARC and L2ARC. ARC stands for adaptive replacement cache. ARC is a very fast cache located in the server’s memory (RAM). The amount of ARC available in a server is usually all of the memory except for 1GB.

For example, our ZFS server with 12GB of RAM has 11GB dedicated to ARC, which means our ZFS server will be able to cache 11GB of the most accessed data. Any read requests for data in the cache can be served directly from the ARC memory cache instead of hitting the much slower hard drives. This creates a noticeable performance boost for data that is accessed frequently.

As a general rule, you want to install as much RAM into the server as you can to make the ARC as big as possible. At some point, adding more memory is just cost prohibitive. That is where the L2ARC becomes important. The L2ARC is the second level adaptive replacement cache. The L2ARC is often called “cache drives” in the ZFS systems.

These cache drives are physically MLC style SSD drives. These SSD drives are slower than system memory, but still much faster than hard drives. More importantly, the SSD drives are much cheaper than system memory. Most people compare the price of SSD drives with the price of hard drives, and this makes SSD drives seem expensive. Compared to system memory, MLC SSD drives are actually very inexpensive.

When cache drives are present in the ZFS pool, the cache drives will cache frequently accessed data that did not fit in ARC. When read requests come into the system, ZFS will attempt to serve those requests from the ARC. If the data is not in the ARC, ZFS will attempt to serve the requests from the L2ARC. Hard drives are only accessed when data does not exist in either the ARC or L2ARC. This means the hard drives receive far fewer requests, which is awesome given the fact that the hard drives are easily the slowest devices in the overall storage solution.

In our ZFS project, we added a pair of 160GB Intel X25-M MLC SSD drives for a total of 320GB of L2ARC. Between our ARC of 11GB and our L2ARC of 320GB, our ZFS solution can cache a total of 331GB of the most frequently accessed data! This hybrid solution offers considerably better performance for read requests because it reduces the number of accesses to the large, slow hard drives.
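
For reference, attaching cache drives to an existing pool is a single command. The pool name (tank) and the device names below are purely illustrative, so adjust them to your own system:

```shell
# Add two SSDs as L2ARC cache devices to the pool "tank"
zpool add tank cache c1t5d0 c1t6d0

# The devices now appear under a "cache" section in the status output
zpool status tank
```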

Things to Keep in Mind
There are a few things to remember. The cache drives are not mirrored. When you add cache drives, you cannot set them up as mirrored, but there is no need to, since the content is already stored redundantly on the hard drives. The cache drives are just a cheap alternative to RAM for caching frequently accessed content.

Another thing to remember is that you still need to use SLC SSD drives for the ZIL drives, even when you use MLC SSD drives for cache drives. The SLC SSD drives used for ZIL drives dramatically improve the performance of write actions. The MLC SSD drives used as cache drives are used to improve read performance.

If you decide to use MLC SSD drives for actual storage instead of using SATA or SAS hard drives, then you don’t need to use cache drives. Since all of the storage drives would already be ultra fast SSD drives, there would be no performance gained from also running cache drives. You would still need to run SLC SSD drives for ZIL drives, though, as that would reduce wear on the MLC SSD drives that were being used for data storage.
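
Adding a dedicated ZIL (log) device looks similar. Unlike cache devices, log devices can and usually should be mirrored, because a failed unmirrored log device can cost you the most recent synchronous writes. Device names are again illustrative:

```shell
# Add a mirrored pair of SLC SSDs as the ZFS intent log for "tank"
zpool add tank log mirror c1t7d0 c1t8d0
```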

If you plan to attach a lot of SSD drives, remember to use multiple SAS controllers. The SAS controller in the motherboard for our ZFS Build project is able to sustain 140,000 IOPS. If you use enough SSD drives, you could actually saturate the motherboard’s SAS controller. As a general rule of thumb, you may want to have one additional SAS controller for every 24 MLC style SSD drives.

Effective Caching in Virtualized Environments
At this point, you are probably wondering how effectively the two levels of caching will be able to cache the most frequently used data, especially when we are talking about 9TB of formatted RAID10 capacity. Will 11GB of ARC and 320GB L2ARC make a significant difference for overall performance? It will depend on what type of data is located on the storage array and how it is being accessed. If it contained 9TB of files that were all accessed in a completely random way, the caching would likely not be effective. However, we are planning to use the storage for virtual machine (VPS) file systems and this will cache very effectively for our intended purpose.

When you plan to deploy hundreds of virtual machines, the first step is to build a base template that all of the virtual machines will start from. If you were planning to host a lot of Linux/cPanel virtual machines, you would build the base template by installing CentOS and cPanel. When you get to the step where you would normally configure cPanel through the browser, you would shut off the virtual machine. At that point, you would have the base template ready. Each additional virtual machine would simply be chained off the base template. The virtualization technology will keep the changes specific to each virtual machine in its own child or differencing file.

When the virtualization solution is configured this way, the base template will be cached quite effectively in the ARC (main system memory). This means the main operating system files and cPanel files should deliver near RAM-disk performance levels. The L2ARC will be able to effectively cache the most frequently used content that is not shared by all of the virtual machines, such as the content of the files and folders in the most popular websites or MySQL databases. The least frequently accessed content will be pulled from the hard drives, but even that should show solid performance since it will be RAID10 across 20 drives and none of the frequently accessed read requests will need to burden the RAID10 volume since they are already served from ARC or L2ARC.
Built-in Command-line Monitoring Tools


top


The top tool provides a dynamic, real-time view of the processes in a running system. It can display
a variety of information, including a system summary and the tasks currently being managed by the
Linux kernel. It also has a limited ability to manipulate processes. Both its operation and the
information it displays are highly configurable, and any configuration details can be made to persist
across restarts.
By default, the processes shown are ordered by the percentage of CPU usage, giving an easy view
into the processes that are consuming the most resources.
For detailed information about using top, refer to its man page: man top.
ps

The ps tool takes a snapshot of a select group of active processes. By default this group is limited to
processes owned by the current user and associated with the same terminal.
It can provide more detailed information about processes than top, but is not dynamic.
For detailed information about using ps, refer to its man page: man ps.
vmstat

vmstat (Virtual Memory Statistics) outputs instantaneous reports about your system's processes,
memory, paging, block I/O, interrupts and CPU activity.
Although it is not dynamic like top, you can specify a sampling interval, which lets you observe
system activity in near-real time.
For detailed information about using vmstat, refer to its man page: man vmstat.
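
For example, to sample activity every two seconds, five times (the first report shows averages since boot):

```shell
vmstat 2 5
```
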
sar

sar (System Activity Reporter) collects and reports information about today's system activity so far.
The default output covers today's CPU utilization at ten minute intervals from the beginning of the
day:
12:00:01 AM  CPU  %user  %nice  %system  %iowait  %steal  %idle
12:10:01 AM  all   0.10   0.00     0.15     2.96    0.00  96.79
12:20:01 AM  all   0.09   0.00     0.13     3.16    0.00  96.61
12:30:01 AM  all   0.09   0.00     0.14     2.11    0.00  97.66
...
This tool is a useful alternative to attempting to create periodic reports on system activity through top
or similar tools.
For detailed information about using sar, refer to its man page: man sar.

Tuesday, August 4, 2015




Configure Puppet for CentOS 6.0

1> yum install ntp

2> download the Puppet repository

[root@bik ~]# rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm


3> go to /etc/yum.repos.d and edit the file puppetlabs.repo

[puppetlabs-devel]
name=Puppet Labs Devel El 6 - $basearch
baseurl=http://yum.puppetlabs.com/el/6/devel/$basearch
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
enabled=1    -------changed from 0 to 1
gpgcheck=1

4> yum install puppet-server

5> go to /etc/puppet/puppet.conf

vi /etc/puppet/puppet.conf

Add the following lines at the end of the [main] section:

#Set up DNS names that the server will respond to
dns_alt_names = puppet,puppet.mydomain.local

save it

6> create a new certificate

puppet master --verbose --no-daemonize


7> make a couple of directories to hold manifests for different environments
cd /etc/puppet

[root@bik ~]# cd /etc/puppet/
[root@bik puppet]# ls
auth.conf  environments  fileserver.conf  manifests  modules  puppet.conf
[root@bik puppet]# cd environments/
[root@bik environments]# ls
example_env
[root@bik environments]# ls -a example_env/
.  ..  manifests  modules  README.environment

create a different folder for each environment ---

[root@bik environments]# mkdir -p production/manifests
[root@bik environments]# mkdir -p production/modules
[root@bik environments]# mkdir -p development/manifests
[root@bik environments]# mkdir -p development/modules
[root@bik environments]# ls
development  example_env  production
[root@bik environments]# cd development/
[root@bik development]# ls
manifests  modules

Now we are going to tell Puppet that the directories above exist.
[root@bik development]# vi /etc/puppet/puppet.conf

#Tell puppet where the environment directories live
environmentpath = $confdir/environments
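
After both edits, the [main] section of /etc/puppet/puppet.conf should look roughly like this (mydomain.local is a placeholder for your own domain; the rest of the packaged defaults stay as they are):

```ini
[main]
    # Set up DNS names that the server will respond to
    dns_alt_names = puppet,puppet.mydomain.local
    # Tell puppet where the environment directories live
    environmentpath = $confdir/environments
```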

[root@bik development]# service puppetmaster start
Starting puppetmaster:                                     [  OK  ]

[root@bik development]# service puppetmaster  stop
Stopping puppetmaster:                                     [  OK  ]

8> [root@bik development]# yum install httpd httpd-devel mod_ssl ruby-devel rubygems gcc

[root@bik development]# chkconfig httpd on

9> install passenger
gem install rack passenger

10> launch the installer for passenger
[root@bik development]# passenger-install-apache2-module

Press Enter ---

install all dependencies:
yum install gcc-c++ openssl-devel zlib-devel ruby-devel

Rerun

[root@bik development]# passenger-install-apache2-module

-----------------------------------------------------------------

# You'll need to adjust the paths in the Passenger config depending on which OS
# you're using, as well as the installed version of Passenger.

# Debian/Ubuntu:
#LoadModule passenger_module /var/lib/gems/1.8/gems/passenger-4.0.x/ext/apache2/mod_passenger.so
#PassengerRoot /var/lib/gems/1.8/gems/passenger-4.0.x
#PassengerRuby /usr/bin/ruby1.8

# RHEL/CentOS:
#LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-4.0.x/ext/apache2/mod_passenger.so
#PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-4.0.x
#PassengerRuby /usr/bin/ruby

# And the passenger performance tuning settings:
# Set this to about 1.5 times the number of CPU cores in your master:
PassengerMaxPoolSize 12
# Recycle master processes after they service 1000 requests
PassengerMaxRequests 1000
# Stop processes if they sit idle for 10 minutes
PassengerPoolIdleTime 600

Listen 8140
<VirtualHost *:8140>
    # Make Apache hand off HTTP requests to Puppet earlier, at the cost of
    # interfering with mod_proxy, mod_rewrite, etc. See note below.
    PassengerHighPerformance On

    SSLEngine On

    # Only allow high security cryptography. Alter if needed for compatibility.
    SSLProtocol ALL -SSLv2 -SSLv3
    SSLCipherSuite EDH+CAMELLIA:EDH+aRSA:EECDH+aRSA+AESGCM:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:+CAMELLIA256:+AES256:+CAMELLIA128:+AES128:+SSLv3:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!DSS:!RC4:!SEED:!IDEA:!ECDSA:kEDH:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
    SSLHonorCipherOrder     on

    SSLCertificateFile      /etc/puppetlabs/puppet/ssl/certs/puppet-server.example.com.pem
    SSLCertificateKeyFile   /etc/puppetlabs/puppet/ssl/private_keys/puppet-server.example.pem
    SSLCertificateChainFile /etc/puppetlabs/puppet/ssl/ca/ca_crt.pem
    SSLCACertificateFile    /etc/puppetlabs/puppet/ssl/ca/ca_crt.pem
    SSLCARevocationFile     /etc/puppetlabs/puppet/ssl/ca/ca_crl.pem
    SSLCARevocationCheck     chain
    SSLVerifyClient         optional
    SSLVerifyDepth          1
    SSLOptions              +StdEnvVars +ExportCertData

    # Apache 2.4 introduces the SSLCARevocationCheck directive and sets it to none
    # which effectively disables CRL checking. If you are using Apache 2.4+ you must
    # specify 'SSLCARevocationCheck chain' to actually use the CRL.

    # These request headers are used to pass the client certificate
    # authentication information on to the Puppet master process
    RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
    RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
    RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

    DocumentRoot /usr/share/puppet/rack/puppetmasterd/public

    <Directory /usr/share/puppet/rack/puppetmasterd/>
      Options None
      AllowOverride None
      # Apply the right behavior depending on Apache version.
      <IfVersion < 2.4>
        Order allow,deny
        Allow from all
      </IfVersion>
      <IfVersion >= 2.4>
        Require all granted
      </IfVersion>
    </Directory>

    ErrorLog /var/log/httpd/puppet-server.example.com_ssl_error.log
    CustomLog /var/log/httpd/puppet-server.example.com_ssl_access.log combined
</VirtualHost>

----------------------------------------------------------------------------------------------------

vim /etc/httpd/conf.d/puppet.conf

Uncomment the following lines

# RHEL/CentOS:
LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-4.0.x/ext/apache2/mod_passenger.so
PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-4.0.x
PassengerRuby /usr/bin/ruby

install mlocate for the file index system
yum install mlocate

populate the database

use the command -- updatedb

find out the path of the file

[root@bik development]# locate mod_passenger.so
/usr/lib/ruby/gems/1.8/gems/passenger-5.0.15/buildout/apache2/mod_passenger.so
[root@bik development]#

copy the whole path and edit the config file

vim /etc/httpd/conf.d/puppet.conf
change to

# RHEL/CentOS:
LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-5.0.15/buildout/apache2/mod_passenger.so
PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-5.0.15
PassengerRuby /usr/bin/ruby

vim /etc/httpd/conf.d/puppet.conf



    SSLCertificateFile      /var/lib/puppet/ssl/certs/puppet.mydomain.local.pem
    SSLCertificateKeyFile   /var/lib/puppet/ssl/private_keys/puppet.mydomain.local.pem
    SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem
    SSLCACertificateFile    /var/lib/puppet/ssl/ca/ca_crt.pem
    SSLCARevocationFile     /var/lib/puppet/ssl/ca/ca_crl.pem
    #SSLCARevocationCheck   chain
    SSLVerifyClient         optional
    SSLVerifyDepth          1




vim  /etc/httpd/conf/httpd.conf
#ServerName puppet.mydomain.local:80
:wq




[root@bik ca]# mkdir -p /usr/share/puppet/rack/puppetmasterd
[root@bik ca]# mkdir -p /usr/share/puppet/rack/puppetmasterd/public
[root@bik ca]# mkdir -p /usr/share/puppet/rack/puppetmasterd/tmp
[root@bik ca]# cp /usr/share/puppet/ext/rack/config.ru  /usr/share/puppet/rack/puppetmasterd/
[root@bik ca]# chown puppet:puppet /usr/share/puppet/rack/puppetmasterd/config.ru
[root@bik ca]# service httpd restart

[root@bik ca]# netstat -anl | grep 8140
tcp        0      0 :::8140                     :::*                        LISTEN     
[root@bik ca]#
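
With the master listening on 8140, a typical sanity check is to run an agent against it from another node and sign its certificate on the master. The hostnames below are placeholders consistent with the rest of this post:

```shell
# On an agent node: request a certificate and do one test run
puppet agent --test --server puppet.mydomain.local

# Back on the master: sign the agent's certificate request
puppet cert sign agent1.mydomain.local
```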