Thursday, December 22, 2011

FTP on RHEL 2.1 Kernel

I had issues configuring FTP on a Linux 2.1 kernel. I found an article written by Vivek Gite and published at:
http://www.cyberciti.biz/faq/linux-how-do-i-configure-xinetd-service/

The extracts of this article are as below:

Howto: configure xinetd service under Linux or UNIX systems

Q. How do I configure xinetd under Fedora Core Linux?
A. xinetd, the eXtended InterNET Daemon, is an open-source daemon which runs on many Linux and Unix systems and manages Internet-based connectivity. It offers a more secure extension to or version of inetd, the Internet daemon.
xinetd performs the same function as inetd: it starts programs that provide Internet services. Instead of having such servers started at system initialization time and lying dormant until a connection request arrives, xinetd is the only daemon process started, and it listens on all service ports for the services listed in its configuration file. When a request comes in, xinetd starts the appropriate server. Because of the way it operates, xinetd (as well as inetd) is also referred to as a super-server.

Task: xinetd Configuration files location

Following are important configuration files for xinetd:
  • /etc/xinetd.conf - The global xinetd configuration file.
  • /etc/xinetd.d/ directory - The directory containing all service-specific files such as ftp

Task: Understanding default configuration file

You can view the default configuration file with the less or cat command:
# less /etc/xinetd.conf
OR
# cat /etc/xinetd.conf
Output:
# Simple configuration file for xinetd
#
# Some defaults, and include /etc/xinetd.d/
defaults
{
        instances               = 60
        log_type                = SYSLOG authpriv
        log_on_success          = HOST PID
        log_on_failure          = HOST
        cps                     = 25 30
}
includedir /etc/xinetd.d
Where,
  • instances = 60 : Determines the number of servers that can be simultaneously active for a service. So 60 is the maximum number of requests xinetd can handle at once.
  • log_type = SYSLOG authpriv: Determines where the service log output is sent. You can send it to SYSLOG at the specified facility (authpriv will send log to /var/log/secure file).
  • log_on_success = HOST PID: Force xinetd to log if the connection is successful. It will log HOST name and Process ID to /var/log/secure file.
  • log_on_failure = HOST: Force xinetd to log if there is a connection dropped or if the connection is not allowed to /var/log/secure file
  • cps = 25 30: Limits the rate of incoming connections. Takes two arguments. The first argument is the number of connections per second to handle. If the rate of incoming connections is higher than this, the service will be temporarily disabled. The second argument is the number of seconds to wait before re-enabling the service after it has been disabled. The default for this setting is 50 incoming connections and the interval is 10 seconds. This is good to avoid DoS attacks against your service.
  • includedir /etc/xinetd.d: Read other service-specific configuration files from this directory.

Task: How to create my own service called foo

Here is a sample config file for a service called foo, located at /etc/xinetd.d/foo
# vi /etc/xinetd.d/foo
And append the following text:
service foo
{
socket_type = stream
protocol = tcp
wait = no
user = root
server = /usr/sbin/foo
instances = 20
}
Where,
  • socket_type: Sets the network socket type to stream.
  • protocol: Sets the protocol type to TCP.
  • wait: Can be set to yes or no only. It defines whether the service is single-threaded (if set to yes) or multi-threaded (if set to no).
  • user: The user who will run the foo server.
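For the FTP case this post started with, an /etc/xinetd.d entry would look roughly like the sketch below. The server path is an assumption; check which FTP daemon your distribution actually ships (for example wu-ftpd's in.ftpd) and adjust accordingly:

```
service ftp
{
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = root
        server          = /usr/sbin/in.ftpd   # assumed path; verify on your system
        disable         = no
}
```

After saving the file, restart xinetd as shown in the "Stop or restart xinetd" task below so the new service is picked up.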

Task: Stop or restart xinetd

To restart xinetd service type the command:
# /etc/init.d/xinetd restart
To stop xinetd service type the command:
# /etc/init.d/xinetd stop
To start xinetd service type the command:
# /etc/init.d/xinetd start

Task: Verify that xinetd is running

Type the following command to verify whether the xinetd service is running or not:
# /etc/init.d/xinetd status
Output:
xinetd (pid 6059) is running...
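If the init script is not available, a plain process check works too. A minimal sketch (assuming pgrep is installed, which it is on any recent RHEL):

```shell
#!/bin/sh
# Check whether a daemon is running by exact process name.
is_running() {
    pgrep -x "$1" > /dev/null 2>&1
}

if is_running xinetd; then
    echo "xinetd is running"
else
    echo "xinetd is NOT running"
fi
```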

Tuesday, November 22, 2011

Unable to delete files with full disk quota on ZFS


Problem
While removing old build logs and sources from my ZFS file server to make room for fresh builds, I tried deleting the old bits:
bash# rm gmake-optimize-domestic.log.1

rm: cannot remove file `gmake-optimize-domestic.log.1': Disk quota exceeded
Solution
You will need to copy /dev/null over a file that is taking up space.
Example:
bash# ls -la
drwxr-xr-x   2 svbld    staff          8 Sep 26  2010 ./
drwxr-xr-x   3 svbld    staff          3 Sep 24  2010 ../
-rw-r--r--   1 svbld    staff       2707 Sep 26  2010 20100924.1.rep.1
-rw-r--r--   1 svbld    staff    1748129 Sep 24  2010 cvs-get.log.1
-rw-r--r--   1 svbld    staff        388 Sep 26  2010 email-mailx.log.1
-rw-r--r--   1 svbld    staff    3593895 Sep 26  2010 gmake-optimize-domestic.log.1
-rw-r--r--   1 svbld    staff      40709 Sep 24  2010 rt.log.1
-rw-r--r--   1 svbld    staff      43369 Sep 24  2010 sour.conf
bash# cp /dev/null gmake-optimize-domestic.log.1
bash# ls -la
total 10943
drwxr-xr-x   2 svbld    staff          8 Sep 26  2010 ./
drwxr-xr-x   3 svbld    staff          3 Sep 24  2010 ../
-rw-r--r--   1 svbld    staff       2707 Sep 26  2010 20100924.1.rep.1
-rw-r--r--   1 svbld    staff    1748129 Sep 24  2010 cvs-get.log.1
-rw-r--r--   1 svbld    staff        388 Sep 26  2010 email-mailx.log.1
-rw-r--r--   1 svbld    staff          0 Nov 22  2011 gmake-optimize-domestic.log.1
-rw-r--r--   1 svbld    staff      40709 Sep 24  2010 rt.log.1
-rw-r--r--   1 svbld    staff      43369 Sep 24  2010 sour.conf
bash# rm gmake-optimize-domestic.log.1
bash# ls -lah gmake-optimize-domestic.log.1
/bin/ls: gmake-optimize-domestic.log.1: No such file or directory

As you can see, the file is 3593895 bytes in size. I first made the file zero bytes, and then I was able to remove it. Once enough files have been freed in this manner, you should be able to use the rm command directly again.
What causes this?
This is due to how ZFS functions. ZFS is a Copy On Write filesystem, so a file deletion actually takes slightly more space on disk before a file is actually deleted, as it writes the metadata involved with the file deletion before it removes the allocation for the file being deleted. This is how ZFS is able to always be consistent on disk, even in the event of a crash.
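The /dev/null copy can also be done with plain shell redirection, which likewise allocates no new data blocks. A quick sketch using a throwaway temp file:

```shell
#!/bin/sh
# Truncate a file in place instead of copying /dev/null over it.
# ':' is the shell no-op; redirecting its (empty) output rewrites the
# file with zero bytes, just like `cp /dev/null file` does.
f=$(mktemp)
echo "some old log data" > "$f"
: > "$f"                       # same effect as: cp /dev/null "$f"
wc -c < "$f"                   # prints 0
rm "$f"
```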

Monday, November 21, 2011

Enabling Telnet and FTP services in RHEL and Solaris

This was the problem I used to face whenever a new build machine had to be configured.

The following steps were taken to configure FTP on Linux and Solaris boxes.
Linux is configured to run the Telnet and FTP server, but by default, these services are not enabled. To enable the telnet service, login to the server as the root user account and run the following commands:
# chkconfig telnet on
# service xinetd reload
Reloading configuration: [  OK  ]
Starting with the Red Hat Enterprise Linux 3.0 release (and in CentOS Enterprise Linux), the FTP server (wu-ftpd) is no longer available with xinetd. It has been replaced with vsftp and can be started from /etc/init.d/vsftpd as in the following:
# /etc/init.d/vsftpd start
Starting vsftpd for vsftpd:         [ OK ]
If you want the vsftpd service to start and stop when recycling (rebooting) the machine, you can create the following symbolic links:
# ln -s /etc/init.d/vsftpd /etc/rc3.d/S56vsftpd
# ln -s /etc/init.d/vsftpd /etc/rc4.d/S56vsftpd
# ln -s /etc/init.d/vsftpd /etc/rc5.d/S56vsftpd
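The three links above follow one pattern, so a loop can create them. This sketch takes a ROOT prefix so it can be dry-run outside /etc (the runlevels and the S56 priority come from the commands above; running against the real /etc requires root):

```shell
#!/bin/sh
# Create SysV start links for vsftpd in runlevels 3, 4 and 5.
# Set ROOT to a scratch directory to dry-run; leave empty for the real /etc.
ROOT="${ROOT:-}"
for rl in 3 4 5; do
    mkdir -p "$ROOT/etc/rc${rl}.d"
    ln -sf "$ROOT/etc/init.d/vsftpd" "$ROOT/etc/rc${rl}.d/S56vsftpd"
done
```

On Red Hat systems, chkconfig can manage the same links for you (e.g. `chkconfig vsftpd on`), provided the init script carries a chkconfig header.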


On Solaris Sparc:
# vi /etc/services - uncomment
-> ftp 21/tcp
# vi /etc/inetd.conf - uncomment
-> ftp stream tcp nowait root /usr/sbin/in.ftpd in.ftpd
# vi /etc/ftpd/ftpusers - comment out "root"
-> # root
# vi /etc/shells - list all the shells in use
-> /usr/bin/ksh
# vi /etc/default/login - comment out
-> CONSOLE=/dev/console
Check the ftp.allow and ftp.deny files as well.
# kill -HUP <inetd PID> - to restart inetd (which is started as /usr/sbin/inetd -s)

Setting passwd on RHEL

I was trying to reset the password on the newly installed build machine.

I logged in as root and tried changing the password.

When I ran passwd <user-name>, an error popped up saying:
"passwd: Authentication token manipulation error"

After a lot of googling, I came to know that the passwd command was trying to update the NIS password for the user, who never was a NIS user.

The workaround for this issue:
Comment out the passwd and group lines in the /etc/nsswitch.conf file.
For the changes to take effect, run the pwconv command.

It's better to make a copy of the original file before making any changes to system files.

Now try changing the password; it should go through fine.
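For illustration, the relevant edit looks roughly like this. The exact lines vary per install, so treat this as a sketch; commonly it is enough to drop nis from the lookup order rather than commenting out the whole line:

```
# /etc/nsswitch.conf - before
passwd:     files nis
group:      files nis

# /etc/nsswitch.conf - after (NIS no longer consulted for these maps)
passwd:     files
group:      files
```

Then run pwconv and retry the passwd command.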



Wednesday, August 31, 2011

Setting up the first SVN build

For almost the first two years of my RE career I was working on builds running on CVS and shell scripts. Now was the time and opportunity to move to a relatively new technology and SCM. I was entrusted with the responsibility of setting up the build environment for continuous builds, using SVN and Maven as the tools.

This was a responsibility I had been eagerly awaiting, as until this time I had never done an end-to-end setup of a build environment. I grabbed the opportunity with both hands.

As it looked in the beginning, my primary tasks were to:
  • Create the branch.
  • Prepare the trunk for SNAPSHOT builds.
  • From the trunk, create a branch for the sustaining team to check in fixes.
I will detail each of the above one by one.

Creating the branch.
I was given a tag from which I had to create the branch that would be used for checking in fixes for the sustaining team. This was my first stint with SVN and branch handling.

Following is the sequence of steps I executed to create the branch.
To check out the x.y tag and export it to the internal SVN repo, I used "svn export":
1. svn mkdir http://aaa.bbb.ccc.ddd/svn/product/sustaining/trunk/<sustaining_dir>
2. svn checkout http://aaa.bbb.ccc.ddd/svn/product/sustaining/trunk/<sustaining_dir>
3. cd <sustaining_dir>
4. svn export https://svn.net/svn/<pde-product>/tags/<pde-product> .
5. svn add .
6. svn commit .

It looked simple, but it took almost a day's time to get going with the branch, not because of the time involved in committing the files but because of my inexperience with branch creation using SVN. The main issue I faced during the branch creation was an error message I used to get when I tried to commit, saying "the directory is already part of the svn repo". I tried all the tricks to get the files committed into the repo, but with no positive outcome. Finally, using the force argument (svn add --force) got the job done for me.
Prepare the trunk for SNAPSHOT build



Saturday, August 27, 2011

My responsibilities/activities as release engineer


The responsibilities/activities I handled as release engineer can be broadly divided into three phases as follows:
Phase 1: Pre-Build. This stage pertains to the activities expected to be executed before starting the daily nightly builds.
Phase 2: Nightly Builds. This phase runs from just before the code-freeze day until the build promotion day.
Phase 3: Post-promotion. This phase starts after the promotion of the nightly builds to QA and runs until the final release of the patch to the outside world.


Table of contents
1 Phase 1: Pre Nightly Build Activities
1.1 Responsibilities of new products
1.2 Responsibilities of already existing products

2 Phase 2: Nightly Builds
2.1 Nightly build activities
2.2 Nightly Build Tests
2.3 CVS, SVN and Other Source code management
2.4 SVN Branches
2.5 Source Code Control System(SCCS)
2.6 Shared Components

3 Phase 3: Post Build Activities

Phase 1: Pre Nightly Build Activities

The activities in this phase can be broadly divided into two, depending on the state of the product, i.e. whether it is a new product coming to sustaining or a product already supported by sustaining.


Responsibilities of new products

For any new product to be supported by sustaining, it is RE's responsibility to set up the build machine and build environment for the new product.
Following are the tasks RE executes when there is a new product in sustaining's kitty.
  • Create the sustaining source code branch. RE works with the PDE team to get the PDE branch inside the firewall and make it available to the sustaining engineers for checking in their bug fixes.
  • Identify all platforms on which the new product needs to be built, and identify the systems for setting up the new build environment.
  • Work/interact with lab teams across geographies to get the necessary machines for the fresh builds.
  • Set up/install the build-dependent software, such as CVS, SVN, BUILD_TOOLS, various compilation tools, and Microsoft tools for Windows platform builds, on the new build machines.
  • Set up HUDSON and the necessary supporting tools for continuous builds of the new variants for GF 3.x.



Responsibilities of already existing products

Before starting the nightly build activities, it is RE's responsibility to execute the activities below.
  1. Open the SCM branch for check-ins and notify the team regarding the branch status.
  2. Verify there is enough space on the nightly workspace as well as on the build machines.
  3. Update the configuration files with the correct build number, patch number, and RPM build version.
  4. Update the patch numbers for every fresh patch. The patch is the final deliverable from sustaining, hence care needs to be taken while updating the patch numbers.
  5. After the code-freeze date, lock the branch, update the patch READMEs with the bug numbers and descriptions, and check in the READMEs.
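Step 2 of this checklist is easy to script. A minimal sketch; the workspace path and the 10 GB threshold are assumptions to adapt per machine:

```shell
#!/bin/sh
# Warn if free space in the nightly workspace drops below a threshold.
WORKSPACE="${WORKSPACE:-/tmp}"          # assumed path; point at the real workspace
MIN_KB=$((10 * 1024 * 1024))            # 10 GB in KB, an assumed threshold

# POSIX df: column 4 of the second line is available space in KB.
free_kb=$(df -Pk "$WORKSPACE" | awk 'NR==2 {print $4}')
if [ "$free_kb" -lt "$MIN_KB" ]; then
    echo "WARNING: only ${free_kb} KB free on $WORKSPACE"
else
    echo "OK: ${free_kb} KB free on $WORKSPACE"
fi
```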

Phase 2: Nightly Builds



Nightly build activities

This phase basically involves the following RE responsibilities:
  • Scheduling the nightly builds.
  • Notifying the team of any build issues and resolving them.
  • Making sure that nightly builds on all the supported platforms go through successfully without any issues.

The table below lists each product against the currently supported product tails and the platforms on which it is built.

1. Application Server

App Server 8.1_sust
  • Solaris Sparc
  • Solaris i586
  • RHE Linux
  • Windows
The distros for the 8.1_sust builds are divided into Enterprise Edition (EE) deliverables and non-enterprise Platform Edition (PE) bits. For the Enterprise Edition, RE has to build file-based patches as well as package-based patches. For the non-enterprise edition, RE has to generate only the file-based patches.
For the 8.1_02 builds, the bits are delivered to the outside world in the form of:
  • File-based patches.
  • Package-based patches.
  • JES5-based Windows MSI patch.
Creating the JES5 patch for the Windows build is a very resource- and time-intensive process and takes almost a day.
Apart from the file- and package-based patches, RE is expected to generate and make the bits available in the form of bundles for QA.

App Server 8.2_sust
  • Solaris Sparc
  • Solaris i586
  • RHE Linux
  • Windows
The distros for the 8.2_sust builds are divided into Enterprise Edition (EE) deliverables and non-enterprise Platform Edition (PE) bits. For the Enterprise Edition, RE has to build file-based patches as well as package-based patches. For the non-enterprise edition, RE has to generate only the file-based patches.
For the 8.2_sust builds, the bits are delivered to the outside world in the form of:
  • File-based patches.
  • Package-based patches.
  • JES5-based Windows MSI patch.
Creating the JES5 patch for the Windows build is a very resource- and time-intensive process and takes almost a day.
Apart from the file- and package-based patches, RE is expected to generate and make the bits available in the form of bundles for QA.

Sun Glassfish App Server V2.1.1
  • Solaris Sparc
  • Solaris i586
  • RHE Linux
  • Windows
For the Glassfish V2.1.1 builds, the bits are delivered to the outside world in the form of:
  • File-based patches.
  • Package-based patches.
For Glassfish V2.1.1 also, apart from the file- and package-based patches, RE is expected to generate and make the bits available in the form of bundles for QA.

Oracle Glassfish App Server V3.0.1
  • Solaris Sparc
  • Solaris i586
  • RHE Linux
  • Windows
For Glassfish V3.0.1, the bits are delivered in the form of closed-network patches.

Oracle Glassfish App Server V3.1.0_1
  • Solaris Sparc
  • Solaris i586
  • RHE Linux
  • Windows
For Glassfish V3.1.0_1, the bits are delivered in the form of closed-network patches.

2. Web Server

Oracle Web Server 6.1
  • AIX
  • Solaris Sparc
  • Solaris i586
  • RHE Linux
  • Windows
For Oracle Web Server 6.1, the final deliverables are in the form of:
  • File-based patches.
  • JES-based patches.
  • JES5 patch for the Windows build.
Apart from the file- and package-based patches, RE is expected to generate and make the bits available in the form of bundles for QA.

Oracle Web Server 7.x
  • AIX
  • Solaris Sparc
  • Solaris i586
  • RHE Linux
  • Windows
For Oracle Web Server 7.x, the final deliverables are in the form of:
  • File-based patches.
  • JES-based patches.
Apart from the file- and package-based patches, RE is expected to generate and make the bits available in the form of bundles for QA.

3. Proxy Server

Oracle Proxy Server 4.0.1x
  • AIX
  • Solaris Sparc
  • Solaris i586
  • RHE Linux
  • Windows
For Oracle Proxy Server 4.0.x, the final deliverables are in the form of:
  • File-based patches.
  • JES-based patches.
Apart from the file- and package-based patches, RE is expected to generate and make the bits available in the form of bundles for QA.





Nightly Build Tests

As part of the nightly builds, RE is expected to execute certain tests against the fresh builds. It is RE's responsibility to see that the fresh builds pass these tests with the expected results. These tests differ for every product. Below are the tests RE runs against each product tail.

Application Server 8.1 and 8.2
  • PE Quick Looks DEBUG (against DAS)
  • EE Quick Looks DEBUG (against remote instance)
  • PE Quick Looks OPTIMIZED (against DAS)
  • EE Quick Looks OPTIMIZED (against remote instance)
  • SQE Smoke Tests OPTIMIZED (against DAS)
  • ANT-CORE Tests OPTIMIZED
  • OPTIMIZED CTS Smoke Tests

Glassfish V2.1.1
  • Quick Look Tests on the PE Installer
  • Smoke Tests on the PE Installer
  • Quick Looks on GlassFish EE (Clustering) server image
  • Quick Look Tests on the EE Installer
  • Smoke Tests on the EE Installer

Glassfish V3.0.1
  • Quick Look Tests on the Installer
  • Smoke Tests on the Installer

Oracle Glassfish Communication Server V1.5 and V2.0
  • GlassFish Quick Looks on server image
  • Smoke Tests on server image
  • Installer CTS Smoke Tests
  • SailFin QL (cluster) Tests
  • Functional Tests
  • SailFin Smoke Tests
  • GlassFish Clustering Quick Looks on server image

Web Server 6.1 and 7.0
  • GAT Tests




CVS, SVN and Other Source code management

RE maintains the following CVS branches:
  • SJSAS82_FCS-SUSTAINING_BRANCH
  • SJSAS81_FCS-SUSTAINING_BRANCH
  • WebServer70_u11Rtm_Branch
  • S1WS61RTM_SPI_AS81_Branch
  • SGES21_FCS-SUSTAINING_SECURITY_BRANCH
  • SGCS15_FCS_SUSTAINING_BRANCH
  • SGCS15_FCS-SUSTAINING_PRIVATE_BRANCH
  • SGES211_FCS-SUSTAINING_PRIVATE_BRANCH
  • SGCS20_FCS-SUSTAINING_PRIVATE_BRANCH
  • Proxy40RTM_Branch

SVN Branches:
  • http://mercurial.us.oracle.com/svn/glassfish/sustaining/trunk/3.0.1-1
  • http://mercurial.us.oracle.com/svn/glassfish/sustaining/trunk/3.1.0-1


Source Code Control System (SCCS)

Other than maintaining the CVS and SVN source code branches, RE is also responsible for maintaining the build scripts, which are crucial for any build. These are maintained in the Source Code Control System (SCCS).
RE maintains the build scripts and build configuration files of the products listed below:
  • AS 8.1
  • AS 8.2
  • AS 2.1.1
  • Sailfin Communication Server v1.5
  • Sailfin Communication Server v2.0
  • WebServer 6.1
  • WebServer 7.0
  • Proxy Server 4.x



Shared Components

The shared components listed below are built and staged by me.

ORB
  • Maintain the build infrastructure, including the machines and the build environment.
  • Build and stage the ORB.

NSS
  • NSS is delivered to the Application Server in different formats: for GF v2.1.1 it is in the form of bundled jars, whereas for the other application servers the bits are staged as directories. RE's responsibility is packaging the NSS bits as per the product's requirements.
  • RE's responsibility in regard to NSS pertains to the following tails of application servers:
    • AS 8.1
    • AS 8.2
    • AS 2.1.1
    • Sailfin Communication Server v1.5
    • Sailfin Communication Server v2.0
  • The NSS bits need to be packaged per platform, and special care needs to be taken to provide 64-bit support.

JDK
  • Similar to NSS, the JDK is delivered to the Application Server in different formats. RE's responsibility is packaging the JDK bits as per the product's requirements.
  • RE's responsibility in regard to the JDK pertains to the following tails of application servers:
    • AS 8.1
    • AS 8.2
    • AS 2.1.1
    • Sailfin Communication Server v1.5
    • Sailfin Communication Server v2.0

MQ
  • MQ bits are staged by RE at the external staging location.
  • The bits are staged for the three tails of application servers listed below:
    • AS 8.1
    • AS 8.2
    • AS 2.1.1

Load Balancer Plugin
  • RE's responsibility is to maintain the CVS branch of the LB, the build infrastructure, machines, and build environment, and to stage the load balancer.
  • The LB is built on 4 different platforms for the following application servers:
    • AS 8.1
    • AS 8.2
    • AS 2.1.1
  • The load balancer is built on the following platforms:
    • Solaris Sparc
    • Solaris i586
    • Windows
    • Linux

pwc1.2
  • This shared component is built for the Web Server.
  • RE's responsibility is to maintain the build infrastructure and build environment for building PWC. It also includes staging the component at the external staging location (/share/build/component).



Phase 3: Post Build Activities

After a couple of stable nightly builds, on or before the promotion date, RE is expected to execute the following activities.
1. Do a sanity check on the latest nightly builds.
2. Verify there is enough space on the disk to copy the promoted builds.
3. Promote the builds to QA.
4. Verify that all the bits are promoted.
5. Once QA gives its approval, push the patches to the PST pipeline.
6. Verify the patch READMEs before releasing the patch.
7. Finally, RE has to clear the requester hold off the patches that were submitted to the PST pipeline.

                                      Wednesday, January 12, 2011

                                      Release Engineer- By Accident

I completed my masters in Networks and Internet Technologies, dreaming of being a N/W technologist.