Friday, April 30, 2010

Password Protect your zip, tar, gz, bz2 archives

Yes! It is possible.


Today someone asked me whether it is possible to password protect a tar archive. He is not so close to me, and honestly, he tries to protect everything, man! Anyway, it is a fair question.

It is possible.

To create a password-protected archive:

tar czf - TheFolder/ | gpg -c -o TheFolder.stgz

and to decrypt and decompress the password-protected archive:

gpg -d TheFolder.stgz | tar xzf -
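Since the title promises zip and bz2 as well, here is the same idea for those; a small sketch, where the file names and the bzip2 flag swap are my own choices (zip has its own built-in, though weaker, password option):

tar cjf - TheFolder/ | gpg -c -o TheFolder.tbz2.gpg   # bzip2 flavour, same approach
gpg -d TheFolder.tbz2.gpg | tar xjf -

zip -re TheFolder.zip TheFolder/                      # -r recurse, -e prompt for a password
unzip TheFolder.zip                                   # prompts for the same password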

BTW, did you also forget the power of the Linux Z commands? They come in handy sometimes, so you may have a look at them once again: zcat, zmore, zless, zgrep, zdiff.


Cheers!

D E B U

Screen Utility


Working on a remote server, and your coworker wants you to share and allow control of the shell so that it can be a collaborative effort? The UNIX way to do it is GNU Screen.

GNU Screen is a full-screen window manager that multiplexes a physical terminal between several processes, typically interactive shells.

While you google the details, let me quickly put down the working steps.

Install and prepare (multiuser mode needs the screen binary to be setuid root):

yum install screen
chmod u+s /usr/bin/screen
chmod 755 /var/run/screen

Tell your friend to log in to the same server you both want to work on.
Now, once he is in:

At your end,

# open a named screen session ("shared" is just an example session name)
$ screen -S shared

Press Ctrl+a, then type :multiuser on and press Enter.

Press Ctrl+a, then type :acladd debu and press Enter.

At the other end, your friend has to access it like this:

log in to the server
and
screen -x <your_login>/shared

Note: I am assuming your friend logs in with the id "debu" (the one granted access via :acladd above); <your_login> is your own user name, since the session is owned by you.
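By the way, the same setup can also be scripted from outside the session with screen's -X option, which sends commands to a running session. A quick sketch, reusing the example names "shared" and "debu" from above:

screen -dmS shared                  # start a named session, detached
screen -S shared -X multiuser on    # enable multiuser mode from outside the session
screen -S shared -X acladd debu     # grant access to the login "debu"
screen -r shared                    # now attach to it yourself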

And now you see what he is doing, and you can also enter commands if you want. Virtually, you guys are working on the same screen.

Complaints:

1/ I was working in a shell, accidentally hit the close button of my terminal, and everything was gone.
2/ I have started so many things and want to go home leaving it all as it is, then just log in later and resume work (especially kernel compilations and the like).

Solution:
1/ Start your work inside screen.
2/ Get the screen id:

screen -ls

3/ Detach (mind you, not terminate) with Ctrl+a d and go home.
4/ Reconnect as the same user:

screen -r shared      # or screen -x shared if it is still attached elsewhere


Next, I wanted to share screen's multi-window capability, which I just loved. It reminded me of Linux's virtual console switching on the physical host, with the Ctrl+Alt+Fn combinations.

The following commands/keys let you navigate through your screen environment (a minimal example .screenrc follows the list). Note that unless modified by your .screenrc, by default every screen shortcut is preceded by Ctrl+a, and these shortcuts are case-sensitive.

  • Ctrl+a c = create a new window
  • Ctrl+a n = switch to the next window
  • Ctrl+a p = switch to the previous window
  • Ctrl+a S = split the terminal horizontally
  • Ctrl+a " = select a window from a list
  • Ctrl+a Backspace = switch to the previously available window
  • Ctrl+a Ctrl+a = switch back to the last window you were on
  • Ctrl+a A = rename the current window
  • Ctrl+a K = kill the current window
  • Ctrl+a [ = enter scrollback/copy mode, then use the arrow keys to scroll up and down
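Since all of this can be remapped or tuned via ~/.screenrc, here is a minimal example file; every value in it is just my illustrative choice, not a recommendation:

# ~/.screenrc (illustrative)
startup_message off                                 # skip the copyright splash
defscrollback 10000                                 # keep a bigger scrollback buffer
hardstatus alwayslastline "%H | %-w%n %t%+w | %c"   # simple status line: host, window list, clock
screen -t shell 0                                   # open a named window at startup
screen -t logs 1 tail -f /var/log/messages          # and another one tailing a log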
Do see the man page for more: $ man screen
:-)



~Thanks - Debu

Wednesday, April 21, 2010

Wget Vs. libcurl

People still use wget as a quick command, sometimes from the console and sometimes even embedded in their code. Please take a few precautions while using wget.

1/ wget's default behaviour is to retry (20 tries by default), which can be killing at times. So use explicit retry/wait options if you suspect the download might fail.

Ex.

wget -t 10 -w 5 --waitretry=10 <URL>

2/ Try using libcurl instead of wget, if possible.

curl performs single-shot transfers, whereas wget is built around recursive retrieval and retries. On an Apache web server that maintains persistent (keep-alive) connections, a wget left with no retry limit or timeout can prove fatal at times: it effectively misuses a valuable HTTP connection slot, which lingers forever, or until Apache gets restarted.

Wget still does its HTTP operations using HTTP/1.0. In contrast, curl is basically not just a command but the front end to libcurl, a cross-platform library with a stable API that supports multiple protocols.
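For quick command-line use, curl also lets you cap retries and total time explicitly. A sketch; the URL, output file and limit values are just placeholders:

curl --retry 3 --retry-delay 5 --connect-timeout 10 --max-time 60 -o file.out http://example.com/file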


Cheers! Happy piping !

Tuesday, April 20, 2010

Apache split logs utility

Per-virtual-host logging calls for 'n' log files, which is OK for a small number of virtual hosts; but if the number of hosts is very large, it becomes complicated to manage and can also run you out of file descriptors.

I decided to implement Apache's split-logfile utility for one of my web server clusters, and just wanted to share it with you.

Why:

We have 14 verticals (virtual hosts), which means 14*2 = 28 log files active at one time for one httpd process, and sometimes I run out of available fds even after increasing the limit to a reasonably high value.

# ls -l /proc/16958/fd/

lr-x------ 1 root root 64 Feb 4 16:28 0 -> /dev/null
l-wx------ 1 root root 64 Feb 4 16:28 1 -> /dev/null
l-wx------ 1 root root 64 Feb 4 16:28 10 -> /usr/local/apache2/logs/example1.domain.com-access_log
l-wx------ 1 root root 64 Feb 4 16:28 11 -> /usr/local/apache2/logs/example2.domain.com-access_log
l-wx------ 1 root root 64 Feb 4 16:28 12 -> /usr/local/apache2/logs/apps.domain.com-access_log
l-wx------ 1 root root 64 Feb 4 16:28 13 -> /usr/local/apache2/logs/data.domain.com-access_log
l-wx------ 1 root root 64 Feb 4 16:28 14 -> /usr/local/apache2/logs/example3.domain.com-access_log
l-wx------ 1 root root 64 Feb 4 16:28 15 -> /usr/local/apache2/logs/adsense.domain.com-access_log
l-wx------ 1 root root 64 Feb 4 16:28 16 -> /usr/local/apache2/logs/example4.domain.com-access_log
l-wx------ 1 root root 64 Feb 4 16:28 17 -> /usr/local/apache2/logs/music.domain.com-access_log
l-wx------ 1 root root 64 Feb 4 16:28 18 -> /usr/local/apache2/logs/example5.domain.com-access_log
l-wx------ 1 root root 64 Feb 4 16:28 19 -> /usr/local/apache2/logs/example6.domain.com-access_log
l-wx------ 1 root root 64 Feb 4 16:28 2 -> /usr/local/apache2/logs/error.log
l-wx------ 1 root root 64 Feb 4 16:28 20 -> /usr/local/apache2/logs/example7.domain.com-access_log
l-wx------ 1 root root 64 Feb 4 16:28 21 -> /usr/local/apache2/logs/marketing.domain.com-access_log
l-wx------ 1 root root 64 Feb 4 16:28 22 -> /usr/local/apache2/logs/example8.domain.com-access_log
l-wx------ 1 root root 64 Feb 4 16:28 23 -> /usr/local/apache2/logs/opendocs.domain.com-access_log
l-wx------ 1 root root 64 Feb 4 16:28 24 -> /usr/local/apache2/logs/openbooks.domain.com-access_log
l-wx------ 1 root root 64 Feb 4 16:28 25 -> /usr/local/apache2/logs/example9.domain.com-access.log

# ls -l /proc/23060/fd/ | wc -l
48

Now if I have 500 similar httpd processes, I need a minimum of 500*48 = 24,000 fds for Apache alone. On top of that, the rest of the system and all the other application processes on the box need fds of their own, and I never know exactly how many, since the environment is dynamic.
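For reference, this is roughly how I check how close we are to those limits (the PID is just whichever httpd process you pick, like the one above):

ulimit -n                      # per-process soft limit on open files
cat /proc/sys/fs/file-max      # system-wide ceiling
cat /proc/sys/fs/file-nr       # allocated, unused and maximum file handles right now
ls /proc/16958/fd | wc -l      # fds currently held by one httpd process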

# Edit httpd.conf

LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" virt_log

Note: the "%v" (canonical name of the virtual host serving the request) at the start of the format does the trick.

Add:

ErrorLog logs/virt_error_log
CustomLog logs/virt_access_log virt_log

And during the night hours, split the logs with the Apache-provided Perl script:

perl split-logfile < logs/virt_access_log

It will split the combined access log into one file per virtual host, named after the vhost.
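To actually run it during the night hours, a cron entry along these lines works. The 02:30 schedule and the path to split-logfile are my assumptions (the script ships in Apache's support/ directory; adjust to wherever your copy lives), and split-logfile writes the per-vhost files into the current directory, hence the cd:

30 2 * * * cd /usr/local/apache2/logs && perl /usr/local/apache2/bin/split-logfile < virt_access_log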

Testing:

I had two vhosts viz. (1) marketing.domain.com and (2) adsense.domain.com

And after running the command I got two log files:

marketing.domain.com.log
adsense.domain.com.log

Cheers!!

Happy Splitting........

Monday, April 19, 2010

Total Traffic on my Apache WS


How much traffic was served by my Apache web server today?

Once I had a situation where we had to quickly figure out how much traffic was served by our web cluster; we had a bandwidth issue in one of our shared hosted Tier 1/2 DCs. I know mod_status/server-status will show me the figures, BUT it shows a total from the time of the last server start until now.

But I wanted today's figure.

I could not find a quicker way, hence I ran the following.

#cd /usr/local/apache2/logs

# ls -lrt /usr/local/apache2/logs/ | grep access | awk '{ print $9 }' > /tmp/url.txt

# for i in `cat /tmp/url.txt`; do cat $i | awk '{ sum = sum + $10 } END { print sum }'; done | awk '{ sum = sum + $1 } END { print sum/1073741824, "GB" }'

---------

$10 = the "bytes transferred" field of the Apache access log

1024*1024*1024 = 1073741824
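Since the point was today's figure specifically, a variant that filters on today's date can help. A sketch, assuming the common/combined log format (timestamps like [19/Apr/2010:...], bytes in field 10) and that all access logs sit in the same directory:

TODAY=$(date +%d/%b/%Y)
grep -h "\[$TODAY" /usr/local/apache2/logs/*access* | awk '{ sum += $10 } END { printf "%.2f GB\n", sum/1073741824 }'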

Do get back to me if you do it some other way!

Cheers!!

Saturday, April 17, 2010

Overcoming Backlog

Being in Operations, it is quite common to face a piled-up backlog on a regular basis, but at the same time mitigating it is even more fun.

I just wanted to share how I handle it. Here I go...

Carrying a notebook always helps. Start each day by marking a new page with the date and creating a to-do list of what needs to be done today, i.e. the current work that needs attention. Start with the biggest and most important task and prioritize all tasks on both lists. Work through each item one at a time and mark them off your list as you do them. Fit backlog work around the work that has to be done today, while ensuring that everything that needs to be done today actually gets done. This way you can steadily reduce your backlog pile.

It is also very important to flip to yesterday's notes and copy every unfinished or recurring task, with a blank checkbox next to it, onto the new empty page (today). As the day progresses and you go to meetings, do your work, or get interrupted to do something, jot it down on today's page and put an empty checkbox next to it. If you get it done during the day, awesome. Mark it complete.

Overcoming backlog requires focus, and a bit of better time management.

Wednesday, April 14, 2010

How often should I reboot Linux servers?

This question sometimes raises a controversy: do I need to reboot my Linux server on a regular basis or not? I do agree that Linux servers almost never need to be rebooted unless you absolutely need to change the running kernel version. Linux memory handling is very good and Linux works in a modular fashion. Most updates do not even require a reboot, but kernel updates do (you can't really replace the running kernel without rebooting!).


As far as my understanding goes, one might want to reboot a Linux server in one of the following scenarios:

1/ Upgrading or changing the running kernel version (a quick check for this follows the list).
2/ A critical system library upgrade not behaving as expected.
3/ As part of a DR plan, to make sure the server and all services come back as expected.
4/ Suspected messed-up settings files; untested configuration changes add to the "risk of downtime" when you reboot infrequently.
5/ Of course, when your system is in a hung state. It may be an OOM issue or some new unstable library causing it to behave oddly.
6/ Physical movement of the box.
7/ Tier 1/2 DC maintenance where there is not enough redundant power in case of emergency.
8/ Critical firmware upgrade or hardware maintenance.
9/ Linux may handle its memory OK, but individual applications may not; their heaps can become fragmented if they run for a long time.
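For scenario 1, a quick way to see whether a reboot is pending on a RHEL/CentOS-style box (just my usual habit, not the only way) is to compare the running kernel with the newest installed one:

uname -r                          # kernel currently running
rpm -q kernel --last | head -1    # most recently installed kernel package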

Personally I prefer to reboot on a monthly cycle during a maintenance window to make sure the server and all services come back as expected. This way I can be reasonably certain that if I have to do an out-of-schedule reboot (i.e. a critical kernel update), the system will come back up properly.


It's not a bad idea to reboot if it has been that long, so you can run a disk check (fsck) on the root partition and be sure of your data integrity. Regular reboots also keep the periodic fsck runs short, which reduces the time it takes to get back up and running the next time one is forced on you. So it's a good idea to anticipate this beforehand and plan for it.
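On ext2/ext3 this is easy to plan for; a small sketch (the device name is an assumption, point it at your root partition):

tune2fs -l /dev/sda1 | grep -iE 'mount count|check'   # how close the fs is to its periodic fsck
touch /forcefsck                                      # force a full fsck on the next reboot (classic sysvinit convention)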


We have also discovered that config changes sometimes get missed on one server or another (such as new multipath configuration or iptables rules), and this does not get noticed until such a reboot is performed. This actually adds to the "risk of downtime" when we never reboot our servers at all. Imagine how hardware and software failures will manifest themselves in such scenarios; only when we reboot can we find them, and a planned reboot creates scope for a proactive sanity check instead of living in fear of an unplanned outage.

The proper method of rebooting a Linux system ensures data integrity by terminating processes and synchronizing the file systems. So a better and safer plan would be to get an approved downtime window and reboot our servers periodically; or, if this is not a desirable option, we can set our servers up in clusters so that reboots can be done when necessary without any downtime to our applications.

Although everyone knows how to reboot a Linux server, still a few words on that:
Calling reboot or halt directly does not necessarily shut a system down safely (reboot is typically just a symlink to halt, and halt simply stops the platform). shutdown -r now is how I usually reboot a machine of mine: it warns logged-in users, terminates processes cleanly and syncs the file systems before restarting. I like to be on the safe side.

Cheers! Happy Uptime!! Enjoy Runtime!!!

Friday, April 9, 2010

Linux Server Load


Server load - just a number? Regular monitoring of servers has always been one of the top-priority tasks of a system administrator. Almost all of us use commands such as uptime, top, w, procinfo, etc., and each of them comes with a line denoting the load average. In one line: the load average is the sum of the run-queue length and the number of jobs currently running on the CPUs.

The Linux load average is a set of three real numbers, separated by commas, defined as the number of processes waiting in the run queue (to compete for CPU time) plus the number currently executing, averaged over the last 1, 5 and 15 minutes respectively.

They tell you how busy, or how CPU-bound, the Linux system might be. As long as CPU utilization does not consistently exceed about 70%, the CPU can still absorb CPU-bound work even on a busy system. A few properties worth keeping in mind:

  • The load average is meant to give some idea of how much work was demanded of the system over the recent past 1 minute, the past 5 minutes and the more distant past 15 minutes.
  • It is about total queue length, not utilization.
  • The figures are point samples of three different time series, and they are exponentially-damped (smoothed) moving averages, so sudden changes are damped and do not contribute much to the longer-term picture.
  • The three figures are arguably printed in the wrong order (1, 5, 15) for reading trend information at a glance.

Knowing the value of the server load is not very important in itself, though; knowing how to interpret the value is what counts. Let's understand that the load averages differ from CPU percentage in two significant ways: 1. load averages measure the trend in CPU utilization, not just an instantaneous snapshot as a percentage does; and 2. load averages include all demand for the CPU, not only how much was active at the time of measurement.

Now the question might come up: what is an ideal/optimum load for my server? Server load is a number in the format x.xx, and of course a load of 0.xx is always safe :-), because the server load represents the number of processes waiting to access the CPU.

To put "what is high" in one line: for an ideal kind of utilization of your CPU, the maximum value here should be equal to the number of CPUs in the box. If the server has a single CPU (central processing unit), a server load consistently higher than 1.00 is not good; if the server has two CPUs, a server load over 2.00 is not good, and so on.
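A tiny sketch of that rule of thumb, reading both numbers straight from /proc:

CPUS=$(grep -c ^processor /proc/cpuinfo)    # number of logical CPUs
read LOAD1 LOAD5 LOAD15 REST < /proc/loadavg
echo "1-min load $LOAD1 on $CPUS CPU(s)"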

Now, what to do when I encounter a very odd load scenario?

1/ Run the top command.

2/ Press Shift+P to sort by CPU usage.

3/ Check whether the topmost processes have gone haywire somewhere.

4/ Just after starting top, press '1' to get a more realistic per-CPU picture.

5/ Maybe you can kill a few PIDs to get a rescue (a non-interactive alternative to steps 1-3 is sketched right after this list).
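The non-interactive alternative mentioned in step 5, if you just want a snapshot of the top CPU consumers:

ps -eo pid,user,pcpu,pmem,etime,comm --sort=-pcpu | head -n 11   # header plus top 10 by CPU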

Nevertheless, to identify the exact cause, a review of your application design is a must as post-analysis, along with the logs, etc.


Hope it helps.


Further reading Here: Unix/Linux Load average.


~Debu

Thursday, April 8, 2010

Why I decided To Write a Blog..


Why I started blogging... Well, this is the ultimate question I get asked when I tell people I started a blog!

I have taken a lot of time to read, study, listen and research in the services operations domain. I enjoy my job, and I am happy with it. But there are still times when I find myself blue about not taking a chance at writing. Probably I don't want to regret it any further. I always wanted a place where I could blog about my journey: the day-to-day life and the challenges I face while trying to move past various obstacles, the way to victory, what I did wrong to have someone yelling at me, and so on. As a student I was too lazy to keep my own diary. But when I ask my topper colleagues who maintained one with utmost sincerity, I learn that they have either stopped or no longer maintain it. Saying so, do I mean I did the right thing by not keeping one? Probably not. Now I am 29 years old, and there is very little of my childhood that I can recall. There are little glimpses of memories here and there, but as soon as I find them, they seem to fade away. But now I look at blogging in a different way!

The world is an open one now. Gone are the days of Bill's secret coding, and the future is open source. So let's share and learn more. Never forget, even when the questions about who, how, when, and why remain unanswered. Most of the time I am with my laptop and the internet, so I think blogging will also help me vent my anger and frustration as well as my excitement; yes, I do come across them quite often. When and how will come in installments in this blog itself.

I even thought of blogging under a different pen name, thinking that if I blogged under my own name I couldn't be so sarcastic, and I might also feel compelled to actually say what I think instead of playing devil's advocate. But who cares, I will do it under my real identity. I also won't use too many exclamation marks in my sentences, as doing so would mean it is not my own thought.


Nevertheless, I will do my best to keep my spirits high this time, blogging each moment, hoping someday to become a cult entertainer by sharing my technical experience and humorous outlook while approaching a problem. I won't provoke people here, even though I know doing so would pave the way for thoughtful responses and hence enlightenment. Blogging for fun. What a concept! Now where am I going with this? Well, I want to get better; I want to write more about technology, fun and humor, capturing every small incident with a holistically horrible outlook. Kick me in the virtual butt if I stop or go totally boring. Oh, I know, that is totally not your responsibility. However, when I know that at least one person is reading and at least one person cares, it makes a difference. It makes me want to be a better blogger. A better writer. So just put some words in the comments section; it will keep me in high spirits.

Blogging with pleasure...Happy Blogging.... Do hop, read fast, skip fast, move forward, Akou Ahiba* :-)


Cheers

Wednesday, April 7, 2010

Hello World!

With every programming language I learned, I wrote a "Hello World!" program, as per tradition. Ah! Now a "Hello World!" blog post. I am not sure how far I will go with this, as my previous three blogs did not propel far from the launchpad.
Anyway, this is my first post; more to come! :-)... Cheers!! One more blog is born.

The idea behind this blog is to increase my breadth of blogging by providing a Linux-operations-centric blog space. Some of these posts will be cross-posts, some of them will be Linux-specific collations of posts from all the other various places I write. Basically, this is my knowledge-base repository from my work experience. So... happy reading... :-)

Now, going forward, do I promise something? Uhh! Yes, I guess. I could not succeed in my previous attempts as a blogger. If I try to recollect and analyze why that was, the only thing that comes to mind is that I was very fixed about what I would write. So, this time: first, I won't overthink the subjects or the subject matter. Second, blog posts will be subject to the availability of my time and mood. Third, no guarantee of the length or size of any post; it depends on mood and the diversity of the topic. And lastly, SkipFast will be categorically NOT!!

So, what's your blog URL? lol :D

Anyway Happy Reading ... :-) Fast-Reading. Brake-Fast, Skip-Fast! Run Rabbit Run, Dig the hole, and forget the sun.

Cheers!!
