Saturday, August 28, 2010

e For eject

One of the ideas behind starting this blog was to have my own space to rant n' rave. Just to make things clear - none of the things I say here represent the views of any particular company, be it my ex or current employer!! But sometimes things come along that really make me smile, and of course they are an opportunity to make an effort to work on them as part of process improvement.

Once we suffered the blues of mismanaged tagging of our servers in one of the data centers. A reboot request sometimes rebooted the wrong server. Now imagine if that server was serving live - and business critical - traffic! Well, I'd prefer NOT to explain the post-episode here. Now, any guess how we make sure that the DC engineer is actually standing in front of the right server when he has a request to address a server hang, a server shift, etc.? :-)

Yes! You are right - 'e' for eject was the savior that time. If he is close by somewhere in the grid or rack, one eject should be enough, and if the server is NOT visible to him - get some scripting in place!

while true
do
    # eject the CD-ROM tray
    eject

    # pull the CD-ROM tray back in
    eject -t
done

He will definitely locate your server, believe me!
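
If you would rather not log in to the box first, a quick one-liner over SSH does the same trick. This is just an illustrative sketch - the hostname is hypothetical, and the sleeps are there only to be a little gentler on the drive:

ssh root@web42.mydomain.com 'while true; do eject; sleep 2; eject -t; sleep 2; done'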

Knock knock!! Now who says disabling/removing the CD-ROM drive from my server should be a part of the DC physical security checklist!!?

Cheers!
DK


Thursday, August 26, 2010

URL response time via curl

These stats are easily visible in any of the integrated HTTP sniffers available in the market, like HttpWatch or Fiddler, for popular web browsers like Mozilla/IE. But if you need to do this quickly from the command line, we can use cURL to do the same -

Determine response times of a URL with cURL:

# echo "`curl -s -o /dev/null -w '%{time_starttransfer}-%{time_pretransfer}' http://m.com/`"|bc
.386

A bit deeper:

$curl -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -o /dev/null -s http://mysite.com/

Lookup time: 2.221
Connect time: 2.541
PreXfer time: 2.589
StartXfer time: 2.862

Total time: 3.587


To get the amount of time between when the transfer was about to begin and when the data actually began to arrive (roughly, the server's processing time):

$echo "`curl -s -o /dev/null -w '%{time_starttransfer}-%{time_pretransfer}' http://mysite.com/`"|bc
.281
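

If you want more than a single data point, a small loop can take a few samples and average the time to first byte. A minimal sketch, assuming bash, curl and bc are available - the URL and sample count are only placeholders:

#!/bin/bash
# Take a few samples of time-to-first-byte for a URL and print the average.
URL="http://mysite.com/"
COUNT=5
total=0

for i in $(seq 1 "$COUNT")
do
    t=$(curl -s -o /dev/null -w '%{time_starttransfer}' "$URL")
    echo "sample $i: ${t}s"
    total=$(echo "$total + $t" | bc)
done

echo "average: $(echo "scale=3; $total / $COUNT" | bc)s"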


Hope it helps!

-DEBAJIT

Wednesday, August 25, 2010

ForkBomb

Just heard someone discussing this 'Fork Bomb'. Folks from a core SA background, who spend much of their time hardening their servers and making them fit and strong enough to withstand live raw traffic as well as (D)DoS attempts, might already know why pranks like this exist. When I discuss topics like the fork bomb I like to say - "It's NOT just there, it's needed - that's why it exists!" I don't know whether it was an intentional or an accidental discovery, but computer pranks and viruses like this (the wabbit) help make us aware of our current strengths and weaknesses - provided we approach them in the right spirit and follow all safety measures. A fork bomb can be lethal at times and you may lose unsaved data too. But at the same time, on a newly built server it can be one of the checklist points for the 'ulimit' kernel parameters.

Just to touch upon the subject line a bit: a fork bomb is considered to be one of the smallest (and deadliest) pieces of virus-like code you can write in a shell or batch language. It is capable of being far more than annoying - launched on a computer or server, it will probably result in a crash.

This is what this fork bomb piece of code looks like:

:(){ :|:& };:

- looks like a smiley puking? Probably whoever wrote this first was a humorous cum creative guy who wanted to make it look funny and then attack. Funny earthlings!!!! But don't you dare underestimate it, even though it looks like a set of smileys. The version below works just the same and can prove equally lethal.

nix()
{
    nix | nix &
}
nix

Now to complete this story I must also tell you how to defend against this. For that you need to read and understand all the parameters in /etc/security/limits.conf, a bit of PAM, and some ulimit parameters - and you are done!! I am sure that while doing this you will also discover many new dimensions of your server hardening mission.
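
As a minimal sketch, capping the number of processes a user may spawn in /etc/security/limits.conf is usually the first line of defence - the values below are purely illustrative, so tune them for your environment:

# /etc/security/limits.conf
# <domain>  <type>  <item>  <value>
*           soft    nproc   1024
*           hard    nproc   2048
# leave root unrestricted so you can always log in and clean up

You can verify the effective limit for your current shell with 'ulimit -u'.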


Jai Ho!!
DEBA




Tuesday, August 24, 2010

lsof

Below are some of my favorite lsof combinations which I have found so handy and which prove to be saviors in critical moments. Just an effort to put them all on this single page.

lsof -d mem                     List memory-mapped files of running programs
lsof -i :25                     Who is using this port
lsof -i                         List all processes with open Internet sockets (TCP and UDP)
lsof -c httpd                   List files for processes whose command name begins with httpd (Apache)
lsof -N                         List NFS-mounted files
lsof -u ^root | grep debu       List open files for all users except root, filtered for user debu
lsof -p 3030                    List by PID; you can supply more than one as a comma-separated list
lsof /tmp/funky.lock            Find the processes that have the /tmp/funky.lock file open
lsof -u <user>                  Open files for the specified user only
lsof -t `which httpd`           List the PIDs of running httpd processes
lsof -i@172.16.80.70            See connections to a specific host
lsof -i | grep LISTEN           What ports are listening for connections
lsof -i | grep ESTABLISHED      Currently active connections
lsof /var/log/messages          Which processes are interacting with this file
lsof +L1                        Open files with a link count less than 1 (deleted but still open) - usually means something fishy; read the man page
lsof -i -P | grep -i "listen"   List all open ports and their owning executables


There are some good resources out there on how to recover deleted files with the help of lsof.
Paste below if you have some more interesting lsof combinations.
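
The basic trick, as a rough sketch on Linux (the PID and fd numbers below are only examples), is that as long as some process still holds the deleted file open, you can copy the data back out through /proc:

# deleted-but-still-open files, along with the owning PID and FD
lsof +L1

# copy the contents back out via the process's open file descriptor
cp /proc/1234/fd/4 /tmp/recovered.log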

Thanks/-
DEBU

Sunday, August 22, 2010

man page inside VIM editor

I am NOT sure how much it can help you, but yes! a very handy one. From inside the VIM editor we can open the man page of any command we want. In command mode, put the cursor over the keyword you want to look up and press shift+K.



BTW, did you all get a chance to check what's new with the new VIM release?

Cheers!
DK

Sunday, August 15, 2010

Happy Independence Day

“Long years ago, we made a tryst with destiny and now the time comes when we shall redeem our pledge... At the stroke of the midnight hour, when the world sleeps, India will awake to life and freedom.” -Jawaharlal Nehru.




Today India is celebrating its 64th Independence Day - yes! It's really a very auspicious moment for all of us. Each and every corner of India is under the magic of the "Tiranga" - the Tri-Color. Everywhere you can see saffron, white and green. I was passing by on the road and, as expected, I saw the same extra excitement in the air and the exquisite decorations everywhere.

Yes, it was on the fateful morning of 15th August 1947 that India was declared independent from the 200-year "British Raj", and the reins of control were finally handed over to the leaders of the nation. India's gaining of independence was a tryst with destiny, as the struggle for freedom was a long and tiresome one, which witnessed the sacrifices of many brave freedom fighters who laid their lives on the line.

Congratulations to all my friends and followers, and I wish you all a very Happy Independence Day!!

Jai Hind.

Debajit Kataki



Saturday, August 14, 2010

Restricted SSH key access

I am NOT going to discuss here how we generate an SSH key pair and set up passphrase-less access between two computer systems - rather, a little-known yet very strong access restriction facility that is available with the authorized_keys file.

A simple distribution of the public key allows any remote host where the private key is known to make any kind of SSH connection (login, remote command execution, port forwarding, etc.) to the computer. But there are a number of restrictions that can be implemented in authorized_keys to further restrict that access. The $HOME/.ssh/authorized_keys file on the target host not only provides a means for public key authentication, but can also impose certain restrictions.


Each entry in the file has four fields -

options - keytype - encoded-key - comment


Host Access Restriction:

from="pattern list"

e.g.

from="*.dk.mydomain.com,escbps.mydomain.com" ssh-rsa ...

..................... debu@esprdmon1.mydomain.com

- This will allow access only from the mentioned hosts or domain; other clients will still be unable to access this host even though they possess a valid private key.

Forced command:

This method executes the mentioned "command" whenever this key authenticates, ignoring whatever command the remote user has supplied. This is one of the most powerful uses of SSH public key authentication, and it is usually used to create task-specific key pairs.

--clip--
from="escbps.mydomain.com ",command="/usr/local/bin/command", no-port-forwarding ssh-rsa AAAA
......
--clap--

Other options:

Well, there are some other SSH facilities too, which can be suppressed by adding any of the following options to the options section -

no-X11-forwarding,no-port-forwarding,no-agent-forwarding,no-pty

In an environment where passphrase-less access is a must and entirely automated remote connections keep on flowing, it is generally a good idea to apply these options unless the key actually needs one of these facilities.
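
Putting it all together, a locked-down entry for, say, a backup-only key might look something like the line below - the hostname, script path and comment here are purely hypothetical, and the key material is elided:

from="backuphost.mydomain.com",command="/usr/local/bin/backup.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... backup@backuphost.mydomain.com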

Cheers!

DK

Thursday, August 5, 2010

IRC or IM

At some point in a company's growth, management notices that there are a lot of sub-par, redundant, and scattered methods of communication going on in the house. Employees have been using small, easy-to-use tools in order to perform their jobs with the most efficient method of communication, and management ponders which tool is best for the company. It is really a very crucial juncture which needs a long-term vision and solution. Sometimes I even ask myself - does the continuous change in technology, processes, working methods and the competitive environment make it virtually impossible for organizations to forecast and implement a stable yet efficient mode of quick communication with co-workers while in office?

Have you ever realized that in an office environment, in spite of having office email, normal telephone extensions, VoIP with high availability, various IMs for quick chat and so on - we still sometimes land up doing multiple cross communications, which sometimes proves really fatal and leaves us with a feeling of sheer waste of time?

I feel the same when I see people using an IM in office environment.

Well, IM's primary focus is person-to-person and it gives a mere sense of 'me' being there. I have always felt that with multiple teams around in your office, where you speak so much about 'collaborative effort', IMs are surely a waste. Had IRC taken their place - with various #team channels representing different but linked BUs, and 'n' number of people around - clearing up a doubt on a point becomes easier. In contrast, following up on the same doubt with multiple teams over IM becomes a bit tedious (especially when you don't know anyone in that team). With email I probably need to be a gentleman and extra careful between the lines, and it surely takes away some of your valuable time with little benefit. Believe me, you just need a very minimal number of people to make it meaningful, fruitful, viable and, on top of that, joyful.


An IRC chat room brings in more active discussion, with higher resource availability, and produces a more agreed-upon answer around a point - and that too with much less effort spent following up with multiple people around you. Personal stuff which needs an offline discussion away from the group can even be done in a 1-to-1 fashion in IRC, the same as we would do with any IM.

Yes! IM also supports multi-user chat, and thus could be used as a complete replacement for IRC. And let's NOT miss the point that IRC is actually just another IM protocol - it probably had a different focus historically, but it was really just one of the first widely-used, interoperable IM systems. Well, did I try to keep away some controversy by saying so here? :-) Maybe yes!!

Anyway - the idea of this post was just to express my personal viewpoint on how IRC can provide additional functionality and prove handy if we are using IM without much of a thought process behind it!!

Cheers!
DEBU



Linux DropCache!!

Linux caches disk reads. You can prove it by copying huge amounts of data in and out of the disk and watching how the cache piles up. Linux normally writes data buffered in memory out to disk at a specified interval. This provides a really good performance benefit.

Linux usually reclaims free RAM from the cache/buffers with an LRU (Least Recently Used) algorithm, so applications should never starve for memory - the kernel takes care of this department. But when we see something very odd, like 40 GB+ sitting in cache while applications are finding it difficult to get their needed share of RAM, we can always drop the cache manually. Probably the kernel is very busy, or the cache itself is too huge for on-the-fly LRU validation, and hence the application is behaving slowly. I guess this feature is available from kernel 2.6.16 onwards.

But there are certain cases where, before flushing the RAM cache, you would want to guarantee that everything has been flushed to disk. If you type sync when logged in as root, it tells the OS to write all buffered data - like superblock, inode, buffered read and buffered write data - to disk. This ensures that nothing is sitting only in RAM before you flush the cache.

We can always issue a sync after an operation such as copying a large file to ensure that all the data is committed to disk. Conveniently, sync will block until the data is written, giving you a guarantee that you're safe.


I have seen this issue on VMware hosts some time back - VMware (ESXi 4) relies on the guest OS to let it (the ESXi host) know when the guest is done using memory, so it can make that memory available to another VM. It happens that the guest does not release the cache (by design!!) and the host thinks that the memory is still in use. And what we see on the host is a huge piled-up cache - BUT nothing is running on the host! And that's real FUN!!! So we need to resort to the manual mechanism.


We MUST always use a ‘sync’ before using any of these.

To free only pagecache:

#sync; echo 1 > /proc/sys/vm/drop_caches

To free only dentries and inodes:

# sync; echo 2 > /proc/sys/vm/drop_caches

To free pagecache, dentries and inodes:

#sync; echo 3 > /proc/sys/vm/drop_caches

I am sure this is NOT an unsafe operation; it will only free things that are completely unused. Dirty objects will always stay in RAM until written out to disk and are not freeable. If you run "sync" first to flush them out to disk, it should be able to free more memory.
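
To actually see the effect, a quick before-and-after check with free works well - a minimal sketch, to be run as root:

# note the "cached" column before dropping
free -m

# flush dirty data first, then drop pagecache, dentries and inodes
sync; echo 3 > /proc/sys/vm/drop_caches

# the cached figure should now be much smaller
free -m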


Thanks/-
DEBU
