[Nagiosplug-help] check_disk fails on line feed
Rasmus Plewe
rplewe at ess.nec.de
Mon Dec 2 10:40:51 CET 2002
Hello,
OK, so this mail is a "bit" longer than usual, as I will try to be as
verbose as possible about the problem:
On Mon, Dec 02, 2002 at 11:05:43AM -0500, Subhendu Ghosh wrote:
> On Fri, 29 Nov 2002, Rasmus Plewe wrote:
> > On Fri, Nov 29, 2002 at 03:22:51PM -0500, Subhendu Ghosh wrote:
> > > On Thu, 28 Nov 2002, Rasmus Plewe wrote:
> > > > >
> > > > > I can currently live with the workaround to do a
> > > > > ./check_disk -p "fs1 fs2 fs3" --warning=20% --critical=1%
> > > >
> > > > No, I can't live with this workaround. Since I run this remotely, the
> > > > command would look something like
> > > >
> > > > ./check_by_ssh -H host -l user -C "/remote_path/check_disk -p "fs1 fs2"
> > > > -w 20% - 1%"
> > >
> > > escape the inner quotes: -p\"fs1 fs2\"
> >
> > That was about the first thing I tried:
> >
> > ./check_by_ssh -H cs17 -C "/remote/nagios_plugins/check_disk -p \"/ /tmp\" -w 20% -c 1%"
> > Unable to open pipe: /usr/bin/ssh cs17 '/remote/nagios_plugins/check_disk -p "/ /tmp" -w 20% -c 1%'
> > tts:/usr/local/nagios/libexec #
> >
> "Unable to open pipe" seems to be a different issue. Do you have enough
> file handles?
I would guess so. With everything else the machine is doing, file handles
have never been an issue.
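(A quick way to double-check, assuming a POSIX-ish shell on the
monitoring host:

ulimit -n    # prints the per-process open file descriptor limit

but I would be surprised if that were the bottleneck here.)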
> Also, could you post the full output of the df command so we can look at
> integrating the multiline output...
Here we go:
cs1 #:df -h
Filesystem size used avail capacity Mount on
/dev/dsk/201 248M 113M 135M 45% /
/dev/dsk/200 500M 79M 421M 15% /stand
/dev/dsk/202 2.0G 1.5G 428M 78% /usr
/dev/dsk/203 3.4G 612M 2.8G 17% /var
/dev/dsk/204 5.9G 198M 5.7G 3% /var/sx/adm
/dev/dsk/208 1.0G 479M 521M 47% /maint
/dev/dsk/205 2.0G 587M 1.4G 29% /usr/opt
/dev/dsk/206 2.0G 300M 1.7G 15% /var/bkup
/dev/dsk/207 7.8G 151M 7.7G 1% /opt
/dev/dsk/302 59G 24M 59G 0% /var/spool/nqs/restart
/dev/dsk/301 10G 1.2G 8.8G 11% /tmp
/dev/dsk/400 266G 83G 183G 31% /wrk-local
siox1:/home/ERS 12G 11G 1.4G 88% /var/spool/ers/sharedb
donner:/export/home/dkrz
39G 21G 18G 53% /nfs/donner/home/dkrz
niesel:/pf/a 4.9G 552M 4.4G 10% /nfs/niesel/pf/a
niesel:/pf/b 4.9G 491M 4.4G 9% /nfs/niesel/pf/b
niesel:/pf/g 4.9G 1.2G 3.7G 25% /nfs/niesel/pf/g
niesel:/pf/u/gp 4.9G 442M 4.5G 8% /nfs/niesel/pf/gp
niesel:/pf/u/ifm 4.9G 473M 4.5G 9% /nfs/niesel/pf/ifm
niesel:/pf/k 513M 30M 483M 5% /nfs/niesel/pf/k
niesel:/pf/u/uni14 4.9G 749M 4.2G 14% /nfs/niesel/pf/uni14
niesel:/pf/u/uni16 4.1G 1.6G 2.5G 39% /nfs/niesel/pf/uni16
regen:/pf/k 479M 167M 312M 34% /nfs/regen/pf/k
regen:/pf/m 6.7G 4.3G 2.4G 63% /nfs/regen/pf/m
regen:/pf/m/mo 6.7G 3.8G 2.9G 56% /nfs/regen/pf/mo
regen:/pf/m/at 3.4G 2.1G 1.2G 63% /nfs/regen/pf/at
peta:/pf/k 68G 64G 3.9G 94% /nfs/peta/pf/k
cross:/pf 49G 47G 2.0G 95% /nfs/cross/pf
cross:/bf 95G 44G 50G 46% /nfs/cross/bf
/dev/dsk/513 532G 384G 147G 72% /oldmf/1
136.172.43.200:/sx/sxbs1
1.1T -3.6T 4.6T -336% /tmp/mnt
cs2-hs:/oldmf/2 532G 377G 155G 70% /oldmf/2
cs3-hs:/oldmf/3 532G 389G 142G 73% /oldmf/3
cs4-hs:/oldmf/4 532G 196G 335G 36% /oldmf/4
cs2-hs:/oldmf/6 532G 307G 225G 57% /oldmf/6
cs3-hs:/oldmf/7 532G 240G 292G 45% /oldmf/7
cs4-hs:/oldmf/9 532G 401G 131G 75% /oldmf/9
ds2-hs:/mnt/wrk-share2
1.0T 6.4G 1.0T 0% /nfmnt/wrk-share2
ds1-hs:/mnt/wrk-share1
1.0T 20M 1.0T 0% /nfmnt/wrk-share1
/dev/dsk/700 266G 235G 31G 88% /pf
/dev/dsk/101 248M 165M 83M 66% /bkup
ds1-hs:/mf/1 532G 16G 516G 3% /mf/10
ds2-hs:/mf/2 532G 34G 498G 6% /mf/11
ds1-hs:/mf/3 532G 127G 405G 23% /mf/12
ds2-hs:/mf/4 532G 15G 518G 2% /mf/13
ds1-hs:/mf/5 532G 60G 473G 11% /mf/14
ds2-hs:/mf/6 532G 5.1G 527G 0% /mf/15
ds1-hs:/mf/7 532G 3.4G 529G 0% /mf/16
ds2-hs:/mf/8 532G 18G 515G 3% /mf/17
ds1-hs:/mf/9 532G 35G 497G 6% /mf/18
ds2-hs:/mf/10 532G 28G 504G 5% /mf/19
ds1-hs:/mf/11 532G 25G 507G 4% /mf/20
ds2-hs:/mf/12 532G 17G 516G 3% /mf/21
ds1-hs:/mf/13 532G 22G 510G 4% /mf/22
cs6-hs:/oldmf/5 532G 300G 231G 56% /oldmf/5
cs6-hs:/oldmf/8 532G 319G 213G 59% /oldmf/8
cs2-hs:/pool 266G 189G 77G 71% /nfmnt/pool
ds1-hs:/pf 532G 61G 472G 11% /tmp/pf
/dev/dsk/702 100G 23G 77G 22% /pf/k/adm
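(As an aside: the wrapped NFS entries above could be rejoined into single
lines before parsing. A rough sketch, assuming the remote host has a
POSIX-ish awk; "hold" is just my name for the buffered line:

df -h | awk 'NF == 1 { hold = $0; next }
    hold != "" { print hold, $0; hold = ""; next }
    { print }'

A device name that wraps ends up alone on its line, so it is buffered and
glued to the following data line; everything else passes through untouched.)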
Now, the attempts from the Nagios host:
$ ./check_by_ssh -H cs1 -l root -C "/maint/nagios_plugins/check_disk -w 20% -c 1%"
Unable to read output:
$ ./check_by_ssh -H cs1 -l root -C "/maint/nagios_plugins/check_disk -p / -w 20% -c 1%"
DISK OK -
$ ./check_by_ssh -H cs1 -l root -C "/maint/nagios_plugins/check_disk -p "/ /tmp" -w 20% -c 1%"
INPUT ERROR: Unable to parse command line
$ ./check_by_ssh -H cs1 -l root -C "/maint/nagios_plugins/check_disk -p \"/ /tmp\" -w 20% -c 1%"
Unable to open pipe: /usr/bin/ssh -l root cs1 '/maint/nagios_plugins/check_disk -p "/ /tmp" -w 20% -c 1%'<prompt>
$ ./check_by_ssh -t 20 -H cs1 -l root -C "/maint/nagios_plugins/check_disk -p '/ /tmp' -w 20% -c 1%"
INPUT ERROR: Unable to parse command line
[I need the "-t 20" here, otherwise I get a timeout]
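For reference, this is how I understand the quoting to peel off (a sketch,
assuming check_by_ssh simply hands the -C string to ssh, as its error
message above suggests):

# My local shell strips the outer double quotes and turns \" into ",
# so check_by_ssh receives:
#   /maint/nagios_plugins/check_disk -p "/ /tmp" -w 20% -c 1%
# check_by_ssh then wraps that in single quotes for ssh (the exact line
# from the "Unable to open pipe" message above):
/usr/bin/ssh -l root cs1 '/maint/nagios_plugins/check_disk -p "/ /tmp" -w 20% -c 1%'
# Running this ssh line by hand should show whether the remote shell passes
# "/ /tmp" to check_disk as a single argument, i.e. whether the problem is
# the quoting or the pipe.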
So, if you can make sense of this and, even more importantly, know of a
solution, any help is highly welcome. ;-)
A different approach: create a simple shell script on the remote host
with the lines:
#!/bin/sh
/maint/nagios_plugins/check_disk -w 20% -c 3% -p "/ /tmp /maint /var /opt"
returns an exit code, but unfortunately one that is not related to the
state of the disks: it issues only a warning even though three or four
file systems are at 99% (on a different system), while check_disk called
directly shows the expected behaviour.
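Just to rule out the exit status being lost inside the script, a variant
that propagates it explicitly (a sketch only; /bin/sh already exits with
the status of its last command, so this may well change nothing):

#!/bin/sh
/maint/nagios_plugins/check_disk -w 20% -c 3% -p "/ /tmp /maint /var /opt"
exit $?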
As you can imagine from the df output above, I need some way to
"group" file systems for checking: there are too many to check them all
individually, and too many I don't want to check at all, so simply
running one check over every file system is not an option (even if it
worked, which it doesn't).
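Failing everything else, a wrapper that loops over one such group itself
might do. A rough sketch (hypothetical and untested; it assumes the usual
plugin exit codes 0=OK, 1=WARNING, 2=CRITICAL and naively treats higher
codes as worse):

#!/bin/sh
# Check each file system separately, collect the messages on one line,
# and exit with the worst status seen.
worst=0
msg=""
for fs in / /tmp /maint /var /opt
do
    line=`/maint/nagios_plugins/check_disk -p "$fs" -w 20% -c 3%`
    rc=$?
    [ "$rc" -gt "$worst" ] && worst=$rc
    msg="$msg $line"
done
echo "$msg"
exit $worst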
Lots of other questions came up too, but I'll save them until I've
found time to do some reading first. Unfortunately I'm not able to drop
everything else and work on Nagios exclusively for a couple of days (but
the system monitoring needs to be ready by the end of last week, of
course)...
Regards,
Rasmus