We are seeing many errors on our VMware ESX Disk sensors. We followed this ticket (https://kb.paessler.com/knowledgebase/en/topic/24373-ssh-vmware-esx-i-disk-sensor-error) and upgraded our customer's PRTG to version 9, but that did not solve the problem.
Example:
Sensor History
02/10/2011 22:58:00  Notification Info, State Trigger activated (Trigger ID: 4294967297)
02/10/2011 22:56:35  Down, 2 % (Free Space) is below the error limit of 10 %
02/10/2011 22:56:30  Notification Info, State Trigger activated (Trigger ID: 4294967297)
02/10/2011 22:56:30  Warning, No valid result from SSH Shell
02/10/2011 18:13:00  Notification Info, State Trigger activated (Trigger ID: 4294967297)
02/10/2011 18:11:36  Down, 2 % (Free Space) is below the error limit of 10 %
02/10/2011 18:11:30  Notification Info, State Trigger activated (Trigger ID: 4294967297)
02/10/2011 18:11:30  Warning, No valid result from SSH Shell
30/09/2011 22:40:00  Notification Info, State Trigger activated (Trigger ID: 4294967297)
30/09/2011 22:38:36  Down, 2 % (Free Space) is below the error limit of 10 %
Article Comments
In the sensor settings, please choose "Write result to disk" and send us the result file the sensor has created.
Oct 2011
Here is the result of the suggested command:
Last login: Mon Oct 24 12:38:15 2011 from srvprtg.newims.local
[root@vmware1 ~]# echo PAESSHSTART;vdf - k;echo PAESSHEND
PAESSHSTART
Unknown option: k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdl5 5036284 1893576 2886876 40% /
/dev/sdl2 2016044 1546196 367436 81% /var/log
/dev/sdk1 124427 72955 45048 62% /boot
/dev/sdk2 5162828 1898096 3002472 39% /esx3-installation
/dev/sdk1 124427 72955 45048 62% /esx3-installation/boot
/dev/sdk6 2063504 136276 1822408 7% /esx3-installation/var/log
/dev/sdk5 10317828 5975896 3817816 62% /esx3-installation/vmimages
/vmfs/devices 71706928 0 71706928 0% /vmfs/devices
/vmfs/volumes/45b48f67-586b87c8-88aa-00145e7b9944 52428800 27379712 25049088 52% /vmfs/volumes/local_VMWARE1
/vmfs/volumes/475fc71e-d734fdba-cac0-00145e7b993a 2134638592 2103705600 30932992 98% /vmfs/volumes/array3_sata1
/vmfs/volumes/475fc7ca-b788c7f8-e17b-00145e7b993a 1073479680 1062461440 11018240 98% /vmfs/volumes/array3_sata2
/vmfs/volumes/475fc907-ae11c800-bb26-00145e7b993a 1183580160 288370688 895209472 24% /vmfs/volumes/array3_sata3
/vmfs/volumes/475fc989-88e41a12-72c7-00145e7b993a 2134638592 390963200 1743675392 18% /vmfs/volumes/array4_sata1
/vmfs/volumes/475fc9a5-fbb92db2-66f5-00145e7b993a 1073479680 950444032 123035648 88% /vmfs/volumes/array4_sata2
/vmfs/volumes/475fc9c9-55b2308e-517a-00145e7b993a 1915486208 1820336128 95150080 95% /vmfs/volumes/array4_sata3
/vmfs/volumes/479dc9d1-2b4bfc72-6d5c-00145e7b9944 524025856 317448192 206577664 60% /vmfs/volumes/array1_fc1
/vmfs/volumes/479dca17-85f22034-7ae6-00145e7b9944 475529216 296495104 179034112 62% /vmfs/volumes/array1_fc2
/vmfs/volumes/479dcaa7-26dab6ee-30bf-00145e7b9944 419168256 317556736 101611520 75% /vmfs/volumes/array2_fc1
/vmfs/volumes/479dcacc-c25a3a4e-5425-00145e7b9944 151781376 88757248 63024128 58% /vmfs/volumes/array2_fc2
PAESSHEND
[root@vmware1 ~]#
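As a side note, the transcript shows the command arriving as 'vdf - k' with a stray space, which is presumably why vdf prints 'Unknown option: k' before listing the volumes anyway. To reproduce by hand what the sensor sees, you could run the wrapped command yourself (a sketch, using the host from the transcript above):

# run the same marker-wrapped command the sensor sends (without the stray space);
# PRTG appears to use PAESSHSTART/PAESSHEND to delimit the sensor result
ssh root@vmware1 'echo PAESSHSTART;vdf -k;echo PAESSHEND'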
Oct 2011
What was the outcome of this investigation, please? We are getting the same errors.
Oct 2012
For reference, if anyone else has this issue: setting the SSH timeout to 10 seconds and the scanning interval to 5 minutes (from 60 seconds) appears to have resolved this issue, at least initially. I haven't investigated whether the sensors were overloading the PRTG server or the ESX host.
Oct 2012
We have experienced the same issue with VMware servers.
Changing the timeouts as described above resolved the issue for one of them; however, it has not stopped intermittent "No valid result from SSH Shell" errors on another one.
Does anyone have a solution to this?
Apr 2013
Hi,
this error may happen if the connected storage system is responding slowly due to heavy load. If you run the command 'df -k' in an SSH shell, how long does the result take to appear?
Please adjust the timeout setting for these sensors to a higher value; the default is 5 seconds.
Depending on the load and the number of connected volumes, a timeout of 10 or 15 seconds, or even higher, may work better.
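If you want to measure that, you can time the full round trip from the PRTG host (a minimal sketch; replace root@vmware1 with your own user and ESX host):

# time how long the disk query takes over SSH; if 'real' regularly
# approaches the sensor's SSH timeout, raise the timeout accordingly
time ssh root@vmware1 'df -k'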
Kind regards
Apr 2013
Hello,
I have a fine connection to the servers, the servers are running with low load, and I have adjusted the SSH timeouts and the testing intervals, but I am still getting intermittent "No valid result from SSH Shell" errors.
Here is what I get written to disk when the test fails:
Last login: Mon May 20 16:07:18 2013 from prtgmaster.aac.academyart.edu
echo PAESSHSTART;/var/prtg/scripts/firefoxcheck.sh ./;printf "\n";echo PAESSHEND
alertingadmin@alertingadmin-desktop:$ echo PAESSHSTART;/var/prtg/scripts/firefoxcheck.sh ./;printf "\n";echo PAESSHEND
PAESSHSTAR
May 2013
Hi,
Is the 'T' at the end of your result string really missing, or did you just miss it when copying?
If it is really missing, this may be connected with settings in your sshd_config file. Please try raising the following settings to at least
ClientAliveInterval 15
ClientAliveCountMax 4
to force sshd to wait at least one minute (15 seconds × 4) with no data from the client (in this case, the PRTG server) before dropping the connection.
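For reference, here is a minimal sketch of applying that on the monitored host (the /etc/ssh/sshd_config path and the service command are assumptions; adjust them for your distribution):

# in /etc/ssh/sshd_config on the monitored host:
# probe an idle client every 15 seconds, and drop the session only
# after 4 consecutive unanswered probes (about 60 seconds in total)
ClientAliveInterval 15
ClientAliveCountMax 4

# afterwards, reload sshd so the change takes effect, e.g.:
# service sshd restart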
Kind Regards
Jun 2013