Hello

In our environment we currently have one PRTG core server and three remote probes. The core itself only has its own 6 sensors; in total we have about 2,500 sensors, distributed across those three remote probes. In the near future we will add another 1-2 remote probes and might reach 5,000 sensors in total. According to the manual, a hardware installation is strongly recommended for 5,000+ sensors. Does this still apply if the core only hosts remote probes? Where is the performance issue in this case?

Thanks


Article Comments

Hello there,

The "issue" is that using remote probes distributes/takes away "monitoring load" from the core server and running the sensors is just one aspect.

In the end it is the core server that has to process all the data: saving historic data, generating graphs, running reports, checking results against configured sensor limits to set sensors to warning/error, triggering notifications, and so on. The core also has to "present" that data, meaning it runs the web server and everything related to it, such as checking access rights (which user is allowed to see which objects and to perform which actions) and displaying configured maps. On top of that it runs automated tasks like unusual detection and similar sensors detection.
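
To make that concrete, here is a rough Python sketch (my own illustration, not part of the original answer) that pulls the sensor table from the core server's web API and counts sensors per probe, i.e. the kind of aggregated view the core has to serve on top of collecting all the results. The server URL, the credentials and the count limit are placeholders you would adapt to your installation.

#!/usr/bin/env python3
"""Rough sketch: count sensors per probe via the PRTG HTTP API.

Assumptions to adapt: the server URL, username and passhash below are
placeholders, and the 'count' limit must be at least as large as your
total sensor count.
"""
import collections
import requests  # third-party; pip install requests

PRTG_URL = "https://prtg.example.com"        # hypothetical core server URL
PARAMS = {
    "content": "sensors",
    "columns": "objid,sensor,device,probe",  # 'probe' column gives the owning probe
    "count": "5000",                         # raise this above your total sensor count
    "username": "apiuser",                   # placeholder credentials
    "passhash": "0000000000",
}

resp = requests.get(f"{PRTG_URL}/api/table.json", params=PARAMS, timeout=30)
resp.raise_for_status()
rows = resp.json().get("sensors", [])

# Aggregate on the client side: the core only serves the raw sensor table.
per_probe = collections.Counter(row.get("probe", "unknown") for row in rows)
for probe, n in per_probe.most_common():
    print(f"{probe}: {n} sensors")
print(f"total: {sum(per_probe.values())} sensors")

Note that even this simple request is answered by the core, not by the probes, which is why the total sensor count matters for core sizing no matter how the sensors are distributed.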

The probes basically "only" perform the sensor requests and send the results back to the core; everything else related to those results is the job of the core server.
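
Since the load ends up on the core either way, one practical way to keep an eye on it while your sensor count grows is the built-in "Core Health" sensor. The sketch below (again only an illustration) looks it up through the /api/table.json endpoint and prints its channels; the URL and credentials are placeholders, and it assumes the sensor still carries its default name.

#!/usr/bin/env python3
"""Rough sketch: read the built-in 'Core Health' sensor through the PRTG API
to watch core load while the sensor count grows.

Placeholders/assumptions: server URL and credentials below, and a sensor
literally named 'Core Health' (the default, unless it was renamed).
"""
import requests  # third-party; pip install requests

PRTG_URL = "https://prtg.example.com"  # hypothetical core server URL
AUTH = {"username": "apiuser", "passhash": "0000000000"}  # placeholder credentials

def api_table(content, **extra):
    """Small helper around /api/table.json that returns the result rows."""
    params = {"content": content, "count": "500", **AUTH, **extra}
    resp = requests.get(f"{PRTG_URL}/api/table.json", params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get(content, [])

# 1. Find the Core Health sensor (filter_name uses PRTG's @sub() substring syntax).
sensors = api_table("sensors",
                    columns="objid,sensor,status",
                    filter_name="@sub(Core Health)")
if not sensors:
    raise SystemExit("No 'Core Health' sensor found - check the name/filter.")
core_health = sensors[0]
print(f"{core_health['sensor']} (id {core_health['objid']}): {core_health['status']}")

# 2. List its channels (health %, CPU load, committed memory, open requests, ...).
for ch in api_table("channels", columns="name,lastvalue", id=core_health["objid"]):
    print(f"  {ch['name']}: {ch['lastvalue']}")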


Kind regards,

Erhard

