A user was having trouble scheduling a trigger to run once a month, although the script ran correctly when executed manually. After checking the backend data and determining that a support ticket was required, the team provided a Monitor log4net configuration file and instructions for creating the log file. It was suggested that the trigger timings might be set in different time zones, and instructions were given for adjusting the Monitor service timeouts. In the end it was discovered that the infrastructure servers had incorrect time zones, which caused the problem. The ControlUp Monitor service timeout was also optimized, and the Monitor service should transition from 'started' to 'running' over time. The issue is now resolved.
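Since the root cause turned out to be mismatched time zones on the infrastructure servers, a quick comparison across hosts can surface this kind of discrepancy early. The following is a minimal sketch, not part of the original thread; the server names are placeholders.

```powershell
# Minimal sketch: compare time zone and local time across infrastructure servers.
# Server names are placeholders - replace with your ControlUp Monitor / infrastructure hosts.
$servers = @('MONITOR01', 'MONITOR02', 'BROKER01')

Invoke-Command -ComputerName $servers -ScriptBlock {
    [pscustomobject]@{
        Server    = $env:COMPUTERNAME
        TimeZone  = (Get-TimeZone).Id   # e.g. 'W. Europe Standard Time'
        LocalTime = Get-Date            # local times should agree once converted to UTC
    }
} | Sort-Object TimeZone | Format-Table -AutoSize
```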
Read the entire article here...
Multiple Monitors Related Training & Support Archives
Multiple Monitors training and support-related archives from inside the ControlUp Community on Slack.
Investigating Delays for Dedicated Citrix MCS VDI
A user reported that all applications on their dedicated Citrix MCS VDI, which has 6 CPUs and 64GB RAM for running Bloomberg and other heavy apps, were experiencing delays, including mouse lag. It was suggested to check the 'Latency - Last' metric in the Real-Time Console, or the latency metrics via the Session Details report in Insights. RemoteDX and Scoutbees were also mentioned as useful tools for checking the endpoints' network conditions and latency to the backend servers. A temporary license was requested to test these products.
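As a quick supplement to the console and Insights metrics, round-trip time from the VDI to its backend servers can be sampled with a few lines of PowerShell. This is only a rough sketch: the host names are placeholders, and ICMP round-trip time is not the same thing as the session latency the ControlUp metrics report.

```powershell
# Minimal sketch (Windows PowerShell 5.1): sample ICMP round-trip time from the VDI
# to backend servers. Host names are placeholders; this complements, but does not
# replace, the latency metrics in the Real-Time Console or Insights.
$backends = @('bloomberg-gw.example.com', 'fileserver01')

foreach ($target in $backends) {
    $pings = Test-Connection -ComputerName $target -Count 5 -ErrorAction SilentlyContinue
    if ($pings) {
        $avg = ($pings | Measure-Object -Property ResponseTime -Average).Average
        Write-Output ('{0}: average RTT {1} ms' -f $target, [math]::Round($avg, 1))
    } else {
        Write-Output ('{0}: unreachable' -f $target)
    }
}
```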
Read the entire article here...
Does rebooting ControlUp servers have a negative effect on load balancing
Various members discussed the need to reboot servers on a weekly basis and the issues this could create with load balancing. It was suggested that ControlUp users stop and start the monitors directly from the console, which properly rebalances the environment. It was also suggested that a PowerShell command be created to restart all monitor services at once.
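A bulk restart along the lines suggested in the thread could look roughly like the sketch below. The service name 'cuMonitor' and the server names are assumptions; confirm the actual service name with Get-Service on a monitor server first, and note that the thread's primary recommendation is to stop and start monitors from the console so the environment rebalances properly.

```powershell
# Hedged sketch: restart the ControlUp Monitor Windows service on several monitor
# servers in one pass. The service name 'cuMonitor' and the server names are
# assumptions - verify them in your own environment before use.
$monitorServers = @('CUMON01', 'CUMON02')

Invoke-Command -ComputerName $monitorServers -ScriptBlock {
    $svc = Get-Service -Name 'cuMonitor' -ErrorAction Stop
    Restart-Service -InputObject $svc -Force
    # Report the resulting state so a scheduled run leaves a simple audit trail.
    '{0}: {1}' -f $env:COMPUTERNAME, (Get-Service -Name 'cuMonitor').Status
}
```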
Read the entire article here...
Troubleshooting General VDI Slowness with Multiple Screens
A user asked for help finding the source of general slowness in a VMware environment with 4-6 screens, no GPUs, and Windows 10 kiosks. ControlUp was being used to identify the bottleneck, but CPU and RAM were not the source. GPU memory was discussed as a possible bottleneck, and it was suggested to check settings such as the display memory limit, along with CPU metrics, network connectivity, and RTT times; it was also noted that the EDT protocol can help. The user later reported that their lobbying for GPUs paid off, and they are getting 12 servers with 2x Ampere A16 GPUs each.
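For a first pass on a kiosk VM before drilling into ControlUp's per-session metrics, the video adapter memory and overall CPU load can be sampled locally. This is a hedged sketch only; values reported by the VMware SVGA adapter are approximate and do not replace checking the display memory limit in the VM or policy settings.

```powershell
# Hedged sketch: report the video adapter memory and current resolution on a kiosk VM.
Get-CimInstance -ClassName Win32_VideoController |
    Select-Object Name,
                  @{ Name = 'AdapterRAMMB'; Expression = { [math]::Round($_.AdapterRAM / 1MB) } },
                  CurrentHorizontalResolution,
                  CurrentVerticalResolution

# Sample total CPU usage a few times; sustained values near 100% point toward
# compute rather than display memory as the bottleneck.
Get-Counter -Counter '\Processor(_Total)\% Processor Time' -SampleInterval 2 -MaxSamples 5 |
    ForEach-Object { $_.CounterSamples.CookedValue }
```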
Read the entire article here...