

Understanding the View Planner Report
source link: https://blogs.vmware.com/performance/2020/09/understanding-view-planner-report.html

By Sravya Kondam and Atul Pandey
In this article, we describe the VMware View Planner Report and give examples of the various metrics and scores presented in the report. We also analyze the report of a sample test and discuss its conclusions.
View Planner is a benchmarking tool that analyzes a virtual desktop infrastructure’s (VDI) performance and user experience. It simulates large-scale, virtual desktop deployments and generates a realistic measure of a user’s activity by running several applications on the desktop environment. The applications include Microsoft Office, browsers, Windows Media Player, and so on.
When the run completes, View Planner generates a comprehensive report with details like test configuration, resource usage, and application performance. A View Planner report helps you analyze VDI performance and VM consolidation, among other things.
For a detailed explanation of the View Planner reports, keep reading. If you want to skip ahead to use cases, go to “Getting Optimized VM Consolidation Using View Planner Report” later in this document.
A View Planner report contains multiple sections:
- Introduction
- Test Configuration
- View Planner Score
- Operation Details
- Resource Usage
Note: We ran the test described in the previous blog, VDI Capacity Planning with View Planner. The examples in this blog are based on that work.
1. Introduction
In its introduction, a View Planner report explains the background of View Planner and the methodology used to classify the user operations.
2. Test Configuration
This section provides the configuration of the test, including work profile, number of iterations, display protocol, number of VMs, think time, and ramp-up time. Here’s the configuration of our test in table 1.
Table 1. Test configurations for the workload and run profiles
This section also provides the discarded VM count.
- This indicates VMs that were available when the test started but lost connection with the View Planner harness during the test. This can happen for multiple reasons, such as a VM crash or network issues.
- It also includes VMs that were not found in the test.
The discarded VM count should be less than 2% for the test to be considered a success.
3. View Planner Score
This section of the View Planner report has two subsections. The first subsection provides the details of the test performed, while the second gives the quality of service (QoS) of CPU and storage.
The first subsection gives information about the mode of the test, mode of the latency data (remote mode test supports both local and remote mode report creation), test status, and timestamps as shown in table 2.
Table 2. First subsection of View Planner report
In the case of remote or passive runs with Blast as the display protocol, a blastCodecs.csv file is reported. This file contains the list of Blast codecs used on each VM during the run.
3.1. Quality of Service (QoS)
The View Planner workload mix consists of multiple applications running in the desktop VMs and performing user operations. These user operations are classified into three groups (table 3).
- Group A: interactive, CPU-bound operations (threshold: 1 second)
- Group B: I/O-bound operations (threshold: 6 seconds)
- Group C: long-running and other miscellaneous operations (no threshold)
Table 3. Group threshold settings for quality of service
The operations in Groups A and B are used to determine QoS, which is the measurement of overall performance. The operations in Group C are used to generate additional load.
For example, in a Microsoft Word application, “opening a document” is classified as Group B because it requires disk I/O operations. Modifying the document is considered Group A because it’s a CPU-bound operation.
The View Planner QoS score is the 95th percentile of application response time, computed separately for Group A and Group B operations. The test is considered to pass only if the QoS scores of both groups fall within the threshold limits provided by View Planner. The QoS values are represented in a graph, as shown in figure 1.
Figure 1. Both Group A and Group B latencies fall beneath their respective thresholds.
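To make the scoring concrete, here is a minimal Python sketch of how a 95th-percentile QoS check could be computed from raw latency samples. This is an illustration only, not View Planner's implementation; the sample latencies are hypothetical, and only the group thresholds come from table 3.

```python
import numpy as np

# Thresholds from table 3 (seconds); Group C is not scored.
THRESHOLDS = {"A": 1.0, "B": 6.0}

def qos_scores(latencies_by_group):
    """Return the 95th-percentile latency for each scored group.

    latencies_by_group maps a group name ("A" or "B") to a list of
    per-operation response times, in seconds, collected across all VMs.
    """
    return {group: float(np.percentile(samples, 95))
            for group, samples in latencies_by_group.items()}

def qos_passes(scores):
    """The run passes QoS only if BOTH groups are within their thresholds."""
    return all(scores[g] <= THRESHOLDS[g] for g in THRESHOLDS)

# Hypothetical latency samples (seconds), for illustration only.
samples = {"A": [0.4, 0.7, 0.9, 0.6], "B": [3.1, 4.8, 5.2, 4.0]}
scores = qos_scores(samples)
print(scores, "PASS" if qos_passes(scores) else "FAIL")
```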
4. Operation Details
The Operation Details section has two subsections: OE ratio and application response time.
4.1. OE Ratio
OE ratio is the ratio of the actual number of operations executed to the expected number of operations. The number of operations expected depends on the user configuration. Table 4 shows the OE ratio of a sample report.
Table 4. The OE ratio of a sample report.
The QoS score determines whether the latency values of the operations are within the threshold limits, but it cannot tell whether every operation actually ran. Operations can fail to execute for several reasons, such as the application being absent from the VM or the application crashing during the test. The OE ratio provides insight into the percentage of operations that executed successfully.
Ideally, all the expected operations are executed, resulting in an OE ratio of 1. However, a test is considered a success if at least 90% of the expected operations are executed, that is, if the OE ratio is greater than 0.9.
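The check itself is a one-liner; here is a short sketch with hypothetical counts:

```python
def oe_ratio(executed, expected):
    """Ratio of operations actually executed to operations expected."""
    return executed / expected

# Hypothetical counts, for illustration only.
ratio = oe_ratio(383, 390)
# A run is acceptable when at least 90% of the expected operations ran.
print(f"OE ratio = {ratio:.3f}:", "SUCCESS" if ratio > 0.9 else "FAIL")
```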
4.2. Application Response Time
This section contains the following details in table format:
- Application Latency Data – The operations performed on applications, category (storage sensitive or CPU sensitive) and the latency of each operation are reported.
- Remote Login Time – The time taken by the client to log onto the desktop machine in remote runs is reported.
In remote mode runs, View Planner collects the data from both the client and desktop. Client machines report the latency of the operations as experienced by the user. Desktop machines report the latency of the operations that actually occurred.
For example, if we consider an open operation in a Google Chrome application, then the Chrome window might open on the desktop as soon as the icon is clicked. So the desktop reports the Chrome open operation as a success. But the user might experience a delay because of the protocol or network latencies. So the client machine reports the Chrome open operation as a success only after the user sees the Chrome application on the screen.
View Planner reports the data collected from the desktop as local mode data, and the data collected from the client as remote mode data. View Planner generates the remote mode report by default. We can create the local mode report also. For commands to generate the report, go to View Planner Commands.
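Because each operation is timed on both sides, the difference between the two measurements approximates the overhead added by the display protocol and the network. A small illustrative sketch, with hypothetical values:

```python
# Hypothetical per-operation latencies in seconds, as reported by the
# desktop (local mode) and by the client (remote mode).
local_s = {"Chrome-Open": 1.20, "Word-Open": 2.05}
remote_s = {"Chrome-Open": 1.45, "Word-Open": 2.30}

for op, local in local_s.items():
    # The client-side time includes everything the user actually waits
    # for, so the difference approximates protocol/network overhead.
    print(f"{op}: protocol/network overhead ~ {remote_s[op] - local:.2f} s")
```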
Below is an operation details table from a 130-VM sample test with five iterations of the standard benchmark profile. Each operation is executed once per iteration on each VM, and the first and last iterations are excluded from scoring, so the total expected count for each operation is 130 VMs × 3 iterations = 390. Remote login happens only once on each desktop VM, so the count of remote logins is only 130. Latency values of all the operations are recorded, and mean and median values are calculated for each individual operation. Application response times of a sample test are shown in table 5.
Table 5. Application response time
5. Resource Usage
During a run, the resource usage of the hosts involved in the test is collected and reported, as seen in table 6. This report section includes the CPU, memory, and network usage of the hosts. The average, minimum, and maximum usage of each resource is reported.
Table 6. Resource usage
Resource usage at any particular time during the run is plotted in graphs, as shown in figure 2. Each graph shows the elapsed time of the complete run on the X-axis and the resource usage on the Y-axis. All the resource usage charts have vertical lines that indicate checkpoints like remote login, pre-run, run, and post-run. Resource usage is collected through vCenter performance counters. For a description of all the performance counters, go to vSphere 6.0 Performance Counter Description.
- CPU usage: cpu.usage, cpu.coreUtilization, cpu.utilization
- Memory usage: mem.usage
- Memory: mem.active, mem.consumed
- Network usage: net.received, net.transmitted
- Datastore write latency: datastore.totalWriteLatency
- Datastore average write requests per second: datastore.numberWriteAveraged
- Datastore read latency: datastore.totalReadLatency
- Datastore average read requests per second: datastore.numberReadAveraged
Table 7. View Planner collects and plots graphs for the following host resource details.
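View Planner gathers these counters automatically, but you can also query them yourself. Below is a minimal pyVmomi sketch that resolves counter names from table 7 to counter IDs and queries one host; the vCenter address, credentials, host name, and sampling parameters are all placeholders, and this is not View Planner's code.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your own vCenter.
si = SmartConnect(host="vcenter.example.com", user="user", pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
perf = content.perfManager

# Map "group.name" counter names (as in table 7) to vCenter counter IDs,
# keeping the "average" rollup to avoid name collisions.
counter_ids = {f"{c.groupInfo.key}.{c.nameInfo.key}": c.key
               for c in perf.perfCounter if c.rollupType == "average"}

# Placeholder host name.
host = content.searchIndex.FindByDnsName(dnsName="esx01.example.com",
                                         vmSearch=False)
wanted = ["cpu.usage", "mem.usage", "net.received", "net.transmitted"]
metric_ids = [vim.PerformanceManager.MetricId(counterId=counter_ids[n],
                                              instance="") for n in wanted]
spec = vim.PerformanceManager.QuerySpec(entity=host, metricId=metric_ids,
                                        intervalId=20,  # 20 s real-time samples
                                        maxSample=30)
for result in perf.QueryPerf(querySpec=[spec]):
    for series in result.value:
        print(series.id.counterId, series.value)

Disconnect(si)
```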
Here is a snapshot of the CPU usage graphs from our test, in figure 2.
Figure 2. A snapshot of the CPU usage graphs from our test
The CPU usage and memory usage graphs can show brief spikes up to 100%. But if usage stays that high for a long duration, we can conclude that CPU or memory is the bottleneck for the run.
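One simple way to tell a momentary spike from a sustained bottleneck is to look for consecutive samples above a high-water mark. A sketch, where the 90% level and the five-minute window are arbitrary choices for illustration, not View Planner defaults:

```python
def sustained_bottleneck(samples, level=90.0, min_consecutive=15):
    """Return True if usage stays at or above `level` percent for at
    least `min_consecutive` consecutive samples (e.g., 15 x 20 s = 5 min)."""
    run = 0
    for usage in samples:
        run = run + 1 if usage >= level else 0
        if run >= min_consecutive:
            return True
    return False

# Hypothetical CPU usage series (%): one brief spike, not a bottleneck.
cpu = [55, 62, 100, 58, 61, 57] * 10
print(sustained_bottleneck(cpu))  # False
```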
5.1. Getting Optimized VM Consolidation Using View Planner Report
We performed an experiment to find the VM consolidation of a host by running the standard View Planner work profile applications. We selected the following parameters to track the performance.
- CPU-sensitive operations (Group A), threshold < 1 second: The 95th percentile of the latency values of all CPU-sensitive operations performed during the test. If this parameter is beyond the threshold, we can conclude that CPU is the bottleneck of the test.
- Storage-sensitive operations (Group B), threshold < 6 seconds: The 95th percentile of the latency values of all storage-sensitive operations performed during the test. If this parameter is beyond the threshold, we can conclude that disk storage is the bottleneck of the test.
- OE ratio, threshold > 0.90: The ratio of the executed number of operations to the expected number of operations. If this parameter is below the threshold, we can conclude that not all operations were successfully executed.
- Discarded VM count, threshold < 2%: The number of VMs that were available at the start of the View Planner test but dropped during the test. If this parameter is more than zero, we can conclude that not all the expected VMs took part in the test.
- Memory usage, threshold < 90%: The memory usage of the host under test. If this parameter is beyond the threshold, we can conclude that memory is the bottleneck of the test.
Table 8. Parameters we chose to track performance
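Taken together, these parameters yield a single pass/fail verdict for a run. Here is a sketch of such a check; the thresholds come from table 8, while the sample values are placeholders that mirror the failing run described below.

```python
def evaluate_run(group_a_p95_s, group_b_p95_s, oe_ratio,
                 discarded_vm_pct, mem_usage_pct):
    """Apply the table 8 thresholds and print a per-parameter verdict."""
    checks = {
        "CPU-sensitive ops (Group A)":     group_a_p95_s < 1.0,
        "Storage-sensitive ops (Group B)": group_b_p95_s < 6.0,
        "OE ratio":                        oe_ratio > 0.90,
        "Discarded VM count":              discarded_vm_pct < 2.0,
        "Memory usage":                    mem_usage_pct < 90.0,
    }
    for name, ok in checks.items():
        print(f"{name}: {'SUCCESS' if ok else 'FAIL'}")
    return all(checks.values())

print("Run passes:", evaluate_run(0.92, 7.08, 0.98, 0.0, 50.0))
```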
Based on the host configuration, we started the test with 144 VMs. Table 9 shows the screenshots of QoS and host resource usage from the report of our 144-VM run.
Table 9. Host resource usage and quality of service results
From the Table 9 reports, we can deduce the parameters as shown in table 10.
- CPU-sensitive operations (Group A): 0.9193 seconds; threshold < 1 second; SUCCESS
- Storage-sensitive operations (Group B): 7.0804 seconds; threshold < 6 seconds; FAIL
- Ratio of actual to expected operations (OE ratio): 0.98; threshold > 0.9; SUCCESS
- Discarded desktop count: 0; threshold < 2%; SUCCESS
- Memory usage of any desktop host: <50% (approx.); threshold < 90%; SUCCESS
Table 10. Failed test performance parameters
We can see that the QoS of the storage-sensitive operations is above the threshold limit. Even though all the other parameters met their criteria, we consider this test a failure.
We decreased the VM count to 130 VMs in our run profile and performed another test. The snapshots of resource utilization and QoS details shown earlier in this blog come from that test. From those snapshots, we can deduce the following.
- CPU-sensitive operations (Group A): 0.7661 seconds; threshold < 1 second; SUCCESS
- Storage-sensitive operations (Group B): 4.9244 seconds; threshold < 6 seconds; SUCCESS
- Ratio of actual to expected operations (OE ratio): 1; threshold > 0.9; SUCCESS
- Discarded desktop count: 0; threshold < 2%; SUCCESS
- Memory usage of any desktop host: ~47%; threshold < 90%; SUCCESS
Table 11. Successful test performance parameters
Based on the reports of our tests, we can say that our host can accommodate 130 VMs without any compromise on performance.
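The procedure we followed generalizes to a simple search: start from an estimate based on the host configuration and step the VM count down until a run meets every criterion. A sketch, where run_test is a hypothetical hook standing in for launching a View Planner run and evaluating its report against table 8:

```python
def find_consolidation(start_vms, step, run_test):
    """Decrease the VM count by `step` until a run passes all criteria.

    run_test(vm_count) is a placeholder: it should launch a View Planner
    run with that many VMs and return True if the report meets every
    threshold in table 8.
    """
    vms = start_vms
    while vms > 0:
        if run_test(vms):
            return vms  # the highest count we found that passes
        vms -= step
    return 0

def fake_run_test(vm_count):
    # Stand-in for a real run: in our experiment, 144 VMs failed
    # and 130 VMs passed.
    return vm_count <= 130

print(find_consolidation(144, 14, fake_run_test))  # -> 130
```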
6. Contact Us
If you have any questions or want to know more, reach out to the VMware View Planner team at [email protected]. The View Planner team actively answers this community email.