
Understanding the View Planner Report


By Sravya Kondam and Atul Pandey

In this article, we describe the VMware View Planner Report and give examples of the various metrics and scores presented in the report. We also analyze the report of a sample test and discuss its conclusions.

View Planner is a benchmarking tool that analyzes a virtual desktop infrastructure’s (VDI) performance and user experience. It simulates large-scale, virtual desktop deployments and generates a realistic measure of a user’s activity by running several applications on the desktop environment. The applications include Microsoft Office, browsers, Windows Media Player, and so on.

When the run completes, View Planner generates a comprehensive report with details like test configuration, resource usage, and application performance. A View Planner report helps to analyze VDI performance and VM consolidation, among others.

For a detailed explanation of the View Planner reports, keep reading. If you want to skip ahead to use cases, go to “Getting Optimized VM Consolidation Using View Planner Report” later in this document.

A View Planner report contains multiple sections:

  1. Introduction
  2. Test Configuration
  3. View Planner Score
  4. Operation Details
  5. Resource Usage

Note: We ran the test described in a previous blog, VDI Capacity Planning with View Planner. The examples in this blog are based on that work.

1. Introduction

In its introduction, a View Planner report explains the background of View Planner and the methodology used to classify the user operations.

2. Test Configuration

This section provides the configuration of the test, including work profile, number of iterations, display protocol, number of VMs, think time, and ramp-up time. Here’s the configuration of our test in table 1.

view-planner-reports-table-1.png

Table 1. Test configurations for the workload and run profiles

This section also provides the discarded VM count.

  • This count includes VMs that were available when the test started but lost connection with the View Planner harness during the test. This can happen for multiple reasons, such as a VM crash or network issues.
  • It also includes VMs that were expected but not found during the test.

The discarded VM count should be less than 2% of the total VM count for the test to be considered a success.
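
As a rough illustration of this criterion, here is a minimal sketch in Python that computes the discarded-VM percentage and applies the 2% limit. The function name and inputs are hypothetical and not part of View Planner.

```python
def discarded_vms_within_limit(total_vms: int, discarded_vms: int, limit_pct: float = 2.0) -> bool:
    """Return True if the discarded VM count is within the acceptable limit.

    A VM is 'discarded' when it was available at the start of the test but
    lost connection with the harness, or was never found during the test.
    """
    discarded_pct = 100.0 * discarded_vms / total_vms
    return discarded_pct < limit_pct

# Example: 1 discarded VM out of 130 is about 0.77%, which passes the 2% criterion.
print(discarded_vms_within_limit(total_vms=130, discarded_vms=1))  # True
```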

3. View Planner Score

This section of the View Planner report has two subsections. The first subsection provides the details of the test performed, while the second gives the quality of service (QoS) of CPU and storage.

The first subsection gives information about the mode of the test, the mode of the latency data (a remote mode test supports both local and remote mode report creation), the test status, and timestamps, as shown in table 2.

view-planner-reports-table-2.png

Table 2. First subsection of View Planner report

For remote or passive runs that use the Blast display protocol, the report also includes the blastCodecs.csv file, which lists the Blast codecs used on each VM during the run.

3.1. Quality of Service (QoS)

The View Planner workload mix consists of multiple applications running in the desktop VMs and performing user operations. These user operations are classified into three groups (table 3).

| Group | Description | Threshold (seconds) |
|---|---|---|
| Group A | Interactive, CPU-bound operations | 1 |
| Group B | I/O-bound operations | 6 |
| Group C | Long-running and other miscellaneous operations | NA |

Table 3. Group threshold settings for quality of service

The operations in Groups A and B are used to determine QoS, which is the measurement of overall performance. The operations in Group C are used to generate additional load.

For example, in a Microsoft Word application, “opening a document” is classified as Group B because it requires disk I/O operations. Modifying the document is considered Group A because it’s a CPU-bound operation.

The View Planner QoS score is the 95th percentile of application response time, computed separately for Group A and Group B operations. The test is considered to pass only if the QoS scores for both groups are within the threshold limits provided by View Planner. The QoS values are represented in a graph, as shown in figure 1.

view-planner-reports-fig1.png

Figure 1. Both Group A and Group B latencies fall beneath their respective thresholds.
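
To make the scoring concrete, here is a minimal sketch of how a 95th-percentile QoS check could be computed from per-operation latencies. The group thresholds come from table 3; the function names, input format, and sample latencies are hypothetical and not View Planner's actual implementation.

```python
import numpy as np

# Group thresholds from table 3, in seconds.
THRESHOLDS = {"A": 1.0, "B": 6.0}

def qos_scores(latencies_by_group):
    """Return the 95th percentile latency per group, which is the QoS score."""
    return {group: float(np.percentile(values, 95))
            for group, values in latencies_by_group.items()}

def qos_passes(latencies_by_group):
    """The run passes QoS only if both Group A and Group B scores are within threshold."""
    scores = qos_scores(latencies_by_group)
    return all(scores[group] < THRESHOLDS[group] for group in ("A", "B"))

# Hypothetical per-operation latency samples (seconds) gathered during a run.
samples = {"A": [0.4, 0.7, 0.9, 0.5], "B": [3.2, 4.9, 5.5, 2.8]}
print(qos_scores(samples))
print(qos_passes(samples))  # True: both groups are under their thresholds
```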

4. Operation Details

The Operation Details section has two subsections: OE ratio and application response time.

4.1. OE Ratio

OE ratio is the ratio of the actual number of operations executed to the expected number of operations. The number of operations expected depends on the user configuration. Table 4 shows the OE ratio of a sample report.

view-planner-reports-table-3.png

Table 4. The OE ratio of a sample report.

The QoS score determines whether the latency values of the operations are within the threshold limits, but not all operations might be executed successfully. Reasons include the absence of an application on the VM or an application crashing during the test, among others. The OE ratio provides insight into the percentage of operations that executed successfully.

Ideally, all expected operations are executed, resulting in an OE ratio of 1. However, a test is considered a success if at least 90% of the expected operations are executed, that is, if the OE ratio is greater than 0.9.
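
As an illustration, here is a minimal sketch of the OE ratio calculation and the 0.9 acceptance check. The function names and example counts are hypothetical, not View Planner internals.

```python
def oe_ratio(executed_ops: int, expected_ops: int) -> float:
    """Ratio of operations actually executed to operations expected."""
    return executed_ops / expected_ops

def oe_passes(executed_ops: int, expected_ops: int, minimum: float = 0.9) -> bool:
    """A run is acceptable only if more than 90% of the expected operations ran."""
    return oe_ratio(executed_ops, expected_ops) > minimum

# Example: 382 of 390 expected operations executed -> OE ratio of about 0.98, which passes.
print(round(oe_ratio(382, 390), 2), oe_passes(382, 390))
```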

4.2. Application Response Time

This section contains the following details in table format:

  • Application Latency Data – Reports each operation performed on the applications, its category (storage sensitive or CPU sensitive), and its latency.
  • Remote Login Time – Reports the time taken by the client to log on to the desktop machine in remote runs.

In remote mode runs, View Planner collects the data from both the client and desktop. Client machines report the latency of the operations as experienced by the user. Desktop machines report the latency of the operations that actually occurred.

For example, consider an open operation in the Google Chrome application. The Chrome window might open on the desktop as soon as the icon is clicked, so the desktop reports the Chrome open operation as a success. The user, however, might experience a delay because of protocol or network latencies, so the client machine reports the Chrome open operation as a success only after the user sees the Chrome application on the screen.

View Planner reports the data collected from the desktop as local mode data, and the data collected from the client as remote mode data. View Planner generates the remote mode report by default; you can also create the local mode report. For the commands to generate each report, go to View Planner Commands.

Below is an operation details table from a 130-VM sample test with five iterations of the standard benchmark profile. Each operation is executed once per iteration, and the first and last iterations are excluded from the count, so the total expected count for each operation is 390. Remote login happens only once on each desktop VM, so its expected count is only 130. Latency values of all operations are recorded, and mean and median values are calculated for each individual operation. Application response times of the sample test are shown in table 5.

view-planner-reports-table-4.png

Table 5. Application response time
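
As a sanity check on those counts, here is a small sketch of the arithmetic; the helper function is illustrative and not part of View Planner.

```python
def expected_operation_count(num_vms: int, iterations: int) -> int:
    """Each operation runs once per iteration on each VM, but the first and
    last iterations are excluded from the count."""
    counted_iterations = max(iterations - 2, 0)
    return num_vms * counted_iterations

print(expected_operation_count(num_vms=130, iterations=5))  # 390 expected executions
# Remote login happens only once per desktop VM, so its expected count is simply 130.
```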

5. Resource Usage

During a run, the resource usage of the hosts involved in the test is collected and reported, as seen in table 6. This report section includes the CPU, memory, and network usage of the hosts. Average, minimum, and maximum usage of each resource is reported.

view-planner-reports-table-6.png

Table 6. Resource usage
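
To show how the summary statistics in table 6 can be derived, here is a minimal sketch that computes the average, minimum, and maximum of a sampled usage series. The sample data and helper function are hypothetical; View Planner obtains these values from vCenter performance counters, as described below.

```python
from statistics import mean

def summarize_usage(samples: list) -> dict:
    """Average, minimum, and maximum of a resource-usage time series (percent)."""
    return {"avg": round(mean(samples), 2), "min": min(samples), "max": max(samples)}

# Hypothetical CPU usage samples (%) taken at regular intervals during the run.
cpu_usage = [12.0, 35.5, 61.2, 74.8, 69.3, 80.1, 77.6, 40.2]
print(summarize_usage(cpu_usage))  # {'avg': 56.34, 'min': 12.0, 'max': 80.1}
```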

Resource usage at any particular time during the run is plotted in graphs, as shown in figure 2. Each graph shows the elapsed time of the run on the X-axis and the resource usage on the Y-axis. All the resource usage charts have vertical lines that indicate checkpoints like remote login, pre-run, run, and post-run. Resource usage is collected through vCenter performance counters. For a description of all the performance counters, go to vSphere 6.0 Performance Counter Description.

| Graph | vCenter Performance Counters |
|---|---|
| CPU usage | cpu.usage, cpu.coreUtilization, cpu.utilization |
| Memory usage | mem.usage |
| Memory | mem.active, mem.consumed |
| Network usage | net.received, net.transmitted |
| Datastore write latency | datastore.totalWriteLatency |
| Datastore average write requests per second | datastore.numberWriteAveraged |
| Datastore read latency | datastore.totalReadLatency |
| Datastore average read requests per second | datastore.numberReadAveraged |

Table 7. Host resource details that View Planner collects and plots

Here is a snapshot of the CPU usage graphs from our test, in figure 2.

view-planner-reports-fig2.png

Figure 2. A snapshot of the CPU usage graphs from our test

The CPU usage and memory usage graphs can have brief spikes up to 100%. But if usage stays at that level for a long duration, we can conclude that CPU or memory is the bottleneck for the run.
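
A minimal sketch of that heuristic follows: flag a resource as a likely bottleneck only when its usage stays near saturation for several consecutive samples. The threshold and window size below are arbitrary illustrative choices, not View Planner settings.

```python
def sustained_saturation(samples: list, threshold: float = 95.0, min_consecutive: int = 5) -> bool:
    """Return True if usage stays at or above `threshold` percent for at least
    `min_consecutive` consecutive samples (a sustained spike, not a brief blip)."""
    run_length = 0
    for value in samples:
        run_length = run_length + 1 if value >= threshold else 0
        if run_length >= min_consecutive:
            return True
    return False

brief_spikes = [60, 99, 70, 65, 100, 72]          # short spikes: not a bottleneck
saturated = [97, 98, 99, 100, 99, 98, 97, 60]     # sustained near 100%: likely bottleneck
print(sustained_saturation(brief_spikes), sustained_saturation(saturated))  # False True
```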

5.1. Getting Optimized VM Consolidation Using View Planner Report

We performed an experiment to find the VM consolidation of a host by running the standard View Planner work profile applications. We selected the following parameters to track performance (table 8).

| Parameter | Threshold | Description |
|---|---|---|
| CPU-sensitive operations (Group A) | < 1 second | 95th percentile of the latency values of all CPU-sensitive operations performed during the test. If this value exceeds the threshold, we can conclude that CPU is the bottleneck of the test. |
| Storage-sensitive operations (Group B) | < 6 seconds | 95th percentile of the latency values of all storage-sensitive operations performed during the test. If this value exceeds the threshold, we can conclude that disk storage is the bottleneck of the test. |
| OE ratio | > 0.90 | Ratio of the number of operations executed to the number of operations expected. If this value falls below the threshold, we can conclude that not all operations were executed successfully. |
| Discarded VM count | < 2% | Number of VMs that were available at the start of the View Planner test but dropped during the test. If this value is greater than zero, we can conclude that not all expected VMs were part of the test. |
| Memory usage | < 90% | Memory usage of the host under test. If this value exceeds the threshold, we can conclude that memory is the bottleneck of the test. |

Table 8. Parameters we chose to track performance

Based on the host configuration, we started the test with 144 VMs. Table 9 shows the screenshots of QoS and host resource usage from the report of our 144-VM run.

view-planner-reports-table-9a.png

view-planner-reports-table-9b.png

Table 9. Host resource usage and quality of service results

From the Table 9 reports, we can deduce the parameters as shown in table 10.

| Performance Parameter | Value | Threshold | Status |
|---|---|---|---|
| CPU-sensitive operations (Group A) | 0.9193 seconds | < 1 second | SUCCESS |
| Storage-sensitive operations (Group B) | 7.0804 seconds | < 6 seconds | FAIL |
| Ratio of actual to expected operations (OE ratio) | 0.98 | > 0.9 | SUCCESS |
| Discarded desktop count | 0 | < 2% | SUCCESS |
| Memory usage of any desktop host | < 50% (approx.) | < 90% | SUCCESS |

Table 10. Performance parameters of the failed 144-VM test

We can see that the QoS of the storage-sensitive operations is above the threshold limit. Even though all the other parameters meet their criteria, we consider this test a failure.

We decreased the VM count to 130 VMs in our run profile and performed another test. The snapshots of resource utilization and QoS shown earlier in this article come from this 130-VM test. From those snapshots, we can deduce the parameters in table 11.

| Performance Parameter | Value | Threshold | Status |
|---|---|---|---|
| CPU-sensitive operations (Group A) | 0.7661 seconds | < 1 second | SUCCESS |
| Storage-sensitive operations (Group B) | 4.9244 seconds | < 6 seconds | SUCCESS |
| Ratio of actual to expected operations (OE ratio) | 1 | > 0.9 | SUCCESS |
| Discarded desktop count | 0 | < 2% | SUCCESS |
| Memory usage of any desktop host | ~47% | < 90% | SUCCESS |

Table 11. Performance parameters of the successful 130-VM test
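
To summarize the decision logic behind tables 10 and 11, here is a minimal sketch that applies the table 8 thresholds to a run's results. The data class and field names are our own illustration, not a View Planner API; the example values are taken from the two runs above (memory usage is approximate).

```python
from dataclasses import dataclass

@dataclass
class RunResult:
    group_a_qos: float       # 95th percentile latency of CPU-sensitive operations (seconds)
    group_b_qos: float       # 95th percentile latency of storage-sensitive operations (seconds)
    oe_ratio: float          # executed operations / expected operations
    discarded_vm_pct: float  # discarded VMs as a percentage of all VMs
    memory_usage_pct: float  # host memory usage (percent)

def run_passes(r: RunResult) -> bool:
    """Apply the thresholds from table 8; every criterion must hold."""
    return (r.group_a_qos < 1.0 and r.group_b_qos < 6.0 and
            r.oe_ratio > 0.90 and r.discarded_vm_pct < 2.0 and
            r.memory_usage_pct < 90.0)

# The 144-VM run (table 10) fails on Group B QoS; the 130-VM run (table 11) passes.
print(run_passes(RunResult(0.9193, 7.0804, 0.98, 0.0, 50.0)))  # False
print(run_passes(RunResult(0.7661, 4.9244, 1.00, 0.0, 47.0)))  # True
```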

Based on the reports of our tests, we can say that our host can accommodate 130 VMs without any compromise on performance.

6. Contact Us

If you have any questions or want to know more, reach out to the VMware View Planner team at [email protected]. The View Planner team actively answers this community email.

