1-Click Capacity Planning vCOps Dashboard – Part 1

Going back to the early days of virtualization, capacity planning has always been a process that needs to be done at some point. If you are into virtualization, you know what the benefits are and why it's important to go through a thorough capacity planning process, but sometimes it's hard to know what to look for and where to begin.

In my day-to-day work I meet a lot of customers who wish this were a much easier process. They want me to tell them what to look for, and if they are already using vCOps they want a single-pane-of-glass dashboard with all the data they need.

In this 3-part blog post series, I will show you how to pull cluster-level capacity planning data for CPU, memory and storage (to keep things shorter, I decided to ignore networking for now) into one vCOps custom dashboard, with a single click on a cluster resource kind.

The first step in laying down the dashboard infrastructure is to create all the Super Metrics. For this to work we will use some OOTB metrics and create a Super Metric package which will include some Super Metrics of our own.

In the metrics table, the Super Metrics we will create ourselves are marked with an asterisk (*); the unmarked metrics are OOTB.

| Density and Deployment | CPU | Memory | Storage |
|---|---|---|---|
| vCPU to pCPU Ratio | CPU Demand (%) | Memory Usage (%) | Cluster Datastores Used Space (GB) |
| VM to Host Ratio | Cluster Total CPU Capacity (GHz) * | Cluster Physical Memory Capacity (GB) * | Cluster Total Storage Throughput (KBps) |
| Deployed VMs | Cluster Total CPU Demand (GHz) * | Cluster Total Memory Usage (GB) * | Cluster Total IOPS |
|  |  |  | Cluster Datastores Total Capacity (GB) * |
|  |  |  | Cluster Datastores Used Space (%) * |


Since this is not a "how to create Super Metrics" kind of post, please read the VMware vCenter Operations Manager Administration Guide for the Custom User Interface (pg. 39).

Bouke Groenescheij also has a nice blog post that will help you get started with Super Metrics.

So, now that you know how, you need to actually create the Super Metrics using the following formulas. Do not try to copy the formulas from the screenshots, as each Resource or Resource Kind ID is different in each environment. Like I said, you need to create them yourself.

Cluster Total CPU Capacity (GHz)

sum(This Resource: cpu|totalCapacity_average)/1000

cpu|totalCapacity_average = Resources: Cluster Compute Resource > CPU Usage > Total Capacity (MHz)

01. Cluster Total CPU Capacity (GHz)

Cluster Total CPU Demand (GHz)

sum(This Resource: cpu|demandmhz)/1000

cpu|demandmhz = Resources: Cluster Compute Resource > CPU Usage > Demand (MHz)

02. Cluster Total CPU Demand (GHz)
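To make the unit conversion concrete, here is a small Python sketch (illustrative only, not vCOps code) of the arithmetic these two CPU Super Metrics perform; the sample MHz values are made up:

```python
# Illustrative sketch of the CPU Super Metric arithmetic (not vCOps code).
# The /1000 in the formulas simply converts MHz to GHz.

def mhz_to_ghz(mhz: float) -> float:
    """Convert a MHz value to GHz."""
    return mhz / 1000.0

# Hypothetical cluster-level values, as vCOps would report them in MHz.
total_capacity_mhz = 57600  # cpu|totalCapacity_average
demand_mhz = 14400          # cpu|demandmhz

print(mhz_to_ghz(total_capacity_mhz))  # 57.6 (GHz)
print(mhz_to_ghz(demand_mhz))          # 14.4 (GHz)
```

Comparing the two converted values also gives you a quick feel for cluster CPU headroom.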

Cluster Physical Memory Capacity (GB)

sum(This Resource: mem|host_provisioned)/1048576

mem|host_provisioned = Resources: Host System > Memory > Provisioned Memory (KB)

03. Cluster Physical Memory Capacity (GB)

Cluster Total Memory Usage (GB)

sum(This Resource: mem|host_usage)/1048576

mem|host_usage = Resources: Host System > Memory > Usage (KB)

04. Cluster Total Memory Usage (GB)
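The memory formulas work the same way; the 1048576 divisor is simply 1024 × 1024, the number of KB in one GB. A quick Python sketch (illustrative only, with made-up per-host KB values):

```python
# Illustrative sketch of the memory Super Metric arithmetic (not vCOps code).
# 1048576 = 1024 * 1024, i.e. the number of KB in one GB.

KB_PER_GB = 1024 * 1024  # 1048576

def kb_total_to_gb(values_kb):
    """Sum a list of KB values and express the total in GB."""
    return sum(values_kb) / KB_PER_GB

# Hypothetical mem|host_provisioned values for a 3-host cluster, in KB:
# two hosts with 256 GB and one with 128 GB of physical memory.
provisioned_kb = [268435456, 268435456, 134217728]

print(kb_total_to_gb(provisioned_kb))  # 640.0 (GB)
```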

Cluster Datastores Total Capacity (GB)

sumN(Datastore: Capacity|Total Capacity (GB), 2)

Datastore: Capacity|Total Capacity (GB) = Resource Kinds: Datastore > Capacity > Total Capacity (GB)

05. Cluster Datastores Total Capacity in GB

Cluster Datastores Used Space (%)

sumN(Datastore: Capacity|Used Space (GB), 2)/sumN(Datastore: Capacity|Total Capacity (GB), 2)*100

Datastore: Capacity|Used Space (GB) = Resource Kinds: Datastore > Capacity > Used Space (GB)

Datastore: Capacity|Total Capacity (GB) = Resource Kinds: Datastore > Capacity > Total Capacity (GB)

06. Cluster Datastores Used Space

The purpose of this Super Metric is to calculate the used space of all datastores connected to the cluster, as a percentage of their total capacity. Currently there is no way to do this for datastore clusters right out of the box (we can use custom groups instead), but in future releases this will be possible.
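In plain arithmetic, this Super Metric divides the summed used space by the summed capacity and scales the ratio to a percentage. A Python sketch (illustrative only, with made-up per-datastore GB values):

```python
# Illustrative sketch of the Used Space (%) Super Metric (not vCOps code).
# The depth argument (2) in sumN tells vCOps how many levels to walk down
# the relationship tree (cluster -> host -> datastore); here, plain lists.

def used_space_percent(used_gb, capacity_gb):
    """Total used space across datastores as a percentage of total capacity."""
    return sum(used_gb) / sum(capacity_gb) * 100

# Hypothetical values for three datastores attached to a cluster, in GB.
used = [500, 750, 250]
capacity = [1000, 1000, 1000]

print(used_space_percent(used, capacity))  # 50.0 (%)
```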

After creating all the Super Metrics and adding them to a package, you should see something like this. Note that Super Metric IDs can be different in each environment.

07. All SM

The last step is to apply the package to the "Cluster Compute Resource" resource kind. Under the ENVIRONMENT tab, go to ENVIRONMENT OVERVIEW.


Select Resource Kinds > Cluster Compute Resource

09. Resource Kinds

Next, we need to apply the Super Metric package to the clusters. You can choose all your clusters or just pick one; it's up to you. In this example I will choose both of my clusters. After doing so, click the "Edit Resource" button.

10. Edit Resource

Select the Super Metric package you created and apply it to the cluster resources.

11. Apply Package

Wait a bit, as Super Metric calculations can take some time to complete and become available for selection (depending on the size of the environment, usually no more than 5-10 minutes).

Log in to the vSphere UI, select one of your clusters and go to Operations > All Metrics. Notice how all the newly created Super Metrics are now available for the cluster.

You can double-click each of them to see it in the Metric Chart on the right.

11. vSphere UI

Don't forget to stay tuned for the next part in this series, where I will show you how to manipulate vCOps interaction XML files on our journey to create an awesome 1-click interactive capacity planning custom dashboard.


  1. I am running vcops 5.8 and cannot get the values in the planning, capacity, datastore inventory view to sync with the Cluster Datastores Total Capacity (GB) super metric that you provided.

    Can you verify that these values should match?

      • If you choose a cluster in the main vcops dashboard and look at the datastore inventory view in planning, it will list all the datastores for the cluster and total capacity per datastore along with other data.

        If I manually sum the total capacity for all datastores in the cluster I should get the same number as when I create the supermetric – Datastore: Capacity|Total Capacity (GB) . However I find that those numbers do not match,

  2. Hi – Regarding the “Cluster Total CPU Demand (GHz)” metric that you are using – is it taking HA into account or is it the raw CPU capacity of the cluster?

      • Lior – So, as a follow up question – if we consider an HA enabled cluster (let’s assume N+1), then we should be looking at a different metric, as we would want to know the capacity for N hosts – does that make sense?

        Thanks for your earlier reply!

  3. I am trying to create a supermetric for "Cluster Total CPU Demand (GHz)".

    I go in and I select Cluster Compute Resource, and under Adapter Kinds I select Demand (MHz).

    My formula is

    sum(Cluster Compute Resource: CPU Usage|Demand(MHz))/1000
    I have applied the metric package at the datacenter level and cluster level but I never get any data in the charts. What am I doing wrong?

  4. Hi Lior
    The Cluster Datastores Total Capacity (GB) Super Metric shows all the datastores belonging to the cluster, including the local datastores and swap datastores.
    Is there a way to filter those datastores out?


    • You can by creating a dynamic group, but it will not be a scalable solution because of current product limitations.
      I will write a blog post about it after the current vCloud blog post series.

  5. Hello,
    First off, thank you for the amazing dashboard.
    I need your help, as I am facing issues with the Super Metrics; I am using vROps 6.2.
    The Cluster Total CPU Demand and Capacity metrics are not working.
    Demand Looks like this:
    Sum(Cluster Compute Resource: CPU|Demand)/1000
    Capacity Looks like this:
    Sum(Cluster Compute Resource: CPU|Total Capacity)/1000

    and Disk Space Used widget also says “no data to display”.

    Please suggest.


1 Trackback / Pingback

  1. Create a One-Click Cluster Capacity Dashboard Using vCOps | VMware Consulting Blog - VMware Blogs
