Going back to the early days of virtualization, capacity planning has always been a process that needs to happen at some point. If you are into virtualization, you know what the benefits are and why it is important to go through a thorough capacity planning process, but sometimes it is hard to know what to look for and where to begin.
In my day-to-day work I meet a lot of customers who wish this were a much easier process: they want me to tell them what to look for, and if they are already using vCOps they want a single-pane-of-glass dashboard with all the data they need.
In this 3-part blog post series, I will show you how to pull capacity planning data for ESXi host cluster CPU, memory and storage (to keep things shorter I decided to ignore networking for now) into one vCOps custom dashboard with a single click on a cluster resource kind.
The first step in laying down the dashboard infrastructure is to create all the Super Metrics. For this to work we will use some OOTB (out-of-the-box) metrics and create a Super Metric package that includes some Super Metrics of our own.
In the metrics table, red represents OOTB metrics and green represents Super Metrics.
| CPU | Memory | Storage | Density and Deployment |
| --- | --- | --- | --- |
| CPU Demand (%) | Memory Usage (%) | Cluster Datastores Used Space (GB) | vCPU to pCPU Ratio |
| Cluster Total CPU Capacity (GHz) | Cluster Physical Memory Capacity (GB) | Cluster Total Storage Throughput (KBps) | VM to Host Ratio |
| Cluster Total CPU Demand (GHz) | Cluster Total Memory Usage (GB) | Cluster Total IOPS | Deployed VMs |
| | | Cluster Datastores Total Capacity (GB) | |
| | | Cluster Datastores Used Space (%) | |
Since this is not a "how to create Super Metrics" kind of post, please read the VMware vCenter Operations Manager Administration Guide for the Custom User Interface (pg. 39).
Bouke Groenescheij also has a nice blog post that will help you get started with Super Metrics.
So, now that you know how, you need to actually create the Super Metrics using the following formulas. Do not try to copy the formulas from the screenshots, as each Resource or Resource Kind ID is different in every environment. Like I said, you need to create them yourself.
Cluster Total CPU Capacity (GHz)
sum(This Resource: cpu|totalCapacity_average)/1000
cpu|totalCapacity_average = Resources: Cluster Compute Resource > CPU Usage > Total Capacity (MHz)
Cluster Total CPU Demand (GHz)
sum(This Resource: cpu|demandmhz)/1000
cpu|demandmhz = Resources: Cluster Compute Resource > CPU Usage > Demand (MHz)
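As a quick sanity check (hypothetical numbers, not taken from any screenshot): if the hosts in a cluster report a combined total capacity of 55,200 MHz and a combined demand of 12,400 MHz, these two Super Metrics come out as 55,200 / 1,000 = 55.2 GHz and 12,400 / 1,000 = 12.4 GHz respectively.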
Cluster Physical Memory Capacity (GB)
sum(This Resource: mem|host_provisioned)/1048576
mem|host_provisioned = Resources: Host System > Memory > Provisioned Memory (KB)
Cluster Total Memory Usage (GB)
sum(This Resource: mem|host_usage)/1048576
mem|host_usage = Resources: Host System > Memory > Usage (KB)
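Both memory formulas divide by 1,048,576 (1,024 × 1,024) because the underlying counters are reported in KB. As a hypothetical example, a host with 268,435,456 KB of provisioned memory works out to 268,435,456 / 1,048,576 = 256 GB.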
Cluster Datastores Total Capacity (GB)
sumN(Datastore: Capacity|Total Capacity (GB),2)
Datastore: Capacity|Total Capacity (GB) = Resource Kinds: Datastore > Capacity > Total Capacity (GB)
The second argument to sumN is the depth: datastores typically sit two levels away from the cluster in the vCOps relationship tree (cluster > host > datastore), so a depth of 2 tells sumN to include them.
Cluster Datastores Used Space (%)
sumN(Datastore: Capacity|Used Space (GB),2)/sumN(Datastore: Capacity|Total Capacity (GB),2)*100
Datastore: Capacity|Used Space (GB) = Resource Kinds: Datastore > Capacity > Used Space (GB)
Datastore: Capacity|Total Capacity (GB) = Resource Kinds: Datastore > Capacity > Total Capacity (GB)
The purpose of this Super Metric is to calculate the used space, as a percentage, across all the datastores connected to the cluster (the ratio of used to total space is multiplied by 100 to turn it into a percentage). Currently there is no way to do this for datastore clusters right out of the box (we can use custom groups instead), but in future releases this will be possible.
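A quick worked example with made-up numbers: a cluster with two datastores of 500 GB and 1,500 GB (2,000 GB total capacity) and 800 GB used overall gives 800 / 2,000 × 100 = 40% used space.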
After creating all the Super Metrics and adding them to a package, you should see something like this. Note that Super Metric IDs can differ between environments.
The last step is applying the package to the "Cluster Compute Resource" resource kind. Under the ENVIRONMENT tab, go to ENVIRONMENT OVERVIEW.
Select Resource Kinds > Cluster Compute Resource
Next, we need to apply the Super Metric package to the clusters. You can choose all of your clusters or just pick one; it's up to you. In this example I will choose both of my clusters. After doing so, click the "Edit Resource" button.
Select the Super Metric package you created and apply it to the cluster resource.
Wait a bit, as Super Metric calculations can take some time to complete and become available for selection (this depends on the size of the environment, but usually no more than 5-10 minutes).
Log in to the vSphere UI, select one of your clusters and go to Operations > All Metrics. Notice how all the newly created Super Metrics are now available for the cluster.
You can double-click each one of them to see it in the Metric Chart on the right.
Don't forget to stay tuned for the next part in this series, where I will show you how to manipulate vCOps interaction XML files on our journey to create an awesome 1-click interactive capacity planning custom dashboard.
Hi, is there any limit on the number of Super Metrics per package?
Not that I'm aware of.
I am running vCOps 5.8 and cannot get the values in the Planning > Capacity > Datastore Inventory view to sync with the Cluster Datastores Total Capacity (GB) Super Metric that you provided.
Can you verify that these values should match?
I'm not sure what you mean by "should match". Can you explain?
If you choose a cluster in the main vCOps dashboard and look at the Datastore Inventory view in Planning, it lists all the datastores for the cluster and the total capacity per datastore, along with other data.
If I manually sum the total capacity of all the datastores in the cluster, I should get the same number as the Super Metric – Datastore: Capacity|Total Capacity (GB). However, I find that those numbers do not match.
Hi – Regarding the “Cluster Total CPU Demand (GHz)” metric that you are using – is it taking HA into account or is it the raw CPU capacity of the cluster?
Raw CPU capacity.
Lior – so, as a follow-up question: if we consider an HA-enabled cluster (let's assume N+1), then we should be looking at a different metric, as we would want to know the capacity for N hosts – does that make sense?
Thanks for your earlier reply!
I agree that you should take HA into consideration, but currently there is no OOTB option to do so. You can do it with a Super Metric, though…
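As a rough sketch only (using the same shorthand notation as the formulas in the post, and assuming a maxN function is available alongside sumN), an N+1 CPU capacity Super Metric could subtract the largest host from the cluster total:
(sum(This Resource: cpu|totalCapacity_average) - maxN(Host System: cpu|totalCapacity_average,1))/1000
Adjust the depth and the subtracted term to match your own admission control policy.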
I am trying to create a Super Metric for "Cluster Total CPU Demand (GHz)".
I go in and select Cluster Compute Resource, and under Adapter Kinds I select Demand (MHz).
My formula is
sum(Cluster Compute Resource: CPU Usage|Demand(MHz))/1000
I have applied the metric package at the datacenter level and cluster level but I never get any data in the charts. What am I doing wrong?
Thanks,
Greg
http://blogs.vmware.com/management/2014/03/vcenter-operations-management-tech-tips-tip-24-1-click-capacity-planning-custom-dashboard-part-1.html
How would we go about calculating in, say, a buffer for HA?
http://imallvirtual.com/dont-forget-ha-admission-control-can-use-super-metrics/
Hi Lior
The Cluster Datastores Total Capacity (GB) Super Metric shows all the datastores belonging to the cluster, including local datastores and swap datastores.
Is there a way to filter those datastores out?
Thanks
Ido
You can, by creating a dynamic group, but it will not be a scalable solution because of current product limitations.
I will write a blog post about it after the current vCloud blog post series.
Hello,
First off, thank you for the amazing dashboard.
I need your help, as I am facing issues with the Super Metrics; I am using vROps 6.2.
The Cluster Total CPU Demand and Capacity metrics are not working.
Demand looks like this:
Sum(Cluster Compute Resource: CPU|Demand)/1000
Capacity looks like this:
Sum(Cluster Compute Resource: CPU|Total Capacity)/1000
The Disk Space Used widget also says "no data to display".
Please suggest.
Regards
Samesh