MDT

New System Center Courses on Microsoft Virtual Academy (MVA)!

The USMT team blog - Thu, 08/01/2013 - 19:52

Hopefully everybody has been reading and enjoying the “What’s New in 2012 R2?” series from our VP Brad Anderson. If you haven’t been following along, check it out! http://blogs.technet.com/b/in_the_cloud/archive/tags/what_2700_s+new+in+2012+r2/

With so much shipping so quickly (2012, 2012 SP1, and soon 2012 R2 in only about a year and a half), it can feel daunting to keep up. We’re here to help you get up to speed on all the new technology! We have recently published two very timely and informative courses on System Center to help you get the knowledge you need.

Customizing and Extending System Center Course

The first one is an entire 6 part series on customizing and extending System Center 2012.  Here’s the abstract:

This course focuses on extensibility and customization with System Center 2012 SP1, enabling customers and partners to create their own plug-ins for the System Center components. First you will learn how to create service templates in Virtual Machine Manager (SCVMM), allowing management of a multi-tiered application as a single logical unit. Next you will learn the basics of creating a custom Integration Pack using Orchestrator (SCO) to enable you to integrate custom activities within a workflow. The next two modules look at Operations Manager (SCOM) extensibility through creating Management Packs to monitor any datacenter component and Dashboards to visualize the data. The final two modules look at Service Manager (SCSM) Data Warehouse and reporting to generate datacenter-wide information, and then how to customize Service Catalog offerings to enable end-users to create self-service requests through a web portal.

· VMM: Service Templates – Damien Caro (Microsoft IT Pro Evangelist)

· SCO: Integration Packs – Andreas Rynes (Microsoft Datacenter Architect)

· SCOM: Management Packs – Lukasz Rutkowski (Microsoft PFE)

· SCOM: Dashboards – Markus Klein (Microsoft PFE)

· SCSM: Data Warehouse & Reporting – Travis Wright (Microsoft PM)

· SCSM: Service Catalogs – Anshuman Nangia (Microsoft PM)

Go here to take the course:

http://www.microsoftvirtualacademy.com/training-courses/system-center-2012-sp1-extensibility

Private and Hybrid Cloud Jump Starts

These two jump start events provide a comprehensive view of how to build a private cloud with Windows Server and System Center and how to embrace a “hybrid cloud”.  Pete Zerger (MVP) was the instructor, and Microsoft evangelists Symon Perriman and Matt McSpirit hosted the events.  If you haven’t seen a presentation by the famous Pete Zerger, you need to!

Part 1: Symon & Pete: Build a Private Cloud with Windows Server & System Center Jump Start: http://www.microsoftvirtualacademy.com/training-courses/build-a-private-cloud-with-windows-server-system-center-jump-start

Part 2: Matt & Pete: Move to Hybrid Cloud with System Center & Windows Azure Jump Start: http://www.microsoftvirtualacademy.com/training-courses/move-to-hybrid-cloud-with-system-center-windows-azure-jump-start

 

Enjoy!

Categories: MDT


Using Server Inventory Reports to Help Stay Compliant with Service Provider Licensing Agreement (SPLA)

Microsoft Deployment Toolkit Team Blog - Thu, 08/01/2013 - 11:30

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Administration and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.”

As described in that blog post, helping service providers stay compliant within the SPLA framework is another key investment area for this release. Service providers we talked to during the planning phases clearly identified how difficult it is to accurately report on the licenses consumed within the datacenter, a problem that is especially acute in today’s highly dynamic datacenters.

Overview of Server Inventory Reporting

The Creating Usage Analytics Reports using Excel blog post provided an overview of how to use Excel to create powerful reports that help the provider gain insight into the usage patterns of their customers. This blog post focuses on how the service administrator can leverage the same system to create server inventory reports. These reports give service providers insight into the Windows Servers and hosted VM instances that power their services, and help them assess the licensing impact with respect to the SPLA framework in the 2012 R2 release. As shown in the figure below, Service Reporting extracts fabric (host) data from Operations Manager (also called OM) and processes the data relevant for licensing scenarios, as highlighted by the red circle.

As called out in the earlier blog post on this topic, Service Reporting is a data warehousing solution developed on top of the Microsoft Business Intelligence (BI) stack.

In the 2012 R2 release, data is correlated from two sources:

  1. Windows Azure Pack Usage (Tenant Resource Utilization data)
  2. Operations Manager (Fabric data such as Servers, VM Instances etc.)

The Service Reporting is designed for the Service Administrator who can create reports on their own using Excel power pivots and obtain the insights that help them in their capacity planning needs. While the previous blog went into the details of how to create reports from Windows Azure Pack for Tenant Resource Utilization, this blog will focus on how to leverage the Server Inventory report that is shipped out of the box in 2012 R2.

Server Inventory Data Pipeline

In the figure below, the VM usage data source is VMM (Virtual Machine Manager). This data is periodically collected and stored in the OM (Operations Manager) database. The data in the Operations Manager database contains information about the Hyper-V hosts and the Virtual Machines that are hosted on those servers.

As illustrated in the figure below, the details about the servers and the guest VMs are extracted by the Service Reporting component and then processed into the relevant OLAP cubes, which are used to create the Excel reports that contain the Server Inventory information.

 

 

Scenarios

For the 2012 R2 release we targeted the server inventory scenarios below. The goals were to enable the Service Provider to create accurate SPLA reports, understand trends, and use the report for planning and auditing scenarios.

  1. Report on the number of processors and monthly VM high water mark on the Hyper-V hosts
  2. Trending data for processor count and VM high water mark, for up to 3 years
  3. Detailed view of all the servers and the VMs for up to 3 years
Configuring the Server Inventory Report

The prerequisite for the Server Inventory Report is that the Service Reporting system must be working correctly and Server Inventory data from Operations Manager must be flowing into the system. This blog does not address the installation and deployment of Service Reporting component.

The Server Inventory report shipped out of the box in 2012 R2 needs to be configured to connect to the Analysis Server that holds the Server Inventory cubes. This can be easily done by opening the Server Licensing Report from the Reports folder in the install directory of the Service Reporting component. Navigate to the Data->Connections menu and open up the default connection that is shipped out of the box and edit it. As you can see in the figure below, you can navigate to the Definition tab in the Connection properties.

The connection string to use here is highlighted below.

Ensure you add the correct connection properties and save. The only property you should change is the Data Source (fab-servicereporting in the connection string below).

Provider=MSOLAP.5;Integrated Security=SSPI;Persist Security Info=True;Initial Catalog=UsageAnalysisDB;Data Source=fab-servicereporting;MDX Compatibility=1;Safety Options=2;MDX Missing Member Mode=Error

Make sure that the command text contains SRVMInventoryCube.

At this point, you should be able to view the server inventory report dashboard.

Click on the Summary worksheet and you should see content similar to this figure.

Depending on the data in the system, the slicers may show different values that are selected by default. The left axis shows the processor count and the right axis shows the VM Instance count. If the slicer values are changed, the report will change as well.

As you can see in this report, the processor count and the VM instance count grew between May and June of 2013.

An important thing to note is that if you try to print this page for your records, the slicers will not be displayed, since the print area is configured to exclude them.

Further, there is a placeholder for key information that allows providers to identify themselves in the report when scenarios call for communicating with license resellers.

Detailed Report

The Server Inventory Report has a detail worksheet. It contains the records behind the summary report, which is useful when you want to understand the finer details. As you can see in this figure, a monthly breakdown of how many processors and VM instances each host had is available.

Expanding a host lists all the VM instances hosted on that server.

This view is agnostic of tenants and workloads because the licensing scenarios require only processor counts and high water mark of VMs on the servers for a given month.

Conclusion

This is a very powerful capability that lets service providers accurately and easily report license consumption based on the SPLA framework with the 2012 R2 release.

In subsequent blogs, we will provide more details as we hear more from our customers.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

Categories: MDT

Creating Usage Analytics Reports using Excel

Microsoft Deployment Toolkit Team Blog - Thu, 08/01/2013 - 11:30

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Administration and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.”

As described in that blog post, enabling usage analytics scenarios for service providers is a key investment area for this release. Service providers cannot successfully monetize their services in the absence of a system that tracks and provides analytics on tenant resource usage.  

Overview of Service Reporting

The “How to Integrate Your Billing System with the Usage Metering System” blog post provided an overview of the Usage Metering System. This blog post focuses on how we extract the same data, provide analytics on tenant VM resource utilization, and make the results available in Excel pivot tables (analysis via Performance Point is covered in a subsequent blog post). As shown in the figure below, the Service Reporting component extracts the data from the Usage REST API and transforms it into OLAP cubes for analytics.

  

Service Reporting is a data warehousing solution developed on top of the Microsoft Business Intelligence (BI) stack.

In the 2012 R2 release, data is correlated from two sources:

  1. Windows Azure Pack Usage (Tenant Resource Utilization data)
  2. Operations Manager (Fabric data such as Servers, VM Instances, etc.)

Service Reporting is designed for the service administrator to create reports using Excel pivot tables to obtain the insights that help them in their capacity planning needs and show-back situations.

VM Usage Data Pipeline

In the figure below, the VM usage data source is VMM (Virtual Machine Manager). This data is periodically collected and stored in the OM (Operations Manager) database, and from there it is collected and stored in the WAP (Windows Azure Pack) Usage Database along with usage data of other resources. As mentioned earlier, the WAP Usage system was detailed in the blog post How to Integrate Your Billing System with the Usage Metering System.

The Service Reporting component reads data from the Usage Database and then transforms the raw usage data into OLAP cubes for analytics. The data in these OLAP cubes is available for visualization and drill-down analytics using Excel and Performance Point.

 

 

Scenarios

For the 2012 R2 release we targeted the following usage analysis scenarios:

  1. Usage trends across different time dimensions (hourly, daily, monthly, quarterly, yearly) to provide critical trending patterns
  2. Pivoting by subscriptions to understand which subscribers are impacting the business
  3. Pivoting by clouds/plans to understand which plans are used the most
  4. Side-by-side comparison between allocated capacity for tenants and their usage to help understand utilization ratios

These scenarios can be visualized in Excel and in Performance Point. Excel is a very popular tool for most reporting needs, and has pivot table capabilities that come in very handy for ad-hoc analytics. Excel workbooks can contain data to be analyzed even when disconnected from the SQL Server Analysis Server.

Configuring Usage Reports

The prerequisites for Usage Reports to work are that the Service Reporting component must be working correctly and usage data must be flowing into the system. This blog does not address the installation and deployment of the Service Reporting component. The Excel Usage Reports shipped out of the box in 2012 R2 need to be connected to the Analysis Server that holds the Usage Data Cube. This can be easily done by opening the Usage Report from the Reports folder in the install directory of the Service Reporting component. Navigate to the Data->Connections menu in Excel and open up the default connection that is shipped out of the box and edit it. As you can see in the figure below, you can navigate to the Definition tab in the Connection properties.

The connection string to use here is highlighted below.

Ensure you add the correct connection properties and save. The only property you should change is the Data Source (fab-servicereporting in the connection string below).

Provider=MSOLAP.5;Integrated Security=SSPI;Persist Security Info=True;Initial Catalog=UsageAnalysisDB;Data Source=fab-servicereporting;MDX Compatibility=1;Safety Options=2;MDX Missing Member Mode=Error

Make sure the command text contains SRUsageCube.

Once these connection properties are saved, the Excel report can now be populated with data from the Usage Data Cube and its capabilities.

To test it out, you can create a brand new worksheet and then create a pivot table using the connection you just created.

Step 1: Open a new worksheet.

Step 2: Click on Insert->Pivot Table.

Step 3: Make sure you have External data source selected.

Step 4: Click on Choose Connection and select the data connection configured in the previous step.

Step 5: Save the changes and close the dialog to go back to the Excel worksheet.

If the data connection is configured correctly, you should be seeing this form on the right side of your worksheet.

Click on “All” and you will see a drop down with the following items.

Click on the Settings icon (the round sprocket) and collapse all the fields.

You will see all 19 “measures” that are available out of the box for reporting different utilization data points.

At this point, you are ready to recreate the report that is provided in the sample Usage Report.

Explore the Pivot Table fields and try to compose the report similar to the one in the figure below by dragging and dropping the different fields to the appropriate areas (Filters, Columns, Rows, Values).

As you add the rows and columns, you will start to see the report shape up to look like the figure below.

Slicers

Once you have a report that looks like this, you can augment it by adding slicers to give you filtering options.

Go to Insert->Slicer and choose the same connection that the pivot table is using. This will provide you with options to choose the necessary filter. Select VMM User Role (which is the same as Subscriptions) to see the list of subscribers in the system; selecting one gives you the ability to scope the results.

In this instance, I have created a slicer with “VMM User Role” but changed the Display name to “Subscriptions” to make it more intuitive. All the available “Subscriptions” are shown in this list and all of them are in scope.

Now, if you select just one of them, say “Unknown User Role”, you will see the report change to display only the records related to that subscription, as shown in the table below.

As you can see, all the values instantly change to reflect the selected filter, giving the administrator a great ability to look at subscribers and compare them side by side. You can multi-select within the same slicer and chain other slicers to provide richer analytics.

 

Conclusion

While Excel is super powerful and ubiquitous, Performance Point allows greater collaboration by enabling dashboards. By connecting to the Analysis Server of Service Reporting, one can take advantage of all the key fields available in Service Reporting to create powerful dashboards that help the service administrator see the key metrics of the business in a single location.

Subsequent blog posts will go into the details of configuring Performance Point dashboards.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

 

Categories: MDT

How to Integrate Your Billing System with the Usage Metering System

Microsoft Deployment Toolkit Team Blog - Thu, 08/01/2013 - 11:30

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Provider experiences in enabling the billing and chargeback of Tenant Resource Utilization and how it applies to Brad’s larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post:   What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.

As described in that blog post, enabling billing scenarios for the service providers is a key investment area for this release. Service providers cannot successfully monetize their services in the absence of a system that tracks and reports on tenant resource utilization. These services are offered on a subscription basis and therefore it is critical that the resource utilization is reported at the subscription granularity to assist in billing scenarios. 

Overview of the Usage Metering System

The usage system is located alongside the Service Management API in the Windows Azure Pack (WAP) stack. This enables it to access tenant utilization data for all the services provided in the WAP stack and expose a REST API that can be leveraged to integrate with the provider’s billing system, as illustrated in the figure below.

 

Service providers have invested a lot in their own billing system and it was critical that the 2012 R2 release be able to integrate with the existing systems in place. Therefore, we targeted our investments to ensure that 2012 R2 integrates easily with various billing providers and ITFM (IT Financial Management) products that are in the market.

It is important to note that there is no billing system shipped in the 2012 R2 release. Service providers have to create the billing integration module (also referred to as a “billing adaptor”) to provide data to the billing system they are using.

Now, let’s go a little deeper and look at the building blocks of the usage metering system and how it’s architected.

The Usage Metering System has four main components. Three of them, the Data Generator, the Data Collector, and the Usage Database, are internal to the system; the fourth, the Usage API, is an external-facing API that the billing adapter interfaces with to extract tenant resource utilization data.

Data Generator

The Data Generator tier represents the services (resource providers) registered as part of the system. They collect information specific to a subscription and expose it to the Usage Collector. The Usage Collector expects information to be made available following a specific data contract, which is the same across all the providers; every provider in the system adheres to this contract. IaaS metrics in Windows Azure Pack are provided by the VM Clouds resource provider.

Data Collector

The Data Collector is an internal component that periodically collects usage information from all the registered Data Generators and stores it in the Usage Database.

Usage Database

The Usage Database is a transient store, which stores all the data from the various Data Generators for a period of 30-40 days. The expectation is that during this time, the billing system would have extracted the data from this database for billing purposes.

Usage API

This is a RESTful API and is the only way to extract data from the Usage Database. Service providers typically have a billing system that lets them generate monthly bills for their subscribers, and they can easily integrate with it by extracting data from the Usage Database through the Usage API. The component that customers develop to integrate with their billing system is called a “Billing Adapter”, which serves as a bridge between the Usage Metering System and the customer billing system.

 

In the figure below, in the red circles, you can see the VM Clouds resource provider, alongside other resource providers such as Service Bus, generating the IaaS resource utilization data, which is collected and stored in the Usage Database and made available through the Usage API.

 

The Usage API can be leveraged to create the billing adaptor and interface with the billing system within the provider datacenter. In the figure below, you can see the “billing adaptor” serving to integrate the Usage Metering System with the billing provider within the provider datacenter.

 

The “Service Reporting” component and the analytics it provides are discussed in the blog post titled “Creating Usage Analytics Reports using Excel and Performance Point”, while this blog post details how to create a “Billing Adaptor”.

Interacting with the Usage System

This section explains the ways an external system can interact with the Usage Metering System. Two different types of information are available through the Usage API:

  1. Tenant resource utilization for all subscriptions
  2. Plan, add-on, subscription, and account information

The information is presented via two channels:

  1. Usage API that queries all the historical data
  2. Real time CRUD events via the Event Notification System.

The billing adaptor uses both of these channels to effectively create billing reports while responding in real time as plans, subscriptions, and accounts get created and managed in the environment.

Usage API (Exposed on the Usage Endpoint)

Usage Data

The Usage endpoint exposes an API to return tenant resource utilization data pertaining to every subscription across services. The caller (“Billing Adapter”) needs to provide the “startid”. This parameter informs the Usage Metering System to return usage data, starting from that ID. The Billing Adapter advances the “startid” based on the number of records returned for the subsequent call.

 

Method: GET
API: /usage?startId={startId}&batchSize={batchSize}
Response: UsageRecordList<UsageRecord>

Plans\Addon\Subscription Data

The Usage endpoint also exposes APIs to return data on existing plans, add-ons, subscriptions, etc.

Method: GET
API: billing/plans?startId={startId}&batchSize={batchSize}
Response: UsageEventList<Plan>

Method: GET
API: billing/addons?startId={startId}&batchSize={batchSize}
Response: UsageEventList<AddOn>

Method: GET
API: billing/subscriptions?startId={startId}&batchSize={batchSize}
Response: UsageEventList<Subscription>

Method: GET
API: billing/planServices?startId={startId}&batchSize={batchSize}
Response: UsageEventList<ResourceProviderReference>

Method: GET
API: billing/planAddons?startId={startId}&batchSize={batchSize}
Response: UsageEventList<AddOnReference>

Method: GET
API: billing/subscriptionAddons?startId={startId}&batchSize={batchSize}
Response: UsageEventList<AddOnReference>

Notes:

STARTID is the record id of the first record you want to fetch in a particular cycle.

BATCHSIZE is the maximum number of records you want to fetch.

USAGE-RESTAPI-ENDPOINT can be found at https://<Admin-API-Machine-Name>:30022

Configuration

The administrator needs to ensure that the Usage Metering Service is configured correctly to authenticate the Billing Adaptor. That can be done by ensuring that the service is capable of accepting the correct credentials that will be used to authenticate. The steps below describe how to ensure that the credentials are set properly. Note: During the installation process, the password used is a random sequence and hence this step is necessary to establish connectivity.

On the WAP deployment, launch the Management Service PowerShell Module on the Admin API server.

Then, run the commands below:

· Set-MgmtSvcSetting -Namespace UsageService -Name Username -Value '<EnterUserName>'

· Set-MgmtSvcSetting -Namespace UsageService -Name Password -Value '<EnterPassword>' –Encode

Once the username and password are set to known values, these values can be used by the Billing Adapter to authenticate.

Consuming the Usage REST API

The following steps are required to consume the Usage REST API:

  • Define an HTTP client
  • Construct a URI to query the Usage Metering Service (startId is the record ID of the first record to fetch; batchSize is the maximum number of records to fetch)
  • Execute the API call and read the usage data
  • Deserialize the response using the Data Contracts (as in the sample below)
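The steps above can be sketched in Python (the shipped sample uses C#; the function names here are illustrative, not from the SDK):

```python
import base64
import json
import urllib.request


def basic_token(username, password):
    """Basic-auth token built from the Usage endpoint credentials."""
    return base64.b64encode(f"{username}:{password}".encode()).decode()


def make_usage_request(base_address, username, password, start_id=0, batch_size=50):
    """Steps 1-2: define the client request and construct the query URI.

    base_address is e.g. https://<Admin-API-Machine-Name>:30022/.
    """
    url = f"{base_address}usage?startId={start_id}&batchSize={batch_size}"
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Basic " + basic_token(username, password))
    req.add_header("Accept", "application/json")  # or application/xml
    return req


def read_usage(req):
    """Steps 3-4: execute the call and deserialize the JSON response."""
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```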
Usage Data Model

The Usage Data Model is shown in the figure below and can be used to interpret the data returned by the Usage API.

Event Notification System

The Service Management API keeps track of events within the Usage Metering System and sends notifications to any registered subscriber (e.g. a Billing Adapter). Examples of events are plan, add-on, and subscription creation/updates, and account creation.

Notifications are sent as a POST call to an endpoint registered with the Usage Metering System. The Management Service PowerShell Module should be used to define the notification endpoint. Note that the notification endpoint must end with a trailing slash.

Description: subscribing for plan, add-on, and account changes.

Cmdlet: Set-MgmtSvcNotificationSubscriber

Parameters: -NotificationSubscriber, -Name, -Enabled, -SubscriberType, -Endpoint, -AuthenticationMode, -AuthenticationUsername, -AuthenticationPassword, -EncryptionKey, -EncryptionAlgorithm, -ConnectionString, -Server, -Database, -UserName, -Password

SubscriberType:

  • BillingService
  • MandatoryService
  • OptionalService

Example:

Set-MgmtSvcNotificationSubscriber -Name Billing –SubscriberType BillingService -Enabled $false -Endpoint https://localhost/ -AuthenticationMode Basic

The Billing Adapter can be set up to handle events in a blocking or non-blocking manner. The BillingService and MandatoryService subscriber types are both blocking; the only non-blocking option is OptionalService. If the Billing Adapter is blocking, a plan creation event in the Service Management API should trigger a corresponding plan to be created in the billing system; if that operation fails, the plan creation in the Service Management API also fails. This keeps the platform and the billing system consistent.
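As a hypothetical sketch of a blocking handler (Python for illustration; a real subscriber would be an HTTP endpoint receiving the POST, and both names below are made up):

```python
def handle_plan_event(event, sync_plan_to_billing):
    """Return the HTTP status for a notification POST from the Service
    Management API.

    event: a dict with "Method" ("Post"/"Delete") and "Entity" (the plan data).
    sync_plan_to_billing: callable returning True when the billing system
    accepted the change.
    """
    ok = sync_plan_to_billing(event["Method"], event["Entity"])
    # For a blocking subscriber (BillingService/MandatoryService), a non-2xx
    # response makes the originating operation fail in the Service Management
    # API, keeping the platform and the billing system consistent.
    return 200 if ok else 500
```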

Notification Data Contracts

Notifications sent to the Billing Adapter adhere to the NotificationEvent<T> type, where T can be one of the following objects:

  • Plan
  • PlanAddOn
  • AdminSubscription
  • ResourceProviderReference
  • PlanAddOnReference

When you download WAP (Windows Azure Pack), the data contracts can be found under:

· \SampleBillingAdapter\DataContracts\*

NotificationEvent has two important properties:

1. Method, which can have the following values:

· Post, to create a new account/subscription/add-on/plan

· Delete, to delete an account/subscription/add-on/plan

· Post, to update a plan

2. Entity, which carries the object (from the list above) whose creation, update, or deletion raised the event.

Pricing APIs

The Pricing API is designed to let the billing system in the service provider's datacenter specify prices for plans and add-ons, which then flow into the 2012 R2 system. The Billing Adapter can choose to provide prices for each plan or plan add-on in real time. As part of implementing the notification subscriber, we have specifications for the APIs below that the billing service can implement to enable pricing data to flow back into the system. Implementing these APIs is optional. If they are enabled, the price values for plans and add-ons will be visible in the WAP tenant site when a plan or add-on is added.

Method   API                                                                                       Response
GET      /planPrice?id={id}&region={region}&username={username}                                    String
GET      /addonPrice?id={id}&region={region}&username={username}&subscriptionId={subscriptionId}   String

 

Notes:

  • This API is expected to return a string with pricing information. The 2012 R2 system displays this string alongside plans for the subscriber; the values are textual, not typed.
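A minimal sketch of what the billing service might return (Python for illustration; the plan names and prices are made up, and a real implementation would serve these handlers over HTTPS):

```python
# Hypothetical price table; the API contract only requires a display string.
PLAN_PRICES = {"GoldPlan": "$100/month", "SilverPlan": "$45/month"}


def plan_price(plan_id, region=None, username=None):
    """Handler body for GET /planPrice; the returned string is shown
    verbatim in the WAP tenant site next to the plan."""
    return PLAN_PRICES.get(plan_id, "Contact us for pricing")


def addon_price(addon_id, region=None, username=None, subscription_id=None):
    """Handler body for GET /addonPrice; same textual, untyped contract."""
    return PLAN_PRICES.get(addon_id, "Contact us for pricing")
```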
Detailed Description of the Sample Adapter Project Files

This section explains the content of the sample billing adapter (SampleBillingAdapter.sln). At a high level, the billing adapter consists of the following parts:

1. SampleBillingAdapter.cs, which provides an example of the different calls to the Usage REST API

2. A set of Data Contracts that can be used to deserialize the API responses

SampleBillingAdapter.cs

This is the entry point for the application. The file contains the following:

1. Instantiation of a WAPUsageServiceHttpClient with the required configuration data.

2. Use of this client to query the usage service. There are seven types of calls that can be made for the billing data; the responses are deserialized into instances of the data contracts included in the DataContracts directory.

3. Printing of the data to the console.

Example:

using Microsoft.WindowsAzurePack.Usage.DataContracts;
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

static void Main(string[] args)
{
    // create a WAP usage service http client (this data can be read from a config file)
    string mediaType = "application/json"; // application/json or application/xml
    string authenticationType = "basic";
    string Username = "UsageClient";
    string Password = "specify the correct pwd";
    string Machine = "specify the machine where the usage service is running";
    string Port = "30022";
    string BaseAddress = String.Format("https://{0}:{1}/", Machine, Port);
    var usageService = new WAPUsageServiceHttpClient(Username, Password, authenticationType, BaseAddress, mediaType);

    // gather usage and billing data asynchronously using the Usage API
    var usage = usageService.GetDataAsync<UsageRecordList>("usage", 0, 50);
    var plans = usageService.GetDataAsync<UsageEventList<Plan>>("billing/plans", 0, 50);
    var subscriptions = usageService.GetDataAsync<UsageEventList<Subscription>>("billing/subscriptions", 0, 50);
    var addOns = usageService.GetDataAsync<UsageEventList<AddOn>>("billing/addons", 0, 50);
    var planAddOns = usageService.GetDataAsync<UsageEventList<AddOnReference>>("billing/planAddons", 0, 50);
    var subscriptionAddOns = usageService.GetDataAsync<UsageEventList<AddOnReference>>("billing/subscriptionAddons", 0, 50);
    var planServices = usageService.GetDataAsync<UsageEventList<ResourceProviderReference>>("billing/planServices", 0, 50);

    #region Print the usage and billing data to the console ...
    Console.WriteLine("Printing Usage Data - Press Enter to Proceed...");
    Console.ReadLine();
    usageService.PrintUsageData(usage.Result);
    usageService.PrintPlanData(plans.Result);
    usageService.PrintSubscriptionData(subscriptions.Result);
    usageService.PrintAddOnsData(addOns.Result);
    usageService.PrintPlanAddOnsData(planAddOns.Result);
    usageService.PrintSubscriptionAddOnsData(subscriptionAddOns.Result);
    usageService.PrintPlanServicesData(planServices.Result);
    #endregion
}

Data Contracts

The DataContracts directory contains all the required Data Contracts to interact effectively with the Usage API.

VM Data Gathered from the Usage API

VM Provider

Measure                             Unit   Description
MemoryAllocated-Min                 MB     Lowest allocated memory size for a VM within an hour timespan
MemoryAllocated-Max                 MB     Highest allocated memory size for a VM within an hour timespan
MemoryConsumed-Min                  MB     Lowest consumed memory size for a VM within an hour timespan
MemoryConsumed-Max                  MB     Highest consumed memory size for a VM within an hour timespan
MemoryConsumed-Median               MB     Median consumed memory size for a VM within an hour timespan
CPUAllocationCount-Min              Each   Lowest number of CPU cores allocated for a VM within an hour timespan
CPUAllocationCount-Max              Each   Highest number of CPU cores allocated for a VM within an hour timespan
CPUPercentUtilization-Median        %      Median CPU consumption, in percent, for a VM within an hour timespan
CrossDiskIOPerSecond-Min            MB     Lowest input/output per second (IOPS) across all attached disks for a VM within an hour timespan
CrossDiskIOPerSecond-Max            MB     Highest input/output per second (IOPS) across all attached disks for a VM within an hour timespan
CrossDiskIOPerSecond-Median         MB     Median input/output per second (IOPS) across all attached disks for a VM within an hour timespan
CrossDiskSizeAllocated-Min          MB     Lowest allocated disk size across all attached disks for a VM within an hour timespan
CrossDiskSizeAllocated-Max          MB     Highest allocated disk size across all attached disks for a VM within an hour timespan
PerNICKBSentPerSecond-Min           MB     Lowest bytes sent per second on a network adapter attached to a VM within an hour timespan
PerNICKBSentPerSecond-Max           MB     Highest bytes sent per second on a network adapter attached to a VM within an hour timespan
PerNICKBSentPerSecond-Median        MB     Median bytes sent per second on a network adapter attached to a VM within an hour timespan
PerNICKBSentPerSecond-Average       MB     Straight average bytes sent per second on a network adapter attached to a VM within an hour timespan
PerNICKBReceivedPerSecond-Min       MB     Lowest bytes received per second on a network adapter attached to a VM within an hour timespan
PerNICKBReceivedPerSecond-Max       MB     Highest bytes received per second on a network adapter attached to a VM within an hour timespan
PerNICKBReceivedPerSecond-Median    MB     Median bytes received per second on a network adapter attached to a VM within an hour timespan
PerNICKBReceivedPerSecond-Average   MB     Straight average bytes received per second on a network adapter attached to a VM within an hour timespan

As you can see, this is a powerful API that allows bi-directional data flow: usage data flows from the 2012 R2 stack to the Billing Adapter, and pricing data (determined by the billing system's business logic) flows from the billing system back into the 2012 R2 stack.

In subsequent blogs, we will provide more details as we hear more from our customers.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

Categories: MDT

How to Create a Basic Plan Using the Service Administration Portal

Microsoft Deployment Toolkit Team Blog - Thu, 08/01/2013 - 11:30

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Administration and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.”

As described in that blog post, success for a service provider largely hinges on its ability to attract and retain tenants.  It therefore falls to the service provider to think about how to use service offerings to draw tenants in, and to consider different tactics for differentiation, upselling, and maintaining healthy tenant accounts.  To help service providers meet these challenges, we have invested in key enhancements to the service management experience targeting these specific areas: 

  • Using value-based offers to attract tenants and drive new subscriptions
  • Using offer differentiation and upsell to drive more consumption 
  • Managing tenant accounts and subscriptions

A “service provider” here could be an IT organization within a company that is providing services such as IaaS to other business units in the organization. Thus, an IT organization that operates like a service provider to other business units must create compelling service offerings in much the same way as a service provider tries to attract customers. 

Overview of an IaaS Plan

Service providers can build bundles of service offerings, termed "plans". Plans are composed of service offerings that can be assembled together to target different types of prospective tenants.  Tenants consume these service offerings by subscribing to a plan.  In a very general sense, a cloud is nothing more to the tenant than a set of capabilities ("service offerings") at some capacity ("quotas").

To support this business model, we have designed a very easy-to-use experience for creating offers, selecting the kinds of service offerings to include and then setting the quotas to control how much can be consumed by any single subscription.  But, it goes beyond a simple set of compute, storage, and networking capabilities at some quota amount! 

One of the most important aspects of plan creation is the process of including library content to facilitate simplified application development.  For that reason, the plan creation experience also features a way to include templates for foundational VM configurations and workloads.

Plan Creation

In the Service Administration portal, the left-side navigation has an entry called "Plans", which lists all the plans currently in the system. As you can see in the figure below, the administrators have created many different plans. When plans are created they are "Private" by default, meaning they are not yet visible to prospective tenants.

At a quick glance, the administrator can identify which plans are accessible to tenants and how many subscriptions they have, along with other pertinent status.

Creating a new plan is very easy and is enabled by the Quick Create experience.

Scroll down and click on New, which takes you to the Plan creation experience.

This experience enables plan creation and “plan add-ons”. We will be focusing on plan creation in this blog post. The plan creation experience is a simple wizard with just three screens. In the first screen, you give the plan a name, and in this case it is going to be called “BlogIaaSPlan” as shown in the figure below.

 

As mentioned earlier, a plan is a container of service offerings that are available in the system. This system is configured to provide VMs, web sites, SQL Server databases, and service bus services. Therefore, the plan wizard allows all of these types of services to be offered in the plan.

We will focus only on virtual machines in this blog post. In this screenshot the ‘Virtual Machine Clouds’ service is selected and ‘Virtual Machine Clouds’ is chosen from the drop down.

Skip the plan add-ons for now and click OK. That completes the plan Quick Create experience. As you can see, the BlogIaaSPlan is created and is private by default.  The plan is “not configured” yet and needs to be configured before it can be made public for tenants to be able to subscribe to the plan.

Configuring the Plan

Clicking on the plan (BlogIaaSPlan) presents the plan dashboard page. This page shows the plan statistics at the top and a list of all the services that are available on the plan along with plan add-ons associated with the plan.

Since we are creating a VM-only plan, we have not selected any offers/services other than Virtual Machine Clouds.

 

 

As seen in the figure above, the plan is neither active nor configured for use.

Click on ‘Virtual Machine Clouds’ to configure the plan.

Associating the Plan with a VMM Server and Cloud

A plan that offers VM clouds is associated with a specific Virtual Machine Manager server and a Virtual Machine Manager cloud within that VMM server. When the tenant subscribes to this plan and instantiates a virtual machine, the system will deploy that VM with the specified properties on the associated cloud via the associated VMM server.

As shown in the figure to the left, the VMM server is a mandatory property that needs to be set for an IaaS plan.

As part of the service registration, the VMM server information is already available in the system. Therefore, selecting the correct VMM server is very easy.

Once a VMM server has been selected, the VMM cloud managed by that VMM server needs to be selected and bound to the plan.

As you can see in the figure, once a VMM server has been identified, the system queries the VMM server to list all the VMM clouds available on the VMM server.

We will choose the Gold Cloud to be associated with this plan.

A plan allows the administrator to bind the service to a specific cloud and to set upper limits on its usage.

Assigning Quota Limits to a Plan

The administrator can set limits on the core compute attributes, such as the maximum number of VMs, logical cores, memory, storage, and virtual networks each subscription can have.

As you can see in this figure, you can specify an absolute limit by entering a number against each compute property, or leave a property unlimited, in which case it is limited only by the underlying fabric constraints.

Adding Allowed Networks to the Plan

The plan allows various cloud resources to be made available to the subscriber in a controlled manner. In this section we will go through the networks that are made available to the plan.

When a plan is being configured for the first time, there are no cloud resources assigned to the plan. Therefore all the cloud resources will be empty. Click on ‘Add networks’ to add networks to the plan and add the Fabrikam External network to the plan.

 

 

 

When a tenant subscribes to this plan and creates a VM, the only networks available to that VM will be those allowed by the plan; in this case, only the Fabrikam External network.

Adding Other Resources

Following the same pattern as networks, we can specify which hardware profiles and virtual machine templates are accessible within the plan. In this plan, all the hardware profiles and only the Windows Server 2012 VM template are chosen, resulting in the figures below.

 

 

With these configurations complete, the plan is ready to be subscribed to by tenants. Advanced scenarios such as the gallery and other settings will be discussed in later blogs.

Next, the plan needs to be made public so that it can be discovered and subscribed to by tenants.

Plan Activation

Go back to the Plans List view to see all the plans. As you can see in the figure below, the plan is now configured, but still private. 

Select the BlogIaaSPlan and make it public by changing its access privileges. As shown in the figure below, you can do this from the Change Access command.

 

The plan status will change to reflect the fact that it is now a publicly accessible plan. You can see this in the Plans view, as shown in the figure below.

Once the plan is made public it can be subscribed to and tenants can start to deploy virtual machines against the subscription.

Conclusion

In subsequent blogs, we will provide more details of creating advanced plans.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

Categories: MDT

Using Server Inventory Reports to Help Stay Compliant with Service Provider Licensing Agreement (SPLA)

The USMT team blog - Thu, 08/01/2013 - 11:30

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Administration and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.”

As described in that blog post, helping service providers stay compliant within the SPLA framework is another key investment area for this release. Service providers we talked to during the planning phases clearly identified the difficulty of accurately reporting on the licenses consumed within the datacenter, which is especially true for today's highly dynamic datacenters. 

Overview of Server Inventory Reporting

The Creating usage analytics reports using Excel blog post provided an overview of how to use Excel to create powerful reports that help the provider gain insight into the usage patterns of their customers. This blog post focuses on how the service administrator can leverage the same system to create server inventory reports that give service providers insight into the Windows Servers and VM instances that host their services, and help assess the licensing impact with respect to the SPLA framework in the 2012 R2 release. As shown in the figure below, Service Reporting extracts fabric (host) data from Operations Manager (also called OM) to process the data relevant for licensing scenarios, as highlighted by the red circle.

As called out in the earlier blog on this topic, Service Reporting is a data warehousing solution developed on top of the Microsoft Business Intelligence (BI) stack.

In the 2012 R2 release, data is correlated from two sources:

  1. Windows Azure Pack Usage (Tenant Resource Utilization data)
  2. Operations Manager (Fabric data such as Servers, VM Instances etc.)

Service Reporting is designed for the service administrator, who can create reports using Excel pivot tables and obtain insights that help with capacity planning. While the previous blog went into the details of how to create reports from Windows Azure Pack for tenant resource utilization, this blog focuses on how to leverage the Server Inventory report that ships out of the box in 2012 R2.

Server Inventory Data Pipeline

In the figure below, the VM usage data source is VMM (Virtual Machine Manager). This data is periodically collected and stored in the OM (Operations Manager) database. The data in the Operations Manager database contains information about the Hyper-V hosts and the Virtual Machines that are hosted on those servers.

As illustrated in the figure below, the details about the servers and the guest VMs are extracted by the Service Reporting component and processed into the relevant OLAP cubes, which are then used to create the Excel reports containing the server inventory information.

 

 

Scenarios

For the 2012 R2 release, we targeted the server inventory scenarios below. The goals were to enable the service provider to create accurate SPLA reports, understand trends, and use the reports for planning and auditing scenarios.

  1. Report on the number of processors and monthly VM high water mark on the Hyper-V hosts
  2. Trending data for processor count and VM high water mark, for up to 3 years
  3. Detailed view of all the servers and VMs for up to 3 years
Configuring the Server Inventory Report

The prerequisite for the Server Inventory Report is that the Service Reporting system must be working correctly and Server Inventory data from Operations Manager must be flowing into the system. This blog does not address the installation and deployment of Service Reporting component.

The Server Inventory report that ships out of the box in 2012 R2 needs to be configured to connect to the Analysis Server that holds the server inventory cubes. This can be done by opening the Server Licensing report from the Reports folder in the install directory of the Service Reporting component. Navigate to the Data->Connections menu, open the default connection that ships out of the box, and edit it. As you can see in the figure below, you can navigate to the Definition tab in the connection properties.

The connection string to use here is highlighted below.

Ensure you add the correct connection properties and save. The only property you should change is the Data Source, shown in the connection string below.

Provider=MSOLAP.5;Integrated Security=SSPI;Persist Security Info=True;Initial Catalog=UsageAnalysisDB;Data Source=fab-servicereporting;MDX Compatibility=1;Safety Options=2;MDX Missing Member Mode=Error

Make sure that the command text contains SRVMInventoryCube.
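If several report workbooks need the same change, the Data Source swap can also be scripted. A sketch (Python, illustrative only) that rewrites just that one property of the OLEDB-style connection string:

```python
def set_data_source(connection_string, server):
    """Replace the Data Source value in an OLEDB-style connection string,
    leaving every other property (Provider, Initial Catalog, ...) untouched."""
    parts = []
    for prop in connection_string.split(";"):
        if prop.strip().lower().startswith("data source="):
            parts.append("Data Source=" + server)
        else:
            parts.append(prop)
    return ";".join(parts)
```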

At this point, you should be able to view the server inventory report dashboard.

Click on the Summary worksheet and you should see content similar to this figure.

Depending on the data in the system, the slicers may show different values that are selected by default. The left axis shows the processor count and the right axis shows the VM Instance count. If the slicer values are changed, the report will change as well.

As you can see in this report, the processor count and the VM instance count grew between May and June of 2013.

An important thing to note is that if you try to print this page for your records, the slicers will not be displayed, since the print area is configured to exclude them.

Further, there is a placeholder for key information to be entered which allows the provider to identify themselves in the report when the scenarios call for communicating with license resellers.

Detailed Report

The Server Inventory report has a Detail worksheet.  It contains the information behind the summary report, which is useful when one wants to understand the finer details. As you can see in this figure, a monthly breakdown of how many processors and VM instances each host had is available.

Expanding a host lists all the VM instances that were hosted on that server.

This view is agnostic of tenants and workloads because the licensing scenarios require only processor counts and high water mark of VMs on the servers for a given month.

Conclusion

This is a very powerful capability that lets service providers accurately and easily report license consumption based on the SPLA framework with the 2012 R2 release.

In subsequent blogs, we will provide more details as we hear more from our customers.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

Categories: MDT

Creating Usage Analytics Reports using Excel

The USMT team blog - Thu, 08/01/2013 - 11:30

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Administration and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.”

As described in that blog post, enabling usage analytics scenarios for service providers is a key investment area for this release. Service providers cannot successfully monetize their services in the absence of a system that tracks and provides analytics on tenant resource usage.  

Overview of Service Reporting

The “How to Integrate Your Billing System with the Usage Metering System” blog post provided an overview of the Usage Metering System. This blog post focuses on how we extract the same data, provide analytics on tenant VM resource utilization, and make them available in Excel pivot tables (analysis via Performance Point is covered in a subsequent blog post). As shown in the figure below, the Service Reporting component extracts the data from the Usage REST API and transforms it into OLAP cubes for analytics.

  

Service Reporting is a data warehousing solution developed on top of the Microsoft Business Intelligence (BI) stack.

In the 2012 R2 release, data is correlated from two sources:

  1. Windows Azure Pack Usage (Tenant Resource Utilization data)
  2. Operations Manager (Fabric data such as Servers, VM Instances, etc.)

Service Reporting is designed for the service administrator to create reports using Excel pivot tables to obtain the insights that help them in their capacity planning needs and show-back situations.

VM Usage Data Pipeline

In the figure below, the VM usage data source is VMM (Virtual Machine Manager). This data is periodically collected and stored in the OM (Operations Manager) database, and from there it is collected into the WAP (Windows Azure Pack) Usage Database along with usage data for other resources. As mentioned earlier, the WAP usage system was detailed in the blog How to Integrate Your Billing System with the Usage Metering System.

The Service Reporting component reads data from the Usage Database and transforms the raw usage data into OLAP cubes for analytics. The data in these OLAP cubes is available for visualization and drill-down analytics using Excel and Performance Point.

 

 

Scenarios

For the 2012 R2 release we targeted the following usage analysis scenarios:

  1. Usage trends across different time dimensions (hourly, daily, monthly, quarterly, yearly) to provide critical trending patterns
  2. Pivoting by subscriptions to understand which subscribers are impacting the business
  3. Pivoting by clouds/plans to understand which plans are used the most
  4. Side-by-side comparison between allocated capacity for tenants and their usage to help understand utilization ratios

These scenarios can be visualized in Excel and in Performance Point. Excel is a very popular tool for most reporting needs, and has pivot table capabilities that come in very handy for ad-hoc analytics. Excel workbooks can contain data to be analyzed even when disconnected from the SQL Server Analysis Server.

Configuring Usage Reports

The prerequisites for Usage Reports to work are that the Service Reporting component must be working correctly and usage data must be flowing into the system. This blog does not address the installation and deployment of the Service Reporting component. The Excel Usage Reports shipped out of the box in 2012 R2 need to be connected to the Analysis Server that holds the Usage Data Cube. This can be easily done by opening the Usage Report from the Reports folder in the install directory of the Service Reporting component. Navigate to the Data->Connections menu in Excel and open up the default connection that is shipped out of the box and edit it. As you can see in the figure below, you can navigate to the Definition tab in the Connection properties.

The connection string to use here is highlighted below.

Ensure you add the correct connection properties and save. The only property you should change is the Data Source, shown in the connection string below.

Provider=MSOLAP.5;Integrated Security=SSPI;Persist Security Info=True;Initial Catalog=UsageAnalysisDB;Data Source=fab-servicereporting;MDX Compatibility=1;Safety Options=2;MDX Missing Member Mode=Error

Make sure the command text contains SRUsageCube.

Once these connection properties are saved, the Excel report can now be populated with data from the Usage Data Cube and its capabilities.

To test it out, you can create a brand new worksheet and then create a pivot table using the connection you just created.

Step 1: Open a new worksheet.

Step 2: Click on Insert->Pivot Table.

Step 3: Make sure External data source is selected.

Step 4: Click on Choose Connection and select the data connection configured in the previous step.

Step 5: Save the changes and close the dialog to return to the Excel worksheet.

If the data connection is configured correctly, you should see this form on the right side of your worksheet.

Click on “All” and you will see a drop down with the following items.

Click on the Settings icon (the round sprocket) and collapse all the fields.

You will see all 19 “measures” that are available out of the box for reporting different utilization data points.

At this point, you are ready to re-create the report that is provided in the sample Usage Report.

Explore the Pivot Table fields and try to compose the report similar to the one in the figure below by dragging and dropping the different fields to the appropriate areas (Filters, Columns, Rows, Values).

As you add the rows and columns, you will start to see the report shape up to look like the figure below.

Slicers

Once you have a report that looks like this you can augment this report by adding slicers to give you filtering options.

Go to Insert->Slicer and choose the same connection that the pivot table is using. This will provide you with options to choose the necessary filter. Select VMM User Role (which is the same as Subscriptions) to see the list of subscribers in the system; selecting one gives you the ability to scope the results.

In this instance, I have created a slicer with “VMM User Role” but changed the Display name to “Subscriptions” to make it more intuitive. All the available “Subscriptions” are shown in this list and all of them are in scope.

Now, if you select just one of them, say “Unknown User Role”, you will see the report change to display only the records related to that subscription, as shown in the table below.

As you can see, all the values instantly change to reflect the selected filter, giving the administrator a powerful way to look at subscribers and compare them side by side. One can multi-select within the same slicer and chain other slicers to provide richer analytics.

 

Conclusion

While Excel is super powerful and ubiquitous, PerformancePoint allows greater collaboration by enabling dashboards. By connecting to the Analysis Server of Service Reporting, one can take advantage of all the key fields that are available in Service Reporting to create powerful dashboards that help the service administrator see the key metrics of the business in a single location.

Subsequent blog posts will go into the details of configuring PerformancePoint dashboards.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

 

Categories: MDT

How to Integrate Your Billing System with the Usage Metering System

The USMT team blog - Thu, 08/01/2013 - 11:30

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Provider experiences in enabling the billing and chargeback of Tenant Resource Utilization and how it applies to Brad’s larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post:   What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.

As described in that blog post, enabling billing scenarios for the service providers is a key investment area for this release. Service providers cannot successfully monetize their services in the absence of a system that tracks and reports on tenant resource utilization. These services are offered on a subscription basis and therefore it is critical that the resource utilization is reported at the subscription granularity to assist in billing scenarios. 

Overview of the Usage Metering System

The usage system sits alongside the Service Management API in the Windows Azure Pack (WAP) stack, enabling it to access tenant utilization data for all the services provided in the WAP stack and to expose a REST API that is leveraged to integrate with the provider's billing system, as illustrated in the figure below.

 

Service providers have invested heavily in their own billing systems, and it was critical that the 2012 R2 release be able to integrate with the existing systems in place. Therefore, we targeted our investments to ensure that 2012 R2 integrates easily with the various billing providers and ITFM (IT Financial Management) products that are in the market.

It is important to note that there is no billing system being shipped in the 2012 R2 release. Service providers have to create the billing integration module (also referred to as a “billing adapter”) to provide data to the billing system they are using.

Now, let's go a little deeper into the building blocks of the usage metering system and how it's architected.

The Usage Metering System has four main components. Three of these components (the Data Generator, the Data Collector, and the Usage Database) are internal to the system; the fourth, the Usage API, is an external-facing API that the billing adapter interfaces with to extract tenant resource utilization data.

Data Generator

The Data Generator tier represents the services (resource providers) registered as part of the system. They collect information specific to a subscription and expose it to the Usage Collector. The Usage Collector expects information to be made available following a specific data contract, which is the same across all the providers; all providers in the system adhere to this contract. IaaS metrics in Windows Azure Pack are provided by the VM Clouds resource provider.

Data Collector

The Data Collector is an internal component that periodically collects usage information from all the registered Data Generators and stores it in the Usage Database.

Usage Database

The Usage Database is a transient store that holds all the data from the various Data Generators for a period of 30-40 days. The expectation is that during this time, the billing system will have extracted the data from this database for billing purposes.

Usage API

This is a RESTful API and is the only way to extract data from the Usage Database. Service providers typically have a billing system that generates monthly bills for their subscribers, and they can easily integrate with it by extracting data from the Usage Database through the Usage API. The component that customers develop to integrate with their billing system is called a “Billing Adapter”, which serves as a bridge between the Usage Metering System and the customer billing system.

 

In the figure below, in the red circles, you can see the VM Clouds resource provider, alongside other resource providers such as Service Bus, generating the IaaS resource utilization data, which is collected and stored in the Usage Database and made available through the Usage API.

 

The Usage API can be leveraged to create the billing adapter and interface with the billing system within the provider datacenter. In the figure below, you can see the “billing adapter” serving to integrate the Usage Metering System and the billing provider within the provider datacenter.

 

The “Service Reporting” component and the analytics it provides are discussed in the blog post titled “Creating Usage Analytics Reports using Excel and Performance Point”, while this blog post details how to create a “Billing Adapter”.

Interacting with the Usage System

This section explains the ways an external system can interact with the Usage Metering System. Two different types of information are available through the Usage API:

  1. Tenant resource utilization for all subscriptions
  2. Plan, add-on, subscription, and account information

The information is presented via two channels:

  1. Usage API that queries all the historical data
  2. Real-time CRUD events via the Event Notification System

The billing adapter uses both of these channels to create billing reports effectively, while responding in real time as plans, subscriptions, and accounts are created and managed in the environment.

Usage API (Exposed on the Usage Endpoint)

Usage Data

The Usage endpoint exposes an API to return tenant resource utilization data pertaining to every subscription across services. The caller (the “Billing Adapter”) needs to provide the “startid” parameter, which informs the Usage Metering System to return usage data starting from that ID. The Billing Adapter advances the “startid” based on the number of records returned before making the subsequent call.

 

Method Name | API | Response
GET | /usage?startId={startId}&batchSize={batchSize} | UsageRecordList<UsageRecord>
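The paging contract above can be sketched as follows. This is an illustrative Python sketch, not code shipped with WAP: `fetch_batch` stands in for the HTTP GET against `/usage`, and the record shape is hypothetical.

```python
def collect_all_usage(fetch_batch, batch_size=100):
    """Drain the usage feed by advancing startId past each returned batch.

    fetch_batch(start_id, batch_size) stands in for
    GET /usage?startId={startId}&batchSize={batchSize}.
    """
    records = []
    start_id = 0
    while True:
        batch = fetch_batch(start_id, batch_size)
        if not batch:
            break  # no more records to process
        records.extend(batch)
        # Advance startId by the number of records returned,
        # as the billing adapter is expected to do.
        start_id += len(batch)
    return records

# Simulated feed: 250 usage records with sequential ids.
feed = [{"id": i, "subscriptionId": "sub-1"} for i in range(250)]

def fake_fetch(start_id, batch_size):
    return feed[start_id:start_id + batch_size]

all_records = collect_all_usage(fake_fetch, batch_size=100)
```

In this sketch an empty batch is treated as the signal to stop; a production adapter would also persist the last processed startId so billing can resume after a restart.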

Plans\Addon\Subscription Data

The Usage endpoint also exposes APIs to return data on existing plans, add-ons, subscriptions, etc.

Method Name | API | Response
GET | billing/plans?startId={startId}&batchSize={batchSize} | UsageEventList<Plan>
GET | billing/addons?startId={startId}&batchSize={batchSize} | UsageEventList<AddOn>
GET | billing/subscriptions?startId={startId}&batchSize={batchSize} | UsageEventList<Subscription>
GET | billing/planServices?startId={startId}&batchSize={batchSize} | UsageEventList<ResourceProviderReference>
GET | billing/planAddons?startId={startId}&batchSize={batchSize} | UsageEventList<AddOnReference>
GET | billing/subscriptionAddons?startId={startId}&batchSize={batchSize} | UsageEventList<AddOnReference>

Notes:

STARTID is the record id of the first record you want to fetch in a particular cycle.

BATCHSIZE is the maximum number of records you want to fetch.

USAGE-RESTAPI-ENDPOINT can be found at https://<Admin-API-Machine-Name>:30022

Configuration

The administrator needs to ensure that the Usage Metering Service is configured correctly to authenticate the Billing Adaptor. That can be done by ensuring that the service is capable of accepting the correct credentials that will be used to authenticate. The steps below describe how to ensure that the credentials are set properly. Note: During the installation process, the password used is a random sequence and hence this step is necessary to establish connectivity.

On the WAP deployment, launch the Management Service PowerShell Module on the Admin API server.

Then, run the commands below:

· Set-MgmtSvcSetting -Namespace UsageService -Name Username -Value '<EnterUserName>'

· Set-MgmtSvcSetting -Namespace UsageService -Name Password -Value '<EnterPassword>' –Encode

Once the username and password are set to known values, these values can be used by the Billing Adaptor to authenticate.

Consuming the Usage REST API

The following steps are required to consume the Usage REST API:

  • Define an httpClient
  • Construct a URI to query the Usage Metering Service
  • StartID is the record id of the first record you want to fetch and BatchSize is the maximum number of records you want to fetch.
  • Execute the API call and read Usage Data
  • Data Contracts can be used to de-serialize the response returned (as in Sample below)
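As a hedged illustration of the first steps above, the snippet below composes the request URI and Basic authentication header in Python. The host name and credentials are placeholders; a real adapter would then issue the GET over HTTPS and deserialize the response using the shipped data contracts.

```python
import base64

def build_usage_request(host, resource, start_id, batch_size, username, password):
    """Compose the Usage API URI and a Basic auth header.

    resource is e.g. 'usage' or 'billing/plans'; the endpoint
    listens on port 30022 per the notes above.
    """
    uri = "https://{0}:30022/{1}?startId={2}&batchSize={3}".format(
        host, resource, start_id, batch_size)
    token = base64.b64encode("{0}:{1}".format(username, password).encode()).decode()
    headers = {"Authorization": "Basic " + token,
               "Accept": "application/json"}
    return uri, headers

# Placeholder host and credentials for illustration only.
uri, headers = build_usage_request("admin-api", "usage", 0, 50, "UsageClient", "pwd")
```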

Usage Data Model

The Usage Data Model is shown in the figure below and can be used to associate the data returned by the Usage API.

Event Notification System

The Service Management API keeps track of events within the Usage Metering System and sends notifications to any registered subscriber (e.g. a Billing Adaptor). Examples of the events are plan, addon, subscription creation\updates and account creation.

Notifications are sent as a Post call to an endpoint registered with the Usage Metering System. The Management Service PowerShell Module should be used to define the required notification end point. Note that the notificationEndPoint must end with a trailing slash.

Description:

Subscribing for plan, add-on and account changes.

Verb | Command | Parameters
Set | MgmtSvcNotificationSubscriber | -NotificationSubscriber, -Name, -Enabled, -SubscriberType, -Endpoint, -AuthenticationMode, -AuthenticationUsername, -AuthenticationPassword, -EncryptionKey, -EncryptionAlgorithm, -ConnectionString, -Server, -Database, -UserName, -Password

SubscriberType:

  • BillingService
  • MandatoryService
  • OptionalService

Example:

Set-MgmtSvcNotificationSubscriber -Name Billing –SubscriberType BillingService -Enabled $false -Endpoint https://localhost/ -AuthenticationMode Basic

The Billing Adapter can be set up to handle the event in a blocking or a non-blocking manner. The SubscriberTypes BillingService and MandatoryService are both blocking; the only non-blocking option is OptionalService. If the Billing Adapter is set up to be blocking, a plan creation event in the Service Management API should trigger a corresponding plan to be created in the billing system. If this operation is not successful, the plan creation at the Service Management API will fail. This enforces consistency between the platform and the billing system.
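To make the blocking semantics concrete, here is a hypothetical Python sketch (not part of WAP) of how a blocking billing adapter's notification handler might map its outcome to the HTTP status returned for the notification POST; a non-success status causes the plan creation in the Service Management API to fail.

```python
def handle_plan_created(event, create_plan_in_billing_system):
    """Blocking handler for a plan-creation notification.

    Returns the HTTP status code for the notification POST:
    200 lets the plan creation proceed; 500 makes it fail,
    keeping the platform and billing system consistent.
    """
    try:
        create_plan_in_billing_system(event["Entity"])
        return 200
    except Exception:
        return 500

# Hypothetical event shape and billing-system callbacks for illustration.
ok_event = {"Method": "POST", "Entity": {"Id": "plan-1", "Name": "Gold"}}

def billing_ok(plan):
    pass  # plan created successfully in the billing system

def billing_down(plan):
    raise RuntimeError("billing system unavailable")

status = handle_plan_created(ok_event, billing_ok)
```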

Notification Data Contracts

Notifications sent to the billing adapter adhere to the NotificationEvent<T> type, where T is one of the following objects:

  • Plan
  • PlanAddOn
  • AdminSubscription
  • ResourceProviderReference
  • PlanAddOnReference

When you download the WAP (Windows Azure Pack) the data contracts can be found under:

· \SampleBillingAdapter\DataContracts\*
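As a hedged sketch, a billing adapter might parse an incoming NotificationEvent<T> payload as shown below. The property names ("Method", "Entity") follow the description in this post, but the exact wire format should be confirmed against the shipped data contracts.

```python
import json

def parse_notification(payload):
    """Parse a NotificationEvent<T> JSON body into (method, entity).

    The Method tells the adapter whether to create, update,
    or delete the corresponding object in the billing system.
    """
    event = json.loads(payload)
    return event["Method"], event["Entity"]

# Hypothetical payload for illustration only.
payload = json.dumps({"Method": "POST",
                      "Entity": {"Id": "plan-42", "DisplayName": "Silver"}})
method, entity = parse_notification(payload)
```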

The following are the two important properties of NotificationEvent:

1. NotificationEvent Method can have the following values:

  1. Post to create a new account/subscription/addon/plan

  2. Delete to delete an account/subscription/addon/plan

  3. Post an update to a plan

2. NotificationEvent Entity sends an event when any of the above objects are created/updated/deleted.

Pricing APIs

The Pricing API is designed to let the billing system in the Service Provider datacenter specify prices for Plans and Add-ons, so that those prices flow into the 2012 R2 system. The billing adapter can choose to provide prices for each Plan or Plan add-on in real time. As part of implementing the notification subscriber, we have specifications for the APIs below that the billing service can implement to enable pricing data to flow back into the system. The implementation of these APIs is optional. If they are enabled, the price values for the plans and add-ons will be visible in the WAP Tenant site at the time the Plan/Add-On is added.

Method Name | API | Response
GET | /planPrice?id={id}&region={region}&username={username} | String
GET | /addonPrice?id={id}&region={region}&username={username}&subscriptionId={subscriptionId} | String

 

Notes:

  • This API is expected to return a string with pricing information. The 2012 R2 system will display this information alongside plans for the subscriber, but the values are textual, not typed.
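Because the pricing APIs simply return a display string, a minimal handler can be very small. The Python sketch below is hypothetical: the price table and formatting are invented for illustration, and WAP does not ship this code.

```python
# Hypothetical price list keyed by plan id; a real billing system
# would look this up in its own catalog.
PLAN_PRICES = {"gold": "$100/month", "silver": "$50/month"}

def plan_price(plan_id, region=None, username=None):
    """Answer GET /planPrice?id=... with a display string.

    The response is textual, not typed: the 2012 R2 system shows
    it alongside the plan without interpreting it.
    """
    return PLAN_PRICES.get(plan_id, "Contact sales")

price = plan_price("gold", region="us-east", username="tenant1")
```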

Detailed Description of the Sample Adapter Project Files

This section explains the content of the sample billing adapter (SampleBillingAdapter.sln). At a high level the billing adapter consists of the below parts:

1. SampleBillingAdapter.cs provides an example of the different calls to the Usage REST API

2. The set of Data Contracts that can be used to deserialize the API responses

SampleBillingAdapter.cs

This is the entry point for the application. The file contains the below:

1. Instantiation of a UsageServiceHttpClient with the required configuration data.

2. This UsageServiceHttpClient is then used to query the usage service. There are seven types of calls that can be made for the billing data. This data is deserialized into instances of the data contracts that are included in the DataContracts directory.

3. The data is then printed to the console.

Example:

using Microsoft.WindowsAzurePack.Usage.DataContracts;
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

static void Main(string[] args)
{
    // create a WAP usage service http client (this data can be read from a config file)
    string mediaType = "application/json"; // application/json or application/xml
    string authenticationType = "basic";
    string Username = "UsageClient";
    string Password = "specify the correct pwd";
    string Machine = "specify the machine where the usage service is running";
    string Port = "30022";
    string BaseAddress = String.Format("https://{0}:{1}/", Machine, Port);
    var usageService = new WAPUsageServiceHttpClient(Username, Password, authenticationType, BaseAddress, mediaType);

    // gather usage and billing data asynchronously using the Usage API
    var usage = usageService.GetDataAsync<UsageRecordList>("usage", 0, 50);
    var plans = usageService.GetDataAsync<UsageEventList<Plan>>("billing/plans", 0, 50);
    var subscriptions = usageService.GetDataAsync<UsageEventList<Subscription>>("billing/subscriptions", 0, 50);
    var addOns = usageService.GetDataAsync<UsageEventList<AddOn>>("billing/addons", 0, 50);
    var planAddOns = usageService.GetDataAsync<UsageEventList<AddOnReference>>("billing/planAddons", 0, 50);
    var subscriptionAddOns = usageService.GetDataAsync<UsageEventList<AddOnReference>>("billing/subscriptionAddons", 0, 50);
    var planServices = usageService.GetDataAsync<UsageEventList<ResourceProviderReference>>("billing/planServices", 0, 50);

    #region Print the usage and billing data to the console ...
    Console.WriteLine("Printing Usage Data - Press Enter to Proceed...");
    Console.ReadLine();
    usageService.PrintUsageData(usage.Result);
    usageService.PrintPlanData(plans.Result);
    usageService.PrintSubscriptionData(subscriptions.Result);
    usageService.PrintAddOnsData(addOns.Result);
    usageService.PrintPlanAddOnsData(planAddOns.Result);
    usageService.PrintSubscriptionAddOnsData(subscriptionAddOns.Result);
    usageService.PrintPlanServicesData(planServices.Result);
    #endregion
}

Data Contracts

The DataContracts directory contains all the required Data Contracts to interact effectively with the Usage API.

VM Data Gathered from the Usage API

VM Provider

Measure | Unit | Description
MemoryAllocated-Min | MB | Lowest allocated memory size for a VM within an hour timespan
MemoryAllocated-Max | MB | Highest allocated memory size for a VM within an hour timespan
MemoryConsumed-Min | MB | Lowest consumed memory size for a VM within an hour timespan
MemoryConsumed-Max | MB | Highest consumed memory size for a VM within an hour timespan
MemoryConsumed-Median | MB | Median consumed memory size for a VM within an hour timespan
CPUAllocationCount-Min | Each | Lowest number of CPU cores allocated for a VM within an hour timespan
CPUAllocationCount-Max | Each | Highest number of CPU cores allocated for a VM within an hour timespan
CPUPercentUtilization-Median | % | Median percentage of CPU consumption for a VM within an hour timespan
CrossDiskIOPerSecond-Min | MB | Lowest input/output per second (IOPS) across all attached disks for a VM within an hour timespan
CrossDiskIOPerSecond-Max | MB | Highest input/output per second (IOPS) across all attached disks for a VM within an hour timespan
CrossDiskIOPerSecond-Median | MB | Median input/output per second (IOPS) across all attached disks for a VM within an hour timespan
CrossDiskSizeAllocated-Min | MB | Lowest allocated disk size across all attached disks for a VM within an hour timespan
CrossDiskSizeAllocated-Max | MB | Highest allocated disk size across all attached disks for a VM within an hour timespan
PerNICKBSentPerSecond-Min | MB | Lowest bytes sent per second on a network adapter attached to a VM within an hour timespan
PerNICKBSentPerSecond-Max | MB | Highest bytes sent per second on a network adapter attached to a VM within an hour timespan
PerNICKBSentPerSecond-Median | MB | Median bytes sent per second on a network adapter attached to a VM within an hour timespan
PerNICKBSentPerSecond-Average | MB | Straight average bytes sent per second on a network adapter attached to a VM within an hour timespan
PerNICKBReceivedPerSecond-Min | MB | Lowest bytes received per second on a network adapter attached to a VM within an hour timespan
PerNICKBReceivedPerSecond-Max | MB | Highest bytes received per second on a network adapter attached to a VM within an hour timespan
PerNICKBReceivedPerSecond-Median | MB | Median bytes received per second on a network adapter attached to a VM within an hour timespan
PerNICKBReceivedPerSecond-Average | MB | Straight average bytes received per second on a network adapter attached to a VM within an hour timespan

As you can see, this is a powerful API that enables bi-directional data flow: usage data flows from the 2012 R2 stack to the billing adapter, while pricing data (with prices decided by the billing system's business logic) flows from the billing system into the 2012 R2 stack.

In subsequent blogs, we will provide more details as we hear more from our customers.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

Categories: MDT

How to Create a Basic Plan Using the Service Administration Portal

The USMT team blog - Thu, 08/01/2013 - 11:30

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Administration and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.”

As described in that blog post, success for a service provider largely hinges on its ability to attract and retain tenants.  It therefore falls to the service provider to think about how to use service offerings to draw tenants in, to consider different tactics for differentiation and upsell, and to maintain healthy tenant accounts.  To help service providers meet these challenges, we have invested in key enhancements to the service management experience targeting these specific areas: 

  • Using value-based offers to attract tenants and drive new subscriptions
  • Using offer differentiation and upsell to drive more consumption
  • Managing tenant accounts and subscriptions

A “service provider” here could be an IT organization within a company that is providing services such as IaaS to other business units in the organization. Thus, an IT organization that operates like a service provider to other business units must create compelling service offerings in much the same way as a service provider tries to attract customers. 

Overview of an IaaS Plan

Service providers can build bundles of service offerings that are termed “plans”. Plans include composable service offerings that can be assembled together in order to target different types of prospective tenants.  Tenants consume these service offerings by subscribing to a plan.  In a very general sense, a cloud is nothing more to the tenant than a set of capabilities (“service offerings”) at some capacity (“quotas”).

To support this business model, we have designed a very easy-to-use experience for creating offers, selecting the kinds of service offerings to include and then setting the quotas to control how much can be consumed by any single subscription.  But, it goes beyond a simple set of compute, storage, and networking capabilities at some quota amount! 

One of the most important aspects of plan creation is the process of including library content to facilitate simplified application development.  For that reason, the plan creation experience also features a way to include templates for foundational VM configurations and workloads.

Plan Creation

In the Service Administration portal, the left side navigation has an entry called “Plans” which lists all the plans currently in the system. As you can see in the figure below, the administrators have created many different plans. When plans are created they are “Private” by default, meaning they are not yet visible to prospective tenants.

At a quick glance, the administrator can identify which plans are accessible by the tenants and how many subscriptions they have, along with other pertinent status.

Creating a new plan is very easy and is enabled by the Quick Create experience.

Scroll down and click on New, which takes you to the Plan creation experience.

This experience enables plan creation and “plan add-ons”. We will be focusing on plan creation in this blog post. The plan creation experience is a simple wizard with just three screens. In the first screen, you give the plan a name, and in this case it is going to be called “BlogIaaSPlan” as shown in the figure below.

 

As mentioned earlier, a plan is a container of service offerings that are available in the system. This system is configured to provide VMs, web sites, SQL Server databases, and service bus services. Therefore, the plan wizard allows all of these types of services to be offered in the plan.

We will focus only on virtual machines in this blog post. In this screenshot the ‘Virtual Machine Clouds’ service is selected and ‘Virtual Machine Clouds’ is chosen from the drop down.

Skip the plan add-ons for now and click OK. That completes the plan Quick Create experience. As you can see, the BlogIaaSPlan is created and is private by default.  The plan is “not configured” yet and needs to be configured before it can be made public for tenants to subscribe to it.

Configuring the Plan

Clicking on the plan (BlogIaaSPlan) presents the plan dashboard page. This page shows the plan statistics at the top and a list of all the services that are available on the plan along with plan add-ons associated with the plan.

Since we are creating a VM-only plan, we have not selected any offers/services other than Virtual Machine Clouds.

 

 

As seen in the figure above, the plan is neither active nor configured for use.

Click on ‘Virtual Machine Clouds’ to configure the plan.

Associating the Plan with a VMM Server and Cloud

A plan that offers VM clouds is associated with a specific Virtual Machine Manager server and a Virtual Machine Manager cloud within that VMM server. When the tenant subscribes to this plan and instantiates a virtual machine, the system will deploy that VM with the specified properties on the associated cloud via the associated VMM server.

As shown in the figure to the left, the VMM server is a mandatory property that needs to be set for an IaaS plan.

As part of the service registration, the VMM server information is already available in the system. Therefore, selecting the correct VMM server is very easy.

Once a VMM server has been selected, the VMM cloud managed by that VMM server needs to be selected and bound to the plan.

As you can see in the figure, once a VMM server has been identified, the system queries the VMM server to list all the VMM clouds available on the VMM server.

We will choose the Gold Cloud to be associated with this plan.

A plan allows the administrator to bind the service to a specific cloud and to control the upper limits on its usage.

Assigning Quota Limits to a Plan

The administrator can set limits on the core compute attributes, such as the maximum number of VMs, logical cores, memory, storage, and virtual networks each subscription can have.

As you can see in this figure, you can specify absolute limits by entering a number against each compute property, or leave it unlimited, in which case it will be limited by the underlying fabric constraints.

Adding Allowed Networks to the Plan

The plan allows various cloud resources to be made available to the subscriber in a controlled manner. In this section we will go through the networks that are made available to the plan.

When a plan is being configured for the first time, there are no cloud resources assigned to the plan. Therefore all the cloud resources will be empty. Click on ‘Add networks’ to add networks to the plan and add the Fabrikam External network to the plan.

 

 

 

When a tenant subscribes to this plan and creates a VM, the only networks available to that VM will be those allowed by the plan; in this case, only the Fabrikam External network.

Adding Other Resources

Following the same pattern as that of networks, we can specify which hardware profiles and virtual machine templates are accessible within the plan. In this plan, all the hardware profiles and only the Windows Server 2012 VM template are chosen resulting in the figures below.

 

 

With these configurations complete, the plan is ready to be subscribed to by tenants. Advanced scenarios such as the gallery and other settings will be discussed in later blogs.

Next, the plan needs to be made public so that it can be discovered and subscribed to by tenants.

Plan Activation

Go back to the Plans List view to see all the plans. As you can see in the figure below, the plan is now configured, but still private. 

Select the BlogIaaSPlan and then make it public by changing its access privileges. As shown in the figure below, you can do this from the Change Access command.

 

The plan status will change to reflect the fact it’s now a publicly accessible plan. You can see that in the Plans view as shown in the figure below.

Once the plan is made public it can be subscribed to and tenants can start to deploy virtual machines against the subscription.

Conclusion

In subsequent blogs, we will provide more details of creating advanced plans.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

Categories: MDT

Using Server Inventory Reports to Help Stay Compliant with Service Provider Licensing Agreement (SPLA)

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Administration and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.”

As described in that blog post, helping service providers stay compliant within the SPLA framework is another key investment area for this release. Service providers we talked to during the planning phases very clearly identified the difficulty involved in accurately reporting on the licenses consumed within the datacenter, which is especially true for today's very dynamic datacenters. 

Overview of Server Inventory Reporting

The Creating usage analytics reports using Excel blog post provided an overview of how to use Excel to create powerful reports that help the provider gain insights into the usage patterns of their customers. This blog post will focus on how the service administrator can leverage the same system to create server inventory reports that give service providers insight into the Windows Servers and the VM instances they host, and help assess the licensing impact with respect to the SPLA framework in the 2012 R2 release. As shown in the figure below, Service Reporting extracts fabric or host data from Operations Manager (also called OM) to process the data relevant for licensing scenarios, as highlighted by the red circle.

As called out in the earlier blog on this topic, Service Reporting is a data warehousing solution developed on top of the Microsoft Business Intelligence (BI) stack.

In the 2012 R2 release, data is correlated from two sources:

  1. Windows Azure Pack Usage (Tenant Resource Utilization data)
  2. Operations Manager (Fabric data such as Servers, VM Instances etc.)

Service Reporting is designed for the Service Administrator, who can create reports on their own using Excel power pivots and obtain the insights that help with their capacity planning needs. While the previous blog went into the details of how to create reports from Windows Azure Pack for tenant resource utilization, this blog will focus on how to leverage the Server Inventory report that is shipped out of the box in 2012 R2.

Server Inventory Data Pipeline

In the figure below, the VM usage data source is VMM (Virtual Machine Manager). This data is periodically collected and stored in the OM (Operations Manager) database. The data in the Operations Manager database contains information about the Hyper-V hosts and the Virtual Machines that are hosted on those servers.

As illustrated in the figure below, the details about the servers and the guest VMs is extracted by the Service Reporting component and is then processed to create the relevant OLAP cubes, which are then used to create the Excel Reports that have the Server Inventory information.

Scenarios

For the 2012 R2 release we targeted the server inventory scenarios below. The goals were to enable the service provider to create accurate SPLA reports, understand trends, and use the reports for planning and auditing.

  1. Report on the number of processors and monthly VM high water mark on the Hyper-V hosts
  2. Trending data for processor count and VM high water mark, for up to 3 years
  3. Detailed view of all the servers and the VMs for up to 3 years
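The "high water mark" in these reports is the peak number of VM instances observed on a host during a month. As a rough illustration of the idea only (not the product's implementation, and with invented host names and counts), a Python sketch:

```python
from collections import defaultdict

def monthly_high_water_mark(samples):
    """Given (host, 'YYYY-MM', vm_count) observations, return the peak
    VM count per (host, month) -- the monthly high water mark."""
    peaks = defaultdict(int)
    for host, month, vm_count in samples:
        key = (host, month)
        peaks[key] = max(peaks[key], vm_count)
    return dict(peaks)

# Hypothetical hourly observations of VM counts per Hyper-V host
samples = [
    ("hv-host-01", "2013-05", 8),
    ("hv-host-01", "2013-05", 12),   # peak for May
    ("hv-host-01", "2013-06", 10),
    ("hv-host-02", "2013-05", 4),
]
print(monthly_high_water_mark(samples))
# {('hv-host-01', '2013-05'): 12, ('hv-host-01', '2013-06'): 10, ('hv-host-02', '2013-05'): 4}
```

The same maximum-per-month aggregation, computed over three years of data, is what drives the trending view.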

Configuring the Server Inventory Report

The prerequisite for the Server Inventory Report is that the Service Reporting system must be working correctly and Server Inventory data from Operations Manager must be flowing into the system. This blog does not address the installation and deployment of Service Reporting component.

The Server Inventory report that ships out of the box in 2012 R2 needs to be configured to connect to the Analysis Server that holds the Server Inventory cubes. To do this, open the Server Licensing Report from the Reports folder in the install directory of the Service Reporting component, navigate to the Data->Connections menu, and edit the default connection that ships out of the box. As you can see in the figure below, you can navigate to the Definition tab in the Connection properties.

The connection string to use here is highlighted below.

Ensure you add the correct connection properties and save. The only property you should change is the Data Source (shown below).

Provider=MSOLAP.5;Integrated Security=SSPI;Persist Security Info=True;Initial Catalog=UsageAnalysisDB;Data Source=fab-servicereporting;MDX Compatibility=1;Safety Options=2;MDX Missing Member Mode=Error
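A connection string like this is a list of semicolon-separated key=value properties, and only Data Source should be edited. As an illustrative sketch (not part of the product), a small Python helper that swaps the Data Source while leaving the other properties intact:

```python
def set_data_source(connection_string, new_source):
    """Replace the Data Source property in an OLE DB-style connection string."""
    parts = []
    for part in connection_string.split(";"):
        key, _, _ = part.partition("=")
        if key.strip().lower() == "data source":
            parts.append("Data Source=" + new_source)
        else:
            parts.append(part)
    return ";".join(parts)

conn = ("Provider=MSOLAP.5;Integrated Security=SSPI;Persist Security Info=True;"
        "Initial Catalog=UsageAnalysisDB;Data Source=fab-servicereporting;"
        "MDX Compatibility=1;Safety Options=2;MDX Missing Member Mode=Error")
print(set_data_source(conn, "my-analysis-server"))
```

Here "my-analysis-server" stands in for the name of your own Analysis Server.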

Make sure that the command text contains SRVMInventoryCube.

At this point, you should be able to view the server inventory report dashboard.

Click on the Summary worksheet and you should see content similar to this figure.

Depending on the data in the system, the slicers may show different values that are selected by default. The left axis shows the processor count and the right axis shows the VM Instance count. If the slicer values are changed, the report will change as well.

As you can see in this report, the processor count and the VM instance count grew between May and June of 2013.

An important thing to note is that if you try to print this page for your records, the slicers will not be displayed, since the print area is configured to exclude them.

Further, there is a placeholder for key information that allows providers to identify themselves in the report when a scenario calls for communicating with license resellers.

Detailed Report

The Server Inventory Report also has a detail worksheet. It contains the information behind the summary report, which is useful when one wants to understand the finer details. As you can see in this figure, a monthly breakdown is available of how many processors each host had and how many VM instances ran on it.

Expanding a host, the report lists all the VM instances that were hosted on that server.

This view is agnostic of tenants and workloads because the licensing scenarios require only processor counts and high water mark of VMs on the servers for a given month.

Conclusion

This is a very powerful capability that lets service providers accurately and easily report license consumption based on the SPLA framework with the 2012 R2 release.

In subsequent blogs, we will provide more details as we hear more from our customers.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

Categories: MDT

Creating Usage Analytics Reports using Excel

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Administration and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.”

As described in that blog post, enabling usage analytics scenarios for service providers is a key investment area for this release. Service providers cannot successfully monetize their services in the absence of a system that tracks and provides analytics on tenant resource usage.  

Overview of Service Reporting

The “How to Integrate Your Billing System with the Usage Metering System” blog post provided an overview of the Usage Metering System. This blog post focuses on how we extract the same data, provide analytics on tenant VM resource utilization, and make them available in Excel pivot tables (analysis via Performance Point is covered in a subsequent blog post). As shown in the figure below, the Service Reporting component extracts the data from the Usage REST API and transforms it into OLAP cubes for analytics.

  

Service Reporting is a data warehousing solution developed on top of the Microsoft Business Intelligence (BI) stack.

In the 2012 R2 release, data is correlated from two sources:

  1. Windows Azure Pack Usage (Tenant Resource Utilization data)
  2. Operations Manager (Fabric data such as Servers, VM Instances etc.)

Service Reporting is designed for the service administrator to create reports using Excel pivot tables and obtain insights that help with capacity planning and show-back needs.

VM Usage Data Pipeline

In the figure below, the VM usage data source is VMM (Virtual Machine Manager). This data is periodically collected and stored in the WAP (Windows Azure Pack) Usage Database along with usage data of other resources. As mentioned earlier, the WAP Usage system was detailed in the blog How to Integrate Your Billing System with the Usage Metering System.

The Service Reporting component reads data from the Usage Database and then transforms the raw usage data into OLAP cubes for analytics. The data in these OLAP cubes are available for visualization and for drill down analytics using Excel and Performance Point.


Scenarios

For the 2012 R2 release we targeted the following usage analysis scenarios:

  1. Usage trends across different time dimensions (hourly, daily, monthly, quarterly, yearly) to provide critical trending patterns
  2. Pivoting by subscriptions to understand which subscribers are impacting the business
  3. Pivoting by clouds/plans to understand which plans are used the most
  4. Side-by-side comparison between allocated capacity for tenants and their usage to help understand utilization ratios

These scenarios can be visualized in Excel and in Performance Point. Excel is a very popular tool for most reporting needs, and has pivot table capabilities that come in very handy for ad-hoc analytics. Excel workbooks can contain data to be analyzed even when disconnected from the SQL Server Analysis Server.

Configuring Usage Reports

The prerequisites for Usage Reports to work are that the Service Reporting component must be working correctly and usage data must be flowing into the system. This blog does not address the installation and deployment of the Service Reporting component. The Excel Usage Reports shipped out of the box in 2012 R2 need to be connected to the Analysis Server that holds the Usage Data Cube. This can be easily done by opening the Usage Report from the Reports folder in the install directory of the Service Reporting component. Navigate to the Data->Connections menu in Excel and open up the default connection that is shipped out of the box and edit it. As you can see in the figure below, you can navigate to the Definition tab in the Connection properties.

The connection string to use here is highlighted below.

Ensure you add the correct connection properties and save. The only property you should be changing is the source (highlighted in red) below.

Provider=MSOLAP.5;Integrated Security=SSPI;Persist Security Info=True;Initial Catalog=UsageAnalysisDB;Data Source=fab-servicereporting;MDX Compatibility=1;Safety Options=2;MDX Missing Member Mode=Error

Make sure the command text contains SRUsageCube.

Once these connection properties are saved, the Excel report can now be populated with data from the Usage Data Cube and its capabilities.

To test it out, you can create a brand new worksheet and then create a pivot table using the connection you just created.

Step 1: Open a new worksheet

Step 2: Click on Insert->Pivot Table

Step 3: Make sure External data source is selected

Step 4: Click on Choose Connection and select the data connection configured in the previous step

Step 5: Save the changes and close the dialog to return to the Excel worksheet

If the data connection is configured correctly, you should be seeing this form on the right side of your worksheet.

Click on “All” and you will see a drop down with the following items.

Click on the Settings icon (the round sprocket) and collapse all the fields.

You will see all the 19 “measures” that are available out of the box for reporting different utilization data points.

At this point, you are ready to create your own version of the report that is provided in the sample Usage Report.

Explore the Pivot Table fields and try to compose the report similar to the one in the figure below by dragging and dropping the different fields to the appropriate areas (Filters, Columns, Rows, Values).

As you add the rows and columns, you will start to see the report shape up to look like the figure below.

Slicers

Once you have a report that looks like this you can augment this report by adding slicers to give you filtering options.

Go to Insert->Slicer and choose the same connection that the pivot table is using. This will provide you with options to choose the necessary filter. Select VMM User Role (which is the same as Subscriptions) to see the list of subscribers in the system; selecting one scopes the results.

In this instance, I have created a slicer with “VMM User Role” but changed the Display name to “Subscriptions” to make it more intuitive. All the available “Subscriptions” are shown in this list and all of them are in scope.

Now, if you select just one of them, say “Unknown User Role”, you will see the report change to display only the records related to that subscription, as shown in the table below.

As you can see, all the values instantly change to reflect the selected filter, giving the administrator a great ability to look at subscribers and compare them side by side. One can multi-select within the same slicer and chain other slicers to provide richer analytics.

 

Conclusion

While Excel is super powerful and ubiquitous, Performance Point allows greater collaboration by enabling dashboards. By connecting to the Analysis Server of Service Reporting, one can take advantage of all the key fields that are available in Service Reporting to create powerful dashboards that help the service administrator see the key metrics of the business in a single location.

Subsequent blog posts will go into the details of configuring Performance Point dashboards.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

 

Categories: MDT

How to Integrate Your Billing System with the Usage Metering System

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Provider experiences in enabling the billing and chargeback of Tenant Resource Utilization and how it applies to Brad’s larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post:   What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.

As described in that blog post, enabling billing scenarios for the service providers is a key investment area for this release. Service providers cannot successfully monetize their services in the absence of a system that tracks and reports on tenant resource utilization. These services are offered on a subscription basis and therefore it is critical that the resource utilization is reported at the subscription granularity to assist in billing scenarios. 

Overview of the Usage Metering System

The usage system is located alongside the Service Management API in the Windows Azure Pack (WAP) stack. This enables it to access tenant utilization data for all the services provided in the WAP stack and to provide a REST API that is leveraged to integrate with the provider's billing system, as illustrated in the figure below.

 

Service providers have invested a lot in their own billing system and it was critical that the 2012 R2 release be able to integrate with the existing systems in place. Therefore, we targeted our investments to ensure that 2012 R2 integrates easily with various billing providers and ITFM (IT Financial Management) products that are in the market.

It is important to note that no billing system ships in the 2012 R2 release. Service providers have to create a billing integration module (also referred to as a “billing adapter”) to provide data to the billing system they are using.

Now, let's go a little deeper into the building blocks of the usage metering system and how it's architected.

The Usage Metering System has four main components. Three of them, the Data Generator, the Data Collector, and the Usage Database, are internal to the system; the fourth, the Usage API, is an external-facing API that the billing adapter interfaces with to extract tenant resource utilization data.

Data Generator

The Data Generator tier represents the services (resource providers) registered as part of the system. They collect information specific to a subscription and expose it to the Usage Collector, which expects the information to follow a specific data contract. This contract is the same across all providers, and all providers in the system adhere to it. IaaS metrics in Windows Azure Pack are provided by the VM Clouds resource provider.

Data Collector

The Data Collector is an internal component that periodically collects usage information from all the registered Data Generators and stores it in the Usage Database.

Usage Database

The Usage Database is a transient store, which stores all the data from the various Data Generators for a period of 30-40 days. The expectation is that during this time, the billing system would have extracted the data from this database for billing purposes.

Usage API

This is a RESTful API and the only way to extract data from the Usage Database. Service providers typically have a billing system that generates monthly bills for their subscribers, and they can easily integrate with it by extracting data from the Usage Database through the Usage API. The component that customers develop to integrate with their billing system is called a “Billing Adapter”, which serves as a bridge between the Usage Metering system and the customer billing system.

 

In the figure below, in the red circles, you can see the VM Clouds resource provider, alongside other resource providers such as Service Bus, generating the IaaS resource utilization data, which is collected and stored in the Usage Database and made available through the Usage API.

 

The Usage API can be leveraged to create the billing adapter that interfaces with the billing system within the provider datacenter. In the figure below, you can see the “billing adapter” serving to integrate the Usage Metering System with the billing provider within the provider datacenter.

 

The “Service Reporting” component and the analytics it provides are discussed in the blog post titled “Creating Usage Analytics Reports using Excel and Performance Point”, while this blog post details how to create a “Billing Adapter”.

Interacting with the Usage System

This section explains the ways an external system can interact with the Usage Metering System. Two different types of information are available through the Usage API:

  1. Tenant resource utilization for all subscriptions
  2. Plan, add-on, subscription, and account information

The information is presented via two channels:

  1. Usage API that queries all the historical data
  2. Real time CRUD events via the Event Notification System.

The billing adapter uses both of these channels to effectively create billing reports while responding in real time as plans, subscriptions, and accounts are created and managed in the environment.

Usage API (Exposed on the Usage Endpoint)

Usage Data

The Usage endpoint exposes an API to return tenant resource utilization data pertaining to every subscription across services. The caller (“Billing Adapter”) needs to provide the “startid”. This parameter informs the Usage Metering System to return usage data, starting from that ID. The Billing Adapter advances the “startid” based on the number of records returned for the subsequent call.
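The startid/batchSize contract amounts to cursor-style paging: read a batch, advance the cursor by the number of records returned, and stop when a batch comes back empty. A Python sketch of that loop, where fetch_batch stands in for the actual HTTP call to the usage endpoint:

```python
def read_all_usage(fetch_batch, batch_size=50):
    """Drain the usage feed by advancing startId by the records returned.

    fetch_batch(start_id, batch_size) stands in for the HTTP call to
    /usage?startId={startId}&batchSize={batchSize} and returns a list.
    """
    start_id = 0
    records = []
    while True:
        batch = fetch_batch(start_id, batch_size)
        if not batch:
            break
        records.extend(batch)
        start_id += len(batch)   # advance the cursor for the next call
    return records

# Simulated backend holding 120 usage records with sequential ids
data = list(range(120))
fake_fetch = lambda start, size: data[start:start + size]
print(len(read_all_usage(fake_fetch)))   # 120
```

A real billing adapter would persist the last startId between runs so that each billing cycle resumes where the previous one left off.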

 

Method Name   API                                              Response
GET           /usage?startId={startId}&batchSize={batchSize}   UsageRecordList<UsageRecord>

Plans\Addon\Subscription Data

The Usage endpoint also exposes APIs to return data on existing plans, addons, subscriptions, etc.

Method Name   API                                                                  Response
GET           billing/plans?startId={startId}&batchSize={batchSize}                UsageEventList<Plan>
GET           billing/addons?startId={startId}&batchSize={batchSize}               UsageEventList<AddOn>
GET           billing/subscriptions?startId={startId}&batchSize={batchSize}        UsageEventList<Subscription>
GET           billing/planServices?startId={startId}&batchSize={batchSize}         UsageEventList<ResourceProviderReference>
GET           billing/planAddons?startId={startId}&batchSize={batchSize}           UsageEventList<AddOnReference>
GET           billing/subscriptionAddons?startId={startId}&batchSize={batchSize}   UsageEventList<AddOnReference>

Notes:

STARTID is the record id of the first record you want to fetch in a particular cycle.

BATCHSIZE is the maximum number of records you want to fetch.

USAGE-RESTAPI-ENDPOINT can be found at https://<Admin-API-Machine-Name>:30022

Configuration

The administrator needs to ensure that the Usage Metering Service is configured correctly to authenticate the Billing Adapter, by making sure the service accepts the credentials that will be used to authenticate. The steps below describe how to set the credentials properly. Note: During the installation process, the password is set to a random sequence, hence this step is necessary to establish connectivity.

On the WAP deployment launch the Management Service PowerShell Module on the Admin API server.

Then, run the commands below:

· Set-MgmtSvcSetting -Namespace UsageService -Name Username -Value '<EnterUserName>'

· Set-MgmtSvcSetting -Namespace UsageService -Name Password -Value '<EnterPassword>' –Encode

Once the username and password are set to known values, these values can be used by the Billing Adaptor to authenticate.
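With Basic authentication configured as above, the billing adapter authenticates by sending an Authorization header built from those credentials. A Python sketch of the header construction (the username and password values here are placeholders):

```python
import base64

def basic_auth_header(username, password):
    """Build the HTTP Basic Authorization header value for the usage endpoint."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return "Basic " + token

print(basic_auth_header("UsageClient", "s3cret"))
```

The sample C# adapter does the equivalent internally when it is handed the username, password, and "basic" authentication type.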

Consuming the Usage REST API

The following steps are required to consume the Usage REST API:

  • Define an httpClient
  • Construct a URI to query the Usage Metering Service
  • StartID is the record id of the first record you want to fetch and BatchSize is the maximum number of records you want to fetch.
  • Execute the API call and read Usage Data
  • Data Contracts can be used to de-serialize the response returned (as in Sample below)
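The steps above can be sketched with the Python standard library (the machine name and credentials are placeholders; the sample project does the same in C# with HttpClient):

```python
import base64
import urllib.request

def build_usage_request(base_address, start_id, batch_size, username, password):
    """Construct the GET request for /usage with Basic auth and JSON headers."""
    uri = f"{base_address}usage?startId={start_id}&batchSize={batch_size}"
    token = base64.b64encode(f"{username}:{password}".encode()).decode("ascii")
    req = urllib.request.Request(uri)
    req.add_header("Authorization", "Basic " + token)
    req.add_header("Accept", "application/json")
    return req

req = build_usage_request("https://admin-api-machine:30022/", 0, 50, "UsageClient", "pwd")
print(req.full_url)   # https://admin-api-machine:30022/usage?startId=0&batchSize=50
# urllib.request.urlopen(req) would execute the call; the JSON response can then
# be deserialized (e.g. with json.loads) into the data-contract shapes.
```

The same pattern applies to the billing/* endpoints, with only the path changing.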

Usage Data Model

The Usage Data Model is shown in the figure below and can be used to associate the data returned by the Usage API.

Event Notification System

The Service Management API keeps track of events within the Usage Metering System and sends notifications to any registered subscriber (e.g. a Billing Adapter). Examples of events are plan, addon, and subscription creation/updates and account creation.

Notifications are sent as a Post call to an endpoint registered with the Usage Metering System. The Management Service PowerShell Module should be used to define the required notification end point. Note that the notificationEndPoint must end with a trailing slash.
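Because a missing trailing slash is an easy misconfiguration, a small guard (illustrative only) can normalize the endpoint before it is registered:

```python
def normalize_notification_endpoint(endpoint):
    """The registered notificationEndPoint must end with a trailing slash."""
    return endpoint if endpoint.endswith("/") else endpoint + "/"

print(normalize_notification_endpoint("https://localhost/billing"))   # https://localhost/billing/
print(normalize_notification_endpoint("https://localhost/billing/"))  # already normalized
```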

Description: Subscribing for plan, add-on and account changes.

Verb: Set

Command: MgmtSvcNotificationSubscriber

Parameters: -NotificationSubscriber, -Name, -Enabled, -SubscriberType, -Endpoint, -AuthenticationMode, -AuthenticationUsername, -AuthenticationPassword, -EncryptionKey, -EncryptionAlgorithm, -ConnectionString, -Server, -Database, -UserName, -Password

SubscriberType:

  • BillingService
  • MandatoryService
  • OptionalService

Example:

Set-MgmtSvcNotificationSubscriber -Name Billing –SubscriberType BillingService -Enabled $false -Endpoint https://localhost/ -AuthenticationMode Basic

The Billing Adapter can be set up to handle the event in a blocking or a non-blocking manner. The SubscriberTypes BillingService and MandatoryService are both blocking; the only non-blocking option is OptionalService. If the Billing Adapter is set up to be blocking, a plan creation event in the Service Management API should trigger a corresponding plan to be created in the billing system; if that operation is not successful, the plan creation at the Service Management API will fail. This enables consistency between the platform and the billing system.

Notification Data Contracts

Notifications sent to the billing adapter adhere to the NotificationEvent<T> type, where T can be one of the following objects:

  • Plan
  • PlanAddOn
  • AdminSubscription
  • ResourceProviderReference
  • PlanAddOnReference

When you download the WAP (Windows Azure Pack), the data contracts can be found under:

· \SampleBillingAdapter\DataContracts\*

The two important properties of NotificationEvent are:

1. NotificationEvent Method, which can have the following values:

    1. Post to create a new account/subscription/addon/plan

    2. Delete to delete an account/subscription/addon/plan

    3. Post an update to a plan

2. NotificationEvent Entity, which carries the event when any of the above objects are created/updated/deleted.

Pricing APIs

The Pricing API is designed to let the billing system in the Service Provider datacenter specify prices for Plans and Add-ons that flow into the 2012 R2 system. The billing adapter can choose to provide prices for each Plan or Plan add-on in real time. As part of implementing the notification subscriber, we have specifications for the APIs below that the billing service can implement to enable pricing data to flow back into the system. The implementation of these APIs is optional. If they are enabled, the price values for the plans and add-ons will be visible in the WAP Tenant site at the time of addition of the Plan\Add-On.

Method Name   API                                                                                       Response
GET           /planPrice?id={id}&region={region}&username={username}                                    String
GET           /addonPrice?id={id}&region={region}&username={username}&subscriptionId={subscriptionId}   String

 

Notes:

  • This API is expected to return a string with pricing information. The 2012 R2 system will display this information alongside plans for the subscriber, but these are textual and not typed.
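Since the response is just a display string, a billing service's implementation of /planPrice can be as simple as a lookup keyed by plan id. A sketch of that idea (the plan ids, prices, and fallback text below are invented for illustration; real business logic might also vary prices by region or username):

```python
# Hypothetical price table maintained by the billing service
PLAN_PRICES = {
    "gold-iaas": "$499/month",
    "silver-iaas": "$199/month",
}

def plan_price(plan_id, region=None, username=None):
    """Return the display string for /planPrice?id={id}&region={region}&username={username}."""
    return PLAN_PRICES.get(plan_id, "Contact sales for pricing")

print(plan_price("gold-iaas"))   # $499/month
```

Whatever string is returned is what the subscriber sees next to the plan in the WAP Tenant site.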

Detailed Description of the Sample Adapter Project Files

This section explains the content of the sample billing adapter (SampleBillingAdapter.sln). At a high level the billing adapter consists of the below parts:

1. SampleBillingAdapter.cs provides an example of the different calls to the Usage REST API

2. The set of Data Contracts that can be used to deserialize the API responses

SampleBillingAdapter.cs

This is the entry point for the application. The file contains the below:

1. Instantiation of a WAPUsageServiceHttpClient with the required configuration data.

2. This client is then used to query the usage service. There are seven types of calls that can be made for the billing data. The data is deserialized into instances of the data contracts that are included in the DataContracts directory.

3. The data is then printed to the console.

Example:

using Microsoft.WindowsAzurePack.Usage.DataContracts;
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

static void Main(string[] args)
{
    // create a WAP usage service http client (this data can be read from a config file)
    string mediaType = "application/json"; // application/json or application/xml
    string authenticationType = "basic";
    string Username = "UsageClient";
    string Password = "specify the correct pwd";
    string Machine = "specify the machine where the usage service is running";
    string Port = "30022";
    string BaseAddress = String.Format("https://{0}:{1}/", Machine, Port);
    var usageService = new WAPUsageServiceHttpClient(Username, Password, authenticationType, BaseAddress, mediaType);

    // gather usage and billing data asynchronously using the Usage API
    var usage = usageService.GetDataAsync<UsageRecordList>("usage", 0, 50);
    var plans = usageService.GetDataAsync<UsageEventList<Plan>>("billing/plans", 0, 50);
    var subscriptions = usageService.GetDataAsync<UsageEventList<Subscription>>("billing/subscriptions", 0, 50);
    var addOns = usageService.GetDataAsync<UsageEventList<AddOn>>("billing/addons", 0, 50);
    var planAddOns = usageService.GetDataAsync<UsageEventList<AddOnReference>>("billing/planAddons", 0, 50);
    var subscriptionAddOns = usageService.GetDataAsync<UsageEventList<AddOnReference>>("billing/subscriptionAddons", 0, 50);
    var planServices = usageService.GetDataAsync<UsageEventList<ResourceProviderReference>>("billing/planServices", 0, 50);

    #region Print the usage and billing data to the console ...
    Console.WriteLine("Printing Usage Data - Press Enter to Proceed...");
    Console.ReadLine();
    usageService.PrintUsageData(usage.Result);
    usageService.PrintPlanData(plans.Result);
    usageService.PrintSubscriptionData(subscriptions.Result);
    usageService.PrintAddOnsData(addOns.Result);
    usageService.PrintPlanAddOnsData(planAddOns.Result);
    usageService.PrintSubscriptionAddOnsData(subscriptionAddOns.Result);
    usageService.PrintPlanServicesData(planServices.Result);
    #endregion
}

Data Contracts

The DataContracts directory contains all the required Data Contracts to interact effectively with the Usage API.

VM Data Gathered from the Usage API

VM Provider

Measure                             Unit   Description
MemoryAllocated-Min                 MB     Lowest allocated memory size for a VM within an hour timespan
MemoryAllocated-Max                 MB     Highest allocated memory size for a VM within an hour timespan
MemoryConsumed-Min                  MB     Lowest consumed memory size for a VM within an hour timespan
MemoryConsumed-Max                  MB     Highest consumed memory size for a VM within an hour timespan
MemoryConsumed-Median               MB     Median consumed memory size for a VM within an hour timespan
CPUAllocationCount-Min              Each   Lowest number of CPU cores allocated for a VM within an hour timespan
CPUAllocationCount-Max              Each   Highest number of CPU cores allocated for a VM within an hour timespan
CPUPercentUtilization-Median        %      Median percentage of CPU consumption for a VM within an hour timespan
CrossDiskIOPerSecond-Min            MB     Lowest input/output per second (IOPS) across all attached disks for a VM within an hour timespan
CrossDiskIOPerSecond-Max            MB     Highest input/output per second (IOPS) across all attached disks for a VM within an hour timespan
CrossDiskIOPerSecond-Median         MB     Median input/output per second (IOPS) across all attached disks for a VM within an hour timespan
CrossDiskSizeAllocated-Min          MB     Lowest allocated disk size across all attached disks for a VM within an hour timespan
CrossDiskSizeAllocated-Max          MB     Highest allocated disk size across all attached disks for a VM within an hour timespan
PerNICKBSentPerSecond-Min           MB     Lowest bytes sent per second on a network adapter attached to a VM within an hour timespan
PerNICKBSentPerSecond-Max           MB     Highest bytes sent per second on a network adapter attached to a VM within an hour timespan
PerNICKBSentPerSecond-Median        MB     Median bytes sent per second on a network adapter attached to a VM within an hour timespan
PerNICKBSentPerSecond-Average       MB     Straight average bytes sent per second on a network adapter attached to a VM within an hour timespan
PerNICKBReceivedPerSecond-Min       MB     Lowest bytes received per second on a network adapter attached to a VM within an hour timespan
PerNICKBReceivedPerSecond-Max       MB     Highest bytes received per second on a network adapter attached to a VM within an hour timespan
PerNICKBReceivedPerSecond-Median    MB     Median bytes received per second on a network adapter attached to a VM within an hour timespan
PerNICKBReceivedPerSecond-Average   MB     Straight average bytes received per second on a network adapter attached to a VM within an hour timespan

As you can see, this is a powerful API that allows bi-directional data flow: usage data flows from the 2012 R2 stack to the billing adapter, and pricing data (with prices decided by the billing system's business logic) flows from the billing system into the 2012 R2 stack.

In subsequent blogs, we will provide more details as we hear more from our customers.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

Categories: MDT

How to Create a Basic Plan Using the Service Administration Portal

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Administration and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.”

As described in that blog post, success for a service provider largely hinges on its ability to attract and retain tenants.  It therefore falls to the service provider to think about how to use service offerings to draw tenants in, to consider different tactics for differentiation and upsell, and to maintain healthy tenant accounts.  To help service providers meet these challenges, we have invested in key enhancements to the service management experience targeting these specific areas: 

  • Using value-based offers to attract tenants and drive new subscriptions
  • Using offer differentiation & upsell to drive more consumption
  • Managing tenant accounts and subscriptions

A “service provider” here could be an IT organization within a company that is providing services such as IaaS to other business units in the organization. Thus, an IT organization that operates like a service provider to other business units must create compelling service offerings in much the same way as a service provider tries to attract customers. 

Overview of an IaaS Plan

Service providers can build bundles of service offerings that are termed “plans”. Plans include composable service offerings that can be assembled together in order to target different types of prospective tenants.  Tenants consume these service offerings by subscribing to a plan.  In a very general sense, a cloud is nothing more to the tenant than a set of capabilities (“service offerings”) at some capacity (“quotas”).
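The "capabilities at some capacity" idea can be sketched as a tiny data model. This is purely illustrative (the class and field names here are hypothetical, not WAP API types): a plan is a named container of offerings, each offering carries its quotas, and a plan starts private.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceOffering:
    """A composable capability included in a plan, e.g. 'Virtual Machine Clouds'."""
    name: str
    quotas: dict  # capacity limits, e.g. {"max_vms": 20, "max_cores": 40}

@dataclass
class Plan:
    """A bundle of service offerings that tenants subscribe to."""
    name: str
    offerings: list = field(default_factory=list)
    public: bool = False  # plans are private by default until made public

    def add_offering(self, offering: ServiceOffering) -> None:
        self.offerings.append(offering)

# A tenant-facing IaaS plan: VM clouds at a fixed capacity.
plan = Plan("BlogIaaSPlan")
plan.add_offering(ServiceOffering("Virtual Machine Clouds",
                                  {"max_vms": 20, "max_cores": 40}))
print(plan.public)         # False -- private until published
print(len(plan.offerings)) # 1
```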

To support this business model, we have designed a very easy-to-use experience for creating offers, selecting the kinds of service offerings to include and then setting the quotas to control how much can be consumed by any single subscription.  But, it goes beyond a simple set of compute, storage, and networking capabilities at some quota amount! 

One of the most important aspects of plan creation is the process of including library content to facilitate simplified application development.  For that reason, the plan creation experience also features a way to include templates for foundational VM configurations and workloads.

Plan Creation

In the Service Administration portal, the left side navigation has an entry called “Plans” which lists all the plans currently in the system. As you can see in the figure below, the administrators have created many different plans. When plans are created they are “Private” by default, meaning they are not yet visible to prospective tenants.

At a quick glance, the administrator can identify which plans are accessible by tenants and how many subscriptions they have, along with other pertinent status.

Creating a new plan is very easy and is enabled by the Quick Create experience.

Scroll down and click on New, which takes you to the Plan creation experience.

This experience enables plan creation and “plan add-ons”. We will be focusing on plan creation in this blog post. The plan creation experience is a simple wizard with just three screens. In the first screen, you give the plan a name, and in this case it is going to be called “BlogIaaSPlan” as shown in the figure below.

 

As mentioned earlier, a plan is a container of service offerings that are available in the system. This system is configured to provide VMs, web sites, SQL Server databases, and service bus services. Therefore, the plan wizard allows all of these types of services to be offered in the plan.

We will focus only on virtual machines in this blog post. In this screenshot the ‘Virtual Machine Clouds’ service is selected and ‘Virtual Machine Clouds’ is chosen from the drop down.

Skip the plan add-ons for now and click OK. That completes the plan Quick Create experience. As you can see, the BlogIaaSPlan is created and is private by default.  The plan is “not configured” yet and needs to be configured before it can be made public for tenants to be able to subscribe to the plan.

Configuring the Plan

Clicking on the plan (BlogIaaSPlan) presents the plan dashboard page. This page shows the plan statistics at the top and a list of all the services that are available on the plan along with plan add-ons associated with the plan.

Since we are creating a VM-only plan, we have not selected any services other than Virtual Machine Clouds.

 

 

As seen in the figure above, the plan is neither active nor configured for use.

Click on ‘Virtual Machine Clouds’ to configure the plan.

Associating the Plan with a VMM Server and Cloud

A plan that offers VM clouds is associated with a specific Virtual Machine Manager server and a Virtual Machine Manager cloud within that VMM server. When the tenant subscribes to this plan and instantiates a virtual machine, the system will deploy that VM with the specified properties on the associated cloud via the associated VMM server.

As shown in the figure to the left, the VMM server is a mandatory property that needs to be set for an IaaS plan.

As part of the service registration, the VMM server information is already available in the system. Therefore, selecting the correct VMM server is very easy.

Once a VMM server has been selected, the VMM cloud managed by that VMM server needs to be selected and bound to the plan.

As you can see in the figure, once a VMM server has been identified, the system queries the VMM server to list all the VMM clouds available on the VMM server.

We will choose the Gold Cloud to be associated with this plan.

A plan allows the administrator to bind the service to a specific cloud and to control the upper limits on its usage.

Assigning Quota Limits to a Plan

The administrator can set limits on the core compute attributes, such as the maximum number of VMs, logical cores, memory, storage, and virtual networks each subscription can have.

As you can see in this figure, you can set an absolute limit by specifying a number against each compute property, or leave it unlimited, in which case it will be limited only by the underlying fabric constraints.
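The "a number or unlimited" quota semantics can be sketched in a few lines (an illustrative model only, not the actual enforcement code): an unset quota means the request is bounded only by the fabric, while an explicit number caps the subscription below that.

```python
def within_quota(requested: int, quota_limit, fabric_limit: int) -> bool:
    """Check a requested amount against a per-subscription quota.

    quota_limit of None models 'unlimited' -- the request is then bounded
    only by the underlying fabric constraint.
    """
    effective = fabric_limit if quota_limit is None else min(quota_limit, fabric_limit)
    return requested <= effective

print(within_quota(10, 20, 100))    # True  -- explicit limit of 20 VMs
print(within_quota(50, None, 100))  # True  -- unlimited quota, fabric allows 100
print(within_quota(150, None, 100)) # False -- fabric constraint still applies
```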

Adding Allowed Networks to the Plan

The plan allows various cloud resources to be made available to the subscriber in a controlled manner. In this section we will go through the networks that are made available to the plan.

When a plan is being configured for the first time, there are no cloud resources assigned to the plan. Therefore all the cloud resources will be empty. Click on ‘Add networks’ to add networks to the plan and add the Fabrikam External network to the plan.

 

 

 

When a tenant subscribes to this plan and creates a VM, the only networks available to that VM will be those allowed by the plan; in this case, only the Fabrikam External network.

Adding Other Resources

Following the same pattern as that of networks, we can specify which hardware profiles and virtual machine templates are accessible within the plan. In this plan, all the hardware profiles and only the Windows Server 2012 VM template are chosen, resulting in the figures below.

 

 

With these configurations complete, the plan is ready to be subscribed to by tenants. Advanced scenarios such as the gallery and other settings will be discussed in later blogs.

Next, the plan needs to be made public so that it can be discovered and subscribed to by tenants.

Plan Activation

Go back to the Plans List view to see all the plans. As you can see in the figure below, the plan is now configured, but still private. 

Select the BlogIaaSPlan and then make it public by changing its access privileges. As shown in the figure below, you can do this from the Change Access command.

 

The plan status will change to reflect the fact it’s now a publicly accessible plan. You can see that in the Plans view as shown in the figure below.

Once the plan is made public it can be subscribed to and tenants can start to deploy virtual machines against the subscription.

Conclusion

In subsequent blogs, we will provide more details of creating advanced plans.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

Categories: MDT

Using Server Inventory Reports to Help Stay Compliant with Service Provider Licensing Agreement (SPLA)

The Deployment Guys - Thu, 08/01/2013 - 11:30

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Administration and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.”

As described in that blog post, helping service providers stay compliant within the SPLA framework is another key investment area for this release. Service providers that we talked to during the planning phases clearly identified the difficulty of accurately reporting on the licenses consumed within the datacenter; this is especially true for today's very dynamic datacenters. 

Overview of Server Inventory Reporting

The Creating Usage Analytics Reports Using Excel blog post provided an overview of how to use Excel to create powerful reports that help the provider gain insights into the usage patterns of their customers. This blog post will focus on how the service administrator can leverage the same system to create server inventory reports that give service providers insight into the Windows Servers and hosted VM instances that power their services, and help them assess the licensing impact with respect to the SPLA framework in the 2012 R2 release. As shown in the figure below, Service Reporting extracts fabric (host) data from Operations Manager (also called OM) to process the data relevant for licensing scenarios, as highlighted by the red circle.

As called out in the earlier blog on this topic, Service Reporting is a data warehousing solution developed on top of the Microsoft Business Intelligence (BI) stack.

In the 2012 R2 release, data is correlated from two sources:

  1. Windows Azure Pack Usage (Tenant Resource Utilization data)
  2. Operations Manager (Fabric data such as Servers, VM Instances etc.)

The Service Reporting is designed for the Service Administrator who can create reports on their own using Excel power pivots and obtain the insights that help them in their capacity planning needs. While the previous blog went into the details of how to create reports from Windows Azure Pack for Tenant Resource Utilization, this blog will focus on how to leverage the Server Inventory report that is shipped out of the box in 2012 R2.

Server Inventory Data Pipeline

In the figure below, the VM usage data source is VMM (Virtual Machine Manager). This data is periodically collected and stored in the OM (Operations Manager) database. The data in the Operations Manager database contains information about the Hyper-V hosts and the Virtual Machines that are hosted on those servers.

As illustrated in the figure below, the details about the servers and the guest VMs is extracted by the Service Reporting component and is then processed to create the relevant OLAP cubes, which are then used to create the Excel Reports that have the Server Inventory information.

 

 

Scenarios

For the 2012 R2 release we targeted the server inventory scenarios below. The goals were to enable the Service Provider to be able to create accurate SPLA reports, understand trends and use the report for planning and auditing scenarios.

  1. Report on the number of processors and monthly VM high water mark on the Hyper-V hosts
  2. Trending data for processor count and VM high water mark, for up to 3 years
  3. Detailed view of all the servers and VMs for up to 3 years
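The monthly VM high water mark in scenarios 1 and 2 is simply the peak concurrent VM count observed on a host in a given month. A sketch of how such a figure could be computed from periodic samples (illustrative only; the actual pipeline computes this inside the Service Reporting OLAP cubes):

```python
from collections import defaultdict

def monthly_high_water_mark(samples):
    """Compute the per-month peak VM count from (month, vm_count) samples.

    samples: iterable of ("YYYY-MM", count) pairs, a simplified stand-in
    for the periodically collected Operations Manager host data.
    """
    peaks = defaultdict(int)
    for month, count in samples:
        peaks[month] = max(peaks[month], count)
    return dict(peaks)

samples = [("2013-05", 8), ("2013-05", 12), ("2013-05", 9),
           ("2013-06", 14), ("2013-06", 11)]
print(monthly_high_water_mark(samples))  # {'2013-05': 12, '2013-06': 14}
```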

Configuring the Server Inventory Report

The prerequisite for the Server Inventory Report is that the Service Reporting system must be working correctly and server inventory data from Operations Manager must be flowing into the system. This blog does not address the installation and deployment of the Service Reporting component.

The Server Inventory report shipped out of the box in 2012 R2 needs to be configured to connect to the Analysis Server that holds the Server Inventory cubes. This can be easily done by opening the Server Licensing Report from the Reports folder in the install directory of the Service Reporting component. Navigate to the Data->Connections menu and open up the default connection that is shipped out of the box and edit it. As you can see in the figure below, you can navigate to the Definition tab in the Connection properties.

The connection string to use here is highlighted below.

Ensure you add the correct connection properties and save. The only property you should be changing is the source (highlighted in bold) below.

Provider=MSOLAP.5;Integrated Security=SSPI;Persist Security Info=True;Initial Catalog=UsageAnalysisDB;Data Source=fab-servicereporting;MDX Compatibility=1;Safety Options=2;MDX Missing Member Mode=Error

Make sure that the command string has the text SRVMInventoryCube.
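Since the only property to change is Data Source, the edit amounts to rewriting one key=value pair in the semicolon-delimited OLE DB-style connection string. A small sketch of that rewrite (a hypothetical helper, not part of the product; server name is a placeholder):

```python
def set_data_source(connection_string: str, server: str) -> str:
    """Rewrite the Data Source property of an OLE DB-style connection string."""
    parts = []
    for pair in connection_string.split(";"):
        key, _, _value = pair.partition("=")
        if key.strip().lower() == "data source":
            pair = f"{key}={server}"  # swap in the target Analysis Server
        parts.append(pair)
    return ";".join(parts)

original = ("Provider=MSOLAP.5;Integrated Security=SSPI;"
            "Initial Catalog=UsageAnalysisDB;Data Source=fab-servicereporting")
print(set_data_source(original, "my-analysis-server"))
```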

At this point, you should be able to view the server inventory report dashboard.

Click on the Summary worksheet and you should see content similar to this figure.

Depending on the data in the system, the slicers may show different values that are selected by default. The left axis shows the processor count and the right axis shows the VM Instance count. If the slicer values are changed, the report will change as well.

As you can see in this report, the processor count and the VM instance count grew between May and June of 2013.

An important thing to note is that if you try to print this page for your records, the slicers will not be displayed, since the print area is configured to exclude them.

Further, there is a placeholder for key information to be entered which allows the provider to identify themselves in the report when the scenarios call for communicating with license resellers.

Detailed Report

The Server Inventory Report has a detail worksheet.  It contains the information that composes the summary report, which is useful when you want to understand the finer details. As you can see in this figure, a monthly breakdown of how many processors and how many VM instances each host had is available.

Expanding a host, the report will list all the VM instances that were hosted on that server.

This view is agnostic of tenants and workloads because the licensing scenarios require only processor counts and high water mark of VMs on the servers for a given month.

Conclusion

This is a very powerful capability that lets service providers accurately and easily report license consumption based on the SPLA framework with the 2012 R2 release.

In subsequent blogs, we will provide more details as we hear more from our customers.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

Categories: MDT

Creating Usage Analytics Reports using Excel

The Deployment Guys - Thu, 08/01/2013 - 11:30

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Administration and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.”

As described in that blog post, enabling usage analytics scenarios for service providers is a key investment area for this release. Service providers cannot successfully monetize their services in the absence of a system that tracks and provides analytics on tenant resource usage.  

Overview of Service Reporting

The “How to Integrate Your Billing System with the Usage Metering System” blog post provided an overview of the Usage Metering System. This blog post will focus on how we extract the same data, provide analytics on tenant VM resource utilization, and make them available in Excel pivot tables (analysis via Performance Point is covered in a subsequent blog post). As shown in the figure below, the Service Reporting component extracts the data from the Usage REST API and transforms it into OLAP cubes for analytics.

  

Service Reporting is a data warehousing solution developed on top of the Microsoft Business Intelligence (BI) stack.

In the 2012 R2 release, data is correlated from two sources:

  1. Windows Azure Pack Usage (Tenant Resource Utilization data)
  2. Operations Manager (Fabric data such as Servers, VM Instances, etc.)

Service Reporting is designed for the service administrator to create reports using Excel pivot tables to obtain the insights that help them in their capacity planning needs and show-back situations.

VM Usage Data Pipeline

In the figure below, the VM usage data source is VMM (Virtual Machine Manager). This data is periodically collected and stored in the OM (Operations Manager) database, and from there it is collected and stored in the WAP (Windows Azure Pack) Usage Database along with usage data of other resources. As mentioned earlier, the WAP Usage system was detailed in the blog How to Integrate Your Billing System with the Usage Metering System.

The Service Reporting component reads data from the Usage Database and then transforms the raw usage data into OLAP cubes for analytics. The data in these OLAP cubes is available for visualization and for drill-down analytics using Excel and Performance Point.

 

 

Scenarios

For the 2012 R2 release we targeted the following usage analysis scenarios:

  1. Usage trends across different time dimensions (hourly, daily, monthly, quarterly, yearly) to provide critical trending patterns
  2. Pivoting by subscriptions to understand which subscribers are impacting the business
  3. Pivoting by clouds/plans to understand which plans are used the most
  4. Side-by-side comparison between allocated capacity for tenants and their usage to help understand utilization ratios

These scenarios can be visualized in Excel and in Performance Point. Excel is a very popular tool for most reporting needs, and has pivot table capabilities that come in very handy for ad-hoc analytics. Excel workbooks can contain data to be analyzed even when disconnected from the SQL Server Analysis Server.

Configuring Usage Reports

The prerequisites for Usage Reports to work are that the Service Reporting component must be working correctly and usage data must be flowing into the system. This blog does not address the installation and deployment of the Service Reporting component. The Excel Usage Reports shipped out of the box in 2012 R2 need to be connected to the Analysis Server that holds the Usage Data Cube. This can be easily done by opening the Usage Report from the Reports folder in the install directory of the Service Reporting component. Navigate to the Data->Connections menu in Excel and open up the default connection that is shipped out of the box and edit it. As you can see in the figure below, you can navigate to the Definition tab in the Connection properties.

The connection string to use here is highlighted below.

Ensure you add the correct connection properties and save. The only property you should be changing is the source (highlighted in red) below.

Provider=MSOLAP.5;Integrated Security=SSPI;Persist Security Info=True;Initial Catalog=UsageAnalysisDB;Data Source=fab-servicereporting;MDX Compatibility=1;Safety Options=2;MDX Missing Member Mode=Error

Make sure the command text contains SRUsageCube.

Once these connection properties are saved, the Excel report can now be populated with data from the Usage Data Cube and its capabilities.

To test it out, you can create a brand new worksheet and then create a pivot table using the connection you just created.

  • Step 1: Open a new worksheet
  • Step 2: Click on Insert->Pivot Table
  • Step 3: Make sure you have External data source selected
  • Step 4: Click on Choose Connection and select the data connection configured in the previous step.
  • Step 5: Save the changes and close the dialog to go back to the Excel worksheet.

If the data connection is configured correctly, you should be seeing this form on the right side of your worksheet.

Click on “All” and you will see a drop down with the following items.

Click on the Settings icon (the round sprocket) and collapse all the fields.

You will see all the 19 “measures” that are available out of the box for reporting different utilization data points.

At this point, you are ready to recreate the report that is provided in the sample Usage Report.

Explore the Pivot Table fields and try to compose the report similar to the one in the figure below by dragging and dropping the different fields to the appropriate areas (Filters, Columns, Rows, Values).

As you add the rows and columns, you will start to see the report shape up to look like the figure below.

Slicers

Once you have a report that looks like this you can augment this report by adding slicers to give you filtering options.

Go to Insert->Slicer and choose the same connection that the pivot table is using. This will provide you with options to choose the necessary filter. Select VMM User Role (which is the same as Subscriptions) to see the list of subscribers in the system; selecting one gives you the ability to scope the results.

In this instance, I have created a slicer with “VMM User Role” but changed the Display name to “Subscriptions” to make it more intuitive. All the available “Subscriptions” are shown in this list and all of them are in scope.

Now, if you select just one of them, say “Unknown User Role”, you will see the report change to display only the records related to that subscription, as shown in the table below.

As you can see, all the values instantly change to reflect the selected filter, giving the administrator the ability to look at subscribers and compare them side by side. One can multi-select within the same slicer and chain other slicers to provide richer analytics.

 

Conclusion

While Excel is super powerful and ubiquitous, Performance Point allows greater collaboration by enabling dashboards. By connecting to the Analysis Server of Service Reporting, one can take advantage of all the key fields available in Service Reporting to create powerful dashboards that help the service administrator see the key metrics of the business in a single location. 

Subsequent blog posts will go into the details of configuring Performance Point dashboards.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

 

Categories: MDT

How to Integrate Your Billing System with the Usage Metering System

The Deployment Guys - Thu, 08/01/2013 - 11:30

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Provider experiences in enabling the billing and chargeback of Tenant Resource Utilization and how it applies to Brad’s larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post:   What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.

As described in that blog post, enabling billing scenarios for the service providers is a key investment area for this release. Service providers cannot successfully monetize their services in the absence of a system that tracks and reports on tenant resource utilization. These services are offered on a subscription basis and therefore it is critical that the resource utilization is reported at the subscription granularity to assist in billing scenarios. 

Overview of the Usage Metering System

The usage system is located alongside the Service Management API in the Windows Azure Pack (WAP) stack, enabling it to access tenant utilization data for all the services provided in the WAP stack and to expose a REST API that is leveraged to integrate with the provider's billing system, as illustrated in the figure below.

 

Service providers have invested a lot in their own billing systems, and it was critical that the 2012 R2 release be able to integrate with the existing systems in place. Therefore, we targeted our investments to ensure that 2012 R2 integrates easily with the various billing providers and ITFM (IT Financial Management) products that are in the market.

It is important to note that there is no billing system being shipped in 2012 R2 release. Service providers have to create the billing integration module (also referred as “billing adaptor”) to provide data to the billing system they are using.

Now, let's go a little deeper and look at the building blocks of the usage metering system and how it's architected.

The Usage Metering System has four main components. Three of these components, the Data Generator, the Data Collector, and the Usage Database, are internal to the system; the fourth component, the Usage API, is an external-facing API that the billing adapter interfaces with to extract the tenant resource utilization data.

Data Generator

The Data Generator tier represents the services (resource providers) registered as part of the system. They collect information specific to a subscription and expose it to the Usage Collector. The Usage Collector expects information to be made available following a specific data contract. This contract is the same across all the providers. All providers in the system adhere to this contract to provide information. IaaS metrics in Windows Azure Pack are provided by VM Clouds resource provider.

Data Collector

The Data Collector is an internal component that periodically collects usage information from all the registered Data Generators and stores it in the Usage Database.

Usage Database

The Usage Database is a transient store, which stores all the data from the various Data Generators for a period of 30-40 days. The expectation is that during this time, the billing system would have extracted the data from this database for billing purposes.

Usage API

This is a RESTful API and is the only way to extract data from the Usage Database. Service providers typically have a billing system that generates monthly bills for their subscribers, and customers can easily integrate with it by extracting data from the Usage Database through the Usage API. The component that customers develop to integrate with their billing system is called a “Billing Adapter”, which serves as a bridge between the Usage Metering System and the customer billing system.

 

In the figure below, in the red circles, you can see the VM Clouds resource provider, alongside other resource providers such as Service Bus, generating the IaaS resource utilization data, which is collected and stored in the Usage Database and made available through the Usage API.

 

The Usage API can be leveraged to create the billing adaptor and interface with the billing system within the provider datacenter. In the figure below, you can see the “billing adaptor” serving to integrate the Usage Metering System and the billing provider within the provider datacenter.

 

The “Service Reporting” component and the analytics it provides are discussed in the blog post titled “Creating Usage Analytics Reports using Excel and Performance Point”, while this blog post details how to create a “Billing Adaptor”.

Interacting with the Usage System

This section explains the ways an external system can interact with the Usage Metering System. Two different types of information are available through the Usage API:

  1. Tenant resource utilization for all subscriptions
  2. Plan, add-on, subscription, and account information

The information is presented via two channels:

  1. Usage API that queries all the historical data
  2. Real time CRUD events via the Event Notification System.

The billing adaptor uses both of these channels to effectively create billing reports while responding in real time as plans, subscriptions, and accounts are created and managed in the environment.

Usage API (Exposed on the Usage Endpoint)

Usage Data

The Usage endpoint exposes an API to return tenant resource utilization data pertaining to every subscription across services. The caller (“Billing Adapter”) needs to provide the “startId”. This parameter tells the Usage Metering System to return usage data starting from that ID. The Billing Adapter advances the “startId” based on the number of records returned before making the subsequent call.

 

Method Name | API | Response
GET | /usage?startId={startId}&batchSize={batchSize} | UsageRecordList<UsageRecord>
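The startId/batchSize contract implies a simple polling loop in the Billing Adapter: fetch a batch, advance startId by the number of records returned, and repeat until a batch comes back empty. A sketch of that loop against a mocked endpoint (the mock and record shape are illustrative; a real adapter would issue an authenticated HTTP GET):

```python
def fetch_usage(start_id: int, batch_size: int):
    """Stand-in for GET /usage?startId=...&batchSize=... (mocked with
    100 fake records; a real adapter would call the Usage endpoint)."""
    records = [{"id": i, "subscriptionId": "sub-1"} for i in range(100)]
    return records[start_id:start_id + batch_size]

def drain_usage(batch_size: int = 30):
    """Pull all available usage records, advancing startId each cycle."""
    start_id, collected = 0, []
    while True:
        batch = fetch_usage(start_id, batch_size)
        if not batch:
            break
        collected.extend(batch)
        start_id += len(batch)  # advance by the number of records returned
    return collected

print(len(drain_usage()))  # 100
```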

Plans\Addon\Subscription Data

The Usage endpoint also exposes APIs to return data on existing plans, add-ons, subscriptions, etc.

Method Name | API | Response
GET | billing/plans?startId={startId}&batchSize={batchSize} | UsageEventList<Plan>
GET | billing/addons?startId={startId}&batchSize={batchSize} | UsageEventList<AddOn>
GET | billing/subscriptions?startId={startId}&batchSize={batchSize} | UsageEventList<Subscription>
GET | billing/planServices?startId={startId}&batchSize={batchSize} | UsageEventList<ResourceProviderReference>
GET | billing/planAddons?startId={startId}&batchSize={batchSize} | UsageEventList<AddOnReference>
GET | billing/subscriptionAddons?startId={startId}&batchSize={batchSize} | UsageEventList<AddOnReference>

Notes:

STARTID is the record id of the first record you want to fetch in a particular cycle.

BATCHSIZE is the maximum number of records you want to fetch.

USAGE-RESTAPI-ENDPOINT can be found at: https://<Admin-API-Machine-Name>:30022

Configuration

The administrator needs to ensure that the Usage Metering Service is configured correctly to authenticate the Billing Adaptor. This can be done by ensuring that the service is capable of accepting the correct credentials that will be used to authenticate. The steps below describe how to ensure that the credentials are set properly. Note: during the installation process the password is set to a random sequence, hence this step is necessary to establish connectivity.

On the WAP deployment, launch the Management Service PowerShell Module on the Admin API server.

Then, run the commands below:

· Set-MgmtSvcSetting -Namespace UsageService -Name Username -Value '<EnterUserName>'

· Set-MgmtSvcSetting -Namespace UsageService -Name Password -Value '<EnterPassword>' -Encode

Once the username and password are set to known values, these values can be used by the Billing Adaptor to authenticate.

Consuming the Usage REST API

The following steps are required to consume the Usage REST API:

  • Define an httpClient
  • Construct a URI to query the Usage Metering Service
  • StartID is the record id of the first record you want to fetch and BatchSize is the maximum number of records you want to fetch.
  • Execute the API call and read Usage Data
  • Data Contracts can be used to de-serialize the response returned (as in Sample below)
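The steps above can be sketched as follows. This is an illustrative Python stand-in for the .NET httpClient (the endpoint host is a placeholder, and the credentials are the ones set earlier via Set-MgmtSvcSetting); the request is constructed but not executed here:

```python
import base64
import urllib.request
from urllib.parse import urlencode

ENDPOINT = "https://admin-api-machine:30022"  # placeholder Admin API endpoint

def build_usage_uri(start_id: int, batch_size: int) -> str:
    """Construct the query URI: startId is the first record id to fetch,
    batchSize is the maximum number of records to fetch."""
    query = urlencode({"startId": start_id, "batchSize": batch_size})
    return f"{ENDPOINT}/usage?{query}"

def make_request(uri: str, username: str, password: str) -> urllib.request.Request:
    """Attach Basic credentials (the known values set on the Usage service)."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(uri, headers={"Authorization": f"Basic {token}"})

uri = build_usage_uri(0, 50)
print(uri)  # https://admin-api-machine:30022/usage?startId=0&batchSize=50
# urllib.request.urlopen(make_request(uri, user, pw)) would then return JSON
# that deserializes into UsageRecordList<UsageRecord> via the data contracts.
```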

Usage Data Model

The Usage Data Model is shown in the figure below and can be used to associate the data returned by the Usage API.

Event Notification System

The Service Management API keeps track of events within the Usage Metering System and sends notifications to any registered subscriber (e.g. a Billing Adaptor). Examples of the events are plan, addon, subscription creation\updates and account creation.

Notifications are sent as a Post call to an endpoint registered with the Usage Metering System. The Management Service PowerShell Module should be used to define the required notification end point. Note that the notificationEndPoint must end with a trailing slash.

Description:

Subscribing for plan, add-on and account changes.

Verb: Set

Command: MgmtSvcNotificationSubscriber

Parameters: -NotificationSubscriber, -Name, -Enabled, -SubscriberType, -Endpoint, -AuthenticationMode, -AuthenticationUsername, -AuthenticationPassword, -EncryptionKey, -EncryptionAlgorithm, -ConnectionString, -Server, -Database, -UserName, -Password

SubscriberType:

  • BillingService
  • MandatoryService
  • OptionalService

Example:

Set-MgmtSvcNotificationSubscriber -Name Billing -SubscriberType BillingService -Enabled $false -Endpoint https://localhost/ -AuthenticationMode Basic

The Billing Adaptor can be set up to handle the event in a blocking or a non-blocking manner. The SubscriberType BillingService & MandatoryService are both blocking. The only nonblocking option is OptionalService. If the Billing Adaptor is set up to be blocking, a plan creation event in the service management API should trigger a corresponding plan to be created in the billing system. If this operation is not successful, the plan creation at the service management API will fail. This enables consistency between the platform and the billing system.
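The blocking semantics mean the adapter's response decides whether the management-side operation commits. A hypothetical handler sketch (function and field names are illustrative, not WAP types): returning success lets the plan creation commit, while surfacing a billing-system failure makes it fail, keeping the two systems consistent.

```python
class BillingSystemError(Exception):
    """Raised when the downstream billing system rejects an operation."""

def handle_plan_created(event: dict, billing_create) -> dict:
    """Process a plan-creation notification in blocking mode.

    A success response commits the plan in the service management API;
    an error response makes the plan creation fail there.
    """
    try:
        billing_create(event["entity"])  # mirror the plan into billing
        return {"status": "ok"}
    except BillingSystemError as exc:
        return {"status": "error", "reason": str(exc)}

ok = handle_plan_created({"entity": {"name": "BlogIaaSPlan"}}, lambda plan: None)
print(ok["status"])  # ok

def rejecting(plan):
    raise BillingSystemError("plan not priced yet")

bad = handle_plan_created({"entity": {"name": "BlogIaaSPlan"}}, rejecting)
print(bad["status"])  # error
```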

Notification Data Contracts

Notifications sent to the billing adapter adhere to the NotificationEvent&lt;T&gt; type, where T can be any of the following objects:

  • Plan
  • PlanAddOn
  • AdminSubscription
  • ResourceProviderReference
  • PlanAddOnReference
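To illustrate what a subscriber might do with such an event, here is a sketch that dispatches on the notification's method. The payload field names below are assumptions for illustration; the authoritative shapes are the data contracts shipped with the sample billing adapter:

```python
import json

# A hypothetical serialized NotificationEvent<Plan>; field names are assumed.
raw = json.dumps({
    "Method": "POST",
    "Entity": {"Id": "gold-plan", "DisplayName": "Gold", "State": "Private"},
})

def handle_notification(body):
    """Deserialize a notification and decide what the billing system should do."""
    event = json.loads(body)
    method, entity = event["Method"], event["Entity"]
    if method == "POST":
        return "create plan {0} in billing system".format(entity["Id"])
    if method == "DELETE":
        return "delete plan {0} from billing system".format(entity["Id"])
    return "update plan {0} in billing system".format(entity["Id"])

print(handle_notification(raw))
```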

When you download Windows Azure Pack (WAP), the data contracts can be found under:

· \SampleBillingAdapter\DataContracts\*

Following are the two important properties of NotificationEvent:

1. NotificationEvent Method can have the following values:

   1. Post, to create a new account/subscription/add-on/plan

   2. Delete, to delete an account/subscription/add-on/plan

   3. Put, to update a plan

2. NotificationEvent Entity carries the object (one of the types listed above) that was created/updated/deleted.

Pricing APIs

The Pricing API is designed to let the billing system in the service provider datacenter specify prices for plans and add-ons, which then flow into the 2012 R2 system. The billing adaptor can choose to provide prices for each plan or plan add-on in real time. As part of implementing the notification subscriber, we have specifications for the APIs below that the billing service can implement to enable pricing data to flow back into the system. The implementation of these APIs is optional. If they are enabled, the price values for the plans and add-ons will be visible in the WAP tenant site at the time the plan or add-on is added.

Method: GET
API: /planPrice?id={id}&region={region}&username={username}
Response: String

Method: GET
API: /addonPrice?id={id}&region={region}&username={username}&subscriptionId={subscriptionId}
Response: String

Notes:

  • This API is expected to return a string with pricing information. The 2012 R2 system will display this information alongside plans for the subscriber, but these are textual and not typed.
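As a concrete illustration, the billing service behind /planPrice could be little more than a lookup that renders a display string. Everything in this sketch (price table, region multipliers, output format) is invented; only the return-a-string contract comes from the API above:

```python
# Invented price table and region multipliers for illustration only.
PLAN_PRICES = {"gold-plan": 100.0, "silver-plan": 40.0}
REGION_MULTIPLIER = {"us-east": 1.0, "eu-west": 1.2}

def plan_price(plan_id, region, username):
    """What a billing service might return for
    GET /planPrice?id={id}&region={region}&username={username}."""
    base = PLAN_PRICES.get(plan_id)
    if base is None:
        return "Price on request"
    price = base * REGION_MULTIPLIER.get(region, 1.0)
    # The API returns a plain string; WAP displays it verbatim next to the plan.
    return "${0:.2f}/month".format(price)

print(plan_price("gold-plan", "eu-west", "tenant@fabrikam.com"))
```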

Detailed Description of the Sample Adapter Project Files

This section explains the content of the sample billing adapter (SampleBillingAdapter.sln). At a high level, the billing adapter consists of the following parts:

1. SampleBillingAdapter.cs provides an example of the different calls to the Usage REST API

2. The set of Data Contracts that can be used to deserialize the API responses

SampleBillingAdapter.cs

This is the entry point for the application. The file contains the following:

1. Instantiation of a UsageServiceHttpClient with the required configuration data.

2. The UsageServiceHttpClient is then used to query the usage service. There are seven types of calls that can be made for the usage and billing data. This data is deserialized into instances of the data contracts that are included in the DataContracts directory.

3. The data is then printed to the console.

Example:

using Microsoft.WindowsAzurePack.Usage.DataContracts;
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class SampleBillingAdapter
{
    static void Main(string[] args)
    {
        // create a WAP usage service http client (this data can be read from a config file)
        string mediaType = "application/json"; // application/json or application/xml
        string authenticationType = "basic";
        string Username = "UsageClient";
        string Password = "specify the correct pwd";
        string Machine = "specify the machine where the usage service is running";
        string Port = "30022";
        string BaseAddress = String.Format("https://{0}:{1}/", Machine, Port);
        var usageService = new WAPUsageServiceHttpClient(Username, Password, authenticationType, BaseAddress, mediaType);

        // gather usage and billing data asynchronously using the Usage API
        var usage = usageService.GetDataAsync<UsageRecordList>("usage", 0, 50);
        var plans = usageService.GetDataAsync<UsageEventList<Plan>>("billing/plans", 0, 50);
        var subscriptions = usageService.GetDataAsync<UsageEventList<Subscription>>("billing/subscriptions", 0, 50);
        var addOns = usageService.GetDataAsync<UsageEventList<AddOn>>("billing/addons", 0, 50);
        var planAddOns = usageService.GetDataAsync<UsageEventList<AddOnReference>>("billing/planAddons", 0, 50);
        var subscriptionAddOns = usageService.GetDataAsync<UsageEventList<AddOnReference>>("billing/subscriptionAddons", 0, 50);
        var planServices = usageService.GetDataAsync<UsageEventList<ResourceProviderReference>>("billing/planServices", 0, 50);

        #region Print the usage and billing data to the console ...
        Console.WriteLine("Printing Usage Data - Press Enter to Proceed...");
        Console.ReadLine();
        usageService.PrintUsageData(usage.Result);
        usageService.PrintPlanData(plans.Result);
        usageService.PrintSubscriptionData(subscriptions.Result);
        usageService.PrintAddOnsData(addOns.Result);
        usageService.PrintPlanAddOnsData(planAddOns.Result);
        usageService.PrintSubscriptionAddOnsData(subscriptionAddOns.Result);
        usageService.PrintPlanServicesData(planServices.Result);
        #endregion
    }
}

Data Contracts

The DataContracts directory contains all the required Data Contracts to interact effectively with the Usage API.

VM Data Gathered from the Usage API

VM Provider

  • MemoryAllocated-Min (MB): Lowest allocated memory size for a VM within an hour timespan
  • MemoryAllocated-Max (MB): Highest allocated memory size for a VM within an hour timespan
  • MemoryConsumed-Min (MB): Lowest consumed memory size for a VM within an hour timespan
  • MemoryConsumed-Max (MB): Highest consumed memory size for a VM within an hour timespan
  • MemoryConsumed-Median (MB): Median consumed memory size for a VM within an hour timespan
  • CPUAllocationCount-Min (Each): Lowest number of CPU cores allocated to a VM within an hour timespan
  • CPUAllocationCount-Max (Each): Highest number of CPU cores allocated to a VM within an hour timespan
  • CPUPercentUtilization-Median (%): Median CPU consumption, in percent, for a VM within an hour timespan
  • CrossDiskIOPerSecond-Min (MB): Lowest input/output per second (IOPS) across all attached disks for a VM within an hour timespan
  • CrossDiskIOPerSecond-Max (MB): Highest input/output per second (IOPS) across all attached disks for a VM within an hour timespan
  • CrossDiskIOPerSecond-Median (MB): Median input/output per second (IOPS) across all attached disks for a VM within an hour timespan
  • CrossDiskSizeAllocated-Min (MB): Lowest allocated disk size across all attached disks for a VM within an hour timespan
  • CrossDiskSizeAllocated-Max (MB): Highest allocated disk size across all attached disks for a VM within an hour timespan
  • PerNICKBSentPerSecond-Min (MB): Lowest bytes sent per second on a network adapter attached to a VM within an hour timespan
  • PerNICKBSentPerSecond-Max (MB): Highest bytes sent per second on a network adapter attached to a VM within an hour timespan
  • PerNICKBSentPerSecond-Median (MB): Median bytes sent per second on a network adapter attached to a VM within an hour timespan
  • PerNICKBSentPerSecond-Average (MB): Straight average bytes sent per second on a network adapter attached to a VM within an hour timespan
  • PerNICKBReceivedPerSecond-Min (MB): Lowest bytes received per second on a network adapter attached to a VM within an hour timespan
  • PerNICKBReceivedPerSecond-Max (MB): Highest bytes received per second on a network adapter attached to a VM within an hour timespan
  • PerNICKBReceivedPerSecond-Median (MB): Median bytes received per second on a network adapter attached to a VM within an hour timespan
  • PerNICKBReceivedPerSecond-Average (MB): Straight average bytes received per second on a network adapter attached to a VM within an hour timespan

As you can see, this is a powerful API that allows bi-directional data flow: usage data flows from the 2012 R2 stack to the billing adaptor, and pricing data (with prices decided by the billing system's business logic) flows from the billing system back into the 2012 R2 stack.

In subsequent blogs, we will provide more details as we hear more from our customers.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

Categories: MDT

How to Create a Basic Plan Using the Service Administration Portal

The Deployment Guys - Thu, 08/01/2013 - 11:30

This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Service Administration and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: Service Provider & Tenant IaaS Experience.”

As described in that blog post, success for a service provider largely hinges on its ability to attract and retain tenants.  It therefore falls to the service provider to think about how to use service offerings to draw tenants in, and to consider different tactics for differentiation, upsell, and maintaining healthy tenant accounts.  To help service providers meet these challenges, we have invested in key enhancements to the service management experience targeting these specific areas:

  • Using value-based offers to attract tenants and drive new subscriptions
  • Using offer differentiation and upsell to drive more consumption
  • Managing tenant accounts and subscriptions

A “service provider” here could be an IT organization within a company that is providing services such as IaaS to other business units in the organization. Thus, an IT organization that operates like a service provider to other business units must create compelling service offerings in much the same way as a service provider tries to attract customers. 

Overview of an IaaS Plan

Service providers can build bundles of service offerings that are termed “plans”. Plans include composable service offerings that can be assembled together in order to target different types of prospective tenants.  Tenants consume these service offerings by subscribing to a plan.  In a very general sense, a cloud is nothing more to the tenant than a set of capabilities (“service offerings”) at some capacity (“quotas”).

To support this business model, we have designed a very easy-to-use experience for creating offers, selecting the kinds of service offerings to include, and then setting the quotas to control how much can be consumed by any single subscription.  But it goes beyond a simple set of compute, storage, and networking capabilities at some quota amount!

One of the most important aspects of plan creation is the process of including library content to facilitate simplified application development.  For that reason, the plan creation experience also features a way to include templates for foundational VM configurations and workloads.

Plan Creation

In the Service Administration portal, the left side navigation has an entry called “Plans” which lists all the plans currently in the system. As you can see in the figure below, the administrators have created many different plans. When plans are created they are “Private” by default, meaning they are not yet visible to prospective tenants.

In a quick glance, the administrator can identify which Plans are accessible by the tenants, how many subscriptions they have along with other pertinent status.

Creating a new plan is very easy and is enabled by the Quick Create experience.

Scroll down and click on New, which takes you to the Plan creation experience.

This experience enables plan creation and “plan add-ons”. We will be focusing on plan creation in this blog post. The plan creation experience is a simple wizard with just three screens. In the first screen, you give the plan a name, and in this case it is going to be called “BlogIaaSPlan” as shown in the figure below.

 

As mentioned earlier, a plan is a container of service offerings that are available in the system. This system is configured to provide VMs, web sites, SQL Server databases, and service bus services. Therefore, the plan wizard allows all of these types of services to be offered in the plan.

We will focus only on virtual machines in this blog post. In this screenshot the ‘Virtual Machine Clouds’ service is selected and ‘Virtual Machine Clouds’ is chosen from the drop down.

Skip the plan add-ons for now and click OK. That completes the plan Quick Create experience. As you can see, the BlogIaaSPlan is created and is private by default.  The plan is “not configured” yet and needs to be configured before it can be made public for tenants to be able to subscribe to the plan.

Configuring the Plan

Clicking on the plan (BlogIaaSPlan) presents the plan dashboard page. This page shows the plan statistics at the top and a list of all the services that are available on the plan along with plan add-ons associated with the plan.

Since we are creating a VM-only plan, we have not selected any offers/services other than Virtual Machine Clouds.

 

 

As seen in the figure above, the plan is neither active nor configured for use.

Click on ‘Virtual Machine Clouds’ to configure the plan.

Associating the Plan with a VMM Server and Cloud

A plan that offers VM clouds is associated with a specific Virtual Machine Manager server and a Virtual Machine Manager cloud within that VMM server. When the tenant subscribes to this plan and instantiates a virtual machine, the system will deploy that VM with the specified properties on the associated cloud via the associated VMM server.

As shown in the figure to the left, the VMM server is a mandatory property that needs to be set for an IaaS plan.

As part of the service registration, the VMM server information is already available in the system. Therefore, selecting the correct VMM server is very easy.

Once a VMM server has been selected, the VMM cloud managed by that VMM server needs to be selected and bound to the plan.

As you can see in the figure, once a VMM server has been identified, the system queries the VMM server to list all the VMM clouds available on the VMM server.

We will choose the Gold Cloud to be associated with this plan.

A plan allows the administrator to bind the service to a specific cloud and to set upper limits on its usage.

Assigning Quota Limits to a Plan

The administrator can set limits on the core compute attributes, such as the maximum number of VMs, logical cores, memory, storage, and virtual networks each subscription can have.

As you can see in this figure, you can either specify absolute limits by entering a number against each compute property, or leave a property unlimited, in which case it will be limited only by the underlying fabric constraints.
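Conceptually, per-subscription quota enforcement reduces to a bounds check, with “unlimited” meaning “bounded only by the fabric”. A small illustrative sketch (resource names and numbers are invented):

```python
# Per-subscription quota limits; None means "unlimited" (fabric-bounded only).
QUOTA = {"vms": 10, "cores": 20, "memory_mb": None}

def within_quota(resource, requested, used):
    """Would granting `requested` more units of `resource` stay within quota?"""
    limit = QUOTA[resource]
    return limit is None or used + requested <= limit

print(within_quota("vms", 1, 10))               # already at the VM limit
print(within_quota("memory_mb", 4096, 999999))  # unlimited property
```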

Adding Allowed Networks to the Plan

The plan allows various cloud resources to be made available to the subscriber in a controlled manner. In this section we will go through the networks that are made available to the plan.

When a plan is being configured for the first time, there are no cloud resources assigned to the plan. Therefore all the cloud resources will be empty. Click on ‘Add networks’ to add networks to the plan and add the Fabrikam External network to the plan.

 

 

 

When a tenant subscribes to this plan and creates a VM, the only networks available to that VM will be those allowed by the plan; in this case, only the Fabrikam External network.
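In other words, the networks a tenant VM can attach to are the intersection of what the fabric exposes and what the plan allows. A toy sketch (network names taken from the example above, the rest invented):

```python
# Networks bound to the plan by the administrator.
ALLOWED_NETWORKS = {"Fabrikam External"}

def networks_for_vm(fabric_networks):
    """Tenant VMs only see fabric networks that the plan also allows."""
    return sorted(set(fabric_networks) & ALLOWED_NETWORKS)

print(networks_for_vm(["Fabrikam External", "Fabrikam Internal", "Mgmt"]))
```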

Adding Other Resources

Following the same pattern as that of networks, we can specify which hardware profiles and virtual machine templates are accessible within the plan. In this plan, all the hardware profiles and only the Windows Server 2012 VM template are chosen resulting in the figures below.

 

 

With these configurations complete, the plan is ready to be subscribed to by tenants. Advanced scenarios such as the gallery and other settings will be discussed in later blogs.

Next, the plan needs to be made public so that it can be discovered and subscribed to by tenants.

Plan Activation

Go back to the Plans List view to see all the plans. As you can see in the figure below, the plan is now configured, but still private. 

Select the BlogIaaSPlan and then make it public by changing its access privileges. As shown in the figure below, you can do this from the Change Access command.

 

The plan status will change to reflect the fact it’s now a publicly accessible plan. You can see that in the Plans view as shown in the figure below.

Once the plan is made public it can be subscribed to and tenants can start to deploy virtual machines against the subscription.

Conclusion

In subsequent blogs, we will provide more details of creating advanced plans.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

Categories: MDT

Configuring an Evaluation Windows Intune Standalone Deployment

Steve Rachui's Manageability blog - Fri, 07/26/2013 - 11:36
So you’ve heard about Windows Intune and are ready to set up a lab to test it out. Question is, where do you start? How do you use this cloud-based service to manage your on-premises PCs or devices? What are the requirements? Do you have to sync...(read more)
Categories: MDT
