
Windows Azure Platform 2nd Edition Note

2012-01-29 02:37
How will cloud computing help? To understand, let’s go back to the original business requirement: the business owner has an immediate need to deploy an application, and the timeframe is within three months. Basically, what the business is looking for is IT agility, and if the application takes only one month to develop, then is it really worth wasting six months on coordination and acquisition of the hardware?

Cloud computing gives you an instant-on infrastructure for deploying your applications. The provisioning of the hardware, operating system, and software is all automated and managed by the cloud service providers.

Types of cloud services

To standardize the overall terminology around cloud computing, the industry has defined three main cloud service categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

- IaaS
- In IaaS, you are still responsible for upgrading, patching, and maintaining the operating systems and the software applications that run on the rented hardware.
- In short, IaaS abstracts the hardware and virtualization infrastructure from you.
- PaaS
- PaaS manages the operating system and hardware maintenance for you, but you have to manage your applications and data.
- In short, PaaS abstracts the infrastructure and the operating system from you.
- SaaS
- You only have to manage your business data that resides in and flows through the software service.

In its natural progression, a SaaS is built on a PaaS, and a PaaS is built on an IaaS.

Types of Clouds

What is the difference between a hosting provider and a cloud service provider? I would call a data center architecture a cloud only if it provides you with the following services:

- Pay-as-you-go service
- A self-service provisioning portal
- Server hardware abstraction
- Network hardware abstraction
- Dynamic scalability
- High Availability Service Level Agreement (SLA)

The location of this cloud determines its type: private or public. The primary difference between private and public clouds is the amount of capital cost involved in provisioning infrastructure. Public clouds don’t require any upfront capital investment in provisioning.

The Windows Azure platform consists of three core components: Windows Azure, SQL Azure, and Windows Azure AppFabric. Windows Azure is the operating system for the data center that provides compute, storage, and management services. SQL Azure is a relational database engine in the Windows Azure Platform. Windows Azure AppFabric is the middleware component that consists of services like Service Bus, Access Control, and Caching.

Understanding Windows Azure Compute Architecture

The Windows Azure platform compute architecture is based on a software fabric controller running in the data center. The fabric controller manages the life cycle of the deployment by allocating and decommissioning hardware and operating system images as needed.



Three core services of Windows Azure:

- Compute
- Storage
- Management

The Service Management API is the hidden jewel of the platform. It makes Windows Azure a truly dynamically scalable platform, allowing you to scale up and scale down your application on demand. Using the Service Management API, you can automate provisioning, de-provisioning, scaling, and administration of your cloud services.
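As a rough sketch of what automation against the Service Management API involved: it was a certificate-authenticated REST API, so a call boils down to a URL under your subscription plus a versioning header. The subscription ID, version string, and `build_request` helper below are illustrative assumptions, not part of any SDK, and no actual HTTPS call is made:

```python
# Sketch of composing a Service Management API request. The real API is REST
# over HTTPS authenticated with a management certificate; this only builds
# the URL and headers and does not perform the certificate-bound request.

BASE = "https://management.core.windows.net"

def build_request(subscription_id, resource, api_version="2011-10-01"):
    """Return the URL and required headers for a management operation."""
    url = "{0}/{1}/services/{2}".format(BASE, subscription_id, resource)
    headers = {
        "x-ms-version": api_version,  # mandatory versioning header
        "Content-Type": "application/xml",
    }
    return url, headers

url, headers = build_request("00000000-0000-0000-0000-000000000000",
                             "hostedservices")
print(url)  # https://management.core.windows.net/<subscription-id>/services/hostedservices
```

From a request like this, a script can list hosted services, then issue further calls to add or remove role instances, which is what "scaling on demand" amounts to in practice.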

Core services offered by SQL Azure:

- Relational Data Storage
- Data Sync
- Management
- Data Access
- Reporting Services

Windows Azure AppFabric

Windows Azure AppFabric is the cloud-based middleware platform hosted in Windows Azure. Windows Azure AppFabric is the glue for connecting critical pieces of a solution in the cloud and on-premises.

Three core services of Windows Azure AppFabric:

- Access Control
- Service Bus
- Caching

**Tip: SQL Azure runs a labs program where you can try out upcoming features for free and provide feedback to the Microsoft product teams. You can find more information about the labs program here: www.sqlazurelabs.com

**Tip: Like SQL Azure, Windows Azure AppFabric also runs a labs program where you can try upcoming features for free. You can log in and start playing with these features here: https://portal.appfabriclabs.com

Worker Role

- Technically, the only major difference between a Web Role and a Worker Role is the presence of IIS on the Web Role.

A Worker Role class must inherit from the Microsoft.WindowsAzure.ServiceRuntime.RoleEntryPoint class. RoleEntryPoint is an abstract class that defines functions for initializing, starting, and stopping the Worker Role service.
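The actual class is .NET, but the initialize/start/stop life cycle it defines can be sketched in Python. The class and method names below just mirror that shape and are not a real API:

```python
# Conceptual sketch of the Worker Role life cycle defined by RoleEntryPoint:
# OnStart (initialization) -> Run (long-running work loop) -> OnStop.

class WorkerRole:
    def __init__(self):
        self.running = False
        self.processed = 0

    def on_start(self):
        # One-time initialization: configuration, diagnostics, and so on.
        self.running = True
        return True

    def run(self, work_items):
        # A real Worker Role loops forever, typically polling a queue for
        # work; here we drain a finite list so the sketch terminates.
        for item in work_items:
            if not self.running:
                break
            self.processed += 1  # stand-in for real message processing

    def on_stop(self):
        # Graceful shutdown: stop accepting work before the VM is recycled.
        self.running = False

role = WorkerRole()
role.on_start()
role.run(["msg-1", "msg-2", "msg-3"])
role.on_stop()
print(role.processed)  # → 3
```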

VM Role

The VM role is specifically designed by Microsoft to reduce the barriers to entry into the Windows Azure Platform. VM Role lets you customize a Windows Server 2008 R2 Enterprise virtual machine, based on a virtual hard drive (VHD), and then deploy it as a base image in Windows Azure. Don’t use VM role unless absolutely needed, because by acquiring more control over the underlying operating system image, you also inherit the risks and the burden associated with maintaining it. Windows Azure does not understand the health of your applications running on a VM role, and therefore it becomes your responsibility to track application health.

Upgrade Domains and Fault Domains

The Service Level Agreement (SLA) for Windows Azure states, “For compute, we guarantee that when you deploy two or more role instances in different fault and upgrade domains your Internet facing roles will have external connectivity at least 99.95% of the time.”

In Compute, the instances of your service run in dedicated virtual machines. These virtual machines are managed by the hypervisor.

Role Settings and Configuration

.NET Trust Level: The .NET Trust Level specifies the trust level under which this particular role runs. The two options are Full Trust and Windows Azure Partial Trust. The Full Trust option gives the role privileges to access certain machine resources and execute native code. Even in Full Trust, the role still runs in the standard Windows Azure user’s context and not the administrator’s context. In the Partial Trust option, the role runs in a partially trusted environment and does not have privileges for accessing machine resources and native code execution.

Instances: The instance count defines the number of instances of each role you want to run in the cloud.

Diagnostics

The diagnostics engine, named “MonAgentHost.exe”, runs on all the role instances by default.

Tip: When designing cloud applications, it is important to design diagnostics and log reporting right from the beginning. This will save you a lot of debugging time and help you create a high-quality application.

Each storage account gets a maximum of 100TB of space, combining all the storage services within that account. By default, each Windows Azure subscription receives five storage accounts. You can contact Windows Azure support to add more accounts to your subscription.

Storage Service Architecture

- Front-End Servers
- Partition Layer
- Distributed File System (DFS)

The partitioning scheme for each object type follows this format:

• Blobs: Combination of Container Name and Blob Name
• Tables: Combination of Table Name and PartitionKey
• Queues: Queue Name (all messages in a queue are located on the same partition)

HMAC stands for Hash-based Message Authentication Code, which is a message authentication code calculated from a secret key using a cryptographic hash function like MD5, SHA-1, or SHA256. The Windows Azure Storage service expects a SHA256 hash for the request; SHA256 produces a 256-bit hash of the input data.

Blob Limitations and Constraints

- The maximum size of each block blob is 200GB, and each page blob is 1TB.
- You can upload blobs that are less than or equal to 64MB in size using a single PUT operation. Blobs more than 64MB in size must be uploaded as a set of blocks, with each block not greater than 4MB in size.
- The development Blob service supports blob sizes only up to 2GB.

**One caveat of a root container is that you cannot have a forward slash (/) in the blob names in a root container. Based on the $root keyword, the storage processing engine treats these blob addresses a bit differently than blobs in other containers.

**The Blob API provides filtering capabilities based on a delimiter that allows you to retrieve only the log files in a particular virtual structure. For example, you can retrieve only the log files under the virtual folder structure 2009/december/21 by specifying a delimiter when enumerating the blobs.
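The delimiter behavior is easy to simulate locally. Blob storage is flat; “folders” only exist because blob names contain a delimiter. This Python sketch (not the actual Storage client library) shows how a prefix plus a '/' delimiter yields blobs and virtual folders:

```python
# Hypothetical local simulation of delimiter-based blob listing.

def list_blobs(names, prefix="", delimiter="/"):
    """Return (blobs, virtual_folders) under `prefix`, like the List Blobs
    operation does when a delimiter is supplied."""
    blobs, folders = [], set()
    for name in names:
        if not name.startswith(prefix):
            continue
        rest = name[len(prefix):]
        if delimiter in rest:
            # Everything up to the first delimiter is a virtual folder.
            folders.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            blobs.append(name)
    return blobs, sorted(folders)

names = [
    "2009/december/21/error.log",
    "2009/december/21/info.log",
    "2009/december/22/info.log",
    "readme.txt",
]
print(list_blobs(names, prefix="2009/december/21/"))
# → (['2009/december/21/error.log', '2009/december/21/info.log'], [])
print(list_blobs(names, prefix="2009/december/"))
# → ([], ['2009/december/21/', '2009/december/22/'])
```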

REST API Request

- HTTP Verb
- Request URI
- URI Parameters
- Request Headers

In the Storage Service REST API, the request header must include the authorization information and a Coordinated Universal Time (UTC) timestamp for the request. The timestamp can be in the form of either an HTTP/HTTPS date header or an x-ms-Date header.

The authorization header format is as follows:

Authorization="[SharedKey|SharedKeyLite] <AccountName>:<Signature>"

where SharedKey|SharedKeyLite is the authentication scheme, <AccountName> is the storage service account name, and <Signature> is a Hash-based Message Authentication Code (HMAC) of the request computed using the SHA256 algorithm and then encoded using Base64 encoding.
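A minimal Python sketch of computing such a signature. Note two assumptions: the real string-to-sign has a precise multi-line canonical format defined by the Storage service, which the `string_to_sign` below only stands in for, and `account_key` is a made-up example, not a real credential:

```python
import base64
import hashlib
import hmac

def sign_request(account_name, account_key_b64, string_to_sign):
    """HMAC-SHA256 the string-to-sign with the Base64-decoded account key,
    then Base64-encode the digest, as the SharedKey scheme requires."""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode("utf-8")
    return "SharedKey {0}:{1}".format(account_name, signature)

# Made-up key for illustration; a real account key comes from the portal.
account_key = base64.b64encode(b"not-a-real-key").decode("utf-8")

# Simplified string-to-sign; the real one includes the verb, headers,
# x-ms-Date, and the canonicalized resource in a fixed order.
header = sign_request("myaccount", account_key, "GET\n/myaccount/mycontainer")
print(header)  # SharedKey myaccount:<44-character Base64 signature>
```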

Caution: The Set Container Metadata operation replaces all the existing metadata of the container. It doesn’t update individual metadata entries. For example, if a container has two metadata values, Creator and Creation-Time, and you call Set Container Metadata with only one metadata value, LastUpdatedBy, then the Creator and Creation-Time values will be deleted and the container will have only one metadata value, LastUpdatedBy. To avoid this side effect, always set all the metadata values again along with any new values you want to add to the container’s metadata.
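The safe pattern is read-modify-write: fetch the existing metadata, merge in the new entries, and set the full dictionary back. A sketch, where `FakeContainer` is a stand-in for the real client object, not an actual API:

```python
# Read-modify-write pattern for container metadata. Because Set Container
# Metadata replaces everything, merge existing values with new ones first.

class FakeContainer:
    def __init__(self, metadata):
        self._metadata = dict(metadata)

    def get_metadata(self):
        return dict(self._metadata)

    def set_metadata(self, metadata):
        # Replaces ALL metadata, mirroring the service's behavior.
        self._metadata = dict(metadata)

def update_metadata(container, new_values):
    merged = container.get_metadata()   # 1. read existing metadata
    merged.update(new_values)           # 2. merge in the new entries
    container.set_metadata(merged)      # 3. write the full set back

c = FakeContainer({"Creator": "alice", "Creation-Time": "2011-08-01"})
update_metadata(c, {"LastUpdatedBy": "ops"})
print(sorted(c.get_metadata()))  # → ['Creation-Time', 'Creator', 'LastUpdatedBy']
```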

List Containers URI Parameters

- prefix
- marker
- maxresults

List Blobs URI Parameters

- prefix
- delimiter
- marker
- maxresults
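The marker/maxresults pair implements pagination: you request up to `maxresults` items and, if more remain, the response carries a continuation marker to pass into the next call. A self-contained sketch, where `fake_list` simulates the service response (the real call is an HTTP GET):

```python
# Marker-based pagination pattern used by the List Containers/Blobs/Queues
# operations: keep re-issuing the request with the returned marker until
# no marker comes back.

def fake_list(items, marker=0, maxresults=3):
    """Simulated service call: one page of results plus a continuation
    marker, or None when the listing is exhausted."""
    page = items[marker:marker + maxresults]
    next_marker = marker + maxresults if marker + maxresults < len(items) else None
    return page, next_marker

def list_all(items, maxresults=3):
    results, marker = [], 0
    while marker is not None:
        page, marker = fake_list(items, marker, maxresults)
        results.extend(page)
    return results

containers = ["c{0}".format(i) for i in range(8)]
print(list_all(containers))  # → ['c0', 'c1', 'c2', 'c3', 'c4', 'c5', 'c6', 'c7']
```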

CDN content can be made available only for public containers and blobs, over HTTP and HTTPS. Be careful while choosing content for caching. If you cache constantly changing data, you may not be able to reap the benefits of the CDN, and it will also cost you a lot. Also, you don’t have control over the cache endpoints; based on user access, the CDN decides which edge cache machine to cache your content on. This may have cost implications if you have a worldwide user base.

Windows Azure Drives

Windows Azure Drives also support caching of data locally on the role instance to improve reads. The cache size is specified through configuration and the API while mounting the drive, and it takes up the local drive space allocated to the role instance based on its size. Therefore, the drive cache cannot exceed the size of the local drive space.

The typical life cycle of a Windows Azure Drive comprises the following steps:

- Creating a Drive
- Uploading a Drive
- Mounting a Drive: CloudDrive.Mount()
- Working with a Drive
- Snapshotting a Drive: CloudDrive.Snapshot()
- Copying a Drive: The CloudDrive.Copy() method allows you to create a writable copy of a snapshot or an unmounted drive.
- Unmounting a Drive: CloudDrive.Unmount()

Blob Storage Scenarios

- Massive Data Uploads
- Storage as a Service in the Cloud
- Encryption and Decryption
- Enterprise File Sync

Queue Limitations and Constraints

- The Queue service supports an unlimited number of messages, but individual messages in the Queue service can’t be more than 64KB in size.
- The FIFO behavior of the messages sent to the Queue service isn’t guaranteed; messages can be received in any order.
- The Queue service doesn’t offer guaranteed-once delivery.
- Messages sent to the Queue service can be in either text or binary format, but received messages are always in the Base64 encoded format.
- The expiration time for messages stored in the Queue service is seven days.

Note: Unlike the Blob service REST API, the Queue service REST API doesn’t support HTTP 1.1 conditional headers.
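Since received messages always come back Base64 encoded, a consumer must decode the body before processing it. A minimal sketch of that round trip (the helper names are made up for illustration; a real client library handles this for you):

```python
import base64

# Messages go in as text (or binary) but are always received Base64
# encoded, so the consumer decodes the body before acting on it.

def encode_message(body):
    """What effectively happens to a text message body on its way in."""
    return base64.b64encode(body.encode("utf-8")).decode("ascii")

def decode_message(received):
    """Decode the Base64 body of a received queue message."""
    return base64.b64decode(received).decode("utf-8")

wire = encode_message("process order #42")
print(wire)                  # → cHJvY2VzcyBvcmRlciAjNDI=
print(decode_message(wire))  # → process order #42
```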

List Queues URI Parameters

- prefix
- marker
- maxresults

Queue Scenarios

- Scenario 1: Windows Azure Web and Worker Role Communications
- Scenario 2: Worker Role Load Distribution
- Scenario 3: Interoperable Messaging
- Scenario 4: Guaranteed Processing

Table Limitations and Constraints

Caution: The following characters are not allowed in PartitionKey and RowKey values:

- The forward slash (/) character
- The backslash (\) character
- The number sign (#) character
- The question mark (?) character

• An entity can contain at most 255 properties (including the PartitionKey, RowKey, and Timestamp properties, which are mandatory).
• The total size of an entity, including all the property names and values, can’t exceed 1MB.
• Timestamp is a read-only value maintained by the system.
• PartitionKey and RowKey can’t exceed 1KB in size each.
• Property names can contain only alphanumeric characters and the underscore (_) character. The following characters aren’t supported in property names: backslash (\), forward slash (/), dash (-), number sign (#), and question mark (?).
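It can pay to enforce the key constraints client-side before an insert fails at the service. This is a hypothetical validator, not part of any SDK, covering just the forbidden characters and the 1KB limit for PartitionKey/RowKey values:

```python
# Hypothetical client-side validator for PartitionKey/RowKey values,
# enforcing the constraints listed above before an entity is sent.

FORBIDDEN_KEY_CHARS = set("/\\#?")
MAX_KEY_BYTES = 1024  # PartitionKey and RowKey can't exceed 1KB each

def validate_key(value):
    """Return True if `value` is usable as a PartitionKey or RowKey."""
    if any(ch in FORBIDDEN_KEY_CHARS for ch in value):
        return False
    if len(value.encode("utf-8")) > MAX_KEY_BYTES:
        return False
    return True

print(validate_key("orders-2011-12"))  # → True
print(validate_key("orders/2011/12"))  # → False (contains '/')
```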

Note: The Table service supports ACID transactions for batch operations on multiple entities in a single partition (with the same PartitionKey). The constraints are as follows: same PartitionKey, one operation per entity in the batch, no more than 100 entities in the batch, and the total batch payload size should be less than 4MB.
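A client that wants to write many entities therefore has to split them by PartitionKey and chunk each group to at most 100 entities. A sketch of that grouping (the 4MB payload limit is not checked here, and `make_batches` is an illustrative helper, not a library function):

```python
from itertools import groupby

# Group entities into valid Table service batches: one partition per batch,
# at most MAX_BATCH entities per batch.

MAX_BATCH = 100

def make_batches(entities):
    """Group entities by PartitionKey, then chunk each group into
    batches of at most MAX_BATCH entities."""
    batches = []
    keyfn = lambda e: e["PartitionKey"]
    for pk, group in groupby(sorted(entities, key=keyfn), key=keyfn):
        group = list(group)
        for i in range(0, len(group), MAX_BATCH):
            batches.append(group[i:i + MAX_BATCH])
    return batches

# 250 entities spread over two partitions: 125 each -> 100 + 25 per partition.
entities = [{"PartitionKey": "p{0}".format(i % 2), "RowKey": str(i)}
            for i in range(250)]
batches = make_batches(entities)
print(len(batches))  # → 4
print(all(len(b) <= MAX_BATCH for b in batches))  # → True
```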

Storage Analytics

In August 2011, Microsoft announced the Windows Azure Storage Analytics API. This API lets you trace and analyze all the calls made to the Windows Azure Storage service, including Blobs, Queues, and Tables. Storage Analytics comprises two features: Logging and Metrics. With Logging, you can trace the calls to the storage service, and the Metrics feature lets you capture the usage of your storage on an individual or aggregated basis.

Caution: Storage Analytics is not enabled by default. Once you enable it, you will be charged for the space occupied and the transactions handled by Storage Analytics in your storage account.

Table Service Scenarios

- Scenario 1: Reading Performance Counters from Table Storage
- Scenario 2: Paging in Table Storage

VM Role Benefits/Tradeoffs

Using the VM role gives you the ability to have more control over your image. You can set it up exactly the way you want, install any software necessary, and configure services that you might need. In addition, you still get the benefits of Azure-provided load balancing, failover, and redundancy. However, once you use the VM role, you are responsible for maintaining the image yourself. This means you have to perform upgrades to the operating system and apply patches.

Scenarios

- One would be an application that requires many other products to be installed to run correctly.
- If your application requires third-party software, and the provider didn’t write its installer to run in silent or unattended mode, then you need to use the VM role.

At a high level, there are three steps to deploying a VM to Windows Azure:

1. Create a base image in Hyper-V.
2. Apply sysprep.exe.
3. Upload to Windows Azure.

Windows Azure Connect

Windows Azure Connect vs. Service Bus

Both Connect and Service Bus enable hybrid scenarios. So how are they different, and when is it appropriate to use one over the other? I like to think of Service Bus as an application-level integration utility, while Connect is a machine-level integration utility.

With Service Bus, you need to build proxies on each side of the firewall, and the applications use a relay service to communicate. If your scenario is one application talking to another application, this is highly appropriate.

With Connect, you have access to the entire machine as if it were in your datacenter. This means you can operate in a more familiar way, such as integrating System Center Operations Manager for monitoring, connecting to a database on-premises, or accessing functionality of legacy systems that could not be ported to the cloud.

Connect brings tremendous power and flexibility. However, keep the following in mind:

1. Network latency. Keeping your database on-premises and using Connect is very tempting, especially in regulatory-compliance scenarios. However, the distance between your datacenter and the Azure datacenter matters, and performance may suffer. If the benefits outweigh the performance cost, then go with Connect.

2. Bandwidth costs. Even though the servers are interacting as if they are in the same datacenter, they are not. Even though pricing has not yet been announced for Connect, keep in mind that you are charged for all bandwidth coming out of the datacenter. No matter what, you always want to make sure you are using bandwidth as efficiently as possible.

SQL Azure Architecture

- Infrastructure Layer
- Platform Layer
- Services Layer: The services layer is also responsible for routing connections to the primary database instance in the platform layer.
- Client Layer: The client layer is the only layer that runs outside of the Microsoft data center.

SQL Azure Data Access

- Code-Near Connectivity

In a typical code-near architecture, the data access application is located in the same data center as the SQL Azure database. The end users or on-premises applications access a web interface exposed via a Windows Azure web role. This web role may be hosting an ASP.NET application for end users or a web service for on-premises applications.

The advantages of the code-near approach are as follows:

• Business logic is located closer to the database.
• You can expose open standards–based interfaces like HTTP, REST, SOAP, and so on to your application data.
• Client applications don’t have to depend on the SQL Server client API.

The disadvantage of this approach is the performance impact your application experiences if you’re using Windows Azure as a middle tier to access the database.

- Code-Far Connectivity

The biggest advantage of the code-far approach is the performance benefit your application can experience because of direct connectivity to the database in the cloud. The biggest disadvantage is that all the client applications must use the TDS protocol to access the database. Therefore, the data access clients must use SQL Server-supported client APIs like ADO.NET, ODBC, and so on.

Creating Logins

To create a new login, you first create the login in the master database, then create a user for that login in the MyCloudDb database, and finally add the new user to one of the database roles using the system stored procedure sp_addrolemember:

- CREATE LOGIN testuser WITH PASSWORD = 'pas@word1'
- CREATE USER testuser FOR LOGIN testuser;
- EXEC sp_addrolemember 'db_owner', 'testuser'

SQL Azure Reporting

Data Sync