Over the last 20 years, my IT background and experience have predominantly revolved around Microsoft Windows and other Microsoft technologies, including SQL Server, VB.NET and C#. Over the last 10 years, my career and focus shifted to the business side of IT, moving into product management and product marketing.
In 2015, Rackspace launched Fanatical Support for Microsoft Azure, a managed service offering for the Azure public cloud platform, and I thought it was high time I dusted off my Microsoft certifications and jumped into the technical side again.
Because most of my knowledge is rooted in the “legacy world” of IT, I decided to highlight some similarities and differences between compute functions deployed in Azure and compute functions deployed on a dedicated server.
I still remember my old server room when I was working for a startup in London. I had a couple of 2U and 4U servers racked in what was essentially a closet, with cables everywhere, and very little air-conditioning or ventilation. Things have certainly changed over the last few years. Some of you may be in a similar position, deciding to take the plunge and move over to Azure.
A quick preface: this blog post is meant for folks starting to get their feet wet with Azure, and it does not cover advanced concepts. If you’re a Microsoft-certified Azure solution architect, you should probably close this window now; I wouldn’t want to bore you to death.
On to the post…
The basics of Azure compute
An Azure Virtual Machine is an on-demand, scalable computing resource that can be deployed on Azure through various methods. These include the user interface in the Azure portal, pre-defined application “blueprints” in the Azure Marketplace, scripting through Azure PowerShell, deploying from a template defined in a JSON file, or deploying directly through Visual Studio.
Azure uses a deployment model called the Azure Resource Manager, which defines all the resources that form part of your overall application solution, allowing you to deploy, update or delete all the resources for your solution in a single operation.
Resources may include the storage account, network configurations and IP addresses. You may have heard the term ARM templates; this refers to the JSON template that defines the different parts of the solution you are trying to deploy. The older “Azure Classic” deployment model also still exists as an option, but I don’t cover it in this post.
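To make the ARM template idea concrete, here is a minimal sketch of the JSON skeleton that every template shares. The schema URL and top-level property names are the standard ones; the `vmName` parameter is just an illustrative placeholder, and the actual storage, network and VM resource definitions would go inside the `resources` array:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vmName": { "type": "string" }
  },
  "variables": {},
  "resources": [],
  "outputs": {}
}
```

At the time of writing, you would typically deploy a template like this with the Azure PowerShell cmdlet `New-AzureRmResourceGroupDeployment`, creating, updating or deleting everything it defines in one operation.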
Azure Virtual Machines come in different types and sizes, with names such as A-series and D-series. Each VM type is built with specific workloads or performance needs in mind, including general purpose, compute optimized, storage optimized or memory optimized. You can also deploy less common types such as GPU or high-performance compute VMs.
Now that we’ve covered the basics of Azure VMs, I’ll dive a little deeper into storage, network configuration, management and monitoring of Azure VMs, as well as how this differs from running a stand-alone Windows Server.
Local Disk Storage vs Azure Storage Accounts
There are a few similarities between a typical Windows Server and an Azure Windows VM. In Azure, the type of disk on your VM depends on the VM size you select.
At the time of writing, an A-series VM contains local HDD storage starting at 20 GB, and Av2-series VMs have SSD disks starting at 10 GB. These disk sizes increase with the VM size you choose. For example, an H-series VM can have up to 2,000 GB of local SSD storage.
On an Azure VM, the operating system disk is deployed as the C: drive, just as you would set up a standard Windows Server on-premises on a dedicated server. The D: drive is temporary: anything you stored on that disk in your session is lost when the VM is shut down. The D: drive should therefore only hold data you can afford to lose, such as the page file.
The data disks on your VM start at drive letter F: and go up from there (G:, H:, I:, etc.). This is where you would store data files. For example, if you’re creating a SQL Server VM, you could store the database files on the F: drive and above. The number of data disks you can attach is limited by the size of the VM you selected. This is very important: these data disks are stored as VHD files in Azure blob storage, in an Azure Storage account. They are not local to the server.
You can deploy these disks on standard storage (HDD) or on higher-performing Azure Premium Storage (SSD) to get a much higher level of IOPS. At the time of writing, you can attach up to 32 TB of storage per VM.
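To illustrate how a data disk is really a blob rather than a local drive, here is a rough sketch of the `storageProfile` section of a VM resource in an ARM template. The storage account name, container and disk name are placeholders; note how the `vhd` URI points straight at blob storage:

```json
"storageProfile": {
  "dataDisks": [
    {
      "name": "datadisk1",
      "lun": 0,
      "createOption": "Empty",
      "diskSizeGB": 1023,
      "caching": "None",
      "vhd": {
        "uri": "https://mystorageaccount.blob.core.windows.net/vhds/datadisk1.vhd"
      }
    }
  ]
}
```

Inside the guest OS, this disk simply shows up as a new drive (F: and up) once you initialize and format it.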
This model where storage is abstracted from the server is clearly very different from a typical server where storage is all local to the server’s hard drives, but it makes your configuration infinitely more scalable. Sure, you can attach SAN volumes to a dedicated server in a similar way, but the ease and scalability of doing it in Azure is unmatched.
Basics of networking for your Azure VM
Similar to your on-premises server, you can configure both Private and Public IP addresses on your Azure VMs. In Azure Resource Manager, a public IP address is a resource that has a specific set of properties independent of the VM. You can associate a public IP address with a Windows or Linux VM by assigning it to its network interface.
Public IPs are for Internet-facing services like a webserver. Private IPs are for communicating with other Azure VMs in your Azure Virtual Network (VNet).
Private IP addresses are assigned from a virtual network’s address range, which is defined using CIDR notation to specify the subnet range; an example is 10.0.0.0/24. A virtual network is a resource that you can create using the Azure Resource Manager, as shown in the next screenshot.
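Portal screenshot aside, the same virtual network can also be declared in an ARM template. Here is a minimal sketch with a /16 address space carved into one /24 subnet; the resource name, subnet name and apiVersion are illustrative placeholders:

```json
{
  "type": "Microsoft.Network/virtualNetworks",
  "name": "myVNet",
  "apiVersion": "2016-03-30",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] },
    "subnets": [
      { "name": "frontend", "properties": { "addressPrefix": "10.0.0.0/24" } }
    ]
  }
}
```

Any VM whose network interface lands in the `frontend` subnet gets a private IP out of the 10.0.0.0/24 range.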
Public IP addresses on Azure Virtual Machines
This is one area where the comparison to a dedicated Windows server breaks down a bit. The concepts of static and dynamic IP addresses still exist, but you may be surprised to learn that, in the case of a public IP address, a dynamic IP is not associated with a DHCP server. Below is a quick explanation of how IP addressing is handled in Azure.
Static public IP addresses
On a traditional server, setting a static IP meant configuring a permanent IP address and subnet mask on the server’s network interface; that IP address would not change, even if you shut down the machine.
The behavior is similar in Azure, with one twist: setting a static public IP address on a VM in Azure means Azure will pick an IP address from a range. You cannot specify the exact address; you simply get the next IP in the range. That IP does, however, stay with the VM if the VM is restarted or shut down.
Dynamic public IP addresses
As I mentioned earlier, no DHCP server is actually involved here. As with a static IP address in Azure, when you allocate a dynamic IP address to a VM, the address is selected from a range and assigned to the network interface of the VM. No DHCP broadcasts or requests are sent by your VM to obtain a dynamic IP.
The difference between this and a static IP in Azure is that when you shut down the VM, the IP address is not persistent. When you restart the VM it may receive a completely different IP from the subnet range.
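The static-versus-dynamic choice comes down to a single property on the public IP resource in an ARM template. A sketch, with placeholder resource name and apiVersion:

```json
{
  "type": "Microsoft.Network/publicIPAddresses",
  "name": "myPublicIP",
  "apiVersion": "2016-03-30",
  "location": "[resourceGroup().location]",
  "properties": {
    "publicIPAllocationMethod": "Static"
  }
}
```

Change `"Static"` to `"Dynamic"` and the address is no longer persistent: shut the VM down and it may come back with a completely different IP.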
Protecting your VM with Network Security Groups
Similar to a dedicated firewall, Azure lets you create a network security group (NSG) that protects your VM from network traffic on the public internet. You can also open specific ports to allow inbound traffic to your VM, or set outbound rules to restrict outgoing traffic.
A network security group can be associated with the network interface of your VM, or with an entire subnet. If it’s associated with a subnet then the inbound and outbound rules you specify on the NSG apply to the entire subnet.
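As a sketch, here is what an NSG with a single inbound rule might look like in an ARM template, in this case allowing HTTPS traffic from the internet. The resource name, rule name and apiVersion are placeholders:

```json
{
  "type": "Microsoft.Network/networkSecurityGroups",
  "name": "myNSG",
  "apiVersion": "2016-03-30",
  "location": "[resourceGroup().location]",
  "properties": {
    "securityRules": [
      {
        "name": "allow-https-inbound",
        "properties": {
          "priority": 100,
          "direction": "Inbound",
          "access": "Allow",
          "protocol": "Tcp",
          "sourceAddressPrefix": "Internet",
          "sourcePortRange": "*",
          "destinationAddressPrefix": "*",
          "destinationPortRange": "443"
        }
      }
    ]
  }
}
```

Rules are evaluated in priority order (lower numbers first), so you can layer a broad deny rule underneath narrower allow rules.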
Private IP Addresses for Azure VMs
There aren’t many differences between static and dynamic private IP addresses in Azure and what you’re used to in your datacenter or IT environment. Dynamic private IP addresses are allocated via a DHCP service provided by Azure.
Static private IP addresses are also handed out via DHCP, much like a DHCP reservation: the address is persistent and will not change when the VM is restarted or shut down. The gotcha here is that you should never manually change the IP address settings inside the VM to static. Keep it as dynamic IP assignment, and Azure will always assign it the same IP address.
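A static private IP is requested on the network interface resource, not inside the guest OS. Here is a rough sketch of the `ipConfigurations` section of a NIC in an ARM template; the config name, address and VNet/subnet names are placeholders:

```json
"ipConfigurations": [
  {
    "name": "ipconfig1",
    "properties": {
      "privateIPAllocationMethod": "Static",
      "privateIPAddress": "10.0.0.4",
      "subnet": {
        "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', 'myVNet', 'frontend')]"
      }
    }
  }
]
```

Azure’s DHCP service then reserves 10.0.0.4 for this NIC, while the guest OS stays configured for dynamic assignment.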
Managing Azure VMs
As you may have guessed, managing an Azure VM isn’t that different from remotely managing on-premises Windows or Linux VMs and dedicated servers. There are a couple of ways you can do this:
- Remote Desktop (RDP) — This is the most common method. Simply fire up your RDP client and connect to the public IP of the Windows Server-based Azure VM.
- SSH — This is the most common method to remotely connect to a Linux server.
Cross platform (Linux or Windows) management options
- Command Line Interface (CLI) — This is a set of open source, shell-based commands for creating and managing Microsoft Azure VMs. It’s a great option for administrators because the scripts work across operating systems, including Windows, Linux and Mac.
- VM Extensions — These are agents you can install on Azure VMs, either during deployment or post-deployment of the VM. These agents allow you to use automation tools like Chef, Puppet or Desired State Configuration for Windows or Linux. In the case of DSC, it allows you to describe the desired state of your Windows or Linux OS for automated configuration of your VM. DSC files can be stored in an Azure Storage account. This is great for re-using already created on-premises DSC configurations that can be applied to Azure VMs.
- Custom Script extensions — This method allows you to run scripts on the Azure VM depending on the OS type. For Windows, you would use PowerShell, and for Linux you could use scripts compatible with the Linux OS, such as Python or Bash. These scripts reside outside of your VM, in an Azure Storage account or on GitHub, and can contain a set of instructions to automate the configuration of your VM.
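As an example of the extension model, here is a sketch of a Custom Script extension resource for a Windows VM in an ARM template. The VM name, script URL and `typeHandlerVersion` are illustrative placeholders:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "myVM/configureApp",
  "apiVersion": "2015-06-15",
  "location": "[resourceGroup().location]",
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.8",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [ "https://mystorageaccount.blob.core.windows.net/scripts/configure.ps1" ],
      "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File configure.ps1"
    }
  }
}
```

When the VM is deployed, the extension downloads the script from the storage account and runs it inside the guest, so the VM configures itself without you ever opening an RDP session.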
Monitoring Azure VMs
Monitoring and diagnostics can easily be enabled from the Azure portal. There are a number of diagnostic logs you can monitor such as Windows event system logs, Windows event security logs, Windows event application logs or Diagnostic infrastructure logs.
This process is similar to the Windows Event Log on your dedicated server, where events on the server are stored locally in various log files, which you can then analyze to pinpoint issues. In Azure, the log files are stored in an Azure Storage account that is automatically associated with your virtual machine.
As an admin, you are also able to set alerts on specific metrics, like the example below. This alert will trigger when the CPU on a server runs at more than 75 percent for over five minutes.
Alerts can be configured to send email notifications to administrators on the account, or you can also use webhooks to trigger workflows in third party systems. For example, at Rackspace, we create support tickets based on specific triggers set on monitoring alerts. You can also use Operations Management Suite in Azure for reporting on these monitoring events. Check out this recent blog post on how Fanatical Support for Microsoft Azure is powered by OMS Log Analytics.
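The CPU alert described above can itself be expressed as an ARM resource. Here is a rough sketch of a classic metric alert rule with an email action; the rule name, apiVersion and target VM name are placeholders, and the exact schema may differ from what the portal generates:

```json
{
  "type": "Microsoft.Insights/alertRules",
  "name": "highCpuAlert",
  "apiVersion": "2016-03-01",
  "location": "[resourceGroup().location]",
  "properties": {
    "name": "highCpuAlert",
    "isEnabled": true,
    "condition": {
      "odata.type": "Microsoft.Azure.Management.Insights.Models.ThresholdRuleCondition",
      "dataSource": {
        "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleMetricDataSource",
        "resourceUri": "[resourceId('Microsoft.Compute/virtualMachines', 'myVM')]",
        "metricName": "Percentage CPU"
      },
      "operator": "GreaterThan",
      "threshold": 75,
      "windowSize": "PT5M"
    },
    "actions": [
      {
        "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleEmailAction",
        "sendToServiceOwners": true
      }
    ]
  }
}
```

The `windowSize` of PT5M and `threshold` of 75 correspond to the “more than 75 percent for over five minutes” example; a webhook action could be added alongside the email action to trigger third-party workflows.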
That’s a wrap
We’ve come to the end of this introductory crash course on deploying, managing and monitoring an Azure VM and how it’s different from a dedicated server.
There is a world of information out there to help you advance your knowledge as you delve deeper into Azure, but if you don’t feel like going it alone, you can always call one of our experts for help with architecting, deploying and managing your Microsoft Azure environment.
As always, visit Rackspace to find out more about our managed support offering for Microsoft Azure and additional ways our Microsoft experts can help you get the most out of the cloud.