Article ID = 148
Article Title = Virtualising OS X 10.11 El Capitan
Article Author(s) = Graham Needham (BH)
Article Created On = 12th March 2019
Article Last Updated = 27th March 2019
Article URL = https://www.macstrategy.com/article.php?148
Article Brief Description:
Instructions for installing, setting up and virtualising OS X 10.11 El Capitan
Virtualising OS X 10.11 El Capitan
The ability to virtualise OS X 10.11 El Capitan is important and very useful because it is an easy way to run 32-bit applications that do not run on macOS 10.15 or later. MacStrategy presents this special guide to virtualising Mac OS X / OS X / macOS. This article deals with setting up/installing a virtual machine with OS X 10.11 clean/from scratch. If you would like to transfer an existing Mac running OS X 10.11 to a virtual machine, or take an OS X 10.11 bootable storage device/clone/disk image and convert it into a virtual machine, please see this article instead.
Virtualisation Software
- Parallels Desktop [£79.99 inc VAT - 14 day free trial available]
- VMWare Fusion [£70.00 inc VAT - 30 day free trial available]
- Oracle VirtualBox [FREE - Open source under GNU General Public License (GPL) version 2]
Instructions
NOTE: This document was written using a Mac mini (2014 model) with macOS 10.14 Mojave running in 64-bit only test mode and using Parallels Desktop 14.1.2, VMWare Fusion 11.0.2 and VirtualBox 6.0.4.
Preparation
NOTE: You will need a Mac and the OS X 10.11 El Capitan installer.
- Obtain your preferred virtualisation software (see list above)
- Obtain the OS X 10.11 El Capitan installer and copy it to your local Desktop/hard disk:
- If you already have this installer archived/backed up you are good to go
- If you purchased OS X 10.11 El Capitan you might be able to re-download the installer - go to Macintosh HD > Applications > App Store > Purchased tab at the top > login if necessary > check your purchase history list to download El Capitan
- You may still be able to download OS X 10.11 El Capitan for free from Apple
- If you haven't already, make a backup/archive of the OS X 10.11 El Capitan installer e.g. copy it to an external storage device
- Purchase/install/update your preferred virtualisation software
- On later versions of macOS your preferred virtualisation software will require you to specifically allow its System Extension(s) to run via System Preferences > Security & Privacy, and it may also need to be granted access to Accessibility
- Make sure you have plenty of free hard disk space (a basic 10.11.6 install is about 20GB before your own applications, and you'll need at least twice that if you need to clone it for multiple installations), plus you need ~6GB for Parallels to create a bootable disk image file from the installer, so we recommend at least 75GB of free space (100GB+ if you're looking to virtualise and use Adobe Creative Suite) - a quick check is sketched after this list
- Make sure your actual, physical Mac has a working internet connection e.g. use a web browser to go to https://www.apple.com and see if you can view a web page
- Create a dedicated folder to share files/documents with the virtual environment e.g. in your Documents folder create a folder titled '1011SharedFolder'
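If you prefer to script the preparation checks, the short Python sketch below verifies free disk space and creates the dedicated shared folder. The 75GB threshold and the '1011SharedFolder' name come from the steps above; the script itself is not part of the original instructions and is only a convenience.

```python
#!/usr/bin/env python3
"""Preparation helper: check free disk space and create the shared folder.

A convenience sketch only - the folder name and 75 GB threshold follow the
preparation steps above.
"""
import shutil
from pathlib import Path

REQUIRED_FREE_GB = 75                      # recommended minimum from the guide
SHARED_FOLDER = Path.home() / "Documents" / "1011SharedFolder"

def main() -> None:
    # Check free space on the volume holding the home folder
    free_gb = shutil.disk_usage(Path.home()).free / 1024**3
    if free_gb < REQUIRED_FREE_GB:
        print(f"Warning: only {free_gb:.0f} GB free; "
              f"the guide recommends at least {REQUIRED_FREE_GB} GB.")
    else:
        print(f"OK: {free_gb:.0f} GB free.")

    # Create the dedicated folder used to share files with the virtual machine
    SHARED_FOLDER.mkdir(parents=True, exist_ok=True)
    print(f"Shared folder ready at {SHARED_FOLDER}")

if __name__ == "__main__":
    main()
```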
Parallels Desktop Instructions
- Open Parallels
- Go to File menu > New
- Click on 'Install Windows or another OS from a DVD or image file' and click Continue
- If Parallels automatically finds the OS X 10.11 installer you put on your Desktop/local hard disk earlier (as per the preparation section above) click 'Continue'
- Otherwise click on 'Choose Manually', click 'Image File' and locate the OS X 10.11 installer/drag it to the window
- Click Continue to begin installing OS X
- Parallels will need to create a bootable disk image file from the installer so at the warning message click 'Continue' and Save the 'macOS image file' to the default location
- Name your virtual machine e.g. 'OS X 10.11'
- Tick the 'Customize settings before installation' option
- Choose your required custom settings - they can be changed later. We recommend:
- General > CPUs and Memory (e.g. 2 CPUs and 4GB RAM)
- Options > Sharing - for best security set 'Share Folders' to 'None', untick 'Share iCloud, Dropbox, and Google Drive' + 'Map Mac volumes to virtual machine' and click 'Custom Folders…' to add your dedicated shared folder e.g. in your Documents > '1011SharedFolder' (as per the preparation section above)
- Hardware > Video > Video memory - the more memory assigned the higher the resolution available for the virtual environment
- Hardware > Network > Source > choose 'Ethernet' - the virtual environment will use your physical Mac's Ethernet network configuration
- Hardware > Sound & Camera > untick 'Share Mac Camera'
- Close the settings window and click 'Continue'
- The virtual machine will reboot to the OS X install screen (Apple logo + whirling wheel underneath)
- Follow the on screen instructions
- At the OS X Utilities screen, click 'Install OS X' and click 'Continue'
- After the installation completes and the virtual machine reboots please be patient, especially with any black/white/grey screens - everything can be slower in a virtual environment
- At the Welcome screen follow the on screen instructions
- Select your country
- Select your keyboard
- Transfer Information to This Mac > Don't transfer any information now
- Enable Location Services > your choice
- Apple ID > Don't Sign in (Skip)
- Terms and Conditions > Agree
- Create Your Computer Account + tick 'Set time zone based on current location'
- Diagnostics & Usage > untick 'Send diagnostics & usage data to Apple' + 'Share crash data with app developers'
- Go to Actions menu > Install Parallels Tools…
- Install Parallels Tools, following the on screen instructions and restart the virtual machine when complete
- Set the screen resolution as required
- Set your Finder > Preferences
- To avoid confusion with your primary computer rename the virtual machine's hard disk from Macintosh HD to something that is different to your current hard disk e.g. 'OS X 10_11 HD'
- Go to Apple menu > App Store… > Updates tab > install all available updates (except full macOS upgrades) especially any security updates
- Keep going to Apple menu > App Store… > Updates tab and installing all available updates until there are no more updates to install
- NOTE: If you are going to use this virtual environment on multiple computers or you just want a backup:
- In Parallels 'Shut down' the virtual machine and choose shut down again to force the Mac to shut down if necessary
- In the Finder go to the Parallels virtual machine folder (usually Macintosh HD > Users > your home directory > Library > Parallels)
- Copy/duplicate/archive the OS X 10.11 virtual machine file (.pvm) - a copy sketch follows this list
- Copy this file to the same place on additional Macs with Parallels as required (usually Macintosh HD > Users > your home directory > Library > Parallels)
- In Parallels go to Window menu > Control Center
- Select the OS X 10.11 virtual machine (don't open it or start it)
- Go to File menu > Clone and make a clone of the virtual machine
- Copy the clone to additional Macs with Parallels as required
- Check the OS X 10.11 Notes section below
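As a rough illustration of the backup step above, the sketch below copies a Parallels .pvm bundle to an external volume with Python. The bundle name and the destination path are placeholders - adjust them to your own setup, and make sure the virtual machine is shut down first.

```python
#!/usr/bin/env python3
"""Archive a (shut down) Parallels virtual machine bundle.

Paths are placeholders; a .pvm bundle is just a folder, so copytree works.
"""
import shutil
from pathlib import Path

# Default Parallels location, per the guide: ~/Library/Parallels
SOURCE = Path.home() / "Library" / "Parallels" / "OS X 10.11.pvm"   # placeholder name
DESTINATION = Path("/Volumes/Backup/OS X 10.11.pvm")                # placeholder volume

if not SOURCE.exists():
    raise SystemExit(f"Virtual machine bundle not found: {SOURCE}")

# copytree preserves the bundle's internal structure; dirs_exist_ok avoids
# failing if a previous copy is being refreshed (Python 3.8+).
shutil.copytree(SOURCE, DESTINATION, dirs_exist_ok=True)
print(f"Copied {SOURCE} -> {DESTINATION}")
```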
VMWare Fusion
- Open VMWare Fusion
- Go to File menu > New
- At the 'Select the Installation Method' screen click on 'Install from disc or image' and click Continue
- Locate the OS X 10.11 installer and drag it to the area in the window, then click Continue
- At the 'Finish > Virtual Machine Summary' screen click 'Customize Settings' at the bottom
- Name your virtual machine e.g. 'OS X 10.11'
- Choose your required custom settings
- We recommend:
- Processors & Memory > CPUs and Memory (e.g. '2 processor cores' and 4096MB [4GB])
- Network Adapter > tick 'Connect Network Adapter' and choose 'Ethernet' - the virtual environment will use your physical Mac's Ethernet network configuration
- Hard Disk (SATA) > virtual machine drive size of 75GB
- USB & Bluetooth > untick 'Share Bluetooth devices with the virtual machine'
- Close the settings window and click 'Finish' if necessary
- Click the start button/triangle in the middle of the screen to begin installing OS X
- The virtual machine will reboot to the OS X installer (Apple logo + whirling wheel underneath)
- Follow the on screen instructions
- At the OS X Utilities screen, click 'Install OS X' and click 'Continue'
- After the installation completes and the virtual machine reboots please be patient, especially with any black/white/grey screens - everything can be slower in a virtual environment
- At the Welcome screen follow the on screen instructions
- Select your country
- Select your keyboard
- Transfer Information to This Mac > Don't transfer any information now
- Enable Location Services > your choice
- Apple ID > Don't Sign in (Skip)
- Terms and Conditions > Agree
- Create Your Computer Account + tick 'Set time zone based on current location'
- Diagnostics & Usage > untick 'Send diagnostics & usage data to Apple' + 'Share crash data with app developers'
- Go to Virtual Machine menu > Install VMWare Tools
- Install VMWare Tools, following the on screen instructions, and restart the virtual machine when complete (you may get a message about the installer certificate being out of date; this appears to stop the Tools from installing, so features like drag and drop are not supported with this guest OS)
- If you want to configure shared folder(s) go to Virtual Machine > Sharing > Sharing Settings… > tick 'Enable Shared Folders' add your dedicated shared folder e.g. in your Documents > '1011SharedFolder' (as per the preparation section above)
- Set the screen resolution as required
- Set your Finder > Preferences
- To avoid confusion with your primary computer rename the virtual machine's hard disk from Macintosh HD to something that is different to your current hard disk e.g. 'OS X 10_11 HD'
- Go to Apple menu > App Store… > Updates tab > install all available updates (except full macOS upgrades) especially any security updates
- Keep going to Apple menu > App Store… > Updates tab and installing all available updates until there are no more updates to install
- NOTE: If you are going to use this virtual environment on multiple computers or you just want a backup:
- Go to Virtual Machine menu > Shut down and click the 'Shut Down' button
- Quit VMWare Fusion
- In the Finder go to the Fusion virtual machine folder (usually Macintosh HD > Users > your home directory > Library > Virtual Machines)
- Copy/duplicate/archive the OS X 10.11 virtual machine bundle (.vmwarevm)
- Copy this file to the same place on additional Macs with Fusion as required (usually Macintosh HD > Users > your home directory > Library > Virtual Machines)
- If you have Fusion 'Professional', in Fusion select the OS X 10.11 virtual machine from the Virtual Machine Library (you cannot create clones using the standard version of Fusion - use the copy method above instead)
- Click Virtual Machine and select 'Create Full Clone'
- Type a name for the clone e.g. 'OS X 10.11 Clone' and click Save to make a clone of the virtual machine
- The clone file is created in the Fusion Virtual Machines folder (usually Macintosh HD > Users > your home directory > Library > Virtual Machines)
- Copy the clone to additional Macs with Fusion as required
- Check the OS X 10.11 Notes section below
VirtualBox
We could not get VirtualBox to create an OS X 10.11 guest OS - it would never boot the OS X installer - we tried at least 10 different methods, all documented out there on the internet, but none of them worked. However, we discovered a neat little trick: easily create the virtual machine in VMWare Fusion (a 30-day trial download is available) and then copy the virtual machine over for use in VirtualBox - here are the step-by-step instructions.
NOTE: This trick was performed on a Mac mini (2014 model) with macOS 10.14 Mojave using the trial version of VMWare Fusion 11.0.2 and VirtualBox 6.0.4.
- Install VirtualBox heeding the advice in our Preparation section above
- Open VirtualBox to get it running and then Quit it
- Download a trial version of VMWare Fusion
- Install VMWare Fusion heeding the advice in our Preparation section above
- Install OS X 10.11 using the instructions in our VMWare Fusion section above
- Stop the OS X 10.11 virtual machine in VMWare Fusion if it is running and then Quit VMWare Fusion
- Go to Macintosh HD > Users > ~your home directory~ > VirtualBox VMs folder (if this folder doesn't exist, create it) > inside this folder create a new folder called 'OS X 10.11 from Fusion' > keep this window open
- Open a new Finder window and go to Macintosh HD > Users > ~your home directory~ > Virtual Machines > locate the OS X 10.11 virtual machine you created in VMWare Fusion e.g. 'OS X 10.11' > right/control click on it and select 'Show Package Contents' from the contextual menu
- Copy, not move, all the files from this folder to the 'OS X 10.11 from Fusion' folder you created two steps ago in the VirtualBox VMs folder
- Now you're ready to use this virtual machine disk with VirtualBox (a scripted equivalent of the registration steps below is sketched after this list)
- Open VirtualBox
- Click on the 'New' icon
- Click on 'Expert Mode'
- Name your virtual machine e.g. 'OS X 10.11'
- Set 'Type' to 'Mac OS X'
- Set 'Version' to 'Mac OS X 10.11 El Capitan (64-bit)'
- Set 'Memory Size' to 4096MB (4GB)
- Set 'Hard Disk' to 'Use an existing virtual hard disk file'
- Click on the folder icon with the little green up arrow in the bottom right of the window
- Navigate to Macintosh HD > Users > ~your home directory~ > VirtualBox VMs folder > OS X 10.11 from Fusion and select the 'Virtual Disk.vmdk' file
- Click 'Choose'
- Click 'Create'
- Select the new virtual OS on the left and click 'Settings' at the top
- Set your virtual OS settings. We recommend:
- Display > Screen > Video memory - the more memory assigned the higher the resolution available for the virtual environment e.g. set it to 128MB
- Audio > UNTICK 'Enable Audio' - according to the VirtualBox forums it is best that audio is disabled
- Click 'OK'
- Click 'Start'
- The virtual machine will boot into OS X 10.11
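If you prefer the command line, the sketch below drives VirtualBox's VBoxManage tool from Python to perform roughly the same registration steps (create the VM, set the OS type and memory, attach the existing Virtual Disk.vmdk). It is a sketch under the assumption that VBoxManage is on your PATH and that the paths and names match your setup (VirtualBox 6.0-era syntax); the GUI steps above remain the documented route.

```python
#!/usr/bin/env python3
"""Register the Fusion-created disk with VirtualBox via VBoxManage.

A sketch only: assumes VBoxManage is on PATH and that the VM name, OS type
and disk path below match your setup (VirtualBox 6.0-era syntax).
"""
import subprocess
from pathlib import Path

VM_NAME = "OS X 10.11"
DISK = Path.home() / "VirtualBox VMs" / "OS X 10.11 from Fusion" / "Virtual Disk.vmdk"

def vbox(*args: str) -> None:
    """Run a VBoxManage command and stop on any error."""
    subprocess.run(["VBoxManage", *args], check=True)

# Create and register the VM with the matching guest OS type
vbox("createvm", "--name", VM_NAME, "--ostype", "MacOS1011_64", "--register")

# 4GB RAM, 128MB video memory, audio disabled (per the settings above)
vbox("modifyvm", VM_NAME, "--memory", "4096", "--vram", "128", "--audio", "none")

# Add a SATA controller and attach the existing Fusion-created disk
vbox("storagectl", VM_NAME, "--name", "SATA", "--add", "sata", "--controller", "IntelAhci")
vbox("storageattach", VM_NAME, "--storagectl", "SATA",
     "--port", "0", "--device", "0", "--type", "hdd", "--medium", str(DISK))

# Boot the virtual machine
vbox("startvm", VM_NAME)
```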
OS X 10.11 Notes
Security Notes
OS X 10.11 is no longer supported with security updates so be sure to follow our recommendations for securing older operating systems, specifically:
- Don't use Apple Safari as it is no longer updated and thus it is not secure - use a supported web browser e.g. Firefox or Chrome
- Don't use Apple Mail as it is no longer updated and thus it is not secure
- Don't install unsupported web plug-ins and disable old plugins (a sketch of this move follows this list):
- Go to OS X 10_11 HD (or whatever you have named the virtual hard disk) > Library
- If there is no folder named 'Internet Plug-Ins (Disabled)', create a new folder named that
- Open the 'Internet Plug-Ins' folder and move all the items in it to the 'Internet Plug-Ins (Disabled)' folder. NOTE: To move the files you will need to authenticate as an administrator of the computer.
- Restart the virtual machine (go to Apple menu > Restart)
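For reference, the sketch below performs the same plug-in move with Python. Run it inside the guest OS with administrator rights (e.g. via sudo), since items in /Library/Internet Plug-Ins are owned by the system; the folder names come straight from the steps above.

```python
#!/usr/bin/env python3
"""Move everything from /Library/Internet Plug-Ins to a disabled folder.

Run inside the OS X 10.11 guest with admin rights (e.g. sudo), mirroring the
manual steps above.
"""
import shutil
from pathlib import Path

PLUGINS = Path("/Library/Internet Plug-Ins")
DISABLED = Path("/Library/Internet Plug-Ins (Disabled)")

DISABLED.mkdir(exist_ok=True)

for item in PLUGINS.iterdir():
    # Move each plug-in bundle/file into the disabled folder
    shutil.move(str(item), str(DISABLED / item.name))
    print(f"Disabled {item.name}")

print("Done - restart the virtual machine to finish.")
```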
General Notes
Coming soon…
Running 32-bit Applications
Article Keywords: OS X OSX 107 108 109 1010 1011 macOS 1012 1013 1014 1015 Snow Leopard Lion Mountain Lion Mavericks Yosemite El Capitan Sierra High Sierra Mojave Catalina VM virtual machine virtualisation virtualising virtualization virtualizing
This article is © MacStrategy » a trading name of Burning Helix. As an Amazon Associate, employees of MacStrategy's holding company (Burning Helix sro) may earn from qualifying purchases. Apple, the Apple logo, and Mac are trademarks of Apple Inc., registered in the U.S. and other countries. App Store is a service mark of Apple Inc.
All proceeds go directly to MacStrategy / Burning Helix to help fund this web site.
Go to this web page to donate to us.
Azure allows you to run applications and virtual machines (VMs) on shared physical infrastructure. One of the prime economic motivations for running applications in a cloud environment is the ability to distribute the cost of shared resources among multiple customers. This practice of multi-tenancy improves efficiency by multiplexing resources among disparate customers at low costs. Unfortunately, it also introduces the risk that the physical servers and other infrastructure resources running your sensitive applications and VMs are shared with workloads that may belong to an arbitrary and potentially malicious user.
This article outlines how Azure provides isolation against both malicious and non-malicious users and serves as a guide for architecting cloud solutions by offering various isolation choices to architects.
Tenant Level Isolation
One of the primary benefits of cloud computing is the concept of a shared, common infrastructure across numerous customers simultaneously, leading to economies of scale. This concept is called multi-tenancy. Microsoft works continuously to ensure that the multi-tenant architecture of Microsoft Cloud Azure supports security, confidentiality, privacy, integrity, and availability standards.
In the cloud-enabled workplace, a tenant can be defined as a client or organization that owns and manages a specific instance of that cloud service. With the identity platform provided by Microsoft Azure, a tenant is simply a dedicated instance of Azure Active Directory (Azure AD) that your organization receives and owns when it signs up for a Microsoft cloud service.
Each Azure AD directory is distinct and separate from other Azure AD directories. Just like a corporate office building is a secure asset specific to only your organization, an Azure AD directory was also designed to be a secure asset for use by only your organization. The Azure AD architecture isolates customer data and identity information from co-mingling. This means that users and administrators of one Azure AD directory cannot accidentally or maliciously access data in another directory.
Azure Tenancy
Azure tenancy (Azure Subscription) refers to a “customer/billing” relationship and a unique tenant in Azure Active Directory. Tenant level isolation in Microsoft Azure is achieved using Azure Active Directory and Azure role-based access control offered by it. Each Azure subscription is associated with one Azure Active Directory (AD) directory.
Users, groups, and applications from that directory can manage resources in the Azure subscription. You can assign these access rights using the Azure portal, Azure command-line tools, and Azure Management APIs. An Azure AD tenant is logically isolated using security boundaries so that no customer can access or compromise co-tenants, either maliciously or accidentally. Azure AD runs on “bare metal” servers isolated on a segregated network segment, where host-level packet filtering and Windows Firewall block unwanted connections and traffic.
- Access to data in Azure AD requires user authentication via a security token service (STS). Information on the user’s existence, enabled state, and role is used by the authorization system to determine whether the requested access to the target tenant is authorized for this user in this session.
- Tenants are discrete containers and there is no relationship between them.
- There is no access across tenants unless a tenant admin grants it through federation or by provisioning user accounts from other tenants.
- Physical access to servers that comprise the Azure AD service, and direct access to Azure AD's back-end systems, is restricted.
- Azure AD users have no access to physical assets or locations, and therefore it is not possible for them to bypass the logical Azure RBAC policy checks stated below.
- For diagnostics and maintenance needs, an operational model that employs a just-in-time privilege elevation system is required and used. Azure AD Privileged Identity Management (PIM) introduces the concept of an eligible admin. Eligible admins should be users that need privileged access now and then, but not every day. The role is inactive until the user needs access, then they complete an activation process and become an active admin for a predetermined amount of time.
- Azure Active Directory hosts each tenant in its own protected container, with policies and permissions to and within the container solely owned and managed by the tenant.
- The concept of tenant containers is deeply ingrained in the directory service at all layers, from portals all the way to persistent storage.
- Even when metadata from multiple Azure Active Directory tenants is stored on the same physical disk, there is no relationship between the containers other than what is defined by the directory service, which in turn is dictated by the tenant administrator.
Azure role-based access control (Azure RBAC)
Azure role-based access control (Azure RBAC) helps you to share various components available within an Azure subscription by providing fine-grained access management for Azure. Azure RBAC enables you to segregate duties within your organization and grant access based on what users need to perform their jobs. Instead of giving everybody unrestricted permissions in Azure subscription or resources, you can allow only certain actions.
Azure RBAC has three basic roles that apply to all resource types:
Owner has full access to all resources including the right to delegate access to others.
Contributor can create and manage all types of Azure resources but can’t grant access to others.
Reader can view existing Azure resources.
The rest of the Azure roles allow management of specific Azure resources. For example, the Virtual Machine Contributor role allows the user to create and manage virtual machines. It does not give them access to the Azure Virtual Network or the subnet that the virtual machine connects to.
Azure built-in roles list the roles available in Azure. It specifies the operations and scope that each built-in role grants to users. If you're looking to define your own roles for even more control, see how to build Custom roles in Azure RBAC.
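The split between Owner, Contributor, and Reader can be pictured with a small conceptual sketch. The sketch below is plain Python and only models the semantics described above (who can view, manage, or delegate); it is not the Azure RBAC implementation or SDK.

```python
"""Conceptual model of the three basic Azure RBAC roles described above.

Not the Azure implementation - just an illustration of the semantics:
Reader can view, Contributor can also manage, Owner can also delegate.
"""
from enum import Enum

class Action(Enum):
    VIEW = "view resources"
    MANAGE = "create/manage resources"
    DELEGATE = "grant access to others"

ROLE_PERMISSIONS = {
    "Reader":      {Action.VIEW},
    "Contributor": {Action.VIEW, Action.MANAGE},
    "Owner":       {Action.VIEW, Action.MANAGE, Action.DELEGATE},
}

def is_allowed(role: str, action: Action) -> bool:
    """Return True if the given basic role permits the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

if __name__ == "__main__":
    assert is_allowed("Owner", Action.DELEGATE)
    assert is_allowed("Contributor", Action.MANAGE)
    assert not is_allowed("Contributor", Action.DELEGATE)   # can't grant access
    assert not is_allowed("Reader", Action.MANAGE)          # view only
    print("Basic role semantics check out.")
```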
Some other capabilities for Azure Active Directory include:
Azure AD enables SSO to SaaS applications, regardless of where they are hosted. Some applications are federated with Azure AD, and others use password SSO. Federated applications can also support user provisioning and password vaulting.
Access to data in Azure Storage is controlled via authentication. Each storage account has a primary key (storage account key, or SAK) and a secondary secret key (the shared access signature, or SAS).
Azure AD provides Identity as a Service through federation by using Active Directory Federation Services, synchronization, and replication with on-premises directories.
Azure AD Multi-Factor Authentication is the multi-factor authentication service that requires users to verify sign-ins by using a mobile app, phone call, or text message. It can be used with Azure AD to help secure on-premises resources with the Azure Multi-Factor Authentication server, and also with custom applications and directories using the SDK.
Azure AD Domain Services lets you join Azure virtual machines to an Active Directory domain without deploying domain controllers. You can sign in to these virtual machines with your corporate Active Directory credentials and administer domain-joined virtual machines by using Group Policy to enforce security baselines on all your Azure virtual machines.
Azure Active Directory B2C provides a highly available global-identity management service for consumer-facing applications that scales to hundreds of millions of identities. It can be integrated across mobile and web platforms. Your consumers can sign in to all your applications through customizable experiences by using their existing social accounts or by creating credentials.
Isolation from Microsoft Administrators & Data Deletion
Microsoft takes strong measures to protect your data from inappropriate access or use by unauthorized persons. These operational processes and controls are backed by the Online Services Terms, which offer contractual commitments that govern access to your data.
- Microsoft engineers do not have default access to your data in the cloud. Instead, they are granted access, under management oversight, only when necessary. That access is carefully controlled and logged, and revoked when it is no longer needed.
- Microsoft may hire other companies to provide limited services on its behalf. Subcontractors may access customer data only to deliver the services we have hired them to provide, and they are prohibited from using it for any other purpose. Further, they are contractually bound to maintain the confidentiality of our customers' information.
Business services with audited certifications such as ISO/IEC 27001 are regularly verified by Microsoft and accredited audit firms, which perform sample audits to attest that access is only for legitimate business purposes. You can always access your own customer data at any time and for any reason.
If you delete any data, Microsoft Azure deletes the data, including any cached or backup copies. For in-scope services, that deletion will occur within 90 days after the end of the retention period. (In-scope services are defined in the Data Processing Terms section of our Online Services Terms.)
If a disk drive used for storage suffers a hardware failure, it is securely erased or destroyed before Microsoft returns it to the manufacturer for replacement or repair. The data on the drive is overwritten to ensure that the data cannot be recovered by any means.
Compute Isolation
Microsoft Azure provides various cloud-based computing services that include a wide selection of compute instances and services that can scale up and down automatically to meet the needs of your application or enterprise. These compute instances and services offer isolation at multiple levels to secure data without sacrificing the flexibility in configuration that customers demand.
Isolated Virtual Machine Sizes
Azure Compute offers virtual machine sizes that are Isolated to a specific hardware type and dedicated to a single customer. The Isolated sizes live and operate on a specific hardware generation and will be deprecated when that hardware generation is retired.
Isolated virtual machine sizes are best suited for workloads that require a high degree of isolation from other customers’ workloads for reasons that include meeting compliance and regulatory requirements. Utilizing an isolated size guarantees that your virtual machine will be the only one running on that specific server instance.
Additionally, as the Isolated size VMs are large, customers may choose to subdivide the resources of these VMs by using Azure support for nested virtual machines.
The current Isolated virtual machine offerings include the following (an availability-check sketch follows the note below):
- Standard_E64is_v3
- Standard_E64i_v3
- Standard_E80ids_v4
- Standard_E80is_v4
- Standard_M128ms
- Standard_GS5
- Standard_G5
- Standard_F72s_v2
Note
Isolated VM Sizes have a hardware limited lifespan. Please see below for details
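Which sizes a region actually offers can be checked programmatically. The sketch below uses the Azure SDK for Python (azure-identity and azure-mgmt-compute) to list the VM sizes available in a region and flag the isolated sizes named above; the subscription ID and region are placeholders, and this is a convenience sketch rather than part of the Azure documentation.

```python
"""Check which of the isolated VM sizes listed above a region offers.

Requires: pip install azure-identity azure-mgmt-compute
Subscription ID and region below are placeholders.
"""
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

ISOLATED_SIZES = {
    "Standard_E64is_v3", "Standard_E64i_v3",
    "Standard_E80ids_v4", "Standard_E80is_v4",
    "Standard_M128ms", "Standard_GS5", "Standard_G5", "Standard_F72s_v2",
}

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
REGION = "eastus"                            # placeholder

def main() -> None:
    client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
    # virtual_machine_sizes.list() enumerates every size the region exposes
    available = {size.name for size in client.virtual_machine_sizes.list(location=REGION)}
    for name in sorted(ISOLATED_SIZES):
        status = "available" if name in available else "not offered"
        print(f"{name}: {status} in {REGION}")

if __name__ == "__main__":
    main()
```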
Deprecation of Isolated VM Sizes
As Isolated VM sizes are hardware-bound sizes, Azure will provide reminders 12 months in advance of the official deprecation of the sizes. Azure will also offer an updated isolated size on the next hardware generation that customers can consider moving their workloads onto.
Size | Isolation Retirement Date |
---|---|
Standard_DS15_v2 ¹ | May 15, 2020 |
Standard_D15_v2 ¹ | May 15, 2020 |
¹ For details on the Standard_DS15_v2 and Standard_D15_v2 isolation retirement program, see the FAQ below
FAQ
Q: Is the size going to be retired, or only the 'isolation' feature?
A: If the virtual machine size does not have the 'i' subscript, then only the 'isolation' feature will be retired. If isolation is not needed, there is no action to be taken and the VM will continue to work as expected. Examples include Standard_DS15_v2, Standard_D15_v2, Standard_M128ms, etc. If the virtual machine size includes the 'i' subscript, then the size is going to be retired.
Q: Is there any downtime when my VM lands on non-isolated hardware?
A: If there is no need for isolation, no action is needed and there will be no downtime.
Q: Is there any cost delta for moving to a non-isolated virtual machine?
A: No
Q: When are the other isolated sizes going to retire?
A: We will provide reminders 12 months in advance of the official deprecation of the isolated size.
Q: I'm an Azure Service Fabric Customer relying on the Silver or Gold Durability Tiers. Does this change impact me?
A: No. The guarantees provided by Service Fabric's Durability Tiers will continue to function even after this change. If you require physical hardware isolation for other reasons, you may still need to take one of the actions described above.
Q: What are the milestones for D15_v2 or DS15_v2 isolation retirement?
A:
Date | Action |
---|---|
November 18, 2019 | Availability of D/DS15i_v2 (PAYG, 1-year RI) |
May 14, 2020 | Last day to buy D/DS15i_v2 1-year RI |
May 15, 2020 | D/DS15_v2 isolation guarantee removed |
May 15, 2021 | Retire D/DS15i_v2 (all customers except those who bought a 3-year RI of D/DS15_v2 before November 18, 2019) |
November 17, 2022 | Retire D/DS15i_v2 when 3-year RIs are done (for customers who bought a 3-year RI of D/DS15_v2 before November 18, 2019) |
Next steps
Customers can also choose to further subdivide the resources of these Isolated virtual machines by using Azure support for nested virtual machines.
Dedicated hosts
In addition to the isolated sizes described in the preceding section, Azure also offers dedicated hosts. Dedicated Host in Azure is a service that provides physical servers, dedicated to a single Azure subscription, that can host one or more virtual machines. Dedicated hosts provide hardware isolation at the physical server level. No other VMs will be placed on your hosts. Dedicated hosts are deployed in the same datacenters and share the same network and underlying storage infrastructure as other, non-isolated hosts. For more information, see the detailed overview of Azure dedicated hosts.
Hyper-V & Root OS Isolation Between Root VM & Guest VMs
Azure’s compute platform is based on machine virtualization—meaning that all customer code executes in a Hyper-V virtual machine. On each Azure node (or network endpoint), there is a Hypervisor that runs directly over the hardware and divides a node into a variable number of Guest Virtual Machines (VMs).
Each node also has one special Root VM, which runs the Host OS. A critical boundary is the isolation of the root VM from the guest VMs and the guest VMs from one another, managed by the hypervisor and the root OS. The hypervisor/root OS pairing leverages Microsoft's decades of operating system security experience, and more recent learning from Microsoft's Hyper-V, to provide strong isolation of guest VMs.
The Azure platform uses a virtualized environment. User instances operate as standalone virtual machines that do not have access to a physical host server.
The Azure hypervisor acts like a micro-kernel and passes all hardware access requests from guest virtual machines to the host for processing by using a shared-memory interface called VMBus. This prevents users from obtaining raw read/write/execute access to the system and mitigates the risk of sharing system resources.
Advanced VM placement algorithm & protection from side channel attacks
Any cross-VM attack involves two steps: placing an adversary-controlled VM on the same host as one of the victim VMs, and then breaching the isolation boundary to either steal sensitive victim information or affect its performance for greed or vandalism. Microsoft Azure provides protection at both steps by using an advanced VM placement algorithm and protection from all known side channel attacks including noisy neighbor VMs.
The Azure Fabric Controller
The Azure Fabric Controller is responsible for allocating infrastructure resources to tenant workloads, and it manages unidirectional communications from the host to virtual machines. The VM placement algorithm of the Azure Fabric Controller is highly sophisticated and nearly impossible to predict at the physical host level.
The Azure hypervisor enforces memory and process separation between virtual machines, and it securely routes network traffic to guest OS tenants. This eliminates the possibility of a side channel attack at the VM level.
In Azure, the root VM is special: it runs a hardened operating system called the root OS that hosts a fabric agent (FA). FAs are used in turn to manage guest agents (GA) within guest operating systems on customer VMs. FAs also manage storage nodes.
The collection of Azure hypervisor, root OS/FA, and customer VMs/GAs comprises a compute node. FAs are managed by a fabric controller (FC), which exists outside of compute and storage nodes (compute and storage clusters are managed by separate FCs). If a customer updates their application’s configuration file while it’s running, the FC communicates with the FA, which then contacts GAs, which notify the application of the configuration change. In the event of a hardware failure, the FC will automatically find available hardware and restart the VM there.
Communication from a Fabric Controller to an agent is unidirectional. The agent implements an SSL-protected service that only responds to requests from the controller. It cannot initiate connections to the controller or other privileged internal nodes. The FC treats all responses as if they were untrusted.
Isolation covers both the root VM from guest VMs and the guest VMs from one another. Compute nodes are also isolated from storage nodes for increased protection.
The hypervisor and the host OS provide network packet filters to help assure that untrusted virtual machines cannot generate spoofed traffic or receive traffic not addressed to them, direct traffic to protected infrastructure endpoints, or send/receive inappropriate broadcast traffic.
Additional Rules Configured by Fabric Controller Agent to Isolate VM
By default, all traffic is blocked when a virtual machine is created, and then the fabric controller agent configures the packet filter to add rules and exceptions to allow authorized traffic.
There are two categories of rules that are programmed:
- Machine configuration or infrastructure rules: By default, all communication is blocked. There are exceptions to allow a virtual machine to send and receive DHCP and DNS traffic. Virtual machines can also send traffic to the “public” internet and send traffic to other virtual machines within the same Azure Virtual Network and the OS activation server. The virtual machines’ list of allowed outgoing destinations does not include Azure router subnets, Azure management, and other Microsoft properties.
- Role configuration file: This defines the inbound Access Control Lists (ACLs) based on the tenant's service model.
VLAN Isolation
There are three VLANs in each cluster:
- The main VLAN – interconnects untrusted customer nodes
- The FC VLAN – contains trusted FCs and supporting systems
- The device VLAN – contains trusted network and other infrastructure devices
Communication is permitted from the FC VLAN to the main VLAN, but cannot be initiated from the main VLAN to the FC VLAN. Communication is also blocked from the main VLAN to the device VLAN. This assures that even if a node running customer code is compromised, it cannot attack nodes on either the FC or device VLANs.
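The directional rules just described can be summarised in a few lines of plain Python. The sketch below is only a conceptual restatement of the stated policy (the FC VLAN may initiate to the main VLAN, never the reverse, and the main VLAN may not initiate to the device VLAN); it is not how Azure implements VLAN filtering.

```python
"""Conceptual restatement of the cross-VLAN policy described above.

Only traffic initiated from the FC VLAN to the main VLAN is permitted;
the main VLAN cannot initiate to the FC or device VLANs.
"""
ALLOWED_INITIATIONS = {("FC", "main")}    # (source VLAN, destination VLAN)

def can_initiate(src_vlan: str, dst_vlan: str) -> bool:
    """Return True if src_vlan may initiate a connection to dst_vlan."""
    if src_vlan == dst_vlan:
        return True                        # traffic within a VLAN is out of scope here
    return (src_vlan, dst_vlan) in ALLOWED_INITIATIONS

assert can_initiate("FC", "main")          # permitted
assert not can_initiate("main", "FC")      # blocked: compromised nodes can't reach FCs
assert not can_initiate("main", "device")  # blocked: can't reach infrastructure devices
```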
Storage Isolation
Logical Isolation Between Compute and Storage
As part of its fundamental design, Microsoft Azure separates VM-based computation from storage. This separation enables computation and storage to scale independently, making it easier to provide multi-tenancy and isolation.
Therefore, Azure Storage runs on separate hardware with no network connectivity to Azure Compute except logically. This means that when a virtual disk is created, disk space is not allocated for its entire capacity. Instead, a table is created that maps addresses on the virtual disk to areas on the physical disk and that table is initially empty. The first time a customer writes data on the virtual disk, space on the physical disk is allocated, and a pointer to it is placed in the table.
Isolation Using Storage Access control
Azure Storage has a simple access control model. Each Azure subscription can create one or more Storage Accounts. Each Storage Account has a single secret key that is used to control access to all data in that Storage Account.
Access to Azure Storage data (including Tables) can be controlled through a SAS (Shared Access Signature) token, which grants scoped access. The SAS is created through a query template (URL), signed with the SAK (Storage Account Key). That signed URL can be given to another process (that is, delegated), which can then fill in the details of the query and make the request of the storage service. A SAS enables you to grant time-based access to clients without revealing the storage account’s secret key.
A SAS means that you can grant a client limited permissions to objects in your storage account, for a specified period of time and with a specified set of permissions, without having to share your account access keys.
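As an illustration of the delegation pattern described above, the sketch below uses the azure-storage-blob Python package (v12) to sign a read-only, time-limited SAS for a single blob with the storage account key; the account, container, and blob names are placeholders.

```python
"""Generate a time-limited, read-only SAS for one blob (azure-storage-blob v12).

Account, container, blob and key values are placeholders; the account key
itself never leaves this process - only the signed token is handed out.
"""
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

ACCOUNT = "mystorageaccount"        # placeholder
CONTAINER = "reports"               # placeholder
BLOB = "q1-summary.pdf"             # placeholder
ACCOUNT_KEY = "<storage-account-key>"

# Scope: read-only, this blob only, valid for one hour
sas_token = generate_blob_sas(
    account_name=ACCOUNT,
    container_name=CONTAINER,
    blob_name=BLOB,
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)

# The delegated client uses this URL directly; it never sees the account key.
print(f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{BLOB}?{sas_token}")
```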
IP Level Storage Isolation
You can establish firewalls and define an IP address range for your trusted clients. With an IP address range, only clients that have an IP address within the defined range can connect to Azure Storage.
IP storage data can be protected from unauthorized users via a networking mechanism that allocates a dedicated channel or tunnel of traffic to IP storage.
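The range check itself is simple to picture. The sketch below is a conceptual illustration, in plain Python, of the "only clients within the defined range can connect" rule using the standard ipaddress module; it is not how the Azure Storage firewall is configured (that is done on the storage account's network rules).

```python
"""Conceptual illustration of an allowed-IP-range check for storage clients.

Plain Python only - not the Azure Storage firewall API. The ranges below are
placeholders standing in for your trusted client networks.
"""
from ipaddress import ip_address, ip_network

TRUSTED_RANGES = [ip_network("203.0.113.0/24"), ip_network("198.51.100.16/28")]

def client_allowed(client_ip: str) -> bool:
    """Return True if the client IP falls inside any trusted range."""
    addr = ip_address(client_ip)
    return any(addr in net for net in TRUSTED_RANGES)

assert client_allowed("203.0.113.42")       # inside the first trusted range
assert not client_allowed("192.0.2.7")      # outside every range: connection refused
```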
Encryption
Azure offers the following types of Encryption to protect data:
- Encryption in transit
- Encryption at rest
Encryption in Transit
Encryption in transit is a mechanism of protecting data when it is transmitted across networks. With Azure Storage, you can secure data using:
- Transport-level encryption, such as HTTPS when you transfer data into or out of Azure Storage.
- Wire encryption, such as SMB 3.0 encryption for Azure File shares.
- Client-side encryption, to encrypt the data before it is transferred into storage and to decrypt the data after it is transferred out of storage (a minimal sketch of this idea follows this list).
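To make the client-side option concrete, the sketch below encrypts data locally before it would be uploaded and decrypts it after download, using the third-party cryptography package (Fernet). It illustrates the general idea only; it is not the Azure Storage client-side encryption feature, which integrates key management with Azure Key Vault.

```python
"""Minimal client-side encryption sketch: encrypt before upload, decrypt after.

Uses the 'cryptography' package (pip install cryptography). This is the general
idea only, not the Azure Storage SDK's client-side encryption feature.
"""
from cryptography.fernet import Fernet

# In practice the key would live in a key management service, not in memory here.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"sensitive payload destined for blob storage"

# Encrypt locally; only the ciphertext would be transferred into storage.
ciphertext = cipher.encrypt(plaintext)

# After downloading the blob, the client decrypts it locally with the same key.
assert cipher.decrypt(ciphertext) == plaintext
print(f"Stored {len(ciphertext)} encrypted bytes; plaintext never left the client.")
```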
Encryption at Rest
For many organizations, data encryption at rest is a mandatory step towards data privacy, compliance, and data sovereignty. There are three Azure features that provide encryption of data that is “at rest”:
- Storage Service Encryption allows you to request that the storage service automatically encrypt data when writing it to Azure Storage.
- Client-side Encryption also provides the feature of encryption at rest.
- Azure Disk Encryption allows you to encrypt the OS disks and data disks used by an IaaS virtual machine.
Azure Disk Encryption
Azure Disk Encryption for virtual machines (VMs) helps you address organizational security and compliance requirements by encrypting your VM disks (including boot and data disks) with keys and policies you control in Azure Key Vault.
The Disk Encryption solution for Windows is based on Microsoft BitLocker Drive Encryption, and the Linux solution is based on dm-crypt.
The solution supports the following scenarios for IaaS VMs when they are enabled in Microsoft Azure:
- Integration with Azure Key Vault
- Standard tier VMs: A, D, DS, G, GS, and so forth, series IaaS VMs
- Enabling encryption on Windows and Linux IaaS VMs
- Disabling encryption on OS and data drives for Windows IaaS VMs
- Disabling encryption on data drives for Linux IaaS VMs
- Enabling encryption on IaaS VMs that are running Windows client OS
- Enabling encryption on volumes with mount paths
- Enabling encryption on Linux VMs that are configured with disk striping (RAID) by using mdadm
- Enabling encryption on Linux VMs by using LVM (Logical Volume Manager) for data disks
- Enabling encryption on Windows VMs that are configured by using storage spaces
- All Azure public regions are supported
The solution does not support the following scenarios, features, and technology in the release:
- Basic tier IaaS VMs
- Disabling encryption on an OS drive for Linux IaaS VMs
- IaaS VMs that are created by using the classic VM creation method
- Integration with your on-premises Key Management Service
- Azure Files (shared file system), Network File System (NFS), dynamic volumes, and Windows VMs that are configured with software-based RAID systems
SQL Database Isolation
SQL Database is a relational database service in the Microsoft cloud based on the market-leading Microsoft SQL Server engine and capable of handling mission-critical workloads. SQL Database offers predictable data isolation at the account level, by geography/region, and based on networking, all with near-zero administration.
SQL Database Application Model
Microsoft SQL Database is a cloud-based relational database service built on SQL Server technologies. It provides a highly available, scalable, multi-tenant database service hosted by Microsoft in the cloud.
From an application perspective, SQL Database provides the following hierarchy, where each level has one-to-many containment of the levels below.
The account and subscription are Microsoft Azure platform concepts to associate billing and management.
Logical SQL servers and databases are SQL Database-specific concepts and are managed by using SQL Database-provided OData and TSQL interfaces or via the Azure portal.
Servers in SQL Database are not physical or VM instances; instead they are collections of databases, sharing management and security policies, which are stored in the so-called "logical master" database.
Logical master databases include:
- SQL logins used to connect to the server
- Firewall rules
- Billing and usage-related information for databases from the same server
Databases from the same server are not guaranteed to be on the same physical instance in the cluster; instead, applications must provide the target database name when connecting (a minimal connection sketch follows below).
From a customer perspective, a server is created in a geographical region, while the actual creation of the server happens in one of the clusters in that region.
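For example, a client connecting to a database on a logical server always names both the server's gateway endpoint and the target database, along the lines of the hedged pyodbc sketch below (server, database, and credentials are placeholders).

```python
"""Connect to one database on a logical SQL server (placeholders throughout).

Illustrates that the client addresses the server's gateway endpoint and must
name the target database explicitly. Requires pyodbc and an ODBC driver.
"""
import pyodbc

conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"   # logical server gateway (placeholder)
    "Database=mydatabase;"                             # target database must be named
    "Uid=myuser;Pwd=<password>;"
    "Encrypt=yes;TrustServerCertificate=no;"
)

with pyodbc.connect(conn_str) as conn:
    row = conn.cursor().execute("SELECT DB_NAME();").fetchone()
    print(f"Connected to database: {row[0]}")
```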
Isolation through Network Topology
When a server is created and its DNS name is registered, the DNS name points to the so-called "Gateway VIP" address in the specific data center where the server was placed.
Behind the VIP (virtual IP address), we have a collection of stateless gateway services. In general, gateways get involved when there is coordination needed between multiple data sources (master database, user database, etc.). Gateway services implement the following:
- TDS connection proxying. This includes locating the user database in the backend cluster, implementing the login sequence and then forwarding the TDS packets to the backend and back.
- Database management. This includes implementing a collection of workflows to do CREATE/ALTER/DROP database operations. The database operations can be invoked by either sniffing TDS packets or explicit OData APIs.
- CREATE/ALTER/DROP login/user operations
- Server management operations via OData API
The tier behind the gateways is called “back-end”. This is where all the data is stored in a highly available fashion. Each piece of data is said to belong to a “partition” or “failover unit”, each of them having at least three replicas. Replicas are stored and replicated by SQL Server engine and managed by a failover system often referred to as “fabric”.
Generally, the back-end system does not communicate outbound to other systems as a security precaution. This is reserved to the systems in the front-end (gateway) tier. The gateway tier machines have limited privileges on the back-end machines to minimize the attack surface as a defense-in-depth mechanism.
Isolation by Machine Function and Access
SQL Database is composed of services running on different machine functions. SQL Database is divided into "backend" Cloud Database and "front-end" (Gateway/Management) environments, with the general principle of traffic only going into the back-end and not out. The front-end environment can communicate with the outside world and other services and, in general, has only limited permissions in the back-end (enough to call the entry points it needs to invoke).
Networking Isolation
Azure deployments have multiple layers of network isolation, some native to the Azure platform itself and some customer-defined. Inbound from the Internet, Azure DDoS protection provides isolation against large-scale attacks against Azure. The next layer of isolation is customer-defined public IP addresses (endpoints), which are used to determine which traffic can pass through the cloud service to the virtual network. Native Azure virtual network isolation ensures complete isolation from all other networks and that traffic only flows through user-configured paths and methods. These paths and methods are the next layer, where NSGs, UDR, and network virtual appliances can be used to create isolation boundaries to protect application deployments in the protected network.
Traffic isolation: A virtual network is the traffic isolation boundary on the Azure platform. Virtual machines (VMs) in one virtual network cannot communicate directly to VMs in a different virtual network, even if both virtual networks are created by the same customer. Isolation is a critical property that ensures customer VMs and communication remains private within a virtual network.
Subnets offer an additional layer of isolation within a virtual network based on IP ranges. Using IP address ranges in the virtual network, you can divide a virtual network into multiple subnets for organization and security. VMs and PaaS role instances deployed to subnets (same or different) within a VNet can communicate with each other without any extra configuration. You can also configure network security groups (NSGs) to allow or deny network traffic to a VM instance based on rules configured in the access control list (ACL) of the NSG. NSGs can be associated with either subnets or individual VM instances within that subnet. When an NSG is associated with a subnet, the ACL rules apply to all the VM instances in that subnet.
Next Steps
Learn about Network Isolation Options for Machines in Windows Azure Virtual Networks. This includes the classic front-end and back-end scenario where machines in a particular back-end network or subnetwork may only allow certain clients or other computers to connect to a particular endpoint based on an allow list of IP addresses.
Learn about virtual machine isolation in Azure. Azure Compute offers virtual machine sizes that are isolated to a specific hardware type and dedicated to a single customer.