Explore your options to select the right VMs for your workloads with this video by Microsoft Mechanics.

On this episode of Azure Essentials, Matt McSpirit shares core compute and disk storage options for any workload you want to run in Azure.

If you’re new to Azure, you can shift your apps or workloads onto one or more virtual machines in Azure without rearchitecting them or writing new code. You can even deploy your workloads to Azure Dedicated Hosts, which provide single-tenant physical servers dedicated to your organization.

Azure Arc enables customers to reduce the complexity of managing disparate infrastructure scattered across on-premises and multicloud environments by bringing all of their resources into a single control plane.

It also lets customers bring cloud-native services to any infrastructure, anywhere. In this video, get a glimpse into the executive vision behind Azure Arc.

Learn more: https://azure.microsoft.com/en-us/services/azure-arc/

Migrating to the cloud requires some investigation into the resources you’ll need. Jay and Abel look at how to begin the lift-and-shift process by reviewing on-premises hosts and determining requirements.
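
The host review Jay and Abel walk through comes down to matching each on-premises machine to an Azure VM size that covers its CPU and memory needs. Below is a minimal Python sketch of that mapping; the inventory is hypothetical and the size table is only a small illustrative subset of real Azure sizes, so treat it as a sketch of the idea rather than a sizing tool (Azure Migrate performs this kind of assessment properly).

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    vcpus: int
    memory_gib: int

# Illustrative subset of general-purpose Azure VM sizes: name -> (vCPUs, memory in GiB).
AZURE_SIZES = {
    "Standard_D2s_v5": (2, 8),
    "Standard_D4s_v5": (4, 16),
    "Standard_D8s_v5": (8, 32),
}

def suggest_size(host: Host) -> str | None:
    """Return the smallest listed size with at least the host's vCPUs and memory."""
    candidates = [
        (vcpus, mem, size)
        for size, (vcpus, mem) in AZURE_SIZES.items()
        if vcpus >= host.vcpus and mem >= host.memory_gib
    ]
    return min(candidates)[2] if candidates else None

# Hypothetical on-premises inventory gathered during the review.
for host in [Host("web01", 2, 8), Host("db01", 8, 24), Host("batch01", 16, 64)]:
    print(host.name, "->", suggest_size(host) or "no listed size is large enough")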


Discover best practices for cloud resource management with Steven Murawski and Foteini Savvidou, using the Carnegie Mellon University Microsoft Learn module. Automating your cloud resource management can increase the productivity, sustainability, and scalability of your services.

In this session, Steven Murawski, Principal Cloud Advocate at Microsoft, and Foteini Savvidou, Microsoft Student Ambassador, explain the concept of Infrastructure-as-Code and discuss the advantages it offers over ordinary scripting.
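
The core idea the session contrasts with ordinary scripting: with Infrastructure-as-Code you declare the desired state of your environment as versionable text, and a tool works out which changes to apply, so reruns are safe. The toy Python sketch below illustrates that reconcile loop; the resource names and dictionaries are hypothetical stand-ins, not calls to any real Azure or IaC API.

# Desired state, declared as data -- this is what would live in version control.
desired = {
    "rg-demo": {"type": "resource_group", "location": "westeurope"},
    "stdemo01": {"type": "storage_account", "location": "westeurope"},
}

# Pretend snapshot of what already exists in the cloud.
current = {
    "rg-demo": {"type": "resource_group", "location": "westeurope"},
}

def reconcile(desired: dict, current: dict) -> None:
    """Create or update anything missing or drifted; remove what is no longer declared."""
    for name, spec in desired.items():
        if current.get(name) != spec:
            print(f"create/update {spec['type']} '{name}' in {spec['location']}")
    for name in current:
        if name not in desired:
            print(f"delete '{name}' (no longer declared)")

# Running this twice is safe: once the states match, nothing is changed.
reconcile(desired, current)

Real IaC tools such as ARM templates, Bicep, or Terraform apply the same declare-and-reconcile pattern against actual cloud APIs.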

Did you ever wonder how much further AI can scale?

In this session, Nidhi Chappell (Head of Product, Specialized Azure Compute at Microsoft) and Christopher Berner (Head of Compute at OpenAI) share their perspectives on how the Microsoft-OpenAI partnership is taking significant steps to remove the barriers to scaling AI.

Of specific interest is OpenAI’s new GPT-3 natural language processing model, which has 175 billion parameters.
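
To put 175 billion parameters in perspective, here is a quick back-of-the-envelope calculation, assuming 2 bytes per parameter (16-bit precision) and ignoring optimizer state and activations:

params = 175e9          # 175 billion parameters
bytes_per_param = 2     # 16-bit (fp16/bf16) storage; training needs considerably more
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB just to hold the weights")   # ~350 GB

That is several times the memory of a single data center GPU, which is why training and serving models at this scale depends on the kind of distributed, purpose-built infrastructure the session describes.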

The rapid evolution of the cloud to support massive computational models across HPC and AI workloads is shifting paradigms, giving customers options that were previously possible only with dedicated on-premises solutions or supercomputing centers.

Steve Scott, Technical Fellow and CVP Hardware Architecture at Microsoft Azure, shares his experiences from his first 5 months at Microsoft.

Learn more: https://azure.microsoft.com/en-us/solutions/high-performance-computing/