Azure Spot VMs are heavily discounted virtual machines that come with the risk of being evicted if demand for full-price capacity rises in the region.

Luckily, Spark is a resilient distributed system that can easily handle replacing worker nodes, so we're left with a very cost-effective way to provision clusters for lower-priority workloads!

In this video, Simon walks through the process of provisioning a cluster with Spot VM workers, how to reach the lower-level configuration, and some of the gotchas to be aware of.
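As a rough illustration of what that lower-level configuration looks like, here is a minimal sketch of creating a cluster with Spot VM workers through the Databricks Clusters REST API (2.0), assuming an Azure Databricks workspace; the workspace URL, token, runtime version and VM size below are placeholders, not values from the video.

import requests

WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # hypothetical workspace
TOKEN = "<personal-access-token>"

cluster_spec = {
    "cluster_name": "spot-worker-cluster",
    "spark_version": "9.1.x-scala2.12",   # example runtime version
    "node_type_id": "Standard_DS3_v2",    # example VM size
    "num_workers": 4,
    "azure_attributes": {
        # Keep the driver on an on-demand VM so an eviction never takes it out.
        "first_on_demand": 1,
        # Workers run on Spot capacity, falling back to on-demand if none is available.
        "availability": "SPOT_WITH_FALLBACK_AZURE",
        # -1 means pay up to the on-demand price rather than a fixed bid.
        "spot_bid_max_price": -1,
    },
}

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=cluster_spec,
)
resp.raise_for_status()
print("Created cluster:", resp.json()["cluster_id"])

The same azure_attributes block can also be set from the cluster UI's JSON editor rather than the REST API; the key point is keeping the driver on-demand while the workers ride on Spot capacity.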

In this episode with Gaston Cruz, he'll show you the options for processing Azure Analysis Services models (the semantic layer) that connect to an Azure SQL Database as a data source (using SSDT). He then builds an architecture that uses a Service Principal account to process the model (database, tables and partitions) automatically, deploying an Azure Logic App and calling it from Azure Data Factory to trigger the refresh.
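For a sense of what the Logic App ends up calling, here is a minimal sketch of triggering a refresh through the AAS REST API while authenticating as a Service Principal; the tenant, server, region, model and credential values are placeholders, and in the episode's architecture this call sits behind a Logic App invoked from Azure Data Factory rather than a standalone script.

import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<service-principal-app-id>"
CLIENT_SECRET = "<service-principal-secret>"
REGION = "westus"            # AAS server region
SERVER = "myaasserver"       # hypothetical server name
MODEL = "AdventureWorks"     # hypothetical model name

# 1. Acquire an Azure AD token for the Azure Analysis Services resource
#    using the client-credentials (Service Principal) flow.
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "resource": "https://*.asazure.windows.net",
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# 2. Request a full refresh of the model; "Objects" could scope the refresh
#    down to specific tables or partitions instead of the whole database.
refresh_resp = requests.post(
    f"https://{REGION}.asazure.windows.net/servers/{SERVER}/models/{MODEL}/refreshes",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"Type": "Full", "CommitMode": "transactional", "MaxParallelism": 2},
)
refresh_resp.raise_for_status()
# The refresh runs asynchronously; the Location header points at its status URL.
print("Refresh status URL:", refresh_resp.headers.get("Location"))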

Resources:
Gaston’s YouTube Channel
AAS REST API official doc