Good news for folks looking to learn about the latest AI development techniques: Nvidia is now allowing the general public to access the online workshops it provides through its Deep Learning Institute (DLI).

The GPU giant today announced that selected workshops in the DLI catalog will be open to everybody. These workshops were previously available only to companies that wanted specialized training for their in-house developers, or to folks who had attended the company’s GPU Technology Conferences.

Two of the open courses will take place next month. “Fundamentals of Accelerated Computing with CUDA Python” explores developing parallel workloads with CUDA and NumPy and costs $500, while “Applications of AI for Predictive Maintenance” explores technologies like XGBoost, LSTM, Keras, and TensorFlow and costs $700. Certificates are available for those who complete both workshops.
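To give a flavor of the CUDA Python material, here is a minimal sketch, not taken from the course itself, of the kind of exercise it covers: a Numba CUDA kernel that adds two NumPy arrays on the GPU. The kernel name, array size, and launch configuration are illustrative choices.

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # absolute index of this thread
    if i < out.size:          # guard against threads past the array end
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

# One thread per element; round the block count up to cover all of n.
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block

# Numba copies the NumPy arrays to the GPU and back automatically.
vector_add[blocks, threads_per_block](a, b, out)
assert np.allclose(out, a + b)
```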

The prices seem a little steep to me, but the workshops may be worth checking out, since they include GPU-accelerated cloud compute for development.

Deep learning has certainly come a long way over the past few years, and working with images has gotten simpler thanks to advances in data engineering techniques and tactics.

Delta Lake has been amazing at providing a structured, transactional table layer on top of object storage, but what about handling images at scale?
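For a concrete sense of what that could look like, here is a minimal sketch, assuming a Spark session with the Delta Lake package available, that loads raw image files into a Delta table via Spark’s binaryFile reader; the bucket paths are hypothetical.

```python
from pyspark.sql import SparkSession

# Assumes the Delta Lake package is on Spark's classpath.
spark = (
    SparkSession.builder
    .appName("images-to-delta")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# binaryFile reads each file as a row of (path, modificationTime, length, content).
images = (
    spark.read.format("binaryFile")
    .option("pathGlobFilter", "*.jpg")
    .load("s3://my-bucket/raw-images/")   # hypothetical source bucket
)

# Persist the images as a Delta table, gaining ACID transactions on object storage.
images.write.format("delta").mode("append").save("s3://my-bucket/delta/images")
```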

Would you like to know how to gain a 45x improvement in your image processing pipeline?

Watch this video to find out how.

Traditional deep learning frameworks such as TensorFlow and PyTorch are built around training a single deep neural network (DNN) model, iteratively computing that model’s weights.
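That single-model loop looks roughly like the following PyTorch sketch; the architecture, data, and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

# One fixed architecture whose weights get updated iteratively.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(128, 32)            # placeholder batch of features
y = torch.randint(0, 10, (128,))    # placeholder labels

for step in range(100):             # iterative weight updates
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```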

Designing a DNN model for a task remains an experimental science, typically carried out as a process of model exploration. Retrofitting such exploratory training onto the single-model training process that current deep learning frameworks support is unintuitive, cumbersome, and inefficient.
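To see why, consider the naive retrofit: each candidate architecture gets its own from-scratch run of the training loop above, with nothing shared between trials. The search space of hidden-layer widths below is a hypothetical example.

```python
import torch
import torch.nn as nn

def train(model, x, y, steps=100):
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    return loss.item()

x = torch.randn(128, 32)
y = torch.randint(0, 10, (128,))

# Every trial repeats the full training loop, so cost grows linearly with
# the number of candidates; this is the inefficiency NAS tooling aims to remove.
for hidden in [16, 32, 64, 128]:
    candidate = nn.Sequential(nn.Linear(32, hidden), nn.ReLU(),
                              nn.Linear(hidden, 10))
    print(hidden, train(candidate, x, y))
```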

In this webinar, Microsoft Research Asia Senior Researcher Quanlu Zhang and Principal Program Manager Scarlett Li will analyze these challenges within the context of Neural Architecture Search (NAS).