Microsoft Research presents this talk. Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general-domain corpora, such as newswire and web text. Biomedical text is very different from general-domain text, yet biomedical NLP has been relatively underexplored.

A prevailing assumption is that even domain-specific pretraining can benefit from starting with general-domain language models.

In this webinar, Microsoft researchers Hoifung Poon, Senior Director of Biomedical NLP, and Jianfeng Gao, Distinguished Scientist, will challenge this assumption by showing that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models.

You will begin by understanding how biomedical text differs from general-domain text and how biomedical NLP poses substantial challenges that are not present in mainstream NLP. You will also learn about the two paradigms for domain-specific language model pretraining and see how pretraining from scratch significantly outperforms mixed-domain pretraining on a wide range of biomedical NLP tasks. Finally, you will learn about BLURB, our comprehensive benchmark and leaderboard created specifically for biomedical NLP, and see how our biomedical language model, PubMedBERT, sets a new state of the art.
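
As a taste of what working with PubMedBERT looks like, here is a minimal sketch that loads the model through the Hugging Face transformers library and probes its in-domain vocabulary with a fill-mask query; the model identifier is the one Microsoft published on the Hugging Face hub, and the example sentence is illustrative.

```python
# Minimal sketch: load PubMedBERT and spot-check its biomedical
# vocabulary with a fill-mask probe. The model ID is assumed to be
# the one Microsoft published on the Hugging Face hub.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_name = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Ask the model to fill in a masked biomedical term.
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for candidate in fill("The patient was treated with [MASK] for hypertension."):
    print(candidate["token_str"], round(candidate["score"], 3))
```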

In this session you’ll learn how Tailwind Traders took their support ticket text and audio files, converted them, and extracted insight metadata from each ticket using Azure Cognitive Services Text Analytics and Speech-to-Text.

They then aggregated their findings to inform their product backlog and implement improvements.

Tailwind Traders have a great website and application for customers and partners. However, they are seeing an increasing number of support tickets about the usage of these offerings. They want to store, analyze and extract insights from their text and audio data to make better product backlog decisions and reduce their support ticket volume.
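
A hypothetical sketch of the text-side insight extraction, assuming the azure-ai-textanalytics Python SDK; the endpoint, key, and ticket texts below are placeholders, not Tailwind Traders’ actual data.

```python
# Hypothetical sketch of the ticket-insight step with the Azure
# Text Analytics SDK (azure-ai-textanalytics). Endpoint, key, and
# ticket texts are placeholders for illustration.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

tickets = [
    "The checkout page times out when I apply a partner discount code.",
    "Love the new dashboard, but exporting reports to CSV fails.",
]

# Extract sentiment and key phrases as insight metadata per ticket.
for sentiment, phrases in zip(
    client.analyze_sentiment(tickets), client.extract_key_phrases(tickets)
):
    print(sentiment.sentiment, phrases.key_phrases)
```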

NLP is a key component in many data science systems that must understand or reason about text. This hands-on tutorial uses the open-source Spark NLP library to explore advanced NLP in Python.

Spark NLP provides state-of-the-art accuracy, speed, and scalability for language understanding by delivering production-grade implementations of some of the most recent research in applied deep learning. It’s the most widely used NLP library in the enterprise today.

You’ll edit and extend a set of executable Python notebooks by implementing these common NLP tasks: named entity recognition, sentiment analysis, spell checking and correction, document classification, and multilingual and multi-domain support. The discussion of each NLP task includes the latest advances in deep learning used to tackle it, including the prebuilt use of BERT embeddings within Spark NLP, the use of tuned embeddings, and “post-BERT” research results like XLNet, ALBERT, and RoBERTa. Spark NLP builds on the Apache Spark and TensorFlow ecosystems, and as such it’s the only open-source NLP library that can natively scale to use any Spark cluster, as well as take advantage of the latest processors from Intel and Nvidia. You’ll run the notebooks locally on your laptop, but we’ll walk through a complete case study and benchmarks showing how to scale an NLP pipeline for both training and inference.
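
To give a flavor of the notebooks, here is a minimal sketch of one of the tasks (named entity recognition) using a Spark NLP pretrained pipeline; the pipeline name is a published community pipeline and not necessarily the exact one used in the tutorial.

```python
# Minimal sketch: named entity recognition with a Spark NLP
# pretrained pipeline. "recognize_entities_dl" is a published
# pipeline name, used here for illustration.
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

# Start a local Spark session configured for Spark NLP.
spark = sparknlp.start()

pipeline = PretrainedPipeline("recognize_entities_dl", lang="en")
result = pipeline.annotate("Google was founded by Larry Page and Sergey Brin.")
print(result["entities"])  # e.g. ['Google', 'Larry Page', 'Sergey Brin']
```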

Text Analytics for health is a preview feature of Text Analytics that enables developers to process and extract insights from unstructured clinical and biomedical text.

Through a single API call, Text Analytics can extract critical and relevant medical information, using NLP techniques such as named entity recognition, entity linking, relation extraction and entity negation, without the need for time-intensive, manual development of custom models.
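
A hedged sketch of that single call, assuming the azure-ai-textanalytics Python SDK’s long-running healthcare operation; the endpoint, key, and clinical note are placeholders.

```python
# Sketch: one call to the health feature via the Azure SDK's
# long-running healthcare-entities operation (available in
# azure-ai-textanalytics preview releases). Endpoint/key are
# placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

docs = ["Patient was prescribed 100mg ibuprofen, taken twice daily."]
poller = client.begin_analyze_healthcare_entities(docs)

# Print each recognized medical entity with its category and score.
for doc in poller.result():
    if doc.is_error:
        continue
    for entity in doc.entities:
        print(entity.text, entity.category, entity.confidence_score)
```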

Did you ever wonder how much further AI can scale?

In this session, Nidhi Chappell (Head of Product, Specialized Azure Compute at Microsoft) and Christopher Berner (Head of Compute at OpenAI) share their perspectives and insight about how the Microsoft-OpenAI partnership is taking significant steps to eliminate the barriers of scale to AI processes.

Of specific interest is OpenAI’s new GPT-3 natural language processing model, which, at 175 billion parameters, required compute at an unprecedented scale to train.
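
Purely for illustration, this is roughly how a GPT-3-class model was consumed through the original OpenAI Python client of that era; the engine name, prompt, and key are placeholders, not details from the session.

```python
# Illustrative only: a completion request with the original OpenAI
# Python client (the openai.Completion API of that era). Engine name
# and API key are placeholders.
import openai

openai.api_key = "<your-api-key>"

response = openai.Completion.create(
    engine="davinci",  # a GPT-3 base engine
    prompt="Summarize why model scale matters for language tasks:",
    max_tokens=60,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```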

The next version of QnA Maker advances several core capabilities, such as improved relevance and precise answering, by introducing state-of-the-art deep learning technologies.

It also simplifies resource management by reducing the number of resources deployed.

The latest version will enable customers with strict geographic requirements to deploy the service end-to-end in the region of their choice.
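
For context, querying a published QnA Maker knowledge base is a single REST call; the sketch below assumes the classic generateAnswer endpoint, with host, knowledge-base ID, and endpoint key as placeholders.

```python
# Minimal sketch: query a published QnA Maker knowledge base via its
# generateAnswer REST endpoint. Host, KB ID, and endpoint key are
# placeholders.
import requests

host = "https://<your-resource>.azurewebsites.net"
kb_id = "<knowledge-base-id>"
endpoint_key = "<endpoint-key>"

resp = requests.post(
    f"{host}/qnamaker/knowledgebases/{kb_id}/generateAnswer",
    headers={"Authorization": f"EndpointKey {endpoint_key}"},
    json={"question": "How do I reset my password?", "top": 1},
)
for answer in resp.json()["answers"]:
    print(answer["score"], answer["answer"])
```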
