Abstract: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7% (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.
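To make the abstract's recipe concrete, here is a minimal sketch of fine-tuning pre-trained BERT with a single task-specific output layer. It uses the Hugging Face transformers library purely for illustration (the paper's official code is a separate TensorFlow repository):

```python
# Sketch only: pre-trained BERT plus one new classification layer,
# fine-tuned end to end, as the abstract describes.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # the classification head is the one added layer
)

inputs = tokenizer("BERT is conceptually simple.", return_tensors="pt")
labels = torch.tensor([1])  # toy label for a single example

# Fine-tuning means minimizing this loss over a labeled task dataset.
loss = model(**inputs, labels=labels).loss
loss.backward()
```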
Jon Wood has just posted this video on how to use ML.NET to remove stop words in text data.
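The video works in ML.NET, but the technique itself is language-agnostic. As a rough Python analogue (using NLTK, our choice, not the video's), stop-word removal boils down to filtering tokens against a fixed word list:

```python
import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)  # one-time download of the word lists

STOP_WORDS = set(stopwords.words("english"))

def remove_stop_words(tokens):
    """Keep only tokens that are not common English function words."""
    return [t for t in tokens if t.lower() not in STOP_WORDS]

print(remove_stop_words(["this", "is", "a", "short", "example"]))
# ['short', 'example']
```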
Jon Wood has another great video on ML.NET, this time focusing on tokenizing text data as part of an NLP pipeline.
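Again the video uses ML.NET's built-in text transforms; as a quick Python sketch of the same step, NLTK's word tokenizer splits raw text into the tokens that downstream stages consume:

```python
import nltk
nltk.download("punkt", quiet=True)  # sentence/word tokenizer models
from nltk.tokenize import word_tokenize

text = "Tokenizing text data is the first step in most NLP pipelines."
print(word_tokenize(text))
# ['Tokenizing', 'text', 'data', 'is', 'the', 'first', 'step', ...]
```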
Regina Barzilay is a professor at MIT and a world-class researcher in natural language processing and in applications of deep learning to chemistry and oncology, that is, the use of deep learning for the early diagnosis, prevention, and treatment of cancer.
She has also been recognized for her teaching of several successful AI-related courses at MIT, including the popular Introduction to Machine Learning course. This conversation is part of the Artificial Intelligence podcast run by Lex Fridman.
In this session from Build 2019, get introduced to new features in Language Understanding (LUIS) and QnA Maker that simplify creating intelligent NLP models without any prior AI experience. Gain insights on tips and best practices to make your bot truly intelligent, conversational, and personal.
In this video, Lex Fridman interviews Oriol Vinyals, a senior research scientist at Google DeepMind.
From the video description:
Before that he was at Google Brain and Berkeley. His research has been cited over 39,000 times. He is one of the most brilliant and impactful minds in the field of deep learning. He is behind some of the biggest papers and ideas in AI, including sequence to sequence learning, audio generation, image captioning, neural machine translation, and reinforcement learning. He is a co-lead (with David Silver) of the AlphaStar project, creating an agent that defeated a top professional at the game of StarCraft.
Two Minute Papers explores the project that takes NLP to the next level: GPT-2. It's almost too good – so good, in fact, that OpenAI has decided not to release it.
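For readers who want to experiment themselves, the smaller GPT-2 checkpoint that OpenAI did publish can be sampled in a few lines. This sketch uses the Hugging Face transformers wrapper (our assumption for illustration; it is not something the video covers):

```python
# Sample a continuation from the small public "gpt2" checkpoint.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("In a shocking finding,", return_tensors="pt")
output = model.generate(input_ids, max_length=40, do_sample=True, top_k=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```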
In this video, Noelle LaCharite explores how quick and easy it is to create a chatbot using QnA Maker that can answer top-of-mind questions for employees or customers.
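Once a QnA Maker knowledge base is published, querying it comes down to a single REST call. A hedged Python sketch, where the host, knowledge-base ID, and endpoint key are placeholders standing in for the values the Azure portal gives you after publishing:

```python
import requests

host = "https://YOUR-RESOURCE.azurewebsites.net"  # placeholder resource host
kb_id = "YOUR-KB-ID"                              # placeholder knowledge-base ID
endpoint_key = "YOUR-ENDPOINT-KEY"                # placeholder endpoint key

resp = requests.post(
    f"{host}/qnamaker/knowledgebases/{kb_id}/generateAnswer",
    headers={"Authorization": f"EndpointKey {endpoint_key}"},
    json={"question": "What are our office hours?"},
)
for answer in resp.json()["answers"]:
    print(answer["score"], answer["answer"])
```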