Quantum London partnered with Sia Partners to offer you two detailed, practical and actionable discussions.

In this first session, held on 7th April, we spoke with Martin Hofmann (ex Group CIO of Volkswagen, where he was the Quantum Program Lead; he now works with strategic customers at Salesforce), Markus Pflitsch (CEO and co-founder of Terra Quantum, a CERN quantum physicist and a senior financial executive) and Karan Pinto (Senior Venture Builder focused on Quantum Technologies at Sia Partners).

The USB Rubber Ducky is a famous hacker tool that allows quick exploitation of a target computer, provided you know what script you want to run in advance.

SecurityFWD will try out a new tool called the Wi-Fi Duck, which lets a hacker connect over Wi-Fi and run payloads from blocks away.

This lets hackers run USB Rubber Ducky-style scripts without needing to know in advance what kind of computer they're hacking.
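
For readers unfamiliar with the format, here is a minimal sketch of the kind of payload a Rubber Ducky (or Wi-Fi Duck) "types" into a target machine. The Ducky Script commands shown (REM, DELAY, GUI, STRING, ENTER) are standard, but this particular payload is a harmless illustration, held in a Python string rather than flashed to a device.

```python
# A minimal, harmless Ducky Script payload, shown as a Python string for
# illustration. On a real device, each line becomes keystrokes injected
# into the target machine (this example targets Windows).
PAYLOAD = "\n".join([
    "REM open the Run dialog and launch Notepad",
    "DELAY 1000",      # give the OS a second to register the "keyboard"
    "GUI r",           # press Windows key + R
    "DELAY 500",
    "STRING notepad",  # type the program name
    "ENTER",           # and run it
])
print(PAYLOAD)
```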

If you’re looking for more evidence that quantum computing is getting real, look no further than the stock market.

Peter Chapman, chief executive officer and president of IonQ Inc., discusses taking his company public via a special purpose acquisition company (SPAC) deal with dMY Technology Group Inc. that gives the combined entity a pro-forma implied market capitalization of about $2 billion.

He also shares how soon quantum computing will become mainstream and what real-life issues it can solve. Chapman speaks to Emily Chang on “Bloomberg Technology.”

Documents contain invaluable information powering core business processes. Extracting information from these documents with minimum manual intervention helps bolster organizational efficiency and productivity. As more and more processes and workflows get automated, the need for new features to help extract text and structures increases.
The new Form Recognizer capabilities announced at Ignite add prebuilt models for identity documents and invoices, plus support for 73 new languages.
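
As a rough sketch of what this looks like in practice, the snippet below calls the prebuilt invoice model via the azure-ai-formrecognizer Python SDK. The endpoint, key, and document URL are placeholders to replace with your own resource values, and a runnable version needs valid credentials.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import FormRecognizerClient

# Placeholders: substitute your own Form Recognizer resource values.
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-api-key>"

client = FormRecognizerClient(endpoint, AzureKeyCredential(key))

# Run the prebuilt invoice model against a publicly reachable document URL.
poller = client.begin_recognize_invoices_from_url(
    "https://example.com/sample-invoice.pdf"  # placeholder URL
)
for invoice in poller.result():
    for name, field in invoice.fields.items():
        print(f"{name}: {field.value} (confidence {field.confidence:.2f})")
```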

Sustaining growth in storage and computational needs is increasingly challenging thanks to those pesky laws of physics.

For over a decade, exponentially more information has been produced year after year, while data storage solutions are pressed to keep up. Soon, current solutions will be unable to keep pace with the new information that needs storing. Computing is on a similar trajectory, with new needs emerging in search and other domains that require more efficient systems. Innovative methods are necessary to meet future demands, and DNA offers an opportunity at the molecular level for ultra-dense, durable, and sustainable solutions in these areas.

In this webinar, join Microsoft researcher Karin Strauss in exploring the role of biotechnology and synthetic DNA in reaching this goal. Although we have yet to achieve scalable, general-purpose molecular computation, there are areas of IT in which a molecular approach shows growing promise. These areas include storage as well as computation.

Learn how molecules, specifically synthetic DNA, can store digital data and perform certain types of special-purpose computation.
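
To make the storage idea concrete, here is a toy sketch of one common illustration: mapping each two-bit pair of a file to one of DNA's four bases. This is a deliberate simplification, not the encoding used in any production system; real DNA storage pipelines add error-correcting codes and avoid sequences (such as long homopolymer runs) that are hard to synthesize and read back.

```python
# Toy encoding: two bits per nucleotide. Real pipelines are far more
# elaborate, but the density intuition carries over.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA strand, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand produced by encode()."""
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hello")
assert decode(strand) == b"hello"
print(strand)  # CGGACGCCCGTACGTACGTT
```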

Microsoft Research presents this talk. Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general-domain corpora, such as newswire and web text. Biomedical text is very different from general-domain text, yet biomedical NLP has been relatively underexplored.

A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models.

In this webinar, Microsoft researchers Hoifung Poon, Senior Director of Biomedical NLP, and Jianfeng Gao, Distinguished Scientist, will challenge this assumption by showing that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models.

You will begin by understanding how biomedical text differs from general-domain text and how biomedical NLP poses substantial challenges not present in mainstream NLP. You will also learn about the two paradigms for domain-specific language model pretraining and see how pretraining from scratch significantly outperforms mixed-domain pretraining on a wide range of biomedical NLP tasks. Finally, find out about BLURB, our comprehensive benchmark and leaderboard created specifically for biomedical NLP, and see how our biomedical language model, PubMedBERT, sets a new state of the art.
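
One quick way to see the vocabulary effect behind pretraining from scratch is to compare how a general-domain BERT tokenizer and PubMedBERT's in-domain tokenizer split a biomedical term. The sketch below uses the Hugging Face transformers library; the PubMedBERT model ID is the one Microsoft published on the Hugging Face Hub, and should be treated as an assumption if it has since been renamed.

```python
from transformers import AutoTokenizer

# Model IDs as published on the Hugging Face Hub (assumed still current).
general = AutoTokenizer.from_pretrained("bert-base-uncased")
biomed = AutoTokenizer.from_pretrained(
    "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract"
)

term = "acetyltransferase"
# A general-domain vocabulary tends to shatter biomedical terms into many
# word pieces, while an in-domain vocabulary keeps them largely intact.
print(general.tokenize(term))
print(biomed.tokenize(term))
```

This vocabulary difference is one of the concrete advantages the speakers attribute to pretraining on biomedical text from scratch rather than continuing from a general-domain checkpoint.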