Here’s an interesting use of IoT and AI in agri-tech.

A Turkish researcher has developed technology that uses data from a visible light sensor to identify the ripeness of produce. The project aims to detect ripeness in fruit and vegetables by monitoring pigment changes, Hackaday reports. Rather than use a camera, the project relies on data from an AS7341 visible light sensor, which is better suited to capturing accurate spectral data.
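To make the idea concrete, here is a minimal sketch of how spectral channel readings might be turned into a ripeness signal. The channel center wavelengths match the AS7341 datasheet (F1–F8, 415–680 nm), but the red-to-green ratio heuristic and the function itself are hypothetical illustrations, not the researcher's actual model:

```python
# Illustrative sketch: estimating ripeness from AS7341-style spectral
# channel counts. Channel wavelengths follow the AS7341 datasheet
# (F1-F8, 415-680 nm); the ripeness heuristic is hypothetical.

AS7341_CHANNELS_NM = (415, 445, 480, 515, 555, 590, 630, 680)

def ripeness_index(counts):
    """Ratio of red (630/680 nm) to green (515/555 nm) reflectance.

    As chlorophyll breaks down and red/orange pigments accumulate,
    reflected red light rises relative to green, so this index grows
    as produce ripens (assumed heuristic, for illustration only).
    """
    if len(counts) != len(AS7341_CHANNELS_NM):
        raise ValueError("expected one reading per AS7341 channel")
    by_nm = dict(zip(AS7341_CHANNELS_NM, counts))
    red = by_nm[630] + by_nm[680]
    green = by_nm[515] + by_nm[555]
    return red / green

# Simulated raw sensor counts (made up for the example):
unripe = [120, 150, 300, 900, 1000, 400, 250, 200]   # green-dominant
ripe   = [100, 120, 200, 300, 350, 700, 950, 900]    # red-dominant

print(ripeness_index(unripe) < ripeness_index(ripe))  # True
```

A real pipeline would calibrate against the sensor's clear channel and integrate readings over time, but the core signal is this kind of pigment-driven shift in the spectrum.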

Jamie Shotton takes a closer look at collaborative research engagements with Microsoft via the Swiss Joint Research Center, the Mixed Reality & AI Zurich Lab, the Mixed Reality & AI Cambridge Lab, and the Inria Joint Center, involving students, their academic and Microsoft supervisors, and the wider research community.

The event continued in the tradition of the annual Swiss JRC Workshops. PhD students and postdocs presented project updates and discussed their research with their supervisors and other attendees.

Putting together a demo or a simple proof of concept for a Vision AI at the edge project has become fairly straightforward. But taking that project to pilot and then to production can be daunting.

Mahesh Yadav joins Olivier on this new episode to introduce the open-source project VisionOnEdge, which gives you everything you need to get your project to a production-ready state quickly.

Learn more by reading Mahesh’s blog post at

Yannic Kilcher covers a paper in which Geoffrey Hinton describes GLOM, a computer vision model that combines transformers, neural fields, contrastive learning, capsule networks, denoising autoencoders, and RNNs.

GLOM decomposes an image into a parse tree of objects and their parts. However, unlike previous systems, the parse tree is constructed dynamically and differently for each input, without changing the underlying neural network. This is done by a multi-step consensus algorithm that runs over different levels of abstraction at each location of an image simultaneously. GLOM is just an idea for now but suggests a radically new approach to AI visual scene understanding.
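The multi-step consensus described above can be sketched roughly as follows. This is an illustrative numpy toy, not the paper's algorithm: GLOM's learned bottom-up and top-down neural networks are replaced here by identity stand-ins, so only the update structure is shown:

```python
import numpy as np

# Conceptual sketch of GLOM-style consensus: each image location holds
# one embedding per abstraction level, and every step averages the
# previous state, bottom-up and top-down predictions (identity
# stand-ins here; learned networks in the paper), and an
# attention-weighted average over other locations at the same level.

rng = np.random.default_rng(0)
LOCS, LEVELS, DIM = 6, 4, 8           # locations, abstraction levels, embedding size
emb = rng.normal(scale=0.3, size=(LOCS, LEVELS, DIM))
init_spread = emb.std(axis=0).mean()  # disagreement across locations

def attention_avg(level_embs):
    """Same-level consensus: each location attends to locations with
    similar embeddings (softmax over dot products), so islands of
    agreement can form."""
    scores = level_embs @ level_embs.T
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ level_embs

for _ in range(30):                   # consensus iterations
    new = emb.copy()
    for lv in range(LEVELS):
        parts = [emb[:, lv]]                     # previous state
        if lv > 0:
            parts.append(emb[:, lv - 1])         # bottom-up (identity stand-in)
        if lv < LEVELS - 1:
            parts.append(emb[:, lv + 1])         # top-down (identity stand-in)
        parts.append(attention_avg(emb[:, lv]))  # same-level attention
        new[:, lv] = np.mean(parts, axis=0)
    emb = new

final_spread = emb.std(axis=0).mean()
# Locations drift toward shared representations as consensus builds,
# so final_spread ends up below init_spread.
```

In the real proposal, which embeddings agree at which level is what implicitly encodes the parse tree: identical vectors at a high level across several locations mean those locations belong to the same object.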