Join us at the Montreal AI Symposium, 2020!

This year, Whale Seeker is going to MAIS 2020! Well, not ‘going’ in the traditional sense; we’ll be presenting online, but it’s an achievement nonetheless! Every year, the Montreal Artificial Intelligence Symposium showcases the fantastic tech achievements, progress and new research of the Montreal AI community. In this exciting and informative day-long event, both academic and industrial participants are welcome to submit their work and present it through keynote addresses, contributed talks and posters. With additional time set aside for networking and socializing, the Symposium seeks to build strong connections between researchers within the Greater Montreal area. The works presented range from cutting-edge theoretical research on deep learning techniques to real-life applications of these approaches to problems across industry domains. It’s not uncommon to see the heavyweights of AI there too: folks like Yoshua Bengio, Doina Precup, Aaron Courville and Joëlle Pineau, alongside their students, collaborators and research groups, presenting their work and boosting the Montreal AI community.

The Symposium is free to attend and will be hosted online. Our work, entitled “Zoom Out: Using Thumbnails for Weakly Supervised Land Detection in Whale Population Monitoring”, will be presented at the poster session at the end of the day, but we encourage you to tune in to the rest of the presentations to get the most out of the event.

The aim of our submitted project was to automate land detection and area measurement, speeding up and lowering the costs of the reporting process in whale population monitoring. The poster tackles two goals: removing the need to process aerial images at their enormous native resolution, and reducing the burden of hand-segmenting areas of land to provide as labels for deep learning. For the first goal, as the title suggests, we ‘zoomed out’ (i.e. resized) the images down to thumbnail dimensions and applied our deep learning models to those. For the second, we used an approach termed ‘weakly supervised’ learning, which leverages a single global ‘yes-no’ label of whether a thumbnail contains land, rather than a segmentation (a ‘colouring in’) of the actual area of land. Combining this with a technique from deep learning interpretability, namely ‘pixel attribution saliency maps’, we could generate a heatmap of what the model looked at when predicting whether an image contained land.

The teaser video and poster are up on the MAIS website, http://montrealaisymposium.com/, for you to take a look (head to the ‘Scientific Program’ page under ‘Supervised Learning’), and we hope to see you there!
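For the curious, here is a minimal sketch of how this recipe can fit together in PyTorch. Everything in it is an illustrative assumption rather than our actual pipeline: the tiny network, the thumbnail size, and the training details are stand-ins, and the attribution method shown is plain vanilla gradients, which may differ from the saliency technique used in the poster.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    THUMB_SIZE = 128  # hypothetical thumbnail resolution, chosen for the sketch

    class LandClassifier(nn.Module):
        # Small CNN producing one 'contains land' logit per thumbnail.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32, 1)

        def forward(self, x):
            x = self.features(x)
            x = x.mean(dim=(2, 3))          # global average pooling
            return self.head(x).squeeze(1)  # one logit per image

    def to_thumbnail(image):
        # 'Zoom out': shrink a huge aerial image down to thumbnail size.
        return F.interpolate(image.unsqueeze(0),
                             size=(THUMB_SIZE, THUMB_SIZE),
                             mode="bilinear", align_corners=False).squeeze(0)

    def saliency_map(model, thumb):
        # Vanilla-gradient pixel attribution: the magnitude of the gradient
        # of the 'land' logit w.r.t. each input pixel, viewable as a heatmap.
        thumb = thumb.unsqueeze(0).requires_grad_(True)
        model(thumb).backward()
        return thumb.grad.abs().max(dim=1).values.squeeze(0)

    model = LandClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Stand-ins for one training example: a giant aerial image and its
    # single weak label (1 = 'contains land', 0 = 'no land').
    image = torch.rand(3, 4096, 4096)
    label = torch.tensor([1.0])

    # Training touches only the thumbnail and the global yes/no label.
    thumb = to_thumbnail(image)
    logit = model(thumb.unsqueeze(0))
    loss = F.binary_cross_entropy_with_logits(logit, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    heatmap = saliency_map(model, thumb)  # coarse 'where is the land?' map

The saliency heatmap is what turns a simple yes/no classifier into a rough localizer: the pixels that most influence the ‘land’ prediction tend to sit on the land itself, and no hand-drawn segmentation masks are ever needed.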
