The ICLR competition, which challenges participants to design a machine learning model that classifies fields by crop type from images collected during the growing season by the Sentinel-2 satellite, promises a USD 7,000 prize to be shared among the three winners, alongside travel grants.
The competition responds to the need for accurate and reliable satellite-derived agricultural data, which can underpin agricultural monitoring systems, improve farmers' productivity and heighten the impact of intervention mechanisms. The challenge is organised by Radiant Earth Foundation, a nonprofit organisation focused on enabling groups and individuals all over the world with open Artificial Intelligence (AI) and Earth observation (EO) data, standards and tools to address the world's most critical international development challenges. It is backed by PlantVillage, which provided the ground reference data. PlantVillage is a research and development unit of Penn State University that empowers small-scale farmers, with a mandate to alleviate poverty through the use of cheap and affordable technology and by democratising access to knowledge that can help the cultivation of food crops.
Competition prizes are funded by Microsoft AI for Earth and Descartes Labs.
For more information on joining the competition and other necessary details, see the competition page.
The competition closes on 15 March 2020; final submissions must be received by 11:59 PM GMT.
The fields in the training set are spread across western Kenya, and the field data were collected by the PlantVillage team. The dataset contains a total of more than 4,000 fields. Participants will have access to 12 bands of observations from the Sentinel-2 L2A product (covering the ultra-blue, blue, green and red; visible and near-infrared (VNIR); and short-wave infrared (SWIR) spectra), as well as a cloud probability layer. All bands are mapped to a common 10 m x 10 m spatial resolution grid.
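As a rough illustration of how such data might be handled, the 12 Sentinel-2 L2A spectral bands plus the cloud probability layer, once resampled to the common 10 m grid, can be stacked into a single array per tile. This is a minimal sketch with synthetic data standing in for real imagery; the `stack_tile` helper and the tile size are assumptions, not part of the competition's starter code.

```python
import numpy as np

# The 12 spectral bands delivered in the Sentinel-2 L2A product
# (B10 is excluded from L2A, hence B01-B09, B8A, B11, B12).
BAND_NAMES = ["B01", "B02", "B03", "B04", "B05", "B06", "B07",
              "B08", "B8A", "B09", "B11", "B12"]

def stack_tile(bands: dict, cloud_prob: np.ndarray) -> np.ndarray:
    """Stack per-band 2-D arrays (already on the common 10 m grid)
    plus the cloud probability layer into one (layers, H, W) cube."""
    layers = [bands[name] for name in BAND_NAMES] + [cloud_prob]
    return np.stack(layers, axis=0)

# Synthetic 256x256 tile in place of real imagery.
rng = np.random.default_rng(0)
tile = stack_tile({n: rng.random((256, 256)) for n in BAND_NAMES},
                  cloud_prob=rng.random((256, 256)))
print(tile.shape)  # (13, 256, 256): 12 bands + cloud probability
```

Keeping the cloud probability as a thirteenth layer lets a model (or a preprocessing step) mask out cloudy observations before classification.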
Furthermore, western Kenya, where the data were collected, is dominated by smallholder farms, which are common across Africa and pose a particular challenge for crop type classification from Sentinel-2 data, as individual fields can be small relative to the satellite's 10 m pixels. Moreover, the class imbalance in the dataset may need to be taken into account to build a model that performs evenly across all classes.
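One standard way to account for class imbalance, sketched below under the assumption of a simple per-class weighting scheme (the competition does not prescribe any particular method), is to weight each crop class inversely to its frequency in the training labels, so that rare crop types contribute as much to the training loss as dominant ones.

```python
import numpy as np

def inverse_frequency_weights(labels: np.ndarray) -> dict:
    """Return a per-class weight inversely proportional to class frequency
    ("balanced" weighting: total / (n_classes * count))."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# Toy label distribution: class 0 heavily dominates.
labels = np.array([0] * 90 + [1] * 9 + [2] * 1)
weights = inverse_frequency_weights(labels)
print(weights)  # rare classes receive the largest weights
```

These weights can then be passed to a weighted loss function during training, or used to drive oversampling of under-represented crop types.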
The Computer Vision for Agriculture (CV4A) workshop is a one-day event to be held in April 2020 on the sidelines of the International Conference on Learning Representations (ICLR), slated for Addis Ababa, Ethiopia. The workshop, the second edition of the Computer Vision for Global Challenges initiative, will centre on agriculture and feature speakers, posters, spotlight presentations, a panel discussion and (tentatively) a mentoring/networking dinner, in addition to competitions. It is jointly organised by AI and computational agriculture researchers and has the support of CGIAR.
The International Conference on Learning Representations (ICLR) itself is the foremost gathering of professionals committed to the progress of representation learning, a field of artificial intelligence.
ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.
Participants at ICLR span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.
Ogechi Onuoha is a Cambridge Certified ESOL editor with a background in reporting, international relations and creative writing, and is adept in industry research and analysis. She is passionate about curating and evaluating the benefits and relevance of space to grassroots development and women's participation in the space sector.