BEYOND is organising a special session on Multimodal Analysis and Retrieval of satellite images at the 27th International Conference on Multimedia Modeling (MMM 2021), to be held on January 25-27, 2021, in Prague, Czech Republic (https://mmm2021.cz/).
The call for papers is also presented below.
The dedicated website for the special session is: https://mklab.iti.gr/multisat2021/
The conference proceedings will be published in Springer's Lecture Notes in Computer Science (LNCS) series; submissions must follow the author guidelines: https://mmm2021.cz/author-guidelines/
Special session papers will be peer-reviewed in a double-blind review process.
Important Dates
- Deadline for paper submission: July 12th, 2020
- Notification of acceptance: September 20th, 2020
- Camera Ready Paper submission and registration: October 18th, 2020
- Conference starts: January 25th, 2021
Call for Papers
Deep learning and semantic technologies have recently introduced a paradigm shift in the domain of Earth Observation (EO) by generating higher-level knowledge and combining heterogeneous data streams, enabling the development of non-traditional downstream services. Copernicus data and other georeferenced data sources are often highly heterogeneous, distributed and semantically fragmented. Semantic web technologies are used to publish the data contained in the various repositories in the Resource Description Framework (RDF). Semantically annotated data become interconnected and thereby easily accessible to users. The value of the original data is therefore increased, encouraging the development of deep learning applications of higher value.
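As a toy illustration of this interconnection, the sketch below models RDF-style (subject, predicate, object) triples in plain Python; the URIs and predicates are hypothetical examples, and a real deployment would use an RDF triple store queried via SPARQL. The point is that once satellite-image metadata and non-EO georeferenced data share identifiers, they can be traversed as one linked graph:

```python
# Toy sketch of RDF-style triples (subject, predicate, object) for
# satellite-image metadata. All URIs and predicates are hypothetical.
triples = [
    ("ex:scene42", "rdf:type", "eo:Sentinel2Scene"),
    ("ex:scene42", "eo:acquiredOn", "2020-07-12"),
    ("ex:scene42", "geo:coversRegion", "ex:Prague"),
    # Non-EO georeferenced data, linked via the shared "ex:Prague" URI.
    ("ex:Prague", "rdf:type", "geo:City"),
    ("ex:Prague", "geo:population", "1300000"),
]

def query(triples, s=None, p=None, o=None):
    """Return all triples matching an optional (s, p, o) pattern,
    mimicking a basic SPARQL triple pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Interconnection in action: start from EO scenes, follow links
# into the non-EO data they reference.
for scene, _, region in query(triples, p="geo:coversRegion"):
    pop = query(triples, s=region, p="geo:population")[0][2]
    print(scene, "covers", region, "with population", pop)
```

Running the loop prints `ex:scene42 covers ex:Prague with population 1300000`, i.e. a single query path crossing from EO metadata into an external knowledge source.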
Image analysis with novel supervised, semi-supervised or unsupervised learning is already part of our lives and is rapidly entering the space sector to offer value-added Earth Observation products and services. Large volumes of satellite data arrive continuously from the Sentinel constellation, offering a basis for creating value-added products that reach beyond the space sector. The visual analysis and fusion of all these data streams need to take advantage of the existing Data and Information Access Services (DIAS) and High Performance Computing (HPC) infrastructures, when required by the involved end users, to deliver fully automated processes in decision support systems. Most importantly, interpretable machine learning techniques should be deployed to unlock the knowledge hidden in big Copernicus data.
This special session invites presentations of novel research on:
- Concept extraction from satellite images
- Change and event detection over satellite image time series
- Fusion of EO and non-EO imagery
- Multimodal image retrieval in georeferenced data
- Geo-localization of multimedia content
- Reinforcement learning and active learning on multispectral images
- Semantic analysis on Copernicus data for multispectral image retrieval
- Linked Earth Observation data for semantic multispectral image retrieval
- Deep learning on satellite images
- Deep learning on multimodal geospatial data
- Generative Adversarial Networks (GANs) on satellite imagery
- Semantics through word embeddings on satellite image metadata
- Indexing and retrieval of Copernicus data
- Knowledge extraction and data mining on big Copernicus datasets
- Machine learning techniques for unsupervised and semi-supervised learning on satellite imagery
- Distributed machine learning techniques on High Performance Computing environments
- Data augmentation and pseudo-labeling
- Explainable Artificial Intelligence (XAI), feature selection and feature engineering
- Causality analysis to infer statistical associations in observed data sets