General Information

Welcome to DeepModAI 2025, the International Workshop on Deep Learning for Multimodal Data, held in conjunction with the 32nd International Conference on Neural Information Processing (ICONIP 2025).

This workshop serves as a premier forum for researchers and practitioners to present and discuss the latest advancements, challenges, and future directions in deep multimodal learning. We emphasize innovative approaches that handle the complexities of real-world, dynamic data.

Key Details

  • Workshop date: Monday, November 24, 2025

  • Submission deadline: October 20, 2025 (extended from September 30)

  • Location: Okinawa Institute of Science and Technology (OIST), Okinawa, Japan

  • Workshop format: A half-day event featuring keynote talks, technical paper presentations, poster sessions, and a panel discussion (program subject to final confirmation).

  • Registration: Attendance is open to registered participants of ICONIP 2025. Please refer to the main conference website for registration details.


Aim and Scope

In our data-driven world, multimodal data is ubiquitous. However, unlocking its full potential requires moving beyond traditional deep learning paradigms. The DeepModAI workshop aims to bridge this gap by bringing together academic researchers and industry professionals to address the core challenges in modeling complex, dynamic multimodal data.

We specifically focus on advanced deep learning techniques—such as unsupervised, self-supervised, and weakly supervised approaches—that learn transferable and robust latent representations across different modalities. The goal is to advance beyond unimodal and static data analysis.

Topics of Interest:

  • Multi-view and multi-modal architecture design
  • Cross-modal alignment and translation
  • Attention mechanisms for dynamic modality fusion
  • Diversity-aware and ensemble learning methods
  • Explainable and collaborative multimodal frameworks
  • Adaptability to dynamic, incomplete, or context-dependent data
  • Scalable deployment and computational efficiency

We also strongly encourage contributions that demonstrate applications in critical domains such as health monitoring, autonomous systems, robotics, and environmental modeling.

Featuring a mix of technical presentations, invited talks, a panel discussion, and collaborative sessions, the workshop will highlight both theoretical advances and practical solutions. It will foster interdisciplinary dialogue on emerging challenges such as evaluation standards, handling missing modalities, and ethical considerations, ultimately helping to chart the future directions of deep multimodal learning.


Call for papers

We invite the submission of contributions on topics relevant to deep multimodal learning.

Submissions can take the form of either:

  • Extended abstracts (2 pages maximum, excluding references), or
  • Regular papers (no strict length limit; 12-15 pages recommended)

Submission Guidelines:

  • Submissions must be made through the “New submission” link on the left panel of this website.
  • Submissions can be in any suitable format (e.g., Springer LNCS). 
  • Regular papers should also be submitted to a preprint repository (e.g., arXiv or Jxiv) and have an associated DOI at the time of submission. Please use the 'Comment' box on the submission page to provide the DOI for your paper.
  • All submissions will be peer-reviewed. Upon acceptance, extended abstracts and links to the regular papers' preprints will be published on this website.

Important Dates:

  • Submission deadline: October 20, 2025 (extended from September 30)
  • Workshop date: November 24, 2025

Accepted submissions will be assigned to either an oral or poster presentation based on the reviewers' recommendations.


Organizing Committee

We are grateful to the international experts who will form our program committee. The list of members will be announced soon.


Sponsors

Osidoc
