The 3D community continues to innovate and evolve, with a growing focus on enabling augmented, virtual, and mixed reality (AR/VR/MR) experiences. Recent years have brought remarkable breakthroughs in capture and acquisition, with the introduction of microlens camera arrays, growing momentum behind large-scale multi-camera arrays, and the spread of 360-degree video and depth-sensing devices. Display technology continues to advance, with head-mounted displays gaining in popularity. The widespread increase in computational power has enabled ever-greater realism in 3D scene generation. Additionally, 3D audio can deepen the immersive experience through surround sound and realistic sound-field rendering.
While venues for presenting research at advanced stages are plentiful, the 3D multimedia community lacks a venue for receiving feedback during the early stages of developing radical and potentially disruptive technologies. This is the void that Hot3D aims to fill.
Scope & Format
Papers in all areas of 3D multimedia are solicited. Early-stage or preliminary results on potentially disruptive technologies are particularly encouraged. Full papers (up to 6 pages) will be published in the ICME 2018 proceedings.
Additionally, and most importantly, position papers are solicited for short presentation and discussion of preliminary work or ideas. Submit a proposal of up to 2 pages; a decision can be expected within 2 weeks of submission.
The 1-day workshop will be co-located with ICME, the flagship multimedia conference sponsored by four IEEE societies. The workshop will be a unique opportunity to interact with other researchers working on 3D multimedia. With an environment designed to facilitate discussion and feedback on early-stage research, as well as to forge new collaborations, this is an event not to be missed.
Automated analysis of multimedia content is indispensable for organizing, indexing and navigating through vast amounts of data. Owing to the proliferation of media-sharing platforms and social networks, such as YouTube, Twitter and Facebook, the production and distribution of media content have gone through a paradigm shift. This has led to a rapid democratization of the field, inviting content-makers from all spheres, and to unprecedented growth in the consumption of multimedia content over both modern and traditional mediums. Along with traditional multimedia areas such as indexing and summarization, research in this area is now being driven by the need to improve and facilitate personal and social activities, insight generation, and interaction experience. Research effort has been directed towards developing computational tools and methodologies for the systematic study of trends and biases in media, as well as the impact of media content in terms of commercial outcome and influence on users.
Extracting and analyzing rich media analytics requires powerful, often multimodal, frameworks that can handle the huge variability of media data. This variability stems from media data being 'in the wild', user-generated, and often outside traditional formats. On the technical front, this makes it difficult to employ off-the-shelf techniques from domains such as computer vision or speech analysis; for example, standard human face detectors fail to detect animated movie characters. Analyzing such challenging content calls for the design and training of novel algorithms that exploit specific properties and additional structure in the media data. In fact, for many successful vision and audio analysis tasks, media content has proven to be one of the most difficult benchmarks. This issue is further compounded by the absence of large in-domain datasets with reliable annotations. Hence, the field often relies on clever data mining techniques and on approaches from semi-supervised or transfer learning. While machine learning methods requiring little or no supervision can greatly benefit the field, the need for properly annotated databases for evaluation and benchmarking cannot be denied. Addressing this gap will advance the field towards a more usable and practical future.
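As an illustrative sketch of the transfer-learning route mentioned above (everything here is hypothetical toy data, not any workshop system): a frozen "pretrained" feature extractor, standing in for a network trained on abundant out-of-domain data, is reused on a small labeled in-domain set, and only a lightweight linear head is trained on the scarce labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained backbone": a frozen random projection stands in
# for a network trained on natural images; it is reused, never updated.
def extract_features(x, w_frozen):
    return np.tanh(x @ w_frozen)

# Only a small logistic-regression head is trained on the in-domain labels.
def train_head(feats, labels, lr=0.1, epochs=200):
    w = np.zeros(feats.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-feats @ w))   # sigmoid predictions
        w -= lr * feats.T @ (p - labels) / len(labels)
    return w

# Toy in-domain set: only 40 labeled samples, 8 raw dimensions.
x = rng.normal(size=(40, 8))
y = (x[:, 0] + x[:, 1] > 0).astype(float)
w_frozen = rng.normal(size=(8, 16))

feats = extract_features(x, w_frozen)
w_head = train_head(feats, y)
preds = (feats @ w_head > 0).astype(float)
accuracy = (preds == y).mean()
```

The design point this toy mirrors is that with few in-domain annotations, training only a small head on top of frozen features is far less prone to overfitting than training a full model from scratch.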
This research area can also benefit immensely from skills across multiple disciplines, including engineering and computer science, the social sciences, psychology, and even film theory. Thus the main purpose of this workshop is to facilitate conversation between different groups of researchers across disciplines, and to provide a platform for sharing research progress and challenges in the area of multimedia analysis for societal trends. We believe that the submitted manuscripts will help develop this new and important area of research, and enable discussions on the potential of this area, the challenges involved, and the future that researchers envision.
Call for Papers
The workshop will offer a timely dissemination of research updates to benefit the academic and industry researchers and professionals working in the fields ranging from multimedia computing, multimodal signal processing to social science. To this end, we solicit original research papers related (but not limited) to the topics listed below:
- Media analytics and methodologies for all forms of media (e.g., automated discovery of rich analytics related to gender, profession, ethnicity, personality, stereotypes, and topics of discussion/conversation, and analysis of their interrelationships, dynamics and evolution over time)
- Impact prediction and analysis (e.g. popularity, virality, memorability, commercial success and influence of media content on society)
- Methodologies and analytics for relatively less-studied media, such as advertisements and animated movies
- Affect and sentiment analysis from media (e.g. emotional appeal, persuasiveness, emotion perception and communication)
- Large-scale data collection, benchmarking and challenges.
- Evaluation protocols and metrics for methods analyzing societal trends.
Papers will be evaluated based on their novelty, presentation, contributions and relevance to the workshop topic. The papers must be written in English and describe original unpublished work. Extensions on previously published work must show significant additional work in order to be considered for publication. Reviewers will make an initial determination on the suitability and scope of all submissions.
Today, multimedia services and technologies play an important role in providing and managing smart-health services for anyone, anywhere, at any time, and seamlessly. These services and technologies give doctors and other healthcare professionals immediate access to smart-health information for efficient decision making as well as better treatment. Researchers are developing various multimedia tools, techniques, and services to better support smart-health initiatives. In particular, work on smart-health record management, elderly health monitoring, and real-time access to medical images and video is of great interest.
This workshop aims to report high-quality research on recent advances in various aspects of smart health, specifically state-of-the-art approaches, methodologies, and systems for the design, development, deployment and innovative use of multimedia services, tools and technologies for healthcare. Authors are solicited to submit complete unpublished papers on topics including, but not limited to:
- Serious Games for smart health
- Multimedia big data for healthcare applications
- Digital Game-based Therapy
- Adaptive exergames for smart health
- Multimedia Enhanced Learning, Training & Simulation for Health
- Sensor and RFID technologies for smart health
- Cloud-based smart health services
- Resource allocation for media cloud-assisted smart healthcare
- Multimedia big data for smart healthcare
- Health record management
- Context-aware smart health services and applications
- Elderly health monitoring
- Collaborative smart health
- IoT-Cloud Integration for Smart Healthcare
- Deep learning approach for smart healthcare
- Cloud-based connected healthcare
- Security, privacy and authentication for Smart Healthcare Systems
Extended versions of selected accepted papers will be invited for submission to a special issue of an ISI journal and to IEEE Access (Mobile Multimedia for Healthcare).
The intimate presence of mobile devices in our daily life, such as smartphones and wearable gadgets like smart watches, has dramatically changed the way we connect with the world around us. In the era of the Internet of Things (IoT), these devices are further extended by smart sensors and actuators that augment multimedia devices with additional data and possibilities. With a growing number of powerful embedded mobile sensors, such as cameras, microphones, GPS, gyroscopes, accelerometers, digital compasses, and proximity sensors, a wealth of data is available, enabling new sensing applications across diverse research domains, including mobile media analysis, mobile information retrieval, mobile computer vision, mobile social networks, mobile human-computer interaction, mobile entertainment, mobile gaming, mobile healthcare, mobile learning, and mobile advertising. The workshop on Mobile Multimedia Computing (MMC 2018) therefore aims to bring together researchers and professionals from worldwide academia and industry to showcase, discuss, and review the whole spectrum of technological opportunities, challenges, solutions, and emerging applications in mobile multimedia.
- Ubiquitous computing on mobile and wearable devices
- Action/gesture/object/speech recognition with mobile sensors
- Computational photography on mobile devices
- Human computer interaction with mobile and wearable devices
- Mobile multimedia content adaptation and adaptive streaming
- Power saving issues of mobile multimedia computing
- Personalization, privacy and security in mobile multimedia
- User behavior analysis of mobile multimedia applications
- Other topics related to mobile multimedia computing
- Mobile visual search
- Multimedia data in the IoT
- Mobile social signal processing
- Mobile virtual and augmented reality
- Mobile multimedia indexing and retrieval
- Multi-modal and multi-user mobile sensing
- 2D/3D computer vision on mobile devices
- Multimedia Cloud Computing
This workshop focuses on the emerging field of multimedia creation using machine learning (ML) and artificial intelligence (AI) approaches. It aims to bring together researchers in ML and AI and practitioners from the multimedia industry to foster multimedia creation. Multimedia creation, including style transfer and image synthesis, has been a major focus of the ML and AI communities, owing to recent technological breakthroughs such as generative adversarial networks (GANs). This workshop seeks to explore the implications of these advances for multimedia creation. It solicits papers on all emerging areas of content understanding and multimedia creation, all traditional areas of computer vision and data mining, and selected areas of artificial intelligence, with a particular emphasis on machine learning for pattern recognition. Applied fields such as art content creation, medical image and signal analysis, massive video/image sequence analysis, facial emotion analysis, control systems for automation, content-based retrieval of video and images, and object recognition are also covered. The workshop is expected to provide an interactive platform for researchers, scientists, professors, and students to exchange innovative ideas and experiences in multimedia, from underlying cutting-edge technologies to applications.
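To make the GAN idea concrete, here is a deliberately tiny, self-contained sketch (an illustrative assumption, not any workshop baseline): a two-parameter generator learns to shift standard Gaussian noise onto a "real" one-dimensional distribution, trained adversarially against a logistic-regression discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy adversarial setup: generator G(z) = a*z + b should map N(0,1) noise
# onto "real" data drawn from N(3,1); the discriminator is logistic.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

def disc(x):
    return 1.0 / (1.0 + np.exp(-(w * x + c)))

mean_before = b          # E[G(z)] = b, since E[z] = 0

for _ in range(2000):
    real = rng.normal(loc=3.0, scale=1.0, size=batch)
    z = rng.normal(size=batch)
    fake = a * z + b

    # Discriminator step: minimize binary cross-entropy (real=1, fake=0).
    d_real, d_fake = disc(real), disc(fake)
    w -= lr * ((d_real - 1) * real + d_fake * fake).mean()
    c -= lr * ((d_real - 1) + d_fake).mean()

    # Generator step: non-saturating loss, i.e. maximize log D(G(z)).
    d_fake = disc(fake)
    grad_x = -(1 - d_fake) * w       # d(-log D)/dG(z)
    a -= lr * (grad_x * z).mean()
    b -= lr * grad_x.mean()

fake_mean = (a * rng.normal(size=1000) + b).mean()
```

The alternating updates are the essence of the adversarial breakthrough the paragraph above refers to: the discriminator's gradient tells the generator in which direction its samples look "less fake", here pulling the generated mean toward the real one.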
- Generative models for multimedia creation
- AI for multimedia creation
- Data mining techniques for multimedia creation
- Synthesis and prediction of multimedia
- Deep learning application in video and image analysis
- Multi-modal data analysis
- Medical image and signal analysis
- Content of video and image extraction, analysis and application
- Online and distributed computing for multimedia creation
- Wireless technology and demonstrations for multimedia creation
- Security, privacy and policy regulation for multimedia creation
- Machine learning on social, emotional and affective multimedia
- Augmented reality
- Multimedia applied on control system for automation
- Human-computer interaction
- Signal processing including audio, video, image processing, and coding
- Smart multimedia surveillance
For more details, please refer to the submission instructions of ICME 2018 here.
The past decade has seen tremendous growth in multimedia systems and applications, in areas ranging from surveillance to social media. While these systems and applications have been instrumental in improving the way of life of end users, in the process people's privacy may be put at risk. In particular, on most social networking websites, users upload their information without any guarantees of privacy. Although there has been significant progress in multimedia research, the privacy issues arising from the use of multimedia systems and applications have only recently begun to attract the attention of researchers.
This is the second edition of this workshop (after the first successful PIM’16 in Seattle) and it aims to bring forward recent advances related to privacy protection in various multimedia systems and applications. We seek unpublished high quality papers that address the privacy issues in different multimedia applications including, but not limited to, surveillance, e-chronicles, e-health, mobile media, and social networking from the following perspectives:
- Privacy considerations in acquisition and transmission of multimedia data.
- Privacy issues in fusion, analysis, presentation, and publication of multimedia data.
- Privacy in multimedia databases: storage, access, indexing and retrieval.
- Theory and models: assessment of security and privacy and utility, privacy leakage and covert channels.
- Synergy between privacy preserving technologies and ethical and legal issues.
- Privacy aware cloud-based multimedia storage and processing.
- System architectural choices for privacy preservation.
Recent years have witnessed the great popularity of multimedia applications and services. With the rapid growth in the volume of multimedia data and the complexity of systems, highly efficient processing and analytics technologies have received significant attention and become key research issues. This workshop is intended to promote further research interest and activity in multimedia data processing and analytics, and to provide a forum for researchers and engineers to present their cutting-edge innovations and share their experiences on all aspects of emerging multimedia systems and applications. Topics of interest include, but are not limited to:
- Theories and methodologies for multimedia big data computing
- Highly efficient multimedia data compression and transmission
- Multimedia retrieval, classification and understanding
- Security and privacy in multimedia big data
- Multimedia quality of experience
- Multimedia big data systems
- Multi-modality fusion of multimedia content
- Object localization and recognition in images/videos
- VR/AR content generation and analysis
- Syntactic parsing and semantic analysis
- Machine translation and speech recognition
- Word segmentation and text mining
- Question answering and user interaction
- Benchmark datasets of emerging multimedia applications
- Tutorials/surveys of the advances of multimedia technologies
We have witnessed remarkable advances in facial recognition technologies over the past few years, due to the rapid development of deep learning and large-scale, labeled facial image collections. As progress continues to push renowned facial recognition databases nearly to saturation, there is a need for ever more challenging image and video collections to solve emerging problems in faces and multimedia.
In parallel to conventional face recognition, research is being done to automatically understand social media content. To gain such an understanding, the following capabilities must be supported: face tracking (e.g., facial expression analysis, face detection), face characterization (e.g., behavioral understanding, emotion recognition), facial characteristic analysis (e.g., gait, age, gender and ethnicity recognition), group understanding via social cues (e.g., kinship, non-blood relationships, personality), and visual sentiment analysis (e.g., temperament, arrangement). The ability to create effective models for such visual understanding has significant value for both the scientific community and the commercial market, with applications spanning human-computer interaction, social media analytics, video indexing, visual surveillance, and Internet vision.
This workshop serves as a forum for researchers to review recent progress in the recognition, analysis and modeling of faces in multimedia. Special interest will be given to visual kin and non-kin social relations. The workshop will include up to two keynotes, along with peer-reviewed papers (oral and poster). Original high-quality contributions are solicited on the topics listed below.
- Soft biometrics and profiling of faces: age, gender, ethnicity, personality, kinship, occupation, and beauty ranking;
- Deep learning practice for social face problems with ambiguity including kinship verification, family recognition and retrieval;
- Understanding of familial features from vast amounts of social media data;
- Discovery of the social groups from faces and the context;
- Mining social face relations through metadata as well as visual information;
- Tracking, extraction, and analysis of face models captured by mobile devices;
- Face recognition in low-quality or low-resolution video or images;
- Novel mathematical models and algorithms, sensors and modalities for face & body gesture and action representation;
- Analysis and recognition for cross-domain social media;
- Novel social applications involving detection, tracking & recognition of faces;
- Face analysis for sentiment analysis in social media;
- Other applications involving face analysis in social media content.
Biometrics-based recognition, identification and retrieval techniques are becoming increasingly important in our society. Great progress has been made in this area, focusing on heterogeneous cues (face; body, both 2D appearance and 3D volume; other unimodal biometrics such as fingerprint and palm print; gait; and behavioral cues in general) that do not require the user's cooperation. However, the problem is far from completely solved, particularly in real-world applications under uncontrolled environments, where a large number of factors hinder identification/recognition/retrieval performance, including lighting variations, different types of occlusion, large pose variations, and view changes.
The mission of the workshop is to explore cutting-edge research in non-collaborative (re-)identification/recognition/retrieval, with a particular emphasis on the fusion of different modalities under cross-view settings. For example, the face recognition and re-identification communities, even though they share many objectives, have rarely interacted to hybridize novel recognition applications in which both face and body biometric patterns can be jointly exploited. The same holds for the communities working on gait recognition and body re-identification, thermal body recognition, visual body recognition, and other biometric cues such as iris recognition at a distance. The workshop, in this sense, will be highly interdisciplinary, encouraging papers (even preliminary ones) in which modality fusion plays a primary role.
In addition, human-related identification/recognition/retrieval techniques rely heavily on the development of feature and similarity learning strategies. Therefore, this workshop also aims to explore recent progress in feature and similarity learning (distance metric learning) for biometrics-based identification/recognition/retrieval. It has been observed in recent years that (re-)identification/recognition/retrieval performance can be largely improved when a robust feature representation or an appropriate distance/similarity function has been learned. In this respect, the workshop will help the community better understand the challenges and opportunities of feature and similarity learning techniques and their applications to (re-)identification over the next few years. Moreover, with the greatly increasing amount of data, techniques addressing large-scale biometrics are also in strong demand.
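As a toy illustration of why a learned distance can improve matching (all data and the diagonal weighting below are hypothetical, a sketch rather than a rigorous evaluation protocol): identity-bearing feature dimensions are upweighted relative to view-dependent noise dimensions before nearest-neighbour matching.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy re-identification setup: 10 identities seen from two
# "views". Dimensions 0-1 carry identity; dimensions 2-5 are dominated
# by view-dependent noise.
n_ids, dim = 10, 6
centroids = rng.normal(scale=3.0, size=(n_ids, dim))
centroids[:, 2:] = 0.0

def observe():
    x = centroids + rng.normal(scale=0.3, size=(n_ids, dim))
    x[:, 2:] += rng.normal(scale=3.0, size=(n_ids, dim - 2))
    return x

gallery = observe()
probe = observe()

def rank1_accuracy(weights):
    # Nearest-neighbour matching under a diagonal (weighted) distance.
    correct = 0
    for i in range(n_ids):
        d = ((gallery - probe[i]) ** 2 * weights).sum(axis=1)
        correct += int(np.argmin(d) == i)
    return correct / n_ids

unweighted_acc = rank1_accuracy(np.ones(dim))

# "Learn" a diagonal metric from matched pairs: weight each dimension by
# a Fisher-style ratio of overall spread to within-pair discrepancy, so
# noisy view-dependent dimensions are suppressed.
weights = gallery.var(axis=0) / (((gallery - probe) ** 2).mean(axis=0) + 1e-8)
weighted_acc = rank1_accuracy(weights)
```

A diagonal reweighting is the simplest instance of the distance/similarity learning the paragraph above describes; full metric-learning methods learn a complete linear (or nonlinear) transform rather than per-dimension weights.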
Topics & Format
Topics of interest include, but are not limited to:
- Face, Finger, Iris, Palm Recognition.
- Person Re-identification.
- People Detection, Tracking, and Gait analysis.
- Novel biometrics sensing methods and Soft Biometrics.
- Feature Learning for Biometrics Recognition.
- Similarity Learning (Distance Metric Learning) for Biometrics Recognition.
- Human identification with multiple cues and multi-modality fusion.
- Large scale search and matching for identification.
- Transfer Learning for visual surveillance.
- Performance modeling, prediction and evaluation of identification/biometrics systems.
- Security improvement assessment for multi-identification/biometrics systems.
- Large scale multi-biometrics feature learning for fast retrieval.
- Hash learning for biometrics.
We propose to host a half day workshop consisting of oral presentations, an invited speech and a panel.
Paper Submission Instructions
Please submit a full-length paper (up to 6 pages IEEE 2-column format) through the online submission system here
The templates for Microsoft Word and LaTeX submissions are available as below.
- Workshop Paper submission deadline: March 19, 2018 (extended to March 26, 2018)
- Workshop Paper acceptance notification: April 27, 2018
- Camera-Ready Workshop Paper submission deadline: May 11, 2018