Grand Challenges
Description
With the significant growth of 3D sensing technologies, point clouds have become a viable representation, since they provide practical means for capture, storage, delivery and rendering in augmented reality, mixed reality, virtual reality, medical imaging and 3D printing applications, among others. There is a need for an interchange and delivery format that allows efficient point cloud compression with minimal or no impact on quality. This challenge solicits contributions for this purpose. New evaluation methodologies are also sought. Furthermore, additional publicly accessible point cloud content, along with evidence of compression efficiency and other attractive features, is also welcome.
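To give a sense of how compression efficiency is commonly reported for point clouds, the following minimal sketch computes the rate of a compressed bitstream in bits per input point. The file name and point count are hypothetical, and this metric is only one of several that submitted evidence could use.

```python
import os

def bits_per_point(compressed_path: str, num_points: int) -> float:
    """Report compression rate as bits per input point (bpp)."""
    compressed_bits = os.path.getsize(compressed_path) * 8
    return compressed_bits / num_points

# Hypothetical example: a cloud of 800,000 points compressed into "cloud.bin".
# print(f"{bits_per_point('cloud.bin', 800_000):.2f} bpp")
```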
Website
https://mmspg.epfl.ch/ICME2018GrandChallenge
Organizers



Description
This grand challenge is focused on heterogeneous face recognition, specifically on polarimetric thermal-to-visible matching. The motivation behind this challenge is the development of a nighttime face recognition capability for homeland security and defense. The challenge organizers will provide a polarimetric thermal and visible face database for algorithm development. Participants will be asked to provide heterogeneous face recognition algorithms in the form of executables that take a pair of images (an aligned polarimetric thermal face image and an aligned visible face image) as input and provide a similarity score as output. Algorithms will be ranked by their face verification performance using ROC curves.
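As an illustration of how verification performance would be scored, the sketch below computes an ROC curve from similarity scores over genuine and impostor pairs. The use of scikit-learn and the function name are assumptions for illustration only, not part of any evaluation kit provided by the organizers.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc  # any ROC routine would do; sklearn assumed for brevity

def verification_roc(scores: np.ndarray, labels: np.ndarray):
    """ROC curve from similarity scores produced by a submitted executable.

    scores: one similarity score per (polarimetric thermal, visible) image pair
    labels: 1 for genuine (same identity) pairs, 0 for impostor pairs
    """
    fpr, tpr, _ = roc_curve(labels, scores)
    return fpr, tpr, auc(fpr, tpr)

# Hypothetical toy scores:
# fpr, tpr, area = verification_roc(np.array([0.9, 0.2, 0.7, 0.4]), np.array([1, 0, 1, 0]))
```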
Website
https://sites.google.com/view/hfr-challenge18/home
Organizers




Description
One of the most successful approaches to reducing the latency of HTTP streaming is to use HTTP chunked transfer coding, which enables a video segment to be generated and transmitted concurrently. For more technical details, please read Twitter's recent engineering blog post, Introducing LHLS Media Streaming.
However, compared with segment-based HTTP download, chunked transfer coding makes bandwidth estimation considerably harder for any ABR playback algorithm. This Grand Challenge calls for signal-processing and machine-learning algorithms that can effectively estimate download bandwidth from noisy samples of chunk-based download throughput.
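As a point of reference, a simple baseline that submissions would aim to beat is an exponentially weighted moving average over per-chunk throughput samples. The sketch below is only illustrative; the smoothing factor is an arbitrary assumption and no buffer or latency signals are used.

```python
class EwmaBandwidthEstimator:
    """Smooth noisy per-chunk throughput samples with an exponential moving average."""

    def __init__(self, alpha: float = 0.2):  # smoothing factor chosen arbitrarily for illustration
        self.alpha = alpha
        self.estimate_bps = None

    def update(self, chunk_bytes: int, download_seconds: float) -> float:
        sample_bps = 8 * chunk_bytes / download_seconds
        if self.estimate_bps is None:
            self.estimate_bps = sample_bps
        else:
            self.estimate_bps = self.alpha * sample_bps + (1 - self.alpha) * self.estimate_bps
        return self.estimate_bps

# Hypothetical chunk samples (bytes, seconds):
# est = EwmaBandwidthEstimator()
# for size, t in [(200_000, 0.4), (180_000, 0.5), (250_000, 0.3)]:
#     print(est.update(size, t))
```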
Website
https://blog.twitch.tv/twitch-invites-you-to-take-on-the-icme-2018-grand-challenge-2b3824d3537b
Organizers




Description
A densely sampled light field (DSLF) is a discrete representation of the 4D approximation of the plenoptic function, in which multi-perspective camera views are arranged so that the disparities between adjacent views are less than one pixel. DSLF is an attractive representation of scene visual content, particularly for applications that require ray interpolation and view synthesis. However, direct DSLF capture of real-world scenes is not practical. In this Grand Challenge, proponents are invited to develop and implement algorithms for DSLF reconstruction from decimated-parallax imagery, i.e., from a given sparse set of camera images.
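For intuition about the sampling requirement, the sub-pixel disparity criterion can be translated into a maximum spacing between adjacent cameras. The sketch below assumes a simple pinhole model with the focal length expressed in pixels; the numbers in the comment are hypothetical.

```python
def max_dslf_baseline(focal_px: float, min_depth: float) -> float:
    """Largest camera spacing keeping adjacent-view disparity under one pixel.

    With a pinhole model, disparity_px = focal_px * baseline / depth, so requiring
    disparity_px < 1 at the nearest scene depth gives baseline < min_depth / focal_px.
    """
    return min_depth / focal_px

# Hypothetical setup: focal length of 1500 px, nearest object 2 m away
# print(max_dslf_baseline(1500.0, 2.0))  # ~0.0013 m, i.e. about 1.3 mm between views
```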
Website
http://www.tut.fi/civit/index.php/icme-2018-grand-challenge-densely-sampled-light-field-reconstruction/
Organizers




Description
The MPEG DASH standard provides an interoperable representation format but deliberately does not define the adaptation behavior of client implementations. In a typical deployment, the encoding is optimized for the respective delivery channels, but various issues during streaming (e.g., high startup delay, stalls/re-buffering, high switching frequency, inefficient network utilization, unfairness to competing network traffic) may degrade the viewer experience.
The goal of this grand challenge is to solicit contributions addressing end-to-end delivery aspects that help improve QoE while optimally using network resources at an acceptable cost. Such aspects include, but are not limited to, content preparation for adaptive streaming, delivery over the Internet and streaming client implementations.
A special focus of the 2018 grand challenge will be on immersive media applications and services, including omnidirectional/360-degree video.
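To make the client-side adaptation problem concrete, the following minimal sketch selects a representation purely from an estimated throughput. The bitrate ladder and safety margin are arbitrary assumptions; actual contributions would also account for buffer state, switching frequency, fairness and the other QoE factors discussed above.

```python
def select_representation(est_throughput_bps: float,
                          ladder_bps: list[int],
                          safety_margin: float = 0.8) -> int:
    """Pick the highest bitrate the estimated throughput can sustain.

    ladder_bps: available representation bitrates in ascending order
    safety_margin: fraction of the estimate we are willing to commit (assumed value)
    """
    budget = est_throughput_bps * safety_margin
    affordable = [b for b in ladder_bps if b <= budget]
    return affordable[-1] if affordable else ladder_bps[0]

# Hypothetical ladder in bits per second:
# print(select_representation(4_000_000, [500_000, 1_500_000, 3_000_000, 6_000_000]))  # -> 3_000_000
```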
Website
https://github.com/Dash-Industry-Forum/Academic-Track/wiki/DASH-Grand-Challenge-at-IEEE-ICME-2018
Organizers


Description
Recent VR/AR applications still face important challenges. In particular, understanding how users watch and explore 360° content, and modelling visual attention, is key to developing appropriate rendering, coding and streaming techniques that create a good experience for consumers.
Salient360! 2018 is the follow-up to the ICME'17 Salient360! Grand Challenge. The first edition set the baseline for several types of visual attention models for 360° images, along with ad hoc methodologies and ground-truth data to test each type of model. The goals of this second edition are to:
- consolidate and improve the existing modelling;
- extend the types of models;
- extend the types of input content.
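As an example of how saliency predictions of this kind are typically scored against ground truth, the sketch below computes the Pearson linear correlation coefficient (CC) between two saliency maps. CC is only one of the metrics commonly used for this purpose, and the map sizes in the comment are hypothetical.

```python
import numpy as np

def saliency_cc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pearson linear correlation between predicted and ground-truth saliency maps."""
    p = (pred - pred.mean()) / (pred.std() + 1e-12)
    g = (gt - gt.mean()) / (gt.std() + 1e-12)
    return float((p * g).mean())

# Hypothetical equirectangular maps of identical resolution:
# score = saliency_cc(np.random.rand(512, 1024), np.random.rand(512, 1024))
```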
Website
https://salient360.ls2n.fr
Organizers


Submission Instructions
Please use the online submission system here.
Important Dates
- Grand Challenge Winner Paper submission deadline: March 26, 2018
- Grand Challenge Acceptance notification: April 23, 2018
- Grand Challenge Camera Ready Paper submission deadline: May 11, 2018
Grand Challenges Chairs

vasudevb@qti.qualcomm.com

leizhang@microsoft.com