Industry Forum

Panels

Synopsis

5G is the next big thing in mobile communications. With key technology advances, it promises faster speeds and lower latency, and opens the door to a whole new set of use cases for smartphones and other consumer products. 2019 is expected to be the earliest possible launch date for the first “true” 5G smartphones.

At ICME 2018, we’re excited to announce the panel discussion on “5G-enabled Multimedia User Experience”. We have invited four outstanding panelists from industry, who will discuss how 5G’s low latency and faster network speeds will enhance the multimedia user experience, whether in audiovisual streaming, mobile gaming, or augmented/virtual/mixed reality. Please be sure to join our panel discussion, to be held on Wednesday, July 25th, 2018.

Panelists

Dr. Robert A. DiFazio is the Head of Research & Development and Vice President of InterDigital Labs, where he leads a group of engineers who design and develop advanced technologies and applications for mobile communications. He manages and actively participates in numerous projects addressing 5G cellular technology, next generation Wi-Fi, millimeter wave radio systems, small cell and heterogeneous wireless networks, advanced video standards and platforms, emerging network technology, IoT and machine-to-machine communications, and advanced sensor systems for navigation and localization. He contributes to technology planning at InterDigital and the company’s collaboration with many universities. Dr. DiFazio has almost forty years of experience in research, design, implementation, and testing of new technologies for commercial and military wireless systems. Prior to InterDigital, he spent more than twenty years at BAE Systems working on software defined radios, smart antenna systems, jam resistant modems, and low probability of intercept communication and navigation systems. He has a Ph.D. from the NYU Tandon School of Engineering (formerly, Brooklyn Poly). He serves on the Industry Advisory Boards for the NYU Tandon Department of Electrical Engineering and Computer Science and for New York Institute of Technology. He is a Senior Member of the IEEE and holds over forty issued and numerous pending US patents.


Dr. Ajay Luthra is currently Vice President of Advanced Research at ARRIS Inc. (formerly, Google, Motorola, General Instrument). He has been leading the development of emerging disruptive technologies, new product concepts, and advanced prototypes in the areas of advanced digital video compression, intelligent video cloud computing, cable head-end systems, IP-based video delivery, multi-screen video delivery, and next-generation in-home cloud and networking of multimedia devices. Previously, he was Director of the Communication and Video Systems Lab (1990-1995) and manager of the DSP Group (1985-1990) at Tektronix.

He received the NewBay Video Edge Industry Innovator award in 2017. He pioneered and championed the concepts of transcoding and related high-impact products for multi-screen environments, and one of those designs earned a CES Innovation Design and Engineering award in 2010. He received Motorola’s Master Innovator award in 2010, its Patent of the Year award in 2007, and its Distinguished Innovator award in 2003. He was named a Motorola Dan Noble Fellow in 2003.

He has also been an active member of the MPEG committee for more than twenty-five years, where he has chaired several technical sub-groups and pioneered the MPEG-2 extensions for studio applications. He led the development of the MPEG-4 AVC/H.264 video coding standard as vice-chair of the Joint Video Team (JVT), consisting of ISO/MPEG and ITU-T/VCEG experts. This standard has become the primary engine behind very successful deployments of HDTV technologies and services, and has spawned multi-billion-dollar businesses. He and the JVT were recognized by the ATAS Primetime Emmy Engineering Award and NATAS Technology & Engineering Emmy Award committees for the development of that standard. He was also the US Head of Delegates (HoD) to MPEG and chair of the InterNational Committee for Information Technology Standards (INCITS) L3.1 committee from 2004 to 2011. INCITS is the central U.S. standards group dedicated to creating technology standards for the next generation of innovation. He received INCITS’ Gene Milligan Award in March 2013 for providing outstanding leadership in the development of standards for digital video coding technologies.

He has also been actively involved in the development of the High Efficiency Video Coding (HEVC) standard, the most recent MPEG digital video coding standard. As co-chair of MPEG’s ad hoc group on High Dynamic Range (HDR) and Wide Color Gamut (WCG) video coding, he led MPEG’s effort to understand the impact of HDR and WCG video attributes on the coding efficiency of the HEVC standard, and to provide better visual quality and backward-compatible solutions. He holds more than 50 patents, with several more pending. He has authored numerous technical papers and has given keynotes at various conferences in the areas of digital video compression and communication. Dr. Luthra received his Ph.D. from the Moore School of Electrical Engineering, University of Pennsylvania, Philadelphia, USA, his M.Tech. in Communications Engineering from IIT Delhi, India, and his B.E. (Hons) from BITS Pilani, India.


Dr. Imed Bouazizi received the M.S. degree in computer science from the Technical University of Braunschweig, Braunschweig, Germany, and the Ph.D. degree from RWTH Aachen University, Aachen, Germany, in 2000 and 2004, respectively. He is a Principal Researcher at Samsung Research America. His research interests cover immersive multimedia distribution and communication, including VR/AR, 6DoF media, and 5G architecture for media distribution, with a strong focus on standardization activities. He has contributed to the development of several standardization efforts in the most relevant organizations.


Dr. Manuel Tiglio received his Ph.D. in Physics from the Universidad Nacional de Córdoba (Argentina) in October 2000. Since then he has held a number of postdoctoral, research, visiting scholar, and faculty positions at Penn State, Louisiana State University, Cornell University, the University of Maryland, and Caltech. Dr. Tiglio’s research interests are in real-time solutions to complex problems, both forward (simulations) and inverse (parameter estimation, model selection, Bayesian inference); uncertainty quantification; high-order and spectral methods in complex geometries; heterogeneous computing; and, more broadly, gravitational wave physics, computational mathematics, scientific computing, and numerical analysis, in particular reduced-order modeling, learning, and data mining.


Dr. Khaled El-Maleh is a Senior Director of Technology in the IP Department of Qualcomm, leading the Sensors+IP Portfolio Team, the Multimedia Technology Team, and related IP strategy areas. Dr. El-Maleh’s areas of expertise and interest include the design, implementation, and quality evaluation of multimedia systems, sensor technologies, data mining, human-computer interfaces, computer vision applications, talent management, innovation, and industry-university technology transfer. He is a technologist and strategist with a focus on entrepreneurship and innovation.

Khaled joined Qualcomm in 2000 as a Senior Engineer working on multimedia technology in Qualcomm’s chip business (QCT). Khaled received double-major bachelor’s degrees in Electrical and Computer Engineering and in Applied Math from King Fahd University of Petroleum and Minerals, Saudi Arabia, and M.Eng. and Ph.D. degrees in Electrical and Computer Engineering from McGill University, Canada. He is an accomplished inventor with more than 200 US and international patents. He was awarded Qualcomm’s Career Thought Leadership Award in 2009 and the IP Department’s Distinguished Contributor Award in 2013.

Khaled is a member of the IEEE SPS Conference Board, the Executive Advisory Board of the University of San Diego Shiley-Marcos School of Engineering, and the Advisory Board of the California School of Management and Leadership at Alliant International University.

Synopsis

XR, or X Reality, encompasses the many ways of combining digital and physical realities. XR applications can take different forms, such as virtual reality (VR), augmented reality (AR), mixed reality (MR), and more. XR users generate new forms of reality by bringing digital objects into the physical world and bringing physical-world objects into the digital world. XR has applications in many industries, including architecture, real estate, health care, retail, travel, media and entertainment, marketing, education, and enterprise.

To truly bring out the sense of reality, the XR experience must be delivered at the highest quality. This puts significant demands on the processing speed and power of hardware and software implementations, and on the bandwidth required for high-quality delivery. Advanced capture, processing, compression, and display technologies (sensors, displays, and infrastructure) need to be developed. Companies large and small are innovating to improve the XR ecosystem. International standards development organizations such as ISO/IEC MPEG and ITU-T VCEG have also taken up the task of defining compression and delivery standards to enable interoperability among XR applications.
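To put these bandwidth demands in perspective, here is a minimal back-of-envelope sketch in Python; the resolution, frame rate, bit depth, and compression ratio below are illustrative assumptions rather than figures from this synopsis:

```python
# Back-of-envelope estimate of the raw bit rate of an uncompressed
# stereo VR stream. All figures below are illustrative assumptions,
# not numbers taken from the panel synopsis.

width, height = 3840, 2160    # assumed per-eye resolution (4K)
fps = 90                      # assumed refresh rate for comfortable VR
bits_per_pixel = 24           # assumed 8-bit RGB, no chroma subsampling
eyes = 2                      # stereo (one view per eye)

raw_bps = width * height * bits_per_pixel * fps * eyes
print(f"Uncompressed: {raw_bps / 1e9:.1f} Gbps")            # ~35.8 Gbps

# Even an aggressive 100:1 compression ratio leaves a rate well above
# typical 4G throughput, hence the push for better codecs and 5G links.
print(f"At 100:1 compression: {raw_bps / 100 / 1e6:.0f} Mbps")  # ~358 Mbps
```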

At ICME 2018, we’re excited to announce the panel discussion on “XR: Virtual, Augmented and Mixed Reality.” We have invited a group of outstanding panelists, who will cover a wide range of topics related to XR, from content creation to light field displays in labs, and from hardware and software implementations to the latest and upcoming international standards. Please be sure to join our panel discussion, to be held on Wednesday, July 25th, 2018.

Panelists

Jill M. Boyce is an Intel Fellow and Chief Media Architect at Intel. She represents Intel at the Joint Collaborative Team on Video Coding (JCT-VC) and the Joint Video Exploration Team (JVET) of ITU-T SG16 and ISO/IEC MPEG. She serves as Associate Rapporteur of ITU-T VCEG, and was an editor of the Scalable High Efficiency Video Coding extension (SHVC).

She received a B.S. in Electrical Engineering from the University of Kansas in 1988 and an M.S.E. in Electrical Engineering from Princeton University in 1990. She was formerly Director of Algorithms at Vidyo, Inc., where she led video and audio coding and processing algorithm development. Before that, she was VP of Research and Innovation at the Princeton lab of Technicolor (formerly Thomson), and earlier was with Lucent Technologies Bell Labs, AT&T Labs, and Hitachi America. She was an Associate Editor of IEEE Transactions on Circuits and Systems for Video Technology from 2006 to 2010. She is the inventor of more than 150 granted U.S. patents and has published more than 40 papers in peer-reviewed conferences and journals.


Philip A. Chou has longstanding interests in data compression, signal processing, machine learning, communication theory, and information theory, and their applications to processing media such as dynamic point clouds, video, images, audio, speech, and documents. He did the first work on multiple-reference-frame video coding, originated rate-distortion optimization for codecs, and performed the seminal work on client-driven network-adaptive streaming media on demand, leading up to Microsoft IIS Smooth Streaming and subsequent standards. He is one of the inventors of practical network coding using random codes, and one of the inventors of wireless network coding.

He holds degrees in electrical engineering and computer science from Princeton, Berkeley, and Stanford. He has been a member of the research staff or a research manager at AT&T Bell Laboratories, Xerox PARC, and Microsoft Research, and has played key roles in the startups Telesensory Systems, Speech Plus, VXtreme (acquired by Microsoft), and 8i. He has been an affiliate faculty member at Stanford, the University of Washington, and the Chinese University of Hong Kong.

He has been an associate or guest editor for the IEEE Trans. Information Theory, the IEEE Trans. Image Processing, and the IEEE Trans. Multimedia, and an organizer or technical co-chair for the inaugural NetCod, ICASSP’07, MMSP’09, ICIP’15, ICME’16, and ICIP’17, among others. He is an IEEE Fellow and has served on the IEEE Fellow evaluation committees of the IEEE Computer and Signal Processing societies, as well as on the Board of Governors of the IEEE Signal Processing Society. He has been an active participant in MPEG, where he instigated the work on the file format and contributed algorithms and code used for static point cloud compression. He has won or co-authored best paper awards in the IEEE Trans. Signal Processing, the IEEE Trans. Multimedia, ICME, and ICASSP, and is co-editor of a book on multimedia communication. He is currently with 8i.com, a startup spread across Wellington, Los Angeles, and Seattle, where he leads the effort to compress and communicate volumetric video, popularly known as holograms, for virtual and augmented reality.


Serafin Diaz is a Vice President of Engineering at Qualcomm and currently leads XR Research. His experience prior to joining Qualcomm includes work as a digital hardware designer, a LAN systems engineer, a software architect, and a formal tester for TDMA cellular systems.

After joining Qualcomm in 1997, Serafin led a variety of projects in areas including 2G CDMA data systems, test automation platforms for wireless devices, and physical-layer system integration and test for 1xEV-DO, as well as projects that laid down core technology for wireless VoIP and video telephony systems.

Serafin co-founded and led the first augmented reality project at Qualcomm in 2007. The project produced core real-time computer vision technology that has not only enabled augmented reality but also became the foundation for projects in robotics, automotive, and virtual reality. The inside-out 6DoF head-tracking technology recently showcased in Qualcomm’s VR reference design HMD is an example of this technology.

Serafin received his undergraduate degree in electronic systems from the ITESM, Monterrey, Mexico in 1989 and his Master’s degree in electrical engineering from SMU, Dallas, USA in 1994.


Jon Karafin has dedicated his career to innovation in live-action cinema, VFX post-production, and light field technology, transforming bleeding-edge concepts into market-ready solutions. As CEO of Light Field Lab, he applies this expertise to the development of next-generation holographic technology.

Karafin has an extensive background in light field and visual effects technology, having previously served as Head of Light Field at Lytro, Vice President of Production Technology at RealD, and Director of Production, Technology, and Operations at Digital Domain. During his tenure, he was responsible for ushering in a new era of cinematic capture through the launch of Lytro Cinema, as well as delivering technology and content for many of the all-time highest-grossing feature films, including Peter Jackson’s The Hobbit, Michael Bay’s Transformers 3, and Tim Burton’s Alice in Wonderland.

Karafin holds multiple graduate degrees from the Rochester Institute of Technology (RIT), as well as BFAs in multiple fields from Ithaca College.


Jens-Rainer Ohm has held the Chair of the Institute of Communication Engineering at RWTH Aachen University, Germany, since 2000. His research and teaching activities cover multimedia signal processing, analysis, compression, transmission, and content description, including 3D and VR video applications, biosignal processing and communication, and the application of deep learning approaches in these fields, as well as fundamental topics of signal processing and digital communication systems.

Since 1998, he has participated in the work of the Moving Picture Experts Group (MPEG). He has chaired or co-chaired various standardization activities in video coding, namely the MPEG Video Subgroup (2002-2018), the Joint Video Team (JVT) of MPEG and ITU-T SG 16 VCEG (2005-2009), the Joint Collaborative Team on Video Coding (JCT-VC) since 2010, and the Joint Video Experts Team (JVET) since 2015.

Prof. Ohm has authored textbooks on multimedia signal processing, analysis, and coding, and on communication engineering and signal transmission, as well as numerous papers in the fields mentioned above.

Industrial Plenary Talks

Abstract

The widely anticipated 5G cellular specifications, 3GPP Release 15, are here. Deployments are starting, devices will appear soon, and there’s plenty of buzz about who’s first, who’s best, and what is to come. 5G brings great promises of 20 Gbps data rates, 1 ms latency, long battery life, and network enhancements: a Service Based Architecture, Network Function Virtualization, and Network Slicing. But what does it all mean? Are we overly enthusiastic, or are those who are ambivalent or skeptical justified?

This talk will take a brief look at the evolution of cellular standards, the expectations, the successes, and the failures. It will then focus on how 5G is different and discuss how success will follow from leveraging the flexible 5G technologies for a larger ecosystem that can benefit from the broadband continuous coverage of cellular networks. Advanced multimedia services are one of the most important use cases. Yet success may also depend on high-performance localized applications using mobile edge computing, IoT, new entrants operating in unlicensed spectrum, contributions to the automobile industry’s plans for autonomous and assisted driving, non-terrestrial networks offering the ability to integrate satellite systems, unmanned aerial vehicles, robotics, and, as history shows, those yet-to-be-imagined applications.

About the Speaker

Dr. Robert A. DiFazio, Head of Research & Development, Vice President, InterDigital Labs, InterDigital Communications, Inc. Dr. DiFazio’s full biography appears under Panelists above.

Abstract

HEVC (High Efficiency Video Coding) has emerged as a major step forward in video compression and standardization, an achievement recognized by an Emmy Engineering Award in October 2017. At the same time, new video compression technologies beyond HEVC continue to be actively developed to meet rapidly growing market demands. A Call for Proposals was jointly issued by ISO/IEC and ITU-T in October 2017 to launch a new standardization project to capture these advances. More than 40 responses were received in April 2018; besides more conventional video coding techniques, some presented new elements, including the use of neural networks for video compression. Neural network, or deep learning, technologies have been researched for enhancing video and image quality and, more recently, for video and image compression. This talk will look into the recent work on neural video compression for the next video compression standard and discuss the opportunities as well as the challenges.

About the Speaker

Shan Liu is a Distinguished Scientist and Vice President of Tencent Media Lab at Tencent America. Prior to Tencent, she was the Chief Scientist and Head of the America Media Lab at Futurewei Technologies, a.k.a. Huawei USA. She also held senior management and technical positions at MediaTek, Mitsubishi Electric Research Laboratories, Sony Electronics / Sony Computer Entertainment America, and IBM T.J. Watson Research Center. Dr. Liu is the inventor of more than 200 US and global patent applications and the author of more than 30 journal and conference articles. Many of her inventions have been adopted by international standards such as ITU-T H.265 | ISO/IEC HEVC, MPEG-DASH, and OMAF, and are utilized in widely sold commercial products. She has chaired and co-chaired a number of ad-hoc and technical groups during standards development and served as co-editor of Rec. ITU-T H.265 v4 | ISO/IEC 23008-2:2017. She has served on technical and organizing committees, or as an invited speaker, at various international conferences such as IEEE ICIP, VCIP, ICNC, ICME, and ACM Multimedia. She served on the Industrial Relationship Committee of the IEEE Signal Processing Society in 2014-2015 and was appointed VP of Industrial Relations and Development of the Asia-Pacific Signal and Information Processing Association (APSIPA) for 2016-2017. Dr. Liu obtained her B.Eng. degree in Electronics Engineering from Tsinghua University, Beijing, China, and her M.S. and Ph.D. degrees in Electrical Engineering from the University of Southern California, Los Angeles, USA.

Industrial Program Chairs

Khaled El-Maleh, Qualcomm, USA
Yan Ye, InterDigital, USA