Xingjian Forum Academic Lecture
Time: 2:00 PM, Thursday, April 12, 2018
Venue: Room 706, Xiangying Building, East Area, Main Campus
Title: FutureTV: Driving the Future of Interactive Video Entertainment
Speaker: Dr. Zhan Ma, Nanjing University
Speaker Bio: Zhan Ma received the B.S. and M.S. degrees from Huazhong University of Science and Technology, Wuhan, China, in 2004 and 2006, respectively, and the Ph.D. degree from the Tandon School of Engineering of New York University (formerly Polytechnic University, Brooklyn, NY, USA), New York, NY, USA, in 2011. He is currently on the faculty of the School of Electronic Science and Engineering, Nanjing University, Nanjing, China. From 2011 to 2014, he was with Samsung Research America, Dallas, TX, USA, and then with Futurewei Technologies, Inc., Santa Clara, CA, USA. His current research interests include video compression, gigapixel streaming, and multispectral signal processing. His research is supported by the National Natural Science Foundation of China (NSFC), the National Key Research and Development Program, the Jiangsu NSFC, WeChat, Huawei, and others.
Abstract:
FutureTV defines a groundbreaking means for users to interact with video immersively in a virtualized space, mimicking natural human interaction in the real world and revolutionizing the existing television ecosystem. Toward this goal, the video representation must reach the gigapixel resolution scale with support for free viewport navigation, which presents significant challenges for real-time network delivery (e.g., bandwidth and latency). In this talk, we use an array camera to capture immersive video (i.e., 360° panoramic video at gigapixel resolution) in real time to represent the vivid natural world, followed by a multi-scale acceleration engine that processes tiled videos in parallel. Instead of delivering a bulky gigapixel video, we propose viewport-adaptive streaming based on the field of view (FoV). Each FoV consists of one or more tiles, so FoV navigation can be facilitated by streaming the appropriate tiles in response to the user's request. We leverage the characteristics of the human visual system (HVS) to significantly reduce network bandwidth consumption without sacrificing the perceptual quality of the content, thanks to computational models that account for peripheral vision and motion-induced visual effects in an immersive environment.
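To make the tile-based FoV-adaptive idea concrete, the following is a minimal, hypothetical sketch (not the speaker's actual system): it assumes the 360°×180° equirectangular panorama is split into a fixed 8×4 tile grid, and selects the tiles overlapping a requested viewport so that only those tiles need to be streamed.

```python
import math

# Assumed tiling of the equirectangular panorama: 8 columns x 4 rows.
TILE_COLS, TILE_ROWS = 8, 4

def tiles_for_viewport(yaw_deg, pitch_deg, fov_h=90.0, fov_v=90.0):
    """Return the set of (col, row) tiles overlapping the requested viewport.

    yaw in [-180, 180), pitch in [-90, 90]; all angles in degrees.
    """
    tile_w = 360.0 / TILE_COLS   # 45 degrees per tile column
    tile_h = 180.0 / TILE_ROWS   # 45 degrees per tile row

    # Horizontal span may wrap around the 360-degree seam, so step
    # column-by-column modulo the grid width.
    left = yaw_deg - fov_h / 2
    first_col = math.floor((left % 360.0) / tile_w)
    n_cols = math.ceil(fov_h / tile_w) + 1
    cols = {int((first_col + i) % TILE_COLS) for i in range(n_cols)}

    # Vertical span is simply clamped at the poles.
    top = max(-90.0, pitch_deg - fov_v / 2)
    bottom = min(90.0, pitch_deg + fov_v / 2)
    r0 = int((top + 90.0) // tile_h)
    r1 = int(min(TILE_ROWS - 1, (bottom + 90.0) // tile_h))
    return {(col, row) for col in cols for row in range(r0, r1 + 1)}
```

With a 90°×90° viewport centered at the equator, this selects roughly a 3×3 block of tiles instead of all 32, which is the source of the bandwidth savings; an HVS-aware system would additionally stream the non-viewport tiles at reduced quality rather than dropping them.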
All faculty and students are welcome to attend!