<div dir="ltr"><div dir="ltr"><div dir="ltr">Signal Processing: Image Communication<br></div><div dir="ltr"><br></div><div dir="ltr"><a href="https://www.journals.elsevier.com/signal-processing-image-communication/call-for-papers/the-deep-learning-in-computational-photography">https://www.journals.elsevier.com/signal-processing-image-communication/call-for-papers/the-deep-learning-in-computational-photography</a> <br></div><div dir="ltr"><br></div><div dir="ltr">==========================</div><div dir="ltr"><div class="gmail-publication" style="box-sizing:border-box;margin:0px 0px 2rem;padding:0px;vertical-align:baseline;font-size:1.25rem;color:rgb(80,80,80);font-family:NexusSans,"Helvetica Neue",Helvetica,Arial,sans-serif"><div class="gmail-publication-title" style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline"><h1 style="box-sizing:border-box;margin:0px 0px 0.373rem;padding:0px;vertical-align:baseline;font-family:NexusSerif,Georgia,serif;color:rgb(34,34,34);line-height:1.4;font-size:2.375rem;max-width:inherit;font-feature-settings:"kern","liga","pnum","tnum" 0,"onum","lnum" 0,"dlig";font-weight:100">The Deep Learning in Computational Photography</h1></div></div><hr style="box-sizing:content-box;margin:0px 0px 1.5rem;padding:0px;vertical-align:baseline;clear:both;border-style:solid;border-color:rgb(221,221,221);border-right-width:0px;border-bottom-width:0px;border-left-width:0px;height:0px;color:rgb(80,80,80);font-family:NexusSans,"Helvetica Neue",Helvetica,Arial,sans-serif;font-size:15.04px"><div class="gmail-article-listing" style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline;list-style:none;color:rgb(80,80,80);font-family:NexusSans,"Helvetica Neue",Helvetica,Arial,sans-serif;font-size:15.04px"><div class="gmail-article-header" style="box-sizing:border-box;margin:0px 0px 1rem;padding:0px;vertical-align:baseline;float:left;width:598.469px"></div><div class="gmail-article-content" style="box-sizing:border-box;margin:1rem 0px 2rem;padding:0px;vertical-align:baseline;font-size:1.25rem"><p style="box-sizing:border-box;margin:0px 0px 1.5rem;padding:0px;vertical-align:baseline;font-family:NexusSerif,Georgia,serif;font-size:1.25rem;line-height:1.625rem;max-width:inherit">The Computational Photography is a new and rapidly developing subject. By integrating a variety of technologies such as digital sensors, optical systems, intelligent lighting, signal processing, computer vision, and machine learning, computational photography aims at improving the traditional imaging technology, in which an image is formed directly at sensors. The joint force Computational Photography enhances and extends the data acquisition capabilities of traditional digital cameras, and captures the full range of real-world scene information.</p><p style="box-sizing:border-box;margin:0px 0px 1.5rem;padding:0px;vertical-align:baseline;font-family:NexusSerif,Georgia,serif;font-size:1.25rem;line-height:1.625rem;max-width:inherit">With rapidly advancing hardware, some studies use highly curved image sensors to improve optical performance, some try to optimize the micro-lens array-based light field camera systems, the others propose fundamentally new imaging modalities for depth cameras. Despite that new sensing technologies are able to provide better image quality even richer information, cost constraints often limit large scale applications of such technologies.  Instead, learning-based computational photography techniques demonstrate a potential capability of enhancing the camera systems without requiring a significate upgrade of hardware.  Recently, deep neural networks have shown their superior performance in the imaging computation. They can either learn a complex imaging mechanism in a low-light environment, detect objects to strengthen the focus function and depth estimation, or enhance the degraded images captured in bad work conditions, just to name a few.</p><p style="box-sizing:border-box;margin:0px 0px 1.5rem;padding:0px;vertical-align:baseline;font-family:NexusSerif,Georgia,serif;font-size:1.25rem;line-height:1.625rem;max-width:inherit">The objective of this special issue is to provide a forum for researchers to share their recent progresses on deep learning for computation photography. Papers could cover broad aspects from both theoretical and engineering perspectives, including DNN techniques for modeling, DNN algorithms for image reconstruction, and novel DNN designs for computational imaging in various spectral regimes, such as optical, multi-spectrum, ultrasound, microwave regimes, and so on. Contributions are also welcome concerning applications using computational photography, from fundamental science to applied research.</p><p style="box-sizing:border-box;margin:0px 0px 1.5rem;padding:0px;vertical-align:baseline;font-family:NexusSerif,Georgia,serif;font-size:1.25rem;line-height:1.625rem;max-width:inherit">Potential topics include, but are not limited to:</p><ul style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline;line-height:1.625rem;list-style-position:outside;font-family:inherit;font-size:1.25rem"><li style="box-sizing:border-box;margin:0px 0px 0.5rem 1rem;padding:0px 0px 20px;vertical-align:baseline;list-style:none;font-size:1.25rem;line-height:1.4;font-family:NexusSerif,Georgia,serif;clear:both;float:left;width:598.469px">Computational imaging methods and models</li><li style="box-sizing:border-box;margin:0px 0px 0.5rem 1rem;padding:0px 0px 20px;vertical-align:baseline;list-style:none;font-size:1.25rem;line-height:1.4;font-family:NexusSerif,Georgia,serif;clear:both;float:left;width:598.469px">Computational illumination</li><li style="box-sizing:border-box;margin:0px 0px 0.5rem 1rem;padding:0px 0px 20px;vertical-align:baseline;list-style:none;font-size:1.25rem;line-height:1.4;font-family:NexusSerif,Georgia,serif;clear:both;float:left;width:598.469px">Computational image processing</li><li style="box-sizing:border-box;margin:0px 0px 0.5rem 1rem;padding:0px 0px 20px;vertical-align:baseline;list-style:none;font-size:1.25rem;line-height:1.4;font-family:NexusSerif,Georgia,serif;clear:both;float:left;width:598.469px">Multi-spectral imaging, SAR imaging, medical imaging and their processing</li><li style="box-sizing:border-box;margin:0px 0px 0.5rem 1rem;padding:0px 0px 20px;vertical-align:baseline;list-style:none;font-size:1.25rem;line-height:1.4;font-family:NexusSerif,Georgia,serif;clear:both;float:left;width:598.469px">Post-processing in computational photography</li><li style="box-sizing:border-box;margin:0px 0px 0.5rem 1rem;padding:0px 0px 20px;vertical-align:baseline;list-style:none;font-size:1.25rem;line-height:1.4;font-family:NexusSerif,Georgia,serif;clear:both;float:left;width:598.469px">Degraded image enhancement</li><li style="box-sizing:border-box;margin:0px 0px 0.5rem 1rem;padding:0px 0px 20px;vertical-align:baseline;list-style:none;font-size:1.25rem;line-height:1.4;font-family:NexusSerif,Georgia,serif;clear:both;float:left;width:598.469px">Aesthetics captioning</li><li style="box-sizing:border-box;margin:0px 0px 0.5rem 1rem;padding:0px 0px 20px;vertical-align:baseline;list-style:none;font-size:1.25rem;line-height:1.4;font-family:NexusSerif,Georgia,serif;clear:both;float:left;width:598.469px">Image recovery from compressed sensing</li><li style="box-sizing:border-box;margin:0px 0px 0.5rem 1rem;padding:0px 0px 20px;vertical-align:baseline;list-style:none;font-size:1.25rem;line-height:1.4;font-family:NexusSerif,Georgia,serif;clear:both;float:left;width:598.469px">Image generation through domain learning</li><li style="box-sizing:border-box;margin:0px 0px 0.5rem 1rem;padding:0px 0px 20px;vertical-align:baseline;list-style:none;font-size:1.25rem;line-height:1.4;font-family:NexusSerif,Georgia,serif;clear:both;float:left;width:598.469px">Applications, including natural, medical, remote sensing research.</li></ul><p style="box-sizing:border-box;margin:1.5rem 0px;padding:0px;vertical-align:baseline;font-family:NexusSerif,Georgia,serif;font-size:1.25rem;line-height:1.625rem;max-width:inherit"><span style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline;line-height:inherit;font-weight:700">Tentative schedule:</span><br style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline">Paper submission due: January 18, 2019<br style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline">First notification: March 31, 2019<br style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline">Revision: May 31, 2019<br style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline">Final decision: June 30, 2019<br style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline">Tentative Publication date: August 2019</p><p style="box-sizing:border-box;margin:0px 0px 1.5rem;padding:0px;vertical-align:baseline;font-family:NexusSerif,Georgia,serif;font-size:1.25rem;line-height:1.625rem;max-width:inherit"><span style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline;line-height:inherit;font-weight:700">Guest Editors:</span></p><ol style="box-sizing:border-box;margin:0px 0px 1.25rem;padding:0px;vertical-align:baseline;line-height:1.625rem;list-style-position:outside;font-family:inherit;font-size:1.25rem"><li style="box-sizing:border-box;margin:0px 0px 0.5rem 1rem;padding:0px 0px 20px;vertical-align:baseline;list-style:none;font-size:1.25rem;line-height:1.4;font-family:NexusSerif,Georgia,serif;clear:both;float:left;width:598.469px">Xuefeng Liang, Professor, Xidian University, China (<a href="mailto:xliang@xidian.edu.cn" rel="external" style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline;line-height:inherit;background:0px 0px;color:rgb(0,115,152);text-decoration-line:none;word-break:break-word;overflow:hidden;border-bottom:none">xliang@xidian.edu.cn</a>) <a href="http://web.xidian.edu.cn/xliang/en/index.html" target="_blank" rel="external" style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline;line-height:inherit;background:0px 0px;color:rgb(0,115,152);text-decoration-line:none;word-break:break-word;overflow:hidden;border-bottom:none">http://web.xidian.edu.cn/xliang/en/index.html</a> & Visiting Professor, Kyoto University, Japan (<a href="mailto:xliang@i.kyoto-u.ac.jp" rel="external" style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline;line-height:inherit;background:0px 0px;color:rgb(0,115,152);text-decoration-line:none;word-break:break-word;overflow:hidden;border-bottom:none">xliang@i.kyoto-u.ac.jp</a> )  <a href="http://www.genome.ist.i.kyoto-u.ac.jp/~xliang/index.html" target="_blank" rel="external" style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline;line-height:inherit;background:0px 0px;color:rgb(0,115,152);text-decoration-line:none;word-break:break-word;overflow:hidden;border-bottom:none">http://www.genome.ist.i.kyoto-u.ac.jp/~xliang/index.html</a><br style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline">Dr. Xuefeng Liang is professor at School of Artificial Intelligent, Xidian University. During 2010 – 2018, he was an associate professor at Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University. His research areas of interests include visual perception & cognition (psychology), computer vision and intelligent algorithms. Dr. Liang has published 59 international journals and conference papers, and received “Academic Award of Governors of Kyoto Prefecture” in 2017 and other 3 outstanding research/papers awards in past 5 years. He is on the editorial board of two international journals, has chaired 6 international conferences/workshops, and served as TCP members for 6 international conferences. Before joining Kyoto University in 2010, Dr. Liang was a Research Assistant with the University College of London and with the Queen Mary University of London.</li><li style="box-sizing:border-box;margin:0px 0px 0.5rem 1rem;padding:0px 0px 20px;vertical-align:baseline;list-style:none;font-size:1.25rem;line-height:1.4;font-family:NexusSerif,Georgia,serif;clear:both;float:left;width:598.469px">Lixin Fan, Principal Scientist, Nokia Technologies,<a href="mailto:%20lixin.fan@nokia.com" rel="external" style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline;line-height:inherit;background:0px 0px;color:rgb(0,115,152);text-decoration-line:none;word-break:break-word;overflow:hidden;border-bottom:none">lixin.fan@nokia.com</a> ,  <a href="https://scholar.google.com/citations?hl=en&user=fOsgdn0AAAAJ" target="_blank" rel="external" style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline;line-height:inherit;background:0px 0px;color:rgb(0,115,152);text-decoration-line:none;word-break:break-word;overflow:hidden;border-bottom:none">https://scholar.google.com/citations?hl=en&user=fOsgdn0AAAAJ</a><br style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline">Dr Lixin Fan is a principal scientist at Nokia Technologies. His research areas of interests include Machine learning & deep learning, Computer vision & pattern recognition, Image and video processing, 3D big data processing, data visualization & rendering, Augmented and virtual reality, Mobile ubiquitous and pervasive computing and Intelligent human-computer interface.   Dr Fan is the (co-)author of more than 50 international journal & conference publications, and the (co-)inventor of dozens of granted and pending patents filed in US, Europe and China. Dr Fan also co-organized workshops held jointly with CVPR, ICCV, ACCV, ICPR, ICME and ISMAR.  Before joining Nokia in 2004, Dr Fan was affiliated with Xerox Research Center Europe and his research work included the well recognized Bag of Keypoints method for image categorization.</li><li style="box-sizing:border-box;margin:0px 0px 0.5rem 1rem;padding:0px 0px 20px;vertical-align:baseline;list-style:none;font-size:1.25rem;line-height:1.4;font-family:NexusSerif,Georgia,serif;clear:both;float:left;width:598.469px">Chee Seng Chan, Associate Professor, University of Malaya, <a href="mailto:cs.chan@um.edu.my" rel="external" style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline;line-height:inherit;background:0px 0px;color:rgb(0,115,152);text-decoration-line:none;word-break:break-word;overflow:hidden;border-bottom:none">cs.chan@um.edu.my</a>, <a href="https://scholar.google.com/citations?user=hKfga9oAAAAJ&hl=zh-CN" target="_blank" rel="external" style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline;line-height:inherit;background:0px 0px;color:rgb(0,115,152);text-decoration-line:none;word-break:break-word;overflow:hidden;border-bottom:none">https://scholar.google.com/citations?user=hKfga9oAAAAJ&hl=zh-CN</a><br style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline">Dr. Chee Seng Chan is an associate professor at Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya. His research areas of interests include computer vision, image processing and fuzzy sets. Dr. Chan has published more than 50 international journals and conference publications. He was the founding chair of IEEE Computational Intelligence Society (Malaysia chapter), and recipients of few prestige awards such as Young Scientist by Academy Science Malaysia in 2016, Hitachi Fellowships in 2012, as well as the Top 100 British Young Engineers in 2010. He is a senior member of IEEE and a chartered engineer.</li></ol></div><div class="gmail-clearfix" style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline"></div></div><div class="gmail-publication-return" style="box-sizing:border-box;margin:1rem 0px 2rem;padding:1rem 0px 0px;vertical-align:baseline;border-top:0.15rem solid rgb(245,245,245);font-family:NexusSerif,Georgia,serif;font-size:1.25rem;color:rgb(80,80,80)"> <a href="https://www.journals.elsevier.com/signal-processing-image-communication/call-for-papers" title="Call for Papers" rel="" style="box-sizing:border-box;margin:0px;padding:0px;vertical-align:baseline;line-height:inherit;background:0px 0px;color:rgb(0,115,152);text-decoration-line:none;word-break:break-word;border-bottom:none">Return to Call for Papers</a></div></div></div></div>