Special Sessions

Special Session 20.1: Image Processing and Pattern Recognition for Ecological Applications

Co-chairs: Concetto Spampinato, Alexis Joly, Vasileios Mezaris

Emails: cspampin@dieei.unict.it, alexis.joly@inria.fr, bmezaris@iti.gr

Organizer Web Page: http://www-rocq.inria.fr/~ajoly/index.php

Summary: Ecology is nowadays considered an interdisciplinary, collaborative and data-intensive science; as such, discovering, integrating and analyzing heterogeneous data is of key importance for researchers addressing complex questions that range from single particles to animals to the biosphere. This, combined with recent progress in digital cameras and sensors, as well as in network bandwidth and information storage capacity, has led to the widespread use of video imaging devices for environmental monitoring. Indeed, there is a growing demand from various scientific and management communities for monitoring meteorological conditions, air quality, pollen, climate change, water quality and, more generally, ecosystem changes. Flora and fauna monitoring are also of great interest for studying and observing the world's biodiversity. This increasing interest is witnessed by the large number of projects devoted to the topic: for example, the Fish4Knowledge project, which aims at developing video analysis tools for the automatic extraction of information about marine animals; the PESCaDO project, which aimed at automatically extracting environmental data from web pages and, in particular, from images, to be integrated into a more general environmental framework; or the Pl@ntNet project, which develops machine vision tools for gathering information on the identity, geographic distribution and use of plants. Beyond these projects, the continuous collection of visual data for capturing animal and insect behavior (e.g. bird nests, wolves, badgers, foxes, grasshoppers), as well as for habitat classification and pollution monitoring, provides yet more important examples.

This explosion of ecological data has created an urgent need for automatic image and video analysis tools able to support biologists in analyzing the natural environment, promoting its preservation, and understanding the behavior and interactions of the living organisms (insects, animals, etc.) that are part of it. However, the automatic analysis of environmental data poses several challenges from the image processing and pattern recognition points of view. For example, video surveillance systems in uncontrolled "real-life" environments have to deal with the typical effects of outdoor applications, such as lighting changes and varying image quality, which are exacerbated in natural environments, but also with the 3D motion of the targets, which (unlike that of humans or vehicles) tends to be erratic, with sudden variations in direction and speed, while appearance and non-rigid shape can change quickly. Other challenges include the recognition of classes with subtle differences (unlike most current class-based recognition), as for instance in plant leaf identification, or the recognition of moving, deforming, largely 2D objects undergoing uncontrolled 3D motion, as in the monitoring of underwater marine animals or birds.

The goal of this special session is twofold. First, it aims at reporting on the most recent image processing and pattern recognition methods for monitoring the environment, from animal/insect behavior understanding to plant and habitat classification, pollution assessment and other similar topics. Second, it aims at bringing together practitioners and researchers, both in image processing and in ecology, to share ideas and experiences in designing and implementing novel image analysis tools for ecological visual content.
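As a concrete illustration of one typical first step in such pipelines, the minimal sketch below applies adaptive background subtraction to detect moving animals in outdoor or underwater footage. It is not taken from any of the projects named above; the input file name, parameter values and minimum blob size are illustrative assumptions.

```python
# Minimal sketch: adaptive background subtraction for ecological video
# monitoring. File name, parameters and area threshold are assumed values.
import cv2

cap = cv2.VideoCapture("reef_camera.mp4")  # hypothetical input clip
# MOG2 adapts its background model over time, which helps with gradual
# lighting changes; detectShadows=True marks shadow pixels separately.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Keep confident foreground (255) and drop shadow pixels (value 127).
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    # Morphological opening suppresses small noise blobs caused by caustics,
    # drifting particles, or sensor noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    detections = [cv2.boundingRect(c) for c in contours
                  if cv2.contourArea(c) > 200]  # assumed minimum blob size
    # `detections` would feed a tracker or species classifier downstream.

cap.release()
```

In practice, erratic 3D motion and non-rigid shape changes mean such a simple detector is only a starting point, which is precisely where the methods sought by this session come in.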


Special Session 20.2: Progress in implantable prosthetic vision and the role of vision processing

Special Session chair: Nick Barnes

Email: nick.barnes@nicta.com.au

Organizer Web Page: http://users.cecs.anu.edu.au/~nmb/

Summary: This special session will introduce the state of the art in implantable prosthetic vision, with particular emphasis on recent research in Australia. It will cover implantable devices, the clinical context of patient needs and the evaluation of outcomes, the percepts that result from electrical stimulation strategies, as well as vision processing methods.

Motivation and Rationale: Human trials of implantable prosthetic vision devices have now reached multi-centre trials in Europe and the US. However, the results from these devices are still limited. Trials show statistically significant improvements in functional outcomes with the system on compared to the system off, but the trials are limited in scope and there is a great deal of room for improvement. Because of the limitations of resolution and dynamic range, there is a significant role to be played by vision processing. This session will review key progress in this field in terms of devices and clinical approach. It will present work that has been done on vision processing, and how such work can be evaluated for its contribution to clinical outcomes.
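To make the resolution and dynamic-range constraint concrete, the sketch below reduces a camera frame to a small grid of coarse intensity levels, roughly the kind of representation an electrode array can convey. The 20x20 grid and 8 brightness levels are assumed values for illustration only, not the specification of any particular device or stimulation strategy discussed in this session.

```python
# Illustrative sketch: map a camera frame to a coarse "electrode grid" of
# stimulation levels. Grid size and level count are assumed values.
import numpy as np
import cv2

GRID = (20, 20)   # assumed electrode layout (rows, cols)
LEVELS = 8        # assumed number of distinguishable stimulation levels

def to_phosphene_pattern(frame_bgr: np.ndarray) -> np.ndarray:
    """Map a BGR camera frame to a GRID-sized array of stimulation levels."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Area interpolation averages each block of pixels onto one electrode.
    small = cv2.resize(gray, (GRID[1], GRID[0]), interpolation=cv2.INTER_AREA)
    # Quantize 0..255 intensities into the few levels the stimulator can use.
    return np.round(small / 255.0 * (LEVELS - 1)).astype(np.uint8)

# Example usage on a synthetic frame (a bright square on a dark background):
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[180:300, 260:380] = 255
pattern = to_phosphene_pattern(frame)
print(pattern.shape, pattern.max())   # (20, 20) 7
```

Even this naive intensity mapping makes clear how much scene information is lost, and hence why smarter vision processing, and principled evaluation of its clinical benefit, matters.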


Special Session 20.3: Video Compression and Its IEEE Standardization

Session co-chairs: Oscar C. Au, Tiejun Huang, and Feng Wu

Emails: eeau@ust.hk, tjhuang@pku.edu.cn, fengwu@microsoft.com

Organizer Web Page:
http://www.ece.ust.hk/~eeau/
http://www.jdl.ac.cn/user/huangtiejun/index-en.aspx
http://research.microsoft.com/en-us/people/fengwu/

Summary: IEEE 1857 is a multi-part standard for highly efficient multimedia compression and communication under constraints that include limited complexity and bandwidth, in particular for emerging applications such as video surveillance and Internet video. The video part of IEEE 1857 is organized in an application-oriented way, in which suitable coding tools are chosen to achieve a high compression ratio. There are four profiles (Main, Surveillance, Portable and Broadcasting), designed respectively for ordinary video, surveillance video, video communication among portable devices, and high-definition video broadcasting. Future work on IEEE 1857 targets higher compression ratios, as well as new features for emerging applications.
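The small sketch below illustrates what this application-oriented organization might look like at configuration time: an encoder front end picking one of the four profiles named above according to the application class. The function and dictionary are hypothetical, not part of the standard or of any reference software.

```python
# Illustrative sketch only: selecting among the four IEEE 1857 video profiles
# by application class. Names below are hypothetical, not standard APIs.
IEEE_1857_PROFILES = {
    "general":      "Main",          # ordinary video
    "surveillance": "Surveillance",  # fixed-camera surveillance video
    "portable":     "Portable",      # video communication on portable devices
    "broadcast":    "Broadcasting",  # high-definition video broadcasting
}

def select_profile(application: str) -> str:
    """Return the profile targeted at the given application class."""
    try:
        return IEEE_1857_PROFILES[application]
    except KeyError:
        raise ValueError(f"unknown application class: {application!r}")

print(select_profile("surveillance"))  # Surveillance
```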

Motivation and Rationale: According to the Cisco VNI report, mobile video traffic exceeded 50% of mobile data traffic for the first time in 2011, and Internet video traffic is predicted to reach 86% of all consumer Internet traffic by 2016. The video content created and consumed in modern life is increasing exponentially, yet video compression and communication efficiency only increases roughly linearly from one generation to the next. This poses a great challenge to the development of video compression and communication technologies. The IEEE 802.11 series of standards constitutes the basis of WLAN, the most popular wireless network technology, which provides billions of people with convenient, high-speed wireless connections; most video traffic goes through WLAN. Video compression standards, in turn, are mainly developed by ISO MPEG and ITU-T VCEG, and the new video standard HEVC can double the compression ratio compared to its predecessor H.264. However, theoretical work on joint source-channel coding has shown that designing source and channel coding jointly provides a feasible way to significantly improve efficiency. It is therefore urgently desired to develop joint compression and communication standards.