On event-based motion detection and integration
Event-based vision sensors sample individual pixels at a much higher temporal resolution than conventional frame-based cameras and represent the visual input in each pixel's receptive field independently of neighboring pixels. The information available at the pixel level for subsequent processing stages is reduced to representations of changes in the local intensity function. In this paper we present theoretical implications of this condition with respect to the structure of light fields for stationary observers and locally moving contrasts in the luminance function. On this basis we derive several constraints on the kind of information that can be extracted from event-based sensory acquisition using the address-event representation (AER) principle. We discuss how subsequent visual mechanisms can build upon such representations to integrate motion and static shape information. On this foundation we present approaches to motion detection and integration in a neurally inspired model that demonstrates the interaction of early and intermediate stages of visual processing. Results replicating experimental findings demonstrate the capabilities of the initial and subsequent stages of the model in the domain of motion processing.
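The paper's model itself is not reproduced here, but the AER sampling principle the abstract builds on — each pixel independently emits a signed event whenever its log intensity has changed by more than a threshold since that pixel's last event — can be sketched as follows. This is a minimal illustrative simulation; the function name, event tuple layout, and threshold value are assumptions, not taken from the paper.

```python
import numpy as np

def generate_events(frames, times, threshold=0.2):
    """Emit DVS-style events (t, x, y, polarity) from a frame sequence.

    Each pixel is sampled independently of its neighbors: an event fires
    whenever the log intensity has changed by more than `threshold` since
    that pixel's last event, with polarity +1 (brighter) or -1 (darker).
    """
    log_ref = np.log(np.asarray(frames[0], dtype=float) + 1e-6)  # per-pixel reference
    events = []
    for t, frame in zip(times[1:], frames[1:]):
        log_i = np.log(np.asarray(frame, dtype=float) + 1e-6)
        diff = log_i - log_ref
        for polarity in (+1, -1):
            ys, xs = np.nonzero(polarity * diff >= threshold)
            for y, x in zip(ys, xs):
                events.append((t, int(x), int(y), polarity))
                log_ref[y, x] = log_i[y, x]  # reset reference at event time
    events.sort(key=lambda e: e[0])
    return events
```

A moving contrast produces events only along the edge, while uniform regions stay silent — the reduction to local intensity changes that the abstract refers to.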
- Published: 2 February 2015
- Publisher: ACM