Mrinal Kanti Bhowmik, Ph.D. (Engg.)
Department of Computer Science and Engineering
Tripura University (A Central University)


Salient Features for Moving Object Detection in Adverse Weather Conditions during Night Time

Night Vision in Adverse Weather Conditions


  • Challenges:
    1. He is actively involved in infrared-based object–background segmentation at night time under adverse weather conditions such as fog, rain, and dust, as shown in Figure 1. Background-model-based segmentation methods are mostly pixel-level approaches, but in far-infrared (FIR) imaging, pixel-intensity-based methods are not well suited. Several key issues are related to object detection at night using an FIR camera, as shown in Figure 2:

      (i) Flat Cluttered Background: The infrared radiation signal must travel from the target to the camera sensor through adverse atmospheric particles and is attenuated by scattering; the loss of radiation along the way produces a blurred flat region. In addition, with thermal sensors, large variations in the surface, which includes hot and cool objects such as buildings, vehicles, animals, humans, and light poles, make the foreground objects and the background scene indistinguishable;

      (ii) Temperature Polarity Changes: Thermal temperature adjustment during the first appearance of a moving object in a video sequence causes illumination-type effects in the background model from the current video frame and therefore yields false classifications;

      (iii) Background Dynamics: Outdoor scenes are affected by movement in the background, e.g., due to waves or swaying tree leaves.


    Figure 1. Sample frames of the created dataset at night time: (a), (b) a visual frame and the corresponding thermal frame, respectively, under dust conditions; (c), (d) the same under rain conditions; (e), (f) the same under fog conditions.


    Figure 2. Key challenges of thermal imaging in uncontrolled outdoor adverse environments: (a) flat cluttered background; (b) temperature polarity changes; (c) background dynamics.


  • Scope:
    1. The benchmark datasets in the literature contain very few video sequences under different adverse weather conditions. Thus, it is difficult to evaluate the robustness of object detection methods under atmospheric conditions, especially for night vision, because more than half of object-related accidents occur at night. Our motivation is therefore to provide a new dataset that comprises several adverse weather conditions with a larger number of video sequences than existing datasets.


  • Dataset:
    1. Therefore, we are designing a standard night-vision video dataset that is based on several atmospheric-weather-degraded conditions and covers many real-world scenarios. The considered atmospheric conditions are dust aerosols, fog aerosols, rain aerosols, and a low-light environment, under which we utilize a thermal camera. The dataset is named the ‘Tripura University Video Dataset at Night time (TU-VDN)’. TU-VDN provides a realistic, diverse set of outdoor night-vision videos captured via a thermal modality. The current dataset consists of 60 video sequences that were captured under various atmospheric conditions. The key features of the designed dataset are as follows:

      (i) Each frame contains multiple types of moving objects, e.g., pedestrians, various types of vehicles, bicyclists, motorbikes, trains, and pets;

      (ii) The night video clips were captured under three outdoor atmospheric scenarios, namely, dust, rain, and fog, which produce flat regions in thermal scenes. In addition, the captured scenes are mostly in urban areas, which correspond to larger surface variations due to the presence of hot and cool objects such as houses, warehouses, office buildings, streets, and residents. Therefore, areas with varied background and adverse weather conditions produce thermal characteristics that lead to an increased flat cluttered region in the target area;

      (iii) A conventional challenge is encountered, namely, a dynamic background due to shaking trees, since the whole dataset was recorded in an outdoor environment;

      (iv) The key issue with the FIR camera is thermal temperature adjustment during the first appearance of a moving object in a video sequence, which causes illumination-type effects in the background model from the current video frame;

      (v) Motion-camera-based videos are captured by mounting the camera on a moving vehicle, where the camera and objects are moving and shaking simultaneously.


  • Our Proposed Algorithms to Mitigate the Challenges:
    1. If the background emits the same amount of thermal radiation as the objects, e.g., a cluttered background, the foreground and background regions will be indistinguishable. We investigated the performance of a perceptual-discrimination salient-feature-based methodology on flat cluttered backgrounds. Most methods for foreground object segmentation in video sequences captured by thermal or visual cameras comprise two modules: feature extraction and maintenance of the background model. However, finding a satisfactory reference or background model for background subtraction is difficult when there are several real-time objects in the thermal frames. Therefore, we are working on a background segmentation model that uses novel Akin-based Local Whitening Boolean Pattern (ALWBP) salient features. The salient features handle flat cluttered regions (as shown in Figure 3), and the background model handles background dynamics (as shown in Figure 4) and temperature polarity changes (as shown in Figure 5), thereby reducing false classifications.
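The core idea of encoding a local neighbourhood as a Boolean similarity string can be sketched as follows. This is an illustrative approximation, not the published ALWBP formulation: the relative tolerance `tau` and the 3x3 neighbourhood are assumptions for demonstration only.

```python
import numpy as np

def local_boolean_pattern(patch, center, tau=0.05):
    """Illustrative sketch of a local Boolean similarity pattern
    (not the exact ALWBP definition).  Each of the 8 neighbours in
    the 3x3 patch is compared with a reference centre value; a
    neighbour within the relative tolerance `tau` is marked 1
    ('akin'), otherwise 0."""
    n = patch.astype(np.float64)
    neighbours = np.array([n[0, 0], n[0, 1], n[0, 2],
                           n[1, 0],          n[1, 2],
                           n[2, 0], n[2, 1], n[2, 2]])
    return (np.abs(neighbours - float(center)) <= tau * 255.0).astype(np.uint8)

def pattern_match(s1, s2):
    """Number of agreeing bits between two 8-bit Boolean strings."""
    return int(np.sum(np.asarray(s1) == np.asarray(s2)))

# A flat 3x3 patch: every neighbour is 'akin' to the centre, so a
# centre-referenced pattern alone cannot discriminate it from a flat
# foreground region -- the motivation for a background-sample reference.
flat = np.full((3, 3), 120, dtype=np.uint8)
bs = local_boolean_pattern(flat, flat[1, 1])
print(bs.tolist())  # [1, 1, 1, 1, 1, 1, 1, 1]
```

Referencing the pattern to a stored background sample, rather than the patch's own centre, is what lets the descriptor separate two flat regions that have different absolute temperatures.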


    Figure 3. Classification of moving objects from a flat cluttered background.


    Figure 4. Reduction of false classifications in frame sequences due to background dynamics.


    Figure 5. Reduction of false classifications in frame sequences due to temperature polarity changes.


  • Featured Article(s) in the Proposed Domain:


    1. Anu Singha, Mrinal Kanti Bhowmik, "Salient Features for Moving Object Detection in Adverse Weather Conditions during Night Time", IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), IEEE, IF-4.133 (SCI), 2019. DOI: 10.1109/TCSVT.2019.2926164.

    2. Anu Singha, Mrinal Kanti Bhowmik, "TU-VDN: Tripura University Video Dataset at Night Time in Degraded Atmospheric Outdoor Conditions for Moving Object Detection", Proceedings of the 26th IEEE International Conference on Image Processing (ICIP) [Tier II Conference], Taipei, Taiwan, September 22-25, 2019. (Accepted)





Foreground segmentation of moving objects in adverse atmospheric conditions such as fog, rain, low light, and dust is a challenging task in computer vision. The advantages of thermal infrared imaging at night time under adverse atmospheric conditions, which are due to its long wavelength, have been demonstrated. However, existing state-of-the-art object detection techniques have not been useful in such scenarios. We propose an improved background model that utilizes both thermal pixel-intensity features and spatial video salient features. The proposed spatial video salient features are represented as an Akin-based per-pixel Boolean string over a local region block and depend on the effect of neighbouring pixels on a centre pixel. The result of this Boolean procedure is referred to as the ‘Akin-based Local Whitening Boolean Pattern (ALWBP)’, which differentiates foreground and background regions accurately, even against a cluttered background. The background model is controlled via (i) the automatic adaptation of parameters such as the decision threshold RT and the learning parameter L, and (ii) the updating of the background samples Bsample_int and Bsample_ALWBP, to minimize (a) the effect of the background dynamics of outdoor scenes and (b) the temperature polarity changes during the first appearance of a moving object in thermal frame sequences. The performance of this model is evaluated using nine standard segmentation performance metrics on our newly created ‘Tripura University Video Dataset at Night time (TU-VDN)’ and on the publicly available CDnet-2014 dataset. TU-VDN consists of sixty video sequences that represent four atmospheric conditions, namely, low light, dust, rain, and fog. The results of a performance comparison with fourteen state-of-the-art detection techniques also demonstrate the high accuracy of the proposed technique.

Publications

"Salient Features for Moving Object Detection in Adverse Weather Conditions during Night Time",
A. Singha, M.K. Bhowmik
IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), IEEE
DOI: 10.1109/TCSVT.2019.2926164, 2019.
[PDF]

"TU-VDN: Tripura University Video Dataset at Night Time in Degraded Atmospheric Outdoor Conditions for Moving Object Detection",
A. Singha, M.K. Bhowmik
IEEE International Conference on Image Processing (ICIP), IEEE
DOI: 10.1109/ICIP.2019.8804411, 2019.
[PDF]

Images


TU-VDN Samples:
Sample frames of the created dataset at night time: (a), (b) a visual frame and the corresponding thermal frame, respectively, under low-light conditions; (c), (d) the same under dust conditions; (e), (f) the same under rain conditions; and (g), (h) the same under fog conditions. To characterize the textures in the night time visual and thermal images, we used entropy to measure the image content, where a higher entropy value in the night time thermal frames indicates an image with adequate detail, i.e., better quality.
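The entropy measure mentioned above can be computed directly from a frame's grey-level histogram. A minimal sketch (the bin count and the synthetic test frames are illustrative, not taken from the dataset):

```python
import numpy as np

def frame_entropy(frame):
    """Shannon entropy (in bits) of an 8-bit grayscale frame's
    histogram; higher entropy indicates richer texture detail."""
    hist, _ = np.histogram(frame, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                    # drop empty bins (log2(0) undefined)
    return float(-np.sum(p * np.log2(p)) + 0.0)

flat = np.full((64, 64), 100, dtype=np.uint8)        # single grey level
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print(frame_entropy(flat))                           # 0.0
print(frame_entropy(noisy) > frame_entropy(flat))    # True
```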



TU-VDN Statistics:
The TU-VDN dataset provides a realistic, diverse set of outdoor night-vision videos captured via a thermal modality. The current dataset consists of 60 video sequences that were captured under various atmospheric conditions; the key challenges of the video clips are listed in Table II. Each video clip is 2 minutes in duration and was recorded with a FLIR camera rigidly mounted at a 90° alignment on a tripod, at distances of 200 m to 2 km from the objects. In contrast, for a moving background, the video was captured by mounting the camera on a moving vehicle (20 to 30 km/h) such that the objects, camera, and background move simultaneously.



Flat Cluttered Background:
Outline of the salient-feature-based methodology over a flat cluttered background. (a) Background flat region: the neighbouring-pixel similarity pattern (Bs) is computed using the centre pixel (marked ‘x’); (b) Foreground object flat region: the foreground string (Fs) has 6/8 matches with the background similarity string (Bs), so it would be (incorrectly) categorized as background; (c) Foreground object flat region: the ALWBP descriptor (As) is computed using a randomly selected background sample (marked ‘√’) as the reference centre pixel. The foreground string (As) has 3/8 matches with the background string (Bs), so it is (correctly) categorized as foreground.
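The match-counting decision described in this caption can be sketched as a simple bit-count rule. The 4-of-8 threshold below is an assumption chosen so that 6/8 maps to background and 3/8 to foreground, as in the caption; it is not necessarily the rule used in the published method.

```python
def classify_pixel(pixel_string, background_string, match_threshold=4):
    """Label a pixel by comparing its 8-bit Boolean string against a
    background string: enough agreeing bits means 'background'.
    The threshold is illustrative."""
    matches = sum(int(a == b) for a, b in zip(pixel_string, background_string))
    return "background" if matches >= match_threshold else "foreground"

bs = [1, 1, 1, 1, 1, 1, 1, 1]                        # flat background string
print(classify_pixel([1, 1, 0, 1, 1, 0, 1, 1], bs))  # 6/8 matches -> background
print(classify_pixel([0, 0, 1, 0, 1, 0, 0, 1], bs))  # 3/8 matches -> foreground
```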



Background Model via ALWBP (BM ALWBP):
The overall system pipeline of the proposed background segmentation method combines an ALWBP feature descriptor with background model generation. Background model generation takes both spatial-level and pixel-level features as inputs, represented as ALWBP Boolean patterns and thermal intensities, respectively. It consists of three sub-steps: the segmentation decision, the adaptation of parameters, and the updating of the background samples.
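A minimal sketch of such a sample-based segmentation decision, in the spirit of ViBe/PBAS-style models that also keep per-pixel sample sets. The radius `R`, the bit threshold, and the sample count are illustrative assumptions, not the published BM ALWBP parameters:

```python
import numpy as np

def is_background(pixel_int, pixel_pattern, samples_int, samples_pat,
                  R=20, match_bits=6, required=2):
    """A pixel is labelled background if it is close enough to at
    least `required` stored background samples in BOTH feature
    spaces: thermal intensity (within radius R) and the Boolean
    pattern (at least `match_bits` of 8 bits agree)."""
    hits = 0
    for s_int, s_pat in zip(samples_int, samples_pat):
        close_int = abs(int(pixel_int) - int(s_int)) < R
        close_pat = int(np.sum(np.asarray(pixel_pattern)
                               == np.asarray(s_pat))) >= match_bits
        if close_int and close_pat:
            hits += 1
            if hits >= required:
                return True
    return False

samples_int = [100, 102, 98, 101]               # stored intensity samples
samples_pat = [np.ones(8, dtype=np.uint8)] * 4  # stored Boolean patterns
pat = np.ones(8, dtype=np.uint8)
print(is_background(100, pat, samples_int, samples_pat))  # True
print(is_background(200, pat, samples_int, samples_pat))  # False
```

Requiring agreement in both feature spaces is what lets the model reject a pixel whose intensity matches the background by chance but whose local pattern does not, and vice versa.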



Temperature Polarity Changes:
Thermal intensity changes upon the first appearance of an object in (a) a thermal frame and (b) the next frame in which the object enters for the first time. To account for changes in the background, such as thermal intensity changes upon the first appearance of an object in the frame, a waving water layer, and shaking trees, updating of the background pixels in the background model is essential.
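One common way to realise this updating is a conservative, ViBe-style random replacement of stored samples; the update probability below is an illustrative assumption:

```python
import random

def update_samples(samples, new_value, update_prob=1.0 / 16):
    """When a pixel is classified as background, occasionally
    overwrite one of its stored samples with the new observation.
    Over time this absorbs slow scene changes such as the thermal
    intensity shift when an object first appears in the frame."""
    if random.random() < update_prob:
        samples[random.randrange(len(samples))] = new_value
    return samples

samples = [100, 100, 100, 100]
samples = update_samples(samples, 140, update_prob=1.0)  # force one update
print(samples.count(140))  # 1
```

Updating only a random subset of samples, rather than all of them at once, keeps the model from being corrupted by a single transient observation.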



Background Dynamics:
Outdoor scenes are affected by movement in the background, e.g., due to waves or swaying tree leaves. As shown in the figure, higher background dynamics (dmin) require faster increments of the decision threshold (RT), and the threshold gradually decreases for low background-dynamics values.
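This feedback behaviour resembles the PBAS decision-threshold update rule; a sketch with illustrative constants (the actual values used by the method may differ):

```python
def adapt_threshold(R, d_min_avg, R_inc_dec=0.05, R_scale=5.0, R_lower=18.0):
    """Adapt the per-pixel decision threshold R from the average
    minimal background distance d_min_avg (PBAS-style): dynamic
    regions raise R, calm regions let it decay, never below R_lower."""
    if R > d_min_avg * R_scale:
        R = R * (1.0 - R_inc_dec)   # calm region: threshold decays
    else:
        R = R * (1.0 + R_inc_dec)   # dynamic region: threshold grows
    return max(R, R_lower)

print(adapt_threshold(100.0, d_min_avg=1.0))   # < 100.0: calm scene
print(adapt_threshold(30.0, d_min_avg=10.0))   # > 30.0: dynamic scene
```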



Experimental Results:
Typical segmentation results for several key challenges under various atmospheric conditions in our created night time dataset. Row (1) shows input frames, row (2) shows the ground truth, row (3) shows the BM ALWBP results, row (4) shows the ViBe results, row (5) shows the Subsense results, row (6) shows the LOBSTER results, row (7) shows the PAWCS results, row (8) shows the FST results, row (9) shows the PBAS results, row (10) shows the Multicue results, row (11) shows the ISBM results, row (12) shows the MTD results, row (13) shows the VuMeter results, row (14) shows the KDE results, row (15) shows the MoG_V2 results, row (16) shows the Eigenbackground results, and row (17) shows the Codebook results.

Slides
Poster Presentation at ICIP 2019 Conference
TCSVT Slides









Contact
Dr. Mrinal Kanti Bhowmik

mrinalkantibhowmik@tripurauniv.in
+91 9436129933