Tackling image degradation due to atmospheric turbulence, particularly in dynamic environments, remains a challenge for long-range imaging systems. Existing techniques have been primarily designed for static scenes or scenes with small motion. This paper presents the first segment-then-restore pipeline for restoring videos of dynamic scenes in turbulent environments. We leverage mean optical flow with an unsupervised motion segmentation method to separate dynamic and static scene components prior to restoration. After camera-shake compensation and segmentation, we introduce foreground/background enhancement that leverages statistics of turbulence strength and a transformer model trained on data from a novel noise-based procedural turbulence generator for fast dataset augmentation.
Benchmarked against existing restoration methods, our approach removes most of the geometric distortion and enhances video sharpness. We make our code, simulator, and data publicly available to advance the field of video restoration from turbulence: riponcs.github.io/TurbSegRes
Turb-Seg-Res tackles the stabilization of long-focal-length videos, which are highly sensitive to camera vibration. Existing techniques often fail under extreme vibration and motion, particularly in long-range videos with turbulence, which produces artifacts that disrupt feature matching. Unlike feature-based methods such as SIFT, SURF, and ORB, our GPU-accelerated cross-correlation technique provides accurate motion estimation and superior stabilization.
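To illustrate the idea, below is a minimal sketch of FFT-based correlation alignment. It is written as phase correlation (a close cousin of normalized cross-correlation, robust to brightness changes); the function names, the first-frame reference, and the integer-shift model are our assumptions for illustration, not the paper's exact implementation. Moving the tensors onto a CUDA device (e.g., `frames.cuda()`) exercises the GPU-accelerated path described above.

```python
# Minimal sketch of FFT-based correlation stabilization (illustrative, not the
# paper's exact code). Frames are (T, H, W) float32 grayscale torch tensors.
import torch

def estimate_align_shift(ref: torch.Tensor, frame: torch.Tensor):
    """Return the integer (dy, dx) by which `frame` should be rolled to
    align with `ref`, found as the peak of their phase correlation."""
    cross = torch.fft.rfft2(ref) * torch.conj(torch.fft.rfft2(frame))
    cross = cross / (cross.abs() + 1e-8)  # keep phase only
    corr = torch.fft.irfft2(cross, s=ref.shape)
    dy, dx = divmod(int(torch.argmax(corr)), ref.shape[1])
    # The FFT correlation is circular; unwrap shifts past half the frame size.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def stabilize(frames: torch.Tensor) -> torch.Tensor:
    """Align every frame of a clip to its first frame."""
    out = [frames[0]]
    for frame in frames[1:]:
        dy, dx = estimate_align_shift(frames[0], frame)
        out.append(torch.roll(frame, shifts=(dy, dx), dims=(0, 1)))
    return torch.stack(out)
```

Because the correlation is computed in the Fourier domain, the cost per frame pair is a few FFTs regardless of the shift magnitude, which is what makes this approach practical for the extreme vibrations of long-focal-length footage.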
Turb-Seg-Res introduces a novel tilt-and-blur video simulator based on 3 sets of 3D simplex noise for rapidly generating plausible turbulence effects with temporal coherence. Unlike existing physics-based methods, our procedural noise approach efficiently models coherent pixel shifts and adaptive blurring across video frames, mimicking atmospheric turbulence over time. The simulator processes 200x200 videos in real time and works at any resolution, enabling the rapid creation of large training datasets at up to 1024x1024 for our transformer model. It is also the first open-source turbulence video simulator with temporal consistency.
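The sketch below conveys the procedural idea under stated assumptions: the `noise` package supplies 3D simplex noise, OpenCV does the warping, and the amplitude/scale constants and the temporal offsets used to decorrelate the three fields are illustrative choices, not the paper's parameters. Spatially varying blur is also collapsed to a single per-frame sigma for brevity.

```python
# Minimal sketch of a noise-driven tilt-and-blur degradation (illustrative).
# Three independent 3D simplex noise fields drive x-shift, y-shift, and blur.
import cv2
import numpy as np
from noise import snoise3  # 3D simplex noise: (x, y, t) -> [-1, 1]

NOISE_SCALE = 0.02  # spatial frequency of the turbulence field (assumed)
TILT_AMP = 3.0      # max pixel displacement (assumed)
BLUR_MAX = 2.5      # max Gaussian blur sigma (assumed)

def degrade_frame(frame: np.ndarray, t: float) -> np.ndarray:
    """Apply temporally coherent tilt and blur to one frame at time t."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dx = np.empty((h, w), np.float32)
    dy = np.empty((h, w), np.float32)
    for i in range(h):
        for j in range(w):
            # Large offsets along t decorrelate the two displacement fields.
            dx[i, j] = snoise3(j * NOISE_SCALE, i * NOISE_SCALE, t)
            dy[i, j] = snoise3(j * NOISE_SCALE, i * NOISE_SCALE, t + 100.0)
    warped = cv2.remap(frame, xs + TILT_AMP * dx, ys + TILT_AMP * dy,
                       interpolation=cv2.INTER_LINEAR,
                       borderMode=cv2.BORDER_REFLECT)
    # The third noise field sets the blur strength for this frame.
    sigma = BLUR_MAX * (0.5 + 0.5 * snoise3(0.0, 0.0, t + 200.0))
    return cv2.GaussianBlur(warped, (0, 0), max(sigma, 0.1))
```

Because each field is sampled from one continuous 3D noise volume with time as the third axis, consecutive frames receive smoothly varying distortions, which is the temporal coherence the simulator is built around. A real-time implementation would evaluate the noise vectorized or on the GPU; the per-pixel loop here just keeps the sketch dependency-light.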
Turb-Seg-Res introduces an efficient unsupervised motion segmentation technique tailored for dynamic scenes affected by atmospheric turbulence. Our approach leverages integrated multi-frame optical flow analysis to separate moving foreground objects from static background regions, even amidst turbulence-induced distortions. By adaptively determining the optimal number of neighboring frames, we enhance the distinction between static and dynamic components, effectively discriminating inherent object motion from turbulence effects. This unsupervised segmentation method provides a crucial first step in our pipeline, enabling targeted enhancement strategies for the static background and dynamic foreground regions.
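A minimal sketch of the flow-averaging idea follows, with assumptions: OpenCV's Farneback flow stands in for the paper's flow estimator, a fixed frame window replaces the adaptive frame-count selection, and Otsu thresholding is an illustrative way to binarize the averaged magnitude.

```python
# Minimal sketch of segmentation by averaged multi-frame optical flow
# (illustrative stand-in for the paper's adaptive method).
import cv2
import numpy as np

def segment_moving(frames: list[np.ndarray]) -> np.ndarray:
    """Return a binary foreground mask for a stack of grayscale frames."""
    mean_flow = np.zeros(frames[0].shape + (2,), np.float32)
    for prev, curr in zip(frames, frames[1:]):
        mean_flow += cv2.calcOpticalFlowFarneback(
            prev, curr, None, pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mean_flow /= len(frames) - 1
    # Turbulence jitter is roughly zero-mean over time, so it cancels in the
    # vector average, while coherent object motion accumulates.
    mag = np.linalg.norm(mean_flow, axis=2)
    mag_u8 = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(mag_u8, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```

The averaging step is what discriminates true motion from turbulence: the longer the window, the more the zero-mean turbulent displacements cancel, which is why the paper adaptively chooses the number of neighboring frames rather than fixing it as this sketch does.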
Stabilization is crucial for aligning frames across time. In Turb-Seg-Res, we utilize GPU-accelerated normalized cross-correlation to achieve frame alignment. The impact of stabilization is readily apparent in the restored video.
In Turb-Seg-Res, segmentation plays a crucial role, allowing for the application of different processing techniques to distinct regions of the video.
In Turb-Seg-Res, we employ a transformer model, trained on videos from our simulator, to mitigate turbulence effects. The improvement is readily apparent in the restored video.
The Turb-Seg-Res paper estimates the turbulence strength of the video, determining the Cn2 value based on the approach introduced in the paper "Turbulence Strength Cn2 Estimation from Video Using Physics-Based Deep Learning". This value then guides the enhancement of the video.
The Turb-Seg-Res paper employs adaptive average optical flow for segmentation. A similar approach, albeit without the adaptive component and with an additional network for mask refinement, is proposed in "Unsupervised Region-Growing Network for Object Segmentation in Atmospheric Turbulence".
@inproceedings{saha2024turb,
  title     = {Turb-Seg-Res: A Segment-then-Restore Pipeline for Dynamic Videos with Atmospheric Turbulence},
  author    = {Saha, Ripon Kumar and Qin, Dehao and Li, Nianyi and Ye, Jinwei and Jayasuriya, Suren},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024},
}