Thesis No: 400737
Title: Multi-frame information fusion for image and video enhancement
Author: BAHADIR KÜRŞAT GÜNTÜRK
Advisor: PROF. YÜCEL ALTINBAŞAK
Institution: Georgia Institute of Technology / Foreign Institution / Department of Electrical and Computer Engineering
Subject: Computer Engineering and Computer Science and Control; Electrical and Electronics Engineering
Status: Approved
Level: Doctorate
Language: English
Year: 2003
Pages: 130
The need to enhance the resolution of a still image or of a video sequence arises frequently in digital cameras, security/surveillance systems, medical imaging, aerial/satellite imaging, scanning and printing devices, and high-definition TV systems. In this thesis, we address several aspects of the resolution-enhancement problem.

We first look into the color filter array (CFA) interpolation problem, which arises because of the patterned sampling of color channels in single-chip digital cameras. At each pixel location, only one color sample (red, green, or blue) is taken, and the missing samples are estimated by a CFA interpolation process. When CFA interpolation is not performed well, the resulting images suffer from highly visible color artifacts. We demonstrate that there is a high correlation among the color channels, that this correlation differs across frequency components, and we propose an iterative CFA interpolation algorithm that exploits this frequency-dependent inter-channel correlation. The algorithm defines constraint sets based on the observed data and the inter-channel correlation, and employs the projections onto convex sets (POCS) technique to estimate the missing samples.

To increase the resolution further, to subpixel levels, multiple frames must be used. By using subpixel-accurate motion vector estimates among the observed images, it is possible to reconstruct an image or a sequence of images with higher spatial resolution than any of the observations. Such a multi-frame reconstruction process is called super-resolution reconstruction. Although much work has been done on super-resolution reconstruction, most of it assumes that there is no compression during the imaging process; the input signal (video/image sequence) is assumed to exist in a raw (uncompressed) format. However, because of limited resources (bandwidth, storage space, I/O requirements, etc.), this is rarely the case. We therefore look into the super-resolution problem where compression is part of the imaging process. The most popular image compression standards are based on the discrete cosine transform (DCT). We propose a super-resolution reconstruction algorithm for DCT-compressed video that handles illumination changes and improves both spatial and gray-scale resolution.
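To illustrate the alternating-projection idea behind the POCS-based CFA interpolation described above, the following is a minimal sketch in Python. The specific constraint sets, detail filters, and Bayer-pattern handling here are simplifying assumptions for illustration, not the thesis's exact algorithm: the red channel's high-frequency detail is repeatedly replaced by the detail of the (more densely sampled) green channel, and the observed red samples are then re-imposed.

```python
# Hedged sketch of POCS-style CFA interpolation (illustrative only; the
# constraint sets and filters in the thesis are not reproduced here).
import numpy as np

def neighbor_fill(channel, mask):
    """Crude initial estimate: fill missing samples with the local mean of
    observed 3x3 neighbors (stand-in for a proper bilinear demosaic)."""
    filled = channel.copy()
    h, w = channel.shape
    for i in range(h):
        for j in range(w):
            if not mask[i, j]:
                i0, i1 = max(i - 1, 0), min(i + 2, h)
                j0, j1 = max(j - 1, 0), min(j + 2, w)
                nbr = channel[i0:i1, j0:j1][mask[i0:i1, j0:j1]]
                filled[i, j] = nbr.mean() if nbr.size else 0.0
    return filled

def pocs_cfa_interpolate(red, red_mask, green, green_mask, n_iter=8):
    """Alternate two projections: (1) make the red high-frequency detail match
    the green detail (inter-channel correlation constraint), (2) restore the
    red samples actually observed by the CFA (observation constraint)."""
    def lowpass(x):
        xp = np.pad(x, 1, mode="edge")
        return sum(xp[i:i + x.shape[0], j:j + x.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    g_full = neighbor_fill(green, green_mask)
    r = neighbor_fill(red, red_mask)
    for _ in range(n_iter):
        # Projection 1: keep red low-pass, copy green high-pass detail.
        r = lowpass(r) + (g_full - lowpass(g_full))
        # Projection 2: re-impose the observed red samples.
        r[red_mask] = red[red_mask]
    return r
```

In a Bayer pattern the green channel is sampled twice as densely as red or blue, which is why the green detail is used as the reference in this sketch.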
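Likewise, a minimal sketch of multi-frame super-resolution is given below, written for the standard uncompressed observation model y_k = decimate(blur(warp(x, s_k))) + noise. It assumes known whole-pixel translational motion, a box-blur point-spread function, and plain steepest descent on the data-fidelity term; the thesis's compression-aware (DCT-domain) formulation and its handling of illumination changes are not reproduced here.

```python
# Hedged sketch of multi-frame super-resolution by gradient descent
# (generic formulation under simplifying assumptions, not the thesis's
# compression-aware algorithm).
import numpy as np

def warp(x, shift):
    """Pure translational motion: circular shift by whole pixels
    (a stand-in for subpixel-accurate warping)."""
    return np.roll(x, shift, axis=(0, 1))

def blur(x):
    """3x3 box blur modelling the camera point-spread function."""
    xp = np.pad(x, 1, mode="edge")
    return sum(xp[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def decimate(x, factor):
    return x[::factor, ::factor]

def upsample(y, factor):
    """Adjoint of decimation: zero-fill the skipped positions."""
    x = np.zeros((y.shape[0] * factor, y.shape[1] * factor))
    x[::factor, ::factor] = y
    return x

def super_resolve(frames, shifts, factor, n_iter=30, step=0.5):
    """Minimize sum_k ||decimate(blur(warp(x, s_k))) - y_k||^2 by steepest
    descent, starting from a zero-filled upsampling of the first frame."""
    x = upsample(frames[0], factor)
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for y, s in zip(frames, shifts):
            residual = decimate(blur(warp(x, s)), factor) - y
            # Backproject the residual through the adjoint operators.
            grad += warp(blur(upsample(residual, factor)), (-s[0], -s[1]))
        x -= step * grad / len(frames)
    return x
```

The key point the sketch illustrates is that each low-resolution frame constrains the high-resolution estimate through its own motion operator, so frames that are shifted by subpixel amounts contribute complementary samples of the underlying scene.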