  • 1
    UID:
    gbv_819951994
    Format: 1 online resource (xiii, 70 pages), illustrations
    Edition: Also available in print
    ISBN: 1627055878, 9781627055871
    Series Statement: Synthesis lectures on image, video, and multimedia processing #17
    Content: Every year, lives and property are lost in road accidents. About one-fourth of these accidents are due to poor visibility in foggy weather. At present, there is no algorithm specifically designed for the removal of fog from videos. Applying a single-image fog removal algorithm to each video frame is a time-consuming and costly affair. It is demonstrated that, with intelligent use of temporal redundancy, fog removal algorithms designed for a single image can be extended to real-time video applications. Results confirm that the presented framework for extending image fog removal algorithms to videos can reduce complexity to a great extent with no loss of perceptual quality. This paves the way for real-life application of video fog removal algorithms. To remove fog, an efficient fog removal algorithm using anisotropic diffusion is developed. The presented algorithm uses a new dark channel assumption to initialize the airlight map and anisotropic diffusion to refine it; the use of anisotropic diffusion yields a better estimate of the airlight map. The algorithm requires only a single image captured by an uncalibrated camera system, and it can be applied in both RGB and HSI color space. This book shows that the use of HSI color space reduces the complexity further. The algorithm relies on pre- and post-processing steps for better restoration of the foggy image; these steps have either data-driven or constant parameters, so no user intervention is required. The presented fog removal algorithm is independent of the intensity of the fog, so it performs well even in heavy fog. Qualitative and quantitative results confirm that it outperforms previous algorithms in terms of perceptual quality, color fidelity, and execution time. The work presented in this book can find wide application in the entertainment industry, transportation, tracking, and consumer electronics. (An illustrative sketch of the single-image restoration approach appears after this record.)
    Content: 1. Introduction -- 1.1 Video post-processing -- 1.2 Motivation --
    Content: 2. Analysis of fog -- 2.1 Overview -- 2.1.1 Framework --
    Content: 3. Dataset and performance metrics -- 3.1 Foggy images and videos -- 3.2 Performance metrics -- 3.2.1 Contrast gain (Cgain) -- 3.2.2 Percentage of the number of saturated pixels -- 3.2.3 Computation time -- 3.2.4 Root mean square (RMS) error -- 3.2.5 Perceptual quality metric (PQM) --
    Content: 4. Important fog removal algorithms -- 4.1 Enhancement-based methods -- 4.2 Restoration-based methods -- 4.2.1 Multiple image-based restoration techniques -- 4.2.2 Single image-based restoration techniques --
    Content: 5. Single-image fog removal using an anisotropic diffusion -- 5.1 Introduction -- 5.2 Fog removal algorithm -- 5.2.1 Initialization of airlight map -- 5.2.2 Airlight map refinement -- 5.2.3 Behavior of anisotropic diffusion -- 5.2.4 Restoration -- 5.2.5 Post-processing -- 5.3 Simulation and results -- 5.4 Conclusion --
    Content: 6. Video fog removal framework using an uncalibrated single camera system -- 6.1 Introduction -- 6.2 Challenges of realtime implementation -- 6.3 Video fog removal framework -- 6.3.1 MPEG coding -- 6.4 Simulation and results -- 6.5 Conclusion --
    Content: 7. Conclusions and future directions -- Bibliography -- Authors' biographies
    Note: Includes bibliographical references (pages 66-69). - Also available in print. - System requirements: Adobe Acrobat Reader. - Mode of access: World Wide Web.
    Additional Edition: ISBN 9781627055864
    Language: English
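The abstract above describes a single-image defogging pipeline built from a dark-channel-style airlight initialization, anisotropic-diffusion refinement, and restoration through the standard fog imaging model I = J*t + A_inf*(1 - t). The following Python sketch is a minimal illustration of that general pipeline, not the authors' exact algorithm; the function names, the parameter values (patch size, diffusion constants, the factor k), and the crude atmospheric-light estimate are assumptions made for the example.

# Minimal, illustrative single-image defogging sketch (assumed pipeline):
# 1) initialize an airlight map from a dark-channel-style estimate,
# 2) refine it with Perona-Malik anisotropic diffusion,
# 3) invert the fog model I = J*t + A_inf*(1 - t).
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels and a local patch."""
    h, w, _ = img.shape
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def anisotropic_diffusion(u, n_iter=20, kappa=0.1, lam=0.2):
    """Perona-Malik diffusion: smooths the airlight map while preserving edges."""
    u = u.copy()
    for _ in range(n_iter):
        # differences toward the four neighbors
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conduction coefficients
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def defog(img, a_inf=None, k=0.9):
    """img: float RGB in [0, 1]. Returns a restored (defogged) image."""
    if a_inf is None:
        a_inf = img.max()                       # crude atmospheric-light estimate (assumption)
    airlight = k * dark_channel(img)            # initial airlight map
    airlight = anisotropic_diffusion(airlight)  # refined airlight map
    t = np.clip(1.0 - airlight / a_inf, 0.1, 1.0)        # transmission, clipped to avoid division by ~0
    restored = (img - airlight[..., None]) / t[..., None]
    return np.clip(restored, 0.0, 1.0)

if __name__ == "__main__":
    foggy = np.random.rand(120, 160, 3)         # stand-in for a foggy frame
    print(defog(foggy).shape)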
  • 2
    Online Resource
    San Rafael, California 〈1537 Fourth Street, San Rafael, CA 94901 USA〉 : Morgan & Claypool
    UID:
    gbv_1656539519
    Format: Online resource (1 PDF (xiii, 70 pages)), illustrations.
    Edition: Online edition
    ISBN: 9781627055871
    Series Statement: Synthesis lectures on image, video, and multimedia processing # 17
    Content: Every year, lives and property are lost in road accidents. About one-fourth of these accidents are due to poor visibility in foggy weather. At present, there is no algorithm specifically designed for the removal of fog from videos. Applying a single-image fog removal algorithm to each video frame is a time-consuming and costly affair. It is demonstrated that, with intelligent use of temporal redundancy, fog removal algorithms designed for a single image can be extended to real-time video applications. Results confirm that the presented framework for extending image fog removal algorithms to videos can reduce complexity to a great extent with no loss of perceptual quality. This paves the way for real-life application of video fog removal algorithms. To remove fog, an efficient fog removal algorithm using anisotropic diffusion is developed. The presented algorithm uses a new dark channel assumption to initialize the airlight map and anisotropic diffusion to refine it; the use of anisotropic diffusion yields a better estimate of the airlight map. The algorithm requires only a single image captured by an uncalibrated camera system, and it can be applied in both RGB and HSI color space. This book shows that the use of HSI color space reduces the complexity further. The algorithm relies on pre- and post-processing steps for better restoration of the foggy image; these steps have either data-driven or constant parameters, so no user intervention is required. The presented fog removal algorithm is independent of the intensity of the fog, so it performs well even in heavy fog. Qualitative and quantitative results confirm that it outperforms previous algorithms in terms of perceptual quality, color fidelity, and execution time. The work presented in this book can find wide application in the entertainment industry, transportation, tracking, and consumer electronics. (An illustrative sketch of the temporal-redundancy video extension appears after this record.)
    Note: Part of: Synthesis digital library of engineering and computer science. - Includes bibliographical references (pages 66-69). - Compendex. INSPEC. Google Scholar. Google Book Search. - Title from PDF title page (viewed on January 17, 2015). - 1. Introduction -- 1.1 Video post-processing -- 1.2 Motivation. - System requirements: Adobe Acrobat Reader.
    Additional Edition: ISBN 9781627055864
    Additional Edition: Also available as a print edition, ISBN 978-1-62705-586-4
    Language: English
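This record carries the same abstract as the first, whose video part rests on temporal redundancy: the airlight map changes slowly from frame to frame, so the expensive single-image estimate need not be recomputed for every frame. The sketch below illustrates only that general idea, not the book's MPEG-based framework; estimate_airlight, restore, and the refresh interval are placeholder assumptions, and any single-image method (such as the sketch after the first record) could stand in for the estimation step.

# Illustrative sketch of reusing a slowly varying airlight map across frames (assumed scheme).
import numpy as np

def estimate_airlight(frame):
    """Placeholder for an expensive single-image airlight estimate."""
    return frame.min(axis=2)             # crude per-pixel dark-channel stand-in

def restore(frame, airlight, a_inf=1.0, t_min=0.1):
    """Invert I = J*t + A_inf*(1 - t) with t = 1 - airlight / A_inf."""
    t = np.clip(1.0 - airlight / a_inf, t_min, 1.0)
    return np.clip((frame - airlight[..., None]) / t[..., None], 0.0, 1.0)

def defog_video(frames, refresh_every=12):
    """Re-estimate the airlight map only on every `refresh_every`-th frame."""
    airlight = None
    for i, frame in enumerate(frames):
        if airlight is None or i % refresh_every == 0:
            airlight = estimate_airlight(frame)   # expensive step, done rarely
        yield restore(frame, airlight)            # cheap per-frame restoration

if __name__ == "__main__":
    clip = np.random.rand(30, 90, 120, 3)         # 30 synthetic "foggy" frames
    out = list(defog_video(clip))
    print(len(out), out[0].shape)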
  • 3
    Online Resource
    San Rafael, California 〈1537 Fourth Street, San Rafael, CA 94901 USA〉 : Morgan & Claypool
    UID:
    gbv_1656538903
    Format: Online resource (1 PDF (xiii, 79 pages)), illustrations.
    Edition: Online edition
    ISBN: 9781627055772
    Series Statement: Synthesis lectures on image, video, and multimedia processing, ISSN 1559-8144, # 16
    Content: Current vision systems are designed to perform in normal weather conditions. However, no one can escape severe weather. Bad weather reduces scene contrast and visibility, which degrades the performance of various computer vision algorithms such as object tracking, segmentation, and recognition. Thus, current vision systems must include mechanisms that enable them to perform well in bad weather conditions such as rain and fog. Rain causes spatial and temporal intensity variations in images or video frames; these intensity changes are due to the random distribution and high velocities of the raindrops. Fog causes low contrast and whiteness in the image and leads to a shift in color. This book studies rain and fog from the perspective of vision. The book has two main goals: 1) removal of rain from videos captured by moving and static cameras, and 2) removal of fog from images and videos captured by a moving, single, uncalibrated camera system. The book begins with a literature survey. Pros and cons of selected prior-art algorithms are described, and a general framework for the development of an efficient rain removal algorithm is explored. Temporal and spatiotemporal properties of rain pixels are analyzed, and using these properties, two rain removal algorithms for videos captured by a static camera are developed. For the removal of rain, the temporal and spatiotemporal algorithms require fewer consecutive frames, which reduces buffer size and delay. These algorithms do not assume the shape, size, or velocity of raindrops, which makes them robust to different rain conditions (i.e., heavy, moderate, and light rain). In a practical situation, there is no ground truth available for rain video; thus, a no-reference quality metric is very useful for measuring the efficacy of rain removal algorithms. Temporal variance and spatiotemporal variance are presented in this book as no-reference quality metrics. (An illustrative sketch of temporal rain filtering and a temporal-variance metric appears after this record.)
    Note: Part of: Synthesis digital library of engineering and computer science. - Includes bibliographical references (pages 75-78). - Compendex. INSPEC. Google Scholar. Google Book Search. - Title from PDF title page (viewed on January 17, 2015). - 1. Introduction -- 1.1 Motivation. - System requirements: Adobe Acrobat Reader.
    Additional Edition: ISBN 9781627055765
    Additional Edition: Also available as a print edition, ISBN 978-1-62705-576-5
    Language: English
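The abstract of this record rests on two observations: for a static camera, a rain streak occupies a given pixel only briefly, and rain inflates the frame-to-frame variance of affected pixels, so temporal variance can serve as a no-reference quality metric. The sketch below illustrates both ideas with a plain temporal median filter and a mean temporal-variance score; it is an assumed baseline for illustration, not the book's algorithms, and the window size is an arbitrary choice.

# Illustrative sketch: temporal rain filtering (static camera) and a
# temporal-variance no-reference metric (assumed baseline, not the book's method).
import numpy as np

def remove_rain_temporal(frames, window=5):
    """Temporal median over `window` consecutive frames; rain pixels are temporal outliers."""
    frames = np.asarray(frames, dtype=np.float64)    # shape (T, H, W) or (T, H, W, C)
    pad = window // 2
    out = np.empty_like(frames)
    t_count = frames.shape[0]
    for t in range(t_count):
        lo, hi = max(0, t - pad), min(t_count, t + pad + 1)
        out[t] = np.median(frames[lo:hi], axis=0)
    return out

def temporal_variance(frames):
    """No-reference metric: mean per-pixel variance across time (lower suggests less rain)."""
    frames = np.asarray(frames, dtype=np.float64)
    return float(frames.var(axis=0).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.full((20, 60, 80), 0.5)
    rainy = clean + (rng.random(clean.shape) < 0.02) * 0.4   # sparse bright "streaks"
    derained = remove_rain_temporal(rainy)
    print(temporal_variance(rainy), temporal_variance(derained))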