Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement

DC Field Value Language
dc.contributor.author Kim, Hong Gi -
dc.contributor.author Seo, Jung Min -
dc.contributor.author Kim, Soo Mee -
dc.date.accessioned 2022-03-02T04:50:02Z -
dc.date.available 2022-03-02T04:50:02Z -
dc.date.created 2022-03-02 -
dc.date.issued 2022-02 -
dc.identifier.issn 1225-0767 -
dc.identifier.uri https://sciwatch.kiost.ac.kr/handle/2020.kiost/42367 -
dc.description.abstract Underwater optical images face various limitations that degrade image quality compared with optical images taken in the atmosphere. Wavelength-dependent attenuation of light and reflection by very small floating particles cause low contrast, blurred detail, and color degradation in underwater images. We constructed an image dataset of Korean seas and enhanced it by learning the characteristics of underwater images with three deep learning techniques: CycleGAN (cycle-consistent adversarial network), UGAN (underwater GAN), and FUnIE-GAN (fast underwater image enhancement GAN). In addition, the underwater optical images were enhanced using the image processing technique of Image Fusion. For a quantitative performance comparison, we calculated UIQM (underwater image quality measure), which evaluates enhancement in terms of colorfulness, sharpness, and contrast, and UCIQE (underwater color image quality evaluation), which evaluates it in terms of chroma, luminance, and saturation. For 100 underwater images taken in Korean seas, the average UIQMs of CycleGAN, UGAN, and FUnIE-GAN were 3.91, 3.42, and 2.66, respectively, and the average UCIQEs were 29.9, 26.77, and 22.88, respectively. The average UIQM and UCIQE of Image Fusion were 3.63 and 23.59, respectively. CycleGAN and UGAN improved image quality both qualitatively and quantitatively across various underwater environments, whereas the performance of FUnIE-GAN varied with the underwater environment. Image Fusion performed well in terms of color correction and sharpness enhancement. These methods are expected to be useful for monitoring underwater work and for the autonomous operation of unmanned vehicles by providing clearer views of underwater scenes. -
dc.description.uri 2 -
dc.language English -
dc.publisher The Korean Society of Ocean Engineers (한국해양공학회) -
dc.title Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement -
dc.type Article -
dc.citation.endPage 40 -
dc.citation.startPage 32 -
dc.citation.title Journal of Ocean Engineering and Technology -
dc.citation.volume 36 -
dc.citation.number 1 -
dc.contributor.alternativeName 김홍기 -
dc.contributor.alternativeName 서정민 -
dc.contributor.alternativeName 김수미 -
dc.identifier.bibliographicCitation Journal of Ocean Engineering and Technology, v.36, no.1, pp.32 - 40 -
dc.identifier.doi 10.26748/ksoe.2021.095 -
dc.identifier.kciid ART002813348 -
dc.description.journalClass 2 -
dc.description.isOpenAccess N -
dc.subject.keywordAuthor Generative adversarial networks -
dc.subject.keywordAuthor Image fusion -
dc.subject.keywordAuthor Image enhancement -
dc.subject.keywordAuthor Underwater optical image -
dc.subject.keywordAuthor Underwater image deep learning techniques -
dc.description.journalRegisteredClass kci -
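The abstract describes UCIQE as a weighted combination of chroma, luminance contrast, and saturation measured in a perceptual color space. A minimal sketch of such a metric is shown below, assuming a CIELab input image; the weighting coefficients and the percentile-based contrast term follow commonly cited implementations of UCIQE (Yang and Sowmya, 2015) and should be verified against the original paper, and the function name `uciqe` is illustrative.

```python
import numpy as np

# Commonly cited UCIQE weights (assumption; check against Yang & Sowmya, 2015).
C1, C2, C3 = 0.4680, 0.2745, 0.2576

def uciqe(lab):
    """UCIQE-style score for a CIELab image of shape (H, W, 3).

    lab[..., 0] = L (luminance), lab[..., 1] = a, lab[..., 2] = b.
    """
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    chroma = np.sqrt(a**2 + b**2)
    sigma_c = chroma.std()  # standard deviation of chroma
    # Luminance contrast: spread between the top and bottom 1% of L values.
    con_l = np.percentile(L, 99) - np.percentile(L, 1)
    # Mean saturation, with a small epsilon to avoid division by zero.
    sat = chroma / np.maximum(np.sqrt(chroma**2 + L**2), 1e-8)
    mu_s = sat.mean()
    return C1 * sigma_c + C2 * con_l + C3 * mu_s

# Toy usage on a random "Lab" array (real use requires RGB-to-Lab conversion).
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 100.0, size=(64, 64, 3))
score = uciqe(img)
```

Higher scores indicate more colorful, higher-contrast images, which is why the enhanced outputs of CycleGAN and UGAN score above the other methods in the reported comparison.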
Appears in Collections:
Marine Industry Research Division > Maritime ICT & Mobility Research Department > 1. Journal Articles
Files in This Item:
There are no files associated with this item.


Items in ScienceWatch@KIOST are protected by copyright, with all rights reserved, unless otherwise indicated.
