VisNet: Spatiotemporal self-attention-based U-Net with multitask learning for joint visibility and fog occurrence forecasting

DC Field Value Language
dc.contributor.author Kim, Jinah -
dc.contributor.author Cha, Jieun -
dc.contributor.author Kim, Taekyung -
dc.contributor.author Lee, Hyesook -
dc.contributor.author Yu, Ha-Yeong -
dc.contributor.author Suh, Myoung-Seok -
dc.date.accessioned 2024-07-29T00:30:02Z -
dc.date.available 2024-07-29T00:30:02Z -
dc.date.created 2024-07-29 -
dc.date.issued 2024-10 -
dc.identifier.issn 0952-1976 -
dc.identifier.uri https://sciwatch.kiost.ac.kr/handle/2020.kiost/45797 -
dc.description.abstract To provide skillful prediction of horizontal visibility and fog occurrence over consecutive 12-h-ahead forecasts at hourly intervals, a spatiotemporal self-attention-based U-Net architecture with multitask learning is proposed and applied over the entire Korean Peninsula. The proposed spatiotemporal learning framework facilitates capturing spatiotemporal teleconnections and lags among multiple variables drawn from numerical reanalysis grid data over the Korean Peninsula and from in-situ measurements at 155 automatic weather station locations. In addition, multitask learning, which simultaneously performs a regression task for predicting visibility distance and a classification task for predicting fog occurrence, is applied to mitigate the data imbalance caused by rare hazardous events: sharing a representation across the two tasks jointly characterizes low visibility and fog occurrence and further generalizes prediction performance. Extensive ablation studies and comparisons with state-of-the-art (SOTA) models are conducted to determine the combination of input variables, input/output sequence lengths, data source, dataset spatial resolution, degree of joint learning across tasks, and network architecture that yields the optimal model and experimental conditions. Moreover, a three-dimensional analysis over geographical location, land-use purpose, and temporal parameters such as season, horizontal visibility distance threshold, and weather code class is performed using evaluation metrics suitable for the regression and classification tasks of predicting low visibility and fog. Furthermore, the reliability of the model is examined through trained attention maps and predicted fog-event probabilities compared against actual fog occurrences. Compared with the SOTA models, the proposed model achieves an average root-mean-square error improvement of about 380 m for horizontal visibility distance prediction and an improvement of about 6% in fog occurrence classification accuracy for the 1-h-ahead forecast. © 2024 Elsevier Ltd -
dc.description.uri 1 -
dc.language English -
dc.publisher Pergamon Press Ltd. -
dc.title VisNet: Spatiotemporal self-attention-based U-Net with multitask learning for joint visibility and fog occurrence forecasting -
dc.type Article -
dc.citation.title Engineering Applications of Artificial Intelligence -
dc.citation.volume 136 -
dc.contributor.alternativeName 김진아 -
dc.contributor.alternativeName 김태경 -
dc.identifier.bibliographicCitation Engineering Applications of Artificial Intelligence, v.136 -
dc.identifier.doi 10.1016/j.engappai.2024.108967 -
dc.identifier.scopusid 2-s2.0-85199042519 -
dc.type.docType Article -
dc.description.journalClass 1 -
dc.description.isOpenAccess N -
dc.subject.keywordAuthor Multi-step-ahead forecasts -
dc.subject.keywordAuthor Multitask learning -
dc.subject.keywordAuthor Spatiotemporal self-attention-based U-net -
dc.subject.keywordAuthor Atmospheric visibility prediction -
dc.subject.keywordAuthor Fog detection -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
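
Note: The abstract above describes multitask learning that shares a representation between a visibility-distance regression task and a fog-occurrence classification task. The following is a minimal sketch of that idea, assuming a PyTorch setting; the simple convolutional backbone, layer sizes, class and function names (JointVisibilityFogModel, multitask_loss), and loss weight alpha are illustrative assumptions, not the VisNet architecture or the authors' implementation.

# Illustrative sketch only (not the authors' code): a shared backbone feeding two
# task heads, trained with a weighted sum of regression and classification losses.
import torch
import torch.nn as nn

class JointVisibilityFogModel(nn.Module):
    def __init__(self, in_channels: int = 8, hidden: int = 32):
        super().__init__()
        # Shared representation (placeholder standing in for the spatiotemporal
        # self-attention U-Net encoder described in the paper).
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Task heads: visibility distance (regression) and fog occurrence (binary classification).
        self.visibility_head = nn.Linear(hidden, 1)
        self.fog_head = nn.Linear(hidden, 1)

    def forward(self, x):
        features = self.backbone(x)
        return self.visibility_head(features), self.fog_head(features)

def multitask_loss(vis_pred, fog_logit, vis_true, fog_true, alpha: float = 0.5):
    # Weighted sum of MSE (visibility regression) and BCE (fog classification) losses;
    # alpha is an assumed illustrative weight, not a value from the paper.
    reg = nn.functional.mse_loss(vis_pred.squeeze(-1), vis_true)
    cls = nn.functional.binary_cross_entropy_with_logits(fog_logit.squeeze(-1), fog_true)
    return alpha * reg + (1.0 - alpha) * cls

# Minimal usage with random tensors shaped (batch, channels, height, width).
model = JointVisibilityFogModel()
x = torch.randn(4, 8, 32, 32)
vis_pred, fog_logit = model(x)
loss = multitask_loss(vis_pred, fog_logit, torch.rand(4) * 10_000, torch.randint(0, 2, (4,)).float())
loss.backward()

The point of the shared backbone is that gradients from the (more balanced) visibility regression task regularize the representation used for the rarer fog-occurrence class, which is how the abstract frames the mitigation of data imbalance.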
Appears in Collections:
Sea Power Enhancement Research Division > Coastal Disaster & Safety Research Department > 1. Journal Articles
Files in This Item:
There are no files associated with this item.
