Assessment of Convolutional Neural Network Pre-Trained Models for Detection and Orientation of Cracks
2023 (English). In: Materials, E-ISSN 1996-1944, Vol. 16, no. 2, article id 826. Article in journal (Refereed). Published.
Abstract [en]
Failure due to cracks is a major structural safety issue for engineering constructions. Human inspection is the most common method for detecting cracks, although it is subjective and time-consuming. Crack detection and categorization are therefore key components of the inspection of civil engineering structures. Images can be classified automatically using convolutional neural networks (CNNs), a subtype of deep learning (DL), and a variety of pre-trained CNN architectures are available for image categorization. This study assesses seven pre-trained networks, namely GoogLeNet, MobileNet-V2, Inception-V3, ResNet18, ResNet50, ResNet101, and ShuffleNet, for crack detection and categorization. Images are classified as diagonal crack (DC), horizontal crack (HC), uncracked (UC), or vertical crack (VC). Each architecture is trained on 32,000 images divided equally among the four classes; the trained models are then tested on 100 images from each category and the results compared. Inception-V3 outperforms all the other models, with accuracies of 96%, 94%, 92%, and 96% for the DC, HC, UC, and VC classes, respectively. ResNet101 has the longest training time at 171 min, while ResNet18 has the shortest at 32 min. This research allows the best CNN architecture for automatic detection and orientation of cracks to be selected on the basis of accuracy and training time.
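The workflow the abstract describes, fine-tuning an ImageNet pre-trained backbone on a four-class crack dataset, can be sketched as follows. This is a minimal illustration in PyTorch/torchvision, not the authors' implementation: the paper does not specify a framework, and the crack_dataset directory layout, hyperparameters, and epoch count below are assumptions. ResNet18 is used here because it is the simplest of the seven assessed architectures; the same head-replacement pattern applies to the others.

    # Illustrative transfer-learning sketch (PyTorch/torchvision);
    # paths and hyperparameters are assumptions, not from the paper.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    # Four target classes: diagonal (DC), horizontal (HC),
    # uncracked (UC), and vertical (VC).
    NUM_CLASSES = 4

    # Standard ImageNet preprocessing for a pre-trained backbone.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Hypothetical layout: crack_dataset/train/<DC|HC|UC|VC>/*.jpg
    train_set = datasets.ImageFolder("crack_dataset/train", transform=preprocess)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

    # Load an ImageNet pre-trained ResNet18 and replace its final
    # fully connected layer with a four-class head.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # One illustrative fine-tuning loop; the epoch count is arbitrary.
    model.train()
    for epoch in range(5):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()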
Place, publisher, year, edition, pages
MDPI, 2023. Vol. 16, no. 2, article id 826
Keywords [en]
convolutional neural networks; orientation of cracks; crack detection; deep learning; pre-trained models
National Category
Civil Engineering
Identifiers
URN: urn:nbn:se:hig:diva-40752
DOI: 10.3390/ma16020826
ISI: 000918935500001
PubMedID: 36676563
Scopus ID: 2-s2.0-85146687498
OAI: oai:DiVA.org:hig-40752
DiVA, id: diva2:1727131
Available from: 2023-01-15 Created: 2023-01-15 Last updated: 2024-07-04 Bibliographically approved