Publications (10 of 32)
Chandel, K., Seipel, S., Åhlén, J. & Roghe, A. (2025). Augmented Reality for PCB Component Identification and Localization. Applied Sciences, 15(11), Article ID 6331.
Augmented Reality for PCB Component Identification and Localization
2025 (English). In: Applied Sciences, E-ISSN 2076-3417, Vol. 15, no. 11, article id 6331. Article in journal (Refereed). Published.
Abstract [en]

This study evaluates the effectiveness of augmented reality (AR), using the Microsoft™ HoloLens™ 2, for identifying and localizing PCB components compared to traditional PDF-based methods. Two experiments examined the influence of user expertise, viewing angles, and component sizes on accuracy and usability. The results indicate that AR improved identification accuracy and user experience for non-experts, although it was slower than traditional methods for experienced users. Optimal performance was achieved at 90° viewing angles, while accuracy declined significantly at oblique angles. Medium-sized components received the highest confidence scores, suggesting favorable visibility and recognition characteristics within this group, though further evaluation with a broader component distribution is warranted. Participant feedback highlighted the system’s intuitive interface and effective guidance but also noted challenges with marker stability, visual discomfort, and ergonomic limitations. These findings suggest that AR can enhance training and reduce errors in electronics manufacturing, although refinements in marker rendering and user onboarding are necessary to support broader adoption. This research provides empirical evidence on the role of AR in supporting user-centered design and improving task performance in industrial electronics workflows.

Place, publisher, year, edition, pages
MDPI, 2025
Keywords
augmented reality (AR); PCB assembly; component identification; Microsoft™ HoloLens™ 2; human-centered design; electronics manufacturing; visual guidance
National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:hig:diva-47061 (URN); 10.3390/app15116331 (DOI); 001505728100001
Funder
European Regional Development Fund (ERDF), 20201871
Available from: 2025-06-05. Created: 2025-06-05. Last updated: 2025-06-19. Bibliographically approved.
Åhlén, J. (2023). Burned area prediction using smoke plume detection from high spatial resolution imagery. In: Trofymchuk O., Rivza B. (Eds.), International Multidisciplinary Scientific GeoConference Surveying Geology and Mining Ecology Management, SGEM. Paper presented at 23rd International Multidisciplinary Scientific Geoconference: Informatics, Geoinformatics and Remote Sensing, SGEM 2023, Albena, Bulgaria, 3-9 July 2023 (pp. 145-152). Vol. 21.
Burned area prediction using smoke plume detection from high spatial resolution imagery
2023 (English). In: International Multidisciplinary Scientific GeoConference Surveying Geology and Mining Ecology Management, SGEM / [ed] Trofymchuk O., Rivza B., 2023, Vol. 21, p. 145-152. Conference paper, Published paper (Refereed).
Abstract [en]

The fast-spreading wildfire engulfs the dense parched flora and all obstructions in its way, transforming a woodland into a volatile reservoir of combustible materials. Once ignited, fires can expand at a velocity of up to 23 km/h. As flames spread across vegetation and woodlands, they have the potential to become self-sustaining, propagating sparks and embers that can spawn smaller fires miles away. The proximity of the burning materials to the observer has a direct impact on the density of smoke produced by the fire. This relationship is crucial for fire management teams and emergency responders and helps them assess the severity of a fire, predict its behavior, and make informed decisions regarding evacuation measures, resource allocation, and the protection of affected communities and ecosystems. Drones are valuable tools in the fight against forest fires. They can capture high-resolution imagery, thermal imaging, and video footage, supplying insights into the properties, behavior, and direction of the fire. By employing classical image processing techniques, it is possible to analyze these images and promptly determine the extent of land cover affected. According to the Swedish Civil Contingencies Agency, more than 25,000 ha of forest burned down during the period of 2012-2021, which resulted in severe damage costs. The presence of a reliable and easily accessible smoke detection and assessment tool could significantly reduce the impact of wildfires. This study utilizes low- and mid-level image processing techniques to analyze the domain of wildfires, leveraging smoke properties to estimate the extent of land affected by the flames.
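The abstract above describes a classical low- and mid-level image processing approach to estimating the affected area from drone imagery. As a hedged illustration of that general idea (not the paper's actual pipeline), the following Python sketch thresholds smoke-like pixels in an RGB drone image and converts the pixel count to hectares; the brightness and saturation thresholds, the synthetic image and the ground sampling distance are all assumptions.

```python
# Illustrative sketch only: estimate smoke-covered ground area from a drone image.
# Assumes an RGB image as a float array in [0, 1] and a known ground sampling
# distance (metres per pixel); thresholds are placeholders, not values from the paper.
import numpy as np

def smoke_area_hectares(rgb: np.ndarray, gsd_m: float,
                        brightness_min: float = 0.6,
                        saturation_max: float = 0.25) -> float:
    """Rough smoke-extent proxy: bright, weakly saturated pixels."""
    brightness = rgb.mean(axis=2)                   # simple intensity proxy
    saturation = rgb.max(axis=2) - rgb.min(axis=2)  # crude saturation proxy
    smoke_mask = (brightness > brightness_min) & (saturation < saturation_max)
    pixel_area_m2 = gsd_m ** 2
    return smoke_mask.sum() * pixel_area_m2 / 10_000.0  # m^2 -> hectares

# Example with synthetic data (a 1000 x 1000 px image at 0.5 m/pixel):
img = np.random.rand(1000, 1000, 3)
print(f"Estimated smoke-covered area: {smoke_area_hectares(img, 0.5):.1f} ha")
```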

Keywords
burned area calculation; detection; drone images; smoke
National Category
Earth and Related Environmental Sciences
Identifiers
urn:nbn:se:hig:diva-43423 (URN); 10.5593/sgem2023/2.1/s08.19 (DOI); 2-s2.0-85177869471 (Scopus ID); 978-619-7603-57-6 (ISBN)
Conference
23rd International Multidisciplinary Scientific Geoconference: Informatics, Geoinformatics and Remote Sensing, SGEM 2023, Albena, Bulgaria, 3-9 July 2023
Available from: 2023-12-11. Created: 2023-12-11. Last updated: 2025-02-07. Bibliographically approved.
Chandel, K., Åhlén, J. & Seipel, S. (2023). Evaluating the Tracking Abilities of Microsoft HoloLens-1 for Small-Scale Industrial Processes. Paper presented at ICMAT 2023: International Conference on Computerized Manufacturing Automation Technologies, Stockholm, July 6-7, 2023.
Evaluating the Tracking Abilities of Microsoft HoloLens-1 for Small-Scale Industrial Processes
2023 (English). Conference paper, Oral presentation with published abstract (Refereed).
Abstract [en]

This study evaluates the accuracy of Microsoft HoloLens (Version 1) for small-scale industrial activities, comparing its measurements to ground truth data from a Kuka Robotics arm. Two experiments were conducted to assess its position tracking capabilities, revealing that the HoloLens device is effective for measuring the position of dynamic objects with small dimensions. However, its precision is affected by the velocity of the trajectory and its position within the device's field of view. While the HoloLens device may be suitable for small-scale tasks, its limitations for more complex and demanding applications requiring high precision and accuracy must be considered. The findings can guide the use of HoloLens devices in industrial applications and contribute to the development of more effective and reliable position-tracking systems.
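The comparison against the Kuka arm's ground truth suggests a simple per-sample position-error evaluation. The sketch below is a minimal, assumed example of such metrics; it presumes the two trajectories are already time-aligned and expressed in the same coordinate frame, which is not detailed in the abstract.

```python
# Minimal sketch (assumed, not the study's evaluation code): per-sample Euclidean
# error between HoloLens-reported positions and robot-arm ground truth, assuming
# both are (N, 3) arrays in metres, time-aligned and in a common coordinate frame.
import numpy as np

def tracking_errors(hololens_xyz: np.ndarray, ground_truth_xyz: np.ndarray) -> dict:
    per_sample = np.linalg.norm(hololens_xyz - ground_truth_xyz, axis=1)
    return {
        "mean_error_m": float(per_sample.mean()),
        "rmse_m": float(np.sqrt((per_sample ** 2).mean())),
        "max_error_m": float(per_sample.max()),
    }

# Example with a synthetic trajectory and simulated tracking noise:
ground_truth = np.cumsum(np.random.randn(500, 3) * 0.001, axis=0)
measured = ground_truth + np.random.randn(500, 3) * 0.003
print(tracking_errors(measured, ground_truth))
```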

Keywords
Augmented reality (AR), Microsoft HoloLens, object tracking, industrial processes, manufacturing processes
National Category
Geosciences, Multidisciplinary
Identifiers
urn:nbn:se:hig:diva-42841 (URN)
Conference
ICMAT 2023: International Conference on Computerized Manufacturing Automation Technologies, Stockholm, July 6-7, 2023
Available from: 2023-08-15. Created: 2023-08-15. Last updated: 2023-08-16. Bibliographically approved.
Chandel, K., Åhlén, J. & Seipel, S. (2022). Augmented Reality and Indoor Positioning in Context of Smart Industry: A Review. Management and Production Engineering Review, 13(4), 72-87
Augmented Reality and Indoor Positioning in Context of Smart Industry: A Review
2022 (English). In: Management and Production Engineering Review, ISSN 2080-8208, E-ISSN 2082-1344, Vol. 13, no. 4, p. 72-87. Article, review/survey (Refereed). Published.
Abstract [en]

Presently, digitalization is causing continuous transformation of industrial processes. However, it does pose challenges, such as spatially contextualizing data from industrial processes. There are various methods for calculating and delivering real-time location data. Indoor positioning systems (IPS) are one such method, used to locate objects and people within buildings. They have the potential to improve digital industrial processes, but they are currently underutilized. In addition, augmented reality (AR) is a critical technology in today's digital industrial transformation. This article aims to investigate the use of IPS and AR in manufacturing, the methodologies and technologies employed, and the issues and limitations encountered, and to identify future research opportunities. This study concludes that, while there have been many studies on IPS and navigation AR, there has been a dearth of research efforts combining the two. Furthermore, because controlled environments may not expose users to the practical issues they may face, more research in a real-world manufacturing environment is required to produce more reliable and sustainable results.

Place, publisher, year, edition, pages
Polish Academy of Sciences, 2022
Keywords
Industrial Augmented Reality, Indoor Positioning Systems, Smart Manufacturing, Smart Factory.
National Category
Computer and Information Sciences
Research subject
Intelligent Industry
Identifiers
urn:nbn:se:hig:diva-40651 (URN); 10.24425/mper.2022.142396 (DOI); 000961972800007; 2-s2.0-85168687249 (Scopus ID)
Projects
Spatial Data Innovation (SDI)
Funder
Region Gävleborg, 20201871; Swedish Agency for Economic and Regional Growth
Available from: 2023-01-02. Created: 2023-01-02. Last updated: 2023-09-04. Bibliographically approved.
Åhlén, J. (2022). Smoke and fog classification in forest monitoring using high spatial resolution images. In: 22nd International Multidisciplinary Scientific GeoConference SGEM 2022, July 2022, Albena, Bulgaria. Paper presented at 22nd International Multidisciplinary Scientific GeoConference SGEM 2022 (pp. 131-138). Vol. 22.
Smoke and fog classification in forest monitoring using high spatial resolution images
2022 (English). In: 22nd International Multidisciplinary Scientific GeoConference SGEM 2022, July 2022, Albena, Bulgaria, 2022, Vol. 22, p. 131-138. Conference paper, Published paper (Refereed).
Abstract [en]

Forest fires cause major damage to human habitats and forest ecosystems. Early detection may prevent serious consequences of fast fire spread. Although there are many smoke detection algorithms employed by various optical remote sensing systems, there is still major misdetection in images containing fog, since fog exhibits visual characteristics similar to smoke. Furthermore, when monitoring dense forests, many smoke detection algorithms fail to achieve robust recognition because fog covers the trees at dawn. There have been more or less successful attempts to separate smoke from fog in optical imagery; however, these algorithms are strongly tied to a specific application area or use a semi-automatic approach. This work proposes a novel smoke and fog separation algorithm based on a color space model calculation followed by rule-based shape analysis. In addition, the internal properties of the smoke candidate areas are examined for linear attenuation towards higher-energy wavelengths. Those areas are then investigated for internal shape properties such as convex hull and eccentricity. Several tests conducted on various high-resolution aerial images suggest that the system is effective in differentiating smoke from fog and can thus be considered robust for early fire detection in forest areas.
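As a rough sketch of the rule-based shape analysis mentioned above (convex hull and eccentricity of smoke-candidate regions), the following Python example filters labelled regions of a binary candidate mask with scikit-image; the thresholds and the input mask are illustrative assumptions rather than the paper's actual rules.

```python
# Illustrative sketch: keep candidate regions whose shape is plausible for smoke.
# Solidity (region area / convex hull area) and eccentricity stand in for the
# convex-hull and eccentricity rules; thresholds are assumptions.
import numpy as np
from skimage.measure import label, regionprops

def filter_smoke_candidates(candidate_mask: np.ndarray,
                            max_eccentricity: float = 0.95,
                            min_solidity: float = 0.5,
                            min_area_px: int = 200) -> list:
    labelled = label(candidate_mask)
    keep = []
    for region in regionprops(labelled):
        compact_enough = region.solidity >= min_solidity
        not_a_thin_streak = region.eccentricity <= max_eccentricity
        large_enough = region.area >= min_area_px
        if compact_enough and not_a_thin_streak and large_enough:
            keep.append(region.label)
    return keep

# Example on a synthetic binary mask containing one blob-like candidate:
mask = np.zeros((300, 300), dtype=bool)
mask[50:150, 60:180] = True
print(filter_smoke_candidates(mask))  # -> [1]
```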

Keywords
smoke, fog, detection, image, rule-based
National Category
Computer Sciences
Research subject
Intelligent Industry
Identifiers
urn:nbn:se:hig:diva-40728 (URN); 10.5593/sgem2022/2.1/s08.16 (DOI); 2-s2.0-85151048254 (Scopus ID); 978-619-7603-40-8 (ISBN)
Conference
22nd International Multidisciplinary Scientific GeoConference SGEM 2022
Available from: 2023-01-11. Created: 2023-01-11. Last updated: 2023-04-13. Bibliographically approved.
Seffers, G., Åhlén, J., Seipel, S. & Ooms, K. (2021). Assessing Damage – Can the Crowd Interpret Colour and 3D Information? Cartographic Journal, 58(1), 69-82
Assessing Damage – Can the Crowd Interpret Colour and 3D Information?
2021 (English). In: Cartographic Journal, ISSN 0008-7041, E-ISSN 1743-2774, Vol. 58, no. 1, p. 69-82. Article in journal (Refereed). Published.
Abstract [en]

The goal of this study is to investigate how efficiently and effectively collapsed buildings, resulting from a disaster, can be localized by a general crowd. Two types of visualization parameters are evaluated in an online user study: (1) greyscale images (indicating height information) versus true colours; (2) variation in the vertical viewing angle (0°, 30° and 60°). Additionally, the influence of map use expertise on how the visualizations are interpreted is investigated. The results indicate that the use of the greyscale image helps to locate collapsed buildings in an efficient and effective manner. The use of the 60° viewing angle is the least appropriate. A person with map use expertise will prefer the greyscale image over the colour image. To confirm the benefits of the use of three-dimensional visualizations and the use of the colour image, more research is needed.

Place, publisher, year, edition, pages
Taylor & Francis, 2021
Keywords
Post-disaster visualizations, user study, crowdsourcing, change detection
National Category
Civil Engineering
Research subject
Sustainable Urban Development
Identifiers
urn:nbn:se:hig:diva-34078 (URN); 10.1080/00087041.2020.1714277 (DOI); 000576489700001; 2-s2.0-85092113780 (Scopus ID)
Available from: 2020-10-08. Created: 2020-10-08. Last updated: 2023-02-17. Bibliographically approved.
Ooms, K., Åhlén, J. & Seipel, S. (2018). Detecting Collapsed Buildings in Case of Disaster: Which Visualisation Works Best? In: Kiefer, Peter; Giannopoulos, Ioannis; Göbel, Fabian; Raubal, Martin; Duchowski, Andrew T. (Eds.), Eye Tracking for Spatial Research: Proceedings of the 3rd International Workshop. Paper presented at the 3rd International Workshop on Eye Tracking for Spatial Research, January 14, 2018, Zurich, Switzerland. Zurich.
Detecting Collapsed Buildings in Case of Disaster: Which Visualisation Works Best?
2018 (English). In: Eye Tracking for Spatial Research: Proceedings of the 3rd International Workshop / [ed] Kiefer, Peter; Giannopoulos, Ioannis; Göbel, Fabian; Raubal, Martin; Duchowski, Andrew T., Zurich, 2018. Conference paper, Published paper (Refereed).
Abstract [en]

A user study is conducted to evaluate the efficiency and effectiveness of two types of visualizations for identifying damage sites in case of disaster. The test consists of 36 trials (18 for each visualisation), and in each trial an area of 1 × 1 km, located in Ghent, is displayed on a screen. This image shows the combined height information from before and after the disaster. The first visualisation, page flipping, is based on greyscale images with height information from the pre- and post-disaster situation between which users can switch manually. The second visualisation, the difference image, is the result of subtracting the heights (before versus after) and assigning a blue-white-red colour ramp. In order to simulate the urgency with which the data is captured, systematic and random imperfections are introduced in the post-disaster data. All participants' mouse and key interactions are logged, which is further complemented by the registration of their eye movements. This gives insights into the visualizations' efficiency and effectiveness and the overall search strategies of the participants.
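The difference-image visualisation lends itself to a short illustration. The Python sketch below, using synthetic height rasters rather than the study's data, subtracts the pre-disaster heights from the post-disaster heights and renders the result with a diverging blue-white-red colour ramp.

```python
# Illustrative sketch of a "difference image": post-disaster heights minus
# pre-disaster heights, rendered with a blue-white-red colour ramp.
# The rasters and value ranges are synthetic placeholders.
import numpy as np
import matplotlib.pyplot as plt

pre_height = np.random.rand(500, 500) * 20    # placeholder pre-disaster heights (m)
post_height = pre_height.copy()
post_height[200:260, 180:250] -= 8.0          # simulate a collapsed block

difference = post_height - pre_height         # negative values = height loss
limit = np.abs(difference).max()

plt.imshow(difference, cmap="bwr", vmin=-limit, vmax=limit)
plt.colorbar(label="Height change (m)")
plt.title("Difference image: post- minus pre-disaster heights")
plt.show()
```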

Place, publisher, year, edition, pages
Zurich, 2018
Keywords
user study; mouse & key logging; eye tracking; emergency response; damage assessment
National Category
Computer Systems
Research subject
Sustainable Urban Development
Identifiers
urn:nbn:se:hig:diva-29175 (URN); 10.3929/ethz-b-000222480 (DOI)
Conference
3rd International Workshop on Eye Tracking for Spatial Research, January 14, 2018, Zurich, Switzerland
Available from: 2019-01-25. Created: 2019-01-25. Last updated: 2023-02-17. Bibliographically approved.
Ooms, K., Åhlén, J. & Seipel, S. (2018). Efficiency and effectiveness in case of disaster: a visual damage assessment test. In: Proceedings of the International Cartographic Association (Proceedings of the ICA). Paper presented at the 28th International Cartographic Conference, 2–7 July 2017, Washington D.C., USA. Vol. 1, Article ID 86.
Efficiency and effectiveness in case of disaster: a visual damage assessment test
2018 (English). In: Proceedings of the International Cartographic Association (Proceedings of the ICA), 2018, Vol. 1, article id 86. Conference paper, Published paper (Refereed).
Abstract [en]

A user study is conducted to evaluate the efficiency and effectiveness of two types of visualizations for identifying damage sites in case of disaster. The test consists of 36 trials (18 for each visualisation), and in each trial an area of 1 × 1 km, located in Ghent, is displayed on a screen. This image shows the combined height information from before and after the disaster. The first visualisation, page flipping, is based on greyscale images with height information from the pre- and post-disaster situation between which users can switch manually. The second visualisation, the difference image, is the result of subtracting the heights (before versus after) and assigning a blue-white-red colour ramp. In order to simulate the urgency with which the data is captured, systematic and random imperfections are introduced in the post-disaster data. All participants' mouse and key interactions are logged, which is further complemented by the registration of their eye movements. This gives insights into the visualizations' efficiency and effectiveness and the overall search strategies of the participants.

Keywords
User study, mouse & key logging, eye tracking, emergency response, damage assessment
National Category
Computer Systems
Research subject
Sustainable Urban Development
Identifiers
urn:nbn:se:hig:diva-29174 (URN); 10.5194/ica-proc-1-86-2018 (DOI)
Conference
The 28th International Cartographic Conference, 2–7 July 2017, Washington D.C., USA
Available from: 2019-01-25. Created: 2019-01-25. Last updated: 2023-02-17. Bibliographically approved.
Åhlén, J. & Seipel, S. (2018). Mapping of roof types in orthophotos using feature descriptors. In: International Multidisciplinary Scientific GeoConference Surveying Geology and Mining Ecology Management, SGEM: Proceedings of the International Multidisciplinary Scientific GeoConference SGEM. Paper presented at 18th International Multidisciplinary Scientific GeoConference SGEM, 30th June - 9th July 2018, Albena, Bulgaria (pp. 285-291). Vol. 18.
Mapping of roof types in orthophotos using feature descriptors
2018 (English). In: International Multidisciplinary Scientific GeoConference Surveying Geology and Mining Ecology Management, SGEM: Proceedings of the International Multidisciplinary Scientific GeoConference SGEM, 2018, Vol. 18, p. 285-291. Conference paper, Published paper (Refereed).
Abstract [en]

In the context of urban planning, it is very important to estimate the nature of the roof of every building and, in particular, to distinguish between flat roofs and gable ones. This analysis is necessary in seismically active areas. Also, in the assessment of renewable energy projects such as solar energy, the shape of roofs must be accurately retrieved. In order to perform this task automatically on a large scale, aerial photos provide a useful solution. The goal of this research project is the development of an algorithm for accurate mapping of two different roof types in digital aerial images. The algorithm proposed in this paper includes several components: a pre-processing step to reduce illumination differences of individual roof surfaces, statistical moments calculation, and color indexing. Roof models are created and saved as masks with feature-specific descriptors. The masks are then used to mark areas that contain elements of the different roof types (e.g. gable and hip). The final orthophoto visualizes the accurate position of each of the roof types. The result is evaluated using the precision-recall method.
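The descriptor idea above (statistical moments plus colour information) can be illustrated with a small sketch. The feature choice and the nearest-descriptor matching rule below are assumptions made for illustration, not the paper's exact pipeline.

```python
# Illustrative sketch: per-channel statistical moments of a roof patch compared
# against stored roof-model descriptors; features and matching rule are assumptions.
import numpy as np

def moment_descriptor(patch: np.ndarray) -> np.ndarray:
    """Mean, standard deviation and skewness for each colour channel of an (H, W, 3) patch."""
    feats = []
    for c in range(patch.shape[2]):
        x = patch[..., c].ravel().astype(float)
        mu, sigma = x.mean(), x.std() + 1e-9
        skewness = np.mean(((x - mu) / sigma) ** 3)
        feats.extend([mu, sigma, skewness])
    return np.array(feats)

def classify_roof(patch: np.ndarray, models: dict) -> str:
    """Assign the roof-type label whose stored descriptor is nearest (Euclidean)."""
    d = moment_descriptor(patch)
    return min(models, key=lambda name: np.linalg.norm(d - models[name]))

# Example with synthetic patches standing in for roof-type models:
models = {"gable": moment_descriptor(np.random.rand(40, 40, 3) * 0.4),
          "flat": moment_descriptor(np.random.rand(40, 40, 3) * 0.9)}
print(classify_roof(np.random.rand(40, 40, 3) * 0.85, models))
```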

Series
Proceedings of the International Multidisciplinary Scientific GeoConference SGEM, ISSN 1314-2704 ; 2.2
Keywords
urban planning, roofs, buildings, algorithms, aerial photography, classification, orthophoto, roof types, segmentation
National Category
Civil Engineering
Research subject
Sustainable Urban Development; Intelligent Industry
Identifiers
urn:nbn:se:hig:diva-28694 (URN); 10.5593/sgem2018/2.2/S08.036 (DOI); 2-s2.0-85058885965 (Scopus ID)
Conference
18th International Multidisciplinary Scientific GeoConference SGEM, 30th June - 9th July 2018, Albena, Bulgaria
Available from: 2018-11-28. Created: 2018-11-28. Last updated: 2023-02-17. Bibliographically approved.
Åhlén, J., Seipel, S. & Kautz, M.-L. (2017). Data source evaluation for shoreline deliniation applications. In: International Multidisciplinary Scientific GeoConference Surveying Geology and Mining Ecology Management, SGEM: Conference proceedings. Paper presented at 17th International Multidisciplinary Scientific GeoConference SGEM 2017, 27 June - 6 July 2017, Albena, Bulgaria (pp. 849-858). Vol. 17(2-3).
Data source evaluation for shoreline deliniation applications
2017 (English). In: International Multidisciplinary Scientific GeoConference Surveying Geology and Mining Ecology Management, SGEM: Conference proceedings, 2017, Vol. 17, no. 2-3, p. 849-858. Conference paper, Published paper (Refereed).
Abstract [en]

This paper proposes an evaluation of data acquired with various sensors and used in coastal water segmentation applications. Correct monitoring of coastal changes in dynamic coastal environments strongly depends on accurate and frequent detection of the shoreline position. Automatic shoreline delineation methods are preferable, especially in terms of time, cost, labor intensiveness and the difficulties of in-situ measurements. Two main issues have been encountered within this application field: the quality of the data and the segmentation algorithms. In this work, the potential benefits of various data sources, including optical and active sensors, for the extraction of shorelines have been investigated. The goal of shoreline detection from digital data sources is to obtain information as efficiently as possible and as reliably as necessary. Starting with that observation, the paper discusses the effectiveness of coastal information extraction provided by different data sources. This question is especially important to address since we observe a fast development of high spatial resolution data acquisition. There are many segmentation algorithms described in the field of image processing, and yet there is currently no single theory or method, no universal segmentation framework, that can be applied to all images to precisely and robustly extract shorelines. Neither is there a uniform standard for the assessment of segmentation results, and this process still largely relies on visual analysis and personal judgment. Out of the myriad of image segmentation algorithms, we chose those most frequently and successfully applied within the application field and considering the data sources. For optical sensor data, the most frequently used methods are NDWI (Normalized Difference Water Index) and thresholding techniques. We do not aim to create yet another method to segment out particular objects from remotely sensed data and then tailor it to work efficiently on that data set. Instead, we evaluate the data quality with regard to the given application field. The case study is carried out on a 10 km coastal stretch facing the Baltic Sea (Sweden) and belonging to the Municipality of Gävle. In situ measurements were acquired to evaluate the extracted coastlines, and comparisons with the reference were performed based on the average mean distance. A conclusion is drawn regarding the most reliable data source for this particular application of shoreline delineation.
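The NDWI-plus-thresholding baseline and the average-distance comparison mentioned above can be sketched briefly. The bands, the zero threshold and the reference shoreline points in the Python example below are illustrative assumptions, not the study's data or parameters.

```python
# Illustrative sketch: NDWI water masking and an average-distance comparison
# between an extracted shoreline and reference points; inputs are synthetic.
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Water Index: (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir + 1e-9)

def water_mask(green: np.ndarray, nir: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Pixels with NDWI above the (assumed) threshold are treated as water."""
    return ndwi(green, nir) > threshold

def mean_distance(extracted_pts: np.ndarray, reference_pts: np.ndarray) -> float:
    """Average distance from each extracted shoreline point to its nearest
    reference point; both inputs are (N, 2) arrays of map coordinates."""
    diffs = extracted_pts[:, None, :] - reference_pts[None, :, :]
    nearest = np.linalg.norm(diffs, axis=2).min(axis=1)
    return float(nearest.mean())

# Example with synthetic bands and shoreline points:
green, nir = np.random.rand(200, 200), np.random.rand(200, 200)
print("water pixels:", int(water_mask(green, nir).sum()))
print("mean distance:", mean_distance(np.random.rand(50, 2) * 100,
                                      np.random.rand(80, 2) * 100))
```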

Series
International Multidisciplinary Scientific GeoConference Surveying Geology and Mining Ecology Management, SGEM : Conference proceedings, ISSN 1314-2704 ; 21
Keywords
Data source, Evaluation, Segmentation, Shoreline
National Category
Computer and Information Sciences; Earth and Related Environmental Sciences
Research subject
Sustainable Urban Development
Identifiers
urn:nbn:se:hig:diva-25424 (URN); 10.5593/sgem2017/21/S08.108 (DOI); 2-s2.0-85032471686 (Scopus ID); 978-619-7408-01-0 (ISBN)
Conference
17th International Multidisciplinary Scientific GeoConference SGEM 2017, 27 June - 6 July 2017, Albena, Bulgaria
Available from: 2017-10-18. Created: 2017-10-18. Last updated: 2025-06-10. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-5986-7464
