Liu, Fei
Publications (6 of 6)
Liu, F. & Seipel, S. (2018). Precision study on augmented reality-based visual guidance for facility management tasks. Automation in Construction, 90, 79-90
2018 (English). In: Automation in Construction, ISSN 0926-5805, E-ISSN 1872-7891, Vol. 90, p. 79-90. Article in journal (Refereed). Published.
Abstract [en]

One unique capability of augmented reality (AR) is to visualize hidden objects as a virtual overlay on the real objects that occlude them. This “X-ray vision” visualization metaphor has proved invaluable for operation and maintenance tasks such as locating utilities behind a wall. Locating occluded virtual objects requires users to estimate the closest projected positions of the virtual objects on their real occluders, an estimate that is generally affected by parallax. In this paper we study the task of locating virtual pipes behind a real wall with “X-ray vision”, with the goal of establishing relationships between task performance and the spatial factors causing parallax under different forms of visual augmentation. We introduce and validate a laser-based target designation method that is generally useful for AR-based interaction with augmented objects beyond arm's reach. The main findings include that people can mentally compensate for the parallax error when extrapolating positions of virtual objects onto the real surface, given traditional 3D depth cues for spatial understanding. This capability is, however, unreliable, and degrades as the viewing offset between the user and the virtual objects increases and as the distance between the virtual objects and their occluders grows. Experimental results also show that positioning performance is greatly improved, and unaffected by those factors, if the AR support provides visual guides indicating the closest projected positions of virtual objects on the surfaces of their real occluders.
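The parallax geometry described above can be made concrete with a small sketch: a virtual point behind a wall (taken as the plane z = 0 in this toy setup) is perceived on the wall where the view ray from the eye crosses that plane, while its true closest projection is the perpendicular foot point. All function names and the plane convention here are illustrative assumptions, not taken from the paper.

```python
import math

def ray_plane_hit(eye, target):
    """Point where the view ray from eye through target crosses the wall
    plane z = 0; eye and target are (x, y, z), target lies behind the wall."""
    ex, ey, ez = eye
    tx, ty, tz = target
    s = ez / (ez - tz)  # ray parameter at which z reaches 0
    return (ex + s * (tx - ex), ey + s * (ty - ey), 0.0)

def parallax_error(eye, target):
    """Distance between the perceived on-wall point (along the view ray)
    and the true closest projection (tx, ty, 0) of the hidden target."""
    hx, hy, _ = ray_plane_hit(eye, target)
    tx, ty, _ = target
    return math.hypot(hx - tx, hy - ty)
```

With the eye directly in front of the target the error vanishes; moving the eye sideways, or the target deeper behind the wall, grows the error, matching the offset and depth effects reported in the abstract.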

Keywords
Augmented reality “X-ray vision”, Experiment, Facility management, Positioning task, Precision study, Spatial judgment, Augmented reality, Experiments, Geometrical optics, Location, Office buildings, Positioning tasks, Precision studies, Spatial judgments, X-ray vision, X rays
National Category
Computer and Information Sciences Other Engineering and Technologies
Identifiers
urn:nbn:se:hig:diva-26238 (URN), 10.1016/j.autcon.2018.02.020 (DOI), 000430520300007 (), 2-s2.0-85042273035 (Scopus ID)
Available from: 2018-03-15. Created: 2018-03-15. Last updated: 2018-05-31. Bibliographically approved.
Liu, F. & Seipel, S. (2017). On the precision of third person perspective augmented reality for target designation tasks. Multimedia tools and applications, 76(14), 15279-15296
2017 (English). In: Multimedia tools and applications, ISSN 1380-7501, E-ISSN 1573-7721, Vol. 76, no 14, p. 15279-15296. Article in journal (Refereed). Published.
Abstract [en]

The availability of powerful consumer-level smart devices and off-the-shelf software frameworks has tremendously popularized augmented reality (AR) applications. However, since the built-in cameras typically have a rather limited field of view, it is usually preferable to position AR tools built upon these devices at a distance when large objects need to be tracked for augmentation. This arrangement makes it difficult or even impossible to physically interact with the augmented object. One solution is to adopt a third person perspective (TPP), in which the smart device shows in real time the object to be interacted with, the AR information and the user herself, all captured by a remote camera. Through a mental transformation between the user-centric coordinate space and the coordinate system of the remote camera, the user can directly interact with objects in the real world. To evaluate user performance in this cognitively demanding situation, we developed such an experimental TPP AR system and conducted experiments which required subjects to make markings on a whiteboard according to virtual marks displayed by the AR system. The same markings were also made manually with a ruler. We measured the precision of the markings as well as the time to accomplish the task. Our results show that although the AR approach was on average around half a centimeter less precise than the manual measurement, it was approximately three times as fast as its manual counterpart. Additionally, we found that subjects could quickly adapt to the mental transformation between the two coordinate systems.

Keywords
Augmented reality, Third person perspective, Target designation, Precision study, Experiment
National Category
Human Computer Interaction Computer Sciences
Identifiers
urn:nbn:se:hig:diva-22349 (URN), 10.1007/s11042-016-3817-0 (DOI), 000404609900004 (), 2-s2.0-84984843529 (Scopus ID)
Available from: 2016-09-05. Created: 2016-09-05. Last updated: 2018-03-13. Bibliographically approved.
Liu, F. & Seipel, S. (2015). Infrared-visible image registration for augmented reality-based thermographic building diagnostics. Visualization in Engineering, 3(1), Article ID 16.
2015 (English). In: Visualization in Engineering, ISSN 2213-7459, Vol. 3, no 1, article id 16. Article in journal (Refereed). Published.
Abstract [en]

Background: By virtue of their capability to measure temperature, thermal infrared cameras have been widely used in building diagnostics for detecting heat loss, air leakage, water damage, etc. However, the lack of visual detail in thermal infrared images makes complementary visible images a necessity. Therefore, it is often useful to register images of these two modalities for further inspection of buildings. Augmented reality (AR) technology, which supplements the real world with virtual objects, offers an ideal tool for presenting the combined results of thermal infrared and visible images. This paper addresses the problem of registering thermal infrared and visible façade images, which is an essential step towards developing an AR-based building diagnostics application. Methods: A novel quadrilateral feature is devised for this task, which models the shapes of commonly present façade elements such as windows. The features result from grouping edge line segments with the help of image perspective information, namely vanishing points. Our method adopts a forward selection algorithm to determine the feature correspondences needed for estimating the transformation model. During the formation of the feature correspondence set, the correctness of the selected feature correspondences at each step is verified by the quality of the resulting registration, which is based on the ratio of areas between the transformed features and the reference features. Results and conclusions: Quantitative evaluation of our method shows that registration errors are lower than errors reported in similar studies, and that registration performance is usable for most tasks in thermographic inspection of building façades.
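The area-ratio registration check mentioned in the Methods section can be illustrated with a minimal sketch: transform a quadrilateral feature through the estimated homography and compare its area with that of the reference feature. Function names and this plain shoelace/homography formulation are assumptions for illustration, not the paper's exact implementation.

```python
def quad_area(pts):
    """Shoelace formula for the area of a polygon given as (x, y) pairs."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography given as nested lists."""
    out = []
    for x, y in pts:
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append(((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
                    (H[1][0] * x + H[1][1] * y + H[1][2]) / w))
    return out

def area_ratio(H, src_quad, ref_quad):
    """Registration quality score in (0, 1]: ratio of the smaller to the
    larger of the transformed-feature and reference-feature areas."""
    a_t = quad_area(apply_homography(H, src_quad))
    a_r = quad_area(ref_quad)
    return min(a_t, a_r) / max(a_t, a_r)
```

A perfect transform yields a ratio of 1.0; a badly scaled one drives the score toward 0, so the forward selection step can reject the correspondence that produced it.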

Keywords
Multimodality image registration, Augmented reality, Thermal infrared imaging, Façade
National Category
Computer Vision and Robotics (Autonomous Systems) Human Computer Interaction Other Computer and Information Science
Identifiers
urn:nbn:se:hig:diva-20587 (URN), 10.1186/s40327-015-0028-0 (DOI), 2-s2.0-85044085447 (Scopus ID)
Available from: 2015-11-13. Created: 2015-11-13. Last updated: 2018-06-26. Bibliographically approved.
Liu, F. & Seipel, S. (2014). Detection of Façade Regions in Street View Images from Split-and-Merge of Perspective Patches. Journal of Image and Graphics, 2(1), 8-14
2014 (English). In: Journal of Image and Graphics, ISSN 2301-3699, Vol. 2, no 1, p. 8-14. Article in journal (Refereed). Published.
Abstract [en]

Identification of building façades in digital images is one of the central problems in mobile augmented reality (MAR) applications in the built environment. Directly analyzing the whole image can increase the difficulty of façade identification due to the presence of image regions that are not façade. This paper presents an automatic approach to façade region detection in a single street view image, as a pre-processing step to subsequent façade identification. We devise a coarse façade region detection method based on the observation that façades are image regions with repetitive patterns containing a large number of vertical and horizontal line segments. Firstly, scan lines are constructed from vanishing points and the center points of image line segments. Hue profiles along these lines are then analyzed and used to decompose the image into rectilinear patches with similar repetitive patterns. Finally, patches are merged into larger coherent regions, and the main building façade region is chosen based on the occurrence of horizontal and vertical line segments within each of the merged regions. A validation of our method showed that, on average, façade regions are detected in conformity with manually segmented images used as ground truth.
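Two of the steps above, constructing a scan line from a vanishing point through a segment midpoint and selecting the region richest in horizontal and vertical segments, might be sketched as follows. The names and the region data layout are hypothetical, not the authors' code.

```python
import math

def scan_line_through(vp, midpoint):
    """Scan line from vanishing point vp through a line segment's midpoint,
    returned as (anchor point, unit direction)."""
    dx, dy = midpoint[0] - vp[0], midpoint[1] - vp[1]
    n = math.hypot(dx, dy)
    return midpoint, (dx / n, dy / n)

def pick_facade_region(regions):
    """Choose the merged region with the most horizontal plus vertical
    line segments, mirroring the abstract's final selection step."""
    return max(regions, key=lambda r: r["h_segments"] + r["v_segments"])
```

In a real pipeline the hue profile along each scan line would then be sampled to find the rectilinear patch boundaries before merging.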

Place, publisher, year, edition, pages
San Jose, CA, USA: Engineering and Technology Publishing, 2014
Keywords
façade region detection, street view image, vanishing point, mobile augmented reality
National Category
Computer Engineering
Identifiers
urn:nbn:se:hig:diva-18517 (URN), 10.12720/joig.2.1.8-14 (DOI)
Available from: 2014-12-11. Created: 2014-12-11. Last updated: 2018-03-13. Bibliographically approved.
Åhlén, J., Seipel, S. & Liu, F. (2014). Evaluation of the Automatic methods for Building Extraction. International Journal Of Computers and Communications, 8, 171-176
2014 (English). In: International Journal Of Computers and Communications, ISSN 2074-1294, Vol. 8, p. 171-176. Article in journal (Refereed). Published.
Abstract [en]

Recognition of buildings is not a trivial task, yet it is highly demanded in many applications, including augmented reality for mobile phones. The recognition rate can be increased significantly if building façade extraction takes place prior to the recognition process. It is also a challenging task, since each building can be viewed from different angles or under different lighting conditions. A natural situation outdoors is that buildings are occluded by trees, street signs and other objects, which interferes with successful building façade recognition. In this paper we evaluate a knowledge-based approach to automatically segment out the whole building façade or major parts of the façade. This automatic building detection algorithm is then evaluated against other segmentation methods, such as SIFT and a vanishing point approach. This work contains two main steps: segmentation of building façade regions using two different approaches, and evaluation of the methods using a database of reference features. The building recognition model (BRM) includes an evaluation step that uses Chamfer metrics. The BRM is then compared to vanishing point segmentation. In the evaluation mode, the two segmentation methods are compared using data from the ZuBuD database. Reference matching is also done using the Scale Invariant Feature Transform (SIFT). The results show that the recognition rate is satisfactory for the BRM and that there is no need to extract the whole building façade for successful recognition.
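The Chamfer metric used in the BRM evaluation step is, in its simplest symmetric form, an average nearest-neighbour distance between two edge point sets. A naive O(n·m) sketch is shown below (illustrative only; practical implementations precompute a distance transform of one edge map instead):

```python
import math

def chamfer_distance(edges_a, edges_b):
    """Symmetric Chamfer distance between two sets of 2D edge points:
    the mean nearest-neighbour distance, averaged over both directions."""
    def one_way(src, dst):
        return sum(min(math.hypot(px - qx, py - qy) for qx, qy in dst)
                   for px, py in src) / len(src)
    return 0.5 * (one_way(edges_a, edges_b) + one_way(edges_b, edges_a))
```

Identical edge sets score 0, and the score grows smoothly with misalignment, which is what makes the metric usable for matching extracted façade edges against reference features.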

Keywords
Building, extraction, recognition, Chamfer metrics, SIFT, Vanishing Points
National Category
Information Systems Computer Sciences
Identifiers
urn:nbn:se:hig:diva-18200 (URN)
Available from: 2014-11-26. Created: 2014-11-26. Last updated: 2018-03-13. Bibliographically approved.
Liu, F. & Seipel, S. (2012). Detection of Line Features in Digital Images of Building Structures. In: Yingcai Xiao (Ed.), Proceedings of IADIS International Conference Computer Graphics, Visualization, Computer Vision and Image Processing 2012 (CGVCVIP 2012): . Paper presented at IADIS International Conference Computer Graphics, Visualization, Computer Vision and Image Processing 2012 (CGVCVIP 2012), Lisbon, Portugal, 21-24 July 2012 (pp. 163-167).
2012 (English). In: Proceedings of IADIS International Conference Computer Graphics, Visualization, Computer Vision and Image Processing 2012 (CGVCVIP 2012) / [ed] Yingcai Xiao, 2012, p. 163-167. Conference paper, Published paper (Refereed).
Abstract [en]

This paper describes a method for the detection of short line segments in digital images. It aims at identifying buildings in images taken from the ground view. The process starts with the image edge map and is carried out on two different levels. One is to detect long line segments, usually stemming from façade edges and building silhouettes. The other identifies shorter line segments, which typically represent architectural details such as windows and entrances. At this level, selected individual connected components in both the vertical and horizontal gradient component maps are used as input to the Hough transform. Our first results show that this method is capable of recognizing lines of interest, but it also includes many randomly oriented lines. The next step will be to eliminate the random line segments and to correlate line segments of the two levels in order to classify high-level features of buildings in an image.
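A minimal Hough transform of the kind the abstract builds on can be sketched as follows: each edge pixel votes for all (rho, theta) line parameterizations passing through it, and bins with enough votes become detected lines. This is a toy accumulator with illustrative parameter names; real implementations add non-maximum suppression and finer binning.

```python
import math

def hough_lines(edge_points, theta_steps=180, threshold=4):
    """Minimal (rho, theta) Hough accumulator over a set of edge pixels.
    Returns the (rho, theta) pairs whose bins collect >= threshold votes."""
    acc = {}
    for x, y in edge_points:
        for t in range(theta_steps):
            theta = t * math.pi / theta_steps
            # Normal-form line equation: rho = x*cos(theta) + y*sin(theta)
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return [(rho, t * math.pi / theta_steps)
            for (rho, t), votes in acc.items() if votes >= threshold]
```

Feeding it the pixels of a single connected component, as the paper's per-component strategy suggests, keeps the accumulator small and reduces the spurious votes that produce randomly oriented lines.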

Keywords
Building images, Gradient edges, Hough transform, Line segment detection
National Category
Computer Vision and Robotics (Autonomous Systems) Computer Engineering
Identifiers
urn:nbn:se:hig:diva-12940 (URN), 2-s2.0-84887314410 (Scopus ID), 978-972-8939-74-8 (ISBN)
Conference
IADIS International Conference Computer Graphics, Visualization, Computer Vision and Image Processing 2012 (CGVCVIP 2012), Lisbon, Portugal, 21-24 July 2012
Available from: 2012-09-18. Created: 2012-09-18. Last updated: 2018-03-13. Bibliographically approved.