hig.se Publications
1 - 6 of 6
  • 1.
    Liu, Fei
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science.
    Seipel, Stefan
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science.
    Detection of Façade Regions in Street View Images from Split-and-Merge of Perspective Patches. 2014. In: Journal of Image and Graphics, ISSN 2301-3699, Vol. 2, no 1, p. 8-14. Article in journal (Refereed)
    Abstract [en]

    Identification of building façades from digital images is one of the central problems in mobile augmented reality (MAR) applications in the built environment. Directly analyzing the whole image can increase the difficulty of façade identification due to the presence of image portions that are not façades. This paper presents an automatic approach to façade region detection in a single street view image, as a pre-processing step for subsequent façade identification. We devise a coarse façade region detection method based on the observation that façades are image regions with repetitive patterns containing a large number of vertical and horizontal line segments. First, scan lines are constructed from vanishing points and the center points of image line segments. Hue profiles along these lines are then analyzed and used to decompose the image into rectilinear patches with similar repetitive patterns. Finally, patches are merged into larger coherent regions, and the main building façade region is chosen based on the occurrence of horizontal and vertical line segments within each of the merged regions. A validation of our method showed that, on average, façade regions are detected in conformity with manually segmented ground-truth images.
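    The orientation-based grouping of line segments described in this abstract can be illustrated with a minimal sketch (function name and angular tolerance are illustrative assumptions, not the paper's actual implementation):

    ```python
    import math

    def classify_segments(segments, tol_deg=10.0):
        """Split line segments (x1, y1, x2, y2) into near-horizontal,
        near-vertical and other groups by their orientation angle.
        Regions rich in both groups are façade candidates."""
        horizontal, vertical, other = [], [], []
        for seg in segments:
            x1, y1, x2, y2 = seg
            ang = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
            if min(ang, 180.0 - ang) < tol_deg:      # close to 0 or 180 degrees
                horizontal.append(seg)
            elif abs(ang - 90.0) < tol_deg:          # close to 90 degrees
                vertical.append(seg)
            else:
                other.append(seg)
        return horizontal, vertical, other
    ```

    A merged region could then be scored, for instance, by the count of horizontal plus vertical segments falling inside its bounds.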

  • 2.
    Liu, Fei
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science.
    Seipel, Stefan
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science.
    Detection of Line Features in Digital Images of Building Structures. 2012. In: Proceedings of IADIS International Conference Computer Graphics, Visualization, Computer Vision and Image Processing 2012 (CGVCVIP 2012) / [ed] Yingcai Xiao, 2012, p. 163-167. Conference paper (Refereed)
    Abstract [en]

    This paper describes a method for the detection of short line segments in digital images. It aims at identifying buildings in images taken from the ground view. The process starts with the image edge map and is carried out at two different levels. One is to detect long line segments, usually stemming from façade edges and building silhouettes. The other identifies shorter line segments which typically represent architectural details such as windows and entrances. Selected individual connected components in both the vertical and horizontal gradient component maps are used as input to the Hough transform at this level. Our first results show that this method is capable of recognizing lines of interest, but it also includes many randomly oriented lines. The next step will be to eliminate the random line segments and correlate line segments of the two levels to classify high-level features of buildings in an image.
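    The Hough transform step can be sketched as a minimal accumulator over edge points (a simplified stand-in for the connected-component-driven variant the abstract describes; array sizes and binning are illustrative assumptions):

    ```python
    import numpy as np

    def hough_peak(points, diag, n_theta=180, n_rho=200):
        """Vote for lines rho = x*cos(theta) + y*sin(theta) over edge points
        and return the (rho, theta) of the strongest line."""
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        acc = np.zeros((n_rho, n_theta), dtype=int)
        for x, y in points:
            rho = x * np.cos(thetas) + y * np.sin(thetas)   # one rho per theta
            bins = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
            acc[bins, np.arange(n_theta)] += 1              # cast one vote per column
        r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
        return r_idx / (n_rho - 1) * 2 * diag - diag, thetas[t_idx]
    ```

    Feeding one connected component at a time, as the abstract suggests, keeps the accumulator from being swamped by unrelated edges.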

  • 3.
    Liu, Fei
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science. Centre for Image Analysis, Uppsala University, Uppsala, Sweden.
    Seipel, Stefan
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science. Centre for Image Analysis, Uppsala University, Uppsala, Sweden.
    Infrared-visible image registration for augmented reality-based thermographic building diagnostics. 2015. In: Visualization in Engineering, ISSN 2213-7459, Vol. 3, no 1, article id 16. Article in journal (Refereed)
    Abstract [en]

    Background: By virtue of their capability to measure temperature, thermal infrared cameras have been widely used in building diagnostics for detecting heat loss, air leakage, water damage, etc. However, the lack of visual detail in thermal infrared images makes the complement of visible images a necessity. Therefore, it is often useful to register images of these two modalities for further inspection of buildings. Augmented reality (AR) technology, which supplements the real world with virtual objects, offers an ideal tool for presenting the combined results of thermal infrared and visible images. This paper addresses the problem of registering thermal infrared and visible façade images, which is essential for developing an AR-based building diagnostics application. Methods: A novel quadrilateral feature is devised for this task, which models the shapes of commonly present façade elements, such as windows. The features result from grouping edge line segments with the help of image perspective information, namely vanishing points. Our method adopts a forward selection algorithm to determine the feature correspondences needed for estimating the transformation model. During the formation of the feature correspondence set, the correctness of the selected feature correspondences at each step is verified by the quality of the resulting registration, which is based on the ratio of areas between the transformed features and the reference features. Results and conclusions: Quantitative evaluation of our method shows that registration errors are lower than errors reported in similar studies and registration performance is usable for most tasks in thermographic inspection of building façades.
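    The area-ratio verification criterion can be sketched as follows (a simplified reading of the abstract; the paper's exact formulation may differ, and the shoelace-based helper is an assumption):

    ```python
    import numpy as np

    def polygon_area(pts):
        """Shoelace area of a polygon given as an (N, 2) array of vertices."""
        x, y = pts[:, 0], pts[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

    def area_ratio_quality(transformed_quad, reference_quad):
        """Score a candidate registration by how closely the area of a
        transformed quadrilateral feature matches its reference
        counterpart (1.0 = perfect agreement)."""
        a, b = polygon_area(transformed_quad), polygon_area(reference_quad)
        return min(a, b) / max(a, b)
    ```

    A forward-selection loop could accept a new feature correspondence only if this score stays above a threshold after re-estimating the transformation.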

  • 4.
    Liu, Fei
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science. Centre for Image Analysis, Uppsala University, Uppsala, Sweden.
    Seipel, Stefan
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science. Centre for Image Analysis, Uppsala University.
    On the precision of third person perspective augmented reality for target designation tasks. 2017. In: Multimedia Tools and Applications, ISSN 1380-7501, E-ISSN 1573-7721, Vol. 76, no 14, p. 15279-15296. Article in journal (Refereed)
    Abstract [en]

    The availability of powerful consumer-level smart devices and off-the-shelf software frameworks has tremendously popularized augmented reality (AR) applications. However, since the built-in cameras typically have rather limited field of view, it is usually preferable to position AR tools built upon these devices at a distance when large objects need to be tracked for augmentation. This arrangement makes it difficult or even impossible to physically interact with the augmented object. One solution is to adopt third person perspective (TPP) with which the smart device shows in real time the object to be interacted with, the AR information and the user herself, all captured by a remote camera. Through mental transformation between the user-centric coordinate space and the coordinate system of the remote camera, the user can directly interact with objects in the real world. To evaluate user performance under this cognitively demanding situation, we developed such an experimental TPP AR system and conducted experiments which required subjects to make markings on a whiteboard according to virtual marks displayed by the AR system. The same markings were also made manually with a ruler. We measured the precision of the markings as well as the time to accomplish the task. Our results show that although the AR approach was on average around half a centimeter less precise than the manual measurement, it was approximately three times as fast as the manual counterpart. Additionally, we also found that subjects could quickly adapt to the mental transformation between the two coordinate systems.
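    The coordinate-space mapping that users perform mentally in this setup can be written down explicitly (an illustrative rigid transform; the actual system calibration is not described in the abstract):

    ```python
    import numpy as np

    def remote_to_user(p_cam, R, t):
        """Map a point from the remote camera's frame into the user-centric
        frame via a rigid transform (rotation matrix R, translation t)."""
        return R @ p_cam + t
    ```

    In a third person perspective system, R and t would come from a one-off extrinsic calibration of the remote camera against the workspace.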

  • 5.
    Liu, Fei
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science. Centre for Image Analysis, Uppsala University, Uppsala, Sweden.
    Seipel, Stefan
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science. Centre for Image Analysis, Uppsala University, Uppsala, Sweden.
    Precision study on augmented reality-based visual guidance for facility management tasks. 2018. In: Automation in Construction, ISSN 0926-5805, E-ISSN 1872-7891, Vol. 90, p. 79-90. Article in journal (Refereed)
    Abstract [en]

    One unique capability of augmented reality (AR) is to visualize hidden objects as a virtual overlay on real occluding objects. This “X-ray vision” visualization metaphor has proved invaluable for operation and maintenance tasks such as locating utilities behind a wall. Locating virtual occluded objects requires users to estimate the closest projected positions of the virtual objects on their real occluders, which is generally under the influence of a parallax effect. In this paper we studied the task of locating virtual pipes behind a real wall with “X-ray vision”, with the goal of establishing relationships between task performance and the spatial factors causing parallax through different forms of visual augmentation. We introduced and validated a laser-based target designation method which is generally useful for AR-based interaction with augmented objects beyond arm's reach. The main findings include that people can mentally compensate for the parallax error when extrapolating positions of virtual objects on the real surface, given traditional 3D depth cues for spatial understanding. This capability is, however, unreliable, especially as the viewing offset between the user and the virtual objects and the distance between the virtual objects and their occluders increase. Experimental results also show that positioning performance is greatly increased and unaffected by those factors if the AR support provides visual guides indicating the closest projected positions of virtual objects on the surfaces of their real occluders.
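    The parallax effect at the heart of this study can be made concrete with a small geometric sketch (plane and viewpoint values are illustrative; this is not the paper's experimental setup):

    ```python
    import numpy as np

    def orthogonal_projection(p, plane_pt, normal):
        """Closest point on the wall plane to a hidden object point p."""
        n = normal / np.linalg.norm(normal)
        return p - np.dot(p - plane_pt, n) * n

    def perceived_position(eye, p, plane_pt, normal):
        """Where the line of sight from the eye to p pierces the wall plane."""
        n = normal / np.linalg.norm(normal)
        d = p - eye
        t = np.dot(plane_pt - eye, n) / np.dot(d, n)
        return eye + t * d

    def parallax_error(eye, p, plane_pt, normal):
        """Distance between the true closest projected position and the
        position the viewer perceives along the line of sight."""
        return float(np.linalg.norm(
            orthogonal_projection(p, plane_pt, normal)
            - perceived_position(eye, p, plane_pt, normal)))
    ```

    The error grows with both the viewing offset and the depth of the object behind the wall, matching the trend reported in the abstract.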

  • 6.
    Åhlén, Julia
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Urban and regional planning/GIS-institute.
    Seipel, Stefan
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science.
    Liu, Fei
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Industrial Development, IT and Land Management, Computer science.
    Evaluation of the Automatic methods for Building Extraction. 2014. In: International Journal of Computers and Communications, ISSN 2074-1294, Vol. 8, p. 171-176. Article in journal (Refereed)
    Abstract [en]

    Recognition of buildings is not a trivial task, yet it is in high demand in many applications, including augmented reality for mobile phones. The recognition rate can be increased significantly if building façade extraction takes place prior to the recognition process. It is also a challenging task, since each building can be viewed from different angles or under different lighting conditions. A natural situation outdoors is that buildings are occluded by trees, street signs and other objects, which interferes with successful building façade recognition. In this paper we evaluate a knowledge-based approach to automatically segmenting out the whole building façade or major parts of the façade. This automatic building detection algorithm is then evaluated against other segmentation methods such as SIFT and a vanishing point approach. This work contains two main steps: segmentation of the building façade region using two different approaches, and evaluation of the methods using a database of reference features. The building recognition model (BRM) includes an evaluation step that uses Chamfer metrics. The BRM is then compared to vanishing point segmentation. In the evaluation mode, the comparison of these two segmentation methods is done using data from ZuBuD. Reference matching is also done using the Scale Invariant Feature Transform. The results show that the recognition rate is satisfactory for the BRM and that there is no need to extract the whole building façade for successful recognition.
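    The Chamfer metric used in the BRM evaluation step can be sketched as a directed average nearest-neighbour distance between two edge-point sets (a brute-force illustration, not the paper's implementation, which would typically use a precomputed distance transform):

    ```python
    import math

    def chamfer_distance(template_pts, image_pts):
        """Directed Chamfer distance: for each template edge point, take the
        distance to its nearest image edge point, then average.
        Lower values indicate a better match."""
        def nearest(p):
            return min(math.dist(p, q) for q in image_pts)
        return sum(nearest(p) for p in template_pts) / len(template_pts)
    ```

    Matching a façade template against candidate image regions then reduces to picking the region with the smallest Chamfer score.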
