Spatial Data Infrastructure denotes the collection of technologies, policies, and institutional arrangements that facilitate the availability of and access to spatial information. During the last few years, the development of spatial data infrastructure in Sweden has been influenced by two actions. The first was the European Directive on spatial data infrastructure, namely the Infrastructure for Spatial Information in Europe (INSPIRE); the second was the Swedish parliament's directive on e-Government in early 2008. In a modern society, spatial data play a major role in many applications, such as information support during disaster prevention and management. These two milestones in geodata development have created huge demands and represent great challenges for researchers in the area of spatial data infrastructure. One of these challenges concerns the methodologies for testing the data specifications proposed by INSPIRE. This paper addresses this challenge and introduces a framework for testing geodata. The testing of geodata includes testing the data specifications for different geographical themes and data structures, performance testing of OGC Web Services (OWS), and usability testing of geoportals and services. The proposed methods were evaluated during a pilot test of a regional geoportal in Sweden, and the results reported in this paper show their feasibility and applicability. The methods assisted in identifying performance-related defects and bottlenecks with respect to response time, stress, and load, and they support the detection of different types of errors that occur during testing, such as HTTP errors, timeout errors, and socket errors.
During the pilot test of the geoportal, it was discovered that the response time was 30 seconds with 500 virtual users accessing the system and performing a specific task, six times the INSPIRE requirement of a maximum of 5 seconds. A usability test was also conducted, focusing on user acceptance and the "think aloud" method. The usability testing enabled the identification of user-interface-related problems, and the results were quantified so that the current results can be compared with those from future tests.
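The paper does not describe its load-testing implementation, but the kind of measurement it reports can be sketched as follows: a fixed number of concurrent "virtual users" each issue a request to a service endpoint, response times are recorded, and failures are classified into the HTTP, timeout, and socket error categories named above. This is a minimal illustrative sketch in Python, not the authors' tool; the endpoint URL, function names, and the 30-second client timeout are assumptions.

```python
import collections
import concurrent.futures
import socket
import time
import urllib.error
import urllib.request

# INSPIRE quality-of-service limit for view services, as cited in the paper.
INSPIRE_MAX_RESPONSE_S = 5.0

def fetch(url, timeout=30.0):
    """Issue one request and classify the outcome into the error
    categories named in the paper: HTTP, timeout, or socket errors."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
        return ("ok", time.monotonic() - start)
    except urllib.error.HTTPError:            # server answered with 4xx/5xx
        return ("http_error", time.monotonic() - start)
    except (TimeoutError, socket.timeout):    # no answer within `timeout`
        return ("timeout_error", time.monotonic() - start)
    except (urllib.error.URLError, OSError):  # connection-level failure
        return ("socket_error", time.monotonic() - start)

def summarize(results):
    """Aggregate (status, seconds) pairs into the metrics the test reports."""
    ok_times = [t for status, t in results if status == "ok"]
    return {
        "mean_response_s": sum(ok_times) / len(ok_times) if ok_times else None,
        "over_limit": sum(1 for t in ok_times if t > INSPIRE_MAX_RESPONSE_S),
        "errors": collections.Counter(s for s, _ in results if s != "ok"),
    }

def load_test(url, virtual_users=500):
    """Run `virtual_users` concurrent requests and summarize the outcomes."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=virtual_users) as pool:
        results = list(pool.map(lambda _: fetch(url), range(virtual_users)))
    return summarize(results)
```

A call such as `load_test("https://example.org/ows?SERVICE=WMS&REQUEST=GetCapabilities")` (a hypothetical endpoint) would then reveal whether responses stay under the 5-second INSPIRE limit under load; production studies would typically use a dedicated load-testing tool rather than a script like this.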