Latest Issue

    Issue 5, 2008
    • WEI Cheng-jie, LIU Ya-lan, WANG Shi-xin, ZHANG Li-fu, HUANG Xiao-xia
      Issue 5, Pages: 673-682(2008) DOI: 10.11834/jrs.20080588
      Abstract: Earthquake is one of the most destructive natural disasters in China. For a long time, investigation of earthquake damage has been based on field surveys, but the disadvantages of this approach, such as huge workload, low efficiency, high expense and un-visualized information, make it impractical. Remote sensing, by contrast, provides a rapid and effective approach for quick investigation and assessment of earthquake damage, owing to its objectivity, real-time capability and macro-scope view. Since the mid-1960s, institutes and departments of the Chinese Academy of Sciences have done a great deal of work on quake damage investigation and assessment and accumulated rich experience. Aerial photography was carried out for the Xingtai quake in 1966, the Haicheng earthquake in 1975, the Tangshan quake in 1976 and the Lancang-Gengma earthquake in 1988, and monitoring and mapping of earthquake damage were accomplished at different scales. These achievements provided an important scientific basis for decision-making in earthquake relief and reconstruction. Decades of development of remote sensing technology now supply technical support and prior knowledge for the investigation and assessment of the Wenchuan quake damage. A Richter magnitude 8.0 earthquake struck Wenchuan County, Sichuan Province, on May 12, 2008, causing huge destructive losses. In this paper, remote sensing is used to investigate and assess the earthquake damage. Five classes of features can be interpreted from remote sensing imagery: (1) the damage degree of buildings, such as civilian homes, factories, schools and hospitals, which relates directly to the safety of human life and property; (2) the damage degree of structures, such as television towers, chimneys, oil tanks and power houses, which can indicate the intensity of the earthquake; (3) the damage degree of lifelines, for example highways, railways, bridges, and power and communication supplies, which are very important for disaster relief and human life after the earthquake; (4) field disasters of the earthquake, such as landslides, debris flows and ground fissures; (5) secondary disasters caused by field disasters, such as quake lakes and emission of harmful gases. These five classes of features are the main data for determining the intensity of the earthquake and the degree of damage. The features can be obtained from remote sensing imagery of different spectral and spatial resolutions through image processing and interpretation methods, and other kinds of data can be imported for assistant interpretation. The workflow includes remote sensing image processing, human-machine interactive classification, location, qualitative and quantitative information extraction, GIS-supported assessment of earthquake damage, and mapping of damaged objects. In this study radar images are used: owing to the bad weather after the earthquake, optical remote sensing could not obtain useful information at first, so the all-weather radar sensor provided a great deal of information, and 1m spatial resolution radar images played an important role in the investigation. Optical remote sensing imagery also played an important role. Using various remote sensing images, the abundance of landslides, collapses and debris flows, the rate of building damage, and the situation of quake lakes and road damage are interpreted. In conclusion, remote sensing can play an important role in the investigation and assessment of earthquake damage. The application of remote sensing to the Wenchuan quake also showed some shortcomings, and it may be improved and developed in five aspects: (1) imagery selection is important, and high spatial resolution imagery is significant; (2) a basic database needs to be built; (3) quick interpretation techniques should be developed; (4) it is more practical to use human-computer interactive interpretation to extract earthquake information; (5) in the investigation of earthquake damage, data processing techniques for 3D reconstruction will develop quickly, and remote sensing, GIS and GPS techniques will be combined and applied to investigations at a wider scope and higher level.
      Keywords: Wenchuan quake; earthquake disaster remote sensing; monitoring and assessment; earthquake disaster remote sensing trends
      Published online: 2021-06-10
    • CHEN Xue-hong, CHEN Jin, YANG Wei, ZHU Kai
      Issue 5, Pages: 683-691(2008) DOI: 10.11834/jrs.20080589
      Abstract: Remote sensing is widely used in mapping land use/land cover types and monitoring land use/land cover changes from regional to global scales. Supervised classification is a powerful tool for extracting land cover and land use information from remotely sensed images. Although many supervised classification methods have been developed in the machine learning field, there is no universally best-performing method: different classification methods have their own advantages and defects, a phenomenon called selective superiority. It is therefore worth exploring a method that can integrate the advantages of different classifiers while avoiding their weaknesses. Combining classifiers properly may improve classification accuracy, because different classifiers tend to make different mistakes. Combined classifiers have been studied widely in machine learning but seldom in remote sensing image classification. This paper proposes a combined classifier based on error analysis, which incorporates the rule outputs of maximum likelihood classification (MLC) and the support vector machine (SVM) to achieve higher classification accuracy. MLC is the most widely used classification method in computer processing of remotely sensed images; it is based on classical statistical theory and has a solid probabilistic interpretation, but its accuracy suffers seriously if the training sample distribution is not normal. SVM is a more recently developed classifier based on statistical learning theory; it is robust for small samples and has shown good performance in many studies. However, the original SVM is a binary classifier, which must be extended to multi-class classification through extra work; how to do this effectively is still an on-going research issue and probably affects SVM's performance. The new method proposed in this paper first estimates the errors of the two classifiers, denoted by confidence intervals of their rule outputs, then combines the rule outputs with weights depending on the confidence intervals, and finally acquires a more accurate rule output. Classification experiments were conducted on a case study area (the Summer Palace area in Beijing). Classification accuracies of the combined classifier and the two single classifiers were compared under different sample distributions and different sample amounts. The results demonstrate that the new combined classifier acquires higher accuracy than either single classifier, and that the combined classifier performs better when the two classifiers are more independent. Another comparison experiment between the new combined classifier and a previous combined classifier based on averaging also showed that the new method performs better. However, the new method still has some defects: first, the error analysis of the two classifiers is not complete; second, error analysis based on classical statistical theory tends to be too optimistic for MLC. Despite these disadvantages, the new combined classifier based on error analysis shows promising potential in remotely sensed image classification.
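The combination step described above can be illustrated with a minimal sketch. The paper's exact weighting function is not reproduced in the abstract, so the inverse-interval-width weights, the array shapes, and the function name below are assumptions for illustration only:

```python
import numpy as np

def combine_rule_outputs(p_mlc, p_svm, ci_mlc, ci_svm):
    """Combine the per-class rule outputs of two classifiers, weighting
    each one inversely by the width of its estimated confidence interval.

    p_mlc, p_svm   : (n_samples, n_classes) rule outputs, e.g. posteriors
    ci_mlc, ci_svm : same shape, confidence-interval half-widths (errors)
    """
    w_mlc = 1.0 / (ci_mlc + 1e-12)   # narrower interval -> larger weight
    w_svm = 1.0 / (ci_svm + 1e-12)
    combined = (w_mlc * p_mlc + w_svm * p_svm) / (w_mlc + w_svm)
    return combined.argmax(axis=1)   # final label = largest combined output
```

A classifier that is confident (narrow interval) for a given class thus dominates the combined rule output for that class, which is the intuition behind the error-analysis weighting.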
      Keywords: combined classifier; error analysis; confidence interval; land use/land cover; remote sensing
    • SHEN Wei1, LI Jing2, CHEN Yun-hao2, DENG Lei2, PENG Guang-xiong3
      Issue 5, Pages: 692-698(2008) DOI: 10.11834/jrs.20080590
      Abstract: Building boundary extraction and normalization are key steps in LIDAR data processing and 3D building modeling. In this paper, the Alpha Shapes algorithm is first applied to LIDAR data to extract the building boundary. An enhanced boundary simplification algorithm, the Pipe Algorithm, and two newly developed normalization algorithms, the Circumcircle Regularization Algorithm and the Cluster and Adjustment Algorithm, are then used to improve the extracted boundary; together these algorithms generate a well-normalized building boundary. A finite point set S has an alpha shape, a polygon determined by S and the parameter α. One can imagine a circle of radius α rolling around S. When α is large enough, the circle cannot fall into the area covered by the points, and its rolling track forms the boundary of the discrete points (for example, LIDAR data). Conversely, when α is very small (α→0), every point may become a boundary point, and when α approaches infinity (α→∞) the alpha shape becomes the convex hull. When S contains evenly distributed points and α approaches its optimum value, the alpha shape can extract the inner and outer boundaries of convex and concave polygons. The boundary obtained from the Alpha Shapes algorithm is rough and can be regarded as a raw boundary. The Pipe Algorithm is developed to simplify this raw outline, which usually has a zigzag shape: it retrieves the polygon inflexion points based on changes of angle direction, retains them while eliminating the intermediate points, and thereby establishes the basic framework of the polygon. The two normalization algorithms, the Circumcircle Regularization Algorithm and the Cluster and Adjustment Algorithm, are then used to improve the extracted polygon framework. At present the two normalization algorithms can be applied to four-sided and multi-sided polygons (more than four sides, with an even number of sides). Compared with other algorithms, the Alpha Shapes algorithm can effectively and stably process LIDAR point-cloud data with high precision; it preserves fine features of building boundaries of any shape while filtering out non-building footprints. The experimental results show that these algorithms perform well in extracting and normalizing building boundaries (convex and concave polygons); the error between the extracted building boundary and the actual outline is usually less than 0.5 m.
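The rolling-circle idea can be sketched via the standard Delaunay formulation of alpha shapes: keep the triangles whose circumradius is smaller than the rolling radius α, then take the edges used by exactly one kept triangle as the boundary. This is a generic illustration of the technique, not the paper's implementation:

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Boundary edges of the alpha shape of 2-D points.

    Keeps Delaunay triangles whose circumradius is below `alpha`
    (the rolling-circle radius) and returns the edges that belong
    to exactly one kept triangle, i.e. the shape's boundary.
    """
    tri = Delaunay(points)
    edge_count = {}
    for ia, ib, ic in tri.simplices:
        a, b, c = points[ia], points[ib], points[ic]
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        if area == 0:
            continue                       # skip degenerate triangles
        r = la * lb * lc / (4.0 * area)    # circumradius R = abc / (4A)
        if r < alpha:
            for e in ((ia, ib), (ib, ic), (ia, ic)):
                e = tuple(sorted(e))
                edge_count[e] = edge_count.get(e, 0) + 1
    return [e for e, n in edge_count.items() if n == 1]
```

As in the abstract, a very large α reduces this to the convex hull (every triangle is kept), while a well-chosen α also recovers concave and inner boundaries.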
      Keywords: LiDAR; airborne laser scanning; building boundary extraction; building boundary normalization
    • CHENG Jie1, XIAO Qing1, LI Xiao-wen1, LIU Qin-huo1, DU Yong-ming1
      Issue 5, Pages: 699-706(2008) DOI: 10.11834/jrs.20080591
      Abstract: Temperature and emissivity are two important parameters in thermal infrared remote sensing. Surface-emitted radiance is a function of both kinetic temperature and spectral emissivity, so separating temperature and emissivity from radiometric measurements amounts to solving for N+1 parameters with N equations. Approximations or assumptions must be made to reduce the number of unknowns and make the equation system solvable, and many temperature and emissivity separation algorithms have been put forward following different strategies. Most of these algorithms are designed for multi-spectral data; their applicability to hyperspectral FTIR data needs to be evaluated. Moreover, we explore whether there is an optimal algorithm for retrieving soil emissivity from hyperspectral FTIR data. Five typical temperature-emissivity separation methods (NEM, ISSTES, the alpha residual method, MMD and TES) were investigated using a simulated dataset composed of simulated ground-leaving radiance and simulated atmospheric downward radiance. In total, 58 soil directional-hemispherical reflectance spectra were obtained from the ASTER spectral library and converted to emissivities using Kirchhoff's law; the soil temperature was set to 300K. The atmospheric downward radiance was simulated with MODTRAN 4.0 using the 1976 US standard atmosphere. Random Gaussian noise with zero mean and a standard deviation of 3.14e-9 W/cm2/sr/cm-1 was added to the simulated data; this is the Noise Equivalent Spectral Radiance (NESR) of our BOMEM MR 304 spectrometer measured in the laboratory. To evaluate the algorithms' sensitivity to instrument random noise, simulated data with zero-mean noise at 2, 4, 6, 8, 10, 15 and 20 times the instrument NESR were also considered. From the results we draw several conclusions. For NEM, an optimal maximum emissivity of 0.985 is suggested: the RMSE of the derived soil emissivities and the mean absolute temperature bias are minimized at this value. A better empirical relationship was found to replace the original mean-minimum maximum difference relationship in the MMD method. The alpha residual method is not suitable for retrieving soil emissivity from hyperspectral FTIR data. Comparing NEM and ISSTES, the RMSE of soil emissivities derived using NEM under ideal conditions is twice that of ISSTES, so the original NEM module in TES was replaced by ISSTES to acquire an accurate initial emissivity, and the original power relationship in the MMD module of TES was replaced by a linear relationship for higher fitting precision. In conclusion, ISSTES is the best method at the true instrument noise level: the RMSE of the derived soil emissivities is only 0.0007 and the mean absolute temperature bias only 0.02K. Both the emissivity RMSE and the temperature bias grow monotonically with increasing instrument noise. Finally, we present an example of soil emissivity extraction using the five methods on ground-based hyperspectral measurements acquired at our field test site with the BOMEM MR 304 spectrometer on the afternoon of September 26, 2005. The distribution of the derived emissivity spectra verifies the results of the algorithm analysis.
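The NEM step discussed above (assume a maximum emissivity, take the hottest channel's brightness temperature as the surface temperature, then solve each channel's emissivity) can be sketched as follows. The radiation constants are standard values in µm-based units; the function names and the four-channel example are illustrative assumptions, not the paper's code:

```python
import numpy as np

C1 = 1.191042e8   # first radiation constant, W·µm^4/(m^2·sr)
C2 = 1.4387752e4  # second radiation constant, µm·K

def planck(wl_um, T):
    """Spectral radiance of a blackbody at wavelength wl_um (µm), temp T (K)."""
    return C1 / (wl_um**5 * (np.exp(C2 / (wl_um * T)) - 1.0))

def inv_planck(wl_um, B):
    """Brightness temperature corresponding to radiance B at wl_um."""
    return C2 / (wl_um * np.log(1.0 + C1 / (wl_um**5 * B)))

def nem(wl_um, L_surf, L_down, eps_max=0.985):
    """Normalized emissivity method (one iteration).

    L_surf = eps*B(T) + (1-eps)*L_down per channel; assuming eps_max,
    the hottest channel's brightness temperature estimates T, and the
    emissivities follow by inverting the radiative-transfer equation.
    """
    R = L_surf - (1.0 - eps_max) * L_down      # remove reflected term
    T = inv_planck(wl_um, R / eps_max).max()   # hottest channel wins
    eps = (L_surf - L_down) / (planck(wl_um, T) - L_down)
    return T, eps
```

On noise-free synthetic radiances whose true maximum emissivity equals `eps_max`, this recovers the surface temperature and channel emissivities exactly, which is why the choice of the assumed maximum emissivity (0.985 in the study) matters.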
        
    • WANG Hai-qi1, WANG Jin-feng2
      Issue 5, Pages: 707-715(2008) DOI: 10.11834/jrs.20080592
      Abstract: This paper focuses on space-time nonlinear intelligent modeling for lattice data. Lattice data are attributes attached to fixed, regular or irregular polygonal regions, such as districts or census zones, in two-dimensional space. Space-time analysis of lattice data aims at detecting, modeling and predicting space-time patterns or trends of lattice attributes changing with time while spatial topological structures remain invariable. From the spatial perspective, lattice objects have two properties at different scales that influence modeling: global dependence and local fluctuation. Global spatial dependence, or autocorrelation, quantifies the correlation of the same attribute at different spatial locations; local spatial fluctuation, or roughness, which coexists with global dependence, appears as local spatial clustering of similar values or as local spatial outliers. To consider the effects of both properties simultaneously, local neural network (NN) models are studied for space-time nonlinear autoregressive modeling. The main research contents are as follows. (1) To reduce the influence of spatial fluctuation on the prediction accuracy of the NN, all regions are partitioned into several subareas by an improved k-means algorithm. (2) Different partition schemes are evaluated and compared according to three essential criteria: dependence, continuity and fluctuation. Dependence means that an optimal partition must guarantee real and significant spatial dependence among the regions in a subarea, because the outputs of an NN model depend on the interactions of the input-layer nodes through the hidden layers. The spatial autocorrelation of a subarea can be measured by global Moran's I, and its significance tested with the z-score of Moran's I. Continuity means that only neighboring regions can be grouped into a subarea; this criterion is fused into the modified k-means algorithm: when deciding which subarea a region belongs to, the algorithm considers not only the distance to the subarea centroid but also the common borders between the region and the regions already in the subarea. As for fluctuation, although partitioning cannot give each subarea complete spatial stability, less fluctuation means better NN predictions; for each subarea, the standard deviation between the local Moran's I of its regions and the global Moran's I of the subarea is used as an index of its fluctuation. (3) A multi-layer perceptron (MLP) network is used for modeling and prediction in each subarea. The output nodes are the predicted values at time t of an attribute for all regions in the subarea; the input nodes are observations before time t of the same attribute for both the regions in the subarea and the regions neighboring it, the latter being called the boundary effect. Finally, as a case study, the local models of all subareas are trained, tested and compared with a single global MLP network on one-step-ahead prediction of an epidemic dataset recording weekly influenza cases in 94 departments of France from the first week of 1990 to the 53rd week of 1992. Two performance measures, average relative variance (ARV) and dynamic similarity rate (DSR), indicate that the local NN model based on partitioning has better predictive capability than the global NN model. Several issues are worth further study. (1) The initial subareas of the partitioning were selected randomly in our research; a more reasonable approach should combine selection with spatial patterns, for instance by considering the centers of local clusters. (2) Partition criteria are another issue: different types of spatial and space-time processes, such as rainfall, price waves and public data, may have different objective criteria for choosing an optimal partition. (3) It may be more imperative to study feasible measures for quantifying the global and local space-time dependence of lattice data and for testing the significance of this dependence.
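The dependence criterion above rests on global Moran's I, which has a compact closed form: I = (n / S0) · (zᵀWz) / (zᵀz), where z are the mean-centered attribute values, W the spatial weight matrix, and S0 the sum of all weights. A minimal sketch, assuming a binary adjacency matrix:

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I of attribute vector x under spatial weight
    matrix W (binary here: W[i, j] = 1 if regions i, j are neighbors)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()          # deviations from the mean
    n = len(x)
    s0 = W.sum()              # total weight
    return (n / s0) * (z @ W @ z) / (z @ z)
```

Values near +1 indicate clustering of similar values (the dependence a subarea should exhibit), values near -1 indicate a checkerboard-like pattern, and values near 0 indicate spatial randomness; significance would then be judged from the z-score of I, as the abstract notes.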
      Keywords: lattice data; space-time modeling; partitioning; k-means clustering; neural networks; boundary effect
    • WANG Jing1, GAO Jun-feng1
      Issue 5, Pages: 716-723(2008) DOI: 10.11834/jrs.20080593
      Abstract: Lakes have been facing a number of disadvantageous eco-environmental issues due to intensifying human activities and global climate change, so dynamic lake monitoring is becoming more and more important. The extraction of lake enclosure culture area, an important part of lake dynamic monitoring, has been highly valued. However, few studies have addressed the extraction of lake enclosure culture area, little attention has been paid to the extraction method, and the precision of existing methods has been unsatisfactory. To improve the extraction precision, a new method using correspondence analysis (CA) is proposed in this paper. CA relies on squared deviations between pixel values and their expected values (joint probabilities), and it explores the relationships both between image bands and between pixels. Although CA is a routine technique among ecologists and statisticians, its application to remote sensing is still relatively new. The goal of this study is to detail a CA methodology for highly accurate and efficient extraction of lake enclosure culture. The methodology has four main steps: first, the data are preprocessed; second, the preprocessed data are transformed with the CA algorithm; third, texture analysis is applied to the first CA component; finally, the enclosure culture area is extracted by a decision tree using the optimal threshold value. Gehu Lake is selected as a case study, using multi-spectral Landsat-7 ETM+ data acquired on July 26, 2001, obtained as a standardized orthorectified GeoTIFF downloaded from the internet. Gehu Lake (31°29′—31°42′N, 119°44′—119°53′E) lies in the middle and lower reaches of the Yangtze River; it is the second largest lake in the Taihu basin, with a water area of 146.0 km2, and one of the earliest lakes where enclosure culture was used to cultivate freshwater fish and shellfish. The classification results are evaluated with the precision analysis function of ENVI 4.2. The results are as follows: (1) the extraction accuracy of the enclosure culture area is 95.59%, the overall accuracy 91.14%, and the overall Kappa value 0.86; (2) the enclosure culture area of Gehu Lake is 111.76km2, 76.29% of the total lake area. The results are then compared with three other methods: a method based on principal component analysis (PCA), a method combining the first two PCA components with band 4, and a method based on the original bands without CA or PCA. The comparison shows that (1) the overall accuracies of the three methods are 82.17%, 89.17% and 83.54% respectively, and (2) their classification accuracies are clearly lower than that based on the CA transformation. In sum, CA proves to be a powerful multivariate analysis technique for extracting enclosure culture area. Some commission and omission remain, however: the commission error of the enclosure culture class is 13.41%, and the omission error of the other category is 21.43%, so the method needs improvement to reduce these errors. Further research should test the CA method on other lakes and examine whether it is suitable for high spatial resolution imagery such as SPOT, IKONOS and QuickBird.
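The CA transformation in step two can be sketched with the standard SVD formulation of correspondence analysis: standardize the pixel-by-band matrix against its expected joint probabilities, decompose, and read off the first-axis row (pixel) scores. This is a generic CA sketch under the usual textbook formulation, not the paper's implementation:

```python
import numpy as np

def ca_first_component(X):
    """Row (pixel) scores on the first correspondence-analysis axis.

    X : (n_pixels, n_bands) nonnegative matrix of pixel values.
    """
    P = X / X.sum()                      # correspondence matrix
    r = P.sum(axis=1)                    # row masses
    c = P.sum(axis=0)                    # column masses
    # standardized residuals: (observed - expected) / sqrt(expected)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    return (U[:, 0] * sv[0]) / np.sqrt(r)   # principal row coordinates
```

Because CA works on row profiles, pixels with proportional band values (identical spectral profiles) receive identical first-axis scores, which is the property that makes the first component useful as input to the subsequent texture analysis.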
      Keywords: correspondence analysis; texture analysis; decision tree classification; enclosure culture in lake; Gehu Lake
    • LI Hai-bo, LI Xia, LIU Xiao-ping, AI Bin, LIU Tao
      Issue 5, Pages: 724-733(2008) DOI: 10.11834/jrs.20080594
      Abstract: Site selection, one of the basic tasks of GIS, is the search for the best sites for one or more facilities; the objective is to maximize some utility function subject to given goals. Traditional GIS-based site selection methods focus only on identifying the best locations (coordinates) of facilities. In many applications, however, contiguity constraints must be considered: site selection should determine not only locations but also patch configuration, maximizing utility functions subject to contiguity constraints and various planning goals. Combining location and contiguity makes site selection difficult because of the huge solution space involved, and the problem becomes more complex when multiple objectives are incorporated. Many alternative-generating techniques, such as the weighting method and the non-inferior set estimation method, have been developed to help decision-makers search solution spaces. Although effective in some circumstances, these approaches have several weaknesses: (1) they can only be applied to problems that are mathematically formulated; (2) they are inefficient on large problems; and (3) they may fail to find important solutions. Builders of decision-support tools therefore require methods that overcome these limitations and efficiently generate alternative solutions to multi-objective decision problems. Particle swarm optimization (PSO) can be used to achieve such goals. This paper presents a new method combining PSO with shape-mutation algorithms, in which a strict mutation operator prevents the formation of "holes" during the search for optimal contiguous sites. PSO drives the solutions toward the best locations, while the shape-mutation algorithms handle the shape contiguity constraint and patch configuration optimization. A site is represented as an undirected graph, and a set of operations is designed to change the shape and location of sites during the search for possible solutions; these operations evolve randomly generated initial solutions into a set of optimal solutions while maintaining the contiguity of each site and preventing holes from forming. The approach is applied to three types of cost surfaces: uniform random, conical, and a deformed sombrero-like surface. The analyses use a 128×128 grid of cells with a facility located at the center of the area; the number of cells per site is fixed at 10. The results demonstrate the robustness and effectiveness of this PSO-based approach for geographical analysis and multi-objective site selection. The approach has also been tested in the city of Guangzhou to search for the best locations for a CBD, with reasonable results. The experiments indicate that the approach solves this problem effectively and can capture all the best solutions, which decision-makers can use directly for location selection.
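The location-search half of the method, PSO itself, can be sketched in its textbook form: particles carry a position and velocity, and each velocity update blends inertia with attraction toward the particle's personal best and the swarm's global best. The shape-mutation operators of the paper are not reproduced here; this is the generic continuous PSO only, with assumed hyperparameter values:

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle-swarm optimizer over a continuous 2-D domain."""
    rng = np.random.default_rng(seed)
    lo = np.asarray(bounds[0], dtype=float)
    hi = np.asarray(bounds[1], dtype=float)
    x = rng.uniform(lo, hi, (n_particles, 2))     # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # personal bests
    pbest_f = np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                # stay inside the domain
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

In the paper's setting the cost would come from the cost surface (random, conical, or sombrero-like), and the shape-mutation step would follow each position update to keep the 10-cell site contiguous and hole-free.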
      Keywords: particle swarm optimization; GIS; site selection; patch configuration; optimization
    • ZHOU Ji1, CHEN Yun-hao1, LI Jing1, WENG Qi-hao2, YI Wen-bin1
      Issue 5, Pages: 734-742(2008) DOI: 10.11834/jrs.20080595
      Abstract: With rapid urbanization, the urban heat island (UHI) has become one of the most serious urban problems because of its impacts on the urban microclimate, air quality, energy consumption, public health and so on. With the advent of thermal remote sensing technology, remote observation of UHIs has become a focus of urban remote sensing. While progress has been made, remote sensing studies of the UHI have been slow to move beyond qualitative descriptions of thermal patterns and simple correlations. As a common indicator of the UHI, magnitude is often used to reflect the degree of UHI occurrence; it is computed as the difference between the temperatures of urban and rural surfaces. However, the spatial extent of the UHI is often neglected when measuring a UHI as a whole. To obtain a quantitative and more effective indicator for dynamic UHI monitoring at the regional scale, to derive its spatial features, and to facilitate comparisons between UHIs occurring at different times or in different areas based on imagery such as NOAA AVHRR and Terra/Aqua MODIS data, this paper proposes a new parameter, UHI volume, which integrates UHI magnitude and extent. After subtracting the rural contribution from the land surface temperature (LST) image, the isolated UHI signature is fitted with a least-squares Gaussian surface, and the UHI volume is calculated as the double integral of the fitted signature over its footprint, the extent over which the UHI occurs. The applicability of this volume model is investigated on thirty EOS-Terra MODIS Level 1B images of Beijing and its surrounding suburban areas acquired between 2004 and 2006, comprising fifteen nighttime scenes and their corresponding daytime scenes. First, resampled digital elevation model (DEM) data of the study area, the normalized difference vegetation index (NDVI) and the modified normalized difference water index (MNDWI) are used to extract the urban areas. Second, a simplified method is applied to retrieve land surface temperature from MODIS channels 31 and 32, accounting for land surface emissivity. Third, four transects across the center of Beijing in the N-S, W-E, NW-SE and NE-SW directions are used to detect the UHIs and verify that the volume model applies. The detection reveals that each UHI in Beijing at the different times has a single core and a broadly symmetrical spatial distribution, although there are some small UHIs in the outskirts, so the volume model is appropriate for the UHI simulations. The simulation results demonstrate that: (1) the correlation between the modeled UHI signatures and the corresponding true values is strong, especially for the nocturnal UHIs, indicating that the Gaussian is suitable and the volume model valid for regional-scale UHI simulation in Beijing; (2) there is an obvious UHI effect in both daytime and nighttime in summer, while in spring, autumn and winter a UHI appears at night but not in the daytime; the UHI magnitude and volume show that the diurnal UHI is more intense than the nocturnal one, and the difference in volume between the diurnal and nocturnal UHIs appears stable; (3) because of different dominant natural and anthropogenic factors and their differing influences, the changes of the nocturnal UHIs are complicated, especially their extents; the magnitude and volume show that the nocturnal UHI effect is most intense in winter and weakest in summer. The application in Beijing suggests that the volume model may be a promising routine for measuring UHIs quantitatively from remote sensing imagery. However, its applicability in other cities should be further verified.
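The Gaussian-fit-and-integrate idea can be sketched directly: fit an elliptical Gaussian surface to the rural-background-subtracted LST anomaly and use the closed-form integral of a 2-D Gaussian, 2πAσxσy, as the volume. The function names, the pixel-size parameter and the use of `scipy.optimize.curve_fit` are illustrative assumptions, not the paper's code:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, A, x0, y0, sx, sy):
    """Elliptical Gaussian surface evaluated at coordinate arrays (x, y)."""
    x, y = xy
    return A * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                        + (y - y0) ** 2 / (2 * sy ** 2)))

def uhi_volume(dT, pixel_km=1.0):
    """UHI volume from the rural-subtracted LST anomaly dT (2-D, K).

    Fits a least-squares Gaussian surface and returns its double
    integral over the plane: 2*pi*A*sx*sy (units K * km^2).
    """
    ny, nx = dT.shape
    y, x = np.mgrid[0:ny, 0:nx] * pixel_km
    p0 = (dT.max(), x.mean(), y.mean(),
          nx * pixel_km / 4, ny * pixel_km / 4)   # rough initial guess
    (A, x0, y0, sx, sy), _ = curve_fit(
        gauss2d, (x.ravel(), y.ravel()), dT.ravel(), p0=p0)
    return 2 * np.pi * A * abs(sx) * abs(sy)
```

The single scalar it returns couples magnitude (A) and extent (σx, σy), which is exactly why the abstract argues volume is a better comparison index than magnitude alone.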
      Keywords: urban heat island volume; land surface temperature; MODIS; remote sensing
    • FAN Kai-guo1, HUANG Wei-gen2, HE Ming-xia1, FU Bin2, GAN Xi-lin2
      Issue 5, Pages: 743-749(2008) DOI: 10.11834/jrs.20080596
      Abstract: A simulation model of SAR imaging of shallow water bathymetry has been developed, based on the SAR imaging mechanism of shallow water bathymetry and M4S, a program for microwave radar imaging of the ocean surface. This model remedies a drawback of the traditional simulation model, which could only simulate alternating dark and bright streaks of nearly equal relative radar backscatter, but could not simulate purely dark or purely bright streaks. In practice, all three expressions appear in shallow water bathymetry SAR images of the same area acquired at the same local time on different days. As far as we know, SAR cannot image shallow water bathymetry directly. It images the bathymetry via surface effects induced by the variation of tidal current flow over bottom topography under favorable currents and sea surface winds; thus the appearance of shallow water bathymetry in SAR images is closely related to hydrological and meteorological conditions. Herein, we describe the SAR simulation process for shallow water bathymetry according to its imaging mechanism, and use the improved simulation model to carry out a simulation study of the effect of wind on SAR imaging of shallow water bathymetry. The relationship between the sea surface wind and SAR imaging of shallow water bathymetry is then analyzed using both the simulation results and the analysis of shallow water bathymetry SAR images. These results confirm the relationship between sea surface winds and SAR imaging of shallow water bathymetry: under lower wind speed, the SAR backscattering cross section of the sea surface over shallow water bathymetry is darker, and vice versa. The relative intensity of the bright and dark streaks is also affected by wind speed, but wind speed is not the main factor. The effect of wind direction on SAR imaging of shallow water bathymetry is more important. When the wind direction is opposite or parallel to the direction of the current component over the shallow water bathymetry, the SAR backscattering cross section of the sea surface is brighter, and bright streaks or dark streaks, respectively, appear on the SAR images. Under crosswind, the SAR backscattering cross section of the sea surface is darker and the bright and dark streaks alternate; an opposing wind direction is more favorable for SAR imaging of shallow water bathymetry. In this study, we fully analyze the expression of shallow water bathymetry in SAR images under different sea surface wind speeds and directions, which is a necessary step for extracting quantitative information on shallow water bathymetry from SAR images.
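The qualitative wind dependence summarized above can be sketched as a toy model. This is not the M4S code or the paper's simulation: the functional form and all coefficients (`base`, `k_speed`, `k_align`) are illustrative assumptions that merely encode the reported tendencies (brighter at higher wind speed; up-/down-wind geometries brighter than crosswind):

```python
import math

def relative_backscatter(wind_speed, wind_dir_deg, current_dir_deg,
                         base=1.0, k_speed=0.05, k_align=0.3):
    """Toy model of relative SAR backscatter over shallow bathymetry.

    Qualitative behaviour only: brightness grows with wind speed, and
    up-/down-wind geometries (wind parallel or opposite to the tidal
    current component) are brighter than crosswind geometries.
    """
    # Alignment factor: |cos| is 1 for parallel/opposing wind, ~0 for crosswind.
    align = abs(math.cos(math.radians(wind_dir_deg - current_dir_deg)))
    return base + k_speed * wind_speed + k_align * align

# Up-wind geometry is brighter than crosswind at the same wind speed.
upwind = relative_backscatter(5.0, 180.0, 0.0)
crosswind = relative_backscatter(5.0, 90.0, 0.0)
```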
      Keywords: SAR; shallow water bathymetry; wind speed; wind direction
      Published: 2021-06-10
    • ZHOU Peng, PI Yi-ming
      Issue 5, Pages: 750-758(2008) DOI: 10.11834/jrs.20080597
      Abstract: In recent years, spaceborne/airborne bi-static synthetic aperture radar (SA-BSAR) has been proposed as a new kind of microwave remote sensor. The development of SA-BSAR is still at an early stage, since many challenges lie ahead. Owing to the huge difference between the satellites' and the aircrafts' velocities and flying altitudes, there are obvious differences between SA-BSAR and the earlier bi-static SAR systems that have symmetrical structures, such as spaceborne bi-static SAR and airborne bi-static SAR. Three basic and critical issues in SA-BSAR are discussed: geometry, Doppler properties, and the signal model. In the discussion of geometry, proper coordinate systems are first established to describe the movement of the satellite and the aircraft. Next, the appropriate flight path for the aircraft is quantitatively analyzed in detail from many aspects, including how to improve the range resolution, how to improve the azimuth resolution, how to receive the direct-path signal more conveniently, and how to make spatial synchronization easier. The following conclusions are drawn: the aircraft should fly on the same side of the imaging scene as the satellite; the differences among the platforms' course angles cannot be too large; the aircraft must be illuminated by the satellite's main beam within the imaging time; and the aircraft must appear at the right position just in time to cooperate with the satellite to implement the spatial synchronization. In the discussion of Doppler properties, several analytical formulae for computing Doppler parameters are first derived using the method of vector analysis. Afterwards, to make the formulae more practical, the derived expressions are transformed into an easy-to-use form expressed in terms of the orbit elements of the satellite, the satellite antenna pointing angles, and the kinematic parameters of the aircraft. In the discussion of the signal model, the model for SA-BSAR is first established based on the model for mono-static SAR, and then the great difficulties in designing the imaging algorithm are discussed. Through analysis of the established model, the following conclusion is drawn: the traditional Range-Doppler (RD) algorithm, after being properly modified and extended, can also be applied to the imaging of SA-BSAR when some special conditions are simultaneously met. These special conditions are: the synchronization scheme based on wide-beam illumination is employed; the satellite and the aircraft fly in parallel; and both the transmitter and the receiver operate in side-looking modes. To validate the above conclusions, two simulation experiments were performed. The simulation results verify the accuracy of the derived formulae for the calculation of the Doppler parameters, and also verify the applicability of the RD algorithm when these special conditions are met. On the basis of the research conducted in this paper, the imaging algorithm for the case where the synchronization scheme based on antenna steering is employed and the flight path of the aircraft is arbitrary can be designed in the next step.
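The bistatic Doppler relation underlying such derivations, f_d = -(1/λ)·d(R_T + R_R)/dt, can be sketched numerically. The straight-line geometry and all coordinates below are illustrative assumptions, not the paper's orbit-element formulae:

```python
# Hedged sketch: instantaneous bistatic Doppler from the transmitter and
# receiver range rates, f_d = -(1/lambda) * d(R_T + R_R)/dt.
# Platform positions/velocities are illustrative, not the paper's geometry.

def doppler(tx_pos, tx_vel, rx_pos, rx_vel, target, wavelength):
    """Bistatic Doppler via radial-velocity projection onto the lines of sight."""
    def radial_speed(pos, vel):
        # d|pos - target|/dt = (pos - target) . vel / |pos - target|
        d = [p - t for p, t in zip(pos, target)]
        r = sum(x * x for x in d) ** 0.5
        return sum(x * v for x, v in zip(d, vel)) / r
    return -(radial_speed(tx_pos, tx_vel) + radial_speed(rx_pos, rx_vel)) / wavelength

# Broadside, side-looking geometry: both velocity vectors are perpendicular
# to their lines of sight, so the Doppler centroid is zero.
f_broadside = doppler((0.0, -1.0e5, 7.0e5), (7500.0, 0.0, 0.0),
                      (0.0, -5.0e3, 8.0e3), (200.0, 0.0, 0.0),
                      (0.0, 0.0, 0.0), 0.03)
```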
      Keywords: spaceborne/airborne bi-static SAR; geometry; Doppler parameters; signal model
    • HUANG Yong-qi, CUI Wei-hong
      Issue 5, Pages: 759-764(2008) DOI: 10.11834/jrs.20080598
      Abstract: Spatio-temporal databases study how to store historical and current spatio-temporal data so as to track and analyze changes in a region, which ultimately enables spatio-temporal modeling and the simulation of geographical processes. At present there are mainly two approaches to spatio-temporal databases: one is adding a time dimension on the basis of a spatial database; the other is extending a spatial dimension on the basis of a temporal database. Research on spatial databases is already mature, so we should make full use of existing spatial data models and the spatial processing and analysis functions of GIS, which enormously reduces the workload. Consequently, this paper chooses to add a time dimension on the basis of a spatial database. An extended relational spatio-temporal database adds the time dimension by treating time as an attribute of spatial geometry and thematic features, which has particular advantages. First, the relational database is supported by strict relational algebra, and many large commercial relational databases are available; in addition, many information systems were originally established on the basis of relational databases. Second, much research on temporal databases adopts the relational model, extending relational algebra with temporal semantics and developing temporal structured query languages. Third, much research on spatial databases in the GIS domain also employs the relational data model, and research on spatial query languages is likewise based on the relational model. All in all, a spatio-temporal database adopting the historical relational database model can make the best of existing research results and systems, such as the temporal query function of temporal relational databases and the spatial processing and analysis functions of GIS, reducing the cost of establishing a spatio-temporal database. This paper first introduces the historical relational database model, temporal relational algebra, and temporal query languages, and then investigates how a historical relational database can organize temporal information in a spatial database. For a fully relational spatial database, this paper makes use of the temporally upward compatible function of ATSQL2 and adopts the mode of migrating a non-temporal database to a temporal database to extend spatial data with temporal information. In line with the organizing principle of spatio-temporal data, current data and historical data are stored separately. As a result, the spatio-temporal database comprises a current spatial database and a historical spatial database. The current database stores the latest base-state version of map layers, while the historical database stores historical base-state versions. If spatial or attribute data change, data of the new version are added to the current spatial database; at the same time, the original data become the old version and are shifted to the historical spatial database. An indexing pointer can be used to establish the relationship between the current and historical spatial databases. The paper finally gives an example of rotating wheat fields to examine the feasibility and validity of adopting the historical relational database model to establish a spatio-temporal database. The rotation of wheat fields has an obvious temporal trait. Generally speaking, wheat fields are unlikely to change in spatial geometry; most changes are changes of attributes such as plant type and owner. When the plant type of a wheat field changes, the old geographic entity is considered to disappear and a new one comes into being: we add a new record to the current database and transfer the original record to the historical database, and in the meantime adopt PreVersion and PostVersion to record the succession relationship between them. The result shows that a spatio-temporal database based on the historical relational model is feasible and valid. However, such a database still has shortcomings: it cannot yet handle transaction time, the time at which database objects are created, updated, and deleted. Consequently, future work should take the handling of transaction time into account.
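The version-shifting scheme described above (separate current and historical stores, linked with PreVersion/PostVersion pointers) can be sketched in a few lines. This is a hypothetical in-memory model, not the paper's ATSQL2-based implementation:

```python
# Sketch (not the paper's actual schema): when an attribute such as plant
# type changes, the old record is shifted from the current store to the
# historical store, and the two versions are linked by pre/post ids.

class SpatioTemporalStore:
    def __init__(self):
        self.current = {}   # feature_id -> latest version of the record
        self.history = []   # superseded record versions
        self._next_id = 1

    def insert(self, geometry, attrs):
        fid = self._next_id
        self._next_id += 1
        self.current[fid] = {"id": fid, "geometry": geometry,
                             "attrs": attrs, "pre": None, "post": None}
        return fid

    def update(self, fid, new_attrs):
        """Old version moves to history; a new linked version becomes current."""
        old = self.current.pop(fid)
        new_fid = self.insert(old["geometry"], new_attrs)
        old["post"] = new_fid
        self.current[new_fid]["pre"] = old["id"]
        self.history.append(old)
        return new_fid

store = SpatioTemporalStore()
f1 = store.insert("POLYGON(...)", {"plant": "wheat"})
f2 = store.update(f1, {"plant": "maize"})   # rotation: wheat -> maize
```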
      Keywords: spatial-temporal database; completely relational spatial database; historical relational database; rotating wheat fields
    • HUANG Bo, WU Bo, LIU Biao, CAO Kai
      Issue 5, Pages: 766-771(2008) DOI: 10.11834/jrs.200805100
      Abstract: Notwithstanding the astounding growth achieved by Geographic Information Systems (GIS) in recent decades, some critical bottlenecks continue to pose challenges to the advancement of this technology. The inability to handle large and diverse datasets and the lack of functionality to solve complex spatial problems, such as combinatorial location problems, are essential ones among these hurdles. Recent advances in Computational Intelligence (CI) and Operations Research have, however, opened up new avenues to overcome these obstacles to the development of GIScience. Judicious integration of the aforementioned techniques into GIScience could possibly lead to an innovative discipline, namely spatial intelligence. This paper presents our preliminary investigation of this subject, including its framework, major acquisition methods, and sample applications. The basic concept of spatial intelligence, derived from psychology, refers to the mental process associated with the brain's attempts to interpret certain types of information received. This information includes any kind of mental input, such as visual pictures, maps, and plans. Based on this concept, we propose that spatial intelligence within the domain of GIScience is the ability to discover and apply spatial patterns, which is usually elicited through analysis/mining, optimization, and simulation. Two characteristics of spatial intelligence are highlighted here. One is the ability of spatial cognition, and the other is the self-learning capability. Spatial cognition refers to the process of recognizing, encoding, saving, expressing, decomposing, constructing, and generalizing spatial objects, which can be obtained from spatial observation, spatial perception, spatial indexing, and spatial deductive inference. Self-learning includes reinforcement learning, adaptive learning, and knowledge-acquiring abilities to actively dig up knowledge from observation data. Promoting spatial intelligence is a logical requirement for higher-level analysis and application of GIScience. Through active learning and searching in complex spatiotemporal data, spatial intelligence discovers unknown spatial patterns, trends, and regularities. From the technical perspective, we emphasize the use of meta-heuristic and other intelligent algorithms to address complex geospatial problems. A three-tiered structure is proposed for the spatial intelligence framework. At the bottom of the framework, spatial statistics, programming, and intelligent computation provide the foundation of spatial analysis, simulation, and optimization. The middle level consists of spatial intelligence, self-learning, and spatial cognition for analyzing, simulating, interpreting, and making decisions about geospatial processes and phenomena. At the top of the framework, GIScience laws and regularities are used to mine unknown patterns. To realize the goal of spatial intelligence, solid research that integrates key topics with the concepts and modeling approaches derived from Information Science and Operations Research to advance GIS theory has been developed. The core supporting methods and techniques pertinent to the proposed framework include spatial analysis, simulation, and optimization. The development of spatial analytical models to represent spatial and temporal features and their relationships forms a vital aspect of this research. Spatial optimization is employed to maximize or minimize a planning objective, given the limited area, finite resources, and spatial relationships of a location-specific problem, once spatiotemporal patterns have been discovered. Simulation is an important tool to evaluate and improve models and spatial patterns. Some successful applications of spatial analysis, optimization, and simulation are also reported in this paper. Logistic regression models, e.g., binomial, multinomial, and nested logit, are applied and examined to predict various spatiotemporal changes, including rural to commercial, rural to recreational, and rural to other land uses. A range of heuristic algorithms, such as Genetic Algorithms (GA) and Ant Colony Systems, are studied to solve complex routing and location problems, along with multi-objective optimization for spatial decision making. A case study that integrates agent-based modeling with analytical models, drawing upon microscopic traffic simulation to emulate real-time traffic conditions, is also conducted.
      Keywords: spatial intelligence; spatial cognition; self learning; spatial optimization
    • GONG Jian-hua
      Issue 5, Pages: 772-779(2008) DOI: 10.11834/jrs.200805101
      Abstract: It is argued that the fundamental framework of current GIS is mainly designed from a land/landscape-oriented perspective. With the development of transportation, communication, networks, and computers, human beings play ever more important roles in man-landscape ecosystems, and traditional GIS, which is land/landscape-oriented or place-based, is not enough to deal with human-highlighted geoprocesses. For instance, current commercial GIS cannot adequately support the modeling of the spatiotemporal transmission of disease among people. Thus, human-oriented GIS studies the representation, computation, simulation, and analysis of human distribution in time and space, and of the spatiotemporal characteristics and laws of social and economic organizations and activity behavior within geographic environments. From the perspective of geovisualization, and on the basis of the human-oriented GIS concept, this paper discusses human-oriented geovisualization. Humans in human-oriented geovisualization are classified into three types, namely System User, Social Human, and Knowledge Human, corresponding respectively to users of GIS or visualization systems, the social behavior and activities of people studied in man-land relation systems, and the representation of people's spatial cognition and geospatial knowledge. A conceptual framework of human-oriented geovisualization is then addressed based on these three kinds of humans. Collaborative visualization and egocentric visualization are discussed under system-user-oriented geovisualization. In view of social-human-oriented geovisualization, the paper presents the latest research on visualization of space-time paths, visualization of crowd simulation, and visualization of social networks. In terms of knowledge-human-oriented visualization, knowledge visualization is related to the theory and methodology of geo-diagrams and geospatial information Tupu. The paper finally explores the key issues and technologies in human-oriented GIS and geovisualization from the aspects of ontology, information acquisition techniques, data organization, representation models, visual representation methods, and subject concepts.
      关键词:GIS;geovisualization;collaborative visualization;space-time path;crowd simulation;social network;knowledge visualization   
    • LI Xian-hua, XU Li-hua, ZENG Qi-hong, LIU Xue-feng, CHANG Jing, MAO Jian-hua
      Issue 5, Pages: 780-785(2008) DOI: 10.11834/jrs.200805102
      Abstract: The state of the atmospheric environment is reflected in the values of atmospheric path radiation in remote sensing data. As an important information resource, an atmospheric path radiation image derived from remotely sensed data can be used in many research fields, including research on the transmission properties of the atmosphere, atmospheric correction of remote sensing images, the study of both regional and urban atmospheric environments, and the monitoring of atmospheric environmental quality. The remote sensing data collected by Earth observation technology consist predominantly of ground information together with weak atmospheric information. An atmospheric path radiation image, which carries the atmospheric information, can be generated by separating the weak atmospheric signals from the strong ground returns in the remote sensing data. The key restriction on atmospheric environmental remote sensing lies in both the above process and the extraction of atmospheric information from the remote sensing data. It is therefore important for deriving quantitative atmospheric environmental pollution data and for precise atmospheric correction of remotely sensed digital images. After many years of research, a scheme for generating the atmospheric path radiation image by computing the path radiation value of each pixel from land reflectance is demonstrated in this paper. The principle and methodology for generating the heterogeneous atmospheric path radiation image are based on known surface reflectance. Atmospheric path radiation images at different wavebands, derived from multi-spectral or hyper-spectral remote sensing images, can monitor urban atmospheric pollution of different types and degrees (size and content of aerosol) without ground interference. In this paper, urban atmospheric pollution monitoring is demonstrated for Shanghai City, using the atmospheric path radiation images derived from MODIS multi-spectral imagery together with atmospheric pollution observational data (PM10 thickness). Using a fuzzy neural network, the input neurons comprise the MODIS atmospheric path radiation images (4 wavebands), the traffic density images, and the NDVI grid images; the output is a single neuron giving the synthetic evaluation classification image of atmospheric environment quality; and the network uses a Gaussian membership function and a sigmoid excitation function. According to the synthetic evaluation classification images, Shanghai's atmospheric environment quality is basically at the first and second standard classifications. Over the 6-day research interval, the spatial and temporal pattern of atmospheric environment quality changed obviously: on the 3rd, 9th, and 10th the atmospheric environment quality was mostly at the second level; on the 13th the atmospheric quality reached the highest standard; and on the 21st and 22nd it gradually dropped, with quality gradually returning to the second level. During transmission through the atmosphere, electromagnetic waves are influenced by the atmospheric path radiation (that is, the upward scattering along the optical path), which is related to atmospheric environmental quality parameters such as the atmospheric aerosol optical depth, total absorbing particulates, and contents of gaseous pollutants. At the same time, atmospheric path radiation is very useful information for the remote sensing retrieval of regional atmospheric environmental quality, and a supplement to the development of traditional atmospheric environmental remote sensing technology. In the future, we would expect this technique to be used in the retrieval of the atmospheric vapor content of the globe from GPS signals.
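A minimal sketch of the kind of fuzzy neural evaluation described: Gaussian membership functions fuzzify each input (four path-radiance bands, traffic density, NDVI), and a sigmoid output neuron combines them. All centres, widths, and weights are illustrative placeholders, not the trained network from the study:

```python
import math

def gaussian_membership(x, centre, width):
    """Fuzzy membership of input x in a class centred at `centre`."""
    return math.exp(-((x - centre) ** 2) / (2.0 * width ** 2))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def air_quality_score(inputs, centres, widths, weights, bias=0.0):
    """Combine fuzzified inputs through a single sigmoid output neuron."""
    memberships = [gaussian_membership(x, c, w)
                   for x, c, w in zip(inputs, centres, widths)]
    return sigmoid(sum(w * m for w, m in zip(weights, memberships)) + bias)

# 4 path-radiance bands + traffic density + NDVI (hypothetical values).
inputs = [0.20, 0.15, 0.30, 0.25, 40.0, 0.5]
score = air_quality_score(inputs, centres=list(inputs),
                          widths=[0.05] * 4 + [10.0, 0.2],
                          weights=[0.5] * 6)
```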
      Keywords: atmospheric path radiation; remote sensing image; urban atmosphere; pollution monitoring
    • CEN Yi, ZHANG Liang-pei, MURAMATSU Kanako
      Issue 5, Pages: 786-792(2008) DOI: 10.11834/jrs.200805103
      Abstract: Industrial development and human activities have greatly altered land cover over the past several decades. Moreover, the increased cutting of forests and burning of fossil fuels have raised carbon dioxide concentrations in the atmosphere and led to global temperature increases. Photosynthesis by vegetation removes carbon dioxide from the atmosphere and so plays an important role in the carbon cycle. Measuring net primary production (NPP) is a way to understand the photosynthetic capability of vegetation. NPP has been assessed from satellite data by several methods, including the light-use efficiency (LUE) model and the normalized difference vegetation index (NDVI), both of which commonly employ the red and near-infrared channels. In order to study zonal NPP effectively using multi-spectral satellite data, a new vegetation index based on the pattern decomposition method and an NPP estimation model taking photosynthetic saturation into account have been developed from field experiments. In this paper, we focus on estimating NPP for a temperate forest zone (Kii Peninsula, Japan, mainly covered by temperate forest) using MODIS data of 2001 with a spatial resolution of 500 m. To understand the photosynthetic capability of different vegetation types, we calculated NPP values for each land cover type using both the proposed method and an LUE model. Based on the land cover classification, global solar exposure, air temperature, and monthly average effective day length for vegetation photosynthesis, the proposed method estimated the following annual NPP values (in units of kg CO2/m2/a): evergreen, 2.04; deciduous, 2.23; farm, 1.74; paddy, 1.42; and urban area, 1.06. In comparison, the LUE model estimated: evergreen, 1.99; deciduous, 2.09; farm, 1.76; paddy, 1.53; and urban area, 1.23. An IPCC report lists NPP estimates for temperate forests of 2.29 and 2.86 kg CO2/m2/a. The annual zonal NPP values for the evergreen category calculated using the proposed method agree with those listed in the IPCC report within the algorithm error of 26%. To validate the proposed method, the results were compared with NPP values based on land surveys of a temperate forest plot and paddy areas. The forest survey took place at an 80 m × 80 m plot on Yoshino Mountain in Nara Prefecture. The forest results were 1.52±0.36 kg CO2/m2/a for the proposed method, 1.15 kg CO2/m2/a for the LUE model, and 1.50±0.75 kg CO2/m2/a for the survey data; the NPP estimates from the survey and the proposed method agreed within the permissible error. However, the paddy NPP estimated using satellite data (1.42 kg CO2/m2/a) was only about 60% of that from the field survey (2.48 kg CO2/m2/a). Because paddies, unlike a natural forest, receive various nutrient and water supplements, this may have affected the parameter calculation. Additional field surveys of paddy areas are planned in the Kii Peninsula to develop more precise paddy parameters. Although the NPP estimate for paddy was only 60% of the survey NPP, paddy accounted for only 3% of the Kii Peninsula zonal NPP and thus could be ignored here. Accounting for the 26% estimated error of the algorithm, for the whole region from 32°30′ to 36°24′N, 134°30′ to 137°06′E (area = 3.94×10^4 km2), the annual zonal NPP was calculated as 6.11±1.62 kg CO2/a. This study shows that the proposed NPP estimation method can be applied to temperate forest regions such as the Kii Peninsula. Verifying the method in other vegetation areas will lead to greater precision and allow NPP estimation on a global scale.
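Zonal NPP aggregation of this kind is an area-weighted sum of per-class NPP. The per-class values below are the proposed-method estimates quoted above; the class areas are hypothetical, not the actual Kii Peninsula land-cover breakdown:

```python
# Per-class annual NPP (kg CO2/m2/a), proposed-method estimates from the
# abstract. Class areas passed to zonal_npp() are illustrative only.
NPP_PER_CLASS = {
    "evergreen": 2.04, "deciduous": 2.23, "farm": 1.74,
    "paddy": 1.42, "urban": 1.06,
}

def zonal_npp(areas_m2):
    """Total annual NPP in kg CO2/a for the given class areas (m2)."""
    return sum(NPP_PER_CLASS[c] * a for c, a in areas_m2.items())

# Hypothetical 1 km2 of evergreen plus 2 km2 of paddy.
total = zonal_npp({"evergreen": 1.0e6, "paddy": 2.0e6})
```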
      Keywords: NPP; PDM; VIPD; MODIS
    • MA Zhong-dong, LAN Tu, NIE Zhi-gang
      Issue 5, Pages: 793-799(2008) DOI: 10.11834/jrs.200805104
      Abstract: In recent years, international migration has become more and more common, while its patterns are getting increasingly complex. After several years of living in a destination, immigrants often make repeat migrations, either returning to their home country or moving onward to another host country. Researchers have developed new methodologies to study the flows and patterns of transnational movements, and new theories to explain them. One of the new developments in this respect is the triangular model of human capital transfer among nations by DeVoretz and Ma (2002), which emphasizes the dynamic nature of international migration in the context of regional development. It argues that immigrants often enter an entrepot country to accumulate transnational human capital and other capital, such as citizenship. After acquiring such capital, they can choose to stay in the country, return to their home countries, or move to another host country, in order to maximize the return on their acquired transnational capital. New challenges are posed to the study of transnational flows of human capital, requiring the use of multiple censuses of the sending country, the entrepot country, and the major hosting country. Due to structural differences, censuses from different countries are normally in different formats and cover populations within different national boundaries. As a result, most current research on international migration is limited to either the sending or the receiving countries. Efforts are needed to integrate these data sets into a standardized one, so that variables can be unified for direct comparison and further analysis. We adopt a new approach to integrating census micro-data of different countries into one unified framework. The framework includes two major parts: the statistical part and the mapping part. We start with the statistical part by using open-source software packages, such as PHP and MySQL, to implement an integrated micro-database system. The micro-database includes censuses from multiple countries/regions, including the US, Canada, and Hong Kong. In order to enable automated analysis, we first select common variables from the different censuses and then standardize each of them to the same unit or category. These standardized variables are called either identifiers or indicators: identifiers are variables used to identify similar population groups across censuses, and indicators are variables used for comparison among groups. In the demo system, we used a total of 10 identifiers and 3 indicators. With the integrated database, we designed a search module and a statistics module. The search module uses key identifiers to search for specific population groups in different censuses; the result is listed as tabulations to support further studies. The statistics module takes previous tabulations as input and outputs results of statistical analysis, including cross-country/region comparison and uni-variable analysis (maximum/minimum/mean/std) as tables, graphs, and pre-map files. The second part of the framework integrates the micro-database with a GIS database, the result of which can be used for mapping purposes. The statistical outputs from Part 1 are processed by the mapping module of Part 2, and the system creates online map visualizations. This paper mainly introduces the first part of the framework; we are still working on the second part. When both parts are finished and integrated, this system will integrate spatial information with census micro-data of different countries/regions, and provide a unified web-based demographic information system to facilitate flexible and advanced international migration analysis.
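The identifier/indicator idea can be sketched as follows: recode a variable from each census to a common coding, then select comparable groups from the pooled data. Field names and recoding tables are hypothetical, not the actual census codebooks (and the real system uses PHP/MySQL rather than Python):

```python
# Hypothetical recoding tables mapping each census's education codes to a
# shared standard (the identifier); income serves as an indicator.
EDU_RECODE = {
    "us": {"11": "primary", "12": "secondary", "13": "tertiary"},
    "hk": {"P": "primary", "S": "secondary", "T": "tertiary"},
}

def standardize(census, record):
    """Return a copy of the record with the education field recoded."""
    out = dict(record)
    out["education"] = EDU_RECODE[census][record["education"]]
    return out

def select_group(records, **identifiers):
    """Return records matching all identifier values (the search module idea)."""
    return [r for r in records
            if all(r.get(k) == v for k, v in identifiers.items())]

us = [standardize("us", {"education": "12", "income": 30000})]
hk = [standardize("hk", {"education": "S", "income": 25000})]
pooled = select_group(us + hk, education="secondary")
```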
      Keywords: international migration; census data; international demographic information system
    • KONG Yun-feng, LI Xiao-jian, ZHANG Xue-feng
      Issue 5, Pages: 800-809(2008) DOI: 10.11834/jrs.200805105
      Abstract: As a result of birth rate decreases in China, the total number of pupils has steadily declined since the 1990s. Inevitably, redistricting primary and secondary schools is an important issue for local governments. Meanwhile, along with a number of school closures in recent years in rural China, new problems are frequently reported, such as long travel distances to school for some communities, class sizes exceeding quotas in many schools, and an uneven distribution of teaching resources among schools. For school redistricting planning, both educational equality and school efficiency are important. The objective of this paper is to assess educational equality by introducing geographic information systems (GIS) and the concept of spatial accessibility into school redistricting. Based on a literature review of the planning principles and methods for school redistricting, considering educational need and supply and service equality and efficiency, the authors introduce a framework for school planning in rural China. The key technical procedures are building geographic databases covering schools, geodemographics, and transportation networks, and calculating the spatial accessibility of school services for every community using GIS. The spatial accessibility indices are used to assess the spatial equality of educational service and to measure the rationality of school redistricting. The spatial relationships between population distribution and secondary schools in Gongyi City, Henan Province are evaluated using five accessibility measures, i.e. the ratio model, nearest-distance model, chance accumulation model, gravity model, and improved gravity model. The case study shows uneven spatial patterns in terms of per capita educational resources, distance to the nearest school, the chance of school choice, and the balance between school supplies and community needs. The locational advantages or disadvantages identified in the thematic maps have potential for future school planning. The authors argue for introducing GIS technology and spatial accessibility models into school redistricting in rural China. It is also suggested to explore and verify the spatial patterns of school choice and to estimate detailed school supplies and community needs using the Huff model, which lays a theoretical foundation for school facility planning.
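An improved gravity model of the kind listed above can be sketched as follows; the competition term V_j sums the demand converging on each school, so that well-placed communities near under-subscribed schools score higher. The distance-decay exponent and all numbers are illustrative, not calibrated values from the Gongyi City study:

```python
# Sketch of a competition-adjusted gravity accessibility index:
#   A_i = sum_j  S_j * d_ij^(-beta) / V_j,
#   V_j = sum_i  D_i * d_ij^(-beta)
# where D_i is community demand, S_j school supply, d_ij distance.

def gravity_accessibility(demand, supply, dist, beta=2.0):
    """demand[i]: pupils at community i; supply[j]: school capacity;
    dist[i][j]: distance from community i to school j. Returns A_i."""
    n, m = len(demand), len(supply)
    # Competition: potential demand converging on each school j.
    V = [sum(demand[i] * dist[i][j] ** -beta for i in range(n))
         for j in range(m)]
    return [sum(supply[j] * dist[i][j] ** -beta / V[j] for j in range(m))
            for i in range(n)]

# Two communities, one school; community 0 lies closer to the school.
access = gravity_accessibility([100, 100], [50], [[1.0], [2.0]])
```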
      Keywords: school redistricting; GIS; spatial accessibility; Gongyi City
    • LU Guo-nian, LIU Ai-li
      Issue 5, Pages: 810-818(2008) DOI: 10.11834/jrs.200805106
      Abstract: The demand for DEM data has dramatically increased, mainly due to the large number of applications capable of exploiting DEM data. As in many other fields, the Internet has become a preferred and effective means of data exchange, and the online purchase and distribution of DEMs can now be performed relatively easily. It follows that copyright protection is becoming increasingly important. Watermarking represents a potentially effective tool for the protection and verification of ownership rights in DEMs. But before applying watermarking techniques developed for multimedia applications to DEMs, it is important that the requirements imposed by DEMs are carefully analyzed, to investigate whether they are compatible with existing watermarking techniques. On the basis of these motivations, an overview of watermarking is given, and the contribution of this work is twofold: ① The requirements imposed by the characteristics of DEMs on watermark-based copyright protection are assessed and compared with those of ordinary digital images. In the multimedia field, the watermark must be transparent to the human visual system. In the DEM context, this requirement can be rephrased as "not altering the elevation precision or the results of applications of the DEM, besides transparency". This is called the near-lossless requirement in this paper. It is worth noticing that near-losslessness is considered more important than the robustness requirement, because almost nobody would attack a DEM whose quality has been degraded. Adaptivity is another characteristic of DEM watermarking: generally speaking, DEMs requiring high precision can carry only less watermark energy, while DEMs requiring low precision can carry more. Since different precision standards are established for different DEM landforms, the embedded watermark energy should also adapt to the landform. This ensures that the watermarked DEM meets the accuracy requirements of DEMs while as much watermark energy as possible is inserted. In addition, the efficiency of watermark algorithms should also be considered. ② A case study is discussed in which the performance of two popular, state-of-the-art watermarking techniques is evaluated in the light of the requirements at the previous point. One is based on block-based DCT techniques using the Watson perceptual model; the other is based on DWT techniques using the PWM perceptual model. The two algorithms are applied to a 512×512 DEM of the Loess Plateau with a horizontal grid spacing of 5 m. The watermark is a 64×64 binary image. In the experiment, the watermarked DEMs are transparent to the human visual system. In order to evaluate the impact of the watermark's presence on the DEM, we considered the histogram, slope, slope of slope, contours, collection waterlines, and terrain structure lines derived from the two watermarked DEMs. It has been shown that the presence of the watermark has almost no impact on the histogram. However, the results of the applicative tasks in digital terrain analysis are heavily affected, not only visually but also in their values. For the DWT algorithm, the contours, collection waterlines, and terrain structure lines derived all exhibit poorer precision. Obviously, watermarking techniques evaluated by the human visual system do not always meet the requirements of elevation precision and DEM applications, and watermark embedding should be strictly limited to avoid DEM quality degradation. In the future, work will aim at the development of watermarking techniques specifically designed for DEMs, so as to minimize the impact on applications.
      Keywords: DEM; applicability; watermarking; copyright protection
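The adaptive near-lossless constraint described above — bounding watermark energy by a landform-dependent elevation tolerance — can be sketched with a toy additive spread-spectrum embedder. This scheme and all names below are illustrative assumptions, not the DCT/DWT algorithms evaluated in the paper:

```python
import random

def embed_watermark(dem, bits, tolerance, seed=42):
    """Additive spread-spectrum embedding with a landform-dependent amplitude.

    dem: flat list of elevations; bits: watermark bits (0/1);
    tolerance: maximum allowed elevation change (the near-lossless bound).
    Illustrative only -- not the paper's DCT/DWT algorithms.
    """
    rng = random.Random(seed)
    # Pseudo-random carrier: one +/-1 chip per DEM sample.
    carrier = [rng.choice((-1, 1)) for _ in dem]
    alpha = tolerance  # amplitude chosen so each change stays within tolerance
    marked = []
    for i, z in enumerate(dem):
        b = 1 if bits[i % len(bits)] else -1
        marked.append(z + alpha * b * carrier[i])
    return marked

dem = [1500.0 + 0.5 * i for i in range(64)]   # toy elevation profile
bits = [1, 0, 1, 1]
marked = embed_watermark(dem, bits, tolerance=0.2)
max_change = max(abs(a - b) for a, b in zip(marked, dem))
assert max_change <= 0.2 + 1e-9  # the near-lossless bound holds
```

A stricter precision standard (smaller `tolerance`) directly shrinks the embeddable watermark energy, which is the adaptivity trade-off the abstract describes.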
      Published: 2021-06-10
    • LI Xiang, SHU Jiong, LIU Zheng-jun
      Issue 5, Pages: 819-824(2008) DOI: 10.11834/jrs.200805107
      Abstract: A widely applied form of security surveillance in micro-spatial environments such as the interior of a building, also known as closed-circuit television (CCTV), is to have security personnel sit in front of a wall of monitors, each corresponding to a spatially distributed video camera, and respond (e.g. with alarms, access control, etc.) to any emergency or suspicious object. In particular, if a watched object is mobile, e.g. a thief, security personnel must keep track of its movement by frequently switching their visual focus among monitors. Even for experienced personnel, tracking becomes confused if the number of monitors is large or the monitored object moves fast. To the best of our knowledge, there is still a lack of approaches that provide security personnel with leads for tracking and estimating the movement of monitored objects in a micro-spatial environment. In view of this, we present a GIS-based approach to assist indoor video surveillance. In the proposed approach, all accessible places in a building and the locations of the video cameras are represented with a network data model. Generally, monitored objects can only stay or move in the accessible places of a building, e.g. rooms, exits (gates, stairs, elevators, etc.), and hallways. These accessible places can be represented with a node-arc network data model: the network consists of nodes and arcs, where nodes, corresponding to rooms or exits, are the end points of arcs, and arcs, corresponding to hallways, can only meet each other at a node. Monitored objects can only reach a node (i.e. a room or exit) through arcs (i.e. hallways). With this network representation, the topological relationships among accessible places are clearly expressed, and network analysis can be employed to calculate accessibility within a building. In some cases, rooms or exits are linked not by a linear hallway but through a lobby. Although a lobby can also be treated as a room, a single node at its geometric center is not enough to reflect the walking behavior of people within it; therefore, additional arcs are needed to fill the spacious area of a lobby. Our approach is to link the geometric center to the associated rooms or exits with arcs, and then link the middle points of these arcs with additional arcs. The above procedure represents accessibility on a single floor; to represent the accessibility of a multi-level building, we link the stair or elevator exits of different floors with arcs. Each camera is represented as a point coincident with a node, or lying along the arc closest to the camera's location, and is located with a linear-referencing system (LRS), a well-known data model in GIS for transportation (GIS-T). The monitor corresponding to a camera generally displays the field of vision (FOV) through that camera, and for each camera we can delineate its FOV on the node-arc network. Depending on the installation angle of a camera, its FOV may or may not surround it; in this paper, we suppose each camera is surrounded by its FOV. This definition lets us encode the topological relationships between one camera's FOV and nearby objects, including other cameras' FOVs and rooms, in order to provide security personnel with leads for tracking monitored objects. For each camera, eight attributes record the directly linked cameras and rooms in four directions. Based on the network model and this data structure, a number of applications can be realized; the one elaborated in this paper is locating suspicious moving objects within a short time. Besides the procedure itself, a detailed description explains how to implement it and how to link the research output to the monitors and cameras.
      Keywords: GIS; indoor space; video surveillance; accessibility; node-arc model
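The node-arc accessibility model described above can be sketched as a simple adjacency structure with a breadth-first accessibility query. The building layout and all names below are invented for illustration; they do not come from the paper:

```python
from collections import deque

# Node-arc model: nodes are rooms/exits, arcs are hallways (illustrative layout).
arcs = {
    "room101": ["hall_a"],
    "hall_a": ["room101", "room102", "gate"],
    "room102": ["hall_a", "hall_b"],
    "hall_b": ["room102", "elevator"],
    "gate": ["hall_a"],
    "elevator": ["hall_b"],
}

def reachable_within(start, max_arcs):
    """Nodes a monitored object could reach from `start` by crossing
    at most `max_arcs` arcs -- a simple accessibility query via BFS."""
    seen = {start: 0}           # node -> number of arcs crossed
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_arcs:
            continue            # depth limit reached on this branch
        for nxt in arcs[node]:
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                queue.append(nxt)
    return seen

hops = reachable_within("room101", 2)
# Within 2 arcs the object could be in hall_a, room102, or at the gate,
# but not yet at the elevator -- these are the cameras worth watching next.
```

Restricting the search depth is what turns the topology into "leads": only cameras whose FOV covers a reachable node need attention.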
      Published: 2021-06-10
    • TUAN Chyau,NG Fung-yee Linda
      Issue 5, Pages: 825-830(2008) DOI: 10.11834/jrs.200805108
      Abstract: Investment facilitation and promotion has been a major strategy adopted by cities in both the Yangtze River and Pearl River Deltas in China for economic growth during the past two decades, as well as a major driving force of economic development in the two delta regions. Besides "preferential" investment policies, the provision of clear, comprehensive information for investors is important for attracting investment and essential for success, given the keen competition among these cities. A Geographic Information System (GIS) provides an effective platform for the construction and visual display of experimental economic indicators of an investment environment. This paper gives a brief report on "The Globalized Delta Economies in China: Visualized Economic Indicators for Investment Environment" and discusses how GIS can be better utilized for investment promotion.
      Keywords: Geographic information system (GIS); investment environment; visualized economic indicators; Yangtze and Pearl River Delta; China
      Published: 2021-06-10
    • XIE Chuan-jie,LIU Gao-huan,GAO Bing-bo,SHENG Wen-tao
      Issue 5, Pages: 831-836(2008) DOI: 10.11834/jrs.200805109
      Abstract: With the application of the Spatial Information Grid (SIG), the spatial information managed by SIG is becoming more and more abundant, which calls for better support for distributed spatial queries across SIG. However, remote spatial join queries are always the bottleneck in distributed spatial querying. Based on this observation, this paper optimizes spatial join queries by taking full advantage of grid computing resources according to the characteristics of spatial information. First, a software architecture for distributed spatial queries is designed on top of different grid services. The architecture is composed of three kinds of grid services, namely the Distributed Spatial Data Query Grid Service (DSDQGS), the Spatial Data Grid Service (SDGS), and the Remote Spatial Join Query Grid Service (RSJQGS); these three kinds of grid services cooperate to optimize and execute distributed spatial data queries. In this architecture, grid computing resources are utilized by the grid services that execute remote spatial join queries. Secondly, partitioned parallel spatial join queries are implemented with a Kd-Tree spatial partition scheme, in which an original spatial query is rewritten into several sub-queries bounded by the sub-regions of the Kd-Tree nodes; these sub-queries can run concurrently, so the performance of remote spatial join queries is improved. A cost model for partitioned parallel spatial join queries is also presented: the cost of a remote spatial join query consists of two parts, the computing cost of the join operation and the communication cost of transferring the spatial data. Thirdly, an optimization algorithm that generates the remote spatial join query plan is designed according to the cost model. The plan prescribes how the spatial join query is executed, including the scheme for the partitioned parallel spatial join, the SDGSs participating in the join, and the assignment of the partitioned parallel join tasks to RSJQGSs. Cost is the benchmark for evaluating a remote spatial join query plan. The parameters used in the optimization algorithm are managed as WSRF resource properties. The optimization algorithm finishes when all identified spatial join operators have been processed. Finally, future research directions for the optimization of distributed spatial queries on SIG are discussed.
      Keywords: spatial information grid; remote spatial join queries; distributed spatial query optimization; parallel query
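The Kd-Tree partition scheme above — rewriting one query into concurrently runnable sub-queries, each bounded by a leaf region — can be sketched as follows. The point-based partitioning and all names are illustrative, not the paper's grid-service implementation:

```python
def kd_partition(points, depth=0, leaf_size=4):
    """Recursively split a 2-D point set along alternating axes (a Kd-tree).
    Returns the leaf regions as (min_x, min_y, max_x, max_y) tuples;
    each region would bound one rewritten sub-query that can run in parallel.
    Illustrative sketch only -- not the paper's actual grid-service code."""
    if len(points) <= leaf_size:
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        return [(min(xs), min(ys), max(xs), max(ys))]
    axis = depth % 2                      # alternate x / y splits
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                # median split balances the workload
    return (kd_partition(points[:mid], depth + 1, leaf_size) +
            kd_partition(points[mid:], depth + 1, leaf_size))

pts = [(x, y) for x in range(4) for y in range(4)]  # 16 toy feature centroids
regions = kd_partition(pts)
# 16 points with leaf_size 4 -> 4 bounding regions, i.e. 4 concurrent sub-queries
```

The median split keeps sub-queries roughly equal in size, which matters when the cost model assigns them to different RSJQGSs.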
      Published: 2021-06-10