Abstract: As a next-generation geographic analysis tool, Virtual Geographic Environments (VGEs) have remarkable advantages in enhancing human cognition of the geographic world and in solving geographic problems. Most current studies focus on VGE structures and technologies; however, research on basic theory, such as geographic semantics and data representation models, remains scarce. In particular, the weakness of VGE cognitive theory has restrained the further development of VGEs. VGE cognition attempts to improve the usability, efficiency, equity, and profitability of VGEs with geo-knowledge obtained through synthesized research on the rules of human geographic cognition as well as on the nature and construction methods of VGEs. This research provides not only an effective guide to VGE construction but also theoretical and methodological guidance for geographic cognition, geographic analysis, and the modeling of remote sensing information. This article reviews several understandings of VGEs and suggests that cognition research on VGEs should be derived from the perspective of system theory to contribute to the future scientific development of VGEs. From this perspective, VGEs are complex systems comprising human environments, virtual environments created by computers, and realistic environments. The differences between the cognition of VGEs and other similar concepts are discussed to draw clear boundaries between them. Geographic and map cognition are important bases of VGE cognition, but a new cognitive research framework must be developed with reference to VGE characteristics. Thus, a cognitive research framework is developed from the similarities between realistic and virtual environments, including thinking, perception, geometry, characteristic, and discreteness similarities. Thinking similarity is both the focus and the difficulty of VGE cognitive research; it is also the basis for building VGEs that are consistent with human cognition. Human cognition proceeds through image schema, category, concept, and meaning. Thus, thinking similarity should include the specialization of human cognition models for geographic environments, context model building of VGEs, computer models of geographic concepts, and the formalization and contextualized representation of geographic models. Perception similarity is an important factor in generating immersion and is key to distinguishing VGEs from other spatial cognition tools. Thus, the multi-modal interaction, dynamic representation adaptability, cognitive load, and work efficiency experimentation of VGEs should be studied. VGEs should balance realistic and non-realistic representation, which is called realistic representation adaptability, to achieve geometry similarity. Given the multi-scale characteristics and scale dependence of both geographic processes and study methods, characteristic similarity should include multi-scale adaptability and the representation of formalization models. VGE uncertainty is produced when discrete geographic data are translated into a continuous spatio-temporal process representation, an aspect that is lacking in VGE research. Based on the above framework, VGE cognitive research is placed in its wider context. The function of VGEs as a geographic experiment platform and the role of humans in VGEs are also emphasized. This study is of great significance in deepening VGE research.
Keywords: virtual geographic environment; geographic environment; system theory; cognitive research framework; cognitive mode
Abstract: An internal wave is a common phenomenon that occurs in the pycnocline of the ocean. Its properties are closely related to ocean parameters such as stratification. An internal wave is normally generated when tidal currents encounter rough topography in the deep ocean. Many internal waves in the South China Sea (SCS) have been detected from remote sensing images; as a result, research on these waves has become popular in recent years. With the objective of exploring internal wave propagation and accurately inverting internal wave parameters, this study investigates a new model. The Nonlinear Schrödinger (NLS) equation is established for a fluid with a three-layer stratification configuration as the theoretical model, the SAR remote sensing detection model is presented based on the NLS equation, and the inverse relation of the internal wave parameters is also proposed. The three-layer NLS equation is based on the basic equations of fluid dynamics. Assuming that ocean water is stratified in a three-layer model, the Laplace and Bernoulli formulas of hydromechanics are first used. By introducing the multiple-scale and perturbation expansion method, the governing equation can then be transformed into the NLS equation. The analytic solution is also obtained, and the physical significance of several parameters in the equation is analyzed. The internal wave amplitude is inverted at the Dongsha Atoll in the SCS and the Malin shelf edge of the United Kingdom to verify the accuracy of the model. Inversion results are in accordance with the in situ measurement data. Moreover, the results of the three-layer NLS equation are more accurate than those of two-layer models, such as the KdV and two-layer NLS equations. This finding proves that our NLS equation in a three-layer system is suitable for studying internal wave parameters. The three-layer NLS equation is quite close to real ocean conditions, and a clear view of internal waves can be obtained with this model. The inversion results show that the three-layer NLS equation is more suitable for studying internal waves in the SCS and Malin shelf areas than the two-layer models.
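The abstract does not reproduce the derived coefficients, which depend on the three-layer stratification; for orientation, the canonical cubic NLS equation for the wave envelope A(x, t) and its envelope-soliton solution take the standard form below. The dispersion coefficient α and nonlinearity coefficient β stand in for the paper's stratification-specific coefficients obtained via the multiple-scale perturbation expansion.

```latex
% Canonical cubic NLS equation for the internal-wave envelope A(x, t);
% alpha (dispersion) and beta (nonlinearity) follow from the three-layer
% stratification via the multiple-scale perturbation expansion.
\[
  i\,\frac{\partial A}{\partial t}
  + \alpha\,\frac{\partial^{2} A}{\partial x^{2}}
  + \beta\,\lvert A\rvert^{2} A = 0
\]
% For alpha*beta > 0 it admits the envelope-soliton solution
\[
  A(x, t) = A_{0}\,\operatorname{sech}\!\Bigl(A_{0}\sqrt{\tfrac{\beta}{2\alpha}}\;x\Bigr)
            \exp\!\Bigl(\tfrac{i\,\beta A_{0}^{2}}{2}\,t\Bigr)
\]
```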
Abstract: This study explores the potential of using ASTER data to extract lithology information on the ophiolitic mixtite belt in the Tonghuashan region of southern Tianshan. First, we used band ratios to quickly obtain information on rock assemblages. Second, we chose a number of lithological identification indices established in previous research, including the Quartz Index (QI), Carbonate Index (CI), and Mafic Index (MI). We used these indices to distinguish among the silicates, carbonate rocks, and mafic-ultramafic rocks in the study area, and we also compared their performance in identifying the same kind of lithology. Third, we applied the log residual algorithm to all the shortwave infrared (SWIR) bands of the ASTER data to examine its effectiveness in detecting ophiolite components. Lastly, we employed laboratory spectral data and the spectral angle mapper method to identify the spatial distribution of the ophiolite complex. Quantitative evaluation results were obtained using the confusion matrix, and the related evaluation factors were derived after separately applying the method to the ASTER SWIR and visible and near-infrared (VNIR)-SWIR data. In this study, we found that the band ratio method is suitable for suppressing the influence of the terrain and that it helps identify and understand geological information. On the one hand, the lithology index QI (2008) image displayed less noise and greater accuracy than the lithology index QI (2005) image in showing the spatial distribution of silicate rocks, and the lithology index CI (2005) image was more accurate than the CI (2003) image for the distribution of marble. On the other hand, the lithology index MI and QI (2008) images corresponded well with each other and showed most of the ultramafic rocks in the area. The log residual algorithm successfully discriminated the ophiolite complex body from the surrounding rocks at the regional scale. The method of using the standard spectral data and spectral angle mapper to detect the spatial distribution of the ophiolite complex was relatively effective, as demonstrated by comparing its results with the existing geological map and field observations. The distribution of mafic-ultramafic rocks was consistent with the geological map, but the peridotite in the southeast was not extracted because it was serpentinized, and the serpentine is heavily overprinted by carbonation. Some peridotite outcrops that are not shown in the geological map were also found. The distributions of dunite and peridotite were similar because both are mainly composed of olivine. Gabbro and diabase outcrops were small and often symbiotic in the field; thus, the accuracies of their recognition were low. The extracted extent of basalt was larger than that in field observations because mafic lava is usually symbiotic with peridotite and gabbro and was altered into chlorite and epidote by the hydrothermal alteration of the greenschist facies. However, the results for marble recognition were more accurate than those for mafic-ultramafic rocks. The quantitative evaluation with the confusion matrix demonstrated that classification using the combined ASTER VNIR and SWIR bands achieves a higher overall accuracy and Kappa coefficient than using the ASTER SWIR bands alone. The band ratio and log residual algorithm methods can obtain general lithology information on ophiolite.
Applying laboratory spectral data and the spectral angle mapper to ASTER VNIR-SWIR data helps obtain more accurate and detailed information on the spatial distribution of ophiolite components. However, the potential of combining other methods with those mentioned above must be explored further to improve the accuracy of lithology identification.
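The spectral angle mapper used above is a standard matching technique: it measures the angle between each image spectrum and a reference spectrum, which makes it largely insensitive to illumination differences. A minimal sketch follows, assuming hypothetical array shapes and an angle threshold; the paper's actual library spectra and thresholds are not reproduced here.

```python
import numpy as np

def sam_classify(image, references, max_angle=0.1):
    """Assign each pixel to the reference spectrum with the smallest spectral
    angle, or -1 if the best angle exceeds max_angle (radians).
    image: (rows, cols, bands); references: (n_classes, bands)."""
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands)
    # Normalize so the dot product equals cos(theta); SAM is insensitive
    # to multiplicative illumination (gain) differences.
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = references / np.linalg.norm(references, axis=1, keepdims=True)
    angles = np.arccos(np.clip(p @ r.T, -1.0, 1.0))   # (n_pixels, n_classes)
    labels = np.argmin(angles, axis=1)
    labels[angles.min(axis=1) > max_angle] = -1       # unclassified
    return labels.reshape(rows, cols)
```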
Abstract: Embankments have been one of the most important flood control measures since ancient times, and their extraction and detection serve an important role in flood prevention. However, traditional optical images cannot yield precise three-dimensional embankment models from feature points because dikes are always surrounded by trees and roads. The airborne Light Detection And Ranging (LiDAR) system can directly obtain high-accuracy three-dimensional information on a river basin. Dike points can be classified directly from the original LiDAR point data using spatial semantic features, including the slope and the topological relation between the river and its bank, without any additional data source. The method presented in this paper is important for the automatic extraction of embankment models. This paper presents a method that performs data pre-processing, water edge judgment, and slope fitting of extracted dike points using only the LiDAR point data. (1) The water and non-water areas are separated using echo, intensity, and point density properties, and the water edge is obtained by gridding the point data. (2) The searching direction along the water edge is judged and followed. (3) The embankment points are extracted using the least squares method after excluding water edges without embankments. The experimental data cover the cities of Changyang, Shashi, and Xianning in Hubei Province and are used to test the validity of the algorithm; the point densities for these three areas are 1 point/m², 1 point/m², and 2 points/m², respectively. Checkpoints collected by GPS RTK and distributed randomly along the riverside are used to verify the precision of the method. The test results show that the error of embankment brink extraction is less than 0.43 m, 0.48 m, and 0.30 m in Changyang, Shashi, and Xianning, respectively, which indicates that the accuracy is within one-half of the point interval. The error of embankment elevation is less than 0.13 m, 0.30 m, and 0.08 m, respectively. The test results show that the proposed method can successfully extract river embankments using only LiDAR data based on semantic information. Moreover, a relatively high-accuracy result can be achieved when the dike structure is simple and the point data are dense.
Keywords: LiDAR point cloud; object extraction; semantic feature; embankment slope; point segmentation
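The least-squares slope fitting in step (3) of the preceding abstract can be illustrated with a minimal sketch: fit a straight line to a bank cross-section profile (horizontal distance from the water edge versus elevation) and report the slope angle. The profile values and the line model are illustrative assumptions; the paper's actual fitting and filtering details are not specified in the abstract.

```python
import numpy as np

def fit_bank_slope(distance, elevation):
    """Least-squares line fit of a bank cross-section: elevation = a*distance + b.
    distance: horizontal distance from the water edge (m); elevation: LiDAR
    point heights (m). Returns the slope angle (degrees) and RMS fit residual."""
    A = np.column_stack([distance, np.ones_like(distance)])
    coef, *_ = np.linalg.lstsq(A, elevation, rcond=None)
    residual = A @ coef - elevation
    return np.degrees(np.arctan(coef[0])), np.sqrt(np.mean(residual ** 2))

# Hypothetical cross-section sampled from classified bank points
d = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
z = np.array([21.00, 21.30, 21.55, 21.90, 22.15, 22.50])
slope_deg, rms = fit_bank_slope(d, z)
print(f"slope = {slope_deg:.1f} deg, RMS = {rms:.3f} m")  # ~31 deg on this profile
```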
Abstract: Launched in 2010, "Mapping Satellite-1" (TH-1) is the first stereo surveying and mapping satellite from China. During the real operation of TH-1 in the space environment, the attitude angles of the star sensor, the lens geometric parameters, and the CCD undergo unpredictable changes that affect the positioning accuracy of satellite images. This study conducts self-calibration block adjustment research on TH-1 satellite images to achieve high image positioning accuracy. The investigation proceeds as follows: First, the lens and CCD geometric distortions of the three-linear-array stereo surveying and mapping cameras of TH-1 are analyzed, and a self-calibration model suitable for the satellite's three linear CCD images is proposed. The self-calibration block adjustment model is then built for these images, and an overall adjustment is applied to the exterior orientation elements and self-calibration parameters to eliminate systematic errors in the observations and to improve positioning accuracy. Finally, the Songshan test field is used to support the self-calibration block adjustment processing of the three linear images of TH-1 and to verify the correctness and validity of the self-calibration and self-calibration block adjustment models. The influence of the number of control points on self-calibration is also analyzed. Experimental results demonstrate that the self-calibration block adjustment technique can effectively eliminate systematic positioning errors and significantly improve positioning accuracy. The spatial resolution of the three linear array images of TH-1 is 5 m. After normal block adjustment, the optimal plane accuracy along the X- and Y-directions is approximately 20 m, with a height accuracy of approximately 20 m and a plane accuracy of 4 pixels. After self-calibration block adjustment, the optimal plane accuracy improved to 5 m to 6.5 m (i.e., less than 1.3 pixels), and the height accuracy improved to 5 m. Through a series of experiments and analyses, the following conclusions can be drawn: (1) The auxiliary orbit and attitude determination files of the three linear CCD images of TH-1 contain clear systematic errors, and a certain degree of geometric deformation also exists in the camera lens and CCD components. For direct space intersection using orbit and attitude determination files, the prominent systematic positioning error must be eliminated through the block adjustment approach. (2) If the self-calibration parameter Xs and its observation equation are not considered, Eq. (10) becomes the normal block adjustment model. Normal block adjustment considers only the control point error Xg and the exterior orientation element error X; thus, it corrects only the attitude errors. Although normal block adjustment can effectively eliminate the positioning error caused by the attitude angle error, the impact of the lens and CCD geometric deformation parameters remains. The variation of the latter parameters is minor over a medium-term period, and the residual errors in the plane and elevation directions are still systematic. (3) Applying self-calibration block adjustment to the three-line array images of TH-1 with the self-calibration model presented in this paper can effectively eliminate the attitude errors and the combined effects of focal length change and lens and CCD geometric deformation, and it significantly improves positioning accuracy.
The residual systematic errors at the checkpoints are effectively eliminated after self-calibration adjustment, which confirms the correctness and effectiveness of the self-calibration parameter and self-calibration block adjustment models presented in this paper. (4) Increasing the number of control points can efficiently improve the accuracy of self-calibration block adjustment within an appropriate range, but once the number of control points reaches a certain level, the adjustment accuracy stabilizes.
Keywords: Mapping Satellite-1 (TH-1); three linear CCD image; test field; self-calibration model; block adjustment; geometric distortion
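Conclusion (2) above refers to the paper's Eq. (10), which the abstract does not reproduce. As a sketch of the structure it describes, a self-calibration block adjustment extends the usual error equation with the additional parameter groups, roughly as follows; the design matrices and parameter grouping here are generic, not the paper's exact formulation.

```latex
% Generic structure of a self-calibration error equation (not the paper's
% exact Eq. (10)): X = exterior orientation corrections, X_s = self-calibration
% (lens/CCD distortion) parameters, X_g = control point corrections.
\[
  v = A\,X + S\,X_s + G\,X_g - l, \qquad \text{weight matrix } P
\]
% Stacking B = [A \; S \; G] and \xi = [X; X_s; X_g], the weighted
% least-squares solution minimizing v^T P v is
\[
  \hat{\xi} = \bigl(B^{\mathsf{T}} P B\bigr)^{-1} B^{\mathsf{T}} P\, l
\]
```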
Abstract: With the improvement of the spatial resolution of remote sensing images, the details, geometric structures, and texture features of ground objects are better presented. However, because the same object type can have different spectra and different object types can share the same spectrum, the statistical separability of land cover classes in the spectral domain is reduced, which poses a great challenge to traditional pixel-feature-based classification methods for high spatial resolution remote sensing images. Classification accuracies of pixel-based methods are improved by fusing pixel texture, structure, and shape features, but pixel-based multi-feature classification methods generally suffer from the "salt and pepper" effect and computational complexity. In recent years, Object-Based Image Analysis (OBIA) has attracted wide attention. The basic characteristic of OBIA is the use of homogeneous regions as processing units, which solves the "salt and pepper" problem and overcomes the shortcomings of pixel-based classification methods. However, a large segmentation scale in OBIA leads to loss of detail and an "excessive smoothing" phenomenon. In view of the "salt and pepper" phenomenon of pixel-based multi-feature classification and the "excessive smoothing" phenomenon of OBIA, a classification method that fuses pixel-based multi-features and multi-scale region-based features is proposed in this paper. (1) Over-segmented image objects are obtained by the mean shift algorithm; regions are then merged from the original over-segmentation results across multiple scales to obtain multi-scale segmentation results. According to the change of the multi-scale Region Merged Index (RMI) and the correlation between classification accuracy and segmentation scale, adjacent regions are merged while the RMI change is small; when the RMI change becomes significant, the best segmentation results are obtained at the optimal scale and region merging stops. The correlation among segmentation scales, the number of segments, and OA is analyzed, and the optimal segmentation scale is finally determined. (2) Spectral features, shape features, and multi-scale region features are extracted; the spectral features, Pixel Shape Index (PSI) features, and region features of the original and optimal scales are then fused, and all feature types are normalized. Finally, classification is implemented with a Support Vector Machine (SVM). To test the effect of the proposed method, two high spatial resolution hyperspectral remote sensing images are adopted. A series of experimental schemes is designed, including classification methods using pixel-based LBP, GLCM, and PSI features, Object-Based Image Analysis (OBIA) with eCognition, and single-scale segmentation results from Mean Shift (MS). Classification results are evaluated quantitatively and qualitatively: the confusion matrix, Overall Accuracy (OA), and Kappa coefficient serve as quantitative measures, and visual discrimination serves as qualitative evaluation. The classification accuracy of the proposed method is higher than that of the pixel-based multi-feature methods, the OBIA method in the eCognition software, and the single-scale classification results based on MS segmentation.
The experimental results show that the proposed method can effectively combine the advantages and reduce the disadvantages of pixel-based and region-based classification methods and can improve the classification accuracies of different land cover classes.
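Step (2) above amounts to stacking the per-pixel feature blocks, normalizing them, and training an SVM. A minimal scikit-learn sketch follows; the file names, feature dimensions, and SVM parameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-pixel feature blocks (n_pixels x dims), assumed precomputed:
#   spectral : raw band values
#   psi      : Pixel Shape Index features
#   region   : mean region features at the original and optimal scales
spectral = np.load("spectral.npy")
psi      = np.load("psi.npy")
region   = np.load("region.npy")
labels   = np.load("train_labels.npy")   # -1 where unlabeled

# Fuse by stacking, then normalize each feature dimension as in the paper.
features = StandardScaler().fit_transform(np.hstack([spectral, psi, region]))

train = labels >= 0
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(features[train], labels[train])
predicted = clf.predict(features)        # classify every pixel
```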
Abstract: Endmember extraction from hyperspectral data is an important step of hyperspectral unmixing. Nonnegative Matrix Factorization (NMF) has been widely used in recent years for endmember extraction without assuming the presence of pure pixels, and many methods that incorporate different types of constraints into the NMF objective function have been proposed to extract endmembers accurately. However, traditional constrained NMF algorithms generally have two limitations. First, it is difficult to control the tradeoff between accurate reconstruction and the constraint well through a fixed penalty coefficient. Second, most traditional methods are usually trapped in a local optimum, which makes the global optimum difficult to find. To overcome these limitations, we present a novel method called High-dimension Adaptive Particle Swarm Optimization (HAPSO) for endmember extraction based on the Minimum Distance Constrained NMF (MDC-NMF) scheme. HAPSO enhances the global search ability through PSO, with two key improvements: a high-dimensional PSO and an adaptive penalty coefficient method based on swarm information. The standard PSO algorithm suffers from the "curse of dimensionality": it is more likely to plunge into local optima as the dimensionality of the search space increases. To overcome this problem, the high-dimensional PSO divides the complex high-dimensional constrained NMF problem into several simple low-dimensional subproblems according to the characteristics of the objective function and the hyperspectral data. Thus, each particle in the swarm can search for increasingly accurate positions in a detailed manner, which significantly improves the accuracy of the results. Furthermore, particle information, such as the positions and feasibility within the PSO algorithm, can easily be applied to balance the search bias between the objective function and the constraints. This study therefore proposes an adaptive penalty coefficient method based on the proportion of feasible solutions in the swarm instead of a fixed penalty coefficient: the penalty coefficient increases when the proportion of feasible solutions is low in the preliminary stage and decreases as the proportion of feasible solutions increases. The proposed HAPSO algorithm is compared with MDC-NMF, minimum volume constrained NMF, and vertex component analysis on both synthetic and real data. For the synthetic data, different numbers of endmembers and signal-to-noise ratios are considered; the real hyperspectral data include AVIRIS and HYDICE images. Results demonstrate that the proposed HAPSO algorithm outperforms the other algorithms: it extracts more accurate endmembers and causes less reconstruction error. HAPSO applies PSO to endmember extraction based on the MDC-NMF scheme; it can effectively overcome the disadvantages of traditional constrained NMF endmember extraction algorithms and solve the high-dimensionality and penalty coefficient problems of NMF endmember extraction. In future work, we will consider the influence of a large number of endmembers, which can decrease accuracy. More reasonable and effective constraint handling approaches should also be studied, although the adaptive penalty coefficient method performs well, and the algorithm's accuracy should be further improved because not all endmembers were perfectly extracted in the experiments.
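The adaptive penalty idea can be sketched in a few lines: each PSO iteration measures the fraction of particles whose candidate endmember matrices satisfy the minimum-distance constraint and adjusts the penalty coefficient accordingly. The update rule, target fraction, rate, and bounds below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def adaptive_penalty(mu, feasible_fraction, target=0.5, rate=1.5,
                     mu_min=1e-3, mu_max=1e3):
    """Update the constraint penalty coefficient from swarm feasibility:
    raise the penalty while few particles satisfy the minimum-distance
    constraint, lower it once most are feasible (illustrative rule)."""
    if feasible_fraction < target:
        mu *= rate      # push the swarm toward the feasible region
    else:
        mu /= rate      # relax and favor reconstruction accuracy
    return float(np.clip(mu, mu_min, mu_max))

def fitness(reconstruction_error, constraint_violation, mu):
    """Penalized objective for one particle (candidate endmember matrix)."""
    return reconstruction_error + mu * constraint_violation
```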
Abstract: Detection and imaging of high-speed spinning targets is essential to special applications such as target classification and recognition, missile defense, and the protection of spacecraft. A high pulse repetition frequency (PRF) is required for a radar system to obtain focused images of high-speed spinning targets. However, the observations of targets are always undersampled or nonuniformly sampled when the radar PRF cannot satisfy the sampling requirement, which hampers target identification. To solve this problem, this paper establishes an imaging model of azimuth-undersampled echoes according to Compressed Sensing (CS) theory and the sparsity of ISAR echoes. The Compressive Sampling Matching Pursuit (CoSaMP) algorithm is then used for signal reconstruction to improve the stability of the traditional imaging algorithm based on OMP. The overall procedure of the proposed algorithm is as follows. (1) After range compression and rough translational motion compensation of the echoes, a range bin containing the target component is determined, owing to the use of narrowband radar. (2) The spinning period and residual translational motion parameters are estimated using the time-frequency spectrum of the echoes. (3) The noise level is estimated using range bins containing only noise, the observation matrix is constructed according to the estimated target motion parameters, and the narrowband imaging of spinning targets is transformed into a CS-based optimization problem. (4) The target signal is reconstructed using the CoSaMP algorithm and transformed into the target space, yielding the two-dimensional images. The first experiment shows the effectiveness of the proposed algorithm: compared with the OMP-based imaging algorithm, it improves reconstruction accuracy and stability owing to the backtracking strategy. The imaging quality is largely determined by two factors, the PRF and the Signal-to-Noise Ratio (SNR). In the second experiment, the influences of these two factors on the imaging performance of different imaging algorithms are investigated. Two indicators, the number of false selections and the normalized mean-square error, are introduced to evaluate the influences of PRF and SNR on imaging performance. The results for these two indicators show that, compared with the OMP and SP algorithms, the proposed algorithm can reconstruct the target signal more effectively and stably, especially under low SNR and low PRF conditions. The shadowing effect, which results from scatterers of the target being invisible to the radar during some observation intervals within a spin cycle, is considered in the third experiment, which demonstrates the ability of the proposed algorithm to alleviate the shadowing effect: the locations of all the scatterers can be correctly estimated from seriously insufficient samples. When the radar PRF cannot satisfy the Nyquist sampling theorem, the CoSaMP algorithm can be used for image formation of high-speed spinning targets based on CS theory. The algorithm further improves the information acquisition ability of low-PRF radar for high-speed spinning targets, which benefits target recognition.
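CoSaMP itself (Needell and Tropp, 2009) is a standard greedy recovery routine with a backtracking step: each iteration merges the 2K strongest correlations with the current support, solves a least-squares problem on the merged support, and prunes back to K entries. A generic sketch follows; in the paper the sensing matrix is built from the estimated spin period and motion parameters, which is not reproduced here.

```python
import numpy as np

def cosamp(Phi, y, K, max_iter=50, tol=1e-6):
    """Recover a K-sparse x from y = Phi @ x by compressive sampling matching
    pursuit. Works for complex-valued radar data as well."""
    m, n = Phi.shape
    x = np.zeros(n, dtype=Phi.dtype)
    residual = y.astype(Phi.dtype)
    for _ in range(max_iter):
        proxy = Phi.conj().T @ residual                 # correlate with residual
        omega = np.argsort(np.abs(proxy))[-2 * K:]      # 2K strongest atoms
        support = np.union1d(omega, np.flatnonzero(x))  # merge with old support
        sol = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        b = np.zeros(n, dtype=Phi.dtype)
        b[support] = sol
        keep = np.argsort(np.abs(b))[-K:]               # prune back to K entries
        x = np.zeros(n, dtype=Phi.dtype)
        x[keep] = b[keep]
        residual = y - Phi @ x
        if np.linalg.norm(residual) <= tol * np.linalg.norm(y):
            break
    return x
```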
Abstract: Compression can reduce hyperspectral images to relatively manageable sizes, thereby facilitating their efficient transmission and storage at a ground station. Dictionary learning and sparse representation perform well in natural image compression. This study focuses on the properties of hyperspectral images and presents an efficient compression algorithm based on wavelets and dictionary learning. First, multi-scale training samples are built through wavelet decomposition; the training samples of each scale are sent to the K-SVD dictionary learning model to obtain multi-scale dictionaries by joint training, which simultaneously calculates the errors and updates the multi-scale dictionaries. Second, statistical analysis is performed on the dictionary atoms used in the locally optimal bands during sparse coding, and a frequency selection factor is introduced; the statistical information and frequency selection factor are then used to remove unused or rarely used atoms from the dictionary. The other bands can be sparsely and easily represented with the simplified dictionary. Finally, the simplified dictionary is entropy coded directly, and the DC component is entropy coded after 4-neighborhood prediction and Differential Pulse Code Modulation (DPCM). The indices of the coefficients of each scale are rearranged according to their numerical values and separately entropy coded after DPCM; the sparse coefficients are rearranged according to the sequential change of the indices and entropy coded together after adaptive quantization. Results show that the proposed scheme outperforms the traditional spatial and other multi-scale dictionary learning algorithms, and it is also much better than 3D-SPIHT in terms of rate-distortion performance. Because JPEG2000 (Part 2) benefits greatly from the embedded block coding with optimized truncation strategy, it can achieve a better performance than our scheme; however, our proposed scheme is much faster than JPEG2000 (Part 2). This study designed a novel hyperspectral image compressor based on wavelets and dictionary learning. Experimental results reveal that the proposed scheme outperforms 3D-SPIHT and most compression algorithms and is much faster than the state-of-the-art compression standard JPEG2000 (Part 2). The compression algorithm could be improved further in the rearrangement process before entropy coding. Our experiments show that dictionary learning and sparse representation theories have great potential in hyperspectral image compression and interpretation applications, which can motivate future research in this field.
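The atom-pruning step can be sketched directly: count how often sparse coding of the locally optimal bands selects each atom, and keep only atoms whose usage exceeds a frequency selection factor. The threshold form below is an illustrative assumption; the paper's exact rule is not given in the abstract.

```python
import numpy as np

def prune_dictionary(D, codes, freq_factor=0.01):
    """Drop dictionary atoms that sparse coding rarely uses.
    D: (signal_dim, n_atoms) learned dictionary; codes: (n_atoms, n_samples)
    sparse coefficients from the locally optimal bands. An atom is kept if it
    serves at least freq_factor of the samples (illustrative threshold)."""
    usage = np.count_nonzero(codes, axis=1)   # how often each atom is selected
    keep = usage >= freq_factor * codes.shape[1]
    return D[:, keep], keep
```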
Abstract: Spectral unmixing is an important and challenging task in the field of hyperspectral data analysis, and existing blind unmixing methods have certain limitations. In this paper, we present a new blind unmixing method for hyperspectral imagery, namely the ATGP-NMF algorithm. The method is based on improved target endmembers acquired by integrating the Automatic Target Generation Process (ATGP) algorithm with Non-negative Matrix Factorization (NMF). The Harsanyi-Farrand-Chang (HFC) algorithm is first introduced to determine the number of target endmembers. The ATGP algorithm and Non-Negative Least Squares (NNLS) are then used to obtain the spectra and abundances of the target endmembers, which are used as initial values for the NMF algorithm to obtain the refined endmembers. Finally, an improved cross-correlogram spectral matching method is introduced to match each endmember to its corresponding land cover type. Three different sets of data, namely simulated data, laboratory-controlled spectral experimental data, and remote sensing imagery, were used to test the effectiveness and robustness of the proposed method in comparison with the original NMF algorithm. Results from these experiments show that the ATGP-NMF algorithm can obtain endmembers with high accuracy and that it is more robust and efficient than the original NMF algorithm in different situations, regardless of the existence of pure pixels, inter-class diversity, or correlation among the endmembers' spectra. The ATGP-NMF algorithm thus has great application potential in blind unmixing of hyperspectral imagery.
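The initialization that distinguishes ATGP-NMF from randomly initialized NMF can be sketched as follows: given the ATGP target spectra, solve a non-negative least-squares problem per pixel to obtain initial abundances, then hand both matrices to the NMF updates. The function below is a minimal sketch under these assumptions, using scipy's nnls for the per-pixel solves.

```python
import numpy as np
from scipy.optimize import nnls

def atgp_nmf_init(X, E0):
    """Initialize NMF with ATGP endmembers and NNLS abundances.
    X: (bands, pixels) image matrix; E0: (bands, p) ATGP target endmembers.
    Returns an initial abundance matrix A0 (p, pixels) with A0 >= 0, so that
    X ~ E0 @ A0 can seed the NMF iterations."""
    p = E0.shape[1]
    A0 = np.empty((p, X.shape[1]))
    for j in range(X.shape[1]):
        A0[:, j], _ = nnls(E0, X[:, j])   # non-negative fit for one pixel
    return A0
```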
HONG Shunying, LIU Zhirong, SHEN Xuhui, CHEN Lize, JING Feng, DAI Yaqiong
Vol. 19, Issue 2, Pages: 288-294 (2015)
Abstract: We extract multiple Line-Of-Sight (multi-LOS) coseismic deformation fields through interferometry of three different sets of ENVISAT ASAR data in the LOS direction and construct a double-fault rupture model of the Gaize earthquake by integrating the deformation field characteristics with focal mechanism solutions. We also invert the coseismic slip distribution of the Gaize earthquake through the steepest descent method and the layered crustal model of CRUST2.0 under the constraint of the quadtree-resampled multi-LOS coseismic deformation fields. Results show that the deformation residuals of the inversion are effectively controlled within ±10 cm. The major slip of the mainshock fault is located at depths of 2 km to 16 km along the fault plane, with a maximum slip of approximately 1.34 m at a depth of 6.4 km. The slip of the aftershock fault is mostly located at depths of 2 km to 6 km along the fault plane, with a maximum slip of approximately 0.90 m at a depth of 3.52 km. Both the mainshock and aftershock faults ruptured principally in the normal mode; the mainshock fault also ruptured with a slight left-lateral strike-slip component, which is not obvious in the aftershock fault. When the shear modulus is set to 3.2 × 10¹⁰ Pa, the inverted seismic moments of the mainshock and aftershock are approximately 6.34 × 10¹⁸ N·m and 1.20 × 10¹⁸ N·m, which correspond to moment magnitudes of Mw 6.47 and Mw 5.98, respectively.
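The quoted moment magnitudes follow from the standard Hanks-Kanamori relation Mw = (2/3)(log₁₀ M₀ − 9.1) for M₀ in N·m, which the abstract does not state explicitly but which reproduces the reported values to within rounding:

```python
import math

def moment_magnitude(M0):
    """Moment magnitude from seismic moment M0 in N*m (Hanks-Kanamori)."""
    return (2.0 / 3.0) * (math.log10(M0) - 9.1)

print(f"{moment_magnitude(6.34e18):.2f}")  # 6.47 (mainshock)
print(f"{moment_magnitude(1.20e18):.2f}")  # 5.99; abstract rounds to Mw 5.98
```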
Abstract: This paper presents two temperature-emissivity separation algorithms for hyperspectral thermal airborne infrared data: the Automatic Retrieval of Temperature and Emissivity using Spectral Smoothness (ARTEMISS) algorithm and the ASTER Temperature-Emissivity Separation (ASTER TES) algorithm. The two algorithms are applied to separate the temperature and emissivity derived from data on the Liuyuan region acquired with the thermal airborne spectrographic imager. The results of the two algorithms, as applied to the data and compared against field measurements, were analyzed in terms of image quality and accuracy. The results show that both algorithms meet the accuracy requirements, but some differences exist: the ASTER TES algorithm has good image quality and high precision, whereas the ARTEMISS algorithm has simpler steps and yields an emissivity that reflects lithological differences better than the former algorithm. In practical applications, the algorithm should be chosen according to the requirements of the application.
Abstract: Drought is a natural hazard that arises from climate fluctuations and variations, and it frequently has negative impacts on economies, societies, and environments. This paper presents a new remote sensing-based drought index, the MODIS Shortwave Infrared Water Stress Index (MSIWSI), for monitoring agricultural drought. Changes in the water content of plant tissues or soil have a large effect on reflectance in several regions of the shortwave infrared spectrum. MSIWSI integrates relevant information from shortwave infrared channels 6 and 7 of the MODIS satellite data based on the distribution of different land cover classes in the MODIS shortwave infrared spectral feature space. Correlation and regression analyses are performed on MSIWSI, two other well-known drought indices (the Enhanced Vegetation Index (EVI) and the Modified Perpendicular Drought Index (MPDI)), and ground-measured soil moisture data to assess the capability of MSIWSI in arid regions of northwest China. Pearson correlation analyses were also performed on MSIWSI and ground-measured soil moisture data for spring wheat during different phenological phases. Results show that MSIWSI performed better than EVI and MPDI in soil moisture retrieval and can also reflect the probable soil moisture trends of spring wheat. This investigation indicates that MSIWSI has significant application potential for monitoring agricultural drought in arid regions.
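The assessment step, correlating index values with ground-measured soil moisture, is straightforward to reproduce in outline with scipy. The paired values below are placeholder numbers only to make the snippet runnable; the paper's measurements are not reproduced here.

```python
import numpy as np
from scipy import stats

# Hypothetical paired samples: drought-index values and ground-measured
# soil moisture (percent) at matching sites and dates.
index_vals    = np.array([0.42, 0.51, 0.38, 0.60, 0.47, 0.55])
soil_moisture = np.array([14.2, 11.9, 15.8, 9.4, 13.1, 10.6])

r, p = stats.pearsonr(index_vals, soil_moisture)
slope, intercept, r_lin, p_lin, stderr = stats.linregress(index_vals, soil_moisture)
print(f"Pearson r = {r:.3f} (p = {p:.3g}); regression slope = {slope:.2f}")
```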
Abstract: Sea Surface Temperature (SST) is a basic parameter in characterizing the ocean-atmosphere system and serves an important function in climate change studies. Many types of cloud-free, high spatial and temporal coverage merged SST products have been generated by the Group for High Resolution Sea Surface Temperature. These products provide important data sources for a wide variety of operational and scientific applications. However, differences exist among these products because of their specific research requirements, different blending algorithms, different satellite SST sources for blending, and quality control methods. Therefore, monitoring the quality of these products is necessary, particularly in the shelf and coastal seas around China, which are characterized by complex atmospheric conditions and hydrodynamics. This study compares four types of merged SST products in the South China Sea and adjacent waters for the years 2008 and 2009. The four multi-satellite merged SST products (the Operational SST and Sea Ice Analysis (OSTIA), microwave/infrared optimally interpolated SST, microwave optimally interpolated SST, and New Generation SST (NGSST)) are validated with Argo SST in the shelf sea and Argos SST in the shallow coast. The match-up data are collected on the same day and at the same location. The Root Mean Square (RMS) error, bias, and correlation coefficients are calculated to quantify the errors. The products are projected onto the same grid as NGSST using the nearest-neighbor sampling method for comparison; OSTIA is selected as the basis, and the relative differences between OSTIA and the other three products are computed and visualized using maps, box plots, and time series plots. The statistical results show that the RMS between the merged SSTs and Argo temperatures ranged between 0.3 ℃ and 1.0 ℃, whereas the bias ranged between -0.1 ℃ and 0.6 ℃ in the shelf sea (water depth > 80 m). The merged SSTs were also consistent with the in situ data in the coastal area, except for NGSST, which had a significantly warm bias (approximately 1 ℃) and the largest RMS (approximately 1.5 ℃); the bias and RMS of OSTIA were the smallest. An inter-comparison indicates no significant differences among the four merged SST products in the shelf sea, with biases within ±0.3 ℃. However, the deviation increases in shallow water. The largest bias was found in winter because of poor weather conditions, whereas the smallest bias was found in summer. In summary, the four merged SST products were consistent with the in situ data in the study region, except for NGSST in the shallow coastal sea, and the OSTIA product exhibited the best performance. This study provides a reliable basis for the effective application of these merged SSTs with high spatial and temporal coverage in the South China Sea and its adjacent waters.
Keywords: merged SST; validation; inter-comparison; remote sensing; South China Sea
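The validation statistics named above (bias, RMS, correlation) are simple to compute from match-up pairs. A minimal sketch follows, with placeholder values only to make it runnable; the study's Argo/Argos match-ups are not reproduced.

```python
import numpy as np

def matchup_stats(sst_product, sst_insitu):
    """Validation statistics for same-day, same-location match-ups.
    Returns bias (product minus in situ), RMS, and correlation coefficient."""
    diff = np.asarray(sst_product) - np.asarray(sst_insitu)
    bias = diff.mean()
    rms = np.sqrt((diff ** 2).mean())
    r = np.corrcoef(sst_product, sst_insitu)[0, 1]
    return bias, rms, r

# Hypothetical match-up pairs (deg C)
product = [28.1, 27.6, 29.0, 26.8, 27.9]
insitu  = [27.8, 27.7, 28.6, 26.9, 27.5]
print(matchup_stats(product, insitu))
```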
Abstract: Deformation monitoring is essential to the safe operation of seawalls. This paper reports Interferometric Synthetic Aperture Radar (InSAR) measurement results derived from 31 Envisat ASAR images acquired over Hangzhou from 2006 to 2010, with special focus on the seawall in the Qiantang Estuary. Multi-Temporal InSAR (MTInSAR) was used to extract deformation information from both Persistent Scatterers (PSs) and Distributed Scatterers (DSs), which provide dense measurements of the deformation of the seawall. Compared with leveling measurements at 28 points, the mean error of the InSAR results is 0.436 mm, with a largest error of 5.016 mm, which confirms the millimeter-level precision and accuracy of the InSAR technique. A time series analysis was conducted based on these two datasets, and the results showed that the subsidence of the seawalls was spatially continuous and had a local negative unimodal pattern with distance. A linear tendency with minor local fluctuations was also observed in the time domain over the observation period.
Abstract: Variation of the population density boundary in Mainland China in the recent 80 years. The cover image shows the changes of the population density boundary line in Mainland China over the past 80 years, produced from census data for six years: 1935, 1964, 1982, 1990, 2000, and 2010. The boundary line is delineated on the basis of the population proportion thresholds given for the "Aihui-Tengchong" line proposed by Mr. Hu Huanyong (northwest: 4%; southeast: 96%) and the principle of the Lorenz curve. Since 1935, the boundary line has shifted northwestward overall; the shift is largest in Gansu and the Ningxia Hui Autonomous Region, followed by Jilin, the Inner Mongolia Autonomous Region, Shaanxi, Yunnan, and southeastern Sichuan, while northeastern Sichuan …
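The Lorenz-curve delineation can be sketched as follows: sort spatial units by population density, accumulate population shares, and cut where the dense side reaches the 96% threshold (equivalently, 4% on the sparse northwest side). The unit-level workflow below is an illustrative assumption; the cover's actual delineation over county census data may differ.

```python
import numpy as np

def density_cutoff(density, population, se_share=0.96):
    """Density threshold placing se_share of total population on the dense
    (southeast) side, following the Lorenz-curve idea."""
    order = np.argsort(density)[::-1]                 # densest units first
    cum_share = np.cumsum(population[order]) / population.sum()
    k = np.searchsorted(cum_share, se_share)          # first unit crossing 96%
    return density[order[min(k, len(order) - 1)]]

# Hypothetical spatial units: density (persons/km^2), population (thousands)
density = np.array([520.0, 310.0, 150.0, 40.0, 8.0, 2.0])
population = np.array([5200.0, 3100.0, 1500.0, 400.0, 80.0, 20.0])
print(density_cutoff(density, population))           # cutoff for this toy case
```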