Latest Issue

    Volume 23, Issue 3, 2019

      Review

    • Zhengqiang LI,Yisong XIE,Ying ZHANG,Lei LI,Hua XU,Kaitao LI,Donghui LI
      Vol. 23, Issue 3, Pages: 359-373(2019) DOI: 10.11834/jrs.20198185
      Abstract: Complex and drastic variations of atmospheric aerosol components lead to high uncertainties in climate change assessment. The remote sensing of aerosol composition is a technology often used in aerosol optical and microphysical parametric analysis. These parameters, which are derived from remote sensing measurements, can be used to quantitatively estimate the aerosol components of the entire atmosphere. Remote sensing offers the advantages of real-time and fast detection, spatial coverage, and preservation of the natural aerosol state. This paper presents a comprehensive review of theories, observations, models, and algorithms for aerosol composition remote sensing. First, in the area of algorithm development, we analyze the main ideas and methods for establishing aerosol composition models. In particular, an advanced remote sensing categorization model, which includes black carbon, brown carbon, mineral dust, light-scattering organic matter, inorganic salts, sea salt, and water, is presented in detail. The components are identified or distinguished according to sensitivity parameters, including light absorption, size distribution, and particle shape. Second, for the calculation of the refractive index, some of the typical methods suitable for different aerosol mixing states are compared, and their impacts on composition retrieval are determined. Third, some examples of remotely sensed aerosol components are given, and a preliminary validation of the retrievals is conducted by using synchronized chemical measurements. Finally, the development tendencies of the remote sensing of atmospheric aerosol composition are summarized from the perspectives of observation capability enhancement, optimization of the categorization model, improvements in retrieval accuracy, extension of application abilities, and identification of utilization prospects in global climate change assessment.
      Keywords: aerosol;chemical components;remote sensing;black carbon;brown carbon;dust;optical and microphysical parameters;mixing states
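      The refractive index calculation mentioned above depends on the assumed aerosol mixing state. As an illustration of the general idea (not the specific scheme used in the paper), the sketch below applies a volume-weighted mixing rule to obtain an effective complex refractive index for an internally mixed particle; the component refractive indices and volume fractions are illustrative placeholders.

```python
import numpy as np

# Hypothetical component refractive indices near 550 nm (illustrative values
# only, not taken from the paper): black carbon, dust, organic matter, water.
components = {
    "black_carbon":   complex(1.95, 0.79),
    "mineral_dust":   complex(1.56, 0.006),
    "organic_matter": complex(1.53, 0.005),
    "water":          complex(1.33, 0.0),
}

def volume_mixed_refractive_index(volume_fractions):
    """Volume-weighted average refractive index for an internal mixture.

    volume_fractions: dict component -> volume fraction (must sum to 1).
    """
    fractions = np.array([volume_fractions[k] for k in components])
    indices = np.array([components[k] for k in components])
    if not np.isclose(fractions.sum(), 1.0):
        raise ValueError("Volume fractions must sum to 1.")
    return np.sum(fractions * indices)

# Example: a mostly scattering particle with a small black-carbon inclusion.
m = volume_mixed_refractive_index(
    {"black_carbon": 0.05, "mineral_dust": 0.10,
     "organic_matter": 0.25, "water": 0.60})
print(f"effective refractive index: {m.real:.3f} + {m.imag:.4f}i")
```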
    • Jianli DU,Dong CHEN,Zhenxin ZHANG,Liqiang ZHANG
      Vol. 23, Issue 3, Pages: 374-391(2019) DOI: 10.11834/jrs.20188199
      Abstract: Creating photorealistic building models from large-scale airborne point clouds is an important aspect of urban modeling. Given the complexity of airborne point clouds (i.e., noise, outliers, occlusions, and irregularities) and the diversity of architectures in the real world, the creation of photorealistic building models poses great challenges that are not comprehensively addressed by most state-of-the-art methods. In this research, intelligent algorithms are developed to create large-scale LoD3 building models with accurate geometry, correct topology, and abundant semantics. The developed algorithms can enhance the abstraction/representation of building point clouds. First, from the perspective of building modeling mechanisms, modeling algorithms are divided into five categories, each of which is reviewed and analyzed in depth. Then, the common problems are identified, and their possible solutions are given accordingly. Finally, the possible directions of future building reconstruction from airborne point clouds are predicted. We aim to provide beneficial inspiration and relevant references to enhance building modeling theories, develop more intelligent modeling algorithms, and create high-quality building models.
      Keywords: airborne LiDAR;airborne point clouds;3D building reconstruction;building geometric models;review
    • Lei YANG,Xinghua ZHOU,Quanjun XU,Baogui KE,Bo MU,Lin ZHU
      Vol. 23, Issue 3, Pages: 392-407(2019) DOI: 10.11834/jrs.20198262
      Research status of satellite altimeter calibration
      Abstract: Satellite altimetry is one of the most important remote sensing technologies used over oceans, and it can dramatically promote ocean science across disciplines. Since 2011, China has launched the HY-2A and HY-2B satellite altimeters, providing spatially dense data with a 14-day repeat cycle to the international altimetry community. Calibration is important for maintaining the high accuracy of satellite altimeter data. Following the launch of HY-2A and HY-2B, as well as new satellites to be launched in the near future, a long time series of altimetry data will likely be achieved. The importance of altimeter calibration in guaranteeing the accuracy of satellite data and the consistency of long time series measurements will also likely become prominent. This paper summarizes the research progress and the status of the main calibration technologies at the global level. The calibration methods are generally categorized into absolute and relative groups. Each method is briefly described along with its calibration principle and its merits and demerits. The tide gauge, GNSS buoy, and inter-satellite crossover calibration methods are described in detail, especially since they have been discussed comprehensively in the operational calibration literature. The calibration methods based on transponders, ARGO, and tide gauge networks are briefly described given their supplementary roles. The existing progress and application of Chinese calibration sites are also presented. The geodetic survey and the surface subsidence over the Qianliyan calibration site and the calibration results for HY-2A, Jason-2 and Jason-3, Saral, and Sentinel-3A are demonstrated in detail. Then, the Wanshan sites and the planned calibration network in the Chinese coastal area are introduced. Furthermore, the wet tropospheric delay measured by the Jason-2 AMR in 2010–2016 is evaluated against one site from the Chinese coastal GNSS network, thus proving the feasibility of calibrating the microwave radiometer wet delay through the Chinese coastal GNSS network. In the last section, we summarize the problems identified in the present research and provide suggestions for calibrating and correcting domestic satellite altimeter data. This section is relevant because, in the near future, the Surface Water and Ocean Topography (SWOT) mission and the Chinese “Guan Lan” and HY-3C missions, which adopt the new altimetry concept of synthetic aperture radar interferometry and can provide 2D oceanic topography, will likely bring new challenges to present-day altimetry calibration technologies. The paper ends with a brief discussion of these anticipated challenges to altimeter calibration. This work contributes to the understanding of the present situation and the development trend of satellite altimeter calibration.
      Keywords: satellite altimeter;microwave radiometer;calibration;HY-2;Qianliyan;review

      Technology and Methodology

    • Weihua AI,Guanyu CHEN,Shurui GE,Lingfeng YUAN,Xianbin ZHAO
      Vol. 23, Issue 3, Pages: 408-417(2019) DOI: 10.11834/jrs.20197154
      Abstract: Taking advantage of the additional target characteristic information obtained when the same surface area is observed by the dual-band electromagnetic waves of an L/C common-reflector dual-frequency SAR, a new ocean wind field retrieval method is proposed. Wind speed and direction are estimated simultaneously from the normalized radar cross sections (NRCS) of the L/C dual bands by using a geophysical model function developed for the common-reflector dual-frequency SAR, and the 180° directional ambiguity is removed by using auxiliary data. Verification with simulations and an airborne synchronous flight test at sea shows that the method can retrieve high-precision wind speed and direction without other background sources. The major error can be explained by the insufficient accuracy in calibrating the NRCS used for wind speed and direction retrieval. The wind speed error increases with the wind speed, and the root mean square errors of wind speed and direction between the retrievals and the survey ship observations are 0.93 m/s and 19.39°, respectively. The method for ocean surface wind field retrieval from the common-reflector dual-frequency SAR solves the inherent problems of sea surface wind field inversion with traditional single-band SAR and increases the retrieval accuracy of sea surface wind speed and direction, thereby providing support for the operational application and payload development of ocean wind field retrieval from space-borne SAR images.
      Keywords: synthetic aperture radar;common reflector dual frequency;ocean wind field retrieval;CMOD;radiometric calibration
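      To illustrate the retrieval idea described above, the sketch below performs a simple grid search for the wind speed and direction that best reproduce the observed dual-band NRCS under a geophysical model function. The gmf_placeholder function is a made-up stand-in, not the CMOD-family or L-band model functions actually used in the paper, and the observation values are arbitrary.

```python
import numpy as np

def gmf_placeholder(wind_speed, relative_dir_deg, band):
    """Stand-in geophysical model function (NOT the GMFs used in the paper):
    returns a plausible NRCS in dB that increases with wind speed and has the
    usual cos(2*phi) upwind/crosswind modulation."""
    a = {"C": 0.8, "L": 0.6}[band]
    phi = np.deg2rad(relative_dir_deg)
    return -25.0 + 12.0 * np.log10(wind_speed + 1e-3) + a * np.cos(2 * phi)

def retrieve_wind(sigma0_c_db, sigma0_l_db, azimuth_look_deg):
    """Grid-search the (speed, direction) pair that best reproduces the observed
    dual-band NRCS; the 180-degree ambiguity would still need auxiliary data."""
    speeds = np.arange(1.0, 30.0, 0.25)
    directions = np.arange(0.0, 360.0, 1.0)
    best, best_cost = (None, None), np.inf
    for u in speeds:
        for d in directions:
            rel = d - azimuth_look_deg
            cost = ((gmf_placeholder(u, rel, "C") - sigma0_c_db) ** 2 +
                    (gmf_placeholder(u, rel, "L") - sigma0_l_db) ** 2)
            if cost < best_cost:
                best, best_cost = (u, d), cost
    return best

# Arbitrary dual-band observations (dB) and radar look azimuth.
print(retrieve_wind(sigma0_c_db=-14.0, sigma0_l_db=-17.0, azimuth_look_deg=90.0))
```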
    • Jiacheng LIU,Shuang WANG,Weihua LIU,Bingliang HU
      Vol. 23, Issue 3, Pages: 418-430(2019) DOI: 10.11834/jrs.20197074
      Saliency weighted RX hyperspectral imagery anomaly detection
      Abstract: With the development of spectral imaging techniques and data processing technology, anomaly detection using hyperspectral data has become a popular topic. Anomaly detection refers to the search for sparse pixels with unknown spectral signals in hyperspectral imagery. Given that anomaly detection is unsupervised, no a priori information needs to be provided; thus, anomaly detection has strong practicality. Because it ignores spatial correlation and adapts poorly to data that deviate from the normal distribution, the traditional RX algorithm produces an inaccurate background estimation and is therefore not well suited to hyperspectral data. In this study, a saliency weighted RX algorithm is proposed on the basis of the local neighborhood spectra of an image. When the human eye observes an image, the first object that is viewed is frequently the most significant, and the purpose of a saliency detection algorithm is to identify this target. The saliency map is a 2D image of the same size as the original image that represents the significance of each corresponding pixel in the original image. In this algorithm, the probability-density-based image background modeling is improved by introducing a saliency analysis method. Afterward, the spectral saliency map is established, and the mean vector and covariance matrix of the RX algorithm are redefined. The saliency weighted RX algorithm assigns different weights to optimize the background estimation. Anomaly detection experiments are conducted using synthetic and real hyperspectral data. Synthetic data experimental results show that, for each target, the number of anomalies detected by the saliency weighted RX algorithm is larger than that of the traditional algorithms, and the saliency weighted RX algorithm can detect anomalies with abundance below 0.1, which the traditional algorithms cannot detect. Moreover, the false alarm pixels of the traditional algorithms are distributed in various positions, whereas those of the saliency weighted RX algorithm concentrate in an area called a false alarm area, which can be removed effectively by morphological filtering. Real data experimental results show that the saliency weighted RX algorithm yields the largest AUC value and the best detection results. The traditional RX algorithm assumes that the background model follows a multivariate Gaussian distribution and does not perform well on hyperspectral imagery. Saliency analysis, a method from the field of computer vision, operates effectively in the spatial domain, which compensates for the RX algorithm's neglect of spatial correlation and enables anomalies to be detected jointly in the spatial and spectral domains. The saliency weighted RX algorithm uses saliency analysis to assign different weights to background and anomaly pixels, thereby improving the adaptability of the background model. Experiments on synthetic and real data show that the saliency weighted algorithm can improve the detection probability while reducing the false alarm rate in comparison with the traditional RX algorithm, and it has a certain anti-noise ability.
      Keywords: anomaly detection;saliency;RX algorithm;hyperspectral image processing
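      The core of the method above is an RX detector whose background mean vector and covariance matrix are recomputed with per-pixel weights. The sketch below shows one way to implement such a weighted RX score with NumPy; how the weights are derived from the spectral saliency map is not reproduced here (a uniform weight map is used as a placeholder), so this illustrates the weighting mechanism rather than the paper's full algorithm.

```python
import numpy as np

def weighted_rx(cube, weights):
    """RX anomaly detector with a weighted background estimate.

    cube:    (rows, cols, bands) hyperspectral image.
    weights: (rows, cols) map; here assumed to come from a saliency analysis
             (low weight = likely anomaly, high weight = background).
    Returns a (rows, cols) map of Mahalanobis distances to the weighted background.
    """
    r, c, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    w = weights.reshape(-1).astype(float)
    w = w / w.sum()

    mu = w @ x                              # weighted mean spectrum
    xc = x - mu
    cov = (xc * w[:, None]).T @ xc          # weighted covariance
    cov += 1e-6 * np.eye(b)                 # regularization for invertibility
    cov_inv = np.linalg.inv(cov)

    scores = np.einsum("ij,jk,ik->i", xc, cov_inv, xc)
    return scores.reshape(r, c)

# Toy usage with random data and a uniform (non-informative) weight map.
cube = np.random.rand(50, 50, 20)
cube[10, 10] += 3.0                         # inject a synthetic anomaly
weights = np.ones((50, 50))
scores = weighted_rx(cube, weights)
print(np.unravel_index(scores.argmax(), scores.shape))  # most anomalous pixel
```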
    • Binge CUI,Yanan WU,Yong ZHONG,Liwei ZHONG,Yan LU
      Vol. 23, Issue 3, Pages: 431-442(2019) DOI: 10.11834/jrs.20197510
      Hyperspectral image rolling guidance recursive filtering and classification
      Abstract: Hyperspectral images exhibit relatively large intra-class spectral changes, and the “same material, different spectra” phenomenon is widespread. Moreover, classification accuracy is low when only the original spectral features are considered, and the “salt-and-pepper” phenomenon can be observed in the classification result map. To improve the classification result, the spectral and spatial information of a hyperspectral image needs to be fully utilized to reduce the spectral changes within a class and expand the spectral differences between classes. Thus, in this study, we propose a spectral–spatial hyperspectral image classification method with rolling guidance recursive filtering. First, principal component analysis (PCA) is used to reduce the dimension of the hyperspectral image, and the principal components with large amounts of information are retained as the input image of the next step. Second, a Gaussian filter is applied to blur the input image and eliminate the small-scale structures. Third, recursive filtering is performed on the input image to preserve the edges of the image, with the blurred image as the guidance image, and the output is then treated as the new guidance image. The process is repeated iteratively until the large-scale edges are restored. Finally, the extracted feature bands of the hyperspectral image are classified with a support vector machine. The Indian Pines and University of Pavia datasets were used to verify the effectiveness of the proposed rolling guidance recursive filtering (RGRF) method. As shown by the homogeneous regions in Figs. 5 and 6, the noise phenomenon is severe when only the PCA features are used, whereas the noise is effectively removed when rolling guidance filtering (RGF) and RGRF are utilized. However, at the strong edges, the boundary produced by RGF is blurred and irregular and the spatial smoothing is excessive, whereas the boundary produced by RGRF is clear and the strong edges are protected. The proposed method is then compared with related classification methods on two real hyperspectral image datasets. The comparative results in Tables 1 and 2 indicate that RGRF improves classification accuracy more effectively than the other methods. Subsequently, the McNemar test was performed to analyze statistical significance and verify the comparative results. As shown in Tables 3 and 4, the differences between RGRF and the other methods are statistically significant, with values exceeding 1.96. The McNemar values in Tables 3 and 4 increase with the decrease in the number of samples, which indicates that the advantage of RGRF becomes more obvious when fewer samples are available. This study presents an improved edge-preserving filtering method named RGRF. Compared with other filtering methods, RGRF can maintain the edge structures of land covers and eliminate the texture and noise information within them. In each iteration, RGRF uses recursive filtering instead of the bilateral filtering used in RGF, so the continuous spreading of texture and noise information across strong edges is avoided. Moreover, considering that image quality is significantly improved after RGRF filtering, the classification accuracy of the hyperspectral image is also greatly enhanced. Classification experiments were performed on two real hyperspectral datasets. The results indicate that the classification accuracy of RGRF is superior to that of the other hyperspectral image classification methods, and RGRF can achieve higher classification accuracy even when only a few training samples are used.
      Keywords: hyperspectral image classification;rolling guidance recursive filtering;feature extraction;principal component analysis
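      The rolling guidance loop described above (Gaussian blur to remove small-scale structures, then iterative edge-aware filtering of the original band guided by the previous result) can be sketched as follows. The paper uses recursive filtering in each iteration; this sketch substitutes a simple joint bilateral filter as the edge-preserving step and omits the PCA and SVM stages, so it only illustrates the iteration structure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def joint_filter(image, guidance, sigma_s=3.0, sigma_r=0.1, radius=5):
    """Simple joint (cross) bilateral filter used as a stand-in for the
    recursive edge-preserving filter in the paper: smooths `image` while
    keeping the edges present in `guidance`."""
    out = np.zeros_like(image, dtype=float)
    norm = np.zeros_like(image, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
            g_shift = np.roll(np.roll(guidance, dy, axis=0), dx, axis=1)
            w = (np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                 * np.exp(-((guidance - g_shift) ** 2) / (2 * sigma_r ** 2)))
            out += w * shifted
            norm += w
    return out / norm

def rolling_guidance(band, iterations=4, sigma_blur=2.0):
    """Rolling guidance loop: start from a Gaussian-blurred image (small-scale
    structures removed), then repeatedly filter the ORIGINAL band with the
    previous result as guidance, so large-scale edges are gradually recovered."""
    guidance = gaussian_filter(band, sigma=sigma_blur)
    for _ in range(iterations):
        guidance = joint_filter(band, guidance)
    return guidance

# Usage on one principal-component band (random placeholder data).
pc_band = np.random.rand(100, 100)
feature = rolling_guidance(pc_band)
```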
    • Jiafeng ZHANG,Peng ZHANG,Mingchun WANG,Tao LIU
      Vol. 23, Issue 3, Pages: 443-455(2019) DOI: 10.11834/jrs.20197431
      Abstract: Poor applicability of the clutter models used at high resolution has been a common issue among existing constant false alarm rate (CFAR) detection methods for polarimetric synthetic aperture radar (PolSAR) imagery. To solve this problem, a CFAR detection method with a G0 distribution and a closed analytical expression for PolSAR imagery is proposed. CFAR loss (CL) is used to quantitatively evaluate the CFAR maintenance performance of the detection methods. First, the probability density function (PDF) of the multi-look polarization whitening filter (MPWF) metric is derived on the basis of the product model under the hypothesis of inverse-Gamma-distributed texture. Second, the PDF of the MPWF metric is integrated, and the analytical expression of the false alarm rate with respect to the CFAR detection threshold is obtained; the procedure of the proposed CFAR detection method is also designed. Finally, simulation data and AIRSAR real data are used to compare the running times, the fitting performance of the detection metrics, and the target detection performance of the existing and proposed methods. Results show that the running time of the proposed method is 3 to 30 times shorter than those of the existing methods, and the real-time performance of the proposed method is thus enhanced. The analytical results on the AIRSAR real data of Tamano (Japan) show that the G0 distribution fits high-resolution non-uniform sea areas well. The proposed method can detect targets effectively in both G0 and non-G0 distributed sea areas, and its robustness is better than those of the existing methods. Moreover, the figure of merit of the proposed method is 15.78% higher than those of the other detection methods. The analytical results of CL indicate that the proposed method has good CFAR maintenance performance. Moreover, the closer the clutter log-cumulant scatter points are to the G0 distribution curve, the better the CFAR maintenance performance of the proposed method.
      Keywords: polarimetric synthetic aperture radar (PolSAR);constant false alarm rate (CFAR);product model;G0 distribution;multi-look polarization whitening filter (MPWF);false alarm rate;CFAR loss (CL);figure of merit (FoM)
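      The paper's contribution is a closed-form false alarm expression for the MPWF metric under G0-distributed clutter, which is not reproduced here. The sketch below only illustrates the generic CFAR principle that such an expression serves: choose the threshold so that the probability of clutter exceeding it equals the desired false alarm rate, here approximated empirically by a quantile of clutter-only samples (synthetic heavy-tailed data stand in for G0 clutter).

```python
import numpy as np

def cfar_threshold(clutter_metric, pfa=1e-4):
    """Generic empirical CFAR threshold: the (1 - pfa) quantile of the detection
    metric over clutter-only training samples. The paper instead derives a
    closed-form expression for the MPWF metric under G0-distributed clutter;
    this quantile version only illustrates the constant-false-alarm principle."""
    return np.quantile(clutter_metric, 1.0 - pfa)

def cfar_detect(metric_map, threshold):
    """Binary detection map: metric above the CFAR threshold is declared target."""
    return metric_map > threshold

# Toy example with heavy-tailed synthetic clutter (Pareto-like stand-in for G0).
rng = np.random.default_rng(0)
clutter = rng.pareto(a=3.0, size=100_000) + 1.0
threshold = cfar_threshold(clutter, pfa=1e-3)
observed = rng.pareto(a=3.0, size=(200, 200)) + 1.0
observed[100, 100] = 50.0                     # injected bright target
detections = cfar_detect(observed, threshold)
print(threshold, detections.sum())
```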
    • Junming XIA,Xuerui WU,Weihua BAI,Yueqiang SUN,Liming LUO,Qifei DU,Xianyi WANG,Congliang LIU,Xiangguang MENG,Danyang ZHAO,Yingqiang WANG
      Vol. 23, Issue 3, Pages: 456-463(2019) DOI: 10.11834/jrs.20198018
      Research on the effects of delay and Doppler intervals on GNSS-R DDM simulation
      Abstract: Global Navigation Satellite System Reflectometry (GNSS-R) uses GNSS signals reflected by the Earth's surface to detect Earth surface parameters, including the sea surface wind field, sea surface height, soil moisture, sea ice extent, and snow depth. GNSS-R is a new Earth remote sensing technology and has been a research focus in recent years. Delay-Doppler Mapping (DDM) is a significant GNSS-R observable for retrieving geophysical parameters, and the reliability of GNSS-R DDM simulation results directly affects GNSS-R theoretical research and the engineering parametric design of satellite missions. In this study, the effects of the delay and Doppler intervals on a simulated DDM are investigated, and suitable values for both intervals are determined to generate reliable DDM simulation results. GREEPS, a GNSS-R simulator based on the Z-V model and developed by the National Space Science Center of the Chinese Academy of Sciences, is used to simulate DDM waveforms with different delay and Doppler intervals. To evaluate the accuracy of the simulated DDM waveforms, the 1D delay mapping at 0 Hz Doppler and the peak signal-to-noise ratio (SNR) and peak position of the simulated DDM are compared with theoretical ones. The results show that the smaller the delay and Doppler intervals are, the higher the correlation coefficient between the simulated and theoretical DDM waveforms will be. When the delay interval is less than 1/16 GPS L1 C/A code chip, the correlation coefficient between the simulated and theoretical waveforms is greater than 0.99. The relative deviation of the peak SNR of the simulated DDM waveform is approximately 0.1% when the delay interval is set to 1/16 GPS L1 C/A code chip. However, the relative deviation of the peak position of the simulated DDM waveform is considerably higher: even if the delay interval is set to 1/64 GPS L1 C/A code chip, the relative deviation of the peak position of the simulated DDM is still 2.3%. When the Doppler interval is less than 200 Hz, the correlation coefficient between the simulated and theoretical DDM waveforms is close to 1, and when the Doppler interval is less than 50 Hz, the relative deviations of the peak SNR and the peak position are less than 0.1%. It can be concluded that the smaller the delay and Doppler intervals are, the higher the agreement between the simulated DDM and the theoretical DDM will be. When the delay interval and the Doppler interval are set to 1/16 GPS L1 C/A code chip and 50 Hz, respectively, the simulated waveforms and the theoretical waveforms coincide closely: the correlation coefficient is more than 0.99, and the relative deviation of the peak SNR of the DDM is less than 0.1%. The effect of the delay interval on the peak position of the DDM is greater than that of the Doppler interval, and the relative deviation of the peak position remains greater than 2% even if the delay interval is set to 1/64 GPS L1 C/A code chip.
      Keywords: GNSS-R;delay-Doppler mapping;delay interval;Doppler interval;GREEPS
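      A minimal illustration of the interval-sensitivity evaluation described above: sample an idealized 1-D delay waveform at different delay intervals and measure the correlation with a finely sampled reference. The squared triangular C/A-code autocorrelation used here is a common flat-surface approximation, not the full Z-V integral computed by GREEPS, so the numbers are not comparable with the paper's results.

```python
import numpy as np

def delay_waveform(tau_chips):
    """Idealized 1-D delay waveform: squared triangular autocorrelation of the
    GPS L1 C/A code (a crude flat-surface approximation, not the Z-V model)."""
    lam = np.clip(1.0 - np.abs(tau_chips), 0.0, None)
    return lam ** 2

def interval_sensitivity(delay_interval_chips, fine_interval=1.0 / 256):
    """Correlation between a coarsely sampled waveform (re-interpolated to the
    fine grid) and a finely sampled reference, as a proxy for how faithfully a
    given delay interval reproduces the waveform shape."""
    fine_tau = np.arange(-2.0, 2.0, fine_interval)
    coarse_tau = np.arange(-2.0, 2.0, delay_interval_chips)
    reference = delay_waveform(fine_tau)
    resampled = np.interp(fine_tau, coarse_tau, delay_waveform(coarse_tau))
    return np.corrcoef(reference, resampled)[0, 1]

for interval in (1.0, 1.0 / 4, 1.0 / 16, 1.0 / 64):
    print(f"delay interval = 1/{int(round(1 / interval))} chip -> "
          f"r = {interval_sensitivity(interval):.5f}")
```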
    • Dongsheng WEI,Xiaoguang ZHOU
      Vol. 23, Issue 3, Pages: 464-475(2019) DOI: 10.11834/jrs.20198013
      Automatic sampling of remote sensing image change detection samples based on prior information of vector data
      Abstract: Existing classification vector datasets are available for change detection in many regions, and prior knowledge, such as position, shape, size, and class, is included in these datasets. Change detection that relies on both remote sensing images and vector datasets has therefore become an important research focus, and an automatic sampling method for change detection samples is the key technology for achieving automatic change detection. Therefore, an automatic sampling method based on a feature space density outlier index is proposed. The change detection of vector data/remote sensing images is realized by focusing on the spectral feature differences between the image objects to be detected and the samples. On the one hand, the samples must be able to reflect the regional environmental characteristics in their spatial distribution, which belongs to the problem of the sampling distribution method. On the other hand, the samples must consist of image objects whose posterior class attribute has not changed, which makes the detection of outlier samples important. The automatic sampling method proposed in this study includes the spatial layout of samples and the automatic detection of outlier samples. A reasonable spatial distribution of samples is the precondition for improving the accuracy of change detection results, and an automatic detection method for outlier samples is an important part of automatic change detection technology. To acquire the samples for change detection, we initially extract a sampling layer from the vector data and use this layer to segment the remote sensing image so that image objects can be extracted. Second, samples are extracted from the image objects, and whether the spatial distribution of these samples is reasonable directly affects the accuracy of the change detection results. On the one hand, the rationality of the sample spatial layout should reflect the correctness and uniqueness of the samples in typical normal areas. On the other hand, the sample layout must be able to reflect the overall features of the image objects; it must also be random and uniform, and the samples should be representative. Thus, we extract samples of the image objects in accordance with the sampling region and by considering the distribution and topographic features of these image objects. Then, we detect the outlier image objects among the samples. Local reachability density (LRD) is applied to quantitatively describe the density of an image object relative to its neighboring image objects in the feature space. In particular, the LRD algorithm is used to construct an outlier index that measures the outlier degree of the sample image objects in the feature space. Feature space vectors are constructed on the basis of texture feature parameters, and the LRDs of the samples are then calculated according to the reachability distance. The feature space outlier index (FSOI) is defined and used to detect the outlier sample objects. Finally, experimental research on the proposed sampling method was carried out by using a vector topographic map and a high-resolution remote sensing image. The experimental results indicate that full detection of outlier samples can be achieved when the k value is set from 1/5 to 1/3 of the total number of samples and the FSOI threshold is set to 80%. The proposed sampling method can effectively achieve automatic sampling of change detection samples.
      Keywords: remote sensing image;vector data;sampling;change detection;outlier
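      The local reachability density used above is the same quantity that underlies the LOF family of outlier detectors. The sketch below computes LRD-based outlier scores for sample image objects described by feature vectors; the paper's FSOI definition and its 80% threshold rule are not reproduced, so flagging the highest-scoring samples here is only a simplified stand-in.

```python
import numpy as np

def lrd_outlier_scores(features, k=5):
    """Local-reachability-density-based outlier scores (LOF-style) for sample
    image objects described by texture/spectral feature vectors.

    features: (n_samples, n_features) array.
    Returns an outlier factor per sample (larger = more outlying)."""
    dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)
    knn = np.argsort(dist, axis=1)[:, :k]                   # k nearest neighbors
    k_dist = np.take_along_axis(dist, knn, axis=1)[:, -1]   # distance to k-th neighbor

    # reachability distance of p w.r.t. neighbor o: max(k_dist(o), d(p, o))
    reach = np.maximum(k_dist[knn], np.take_along_axis(dist, knn, axis=1))
    lrd = k / reach.sum(axis=1)

    # outlier factor: average ratio of neighbors' lrd to the sample's own lrd
    return lrd[knn].mean(axis=1) / lrd

# Toy usage: 40 "normal" object feature vectors plus 2 injected outliers.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 1, (40, 4)), rng.normal(6, 1, (2, 4))])
scores = lrd_outlier_scores(feats, k=5)
print(np.argsort(scores)[-2:])   # indices of the two most outlying samples
```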
    • Ji LIANG,Jian WANG,Junlei TAN,Hongxing LI,Yan LIU,Shiting XIA
      Vol. 23, Issue 3, Pages: 476-486(2019) DOI: 10.11834/jrs.20198370
      Abstract: Rayleigh scattering is one of the most important and common natural phenomena, and Rayleigh optical depth (ROD) is a significant index for measuring Rayleigh scattering intensity. By combining the theories of atmospheric scattering and ROD, we summarize in this paper the advantages and disadvantages of existing ROD simulation models. We find that, first, given that the global CO2 concentration has exceeded 400 ppm under climate change, modeling errors will arise from the parametric limitations of the approximate numerical model, which assumes the atmospheric temperature and background conditions of a 300 ppm CO2 concentration. Second, although the theoretical discrete model has a clear physical meaning and is self-adaptive to the CO2 concentration, and its simulation results are theoretically reliable, deriving the various relevant input physical parameters is complicated. To obtain an approximate numerical model for a 400 ppm CO2 concentration, we first simulated ROD under specific atmospheric conditions (P0=1 atm, T=15 ℃, CO2=400 ppm) with the theoretical discrete model for nine test sites with different altitudes and latitudes. Then, we analyzed and fitted the ROD as a function of wavelength and altitude. The fitted Rayleigh scattering intensity is inversely proportional to the wavelength raised to the power of 4.529, and the contribution of the CO2 concentration to the Rayleigh optical depth in the ultraviolet–blue band is on the order of 10⁻⁴ to 10⁻³. Under changing CO2 concentrations, we suggest that the theoretical discrete model be used as the main algorithm to simulate ROD so that the adaptability of the model can be improved and the errors arising from the modeling itself can be reduced. Furthermore, on the basis of the simulation results, a simple and convenient numerical simulation model for given atmospheric conditions can be obtained.
      Keywords: Rayleigh optical depth;scattering cross section;refractive index of air;carbon dioxide concentrations
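      The wavelength fitting step described above can be illustrated with a simple log-log regression. The (wavelength, ROD) pairs below are rough illustrative values at standard pressure, not the paper's simulated data, and the pressure scaling shown is only the usual first-order approximation.

```python
import numpy as np

# Hypothetical (wavelength [um], Rayleigh optical depth) pairs standing in for
# values simulated with the theoretical discrete model at P0 = 1 atm; they are
# illustrative only and are not taken from the paper.
wavelength_um = np.array([0.34, 0.38, 0.44, 0.50, 0.55, 0.675, 0.87])
rod = np.array([0.72, 0.45, 0.24, 0.14, 0.097, 0.042, 0.015])

# Fit tau(lambda) = a * lambda**(-b) by linear regression in log-log space.
slope, intercept = np.polyfit(np.log(wavelength_um), np.log(rod), 1)
a, b = np.exp(intercept), -slope
print(f"tau(lambda) ~= {a:.4f} * lambda^(-{b:.3f})")

def rod_at(wavelength, surface_pressure_atm=1.0):
    """To first order, ROD scales with surface pressure (i.e., with the column
    air mass above the site), so altitude enters through the pressure ratio."""
    return surface_pressure_atm * a * wavelength ** (-b)

print(rod_at(0.412), rod_at(0.412, surface_pressure_atm=0.7))
```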

      Remote Sensing Applications

    • Xinran ZHU,Changping HUANG,Bo WU,Hua SU,Wenzhe JIAO,Lifu ZHANG
      Vol. 23, Issue 3, Pages: 487-500(2019) DOI: 10.11834/jrs.20197382
      Research on remote sensing drought monitoring by considering spatial non-stationary characteristics
      Abstract: Remote sensing technology has the unique advantages of real-time and fast acquisition, spatio-temporal continuity, and wide coverage. Under the background of global climate deterioration, drought monitoring methods based on remote sensing can provide more real-time, accurate, and stable drought information and better assist scientific decision making than traditional ground monitoring methods. (Methods) Most existing drought monitoring methods based on remote sensing rely on global mathematical models that assume the spatial stationarity of drought events; hence, an accurate representation of local difference characteristics is difficult to achieve. In the current study, a geographically weighted regression (GWR) model is proposed to optimize the traditional global drought monitoring model by considering the spatially non-stationary characteristics of drought events and synthesizing various remote sensing drought indices. (Results) This study, which was conducted in the mainland United States, focused on drought monitoring over a ten-year period (2002–2011). The results indicate that the GWR model can provide locally estimated model parameters that capture spatial variations, and the monitoring results are consistent with the standard verification data of the United States Drought Monitor. The highest correlation coefficient R between the GWR model and the measured data is 0.8552, and the RMSE is 0.972, which is significantly superior to other remote sensing drought monitoring models. (Conclusion) The GWR model has the advantage of detecting spatial non-stationarity and can realize locally refined detection in drought modeling. Moreover, the GWR model can significantly improve the precision of remote sensing drought monitoring and thus has good application prospects.
      Keywords: drought monitoring;remote sensing;geographically weighted regression;spatial non-stationarity;local optimizing;CUS
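      Geographically weighted regression estimates a separate set of regression coefficients at each location by weighting nearby samples with a spatial kernel. The sketch below shows the standard local weighted least-squares step for one query location; the predictors, kernel, and bandwidth are illustrative placeholders, not those calibrated in the paper.

```python
import numpy as np

def gwr_predict(coords, X, y, query_coord, query_x, bandwidth):
    """One-point geographically weighted regression prediction.

    coords:      (n, 2) sample locations (e.g., projected x/y).
    X:           (n, p) predictor matrix of remote sensing drought indices
                 (a column of ones should be included for the intercept).
    y:           (n,) drought measure to be estimated.
    query_coord: (2,) location at which local coefficients are estimated.
    query_x:     (p,) predictors at the query location.
    bandwidth:   Gaussian kernel bandwidth controlling the local window.
    """
    d = np.linalg.norm(coords - query_coord, axis=1)
    w = np.exp(-(d ** 2) / (2.0 * bandwidth ** 2))       # Gaussian spatial kernel
    XtW = X.T * w
    beta = np.linalg.solve(XtW @ X, XtW @ y)              # local weighted least squares
    return query_x @ beta, beta

# Toy usage with synthetic data: two indices plus an intercept column.
rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, (200, 2))
indices = rng.normal(size=(200, 2))
X = np.column_stack([np.ones(200), indices])
y = 0.5 + indices @ np.array([1.0, -0.5]) + 0.1 * coords[:, 0] + rng.normal(0, 0.05, 200)
pred, local_beta = gwr_predict(coords, X, y, np.array([5.0, 5.0]),
                               np.array([1.0, 0.2, -0.1]), bandwidth=2.0)
print(pred, local_beta)
```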
    • Liang XIAO,Yueguang HE,Xuemin XING,Debao WEN,Chenggong TONG,Lifu CHEN,Xiaoying YU
      Vol. 23, Issue 3, Pages: 501-513(2019) DOI: 10.11834/jrs.20198292
      Abstract: Mining-induced dynamic deformation over drilling water-soluble rock salt mines may result in severe progressive damage to surface buildings and infrastructure. Long-term dynamic monitoring and real-time analysis of rock salt mines are therefore extremely important for preventing potential geological damage. In this study, the small-baseline subset (SBAS) InSAR technique is initially used to monitor the settlement of drilling water-soluble rock salt mines. A time series analysis is carried out to obtain the spatio-temporal characteristics of land subsidence caused by drilling water-soluble mining activities, and the temporal and spatial evolution of the settlement funnel is revealed. Subsequently, the monitoring results are compared with both DInSAR and leveling deformation results for reliability assessment, and a temporal and spatial analysis of the surface subsidence is conducted. A typical drilling water-soluble rock salt mine in Changde, Hunan Province, China was selected as the test area. Twenty-one scenes of Sentinel-1 image data covering the period from February 2016 to February 2017 were used to obtain the time series settlement sequence of the survey area. Experimental results indicate that large-scale surface settlement over the mining area began to appear in September 2016, with multiple subsidence funnel peaks in the horizontal direction. The overall spatial distribution of the subsidence in the mining area appears as a continuous sheet-like shape in the southwest part and a discrete belt-like distribution in the north-central part. These subsidence characteristics are consistent with mining activities that use drilling water-soluble technology. To evaluate the reliability of the results, they are compared with the field leveling deformation data and the DInSAR deformation data, and the results obtained by the SBAS technique are highly consistent with the leveling results. This study confirms the feasibility of applying SBAS-InSAR technology to the settlement monitoring of drilling water-soluble rock salt mines. The findings can provide a reference for the preliminary determination of mining cavity locations, aid in the analysis of the development characteristics of subsidence bowls, and help improve the recovery rate of water-soluble mines.
      Keywords: SBAS-InSAR;D-InSAR;mining area;mining subsidence;drilling solution mining;time series analysis

      Chinese Satellites

    • Tingting SHI,Hanqiu XU,Shuai WANG
      Vol. 23, Issue 3, Pages: 514-525(2019) DOI: 10.11834/jrs.20197496
      Abstract: The Tasseled Cap Transformation (TCT) is a commonly used remote sensing technique that has been successfully applied in various remote sensing fields. However, for high-resolution satellite sensors that usually have only four visible/near-infrared bands and lack a mid-infrared band, the retrieval of the TCT wetness component has not always been successful with the traditional Gram-Schmidt (GS) orthogonalization method. Moreover, although a few studies have developed the wetness component for such four-band sensor data, the derived results are somewhat unreasonable. Therefore, this study proposes a new method to derive the coefficients of the TCT wetness component for four-band sensor data. In particular, the new method is used to derive the TCT coefficients of the ZiYuan-3 (ZY-3) MUX sensor data of China. Eleven ZY-3 MUX images and six synchronous/near-synchronous Landsat 8 Operational Land Imager (OLI) images from different regions across China were used as test and validation images. From these image sets, seven ZY-3 MUX images and three Landsat 8 OLI images served as test images, while the other four ZY-3 MUX images and three Landsat 8 OLI images served as validation images. A large number of samples representing different land-cover types, such as dry and wet soil, dense vegetation, and water, were randomly selected from the test images. The new method proposed in this study for deriving the TCT coefficients is a back derivation (BD), in which the TCT wetness component, rather than the brightness component as in the traditional Gram-Schmidt method, is retrieved first. Three synchronous/near-synchronous image pairs of ZY-3 MUX and Landsat 8 OLI were used to derive the wetness component coefficients of ZY-3 MUX, particularly by relating the ZY-3 MUX data to the Landsat 8 wetness component on the basis of the selected 735,297 pixel samples. Then, the brightness and greenness components of the ZY-3 data were derived by implementing the traditional methods. Finally, the new BD method and the traditional method were compared to verify the feasibility of the new method. The experimental results indicate the following: (1) the TCT wetness component of ZY-3 MUX retrieved by the BD method can effectively solve the spectral distortion problem that exists when the wetness component of four-band sensor data is derived by the traditional method; (2) the scatters of the three components (brightness, greenness, and wetness) derived by the new method have typical tasseled-cap-like shapes in the 3D feature space composed of the three components. Compared with the traditional GS method, the scatters of water, vegetation, and built land or bare soil retrieved by the BD method are clearly separated in the 3D feature space, whereas they vaguely overlap in the traditional GS method; (3) the accuracy of the TCT coefficients derived by the new method is higher than that derived by the traditional GS method, considering that the new method has a higher correlation coefficient (R) and a lower root mean square error when validated with the corresponding TCT components of the Landsat 8 data. This finding is due largely to the improved accuracy of the wetness component derived by the new method. This study provides a set of TCT coefficients for ZY-3 MUX sensor data and presents a new method for deriving TCT coefficients for high-spatial-resolution remote sensing imagery with only four visible/near-infrared bands and no mid-infrared band. The new method effectively solves the problem of retrieving the wetness component of four-band sensor data that exists in the traditional GS method.
      Keywords: Tasseled Cap Transformation (TCT);wetness component;ZY-3;Landsat 8;Gram-Schmidt orthogonalization
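      The back derivation step described above amounts to a least-squares regression of the four ZY-3 MUX bands against the Landsat 8 TCT wetness values of co-located samples. The sketch below shows that step on synthetic stand-in data; the coefficients it prints are meaningless placeholders, not the coefficients published in the paper.

```python
import numpy as np

# Hypothetical paired samples: ZY-3 MUX reflectance in four bands (blue, green,
# red, NIR) and the Landsat 8 OLI TCT wetness value of the same pixels. Real
# coefficients must come from synchronous image pairs as done in the paper.
rng = np.random.default_rng(3)
zy3_bands = rng.uniform(0.02, 0.45, size=(5000, 4))
landsat_wetness = zy3_bands @ np.array([0.15, 0.20, -0.30, -0.40]) \
                  + rng.normal(0, 0.005, 5000)   # synthetic stand-in target

# Back derivation of the wetness component: least-squares regression of the
# four ZY-3 bands against the Landsat 8 wetness component.
wetness_coeffs, _, _, _ = np.linalg.lstsq(zy3_bands, landsat_wetness, rcond=None)
print("derived wetness coefficients:", np.round(wetness_coeffs, 4))

# Once wetness is fixed, brightness and greenness would be obtained with the
# traditional procedure (e.g., Gram-Schmidt orthogonalization against wetness).
wetness = zy3_bands @ wetness_coeffs
```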
    • Xiaobo ZHU,Qingjiu TIAN,Kaijian XU,Chunguang LYU,Ling WANG
      Vol. 23, Issue 3, Pages: 526-546(2019) DOI: 10.11834/jrs.20197128
      Abstract: With the launch and preliminary application of the GF-4 geostationary satellite, its combination of high spatial and temporal resolution distinguishes it among satellites of the same type, and China has attached more importance to research on geostationary satellite images. To evaluate the on-orbit detection capability of GF-4, a thorough assessment is essential to grasp the usable time of the image data within a day and to reasonably arrange the boot time, thereby improving satellite use efficiency and prolonging its life span. This objective can be accomplished by realistic radiation simulation of the imagery data and a signal-to-noise ratio (SNR) study with typical objects represented by dark pixels (such as water). Based on the given spectral capabilities and geometrical and atmospheric parameters, and in combination with the land surface properties (Hong Kong coastal water), the hyperspectral remote sensing signals of the GF-4 sensor under standard atmospheric conditions, including the Pan band and the blue, green, red, and NIR bands, are simulated using the radiative transfer model MODTRAN. On this basis, the interference of atmospheric path radiance is removed to calculate the effective SNR. Data for four typical time nodes at the spring equinox, summer solstice, autumn equinox, and winter solstice are selected for analysis. In this way, realistically simulated hyperspectral top-of-atmosphere apparent radiance, ground effective radiance, and path radiance are obtained. The information helps identify the effective time for the satellite to obtain high-quality water images within a day of observation, especially by analyzing the sensitivity of the critical weak signal during the dawn–dusk period. This work fills the gaps in data quality assessment and has positive significance for the scientific and rational use of GF-4. Results show that the highest apparent radiance values of the summer solstice and winter solstice are 59.26 and 56.20 W/(m2·sr·μm), respectively, both in the blue band, and the highest ground effective radiance values of the summer solstice and winter solstice are 17.52 and 12.13 W/(m2·sr·μm), respectively, also in the blue band. The highest effective SNR values of the summer solstice and winter solstice are 41.0 and 38.2 dB, respectively. The simulation results of the blue band are the worst, but those of the NIR, green, and Pan bands are good. To ensure image quality, a water effective SNR of 35 dB is set as the threshold value to determine the boot time. The resulting image acquisition window is 7:49–17:01 at the summer solstice and 9:28–15:07 at the winter solstice.
      Keywords: GF-4;MODTRAN;radiance;effective SNR;path radiance;Hong Kong coastal water
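      One simple way to express the effective SNR idea described above (path radiance removed before forming the signal-to-noise ratio) is sketched below. The noise-equivalent radiance is an arbitrary placeholder, and the logarithmic definition used here is not necessarily the paper's exact formulation, so the printed value is illustrative only.

```python
import numpy as np

def effective_snr_db(apparent_radiance, path_radiance, noise_equivalent_radiance):
    """Ground effective radiance (TOA apparent radiance minus atmospheric path
    radiance) expressed as an SNR in dB against an assumed sensor noise level."""
    ground_effective = apparent_radiance - path_radiance
    return 10.0 * np.log10(ground_effective / noise_equivalent_radiance)

# Blue-band summer-solstice radiances quoted in the abstract: apparent radiance
# 59.26 W/(m2.sr.um) and ground effective radiance 17.52 W/(m2.sr.um), which
# implies a path radiance of about 41.74 W/(m2.sr.um). The noise-equivalent
# radiance below is an arbitrary placeholder, not a GF-4 specification.
print(effective_snr_db(59.26, 59.26 - 17.52, noise_equivalent_radiance=0.01))
```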
    • Zhuchong LIN,Li LIU,Xiang WANG,Dandan WANG,Xiangzhou DENG,Songbin LI
      Vol. 23, Issue 3, Pages: 547-554(2019) DOI: 10.11834/jrs.20197100
      Abstract: TH01 is a surveying and mapping system with three satellites in orbit. The nominal orbit is the long-term maintenance target, which determines the coverage property of the constellation. This study aims to find a useful method for evaluating and optimizing coverage performance, and a path–latitude cover-map method is proposed for analysis and optimization. In the path–latitude cover-map, the covered areas are drawn on a map with the path number, rather than longitude, as the X-axis, from which the number of accesses to a given target region can be read and the access intervals can be obtained from the access path numbers. The formula for the longitude span of a sweep-imaging satellite is derived to construct the path–latitude cover-map, and a procedure is provided for evaluating the access number and intervals. An optimization method is also proposed with the current path number and the local time of the descending node as parameters. A numerical example of long-term analysis is provided to validate the effectiveness of the proposed method. A region with an anomalous shape is selected as the target, and its number and intervals of accesses within a repeat period are analyzed through the proposed procedure and the traditional grid-point covering method. Results show that the proposed method achieves accuracy similar to that of the grid-point covering method. The proposed optimization method is applied to shorten the access intervals of the constellation by adjusting the current path numbers and the local time of the descending node of each satellite, resulting in a 30% shorter maximum access interval. The proposed path–latitude cover-map method is effective for the long-term analysis of the TH01 satellites over an anomalously shaped area, achieving accuracy similar to that of the traditional grid-point method while remarkably saving resources and time. The optimization method is also effective for limiting the maximum access interval, which can provide helpful advice for long-term orbit maintenance.
      Keywords: TH01;coverage performance;path-latitude cover map;local time of descending node
    • Xinzhi GU,Qingwei ZENG,Hua SHEN,Erxue CHEN,Lei ZHAO,Fei YU,Kuan TU
      Vol. 23, Issue 3, Pages: 555-566(2019) DOI: 10.11834/jrs.20198171
      Study on water information extraction using domestic GF-3 image
      Abstract: The Chinese Synthetic Aperture Radar (SAR) satellite GF-3 has acquired a large volume of multi-polarization and full-polarization SAR data. Researchers at home and abroad have conducted in-depth studies on the extraction of inland water bodies from images and proposed rapid response mechanisms against flood disasters. However, optical remote sensing images and foreign spaceborne SAR images remain widely used because of their long development history and reliable image quality, so applying domestic GF-3 images to environmental protection and water resources management is an urgent matter. This study analyzed the differing backscattering characteristics of water bodies and other targets to explain the rationality of the threshold segmentation method; integrated image binarization based on threshold segmentation with a Markov random field (MRF); and developed a highly accurate and highly automated method for water body information extraction from GF-3 images. The method first analyzed the backscattering intensity differences between the water and non-water distributions under different imaging modes and different polarization modes based on histogram statistics, and the best segmentation threshold was determined by visual interpretation. The effects of the Otsu and KI binarization methods on water/non-water classification were then compared on the basis of the threshold segmentation results; the threshold segmentation result of the KI method was better than that of the Otsu method. Given that the radar beam cannot illuminate large mountain slopes facing away from the SAR sensor (a phenomenon that leads to missing monitoring information), current SAR orthorectification compensates for the missing data by interpolation, but this approach may result in inaccuracies in feature monitoring. Scholars at home and abroad typically remove shadow areas from water distribution maps. However, the pixel values obtained by interpolation cannot represent true backscattering intensity information, which affects the distribution characteristics of the image and reduces the accuracy of automatic threshold estimation. In response to this problem, this study generated a mask file, called Mask, of the radiation distortion region by combining a DEM with the GF-3 orbital parameters. In this manner, the radiation distortion region is masked in the SAR logarithmic intensity map, and its influence on the probability distribution of the image can be largely eliminated. The initial water distribution map can then be obtained by KI threshold segmentation with good accuracy. The Fisher transform is a data dimension reduction algorithm that converts multidimensional data into one-dimensional data, which is beneficial to image classification. In this study, the multi-polarization logarithmic intensity image is converted into a single variable by the Fisher transform under the guidance of the initial water/non-water distribution map derived by the KI method. Although traditional methods, such as multi-looking and filtering, can greatly reduce the noise level of SAR images, they also tend to lose useful image information. The MRF determines the class attribute of a pixel by using the maximum a posteriori probability criterion and by fusing spatial context information with pixel gray information, an approach with a good inhibitory effect on speckle noise. In this study, the posterior probabilities of the pixels in the water and non-water classes are calculated iteratively with the Fisher transform and an MRF based on a Gaussian distribution (GD-MRF). When the number of iterations is reached or the posterior probabilities become stable, the final water distribution map is generated on the basis of the maximum a posteriori probability criterion. Experiments were performed using multi-polarization GF-3 images under different imaging modes. The images cover the northeastern part of Hunan Province, where rivers and lakes abound; floods in the rainy season are very severe in Hunan Province, so real-time monitoring of the water body distribution is necessary. Random points were selected to evaluate the accuracy against optical images with imaging times close to those of GF-3. The true detection accuracy, which denotes the proportion of correctly classified pixels among the total pixels, was used to quantitatively evaluate the accuracy of the water distribution. The experimental results show that the KI method has stronger advantages than the Otsu method for water extraction from GF-3 images. After Mask was used to remove the radiation distortion area of the image, the automatically determined threshold was close to the visual interpretation threshold, with a difference of only 0.4 dB. However, the accuracy was low because of non-water pixels within the water class. By applying the Fisher transform and the MRF model, the continuous extraction of water information was achieved, and the boundaries of rivers and lakes became smoother than those of the KI method. The extraction accuracy of the water targets exceeded 85%, which is superior to that of the water index method using optical images. Extracting water information from SAR images with the threshold segmentation method is a reasonable approach. The capability of cross-polarization (HV, VH) to distinguish the water and non-water classes is stronger than that of co-polarization (HH, VV). Radiation distortion areas have a certain influence on the probability density distributions of intensity images; thus, deriving the exact segmentation threshold by removing the radiation distortion areas is imperative. The Fisher transform and the GD-MRF model can be used to fuse multi-polarization information and spatial context information, and they can effectively suppress the influence of speckle noise. The method requires minimal artificial experience and offers the advantages of strong automation and high reliability.
      Keywords: GF-3;water extraction;KI;Markov random field;automation;maximum posterior probability
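      The KI (Kittler-Illingworth) minimum-error thresholding used above can be sketched directly from the image histogram: model the histogram as two Gaussian classes (water and non-water) and pick the threshold that minimizes the classification error criterion J(T). The sketch below runs on synthetic bimodal log-intensity data; the masking, Fisher transform, and GD-MRF stages of the paper are not included.

```python
import numpy as np

def kittler_illingworth_threshold(values, bins=256):
    """Kittler-Illingworth minimum-error threshold on a 1-D array of SAR
    log-intensity values (dB). Models the histogram as two Gaussians (here:
    water and non-water) and picks the threshold minimizing J(T)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist.astype(float) / hist.sum()

    best_t, best_j = centers[0], np.inf
    for t in range(1, bins - 1):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 < 1e-6 or p2 < 1e-6:
            continue
        m1 = (p[:t] * centers[:t]).sum() / p1
        m2 = (p[t:] * centers[t:]).sum() / p2
        v1 = (p[:t] * (centers[:t] - m1) ** 2).sum() / p1
        v2 = (p[t:] * (centers[t:] - m2) ** 2).sum() / p2
        if v1 <= 0 or v2 <= 0:
            continue
        j = 1 + 2 * (p1 * np.log(np.sqrt(v1)) + p2 * np.log(np.sqrt(v2))) \
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_j, best_t = j, centers[t]
    return best_t

# Toy usage: bimodal synthetic log-intensity data (dark water vs. brighter land).
rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(-18, 1.5, 30_000), rng.normal(-8, 2.0, 70_000)])
threshold = kittler_illingworth_threshold(data)
water_mask = data < threshold
print(round(threshold, 2), water_mask.mean())
```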