Abstract: Big geodata can be regarded as the “footprints” generated by geographic objects and are divided into two types: big earth observation data and big social behavior data. Big earth observation data refer to remote sensing data and all kinds of monitoring data from ground observation stations; big social behavior data are defined as the data generated by humans, such as mobile phone data, traffic card data, social media data, and trajectory data of floating cars. Big earth observation data contain information on the land surface, and big social behavior data record the social behavior of humans. The essence of big geodata mining is to reveal the man-land relationship via the retrieval of the “footprints” of geographic objects. Most of the remarkable progress in recent years has been made through the aggregation of multiple types of big geodata. The significance of big geodata aggregation can be summarized in three aspects: it represents a new research paradigm, produces a new research angle, and improves research quality by combining different types of information. In contrast to image fusion, which is the combination of imagery with the same data structure, big geodata aggregation is defined as the merging of different types of big geodata into a multidimensional dataset through transformation in terms of structure, content, and representation. In general, big geodata aggregation involves both big earth observation data and big social behavior data. Given the differences in data source, structure, connotation, and granularity, the aggregation procedure is varied and complicated. Considering the characteristics and classification of big geodata, we classify big geodata aggregation into four types: spatiotemporal aggregation, object-oriented aggregation, topic-oriented aggregation, and model-oriented aggregation.
In spatiotemporal aggregation, data are merged via their common locations, such as the aggregation of POI and remote sensing data. In object-oriented aggregation, data are connected through the same object, such as the aggregation of indoor and outdoor trajectory data via the same phone carrier. In topic-oriented aggregation, data with the same topic are integrated, such as the aggregation of different types of information sourced from the same earthquake event. In model-oriented aggregation, data are assimilated via the same model, such as the aggregation of street view data and POI data when using a deep learning model for the recognition of building style. As an important part of big geodata mining, the procedure of big geodata aggregation can be divided into four steps: determination of the aggregation core, information backtracking, retrieval, and convergence. The key issues in big geodata aggregation include unifying spatiotemporal benchmarks, unifying data representation and storage, matching multisource big geodata from the same geo-object, centralizing the spatiotemporal scope of multisource big geodata, settling the changing velocities of different objects, solving data-sharing and privacy protection problems, and aggregating big and small geodata. With the development of methods and techniques of big geodata aggregation, several research areas, including the sensing of places, the identification of land functions, the detection of crowd activities, the determination of the relationship between urban functions and multisource flows, and the introduction of a land surface complex giant system, may be substantially expanded.
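The four aggregation types share a simple core idea: records from different sources are joined on a common key (location, object, topic, or model). As a minimal illustrative sketch of spatiotemporal aggregation (not from the paper), the toy function below bins POI records into the grid cells of a raster, so each cell can carry both its image values and the social behavior data that fall inside it; all coordinates and categories are made up:

```python
from collections import defaultdict

def spatiotemporal_aggregate(pois, origin, cell_size):
    """Bin POI records into square raster grid cells so that each cell
    can be linked with both image values and the POIs inside it.

    pois      : list of (x, y, category) tuples
    origin    : (x0, y0), lower-left corner of the raster
    cell_size : edge length of a square cell (same units as x, y)
    """
    x0, y0 = origin
    cells = defaultdict(list)
    for x, y, cat in pois:
        col = int((x - x0) // cell_size)
        row = int((y - y0) // cell_size)
        cells[(row, col)].append(cat)  # the common location is the join key
    return dict(cells)

# Three illustrative POIs falling into three different 50 m cells
pois = [(30.0, 40.0, "school"), (95.0, 20.0, "shop"), (60.0, 70.0, "park")]
print(spatiotemporal_aggregate(pois, origin=(0.0, 0.0), cell_size=50.0))
```

In a real pipeline the cell index would come from the remote sensing image's geotransform, so the POI counts per cell align exactly with the pixel grid.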
Keywords: big geodata philosophy; urban functional area; crowd activities; geographic information science; data fusion
Abstract: Cross-section monitoring is an important cornerstone for accurately grasping surface water quality and the characteristics of water environment change, exploring water environment formation mechanisms, evaluating and assessing water environment trends, treating and repairing water pollution, and managing water quality. Improving the level of three-dimensional, automatic, and intelligent monitoring is an important direction for future ecological environment monitoring and can effectively improve modern monitoring capability. Traditional manual cross-section sampling is time-consuming and laborious, and its low temporal and spatial frequency results in discrete data and poor timeliness. High-frequency online monitoring with underwater sensors enables continuous observation in time, but the sensors are prone to wear and to disturbance by the water environment, resulting in unstable monitoring accuracy, limited monitoring indexes, and large errors between the automatic monitoring results for key parameters such as total nitrogen and total phosphorus and conventional laboratory analysis data. In addition, online monitoring with underwater sensors incurs high management and maintenance costs. Satellite remote sensing can realize the inversion of key water quality parameters at different spatial scales, but it cannot easily provide continuous observation at high temporal resolution, and high-precision monitoring is difficult to ensure because of cloud and rain conditions and atmospheric correction effects. In this study, we first propose the concept of land-based (ground-based, shore-based) water environment remote sensing to effectively supplement and improve the existing satellite remote sensing and national surface water cross-section monitoring technology and methods. Using a domestic hyperspectral imager developed by Hangzhou Hikvision Digital Technology Co., Ltd.
and the Nanjing Institute of Geography and Limnology, Chinese Academy of Sciences, we carry out the application practice of land-based (ground-based, shore-based) water environment remote sensing under complex weather and water conditions in Lake Taihu. Based on in situ measurements of remote sensing reflectance and key water quality parameters, we calibrate and validate high-precision remote sensing estimation models of Secchi disc depth, total suspended matter, total nitrogen, and total phosphorus. Overall, the estimation precisions of the four key water quality parameters are close to or higher than 80%, indicating that land-based (ground-based, shore-based) remote sensing can be used for the accurate monitoring of water quality under complex weather and water conditions. High-frequency dynamic processes of key water quality parameters are observed by implanting the remote sensing estimation models into the hyperspectral imager, which can be used to finely characterize the temporal evolution of the water environment. The practice of remote sensing of the water environment in Lake Taihu shows that installing domestic hyperspectral imagers at national surface water monitoring sections to carry out high-frequency, accurate observation of key water quality parameters has broad application prospects and market value. Of course, more in situ observations and measurements for different waters are needed to improve or develop universal models for monitoring more parameters. Coupled with satellite, unmanned aerial vehicle, and land-based (ground-based, shore-based) remote sensing, a sky-air-ground integrated remote sensing monitoring system can be established.
Combined with manual cross-section monitoring and automatic observation, collaborative monitoring can be carried out to give full play to their respective advantages, truly realizing long-term reconstruction and real-time, high-frequency dynamic monitoring of key water quality parameters in the service of water environment management.
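The abstract does not give the form of the calibrated estimation models; as a hedged illustration of how such an empirical model is typically calibrated against in situ samples, the sketch below assumes a power-law band-ratio model of the kind commonly used for total suspended matter, fitted by least squares in log space. The band ratio values and concentrations are synthetic, not Lake Taihu data:

```python
import numpy as np

# Assumed model form: TSM = a * (Rrs_nir / Rrs_green) ** b
def calibrate_band_ratio(ratio, tsm):
    """Fit the power-law coefficients (a, b) in log space."""
    b, log_a = np.polyfit(np.log(ratio), np.log(tsm), 1)
    return np.exp(log_a), b

def estimate_tsm(ratio, a, b):
    """Apply the calibrated model to a new band-ratio observation."""
    return a * ratio ** b

# Synthetic, noise-free "in situ" calibration set (illustrative only)
ratio = np.array([0.2, 0.4, 0.6, 0.8, 1.0])   # Rrs_nir / Rrs_green
tsm = 50.0 * ratio ** 1.5                      # mg/L
a, b = calibrate_band_ratio(ratio, tsm)
print(round(a, 2), round(b, 2))                # recovers a = 50, b = 1.5
```

Validation against held-out in situ samples, as done in the study, would then give the reported estimation precision for each parameter.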
Abstract: Vegetation parameters are hotspots and difficulties in the quantitative inversion of ecological remote sensing, and they are also basic parameters of ecosystem research. They have an important impact on the structure, processes, and functions of ecosystems and have always been hot issues in ecology and remote sensing research. However, in practice, vegetation parameters are not obtained through direct interpretation of remote sensing data; rather, data models are established on the basis of the spectral characteristics of vegetation for quantitative inversion, and the retrieval results of different inversion methods differ markedly. Based on an extensive reading of publicly published literature at home and abroad, this paper divides existing vegetation ecological remote sensing parameters into three categories, namely, physical, biochemical, and energy and functional; systematically sorts out the advantages, disadvantages, and applicability of the main quantitative inversion methods for each category; and discusses current deficiencies and future development trends. Representative common vegetation parameters are introduced one by one. For the physical parameters, the research progress of vegetation coverage, biomass, leaf area index, and tree height is mainly introduced; for the biochemical parameters, the research progress of vegetation water content, chlorophyll content, and photosynthetic capacity is mainly introduced.
For the energy parameters, the research progress on Photosynthetically Active Radiation (PAR) and absorbed PAR is introduced; for the functional parameters, the research progress on vegetation productivity and carbon exchange capacity is mainly introduced. Although positive progress has been made in the quantitative inversion of vegetation ecological remote sensing parameters, owing to the constraints of ground and sensor observation performance and insufficient knowledge of vegetation growth and change processes, some problems still need to be resolved. The main problems include the decomposition of mixed pixels, ill-conditioned inversion, error transfer in the application of physical models, the scale effect and spatial variability of data fusion, and the determination of the optimal scheme for model coupling. Generally speaking, inversion methods for vegetation parameters have been developing in the direction of multivariate methods, multimethod fusion, and the continuous improvement of various methods. Data sources are becoming more and more abundant, which enables mutual verification and information complementarity between data and alleviates the lack of vegetation characteristic information faced in parameter inversion.
High-precision vegetation ecological remote sensing parameter products support the study of new hot issues related to vegetation carbon sources, carbon sinks, carbon use efficiency, the carbon cycle, and vegetation phenology. By comparing the inversion methods for different vegetation ecological remote sensing parameters and the relationships among them, the difficulties and scientific problems in quantitative vegetation remote sensing inversion are discussed for the benefit of researchers and technical personnel in related fields.
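As one concrete example of the physical-parameter inversions surveyed above, vegetation coverage is commonly estimated from NDVI with the dimidiate pixel model. The sketch below assumes illustrative bare-soil and full-vegetation NDVI endmembers, which in practice are scene-dependent and must be determined from the data:

```python
def fvc_dimidiate(ndvi, ndvi_soil=0.05, ndvi_veg=0.85):
    """Fractional vegetation cover via the dimidiate pixel model:
    FVC = (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil), clipped to [0, 1].
    The endmember values here are illustrative assumptions."""
    fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return min(1.0, max(0.0, fvc))

print(round(fvc_dimidiate(0.45), 3))  # mixed pixel, roughly half vegetated
```

The same linear-unmixing logic underlies the mixed-pixel decomposition problem noted above: when a pixel contains more than two endmembers, the two-component assumption breaks down.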
Abstract: Remote sensing imagery interpretation is a continuously developing research direction. With the ever-changing needs of remote sensing applications, the rapid development of high-resolution remote sensing data, the accumulation of geographic knowledge, and the development of artificial intelligence, automated and intelligent classification technology urgently needs to be developed. This paper focuses on the development of intelligent interpretation. First, research progress is elaborated from three aspects: interpretation unit, classification method, and interpretation recognition. Then, a geographic scene-level overall framework for the intelligent understanding of remote sensing imagery is presented. The framework includes geographic knowledge graph construction, deep convolutional neural network model construction, and a semantic classification method based on the geographic knowledge graph and the deep learning model. Preliminary test results are provided. Lastly, important development trends of intelligent understanding are projected. A geographic knowledge graph can realize the formal description of and reasoning over geographic knowledge and improve the learning capability and the utilization rate of prior knowledge. Classification results and related semantic information can also be obtained, which is helpful for the in-depth cognition of geographic scenes. This study is expected to expand the ideas and methods for the intelligent interpretation of remote sensing images and to improve its fineness and intelligence. Intelligent interpretation can provide intelligent geospatial understanding capabilities and promote the in-depth transformation of data into information, knowledge, and intelligence.
Abstract: A polarimetric synthetic aperture radar (PolSAR) system transmits and receives electromagnetic waves with different polarizations to acquire image measurements. Polarimetric calibration (PolCAL) is a critical stage for PolSAR image quality improvement. The general calibration technique relies on ground-deployed active and passive corner reflectors (CRs) to solve the residual system errors remaining after internal calibration, such as crosstalk, channel imbalance, and the Faraday rotation angle (FRA). Although ground reflectors are the best way to calibrate system distortion, manufacturing and deploying them are time-consuming and costly. For the common trihedral, rectangular, and pentagonal CRs, an angle bias of more than 1° between the metal plates results in a 0.2-1 dB change in the radar cross section (RCS). As the sensor wavelength increases, the length of the ground-deployed CR must also be enlarged to ensure that the RCS is high enough. A 1 m CR is usually required to calibrate a 0.05 m wavelength C-band satellite sensor, but a CR of more than 2 m is necessary to calibrate a 0.22 m wavelength L-band sensor. For calibrating the P-band BIOMASS mission, the CR length should be 5 m, which significantly increases the difficulty of CR manufacturing. Moreover, the azimuth and pitch angles of a deployed reflector must be adjusted according to the sensor pass direction and look angle. Current radars generally operate with dozens of beams, which demands heavy ground campaigns to match the sensor configuration. During the operation period of a spaceborne SAR, the radiometric characteristics change with time, and the polarimetric distortions change accordingly, so carrying out periodic calibration campaigns is of great importance. The regular performance of CR-based calibration undoubtedly increases the expenditure of time and effort.
Therefore, it is vital to develop calibration techniques that do not use corner reflectors. This paper reports recent research progress on PolSAR calibration without CRs. First, we introduce why non-reflector calibration techniques are necessary and give the evaluation criteria of image quality to help readers better understand radar distortion. Second, we classify recent non-reflector calibration methods into two categories, natural-media-based calibration and corner-reflector-like calibration, and subsequently present the properties of the two categories. Natural-media-based methods use, for example, in-scene vegetation in FRA-free areas to estimate the crosstalk and the cross-pol channel imbalance, while the co-pol channel imbalance may be held constant or solved using one or more CRs. In scenes where the FRA couples with other distortions, a calibrator with no cross-polarized return may still be needed to add constraints for searching solutions. To dispense with CRs entirely, special natural media or CR-like point targets help estimate the amplitude and phase of the co-pol channel imbalance and have yielded outstanding achievements. Finally, a conclusion is given in the last section. Research on non-reflector calibration techniques is meaningful and of great value for SAR systems with long wavelengths.
Abstract: Machine learning methods have made breakthroughs in recent years, and their remote sensing applications have developed from image recognition and classification to many fields of quantitative retrieval. Owing to the complicated mechanisms in quantitative aerosol remote sensing, the types and accuracy of retrieved parameters are limited. Machine learning introduces new research and application techniques to aerosol remote sensing. Existing machine learning methods for aerosol retrieval are summarized into four categories: satellite remote sensing of aerosol optical depth, satellite remote sensing of other aerosol parameters, particulate matter concentration estimation, and ground-based aerosol remote sensing. In consideration of the authors' research, and through analysis and discussion, the conditions under which machine learning is suited to quantitative aerosol remote sensing are summarized as follows: (1) physical models cannot be utilized; (2) existing models have low accuracy; (3) existing models have low computational efficiency. From the perspective of application, machine learning with relevant inputs can improve retrieved product types, retrieval accuracy, and calculation efficiency. For quantitative remote sensing research, attention should also be paid to mining information from remote sensing data to improve retrieval capability. Machine learning can also feed back into the understanding of remote sensing mechanisms through error analysis, such that machine learning and remote sensing mechanism research promote each other.
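As a hedged, generic illustration of the retrieval setting described above (not any specific method from the surveyed literature), a minimal k-nearest-neighbor regressor can map satellite reflectance features to aerosol optical depth; the two features and all values below are synthetic stand-ins for real top-of-atmosphere observations and collocated ground-truth AOD:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Predict AOD for feature vector x as the mean of its k nearest
    training samples in feature space."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    return float(np.mean(y_train[idx]))

# Synthetic training set: [reflectance_band1, reflectance_band2] -> AOD
X = np.array([[0.10, 0.08], [0.12, 0.09], [0.30, 0.25], [0.32, 0.27]])
y = np.array([0.10, 0.12, 0.80, 0.85])
print(knn_predict(X, y, np.array([0.31, 0.26]), k=2))  # near the hazy samples
```

Real retrievals add many more inputs (geometry, surface type, ancillary meteorology) and use far larger training sets, but the structure, learned mapping from observations to an aerosol parameter, is the same.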
Abstract: As an important grain crop in China, maize has many varieties that are prone to misclassification, affecting agricultural security and food production. With the development of hyperspectral imaging and deep learning technology, identifying crop varieties by combining the two has become possible. A Convolutional Neural Network (CNN), the most representative deep learning algorithm for image classification tasks, needs a large number of training samples during model training. However, obtaining a large number of hyperspectral images of maize seed samples is difficult and time consuming. Aiming at the large number of modeling samples required by traditional CNN-based crop identification in hyperspectral images, a variety identification model for maize seeds based on hyperspectral pixel-level spectral information and a CNN is proposed. First, hyperspectral images of different varieties of maize seeds in the range of 400-1000 nm are obtained, and the 203-dimensional spectral information of all the pixels of each sample is extracted. Nevertheless, the enormous amount of spectral information creates the problem of dimensional disaster and greatly increases the computational cost. Second, to reduce the dimensionality of the sample spectral information, the principal component analysis algorithm is used to reduce the spectral dimension to eight, which effectively shortens the operation time. Third, the pixel-level spectral information of each sample (i.e., the spectral information of all of its pixels) is applied to Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) classification models, in addition to the CNN model.
The experimental results demonstrate that for the CNN, SVM, and KNN recognition algorithms, the pixel-level spectral information models show a more stable and efficient recognition effect than the seed-level ones (i.e., models using the average of all pixel spectra of each sample). The seed-level information model does not fully utilize the pixel spectra and spatial information of the samples and therefore needs a large number of modeling samples. When the number of samples used to build the classification model is the same, the CNN model has a significantly better recognition effect than the SVM and KNN models. On the basis of all the pixel-level classification results, a majority voting strategy is used to identify the variety of each maize seed sample, and the sample recognition accuracy reaches 100% (this refers to the identification accuracy when the proportions of samples in the modeling and test sets are 0.27 and 0.32, respectively; as the number of samples in the test set increases, the identification accuracy decreases). Lastly, the t-distributed stochastic neighbor embedding algorithm is used to visualize the output eigenvalues of the CNN. The features of different maize seed varieties are clearly bounded in the visualization, which adequately verifies the validity of the variety recognition model based on hyperspectral pixel-level information and the CNN. With only a small number of modeling seed samples, nondestructive and efficient variety identification of maize seeds is realized, providing a theoretical basis for precision agriculture.
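The pixel-level pipeline described above (PCA to eight components, per-pixel classification, then a majority vote over each seed's pixels) can be sketched as follows; the per-pixel classifier is omitted (the vote step takes its labels directly), and the pixel counts and labels are illustrative, not the paper's data:

```python
import numpy as np
from collections import Counter

def pca_reduce(X, n_components=8):
    """Reduce each pixel's spectrum to its leading principal-component
    scores via SVD of the mean-centered data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def majority_vote(pixel_labels):
    """Assign the seed the variety predicted for most of its pixels."""
    return Counter(pixel_labels).most_common(1)[0][0]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 203))   # 50 pixels x 203 spectral bands (synthetic)
Z = pca_reduce(X)                # 50 x 8 component scores fed to the classifier
print(Z.shape)
print(majority_vote(["A", "B", "A", "A", "B"]))  # seed labeled variety "A"
```

The vote makes the seed-level decision robust to individual pixel misclassifications, which is why the pixel-level models remain stable with few modeling samples.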
Keywords: hyperspectral image; convolutional neural network; deep learning; maize seed; t-distributed stochastic neighbor embedding algorithm; pixel-level spectral information
Abstract: House buildings, as the main places for human activities, can be rapidly and accurately extracted from high-resolution remote sensing images, which is of great significance for applying remote sensing information to disaster prevention and mitigation and to town management. On the basis of deep learning, this paper proposes a pixel-level method for the accurate extraction of house buildings from high-resolution remote sensing images. First, in consideration of the lack of pixel features at the edges of a sample image, an IEU-Net model is proposed on the basis of the U-Net model. A new ignore-edges categorical cross-entropy function, IELoss, is designed as the loss function, and Dropout and BN layers are added to improve the speed and robustness of model training while avoiding overfitting. Second, to address the limited feature richness of the model, the Morphological Building Index (MBI) is introduced into the classification process together with the RGB bands of the remote sensing image. Lastly, with the ignore-edges prediction method, the best building extraction results are obtained in the prediction of the IEU-Net model. The remote sensing data used in this study are 0.8 m true color and infrared band data obtained by multispectral and panchromatic fusion from the GF-2 satellite over the Yushu Tibetan Autonomous Prefecture of Qinghai Province. A label image is obtained by visually interpreting the images using ArcGIS software and performing a feature-to-raster transformation on the resulting vector file, yielding a ground truth image of the locations of houses and buildings in the remote sensing image. The MBI and RGB bands are fused, and together with the corresponding label images they are randomly cropped to obtain 500 pieces of 256×256 training data and 100 pieces of 256×256 verification data.
With the IEU-Net model's r value set to 0.5 and the training data as the model input, using the Adam optimization algorithm for back-propagation over 100 iterations achieves the optimal parameters. The Overall Accuracy (OA) is 91.86%, and the kappa value is 0.802. To verify the effectiveness of IELoss in the IEU-Net model and of the ignore-edges prediction method, relative to CELoss and the ordinary prediction method, in solving the problem of insufficient edge pixel features, this study compares r values of 1.0, 0.9, 0.8, 0.7, and 0.6 (when r is 1.0, IELoss is equivalent to CELoss, and the ignore-edges prediction method is equivalent to the ordinary prediction method). The results show that the OA of the model using IELoss and the ignore-edges prediction method is 5.03% higher than that of the model using CELoss and the ordinary prediction method, and the kappa value is 0.165 higher. To verify the effectiveness of adding the MBI to the training data, a comparative experiment is performed using the RGB three-band data alone as the training set. The OA of the prediction obtained with the MBI-added data set is 1.55% higher than that of the RGB-only data set, and the kappa value is increased by 0.009. The experimental results show that the IEU-Net model effectively solves the problem of insufficient edge pixel features and achieves house building extraction with high OA and kappa values. The addition of MBI data can overcome the influence of roads and building shadows to a certain extent and accurately extract the edge information of house buildings. Therefore, if significant vegetation exists near the houses and buildings in the area of interest, the normalized difference vegetation index can be added to the training set to reduce the impact of the vegetation.
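The exact definition of IELoss is not given in the abstract; the sketch below assumes a plausible form in which only the central r fraction of each patch contributes to the categorical cross entropy, so border pixels with insufficient context are ignored. Consistent with the equivalence noted above, at r = 1.0 it reduces to ordinary cross entropy:

```python
import numpy as np

def ie_cross_entropy(probs, labels, r=0.5):
    """Assumed ignore-edges cross entropy: average the per-pixel
    categorical cross entropy over only the central r fraction of the patch.

    probs  : (H, W, C) softmax outputs
    labels : (H, W) integer class map
    r      : fraction of each spatial dimension kept in the center
    """
    h, w, _ = probs.shape
    mh, mw = int(h * (1 - r) / 2), int(w * (1 - r) / 2)  # margin widths
    p = probs[mh:h - mh, mw:w - mw]
    y = labels[mh:h - mh, mw:w - mw]
    picked = np.take_along_axis(p, y[..., None], axis=2)[..., 0]
    return float(-np.mean(np.log(picked)))

# Uniform 2-class predictions: loss equals ln(2) for any r
probs = np.full((8, 8, 2), 0.5)
labels = np.zeros((8, 8), dtype=int)
print(round(ie_cross_entropy(probs, labels, r=0.5), 4))
```

The ignore-edges prediction method would apply the same idea at inference time, keeping only the central part of each predicted tile and covering the image with overlapping tiles.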
Abstract: Owing to the special imaging principle in the long-wave infrared region, thermal infrared remote sensing images contain the temperature and emissivity features of targets. However, the low spatial resolution of thermal infrared images limits their wide application. With the development of remote sensing technology, multisource remote sensing images of the same region can provide researchers with complete information on a target. Building on the high spatial resolution of visible-band images, thermal infrared image fusion enhancement and subpixel feature extraction have high application value. Therefore, a new method for the subpixel temperature retrieval of thermal infrared images based on multiresolution superpixel low-rank representation and residual correlation is proposed in this paper. The method achieves two goals by fusing visible and thermal infrared images in a super-resolution manner: (1) enhancement of the spatial characteristics of thermal infrared images based on adaptive fusion and (2) estimation of subpixel temperature and super resolution for thermal infrared images. The main processing steps and advantages of the algorithm are as follows: (1) For superpixel segmentation and low-rank restoration at multiple resolutions, superpixel blocks, instead of traditional rectangular blocks, are used as low-rank restoration units to enhance the class stability within each unit and suppress structural noise. (2) Through a constructed guided linear filter, the high-spatial-resolution features of the visible image can be transferred to the thermal infrared image while keeping the spectral information of the thermal infrared image unchanged.
(3) For the estimation of subpixel temperature and the super resolution of thermal infrared images, the correlation between the residuals of the visible and fusion images is established at the low-resolution layer and applied to the high-resolution layer to preserve image details. To validate the effectiveness of the proposed method, the visible and thermal infrared data from the 2014 IGARSS data fusion contest are used for experiments. The algorithm is evaluated in three aspects: (1) the improvement of the spatial characteristics of thermal infrared images through adaptive low-rank representation, such as noise suppression, intraclass smoothing, and edge enhancement; (2) the protection of the spectral information of thermal infrared images in homogeneous and heterogeneous regions; and (3) the super-resolution effect and the accuracy of subpixel temperature retrieval. Compared with the traditional supervised graph-based feature fusion method, the proposed method has the best edge-sharpening, noise suppression, and spatial smoothing effects, and it can protect the spectral information of thermal infrared images for different region types. The super-resolution image obtained by the proposed algorithm achieves high temperature retrieval accuracy, with an overall root-mean-square error of less than 1 K, and the average classification accuracy is improved by more than 20%.
Keywords: subpixel temperature retrieval; visible and thermal infrared image fusion; guided filter; multiresolution self-adaptive low-rank representation; superpixel segmentation
Abstract: Scene classification, which aims at assigning a semantic label to a given image, is an important research topic. High-Spatial-Resolution (HSR) images contain abundant information on ground objects, such as geometric structure and spatial layout, and such complex images are difficult to interpret effectively. Extracting discriminative features is the key step in improving classification accuracy. Various methods for constructing discriminative representations, including handcrafted-feature-based and deep-learning-based methods, have been proposed. The former focus on designing different handcrafted features via professional knowledge and describe a scene through a single feature or multifeature fusion. However, for a complex scene, handcrafted features show limited discriminative and generalization capabilities. Deep-learning-based methods, owing to their powerful feature extraction capability, have made incredible progress in scene classification. Compared with the former methods, Convolutional Neural Networks (CNNs) can automatically extract deep features from massive HSR images. Nevertheless, CNNs focus on global information and thus fail to explore the context relationships within HSR images. Recently, Graph Convolutional Networks (GCNs) have become an important branch of deep learning and have been adopted to model the spatial relations hidden in HSR images via graph structure. In this paper, a novel architecture termed the CNN-GCN-based Dual-Stream Network (CGDSN) is proposed for scene classification. The CGDSN contains two modules: the CNN and GCN streams. For the CNN stream, the pretrained DenseNet-121 is employed as the backbone to extract the global features of HSR images. In the GCN stream, VGGNet-16 pretrained on ImageNet is introduced to generate the feature maps of its last convolutional layer, and average pooling is then applied for downsampling before the construction of an adjacency matrix.
Given that every image is represented by a graph, a GCN model is developed to capture context relationships. The two graph convolutional layers of the GCN stream are followed by a global average pooling layer and a Fully Connected (FC) layer to form the context features of HSR images. Lastly, to fuse the global and context features adequately, a weighted concatenation layer is constructed to integrate them, and an FC layer is introduced to predict scene categories. The AID, RSSCN7, and NWPU-RESISC45 data sets are chosen to verify the effectiveness of the CGDSN method. Experimental results illustrate that the proposed CGDSN algorithm outperforms some state-of-the-art methods in terms of Overall Accuracy (OA). On the AID data set, the OAs reach 95.62% and 97.14% under training ratios of 20% and 50%, respectively. On the RSSCN7 data set, the classification result obtained by the CGDSN method is 95.46% with 50% training samples. For the NWPU-RESISC45 data set, the classification accuracies achieved by the CGDSN method are 91.86% and 94.12% under training ratios of 10% and 20%, respectively. The proposed CGDSN method can extract discriminative features and achieves competitive accuracy in scene classification.
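The fusion step, weighting and concatenating the two streams' feature vectors before a final FC layer, can be sketched as below; the scalar weighting scheme, feature dimensions, and random weights are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def fuse_and_classify(f_cnn, f_gcn, W, alpha=0.6):
    """Weighted concatenation of global (CNN-stream) and context
    (GCN-stream) features, followed by one FC layer for class logits.
    alpha is an assumed fusion weight."""
    fused = np.concatenate([alpha * f_cnn, (1.0 - alpha) * f_gcn])
    logits = W @ fused                 # final FC layer (bias omitted)
    return int(np.argmax(logits))      # predicted scene category

rng = np.random.default_rng(42)
f_cnn = rng.normal(size=64)            # global features (illustrative dim)
f_gcn = rng.normal(size=32)            # context features (illustrative dim)
W = rng.normal(size=(10, 96))          # 10 scene classes x fused dim
print(fuse_and_classify(f_cnn, f_gcn, W))
```

In the trained network the fusion weights and the FC layer are learned jointly, so the relative contribution of each stream is set by the data rather than fixed in advance.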
Keywords: high-resolution image; remote sensing scene classification; graph neural network; convolutional neural network; feature fusion
Abstract: The application of deep learning methods in hyperspectral research has developed continuously, and hyperspectral image classification models based on deep learning achieve high classification accuracy compared with other approaches. Current classification models mostly use hyperspectral spatial-spectral features but lack diagnostic features and prior information, which makes it difficult to extract spectral-spatial features collaboratively and to implement further refined classification; the intraclass classification results are not refined. To solve these issues, we propose a Symbiotic Neural Network (SNN) for hyperspectral refined classification that takes a multilabeled data set as input. The SNN can fuse spectral diagnostic features with spatial-spectral features to retrieve relative water content and implement refined classification simultaneously. First, we propose a new spectral index, the Red Edge Slope (RES), to characterize relative water content, and we use the RES to label the relative water content of all objects in the data sets via an adaptive grading algorithm; we then build a multilabeled data set with the original object labels. Second, we establish the SNN architecture and a dimensionality-varied feature extraction module to extract the fused information of space, spectra, and relative water content in hyperspectral data, which improves the cooperative expression capability for features and the discrimination capability for the water content of various ground objects. It also reduces the structural complexity and amount of computation of the deep model. Moreover, the dimensionality-varied module can extract more accurate and abstract features at a high level and improves the extensibility needed to build a deeper network.
Following these steps, we implement hyperspectral image refined classification guided by relative water content retrieval using the SNN, completing interclass and intraclass classification simultaneously. With relative water content retrieval incorporated into the classification, the interclass and intraclass distances can be expanded further than with traditional classification methods. Experiments were conducted on one hyperspectral data set collected in the laboratory and four open hyperspectral data sets, namely, Lopex, Indian Pines, Pavia University, and Salinas, to validate the effectiveness of RES and the superiority of the SNN model. The experimental results demonstrate that the proposed RES index effectively represents the relative water content of objects in the data and that the classification accuracy and water-content discrimination capability of SNN are evidently improved under the guidance of relative water content retrieval. In addition, we compared SNN with other state-of-the-art methods, and SNN obtained higher classification accuracy. Therefore, the proposed SNN model, which implements refined classification guided by relative water content retrieval, can enhance the feature extraction capability for hyperspectral image classification and effectively improve classification performance.
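The index-then-grade labeling step can be sketched as follows. The exact definition of the paper's RES index is not given here, so this sketch computes a generic red-edge slope over an assumed 690-740 nm interval and grades it by quantiles; the band limits, grading rule, and toy spectra are all illustrative assumptions:

```python
import numpy as np

def red_edge_slope(reflectance, wavelengths, lo=690.0, hi=740.0):
    """Illustrative red-edge slope index: the linear slope of reflectance
    over the red-edge interval [lo, hi] nm. The band limits and the exact
    form of the paper's RES index are assumptions here."""
    mask = (wavelengths >= lo) & (wavelengths <= hi)
    slope, _intercept = np.polyfit(wavelengths[mask], reflectance[mask], 1)
    return slope

def grade_by_quantiles(values, n_grades=4):
    """Adaptive grading: split index values at data-driven quantiles so
    each sample gets a relative water content grade in 0..n_grades-1."""
    edges = np.quantile(values, np.linspace(0.0, 1.0, n_grades + 1)[1:-1])
    return np.digitize(values, edges)

# Toy spectra on a 400-900 nm grid: a flat "dry" spectrum and a
# vegetation-like spectrum with a sigmoid red edge near 715 nm.
wl = np.arange(400.0, 901.0, 10.0)
flat = np.full_like(wl, 0.3)
veg = 0.05 + 0.5 / (1.0 + np.exp(-(wl - 715.0) / 10.0))
slopes = np.array([red_edge_slope(flat, wl), red_edge_slope(veg, wl),
                   0.001, 0.004])          # last two are made-up samples
grades = grade_by_quantiles(slopes)        # water-content labels for the set
```

Grading by quantiles rather than fixed thresholds keeps the label distribution balanced across data sets with very different water-content ranges, which is one plausible reading of "adaptive" here.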
Abstract: As the most important quantitative remote sensing products, surface reflectance products are the basic data source for many parametric remote sensing products and are widely used in typical applications such as forestry, agriculture, water resources, the ecological environment, and the urban environment. However, for meter-level high-resolution remote sensing images, reflectance products are not yet available either domestically or internationally. Most current domestic satellite products use high-resolution multispectral data in the four bands of blue, green, red, and near infrared as the main data source, and accurate atmospheric correction is difficult to achieve, mainly because of the lack of short-wave infrared data. A set of spatiotemporally continuous meter-level high-resolution reflectance products should therefore be provided for researchers. In this study, GF-2 Level-1A standard products were used as input. The processing steps included panchromatic and multispectral fusion, geometric precision correction and mosaicking, and atmospheric correction. A set of annual surface reflectance images with a spatial resolution of 0.8 m covering the Beijing plain area from 2015 to 2019 was generated. First, a geometric deviation at the subpixel level exists among the four multispectral bands of GF-2 Level-1A products. The Pixel Knife software was used to register each band of the GF-2 images accurately, and the images were then fused with the panchromatic images. Second, with Sentinel-2 images (10 m resolution) as references and SRTM DEM data (30 m resolution) as a supplement, we used a regional network adjustment method to achieve geometric precision correction; the geometric accuracy is within 20 m. This method ensures both the absolute geometric positioning accuracy and the relative geometric accuracy among images. Third, we used a relative radiometric normalization method to complete atmospheric correction.
Taking Sentinel-2 reflectance images (10 m resolution) as references, we automatically searched for pseudo-invariant points between the reference images and the images to be corrected and built a regression equation band by band. A total of 184 scenes of surface reflectance images, with a total data volume of 1.63 TB, were acquired. The data set is issued annually and includes the coverage and distribution vectors of the annual products. In the current data set, for mountain-shadow areas and relatively clean water bodies, such as the Miyun Reservoir, the retrieved reflectance is often zero or negative. Therefore, this data set is suitable for underlying-surface applications except in mountainous areas and relatively clean water bodies. The processing chain prevents the shading phenomenon after the fusion of panchromatic and multispectral images and ensures that the multiscene geometric precision correction images have good geometric consistency at the joints. The validation results show that the atmospheric correction performs well for water, roads, and vegetation canopies. Provided that the reference image is ready, the processing method proposed in this article can process high-resolution images and output surface reflectance products rapidly, massively, and automatically. Currently, a 70% success rate is achieved for atmospheric correction, which should be improved further.
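The band-by-band pseudo-invariant-point regression described above amounts to fitting a per-band linear gain and offset against the reference reflectance. A minimal sketch, with made-up coefficients and synthetic invariant points (the real workflow would supply matched pixel pairs from GF-2 and Sentinel-2):

```python
import numpy as np

def pif_regression(target_values, reference_refl):
    """Band-by-band linear regression between pseudo-invariant pixels of
    the image to be corrected and the matching pixels of the reference
    reflectance image. Returns (gain, offset) so that
    reflectance ~ gain * value + offset."""
    gain, offset = np.polyfit(target_values, reference_refl, 1)
    return gain, offset

def apply_correction(band, gain, offset):
    """Apply the per-band regression to a full image band."""
    return gain * band + offset

# Synthetic invariant points: reference reflectance is a linear function
# of the raw values plus noise (all coefficients are illustrative).
rng = np.random.default_rng(1)
dn = rng.uniform(100, 1000, 50)
refl = 0.0004 * dn + 0.02 + rng.normal(0, 0.001, 50)
g, o = pif_regression(dn, refl)
corrected = apply_correction(dn, g, o)
```

Because the regression is fit separately for each of the four bands, any band-dependent atmospheric and calibration effects are absorbed into that band's gain and offset, which is why the approach works without short-wave infrared data.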
Keywords: 0.8 m surface reflectance data; GF-2; high-resolution image; atmospheric correction; plain area of Beijing
Abstract: The significant impact of NO2 on the global atmospheric environment and human health necessitates its accurate monitoring. On the one hand, monitoring enables the study and analysis of NO2 generation and extinction laws, distribution characteristics, and diffusion and transmission characteristics; on the other hand, it provides a decision-making basis for formulating pollutant discharge policies and pollution control programs. Although the number of ground-based air quality monitoring stations has been increasing, providing abundant NO2 ground observation data, large-scale monitoring of NO2 emissions requires the development of other monitoring methods. Satellite instruments covering the ultraviolet and visible spectrum have been widely used to detect atmospheric NO2 column concentrations, with the advantage of wide-range observation. To further strengthen domestic air quality monitoring and improve air quality in China, the Environmental Trace Gas Monitoring Instrument (EMI) onboard the Chinese GaoFen-5 (GF-5) satellite was launched on May 9, 2018. It is a nadir-viewing, wide-field hyperspectral spectrometer that measures the earth's backscattered radiation in the ultraviolet and visible spectrum and is designed for atmospheric trace gas detection. Based on the measured spectra of the EMI VIS1 channel, the tropospheric NO2 Vertical Column Density (VCD) was retrieved by the Differential Optical Absorption Spectroscopy (DOAS) method, which consists of three key steps, namely, spectral fitting, Stratosphere-Troposphere Separation (STS), and tropospheric Air Mass Factor (AMF) calculation. After spectral fitting, a stripe correction scheme was developed for the stripe artifacts that appear in the initially fitted NO2 Slant Column Density (SCD). The state-of-the-art STREAM algorithm was used to estimate the stratospheric NO2 concentration, and the TM5 NO2 profile with higher spatial resolution was used in the tropospheric AMF calculation.
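The spectral-fitting step of DOAS can be sketched as a linear least-squares problem: the measured optical depth ln(I0/I) is modeled as absorber cross sections scaled by their slant columns plus a low-order polynomial for broadband extinction. The cross-section shape, fitting window, and column value below are synthetic, purely for illustration:

```python
import numpy as np

def doas_fit(wavelength, intensity, reference, cross_sections, poly_order=3):
    """Minimal DOAS spectral fit: model the optical depth ln(I0/I) as a
    linear combination of absorber cross sections plus a low-order
    polynomial for broadband (Rayleigh/Mie) extinction, solved by linear
    least squares. Returns one slant column density per absorber."""
    tau = np.log(reference / intensity)          # measured optical depth
    w = (wavelength - wavelength.mean()) / np.ptp(wavelength)
    A = np.column_stack(list(cross_sections) + [np.vander(w, poly_order + 1)])
    norms = np.linalg.norm(A, axis=0)            # column scaling for conditioning
    coeffs, *_ = np.linalg.lstsq(A / norms, tau, rcond=None)
    coeffs = coeffs / norms
    return coeffs[: len(cross_sections)]

# Synthetic single-absorber example in a VIS fitting window.
wl = np.linspace(405.0, 465.0, 200)
sigma = 1e-19 * (1.0 + np.sin(wl / 2.0))         # made-up differential structure
scd_true = 5e15                                   # molecules / cm^2
i0 = 1.0 + 0.2 * (wl - 435.0) / 60.0              # smooth reference spectrum
i = i0 * np.exp(-scd_true * sigma)                # Beer-Lambert attenuation
scd_fit = doas_fit(wl, i, i0, [sigma])[0]
```

The polynomial absorbs smooth scattering structure, so only the high-frequency differential absorption pins down the slant column; in the full retrieval this SCD is then converted to a tropospheric VCD via stratosphere-troposphere separation and division by the tropospheric AMF.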
The tropospheric NO2 VCD retrieval results from EMI are presented and cross-verified against NO2 products from similar international instruments, i.e., OMI and TROPOMI. On the global scale, EMI reflects the distribution of typical urban NO2 pollution sources. On the regional scale, the daily spatial distribution correlation coefficients between EMI and TROPOMI in different regions are greater than 0.9. On the monthly time scale, EMI and OMI (TROPOMI) show consistent spatial distributions over the four urban agglomerations of China, with an average spatial correlation coefficient of 0.8 (0.87). The regional mean bias between EMI and OMI (TROPOMI) is within 11.3% (9.5%). A time series analysis of the Pearl River Delta region shows that EMI is highly consistent (r=0.89) with TROPOMI. Ground-based MAX-DOAS observations are also used for validation: the EMI retrievals show a high correlation coefficient (0.96) with an underestimation of approximately 35%. This study demonstrates EMI's ability in global NO2 monitoring. In the future, instruments similar to EMI will be carried on the GF-5 (02) satellite and the Atmospheric Environmental Monitoring Satellite (AEMS), continuously contributing to China's trace gas detection. Therefore, this study can provide a reference for the design of the next similar instruments and the development of corresponding NO2 retrieval algorithms in China.
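The cross-verification above rests on two simple statistics, a spatial/temporal Pearson correlation and a regional mean relative bias. A minimal sketch with made-up daily column values (not EMI data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two column series."""
    return float(np.corrcoef(x, y)[0, 1])

def relative_mean_bias(test, reference):
    """Regional mean relative bias (%) of the instrument under test
    against the reference product."""
    return float((np.mean(test) - np.mean(reference))
                 / np.mean(reference) * 100.0)

# Toy daily tropospheric NO2 VCDs (illustrative, units of 1e15 molec/cm^2).
ref = np.array([3.1, 4.0, 2.7, 5.2, 4.4, 3.8])
test = ref * 0.95 + 0.1   # a slightly low-biased instrument
r = pearson_r(test, ref)
bias = relative_mean_bias(test, ref)
```

Note that a perfectly linear relationship gives r = 1 even when a multiplicative bias is present, which is why correlation and mean bias are reported together in the comparisons above.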
Abstract: Sulfur dioxide (SO2) from volcanic eruptions and its long-distance transport have a significant impact on global climate change and aviation safety. Satellite remote sensing provides unprecedented advantages for continuous, spatially extensive, short-revisit monitoring of atmospheric SO2. The GF-5 Environmental trace gas Monitoring Instrument (EMI), with high spatial resolution, is China's first hyperspectral instrument measuring over the wavelength range from 240 nm to 710 nm, and it makes daily global observations of key atmospheric constituents, including ozone, nitrogen dioxide, and sulfur dioxide. In this paper, based on the atmospheric radiative transfer model SCIATRAN, the clear-sky albedo under typical atmospheric conditions is first simulated for ocean pixels with uniform surface types at middle and low latitudes to evaluate the accuracy of the EMI Top-of-Atmosphere (TOA) albedo. Second, based on S5P/TROPOMI L1 radiance data and the DOAS algorithm, after 477 nm O4 cloud screening, spectral calibration, and slant-to-vertical column conversion, total SO2 columns over volcanic eruption areas were retrieved and compared with the TROPOMI offline L2 SO2 products. Finally, GF-5/EMI SO2 columns were retrieved from GF-5 EMI UV-2 band observations with the DOAS algorithm and compared with the corresponding S5P/TROPOMI SO2 columns to evaluate the ability of GF-5/EMI to monitor global SO2 changes from volcanic activity. Results show that, in the 300—400 nm band range, the EMI spectra of the sampling points in the ocean area are lower than the SCIATRAN-simulated spectra, and the EMI and TROPOMI spectra show similar systematic biases.
SO2 columns obtained from TROPOMI L1 radiance data with the DOAS inversion algorithm in three band windows (315—327 nm, 325—335 nm, and 360—390 nm) are compared with the TROPOMI offline L2 SO2 products and show high consistency (correlation coefficients of 0.97—0.99 and relative deviations of 3%—9%). Therefore, GF-5/EMI can capture the daily distribution of SO2 from volcanic eruptions, and the accuracy of GF-5/EMI SO2 columns can meet the needs of global volcano monitoring applications.
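The three fitting windows compared above are typically used in a load-dependent scheme: retrievals start in the most sensitive window and switch to weaker absorption windows when the column is large enough to saturate the fit. A hedged sketch of such a selector; the thresholds in Dobson Units are assumptions for illustration, not values from this paper:

```python
def select_so2_window(scd_du):
    """Pick a DOAS fitting window from a first-guess SO2 slant column
    (in Dobson Units). Large volcanic columns saturate the strongest
    absorption band, so weaker windows are used instead. Thresholds
    here are illustrative assumptions."""
    if scd_du < 15.0:
        return (315.0, 327.0)   # strongest absorption: weak-to-moderate SO2
    if scd_du < 250.0:
        return (325.0, 335.0)   # moderate volcanic columns
    return (360.0, 390.0)       # extreme columns from large eruptions

# Example: a background pixel, a volcanic plume, and an extreme eruption.
windows = [select_so2_window(s) for s in (2.0, 80.0, 400.0)]
```

This window switching is why the comparison above reports consistency separately for all three windows: each covers a different part of the SO2 dynamic range.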