Li Yifu, Sun Bin, Gao Zhihai, Wang Bengyu, Yan Ziyu, Su Wensen, Gao Ting, Yue Wei
Corrected Proof
DOI:10.11834/jrs.20232526
Abstract: The successful launch and operation of the 5 m Optical Satellite 02 (ZY-1-02E) have provided a wealth of remote sensing data for the main business operations of the forestry and grass industry, offering reliable information guarantees for forestry management and ecological services. This study aims to test the application capability of ZY-1-02E multispectral data in farmland shelterbelt monitoring, a primary forestry business. Zhangbei County, Hebei Province, serves as the study area. Based on the ZY-1-02E multispectral data, spectral, vegetation index, and texture feature sets were constructed, and four classification information extraction schemes were designed: (1) spectral features; (2) spectral features + vegetation indices; (3) spectral features + texture features; and (4) spectral features + vegetation indices + texture features. The random forest algorithm was employed for feature selection, classification information extraction, and validation to evaluate the application potential and effectiveness of ZY-1-02E multispectral data for farmland shelterbelt information extraction. The results show that (1) the ZY-1-02E multispectral data allow accurate extraction of farmland shelterbelt information in the study area and reflect the actual distribution of farmland shelterbelts to a high degree. The overall accuracy and Kappa coefficient are 0.8371 and 0.7760 for Scheme 1, 0.8440 and 0.7855 for Scheme 2, and 0.8839 and 0.8403 for Scheme 3; Scheme 4 has the highest accuracy, with an overall accuracy of 0.8908 and a Kappa coefficient of 0.8499. (2) The effective use of multiple feature variables can significantly improve the accuracy of farmland shelterbelt information extraction.
Regarding the contribution of different features to farmland shelterbelt information extraction, the order of importance is spectral features > texture features > vegetation indices. (3) The ZY-1-02E multispectral data yield high accuracy and reliable results for farmland shelterbelt information extraction, can better meet the needs of shelterbelt monitoring operations, and have considerable application potential in thematic forest survey and monitoring operations. In conclusion, this study demonstrates the potential and effectiveness of ZY-1-02E multispectral data for extracting farmland shelterbelt information. Using multiple feature variables together with the random forest algorithm enables accurate extraction and validation of farmland shelterbelt information, providing valuable insights for future forest monitoring and management. As more data become available and the application capabilities of ZY-1-02E are further explored, future work can consider integrating multispectral data from different periods and the linear features of farmland shelterbelts to enhance the accuracy of information extraction, ultimately achieving more efficient and precise extraction of farmland shelterbelt information.
Keywords: forestry and grass industry; farmland shelterbelt; feature extraction; application testing; ZY-1-02E satellite
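The four-scheme comparison described in the abstract can be sketched with a random forest classifier. Everything below (feature values, class labels, sample counts) is synthetic stand-in data for illustration, not the study's actual ZY-1-02E spectral, index, or texture features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
n = 1200
# Synthetic stand-ins: 4 spectral bands, 2 vegetation indices, 8 texture measures
labels = rng.integers(0, 3, n)  # e.g. shelterbelt / cropland / other
spectral = rng.normal(labels[:, None].astype(float), 1.0, (n, 4))
veg_index = rng.normal(labels[:, None].astype(float), 1.5, (n, 2))
texture = rng.normal(labels[:, None].astype(float), 1.2, (n, 8))

schemes = {
    "1: spectral": spectral,
    "2: spectral+index": np.hstack([spectral, veg_index]),
    "3: spectral+texture": np.hstack([spectral, texture]),
    "4: spectral+index+texture": np.hstack([spectral, veg_index, texture]),
}

results = {}
for name, X in schemes.items():
    Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3, random_state=0)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
    pred = rf.predict(Xte)
    # Overall accuracy and Kappa, the two metrics reported in the abstract
    results[name] = (accuracy_score(yte, pred), cohen_kappa_score(yte, pred))

for name, (oa, kappa) in results.items():
    print(f"{name}: OA={oa:.3f}, Kappa={kappa:.3f}")
```

In the study's setting, the feature matrices would come from the actual imagery and accuracy would be assessed against field-verified samples.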
Abstract: Objective: As an important technique in image interpretation tasks, hyperspectral image (HSI) classification has been widely used in many fields such as remote sensing observation and intelligent medical services. HSI classification can be seen as consisting of two parts: feature extraction and classifier-based label prediction. Although deep learning can obtain classification results in one step through an end-to-end network structure from data input to classification output, such networks are in fact a direct cascade of feature extraction based on deep networks (such as the deep autoencoder and convolutional neural network) and classifiers (such as softmax and logistic regression). Most current classification approaches do not consider the influence of the classifier on feature extraction, which may cause incompatibility between the extracted features and the classifier used. This incompatibility is reflected in a poor match between the classifier model and its input feature data, which leads to poor prediction results. Method: To remedy this deficiency, this paper presents a novel kind of HSI feature extraction method with an embedded classifier mechanism, which ensures compatibility between feature extraction and the classifier used, so that the features can be handled more accurately by the classifier and the classification results can thus be improved. Two specific forms are given in this paper. (1) The SRS feature extraction model compatible with the support vector machine (SVM) classifier is built, which embeds the SVM property into sparse representation (SR). (2) The DAES feature extraction model compatible with the softmax classifier is constructed, which integrates the softmax function into the deep autoencoder (DAE) network.
We also provide the optimization strategy for obtaining the optimal solutions of the SRS and DAES models. Result: The proposed SRS and DAES models are experimentally evaluated on remote sensing HSI data and medical HSI data, and the experiments consist of parameter analysis, algorithm comparison, an ablation study, and convergence analysis. The parameter analysis validates that the values of important parameters have an obvious impact on the performance of our methods, and the best values of these parameters are successfully selected. The algorithm comparison shows that the proposed methods achieve better classification performance than some state-of-the-art approaches, with average improvements of 5.03%, 5.13%, and 7.30% in the OA (overall accuracy), AA (average accuracy), and Kappa indexes for the HSI classification task. The ablation study demonstrates that the compatibility between feature extraction and the embedded classifiers is effective for improving HSI classification performance. The convergence analysis indicates that the designed optimization strategy meets application requirements for reliability and speed. Conclusion: From the above discussion, it can be concluded that the proposed SRS and DAES methods realize good compatibility between feature extraction and classifiers, so that the extracted features can be better handled by the classifiers and more competitive classification performance can be achieved.
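A minimal numerical sketch of the DAES idea, i.e. training feature extraction jointly with the classifier it must serve, can be written as a linear autoencoder whose encoder also feeds a softmax head, so the encoder gradient combines the reconstruction and classification objectives. The network here is single-layer and linear and the data synthetic, unlike the paper's deep nonlinear DAES:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, c = 300, 10, 5, 3
y = rng.integers(0, c, n)
X = rng.normal(y[:, None].astype(float), 1.0, (n, d))  # 3 separable classes
Y = np.eye(c)[y]                                       # one-hot labels

W = 0.1 * rng.normal(size=(d, k))  # encoder (feature extractor)
U = 0.1 * rng.normal(size=(k, d))  # decoder
V = 0.1 * rng.normal(size=(k, c))  # softmax classification head

def forward(X):
    Z = X @ W
    logits = Z @ V
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    return Z, Z @ U, P

lr, losses = 0.1, []
for _ in range(500):
    Z, Xhat, P = forward(X)
    rec = np.mean((X - Xhat) ** 2)                          # reconstruction loss
    ce = -np.mean(np.log(P[np.arange(n), y] + 1e-12))       # cross-entropy loss
    losses.append(rec + ce)
    dXhat = 2 * (Xhat - X) / (n * d)   # gradient of mean squared error
    dlogits = (P - Y) / n              # gradient of cross-entropy wrt logits
    dZ = dXhat @ U.T + dlogits @ V.T   # features feel BOTH objectives
    U -= lr * (Z.T @ dXhat)
    V -= lr * (Z.T @ dlogits)
    W -= lr * (X.T @ dZ)

acc = np.mean(forward(X)[2].argmax(axis=1) == y)
print(f"loss {losses[0]:.3f} -> {losses[-1]:.3f}, train accuracy {acc:.3f}")
```

The design point is the `dZ` line: in a two-stage pipeline the encoder would only see `dXhat @ U.T`, whereas here the classifier term shapes the features as well, which is the compatibility the abstract argues for.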
Abstract: Unmanned platforms are rapidly being adopted in many different fields. However, improving their cognition and understanding of complex environments remains a challenging research problem. Machine maps are a new class of maps proposed to address this problem. Based on the conceptual model of the machine map, this paper adopts the research perspective of cognitive science and further proposes a theoretical model of the machine map's map construction that is cognitively plausible and consistent with the cognitive structure. This paper first discusses the theoretical roots of machine maps in cognitive science in terms of their origin, formation, and development. Second, we briefly review research on the structure and generation of memory models, mental image maps, cognitive architectures, and the environmental cognition of robotic systems, and we discuss the cognitive structure of the machine map and its supporting role in the map construction model. Third, we propose the design principles of the machine map's map construction model: organizing environment information using distributed representations, structuring the machine map as a multi-store memory system, and modeling map generation with reference to brain cognitive activities. The task objectives, content classification, detailed logical structure, and map generation process of machine map construction are then presented.
In particular, the perceptual map performs preliminary processing of information acquired by sensors to obtain the features, location, geometry, and semantics of entities in the surrounding environment; the working map is functionally similar to working memory in the human brain and contains visual information, spatial information, situational information, and specialized maps constructed to accomplish specific tasks; the long-term map uses the perceptual map and working map as information sources, and the fragmented information in them is associated, managed, and processed more extensively to form an environment model with a global reference. Finally, the primary activities of machine map generation (e.g., understanding, attention, inference, learning, and action) and its processes (e.g., implicit map generation and explicit map generation) are discussed based on the logical structure. Implicit map generation refers to the process in which the content and knowledge in the long-term map are continuously enriched and accumulated through the continuous evolution and support of the perceptual map and the working map during the operation of the unmanned platform. This process comprises three activities: shallow understanding, deep understanding, and implicit learning. Explicit map generation refers to the process in which the working map, supported by itself, the perceptual map, and the long-term map, forms a specialized map for a given task to meet specific task requirements and support the generation of spatial behavior. This process comprises six activities: shallow understanding, inference, attention, deep understanding, episodic learning, and action.
The cognitive structure and map construction model, as an interpretation of the machine map cognitive computing system, can serve as a basic framework for researchers who are interested in machine maps, allowing them to carry out collaborative research at a more abstract level, and can also provide references for the integration, evaluation, and application of related technologies and data. This research also illuminates new requirements and goals for constructing digital twins and virtual geographic environments.
Keywords: machine map; cognitive logic; map construction model; unmanned platform
Wang Mengyu, Zhao Feng, Pang Yong, Meng Ran, Jia Wen, Yue Chao
Corrected Proof
DOI:10.11834/jrs.20233103
Abstract: Cold-temperate forests, among the most extensive terrestrial ecosystems, cover vast areas around the globe and hold important ecological and social value. Accurate mapping of forest type and tree species cover fraction in these forests across space and time is crucial for quantifying ecosystem services and formulating effective forest management policies to ensure their sustainable conservation. However, despite the rapid development of remote sensing technologies, few studies have explored the feasibility of inverting forest type and tree species cover fraction from medium-resolution multispectral satellite data, such as Landsat, in China's cold-temperate forests. This limitation is primarily attributed to the scarcity of reference data and the restricted spectral information available in multispectral images. Moreover, the quantitative impact of the temporal frequency of data acquisitions (e.g., single-date versus multi-date) on mapping forest type and tree species cover fraction remains largely unexplored. The timing and frequency of satellite data acquisition can significantly influence the detection and characterization of dynamic changes in forests, which in turn affect the accuracy of mapping forest attributes. To address these gaps, our study maps the forest type and tree species cover fraction in Mengjiagang forest, Heilongjiang Province, by employing synthetically mixed training data and a random forest regression model. We further extend our analysis to three decades (1986 to 2020) of Landsat data, mapping the cover fractions of both broadleaf and needleleaf forests in Mengjiagang forest using the optimal broadleaf and needleleaf random forest regression models.
The results of our study reveal the following key findings: (1) For forest type cover fraction inversion, the random forest regression model based on the growing-season median features (including spectral bands, NDTI, and TCT indices) was the optimal model (R² = 0.76 for broadleaf and R² = 0.71 for needleleaf); (2) For tree species cover fraction inversion, the random forest regression model based on multi-date spectral features (including spectral bands, NDTI, and TCT indices of both the growing season and the leaf-off season) was the optimal model (R² = 0.40 for larch, R² = 0.23 for Korean pine, and R² = 0.61 for Mongolian pine); (3) Increasing the temporal frequency of data acquisition can enhance tree species cover fraction inversion accuracy (R² increases of 0.04 for larch, 0.07 for Korean pine, and 0.27 for Mongolian pine), while its impact on improving forest type cover fraction inversion accuracy was limited. By effectively combining the advantages of synthetic training data and random forest regression, we successfully mapped the forest type and tree species cover fraction of Mengjiagang forest. Moreover, our study provides a comprehensive analysis that accurately quantifies the influence of temporal data acquisition frequency on mapping forest type and tree species cover fraction. It offers valuable insights for future mapping of forest type and tree species cover fraction across space and time, particularly for regions with similar species composition, and will contribute significantly to the understanding and management of cold-temperate forests, supporting their conservation and sustainable use.
Keywords: forest type coverage; tree species coverage; synthetic training data; long time series
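The synthetically mixed training approach can be sketched as follows: linear mixtures of hypothetical endmember spectra with known fractions train a random forest regressor, which is then scored by R² on held-out mixtures. The endmember reflectances below are invented for illustration, not measured Landsat signatures:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
# Hypothetical endmember spectra (6 Landsat-like bands) for three cover classes
endmembers = np.array([
    [0.04, 0.07, 0.05, 0.45, 0.25, 0.12],  # broadleaf
    [0.03, 0.05, 0.04, 0.30, 0.18, 0.09],  # needleleaf
    [0.10, 0.14, 0.18, 0.25, 0.30, 0.28],  # soil/background
])

def synth_mixtures(n):
    frac = rng.dirichlet(np.ones(3), n)  # random cover fractions summing to 1
    spectra = frac @ endmembers + rng.normal(0, 0.01, (n, 6))  # linear mixing + noise
    return spectra, frac

X_train, f_train = synth_mixtures(2000)
X_test, f_test = synth_mixtures(500)

# One regression model per class fraction, as in fraction-mapping workflows
r2 = {}
for i, name in enumerate(["broadleaf", "needleleaf", "background"]):
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(X_train, f_train[:, i])
    r2[name] = r2_score(f_test[:, i], rf.predict(X_test))
print(r2)
```

In the actual study, the mixtures would be built from image-derived pure-class spectra, and the trained model would be applied per pixel to the Landsat time series.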
Abstract: Land use/cover change (LUCC) is a direct driver of the carbon balance of terrestrial ecosystems, and its impact on global warming is second only to fossil fuel and industrial emissions. Forest ecosystems constitute the largest carbon pool in terrestrial ecosystems and have an important role to play in addressing global climate change and achieving carbon neutrality targets. However, limited LUCC data cause the impact of LUCC on carbon emissions to be greatly underestimated, and the lack of spatiotemporal LUCC data under future climate scenarios leaves many uncertainties in revealing the response of the forest carbon cycle to LUCC. How to simulate LUCC and analyze its impact on the carbon cycle of forest ecosystems is a hot research topic both at home and abroad. This paper systematically summarizes spatiotemporal LUCC simulation methods, forest carbon balance estimation methods, and progress in research on the impact of LUCC on the forest carbon cycle, and it analyzes the advantages, applicability, and problems of different models and methods. The literature review points out that using remote sensing data as a basis to simulate LUCC and to drive ecosystem process models toward accurate spatiotemporal simulation of the forest ecosystem carbon cycle is one of the key development trends in carbon cycle research.
Abstract: The abundance of geospatial big data has created unprecedented opportunities for the development of virtual geographic environments (VGEs); however, even as data acquisition becomes easier, VGEs generally suffer from being "visual" but not "intelligent" and have yet to meet the demand for real-world geographic environments that are computable, manageable, and decision-ready. This paper first summarizes the challenges that geospatial big data pose to virtual geographic environments and proposes that improving the semantic interoperability of geospatial big data is the key to solving these problems. This involves using a machine-readable and human-understandable language to establish semantic relationships between data in a structured manner within virtual geographic environments. Therefore, the concept of Semantic Virtual Geographic Environments (SVGEs) is proposed. SVGEs use Semantic Web technologies such as ontologies and knowledge graphs to unify the description, management, and analysis of the fragmented, unstructured, and non-explicitly related raw data generated by geographic objects and geographic processes in real-world environments under more standardized semantics, thus building a highly integrated digital geographic environment with data connectivity and interoperability. SVGEs are a type of virtual geographic environment that is semantically enhanced in the context of geospatial big data, with characteristics such as being knowledge-driven, semantically collaborative, and structurally unified. As the culmination of knowledge engineering in the era of big data, the knowledge graph, with its powerful semantic expression, storage, and reasoning abilities, can provide solutions for knowledge organization and intelligent application of data in virtual geographic environments.
Therefore, a semantic virtual geographic environment can use the knowledge graph as a formal expression of the real geographic environment and form a mapping of the real geographic environment in virtual space by integrating real-time perception data and the data exchanged interactively between the virtual and the real. Finally, this article takes wetland monitoring as an example and combines a knowledge graph with a virtual geographic environment to construct a semantic virtual geographic environment for wetland monitoring. Based on the characteristics of wetland monitoring data, conceptual modeling and formal expression of the monitoring data were carried out, and a wetland monitoring ontology was constructed. The ontology achieves a comprehensive, clear, and unambiguous description of monitoring data, laying the foundation for interoperability of monitoring data in virtual geographic environments. On the basis of the wetland monitoring ontology, large-scale multi-source heterogeneous monitoring data are semantically mapped, achieving both logical and physical integration of isolated and scattered monitoring data in a virtual geographic environment. The rich semantic relationships between monitoring data can then be utilized to achieve knowledge-based search of monitoring data. In addition, the semantic virtual geographic environment has strong inference ability and can generate new knowledge through inference over the natively associated monitoring data, thus supporting comprehensive information management and decision-making for wetlands. This study is a preliminary attempt to explore the theory, methods, and applications of semantic virtual geographic environments; further exploration is needed to fully utilize data in virtual geographic environments.
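The kind of ontology-backed reasoning described here, a subclass hierarchy over monitoring records plus inference that yields new facts for knowledge-based search, can be sketched with a toy in-memory triple store. The class and instance names are hypothetical, and a real SVGE would use RDF/OWL tooling and SPARQL rather than plain tuples:

```python
# Minimal triple store with two inference rules: subClassOf transitivity
# and type propagation along subClassOf (a simplified rdfs:subClassOf semantics).
triples = {
    ("WaterQualityRecord", "subClassOf", "MonitoringRecord"),
    ("BirdSurveyRecord",   "subClassOf", "MonitoringRecord"),
    ("MonitoringRecord",   "subClassOf", "WetlandData"),
    ("obs42", "type", "WaterQualityRecord"),
    ("obs42", "observedAt", "SiteA"),
}

def infer(triples):
    """Forward-chain the two rules to a fixpoint, returning all derived facts."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, p1, b) in facts:
            for (c, p2, d) in facts:
                if b == c and p2 == "subClassOf":
                    if p1 == "subClassOf":
                        new.add((a, "subClassOf", d))  # transitivity
                    elif p1 == "type":
                        new.add((a, "type", d))        # type propagation
        if not new <= facts:
            facts |= new
            changed = True
    return facts

facts = infer(triples)
# Knowledge-based search: all entities that are (directly or indirectly) WetlandData
wetland_data = sorted(s for (s, p, o) in facts if p == "type" and o == "WetlandData")
print(wetland_data)
```

Here a water-quality observation typed only as `WaterQualityRecord` is found by a search for `WetlandData`, which is the "new knowledge through inference" the abstract describes.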
TAO Enzhe, ZHOU Xuhua, PENG Hailong, LI Duoduo, XU Kexin, LI Kai
Corrected Proof
DOI:10.11834/jrs.20232380
Abstract: Objective: The Haiyang-2D satellite (HY-2D) is the fourth marine dynamic environment satellite in China, belonging to China's marine remote sensing satellite series, with high-precision orbit measurement and orbit determination capabilities as well as all-weather, all-day, global detection capabilities. The main mission of the satellite is to monitor and investigate the marine environment and obtain a variety of marine dynamic environmental parameters, including the sea surface wind field, wave height, and sea surface height; to directly provide measured data for the early warning and prediction of catastrophic sea conditions; and to provide support services for marine disaster prevention and reduction, the protection of marine rights and interests, marine resource development, marine environmental protection, marine scientific research, and national defense construction. After the launch of HY-2D, the HY-2B, HY-2C, and HY-2D satellites form a three-satellite network for joint observation. A precise orbit for HY-2D is a prerequisite for mission accomplishment. The satellite is equipped with a DORIS receiver for precise orbit determination (POD), and its attitude alternates between a fixed mode and the nominal yaw-steering mode. Method: To study the orbit determination accuracy of HY-2D using DORIS, we built a satellite attitude model, selected DORIS phase measurement data from June 5 to June 13, 2021, and adopted the epoch-difference method and dynamic orbit determination for precise orbit determination. Result: The orbital accuracy was then evaluated by post-fit residuals, orbit overlaps, comparison with CNES orbits, and SLR validation, and the differences between the orbit determination results using the attitude model and the measured attitude data were discussed. The results show that: (1) The mean of the post-fit residuals is 0.355 mm/s, and the mean 3D RMS of orbit overlaps is within 2 cm.
(2) Compared with the precise orbit from CNES, the RMS differences when using attitude data in the R, T, and N directions are 1.02 cm, 2.92 cm, and 3.11 cm, respectively, and those when using the attitude model are 0.97 cm, 2.77 cm, and 3.15 cm, respectively. The similarity between the two results indicates that the constructed satellite attitude model agrees well with the measured attitude data. (3) The mean RMS of SLR residuals for the CNES orbit and our orbit are 2.38 cm and 2.24 cm, respectively, indicating that the two orbits have similar accuracy. Conclusion: Our research shows that the on-board DORIS receiver has good and stable performance; using the received data, we can provide a centimeter-level HY-2D satellite orbit, which can ensure the stable operation of satellite altimetry.
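The orbit comparison statistics used above (per-axis RMS in the radial, along-track, and cross-track directions, plus a 3D RMS) can be sketched as follows. The difference series is simulated with spreads loosely matching the reported centimeter-level values, not actual HY-2D minus CNES residuals:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8640  # e.g. one day of positions at 10 s sampling
# Hypothetical orbit differences (metres) in radial/along-track/cross-track (R, T, N),
# standing in for "our orbit minus reference orbit" after rotation into the RTN frame
diff = rng.normal(0.0, [0.010, 0.028, 0.031], (n, 3))

rms = np.sqrt(np.mean(diff**2, axis=0))  # per-axis RMS
rms_3d = np.sqrt(np.sum(rms**2))         # 3D RMS combines the three axes
for axis, v in zip("RTN", rms):
    print(f"{axis}: {v * 100:.2f} cm")
print(f"3D RMS: {rms_3d * 100:.2f} cm")
```

In a real POD comparison the two ephemerides would first be interpolated to common epochs and the Cartesian differences rotated into the RTN frame using the reference orbit's position and velocity.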
Abstract: (Objective) Synthetic aperture radar (SAR) is a high-resolution imaging sensor that is able to work under nearly all weather and illumination conditions and plays an important role in Earth observation. For a single-band, single-polarization SAR image, however, there is only one complex scalar per pixel, so the information contained in such a single-channel SAR image is quite limited, which restricts its performance in various applications. Since terrain classification is one of the typical tasks in SAR image interpretation, this paper takes it as an example to demonstrate the problem and present our solution. (Method) To address the above problem, this paper proposes a representation of the spatial information in SAR images: the directional context covariance matrix (DCCM). The DCCM captures the variance of pixel intensity along several orientations inside a neighborhood in order to exploit context information. In this process, the target pixel is extended from a complex scalar to a group of matrices, so that its information content is increased. Besides, the matrix form also enables some advanced matrix algorithms to be applied to single-channel SAR images. On this basis, the DCCM texture feature is derived, which better represents the texture properties of SAR images and shows better discriminability for different land covers. The texture feature is then combined with two traditional classifiers as well as a convolutional neural network (CNN), respectively, and a SAR image classification scheme is established. (Result) To illustrate the performance of the proposed method, terrain classification experiments are carried out on the AIRSAR and UAVSAR datasets. Methods based on three commonly used texture features, the gray-level co-occurrence matrix (GLCM), Gabor filters, and the multilevel local pattern histogram (MLPH), are used for comparison.
With traditional classifiers, the overall classification accuracies are increased by 7% on both datasets. When combined with the CNN, the overall accuracies and kappa coefficients obtained with the DCCM texture feature are significantly better than those obtained with the original SAR data. The proposed feature also shows good efficiency and better robustness compared with other texture features. (Conclusion) The experimental results indicate that the DCCM is an effective representation suited to SAR images: it is efficient, robust, and easy to use. The proposed DCCM-based classification method can improve the classification performance of single-channel SAR images by increasing the pixel information content. Beyond that, the DCCM could be a promising method for many other SAR image interpretation tasks.
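As a rough illustration of the directional-context idea behind the DCCM (per-pixel intensity statistics along several orientations in a neighborhood), the sketch below computes directional variances only. The paper's actual DCCM builds covariance matrices from the context, which is not reproduced here:

```python
import numpy as np

def directional_context_variance(img, r=3):
    """Illustrative sketch: for each pixel, compute intensity variance along four
    directions (0, 45, 90, 135 degrees) within a (2r+1)-neighbourhood. This follows
    the directional-context idea described for the DCCM, not the paper's exact model."""
    h, w = img.shape
    pad = np.pad(img, r, mode="reflect")
    offsets = {
        0:   [(0, k) for k in range(-r, r + 1)],   # horizontal line of samples
        45:  [(k, k) for k in range(-r, r + 1)],   # diagonal
        90:  [(k, 0) for k in range(-r, r + 1)],   # vertical
        135: [(k, -k) for k in range(-r, r + 1)],  # anti-diagonal
    }
    feats = np.zeros((h, w, 4))
    for d, offs in enumerate(offsets.values()):
        stack = np.stack([pad[r + di : r + di + h, r + dj : r + dj + w]
                          for (di, dj) in offs])
        feats[:, :, d] = stack.var(axis=0)  # variance along this direction
    return feats

# Toy image with vertical stripes: variance is high across the stripes (0 degrees)
# and zero along them (90 degrees), so the feature is direction-sensitive.
img = np.tile(np.array([0.0, 1.0] * 8), (16, 1))
f = directional_context_variance(img, r=3)
```

The direction-dependence shown by the toy image is what gives such features their discriminability between land covers with oriented texture (e.g., plowed fields versus urban blocks).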
Abstract: In recent years, with the development of Global Navigation Satellite Systems (GNSS), a GNSS Multipath Reflectometry (GNSS-MR) technique based on the Signal-to-Noise Ratio (SNR) has been developed. This technology can obtain information about the reflecting surface using a GNSS receiver and has the advantages of abundant signal sources and a high sampling rate in snow depth inversion. However, many GNSS receivers do not record SNR observations. To give such receivers the ability to monitor snow depth, this paper proposes a multi-mode, multi-frequency GNSS-MR snow depth inversion fusion method based on the Signal Strength Indicator (SSI). The method also effectively addresses the two main problems of GNSS-MR snow depth inversion, namely low precision and low temporal resolution, mainly by virtue of a robust estimation strategy. The specific steps are as follows: first, using SSI and SNR data from GPS, GLONASS, Galileo, and BeiDou and applying the Lomb-Scargle periodogram (LSP) of the classical snow depth retrieval principle, snow depth retrievals are obtained for each frequency band of the four constellations. Then a specific time window is established, and within each window a set of state transition equations is built that accounts for dynamic changes of the snow surface and tropospheric delay. Finally, the snow depth time series is solved with a robust estimation model. In essence, this is an optimal estimation method for GNSS-MR that is theoretically suitable for different geographical environments. To demonstrate the feasibility and effectiveness of the method, a suitable station was selected for snow depth retrieval experiments: station SG27 in Alaska, United States. The results show that the multi-frequency SSI data of the four global satellite systems can all be used to retrieve snow depth.
Before the multi-mode, multi-frequency GNSS-MR snow depth inversion fusion, the SSI inversion results in each frequency band correlate well with the measured snow depth (except for the BeiDou bands, all correlation coefficients are greater than 0.92). Considering the standard deviation and root mean square error of the retrievals from the different satellite systems, GPS performs best, followed by GLONASS and then Galileo, although the results of these three systems are similar; BeiDou performs worst. Among the four satellite systems, the root mean square error of the best-performing frequency band is 6.34 cm. After the multi-mode, multi-frequency GNSS-MR snow depth inversion fusion, the root mean square error between the SSI inversion results and the measured snow depth series is 2.36 cm, and the correlation coefficient is 0.98. A multi-mode, multi-frequency GNSS-MR snow depth inversion based on SNR data was also carried out in the example; the SSI inversion results are consistent with the SNR inversion results, verifying the feasibility and effectiveness of the SSI-based multi-mode, multi-frequency GNSS-MR snow depth inversion fusion.
Keywords: GNSS-MR; multi-mode and multi-frequency; snow depth; robust estimation; signal strength
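The classical LSP retrieval step referred to above can be sketched with SciPy: the detrended SNR (or SSI) series oscillates in sin(elevation) with angular frequency 4πh/λ, so the periodogram peak gives the reflector height h, and snow depth follows as antenna height minus h. The satellite arc, antenna height, and noise level below are invented for illustration:

```python
import numpy as np
from scipy.signal import lombscargle

lam = 0.1903     # GPS L1 wavelength (m)
h_true = 1.5     # reflector height: antenna height above the snow surface (m)
antenna_h = 2.0  # hypothetical antenna height above bare ground (m)

elev = np.deg2rad(np.linspace(5, 25, 400))  # rising satellite arc
x = np.sin(elev)
# Detrended SNR oscillation: cos(4*pi*h/lam * sin(e) + phase) plus noise
snr = np.cos(4 * np.pi * h_true / lam * x + 0.7)
snr += 0.05 * np.random.default_rng(3).normal(size=x.size)
snr -= snr.mean()  # remove the mean before the periodogram

# Evaluate the LSP over candidate reflector heights: omega = 4*pi*h/lam
h_grid = np.linspace(0.5, 3.0, 1000)
omegas = 4 * np.pi * h_grid / lam
power = lombscargle(x, snr, omegas)
h_est = h_grid[np.argmax(power)]
snow_depth = antenna_h - h_est
print(f"reflector height: {h_est:.3f} m, snow depth: {snow_depth:.3f} m")
```

This covers only the single-arc retrieval; the paper's contribution is the subsequent fusion of many such retrievals across constellations and bands via windowed state equations and robust estimation.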
Abstract: Remote sensing time series record changes in forest composition, structure, and function driven by natural factors and human activities, as well as the differences among those changes. Integrating the spatio-temporal-spectral information of remote sensing time series therefore provides theoretical support for forest disturbance detection and attribution and can effectively improve the understanding of forest succession processes, development trends, and their driving and response mechanisms. This paper systematically reviews research progress on forest disturbance detection and attribution that integrates spatio-temporal-spectral information from remote sensing time series. The prerequisite for forest disturbance attribution is the detection of disturbance events, and detection accuracy directly affects the accuracy of subsequent attribution. Current forest disturbance detection methods and techniques are highlighted from multiple perspectives, including data (the choice of time series observation frequency), features (spectral feature selection and the fusion of spatial and temporal features), and algorithms (multi-algorithm integration and the detection of low-intensity forest disturbances). From the data perspective, change detection methods for dense and sparse time series are introduced according to the frequency of available observations in different regions. From the feature perspective, the spectral response characteristics of forest disturbances are summarized; a change detection strategy integrating multiple spectral features is introduced to address the problems of change detection based on single spectral features, and the fusion of temporal and spatial features for forest disturbance detection is summarized.
From the algorithmic perspective, to address issues such as differences among the results of different change detection algorithms and the fact that a single algorithm may not describe all conditions efficiently, two multi-algorithm integration strategies, parallel and serial, are presented. Based on an analysis of the reasons for the poor detection of low-intensity disturbances (e.g., selective logging, pests and diseases, and drought), progress in change detection oriented to mid- and low-intensity forest disturbances is described. The essence of forest disturbance attribution is a classification problem over multiple disturbance types, which identifies disturbance types by using the remote sensing features of forest disturbances caused by different driving factors as input to classification algorithms. We first summarize the attribution features used as input for forest disturbance attribution, that is, pre-, mid-, and post-disturbance features in chronological order, and temporal, spatial, spectral, and topographic features in the feature dimensions. Then, according to whether disturbance detection is performed before attribution, methods for attributing multiple types of forest disturbance based on the spatio-temporal-spectral and topographic features of remote sensing time series are summarized and compared: the direct method and the two-stage method. Finally, we analyze the current problems of forest disturbance monitoring using remote sensing and discuss future research directions, such as the fusion of spatio-temporal-spectral features, simultaneous detection of multi-intensity forest disturbances, and attribution of multiple forest disturbance types under limited sample conditions. We hope this article provides a reference for the detection and attribution of changes using an ensemble of spatio-temporal-spectral information from remote sensing time series.
关键词:forest disturbance;remote sensing time series;spatio-temporal-spectral information;feature ensemble;disturbance detection;attribution of disturbances
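The parallel and serial multi-algorithm integration strategies described in the review can be illustrated with a toy sketch. Everything below is hypothetical (the two stand-in detectors, their thresholds, and the annual NDVI series); it is not any specific published algorithm, only the integration pattern:

```python
# Parallel vs. serial integration of two toy disturbance detectors over an
# annual NDVI time series. Values and thresholds are hypothetical.

def drop_detector(series, min_drop=0.15):
    """Flag years whose NDVI drops sharply from the previous year."""
    return {i for i in range(1, len(series))
            if series[i - 1] - series[i] >= min_drop}

def anomaly_detector(series, k=1.2):
    """Flag years far below the series mean (z-score style)."""
    mean = sum(series) / len(series)
    sd = (sum((v - mean) ** 2 for v in series) / len(series)) ** 0.5
    return {i for i, v in enumerate(series) if sd > 0 and (mean - v) / sd >= k}

def parallel_ensemble(series):
    # Parallel: run both detectors independently, then merge their flags.
    return drop_detector(series) | anomaly_detector(series)

def serial_ensemble(series):
    # Serial: the second detector screens candidates found by the first.
    return drop_detector(series) & anomaly_detector(series)

ndvi = [0.81, 0.80, 0.82, 0.79, 0.45, 0.50, 0.58, 0.66]  # clearcut in year 4
print(sorted(parallel_ensemble(ndvi)))  # -> [4, 5]
print(sorted(serial_ensemble(ndvi)))    # -> [4]
```

The parallel union is more sensitive (it also flags the slow-recovery year), while the serial cascade is more conservative, mirroring the trade-off the review describes.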
摘要:With the continuous development of aerospace and remote sensing technology, the applications of remote sensing images have expanded into many fields. A hyperspectral image (HSI) is a common type of remote sensing image consisting of a series of two-dimensional images stacked into a 3D data cube. Each two-dimensional image in an HSI reflects the reflection/radiation intensity at a different electromagnetic wavelength, and each pixel corresponds to a spectral curve describing the spectral information across wavelengths. Hyperspectral remote sensing images therefore exhibit "map-spectrum integration": they contain not only strongly discriminative spectral information but also rich spatial information, giving hyperspectral data great application potential. Hyperspectral anomaly detection identifies pixels in a scene whose characteristics differ from those of surrounding pixels and labels them as anomalous targets, without any prior knowledge of the target. Because it is an unsupervised process that requires no prior information about the target to be measured, it plays an important role in practice. For example, anomaly target detection can be used to search for and rescue people after a disaster, to quickly locate the ignition point of a forest fire, and to search for mineral deposits in mineral resource exploration.
Hyperspectral anomaly detection has been a popular research direction in remote sensing image processing in recent years; many researchers have conducted extensive, in-depth research and achieved rich results. However, hyperspectral anomaly detection still faces many difficult problems. Targets of the same material may exhibit different spectral characteristics because of differences in imaging equipment and environment, which interferes with detection results (the problem of "same object, different spectra"), while targets of different materials may exhibit similar spectra (the problem of "different objects, same spectrum"). Moreover, most existing hyperspectral anomaly detection algorithms remain at the laboratory stage with low technological maturity. In addition, hyperspectral data may have many spectral bands containing a large amount of redundant information, which makes data processing difficult, and the number of publicly available hyperspectral anomaly detection datasets is insufficient, with most datasets being very old. In this paper, we first summarize the main research progress of hyperspectral anomaly detection, and then classify the existing mainstream algorithms into five categories: statistics-based anomaly detection methods, data expression-based methods, data decomposition-based methods, deep learning-based methods, and other methods.
Finally, through the investigation, analysis, and summary of existing methods, three future development directions of hyperspectral anomaly detection are proposed: database expansion, introducing newer datasets with more images acquired by more sophisticated remote sensing sensors; multi-source data combination, taking advantage of different imaging sensors and different types of remote sensing data; and algorithm practicality, enabling anomaly detection algorithms to be ported to and applied on real platforms.
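A minimal example of the first family the survey lists, statistics-based detection, is the classic global RX (Reed-Xiaoli) detector, which scores each pixel by its Mahalanobis distance to the background statistics. The synthetic cube below is hypothetical (real HSI would have hundreds of bands):

```python
import numpy as np

def rx_detector(cube):
    """cube: (rows, cols, bands) -> per-pixel RX anomaly score."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(float)
    mu = pixels.mean(axis=0)                 # background mean spectrum
    cov = np.cov(pixels, rowvar=False)       # background covariance
    cov_inv = np.linalg.pinv(cov)            # pseudo-inverse for stability
    diff = pixels - mu
    # Mahalanobis distance of every pixel to the background distribution.
    scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return scores.reshape(h, w)

rng = np.random.default_rng(0)
cube = rng.normal(0.3, 0.02, size=(32, 32, 8))   # homogeneous background
cube[10, 20] += 0.5                              # implanted anomalous spectrum
scores = rx_detector(cube)
print(np.unravel_index(scores.argmax(), scores.shape))  # -> (10, 20)
```

The implanted pixel dominates the score map because its spectrum is far from the estimated background distribution, which is exactly the unsupervised behavior the abstract describes.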
摘要:(Objective) Hyperspectral imagery (HSI) is three-dimensional cube data that combines spatial imagery with spectral information, which greatly facilitates the accurate interpretation of ground-cover observations. However, the HSI change detection (HSI-CD) task also poses challenges in high-dimensional nonlinear data processing. Therefore, an HSI change detection method based on spectral-frequency domain attribute pattern fusion (SFDAPF) is introduced to quantify the spectral representation of pixel attribute patterns step by step. Specifically, a saliency enhancement (SE) strategy for pixel attribute patterns based on Fourier transform theory is developed to improve the separability between pixel attribute patterns. The proposed SFDAPF method consists of the following four components. (Method) First, a gradient correlation-based spectral absolute distance (GCASD) is designed so that the attribute patterns of pixel pairs in bitemporal HSI can be quantified step by step in terms of spectral information representation. Then, based on Fourier transform theory, an SE strategy for the attribute patterns of pixel pairs is proposed, which improves the separability of changing and non-changing pixel pairs through the use of global spatial information. Next, the saliency level and the GCASD of each pixel are fused to obtain a comprehensive change detection discrimination value. Finally, according to a false alarm threshold, the binarized bitemporal HSI-CD results are obtained. (Result) The proposed SFDAPF method is applied to two open-source bitemporal HSI datasets, namely the River and Farmland datasets. Experimental results show that the proposed SFDAPF method outperforms traditional and state-of-the-art HSI-CD methods.
For the River dataset, compared with traditional methods, the SFDAPF method introduces the local context information of each pixel in the GCASD calculation stage and adopts the global SE strategy, which effectively reduces false alarms; compared with state-of-the-art methods, it achieves the highest accuracy on most performance indicators. For the Farmland dataset, the AA, Kappa, F1, IoU, and OA of SFDAPF reach the highest accuracy, exceeding the second-best results by 0.01985, 0.05653, 0.01474, 0.02798, and 0.02187, respectively. The OAu (0.97500) and OAc (0.96766) of SFDAPF do not reach the highest accuracy but are only 0.00673 and 0.01237 below it. The experiments therefore verify the effectiveness of the proposed SFDAPF method for the HSI-CD task. (Conclusion) Overall, the proposed SFDAPF method fully considers the representation of spectral information and the utilization of neighborhood spatial information, thereby improving the overall accuracy of HSI-CD. However, it only considers a single-window eight-connected neighborhood in the spectral characterization stage and the magnitude features of the frequency-domain representation. In future work, we will further explore the contribution of dual-window spectral information representation and frequency-domain phase information to the HSI-CD task.
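The pipeline's two ingredients, a per-pixel spectral distance and a Fourier-based saliency enhancement, can be sketched with stand-in formulas. Note that the paper's exact GCASD and SE definitions are not reproduced here: the distance below is a plain band-wise absolute difference, and the saliency step is the classic spectral-residual method used only as a proxy for a frequency-domain SE stage; the bitemporal images are synthetic:

```python
import numpy as np

def spectral_abs_distance(t1, t2):
    """Band-wise absolute difference, summed per pixel (stand-in for GCASD)."""
    return np.abs(t1.astype(float) - t2.astype(float)).sum(axis=-1)

def spectral_residual_saliency(img):
    """Spectral-residual saliency (Hou & Zhang, 2007) as a proxy SE step."""
    f = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    # Residual = log amplitude minus its 3x3 local mean (box blur via roll).
    blur = sum(np.roll(np.roll(log_amp, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    sal = np.abs(np.fft.ifft2(np.exp(log_amp - blur + 1j * phase))) ** 2
    return sal / sal.max()

t1 = np.full((16, 16, 4), 0.2)          # synthetic bitemporal 4-band HSI
t2 = t1.copy()
t2[6:9, 6:9] += 0.4                     # simulated change region
dist = spectral_abs_distance(t1, t2)
sal = spectral_residual_saliency(dist)
changed = (dist * sal) > 0.5 * (dist * sal).max()  # fused score + threshold
print(int(changed.sum()))
```

The fusion of a spectral distance map with a global frequency-domain saliency map, followed by thresholding, mirrors the four-stage structure of the abstract without claiming to reproduce its formulas.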
HU Boni,CHEN Lin,XU Bingli,BU Shuhui,HAN Pengcheng,LI Kun,XIA Zhenyu,LI Ni,LI Ke,CAO Xuefeng,WAN Gang
Corrected Proof
DOI:10.11834/jrs.20232597
摘要:Building high-fidelity 3D digital models of the land surface environment in real time has become essential in many fields, including urban planning, agricultural surveying and mapping, disaster management, and military applications. Such real-time models are the key foundation for virtual mapping of the geographic environment and, in turn, for building a digital twin geographic environment. However, current methods for constructing these models suffer from several challenges, such as slow speed, low timeliness, and limited applicability to large scenes. To overcome these challenges, this paper proposes a new real-time dense point cloud generation and digital model construction method based on unmanned aerial vehicle (UAV) platforms. The proposed algorithm significantly improves the speed and accuracy of land surface model construction compared with existing algorithms and enables online data collection and real-time modeling. It is built on a general simultaneous localization and mapping (SLAM) framework, forming an unobstructed technical chain from data acquisition, data processing, and feature extraction and matching to 3D point cloud generation, digital model reconstruction, and result analysis. Meanwhile, the algorithm addresses key technologies in scene reconstruction, such as online data acquisition and pose estimation, real-time dense point cloud generation, and digital surface model (DSM) and digital orthophoto map (DOM) construction. Experimental results on datasets covering different land surface environments, including cities, farmland, mountains, and deserts, show that the proposed algorithm produces high-quality dense point clouds and digital models of the land surface while running 30-50 times faster than existing software such as Pix4DMapper.
On average, it can process a high-resolution image in less than one second, whereas the capture interval of typical aerial survey drones is 1-2 seconds. Furthermore, applying the proposed algorithm to emergency rescue after mountain flood disasters can support real-time reconstruction of inaccessible areas and surveying of the extent and location of washed-out and sediment-covered areas. In conclusion, the proposed UAV-based real-time dense point cloud generation and digital model construction method represents a significant advance in the field of geographic information systems. It has the potential to transform various geographic applications, including disaster warning and management, emergency response, military applications, and agricultural and urban planning. Moreover, the algorithmic breakthrough addresses the core problem of real-time mapping from reality to the virtual world in the construction of geographic digital twins; it can therefore empower digital twins and provide a digital foundation for their applications in geographic environments. Based on the real-time mapping of three-dimensional ground-environment information, parallel simulation and decision-making can be carried out, and functions such as emergency warning, scheme evaluation, and decision optimization based on dynamic monitoring of targets in the scene are also expected, further improving the automation and intelligence of the applied system.
关键词:real-time reconstruction;dense point cloud;digital twin;terrain environment;digital model
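The last stage the abstract lists, turning a dense point cloud into a DSM, can be illustrated with a minimal rasterization sketch (grid size, cell size, and the points below are hypothetical; a production pipeline would also interpolate empty cells):

```python
import numpy as np

def rasterize_dsm(points, cell=1.0, shape=(4, 4)):
    """points: (N, 3) array of x, y, z -> DSM grid keeping max z per cell."""
    dsm = np.full(shape, np.nan)
    for x, y, z in points:
        i, j = int(y // cell), int(x // cell)
        if 0 <= i < shape[0] and 0 <= j < shape[1]:
            # A DSM keeps the highest surface hit in each cell
            # (e.g., a tree crown above the ground return).
            if np.isnan(dsm[i, j]) or z > dsm[i, j]:
                dsm[i, j] = z
    return dsm

pts = np.array([[0.5, 0.5, 10.0],   # ground return
                [0.6, 0.4, 14.0],   # tree crown in the same cell
                [2.5, 3.5, 11.0]])
dsm = rasterize_dsm(pts)
print(dsm[0, 0], dsm[3, 2])  # -> 14.0 11.0
```

Keeping the maximum elevation per cell is the defining choice of a DSM as opposed to a digital terrain model, which would keep ground returns instead.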
LUO Wang,PENG Dailiang,LIU Jinxiu,XU Junfeng,LOU Zihang,LIU Guohua,GAO Shuang,YU Le,WANG Fumin
Corrected Proof
DOI:10.11834/jrs.20233056
摘要:Objective Soybean is the world's most important legume crop, serving as a major source of high-protein food, the primary ingredient of livestock feed, and an essential source of edible oil. It plays a crucial role in the world's food production, with a global annual production of around 370 million metric tons. China is one of the major soybean-producing countries, with an annual production of around 17 million metric tons. However, China's domestic soybean production is insufficient to meet production and living needs, and the country relies on imports for more than 80% of its soybean consumption; consequently, China's food security faces significant structural challenges. Remote sensing technology is a powerful tool for monitoring soybean cultivation and can provide basic data support for countries to signal changes in agricultural product markets, strengthen market guidance, and formulate effective agricultural development strategies. The traditional method of estimating soybean planting area through agricultural surveys is usually time-consuming, labor-intensive, and subject to subjective factors, leading to inaccurate and imprecise results. In contrast, remote sensing uses satellite, aerial, or drone-based sensors to capture and analyze the electromagnetic radiation reflected or emitted by the Earth's surface, providing a more objective and efficient way of monitoring crop planting areas. Methods based on vegetation-index time series and phenology are widely used for crop recognition, including soybeans, but research has focused primarily on the time-series or phenological features used for recognition, with limited attention to the time-series curve itself.
There is a lack of analysis of the spectral characteristics of soybeans' key growth stages, and no standard spectral time-series curve has been established to summarize their patterns of change. Additionally, mainstream crop recognition methods struggle to obtain samples, especially in large-scale mapping, where sample quality and quantity are the main limiting factors. Method This study proposes a soybean recognition method based on a standard spectral time-series curve on the Google Earth Engine (GEE) cloud platform. By adding a climate weight factor, the standard spectral time-series curve of soybean can be matched accurately. Result Using the features of the standard spectral time-series curve combined with a random forest classifier, a soybean distribution map of Heilongjiang Province in 2020 was extracted. The classification confusion matrix shows an overall soybean recognition accuracy of 86.95%, a user's accuracy of 90.91%, a producer's accuracy of 86.14%, and an F1-score of 0.8846. Compared with statistical area data, the area accuracy reaches 95.94%. Conclusion The study analyzes the differences between the spectral time-series curves of soybean and corn and the impact of meteorological factors on the curves, and establishes a method for mapping standard spectral time-series curve information to samples, thereby solving the problem of insufficient samples for large-area soybean mapping. Furthermore, experiments verify the robustness of this standard-curve-based soybean recognition method with respect to time scale and disaster conditions. These results provide scientific evidence and technical support for monitoring soybean growth, evaluating disasters, and formulating international agricultural trade strategies.
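The core matching idea, labeling a pixel by which crop's standard time-series curve it resembles most closely, optionally re-weighting dates by a climate-related factor, can be sketched as follows. The curves, dates, and weights are entirely hypothetical, and the weighted squared distance stands in for whatever similarity measure the study actually uses:

```python
def curve_distance(obs, std, weights=None):
    """Weighted squared distance between an observed and a standard curve."""
    weights = weights or [1.0] * len(obs)
    return sum(w * (o - s) ** 2 for w, o, s in zip(weights, obs, std))

def classify(obs, standards, weights=None):
    """Assign the crop whose standard curve is closest to the observation."""
    return min(standards, key=lambda crop: curve_distance(obs, standards[crop], weights))

STANDARD = {  # hypothetical 6-date NDVI standard curves
    "soybean": [0.15, 0.35, 0.65, 0.80, 0.70, 0.40],
    "corn":    [0.15, 0.45, 0.75, 0.85, 0.60, 0.30],
}
pixel = [0.14, 0.37, 0.63, 0.79, 0.71, 0.42]      # observed NDVI series
print(classify(pixel, STANDARD))                   # -> soybean

# A climate-style weighting that emphasizes the key growth-stage dates:
key_stage_weights = [0.5, 1.0, 2.0, 2.0, 1.0, 0.5]
print(classify(pixel, STANDARD, key_stage_weights))  # -> soybean
```

In the actual method such curve features feed a random forest classifier; the minimum-distance rule here is only the simplest way to show how a standard curve can substitute for per-pixel training samples.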
摘要:Objective Efficiently and accurately mapping the spatial distribution of terraced fields not only provides important data support for soil and water conservation but also improves the supervision of agriculture in mountainous areas. When deep learning methods are used for terrace recognition, narrow, elongated terraces are prone to being missed because of convolution operations, and in complex mountainous backgrounds with unclear terrain boundaries, large adhered regions easily appear in the recognition results, lowering the final accuracy of terrace recognition. To achieve accurate recognition of terrace information, the pressing technical problem is how, given the characteristics of terraces, to preserve the high-level semantic information of high-resolution remote sensing images during convolution, reduce the omission of narrow, elongated terraces, and reduce adhesion in the recognition results. Method To address these problems, this paper proposes JAM-R-CNN, a deep learning network for terrace recognition from very-high-resolution remote sensing images. The network is based on the Mask R-CNN model; it integrates a skip network to preserve the high-level semantic information of high-resolution imagery, introduces the convolutional block attention module (CBAM) to enhance the feature expression of terraces, and modifies the anchor sizes to fit the narrow, elongated shape of terraces, thereby improving terrace recognition accuracy.
To test the proposed method, part of the salt-well terrace area in Nanchuan District, Chongqing, China was selected as the study area, and four models were evaluated on domestic GF-2 satellite imagery. Result The results show that the terrace parcel map derived from the JAM-R-CNN model achieved a precision of 90.81%, a recall of 84.28%, an F1 score of 88.98%, and an IoU of 83.15%. Compared with Mask R-CNN, the precision, recall, F1 score, and IoU of JAM-R-CNN increased by 1.96%, 5.26%, 3.29%, and 5.19%, respectively, indicating that JAM-R-CNN can better identify terraces. Most of the terraces identified by UNet and DeepLab v3+ are connected together, and small terraced fields are not distinguished. Compared with Mask R-CNN, the JAM-R-CNN model misses fewer areas on the periphery of terraces, and the number of missed narrow, elongated terraces is significantly reduced. These gains stem from the three improvements, further demonstrating that the proposed JAM-R-CNN model delivers a significant improvement and superior performance in remote sensing recognition of terraces. Conclusion In summary, the proposed JAM-R-CNN deep learning network model not only effectively reduces adhesion in terrace recognition results but also significantly improves the extraction rate of narrow, elongated terraces, achieving a marked improvement in the overall accuracy of terrace remote sensing recognition, and has good application value.
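The anchor-size modification can be illustrated with a standard anchor-generation sketch. The base size, scales, and aspect ratios below are hypothetical, not the paper's actual configuration; the point is only that adding very elongated ratios produces strip-shaped anchors that square-ish defaults lack:

```python
def make_anchors(base_size, scales, ratios):
    """Return (w, h) anchor shapes; ratio = h / w, area preserved per scale."""
    anchors = []
    for s in scales:
        area = (base_size * s) ** 2
        for r in ratios:
            w = (area / r) ** 0.5       # keep the area fixed, vary the shape
            anchors.append((round(w, 1), round(w * r, 1)))
    return anchors

# Typical default ratios vs. a set extended for narrow, strip-like terraces.
default = make_anchors(16, [1], [0.5, 1.0, 2.0])
terrace = make_anchors(16, [1], [0.1, 0.5, 1.0, 2.0, 10.0])
print(default)                 # -> [(22.6, 11.3), (16.0, 16.0), (11.3, 22.6)]
print(terrace[0], terrace[-1])  # -> (50.6, 5.1) (5.1, 50.6)
```

A 5-pixel-wide, 50-pixel-long anchor overlaps a narrow terrace strip far better than any of the default shapes, which is why elongated anchors reduce missed detections for such targets.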
摘要:Urban land surface temperature is an important indicator of the energy budget of the urban underlying surface and of local climate change, and remote sensing is an important tool for obtaining it at large spatial scales. The pronounced three-dimensional structure of cities and the complexity of urban surface materials substantially influence the directional variation of upwelling thermal radiance. Thermal infrared remote sensing typically provides an average temperature (i.e., a directional temperature) of all component surfaces within a sensor's field of view at a specific viewing direction. This directional temperature varies with the sensor's observation angle and differs from the true distribution of urban surface temperature. To better characterize the energy exchange between the urban underlying surface and the atmosphere, the term "complete surface temperature" was proposed to represent the characteristics of urban surface temperature. To date, however, the complete surface temperature only describes the average state of urban surface temperature; it cannot reflect its high-resolution spatiotemporal characteristics and cannot meet the needs of fine-scale assessments of the urban thermal environment. In this review, we summarize the development of remotely sensed urban surface temperature from "directional temperature" (two-dimensional) to "complete surface temperature" (2.5-dimensional) and then to "3-dimensional surface temperature" (three-dimensional), together with current progress in using directional remote sensing observations to obtain urban surface temperature in different dimensions; clarify the differences and interrelationships among the temperatures of different dimensions; and elaborate on the applications of remotely sensed urban surface temperature in different dimensions. After summarizing the existing problems, we point out future development trends of remotely sensed urban surface temperature: (1) defining 3-dimensional urban surface temperature for different application purposes; (2) stereoscopic observation for reconstructing 3-dimensional urban surface temperature; and (3) coupling 3-dimensional surface temperature products with urban climate models.
关键词:urban remote sensing;land surface temperature;directional temperature;complete surface temperature;3-dimensional surface temperature
摘要:Accurate wetland classification methods make it possible to quickly grasp the spatio-temporal variation of wetlands and play an important role in wetland research. To address the limitation that existing few-shot wetland classification methods use only a target-domain or a single source-domain dataset, this paper proposes 3D-MDAFSL (3D Multi-source Domain self-Attention Few-Shot Learning), a 3D multi-source-domain self-attention few-shot learning model. First, combining the strengths of convolution and attention mechanisms, a 3D feature extractor based on the self-attention mechanism and deep residual convolution is designed. Then, a conditional adversarial domain adaptation strategy is used to align the distributions of multiple source domains, and few-shot learning is performed separately within each domain. Finally, the features extracted by the trained model are fed to a k-nearest-neighbor classifier to obtain the classification results. The results show that, compared with a framework without the feature extractor, the 3D feature extractor improves the overall accuracy by about 6.79%. When multi-source-domain datasets are used, the overall accuracy of 3D-MDAFSL on the Sentinel-2 wetland dataset of Zhongshan City reaches 93.52%, a significant improvement over existing algorithms. The proposed 3D-MDAFSL model thus has good application value for high-precision extraction and classification of wetland features.
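The final step described above, classifying embedded features with a k-nearest-neighbor vote, can be sketched in a few lines. The 2-D embeddings and the wetland class names below are made up for illustration; in the actual model the embeddings would come from the trained 3D feature extractor:

```python
from collections import Counter

def knn_predict(train_feats, train_labels, query, k=3):
    """Majority vote among the k training embeddings closest to the query."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(f, query)), lbl)
        for f, lbl in zip(train_feats, train_labels))
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical embeddings of labeled support pixels.
feats = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.1), (0.8, 0.2), (0.85, 0.15)]
labels = ["marsh", "marsh", "pond", "pond", "pond"]
print(knn_predict(feats, labels, (0.15, 0.85)))  # -> marsh
```

The attraction of KNN in few-shot settings is that it needs no further training: once the extractor maps pixels into a well-separated feature space, a handful of labeled support samples per class suffices.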
摘要:Unsupervised domain adaptation (UDA) classification aims to classify target-domain scenes without labeled samples by using knowledge from labeled source-domain data, and is one of the important cross-scene approaches in hyperspectral image classification (HSIC). Existing domain-adaptive classification methods for hyperspectral remote sensing data mainly use adversarial training to align features between the target and source domains. Although popular UDA approaches with locally aligned dual domains achieve acceptable classification accuracy, they do not consider the key issue of whether source-domain knowledge is sufficiently transferred to the target domain. To effectively extract and transfer source-domain knowledge, this paper proposes Unsupervised Domain Adaptation Classification by Adversary Coupled with Distillation (UDAACD) for unsupervised HSIC. In the proposed framework, a densely connected network with a convolutional block attention module is presented to extract rich features representing the categories of the source and target domains. During source-domain training, a self-distillation learning scheme is adopted to reduce class-wise differences by matching the predictive distributions of samples of the same class. This self-distillation regularization constraint between same-category source-domain samples reduces intra-class differences in the classification subspace and improves the knowledge representation accuracy of the source-domain classification model, thereby improving the adaptive classification model's ability to refine source-domain supervisory knowledge.
In addition, a novel mechanism of adversarial training coupled with knowledge distillation is presented to guarantee that source-domain knowledge is fully transferred to the target-domain scene during feature alignment. Dual classifiers are employed in the adversarial training to eliminate the effect of confused-sample predictions; maximizing and minimizing the discrepancy between the dual classifiers promotes rapid feature alignment without confusion. In this way, knowledge distillation improves the network's in-domain recognition ability while ensuring the full transfer of hyperspectral source-domain knowledge during feature alignment, improving the model's knowledge acquisition in the target domain. Finally, unsupervised classification of target-domain hyperspectral images is completed after knowledge transfer. Cross-scene HSI classification experiments are conducted on four hyperspectral remote sensing scene datasets: Pavia University, Pavia Center, Houston 2013, and Houston 2018. The results demonstrate that the proposed model is superior to other hyperspectral domain adaptation methods. Under the same sample conditions, the classification accuracy reaches 91.75% (Pavia University to Pavia Center), 74.41% (Pavia Center to Pavia University), 70.68% (Houston 2013 to Houston 2018), and 67.76% (Houston 2018 to Houston 2013). In addition, an ablation study shows that both the self-distillation and the distillation loss in the adversarial training model improve the final accuracy of unsupervised HSIC. The effects of different weight and temperature parameter values are also analyzed. All the experimental results and analyses verify the validity of the method.
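The self-distillation regularizer described above, penalizing divergence between the temperature-softened predictive distributions of two same-class samples, can be sketched numerically. The logits, the temperature, and the use of a symmetric KL are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax (higher T flattens the distribution)."""
    z = np.exp((logits - logits.max()) / T)
    return z / z.sum()

def self_distill_loss(logits_a, logits_b, T=4.0):
    """Symmetric KL between softened predictions of two samples."""
    p, q = softmax(logits_a, T), softmax(logits_b, T)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * (kl(p, q) + kl(q, p))

# Two same-class samples should already predict similarly -> small penalty;
# a sample predicting a different class incurs a much larger one.
same_class = self_distill_loss(np.array([4.0, 1.0, 0.0]),
                               np.array([3.5, 1.2, 0.2]))
off_class = self_distill_loss(np.array([4.0, 1.0, 0.0]),
                              np.array([0.0, 1.0, 4.0]))
print(round(same_class, 4), round(off_class, 4))
```

Minimizing such a term over same-class pairs pulls their predictive distributions together, which is how the abstract's intra-class compaction of the classification subspace comes about.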
CUI Guangxi,DU Yanlei,WANG Sheng,XU Xuefeng,YANG Xiaofeng
Corrected Proof
DOI:10.11834/jrs.20232433
摘要:Objective Ocean internal waves are a commonly observed, potentially hazardous mesoscale oceanic phenomenon that receives great attention because of the significant threat it poses to naval operations and marine engineering. With the rapid development of science and technology, remote sensing methods for detecting ocean internal waves have attracted increasing attention. At present, remote sensing methods used for internal wave observation can be divided by frequency band into synthetic aperture radar (SAR), visible-light, and infrared methods. Among them, SAR offers all-day, all-weather, high-resolution observation, making it especially well suited to investigating oceanic internal waves in areas with frequent cloud cover. To achieve accurate detection of ocean internal waves in SAR images and to address the susceptibility of conventional detection algorithms to SAR speckle noise, this paper proposes a SAR ocean internal wave detection algorithm based on superpixel segmentation and global saliency features. Method First, the SAR image is segmented into feature-homogeneous superpixels using the simple linear iterative clustering (SLIC) algorithm, which groups neighboring pixels with similar features into superpixels. The superpixels not only enhance the continuity between internal-wave pixels but also suppress speckle noise. Then, the gradient, grayscale, and spatial features of each superpixel are used to construct an internal-wave saliency feature vector and compute its global saliency. Based on the saliency, a threshold segmentation algorithm extracts the internal-wave superpixels. Experiments on GF-3 and ERS-1 images show that the constructed internal-wave saliency feature vector helps detect more internal-wave stripes.
Finally, a label image indicating the internal-wave regions is generated according to the spectral characteristics of internal waves and used to correct the detection result of the previous step. Result We carried out detection experiments on internal-wave bright stripes in five SAR images with a resolution of about 10 m. The experimental results show that the proposed method achieves good detection accuracy on these five high-resolution SAR internal-wave images: the average F1 score reaches 0.884 and the average false alarm rate is 0.009. Conclusion Comparing the internal-wave detection results and evaluation indexes of our method with those of the classical Canny operator and the deep learning U-Net method demonstrates the effectiveness and robustness of the proposed method for high-resolution SAR ocean internal wave detection, which is of great significance for improving the inversion accuracy of internal-wave wavelength and amplitude.
关键词:ocean internal wave;superpixel segmentation;salient feature detection;Fourier energy spectrum;synthetic aperture radar
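The segment-score-threshold chain of the method can be sketched with a toy stand-in: square blocks replace SLIC superpixels, and mean intensity replaces the full gradient/grayscale/spatial feature vector (the real method uses SLIC and a richer saliency feature; the simulated scene is hypothetical):

```python
import numpy as np

def block_saliency(img, bs=4):
    """Mean-intensity saliency per bs x bs block (a crude SLIC stand-in)."""
    h, w = img.shape
    # Averaging within blocks also suppresses speckle, as superpixels do.
    blocks = img.reshape(h // bs, bs, w // bs, bs).mean(axis=(1, 3))
    sal = np.abs(blocks - blocks.mean())    # global deviation = saliency
    return sal / sal.max()

rng = np.random.default_rng(1)
sar = 0.2 + 0.02 * rng.standard_normal((32, 32))  # speckled background
sar[:, 12:16] += 0.3                              # bright internal-wave stripe
sal = block_saliency(sar)
stripe = sal > 0.5                                # threshold segmentation
print(np.argwhere(stripe.any(axis=0)).ravel())    # -> [3]
```

Only the block column covering the stripe survives the threshold, while the per-pixel speckle, averaged away inside blocks, produces no false alarms; this is the noise-suppression role the abstract attributes to superpixels.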
TANG Xinming,XIAO Chenchao,WEI Hongyan,LIU Yu,HAN Bo,LIU Yinnian,WANG Jun
Corrected Proof
DOI:10.11834/jrs.20232522
摘要:The 5 m Optical Satellite 02 (ZY-1-02E) was successfully launched on December 26, 2021. Together with its predecessor (ZY-1-02D), it will form an operational constellation of mid-resolution satellites, providing a large-scale, quantitative, and comprehensive Earth observation capability. The satellite was designed to provide 2.5 m panchromatic/10 m multispectral, 30 m hyperspectral, and 16 m thermal infrared image data. The project, led by the Ministry of Natural Resources, will support national monitoring, survey, and mapping projects at scales of 1:100,000 to 1:250,000. Compared with traditional multispectral data, hyperspectral data capture much richer spectral information about ground objects, which is particularly important for fine-grained survey and monitoring tasks. The development of China's hyperspectral remote sensing technology is basically synchronized with the international frontier. However, since the birth of hyperspectral remote sensing, data acquisition has relied mainly on crewed airborne platforms, and for a long time there was no large-scale acquisition capacity at the data source. The hyperspectral payload carried by this satellite is therefore its biggest highlight and has attracted widespread attention from domestic and foreign peers. During the in-orbit test, the Ministry of Natural Resources designed a total of 32 application test items, focusing on land resources, geology and minerals, surveying and mapping geographic information, ocean and island monitoring, and other fields. All application items and other key indicators were evaluated in accordance with the relevant standards and technical specifications for natural resource survey and monitoring. The results show that the satellite system, ground system, and application system have been operating stably, with good consistency maintained among the satellites of the constellation.
The satellite meets the data quality requirements of the main operational tasks in various fields, including natural resource supervision and enforcement, geological and mineral resource investigation, ecological restoration project monitoring and evaluation, geospatial information updating, coastline change monitoring, and steel industry capacity monitoring; the hyperspectral and thermal infrared payloads in particular achieved good application results. This paper introduces the data characteristics and application capabilities of the satellite, focusing on the evaluation results of specific applications, especially those of the hyperspectral and thermal infrared cameras, in the hope of providing a reference for the further application of this satellite.