Latest Issue

This article focuses on research into the evolution of the urban thermal environment: a spatiotemporal fusion approach is used to reconstruct summer mean land surface temperature for Wuhan, revealing the spatiotemporal characteristics of the urban thermal environment and providing a scientific basis for urban ecological civilization construction.

    OUYANG Yuanjun, ZHANG Ming, ZHANG Lei, HE Qingqing, HUANG Jiejun

    DOI:10.11834/jrs.20254587
Abstract: Objective Remote sensing-derived land surface temperature (LST) products are essential for studying urban thermal environment dynamics. However, limitations such as long revisit intervals of remote sensors and data gaps caused by cloudy or rainy weather hinder the representativeness of high-resolution LST products. As a result, long-term studies of urban thermal environments at fine spatial scales remain constrained. This study aims to reconstruct high-resolution summer mean LST data for Wuhan's core urban area from 2013 to 2022 using Landsat and MODIS remote sensing data through spatiotemporal fusion methods, and to analyze the evolution of Wuhan's thermal environment at a fine scale. Method The research employed spatiotemporal fusion techniques to integrate Landsat and MODIS data, reconstructing long-term high-resolution summer mean LST for Wuhan's core urban area. The study area covered Wuhan's central city and urban development zones. Validation was conducted using ground meteorological station data, with accuracy assessed through MAE, RMSE, and R² metrics. LST classification and trend analysis were performed to examine the spatiotemporal patterns of thermal environment change. Result Key findings include: (1) the reconstructed high-resolution mean LST product demonstrated strong consistency with ground observations (MAE = 0.478 ℃, RMSE = 0.5965 ℃, R² = 0.8538), effectively capturing the high spatiotemporal heterogeneity of urban thermal environments at fine scales; (2) from 2013 to 2022, the proportion of high-temperature zones in Wuhan's main urban area showed a decreasing trend while expanding towards surrounding new town clusters along development axes, with previously isolated high-temperature areas gradually merging; (3) during 2013-2022, all new town clusters except the southeastern cluster exhibited expansion of high-temperature zones, with particularly significant growth in the northern, western, and southwestern areas. Conclusion This study provides an effective approach for reconstructing high-resolution LST data and analyzing fine-scale urban thermal environment patterns. The findings offer valuable insights for urban ecological civilization construction and sustainable development, supporting evidence-based urban planning and heat island mitigation strategies. The methodology and results contribute to advancing research on the spatiotemporal patterns of urban thermal environments at fine scales.
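The trend analysis mentioned above relies on the Mann-Kendall test listed in the keywords; a minimal per-pixel sketch is given below, using the standard tie-free formulation and made-up LST values (the authors' exact implementation may differ).

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    """Standard Mann-Kendall trend test (no tie correction) for one pixel's series."""
    y = np.asarray(series, dtype=float)
    n = len(y)
    s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0                  # variance of S under H0
    z = 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(var_s)  # continuity correction
    p_value = 2.0 * (1.0 - norm.cdf(abs(z)))                  # two-sided significance
    return s, z, p_value

# Made-up 2013-2022 summer-mean LST series (degC) for a single pixel
print(mann_kendall([34.1, 34.5, 34.2, 35.0, 35.3, 35.1, 35.8, 36.0, 35.9, 36.4]))
```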
Keywords: urban thermal environment; mean land surface temperature; land surface temperature reconstruction; multi-source spatiotemporal fusion; spatiotemporal evolution; Mann-Kendall trend test; spatiotemporal resolution; Wuhan City
Updated: 2025-06-24
In the field of remote sensing, researchers propose an adaptive-fusion collaborative classification network for hyperspectral and LiDAR data based on the Mamba architecture, effectively improving land-cover classification accuracy.

    Weng Qian, Chen Gengwei, Pan Zengying, Lin Jiawen, Zheng Xiangtao

    DOI:10.11834/jrs.20254539
Abstract: Objective Hyperspectral images (HSI) capture rich spectral signatures for land-cover analysis but lack spatial elevation details, while light detection and ranging (LiDAR) provides precise 3D geometric information. Combining these complementary modalities can significantly enhance classification accuracy. However, existing deep learning frameworks, particularly Transformer-based models, face inefficiencies due to the quadratic complexity of self-attention mechanisms when processing high-dimensional HSI data. This study aims to address these limitations by proposing a novel adaptive fusion network that leverages the linear computational efficiency of the Mamba architecture, enabling efficient and accurate joint classification of HSI and LiDAR data. Method We propose AFMamba, an adaptive fusion collaborative classification network based on the Mamba architecture. The network features three key components: (1) a dual-branch feature extraction module that independently encodes HSI spectral-spatial features and LiDAR elevation information; (2) a stackable dual-channel collaborative attention module (DCCAM) built upon Mamba blocks, which captures long-range dependencies across modalities while enforcing parameter sharing to enhance feature consistency and mutual learning; and (3) an adaptive fusion block (AF) that dynamically weights multi-modal features through learnable parameters, optimized via layer normalization. By integrating Mamba's selective state space model (SSM), the network achieves linear computational complexity, efficiently modeling global dependencies without sacrificing spatial-spectral details. The parallel training architecture further reduces computational bottlenecks. Result Extensive experiments on three benchmark datasets (Trento, Houston 2013, and MUUFL) demonstrate AFMamba's superiority. The proposed method achieves state-of-the-art overall accuracies (OA) of 99.33%, 91.74%, and 94.94%, respectively, outperforming Transformer-based models (MFT, MIViT) and Mamba variants (HLMamba, SpectralMamba). Conclusion AFMamba establishes a new paradigm for efficient and accurate fusion of HSI and LiDAR data by integrating Mamba's linear-time modeling capability with parameter-shared cross-modal attention. The method effectively addresses the computational inefficiency of Transformers while achieving superior classification performance through adaptive feature fusion and global dependency learning. Future work will extend this framework to semi-supervised scenarios and explore its applicability to other multimodal remote sensing tasks, such as change detection and target recognition.
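As a rough illustration of the adaptive fusion idea described above (learnable weights plus layer normalization), the PyTorch sketch below blends two modality token streams; the module name, shapes, and the two-scalar weighting are assumptions, not the authors' exact AF block.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Toy adaptive fusion of HSI and LiDAR token features."""
    def __init__(self, channels):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(2))   # learnable fusion weights
        self.norm = nn.LayerNorm(channels)

    def forward(self, feat_hsi, feat_lidar):         # both (batch, tokens, channels)
        w = torch.softmax(self.logits, dim=0)        # normalized modality weights
        return self.norm(w[0] * feat_hsi + w[1] * feat_lidar)

fused = AdaptiveFusion(64)(torch.randn(2, 81, 64), torch.randn(2, 81, 64))
```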
Keywords: remote sensing image classification; collaborative classification; adaptive fusion; Mamba architecture; parameter sharing; hyperspectral image; LiDAR; multimodal data fusion
Updated: 2025-06-18
In the field of object detection, researchers propose a multimodal detection method that adaptively fuses infrared and visible-light features, effectively improving detection accuracy in complex environments.

    YU Zhirui, YIN Zhanpeng, WANG Junyu, ZHOU Liang, YE Yuanxin

    DOI:10.11834/jrs.20254358
Abstract: Objective Target detection methods are mainly based on visible-light images. Although such images fully display the details and texture features of a target, when the target is blurred or occluded, or the illumination is too strong or too weak, it is difficult to obtain sufficient target information from visible-light sensors alone, which degrades detection performance. In contrast, infrared sensors offer strong resistance to environmental interference, insensitivity to illumination, and the ability to reflect target temperature, which can compensate for the shortcomings of visible-light imaging. Therefore, infrared features are integrated into visible-light-based target detection. Method This paper proposes a multimodal target detection method that adaptively fuses infrared and visible-light features. The method uses the YOLOv8 detection framework as the base network to extract multi-scale feature information. On this basis, a new Cross-modal Hybrid Attention Module (CHAM) is constructed. This module extracts the complementary features of visible-light and infrared images separately and jointly performs channel and spatial attention analysis to exchange and reorganize cross-modal information weights, thereby improving the perception of complementary features between modalities. In addition, a visible-infrared feature adaptive fusion module guided by ambient light intensity is constructed. An illumination awareness module (IAM) is first designed to evaluate, from the light intensity of the visible-light image, how rich the target features contained in that image are; this estimate is then fed into the cross-modal adaptive fusion module (CAFM) to guide the adaptive fusion process, addressing the difficulty conventional fusion methods have in adapting dynamically to multimodal data characteristics. Target detection based on multimodal feature fusion is thus realized. Result To fully demonstrate the effectiveness of the proposed MAF-YOLO model, several classic target detection models and current advanced detection methods are compared with MAF-YOLO. The classic models include Faster R-CNN and YOLOv8; since both are single-modality detectors, experiments are conducted on the visible-light and infrared modalities separately. The advanced models include CFT, the first application of the Transformer to multispectral target detection; TarDAL, which uses generative adversarial networks to generate fused images; and SuperYOLO, which performs super-resolution reconstruction based on YOLOv5 and improves multimodal detection accuracy. Comparative experiments on the M3FD street-view dataset and the DroneVehicle aerial vehicle dataset test the robustness of the method in different lighting environments and scenes. The experiments on these two publicly available datasets show that the proposed method achieves higher detection accuracy than current state-of-the-art single-modal and multimodal object detection methods. Conclusion Based on the YOLOv8 model, this paper proposes a target detection network based on visible-infrared multimodal feature fusion (MAF-YOLO). A new cross-modal hybrid attention mechanism is designed to make full use of the complementary features of visible-light and infrared information, and an illumination perception module and an adaptive fusion module are constructed to perform mid-level fusion of the dual-stream information, extracting the complementary features of the two modalities for target detection. Comparative experiments with a variety of existing detection models on the DroneVehicle and M3FD datasets show that MAF-YOLO achieves good detection performance and robustness in complex environments, demonstrating that the proposed method effectively alleviates the problem of insufficient visible-light target features in such environments and realizes accurate target detection through the fusion of infrared and visible-light multimodal features.
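A minimal sketch of the illumination-guided fusion idea (an IAM-like network scoring the visible image, the score weighting the two feature streams) is shown below; the network sizes and the simple scalar gate are assumptions rather than the paper's CHAM/CAFM design.

```python
import torch
import torch.nn as nn

class IlluminationGate(nn.Module):
    """Toy illumination-aware fusion: a tiny CNN scores the visible image's
    illumination (0-1) and that score weights the visible vs. infrared features."""
    def __init__(self):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 1), nn.Sigmoid(),
        )

    def forward(self, rgb_image, feat_vis, feat_ir):
        w = self.scorer(rgb_image).view(-1, 1, 1, 1)   # per-image weight in (0, 1)
        return w * feat_vis + (1.0 - w) * feat_ir      # bright scene -> trust visible

gate = IlluminationGate()
fused = gate(torch.rand(2, 3, 256, 256),
             torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```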
Keywords: target detection; multimodal; convolutional neural network; feature fusion; attention mechanism; visible light imaging; infrared imaging; deep learning
Updated: 2025-06-18
In the field of food security, researchers use multi-source remote sensing data to propose a new automated rice mapping method, providing a reliable solution for precise rice mapping in hilly and mountainous regions.

    LI Ziqi, WU Tianjun, LUO Jiancheng, ZHANG Jing, LI Manjia, FANG Zhiyang

    DOI:10.11834/jrs.20255064
Abstract: Rice is a crucial global food crop, and the accurate, timely delineation of rice cultivation areas is vital for food security assessments and sustainable agricultural planning. Satellite remote sensing, integrated with advanced information technologies, has emerged as a pivotal tool for mapping the spatial distribution patterns and monitoring the growth dynamics of rice. However, in the hilly and mountainous regions of southwest China, frequent cloud cover, fragmented cultivation patterns, and high field-sampling costs hinder efficient and reliable remote sensing-based rice mapping. To address these limitations, this study, focusing on Tongnan District in Chongqing Municipality, proposes a novel cropland parcel-scale framework for automated rice sample generation and mapping that leverages the complementary strengths of multi-source remote sensing data. The framework operates through several key stages. First, using cropland parcels as the basic unit, an optical-SAR time-series feature set is constructed from multi-temporal Sentinel-1 and Sentinel-2 data. Second, adaptive identification of rice growth stages is achieved by analyzing temporal VH polarization features from SAR data, which are sensitive to vegetation structure and moisture variations. Subsequently, high-quality parcel-scale training samples are automatically generated by analyzing the rice seasonal pattern across different imaging modes, bypassing labor-intensive manual sampling. Based on these samples, automated rice mapping is implemented by combining feature optimization and a random forest classifier. Finally, we evaluate and analyze the reliability of the generated samples and of the rice mapping results. The results demonstrate four critical findings: (1) The method generates spectrally representative samples, achieving high consistency with field data (Spectral Correlation Similarity (SCS) = 0.987; Dynamic Time Warping (DTW) distance = 4.719). These samples exhibit spatial heterogeneity and broad coverage, addressing the inefficiencies and biases inherent in traditional manual sampling. (2) Feature selection effectively reduces model complexity while preserving classification accuracy. Features from the transplanting period contribute most, and SAR features are more important than optical features; the two data types have complementary advantages and can synergistically improve classification robustness under cloudy conditions. (3) The automated rice mapping achieved an overall accuracy of 89%, an F1 score of 89%, and a total area extraction error of -7.5%, validating its reliability in spatially heterogeneous landscapes. In hilly mountainous areas with significant spatial heterogeneity, a moderate number of spatially well-distributed samples helps improve the stability and generalization ability of the model. In addition, rice exhibits significant clustering in flat, well-irrigated areas. (4) The overall uncertainty of rice mapping in the study area was low (0.29), indicating that the mapping results are reliable. Uncertainty is mainly affected by the combination of environmental and agricultural management conditions, parcel morphology, and sample characteristics. In areas with flat topography, favorable irrigation conditions, regular parcels, and sufficient samples, the spectral characteristics of rice are distinct and the mapping uncertainty is lower. This study provides a reliable automated sample generation approach for rapid and precise rice mapping in hilly and mountainous areas and offers a scientific foundation for developing sampling strategies, selecting optimal feature bands, and assessing uncertainty. In turn, it provides a reliable basis for precision agriculture planning and food security assessment.
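The sample-quality checks quoted above (SCS and DTW) can be illustrated with a small sketch; the SCS here is a plain correlation-style proxy and the backscatter profiles are invented.

```python
import numpy as np

def spectral_correlation_similarity(a, b):
    """Correlation-style similarity between two parcel-level time-series profiles."""
    return float(np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1])

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance with absolute-difference cost."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return float(d[n, m])

# Hypothetical VH backscatter profiles of an auto-generated sample vs. a field sample
auto  = [-17.2, -18.9, -21.5, -19.0, -15.8, -14.2]
field = [-17.0, -18.5, -21.9, -18.6, -15.5, -14.0]
print(spectral_correlation_similarity(auto, field), dtw_distance(auto, field))
```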
Keywords: hilly and mountainous areas; seasonal pattern; time-series remote sensing; cropland parcel-scale; automated sample generation; rice mapping
Updated: 2025-06-18
New progress has been made in research on error traceability of remote sensing products, providing theoretical and technical support for Earth resource monitoring.

    LI Ziwei, WEN Jianguang, XIAO Qing, YOU Dongqin, TANG Yong, WU Xiaodan, LIN Xingwen, PIAO Sen, ZHAO Na, LI Ququ

    DOI:10.11834/jrs.20254475
Abstract: Remote sensing products are of great significance for Earth resource monitoring, environmental governance, and climate change research. With the continuous enhancement of the product production and service system, it is highly urgent to facilitate the development of high-quality remote sensing products. The quality of remote sensing products is inherently influenced by multi-source errors originating from sensor performance limitations, radiometric calibration, atmospheric correction, geometric distortions, and retrieval uncertainties. Errors introduced at each stage propagate cumulatively, leading to significant uncertainty in the quality of the derived products. Although validation offers quantitative assessment, it cannot reveal the specific sources and magnitudes of errors in detail. To ensure the reliability and consistency of these products, the guidance documents provided by the Quality Assurance Framework for Earth Observation (QA4EO) emphasize that it is crucial for data and derived products to be associated with an indicator of quality that is traceable to reference standards (preferably the International System of Units, SI). This traceability framework not only enables quantitative inter-product comparisons but also objectively characterizes their accuracy disparities. This paper focuses on error traceability in remote sensing product production and validation. Errors originate from two distinct sources: intrinsic product errors and measurement uncertainties associated with ground reference data. The paper commences with a detailed description of the origins of these errors in both components. Subsequently, it elucidates the fundamental concepts of validation and error traceability, highlighting the significance of establishing SI-traceable propagation chains. Additionally, three main methods of error traceability in remote sensing product production and validation are summarized: uncertainty estimation, error decomposition, and combined algorithm testing and validation. The core objective is to accurately determine the sources and magnitudes of errors at each stage of product production and validation. Typical applications of these methods are illustrated through case studies, which provide a foundation for analyzing error propagation mechanisms and developing error propagation models. Finally, conclusions and prospects for the future development of error traceability are summarized. Understanding the laws of error propagation and establishing a comprehensive error transmission chain are crucial issues to be addressed in error traceability research, and are vital for algorithm improvement and product quality enhancement. Currently, the study of error traceability in remote sensing products remains in its infancy, with relatively few existing investigations in this area. Moreover, developing a traceable quality indicator system that fully reflects the uncertainties arising from input data, processing, and validation procedures is also an essential component of error traceability; such indicators serve as a vital means of presenting traceability results. Both producers and users are concerned about the quality of remote sensing products. The findings of this study provide theoretical and technological support for the further advancement of remote sensing product production and validation. However, how to further address the key issues of error traceability in the future, particularly constructing a whole-process error propagation model and developing a traceability method that considers both accuracy and uncertainty, remains a critical area requiring in-depth investigation.
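Among the traceability methods listed above, uncertainty estimation often reduces to a GUM-style uncertainty budget; the sketch below combines independent standard uncertainties in quadrature, with purely illustrative component values (correlated terms would require covariances).

```python
import numpy as np

def combined_standard_uncertainty(components):
    """Root-sum-of-squares combination of independent standard uncertainties."""
    u = np.asarray(components, dtype=float)
    return float(np.sqrt(np.sum(u ** 2)))

# Hypothetical budget for a surface-reflectance product (values are illustrative only)
u_total = combined_standard_uncertainty([
    0.010,  # radiometric calibration
    0.008,  # atmospheric correction
    0.004,  # geometric co-registration
    0.006,  # retrieval algorithm
])
print(round(u_total, 4))
```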
Keywords: remote sensing product; uncertainty; validation; error traceability
Updated: 2025-06-17
In the geosciences, new progress has been made in studying the spatiotemporal characteristics of land subsidence in the Beijing plain: spatial and temporal principal component analysis reveals subsidence trends, fluctuation characteristics, and spatial distribution patterns, providing data support for subsidence prevention and control.

    SONG Zongwen, WANG Yanbing, LI Xiaojuan, LI Chenxia, ZHAO Yali, LI Yanxin, YANG Xiyue, CHENG Haowen

    DOI:10.11834/jrs.20254142
Abstract: Objective Understanding the spatiotemporal characteristics of land subsidence in Beijing is crucial due to its significant environmental and geological implications. However, traditional approaches to analyzing land subsidence often consider temporal or spatial aspects separately, which may overlook hidden information and potential patterns within the data. Spatial and Temporal Principal Component Analysis (ST-PCA) is presented in this paper to identify the hidden spatiotemporal characteristics of land subsidence in the Beijing plain. Method (1) Permanent Scatterer Interferometric Synthetic Aperture Radar (PS-InSAR) is a remote sensing technique for measuring and monitoring displacement of the Earth's surface. PS-InSAR is used in this paper to obtain long-term land subsidence information for the Beijing Plain from Radarsat-2 SAR data. (2) ST-PCA is a mathematical transformation that converts a set of correlated variables into a set of uncorrelated new variables, simplifying the complexity of high-dimensional data through spatiotemporal clustering. Here, ST-PCA is applied to uncover the spatiotemporal variation features of land subsidence in the Beijing Plain based on subsidence data from 2010 to 2016. Result (1) Time Principal Component Analysis (TPCA) can be used to determine the trend and fluctuation characteristics of land subsidence, along with the corresponding spatial distribution patterns. The eigenvalues derived from TPCA determine the amount of information explained by each component. The first two principal components explain about 96.5% and 2.2% of the variance in the study area, respectively. TPC1 provides insight into the overall spatially uneven subsidence trend within the study area, while TPC2 reveals the seasonal fluctuation characteristics of land subsidence and their spatial differences. In particular, significant seasonal differences in subsidence distribution were found in subsidence funnel areas where the subsidence rate exceeds 30 mm/a: within these funnels, the northern region experienced increased subsidence during summer, whereas the southern region experienced increased subsidence during winter. (2) Spatial Principal Component Analysis (SPCA) is a useful technique for understanding the temporal evolution patterns of land subsidence by clustering subsidence points with similar temporal trends into the same principal component. The first three principal components explain a cumulative variance contribution of 90.3%. Specifically, SPC1, accounting for 76.3% of the variance, demonstrates the dominant feature of subsidence in most areas, a continuous linear subsidence trend. SPC2 and SPC3, contributing 13.0% and 1.0% of the variance respectively, indicate that average annual subsidence was close to zero, while significant seasonal fluctuation characteristics were found in areas of slight subsidence and non-subsidence. Conclusion The application of ST-PCA in this study successfully unveiled the spatiotemporal characteristics of land subsidence in the study area. ST-PCA could also be applied to other long time-series geoscientific data. It was found that the principal components obtained from PCA are linear combinations of the original variables and are highly comprehensive. However, further research is still necessary to fully understand their geoscientific significance.
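A compact sketch of the two decompositions described above is given below: the same centered SVD is applied to the subsidence matrix and to its transpose, and the variance contribution rates come from the squared singular values. Conventions for T-mode versus S-mode PCA vary, so treat the orientation choices here as assumptions; the data are random placeholders.

```python
import numpy as np

def pca_modes(M):
    """PCA via SVD of the column-centered matrix; returns the components (rows of Vt)
    and the variance contribution rate of each component."""
    Mc = M - M.mean(axis=0, keepdims=True)
    U, S, Vt = np.linalg.svd(Mc, full_matrices=False)
    return Vt, S ** 2 / np.sum(S ** 2)

# X: subsidence series, shape (n_epochs, n_points); random placeholder values
X = np.random.default_rng(0).normal(size=(72, 500))
tpc, t_ratio = pca_modes(X.T)   # "TPCA": time epochs treated as variables
spc, s_ratio = pca_modes(X)     # "SPCA": subsidence points treated as variables
print(t_ratio[:2].round(3), s_ratio[:3].round(3))   # variance contribution rates
```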
Keywords: ST-PCA; spatial and temporal evolution characteristics; land subsidence
Updated: 2025-06-17
China's independently developed LuTan-1 A/B satellites (LT-1A/1B) show strong application potential for building change detection in urban areas. Based on LT-1 data, researchers propose an urban building change detection algorithm that fuses multi-temporal SAR amplitude features and coherence information, offering a solution for urban planning and the investigation of illegal construction.

    WANG Xiuhua, FENG Guangcai, WANG Bin, LIU Jincang, JIANG Hongbo, LI Ning

    DOI:10.11834/jrs.20254179
Abstract: Objective Urban building change detection is an important part of land use monitoring, resource management, and urban planning. However, because of frequent cloud, fog, and rain in southern China, optical images are often of limited use. China's self-developed LT-1A/1B satellites offer high resolution, a short revisit period, multi-polarization, and all-day, all-weather imaging; they can serve as an important supplement to optical imagery and have strong application potential in change detection. Based on LT-1 data, this paper proposes an urban building change detection algorithm that combines multi-temporal SAR amplitude and coherence information. Method The algorithm uses building recognition, color model conversion, and coherence change constraints to relate multi-temporal changes in pixel values to building changes, thereby delineating areas of urban building change and locating the period in which each change occurred. Result We take a local area of Hengqin Town, Zhuhai City as an example and derive building change detection results for the study area from five LT-1 ascending scenes acquired between June 23, 2023 and November 14, 2023. We then verify the detection results in the northeastern and central parts of Hengqin Town against the actual construction progress of buildings. The results show that the detected changes of 19 buildings in eight typical areas are consistent with their construction periods, verifying the reliability of the algorithm. Conclusion The study demonstrates the applicability and value of LT-1 data in the field of change detection, with broad application prospects in urban planning and the investigation of illegal construction.
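The coherence information used by the algorithm is the magnitude of the complex correlation between co-registered SAR acquisitions; a standard boxcar estimator is sketched below with toy data (window size and simulated scenes are assumptions).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, win=5):
    """Boxcar estimate of interferometric coherence between two co-registered SLCs."""
    cross = s1 * np.conj(s2)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win) *
                  uniform_filter(np.abs(s2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)

rng = np.random.default_rng(1)
slc_a = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
slc_b = 0.8 * slc_a + 0.2 * (rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64)))
gamma = coherence(slc_a, slc_b)   # low values between dates can indicate surface change
```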
Keywords: SAR; building change detection; amplitude; color model conversion; coherence
Updated: 2025-06-17
In cloud-aerosol research, the newly proposed 2D-SMA algorithm uses a statistical probability model and two-dimensional layer-detection windows to significantly improve the accuracy and reliability of layer detection.

    YU Hongyang, XU Weiwei, MAO Feiyue, ZANG Lin, GONG Wei

    DOI:10.11834/jrs.20254435
Abstract: Objective Spaceborne lidar is a unique approach for the research and monitoring of clouds and aerosols due to its ability to observe their vertical properties in the atmospheric profile. The Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) onboard the CALIPSO satellite has been operational in orbit for many years and has provided a large number of profiling measurements of clouds and aerosols. Detecting the spatial locations of cloud and aerosol layers in lidar data is a prerequisite for accurately retrieving and extracting layer information. While the official CALIOP algorithm detects layers using empirical thresholds, many layers are missed in its output, which may explain the systematic underestimation of aerosol optical depth (AOD) by CALIPSO compared to that by the Moderate Resolution Imaging Spectroradiometer (MODIS). Current hypothesis-testing methods, represented by the 1D Simple Multiscale Algorithm (1D-SMA), determine whether a given signal belongs to a layer by verifying whether it conforms to the distribution hypothesis of background air. These methods eliminate the traditional empirical threshold array and improve the accuracy of layer detection. However, none of them takes into account the spatial continuity of layer signals in the 2D vertical profiling scene, and missed layers still occur. Method This study proposes a 2D Simple Multiscale Algorithm (2D-SMA) based on the Bernoulli probability distribution, which replaces the empirical threshold with a statistical probability model. For a background-atmosphere bin, the probabilities that its signal intensity is greater or less than the ideal value are both 1/2. Assuming that the signals in each bin are independent, the number of signal bins within the detection window that are greater (or less) than the ideal value follows a repeated Bernoulli experiment. We design a probability of belonging to a layer based on the signal intensities of all bins in the detection window, reflecting their overall deviation from ideal background-atmosphere bins. The new algorithm marks the center bin of the detection window as a layer bin when this probability is sufficiently small, meaning the window deviates far from the background atmosphere. To utilize the spatial correlation of signals on adjacent profiles, we use 2D layer-detection windows covering multiple profiles at different horizontal resolutions. Result A statistical comparison based on the continuous global observations of CALIOP in December 2017 shows that at full horizontal resolution (5-80 km), the new algorithm detects 50.45% and 32.45% more layer area than the official CALIOP algorithm and 1D-SMA, respectively. Applied at a horizontal resolution of 5-20 km, the new algorithm achieves a comparable or greater area of detected layers than the official algorithm at 5-80 km horizontal resolutions. Moreover, this paper demonstrates the reliability of the layers identified by the new algorithm by evaluating the depolarization ratio of ice clouds. Conclusion In general, the new algorithm effectively reduces the missing of weak layers compared with the official CALIOP algorithm and is simple and fast to implement, with promising research potential and application prospects.
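The Bernoulli-based decision described above can be sketched directly: count how many bins in the 2-D window exceed the ideal clear-air value and compute a binomial tail probability under p = 1/2. Window size, threshold, and signal values below are illustrative, not the algorithm's operational settings.

```python
import numpy as np
from scipy.stats import binom

def layer_probability(window, ideal):
    """Two-sided binomial probability that the window's bins deviate from the
    ideal background value this asymmetrically by chance (p = 1/2 per bin)."""
    window = np.asarray(window, dtype=float)
    n = window.size
    k = int(np.sum(window > ideal))             # bins brighter than the ideal value
    tail = binom.sf(max(k, n - k) - 1, n, 0.5)  # P(X >= max(k, n - k)) under H0
    return min(1.0, 2.0 * tail)

# 3-profile x 15-bin detection window of attenuated backscatter (toy values)
window = np.full((3, 15), 1.4)
mark_center_as_layer = layer_probability(window, ideal=1.0) < 1e-4
```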
Keywords: spaceborne LiDAR; CALIOP; cloud and aerosol; layer detection; multiscale
Updated: 2025-06-17
New progress has been made in cross-view image geo-localization research: researchers construct a classification system of datasets, laying a data foundation for improving localization accuracy.

    ZHANG Xiao, GAO Yi, XIA Yuxiang, ZHAO Chunxue

    DOI:10.11834/jrs.20254348
Abstract: Cross-view image geo-localization aims to retrieve the most similar image from a reference database by matching images captured from different viewpoints, and subsequently leverages the images' GPS tags to fulfil localization tasks. Traditional single-view image geo-localization is limited by factors such as dataset quality, scale, and positioning accuracy. Therefore, in recent years, numerous researchers and institutions have released a series of cross-view geo-localization datasets, laying the data foundation for improving geo-localization accuracy. Nevertheless, there is still a lack of systematic analysis of these datasets. Objective This paper aims to provide a comprehensive review of the published cross-view image geo-localization datasets. Method Based on a literature review, we collect and organize 32 classic cross-view image geo-localization datasets spanning 2011 to 2024, and construct a classification system along four dimensions: viewpoint information, construction type, authenticity, and temporal information. We summarize the basic information of these datasets in tabular form, including each dataset's name, image resolution, data scale, and covered scenes, thereby expressing the fundamental attributes of cross-view geo-localization datasets from multiple perspectives. We then delve into these datasets from five aspects: metadata, influence, keywords, acquisition sources, and application fields. Additionally, we collate and summarize the mainstream algorithms for cross-view image geo-localization (e.g., network structure optimization, loss function optimization, and attention mechanisms). Finally, we discuss the future development directions of cross-view localization datasets from four perspectives: the trend toward multimodal datasets, the use of large language models, image distraction handling, and model optimization. Result In summary, we offer a comprehensive review of cross-view image geo-localization datasets from various perspectives. To the best of our knowledge, this paper is the first review of such datasets in the field and can provide a reference for researchers in related areas. Conclusion However, current datasets still face issues such as low data quality, limited data sources, and weak generalization ability; further research is therefore needed.
Keywords: cross-view; image geo-localization; datasets; deep learning; unmanned aerial vehicle; image retrieval; image matching; computer vision
Updated: 2025-06-17
The Cloud-Harmonizer cloud removal technique marks a breakthrough in remote sensing image processing, effectively restoring information in cloud-covered regions and improving image quality.

    Zhu Shudan, Lei Fan, Zhang Lijun, Yang Min, Yang Kaijun, Wei Jide, Feng Ruyi

    DOI:10.11834/jrs.20254505
Abstract: Objective Cloud occlusion remains a persistent challenge in optical remote sensing imagery, as conventional cloud removal methods often fail to fully restore details in occluded areas, leading to degraded image quality. Clouds not only obscure critical ground information but also introduce noise and artifacts during reconstruction, limiting the imagery's utility for applications such as land cover monitoring, disaster assessment, and environmental studies. To address this issue, this paper proposes a cloud removal approach based on multi-modal feature consistency fusion (Cloud-Harmonizer). The framework leverages the complementary characteristics of and consistency between Synthetic Aperture Radar (SAR) and optical imagery to effectively restore cloud-occluded regions and generate high-quality reconstructed optical images. Unlike traditional methods relying solely on temporal or spatial interpolation, this approach capitalizes on the inherent advantage of SAR data (which are unaffected by cloud cover) to guide the reconstruction process and ensure the authenticity of restored areas. By integrating multi-modal data, the method aims to improve both structural and spectral recovery in cloud-affected images. Method The Cloud-Harmonizer framework comprises three core modules for feature extraction, alignment, and fusion of SAR and optical images. The Multi-modal Feature Consistency Module (MFCM) maps features from both modalities into a shared vector space and generates modality-difference attention to locate cloud-affected regions, ensuring compatibility between the feature representations of the two modalities for precise occlusion identification. The Consistency-Constrained Compensation Module (CCCM) uses the difference attention to guide SAR data in compensating for missing features in the optical imagery, enabling reconstruction that closely resembles the actual scene. The Multi-modal Collaborative Adaptive Fusion Module (MCAF) employs self-attention-based adaptive fusion strategies to optimize the integration of both modalities and enhance overall reconstruction quality. This modular design enables accurate compensation and robust feature fusion under various environmental conditions, including dense cloud coverage and complex terrain; the framework dynamically adjusts the fusion process according to the characteristics of the input data, making it suitable for diverse remote sensing scenarios. Result Experiments conducted on the SEN12MS-CR dataset validate the method's effectiveness. The Cloud-Harmonizer framework achieves a Peak Signal-to-Noise Ratio (PSNR) of 30.0408, a Structural Similarity Index (SSIM) of 0.9004, and a Spectral Angle Mapper (SAM) of 7.6068, demonstrating improvements over existing cloud removal methods. These quantitative results indicate the model's capability to recover detailed information while maintaining structural and spectral consistency in the reconstructed images. Comparative analyses with existing methods show that the proposed approach effectively preserves textures, edges, and other details while reducing artifacts in cloud-occluded regions. Qualitative evaluations further confirm that the reconstructed images exhibit a natural visual appearance, validating the framework's robustness. Conclusion The experimental results demonstrate the potential of the Cloud-Harmonizer framework for cloud removal and feature restoration in optical remote sensing imagery. By effectively utilizing multi-modal data fusion, the method addresses cloud occlusion challenges while enhancing feature consistency between the SAR and optical modalities. The approach benefits from the complementary characteristics of both data types, achieving accurate reconstruction of occluded areas while maintaining image quality. The framework's modular and adaptive design provides a foundation for exploring more sophisticated fusion strategies and extending the approach to other remote sensing challenges. With the growing demand for high-quality remote sensing data, Cloud-Harmonizer may serve as a viable solution for improving the usability of optical imagery in cloud-prone environments.
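Two of the reported metrics (PSNR and SAM) are standard and easy to reproduce; a sketch with random patches follows (SSIM is omitted, and the band count and data range are assumptions).

```python
import numpy as np

def psnr(ref, rec, max_val=1.0):
    """Peak signal-to-noise ratio between a reference and a reconstructed image."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def sam_degrees(ref, rec, eps=1e-12):
    """Mean spectral angle (degrees) between per-pixel spectra, shape (H, W, bands)."""
    dot = np.sum(ref * rec, axis=-1)
    denom = np.linalg.norm(ref, axis=-1) * np.linalg.norm(rec, axis=-1) + eps
    ang = np.arccos(np.clip(dot / denom, -1.0, 1.0))
    return float(np.degrees(ang).mean())

# Toy check on random multispectral patches (real evaluation uses SEN12MS-CR scenes)
rng = np.random.default_rng(0)
ref = rng.random((64, 64, 13))
rec = np.clip(ref + rng.normal(scale=0.02, size=ref.shape), 0, 1)
print(psnr(ref, rec), sam_degrees(ref, rec))
```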
Keywords: cloud removal in remote sensing imagery; multi-modal data fusion; multi-modal feature consistency
Updated: 2025-05-31
In research on forest vertical structure, researchers working in Northeast China established an RF-EBK modeling framework for forest canopy height estimation, providing technical support for precision silviculture.

    LI Xiang, ZHAO Yinghui, ZHEN Zhen

    DOI:10.11834/jrs.20255033
Abstract: Forest canopy height, as a key parameter reflecting the vertical structure of forests, is essential for understanding the structure and function of forest ecosystems. Accurate estimation of canopy height is critically important for carbon cycle assessments, above-ground biomass (AGB) estimation, and ecosystem health monitoring. With the continuous advancement of remote sensing technologies, particularly the integration of LiDAR and optical remote sensing data, the potential for estimating forest canopy height at regional scales has become increasingly prominent, making it a current research hotspot in forest resource monitoring. This study focuses on Northeast China (NEC) and proposes a hybrid model that integrates Random Forest (RF) and Empirical Bayesian Kriging (EBK), referred to as the RF-EBK model, aiming to enhance the accuracy and robustness of regional-scale canopy height estimation. The model incorporates discrete canopy height data from the spaceborne LiDAR ICESat-2 (ATL08), Landsat 8 OLI imagery, Shuttle Radar Topography Mission (SRTM) elevation data, and forest canopy cover data (CATCD). Initially, a recursive feature elimination method with cross-validation was employed to select optimal variables, reduce redundancy, and improve the model's generalization ability. The RF model was then used to produce initial canopy height estimates, and residuals were calculated using a test dataset. Given the spatial autocorrelation of the residuals, the EBK method was applied to spatially model and interpolate them, generating a continuous residual surface across the study area. This residual surface was used to correct the RF predictions, effectively improving estimation accuracy. Ultimately, a highly accurate forest canopy height map at 30 m resolution for NEC in 2023 was produced. The results show that forest canopy cover was the most important variable in the model. Among topographic factors, slope, elevation, and aspect were also highly influential, reflecting the significant role of terrain in determining vegetation type and growth conditions. Among the optical remote sensing features, the original Landsat 8 OLI bands B2, B4, and B7 exhibited high importance. Moreover, texture features derived from bands B3, B6, and B7 (i.e., B3_savg, B6_savg, and B7_savg) were more important than the original bands, underscoring the value of incorporating spatial texture features for canopy height estimation. The Tasseled Cap Greenness (TCG), indicative of canopy cover and vegetation health, also showed strong predictive power. In terms of model performance, the RF-EBK model significantly outperformed the standalone RF model by effectively mitigating the overestimation of low canopy heights and the underestimation of high canopy heights. After residual correction, the coefficient of determination (R²) on the validation set increased by 59.52%, while the RMSE and rRMSE decreased by 27%. Furthermore, canopy height measurements extracted from unmanned aerial vehicle laser scanning (ULS) data collected at six sites were used as reference data for model validation. The results showed that the RF-EBK model achieved high accuracy, with an R² of 0.69, an RMSE of 1.65 m, and an rRMSE of 7.81%. In conclusion, the RF-EBK model provides a reliable approach for highly accurate estimation of forest canopy height at the regional scale and offers robust technical support for precision silviculture and sustainable forest resource management in Northeast China.
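A reduced sketch of the RF-plus-residual-correction workflow is given below with synthetic data; inverse-distance weighting stands in for EBK (which is not reimplemented here), and the predictor names and values are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def idw_residual_surface(xy_known, res_known, xy_query, power=2.0, eps=1e-9):
    """Inverse-distance-weighted residual interpolation (simple stand-in for EBK)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=-1) + eps
    w = 1.0 / d ** power
    return (w @ res_known) / w.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.random((400, 6))                          # predictors: cover, slope, bands, texture...
xy = rng.random((400, 2)) * 1e5                   # footprint coordinates (metres)
h = 8 + 10 * X[:, 0] + rng.normal(0, 1.5, 400)    # synthetic ICESat-2 canopy heights

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:300], h[:300])
residuals = h[:300] - rf.predict(X[:300])         # residuals at reference footprints
corrected = rf.predict(X[300:]) + idw_residual_surface(xy[:300], residuals, xy[300:])
```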
Keywords: ICESat-2; unmanned aerial vehicle laser scanning; Landsat 8 OLI
Updated: 2025-05-27
In satellite remote sensing, researchers propose a BP neural network method that fuses satellite laser altimetry data with digital surface models derived from Gaofen-7 (GF-7) stereo imagery, effectively alleviating the low elevation accuracy of DSMs generated over gully-dissected terrain without ground control, and offering a new approach for applying domestic high-resolution imagery to high-precision terrain modeling.

    Zhang Yunlong, Hu Wenmin, Wei Wei, Qin Kai, Xu Jiaxing, Zhang Wei

    DOI:10.11834/jrs.20254406
Abstract: The accuracy of Digital Surface Models (DSMs) reconstructed from satellite stereo imagery is generally lower in rugged, gully-dominated landscapes than in flat areas, especially in the absence of ground control points (GCPs). Moreover, collecting GCPs in large, rugged areas is often operationally challenging or costly. To improve the accuracy of DSMs derived from satellite stereo imagery, this study proposes a method that integrates laser altimetry data with the DSM generated from Gaofen-7 (GF-7) satellite stereo imagery using a backpropagation (BP) neural network. The method seeks to improve DSM accuracy in areas where GCPs are scarce or costly to obtain, providing a more efficient solution for terrain modeling in such regions. Exploiting the high elevation accuracy of laser altimetry data, the method improves DSM accuracy without GCPs by establishing relationships between elevation data from the GEDI (Global Ecosystem Dynamics Investigation) mission and multiple factors, including the DSM generated from GF-7 stereo imagery without GCPs, geographic coordinates (longitude and latitude), terrain slope, and terrain errors. This fusion is achieved with a BP neural network trained to model and enhance the accuracy of the DSM under uncontrolled conditions. The study area is located in the Loess Plateau region spanning Shaanxi Province and the Inner Mongolia Autonomous Region, China, and is characterized by severely eroded loess gullies, intensive mining activity, and dramatic terrain undulations. The method is tested across regions covered by different numbers of images, and its performance is validated through comparisons with ground-truth data measured by RTK. The experimental results show that the elevation error of the GF-7 stereo-derived DSM in gully-developed mining areas without GCPs can reach 20.49 meters. In contrast, the average elevation accuracy of the fused DSM improves to 1.63 meters after applying the BP neural network fusion, which is comparable to the accuracy of the DSM generated with GCPs (1.44 meters). Furthermore, when quality-filtered laser altimetry points are used directly as substitute GCPs, the elevation accuracy of the DSM can also be refined to 2.40 meters; the proposed multi-factor fusion method nevertheless performs better (1.63 meters vs. 2.40 meters), confirming the advantage of the fusion approach. The improvement in elevation accuracy demonstrates that, whether used directly as GCP substitutes or through data fusion, incorporating laser altimetry data can effectively enhance the vertical accuracy of DSMs derived from GF-7 satellite stereo imagery without GCPs. The findings show that the proposed BP neural network-based fusion method significantly enhances the accuracy of DSMs generated from GF-7 stereo imagery in areas without GCPs. The approach effectively addresses the problem of low elevation accuracy in regions with complex topography, such as gully-developed mining areas. The study not only provides an innovative solution for terrain modeling in areas lacking GCPs but also offers a new way to use domestic high-resolution satellite imagery for high-precision terrain reconstruction. This method contributes to the advancement of terrain modeling techniques, especially in regions where obtaining GCPs is difficult or impossible, and provides a viable means of reducing labor costs, improving operational efficiency, and reconstructing high-quality terrain DSMs.
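The fusion step can be sketched with scikit-learn's MLPRegressor standing in for the BP network: features at laser footprints (uncontrolled DSM height, longitude, latitude, slope) are regressed onto GEDI elevations and the trained model is then applied to the whole DSM. All numbers below are synthetic; the paper also uses a terrain-error factor.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
feats = np.column_stack([
    rng.uniform(900, 1400, n),     # DSM elevation without GCPs (m)
    rng.uniform(109.0, 110.0, n),  # longitude
    rng.uniform(38.0, 39.0, n),    # latitude
    rng.uniform(0, 35, n),         # slope (degrees)
])
gedi_elev = feats[:, 0] - 18.0 + 0.05 * feats[:, 3] + rng.normal(0, 1.5, n)  # synthetic truth

scaler = StandardScaler().fit(feats)
bp_net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
bp_net.fit(scaler.transform(feats), gedi_elev)            # learn DSM -> laser elevation
corrected = bp_net.predict(scaler.transform(feats))       # apply to every DSM cell in practice
```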
Keywords: digital surface model; ground control points; neural network; gully-developed areas; GF-7 satellite stereo imagery; GEDI
Updated: 2025-05-27
Progress has been made in studying the distribution of photovoltaic power stations in Xinjiang and their impact on vegetation: a deep learning model identifies PV stations with 98.64% accuracy, providing data support for PV siting and ecological assessment.

    QIAO Jiajia, YAN Min, LIU Yongqiang, ZHANG Li, WU YIN, CHEN Yiyang, SHAO Wei

    DOI:10.11834/jrs.20254192
Abstract: The Xinjiang Uygur Autonomous Region, endowed with abundant land and solar energy resources, has emerged as a national leader in installed photovoltaic (PV) capacity, driven by the growing demand for renewable energy and advancements in PV technology. Accurate, timely identification of PV station distribution, along with quantitative analysis of their effects on the spatial aggregation of surrounding vegetation, provides crucial data and decision-making support for PV siting in Xinjiang. This study uses deep learning semantic segmentation models that combine three architectures (UNet, PSPNet (Pyramid Scene Parsing Network), and DeepLabV3+) with eight backbone networks (ResNet-34, ResNet-50, ResNet-101, ResNet-152, MobileNetV2, DarkNet53, VGG16, and DenseNet121). The objective is to determine the optimal model for PV station detection and to map the spatial distribution of PV stations across Xinjiang. To assess the impact of PV station construction on vegetation spatial aggregation, Global Moran's I values were calculated as a time series within buffer zones divided into equal intervals from 30 m to 600 m around the PV stations. The results reveal that: (1) the UNet-ResNet50 model demonstrates superior performance in PV station recognition, achieving an accuracy of 98.64% (an improvement of 0.09 percentage points), an F1 score of 95% (an improvement of 0.4 percentage points), and an Intersection over Union (IoU) of 90.47% (an improvement of 0.57 percentage points); its strong recognition capability is primarily attributable to the high accuracy of the PV sample set and the model's feature extraction and depth-balancing abilities. (2) Using Sentinel-2 imagery and the UNet-ResNet50 model, the PV stations in Xinjiang in 2020 were extracted and classified into vegetation-covered and bare-land PV stations, with area proportions of 30% and 70%, respectively. (3) Within buffer zones 30 m to 210 m from a PV station, the Global Moran's I of vegetation shows a significant downward trend from 2012 to 2020; in buffer zones 210 m to 600 m away, the downward trend slows markedly. The closer to the PV station, the greater the impact on the spatial aggregation of vegetation and the more evident the downward trend in the time series.
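The aggregation measure used above is Global Moran's I; a compact sketch with a toy adjacency matrix follows (the study computes it per buffer ring on real vegetation values).

```python
import numpy as np

def global_morans_i(values, weights):
    """Global Moran's I for zone values and an n x n spatial weight matrix
    (diagonal zero); weights are row-standardized internally."""
    z = values - values.mean()
    w = weights / weights.sum(axis=1, keepdims=True)   # row standardization
    n, s0 = len(values), w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Toy example: vegetation index of pixels in one buffer ring, neighbours = adjacency
ndvi = np.array([0.31, 0.29, 0.33, 0.18, 0.17, 0.16])
w = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
print(global_morans_i(ndvi, w))
```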
Keywords: photovoltaic station; semantic segmentation model; vegetation spatial aggregation; Global Moran's Index
Updated: 2025-05-15
At the Fukang coalfield fire area in Xinjiang, researchers used a ground-based shortwave-infrared imaging spectrometer to propose a new method for detecting methane escaping from coal fires, offering a new approach to early identification and warning of spontaneous coal combustion.

    LIU Yanqiu, QIN Kai, CAO Fei, ZHONG Xiaoxing, TIAN Weixue, COHEN Jason Blake, BAO Xingdong

    DOI:10.11834/jrs.20254268
Abstract: Objective Coal-rich regions such as Xinjiang, Ningxia, and Inner Mongolia frequently experience spontaneous combustion of coal seams, releasing significant quantities of methane, a high-impact greenhouse gas. The unorganized and diffuse nature of these emissions poses significant challenges for detection and quantification, often contributing to the so-called 'missing carbon' sector in greenhouse gas inventories. Given the limitations of satellite resolution and the inadequate adaptability of existing methane detection technologies in rugged terrain, this study focuses on coal fire areas characterized by unorganized emissions. Using wide-field ground-based hyperspectral imagery, we developed a detection methodology for methane emissions from individual coal fire sources in diverse terrains, with the goals of analyzing methane escape patterns, assessing the latent risks of underground coal fires, and offering a new framework for early-warning systems. Method We investigated methane escape in two representative mountain coal fire areas in Fukang, Xinjiang, using ground-based hyperspectral wide-field imagery collected in June 2023. Seven distinct algorithms were applied. First, we proposed a Modified Least Squares Image Enhancement (MLSIE) algorithm by applying the L2-norm least squares regression principle to methane-sensitive spectral windows (1.66 μm and 2.3 μm) in the SWIR range. Second, we developed the methane Ratio Derivative Spectral Unmixing (RDSU) algorithm, represented by RCH4I1 and RCH4I3, incorporating the spectral ratio derivative unmixing technique and the spectral sensitivity of methane to suppress background interference from pseudo coal-fire regions while enhancing methane's spectral signature. Third, by integrating factors such as cloud shadow detection, mineral content, combustion characteristics, and building-related features, we developed two methane ratio indices: 1DSRCH4I3, which enhances the contrast between methane and artifacts, and 2DSRCH4I3, which mitigates artifact interference and highlights methane-enriched areas. Result Our evaluation demonstrated the following: (1) the proposed MLSIE, RDSU, and DSRCH4I algorithms significantly improved methane detection accuracy compared with the existing CH4I algorithm; (2) the 2DSRCH4I3, MLSIE (2.3 μm), and RCH4I1 algorithms exhibited superior performance in complex terrain, while the 2DSRCH4I and MLSIE (2.3 μm) algorithms were also effective in relatively simple mountainous coal fire areas; (3) MLSIE (2.3 μm) displayed robust generalization ability, 2DSRCH4I3 effectively minimized artifacts and false positives, leading to superior detection accuracy, and RCH4I1 demonstrated clear detection efficacy in coal fire areas with significant methane leakage; (4) methane plumes were detected in two distinct forms, one coexisting with combustion flames and the other freely escaping into the surrounding atmosphere. Conclusion This work presents a novel methodology for detecting methane emissions from high-temperature, panoramic coal fire sources with diverse geomorphic characteristics. The findings provide valuable technical support for the identification and assessment of underground coal fire risks. Furthermore, this work introduces a new approach to early-warning systems for coal fire disasters, using methane escape as a diagnostic indicator of potential fire formation. However, we recognize that the influence of black carbon aerosols, which may interfere with mixed spectral signals, was not addressed. Future research could explore the dynamic quantification of methane plumes by integrating hyperspectral image spectral enhancement techniques to improve detection accuracy.
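As a loose illustration of the least-squares idea behind MLSIE (not the authors' algorithm), the sketch below fits a linear continuum plus an idealized methane absorption shape to a SWIR window by ordinary least squares and uses the template coefficient as an enhancement score; the window, template, and noise are all invented.

```python
import numpy as np

def window_enhancement(wavelengths, radiance, ch4_template):
    """Ordinary least squares fit of [constant, slope, CH4 template] to the window."""
    A = np.column_stack([np.ones_like(wavelengths), wavelengths, ch4_template])
    coeffs, *_ = np.linalg.lstsq(A, radiance, rcond=None)
    return coeffs[2]                     # weight on the absorption shape

wl = np.linspace(2.20, 2.38, 40)                               # 2.3 um window (um)
template = -np.exp(-((wl - 2.30) ** 2) / (2 * 0.01 ** 2))      # idealized absorption dip
obs = (1.0 - 0.02 * (wl - 2.2) + 0.15 * template
       + np.random.default_rng(0).normal(0, 0.003, wl.size))
print(window_enhancement(wl, obs, template))                   # larger value -> deeper dip
```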
Keywords: coal fire; methane; hyperspectral imaging; SWIR; unorganized emissions; plume detection; artifact suppression; greenhouse gas; climate change
Updated: 2025-05-15
In remote sensing image matching, researchers propose a robust optical-SAR matching method based on deep feature reconstruction and enhancement, providing a solution to matching problems in scenes with complex land cover.

    YANG Chao, LIU Chang, TANG Tengfeng, YE Yuanxin

    DOI:10.11834/jrs.20254295
Abstract: The automatic, precise registration of optical and Synthetic Aperture Radar (SAR) imagery remains a significant challenge in remote sensing due to fundamental differences in their imaging mechanisms. The inherent modality gap manifests as substantial radiometric discrepancies (speckle noise vs. photometric consistency) and geometric distortions (side-looking geometry vs. nadir projection), posing critical obstacles for conventional feature matching approaches. While existing deep learning-based methods have made progress in extracting deep features, most architectures inadequately address two crucial aspects, multi-scale feature fusion across different imaging characteristics and cross-modal invariant feature representation, leading to compromised robustness in complex geographical scenarios. To address this, we propose a robust matching method based on deep feature reconstruction for optical and SAR images. Our method features a pseudo-Siamese network that integrates multi-scale deep features and image reconstruction. First, a multi-scale feature extraction architecture efficiently obtains multi-scale deep features at the pixel level; this allows the network to capture detailed information at various scales, which is crucial for understanding the complex patterns present in remote sensing images. Second, a pseudo-SAR translation branch for optical images is designed to reconstruct images from deep features, enhancing the network's ability to learn robust common features. This branch mimics the characteristics of SAR images, enabling the network to find shared features between the two image types more effectively; through this translation process, the network learns to focus on the essential elements common to both optical and SAR images, thereby improving matching accuracy. Finally, a joint loss function based on multi-layer feature matching similarity and the average reconstruction error of the translated image is constructed for robust matching. This loss function ensures that the network not only matches features accurately across layers but also maintains high fidelity in reconstructing the images; by combining these two aspects, the network achieves a balance between feature similarity and reconstruction quality, leading to more reliable matching results. Experiments on two remote sensing image datasets with different resolutions and diverse terrain scenes (urban, suburban, desert, mountain, water) show that our method outperforms several state-of-the-art matching methods in correct matching rate. The proposed method performs well in varied environments, indicating its versatility and effectiveness in real-world applications; the ability to handle different resolutions and terrain types is particularly important for practical remote sensing tasks, where conditions can vary widely. This research advances cross-modal image analysis by providing (1) a new paradigm combining feature learning with cross-domain translation, and (2) practical solutions for SAR-optical registration in challenging environments.
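The joint objective described above (multi-layer feature similarity plus reconstruction error) might look roughly like the PyTorch sketch below; the cosine-based similarity term, the L1 reconstruction term, and the weighting are assumptions rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def joint_loss(feat_opt_layers, feat_sar_layers, pseudo_sar, real_sar, alpha=1.0):
    """Multi-layer feature-similarity loss plus mean absolute reconstruction error."""
    sim = sum(1.0 - F.cosine_similarity(fo.flatten(1), fs.flatten(1)).mean()
              for fo, fs in zip(feat_opt_layers, feat_sar_layers)) / len(feat_opt_layers)
    recon = F.l1_loss(pseudo_sar, real_sar)
    return sim + alpha * recon

loss = joint_loss([torch.randn(4, 64, 32, 32)], [torch.randn(4, 64, 32, 32)],
                  torch.rand(4, 1, 256, 256), torch.rand(4, 1, 256, 256))
```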
Keywords: optical image; SAR image; image matching; deep learning
Updated: 2025-05-15
Recent research proposes a consistency correction method for the lake water color index FUI retrieved from multi-source satellite data, significantly improving the consistency of hue angle and FUI retrievals and providing an important methodological basis for synergistic retrieval of lake water color parameters from multi-source satellite data.

    Tan Zhangru, Wang Shenglei, Li Junsheng, Zhang Fangfang, Zhang Bing

    DOI:10.11834/jrs.20255037
Abstract: Objective The color of lake water, as a direct reflection of its optical properties, is an important climate variable of lake ecosystems. In recent years, the Forel-Ule Index (FUI) derived from satellite remote sensing has been widely used to indicate spatiotemporal variations in lake ecology and water quality over large areas. Multi-source satellite observations can significantly improve observation frequency and spatiotemporal coverage; however, the consistency of FUI retrievals across different satellites remains a challenge. Method This study focuses on six typical lakes on the Tibetan Plateau and develops a consistency correction method for FUI retrieval from multi-source satellite data, including Landsat 5/TM, Landsat 7/ETM+, Landsat 8/OLI, and MODIS surface reflectance products. The method aims to correct the differences in retrieved hue angles and FUI caused by variations in satellite spectral response functions and systematic biases in surface reflectance products. First, a polynomial correction based on a simulated water-body spectral dataset is applied to correct the spectral response of the hue angles derived from the visible bands of the different sensors. Second, using 112,830 pairs of synchronous surface reflectance observations from the six lakes, linear regression models are established between the hue angles retrieved from MODIS and those from Landsat TM, ETM+, and OLI for cross-correction. Finally, the consistency of the multi-source satellite data is systematically evaluated at both the pixel scale and the time-series scale based on the synchronous observations and long-term FUI retrievals. Result The results show that the consistency of the derived hue angles and FUI improves significantly after correction (R² > 0.95, MAPE < 10%), and the annual mean FUI trends derived from the four satellite datasets are consistent. Conclusion This study provides an important methodological reference for the synergistic retrieval of lake water color parameters from Landsat TM, ETM+, OLI, and MODIS multi-source satellite data.
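The cross-correction step can be sketched as a simple linear fit between synchronous hue angles from two sensors; the angle pairs below are invented, and the prior spectral-response polynomial correction is only noted in a comment.

```python
import numpy as np

# Synchronous hue-angle pairs (degrees) from two sensors over the same lake pixels
# (made-up values; the study uses 112,830 MODIS-Landsat reflectance pairs)
alpha_modis   = np.array([38.2, 45.1, 52.7, 61.3, 70.8, 84.5, 96.0, 110.2])
alpha_landsat = np.array([36.9, 44.0, 51.1, 60.2, 69.5, 83.0, 94.1, 108.7])

# Step 1 (not shown): a polynomial spectral-response correction derived from a
# simulated water-spectra dataset is applied to each sensor's hue angle.
# Step 2: linear cross-correction of Landsat hue angles onto the MODIS reference.
slope, intercept = np.polyfit(alpha_landsat, alpha_modis, deg=1)
alpha_landsat_corrected = slope * alpha_landsat + intercept

mape = 100 * np.mean(np.abs(alpha_landsat_corrected - alpha_modis) / alpha_modis)
print(round(slope, 3), round(intercept, 2), f"MAPE={mape:.2f}%")
```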
关键词:Water color;Multi-source satellite remote sensing;Spectral response function;Consistency correction;Hue angle;Forel-Ule Index (FUI);Water reflectance;Lakes on the Qinghai-Tibet Plateau
    更新时间:2025-04-23
New progress has been made in research on forest loss in Africa: a decision tree rule-based classification framework reveals that human activities are the dominant driver, providing a scientific basis for forest conservation.

    Liu Wendi, Zhang Xiao, Liu Liangyun

    DOI:10.11834/jrs.20254590
摘要:Objective Identifying the drivers of forest loss is essential for developing forest management policies, yet accurately and comprehensively classifying these drivers on a large scale remains a significant challenge. In this study, we focused on Africa, a region experiencing severe forest loss due to various human activities and natural disturbances. Our objectives are twofold: (1) to generate a spatially explicit and dynamic dataset for proximate drivers of forest loss, and (2) to quantitatively analyze their spatiotemporal patterns.Method Our approach consists of three main steps. First, we developed a decision tree-based classification framework to attribute annual forest loss in Africa to five human and three natural drivers. This framework integrates multi-source remote sensing data, including a global 30-m land cover dynamic monitoring dataset, human footprint pressure, fire occurrences, forest management practices, and the Standardized Precipitation Evapotranspiration Index (SPEI), following a hierarchical approach based on identification difficulty. Second, we validated the driver classification results using approximately 15,000 third-party visual interpretation samples at a 1 km × 1 km resolution. Additionally, we compared our results with prior studies and systematically analyzed the sources of observed discrepancies. Finally, we quantified forest loss driven by different drivers across three spatial scales: the entire African continent, distinct geographical zones, and latitudinal gradients. We further examined temporal dynamics and long-term trends at both continental and zonal levels.Result We developed a 30-m annual forest loss driver dataset for Africa (2000–2020) with an overall accuracy of 95.38% in pan-tropical regions. Over this period, Africa experienced an estimated 93.51 Mha of forest loss, with human activities responsible for 86.73% of the loss and natural drivers accounting for 13.27%. Agricultural encroachment was the leading driver, causing 44.02% of the total loss, followed by forestry activity (11.40%). At a finer scale (0.01°×0.01°), agricultural encroachment was the dominant driver in most areas. Its impact increased significantly, from 1.53±0.24 Mha/yr (2001–2011) to 2.71±0.23 Mha/yr (2012–2020), peaking at 3.01 Mha/yr in 2014. Forestry activity declined slightly before 2012 but nearly doubled by 2020 (1.00 Mha/yr vs. 0.52 Mha/yr in 2001). All human drivers, except for impervious surface expansion, displayed significant accelerating trends. Agricultural encroachment showed the most pronounced increase (0.08 Mha/yr², P<0.05), followed by forestry activity (0.03 Mha/yr², P<0.05). In contrast, natural drivers remained stable or declined, with persistent drought declining at 0.01 Mha/yr² (P<0.05). The dominance of human activities in African forest loss intensified over time, rising from 80.62% in 2001 to 90.38% in 2020. Agricultural encroachment remained the primary driver throughout 2000–2020, peaking at 49.50% of total loss in 2014. The contribution of forestry activity rose sharply from 9.93% in 2001 to 18.61% in 2020, surpassing human-induced fire to become the third-largest driver after 2013.Conclusion This study integrates multi-source remote sensing datasets to develop a decision tree-based classification framework for identifying the proximate drivers of forest loss during 2000–2020 in Africa. The driver classification results achieved an overall accuracy of 95.38% in the pan-tropical regions.
Our analysis revealed that human activities were responsible for nearly 86.73% of Africa’s forest loss during this period, with their impacts continuing to grow, while natural drivers accounted for only 13.27%. Among the human drivers, agricultural encroachment (44.02%) and forestry activity (11.40%) were the two most significant contributors to forest loss. Notably, the rate of forest loss driven by nearly all human drivers has doubled over time, showing an accelerating trend. These findings highlight the urgent need for stronger forest conservation efforts, as Africa remains far from achieving SDG 15.2 (sustainable forest management), particularly in regions facing rapid agricultural expansion.  
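The hierarchical attribution logic can be pictured with a short rule-based sketch like the one below; the field names, thresholds, and rule order are illustrative placeholders rather than the published decision tree.

```python
# Minimal sketch (not the published rule set) of hierarchical, rule-based
# attribution of a forest-loss pixel, ordered from easiest to hardest to
# identify. All keys and thresholds are hypothetical.
def attribute_loss_driver(pixel):
    if pixel["post_loss_landcover"] == "impervious":
        return "impervious surface expansion"
    if pixel["post_loss_landcover"] == "cropland":
        return "agricultural encroachment"
    if pixel["in_managed_forest"]:
        return "forestry activity"
    if pixel["burned"]:
        # Separate human-induced fire from wildfire using human pressure.
        return "human-induced fire" if pixel["human_footprint"] > 4 else "wildfire"
    if pixel["spei"] < -1.5:
        return "persistent drought"
    return "other natural disturbance"
```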
    关键词:forest loss;proximate drivers;remote sensing;decision tree;Africa;time-series;agricultural encroachment;forestry activity   
    更新时间:2025-04-23
In the field of remote sensing retrieval of frozen soil parameters, new progress has been made in characterizing the L-band microwave radiation response of black soil in typical frozen ground regions of Northeast China, providing an important reference for frozen soil parameter retrieval.

    Sun M Q, Kou X K, Jiang T, Jin M, Yan S

    DOI:10.11834/jrs.20254443
摘要:In the detection of frozen soil parameters, the L-band offers stronger penetration capability than other microwave bands, making it more advantageous for studying soil properties in frozen conditions. However, there has been limited research on the microwave radiation response depth of the L-band in such environments. Additionally, the microwave radiation response characteristics of soil under freezing conditions are not yet fully understood. This study aims to investigate the microwave radiation response depth of black soil in typical permafrost regions of Northeast China, focusing on its behavior under natural freezing conditions during winter. To address this research gap, the study employed a dual-polarized L-band microwave radiometer operating at 1.414 GHz to perform near-field experiments on black soil samples with different initial moisture contents. The experiments were conducted in winter under natural freezing conditions. The study examined the changes in microwave radiation response depth during and after the freezing process, considering four initial moisture contents: 0%, 10%, 20%, and 30%. By analyzing the experimental results, the study aimed to explore how the initial moisture content affects the microwave radiation response depth and to determine the dominant factors influencing brightness temperature in frozen soil. This approach allowed for a comprehensive understanding of the interactions between soil moisture, freezing processes, and microwave radiation. The results revealed several key findings. During the freezing process, the L-band microwave radiation response depth for soils with 10% and 30% initial moisture content was found to exceed 5 cm. This suggests that the L-band is capable of detecting soil characteristics at relatively greater depths during freezing. Notably, soil moisture content remained the dominant factor influencing brightness temperature during this process. After freezing, the initial moisture content continued to affect the microwave radiation response depth by controlling the amount of unfrozen water present in the soil. The measured response depths for black soil with initial moisture contents of 0% (frozen soil) and 10% ranged from 100 cm to 105 cm and from 50 cm to 60 cm, respectively, following freezing. For soils with higher initial moisture contents of 20% and 30%, the post-freezing microwave radiation response depths ranged from 35 cm to 50 cm and from 25 cm to 35 cm, respectively. These results highlight the significant influence of soil moisture content on the penetration depth of the L-band signal. Furthermore, the study confirmed that under certain conditions the microwave radiation response depth of frozen soil exceeded the penetration depth calculated using the Ulaby (1981) model, indicating that the actual response depth can be greater than previously estimated for frozen environments.
These results have important implications for the remote sensing inversion of frozen soil parameters and the use of L-band microwave radiometry in monitoring permafrost and frozen ground. The ability to accurately measure the microwave radiation response depth in these environments can improve our understanding of the physical properties of frozen soils and enhance the accuracy of remote sensing systems designed for frozen soil monitoring and analysis.  
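For reference, one standard low-loss penetration-depth approximation from Ulaby et al. (1981) is δp ≈ λ0·√ε′/(2π·ε″). The sketch below evaluates it at the radiometer frequency used here, with placeholder permittivity values rather than measured ones.

```python
# Minimal sketch of the low-loss penetration-depth approximation:
# delta_p ~= lambda0 * sqrt(eps') / (2 * pi * eps'').
# The permittivity values below are illustrative placeholders for frozen soil,
# not measurements from this study.
import math

def penetration_depth(wavelength_m, eps_real, eps_imag):
    """Power penetration depth (m) of a low-loss medium."""
    return wavelength_m * math.sqrt(eps_real) / (2.0 * math.pi * eps_imag)

# L-band radiometer at 1.414 GHz -> wavelength ~0.212 m.
wavelength = 3.0e8 / 1.414e9
print(penetration_depth(wavelength, eps_real=5.0, eps_imag=0.05))  # ~1.5 m
```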
    关键词:Microwave radiometer;Response Depth;L-band;Frozen soil;Black Soil   
    更新时间:2025-04-17
A clear-sky image synthesis algorithm for the Fengyun-4B satellite provides high-frequency, single-day clear-sky images for ecological remote sensing applications.

    SHAO Jiali, WU Ronghua, GAO Ling, WANG Zhiwei, HAN Shuxin, XIE Lianni

    DOI:10.11834/jrs.20254072
摘要:Objective Synthesizing clear-sky images within a single day is of great significance for daily water body recognition and other operational applications. This paper proposes a clear-sky image synthesis algorithm based on a binary Gaussian mixture model for the 1-minute continuous imaging sequence data of the Geostationary High-speed Imager (GHI) aboard the Fengyun-4B satellite.
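As a rough illustration of the binary Gaussian-mixture idea (not the operational FY-4B/GHI algorithm), the sketch below fits a two-component mixture to each pixel's one-day reflectance sequence and composites the darker mode; the assumption that clouds are brighter than the clear-sky surface in this band is ours.

```python
# Illustrative sketch of a per-pixel two-component Gaussian mixture composite.
import numpy as np
from sklearn.mixture import GaussianMixture

def clear_sky_value(series):
    """series: 1-D numpy array of one pixel's reflectances over a day."""
    gm = GaussianMixture(n_components=2, random_state=0).fit(series.reshape(-1, 1))
    clear = int(np.argmin(gm.means_.ravel()))        # darker mode assumed clear
    labels = gm.predict(series.reshape(-1, 1))
    return series[labels == clear].mean()

def synthesize(cube):
    """cube: (T, H, W) stack of 1-min images -> (H, W) clear-sky composite."""
    T, H, W = cube.shape
    return np.array([[clear_sky_value(cube[:, i, j]) for j in range(W)]
                     for i in range(H)])
```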
关键词:Clear Sky Synthesis Image;FY-4B;GHI;Gaussian model;Water Body Identification;Multi-temporal Remote Sensing Data
    更新时间:2025-04-16
Object detection in aerial remote sensing images has made significant progress: a systematic analysis of the challenges of oriented object detection provides new directions for advancing the technology.

    Dang Min, Liu Gang, Wang Quan, Zhang Yuanze, Wang Di, Pan Rong

    DOI:10.11834/jrs.20254504
摘要:Object detection, which mainly involves classification and regression, is a basic task of computer vision and has been widely studied. Early research mainly focused on horizontal object detection in natural image scenes. In recent years, with the development of convolutional neural networks (CNNs), many general object detection methods have been proposed and have achieved good results on natural images, which greatly promoted the development of object detection tasks in aerial remote sensing images. Compared with natural images, aerial remote sensing images have complex backgrounds, and objects are densely distributed in arbitrary orientations. Therefore, traditional horizontal object detection is no longer suitable for object detection in aerial remote sensing images. To meet the requirements of object detection tasks in aerial remote sensing images, the task of oriented object detection, which relies on oriented bounding boxes (OBBs) to detect oriented objects accurately, has gradually emerged. Most oriented object detectors rely on the horizontal proposals/anchors of mainstream horizontal object detection frameworks to predict OBBs. Although significant progress has been made in performance, some fundamental flaws still exist, including the introduction of object-irrelevant information into the region features as well as more complex computation of rotation information. At the same time, traditional CNNs cannot explicitly model the orientation variation of objects, which seriously affects detection performance. Most existing oriented object detectors are built on horizontal object detectors by introducing an additional channel in the regression branch to predict the angle parameter. Although angle regression-based methods have shown promising results, they still face some fundamental limitations. Compared with horizontal object detectors, angle-regression detectors bring new problems, mainly including 1) inconsistency between metrics and losses, 2) boundary discontinuity, and 3) square-like problems. At the same time, small objects in arbitrary orientations pose great challenges to existing detectors, especially small oriented objects with extreme geometric shapes and limited features, which can lead to serious feature mismatch. These challenges have garnered widespread attention and prompted in-depth consideration from researchers in the relevant fields. To further promote the development of oriented object detection, this paper summarizes and analyzes the research status of oriented object detection in aerial remote sensing images. Currently, various pipelines exist for object detection, with the most common approach adding an angle output channel to the regression branch to predict OBBs. As a result, many oriented object detectors are built upon the horizontal object detection framework. This paper begins by reviewing key representative horizontal object detectors. With the development of object detection, many well-designed oriented object detection methods have been proposed and show good performance. The main challenges in current oriented object detection research are grouped in this paper into five aspects: 1) feature alignment, 2) inconsistency between metrics and losses, 3) boundary discontinuity and square-like problems, 4) low recognizability of small objects, and 5) the label annotation problem.
This paper provides a comprehensive analysis of these challenges, introduces the methods proposed to address them, and discusses representative solutions in detail. The limitations and shortcomings of existing methods are examined, and potential directions for future exploration are considered. In the field of oriented object detection in aerial remote sensing images, several widely used and representative public benchmark datasets with OBB annotations are summarized and introduced in detail. The experimental results and visualizations of representative state-of-the-art detectors are compared and analyzed on the most commonly used datasets, DOTA, HRSC2016, DIOR-R, and STAR. Current one-stage detectors are both simple and effective, with anchor-free object detectors demonstrating detection accuracy comparable to that of traditional two-stage detectors. On this basis, this paper integrates the current research status of object detection in aerial remote sensing images and anticipates future research trends. To enhance the detection accuracy of oriented object detection while optimizing model complexity, future research could focus on developing anchor-free oriented object detectors, extracting rotation-invariant features, and minimizing annotation costs through weak supervision. Through this review, we aim to provide a valuable reference for researchers interested in exploring oriented object detection in aerial remote sensing images.
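To illustrate the boundary-discontinuity problem the review highlights, the following numpy sketch uses the common five-parameter OBB representation (cx, cy, w, h, θ) with an assumed angle range of [-90°, 90°): two nearly coincident boxes on opposite sides of that range boundary produce a large angle regression target despite a tiny geometric difference.

```python
# Illustrative sketch of the (cx, cy, w, h, theta) OBB parameterization and the
# boundary discontinuity of angle regression; the angle-range convention is an
# assumption, not tied to any particular detector.
import numpy as np

def obb_to_corners(cx, cy, w, h, theta):
    """Return the 4 corner points of an oriented box; theta in radians."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    half = np.array([[ w/2,  h/2], [-w/2,  h/2],
                     [-w/2, -h/2], [ w/2, -h/2]])
    return half @ R.T + np.array([cx, cy])

# Two almost-identical boxes straddling the theta boundary of [-90, 90) degrees:
a = obb_to_corners(50, 50, 40, 20, np.deg2rad( 89.0))
b = obb_to_corners(50, 50, 40, 20, np.deg2rad(-89.0))

# Each corner of box a has a near-identical counterpart in box b (~0.8 px apart)...
d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
print(d.min(axis=1).max())
# ...yet the angle regression target jumps by 178 degrees.
print(89.0 - (-89.0))
```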
    关键词:object detection;aerial remote sensing images;Oriented Object Detection;convolutional neural networks;oriented bounding box   
    更新时间:2025-04-16