Abstract: Vegetation phenology is one of the most sensitive biological indicators of terrestrial ecosystem responses to global climate change and plays a crucial role in terrestrial ecological processes and functions. Changes in vegetation phenology have been strongly linked to climate change patterns and various ecological processes within terrestrial ecosystems, and may significantly impact land-atmosphere exchanges of carbon, water, and energy fluxes, as well as interactions between different species. Therefore, accurate monitoring of vegetation phenology is essential for simulating terrestrial ecological processes and understanding how terrestrial ecosystems respond to climate change. To date, various observation and monitoring methods for vegetation phenology have been developed, including ground-based observations (such as manual observation, phenocam observation/monitoring, and carbon flux-based monitoring) and remote sensing-based monitoring. Benefiting from the reliability of ground-based phenology observations and the spatial coverage and rapid repeatability of remotely sensed monitoring, a regional and global-scale vegetation phenology monitoring framework has been established, with satellite remote sensing as the primary method and ground observations for validation. A general technical workflow for remote sensing-based vegetation phenology monitoring has been formed, including remote sensing data acquisition, time series data construction (e.g., calculation of vegetation parameters such as various vegetation indices, leaf area index, fraction of absorbed photosynthetically active radiation, and gross primary production), time series data reconstruction (e.g., filtering, smoothing, and fitting), phenological metrics (phenometrics, e.g., start, peak, end, and length of the growing season) extraction, and phenometrics validation. However, each processing step in this workflow introduces uncertainty into the monitored phenometrics. This study focuses on three key aspects of remote sensing-based phenology monitoring: (1) remote sensing time series data (especially vegetation indices), (2) phenometrics extraction, and (3) phenometrics validation. Additionally, it discusses the effects of complex land surface backgrounds (e.g., snow, soil, and dry vegetation) on remote sensing time series data, the differences between various phenometrics extraction methods (i.e., threshold-based vs. derivative-based methods), and the matching issues between remote sensing phenometrics and reference phenometrics during validation (e.g., scale matching and phenometrics matching). Finally, two essential directions are proposed to address these key issues: (1) developing new remote sensing monitoring methods for vegetation phenology to counter background interference, such as constructing new remote sensing indices resistant to complex land surface backgrounds from the perspective of remote sensing mechanisms, and (2) establishing a comprehensive observation network that integrates ground-based multi-sensor coordinated observations and “space-air-ground” multi-scale integrated observations. Addressing these key issues will enhance the reliability of remote sensing phenology data, expand their applications, and deepen the understanding of land-atmosphere interactions.
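Since the abstract above distinguishes threshold-based from derivative-based phenometrics extraction, the minimal Python sketch below illustrates the threshold family only: the start and end of season are taken where a smoothed vegetation-index series crosses a fixed fraction of its seasonal amplitude. The function name, the Savitzky-Golay smoothing settings, and the 50% threshold are illustrative assumptions rather than the settings of any particular product.

```python
import numpy as np
from scipy.signal import savgol_filter

def extract_phenometrics(doy, vi, threshold=0.5):
    """Threshold-based phenometrics from one season of a vegetation-index series.
    doy: day-of-year array; vi: vegetation index values; threshold: fraction of amplitude."""
    vi_smooth = savgol_filter(vi, window_length=7, polyorder=3)   # time series reconstruction step
    level = vi_smooth.min() + threshold * (vi_smooth.max() - vi_smooth.min())
    peak = int(np.argmax(vi_smooth))
    sos = doy[int(np.argmax(vi_smooth[:peak + 1] >= level))]      # first crossing on the green-up side
    eos = doy[peak + int(np.argmax(vi_smooth[peak:] <= level))]   # first drop below on the senescence side
    return sos, doy[peak], eos, eos - sos                         # start, peak, end, length of season
```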
Abstract: Rice is one of the main staple foods of human beings. Timely and accurate access to information about the distribution of paddy rice crop areas and their spatial-temporal variations is crucial for food policy formulation. Focusing on the topic “paddy rice remote sensing mapping,” we first summarized the physiological growing process and primary cropping patterns of paddy rice systematically, following a survey of domestic and foreign literature. Globally, rice cultivation is concentrated in Southeast Asia. In China, single-cropping rice production areas are mainly located in the northeastern region and the middle and lower reaches of the Yangtze River. The double- and triple-cropping rice production areas are located in southern provinces, such as Hunan, Jiangxi, and Guangdong. Second, rice mapping primarily relied on radar data in the early stage because of the effects of clouds and rain. With the abundance of remote sensing data sources, optical and radar data have been applied synergistically to rice mapping. On the basis of the highlighted “remote sensing signal-spatial-temporal” properties of paddy rice, we discussed typical vegetation indices and radar backscatter coefficients used in rice mapping and concluded with mainstream rice mapping methods in terms of traditional machine learning and deep learning. Afterward, the status of rice mapping applications was summarized in three ways: using a standard machine learning model, fusing multisource remote sensing data, and using a cloud-based remote sensing computing platform. Results indicate that existing rice mapping research has the following problems: (1) Rice is misclassified because of plants (aquatic vegetation such as wetlands) with comparable phenological stages. (2) Optical and radar data can hardly provide complete observations over all phenological stages of paddy rice. (3) Rice mapping in fragmented terrain and in multiple-cropping or rotation regions remains a huge challenge. (4) Generalization of rice mapping algorithms remains an issue. To solve these issues, the next steps of rice mapping were explored from the perspectives of rice phenological feature mining, techniques for collecting paddy rice time-series observations, and enhancement of the spatial resolution of rice mapping, specifically for future research. The steps are as follows: (1) focusing on the exploration of the characteristics of remote sensing signals in the phenological stages of paddy rice, (2) using various methods to acquire temporal remote sensing data covering the entire phenological stages of paddy rice, (3) improving the spatial resolution of paddy rice mapping via finer spatial resolution data or multiple data fusion models, and (4) taking full advantage of optical imagery and radar data for integrated mapping of paddy rice and developing generalizable algorithms for application.
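As one concrete example of the vegetation-index signals discussed above, many rice mapping studies exploit the transplanting/flooding stage, when a water-sensitive index temporarily rises to the level of the greenness indices. A commonly cited criterion from the literature (the 0.05 offset is an empirical choice and varies between studies) flags a pixel as potentially flooded paddy during the transplanting window when

$$\mathrm{LSWI} + 0.05 \ge \mathrm{EVI} \quad \text{or} \quad \mathrm{LSWI} + 0.05 \ge \mathrm{NDVI}.$$

In practice, additional temporal checks, such as requiring the flooding signal only around the expected transplanting dates followed by a normal vegetation growth trajectory, help separate paddy rice from permanently flooded wetlands and other aquatic vegetation with similar phenology.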
Abstract: In the real world, completely static scenes do not exist. Monocular depth estimation in dynamic scenes refers to obtaining depth information of the dynamic foreground and static background from a single image, which has advantages over traditional stereo estimation methods in terms of flexibility and cost-effectiveness. It has strong research relevance and broad development prospects, playing a key role in downstream tasks such as 3D reconstruction and autonomous driving. With the rapid development of deep learning technology, self-supervised learning, which requires no ground-truth labels, has attracted the enthusiasm of many scholars. Many domestic and foreign scholars have proposed a series of self-supervised monocular depth estimation algorithms to deal with dynamic objects in scenes, laying the research foundation for researchers in related fields. However, a comprehensive analysis of the above methods has yet to be conducted. To address this issue, this study systematically reviews and summarizes the progress of self-supervised monocular depth estimation in dynamic scenes based on deep learning. First, the basic models of self-supervised monocular depth estimation based on deep learning are summarized, and how self-supervised constraints are applied between images is analyzed and explained. Moreover, a basic framework diagram of self-supervised monocular depth estimation based on continuous frames is drawn. The effect of dynamic objects on images is explained from four aspects: epipolar lines, triangulation, fundamental matrix estimation, and reprojection error. Second, commonly used datasets and evaluation metrics for monocular depth estimation research are introduced. The KITTI and Cityscapes datasets provide continuous outdoor image data, while the NYU Depth V2 dataset provides indoor dynamic scene data; these are generally used for model training. The Make3D dataset has depth data but discontinuous images and is generally used to test the generalization ability of models. The algorithms are quantitatively analyzed using Root Mean Square Error (RMSE), logarithmic root mean square error (RMSE log), absolute relative error (Abs Rel), squared relative error (Sq Rel), and accuracies (Acc), and the performance of classic monocular depth estimation models in dynamic scenes is compared and analyzed. Then, on the basis of different ways of handling dynamic objects, the research directions of robust depth estimation in dynamic scenes and of dynamic object tracking and depth estimation are summarized and analyzed. Extracting dynamic objects and treating them as outliers during model training to minimize their effect, so that training relies solely on static background information, is referred to as robust depth estimation in dynamic scenes. Accurately distinguishing the dynamic foreground from the static background and processing the two regions separately is referred to as dynamic object tracking and depth estimation. Various algorithms for detecting and segmenting dynamic objects based on optical flow information, semantic information, and other information while estimating their motion are explained.
At the same time, the advantages and disadvantages of each type of algorithm are summarized and analyzed on the basis of commonly used evaluation criteria. Finally, the future development directions of monocular depth estimation in dynamic scenes are discussed from the aspects of network model optimization, online learning and generalization, real-time operation capability on embedded devices, and domain adaptation of self-supervised learning.
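For reference, the self-supervised constraint summarized in the abstract above is usually a view-synthesis (photometric reprojection) loss over consecutive frames; the formulation below uses the conventional symbols of this literature rather than those of any single reviewed paper. A pixel p_t of the target frame I_t is projected into a source frame I_s with the predicted depth D_t and relative pose T_{t→s}, and the resampled image is compared with I_t:

$$p_s \sim K\, T_{t\to s}\, D_t(p_t)\, K^{-1} p_t, \qquad
\mathcal{L}_{\mathrm{ph}} = \frac{\alpha}{2}\left(1 - \mathrm{SSIM}(I_t, \hat{I}_t)\right) + (1-\alpha)\,\lVert I_t - \hat{I}_t \rVert_1,$$

where K is the camera intrinsic matrix, \hat{I}_t is the source frame resampled at p_s, and α is a weighting constant (often around 0.85). Dynamic objects violate the static-scene assumption behind this warping, which is exactly why they corrupt the reprojection error and must be masked, down-weighted, or modeled separately, as the reviewed methods do.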
Abstract: As a global form of soil degradation, salinization has become a major obstacle to the sustainable development of the ecological environment and agriculture. Moreover, it has become one of the major environmental and socioeconomic issues globally. However, the traditional process of salinity survey is too cumbersome, expensive, and time consuming to meet the needs of large-scale mapping. Remote sensing and proximal soil sensing technologies have become important tools for the rapid, accurate, and efficient acquisition and monitoring of soil salinization. The appropriate mapping methods are directly related to the spatial scale of interest. Regional soil salinity mapping was also among the first published applications of geostatistics. Macroscopic maps of salt-affected soils at the global scale may roughly illustrate the extent of the environmental problem; however, assessments at the regional or finer level are based on remote sensing and geographic information systems coupled with ground measurements. Applying remote sensing technology to the monitoring of soil salinization to obtain soil salinization information has become a trend. This article discusses the detection mechanisms, multisource data, and methods for monitoring soil salinization. Multiple sensors installed on different platforms can provide considerable Earth observation information with various temporal, spatial, and spectral resolutions. On the basis of height, the observation platforms can be divided into near-ground (proximal), airborne, and spaceborne remote sensing. With regard to the operating principle, these sensors can be mainly divided into electromagnetic sensors and optical/radiation sensors. Among them, spectral imaging and thermal infrared sensors are suitable for various observation platforms, while ground-penetrating radar and electromagnetic induction are only suitable for near-ground soil salinization monitoring. The mainstream methods can be categorized into (1) thematic information extraction, (2) spectral index development, (3) quantitative retrieval modeling, and (4) digital soil mapping. On the basis of the above, this review summarized and explained the limitations of the current research fields and frameworks, monitoring data, monitoring methods, and scale effects. The integration of spaceborne remote sensing data with ground-based sensor information, complemented by the agile observational capabilities of Unmanned Aerial Vehicles (UAVs), enables us to transcend the limitations of noncoordinated Earth observation techniques. This integration allows for comprehensive coverage from a broad-scale perspective down to specific localized points. In summary, the core of the integration of satellite, UAV, and proximal sensing for soil salinization monitoring lies in the fusion of data from diverse sources, the establishment of quantitative models, and the extension of spatial scales. Finally, for future development and actual application needs, this review discussed the prospects for the further development of soil salinization studies on the basis of remote sensing and proximal soil sensing. To further advance and optimize technology, analysis, and retrieval methods, we identify critical future research needs and directions: (1) secondary soil salinization monitoring based on multisource data fusion, (2) multiscale collaborative monitoring of soil salinization, (3) improving detection depth on the basis of multidisciplinary knowledge, and (4) sharing research data and platforms based on cloud computing.
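To make the “spectral index development” category above concrete, two indices that appear frequently in the salinity mapping literature are shown below; band choices and coefficients vary across sensors and study areas, so these should be read as representative examples rather than indices recommended by this review:

$$\mathrm{SI} = \sqrt{\rho_{\mathrm{blue}} \times \rho_{\mathrm{red}}}, \qquad
\mathrm{NDSI} = \frac{\rho_{\mathrm{red}} - \rho_{\mathrm{NIR}}}{\rho_{\mathrm{red}} + \rho_{\mathrm{NIR}}},$$

where ρ denotes surface reflectance in the named band; larger values are generally associated with stronger surface salt expression on bare soils.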
Abstract: With the development of Synthetic Aperture Radar (SAR) imaging and deep learning, the use of deep learning to classify land cover in SAR images has received extensive attention and applied research. In this study, a high-resolution airborne multidimensional SAR land cover classification dataset is constructed on the basis of the high-resolution airborne data of the Chinese Aeronautic Remote Sensing System (CARSS) for Earth observation, namely, AIR-MDSAR-Map (Airborne Multidimensional Synthetic Aperture Radar Mapping Dataset). The original data are obtained by CARSS, and the platform is a modified Xinzhou 60 remote sensing aircraft. SAR and optical images are generated in accordance with the standard data production process. After imaging processing, radiometric correction, polarization correction, and geometric correction, the original SAR data are preprocessed to form Single Look Complex (SLC) data, and then geometric processing is used to generate SAR DOM products. After image enhancement, splicing, and rough correction, the raw optical data are preprocessed to generate DSM data, and then semiautomatic filtering is performed to produce the DEM. Finally, AIR-MDSAR-Map contains polarimetric SAR images in the C, Ka, L, P, and S bands and high-resolution optical images of Wanning, Hainan, and Sheyang, Jiangsu, with the spatial resolution ranging from 0.2 m to 1 m depending on the band. AIR-MDSAR-Map divides the land cover into nine categories and generates fine pixel-level labels through a semiautomatic labeling algorithm. In this study, classical semantic segmentation methods in deep learning, such as UNet, SegNet, DeepLab, and HRNet, are used to verify the classification of AIR-MDSAR-Map. At the same time, we test the classification sensitivity of images in different bands to all kinds of land cover objects. This dataset includes multidimensional SAR images of the same place and time, which can be used for fusion classification research. In this study, multidimensional SAR data are fused and classified through different fusion strategies; model fusion classifies land cover by selectively fusing the models of each band, and a priori fusion uses the prior information of the classification results in each band to distinguish land cover by defining the priority of objects. These two fusion methods outperform the single band for some types of land cover and improve the FWIoU and PA by 10%–15%; the FWIoU reaches 69%, and the PA reaches 81%. AIR-MDSAR-Map can satisfy the research and application requirements of different users and can be used to study the characteristics of the same land cover object at different resolutions, bands, and polarizations. Moreover, it can strongly promote the development of multidimensional SAR applications. The AIR-MDSAR-Map will be available at the ChinaGEOSS Data Sharing Network (http://www.chinageoss.cn).
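For readers unfamiliar with the two metrics reported above, the short sketch below computes Pixel Accuracy (PA) and Frequency-Weighted Intersection over Union (FWIoU) from a class confusion matrix in the standard way; the function name and NumPy usage are illustrative and not part of the dataset's evaluation code.

```python
import numpy as np

def pa_fwiou(confusion):
    """confusion[i, j] = number of pixels of true class i predicted as class j."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    pa = np.trace(confusion) / total                          # pixel accuracy
    tp = np.diag(confusion)
    union = confusion.sum(axis=1) + confusion.sum(axis=0) - tp
    iou = tp / np.maximum(union, 1e-12)                       # per-class IoU
    freq = confusion.sum(axis=1) / total                      # class frequency by true label
    return pa, float((freq * iou).sum())                      # PA, frequency-weighted IoU
```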
Abstract: High Mountain Asia (HMA) is the region richest in glacier and snow resources in the world outside the polar regions. Accurate monitoring of the HMA snowpack distribution is important for HMA snowmelt runoff simulation, climate change prediction, and ecosystem evolution studies. Fractional Snow Cover (FSC) can quantitatively describe the extent of snow cover at the sub-pixel scale and is more suitable for reflecting the distribution of snow in complex mountainous areas than binary snow cover maps. The objective of this study is to develop a new HMA FSC retrieval algorithm and integrate the algorithm into Google Earth Engine to prepare a set of long time series HMA FSC products. Considering the influence of HMA topography and underlying surface type on the accuracy of snow information extraction, this paper proposes a Multivariate Adaptive Regression Splines (MARS) model, LC-MARS, which integrates terrain correction and land cover class feature extraction to retrieve the FSC of HMA. The FSC extracted from Landsat 8 is used as the reference value; the FSC retrieval accuracy of the LC-MARS model is tested using binary and error validation methods; the HMA FSC retrieval accuracy of the LC-MARS model is compared with that of a linear regression model trained with the same training samples; and the FSC retrieved by the LC-MARS model is also compared with SnowCCI and MOD10A1. (1) The binary validation of the FSC retrieved by the LC-MARS model showed that the overall Accuracy and Recall were 93.4% and 97.1%, respectively, and the error validation showed that the RMSE was 0.148 and the MAE was 0.093; both binary validation and error validation indicated that the FSC retrieved by the LC-MARS model is highly accurate. (2) The LC-MARS model trained with the same training samples has higher FSC accuracy than the linear regression model in forest, vegetation, and bare land areas, indicating that the LC-MARS model is more suitable for FSC retrieval in mountain and forest areas. (3) The overall RMSE of MOD10A1 is 0.178 and its MAE is 0.096; the overall RMSE of SnowCCI is 0.247 and its MAE is 0.131. The accuracy of the FSC prepared by LC-MARS is higher than that of MOD10A1 and SnowCCI, indicating that FSC retrieval by LC-MARS has application value. The LC-MARS model can fit high-dimensional nonlinear relationships and significantly improves the retrieval accuracy of FSC in mountain and forest areas. The computational efficiency of the LC-MARS model based on Google Earth Engine is high, and it is suitable for preparing large-scale, long time series FSC products. In this study, daily MODIS FSC products of HMA from 2000 to 2021 were prepared on the basis of the LC-MARS model, which provides important data support for the study of climate change, hydrology, and water resources in HMA.
Keywords: remote sensing;High Mountain Asia (HMA);fractional snow cover;MODIS;MARS;terrain correction
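As background on the regression family named above, a MARS model represents the response as a weighted sum of hinge (piecewise-linear) basis functions and their products, which is what lets LC-MARS fit high-dimensional nonlinear FSC relationships; the expression below is the generic MARS form, not the specific basis set selected by LC-MARS:

$$\hat{y} = \beta_0 + \sum_{m=1}^{M} \beta_m B_m(\mathbf{x}), \qquad
B_m(\mathbf{x}) \in \bigl\{\max(0,\, x_j - t),\ \max(0,\, t - x_j),\ \text{products of such terms}\bigr\},$$

where the knots t and the retained basis functions are chosen automatically by forward selection and backward pruning during model fitting.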
Abstract: In recent years, with the development of Global Navigation Satellite Systems (GNSS), a GNSS Multipath Reflectometry (GNSS-MR) technique based on the Signal-to-Noise Ratio (SNR) has been developed. This technology can obtain information about the reflector by using a GNSS receiver and has the advantages of abundant signal sources and a high sampling rate in snow depth inversion. However, many GNSS receivers do not record SNR observations. Thus, a multimode and multifrequency GNSS-MR snow depth inversion fusion method based on the Signal Strength Indicator (SSI) is proposed in this study to make these receivers capable of snow depth monitoring. At the same time, this method also effectively addresses the two main problems in GNSS-MR snow depth inversion, namely, low precision and low temporal resolution. This capability mainly benefits from the robust estimation strategy. The specific steps are as follows: First, by applying the Lomb-Scargle Periodogram (LSP) method of the classical snow depth retrieval principle to the SSI and SNR data of GPS, GLONASS, Galileo, and Beidou, snow depth retrievals are obtained for each frequency band of the four constellations. Then, a specific time window is established, and a set of state transition equations is constructed in each time window considering the dynamic change of the snow surface and the tropospheric delay. Finally, the snow depth time series is solved by a robust estimation model. In essence, it is an optimal estimation method for GNSS-MR that is theoretically suitable for different geographical environments. In addition, this study selected a suitable station for snow depth retrieval experiments to prove the feasibility and effectiveness of the method. The experimental station is SG27 in Alaska, United States. Results show that the multifrequency SSI data of the four global satellite systems can be used to retrieve snow depth. Before the multimode and multifrequency GNSS-MR snow depth inversion fusion, the SSI inversion results of each frequency band have good correlation with the measured snow depth (except for the Beidou frequency bands, all correlation coefficients are greater than 0.92). Considering the standard deviation and root mean square error of the retrieval results of the different satellite systems, the retrieval results of the GPS satellite system are the best, followed by GLONASS and then Galileo; however, the retrieval results of these three satellite systems are similar. The Beidou satellite system has the worst retrieval result. Among the four satellite systems, the root mean square error of the frequency band with the best inversion result is 6.34 cm. After the multimode and multifrequency GNSS-MR snow depth inversion fusion, the root mean square error between the SSI inversion results and the measured snow depth series is 2.36 cm, and the correlation coefficient is 0.98. At the same time, the multimode and multifrequency GNSS-MR snow depth inversion based on SNR data is also performed in the example; the results of the SSI inversion are consistent with those of the SNR inversion, and the feasibility and effectiveness of multimode and multifrequency GNSS-MR snow depth inversion fusion based on SSI are verified by experiments.
Keywords: GNSS-MR;multi-mode and multi-frequency;snow depth;robust estimation;signal strength
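To illustrate the classical retrieval principle invoked above (not the fusion algorithm itself), the sketch below recovers the antenna-to-surface reflector height from one detrended SNR or SSI arc: the multipath oscillation is approximately sinusoidal in the sine of the elevation angle with frequency 2h/λ, so a Lomb-Scargle periodogram over candidate heights locates h, and snow depth follows as the snow-free antenna height minus h. Variable names and the candidate height range are assumptions made for the example.

```python
import numpy as np
from scipy.signal import lombscargle

def reflector_height(elev_deg, residuals, wavelength, h_grid=np.linspace(0.5, 3.0, 500)):
    """Estimate reflector height h (m) from one satellite arc.
    residuals: detrended SNR/SSI values (linear units) after removing the direct-signal trend."""
    x = np.sin(np.radians(elev_deg))               # the oscillation is (quasi-)sinusoidal in sin(elevation)
    omega = 4.0 * np.pi * h_grid / wavelength      # angular frequency for each candidate height
    power = lombscargle(x, residuals, omega)
    return h_grid[int(np.argmax(power))]           # height at the dominant spectral peak

# snow_depth = h_snow_free - reflector_height(elev_deg, residuals, wavelength)
```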
Abstract: Snow seasonal evolution is one of the key factors influencing hydrological dynamics in mountainous areas and controlling terrestrial ecology. Accurate information on snowmelt is essential for meteorological, hydrological, and global climate change studies and for disaster prediction and early warning. The traditional snowmelt detection approach based on time-series SAR suffers from the influence of vegetation cover, rugged terrain, and long revisit times in some regions. In this study, we propose a new snowmelt detection method based on high-resolution Sentinel-2 optical remote sensing data. The time series variation of snow surface grain size is used to detect snowmelt events. As snow starts to melt, the liquid water in the snow tends to remarkably increase the optical equivalent grain size retrieved from optical remote sensing. When the wet snow refreezes, the optical equivalent grain size remains considerably larger than that of dry snow. This behavior provides the theoretical basis for snowmelt onset detection from optical remote sensing. The snow surface optical equivalent grain size is retrieved by applying snow reflectance models, with the bidirectional reflectance, sun zenith angle, sensor viewing angle, and relative azimuth angle as inputs. Pure snow pixels are selected for snow optical equivalent grain size retrieval and snowmelt detection. In this study, snowmelt detection results for the Altay Mountains are presented and analyzed. Snowmelt onset detection based on optical remote sensing is also compared with the SAR method, and the advantages and shortcomings of the two methods are analyzed. The main advantages of the proposed new method are as follows. The new method based on optical remote sensing is more sensitive in detecting the occurrence of snowmelt and provides richer information about the snow melting process. The choice of snow reflectance model can introduce differences in the retrieved snow grain size because of variations in the modeling of snow particle shape and light scattering and absorption. Additionally, the selection of threshold values for distinguishing between wet and dry snow grain sizes can affect the results. Therefore, the use of different snow reflectance models can lead to certain differences in the results of snowmelt detection. The snowmelt onset dates retrieved using the optical method show overall similarity to those retrieved from the SAR method, exhibiting similar dependencies on elevation and aspect. However, some differences between the two methods, which can be attributed to variations in detection principles and data sources, are observed. Compared with the SAR method, the optical method is less affected by speckle noise, mixed pixels, and vegetation cover. Particularly in low-elevation and vegetated areas, the proposed method demonstrates superior capability in detecting snowmelt events compared with the SAR method. The snowmelt onset date retrieved using Sentinel-2 data is similar to that retrieved from the SAR method using Sentinel-1 data, and they show similar dependencies on elevation and aspect. The new method based on Sentinel-2 data also shows advantages over the SAR method; e.g., the optical method is less affected by speckle noise, mixed pixels, and vegetation cover, and it provides more spatial details about the snowmelt onset date. The proposed snowmelt detection method based on optical data suffers from cloud cover, but it offers an alternative to SAR for detecting wet snow at high spatial resolution.
Snowmelt detection based on SAR and that based on Sentinel-2 data are complementary to each other, and snowmelt detection in mountainous areas can be improved by combining both methods.
Abstract: With the rapid development of society, the overlap between the distribution of the social economy and population and the impact range of earthquake disasters is gradually increasing, and the harmfulness of earthquake disasters is further highlighted. Strengthening the monitoring and prediction of earthquake disaster risk is an important means of reducing such risk. Infrared remote sensing has gradually become an important means of earthquake prediction and monitoring. On September 16, 2021, an Ms 6.0 earthquake occurred in Luxian, Sichuan Province. The FY-2H surface Outgoing Longwave Radiation (OLR) data product was used to analyze the anomalous distribution and changes of OLR in the study area from August 27 to October 1. On the basis of the TFFA algorithm, our research extracted infrared radiation anomalies and made a retrospective attempt at short-term and imminent earthquake monitoring and prediction. We extracted the infrared radiation anomaly that is continuous and has remarkable change characteristics one week before the earthquake and identified the development and evolution of the impending-earthquake infrared anomaly. Results showed a cross distribution between the anomaly distribution and the seismic structure, and the evolution process corresponds to the thermal infrared radiation law of rock fracture. In accordance with the extracted long-wave radiation anomaly, we speculate that only when the tectonic stress accumulates to the critical state of rock fracture and sliding can the tidal force trigger the earthquake, and the extracted anomaly is likely a manifestation of energy release in this process. With the change of tidal force, anomalies first appeared in the central part of the study area and northeast of the epicenter, and the anomalies were distributed along a northwest-southeast direction as a whole. This phenomenon is consistent with the evolution characteristics of the rock stress-strain-fracture process, which passes through microfracture, fracture, and accelerated fracture stages. Results show that the tidal force of celestial bodies has an inducing effect on the earthquake, and the long-wave radiation anomaly may be the radiative manifestation of stress and strain during earthquake incubation. During the extraction of the seismic radiation anomaly, the selection of the background day has a decisive influence on the results. This case study is a retrospective analysis; thus, the type of seismogenic fault is determined from the focal mechanism after the earthquake, and then the background date is determined. If this method is used for anomaly monitoring and prediction before earthquakes, a fault database can be used to judge the fault properties of the study area, and the background date can be selected in accordance with the fault properties. Afterward, the abnormal period (September 11-17) was tracked and verified by using NOAA satellite OLR product data; the characteristics of the two results were relatively consistent, which further demonstrated that the FY-2H satellite OLR data can be well applied to seismic anomaly monitoring. Moreover, the FY-2H single-time-point OLR data product performs well in monitoring thermal anomalies, which further reflects the feasibility of using domestic satellites to conduct short-term and imminent earthquake prediction research. This study provides a good application case for promoting the operational application of domestic satellites in seismic monitoring and prediction.
Keywords: remote sensing;FY-2H;short-term anomaly;Luxian earthquake;outgoing longwave radiation;tidal force
Abstract: Emergency shelters can be used for sheltering, rescuing, and evacuating residents after a disaster. These structures are an important part of national public security and emergency management. Evaluating the carrying capacity of emergency shelters is important in promoting the construction of resilient cities and the sustainable development of cities. Beijing is one of only three capital cities in the world located in a zone of seismic intensity degree VIII. It is also the earliest city in China to build emergency shelters, and thus, it is typical and representative in terms of emergency evacuation. In this study, we use remote sensing images, POI, and other multi-source data to construct an evaluation model of the carrying capacity of urban emergency shelters based on multi-objective decision making, and we take the Fifth Ring Area of Beijing, where the population distribution is relatively concentrated, as an example. First, according to the principles for establishing evaluation indexes, evaluation indexes are selected from four aspects: safety, accessibility, effectiveness, and security; relevant experts in disaster evaluation are invited to screen out 14 indexes according to their importance and relevance, constituting a system of evaluation indexes for the carrying capacity of emergency shelters. Second, subjective and objective weighting methods are used to determine the weights to compensate for the limitations of a single weighting method. Subjective weighting adopts the Analytic Hierarchy Process (AHP), and objective weighting adopts the Entropy Weighting Method (EWM). A commonly used weight combination is the simple average of the subjective and objective weights, whereas this study adopts the maximum sum of squared deviations theory to solve for the optimal combination coefficients. Then, on the basis of the multi-objective decision-making model, a 30 m × 30 m grid is used as the evaluation unit, and a multi-objective linear weighting method is used to calculate the evaluation scores of the carrying capacity of the emergency shelters; the natural breaks method is then used to grade the carrying capacity. Finally, considering the complexity of post-disaster conditions, including the degree of building damage and potential secondary disasters, and referring to relevant norms and standards, evacuation scenarios are categorized into short-term evacuation and medium- and long-term evacuation scenarios according to the number of evacuees. The carrying capacity under different evacuation scenarios is simulated and analyzed using the proposed method. Results show that the carrying capacity of the Fifth Ring Area of Beijing is good, but the carrying capacity of the Second and Third Ring areas is zero, and an imbalance exists between the rings. This study provides a qualitative and quantitative basis for evaluating the carrying capacity of urban emergency shelters. It also provides decision support for the planning and layout of emergency shelters and the efficient allocation of resources.
Keywords: remote sensing;emergency shelter;Analytic Hierarchy Process (AHP);Entropy Weighting Method (EWM);combined weighting;carrying capacity evaluation;multi-objective decision-making method;scenario simulation;Beijing
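The sketch below, using assumed variable names, shows the entropy weighting step and the linear combination with AHP weights that the abstract above describes; the combination coefficient alpha is left as a free parameter standing in for the value that the maximum sum of squared deviations criterion would select.

```python
import numpy as np

def entropy_weights(X):
    """X: (n_units, n_indexes) matrix of nonnegative, positively oriented indicators."""
    P = X / X.sum(axis=0, keepdims=True)                    # column-normalized proportions
    P = np.clip(P, 1e-12, None)                             # avoid log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))       # information entropy per index
    d = 1.0 - e                                             # degree of diversification
    return d / d.sum()                                      # objective (entropy) weights

def carrying_capacity_scores(X, w_ahp, alpha=0.5):
    """Multi-objective linear weighting with combined subjective/objective weights.
    alpha is illustrative; the study derives it from the maximum sum of squared deviations."""
    w = alpha * np.asarray(w_ahp) + (1.0 - alpha) * entropy_weights(X)
    return X @ w                                            # one score per evaluation grid cell
```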
Abstract: Postearthquake emergency rescue requires the “deep learning + remote sensing” approach to quickly identify landslides from high-resolution remote sensing images after an earthquake. However, an excellent deep learning model is inseparable from the support of a large-scale, high-quality dataset. The size of the dataset and the quality of the target category annotation information it contains directly affect the performance and application effect of the deep learning model. So far, only a few public deep learning landslide identification datasets are available, and they hardly meet the task requirements of researchers who use deep learning methods to conduct landslide identification. Thus, this study uses the visual interpretation method to annotate, at the pixel level, GF-6 remote sensing images and Google Earth images taken after the earthquake, and it uses digital elevation model and optical image data to establish a three-dimensional model to assist the annotation and ensure the accuracy of landslide labeling. Finally, a public deep learning landslide dataset with a spatial resolution of 2 m is established; it can be used to train deep learning semantic segmentation and target detection models. The dataset contains 11581 groups of data, among which the training set contains 9265 groups, the validation set contains 1158 groups, and the test set contains 1158 groups, which far exceeds the data volume of existing public landslide datasets and basically meets the training data volume requirements of most deep learning landslide identification tasks. In addition, to better recognize the boundary and detail information of landslides and improve the accuracy of landslide recognition, this study proposes an improved DeepLabV3+ landslide recognition model based on the DeepLabV3+ network by introducing a channel attention mechanism, a feature map fusion module, and transposed convolution. The feature map fusion module with the channel attention mechanism adjusts the weights between different feature map channels during model training to effectively fuse the low-level and high-level features output by the encoding structure. The transposed convolution uses a learnable convolution kernel to upsample the feature maps so that complex image structure and semantic information are processed well. During model training, this study uses the transfer learning method to transfer the backbone network architecture and parameters of the ResNet50 model trained on the ImageNet dataset to the encoder of the DeepLabV3+ network model to accelerate training. Experimental results show that, compared with mainstream algorithms (FCN, U-Net, SegNet, DeepLabV3+), the improved DeepLabV3+ model extracts the boundary and details of landslides better, and the results are closer to the real labels; its MIoU is 87.24%, recall is 92.47%, precision is 90.35%, F1 score is 90.87%, and pixel accuracy is 98.91%. The code and data for this article are available at https://github.com/ZhaoTong0203/landslides_identification_model_code.git. This research provides robust support for the advancement of deep learning in landslide identification and offers substantial practical assistance for postearthquake emergency rescue efforts.
Keywords: high-resolution remote sensing images;landslide datasets;deep learning;DeepLabV3+;GF-6
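As an aside for readers unfamiliar with the channel attention idea used above, the sketch below is a generic squeeze-and-excitation style module in PyTorch that rescales feature-map channels with learned gates before feature fusion; it is an illustrative stand-in, not the exact module of the improved DeepLabV3+.

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic SE-style channel attention: global pooling -> bottleneck -> sigmoid gates."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                 # squeeze: (N, C, H, W) -> (N, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(self.pool(x))                           # excitation: per-channel weights in (0, 1)
        return x * w                                        # recalibrate channels before fusing features
```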
Abstract: Tidal flats possess significant economic value, environmental importance, and ecological functions, yet their fragile environments are highly susceptible to degradation caused by both natural factors and human activities. Changes in the surface elevation of tidal flats have a direct impact on the stability of the ecological environment, making the monitoring of tidal flat surface deformation essential for understanding how these ecosystems respond to environmental changes and for predicting the evolution of tidal flat patterns, which in turn provides crucial data for the protection and restoration of the ecological environment. However, the measurement methods currently employed for tidal flat surface deformation are predominantly limited to traditional positioning techniques, which are constrained by the complex environmental factors inherent to tidal flats, thus impeding large-scale estimation. In this context, the emerging InSAR technology offers a promising solution to overcome these limitations. Therefore, this study monitors the surface deformation of the Maowei Sea tidal flats by employing time-series InSAR technology, applying this advanced technique to coastal tidal flat areas. Specifically, we utilized the SBAS-InSAR technique in conjunction with PS feature points, drawing on 176 scenes of Sentinel-1A SAR image data from the study area to extract surface deformation information spanning 2015 to 2022. By analyzing data on vegetation distribution, precipitation, sea level rise, and the geological background of the study area, we conducted a comprehensive examination of the overall characteristics, spatiotemporal evolution trends, and influencing factors of surface deformation in the region. In conclusion, by employing the SBAS-InSAR technique in combination with PS feature points and Sentinel-1A data, this study achieved refined monitoring of surface deformation within the study area, revealing long-term spatiotemporal changes in displacement rates and cumulative deformation over a seven-year period, thereby providing robust inversion results that offer valuable scientific insights for the protection and restoration of the Maowei Sea ecological environment. The results show the following: (1) The surface deformation of the tidal flat area demonstrates significant spatial heterogeneity, with distinct subsidence and uplift trends across different regions. Over the study period, the surface deformation rate within the study area ranged from -43.07 to 36.22 mm per annum, with deformation being unevenly distributed and generally exhibiting a slight uplift trend, while specific areas such as Jianshan and Mangrove Bay displayed a subsidence trend, in contrast to the uplift trend observed in the Kangxiling region. (2) Time-series analysis of multiple feature points revealed that the deformation in regions with severe subsidence is characterized by significant temporal heterogeneity, with some areas exhibiting periodic alternation of uplift and subsidence and an overall uneven subsidence trend over time, with a maximum subsidence of -184.9 mm observed over four years. (3) In terms of the factors influencing deformation, the study identifies biological activities, human activities, hydrological processes, sea level rise, and precipitation-induced geomorphological changes as the primary drivers of surface deformation in tidal flats, with these factors collectively contributing to the diversity and complexity of deformation patterns in the region.
Keywords: remote sensing;land subsidence;InSAR;Sentinel;time series analysis;tidal flat;Maowei Sea mangrove
Abstract: The accurate acquisition of the cloud microphysical parameters Cloud Optical Thickness (COT) and Cloud Effective Radius (CER) is crucial to the calculation of the surface or Top-Of-Atmosphere (TOA) radiation budget. However, most studies give little consideration to the cloud phase. Even when the cloud phase is considered, ice cloud particles are assumed to be spherical, which introduces errors into the subsequent radiation calculation. In addition, only a few studies focus on cloud shortwave radiation forcing (SWRF) from satellites; most focus instead on downward shortwave radiation at the surface. To solve the above problems, this study proposes methods for retrieving COT and CER for water and ice clouds as well as for estimating SWRF from the Himawari-8 satellite. In this study, we proposed a novel method for the retrieval of COT and CER for water and ice clouds from measurements of the new-generation geostationary satellite Himawari-8 on the basis of radiative transfer theory and the optimal estimation method. The irregularly shaped ice particle model named Voronoi, which has been widely used in Japan Aerospace Exploration Agency satellite missions, such as Himawari-8 and GCOM-C, was used in our ice cloud model. In addition, a method for estimating SWRF at the surface and the TOA was proposed on the basis of a look-up table (calculated by the radiative transfer model RSTAT), with the retrieved COT and CER as inputs. Finally, the COT and CER retrieved from Himawari-8 in this study were validated against the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection-6 (C6) cloud product, and the SWRF estimated from Himawari-8 by this study was validated against the Clouds and the Earth's Radiant Energy System (CERES) Level-3 (L3) SYN product. Validation with the MODIS C6 cloud product for 4 days in different seasons (2016-01-01, 2016-04-01, 2016-07-01, 2016-10-01) shows that our retrieved COT and CER for water and ice clouds from Himawari-8 are in good agreement with the MODIS product, with correlation coefficient (R) values for COT of 0.68 and 0.77, respectively. For CER, the different ice cloud models used in MODIS and this study lead to a lower R value. For the validation of SWRF with the CERES L3 SYN product, the SWRF at the surface and the TOA estimated by this study shows good agreement, with R values of 0.97 and 0.98 and root mean square error values of 15.0 and 16.6 W m-2, respectively. Meanwhile, mean bias error (MBE) values of -5.6 W m-2 and -8.5 W m-2 indicate that our SWRF results have a slight underestimation. Our proposed methods for retrieving COT and CER, as well as SWRF at the surface and the TOA, from Himawari-8 are effective. Although our ice cloud model (Voronoi) is different from that of MODIS, the final SWRF validation still shows that our cloud products have high accuracy in other aspects. This research can provide an important reference for subsequent full radiation budget (shortwave plus longwave) estimation. Detailed products are available on our homepage (http://www.slrss.cn/care_zh/[2022-08-31]).
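To make the look-up-table step above concrete, the sketch below interpolates a precomputed SWRF table over a (COT, CER, solar zenith angle) grid; the table itself would be filled offline by radiative transfer runs, and all array names and grid resolutions here are assumptions made for illustration.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Assumed axes of a precomputed look-up table (values filled offline by a radiative transfer model).
cot_axis = np.linspace(0.1, 100.0, 60)       # cloud optical thickness
cer_axis = np.linspace(2.0, 60.0, 30)        # cloud effective radius (micrometers)
sza_axis = np.linspace(0.0, 80.0, 17)        # solar zenith angle (degrees)
swrf_table = np.zeros((60, 30, 17))          # placeholder for tabulated SWRF (W m-2)

lut = RegularGridInterpolator((cot_axis, cer_axis, sza_axis), swrf_table,
                              bounds_error=False, fill_value=None)

def estimate_swrf(cot, cer, sza):
    """Trilinear interpolation of the LUT at the retrieved COT/CER and the scene geometry."""
    return lut(np.column_stack([cot, cer, sza]))
```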
Abstract: Ocean internal waves are a commonly observed and potentially catastrophic mesoscale oceanic phenomenon that attracts great attention because of its considerable threat to marine military activities and marine engineering. With the rapid development of science and technology, remote sensing detection methods for ocean internal waves have attracted increasing attention. At present, the remote sensing methods used for internal wave observation can be divided by frequency band into Synthetic Aperture Radar (SAR), visible light, and infrared. Among them, SAR has the advantages of all-day, all-weather operation and high resolution, making it especially well suited for remote sensing investigation of oceanic internal waves in areas with frequent cloud coverage. To achieve accurate detection of ocean internal waves using SAR images and to solve the problem that conventional detection algorithms are susceptible to SAR speckle noise interference, this study proposes a SAR ocean internal wave detection algorithm based on superpixel segmentation and global saliency features. First, the SAR image is segmented into feature-uniform superpixels using the Simple Linear Iterative Clustering (SLIC) algorithm. The SLIC algorithm combines neighboring pixels with similar features into superpixels. The superpixels not only enhance the continuity between the internal wave pixels but also suppress the speckle noise interference. Then, the gradient feature, grayscale feature, and spatial feature of each superpixel are used to construct the internal wave saliency feature vector and calculate its global saliency. On the basis of the saliency, a threshold segmentation algorithm is used to extract the internal wave superpixels. Experiments are conducted on GF-3 and ERS-1 images, which show that the constructed internal wave saliency feature vector is beneficial for detecting more internal wave stripes. Finally, a label image indicating the internal wave regions is generated in accordance with the spectral characteristics of internal waves and is used to correct the internal wave detection result of the previous step. We conducted a detection experiment on internal wave bright stripes using five SAR images with a resolution of approximately 10 m. The experimental results show that the proposed method has good detection accuracy for these five high-resolution SAR internal wave images. The average F1 score of internal wave detection on the five scenes of experimental data reaches 0.884, and the average false alarm rate is 0.009. By comparing the internal wave detection results and related evaluation indexes of our method with those of the classical Canny operator and the deep learning U-Net method, the effectiveness and robustness of our proposed method in high-resolution SAR ocean internal wave detection are demonstrated, which is crucial to improving the inversion accuracy of internal wave wavelength and amplitude.
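For orientation, the sketch below runs scikit-image's SLIC on a single-channel SAR amplitude image and collects simple per-superpixel statistics of the kind that could enter a saliency feature vector; the segment count, compactness, and chosen statistics are illustrative assumptions, not the parameter settings of the proposed algorithm.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.filters import sobel

def superpixel_features(sar_amplitude, n_segments=2000, compactness=0.1):
    """Segment a grayscale SAR image into superpixels and gather simple per-segment features."""
    labels = slic(sar_amplitude, n_segments=n_segments, compactness=compactness,
                  channel_axis=None, start_label=0)
    grad = sobel(sar_amplitude)                             # gradient-magnitude image
    feats = []
    for k in range(labels.max() + 1):
        mask = labels == k
        ys, xs = np.nonzero(mask)
        feats.append([sar_amplitude[mask].mean(),           # grayscale feature
                      grad[mask].mean(),                     # gradient feature
                      ys.mean(), xs.mean()])                 # spatial (centroid) feature
    return labels, np.asarray(feats)
```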
Abstract: Satellite remote sensing data represented by TROPOMI atmospheric composition products show good potential for the estimation of near-surface O3 concentrations. Given the limitations of cloud cover and inversion algorithms, TROPOMI atmospheric composition products contain missing data, resulting in low coverage of the estimation results. Therefore, the DINEOF method was used to reconstruct the missing cells of the TROPOMI NO2, CO, and HCHO original data products, and the maximum daily 8 h average O3 concentration (MDA8 O3) over the Chinese mainland was estimated at high coverage from 2019 to 2021 on the basis of XGBoost. In this study, three schemes for improving the coverage of the O3 model estimation results are compared. Scheme 1 reconstructs the missing cells of the TROPOMI NO2, CO, and HCHO original data products using the DINEOF method and uses the reconstructed data to model and estimate O3. Scheme 2 is based on the TROPOMI NO2, CO, and HCHO original data products: null values are assigned to their missing cells, and only the other characteristic variables are entered to model and estimate O3. Scheme 3 combines modeling results that contain the TROPOMI NO2, CO, and HCHO original data products with modeling results that do not contain these products but include the other characteristic variables. Experiments show that the results of Scheme 1 are the best; its 10-fold cross-validation results are R2 = 0.86 and RMSE = 15.86 μg/m3. The model accuracy is basically the same as that of Scheme 2 and higher than that of Scheme 3, and the model accuracy in the reconstruction region is the highest (training set R2 = 0.82, RMSE = 15.57 μg/m3). When heavy O3 pollution occurs in the reconstruction region (concentration greater than 160 μg/m3), the model's underestimation of high values is remarkably reduced, and the spatial distribution of the results is more reasonable. The average coverage of the near-surface MDA8 O3 estimated in Scheme 1 increased from 33.6% to 97.2% for 2019 to 2021. Using the TROPOMI NO2, CO, and HCHO reconstructed data products to model and estimate O3 can improve model performance and the coverage of the model results.
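A minimal sketch of the estimation setup described above is given below; the hyperparameters are placeholders, and the feature matrix X is assumed to already contain the reconstructed TROPOMI variables together with the other predictors, so this illustrates the 10-fold cross-validation protocol rather than the study's exact configuration.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score, mean_squared_error

def cross_validate_mda8(X, y, n_splits=10):
    """10-fold cross-validation of an XGBoost regressor for MDA8 O3 estimation."""
    r2s, rmses = [], []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model = xgb.XGBRegressor(n_estimators=500, max_depth=8, learning_rate=0.05,
                                 subsample=0.8, colsample_bytree=0.8)
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        r2s.append(r2_score(y[test_idx], pred))
        rmses.append(np.sqrt(mean_squared_error(y[test_idx], pred)))
    return float(np.mean(r2s)), float(np.mean(rmses))
```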
Abstract: Fully Polarimetric Synthetic Aperture Radar (PolSAR) imagery can provide rich polarimetric information; however, given the limitations of the system's signal bandwidth and the physical size of the antenna, the spatial resolution of the SAR imaging system is restricted while multiple polarization channels are acquired. To solve this problem, on the basis of a deep learning framework, this study proposes a multiscale attention-based PolSAR super-resolution network (MS-PSRN), which performs super-resolution reconstruction on low-resolution fully polarimetric SAR images to generate fully polarimetric SAR images with high spatial resolution. Under this super-resolution reconstruction framework, this study uses a multiscale architecture to fully extract the feature information of objects at different scales. On this basis, a spatial attention mechanism and a channel attention mechanism are introduced to recalibrate the feature maps, which are used to enhance the reconstruction of spatial details and to improve the ability to preserve polarimetric information, respectively. Two attention-mechanism embedding methods, i.e., joint and separated, are proposed to cope with the changes in the spatial size and number of the feature maps processed by the encoder and decoder. This study also introduces a residual information distillation mechanism, which extracts discriminative features through feature distillation and compresses model parameters at the same time. In addition, an adaptive loss function is proposed to constrain the network training process and improve the model's numerical fitting ability and edge information preservation ability. In this study, the proposed method is verified on two datasets, namely, simulated data and real data produced from RADARSAT-2 images. The experimental results on spatial information show that the proposed method is superior to the comparison algorithms in terms of visual results and quantitative indicators and has higher texture detail reconstruction accuracy and lower reconstruction error. The polarimetric information preservation test shows that the proposed method can effectively preserve the polarimetric information of PolSAR images while improving the spatial resolution.
Abstract: An airborne pendulum large-view-field infrared scanner has the advantages of a large field of view and high-resolution images. However, it may be affected by various factors during sensor installation, transportation, and flight, which may change the position centers and angles between the sensor and the POS system. Therefore, geometric calibration must be performed to improve the geometric quality of the images acquired by the sensor. Furthermore, this scanner produces left-pendulum and right-pendulum images during the flight. Moreover, the left-pendulum and right-pendulum images have different imaging integration directions, which may have different influences on imaging quality. Therefore, this study presents an external geometric calibration method that considers the different influences of the different pendulum directions. By considering the different influences of the different pendulum directions of this airborne pendulum large-view-field infrared scanner, a geometric external calibration model is introduced, and it is characterized by the following: (1) The installation errors of the sensor and POS are considered. Moreover, the left- and right-pendulum images are used to calculate the installation parameters. In particular, different calibration parameters are used for the right- and left-pendulum images, with the aim of compensating for the different influences caused by the different imaging integration directions. (2) The calibration parameters for the installation errors of the sensor and POS are regarded as pseudo-observations because the installation errors of the sensor and POS are generally small. By regarding the calibration parameters as pseudo-observations, the calibration model avoids unreasonably large estimated parameters. The proposed method is applied to the calibration of the images collected by this airborne pendulum large-view-field infrared scanner. Experimental results show that the calibration model can effectively improve the geometric positioning accuracy of the images. The absolute average back-projection residuals of the control points in the left- and right-pendulum images are improved to the subpixel level.
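For readers unfamiliar with the pseudo-observation device described above, a generic way to express it in least-squares adjustment notation (not the paper's exact parameterization) is to augment the image-observation equations with prior "observations" of the calibration parameters x, centered on their nominal values x_0 and weighted by P_x, so that the solution is pulled toward small installation corrections:

$$\hat{x} = \arg\min_{x}\; (A x - l)^{\mathsf{T}} P\, (A x - l) \;+\; (x - x_0)^{\mathsf{T}} P_x\, (x - x_0),$$

where A and l are the design matrix and misclosure vector of the back-projection observations with weight matrix P; a sufficiently large P_x keeps the estimated boresight and offset corrections within physically reasonable bounds.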
Abstract: In the context of continued global warming, the risks of drought have increased remarkably, causing tremendous impacts on the sustainability of natural ecosystems and socioeconomic systems. Remote sensing-based vegetation products are widely used to capture terrestrial vegetation dynamics and reflect the response of ecosystems to drought events. The Vegetation Condition Index (VCI) derived from long-term vegetation products is one of the most popular indices for drought monitoring. VCI calculated from multisource vegetation products has been applied in drought monitoring applications. However, little attention has been paid to quantifying the uncertainty that these products introduce into the monitoring results. This study aims to explore the uncertainty of drought monitoring using VCI derived from multisource remotely sensed vegetation products by considering the impacts of differences in sensors, in the physical definitions of vegetation parameters, and in the historical time spans of products on the VCI results. In this study, the uncertainty of multisource remote sensing-based vegetation products applied to drought monitoring was analyzed and evaluated by taking the middle reaches of the Yangtze River as an example. Specifically, on the basis of experimental settings with different sensors (MODIS, AVHRR), different vegetation parameters (NDVI, EVI, LAI, VOD), and different time spans (5, 10, and 20 years), the corresponding VCI time series were calculated. Then, the correlation coefficient (r) and root mean square deviation between the VCI time series under the different experimental settings were calculated to quantify the uncertainty of multisource vegetation remote sensing products for drought monitoring. Possible explanations for the quantified uncertainty were further explored in the study. (1) The VCI time series calculated from NDVI products based on different sensors show considerable inconsistencies over most of the study area, with weak correlation and large overall deviations. The inconsistent long-term trend patterns between MODIS-NDVI and AVHRR-NDVI over the study area might account for the large uncertainty. (2) The differences due to different vegetation parameter products are much smaller than those due to different sensors, but differences still exist over specific regions, with strong correlations between the VCI time series calculated on the basis of NDVI and EVI products and of NDVI and LAI products, respectively, while the VCI time series calculated on the basis of NDVI and VOD products show significant differences in most regions. The saturation effect of NDVI still affects VCI calculation over highly vegetated areas. (3) The VCI time series calculated on the basis of different time spans are highly consistent with each other, and the larger the time span of the products, the smaller the differences in the VCI changes. In summary, when vegetation remote sensing products are used for drought monitoring, the consistency characteristics among multisource vegetation remote sensing products must be carefully addressed to ensure the validity of the monitoring results. Similar analyses should be further expanded to the global scale and to more vegetation products to quantify the uncertainty systematically.
Keywords: Vegetation Condition Index (VCI);drought monitoring;uncertainty analysis;remote sensing of vegetation;remote sensing products
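For reference, the VCI at pixel i and time step t is conventionally computed by normalizing the current vegetation parameter against its historical extremes for that time of year, which is why the sensor, the parameter definition, and the length of the historical record all shape the result directly:

$$\mathrm{VCI}_{i,t} = \frac{V_{i,t} - V_{i,\mathrm{min}}(t)}{V_{i,\mathrm{max}}(t) - V_{i,\mathrm{min}}(t)} \times 100,$$

where V is the vegetation parameter (NDVI, EVI, LAI, or VOD) and the minimum and maximum are taken, for the same time of year, over the product's historical time span.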
Abstract: Wheat is an important food crop in China, and the acquisition of its distribution and the estimation of its planting area are crucial to ensuring national food security and development planning. As a powerful tool for large-scale land cover extraction and temporal and spatial dynamic monitoring, remote sensing has remarkable advantages in the field of wheat planting area estimation. However, limited by the spatial resolution of remote sensing images and influenced by various factors in the transmission of electromagnetic radiation, wheat is often mixed with other ground objects and forms mixed pixels. This spectral mixing limits the accuracy of wheat planting area estimation. Meanwhile, endmember spectral variability also makes traditional spectral unmixing methods perform poorly. Thus, this study proposes a Self-supervised Learning-based Spectral Unmixing algorithm (SLSU) to alleviate the influence of spectral mixing and endmember spectral variability on wheat planting area estimation. First, a variational autoencoder is used to achieve unsupervised interpretation of the endmember spectral variability and to generate the endmember library. Then, the abundances corresponding to the various endmembers are estimated by using an alternating least-squares strategy and fully constrained least squares. Finally, the unmixing results are corrected on the basis of the spatial neighborhood by using the probabilistic relaxation labeling algorithm to further improve the accuracy of spectral unmixing and wheat extraction. Three typical wheat planting areas in Xinxiang, Henan Province were selected as experimental areas, and the wheat planting area was obtained from Sentinel-2 images. The extraction accuracy was evaluated against wheat distribution data measured in the field, and the results showed that the median value of the wheat extraction's relative error and the R2 score are close to 1.3 pixels and 1.00, respectively, which is remarkably better than the results extracted by traditional spectral unmixing algorithms (including fully constrained least squares, the extended linear mixing model, and so on) and traditional supervised learning classification methods (such as support vector machines and random forests). The proposed SLSU algorithm can improve the accuracy and stability of wheat planting area estimation and provides an effective method for crop distribution extraction and planting area estimation.
Keywords: remote sensing;self-supervised learning;variational autoencoder;spectral mixing;feature extraction;crop area estimation
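To illustrate the fully constrained least squares step named above (abundance non-negativity and sum-to-one constraints), the sketch below uses the common device of appending a scaled row of ones to the endmember matrix and solving with non-negative least squares; the endmember library construction and the variational/alternating parts of SLSU are not shown, and delta is an assumed tuning constant.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(E, x, delta=1e3):
    """Fully constrained least squares unmixing of one pixel.
    E: (n_bands, n_endmembers) endmember matrix; x: (n_bands,) pixel spectrum."""
    E_aug = np.vstack([E, delta * np.ones((1, E.shape[1]))])   # append scaled sum-to-one constraint row
    x_aug = np.append(x, delta)                                # matching target for the constraint row
    a, _ = nnls(E_aug, x_aug)                                  # non-negativity enforced by NNLS
    return a / a.sum()                                         # renormalize so abundances sum to one
```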
Abstract: Compact Polarimetric Synthetic Aperture Radar (CP-SAR) is a new type of SAR that has attracted many researchers, especially with regard to the application of CP-SAR data. However, only a few studies have explored forest aboveground biomass (AGB) retrieval using CP-SAR information. In consideration of global climate change and the goals of achieving peak carbon emissions and carbon neutrality, the accurate inversion of forest AGB has become urgent in recent years. This study aims to explore the feasibility of applying CP-SAR data to forest AGB inversion. In this study, we took Xiaoshao Forest Farm in Yiliang County as the test site, using CP-SAR data simulated from quad-polarimetric GF-3 data in four modes, i.e., Stokes1 mode (Stokes-related parameters extracted from horizontal transmission and dual-orthogonal linear reception), Stokes2 mode (Stokes-related parameters extracted from vertical transmission and dual-orthogonal linear reception), π/4 linear mode (π/4 transmission and orthogonal linear reception), and CTLR mode (circular transmission and dual-orthogonal linear reception), to explore the potential of CP-SAR data in forest AGB estimation. First, several SAR parameters of the various modes were extracted on the basis of wave dichotomy theory; then, the k-nearest neighbor algorithm with fast iterative feature selection (KNN-FIFS) was applied to estimate the forest AGB in the study area. Finally, the accuracy of the KNN-FIFS inversion results was verified using the leave-one-out cross-validation method. An R2 of 0.28 and an RMSE of 16.36 t/hm2 were acquired for the forest AGB estimation using Stokes1 mode, and the corresponding optimal feature combination was γ, , ; for Stokes2 mode, an R2 of 0.35 and an RMSE of 14.96 t/hm2 were obtained, and the corresponding optimal feature combination was P2, γ, m1, P1. Compared with the Stokes1 and Stokes2 modes, similar performance was shown by the π/4 mode for forest AGB estimation: the R2 value was 0.34, the RMSE was 15.21 t/hm2, and the corresponding optimal feature combination was , , vs1, , . Among the four CP-SAR modes, CTLR mode exhibited the best performance in forest AGB inversion, with an R2 of 0.52 and an RMSE of 13.02 t/hm2, and the corresponding optimal feature combination was , . The forest AGB inversion result combining the four sets of CP-SAR parameters showed remarkable improvement, with an R2 of 0.58 and an RMSE of 12.16 t/hm2. The CTLR CP-SAR mode outperformed the other modes in forest AGB estimation, and when the parameters extracted from the four CP-SAR modes were combined and applied to forest AGB estimation, the improvement in the inversion result was remarkable. KNN-FIFS is suitable for forest AGB estimation from CP-SAR parameters, and no considerable difference was found between the estimation results obtained using CTLR CP-SAR data and quad-polarimetric SAR data. Among all the extracted CP-SAR parameters, the degree of linear polarization and the power of the linear polarization component at a tilt angle of 45 or 135 degrees showed the best performance in the forest AGB estimation because both of them were selected as optimal features in all four modes. This result reveals that they can better characterize forest AGB changes.
Meanwhile, the parameters that reflect the forest density to a certain extent (vs1), the parameters that reflect the characteristics of the forest scattering direction, and the parameters that represent the degree of forest depolarization all perform well in the forest AGB inversion.
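As background on the "Stokes-related parameters" referred to above, for a compact-pol acquisition with received fields E_1 and E_2 on two orthogonal linear channels, the wave Stokes vector and the degrees of polarization are conventionally written as follows (a generic formulation from compact-pol theory; sign conventions and the exact parameter list vary between studies and are not taken from this paper):

$$g_0 = \langle |E_1|^2 + |E_2|^2 \rangle,\quad
g_1 = \langle |E_1|^2 - |E_2|^2 \rangle,\quad
g_2 = 2\,\mathrm{Re}\langle E_1 E_2^{*} \rangle,\quad
g_3 = -2\,\mathrm{Im}\langle E_1 E_2^{*} \rangle,$$

$$m = \frac{\sqrt{g_1^2 + g_2^2 + g_3^2}}{g_0}, \qquad
m_l = \frac{\sqrt{g_1^2 + g_2^2}}{g_0},$$

where m is the total degree of polarization, m_l the degree of linear polarization, and the wave dichotomy separates the fully polarized power m·g_0 from the unpolarized remainder (1 − m)·g_0.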