Abstract: Geographic objects typically include both physical and human elements. The big data produced by remote sensing and ubiquitous social media provide rich sources for classifying these two types of objects. The extraction of physical objects through remote sensing classification and the extraction and classification of social media information from web text are the current mainstream methods of extracting geographic objects. The former is supported by image processing technology, whereas the latter relies on natural language processing technology. With the application of artificial intelligence methods such as machine learning, the classification of these two types of elements shares an increasing amount of common ground. Taking the evolution of machine learning methods as a thread, this study compared the remote sensing classification of single- and multiple-element physical geographic elements with the natural language processing classification of web text elements. Since the 1940s, the development of machine learning has passed through five stages: germination, development, bottleneck, recovery, and outbreak. Machine learning and related information classification methods have become a focus of current research. We described the principles by which machine learning methods classify geographic elements and divided the classification process into data acquisition, data preprocessing, feature construction or model training, and accuracy evaluation. We observed many similarities between physical-element-oriented remote sensing classification and human-element-oriented text classification in terms of process and model. However, text and remote sensing classification also differ in their data and tasks.
Using single-object classification, compound-object classification, and microblog social media topic classification as three examples, we further examined how different geographic element classification tasks are completed. We built a pixel-based CNN model to classify the water bodies in the Tuul River region of Mongolia. Land cover classification mainly adopts random forest, decision tree, maximum likelihood, support vector machine, and pixel-object-knowledge methods to map global land cover. Social media classification uses latent Dirichlet allocation and a random forest algorithm to classify public sentiment on COVID-19 topics in microblogs. From the discussion, we noted similarities and differences in the use of machine learning methods for classifying the two types of geographic elements. Remote sensing and text classification are broadly consistent, and remote sensing image classification and web text classification can learn from each other in many cases. The differences between these methods lie in their data processing focus and targets. Specifically, text classification focuses on word segmentation and word vector construction, whereas image classification focuses on obtaining feature information such as the spectra, textures, and band indices of the target objects. The combination of geographic element classification and artificial intelligence has considerable potential. With the development of big data and the mining and use of multisource heterogeneous data, multimodal learning that jointly uses text and images can provide new directions and ideas for geographic object research. The integrated fusion of geoscience domain knowledge and deep learning methods is expected to become a mainstream approach for advancing remote sensing information extraction.
Mutual reference between the classification methods for remote sensing and social media big data can expand the applications of the intelligent classification of physical and human geographic elements.
Keywords: geographic element classification; physical geography elements; human geography elements; machine learning; remote sensing classification; web text classification; natural language processing
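The "accuracy evaluation" stage shared by both classification pipelines above is typically reported as overall accuracy and Cohen's kappa from a confusion matrix. A minimal illustrative sketch (not code from the study; labels are hypothetical class indices):

```python
# Accuracy-evaluation sketch for a classification task: build a confusion
# matrix from true/predicted labels, then derive overall accuracy and
# Cohen's kappa, the two metrics most commonly reported for remote sensing
# and text classification alike.

def confusion_matrix(y_true, y_pred, n_classes):
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1  # rows: true class, columns: predicted class
    return m

def overall_accuracy(m):
    total = sum(sum(row) for row in m)
    correct = sum(m[i][i] for i in range(len(m)))
    return correct / total

def kappa(m):
    n = len(m)
    total = sum(sum(row) for row in m)
    po = sum(m[i][i] for i in range(n)) / total          # observed agreement
    pe = sum(sum(m[i]) * sum(m[r][i] for r in range(n))  # chance agreement
             for i in range(n)) / total ** 2
    return (po - pe) / (1 - pe)
```

The same two functions apply whether the labels come from pixels of an image or from classified microblog posts.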
Abstract: As an important process in the nitrogen (N) cycle, inorganic N deposition from the atmosphere can be observed at ground sites or simulated by an atmospheric Chemical Transport Model (CTM). However, the atmospheric N deposition fluxes obtained by these two methods at the regional scale show large discrepancies. The atmospheric NO2 and NH3 columns remotely sensed by satellites have the potential to estimate N depositions with high spatial and temporal resolutions. This study reviewed the sensors that provide atmospheric NO2 and NH3 columns and the models that estimate atmospheric inorganic N depositions based on satellite observations. Several sensors have provided daily NO2 and NH3 columns in the troposphere since 1995, with spatial resolutions ranging from 7×3.5 km² to 320×40 km². In general, the NO2 column datasets have higher spatial resolutions, cover longer periods, and show better data precision than the NH3 column datasets. An inferential model is often used to estimate dry nitrogen depositions from the N-related component concentrations at the ground level and the deposition velocity (often estimated by an atmospheric CTM). The N-related components in the atmosphere include the gases NO2, HNO3, and NH3 and the particulates NO3- and NH4+, whose oxidized and reduced forms can be estimated from the satellite observations of NO2 and NH3 columns, respectively.
Three methods were used to convert the atmospheric columns of the N-related components to ground-level concentrations: taking the columns directly as the ground-level concentrations, constructing a statistical model that relates the columns and the relevant influencing factors to the ground-level concentrations, and using the vertical profiles of N-related components simulated by a CTM to estimate the ground-level concentrations. Three methods were also used to estimate wet inorganic nitrogen depositions: an empirical formula, a statistical model, and a process model. The process model can reflect the combined effects of the N-related component concentrations in the atmosphere and the precipitation amounts. In this model, the key parameter, the washout coefficient of N-related components from the atmosphere, was estimated using a mixed model fitted to a large number of N deposition observations at ground sites. Each model for estimating inorganic N depositions has its respective advantages and disadvantages. The methods that use the atmospheric profiles of N-related components to estimate dry N depositions and the process model to estimate wet N depositions have been applied to systematically estimate inorganic N depositions at the regional scale, including gas, particulate, and wet depositions, and have obtained reliable estimates of N depositions across China. Although some methods have been developed to estimate inorganic N depositions based on satellite observations, they still show some limitations. To accurately estimate N deposition at the regional scale, future work should develop satellite sensors with higher spatial and temporal resolutions for N-related components in the atmosphere and new models that estimate the key parameters with high precision.
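The inferential model described above reduces, per species, to flux = deposition velocity × ground-level concentration, summed over the N-related components. A minimal sketch under hypothetical values (the species names and numbers below are illustrative, not the study's data):

```python
# Inferential-model sketch for dry N deposition: per-species flux is the
# deposition velocity (e.g., from a CTM) times the ground-level
# concentration (e.g., derived from satellite columns); the total is the
# sum over the N-related species (NO2, HNO3, NH3, NO3-, NH4+).

def dry_deposition_flux(concentrations, velocities):
    """concentrations: species -> ground-level concentration
    velocities: species -> deposition velocity for that species."""
    return sum(concentrations[s] * velocities[s] for s in concentrations)
```

Units follow the inputs; for instance, concentrations in μg N m⁻³ with velocities in m s⁻¹ yield a flux in μg N m⁻² s⁻¹.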
Abstract: Based on the total CO2 column products of the GOSAT satellite from 2010 to 2020, this paper analyzes the spatio-temporal variation characteristics of atmospheric CO2 concentration, including its global spatial distribution, latitudinal distribution, regional distribution, and interannual and seasonal variation characteristics. Comparison and verification against TCCON ground-based observations demonstrate that GOSAT XCO2 products have high observation accuracy and can be applied to large-scale analysis. The results show that the CO2 concentration over global land areas rose continuously from 2010 to 2020, with the annual mean rising from 387.42 ppm to 410.32 ppm at an average annual growth rate of about 2.33 ppm/a. Taking 2015 as the boundary, the average annual growth from 2010 to 2015 was 2.12 ppm, while that from 2016 to 2020 was 2.46 ppm. In 2016, the average CO2 concentration over global land areas exceeded 400 ppm for the first time, with an annual growth of over 3 ppm, the highest in the past decade. In terms of seasonal changes, the northern hemisphere reaches its maximum in spring and its minimum in late summer and early autumn, while the southern hemisphere shows the opposite phase. The seasonal amplitude in the northern hemisphere is much larger than that in the southern hemisphere, and the higher the latitude, the greater the fluctuation. The latitudinal distribution is distinctive, showing an overall increase and then decrease from south to north. Influenced by factors such as cloud and data quality control, a significant dip appears near the equator, with peaks at 0°—10°N and 30°N—40°N. Regional distributions differ significantly, with the highest annual mean occurring in tropical South America and the lowest in northern North America, a difference of nearly 30 ppm.
In terms of annual growth rate, the temperate zone of Asia has the highest (2.36 ppm/a) and the northern region of Asia the lowest (2.27 ppm/a). In the past decade, the rise in CO2 concentration over global terrestrial regions has been intensifying. At the same time, owing to differences in regional development, the spatial differences in CO2 concentration are also intensifying globally. Several insights can be drawn from these results. The CO2 concentration driven by industrial activities continues to rise, exacerbating global warming. The necessity of and pressure on humans to control industrial emissions are increasing to ensure the sustainable development of humanity and of the Earth on which humans depend. Promoted by international frameworks and organizations such as the United Nations and the IPCC, the direction of human development and energy development has become a crucial theme, and the status and role of clean and alternative energy are particularly important. Global CO2 monitoring satellites provide an effective and comprehensive data source for this work, and considerable room remains for mining and utilizing these data. The combination of satellite CO2 observations with atmospheric transport models and assimilation methods can provide more effective tools for global monitoring and simulation.
Keywords: GOSAT satellite; CO2 concentration; global land area; spatio-temporal variation characteristics; atmosphere; long time series; remote sensing
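A mean annual growth rate such as the ≈2.33 ppm/a quoted above is the kind of quantity obtained as a least-squares slope over the annual-mean series. A sketch with synthetic values (not GOSAT data):

```python
# Least-squares slope over an annual-mean series, the usual way a mean
# annual growth rate (ppm per year) is derived from a decadal record.

def annual_growth_rate(years, means):
    n = len(years)
    ybar = sum(years) / n
    mbar = sum(means) / n
    num = sum((y - ybar) * (m - mbar) for y, m in zip(years, means))
    den = sum((y - ybar) ** 2 for y in years)
    return num / den  # slope in (units of means) per year
```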
Abstract: Following the successful implementation of the Air Pollution Prevention and Control Action Plan (2013—2017) and the Three-Year Action Plan to Win the Blue Sky Defense War (2018—2020), the concentrations of five major pollutants (i.e., PM2.5, PM10, SO2, NO2, and CO), but not ozone, dropped significantly in most cities in China. Increasing ground-level ozone concentrations have become a key factor restricting the improvement of ambient air quality, especially during summer. Compared with measurements from ground-based monitoring sites, satellite remote sensing can obtain spatially continuous total column ozone. However, given that ozone is abundant in the stratosphere, ground-level ozone contributes very little to the total column ozone observed from space. Therefore, satellite total column ozone products provide limited information for estimating ground-level ozone concentrations. In this study, by combining TROPOMI ozone precursor (NO2 and HCHO) products, ERA5 meteorological parameters, and ground-based monitoring data, a machine learning model was developed to estimate the daily maximum 8-hour average ground-level ozone concentration over China from 2019 to 2020. Comparing the performance of three ensemble learning methods, namely, extreme gradient boosting (XGBoost), extremely randomized trees (ERT), and gradient boosting regression trees (GBRT), the overall 10-fold cross-validation R2 values averaged over 2019 and 2020 all exceed 0.89.
Although XGBoost showed the best agreement between model predictions and observations, with an average RMSE and MAE of 15.77 μg/m³ and 10.53 μg/m³, respectively, the ERT method was eventually selected to model the daily maximum 8-hour average ground-level ozone concentration in consideration of the spatial reasonableness of its predicted distribution. Owing to the proactive emission reduction measures implemented by the Chinese government and the impact of the COVID-19 pandemic, the rising trend of ozone concentration over the years has been reversed. The annual average ground-level ozone concentration over China in 2020 reached 107.41±18.6 μg/m³, which is 1.85 μg/m³ less than that recorded in 2019 (109.26±19.71 μg/m³). Severe surface ozone pollution events occur frequently from May to September of every year because the high temperatures during these months promote photochemical reactions. The estimated ground-level ozone concentrations in the Beijing-Tianjin-Hebei, Yangtze River Delta, Pearl River Delta, and Chengdu-Chongqing regions are significantly higher than those in their surrounding areas, making these regions the key areas for ozone pollution prevention and control.
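The 10-fold cross-validation R² used to compare the ensemble models above can be sketched as follows; the regression model itself is abstracted away, and only the validation bookkeeping and the R² metric are shown:

```python
# Cross-validation sketch: split sample indices into k folds (each fold
# serves once as the held-out validation set) and score predictions with
# the coefficient of determination R2.

def r_squared(obs, pred):
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

def k_fold_indices(n_samples, k=10):
    folds = [[] for _ in range(k)]
    for i in range(n_samples):
        folds[i % k].append(i)  # round-robin assignment to k folds
    return folds
```

In practice the per-fold R² values are averaged to give the overall cross-validation score reported in the abstract.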
Abstract: The outbreak of the novel coronavirus disease (COVID-19) emerged in December 2019 and developed into a global pandemic. To curb the spread of the virus, countries and regions around the world adopted lockdown measures, which sharply reduced social and economic activities and significantly lowered the concentrations of pollutants in the atmosphere. The environmental trace gases monitoring instrument (EMI) onboard the Chinese GaoFen-5 (GF-5) satellite is the first Chinese hyperspectral sensor designed for monitoring atmospheric trace gases. This study analyzed the tropospheric nitrogen dioxide (NO2) changes in typical areas worldwide during the COVID-19 pandemic based on EMI NO2 observations. To evaluate the application potential of EMI NO2, this study compared the emission reductions monitored by EMI with those monitored by the OMI and TROPOMI NO2 products. Eastern China, Europe, Iran, and South Korea were selected as the main research areas. The study period covered January 1 to March 31 in 2019 and 2020 and was divided into multiple sub-windows according to the differences in the outbreak and lockdown times in these regions. The comparison of regional time-averaged EMI NO2 in 2019 and 2020 reveals that EMI captured obvious NO2 reduction trends in most areas of Eastern China (-13.6%), Europe (-10.2%), Iran (-7.9%), and South Korea (-13%) from January 1 to March 24. The average relative deviation of the EMI NO2 reduction percentage from OMI and TROPOMI is less than 12.3% in Eastern China and 13% in Europe. EMI NO2 decreased significantly before and after lockdowns within the same year and over the same period between the two years. To further evaluate the quantitative expression ability of EMI for urban-scale NO2 emission reduction, the emission reductions of EMI NO2 in several typical cities were calculated and compared with OMI and TROPOMI. The NO2 reductions from EMI are highly consistent with those from OMI and TROPOMI.
The average relative differences between EMI and OMI (TROPOMI) at the regional and urban scales are less than 13% and 9%, respectively. In addition to GF-5, the hyperspectral observation satellite (GF-5B) launched on September 7, 2021 and the atmospheric environment monitoring satellite (DQ1) launched on April 16, 2022 are also equipped with an EMI sensor. Preliminary results show that these satellites have good data quality and a detection capability comparable with that of the GF-5(01) EMI. Other satellites planned for launch, such as the high-precision greenhouse gas detection satellite (DQ2) and the GF-5 replacement satellite (GF501A), will also carry an EMI sensor to continue monitoring polluting gases worldwide and to provide a new source of data for global pollution monitoring. This study assesses the ability and practical application value of EMI in global NO2 monitoring and provides a reference for the development and application of similar instruments.
Abstract: Northern China was hit by a strong dust storm of unprecedented intensity and influence around the middle of March 2021. In this study, the horizontal and vertical development and the transport path of this dust event were investigated using multi-source satellite observations, PM10 concentration data, and the HYSPLIT model. FY-4A/AGRI DST can efficiently identify dust in cloudless weather, FY-4A/AGRI IDDI-BK can semi-quantitatively characterize dust intensity, and S5P/TROPOMI UVAI has the advantage of detecting dust under clouds. Therefore, in this paper, both DST and IDDI-BK were used to identify the affected area and the intensity characteristics of the dust event, while UVAI data were used to supplement the under-cloud dust information that DST may have missed. After obtaining the horizontal distribution characteristics, this paper then analyzed the vertical distribution height of dust using CALIPSO VFM data. The transport path of dust was also described using the forward- and back-trajectory analyses of the HYSPLIT model together with trajectory clustering. The severe dust storm first appeared in central Mongolia on March 14. Widespread dust storms appeared over central and western Inner Mongolia, central Gansu, Ningxia, northern Shaanxi, northern Shanxi, Hebei, Jilin, northwestern Liaoning, and northwestern Heilongjiang on March 15. From March 16 to 18, the coverage of the extreme dust storm in northern China narrowed westward, while a strong dust storm continued to strike western Inner Mongolia and central Gansu. A huge dust storm then attacked central Xinjiang, and a light dust storm affected parts of central and east China as well as the coastal waters of the Bohai and Yellow Seas.
On March 18, the dust storm dissipated. Results show that central Mongolia was the main source of the dust storms over northern China between March 14 and 15 and that central Xinjiang together with central and western Inner Mongolia were the major contributors to the widespread dust storms reported from March 16 to 18. The dust transport routes are primarily separated into three branches, namely, the northwest, west, and north routes. The dust transport height is mostly concentrated at 1 km—3 km and 3 km—10 km, both of which are above the dust interception height of ecological environment projects such as the Three-North Shelter Forest Program. A combination of multi-source satellite products and numerical models can serve as a scientific reference for forecasting dust storms and for the regional joint prevention of dust pollution.
Abstract: With the increasing frequency of air pollution incidents worldwide, many studies have focused on the disease burden of long-term exposure to PM2.5 pollution. In China and India, the two most populous developing countries in the world, the disease burden attributable to PM2.5 exposure is particularly serious. These countries therefore need a multi-year, comprehensive dataset of PM2.5-related premature deaths to support their future air pollution prevention policies. However, only a few studies have explored this topic over the past years. To fill this gap, this study analyzed the spatial and temporal patterns of PM2.5 concentrations and the changes in population exposure to PM2.5 in China and India over the past 19 years (2000—2018) using high-resolution (0.01°×0.01°) satellite data. Combined with the Integrated Exposure Response (IER) model, this study comprehensively assessed the premature deaths from six diseases due to long-term PM2.5 exposure: acute lower respiratory infection (ALRI), chronic obstructive pulmonary disease (COPD), type 2 diabetes (DIA), ischemic heart disease (IHD), lung cancer (LNC), and stroke (STR). Results show that the areas with high PM2.5 concentrations in China were concentrated in Xinjiang, the Sichuan Basin, the North China Plain, and the Yangtze River Economic Belt. The annual population-weighted PM2.5 concentration showed a decreasing trend (50 μg·m⁻³ in 2000 and 40.8 μg·m⁻³ in 2018). In India, high PM2.5 concentrations were concentrated in the north, including Punjab, Haryana, and Uttar Pradesh. The annual population-weighted PM2.5 concentration increased from 51.5 μg·m⁻³ in 2000 to 76.4 μg·m⁻³ in 2018. The number of premature deaths caused by PM2.5 exposure in China increased by 34.1% from 908000 in 2000 to 1378000 in 2018, with the annual average totaling 1228000.
STR was the major contributor to total premature deaths in the country, accounting for 45.9% (563000) of all fatalities. In India, the number of premature deaths attributable to PM2.5 increased rapidly from 343000 in 2000 to 750000 in 2018, a net increase of 407000. The annual average was 506000 premature deaths, the majority of which were attributed to IHD and STR, accounting for 39.9% (202000) and 25.5% (129000) of all deaths, respectively. Moreover, DIA was responsible for 29000 (2.3%) and 30000 (6%) premature deaths in China and India, respectively, and therefore should not be ignored. Overall, this study established a long-term, high-resolution dataset of premature deaths due to PM2.5 exposure in China and India. The number of premature deaths caused by air pollution remains high in both countries, which combine high PM2.5 concentrations with high population density, thus necessitating stricter air pollution control policies. These results provide a reference for the formulation of air pollution policies in these countries. However, in estimating premature deaths due to PM2.5, the baseline mortality rate did not account for the differences caused by the levels of development and medical treatment within a country. Therefore, in future work, the researchers will incorporate sub-national baseline mortality rates when assessing premature deaths.
Keywords: remote sensing; PM2.5; disease burden; premature deaths; China; India; long time series
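The attributable-mortality calculation behind figures like those above applies an attributable fraction, AF = (RR − 1) / RR, to the baseline mortality rate and the exposed population. A hedged sketch with hypothetical numbers; in the study, the relative risk RR comes from the IER exposure-response curves per disease (ALRI, COPD, DIA, IHD, LNC, STR):

```python
# Attributable-deaths sketch for long-term PM2.5 exposure: the relative
# risk RR (here a hypothetical input) yields the population attributable
# fraction, which scales the baseline deaths in the exposed population.

def attributable_deaths(relative_risk, baseline_mortality_rate, population):
    af = (relative_risk - 1.0) / relative_risk  # attributable fraction
    return baseline_mortality_rate * af * population
```

Summing this quantity over diseases and grid cells gives a national premature-death total of the kind reported in the abstract.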
Abstract: As an important air pollutant, formaldehyde (HCHO) not only affects the quality of the regional atmospheric environment but also harms human health. However, current HCHO monitoring is mainly based on surface in situ measurements, and the lack of vertical observations significantly hinders an in-depth understanding of the vertical evolution and chemical processes of HCHO, especially in the upper atmosphere. In this study, we retrieved HCHO vertical profiles by applying the optimal estimation method (OEM) to hyperspectral remote sensing instrument measurements at the Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, from December 2020 to November 2021. In the vertical direction, HCHO varied in the order lower boundary layer (6.61 ppb) > middle boundary layer (4.76 ppb) > upper boundary layer (3.00 ppb). Seasonally, HCHO varied in the order summer (8.58 ppb) > spring (8.50 ppb) > autumn (8.05 ppb) > winter (5.43 ppb). The HCHO profiles in different seasons showed similar diurnal variation characteristics. The highest HCHO concentrations usually appeared at 200 m and exhibited a double-peak pattern, with the first peak occurring at 08:00—10:00 and the second around 14:00. We also established a multiple linear regression model based on the hyperspectral remote sensing instrument data to decompose the measured HCHO into background, primary, and secondary contributions. The secondary contribution (~64.10%) was the highest, followed by the background (~20.20%) and primary (~15.70%) contributions. The proportion of secondary HCHO in summer was significantly higher than that in winter, mainly due to enhanced VOC emissions and the higher reactivity of the photochemical process. By analyzing the diurnal variations of the HCHO profiles on weekdays and weekends, we concluded that the HCHO emission level in Guangzhou did not significantly decrease during weekends.
This study can help further the understanding of the spatiotemporal distribution characteristics of HCHO and provide data support for the formulation of pollution prevention and control measures.
Abstract: Surface UV-B irradiance can have a very important impact on ecosystems. In the process of modern industrialization, human activities have led to significant changes in the atmospheric system, such as stratospheric ozone depletion, the emergence of ozone holes, and tropospheric composite air pollution, and thus to changes in global surface UV-B irradiance, all of which significantly affect human health (e.g., skin cancers) and the ecological environment (e.g., reduced crop yields). Monitoring surface UV-B irradiance has therefore become essential. Surface UV-B irradiance can be monitored in two ways, namely, ground-based monitoring and satellite monitoring. Ground-based monitoring of surface UV-B irradiance has several disadvantages, including sparse site distribution and short operation times. Satellite remote sensing, in contrast, can realize long-term monitoring of global surface UV-B irradiance, which is important for ecosystem assessment and atmospheric science research. Given the scarcity of surface UV-B irradiance products based on domestic satellites, this paper carries out a preliminary retrieval of surface UV-B irradiance from the environmental trace gas monitoring instrument (EMI) on the GF-5 satellite. First, a sensitivity analysis of the factors influencing surface UV-B irradiance was performed using the SCIATRAN model to reduce the time consumed in lookup table construction and lookup interpolation. On the basis of this sensitivity analysis, the input parameter nodes of the clear-sky surface UV-B irradiance lookup table were reasonably determined, and the EMI surface UV-B irradiance under clear-sky conditions was calculated. Second, the correction methods applied in the presence of clouds and aerosols were examined.
The surface UV-B irradiance under clear-sky conditions was corrected using a cloud correction method based on Lambertian equivalent reflectance, and an aerosol correction method based on the aerosol index was used to obtain the surface UV-B irradiance under actual conditions. After applying these corrections, the global surface UV-B irradiance was eventually retrieved. Third, to verify its correctness, the results of the EMI surface UV-B irradiance algorithm were compared with the OMI satellite product and WOUDC site data for the same period using linear fitting. The correlation coefficient R with the OMI data exceeded 0.9, and that with the WOUDC site data exceeded 0.91, suggesting that the results of the proposed algorithm agree well with both the OMI and WOUDC data. However, several shortcomings were noted. For example, the EMI surface UV-B irradiance retrieval algorithm shows high accuracy in regions with low surface albedo, whereas it overestimates in regions with high surface albedo. The EMI surface UV-B irradiance products are highly accurate and can provide a research basis for subsequent releases of surface UV-B irradiance and related products. These results also demonstrate the capability of the payload in monitoring the spatial and temporal distribution of global surface UV radiation and provide a basis for the long-term monitoring of the spatial and temporal variations of surface UV-B irradiance.
Keywords: remote sensing; environmental trace gas monitoring instrument; surface UV-B irradiance; SCIATRAN; lookup table
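The lookup-table step described above precomputes clear-sky irradiance on a grid of input nodes and interpolates at query time instead of re-running the radiative transfer model. An illustrative one-dimensional sketch; the node and irradiance values here are made up, and the real EMI table spans several axes (e.g., ozone column, albedo, solar zenith angle) chosen per the sensitivity analysis:

```python
# Lookup-table sketch: linear interpolation between precomputed nodes,
# the cheap stand-in for a full radiative transfer calculation at
# retrieval time.

def interp_lookup(nodes, values, x):
    """Linear interpolation in a 1D lookup table (nodes ascending)."""
    for i in range(len(nodes) - 1):
        if nodes[i] <= x <= nodes[i + 1]:
            w = (x - nodes[i]) / (nodes[i + 1] - nodes[i])
            return (1 - w) * values[i] + w * values[i + 1]
    raise ValueError("query outside lookup-table range")
```

The same interpolation is applied axis by axis in a multidimensional table, which is why reducing the number of nodes via the sensitivity analysis directly reduces construction and query cost.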
Abstract: Landslides are natural disasters driven by various factors that often cause catastrophic damage and casualties. A huge earthquake can trigger numerous landslides. Therefore, landslide extraction is critical for providing timely information for post-disaster decision making. Remote sensing is a convenient tool for landslide information acquisition. However, landslide features are so intricate that landslide extraction still mainly relies on the visual interpretation of aerial photographs or high-resolution remote sensing images, which requires vast manpower. Several landslide extraction methods are available today, including pixel-based methods, which have relatively low accuracy, and object-oriented methods, whose parameters need to be set subjectively. With the continuous development of deep learning in image semantic segmentation, precise and automatic binary classification of remote sensing images has become possible. Many researchers have investigated the use of deep learning for landslide extraction in different areas. However, the relatively small amount of landslide data can easily lead to model overfitting. Transfer learning, in which knowledge is transferred from a source domain to a target domain, can alleviate this problem by using knowledge from the source domain to improve performance on the target task. A transfer learning deep network was therefore designed to improve the accuracy of landslide extraction. First, three GF-1 images taken from 2013 to 2015 in the research area were processed successively by geometric correction, registration, and image fusion to obtain four-band images (red, green, blue, and near-infrared) with a resolution of 2 m. Second, a suitable network was designed. A ResNet pretrained on ImageNet was chosen as the encoder, and the decoder of LinkNet, whose residual structure and bypass links can improve performance, was selected as the decoder.
The bypass links in the decoder can also address the spatial information lost in the max-pooling of the encoder, and the residual structure allows the network to learn complex features. After pre-training the ResNet on ImageNet, we adjusted the number of input channels of the first convolution layer to 4, dropped the last fully connected layer, and then combined it with the decoder to form our network. We eventually inputted remote sensing landslide images to finetune the model. When testing different network depths, we found that the network does not always perform better as the depth increases; we chose ResNet50 as our encoder given its peak performance. Afterward, we compared our method with SVM, an improved U-Net, and a mainstream transfer learning method, AlbuNet. The results suggest that deep learning methods outperform SVM, whereas transfer learning methods outperform deep learning methods trained from scratch on landslide images. The proposed method outperforms SVM in terms of precision, recall, and F1 measure by 17.16%, 18.58%, and 17.4%, respectively; outperforms the improved U-Net by 2.98%, 6.35%, and 4.61%; and outperforms AlbuNet by 0.9%, 1.98%, and 1.48%. The ResNet50 encoder combined with the LinkNet decoder thus forms a landslide extraction network with higher accuracy than the transfer learning network AlbuNet and ResNet encoders of other depths. Transferring knowledge learned from ImageNet can also improve the performance of a landslide extraction deep learning network. The proposed method can be conveniently used for follow-up landslide risk assessment, disaster investigation, disaster warning, and decision making.
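The precision, recall, and F1 comparisons above are computed from binary (landslide / non-landslide) pixel labels. A minimal sketch of that evaluation, with hypothetical labels:

```python
# Binary-segmentation evaluation sketch: count true positives, false
# positives, and false negatives over pixel labels, then derive the
# precision, recall, and F1 measure used to compare the networks.

def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```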
Abstract: To address the problems in the automatic identification of building damage types in complex post-earthquake lidar point cloud scenes, satisfy the timeliness and accuracy requirements of emergency rescue operations, move beyond traditional handcrafted seismic damage feature extraction, fully mine the seismic damage information of buildings in the disaster area from point cloud data, and realize the automatic and intelligent recognition of buildings, this paper proposes a building seismic damage recognition model based on the PointNet++ network. This study also establishes collapsed, partially collapsed, and uncollapsed point cloud training datasets that can provide an important scientific basis for earthquake emergency rescue and disaster assessment. This paper applies a 3D point cloud deep learning method to identify seismic-damaged buildings. Given the uneven sample sizes, and based on the characteristics of the PointNet++ network and the original point cloud sample shapes, we propose a sample enhancement method that involves inverse distance interpolation, symmetry, and top projection to increase the number of collapsed and partially collapsed samples. Sample enhancement not only increases the number of collapsed and partially collapsed buildings, making the samples more comprehensive and diverse, but also solves the problem of uneven samples. As a result, the classification accuracies for collapse and partial collapse are improved by about 30% and 20%, respectively, and the overall average classification accuracy and kappa coefficient of the model are improved by more than 10%. The differences in classification accuracy between collapsed and uncollapsed and between partially collapsed and uncollapsed are reduced from 40% and 30% to about 15%. (1) The characteristics of the point cloud samples and the network model should be fully considered when building a seismic damage training dataset.
In this paper, we assume that PointNet++ has the same learning characteristics for the same geometric shape under different scales and spatial rotations. We design a sample enhancement method for the collapse and partial collapse categories that not only increases the number of samples but also enriches their damage forms, thus effectively improving the classification accuracy of partial collapse. (2) The sample size and its balance have a strong influence on the recognition performance of the earthquake damage recognition model established with the PointNet++ network. Only when the sample size is sufficient and the number of samples in each category is relatively uniform can a good classification and recognition performance be achieved. However, the sample size is not the decisive factor for improving the classification performance. Even when the sample sizes are uniform, the accuracy for the uncollapsed category is still higher than those for the other two categories. The classification performance is also related to other factors, such as sample selection, network design, and internal feature learning methods, which warrant further exploration.
Keywords: remote sensing; classification and recognition; PointNet++; sample enhancement; LiDAR point cloud; seismically damaged buildings
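The symmetry part of the sample enhancement described above can be sketched with simple coordinate transforms; inverse distance interpolation and top projection are omitted here, and the assumption (stated in the abstract) is that PointNet++ treats mirrored and rotated copies of a shape as the same damage form:

```python
import numpy as np

def mirror_points(points, axis=0):
    """Symmetry enhancement: reflect an (N, 3) building point cloud about
    a coordinate plane to yield a new, geometrically plausible sample."""
    out = points.copy()
    out[:, axis] = -out[:, axis]
    return out

def rotate_z(points, angle_rad):
    """Rotation about the vertical axis; rotated copies further enlarge
    the collapsed and partially collapsed sample sets."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T
```

Each transform preserves point count and shape while changing pose, which is what makes the augmented samples valid training data rather than noise.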
Abstract: The water level changes in the lake groups on the Qinghai‒Tibetan Plateau are crucial for understanding global climate change and regional glacier melting. However, in-situ measurements are still lacking due to remote locations and material requirements. The current in-orbit optical satellites as well as radar and laser altimetry satellites have relatively coarse spatiotemporal resolutions and poor retrieval accuracies, both of which can be addressed by the future Surface Water and Ocean Topography (SWOT) mission. Therefore, the main objective of our study is to assess the water level monitoring potential of SWOT for the largest lake in China, namely, Qinghai Lake on the Qinghai‒Tibetan Plateau. This study obtained SWOT-like water level time series covering the years 2010 to 2018 for Qinghai Lake from in-situ measurements, the optical altimetry dataset, and radar altimetry products by using the CNES SWOT Hydrology Toolbox. A systematic performance evaluation of the SWOT-like lake level was then conducted by comparing this level with the reference height. SWOT-like water volume time series under different input scenarios for Qinghai Lake were estimated based on its hypsometric curve and were subsequently validated using the results derived from the Gravity Recovery and Climate Experiment (GRACE) mission and the WaterGAP global hydrological model (WGHM). In general, the SWOT-like water level time series can capture the seasonal and annual water level variations with an r and NSE ranging from 0.9 to 1.0 and from 0.8 to 0.99, respectively. Meanwhile, the SWOT-like lake level time series accurately demonstrate the long-term trends of water level in Qinghai Lake for the years 2010 to 2018 at various scales.
The SWOT-inferred water volume changes under multiple forcing scenarios show change patterns that are comparable with the GRACE and WGHM results. The SWOT-like water level for Qinghai Lake for the years 2010 to 2018 shows satisfactory accuracy at both seasonal and annual scales, as revealed by comparisons with in-situ measurements, the optical altimetry dataset, and the radar altimetry product. The patterns of the SWOT-inferred water volume changes are also similar to the GRACE and WGHM results, thereby indicating the great water level and volume monitoring potential of the future SWOT mission. In our future work, we will derive time-varying water extent maps of Qinghai Lake by using multi-source optical imageries, including those from the Landsat, Sentinel, and Gaofen series satellites. We also plan to carry out further experiments for all lakes (> 1 km2) on the Qinghai‒Tibetan Plateau to assess the large-scale lake level monitoring potential of SWOT.
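The r and NSE scores used to validate the SWOT-like levels against the reference heights are standard hydrological skill metrics; a minimal sketch (the sample water levels are illustrative values, not data from the study):

```python
import numpy as np

def pearson_r(sim, obs):
    """Linear correlation between simulated and reference water levels."""
    return float(np.corrcoef(sim, obs)[0, 1])

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 means a perfect match, so r/NSE close
    to 1 indicates the SWOT-like series closely tracks the reference."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([3194.2, 3194.5, 3194.9, 3195.1])  # toy levels (m)
sim = obs + 0.05                                  # a biased simulation
```

Unlike r, NSE penalizes systematic bias, so reporting both (as the evaluation above does) separates trend agreement from absolute accuracy.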
Abstract: Nuclear power is actively used in China to promote construction projects. Monitoring and evaluating the impact of the thermal plume from nuclear power plants on the surrounding water environment not only provides an important decision-making basis for the planning, construction, and operation of nuclear-power-generating units but also holds significant value for marine water quality and ecological protection. An objective and accurate reference temperature extraction serves as the basis for the remote sensing monitoring and evaluation of the thermal pollution of the thermal plume. In this paper, the satellite-marine synergistic monitoring of the Ningde nuclear power plant is taken as an example. By analyzing the vertical profile characteristics of 72 discrete temperature measurement points, the profiles are divided into 7 morphologies, from which the stable temperature points are identified. A satellite-marine synergistic method for extracting the reference temperature, hereinafter referred to as the measured stable temperature method, is then built based on these morphologies. This method can solve some problems associated with reference temperature settings during the implementation of marine measurement monitoring and thermal infrared remote sensing monitoring, such as the independence of the reference temperature calculation and the lack of satellite-marine synergistic processing, both of which reduce the comparability of monitoring results. Results of the satellite-marine synchronous monitoring show that the thermal plume extracted by the measured stable temperature method achieves the highest consistency in distribution range and intensity-to-diffusion trend, and the extraction performance of this method is significantly better than those of the other reference temperature extraction methods. The proposed method also achieves the smallest difference of 1.53 km2 in the satellite-marine monitoring area and reports a 4- to 10-fold improvement in accuracy compared with the other methods.
By comparison, the satellite-marine area differences of the multi-point average temperature method (alongshore multi-point), inlet temperature method, neighborhood temperature substitution method, corrected bay average method, radius area average temperature method (5 km radius), and minimum temperature method in the temperature rise mixing zone are 11.95, 11.12, 7.69, 6.61, 8.46, and 14.93 km2, respectively. The measured stable temperature method can objectively and accurately extract the distribution range and size of the thermal pollution of the thermal plume from a nuclear power plant, hence carrying an important application value in monitoring the thermal pollution of the thermal plume.
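Once a reference temperature is fixed, delineating the temperature-rise mixing zone reduces to thresholding the temperature rise and summing the pixel areas. A simplified sketch (the grid values, threshold, and pixel size below are made up for illustration; the actual method's profile analysis is not shown):

```python
import numpy as np

def plume_mixing_zone(sst, reference_temp, rise_threshold, pixel_area_km2):
    """Delineate the temperature-rise mixing zone on a sea-surface
    temperature grid: pixels exceeding the reference temperature by more
    than `rise_threshold` (deg C) are plume, and their total area is
    returned in km^2."""
    mask = (sst - reference_temp) > rise_threshold
    return mask, float(mask.sum()) * pixel_area_km2

# Toy 3x3 SST grid (deg C) against a reference temperature of 20 deg C:
sst = np.array([[20.1, 21.0, 24.2],
                [25.0, 20.3, 20.0],
                [20.2, 26.1, 20.4]])
mask, area = plume_mixing_zone(sst, reference_temp=20.0,
                               rise_threshold=3.0, pixel_area_km2=0.01)
```

Because the mask depends directly on the reference temperature, any bias in that reference shifts the delineated plume area, which is why the choice of extraction method drives the km2 differences reported above.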
Abstract: With the rapid development of remote sensing technology, the continuous accumulation of remote sensing time series data provides important data support for studying land cover classification. Extracting discriminative classification features from remote sensing time series data by using deep learning methods has become a hot research topic. Deep learning methods require large amounts of training data, but sample imbalance prevents the commonly used recurrent and convolutional networks from achieving high accuracies in categories with few samples. To address this problem, this paper introduces the self-attention mechanism originating from the field of natural language processing to the classification of multispectral remote sensing time series data with the aim of extracting deep temporal features at a global scale. This mechanism differs from recurrent networks, which extract temporal features by using the previous time information along the temporal dimension, and from convolutional networks, which extract temporal features within a local time neighborhood. We construct a new feature extraction network based on the transformer encoder, which initially employs the self-attention mechanism in natural language processing, and then compare this network with the long short-term memory (LSTM)-based feature extraction network and the temporal-convolutional-neural-network-based feature extraction network to evaluate the effectiveness of the self-attention-based method in improving the classification accuracy of small-sample categories. To achieve a fair comparison, we adopt a generic classification framework consisting of data input, feature extraction network, classifier, and classification output, and we use different models with various hyperparameters as feature extraction networks.
We then evaluate the classification performance of the different methods on the TiSeLaC public multispectral remote sensing time series dataset by using per-class accuracy, overall accuracy (OA), and mean intersection over union (mIoU) as metrics. To obtain a proper measure of the different methods, we choose the top three mIoU hyperparameter settings for each model and then calculate the average metrics as the final result. Results show that the self-attention-based network outperforms both the recurrent and convolutional networks. This network achieves a 92.98% OA and an 80.60% mIoU, which are 1.25% and 1.32% higher than those achieved by the recurrent and convolutional networks, respectively. In terms of per-class accuracy, while the self-attention-based network achieves accuracies equivalent to those of the recurrent and convolutional networks in the large-sample categories, with differences of less than 0.74%, the proposed network significantly improves the classification accuracies in the small-sample categories by large margins ranging from 2.47% to 5.41%. This paper introduces the self-attention mechanism to the classification of multispectral remote sensing time series data to address the problem of low classification accuracy in small-sample categories caused by sample imbalance. We construct a new temporal feature extraction network based on the self-attention mechanism to globally extract temporal features from time series and design a set of objective comparison experiments. Experiment results show that by globally extracting temporal features from time series, instead of using previous time information (as in recurrent networks) or focusing on a local time neighborhood (as in convolutional networks), the self-attention-based network achieves the same accuracy in majority-sample categories and effectively improves the accuracy in small-sample categories.
Therefore, the self-attention-based network can play an important role in the future classification of remote sensing time series, and further research on this network is critical.
Keywords: self-attention mechanism; deep learning; remote sensing time series; land cover classification; sample imbalance
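The global temporal feature extraction that distinguishes the self-attention mechanism from recurrent and convolutional networks can be sketched in a few lines: every time step attends to all T steps rather than to its predecessors or a local window (a minimal numpy sketch, not the paper's transformer encoder):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a (T, d) time series: each
    output step is a weighted mix of all T steps, so temporal features
    are extracted globally."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = softmax(scores, axis=-1)  # (T, T); each row sums to 1
    return weights @ v, weights

rng = np.random.default_rng(0)
T, d = 6, 4                       # 6 acquisition dates, 4 features each
x = rng.normal(size=(T, d))
eye = np.eye(d)                   # toy projection matrices
out, w = self_attention(x, eye, eye, eye)
```

The (T, T) attention matrix is dense and strictly positive, which is the formal sense in which features are extracted "at a global scale": no time step is ever out of reach of another.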
Abstract: The Orbita Hyperspectral (OHS)-2 and OHS-3 satellites were successfully launched on April 26, 2018 and September 19, 2019, respectively. While data quality evaluation serves as the basis of remote sensing data applications, no systematic studies on the radiation quality evaluation of OHS have been conducted thus far. OHS products have 32 bands, so evaluating these products with a subjective evaluation method consumes much manpower, material resources, and time. To address this problem, this study explored the use of an objective evaluation method in assessing the radiation quality of OHS level 1B images. The radiation qualities of OHS-2 and OHS-3 images were evaluated at the same time by applying the objective evaluation method to regions covered by representative features. On the basis of four objective indexes, namely, radiation accuracy, image definition (EVA), signal-to-noise ratio (SNR), and entropy, the radiation qualities of OHS level 1B images were evaluated, and the radiation qualities of OHS and GF-5 (440–1000 nm) images were compared. Results show that the radiation quality of GF-5 is higher than that of OHS: the EVA of OHS is about 54.5% of that of GF-5, the entropy of OHS ranges from 6 to 10, which is about 91.5% of that of GF-5, and the SNR of OHS is about 86.5% of that of GF-5. Therefore, OHS and GF-5 data can complement each other, and the radiation quality of OHS data can be improved by increasing the quantization levels, reducing the spectral resolution, and optimizing the sensor response. This study provides a data quality reference for the applications of OHS images. The radiation qualities of OHS-2C and OHS-3B were evaluated by using four objective indexes, namely, radiation accuracy, EVA, SNR, and entropy. The radiation quality of GF-5 was also compared with that of OHS.
Although the radiation quality of OHS is lower than that of GF-5 due to the restriction of spectral resolution, and the SNR and EVA of GF-5 are obviously better than those of OHS, the entropies of OHS and GF-5 are very similar. Given the short revisit cycle of OHS (6 days for a single satellite and 2 days for the 4-satellite constellation) and its high spatial resolution (10 m), OHS images can complement GF-5 images to a certain degree in remote sensing applications. In the future, we plan to study the spectral quality and atmospheric correction of OHS in terms of quantitative remote sensing and water quality monitoring.
Keywords: remote sensing; Orbita Hyperspectral; radiation accuracy; image definition; signal-to-noise ratio; Shannon entropy
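Two of the objective indexes above have compact definitions. A sketch of Shannon entropy on a quantized band and a simple mean/std SNR estimate over a homogeneous region (one of several common SNR estimators; the study's exact estimator is not specified in the abstract):

```python
import numpy as np

def shannon_entropy(band, levels=256):
    """Shannon entropy (bits) of a quantized band; higher values indicate
    richer gray-level information."""
    hist, _ = np.histogram(band, bins=levels, range=(0, levels))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def snr_estimate(region):
    """Mean/standard-deviation SNR estimate over a homogeneous region."""
    return float(region.mean() / region.std())
```

A band with a single gray level carries zero bits, while two equally frequent levels carry exactly one bit, which is the sense in which entropy measures information richness.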
Abstract: GF-5 is the first hyperspectral satellite in China that can acquire comprehensive observations of the atmosphere and land surface. The Advanced Hyperspectral Imager (AHSI) onboard GF-5 is a sensor that can acquire data covering the visible near-infrared (VNIR) and short-wave infrared (SWIR) wavelengths with a very fine spectral resolution (i.e., 5 nm for VNIR and 10 nm for SWIR). However, the spatial resolution of GF-5 AHSI data (i.e., 30 m) is relatively coarse for the extraction of land cover information in several cases, such as small-sized buildings and roads in urban areas. To produce GF-5 data with fine spatial and spectral resolutions, in this paper, GF-5 hyperspectral images were downscaled to 10 m by spatial-spectral image fusion with 10 m Sentinel-2 multispectral images. To deal with the large computational burden of the advanced Information Loss Guided Image Fusion (ILGIF) method and the ubiquitous effect of the point spread function (PSF), this paper also introduced a fast and accurate method for downscaling GF-5 data. A fast ILGIF (FILGIF) method was proposed. In this method, the original GF-5 hyperspectral data were transformed to a new feature space via principal component analysis (PCA), and the ILGIF-based spatial-spectral image fusion was implemented for only the first few principal components. The fused components, coupled with the remaining ones, were transformed back to the original space to produce the 10 m downscaled results. The optimal PSF for the scale transformation between the 10 m Sentinel-2 and 30 m GF-5 data was estimated adaptively for each band of GF-5 to enhance the downscaling. Experimental results show that by fusing with the 10 m Sentinel-2 data, the 30 m GF-5 hyperspectral data can be downscaled effectively to 10 m. The FILGIF and ILGIF methods obtain greater accuracy than the area-to-point regression kriging (ATPRK) and approximate ATPRK (AATPRK) methods.
Moreover, the computational cost of FILGIF is 30 times lower than that of ILGIF, and the accuracy of the downscaling results can be improved by considering the PSF effect adaptively for each band. Sentinel-2 images are suitable for downscaling GF-5 hyperspectral images. The proposed FILGIF method can achieve an accuracy comparable to that of ILGIF while significantly reducing the computational cost. Highly accurate downscaling results are obtained when the PSF effect is considered appropriately.
Keywords: remote sensing; GF-5; Sentinel-2; downscaling; spatial-spectral image fusion; geostatistics; point spread function (PSF)
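The structural trick behind FILGIF, running the expensive fusion on only the first few principal components and transforming back, can be sketched as below. This is a simplified illustration of the transform-fuse-inverse structure only: `fuse_fn` is a hypothetical placeholder for the ILGIF step, and the spatial resolution change and PSF handling of the real method are omitted:

```python
import numpy as np

def pca_fuse(hs, fuse_fn, n_components=3):
    """Move the hyperspectral data (pixels x bands) to PCA space, apply
    fusion only to the first `n_components` principal components, and
    transform back with the remaining components untouched."""
    mean = hs.mean(axis=0)
    centered = hs - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pcs = centered @ vt.T               # all principal components
    pcs[:, :n_components] = fuse_fn(pcs[:, :n_components])
    return pcs @ vt + mean              # back to the original band space

rng = np.random.default_rng(1)
hs = rng.normal(size=(50, 8))           # toy cube: 50 pixels, 8 bands
out = pca_fuse(hs, lambda c: c)         # identity "fusion" round-trips
```

Because the PCA transform is orthogonal, the untouched components pass through losslessly; the 30x speedup comes from fusing a handful of components instead of all bands.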
Abstract: Deep-neural-network-based multi-source remote sensing image recognition systems have been widely used in many military scenarios, such as aerospace intelligence reconnaissance, unmanned aerial vehicles for autonomous environmental cognition, and multimode automatic target recognition systems. Deep learning models rely on the assumption that the training and testing data are drawn from the same distribution. However, these models show poor performance under common corruptions or adversarial attacks. In the remote sensing community, the adversarial robustness of deep-neural-network-based recognition models has not received much attention, thereby increasing the risk for many security-sensitive applications. This article evaluates the adversarial robustness of deep-neural-network-based recognition models for multi-source remote sensing images. First, we discuss the incompleteness of deep learning theory and reveal the presence of great security risks. The independent and identically distributed assumption is often violated, and the system performance cannot be guaranteed under adversarial scenarios. The whole process chain of a deep-neural-network-based image recognition system is then analyzed for vulnerabilities. Second, we introduce several representative algorithms for adversarial example generation under both white- and black-box settings. A gradient-propagation-based visualization method is also proposed for analyzing adversarial attacks. We perform a detailed evaluation of nine deep neural networks across two publicly available remote sensing image datasets. Both optical and SAR remote sensing images are used in our experiments. For each model, we generate seven perturbations, ranging from gradient-based optimization to unsupervised feature distortion, for each testing image. In all cases, we observe a significant reduction in average classification accuracy between the original clean data and their adversarial counterparts.
Apart from the adversarial average recognition accuracy, feature attribution techniques are also adopted to analyze the feature diffusion effect of adversarial attacks, hence contributing to the present understanding of the vulnerability of deep learning models. Experimental results demonstrate that all deep neural networks suffer great losses in classification accuracy when the testing images are adversarial examples. Understanding such adversarial phenomena improves our understanding of the inner workings of deep learning models. Additional efforts are needed to enhance the adversarial robustness of deep learning models.
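Gradient-based white-box attacks of the kind evaluated above follow one template: perturb the input along the sign of the loss gradient. A minimal sketch of the Fast Gradient Sign Method on a logistic-regression "model" (a stand-in, not one of the paper's nine networks; for deep networks the gradient is obtained by backpropagation instead):

```python
import numpy as np

def fgsm_logistic(x, y, w, b, eps):
    """FGSM: move x by eps in the direction of the sign of the
    cross-entropy loss gradient, pushing the score toward the wrong
    class while keeping the perturbation imperceptibly small."""
    z = float(x @ w + b)
    p = 1.0 / (1.0 + np.exp(-z))     # predicted probability of class 1
    grad_x = (p - y) * w             # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0])            # toy weights
x = np.array([2.0, 0.5])             # clean sample, true label y = 1
x_adv = fgsm_logistic(x, y=1.0, w=w, b=0.0, eps=0.1)
```

Even for this linear model, a small eps reliably lowers the score of the true class, which is the mechanism behind the accuracy drops reported across all nine networks.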
Abstract: Due to the limitations associated with the spatial resolution of instruments and complex natural surfaces, spectral unmixing (SU), which identifies the proportions (abundances) of the basic component spectra (endmembers) at the sub-pixel level, has become an important topic in the further development of hyperspectral image analysis. Given that the training procedure can be explained as finding a set of low-dimensional representations (abundances) that reconstruct the data with their corresponding bases (endmembers), autoencoder (AE)-based methods have received much attention in unsupervised hyperspectral unmixing. However, although the existing AE methods can effectively deal with unsupervised unmixing scenarios, their performance has not been satisfactory, and noise and initialization conditions greatly affect their unmixing performance. In this paper, we propose AAENet, a novel network for unsupervised unmixing that is based on the adversarial autoencoder network. First, we design the generator as an end-to-end unmixing network based on AE to obtain meaningful abundances subject to the Abundance Nonnegative Constraint (ANC) and the Abundance Sum-to-one Constraint (ASC). Second, we adopt an adversarial training process to impose the abundance prior on the hidden code vector (abundance), which is equivalent to providing an adaptive training error that guides AAENet toward a highly accurate and interpretable unmixing solution. Experiments on simulated and real hyperspectral data (the Jasper dataset) demonstrate that the proposed algorithm outperforms state-of-the-art methods. The synthetic data are polluted by Gaussian noise at different levels, with the SNR varying from 10 dB to 30 dB at an interval of 10 dB. Each algorithm is run 10 times, and the average and standard deviation are reported.
With an increasing noise level, the proposed algorithm exhibits higher robustness in both the abundance and endmember estimations and achieves the best or comparable results in all cases. In the experiments on real datasets, AAENet not only yields sparse abundances for each region but also interprets the boundaries as combinations of neighboring materials. The best results are obtained in highly mixed scenes. Compared with the traditional AE method, the proposed algorithm greatly enhances the performance and robustness of the model by using the adversarial procedure and incorporating the abundance prior into the framework. The discrimination network is designed to allow the transfer of the potentially intrinsic properties of the abundance prior information. As its main contribution, the proposed method adopts an adversarial training process to impose a prior on the hidden code vector of the autoencoder. The output of the hidden code vector is then guided to produce meaningful samples. Experiments on simulated and real hyperspectral data demonstrate that the proposed algorithm achieves a better unmixing performance compared with state-of-the-art methods.
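The ANC and ASC constraints on the hidden code, and the linear mixing the decoder realizes, can be sketched compactly. A softmax over the endmember dimension is one common way to satisfy both constraints by construction (the abstract does not specify AAENet's exact mechanism, so this is an illustrative assumption):

```python
import numpy as np

def abundances_from_logits(logits):
    """Map unconstrained outputs to valid abundances: the softmax over
    the endmember dimension enforces ANC (nonnegativity) and ASC
    (sum-to-one) by construction."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def linear_mixing(abundances, endmembers):
    """Reconstruct pixels as abundance-weighted sums of endmember
    spectra, as in the linear mixing model."""
    return abundances @ endmembers

rng = np.random.default_rng(2)
logits = rng.normal(size=(5, 3))        # 5 pixels, 3 endmembers
a = abundances_from_logits(logits)
em = rng.normal(size=(3, 20))           # 3 endmember spectra, 20 bands
recon = linear_mixing(a, em)
```

Because every row of the abundance matrix is a valid point on the probability simplex, boundary pixels naturally come out as combinations of neighboring materials, matching the interpretation reported for the real datasets.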