Abstract: Terrestrial ecosystems are recognized as the largest unknown in the missing carbon sink problem. Determining the role of terrestrial ecosystems in the global carbon cycle and understanding their short- and long-term dynamics are therefore of high practical and scientific importance. Remote sensing has attracted increasing attention for estimating net primary production (NPP) at regional and global scales. The light-use efficiency model elaborated by Monteith has been widely used to estimate NPP from remotely sensed data. In such models, NPP is represented as the light-use efficiency multiplied by the absorbed photosynthetically active radiation (APAR), and APAR is linearly related to the fraction of incident photosynthetically active radiation absorbed by vegetation (FPAR). FPAR is therefore a key parameter in NPP models. In existing remote sensing NPP models, FPAR is calculated from vegetation indices: from the Normalized Difference Vegetation Index (NDVI) in the Glo-PEM model and from the Enhanced Vegetation Index (EVI) in the VPM model. It is thus influenced by the properties of the vegetation index, which varies with view angle and with the sensor bands. In fact, by definition, FPAR depends only on the solar incidence angle and the canopy parameters. In this paper, a Monte Carlo algorithm is used to simulate the radiative transfer process in the canopy, taking a spherical leaf angle distribution as an example. On the one hand, we obtain the bidirectional reflectance distribution function at different wavelengths, from which vegetation indices such as the NDVI used in the Glo-PEM model and the EVI used in the VPM model can be calculated; the indices can then be used to study the behavior of the FPAR values applied in those two models. On the other hand, the Monte Carlo algorithm can directly simulate the fraction of photosynthetically active radiation absorbed by the canopy, i.e., FPAR. After simulating FPAR at different wavelengths between 400 nm and 700 nm, an averaged FPAR is obtained by integration. The simulations confirm that FPAR is a function of the solar zenith angle for the leaf area index (LAI) and leaf angle distribution studied. Our results show that FPAR increases with the solar zenith angle, but the magnitude of the increase depends on LAI: when LAI reaches 5, FPAR saturates and changes only slightly with the solar zenith angle. The results also show that the variation of FPAR with LAI depends on the solar zenith angle. For example, at a small solar zenith angle of 10°, FPAR saturates when LAI reaches 7, whereas at a large solar zenith angle of 70°, FPAR saturates when LAI reaches about 3. Our simulations also show that FPAR does not depend on the view angle, which is consistent with the definition of FPAR as the fraction of incident solar energy between 400 nm and 700 nm absorbed by the canopy.
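For reference, the Monteith light-use efficiency formulation discussed above can be written as follows (a standard form; the symbols follow common usage rather than any notation specific to this paper):

$$ \mathrm{NPP} = \varepsilon \cdot \mathrm{APAR} = \varepsilon \cdot \mathrm{FPAR} \cdot \mathrm{PAR}, $$

where ε is the light-use efficiency, PAR the incident photosynthetically active radiation, and FPAR the absorbed fraction whose dependence on solar zenith angle, LAI and view angle is examined in the abstract above.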
Abstract: Synthetic aperture radar (SAR) is an active microwave imaging sensor. It uses pulse compression to obtain high range resolution and aperture synthesis to achieve high azimuth resolution. SAR can obtain high-resolution images that no ordinary radar can match, and it is therefore one of the significant directions of modern radar development. Raw SAR data are difficult to transmit and store because of their large volume, so data compression or reduction techniques are necessary. In this paper, based on the characteristics of SAR raw data coefficients in the Walsh transform domain, we propose an algorithm that allocates a variable bit rate according to the energy distribution of each subband in the Walsh transform domain: Block Adaptive Quantization in the Walsh Transform domain (WHT-BAQ). The WHT-BAQ algorithm is compared with the time-domain BAQ algorithm and the FFT-BAQ algorithm. Using two real SAR raw data sets, compression and decompression are performed with the three algorithms, the quality parameters are obtained, and the computational complexity of the three algorithms is studied. The images corresponding to the three algorithms are also produced. The experiments show that, because bits are allocated according to the energy of each subband in the Walsh transform domain, the SQNR and SDNR of the WHT-BAQ algorithm surpass those of the time-domain BAQ algorithm at the same bit rate: SQNR improves by 1.15-1.48 dB and SDNR by 1.54-1.73 dB. The performance of the WHT-BAQ algorithm is comparable with that of the FFT-BAQ algorithm. At different bit rates, the variation of resolution, PSLR and ISLR is small for all three algorithms. This demonstrates that data compression (at bit rates above 2 bits/sample) hardly influences the resolution of the SAR image, and that the SAR image is not sensitive to the bit rate of the raw data. The computational load of WHT-BAQ slightly exceeds that of BAQ but is much lower than that of FFT-BAQ. In addition, since the Walsh transform has a fast algorithm and requires no multiplications, WHT-BAQ has clear advantages for SAR raw data compression.
Keywords: data compression; Walsh transform; block adaptive quantization; signal-to-noise ratio; bit rate
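A minimal sketch of the subband-energy bit-allocation idea behind WHT-BAQ: a block of raw data is transformed with a Walsh-Hadamard matrix, and each subband is quantized with a bit depth tied to its log-energy. The block size, the number of subbands, the log-variance allocation rule and the uniform quantizer are all illustrative assumptions, not the authors' exact scheme:

```python
import numpy as np
from scipy.linalg import hadamard

def wht_baq_encode(block, total_bits_per_sample=2.0):
    """Toy Walsh-domain block adaptive quantization of one real-valued block.

    block: 1-D array whose length is a power of two.
    Returns the float reconstruction after quantizing Walsh coefficients.
    """
    n = block.size
    H = hadamard(n) / np.sqrt(n)            # orthonormal Walsh-Hadamard transform
    coeffs = H @ block

    # Split coefficients into equal subbands and measure subband energy.
    n_sub = 8
    subbands = coeffs.reshape(n_sub, -1)
    energy = (subbands ** 2).mean(axis=1)

    # Give more bits to high-energy subbands (classic log-variance rule),
    # keeping the average near total_bits_per_sample.
    geo_mean = np.exp(np.log(energy).mean())
    bits = np.clip(np.round(total_bits_per_sample
                            + 0.5 * np.log2(energy / geo_mean)), 1, 8)

    # Uniform quantization of each subband with its own step size.
    recon = np.empty_like(subbands)
    for i, (band, b) in enumerate(zip(subbands, bits)):
        step = (band.max() - band.min()) / (2 ** int(b)) or 1.0
        recon[i] = np.round((band - band.min()) / step) * step + band.min()

    return H.T @ recon.ravel()              # inverse transform to the time domain

# Usage: quantize a random 256-sample block at ~2 bits/sample.
x = np.random.randn(256)
x_hat = wht_baq_encode(x)
print("SQNR [dB]:", 10 * np.log10((x**2).sum() / ((x - x_hat)**2).sum()))
```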
Abstract: Speckle noise appears in synthetic aperture radar (SAR) imagery because of the coherent superposition of electromagnetic waves scattered from the scattering points within a resolution cell. Multi-look processing is usually used to reduce this noise and thus affects SAR ship detection. Single-look SAR images were processed with the multi-look method to obtain multi-look images. By analyzing the equivalent number of looks, the spatial resolution and the ship-sea contrast, we obtained the following results: on the one hand, multi-look processing increases the equivalent number of looks and improves image quality, which is helpful for ship detection; on the other hand, it reduces the spatial resolution and the ship-sea contrast, which is a disadvantage for ship detection. Ship detection was carried out as a case study on C- and L-band HH, HV and VV polarization SIR-C SAR images with different numbers of looks, using two typical detection methods: the window filter method and the K-distribution method. The Figure of Merit (FOM) was used to measure the quality of ship detection. It is concluded that: (1) detection quality with moderate multi-look processing is better than with heavy multi-look processing or with no multi-look processing (single-look); (2) for the window filter method, detection quality is best when the number of looks is 3; (3) for the K-distribution method, detection quality is best when the number of looks is 2 or 3.
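A minimal sketch of the multi-look processing referred to above: non-overlapping averaging of single-look intensity lines, which raises the equivalent number of looks (ENL) while coarsening resolution. The ENL estimator (mean squared over variance on a homogeneous sea region) is the standard definition, not something taken from this paper, and the azimuth-only averaging is an illustrative assumption:

```python
import numpy as np

def multilook(intensity, looks):
    """Average `looks` adjacent azimuth lines of a single-look intensity image."""
    rows = (intensity.shape[0] // looks) * looks
    return intensity[:rows].reshape(-1, looks, intensity.shape[1]).mean(axis=1)

def enl(region):
    """Equivalent number of looks of a homogeneous region: mean^2 / variance."""
    return region.mean() ** 2 / region.var()

# Usage: simulated single-look speckled sea clutter (exponential intensity).
sea = np.random.exponential(scale=1.0, size=(512, 512))
print("ENL 1-look:", round(enl(sea), 2))                 # ~1
print("ENL 3-look:", round(enl(multilook(sea, 3)), 2))   # ~3
```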
Abstract: Terrestrial Laser Scanning (TLS) uses a laser to measure the 3D coordinates of a given region of an object's surface automatically, in a systematic order, at a high rate and in near real time. TLS can obtain a 3D model of an object directly. Because of the high speed with which it acquires unstructured point clouds from all surfaces within the field of view, TLS is used in many fields such as deformation monitoring and 3D modeling of buildings and other large-scale scenes. TLS data take the form of point clouds. Compared with traditional survey methods, TLS is concerned not only with the accuracy of individual point positions but also with the modeling accuracy of the 3D surface. Resolution is therefore an important characteristic of a point cloud: it governs the level of identifiable detail within the scanned data. These resolutions can be classified into angular, range and intensity resolution. The angular resolution acts as an indicator for the others and is the focus of this study. The main issue is the impact of scan range and scan angle on the angular resolution. Based on the principle of TLS, the basic relations and formulas concerning accuracy are derived for different scan ranges and angles, and a Leica HDS3000 laser scanner is then used to verify them in the field. The first test concerns the change of accuracy with scan range. A piece of flat ground is scanned, and a rectangular region is cut from the point cloud to calculate the histogram of distances from the scan points to the scanner. Compared with the theoretical number of scan points, the measured curves are quite similar. The result shows that as the scan range increases, the point density decreases. The second test concerns the change of accuracy with scan angle. Seven targets are set up and scanned at angles from 0 to 60 degrees at the same distance from the scanner. The number of points in each target's point cloud is counted, and the resulting chart is fitted to the theoretical number of points. The data show that as the scan angle increases, the point density decreases. The analysis shows that the point density obtained differs across different parts of a 3D object; a homogeneous point density over the model therefore does not exist. However, according to this research it is feasible to estimate the point density and evaluate the level of detail of a scanned point cloud beforehand, so that blind scanning can be avoided. Besides scan angle and scan distance, other factors such as the shape, size, material and surroundings of the scanned object also affect the point density and the resolution of the point cloud; these factors are not addressed in this study. Another key factor, the intensity resolution of the point cloud, is also not covered in this paper.
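A sketch of the geometric relation behind both field tests above (a standard derivation under the stated assumptions, not a formula quoted from the paper): for a scanner with angular step Δθ measuring a plane at range R and incidence angle α between the beam and the surface normal,

$$ s \approx \frac{R\,\Delta\theta}{\cos\alpha}, \qquad \rho \;\propto\; \frac{\cos\alpha}{(R\,\Delta\theta)^{2}}, $$

where s is the point spacing on the surface along the tilt direction and ρ the resulting point density. The density thus falls with the square of the range and with the cosine of the incidence angle, matching the decreases observed in the two tests.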
Abstract: For data discovery, the Open Geospatial Consortium (OGC) Catalogue Service for the Web (CSW) will be used for all data sources. For access to traditional sensor data systems, the OGC Web Coverage Service (WCS), the OGC Web Feature Service (WFS), and the Open-source Project for a Network Data Access Protocol (OPeNDAP) will be used. For connection to web-enabled sensors, OGC Sensor Web Enablement (SWE) protocols (e.g., Sensor Observation Service, SensorML) will be used. In order to find sensors, clusters of sensors or data in a heterogeneous distributed environment according to criteria such as coincidence of time, space or scale, an architecture for retrieving geospatial sensor data based on the integration of CSW with the Sensor Observation Service (SOS) is put forward. It includes a distributed geospatial sensor observation service (DSOS), a geospatial catalogue service based on the ebXML Registry Information Model (ebRIM), SOS registry and search middleware, and a geospatial sensor service portal. DSOS is a collection of Sensor Observation Services (SOS) in a distributed network environment, packaged as a Web service and deployed in a web container; an API is provided to manage deployed sensors and retrieve sensor data, specifically observation data. It retrieves sensor properties and sensor data with three core operations: GetCapabilities, DescribeSensor, and GetObservation. The GetCapabilities operation returns the metadata of a specified service instance: identifier, provider, operation metadata, filter capabilities, and contents. The DescribeSensor operation returns a detailed sensor description encoded in a sensor information model (SensorML or TML). The GetObservation operation returns observation or survey data encoded in the O&M information model. The service has two types of users: one is the SOS provider, who supplies sensor observation and survey data to the SOS; the other is the sensor data consumer. In this paper, the sensor data consumers are the geospatial catalogue service and the SOS registry middleware. The geospatial catalogue service based on ebRIM is the access component of the SOS and is implemented with the ebRIM profile of CSW. This service allows entities to be registered and discovered by data consumers. It supports the registration and discovery of services, datasets, coverages, features, and service chains. The external interface follows the OGC CSW 2.0 standard, which supports three types of interface: OGCService, Discovery, and Manager. The OGCService interface is the GetCapabilities operation, providing a description of the capabilities of the catalogue service together with the related XML documents. The Discovery interface provides three operations by which clients can discover and obtain the information registered in the catalogue service database. The Manager interface allows users to update catalogue content in "push" or "pull" mode. The SOS registry and search middleware is the coordinator of the system, responsible for SOS search and for registering capability and observation data. It extracts information and generates records for sensor data granules; each record in the registry contains SOS basic information, SOS capability information, or SOS observation data. Sensor data granules are auto-harvested, real-time sensor data granule records are registered, and SOS timing information is updated through the registry middleware. Management and query of registry information are implemented through a space-time index.
The geospatial sensor service portal is the user access point of the system. It is implemented with Java portal technology and provides the following functions: user authorization, user sign-on, and SOS search presentation. The content and sequence of the registration approaches for SOS basic service information, SOS contents and SOS observation data are presented. Deep time-space search and the management of large volumes of observation data are described. A visual search based on Google Maps, an adjoining-sheet table and a meta-index of registration information is presented. A prototype system is designed and implemented according to the OGC CSW specification and the SWE SOS standard, with the following functions: SOS service discovery, SOS basic information registration, SOS capability information registration, SOS observation data registration, SOS registry information management, and SOS registry information query. Registration of the EO-1 SOS is tested, including one SOS record, ten SOS capability content records and 2420 WCSLayer observation records within a specified time. The experimental results show that the approach has three advantages over the previously mentioned geospatial sensor web prototype systems in SOS search, management and integration with other OGC data services. SOS service information is registered as a "ServiceType" and SOS capability content as a "DataType"; both can be discovered by the CSW GetRecords operation. SOS observation data can be registered as "WCSLayer" or "WFSLayer" data granules; they can also be discovered by the CSW GetRecords operation and served through OGC WCS or WFS servers respectively, depending on how they are registered. Through the sensor web data registry middleware, sensor observations can be integrated with other OGC data services. Hence, the registry middleware approach reduces the waiting time when accessing real-time observation data in a sensor-web-based emergency response system. Access to the sensor web system is via SPS and SOS. The next step is to study how to provide an open, self-adaptive geospatial data service by harmonizing SOS, SPS, WCS and WFS.
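As an illustration of the three SOS core operations named above, a minimal client sketch using a key-value pair (KVP) binding over HTTP. The endpoint URL and the offering/procedure identifiers are placeholders, and the exact KVP parameter set varies between SOS versions and servers, so treat this as a sketch of the interaction pattern rather than a definitive client:

```python
import requests

SOS = "http://example.org/sos"  # placeholder endpoint, not a real service

def get_capabilities():
    """Fetch service metadata (identifier, provider, contents, ...)."""
    return requests.get(SOS, params={
        "service": "SOS", "request": "GetCapabilities"}).text

def describe_sensor(procedure):
    """Fetch a SensorML description of one deployed sensor."""
    return requests.get(SOS, params={
        "service": "SOS", "version": "1.0.0",
        "request": "DescribeSensor", "procedure": procedure,
        "outputFormat": 'text/xml;subtype="sensorML/1.0.1"'}).text

def get_observation(offering, observed_property):
    """Fetch O&M-encoded observations for one offering and property."""
    return requests.get(SOS, params={
        "service": "SOS", "version": "1.0.0",
        "request": "GetObservation", "offering": offering,
        "observedProperty": observed_property,
        "responseFormat": 'text/xml;subtype="om/1.0.0"'}).text
```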
Abstract: Hyperspectral images provide rich information about the spectral characteristics of the objects in a scene. Anomaly detection aims to detect targets whose signatures are spectrally distinct from their surroundings without a priori knowledge. Since prior knowledge of the targets is generally unavailable, anomaly detection is both important and challenging. Constrained energy minimization (CEM), proposed by Chang, is a target detection algorithm: it constrains the desired target signature with a specific gain while minimizing the output energy over all signatures in the image. Its problem is that it requires the signature of the desired target as a priori knowledge, so it is not directly suitable for anomaly detection. Based on the CEM algorithm, we present a local energy maximal division (LEMD) algorithm to achieve anomaly detection. The algorithm contains several innovations. First, the paper proposes the LEMD algorithm itself: it localizes the detection area with a sliding window, regards the central pixel of the window as the tested pixel and the other pixels in the window as background, and uses the maximal projection energy ratio between the tested pixel and the background to measure the anomalous degree of the tested pixel. Second, the optimal projection vector is composed of the signature of the tested pixel and the inverse of the correlation matrix estimated from the background pixels. If anomalies exist in the local background, the correlation matrix will be contaminated and the tested anomaly may be missed. To solve this problem, anomalies in the local background are found and removed using the projection distribution of the local window: a background pixel is declared anomalous if its projection output is similar to that of the tested pixel and very different from that of most background pixels. Third, since false alarms generally exist in anomaly detection, a multi-random-window voting method is proposed to control the false alarm rate. The method acts on the potential anomalies produced by threshold segmentation: it chooses a number of local windows at random positions in the image, and confirms as real anomalies those potential anomalies whose anomalous degree exceeds a certain threshold in most of the random windows. The experimental data are hyperspectral images with 210 bands from the Hyperspectral Digital Imagery Collection Experiment (HYDICE). Comparative experiments between the LEMD algorithm and the well-known RX algorithm indicate that LEMD is robust and performs better under complex unknown backgrounds.
Keywords: hyperspectral images; anomaly detection; local energy maximal division (LEMD); multi-random-window voting
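A minimal sketch of the CEM filter that LEMD builds on, using the standard closed form w = R⁻¹d / (dᵀR⁻¹d) with the unit-gain constraint wᵀd = 1. Turning this into LEMD's sliding-window energy ratio, background purification and multi-window voting is the paper's contribution and is not reproduced here:

```python
import numpy as np

def cem_filter(cube, target):
    """Constrained energy minimization detector.

    cube:   (rows, cols, bands) hyperspectral image.
    target: (bands,) desired signature d.
    Returns the CEM detection map w^T x for every pixel.
    """
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    R = X.T @ X / X.shape[0]                       # sample correlation matrix
    Rinv = np.linalg.inv(R + 1e-6 * np.eye(b))     # small ridge for stability
    wvec = Rinv @ target / (target @ Rinv @ target)  # enforces w^T d = 1
    return (X @ wvec).reshape(h, w)

# Usage: a synthetic 64x64x210 cube with one implanted target pixel.
rng = np.random.default_rng(0)
cube = rng.normal(size=(64, 64, 210))
d = rng.normal(size=210)
cube[32, 32] += 5 * d
print("peak at:", np.unravel_index(np.argmax(cem_filter(cube, d)), (64, 64)))
```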
Abstract: In practice, high-resolution images are necessary and preferred. The most direct way to increase resolution is to improve the imaging sensor itself, but this may not be feasible because of growing cost and the limitations of current image sensor and optics manufacturing technology. In recent years, much attention has been paid to a technique called super-resolution (SR), which provides an alternative way of increasing the resolution of acquired images. The super-resolution reconstruction problem is to restore a high-resolution image from multiple low-resolution images degraded by warping, blurring, noise and aliasing. The core idea of SR is that the observed low-resolution images contain slightly different views of the same object, so that the total information about the object far exceeds the information in any single frame; if the object does not move and is identical in all frames, no extra information can be extracted from the low-resolution images. SR algorithms can be divided by domain into frequency-domain and spatial-domain methods. Ever since Tsai and Huang (1984) demonstrated how to achieve super-resolution in the frequency domain, much work has been devoted to this problem. Although frequency-domain methods are intuitively simple and computationally cheap, they are sensitive to model errors and can handle only pure translational motion, so current research is mostly concentrated on the spatial domain, which allows more complicated motion models and lets prior knowledge be incorporated to improve the reconstruction. To analyze the SR reconstruction problem comprehensively, it is necessary first to formulate an observation model relating the original high-resolution image to the observed low-resolution images. Several observation models have been proposed in the literature. According to the existing models, SR reconstruction involves motion estimation, image restoration and interpolation. There are relative motions among the observed images, and we have to estimate them in order to align each low-resolution image to a reference image before information can be accumulated from the observations. After that, image restoration must be performed because the low-resolution images are blurred in the formation model. This is an ill-posed inverse problem that has no direct solution and usually requires regularization (applying constraints based on prior knowledge). Because SR reconstruction can overcome the limitations of the imaging system and improve image resolution under certain conditions, it has become increasingly attractive, especially in situations where multiple low-resolution images are easy to capture. Remote sensing imaging is one such situation: with the rapid development of remote sensing technology it is convenient to obtain multiple images of the same place, but these images often do not satisfy the need for high resolution. In this paper, we try to reveal more detail from the observed low-resolution images. The image formation model is introduced first; a motion estimation algorithm based on a 6-parameter affine transformation is then proposed; finally, the constrained least squares method is chosen to regularize the ill-posed inverse problem. Remote sensing images acquired in practice are used in the experiment, and three criteria are selected to evaluate the quality of the restored image, demonstrating the efficiency and practicality of the algorithm.
Keywords: multi-temporal; image restoration; motion estimation; constrained least squares
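A minimal sketch of regularized SR reconstruction under a linear observation model y_k = D B M_k x + n_k, solved as constrained least squares with a simple Tikhonov norm penalty standing in for a smoothness prior. The decimation factor, the integer shifts (assumed known here) and the regularization weight are illustrative assumptions; the paper's 6-parameter affine motion estimation is not reproduced:

```python
import numpy as np
from scipy.sparse import lil_matrix, vstack, identity
from scipy.sparse.linalg import lsqr

N, f = 32, 2                      # HR image is N x N, decimation factor f

def observation_matrix(dy, dx):
    """Sparse operator: integer shift (dy, dx) with wraparound, then f-fold
    box-average downsampling (a crude stand-in for warp + blur + decimate)."""
    A = lil_matrix((N * N // (f * f), N * N))
    for i in range(N // f):
        for j in range(N // f):
            for u in range(f):
                for v in range(f):
                    y = (i * f + u + dy) % N
                    x = (j * f + v + dx) % N
                    A[i * (N // f) + j, y * N + x] = 1.0 / (f * f)
    return A.tocsr()

rng = np.random.default_rng(1)
truth = rng.random((N, N))
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]          # assumed known motions
As = [observation_matrix(*s) for s in shifts]
ys = [A @ truth.ravel() + 0.01 * rng.normal(size=A.shape[0]) for A in As]

# Stack all observations and append lam * I as the regularization rows.
lam = 0.05
A_all = vstack(As + [lam * identity(N * N)])
b_all = np.concatenate(ys + [np.zeros(N * N)])
x_hat = lsqr(A_all, b_all)[0].reshape(N, N)
print("reconstruction RMSE:", np.sqrt(((x_hat - truth) ** 2).mean()))
```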
Abstract: Since the 1980s, classification standards for Chinese geographical information have been progressively established, e.g., Classification and Codes for the National Land Information (GB/T 13923-1992, GB/T 13923-2006), Classification and Codes for the Features of 1:500 1:1000 1:2000 Topographic Maps (GB/T 14804-1993), and Classification and Codes for the Features of 1:5000 1:10000 1:25000 1:50000 1:100000 Topographic Maps (GB/T 15660-1995). However, there is serious semantic heterogeneity between different geographical information classification schemes. Commonly used intellectual (manual) approaches can normally achieve good performance, but problems arise when time, money or domain experts are insufficient. To tackle these problems, it is necessary to introduce effective and efficient automatic approaches. Semantic conversion between classification schemes is directional: conversion from classification scheme A (source scheme) to classification scheme B (target scheme) is not the same as conversion from scheme B (source) to scheme A (target). Theoretically, if every entry in the source scheme can locate a closely related entry in the target scheme, then the source scheme is fully compatible with the target scheme, and vice versa. After investigating string similarity measures for English and Chinese, we select a measure that takes into account the characteristics of the Chinese language and of the class names in geographical information classification schemes. On the assumption that the semantic similarity between two entries from different schemes can be measured by the string similarity of their class names, a model consisting of four conversion patterns is defined. These patterns are given priorities according to how well they represent the semantic content of the corresponding entries. An algorithm is then developed to conduct automatic semantic conversion between geographical information classification schemes. In this algorithm, data preprocessing is a key step; it uses natural language processing techniques such as syntax processing, punctuation processing and entry splitting so that every entry represents a complete and distinct concept. The remaining steps implement the conversion model and filter the optimal semantic relations. Judging the semantic relevance between entries of classification schemes is subjective in any real-world task; nevertheless, there is no doubt that intellectual approaches achieve the best performance. Four measures are defined to evaluate the performance of the intellectual and automatic approaches, each relative to the total number of entries in the source scheme: (1) the recall ratio is the number of unique source entries in the semantic relations created by an automatic or intellectual conversion approach; (2) the precision ratio is the number of unique target entries in the semantic relations created by an automatic or intellectual conversion approach; (3) the accuracy ratio is the proportion of correct semantic relations among all semantic relations created by the automatic approach; (4) the matching ratio is the proportion of overlap between the semantic relations generated by the automatic approach and the whole set of semantic relations created by the intellectual approach. Finally, we carried out experiments on four standard Chinese geographical information classification schemes. The experimental results show that the proposed approach achieves satisfactory performance.
Keywords: geographical information classification scheme; semantic conversion; string similarity; conversion model
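For illustration, a minimal sketch of name-based directional matching between two schemes using a character-bigram Dice coefficient, a measure that behaves reasonably for Chinese class names since each character carries meaning. This particular measure, the threshold and the example entries are my assumptions, not the measure or the four conversion patterns actually selected in the paper:

```python
def bigrams(s):
    """Character bigrams of a class name (the name itself if it is one char)."""
    return {s[i:i + 2] for i in range(len(s) - 1)} or {s}

def dice(a, b):
    """Dice coefficient of two class names, in [0, 1]."""
    ga, gb = bigrams(a), bigrams(b)
    return 2 * len(ga & gb) / (len(ga) + len(gb))

def convert(source_entries, target_entries, threshold=0.4):
    """Map each source entry to its most similar target entry (directional)."""
    result = {}
    for s in source_entries:
        best = max(target_entries, key=lambda t: dice(s, t))
        if dice(s, best) >= threshold:
            result[s] = best
    return result

# Usage with hypothetical class names from two schemes.
print(convert(["常年河", "时令河", "运河"], ["河流", "时令河(干河床)", "沟渠"]))
```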
Abstract: Classification is a basic task in data mining and pattern recognition that requires the construction of a classifier, that is, a function that assigns a class label to instances described by a set of features (or attributes). Recently, many new methods have emerged, such as fuzzy sets, rough sets, neural networks, support vector machines, genetic algorithms, ant behavior simulation, case-based reasoning, Bayesian networks, etc. Since Pearl et al. first introduced the concept of Bayesian networks in 1988, they have been widely used in the Artificial Intelligence (AI) community. Bayesian networks are a powerful formalism for representing and reasoning under uncertainty, and a powerful tool for knowledge representation and inference under uncertain conditions. However, Bayesian networks were not considered classifiers until the discovery of the naive Bayesian network, a simple kind of Bayesian network that assumes the features are independent given the class attribute (node) and is surprisingly effective. From that time on, experts were prompted to explore Bayesian networks as classifiers more deeply. Since the "naive" independence assumption cannot hold in many cases, researchers have wondered whether performance would improve if the strong independence assumption among features (variables) were relaxed. In this paper, building on a study of naive Bayes classifiers, the naive Bayesian network is generalized and unified with the maximum likelihood classifier (MLC) within one mathematical model. To validate the feasibility and effectiveness of the proposed method for the texture classification of aerial images, aerial images of several Chinese cities are used in the experiments. The results demonstrate that the generalized Bayesian network, the n-level Bayesian network, achieves better overall classification precision than the maximum likelihood classifier and the naive Bayesian network. The proposed method considers correlations among features, but the correlations are inherent and difficult to represent by the single parameter n; different values of n yield different experimental results, which will be investigated further in the future.
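For context, a minimal Gaussian naive Bayes classifier, the baseline that the paper's n-level generalization relaxes. Under the naive assumption the class-conditional covariance is diagonal, whereas the MLC uses the full covariance, which is the sense in which the two sit at the ends of one mathematical model; the paper's n-level construction itself is not reproduced here:

```python
import numpy as np

class GaussianNB:
    """Naive Bayes with per-class diagonal Gaussians (features independent given class)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log p(c|x) ~ log p(c) + sum_j log N(x_j; mu_cj, var_cj)
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + self.logprior, axis=1)]

# Usage on synthetic two-class texture-feature vectors.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(2, 1, (100, 4))])
y = np.repeat([0, 1], 100)
print("train acc:", (GaussianNB().fit(X, y).predict(X) == y).mean())
```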
Abstract: Satellite remote sensing of active fires provides an important tool for monitoring and locating wild fires. At present, more and more satellites and sensors are used to monitor fires. The Moderate Resolution Imaging Spectroradiometer (MODIS) sensors, on board the Terra and Aqua satellites of the Earth Observing System (EOS) of the National Aeronautics and Space Administration (NASA), offer an improved combination of spectral, temporal and spatial resolution for global fire detection compared with previous sensors, and have become the major data source in place of the Advanced Very High Resolution Radiometer (AVHRR), which was widely used for fire detection. The MODIS global active fire detection algorithm, proposed by the NASA MODIS fire team and based on the original MODIS detection algorithm and on heritage algorithms developed for the AVHRR and the Visible and Infrared Scanner (VIRS), was designed for global active fire products. However, the MODIS fire algorithm is imperfect for fire detection in China. There are two sources of error: first, the mid-infrared channel threshold used to exclude false fire pixels is so high that many fires with lower brightness temperature are eliminated; second, some fires with higher brightness temperature, which can easily be identified by eye in the 4 μm band, are omitted because of errors in choosing potential fire pixels during the selection of background contextual pixels. This paper introduces an improved self-adaptive fire detection algorithm for MODIS data based on the MODIS global fire detection algorithm; it uses the brightness temperatures derived from the MODIS 4 μm and 11 μm channels to carry out fire detection, and the improved algorithm is described in detail. The active fire detection strategy is based on absolute detection of the fire and on detection relative to the background: the latter test identifies pixels with values elevated above the background thermal emission obtained from the surrounding contextual pixels, and thus accounts for the variability of surface temperature and sunlight reflection. Since the improved self-adaptive algorithm acquires the mid-infrared threshold for excluding false fire pixels from histogram statistics, and recognizes potential fire pixels by the relative temperature increase over the background pixels, it enhances the ability to detect active fires. Because at higher brightness temperature the thermal emission of surface fires can penetrate smooth, thin cloud and appear on color-composite satellite images, where it is easily interpreted as fire by eye, this paper also introduces a method to discern active fires under cloudy conditions using image texture characteristics in the visible spectrum and the brightness temperature variance in the mid-infrared spectrum. Finally, the Guangdong region is selected as the experimental region to verify the improved algorithm, and the fire detection results of the improved self-adaptive algorithm and the MODIS global algorithm are compared. The analysis shows that the improved self-adaptive algorithm discerns active fires well: it increases sensitivity to smaller, cooler fires and detects high-brightness-temperature fires omitted by the MODIS global algorithm. The improved algorithm also discerns active fires well under cloudy conditions.
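A minimal sketch of the relative (contextual) part of the detection strategy described above, in the style of the MODIS heritage tests: a pixel is flagged when its mid-infrared brightness temperature rises sufficiently above the mean and standard deviation of a surrounding window. The window size, the 3.5-sigma margins and the fixed fallbacks are illustrative assumptions rather than the paper's self-adaptive thresholds, and a real implementation would exclude fire and cloud pixels from the background statistics:

```python
import numpy as np

def contextual_fire_test(t4, t11, win=10):
    """Flag pixels whose 4 um brightness temperature is elevated over background.

    t4, t11: 2-D brightness temperature arrays [K] for the mid-IR and thermal bands.
    Returns a boolean fire mask.
    """
    rows, cols = t4.shape
    dt = t4 - t11
    fire = np.zeros_like(t4, dtype=bool)
    for i in range(rows):
        for j in range(cols):
            i0, i1 = max(0, i - win), min(rows, i + win + 1)
            j0, j1 = max(0, j - win), min(cols, j + win + 1)
            bg4 = t4[i0:i1, j0:j1]
            bgd = dt[i0:i1, j0:j1]
            # Contextual tests: elevated T4 and elevated T4-T11 vs background.
            fire[i, j] = (t4[i, j] > bg4.mean() + max(3.5 * bg4.std(), 3.0) and
                          dt[i, j] > bgd.mean() + max(3.5 * bgd.std(), 3.0))
    return fire
```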
Abstract: Mixed pixels exist widely in remotely sensed images. They not only reduce the accuracy of target detection and classification, but also greatly hinder the development of quantitative remote sensing. A large number of spectral mixture analysis methods have been proposed, among which the least squares (LS) method is widely used in remote sensing to estimate the fractions of the materials (endmembers) within an image pixel. Although simple and effective in many applications, it has defects such as sensitivity to local noise, atmospheric effects and environmental radiation. Because the root mean square error (RMSE) serves as the model fit in the LS method, its unmixing accuracy is reduced remarkably when the magnitude of the spectrum changes significantly while the spectral shape is preserved, as can be caused by atmosphere, shadow and so on. In this study, the spectral unmixing problem is treated as a nonlinear optimization problem, and a new spectral mixture analysis method based on spectral correlation matching (SCM) is proposed to overcome these defects. Unlike the LS method, the SCM method uses the correlation coefficient to describe the similarity between the target and test spectra. Because it relies on the overall shape of the spectra instead of the absolute differences between them, the SCM method can reduce the influence of atmospheric effects, environmental radiation, etc. To evaluate the performance of the SCM method and compare it with the LS method, a case study was carried out in the north-third-ring area of Beijing using a Landsat ETM+ image and an IKONOS image. The ETM+ image was resampled to 28 m, and the IKONOS image was first classified into four land cover types corresponding to the endmembers; the true fraction of each endmember was then calculated within a 7×7 window. The results indicate that the proposed SCM method is a better alternative to the least squares method, with higher accuracy for each endmember estimate. This suggests that the SCM method is applicable to the unmixing problem in remote sensing.
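A minimal sketch contrasting the two model fits named above for a linear mixture x ≈ Ea with non-negative fractions summing to one: fully constrained least squares, versus correlation matching, which maximizes the correlation coefficient between the observed and reconstructed spectra and is therefore insensitive to a global brightness scaling such as shadow. The SLSQP solver and the simplex constraints are my implementation choices, not necessarily the paper's:

```python
import numpy as np
from scipy.optimize import minimize

def unmix(x, E, fit="correlation"):
    """Estimate endmember fractions a (a >= 0, sum(a) = 1) for pixel spectrum x.

    E:   (bands, endmembers) endmember matrix.
    fit: "ls" minimizes the squared residual; "correlation" maximizes the
         correlation coefficient between x and the reconstruction E @ a.
    """
    m = E.shape[1]

    def cost(a):
        r = E @ a
        if fit == "ls":
            return ((x - r) ** 2).mean()
        return -np.corrcoef(x, r)[0, 1]       # maximize shape similarity

    res = minimize(cost, np.full(m, 1.0 / m), method="SLSQP",
                   bounds=[(0, 1)] * m,
                   constraints={"type": "eq", "fun": lambda a: a.sum() - 1})
    return res.x

# Usage: a 0.6/0.4 mixture whose brightness is halved (e.g., by shadow).
rng = np.random.default_rng(3)
E = rng.random((6, 2))
x = 0.5 * (E @ np.array([0.6, 0.4]))          # shape kept, magnitude scaled
print("LS :", unmix(x, E, "ls").round(2))           # biased by the scaling
print("SCM:", unmix(x, E, "correlation").round(2))  # recovers ~[0.6, 0.4]
```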
Abstract: Spaceborne interferometric synthetic aperture radar (InSAR), which measures terrain heights using two or more SAR images acquired from different angles, is an important method for reconstructing global digital elevation models (DEM). DS-InSAR is a novel radar system that combines spaceborne SAR with satellite formation flying technology. Because of advantages such as high DEM reconstruction precision, strong survivability and anti-jamming ability, short manufacturing cycle and low cost, this novel InSAR system has become a research hotspot in spaceflight remote sensing. Many institutions have already proposed DS-InSAR plans with height measurement capability, such as the interferometric Cartwheel proposed by CNES, the interferometric Pendulum proposed by DLR, the Radarsat-2/3 tandem plan proposed by Canada, and the TanDEM-X plan proposed by DLR. In a single-baseline InSAR system, object geographic coordinates are computed from the observed ranges and the Doppler centroid frequencies that apply when the object points are focused in SAR image processing; the height of the object above a reference plane is then generated. The absolute interferometric phase is needed for this DEM reconstruction process, yet phase unwrapping yields only the relative interferometric phase. So far, the effective way of converting relative interferometric phase to absolute interferometric phase requires at least one ground control point. However, ground control points are difficult to acquire in practical applications, which heavily restricts the application of single-baseline InSAR systems. A DS-InSAR system includes more than one baseline, so it is also called a multi-baseline InSAR system. Compared with the single-baseline system, it offers additional observation information for deriving the position and height of an object: on the one hand, it provides more than two range observations for every object; on the other hand, it provides more than one interferometric phase difference between every two objects. Based on this additional information, this paper proposes a new DEM reconstruction approach for multi-baseline InSAR without ground control points, thereby removing the above restriction of the single-baseline system. First, the theory of the approach is introduced and the height measurement equations are derived. Then an error sensitivity analysis is performed by first-order Taylor expansion, which is expected to guide and support the solution of the DEM reconstruction equations, system analysis, error budgeting, etc. The results are validated by Monte Carlo simulation, which shows that the theoretical error analysis is valid and can directly support the analysis and design of the system.
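For orientation, the standard single-baseline relations that make the ground-control-point problem above concrete (textbook geometry in the repeat-pass convention, not the paper's multi-baseline equations): with wavelength λ, slant range r, look angle θ, perpendicular baseline B⊥ and platform height H,

$$ \phi_{\mathrm{abs}} = \frac{4\pi}{\lambda}\,\Delta r, \qquad h \approx H - r\cos\theta, \qquad \frac{\partial h}{\partial \phi_{\mathrm{abs}}} \approx \frac{\lambda\, r \sin\theta}{4\pi B_{\perp}}, $$

so a height estimate needs the absolute phase φ_abs, while unwrapping yields it only up to an unknown constant; the multi-baseline approach exploits the extra range and phase-difference observations to resolve this ambiguity without ground control points.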
Abstract: The Beijing-1 microsatellite, launched on October 27, 2005, is the first operational earth-observing microsatellite of China and a member of the international Disaster Monitoring Constellation (DMC). Its total mass is about 167 kg, and its designed minimum operational lifetime is 5 years. Beijing-1 carries DMC multispectral cameras together with a high-resolution panchromatic imager built to the SSTL standard. After launch, Beijing-1 was tested and evaluated for several months; from January to March 2006, the primary purpose of the testing work was to evaluate the performance of the sensors. This paper introduces the tested items, the experiments, the data sets, the methods and the results. Based on the characteristics of the Beijing-1 microsatellite and of other earth observation satellites, a comprehensive set of specifications for on-orbit testing and evaluation is first proposed. According to these specifications, four in-flight testing and evaluation experiments were carried out in Beijing, Wuxi, Hefei and the Baihu farm in Anhui province. At the four sites, three simultaneous measurement data sets were acquired: the Beijing-1 images; the in situ surface reflectance and atmosphere measurements; and the geometric measurements of linear objects and other typical objects. Based on these data sets, the geometric, radiometric and spectral performance and the general quality of the Beijing-1 sensors and images were tested and evaluated. The results for key parameters are given, including the resolution of the panchromatic camera, the modulation transfer function (MTF), the signal-to-noise ratio (SNR), and the linearity and dynamic range of the panchromatic and multispectral cameras. Judged by the results of the four in-flight experiments, the Beijing-1 microsatellite exhibits good geometric and radiometric performance. More experiments will be conducted and more parameters measured to evaluate the Beijing-1 microsatellite comprehensively.
Abstract: Employing Pi-SAR polarimetric data acquired in 2002 and 2003, a forest biomass estimation approach is studied for the Tomakomai forests in Hokkaido, Japan. The purpose of this project is to develop an effective approach for estimating forest biomass. Ground truth data for 19 test sites were available; in each test site one 20 m × 20 m sample stand was selected, tree height, age, basal area, diameter at breast height and tree species were measured, and the biomass was then calculated. The conventional radar cross section (RCS) method is investigated first. It is found that, under the conditions of this study, RCS increases with biomass but saturates rapidly: the L-band RCS saturates at a biomass of approximately 40 t/hm², corresponding to a tree age of 30 years, a tree height of 8 m and a basal area of 30 m²/hm², while the X-band RCS saturates at 20 t/hm². Forest biomass beyond the saturation level therefore cannot be estimated from RCS. To find a quantitative relation between high-resolution SAR data and forest parameters, a statistical analysis approach is adopted. The probability density function of the image amplitude is investigated, and among the Rayleigh, log-normal, Weibull and K distributions, the K-distribution is found to fit the L-band data best for all polarizations according to the Akaike information criterion (AIC). The relations between the K-distribution index and tree parameters including biomass, tree age, height and basal area are then investigated; tree biomass correlates best with the index parameter. Moreover, the K-distribution index keeps increasing with biomass beyond the RCS saturation level, and the highest correlation coefficient is obtained at cross-polarization. A regression model between the K-distribution index and forest biomass at cross-polarization is developed from the 19 test sites. In August and September of 2005 we collected ground truth data for a further 23 test sites and estimated their biomass from the cross-polarization K-distribution index. Comparison of the estimated biomass with the measured ground truth verifies that the average accuracy of the estimation reaches 85%. It is concluded that, at least for the Hokkaido forests, this empirical model is an effective and superior way of estimating forest biomass from polarimetric SAR data compared with the conventional RCS model.
Keywords: synthetic aperture radar (SAR); forest biomass; saturation level; K-distribution index
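A minimal sketch of the AIC-based model comparison described above, fitting candidate amplitude distributions to pixel samples by maximum likelihood and ranking them with AIC = 2k - 2 ln L. scipy has no built-in K-distribution, so only the three closed-form candidates are shown here; the K-distribution (and its index parameter used for the biomass regression) would need a custom density:

```python
import numpy as np
from scipy import stats

def aic(dist, data):
    """Akaike information criterion for a scipy distribution fitted by ML."""
    params = dist.fit(data)
    loglik = dist.logpdf(data, *params).sum()
    return 2 * len(params) - 2 * loglik

# Usage: rank candidate amplitude models on simulated SAR amplitudes.
amp = stats.rayleigh.rvs(scale=1.0, size=5000, random_state=4)
for name, dist in [("Rayleigh", stats.rayleigh),
                   ("log-normal", stats.lognorm),
                   ("Weibull", stats.weibull_min)]:
    print(f"{name:11s} AIC = {aic(dist, amp):.1f}")   # lowest AIC fits best
```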
Abstract: The spectral properties of waters mainly reflect volume scattering rather than surface scattering. In optically shallow waters, light penetrates through the water down to the bottom, and the light reflected from the bottom then penetrates back up through the water to the surface, so the total water-leaving radiance is the sum of contributions from the water volume and from the bottom vegetation. Taihu Lake, covering an area of 2427.8 km² (including 51 islands), is a large shallow lake with a mean depth of 1.9 m. From October 18 to 29, 2004, 67 samples were distributed over almost the whole of Taihu Lake. The apparent optical properties, including remote sensing reflectance, were measured in situ, and the inherent optical properties, including backscattering and absorption, were measured either in situ or in the laboratory according to the instrument user's manual or the NASA SeaWiFS protocol. We also measured the concentrations of chlorophyll-a, suspended particulate matter (SPM), suspended particulate organic matter (SPOM), suspended particulate inorganic matter (SPIM) and dissolved organic carbon (DOC). Other parameters, including Secchi disk transparency, water depth, wind speed and direction, and water temperature, were also measured in situ with the corresponding instruments. According to the literature describing the relationship between water depth and Secchi disk transparency, Taihu Lake is divided into two areas, viz. an optically deep water area and an optically shallow water area. Most of the optically shallow water area, where the water is transparent from surface to bottom, has submerged vegetation growing at the bottom, and these vegetated bottoms affect the remote sensing reflectance to a certain extent. We used the Lee model to calculate the contribution of the vegetated bottom to the remote sensing reflectance just beneath the surface, together with the vertical diffuse attenuation coefficient, under two hypotheses: (a) an infinite horizontal bottom composed of homogeneous vegetation and mud with Lambertian properties, and (b) vertically homogeneous water quality. The results show that the contribution of the vegetated bottom to subsurface remote sensing reflectance generally increases with wavelength, with the maximum in the red wavelength range. We then analyzed the relationships between the vertical diffuse attenuation coefficient and the Secchi disk transparency, the total suspended matter concentration and the suspended inorganic matter concentration. The results show a significant inverse linear relationship between the vertical diffuse attenuation coefficient and Secchi disk transparency, and significant quadratic relationships between the vertical diffuse attenuation coefficient and the SPM and SPIM concentrations. The relationship between the vertical diffuse attenuation coefficient and the contribution of the vegetated bottom was also analyzed quantitatively. These results will improve the retrieval precision of water quality parameters from remote sensing. It is worth noting that the calculated value of the bottom contribution is approximate rather than constant, and may vary with the growth period of the hydrophytes. In addition, the assumption about the BRDF (bidirectional reflectance distribution function) needs to be validated by measurements, especially in the vegetated area of Taihu Lake.
Keywords: vegetation bottom; remote sensing reflectance; Taihu Lake
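For reference, a commonly quoted simplified form of the Lee et al. semi-analytical model used above for optically shallow water (written in its standard textbook form; the paper may use the fuller version with separate path-elongation factors for the water-column and bottom terms):

$$ r_{rs} \approx r_{rs}^{\mathrm{deep}}\left(1 - e^{-2K_d H}\right) + \frac{\rho_b}{\pi}\, e^{-2K_d H}, $$

where r_rs is the subsurface remote sensing reflectance, r_rs^deep the reflectance of optically deep water, ρ_b the Lambertian bottom albedo, K_d the vertical diffuse attenuation coefficient and H the water depth; the second term is the bottom contribution analyzed in the paper.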
Abstract: Aerosol is a key element in many research fields worldwide, such as climate change, environmental study, pollution monitoring and ground object recognition. Space-borne sensors are now widely applied to monitor aerosol optical properties, but several problems remain in the inversion of aerosol properties. One is the aerosol model, which changes with place and season and can be studied through large numbers of ground-based measurements. This is the motivation for the establishment of AERONET (Aerosol Robotic Network), which aims to study aerosol models worldwide using solar spectrophotometers. There are two AERONET observation stations in Beijing, one in the north and the other in the south. The northern one is near the Beijing Olympic Games site, where many construction projects were under way from 2005. The AERONET ground measurements can be used to study the local aerosol model. Dubovik and King developed a method to derive aerosol optical thickness, single scattering albedo, Angstrom exponent and phase function from four-band measurements; this method is used to produce the aerosol products in AERONET. We used a new algorithm put forward by Li Zheng-qiang, which adds a polarized band to the four bands, to derive more aerosol properties. These include optical properties, such as aerosol optical thickness, single scattering albedo, Angstrom exponent and polarized phase function, as well as physical properties, such as particle size and refractive index. Based on the northern AERONET observations near the Beijing Olympic Games site in 2005 and 2006, we compared the results of the two retrieval algorithms. The optical and physical aerosol properties retrieved from the 2005 and 2006 ground measurements were then used to analyze the annual change of aerosol properties caused by the Olympic construction projects around the Beijing site. The results show that: (Ⅰ) the aerosol optical thickness increased and the Angstrom exponent decreased from 2005 to 2006, which means the construction projects brought more dust; (Ⅱ) the average single scattering albedo retrieved for 2006 is around 0.85, about 0.05 lower than the 2005 average, and the lower single scattering albedo indicates stronger absorption by aerosol particles in 2006; (Ⅲ) the polarized phase function obtained from the polarized band measurements changes with the Angstrom exponent; (Ⅳ) both years of measurements reveal that the aerosol particle size in the northern city of Beijing follows a bimodal lognormal distribution, and the peak values of the bimodal distribution in 2006 are higher than those in 2005, meaning that more fine and coarse particles were produced by the construction in northern Beijing in 2006.
Keywords: aerosol parameter retrieval; polarization; Beijing site of AERONET
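For reference, the Angstrom exponent α used throughout the analysis above is defined from the aerosol optical thickness τ at two wavelengths (a standard definition, not specific to this paper):

$$ \alpha = -\,\frac{\ln\!\left(\tau_{\lambda_1}/\tau_{\lambda_2}\right)}{\ln\!\left(\lambda_1/\lambda_2\right)}, $$

where small α indicates coarse particles (e.g., dust) and large α indicates fine particles, which is why the observed drop in α from 2005 to 2006 points to construction dust.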
Abstract: The assessment of vegetation dynamics from satellite-derived data is becoming more important to the study of the land surface. As arid and cold regions occupy a significant part of China, the importance of studying these areas has been recognized in both the scientific literature and the popular media, especially in the last few years. Recent research has indicated a strong signal of climatic shift from a warm-dry to a warm-humid pattern. It is known that climate variability has a large impact on vegetation dynamics, so research on vegetation cover change and its relationship with climate in the Chinese arid and cold regions has become a focus of Chinese geographic study. Since few researchers have carried out systematic research on this area, a study of vegetation cover change and its response to climate in the Chinese arid and cold regions is indispensable. This research defined the Chinese arid and cold regions by GIS spatial analysis for the first time, then investigated the dynamic patterns of vegetation change and their relationship with climate based on time series of the Normalized Difference Vegetation Index (NDVI) and meteorological data for the period 1982-2003. To define the Chinese arid and cold regions, temperature and precipitation data sets for 1970-2000 provided by the National Meteorological Information Centre are employed. Three temperature conditions are used to identify the cold regions: (1) the mean air temperature of the coldest month should be lower than -3.0℃; (2) fewer than 5 months should have a mean air temperature higher than 10.0℃; (3) the yearly averaged temperature should not be higher than 5.0℃. Yearly precipitation of 250 mm and 500 mm is used as the threshold values to classify arid and semi-arid regions. The GIMMS AVHRR NDVI data set, a global data set with 8-km resolution (square pixels) developed by the Global Inventory Monitoring and Modeling Studies (GIMMS) group, was used in this study; the time series runs from 1982 to 2003. To obtain the pattern of vegetation cover change, the Seasonally Integrated Normalized Difference Vegetation Index (SINDVI), defined as the sum of NDVI values for each pixel over all time intervals of the maximum value composites (MVC), was calculated. Linear regression was used to characterize the trends in vegetation cover change, anomaly analysis was employed to show the yearly average evolution, and the SINDVI and yearly meteorological data sets were used to compute the correlation coefficients between SINDVI and yearly mean temperature or precipitation. The results suggest that vegetation has obviously increased in more than half of the Chinese arid and cold regions, such as the Tianshan Mountains. Vegetation cover change in the Chinese arid and cold regions has a positive relationship with yearly averaged temperature. In most arid regions SINDVI has a positive relationship with yearly mean precipitation, but the correlation coefficient is negative in some cold regions such as the Greater Khingan Range, the Lesser Khingan Range and the Changbai Mountains.
Keywords: Chinese arid and cold regions; NDVI; dynamic vegetation change; climate; remote sensing
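A minimal sketch of the per-pixel trend and correlation analysis described above, applied to a yearly SINDVI stack; the array shapes, the synthetic data and the absence of significance screening are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def pixel_trends(sindvi, climate):
    """Per-pixel linear trend of SINDVI and its correlation with a climate series.

    sindvi:  (years, rows, cols) yearly SINDVI stack.
    climate: (years,) yearly mean temperature or precipitation.
    Returns (slope, r) maps.
    """
    years = np.arange(sindvi.shape[0])
    _, rows, cols = sindvi.shape
    slope = np.empty((rows, cols))
    r = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            ts = sindvi[:, i, j]
            slope[i, j] = stats.linregress(years, ts).slope
            r[i, j] = stats.pearsonr(climate, ts)[0]
    return slope, r

# Usage on a tiny synthetic stack (22 years, 4x4 pixels, weak greening trend).
rng = np.random.default_rng(5)
stack = 0.02 * np.arange(22)[:, None, None] + rng.normal(0, 0.05, (22, 4, 4))
temp = np.linspace(0, 1, 22) + rng.normal(0, 0.1, 22)
s, r = pixel_trends(stack, temp)
print("mean trend per year:", s.mean().round(3))
```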
Abstract: Wheat is one of the most important crops, and its grain protein content varies from 9% to 18%. Grain protein content is an important quality index of winter wheat. According to the Chinese standards for grain protein content (GB/T 17892-1999 and GB/T 17893-1999), winter wheat can be divided into weak-gluten (≤11.5%), medium-gluten (>11.5% and <15%) and strong-gluten (≥15%) types. It is therefore valuable to monitor grain protein content from remotely sensed imagery for winter wheat quality zoning and food processing at large scales. Almost all previous research on monitoring grain protein has used empirical statistical models relating grain protein content to canopy or image spectral information. Such empirical models are frequently limited by the spatial and temporal conditions of the experiments: when grain protein content is monitored with an empirical statistical model, serious prediction errors often occur because of variations in the external conditions, such as temperature, sunlight, soil condition, fertilization and irrigation. A crop growth model offers better continuity and mechanism: it can simulate winter wheat growth dynamically through all growth stages and can interpret differences caused by temporal or environmental changes. However, it is difficult to predict the grain protein content of winter wheat over a large region with a crop growth model alone, because the spatial distributions of the model's parameters, such as LAI, above-ground biomass, canopy temperature and soil moisture, cannot be collected without remote sensing. Previous results have indicated that the spatial distributions of growth model parameters can be retrieved from quantitative remote sensing data. It is therefore worthwhile to monitor the grain protein content of winter wheat dynamically by assimilating remote sensing data into a crop growth model. Field experiments were carried out in Jiangsu and Henan provinces from 2003 to 2005. The dynamic relations between remotely sensed vegetation indices and the growth status of winter wheat were analyzed, and a nitrogen transfer model from plant to grain was built from the 2005 Jiangsu experimental data. On this basis, a predicting model for the grain protein content of winter wheat was developed from the normalized difference vegetation index (NDVI) of Landsat TM imagery and the grain nitrogen accumulation. In this model, the plant nitrogen accumulation, leaf area index (LAI) and dry above-ground biomass are first regressed from NDVI at anthesis; the grain nitrogen accumulation is then predicted with the plant-to-grain nitrogen transfer model. The model was validated with data sets from different ecological regions, in Henan province in 2003 and in Jiangsu province in 2004. The root mean square error (RMSE) of the predicted grain protein content varied from 0.47% to 0.59%, and the RMSE of the nitrogen accumulation varied from 4.75 kg·hm⁻² to 8.76 kg·hm⁻². The model is therefore accurate and applicable for predicting the grain protein content of winter wheat under various conditions, which makes it possible to predict grain protein content over large regions from remotely sensed images. However, the formation of protein in winter wheat grain is very complex: it is influenced not only by genotype but also by environmental conditions, such as air temperature, solar radiation, soil conditions, fertilization and irrigation. Remote sensing imagery is in turn influenced by the sensor, the atmospheric conditions and the imaging angle. The model presented in this paper therefore still needs to be tested with remote sensing experimental data under different environments.
Keywords: winter wheat; TM image; nitrogen accumulation; grain protein content; predicting model
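A minimal sketch of the two-step prediction chain described above: NDVI at anthesis regressed to plant nitrogen accumulation, followed by a plant-to-grain nitrogen transfer step and conversion to protein content. All coefficients below are placeholders for illustration, not the values fitted from the Jiangsu experiments; only the nitrogen-to-protein factor of 5.7 is a standard convention for wheat:

```python
import numpy as np

# Placeholder regression coefficients (the paper fits these from field data).
A_N, B_N = 180.0, -20.0     # plant N accumulation [kg/hm^2] ~ A_N * NDVI + B_N
TRANSFER = 0.8              # assumed fraction of plant N transferred to grain
N_TO_PROTEIN = 5.7          # standard nitrogen-to-protein conversion for wheat
GRAIN_YIELD = 5000.0        # assumed grain dry matter [kg/hm^2]

def grain_protein_content(ndvi_anthesis):
    """Predict grain protein content [%] from NDVI at anthesis."""
    plant_n = A_N * ndvi_anthesis + B_N          # step 1: NDVI -> plant N
    grain_n = TRANSFER * plant_n                 # step 2: plant-to-grain transfer
    return 100.0 * grain_n * N_TO_PROTEIN / GRAIN_YIELD

print(grain_protein_content(np.array([0.6, 0.7, 0.8])))  # % protein per pixel
```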
Abstract: Evapotranspiration (ET), comprising evaporation from the soil surface and vegetation transpiration, is an important parameter in the water and energy balances at the Earth's surface. Many applications in water resources, agriculture and forest management require knowledge of ET over a range of spatial and temporal scales. Satellites provide an unprecedented spatial sampling of critical land surface parameters such as surface albedo, fractional vegetation cover and land surface temperature, so in recent years numerous physical and empirical remote-sensing-based models, used in conjunction with ancillary surface and atmospheric data, have been developed to estimate ET for clear-sky days. In this paper, the models for evapotranspiration estimation from remotely sensed data are categorized as single-layer and two-layer models, Penman-Monteith models and empirical models. A review of the algorithms and approaches of these models is presented, with a discussion of the main merits and limitations of the widely used SEBAL (Surface Energy Balance Algorithm for Land), SEBS (Surface Energy Balance System) and TSEB (Two Source Energy Balance) models. The problems of temporal extension and its uncertainty, temporal and spatial resolution and the associated scale effects, model selection and the evaluation of model applicability, the effect of advection on the energy balance principle, and accuracy assessment are discussed in turn. To improve the accuracy of ET estimation from remotely sensed data at regional scale, several problems must be overcome in the future. Suggestions are as follows. First, research on the mechanisms of land-atmosphere interaction and the related land-surface processes should be strengthened, because current parameterization schemes of land-surface processes are still too simple to depict the complicated mechanisms precisely, especially over heterogeneous underlying surfaces and complex terrain. Second, the accuracy of land surface variables retrieved from remotely sensed data is not yet satisfactory and needs to be improved, because it significantly affects the accuracy of ET estimation; strengthening research on the relationships between multi-band remote sensing information and the characteristics of the land surface and micrometeorology can enlarge the applicable range of two-layer models and contribute to the establishment of more accurate and applicable ET estimation models. Third, hydrological methodology can provide effective means for the accuracy assessment of remote-sensing-based models, and a new parameterization scheme incorporating the strengths of remote-sensing-based and hydrological models should be developed. Fourth, strengthening research on scale effects and error propagation could improve the accuracy of ET estimation with coarse spatial resolution data.
Keywords: remote sensing; evapotranspiration; model; land-surface process
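For reference, the surface energy balance that SEBAL, SEBS and TSEB all build on estimates ET as the residual of (standard formulation, not specific to this paper):

$$ \lambda E = R_n - G - H, $$

where R_n is the net radiation, G the soil heat flux, H the sensible heat flux and λE the latent heat flux, which is converted to ET; the models reviewed above differ chiefly in how H is parameterized from remotely sensed land surface temperature and related variables.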