DGPS processing

Laura Edwards

(This guide is only intended as a quick guide to the use of the software, with information on some parameter changes that can be made. It does not explain the principles of GPS and all the terminology in full, as these constitute a significant topic for which there is a plethora of material both online and in print.)

Directory setup and .profile additions

Create the following directories in your home directory:

 ~/GPS_processing/inputs
 ~/GPS_processing/track_files
 ~/GPS_processing/outputs

Copy the files in /home/le1d10/example_code to your ~/GPS_processing/track_files directory.
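
For example (a sketch; adjust the paths if your layout differs):

 mkdir -p ~/GPS_processing/inputs ~/GPS_processing/track_files ~/GPS_processing/outputs
 cp /home/le1d10/example_code/* ~/GPS_processing/track_files/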

Add the following to your PATH in your .profile file on ENV

 /usr/local/GPS_processing/gamit_globk/gamit/bin
 /usr/local/GPS_processing/gamit_globk/kf/bin
 /usr/local/GPS_processing/gamit_globk/com
 /usr/local/GPS_processing/gamit_globk/
 /usr/local/GPS_processing/netcdf
 /usr/local/GPS_processing/gamit_globk/help/   (probably not needed in PATH)
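
For example, the resulting line in your .profile might look like this (a single line, directories separated by colons; the help directory is omitted per the note above):

 export PATH=$PATH:/usr/local/GPS_processing/gamit_globk/gamit/bin:/usr/local/GPS_processing/gamit_globk/kf/bin:/usr/local/GPS_processing/gamit_globk/com:/usr/local/GPS_processing/gamit_globk:/usr/local/GPS_processing/netcdf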

Also add to the end of your .profile file on ENV

 export NETCDFHOME=/usr/local/GPS_processing/netcdf
 export HELP_DIR=/usr/local/GPS_processing/gamit_globk/help/

Summary of the workflow

Download precise orbit files

Download Hofn reference rinex files if there is no suitable data from our reference station

Convert Glacsweb raw GPS files to rinex using teqc

Next process with track (track cannot be used for sessions shorter than 20 epochs, so such data should be processed with PPP; otherwise track processing can be attempted)

Initially, process the rinex files with track using the standard track cmd file (e.g. use track_std.cmd with no adjustments to the track cmd parameters) to get position estimates

Check the outputs of track processing (summary and output files). If RMS values are high and many ambiguities are unfixed, run track processing again with a variety of parameter changes (suggestions for these changes are given in the Track processing section of this document and in the track help file, e.g. parameters like site_stats, bf_set and float_type; the changes will depend on whether Hofn or Glacsweb reference station data are used).

Repeat track processing until the best RMS and number of ambiguities fixed are obtained.

N.B. With continuous data, track processing will be different (see the section Glacsweb data versus continuous data processing for more details).

Rinex conversion with teqc

When converting, make sure standard naming conventions are used. (Hopefully standard naming conventions will have been adhered to in the raw saved GPS files. Files should be saved either hourly or in 24-hour blocks, so there will not be two or more observation files in any one hour, e.g. 2009-09-04-12.18-base0904d.)

Standard rinex naming convention is as follows:

ssssdddh.yyt

where ‘ssss’ is the site identifier e.g. base or ref2, ‘ddd’ is the day of year, ‘h’ is the hour (in UTC; N.B. Iceland is UTC+0), ‘yy’ is the year and ‘t’ is the file type (‘o’ for observation and ‘n’ for navigation), e.g. base316a.10o would be the base station observation file for day 316 of 2010 for the hour 00:00 to 01:00 UTC.

Processing with teqc puts the appropriate information for kinematic GPS processing into the rinex header (including an a priori position estimate). It also allows us to give the files an appropriate naming convention in the process. Basic usage example for our Topcon data:

 teqc +meta -top tps filename

To get help with teqc type:

 teqc -help

There are, however, many options in the teqc command, and I have created a demo bash script with the appropriate options included to get the rinex file outputs we require for kinematic GPS processing. This file is teqc_std_manual.sh and should have been copied to your home directories from the /home/le1d10/example_code directory. The teqc_std_manual.sh script runs teqc processing on an individual file (please read all comments in the script and make any appropriate changes to your copy of this script before use). It will run rinex conversion on files with four letter/digit site codes (i.e. base, ref2, ref0, ref1) for files of the form 2009-09-04-12.18-base0904d and 2009-09-04-12.18-ref20904d. The script requires the base and reference files for processing to be present in the current working directory (or you can adjust the code to show the script where to get the files). A text file manual_teqc_info.txt is output by teqc_std_manual.sh, recording the filename and the start and end times of each file processed.
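
As an illustration only (not the actual script; check teqc_std_manual.sh before relying on this, as the +obs/+nav output flags and the GNU date usage here are assumptions), a minimal conversion of one raw Topcon file to the standard rinex names might look like this:

 #!/bin/bash
 # Sketch: convert a raw file named like 2009-09-04-12.18-base0904d
 # into rinex files with standard ssssdddh.yyt names.
 raw=$1
 site=$(echo "$raw" | cut -d- -f5 | cut -c1-4)  # 4-character site code, e.g. base
 ymd=$(echo "$raw" | cut -d- -f1-3)             # date, e.g. 2009-09-04
 doy=$(date -d "$ymd" +%j)                      # day of year, e.g. 247
 yy=$(date -d "$ymd" +%y)                       # 2-digit year, e.g. 09
 hr=$(echo "$raw" | cut -d- -f4 | cut -d. -f1)  # hour, e.g. 12
 letters=abcdefghijklmnopqrstuvwx
 hl=${letters:10#$hr:1}                         # hour letter, a = 00:00-01:00
 teqc -top tps +obs ${site}${doy}${hl}.${yy}o +nav ${site}${doy}${hl}.${yy}n "$raw"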

Hofn GPS network data

If we have no available Glacsweb reference station data for GPS processing then we may need to use the Hofn GPS station data. This can be accessed on the Scripps Orbit and Permanent Array Center (SOPAC) website. SOPAC uses the Hatanaka (compressed rinex) file compression strategy for all of its rinex observation files. Hatanaka files contain a 'd', rather than 'o', as the last filename character. These files are then unix-compressed, adding the ".Z" extension.

The Hofn GPS data are continuous 30-second sampling rate data divided into 24-hour chunks, e.g. from 0:00 on 28/12/10 up to but not including 0:00 on 29/12/10. The naming convention is

hofn3620.10d

where ‘362’ is the day of year, ‘10’ is the year (i.e. 2010) and ‘d’ is the file type, in this case the observation file (which would be ‘o’ in standard rinex format). Other file types are ‘g’ for nav, ‘m’ for met and ‘s’ for summary file types. The only files we need for processing from the Hofn GPS are the observation files (.d).

SOPAC/CSRC provides standard unix-compressed rinex observation files for the past 60 days (in addition to Hatanaka-unix compressed files). These are available from the SOPAC http archive (password: your e-mail address) or via ftp.

The Hatanaka files can be downloaded from http://sopac.ucsd.edu/cgi-bin/dbDataBySite.cgi and need to be unzipped and converted to standard rinex format prior to use in kinematic GPS processing.

The Hatanaka conversion program, crx2rnx, must be run on the data to Hatanaka-uncompress each file (after unix-uncompressing it). crx2rnx is installed on ENV and can be used as follows:

 crx2rnx hofn3620.10d

A rinex 'o' file should be created in your working directory. Help is available by typing:

 crx2rnx -h
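
For example, the full manual sequence for one downloaded Hofn file might be (assuming the unix-compressed Hatanaka file is already in the working directory):

 uncompress hofn3620.10d.Z
 crx2rnx hofn3620.10d

which should leave the standard rinex observation file hofn3620.10o ready for processing.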

Alternatively (and more easily), you can use the following script from the track/gamit GPS processing software to download, unzip and convert to the correct rinex format:

 sh_get_rinex -archive sopac -yr 2011 -doy 235 -ndays 1 -sites hofn

where -yr 2011 -doy 235 requests day of year number 235 in the year 2011 and -ndays 1 requests that just this one day of data is downloaded (you can download more days by increasing this number).

Precise orbit (SP3) files

Precise orbit data are required for the GPS processing. These data are supplied by a number of providers, e.g. SOPAC and the IGSCB (the International GNSS Service Central Bureau, hosted by NASA). The IGS collects, archives, and distributes GPS and GLONASS observation data sets of sufficient accuracy to meet the objectives of a wide range of scientific activities.

On IGSCB there are three types of GPS ephemeris, clock and earth orientation solutions computed:

  1. Final – final combinations are available at 12 days latency.
  2. Rapid – rapid products are available with approximately 17 hours latency.
  3. UltraRapid – UltraRapid combinations are released four times each day (at 0300, 0900, 1500, and 2100 UT) and contain 48 hours' worth of orbits; the first half computed from observations and the second half predicted orbits. The files are named according to the midpoint time in the file: 00, 06, 12, and 18 UT.

If you require orbits in near-real time then ultrarapid orbits can be downloaded with a simple wget command e.g.

 wget http://igscb.jpl.nasa.gov/igscb/product/1617/igu16176_06.sp3.Z

The resulting files are zipped sp3 files, e.g. igu16176_06.sp3.Z, where 1617 is the GPS week, 6 is the day of the week (starting with Sunday as 0) and 06 is the midpoint hour. These files need to be unzipped before use.
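
For example:

 gunzip igu16176_06.sp3.Z

leaves igu16176_06.sp3 ready for use in processing.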

Final or rapid orbits can be downloaded from the IGSCB website for one day or a number of days using e.g.

 sh_get_orbits -archive igscb -yr 2012 -doy 20 -ndays 1 -type sp3 -pre f -makeg no

where -yr 2012 -doy 20 requests day of year number 20 in the year 2012 and -ndays 1 requests that just this one day of data is downloaded (you can download more days by increasing this number). The -pre f denotes that you want to download final orbits (e.g. igs16715.sp3) and this can be changed to -pre r to get rapid orbits (e.g. igr16715.sp3).

Track processing

Introduction:

Track is the kinematic GPS processing program from the Massachusetts Institute of Technology's (MIT's) comprehensive GAMIT, GLOBK and TRACK suite of programs for analyzing GPS measurements. Information on these programs is available on the GAMIT-GLOBK website (http://www-gpsg.mit.edu/~simon/gtgk/index.htm). Unlike many GPS processing programs, track pre-reads all the data before processing.

N.B. In this guide I will not be discussing all the terminology and detail of the precise processing used, as there is a vast array of information on this topic and the aim of this guide is to give a quick indication of the use of the track software.

By using kinematic GPS processing you are assuming that one or more GPS stations is moving. To obtain good results for positioning as a function of time, it helps if the ambiguities can be fixed to integer values where possible. The success of kinematic processing depends on the separation of the base/rover (kinematic) and reference (fixed) sites. For separations < 10 km the GPS processing is usually very successful given a good sampling regime, antenna placement and conditions, whereas 10-100 km can be more difficult but is often successful, and > 100 km has very mixed results.

As site separation increases, the differential ionospheric delays and atmospheric delay differences increase. For shorter baselines (< 2-3 km), ionospheric delay can be treated as approximately zero and the L1 and L2 ambiguities can be resolved separately. This is not true for longer baselines, so track uses the Melbourne-Wubbena Wide Lane (MW-WL) algorithm to try to resolve the differences in the L1-L2 cycles, and then a combination of techniques to determine the L1 and L2 cycles separately. The difference between the L1 and L2 phase, with the L2 phase scaled to the L1 wavelength, is known as the widelane, and this can be used to detect cycle slips. The widelane is affected by fluctuations in the ionospheric delay, whose impact is frequency dependent, i.e. the lower-frequency L2 has a larger contribution than the higher-frequency L1.

Bias flags are added at times of cycle slips and ambiguity resolution tries to resolve these to integer values. Track uses floating point estimation with LC (ionospheric delay corrected phase), MW-WL and ionospheric delay constraints to determine the integer biases and the reliability with which they are determined.

Processing:

Track is run with the command “track” plus a number of options and arguments, and uses a command file (.cmd) to control the processing. It comes with an extensive help file which can be accessed using

 track -h

The cmd file contains information on the files to be used in processing, the type of processing to do and the values of various parameters. All commands and values to be used in the cmd file must be preceded by at least one space at the beginning of the line. An “*” at the beginning of a line means that line is commented out. Some commands in the cmd file are optional and others must always be included.

I’ve set up a standard cmd file, track_std.cmd, which you should have copied from the /home/le1d10/example_code directory. This can be used along with another script I have set up, track_std_manual.sh, or used directly in the track command run. The track_std_manual.sh script should also have been copied from the /home/le1d10/example_code directory. (The track_std_manual.sh script needs certain parts of the track_std.cmd file to stay the same to run correctly, so if you want to make changes to track_std.cmd for runs without track_std_manual.sh it is worth making another copy.)

One of the main processing type commands is the mode command. This command sets the defaults for the type of data being processed. There are three settings: short, long and air (air only applies to high sample rate aircraft GPS). If there is a distance of up to 1 km between your base (rover) station and the reference station then mode short should be used; for distances greater than 1 km use mode long. In mode short the processing undertakes its search and analysis using the L1+L2 GPS channels; in mode long it uses LC. (N.B. Mode would need to be long if using Hofn reference data rather than the Glacsweb reference station.)

The sampling interval can also be set in the cmd file: “interval 1” sets the sampling interval to 1 second. (N.B. The sampling interval is also set in my track_std_manual.sh code, and running this will update the cmd file. It is currently set at 1 second, but if you are processing data with another sampling regime you can change it by adjusting the track_std_manual.sh script where it says “sampling=1”, or directly in a copy of track_std.cmd for track runs without the track_std_manual.sh script.)

The command “back_type smooth” is used to run the smoothing filter. This is recommended on long baselines but is useful on short ones too. It uses all the data for atmospheric delay estimation, and any non-integer (unresolved) biases are held constant.

The obs_file command is used to specify the data to be processed and includes the reference and base station data and either “F” (for a fixed reference station) or “K” (for a rover base station) to denote fixed or kinematic respectively e.g.

obs_file
  ref2 ref2<day>a.10o F
  base base<day>b.10o K

The <day> is specified in the running of the track code but can also be put directly into the cmd file e.g. base base319b.10o K

The navigation (SP3 orbit) file must also be specified, e.g.

nav_file igu<week>_00.sp3 sp3

Again the <week> is specified in the running of the track code but can also be put directly into the cmd file e.g. nav_file igu16176_00.sp3 sp3

The command “out_type” in the cmd file defines how you would like the data output. Output in relative motions in North, East and Up (all in meters) from the reference station is specified using “out_type NEU”. Output as geodetic lat, long and height is specified with “out_type GEOD”. To get both use “out_type NEU+GEOD”.

The naming of these NEU and GEOD output files can be specified with e.g.

pos_root TRAK<day>a.10

This results in a file name like TRAK218m.08.GEOD.base.L1+L2 with standard day, hour and year included in the name. A summary file of the processing is also produced and the name of this can also be specified e.g.

sum_file TRAK<day>a.10.sum
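
Putting the commands above together, a minimal cmd file skeleton might look like the following. This is only a sketch to show the shape of the file (the values are illustrative, and the mode and back_type choices depend on your data); track_std.cmd is the working version:

 * Minimal track cmd file sketch (illustrative values)
  obs_file
    ref2 ref2<day>a.10o F
    base base<day>b.10o K
  nav_file igu<week>_00.sp3 sp3
  mode long
  interval 1
  back_type smooth
  out_type NEU+GEOD
  pos_root TRAK<day>a.10
  sum_file TRAK<day>a.10.sum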

There are numerous parameters that can be changed to improve the positioning results, but a few standard ones tend to get the best results; these are described below. (Other parameter changes, not listed below, can be made, and information on these can be found in the track help file.) These parameters will be of more or less use depending on whether you are processing long or short mode data.

site_stats

Usage e.g.

site_stats
  all 4.472 4.472 4.472  4.472 4.472 4.472

This command gives the statistics to assign to the kinematic station positions. “all” denotes that this should be applied to all kinematic stations. The first three numbers are the three sigmas in XYZ (in metres) for the initial position of the station and the last three are the three sigmas in XYZ (in metres) for the change in position between epochs of data. Since the motion of the kinematic sites is modelled as a random walk, the sigma of the change in position grows as the square root of the number of epochs.

This doesn’t generally need changing, but altering it sometimes seems to help with the results. A suggested variation of these parameters to try is 20 20 20 20 20 20, gradually increasing from there.

ion_stats

This command allows specification of the characteristics of the ionosphere and has a number of parameters that can be changed (see the track help file). Usually just the first of these is worth changing. This parameter is the “jump”, which specifies the largest jump in the ion delay allowed before a bias flag is introduced. It can be increased for noisy data. The default is 0.2, but increasing it up to 1, or even 3 or 4, can help with results.

Usage e.g.

ion_stats 1

bf_set

By default a gap in the data introduces a new bias flag in track. bf_set allows specification of the maximum size of gap allowed in the data before a bias flag is inserted, and the number of good data needed for data to be kept. The defaults are 1 and 20, i.e. any gap of 1 epoch is flagged and at least 20 good phase measurements are needed between bias flags, otherwise the data are deleted. High rate data often misses measurements, so this can be a useful one to change. Try using 2 40 or 5 100 and then gradually increase.

Usage e.g.

bf_set 1 20

float_type

This is the main control on resolving ambiguities. It allows specification of the floating point ambiguity limits for the bias fixing algorithm. The main factors to consider are the <WL_Fact> and <Ion_fact> values (although other parameters for this command exist – see the track help file). <WL_Fact> is the weight given to deviation of the MW-WL from zero. Reducing the value from the default of 1 will downweight the contribution of the MW-WL. For noisy or systematic range data, WL_Fact may be reduced to around 0.5. <Ion_fact> is the weight given to deviation of the ionospheric delay from zero. The default for Ion_fact is 1; this value means the ionospheric delay is assumed to be zero and given unit weighting in determining how well a set of integer ambiguities fits the data. On long baselines (>20 km), this value should be reduced to give less weight to the ionospheric delay constraint. For 100 km baselines it is suggested that 0.1 works well. Increasing it with short baseline data can also help. <WL_Fact> and <Ion_fact> both have a default value of 1, which gives them equal weight with the fit of the LC data. For the L1+L2 float type, these two entries are ignored.

Usage e.g.

float_type 1 4 LC 0.25 0.5

or

float_type 1 1 LC 3.0 3.0 0.2 1.0

where the fifth value after float_type (i.e. 0.5 or 3.0) is <WL_Fact> and the sixth value in the second version of the command is <Ion_fact> (i.e. 0.2).


Another possible parameter to change is atm_stats. This command gives the statistics for the atmospheric delays by site. Information on this parameter, along with others, can be found in the track help file.

The quality of the results is assessed by looking at the RMS values in the .out file and checking the ambiguity status (in the fixed column of the bias flag report) and the RMS scatter of residuals in the .sum file. A 3 in the fixed column means the ambiguity has been fixed; a 1 means it is still a floating point estimate. The more fixed values the better. The RMS values are in mm, and values of up to about 20 mm are ok, but the smaller the better. The cmd file parameters are adjusted to try to improve these values. The track_std_manual.sh script grabs the relevant lines of the .out and .sum files and prints them to the screen. (The track help file describes the outputs in more detail.)
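
If you are working without track_std_manual.sh, a quick way to pull out the relevant lines by hand is a simple grep (assuming the summary file name set with sum_file above; adjust the pattern to the lines you are interested in):

 grep -i rms TRAK<day>a.10.sum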

Track can also be run directly at the command line.

Usage:

 track -f <command file> -a <ambiguity file> -d <day> -w <week>

See the track help file for help with these options.
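
For example, with the standard cmd file and the placeholders used above (the day and week values here are purely illustrative):

 track -f track_std.cmd -d 319 -w 16176

where 319 replaces <day> and 16176 replaces <week> in the cmd file (so igu<week>_00.sp3 becomes igu16176_00.sp3).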

Glacsweb data versus continuous data processing

In assessing GPS processing quality you would normally look mainly at the ambiguity status and RMS values, but also plot the x-y data to see how well it falls on a straight line (as glacier motion should not suddenly change direction, the x-y data should fall on a straight line). Our stations do not move enough in the short sampling period used in the current Glacsweb data, so we cannot use this extra check and must go with just RMS and ambiguities. This is not ideal, as a few points lying a very long way from the majority can increase the RMS even if the majority are close together. Looking at a straight line plot for continuous data processed over a 24-hour period removes this problem. (Other information can also help with understanding the quality of the data, and the track help file discusses this.)

N.B. RMS is a measure of the precision of the measurements, NOT the accuracy. It gives an indication of the ability of the system to get the same sort of values, which is not the same as the ability of the system to get the correct values. (The old physics lesson of a value being precise but not accurate!) Estimates of the accuracy of the measurements are worked out after track processing by looking at deviations from a linear fit of the x-y data over a certain time scale. With continuous data, 26-28 hour chunks of data can be processed and the leading and trailing hours then removed (as there may be issues with the data at either end due to smoothing) to leave just a 24-hour period. The data then undergoes a series of detrending and linear fitting of the x-y data to obtain more realistic accuracy values. With the current Glacsweb data being of only very short sampling session length, this stage of processing is modified to include many sessions' worth of data to form an x-y line of data.

Another issue with the current Glacsweb data is that the amount of data that can be processed in track is limited, mainly due to a lack of reference station data. Therefore, a lot of the data had to be processed by Precise Point Positioning (PPP), which is a much less accurate processing technique than track.

In terms of extracting velocity data from the processed GPS data: with continuous data every hour, day, week, month etc. has the same distribution of measurements, so estimating velocity for a period is easy. With the Glacsweb data, large gaps in the data and the availability of viable measurements at differing points on different days introduce errors into attempts to estimate velocities for a given repeat period, where data amounts can vary significantly.

PPP processing

When there is no reference station file, or the base file is so short (i.e. less than 10-15 minutes of the 1-second-sampling Glacsweb data) that using the Hofn GPS reference data gives very few epochs (because the sampling interval must be reduced to the lower of the two files, i.e. 30-second sampling with the Hofn GPS), track cannot be used to process the data. All that can be done is to process the data using Precise Point Positioning. This can be done through the CSRS website at http://www.geod.nrcan.gc.ca/products-produits/ppp_e.php

You can also download the PPP Direct PC desktop utility to your computer and set it up so that many rinex files can be dragged and dropped onto its icon for processing (see the website for more details). Results files are then emailed to you in zipped format. After unzipping, positions and RMS values can be extracted from the .sum file.