RXTE GOF: Answers to FAQ
I am running my own script and the tool isn't working properly.
Writing scripts can be tricky. People often look at the Perl scripts we have written and use them as templates to call other tasks in a different way. This can be dangerous: Perl, C-shell, Korn shell, Bourne shell, and DCL all behave differently, and when a Perl script calls another script, as ours often do, the combination may not behave as you expect when you write your own. This can give rise to many problems, especially since Perl will take a "best guess" at what you are attempting to do. Sometimes that guess is correct; in the case of calling an Ftool, it usually isn't. So be careful to debug your scripts. Make sure that you know exactly how the tool is being invoked from the command line, and how the operating system handles things like quoted strings. Unless you are comfortable with Perl, the operating system you are running on, and the Ftools interface, do not attempt to write your own scripts without a great deal of testing and debugging.
First, read the help to be sure that the parameter file hasn't changed. Then try running the task by hand, answering the questions as they are prompted. Often the task will work properly when run from the command line; this tells you that the problem is in your script, not in the task. One common problem arises when a script assumes certain parameter values do not change, while another tool running at the same time is modifying the parameter file. To avoid this, use the plist tool to see all of the parameters, and specify all of the necessary values explicitly.
I input a file via the '@filename' option, but the tool seems to have problems finding the files.
First check that the tool will accept a list of inputs in this way. Some tools which do take a file list in this format are sa/seextrct, the script runpcabackest (not pcabackest, itself), and fasebin. Note that fselect does not; fmerge takes a file list as the input itself, without an '@'; the script sefilter takes a list as input but will only use the first file in the list.
If the tool does accept input in this format but seems to have a problem reading in the files in the list, examine your input list carefully. It has to be of a specific format to be properly parsed. Run your favorite editor on the input file and use the command to tell the cursor to go to the end of this file ("Esc >" for Emacs; "G" for vi) and see if the cursor is at the beginning of a blank line just below the last line of input. There MUST be a "carriage-return" at the end of each input line for it to be processed correctly, and there must NOT be any blank lines or they will be read as input as well and cause the task to abort. Also check the end of each input line for blank spaces ("C-e" in Emacs; "$" in vi) which may be read as part of the file name.
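The format rules above are easy to check programmatically before feeding a list to a tool. The sketch below is illustrative only; the function name and messages are invented here, not part of any Ftool:

```python
# Sketch: sanity-check an input file list for the '@filename' option.
# Flags blank lines, trailing whitespace, and a missing final newline,
# all of which can confuse the tools described above.
def check_input_list(path):
    problems = []
    with open(path, "rb") as f:
        data = f.read()
    if data and not data.endswith(b"\n"):
        problems.append("last line has no trailing newline")
    lines = data.decode().splitlines()
    for i, line in enumerate(lines, start=1):
        if line.strip() == "":
            problems.append("line %d is blank" % i)
        elif line != line.rstrip():
            problems.append("line %d has trailing whitespace" % i)
    return problems
```

Running this on your list before invoking the extractor catches the same problems the editor inspection above is looking for.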
The processing appears to be taking a VERY long time. Can I speed this up? The answer is a definite MAYBE, depending on the tool and what you are trying to do. In general, the bigger the input file, the longer it will take to process. You can speed things up by making sure that the data is on a "local" disk; this minimizes disk access time, which is usually the biggest cause of poor performance. You can also speed things up by pre-filtering large files on "Time" and other criteria; in general, the larger the file, the more you gain from pre-filtering.
Some users have been running the same tool several times simultaneously in the background; this gives the WORST possible performance. Many of the codes require a large amount of RAM, so running multiple copies forces swapping from RAM to disk, and each session gets only a small share of the total CPU. You will get much better results if you run the codes in succession and give them the lowest "nice" value possible, since this will give them the maximum CPU time.
General questions and problems:
What documentation is available on this site or to download as PostScript? The general reference for XTE data reduction is the ABC Guide which has chapters on data formats, the individual instruments, screening, basic data reduction, and a brief description of the time systems used. These chapters have recently been provided in PostScript format that you can download and print out chapter by chapter. These files are also available via anonymous FTP at ftp://heasarc.gsfc.nasa.gov/xte/doc/abc_guide/. The README file provides a description of the contents of the files.
For detailed descriptions of various data analysis tasks, our Cook Book has "recipes" telling explicitly how to use the tools, linking to other relevant documents, and discussing the scientific issues involved. Due to the interlinking of these documents with each other and the ABC Guide, it is easier to read them on the web, but PostScript versions are also available via anonymous FTP at ftp://heasarc.gsfc.nasa.gov/xte/doc/cook_book/. The web pages are also updated frequently and these files may not reflect those changes immediately.
Is there a standard publication on XTE overall or the PCA/HEXTE in
particular that one can/should refer to when writing a paper based on
XTE/PCA/HEXTE data?
PCA: Jahoda, K., Swank, J. H., Giles, A. B., Stark, M. J., Strohmayer, T., Zhang, W., & Morgan, E. H. 1996, in EUV, X-Ray, and Gamma-Ray Instrumentation for Astronomy VII, ed. O. H. Siegmund (Bellingham, WA: SPIE), 59
HEXTE: Rothschild, R. E., Blanco, P. R., Gruber, D. E., Heindl, W. A., MacDonald, D. R., Marsden, D. C., Pelling, M. R., & Wayne, L. R. 1998, ApJ, 496, 538
I have an observation with an F, T, or U at the end. What do these letters mean? Since our launch we have needed to add a number of designations for certain types of observations. Our cookbook contains a recipe called Dealing with Data Gaps: New RXTE Data Categories, with information and explanations for these and all new endings added since launch.
Can you tell me how fasebin interprets the "t0" column (#5, normally the epoch of periastron) in the case when eccentricity=0? The t0 and omega columns serve two functions: they fix the zero point of the orbit ephemeris, as well as the time of periastron. For cases where e=0, that first function still applies: t0 then specifies the time at which the pulsar was at that particular omega; generally t0 is then given for omega=0.0.
I input a time-range into one of the extractors and get a "no counts" error. What's wrong? All of the extractors and other XTE-specific tools, except the Perl scripts "timetrans" and "grosstimefilt", require that the input times be given in ABSOLUTE time. This is necessary because the value of the TIMEZERO keyword can change from input file to input file. Be very careful when processing multiple files and specifying filtering time-ranges.
The extractors (as of 3.6) print out detailed information about all of the time-ranges they have been supplied with. If you get a "no counts" message, carefully compare the time-intervals being used for filtering against the actual time-ranges of the data in the files.
You can use "timetrans" to convert relative times to absolute times
related to a specific file. The absolute time is calculated by adding
TIMEZERO and TSTART together to get the absolute time offset. This value
is then added to every relative time specified. The output file created
by timetrans can be fed directly into the extractor tools via the @filename
option at the timeint prompt. See the ABC Guide's page on using time intervals in the extractor.
I input a channel range into one of the extractors and get unexpected results. What's wrong? All of the extractors and XTE-specific tools refer to absolute energy channels for filtering purposes. The XTE satellite has 256 possible energy channels, ranging from 0 to 255, and these channels can be "binned" in many different ways, depending upon the "mode" the satellite was in for the observation. The extractors are aware of this: they examine all input and compare the absolute channels given to determine which "binned" channels these correspond to. If an absolute channel falls within a binned channel, the entire binned channel is processed. So if a data file contained three binned channels spanning 0~100, 101~201, and 202~255, and the user input a minimum acceptable channel of 50 and a maximum of 210, ALL data within the file would be processed: channel 50 cannot be separated from the bin containing 0~100, so that entire bin must be processed.
See the ABC Guide's page on using channel selection in the extractor. If you know the relative channels (Standard2 binning) or energy range you want and need to know the corresponding absolute channels, there is a table on the web where you can easily look up your data's gain epoch and find the correct absolute channel bin.
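The overlap rule described above can be sketched as follows (an illustration only; `bins_selected` is not an actual Ftool routine):

```python
# Sketch of the extractor's channel logic: any bin that overlaps the
# requested absolute-channel range is processed in full.
def bins_selected(bins, chmin, chmax):
    """bins: list of (low, high) absolute-channel ranges of the binned data."""
    return [(lo, hi) for (lo, hi) in bins if hi >= chmin and lo <= chmax]
```

With the three bins from the example above and a requested range of 50-210, all three bins (i.e. the whole file) are selected.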
I created a file using FXBARY and am trying to process it with SA(SE)EXTRCT but it crashes. How can I fix this?
A file operated upon by FXBARY remains compatible with the extractors, BUT only if the user specifies "TIME or Time" as the "time" column. Since FXBARY has the option of overwriting the TIME column or of creating a new column called BARYTIME, a user might incorrectly assume that the barycenter-corrected file can be processed with the extractor by simply specifying BARYTIME as the TIME column. Unfortunately, since all of the TSTART, TSTOP, and GTI information in the file pertains ONLY to the true TIME column, this will result in the code aborting if the user is lucky, or filtering on what the user specified and NOT what was wanted!
These codes are very complicated, and they can only get all of the
information necessary if the files were created
properly and the user understands HOW to ask for what they want. Basically, you have to convert the entire file and all associated TIME keywords over into
barycentric time - that is NOT done when you simply create a BARYTIME column.
Once you have "properly" converted this file over to barycentric time, you
can process it with SA(SE)EXTRCT as you would any other file. You should
carefully read over the help for the extractor you are planning to use so
that you know how to specify such things as time-intervals for filtering
properly (they must be in ABSOLUTE TIME). And if you are operating on an SE file created by the PHA detector, ask yourself: "What are you doing with the time-markers?" If you do not understand that question, re-read the ABC Guide's page on event mode data.
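Schematically, a complete conversion touches the TIME column, the GTIs, and the time keywords together, which is the point being made above. The sketch below is purely illustrative: it assumes a constant correction, which is not true in practice, since the barycentric delay varies along the orbit, and the function name is ours:

```python
# Schematic only: a barycentre correction must be applied consistently to
# the TIME column AND to every time keyword and GTI that refers to it,
# not merely stored in a separate BARYTIME column.
def barycentre_everything(times, gtis, header, delta):
    """delta: barycentric correction in seconds (held constant here purely
    for illustration)."""
    new_times = [t + delta for t in times]
    new_gtis = [(start + delta, stop + delta) for (start, stop) in gtis]
    new_header = dict(header)
    for key in ("TSTART", "TSTOP"):
        new_header[key] = header[key] + delta
    return new_times, new_gtis, new_header
```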
I'm trying to calculate the location of the RXTE satellite with respect to
the earth as a function of time. The Orbital Ephemeris files that come
with the RXTE observations contain X,Y, and Z coordinates, but I can't
seem to find any information on where the X,Y, and Z unit vectors point.
Assuming that Z is parallel to the spin axis of the earth, X and Y are
still undefined. Where can I find information about the spacecraft coordinates in these files? The quantities that you are looking for are in the filter file (ACSEARTHLAT, ACSEARTHLON, and ACSALTITUDE).
When I run xtefilt, it fails, giving an error message about not finding the requested ObsId. First check that you are giving the correct path to the directory containing the FITS index files for the given ObsId, and that all of the relevant index files (FI* and FMI) are not gzipped. The next question is whether or not you are using realtime data. This type of data has several differences from processed data, so there are extra steps to the analysis. One difference is the ObsId listed in the index file: it will not match the directory name, and this causes xtefilt to fail. You should read the recipe on working with realtime data to see how to get around this and other problems. You may also want to read the recipe on running xtefilt.
When I run xtefilt, it produces columns containing only INDEFs, or it fails saying it cannot find the input files.
The XTE patch (FTOOLS v4.1.1) added some particle rates from the Standard2 data to the collection in order to produce the electron contamination rates for each PCU in the resulting filter file. For slews, which the SOF attempts to schedule during SAA passages to maximize efficiency, you will often see this error for the columns of ApId 74. This is because, during SAA, the PCA is turned off and there may be no Standard2 files. These errors, therefore, can be ignored for slews (i.e. "A" or "Z" observations.) For pointed observations or other housekeeping files, this may be indicative of a more serious problem of missing data. FTOOLS v4.1 includes the ability to run on gzipped files, but if you are using any previous versions (you should not be; get the new tools), you will see this error when you forget to unzip the housekeeping files in the ACS, PCA, and HEXTE directories. You may also want to read the recipe on running xtefilt.
When I run xtefilt, I see error messages about columns missing and cannot compute certain quantities. What's wrong? The script xtefilt uses the appidlist that you give it to collect all the columns it lists; then the tool xtederive attempts to calculate a specific set of quantities from those collected columns. If a column is missing, xtederive may not be able to calculate everything. Columns may be missing because the appidlist was incomplete, in which case the user should get a new appidlist by typing "fhelp xtefilt" and using the list at the end of the help. Even with an updated appidlist, certain columns may still be missing for observations which are slews to or from the source, i.e. obsids ending with "A" or "Z". For slews, a minimum amount of housekeeping data is telemetered, and quantities such as "ELECTRON0", etc., will be missing. If using realtime data, columns may be missing simply because realtime data is often incomplete.
I am getting anomalous spikes in my light-curve RATE file. What's wrong? This is usually caused by errors in the specified time intervals that are input to the code: the "spike" appears at the end of a specified time interval and is exceptionally large. Since
the time-resolution of SE
(or SA) data files can be small, errors on the order of 1e-10 (usually
due to machine accuracy errors) in specifying
time intervals will manifest as the addition of an extra bin at the end
of a specified time interval containing only a few micro-seconds of
"time" (the bin will contain the amount of time specified by the nCDLTm or
TIMEDEL value). To ensure that this doesn't happen it is recommended
that the user set the parameter "mfracexp" in the parameter file
to some small, but non-zero value. This is the "Minimum acceptable fractional
exposure" given as some fraction less than 1.0d0. (The default value INDEF
will allow all FRACEXP values through.) It is recommended that the user
determine an acceptable minimum fractional exposure value and set
this parameter to remove the occurrence of very large RATE values due to
dividing a number of counts by an extremely small time interval. For SE
data a count value of 1.0d0 divided by a time-value of 1.0E-5 is NOT
uncommon, and can "wash-out" the other results, so use of "mfracexp" is
wise when one is calculating RATEs. Also, note that this does NOT affect the
spectral file. Two other parameters "mspinten" and "mlcinten" relate to the
maximum spectral and light curve value that is acceptable, but
these parameters should not be used to remove spikes due to small
fractional exposure (FRACEXP) values.
Another possibility is negative bins occurring in the middle of a good time range in your net light curve. A packet may have been dropped in the data telemetry, leaving that bin empty in the data file. You wouldn't notice this until you subtracted a background produced by pcabackest, which doesn't know that the data in that bin are missing. In other words, for Standard2 data, if a 16-second bin is lost, the GTI for that data file should take that into account; pcabackest, however, does not use this GTI, because that packet affects only one mode while the background may apply to other modes with nothing missing. In your background-subtracted light curve, you would then have a bin which is negative and has the magnitude of the estimated background.
I have an interesting new result from my XTE analysis. How can I alert the community? We have a RXTE Papers page where we keep track of XTE results. When your results are published, please let us know! (Send email to Padi Boyd at padi@dragons.gsfc.nasa.gov.)
I attempted to make a GTI file using maketime and put it into the extractor, but did not notice that it was empty. The extractor ignored the GTI and took all the data, instead of failing. How can I avoid this? An empty GTI file arises often, especially when running jobs through scripts where you are not checking every output file. This happens when no time meets the good-time criteria, e.g. when a file falls entirely within Earth occultation. If a GTI file with no rows is input to the extractors, the software treats it as no information and extracts all data falling within the other selection criteria. To change this default behavior to a more cautious mode, there is a hidden parameter, bailout. For example, saextrct bailout=yes will tell the extractor to abort if any of the inputs are invalid, rather than trying to run as best it can with what it was given.
I'd just like to enquire as to the magnitude of the count errors
that are created when making lightcurves.
When analysing the data I notice that for areas of high activity the error
tends to be less than the (square root of number of counts) error expected,
by a magnitude of around 5-7, is this reasonable or should I add this error
on to the tabulated error? The errors (in c/s) for a light curve should indeed be the square root of the number of counts per bin, divided by the binsize. (If you've background-subtracted the light curve, the errors on the source and background light curves should be propagated through to the net light curve.)
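That recipe amounts to the following (a minimal sketch; the function names are ours):

```python
from math import sqrt

# Error recipe for a light curve: sigma(rate) = sqrt(counts) / binsize.
# For a net (background-subtracted) light curve, the source and
# background errors add in quadrature.
def rate_error(counts, binsize):
    return sqrt(counts) / binsize

def net_error(src_counts, bkg_counts, binsize):
    return sqrt(rate_error(src_counts, binsize) ** 2
                + rate_error(bkg_counts, binsize) ** 2)
```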
The TIME_SINCE_SAA column in my filter file stays at 100 even through a period when all the PCUs go off for an obvious SAA passage; is this a mistake?
The TIME_SINCE_SAA is an estimate of the time since the last SAA passage where the activation would contribute significantly to the background count rate. It is a parameterization based on coordinates relative to the SAA and on a judgement of the degree of activation due to such a passage. For non-SAA orbits, this column is set to 100. There are, however, orbits where the satellite skims the edge of the SAA, the PCA turns off for a brief time, but the TIME_SINCE_SAA does not go to zero; this is because such short SAA passages cause little instrument activation and needn't be filtered out.
With all of the different background models available, how do I know which are best for my data?
We have attempted to summarize all of the information about which model files apply to which types of source, which epochs, etc., on a PCA Digest page. Please read this page carefully and check it for changes before you begin a new analysis, as it is frequently updated. If it does not answer your questions, let us know at xtehelp@athena.gsfc.nasa.gov.
My source is highly variable, sometimes within the range for the faint L7 model and sometimes not. What model should I use?
The conservative approach: for a consistent look at a variable source, the same background model should be applied to all the data, so the faint source model would not be appropriate. See the PCA Digest page for which models to use. After you get a general idea of what the source is doing, you can analyse the data where the source is faint using the faint L7 model.
Alternatively, you could try L7 on all the data, and then plot the net count rate in a signal-free band vs. the total count rate. You might determine that the oversubtraction at the higher rates is not problematic for your particular investigation (e.g. it is problematic for Compton reflection, but probably not for looking for lags between soft and hard bands).
How often are new SAA history files generated, and where do I get them?
The SAA history file is currently generated by hand approximately every three weeks. (Plans are in the works to automate this update.) The latest can be found from our ftp site at ftp://legacy.gsfc.nasa.gov/xte/calib_data/pca_bkgd/pca_saa_history.gz.
The background-subtracted light curve (using a background produced by pcabackest) has unexpected variability. Could this be a problem with the background estimator? The least understood aspect of the background is the component from the activation due to passage through the SAA. For the first few kiloseconds after SAA, the activation is dropping rapidly, and the rate of the drop-off is not modeled with enough precision. This can cause variations in the light curve which are due only to the background; it may also cause a net overestimation in the spectrum. So if you have a long enough observation, you would benefit from excluding the first kilosecond or so after SAA. (This can often be done by simply using maketime on the background lightcurve and giving some maximum value for the count rate.)
Extracting all layers and all channels will rarely produce a satisfactory result for the light curve. This is due to the difference in the signal to noise ratio between layer 1 and layers 2 and 3, and also to the fact that at the higher energies, the background dominates. For both of these reasons, any uncertainties in the model will be augmented and you are more likely to end up with fluctuations in the net light curve which are due to the background. So, for the best results, you should always extract a light curve from only those channels in which you expect significant source counts, and unless your source has significant flux at higher energies, you would be better off using only layer 1. See the recipe on using pcabackest.
Do I need to correct for PCA deadtime? If so, how do I correct the background?
As to the first question, it is a rather complex issue which depends on your particular source. The PCA instrument team has a discussion of this issue in their paper on the calibration of the PCA; see section 6.1.2 of Jahoda et al., ApJS, Volume 163, Issue 2, pp. 401-423.
If you decide to make this correction, we have a recipe that shows you how to correct PCA spectra for deadtime. As to the second question, there is no need to correct background spectra from the VLE background models, since the VLE rate is not affected by deadtime.
I'm having trouble downloading the pca_saa_history.gz file required for
the latest background modes. When I click on the pca_saa_history.gz link my
browser takes me to a page that contains lines of what looks like the
start of a FITS file, presumably the beginning of the pca_saa_history file.
This is all I seem to be able to find. Is there something that I'm doing
wrong? The page you mention is the SAA history file itself. All you need to do is choose 'Save as' under the File menu of your browser window (or use any key combination that lets you save) and save it to a local area. Your browser, like most, doesn't recognize a FITS file and hence reads it as a text file, putting the contents of the FITS file on screen.
I found the new background files and SAA_HISTORY file from the Quick Guide
to Model Files. It said that users can apply the new models to PCU0 by
including a few extra steps in their analysis to get more columns in their
filter file and calculate a derived quantity. Are these steps necessary for
bright sources? The special steps outlined for analyzing PCU0 data with
the new background models apply to both the faint and the bright
models. The author of the new models, Craig Markwardt,
has a Web page that describes the models and explains the
extra filtering required to analyze PCU0 data. Go to:
http://lheawww.gsfc.nasa.gov/users/craigm/pca-bkg/bkg-users.html
and scroll down until you find the section entitled
"PCU 0 Models."
I am doing spectral analysis using standard2 data (from the top Xe layer
only) during gain epoch 3b. I am using the new faint source backgrounds.
The errors that saextrct puts in the .pha file are simply the square root of
the predicted counts, which I know isn't the background model error. What
errors should I be using for each standard2 channel?
The page
http://lheawww.gsfc.nasa.gov/users/craigm/pca-bkg/bkg-users.html
documents systematic errors in the 2-10 and 10-20 keV bands for the new
models. I suggest taking these errors and distributing them uniformly among the channels. (For example, for PCU0 in the 2-10 keV band, the error is 0.029 c/s; since there are 22 Standard2 channels in 2-10 keV, you would use 0.029/sqrt(22) = 6.2e-3 c/s for each channel.) This is equivalent to assuming that when the model is low in one channel, it is low in all (or at least adjacent) channels. (Though note that the method of generating the new CM background models leaves each channel statistically independent.) This seems likely to be correct,
though it hasn't been examined in detail. The statistical error in the
background is nearly zero, and certainly much less than your source, since
so much data has gone into the model.
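The worked example above, as arithmetic (a sketch, with our own function name):

```python
from math import sqrt

# Distribute a band-integrated systematic error uniformly over N
# statistically independent channels.
def per_channel_error(band_error, n_channels):
    return band_error / sqrt(n_channels)

# PCU0, 2-10 keV: 0.029 c/s spread over 22 Standard2 channels,
# giving roughly 6.2e-3 c/s per channel
err = per_channel_error(0.029, 22)
```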
Follow the steps closely, and pay special attention to Craig's
note that this filtering works best for "intermediate brightness
sources, probably in the 5-100 mCrab range." The Web page explains
why this is so, and what to do if your source is brighter than
about 100 mCrab.
I've found two SAA History files in the archive; which one should I use?
The two SAA history files have identical content: one file ("pca_saa_history.gz") is a symbolic link to the other. This allows users running remote software to always grab "pca_saa_history.gz" and know they're retrieving the most up-to-date file. At the same time, users browsing the FTP area can tell at a glance the date of the latest data in the file, since that date is part of the other file's name.
The dates covered by the file are also contained in the FITS header. Thus, users who have an old "pca_saa_history.gz" file that was previously downloaded can check the FITS header to see what dates it's valid for.
Why does pcabackest seem to be over-estimating by a significant factor? The most common cause of this is a mismatch between the number of PCUs actually on during the observation and the number extracted from the background data. Check your filter file by plotting the columns PCU3_ON, PCU4_ON, and NUM_PCU_ON; these two detectors are the most likely to go off at strange times, so see how many PCUs are on during your good time intervals (all go off during occultations). If one or two PCUs go off, remember to extract from the background only those which were on. See the recipe on selecting by detector and anode.
If this is not the problem, then small overestimations may be due to the uncertainty in the activation component of the Q6 or VLE background models. See the pcabackest recipe for ways to handle problems with the background estimate. If using the Faint (L7) model and getting serious over-estimation, perhaps your source is too bright for this model to be applicable. See the PCA Digest page for what models to use with what data.
Can pcabackest create a background for other modes than Standard2? The background estimator uses some of the particle rates (which are included in the Standard2 data only) to model the expected good xenon background rates. It produces estimated good xenon event rates for each detector (and each layer, if you so require) in the same format as Standard2 data, i.e. spectral histograms spanning 16 second time bins (129 channels by default, or 256 if you so specify.) Though the background data are in Standard2 format, the count rates apply to the observation, not to that mode. In other words, you can use that data to subtract background from any mode used during that observation. See the recipe on using pcabackest for an example.
In pcabackest, do I need to apply the gain correction in estimating the background for my observation? This refers to the slight differences in gain among the five detectors, which are corrected in modes where the data from the separate PCUs are combined on board the satellite. The gain correction option of pcabackest will adjust the events it produces accordingly, for consistency in comparing the background to the data. You should therefore apply this correction only if the data have had it applied. The only modes where it is not applied are Standard1, Standard2, Transparent, and GoodXenon, so for these do not apply it in pcabackest; for any other mode, answer yes to apply the correction. See the ABC Guide section on PCA properties (item #6).
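The rule above can be summarized as a small lookup. This is only a sketch: matching mode names by prefix is our assumption for illustration, not something pcabackest itself does:

```python
# Modes whose data are NOT combined across PCUs on board, so the gain
# correction has not been applied and should not be applied to the
# background either (per the rule above).
UNCORRECTED_MODES = ("Standard1", "Standard2", "Transparent", "GoodXenon")

def apply_gain_correction(mode_name):
    """Return True if the pcabackest gain correction should be applied."""
    return not mode_name.startswith(UNCORRECTED_MODES)
```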
While running pcabackest, I get a failure and a message about "Filter file does not cover the time span of the input". How can I fix this?
The background estimator needs certain quantities from the filter file to estimate the background event rate. Due to the different ways science and housekeeping data are formatted and telemetered, the science data will occasionally extend for a short period beyond the filter file. Pcabackest allows for 32 seconds of "slop" over which it will extrapolate the missing housekeeping. If the science data extend slightly further past the filter file, you can increase the amount of slop tolerated by pcabackest using the hidden parameter "timeslop". If the shortfall is more than a few minutes, however, you may be using an incorrect filter file for the input data.
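The check pcabackest is effectively making can be sketched as follows (a hypothetical helper; pcabackest's real internals may differ):

```python
# Sketch: science data may run a little past the end of the filter file,
# and up to `timeslop` seconds of shortfall (32 by default, per the text
# above) are extrapolated over rather than treated as an error.
def filter_file_covers(science_tstop, filter_tstop, timeslop=32.0):
    shortfall = science_tstop - filter_tstop
    return shortfall <= timeslop
```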
Do you have a new version of the PCADTLC script that will run
on the current version of FTOOLS? I have an old version,
from FTOOLS 4.2, that was sent by you in July 2000, which does
not appear to run properly with the latest version of FTOOLS.
PCADTLC was pulled from the RXTE FTOOLS distribution when a bug
was found in how the number of PCUs was corrected for. In the
meantime, we uncovered evidence that the deadtime fraction was
dependent on at least one more variable that is not accounted for
in the current recipe (and script). This has yet to be resolved
in a satisfactory way.
For the time being, we recommend doing your deadtime correction "by
hand" using the corfile method within Xspec to align the background
and source spectrum at very high energies (say, 70 keV and above).
For a typical source in which the detected counts drop rapidly toward
zero at higher energies in the PCA, this should give you the most
reliable deadtime correction, and hence the most reliable spectral
indices as a function of time, and source strength.
An example of this procedure can be found in our online PCABACKEST
Recipe at
http://rxte.gsfc.nasa.gov/docs/xte/recipes/pcabackest.html
What are the systematic errors associated with the background modeling of pcabackest?
The PCA team published an accounting of systematic errors in the background model in Jahoda et al. (2005), ApJS, 163, 401. A more recent analysis by the PCA team from
2006 is also available. Generally speaking, the systematic
uncertainties are less than 1-2% of the background level in the 2-20 keV
band, although this figure depends on the particular PCUs and energy
bands being considered. More detailed information can be found in the
above two papers.
My spectrum shows an unexpected feature centered around 4.5-5 keV; could this be a feature in the response matrix? Yes, there is a known feature there due to the xenon L edge at 4.78 keV. If you plot the ratio of your residuals to the model and see a wiggle of ~3-4% (looking, in fact, like a ~ shape), then this is probably due to this edge. Some people choose to fit a Gaussian absorption feature here to get a better fit for their model, or simply ignore those channels. Others use grppha to add a systematic error of ~2% to each channel (see next question). (The next pcarmf, v2.2.1, will greatly improve the response around this edge.)
Do I need to use pcarsp to create a different response matrix for each spectrum? The correct response matrix (RMF) depends on which layers and detectors you used to extract each spectrum, as well as on the gain in the detector(s) in which the data were taken. If you want to correct the collimator response for an offset pointing or spacecraft jitter, you need to give an attitude file which corresponds to the dataset from which you obtained the spectrum. In addition, pcarmf now accounts for the slow drift in the detector gain, which means that for a truly accurate response, observations separated by more than a few months should have separate response matrices made to take this into account.
Why does the script pcarsp seem to hang at the xpcaarf step? The script pcarsp calls xpcaarf if you give it an attitude file (either the housekeeping file or the filter file) from which to derive the offset from the nominal source and pointing positions. There are two common causes of xpcaarf hanging, both having to do with giving it an unreasonably large offset. First, though it may seem an obvious mistake to some, you cannot give xpcaarf an attitude file for a scan observation. If you wish to look at a spectrum collected over a scan, simply use "none" for the attitude file. This tells the script not to try to account for the varying offset in computing the collimator response. Second, and more commonly, there is an extra step if you are using realtime data. Before processing, the data files do not have all keywords set correctly, most importantly the coordinates of the object. They have to be set by hand so that xpcaarf will know how to compute the correct response. How to do this is detailed in the recipe Working with realtime data.
Is there a considerable roll dependence in the off-axis response of
the PCA collimator? That's what the response matrices I've generated with
pcarsp are implying, and I'd like a sanity check by a PCA expert.
Yes, I believe the variations are reasonable for the PCA. The PCA collimator
is hexagonal in outline, so the response function is approximately a
hexagonal pyramid. Depending on the roll angle, a source at a fixed
distance from the boresight will sample different parts of this function.
Furthermore, each of the PCUs has a different alignment with respect to the
"instrument" axis, so in reality the off-axis angle will be different for
each PCU for different spacecraft orientations.
When do I need to consider the propane loss of PCU 0? It is only after
the loss of the PCU0 propane layer on May 12, 2002 that special handling
of PCU0 is necessary. The PCA Digest
has more information and details of the new procedures.
I'm trying to find the recommended level to set the systematic error to
when using xspec. I'm comparing my RXTE observation (epoch 4/L7 faint
source model/pcu 0,2, and 3) with a few of the standard xspec models. I've
found values for the systematics that you recommend using with the
background models, but am I correct in thinking that these are only
appropriate for use with lightcurves? When dealing with spectra, it is
the systematic error associated with the response matrix that needs to be
used, when trying to compare predicted to actual counts. If I'm not on
completely the wrong track, then it is the response matrix systematics
that I can't find. The answer is a resounding "it depends".
Personally, being a simple soul who fits continua to bright LMXBs
to look for broad variations in the fit parameters, I add 1% in XSPEC
using the "system 0.01" command, based on the rule of thumb that
residuals in power law fits to the Crab can be as large as 1%.
In the literature you'll find that many authors also routinely
add a 1% systematic error in this way when performing spectral fits.
However, I'd be the first to admit that this is not strictly
correct. If you asked Keith Jahoda of the PCA Team this question,
he'd tell you that each author has to consider what the problem
they're working on demands. If you're looking for upper limits to
narrow lines, the deviation from a continuum fit to a smooth source
tells you something relevant for each channel. If you want to bound
the range of continuum parameters, the variation among fits to
individual detectors tells you something (perhaps not everything).
The danger of adding a uniform systematic error everywhere is that
the errors are demonstrably not the same everywhere; the largest
residuals to Crab fits always appear at about the same energies. The
most rigorous way to extract the best spectral constraints would
require assigning ENERGY DEPENDENT systematic errors to the spectral
channels. Take Crab data from a nearby date (if it's not public,
and the PI is Keith Jahoda, I suspect he'll be willing to make it
available to you). Fit a power law over the same spectral range, and
the same number of detectors/layers that you're using to analyze your
own source. Now you can derive a channel-by-channel estimate of the
systematic error that you can enter as a new column in your .pha file.
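The channel-by-channel estimate described above amounts to taking the fractional residual of a Crab power-law fit in each channel as that channel's systematic error. A minimal sketch with made-up numbers (the real values would come from your own Crab fit, and the resulting column would go into the PHA file's systematic-error column):

```python
# Sketch of deriving energy-dependent systematic errors from Crab
# residuals: the fractional |data - model| / model in each channel is
# taken as that channel's systematic error. Numbers are hypothetical.

def channel_systematics(observed, model):
    """Fractional |data - model| / model per channel."""
    return [abs(d - m) / m for d, m in zip(observed, model)]

crab_model    = [1000.0, 800.0, 600.0, 400.0]
crab_observed = [1010.0, 792.0, 612.0, 396.0]
sys_err = channel_systematics(crab_observed, crab_model)
print([round(s, 3) for s in sys_err])  # [0.01, 0.01, 0.02, 0.01]
```

This reflects the point made above: the residuals, and hence the appropriate systematic errors, vary from channel to channel rather than being a single uniform percentage.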
Some time ago I needed to produce a list of barycentrically
corrected PCA photon arrival times for an optical/X-ray fast
correlation. The data mode is E_125us_64M_0_1s. What I did was simply to
apply sefilter to remove the markers, then barytime to obtain
barycentrically corrected times. What I was left with was a list of
corrected arrival times (plus of course an Event column). Now I would like
to sub-select this list in a few channel bands and
cannot find a good way to do that. Is this possible?
From your description of the situation, it sounds like you
should get what you want from a two-step process:
1) use SEBITMASK to create the bitmasks appropriate to the channel
ranges of interest.
2) use these bitmasks with FSELECT to operate on your barycenter-corrected
events list to create the channel-band data of interest to you.
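Conceptually, the channel-band selection applied to the events list does the following. This is a pure-Python sketch with made-up event tuples, not the FSELECT/bitmask syntax itself:

```python
# Conceptual sketch of selecting events in a channel band: keep only
# events whose channel falls within the band of interest. The event
# tuples and band edges are hypothetical.

def select_band(events, lo, hi):
    """events: list of (barycentric_time, channel); keep lo <= ch <= hi."""
    return [(t, ch) for t, ch in events if lo <= ch <= hi]

events = [(1.0, 5), (1.1, 20), (1.2, 12), (1.3, 40)]
print(select_band(events, 10, 30))  # [(1.1, 20), (1.2, 12)]
```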
I've got a question about faxbary - does the following output indicate
that faxbary failed to correct the times or not?
[sgtl@bastet ~/FAXBARY]$ faxbary
Input file name: (gx_LR1_bcor.lc):
Output file name: (gx_LR1_bcor_faxb.lc):
Orbit ephemeris file(s) (or @filename): (FPorbit_Day2559):
**** faxbary v2.1 ****
running: axBary -i FPorbit_Day2559 -f gx_LR1_bcor_faxb.lc -ra
1.25000000E+01 -dec -7.30999985E+01 -ref FK5
axBary: Using JPL Planetary Ephemeris DE-200
axBary: Using JPL Planetary Ephemeris DE-200
axBary: Using JPL Planetary Ephemeris DE-200
axBary: Could not find a Time column in HDU 2
faxbary is one of our more verbose tools, and the output you see is
perfectly normal; so I would suspect the times were corrected.
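Beyond reading the console output, a quick sanity check is to look at the timing keywords in the output file's header: a barycenter-corrected file normally carries TIMESYS='TDB' (and TIMEREF='SOLARSYSTEM'). The dict below stands in for a FITS header you would read with your favorite FITS library:

```python
# Sanity check after running faxbary: a barycenter-corrected RXTE file
# normally has TIMESYS='TDB' in its header. The dict stands in for a
# real FITS header object.

def looks_barycentered(header):
    return header.get("TIMESYS") == "TDB"

header = {"TIMESYS": "TDB", "TIMEREF": "SOLARSYSTEM"}
print(looks_barycentered(header))  # True
```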
Is there any reason to expect a daily cycle in the sampling times of the
ASM on RXTE? Since there is some variation in the SAA pattern on an
approximately daily timescale, and since some target scheduling can repeat
on timescales of approximately a day, there can indeed be a daily cycle in the
sampling times of ASM sources, depending on their position in the sky relative
to the pointing direction of the satellite.
Which data sets are better to use: the data available through the RXTE web
site in FITS format, or the data available through the MIT ASM page? I am
interested in the data quality, not the format. Is there a difference? Which
sets are better, the quick-look or definitive products, and why are they
different? The ASM ASCII files from MIT should contain exactly the same data
as our FITS files; the only difference is the format.
How do I barycenter correct my ASM data?
We have a new section of our
ASM recipe with advice on barycenter-correcting
ASM data.
My realtime data do not seem to include the 16s housekeeping required by the tool hxtdead. Can I still correct the data for deadtime?
Unfortunately, the HEXTE 16s housekeeping files are not included in any realtime data, and they are necessary to correct for deadtime. You will have to wait for the processed data to do this correction. In the meantime, you can still perform valuable analysis by following the last item in the recipe for correcting the background by hand. (This procedure does not take into account the instrument deadtime, but it should correct for the four seconds of each rocking period during which the instrument is slewing and collecting no data.)
While running hxtdead, I see warning messages about "uld or xuld undfined..." and "Detector 0: Uld data interpolated..."; is this a problem?
This is not a problem, but merely a warning that the science data extends slightly beyond the housekeeping necessary for the deadtime correction. The tool will interpolate the housekeeping over small gaps introducing only negligible error. It is a common occurrence which should have no effect on the results of your analysis.
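The interpolation hxtdead describes amounts to filling a short gap in the 16-s housekeeping series linearly between the neighboring good samples. A minimal sketch with hypothetical times and rates:

```python
# Sketch of interpolating a housekeeping quantity (e.g. a ULD rate)
# across a short gap, linearly between the neighboring samples.
# Times and rates are hypothetical.

def fill_gap(t0, v0, t1, v1, t):
    """Linear interpolation of the housekeeping value at time t."""
    frac = (t - t0) / (t1 - t0)
    return v0 + frac * (v1 - v0)

# Rate known at t=0 and t=32 s; interpolate the missing 16-s samples.
print(fill_gap(0.0, 100.0, 32.0, 108.0, 8.0))   # 102.0
print(fill_gap(0.0, 100.0, 32.0, 108.0, 16.0))  # 104.0
```

Since the housekeeping varies slowly compared with the gap length, the error introduced by this interpolation is negligible, which is why the warning is harmless.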
My caldb installation is incomplete or not up to date; how do I update it?
The Caldb Management Page,
http://rxte.gsfc.nasa.gov/docs/heasarc/caldb/caldb_manage.html, has
detailed instructions on updating your caldb with sets of calibration files
from existing or new HEASARC-supported missions.
How can I obtain the most recent HEXTE calibration information?
In recent years, we've tried to streamline our FTP area, and in doing so all
the most recent HEXTE calibration information for Cluster A and Cluster B
detectors 0, 1, and 3 is found in one 'DEFAULT' area:
http://rxte.gsfc.nasa.gov/FTP/xte/calib_data/hexte_files/DEFAULT
The HEXTE recipe page says that people can just download the canned response
matrices, which I would like to do for HEXTE standard mode data. However, I
found that the standard mode data are in 64 bins, and the response matrix is
in 256 bins. Therefore the canned matrix must be rebinned, yes?
Yes, that is true. In our FTP area, there are some
rebinned canned HEXTE
response matrices. The README file has further information about the circumstances for their use.
I would like to create light curves and spectra from HEXTE (0,1) but I
could not find the conversion between energy and channel. Where can I find
this information? You can obtain the energy to channel relation in the
HEXTE rmf files. The rmf files are located at the following url
/FTP/xte/calib_data/hexte_files/DEFAULT After obtaining the rmf
files, you can do an fdump on the file choosing the CHANNEL, E_MIN, and/or
E_MAX columns from the EBOUNDS extension to obtain the conversion.
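Once the EBOUNDS columns have been dumped, using them is just a lookup from channel number to energy band. The rows below are made-up placeholders, not the real HEXTE channel boundaries:

```python
# Sketch of using EBOUNDS information dumped from an rmf: each row
# gives (CHANNEL, E_MIN, E_MAX). These rows are hypothetical
# placeholders, not the actual HEXTE boundaries.

ebounds = [
    (0, 10.0, 12.0),
    (1, 12.0, 14.5),
    (2, 14.5, 17.0),
]

def channel_energy(ebounds, channel):
    """Return (e_min, e_max) in keV for a given channel number."""
    for ch, e_min, e_max in ebounds:
        if ch == channel:
            return (e_min, e_max)
    raise KeyError(channel)

print(channel_energy(ebounds, 1))  # (12.0, 14.5)
```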
I'm trying to separate the on-source and off-source data using the
fselect command. It asks for a FITS file to which I want to apply the
expression ClstrPosition.eq.0.or.ClstrPosition.eq.3. Where is this file?
We already have an ftool to do this step for you; it is called hxtback.
This is a script which will produce one source and two background files (plus
and minus), or a single background file. The HEXTE recipe in our cookbook
has more detailed information on this step and on all the steps of HEXTE
analysis.
Why haven't I received my data tape yet, and when will I? Tapes are no longer being distributed. Processing previously done by the XSDC is now performed by the XTE GOF. An email message will be sent to you when your data are ready for distribution. For further inquiries, ask us at xtehelp@athena.gsfc.nasa.gov.
How do I get the orbit ephemerides for my observations if they are missing from my data? These are available in the public data archive on our anonymous FTP area: ftp://legacy.gsfc.nasa.gov/xte/data/archive/OrbitEphem.
I could not find the realtime data in xgo2.nascom.nasa.gov:pub/FITS for my observation from 2 days ago. Take a look in the following directory:
ftp://xtesof.nascom.nasa.gov/pub/FITS/production/
Your data should be there.
How do I obtain the PGP software needed to decrypt my data?
Our recipe on Obtaining and Using
PGP gives a detailed description of all the information you might
need to work with the PGP files, provided you have your PGP key.
With the new electronic distribution system, when does my year-long
proprietary period start? The proprietary year starts when the Principal
Investigator receives email notification that the data have been
PGP-encrypted and placed in the staging area
/FTP/xte/data/archive/PGPData
I was just beginning to recombine the PCA GoodXenon data from the
observation denoted by 70154-01-05-01 using the "make_se" script. The
script could not run due to mismatched files. I took a look and noticed
that there are three GoodXenon files in the PCA directory for this
observation. Since these should be found in pairs, I am assuming one is
missing. Can you tell me what has happened here, and if the data can be
found somewhere? Very rarely, when a GoodXenon observation is very short,
only one of the two necessary files is created. It looks as if you've
been unlucky, and received only half your data for those 130 seconds.
If you delete the extra GoodXenon file and rerun make_se, you can
then proceed with the routine analysis for the rest of the data from
this observation. Sorry the other 130 seconds can't be recovered!
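The pairing check that trips up make_se can be sketched as follows: GoodXenon data arrive as matched pairs of files, and a file without its partner cannot be recombined. The "GoodXenon1_*"/"GoodXenon2_*" naming used here is an assumption for illustration:

```python
# Sketch of checking that GoodXenon files come in matched pairs.
# The "GoodXenon1_"/"GoodXenon2_" prefixes are assumed for
# illustration; an unpaired file is the kind make_se chokes on.

def unpaired(filenames):
    """Return files whose partner (same suffix, other prefix) is missing."""
    ones = {f[len("GoodXenon1_"):] for f in filenames if f.startswith("GoodXenon1_")}
    twos = {f[len("GoodXenon2_"):] for f in filenames if f.startswith("GoodXenon2_")}
    return (sorted("GoodXenon1_" + s for s in ones - twos)
            + sorted("GoodXenon2_" + s for s in twos - ones))

files = ["GoodXenon1_a", "GoodXenon2_a", "GoodXenon1_b"]
print(unpaired(files))  # ['GoodXenon1_b']
```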
If you have a question about RXTE, please send email to one of our
help desks.