Reduction procedure

After subtracting dark current and sky brightness from your stellar data, and asking you to select a gradient estimator for each band, the reduction program converts intensities to magnitudes. At this stage, it warns you of any observations that seem to have zero or negative intensities (which obviously cannot be converted to magnitudes). These are often an indication that you have confused sky and star readings. The program then estimates starting values for the full solution.
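The conversion itself is just the usual logarithmic relation, m = -2.5 log10(I) + constant, which is undefined for zero or negative intensities. Here is a minimal sketch of that step and the corresponding check (an illustration only, not the PEPSYS code; the zero point and names are arbitrary):

    import numpy as np

    def intensities_to_magnitudes(intensity, zero_point=0.0):
        """Convert net (dark- and sky-subtracted) intensities to magnitudes.

        Non-positive intensities cannot be converted; they are flagged so
        you can check whether sky and star readings were confused.
        """
        intensity = np.asarray(intensity, dtype=float)
        bad = intensity <= 0.0
        if bad.any():
            print("Warning: %d observation(s) have zero or negative intensity" % bad.sum())
        mag = np.full_like(intensity, np.nan)
        mag[~bad] = zero_point - 2.5 * np.log10(intensity[~bad])
        return mag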

The program treats four different categories of stars differently: standard stars, extinction stars, program stars, and variable stars. It asks for the category of each file of stars when the positions are read in. Standard stars are those having known standard values, which are used to determine transformation coefficients; they may also be used to determine extinction. Be cautious about using published catalog values as standards; these often have systematic errors that will propagate into everything else. Extinction stars are constant stars, observed over an appreciable range of airmass, whose standard magnitudes and colors are unknown, or too poorly known to serve as standards. Ordinary stars taken from catalogs of photometric data are best used as extinction stars; later, you can compare their derived standard values with the published ones as a rough check. Program stars may be re-classified as either extinction or variable stars during the course of the extinction solution. If they are not used as extinction stars, they are not included in the solutions, but are treated as variable stars at the end. Variable stars are excluded from the extinction and transformation solutions; their individual observations are corrected for extinction and transformation after the necessary parameters have been evaluated.
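The categories differ only in how their observations enter the solutions; the following compact summary of that bookkeeping uses hypothetical names (PEPSYS itself does not expose such an interface):

    from enum import Enum

    class StarCategory(Enum):
        STANDARD = "standard"      # known standard values; define transformations,
                                   #   and may also constrain extinction
        EXTINCTION = "extinction"  # constant stars over a range of airmass;
                                   #   constrain extinction only
        PROGRAM = "program"        # may be promoted to extinction or variable
                                   #   during the solution
        VARIABLE = "variable"      # excluded from extinction and transformation
                                   #   fits; corrected individually afterwards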

You will do best to maintain separate star files for each category; however, you can intermix extinction and variable stars in a ``program star'' file, and PEPSYS will do the best it can to sort them out, if you ask it to use program stars for extinction (see below). Remember that only star files need to be separated this way; a data file normally has all the observations for a given night, regardless of the type of object.

You can also group related files in a MIDAS ``catalog'' file. Use the MIDAS command CREATE/TCAT to refer to several similar *.tbl files as a catalog file. Note that (a) you must enter the ``.cat'' suffix explicitly when giving PEPSYS a catalog; and (b) a catalog must contain only tables of the same kind -- e.g., only standard stars, or only program stars. Catalogs can be used for both star files and data files.

When all the star files have been read, the program asks for the name of the table file that describes the instrument. As usual, you can omit the ``.tbl'' suffix and it will be supplied.

Then the program asks for the data files. Remember to add the ``.cat'' suffix if you use a catalog. While reading data, it may ask you for help in supplying cross-identifications if a star name in the data did not occur in a star table. If all your data files have been read, but it is still asking for the name of a data file, reply NONE and it will go on. (This same trick works for star files too.)

The program will display a plot of airmass vs. time for the standard and extinction stars on each night, so you can judge whether it makes sense to try to solve for time-dependent extinction later on. If the airmass range is small, it will warn you that you may not be able to determine extinction.
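If you want to anticipate this check yourself, the airmass can be approximated as sec z at moderate zenith distances and plotted against time. A small sketch follows (illustrative only, with made-up variable names, and using the plane-parallel approximation rather than whatever formula PEPSYS uses):

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_airmass(time_hours, zenith_deg, label=None):
        """Plot a simple sec(z) airmass against time for one star."""
        z = np.radians(np.asarray(zenith_deg, dtype=float))
        airmass = 1.0 / np.cos(z)        # adequate below z of roughly 60-70 degrees
        plt.plot(time_hours, airmass, "o-", label=label)

    # example: plot_airmass([22.0, 23.2, 24.5], [30.0, 45.0, 60.0], label="extinction star")
    # plt.xlabel("UT (hours)"); plt.ylabel("airmass"); plt.legend(); plt.show()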

If the data are well distributed, and there are numerous standard stars, it will obtain starting values of extinction coefficients from the standard values; otherwise, it assumes reasonable extinction coefficients and estimates starting values of the magnitudes from them. (These starting values are simple linear fits, somewhat in the style of SNOPY, except that robust lines are used instead of least squares.) From the preliminary values of magnitudes, it tries to determine transformation coefficients, if standards are available.
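These Bouguer-style starting fits amount to a robust straight line of instrumental magnitude against airmass, m(X) = m0 + kX. Here is a sketch of such a fit, using a Theil-Sen estimator simply as a stand-in for whatever robust line PEPSYS actually uses:

    import numpy as np
    from scipy.stats import theilslopes

    def bouguer_fit(airmass, inst_mag):
        """Robust straight-line fit of instrumental magnitude vs. airmass.

        Returns (k, m0): an extinction coefficient in mag per airmass and
        the extrapolated above-atmosphere magnitude.
        """
        airmass = np.asarray(airmass, dtype=float)
        inst_mag = np.asarray(inst_mag, dtype=float)
        k, m0, _, _ = theilslopes(inst_mag, airmass)   # robust slope and intercept
        return k, m0

    # example: k, m0 = bouguer_fit([1.0, 1.3, 1.8, 2.2], [7.52, 7.59, 7.71, 7.80])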

This whole process is iterated a few times to obtain a self-consistent set of starting values for all the parameters, except for bandwidth factors. Each time the program loops back to refine the starting values, it adds a line like ``BEGIN CYCLE n'' to the log. Don't confuse these iterations, which are just to get good starting values, with the iterations performed later in the full solution.

One problem in this preliminary estimation is that faint stars with a lot of photon noise might just add noise to the extinction coefficients. The program tries to determine where stars become too faint and noisy to be useful for estimating extinction. The rough extinction and transformation coefficients will be displayed as they are determined; if any fall outside a reasonable range of values, the program will tell you and give you a chance to try more reasonable values.
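The underlying issue is photon statistics: for N detected photons the magnitude error is roughly 1.0857/sqrt(N), so a star whose per-observation error greatly exceeds the extinction signal only dilutes the fit. An illustrative cut (the 0.02 mag threshold is an assumption for the example, not the value PEPSYS uses):

    import numpy as np

    def photon_noise_mag(n_photons):
        """Approximate 1-sigma magnitude error from Poisson photon statistics."""
        n = np.asarray(n_photons, dtype=float)
        return 2.5 / np.log(10.0) / np.sqrt(n)     # about 1.0857 / sqrt(N)

    def useful_for_extinction(n_photons, max_sigma=0.02):
        """Keep only stars whose photon noise is below an (assumed) threshold."""
        return photon_noise_mag(n_photons) <= max_sigma

    # photon_noise_mag(1e4) is about 0.011 mag; photon_noise_mag(1e6) is about 0.0011 mag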

When reasonable starting values have been found for the parameters that will be estimated in the full solution, the program still needs to decide how to do the solution. Should the standard values of the standard stars be used in determining extinction (which requires estimating transformation coefficients simultaneously), or should extinction be determined from the observations alone, and the transformations found afterward? The program will ask your advice.

If you have designated no extinction stars, the reduction program will ask you whether you want to treat program stars as extinction stars. If you believe most of them are constant in light, you can try using all of them as extinction stars. If some turn out to be variable, you will have a chance to label them as such later on. If many of the program stars are faint, they may be too noisy to contribute anything useful to the extinction solution; then you could leave them out and speed up the solution. On the other hand, values obtained from multiple observations in the general solution will be a little more accurate than values obtained by averaging individual data at the end.

When the program is ready to begin iterating, it will ask how much output you want. When you first use PEPSYS, you may find it useful to choose option 2 (display of iteration number and variance). After you get used to how long a given amount of data is likely to take to reduce, you can just choose option 1 (no information about iterations). The detailed output from the other options is quite long, and should only be requested if the iterations are not converging properly and you want to look at the full details.

If you ask for iteration output, you will always see the weighted sum of squares of the residuals (called WVAR in the output) and the typical residual for an observation of unit weight (called SCALE), which should be near 0.01 magnitude, because the weights are chosen so that unit weight corresponds to an error of 0.01 mag. When SCALE changes, there can be considerable changes in WVAR; in particular, don't be alarmed if WVAR occasionally increases. A flag called MODE is also displayed; it is reset whenever SCALE is adjusted. During normal iterations MODE = 4; ordinarily, most of the iterations are in mode 4, with occasional reversions to mode 1 when SCALE is adjusted. More detailed output contains the values of all the parameters at each iteration, and other internal values used in the solution; these are mainly useful for debugging and can be ignored by the average user.
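Conceptually, these are ordinary weighted least-squares quantities. The following sketch shows what such numbers mean; it is an illustration of the idea, not the definitions used inside PEPSYS:

    import numpy as np

    def weighted_variance_and_scale(residuals, sigmas, n_params, unit_error=0.01):
        """Weighted sum of squared residuals and the implied unit-weight error.

        Weights are scaled so that an observation with sigma == unit_error
        (0.01 mag) has unit weight; for consistent data the unit-weight
        error then comes out near 0.01 mag.
        """
        r = np.asarray(residuals, dtype=float)
        w = (unit_error / np.asarray(sigmas, dtype=float)) ** 2
        wvar = np.sum(w * r * r)                  # analogous to WVAR
        dof = max(len(r) - n_params, 1)
        scale = np.sqrt(wvar / dof)               # analogous to SCALE
        return wvar, scale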

Typically 20 or 30 iterations are needed for convergence; if the data are poorly distributed, so that some parameters are highly correlated, more iterations will be needed. If convergence is not reached in 99 iterations, the program will give up and check the values of the parameters (see next paragraph). This usually indicates that you are trying to determine parameters that are not well constrained by the data. It will also check the values of the partial derivatives used in the problem; if there is an error here larger than a few parts in 10^6, you have found a bug in the program.
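The derivative check is the usual comparison of an analytic partial derivative with a numerical one; here is a generic version of such a test, which assumes nothing about PEPSYS's internals:

    import numpy as np

    def check_partial_derivative(f, params, analytic_grad, i, h=1e-5, tol=1e-6):
        """Compare one analytic partial derivative with a central difference.

        Returns True if the relative disagreement is below tol (a few parts
        in 10^6); a failure would point to an inconsistency in the model code.
        """
        p = np.asarray(params, dtype=float)
        pp, pm = p.copy(), p.copy()
        pp[i] += h
        pm[i] -= h
        numeric = (f(pp) - f(pm)) / (2.0 * h)
        denom = max(abs(numeric), abs(analytic_grad[i]), 1e-30)
        return abs(numeric - analytic_grad[i]) / denom < tol

    # example: f = lambda p: p[0]**2 + 3.0*p[1]
    #          check_partial_derivative(f, [2.0, 1.0], [4.0, 3.0], 0)  -> True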

At the end of a solution, the program prints the typical error, and then examines the parameters obtained. If extinction or transformation coefficients or bandwidth parameters seem very unreasonable, the program will simply fix them at reasonable values and try again. If they seem marginally unreasonable, the program will ask for your advice.

When reasonable values are obtained for all the parameters, the program will check the errors in the transformation equations, if standard values have been used in the extinction solution. Then, if necessary, it will readjust the weights assigned to the standard-star data, and repeat the solution. Usually 3 or 4 such cycles are required to achieve complete convergence. Thus, including the transformation parameters in the extinction solution means the program may take longer to reach full convergence.

Having reached a solution in which the weights are consistent with the residuals, the program examines the stellar data again. If you have ``program'' stars that might be used to strengthen the extinction solution, it will ask if you want to use all, some, or none of them for extinction. If you reply SOME, it will go through the list of stars one by one and offer those that look promising; you can then accept or reject each individual star as an extinction star. Only stars whose observations cover a significant interval of time and/or airmass will be offered as extinction candidates.

In examining the individual stars, the program may find some that show signs of variability. For those that have several observations, a ``light-curve'' of residuals will be displayed. Pay close attention to the numbers on the vertical scale of this plot! Each star's residuals are scaled to fill the screen. If there are only a few data for a star, only the calculated RMS variation is shown.
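The plot is simply the star's residuals against time, with the vertical range set by the data themselves, so a scatter of a few millimagnitudes can look as dramatic as a variation of several tenths. An illustrative equivalent (names and layout are arbitrary):

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_residual_lightcurve(times, residuals, name=""):
        """Plot residuals vs. time, autoscaled as a screen plot would be."""
        r = np.asarray(residuals, dtype=float)
        rms = np.sqrt(np.mean(r * r))
        plt.plot(times, r, "o")
        plt.title("%s   RMS = %.4f mag" % (name, rms))
        plt.xlabel("time")
        plt.ylabel("residual (mag)")   # check this scale before judging variability
        plt.show()
        return rms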

A star may show more variation than expected, but if this is under 0.01 magnitude, it may still be a useful extinction star -- indeed, the star may not really be variable, but may have had its apparent variation enhanced by one or two anomalous data points. Another problem that can occur, if you have only a few extinction stars, or only one or two nights of data, is a variation in the extinction with time. This can produce a drift in the residuals of a standard or extinction star that may look like some kind of slow variation. Watch out for repeated clumps of anomalously dim measurements that occur about the same time for one star after another; this often indicates the passage of a patch of cirrus.
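One crude way to screen for such time-correlated dimming is to pool the residuals of all constant stars, bin them in time, and flag bins whose median residual runs faint. A sketch (the bin width and threshold are arbitrary choices for illustration, not anything PEPSYS does internally):

    import numpy as np

    def flag_cloudy_intervals(times, residuals, bin_hours=0.25, threshold=0.02):
        """Flag time bins in which pooled residuals from many stars run dim.

        `residuals` are (observed - fitted) magnitudes pooled over all
        constant stars, so a positive median means the stars were
        systematically faint, as during passage of a patch of cirrus.
        """
        t = np.asarray(times, dtype=float)
        r = np.asarray(residuals, dtype=float)
        edges = np.arange(t.min(), t.max() + bin_hours, bin_hours)
        flagged = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (t >= lo) & (t < hi)
            if in_bin.sum() >= 3 and np.median(r[in_bin]) > threshold:
                flagged.append((lo, hi))
        return flagged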

The program will show you several doubtful cases for every star that really turns out to be variable, so be cautious in deciding to treat them as variable stars. If you decide that a star really looks variable, you can change its category to ``variable'', and it will be excluded from all further extinction solutions.

If you change the category of any star, the program goes back and repeats the whole extinction solution from the beginning. When you get through a consistent solution without changing any star's category, the program announces that it has done all it can do, displays some residual plots, and prints the final results.

All reductions are done in the instrumental system, even if standard-star values (and hence transformations) are included as part of the extinction solution. It may be helpful to look at a schematic picture of the reduction process (see Figure 13.2).


  
Figure: Schematic flowchart for reductions

