Standard Star Photometry

Standard stars are stars with accepted, well-determined magnitudes. They are used to calibrate and transform a set of photometric data onto a particular photometric system. The basic recipe for transforming a set of data onto a photometric system is: observe the standards, distinguish the standards from the other field stars in each frame, measure the flux from the standards to obtain the corresponding instrumental magnitudes, and finally derive the terms in the transformation equations that carry these instrumental magnitudes onto the accepted magnitudes.
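As an illustration only (the actual form of the equations depends on the filter set and is derived from the data), a typical transformation equation relates an instrumental magnitude to the standard magnitude through a zero point, an extinction term, and a color term:

```latex
v_{\mathrm{inst}} = V + z_V + k_V X + c_V\,(V - I)
```

Here V is the accepted standard magnitude, X the airmass, z_V the nightly zero point, k_V the extinction coefficient, and c_V the color coefficient; fitting z_V, k_V, and c_V over many standards constitutes the calibration.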

I am working on the calibration of standards data taken at the INT in September 2004 in the V, I, 5170, Strömgren u and Strömgren v passbands. The data live in /dropbear_data2/djo3/Stds/INTsept04/. The images have been overscan-subtracted and trimmed, zero-corrected (bias-subtracted), linearized, and flat-fielded. Additionally, in the I band, fringes have been (satisfactorily) removed. These corrections and calibrations were done primarily with the IRAF task CCDPROC, accessed through MSCRED, along with many Perl scripts written by Dr. Paul Harding. I use IRAF (Image Reduction and Analysis Facility) version 2.12.a throughout the reduction and photometric calibration procedures.

Finding the Standards

Locating standard stars in a given image frame can be done either by hand or automatically. In the point-and-click approach, the user locates the standards in each frame by eye, for instance by examining a finding chart, and then records their x and y positions. This is undoubtedly tedious, and the by-hand coordinate measurement introduces user error that undermines confidence in the measured x and y centers of the standards.

The automated approach uses the DAOFIND task: a search algorithm that examines images for local density maxima, driven by user-defined parameters such as the typical FWHM of the standards, the standard deviation of the background, and the detection threshold (in multiples of sigma) above the background, in addition to many other user-specified parameters.

A few cautions. Under datapars, the fwhmpsf parameter expects the FWHM of the PSF in scale units. If the FWHM was measured from a radial plot in IMEXAMINE, for instance, the values reported are in pixels, not scale units. If the scale parameter is left at unity (i.e. one arc second per pixel), you can effectively work in pixels; if it is changed (which I prefer), you must convert the FWHM to scale units. Also remember that with mosaic data the gain and read noise change from CCD to CCD, so check the image header for each chip and update the corresponding DAOFIND parameters, since these values enter the detection calculation. Finally, DAOFIND produces accurate, centered x and y coordinates for every detection, which are critical to accurate photometry later in the process. The parameters for DAOFIND are shown below.

                                            I R A F  
                             Image Reduction and Analysis Facility

PACKAGE = daophot
   TASK = daofind

image   =            @v_4.stds  Input image(s)
output  =              default  Output coordinate file(s) (default: image.coo.?)
(starmap=                     ) Output density enhancement image(s)
(skymap =                     ) Output sky image(s)
(datapar=                     ) Data dependent parameters
(findpar=                     ) Object detection parameters
(boundar=              nearest) Boundary extension (constant|nearest|reflect|wrap)
(constan=                   0.) Constant for boundary extension
(interac=                   no) Interactive mode ?
(icomman=                     ) Image cursor: [x y wcs] key [cmd]
(gcomman=                     ) Graphics cursor: [x y wcs] key [cmd]
(wcsout =            )_.wcsout) The output coordinate system (logical,tv,physical)
(cache  =             )_.cache) Cache the image pixels ?
(verify =            )_.verify) Verify critical daofind parameters ?
(update =            )_.update) Update critical daofind parameters ?
(verbose=           )_.verbose) Print daofind messages ?
(graphic=          )_.graphics) Graphics device
(display=           )_.display) Display device
(mode   =                   ql)


The input is a list of images to be examined (here, the V-band images from CCD #4 in the INT CCD convention). For each file in the list, a corresponding output file with the extension .coo.? (where ? is a number) is produced containing, among other things, the x and y centers of each detection. The datapars and findpars entries are themselves parameter sets, which can be opened by typing :e in the respective field; they too must be examined and set by the user, and they are shown below. Note again that the values shown for readnoi and epadu are those for CCD chip four and must be changed depending on the chip.

                                            I R A F  
                             Image Reduction and Analysis Facility
PACKAGE = daophot
   TASK = datapars

(scale  =                 0.33) Image scale in units per pixel
(fwhmpsf=                 1.50) FWHM of the PSF in scale units
(emissio=                  yes) Features are positive ?
(sigma  =                   4.) Standard deviation of background in counts
(datamin=                 -20.) Minimum good data value
(datamax=               65500.) Maximum good data value
(noise  =              poisson) Noise model
(ccdread=                     ) CCD readout noise image header keyword
(gain   =                     ) CCD gain image header keyword
(readnoi=                  5.8) CCD readout noise in electrons
(epadu  =                  2.9) Gain in electrons per count
(exposur=              EXPTIME) Exposure time image header keyword
(airmass=              AIRMASS) Airmass image header keyword
(filter =              WFFBAND) Filter image header keyword
(obstime=              UTSTART) Time of observation image header keyword
(itime  =                INDEF) Exposure time
(xairmas=                INDEF) Airmass
(ifilter=                INDEF) Filter
(otime  =                INDEF) Time of observation
(mode   =                   ql)


                                            I R A F  
                             Image Reduction and Analysis Facility
PACKAGE = daophot
   TASK = findpars

(thresho=                   5.) Threshold in sigma for feature detection
(nsigma =                  1.5) Width of convolution kernel in sigma
(ratio  =                   1.) Ratio of minor to major axis of Gaussian kernel
(theta  =                   0.) Position angle of major axis of Gaussian kernel
(sharplo=                  0.2) Lower bound on sharpness for feature detection 
(sharphi=                   1.) Upper bound on sharpness for  feature detection
(roundlo=                  -1.) Lower bound on roundness for feature detection
(roundhi=                   2.) Upper bound on roundness for feature detection
(mkdetec=                  yes) Mark detections on the image display ?
(mode   =                   ql)


Many of the parameters for datapars and findpars can be found by examining an image and the standard stars with IMEXAMINE to determine things like the background statistics, the pixel saturation limit and the typical FWHM of a standard star. Note that raising the threshold value will reduce the number of detections, while lowering it will increase the number of detections, and not just stellar detections but spurious ones as well. It is also important to pay particular attention to the sharplo and sharphi parameters; it helps to start with sharphi set higher than normal (say ~3) in order to get a feel for the sharpness characteristics of the standards, and then adjust these parameters and tune them to your own specifications.
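As a concrete example of the pixels-to-scale-units conversion discussed above (a minimal sketch; the 5.0-pixel FWHM is an assumed IMEXAMINE measurement, not a value from this data set):

```python
# Convert a FWHM measured in pixels (e.g. from an IMEXAMINE radial plot)
# into the scale units that datapars.fwhmpsf expects. The 0.33"/pixel
# scale matches the datapars listing above; the 5.0-pixel FWHM is an
# assumed measurement for illustration only.
scale = 0.33        # arcsec per pixel (datapars.scale)
fwhm_pixels = 5.0   # FWHM from IMEXAMINE, in pixels (assumed)

fwhm_scale_units = fwhm_pixels * scale  # value to enter for datapars.fwhmpsf
print(round(fwhm_scale_units, 2))       # 1.65
```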

Taken at face value, the findpars parameters are not as obvious as those in datapars, but they make more sense once you consider how DAOFIND operates. As mentioned, DAOFIND searches for local density maxima, and if such a maximum exceeds the object detection threshold, the detection is recorded. An important point to remember is that pixel-to-pixel noise fluctuations can give individual pixels, or small groups of pixels, values above the detection threshold, especially at a low detection threshold, resulting in many detections due exclusively to noise. To increase the probability that a detection corresponds to a stellar source, knowledge of the point spread function (PSF) is used. DAOFIND constructs a Gaussian convolution kernel, truncated at nsigma, by approximating the PSF with an elliptical Gaussian function: the fwhmpsf and scale parameters in datapars set the distribution's sigma along the semi-major axis; the ratio parameter gives the ratio of the sigma along the minor axis to that along the major axis; and theta gives the position angle of the major axis of the elliptical Gaussian, measured counter-clockwise from the x axis. The sharpness and roundness parameters (sharplo/sharphi and roundlo/roundhi) are numerical cut-offs on the image sharpness and roundness statistics, used to eliminate brightness maxima due to bad pixels, or bad rows or columns. The image is then convolved with the Gaussian kernel, which is mathematically equivalent to least-squares fitting each point with a truncated, lowered elliptical Gaussian function; each point in the convolved image is thus an estimate of the amplitude of the best-fitting Gaussian at that point. In principle, the higher this amplitude, the greater the probability that a star is present.
Finally, DAOFIND steps through the convolved image searching for density maxima greater than the threshold.
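To make the description above concrete, here is a toy numpy sketch of the same strategy: build a truncated, lowered Gaussian kernel from an assumed FWHM, convolve the frame with it, and keep local maxima above the threshold. This is an illustration of the idea, not the actual DAOFIND implementation, and all the numbers (frame size, star position, noise level) are invented.

```python
import numpy as np

# Toy DAOFIND-style detection: matched-filter the image with a truncated,
# lowered (zero-mean) Gaussian kernel, then keep local maxima > threshold.
rng = np.random.default_rng(0)
fwhm = 3.0                       # assumed stellar FWHM in pixels
sig_psf = fwhm / 2.355           # Gaussian sigma from FWHM
nsigma = 1.5                     # kernel truncation radius, as in findpars
half = int(np.ceil(nsigma * sig_psf))

# Synthetic frame: flat sky noise plus one bright star at (x, y) = (15, 10).
sky_sigma = 4.0
image = rng.normal(0.0, sky_sigma, (32, 32))
yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
psf = np.exp(-(xx**2 + yy**2) / (2 * sig_psf**2))
image[10 - half:10 + half + 1, 15 - half:15 + half + 1] += 200.0 * psf

# Lowered kernel: subtract the mean so flat sky convolves to ~zero.
kernel = psf - psf.mean()

# Convolution (brute force, interior pixels only); dividing by sum(kernel^2)
# makes each value the least-squares amplitude of the best-fitting Gaussian.
threshold = 5.0 * sky_sigma
conv = np.zeros_like(image)
for y in range(half, 32 - half):
    for x in range(half, 32 - half):
        patch = image[y - half:y + half + 1, x - half:x + half + 1]
        conv[y, x] = (patch * kernel).sum() / (kernel**2).sum()

detections = []
for y in range(half + 1, 31 - half):
    for x in range(half + 1, 31 - half):
        if conv[y, x] > threshold and conv[y, x] == conv[y-1:y+2, x-1:x+2].max():
            detections.append((x, y))
print(detections)
```

The injected star is recovered at its true position while the pure-noise pixels stay below the 5-sigma threshold.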

In order for DAOFIND to detect all of our standards in each frame, we have to run at a relatively low threshold value. The natural consequence of such a threshold, as mentioned previously, is output files littered with unwanted detections, with the standards embedded among them. To reduce the size of these files and remove many of the unwanted detections, Dr. Paul Harding wrote a Perl script called coo_clean.pl (my modified version is dan_coo_clean.pl) that sorts each DAOFIND output file, keeps only the 200 brightest detections in each .coo.1 file, and writes the DAOFIND information for these 200 detections (i.e. the x and y coordinate centers) to a file in an IRAF-friendly format with the extension .coo.2. The underlying presumption is that the standards are among the 200 brightest detections in each frame, which is not unreasonable, especially in the relatively uncrowded fields we are dealing with.
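The core of that cleaning step can be sketched in a few lines of Python. This is a hypothetical re-implementation, not the real coo_clean.pl/dan_coo_clean.pl: I assume whitespace-separated rows with the DAOFIND relative magnitude in the third column (brighter means more negative), which may not match the actual .coo.1 layout.

```python
# Keep only the n brightest detections from a DAOFIND-style output file.
# Assumed layout: comment lines start with '#'; data rows are whitespace-
# separated with x, y in columns 1-2 and a relative magnitude in column 3.
def keep_brightest(lines, n=200):
    rows = []
    for line in lines:
        if line.startswith("#") or not line.strip():
            continue                         # skip header/comment lines
        cols = line.split()
        rows.append((float(cols[2]), line))  # sort key: DAOFIND magnitude
    rows.sort(key=lambda r: r[0])            # most negative (brightest) first
    return [line for _, line in rows[:n]]

sample = [
    "# XCENTER YCENTER MAG",
    "100.5  200.1  -2.31",
    "321.0   88.7  -0.10",
    " 55.2  140.9  -4.02",
]
print(keep_brightest(sample, n=2))
```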

Dealing with Observation Time

The observation times of exposures are written into the header in Universal Time, which recycles every 24 hours and therefore carries no information about the night on which an observation was made. Knowing the night matters because the photometric zero point changes from night to night, and the correct zero point must be applied to each exposure. We deal with this by recording the observation time in decimal years, written into the header under the keyword NEWEPOCH. We obtain these values with the IRAF task ASTHEDIT, located under ASTUTIL. The parameters for ASTHEDIT are shown below.

                                            I R A F  
                             Image Reduction and Analysis Facility
PACKAGE = astutil
   TASK = asthedit

images  =     @coo_stds.images  Images to be operated upon
commands=    asthedit_stds.com  File of commands
(table  =                     ) File of values
(colname=                     ) Column names in table file
(prompt =           asthedit> ) Prompt for STDIN commands
(update =                  yes) Update image header?
(verbose=                  yes) Verbose output?
(oldstyl=                   no) Use old style format?
(mode   =                   ql)


Of course the images parameter is the list of images we wish to add the new header information to. The list is just the [0] extension of the images because this will then write the information to all of the chips (since inherit is true). The list of commands is shown below which is the input to the commands parameter.

#example of a calculation
###########################################
###only needed if entered ST instead of UT
#$utmidn  = 4:00:01
#$stmidns  = mst ($utdate, $utmidn, obsdb ("lco", "longitude"))
#$stmidn  = real($stmidns)
#$ut      = $st - $stmidn + $utmidn  
#if($ut<0.) 
#  $ut = $ut + 24.
#endif 
######################################
$uts      = utstart
$dobs     = @'date-obs'
$nepoch = epoch ($dobs,$uts)
$nepoch = $nepoch - 2000.
newepoch  = format ("%12.7f", $nepoch)


This series of commands produces the observation time in decimal years (for instance 2004.67671). We then subtract 2000, since all of the observations were made in the same year, keeping only the digits that are significant and interesting to us (for instance 4.67671). These values are written into the headers under the keyword NEWEPOCH. When running PHOT later on, we will select this keyword in the DATAPARS parameters.
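The same computation can be sketched in Python. This is only an approximation of what IRAF's epoch() function does (its exact convention may differ slightly), and the date and time below are invented:

```python
from datetime import datetime

# Rough analogue of the ASTHEDIT commands above: combine DATE-OBS and
# UTSTART into a decimal-year epoch, then subtract 2000. IRAF's epoch()
# may use a slightly different convention; this is only an illustration.
def new_epoch(date_obs, ut_start):
    t = datetime.strptime(date_obs + " " + ut_start, "%Y-%m-%d %H:%M:%S")
    year_start = datetime(t.year, 1, 1)
    next_year = datetime(t.year + 1, 1, 1)
    frac = (t - year_start).total_seconds() / (next_year - year_start).total_seconds()
    return (t.year + frac) - 2000.0

# An invented observation near the run date, formatted as NEWEPOCH would be:
print("%12.7f" % new_epoch("2004-09-04", "23:30:00"))
```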

Aperture Photometry

The purpose of aperture photometry is to measure the brightness of an object without including contributions from any contaminating sources. This is done by centering an aperture on a star and summing the counts within the aperture. The aperture is centered on the coordinates found by DAOFIND, which is why it was important to obtain accurate x and y centers for each star. The important consideration is the size of the aperture and how much of the stellar signal it encloses. Since the stellar profile is approximately Gaussian, the wings of the distribution extend an appreciable distance from the stellar center, and seeing and air mass spread the distribution further. Enclosing too small an area could therefore yield too low a measured signal, so one might think a very large aperture is the obvious choice in order to enclose the entire signal. A very large aperture, though, while enclosing a larger fraction of the stellar signal, increases the probability that a cosmetic defect such as a cosmic ray is included in the sum, skewing the flux to higher values. Furthermore, the noise contributions from shot noise, dark noise and readout noise all scale with the number of pixels: a larger aperture means more pixels in the sum, an increased noise contribution, and effectively a lower signal-to-noise ratio (far from the stellar center you lose more to noise than you gain from the weak signal at that distance). One must therefore optimize the signal while simultaneously minimizing the noise contribution. A common technique for finding this optimal aperture radius is to measure the flux over a series of increasing aperture radii and examine how the corresponding magnitudes change with aperture.
There will be a point where the difference in magnitude between two apertures is less than 1%, which is a generally accepted cut-off, though the tolerance here is entirely up to the user. An example of such a plot is shown below, where I have plotted instrumental magnitude as a function of aperture for an arbitrary star. Note that there is only a 0.002 magnitude difference between the aperture of seven arc seconds and that of ten. Also note that at apertures of 14 - 16 arc seconds the magnitude increased (remember that the more positive the magnitude, the fainter the object), meaning we subtracted off more through the sky estimate than we gained by trying to include the weak stellar signal at that radius.




The next step is to subtract the counts contributed by the background; that is, by any source other than the star in question that contributed photons within the aperture. The ideal technique (really only possible for things like supernovae, where an image without the source exists) would be to remove the star, sum the counts within the aperture from everything else, and subtract this from the previous measurement. A common practical technique is instead to place an annulus of specified thickness around the aperture, measure the background level within the annulus, and subtract its contribution from the signal within the aperture. Of course, the size of the annulus must again be chosen with the same noise considerations in mind as above.
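The aperture-plus-annulus procedure can be sketched with numpy on a synthetic star. All numbers here are invented, radii are in pixels for simplicity, and the zero point of 25 matches the photpars listing used in this reduction:

```python
import numpy as np

# Sky-subtracted aperture photometry on a synthetic star: sum the counts
# in a circular aperture, subtract the median sky level measured in a
# surrounding annulus scaled by the aperture area, and convert to a
# magnitude with an assumed zero point.
rng = np.random.default_rng(1)
image = rng.normal(100.0, 4.0, (64, 64))        # flat sky of ~100 counts
yc, xc = 32.0, 32.0
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(xx - xc, yy - yc)
image += 500.0 * np.exp(-r**2 / (2 * 2.0**2))   # star, sigma = 2 px

aper = r <= 6.0                                  # aperture radius (3 sigma)
annulus = (r >= 10.0) & (r < 15.0)               # sky annulus

sky_per_pixel = np.median(image[annulus])
flux = image[aper].sum() - sky_per_pixel * aper.sum()
mag = 25.0 - 2.5 * np.log10(flux)                # zmag = 25, as in photpars
print(round(mag, 2))
```

The recovered flux is close to the injected total (the 3-sigma aperture encloses ~99% of a Gaussian profile), illustrating the trade-off discussed above.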

Among the output from running the dan_coo_clean.pl script are x and y coordinates for each of the 200 brightest detections in every image frame, including the standard stars. The next step is to do aperture photometry on the standards, measuring the flux from the standard stars and obtaining instrumental magnitudes. This is done with the PHOT task, which can be accessed through NOAO, DIGIPHOT and then DAOPHOT. The PHOT parameters are shown below.

                                            I R A F  
                             Image Reduction and Analysis Facility
PACKAGE = daophot
   TASK = phot

image   =             @phot_in  Input image(s)
coords  =           @phot_in_2  Input coordinate list(s) (default: image.coo.?)
output  =              default  Output photometry file(s) (default: image.mag.?)
skyfile =                       Input sky value file(s)
(plotfil=                     ) Output plot metacode file
(datapar=                     ) Data dependent parameters
(centerp=                     ) Centering parameters
(fitskyp=                     ) Sky fitting parameters
(photpar=                     ) Photometry parameters
(interac=                   no) Interactive mode ?
(radplot=                   no) Plot the radial profiles?
(icomman=                     ) Image cursor: [x y wcs] key [cmd]
(gcomman=                     ) Graphics cursor: [x y wcs] key [cmd]
(wcsin  =             )_.wcsin) The input coordinate system (logical,tv,physical,world)
(wcsout =            )_.wcsout) The output coordinate system (logical,tv,physical)
(cache  =             )_.cache) Cache the input image pixels in memory ?
(verify =            )_.verify) Verify critical phot parameters ?
(update =            )_.update) Update critical phot parameters ?
(verbose=           )_.verbose) Print phot messages ?
(graphic=             stdgraph) Graphics device
(display=             stdimage) Display device
(mode   =                   ql)


The input to PHOT is a list of image files (imagename.fit[?] or imagename.fits) and a list of coordinate files (imagename.coo.2). PHOT then measures the flux within user-specified apertures centered on the coordinates provided in the .coo.2 files and subtracts a background estimate calculated within an annulus centered on the star but outside the aperture radius. The datapars, centerpars, fitskypars and photpars entries are themselves parameter sets; they are shown below for CCD four (because, for instance, datapars.readnoi and datapars.epadu change for each CCD).

                                            I R A F  
                             Image Reduction and Analysis Facility
PACKAGE = daophot
   TASK = datapars

(scale  =                 0.33) Image scale in units per pixel
(fwhmpsf=                 1.50) FWHM of the PSF in scale units
(emissio=                  yes) Features are positive ?
(sigma  =                   4.) Standard deviation of background in counts
(datamin=                 -20.) Minimum good data value
(datamax=               65500.) Maximum good data value
(noise  =              poisson) Noise model
(ccdread=                     ) CCD readout noise image header keyword
(gain   =                     ) CCD gain image header keyword
(readnoi=                  5.8) CCD readout noise in electrons
(epadu  =                  2.9) Gain in electrons per count
(exposur=              EXPTIME) Exposure time image header keyword
(airmass=              AIRMASS) Airmass image header keyword
(filter =              WFFBAND) Filter image header keyword
(obstime=             NEWEPOCH) Time of observation image header keyword
(itime  =                INDEF) Exposure time
(xairmas=                INDEF) Airmass
(ifilter=                INDEF) Filter
(otime  =                INDEF) Time of observation
(mode   =                   ql)


                                            I R A F  
                             Image Reduction and Analysis Facility
PACKAGE = daophot
   TASK = centerpars

(calgori=                 none) Centering algorithm
(cbox   =                   3.) Centering box width in scale units
(cthresh=                   0.) Centering threshold in sigma above background
(minsnra=                   1.) Minimum signal-to-noise ratio for centering algorithm
(cmaxite=                   10) Maximum iterations for centering algorithm
(maxshif=                   1.) Maximum center shift in scale units
(clean  =                   no) Symmetry clean before centering
(rclean =                   1.) Cleaning radius in scale units
(rclip  =                   2.) Clipping radius in scale units
(kclean =                   3.) K-sigma rejection criterion in skysigma
(mkcente=                   no) Mark the computed center
(mode   =                   ql)


                                            I R A F  
                             Image Reduction and Analysis Facility
PACKAGE = daophot
   TASK = fitskypars

(salgori=               median) Sky fitting algorithm
(annulus=                  10.) Inner radius of sky annulus in scale units
(dannulu=                   5.) Width of sky annulus in scale units
(skyvalu=                   0.) User sky value
(smaxite=                   10) Maximum number of sky fitting iterations
(sloclip=                   0.) Lower clipping factor in percent
(shiclip=                   0.) Upper clipping factor in percent
(snrejec=                   50) Maximum number of sky fitting rejection iterations
(sloreje=                   3.) Lower K-sigma rejection limit in sky sigma
(shireje=                   3.) Upper K-sigma rejection limit in sky sigma
(khist  =                   3.) Half width of histogram in sky sigma
(binsize=                  0.1) Binsize of histogram in sky sigma
(smooth =                   no) Boxcar smooth the histogram
(rgrow  =                   0.) Region growing radius in scale units
(mksky  =                   no) Mark sky annuli on the display
(mode   =                   ql)


                                            I R A F  
                             Image Reduction and Analysis Facility
PACKAGE = daophot
   TASK = photpars

(weighti=             constant) Photometric weighting scheme
(apertur=           5,6,7,8,10) List of aperture radii in scale units
(zmag   =                  25.) Zero point of magnitude scale
(mkapert=                   no) Draw apertures on the display
(mode   =                   ql)


As I have it set up, PHOT outputs a file with the extension .mag.1 for every input image file, using the corresponding .coo.2 file for the aperture centering information, among other things. The .mag.1 files contain an ID number for every detection in each frame, the x and y coordinates of each detection, exposure time, observation time, airmass, filter, and then a series of columns containing the magnitudes for the given apertures and their associated errors. Note that the observation time is NEWEPOCH, which we wrote into the header ourselves as described above; it lets us easily keep track of the night each observation took place, which matters because the zero point offsets change from night to night.

The TXDUMP Task

TXDUMP is a very useful task that selects specific fields from the PHOT output files. The fields we are interested in are: ID, xcenter, ycenter, itime (exposure time), otime (observation time), xairmass, ifilter, mag, and merr. In some cases there were errors in obtaining magnitudes for certain apertures; for instance, for a star toward the edge of an image, a given aperture may have extended off the edge. Such errors are recorded in the PHOT output, and we can discount them by specifying the expr parameter in TXDUMP: a record is kept only if expr evaluates to true, so only stars whose aperture photometry proceeded correctly survive. We used 'PIER[5]=0' as our expr boolean expression. PIER is 0 if the photometry proceeded 'normally'; otherwise a comment describing the error is printed. The '[5]' refers to the fifth aperture. So, for a star's photometry to be kept, we required that the fifth (and, in our case, largest) aperture produced an instrumental magnitude rather than encountering an error, the reasoning being that if the largest aperture succeeds, the smaller ones ought to as well. The parameters for TXDUMP are shown below.

                                            I R A F  
                             Image Reduction and Analysis Facility
PACKAGE = daophot
   TASK = txdump

textfile=                       Input apphot/daophot text database(s)
fields  = id,xcenter,ycenter,itime,otime,xairmass,ifilter,mag,merr  Fields to be extracted
expr    =            PIER[5]=0  Boolean expression for record selection
(headers=                   no) Print the field headers ?
(paramet=                  yes) Print the parameters if headers is yes ?
(mode   =                   ql) Mode of task


Note that the fields and expr parameters do not have enclosing parentheses, which means IRAF will prompt the user for these two parameters for every input file. Instead of repeatedly hitting the return key, a more elegant approach is to adjust these parameters so that IRAF does not prompt every time, effectively eliminating the interactive step. To do this, go to the uparm directory and open the TXDUMP parameter file - which has the not-so-obvious name dattxdump.par - with an editor. The file should look something like the listing below.

textfiles,s,a,"",,,"Input apphot/daophot text database(s)"
fields,s,a,"id,xcenter,ycenter,itime,otime,xairmass,ifilter,mag,merr",,,"Fields to be extracted"
expr,s,a,"PIER[5]=0",,,"Boolean expression for record selection"
headers,b,h,no,,,"Print the field headers ?"
parameters,b,h,yes,,,"Print the parameters if headers is yes ?"
mode,s,h,"ql",,,"Mode of task"


Following each parameter is a series of letters: 's' means the parameter is a string, 'a' means the user will be asked (prompted) for it, 'b' means it is a boolean, and 'h' means it is hidden, so the user will not be prompted. By changing 's,a' to 's,h' for the fields and expr parameters, IRAF treats each as a hidden string and no longer prompts for it. The modified dattxdump.par file should look like the listing below.

textfiles,s,a,,,,"Input apphot/daophot text database(s)"
fields,s,h,"id,xcenter,ycenter,itime,otime,xairmass,ifilter,mag,merr",,,"Fields to be extracted"
expr,s,h,"PIER[5]=0",,,"Boolean expression for record selection"
headers,b,h,no,,,"Print the field headers ?"
parameters,b,h,yes,,,"Print the parameters if headers is yes ?"
mode,s,h,"ql",,,"Mode of task"


Instead of manually executing TXDUMP on each and every *.mag.1 file, I use a Perl script called dan_mk_txdump.pl - slightly modified from the original written by Dr. Paul Harding - to create a txdump.cl file. When executed in IRAF (cl < txdump.cl), it takes the standards list as input (coo_stds.list), selects the aforementioned fields (the TXDUMP.fields parameter) from each *.mag.1 file, and puts them into a file with the same image name but the extension .txd. A portion of the txdump.cl file is shown below.

txdump t421679.fit4.mag.1 > t421679.fit4.txd
txdump t421686.fit4.mag.1 > t421686.fit4.txd
txdump t421687.fit4.mag.1 > t421687.fit4.txd
txdump t421691.fit4.mag.1 > t421691.fit4.txd
txdump t421692.fit4.mag.1 > t421692.fit4.txd
txdump t421711.fit4.mag.1 > t421711.fit4.txd
txdump t421715.fit4.mag.1 > t421715.fit4.txd
txdump t421716.fit4.mag.1 > t421716.fit4.txd
txdump t421858.fit4.mag.1 > t421858.fit4.txd
txdump t421862.fit4.mag.1 > t421862.fit4.txd
txdump t421863.fit4.mag.1 > t421863.fit4.txd
txdump t421867.fit4.mag.1 > t421867.fit4.txd
txdump t421870.fit4.mag.1 > t421870.fit4.txd
txdump t421956.fit4.mag.1 > t421956.fit4.txd
txdump t421959.fit4.mag.1 > t421959.fit4.txd
txdump t421993.fit4.mag.1 > t421993.fit4.txd
txdump t422008.fit4.mag.1 > t422008.fit4.txd
txdump t422052.fit4.mag.1 > t422052.fit4.txd
txdump t422055.fit4.mag.1 > t422055.fit4.txd


Running the txdump.cl file thus selects the fields specified in the fields parameter from every *.mag.1 file and writes only those fields to the *.txd files.

Although running txdump.cl extracts the photometric information we are interested in, the .txd files still need to be converted to an IRAF-friendly format for later use. This is done with a Perl script called dan_format_txdump.pl, a slightly modified version of the original written by Dr. Paul Harding, which converts a txdump file (.txd extension) to a fixed format. The input to this script is the usual standards file (coo_stds.list). The script takes the .txd files produced by txdump.cl and formats them, giving the new (formatted) files the .txd extension and renaming the old (unformatted) files with a .txdold extension in place of the previous .txd.
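The formatting step can be sketched as follows. This is a hypothetical re-implementation: the column widths, and the exact behavior of dan_format_txdump.pl, are assumptions (in the real workflow the script also renames the unformatted file to .txdold).

```python
# Convert one whitespace-separated txdump record into fixed-width columns.
# The nine fields follow the TXDUMP.fields order used above; the widths
# themselves are invented for illustration.
FIELDS = 9  # id, xcenter, ycenter, itime, otime, xairmass, ifilter, mag, merr

def fixed_format(line):
    cols = line.split()
    assert len(cols) == FIELDS, "unexpected number of columns"
    return "%5s %9s %9s %8s %11s %7s %9s %8s %7s" % tuple(cols)

rec = "12 1024.531 2048.117 30.0 4.6775387 1.214 V 15.321 0.012"
print(fixed_format(rec))
```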

Using TFINDER on Stetson Standards

The TFINDER task can be accessed through FINDER. It lets the user overlay a catalogue of coordinates on an input image and, interactively, derive a plate solution for each CCD: a user-specified nth-order polynomial fit that transforms the overlaid coordinates to align with the stars in the image. In other words, if you downloaded a set of catalogue positions (RA and DEC) centered on the RA and DEC of one of your images and overlaid them, chances are there would be noticeable offsets between the catalogue positions and the stars in the image. This is primarily because the different optics in different telescopes project a star at a given RA and DEC onto the detector differently. TFINDER corrects for these offsets by fitting a polynomial known as the plate solution. Armed with the plate solution, given the x and y image coordinates of any interesting star in your image, you can obtain its RA and DEC with the CCTRAN task. This is especially useful when transforming from x and y image coordinates to the celestial coordinates used for observation, though TFINDER serves many other purposes as well.
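As an illustration of what a plate solution is (not the actual TFINDER algorithm, which works in tangent-plane coordinates and supports higher orders; all coordinates below are invented), a first-order solution is just a linear least-squares fit from pixel coordinates to sky coordinates:

```python
import numpy as np

# First-order plate solution: fit xi = a + b*x + c*y (and similarly eta)
# by least squares from matched star pairs. The "true" coefficients and
# star positions are made up; b and f correspond to a 0.33"/pixel scale.
x = np.array([100.0, 900.0, 150.0, 800.0, 500.0])
y = np.array([120.0, 200.0, 850.0, 900.0, 500.0])
true = dict(a=0.5, b=0.33 / 3600.0, c=1e-6, d=-0.2, e=2e-6, f=0.33 / 3600.0)
xi = true["a"] + true["b"] * x + true["c"] * y    # "known" sky coordinates
eta = true["d"] + true["e"] * x + true["f"] * y

A = np.column_stack([np.ones_like(x), x, y])      # design matrix [1, x, y]
coef_xi, *_ = np.linalg.lstsq(A, xi, rcond=None)
coef_eta, *_ = np.linalg.lstsq(A, eta, rcond=None)

# With the solution in hand, any pixel position maps to sky coordinates:
px, py = 321.0, 654.0
print(coef_xi @ [1.0, px, py], coef_eta @ [1.0, px, py])
```

This is the same role CCTRAN plays once TFINDER has produced the fit.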

I have used the TFINDER task in conjunction with Stetson Photometric Standard data (downloaded from http://cadcwww.hia.nrc.ca/cadcbin/wdb/astrocat/stetson/query/) for each of the four CCDs, for one image per target standard field. In other words, I have chosen a single master frame in the V-band for each target standard field. After examining numerous frames, I settled on the master images listed below based on how 'clean' each image looked in comparison to other frames, and on how many standard stars are photometrically accessible; that is, in the master frames we want every standard to be correctly photometered, avoiding potentially bad pixels or columns, and these frames are the best cases of that scenario for each field. Furthermore, for each standard field I made a histogram of how many images fell in each 0.01° RA bin over the range of RAs for that field. I was able to neglect the declination because its spread was only ~0.04° at most, which translates to ~400 pixels between images in the extreme case. (These numbers become more important later, when discussing the coordinate offsets to the master frames.) The master frames serve as master coordinate references for each target standard field, so we only have to run TFINDER on the master images; we then run a Perl script which offsets the standards for a given field onto a common xy coordinate system, namely that of the corresponding master image. I have chosen the following images as master frames for their respective standard fields.

Master Images (V-band)

t421686.fit ... SA92
t422254.fit ... SA110
t422681.fit ... SA107
t422503.fit ... SA114
t422225.fit ... PG02
t422905.fit ... SA98


After specifying many of the usual parameters, the input to TFINDER is an image (*.fits). It is important to note that TFINDER will not accept a mosaic image - which is in Multi-Extension FITS (MEF) format - as input. Furthermore, even specifying the chip number of a MEF file does not suffice for TFINDER. In order to find plate solutions for mosaic data, I have found the simplest and most straightforward approach is to split the MEF files into separate *.fits files, one for each chip. This is done by loading the IRAF package MSCRED and running the MSCSPLIT task. MSCSPLIT takes as input a MEF file (or a list of them) - commonly named *.fit - as well as a root name without any fit or fits extension. MSCSPLIT will then split the MEF file into *.fits files named rootname_?.fits, where rootname was provided by the user and ? refers to the chip number. The file rootname_0.fits is a header file that can be used to reconstruct the original MEF file from the *.fits files if so desired. There is also an option to delete the original MEF file after the split, or to keep a copy of it. It is recommended that the user not delete the original MEF file after running MSCSPLIT, because we only want to acquire plate solutions at this juncture and we still want to work with the original mosaic data after TFINDER. The parameters for MSCSPLIT are shown below for an example MEF file.

                                 I R A F  
                   Image Reduction and Analysis Facility
PACKAGE = mscred
   TASK = mscsplit

input   =          t422905.fit  List of input MEF files
(output =              t422905) List of output root names
(mefext =                 .fit) MEF filename extension
(delete =                   no) Delete MEF file after splitting?
(verbose=                   no) Verbose?
(fd1    =                     )
(fd2    =                     )
(mode   =                   ql)


TFINDER will overlay data onto an image, but the overlaying data need to be in a specific format in order for IRAF to work correctly. Generally, the user will download RA and DEC data from an arbitrary catalogue and the data will be in two columns, RA and DEC. I have found it easiest to work in decimal degrees for both RA and DEC. Thus, given a table where column one is RA in degrees and column two is DEC in degrees, the user can run the IRAF task MKGSCTAB, also located under the FINDER package, to produce a table of data in the correct format for TFINDER. The parameters for MKGSCTAB are shown below.

                                     I R A F  
                       Image Reduction and Analysis Facility
PACKAGE = finder
   TASK = mkgsctab

input   =       Stet_SA114.dat  Input coordinate list (RA Dec [ID])
output  =       t422503_4.fits  Output GSC format table
(ra_unit=              degrees) Input RA units
(startid=                    1) Starting ID number (if not provided)
(region =                    0) Region number for new entries
(plate  =                 NONE) Plate name for new entries
(cdfile = finder$lib/cdfile.gsc) Column definition file
(list   =       Stet_SA114.dat)
(mode   =                   ql)


The output from MKGSCTAB will be a table named *.tab. It is very important that the name of the table be consistent with the name of the image input to TFINDER, or else TFINDER will not be able to find the table. For example, if my input image were named t422503_4.fits, the corresponding table would be named t422503_4.fits.tab. Also, make sure that the table is in the same directory as the input images. At this point, we must specify the parameters for TPLTSOL (located under the FINDER package) in order to set up the polynomial fitting parameters. The parameters for TPLTSOL are shown below.

                                  I R A F  
                       Image Reduction and Analysis Facility

PACKAGE = finder
   TASK = tpltsol

image   =       t422503_4.fits  Input image name
table   =   t422503_4.fits.tab  Input table name
database=    t422503_4.fits.db  Output database name
(results=                     ) Results summary file

(imupdat=                   no) Update the image WCS?
(tabupda=                   no) Update input table?
(refitca=                   no) Recompute XYs for uncentered sources?
(verbose=                  yes) Print verbose progress messages?

(dsshead=                   no) Read plate center from DSS header?
(ra_ref =         18:43:13.699) Plate center RA (hours)
(dec_ref=         +00:32:27.08) Plate center Dec (degrees)
(eq_ref =                INDEF) Plate center coordinate equinox

(inpixsy=              logical) Input pixel system
(outpixs=              logical) Output pixel system
(insyste=                j2000) Input celestial coordinate system

(project=                  tan) Sky projection geometry
(fitgeom=              general) Fitting geometry
(functio=           polynomial) Surface type

(xxorder=                    4) Order of xi fit in x
(xyorder=                    4) Order of xi fit in y
(xxterms=                 half) Include cross-terms in xi fit?

(yxorder=                    4) Order of eta fit in x
(yyorder=                    4) Order of eta fit in y
(yxterms=                 half) Include cross-terms in eta fit?

(reject =                   2.) Rejection limit in sigma units

(interac=                  yes) Fit the transformation interactively?
(graphic=             stdgraph) Default graphics device
(cursor =                     ) Graphics cursor
(catpars=                     ) Catalog description pset

(list   =                     )
(mode   =                   ql)


The database parameter specifies where the plate solution will be written for the input image. The default order of the xi and eta fits is two; I have changed them to four. Also note the reject parameter, which determines whether a given point is included in the fit based on how many standard deviations (sigma) it lies from the fit. The parameters for TFINDER are shown below.

                                  I R A F  
                       Image Reduction and Analysis Facility

PACKAGE = finder
   TASK = tfinder

image   =       t422503_4.fits  Image name
(rootnam=                     ) Alternate root name for output files
(objects=                     ) List of program object X,Y coords

(scale  =                 0.33) Plate or image scale ("/pixel)
(north  =                 left) Direction of North in the field
(east   =               bottom) Direction of East in the field

(ra     =      22.694110833333) RA  of the reference point (hours)
(dec    =      1.2038833333333) Dec of the reference point (degrees)
(equinox=                2000.) Reference coordinate equinox

(xref   =                1024.) X coordinate of the reference point
(yref   =                2048.) Y coordinate of the reference point
(date_ob=           2004-09-05) Date of the observation (YYYY-MM-DD)

(update =                   no) Update image header WCS following fit?
(interac=                  yes) Enter interactive image cursor loop?
(autocen=                   no) Center at the catalog coords when entering task?
(reselec=                  yes) Apply selectpars when entering task?
(autodis=                  yes) Redisplay after all-source keystroke command?
(verbose=                  yes) Print a running commentary?

(rotate =                 -0.7) Relative position angle (CCW positive)
(boxsize=                    9) Centering box full width
(edge   =                 200.) Edge buffer width (pixels)

(opaxis =                   no) Is the reference point on the optical axis?
(del_ra =                   0.) RA offset of the field center (degrees)
(del_dec=                   0.) Dec offset of the field center (degrees)

(list   =                     )
(mode   =                   ql)



The scale, ra, dec, equinox, and date_ob parameters can all be found in an image header by simply typing, for instance, imhead t422503_4.fits l+. The north and east parameters tell IRAF which direction on the chip points north and which points east. They change for chip 2 because of its different orientation. Additionally, xref and yref represent reference points for the mosaic, where (0,0) is defined to be the southeast corner of chip 4. The values of north and east, as well as xref and yref, are shown below for each chip.

         NORTH.....EAST.....X-REF.....Y-REF....
Chip 1   Left......Bottom...-1089.....2061.....
Chip 2   Bottom....Right.....4166......988.....
Chip 3   Left......Bottom....3124.....2018.....
Chip 4   Left......Bottom....1024.....2048.....


Next, the user must specify the number of rejection iterations by editing the parameters of CCMAP. The key parameters to focus on are maxiter and reject. Also note that the orders of the xi and eta fits are the same as in TPLTSOL. The parameters for CCMAP are shown below.

                                  I R A F  
                       Image Reduction and Analysis Facility

PACKAGE = imcoords
   TASK = ccmap

input   =                       The input coordinate files
database=                       The output database file
(solutio=                     ) The database plate solution names
(images =                     ) The input images
(results=                     ) The optional results summary files
(xcolumn=                    1) Column containing the x coordinate
(ycolumn=                    2) Column containing the y coordinate
(lngcolu=                    3) Column containing the ra / longitude
(latcolu=                    4) Column containing the dec / latitude 
(xmin   =                INDEF) Minimum logical x pixel value
(xmax   =                INDEF) Maximum logical x pixel value
(ymin   =                INDEF) Minimum logical y pixel value
(ymax   =                INDEF) Maximum logical y pixel value
(lngunit=                     ) Input ra / longitude units
(latunit=                     ) Input dec / latitude units
(insyste=                j2000) Input celestial coordinate system
(refpoin=               coords) Source of the reference point definition
(lngref =                INDEF) Reference point ra / longitude telescope coordinate
(latref =                INDEF) Reference point dec / latitude telescope coordinate
(refsyst=                INDEF) Reference point telescope coordinate system
(lngrefu=                     ) Reference point ra / longitude units
(latrefu=                     ) Reference point dec / latitude units
(project=                  tan) Sky projection geometry
(fitgeom=              general) Fitting geometry
(functio=           polynomial) Surface type
(xxorder=                    4) Order of xi fit in x
(xyorder=                    4) Order of xi fit in y
(xxterms=                 half) Xi fit cross terms type
(yxorder=                    4) Order of eta fit in x
(yyorder=                    4) Order of eta fit in y
(yxterms=                 half) Eta fit cross terms type
(maxiter=                    3) The maximum number of rejection iterations
(reject =                   3.) Rejection limit in sigma units
(update =                   no) Update the image world coordinate system ?
(pixsyst=              logical) Input pixel coordinate system
(verbose=                  yes) Print messages about progress of task ?
(interac=                  yes) Fit the transformation interactively ?
(graphic=             stdgraph) Default graphics device
(cursor =                     ) Graphics cursor
(mode   =                   ql)
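The maxiter and reject parameters above implement an iterative sigma-clipping rejection of fit residuals. A simplified sketch of the idea (not the actual CCMAP implementation):

```python
# Sketch of iterative sigma clipping, the idea behind CCMAP's maxiter
# and reject parameters. Residuals more than `reject` sigma from the
# mean are discarded, and the statistics are recomputed, up to
# `maxiter` times or until nothing more is rejected.
import statistics

def sigma_clip(residuals, reject=2.0, maxiter=3):
    """Return the residuals that survive iterative sigma clipping."""
    kept = list(residuals)
    for _ in range(maxiter):
        mean = statistics.fmean(kept)
        sigma = statistics.pstdev(kept)
        if sigma == 0.0:
            break
        survivors = [r for r in kept if abs(r - mean) <= reject * sigma]
        if len(survivors) == len(kept):   # nothing rejected: converged
            break
        kept = survivors
    return kept

data = [0.1, -0.2, 0.05, 0.0, -0.1, 5.0]   # one obvious outlier
print(sigma_clip(data))                    # the outlier is dropped
```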



The next step is to make sure that a SAOImage (ds9) window is open and to display the input image. After creating the table for TFINDER using MKGSCTAB, and editing the parameters in TFINDER, type :g in the epars of TFINDER; the data that were input to MKGSCTAB will be overlaid on the image and the user will enter an interactive mode. To obtain a plate solution, place the (blinking) cursor over an overlaid point (red circle) and type l. Then move the cursor over the object within the input image that matches the overlaid point and press l again. A blue circle should now be centered over the object. Repeat this process for points all over the image, paying particular attention to the corners. The reason is that the focal plane is almost always curved when projected onto a flat CCD. Thus, at the center of the CCD - provided the center of the focal plane coincides with the center of the CCD - the projection from the focal plane onto the CCD should be much better than at the edges. In other words, the linear separation between two objects at the corner of a frame is not the same as their spherical trigonometric separation on the sky, and the farther from the center of the CCD one looks, the more pronounced this effect becomes. After matching as many overlaid points to image objects as desired (usually 10-15 is a minimum for the precision I required), type e in the SAOImage window and a plot of a fit, based on the parameters from TPLTSOL and CCMAP, should appear. By typing x, y, r and s, the user can view the fit in each direction. It is important to pay close attention to the root-mean-square deviation in order to see how closely the fit matches the offsets between overlaid points and image objects. We deleted points that fell outside of ±1 sigma when looking at the residuals. By typing f, the fit will be applied to all points in the image, and the user can then examine how well the points overlay the input objects according to the fit.
The user can manually match more points if needed, or type a, move the cursor to a star where the overlaid point is centered well on the image object, and then type j, which will move all points to center on their current coordinates. Once the fit works well enough for the user's purposes, type q in the SAOImage window to quit, and then answer 'yes' to the IRAF prompt asking whether or not to override the previous coordinates with those obtained via the fit.

Once a plate solution has been determined, the user can transform any RA and DEC into x and y image coordinates that will lie close to the actual object in the image. This is done using the CCTRAN task, which can be accessed through FINDER or through the IMCOORDS package. The parameters for CCTRAN are shown below.

                                          I R A F  
                            Image Reduction and Analysis Facility

PACKAGE = imcoords
   TASK = cctran

input   =      Stet_SA98.radec  The input coordinate files
output  = Stet_SA98_t422905_4.xy  The output coordinate files
database=    t422905_4.fits.db  The input database file
solution=       t422905_4.fits  The input plate solutions
(geometr=            geometric) Transformation type (linear,geometric)
(forward=                   no) Transform x / y to ra / dec (yes) or vice versa (no) ?
(xref   =                INDEF) The X reference pixel
(yref   =                INDEF) The Y reference pixel
(xmag   =                INDEF) The X axis scale in arcsec per pixel
(ymag   =                INDEF) The Y axis scale in arcsec per pixel
(xrotati=                INDEF) The X axis rotation angle in degrees
(yrotati=                INDEF) The Y axis rotation angle in degrees
(lngref =                INDEF) The ra / longitude reference coordinate in lngunits      
(latref =                INDEF) The dec / latitude reference coordinate in latunits
(lngunit=              degrees) The input / output ra / longitude reference coordinate units
(latunit=              degrees) The input / output dec / latitude reference coordinate units
(project=                  tan) The sky projection geometry
(xcolumn=                    1) Input column containing the x / ra / longitude coordinate
(ycolumn=                    2) Input column containing the y / dec / latitude coordinate
(lngform=                     ) Output format of the ra / longitude / x coordinate
(latform=                     ) Output format of the dec / latitude / y coordinate
(min_sig=                    7) Minimum precision of the output coordinates
(mode   =                   ql)


The input to CCTRAN is a list of RA and DEC, and CCTRAN will convert those points to x and y using the database (.db file) parameter for the image given in the solution parameter. Note that the parameters lngunit and latunit must be specified to match the input data; I have chosen degrees for both, as I think it is simplest. Also, the xcolumn and ycolumn parameters must be specified so CCTRAN knows which column is RA and which is DEC. The forward parameter controls whether to transform from x and y to RA and DEC, in which case the value is 'yes,' or from RA and DEC to x and y, in which case the value is 'no.'
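For reference, the 'tan' projection used by CCTRAN and CCMAP is the standard gnomonic projection. A stripped-down sketch of the RA/DEC-to-standard-coordinate step, with the polynomial plate-solution terms omitted:

```python
# Sketch of the gnomonic ('tan') sky projection that CCTRAN and CCMAP
# use. Only the projection itself is shown; the fitted polynomial
# terms of the plate solution are omitted.
import math

def radec_to_tan(ra_deg, dec_deg, ra0_deg, dec0_deg):
    """Project (RA, Dec) about a tangent point (ra0, dec0), returning
    standard coordinates (xi, eta) in degrees."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    ra0, dec0 = math.radians(ra0_deg), math.radians(dec0_deg)
    cos_c = (math.sin(dec0) * math.sin(dec)
             + math.cos(dec0) * math.cos(dec) * math.cos(ra - ra0))
    xi = math.cos(dec) * math.sin(ra - ra0) / cos_c
    eta = (math.cos(dec0) * math.sin(dec)
           - math.sin(dec0) * math.cos(dec) * math.cos(ra - ra0)) / cos_c
    return math.degrees(xi), math.degrees(eta)

# A star 0.1 deg east of a tangent point on the equator lands at
# xi ~ 0.1 deg, eta = 0:
xi, eta = radec_to_tan(10.1, 0.0, 10.0, 0.0)
```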

Standard Star Coordinate Offsets

To be clear, the primary objective of running TFINDER on each master frame is ultimately to obtain x and y image coordinates for the standard stars on the master frame. By downloading the RAs and DECs of the standard stars from the Stetson website, along with each standard star name, I was able to run CCTRAN on the list of RAs and DECs and produce a file named, for example, Stet_SA92_t421686_4.xy, which contains each standard star name and each star's x and y coordinates in the master image named in the file name (t421686) for, in this example, the standard field SA92. The purpose of this list is to cross-reference the standard star name with its x and y coordinates, so that when we run the offset program, each set of x and y coordinates will have an associated standard star name (for example, L92-S268) and not just an ID number.

Within each individual image of a given standard field, the standard stars will have unique x and y image coordinates. By offsetting the coordinates of the images to the coordinates of a single master image for a given standard field, we can match the stars and identify each standard with the correct standard star name. This is done by running a Perl script called dan_offset_stds2.pl, which in turn runs a Fortran program called offset_stds.f. The Perl script is set up for the INT September 2004 run in its current form and has many names hard-wired into it. The user is prompted to choose the standard field for which to match coordinates, and upon selection of the field, the script automatically chooses the master frame that corresponds to the selected standard field. Then a copy of the master frame's .txd file is made; the copy has a .txm extension in place of the original .txd extension. This serves as the master coordinate reference as well as a safeguard against anything happening to the original .txd file. The script assumes that a file exists for the selected field with a name of the form Stet_SA92_t421686_fit4.xy, which contains the star name in the first column (20 characters wide) and then the x and y coordinates of the star in the master frame in the second and third columns (30 characters total, free format). This list is used to cross-reference the star name with its x and y master coordinates. The Fortran program operates by comparing star patterns between the master image and the secondary image using a spiral-search method. The program has a tolerance of 400 pixels, which means that if the offset between two images (that is, the offset between standard star X in the master image and standard star X in the secondary image) is greater than 400 pixels, the output will have large errors or nothing will be written to the output files.
Due to the tolerance limitations, it is important that the master file be within an offset of 400 pixels of the secondary frames. I checked the RA and DEC of my master frames against the RA and DEC of the secondary frames for each standard field by plotting each image's RA and DEC. I also created a histogram to show the distribution of images per 0.01° RA bin. If there are images within a standard field that do in fact lie outside the 400 pixel offset tolerance, the user can guess the offset by changing the starting offset in the Perl script, or an additional master frame can be chosen. The output files are .txo files that contain the standard star name, transformed x and y coordinates, and the other data located in .txd files. An example run of the offset program is shown below. Note that the errors in the example are much too large, but it serves as an example nonetheless.

[djo3@fasold INTsept04]$ ./offset_stds
 Enter file name of: (std star id) <==> (master coord) xref
Stet_SA92_t421686_4.xy
 Enter file of: (star coords on the master frame)
t421686.fit4.txm
 Enter secondary file name
ft421685.fit4.txd
 Enter outputfile name
ft421685.fit4.txo
 enter initial guess p-s
0,0
  ---  Read std file. t421686.fit4.txm
 , # Stds =          200   ---
  ---  Read secondary file. ft421685.fit4.txd
   # Stars =          200
 Stet_SA92_t421686_4.xy
 t421686.fit4.txm
 ft421685.fit4.txd
 ft421685.fit4.txo
 ************************************************
    offset,    scale factor , rotation in x,y
  -2.498117       1.000101      2.6654129E-04
   1.515641      0.9996654     -5.7293499E-05
 ************************************************
 sx,sy,sigma,sfac,nrej,n
      0.534      0.402      0.473      3.000          4         72
 ************************************************
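The pattern-matching idea behind offset_stds.f can be sketched as a voting search over candidate shifts. This simplified version checks every cross-pair rather than spiraling outward from an initial guess, but it illustrates the same principle:

```python
# Sketch of the star-pattern offset search, simplified from the spiral
# search in offset_stds.f: every master-secondary pair within the
# tolerance votes for its (binned) coordinate difference, and the most
# common vote is taken as the frame-to-frame offset.
from collections import Counter

def find_offset(master, secondary, tol=400.0, bin_size=2.0):
    """Estimate the (dx, dy) shift from master to secondary coords."""
    votes = Counter()
    for mx, my in master:
        for sx, sy in secondary:
            dx, dy = sx - mx, sy - my
            if abs(dx) <= tol and abs(dy) <= tol:
                votes[(round(dx / bin_size), round(dy / bin_size))] += 1
    (bx, by), _count = votes.most_common(1)[0]
    return bx * bin_size, by * bin_size

# Three stars shifted by a common (37, -12) pixel offset:
master = [(100.0, 100.0), (250.0, 400.0), (900.0, 650.0)]
secondary = [(x + 37.0, y - 12.0) for x, y in master]
off = find_offset(master, secondary)   # recovers the shift to one bin
```

The real program goes further: after finding the bulk offset it fits per-star residuals for a scale factor and rotation, as the example run above shows.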


The Standard Field Instrumental Magnitude List

After running the offset program, there should be a .txo file for every corresponding .txd file for the images within a given standard field. The first line of a .txo file gives information about the terms in the offset equations. The rest of the data is as follows: standard star name (column 1), revised/offset x and y coordinates (columns 2 and 3), exposure time (column 4), NEWEPOCH (column 5), airmass (column 6), filter (column 7), magnitudes (columns 8-12), and magnitude errors (columns 13-17). In order to proceed to the photometric calibration, we need to select one of the five magnitudes (one aperture) and its corresponding error to work with. Also, we need to give each .txo file a night index so that we know on which night a given observation was made. This is done using the Perl script dan_mkall_noap_sept04.pl. This script is not set up for an aperture correction, though it could be modified relatively simply. The script selects the magnitude column corresponding to the eight arcsecond aperture -- and the column that has the error associated with those magnitudes -- by assigning every column a number and selecting the columns for the eight arcsecond aperture and its error. The night index is handled by creating an array with one entry per night, all initialized to zero. We then define a time break for each night at 16:00 hours. Using the IRAF task ASTTIMES, located under ASTUTIL, I can then convert 16:00 hours on each night to its associated decimal-years time. The use of ASTTIMES is shown below.

as> asttimes.observatory="lapalma"
as> asttimes.year=2004
as> asttimes.month=9
as> asttimes.time=16
as> for (i=4; i<15; i+=1) {
>>> asttimes (day=i, header=no)
>>> }
2004   9  4 SAT 16:00:00.0 16:00:00.0 2004.67671 2453253.1667 13:44:55.3
2004   9  5 SUN 16:00:00.0 16:00:00.0 2004.67944 2453254.1667 13:48:51.9
2004   9  6 MON 16:00:00.0 16:00:00.0 2004.68218 2453255.1667 13:52:48.5
2004   9  7 TUE 16:00:00.0 16:00:00.0 2004.68492 2453256.1667 13:56:45.0
2004   9  8 WED 16:00:00.0 16:00:00.0 2004.68766 2453257.1667 14:00:41.6
2004   9  9 THU 16:00:00.0 16:00:00.0 2004.69039 2453258.1667 14:04:38.1
2004   9 10 FRI 16:00:00.0 16:00:00.0 2004.69313 2453259.1667 14:08:34.7
2004   9 11 SAT 16:00:00.0 16:00:00.0 2004.69587 2453260.1667 14:12:31.2
2004   9 12 SUN 16:00:00.0 16:00:00.0 2004.69861 2453261.1667 14:16:27.8
2004   9 13 MON 16:00:00.0 16:00:00.0 2004.70135 2453262.1667 14:20:24.3
2004   9 14 TUE 16:00:00.0 16:00:00.0 2004.70408 2453263.1667 14:24:20.9


The decimal year column is the one that begins with 2004. I then subtract 2000 from each entry, just as I did when creating the NEWEPOCH header for the images. For a given image, the NEWEPOCH value is compared to the time breaks for each night. If the NEWEPOCH value is larger than the time break for the first night, it is compared to the second night; if it is larger than that for the second night, it is compared to the third night, and so forth. Once the NEWEPOCH value is less than a time break, we know the observation was made on the preceding night. In that case, we write the number '1' for the night the observation was made, and '0' for all the other nights. This is very useful when solving the transformation equations later on. The user simply specifies the standard field.list and enters the name of the output file to be written to. The script goes through every entry in standard field.list, opens each and adjusts each line within, finally writing every line from every image in standard field.list into the output file. Thus, the output file should have as many entries for each star as there were images in the standard field.list. An example output line is shown below.

L92-S253               394.42   814.40    5.10    4.6865055   1.58   i ft422057.fit4   0.000  16.171  0.019 0 0 0 1 0 0 0 0 0 0 0


The star name comes first, then the offset x and y coordinates, the exposure time, NEWEPOCH, airmass, filter, file name, aperture correction (zero because we are not dealing with it here), the instrumental magnitude and error for the eight arcsecond aperture, and then the night index, which shows this observation was taken on night four.
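The night-index logic can be sketched and checked against the example line above; the time breaks are the ASTTIMES decimal years minus 2000:

```python
# Sketch of the night-index logic in dan_mkall_noap_sept04.pl, checked
# against the example output line above. The time breaks are the
# ASTTIMES decimal years for 16:00 on Sep 4-14, minus 2000.
BREAKS = [4.67671, 4.67944, 4.68218, 4.68492, 4.68766, 4.69039,
          4.69313, 4.69587, 4.69861, 4.70135, 4.70408]

def night_flags(newepoch, breaks=BREAKS):
    """Return one 0/1 flag per night: the observation falls on the
    night preceding the first time break that exceeds its NEWEPOCH.
    Assumes newepoch is not earlier than the first break."""
    flags = [0] * len(breaks)
    for i, t in enumerate(breaks):
        if newepoch < t:
            flags[i - 1] = 1
            return flags
    flags[-1] = 1        # later than every break: the final night
    return flags

# NEWEPOCH from the example line above (ft422057):
flags = night_flags(4.6865055)
print(flags.index(1) + 1)   # -> 4, i.e. night four
```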

Solving the Transformation Equations

The output file from running dan_mkall_noap_sept04.pl will be used as the input to the IRAF task FITPARAMS, located under the PHOTCAL package. Again, this list contains instrumental magnitudes and errors for the user-selected aperture, along with all of the other data listed above. The FITPARAMS task is what is used to solve the transformation equations. The parameters for FITPARAMS are shown below and then explained.

                                          I R A F  
                            Image Reduction and Analysis Facility

PACKAGE = photcal
   TASK = fitparams

observat=            vtest.obs  List of observations files
catalogs=     Stetson_all.phot  List of standard catalog files
config  =         Stet_V_I.con  Configuration file
paramete=         Stet_V_I.out  Output parameters file
(weighti=            equations) Weighting type (uniform,photometric,equations)
(addscat=                  yes) Add a scatter term to the weights ?
(toleran=   3.0000000000000E-5) Fit convergence tolerance
(maxiter=                   15) Maximum number of fit iterations
(nreject=                    3) Number of rejection iterations
(low_rej=                   3.) Low sigma rejection factor
(high_re=                   3.) High sigma rejection factor
(grow   =                   0.) Rejection growing radius
(interac=                  yes) Solve fit interactively ?
(logfile=               STDOUT) Output log file
(log_unm=                  yes) Log any unmatched stars ?
(log_fit=                   no) Log the fit parameters and statistics ?
(log_res=                   no) Log the results ?
(catdir =            )_.catdir) The standard star catalog directory
(graphic=             stdgraph) Output graphics device
(cursor =                     ) Graphics cursor input
(mode   =                   ql)


The observat parameter asks for the observations file (the output from dan_mkall_noap_sept04.pl) that contains the instrumental magnitudes and errors. The catalogs parameter asks for the file that contains the accepted magnitudes and errors for the standard stars. I called this file Stetson_all.phot; it contains accepted magnitudes and errors for the B, V, R, and I bands. I obtained these data by downloading the .phot files for each standard field from the Stetson website and compiling them all into one file (Stetson_all.phot). The paramete parameter names the file to which information about the transformation-equation fitting will be written. The config parameter is a configuration file that describes which columns of the catalogs file hold which data, and which columns of the observat file hold which data. The transformation equations are also defined here. This file is set up for the Stetson Standard Star list and is shown below in its entirety.

#This config file is for the Stetson Standard Star list
catalog

#Tells which column has which data
B 2
V 6
R 10
I 14
error(B) 3
error(V) 7
error(R) 11
error(I) 15

#Format from the output of the dan_mkall_noap_sept04.pl script
#id 1, x 2, y 3, et 4, ut 5, X 6, filt 7, file 8, apcor 9 m 10  me 11 ...
observations

xm 6
ut 5
et 4
fi 8
filt 7
apc 9
m 10 
error(m) 11
em 11
n1 12      
n2 13
n3 14
n4 15       
n5 16
n6 17
n7 18      
n8 19
n9 20
n10 21
n11 22

#Standard system transformation

transformation

#a4..a9 not used

fit a2 = 0, a3 = .12, 
a11=0.1, a12=0.1, a13=0.1, 
a14=0.1, a15=0.1, a16=0.1,
a17=0.1, a18=0.1, a19=0.1,
a20=0.1, a21=0.1 

#const  a11 = 0  
const  a12 = 0
const  a13 = 0
const  a14 = 0   
const  a15 = 0
const  a16 = 0
#const  a17 = 0   
#const  a18 = 0
const  a19 = 0
const  a20 = 0
#const  a21 = 0

#average extinction coefficients for la palma
#U = 0.46, B = 0.22, V = 0.12, R = 0.08, I = 0.04, Z = 0.05,
#g' = 0.19, r' = 0.09, i' = 0.05


const a3 = .12      #V extinction default la palma value
#const a2 = .01


#correct the counts for the shutter delay of 0.05secs
#set p = (25 - mi) / 2.5
#set cts = et * (10**p)
#set m = 25 - 2.5 * log10(cts/(et + 0.05))


set VI = V - I
set VR = V - R
set BV = B - V

set mc  = m - 2.5*log10(et)

EQ1  : m =  
V + a2*(VI) + a3*(xm) + 
n1*a11 + n2*a12 + n3*a13 +
n4*a14 + n5*a15 + n6*a16 +
n7*a17 + n8*a18 + n9*a19 +
n10*a20 + n11*a21

#EQ1  : 25 - 2.5*log10(cts) = 
#- 2.5*log10(et + c) + V + a2*(VI) + a3*(xm) + 
#n1*a11 + n2*a12 + n3*a13 +
#n4*a14 + n5*a15 + n6*a16 +
#n7*a17 + n8*a18 + n9*a19

weight(EQ1) = 1.0 / (em + .001)**2

#deriv (EQ1, a1) = 0.
#deriv (EQ1, a3) = xm
#deriv (EQ1, a2) = VI
#deriv (EQ1, a11) = n1
#deriv (EQ1, a12) = n2
#deriv (EQ1, a13) = n3           


plot (EQ1) = m - (V + a2*VI + a3*(xm) + 
n1*a11 + n2*a12 + n3*a13 +
n4*a14 + n5*a15 + n6*a16 +
n7*a17 + n8*a18 + n9*a19 +
n10*a20 + n11*a21            ) , V

plot (EQ1) = mc , m - (V + a2*VI + a3*(xm) + 
n1*a11 + n2*a12 + n3*a13 +
n4*a14 + n5*a15 + n6*a16 +
n7*a17 + n8*a18 + n9*a19 +
n10*a20 + n11*a21)

plot (EQ1) = VI , m - (V + a2*VI + a3*(xm) + 
n1*a11 + n2*a12 + n3*a13 +
n4*a14 + n5*a15 + n6*a16 +
n7*a17 + n8*a18 + n9*a19 +
n10*a20 + n11*a21  )


To check that the configuration file is acceptable by IRAF standards, there is a small but important task called CHKCONFIG, also located in PHOTCAL. The input is the configuration file, and the task checks whether the file is valid input for FITPARAMS. The parameters for CHKCONFIG are shown below.

                                          I R A F  
                            Image Reduction and Analysis Facility

PACKAGE = photcal
   TASK = chkconfig

config  =         Stet_V_I.con  Input configuration file
(verbose=                   no) Verbose output ?
(mode   =                   ql)


When the configuration file passes the check and the catalog with the accepted magnitudes has been created, one can type :g from within the epars of FITPARAMS to enter the interactive fitting protocol.

Created by Dan Oravetz
Last modified August 17, 2005