■ Quick Reduction of HDS data on IRAF

This page explains the quick reduction of Subaru HDS (High Dispersion Spectrograph) data on the sumda system used for Subaru summit/remote observations.
Basically, this page describes the quick reduction scheme on the sumda system in the summit or remote observing room. But, if you want, you can build the same environment on your own IRAF system. (Please download and install the CL scripts here.)
Under this reduction scheme, all data are reduced to one-dimensional spectra. If you want to keep the spatial information, you will have to find another way to reduce your data.

For details on data reduction with IRAF, please see these manuals (PDF : Japanese/English)
Table of Contents
How to login to sumda/How to start IRAF
Overview of the quick reduction on IRAF
OverScan by hdsql
[Preparation 1]Making Mask Frame from BIAS
[Preparation 2]Making Template for Order Trace
[Preparation 3]Making Normalized Flat
[Preparation 4]Wavelength Identification of a Th-Ar Frame
hdsql : Quick Look of Object Frames
Batch mode
Tips, Trouble Shooting

Download of CL scripts

Reduction for Image Slicer data (Flat Fielding and Aperture Extraction for data w/fringes)

Making a combined spectrum (by Continuum fit)
Making a combined spectrum (by Flux calibration)




How to login to sumda/How to start IRAF
[1] Log in to sum01 from its console, using the user account for your observation (oXXXXX).
[2] Click the menu item "ANA menu" on your desktop.
The "ANA Menu" window will come up.
[3] Push the "HDS" button in the ANA Menu.
[4] An xgterm for IRAF and an "HDS LOG Editor" window will come up.
At the first login, the "mkiraf" command runs in the xgterm to create your IRAF environment. Please type "xgterm" when mkiraf asks for your terminal type.
[5] You can use the task "overscan" for overscan region correction and "hdsql" for HDS quick data reduction in this IRAF without any setup by yourself. The environment settings, stdimage=imt4096 and imtype="fits", have also been registered already.
[6] In IRAF terminal,
     cl> cd /work/oXXXXX
You should work under this directory.
Your data are archived in "/data/oXXXXX/" (read only).


Overview of quick data analysis on sumda IRAF
Reduction process using hdsql
▲[Reduction scheme using "hdsql"]
   The left side is the inside of the task "hdsql". Before "hdsql", you should do the preparations (1 - 4) shown on the right side.
   The IRAF tasks used in each process are indicated in the figure.
   2/13 Bias/Dark Subtraction and 7/13 Cross-Talk Subtraction should be skipped for object quick looks, unless you have special requirements.


After this, I will explain how to do a quick data look of HDS on sumda, using the task "hdsql" on IRAF.
The figure above shows these processes. If you do the full set of processes in this quick analysis, you get one-dimensional, flat-fielded, and wavelength-calibrated spectra. However, each process in "hdsql" can be skipped if it is not necessary.
Before using "hdsql", preparations 1 - 4 on the right side of the figure are required. Even if you skip flat-fielding and/or wavelength calibration, you should do preparation 2 at least. (I also recommend preparing a mask frame following preparation 1, so that object frames can be traced without being affected by bad columns in the CCDs.)
After these preparations, you run "hdsql" for each object frame.
HDS has two CCD chips in its camera, so one exposure produces two frame IDs. The red CCD has odd HDSA numbers (=CCD1/Left side CCD), and the blue one has even HDSA numbers (=CCD2/Right side CCD). These Red/Blue frames should be analyzed independently.


OverScan by "hdsql"
Each of the red/blue HDS CCDs has two readout points, and there is an overscan region at the center of the frame (please see the HDS users' manual). Instead of subtracting independent BIAS frames, the average count in this overscan region is subtracted from object frames. The task "overscan" does this process.
This task also converts ADU to electron numbers (× ∼1.7).
Raw HDS frames (before "overscan") contain a FITS extension table. So, in IRAF, you have to specify each raw frame like "HDSA000YYYYY.fits[0]", appending "[0]" to the file name.

At the beginning of the "hdsql" analysis, we start from this "overscan" process for raw frames. Before making a mask ([Preparation 1]) or a template for order tracing ([Preparation 2]), the raw frame for the template should be "overscan" processed. So, I will first explain how to run only "overscan" using "hdsql".


[1] Confirm the data directory in which the raw data frames (HDSA000YYYYY.fits) are archived.
Usually, it should be like /data/oXXXXX/ .
[2] In IRAF terminal,
     cl> eparam hdsql
and determine the parameters for "hdsql". The parameter "indirec" is the raw data directory confirmed in [1] plus the prefix of the raw data file names (usually "HDSA000"). The file number without this prefix should be given in the parameter "inid". Overscanned data are automatically created in the current working directory, so you don't have to copy any raw data to your working directory.
PACKAGE = echelle
   TASK = hdsql
    
inid    =                22405  Input frame ID
(indirec= /data/o05129/HDSA000) directory of Input data

(batch  =                   no) Batch Mode?
(inlist =                     ) Input file list for batch-mode
(overw  =                  yes) Force to overwrite existing images?

(oversca=                  yes) Overscan?
(biassub=                   no) BIAS / Dark Subtraction?
(maskbad=                   no) Mask Bad Pixels?
(linear =                   no) Linearity Correction?
(cosmicr=                   no) Cosmicray-event rejection?
(scatter=                   no) Scattered light subtraction?
(xtalk  =                   no) CCD Amp Cross-Talk Subtraction?
(flat   =                   no) Flat fielding?
(apall  =                   no) Extract spectra with apall?
(isecf  =                   no) Extract & Flat for IS? (override flat & apall)
(remask =                   no) Re-Mask wavlength calibrated spectrum?
(wavecal=                   no) Wavelength calibration?
(rvcorre=                   no) Heliocentric wavelength correction?
(splot  =                   no) Splot Spectrum?

                                ### Overscan ###
(os_save=                  yes) Save overscaned data?
......
[3] Execute "hdsql" with these parameters. An overscanned file "H22045o.fits" should be created.
Before OverScan (RAW)
▲HDS RAW Image (Red CCD)
      After OverScan
▲HDS Image after OverScan (Red CCD)
After the OverScan process, the BIAS level becomes almost the same (~0) on both sides of the CCD chip, and the overscan region in the middle of the raw image is removed.


[Preparation 1]
Making Mask from BIAS
In the HDS CCD chips (especially in the Red one), there are some strong bad columns. These can adversely affect some reduction processes (order tracing, flat fielding, etc.). To reduce this effect, a mask frame should be created at the beginning of your reduction.
This "mask" is made by extracting bad pixels, which have abnormal counts, from BIAS frames. It should be noted that this mask introduces untrue (interpolated) values in each masked object frame.
Furthermore, the Blue CCD has some "negative" (dark) bad columns. This mask cannot correct these dark bad columns. But they do not affect your order tracing or flat-fielding very much, so over-correction by a too-exact mask can be avoided here.


[Preparation 1] Making Mask from BIAS
Source : BIAS
Pretreatment by hdsql : OverScan only
After hdsql : Make an averaged BIAS with "imcombine". Make a mask from the BIAS with "mkbadmask".
Parameters in hdsql : Put the created mask frame into mb_refer in "### Masking Bad Pixels".

[1] Only "overscan" by "hdsql" for each BIAS frame.
You may use the batch mode of hdsql to apply this process to a series of flat frames. Make a text list file (Bias2x1R.lst etc.) for created HXXXXXo.fits . Make a median-ed BIAS frame by "imcombine".
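For example, the list file can be created with the IRAF "files" task (a sketch; the template "H*o.fits" is an assumption and may also match non-BIAS frames, so edit the list if necessary):
     ecl> files H*o.fits > Bias2x1R.lst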
     ecl> imcombine @Bias2x1R.lst Bias2x1R combine=median reject=sigclip

[2] Determine the lower / upper count levels for this BIAS. Then, create a mask using the task "mkbadmask" (original).
This "mkbadmask" internally uses "wacosm11.cl". So, you have to define that task in your login.cl etc.
PACKAGE = echelle
   TASK = mkbadmask

inimage =             Bias2x1R  Input BIAS image 
outimage=             Mask2x1R  Output MASK image 

(lower  =                 -100) Lower limit replacement window
(upper  =                  300) Upper limit replacement window

(clean  =                  yes) Clean up by wacosm11
(base   =                    1) Baseline for wacosm11
(mode   =                    q)
Here, you should set the lower and upper count limits for the BIAS. These can change with the CCD binning mode, but roughly the upper limit should be around 300 and the lower around -100 to detect the strongest bad columns in the Red chip.
Furthermore, this command can remove some cosmic-ray events (and possibly some isolated bad pixels) from your mask, using "wacosm11".
[3] The created mask frame is a monochrome frame with counts =1 (for bad pixels) or =0 (for good pixels). This mask can be used by the task "fixpix".
But the explanation of fixpix is skipped here (it is automatically called from hdsql).
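For reference, a minimal manual call would look like this (a sketch using the frame and mask names assumed above; fixpix interpolates over the masked pixels in place):
     ecl> fixpix H22405o Mask2x1R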
Mask image created from BIAS
▲Mask frame created from BIAS(Red CCD)
After OverScan
▲After OverScan (Red CCD)
      After Masked
▲After Masking(Red CCD)
The strongest bad column (right side) is masked.
On the other hand, the slightly weaker bad column on the left side is still present.
This column could be removed by a mask made with "upper=100".
But this left-side bad column should not affect your data reduction (order tracing, flat-fielding, etc.).
Therefore, it is not important to force its removal in this masking process.
[4] Put the created mask frame into the hdsql parameter mb_refer in "### Masking Bad Pixels".
[5] In sumda, masks for all binning modes and both Red/Blue chips have been prepared in
   /home/hds01/share/data/  
(named like Mask2x1R.fits etc.). So, you can skip the mask-creation process above, if you are reducing your data on sumda.


[Preparation 2]
Making a Template for Order Trace
Templates for order tracing can be made from a frame of a bright (standard) star or from flats.
If you are using a standard wavelength setting (cf. StdYb), you can use standard templates like "StdYbLshsl2x2.fits" in the "/home/hds01/share/data/" directory. The file name "StdYbLshsl2x2.fits" means Wavelength Setting : StdYb, the Left CCD (Red), 2x2 binning. In this case, you can skip the template-making process below, but you should also copy the corresponding aperture information file from the "/home/hds01/share/data/database/" directory into the "database/" directory of your working place, as sketched below.
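A minimal sketch of the copy (assuming the database file follows IRAF's "ap" + image-name naming convention):
     ecl> !cp /home/hds01/share/data/StdYbLshsl2x2.fits .
     ecl> !mkdir -p database
     ecl> !cp /home/hds01/share/data/database/apStdYbLshsl2x2 database/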


[Preparation 2] Making a Template for Order Trace
Source : a bright (standard) star or a flat frame (taken with a narrow slit length)
Pretreatment by hdsql : OverScan + Masking Bad Pixels (or OverScan only)
After hdsql : Trace the echelle apertures with "apall".
Parameters in hdsql : Put this aperture reference frame into sc_refer in "### Scattered Light Subtraction" & ap_refer in "### Aperture Extraction".

[1] Run "hdsql" with overscan=yes and masbad=yes for a frame, which you want to make a template.
Rename created HXXXXXom.fits to some meaning file name (Here, we use "ApYb2x1R.fits".).
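The rename can be done with the IRAF "imrename" task (a sketch; substitute your actual frame number):
     ecl> imrename HXXXXXom ApYb2x1R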
[2] Execute task "apall" for this overscanned file with following parameters.
If the aperture finding fails in the edge of the CCD, it can be solved by a small change of the parameter "line" (default "INDEF" means the center of the chip along the dispersion direction. For example, in the case of 2x1 binning, this default value is 2050. So, try 2200 or 1800 as the value of "line". Or, you can change it by typing ": line 2200" in the Aperture editing irafterm.).
In aperture extraction display (xgterm, the below left image), "m"-key is for manual detection of a order. "o"-key on the line which should be the 1st order →Aperture(1)="1" sort the order of aperture number. Then, "q"-key for the next fitting. For the following some questions, basically just say "yes" (hit return) in usual case.
In fitting display (the below right image), "s"→"s"-keys is for the elimination of affected region with bad column. "t"-key is for the clear of this regional selection. "f"-key is re-fit of the tracing.
PACKAGE = echelle
  TASK = apall

input   =             ApYb2x1R  List of input images
(output =                     ) List of output spectra
(apertur=                     ) Apertures
(format =              echelle) Extracted spectra format
(referen=                     ) List of aperture reference images
(profile=                     ) List of aperture profile images

(interac=                  yes) Run task interactively?
(find   =                  yes) Find apertures?
(recente=                  yes) Recenter apertures?
(resize =                  yes) Resize apertures?
(edit   =                  yes) Edit apertures?
(trace  =                  yes) Trace apertures?
(fittrac=                  yes) Fit the traced points interactively?
(extract=                  yes) Extract spectra?
(extras =                   no) Extract sky, sigma, etc.?
(review =                  yes) Review extractions?

(line   =                INDEF) Dispersion line
(nsum   =                   20) Number of dispersion lines to sum or median

                                # DEFAULT APERTURE PARAMETERS

(lower  =                  -20) Lower aperture limit relative to center
(upper  =                   20) Upper aperture limit relative to center
(apidtab=                     ) Aperture ID table (optional)

                                # DEFAULT BACKGROUND PARAMETERS

(b_funct=            chebyshev) Background function
(b_order=                    1) Background function order
(b_sampl=          -10:-6,6:10) Background sample regions
(b_naver=                   -3) Background average or median
(b_niter=                    0) Background rejection iterations
(b_low_r=                   3.) Background lower rejection sigma
(b_high_=                   3.) Background upper rejection sigma
(b_grow =                   0.) Background rejection growing radius

                                # APERTURE CENTERING PARAMETERS

(width  =                  15.) Profile centering width
(radius =                  30.) Profile centering radius
(thresho=                   0.) Detection threshold for profile centering

                                # AUTOMATIC FINDING AND ORDERING PARAMETERS

nfind   =                   22  Number of apertures to be found automatically
(minsep =                  40.) Minimum separation between spectra
(maxsep =                1000.) Maximum separation between spectra
(order  =           increasing) Order of apertures

                                # RECENTERING PARAMETERS

(aprecen=                     ) Apertures for recentering calculation
(npeaks =                INDEF) Select brightest peaks
(shift  =                   no) Use average shift instead of recentering?

                                # RESIZING PARAMETERS

(llimit =                 -17.) Lower aperture limit relative to center
(ulimit =                  17.) Upper aperture limit relative to center
(ylevel =                 0.05) Fraction of peak or intensity for automatic width
(peak   =                  yes) Is ylevel a fraction of the peak?
(bkg    =                   no) Subtract background in automatic width?
(r_grow =                   0.) Grow limits by this factor
(avglimi=                  yes) Average limits over all apertures?

                                # TRACING PARAMETERS

(t_nsum =                   10) Number of dispersion lines to sum
(t_step =                    3) Tracing step
(t_nlost=                   10) Number of consecutive times profile is lost befor
(t_funct=             legendre) Trace fitting function
(t_order=                    3) Trace fitting function order
(t_sampl=                    *) Trace sample regions
(t_naver=                    1) Trace average or median
(t_niter=                    2) Trace rejection iterations
(t_low_r=                   3.) Trace lower rejection sigma
(t_high_=                   3.) Trace upper rejection sigma
(t_grow =                   0.) Trace rejection growing radius

                                # EXTRACTION PARAMETERS

(backgro=                 none) Background to subtract
(skybox =                    1) Box car smoothing length for sky
(weights=                 none) Extraction weights (none|variance)
(pfit   =                fit1d) Profile fitting type (fit1d|fit2d)
(clean  =                   no) Detect and replace bad pixels?
(saturat=                INDEF) Saturation level
(readnoi=                   0.) Read out noise sigma (photons)
(gain   =                   1.) Photon gain (photons/data number)
(lsigma =                   4.) Lower rejection threshold
(usigma =                   4.) Upper rejection threshold
(nsubaps=                    1) Number of subapertures per aperture
(mode   =                   ql)
[3] A one-dimensional spectrum, "ApYb2x1R.ec.fits", has now been created.
The aperture information for this file is saved in "database/apApYb2x1R" .
Order extraction in apall
▲Aperture finding in "apall"
Order fitting in apall
▲Order fitting in "apall"
[4] Put this aperture reference frame (used in "apall") into the hdsql parameters:
   sc_refer   in  ##### Scattered Light Subtraction
and
   ap_refer   in  ##### Aperture Extraction .


[Preparation 3]
Making a Normalized Flat
Using the above order trace template, make a normalized flat frame from raw flat frames.
Of course, you can skip this process if you don't need flat-fielding in your quick look analysis.
Even for standard settings, standard flat frames are not stored in the "/opt/share/hds" directory. So, if you want to do flat-fielding, this process is always required at least once.


[Preparation 3] Making a Normalized Flat
Source : Flat frames
Pretreatment by hdsql : OverScan (at least) + Masking Bad Pixels + Linearity Correction
After hdsql : Subtract scattered light with "apscatter" using an aperture reference (likely "ref=ApYb2x1R"). Normalize with "apflatten", also using the aperture reference ("ref=ApYb2x1R").
Parameters in hdsql : Put the resultant flat frame into fl_refer in "### Flat Fielding".

[1] Apply OverScan, Masking Bad Pixels, and Linearity Correction to each raw flat frame using "hdsql".
In most cases (except for Near-IR setups), two sets of flat frames are taken, with maximum ADU optimized for the red and the blue CCD respectively. So, you should select only the raw flat frames optimized for the color you are analyzing (see the sketch below for one way to check).
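The frame types and exposure times can be checked with "hselect" (a sketch; DATA-TYP and EXPTIME are assumed here to be the Subaru-standard header keywords):
    ecl> hselect H*o.fits $I,DATA-TYP,EXPTIME yes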
[2] Make a text list file of your resultant flat frames (HXXXXXoml.fits) with a text editor ("vi" or "mule" etc). Using the "imcombine" task, make an averaged flat frame (combine=average reject=avsigclip).
    ecl> imcombine @FlatYb2x1R.lst FlatYb2x1R combine=ave reject=avsigclip
[3] Using the task "apscatter" with the aperture reference (ref=ApYb2x1R), remove scattered light from the averaged flat frame.
    ecl> apscatter FlatYb2x1R FlatYb2x1R.sc referen=ApYb2x1R find- recent+ resize+ edit+ trac-
Aperture editing in scatter
▲Aperture edit in apscatter
   (resize and recenter referring to the order template)
Fitting along dispersion in apscatter
▲Scattered light fitting (along dispersion)
   (fitted by spline3 with order=20)
[4] Using the task "apflatten", make a normalized flat frame.
For "refren" field, input the file name of order trace template.
    ecl> apflatten FlatYb2x1R.sc FlatYb2x1R.sc.nm referen=ApYb2x1R find- recent+ resize+ edit+ trac-
The order of fitting function (the below right image) should be relatively high (10-20?). In xgterm window, type like ":order 11" to change the fitting order number. Then type "f"-key for re-fitting.
The process to exclude effect of bad column and/or chip edges is similar to the case of "apall".
The output file, "FlatYb2x1R.nm.fits", will be used as a flat frame in the following "hdsql".
PACKAGE = echelle
TASK = apflatten

input   =        FlatYb2x1R.sc  List of images to flatten
output  =     FlatYb2x1R.sc.nm  List of output flatten images
(apertur=                     ) Apertures
(referen=             ApYb2x1R) List of reference images

(interac=                  yes) Run task interactively?
(find   =                   no) Find apertures?
(recente=                   no) Recenter apertures?
(resize =                  yes) Resize apertures?
(edit   =                  yes) Edit apertures?
(trace  =                   no) Trace apertures?
(fittrac=                   no) Fit traced points interactively?
(flatten=                  yes) Flatten spectra?
(fitspec=                  yes) Fit normalization spectra interactively?

(line   =                INDEF) Dispersion line
(nsum   =                  100) Number of dispersion lines to sum or median
(thresho=                  10.) Threshold for flattening spectra

(pfit   =                fit1d) Profile fitting type (fit1d|fit2d)
(clean  =                   no) Detect and replace bad pixels?
(saturat=                INDEF) Saturation level
(readnoi=                   0.) Read out noise sigma (photons)
(gain   =                   1.) Photon gain (photons/data number)
(lsigma =                   4.) Lower rejection threshold
(usigma =                   4.) Upper rejection threshold

(functio=              spline3) Fitting function for normalization spectra
(order  =                    3) Fitting function order
(sample =                    *) Sample regions
(naverag=                    1) Average or median
(niterat=                    5) Number of rejection iterations
(low_rej=                   3.) Lower rejection sigma
(high_re=                   3.) High upper rejection sigma
(grow   =                   0.) Rejection growing radius
(mode   =                    q)
Order extraction in apflatten
▲Fitting in apflatten
   (fitted by spline3 with order=20)
[5] Put the resultant normalized flat frame into the hdsql parameter
   "fl_refer"      in ##### Flat Fielding .


[Preparation 4]
Wavelength Identification of a Th-Ar Frame
Using the above order trace template, make a wavelength-identified reference from a Th-Ar (comparison) frame.
If you don't need wavelength calibration in your quick look analysis, of course you can skip this process.
If you have an old identified Th-Ar frame, you can easily proceed with the following process by referring to it in the task "ecreidentify". In that case, you should copy the database file of the old Th-Ar (cf. ecXXXXX) into database/ under your working directory.


[Preparation 4] Wavelength Identification of a Th-Ar Frame
Source : Comparison (Th-Ar)
Pretreatment by hdsql : OverScan + Masking Bad Pixels + Aperture Extraction
After hdsql : [a] In a case w/o reference, identify Th-Ar lines with "ecidentify". [b] If you have an old identified Th-Ar frame, refer to the old frame with "ecreidentify" ("referen=ThArXXXX"), then run "ecidentify".
Parameter in hdsql : Put the identified Th-Ar frame into wv_refer in "### Wavelength Calibration".

[1] Using "hdsql" for a raw frame of Th-Ar, proceed "OverScan", "Masking Bad Pixels" and "Aperture Extraction" at this time.
For the aperture reference ("ap_refer"), input the order template frame which already made in the previous preparation process.
And don't resize, recenter and edit aperture in "apall". In order to save the file after "apall" set the parameter, "ap_save=yes".
PACKAGE = echelle
   TASK = hdsql
    
inid    =                22073  Input frame ID
(indirec= /data/o05129/HDSA000) directory of Input data

(batch  =                   no) Batch Mode?
(inlist =                     ) Input file list for batch-mode
(overw  =                  yes) Force to overwrite existing images?

(oversca=                  yes) Overscan?
(biassub=                   no) BIAS / Dark Subtraction?
(maskbad=                  yes) Mask Bad Pixels?
(linear =                   no) Linearity Correction?
(cosmicr=                   no) Cosmicray-event rejection?
(scatter=                   no) Scattered light subtraction?
(xtalk  =                   no) CCD Amp Cross-Talk Subtraction?
(flat   =                   no) Flat fielding?
(apall  =                  yes) Extract spectra with apall?
(isecf  =                   no) Extract & Flat for IS? (override flat & apall)
(remask =                   no) Re-Mask wavlength calibrated spectrum?
(wavecal=                   no) Wavelength calibration?
(rvcorre=                   no) Heliocentric wavelength correction?
(splot  =                   no) Splot Spectrum?
   (......)
(ap_save=                  yes) Save apalled data?
(ap_in  =                     ) Input frame for apall (if necessary)?
(ap_refe=             ApYb2x1R) Reference frame for apall
(ap_inte=                  yes) Run apall interactively?
(ap_rece=                   no) Recenter apertures?
(ap_resi=                   no) Resize apertures?
(ap_edit=                   no) Edit apertures?
(ap_trac=                   no) Trace apertures?
(ap_fitt=                   no) Fit the traced points interactively?
(ap_llim=                  -30) Lower aperture limit relative to center
(ap_ulim=                   30) Upper aperture limit relative to center
(ap_ylev=                 0.05) Fraction of peak for automatic width determinati
(ap_peak=                  yes) Is ylevel a fraction of the peak?
(ap_bg  =                 none) Background to subtract
     (......)
[2] A one-dimensional spectrum, "H22073om_ec.fits" (or H22073o_ec.fits, if you skipped masking), is created.
Rename this file, for example to "ThArYb2x1R.ec.fits".
[3a] Without old references
At first, you can get rough wavelength information from an ASCII TABLE EXTENSION of the raw fits file.
    ecl> tprint /data/o05129/HDSA00022073.fits[1]
    #  Table HDSA00022073[1]  Fri 02:34:18 07-Apr-2017
    
    # row ORDER X-MIN Y-MIN   WL-MIN X-CEN Y-CEN   WL-CEN X-MAX Y-MAX   WL-MAX SLIT INCLINATION DISPERSION
    #           PIXEL PIXEL       nm PIXEL PIXEL       nm PIXEL PIXEL       nm           degree   nm/PIXEL
    
        1    88    47  2048  672.436   112  1024  676.602   168     1  680.767            0.000      0.004
        2    89   104  2048  664.881   169  1024  669.000   225     1  673.118            0.000      0.004
        3    90   160  2048  657.493   225  1024  661.566   280     1  665.639            0.000      0.004
        4    91   215  2048  650.268   280  1024  654.296   334     1  658.324            0.000      0.004
        5    92   269  2048  643.200   333  1024  647.184   387     1  651.169            0.000      0.004
        6    93   322  2048  636.284   385  1024  640.225   439     1  644.167            0.000      0.004
        7    94   373  2048  629.515   437  1024  633.414   490     1  637.314            0.000      0.004
        8    95   423  2048  622.888   487  1024  626.747   640     1  630.605            0.000      0.004
        9    96   473  2048  616.400   636  1024  620.218   689     1  624.037            0.000      0.004
       10    97   621  2048  610.045   684  1024  613.824   736     1  617.603            0.000      0.004
       11    98   668  2048  603.820   731  1024  607.561   783     1  611.301            0.000      0.004
       12    99   715  2048  597.721   777  1024  601.424   829     1  605.126            0.000      0.004
       13   100   760  2048  591.744   822  1024  595.410   874     1  599.075            0.000      0.004
       14   101   804  2048  585.885   866  1024  589.514   918     1  593.144            0.000      0.004
       15   102   848  2048  580.141   909  1024  583.735   961     1  587.329            0.000      0.004
       16   103   891  2048  574.509   952  1024  578.068  1003     1  581.626            0.000      0.003
       17   104   933  2048  568.985   994  1024  572.509  1044     1  576.034            0.000      0.003
In this output, you can easily find the order numbers and wavelengths. Please note that the order numbering can be shifted depending on your tracing work with apall in the previous chapter.

Do "ecidentify" to the renamed one-dimensional file.
In usual case, you should identify 5 or 6 points per order. . After finished inputs along all orders(you can skip 2-3 orders), "f"-key makes a wavelength fitting. Then, in xgterm window, change x & y order numbers with ":xorder 5", ":yorder 4" (In usual case, x=5 and y=4 are enough for orders of this fitting.). Delete out-stand points with "d"-key and re-fit with "f"-key. If the fitting error becomes small enough (<0.005Å, It could be changed by the resolution of the spectrograph), hit "l"-key to register all lines in the list. Then, adjust the fitting error again in the same way. If it becomes small enough, quit with "q"-key.
If you identify an UV data (< 330nm), the original thar.dat in IRAF should be not enough for identification. You must download a new one from this page, and replace it.
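The invocation itself is simple (a sketch using the renamed file; the detailed parameters are set via eparam as listed below):
    ecl> ecidentify ThArYb2x1R.ec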
PACKAGE = echelle
  TASK = ecidentify

images  =           ThArYb2x1R  Images containing features to be identified
(databas=             database) Database in which to record feature data
(coordli=   linelists$thar.dat) User coordinate list
(units  =                     ) Coordinate units
(match  =                   1.) Coordinate list matching limit in user units
(maxfeat=                 1000) Maximum number of features for automatic identifi
(zwidth =                  10.) Zoom graph width in user units
(ftype  =             emission) Feature type
(fwidth =                   4.) Feature width in pixels
(cradius=                   5.) Centering radius in pixels
(thresho=                  10.) Feature threshold for centering
(minsep =                   2.) Minimum pixel separation
(functio=            chebyshev) Coordinate function
(xorder =                    2) Order of coordinate function along dispersion
(yorder =                    2) Order of coordinate function across dispersion
(niterat=                    0) Rejection iterations
(lowreje=                   3.) Lower rejection sigma
(highrej=                   3.) Upper rejection sigma
(autowri=                   no) Automatically write to database?
(graphic=             stdgraph) Graphics output device
(cursor =                     ) Graphics cursor input
(mode   =                    q)
Identification of Th-Ar lines
▲Th-Ar line identification in "ecidentify"
   (about 6 lines in each order)
Residual of fitting in ecidentify
▲Fitting errors in "ecidentify"
   (xorder=2, yorder=2 for the first fit)
Identification of Th-Ar lines
▲Fitting errors in "ecidentify"
   (xorder=4, yorder=4 : 0.6 arcsec slit width, 2x1bin)
Residual of fitting in ecidentify
▲Fitting errors in "ecidentify"
   (final result by xorder=5, yorder=4)
[3b] Referring to an old identified Th-Ar frame
Copy the old Th-Ar file (ThArYb2x1R_old.fits) and its database file (database/ecThArYb2x1R_old) into your working directory. Refer to it with the task "ecreidentify".
    ecl> ecreidentify ThArYb2x1R referen=ThArYb2x1R_old
If the resultant RMS error is relatively high (> 0.01), there may be a disagreement between the new and the old frames.
After "ecreidentify", run "ecidentify" on the new Th-Ar frame to confirm the fitting errors.
[4] Put the identified Th-Ar frame into the hdsql parameter
   "wv_refer"    in ##### Wavelength Calibration .


hdsql : Quick Look of Object Frames
After above "preparations"(1 is recommended, 2 is indispensable, 3 & 4 can be skipped) have been finished, you can start the quick look for each object with the task "hdsql".
You can choose processes (wavelength calib., flat-fielding, cosmic ray rejection and scattered light subtraction etc) you want in hdsql. If you want to skip it, you should input "no" in each parameter in "hdsql".


[1] Inside of "hdsql", there are 12 steps, from (1)OverScan to (12)Plot.
At least, (1)OverScan & (8)Aperture Extraction are necessary to get one dimensional spectra for your quick look. The following parameter list is showing the setting to get full reduction of "hdsql". If you want to skip some steps in "hdsql", just set "no" for corresponded parameters ("overscan", "maskbad", "linearity" ... etc.).
The inside of each step is described hereafter.
[1/13] OverScan (suffix="o")
This task calculate the average ADU in overscan regions in each frame and then subtract this value from the entire image. Therefore, this process is very similar to BIAS subtraction.
At the same time, it change counts in the frame from ADU to electron numbers, multiplying conversion factors (≈1.7).
All raw frames must not skip this step.
[2/13] Bias/Dark Subtraction (suffix="b")
BIAS, Dark, or both will be subtracted from the object frame in this step.
However, this process should be skipped in the usual data reduction, because the subtraction can lower the S/N ratio of the final spectra: [1/13] OverScan has already subtracted the average BIAS level, and the smoothed dark counts will be subtracted by [6/13] Scattered Light Subtraction.
The dark counts are scaled automatically according to the exposure times of the dark and target frames.
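For example, a dark frame taken with an 1800 s exposure would be scaled by 600/1800 = 1/3 before being subtracted from a 600 s object frame (hypothetical exposure times, for illustration).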
[2/13] Parameters for BIAS/Dark Subtraction
bs_refer : BIAS frame
bs_dark : Dark+BIAS frame
bs_style : Which type of subtraction do you want? [bias(=default)|dark|both]

[3/13] Masking Bad Pixels (suffix="m")
The bad pixels in the target frame are masked in this step.
[3/13] Parameters for Masking Bad Pixels
mb_refer : Mask frame
mb_auto : Automatic creation of a new mask from the BIAS "bs_refer"? (default=no)
mb_lower : Lower limit count for mb_auto mask creation (default=-100)
mb_upper : Upper limit count for mb_auto mask creation (default=300)

[4/13] Linearity Correction (suffix="l")
Significant non-linearity is found in the HDS EEV-42-80 CCDs at higher electron numbers (over 10,000 e-). The task "hdslinear" has been prepared to correct this non-linearity.
[5/13] Cosmic-Ray Rejection (suffix="c" (wacosm11) or "C" (lacos))
In this step you can choose a rejection algorithm: the original "wacosm11" or "L.A.Cosmic" (Laplacian Cosmic Ray Identification).

"wacosm11" first applies a median filter to the object frame, detects outlying pixels, and then replaces the counts of these pixels with extrapolated values. The important parameter of this task is the baseline count "cr_wbase". This should be similar to the peak count of the object frame. (A smaller value is better for finding cosmic-ray hits, but too small a value risks misidentifying real counts as cosmic-ray hits.)
For further explanation, please refer to the IRAF reduction manual of HDS.

To use L.A.Cosmic you must install stsdas in your IRAF system, and stsdas must be loaded before running hdsql. The process takes much more time than "wacosm11".
[5/13] Parameters for Cosmic-Ray Rejection
cr_proc : Algorithm for cosmic-ray rejection (default=wacosm)
cr_wbase : Baseline count for wacosm11 (default=2000)
cr_ldisp : (for L.A.Cosmic) Display the result on DS9? If set to 'yes' you must start DS9 before running hdsql. (default=no)
cr_lgain : (for L.A.Cosmic) CCD gain (default=1.67)
cr_lreadn : (for L.A.Cosmic) CCD readout noise (default=4.4)
cr_lxorder : (for L.A.Cosmic) Fitting order along the spectrum (default=9)
cr_lyorder : (for L.A.Cosmic) Fitting order along the sky lines (default=3)
cr_lclip : (for L.A.Cosmic) Cosmic-ray detection limit in σ (default=10)
cr_lfrac : (for L.A.Cosmic) Fractional detection limit for neighboring pixels (default=3)
cr_lobjlim : (for L.A.Cosmic) Contrast limit between CR and underlying object (default=5)
cr_lniter : (for L.A.Cosmic) Max number of iterations (default=4)

[6/13] Scattered Light Subtraction (suffix="s")
This step uses the task "apscatter". It fits a smooth surface to the gaps between the orders and subtracts the fitted result from the frame.
[6/13] Parameters for Scattered Light Subtraction
sc_refer : Aperture reference frame (same as ap_refer in "Aperture Extraction")
sc_inte : Run apscatter interactively? (default=yes)
sc_rece : Recenter apertures? (default=yes)
sc_resi : Resize apertures? (default=yes)
sc_edit : Edit apertures? (default=yes)
sc_trac : Re-trace each aperture? (default=no)
sc_fitt : Fit the traced points interactively? (default=no)

Scattered Light fitting along X
▲Fitting along the cross dispersion in apscatter
   (Check that the fitted curve lies along the bottoms of the order gaps.)

Scattered Light fitting along Y
▲Fitting along the echelle dispersion in apscatter
   (fitted by spline3 with order=20)

[7/13] Cross-Talk Subtraction (suffix="x")
This step uses the task "hdssubx" (original). The HDS EEV 42-80 CCDs show overlapping, mirrored cross-talk signals with 0.12% amplitude. This process subtracts the cross-talk signal. It should mainly be used in the first reduction of flat frames.
IS flat before Cross-Talk subtraction
▲Before Cross-Talk subtraction
IS flat after Cross-Talk subtraction
▲After Cross-Talk subtraction
[7/13] Parameters for Cross-Talk Subtraction
xt_amp : Amplitude of the cross-talk signal (default=0.0012)
xt_disp : Display the results on DS9? If set to 'yes' you must start DS9 before hdsql. (default=no)

[8/13] Flat Fielding (suffix="f")
The target frame will be divided by the (normalized) flat.
[8/13] Parameters for Flat Fielding
fl_refer : (normalized) Flat frame

[9/13] Aperture Extraction (suffix="_ec")
The apertures in the target frame will be traced and extracted to create a one-dimensional spectrum.
[9/13] Parameters for Aperture Extraction
ap_refer : Aperture reference frame (same as "sc_refer" in "Scattered Light Subtraction")
ap_inte : Run aperture extraction interactively? (default=yes)
ap_rece : Recenter apertures? (default=yes)
ap_resi : Resize apertures? (default=yes)
ap_edit : Edit apertures? (default=yes)
ap_trac : Re-trace apertures? (default=no)
ap_fitt : Fit the traced points interactively? (default=no)
ap_llim : Lower aperture limit relative to center (default=-30)
ap_ulim : Upper aperture limit relative to center (default=30)
ap_bg : How to fit the sky background [none(=default)|average|median|minimum|fit]

Aperture Edit in apall
▲Editing apertures in apall
   ("."-key to select nearest order, "a"-key to select all order.
   ": lower ??",": upper ??" to set the size of aperture.)


Confirmation of 1-order in apall
▲Detailed confirmation of the selected order
   ("b"-key to enter this mode, "s" & "t"-key to edit the BG area.
   "c"-key to check the pixel position on the cursor.)


[(8+9)/13] Flat Fielding & Extraction for Image Slicer (suffix="_ecf")
Flat fielding and aperture extraction for data taken with the Image Slicer.
Fundamentally, data taken with the Image Slicer can be reduced in the same way as normal slit data, if you use a flat frame normalized with apflatten.
However, if the wavelength region involves severe fringe patterns (longer than Hα), it is better to use this step instead of [8/13] Flat Fielding & [9/13] Aperture Extraction for normal slit data.
Please see here for more details.
[10/13] Wavelength Calibration (suffix="w")
Wavelength calibration referring to the identified Th-Ar frame.
[10/13] Parameters for Wavelength Calibration
wv_refer : Identified Th-Ar frame
wv_log : Log-scaled wavelength? (default=no)

[11/13] Re-Mask after Wavelength Calibration (suffix="z")
This step creates a wavelength-calibrated mask from "mb_refer", which is then applied to the target spectrum.
[11/13] Parameters for Re-Mask after Wavelength Calibration
zm_val : Count for masked pixels (default=1)
It is better to set this to "1" (instead of "0") if you want to edit the blaze function of the target spectrum (interpolation between selected orders).
zm_thre : Threshold count for making a new mask (default=0.1 [0-1])

Before Re-Mask
▲Target spectrum before Re-Mask
   (There are some pixels damaged by bad columns.)


After Re-Mask
▲Target spectrum after Re-Mask
   (Counts of masked pixels are set to 1.0 e- .
    This should be much lower than in other, healthy pixels.)


[12/13] Heliocentric RV Correction (suffix="r")
The wavelengths in the target frame will be corrected to heliocentric values in this step, using the task "rvhds" (original).
[12/13] Parameters for Heliocentric RV Correction
rv_obs : Observatory name, as set by the task "observatory" (default="subaru")

[13/13] Plot
This step just shows the resultant spectrum with splot.
[13/13] Parameters for Plot
sp_line : Order to plot (default=1)



Here is an example of all the parameters of hdsql.
You should set the Mask (Mask2x1R), the aperture reference (ApYb2x1R), the Flat (FlatYb2x1R.sc.nm) and the identified Th-Ar (ThArYb2x1R) to proceed with a full reduction of target frames using hdsql.
PACKAGE = echelle
   TASK = hdsql
    
inid    =                22081  Input frame ID
(indirec= /data/o05129/HDSA000) directory of Input data

(batch  =                   no) Batch Mode?
(inlist =                     ) Input file list for batch-mode
(overw  =                  yes) Force to overwrite existing images?

(oversca=                  yes) Overscan?
(biassub=                   no) BIAS / Dark Subtraction?
(maskbad=                  yes) Mask Bad Pixels?
(linear =                  yes) Linearity Correction?
(cosmicr=                  yes) Cosmic Ray Rejection?
(scatter=                  yes) Scattered Light Subtraction?
(xtalk  =                   no) CCD Amp Cross-Talk Subtraction?
(flat   =                  yes) Flat Fielding?
(apall  =                  yes) Aperture Extraction?
(isecf  =                   no) Extract & Flat for IS? (override flat & apall)
(wavecal=                  yes) Wavelength Calibration?
(remask =                  yes) Re-Mask Wave-calibrated Spectrum?
(rvcorre=                  yes) Heliocentric Wavelength Correction?
(splot  =                  yes) Splot Spectrum?

                                ### Overscan ###
(os_save=                  yes) Save overscaned data?

                                ### Bias/Dark Subtraction ###
(bs_save=                  yes) Save BIAS / Dark subtracted data?
(bs_in  =                     ) Input frame for BIAS / Dark subtraction (if necessa
(bs_styl=                 bias) Subtraction style (bias|dark)
(bs_refe=                     ) BIAS frame
(bs_dark=                     ) Dark + BIAS frame

                                ### Masking Bad Pixels ###
(mb_save=                  yes) Save masked data?
(mb_in  =                     ) Input frame for Masking Bad Pixels (if necessary)
(mb_refe= /home/hds01/share/data/Mask_2x1R) Bad Pix Mask frame
(mb_auto=                   no) Auto mask creation from BIAS(bsrefer)?
(mb_uppe=                  300) Upper limit for mb_auto
(mb_lowe=                 -100) Lower limit for mb_auto
(mb_clea=                  yes) Cleaning by wacosm11?
(mb_base=                    1) Baseline for wacosm11

                                ### Linearity Correction ###
(ln_save=                  yes) Save Linearity Corrected data?
(ln_in  =                     ) Input frame for Linearity Correction (if necessary)

                                ### Cosmic-ray Rejection ###
(cr_save=                  yes) Save cosmicray processed data?
(cr_in  =                     ) Input frame for wacosm1 (if necessary)
(cr_proc=               wacosm) CR rejection procedure (wacosm|lacos)?
                                ### Parameters for wacosm11 ###
(cr_wbas=                2000.) Baseline for wacosm11
                                ### Parameters for lacos_spec ###
(cr_ldis=                  yes) Confirm w/Display? (need DS9)
(cr_lgai=                 1.67) gain (electron/ADU)
(cr_lrea=                  4.4) read noise (electrons)
(cr_lxor=                    9) order of object fit (0=no fit)
(cr_lyor=                    3) order of sky line fit (0=no fit)
(cr_lcli=                  10.) detection limit for cosmic rays(sigma)
(cr_lfra=                   3.) fractional detection limit fro neighbouring pix
(cr_lobj=                   5.) contrast limit between CR and underlying object
(cr_lnit=                    4) maximum number of iterations

                                ### Scattered-light Subtraction ###
(sc_save=                  yes) Save scattered light subtracted data?
(sc_in  =                     ) Input frame for scattered light subtraction (if nec
(sc_refe=             ApYb2x1R) Reference for aperture finding
(sc_inte=                  yes) Run apscatter interactively?
(sc_rece=                  yes) Recenter apertures for apscatter?
(sc_resi=                  yes) Resize apertures for apscatter?
(sc_edit=                  yes) Edit apertures for apscatter?
(sc_trac=                   no) Trace apertures for apscatter?
(sc_fitt=                  yes) Fit the traced points interactively for apscatter?

                                ### Cross-Talk Subtraction ###
(xt_save=                  yes) Save cross-talk subtracted data?
(xt_in  =                     ) Input frame for cross-talk subtraction (if necessar
(xt_amp =               0.0012) Cross-talk amplifier
(xt_disp=                   no) Confirm w/Display? (need DS9)

                                ### Flat Fielding ###
(fl_save=                  yes) Save flat-fielded data?
(fl_in  =                     ) Input frame for flat fielding (if necessary)
(fl_refe=     FlatYb2x1R.sc.nm) Flat frame

                                ### Aperture Extraction ###
(ap_save=                  yes) Save apalled data?
(ap_in  =                     ) Input frame for apall (if necessary)?
(ap_refe=             ApYb2x1R) Reference frame for apall
(ap_inte=                  yes) Run apall interactively?
(ap_rece=                  yes) Recenter apertures?
(ap_resi=                  yes) Resize apertures?
(ap_edit=                  yes) Edit apertures?
(ap_trac=                   no) Trace apertures?
(ap_fitt=                   no) Fit the traced points interactively?
(ap_nsum=                  10.) Number of Dispersion tosum for apfind
(ap_llim=                 -30.) Lower aperture limit relative to center
(ap_ulim=                  30.) Upper aperture limit relative to center
(ap_ylev=   1.0000000000000E-4) Fraction of peak for automatic width determination?
(ap_peak=                  yes) Is ylevel a fraction of the peak?
(ap_bg  =                 none) Background to subtract

                                ### IS Extraction and Flat fielding ###
(is_save=                  yes) Save IS extract & flat-fielded data?
(is_in  =                     ) Input frame for flat fielding (if necessary)
(is_plot=                  yes) Plot image and extract manually
(is_stx =                  -10) Start pixel to extract (for is_plot=no)
(is_edx =                    5) End pixel to extract (for is_plot=no)
(is_bfix=               fixpix) Fixing method for Bad Pix
(is_up  =                0.001) Upper Limit for Bad Pix in ApNormalized Flat

                                ### Wavelength Calibration ###
(wv_save=                  yes) Save wavelength-calibrated data?
(wv_in  =                     ) Input frame for wavelength calibration (if necessar
(wv_refe=           ThArYb2x1R) Reference frame for refspectra
(wv_log =                   no) Logarithmic wavelength scale?

                                ### Re-Mask after Wavelength Calibration###
(zm_save=                  yes) Save re-masked data?
(zm_in  =                     ) Input frame for re-mask afre wave calib. (if necess
(zm_val =                   1.) Pixel Value replaced to All Bad Pixels
(zm_thre=                  0.1) Threshold pixel value for bad column [0-1]

                                ### Heliocentric Wavelength Correction ###
(rv_in  =                     ) Input frame for radial velocity correction (if nece
(rv_obs =               subaru) Observatory

                                ### Splot ###
(sp_line=                    1) Splot image line/aperture to plot
[2] If the task works well, you will get a final resultant spectrum file like "H22081omlcsf_ecwzr.fits".
The appended "omlcsf_ecwzr" in the file name means: "o"=overscanned, "m"=bad pixels masked, "l"=linearity corrected, "c"=cosmic rays rejected, "s"=scattered light subtracted, "f"=flat fielded, "_ec"=aperture extracted, "w"=wavelength calibrated, "z"=re-masked after wavelength calibration and "r"=radial velocity corrected. If you want to keep the file after each step, please set the *_save parameter of that step to "yes" (cf. os_save=yes).
Completion of hdsql
▲Example of resultant spectrum
   (the order around NaD)



Batch mode
When you apply the same reduction process to two or more frames (e.g. flat frames), you can use the batch mode.
Create a text file listing the frame numbers (the values normally given as the 1st argument of hdsql) to be processed. In the parameter field of hdsql, set this file name in 'inlist', as in the example below.
    (batch  =                  yes) Batch Mode?
    (inlist =          FlatRaR.lst) Input file list for batch-mode
    (overw  =                  yes) Force to overwrite existing images?
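The list file itself contains one frame ID per line; for example, FlatRaR.lst might look like this (hypothetical frame IDs):
    22405
    22407
    22409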
Tips, trouble shooting
[1] HDS creates two frames, from the Red and Blue CCDs, with one exposure. So, you have to create two sets of calibration files in your preparation phase in order to reduce both colors. If you want to reduce your data simultaneously (maybe you want to do this during your observation), you can use the tasks "hdsql1", "hdsql2" and "hdsql3" on sumda, which have exactly the same function as "hdsql". For example, on sumda you can use "hdsql" for the Red CCD reduction and "hdsql1" for the Blue CCD's.
[2] You can stop your reduction at any point. If you want to restart the reduction process, please give the frame you are restarting from in the corresponding "XX_in" parameter (cf. "fl_in" for flat fielding).
[3] This task asks for confirmation before overwriting existing resultant frames, if necessary. So, before you redo the same process for the same file, it is better to rename or delete the existing resultant files to avoid this annoying situation.
[4] If the task stops for some reason with errors, temporary files with "tmp*" names may remain in your working directory. You should delete such files before your next reduction.


Download of CL scripts
  • The latest version of the HDS IRAF reduction package is available via GitHub. Please follow the steps below to download it.
          % mkdir ~/IRAF   (arbitrary directory name for your environment)
          % cd ~/IRAF
          % git clone https://github.com/chimari/hds_iraf
    Now the HDS reduction package will be downloaded into ~/IRAF/hds_iraf/ .
    If you already have this package, please use
          % git pull https://github.com/chimari/hds_iraf
    to reflect updates in the official branch.
  • Download the standard CAL templates from the HDS official page. (The total file size is about 1GB.)
    This is not required, but it should be helpful for creating CAL reference frames, if you reduce your observation data with standard setups.
          % cd ~/IRAF
          % wget https://www.naoj.org/Observing/Instruments/HDS/Data/HDS-CAL-202010.tar.gz
          % gzip -dc HDS-CAL-202010.tar.gz | tar xvf -
    This will make the ~/IRAF/HDS-CAL directory which contains CAL template files for all HDS standard setups.
  • Add the following 4 lines to your login.cl . (Please remove old hdsql setups if you have them in it.)
          set    hdshome = 'home$IRAF/hds_iraf/'
          task   $hds.pkg = 'hdshome$hds.cl'
          set    obsdb    = "hdshome$obsdb.dat"
          set    hdscal = 'home$IRAF/HDS-CAL/'
    line 1 : The directory for HDS IRAF reduction package
    line 4 : The directory for CAL templates
  • Call HDS package in IRAF
          vocl> hds
    You should see the task list in HDS package.
hdsql[1,2,3].cl are simple copies of hdsql.cl . These duplicate tasks are convenient for analyzing the other CCD chip or other wavelength settings in parallel.

The task "rvhds" requires the information about the position of the observatory. "Subaru Telescope" is not included in IRAF's original observatory.dat .
So, you should add above information into your noaolib$obsdb.dat, or save it to a new file and set it in your login.cl (see above setup in loginuser.cl).
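A minimal obsdb.dat entry might look like this (a sketch following the entry style of IRAF's noao$lib/obsdb.dat, using the values printed below):
    observatory = "subaru"
            name = "SUBARU Telescope, NAOJ"
            longitude = -155.4706
            latitude = 19.8255
            altitude = 4163
            timezone = -10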
You can check your observatory information in your IRAF environment as follows.

ecl> observatory
Command (set|list|images) (set|list|images) (list): 
Observatory to set, list, or image default (subaru): 
# Observatory parameters for SUBARU Telescope, NAOJ
       observatory = subaru
       timezone = -10
       altitude = 4163
       latitude = 19.8255
       longitude = -155.4706
       name = 'SUBARU Telescope, NAOJ'




     




Appendix. Reduction for Image Slicer data (Flat Fielding and Aperture Extraction for data w/fringes)

Data taken with the Image Slicer can fundamentally be reduced in the same way as data taken with a normal slit, if you use a flat frame normalized by "apflatten".

However, when your data involve severe fringe patterns (wavelengths longer than Hα), you should use the following special method for your Image Slicer data.
This step uses the task "hdsis_ecf" (original). In this process, you must use a flat frame normalized by "apnormalize". The process applies flat-fielding and aperture extraction simultaneously.

In the parameter setup for hdsql, the apnormalized flat should be set in "fl_refer".
    (fl_refe=       FlatNIRR.sc.nm) Flat frame
Then set flat=no, apall=no, and isecf=yes.
    (flat   =                   no) Flat Fielding?
    (apall  =                   no) Aperture Extraction?
    (isecf  =                  yes) Extract & Flat for IS? (override flat & apall)
The parameters for isecf (= hdsis_ecf.cl) in hdsql are as follows.
[(8+9)/13] Parameters for Flat Fielding & Extraction for Image Slicer
is_plot : Interactive aperture determination with apedit? (default=yes)
is_stx : Aperture start pixel for non-interactive mode (default=-12)
is_edx : Aperture end pixel for non-interactive mode (default=12)
is_bfix : Bad pixel fixing method (default=fixpix)
is_up : Upper limit for bad pixels in the ApNormalized flat (default=0.001)

In this process with "isplot=yes", apedit window will come up. Select an appropriate order with '.' (dot), then measure the start/end pixels of aperture in the order display (with 'C'-key etc.).
Aperture Extraction for Image Slicer
▲Aperture extraction for Image Slicer data
In the order display selected with '.'-key, measure pixel values with 'C'-key to determine the aperture.

After the start/end pixels of the aperture are determined (or if you have measured them already), you can quit apedit with the 'q'-key. If you set the upper and lower pixel limits of the orders with ':upper' and ':lower', these become the default values for the following inputs.

Then, input the start/end pixels in the cl terminal. (This aperture should be contained within the aperture used in apnormalize for the flat frame.)
    >>> Lower Pixel for Extraction (-12) : -20
    -20
    >>> Upper Pixel for Extraction (12) :  11
    11
Then, each order will be sliced along the aperture and the slicer profile will be measured.
Aperture Extraction for Image Slicer
▲The image slicer #3 profile along the "slit length"
The black line is the object spectrum and the red line shows the profile of the flat frame.
At the edges and in the gaps of each slicer element, the counts of the apnormalized flat become lower. If the object is divided by this apnormalized flat, its counts at the edges/gaps become abnormally high. In this hdsis_ecf.cl process, the flat profile is measured and this effect is corrected.





Appendix. Making a combined spectrum merging all orders

It can be said that the above quick look analysis with hdsql includes all the processes required for primary data analysis of echelle spectra.
Hereafter, I suppose some of you will want to create combined spectra from the multi-order spectra produced by hdsql.
In this process, the "blaze function" of the echelle orders must be corrected. Roughly, there are two ways to do this correction.
     [A]  Continuum fitting of the target spectrum
     [B]  Flux calibration using standard stars

[A] By continuum fitting of the target spectrum
If your target is a "normal" star without broad absorption or emission features, whose continuum level can be determined easily, it is good to correct the echelle blaze by continuum fitting of the target spectrum.

[1] Fit the continuum of the target spectrum, using the task "continuum".
   ecl> continuum H75611omlcsf_ecwzr H75611omlcsf_ecwzr_c
Continuum fitting
▲Continuum Fitting by "continuum"
   (Fitting order is about 6-15.)

Spectrum normalized by continuum level
▲Continuum Fitted spectrum
   (normalized as continuum level=1)

If you combine this resultant spectrum (H75611omlcsf_ecwzr_c) directly with the task "scombine" (combine=ave group=images; see the sketch below), you can get a "rough" combined spectrum. But the S/N ratio of this spectrum may be partly degraded, because the low-count edges of each order are added to the adjacent "overlapped" regions without weighting.
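For reference, this "rough" combine would look like (a sketch; the output name is hypothetical):
   ecl> scombine H75611omlcsf_ecwzr_c H75611_rough combine=average group=images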
In order to avoid this degradation, it is better to create a "blaze function" of the spectrum.
[2] Dividing the source by the fitted spectrum, you can get easily the "Blaze Function" (CBlaze).
   ecl> sarith H75611omlcsf_ecwzr / H75611omlcsf_ecwzr_c CBlaze
(Here, CBlaze includes information about the "shape" of target's continuum. So, it is not a pure Echelle blaze function. But, for convenience, I describe it just a "blaze function".)
[3] If possible, it is better to modify CBlaze to correct for the effects of the target's broad absorption features or of atmospheric absorption, by interpolating from adjacent echelle orders (one possible way is sketched below).

If you set "zm_val=0" in hdsql, the masked area of CBlaze is also set to 0. This might not be good for doing such interpolation.
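As a sketch (this editing is not part of hdsql; any interactive spectrum editor will do), CBlaze can be patched with the task "splot":
     ecl> splot CBlaze
Select an affected order (e.g. with the '(' / ')' keys), press 'x' at each side of the bad region to replace it with a straight line, and press 'i' to save the edited spectrum.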
Blaze function from Continuum Fit
▲Over-plotting all orders of CBlaze
Modified Blaze function from Continuum Fit
▲Over-plotting all orders of modified CBlaze
(It is not important to correct the function at the edges of the CCD.
But, it is better to correct it around H alpha or other broad absorption features showing peculiar shapes compared with adjacent echelle orders.)


[4] If the target has been re-masked after wavelength calibration in hdsql (and "zm_val!=0"), the mask should be re-applied to the source (pre-continuum-fit) target frame.
hdsql automatically creates a mask file (cf. Mask_HXXXXXomlcsf_ecw.fits) for each target spectrum and keeps its file name in the image header (H_MASK).
The original task "hdsbadfix" can easily do this masking process.
     ecl> hdsbadfix H75611omlcsf_ecwzr H75611omlcsf_ecwzr_b mask+ manual+ cl_mask+ value=0
With the option manual=yes, "hdsbadfix" can also fix the bad counts around the edges of the CCD (2 orders from both edges). At this time, if you also set cl_mask=yes, the mask is automatically modified to include these CCD edges.
[5] In the same way, you should apply the mask to CBlaze.
If the masked area has a "healthy" overlapped wavelength region in the adjacent echelle order, you can pick up the data only from the "healthy" area. (Of course, the S/N ratio in such a region could become lower.)
Because CBlaze has been created from the target spectrum, it contains the same H_MASK in its header.
     ecl> hdsbadfix CBlaze CBlaze_b mask+ manual- cl_mask- value=0
You should set value=0 to set counts to "0" in the masked area.
[6] Using the task "scombine", sum all orders of the target and of CBlaze_b (with combine=sum group=images). Then divide the summed target by the summed CBlaze_b_sum. This process avoids the degeneration of the S/N ratio in the overlapped regions and gives a smooth connection between orders.
     ecl> scombine H75611omlcsf_ecwzr_b H75611omlcsf_ecwzr_b_sum combine=sum group=images
     ecl> scombine CBlaze_b CBlaze_b_sum combine=sum group=images
     ecl> sarith H75611omlcsf_ecwzr_b_sum / CBlaze_b_sum H75611_Cfit
     
After making the blaze function in [3], the above steps [4]∼[6] can be replaced with the following single command.
     ecl> hdsmk1d H75611omlcsf_ecwzr H75611_Cfit CBlaze
PACKAGE = hds
   TASK = hdsmk1d

inec    = H75611omlcsf_ecwzr    Multi-Order Spectrum
out1d   = H75611_Cfit           Output Flux Calibrated 1D Spectrum
blaze   = CBlaze                (Modified) Blaze Function

(trim   =                  yes) Trimming X-Pixel? (y/n)
(st_x   =                    3) Start X
(ed_x   =                 2048) End X

(adjexp =                  yes) Auto ExpTime Adjustment? (y/n)
(mode   =                   al)
Parameters for hdsmk1d
inec Input spectrum processed by hdsql
out1d Output spectrum
blaze Blaze function file name
trim ON/OFF of spectrum trimming using st_x, ed_x (y/n)
st_x Start pixel number for spectrum trimming
ed_x End pixel number for spectrum trimming
adjexp ON/OFF of automatic exposure time adjustment (usually set to "yes")




scombined CBlaze
▲Summed CBlaze_b
scombined object
▲Summed target spectrum
Object Spectrum normalized by Continuum fit
▲The final resultant target spectrum by continuum fit

divided by non-Masked CBlaze
▲Divided by "non-masked" CBlaze_sum
(affected by bad column)



divided by Masked CBlaze
▲Divided by masked CBlaze_b_sum
(The spectrum has been connected smoothly using the counts
  only from the healthy order.)



[B] by flux calibration using a standard star
If the target spectrum has broad emission/absorption features that make it difficult to determine its continuum level, you should do flux calibration using a standard star taken in a very close area of the sky. Of course, this method also conserves the flux information of the target.
In HDS, the blaze function of the echelle orders can be relatively unstable, changing with the telescope position and/or focus offsets. So, instead of spectrophotometric standards (which are very rare), you should take some "normal stars" as standards during your observation:
  • close to your target position (& observing time)
  • an early-type star (earlier than A-type); of course, its spectral type & magnitude should be well known.
  • (especially for UV observations, which include the Balmer limit) stars with lower rotation velocity
    (For >400nm, this is not so important. Furthermore, a rapid rotator can be useful for identifying the earth's atmospheric absorptions. So, it is not simple to say which is better for your standards.)


[1] Measure the flux of the standard using the task "standard".
PACKAGE = echelle
   TASK = standard

input   = H75611omlcsf_ecwzr_tb  Input image file root name
output  =       HD153855R_0724  Output flux file (used by SENSFUNC)
(samesta=                  yes) Same star in all apertures?
(beam_sw=                   no) Beam switch spectra?
(apertur=                     ) Aperture selection list
(bandwid=                   1.) Bandpass widths
(bandsep=                   1.) Bandpass separation
(fnuzero=  3.6800000000000E-20) Absolute flux zero point
(extinct= hdshome$mkoextinct.dat) Extinction file
(caldir =  onedstds$blackbody/) Directory containing calibration data
(observa=       )_.observatory) Observatory for data
(interac=                  yes) Graphic interaction to define new bandpasses
(graphic=             stdgraph) Graphics output device
(cursor =                     ) Graphics cursor input
star_nam=                    v  Star name in calibration list
airmass =                 1.62  Airmass
exptime =                  30.  Exposure time (seconds)
mag     =                 6.97  Magnitude of star
magband =                    V  Magnitude type
teff    =                B1III  Effective temperature or spectral type
answer  =                 YES!  (no|yes|NO|YES|NO!|YES!)
(mode   =                    q)
▲The case of a normal star with well-known spectral type and magnitude (fitted to a blackbody).

     (......)
(caldir =  onedstds$spec50cal/) Directory containing calibration data
     (......)
star_nam=             bd284211  Star name in calibration list
airmass =                 1.62  Airmass
exptime =                  30.  Exposure time (seconds)
     (......)
▲The case of a spectrophotometric standard whose calibration data is stored in IRAF

In "standard" task, the values for "airmass" and "exptime" automatically imported from image headers. So, if they are incorrect, you should edit them.
"teff" for the blackbody fitting can be represented not only by temperatures but also by spectral types as shown in above example.
[2] Create a sensitivity curve using the task "sensfunc".
PACKAGE = echelle
   TASK = sensfunc

standard=       HD153855R_0724  Input standard star data file (from STANDARD)
sensitiv=       HD153855R_0724  Output root sensitivity function imagename
(apertur=                     ) Aperture selection list
(ignorea=                   no) Ignore apertures and make one sensitivity functi
(logfile=              logfile) Output log for statistics information
(extinct=        )_.extinction) Extinction file
(newexti= hdshome$mkoextinct.dat) Output revised extinction file
(observa=       )_.observatory) Observatory of data
(functio=              spline3) Fitting function
(order  =                   10) Order of fit
(interac=                  yes) Determine sensitivity function interactively?
(graphs =                   sr) Graphs per frame
(marks  =       plus cross box) Data mark types (marks deleted added)
(colors =              2 1 3 4) Colors (lines marks deleted added)
(cursor =                     ) Graphics cursor input
(device =             stdgraph) Graphics output device
answer  =                  YES  (no|yes|NO|YES)
(mode   =                    q)
task standard
▲Flux calibration of a standard by "standard"
(You can choose a wider bandpass for the calibration [~4A].)

task sensfunc
▲Determination of a sensitivity curve in "sensfunc"
(fitted by spline3 with order=10.)

[3] Prepare the target spectrum with "hdsql" (co-add, if necessary; see the sketch after the parameter list). Then, run the task "calibrate" to do the flux calibration for it.
PACKAGE = echelle
   TASK = calibrate

input   = V1312Sco_R_omlcsf_ecwzr_t  Input spectra to calibrate
output  = V1312Sco_R_omlcsf_ecwzr_tf  Output calibrated spectra
(extinct=                  yes) Apply extinction correction?
(flux   =                  yes) Apply flux calibration?
(extinct= hdshome$mkoextinct.dat) Extinction file
(observa=       )_.observatory) Observatory of observation
(ignorea=                   no) Ignore aperture numbers in flux calibration?
(sensiti=       HD153855R_0724) Image root name for sensitivity spectra
(fnu    =                   no) Create spectra having units of FNU?
airmass =                 1.99  Airmass
exptime =                1500.  Exposure time (seconds)
(mode   =                    q)
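If you took multiple exposures of the target, a minimal co-add sketch with "scombine" (the input file names here are hypothetical):
     ecl> scombine V1312Sco_R_1_omlcsf_ecwzr,V1312Sco_R_2_omlcsf_ecwzr V1312Sco_R_omlcsf_ecwzr_t combine=sum group=apertures
With group=apertures, the corresponding echelle order (aperture) of each input frame is combined.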
[4] After this, the process is almost the same as in the continuum-fitting case.
By dividing the source target by the flux-calibrated target spectrum, you can get the "Blaze Function" (FBlaze) for the target.
   ecl> sarith V1312Sco_R_omlcsf_ecwzr_t / V1312Sco_R_omlcsf_ecwzr_tf FBlaze
(This FBlaze is not exactly the same blaze function as CBlaze. But, for convenience, I call it just a "blaze function".)
[5] If possible, it is better to modify FBlaze to correct for the effects of the target's broad absorption features or of atmospheric absorption, by interpolating from adjacent echelle orders (e.g., with splot, as sketched in [A]-[3]).

If you set "zm_val=0" in hdsql, the masked area of FBlaze is also set to 0. This might not be good for doing such interpolation.
Blaze function from Flux calib
▲Over-plotting all orders of FBlaze
Modified Blaze function from Flux calib
▲Over-plotting all orders of modified FBlaze (masked)


[6] If the target has been re-masked after wavelength calibration in hdsql (and "zm_val!=0"), the mask should be re-applied to the source (pre-flux-calibration) target frame. The original task "hdsbadfix" can easily do this masking process.
     ecl> hdsbadfix V1312Sco_R_omlcsf_ecwzr_t V1312Sco_R_omlcsf_ecwzr_tb mask+ manual+ cl_mask+ value=0
You should set value=0 to set counts to "0" in the masked area.
With the option manual=yes, "hdsbadfix" can also fix the bad counts around the edges of the CCD (2 orders from both edges). At this time, if you also set cl_mask=yes, the mask is automatically modified to include these CCD edges.
[7] In the same way, you should apply the mask to FBlaze.
     ecl> hdsbadfix FBlaze FBlaze_b mask+ manual- cl_mask- value=0
You should also set value=0 to set counts to "0" in the masked area.
[8] Using the task "scombine", sum all orders of the target and of FBlaze_b (with combine=sum group=images). Then divide the summed target by the summed FBlaze_b_sum. This process avoids the degeneration of the S/N ratio in the overlapped regions and gives a smooth connection between orders.
     ecl> scombine V1312Sco_R_omlcsf_ecwzr_tb V1312Sco_R_omlcsf_ecwzr_tb_sum combine=sum group=images
     ecl> scombine FBlaze_b FBlaze_b_sum combine=sum group=images
     ecl> sarith V1312Sco_R_omlcsf_ecwzr_tb_sum / FBlaze_b_sum V1312Sco_R_Fcalib
When you want to connect the spectra of the Red and Blue CCD chips, you can use "scombine" with the "group=all" option.
     ecl> scombine V1312Sco_R_Fcalib,V1312Sco_B_Fcalib V1312Sco_All_Fcalib combine=sum group=all
     
After making the blaze function in [5], the above steps [6]∼[8] can be replaced with the following single command.
     ecl> hdsmk1d V1312Sco_R_omlcsf_ecwzr V1312Sco_R_Fcalib FBlaze



Flux calibrated & combined spectrum
▲Flux calibrated & combined target spectrum
A part of the resultant spectrum
▲A part of resultant spectrum
(Broad emission lines spanning several echelle orders can be connected smoothly by this method.)




Akito Tajitsu
Last modified: Fri Nov 12 17:11:59 JST 2021


Copyright © 2000-2003 Subaru Telescope, NAOJ. All rights reserved.