Before imaging the continuum of combined TDM and FDM spectral windows (SPWs), the WEIGHT column of the MS has to be adjusted to compensate for the presently incorrect relative weighting between SPWs of different shape.
More generally, this adjustment is necessary whenever a continuum image is to be made from SPWs that do not have the same channel width and number of channels. For example, even in this pure FDM case
Spw 0 , channelwidth 244140.625 , nchan 3840
Spw 1 , channelwidth 244140.625 , nchan 3840
Spw 2 , channelwidth 244140.625 , nchan 3840
Spw 3 , channelwidth 488281.25 , nchan 3840

the adjustment is needed.
The adjustment should be made as the last step before imaging, i.e. also after any spectral averaging.
For this purpose, a new recipe was introduced in CASA 4.2 named "weights.py".
It can also be used in CASA 4.1 and is available from the CASA repository at recipes/weights.py.
The recipe provides several functions to scale weights in situ in any given MS, namely scaleweights(), adjustweights(), and adjustweights2(), plus some helper functions.
To access these functions in CASA 4.2, type (at the CASA prompt):
from recipes.weights import *

In CASA 4.1, you should copy weights.py into a convenient location and then run it with execfile().
For more information about each function, type

help scaleweights
help adjustweights
help adjustweights2
...etc.
Before the MFS clean of a given field can be performed, the weights need to be adjusted by a single call to the function adjustweights2().
Example:
adjustweights2(vis='calibrated.ms', field=2, spws=[0,1,2,3])

will scale the weights for field 2 in science SPWs 0, 1, 2, and 3 (some of which are TDM, the rest FDM) by a factor
2*df*dt/nchan

where df is the channel bandwidth, dt is the integration time, and nchan is the number of channels of the individual SPW.
Note that there will be no net effect if the spws share the same df, dt, and nchan.
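To illustrate this, here is a minimal sketch in plain Python (outside CASA; the SPW shapes and the integration time are made-up illustrative values, and adjustweights2_factor() is a hypothetical helper, not part of the recipe) of the per-SPW factor described above:

```python
def adjustweights2_factor(df, dt, nchan):
    """Weight scale factor 2*df*dt/nchan for one SPW (df in Hz, dt in s)."""
    return 2.0 * df * dt / nchan

dt = 6.048  # hypothetical integration time in seconds

# Hypothetical mixed setup: one TDM SPW and one FDM SPW
tdm = adjustweights2_factor(df=15625000.0, dt=dt, nchan=128)    # TDM: few wide channels
fdm = adjustweights2_factor(df=488281.25,  dt=dt, nchan=3840)   # FDM: many narrow channels

# The per-SPW factors differ greatly (here by 960x); this is the relative
# correction. For SPWs of identical shape the common factor has no net effect.
print(tdm / fdm)
```

Since dt is common to both SPWs, it cancels in the ratio; only the differing df and nchan drive the relative reweighting.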
Note also that the field needs to be specified as an ID, not as a name.
Note also that calls to adjustweights2() change the weights permanently and should therefore not be repeated. A single call to adjustweights2() should be inserted for each relevant field, as the last step before imaging.
For CASA 4.1, the file "weights.py" is not contained in the distribution; it should be included in the ALMA data delivery package by the QA2 analyst in the directory "script". In this case, a call to execfile() on weights.py should be used instead of the import command above.
If you plan to use spectral and/or time averaging to create a smaller MS to speed up MFS imaging, you should create the averaged copy of the MS before you apply adjustweights2(). Then run adjustweights2() separately on the full-resolution MS and on the averaged copy. This results in two MSs (one full-resolution and one averaged, each with adjusted weights) which can each be imaged correctly.
NOTE that spectral averaging with split does not modify the weights. So if you spectrally average the FDM SPWs differently from the TDM SPWs and then want to combine them again, you have to account for this with an additional weights scaling due to the non-linear nature (in nchan) of the scaling factor in adjustweights2(). This can be done either in concat itself or using the scaleweights() method from the weights.py recipe.
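To see why this extra factor arises, note that the adjustweights2() factor 2*df*dt/nchan can be rewritten as 2*BW*dt/nchan^2 (with BW = df*nchan the total SPW bandwidth, which is unchanged by rebinning), i.e. it is quadratic rather than linear in nchan. A minimal plain-Python sketch with made-up values:

```python
def adj_factor(bw, dt, nchan):
    """adjustweights2() factor written as 2*BW*dt/nchan**2 (BW = df*nchan)."""
    return 2.0 * bw * dt / nchan**2

bw, dt = 1.875e9, 6.048       # hypothetical FDM SPW: 1.875 GHz total bandwidth
nchan_old = 3840
nchan_new = nchan_old // 30   # after spectral averaging over 30 channels

# Rebinning by 30 inflates the factor by 30**2 = 900 ...
print(adj_factor(bw, dt, nchan_new) / adj_factor(bw, dt, nchan_old))

# ... but the averaged data only deserve 30x the original weight, hence the
# additional 1/30 applied via concat() (visweightscale) or scaleweights():
print(adj_factor(bw, dt, nchan_new) * (1 / 30.0) / adj_factor(bw, dt, nchan_old))
```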
Example with additional scaling after spectral averaging of the TDM data:
# splitting out the high-resolution dataset and rebinning it to speed up imaging
# high resolution
split(vis = 'uid___XYZ.ms.split.cal',
      outputvis = 'mytarget.cont.FDM.ms',
      timebin = '30s',
      datacolumn = 'data',
      width = 30,   # spectral averaging of only the FDM data over 30 channels
      spw = '0',
      field = '3')

# low resolution
split(vis = 'uid___XYZ.ms.split.cal',
      outputvis = 'mytarget.cont.TDM.ms',
      timebin = '30s',
      datacolumn = 'data',
      spw = '1,2,3',
      field = '3')

# concatenation, using a weight that accounts for the re-binning of the high resolution data
concat(vis = ['mytarget.cont.FDM.ms', 'mytarget.cont.TDM.ms'],
       concatvis = 'mytarget.cont.ms',
       visweightscale = [1/30., 1])   # additional scaling by factor 1/30 for the first MS (the FDM data)

adjustweights2(vis='mytarget.cont.ms', field=0, spws=[0,1,2,3])
Alternatively, and maybe more elegantly, you can use the selective averaging feature of split and use scaleweights():
split(vis = 'uid___XYZ.ms.split.cal',
      outputvis = 'mytarget.cont.ms',
      timebin = '30s',
      datacolumn = 'data',
      width = [30,1,1,1],   # spectral averaging of only the FDM data over 30 channels
      spw = '0,1,2,3',
      field = '3')

scaleweights(vis='mytarget.cont.ms', field=[0], spw=0, scale=1/30.)

adjustweights2(vis='mytarget.cont.ms', field=0, spws=[0,1,2,3])
NOTE: the correct scaling factor in the concat or scaleweights statement depends on the order in which you perform the averaging and the adjustweights2 call: If you reverse the order of the operations above, you need to invert the scaling factor!
In both cases the final weight is the same:

W_final_a = W_old * (2*BW*dt)/((nchan_old/30.)^2) * 1/30. = W_old * (2*BW*dt)/(nchan_old^2) * 30.
W_final_b = W_old * (2*BW*dt)/(nchan_old^2) * 30. = W_final_a

where BW is the total bandwidth of the SPW.
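As a sanity check, the two orders of operation can be compared numerically in plain Python (the values for BW, dt, nchan_old, and the starting weight are made up for illustration):

```python
bw, dt, nchan_old, w_old = 1.875e9, 6.048, 3840, 1.0

def adj_factor(nchan):
    # adjustweights2() factor, written in terms of total bandwidth: 2*BW*dt/nchan**2
    return 2.0 * bw * dt / nchan**2

# Order a: rebin by 30 first, then adjustweights2(), then scale by 1/30
w_final_a = w_old * adj_factor(nchan_old / 30.0) * (1 / 30.0)

# Order b: adjustweights2() on the full-resolution data first, then scale by 30
w_final_b = w_old * adj_factor(nchan_old) * 30.0

print(w_final_a, w_final_b)   # the two orders agree
```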
For CASA 4.1 (running the delivered weights.py with execfile()):

execfile('../../script/weights.py')
for fieldid in range(0,4):   # give all fields which could potentially be imaged
    adjustweights2(vis='uid___...ms.split.cal.ms', field=fieldid, spws=[0,1,2,3])   # give all SPWs which could potentially be imaged

execfile('../script/weights.py')
for fieldid in range(0,4):   # give all fields which could potentially be imaged
    adjustweights2(vis='calibrated.ms', field=fieldid, spws=[0,1,2,3])   # give all SPWs which could potentially be imaged

For CASA 4.2 (using the import):

from recipes.weights import *
for fieldid in range(0,4):   # give all fields which could potentially be imaged
    adjustweights2(vis='uid___...ms.split.cal.ms', field=fieldid, spws=[0,1,2,3])   # give all SPWs which could potentially be imaged

from recipes.weights import *
for fieldid in range(0,4):   # give all fields which could potentially be imaged
    adjustweights2(vis='calibrated.ms', field=fieldid, spws=[0,1,2,3])   # give all SPWs which could potentially be imaged
From adjustweights2() you should see some terminal output similar to this:
Spw 0 , channelwidth 244140.625 , nchan 4096
Spw 1 , channelwidth 15625000.0 , nchan 124
Spw 2 , channelwidth 15625000.0 , nchan 124
Spw 3 , channelwidth 488281.25 , nchan 3840
Scale factor for weights in spw 0 is 59.6046447754
Will change weights for data description ids [0]
Changes applied in 4212 rows.
Scale factor for weights in spw 1 is 126008.064516
Will change weights for data description ids [1]
Changes applied in 4212 rows.
Scale factor for weights in spw 2 is 126008.064516
Will change weights for data description ids [2]
Changes applied in 4212 rows.
Scale factor for weights in spw 3 is 127.156575521
Will change weights for data description ids [3]
Changes applied in 4212 rows.
Done.
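The printed factors are consistent with the 2*df*dt/nchan formula; for this particular dataset they imply an integration time of dt = 0.5 s (an inference from the logged numbers, not something the log states). A quick plain-Python check:

```python
def factor(df, dt, nchan):
    # per-SPW weight scale factor 2*df*dt/nchan
    return 2.0 * df * dt / nchan

dt = 0.5   # integration time in seconds, inferred from the factors above

print(factor(244140.625, dt, 4096))   # spw 0: ~59.6046447754
print(factor(15625000.0, dt, 124))    # spw 1 and 2: ~126008.064516
print(factor(488281.25, dt, 3840))    # spw 3: ~127.156575521
```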
-- DirkPetry - 3 Mar 2014