[daip] SDGRID convolution function

Bob Garwood bgarwood at nrao.edu
Thu May 13 18:42:32 EDT 2010


Eric Greisen wrote:
> Glen Langston wrote:
>> Thanks for the update.
>>
>> Actually the Orion data do have spectral lines.  There
>> are actually two lines at opposite ends of the bands (NH3 1-1
>> at 23694 MHz and NH3 2-2 at 23722 MHz, and nothing much in the middle,
>> the lines are about 8 km/sec wide or about 0.6 MHz, but there
>> are several lines in the NH3 1-1 and 2-2 spectra, separated
>> by about 0.3 MHz)
>>
>> It takes running IMLIN to subtract out the continuum to see the
>> lines.  See the figures at the top of page:
>>
>> https://safe.nrao.edu/wiki/bin/view/Kbandfpa/WebHome
>>
>> In that example, we'd averaged three channels (I'm not sure
>> what data we gave you concerning averaging of channels, the
>> original data set has 4096 channels)
>>
>> I appreciate your points.  It's a tough problem, but I think
>> the thing to check for is the "divide by zero" problem in the
>> sum of weights.    Maybe perform a quality control check on
>> sum(wt), not just on the individual "wt" weights going into
>> the grid.   If sum(wt) falls inside some +/- threshold, then
>> those data should be flagged as well.   This check threshold
>> could be a new parameter, or threshold = max(wt)*reweight(2) (?),
>> or something similar.
>
> The check is only on the sum of the weights, not on the individual 
> weights going into the sum.  That is why the process described by Bob 
> can take place.
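Glen's suggested check might look roughly like the sketch below (this is a hypothetical illustration, not the actual SDGRID code; `grid_cell`, `reweight2`, and the `BLANKED` sentinel are invented names).  The idea is that a cell whose summed weight falls inside a small +/- threshold is blanked instead of divided through, so near-cancelling positive and negative weights can't blow up the cell value:

```python
# Hypothetical sketch of a sum-of-weights cutoff for one grid cell.
# Threshold rule assumed, following Glen: threshold = max(|wt|) * reweight(2).

BLANKED = None  # stand-in for a magic blanking value


def grid_cell(data, weights, reweight2=1e-3):
    """Return the weighted cell value sum(wt*data)/sum(wt), or BLANKED
    if sum(wt) is too close to zero to divide by safely."""
    wt_sum = sum(weights)
    threshold = max(abs(w) for w in weights) * reweight2
    if abs(wt_sum) <= threshold:
        return BLANKED            # flag the cell rather than divide
    return sum(d * w for d, w in zip(data, weights)) / wt_sum
```

With well-behaved weights the cell value comes out as usual; with weights that nearly cancel (e.g. from an oscillating convolving function), the cell is flagged instead of producing a huge spurious value.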

Ah, now I see what Glen's saying.  I think this is really a problem for 
poorly sampled data, and I think the current check is adequate for 
dealing with it at the edges of the images, where even in a good case 
the coverage may end up being ragged.  For the case where the coverage 
is poor even in the middle of the image, if you want to make an image to 
be viewed from just those data, you might be better off using a larger 
convolving function to smooth things out more.  Or it might be that 
you've intentionally undersampled because you just want a quick look at 
the region, in which case I think you might be OK with a less ideal 
function that doesn't have this problem (e.g. an exponential, or even 
nearest cell plus a larger convolving function and larger cell sizes).  
In other words, match the convolving function to what you were trying 
to do, which was not to get the highest resolution out of the telescope, 
or you'd have sampled the area more carefully.  For the case of a single 
beam, where you might make an image from each beam and then combine 
them, I think you want to hold off on applying the cutoff until after 
the combination. 
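Deferring the cutoff until after the combination could look something like this sketch (hypothetical names and threshold rule; not AIPS code).  Each beam keeps its raw accumulators, sum(wt*data) and sum(wt), for a cell; the blanking test is applied only to the combined denominator, so a cell thinly covered by any single beam can still survive if the beams together cover it well:

```python
def combine_beams(per_beam_accum, reweight2=1e-3):
    """Combine one cell across beams and only then apply the cutoff.

    per_beam_accum: list of (num, den) pairs, one per beam, where
    num = sum(wt*data) and den = sum(wt) accumulated by that beam.
    Returns the combined cell value, or None (blanked) if the combined
    weight sum is inside the +/- threshold."""
    num = sum(n for n, d in per_beam_accum)
    den = sum(d for n, d in per_beam_accum)
    wts = [abs(d) for n, d in per_beam_accum if d != 0.0]
    if not wts:
        return None               # no beam touched this cell at all
    if abs(den) <= max(wts) * reweight2:
        return None               # blank only after the combination
    return num / den
```

The same threshold applied per beam before combining would have blanked cells that are actually fine once all the beams are summed.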

I've also been pondering whether it might be useful to use the 
well-sampled points on each single-beam image to find areas that are 
well sampled by more than one beam, and then make some global relative 
gain solution to adjust the gains from each beam so that the images 
agree as well as they can with each other at those common points.  This 
might be a cheaper way to do something like basket-weaving in the 
multi-feed case, where finding crossing points in the raw data could get 
expensive even at just 7 beams, and I shudder to think how it might work 
for many tens of beams.
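A minimal sketch of that kind of global relative gain solution, assuming the overlap pixels have already been identified (all names hypothetical; a real solution would want proper least squares and error weighting).  It works in log space: each overlapping beam pair contributes a median log ratio, and the per-beam log gains are iterated to agreement with beam 0 held fixed as the reference:

```python
import math
import statistics


def solve_relative_gains(overlaps, n_beams, n_iter=50):
    """Solve for multiplicative gains g such that g[i]*pix_i ~= g[j]*pix_j
    on cells well sampled by both beams i and j, with g[0] fixed to 1.

    overlaps: dict {(i, j): [(pix_i, pix_j), ...]} of image values at
    the common well-sampled cells.  Simple averaging iteration, not
    real basket-weaving."""
    # One robust log-ratio measurement per overlapping pair:
    # g_i*pix_i = g_j*pix_j  =>  log g_i - log g_j = log(pix_j/pix_i)
    ratios = {
        (i, j): statistics.median(math.log(pj / pi) for pi, pj in pairs)
        for (i, j), pairs in overlaps.items()
    }
    logg = [0.0] * n_beams
    for _ in range(n_iter):
        for b in range(1, n_beams):       # beam 0 stays the reference
            terms = []
            for (i, j), r in ratios.items():
                if i == b:
                    terms.append(logg[j] + r)
                elif j == b:
                    terms.append(logg[i] - r)
            if terms:
                logg[b] = sum(terms) / len(terms)
    return [math.exp(x) for x in logg]
```

For example, if beam 1 reads half the level of beam 0 on their common cells, the solver returns gains of roughly (1, 2), and multiplying each image by its gain brings them into agreement before (or during) the combination.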

-Bob
>
> Eric



