cpl.drs

CPL DRS submodule

This module provides standard implementations of instrument-independent, higher-level data processing functions for general non-linear fitting, image Fourier transformation, point pattern matching, world coordinate system transformation, etc.

exception cpl.drs.WCSLibError(error_list, message)

Used to return errors from WCSLIB conversion functions.

Contains an error_list attribute holding all errors found in the operation, one entry per matrix row, in the format: (matrix row, error enum string)

This exception is not meant to be raised from Python code.

class cpl.drs.Aperture

Returned from an Apertures object's __getitem__ method or iterator. Used to access each Aperture record individually.

Cannot be instantiated on its own.

property bottom

The bottommost y position in an aperture

property bottom_x

The x position of the bottommost y position in an aperture. An aperture may have multiple bottommost x positions, in which case one of these is returned.

property centroid_x

The X-centroid of an aperture

property centroid_y

The Y-centroid of an aperture. For a concave aperture the centroid may not belong to the aperture.

property flux

The flux of an aperture

property left

The leftmost x position in an aperture

property left_y

The y position of the leftmost x position in an aperture. An aperture may have multiple leftmost y positions, in which case one of these is returned.

property max

The maximum value of an aperture

property maxpos_x

The X-position of the aperture maximum value

property maxpos_y

The Y-position of the aperture maximum value

property mean

The mean value of an aperture

property median

The median value of an aperture

property min

The minimum value of an aperture

property minpos_x

The X-position of the aperture minimum value

property minpos_y

The Y-position of the aperture minimum value

property npix

The number of pixels of an aperture

property pos_x

average X-position of an aperture

property pos_y

average Y-position of an aperture

property right

The rightmost x position in an aperture

property right_y

The y position of the rightmost x position in an aperture. An aperture may have multiple rightmost y positions, in which case one of these is returned.

property stdev

The standard deviation value of an aperture

Raises:

cpl.core.DataNotFoundError – if the aperture comprises fewer than two pixels

property top

The topmost y position in an aperture

property top_x

The x position of the topmost y position in an aperture. An aperture may have multiple topmost x positions, in which case one of these is returned.

class cpl.drs.Apertures

Compute statistics on selected apertures.

The aperture object contains a list of zones in an image. It is typically used to hold the results of an object detection, or to work on a very specific zone of an image.

Can be built either with the constructor, from a reference image and a labelized image, or via the various static extract_* functions.

Individual Aperture statistics can be accessed either via the get_* methods (1-indexed) or by indexing the Apertures object itself (e.g. apt[0], 0-indexed), which returns an Aperture object whose properties correspond to the individual Aperture statistics.

Parameters:
  • reference (cpl.core.Image) – Reference image

  • labelized (cpl.core.Image) – Labelized image (of type cpl.core.Type.INT). Must contain at least one pixel for each value from 1 to the maximum value in the image.

Raises:

Notes

For the centroiding computation of an aperture, if some pixels have values lower than or equal to 0, all the values of the aperture are locally shifted such that the minimum value of the aperture becomes epsilon. The centroid is then computed on these positive values. In principle the centroid should always be computed on positive values; this shift is done to avoid raising an error in case the caller of the function wants to use it on images with negative values without caring about the centroid results. In such cases the centroid result would still be meaningful, but would depend slightly on the hardcoded value chosen for epsilon (1e-10).

See also

cpl.core.Image.labelise_create

Can be used to create the labelized image.
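
For illustration, a minimal usage sketch (assuming an existing cpl.core.Image named img; the detection level of 5.0 is illustrative only) showing both access styles:

import cpl.drs

# Detect apertures with a detection level (sigma) of 5.0
apertures = cpl.drs.Apertures.extract_sigma(img, 5.0)

# 1-indexed accessor methods ...
flux_first = apertures.get_flux(1)

# ... or 0-indexed item access, returning an Aperture object
first = apertures[0]
print(first.flux, first.npix, first.centroid_x, first.centroid_y)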

dump(self: cpl.drs.Apertures, filename: str | None = '', mode: str | None = 'w', show: bool | None = True) str

Dump the Apertures contents to a file, stdout or a string.

This function is mainly intended for debug purposes.

Parameters:
  • filename (str, optional) – file path to dump apertures contents to

  • mode (str, optional) – File mode to save the file, default ‘w’ overwrites contents.

  • show (bool, optional) – Send apertures contents to stdout. Defaults to True.

Returns:

Multiline string containing the dump of the apertures contents.

Return type:

str

static extract(source_image: cpl.core.Image, sigmas: cpl.core.Vector) object

Simple detection of apertures in an image

Aperture detection on the image is performed using each value in sigmas until at least one is found.

Parameters:
  • source_image (cpl.core.Image) – The image to process

  • sigmas (cpl.core.Vector) – Detection levels. Positive, decreasing sigmas to apply

Returns:

The detected apertures (cpl.drs.Apertures) and the index of the sigma that was used (int)

Return type:

cpl.drs.Apertures, int

Raises:

cpl.core.DataNotFoundError – if the apertures could not be detected

See also

cpl.drs.Apertures.extract_sigma

Used on the image for aperture detection. Also provides a detailed explanation of individual sigmas.
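
A minimal sketch of calling extract (assuming an existing cpl.core.Image named img, and that cpl.core.Vector accepts a Python list; the sigma values are illustrative only):

import cpl.core
import cpl.drs

# Positive, decreasing detection levels; the first level that yields
# at least one aperture is used
sigmas = cpl.core.Vector([5.0, 3.0, 2.0])

apertures, sigma_index = cpl.drs.Apertures.extract(img, sigmas)
print("sigma index used:", sigma_index)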

static extract_mask(source_image: cpl.core.Image, selection: object) cpl.drs.Apertures

Simple detection of apertures in an image from a user supplied selection mask

The values selected for inclusion in the apertures must have a non-zero value in the selection mask, and must not be flagged as bad in the bad pixel map of the image.

Parameters:
  • source_image (cpl.core.Image) – The image to process. Can be of type cpl.core.Type.DOUBLE, cpl.core.Type.FLOAT, or cpl.core.Type.INT

  • selection (cpl.core.Mask) – Selection mask; pixels with a non-zero mask value may be included in the apertures

Returns:

The detected apertures

Return type:

cpl.drs.Apertures

Raises:
static extract_sigma(source_image: cpl.core.Image, sigma: float) cpl.drs.Apertures

Simple detection of apertures in an image using a provided sigma

Sigma is used to calculate the threshold for the aperture detection. This threshold is calculated using the median plus the average distance to the median times sigma.

Parameters:
  • source_image (cpl.core.Image) – The image to process

  • sigma (float) – Detection level. Used as a variable to calculate the threshold for detection.

Returns:

The detected apertures

Return type:

cpl.drs.Apertures

Raises:

Notes

In order to avoid (the potentially many) detections of small objects the mask of detected pixels is subjected to a 3x3 morphological opening filter.

static extract_window(source_image: cpl.core.Image, sigmas: cpl.core.Vector, area: tuple) object

Simple detection of apertures in an image window

Aperture detection on the window is performed using each value in sigmas until at least one is found.

Parameters:
  • source_image (cpl.core.Image) – The image to process

  • sigmas (cpl.core.Vector) – Detection levels. Positive, decreasing sigmas to apply

  • area (tuple(int, int, int, int)) –

    Rectangle of the window in the format (llx, lly, urx, ury) where:

    • llx : Lower left x position

    • lly : Lower left y position

    • urx : Upper right x position

    • ury : Upper right y position

    Position indices are zero based.

Returns:

The detected apertures (cpl.drs.Apertures) and the index of the sigma that was used (int)

Return type:

cpl.drs.Apertures, int

Raises:

cpl.core.DataNotFoundError – if the apertures could not be detected

See also

cpl.drs.Apertures.extract_sigma

Used on the window for aperture detection. Also provides a detailed explanation of individual sigmas.

get_bottom(self: cpl.drs.Apertures, idx: int) int

Get the bottommost y position in an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

the bottommost y position in the aperture

Return type:

int

Raises:
get_bottom_x(self: cpl.drs.Apertures, idx: int) int

Get the x position of the bottommost y position in an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

the bottommost x position of the aperture

Return type:

int

Raises:
get_centroid_x(self: cpl.drs.Apertures, idx: int) float

Get the X-centroid of an aperture

For a concave aperture the centroid may not belong to the aperture.

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

The X-centroid of the aperture

Return type:

float

Raises:
get_centroid_y(self: cpl.drs.Apertures, idx: int) float

Get the Y-centroid of an aperture

For a concave aperture the centroid may not belong to the aperture.

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

The Y-centroid of the aperture

Return type:

float

Raises:
get_flux(self: cpl.drs.Apertures, idx: int) float

Get the flux of an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

The flux of the aperture

Return type:

float

Raises:
get_left(self: cpl.drs.Apertures, idx: int) int

Get the leftmost x position in an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

the leftmost x position of the aperture

Return type:

int

Raises:
get_left_y(self: cpl.drs.Apertures, idx: int) int

Get the y position of the leftmost x position in an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

the y position of the leftmost x position

Return type:

int

Raises:
get_max(self: cpl.drs.Apertures, idx: int) float

Get the maximum value of an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

The maximum value of the aperture

Return type:

float

Raises:
get_maxpos_x(self: cpl.drs.Apertures, idx: int) int

Get the X-position of the aperture maximum value

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

The X-position of the aperture maximum value

Return type:

int

Raises:
get_maxpos_y(self: cpl.drs.Apertures, idx: int) int

Get the Y-position of the aperture maximum value

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

The Y-position of the aperture maximum value

Return type:

int

Raises:
get_mean(self: cpl.drs.Apertures, idx: int) float

Get the mean value of an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

The mean value of the aperture

Return type:

float

Raises:
get_median(self: cpl.drs.Apertures, idx: int) float

Get the median value of an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

The median value of the aperture

Return type:

float

Raises:
get_min(self: cpl.drs.Apertures, idx: int) float

Get the minimum value of an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

The minimum value of the aperture

Return type:

float

Raises:
get_minpos_x(self: cpl.drs.Apertures, idx: int) int

Get the X-position of the aperture minimum value

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

The X-position of the aperture minimum value

Return type:

int

Raises:
get_minpos_y(self: cpl.drs.Apertures, idx: int) int

Get the Y-position of the aperture minimum value

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

The Y-position of the aperture minimum value

Return type:

int

Raises:
get_npix(self: cpl.drs.Apertures, idx: int) int

Get the number of pixels of an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

The number of pixels of the aperture

Return type:

int

Raises:
get_pos_x(self: cpl.drs.Apertures, idx: int) float

Get the average X-position of an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

The average X-position of the aperture

Return type:

float

Raises:
get_pos_y(self: cpl.drs.Apertures, idx: int) float

Get the average Y-position of an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

The average Y-position of the aperture

Return type:

float

Raises:
get_right(self: cpl.drs.Apertures, idx: int) int

Get the rightmost x position in an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

the rightmost x position in an aperture

Return type:

int

Raises:
get_right_y(self: cpl.drs.Apertures, idx: int) int

Get the y position of the rightmost x position in an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

the y position of the rightmost x position

Return type:

int

Raises:
get_stdev(self: cpl.drs.Apertures, idx: int) float

Get the standard deviation value of an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

The standard deviation value of the aperture

Return type:

float

Raises:
get_top(self: cpl.drs.Apertures, idx: int) int

Get the topmost y position in an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

the topmost y position in the aperture

Return type:

int

Raises:
get_top_x(self: cpl.drs.Apertures, idx: int) int

Get the x position of the topmost y position in an aperture

Parameters:

idx (int) – The aperture index (1 for the first one)

Returns:

the x position of the topmost y position

Return type:

int

Raises:
sort_by_flux(self: cpl.drs.Apertures) None

Sort apertures by decreasing aperture flux and apply changes

sort_by_max(self: cpl.drs.Apertures) None

Sort apertures by decreasing peak value and apply changes

sort_by_npix(self: cpl.drs.Apertures) None

Sort apertures by decreasing size (in pixels) and apply changes

class cpl.drs.WCS(plist: cpl.core.PropertyList)

Create a WCS object by parsing a propertylist.

Notes

The WCS object is created by reading the WCS keyword information from the property list plist, which is used to set up a WCSLIB data structure. In addition a few ancillary items are also filled in.

It is allowed to pass a cpl.core.PropertyList with a valid WCS structure and NAXIS = 0. Such a propertylist can be created by the method platesol().

Trying to use any function without first installing WCSLIB will result in a cpl.core.NoWCSError.

class platesol_fitmode

Members:

PLATESOL_4

PLATESOL_6

property name
class platesol_outmode

Members:

MV_CRVAL

MV_CRPIX

property name
class trans_mode

Members:

PHYS2WORLD

WORLD2PHYS

WORLD2STD

PHYS2STD

property name
convert(self: cpl.drs.WCS, from: cpl.core.Matrix, transform: cpl.drs.WCS.trans_mode) cpl.core.Matrix

Convert between coordinate systems.

Parameters:
  • from (cpl.core.Matrix) – The matrix of input coordinates, one coordinate per row

  • transform (cpl.drs.WCS.trans_mode) – The conversion mode (see the notes below)

Returns:

The output coordinate matrix

Return type:

cpl.core.Matrix

Raises:

Notes

This function converts between several types of coordinates. These include:

physical coordinates:

The physical location on a detector (i.e. pixel coordinates)

world coordinates:

The real astronomical coordinate system for the observations. This may be spectral, celestial, time, etc.

standard coordinates:

These are an intermediate relative coordinate representation, defined as a distance from a reference point in the natural units of the world coordinate system. Any defined projection geometry will have already been included in the definition of standard coordinates.

The supported conversion modes are:

  • cpl.drs.WCS.trans_mode.PHYS2WORLD: Converts from physical to world coordinates

  • cpl.drs.WCS.trans_mode.WORLD2PHYS: Converts from world to physical coordinates

  • cpl.drs.WCS.trans_mode.WORLD2STD: Converts from world to standard coordinates

  • cpl.drs.WCS.trans_mode.PHYS2STD: Converts from physical to standard coordinates
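
A minimal conversion sketch (assuming a FITS header already read into a cpl.core.PropertyList named header, and that cpl.core.Matrix can be built from a nested Python list; positional arguments are used because the first parameter is named from, which is a Python keyword):

import cpl.core
import cpl.drs

wcs = cpl.drs.WCS(header)

# One physical (pixel) position per matrix row
pixels = cpl.core.Matrix([[1024.0, 512.0], [1024.0, 513.0]])

# Convert the pixel positions to world coordinates
world = wcs.convert(pixels, cpl.drs.WCS.trans_mode.PHYS2WORLD)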

static platesol(ilist: cpl.core.PropertyList, cel: cpl.core.Matrix, xy: cpl.core.Matrix, niter: int, thresh: float, fitmode: cpl.drs.WCS.platesol_fitmode, outmode: cpl.drs.WCS.platesol_outmode) cpl.core.PropertyList

Do a 2d plate solution given physical and celestial coordinates

Parameters:
  • ilist (cpl.core.PropertyList) – Property list containing the input (first-pass) WCS

  • cel (cpl.core.Matrix) – Matrix of celestial coordinates of the reference points

  • xy (cpl.core.Matrix) – Matrix of physical (pixel) coordinates of the reference points

  • niter (int) – Number of fitting iterations

  • thresh (float) – Rejection threshold for the iterative fit

  • fitmode (cpl.drs.WCS.platesol_fitmode) – The type of plate solution fit (see the notes below)

  • outmode (cpl.drs.WCS.platesol_outmode) – How the reference (zero) point is moved (see the notes below)

Returns:

The output property list containing the new WCS

Return type:

cpl.core.PropertyList

Notes

This function allows for the following type of fits:

  • cpl.drs.WCS.PLATESOL_4: Fit for zero point, 1 scale and 1 rotation.

  • cpl.drs.WCS.PLATESOL_6: Fit for zero point, 2 scales, 1 rotation, 1 shear.

This function allows the zeropoint to be defined by shifting either the physical or the celestial coordinates of the reference point:

  • cpl.drs.WCS.MV_CRVAL: Keeps the physical point fixed and shifts the celestial

  • cpl.drs.WCS.MV_CRPIX: Keeps the celestial point fixed and shifts the physical

The output property list contains WCS relevant information only.

Raises:
property image_dims

Axis lengths of the image associated with a WCS.

property image_naxis

Dimensionality of the image associated with a WCS.

cpl.drs.detector

High-level functions to compute detector features.

cpl.drs.detector.get_bias_window(bias_image: cpl.core.Image, zone_def: Tuple[int, int, int, int] | None = None, ron_hsize: int = -1, ron_nsamp: int = -1) Tuple[float, float]

Compute the bias in a rectangle.

This function is meant to compute the bias level from an image by means of a Monte Carlo approach. The input image would normally be a bias frame, although no check is done on this; it is up to the caller to feed in the right kind of frame.

Parameters:
  • bias_image (cpl.core.Image) – Input image, normally a bias frame

  • zone_def (tuple(int, int, int, int), optional) – Tuple to describe the window where the bias is to be computed in the format (xmin, xmax, ymin, ymax), using PyCPL notation where the bottom left pixel is (0,0)

  • ron_hsize (int, optional) – Half size of the sampling squares, default 4

  • ron_nsamp (int, optional) – Number of samples, default 1000

Returns:

The bias in the frame and the error of the bias in the format (bias, error)

Return type:

tuple(float, float)

Raises:

cpl.core.IllegalInputError – if the specified window (zone_def) is invalid

Notes

The algorithm will create typically 100 9x9 windows on the frame, scattered optimally using a Poisson law. In each window, the mean of all pixels in the window is computed and this value is stored.

The output bias is the median of all computed means, and the error is the standard deviation of the means.
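
A minimal usage sketch (assuming bias_img is a previously loaded cpl.core.Image of a bias frame; the window values are illustrative only):

import cpl.drs

# Bias level and its error over the whole frame, with default sampling
bias, bias_err = cpl.drs.detector.get_bias_window(bias_img)

# Same, restricted to a 200x200 pixel window in the lower-left corner
bias, bias_err = cpl.drs.detector.get_bias_window(bias_img, (0, 199, 0, 199))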

cpl.drs.detector.get_noise_ring(diff_image: cpl.core.Image, zone_def: Tuple[int, int, float, float], ron_hsize: int = -1, ron_nsamp: int = -1) Tuple[float, float]

Compute the noise in a ring.

This function is meant to compute the noise in a frame by means of a Monte Carlo approach. The input is a frame, usually a difference between two frames taken with the same settings for the acquisition system, although no check is done on this; it is up to the caller to feed in the right kind of frame.

If the input image is the difference of two bias frames taken with the same settings, then the returned noise measure will be sqrt(2) times the image sensor read noise.

Parameters:
  • diff_image (cpl.core.Image) – Input image, usually a difference frame.

  • zone_def (tuple(int, int, float, float)) – Tuple describing the ring where the noise is to be computed, in the format (x, y, r1, r2). The first two integers specify the centre position of the ring as x, y, using PyCPL notation where the bottom left is (0,0). Floats r1 and r2 specify the ring's inner and outer radii.

  • ron_hsize (int, optional) – Half size of the sampling squares, default 4

  • ron_nsamp (int, optional) – Number of samples, default 1000

Returns:

The noise in the frame and the error of the noise in the format (noise, error).

Return type:

tuple(float, float)

Raises:

Notes

The algorithm will create typically 100 9x9 windows on the frame, scattered optimally using a Poisson law. In each window, the standard deviation of all pixels in the window is computed and this value is stored. The output noise is the median of all computed standard deviations, and the error is the standard deviation of the standard deviations.

See also

cpl.drs.detector.get_noise_window

Computes noise using a rectangle.

cpl.drs.detector.get_noise_window(diff_image: cpl.core.Image, zone_def: Tuple[int, int, int, int] | None = None, ron_hsize: int = -1, ron_nsamp: int = -1) Tuple[float, float]

Compute the noise in a rectangle.

This function is meant to compute the noise in a frame by means of a Monte Carlo approach. The input is a frame, usually a difference between two frames taken with the same settings for the acquisition system, although no check is done on this; it is up to the caller to feed in the right kind of frame.

If the input image is the difference of two bias frames taken with the same settings, then the returned noise measure will be sqrt(2) times the image sensor read noise.

Parameters:
  • diff_image (cpl.core.Image) – Input image, usually a difference frame.

  • zone_def (tuple(int, int, int, int), optional) – Tuple describing the window where the noise is to be computed, in the format (xmin, xmax, ymin, ymax), using PyCPL notation where the bottom left is (0,0)

  • ron_hsize (int, optional) – Half size of the sampling squares, default 4

  • ron_nsamp (int, optional) – Number of samples, default 1000

Returns:

The noise in the frame and the error of the noise in the format (noise, error).

Return type:

tuple(float, float)

Raises:

cpl.core.IllegalInputError – if the specified window (zone_def) is invalid

Notes

The algorithm will create typically 100 9x9 windows on the frame, scattered optimally using a Poisson law. In each window, the standard deviation of all pixels in the window is computed and this value is stored.

The output noise is the median of all computed standard deviations, and the error is the standard deviation of the standard deviations.

See also

cpl.drs.detector.get_noise_ring

Computes noise using a ring.

cpl.drs.detector.interpolate_rejected(to_clean: cpl.core.Image) None

Interpolate any bad pixels in an image in place

Parameters:

to_clean (cpl.core.Image) – The image to clean

Raises:

cpl.core.DataNotFoundError – if all pixels are bad

Notes

The value of a bad pixel is interpolated from the good pixels among the 8 nearest. (If all but one of the eight neighboring pixels are bad, the interpolation becomes a nearest neighbor interpolation). For integer images the interpolation is done in floating-point and rounded to the nearest integer.

If there are pixels for which all of the eight neighboring pixels are bad, a subsequent interpolation pass is done, where the already interpolated pixels are included as source for the interpolation.

The interpolation passes are repeated until all bad pixels have been interpolated. In the worst case, all pixels will be interpolated from a single good pixel.
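
A minimal sketch (assuming img is a cpl.core.Image whose bad pixel map has already been set):

import cpl.drs

# The image is modified in place; bad pixels are replaced by values
# interpolated from their good neighbours.
cpl.drs.detector.interpolate_rejected(img)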

cpl.drs.fft

FFT operations via fftw wrappers

class cpl.drs.fft.Mode

Members:

FORWARD

BACKWARD

NOSCALE

FIND_MEASURE

FIND_PATIENT

FIND_EXHAUSTIVE

property name
cpl.drs.fft.fft_image(other: cpl.core.Image, transform: cpl.drs.fft.Mode, find: cpl.drs.fft.Mode = None, scale: bool = True) cpl.core.Image

Perform a FFT operation on an image

Parameters:
  • other (cpl.core.Image) – Input image to transform from

  • transform (cpl.drs.fft.Mode) – cpl.drs.fft.FORWARD or cpl.drs.fft.BACKWARD

  • find (cpl.drs.fft.Mode or None, default=None) – based on enum, time spent searching (cpl.drs.fft.FIND_MEASURE, cpl.drs.fft.FIND_PATIENT, cpl.drs.fft.FIND_EXHAUSTIVE)

  • scale (bool, default=True) – whether or not to transform without scaling (only affects backward transforms)

Returns:

Output image of the FFT operation

Return type:

cpl.core.Image

Notes

This function performs an FFT on an image, using FFTW. CPL may be configured without this library; in this case an otherwise valid call will raise UnsupportedModeError.

The input and output images must match in precision level. Integer images are not supported.

In a forward transform the input image may be non-complex. In this case a real-to-complex transform is performed. This will only compute the first nx/2 + 1 columns of the transform. In this transform it is allowed to pass an output image with nx/2 + 1 columns.

Similarly, in a backward transform the output image may be non-complex. In this case a complex-to-real transform is performed. This will only transform the first nx/2 + 1 columns of the input. In this transform it is allowed to pass an input image with nx/2 + 1 columns.

By default the backward transform scales (divides) the result by the number of elements transformed (i.e. the number of pixels in the result image). This scaling can be turned off with cpl.drs.fft.NOSCALE (the scale parameter).

If many transformations in the same direction are to be done on data of the same size and type, a reduction in the time required to perform the transformations can be achieved by passing cpl.drs.fft.FIND_MEASURE to the find parameter.

For a larger number of transformations a further reduction may be achieved with cpl.drs.fft.FIND_PATIENT, and for an even larger number of transformations a further reduction may be achieved with cpl.drs.fft.FIND_EXHAUSTIVE.

If many transformations are to be done, then a reduction in the time required to perform the transformations can be achieved by using cpl.drs.fft.fft_imagelist().
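
A minimal round-trip sketch (assuming img is a previously loaded non-complex cpl.core.Image of type DOUBLE or FLOAT; the Mode enum is used explicitly here):

import cpl.drs

# Forward real-to-complex transform of the input image
ft = cpl.drs.fft.fft_image(img, cpl.drs.fft.Mode.FORWARD)

# Backward transform; by default the result is divided by the number of pixels
img_back = cpl.drs.fft.fft_image(ft, cpl.drs.fft.Mode.BACKWARD)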

Raises:
cpl.drs.fft.fft_imagelist(other: cpl.core.ImageList, transform: cpl.drs.fft.Mode, find: cpl.drs.fft.Mode = None, scale: bool = True) cpl.core.ImageList

Perform a FFT operation on the images in an imagelist

Parameters:
  • other (cpl.core.ImageList) – Input imagelist to transform from

  • transform (cpl.drs.fft.Mode) – cpl.drs.fft.FORWARD or cpl.drs.fft.BACKWARD

  • find (cpl.drs.fft.Mode or None, default=None) – based on enum, time spent searching (cpl.drs.fft.FIND_MEASURE, cpl.drs.fft.FIND_PATIENT, cpl.drs.fft.FIND_EXHAUSTIVE)

  • scale (bool, default=True) – true or false, whether or not to transform without scaling (only affects backward transforms)

Returns:

output imagelist to store transformed images

Return type:

cpl.core.ImageList

Notes

Convenience function for running cpl.drs.fft.fft_image() on all images in the input imagelist

cpl.drs.fit

High-level functions for non-linear fitting

cpl.drs.fit.image_gaussian(input: cpl.core.Image, xpos: int, ypos: int, xsize: int, ysize: int, errors: cpl.core.Image | None = None, guesses: List[float | None] | None = None, fit_params: List[bool] | None = None) object

Fit a 2D gaussian to image values.

Parameters:
  • input (cpl.core.Image) – Input image with data values to fit.

  • xpos (int) – X position of center of fitting domain.

  • ypos (int) – Y position of center of fitting domain.

  • xsize (int) – X size of fitting domain. It must be at least 3 pixels.

  • ysize (int) – Y size of fitting domain. It must be at least 3 pixels.

  • errors (cpl.core.Image, optional) – Optional input image with statistical errors associated to data.

  • guesses (list or array of 7 floats or None, optional) –

    7 first-guesses for the gaussian parameters in the format: [B, A, rho, mu_x, mu_y, sigma_x, sigma_y]

    If None is passed for a parameter it will be considered invalid and not be used as a first-guess for the parameter.

    These parameters are further detailed in the notes.

  • fit_params (list or array of 7 bool elements, optional) – Used to flag parameters for freezing. If an array element is set to False, the corresponding parameter will be frozen. Any other value (including an “invalid” array element) would indicate a free parameter. If a parameter is frozen, a first-guess value must be specified at the corresponding element of the parameters array. Default setting is all parameters being free.

Returns:

  • A FitImageGaussianResult NamedTuple with the following elements

  • err_params (list of 7 floats) – the statistical error associated to each fitted parameter. None if errors is not passed

  • rms (float) – returned standard deviation of fit residuals.

  • red_chisq (float) – returned reduced chi-squared of fit. None if errors is not passed

  • covariance (cpl.core.Matrix) – The covariance matrix, None if errors is not passed

  • major (float) – returned semi-major axis of ellipse at 1-sigma.

  • minor (float) – returned semi-minor axis of ellipse at 1-sigma.

  • angle (float) – returned angle between X axis and major axis of ellipse, counted counterclockwise (radians).

  • phys_cov (cpl.core.Matrix) – 3x3 covariance matrix for the derived physical parameters major, minor, and angle. None if errors is not passed

  • parameters (list of 7 floats) – Parameters of best fit.

Notes

This function fits a 2d gaussian to pixel values within a specified region by minimizing \(\chi^2\) using a Levenberg-Marquardt algorithm. The gaussian model adopted here is based on the well-known cartesian form

\[ z = B + \frac{A}{2 \pi \sigma_x \sigma_y \sqrt{1-\rho^2}} \exp\left(-\frac{1}{2\left(1-\rho^2\right)} \left(\left(\frac{x - \mu_x}{\sigma_x}\right)^2 - 2\rho\left(\frac{x - \mu_x}{\sigma_x}\right) \left(\frac{y - \mu_y}{\sigma_y}\right) + \left(\frac{y - \mu_y}{\sigma_y}\right)^2\right)\right) \]

where B is a background level and A the volume of the gaussian (they both can be negative!), making 7 parameters altogether. Conventionally the parameters are indexed from 0 to 6 in the elements of the arrays parameters, err_params, fit_params, and of the 7x7 covariance matrix:

[B, A, rho, mu_x, mu_y, sigma_x, sigma_y]

The semi-axes (a, b) and the orientation \(\theta\) of the ellipse at the 1-sigma level are finally derived from the fitting parameters as:

\begin{eqnarray*} \theta &=& \frac{1}{2} \arctan \left(2 \rho \frac{\sigma_x \sigma_y}{\sigma_x^2 - \sigma_y^2}\right) \\ a &=& \sigma_x \sigma_y \sqrt{2(1-\rho^2) \frac{\cos 2\theta}{\left(\sigma_x^2 + \sigma_y^2\right) \cos 2\theta + \sigma_y^2 - \sigma_x^2}} \\ b &=& \sigma_x \sigma_y \sqrt{2(1-\rho^2) \frac{\cos 2\theta}{\left(\sigma_x^2 + \sigma_y^2\right) \cos 2\theta - \sigma_y^2 + \sigma_x^2}} \end{eqnarray*}

Note that \(\theta\) is counted counterclockwise starting from the positive direction of the \(x\) axis, ranging between \(-\pi/2\) and \(+\pi/2\) radians.

If the correlation \(\rho = 0\) and \(\sigma_x \geq \sigma_y\) (within uncertainties) the ellipse is either a circle or its major axis is aligned with the \(x\) axis, so it is conventionally set

\begin{eqnarray*} \theta &=& 0 \\ a &=& \sigma_x \\ b &=& \sigma_y \end{eqnarray*}

If the correlation \(\rho = 0\) and \(\sigma_x < \sigma_y\) (within uncertainties) the major axis of the ellipse is aligned with the \(y\) axis, so it is conventionally set

\begin{eqnarray*} \theta &=& \frac{\pi}{2} \\ a &=& \sigma_y \\ b &=& \sigma_x \end{eqnarray*}

If requested, the 3x3 covariance matrix G associated to the derived physical quantities is also computed, applying the usual

\[ \mathrm{G} = \mathrm{J} \mathrm{C} \mathrm{J}^\mathrm{T} \]

where J is the Jacobian of the transformation \((B, A, \rho, \mu_x, \mu_y, \sigma_x, \sigma_y) \rightarrow (\theta, a, b)\) and C is the 7x7 covariance matrix of the gaussian parameters.
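
A minimal sketch (assuming img is a previously loaded cpl.core.Image with a roughly gaussian source near pixel (214, 98); positions and window size are illustrative only):

import cpl.drs

result = cpl.drs.fit.image_gaussian(img, 214, 98, 15, 15)

# result.parameters holds [B, A, rho, mu_x, mu_y, sigma_x, sigma_y]
print("centroid:", result.parameters[3], result.parameters[4])
print("1-sigma ellipse:", result.major, result.minor, result.angle)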

cpl.drs.fit.imagelist_polynomial(x_pos: cpl.core.Vector, values: cpl.core.ImageList, mindeg: int, maxdeg: int, is_symsamp: bool, pixeltype: cpl.core.Type, fiterror: cpl.core.Image | None = None, window: tuple | None = None) cpl.core.ImageList

Least-squares fit a polynomial to each pixel in a list of images

Parameters:
  • x_pos (cpl.core.Vector) – The vector of positions to fit

  • values (cpl.core.ImageList) – The list of images with values to fit

  • mindeg (int) – The smallest degree with a non-zero coefficient

  • maxdeg (int) – The polynomial degree of the fit, at least mindeg

  • is_symsamp (bool) – True iff the x_pos values are symmetric around their mean

  • pixeltype (cpl.core.Type) – The pixel-type of the created image list

  • fiterror (cpl.core.Image, optional) – Image to contain the error of the fit

  • window (tuple(int,int,int,int), optional) – If given, the window defining the area of the images to use in the format (x1,y1, x2, y2)

Returns:

The image list of the fitted polynomial coefficients

Return type:

cpl.core.ImageList

Raises:
  • IllegalInputError – if mindeg is negative, maxdeg is less than mindeg, llx or lly are smaller than 1, or urx or ury are smaller than llx and lly respectively.

  • AccessOutOfRangeError – if x2 or y2 from window exceed the size of the images

  • IncompatibleInputError – if x_pos and values have different lengths, if fiterror is given with a different size than that of values, or if the input images do not all have the same dimensions and pixel type.

  • DataNotFoundError – if x_pos contains fewer than nc values

  • SingularMatrixError – if x_pos contains fewer than nc distinct values.

  • UnsupportedModeError – if the chosen pixel type is not one of cpl.core.Type.DOUBLE, cpl.core.Type.FLOAT, cpl.core.Type.INT.

Notes

For each pixel, a polynomial representing the relation value = P(x) is computed where:

P(x) = x^{mindeg} * (a_0 + a_1 * x + … + a_{nc-1} * x^{nc-1}),

where mindeg >= 0 and maxdeg >= mindeg, and nc is the number of polynomial coefficients to determine, nc = 1 + (maxdeg - mindeg).

The returned image list thus contains nc coefficient images,

a_0, a_1, …, a_{nc-1}.

np is the number of sample points, i.e. the number of elements in x_pos and number of images in the input image list.

If mindeg is nonzero then is_symsamp is ignored; otherwise is_symsamp may be set to True if and only if the values in x_pos are known a priori to be symmetric around their mean, e.g. (1, 2, 4, 6, 10, 14, 16, 18, 19), but not (1, 2, 4, 6, 10, 14, 16). Setting is_symsamp to True while mindeg is zero eliminates certain round-off errors.

For higher order fitting the fitting problem known as “Runge’s phenomenon” is minimized using the so-called “Chebyshev nodes” as sampling points. For Chebyshev nodes is_symsamp can be set to True.

Even though it is not an error, it is hardly useful to use an image of pixel type integer for the fitting error. An image of pixel type float should on the other hand be sufficient for most fitting errors.

The call requires the following number of FLOPs, where nz is the number of pixels in any one image in the imagelist:

2 * nz * nc * (nc + np) + np * nc^2 + nc^3/3 + O(nc * (nc + np)).

If mindeg is zero an additional nz * nc^2 FLOPs are required.

If fiterror is given an additional 2 * nz * nc * np FLOPs are required.

Bad pixels in the input are supported as follows:

First all pixels are fitted, ignoring any bad pixel maps in the input. If this succeeds, then each fit where bad pixel(s) are involved is redone. During this second pass all input pixels flagged as bad are ignored.

For each pixel to be redone, the remaining good samples are passed to the polynomial fitting routine. The input is_symsamp is ignored in this second pass. The reduced number of samples may reduce the number of sampling points to equal the number of coefficients to fit. In this case the fit has another meaning (any non-zero residual is due to rounding errors, not a fitting error). If for a given fit bad pixels reduce the number of sampling points to fewer than the number of coefficients to fit, then as many coefficients are fit as there are sampling points. The higher order coefficients are set to zero and flagged as bad. If a given pixel has no good samples, then the resulting fit will consist of zeroes, all flagged as bad.
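
A minimal sketch (assuming an existing cpl.core.ImageList named values holding four images taken at the sampling positions below, and that cpl.core.Vector accepts a Python list):

import cpl.core
import cpl.drs

# Sampling positions, e.g. exposure times, one per image in the list
x_pos = cpl.core.Vector([1.0, 2.0, 4.0, 8.0])

# Fit value = a_0 + a_1 * x per pixel (mindeg=0, maxdeg=1, is_symsamp=False)
coeffs = cpl.drs.fit.imagelist_polynomial(x_pos, values, 0, 1, False,
                                          cpl.core.Type.FLOAT)

# coeffs[0] holds the constant term a_0, coeffs[1] the slope a_1, per pixel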

cpl.drs.fit.lvmq(x: cpl.core.Matrix, y: cpl.core.Vector, starting_guess_params: cpl.core.Vector, evaluate: Callable[[List[float], List[float]], float], evaluate_derivatives: Callable[[List[float], List[float]], List[float]], participating_parameters: List[bool] | None = None, sigma_y: cpl.core.Vector | None = None, relative_tolerance: float = 0.01, tolerance_count: int = 5, max_iterations: int = 1000) object

Fit a function to a set of data.

This function makes a minimum chi squared fit of the specified function to the specified data set using a Levenberg-Marquardt algorithm.

Parameters:
  • x (cpl.core.Matrix) – N x D matrix of the positions to fit. Each matrix row is a D-dimensional position.

  • y (cpl.core.Vector) – The N values to fit.

  • starting_guess_params (cpl.core.Vector) – Vector containing the M fit parameters used for the evaluate function. Must contain a guess solution on input.

  • participating_parameters (list or array of bools or None) – Optional array of size M defining which fit parameters participate in the fit (non-zero) and which fit parameters are held constant (zero). Pass None to fit all parameters.

  • evaluate (function(list or array of float, list or array of float) -> float) – Function that evaluates the fit function at the position specified by the first argument (an array of size D) using the fit parameters specified by the second argument (list or array of size M). The result is the return value of the function.

  • evaluate_derivatives (function(list or array of float, list or array of float) -> list of float) – Function that evaluates the first order partial derivatives of the fit function with respect to the fit parameters at the position specified by the first argument (an array of size D) using the parameters specified by the second argument (an array of size M). The result is the return value of the function, being a float array of size M.

  • sigma_y (cpl.core.Vector) – Vector of size N containing the uncertainties of the y-values.

  • relative_tolerance (float, optional) – The algorithm converges by definition if the relative decrease in chi squared is less than relative_tolerance, tolerance_count times in a row. The current default value is the CPL recommended default of 0.01.

  • tolerance_count (int) – The algorithm converges by definition if the relative decrease in chi squared is less than relative_tolerance, tolerance_count times in a row. The current default value is the CPL recommended default of 5.

  • max_iterations (int) – If this number of iterations is reached without convergence, the algorithm diverges, by definition. The current default value is the CPL recommended default of 1000

Returns:

  • A lvmqResult NamedTuple with the following elements

  • best_fit (list or array of float) – the best fit parameters for the evaluate function. Derived from starting_guess_params if given.

  • mse (float) – the mean squared error of the best fit

  • red_chisq (float) – the reduced chi-squared of the best fit. None if sigma_y is not passed

  • covariance (cpl.core.Matrix) – The formal covariance matrix of the best fit. On success the diagonal terms of the covariance matrix are guaranteed to be positive. However, terms that involve a constant parameter (as defined by the input array participating_parameters) are always set to zero. None if sigma_y is not passed
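
A minimal sketch fitting a 1-D exponential model y = a * exp(-b * x) (assuming cpl.core.Matrix and cpl.core.Vector accept nested or plain Python lists; the data values are illustrative only):

import math
import cpl.core
import cpl.drs

def model(pos, params):
    # pos is a D=1 position, params = [a, b]
    return params[0] * math.exp(-params[1] * pos[0])

def model_derivatives(pos, params):
    # Partial derivatives of the model with respect to a and b
    e = math.exp(-params[1] * pos[0])
    return [e, -params[0] * pos[0] * e]

x = cpl.core.Matrix([[0.0], [1.0], [2.0], [3.0]])  # N x D positions, D = 1
y = cpl.core.Vector([2.0, 1.21, 0.74, 0.45])
guess = cpl.core.Vector([1.0, 0.5])

result = cpl.drs.fit.lvmq(x, y, guess, model, model_derivatives)
print(result.best_fit, result.mse)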

cpl.drs.geometric_transforms

Functions to compute the shift-and-add operation on an image list.

class cpl.drs.geometric_transforms.Combine

The CPL Geometry combination modes for the various cpl.drs.geometric_transforms functions

Members:

INTERSECT : Combine using the intersection of the images.

UNION : Combine using the union of the images.

FIRST : Combine using the first image to aggregate the other ones.

property name
cpl.drs.geometric_transforms.offset_combine(ilist: cpl.core.ImageList, offs: cpl.core.Bivector, min_rej: int, max_rej: int, union_flag: cpl.drs.geometric_transforms.Combine, refine: bool = False, search_hx: int | None = None, search_hy: int | None = None, measure_hx: int | None = None, measure_hy: int | None = None, anchors: cpl.core.Bivector | None = None, sigmas: cpl.core.Vector | None = None) object

Images list recombination

If offset refinement is enabled this function will detect sources in the first image (unless a list of positions has been provided by the user using the anchors parameter), then use cross-correlation to refine the estimated image offsets provided in the offs parameter. If offset refinement is disabled the image offsets in offs are used as they are.

Following the optional offset refinement each image is shifted by the corresponding offset before being added together to produce a combined image.

The supported types are cpl.core.Type.DOUBLE, cpl.core.Type.FLOAT.

The number of provided offsets shall be equal to the number of input images. The ith offset (offs_x, offs_y) is the offset that has to be used to shift the ith image to align it on the first one.

If offset refinement is enabled (refine=True), anchors or sigmas must be given, with anchors taking precedence.

Parameters:
  • ilist (cpl.core.ImageList) – Input image list

  • offs (cpl.core.Bivector) – List of offsets in x and y. Applied directly if refine is False, otherwise it will be refined using cross-correlation.

  • union_flag (cpl.drs.geometric_transforms.Combine) – Combination mode: cpl.drs.geometric_transforms.Combine.UNION, cpl.drs.geometric_transforms.Combine.INTERSECT or cpl.drs.geometric_transforms.Combine.FIRST.

  • search_hx (int) – Half-width of search area. This parameter must be set when refine is True, otherwise it has no effect.

  • search_hy (int) – Half-height of search area. This parameter must be set when refine is True, otherwise it has no effect.

  • measure_hx (int) – Half-width of the measurement area. This parameter must be set when refine is True, otherwise it has no effect.

  • measure_hy (int) – Half-height of the measurement area. This parameter must be set when refine is True, otherwise it has no effect.

  • refine (bool, optional) – Set to True to enable offset refinement

  • anchors (cpl.core.Bivector, optional) – List of cross-correlation points in the first image. Unused if refine is set to False

  • sigmas (cpl.core.Vector, optional) – Positive, decreasing sigmas to apply for cross-correlation point detection. Unused if refine is set to False, or if refine is True but anchors is given.

Returns:

NamedTuple in the format (combined, contribution, pisigma) where:

  • combined: the combined image

  • contribution: the contribution map

  • pisigma: Index of the sigma that was used. None if sigmas is not given

Return type:

NamedTuple(cpl.core.Image, cpl.core.Image, int or None)

Raises:

See also

cpl.drs.geometric_transforms.offset_fine

used to refine the offsets if refine is True

cpl.drs.geometric_transforms.offset_saa

used for image recombination using the default kernel
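
A minimal sketch (assuming an existing cpl.core.ImageList named images and a cpl.core.Bivector named offsets holding one estimated (x, y) offset per image; rejection counts of 0 and the UNION mode are illustrative only):

import cpl.drs

# Apply the offsets as given (refine defaults to False) and combine
combined, contribution, pisigma = cpl.drs.geometric_transforms.offset_combine(
    images, offsets, 0, 0, cpl.drs.geometric_transforms.Combine.UNION)

# pisigma is None here because no sigmas vector was given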

cpl.drs.geometric_transforms.offset_fine(ilist: cpl.core.ImageList, estimates: cpl.core.Bivector, anchors: cpl.core.Bivector, search_hx: int, search_hy: int, measure_hx: int, measure_hy: int) Tuple[cpl.core.Bivector, cpl.core.Vector]

Get the offsets by correlating the images

The images in the input list must only differ from a shift. In order for the correlation to work, they must have the same level (check the average values of your input images if the correlation does not work).

The supported image types are cpl.core.Type.DOUBLE and cpl.core.Type.FLOAT. The bad pixel maps are ignored by this function.

Parameters:
  • ilist (cpl.core.ImageList) – Input image list

  • estimates (cpl.core.Bivector) – First-guess estimation of the offsets

  • anchors (cpl.core.Bivector) – List of cross-correlation points

  • search_hx (int) – Half-width of search area

  • search_hy (int) – Half-height of search area

  • measure_hx (int) – Half-width of the measurement area

  • measure_hy (int) – Half-height of the measurement area

Returns:

Tuple of the List of offsets and the list of cross-correlation quality factors, in the format (offsets, quality_factors).

Return type:

tuple(cpl.core.Bivector, cpl.core.Vector)

Notes

The matching is performed using a 2d cross-correlation, using a minimal squared differences criterion. One measurement is performed per input anchor point, and the median offset is returned together with a measure of similarity for each plane.

The images in the input list must only differ from a shift. In order for the correlation to work, they must have the same level (check the average values of your input images if the correlation does not work).

The ith offset (offsets.x, offsets.y) in the returned offsets is the one that has to be used to shift the ith image to align it on the reference image (the first one).

Raises:

cpl.core.IllegalInputError – if ilist is not valid

cpl.drs.geometric_transforms.offset_saa(ilist: cpl.core.ImageList, offs: cpl.core.Bivector, kernel: cpl.core.Kernel, rejmin: int, rejmax: int, union_flag: cpl.drs.geometric_transforms.Combine) object

Shift and add an images list to a single image

The supported types are cpl.core.Type.DOUBLE, cpl.core.Type.FLOAT.

The number of provided offsets shall be equal to the number of input images. The ith offset (offs_x, offs_y) is the offset that has to be used to shift the ith image to align it on the first one.

The following kernel types are supported when being passed to kernel:

  • cpl.core.Kernel.DEFAULT: default kernel, currently cpl.core.Kernel.TANH

  • cpl.core.Kernel.TANH: Hyperbolic tangent

  • cpl.core.Kernel.SINC: Sinus cardinal

  • cpl.core.Kernel.SINC2: Square sinus cardinal

  • cpl.core.Kernel.LANCZOS: Lanczos2 kernel

  • cpl.core.Kernel.HAMMING: Hamming kernel

  • cpl.core.Kernel.HANN: Hann kernel

  • cpl.core.Kernel.NEAREST: Nearest neighbor kernel (1 when dist < 0.5, else 0)

If the number of input images is less than or equal to 3, the rejection parameters are ignored. If the number of input images is less than or equal to 2*(rejmin+rejmax), the rejection parameters are ignored.

Pixels with a zero in the contribution map are flagged as bad in the combined image.

The return values ppos_x and ppos_y follow the PyCPL standard, where the lower-leftmost pixel of the output image is at (0, 0). Note that this differs from the corresponding CPL function, where the lower-leftmost pixel of the output image is at (1, 1).

Parameters:
  • ilist (cpl.core.ImageList) – Input image list

  • offs (cpl.core.Bivector) – List of offsets in x and y

  • kernel (cpl.core.Kernel) – Interpolation kernel to use for resampling. See extended summary for supported kernel types

  • rejmin (int) – Number of minimum value pixels to reject in stacking

  • rejmax (int) – Number of maximum value pixels to reject in stacking

  • union_flag (cpl.drs.geometric_transforms.Combine) – Combination mode: cpl.drs.geometric_transforms.Combine.UNION, cpl.drs.geometric_transforms.Combine.INTERSECT or cpl.drs.geometric_transforms.Combine.FIRST

Returns:

NamedTuple in the format (combined, contribution, ppos_x, ppos_y) where:

  • combined: the combined image

  • contribution: the contribution map

  • ppos_x: X-position of the first image in the combined image

  • ppos_y: Y-position of the first image in the combined image

ppos_x and ppos_y represent the pixel coordinate in the created output image-pair combined and contribution where the lowermost-leftmost pixel of the first input image is located. So with cpl.drs.geometric_transforms.Combine.FIRST this will always be (0, 0).

Return type:

NamedTuple(cpl.core.Image, cpl.core.Image, float, float)

Raises:

cpl.drs.photom

High-level functions that are photometry related

class cpl.drs.photom.Unit

Members:

PHOTONRADIANCE

ENERGYRADIANCE

LESS

LENGTH

FREQUENCY

property name
cpl.drs.photom.fill_blackbody(out_unit: cpl.drs.photom.Unit, evalpoints: cpl.core.Vector, in_unit: cpl.drs.photom.Unit, temp: float) cpl.core.Vector

The Planck radiance from a black-body

Parameters:
  • out_unit (cpl.drs.photom.Unit) – cpl.drs.photom.Unit.PHOTONRADIANCE, cpl.drs.photom.Unit.ENERGYRADIANCE or cpl.drs.photom.Unit.LESS

  • evalpoints (cpl.core.Vector) – The evaluation points (wavelengths or frequencies)

  • in_unit (cpl.drs.photom.Unit) – cpl.drs.photom.Unit.LENGTH or cpl.drs.photom.Unit.FREQUENCY

  • temp (float) – The black body temperature [K]

Returns:

The computed radiance

Return type:

cpl.core.Vector

Raises:

Notes

The Planck black-body radiance can be computed in 5 different ways: As a radiance of either energy [J*radian/s/m^3] or photons [radian/s/m^3], and in terms of either wavelength [m] or frequency [1/s]. The fifth way is as a unit-less radiance in terms of wavelength, in which case the area under the Planck curve is 1. The dimension of the spectrum (energy or photons or unit-less, cpl.drs.photom.Unit.LESS) is controlled by out_unit, and the dimension of the input (length or frequency) is controlled by in_unit.

evalpoints must be of positive length; the returned radiance vector has the same length.

The input wavelengths/frequencies and the temperature must be positive.

The five radiance formulas are:

\[Rph1(\lambda,T) = 2 \pi \frac{c}{\lambda^4} (\exp(hc/kT\lambda)-1)^{-1}\]
\[Rph2(\nu,T) = 2 \pi \frac{\nu^2}{c^4} (\exp(h\nu/kT)-1)^{-1}\]
\[Re1(\lambda,T) = 2 \pi \frac{hc^2}{\lambda^5} (\exp(hc/kT\lambda)-1)^{-1} = \frac{hc}{\lambda} Rph1(\lambda,T)\]
\[Re2(\nu,T) = 2 \pi \frac{h\nu^3}{c^2} (\exp(h\nu/kT)-1)^{-1} = h\nu Rph2(\nu,T)\]
\[R1(\lambda,T) = \frac{15h^5c^5}{\pi^4k^5\lambda^5T^5} (\exp(hc/kT\lambda)-1)^{-1} = \frac{h^4c^3}{2\pi^5k^5T^5} Rph1(\lambda,T)\]

where \(\lambda\) is the wavelength, \(\nu\) is the frequency, \(T\) is the temperature, h is the Planck constant, k is the Boltzmann constant and c is the speed of light in vacuum.

When the radiance is computed in terms of wavelength, the radiance peaks at \(\lambda_{max} = 2.897771955\times 10^{-3}/T\) [m]. When the radiance is unit-less this maximum, \(R1(\lambda_{max},T)\), is approximately 3.2648. \(R1(\lambda,T)\) integrated over \(\lambda\) from 0 to infinity is 1.
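
A minimal sketch (assuming cpl.core.Vector accepts a Python list; the wavelength grid and temperature are illustrative only):

import cpl.core
import cpl.drs

# Wavelength grid from 400 nm to 900 nm, in metres
wavelengths = cpl.core.Vector([400e-9 + i * 10e-9 for i in range(51)])

# Energy radiance of a T = 5778 K black body at those wavelengths
radiance = cpl.drs.photom.fill_blackbody(
    cpl.drs.photom.Unit.ENERGYRADIANCE, wavelengths,
    cpl.drs.photom.Unit.LENGTH, 5778.0)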

cpl.drs.ppm

Point Pattern Matching Module

cpl.drs.ppm.match_points(data: cpl.core.Matrix, use_data: int, err_data: float, pattern: cpl.core.Matrix, use_pattern: int, err_pattern: float, tolerance: float, radius: float) object

Match 2-D distributions of points.

Parameters:
  • data (cpl.core.Matrix) – List of data points (e.g., detected stars positions).

  • use_data (int) – Number of data points used for preliminary match.

  • err_data (float) – Error on data points positions.

  • pattern (cpl.core.Matrix) – List of pattern points (e.g., expected stars positions).

  • use_pattern (int) – Number of pattern points used for preliminary match.

  • err_pattern (float) – Error on pattern points positions.

  • tolerance (float) – Max relative difference of angles and scales from their median value for match acceptance.

  • radius (float) – Search radius applied in final matching (data units).

Returns:

A NamedTuple with the following elements:

  • matches (list of int) – Indexes of identified data points (pattern-to-data).

  • mdata (cpl.core.Matrix) – The list of identified data points.

  • mpattern (cpl.core.Matrix) – The list of matching pattern points.

  • lin_scale (float) – The linear transformation scale factor.

  • lin_angle (float) – The linear transformation rotation angle.

Return type:

NamedTuple(matches: List[int], mdata: cpl.core.Matrix, mpattern: cpl.core.Matrix, lin_scale: float, lin_angle: float)

Notes

A point is described here by its coordinates on a cartesian plane. The input matrices data and pattern must have 2 rows, as their column vectors are the points coordinates.

This function attempts to associate points in data to points in pattern, under the assumption that a transformation limited to scaling, rotation, and translation would convert positions in pattern into positions in data. Association between points is also indicated in the following as “match”, or “identification”.

Point identification is performed in two steps. In the first step only a subset of the points is identified (preliminary match). In the second step the identified points are used to define a first-guess transformation from pattern points to data points, that is applied to identify all the remaining points as well. The second step would be avoided if a use_pattern equal to the number of points in pattern is given, and exactly use_pattern points would be identified already in the first step.

First step:

All possible triangles (sub-patterns) are built using the first use_data points from data and the first use_pattern points from pattern. The values of use_data and use_pattern must always be at least 3 (however, see the note at the end), and should not be greater than the length of the corresponding lists of points. The point-matching algorithm goes as follow:

For every triplet of points:

Select one point as the reference. The triangle coordinates are defined by

((Rmin/Rmax)^2, theta_min, theta_max)

where Rmin (Rmax) is the shortest (longest) distance from the reference point to one of the two other points, and theta_min (theta_max) is the view angle in [0; 2pi[ to the nearest (farthest) point.

Triangles are computed by using each point in the triplet as reference, thereby computing 3 times as many triangles as needed.

The accuracy of triangle patterns is robust against distortions (i.e., systematic inaccuracies in the points positions) of the second order. This is because, if the points positions had constant statistical uncertainty, the relative uncertainty in the triangle coordinates would be inversely proportional to the triangle size, while if second order distortions are present the systematic error on points position would be directly proportional to the triangle size.

For every triangle derived from the pattern points:

Match with nearest triangle derived from data points if their distance in the parameter space is less than their uncertainties (propagated from the points positions uncertainties err_data and err_pattern). For every matched pair of triangles, record their scale ratio, and their orientation difference. Note that if both err_data and err_pattern are zero, the tolerance in triangle comparison will also be zero, and therefore no match will be found.

Get the median scale ratio and median angle of rotation, and reject matches with a relative variation greater than tolerance from the median of either quantity. The estimator of all the rotation angles a_i is computed as:

atan(med sin(a_i) / med cos(a_i))

Second step:

From the safely matched triangles, a list of identified points is derived, and the best transformation from pattern points to data points (in terms of best rotation angle, best scaling factor, and best shift) is applied to attempt the identification of all the points that are still without match. This matching is made by selecting for each pattern point the data point which is closest to its transformed position, and at a distance less than radius.

The returned array of integers is as long as the number of points in pattern, and each element reports the position of the matching point in data (counted starting from zero), or is invalid if no match was found for the pattern point. For instance, if element N of the array has value M, it means that the Nth point in pattern matches the Mth point in data.

Two more matrices mdata and mpattern will be returned with the coordinates of the identified points. These two matrices will both have the same size: 2 rows, and as many columns as successfully identified points. Matching points will be in the same column of both matrices.

lin_scale is returned with a good estimate of the scale (distance_in_data = lin_scale * distance_in_pattern). This makes sense only in case the transformation between pattern and data is an affine transformation. In case of failure, lin_scale is set to zero.

lin_angle is returned with a good estimate of the rotation angle between pattern and data in degrees (counted counterclockwise, from -180 to +180, and with data_orientation = pattern_orientation + lin_angle). This makes sense only in case the transformation between pattern and data is an affine transformation. In case of failure, lin_angle is set to zero.

The returned values for lin_scale and lin_angle have the only purpose of providing a hint on the relation between pattern points and data points. This function doesn’t attempt in any way to determine or even suggest a possible transformation between pattern points and data points: this function just matches points, and it is entirely the responsibility of the caller to fit the appropriate transformation between one coordinate system and the other. A polynomial transformation of degree 2 from pattern to data may be fit in the following way (assuming that mpattern and mdata are available):

x = cpl.core.Vector(mdata[0,:])
y = cpl.core.Vector(mdata[1,:])
x_transform = cpl.core.Polynomial(2)
y_transform = cpl.core.Polynomial(2)
x_transform.fit(mpattern, x, False, [x_transform.degree,])
y_transform.fit(mpattern, y, False, [y_transform.degree,])

The basic requirement for using this function is that the searched point pattern (or at least most of it) is contained in the data. As an indirect consequence of this, it would generally be appropriate to have more points in data than in pattern (and analogously, to have use_data greater than use_pattern), even if this is not strictly necessary.

Also, pattern and data should not contain too few points (say, fewer than 5 or 4) or the identification risks being incorrect: more points enable the construction of many more triangles, reducing the risk of ambiguity (multiple valid solutions). Special situations, involving regularities in patterns (as, for instance, input data containing just three equidistant points, or the case of a regular grid of points) would certainly provide an answer, and this answer would very likely be wrong (the human brain would fail as well, and for exactly the same reasons).

The reason why a two-step approach is encouraged here is mainly to enable an efficient use of this function: in principle, constructing all possible triangles using all of the available points is never wrong, but it could become very slow, since a list of N points implies the evaluation of N*(N-1)*(N-2)/2 triangles, and an even greater number of comparisons between triangles. Evaluating first a rough transformation based on a limited number of identified points, and then using this transformation to recover all the remaining points, may significantly speed up the whole identification process. However, it should again be ensured that the main requirement (i.e., that the searched point pattern is contained in the data) still holds for the selected subsets of points: a random choice would likely lead to a matching failure (due to too few, or no, common points).

A secondary reason for the two-step approach is to limit the effect of another class of ambiguities, which arises when either or both of the input matrices contain a very large number of uniformly distributed points: the likelihood of finding, purely by chance, several similar triangles at all scales and orientations may then increase to unacceptable levels.

A real example may clarify a possible way of using this function: let data contain the positions (in pixels) of detected stars on a CCD. Typically hundreds of star positions would be available, but only the brightest ones may be used for the preliminary identification. The input data positions will therefore be ordered from the brightest to the dimmest star positions. In order to identify stars, a star catalogue is needed. From a rough knowledge of the pointing position of the telescope and of the size of the field of view, a subset of stars can be selected from the catalogue: they will be stored in the pattern list, ordered as well by their brightness, and with their RA and Dec coordinates converted into standard coordinates (a gnomonic coordinate system centered on the telescope pointing, i.e., a cartesian coordinate system), no matter in what units of arc, and no matter what orientation of the field. For the first matching step, the 10 brightest catalogue stars may be selected (selecting fewer stars would perhaps be unsafe, while selecting more would likely make the program slower without producing any better result); use_pattern would therefore be set to 10. From the data side, it would generally be appropriate to select twice as many star positions, just to ensure that the searched pattern is present; use_data would therefore be set to 20. Reasonable values for tolerance and radius would be, respectively, 0.1 (a 10% variation of scales and angles) and 20 (pixels).
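
A minimal sketch of this two-step usage follows. The keyword names correspond to the quantities discussed above (data, pattern, use_data, use_pattern, tolerance, radius); whether match_points accepts them as keywords, and the exact composition of its return value, are assumptions made here for illustration only; refer to the signature documented above for the authoritative form.

# Hypothetical sketch of the star-identification example described above.
# data_positions and catalogue_positions are assumed to be 2 x N matrices of
# detected star positions (pixels) and catalogue standard coordinates,
# both ordered from brightest to faintest.
result = cpl.drs.ppm.match_points(
    data=data_positions,
    pattern=catalogue_positions,
    use_data=20,          # first step uses the 20 brightest detections
    use_pattern=10,       # and the 10 brightest catalogue stars
    tolerance=0.1,        # accept a 10% variation of scales and angles
    radius=20.0,          # 20 pixel search radius in the second step
)

The returned match array, together with mdata and mpattern, can then be passed to a polynomial fit as shown earlier.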

cpl.drs.ppm.match_positions(peaks: cpl.core.Vector, lines: cpl.core.Vector, min_disp: float, max_disp: float, tolerance: float) cpl.core.Bivector

Match 1-D patterns.

Parameters:
  • peaks (cpl.core.Vector) – List of observed positions (e.g., of emission peaks)

  • lines (cpl.core.Vector) – List of positions in searched pattern (e.g., wavelengths)

  • min_disp (float) – Min expected scale (e.g., spectral dispersion in A/pixel)

  • max_disp (float) – Max expected scale (e.g., spectral dispersion in A/pixel)

  • tolerance (float) – Tolerance for interval ratio comparison

Returns:

List of all matching point positions

Return type:

cpl.core.Bivector

Notes

This function attempts to find the reference pattern lines in a list of observed positions peaks. In the following documentation a terminology drawn from the context of arc lamp spectra calibration is used for simplicity: the reference pattern is a list of wavelengths corresponding to a set of reference arc lamp emission lines, the so-called line catalog, while the observed positions are the positions (in pixels) on the CCD, measured along the dispersion direction, of any significant peak of the signal. To identify the observed peaks means to associate them with the right reference wavelength. This is attempted here with a point-pattern matching technique, where the pattern is contained in the vector lines and is searched in the vector peaks.

In order to work, this method just requires a rough expectation value of the spectral dispersion (in Angstrom/pixel), and a line catalog. The line catalog should only include lines that are expected somewhere in the CCD exposure of the calibration lamp (note, however, that a catalog including extra lines at its blue and/or red ends is still allowed).

Typically, the list of arc lamp line candidates in peaks will include light contaminations, hot pixels, and other unwanted signal, but only in extreme cases does this prevent the pattern-recognition algorithm from identifying all the spectral lines. The pattern is detected even when peaks contains more arc lamp lines than are actually listed in the input line catalog.

This method is based on the assumption that the relation between wavelengths and CCD positions is with good approximation locally linear (this is always true, for any modern spectrograph).

The ratio between consecutive interval pairs in wavelength and in pixel is invariant under linear transformations, and therefore this quantity can be used in the recognition of local portions of the searched pattern. All the examined sub-patterns overlap, leading to the final identification of the whole pattern, notwithstanding the overall non-linearity of the relation between pixels and wavelengths.
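
For instance, if the local relation between wavelength \(\lambda\) and pixel position \(p\) is approximately linear, \(\lambda \approx a + b\,p\), the offset \(a\) and the dispersion \(b\) cancel in the ratio of consecutive intervals,

\[\frac{\lambda_{i+2} - \lambda_{i+1}}{\lambda_{i+1} - \lambda_{i}} \approx \frac{p_{i+2} - p_{i+1}}{p_{i+1} - p_{i}}\]

so the same ratio can be compared directly between the line catalog and the detected peaks.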

Ambiguous cases, caused by exceptional regularities in the pattern, or by a number of undetected (but expected) peaks that disrupt the pattern on the data, are recovered by linear interpolation and extrapolation of the safely identified peaks.

More details about the applied algorithm can be found in the comments to the function code.
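
A brief usage sketch follows. The numeric values are illustrative only, it is assumed that cpl.core.Vector accepts a Python sequence, and the way pixel positions and wavelengths map onto the two components of the returned cpl.core.Bivector is an assumption, not taken from this reference.

# Illustrative values: detected peak positions (pixels) and catalog wavelengths
# (Angstrom), consistent with a dispersion of roughly 2 A/pixel.
peaks = cpl.core.Vector([100.3, 326.2, 471.5, 601.1, 864.7])
lines = cpl.core.Vector([5400.56, 5852.49, 6143.06, 6402.25, 6929.47])

# Expected dispersion between 1.5 and 2.5 A/pixel, 10% tolerance on interval ratios.
matched = cpl.drs.ppm.match_positions(peaks, lines, 1.5, 2.5, 0.10)

# `matched` pairs each identified peak position with its reference wavelength.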

cpl.drs.wlcalib

Wavelength calibration functions

class cpl.drs.wlcalib.SlitModel

Line model to generate a spectrum.

The model comprises these elements:

  • Slit Width

  • FWHM of transfer function

  • Truncation threshold of the transfer function

  • Catalog of lines (typically arc or sky)

The X-values of the lines are lengths and are assumed to be in the same unit as the Y-values of the dispersion relation (e.g. meter); the slit width and the FWHM are assumed to be in the same unit as the X-values of the dispersion relation (e.g. pixel); the produced spectrum has the unit of the Y-values of the lines.

The line profile is truncated at this distance [pixel] from its maximum:

\[x_{\mathrm{max}} = w/2 + k\sigma\]

where w is the slit width, k is the threshold and \(\sigma = w_{\mathrm{FWHM}}/(2\sqrt{2\log(2)})\) where \(w_{\mathrm{FWHM}}\) is the Full Width at Half Maximum (FWHM) of the transfer function.
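
As a numerical illustration (the values here are chosen only for the example): with a slit width of \(w = 3\) pixels, a transfer-function FWHM of 2 pixels and the default threshold \(k = 5\), one gets \(\sigma \approx 2/2.355 \approx 0.85\) pixels and hence \(x_{\mathrm{max}} \approx 1.5 + 5 \times 0.85 \approx 5.7\) pixels, so each line profile is evaluated out to about 5.7 pixels from its maximum.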

Parameters:
  • catalog (cpl.core.Bivector) – the catalog of lines to be used by the spectrum filler

  • wfwhm (float) – the FWHM of the transfer function to be used by the spectrum filler

  • wslit (float) – the slit width to be used by the spectrum filler

  • spectrum_size (int) – The size of the spectrum, returned by the spectrum filler functions

  • threshold (float) – The threshold for truncating the transfer function, default 5 (recommended).
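
A minimal construction sketch, assuming line_catalog is a cpl.core.Bivector whose X-values (increasing) hold the catalog wavelengths and whose Y-values hold the line intensities, and assuming the constructor accepts the documented parameters as keywords; all numeric values are illustrative only.

model = cpl.drs.wlcalib.SlitModel(
    catalog=line_catalog,    # cpl.core.Bivector, X-values strictly increasing
    wfwhm=2.0,               # FWHM of the transfer function [pixel]
    wslit=3.0,               # slit width [pixel]
    spectrum_size=2048,      # length of the generated spectra [pixel]
    threshold=5.0,           # recommended truncation threshold
)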

fill_line_spectrum(self: cpl.drs.wlcalib.SlitModel, dispersion: cpl.core.Polynomial) cpl.core.Vector

Generate a 1D spectrum from the model's line intensities and a dispersion relation.

Parameters:

dispersion (cpl.core.Polynomial) – 1D-Dispersion relation, at least of degree 1

Returns:

A vector of length self.spectrum_size, containing the generated spectrum.

Return type:

cpl.core.Vector

Notes

Each line profile is given by the convolution of the Dirac delta function with a Gaussian with \(\sigma = w_{\mathrm{FWHM}}/(2\sqrt{2\log(2)})\) and a top-hat with the slit width as width. This continuous line profile is then integrated over each pixel, wherever the intensity is above the threshold set by the given model. For a given line the value on a given pixel requires the evaluation of two calls to erf().
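
For reference, the continuous profile implied by this model can be written explicitly. Assuming the Gaussian and the top-hat are both normalized to unit area, a line of total flux A centred at \(x_0\) has the profile

\[I(x) = \frac{A}{2w}\left[\mathrm{erf}\left(\frac{x - x_0 + w/2}{\sigma\sqrt{2}}\right) - \mathrm{erf}\left(\frac{x - x_0 - w/2}{\sigma\sqrt{2}}\right)\right]\]

and the erf() evaluations mentioned in these notes arise from this expression and its integral over each pixel.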

fill_line_spectrum_fast(self: cpl.drs.wlcalib.SlitModel, dispersion: cpl.core.Polynomial) cpl.core.Vector

Generate a 1D spectrum from the model's line intensities and a dispersion relation, approximating the line profile for speed.

The approximation preserves the position of the maximum, the symmetry and the flux of the line profile.

The fast spectrum generation can be useful when the model spectrum includes many catalog lines.

Parameters:

dispersion (cpl.core.Polynomial) – 1D-Dispersion relation, at least of degree 1

Returns:

A vector of length self.spectrum_size, containing the generated spectrum.

Return type:

cpl.core.Vector

Notes

Each line profile is given by the convolution of the Dirac delta function with a Gaussian with \(\sigma = w_{\mathrm{FWHM}}/(2\sqrt{2\log(2)})\) and a top-hat with the slit width as width. This continuous line profile is then integrated over each pixel, wherever the intensity is above the threshold set by the given model. The use of a given line in a spectrum requires the evaluation of four calls to erf().

fill_logline_spectrum(self: cpl.drs.wlcalib.SlitModel, dispersion: cpl.core.Polynomial) cpl.core.Vector

Generate a 1D spectrum from log(1 + the model's line intensities) and a dispersion relation.

Parameters:

dispersion (cpl.core.Polynomial) – 1D-Dispersion relation, at least of degree 1

Returns:

A vector of length self.spectrum_size, containing the generated spectrum.

Return type:

cpl.core.Vector

Notes

Each line profile is given by the convolution of the Dirac delta function with a Gaussian with \(\sigma = w_{\mathrm{FWHM}}/(2\sqrt{2\log(2)})\) and a top-hat with the slit width as width. This continuous line profile is then integrated over each pixel, wherever the intensity is above the threshold set by the given model. For a given line the value on a given pixel requires the evaluation of two calls to erf().

fill_logline_spectrum_fast(self: cpl.drs.wlcalib.SlitModel, dispersion: cpl.core.Polynomial) cpl.core.Vector

Generate a 1D spectrum from log(1 + the model's line intensities) and a dispersion relation, approximating the line profile for speed.

The approximation preserves the position of the maximum, the symmetry and the flux of the line profile.

The fast spectrum generation can be useful when the model spectrum includes many catalog lines.

Parameters:

dispersion (cpl.core.Polynomial) – 1D-Dispersion relation, at least of degree 1

Returns:

A vector of length self.spectrum_size, containing the generated spectrum.

Return type:

cpl.core.Vector

Notes

Each line profile is given by the convolution of the Dirac delta function with a Gaussian with \(\sigma = w_{\mathrm{FWHM}}/(2\sqrt{2\log(2)})\) and a top-hat with the slit width as width. This continuous line profile is then integrated over each pixel, wherever the intensity is above the threshold set by the given model. The use of a given line in a spectrum requires the evaluation of four calls to erf().

find_best_1d(self: cpl.drs.wlcalib.SlitModel, spectrum: cpl.core.Vector, wl_search: cpl.core.Vector, nsamples: int, hsize: int, filler: function, guess: cpl.core.Polynomial | None = None) object

Find the best 1D dispersion polynomial in a given search space

Find the polynomial that maximizes the cross-correlation between an observed 1D-spectrum and a model spectrum based on the polynomial dispersion relation.

Parameters:
  • spectrum (cpl.core.Vector) – The vector with the observed 1D-spectrum

  • wl_search (cpl.core.Vector) – Search range around the anchor points

  • nsamples (int) – Number of samples around the anchor points

  • hsize (int) – Maximum (pixel) displacement of the polynomial guess

  • filler (function(cpl.core.Vector, cpl.core.Polynomial)) –

    The function used to make the spectrum. Currently only supports fill functions in cpl.drs.wlcalib, including:

    • cpl.drs.wlcalib.SlitModel.fill_line_spectrum

    • cpl.drs.wlcalib.SlitModel.fill_line_spectrum_fast

    • cpl.drs.wlcalib.SlitModel.fill_logline_spectrum

    • cpl.drs.wlcalib.SlitModel.fill_logline_spectrum_fast

  • guess (cpl.core.Polynomial, optional) – 1D-polynomial with the guess. If not given, the guess will simply be a 1D polynomial with no coefficients

Returns:

NamedTuple in the format (result, xcmax, xcoors) where:

  • result: the resulting best 1D dispersion polynomial

  • xcmax: the maximum cross-correlation

  • xcoors: the correlation values

Return type:

NamedTuple(cpl.core.Polynomial, float, cpl.core.Vector)
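
A usage sketch of this method is given below. It assumes model is a configured SlitModel, observed is a cpl.core.Vector with the measured 1D spectrum, and guess is a cpl.core.Polynomial holding a rough dispersion estimate; whether the method accepts keyword arguments, whether cpl.core.Vector accepts a Python sequence, and the precise meaning of the wl_search values are assumptions made for illustration only.

# Hypothetical sketch: refine a rough dispersion guess by maximizing the
# cross-correlation between the observed spectrum and generated model spectra.
best = model.find_best_1d(
    spectrum=observed,
    wl_search=cpl.core.Vector([5.0, 0.01]),   # assumed: search range per anchor point
    nsamples=25,                              # samples around each anchor point
    hsize=10,                                 # max pixel displacement of the guess
    filler=cpl.drs.wlcalib.SlitModel.fill_line_spectrum_fast,
    guess=guess,
)
result, xcmax, xcoors = best                  # NamedTuple layout documented above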

property catalog

Set the catalog of lines to be used by the spectrum filler

The values in the X-vector must be increasing. The catalog values are copied into the SlitModel, so modifying the passed Bivector will not affect the internal SlitModel catalog, and vice versa.

property threshold

The threshold for truncating the transfer function, to be used by the spectrum filler.

The threshold should be high enough to ensure a good line profile, but not so high that the spectrum generation becomes too costly. 5 is the CPL-recommended value.

property wfwhm

Set the FWHM of the transfer function to be used by the spectrum filler.

property wslit

Slit width to be used by the spectrum filler.