mrpro.operators.GridSamplingOp

class mrpro.operators.GridSamplingOp(grid: Tensor, input_shape: SpatialDimension, interpolation_mode: Literal['bilinear', 'nearest', 'bicubic'] = 'bilinear', padding_mode: Literal['zeros', 'border', 'reflection'] = 'zeros', align_corners: bool = False)[source]

Bases: LinearOperator

Grid Sampling Operator.

Given an “input” tensor and a “grid”, computes the output by sampling the input values at the locations determined by the grid, with interpolation. The output size is therefore determined by the grid size. For the adjoint to be defined, both the grid and the shape of the “input” have to be known.
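A minimal usage sketch (the exact input shape conventions and the ordering of the last grid dimension are assumptions based on the parameter documentation below, not verified behavior): construct the operator from a 2D grid of normalized coordinates and sample an image with it.

    import torch
    from mrpro.data import SpatialDimension
    from mrpro.operators import GridSamplingOp

    # Regular 2D sampling grid of shape (y, x, 2) with values in [-1, 1].
    # The (x, y) ordering of the last dimension follows
    # torch.nn.functional.grid_sample and is an assumption here.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, 64), torch.linspace(-1, 1, 64), indexing='ij'
    )
    grid = torch.stack((xs, ys), dim=-1)

    operator = GridSamplingOp(
        grid=grid,
        input_shape=SpatialDimension(z=1, y=128, x=128),
        interpolation_mode='bilinear',
    )

    # Forward pass: sample the 128x128 image at the 64x64 grid locations.
    image = torch.randn(128, 128)
    (sampled,) = operator(image)  # output shape follows the grid, here assumed (64, 64)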

__init__(grid: Tensor, input_shape: SpatialDimension, interpolation_mode: Literal['bilinear', 'nearest', 'bicubic'] = 'bilinear', padding_mode: Literal['zeros', 'border', 'reflection'] = 'zeros', align_corners: bool = False)[source]

Initialize Sampling Operator.

Parameters:
  • grid – sampling grid. Shape (*batchdim, z, y, x, 3) for 3D sampling or (*batchdim, y, x, 2) for 2D sampling. Values should be in [-1, 1].

  • input_shape – used in the adjoint; the (z, y, x) shape of the domain of the operator. If the last dimension of grid is 2, only y and x will be used.

  • interpolation_mode – mode used for interpolation. bilinear is trilinear in 3D; bicubic is only supported in 2D.

  • padding_mode – how the input of the forward pass is padded.

  • align_corners – if True, the corner pixels of the input and output tensors are aligned, thus preserving the values at those pixels (see the normalization sketch after this list).
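The normalized-coordinate convention presumably matches that of torch.nn.functional.grid_sample, which these parameters mirror. As a reference, a small helper (hypothetical, not part of mrpro) mapping integer pixel indices to the expected [-1, 1] range under both align_corners conventions:

    import torch

    def normalize_indices(idx: torch.Tensor, size: int, align_corners: bool = False) -> torch.Tensor:
        """Map pixel indices 0 .. size-1 to normalized coordinates in [-1, 1]."""
        if align_corners:
            # -1 and 1 refer to the centers of the corner pixels
            return 2 * idx / (size - 1) - 1
        # -1 and 1 refer to the outer edges of the corner pixels
        return (2 * idx + 1) / size - 1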

adjoint(x: Tensor) → tuple[Tensor][source]

Apply the adjoint of the GridSamplingOp.

forward(x: Tensor) → tuple[Tensor][source]

Apply the GridSamplingOp.

Samples at the locations determined by the grid.
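As a sanity check, the adjoint satisfies the defining relation \(\langle Ax, y \rangle = \langle x, A^H y \rangle\). A sketch continuing the example above (shapes remain assumptions; for complex tensors one factor of each inner product would need conjugation):

    x = torch.randn(128, 128)       # element of the domain (input_shape y, x)
    (ax,) = operator(x)             # forward: sample x at the grid locations
    y = torch.randn_like(ax)        # element of the range (grid shape)
    (ahy,) = operator.adjoint(y)    # adjoint: scatter y back onto the domain

    assert torch.isclose(torch.sum(ax * y), torch.sum(x * ahy), rtol=1e-4)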

operator_norm(initial_value: Tensor, dim: Sequence[int] | None, max_iterations: int = 20, relative_tolerance: float = 0.0001, absolute_tolerance: float = 1e-05, callback: Callable[[Tensor], None] | None = None) → Tensor

Power iteration for computing the operator norm of the linear operator.

Parameters:
  • initial_value – initial value to start the iteration; if the initial value contains a zero-vector for one of the considered problems, the function raises a ValueError.

  • dim – the dimensions of the tensors on which the operator operates. Consider, for example, a batched matrix-vector multiplication with a batched matrix tensor of shape (4, 30, 80, 160) and input tensors of shape (4, 30, 160). With dim=None, the matrix representation of the operator is treated as a single block-diagonal operator (with 4*30 matrices on the diagonal), and the algorithm returns a tensor of shape (1, 1, 1) containing one single value. With dim=(-1,), the algorithm instead computes a batched operator norm and returns a tensor of shape (4, 30, 1) containing the operator norms of the individual matrices on the diagonal of the block-diagonal operator (when considered in matrix representation). In either case, the output has the same number of dimensions as the elements of the domain of the operator (whose dimensionality is implicitly defined by the choice of dim), so that the pointwise multiplication of the operator norm with elements of the domain (as used, for example, in a Landweber iteration) is well-defined.

  • max_iterations – maximum number of iterations

  • relative_tolerance – relative tolerance for the change of the operator norm at each iteration; if set to zero, the maximum number of iterations is the only stopping criterion of the power iteration

  • absolute_tolerance – absolute tolerance for the change of the operator norm at each iteration; if set to zero, the maximum number of iterations is the only stopping criterion of the power iteration

  • callback – user-provided function to be called at each iteration

Returns:

an estimation of the operator norm
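A sketch of estimating the norm of the grid sampling operator from the example above; with dim=None, a single scalar estimate over all dimensions is returned:

    initial = torch.randn(128, 128)  # must not be a zero vector
    norm = operator.operator_norm(initial, dim=None, max_iterations=30)
    # norm has as many dimensions as the domain elements, here assumed shape (1, 1)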

property H: LinearOperator

Adjoint operator.
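The H property exposes the adjoint as a LinearOperator, so, continuing the sketch above, the following two calls agree:

    (via_h,) = operator.H(y)
    (via_adjoint,) = operator.adjoint(y)
    assert torch.allclose(via_h, via_adjoint)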

property gram: LinearOperator

Gram operator.

For a LinearOperator \(A\), the self-adjoint Gram operator is defined as \(A^H A\).

Note: This is a default implementation that can be overridden by subclasses for more efficient implementations.
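Per the default implementation, applying the Gram operator agrees with applying the forward followed by the adjoint, as in this sketch continuing the example above:

    (gram_x,) = operator.gram(x)               # A^H A x in one call
    (aha_x,) = operator.adjoint(*operator(x))  # forward, then adjoint
    assert torch.allclose(gram_x, aha_x, atol=1e-5)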