mrpro.operators.functionals.L2NormSquared

class mrpro.operators.functionals.L2NormSquared[source]

Bases: ElementaryProximableFunctional

Functional class for the squared L2 Norm.

This implements the functional given by \(f: \mathbb{C}^N \rightarrow [0, \infty), \; x \mapsto \| W (x-b)\|_2^2\), where \(W\) is either a scalar or tensor that corresponds to a (block-) diagonal operator that is applied to the input. This is, for example, useful for non-Cartesian MRI reconstruction when using a density-compensation function for k-space pre-conditioning, for masking of image data, or for spatially varying regularization weights.

In most cases, consider setting divide_by_n to True to make the value independent of the input size. Alternatively, the functional mrpro.operators.functionals.MSE can be used. The norm is computed along the dimensions given at initialization; all other dimensions are treated as batch dimensions.
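For example, a minimal usage sketch (the shapes and the variable names kdata and dcf are illustrative assumptions, not part of the API):

    import torch
    from mrpro.operators.functionals import L2NormSquared

    kdata = torch.randn(4, 256, dtype=torch.complex64)  # target b, e.g. measured k-space data
    dcf = torch.rand(4, 256)                             # weight W, e.g. a density-compensation function
    x = torch.randn(4, 256, dtype=torch.complex64)

    # f(x) = || W (x - b) ||_2^2, reduced over the last dimension;
    # divide_by_n=True makes the value independent of the number of samples.
    functional = L2NormSquared(target=kdata, weight=dcf, dim=-1, divide_by_n=True)
    (value,) = functional(x)  # functionals return a 1-tuple of tensors
    print(value.shape)        # remaining dimensions are batch dimensions, here torch.Size([4])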

__init__(target: Tensor | None | complex = None, weight: Tensor | complex = 1.0, dim: int | Sequence[int] | None = None, divide_by_n: bool = False, keepdim: bool = False) → None[source]

Initialize a Functional.

We assume that functionals are given in the form \(f(x) = \phi ( weight ( x - target))\) for some functional \(\phi\).

Parameters:
  • target (Tensor | None | complex, default: None) – target element, often a data tensor (see above)

  • weight (Tensor | complex, default: 1.0) – weight parameter (see above)

  • dim (int | Sequence[int] | None, default: None) – dimension(s) over which the functional is reduced. All other dimensions of weight (x - target) are treated as batch dimensions.

  • divide_by_n (bool, default: False) – if True, the result is scaled by the number of elements of the dimensions indexed by dim in the tensor weight (x - target), i.e. the functional is calculated as the mean instead of the sum.

  • keepdim (bool, default: False) – if True, the dimension(s) of the input indexed by dim are kept as singleton dimensions; otherwise they are removed from the result.
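The interplay of dim, divide_by_n, and keepdim can be checked with plain tensors; a small sketch, assuming the reduction semantics described above:

    import torch
    from mrpro.operators.functionals import L2NormSquared

    x = torch.arange(6, dtype=torch.float32).reshape(2, 3)

    # keepdim=True keeps the reduced dimension as a singleton.
    (value,) = L2NormSquared(dim=-1, keepdim=True)(x)
    print(value.shape)  # torch.Size([2, 1])

    # divide_by_n=True divides by the number of reduced elements (here 3),
    # i.e. the mean along dim instead of the sum.
    (sum_value,) = L2NormSquared(dim=-1)(x)
    (mean_value,) = L2NormSquared(dim=-1, divide_by_n=True)(x)
    torch.testing.assert_close(mean_value, sum_value / 3)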

__call__(*args: Unpack) → Tout[source]

Apply the forward operator.

For more information, see forward.

Note

Prefer calling the operator instance directly, i.e. operator_instance(*parameters), which uses __call__, over calling forward.

forward(x: Tensor) → tuple[Tensor][source]

Forward method.

Compute the squared L2 norm of the input.

Parameters:

x (Tensor) – input tensor

Returns:

squared L2 norm of the input tensor
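A small sketch comparing the result to the definition \(f(x) = \| W (x-b)\|_2^2\), reducing over both dimensions (shapes and values are illustrative assumptions):

    import torch
    from mrpro.operators.functionals import L2NormSquared

    b = torch.randn(3, 5, dtype=torch.complex64)
    w = torch.rand(3, 5)
    x = torch.randn(3, 5, dtype=torch.complex64)

    (value,) = L2NormSquared(target=b, weight=w, dim=(-2, -1))(x)
    expected = (w * (x - b)).abs().square().sum()
    torch.testing.assert_close(value, expected)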

prox(x: Tensor, sigma: Tensor | float = 1.0) → tuple[Tensor][source]

Proximal Mapping of the squared L2 Norm.

Apply the proximal mapping of the squared L2 norm.

Parameters:
  • x (Tensor) – input tensor

  • sigma (Tensor | float, default: 1.0) – scaling factor

Returns:

Proximal mapping applied to the input tensor
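As a sanity check, for the default weight 1.0 and divide_by_n=False, \(f(x) = \|x-b\|_2^2\) has the closed-form proximal mapping \(\mathrm{prox}_{\sigma f}(x) = (x + 2\sigma b)/(1 + 2\sigma)\); a small sketch, assuming sigma scales the functional in this usual convention:

    import torch
    from mrpro.operators.functionals import L2NormSquared

    b = torch.randn(10)
    x = torch.randn(10)
    sigma = 0.5

    functional = L2NormSquared(target=b)  # f(x) = ||x - b||_2^2
    (p,) = functional.prox(x, sigma)

    # closed form of prox_{sigma*f}: (x + 2*sigma*b) / (1 + 2*sigma)
    expected = (x + 2 * sigma * b) / (1 + 2 * sigma)
    torch.testing.assert_close(p, expected)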

prox_convex_conj(x: Tensor, sigma: Tensor | float = 1.0) → tuple[Tensor][source]

Proximal mapping of the convex conjugate of the squared L2 norm.

Apply the proximal mapping of the convex conjugate of the squared L2 norm.

Parameters:
  • x (Tensor) – input tensor

  • sigma (Tensor | float, default: 1.0) – scaling factor

Returns:

Proximal of convex conjugate applied to the input tensor
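For the same \(f(x) = \|x-b\|_2^2\), the convex conjugate is \(f^*(y) = \langle b, y\rangle + \|y\|_2^2/4\), so \(\mathrm{prox}_{\sigma f^*}(x) = (x - \sigma b)/(1 + \sigma/2)\); a small sketch under the same assumptions (default weight, divide_by_n=False, the usual \(\mathrm{prox}_{\sigma f^*}\) convention):

    import torch
    from mrpro.operators.functionals import L2NormSquared

    b = torch.randn(10)
    x = torch.randn(10)
    sigma = 0.5

    functional = L2NormSquared(target=b)  # f(x) = ||x - b||_2^2
    (p,) = functional.prox_convex_conj(x, sigma)

    # closed form of prox_{sigma*f*}: (x - sigma*b) / (1 + sigma/2)
    expected = (x - sigma * b) / (1 + sigma / 2)
    torch.testing.assert_close(p, expected)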

__add__(other: Operator[Unpack, Tout]) → Operator[Unpack, Tout][source]
__add__(other: Tensor) → Operator[Unpack, tuple[Unpack]]

Operator addition.

Returns lambda x: self(x) + other(x) if other is an operator, lambda x: self(x) + other*x if other is a tensor.
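A small sketch of the stated semantics, assuming the sum of two functionals again returns a 1-tuple:

    import torch
    from mrpro.operators.functionals import L2NormSquared

    x = torch.randn(8)
    f = L2NormSquared()                      # ||x||_2^2
    g = L2NormSquared(target=torch.ones(8))  # ||x - 1||_2^2

    (summed,) = (f + g)(x)
    torch.testing.assert_close(summed, f(x)[0] + g(x)[0])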

__matmul__(other: Operator[Unpack, tuple[Unpack]]) → Operator[Unpack, Tout][source]

Operator composition.

Returns lambda x: self(other(x))

__mul__(other: Tensor | complex) → Operator[Unpack, Tout][source]

Operator multiplication with a tensor or scalar.

Returns lambda x: self(x*other)
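A small sketch of the stated semantics, i.e. that multiplication scales the input before the functional is applied:

    import torch
    from mrpro.operators.functionals import L2NormSquared

    x = torch.randn(8)
    f = L2NormSquared()

    (value,) = (f * 2.0)(x)
    torch.testing.assert_close(value, f(2.0 * x)[0])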

__or__(other: ProximableFunctional) → ProximableFunctionalSeparableSum[source]

Create a ProximableFunctionalSeparableSum object from two proximable functionals.

Parameters:

other (ProximableFunctional) – second functional to be summed

Returns:

ProximableFunctionalSeparableSum object
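A small sketch, assuming the separable sum evaluates each functional on its own input, i.e. (f | g)(x, y) = f(x) + g(y), and returns the summed value as a 1-tuple:

    import torch
    from mrpro.operators.functionals import L2NormSquared

    x = torch.randn(8)
    y = torch.randn(8)
    f = L2NormSquared()
    g = L2NormSquared(target=torch.ones(8))

    h = f | g  # ProximableFunctionalSeparableSum
    (value,) = h(x, y)
    torch.testing.assert_close(value, f(x)[0] + g(y)[0])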

__radd__(other: Tensor) → Operator[Unpack, tuple[Unpack]][source]

Operator right addition.

Returns lambda x: other*x + self(x)

__rmul__(scalar: Tensor | complex) → ProximableFunctional[source]

Multiply functional with scalar.