mrpro.data.KTrajectoryRawShape

class mrpro.data.KTrajectoryRawShape[source]

Bases: MoveDataMixin

K-space trajectory shaped ((other*k2*k1), k0).

Contains the k-space trajectory, i.e. a description of where each data point was acquired in k-space, in the raw shape as read from the data file, before any reshaping or sorting by indices is applied. The shape of each of kx, ky, kz is ((other*k2*k1), k0); this means that dimensions such as slices or averages have not yet been separated from the phase and slice encoding dimensions.

__init__(kz: Tensor, ky: Tensor, kx: Tensor, repeat_detection_tolerance: None | float = 0.001) None
classmethod from_tensor(tensor: Tensor, stack_dim: int = 0, axes_order: Literal['zxy', 'zyx', 'yxz', 'yzx', 'xyz', 'xzy'] = 'zyx', repeat_detection_tolerance: float | None = 1e-6, scaling_matrix: SpatialDimension | None = None) Self[source]

Create a KTrajectoryRawShape from a tensor representation of the trajectory.

Parameters:
  • tensor (Tensor) – The tensor representation of the trajectory. This should be a 5-dim tensor, with (kz, ky, kx) stacked in this order along stack_dim.

  • stack_dim (int, default: 0) – The dimension in the tensor along which the directions are stacked.

  • axes_order (Literal['zxy', 'zyx', 'yxz', 'yzx', 'xyz', 'xzy'], default: 'zyx') – The order of the axes in the tensor. The MRpro convention is ‘zyx’.

  • repeat_detection_tolerance (float | None, default: 1e-6) – Tolerance for detecting repeated dimensions (broadcasting). If trajectory points differ by less than this value, they are considered identical. Set to None to disable this feature.

  • scaling_matrix (SpatialDimension | None, default: None) – If a scaling matrix is provided, the trajectory is rescaled to fit within the dimensions of the matrix. If not provided, the trajectory remains unchanged.
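The role of axes_order can be pictured with a small pure-Python sketch (an illustration of the reordering idea only; the helper name is hypothetical and this is not mrpro's implementation):

```python
# Hypothetical sketch: split components stacked along the stack
# dimension into (kz, ky, kx) according to axes_order.
def unstack_axes(stacked, axes_order="zyx"):
    """stacked: the three trajectory components, in axes_order."""
    by_axis = dict(zip(axes_order, stacked))  # axis letter -> component
    return by_axis["z"], by_axis["y"], by_axis["x"]

# Components stored in 'xyz' order come back as (kz, ky, kx):
kz, ky, kx = unstack_axes(["x-comp", "y-comp", "z-comp"], axes_order="xyz")
```

Whatever the storage order of the input tensor, the result always follows the MRpro 'zyx' convention.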

kz: Tensor

(other*k2*k1,k0), phase encoding direction k2 if Cartesian.

ky: Tensor

(other*k2*k1,k0), phase encoding direction k1 if Cartesian.

kx: Tensor

(other*k2*k1,k0), frequency encoding direction k0 if Cartesian.

repeat_detection_tolerance: None | float

Tolerance for repeat detection. Set to None to disable.
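The idea behind such a tolerance can be sketched in pure Python (illustration only, not mrpro's actual detection code): rows of trajectory points that agree within the tolerance can be collapsed to a single broadcastable row.

```python
def is_repeated(rows, tol=1e-3):
    """True if every row equals the first row within tol, i.e. the
    dimension could be collapsed to size 1 and broadcast instead."""
    first = rows[0]
    return all(
        abs(a - b) <= tol for row in rows for a, b in zip(first, row)
    )

rows = [[0.0, 1.0, 2.0], [0.0004, 1.0, 2.0]]  # differ by less than 1e-3
collapsed = rows[:1] if is_repeated(rows) else rows  # one row kept
```

Collapsing repeated rows saves memory without changing the trajectory's meaning, since broadcasting restores the full shape on demand.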

property device: device | None[source]

Return the device of the tensors.

Looks at each field of the dataclass that implements a device attribute, such as torch.Tensor or MoveDataMixin instances. If the devices of the fields differ, an InconsistentDeviceError is raised; otherwise the device is returned. If no field implements a device attribute, None is returned.

Raises:

InconsistentDeviceError – If the devices of different fields differ.

Returns:

The device of the fields or None if no field implements a device attribute.
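The consistency rule can be sketched in plain Python (hypothetical stand-in types; the real property inspects torch.Tensor and MoveDataMixin fields):

```python
class InconsistentDeviceError(Exception):
    """Raised when fields live on different devices."""

def common_device(fields):
    """Shared device of all fields exposing one, None if none does."""
    devices = {f.device for f in fields if hasattr(f, "device")}
    if len(devices) > 1:
        raise InconsistentDeviceError(f"found devices: {sorted(devices)}")
    return devices.pop() if devices else None

class Field:  # stand-in for a tensor-like attribute
    def __init__(self, device):
        self.device = device
```

Here common_device([Field("cpu"), Field("cpu")]) returns 'cpu', a list with no device-bearing fields returns None, and a mix of 'cpu' and 'cuda:0' raises.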

property is_cpu: bool[source]

Return True if all tensors are on the CPU.

Checks all tensor attributes of the dataclass for their device (recursively, if an attribute is a MoveDataMixin).

Returns False if not all tensors are on the CPU or if the device is inconsistent; returns True if the dataclass has no tensor attributes.

property is_cuda: bool[source]

Return True if all tensors are on a single CUDA device.

Checks all tensor attributes of the dataclass for their device (recursively, if an attribute is a MoveDataMixin).

Returns False if not all tensors are on the same CUDA device or if the device is inconsistent; returns True if the dataclass has no tensor attributes.
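Both properties follow the same pattern; a plain-Python sketch (devices as strings, illustration only, not mrpro code):

```python
def all_on(fields, predicate):
    """True if every device-bearing field satisfies predicate and all
    devices agree; True as well if there are no such fields at all."""
    devices = [f.device for f in fields if hasattr(f, "device")]
    if not devices:
        return True                 # no tensor attributes
    if len(set(devices)) > 1:
        return False                # inconsistent devices
    return predicate(devices[0])

class Field:  # stand-in for a tensor-like attribute
    def __init__(self, device):
        self.device = device

all_on([Field("cpu"), Field("cpu")], lambda d: d == "cpu")               # is_cpu-like
all_on([Field("cuda:0"), Field("cpu")], lambda d: d.startswith("cuda"))  # is_cuda-like
```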

sort_and_reshape(sort_idx: ndarray, n_k2: int, n_k1: int) KTrajectory[source]

Resort and reshape the raw trajectory to KTrajectory.

This function sorts the raw trajectory and reshapes it into an mrpro.data.KTrajectory by separating the combined dimension (other k2 k1) into three separate dimensions.

Parameters:
  • sort_idx (ndarray) – Index that defines how the combined dimension (other k2 k1) has to be sorted so that it can be separated into three separate dimensions by a reshape operation.

  • n_k2 (int) – number of k2 points.

  • n_k1 (int) – number of k1 points.

Returns:

KTrajectory with kx, ky and kz each in the shape (other k2 k1 k0).
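The sort-then-reshape step can be illustrated with a pure-Python sketch on nested lists (not mrpro's tensor implementation; the function name is hypothetical):

```python
def sort_and_reshape_flat(flat, sort_idx, n_k2, n_k1):
    """Sort the combined (other*k2*k1) dimension, then split it into
    (other, k2, k1). Each element of `flat` is one k0 readout."""
    sorted_rows = [flat[i] for i in sort_idx]
    n_other = len(flat) // (n_k2 * n_k1)
    it = iter(sorted_rows)
    return [
        [[next(it) for _ in range(n_k1)] for _ in range(n_k2)]
        for _ in range(n_other)
    ]

# Four readouts acquired interleaved; sort_idx restores k2-major order:
traj = sort_and_reshape_flat([[0], [1], [2], [3]], [0, 2, 1, 3], 2, 2)
# -> [[[[0], [2]], [[1], [3]]]], i.e. shape (other=1, k2=2, k1=2, k0=1)
```

In the real method the same sorting and reshaping is applied to kz, ky and kx tensors at once.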

apply(function: Callable[[Any], Any] | None = None, *, recurse: bool = True) Self[source]

Apply a function to all children. Returns a new object.

Parameters:
  • function (Callable[[Any], Any] | None, default: None) – The function to apply to all fields. None is interpreted as a no-op.

  • recurse (bool, default: True) – If True, the function will be applied to all children that are MoveDataMixin instances.

apply_(function: Callable[[Any], Any] | None = None, *, memo: dict[int, Any] | None = None, recurse: bool = True) Self[source]

Apply a function to all children in-place.

Parameters:
  • function (Callable[[Any], Any] | None, default: None) – The function to apply to all fields. None is interpreted as a no-op.

  • memo (dict[int, Any] | None, default: None) – A dictionary to keep track of objects that the function has already been applied to, to avoid multiple applications. This is useful if the object has a circular reference.

  • recurse (bool, default: True) – If True, the function will be applied to all children that are MoveDataMixin instances.
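The role of memo can be sketched in plain Python: keying visited objects by id() lets the traversal terminate even for circular references (illustration only, with a hypothetical function name, not mrpro's implementation):

```python
def apply_inplace(obj, function, memo=None):
    """Apply `function` to leaf attributes in place; `memo` (keyed by
    id) ensures each object is visited once, so cycles terminate."""
    if memo is None:
        memo = {}
    if id(obj) in memo:
        return obj
    memo[id(obj)] = obj
    for name, child in list(vars(obj).items()):
        if hasattr(child, "__dict__"):   # recurse into nested objects
            apply_inplace(child, function, memo)
        else:
            setattr(obj, name, function(child))
    return obj

class Node:
    pass

a = Node()
a.value = 1
a.parent = a                 # circular reference
apply_inplace(a, lambda v: v + 1)   # terminates; a.value becomes 2
```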

clone() Self[source]

Return a deep copy of the object.

cpu(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]

Put in CPU memory.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of returned tensor.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

cuda(device: device | str | int | None = None, *, non_blocking: bool = False, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]

Put object in CUDA memory.

Parameters:
  • device (device | str | int | None, default: None) – The destination GPU device. Defaults to the current CUDA device.

  • non_blocking (bool, default: False) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of returned tensor.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

double(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]

Convert all float tensors to double precision.

Converts float to float64 and complex to complex128.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of returned tensor.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

half(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]

Convert all float tensors to half precision.

Converts float to float16 and complex to complex32.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of returned tensor.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

single(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]

Convert all float tensors to single precision.

Converts float to float32 and complex to complex64.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of returned tensor.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

to(device: str | device | int | None = None, dtype: dtype | None = None, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) Self[source]
to(dtype: dtype, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) Self
to(tensor: Tensor, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) Self

Perform dtype and/or device conversion of data.

A torch.dtype and torch.device are inferred from the arguments. Please have a look at the documentation of torch.Tensor.to for more details.

A new instance of the dataclass will be returned.

The conversion will be applied to all Tensor- or Module fields of the dataclass, and to all fields that implement the MoveDataMixin.

The dtype kind, i.e. float or complex, will always be preserved, but the precision of floating-point dtypes might be changed.

Example: If called with dtype=torch.float32 OR dtype=torch.complex64:

  • A complex128 tensor will be converted to complex64

  • A float64 tensor will be converted to float32

  • A bool tensor will remain bool

  • An int64 tensor will remain int64

If other conversions are desired, please use the to method of the fields directly.

If the copy argument is set to True, a deep copy will be returned even if no conversion is necessary. If two fields were views of the same data before, in the result they will be independent copies if copy is set to True or a conversion is necessary. If copy is set to False (the default), some tensors might be shared between the original and the new object.
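The kind-preserving rule from the example above can be sketched as a small lookup (dtype names as plain strings, illustration only; torch itself is not used here):

```python
# Precision (in bits per float component) of the relevant dtypes:
PRECISION = {"float32": 32, "float64": 64, "complex64": 32, "complex128": 64}

def resolve_dtype(field_dtype, requested):
    """Map a field's dtype to the requested precision, keeping its kind."""
    if field_dtype not in PRECISION:       # bool, int64, ... stay unchanged
        return field_dtype
    bits = PRECISION[requested]
    if field_dtype.startswith("complex"):
        return f"complex{2 * bits}"        # complex dtype counts both parts
    return f"float{bits}"

resolve_dtype("complex128", "float32")   # -> 'complex64'
resolve_dtype("float64", "complex64")    # -> 'float32'
resolve_dtype("int64", "float32")        # -> 'int64'
```

Requesting torch.float32 or torch.complex64 therefore has the same effect: all floating fields end up at 32-bit precision, each keeping its own kind.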

__eq__(other)

Return self==value.

__new__(**kwargs)