mrpro.data.CsmData

class mrpro.data.CsmData(data: Tensor, header: KHeader | IHeader | QHeader)[source]

Bases: QData

Coil sensitivity map class.

__init__(data: Tensor, header: KHeader | IHeader | QHeader) → None

Create QData object from a tensor and an arbitrary MRpro header.

Parameters:
  • data – quantitative image data tensor with dimensions (other, coils, z, y, x)

  • header – MRpro header containing the required metadata for the QHeader
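
Example (illustrative sketch; assumes idata is an existing mrpro.data.IData instance with one image per receive coil, e.g. from a prior coil-wise reconstruction, and that IData exposes its header as idata.header):

    from mrpro.data import CsmData

    # Estimate coil sensitivity maps from coil-wise images (assumed `idata`).
    csm = CsmData.from_idata_walsh(idata)
    print(csm.data.shape)  # (other, coils, z, y, x)

    # A CsmData object can also be built directly from a tensor and an
    # existing MRpro header; here the header is reused from the image data.
    csm_direct = CsmData(data=csm.data.clone(), header=idata.header)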

Methods

__init__(data, header)

Create QData object from a tensor and an arbitrary MRpro header.

as_operator()

Create SensitivityOp using a copy of the CSMs.

clone()

Return a deep copy of the object.

cpu(*[, memory_format, copy])

Put in CPU memory.

cuda([device, non_blocking, memory_format, copy])

Put object in CUDA memory.

double(*[, memory_format, copy])

Convert all float tensors to double precision.

from_idata_inati(idata[, smoothing_width, ...])

Create CSM object from image data using the Inati method.

from_idata_walsh(idata[, smoothing_width, ...])

Create CSM object from image data using the iterative Walsh method.

from_single_dicom(filename)

Read single DICOM file and return QData object.

half(*[, memory_format, copy])

Convert all float tensors to half precision.

single(*[, memory_format, copy])

Convert all float tensors to single precision.

to(*args, **kwargs)

Perform dtype and/or device conversion of data.

Attributes

data

Data.

device

Return the device of the tensors.

header

Header describing quantitative data.

is_cpu

Return True if all tensors are on the CPU.

is_cuda

Return True if all tensors are on a single CUDA device.

as_operator() → SensitivityOp[source]

Create SensitivityOp using a copy of the CSMs.
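
Example (illustrative sketch; assumes csm is an existing CsmData instance and that, as for other mrpro operators, the returned SensitivityOp is callable and returns a tuple of tensors):

    # Wrap the coil sensitivity maps in a linear operator.
    sensitivity_op = csm.as_operator()

    # Applying the operator expands a coil-combined image into coil-weighted
    # images; `combined_image` is an illustrative tensor of matching shape.
    (coil_images,) = sensitivity_op(combined_image)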

clone() → Self

Return a deep copy of the object.

cpu(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) → Self

Put in CPU memory.

Parameters:
  • memory_format – The desired memory format of the returned tensor.

  • copy – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

cuda(device: device | str | int | None = None, *, non_blocking: bool = False, memory_format: memory_format = torch.preserve_format, copy: bool = False) → Self

Put object in CUDA memory.

Parameters:
  • device – The destination GPU device. Defaults to the current CUDA device.

  • non_blocking – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • memory_format – The desired memory format of the returned tensor.

  • copy – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.
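
Example (illustrative sketch of moving a CsmData object between devices; the device index is arbitrary):

    csm_gpu = csm.cuda()          # move to the current CUDA device
    csm_gpu = csm.cuda(device=0)  # or to an explicit device index
    print(csm_gpu.is_cuda)        # True if all tensors are on one CUDA device
    csm_cpu = csm_gpu.cpu()       # move back to CPU memory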

data: torch.Tensor

Data. Shape (other, coils, z, y, x).

property device: device | None

Return the device of the tensors.

Looks at each field of the dataclass that implements a device attribute, such as torch.Tensor or MoveDataMixin instances. If the devices of the fields differ, an InconsistentDeviceError is raised; otherwise the device is returned. If no field implements a device attribute, None is returned.

Raises:

InconsistentDeviceError – If the devices of different fields differ.

Returns:

The device of the fields, or None if no field implements a device attribute.

double(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) → Self

Convert all float tensors to double precision.

Converts float to float64 and complex to complex128.

Parameters:
  • memory_format – The desired memory format of the returned tensor.

  • copy – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

classmethod from_idata_inati(idata: IData, smoothing_width: int | SpatialDimension[int] = 5, chunk_size_otherdim: int | None = None) → Self[source]

Create CSM object from image data using the Inati method.

Parameters:
  • idata – IData object containing the images for each coil element.

  • smoothing_width – Size of the smoothing kernel.

  • chunk_size_otherdim – How many elements of the other dimensions should be processed at once. Default is None, which means that all elements are processed at once.
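
Example (illustrative sketch; assumes an existing IData instance idata, and that SpatialDimension is constructed as SpatialDimension(z=..., y=..., x=...)):

    from mrpro.data import CsmData, SpatialDimension

    # Isotropic smoothing kernel of width 5 (the default).
    csm = CsmData.from_idata_inati(idata, smoothing_width=5)

    # Anisotropic smoothing, e.g. for a single-slice (z=1) acquisition.
    csm = CsmData.from_idata_inati(
        idata, smoothing_width=SpatialDimension(z=1, y=7, x=7)
    )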

classmethod from_idata_walsh(idata: IData, smoothing_width: int | SpatialDimension[int] = 5, power_iterations: int = 3, chunk_size_otherdim: int | None = None) → Self[source]

Create CSM object from image data using the iterative Walsh method.

Parameters:
  • idata – IData object containing the images for each coil element.

  • smoothing_width – Width of the smoothing filter.

  • power_iterations – Number of iterations used to determine the dominant eigenvector.

  • chunk_size_otherdim – How many elements of the other dimensions should be processed at once. Default is None, which means that all elements are processed at once.
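
Example (illustrative sketch; the parameter values are arbitrary, and chunk_size_otherdim mainly serves to limit memory use):

    csm = CsmData.from_idata_walsh(
        idata,                   # assumed IData instance with coil-wise images
        smoothing_width=5,       # width of the smoothing filter
        power_iterations=3,      # iterations of the dominant-eigenvector estimate
        chunk_size_otherdim=4,   # process 4 elements of the other dimensions at a time
    )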

classmethod from_single_dicom(filename: str | Path) → Self

Read single DICOM file and return QData object.

Parameters:

filename – Path to the DICOM file.
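
Example (the file path is hypothetical):

    from pathlib import Path
    from mrpro.data import CsmData

    csm = CsmData.from_single_dicom(Path("coil_sensitivities.dcm"))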

half(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) → Self

Convert all float tensors to half precision.

Converts float to float16 and complex to complex32.

Parameters:
  • memory_format – The desired memory format of the returned tensor.

  • copy – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

header: QHeader

Header describing quantitative data.

property is_cpu: bool

Return True if all tensors are on the CPU.

Checks all tensor attributes of the dataclass for their device (recursively if an attribute is a MoveDataMixin).

Returns False if not all tensors are on the CPU or if the device is inconsistent. Returns True if the dataclass has no tensors as attributes.

property is_cuda: bool

Return True if all tensors are on a single CUDA device.

Checks all tensor attributes of the dataclass for their device (recursively if an attribute is a MoveDataMixin).

Returns False if not all tensors are on the same CUDA device or if the device is inconsistent. Returns True if the dataclass has no tensors as attributes.

single(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) → Self

Convert all float tensors to single precision.

Converts float to float32 and complex to complex64.

Parameters:
  • memory_format – The desired memory format of the returned tensor.

  • copy – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

to(*args, **kwargs) → Self

Perform dtype and/or device conversion of data.

A torch.dtype and torch.device are inferred from the arguments args and kwargs. Please have a look at the documentation of torch.Tensor.to() for more details.

A new instance of the dataclass will be returned.

The conversion will be applied to all Tensor- or Module fields of the dataclass, and to all fields that implement the MoveDataMixin.

The dtype kind, i.e. float or complex, will always be preserved, but the precision of floating point dtypes might be changed.

Example: If called with dtype=torch.float32 OR dtype=torch.complex64:

  • A complex128 tensor will be converted to complex64

  • A float64 tensor will be converted to float32

  • A bool tensor will remain bool

  • An int64 tensor will remain int64

If other conversions are desired, please use the torch.Tensor.to() method of the fields directly.

If the copy argument is set to True (default), a deep copy will be returned even if no conversion is necessary. If two fields were views of the same data before, they will be independent copies in the result if copy is set to True or if a conversion is necessary. If copy is set to False, some tensors might be shared between the original and the new object.
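
Example (illustrative sketch of the dtype behaviour described above; assumes an existing CsmData instance csm with complex-valued data):

    import torch

    csm32 = csm.to(dtype=torch.float32)   # complex tensors become complex64
    csm_gpu = csm.to(device="cuda:0")     # device conversion only, dtype unchanged
    csm64 = csm.double()                  # shorthand for float64/complex128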