mrpro.data.IData

class mrpro.data.IData[source]

Bases: Data

MR image data (IData) class.

__init__(data: Tensor, header: IHeader) → None
classmethod from_dicom_files(filenames: Sequence[str] | Sequence[Path] | Generator[Path, None, None]) → Self[source]

Read multiple DICOM files and return an IData object.

DICOM images can be saved as single-frame or multi-frame images [DCMMF].

If the DICOM files are single-frame, we treat each file separately and stack them along the other dimension. If the DICOM files are multi-frame and the MRAcquisitionType is 3D, we treat the frame dimension as the z dimension. Otherwise, we move the frame dimension to the other dimension. Multiple multi-frame DICOM images are stacked along an additional other dimension before the frame dimension. Providing the list of files sorted by filename usually leads to a reasonable ordering of the data.

Parameters:

filenames (Sequence[str] | Sequence[Path] | Generator[Path, None, None]) – List of DICOM filenames.

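A minimal usage sketch (the folder path and file pattern are hypothetical); sorting the filenames usually gives a reasonable stacking order:

>>> from pathlib import Path
>>> from mrpro.data import IData
>>> filenames = sorted(Path('/data/scan1').glob('*.dcm'))  # hypothetical location
>>> idata = IData.from_dicom_files(filenames)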

classmethod from_dicom_folder(foldername: str | Path, suffix: str | None = 'dcm') → Self[source]

Read all DICOM files from a folder and return an IData object.

Parameters:
  • foldername (str | Path) – Path to the folder with DICOM files.

  • suffix (str | None, default: 'dcm') – File extension (without the period/full stop) used to identify the DICOM files. If None, all files in the folder are read.
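
A minimal sketch, assuming a folder of .dcm files at a hypothetical path:

>>> from mrpro.data import IData
>>> idata = IData.from_dicom_folder('/data/scan1')                   # only *.dcm files
>>> idata_all = IData.from_dicom_folder('/data/scan1', suffix=None)  # every file in the folder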

classmethod from_single_dicom(filename: str | Path) → Self[source]

Read a single DICOM file and return an IData object.

Parameters:

filename (str | Path) – Path to the DICOM file.
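
A minimal sketch with a hypothetical file path:

>>> from mrpro.data import IData
>>> idata = IData.from_single_dicom('/data/scan1/slice_001.dcm')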

classmethod from_tensor_and_kheader(data: Tensor, kheader: KHeader) → Self[source]

Create IData object from a tensor and a KHeader object.

Parameters:
  • data (Tensor) – Image data with dimensions (broadcastable to) (other, coils, z, y, x).

  • kheader (KHeader) – MR raw data header containing the required metadata for the image header.
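
A hedged sketch: the image tensor below is random data, and kheader is assumed to be a KHeader obtained elsewhere (for example, the header of raw data loaded with mrpro):

>>> import torch
>>> from mrpro.data import IData
>>> img = torch.randn(1, 8, 1, 256, 256, dtype=torch.complex64)  # (other, coils, z, y, x)
>>> idata = IData.from_tensor_and_kheader(img, kheader)          # kheader: an existing KHeader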

data: Tensor

Image data. Shape (...other, coils, z, y, x).

header: IHeader

Header for image data.

property device: device | None[source]

Return the device of the tensors.

Looks at each field of the dataclass implementing a device attribute, such as torch.Tensor or MoveDataMixin instances. If the devices of the fields differ, an InconsistentDeviceError is raised; otherwise the device is returned. If no field implements a device attribute, None is returned.

Raises:

InconsistentDeviceError – If the devices of different fields differ.

Returns:

The device of the fields or None if no field implements a device attribute.

property is_cpu: bool[source]

Return True if all tensors are on the CPU.

Checks all tensor attributes of the dataclass for their device (recursively, if an attribute is a MoveDataMixin).

Returns False if not all tensors are on the CPU or if the device is inconsistent; returns True if the dataclass has no tensors as attributes.

property is_cuda: bool[source]

Return True if all tensors are on a single CUDA device.

Checks all tensor attributes of the dataclass for their device (recursively, if an attribute is a MoveDataMixin).

Returns False if not all tensors are on the same CUDA device or if the device is inconsistent; returns True if the dataclass has no tensors as attributes.
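
A short sketch of how these properties behave, assuming an IData instance idata created by one of the constructors above:

>>> idata.device
device(type='cpu')
>>> idata.is_cpu
True
>>> idata.cuda().is_cuda   # requires a CUDA-capable GPU
True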

rss(keepdim: bool = False) → Tensor[source]

Root-sum-of-squares (RSS) over the coil dimension of the image data.

Parameters:

keepdim (bool, default: False) – If True, the output tensor has the same number of dimensions as the data tensor, and the coil dimension is kept as a singleton dimension. If False, the coil dimension is removed.

Returns:

Image data tensor with shape (..., 1, z, y, x) if keepdim is True, or (..., z, y, x) if keepdim is False.
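
For example, a sketch of coil combination, assuming idata.data has shape (1, 8, 1, 256, 256):

>>> combined = idata.rss()                   # shape (1, 1, 256, 256): coil dimension removed
>>> combined_keep = idata.rss(keepdim=True)  # shape (1, 1, 1, 256, 256): coil dimension kept as 1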

apply(function: Callable[[Any], Any] | None = None, *, recurse: bool = True) → Self[source]

Apply a function to all children. Returns a new object.

Parameters:
  • function (Callable[[Any], Any] | None, default: None) – The function to apply to all fields. None is interpreted as a no-op.

  • recurse (bool, default: True) – If True, the function will be applied to all children that are MoveDataMixin instances.

apply_(function: Callable[[Any], Any] | None = None, *, memo: dict[int, Any] | None = None, recurse: bool = True) → Self[source]

Apply a function to all children in-place.

Parameters:
  • function (Callable[[Any], Any] | None, default: None) – The function to apply to all fields. None is interpreted as a no-op.

  • memo (dict[int, Any] | None, default: None) – A dictionary to keep track of objects that the function has already been applied to, to avoid multiple applications. This is useful if the object has a circular reference.

  • recurse (bool, default: True) – If True, the function will be applied to all children that are MoveDataMixin instances.
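
A sketch of apply with an illustrative function that detaches every tensor field while leaving other fields untouched:

>>> import torch
>>> detached = idata.apply(lambda x: x.detach() if isinstance(x, torch.Tensor) else x)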

clone() → Self[source]

Return a deep copy of the object.

cpu(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) → Self[source]

Put in CPU memory.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of returned tensor.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

cuda(device: device | str | int | None = None, *, non_blocking: bool = False, memory_format: memory_format = torch.preserve_format, copy: bool = False) → Self[source]

Put object in CUDA memory.

Parameters:
  • device (device | str | int | None, default: None) – The destination GPU device. Defaults to the current CUDA device.

  • non_blocking (bool, default: False) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of returned tensor.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.
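
A sketch of moving an IData object between devices (assumes a CUDA device is available):

>>> idata_gpu = idata.cuda(non_blocking=True)  # copy to the current CUDA device
>>> idata_cpu = idata_gpu.cpu()                # copy back to CPU memory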

double(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) → Self[source]

Convert all float tensors to double precision.

Converts float to float64 and complex to complex128.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of returned tensor.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

half(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) → Self[source]

Convert all float tensors to half precision.

Converts float to float16 and complex to complex32.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of returned tensor.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

single(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) → Self[source]

Convert all float tensors to single precision.

Converts float to float32 and complex to complex64.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of returned tensor.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.
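
A sketch of the precision conversions; complex data follows the matching complex dtype:

>>> idata32 = idata.single()  # float64 -> float32, complex128 -> complex64
>>> idata16 = idata.half()    # float32 -> float16, complex64 -> complex32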

to(device: str | device | int | None = None, dtype: dtype | None = None, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) → Self[source]
to(dtype: dtype, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) → Self
to(tensor: Tensor, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) → Self

Perform dtype and/or device conversion of data.

A torch.dtype and torch.device are inferred from the arguments. Please have a look at the documentation of torch.Tensor.to for more details.

A new instance of the dataclass will be returned.

The conversion will be applied to all Tensor- or Module fields of the dataclass, and to all fields that implement the MoveDataMixin.

The dtype category, i.e. float or complex, will always be preserved, but the precision of floating-point dtypes might be changed.

Example: If called with dtype=torch.float32 or dtype=torch.complex64:

  • A complex128 tensor will be converted to complex64

  • A float64 tensor will be converted to float32

  • A bool tensor will remain bool

  • An int64 tensor will remain int64

If other conversions are desired, please use the to method of the fields directly.

If copy is set to True, a deep copy will be returned even if no conversion is necessary. If two fields were views of the same data before, in the result they will be independent copies if copy is set to True or a conversion is necessary. If copy is set to False (the default), some tensors might be shared between the original and the new object.
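
A sketch of to, showing that the float/complex category of each field is preserved:

>>> import torch
>>> idata32 = idata.to(dtype=torch.float32)  # complex fields become complex64, float fields float32
>>> idata_gpu = idata.to('cuda', copy=True)  # an independent deep copy on the GPU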

__eq__(other)

Return self==value.

__new__(**kwargs)