mrpro.data.KHeader

class mrpro.data.KHeader[source]

Bases: MoveDataMixin

MR raw data header.

All information that is not covered by the dataclass is stored in the misc dict. Our code shall not rely on this information, and it is not guaranteed to be present. Also, the information in the misc dict is not guaranteed to be correct or tested.

__init__(recon_matrix: SpatialDimension[int], encoding_matrix: SpatialDimension[int], recon_fov: SpatialDimension[float], encoding_fov: SpatialDimension[float], acq_info: AcqInfo = AcqInfo(), trajectory: KTrajectoryCalculator | None = None, lamor_frequency_proton: float | None = None, datetime: datetime | None = None, te: list[float] | Tensor = list(), ti: list[float] | Tensor = list(), fa: list[float] | Tensor = list(), tr: list[float] | Tensor = list(), echo_spacing: list[float] | Tensor = list(), echo_train_length: int = 1, sequence_type: str = 'unknown', model: str = 'unknown', vendor: str = 'unknown', protocol_name: str = 'unknown', trajectory_type: TrajectoryType = TrajectoryType.OTHER, measurement_id: str = 'unknown', patient_name: str = 'unknown', _misc: dict = dict()) None
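
A minimal construction sketch follows. The matrix and field-of-view values are placeholders, and it is assumed that SpatialDimension takes z, y and x components; only the reconstruction and encoding matrices and fields of view are required arguments.

  import torch
  from mrpro.data import KHeader, SpatialDimension

  # Placeholder geometry for a single-slice 2D acquisition (fields of view in m).
  header = KHeader(
      recon_matrix=SpatialDimension(z=1, y=256, x=256),
      encoding_matrix=SpatialDimension(z=1, y=256, x=512),
      recon_fov=SpatialDimension(z=0.005, y=0.256, x=0.256),
      encoding_fov=SpatialDimension(z=0.005, y=0.256, x=0.512),
      te=[0.003],                        # echo time [s]
      tr=[0.01],                         # repetition time [s]
      fa=torch.tensor([torch.pi / 18]),  # flip angle [rad], i.e. 10 degrees
  )
  print(header.recon_matrix, header.te)
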
classmethod from_ismrmrd(header: ismrmrdHeader, acq_info: AcqInfo, defaults: dict | None = None, overwrite: dict | None = None, encoding_number: int = 0) Self[source]

Create a KHeader from ISMRMRD data.

Parameters:
  • header (ismrmrdHeader) – ISMRMRD header

  • acq_info (AcqInfo) – acquisition information

  • defaults (dict | None, default: None) – dictionary of values to be used if information is missing in header

  • overwrite (dict | None, default: None) – dictionary of values to be used independent of header

  • encoding_number (int, default: 0) – selects which encoding to consider, as an ismrmrdHeader can contain multiple encodings
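
A hedged usage sketch, assuming an ISMRMRD raw-data file is available: 'raw.h5' is a placeholder path, AcqInfo.from_ismrmrd_acquisitions is an assumed constructor for the acquisition information, and the keys of defaults and overwrite are assumed to be KHeader field names.

  import ismrmrd
  from mrpro.data import AcqInfo, KHeader

  # Read the XML header and all acquisitions from the ISMRMRD file.
  dset = ismrmrd.Dataset('raw.h5', create_if_needed=False)
  ismrmrd_header = ismrmrd.xsd.CreateFromDocument(dset.read_xml_header())
  acquisitions = [dset.read_acquisition(i) for i in range(dset.number_of_acquisitions())]
  dset.close()

  # Assumed constructor name for building AcqInfo from ISMRMRD acquisitions.
  acq_info = AcqInfo.from_ismrmrd_acquisitions(acquisitions)

  # Use a 3 ms echo time only if the header does not provide one,
  # and always overwrite the patient name.
  header = KHeader.from_ismrmrd(
      ismrmrd_header,
      acq_info,
      defaults={'te': [0.003]},
      overwrite={'patient_name': 'anonymous'},
  )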

recon_matrix: SpatialDimension[int]

Dimensions of the reconstruction matrix.

encoding_matrix: SpatialDimension[int]

Dimensions of the encoded k-space matrix.

recon_fov: SpatialDimension[float]

Field-of-view of the reconstructed image [m].

encoding_fov: SpatialDimension[float]

Field of view of the image encoded by the k-space trajectory [m].

acq_info: AcqInfo

Information about the acquisitions (i.e. readout lines).

trajectory: KTrajectoryCalculator | None

Function to calculate the k-space trajectory.

lamor_frequency_proton: float | None

Larmor frequency of hydrogen nuclei [Hz].

datetime: datetime | None

Date and time of acquisition.

te: list[float] | Tensor

Echo time [s].

ti: list[float] | Tensor

Inversion time [s].

fa: list[float] | Tensor

Flip angle [rad].

tr: list[float] | Tensor

Repetition time [s].

echo_spacing: list[float] | Tensor

Echo spacing [s].

echo_train_length: int

Number of echoes in a multi-echo acquisition.

sequence_type: str

Type of sequence.

model: str

Scanner model.

vendor: str

Scanner vendor.

protocol_name: str

Name of the acquisition protocol.

trajectory_type: TrajectoryType

Type of trajectory.

measurement_id: str

Measurement ID.

patient_name: str

Name of the patient.

property fa_degree: Tensor | list[float][source]

Flip angle in degrees.
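
For example (placeholder geometry; flip angles are stored in radians in fa):

  import torch
  from mrpro.data import KHeader, SpatialDimension

  matrix = SpatialDimension(z=1, y=128, x=128)
  fov = SpatialDimension(z=0.005, y=0.256, x=0.256)  # [m], placeholder values
  header = KHeader(recon_matrix=matrix, encoding_matrix=matrix, recon_fov=fov, encoding_fov=fov,
                   fa=torch.tensor([torch.pi / 2, torch.pi / 4]))

  print(header.fa_degree)  # expected: tensor([90., 45.])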

property device: device | None[source]

Return the device of the tensors.

Looks at each field of the dataclass that implements a device attribute, such as torch.Tensor or MoveDataMixin instances. If the devices of the fields differ, an InconsistentDeviceError is raised; otherwise, the common device is returned. If no field implements a device attribute, None is returned.

Raises:

InconsistentDeviceError – If the devices of different fields differ.

Returns:

The device of the fields or None if no field implements a device attribute.

property is_cpu: bool[source]

Return True if all tensors are on the CPU.

Checks the device of all tensor attributes of the dataclass (recursively, if an attribute is a MoveDataMixin).

Returns False if any tensor is not on the CPU or if the devices are inconsistent; returns True if the dataclass has no tensor attributes.

property is_cuda: bool[source]

Return True if all tensors are on a single CUDA device.

Checks the device of all tensor attributes of the dataclass (recursively, if an attribute is a MoveDataMixin).

Returns False if the tensors are not all on the same CUDA device or if the devices are inconsistent; returns True if the dataclass has no tensor attributes.
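
A short sketch of the device-related properties (placeholder geometry):

  import torch
  from mrpro.data import KHeader, SpatialDimension

  matrix = SpatialDimension(z=1, y=128, x=128)
  fov = SpatialDimension(z=0.005, y=0.256, x=0.256)  # [m], placeholder values
  header = KHeader(recon_matrix=matrix, encoding_matrix=matrix, recon_fov=fov, encoding_fov=fov,
                   te=torch.tensor([0.003]))

  print(header.device)   # device of the tensor fields, e.g. device(type='cpu')
  print(header.is_cpu)   # True while every tensor field lives in CPU memory
  print(header.is_cuda)  # False unless all tensor fields share one CUDA device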

apply(function: Callable[[Any], Any] | None = None, *, recurse: bool = True) Self[source]

Apply a function to all children. Returns a new object.

Parameters:
  • function (Callable[[Any], Any] | None, default: None) – The function to apply to all fields. None is interpreted as a no-op.

  • recurse (bool, default: True) – If True, the function will be applied to all children that are MoveDataMixin instances.

apply_(function: Callable[[Any], Any] | None = None, *, memo: dict[int, Any] | None = None, recurse: bool = True) Self[source]

Apply a function to all children in-place.

Parameters:
  • function (Callable[[Any], Any] | None, default: None) – The function to apply to all fields. None is interpreted as a no-op.

  • memo (dict[int, Any] | None, default: None) – A dictionary to keep track of objects that the function has already been applied to, to avoid multiple applications. This is useful if the object has a circular reference.

  • recurse (bool, default: True) – If True, the function will be applied to all children that are MoveDataMixin instances.
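
A sketch of apply and apply_, assuming the function is called with every field value, so non-tensor fields must be passed through unchanged (placeholder geometry):

  import torch
  from mrpro.data import KHeader, SpatialDimension

  matrix = SpatialDimension(z=1, y=128, x=128)
  fov = SpatialDimension(z=0.005, y=0.256, x=0.256)  # [m], placeholder values
  header = KHeader(recon_matrix=matrix, encoding_matrix=matrix, recon_fov=fov, encoding_fov=fov,
                   te=torch.tensor([0.003]))

  def detach_tensors(value):
      # Detach tensors from the autograd graph; leave all other fields unchanged.
      return value.detach() if isinstance(value, torch.Tensor) else value

  detached = header.apply(detach_tensors)  # returns a new KHeader
  header.apply_(detach_tensors)            # modifies this KHeader in place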

clone() Self[source]

Return a deep copy of the object.

cpu(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]

Put in CPU memory.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of the returned tensors.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

cuda(device: device | str | int | None = None, *, non_blocking: bool = False, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]

Put object in CUDA memory.

Parameters:
  • device (device | str | int | None, default: None) – The destination GPU device. Defaults to the current CUDA device.

  • non_blocking (bool, default: False) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of the returned tensors.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.
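
A sketch of moving a header to GPU memory and back (placeholder geometry; only runs if CUDA is available):

  import torch
  from mrpro.data import KHeader, SpatialDimension

  matrix = SpatialDimension(z=1, y=128, x=128)
  fov = SpatialDimension(z=0.005, y=0.256, x=0.256)  # [m], placeholder values
  header = KHeader(recon_matrix=matrix, encoding_matrix=matrix, recon_fov=fov, encoding_fov=fov,
                   te=torch.tensor([0.003]))

  if torch.cuda.is_available():
      header_gpu = header.cuda(non_blocking=True)  # tensor fields on the current CUDA device
      header_cpu = header_gpu.cpu()                # and back in CPU memory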

double(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]

Convert all float tensors to double precision.

Converts float tensors to float64 and complex tensors to complex128.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of the returned tensors.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

half(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]

Convert all float tensors to half precision.

Converts float tensors to float16 and complex tensors to complex32.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of the returned tensors.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

single(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]

Convert all float tensors to single precision.

Converts float tensors to float32 and complex tensors to complex64.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of the returned tensors.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

to(device: str | device | int | None = None, dtype: dtype | None = None, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) Self[source]
to(dtype: dtype, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) Self
to(tensor: Tensor, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) Self

Perform dtype and/or device conversion of data.

The target torch.dtype and torch.device are inferred from the arguments. See the documentation of torch.Tensor.to for more details.

A new instance of the dataclass will be returned.

The conversion will be applied to all Tensor- or Module fields of the dataclass, and to all fields that implement the MoveDataMixin.

The dtype class, i.e. float or complex, is always preserved, but the precision of floating-point dtypes might be changed.

Example: If called with dtype=torch.float32 OR dtype=torch.complex64:

  • A complex128 tensor will be converted to complex64

  • A float64 tensor will be converted to float32

  • A bool tensor will remain bool

  • An int64 tensor will remain int64

If other conversions are desired, please use the to method of the fields directly.

If the copy argument is set to True, a deep copy will be returned even if no conversion is necessary. If two fields were views of the same data before, they will be independent copies in the result if copy is set to True or if a conversion is necessary. If copy is set to False, some tensors might be shared between the original and the new object.
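
A sketch of the dtype conversion behaviour described above (placeholder geometry): a float64 field becomes float32, while the float/complex distinction and integer or boolean dtypes are preserved.

  import torch
  from mrpro.data import KHeader, SpatialDimension

  matrix = SpatialDimension(z=1, y=128, x=128)
  fov = SpatialDimension(z=0.005, y=0.256, x=0.256)  # [m], placeholder values
  header = KHeader(recon_matrix=matrix, encoding_matrix=matrix, recon_fov=fov, encoding_fov=fov,
                   te=torch.tensor([0.003], dtype=torch.float64))

  header32 = header.to(dtype=torch.float32)
  print(header32.te.dtype)  # torch.float32

  # Device and dtype can be converted in a single call.
  if torch.cuda.is_available():
      header_gpu = header.to(device='cuda', dtype=torch.float32)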

__eq__(other)

Return self==value.

__new__(**kwargs)