mrpro.data.KHeader

class mrpro.data.KHeader(trajectory: KTrajectoryCalculator, encoding_limits: EncodingLimits, recon_matrix: SpatialDimension[int], recon_fov: SpatialDimension[float], encoding_matrix: SpatialDimension[int], encoding_fov: SpatialDimension[float], acq_info: AcqInfo, lamor_frequency_proton: float, datetime: datetime.datetime | None = None, te: torch.Tensor | None = None, ti: torch.Tensor | None = None, fa: torch.Tensor | None = None, tr: torch.Tensor | None = None, echo_spacing: torch.Tensor | None = None, echo_train_length: int = 1, sequence_type: str = 'unknown', model: str = 'unknown', vendor: str = 'unknown', protocol_name: str = 'unknown', calibration_mode: enums.CalibrationMode = CalibrationMode.OTHER, interleave_dim: enums.InterleavingDimension = InterleavingDimension.OTHER, trajectory_type: enums.TrajectoryType = TrajectoryType.OTHER, measurement_id: str = 'unknown', patient_name: str = 'unknown', _misc: dict = <factory>)[source]

Bases: MoveDataMixin

MR raw data header.

All information that is not covered by the dataclass is stored in the misc dict. Our code shall not rely on this information, and it is not guaranteed to be present. Also, the information in the misc dict is not guaranteed to be correct or tested.

__init__(trajectory: KTrajectoryCalculator, encoding_limits: EncodingLimits, recon_matrix: SpatialDimension[int], recon_fov: SpatialDimension[float], encoding_matrix: SpatialDimension[int], encoding_fov: SpatialDimension[float], acq_info: AcqInfo, lamor_frequency_proton: float, datetime: datetime.datetime | None = None, te: torch.Tensor | None = None, ti: torch.Tensor | None = None, fa: torch.Tensor | None = None, tr: torch.Tensor | None = None, echo_spacing: torch.Tensor | None = None, echo_train_length: int = 1, sequence_type: str = 'unknown', model: str = 'unknown', vendor: str = 'unknown', protocol_name: str = 'unknown', calibration_mode: enums.CalibrationMode = CalibrationMode.OTHER, interleave_dim: enums.InterleavingDimension = InterleavingDimension.OTHER, trajectory_type: enums.TrajectoryType = TrajectoryType.OTHER, measurement_id: str = 'unknown', patient_name: str = 'unknown', _misc: dict = <factory>) None
apply(function: Callable[[Any], Any] | None = None, *, recurse: bool = True) Self

Apply a function to all children. Returns a new object.

Parameters:
  • function – The function to apply to all fields. None is interpreted as a no-op.

  • recurse – If True, the function will be applied to all children that are MoveDataMixin instances.

apply_(function: Callable[[Any], Any] | None = None, *, memo: dict[int, Any] | None = None, recurse: bool = True) Self

Apply a function to all children in-place.

Parameters:
  • function – The function to apply to all fields. None is interpreted as a no-op.

  • memo – A dictionary to keep track of objects that the function has already been applied to, to avoid multiple applications. This is useful if the object has a circular reference.

  • recurse – If True, the function will be applied to all children that are MoveDataMixin instances.

clone() Self

Return a deep copy of the object.

cpu(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self

Put in CPU memory.

Parameters:
  • memory_format – The desired memory format of returned tensor.

  • copy – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

cuda(device: device | str | int | None = None, *, non_blocking: bool = False, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self

Put object in CUDA memory.

Parameters:
  • device – The destination GPU device. Defaults to the current CUDA device.

  • non_blocking – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • memory_format – The desired memory format of returned tensor.

  • copy – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

double(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self

Convert all float tensors to double precision.

Converts float tensors to float64 and complex tensors to complex128.

Parameters:
  • memory_format – The desired memory format of returned tensor.

  • copy – If True, the returned tensor will always be a copy, even if the input already has the requested dtype. This will also create new tensors for views.

classmethod from_ismrmrd(header: ismrmrdHeader, acq_info: AcqInfo, defaults: dict | None = None, overwrite: dict | None = None, encoding_number: int = 0) Self[source]

Create a Header from ISMRMRD data.

Parameters:
  • header – ISMRMRD header

  • acq_info – acquisition information

  • defaults – dictionary of values to be used if information is missing in header

  • overwrite – dictionary of values to be used independent of header

  • encoding_number – Selects which encoding to use, since an ismrmrdHeader can contain multiple encodings.

half(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self

Convert all float tensors to half precision.

Converts float tensors to float16 and complex tensors to complex32.

Parameters:
  • memory_format – The desired memory format of returned tensor.

  • copy – If True, the returned tensor will always be a copy, even if the input already has the requested dtype. This will also create new tensors for views.

single(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self

Convert all float tensors to single precision.

Converts float tensors to float32 and complex tensors to complex64.

Parameters:
  • memory_format – The desired memory format of returned tensor.

  • copy – If True, the returned tensor will always be a copy, even if the input already has the requested dtype. This will also create new tensors for views.

to(*args, **kwargs) Self

Perform dtype and/or device conversion of data.

A torch.dtype and torch.device are inferred from the arguments args and kwargs. Please have a look at the documentation of torch.Tensor.to() for more details.

A new instance of the dataclass will be returned.

The conversion will be applied to all Tensor- or Module fields of the dataclass, and to all fields that implement the MoveDataMixin.

The dtype category, i.e. float or complex, is always preserved, but the precision of floating-point dtypes may be changed.

Example: If called with dtype=torch.float32 OR dtype=torch.complex64:

  • A complex128 tensor will be converted to complex64

  • A float64 tensor will be converted to float32

  • A bool tensor will remain bool

  • An int64 tensor will remain int64

If other conversions are desired, please use the torch.Tensor.to() method of the fields directly.

If the copy argument is set to True (the default), a deep copy is returned even if no conversion is necessary. If two fields were views of the same data before the call, they become independent copies whenever copy is True or a conversion is required. If copy is set to False, some tensors might be shared between the original and the new object.
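The dtype rule described above can be sketched in plain Python. This is a simplified model of the behavior, with dtypes as strings rather than the actual torch implementation:

```python
# Component bit-width of each floating dtype (e.g. complex64 = two float32 parts).
FLOAT_BITS = {
    "float16": 16, "float32": 32, "float64": 64,
    "complex32": 16, "complex64": 32, "complex128": 64,
}


def target_dtype(field_dtype: str, requested_dtype: str) -> str:
    """Dtype a field ends up with after .to(dtype=requested_dtype):
    the float/complex kind of the field is preserved, only the precision
    follows the requested dtype. (Simplified sketch, not the real code.)"""
    bits = FLOAT_BITS.get(requested_dtype)
    if bits is None or field_dtype not in FLOAT_BITS:
        return field_dtype  # bool, int64, ... are left unchanged
    if field_dtype.startswith("complex"):
        # A complex dtype's name counts both components, hence 2 * bits.
        return f"complex{2 * bits}"
    return f"float{bits}"
```

For example, requesting float32 maps a complex128 field to complex64, while requesting complex64 maps a float64 field to float32; both requests carry the same 32-bit component precision.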

acq_info: AcqInfo

Information about the acquisitions (i.e. readout lines).

calibration_mode: enums.CalibrationMode

Mode in which calibration data is acquired.

datetime: datetime.datetime | None

Date and time of acquisition.

property device: device | None

Return the device of the tensors.

Looks at each field of the dataclass that implements a device attribute, such as torch.Tensor or MoveDataMixin instances. If the devices of the fields differ, an InconsistentDeviceError is raised; otherwise the device is returned. If no field implements a device attribute, None is returned.

Raises:

InconsistentDeviceError: – If the devices of different fields differ.

Return type:

The device of the fields or None if no field implements a device attribute.
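The consistency check behind this property can be sketched as follows. This is a local stand-in with devices as strings and InconsistentDeviceError redefined for the example, not the actual mrpro code:

```python
class InconsistentDeviceError(ValueError):
    """Raised when fields live on different devices (local stand-in)."""


def common_device(field_devices):
    """Return the single device shared by all fields, None if no field
    has a device, or raise if the fields disagree."""
    found = {d for d in field_devices if d is not None}
    if not found:
        return None  # no field implements a device attribute
    if len(found) > 1:
        raise InconsistentDeviceError(f"fields on multiple devices: {sorted(found)}")
    return next(iter(found))
```

Fields without a device attribute (here modeled as None entries) are simply skipped, so a header holding only scalars and strings reports a device of None rather than raising.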

echo_spacing: torch.Tensor | None

Echo spacing [s].

echo_train_length: int

Number of echoes in a multi-echo acquisition.

encoding_fov: SpatialDimension[float]

Field of view of the image encoded by the k-space trajectory [m].

encoding_limits: EncodingLimits

K-space encoding limits.

encoding_matrix: SpatialDimension[int]

Dimensions of the encoded k-space matrix.

fa: torch.Tensor | None

Flip angle [rad].

property fa_degree: Tensor | None

Flip angle in degrees.

interleave_dim: enums.InterleavingDimension

Interleaving dimension.

property is_cpu: bool

Return True if all tensors are on the CPU.

Checks the device of all tensor attributes of the dataclass (recursively, if an attribute is itself a MoveDataMixin).

Returns False if not all tensors are on the CPU or if the device is inconsistent; returns True if the dataclass has no tensor attributes.

property is_cuda: bool

Return True if all tensors are on a single CUDA device.

Checks the device of all tensor attributes of the dataclass (recursively, if an attribute is itself a MoveDataMixin).

Returns False if the tensors are not all on the same CUDA device or if the device is inconsistent; returns True if the dataclass has no tensor attributes.

lamor_frequency_proton: float

Larmor frequency of hydrogen nuclei [Hz].

measurement_id: str

Measurement ID.

model: str

Scanner model.

patient_name: str

Name of the patient.

protocol_name: str

Name of the acquisition protocol.

recon_fov: SpatialDimension[float]

Field-of-view of the reconstructed image [m].

recon_matrix: SpatialDimension[int]

Dimensions of the reconstruction matrix.

sequence_type: str

Type of sequence.

te: torch.Tensor | None

Echo time [s].

ti: torch.Tensor | None

Inversion time [s].

tr: torch.Tensor | None

Repetition time [s].

trajectory: KTrajectoryCalculator

Function to calculate the k-space trajectory.

trajectory_type: enums.TrajectoryType

Type of trajectory.

vendor: str

Scanner vendor.