mrpro.data.KData
- class mrpro.data.KData[source]
Bases:
MoveDataMixin
MR raw data / k-space data class.
- __init__(header: KHeader, data: Tensor, traj: KTrajectory) None
- classmethod from_file(filename: str | Path, ktrajectory: KTrajectoryCalculator | KTrajectory | KTrajectoryIsmrmrd, header_overwrites: dict[str, object] | None = None, dataset_idx: int = -1, acquisition_filter_criterion: Callable = is_image_acquisition) Self [source]
Load k-space data from an ISMRMRD file.
- Parameters:
  - ktrajectory (KTrajectoryCalculator | KTrajectory | KTrajectoryIsmrmrd) – KTrajectoryCalculator to calculate the k-space trajectory, or an already calculated KTrajectory.
  - header_overwrites (dict[str, object] | None, default: None) – dictionary of key-value pairs to overwrite the header.
  - dataset_idx (int, default: -1) – index of the ISMRMRD dataset to load (converter creates dataset, dataset_1, …).
  - acquisition_filter_criterion (Callable, default: is_image_acquisition) – function which returns True if an acquisition should be included in the KData.
- traj: KTrajectory
  K-space trajectory along kz, ky and kx. Shape (*other k2 k1 k0).
- property device: device | None[source]
Return the device of the tensors.
Looks at each field of a dataclass implementing a device attribute, such as torch.Tensor or MoveDataMixin instances. If the devices of the fields differ, an InconsistentDeviceError is raised; otherwise the device is returned. If no field implements a device attribute, None is returned.
- Raises:
  InconsistentDeviceError – If the devices of different fields differ.
- Returns:
  The device of the fields, or None if no field implements a device attribute.
- property is_cpu: bool[source]
Return True if all tensors are on the CPU.
Checks all tensor attributes of the dataclass for their device (recursively if an attribute is a MoveDataMixin). Returns False if not all tensors are on the CPU or if the device is inconsistent; returns True if the data class has no tensors as attributes.
- property is_cuda: bool[source]
Return True if all tensors are on a single CUDA device.
Checks all tensor attributes of the dataclass for their device (recursively if an attribute is a MoveDataMixin). Returns False if not all tensors are on the same CUDA device or if the device is inconsistent; returns True if the data class has no tensors as attributes.
- compress_coils(n_compressed_coils: int, batch_dims: None | Sequence[int] = None, joint_dims: Sequence[int] | ellipsis = ...) Self [source]
Reduce the number of coils based on a PCA compression.
A PCA is carried out along the coil dimension and the n_compressed_coils virtual coil elements are selected. For more information on coil compression please see [BUE2007], [DON2008] and [HUA2008].
Returns a copy of the data.
- Parameters:
  - kdata – K-space data
  - n_compressed_coils (int) – Number of compressed coils.
  - batch_dims (None | Sequence[int], default: None) – Dimensions which are treated as batched, i.e. separate coil compression matrices (e.g. for different slices). Default is to compute one coil compression matrix for the entire k-space data. Only batch_dims or joint_dims can be defined; if batch_dims is not None, joint_dims has to be ...
  - joint_dims (Sequence[int] | EllipsisType, default: ...) – Dimensions which are combined to calculate a single coil compression matrix (e.g. k0, k1, contrast). Default is that all dimensions (except for the coil dimension) are joint_dims. Only batch_dims or joint_dims can be defined; if joint_dims is not ..., batch_dims has to be None.
- Returns:
  Copy of K-space data with compressed coils.
- Raises:
  ValueError – If both batch_dims and joint_dims are defined.
  ValueError – If the coil dimension is part of joint_dims or batch_dims.
References
[BUE2007]Buehrer M, Pruessmann KP, Boesiger P, Kozerke S (2007) Array compression for MRI with large coil arrays. MRM 57. https://doi.org/10.1002/mrm.21237
[DON2008]Doneva M, Boernert P (2008) Automatic coil selection for channel reduction in SENSE-based parallel imaging. MAGMA 21. https://doi.org/10.1007/s10334-008-0110-x
[HUA2008]Huang F, Vijayakumar S, Li Y, Hertel S, Duensing GR (2008) A software channel compression technique for faster reconstruction with many channels. MRM 26. https://doi.org/10.1016/j.mri.2007.04.010
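The underlying idea of PCA coil compression can be sketched without mrpro: find the principal directions of the coil covariance and project the data onto the leading ones. The toy below works on real-valued nested lists and a deterministic power iteration; mrpro operates on complex torch tensors and this is only an illustration of the technique, not its implementation:

```python
def pca_compress_coils(data, n_compressed, n_iter=100):
    """Toy PCA along the coil dimension for real-valued coils x samples data."""
    n_coils = len(data)
    n_samples = len(data[0])
    # coil-by-coil covariance matrix
    cov = [[sum(data[i][k] * data[j][k] for k in range(n_samples))
            for j in range(n_coils)] for i in range(n_coils)]
    components = []
    for _ in range(n_compressed):
        v = [1.0] * n_coils  # deterministic start for the power iteration
        for _ in range(n_iter):
            w = [sum(cov[i][j] * v[j] for j in range(n_coils)) for i in range(n_coils)]
            norm = sum(x * x for x in w) ** 0.5
            v = [x / norm for x in w]
        lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(n_coils))
                  for i in range(n_coils))
        components.append(v)
        # deflate: remove the found component from the covariance
        cov = [[cov[i][j] - lam * v[i] * v[j] for j in range(n_coils)]
               for i in range(n_coils)]
    # project the data onto the principal directions -> virtual coils
    return [[sum(v[i] * data[i][k] for i in range(n_coils)) for k in range(n_samples)]
            for v in components]

# Two coils that see the same signal with different gains compress to one
# virtual coil with (almost) no information loss.
coil1 = [1.0, 2.0, 3.0, 4.0]
data = [coil1, [2 * x for x in coil1]]
virtual = pca_compress_coils(data, n_compressed=1)
print(virtual[0])  # ≈ sqrt(5) * coil1
```

batch_dims vs joint_dims then only decides over which slices of the data one such covariance (and hence one projection) is computed.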
- rearrange_k2_k1_into_k1() Self [source]
Rearrange kdata from (… k2 k1 …) to (… 1 (k2 k1) …).
Note: This function will be deprecated in the future.
- Parameters:
  kdata – K-space data (other coils k2 k1 k0)
- Returns:
  K-space data (other coils 1 (k2 k1) k0)
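At the shape level, the rearrangement just merges the two phase-encoding dimensions and leaves a singleton k2 behind. A minimal sketch of the shape transform (the helper name is illustrative):

```python
def rearrange_k2_k1(shape):
    """(... k2 k1 k0) -> (... 1 k2*k1 k0), merging the two phase-encode dims."""
    *leading, k2, k1, k0 = shape
    return (*leading, 1, k2 * k1, k0)

print(rearrange_k2_k1((2, 8, 4, 16, 32)))  # (2, 8, 1, 64, 32)
```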
- remove_readout_os() Self [source]
Remove any oversampling along the readout direction.
Removes oversampling along the readout direction by cropping the data to the size of the reconstruction matrix in image space [GAD].
Returns a copy of the data.
- Parameters:
kdata – K-space data
- Returns:
Copy of K-space data with oversampling removed.
- Raises:
ValueError – If the recon matrix along x is larger than the encoding matrix along x.
References
[GAD]
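The core of oversampling removal is a symmetric center crop along the readout, applied in image space. The sketch below shows only the crop itself on a plain list (mrpro additionally FFTs along the readout before cropping and transforms back; the helper name is illustrative):

```python
def crop_center(readout, n_recon):
    """Keep the central n_recon points of an oversampled readout."""
    n = len(readout)
    if n_recon > n:
        # mirrors the documented ValueError for recon matrix > encoding matrix
        raise ValueError("recon matrix larger than encoding matrix")
    start = (n - n_recon) // 2
    return readout[start:start + n_recon]

# 2x readout oversampling: 8 encoded points, 4 reconstructed points
print(crop_center(list(range(8)), 4))  # [2, 3, 4, 5]
```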
- select_other_subset(subset_idx: Tensor, subset_label: Literal['average', 'slice', 'contrast', 'phase', 'repetition', 'set']) Self [source]
Select a subset from the other dimension of KData.
Note: This function will be deprecated in the future.
- Parameters:
  - subset_idx (Tensor) – indices of the subset to select along the other dimension.
  - subset_label (Literal['average', 'slice', 'contrast', 'phase', 'repetition', 'set']) – label of the other dimension to subset.
- Returns:
K-space data
(other_subset coils k2 k1 k0)
- Raises:
ValueError – If the subset indices are not available in the data
- split_k1_into_other(split_idx: Tensor, other_label: Literal['average', 'slice', 'contrast', 'phase', 'repetition', 'set']) Self [source]
Based on an index tensor, split the data in e.g. phases.
- Parameters:
kdata – K-space data (other coils k2 k1 k0)
split_idx (
Tensor
) – 2D index describing the k1 points in each block to be moved to other dimension (other_split, k1_per_split)other_label (
Literal
['average'
,'slice'
,'contrast'
,'phase'
,'repetition'
,'set'
]) – Label of other dimension, e.g. repetition, phase
- Returns:
K-space data with new shape ((other other_split) coils k2 k1_per_split k0)
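The split amounts to gathering the k1 points of each block and stacking the blocks along the other dimension. A shape-level sketch with plain lists, using a 2D split_idx of shape (other_split, k1_per_split) (the example indices are illustrative, e.g. an interleaved cardiac acquisition):

```python
def split_k1_into_other(k1_data, split_idx):
    """Gather k1 points per block; blocks stack along a new leading dim.

    k1_data:   list over k1 (stand-in for the k1 axis of the k-space tensor)
    split_idx: nested list (other_split, k1_per_split) of k1 indices
    """
    return [[k1_data[i] for i in block] for block in split_idx]

# 6 k1 lines split into 3 phases of 2 lines each
k1_lines = ["l0", "l1", "l2", "l3", "l4", "l5"]
split_idx = [[0, 3], [1, 4], [2, 5]]
print(split_k1_into_other(k1_lines, split_idx))
```

split_k2_into_other applies the same gathering along k2 instead of k1.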
- split_k2_into_other(split_idx: Tensor, other_label: Literal['average', 'slice', 'contrast', 'phase', 'repetition', 'set']) Self [source]
Based on an index tensor, split the data in e.g. phases.
Note: This function will be deprecated in the future.
- Parameters:
  - kdata – K-space data (other coils k2 k1 k0)
  - split_idx (Tensor) – 2D index describing the k2 points in each block to be moved to the other dimension (other_split, k2_per_split).
  - other_label (Literal['average', 'slice', 'contrast', 'phase', 'repetition', 'set']) – Label of the other dimension, e.g. repetition or phase.
- Returns:
  K-space data with new shape ((other other_split) coils k2_per_split k1 k0)
- apply(function: Callable[[Any], Any] | None = None, *, recurse: bool = True) Self [source]
Apply a function to all children. Returns a new object.
- apply_(function: Callable[[Any], Any] | None = None, *, memo: dict[int, Any] | None = None, recurse: bool = True) Self [source]
Apply a function to all children in-place.
- Parameters:
  - function (Callable[[Any], Any] | None, default: None) – The function to apply to all fields. None is interpreted as a no-op.
  - memo (dict[int, Any] | None, default: None) – A dictionary to keep track of objects that the function has already been applied to, to avoid multiple applications. This is useful if the object has a circular reference.
  - recurse (bool, default: True) – If True, the function will be applied to all children that are MoveDataMixin instances.
- cpu(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self [source]
Put in CPU memory.
- Parameters:
  - memory_format (memory_format, default: torch.preserve_format) – The desired memory format of the returned tensor.
  - copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.
- cuda(device: device | str | int | None = None, *, non_blocking: bool = False, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self [source]
Put object in CUDA memory.
- Parameters:
  - device (device | str | int | None, default: None) – The destination GPU device. Defaults to the current CUDA device.
  - non_blocking (bool, default: False) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
  - memory_format (memory_format, default: torch.preserve_format) – The desired memory format of the returned tensor.
  - copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.
- double(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self [source]
Convert all float tensors to double precision.
Converts float to float64 and complex to complex128.
- Parameters:
  - memory_format (memory_format, default: torch.preserve_format) – The desired memory format of the returned tensor.
  - copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.
- half(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self [source]
Convert all float tensors to half precision.
Converts float to float16 and complex to complex32.
- Parameters:
  - memory_format (memory_format, default: torch.preserve_format) – The desired memory format of the returned tensor.
  - copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.
- single(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self [source]
Convert all float tensors to single precision.
Converts float to float32 and complex to complex64.
- Parameters:
  - memory_format (memory_format, default: torch.preserve_format) – The desired memory format of the returned tensor.
  - copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.
- to(device: str | device | int | None = None, dtype: dtype | None = None, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) Self [source]
- to(dtype: dtype, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) Self
- to(tensor: Tensor, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) Self
Perform dtype and/or device conversion of data.
A torch.dtype and torch.device are inferred from the arguments args and kwargs. Please have a look at the documentation of torch.Tensor.to for more details.
A new instance of the dataclass will be returned.
The conversion will be applied to all Tensor or Module fields of the dataclass, and to all fields that implement the MoveDataMixin.
The dtype-type, i.e. float or complex, will always be preserved, but the precision of floating point dtypes might be changed.
Example: if called with dtype=torch.float32 or dtype=torch.complex64:
- A complex128 tensor will be converted to complex64
- A float64 tensor will be converted to float32
- A bool tensor will remain bool
- An int64 tensor will remain int64
If other conversions are desired, please use the to method of the fields directly.
If the copy argument is set to True, a deep copy will be returned even if no conversion is necessary. If two fields were views of the same data before, in the result they will be independent copies if copy is set to True or a conversion is necessary. If set to False, some tensors might be shared between the original and the new object.
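The precision-only rule above can be summarized as: take the bit width from the requested dtype, keep each field's dtype family. A pure-Python sketch of that mapping, with dtype names standing in for torch dtypes (the helper is illustrative, not mrpro's implementation):

```python
# bit width of the real part for each floating dtype name
_PRECISION = {"float16": 16, "float32": 32, "float64": 64,
              "complex32": 16, "complex64": 32, "complex128": 64}

def converted_dtype(field_dtype: str, requested: str) -> str:
    """Keep the dtype family (float/complex/int/bool), adopt the precision."""
    if field_dtype not in _PRECISION:  # bool, int64, ... stay untouched
        return field_dtype
    bits = _PRECISION[requested]
    if field_dtype.startswith("complex"):
        return f"complex{2 * bits}"  # complex dtype names count both parts
    return f"float{bits}"

print(converted_dtype("complex128", "float32"))  # complex64
print(converted_dtype("float64", "complex64"))   # float32
print(converted_dtype("int64", "float32"))       # int64
```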
- __eq__(other)
Return self==value.
- __new__(**kwargs)