Augmented reality improves procedural efficiency and reduces radiation dose for CT-guided lesion targeting: a phantom study using HoloLens 2

This study was exempt from Institutional Review Board review at the University of Pennsylvania because no patient data were obtained or analyzed. CT-guided percutaneous needle targeting was simulated on a phantom model (071B, CIRS, Norfolk, VA) containing multiple targets of various sizes. A CT grid (Guidelines 117, Beekley Medical, Bristol, CT) commonly used in clinical practice was placed on the anterior surface of the phantom for trajectory planning and to serve as a fiducial target for registration.

Preoperative imaging and 3D modeling

A preoperative CT scan of the phantom was performed at 120 kVp and 2 mm slice thickness on a Siemens SOMATOM Force scanner (Fig. 1). An 11 mm lesion was selected for targeting. Manual and semi-automated segmentations of the lesions, CT grid and bony structures, and skin surface were performed with ITK-SNAP using threshold masking and iterative region growing11. Segmentation meshes were exported in STL format and then decimated in Meshmixer (Autodesk, San Rafael, CA) to eliminate redundant vertices and reduce mesh size, improving 3D rendering performance. The reduced meshes were exported in OBJ format, and material textures, including colors and transparencies, were applied in Blender (Blender Foundation, Amsterdam, Netherlands). The target lesion was colored green; all other nontargeted lesions were colored red. The final 3D surface-rendered model was exported in FBX format (Fig. 2). Total model generation time was less than 45 min.
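
The decimation step above was performed interactively in Meshmixer. As a minimal scriptable sketch of the same reduction (an illustrative alternative, not the authors' pipeline; file names and the target triangle count are placeholders), quadric decimation in Open3D accomplishes the equivalent STL-to-OBJ step:

```python
# Sketch of a scriptable mesh-reduction step (Open3D), offered as an
# alternative to the interactive Meshmixer workflow described above.
# File names and the target triangle count are illustrative placeholders.
import open3d as o3d

# Load a segmentation mesh exported from ITK-SNAP (STL format).
mesh = o3d.io.read_triangle_mesh("lesion_segmentation.stl")

# Quadric decimation removes redundant vertices and reduces polygon count,
# keeping the holographic model lightweight for real-time rendering.
reduced = mesh.simplify_quadric_decimation(target_number_of_triangles=50_000)
reduced.compute_vertex_normals()

# Export in OBJ format for texturing (colors/transparency) in Blender.
o3d.io.write_triangle_mesh("lesion_reduced.obj", reduced)
```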

Figure 1

CT abdominal biopsy phantom. (A) The CT grid is applied to the surface of the phantom, which contains multiple targets of various sizes. (B) CT image of the phantom. The selected target measures 11 mm in diameter.

Figure 2

Three-dimensional surface-rendered model of the phantom. (A) Lines from the CT grid can be seen along the anterior surface. The target lesion is shown in green; all other nontargeted lesions are shown in red. (B) Wireframe view of the model, which contains 58,498 polygons with a total file size of only 1.6 MB.

Target trajectory

A long, out-of-plane trajectory with narrow-window access to the 11 mm target was intentionally chosen from a skin entry site along the inferior aspect of the phantom between CT gridlines 3 and 4 (Fig. 3). This trajectory angle exceeded the maximum gantry tilt of the CT scanner, so it could not be compensated for by tilting the gantry.

Figure 3

Trajectory to the targeted lesion from the specified skin entry site. (A) Down-the-barrel view of the trajectory to the targeted lesion (green) from the skin entry site at the inferior aspect between labeled gridlines 3 and 4 (black box). Several nontargeted lesions (red) lie in close proximity to the trajectory. (B) Vector of the ideal trajectory from the specified skin entry site, based on the preoperative CT scan: a total trajectory distance of 14.1 cm from the skin at a 23.4° angle relative to the z-plane (5.8 cm lateral, 11.6 cm deep, and 5.6 cm cranial components). Target and CT grid are not drawn to scale.
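
As a consistency check, the trajectory length and angle reported in Fig. 3B follow directly from the three directional components. The sketch below recomputes them, assuming the 23.4° value is the angulation of the path out of the axial (z) plane:

```python
# Recomputing the ideal-trajectory length and out-of-plane angle from the
# directional components reported in Fig. 3B (5.8 cm lateral, 11.6 cm deep,
# 5.6 cm cranial). The interpretation of the angle is an assumption.
import numpy as np

components = np.array([5.8, 11.6, 5.6])  # lateral, deep, cranial (cm)

length = np.linalg.norm(components)                 # ~14.1 cm total path length
in_plane = np.hypot(components[0], components[1])   # projection onto the axial plane
angle = np.degrees(np.arctan2(components[2], in_plane))  # ~23.4° off the axial plane

print(f"trajectory length: {length:.1f} cm, craniocaudal angle: {angle:.1f} deg")
```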

Augmented reality system

Holographic 3D AR visualization and interaction were performed using a HoloLens 2 headset. A custom HoloLens application was developed in Unity 2019.2.21 with Mixed Reality Toolkit Foundation 2.3.0. Automated registration of the 3D model to the CT grid was performed using computer vision in Vuforia 9.0.12, with the CT grid serving as the image target. Features on the CT grid can be reliably and quickly detected by Vuforia12, and prior studies have validated the accuracy of Vuforia on the first-generation HoloLens10,13,14,15,16,17. Registration accuracy was not directly validated in this study; registration fidelity was confirmed visually by the operator based on complete alignment of the virtual gridlines with the physical gridlines. A virtual needle trajectory corresponding to the ideal trajectory was added to the 3D model. This virtual guide allowed the user to trace the ideal trajectory with a real needle (Fig. 4).
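
Conceptually, image-target registration places all virtual content, defined in the CT grid's local coordinate frame, into the world using a single rigid transform reported by the tracker. The sketch below illustrates that pose composition with hypothetical numbers; the actual application was implemented in Unity/C# with Vuforia's tracking, not this code:

```python
# Conceptual sketch of fiducial-based registration: once the CT grid is
# detected as an image target, content defined in the grid's local frame is
# mapped to world coordinates by one homogeneous transform.
# The pose values below are hypothetical.
import numpy as np

def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical pose of the CT grid (image target) in world/headset coordinates:
# a 30 degree rotation about z plus a translation (meters).
theta = np.radians(30.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
world_T_grid = pose_matrix(Rz, np.array([0.10, -0.05, 0.40]))

# A model point expressed in the grid's local frame (e.g., the target lesion
# centroid relative to the grid) maps to world coordinates by one multiply.
lesion_in_grid = np.array([0.058, 0.116, 0.056, 1.0])  # homogeneous, meters
lesion_in_world = world_T_grid @ lesion_in_grid
print(lesion_in_world[:3])
```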

Figure 4

Augmented reality (AR)-assisted navigation using HoloLens 2. (A) Participant inserts the needle while wearing HoloLens 2. (B) View of needle insertion without AR. (C) View of needle insertion through HoloLens 2 with the three-dimensional model and virtual needle guide projected onto the phantom. Registration is visually confirmed by the alignment of the actual CT gridlines with the virtual gridlines. The needle is seen aligned with the virtual guide (purple line) displaying the ideal trajectory to the target lesion (green ball). Note that this two-dimensional captured image does not fully represent the three-dimensional stereoscopic view seen with HoloLens 2.

CT-guided procedure simulation

All simulations were performed on a Siemens SOMATOM Force CT scanner at 120 kVp and 2 mm slice thickness. CT scanner operation followed the guidelines and regulations of the Department of Radiology at the University of Pennsylvania. After a surgical drape was applied over the phantom, percutaneous CT-guided targeting using a 21-gauge, 20 cm Chiba needle was simulated in the same standard fashion used clinically. Following a topogram, an initial CT scan of the phantom was performed and reviewed for trajectory planning. The needle was then passed into the phantom and iteratively advanced, redirected, or retracted, as many times as needed, until the needle tip was within the target. Interval CT scans were performed after every needle adjustment. Each adjustment was counted as a needle pass, and passes were cumulatively documented.

A total of 8 participants performed simulated CT-guided needle targeting: 2 attendings, 3 interventional radiology (IR) residents, and 3 medical students. Both attendings had more than 5 years of experience, and 2 of the residents were in their final year of training. None of the 3 medical students had previously seen or performed a CT-guided intervention. Aside from 1 resident, no participant had prior experience wearing or interacting with HoloLens 2. To limit bias, participants were randomized into two cohorts: CT-guided targeting (1) without AR and then repeated with AR, or (2) with AR and then repeated without AR (Fig. 5).

Figure 5

Flowchart of study design. The order of interventions with and without augmented reality-assisted navigation was randomized to limit order bias.

Procedural imaging and vector analysis

The total number of needle passes was recorded. Total CT dose index (CTDIvol) and dose-length product (DLP) were obtained from the CT dose report. Procedure duration was measured from the acquisition time (image metadata DICOM tag (0008,0032)) of the CT scan following the first needle pass to the acquisition time of the final CT scan with the needle tip in the target. Vector analysis was performed on the CT scan obtained after the first needle pass (Fig. 6). These CT scans were resampled into isotropic volumes (1 × 1 × 1 mm) using 3D Slicer 4.10.1 and linear interpolation18. Voxel locations of the skin entry site, needle tip, and target centroid were recorded. Distances and angles were calculated using the vector magnitude and the dot product, respectively. All CT scans were reviewed to record needle passes that unintentionally punctured or traversed a nontargeted lesion.
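
As an illustration of how procedure duration can be derived from the cited tag, the following is a minimal pydicom sketch; the file names are placeholders, and the parsing assumes Acquisition Time is stored in the standard HHMMSS(.ffffff) form.

```python
# Sketch of extracting procedure duration from DICOM Acquisition Time
# (tag 0008,0032): from the scan after the first needle pass to the final
# scan with the needle tip in the target. File names are placeholders.
from datetime import datetime, timedelta
import pydicom

def acquisition_time(path: str) -> datetime:
    """Read tag (0008,0032) and parse it as HHMMSS(.ffffff) on an arbitrary date."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    raw = str(ds[0x0008, 0x0032].value)  # e.g. "142305.123456"
    fmt = "%H%M%S.%f" if "." in raw else "%H%M%S"
    return datetime.strptime(raw, fmt)

def procedure_duration(first_pass_scan: str, final_scan: str) -> timedelta:
    return acquisition_time(final_scan) - acquisition_time(first_pass_scan)

# Example with hypothetical files:
# print(procedure_duration("scan_after_pass1.dcm", "scan_final.dcm"))
```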

Figure 6

Diagram demonstrating the calculations in two dimensions for illustrative purposes only. Actual calculations were performed in three dimensions based on voxel locations. The blue solid arrow represents the distance of the needle tip from the skin entry site. The red solid arrow represents the remaining distance to the center of the target. The yellow dotted arrow represents the ideal trajectory from the skin entry site to the center of the target. Angle offsets were calculated between the needle trajectory (blue solid arrow) and the ideal trajectory (yellow dotted arrow).
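
A minimal sketch of these Fig. 6 quantities in three dimensions is shown below. Because the volumes were resampled to isotropic 1 mm voxels, voxel indices can be treated directly as millimeter coordinates; the coordinate values themselves are hypothetical.

```python
# Sketch of the Fig. 6 vector analysis in three dimensions, assuming
# isotropic 1 x 1 x 1 mm voxels so indices equal millimeter coordinates.
# The coordinates below are hypothetical placeholders.
import numpy as np

skin_entry = np.array([120.0, 88.0, 40.0])   # skin entry site
needle_tip = np.array([150.0, 140.0, 66.0])  # needle tip after first pass
target     = np.array([178.0, 204.0, 96.0])  # target centroid

needle_vec = needle_tip - skin_entry   # blue arrow: needle path so far
ideal_vec  = target - skin_entry       # yellow arrow: ideal trajectory
remaining  = target - needle_tip       # red arrow: remaining distance

needle_depth = np.linalg.norm(needle_vec)   # mm from skin entry (vector magnitude)
remaining_mm = np.linalg.norm(remaining)    # mm left to target centroid

# Angle offset between the needle trajectory and the ideal trajectory,
# via the dot product of the two vectors.
cos_theta = needle_vec @ ideal_vec / (needle_depth * np.linalg.norm(ideal_vec))
angle_offset = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

print(f"depth {needle_depth:.1f} mm, remaining {remaining_mm:.1f} mm, "
      f"offset {angle_offset:.1f} deg")
```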

Statistical analysis

Vector analyses, means, paired t-tests, and F-tests were performed using Google Sheets (Google, Mountain View, CA). A post hoc power analysis suggested that a total sample size of 8 would provide a power of 0.8 at a significance level of 0.05 for an effect size of 1.
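
The same tests and the sample-size calculation can also be reproduced outside a spreadsheet. The sketch below uses SciPy and statsmodels with placeholder data; the one-sided alternative in the power calculation is an assumption, not a detail stated in the text.

```python
# Sketch of the paired comparison, variance comparison, and power calculation.
# The paired measurements below are placeholders, not study data.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestPower

without_ar = np.array([5.0, 7.0, 4.0, 6.0, 8.0, 5.0, 9.0, 6.0])  # e.g. needle passes
with_ar    = np.array([2.0, 3.0, 2.0, 3.0, 4.0, 2.0, 3.0, 2.0])

# Paired t-test on the per-participant differences.
t_stat, p_value = stats.ttest_rel(without_ar, with_ar)

# F-test for equality of variances (ratio of sample variances, two-sided).
f_stat = np.var(without_ar, ddof=1) / np.var(with_ar, ddof=1)
df = len(without_ar) - 1
p_f = 2 * min(stats.f.sf(f_stat, df, df), stats.f.cdf(f_stat, df, df))

# Sample size for power 0.8, effect size 1, alpha 0.05: a paired test reduces
# to a one-sample t-test on differences; 'larger' assumes a one-sided test.
n_required = TTestPower().solve_power(effect_size=1.0, alpha=0.05,
                                      power=0.8, alternative='larger')
print(t_stat, p_value, f_stat, p_f, np.ceil(n_required))
```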
