Voxel Printing of Neuroimaging
Revision as of 03:06, 2 October 2020

DC I/O 2020 proceeding by Luke Hale.


Abstract

3D printing is becoming a widespread and useful technology in medicine, with applications in simulation, teaching, surgical planning and patient-specific prostheses. Typically, in order to 3D print neuroimaging data, anatomical areas of interest must first be identified on individual 2D slices, then isolated (either manually or via thresholding), and then converted to a 3D mesh. This time-consuming 'segmentation' typically requires commercial software and, in forming the 3D mesh, discards the rest of the scan data, reducing it to a binary representation: each point is either outside or inside the anatomical area of interest. Furthermore, the size of structures may be over- or underestimated. Voxels (volume pixels) represent values on a three-dimensional grid; voxel printing outputs these values directly to a 3D printed model. Voxel printing can therefore obviate the need for segmentation and 3D mesh generation, allowing effectively lossless printing of whole neuroimaging datasets. Here, 7T MRI brain images were converted to bitmap images and printed on a Stratasys 760M printer at the printer's native resolution. SimpleITK, an open source library for imaging analysis, was used to interpolate between imaging slices to achieve the required 800 dpi slice resolution; a Floyd-Steinberg dithering algorithm was used to convert images to two pixel values representing clear and opaque resin. The resulting printed model preserves the delicate cerebral vasculature and the differentiation between grey and white matter. Voxel printing of neuroimaging therefore offers exciting possibilities: more accurate visualisation of complex neurological structures, with the added possibility of multi-material gradients that simulate native tissue.
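The dithering step described in the abstract can be sketched in Python. The following is an illustrative implementation of Floyd-Steinberg error diffusion on a normalised greyscale slice, not the authors' actual code (their pipeline used SimpleITK and the printer's bitmap workflow); the function name and the NumPy-based approach are assumptions made for this sketch.

```python
import numpy as np

def floyd_steinberg_dither(img):
    """Binarise a greyscale slice (floats in [0, 1]) by error diffusion.

    Each pixel is thresholded to 0 (clear resin) or 1 (opaque resin),
    and the quantisation error is pushed onto the as-yet-unvisited
    neighbours with the classic Floyd-Steinberg weights
    7/16 (right), 3/16 (below-left), 5/16 (below), 1/16 (below-right).
    """
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0  # quantise to the two resins
            out[y, x] = new
            err = old - new                   # residual intensity error
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out.astype(np.uint8)
```

Because each pixel's quantisation error is diffused onto its neighbours rather than discarded, the local average density of opaque pixels tracks the original greyscale intensity, which is what lets a two-material bitmap reproduce continuous tissue contrast such as the grey/white matter gradient.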

Keywords

Keyphrases

visual perception (210), neural network (150), human navigation (130), deep learning (120), visual characteristic (120), social medium (100), social medium image (95), urban layer (90), built environment (90), visuospatial intelligence (90), google street view (79), visual understanding (70), visual connectivity map (63), visuospatial understanding (60), street view (60), spatial environment (60), visuospatial city (50), design output (50), built form (50), urban environment (50), visual quality (50), visual preference (50), activation function (50), hidden layer (50), human vision (50), spatial intelligence (50), deep learning method (47), spatial perception measurement (47), building height map (47), google street (45)

Topics

Reference

DOI: https://doi.org/10.47330/DCIO.2020.KGQD8189

Full text: Maciel, A. (Ed.), 2020. Design Computation Input/Output 2020, 1st ed. Design Computation, London, UK. ISBN: 978-1-83812-940-8. DOI: 10.47330/DCIO.2020.QPRF9890