
visual-volume

demo

Introduction

Visual Volume is a project for visualizing volumetric data. It leverages web technologies such as Three.js and WebGL to render and display three-dimensional volumetric data from sources such as CT scans.

You can view the project live at https://visual-volume.vercel.app.

Purpose

The purpose of this project is to visualize the CT-scan data of the ThaumatoAnakalyptor project as a point cloud. Through this tool, we aim to improve the accuracy of the data conversion.

This project is an extension based on the GitHub repository: tomhsiao1260/pipeline-visualize.

Features

  • Load and parse NRRD files
  • Render and display volumetric data
  • Support for various visualization modes such as isosurfaces and volume rendering
  • Interactive features allowing users to adjust the view and visualization parameters
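To give a feel for the NRRD loading step, here is a minimal sketch of parsing an NRRD header with only the Python standard library. The project itself presumably relies on full parsers (pynrrd on the Python side, Three.js's NRRD loading on the web side); this sketch handles only simple "field: value" header lines, not the "key:=value" pairs or the voxel payload of real files.

```python
# Minimal, illustrative NRRD header parser (stdlib only).
# An NRRD file starts with a "NRRDxxxx" magic line, followed by
# "field: value" lines; a blank line ends the header and the raw
# voxel data follows.
import io

def read_nrrd_header(stream):
    """Parse the text header of an NRRD byte stream into a dict."""
    magic = stream.readline().decode("ascii").strip()
    if not magic.startswith("NRRD"):
        raise ValueError("not an NRRD stream: %r" % magic)
    header = {}
    for raw in stream:
        line = raw.decode("ascii").strip()
        if not line:          # blank line: header ends, voxel data begins
            break
        if line.startswith("#"):
            continue          # comment line
        key, _, value = line.partition(": ")
        header[key] = value
    return header

sample = (b"NRRD0004\n"
          b"type: float\n"
          b"dimension: 3\n"
          b"sizes: 256 256 256\n"
          b"encoding: raw\n"
          b"\n")
hdr = read_nrrd_header(io.BytesIO(sample))
```

The `dimension` and `sizes` fields parsed here are what a viewer needs to interpret the flat voxel buffer as a 3D grid.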

Tech stack

  • Three.js
  • Python

Setup

  1. Clone this repo
    git clone https://github.com/pocper1/visual-volume.git
  2. Install the Python dependencies
    pip install numpy torch tifffile scipy open3d opencv-python pynrrd tqdm
  3. Initialize the web project
    cd web
    npm install

Process guideline

  1. Data collection: download the dataset PHercParis4.volpkg
  2. Transform the CT scan into .pt
    • file format: tif -> pt
    • location: code/surface_detection.py
  3. Transform into .nrrd
    • file format: pt -> nrrd
    • location: code/pt_nrrd.py
  4. Use WebGL to show the result
    • location: web/
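The four steps above can be sketched as a small driver list. The script paths come from the repository layout, and the .tif cell name is the example file used later in this README; this is only an illustration of the order of operations, not a script shipped with the project.

```python
# Sketch of the processing pipeline as a list of shell commands.
# surface_detection.py and pt_nrrd.py are the scripts in code/; the
# .tif name is the example cell downloaded in "Process detail" below.
import shlex

TIF = "dataset/cell_yxz_006_008_004.tif"

pipeline = [
    ["python", "code/surface_detection.py", TIF],  # step 2: tif -> pt
    ["python", "code/pt_nrrd.py"],                 # step 3: pt -> nrrd
    ["npm", "run", "dev"],                         # step 4: serve web/ (run from web/)
]

for cmd in pipeline:
    print(shlex.join(cmd))
```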

Process detail

  1. Data collection. Note: you need to sign up for the Vesuvius Challenge Data Agreement. Once registered, you will be granted a username and password to access the dataset.

    cd visual-volume
    wget --no-parent -r -nd --user=<userName> --password=<password> -P dataset https://dl.ash2txt.org/full-scrolls/Scroll1/PHercParis4.volpkg/volume_grids/20230205180739/cell_yxz_006_008_004.tif
  2. Transform the CT scan into .pt; writes data into dataset/<tifName>/*

    python code/surface_detection.py dataset/cell_yxz_006_008_004.tif
  3. Transform into .nrrd; reads data from dataset/<tifName>/* and writes data into web/public/<tifName>/*

    python code/pt_nrrd.py
  4. Use the web app to show the result

    cd web
    npm run dev
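Before voxel intensities can be sampled in the fragment shader, a volume viewer typically normalizes them to [0, 1]. The real viewer in web/src/js does this on JavaScript typed arrays; as an assumption-laden illustration of the idea, a min-max normalization in pure Python looks like this:

```python
# Hedged sketch: min-max normalization of raw voxel intensities to
# [0, 1], the kind of preprocessing done before uploading a volume as
# a 3D texture. Illustrative only; not code from this repository.
def normalize(voxels):
    lo, hi = min(voxels), max(voxels)
    if hi == lo:                       # flat volume: avoid division by zero
        return [0.0 for _ in voxels]
    return [(v - lo) / (hi - lo) for v in voxels]

vals = normalize([10.0, 20.0, 30.0])   # -> [0.0, 0.5, 1.0]
```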

File structure

.
├── code
│   ├── nrrd_tools.py
│   ├── pt_nrrd.py
│   └── surface_detection.py
├── dataset
├── README.md
├── umbilicus.txt
└── web
    ├── index.html
    ├── package.json
    ├── package-lock.json
    ├── public
    │   └── cell_yxz_006_008_004
    │       ├── adjusted_vectors_interp.nrrd
    │       ├── adjusted_vectors.nrrd
    │       ├── blurred_volume.nrrd
    │       ├── first_derivative.nrrd
    │       ├── mask_recto.nrrd
    │       ├── mask_verso.nrrd
    │       ├── origin.nrrd
    │       ├── second_derivative.nrrd
    │       ├── sobel_vectors.nrrd
    │       ├── sobel_vectors_subsampled.nrrd
    │       └── vector_conv.nrrd
    ├── src
    │   ├── css
    │   │   └── main.css
    │   ├── img
    │   │   └── favicon.ico
    │   └── js
    │       ├── config.js
    │       ├── core
    │       │   ├── shaders
    │       │   │   ├── fragmentShader.glsl
    │       │   │   └── vertexShader.glsl
    │       │   ├── textures
    │       │   │   └── cm_viridis.png
    │       │   ├── ViewerCore.js
    │       │   └── VolumeMaterial.js
    │       ├── main.js
    │       └── volume.js
    └── vite.config.js