https://github.com/ldyken53/TVCG-progiso
Tip revision: f93ee0c2285e5a4cfe5962a28ceb12cf0ce15352 authored by Landon Dyken on 17 October 2024, 22:37:18 UTC
# WebGPU Isosurface Visualization
This repo holds the code for the TVCG paper ["Interactive Isosurface Visualization in Memory Constrained Environments Using Deep Learning and Speculative Raycasting"](https://ieeexplore.ieee.org/document/10577555) by Landon Dyken, Will Usher, and Sidharth Kumar. This work extends the algorithm of ["Speculative Progressive Raycasting for Memory Constrained Isosurface Visualization of Massive Volumes"](https://github.com/Twinklebear/webgpu-prog-iso) (LDAV 2023 Best Paper) in two ways: it uses a pretrained image reconstruction network to infer perceptual approximations from intermediate output, and it optimizes the speculative raycasting with first-pass speculation and larger computational buffers to increase speculation counts in early passes.
## Demo
There is an interactive demo for several datasets online:
- [Magnetic Reconnection (Plasma)](https://ldyken53.github.io/TVCG-progiso/#dataset=magnetic) (512^3)
- [Chameleon](https://ldyken53.github.io/TVCG-progiso/#dataset=chameleon) (1024x1024x1080)
- [Miranda](https://ldyken53.github.io/TVCG-progiso/#dataset=miranda) (1024^3)
- [Richtmyer Meshkov](https://ldyken53.github.io/TVCG-progiso/#dataset=richtmyer_meshkov) (2048x2048x1920)

Note that because the datasets are loaded up front, it will take some time for the rendering to appear when visiting a page for the first time.
All datasets are available on the [Open SciVis Datasets page](https://klacansky.com/open-scivis-datasets/).
## Recreating a Representative Figure
The code was tested on an XPS-17 running a fresh install of Ubuntu 22.04.3 under Windows Subsystem for Linux (WSL, kernel 5.10.16) with Python 3.10.12, npm 8.5.1, and Node.js 12.22.9. See [here](https://youtu.be/_EKXnTOGvuE) for a video demonstration of this installation and test.
You need a machine with Node.js, npm, and Python 3 (with pip) installed. To install these in WSL, run
```
sudo apt update
sudo apt install nodejs npm python3-pip
```
Remember to reload the terminal window or run
```
. $HOME/.profile
```
after installing to make these commands available.
### Automatic Install
After cloning the repo, first make all the scripts executable by running
```
chmod +x run_server.sh shaders/glslc.exe shaders/tint.exe
```
Then install the needed dependencies and start serving the application with
```
./run_server.sh
```
From here, the application will be served at localhost:8000.
### Manual Install
After cloning the repo, run
```
npm install
```
Then navigate to the shaders/ folder and run
```
python3 embed_shaders.py ./glslc.exe ./tint.exe
```
Then, back in the top-level folder, run
```
npm run build
```
Then move the files in the ml-models/ folder into the built dist/ folder.
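This step can also be scripted. Below is a minimal sketch, assuming it is run from the repo root after `npm run build`; the `copy_models` helper name is mine, not part of the repo:

```python
# Hypothetical helper (not part of the repo): copy the pretrained model files
# from ml-models/ into the built dist/ folder.
import shutil
from pathlib import Path

def copy_models(src="ml-models", dst="dist"):
    """Copy every regular file in src/ into dst/; returns the copied file names."""
    src_dir, dst_dir = Path(src), Path(dst)
    dst_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(src_dir.iterdir()):
        if f.is_file():
            shutil.copy2(f, dst_dir / f.name)
            copied.append(f.name)
    return copied
```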
Then download the compressed datasets (Chameleon, Magnetic Reconnection, and Miranda) using the following commands
```
pip install gdown
gdown 1iAN-LucPq6nUAh74I1BIXa24KaXo650k
gdown 1t98uqIjGB99k3Xso8R1EQL4fgefHlKBR
gdown 1YTBFATCaK1ApFpcefEuAj5iQTPm998pU
```
Then create a dist/bcmc-data/ folder and move the downloaded files into it.
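A sketch of this step, run from the repo root (the `stage_datasets` helper name is mine; pass the three filenames exactly as gdown saved them, which are not listed here):

```python
# Hypothetical helper (not part of the repo): move downloaded dataset files
# into dist/bcmc-data/, creating the folder if needed.
import shutil
from pathlib import Path

def stage_datasets(files, data_dir="dist/bcmc-data"):
    """Move each file in `files` into `data_dir`; returns that directory."""
    dst = Path(data_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for name in files:
        shutil.move(str(name), str(dst / Path(name).name))
    return dst
```

Call `stage_datasets([...])` with the three filenames that gdown reported when downloading.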
You can then serve the application from the dist/ folder using
```
python3 -m http.server
```
This serves the application at localhost:8000 by default.
### Running Benchmarks
Once the application is hosted, visit `localhost:8000/#autobenchmark=0` to begin benchmarks. This will automatically run 27 benchmarks covering the Plasma, Chameleon, and Miranda datasets at 360p, 720p, and 1080p, and download .json benchmark files to your default download location. Before running these benchmarks, make sure automatic downloads are allowed in your browser: follow the instructions [here](https://commongoalsystems.zendesk.com/hc/en-us/articles/9197509173005-How-do-I-enable-automatic-downloads) and add the URL http://localhost:8000 to your automatic downloads list. A video showing the benchmarking process is available [here](https://www.youtube.com/watch?v=ALRQYkR2qOs&ab_channel=LandonDyken).
### Converting Benchmarks to Data Figure
Once the autobenchmark is complete, move all downloaded .json files to the benchmarks/ folder in this repo. Run
```
python3 plot_figure6.py
```
and files labeled "ResultsAt85%Complete.png" and "ResultsAt100%Complete.png" will be created in the folder, matching Figure 6 in the TVCG paper.
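Moving the downloaded .json files into benchmarks/ can likewise be scripted. A sketch, run from the repo root; the `collect_benchmarks` name is mine, and your browser's download directory must be passed in since it varies by browser and OS:

```python
# Hypothetical helper (not part of the repo): move the benchmark .json files
# from your browser's download folder into this repo's benchmarks/ directory.
import shutil
from pathlib import Path

def collect_benchmarks(download_dir, repo_dir="benchmarks"):
    """Move every .json file from download_dir into repo_dir; returns the count."""
    dst = Path(repo_dir)
    dst.mkdir(parents=True, exist_ok=True)
    moved = 0
    for f in Path(download_dir).glob("*.json"):
        shutil.move(str(f), str(dst / f.name))
        moved += 1
    return moved
```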
## Model Training
Another repo containing all the model training code is provided [here](https://github.com/ldyken53/TVCG-progiso-training). It includes checkpoints for our pretrained model and example data for training new models. Unlike this repo, the model training code requires an NVIDIA GPU with CUDA support.
