<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>Neural Volumetric Reconstruction for Coherent Synthetic Aperture Sonar</title>
<link rel="stylesheet" href="style.css">
<meta name="google-site-verification" content="4lvnBGrYqP8rxr2i7JUfL2opFJjeFtZ33uhN3-uHxnE" />
</head>
<body>
<header>
<div class="title-authors">
<h1>Neural Volumetric Reconstruction for Coherent Synthetic Aperture Sonar</h1>
<h2> SIGGRAPH 2023</h2>
<div class="authors">
<p>Albert Reed<br>albertnm123@gmail.com<br>Arizona State University</p>
<p>Juhyeon Kim<br>juhyeon.kim.gr@dartmouth.edu<br>Dartmouth College</p>
<p>Thomas Blanford<br>teb217@psu.edu<br>The Pennsylvania State University</p>
<p>Adithya Pediredla<br>adithya.k.pediredla@dartmouth.edu<br>Dartmouth College</p>
<p>Daniel C. Brown<br>dcb19@psu.edu<br>The Pennsylvania State University</p>
<p>Suren Jayasuriya<br>sjayasur@asu.edu<br>Arizona State University</p>
</div>
</div>
</header>
<main>
<div class="icons-row">
<section id="paper">
<h2><a href="https://github.com/awreed/Neural-Volumetric-Reconstruction-for-Coherent-SAS/tree/site/main_paper.pdf"><img src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/main-image.png" alt="Paper icon"><br>Paper</a></h2>
</section>
<section id="supplemental">
<h2><a href="https://github.com/awreed/Neural-Volumetric-Reconstruction-for-Coherent-SAS/tree/site/supp_mat.pdf"><img src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/main-image.png" alt="Paper icon"><br>Supplemental Material</a></h2>
</section>
<section id="code">
<h2><a href="https://github.com/awreed/Neural-Volumetric-Reconstruction-for-Coherent-SAS"><img src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/github-image.png" alt="GitHub icon"><br>Code & Data</a></h2>
</section>
</div>
<section id="abstract">
<h2>Abstract</h2>
<p>Synthetic aperture sonar (SAS) measures a scene from multiple views to increase the resolution of reconstructed imagery. Image reconstruction methods for SAS coherently combine measurements to focus acoustic energy onto the scene. However, image formation is typically under-constrained due to a limited number of measurements and bandlimited hardware, which limits the capabilities of existing reconstruction methods. To help meet these challenges, we design an analysis-by-synthesis optimization that leverages recent advances in neural rendering to perform coherent SAS imaging. Our optimization enables us to incorporate physics-based constraints and scene priors into the image formation process. We validate our method on simulated data and on experimental data captured in both air and water. We demonstrate both quantitatively and qualitatively that our method typically produces reconstructions superior to those of existing approaches. We share code and data for reproducibility.</p>
</section>
<section id="method">
<h2>5-Minute Video Overview</h2>
<video width="600" height="400" controls>
<source src="movie.mp4" type="video/mp4">
</video>
</section>
<section id="methods">
<h2>Method</h2>
<p>SAS reconstruction typically uses backprojection, in which measurements are coherently combined onto the scene using
the time-of-flight between the sonar and the scene. Instead, we propose an analysis-by-synthesis optimization for reconstruction,
enabling us to incorporate physics-based knowledge and prior information into image formation. Our pipeline adapts techniques from volume rendering
and neural fields to create a general SAS reconstruction method that outperforms backprojection.</p><br>
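<p>As a point of reference, conventional delay-and-sum backprojection can be sketched in a few lines. This is an illustrative NumPy sketch, not the code from our repository; the array names and shapes (<code>waveforms</code>, <code>sensor_pos</code>, <code>voxels</code>) are assumptions for the example.</p>

```python
import numpy as np

def backproject(waveforms, sensor_pos, voxels, fs, c=343.0):
    """Delay-and-sum backprojection: for each voxel, sum the complex
    waveform samples at the round-trip time of flight to every sensor.

    waveforms:  (num_sensors, num_samples) complex analytic signals
    sensor_pos: (num_sensors, 3) sensor positions in meters
    voxels:     (num_voxels, 3) scene points in meters
    fs:         sample rate in Hz; c: sound speed in m/s
    """
    image = np.zeros(len(voxels), dtype=complex)
    for pos, wf in zip(sensor_pos, waveforms):
        dist = np.linalg.norm(voxels - pos, axis=1)        # one-way distance
        delay = 2.0 * dist / c                             # round trip (monostatic)
        idx = np.clip((delay * fs).astype(int), 0, wf.shape[0] - 1)
        image += wf[idx]                                   # coherent sum
    return np.abs(image)
```

<p>A scatterer located at a voxel accumulates in phase across all sensors, while energy at other voxels tends to cancel; this is the coherent focusing of acoustic energy described above.</p>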
<img src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/proposed-pipeline.png" alt="Proposed pipeline diagram" width="700" height="325">
<p>In particular, we propose:<br><br>
(1) <strong>Pulse deconvolution</strong>: An analysis-by-synthesis method for deconvolving the transmitted pulse from measurements and increasing our system bandwidth computationally.<br>
(2) <strong>Neural backprojection</strong>: A general SAS reconstruction method formulated as an analysis-by-synthesis optimization. Neural backprojection
uses a neural network to estimate the scene and designs a forward model that considers Lambertian scattering, occlusion, and the coherent integration of acoustic waves to render measurements.
</p>
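<p>To make the analysis-by-synthesis idea behind step (1) concrete, the following toy sketch recovers a scene impulse response by optimizing it so that its convolution with the known transmitted pulse matches the measurement. This minimal NumPy sketch is illustrative only; the actual pipeline optimizes a neural network with additional physics-based priors rather than a raw signal vector.</p>

```python
import numpy as np

def deconvolve_pulse(measurement, pulse, num_iters=2000, lr=0.1):
    """Toy analysis-by-synthesis pulse deconvolution.

    Estimate x such that conv(x, pulse) matches the measurement by
    gradient descent on the squared error. The gradient with respect
    to x is the correlation of the synthesis residual with the pulse.
    """
    n = len(measurement) - len(pulse) + 1
    x = np.zeros(n)
    for _ in range(num_iters):
        residual = np.convolve(x, pulse) - measurement        # synthesis error
        grad = np.correlate(residual, pulse, mode='valid')    # analysis step
        x -= lr * grad / np.dot(pulse, pulse)                 # normalized step
    return x
```

<p>On noiseless data with a well-conditioned pulse, this converges to the underlying impulse response, effectively widening the usable bandwidth beyond what the raw matched-filtered measurements provide.</p>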
</section>
<section id="results">
<h2>Results</h2>
<p>We validate our method in simulation and on two real data sources: AirSAS and the Search Volume Sediment Sonar (SVSS).</p>
<h3><strong><i>Simulation Results</i></strong></h3>
<p>We simulate sonar measurements using a time-of-flight renderer modified from Kim et al. (2021). In particular, we use the renderer to obtain the transient
impulse response of a scene and convolve the transient with the sonar pulse to obtain sonar measurements. </p>
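<p>The measurement synthesis step can be sketched as follows: given a scene's transient impulse response, convolving it with the transmitted pulse yields the simulated sonar measurement. The linear frequency-modulated (LFM) chirp below is a common sonar pulse used for illustration; its parameters are assumptions for this sketch, not the settings used in the paper.</p>

```python
import numpy as np

def lfm_chirp(f_start, f_stop, duration, fs):
    """Linear frequency-modulated (LFM) chirp, a common sonar pulse."""
    t = np.arange(int(duration * fs)) / fs
    k = (f_stop - f_start) / duration               # sweep rate in Hz/s
    return np.sin(2 * np.pi * (f_start * t + 0.5 * k * t ** 2))

def simulate_measurement(transient, pulse):
    """Sonar measurement = scene transient convolved with the pulse."""
    return np.convolve(transient, pulse)

fs = 100_000                                        # 100 kHz sample rate
pulse = lfm_chirp(20_000, 30_000, 1e-3, fs)         # 1 ms, 20-30 kHz chirp
transient = np.zeros(500)
transient[[100, 250]] = [1.0, 0.6]                  # two point scatterers
measurement = simulate_measurement(transient, pulse)
```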
<p><br> <span style="display:inline-block; width: 50px;"></span><strong>Backprojection</strong> <span style="display:inline-block; width: 50px;"></span>
<strong>Gradient-Descent</strong><span style="display:inline-block; width: 60px;"></span> <strong>Ours</strong> <span style="display:inline-block; width: 90px;"></span><strong>Ground-Truth</strong></p>
<video autoplay loop muted>
<source src="./simulation-video.mp4" type="video/mp4">
</video>
<p>We compare our method to backprojection, gradient descent, and the polar formatting algorithm (PFA, not shown here). Backprojection is the traditional method for reconstruction, and gradient descent
is our method without the neural network used to predict the scene.</p> <img src="./table.png" alt="Table of quantitative metrics" width="525" height="140">
<p> We compute metrics across many simulated scenes to characterize the performance gap.</p>
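<p>One common reconstruction-quality metric of this kind is peak signal-to-noise ratio (PSNR) against the ground-truth volume; this is a generic sketch, and the paper's table should be consulted for the exact metrics reported.</p>

```python
import numpy as np

def psnr(reconstruction, ground_truth, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two volumes."""
    mse = np.mean((reconstruction - ground_truth) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)
```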
<h3><strong><i>Real Results 1: AirSAS</i></strong></h3>
<p>AirSAS is a speaker and microphone directed at a turntable.
The speaker and microphone are mounted to a linear track to enable cylindrical and helical collection geometries.</p>
<div class="airsas_hardware" style="display:flex;flex-direction:row;">
<img src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/airsas-hardware.png" alt="AirSAS hardware setup" width="317" height="260">
<video autoplay loop muted height="260">
<source src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/measurements_rendering.mp4" type="video/mp4">
</video>
<video autoplay loop muted height="260">
<source src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/measurements_rendering_pov.mp4" type="video/mp4">
</video>
</div>
<p>Here, we show armadillo reconstructions of AirSAS measurements using backprojection and our proposed method, neural backprojection. Neural backprojection
better captures the object geometry and details while mitigating streaking artifacts that plague backprojection. Please see the video/paper for more results.</p>
<div class="airsas_videos" style="display:flex;flex-direction:row;">
<div>
<h3>Backprojection (Traditional)</h3>
<video width="320" height="240" autoplay loop muted>
<source src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/arma_bp.mp4" type="video/mp4">
</video>
</div>
<div>
<h3>Proposed Method</h3>
<video width="320" height="240" autoplay loop muted>
<source src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/arma_nbp.mp4" type="video/mp4">
</video>
</div>
</div>
<h3><strong><i>Real Results 2: Search Volume Sediment Sonar (SVSS)</i></strong></h3>
<p>SVSS uses a sonar transducer array mounted to a pontoon boat to search for objects in a lakebed.</p>
<img src="./svss-hardware.png" alt="SVSS hardware mounted on a pontoon boat" width="600" height="330">
<div class="svss_videos" style="display:flex;flex-direction:row;">
<div>
<h3>Cinder block cores up</h3>
<video width="300" height="200" autoplay loop muted>
<source src="./bp_cinder_cores.mp4" type="video/mp4">
</video>
<video width="300" height="200" autoplay loop muted>
<source src="./nbp_cinder_cores.mp4" type="video/mp4">
</video>
</div>
<div>
<h3>Cinder block face up</h3>
<video width="300" height="200" autoplay loop muted>
<source src="./bp_cinder_face.mp4" type="video/mp4">
</video>
<video width="300" height="200" autoplay loop muted>
<source src="./nbp_cinder_face.mp4" type="video/mp4">
</video>
</div>
<div>
<h3>Pipe</h3>
<video width="300" height="200" autoplay loop muted>
<source src="./bp_pipe.mp4" type="video/mp4">
</video>
<video width="300" height="200" autoplay loop muted>
<source src="./nbp_pipe.mp4" type="video/mp4">
</video>
</div>
</div>
<p>The top row shows backprojection reconstructions and the bottom row shows ours. Our pulse
deconvolution and neural backprojection steps enable us to reconstruct sharper target features and mitigate the blob-like artifacts that backprojection produces.</p>
</section>
</main>
</body>
</html>