https://github.com/awreed/Neural-Volumetric-Reconstruction-for-Coherent-SAS
28 October 2023, 08:36:07 UTC
To reference or cite the objects present in the Software Heritage archive, permalinks based on SoftWare Hash IDentifiers (SWHIDs) must be used:

  • content: swh:1:cnt:231f51c06374b8c817ccc10670208ce8086aa000
  • directory: swh:1:dir:12a702b937a177c9956fed48c12ade44ed9202b0
  • revision: swh:1:rev:6f29e5027d2118d058fb871d3d33f8cc7c25b22e
  • snapshot: swh:1:snp:9e2c8e761f723f56d02b61d1740453a509f14182

Tip revision: 6f29e5027d2118d058fb871d3d33f8cc7c25b22e authored by Albert on 20 June 2023, 17:33:31 UTC
backing up workspace
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>Neural Volumetric Reconstruction for Coherent Synthetic Aperture Sonar</title>
    <link rel="stylesheet" href="style.css">
      <meta name="google-site-verification" content="4lvnBGrYqP8rxr2i7JUfL2opFJjeFtZ33uhN3-uHxnE" />
  </head>
  <body>
    <header>
      <div class="title-authors">
        <h1>Neural Volumetric Reconstruction for Coherent Synthetic Aperture Sonar</h1>
        <h2> SIGGRAPH 2023</h2>
        <div class="authors">
          <p>Albert Reed<br>albertnm123@gmail.com<br>Arizona State University</p>
          <p>Juhyeon Kim<br>juhyeon.kim.gr@dartmouth.edu<br>Dartmouth College</p>
          <p>Thomas Blanford<br>teb217@psu.edu<br>The Pennsylvania State University</p>
          <p>Adithya Pediredla<br>adithya.k.pediredla@dartmouth.edu<br>Dartmouth College</p>
          <p>Daniel C. Brown<br>dcb19@psu.edu<br>The Pennsylvania State University</p>
          <p>Suren Jayasuriya<br>sjayasur@asu.edu<br>Arizona State University</p>
        </div>
      </div>
    </header>
    <main>
      <div class="icons-row">
        <section id="paper">
          <h2><a href="https://github.com/awreed/Neural-Volumetric-Reconstruction-for-Coherent-SAS/tree/site/main_paper.pdf"><img src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/main-image.png" alt="Paper icon"><br>Paper</a></h2>
        </section>
        <section id="supplemental">
          <h2><a href="https://github.com/awreed/Neural-Volumetric-Reconstruction-for-Coherent-SAS/tree/site/supp_mat.pdf"><img src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/main-image.png" alt="Paper icon"><br>Supplemental Material</a></h2>
        </section>
        <section id="code">
          <h2><a href="https://github.com/awreed/Neural-Volumetric-Reconstruction-for-Coherent-SAS"><img src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/github-image.png" alt="GitHub icon"><br>Code & Data</a></h2>
        </section>
      </div>
      <section id="abstract">
        <h2>Abstract</h2>
        <p>Synthetic aperture sonar (SAS) measures a scene from multiple views in order to increase the resolution of reconstructed imagery. Image reconstruction methods for SAS coherently combine measurements to focus acoustic energy onto the scene. However, image formation is typically under-constrained due to a limited number of measurements and bandlimited hardware, which limits the capabilities of existing reconstruction methods. To help meet these challenges, we design an analysis-by-synthesis optimization that leverages recent advances in neural rendering to perform coherent SAS imaging. Our optimization enables us to incorporate physics-based constraints and scene priors into the image formation process. We validate our method on simulation and experimental results captured in both air and water. We demonstrate both quantitatively and qualitatively that our method typically produces reconstructions superior to those of existing approaches. We share code and data for reproducibility.</p>
      </section>

      <section id="method">
        <h2>5 Minute Video Overview</h2>
        <video width="600" height="400" controls>
          <source src="movie.mp4" type="video/mp4">
        </video>
      </section>

      <section id="methods">
        <h2>Method</h2>
        <p>SAS reconstruction typically uses backprojection, in which measurements are coherently combined onto the scene using
          the time of flight between the sonar and the scene. Instead, we propose an analysis-by-synthesis optimization for reconstruction,
          enabling us to incorporate physics-based knowledge and prior information into image formation. Our pipeline adapts techniques from volume rendering
          and neural fields to create a general SAS reconstruction method that outperforms backprojection.</p><br>
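As a concrete illustration of the traditional baseline, here is a minimal delay-and-sum backprojection sketch in NumPy. It is not the repository's implementation: the colocated transmitter/receiver, the nearest-sample delay lookup, and the in-air sound speed default are simplifying assumptions for illustration.

```python
import numpy as np

def backproject(measurements, times, sensor_pos, scene_pts, c=343.0):
    """Delay-and-sum backprojection: coherently sum each measurement
    sampled at the round-trip time of flight to every scene point.

    measurements: (N, T) complex time series, one row per sensor pose
    times:        (T,) sample times in seconds
    sensor_pos:   (N, 3) sensor positions (colocated transmit/receive assumed)
    scene_pts:    (M, 3) scene voxel centers
    c:            sound speed (343 m/s in air, illustrative default)
    """
    image = np.zeros(len(scene_pts), dtype=complex)
    for meas, pos in zip(measurements, sensor_pos):
        dist = np.linalg.norm(scene_pts - pos, axis=1)   # one-way range
        tof = 2.0 * dist / c                             # round-trip time of flight
        # sample each measurement at the round-trip delay (nearest sample)
        idx = np.clip(np.searchsorted(times, tof), 0, len(times) - 1)
        image += meas[idx]
    return image
```

Energy from all views adds in phase at true scatterer locations and incoherently elsewhere, which is the focusing behavior the paper's neural method replaces with an optimization.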

        <img src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/proposed-pipeline.png" alt="Proposed analysis-by-synthesis reconstruction pipeline" width="700" height="325">

        <p>In particular, we propose:<br><br>
          (1) <strong>Pulse deconvolution</strong>: an analysis-by-synthesis method that deconvolves the transmitted pulse from the measurements, computationally increasing the system bandwidth.<br>
          (2) <strong>Neural backprojection</strong>: a general SAS reconstruction method formulated as an analysis-by-synthesis optimization. Neural backprojection
          uses a neural network to estimate the scene and a forward model that accounts for Lambertian scattering, occlusion, and the coherent integration of acoustic waves to render measurements.
        </p>
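The analysis-by-synthesis idea behind step (1) can be sketched on a toy 1D problem: render a synthetic measurement by convolving an estimated scene response with the known pulse, compare it to the recorded data, and take gradient steps, with a nonnegativity projection standing in for a scene prior. Everything here (the function name, the least-squares objective, the projection prior, the step size) is illustrative, not the paper's actual optimization.

```python
import numpy as np

def deconvolve_pulse(measurement, pulse, n_iter=500, lr=0.1):
    """Toy analysis-by-synthesis pulse deconvolution: estimate a scene
    response x such that pulse * x matches the measurement, by gradient
    descent on the squared synthesis error."""
    T = len(measurement)
    x = np.zeros(T)
    for _ in range(n_iter):
        synth = np.convolve(x, pulse)[:T]   # synthesis: render a measurement
        resid = synth - measurement         # analysis: compare to recorded data
        # gradient of 0.5*||resid||^2 w.r.t. x is the correlation of the
        # residual with the pulse
        grad = np.correlate(resid, pulse, mode='full')[len(pulse) - 1:][:T]
        x -= lr * grad
        x = np.maximum(x, 0.0)              # nonnegativity as a stand-in prior
    return x
```

The same loop structure (render, compare, update) carries over to step (2), where the scene estimate comes from a neural network and the renderer models scattering and occlusion.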

      </section>


      <section id="results">
        <h2>Results</h2>
        <p>We validate our method in simulation and on two real data sources: AirSAS and the Search Volume Sediment Sonar (SVSS).</p>
          <h3><strong><i>Simulation Results</i></strong></h3>
          <p>We simulate sonar measurements using a time-of-flight renderer modified from Kim et al. (2021). In particular, we use the renderer to obtain the transient
          impulse response of a scene and convolve the transient with the sonar pulse to obtain sonar measurements. </p>
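In its simplest form, the simulation recipe above reduces to a convolution. The sketch below uses illustrative values throughout (sample rate, LFM chirp band, two hand-placed scatterers); in the paper the transient comes from the time-of-flight renderer, whereas here it is a toy impulse train.

```python
import numpy as np

fs = 100_000                                 # sample rate in Hz (assumed)
t = np.arange(0, 1e-3, 1 / fs)               # 1 ms pulse duration (assumed)

# linear frequency-modulated (LFM) chirp as the transmitted pulse
f0, f1 = 20_000, 30_000                      # illustrative band edges (Hz)
pulse = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t**2))

# toy transient impulse response: two point scatterers at different delays
transient = np.zeros(500)
transient[120] = 1.0                         # strong scatterer
transient[300] = 0.4                         # weaker scatterer

# sonar measurement = transient convolved with the transmitted pulse
measurement = np.convolve(transient, pulse)
```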
          <p><br> <span style="display:inline-block; width: 50px;"></span><strong>Backprojection</strong> <span style="display:inline-block; width: 50px;"></span>
<strong>Gradient-Descent</strong><span style="display:inline-block; width: 60px;"></span> <strong>Ours</strong> <span style="display:inline-block; width: 90px;"></span><strong>Ground-Truth</strong></p>
          <video autoplay loop muted>
            <source src="./simulation-video.mp4" type="video/mp4">
          </video>
          <p>We compare our method to backprojection, gradient descent, and the polar formatting algorithm (PFA, not shown here). Backprojection is the traditional reconstruction method, and gradient descent
          is our method without the neural network that predicts the scene.</p> <img src="./table.png" alt="Quantitative comparison table" width="525" height="140">
             <p> We compute metrics across many simulated scenes to characterize the performance gap.</p>


        <h3><strong><i>Real Results 1: AirSAS</i></strong></h3>
        <p>AirSAS is a speaker and microphone directed at a turntable.
          The speaker and microphone are mounted to a linear track to enable cylindrical and helical collection geometries.</p>
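For intuition, the two collection geometries can be parameterized as sensor trajectories by treating the fixed speaker/microphone as if it circled the object (the turntable rotation is equivalent). The function and its parameters are illustrative, not AirSAS's actual configuration.

```python
import numpy as np

def collection_poses(n_views, radius, height, helical=False):
    """Sensor poses for an AirSAS-style collection (illustrative sketch).

    Cylindrical: the sensor circles the turntable at a fixed height.
    Helical: the linear track raises the sensor as the turntable rotates.
    """
    theta = np.linspace(0.0, 2 * np.pi, n_views, endpoint=False)
    z = np.linspace(0.0, height, n_views) if helical else np.full(n_views, height)
    return np.stack([radius * np.cos(theta), radius * np.sin(theta), z], axis=1)
```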
        <div class="airsas_hardware" style="display:flex;flex-direction:row;">
          <img src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/airsas-hardware.png" alt="AirSAS hardware setup" width="317" height="260">
          <video autoplay loop muted height="260">
            <source src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/measurements_rendering.mp4" type="video/mp4">
          </video>
          <video autoplay loop muted height="260">
            <source src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/measurements_rendering_pov.mp4" type="video/mp4">
          </video>
        </div>
        <p>Here, we show armadillo reconstructions of AirSAS measurements using backprojection and our proposed method, neural backprojection. Neural backprojection
        better captures the object geometry and details while mitigating streaking artifacts that plague backprojection. Please see the video/paper for more results.</p>
        <div class="airsas_videos" style="display:flex;flex-direction:row;">
          <div>
              <h3>Backprojection (Traditional)</h3>
              <video width="320" height="240" autoplay loop muted>
                  <source src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/arma_bp.mp4" type="video/mp4">
              </video>
          </div>
          <div>
              <h3>Proposed Method</h3>
              <video width="320" height="240" autoplay loop muted>
                  <source src="/Neural-Volumetric-Reconstruction-for-Coherent-SAS/arma_nbp.mp4" type="video/mp4">
              </video>
          </div>
	    </div>
        <h3><strong><i>Real Results 2: Search Volume Sediment Sonar (SVSS)</i></strong></h3>
        <p>SVSS uses a sonar transducer array mounted to a pontoon boat to search for objects in a lakebed.</p>
        <img src="./svss-hardware.png" alt="SVSS hardware" width="600" height="330">
        <div class="svss_videos" style="display:flex;flex-direction:row;">
          <div>
              <h3>Cinder block cores up</h3>
              <video width="300" height="200" autoplay loop muted>
                  <source src="./bp_cinder_cores.mp4" type="video/mp4">
              </video>
              <video width="300" height="200" autoplay loop muted>
                  <source src="./nbp_cinder_cores.mp4" type="video/mp4">
              </video>
          </div>
          <div>
              <h3>Cinder block face up</h3>
              <video width="300" height="200" autoplay loop muted>
                  <source src="./bp_cinder_face.mp4" type="video/mp4">
              </video>
              <video width="300" height="200" autoplay loop muted>
                  <source src="./nbp_cinder_face.mp4" type="video/mp4">
              </video>
          </div>
          <div>
              <h3>Pipe</h3>
              <video width="300" height="200" autoplay loop muted>
                  <source src="./bp_pipe.mp4" type="video/mp4">
              </video>
              <video width="300" height="200" autoplay loop muted>
                  <source src="./nbp_pipe.mp4" type="video/mp4">
              </video>
          </div>
	    </div>
        <p>The top row shows backprojection reconstructions and the bottom row shows ours. Our pulse
          deconvolution and neural backprojection steps enable us to reconstruct sharper target features while mitigating the blobby artifacts of backprojection.</p>
      </section>

    </main>
  </body>
</html>

Software Heritage — Copyright (C) 2015–2025, The Software Heritage developers. License: GNU AGPLv3+.
The source code of Software Heritage itself is available on our development forge.
The source code files archived by Software Heritage are available under their own copyright and licenses.