Session
  • Presentation | H14D: Advances in Machine Learning for Earth Science: Observation, Modeling, and Applications III (Oral)
  • H14D-10: Neural Radiance Field (NeRF) for 3D Reconstruction from Drone Imagery
  • Location: New Orleans Theater B, NOLA CC

Author(s):
Evan Hammam, University of Georgia (First Author, Presenting Author)
Nancy O'Hare, University of Georgia
Deepak Mishra, University of Georgia


Making detailed 3‑D maps of landscapes from drone photos is slow and often fails when the ground is shiny, snowy, or lacks obvious landmarks. We tested a newer approach called a Neural Radiance Field (NeRF). Instead of stitching photos together, NeRF teaches a small neural network how light moves through every point in the scene, so a computer can redraw the view from any angle.
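
The abstract gives no implementation details, but the core operation it describes, predicting colour and density at points in space and compositing them along camera rays, can be sketched briefly. The Python/NumPy sketch below is a hypothetical illustration only: field_fn stands in for the trained network, and the toy sphere scene is invented for demonstration, not taken from the study.

```python
import numpy as np

def render_ray(origin, direction, field_fn, near=0.1, far=10.0, n_samples=64):
    """Composite a pixel colour along one camera ray, NeRF-style."""
    t = np.linspace(near, far, n_samples)                    # depths sampled along the ray
    pts = origin[None, :] + t[:, None] * direction[None, :]  # 3-D sample positions
    rgb, sigma = field_fn(pts)                                # colour (n,3) and density (n,)
    delta = (far - near) / n_samples                          # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                      # opacity of each small segment
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # light surviving to each sample
    weights = alpha * trans                                   # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)               # composited RGB for this pixel

# Toy stand-in for the trained network: a bright sphere of radius 1 at the origin.
def toy_field(pts):
    inside = np.linalg.norm(pts, axis=1) < 1.0
    rgb = np.where(inside[:, None], [0.9, 0.9, 1.0], 0.0)
    sigma = np.where(inside, 5.0, 0.0)
    return rgb, sigma

pixel = render_ray(np.array([0.0, 0.0, -4.0]), np.array([0.0, 0.0, 1.0]), toy_field)
print(pixel)  # mostly the sphere's colour, since the ray passes through it
```

In a real NeRF, field_fn is a small neural network trained so that rays rendered this way reproduce the drone photos, which is what lets the scene be redrawn from viewpoints the drone never visited.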

We flew a six‑band multispectral camera over forests and hills near Fairbanks, Alaska in summer, autumn, and mid‑winter. After a careful three‑step alignment to ensure the red, green, and blue color bands lined up exactly, the NeRF learned each scene in just 30–70 minutes, about one‑tenth the time of the best‑known mapping software, and produced highly realistic 3‑D pictures. Snow‑covered landscapes trained fastest and looked sharpest, suggesting their smooth, bright surfaces help the model learn.
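
The abstract does not spell out the three alignment steps, so the following is only a generic illustration of band-to-band co-registration, not the authors' pipeline: a hypothetical Python sketch that warps one spectral band onto a reference band using OpenCV's ECC algorithm with an affine motion model.

```python
import cv2
import numpy as np

def align_band(reference, band, iterations=200, eps=1e-6):
    """Warp `band` so it registers with `reference` (both float32, same size).

    Generic ECC-based co-registration example; the study's own three-step
    alignment procedure is not described in the abstract.
    """
    warp = np.eye(2, 3, dtype=np.float32)                       # initial affine transform
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, iterations, eps)
    _, warp = cv2.findTransformECC(reference, band, warp,
                                   cv2.MOTION_AFFINE, criteria, None, 5)
    h, w = reference.shape
    return cv2.warpAffine(band, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

# Hypothetical usage: register every band of a multispectral frame to one reference band.
# bands = [cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(np.float32) for p in band_paths]
# aligned = [align_band(bands[0], b) for b in bands]
```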

Our next goal is to tie these digital scenes to real‑world coordinates. Once that hurdle is cleared, drones could stream near‑real‑time, centimeter‑scale maps for search‑and‑rescue missions, wildfire planning, and tracking seasonal changes in remote Arctic terrain.


