Abstract
Chemical, biological, radiological, nuclear, and explosives incidents require rapid detection and characterization for an appropriate response. For a nuclear detonation, imagery from visible-light cameras, coupled with numerical models, may be used to locate the cloud and characterize fallout deposition. Films from the United States’ nuclear testing era constitute the only sizeable collection of imagery depicting high-yield detonations. These films offer unique insights into the characteristics of flows at scales that are difficult to replicate experimentally, and they are a valuable source of data for validating models of nuclear fallout transport, whether as part of emergency response or forensic activities. In this work, we apply modern computer vision and machine learning techniques to identify and track the cloud automatically and to determine the time dependence of some of its features. We trained a ResNet-18 image classifier on hundreds of images to categorize nuclear cloud morphology. Each category, or cloud regime, is determined by early cloud evolution and is associated with constitutive properties of the flow, such as the distribution of vorticity. Next, we identified keypoint features using the KAZE algorithm and tracked them across film frames, allowing us to determine the dimensions and velocities of the cloud over time. Converted to real-world units, these measurements provide valuable experimental data for the development and validation of nuclear cloud models. We compared the results of this method against manual cloud-rise measurements from two different films. For one of the films, our automated method accelerated feature extraction without sacrificing measurement accuracy.
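
The abstract does not describe the training setup, so the following is only a minimal sketch of how a pretrained ResNet-18 could be fine-tuned to classify cloud morphology; the class count, dataset layout, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: fine-tuning a pretrained ResNet-18 for cloud-morphology classes.
# The number of classes, folder layout, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 3  # assumed number of cloud regimes

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                 # ResNet-18 expects 224x224 inputs
    transforms.Grayscale(num_output_channels=3),   # film frames are monochrome
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory of labeled frames: frames/<regime_name>/*.png
train_set = datasets.ImageFolder("frames", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace final layer

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```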
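Similarly, the keypoint step could be sketched with OpenCV's KAZE detector and a brute-force matcher between consecutive frames; the file names, pixel-to-meter calibration, and frame interval below are placeholders, and the paper's actual tracking and unit-conversion procedure may differ.

```python
# Minimal sketch: KAZE keypoints detected in consecutive film frames and matched
# to estimate cloud displacement. File names, the pixel-to-meter scale, and the
# frame interval are placeholder assumptions.
import cv2
import numpy as np

frame_a = cv2.imread("frame_0100.png", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("frame_0101.png", cv2.IMREAD_GRAYSCALE)

kaze = cv2.KAZE_create()
kp_a, desc_a = kaze.detectAndCompute(frame_a, None)
kp_b, desc_b = kaze.detectAndCompute(frame_b, None)

# Match descriptors between frames; L2 norm suits KAZE's floating-point descriptors.
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)

# Median vertical displacement of the best matches as a rough cloud-rise estimate.
dy = np.median([kp_b[m.trainIdx].pt[1] - kp_a[m.queryIdx].pt[1]
                for m in matches[:50]])

METERS_PER_PIXEL = 2.5   # placeholder calibration from known scene geometry
FRAME_INTERVAL_S = 1.0   # placeholder time between the two frames
rise_velocity = -dy * METERS_PER_PIXEL / FRAME_INTERVAL_S  # image y grows downward
print(f"Estimated cloud rise velocity: {rise_velocity:.1f} m/s")
```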