Rendered.AI Announces 200 Times Speed Improvement by Moving Simulation Code to NVIDIA GPUs
Rendered.AI CEO Nathan Kundtz, PhD, kicked off day two of the NVIDIA GTC event with an attention-grabbing announcement of Synthetic Aperture Radar (SAR) synthetic data simulation performance improvements. In a head-to-head comparison of an Intel Core i5-10300H CPU and an NVIDIA Tesla V100-SXM2 GPU, Rendered.AI saw more than a 200-fold improvement in electromagnetic simulation calculation time.
BELLEVUE, Wash., October 6, 2020 (Newswire.com) - Rendered.AI (www.rendered.ai) — the leading physics-based synthetic data and simulation workflow platform supplier for achieving provable outcomes for SAR — announced today a game-changing increase in compute performance on NVIDIA GPUs. At the beginning of the second day of NVIDIA's GTC event, in a session titled Synthetic Data Generation with Active Learning Workflows for Radar [A21119], Dr. Nathan Kundtz, Rendered.AI's CEO, said, "In the core electromagnetic simulations, we see over a 200x improvement between a calculation performed on an Intel Core i5-10300H CPU and a Tesla V100-SXM2." He continued: "These tremendous improvements take advantage of raw GPU performance increases and the fact that each of the slow-time steps can be separated and simulated in parallel with the others. Taken together, these improvements take us a long way toward being quite speedy with our calculations."
NVIDIA and Rendered.AI worked together over the last three months to take Rendered.AI's compiled C code and transition it into CUDA-based code that runs on NVIDIA GPUs. Faster simulations mean not only quicker results but also the ability to render things from sensor data that were impractical before. At any given time, roughly 70% of the Earth is covered by cloud and, of course, 50% of it by darkness. SAR sensors allow us to see and monitor the planet at night and through rain, fog, and forest-fire smoke. Additionally, we can see truths we did not expect because they are invisible to the naked eye: polarization, diffraction, and polarization-dependent reflection coming off of the scene can all give clues as to what is actually happening. This new speed standard marks the beginning of a journey that will further the relationship between data for training, machine learning, and the full range of information returned by these sensors.
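The parallelism Dr. Kundtz describes follows from the structure of SAR data collection: the radar emits one pulse at each successive platform position along the flight path (the "slow time" axis), and each pulse's echo depends only on the scene geometry at that instant, not on any other pulse. The toy Python/NumPy sketch below illustrates that independence with a simple point-target model; the function names, geometry, and wavelength are our illustrative assumptions, not Rendered.AI's actual simulation code.

```python
import numpy as np

# Toy SAR slow-time simulation: one radar pulse per platform position.
# Because each pulse's echo depends only on the geometry at that instant,
# every slow-time step can be computed independently -- the property that
# lets the simulation map each pulse to its own GPU thread or block.

WAVELENGTH = 0.031  # meters (~X-band); illustrative value only


def pulse_echo(platform_pos, targets):
    """Echo for a single pulse: coherent sum of point-target returns."""
    ranges = np.linalg.norm(targets - platform_pos, axis=1)
    # Two-way phase delay for each point target.
    phases = 4.0 * np.pi * ranges / WAVELENGTH
    return np.sum(np.exp(-1j * phases))


# Platform positions along the flight path (the slow-time axis).
positions = np.stack([np.linspace(0.0, 100.0, 256),   # along-track x (m)
                      np.zeros(256),                   # cross-track y (m)
                      np.full(256, 5000.0)], axis=1)   # altitude z (m)

# Two hypothetical point targets on the ground.
targets = np.array([[50.0, 1000.0, 0.0],
                    [60.0, 1200.0, 0.0]])

# Serial loop over slow time; each iteration reads no state from the
# others, so the loop body parallelizes trivially across GPU threads.
echoes = np.array([pulse_echo(p, targets) for p in positions])
```

Since no iteration depends on another, the same loop body can be dispatched as thousands of concurrent GPU threads, which is consistent with the large speedups reported above.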
“NVIDIA’s GPUs are a fundamental game-changer in the world of synthetic data, simulation, and active learning workflows,” said Dr. Kundtz, “and it will broaden the way we think about and use SAR and other sensor data starting today.”
For more information, visit us at Rendered.ai or on LinkedIn.
Rendered.AI currently offers simulation libraries with active learning workflows for visible and electro-optical (EO) imagery (panchromatic and RGB), thermal infrared (IR), shortwave infrared (SWIR), synthetic aperture radar (SAR) imagery (including uncompressed raw), and LIDAR.
Rendered.AI is a privately held company based in Bellevue, Washington.
Media or Business Inquiries for Rendered.AI:
Head of Customer Innovation