OpenSnow's Severe Weather Forecast System
(StormNet) leverages cutting-edge artificial
intelligence to deliver advanced severe
weather forecasting with unmatched precision.
By analyzing massive datasets from
Super-Res Radar, satellite, and ground-based
sensors in real time, our AI models identify
patterns that indicate heightened risks
for:
- Lightning (cloud-to-ground strikes)
- Hail (greater than 1 inch)
- Wind (greater than 58 mph)
- Tornado (of any strength)
Unlike traditional forecasts, our
system is continuously learning and adapting,
offering faster, more accurate predictions
when every second counts.
Where did StormNet
originate?
StormNet was engineered by Andrew
Brady.
Andrew's passion for meteorology
started when he was a young child. After
school, he would read books about meteorology and, later, online articles on the subject. One highlight of his childhood was
begging his mother for a megaphone so he could
make severe weather announcements to the
neighborhood where he grew up.
Years later, in
2020, Andrew founded AtmoSphere Analytics with
one familiar goal: to improve prediction and
communication around impactful weather and,
eventually, save lives. It began with experiments using the open-source WRF model on his low-end PC, and later evolved onto second-hand Dell PowerEdge servers -- first one, then four, all running WRF in parallel. Andrew would tinker with configurations and source code, learning how it all worked along the way. Eventually, he arrived at a version that he liked -- one that, in his experiments, generally produced the most realistic forecasts for impactful weather events: a combination of open-source WRF with customized code in the physics parameterizations. He then built a system to run WRF automatically, every day, and post the results on the AtmoSphere Analytics website: the Microscale-Mesoscale Forecast System (MMFS).
While MMFS was useful and produced interesting forecasts, he wanted to take it a step further with machine learning. He had been interested in machine learning for some time, but had only tinkered with the idea briefly. He asked himself: "How can I make this even more centered on the goal of impactful weather prediction and communication?" His original plan was to apply machine learning to MMFS outputs to generate higher-resolution, refined forecasts. Then, in 2021, he arrived at a new idea that seemed simple: post-process MMFS outputs with machine learning to produce severe weather forecasts. This was the first step towards StormNet. Throughout 2021, he built an ML algorithm, called HazCast, that took MMFS outputs and produced severe weather forecasts. In 2022, he started providing HazCast forecasts on his website alongside the MMFS forecasts. Throughout 2022 and 2023, he worked on tweaking HazCast and MMFS to make them as accurate as possible.
HazCast was interesting. It produced broad forecasts that were, at times, quite accurate. It had significant limitations, however: its forecasts were imprecise, it was constrained by the biases and data availability of MMFS, and its algorithm was relatively simple, primarily due to those data limitations. The biggest limitation, though, was that Andrew felt it wasn't quite 'there' when it came to making a difference (such as saving lives). The key to making a true difference in impactful weather prediction and communication is precision and accuracy. Over time, the idea of StormNet was formed: a complex deep learning model that would take various data sources, as well as current conditions, and produce hyper-local, precise severe weather forecasts.
StormNet: Severe storm and TOrnado
Real-Time Monitoring NETwork
Andrew started working on StormNet in 2023. He quickly realized that such a complex model wouldn't be able to run on his Dell PowerEdge server cluster. It needed graphics processing units (GPUs). AtmoSphere Analytics was making some money from subscriptions at the time, but not enough to purchase massive GPUs. Andrew had to decide between abandoning the project or making the model simple enough to work without GPUs. A third option came to mind -- building a GPU server at home. He decided to embrace this crazy idea, with hopes that StormNet would eventually be a success and save lives -- it would be worth it. He researched motherboards with the specific adapters needed for high-powered GPUs and learned that nearly all of them required a data-center setup.
At this point, he decided to do this
completely from scratch. He ordered a
motherboard that had the correct adapters,
ordered the GPUs second hand, and ordered all
of the remaining supplies to build this
computing system. Then, he went to the local
home improvement store to purchase a piece of
plywood to put the system on. He had no idea how this was going to work -- or whether it would work at all -- but it was worth a shot if he could build something special. Over the next several
weeks, the parts came in and he built the
system. It didn't work at first, but after
lots of trial and error and re-builds, the
green light on the motherboard finally came on
-- it was working.

The original
StormNet training system
With this computing system working, he then moved towards actually building the model. He had to collect all of the training data he would use, then design an architecture. This had never been done before, so he had to figure it out on the fly. He tried various architectures (graph attention networks, convolutional neural networks, multi-layer perceptrons, etc.) before eventually deciding that he would need to build something custom for this task.
After weeks of trial-and-error, StormNet v0.1 was trained. This was cool -- very cool. He ran the model on various past events, such as the 2021 Mayfield, KY EF4 tornado. StormNet was actually able to predict that a tornado would form and move into Mayfield with more than 30 minutes of lead time. Trial-and-error continued through the end of 2023, and finally, in January 2024, he announced StormNet to the world. Around that time, he set up a real-time inference system; StormNet was producing 1-hour tornado predictions across the eastern CONUS every 5 minutes. It quickly gained popularity in the weather community. Andrew rapidly iterated on updates and upgrades, allowing the system to learn from its own performance. By March 2024, after a ton of R&D effort, Andrew released StormNet v1.0.
Through spring 2024, StormNet's popularity continued to skyrocket, largely due to its incredible accuracy and usefulness. New versions of StormNet were able to predict more than just tornadoes -- hail and damaging wind outputs were also added. In summer 2024, Andrew met with meteorologists from Fox Weather, The Weather Channel, RadarOmega, and others to gather valuable feedback on StormNet.
In fall
2024, OpenSnow acquired AtmoSphere Analytics
with a vision: to make this groundbreaking
technology available to many more
people. From late 2024 into 2025, Andrew
and the OpenSnow team have worked hard to
improve StormNet and bring it to where it is
today.
How can StormNet help guide
decisions?
Whether you are
planning to go hiking in a couple of hours or
planning an outdoor event days away, StormNet
can inform you about what specific weather
hazards may impact your plans.
With StormNet, you can be
confident in your severe weather
awareness.
- Hiking: A heightened risk of lightning in the 30-60min time-frame may necessitate turning around early or seeking shelter / going to a lower elevation.
- Driving: An elevated risk of hail in an area that you plan on driving to in a few hours may affect your plans and help you avoid vehicle damage or excessive traffic.
- Parking: Planning a trip and needing to park your car for a few days? An alert that hail or a tornado is possible 2-3 days in advance would help guide your decision to park your car inside vs parking it outside.
- Day-to-day life: A heightened risk of a
tornado in the 30-60min time-frame will
give you the necessary 'heads up' to pay
close attention to the weather over the
coming hour, and take cover if needed;
potentially giving you a life-saving
alert.
How does StormNet work?
StormNet is a deep-learning AI model
by OpenSnow.
It uses a proprietary combination of
various data sources to analyze the current
and predicted state of the atmosphere. Through extensive training, this AI engine learns to recognize the very complex patterns that lead to hail, damaging winds, tornadoes, and lightning.
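OpenSnow has not published StormNet's architecture, so the sketch below is only a rough, hypothetical illustration of the general idea: a deep-learning model that ingests a stack of gridded inputs (radar, satellite, and model fields) and outputs a probability for each hazard at each grid point. All layer sizes, channel counts, and names are assumptions, not OpenSnow code.

```python
# Minimal illustrative sketch only -- StormNet's real architecture is proprietary.
# All layer sizes, channel counts, and names below are hypothetical.
import torch
import torch.nn as nn

HAZARDS = ["lightning", "hail", "damaging_wind", "tornado"]

class MultiHazardNet(nn.Module):
    """Maps gridded atmospheric inputs to per-grid-point hazard probabilities."""
    def __init__(self, in_channels: int = 16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # One output channel per hazard; sigmoid turns logits into probabilities.
        self.head = nn.Conv2d(64, len(HAZARDS), kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, lat, lon) stack of radar, satellite, and model fields
        return torch.sigmoid(self.head(self.backbone(x)))

model = MultiHazardNet()
fake_inputs = torch.randn(1, 16, 128, 128)   # placeholder input grid
probs = model(fake_inputs)                   # (1, 4, 128, 128) hazard probabilities
```

The real system is far more complex -- it blends many more data sources and is continuously learning, as described above -- but the flow from gridded inputs to per-hazard probabilities is the key idea.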
How does StormNet perform in
evaluations?
We tested StormNet on over 250
severe weather events from 2024. Data from
2024 was excluded from training for validation
purposes. Here are some highlights from our
testing:
- StormNet has an average accuracy of 98.8% across all hazards (1).
- StormNet's short-range detection rate across hazards is 72% (2).
- StormNet's 50% short-range tornado POD is 67% vs. an NWS tornado warning POD of 62% (2).
- StormNet's short-range false alarm rate across hazards is 25%.
- StormNet's 50% short-range tornado FAR is 35% vs. an NWS tornado warning FAR of 70%.
- 48-hour StormNet tornado probabilities are 3,000x higher, on average, for tornado cases vs. non-tornado cases (see the quick check after this list).
  - Average tornado probability 48 hours prior to tornadoes is 22%.
  - Average baseline 48-hour tornado probability is 0.007%.
- 3-hour StormNet tornado probabilities are 175,000x higher, on average, for tornado cases vs. non-tornado cases.
  - Average tornado probability 3 hours prior to tornadoes is 35%.
  - Average baseline 3-hour tornado probability is 0.0002%.
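The ratio claims follow directly from the averages listed above. A quick arithmetic check, using the figures as stated:

```python
# Quick check of the ratios quoted above, using the stated average probabilities.
prob_48h_tornado_cases = 0.22   # 22% average, 48 hours prior to tornadoes
prob_48h_baseline = 0.00007     # 0.007% average baseline
prob_3h_tornado_cases = 0.35    # 35% average, 3 hours prior to tornadoes
prob_3h_baseline = 0.000002     # 0.0002% average baseline

print(prob_48h_tornado_cases / prob_48h_baseline)  # ~3,143 -> "3,000x higher"
print(prob_3h_tornado_cases / prob_3h_baseline)    # ~175,000 -> "175,000x higher"
```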
Brayden Barton with the University of Oklahoma evaluated a very early version of StormNet and found that 'StormNet is a very impressive deep learning tool, with its claims largely backed by the results of this study'. Barton continues: 'At short lead times prior to tornadogenesis, this study found that StormNet outperforms the National Weather Service in tornado detection percentage at lower tornado probability thresholds, while even possessing the ability to detect potential for tornadogenesis up to an hour before tornadogenesis.' (6)
The box-and-whisker plot below shows that even for an early experimental version of StormNet, mean tornado probabilities were around 25% an hour before tornadogenesis, peaking at 50% in the 10 minutes leading up to tornadogenesis.

Barton,
2024
What is different about StormNet
vs other models?
- Hourly forecasts to 168 hours.
  - StormNet produces hourly forecasts out to 7 days into the future.
  - No other product matches this precision.
- Updates every 2 minutes.
  - StormNet updates every 2 minutes, always using the latest information to guide forecasts.
  - Weather is constantly changing; StormNet re-evaluates forecasts with the constant, rapid evolution of weather in mind.
- Lightning, hail, wind, and tornadoes all in one place.
  - One model -- 4 impactful severe weather hazard predictions.
  - A unified system for evaluating and visualizing multiple hazardous weather conditions.
- Artificial intelligence and deep machine learning.
  - StormNet is a constantly evolving system, always learning and making improvements.
  - The atmosphere is fluid and, in many ways, random. We are unable to fully observe the conditions at every point, but StormNet is able to 'fill in the gaps' through state-of-the-art weather pattern recognition.
How are StormNet forecasts
different from Storm Prediction Center
convective outlooks?
SPC
- Forecasts are human-generated.
- Forecasts are valid for entire days.
- Forecasts update once per day (days 4 through 8) or twice per day (days 2 through 3).
- Forecasts are official government guidance.
- Forecasts output a 'general severe weather' risk beyond 2 days, with hail/damaging wind/tornado splits for days 1-2.
StormNet
- Forecasts are machine-generated.
- Forecasts are valid for individual hours out to 168 hours (7 days) into the future.
- Forecasts update every 2 minutes.
- Forecasts are proprietary to OpenSnow.
- Forecasts output lightning, hail, damaging wind, and tornado probabilities hourly out to 168 hours.
What do the probability colors
mean?
0-10% (none to grey): Hazard is unlikely
during the specified time period.
10-20% (darker grey): Hazard is still
unlikely during the time period, but
storms in the area may start to display
signs of the hazard in the future. Stay
weather aware during the period.
20-50% (blue): Hazard is possible during
the time period. The model is seeing signs
that storms may produce the hazard, even
if confidence is lacking. Watch future
updates very closely for changes.
50-75% (yellow to orange): Hazard is
likely to occur during the time period, or
it is currently in progress and moving
towards this location. Seek NWS guidance
and heed any warnings issued.
75-90% (red): Hazard is likely ongoing or
is very likely to occur during the time
period. The model has high confidence that
the severe weather hazard will
occur. Seek NWS guidance and heed any
warnings issued.
90-100% (pink): Hazard is very likely
ongoing or will be during the time period.
The model is very confident that the
hazard will occur in the
vicinity. Seek NWS guidance and heed
any warnings issued.
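As a rough illustration of how these bands could be applied programmatically, here is a small sketch. The thresholds mirror the list above; the function name and label strings are hypothetical and not part of OpenSnow's products.

```python
# Illustrative mapping of a StormNet probability to the color bands described above.
# The function name and labels are hypothetical; the thresholds mirror the list.
def risk_band(probability_pct: float) -> str:
    if probability_pct < 10:
        return "none/grey: hazard unlikely"
    if probability_pct < 20:
        return "darker grey: still unlikely, stay weather aware"
    if probability_pct < 50:
        return "blue: hazard possible, watch future updates closely"
    if probability_pct < 75:
        return "yellow/orange: hazard likely or approaching"
    if probability_pct < 90:
        return "red: hazard very likely or ongoing"
    return "pink: hazard very likely ongoing or imminent in the vicinity"

print(risk_band(42))  # -> "blue: hazard possible, watch future updates closely"
```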
How often does StormNet
update?
StormNet probabilities update every 2 minutes. Longer-range guidance may change little or not at all with each 2-minute update, since that data evolves at a slower pace than rapidly changing near-term conditions.
If StormNet can predict
tornadoes, does that mean that it's a
replacement for the National Weather
Service?
No, StormNet is not a replacement
for National Weather Service warnings,
watches, or other guidance. StormNet is to be
used as a supplement to any official
guidance.
How do I use StormNet? What do
the maps mean?
Short-Range Example:

StormNet outputs are plotted with
Super-Res Radar.
The example above displays damaging wind probabilities within a line of storms. In the example, we are looping Super-Res Radar data with the damaging wind probabilities. The damaging wind probabilities are 30-minute windows, valid after the radar frame. The last 4 frames are current forecasts: now to 30 minutes, 30 minutes to 1 hour, 1 hour to 2 hours, and 2 hours to 3 hours. The grey to blue contours ahead of the storm indicate elevated damaging wind probabilities.
In this example, StormNet is
considering several elements:
- Where are the storms?
- Will this storm produce damaging
winds?
- Where is damaging wind most likely to
occur within this storm?
- Which direction is the storm moving? Is
there any possibility that it may switch
directions?
- How long will the damaging wind threat
persist? Is it only for the next couple of
minutes or will it persist during the
entire 30 minute period?
The final product is contours of
probability. In this example, damaging wind
probabilities are around 40% to 50% ahead of
the main core of the storm.
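To make the frame timing concrete, here is a small sketch (illustrative only, not OpenSnow code) that builds the four forecast windows described above relative to the latest radar frame time. The radar timestamp is a made-up example.

```python
# Illustrative only: construct the four forecast windows described above,
# relative to the most recent radar frame time (hypothetical timestamp).
from datetime import datetime, timedelta

radar_time = datetime(2024, 6, 1, 21, 0)                  # example latest radar frame (UTC)
offsets_minutes = [(0, 30), (30, 60), (60, 120), (120, 180)]

windows = [
    (radar_time + timedelta(minutes=start), radar_time + timedelta(minutes=end))
    for start, end in offsets_minutes
]
for start, end in windows:
    print(f"forecast window: {start:%H:%M} - {end:%H:%M}")
```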
Longer-Range Example:
In this example, radar is turned off
and we are looking at hail probabilities for
the next day.
Specifically, these are hail probabilities for 5:00 pm to 6:00 pm local time. We can see that StormNet is evaluating the state of the atmosphere, considering how the atmosphere may evolve over the next several hours, and how that may impact hail probabilities specifically from 5:00 pm to 6:00 pm the next day.
The blues indicate lower
probabilities whereas the yellows are
localized areas where there is increased
confidence in hail occurring.
Can you explain Super-Res Radar
> Reflectivity?
Radar reflectivity measures how much
energy a radar signal bounces back from
objects in the atmosphere, like precipitation.
It indicates the size and concentration of
particles (like raindrops or snowflakes) and
is used to estimate precipitation intensity
and type.
Higher reflectivity values generally
mean larger and more numerous particles, often
associated with heavier precipitation.
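As a general radar-meteorology aside (standard practice, not specific to StormNet or Super-Res Radar), reflectivity is usually displayed on a logarithmic dBZ scale:

```python
# General radar meteorology, not StormNet-specific: the reflectivity factor Z
# (in mm^6 / m^3) is usually displayed on a logarithmic dBZ scale.
import math

def z_to_dbz(z_mm6_per_m3: float) -> float:
    return 10.0 * math.log10(z_mm6_per_m3)

print(z_to_dbz(10_000))  # 40.0 dBZ, typical of heavy rain
```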
Can you explain Super-Res Radar
> Velocity?
Radial velocity measures the speed at which precipitation (like raindrops or snowflakes) is moving toward or away from the radar site.
Using the Doppler effect, the radar detects changes in the frequency of the returned signal caused by motion. If precipitation is moving toward the radar, the returned frequency increases (an inbound velocity), and if it's moving away, the frequency decreases (an outbound velocity).
This information helps
meteorologists understand wind patterns inside
storms, which can reveal important details
like rotation in a thunderstorm or wind shear
that might indicate severe weather.
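The underlying relationship is standard radar physics, not StormNet-specific: the measured frequency shift is proportional to the radial velocity and inversely proportional to the radar wavelength. A minimal sketch, assuming a roughly 10 cm (S-band) wavelength:

```python
# General radar physics, not StormNet-specific: the two-way Doppler shift is
# f_d = 2 * v_r / wavelength, where v_r is the radial velocity.
def doppler_shift_hz(radial_velocity_m_s: float, wavelength_m: float = 0.10) -> float:
    # A default wavelength of ~10 cm is roughly S-band (WSR-88D-class radars).
    return 2.0 * radial_velocity_m_s / wavelength_m

print(doppler_shift_hz(20.0))  # 400 Hz shift for precipitation moving 20 m/s radially
```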
Can you explain Super-Res Radar
> Spectrum Width?
Spectrum width measures the variability or
spread in the velocities of precipitation
particles within a radar beam. Instead of
showing the average speed like radial
velocity, it tells how diverse the speeds
are—like if some raindrops in a radar sample
are moving faster or slower than others.
A small spectrum width means the
particles are all moving at nearly the same
speed, while a large spectrum width suggests
turbulence, wind shear, or other chaotic
motion. This helps meteorologists identify
areas of atmospheric instability, like strong
gust fronts or tornadoes.
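Conceptually, spectrum width behaves like the spread of velocities inside one radar sample volume. The sketch below is only an analogy -- real radars estimate it from the returned signal spectrum, not from individual drop velocities -- and the sample values are made up:

```python
# Conceptual illustration only: spectrum width behaves like the standard
# deviation of the velocities within a single radar sample volume.
import statistics

calm_sample = [12.0, 12.5, 11.8, 12.2]       # m/s, nearly uniform motion
turbulent_sample = [5.0, 18.0, -2.0, 14.0]   # m/s, chaotic motion

print(statistics.pstdev(calm_sample))        # small spectrum width (~0.3 m/s)
print(statistics.pstdev(turbulent_sample))   # large spectrum width (~7.8 m/s)
```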
Can you explain Radar +
Risk?
With Radar + Risk, you can visualize
the current and recent radar with StormNet
hazard probabilities overlaid. The first group of timesteps shows recent radar and StormNet frames, while the last 4 frames are the StormNet hazard risk probabilities for the next 30 minutes, 30 minutes to 1 hour, 1 hour to 2 hours, and 2 hours to 3 hours.

In the above example, we visualize
the radar and lightning probabilities of a
line of thunderstorms in Nebraska. Watch as
the StormNet lightning probabilities remain
elevated along and ahead of the storms before
pushing out ahead of the storms in the last 4
forecast frames.
Have additional
questions?
Send an email to hello@opensnow.com and
a real human will respond within 24
hours.
Notes
- Accuracy in machine learning
statistics is defined as the proportion of
grid-points in a test set where the
prediction is correct. This includes both
true positives and true negatives. Since
true negatives dominate these datasets, a
high accuracy percentage is
expected.
- NWS POD/FAR Statistics are
from:
- Detection Rate or Probability of Detection (POD) is defined as the proportion of grid-points in a test set where the hazard is forecast (>50%) and actually occurs (within n miles of the point). False Alarm Rate (FAR) is defined as the proportion of grid-points in a test set where the hazard is forecast (>50%) but does not occur. Higher POD is better, lower FAR is better (a worked example follows these notes).
- This plot is based on internal evaluations on a 250+ event benchmark. This benchmark consists of a diverse set of severe weather events (or non-events) from 2024. These events were held out of training, so the model had never seen them. Higher POD is better, lower FAR is better.
- This plot compares StormNet with other
state-of-the-art lightning prediction
models. Sources for these models,
including published evaluation results,
are found below.
- HREF Calibrated Thunder Ensemble:
- LightningCast:
- Seamless Lightning Nowcasting (did not publish POD/FAR, only CSI):
- Barton, Brayden. (2024). Evaluation of STORM-Net 1-hour Tornado Forecast Detection Prior to Tornadogenesis of Significant Tornadoes during February - Early May 2024.
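As a worked example of the POD and FAR definitions above, here are the standard contingency-table formulas with hypothetical counts chosen to reproduce the 67% POD and 35% FAR figures quoted earlier:

```python
# Standard contingency-table verification metrics; the counts are hypothetical.
hits = 67          # hazard forecast (>50%) and observed (within n miles)
misses = 33        # hazard observed but not forecast
false_alarms = 36  # hazard forecast but not observed

pod = hits / (hits + misses)                 # probability of detection (higher is better)
far = false_alarms / (false_alarms + hits)   # false alarm rate (lower is better)

print(f"POD = {pod:.0%}, FAR = {far:.0%}")   # POD = 67%, FAR = 35%
```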