How to Define a 1 MOA Rifle
- Peyton
- 5 days ago
- 10 min read
A proposal for how to determine the precision of a rifle system and why an R99 value of .5 MOA should be the standard for a "1 MOA" rifle

The internet is chock full of people making claims about the precision of their rifles. One guy shoots a mighty fine five shot group, another may shoot ten or twenty shots, yet another may shoot a 5x5. Manufacturers may even claim that their rifles are 1 MOA guaranteed. All seek to accomplish the same thing: determine the precision of their rifle or handloads (so that we may brag to our friends and internet acquaintances for accolades!).
Having a strong estimate of a rifle's precision matters a great deal to me, because powerful tools like Applied Ballistics' Weapon Employment Zone (WEZ) analysis depend on having good data to feed the simulation so that I can determine my own effective range, whether on targets or game animals. For confidence in my weapon system, good dispersion data is just as critical as solid drop data at range.
At Reloading All Day, we try to lead with scientific inquiry and testing; no small part of the last year has been devoted to educating riflemen on the pitfalls of relying on small sample sizes. We shoot what most would consider extremely large samples in much of our testing, trying to provide the best data and analysis we can to the community. We also realize that the average rifleman has no desire or need to test to that depth, so in this article we'll propose a method of testing and defining rifle precision that conserves components while offering a much more representative picture of the true precision of a rifle.
How Do We Currently Define a 1 MOA Rifle?
Below, you can see an example of how groups grow with sample size. I created this visual in GNU Octave: "shots" are drawn from a bivariate normal distribution, meaning the horizontal and vertical coordinates of each shot both follow normal distributions (refer back to Statistics and the Reloader for a review of Normal Distributions). These graphs also assume that the standard deviations in the vertical and horizontal directions are equal, which should be the case for rifle dispersion with a free floated barrel. What's more, these groups all come from the exact same distribution: it is centered on (0, 0) and the population standard deviation was chosen to be 0.15 inches. This graphic is indicative of what you might see on paper if you physically fired all of these groups.
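For anyone who wants to tinker with this kind of experiment themselves, a minimal GNU Octave sketch along these lines will do; since the draws are random, the exact numbers will differ from Table 1.

```octave
% Draw "shots" from a bivariate normal distribution (sigma = 0.15" per
% axis, centered on (0,0)) and measure the extreme spread of groups of
% increasing size.
sigma = 0.15;                      % population SD per axis, inches
for n = [5 10 20 100 500 1000]
  shots = sigma * randn(n, 2);     % n shots, columns are (x, y) in inches
  es = 0;                          % extreme spread: widest shot-to-shot distance
  for i = 1:n-1
    for j = i+1:n
      es = max(es, norm(shots(i,:) - shots(j,:)));
    endfor
  endfor
  printf("%4d rounds: extreme spread = %.3f\"\n", n, es);
endfor
```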

Table 1
| Sample Size | Group Extreme Spread |
| --- | --- |
| 5 Rounds | 0.511" |
| 10 Rounds | 0.708" |
| 20 Rounds | 0.785" |
| 100 Rounds | 1.118" |
| 500 Rounds | 1.225" |
| 1000 Rounds | 1.470" |
As you can see in Table 1, the extreme spread climbs rapidly with sample size. While it's not impossible to get a smaller 10 shot group than a 5 shot group from the same rifle, it is EXCEEDINGLY rare and it gets more improbable as the sample sizes grow. I don't think this is a stretch of the imagination to most riflemen. Sometimes we just get lucky.
This is one of the reasons that extreme spread is a lousy measure of dispersion in the long run — statistically, true maximums are impossible when using unbounded probability distributions. The group growth will slow with additional samples, but it will always continue to grow. Practically, it takes a huge number of rounds to approach a value that could be labelled as a maximum dispersion value, leading to a waste of components and time for the average shooter. It's great for competition because there's rarely a tie, but for statistical modelling, it's very unstable.
Additionally, if we average the extreme spreads of multiple groups, no single group can ever be larger than the true maximum, and the smaller groups only pull the average down, further from the value we're actually trying to estimate. Take a 5x5 shot aggregate for example:

As an aside, I would also like to point readers to Group 1 specifically. This was a completely random draw of five points from the distribution, yet it looks very similar to groups we often see on paper: four shots clustered tightly with a fifth out of the group (in this case, about 0.4 inches out). Most would want to call this a "flier" or assume they pulled the shot, but this is just the nature of dispersion.
Table 2
| Group | Extreme Spread |
| --- | --- |
| 1000 Round Group | 1.442" |
| Group 1 | 0.560" |
| Group 2 | 0.612" |
| Group 3 | 0.507" |
| Group 4 | 0.253" |
| Group 5 | 0.586" |
Based on extreme spread, we would be very tempted to call this a sub-MOA rifle! I mean, it averaged 0.504 inches across the 5x5, for crying out loud! Only once you've fired substantially more rounds do you find out that the extreme spread of the platform exceeds 1.4". That being said, I think it would be equally disingenuous to call this a 1.25 MOA rifle, because the shots needed to produce that measurement are very unlikely to occur in the same group outside of large samples. The struggle is with extremes and absolutes. The shots that grow the extreme spread are just that: extreme, and encompassing all of them makes the dispersion cone of a rifle inordinately large, because we can never be 100% certain in life.
So what's the line? If too small a sample size doesn't accurately predict dispersion potential and firing more rounds can serve to continually expand the group extremes, what's the solution?
It's all in how you measure it.
The Next Level of Measuring Precision
Increasing sample size is a huge help when collecting data, as we've seen, but that's incredibly expensive in 2025 between powder, projectiles, primers, barrel life, etc. Just because we run tests involving many hundreds of rounds to get to the bottom of a question doesn't mean that's realistic for the average shooter. If you've already shot a 5x5 looking for rifle precision, all that's needed is a shift in how the data is analyzed.
Previously we described the groups in the 5x5 with extreme spread: a metric defined by only two shots per group, 10 shots across the whole test. If we instead measure with something like Mean Radius, every one of the 25 shots contributes to the measurement instead of just those 10, putting the other 60% of the shots fired back to work. Using the exact same 5x5 generated above (Figure 2), you can see in Table 3 that the mean radii of the 5 shot groups fall around the value for the 1000 shot group instead of predicting something much lower, as extreme spread did. This already makes it look like a more effective metric than ES for predicting rifle precision.
Table 3
| Group | Mean Radius |
| --- | --- |
| 1000 Round Group | 0.188" |
| Group 1 | 0.162" |
| Group 2 | 0.190" |
| Group 3 | 0.169" |
| Group 4 | 0.099" |
| Group 5 | 0.173" |
If the average of the 5x5 mean radii is taken, that results in a value of 0.159", only about a 15% difference in this example from the 1000 round group. In terms of a 95% confidence interval, that would be 0.126 to 0.190 inches. This means that if we repeated this test many times, roughly 95% of the intervals constructed this way would contain the true mean radius. You can also see that the "tested" 1000 round mean radius falls within these bounds, lending some confidence that this isn't too kooky of an idea.
Confidence intervals will vary with the consistency of the system you're testing and can be calculated for each individual test, but this showcases the power of what is happening here. Just by using a different metric we have a much closer approximation of the rifle's true precision.
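For the curious, here is a rough sketch of how those numbers can be reproduced: the mean radius of a single group from its plotted shot coordinates (the coordinates shown are hypothetical placeholders), then the average of the five Table 3 mean radii with a simple normal-approximation 95% interval. That approximation lands close to the interval quoted above, though it isn't the only way to build one.

```octave
% Mean radius of one group from its plotted (x, y) shot coordinates:
% the average distance of every shot from the group center.
% These coordinates are hypothetical placeholders, in inches.
group = [ 0.10  0.05;
         -0.12  0.02;
          0.03 -0.15;
         -0.02  0.11;
          0.20 -0.08 ];
center = mean(group);                          % group center (mean point of impact)
radii  = sqrt(sum((group - center).^2, 2));    % each shot's distance from center
printf("Mean radius of this group: %.3f\"\n", mean(radii));

% Average the five group mean radii from Table 3 and attach a simple
% normal-approximation 95% interval over those five values.
mr  = [0.162 0.190 0.169 0.099 0.173];
avg = mean(mr);
se  = std(mr) / sqrt(numel(mr));
printf("Average mean radius: %.3f\" (95%% CI roughly %.3f\" to %.3f\")\n", ...
       avg, avg - 1.96*se, avg + 1.96*se);
```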
So, how do we apply this to determine the overall precision of the rifle in a way that makes sense to us, as mean radius can be difficult to visualize on paper?
Enter: the R99 value
An R99 value is a generalization of Circular Error Probable (CEP), which is traditionally the 50% radius. Simply put, the R99 value is an estimate of the smallest-radius circle that will cover 99% of the shots in a group. CEP values have been used to characterize the dispersion patterns of firearms for years, especially in large caliber applications (think artillery). This is nothing new, but I do see it as an improvement over the methods currently used by a large segment of the shooting community. In a greatly simplified manner, you can multiply the Mean Radius by 2.42 to find an approximate R99. That factor comes from the ratio of R99 to mean radius under the Rayleigh Distribution's quantile function, a topic that I may go further into at a later date.
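For those who want a quick numeric check of that multiplier under the Rayleigh model, the ratio works out like this:

```octave
% Under a Rayleigh model for radial miss distance (scale sigma):
%   mean radius = sigma * sqrt(pi/2)
%   R99         = sigma * sqrt(-2 * log(1 - 0.99))   (the 99% quantile)
% The scale cancels out, leaving a constant ratio.
mean_radius_factor = sqrt(pi / 2);           % ~1.2533
r99_factor         = sqrt(-2 * log(0.01));   % ~3.0349
printf("R99 / mean radius = %.3f\n", r99_factor / mean_radius_factor);   % ~2.42
```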
Reloading All Day 1 MOA Rifle Acceptance Criteria: an R99 Value of .5 MOA or less.
To recap, the procedure for finding the R99 value of your rifle is as follows (a code sketch of the full calculation appears after the list):
Fire five 5-shot groups as a recommended minimum
Calculate the Mean Radius of each group (group measurement apps help a lot here)
Calculate the average Mean Radius of all groups
Multiply the average Mean Radius by 2.42
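Here is a sketch of the whole recipe in GNU Octave, using simulated 5-shot groups (sigma of 0.15" per axis) purely as stand-ins for real measured coordinates; swap in your own per-shot measurements to use it on paper targets.

```octave
% From per-shot coordinates for each group to an estimated R99.
% The five 5-shot groups are simulated here (sigma = 0.15" per axis)
% as placeholders for real measured coordinates.
sigma = 0.15;
num_groups = 5;
group_mr = zeros(1, num_groups);
for g = 1:num_groups
  shots  = sigma * randn(5, 2);               % step 1: one 5-shot group
  center = mean(shots);
  group_mr(g) = mean(sqrt(sum((shots - center).^2, 2)));  % step 2: mean radius
endfor
avg_mr = mean(group_mr);                      % step 3: average of the mean radii
r99    = 2.42 * avg_mr;                       % step 4: multiply by 2.42
printf("Average mean radius: %.3f\"\n", avg_mr);
printf("Estimated R99: %.3f\" radius (%.3f\" diameter circle)\n", r99, 2 * r99);
```

If the coordinates are measured in inches at 100 yards, an R99 radius at or below roughly 0.52" (0.5 MOA) meets the acceptance criterion above.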
This would mean that our 5x5 test would result in an R99 value of 0.385" (remember this is a radius), and therefore we would expect this rifle and ammunition to have 99% of its shots fall within a circle of 0.769" in diameter, assuming a perfect zero. Thinking back to our confidence interval, that would result in an R99 value of between 0.305" and 0.460", and thus a circle of between 0.610" and 0.920" in diameter. I would expect this to be a rifle that most shooters would be very happy to have in their safe, and it would fall comfortably within our definition of a 1 MOA rifle.

Again, this assumes that the rifle is perfectly zeroed on the population center, which is another problem entirely. To refine this, each shot would need to be measured from the aim point and a different statistical model used (the Rice Distribution) to decouple the mean point of impact from the point of aim. For the sake of simplifying this testing method down to an easily calculated metric, I think the possibility of a small induced error is an acceptable trade for most people.
This testing could also be done with a single 25 round group, with the mean radius calculated directly from it, if every shot could be plotted precisely within the group. Systems like a ShotMarker could make this a reality for some shooters, but for those of us measuring groups by hand, it's best to stick with smaller groups so the shots don't merge into one ragged hole and each impact can still be pinpointed exactly.
I estimate that this method will be able to characterize the precision of a rifle within about ±20%. While this is not exactly a perfect estimate, it is a far better estimate than a standard ES averaged 5x5, which would result in an error of >50% in this instance, while using the same amount of ammunition fired. As always, if your needs require more measurement precision, larger sample sizes are your friend. If 50 total shots were fired, your estimated confidence interval would shrink and the error would become ~±10%.
The Elephant in the Room
What happens to those shots outside the R99 circle? Why are they "thrown out?" Do they not matter? Are they a flier? Why are you "throwing out data to make your group better?"
Remember earlier how I said the problem with ES lies in extremes and absolutes? That often holds true even when using statistics. Many probability distributions are unbounded, which means they have no upper or lower limit on the values that could theoretically occur. The Normal Distribution covers 99.73% of the population within 3 standard deviations of the mean. That's a lot, but 0.27% still lies outside it. Extend to 6 standard deviations from the mean and the probability of something that extreme or more occurring is tiny (0.0000001973%), but it's never zero. You can never be 100% certain that something will or won't happen until it does or doesn't. We can't predict the future perfectly, but we can model things pretty closely sometimes.
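If you'd like to verify those tail probabilities yourself, Octave's built-in complementary error function gets you there in a couple of lines:

```octave
% Two-sided tail probability of a normal distribution beyond k standard
% deviations from the mean is erfc(k / sqrt(2)).
printf("Beyond 3 SD: %.4f%% of shots\n", 100 * erfc(3 / sqrt(2)));   % ~0.27%
printf("Beyond 6 SD: %.10f%% of shots\n", 100 * erfc(6 / sqrt(2)));  % ~0.0000002%
```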
The probability of any single shot landing outside the R99 circle is 1%, and the probability of seeing two or more shots that extreme in a realistic-length shot string is low. To arrive at a value that is repeatable and that we can reliably converge on over time, we have to set a limit on how confident we want to be.
Rifle Precision: Defined
This method of testing rifle precision offers a much more stable and well-defined criterion than Extreme Spread. A prediction of rifle precision made this way does not vary nearly as much as Extreme Spread does from small to extremely large samples. It appears that acceptable confidence intervals can be obtained from as few as 25 rounds, which is most likely adequate for the needs of most shooters. This is not the only way to characterize precision in rifles, and I would be confident in saying it is far from the best. This is a highly simplified approach to what could be complex statistics, pared down to the data and metrics that the average shooter has access to on a regular basis (such as mean radius). I would go so far as to say that this method is not revolutionary. Bryan Litz has been using mean radius multiplied by 2.1 since at least 2016 (Modern Advancements in Long Range Shooting: Volume II) to estimate R95. "Ballistipedia," by its various authors and editors, is a very deep dive into modelling rifle precision and has existed since around 2013. What is my own here is the dive into the statistics of rifle groupings, the creation of code to simulate groupings, and my derivation of the proper multiplication factor for R99.
This will never replace Extreme Spread measurements in many contexts, and I don't want it to. ES is fast, easy, and offers the simplicity required for scoring hundreds of targets in a timely fashion. There is, however, plenty to be gained from modelling and characterizing the precision and function of a rifle system in a more advanced way.
We look forward to hearing your comments and opinions on the subject here or on any of our various social media channels!