
Statistics for differentiating between groups?

Forget about testing groups. You are only interested in testing loads. The difference might sound a bit trivial and semantic, but it is not.
I'm not sure how you're differentiating between testing groups and testing loads here. Can you elaborate or provide an example?
 
No. 2 rounds per group. Lots and lots of 2-round groups. Then a simple t-test. Very easy to do, very simple analysis. I wrote an essay on this once.
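If it helps, here's roughly what that analysis looks like (a rough sketch in Python; the spread numbers are invented, and I'm just using SciPy's off-the-shelf Welch t-test as the "simple t-test"):

```python
# Rough sketch of the "lots of 2-shot groups, then a t-test" idea.
# Each number is the center-to-center spread, in inches, of one 2-shot group;
# the values here are invented purely for illustration.
import numpy as np
from scipy import stats

load_a = np.array([0.41, 0.66, 0.35, 0.58, 0.72, 0.49, 0.55, 0.38, 0.61, 0.44])
load_b = np.array([0.52, 0.79, 0.63, 0.71, 0.88, 0.57, 0.69, 0.75, 0.60, 0.82])

# Welch's t-test: is the mean 2-shot spread of Load B different from Load A's?
t_stat, p_value = stats.ttest_ind(load_a, load_b, equal_var=False)
print(f"mean A = {load_a.mean():.2f} in, mean B = {load_b.mean():.2f} in")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```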

What exactly are you seeing in two rounds?
 
I'm not sure how you're differentiating between testing groups and testing loads here. Can you elaborate or provide an example?
You are generally interested in whether Load B is more precise than Load A (your current best load). Your group from each load is an n of 1. If you shot another group of Load B, it would be different than the first group. So, you want to know something about the population of groups possible from Loads A and B.
What exactly are you seeing in two rounds?
Precision with the least amount of environmentally induced variance.
 
Ah, I see what you're getting at Brent. I am thinking of this from a different angle though.

Let's say I buy a few boxes of Nosler e-tip factory ammo (load A) and a few boxes of Hornady superformance CX factory ammo (load B) for my 7mm-08. For the sake of argument, let's say I have an indoor 100yd range, all the time in the world, and a hardcore bench vice setup that holds my rifle in exactly the same position for each shot. In other words, let's forget about environmental variables, shooter error, barrel warmup, barrel fouling, etc etc. Everything except the baseline rifle-ammo variables themselves.

Theoretically, if I were to shoot all the rounds of Load A I could possibly shoot before burning out the barrel and measure the exact (x,y) position of each round, that would give me the sample space of load A, i.e., the set of all possible locations where a bullet from load A could land. From that, I could construct the probability density function associated with load A; this is the "true" or "underlying" distribution of A. Since we obviously can't fire and observe all possible shots, we can only infer what that true distribution looks like based on a relatively small number of shots fired (i.e., samples taken from A). This is why we call it inferential statistics.

What I want is a test of whether the true distribution of A is different than the true distribution of B based on a single event (i.e., a set of samples, aka a group) observed from each load. In other words, I want to be able to tell whether the difference I actually see between the group from load A and the group from load B is anything more than random variation.
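Just to make that concrete, here's a toy sketch in Python (the 0.30 MOA sigma is completely made up): even when every shot comes from the same "true" distribution, each observed group is only one noisy sample from it.

```python
# Toy illustration: one "true" impact distribution for Load A, many observed groups.
# The 0.30 MOA per-axis standard deviation is invented purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
sigma_a = 0.30  # hypothetical "true" per-axis dispersion of Load A, in MOA

def extreme_spread(shots):
    # largest center-to-center distance between any two shots in the group
    diffs = shots[:, None, :] - shots[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).max()

# Five 5-shot groups drawn from the SAME underlying distribution
for i in range(5):
    group = rng.normal(0.0, sigma_a, size=(5, 2))  # (x, y) impacts in MOA
    print(f"group {i + 1}: extreme spread = {extreme_spread(group):.2f} MOA")
```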
 
You are generally interested in whether Load B is more precise than Load A (your current best load). Your group from each load is an n of 1. If you shot another group of Load B, it would be different than the first group. So, you want to know something about the population of groups possible from Loads A and B.

Precision with the least amount of environmentally induced variance.
So you’re shooting multiple two shot groups of the same load and then comparing those results to multiple two shot groups from another load?

You math nerds are using words and phrases that are way over my head…
 
Precision with the least amount of environmentally induced variance.

You're not limiting the amount of environmentally induced variance by shooting fewer shots in each group. You're simply not observing the variance, which is basically like saying ignorance is bliss.

This is, in point of fact, one of the most widespread and dangerous misunderstandings I encounter in the shooting world. For a given load-rifle combo, the expected size of 3 shot groups will always be smaller than the expected size of 5 shot groups, which will always be smaller than 7 shot groups, etc. This is not because you've limited the environmental variables, it's an inexorable result of basic math.
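You can check the "basic math" part with a quick simulation; there is no environmental term anywhere in it (a rough sketch in Python, arbitrary units):

```python
# Monte Carlo: expected extreme spread vs. shots per group, with zero
# environmental variation -- every shot drawn from the same fixed distribution.
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0  # arbitrary units; only the relative sizes matter

def extreme_spread(shots):
    diffs = shots[:, None, :] - shots[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).max()

for n in (2, 3, 5, 7, 10):
    spreads = [extreme_spread(rng.normal(0.0, sigma, size=(n, 2)))
               for _ in range(20000)]
    print(f"{n:2d}-shot groups: mean extreme spread = {np.mean(spreads):.2f}")
```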
 
You're not limiting the amount of environmentally induced variance by shooting fewer shots in each group. You're simply not observing the variance, which is basically like saying ignorance is bliss.
Yes I am. In a couple of ways. First, if you are using only group spread as a measure, one unknown demonic intrusion will f-up your entire sample (of 1).

Secondly, variability of the types we worry about (namely wind direction and speed) is autocorrelated in time (and space). By shooting two-shot groups, each group happens over a very short time period and thus there is less opportunity for environmental change, or at least a smaller magnitude of environmental change.
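Here's a toy illustration of the autocorrelation point in Python (the AR(1) wind model and its numbers are made up): the wind a back-to-back pair of shots sees changes much less than the wind a long string sees.

```python
# Toy AR(1) crosswind: one wind value per shot fired, drifting slowly over time.
# All parameters are invented; the only point is the autocorrelation.
import numpy as np

rng = np.random.default_rng(2)
n_shots, phi, noise_sd = 10000, 0.95, 0.5  # high phi = slowly drifting wind

wind = np.zeros(n_shots)
for t in range(1, n_shots):
    wind[t] = phi * wind[t - 1] + rng.normal(0.0, noise_sd)

# Wind change within a back-to-back pair vs. across a 10-shot string.
pair_change = np.abs(np.diff(wind)).mean()
string_change = np.mean([np.ptp(wind[i:i + 10])
                         for i in range(0, n_shots - 10, 10)])

print(f"mean wind change within a 2-shot pair:   {pair_change:.2f}")
print(f"mean wind range across a 10-shot string: {string_change:.2f}")
```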

This is, in point of fact, one of the most widespread and dangerous misunderstandings I encounter in the shooting world. For a given load-rifle combo, the expected size of 3 shot groups will always be smaller than the expected size of 5 shot groups, which will always be smaller than 7 shot groups, etc. This is not because you've limited the environmental variables, it's an inexorable result of basic math.
Well, there you are implying what I just wrote above, but you are also missing another issue - increased sample size gives you more opportunity for extreme events (even in the absence of environmental variation).

This is easy to see with any random number generator. You can do this in Excel if you want. I wrote a little application I called the Random Cannon, to play around with group size and some methodological issues while knowing exactly the parameters of the hypothetical loads. In any event, you can find, without environmental change, that the probability of observing an extreme random event increases with sample size.
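If you want to play with the same idea in Python instead of Excel, a stripped-down sketch looks like this (fixed distribution, nothing changing between shots):

```python
# Stripped-down "random cannon": a fixed circular-normal impact distribution,
# nothing changing from shot to shot. The chance that a group contains at least
# one shot landing more than 2 sigma from center grows with shots per group.
# Analytically it is 1 - (1 - exp(-2))**n: about 0.25 for 2 shots, roughly 0.95 for 20.
import numpy as np

rng = np.random.default_rng(3)
trials = 20000

for n in (2, 3, 5, 10, 20):
    extremes = 0
    for _ in range(trials):
        shots = rng.normal(0.0, 1.0, size=(n, 2))
        radii = np.sqrt((shots ** 2).sum(axis=1))
        if radii.max() > 2.0:  # an "extreme" shot, over 2 sigma from center
            extremes += 1
    print(f"{n:2d} shots: P(at least one shot beyond 2 sigma) = {extremes / trials:.2f}")
```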
 
So you’re shooting multiple two shot groups of the same load and then comparing those results to multiple two shot groups from another load?

You math nerds are using words and phrases that are way over my head…
Exactly, but in a much more objective fashion than just the ol' hairy eyeball method.
 
Ah, I see what you're getting at Brent. I am thinking of this from a different angle though.

Let's say I buy a few boxes of Nosler e-tip factory ammo (load A) and a few boxes of Hornady superformance CX factory ammo (load B) for my 7mm-08. For the sake of argument, let's say I have an indoor 100yd range, all the time in the world, and a hardcore bench vice setup that holds my rifle in exactly the same position for each shot. In other words, let's forget about environmental variables, shooter error, barrel warmup, barrel fouling, etc etc. Everything except the baseline rifle-ammo variables themselves.

Theoretically, if I were to shoot all the rounds of Load A I could possibly shoot before burning out the barrel and measure the exact (x,y) position of each round, that would give me the sample space of load A, i.e., the set of all possible locations where a bullet from load A could land. From that, I could construct the probability density function associated with load A; this is the "true" or "underlying" distribution of A. Since we obviously can't fire and observe all possible shots, we can only infer what that true distribution looks like based on a relatively small number of shots fired (i.e., samples taken from A). This is why we call it inferential statistics.

What I want is a test of whether the true distribution of A is different than the true distribution of B based on a single event (i.e., a set of samples, aka a group) observed from each load. In other words, I want to be able to tell whether the difference I actually see between the group from load A and the group from load B is anything more than random variation.
Yes, this would more or less accomplish what you want. You want to make those measurements from the center of the group, which you have to estimate (calculate), and that costs you a few more degrees of freedom. It is doable, but it is a LOT of work.
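If you want to see the mechanics, here's roughly what the comparison looks like under a circular-normal assumption (a sketch in Python; the impact coordinates are invented, and the 2(n-1) degrees of freedom per group are the cost of estimating each group's center from the same shots):

```python
# Sketch: compare the dispersion of one group of Load A against one group of Load B,
# assuming roughly circular bivariate-normal dispersion. The (x, y) impact
# coordinates (inches) are invented. Measuring from each group's ESTIMATED center
# costs degrees of freedom: 2*(n-1) per group rather than 2*n.
import numpy as np
from scipy import stats

load_a = np.array([[0.1, 0.3], [-0.2, 0.1], [0.4, -0.2], [0.0, -0.4], [-0.3, 0.2]])
load_b = np.array([[0.6, -0.1], [-0.5, 0.7], [0.2, 0.9], [-0.8, -0.3], [0.4, -0.6]])

def mean_square(group):
    centered = group - group.mean(axis=0)  # deviations from the estimated center
    df = 2 * (len(group) - 1)              # degrees of freedom for x and y combined
    return (centered ** 2).sum() / df, df

ms_a, df_a = mean_square(load_a)
ms_b, df_b = mean_square(load_b)

# Under the null of equal dispersion, the ratio of mean squares follows an F distribution.
f_ratio = ms_b / ms_a
p_value = stats.f.sf(f_ratio, df_b, df_a)  # one-sided: is Load B more dispersed than A?
print(f"F = {f_ratio:.2f}, p = {p_value:.3f}")
```

An F-test on the mean squares is only one way to do it, and it leans on the normality assumption; a permutation or bootstrap approach would be the more robust (and more laborious) route.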
 
Well, there you are implying what I just wrote above, but you are also missing another issue - increased sample size gives you more opportunity for extreme events (even in the absence of environmental variation).

This is easy to see with any random number generator. You can do this in Excel if you want. I wrote a little application I called the Random Cannon, to play around with group size and some methodological issues while knowing exactly the parameters of the hypothetical loads. In any event, you can find, without environmental change, that the probability of observing an extreme random event increases with sample size.
Apologies if I was tearing down a straw man.

The issue I'm getting at, which from what I can tell is widely misunderstood in the shooting world, is this:

We shoot practice/testing groups in order to try and figure out how confident we can be that we'll hit our target when push comes to shove. To use my own hunting rifle goal as an example, say I want to be able to ethically kill a deer at 500 yards, which by my estimation means that the rifle-ammo combo should be capable of at least a 95% chance of hitting within an 8" diameter circle at that range, ignoring environmental- and shooter-induced sources of error, not because they're not important, but simply because they're independent of the raw accuracy potential of the rifle-ammo combo. (Please note this accuracy standard is just an example and not something I want to debate at the moment.)

8" at 500 yds translates to a diameter of 1.53 MOA, so I want to make sure I can hit within that circle 95 out of 100 times, and it would sure be nice if the ones that landed outside that ring didn't land too far outside it. This is roughly equivalent to saying that if I were to shoot a whole bunch of 95-shot groups, I'd want the average group size to be 1.53 MOA or less.
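(For anyone checking the arithmetic, using 1 MOA ≈ 1.047 inches per 100 yards:)

```python
# 1 MOA is about 1.047 inches per 100 yards.
target_in, range_yd = 8.0, 500.0
moa = target_in / (1.047 * range_yd / 100.0)
print(f"{target_in:.0f}-inch circle at {range_yd:.0f} yd = {moa:.2f} MOA")  # ~1.53 MOA
```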

So, how can I figure out if my rifle-ammo combo is capable of that, especially if I don't have easy access to a 500yd range and the cash to throw 100 test rounds down it? In other words, how do I know if I just need more practice, a new rifle, or simply a different ammo?

The traditional answer is to go shoot a handful of 3 shot groups at 100 yards and, if they're within 1.5 MOA, then you have a 1.5 MOA gun and any misses are due to not holding your mouth right. The problem is, this is terrible logic and betrays an ignorance of how reality behaves. The situation is easy enough to understand: if you shoot 100 shots and choose any 3 of them at random, the group size for the randomly chosen 3 would obviously be smaller than the size of the full 100 shot group. As it turns out, it's easy enough to calculate how much smaller a 3 shot sample would be on average, and real-life testing (such as the experiments described here) bears it out: 3 shot groups are, on average, roughly half the size of 100 shot groups.

So if I want to make sure I can hit a 1.53 MOA target with a high degree of confidence, my 3-shot groups are going to need to be a lot smaller than that on average, and I'm going to need to shoot a lot of them to be sure. The situation improves a bit with 5 shot groups, but gets even worse with 2-shot groups.
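To put a rough number on "a lot smaller", here's a sketch (Python, assuming purely circular Rayleigh-type dispersion with no shooter or wind error): find the dispersion that just meets the 95%-inside-1.53-MOA standard, then ask what 3-shot and 5-shot groups from exactly that rifle would average.

```python
# Sketch: assuming circular bivariate-normal (Rayleigh radius) dispersion only.
# Find the sigma that puts 95% of shots inside a 1.53 MOA circle, then simulate
# what 3-shot and 5-shot groups from exactly that rifle average.
import numpy as np

rng = np.random.default_rng(4)

radius_moa = 1.53 / 2.0                            # radius of the 95% circle
sigma = radius_moa / np.sqrt(-2.0 * np.log(0.05))  # Rayleigh: P(R <= radius) = 0.95

def extreme_spread(shots):
    diffs = shots[:, None, :] - shots[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).max()

for n in (3, 5):
    spreads = [extreme_spread(rng.normal(0.0, sigma, size=(n, 2)))
               for _ in range(20000)]
    print(f"average {n}-shot group for a rifle that just meets the standard: "
          f"{np.mean(spreads):.2f} MOA")
```

The 3-shot average comes out right around half of the 1.53 MOA figure, which is the same halving rule of thumb as above.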
 
So if I want to make sure I can hit a 1.53 MOA target with a high degree of confidence, my 3-shot groups are going to need to be a lot smaller than that on average, and I'm going to need to shoot a lot of them to be sure. The situation improves a bit with 5 shot groups, but gets even worse with 2-shot groups.

No, because you are going to shoot a lot more of them.

But if your goal is all about shooting deer at 500 yds, I'd spend more time practicing my wind calls, holds, breath control, etc., than I would worrying about ammo. The biggest chunk of additive variance is in your part of the equation, not the ammo.
 
I'm not much of a stats guy, just a plain old engineer. Seems to me that going very far at all in a conversation of statistics ends up being only an exercise, since putting me behind the rifle and pulling the trigger will remove a significant amount of precision from the whole process.

David
NM
 
