What do you do when you have a set of observations, you don’t know what the distribution is, but you have a certain question? For example, I want to test between two populations and I don’t know what the distributions in either are. What you do is, if you have, let’s say, 10 observations in the sample, drop one and then compute the statistic, put it back, drop a second one, compute the statistic. At the end of your 10 observations, you have 10 what are called jackknife estimates. Each one is obtained by dropping one of the original observations. Though the jackknife method actually worked very well for a lot of cases and was the method of choice for quite a long time, it also had some problems. It didn’t work very well with small samples, and it didn’t work if you were dealing with data where there were a lot of ties. But then, when we got to about the ’80s and the ’90s, that’s when the bootstrap method came out.
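The leave-one-out procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not anything from the interview itself: the sample values below are made up, and the statistic chosen here is the sample mean, just as a concrete stand-in for whatever statistic you care about.

```python
import statistics

def jackknife_estimates(sample, statistic):
    """Leave-one-out jackknife: drop each observation in turn and
    recompute the statistic on the remaining n - 1 values."""
    return [
        statistic(sample[:i] + sample[i + 1:])
        for i in range(len(sample))
    ]

# Hypothetical sample of 10 observations (illustrative values only).
data = [2.1, 3.4, 1.9, 4.0, 2.8, 3.1, 2.5, 3.7, 2.2, 3.0]

# One jackknife estimate per dropped observation: 10 in, 10 out.
estimates = jackknife_estimates(data, statistics.mean)
print(len(estimates))  # prints 10
```

The spread of these 10 estimates around the full-sample statistic is what the jackknife uses to gauge bias and variance without knowing the underlying distribution.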
To learn more from Dr. Helena Kraemer, listen to the podcast episode below.