Gómez González, Carmen1; Peña Rodríguez, Amelia2; Salas Díaz, Inmaculada3; Praena Fernández, Juan Manuel4; Gálvez Acebal, Juan5; Lozano Rodríguez, Jesús6; Vilches Arenas, Ángel7; Ortega Calvo, Manuel8
Introduction: The common semantic core of all uses of “bootstrapping” is the accomplishment of a complex task by means of a simple gesture (a rider and his horse can take a great leap only after the rider has pulled on his own bootstraps). In statistics, the “bootstrap” is a method designed to estimate the sampling distribution of an estimator by resampling with replacement.
Methodology: To compensate for the epistemological weaknesses of sample size calculations, researchers should seek the smallest possible values of the relative sampling error or of the design effect. On the other hand, a virtual universe (VU) can also be created by a topological arrangement of samples obtained by the “bootstrap”.
Results: The size of the VU is approximately equal to the number of resampling repeats multiplied by the size of the original sample. In frequentist terms, we can state a hypothesis of equality (H0) and one of inequality (H1) between our VU and the actual population (AP) from which the sample was drawn. To support these hypotheses, we developed a practical demonstration of Berkson bias in a case-control design by bootstrap resampling.
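The construction described above (a virtual universe assembled from bootstrap resamples, with size equal to the number of repeats times the original sample size) can be sketched as follows. This is a minimal illustration, not the authors' actual procedure; the sample, its size, the number of repeats, and the random seed are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # hypothetical seed, for reproducibility only

# Hypothetical original sample of size n = 100
original_sample = rng.normal(loc=0.0, scale=1.0, size=100)

n_repeats = 1000  # illustrative number of bootstrap replicates

# Each replicate is a resample of size n drawn with replacement
replicates = [
    rng.choice(original_sample, size=original_sample.size, replace=True)
    for _ in range(n_repeats)
]

# Stacking the replicates yields the virtual universe (VU):
# its size is (number of repeats) x (original sample size)
virtual_universe = np.concatenate(replicates)
print(virtual_universe.size)  # 1000 * 100 = 100000
```

Any statistic of interest (a mean, an odds ratio in a case-control table) can then be computed on each replicate to approximate its sampling distribution.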
Conclusion: We advocate a topological concept of resampling with “the bootstrap” that can extend the hierarchical external validation scheme proposed by Justice et al. to a 0.1 level, up to embodying the simulator effect on the original study data package.