Statistics theory

Statistics refers primarily to a branch of mathematics that specializes in enumerated (counted) data and their relation to measured data.[1][2] It may also refer to facts of classification, the chief source of all statistics, and has a relationship to psychometric applications in the social sciences.

An individual statistic is a derived numerical value, such as a mean, a coefficient of correlation, or some other single measure of descriptive statistics. It may also refer to an idea associated with an average, such as a median or standard deviation, or to some other value computed from a set of data.[3]

More precisely, in mathematical statistics and in general usage, a statistic is defined as any measurable function of a data sample.[4] A data sample is described by instances of a random variable of interest, such as height, weight, polling results, or test performance, obtained by random sampling of a population.

Simple illustration

Suppose one wishes to embark on a quantitative study of the height of adult males in some country C. How should one go about doing this, and how can the data be summarized? In statistics, the approach taken is to model the quantity of interest, i.e., "height of adult men from the country C", as a random variable X, say, taking on values in [0,5] (measured in metres) and distributed according to some unknown probability distribution[5] F on [0,5]. One important theme studied in statistics is the development of theoretically sound methods (firmly grounded in probability theory) to learn something about the postulated random variable X and its distribution F by collecting samples, for this particular example, of the height of a number of men randomly drawn from the adult male population of C.

Suppose that N men labeled $1, \ldots, N$ have been drawn by simple random sampling (this means that each man in the population is equally likely to be selected in the sampling process) and that their heights are $x_1, \ldots, x_N$, respectively. An important yet subtle point to note here is that, due to random sampling, the data sample obtained is actually an instance or realization of a sequence of independent random variables $X_1, \ldots, X_N$, with each random variable $X_i$ distributed identically according to the distribution of X (that is, each $X_i$ has the distribution F). Such a sequence is referred to in statistics as independent and identically distributed (i.i.d.) random variables. To further clarify this point, suppose that there are two other investigators, Tim and Allen, who are also interested in the same quantitative study, and they in turn also randomly sample N adult males from the population of C. Let Tim's height data sample be $y_1, \ldots, y_N$ and Allen's be $z_1, \ldots, z_N$; then both samples are also realizations of the i.i.d. sequence $X_1, \ldots, X_N$, just as the first sample was.
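
To make the sampling model concrete, here is a minimal simulation sketch (not part of the original article): it pretends the unknown distribution F is known, using a normal distribution with invented parameters, so that three investigators' i.i.d. samples can be drawn and compared.

```python
# Minimal simulation of the sampling model above. The "true" F is unknown in
# practice; here we pretend it is normal with mean 1.75 m and SD 0.07 m
# (invented values) so we can draw i.i.d. samples from it.
import numpy as np

rng = np.random.default_rng(42)
N = 100  # sample size for each investigator

# Three investigators each draw their own simple random sample of N heights.
# Each sample is a different realization of the same i.i.d. sequence X_1..X_N.
sample_first = rng.normal(loc=1.75, scale=0.07, size=N)
sample_tim = rng.normal(loc=1.75, scale=0.07, size=N)
sample_allen = rng.normal(loc=1.75, scale=0.07, size=N)

# The three means differ slightly because each comes from a different
# realization, but all three estimate the same underlying quantity.
print(sample_first.mean(), sample_tim.mean(), sample_allen.mean())
```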

From a data sample $x_1, \ldots, x_N$ one may define a statistic T as $T = f(x_1, \ldots, x_N)$ for some real-valued function f which is measurable (here with respect to the Borel sets of $\mathbb{R}^N$). Two examples of commonly used statistics are:

  1. $\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$. This statistic is known as the sample mean.
  2. $s^2 = \frac{1}{N}\sum_{i=1}^{N} (x_i - \bar{x})^2$. This statistic is known as the sample variance. Often the alternative definition $s^2 = \frac{1}{N-1}\sum_{i=1}^{N} (x_i - \bar{x})^2$ is preferred because it is an unbiased estimator of the variance of X, while the former is a biased estimator (see the sketch below).
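
As an illustration, here is a minimal sketch (assuming NumPy, with invented data) computing the sample mean and both definitions of the sample variance:

```python
import numpy as np

x = np.array([1.68, 1.72, 1.75, 1.81, 1.79, 1.74])  # invented sample (metres)

n = len(x)
sample_mean = x.sum() / n                                 # statistic 1
biased_var = ((x - sample_mean) ** 2).sum() / n           # divides by N
unbiased_var = ((x - sample_mean) ** 2).sum() / (n - 1)   # divides by N - 1

# NumPy's ddof argument switches between the two definitions:
assert np.isclose(biased_var, x.var(ddof=0))
assert np.isclose(unbiased_var, x.var(ddof=1))
print(sample_mean, biased_var, unbiased_var)
```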

Transforming data

Statisticians may transform data by taking the logarithm, square root, reciprocal, or another function if the data do not fit a normal distribution.[6][7] Results must then be transformed back to the original scale in order to present confidence intervals.[8]
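
A minimal sketch of this workflow (assuming NumPy and SciPy, with invented right-skewed data): compute a 95% confidence interval on the log scale, then back-transform it. The back-transformed interval describes the geometric mean rather than the arithmetic mean.

```python
# Log-transform skewed data, build a 95% CI on the log scale, back-transform.
import numpy as np
from scipy import stats

x = np.array([0.8, 1.1, 1.3, 1.9, 2.4, 3.8, 5.2, 9.6])  # invented, right-skewed
logx = np.log(x)

n = len(logx)
m = logx.mean()
se = logx.std(ddof=1) / np.sqrt(n)          # standard error on the log scale
t = stats.t.ppf(0.975, df=n - 1)            # two-sided 95% critical value

lo, hi = m - t * se, m + t * se             # CI on the log scale
print(np.exp(lo), np.exp(hi))               # back-transformed CI (geometric mean)
```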

Summary statistics

Measurements of central tendency

Measurements of variation

  • Standard deviation (SD) is a measure of variation or scatter. Because it estimates the scatter of the underlying population, the SD does not systematically change with sample size.
  • Variance is the square of the standard deviation: $\mathrm{variance} = SD^2$.
  • Standard error of the mean (SEM) measures how accurately the mean of a population is known and is always smaller than the SD.[9] The SEM becomes smaller as the sample size increases (see the sketch after this list). The sample standard deviation (S) and the SEM are related by $SEM = S / \sqrt{N}$.
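
The following sketch (simulated data with invented parameters) illustrates the contrast: as N grows, the sample SD stays near the population value while the SEM shrinks in proportion to $1/\sqrt{N}$.

```python
import numpy as np

rng = np.random.default_rng(7)
for n in (10, 100, 1000, 10000):
    x = rng.normal(loc=0.0, scale=2.0, size=n)  # true SD is 2.0 (invented)
    s = x.std(ddof=1)                # sample standard deviation S
    sem = s / np.sqrt(n)             # standard error of the mean
    print(f"N={n:6d}  SD={s:.3f}  SEM={sem:.4f}")
# SD hovers near 2.0 at every N; SEM shrinks roughly tenfold per 100-fold N.
```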

Inferential statistics and hypothesis testing

For more information, see: Statistical significance.

The null hypothesis is that there is no difference between two samples with regard to the factor being studied.[2] Two errors can occur in assessing the probability that the null hypothesis is true:

  • Type I error, also called alpha error, is the rejection of a correct null hypothesis. The probability of this error is usually expressed by the p-value. The null hypothesis is usually rejected if the p-value, or the chance of a type I error, is less than 5%; however, this threshold may be adjusted when multiple hypotheses are tested.[10]
  • Type II error, also called beta error, is the acceptance of an incorrect null hypothesis. This error may occur when the sample size was insufficient to have the power to detect a statistically significant difference.[11][12][13]

Frequentist method

This approach uses mathematical formulas to calculate deductive probabilities (p-values) of an experimental result.[14] It can generate confidence intervals.

A problem with frequentist analyses of p-values is that they may overstate "statistical significance".[14][15] See Bayes factor for details.

Likelihood or Bayesian method

Some argue that the p-value should be interpreted in light of how plausible the hypothesis is, based on the totality of prior research and physiologic knowledge.[16][14][15] This approach can generate Bayesian 95% credibility intervals.[17]
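
A minimal sketch of these ideas (data and effect sizes invented, not taken from the article): run an unpaired two-sample t-test per outcome with SciPy, then apply Hochberg's step-up procedure, the sharper Bonferroni adjustment cited above,[10] to decide which null hypotheses to reject when multiple hypotheses are tested.

```python
# Unpaired t-tests for three outcomes, then Hochberg's step-up adjustment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical trial: three outcomes, treatment vs. control, 30 subjects each.
# True effects are 0.0, 0.5, and 0.9 standard deviations (invented).
treatment = [rng.normal(loc=mu, scale=1.0, size=30) for mu in (0.0, 0.5, 0.9)]
control = [rng.normal(loc=0.0, scale=1.0, size=30) for _ in range(3)]

# One p-value per outcome: the estimated chance of a type I error.
p_values = np.array([stats.ttest_ind(t, c).pvalue
                     for t, c in zip(treatment, control)])

def hochberg_reject(pvals, alpha=0.05):
    """Hochberg's step-up procedure: walk from the largest p-value down,
    comparing the j-th largest p-value against alpha / j; at the first
    success, reject that hypothesis and all hypotheses with smaller p."""
    order = np.argsort(pvals)[::-1]             # indices, largest p first
    reject = np.zeros(len(pvals), dtype=bool)
    for j, idx in enumerate(order, start=1):
        if pvals[idx] <= alpha / j:
            reject[pvals <= pvals[idx]] = True  # reject this and all smaller
            break
    return reject

print(p_values)
print(hochberg_reject(p_values))  # which null hypotheses are rejected
```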


Classification

Problems in reporting of statistics

In medicine, common problems in the reporting and usage of statistics have been inventoried.[18] These problems tend to exaggerate treatment differences.

See also

References

  1. Trapp, Robert; Beth Dawson (2004). Basic & Clinical Biostatistics. New York: Lange Medical Books/McGraw-Hill. ISBN 0-07-141017-1.
  2. Mosteller, Frederick; Bailar, John Christian (1992). Medical Uses of Statistics. Boston, Mass.: NEJM Books. ISBN 0-910133-36-0.
  3. Guilford, J.P.; Fruchter, B. (1978). Fundamental Statistics in Psychology and Education. New York: McGraw-Hill.
  4. Shao, J. (2003). Mathematical Statistics, 2nd ed. Springer Texts in Statistics. New York: Springer-Verlag, p. 100.
  5. This is the case in non-parametric statistics. In parametric statistics, on the other hand, the underlying distribution is assumed to be of some particular type, say a normal or exponential distribution, but with unknown parameters that are to be estimated.
  6. Bland JM, Altman DG (March 1996). "Transforming data". BMJ 312 (7033): 770. PMID 8605469.
  7. Bland JM, Altman DG (May 1996). "The use of transformation when comparing two means". BMJ 312 (7039): 1153. PMID 8620137.
  8. Bland JM, Altman DG (April 1996). "Transformations, means, and confidence intervals". BMJ 312 (7038): 1079. PMID 8616417.
  9. What is the difference between "standard deviation" and "standard error of the mean"? Which should I show in tables and graphs? Retrieved on 2008-09-18.
  10. Hochberg, Yosef (December 1988). "A sharper Bonferroni procedure for multiple tests of significance". Biometrika 75 (4): 800–802. DOI 10.1093/biomet/75.4.800.
  11. Altman DG, Bland JM (August 1995). "Absence of evidence is not evidence of absence". BMJ 311 (7003): 485. PMID 7647644.
  12. Detsky AS, Sackett DL (April 1985). "When was a 'negative' clinical trial big enough? How many patients you needed depends on what you found". Archives of Internal Medicine 145 (4): 709–12. PMID 3985731.
  13. Young MJ, Bresnitz EA, Strom BL (August 1983). "Sample size nomograms for interpreting negative clinical studies". Annals of Internal Medicine 99 (2): 248–51. PMID 6881780.
  14. Goodman SN (1999). "Toward evidence-based medical statistics. 1: The P value fallacy". Ann Intern Med 130 (12): 995–1004. PMID 10383371.
  15. Goodman SN (1999). "Toward evidence-based medical statistics. 2: The Bayes factor". Ann Intern Med 130 (12): 1005–13. PMID 10383350.
  16. Browner WS, Newman TB (1987). "Are all significant P values created equal? The analogy between diagnostic tests and clinical research". JAMA 257: 2459–63. PMID 3573245.
  17. Gelfand, Alan E.; Banerjee, Sudipto; Carlin, Bradley P. (2003). Hierarchical Modeling and Analysis for Spatial Data. Boca Raton: Chapman & Hall/CRC. ISBN 1-58488-410-X.
  18. Pocock SJ, Hughes MD, Lee RJ (August 1987). "Statistical problems in the reporting of clinical trials. A survey of three medical journals". N. Engl. J. Med. 317 (7): 426–32. PMID 3614286.