Intelligence Quotient
An intelligence quotient, or IQ, is a score derived from one of several standardized tests designed to assess intelligence. The term “IQ,” from the German Intelligenz-Quotient, was coined by the German psychologist William Stern in 1912 as a proposed method of scoring children’s intelligence tests such as those developed by Alfred Binet and Théodore Simon in the early 20th century. Lewis Terman adopted that method of scoring, expressing the score as the ratio of “mental age” to “chronological age,” for his revision of the Binet-Simon test, the first version of the Stanford-Binet Intelligence Scales.
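To illustrate this ratio scoring in its standard textbook formulation (not a quotation from Stern or Terman), a ten-year-old who performs at the level of a typical twelve-year-old would score

\[
\mathrm{IQ} \;=\; \frac{\text{mental age}}{\text{chronological age}} \times 100 \;=\; \frac{12}{10} \times 100 \;=\; 120 .
\]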
Although the term “IQ” is still in common use, the scoring of modern IQ tests such as the Wechsler Adult Intelligence Scale is now based on standardized scoring of the subject’s rank order relative to a norming sample, with the median score set to 100 and a standard deviation of 15, though not all tests assign 15 IQ points to each standard deviation.
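Under the common 15-point convention, an examinee’s rank in the norming sample is expressed as a standard score \(z\) (the number of standard deviations above or below the norming-sample mean) and then rescaled, so that roughly two-thirds of the population falls between 85 and 115:

\[
\mathrm{IQ} \;=\; 100 + 15z .
\]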
IQ scores have been shown to be associated with such factors as morbidity and mortality, parental social status, and, to a substantial degree, parental IQ. While the heritability of IQ has been investigated for nearly a century, controversy remains regarding the significance of heritability estimates, and the mechanisms of inheritance are still a matter of some debate.
IQ scores are used in many contexts: as predictors of educational achievement or special educational needs, as predictors of job performance and income, and by social scientists who study the distribution of IQ scores in populations and the relationships between IQ scores and other variables.
Average IQ scores in many populations have risen at a rate of roughly three points per decade since the early 20th century, a phenomenon called the Flynn effect. It is disputed whether these changes reflect real gains in intellectual abilities or merely methodological problems with past or present testing.
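As a rough linear extrapolation of that stated rate (not a claim about any particular test’s norms), a group averaging 100 on current norms would average about

\[
\mathrm{IQ}_{\text{old norms}} \;\approx\; 100 + 3d
\]

when scored against norms set \(d\) decades earlier; against norms from five decades ago, for example, about 115.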