The average IQ is 100. That is the official answer, the textbook answer, and the answer most websites throw at you in the first sentence.
It is also the kind of answer that makes smart people suspicious, because it sounds almost too tidy. And honestly, your suspicion is healthy.
Here is the trick: IQ is not like average height, where we measure a bunch of people and discover a number. Modern IQ tests are scaled so that the average score in the norming population is 100. In other words, 100 is not a mysterious fact nature carved into a mountain. It is a reference point created by test designers so scores are easy to interpret.
That does not mean IQ is fake or useless. It means we need to ask a better question. Not “What is the average IQ?” but “Average for whom, on which test, normed when, and compared to what group?” Once you ask that, the topic gets much more interesting.
100 is the average because the test is built that way
Early IQ testing did not work quite like modern testing. Alfred Binet’s original work in France (covered in depth in our article on the history of intelligence and IQ tests) aimed to identify children who might need extra educational support. The older scoring system, later popularized by William Stern and Lewis Terman, used a mental-age formula: mental age divided by chronological age, then multiplied by 100. That worked reasonably well for children, but it got awkward fast in adulthood, where “mental age” is not exactly something you want to calculate at family dinner.
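If you like seeing the arithmetic, here is a tiny sketch of that old ratio formula. The function name and the ages are ours, purely for illustration:

```python
# The old ratio-IQ formula: mental age divided by chronological age, times 100.
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    return 100 * mental_age / chronological_age

print(ratio_iq(12, 10))  # 120.0 -- a 10-year-old performing like a typical 12-year-old
print(ratio_iq(40, 50))  # 80.0  -- the same logic gets silly for adults, which is why it was abandoned
```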
Modern IQ tests use what psychologists call a deviation IQ. Instead of asking whether a 10-year-old thinks like a 12-year-old, today’s tests compare your performance with that of a large standardized sample of people your age. Then the raw scores are converted so the distribution has a mean of 100 and usually a standard deviation of 15.
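Here is a minimal sketch of that conversion, with made-up raw scores and norm-group numbers. Real tests use lookup tables built from the norming sample rather than a one-line formula, but the underlying logic is the same:

```python
# Deviation IQ: compare a raw score with the norm group for your age,
# express the difference as a z-score, then rescale to mean 100 and SD 15.
def deviation_iq(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    z = (raw_score - norm_mean) / norm_sd  # distance from the norm-group average, in SD units
    return 100 + 15 * z

# Hypothetical example: a raw score of 62 in an age group averaging 50 with SD 12
print(deviation_iq(62, 50, 12))  # 115.0 -- one standard deviation above average
```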
As the medical reference Standard of Care explains, modern IQ scores are transformed to a normal distribution with a mean of 100 and a standard deviation of 15. Psych Central made the same point in a 2022 overview: the mean and median are set at 100. So yes, if someone asks for the conventional answer, it is 100.
Why 100? Mostly because it is convenient. It is an easy midpoint, and people intuitively understand that numbers above it are above average and numbers below it are below average. Test makers could have chosen 500 if they were feeling theatrical, but thankfully they did not.
This is also why the phrase “average IQ is between 85 and 115” is slightly sloppy. Strictly speaking, 100 is the average. The range from 85 to 115 is the average range, meaning the band where a large chunk of people fall.
What your score means in plain English
Once we know that IQ scores are centered on 100, the next useful piece is the spread. Most major IQ tests use a standard deviation of 15 points. That gives us a very handy map of the bell curve.
Roughly 68% of people score between 85 and 115. About 95% score between 70 and 130. Only around 2% score above 130, and a similar small percentage score below 70. That is why 130 is often used as a rough threshold for very superior performance, while scores below 70 can be one part of an evaluation for intellectual disability. But clinicians do not diagnose intellectual disability from IQ alone; adaptive functioning—how well someone manages everyday life—matters too.
Percentiles help here as well. An IQ of 100 is about the 50th percentile. An IQ of 115 is around the 84th percentile. An IQ of 130 is around the 98th percentile. So when somebody says they have an IQ of 130, they are not saying they got 130 questions right out of 100, which would be a very impressive violation of arithmetic. They are saying they scored higher than about 98% of the norm group.
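If you want to check those percentages yourself, a few lines of Python reproduce the whole map. This assumes a perfectly normal distribution with mean 100 and SD 15, which real score distributions only approximate:

```python
from scipy.stats import norm  # requires scipy

MEAN, SD = 100, 15

# Share of people inside one and two standard deviations of the mean
print(norm.cdf(115, MEAN, SD) - norm.cdf(85, MEAN, SD))   # ~0.68: between 85 and 115
print(norm.cdf(130, MEAN, SD) - norm.cdf(70, MEAN, SD))   # ~0.95: between 70 and 130

# Percentile rank for a few familiar scores
for score in (100, 115, 130):
    percentile = 100 * norm.cdf(score, MEAN, SD)
    print(score, round(percentile, 1))  # 50.0, 84.1, 97.7 -- the percentiles quoted above
```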
And once you understand percentiles, the famous bell curve stops looking like abstract statistics wallpaper and starts looking like a map. That leads us to the next question: does the real data actually behave that way?
The bell curve is not a myth
You have probably seen the classic bell-curve graphic floating around online, usually next to some terrible opinion. Annoying as that is, the basic shape itself is real.
IQ tests are designed to produce a roughly normal distribution, and in practice they usually do. Richard Warne, reviewing the thorny literature on national mean IQ estimates in 2023, argued that IQ data generally behave well enough statistically that calculating means does not violate the usual assumptions. That sounds dry, but it matters: you can actually talk sensibly about average scores.
We see this pattern even in groups people stereotype. In a study of children with ADHD, reading difficulties, or both, psychologist Bonnie Kaplan and colleagues found that the estimated Full-Scale IQ distributions in all three groups did not differ significantly from a normal distribution, with more than half of the children landing in the average range. Their conclusion was refreshingly blunt: children with ADHD were no more likely to have above-average IQs than other children.
I like this study because it punctures two myths at once. First, the bell curve shows up where we expect it to. Second, clinical labels do not magically tell you someone’s intelligence. Real people stubbornly refuse to fit internet stereotypes (rude of them, really).
Now for the messy part: real groups do not always average 100
If IQ tests are normed to 100, why do you sometimes read that the U.S. average is about 97, or that the “world average IQ” is around 89? Is the official answer wrong?
No. But this is where the phrase average IQ changes meaning.
When writers talk about the average IQ of a country, they are usually combining data from different samples, different years, different tests, and sometimes very questionable methods. That is not the same as the standardized score of 100 built into a test.
Psych Central, for example, cited an estimate that the U.S. average IQ was 97.43 in 2019. That number is not impossible, but it is not some eternal property of Americans floating in the air like a weather report. It depends on how the estimate was built.
Warne’s 2023 review is especially useful here because he refuses to join either tribe shouting from opposite hills. He does not say national IQ datasets are perfect. He also does not say they are worthless. He argues that some of these estimates capture “something of importance,” but he also points out major quality problems, especially in countries with sparse or outdated data.
One of his striking observations is that country estimates from multiple samples often differ by only about 5.8 points on average, though some countries show discrepancies of more than 20 points because one old or poor-quality sample skews the picture. He also showed that depending on assumptions, a calculated global mean from one controversial dataset could land around 86.7 to 88.3. Your head might be spinning right now. Does that mean humanity’s “real” average IQ is not 100 after all? Not so fast.
As Warne stresses, IQ is a measurement, not the same thing as intelligence itself. And group averages cannot tell you whether differences come from education, nutrition, health, test familiarity, language, sampling bias, or something else. They certainly do not tell you anyone’s innate potential. I find this point especially important because public discussions of IQ often sprint from a shaky number to a grand theory of civilization in about twelve seconds. That is not science. That is caffeine with a Wi-Fi connection.
Average compared to when? The Flynn effect changes everything
There is another reason average IQ gets slippery: the comparison group changes over time.
For much of the 20th century, raw scores on IQ tests rose across many countries. This pattern is known as the Flynn effect, after researcher James Flynn. The Standard of Care summary notes the classic estimate of roughly 3 IQ points per decade, and a 2014 meta-analysis by Trahan and colleagues put the effect at about 2.93 points per decade. A later meta-analysis by Pietschnig and Voracek in 2015 also found broad gains, though not equally across all forms of intelligence.
This means that if you gave a modern person an old IQ test using old norms, they might score noticeably higher than 100. Not necessarily because human brains suddenly evolved turbo mode, but because environments changed: better schooling, nutrition, healthcare, and familiarity with abstract problem-solving all likely played a role.
And this is exactly why IQ tests must be re-normed. If they were not, the “average” would drift upward and stop meaning average. In other words, 100 stays stable because the tests are updated. The ruler gets recalibrated.
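To see how much the ruler can drift, here is a back-of-the-envelope sketch. It assumes a constant gain of 3 points per decade, which is a simplification of the Flynn effect, not a law:

```python
# How much an exactly average person would "gain" on a test whose norms were never updated,
# assuming the classic ~3 IQ points per decade of Flynn-effect drift.
GAIN_PER_DECADE = 3.0

def score_on_stale_norms(current_standing: float, years_since_norming: int) -> float:
    return current_standing + GAIN_PER_DECADE * years_since_norming / 10

print(score_on_stale_norms(100, 30))  # 109.0 -- "above average" only because the norms are 30 years old
```

Nine points is not trivial: on a normal curve it is the difference between exactly average and roughly the 73rd percentile, which is why test publishers keep refreshing their norms.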
Interestingly, some countries now show a slowing or even reversal of the Flynn effect. So even the long rise in scores is not a law of nature. Intelligence research has a nasty habit of punishing anyone who becomes too smug (which, to be fair, is a useful service).
What average IQ can tell us—and what it absolutely cannot
Quite a bit, if we stay disciplined. And not nearly as much as people wish, if we do not.
At the individual level, IQ tests can be genuinely useful. A school psychologist might use them to help figure out why one child reads fluently but struggles badly with working memory, or why another needs a more advanced academic track. In clinics, IQ scores can be one part of evaluating developmental conditions or cognitive decline. That is real-world value, not psychometric decoration.
At the group level, average scores can describe patterns. But description is not explanation. We said earlier that a group mean does not tell you why that mean is what it is. That distinction matters enormously.
For example, the research consistently shows that environment can strongly shape IQ outcomes. In a famous 2003 study, Eric Turkheimer and colleagues found that in poor families, shared environment explained much more of the variation in children’s IQ than genes did (a topic we explore in our article on whether intelligence is hereditary), while in affluent families genetic differences accounted for more of the variance. That is one of those findings that should make everyone on every ideological team sit down for a minute.
Social context matters too. Claude Steele and Joshua Aronson famously showed that stereotype threat can depress test performance when people fear confirming a negative stereotype about their group. So even before we get to giant claims about race, nation, or “civilizational intelligence” (already a bad sign), we have to admit something basic: test performance is not produced in a vacuum.
This is why I get uneasy when IQ is treated as destiny. The science does not support that. IQ measures something real and important, but it does not measure your worth, your creativity, your kindness, your judgment, or your future in any complete sense. It is one tool. A sharp one, sometimes. But still one tool.
The answer you should actually remember
If someone corners you at dinner and asks, “What is the average IQ?”, you can safely say: 100 on modern standardized IQ tests.
But now you know the better answer hiding underneath. That 100 is a calibrated center, not a magical truth about the human species. Most people score between 85 and 115. Scores form a bell curve. Different countries, samples, and decades can produce different empirical averages. And the meaning of those differences is often much harder to interpret than the internet would like.
So the next time you see a dramatic IQ claim online, do not just stare at the number. Ask four annoying questions: who was tested, with what test, against which norms, and for what purpose? People may stop inviting you to barbecues, but your understanding will improve dramatically.
That, to me, is the most interesting part of intelligence research. The number looks clean. The reality is gloriously inconvenient.