Contrary to common belief, intelligence is one of the most robust scientific concepts in existence. When we agree on a narrow definition, it can be measured with high accuracy and reliability. Yet it is also true that reaching agreement about its limits and boundaries is very hard. One of the leading experts, Sternberg, summarized it this way: “there seem to be at least as many definitions of intelligence as there are experts asked to define it”.
That robustness is the product of a long and complex history of theory and research. If we understand that history and how we reached our current level of knowledge, we will be better equipped to understand intelligence and its measurement. As you may have guessed, it all started a very long time ago, so let us go back in time.
The ancients & intelligence
It is very likely that the measurement of intelligence not only started a very long time ago but has been with us, in one form or another, for as long as we have had culture and language. Archives show that the Chinese Han dynasty (c. 200 B.C.) had already established an examination for civil service jobs that evaluated applicants’ skills in a fashion similar to intelligence tests. Initially, those exams centered on essays about law and agriculture, while later measures emphasized problem-solving, creativity, and divergent thinking, as well as visuospatial perception.
In the writings of the most famous Greek philosophers we find the first reflections on intelligence. In the Meno, Plato’s dialogue opens with a question posed to his master Socrates: “Can you tell me Socrates whether excellence is teachable?...or does it come by nature?”. This is another way of asking the modern question “How much do our genes determine our intelligence?”, an issue that science has largely settled, as we explain in our article about IQ and genes: genes bear some of the responsibility. For Plato, intelligence was about the love of learning and the unwillingness to accept falsehoods.
His disciple Aristotle expressed his views in his celebrated Nicomachean Ethics. For him, intelligence should be divided into three parts: (i) understanding, (ii) doing, and (iii) making. These three components would later constitute the Latin triarchy of (i) science, (ii) prudence, and (iii) art. For Aristotle, deductive and inductive reasoning were the building blocks of the scientific part of intelligence, in other words, of understanding.
Precisely this distinction would become the battlefield for the most heated debates around intelligence during the last century. As we will see, the scientific study of intelligence focused almost exclusively on what Aristotle conceived as understanding, largely forgetting the “doing” and “making”, which would later be picked up again by the recent theories of practical, social, and emotional intelligence.
Advancing to the Renaissance, we find the French philosopher Montaigne, who argued that intelligence was important because it helped to avoid dogmatism and to accept challenges to one’s beliefs. For the British philosopher Hobbes, intelligence was about thinking fast, in line with current theories that place information-processing speed among intelligence’s biological underpinnings. And John Stuart Mill suggested that intelligent people were characterized by greater originality, whereas for the “collective mediocrity…their thinking is done for them by men much like themselves”.
The beginning of the scientific study of intelligence
Despite his bad reputation, Galton played a big role in pushing psychology toward real science. He studied intelligence by focusing on its physiological dimension, comparing the discriminative capabilities of individuals. For example, he carried out weight discrimination tests: if a person could distinguish between smaller differences in weight, Galton deemed that person more intelligent.
These kinds of measurements were later discredited, but new proposals for measuring intelligence through physiology would appear again, albeit in forms different from Galton’s. His disciple, James McKeen Cattell, extended his research by creating more than fifty tests, as varied as measuring the speed of hand movements or the strongest possible hand squeeze.
At the beginning of the 20th century, in France, we find the most important spark for the fire of intelligence research. The French Ministry of Education wanted to identify children with learning difficulties so that they could receive adequate teaching. The mission was assigned to Alfred Binet, who designed intelligence tests to determine whether a child’s intelligence was comparable to that of his or her peers, testing the different abilities needed in a school setting. Binet believed that, with appropriate intervention, children could improve. He used the concept of mental age, comparing it with the child’s chronological age.
Lewis Terman, at Stanford University, built upon Binet’s ideas to create the powerful Stanford-Binet Scales, an IQ test aimed at children of different age ranges. With tasks as varied as block building and picture vocabulary, the scales evaluated children comprehensively. Building on Stern’s work, Terman also popularized the concept of IQ (intelligence quotient): mental age divided by chronological age, multiplied by 100. For example, if your child is 10 years old and his mental age is equivalent to that of 12-year-old children, his IQ would have been computed as 12/10 * 100 = 120. However, IQ is now computed in a very different way, as you can learn in our IQ scale page.
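The historical “ratio IQ” arithmetic above is simple enough to sketch in a few lines of code. The helper name below is our own invention, used only to illustrate the formula; remember that modern tests no longer compute IQ this way.

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Historical 'ratio IQ' of Stern and Terman:
    mental age divided by chronological age, times 100.

    Note: modern IQ tests use deviation scores instead; this
    formula is shown purely for its historical interest.
    """
    if chronological_age <= 0:
        raise ValueError("chronological age must be positive")
    return mental_age / chronological_age * 100


# The example from the text: a 10-year-old child whose mental age is 12.
print(ratio_iq(mental_age=12, chronological_age=10))  # 120.0
```

Note how the same formula yields exactly 100 whenever mental age matches chronological age, which is why 100 came to mark “average” intelligence.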
Terman also started a longitudinal study of how gifted children performed later in life, discovering that they achieved a higher degree of academic and professional success. His findings have been thoroughly replicated, and it is now well established that high IQ correlates strongly with many different forms of success, such as academic achievement, career success, income, and even health and life expectancy. You can learn more in our article about the correlation between IQ and success.
World War I erupted in 1914, and when the United States joined the war in 1917, the country’s best psychologists gathered with military leaders to discuss how they could help in the war effort. They agreed that classifying recruits efficiently was an important goal, and they worked on creating the Army tests: easy-to-score IQ tests that could be administered to large groups. There were two tests: the Alpha test, for recruits who could read, which covered general information and verbal skills, and the Beta test, which was non-verbal, with tasks such as block design, perception, and mazes.
After the war ended, David Wechsler, then working at New York’s Bellevue Psychiatric Hospital, became convinced that the Stanford-Binet scales were problematic, especially because of their excessive focus on verbal tasks. Too much weight on verbal tasks could underrate the intelligence of children with low verbal skills. So in 1939 he published the first version of what would become the famous Wechsler Intelligence Scales, today the IQ tests most widely used by professional psychologists.
These scales were not innovative in the tasks they used; they were, more than anything, a compilation of tasks from the different tests available at the time, but together they created the most comprehensive evaluation to date. Wechsler did not support his scales with a new theory. His was, above all, a hands-on approach that sought greater precision in real-life evaluations.
The appearance of many intelligence theories
There then ensued an era of great theoretical development. Spearman suggested that general intelligence was a mental energy, called “g”, behind every kind of ability, and that there were also specific abilities tapped by each kind of task. This proposal came to be called the two-factor theory. For the renowned Thorndike, intelligence was about associations: the more intelligent someone was, the more brain connections that person would have, and intelligence testing would be an indirect way of discovering the number of connections. Although reductionist, it was another early attempt to ground intelligence theory in psychobiology.
Thurstone, a scientific rival of Spearman, proposed that intelligence was made up of seven interrelated abilities, such as memory, inductive reasoning, and verbal fluency, and that no single “g” existed. Cattell found evidence for two general factors of intelligence: fluid intelligence, the raw processing power that lets us reason in novel situations and learn fast, and crystallized intelligence, which represents accumulated learning and knowledge. In 1940 he developed his culture-free test, focused solely on fluid intelligence.
It would be Carroll’s hierarchical theory of three levels, or strata, of intelligence that would have the biggest impact. Combined afterward with the earlier theories of Cattell and Horn, it became known as the Cattell-Horn-Carroll theory of intelligence (the CHC model), the best-supported and most widely accepted model of intelligence today. According to contemporary CHC theory, intelligence is structured in three levels:
- There is a general intelligence factor at the top, which is not given much importance
- Then, there are seven middle factors that correlate with the general “g” to different extents. They are:
- fluid intelligence (Gf),
- crystallized intelligence (Gc),
- short-term memory (Gsm),
- visual processing (Gv),
- auditory processing (Ga),
- long-term retrieval (Glr), and
- processing speed (Gs)
- At the lowest level, each factor is made up of several specific skills, which we do not list here to keep things simple.
Other recent theories of intelligence
Besides CHC, other theories have appeared that are valid contenders. First we should mention IQ tests based on Luria’s neuropsychological approach. These tests try to evaluate the processes that underlie cognition rather than its products, such as verbal performance.
Examples are the Kaufman Assessment Battery for Children and Das and Naglieri’s Cognitive Assessment System. The latter is based on the theory that four processes need to be tested: (1) planning, (2) attention, (3) simultaneous processing (when several elements need to be integrated into a conceptual whole, with tasks such as matrices), and (4) successive processing (sometimes likened to working memory, with tasks such as sentence repetition). It should not surprise us that these tests have shown less racial bias and a more powerful diagnosis of strengths and weaknesses.
A second theory gaining momentum is the g-VPR model, proposed by Johnson and Bouchard in 2005 after reanalyzing and comparing the different models. Based on Vernon’s earlier theories, it states that intelligence is made up of general intelligence at the top and three middle factors: verbal, perceptual, and image rotation ability.
Lastly, we should mention the wave of theories that focus not only on Aristotle’s understanding component but also on the doing and making components. To this group belong Goleman’s theory of emotional intelligence and even more holistic approaches like Gardner’s famous theory of multiple intelligences. His list of intelligences is:
- linguistic,
- logical-mathematical,
- spatial,
- musical,
- bodily-kinesthetic,
- interpersonal,
- intrapersonal, and
- naturalistic
Keep in mind that the proponents of more holistic approaches do not necessarily reject the narrower definitions of intelligence as invalid. Rather, they claim those definitions are too narrow and that intelligence should be conceived more holistically in order to do it justice. Yet it is precisely the narrower theories’ strength, their statistical validity, that constitutes the weakness of the more holistic approaches, which lack thorough data validation. Gardner, for example, held that objective instruments could not be the basis for measuring real intelligence, which should instead be assessed through observations of skills and preferences in real-world activities. That claim puts him in opposition to most scientists in the field and explains why his theories are difficult to prove.
The present moment of intelligence research
Nowadays, IQ tests are used mainly for diagnosing learning deficits, assisting in vocational decisions, and predicting achievement. Children are tested much more often than adults. Geographically, Western countries use them more frequently than Asian, African, and Latin American countries, though the latter are picking up the pace fast.
We have seen that the history of intelligence theory and the development of IQ tests have not gone precisely hand in hand. That is still the case. Intelligence scientists like Flanagan are trying to bridge the gap by teaching a cross-battery approach that allows testing IQ under the CHC model. It entails using subtests from different intelligence tests in order to evaluate all of the abilities of the CHC model. This approach also makes it possible to personalize the chosen tasks depending on which aspects of the person really need to be evaluated.
All in all, we should keep in mind that “all major IQ tests measure g well,…even if some yield verbally flavored IQs, and others perhaps spatially flavored IQs”. So if you have not yet, try our IQ test of fluid intelligence, based on Cattell’s culture-free proposal. It is fast and gives a good estimate of your IQ level.