Editor’s Note: This essay was adapted from Samo Burja’s Intellectual Legitimacy series.
Scientific authority is one of the foundations of power in our society. The landscape of policies and measures considered legitimate and acceptable by society is shaped by whether they are considered to be justified by science. It is not possible to justify a non-scientific response to climate change, pandemics, or some other crisis, even if such crises ultimately pose far more than just scientific questions. As a result, those with the authority to speak on behalf of science have power that goes far beyond just settling scientific debates.
In the early 20th century, sociologist Max Weber concluded that modern civilization had been remade by what he called “legal-rational” authority, which displaced charismatic and traditional authority in Western civilization throughout the modern period. Weber correctly observed that, rather than rising on its own, scientific authority developed in close union with legal, administrative, philosophical, and ideological authority. These forms of authority have no sharp boundary between them. What does such authority look like in our society today?
Consider a scientific study demonstrating a new medicine to be safe and efficacious. An FDA official can use this study to justify the medicine’s approval, and a doctor can use it to justify a patient’s treatment plan. The study has this legitimacy even when incorrect. In contrast, even if a blog post by a detail-oriented self-experimenter contained accurate facts, those facts would not have the same legitimacy: a doctor may be sued for malpractice, or the FDA may spark public outcry, if they based their decisions on reports of this sort. The blog itself would also risk demonetization for violating terms of service, which as a matter of policy usually favor particular “authoritative” sources. A stark example of this is YouTube’s COVID-19 Medical Misinformation Policy, which plainly states that “YouTube doesn’t allow content that spreads medical misinformation that contradicts local health authorities’ or the World Health Organization’s (WHO) medical information.”
For another example, the edit history of the Wikipedia page for the book Why We Sleep, written by a Berkeley professor, is full of back-and-forth edits both adding and removing references to a long list of factual errors in the book compiled by a diligent reader named Alexey Guzey. While Guzey’s contribution was controversial to the editors, they preserved all references to a discussion by Columbia University statistician Andrew Gelman that directly cites Guzey’s work and praises its carefulness. Yet neither Guzey nor Gelman is a sleep scientist.
The difference? As the edit history describes, Guzey has “no association to any academic institution,” so per Wikipedia’s official policies his essay simply could not be cited directly. Once Guzey’s work had “now been the subject of a BBC Radio 4 episode,” however, references to it were added back for good. There is no particular reason to believe the staff at BBC Radio 4 are qualified to understand or make arguments about the science of sleep, yet this is what it took to present the information. The information on its own was not enough.
Wikipedia’s policies explicitly prohibit citing such research, a policy that makes the questionable assumption that original research can only happen in academia. These policies amount to relying on journalistic and academic consensus to establish truth. Such reliance is widely and unreflectingly shared, even by the most educated. When we think we are evaluating an idea or information on its own merits, we are also typically evaluating the authority of who has communicated the idea, and how, without our conscious awareness. Scientific authority is just one component of Weber’s “legal-rational authority,” or what can alternately be generalized as intellectual authority.
Intellectual Authority and Intellectual Legitimacy
Intellectual authority is a personal reputation. It reflects not only perceived expertise, but also the perceived pro-social orientation of the individual. Weber had good reason to group these forms of authority as closely related, since it is possible to amass intellectual authority in one domain and then exercise it in another.
An idea has intellectual legitimacy insofar as it is recognized by society as respectable and reasonable. Such a theory, idea, or assertion of fact does not need to be recognized as credible by all people, or even by very many people at all. There only needs to be a general perception that society at large holds the idea to be legitimate, and that powerful institutions and individuals tolerate or endorse it. Such a perception isn’t necessarily coupled to whether an idea is true; intellectually valid ideas that are considered legitimate can still ultimately turn out to be false, such as the solid but since-disproven scientific hypotheses about phlogiston or the plum pudding model of the atom. Rather, intellectual legitimacy rests mainly in the hands of institutions and individuals with intellectual authority, who are able to bestow or withhold it from ideas, theories, and entire fields of knowledge.
This reliance on authority and legitimacy isn’t something we could simply abolish, because it is difficult for us to assess claims in fields outside our areas of expertise. To deal with this intrinsic limitation, we learn from experience which sources to rely on and which to discard. The shortcut is not just an efficient and effective heuristic for individuals—this regulation acts as the core way that societies process information at scale. In a healthy society, the shortcut works and saves us intellectual work. In a less healthy society, evaluating an idea’s intellectual legitimacy first is often safer than evaluating the idea itself. Individuals who ignore the intellectual legitimacy of an idea do so at their own social peril. Sometimes this amounts to lost social and economic opportunities; other times it escalates the risk of physical violence against them.
Those with the highest intellectual authority aren’t necessarily the most generative. As a consequence, credit for originating a legitimized idea nearly always flows to the person with the highest intellectual authority who has a claim to it, rather than to the first person to have thought of it. This phenomenon inspired what is known as Stigler’s law of eponymy, which states that no scientific discovery is named after its original discoverer. Fortunately, this law isn’t absolute, and it is possible to organize institutions and societies where generativity, validation, and authority are better aligned. The few societies that manage such feats tend to produce intellectual golden ages and transformative technologies. The ones that misalign these factors stagnate.
The intellectual legitimacy of ideas is partially determined by who communicates them. The messenger matters for the reputation of an idea. A contemporary example of this dynamic can be found in the work of Professor Robin Hanson, who began his career as a physicist but later rebranded as an economist. As reported in Fortune, Robin Hanson “says he went to the expense and trouble of getting a Ph.D. almost entirely so that people would take him seriously…he was thinking up market-based improvements to government institutions, but found that with his techie background he couldn’t get anyone…to listen. So he went to CalTech to get a Ph.D.”
Physicists do not have intellectual authority on the topic of how society should be run. Economists do. Hanson was soon implementing a pilot prediction market for the Defense Advanced Research Projects Agency (DARPA). Today, Hanson’s work is often cited by leading commentators and intellectuals such as Nate Silver and Tyler Cowen. By understanding how to acquire the relevant intellectual authority, Hanson gained an audience for his ideas that he couldn’t have had otherwise.
Different social roles hold different amounts of intellectual authority. The academic has more intellectual authority than the journalist, who in turn has more authority than the blogger. We even recognize the intellectual authority of social roles that no longer exist in our society: Marcus Aurelius’ Meditations wouldn’t be as widely read today if he hadn’t been a Roman emperor.
The intellectual authority of these different social roles themselves can change over time. In this way such roles are often examples of borrowed power: one can use one’s socially legible role to legitimate one’s ideas, but at the end of the day one’s social role is contingent on a broader social landscape that is itself prone to change. King Charles III may still hold the ability to issue honors, but he holds much less intellectual sway today than he would have in 1625. The English Civil War, as well as the American, French, and Russian revolutions are significant causes of this shift. The intellectual authority of a role changes with success or failure at social, economic, and political competition.
Power Creates Prestige
Power can be used to create prestige. A sovereign is usually the primary source of prestige in a society. This follows naturally from their status as the society’s leader, that is, the person who has the highest authority in decision-making and is deferred to above all. The ruler uses his fount of prestige to regulate overall status and prestige competition so that the right people and behaviors win. This solves coordination problems and tragedies of the commons at all levels of power. The ruler is the ultimate referee or tiebreaker in competitions of status; who is ultimately allowed to win will shape the goals and behaviors of everyone else in society. A key mechanism for this is awarding honors and prizes. This works not because cash rewards incentivize eventual winners, but because it is a public demonstration of status. The Nobel Prize greatly shapes the favored ambitions and fields of both scientists and economists. Winners are determined by Sweden’s national academies, but each award is personally hand-delivered by the King of Sweden before an audience. If the recipients are worthy, this also raises the prestige of the giver of honor.
In the 16th century, Queen Elizabeth I of England granted minor titles to former pirates, such as Sir Francis Drake and Sir John Hawkins, who helped harass the Spanish navy and set the course for later English naval domination. In the 17th century, King Charles II granted a charter creating the Royal Society, which would play a crucial role in the Scientific Revolution. By conferring the highest honor in the land on naval warfare and scientific exploration—later mainstays of British power—these rulers may have made the most important decisions of their reigns.
Not all ways of using power to create prestige are functional, however. Soviet leader Joseph Stalin’s elevation of the biologist Trofim Lysenko, who rejected Mendelian genetics, was perhaps useful for politically bolstering Stalin’s preferred agricultural policies, but it set back Soviet genetics by decades and contributed to terrible famines in Ukraine and China.
Of course, there are brilliant rulers who might really have something to contribute to a field, and there are some who aren’t particularly brilliant but wish to engage in hobbies for personal fulfillment. A historically common practice for such rulers is to be intellectually active under assumed identities or proxies—sometimes convincingly, sometimes not. Frederick the Great of Prussia, for example, anonymously published a political treatise shortly after assuming the throne. Anonymity prevents the prestige distortions that might make a purely intellectual exercise a matter of state politics. While scientific pursuits should be regulated by the ruler, this can only be done when the ruler can function as an impartial arbiter.
Since a ruler’s prestige subsidizes all other forms of prestige in a society, when the landscape of power shifts, the landscape of prestige shifts accordingly. It is then critical that rulers allocate prestige well—that is, in accordance with the actual distribution of excellence. If they don’t, as in the case of Stalin, the resulting distortions in the allocation of prestige produce distortions in their society’s understanding of what is good and what is true. Lysenkoism was an epistemic and moral disaster.
This kind of corruption can ultimately have catastrophic effects on a society’s health, because the ability to ascertain the truth, and to refer to the truth with legitimacy, is fundamental to the functionality of a civilization’s institutions and people. The more closely social status corresponds to activity that ultimately benefits society, the more strongly such activity is incentivized, and this incentive is far stronger than even a large financial reward. Wisely distributing status makes the difference between a world where most kids dream of becoming YouTubers and one where they dream of taking us to space.
Modern states tend to have presidents and prime ministers rather than kings, but it is a mistake to think they don’t engage in exactly this kind of status regulation, in science as well as in other domains of intellectual authority. In the aftermath of World War II, American officials in the State Department and the CIA wanted to undermine the dominance of pro-Soviet communists in the Western highbrow cultural scene. To do this, they planned to promote artists and intellectuals who were either anti-Soviet or at least not especially sympathetic to the Soviets. They considered abstract expressionist painting, which was then a new and obscure movement, a promising candidate.
In 1946, the State Department organized an international exhibition of abstract painting called “Advancing American Art.” It was so poorly received that the tour was canceled and the paintings were sold off for next to nothing. Undeterred, the CIA continued to arrange international exhibitions for abstract expressionists under a front organization called the Congress for Cultural Freedom. Eventually, the movement caught on.
It would be an oversimplification to say that the CIA made abstract expressionism famous, but its support was not irrelevant. Perhaps the U.S. government would do well to issue an American Nobel Prize in domains like physics and biology. After all, why should the King of Sweden be giving out the highest award to American scientists?
Bureaucracies and Committees Cannot Regulate Intellectual Authority
A significant problem with contemporary states is that they award prestige on autopilot, based on the political needs of small but dogged constituencies in major bureaucracies. This is not governance, but rather yielding to the very problem that governance is supposed to solve. By default, sharp elbows win against sharp minds. It takes an individually empowered ruler to balance the scale in the other direction.
For example, there are valid reasons to be skeptical of the usefulness or veracity of recent scientific work being done in the fields of, say, quantum computing or particle physics. But where the interests of bureaucratic science and bureaucratic government are aligned, no authority above the bureaucracy remains to hold bad ideas to account. It doesn’t matter if the bad ideas will preclude achieving an organization’s mission, so long as the bad ideas can secure the organization’s continued existence.
Bureaucracies are often constructed in such a way as to only recognize certain pre-existing bureaucratic markers of legitimacy as valid. This process is easiest to observe in contemporary academia. The tenure committee recognizes the number of papers published as a valid indicator of a professor’s academic success, but not blog posts with the same information and reach. The justification for the proliferation of professionalized committees and academic bureaucracies is to improve the epistemic foundations of a field. But improvement in epistemic foundations is not always what we observe.
Rather, professionalization is a way to reduce variance. Professionalization is essentially the creation of set procedures, norms, and social roles to govern a given area of knowledge. But while this eliminates unserious crackpots, it also crowds out the “unorthodox” and often stochastic experimentation employed by all exceptional live players as they drive new fields forward. In short, reducing variance cuts off both tails of the distribution. While professionalization does eliminate some malpractice, for pre-paradigmatic fields it can be harmful, since researchers on the frontier of knowledge must pursue hypotheses that can’t be justified to bureaucrats. If the hypothesis could be justified to bureaucrats, the field would already be mature.
Bureaucratization closes many fields of inquiry. Archaeologists today are less likely to change their historical views based on new finds than their 19th-century counterparts. The latter successfully integrated evidence of previously unknown civilizations, including the Hittites and the Sumerians, when digs suggested it. It’s not that there weren’t counter-arguments and skepticism, but the weight of physical evidence would be engaged and eventually win over the inertia of established theories. The long and slow road of evaluating modern finds such as the impressive 11,000-year-old Göbekli Tepe site in Turkey serves as a notable contrast. It took the life-long dedication of the unorthodox archaeologist Klaus Schmidt to find and promote the site. Even then, his success was arguably not in convincing Western academia, but in convincing the Turkish government, whose authority was then sufficient for widespread recognition. Schmidt would live to see Turkey recognize the site’s significance, but not UNESCO, whose recognition came only after his death.
The more fundamental problem with only recognizing or working on highly processed knowledge is that even in the most functional institution, a completely new field or paradigm shift cannot emerge institutionalized. As a consequence, a society friendly to pursuing the frontiers of knowledge must face the challenge of legitimizing the intellectually productive activities of key individuals early in their career, before their success is bureaucratically legible.
Legible intellectual success can shift the authority of a social role somewhat, even without direct governance by the sovereign. A good example of this is 20th-century physics. The triumphs of theories such as general relativity and quantum mechanics greatly enhanced the intellectual authority of the profession. The detonation of two atomic bombs in wartime also demonstrated this new mastery of physics as vital to state interests. This drove physicists such as Albert Einstein to comment on statesmanship, and others like J. Robert Oppenheimer to even try their hand at it, with mixed success.
However, any kind of intellectual legitimacy can grow detached from truth or intellectual validity. This can lead society to endorse falsehoods, sometimes even when available information easily disproves views still held to be legitimate. Perception is ultimately outsourced to various authorities—sometimes independently prominent individuals, but most commonly to institutions and those acting as their representatives. It takes live players or practitioners of living traditions of knowledge to orient such institutions towards the truth. Incentives inside of old, automated organizations are at best weakly coupled to truth and at times actively push against it.
Powerful Individuals Should Steer Science
Given the corrupting influence of patronage networks, committees, and political survival on truth-seeking, it might be tempting to seek refuge in solutions that don’t require knowledge to be institutionalized. This first runs into the problem of knowledge succession, which must be solved to keep traditions of knowledge alive. Various proposals have been made for substitutes to legitimizing institutions, from universal literacy and education to technology-assisted wikis. All such solutions, when closely examined, amount to institution building. The reliability of contemporary Wikipedia, for example, rests on a tight social network of obscure and enigmatic editors rather than on occasional contributions or vigilante edits from visitors. And universal literacy and education have never been achieved without the help of either states or organized religion, which always take the opportunity of educating the masses to put the stamp of legitimacy on their own ideas.
The second problem is that intellectual authority is too useful to power centers to be ignored. It will be deployed, one way or another. Social engineers have used it to guide behavior, loyalties, and flows of resources for all of recorded history, and likely long before as well. The most impressive example is the Catholic Church, which built its authority on the interpretation of religious matters, synthesizing human psychology, law, and metaphysics. The state church of the Roman Empire outlived the empire by many centuries. By the 11th century, Church authority was sufficient to organize and pursue political aims at the highest level. It was sufficient to force the Holy Roman Emperor Henry IV to kneel for three days as a blizzard raged, waiting for the Pope’s forgiveness. Few military and political victories are as clear. The Pope was revealed to be more powerful than kings.
The power of the Pope didn’t rest primarily in his wealth, armies, or charisma. Rather, it rested on a claim of final authority in matters of theology, a field then considered at least as prestigious as cosmology is today. This can be compared to the transnational influence of contemporary academia on policy and credibility.
Such exercises of power weren’t completely unopposed. Today it is often forgotten that Martin Luther’s ninety-five theses and debates such as that at the Diet of Worms were first a challenge to intellectual authority, and only consequently a political struggle. The centuries-long consequences of the Protestant Reformation are myriad, but one of them is the negative connotation of the word “authority” in the English-speaking West. Protestant pamphlets offered harsh and at times vulgar critiques of Papal “authority.” Merely giving the word a negative connotation didn’t stop Protestant nations such as England or Sweden from creating their own state churches with much the same structure as the Catholic Church. Their new institutional authority was then a transformation of the old, using much the same social technology, rather than a revolution.
Inheritance of such authority shows some surprising patterns. The Anglican Church would famously have its own dissenters, who ended up settling in the North American colonies. America’s Ivy League universities run on an inheritance of intellectual authority that they first acquired as divinity schools serving the various denominations of the many experiments in theocracy that made up the initial English colonies of the region. Harvard’s founding curriculum conformed to the tenets of Puritanism and used the University of Cambridge as its model. Amusingly, the enterprising Massachusetts colonists decided to rename the town of Newtowne to Cambridge a mere two years after Harvard’s founding. Few attempts to bootstrap intellectual authority by associating with a good name are quite as brazen!
Many know that the University of Pennsylvania served the Quakers of Pennsylvania, since the colony, and consequently the university, was named after its founder, the Quaker thinker William Penn. But fewer know that Yale was founded as a school for Congregationalist ministers, and that Princeton was founded by Yale professors and students who disagreed with prevalent Congregationalist views. The intellectual authority of modern academia can be traced back to an era when theology was the basis of that authority. Today, theology has nothing to do with it; the authority has been re-justified on new grounds. This shows that intellectual authority can be inherited by institutions even as they change the intellectual justification of that authority.
That such jumps are possible allows for interesting uses of social technology, such as the King of Sweden bestowing credibility on physicists through the Nobel Prize, or Elon Musk ensuring that non-technical employees at his companies listen to engineers by designing the right kind of performance art. Different types of intellectual authority are easily conflated, for both good and ill. This also explains why we see uncritical belief in those who wear the trappings of science without doing science itself. When medicine suffered a worse reputation than science in the 19th century, doctors adapted by starting to wear white lab coats. This trick continues to work in the present day.
Intellectual golden ages occur when new intellectual authority is achievable for those at the frontiers of knowledge. This feat of social engineering that legitimizes illegible but intellectually productive individuals is then upstream of material incentives, which is why a merely independently wealthy person cannot just throw money at any new scientific field or institution and expect it to grow in legitimacy. It ultimately rests on political authority. The most powerful individuals in a society must lend their legitimacy to the most promising scientific minds and retract it only when they fail as scientists, rather than as political players. The society in which science can not just exist, but flourish, is one where powerful individuals can elevate people with crazy new ideas on a whim.
The dreams of automating scientific progress with vast and well-funded bureaucracies have evidently failed. This is because bureaucracies are only as dynamic as the live players who pilot them. Without a live player at the helm, a powerful individual in control of the bureaucracy, the existing distribution of legitimacy is simply frozen in place, and more funding serves only to freeze it further rather than to drive scientific progress forward. Powerful individuals will not always make the right bets on crazy new ideas and the crazy people who come up with them, but individuals have a chance to make the right bets, whereas bureaucracies can only pretend to make them. Outsourcing science to vast and well-funded bureaucracies thus gives us the impression of intense work on the cutting edge of science, without any of the substance.
The solution is not just to grant more funding and legitimacy to individual scientists rather than scientific bureaucracies, but to remind powerful individuals, and especially those with sovereign authority, that if they don’t grant this legitimacy, no one else will. Science lives or dies on personal endorsement by powerful patrons. Only the most powerful individuals in society can afford to endorse the right immature and speculative ideas, which is where all good ideas begin their life cycle.