You are touching on a real tension in the philosophy of science, and your instinct isn’t unreasonable. The common slogan “a theory must be testable to be scientific” is a simplified version of an idea associated with Karl Popper. Popper argued that a theory should be falsifiable—that is, there must exist some observation that could prove it wrong. His point was mainly to distinguish science from systems that explain everything after the fact (astrology was one of his favorite examples).
But Popper’s rule was never meant to function as a rigid gatekeeper for every stage of scientific thinking. In practice, scientists often work with ideas that cannot yet be tested.
A few historical examples illustrate this nicely.
In the early 1800s, the atomic theory of matter was widely discussed long before anyone could directly detect atoms. John Dalton proposed atoms mainly to explain chemical ratios. For decades they were theoretical entities inferred from patterns, not directly testable objects. Only later did Albert Einstein’s analysis of Brownian motion, followed by Jean Perrin’s experiments, provide convincing empirical confirmation.
Another example is the prediction of the planet Neptune. Its existence was inferred mathematically from irregularities in the orbit of Uranus by Urbain Le Verrier and John Couch Adams before anyone had actually observed it. The idea was testable in principle, but the prediction preceded the observation.
More modern cases exist as well. Concepts like cosmic inflation or parts of string theory are discussed in physics even though direct experimental tests are extremely difficult or currently unavailable. Many physicists treat such ideas exactly as you describe: put on the shelf but kept in the conversation.
So most philosophers of science today would draw a distinction:
A theory should be testable in principle to belong to science, but it does not need to be immediately testable to be worth discussing. Theories often begin as speculative frameworks motivated by mathematical elegance, explanatory power, or consistency with existing knowledge. Only later do experimental techniques catch up.
Where scientists do become skeptical is when a proposal is structured so that no possible observation could count against it. At that point it stops behaving like science and becomes more like metaphysics or theology.
In other words, your “put it on the shelf” idea is actually quite close to how working scientists treat many hypotheses. The practical rule tends to be something like:
Speculate freely—but sooner or later the universe gets a vote.
I'm curious about the thought behind your comment. Were you thinking about a specific modern example—cosmology, quantum foundations, multiverse ideas—or speaking more generally about scientific method?
That’s actually a very thoughtful distinction, and many scientists would recognize exactly what you mean.
The line between “not scientific” and “unscientific” is subtle but important.
An idea that is not scientific simply lies outside the domain where science can currently operate. It may concern questions we cannot test yet, or perhaps never will. Much of early cosmology, for example, sat in that category for a long time.
An idea that is unscientific, by contrast, usually violates the habits that make science reliable. It may ignore evidence, refuse to allow correction, or invoke explanations that cannot in principle interact with observation. That’s the sort of thing Karl Popper was reacting against when he emphasized falsifiability.
So a person can speculate in a scientific spirit even when the speculation itself is not science. The habits tend to look something like this:
You try to keep your ideas consistent with what is already well established. If a speculation contradicts something strongly supported—say thermodynamics or conservation laws—it needs an exceptionally strong reason.
You keep the speculation logically coherent. Internal consistency matters even before evidence arrives.
You remain willing to abandon it if better explanations appear. The psychological openness is part of the scientific attitude.
And ideally, even if you cannot test the idea now, you can at least imagine what a test might look like someday.
Several major scientists worked this way. Isaac Newton famously refused to speculate about the mechanism of gravity (“hypotheses non fingo”), yet the inverse-square law itself described something he could not explain physically. Centuries later, Albert Einstein replaced Newton’s picture with spacetime curvature. Both stages involved reasoning that outran direct experimental insight for a time.
Your approach—avoiding ideas that violate the structure of science even when they cannot yet be tested—is very close to what philosophers sometimes call “scientific naturalism.” It’s essentially a discipline of thought rather than a method of experiment.
And you’re right that it’s often hard to tell where the boundary lies. Many ideas that once looked speculative or even metaphysical eventually became testable as technology improved.
Out of curiosity, when you say most of your speculations are untestable, are they mainly about cosmology and physics, or more about philosophical questions like time, causality, and existence? Those two domains tend to blur into each other very quickly.
That’s a very perceptive way to frame the distinction, and you’re right that “hard” and “soft” don’t really refer to difficulty. They mostly reflect the kind of evidence available and the degree of experimental control possible.
Astronomy is a fascinating case because, although it’s considered a hard science, it is actually almost entirely observational. Astronomers cannot run experiments on stars or galaxies; they can only observe what the universe provides. What makes the field powerful is that the observations are interpreted through physics tested elsewhere—often in laboratories on Earth. For example, the spectral lines used to identify elements in stars rely on laboratory spectroscopy developed by scientists like Gustav Kirchhoff and Robert Bunsen. Once you know what spectral fingerprint hydrogen or sodium produces in the lab, you can recognize those same fingerprints in the light from distant stars.
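To see that logic in miniature, here is a rough Python sketch. The rest wavelengths below are real laboratory values for hydrogen’s Balmer lines, but the “observed” spectrum, the matching function, and the tolerance are all invented for illustration:

```python
# A minimal sketch of the "spectral fingerprint" idea, using toy data.
# Rest wavelengths of hydrogen's Balmer lines (laboratory values, angstroms).
HYDROGEN_BALMER = [6562.8, 4861.3, 4340.5, 4101.7]

def matches_hydrogen(observed, tolerance=0.5):
    """Return the common redshift z if every observed line fits hydrogen
    at one Doppler shift (observed = rest * (1 + z)), else None.
    Assumes lines are listed in the same order as HYDROGEN_BALMER."""
    z = observed[0] / HYDROGEN_BALMER[0] - 1.0       # estimate z from one line
    for obs, rest in zip(observed, HYDROGEN_BALMER):
        if abs(obs - rest * (1.0 + z)) > tolerance:  # the other lines must agree
            return None
    return z

# Invented "starlight": the hydrogen pattern, Doppler-shifted by z = 0.001.
observed = [w * 1.001 for w in HYDROGEN_BALMER]
print(matches_hydrogen(observed))  # ~0.001: same fingerprint, slightly shifted
```

The point is that the fingerprint itself comes from the laboratory; the telescope only supplies the shifted copy of it.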
So astronomy becomes “hard” largely because the underlying physical laws are experimentally grounded, even though the celestial objects themselves are not experimentally manipulated.
Ethnology, by contrast, deals with human societies, which introduces two constraints. One is ethical—you cannot manipulate cultures the way you might manipulate chemicals or cells. The other is that human systems have enormous numbers of interacting variables that cannot be isolated cleanly.
The discipline that systematized observational work in ethnology was shaped by figures such as Franz Boas, who emphasized careful field observation and cultural context rather than sweeping generalizations. Later anthropologists often treated cultures almost the way astronomers treat galaxies: you observe what exists rather than construct controlled experiments.
Interestingly, even within the social sciences there have been attempts to introduce experimental elements. Economists sometimes run controlled studies, and psychologists run laboratory experiments, but ethnology usually remains closer to naturalistic observation, much like field biology.
So the difference you noticed isn’t really hardness versus softness—it’s closer to experimental control versus observational inference. Astronomy is observational but tied to experimental physics; ethnology is observational without that laboratory anchor.
Your career path is actually a nice illustration of how the scientific attitude can remain the same even when the evidence changes. The tools shift—from spectroscopy and orbital calculations to participant observation and comparative analysis—but the underlying habits remain: careful observation, skepticism, and trying not to outrun the evidence.
I’m curious about one thing from your astronomy days. Were you working more on observational astronomy (telescopes, spectra, catalogs) or on the theoretical side? Those two subcultures inside astronomy can feel almost like different sciences themselves.
You’re largely right about the change, though perhaps not entirely about the disappearance of observational astronomers. What has vanished is the old style of observation, not the observational role itself.
For most of the history of astronomy, the astronomer literally sat at the telescope. Someone like Edwin Hubble at Mount Wilson Observatory spent long nights guiding the telescope by hand while photographic plates were exposed. Earlier still, observers such as William Herschel swept the sky visually and wrote down what they saw. In that era the observer, instrument operator, and analyst were the same person.
Several things changed that model.
First, detectors shifted from the human eye and photographic plates to electronic sensors (CCDs). These produce enormous quantities of precise data but require computers to process them. Second, telescopes became vastly larger and more complex. Facilities such as the Keck Observatory or the Atacama Large Millimeter/submillimeter Array (ALMA) are operated by technical teams and often scheduled remotely. And third, modern surveys like those from the Hubble Space Telescope or the newer James Webb Space Telescope generate data sets so large that analysis becomes a field in itself.
So you’re right that the romantic image of the astronomer at the eyepiece has mostly disappeared. Today an astronomer may never physically touch the telescope producing their data.
But observational astronomy still exists as a distinct activity. Observers now design the instrumentation, observing programs, and data pipelines, then interpret the results. In other words, the craft moved from guiding a telescope through the night to designing how the telescope gathers information in the first place.
Interestingly, some people in the field worry about exactly the thing you’re hinting at: when the same researchers both gather and interpret the data, there can be a risk of theory steering observation. Large collaborations try to guard against that by separating roles or by making data publicly available so other teams can reanalyze it.
In a way, astronomy has become something like particle physics—huge instruments, massive data streams, and analysis done by large teams rather than solitary observers.
Your comment reminds me of something that older astronomers sometimes say: the telescope used to be the instrument; now the instrument is the computer.
Given that you began in astronomy, I’m curious what era you encountered. Were you still working in the photographic plate period, or had CCD detectors already started to take over by then?
Yes, that divide is very real in anthropology, and ethnology sits right on top of it.
Traditionally the discipline separated into two related but different activities. One was ethnography—the direct observation you’re describing: living with a group, watching daily life, asking questions, and sometimes literally sharing the experiences. That sort of work was systematized by people like Bronisław Malinowski, who argued that an anthropologist had to live within the society being studied and observe it from the inside. The “fresh grub worm” example you gave is exactly the sort of thing ethnographers mean when they talk about participant observation.
The other activity is what anthropologists historically called ethnology: comparing many ethnographic reports in order to look for patterns across cultures. A classic example is the work of George Peter Murdock, who built large cross-cultural databases to examine correlations among kinship systems, marriage patterns, residence rules, and so forth. Instead of fieldwork, that work mostly involves reviewing accumulated descriptions of cultures and trying to identify regularities.
So in simplified terms:
- Ethnography: direct field observation and description of a particular culture.
- Ethnology: comparative analysis looking for broader patterns across cultures.
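To make the ethnological side concrete, here is a deliberately toy sketch in the spirit of Murdock’s tabulations. The societies and trait codings are invented, and real comparative work takes far more care with sampling and historical relatedness:

```python
# A toy, Murdock-style cross-cultural tabulation. The societies and the
# trait codings below are invented for illustration, not real field data.
# Each record: (society, patrilineal_descent, patrilocal_residence)
societies = [
    ("A", True,  True),  ("B", True,  True),  ("C", True,  False),
    ("D", False, False), ("E", False, False), ("F", False, True),
    ("G", True,  True),  ("H", False, False),
]

# Build a 2x2 contingency table: does descent rule go with residence rule?
counts = {(d, r): 0 for d in (True, False) for r in (True, False)}
for _, descent, residence in societies:
    counts[(descent, residence)] += 1

a, b = counts[(True, True)], counts[(True, False)]
c, d = counts[(False, True)], counts[(False, False)]

# Phi coefficient: a standard measure of association for 2x2 tables.
phi = (a * d - b * c) / ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5
print(f"both traits together in {a} of 8 societies; phi = {phi:.2f}")
```

Notice that nothing in the table required visiting a single village: the ethnologist works entirely from the ethnographers’ accumulated reports.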
Your comparison with astronomy earlier actually fits surprisingly well. Ethnographers generate the primary observations, much as observational astronomers once gathered the raw data. Ethnologists then try to synthesize many such reports into generalizations or theories about human societies.
And the tension between those two approaches has been persistent. Fieldworkers often argue that comparative theorists oversimplify cultures and miss local meaning. Comparative scholars counter that without synthesis anthropology becomes a collection of fascinating but disconnected case studies.
A famous debate along those lines involved Claude Lévi-Strauss, whose structural analyses looked for deep patterns in myth and kinship across many societies. Some ethnographers admired the scope of the patterns he proposed, while others felt the abstractions floated too far above the lived realities described in fieldwork.
So the split you’re pointing to is not just methodological—it’s almost epistemological. One side trusts direct experience and observation, the other tries to extract general principles from accumulated reports.
Your grub-worm example actually captures something important: sometimes the only way to understand a cultural practice is to participate enough to know what it feels like from the inside, something that can never quite be captured in a comparative table.
I’m curious about your own inclination. When you worked in that area, did you find yourself drawn more to the fieldwork/ethnographic side, or to the pattern-seeking comparative side? People who come from a hard-science background often end up gravitating toward the comparative approach, but not always.
That frustration has been felt by generations of anthropologists. The discipline has always wrestled with the question you’re pointing to: is anthropology meant mainly to document cultures, or to discover general principles about human society?
Early anthropologists were quite confident that broad conclusions could be drawn. Scholars like Lewis Henry Morgan and Edward Burnett Tylor tried to construct large theories about how societies evolve—from simple to complex forms of organization, religion, and technology. Their ambition was exactly what you describe: to infer something about the general trajectory of human societies.
But those early “evolutionary” frameworks later ran into serious criticism. Fieldworkers began showing that societies did not fall neatly into those stages, and the approach also became entangled with colonial assumptions about cultural hierarchy. Anthropologists such as Franz Boas reacted strongly against that tradition and argued that each culture had to be understood in its own historical context. That reaction—often called historical particularism—pushed anthropology toward careful observation and away from sweeping generalizations.
The result was exactly the tension you’re describing. If you avoid generalization entirely, anthropology risks becoming a vast archive of fascinating descriptions without broader explanatory power. Yet if you push too hard for universal patterns, you risk flattening cultural differences or imposing artificial schemes.
Various attempts have been made to recover some middle ground. Comparative projects like those built by George Peter Murdock tried to use statistical methods across large cultural datasets. Later scholars such as Marshall Sahlins and others looked for patterns in how societies adapt economically and symbolically. More recently, fields like cultural evolution try to analyze how social practices spread and change over time using models borrowed from biology.
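For a flavor of what those borrowed models look like, here is a deliberately minimal sketch with invented parameters: each generation, individuals copy a practice from a random member of the previous generation, with occasional innovation, and the mix of practices drifts over time:

```python
import random

# A minimal cultural-transmission sketch in the spirit of such models
# (all parameters invented, not drawn from any published study).
random.seed(42)

def transmit(population, innovation_rate=0.01, variants="AB"):
    """One generation of unbiased copying plus occasional innovation."""
    return [random.choice(variants) if random.random() < innovation_rate
            else random.choice(population)
            for _ in population]

pop = ["A"] * 90 + ["B"] * 10            # start with practice B rare
for _ in range(200):                     # let drift and innovation act
    pop = transmit(pop)
print(pop.count("A"), "A vs", pop.count("B"), "B after 200 generations")
```

Even this caricature shows the key move: treating practices as things that are transmitted and sampled, so that their frequencies can be modeled statistically.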
But even today anthropology remains cautious about strong universal claims. Human societies are extraordinarily variable, and cultural practices interact with environment, history, language, and technology in complicated ways.
Your frustration is understandable because it reflects a deeper question about what counts as scientific explanation in a domain where controlled experiments are nearly impossible. In astronomy you can test a law of gravity against countless independent systems. In human societies, each case is historically unique.
Yet some general conclusions have emerged despite the caution. For instance, almost all societies develop structured kinship systems, symbolic communication (language), and norms governing cooperation and conflict. The details vary widely, but the presence of those structures appears nearly universal.
So anthropology may never produce laws of society as crisp as Newton’s laws of motion, but it can still identify recurring constraints and tendencies in how human groups organize themselves.
Your comment about society evolving is interesting because that idea has quietly returned in modern research, though in a different form than the 19th-century schemes. Some researchers now treat culture as something that evolves through transmission and selection, somewhat analogous—though not identical—to biological evolution.
Given your earlier background in astronomy, I wonder whether part of your frustration comes from moving from a field where laws of nature are expected to one where historical contingency dominates. Many scientists who cross that boundary feel exactly that difference.
Yes—once human beings enter the system, the character of explanation changes in a fundamental way.
In the physical sciences the objects being studied do not interpret the laws that govern them. A hydrogen atom does not decide to behave differently tomorrow, and a planet does not change its orbit because it has adopted a new ideology. That stability makes it possible to describe regularities mathematically. The work of people like Isaac Newton or later James Clerk Maxwell succeeded because the underlying phenomena behave consistently enough that equations capture them remarkably well.
Human systems are different in at least three ways.
First, human beings interpret their situation. A law, a market rule, or a religious belief only works because people understand it and act on that understanding. That means the explanatory variables include meanings, intentions, and expectations—things that do not exist in the same way in physics.
Second, human societies are historical systems. What people do today depends heavily on past institutions, traditions, and accidents of history. Two societies facing the same environmental conditions may respond very differently because their historical trajectories diverged centuries earlier.
Third, the systems are reflexive. When a theory about society becomes widely known, it can actually change the behavior it tries to describe. Economic forecasts, for example, can alter markets precisely because people respond to them. That sort of feedback rarely occurs in the physical sciences.
Some thinkers have tried to formalize these differences. The sociologist Max Weber argued that the social sciences must incorporate interpretive understanding (Verstehen) in addition to causal explanation. In economics, scholars like John Maynard Keynes emphasized expectations and psychological factors because purely mechanical models of markets kept failing to predict crises.
Yet despite the subtleties, the social sciences do discover regularities, just usually weaker and more probabilistic than physical laws. For instance, demographic transitions, urbanization patterns, or certain economic behaviors recur across many societies even though they never follow a perfectly fixed formula.
One way some scholars describe the difference is this:
- Physical sciences often produce laws.
- Social sciences more often produce tendencies and constraints.
Your word “subtle” captures it nicely. The signal is still there, but it is embedded in layers of culture, psychology, and history that make it much harder to isolate.
Your own trajectory—from astronomy to ethnology—almost reads like a tour across that entire spectrum of scientific inquiry. I’m curious: when you look back on it now, do you feel the social sciences ultimately taught you more about how humans actually behave, even if they lacked the satisfying clarity of physical laws?
That actually sounds less grandiose and more realistic than you may think. Careers rarely follow the intellectual paths our early studies suggest. The interesting thing is that the habits of mind from those studies tend to persist even when the field changes completely.
Coming from astronomy and ethnology would almost inevitably give you a somewhat unusual lens in places like insurance and computing. Insurance, after all, is one of those curious hybrids: it tries to apply mathematical regularities to human behavior, which means it lives right on the boundary between the worlds you were describing earlier. Mortality tables and risk models aim for the clarity of the physical sciences, yet the underlying phenomena—human life, accidents, economic behavior—belong to the messy social world.
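That hybrid character shows up even in the simplest actuarial arithmetic. Here is a toy life-table calculation; the mortality rates are invented, but multiplying year-by-year survival probabilities has the clean feel of physics, even though each rate summarizes messy human reality:

```python
# A toy life-table calculation. The q_x values are invented, not taken
# from any real mortality table: q_x = probability that a person aged x
# dies before reaching age x + 1.
q = {70: 0.020, 71: 0.022, 72: 0.025, 73: 0.028, 74: 0.031}

def survival_probability(age, years):
    """Probability that a person now `age` survives the next `years` years."""
    p = 1.0
    for x in range(age, age + years):
        p *= 1.0 - q[x]          # survive each year in turn
    return p

# Chance that a 70-year-old reaches 75, on these toy rates:
print(f"{survival_probability(70, 5):.3f}")   # -> 0.880
```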
Computing, especially in earlier decades, had a similar mixture. The machines themselves behave with the strict determinism of physics, but the systems being built are usually meant to interact with organizations full of people. Anyone who spent time in management there quickly discovers that the difficult problems are rarely technical; they are social, institutional, or psychological.
So it’s not surprising that your perspective differed from many colleagues’. Someone who has spent time thinking about how knowledge is constructed in different disciplines tends to notice assumptions that others take for granted. In practical settings that often shows up as asking questions like: What exactly are we measuring? What are we assuming about human behavior? What does this model really capture, and what does it ignore?
Those sorts of questions are not always welcomed in workplaces, but they are often the ones that prevent expensive mistakes.
And in a way your earlier point about “hard” versus “soft” sciences still applies. Organizations often try to treat human systems as though they behaved like physical systems—predictable, optimizable, reducible to formulas. Sometimes that works well enough for a while. Sometimes reality pushes back.
Your comment makes me wonder about something. In your insurance and computing years, did you find that the models people relied on—risk models, forecasting, system planning—were often treated with more confidence than the underlying assumptions really justified? That’s a pattern people in those fields often notice after a while.
That is a wonderful little example of how human systems behave differently from physical ones.
From a purely rational or statistical perspective, your reasoning was perfectly sound. If you list Canada first, Canadians notice and feel mildly pleased; Americans almost certainly do not notice or care. So the total satisfaction across users increases slightly at essentially zero cost. It’s a tiny example of what economists might call a Pareto improvement in sentiment.
But organizations rarely optimize for that sort of quiet balance. What they often optimize for is avoiding perceived status loss, especially by the dominant group. In this case the symbolic issue—putting the United States first—became more important than the small practical advantage you identified.
In other words, the decision wasn’t about usability or logic; it was about symbolic hierarchy.
Anthropologists would recognize that immediately. Many societies maintain subtle signals about rank, precedence, and honor—who enters a room first, whose name is spoken first, whose flag stands in the central position. Those conventions often persist even when they have no practical effect at all.
What makes your story interesting is that you approached the problem like a risk modeler: maximize aggregate satisfaction. Your colleagues reacted like members of a status-sensitive culture: avoid any gesture that could be interpreted as lowering national precedence.
So the argument was really between two different logics:
- Utility logic (your approach): small gain for one group, no loss for others.
- Symbolic logic (their approach): precedence signals status, therefore it must be preserved.
In a physics problem the first logic would win immediately. In a human system the second often wins, because people are exquisitely sensitive to signals of rank and identity.
Your anecdote is actually a perfect micro-example of the very problem you were describing earlier about the social sciences. If humans behaved like particles, your solution would have been obviously correct. But humans behave more like interpreters of symbols, and symbols carry emotional weight far beyond their practical consequences.
And the amusing part is that such arguments occur everywhere—software menus, diplomatic seating orders, treaty wording, even the order of names on a building plaque.
I suspect you saw many similar cases where a technically optimal solution lost to a symbolic one. Those are almost inevitable whenever organizations include large human egos and identities.