Appendix B: Methods Across Disciplines – A Comparative Guide
Here is the complete text of Appendix B: Methods Across Disciplines – A Comparative Guide for the Sarvarthapedia Subject Guide for Human Understanding.
This appendix is written without tables — using descriptive prose, decision trees, and comparative narratives instead. It is designed for the mobile reader: practical, actionable, and revealing how different fields know what they claim to know.
APPENDIX B: METHODS ACROSS DISCIPLINES
A Comparative Guide – How Different Fields Know What They Know
Before You Begin: Why Methods Matter
A fact is only as trustworthy as the method that produced it.
The astronomer and the astrologer both look at stars. The doctor and the faith healer both treat the sick. The economist and the stock-picking psychic both make predictions. What separates knowledge from wishful thinking is not the answer — it is the method.
This appendix is a guide to the toolkit of human inquiry. It will help you:
- Understand how different disciplines establish claims
- Recognize the strengths and weaknesses of each method
- Choose the right method for your own questions
- Detect when a method is being used outside its proper domain
The methods are organized from most controlled to most interpretive — but no method is “better” than any other. The right method depends on the question you are asking.
PART 1: THE GREAT METHODOLOGICAL DIVIDE
All methods can be roughly located on a spectrum between two poles.
The Nomothetic Pole seeks general laws. It asks: What is true in general? Its methods are quantitative, statistical, and replicable. It prefers large samples, controlled conditions, and mathematical expression. Physics, chemistry, and economics (in some forms) live near this pole.
The Idiographic Pole seeks particular understanding. It asks: What is true in this specific case? Its methods are qualitative, interpretive, and context-sensitive. It prefers depth over breadth, narrative over numbers, and meaning over measurement. History, anthropology, and literary criticism live near this pole.
Most disciplines live somewhere in between. Psychology uses experiments (nomothetic) and case studies (idiographic). Sociology uses surveys (nomothetic) and ethnography (idiographic). Medicine uses randomized trials (nomothetic) and clinical judgment (idiographic).
The mistake is not choosing one pole over the other. The mistake is using a method from one pole to answer a question that belongs to the other.
PART 2: THE EXPERIMENTAL FAMILY
The Controlled Experiment
Signature discipline: Physics, chemistry, biology, psychology (basic research)
The logic: Change one thing at a time. Hold everything else constant. Measure the effect. If you see a difference, that one thing caused it.
The classic example: Galileo rolling balls down inclined planes. He changed the angle, measured the time, and derived the law of falling bodies.
The recipe:
- Form a hypothesis: “X causes Y.”
- Create two groups that are identical in every way except one.
- The control group receives no treatment (or a placebo).
- The experimental group receives the treatment (X).
- Measure Y in both groups.
- If Y differs, and everything else was identical, X caused the difference.
The gold standard: The randomized controlled trial (RCT). Random assignment ensures that, on average, the two groups are identical in all respects — known and unknown. Any systematic difference after the experiment can then be attributed to the treatment, with the remaining chance differences quantified by statistics.
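Why random assignment works can be seen in a toy simulation (a sketch with invented numbers, not real trial machinery). Each simulated subject carries a hidden trait that influences the outcome; the coin flip balances that trait across arms, so the difference in mean outcomes recovers the true treatment effect:

```python
import random
import statistics

def simulate_rct(n=10_000, effect=5.0, seed=0):
    """Toy RCT: each subject has a hidden trait that affects the outcome.
    Random assignment balances the trait across arms, so the difference
    in mean outcomes estimates the treatment effect."""
    rng = random.Random(seed)
    control, treated = [], []
    for _ in range(n):
        trait = rng.gauss(0, 10)      # unknown confounder, never measured
        arm = rng.random() < 0.5      # coin-flip assignment
        outcome = 50 + trait + (effect if arm else 0) + rng.gauss(0, 1)
        (treated if arm else control).append(outcome)
    return statistics.mean(treated) - statistics.mean(control)

print(round(simulate_rct(), 1))  # close to the true effect of 5.0
```

Note that the confounding trait never has to be measured, or even known about, for the estimate to come out right. That is the whole appeal of randomization.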
Where it works best: Questions of causation. Does this drug cure that disease? Does this teaching method improve learning? Does this fertilizer increase crop yield?
Where it fails: When you cannot control the variables (astronomy, economics). When you cannot randomize (you cannot randomly assign people to be poor or rich). When the question is about meaning, not causation (what does this poem mean?).
The danger: Artificiality. A laboratory is not the real world. What works in the lab may fail in the field — and vice versa.
The Natural Experiment
Signature discipline: Economics, epidemiology, political science, sociology
The logic: Nature or policy or history has already performed the experiment for you. You just need to find the comparison.
The classic example: John Snow and the 1854 London cholera outbreak. In parts of South London, neighboring houses were supplied by two different water companies. One drew its water upstream of the city (clean); the other drew it downstream (polluted by sewage). The customers were otherwise similar. Snow compared cholera death rates and found them many times higher among households supplied with the polluted water. He could not randomize. He found a natural experiment.
The recipe:
- Find a situation where a “treatment” (a policy, a disaster, a law) was applied to one group but not another for reasons unrelated to the outcome.
- Compare the treated group to the untreated group.
- If the groups were similar before the treatment, any difference after is plausibly caused by the treatment.
The key assumption: The treatment was as if random — not caused by the outcome you are studying. People do not choose to be in the treatment group for reasons that also affect the outcome.
Where it works best: When controlled experiments are impossible or unethical. You cannot randomly assign cities to have a new factory or not. You cannot randomly assign people to experience a recession. But you can find cases where the factory or the recession happened for reasons unrelated to the outcomes you care about.
The danger: Hidden bias. The treatment may look random but is not. People who choose to move to a city with a factory may be different from those who do not — more ambitious, more educated, younger. The factory did not cause their success. Their ambition did.
The Quasi-Experiment
Signature discipline: Education, program evaluation, public policy
The logic: You cannot randomize. You cannot find a perfect natural experiment. But you can still try to estimate causation by comparing groups before and after, or by matching treated and untreated groups as closely as possible.
The recipe:
- Measure the outcome before the treatment in both groups.
- Apply the treatment to one group.
- Measure the outcome after.
- Compare the change in the treated group to the change in the untreated group.
The key assumption: The two groups were changing at the same rate before the treatment. If they were, and then they diverge after, the treatment is the most likely cause.
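The comparison described in the recipe above is the difference-in-differences estimate, and the arithmetic is simple enough to write out (hypothetical test-score numbers for illustration):

```python
def difference_in_differences(treated_before, treated_after,
                              control_before, control_after):
    """DiD estimate: the treated group's change minus the control
    group's change. Valid only under the parallel-trends assumption
    stated above."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical scores: both groups improve, but the treated group
# improves by 12 points against the control group's 5.
print(difference_in_differences(60, 72, 61, 66))  # -> 7
```

Subtracting the control group's change removes any trend that affected both groups alike; what remains is attributed to the treatment.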
Where it works best: When you have good data on both groups before the intervention. When you can argue that the groups are comparable.
The danger: Regression to the mean — extreme scores tend to move toward average even without any treatment. A bad teacher’s class scores improve next year even if the teacher does nothing. A good teacher’s class declines. The change is statistical, not causal.
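Regression to the mean is easy to demonstrate with a simulation (made-up numbers; each class's observed score is true quality plus luck):

```python
import random
import statistics

def regression_to_mean(n=50_000, seed=1):
    """Select the worst-scoring decile of classes in year 1, then
    re-measure them in year 2 with fresh luck: their average rises
    with no intervention at all."""
    rng = random.Random(seed)
    quality = [rng.gauss(70, 5) for _ in range(n)]    # stable true quality
    year1 = [q + rng.gauss(0, 5) for q in quality]    # quality + luck
    cutoff = sorted(year1)[n // 10]                   # bottom 10% in year 1
    worst = [i for i in range(n) if year1[i] <= cutoff]
    year2 = [quality[i] + rng.gauss(0, 5) for i in worst]
    return (statistics.mean(year1[i] for i in worst), statistics.mean(year2))

y1, y2 = regression_to_mean()
print(round(y2 - y1, 1))  # positive: the "worst" improve by luck alone
```

The classes selected as worst were partly unlucky, and luck does not repeat, so their scores drift back toward their true quality. Any program targeted at extreme performers will show this spurious "effect."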
PART 3: THE OBSERVATIONAL FAMILY
The Survey
Signature discipline: Sociology, political science, public health, marketing
The logic: Ask a representative sample of people questions. Generalize from the sample to the whole population.
The recipe:
- Define your population (all adults in the United States, all registered voters, all patients with diabetes).
- Draw a random sample — every person in the population has a known chance of being selected.
- Ask the same questions, in the same order, in the same way, to every respondent.
- Calculate statistics (percentages, averages, correlations) from the sample.
- Use those statistics to estimate the same quantities in the population, with a known margin of error.
The key concept: Random sampling is what makes surveys scientific. If your sample is not random, you cannot generalize with confidence. An online poll of volunteers tells you about the people who chose to respond — not about the population.
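The "known margin of error" mentioned in the recipe comes from a standard formula. For a proportion estimated from a simple random sample, the usual 95% margin of error (normal approximation) is:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# A poll of 1,000 respondents finding 52% support:
print(round(margin_of_error(0.52, 1000) * 100, 1))  # about 3.1 points
```

Notice that the formula contains the sample size but not the population size: a random sample of 1,000 has roughly the same margin of error whether the population is a city or a continent. A non-random sample has no valid margin of error at all.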
Where it works best: Questions of prevalence (how many people have X?), correlation (are X and Y related?), and attitudes (what do people believe?).
The danger: Response bias — the people who answer may be different from those who do not. Question wording — small changes in wording can produce large changes in answers. Social desirability bias — people lie about things that make them look bad (voting, drug use, charitable giving).
The Longitudinal Study
Signature discipline: Epidemiology, developmental psychology, sociology
The logic: Follow the same people over time. Measure them repeatedly. Watch how they change.
The recipe:
- Recruit a sample (sometimes random, sometimes not).
- Measure them at Time 1 (baseline).
- Wait.
- Measure them again at Time 2, Time 3, Time N.
- Analyze who changed, who did not, and what predicts change.
The classic example: The Framingham Heart Study. Begun in 1948, it followed thousands of residents of Framingham, Massachusetts, measuring their health every two years. It discovered the major risk factors for heart disease: smoking, high blood pressure, high cholesterol, sedentary lifestyle.
Where it works best: Questions about development, aging, and the long-term effects of early experiences. Does childhood poverty predict adult disease? Do exercise habits in middle age predict longevity?
The danger: Attrition — people drop out over time. The people who remain may be different from those who leave. Cohort effects — the generation you study may be different from other generations. Findings about Baby Boomers may not apply to Millennials.
The Case Study
Signature discipline: Clinical psychology, medicine, business, law
The logic: Study one person, one organization, one event in extraordinary depth. Use that depth to generate hypotheses, illustrate principles, or challenge general claims.
The recipe:
- Select a case that is typical (to represent a class), extreme (to see the limits), or revelatory (to show something new).
- Gather all available data: interviews, documents, observations, tests.
- Construct a rich, detailed narrative.
- Draw conclusions about this case. Generalize with caution.
The classic example: Freud’s case studies (Dora, the Rat Man, the Wolf Man). They are not generalizable — but they generated hypotheses that shaped a century of psychology.
Where it works best: Rare conditions (a woman with no amygdala), unique events (the fall of Enron), or the early exploration of a new phenomenon (the first COVID-19 patients).
The danger: Selection bias — you chose this case because it was interesting, which means it is probably not typical. Confirmation bias — you find what you are looking for. The problem of generalizability — what can you learn about all humans from studying one?
PART 4: THE INTERPRETIVE FAMILY
Ethnography
Signature discipline: Anthropology, sociology
The logic: Live with people. Learn their language. Participate in their lives. See the world as they see it. Write what you learn.
The recipe:
- Enter the field — the community you want to understand.
- Build relationships. Gain trust.
- Observe. Participate. Ask questions. Listen.
- Take copious notes (field notes).
- Leave the field. Analyze your notes.
- Write an ethnography — a thick description of the culture.
The classic example: Bronisław Malinowski’s study of the Trobriand Islanders (1914–1918). He lived in the village, learned the language, participated in daily life, and transformed anthropology from “armchair speculation” to empirical science.
The key concept: Participant observation — you do not just watch from a distance. You join in. You become a student of the culture, not a judge.
Where it works best: Understanding how people make meaning. How does a religious community actually practice its faith? How do workers in a factory experience their labor? What is it like to be a member of a street gang?
The danger: Going native — you become so identified with the community that you lose critical distance. Observer effect — your presence changes what you are studying. Positionality — your own identity (race, gender, class, nationality) shapes what you can see and what people will tell you.
The Interview Study
Signature discipline: Sociology, psychology, education, oral history
The logic: Ask people open-ended questions. Listen to their stories. Find patterns in how they talk about their lives.
The recipe:
- Recruit participants (purposively, not randomly — you want people who have experienced the phenomenon you are studying).
- Conduct semi-structured interviews. You have questions in mind, but you follow the participant’s lead.
- Record and transcribe.
- Code the transcripts for themes.
- Write an analysis that presents those themes, with quotes as evidence.
The key concept: Saturation — you interview until new interviews stop revealing new themes. There is no magic number (10? 30? 100?). It depends on the diversity of the population and the depth of the question.
Where it works best: Understanding lived experience. What does it feel like to be diagnosed with cancer? How do undocumented immigrants navigate daily life? What is the experience of being a first-generation college student?
The danger: Memory bias — people do not remember the past accurately; they reconstruct it in light of the present. Social desirability — people tell you what they think you want to hear. The problem of the lone researcher — one person’s coding may be idiosyncratic. (Solution: have multiple coders and check reliability.)
Focus Groups
Signature discipline: Marketing, public health, political science
The logic: Bring a small group of people together. Ask them to discuss a topic. Watch how they interact. See what emerges.
The recipe:
- Recruit 6–10 people who share a relevant characteristic (all are parents of young children, all are voters in a swing district).
- A moderator guides the discussion with open-ended questions.
- The group talks. They respond to each other, not just to the moderator.
- Record and analyze.
The key insight: Groups produce different data than individuals. In a focus group, people remember things they had forgotten, change their minds in response to others, and reveal social norms through their interactions.
Where it works best: Exploring a new topic (you do not know what questions to ask yet), testing messages (how do people react to this ad?), understanding group dynamics (how do teenagers talk about vaping?).
The danger: Groupthink — one dominant voice shapes the whole discussion. The moderator effect — participants try to please the moderator. Not generalizable — a few focus groups do not represent the population.
PART 5: THE HISTORICAL FAMILY
Archival Research
Signature discipline: History, political science, sociology
The logic: Go to the archives. Find the documents. Read them. Piece together what happened and why.
The recipe:
- Identify your research question.
- Identify relevant archives (government archives, university special collections, corporate records, personal papers).
- Request boxes. Read. Take notes. Photograph documents.
- Corroborate across sources. No single document is trustworthy.
- Construct a narrative based on the weight of evidence.
The key concept: Primary sources — documents created at the time under study. They are not “truer” than secondary sources (they have their own biases), but they are closer to the event.
Where it works best: Questions about the past that cannot be studied any other way. What did Nixon know about Watergate and when did he know it? How was the Marshall Plan designed? What did colonial administrators write to each other about indigenous resistance?
The danger: Survivorship bias — the documents that survive are not a random sample. The boring files get thrown away. The scandalous files get burned. The powerful leave records; the poor leave none. The archive as a constructed space — archives are not neutral. Someone decided what to keep, what to destroy, and how to organize what remains.
Oral History
Signature discipline: History, sociology, anthropology
The logic: Find people who were there. Interview them about their memories. Record their stories before they die.
The recipe:
- Identify potential narrators (people with firsthand experience of the event or period you are studying).
- Conduct open-ended interviews, often over multiple sessions.
- Record and transcribe.
- Deposit the recordings in an archive for future researchers.
- Analyze the interviews as historical sources — not as “truth” but as memory, which has its own logic.
The key insight: Memory is not a video recording. It is reconstructed, shaped by later experience, and influenced by social context. Oral history tells you what people remember and choose to tell — which is itself valuable data about identity, narrative, and commemoration.
Where it works best: Studying groups that left few written records (workers, women, enslaved people, Indigenous communities). Studying recent events where documents are still classified. Studying how people make meaning of their own lives.
The danger: Memory decay — people forget. Post-hoc rationalization — people tell stories that make sense of chaos. The politics of memory — people remember in ways that serve their current identity or interests.
Comparative Historical Analysis
Signature discipline: Political science, sociology, history
The logic: Compare a small number of cases (often countries or revolutions) in depth. Look for patterns. Infer causes by comparing similarities and differences.
The recipe:
- Select cases that vary on the outcome you want to explain (e.g., revolutions succeeded in France in 1789 and Russia in 1917 but failed in Germany and Hungary in 1848).
- Select cases that vary on potential causes (e.g., state strength, class structure, international pressure).
- Use Mill’s methods:
- The method of agreement — if all the cases with the outcome share a common factor, that factor may be a cause.
- The method of difference — if the cases with the outcome have a factor that the cases without the outcome lack, that factor may be a cause.
- Process-trace: within each case, look for evidence of causal mechanisms, not just correlations.
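Mill's methods amount to set logic, and a deliberately toy encoding makes the mechanics concrete (the cases and factor names below are invented for illustration, not drawn from any real comparative study):

```python
def method_of_agreement(cases_with_outcome):
    """Factors present in every case that shows the outcome.
    Each case is encoded as a set of factor names."""
    return set.intersection(*cases_with_outcome)

def method_of_difference(cases_with_outcome, cases_without_outcome):
    """Factors present in all positive cases and absent from all negatives."""
    shared = method_of_agreement(cases_with_outcome)
    return shared - set.union(*cases_without_outcome)

# Hypothetical cases: two revolutions, one non-revolution.
revolutions = [{"weak state", "peasant unrest", "war defeat"},
               {"weak state", "peasant unrest", "fiscal crisis"}]
non_revolutions = [{"peasant unrest", "strong state"}]

print(method_of_agreement(revolutions))                   # shared factors
print(method_of_difference(revolutions, non_revolutions)) # candidate cause
```

Peasant unrest survives the method of agreement but fails the method of difference, because it also appears in the negative case; only state weakness remains as a candidate cause. Real comparative work is far messier, which is why process-tracing within cases is needed as a check.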
The classic example: Barrington Moore’s Social Origins of Dictatorship and Democracy (1966). He compared England, France, the United States, China, Japan, and India. He argued that the relationship between the landed aristocracy and the peasantry determined whether a country became democratic, fascist, or communist.
Where it works best: Big, rare events. You cannot run an experiment on revolutions. You have only a handful of cases. Comparative historical analysis is the best you can do.
The danger: Too few cases, too many variables — the classic small-N problem. Selection bias — you chose cases based on the outcome, which can bias your conclusions. Confirmation bias — you find what you are looking for in the archives.
PART 6: THE FORMAL FAMILY
Mathematical Proof
Signature discipline: Mathematics, logic
The logic: Start with axioms (self-evident truths or stipulated assumptions). Apply rules of inference. Derive theorems. If the axioms are true and the rules are valid, the theorem is necessarily true.
The recipe:
- State your axioms.
- Define your terms.
- Apply logical rules step by step.
- Reach a conclusion.
- The conclusion is deductively valid — it must be true if the axioms are true.
The classic example: Euclid’s proof that there are infinitely many prime numbers. Assume there are finitely many. Multiply them all and add one. The result is either prime (contradiction) or has a prime factor not in the list (contradiction). Therefore, there are infinitely many primes.
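Euclid's construction can even be run: given any finite list of primes, the product-plus-one always yields a prime outside the list (the code finds it by trial division, which is fine for small inputs):

```python
def smallest_prime_factor(n):
    """Smallest prime factor of n >= 2, by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def euclid_witness(primes):
    """Given a finite list of primes, return a prime not in the list:
    a prime factor of (product of the list) + 1."""
    product = 1
    for p in primes:
        product *= p
    return smallest_prime_factor(product + 1)

print(euclid_witness([2, 3, 5, 7]))  # 2*3*5*7 + 1 = 211, itself prime
```

The witness is not always prime-plus-one itself: for [2, 3, 5, 7, 11, 13] the product plus one is 30031 = 59 × 509, and the returned witness is 59. Either way, no prime in the original list can divide the product plus one (each leaves remainder 1), which is exactly the contradiction the proof exploits.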
Where it works best: Formal systems. Mathematical truths are necessary — they could not be otherwise. They do not depend on empirical observation.
The danger: Garbage in, garbage out — if your axioms are false or your definitions are poor, your conclusions may be valid but useless. Gödel’s incompleteness theorems — any consistent system strong enough to do arithmetic cannot prove its own consistency. There are true statements that cannot be proved within the system.
Computer Simulation
Signature discipline: Physics, biology, economics, climate science
The logic: Write a program that models a real-world system. Run the program. See what happens. The simulation is your laboratory.
The recipe:
- Identify the entities in your system (atoms, cells, people, firms).
- Specify the rules that govern their behavior (physics, biology, psychology, economics).
- Write code that implements those rules.
- Initialize the system with starting conditions.
- Run the simulation. Watch it evolve.
- Compare simulation outputs to real-world data. If they match, your model may be correct (or may be a lucky accident).
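The steps above can be sketched with a deliberately tiny model: a discrete-time SIR epidemic (entities are people in three compartments; the rules are infection and recovery rates; the parameters below are illustrative, not fitted to any real disease):

```python
def sir_simulation(population=1000, infected=1, beta=0.3, gamma=0.1, days=200):
    """Minimal discrete-time SIR epidemic: susceptibles are infected at
    rate beta * S * I / N per day; infecteds recover at rate gamma."""
    s, i, r = population - infected, infected, 0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak, r

peak, total_recovered = sir_simulation()
print(round(peak), round(total_recovered))
```

Even this ten-line model exhibits the workflow in miniature: vary beta (say, to represent a lockdown), re-run, and compare the peak. Real simulations differ in scale, not in kind, and carry the same obligation to test outputs against real data.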
The classic example: Climate models. They simulate the atmosphere, oceans, ice, and land. They incorporate the laws of physics and chemistry. They are run forward in time to predict future climate. They are tested against past climate data. They are the only way to predict the effects of greenhouse gas emissions.
Where it works best: Systems that are too complex for analytic solution (no equation will give you an answer), too slow or fast for direct observation (geological time, molecular dynamics), or too dangerous to experiment on (nuclear explosions, pandemics).
The danger: Garbage in, garbage out — the simulation is only as good as its assumptions. Overfitting — you can tune your model to match past data but fail to predict the future. The illusion of control — a simulation is not reality. Policy based on simulation alone is risky.
PART 7: THE CLINICAL FAMILY
Differential Diagnosis
Signature discipline: Medicine, clinical psychology
The logic: A patient presents with symptoms. Many diseases could cause those symptoms. Gather evidence to rule some out and rule one in.
The recipe:
- Take a history.
- Perform an examination.
- Generate a list of possible diagnoses (the differential).
- Order tests to rule out possibilities.
- Refine the list.
- Arrive at a working diagnosis.
- Treat. See if the patient improves. Revise if needed.
The key concept: Bayesian reasoning — the probability of a diagnosis depends on the base rate (how common is this disease?) and the test characteristics (sensitivity and specificity). A positive test for a rare disease is more likely to be a false positive than a true positive.
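The claim about rare diseases follows directly from Bayes' theorem, and the arithmetic is worth seeing (the disease prevalence and test characteristics below are hypothetical round numbers):

```python
def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = base_rate * sensitivity            # sick and correctly flagged
    false_pos = (1 - base_rate) * (1 - specificity)  # healthy but flagged
    return true_pos / (true_pos + false_pos)

# Hypothetical numbers: a rare disease (1 in 1,000) and a good test
# (99% sensitive, 95% specific). Most positives are still false.
ppv = positive_predictive_value(0.001, 0.99, 0.95)
print(round(ppv, 3))  # about 0.019: roughly a 2% chance of disease
```

With a 1-in-1,000 disease, the 5% false-positive rate among the healthy majority swamps the true positives among the sick few, so a positive result raises the probability of disease only from 0.1% to about 2%. This is why base rates belong at the start of every differential.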
Where it works best: The practice of medicine. Every patient is unique. The goal is not to discover a general law but to help this particular person.
The danger: Availability bias — you diagnose what you have seen recently. Anchoring — you latch onto the first plausible diagnosis and ignore evidence against it. Overconfidence — you stop searching too soon.
The N-of-1 Trial
Signature discipline: Medicine (especially chronic conditions)
The logic: The patient is the only subject. Test treatments on that one patient, systematically. Find what works for this person.
The recipe:
- The patient tries Treatment A for a period.
- Then Treatment B for a period.
- Then back to A (washout).
- The patient (and sometimes the doctor) does not know which treatment is which (blinding).
- Measure outcomes systematically.
- Determine which treatment works best for this patient.
Where it works best: Chronic conditions where treatments have variable effects (pain, insomnia, depression, arthritis). The average effect from a large trial may not apply to you.
The danger: Time-consuming — each N-of-1 trial takes weeks or months. Not generalizable — what works for you may not work for anyone else. But that is the point.
PART 8: THE CRITICAL FAMILY
Discourse Analysis
Signature discipline: Linguistics, sociology, cultural studies
The logic: Language is not neutral. It shapes what can be thought, who has authority, and what is taken for granted. Analyze texts, speeches, and conversations to reveal hidden power.
The recipe:
- Collect texts (political speeches, news articles, therapy sessions, corporate reports).
- Identify patterns: recurring words, metaphors, grammatical constructions, narrative structures.
- Ask: What assumptions are built into this way of speaking? Who is included? Who is excluded? What is treated as natural or inevitable?
- Connect linguistic patterns to social power.
The classic example: Critical discourse analysis of news coverage of protests. Activists are “protesters” when you agree with them; “rioters” or “looters” when you do not. The language does not describe reality neutrally — it constructs reality.
Where it works best: Questions about ideology, power, and taken-for-granted assumptions. How does corporate language depoliticize environmental destruction? How does medical language turn social problems (poverty, racism) into individual pathologies?
The danger: Confirmation bias — you find what you are looking for. The problem of evidence — a single metaphor can be interpreted in many ways. The political temptation — discourse analysis can become a tool for asserting, not discovering.
Deconstruction
Signature discipline: Philosophy, literary theory
The logic: Every text has hidden contradictions. It says one thing but assumes another. It tries to establish a stable meaning but reveals its own instability. Deconstruction finds the fault lines.
The recipe:
- Identify the binary oppositions in a text (nature/culture, speech/writing, male/female, presence/absence).
- Show that the opposition is not stable. Each term depends on the other for its meaning. The “privileged” term contains traces of the “devalued” term.
- Do not reverse the hierarchy (that would just create a new binary). Instead, show that the opposition cannot be maintained.
- The text deconstructs itself — you are just pointing it out.
The classic example: Jacques Derrida’s reading of Rousseau. Rousseau celebrates nature over culture, speech over writing. But his argument against writing is itself written. He uses writing to condemn writing. The text contradicts itself.
Where it works best: Philosophical texts that claim to have found firm foundations. Literary texts that invite multiple readings.
The danger: Nihilism — if everything deconstructs, nothing is stable, and no argument can be made. Obscurantism — deconstruction can be a performance of difficulty that impresses but does not illuminate. The retreat from evidence — if all readings are equally valid, why do research at all?
PART 9: CHOOSING THE RIGHT METHOD – A DECISION GUIDE
You have a question. How do you choose the method?
Start here: What kind of question are you asking?
- Causal question: Does X cause Y? → Experimental family. Randomized controlled trial if possible. Natural experiment or quasi-experiment if not.
- Descriptive question: How much X is there? → Survey family. Random sample. Measure with care.
- Developmental question: How do people change over time? → Longitudinal study. Follow the same people repeatedly.
- Meaning question: What does X mean to the people who experience it? → Interpretive family. Ethnography, interview study, oral history.
- Historical question: What happened in the past? → Historical family. Archival research. Oral history. Comparative historical analysis.
- Formal question: What can be proved given these assumptions? → Formal family. Mathematical proof. Computer simulation.
- Clinical question: What is wrong with this specific person or system? → Clinical family. Differential diagnosis. N-of-1 trial.
- Critical question: How does power operate through discourse? → Critical family. Discourse analysis. Deconstruction.
Second: What is your relationship to the phenomenon?
- You can manipulate it → Experiment.
- You cannot manipulate it, but you can observe it systematically → Observational family.
- You can participate in it → Ethnography.
- You can only read about it → Archival research or discourse analysis.
Third: What is your tolerance for uncertainty?
- Low (you need a definitive answer) → Mathematical proof (but only for formal questions). Randomized controlled trial (but only for causal questions you can manipulate).
- Medium (you want the best estimate) → Survey with random sampling. Longitudinal study.
- High (you are generating hypotheses, not testing them) → Case study. Ethnography. Oral history.
Fourth: What resources do you have?
- Time: Years → Longitudinal study. Ethnography.
- Time: Months → Survey. Archival research.
- Time: Days → Focus group. Small-N comparative analysis.
- Money: Lots → Large RCT. Multi-site ethnography.
- Money: Little → Secondary data analysis. Single case study.
Fifth: What are the ethical constraints?
- You cannot randomize people to potentially harmful conditions → Natural experiment.
- You cannot experiment on people at all → Observational or historical methods.
- You must respect privacy and consent → All methods have ethical requirements. Some (deception in experiments) require special justification.
PART 10: THE METHODOLOGICAL TRADE-OFFS
No method is perfect. Every strength comes with a corresponding weakness.
Internal validity (confidence that X caused Y) is highest in randomized experiments. But experiments have low external validity (the lab is not the real world).
External validity (generalizability) is highest in random-sample surveys. But surveys have low internal validity (correlation is not causation).
Depth (rich understanding of a single case) is highest in ethnography and case studies. But depth comes at the cost of breadth (you cannot generalize).
Breadth (coverage of many cases) is highest in large-N surveys and statistical analyses. But breadth comes at the cost of depth (you know a little about many people, not a lot about a few).
Objectivity (minimizing the researcher’s influence) is highest in double-blind experiments. But objectivity comes at the cost of relevance (artificial settings may not capture real life).
Relevance (capturing real life) is highest in ethnography and naturalistic observation. But relevance comes at the cost of objectivity (the researcher is part of the phenomenon).
Precision (exact measurement) is highest in physics and chemistry. But precision comes at the cost of scope (you can measure simple systems precisely, complex systems poorly).
Scope (capturing complexity) is highest in qualitative and interpretive methods. But scope comes at the cost of precision (rich description is not exact measurement).
You cannot have it all. The art of research is choosing the trade-off that best fits your question.
AFTERWORD: THE METHODOLOGICAL PLURALISM OF THIS ENCYCLOPEDIA
This encyclopedia includes physics and poetry, randomized trials and oral histories, mathematical proofs and deconstruction. It does not choose one method as “the best.” It insists that different questions require different tools.
The wise reader is methodologically pluralist:
- You trust a randomized controlled trial for drug efficacy.
- You trust an ethnography for understanding a religious community.
- You trust a mathematical proof for the infinitude of primes.
- You trust an oral history for the experience of war.
- You trust a survey for the unemployment rate.
- You trust a differential diagnosis for your chest pain.
And you do not ask any of these methods to do what they cannot do.
The most important sentence in this entire appendix is this one:
The right method is the one that fits the question.
Everything else is detail.