
Illustration: Midjourney
Every six months, dentists insist on X-rays. Patients sit still, bite down on that uncomfortable plastic tab, and assume that someone, somewhere, has determined the X-ray is worth doing, and worth doing ahead of whatever other tests might be run instead. But has anyone?
Not every test can be run. Some information-gathering activities are costly, time-consuming, or even destructive — meaning teams have to choose which ones matter most. How do organizations decide which tests to prioritize? And have they been doing that math properly?
These questions sit at the heart of research that just won one of the top awards in decision analysis, and the answer, according to USC engineering professor Ali Abbas and his co-author Gordon Hazen of Northwestern University, is that most organizations may have been doing that math wrong for decades.
“Most people today use the expected utility increase, not the buying price of information to determine the value of a test,” Abbas said. “Partly because it’s simpler to calculate. But simplicity is not an excuse for making a bad decision.”
The Problem Nobody Noticed
Abbas, a professor in the Daniel J. Epstein Department of Industrial and Systems Engineering at USC’s Viterbi School of Engineering, wrote the paper with Hazen; it has just received the Clemen-Kleinmuntz Decision Analysis Best Paper Award from INFORMS, the leading professional society for operations research.
The paper tackles a problem that has existed quietly in the field since the 1960s, when two schools of thought proposed different ways to measure what information is worth. One camp argued that the value of information should be measured by how much it improves the expected utility of the decision at hand. The other camp argued that the right measure is more intuitive: what is the maximum you would actually pay to receive that information? This is known as the buying price of information.
For years, researchers used these two approaches interchangeably, assuming they were close enough to produce the same results. That assumption, it turns out, was only half right.
When Context Corrupts the Calculation
Picture a large oil company with drilling teams scattered across a dozen countries. Each team wants to gather information independently before committing to a drill site: geological surveys, soil samples, data on what might be underground. Information costs money, so headquarters needs to prioritize. Which teams get the green light?
Each team runs the numbers and submits a score to headquarters reflecting how much its survey is worth. The teams may even use the same organizational utility function, since each ultimately contributes to the company’s bottom line. But here is the hidden problem: each team is doing its math in isolation, shaped by its own local stakes, its own risk levels, its own starting conditions. The score each produces reflects its own world, not a shared one.
“If organizations use the wrong measure, they will be spending money on the wrong information-gathering projects,” Abbas said. “What will end up happening is that these organizations will lose significant dollars.”
The buying price of information fixes this by asking each division one concrete question: what is the most you would actually pay for this information before making your decision? That number is rooted in real dollars and real consequences, and it does not shift depending on whether the calculation is conducted at the team or organizational level. When every team answers that same question, headquarters can finally compare them in a consistent way. The right surveys get funded.
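To see how the two measures can pull apart, here is a minimal sketch of the scenario in Python. The numbers, the exponential corporate utility function, the risk tolerance, and the two hypothetical divisions are all illustrative assumptions, not figures from the paper. Each division can buy a perfect geological survey before deciding whether to drill, and the same prospect is scored both ways: as an expected utility increase and as a buying price.

```python
import math

R = 50.0  # corporate risk tolerance in $M (illustrative)

def u(x):
    """Exponential (risk-averse) corporate utility function."""
    return -math.exp(-x / R)

def eu_without_test(w, p, gain, loss):
    """Best expected utility when a division decides without the survey."""
    drill = p * u(w + gain) + (1 - p) * u(w - loss)
    walk = u(w)
    return max(drill, walk)

def eu_with_test(w, p, gain, loss, price=0.0):
    """Expected utility with a perfect survey in hand, after paying `price` for it."""
    if_oil = max(u(w + gain - price), u(w - price))
    if_dry = max(u(w - loss - price), u(w - price))
    return p * if_oil + (1 - p) * if_dry

def utility_increase(w, p, gain, loss):
    """Score the survey as an expected utility increase."""
    return eu_with_test(w, p, gain, loss) - eu_without_test(w, p, gain, loss)

def buying_price(w, p, gain, loss):
    """Score the survey as the largest price at which it is still worth buying (bisection)."""
    target = eu_without_test(w, p, gain, loss)
    lo, hi = 0.0, gain
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if eu_with_test(w, p, gain, loss, mid) >= target else (lo, mid)
    return lo

# Division A: a modest prospect, scored from a neutral baseline position.
# Division B: a somewhat richer prospect, scored while sitting on $80M of prior gains.
divisions = {
    "A": dict(w=0.0, p=0.3, gain=100.0, loss=30.0),
    "B": dict(w=80.0, p=0.3, gain=120.0, loss=40.0),
}

for name, d in divisions.items():
    print(f"Division {name}: utility increase = {utility_increase(**d):.3f}, "
          f"buying price = ${buying_price(**d):.1f}M")
```

In this sketch the expected-utility-increase score puts Division A’s survey well ahead (roughly 0.26 versus 0.06), simply because Division B’s stronger starting position mutes every utility number it reports, while the buying price ranks Division B’s survey higher (roughly $15.9M versus $15.0M). Headquarters would fund different surveys depending on which score it asks for, even though both divisions used the same corporate utility function.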
From Oil Fields to Operating Rooms
The stakes go well beyond oil and gas. Abbas points to hospitals deciding which diagnostic tests to order, universities weighing which surveys to run, and government agencies like the TSA allocating security resources. He co-wrote a separate paper with the TSA’s Chief Risk Officer quantifying the value of security screening programs, and he sees the same logic applying everywhere from healthcare to housing policy to crime prevention.

“Medical tests, medical diagnosis. How do you prioritize which information test to run?” Abbas said.
The dental X-ray is not as far-fetched an example as it sounds. A dentist’s office may be billing insurance for a biannual X-ray without any systematic framework for determining whether that test is actually warranted, or whether another test or procedure should be conducted instead. Abbas is not saying the X-ray is unnecessary. He is saying that organizations need a sound method to prioritize testing and information-gathering activities, and most do not have one.
A 60-Year Assumption, Quietly Corrected
What makes this finding land hard is how long the field went without attempting to reconcile the two approaches. Abbas and Hazen proved formally that the two methods only agree when an organization is perfectly risk-neutral. In practice, many organizations may exhibit some form of risk aversion, particularly in high-stakes decisions, and that is precisely where the two approaches diverge.
The fix is straightforward: ask each division what it would pay for the information, using the corporate utility function, rather than asking for its expected utility increase. Those numbers travel cleanly up the chain of command. The expected utility increase scores do not. It is a small methodological shift with large consequences, hiding in plain sight for sixty years.
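As a quick illustration of that risk-neutral special case, the same kind of prospect can be rescored with a plain linear dollar value in place of the utility function; the numbers below are again made up for illustration. Without risk aversion, the expected-value improvement the survey delivers and the most a division would pay for it collapse into the same figure.

```python
# One division's prospect (illustrative numbers), scored with a risk-neutral, linear value function.
p, gain, loss = 0.3, 100.0, 30.0  # chance of oil, payoff if oil ($M), loss if dry ($M)

best_without_test = max(p * gain - (1 - p) * loss, 0.0)  # drill ($9M expected) beats walking away ($0)
best_with_test = p * gain                                 # with a perfect survey: drill only on oil, $30M expected

# Buying price: the fee at which the division is exactly indifferent about buying the survey.
lo, hi = 0.0, gain
for _ in range(60):
    fee = (lo + hi) / 2
    ev_after_fee = p * (gain - fee) + (1 - p) * (0.0 - fee)  # drill on oil, walk away when dry, fee paid either way
    lo, hi = (fee, hi) if ev_after_fee >= best_without_test else (lo, fee)

print(best_with_test - best_without_test, round(lo, 1))  # expected-value gain and buying price: both 21.0
```

Once risk aversion enters, as in the earlier sketch, the utility-based score and the dollar-based buying price stop telling the same story, and only the latter can be compared cleanly across divisions.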
When Money Is Not the Measure
The next frontier, Abbas argues, is one the field has largely ignored. Once organizations get the method right, they still need to agree on what their outcomes are actually worth, and that gets complicated fast when the outcomes are not purely financial.
How do you quantify the value of a security screening program, a public health intervention survey, or a housing policy?
“If there are other attributes involved, such as health, security, or efficiency — how do we come up with a meaningful value measure? This is precisely the question I addressed with the Chief Risk Officer at the TSA,” Abbas said. “We need to spend more time on the value measure, or the price tag by which we value outcomes that are not only monetary.”
Without it, organizations may keep making decisions with the right tool pointed in the wrong direction.
Published on March 18th, 2026
Last updated on March 24th, 2026

