{"id":2218,"date":"2025-11-12T12:36:21","date_gmt":"2025-11-12T13:36:21","guid":{"rendered":"https:\/\/armotto.com\/?p=2218"},"modified":"2025-11-17T15:53:00","modified_gmt":"2025-11-17T15:53:00","slug":"new-technologies-like-ai-come-with-big-claims-borrowing-the-scientific-concept-of-validity-can-help-cut-through-the-hype","status":"publish","type":"post","link":"https:\/\/armotto.com\/index.php\/2025\/11\/12\/new-technologies-like-ai-come-with-big-claims-borrowing-the-scientific-concept-of-validity-can-help-cut-through-the-hype\/","title":{"rendered":"New technologies like AI come with big claims \u2013 borrowing the scientific concept of validity can help cut through the hype"},"content":{"rendered":"
[Image: Closely examining the claims companies make about a product can help you separate hype from reality. Flavio Coelho/Moment via Getty Images]

Technological innovations can seem relentless. In computing, some have proclaimed that “a year in machine learning is a century in any other field.” But how do you know whether those advancements are hype or reality?

Failures quickly multiply when there’s a deluge of new technology, especially when these developments haven’t been properly tested or fully understood. Even technological innovations from trusted labs and organizations sometimes result in spectacular failures. Think of IBM Watson, an AI program the company hailed as a revolutionary tool for cancer treatment in 2011. However, rather than evaluating the tool based on patient outcomes, IBM relied on less relevant – possibly even irrelevant – measures, such as expert ratings. As a result, IBM Watson not only failed to offer doctors reliable and innovative treatment recommendations, it also suggested harmful ones.

When ChatGPT was released in November 2022, interest in AI expanded rapidly across industry and in science, alongside ballooning claims of its efficacy. But as the vast majority of companies see their attempts at incorporating generative AI fail, questions about whether the technology does what developers promised are coming to the fore.

\n \"Black<\/a>
\n IBM Watson wowed on Jeopardy, but not in the clinic.<\/span>
\n
AP Photo\/Seth Wenig<\/a><\/span>
\n <\/figcaption><\/figure>\n

In a world of rapid technological change, a pressing question arises: How can people determine whether a new technological marvel genuinely works and is safe to use?

Borrowing from the language of science, this question is really about validity – that is, the soundness, trustworthiness and dependability of a claim. Validity is the ultimate verdict on whether a scientific claim accurately reflects reality. Think of it as quality control for science: It helps researchers know whether a medication really cures a disease, a health-tracking app truly improves fitness, or a model of a black hole genuinely describes how it behaves in space.

How to evaluate validity for new technologies and innovations has been unclear, in part because science has mostly focused on validating claims about the natural world.

In our work as researchers who study how to evaluate science across disciplines, we developed a framework to assess the validity of any design, be it a new technology or a policy. We believe setting clear and consistent standards for validity, and learning how to assess it, can empower people to make informed decisions about technology – and determine whether a new technology will truly deliver on its promise.

Validity is the bedrock of knowledge

Historically, validity was primarily concerned with ensuring the precision of scientific measurements, such as whether a thermometer correctly measures temperature or a psychological test accurately assesses anxiety. Over time, it became clear that there is more than one kind of validity.

Different scientific fields have their own ways of evaluating validity. Engineers test new designs against safety and performance standards. Medical researchers use controlled experiments to verify that treatments are more effective than existing options.

Researchers across fields use different types of validity, depending on the kind of claim they’re making.

Internal validity asks whether the relationship between two variables is truly causal. A medical researcher, for instance, might run a randomized controlled trial to be sure that the new drug – rather than some other factor, such as the placebo effect – is what led patients to recover.
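To see why randomization matters for internal validity, consider a minimal Python sketch with made-up numbers, not data from any real trial. When healthier patients self-select into treatment, a naive comparison inflates the drug’s apparent effect; assigning treatment by coin flip recovers the true effect.

```python
# Illustrative simulation (hypothetical numbers): confounding vs. randomization.
import random

random.seed(0)

def recovery(treated, baseline_health):
    # The drug's true effect is +0.2; baseline health also helps recovery.
    p = 0.3 + 0.3 * baseline_health + (0.2 if treated else 0.0)
    return random.random() < p

def confounded_study(n=10_000):
    # Healthier patients are more likely to seek out the new drug,
    # so treatment assignment is entangled with a hidden factor.
    results = {"treated": [], "control": []}
    for _ in range(n):
        health = random.random()
        treated = random.random() < health  # self-selection
        results["treated" if treated else "control"].append(recovery(treated, health))
    return results

def randomized_trial(n=10_000):
    # A coin flip decides treatment, breaking the link to baseline health.
    results = {"treated": [], "control": []}
    for _ in range(n):
        health = random.random()
        treated = random.random() < 0.5  # randomization
        results["treated" if treated else "control"].append(recovery(treated, health))
    return results

def estimated_effect(results):
    rate = lambda xs: sum(xs) / len(xs)
    return rate(results["treated"]) - rate(results["control"])

print(f"confounded estimate: {estimated_effect(confounded_study()):+.2f}")   # ~+0.30, inflated
print(f"randomized estimate: {estimated_effect(randomized_trial()):+.2f}")   # ~+0.20, the true effect
```

In this toy setup the confounded comparison overstates the benefit by roughly half, because the treated group was healthier to begin with; randomization removes that distortion.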

External validity is about generalization – whether those results would still hold outside the lab or in a broader or different population. A classic example of low external validity is that treatments that work in early studies in mice don’t always translate to people.

Construct validity, on the other hand, is about meaning. Psychologists and social scientists rely on it when they ask whether a test or survey really captures the idea it’s supposed to measure. Does a grit scale actually reflect perseverance, or just stubbornness?
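One rough way to probe construct validity is to check what a score actually tracks. Here is an illustrative Python sketch with entirely synthetic respondents, in which a hypothetical grit questionnaire turns out to correlate far more with stubbornness than with the perseverance it claims to measure; all names and numbers are invented for the example.

```python
# Illustrative construct-validity check on synthetic data.
import random
import statistics

random.seed(2)

def correlation(xs, ys):
    # Pearson correlation coefficient.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

# Synthetic respondents: the questionnaire unintentionally loads mostly
# on stubbornness rather than perseverance.
perseverance = [random.gauss(0, 1) for _ in range(1_000)]
stubbornness = [random.gauss(0, 1) for _ in range(1_000)]
grit_score = [0.2 * p + 0.8 * s + random.gauss(0, 0.3)
              for p, s in zip(perseverance, stubbornness)]

print(f"grit vs. perseverance: r = {correlation(grit_score, perseverance):.2f}")  # weak (~0.2)
print(f"grit vs. stubbornness: r = {correlation(grit_score, stubbornness):.2f}")  # strong (~0.9)
```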

Finally, ecological validity asks whether something works in the real world rather than just under ideal lab conditions. A behavioral model or AI system might perform brilliantly in simulation but fail once human behavior, noisy data or institutional complexity enter the picture.
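The gap that ecological validity targets can be made concrete with a small sketch on entirely synthetic data: a classifier that latched onto a “shortcut” feature present only in the lab scores almost perfectly there, then collapses to chance in the field.

```python
# Illustrative simulation: a shortcut that exists only under lab conditions.
import random

random.seed(1)

def make_example(in_lab):
    label = random.random() < 0.5
    # Real signal: weakly informative in both settings.
    signal = (1.0 if label else 0.0) + random.gauss(0, 1.0)
    # Lab artifact: perfectly tracks the label, but only inside the lab.
    artifact = (1.0 if label else 0.0) if in_lab else random.random()
    return signal, artifact, label

def shortcut_model(signal, artifact):
    # A model that learned to lean on the artifact during development.
    return artifact > 0.5

def accuracy(in_lab, n=10_000):
    correct = 0
    for _ in range(n):
        signal, artifact, label = make_example(in_lab)
        correct += shortcut_model(signal, artifact) == label
    return correct / n

print(f"lab accuracy:   {accuracy(in_lab=True):.1%}")   # ~100%
print(f"field accuracy: {accuracy(in_lab=False):.1%}")  # ~50%, a coin flip
```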

Across all these types of validity, the goal is the same: ensuring that scientific tools – from lab experiments to algorithms – connect faithfully to the reality they aim to explain.

Evaluating technology claims

We developed a method to help researchers across disciplines clearly test the reliability and effectiveness of their inventions and theories. The design science validity framework identifies three critical kinds of claims researchers usually make about the utility of a technology, innovation, theory, model or method.

First, a criterion claim asserts that a discovery delivers beneficial outcomes, typically by outperforming current standards. These claims justify the technology’s utility by showing clear advantages over existing alternatives.

For example, developers of generative AI models such as ChatGPT may see higher engagement the more the technology flatters and agrees with the user. As a result, they may program it to be more affirming – a trait called sycophancy – in order to increase user retention. Such models can satisfy a criterion claim: users find them more affirming than talking to people. But that does little to improve the technology’s efficacy at tasks such as helping resolve mental health issues or relationship problems.
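The point generalizes: which criterion you score against can flip the verdict. Below is a tiny illustrative sketch with invented scores and model names, where a sycophantic model “wins” under an engagement criterion but loses under an efficacy one.

```python
# Hypothetical evaluation scores, invented for illustration only.
models = {
    "sycophantic_model": {"user_retention": 0.82, "problem_resolution": 0.41},
    "candid_model":      {"user_retention": 0.64, "problem_resolution": 0.73},
}

def best_by(criterion: str) -> str:
    # The "best" model is simply whichever maximizes the chosen criterion.
    return max(models, key=lambda name: models[name][criterion])

print("winner by engagement:", best_by("user_retention"))      # sycophantic_model
print("winner by efficacy:  ", best_by("problem_resolution"))  # candid_model
```

A criterion claim is only as meaningful as the criterion behind it, which is why the framework asks what outcome a technology is being judged against before asking how well it scores.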
