Invariant Transformation and Resampling based Epistemic-Uncertainty Reduction

#Epistemic uncertainty #Invariant transformation #Resampling #AI inference accuracy #Machine learning #Model optimization #Aleatoric uncertainty

📌 Key Takeaways

  • Sha Hu developed a novel method to reduce epistemic uncertainty in AI models
  • The approach uses invariant transformations and resampling techniques
  • This method can improve inference accuracy without increasing model size
  • The research addresses both aleatoric and epistemic uncertainties in AI systems
  • The paper was published on arXiv on February 26, 2026

📖 Full Retelling

On February 26, 2026, researcher Sha Hu published a paper on arXiv proposing a method to reduce epistemic uncertainty in artificial intelligence models through invariant transformation and resampling, addressing the persistent challenge of inference errors in even well-trained AI systems.

An AI model, however well designed and thoroughly trained, can still produce inference errors due to two types of uncertainty: aleatoric uncertainty (inherent randomness in the data) and epistemic uncertainty (uncertainty in the model's parameters). Hu's key observation is that when multiple samples derived from invariant transformations of an input are inferred, the resulting errors exhibit partial independence attributable to epistemic uncertainty. Leveraging this insight, the paper proposes a 'resampling'-based inference method that applies to an already trained model: create multiple transformed versions of an input, run inference on each, and aggregate the outputs into a more accurate result. Because the method operates at inference time, it can improve accuracy without enlarging the model, offering a strategy for balancing model size against performance.
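The summary does not spell out the procedure in code, so the following is only an illustrative sketch, not the paper's actual setup: a toy model whose ideal score is invariant under permutations of the input, with simulated parameter (epistemic) error. Averaging inference over randomly permuted copies of the input tends to cancel part of that error. All names, and the choice of permutations as the invariant transform, are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64

# Toy "trained model": the ideal weights are all-ones, so the ideal score
# sum(x) is invariant under any permutation of the input. Training left
# residual noise in the learned weights, standing in for epistemic
# (parameter) uncertainty.
ideal_w = np.ones(d)
learned_w = ideal_w + 0.5 * rng.normal(size=d)

def infer(x):
    """Single-pass inference with the (imperfect) learned weights."""
    return learned_w @ x

def resampled_infer(x, k=32):
    """Infer k invariant transforms (random permutations) of x and average.

    Each permutation re-mixes the weight noise against a different ordering
    of x, so the per-sample errors are only partially correlated and
    partially cancel in the mean.
    """
    outs = [infer(rng.permutation(x)) for _ in range(k)]
    return np.mean(outs)

# Compare single-pass vs. resampled error against the ideal (invariant) score.
x = rng.normal(size=d)
truth = ideal_w @ x
print(abs(infer(x) - truth), abs(resampled_infer(x) - truth))
```

On average across many inputs, the resampled error comes out smaller than the single-pass error, while the model itself is untouched, mirroring the paper's claim of better accuracy without a larger model.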

๐Ÿท๏ธ Themes

Artificial Intelligence, Uncertainty Reduction, Model Optimization

📚 Related People & Topics

Uncertainty quantification

Science of characterizing uncertainties

Uncertainty quantification (UQ) is the science of quantitative characterization and estimation of uncertainties in both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be to predict...



Machine learning

Study of algorithms that improve automatically through experience

Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data, and thus perform tasks without explicit instructions. Within a subdiscipline in machine learning, advances i...


Entity Intersection Graph

Connections for Uncertainty quantification:

๐ŸŒ Hallucination 1 shared
๐ŸŒ Computer vision 1 shared
View full profile
Original Source

arXiv:2602.23315 [cs.AI], Computer Science > Artificial Intelligence
Title: Invariant Transformation and Resampling based Epistemic-Uncertainty Reduction
Authors: Sha Hu
Submitted: Thu, 26 Feb 2026 18:22:40 UTC (957 KB); 5 pages, 5 figures
DOI: https://doi.org/10.48550/arXiv.2602.23315 (arXiv-issued DOI via DataCite, pending registration)

Abstract: An artificial intelligence model can be viewed as a function that maps inputs to outputs in high-dimensional spaces. Once designed and well trained, the AI model is applied for inference. However, even optimized AI models can produce inference errors due to aleatoric and epistemic uncertainties. Interestingly, we observed that when inferring multiple samples based on invariant transformations of an input, inference errors can show partial independences due to epistemic uncertainty. Leveraging this insight, we propose a "resampling" based inferencing that applies to a trained AI model with multiple transformed versions of an input, and aggregates inference outputs to a more accurate result. This approach has the potential to improve inference accuracy and offers a strategy for balancing model size and performance.
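The abstract's claim that aggregation helps because inference errors are only partially independent can be made precise with a standard variance-reduction argument (this derivation is ours, added for clarity, not taken from the paper): if the epistemic error components $e_1, \dots, e_k$ of the $k$ transformed inferences each have variance $\sigma^2$ and pairwise correlation $\rho < 1$, then the averaged output has error variance

$$\operatorname{Var}\!\left(\frac{1}{k}\sum_{i=1}^{k} e_i\right) = \sigma^2\left(\rho + \frac{1-\rho}{k}\right),$$

which is below $\sigma^2$ for every $k > 1$ and approaches $\rho\sigma^2$ as $k \to \infty$. Full independence ($\rho = 0$) recovers the familiar $\sigma^2/k$; partial independence still yields a strict reduction, which is why resampling over invariant transforms can improve accuracy without changing the model at all.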

Source

arxiv.org
