Block achieved a significant milestone this month with the acceptance of two research papers at NeurIPS 2025, one of the world's premier artificial intelligence conferences. The papers address fundamental challenges in machine learning that directly impact how AI systems operate in financial technology and beyond.
The first paper, written by Florence Regol and colleagues, addresses the growing need for a principled understanding of how to build efficient AI systems as models become larger and more expensive to operate.
The research provides the first formal theoretical framework for training two-stage classifiers, in which a small, fast model handles easy cases and can escalate complex cases to a larger, more powerful model. While this architecture is increasingly common in production systems, it previously lacked rigorous theoretical foundations. The paper fills this gap with a hinge-based surrogate loss that is provably consistent, meaning that optimizing the surrogate guarantees convergence to the optimal solution of the original problem. This bridges the gap between theoretical guarantees and practical implementation needs.
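To make the two-stage idea concrete, here is a minimal sketch of cascade inference: a cheap model answers when it is confident and escalates the rest. The models, the softmax-confidence escalation rule, and the threshold are illustrative assumptions, not the paper's trained escalation mechanism.

```python
import numpy as np

# Hypothetical two-stage (cascade) classifier sketch. Both "models" are
# toy linear classifiers; in practice the small model is cheap and the
# large model is expensive.
def small_model(x):
    # Stand-in cheap model: returns class probabilities via softmax.
    logits = x @ np.array([[1.0, -1.0], [-0.5, 0.5]])
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def large_model(x):
    # Stand-in expensive model, only called on escalated inputs.
    logits = x @ np.array([[2.0, -2.0], [-1.0, 1.0]])
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cascade_predict(x, threshold=0.9):
    """Use the small model when it is confident; escalate otherwise."""
    p_small = small_model(x)
    confident = p_small.max(axis=1) >= threshold
    preds = p_small.argmax(axis=1)
    if (~confident).any():
        # Only the hard cases pay the cost of the large model.
        preds[~confident] = large_model(x[~confident]).argmax(axis=1)
    return preds, confident

x = np.array([[3.0, 0.1], [0.1, 0.05]])  # one easy point, one ambiguous point
preds, used_small = cascade_predict(x)
```

Here only the second, ambiguous input is escalated, so the expensive model runs on a fraction of the traffic. The paper's contribution is showing how to train such a system end to end with a consistent surrogate loss rather than a hand-picked confidence threshold.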
This paper was awarded a spotlight, placing it in the top 3% of all submitted papers at NeurIPS. This recognition underscores the significance of its theoretical contributions. These consistency guarantees for surrogate losses represent a meaningful advance in machine learning theory that can inform and guide the development of novel methods for practical applications.
This paper, written by Fred Xu and Thomas Markovich, tackles a critical problem in AI systems: knowing when a graph machine learning model is uncertain about its predictions. Most current uncertainty methods assume that neighboring nodes in a graph should have similar uncertainty levels. This assumption breaks down in real-world scenarios where a vertex is dissimilar to its neighbors, as is often the case with fraud in transaction networks. The research introduces Structure Informed Stochastic Partial Differential Equations (SISPDE), which uses Matérn Gaussian processes to control spatial correlations in uncertainty estimation across different graph structures.
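For intuition on the building block involved, the following sketch computes a Matérn-type covariance over the nodes of a toy graph from its Laplacian, following the general graph-Matérn construction. This is not SISPDE's exact formulation; the graph, the smoothness parameter nu, and the length-scale parameter kappa are illustrative assumptions.

```python
import numpy as np

# Toy 4-node undirected graph, given by its adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A  # combinatorial graph Laplacian

def graph_matern_kernel(L, nu=1.5, kappa=1.0):
    """Matérn-type covariance on graph nodes,
    K = (2*nu/kappa**2 * I + L)**(-nu),
    evaluated through the Laplacian eigendecomposition."""
    lam, U = np.linalg.eigh(L)
    spec = (2.0 * nu / kappa**2 + lam) ** (-nu)
    return (U * spec) @ U.T

K = graph_matern_kernel(L)
# K is a symmetric positive-definite covariance: kappa controls how quickly
# correlation decays with graph distance, nu controls its smoothness.
```

The appeal of this family is that nu and kappa are explicit knobs on how strongly uncertainty is correlated across edges, which is exactly the assumption that naive methods hard-code and that breaks down on fraud-like graphs.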
The method achieved state-of-the-art performance on out-of-distribution detection across eight benchmark datasets. For financial applications, this translates to more reliable fraud detection systems that can identify when they’re uncertain rather than making overconfident predictions.
Both research areas directly address challenges companies face in deploying AI at scale. In financial services, AI systems must balance accuracy with computational efficiency while maintaining high standards for reliability and explainability.
Uncertainty estimation becomes crucial when AI systems make decisions with financial consequences. A fraud detection system that can accurately assess its own confidence helps reduce both false positives that inconvenience customers and false negatives that allow fraudulent transactions.
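As a simple illustration of how calibrated confidence changes downstream decisions, the policy below routes a transaction by its predicted fraud probability, with an explicit "uncertain" band sent to manual review instead of forcing an overconfident auto-decision. The function, thresholds, and labels are assumptions for illustration, not Block's production logic.

```python
# Hypothetical decision policy over a model's fraud probability.
def route_transaction(p_fraud, block_above=0.9, review_above=0.6):
    if p_fraud >= block_above:
        return "block"           # confident fraud -> fewer false negatives
    if p_fraud >= review_above:
        return "manual_review"   # uncertain -> a human decides
    return "approve"             # confident legitimate -> fewer false positives

decisions = [route_transaction(p) for p in (0.05, 0.72, 0.95)]
```

A model that knows when it is uncertain sends borderline cases into the middle band, rather than inflating either the auto-block or auto-approve buckets.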
Efficient two-stage architectures enable Block to deploy sophisticated AI capabilities while controlling operational costs. The theoretical framework ensures these efficiency gains don't compromise the quality of decisions affecting millions of transactions.
The research reflects Block's approach to AI development: addressing real-world challenges while maintaining rigorous scientific standards. As AI becomes increasingly central to financial services, this combination of practical focus and theoretical rigor becomes essential for building trustworthy systems at scale.