STAC Launches Gradient-Boosted Tree Benchmark in STAC-ML™

We’re pleased to announce that the Gradient-Boosted Tree (GBT) benchmark suite, named El-Popo, is now live in STAC-ML™. This release formally extends the STAC-ML program to one of the most widely used machine learning approaches in financial services.

Gradient-Boosted Trees have become a cornerstone of applied machine learning. Unlike deep neural networks, which often require massive datasets and long training times, GBTs are relatively lightweight and excel at structured data problems. In areas such as risk modeling, fraud detection, credit scoring, trade classification, and signal generation, GBTs consistently deliver high predictive accuracy with manageable computational demands. In many production trading and risk systems, they remain the model of choice, especially when low-latency inference is critical.

But as with any model family, inference performance depends heavily on the underlying infrastructure and software stack. Measuring that performance under realistic conditions is what STAC-ML is designed to do. Earlier this year, we released a Proof of Concept version of a GBT benchmark in STAC-ML. That initial work helped the community explore design questions, validate the relevance of workloads, and generate early results. The feedback from firms and vendors was invaluable in refining the approach.

Thanks to the contributions of the STAC-ML Working Group, the benchmark has now been formalized and is officially part of the suite. The GBT suite focuses on inference latency and performance consistency, two key factors for real-time financial applications. Specifically, the benchmark measures how quickly different systems can score live market data using GBT models of varying complexity.

While deep learning benchmarks tend to emphasize throughput, the GBT benchmark emphasizes 99th percentile and maximum latency, metrics that align more closely with the requirements of time-sensitive trading environments. Our findings during benchmark development confirmed the value of a standardized way to compare GBT performance under common conditions.
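To make the distinction concrete, here is a minimal, hypothetical sketch of tail-latency measurement for GBT-style inference. The toy stump ensemble, feature count, and event loop are illustrative stand-ins only; they are not the STAC-ML harness, workload, or models.

```python
import random
import time

# Toy "GBT": each tree is a decision stump (feature_index, threshold, left_value,
# right_value). A real gradient-boosted ensemble is deeper, but inference is the
# same idea: sum the per-tree contributions for one input event.
random.seed(42)
TREES = [
    (random.randrange(8), random.random(), random.uniform(-1, 1), random.uniform(-1, 1))
    for _ in range(100)
]

def score(features):
    """Sum stump outputs, as GBT inference sums per-tree contributions."""
    total = 0.0
    for feat, thresh, left, right in TREES:
        total += left if features[feat] < thresh else right
    return total

def measure_latencies(n_events=10_000):
    """Time each single-event inference, mirroring a tick-by-tick scoring loop."""
    latencies = []
    for _ in range(n_events):
        features = [random.random() for _ in range(8)]
        start = time.perf_counter_ns()
        score(features)
        latencies.append(time.perf_counter_ns() - start)
    return sorted(latencies)

lat = measure_latencies()
p99 = lat[int(0.99 * len(lat)) - 1]  # 99th-percentile latency
print(f"p99 latency: {p99} ns, max latency: {lat[-1]} ns")
```

Note that the interesting numbers here are the p99 and maximum, not the mean: a throughput-oriented benchmark would batch events and divide total time by count, which hides exactly the tail behavior that time-sensitive trading systems care about.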

Call for Submissions

Now that the benchmark is live, we are inviting submissions from across the community. This is an opportunity for technology providers to demonstrate and compare performance on a workload with real-world relevance to production finance.

For firms interested in submitting benchmarks or learning more about participation, please [contact us].

Acknowledgments

We’d like to thank the STAC-ML Working Group for their time, expertise, and guidance in shaping this benchmark. Their input has ensured that the GBT suite addresses real use cases and meets the community’s need for meaningful metrics.

This release is another step forward in helping the industry evaluate and optimize machine learning infrastructure for the most time-sensitive financial applications.

About the STAC Blog

STAC and members of the STAC community post blogs from time to time on issues related to technology selection, development, engineering, and operations in financial services.

About the blogger