Time Series Forecasting with Graph Transformers (kumo.ai)
90 points by turntable_pride 13 hours ago | 29 comments
cye131 8 hours ago [-]
I'm not a fan of this blog post as it tries to pass off a method that's not accepted as a good or standard time series methodology (graph transformers) as though it were a norm. Transformers perform poorly on time series, and graph deep learning performs poorly for tasks that don't have real behaviorial/physical edges (physical space/molecules/social graphs etc), so it's unclear why combining them would produce anything useful for "business applications" of time series like sales forecasting.

For those interested in transformers with time series, I recommend reading this paper: https://arxiv.org/pdf/2205.13504. There is also plenty of other research showing that transformer-based time series models generally underperform much simpler alternatives like boosted trees.

After looking further, it seems like this startup is trying both to publish academic research promoting these models and to sell them to businesses, which seems like a conflict of interest to me.

rusty1s 3 hours ago [-]
Hey, one of the authors here—happy to clarify a few things.

> Transformers perform poorly on time series.

That’s not quite the point of our work. The model isn’t about using Transformers for time series per se. Rather, the focus is on how to enrich forecasting models by combining historical sequence data with external information, which is often naturally structured as a graph. This approach enables the model to flexibly incorporate a wide range of useful signals, such as:

* Weather forecasts for a region

* Sales from similar products or related categories

* Data from nearby locations or stations

* Finer-grained recent interactions/activities

* Price changes and promotional campaigns

* Competitor data (e.g., pricing, availability)

* Aggregated regional or market-level statistics

The architecture is modular: we don't default to a Transformer for the past-sequence component (and in fact use a simpler architecture). The Graph Transformer/Graph Neural Network then extends the past-sequence component by aggregating information from these additional sources.
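
A rough sketch of that kind of modular setup in PyTorch (a minimal illustration only: the GRU history encoder, the single attention-based aggregation step, and all module names and dimensions are assumptions, not the authors' actual architecture):

    import torch
    import torch.nn as nn

    class PastSequenceEncoder(nn.Module):
        """Encodes a node's own history (e.g. past sales) without a Transformer."""
        def __init__(self, in_dim: int, hidden_dim: int):
            super().__init__()
            self.gru = nn.GRU(in_dim, hidden_dim, batch_first=True)

        def forward(self, x):                  # x: [num_nodes, seq_len, in_dim]
            _, h = self.gru(x)                 # h: [1, num_nodes, hidden_dim]
            return h.squeeze(0)                # [num_nodes, hidden_dim]

    class NeighborAggregator(nn.Module):
        """One attention step that mixes in embeddings of related nodes
        (weather, similar products, nearby stations, ...)."""
        def __init__(self, hidden_dim: int, num_heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

        def forward(self, h, neighbor_h):      # h: [N, D], neighbor_h: [N, K, D]
            q = h.unsqueeze(1)                 # each node attends over its K neighbors
            out, _ = self.attn(q, neighbor_h, neighbor_h)
            return out.squeeze(1) + h          # residual: keep the node's own history

    class Forecaster(nn.Module):
        def __init__(self, in_dim=1, hidden_dim=64, horizon=7):
            super().__init__()
            self.encoder = PastSequenceEncoder(in_dim, hidden_dim)
            self.aggregate = NeighborAggregator(hidden_dim)
            self.head = nn.Linear(hidden_dim, horizon)

        def forward(self, x, neighbor_x):      # neighbor_x: [N, K, seq_len, in_dim]
            h = self.encoder(x)                                   # own history
            nh = self.encoder(neighbor_x.flatten(0, 1))           # neighbors' histories
            nh = nh.view(neighbor_x.size(0), neighbor_x.size(1), -1)
            return self.head(self.aggregate(h, nh))               # [N, horizon]

The point of the split is that the past-sequence encoder and the aggregation over related entities are separate components, so either can be swapped out without touching the other.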

> It seems like this startup is trying both to publish academic research promoting these models and to sell them to businesses, which seems like a conflict of interest to me.

That’s a bold claim. All of our academic work is conducted in collaboration with university partners, is peer-reviewed, and has been accepted at top-tier conferences. Sharing blog posts that explain the design decisions behind our models isn’t a conflict of interest—it's part of making our internals more transparent.

ethan_smith 6 hours ago [-]
Recent work like Informer (AAAI'21) and Autoformer (NeurIPS'21) has shown competitive performance against statistical methods by addressing the quadratic complexity and long-range dependency issues that plagued earlier transformer architectures for time series tasks.
tough 7 hours ago [-]
thoughts on TimesFM?

> After looking further, it seems like this startup is trying both to publish academic research promoting these models and to sell them to businesses, which seems like a conflict of interest to me.

Is this a general rule of thumb, that one shouldn't use the same organization both to publish research and to pursue commercialization?

shirokiba 7 hours ago [-]
Would you be so kind as to recommend some resources on modern, promising methods for time series forecasting? I'm starting a position doing this work soon and would like to learn more about it if you'd be willing to share.
ayongpm 4 hours ago [-]
https://dontfuckwithscroll.com/
rusty1s 3 hours ago [-]
Forwarded :)
loehnsberg 12 hours ago [-]
I disregard anything on time series forecasting from any entity that uses Facebook Prophet as a benchmark.
frakt0x90 11 hours ago [-]
Prophet is great and we use it for multiple models in production at work. Our industry has tons of weird holidays and seasonality and Prophet handles that extremely well.
drewbitt 7 hours ago [-]
We also used it at my previous job. Yes, it does handle that well, but it was also simply not as accurate as we would have liked (often over-adjusting for seasonality), even with tuning. Prophet was probably the right choice initially, though, just because of how easy it is to set up and get decent results.
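
For reference, the knobs being discussed here (custom holidays and damping how strongly Prophet fits seasonality) look roughly like this; the event dates and prior scales are placeholders, not a recommended configuration:

    import pandas as pd
    from prophet import Prophet

    # Placeholder industry-specific events; real dates would come from the business calendar.
    holidays = pd.DataFrame({
        "holiday": "industry_event",
        "ds": pd.to_datetime(["2024-03-15", "2024-09-01", "2025-03-14"]),
        "lower_window": 0,
        "upper_window": 1,
    })

    m = Prophet(
        holidays=holidays,
        yearly_seasonality=True,
        weekly_seasonality=True,
        seasonality_prior_scale=1.0,   # lower values damp seasonal over-adjustment
        changepoint_prior_scale=0.05,  # controls trend flexibility
    )
    m.fit(train_df)  # train_df: a DataFrame with columns ds (dates) and y (values)
    forecast = m.predict(m.make_future_dataframe(periods=30))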
tech_ken 11 hours ago [-]
This is sales research, and after "CAGR in a GSheet" FB Prophet is what's going to be most recognizable to the widest base of customers.

FWIW, it seems like the real value-add is this relational DB model: https://kumo.ai/research/relational-deep-learning-rdl/
The time-series stuff is just them elaborating the basic model structure a little more to account for time dependence.

melenaboija 11 hours ago [-]
For such a strong and personal statement, I have to ask why.
Worksheet 10 hours ago [-]
If you arrived into, say, London and googled "Best fish and chips" would you believe that the top result gives you the meal that you're after?
gk1 6 hours ago [-]
…yes? Feels like there’s some bit of tribal knowledge required to understand your point, but fewer people know it than you think.
motoxpro 7 hours ago [-]
I would believe those are some of the better options, and it's definitely a useful benchmark. 1. How do you go about finding the "absolute" best when you go to a city? 2. What does this have to do with the GP's question?
hotstickyballs 8 hours ago [-]
Why not? It’s definitely a useful benchmark
esafak 11 hours ago [-]
Why? That is what everybody uses. What do you use?
loehnsberg 10 hours ago [-]
L1-regularized autoregressive features, holiday dummies, and Fourier terms (combined where suitable) yield lower test errors, are faster to train, and are easier to cross-validate than Prophet.
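
A minimal sketch of that kind of pipeline, assuming scikit-learn and pandas (the thread doesn't say which library is actually used); the lag count, Fourier order, and input names are illustrative:

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LassoCV

    def make_features(y, holidays, n_lags=14, fourier_order=3):
        """Lagged values + a holiday dummy + yearly Fourier terms as one design matrix."""
        df = pd.DataFrame(index=y.index)
        for lag in range(1, n_lags + 1):                      # autoregressive features
            df[f"lag_{lag}"] = y.shift(lag)
        df["holiday"] = y.index.isin(holidays).astype(float)  # holiday dummy
        t = y.index.dayofyear / 365.25
        for k in range(1, fourier_order + 1):                 # yearly Fourier terms
            df[f"sin_{k}"] = np.sin(2 * np.pi * k * t)
            df[f"cos_{k}"] = np.cos(2 * np.pi * k * t)
        df = df.dropna()                                      # drop rows lost to lagging
        return df, y.loc[df.index]

    # y: a daily pd.Series with a DatetimeIndex; holidays: a DatetimeIndex of special dates
    X, target = make_features(y, holidays)
    model = LassoCV(cv=5).fit(X, target)   # L1 regularization prunes unhelpful lags/terms

In practice the cross-validation would need to respect time order (e.g. expanding-window splits) rather than the plain k-fold shown here.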
hotstickyballs 8 hours ago [-]
Sounds like prophet with extra steps
esafak 9 hours ago [-]
With which library though? Is it fast enough for production?
cwmoore 12 hours ago [-]
“Here, sign this.”

    accept all cookies
ziofill 12 hours ago [-]
I can't stand websites that override scrolling
pealco 12 hours ago [-]
Most of my time interacting with this site was spent in developer tools, trying to figure out where the scrolling behavior was coming from. (Couldn't figure it out.) I can't understand why people are still doing this in 2025.
bestest 11 hours ago [-]
Enter this in the console:

document.body.onwheel = (e) => e.stopPropagation(); // stop wheel events from bubbling up to the site's scroll-hijacking handler

almosthere 11 hours ago [-]
Most likely the developer is using a Windows computer.
rossant 11 hours ago [-]
I came here to say this. Don't mess with my scrollbar. Ever.
grimpy 12 hours ago [-]
[flagged]
superfrank 11 hours ago [-]
I don't think the people who wrote the paper are the same people who built the website.
meindnoch 9 hours ago [-]
1. Stop messing with my scrolling.

2. If this really worked, you'd be making billions on the stock market. The fact that you don't tells me it doesn't work.

tech_ken 9 hours ago [-]
> If this really worked, you'd be making billions on the stock market

That's kind of a weird thing to say given that the market cap for quantitative finance is well over a billion dollars, and this product clearly seems to be targeting that sector (plus others) as a B2B service provider. Do you think that all those quantitative trading firms are using something other than time-series analytics?

Also, setting aside the issue of whether time-series forecasting is valuable for stock-market trading, it seems like the value add of this product isn't necessarily the improved accuracy of the forecasts, but rather the streamlined ETL -> Feature Engineering -> Model Design process. For most firms (either in quantitative finance or elsewhere) that's the work of a small dedicated team of highly-trained specialists. This seems like it has the potential to greatly reduce the labor requirements for such an organization without a concomitant loss of product quality.