US startup launches AI model that cuts compute nearly 1,000-fold

US-based startup Subquadratic has launched 'SubQ', a large language model that the firm says requires nearly 1,000 times less compute than standard models. SubQ is the first model built on a fully sub-quadratic sparse attention architecture, which lets it identify the context that matters and skip redundant computation, the company said. Subquadratic also claims the model outperforms Claude Opus 4.7 on long-context tasks.
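The article does not describe SubQ's architecture in detail, but the general idea behind sub-quadratic sparse attention can be illustrated. Standard attention scores every query against every key, which costs O(n^2) in sequence length n; sparse variants restrict each query to a small subset of keys. The sketch below uses a generic sliding-window pattern (an assumption for illustration, not Subquadratic's actual method, which has not been published) to show how the score work drops from n^2 to n * w entries:

```python
import numpy as np

def full_attention(q, k, v):
    # Standard attention: every query attends to every key -> O(n^2) scores.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def windowed_attention(q, k, v, w=4):
    # Sliding-window sparse attention: each query attends only to keys within
    # w positions of it, so the score work is O(n * w) instead of O(n^2).
    n, d = q.shape
    out = np.empty_like(v)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        s = q[i] @ k[lo:hi].T / np.sqrt(d)
        e = np.exp(s - s.max())
        out[i] = (e / e.sum()) @ v[lo:hi]
    return out

n, w = 4096, 4
# With n = 4096 and a window of 4, the sparse pattern computes roughly
# n^2 / (n * (2w + 1)) ~ 455x fewer score entries than full attention.
print("full attention score entries:    ", n * n)
print("windowed attention score entries:", n * (2 * w + 1))
```

Large compute reductions like the reported ~1,000x come from this kind of scaling gap widening as the context grows, since the sparse cost grows linearly in n while the dense cost grows quadratically.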