Hello everyone,
I am Zhongpan Tang, an independent researcher working at an ordinary company.
I have recently completed a paper on an improved Transformer architecture based on linear attention, which I hope to submit to the cs.LG and stat.ML categories on arXiv. Since this is my first submission, arXiv requires an endorsement from an existing endorser in the field.
My work primarily aims to address the quadratic complexity bottleneck of the standard Transformer's attention mechanism and the growth of the KV cache during inference. We propose a new architecture called TangLinFormer, which theoretically achieves strict linear complexity while retaining the full expressive power of standard attention.
Paper Abstract:
The Transformer architecture has become the cornerstone of modern artificial intelligence, but its core self-attention mechanism suffers from a quadratic complexity bottleneck with respect to sequence length, severely limiting its application to long-sequence tasks. To address this challenge, existing linear attention methods often sacrifice model performance by relying on data-agnostic kernel approximations or restrictive context selection. This paper introduces TangLinFormer, a novel, lossless linear attention architecture. TangLinFormer achieves strict linear complexity while fully preserving the expressive power of standard attention, thereby avoiding the performance degradation common to existing approximation methods. We systematically evaluate TangLinFormer against standard Transformer baselines on long-sequence inference tasks. The results show that TangLinFormer offers substantial advantages in key metrics such as inference time, KV cache efficiency, memory footprint, and overall speedup.
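For readers less familiar with the problem, here is a minimal sketch in NumPy of why standard attention scales quadratically with sequence length while a linear-attention recurrence keeps a fixed-size state instead of a growing KV cache. This is a generic kernelized linear attention illustration with an assumed feature map `phi`; it is not the actual TangLinFormer mechanism, which is described in the draft.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard (non-causal) softmax attention: the (n, n) score matrix
    makes both compute and memory grow quadratically with length n."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])             # (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                   # (n, d)

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Generic causal kernelized linear attention (illustration only,
    not TangLinFormer): with a feature map phi, attention becomes a
    running sum, so compute is O(n * d^2) and the state (S, z) that
    replaces the growing KV cache has a fixed size."""
    n, d = Q.shape
    S = np.zeros((d, d))          # running sum of phi(k) v^T
    z = np.zeros(d)               # running sum of phi(k)
    out = np.empty_like(V)
    for t in range(n):            # one token at a time, as in decoding
        q, k, v = phi(Q[t]), phi(K[t]), V[t]
        S += np.outer(k, v)
        z += k
        out[t] = (q @ S) / (q @ z)
    return out

n, d = 512, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
```

In the sketch, the per-step state (S, z) stays at d x d + d numbers no matter how many tokens have been processed, which is why this family of methods avoids the KV cache growth of standard attention.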
If anyone in the community is an active arXiv author in cs.LG or a related field and, after reading the abstract/draft, finds this work relevant and meaningful, would you please consider providing an endorsement for me?
I will send you the official endorsement code generated by arXiv via private message. Thank you very much for your time and consideration.
Hi Zhongpan, I have read your paper and am interested in TLinFormer. However, the description in the paper is not very clear; could you please open-source the code?

