Exploring whether a quantised, structured transform-domain representation can serve as a stable primary form for storing and operating on neural-network models and embeddings, rather than merely as a preprocessing step for quantisation

I’m experimenting with storing model parameters and embeddings as quantised coefficients of a structured orthogonal transform (e.g., Hadamard or DCT) and treating that representation as the model’s persistent form. Certain operations (similarity search, projections) appear to work directly in this domain, and the model can also be reconstructed into standard tensors for ordinary inference. I’m looking for technical feedback on how this relates to existing quantisation and representation approaches, and on which tests would best probe its behaviour.
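To make the round trip concrete, here is a minimal sketch assuming an orthonormal Sylvester Hadamard basis and symmetric per-tensor int8 quantisation. The function names (`encode`, `decode`) and all details are mine for illustration, not taken from the DTDR repo. The key property is that an orthonormal transform preserves inner products, which is what lets similarity run directly on the quantised coefficients:

```python
import numpy as np
from scipy.linalg import hadamard  # Sylvester Hadamard matrix (symmetric, entries +/-1)


def encode(x):
    """Project onto an orthonormal Hadamard basis and quantise to int8.

    Hypothetical sketch only; the actual DTDR pipeline (blocking,
    scale handling, choice of transform) may differ.
    """
    n = x.shape[-1]                       # Hadamard needs n to be a power of two
    H = hadamard(n) / np.sqrt(n)          # orthonormal, so H @ H.T == I
    coeffs = x @ H                        # transform-domain coefficients
    scale = max(np.abs(coeffs).max(), 1e-12) / 127.0
    q = np.clip(np.round(coeffs / scale), -127, 127).astype(np.int8)
    return q, scale


def decode(q, scale):
    """Dequantise and invert the transform to recover a standard tensor."""
    n = q.shape[-1]
    H = hadamard(n) / np.sqrt(n)
    return (q.astype(np.float32) * scale) @ H  # symmetric orthonormal H is its own inverse


rng = np.random.default_rng(0)
x, y = rng.standard_normal((2, 64)).astype(np.float32)
qx, sx = encode(x)
qy, sy = encode(y)

# Orthonormal transforms preserve inner products, so similarity can be
# computed on the quantised coefficients without reconstructing:
approx = float(qx.astype(np.int32) @ qy.astype(np.int32)) * sx * sy
print(approx, float(x @ y))               # close, up to quantisation error

# Round trip back to a standard tensor for normal inference:
print(np.abs(decode(qx, sx) - x).max())   # small reconstruction error
```

The dense matrix is only for readability; at realistic sizes a fast Walsh–Hadamard transform would replace it, giving O(n log n) encode/decode.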
Code and experiments: https://github.com/UnrealJon/DTDR (transform-domain representation enabling 3–4× storage reduction with direct ANN search and novel multi-resolution signals; patent-pending).