A persistent memory representation for semantically encoded high-dimensional data

I’ve been experimenting with a representation I call DTDR (Distributed Transform-Domain Representation).
The idea is to store vectors or model parameters as the quantised coefficients of a structured orthogonal transform, and to treat that representation as the persistent form rather than as a preprocessing step.
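
A minimal sketch of the round trip, using an orthonormal DCT-II as the structured transform and symmetric per-tensor int8 quantisation (one possible instantiation for illustration; any orthogonal transform and quantiser would slot in the same way):

```python
import numpy as np
from scipy.fft import dct, idct

def dtdr_encode(x: np.ndarray, bits: int = 8):
    """Map to coefficient space with an orthonormal DCT-II, then quantise."""
    c = dct(x, norm="ortho")                  # structured orthogonal transform
    scale = np.abs(c).max() / (2 ** (bits - 1) - 1)
    q = np.round(c / scale).astype(np.int8)   # this is the persistent form
    return q, scale

def dtdr_decode(q: np.ndarray, scale: float) -> np.ndarray:
    """Dequantise and invert the transform back to a standard FP tensor."""
    return idct(q.astype(np.float32) * scale, norm="ortho")

rng = np.random.default_rng(0)
x = rng.standard_normal(1024).astype(np.float32)
q, s = dtdr_encode(x)
x_hat = dtdr_decode(q, s)
print("max round-trip error:", float(np.abs(x - x_hat).max()))
```

Because the transform is orthonormal, quantisation noise added in coefficient space carries the same energy back in the original space, which makes the error budget easy to reason about.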

Some interesting behaviours seem to emerge:

  • similarity relationships are often still usable directly in coefficient space, since an orthogonal transform preserves inner products and L2 distances (see the sketch after this list)

  • approximate nearest-neighbour search sometimes improves due to coefficient dilution

  • quantised representations can be reconstructed into standard FP tensors for normal inference

  • the stored form often remains further losslessly compressible with a generic entropy coder
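
A quick sketch of the first and last points, again with an orthonormal DCT and with zlib standing in for a real entropy coder:

```python
import zlib
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)
a = rng.standard_normal(1024)
b = rng.standard_normal(1024)

# Parseval: an orthonormal transform preserves inner products and L2
# distances, so similarity can be computed on coefficients directly.
ca, cb = dct(a, norm="ortho"), dct(b, norm="ortho")
print(np.dot(a, b), np.dot(ca, cb))   # agree to floating-point precision

# For correlated data the transform compacts energy into a few large
# coefficients; after quantisation the long runs of near-zero values
# keep the stored form compressible by a generic entropy coder.
smooth = np.cumsum(rng.standard_normal(1024))      # a random walk
c = dct(smooth, norm="ortho")
q = np.round(c / (np.abs(c).max() / 127)).astype(np.int8)
raw = q.tobytes()
print(len(raw), "bytes ->", len(zlib.compress(raw, 9)), "bytes after zlib")
```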

I’m trying to understand whether this is just another way of viewing existing quantisation techniques, or whether it has genuinely different implications for storage/search pipelines.

Constructive critique very welcome — especially pointers to prior work I may have missed.

Code and demos: https://github.com/UnrealJon/

I don’t see any repo there, mate.

Maybe a wrong URL? The correct one (maybe): https://github.com/UnrealJon/DTDR

Yes, thanks John6666, your link is correct.
