Using k-means clustering on spatial data requires linearizing the dataset, because observations are stored as a matrix: observations in the rows, features in the columns. Under a traditional row-major linearization, the last pixel of one row and the first pixel of the next end up as index neighbors even though they are spatially far apart, which is inaccurate (a Hilbert curve, which preserves locality, might be a better idea). Using tensors, it seems we could store the spatial structure as well as the constituent features: the spatial dimensionality of the dataset would then be the rank of the tensor minus one, with the final axis reserved for features.
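A minimal NumPy sketch of the problem (the image shape and the coordinate-appending workaround are illustrative assumptions, not a fixed method): row-major flattening makes index neighbors out of spatially distant pixels, whereas keeping the data as a rank-3 tensor with the final axis for features preserves adjacency.

```python
import numpy as np

# A toy 4x4 image with 3 feature channels: a rank-3 tensor,
# final axis reserved for features.
H, W, C = 4, 4, 3
image = np.arange(H * W * C).reshape(H, W, C)

# Row-major linearization for k-means: one observation per pixel.
flat = image.reshape(-1, C)

# In the flattened index, the last pixel of row 0 and the first pixel
# of row 1 are adjacent (indices W-1 and W), yet spatially they sit at
# opposite ends of the image.
assert np.array_equal(flat[W - 1], image[0, W - 1])
assert np.array_equal(flat[W], image[1, 0])

# One common workaround: append each pixel's (row, col) coordinates as
# extra features, so clustering can weigh spatial position directly.
rows, cols = np.indices((H, W))
with_coords = np.concatenate(
    [flat, rows.reshape(-1, 1), cols.reshape(-1, 1)], axis=1
)
print(with_coords.shape)  # (16, 5): 3 feature channels + 2 coordinates
```

This coordinate-augmented matrix is only a stopgap; it still discards the tensor structure, which is what motivates the idea above.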
That would seem more efficient. Maybe I should see whether it has been done already.