## Distances in Higher Dimensions

Ideas about how a world with more than three spatial dimensions would work: what laws of physics would be needed, how things would be built, how people would go about their lives, and so on.

### Distances in Higher Dimensions

https://stats.stackexchange.com/questions/99171/why-is-euclidean-distance-not-a-good-metric-in-high-dimensions

If you have points scattered uniformly at random through a finite high-dimensional space, then the distance from any point to any other converges toward a single common value. Machine learning algorithms can't distinguish closeness, so in practice the dimensionality of the space is reduced somehow.
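This is easy to see numerically. Here's a minimal NumPy sketch (the function name is mine, for illustration) measuring how the gap between the nearest and farthest pairs of random points shrinks, relative to the nearest distance, as the dimension grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_spread(n_points, dim):
    """Relative spread (max - min) / min of pairwise Euclidean
    distances for points drawn uniformly from the unit dim-cube."""
    pts = rng.random((n_points, dim))
    sq = (pts ** 2).sum(axis=1)
    # |x - y|^2 = |x|^2 + |y|^2 - 2 x.y, clipped to avoid tiny negatives
    d2 = sq[:, None] + sq[None, :] - 2 * pts @ pts.T
    d = np.sqrt(np.clip(d2, 0, None))
    d = d[np.triu_indices(n_points, k=1)]  # unique pairs only
    return (d.max() - d.min()) / d.min()

for dim in (2, 10, 100, 1000):
    print(dim, distance_spread(200, dim))
```

In 2D the spread is huge (some pairs are nearly coincident, others far apart); by 1000 dimensions every pair is at nearly the same distance, which is exactly what breaks nearest-neighbour reasoning.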
PatrickPowers
Tetronian

Posts: 481
Joined: Wed Dec 02, 2015 1:36 am

### Re: Distances in Higher Dimensions

It's not so much that the dimensionality of space is reduced, but that the distributions most relevant to said machine learning algorithms exhibit pathological behaviour. The reason is that the data we often use higher-dimensional representations for does not come from a space of that dimension naturally; instead, it's data relevant to our 3D world that, for the sake of analysis, we raise to a higher dimension in order to create a more orthogonal model. So it should not be surprising that, given enough dimensions, the distance between data points starts approaching a constant: any two points differ in some average number of features (each of which we map to a dimension in our modelling), and as the number of features/dimensions increases, pairs of points differ from each other in more or less the same number of features.
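The "differ in about the same number of features" intuition can be checked directly with random binary feature vectors, where the relevant distance is the Hamming distance (count of differing features). A sketch, with an illustrative function name of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)

def hamming_concentration(n_points, n_features):
    """Mean fraction of differing features, and relative std of pairwise
    Hamming distances, for random binary vectors (each feature on w.p. 0.5)."""
    x = rng.integers(0, 2, size=(n_points, n_features))
    ones = x.sum(axis=1)
    # number of differing positions = |x| + |y| - 2 x.y for 0/1 vectors
    d = ones[:, None] + ones[None, :] - 2 * x @ x.T
    d = d[np.triu_indices(n_points, k=1)].astype(float)
    return d.mean() / n_features, d.std() / d.mean()

for f in (10, 100, 1000, 10000):
    print(f, hamming_concentration(100, f))
```

Every pair differs in about half the features regardless of dimension, while the relative spread of the distances falls off like 1/sqrt(n_features) — the distance distribution collapses toward a constant, just as described above.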

Native n-dimensional data, such as one might conceivably obtain by measuring phenomena in a "native" n-dimensional world (let's postulate an n-dimensional planet, handwaving away problems of atomic and orbital stability, with enough matter to evenly fill a significant chunk of the space), would have a much different distribution than the kind of data we see in n-dimensional models of phenomena that ultimately originate from our 3D-centric world. But even then, certain counterintuitive phenomena would still be observed, such as the divergence in geometric content (volume) between the n-cube and the n-sphere, and other such things.
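That divergence is worth making concrete: the unit n-ball occupies a vanishing fraction of its bounding n-cube as n grows, since the ball's volume is pi^(n/2)/Gamma(n/2 + 1) while the cube's (side 2) is 2^n. A quick sketch using only the standard library:

```python
import math

def ball_to_cube_ratio(n):
    """Volume of the unit n-ball divided by the volume of its
    bounding n-cube of side 2: pi^(n/2) / Gamma(n/2 + 1) / 2^n."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) / 2 ** n

for n in (2, 3, 5, 10, 20):
    print(n, ball_to_cube_ratio(n))
```

In 2D the inscribed disc covers about 78% of the square; by 20 dimensions the inscribed ball covers well under a millionth of the cube — almost all of the cube's volume lives in its corners.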

One would be surrounded by so many orthogonal directions that one couldn't effectively take stock of one's surroundings: there is a rapidly growing number of dimensions you'd have to turn through in order to fully observe the myriad lateral directions around you. Given such a situation, it would perhaps not be surprising if hypothetical n-dimensional creatures, for sufficiently large n, had to possess the equivalent of a myriad of eyes just to be aware of their surroundings; it would be almost completely useless to look in only a single direction. Furthermore, since almost all of such a creature's bulk would be close to its surface, it would have trouble retaining heat; energy dissipation would be so high that it probably wouldn't make sense for such a creature to possess locomotion at all.
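The "almost all of its bulk is close to its surface" claim follows from volume scaling as r^n: the fraction of an n-ball's volume within a thin shell of relative thickness eps at the surface is 1 - (1 - eps)^n. A one-liner (function name mine) makes the point:

```python
def shell_fraction(n, eps=0.01):
    """Fraction of an n-ball's volume lying within eps*R of the
    surface: 1 - (1 - eps)^n, since volume scales as r^n."""
    return 1 - (1 - eps) ** n

for n in (3, 100, 1000):
    print(n, shell_fraction(n))
```

For a 3D body only about 3% of the volume lies in the outermost 1% of the radius; for a 1000-dimensional one, essentially all of it does, hence the heat-retention problem.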
quickfur
Pentonian

Posts: 2988
Joined: Thu Sep 02, 2004 11:20 pm
Location: The Great White North

### Re: Distances in Higher Dimensions

One has to distinguish between the geometry as such and the ability of ML algorithms to learn about it. If you consider the probability distribution of distances (from 0 up to the maximum diagonal), the curve narrows in higher-dimensional spaces, eventually approaching a spike at some unique "average" value (technically, the mode of that distribution). ML has difficulty making sense of such narrow spikes.
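This narrowing can be seen by rescaling distances by the maximum diagonal (sqrt(dim) for a unit cube): the mean settles at a fixed value, around sqrt(1/6) for uniform data, while the standard deviation shrinks toward zero. A brief NumPy sketch, with an illustrative function name of my own:

```python
import numpy as np

rng = np.random.default_rng(2)

def normalized_distance_stats(n_points, dim):
    """Mean and std of pairwise Euclidean distances for uniform points
    in the unit dim-cube, divided by the maximum diagonal sqrt(dim)."""
    pts = rng.random((n_points, dim))
    sq = (pts ** 2).sum(axis=1)
    # |x - y|^2 = |x|^2 + |y|^2 - 2 x.y, clipped to avoid tiny negatives
    d2 = sq[:, None] + sq[None, :] - 2 * pts @ pts.T
    d = np.sqrt(np.clip(d2, 0, None))
    d = d[np.triu_indices(n_points, k=1)] / np.sqrt(dim)
    return d.mean(), d.std()

for dim in (10, 100, 1000):
    print(dim, normalized_distance_stats(200, dim))
```

On that normalised axis the distribution visibly collapses into the spike described above: same centre, ever-smaller width.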
steelpillow
Trionian

Posts: 67
Joined: Sat Jan 15, 2011 7:06 pm
Location: England

### Re: Distances in Higher Dimensions

Well yes, the thing about ML algorithms is that they are tuned for data bounded by some maximum feature value, usually within an n-cube-like bounding volume. In a "native" geometric situation this may no longer be the case.
quickfur
Pentonian

Posts: 2988
Joined: Thu Sep 02, 2004 11:20 pm
Location: The Great White North