What is the precise definition of "dimension"?

Wikipedia says "In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it" (https://en.wikipedia.org/wiki/Dimension).

However, this is a vague definition.

In fact, one could represent "2-dimensional" space with only a single coordinate.

For example, consider the 2-D polar coordinates (r, θ) in R^2; θ is in radians. We can then create a single number C such that the even digits of C are the digits of r, and the odd digits of C are the digits of θ. There is a 1-to-1 mapping between (r, θ) and C.
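For concreteness, here is a toy Python sketch of such a digit-interleaving map. It assumes (to sidestep integer-part bookkeeping) that both r and θ have been rescaled into [0, 1) and truncated to finitely many digits; the function names are my own:

```python
def interleave(r, theta, digits=8):
    """Toy digit-interleaving map: r, theta in [0, 1) -> one number C."""
    rd = f"{r:.{digits}f}"[2:]        # digits of r after "0."
    td = f"{theta:.{digits}f}"[2:]    # digits of theta after "0."
    return float("0." + "".join(a + b for a, b in zip(rd, td)))

def split(c, digits=8):
    """Inverse map: recover (r, theta) from the interleaved number."""
    cd = f"{c:.{2 * digits}f}"[2:]
    return float("0." + cd[0::2]), float("0." + cd[1::2])

print(interleave(0.12, 0.34, digits=2))   # 0.1324
print(split(0.1324, digits=2))            # (0.12, 0.34)
```

On genuine infinite decimal expansions this pairing needs more care (e.g. expansions ending in repeating 9s), which is exactly where the map stops being a clean bijection.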

So, what is a precise and accurate definition of "dimension"? Do we think about dimensions the wrong way? What is so inherent about the way we think about dimensions?

Applied Mathematics
13 points · 4 months ago · edited 4 months ago

Note that the Wikipedia article correctly says that "number of required coordinates" is an informal definition. In other words, that's not the precise definition of dimension. As you cleverly point out, there's no requirement to use 2 numbers to denote a point in R^2. Indeed, since R^n has the same cardinality as R, we can always specify a point in R^n uniquely by specifying a single number in R. But clearly we don't say that R^n has the same dimension for all n.

In physics, the only meaningful definition of dimension is either that of a manifold or that of a vector space. Space, spacetime, phase space, particle trajectories, Lie groups, etc. are all manifolds of a certain dimension. Spacetime is 4-dimensional, classical phase space for N particles is 6N-dimensional, etc.

There are other notions of dimension in mathematics, e.g., Hausdorff dimension, but I don't know of those definitions in physics.

The word "dimension", without any extra information, doesn't have any precise definition in mathematics since it covers an extremely wide and diverse array of notions. The general theme connecting them is as stated in the Wiki --- the number of independent parameters, but the specific definitions in specific areas will have lots of different technical details. The easiest example is the dimension of vector spaces, in particular Rn . In this case we have a notion of the [basis](https://en.wikipedia.org/wiki/Basis_(linear_algebra)), which is a set of vectors which are linearly independent (i.e. there are no coefficients a1, .., an such that a1e1+...+anen = 0) and such that any other vector can be expressed as their linear combination. If such a set exists, then one can prove that any other basis will have the same number of elements, and this number is then called the dimension of a vector space. More abstractly, we define the dimension of the standard vector space Rn of sequences (a1, ..., an) to be n, and we say that any other vector space has dimension n if it is isomorphic to Rn . At this point you need to prove that if n != m, then Rn and Rm are not isomorphic --- this is the same as the statement above that the size of a basis is well-defined.

A more complicated situation is when we consider manifolds, which are roughly the sets of solutions of equations (more rigorously, they only need to have this form locally). In this case we say that an open ball in R^n has dimension n, and any other manifold which locally looks like this open ball (is diffeomorphic to it) has dimension n. So the functions which parametrize our space must be infinitely smooth, which is not the case for the digit parametrization that you describe.
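To illustrate the smoothness requirement (my example, not the commenter's): the usual polar chart (r, θ) ↦ (r cos θ, r sin θ) is infinitely smooth, and its Jacobian determinant equals r, so it is a valid coordinate chart everywhere away from the origin. A quick numerical check with central differences:

```python
import math

def chart(r, th):
    # Polar -> Cartesian: a smooth chart on the plane minus the origin.
    return r * math.cos(th), r * math.sin(th)

def jacobian_det(r, th, h=1e-6):
    # Central differences for the 2x2 Jacobian of (x, y) w.r.t. (r, theta).
    dx_dr = (chart(r + h, th)[0] - chart(r - h, th)[0]) / (2 * h)
    dy_dr = (chart(r + h, th)[1] - chart(r - h, th)[1]) / (2 * h)
    dx_dt = (chart(r, th + h)[0] - chart(r, th - h)[0]) / (2 * h)
    dy_dt = (chart(r, th + h)[1] - chart(r, th - h)[1]) / (2 * h)
    return dx_dr * dy_dt - dx_dt * dy_dr

print(jacobian_det(2.0, 0.7))  # ~2.0, i.e. r: invertible wherever r > 0
```

The digit map has no such derivative: arbitrarily small changes in C can produce jumps in (r, θ).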

An even more complex situation is complex-analytic spaces. They look locally like an open subset of the complex vector space C^n, and the coordinate functions are required to be complex-analytic. Again, we say that such a space has dimension n, but note that you could also consider an open subset of C^n as an open subset of R^2n, so while the complex dimension of the space is n, its real dimension (meaning w.r.t. real numbers) is 2n. This is a general situation which illustrates the point above: there are different notions of dimension connected by the same theme.
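The C^n-versus-R^2n bookkeeping is just splitting each complex coordinate into its real and imaginary parts; a one-line sketch:

```python
# A vector in C^2 (complex dimension 2)...
v = [1 + 2j, 3 - 1j]
# ...re-read as a vector in R^4 (real dimension 4).
as_real = [part for z in v for part in (z.real, z.imag)]
print(as_real)  # [1.0, 2.0, 3.0, -1.0]
```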

It's also possible that dimension is not well-defined globally for the whole space but is well-defined, and different, locally: consider the union of a line and a point. A more complex example is two crossing lines: while it looks "sort of" one-dimensional, it doesn't have dimension 1 in the sense described above, since near the intersection point this space can't be parametrized by a single real coordinate. It's also possible to produce even more complicated examples where the dimension is entirely undefined.

An entirely different notion of dimension is Hausdorff dimension. It is used in the discussion of fractal sets. Unlike the dimension defined above, it can have any non-negative real value (not only integers).
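As a sketch of the idea (my example, assuming the standard middle-thirds Cantor set), the closely related box-counting dimension can be estimated directly: at construction level k the set is covered by 2^k intervals of length 3^-k, so the estimate log(boxes)/log(1/size) gives log 2 / log 3:

```python
import math

def box_dimension(level):
    # Box-counting estimate for the middle-thirds Cantor set:
    # 2^level covering intervals of size 3^-level.
    boxes = 2 ** level
    size = 3.0 ** -level
    return math.log(boxes) / math.log(1.0 / size)

print(box_dimension(10))  # ~0.6309, i.e. log 2 / log 3: a non-integer dimension
```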

It's also possible to define a notion of dimension which can be any integer (or any real number). For example, let our objects of study be pairs of vector spaces (V, W). We define the dimension of this pair to be dim(V, W) = dim V - dim W (if either of those dimensions isn't well-defined, then neither is the dimension of the pair). While it may seem silly, it is actually quite useful, since there are some very nontrivial operations on pairs of spaces which don't interact nicely with the ordinary vector space dimension but work very nicely with this one. You can start with the Wiki article on super vector spaces.
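A small sketch of why this "superdimension" behaves well (my illustration, assuming the standard Z/2-graded conventions): under the graded tensor product the even and odd dimensions combine so that superdimensions simply multiply:

```python
# Represent a super vector space (V, W) by its pair of dimensions (p, q);
# its superdimension is p - q (possibly negative).
def sdim(p, q):
    return p - q

def tensor(a, b):
    # Graded tensor product of dimension pairs:
    # even part = even*even + odd*odd, odd part = even*odd + odd*even.
    (p, q), (r, s) = a, b
    return (p * r + q * s, p * s + q * r)

A, B = (3, 1), (2, 2)
T = tensor(A, B)
print(T, sdim(*T))  # (8, 8) 0, matching sdim(A) * sdim(B) = 2 * 0 = 0
```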

Electrophysiology
1 point · 4 months ago

Not a mathematician, but perhaps you could use the definition "the maximum possible number of orthogonal coordinates needed to specify any point"? It would be impossible to define an orthogonal coordinate system with 3 coordinates in R^2. This would only make sense for vector spaces, of course.

You need a little bit more than just vector space properties in order for "orthogonal" to be well-defined. In R^n we have the dot product and can say "u and v are orthogonal if u . v = 0", but an arbitrary vector space need not have an equivalent to this. A vector space which has an operation analogous to the dot product, which allows you to talk about notions like distance and angles, is called an inner product space. If you phrase things in terms of linear independence rather than orthogonality, you get a notion of dimension that works for any vector space, not just inner product spaces.
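In R^n with the standard dot product, the orthogonality test is one line; a minimal sketch:

```python
def dot(u, v):
    # Standard dot product on R^n.
    return sum(a * b for a, b in zip(u, v))

u, v = (1, 2), (2, -1)
print(dot(u, v))  # 0: u and v are orthogonal in R^2
```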

Combinatorics | Graph Theory | Algorithms and Complexity
1 point · 3 months ago

I think your mapping is missing some details. Is C a number between 0 and 1? How do you encode (6000,0.001) and (6000,1)? etc.

But something like what you have described is possible. Still, there are two "dimensions" to such a system; you have essentially rearranged the encoding. When you add two vectors in R^2 you get a third vector, and you can multiply a vector by a scalar to get another vector. If you carry out the corresponding operations in your encoded space (albeit represented by a single real number), you should see that nothing has really changed. If you want to generate all numbers in the system via linear combinations of some set of numbers (according to these redefined operations), you will find that you need at least two: one carrying the even digit positions and one carrying the odd digit positions.

We say that a collection of vectors is "linearly independent" if there is no way to scale and add the vectors from the collection in a way that results in the zero vector. On the other hand, they are linearly dependent if you can get the zero vector by scaling and adding them. For example, {(1,0), (2,0)} is linearly dependent since 2*(1,0) + -1*(2,0) = (0,0). But {(1,0), (0,1)} is linearly independent, since no combination of scaling or adding will yield (0,0) from these two vectors. Since we're being precise, I should point out that we explicitly disallow the case where you scale each vector by a factor of zero, so you can't prove linear dependence using the fact that 0*(1,0) + 0*(0,1) = (0,0). However, it is true that {(1,0), (0,1), (0,0)} is linearly dependent since 0*(1,0) + 0*(0,1) + 1*(0,0) = (0,0). Given this, we can define the dimension of a vector space as the maximum possible size of a linearly independent collection of vectors. You can check for yourself that you can't have more than two linearly independent vectors in R^2, or more than three in R^3.
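To make the "no three independent vectors in R^2" claim concrete (my sketch, assuming the first two vectors are independent): given any third vector w, Cramer's rule produces explicit coefficients witnessing a dependence:

```python
def dependence_coeffs(u, v, w):
    # Solve a*u + b*v = w by Cramer's rule (assumes u, v independent),
    # so that a*u + b*v + (-1)*w = 0 is a nontrivial dependence.
    d = u[0] * v[1] - u[1] * v[0]
    a = (w[0] * v[1] - w[1] * v[0]) / d
    b = (u[0] * w[1] - u[1] * w[0]) / d
    return a, b, -1.0

u, v, w = (1, 0), (0, 1), (3, 4)
a, b, c = dependence_coeffs(u, v, w)
print(a, b, c)  # 3.0 4.0 -1.0
print(a * u[0] + b * v[0] + c * w[0], a * u[1] + b * v[1] + c * w[1])  # 0.0 0.0
```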

There are other definitions as well, mostly defined for other settings besides vector spaces, but they're more technical.

r/askscience