When you think about the Fibonacci sequence, you probably imagine a swirling vortex of oscillating points stretching outwards to infinity:
Okay, no, obviously you don’t. Yet.
When you think about the Fibonacci sequence, you probably flush with a latent rage when you remember that it is, more often than not, the way that we introduce the concept of “recursive functions” to new programmers, in some sort of cruel hazing intended to make it harder for them to ever appreciate how recursion can help them write better programs. Sometimes we even add memoization, and call it “dynamic programming,” in order to impress upon them that even the most trivial problems deserve complex, inefficient solutions.
Er, okay, you probably don’t think about the Fibonacci sequence much at all. It doesn’t, you know, come up very often.
But I hope that you will spend some time thinking about it with me today, because I think that the Fibonacci sequence – despite being a terrible showcase for recursion – is a really interesting vector for discussing some techniques from linear algebra.
| how to fibonacci | space complexity | time complexity |
| --- | --- | --- |
| insane recursion | linear | exponential |
| memoized insane recursion | linear | linear |
| trivial iteration | constant | linear |
| exponentiation-by-squaring | constant | logarithmic |
| eigendecomposition | let's talk | |
We will spend no time on the recursive Fibonaccis; I’m sure that you’ve seen them before. Instead, let’s skip right to the “obvious” way to calculate Fibonacci numbers:
```javascript
function fib(n) {
  if (n == 0) {
    return 0;
  }
  let current = 1;
  let previous = 0;
  for (let i = 0; i < n - 1; i++) {
    const next = current + previous;
    previous = current;
    current = next;
  }
  return current;
}
```
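A quick sanity check, in case you’re following along in a console:

```javascript
// Zero-indexed: fib(0) is 0, fib(1) is 1, and so on.
console.log(fib(0));  // 0
console.log(fib(1));  // 1
console.log(fib(10)); // 55
```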
No recursion, no memoization. We have two pieces of state: the “current number” and the “previous” number, and at every step of the iteration we advance both of these to new values.
But there’s something very interesting about this function: the new values for our state are a linear combination of the old values.
```
current'  = current + previous
previous' = current
```
Using x' to mean “the next value for x.”
And you might recognize this as a “system of linear equations.” I think it’s more obvious when we write it like this:
```
current'  = 1 * current + 1 * previous
previous' = 1 * current + 0 * previous
```
And you might remember that there’s another, more cryptic way to write down a system of linear equations:
$$\begin{bmatrix} \text{current}' \\ \text{previous}' \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \text{current} \\ \text{previous} \end{bmatrix}$$
This is exactly the same thing! It’s just a shorthand notation – another way of writing down the same system of equations.
Here, let’s test it out to make sure of that:
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 8 \\ 5 \end{bmatrix} = \begin{bmatrix} 1 \cdot 8 + 1 \cdot 5 \\ 1 \cdot 8 + 0 \cdot 5 \end{bmatrix} = \begin{bmatrix} 13 \\ 8 \end{bmatrix}$$
Well that’s exactly what we expected – 13 is the next Fibonacci number in the sequence, and 8 was the previous one.
We can, of course, repeat this process, by applying the system of linear equations again:
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 13 \\ 8 \end{bmatrix} = \begin{bmatrix} 21 \\ 13 \end{bmatrix}$$
Or, to put that another way:
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 8 \\ 5 \end{bmatrix} = \begin{bmatrix} 21 \\ 13 \end{bmatrix}$$
And here’s why we care: matrix multiplication is associative, so we can actually think of that like this:
$$\left( \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \right) \begin{bmatrix} 8 \\ 5 \end{bmatrix} = \begin{bmatrix} 21 \\ 13 \end{bmatrix}$$
Or:
$$\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 8 \\ 5 \end{bmatrix} = \begin{bmatrix} 21 \\ 13 \end{bmatrix}$$
In other words: given a system of linear equations to find the next state of our iteration, we can square the matrix-of-coefficients of the system to find a new system of linear equations that represents “two states from now.”
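If you want to check that with a computer instead of on paper, here’s a minimal sketch of 2×2 matrix multiplication – `mat2mul` is just a name I made up, not anything standard:

```javascript
// Multiply two 2x2 matrices, each represented as [[a, b], [c, d]].
function mat2mul(m, n) {
  return [
    [m[0][0] * n[0][0] + m[0][1] * n[1][0], m[0][0] * n[0][1] + m[0][1] * n[1][1]],
    [m[1][0] * n[0][0] + m[1][1] * n[1][0], m[1][0] * n[0][1] + m[1][1] * n[1][1]],
  ];
}

const F = [[1, 1], [1, 0]];
console.log(mat2mul(F, F)); // [[2, 1], [1, 1]] -- the "two states from now" matrix
```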
Of course we don’t need matrices to do this. We can compute a formula for “two steps” of our iteration using term substitution:
```
current'   = current + previous
previous'  = current

current''  = current' + previous'
previous'' = current'

current''  = (current + previous) + current
previous'' = (current + previous)

current''  = 2 * current + previous
previous'' = current + previous
```
Which is a new system of linear equations – which we can represent as a matrix as well.
$$\begin{bmatrix} \text{current}'' \\ \text{previous}'' \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} \text{current} \\ \text{previous} \end{bmatrix}$$
We got the same result, because of course we did: multiplying by this matrix really means “advance to the next state.” Multiplying twice means “advance to the next state and then advance to the next state after that.”
And we can keep going. What’s the state three steps from now?
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 3 & 2 \\ 2 & 1 \end{bmatrix}$$
Or, more concisely:
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}^{3} = \begin{bmatrix} 3 & 2 \\ 2 & 1 \end{bmatrix}$$
If we do this repeatedly, you might notice a familiar pattern start to emerge:
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}^{4} = \begin{bmatrix} 5 & 3 \\ 3 & 2 \end{bmatrix} \qquad \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}^{5} = \begin{bmatrix} 8 & 5 \\ 5 & 3 \end{bmatrix} \qquad \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}^{6} = \begin{bmatrix} 13 & 8 \\ 8 & 5 \end{bmatrix}$$
Which makes sense, doesn’t it? Because if we multiply this matrix with the vector [1 0] – our starting values – then it’s going to advance forward through six steps of the Fibonacci sequence in a single leap. So naturally we have to be encoding something about the sequence itself in the matrix – otherwise we wouldn’t be able to advance by N steps in constant time.
Now, the insight that takes this from linear to logarithmic is that we don’t have to do this multiplication one step at a time. We can multiply in leaps and bounds.
Let’s call our original starting matrix F, for Fibonacci.
$$F = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}$$
We’ve already calculated F²:
$$F^2 = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$$
And now it’s only one more matrix multiplication to calculate F⁴:
$$F^4 = F^2 F^2 = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 5 & 3 \\ 3 & 2 \end{bmatrix}$$
We can use this fact to calculate arbitrary matrix powers, by breaking the problem up into products of powers of two. For example:

$$F^{13} = F^8 F^4 F^1$$
And by doing that, we can calculate the nth Fibonacci number in only log₂(n) steps.
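Here’s a sketch of what that looks like in code, reusing the `mat2mul` helper from earlier (`mat2pow` and `fibLog` are names I invented):

```javascript
// Compute m^n by repeated squaring: O(log n) matrix multiplications.
function mat2pow(m, n) {
  let result = [[1, 0], [0, 1]]; // the identity matrix
  while (n > 0) {
    if (n % 2 === 1) {
      result = mat2mul(result, m); // fold in the current power of two
    }
    m = mat2mul(m, m);
    n = Math.floor(n / 2);
  }
  return result;
}

// The top-right entry of F^n is the nth Fibonacci number.
function fibLog(n) {
  return mat2pow([[1, 1], [1, 0]], n)[0][1];
}

console.log(fibLog(10)); // 55
```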
Okay, so that’s fun and all, but that’s not really what this blog post is about.
I don’t know about you, but if I came across this matrix in the wild, I would not think “Oh, that’s the Fibonacci sequence”:
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}$$
I would probably think “huh, I dunno, it’s like, a reflection, sort of, or maybe a shear; what’s a shear again, hang on, I need to see a picture.”
That is, I am used to thinking of matrices as transformations of points in space – scales and rotations and things like that. I’m not really used to thinking of matrices as “state machines.”
But this duality is the beauty of linear algebra! Matrices are transformations of points in space and graphs and state machines all at the same time.
So let’s take a look at the Fibonacci transformation, applied to arbitrary points in ℝ²:
That animation is progressively applying and removing the transformation, so we can get some intuition for how it deforms a square. But we’re really more interested in repeated applications of the transformation. So let’s start with the same points, but multiply by that same matrix over and over:
Interesting. Over time, they have a tendency to stretch out along the long diagonals of this rhombus. Let’s zoom out:
Every time a point reflects over that diagonal, it reflects at a slightly different angle, slowly converging towards this straight line.
You might already have an idea of what that straight line means. You might know that, if you look at the ratio between subsequent Fibonacci numbers, they approximate the golden ratio:¹
```
 1 / 1  = 1
 2 / 1  = 2
 3 / 2  = 1.5
 5 / 3  = 1.666...
 8 / 5  = 1.6
13 / 8  = 1.625
21 / 13 = 1.61538462
34 / 21 = 1.61904762
```
The golden ratio is irrational, but every subsequent Fibonacci number is a better and better rational approximation. (The golden ratio is around 1.618033988749 – so we’re already pretty close.)
It’s interesting to see that these estimations don’t “sneak up” on the golden ratio. In fact they alternate between over- and under-estimating it. Which is exactly what we saw in our visualization!
If you return to the “state machine” interpretation of our matrix, remember that the value we’re plotting as x is really “the current Fibonacci number,” and the value we’re plotting as y is “the previous Fibonacci number.” So the ratio between successive numbers – x/y – is just the slope of the lines that our points are traveling along. And we could see points reflecting over that diagonal, over- and under-shooting it, slowly converging… towards the line whose slope is the golden ratio.

Which is, in fact, the “long diagonal” of our rhombus.
And this makes sense, I think – this isn’t some weird coincidence. The golden ratio is all about the ratio between wholes and parts being the same as the ratio between the parts themselves. And the Fibonacci sequence is all about adding together parts to become wholes that become parts in the next number of the sequence.
Here, our two parts are the “current” and “previous” values, and the whole that they make is the “next” Fibonacci number. Even if we start with two numbers that are completely unrelated to the Fibonacci sequence – say, 8 and 41 – the simple way that we pick the next number will cause us to approximate the golden ratio after only a few iterations:
```
8 / 41 = 0.1951219
(8 + 41 = 49) / 8 = 6.125
(49 + 8 = 57) / 49 = 1.16326531
(57 + 49 = 106) / 57 = 1.85964912
(106 + 57 = 163) / 106 = 1.53773585
```
Why is that? Well, because of the definition of the golden ratio.
This is extremely unrigorous, but I can try to sketch out a very informal argument for why this is:
Let’s say the ratio between A and B is some unknown quantity S. It’s not the golden ratio, it might not be anywhere near the golden ratio; we have no idea what it is. In my 8 and 41 example, it wasn’t even in the right ballpark.

The next element in our series is A + B. So the ratio between that next element and A will be (A + B)/A = 1 + B/A = (1 + (1/S)).
We still don’t know what S is! But if we do this again, the next ratio will be 1 + (1/(1 + (1/S))). After each iteration, the original S will become a smaller and smaller component in the final answer, until eventually we’ll just have an expression that looks like this:

$$1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{S}}}}$$
Whatever our original S was, its contribution to the final result will eventually be negligible. Even after just a few iterations, we can see that the choice of S doesn’t make a huge difference in the outcome: plug a few very different values of S into that expression, and the results all land in the same neighborhood.
And of course even that will fade away after a few more steps.
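If you want to watch that happen, here’s a tiny sketch – the starting ratios are arbitrary:

```javascript
// Iterate S -> 1 + 1/S from a few unrelated starting ratios.
for (const s of [0.195, 6.125, 100, -3]) {
  let ratio = s;
  for (let i = 0; i < 20; i++) {
    ratio = 1 + 1 / ratio;
  }
  console.log(`${s} -> ${ratio}`); // they all land near 1.618033...
}
```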
In fact the version of that expression with an infinite number of steps – where there is no S at all, but just an infinite sequence of divisions – is the “continued fraction” expression of the golden ratio:

$$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \cdots}}}$$
Except, well, I’m lying here.

That residue will not fade away for all values of S. First of all, if S is zero, it doesn’t matter how small that term gets – you’re not going to squeeze a number out of it.

But there is another, more interesting value of S that breaks this rule. There is one other number that will not tend towards 1.618 when you repeatedly take its reciprocal and add one. It is the number that is already one plus its own reciprocal:

$$\varphi = 1 + \frac{1}{\varphi}$$

Oh, gosh, yes, the golden ratio is one plus its own reciprocal. But I was talking about the other number with that property:

$$1 - \varphi = 1 + \frac{1}{1 - \varphi}$$

This number is (1 - φ), and it is also -φ⁻¹. The golden ratio is weird like that.
That number is a weird number, because if we have two numbers with that ratio – say, -1.236 and 2 – and we applied our transformation, those points would not spread their wings towards the diagonal. What would they do instead?
Aha. Well, that makes sense.
Some points tend towards the top right, some points tend towards the bottom left, but some points get stuck. Sucked into the origin, cursed to forever travel along this one straight line.
Points along the long diagonal also travel in a straight line – they don’t bounce over the diagonal, because they’re already on it. Let’s just focus on these perfectly straight lines:
Not all matrices will produce straight lines like this when you apply them repeatedly. A rotation matrix, for example, will always change the direction of every single line each time you multiply a point by it.
These straight lines are called eigenvectors, which is German² for something like “intrinsic vector” or “characteristic vector.”³
Well, to be more precise, any particular point on those straight lines is an “eigenvector.” The vector [φ 1] is an eigenvector, and so is [-2.1φ -2.1]. And the vector [-1/φ 1] is an eigenvector, and so is [-2/φ 2].

But all of the eigenvectors on each line are “similar,” so I’m just going to pick [φ 1] and [(1-φ) 1] as our two representative eigenvectors.
When you multiply an eigenvector of a matrix by the matrix itself, you get back a new eigenvector on “the same line.” That is to say, you get back another eigenvector that is just some scalar multiple of the original eigenvector.
For example, when we multiply our first eigenvector by the Fibonacci matrix:
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \varphi \\ 1 \end{bmatrix} = \begin{bmatrix} \varphi + 1 \\ \varphi \end{bmatrix}$$
Well… it’s not obvious that this is the case, but we actually just scaled the vector by φ, because φ² = φ + 1. The golden ratio is weird.
Similarly:
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 - \varphi \\ 1 \end{bmatrix} = \begin{bmatrix} 2 - \varphi \\ 1 - \varphi \end{bmatrix}$$
We scaled it by (1 - φ), again somewhat cryptically:

$$(1 - \varphi)^2 = 2 - \varphi$$
So when we multiply our Fibonacci matrix with its eigenvectors, we scale those eigenvectors by φ and (1 - φ). These scaling factors are called “eigenvalues,” and it’s weird that they look so much like the eigenvectors. That’s… that’s a weird Fibonacci coincidence, a weird golden ratio thing – not a pattern that holds for eigenvectors and eigenvalues in general.
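This is easy to check numerically – the `step` helper here is just our matrix applied by hand:

```javascript
const phi = (1 + Math.sqrt(5)) / 2;

// One application of the Fibonacci matrix to a vector [x, y].
const step = ([x, y]) => [x + y, x];

console.log(step([phi, 1]));     // [2.618..., 1.618...], which is phi * [phi, 1]
console.log(step([1 - phi, 1])); // [0.381..., -0.618...], which is (1 - phi) * [1 - phi, 1]
```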
Okay, so why do we care about this?
Well, once we know the eigenvectors and eigenvalues of the matrix, we can actually perform repeated matrix multiplication in constant time.
…Sort of. You have to imagine a big asterisk after that sentence, which I will explain below.
To explain how, we’re going to need to do a little bit of linear algebra. But first, I just want to restate everything I’ve said so far in explicit notation:
Multiplying F with each eigenvector is the same as multiplying that eigenvector by its corresponding eigenvalue. So:
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \varphi \\ 1 \end{bmatrix} = \varphi \begin{bmatrix} \varphi \\ 1 \end{bmatrix}$$
And:
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 - \varphi \\ 1 \end{bmatrix} = (1 - \varphi) \begin{bmatrix} 1 - \varphi \\ 1 \end{bmatrix}$$
Right. But there’s actually a way to write those two equalities as a single equality:
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \varphi & 1 - \varphi \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} \varphi & 1 - \varphi \\ 1 & 1 \end{bmatrix} \begin{bmatrix} \varphi & 0 \\ 0 & 1 - \varphi \end{bmatrix}$$
Instead of writing out each eigenvector as a separate column vector, I stuck them into a matrix. And instead of scaling each one by a scalar, I multiplied that matrix by a diagonal matrix.
This is the same statement, though: right-multiplication by a diagonal matrix just means “scale the columns of the left matrix by the corresponding diagonal value.” We can gut check this by performing the multiplication, and seeing that we’re making the exact same statements as before:
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \varphi & 1 - \varphi \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} \varphi + 1 & 2 - \varphi \\ \varphi & 1 - \varphi \end{bmatrix}$$

$$\begin{bmatrix} \varphi & 1 - \varphi \\ 1 & 1 \end{bmatrix} \begin{bmatrix} \varphi & 0 \\ 0 & 1 - \varphi \end{bmatrix} = \begin{bmatrix} \varphi + 1 & 2 - \varphi \\ \varphi & 1 - \varphi \end{bmatrix}$$
But now we’re making these statements about both eigenvectors in parallel.
This equality – this statement about how multiplication by the Fibonacci matrix scales eigenvectors – is the secret to computing Fibonacci numbers in “constant time”:
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \varphi & 1 - \varphi \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} \varphi & 1 - \varphi \\ 1 & 1 \end{bmatrix} \begin{bmatrix} \varphi & 0 \\ 0 & 1 - \varphi \end{bmatrix}$$
The trick here is that we’re going to right-multiply both sides of the equation by the inverse of our eigenvector matrix. This will eliminate it from the left-hand side entirely:
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} \varphi & 1 - \varphi \\ 1 & 1 \end{bmatrix} \begin{bmatrix} \varphi & 0 \\ 0 & 1 - \varphi \end{bmatrix} \begin{bmatrix} \varphi & 1 - \varphi \\ 1 & 1 \end{bmatrix}^{-1}$$
And now we have a new way to calculate the “next Fibonacci number.” Previously we knew how to do it by multiplying with the matrix F. Now we can do it by multiplying with, uhh, this inverse eigenvector matrix thing, and then the diagonal matrix of eigenvalues, and then the non-inverse matrix-of-eigenvectors.
Much simpler, right?
This is getting really long and complicated and I’m going to run out of space soon, so let’s give these things names:

$$F = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \qquad Q = \begin{bmatrix} \varphi & 1 - \varphi \\ 1 & 1 \end{bmatrix} \qquad \Lambda = \begin{bmatrix} \varphi & 0 \\ 0 & 1 - \varphi \end{bmatrix}$$
That’s an upper-case lambda, and look, it’s just the convention for the eigenvalue matrix. Eigenvalues are called λ, and when you put them in a diagonal matrix you call it Λ. I don’t make the rules here.
Now that we have some abbreviations, we can write that as the much more palatable:

$$F = Q \Lambda Q^{-1}$$
Now, the whole reason that we’re doing this is to take advantage of another trick of associativity:

$$F^2 = Q \Lambda Q^{-1} Q \Lambda Q^{-1} = Q \Lambda (Q^{-1} Q) \Lambda Q^{-1} = Q \Lambda^2 Q^{-1}$$
That was very abstract, so take a second to think about what this means. F² is the matrix that calculates two steps of our Fibonacci state machine. And we can use this same trick to calculate any power of F, just by calculating powers of Λ.
And this is good, because Λ is a diagonal matrix. And it’s really easy to exponentiate a diagonal matrix! You just exponentiate each element of its diagonal:

$$\Lambda^n = \begin{bmatrix} \varphi^n & 0 \\ 0 & (1 - \varphi)^n \end{bmatrix}$$

We don’t even need to use repeated squaring.
This means that we can actually calculate arbitrary powers of F in constant time… if we pretend that exponentiation of a scalar is a constant time operation.
It’s not, though. I mean, yes, exponentiation of an IEEE 754 64-bit floating-point number is constant time, but that’s not what we said. We’re talking about exponentiating an irrational number, and my computer can only represent approximations of that number, and that floating-point error adds up fast. So in order to actually use this to compute large Fibonacci numbers, we would need to use arbitrary-precision floating point, and exponentiating arbitrary-precision values is not constant time. It’s… I don’t know, probably logarithmic? But logarithmic in both the exponent and the size of the result, and the size of the result is increasing exponentially, so it nets out to linear? I don’t actually know.⁴
But I don’t want to spoil the fun. This is still a very interesting trick, and it’s worth understanding how it works, even if it doesn’t actually give us a way to compute arbitrarily large Fibonacci numbers in constant time.
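For what it’s worth: if you multiply QΛⁿQ⁻¹ against our starting vector [1 0] and simplify, the whole thing collapses into a single scalar formula, usually called Binet’s formula. Here’s a floating-point sketch, with all of the precision caveats above still applying:

```javascript
const phi = (1 + Math.sqrt(5)) / 2;
const psi = 1 - phi; // the other eigenvalue

// fib(n) = (phi^n - psi^n) / sqrt(5) -- Q * Lambda^n * Q^-1, multiplied out by hand.
function fibClosed(n) {
  return Math.round((Math.pow(phi, n) - Math.pow(psi, n)) / Math.sqrt(5));
}

console.log(fibClosed(10)); // 55
console.log(fibClosed(40)); // 102334155 -- but rounding error creeps in as n grows
```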
So: what are we doing.
We moved a bunch of symbols around, and we wound up with this expression:

$$F^n = Q \Lambda^n Q^{-1}$$
But I don’t really know what Q⁻¹ means, and it’s not really clear to me why I should care. Why is multiplying by these three weird matrices the same as multiplying by F? What, intuitively, are we doing here?
At a high level, we’re translating points into a different coordinate system, doing something to them there, and then translating them back into our original coordinate system.
You already know that we can write any point in space as a vector – X and Y coordinates. That’s what we’ve been doing this whole time.
But we can also write a point in space as the sum of two other vectors. Like, [5 3]. We could write that as [1 2] + [4 1] instead. Which, okay, sure. That’s not very interesting.
One “interesting” way to write [5 3] is as the sum of these two vectors: [5 0] + [0 3]. Or, to say that another way:
$$5 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + 3 \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 5 \\ 3 \end{bmatrix}$$
This is interesting because [1 0] and [0 1] are basically the “X axis” and “Y axis.” And we can think of the point [5 3] as a (trivial!) linear combination of these two axes.

But we could pick different axes. We can pick any vectors we want as our axes,⁵ so let’s pretend for a moment that our axes are [1 1] and [1 -1] instead. Which means that we would write [5 3] as:
$$4 \begin{bmatrix} 1 \\ 1 \end{bmatrix} + 1 \begin{bmatrix} 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 5 \\ 3 \end{bmatrix}$$
Or, to write that another way:
$$\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 4 \\ 1 \end{bmatrix} = \begin{bmatrix} 5 \\ 3 \end{bmatrix}$$
Now we can think of this vector-of-coefficients, [4 1], as another way to identify the point in space x=5 y=3, when we’re pretending that our axes are [1 1] and [1 -1]. Except in linear algebra we’d call these “basis vectors” instead of “axes.”
But how did we find the coefficients [4 1]? Well, I just found that one by hand; it was pretty easy. But in general, if we want to express some other point using these basis vectors – let’s say [63 -40] – we’ll need to solve an equation that looks like this:
$$\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} ? \\ ? \end{bmatrix} = \begin{bmatrix} 63 \\ -40 \end{bmatrix}$$
And we can do that by, you know, regular algebra. We “divide” both sides by our matrix-of-basis-vectors, by left-multiplying with the inverse matrix:
$$\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}^{-1} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} ? \\ ? \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}^{-1} \begin{bmatrix} 63 \\ -40 \end{bmatrix}$$
And after the inverses cancel, we’re left with the following formula:
$$\begin{bmatrix} ? \\ ? \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}^{-1} \begin{bmatrix} 63 \\ -40 \end{bmatrix}$$
And the problem reduces to matrix inversion.
Now, I don’t know about you, but I don’t remember how to invert a matrix. I know there’s a formula in two dimensions, but the only thing I remember about it is that it involves calculating the determinant, and I forgot how to do that too. So let’s just ask a computer to invert it for us:
$$\begin{bmatrix} ? \\ ? \end{bmatrix} = \begin{bmatrix} 0.5 & 0.5 \\ 0.5 & -0.5 \end{bmatrix} \begin{bmatrix} 63 \\ -40 \end{bmatrix}$$
Hmm. I feel like I probably could’ve worked that out myself.
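For the record, the two-dimensional formula turns out to be pretty simple: swap the diagonal entries, negate the off-diagonal ones, and divide everything by the determinant. A quick sketch (`mat2inv` is my own name for it):

```javascript
// Invert a 2x2 matrix [[a, b], [c, d]].
function mat2inv([[a, b], [c, d]]) {
  const det = a * d - b * c; // assumes the matrix is invertible (det != 0)
  return [[d / det, -b / det], [-c / det, a / det]];
}

console.log(mat2inv([[1, 1], [1, -1]])); // [[0.5, 0.5], [0.5, -0.5]]
```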
But that lets us solve the equation, and figure out how to write the point [63 -40] as a combination of the vectors [1 1] and [1 -1]:
$$\begin{bmatrix} 0.5 & 0.5 \\ 0.5 & -0.5 \end{bmatrix} \begin{bmatrix} 63 \\ -40 \end{bmatrix} = \begin{bmatrix} 11.5 \\ 51.5 \end{bmatrix}$$

(And sure enough: 11.5 [1 1] + 51.5 [1 -1] = [63 -40].)
Great! We did it.
And here’s why we care:
We can use this exact same trick to write down the points in our Fibonacci sequence as a linear combination of our two eigenvectors. Like this:
Click or tap to add points there, to see how we can write each point in space as a combination of the “short diagonal” and “long diagonal” eigenvectors of our matrix.
Normally to identify a point in space we would give its XY coordinates: go this far along the X-axis, then this far along the Y-axis. But here we’re representing points in “φ” and “1 - φ” coordinates: go this far along the short diagonal, then this far along the long diagonal.
But how do we know how far to go along these diagonals? Well, we “divide by” the eigenvectors. In other words, we have to compute the inverse of this matrix:

$$\begin{bmatrix} \varphi & 1 - \varphi \\ 1 & 1 \end{bmatrix}^{-1} = \frac{1}{\sqrt{5}} \begin{bmatrix} 1 & \varphi - 1 \\ -1 & \varphi \end{bmatrix}$$

Now, matrix inversion is boring, so I’m just presenting the answer here. (The determinant works out to 2φ - 1, which is exactly √5.) This inverse matrix is how we can convert from “XY coordinates” into “eigenvector coordinates.”
Let’s work through a concrete example to make sure this works.
[8 5] is a point on the Fibonacci sequence. We can express that as a combination of eigenvectors instead:

$$\frac{1}{\sqrt{5}} \begin{bmatrix} 1 & \varphi - 1 \\ -1 & \varphi \end{bmatrix} \begin{bmatrix} 8 \\ 5 \end{bmatrix} \approx \begin{bmatrix} 4.96 \\ 0.04 \end{bmatrix}$$
4.96 and 0.04 are the coefficients we will pair with our eigenvectors: we have to travel 4.96 units along the long diagonal, and 0.04 units along the short diagonal, to arrive at the point [8 5]:
$$4.96 \begin{bmatrix} \varphi \\ 1 \end{bmatrix} + 0.04 \begin{bmatrix} 1 - \varphi \\ 1 \end{bmatrix} = \begin{bmatrix} 8.025 \\ 4.96 \end{bmatrix} + \begin{bmatrix} -0.025 \\ 0.04 \end{bmatrix} = \begin{bmatrix} 8 \\ 5 \end{bmatrix}$$
Great. It worked!
But that wasn’t very interesting – we just converted our point into the eigenvector basis and then right back into the normal XY basis. It was kind of a pointless transformation.
But we don’t have to do the unconversion immediately. We can keep the point in this “eigenbasis” for a little while, and do stuff to the vector-of-coefficients, and then convert it back.
Specifically, we can scale the coefficients by the eigenvalues of our Fibonacci matrix. We can multiply the “long diagonal” component by φ², and multiply the “short diagonal” component by (1 - φ)², and we’ll have a new point: something close to [12.985 0.015]. And if we convert that back into XY coordinates:
$$12.985 \begin{bmatrix} \varphi \\ 1 \end{bmatrix} + 0.015 \begin{bmatrix} 1 - \varphi \\ 1 \end{bmatrix} \approx \begin{bmatrix} 21 \\ 13 \end{bmatrix}$$
We just advanced our point two more steps along the Fibonacci sequence, with nothing more than scalar exponentiation and a constant number of vector operations.
This is exactly the same as the expression:

$$Q \Lambda^2 Q^{-1} \begin{bmatrix} 8 \\ 5 \end{bmatrix} = F^2 \begin{bmatrix} 8 \\ 5 \end{bmatrix} = \begin{bmatrix} 21 \\ 13 \end{bmatrix}$$
But as someone with no background in linear algebra, I find it easy to get lost in the notation, so it’s easier for me to think about this as operations on separate column vectors rather than as operations on matrices. Even though they are the same thing.
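Here’s what that column-vector view looks like as code – a sketch of the whole pipeline, where `toEigen`, `advance`, and `fromEigen` are names I invented:

```javascript
const phi = (1 + Math.sqrt(5)) / 2;
const sqrt5 = Math.sqrt(5); // also 2 * phi - 1, the determinant of Q

// Q^-1: convert XY coordinates into eigenvector coordinates.
const toEigen = ([x, y]) => [(x + (phi - 1) * y) / sqrt5, (-x + phi * y) / sqrt5];

// Lambda^n: scale each coefficient by its eigenvalue, n steps at once.
const advance = ([a, b], n) => [a * Math.pow(phi, n), b * Math.pow(1 - phi, n)];

// Q: convert back into XY coordinates.
const fromEigen = ([a, b]) => [a * phi + b * (1 - phi), a + b];

// Jump the point [8, 5] forward two Fibonacci steps:
console.log(fromEigen(advance(toEigen([8, 5]), 2))); // approximately [21, 13]
```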
Of course, calculating two steps of the Fibonacci sequence in constant time isn’t that impressive. But we can do the same with φ¹⁰⁰⁰, and use that to calculate the thousandth Fibonacci number in constant time.
…Assuming we could calculate φ¹⁰⁰⁰ in constant time. Which we can’t, in real life.
Alright.
The post is over; you saw the trick. “Eigendecomposition,” this is called.
I glossed over a few steps – I spent absolutely no time explaining how I knew the eigenvalues and eigenvectors of this matrix, for example. I just asserted that they were related to the golden ratio. But in reality you can solve for them, or ask a computer to do it for you. It’s pretty mechanical, like matrix inversion – it seems linear algebra is best explored with a REPL nearby.
In any case, I think that the why of eigendecomposition is more interesting than the how.
As for the Fibonacci sequence… well, this is a pretty terrible way to actually calculate Fibonacci numbers. Even if we pretend that we only care about numbers that can fit in IEEE 754 double-precision floats, we still can’t use this technique to calculate very many Fibonacci numbers, because the floating-point error adds up too quickly.
But if we only care about double-precision floats… well, there is one more Fibonacci implementation to consider. It’s an algorithm that runs in constant time, and constant space, and covers the full gamut of floating-point numbers without accumulating any error at all…
```javascript
const fibs = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040, 1346269, 2178309, 3524578, 5702887, 9227465, 14930352, 24157817, 39088169, 63245986, 102334155, 165580141, 267914296, 433494437, 701408733, 1134903170, 1836311903, 2971215073, 4807526976, 7778742049, 12586269025, 20365011074, 32951280099, 53316291173, 86267571272, 139583862445, 225851433717, 365435296162, 591286729879, 956722026041, 1548008755920, 2504730781961, 4052739537881, 6557470319842, 10610209857723, 17167680177565, 27777890035288, 44945570212853, 72723460248141, 117669030460994, 190392490709135, 308061521170129, 498454011879264, 806515533049393, 1304969544928657, 2111485077978050, 3416454622906707, 5527939700884757, 8944394323791464];

const fib = (n) => fibs[n];
```
But it’s more fun to overthink it.
1. In case you are one of today’s lucky 10,000: the golden ratio is also very close to the conversion rate between miles and kilometers, so you can use Fibonacci numbers to approximate conversions between miles and kilometers in your head. For example, 80km ≈ 50mi. This is a weirdly good conversion – the exact answer is 49.7097mi.

   It even works when you don’t have a round Fibonacci number to work with. 120kph is probably around 90% of 130kph, and 90% of 80mph is 72mph… the correct answer would be 74.6mph, but we got a decent ballpark with nothing but eyeball math. ↩︎

2. Er, the German word is Eigenvektor, so it’s like… half a loan word? A loan sub-word? Whatever, the “eigen” part is the relevant bit here. ↩︎

3. Straight lines are eigenvectors, but eigenvectors are not necessarily straight lines. Rotation matrices also have eigenvectors, but they have complex eigenvectors. Straight lines like this are real eigenvectors. ↩︎

4. The exact same argument applies to the “logarithmic” exponentiation-by-squaring algorithm as well – squaring arbitrarily large numbers requires arbitrary-precision multiplication. It feels different to me, though, because of floating-point error: when you’re exponentiating eigenvalues, you need to use arbitrary-precision arithmetic even when your final answer could fit into a double. But the integer squaring approach only needs bigints when the Fibonacci numbers themselves become too large to fit into words. ↩︎

5. As long as the vectors we pick are “linearly independent” – there’s no way to express [5 3] as a combination of [1 0] and [2 0], for example. ↩︎