Alexander Bass

Plus Nine Divide Five

Here’s a fun little math problem: take any number, add nine, then divide by five. We’ll use 15 as a starting example

$$\frac{15+9}{5}=4.8$$

Then add nine and divide by five again, over and over

$$\frac{4.8+9}{5}=2.76 \qquad \frac{2.76+9}{5}=2.352 \qquad \frac{2.352+9}{5}=2.2704$$

[5 more skipped]


The value seems to converge to 2.25. Let’s try another starting number, −58, and repeat the process. And let’s be a bit more formal with our notation. Each result of adding nine and dividing by five is a step in a sequence, so we can define the next term of the sequence in terms of the last

$$S_{n+1}=\frac{S_n+9}{5}$$

And we’ll define $S_0=-58$

$$S_0=-58 \quad S_1=-9.8 \quad S_2=-0.16 \quad S_3=1.768 \quad S_4=2.1536 \quad S_5=2.23072 \quad S_6=2.246144 \quad S_7\approx 2.249 \quad S_8\approx 2.250$$

Funnily enough, even with a different starting value, the process converges to the same value of 2.25. Try a few starting numbers yourself; you’ll see that they all converge to 2.25.
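If you’d rather let a computer do the arithmetic, here’s a quick Python sketch (the helper name is mine, purely for illustration):

```python
def add_nine_divide_five(x, steps=50):
    """Repeatedly apply x -> (x + 9) / 5 and return the final value."""
    for _ in range(steps):
        x = (x + 9) / 5
    return x

# Wildly different starting values all land in the same place.
for start in (15, -58, 1000, 0.5):
    print(start, "->", add_nine_divide_five(start))
```

Every starting value ends up within rounding distance of 2.25.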

What’s the big deal with 2.25? Well, let’s write it as a fraction

$$2.25=\frac{9}{4}$$
Nine fourths is pretty similar to the rule used for the problem add 9, divide by 5. Could it be that the value converges to a fraction: the value you add (9) over one less than the value you divide by (5 − 1 = 4)?

Now, we can redefine the problem: add 7, divide by 2. If the hypothesis is correct, we should see all starting values converge to 7/(2 − 1) = 7. We redefine the sequence

$$S_{n+1}=\frac{S_n+7}{2}$$
Then try it out with a few starting values of your own; if you continue the process long enough, you will see the value converge to 7.
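Here’s a small Python sketch of the general process for experimenting (the helper name `iterate` is mine):

```python
def iterate(start, a, b, steps=60):
    """Repeatedly apply x -> (x + a) / b and return the final value."""
    x = start
    for _ in range(steps):
        x = (x + a) / b
    return x

# "Add 7, divide by 2" from two very different starting points:
print(iterate(100, 7, 2))
print(iterate(-3, 7, 2))
```

Both runs settle on (very nearly) 7.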

Let’s notate the problem so it doesn’t depend on any 7’s or 2’s

$$S_{n+1}=\frac{S_n+a}{b}$$
And with what we learned from the two examples above, we can state our hypothesis

$$S_n \approx \frac{a}{b-1}$$
The terms of the sequence are approximately equal to a over b minus one, and the approximation becomes more accurate with each step. We could even be fancy and write it with a limit

$$\lim_{n\to\infty}S_n=\frac{a}{b-1}$$
But how do we prove it?

Into the weeds

Instead of evaluating each step in the sequence with numbers, what if we were to do it with algebra?

Setting everything up, we have the definition of the sequence

$$S_{n+1}=\frac{S_n+a}{b}$$

given that $S_0$ is some unimportant constant.

Let’s evaluate the first few steps with algebra. Of course $S_0=S_0$, but the following ones are more interesting

$$S_1=\frac{S_0+a}{b}=\frac{S_0}{b}+\frac{a}{b}$$

$$S_2=\frac{S_1+a}{b}=\frac{\frac{S_0+a}{b}+a}{b}=\frac{S_0}{b^2}+\frac{a}{b^2}+\frac{a}{b}$$

$$S_3=\frac{S_2+a}{b}=\frac{\frac{\frac{S_0+a}{b}+a}{b}+a}{b}=\frac{S_0}{b^3}+\frac{a}{b^3}+\frac{a}{b^2}+\frac{a}{b}$$

After dividing through all the repeated fractions, we get a relatively clean pattern: the starting value divided by a power of b, added to a sum of a divided by increasing powers of b. Let’s factor the equation into those two parts

$$S_n=\frac{S_0}{b^n}+\left(\frac{a}{b^n}+\cdots+\frac{a}{b^2}+\frac{a}{b}\right)$$
We can simplify (or complicate, depending on how you look at it) the sum of reciprocal powers of b by using summation notation

$$S_n=\frac{S_0}{b^n}+a\sum_{k=1}^{n}\frac{1}{b^k}$$
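We can sanity-check this algebra numerically. A quick Python comparison (the helper names `direct` and `formula` are mine):

```python
def direct(s0, a, b, n):
    """Apply x -> (x + a) / b exactly n times, one step at a time."""
    x = s0
    for _ in range(n):
        x = (x + a) / b
    return x

def formula(s0, a, b, n):
    """S_n = S_0 / b**n + a * (1/b + 1/b**2 + ... + 1/b**n)."""
    return s0 / b**n + a * sum(1 / b**k for k in range(1, n + 1))

# The two computations agree for the original "add 9, divide by 5" problem.
print(direct(15, 9, 5, 6), formula(15, 9, 5, 6))
```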
Given that our goal is to find the value $S_n$ converges to with an ever-growing n, let’s take the limit

$$\lim_{n\to\infty}S_n=\lim_{n\to\infty}\left(\frac{S_0}{b^n}+a\sum_{k=1}^{n}\frac{1}{b^k}\right)$$
With an ever-increasing n, the term $\frac{S_0}{b^n}$ becomes ever smaller (for b greater than one), and with that, we’ve proven that the starting value doesn’t matter: the term containing $S_0$ goes to zero as n goes to infinity.

We still haven’t proven our hypothesis of

$$\lim_{n\to\infty}S_n=\frac{a}{b-1}$$
But we can simplify the equation a bit by removing the term that approaches zero and folding the limit into the upper bound of the sum

$$\lim_{n\to\infty}S_n=a\sum_{k=1}^{\infty}\frac{1}{b^k}$$
Unless we can untangle that infinite sum of reciprocal powers, we can’t prove the hypothesis through this method. Please forgive me, but this route is somewhat of a dead-end road (I’ll explain why I say somewhat later).

The simple way

So, here’s the problem in case you forgot it

$$S_{n+1}=\frac{S_n+a}{b}$$
If this sequence does converge to one specific value, then there should be an $S_n$ such that $S_{n+1}=S_n$. Or in plainer words, there should be a value which we can add a to, then divide by b, and get that same value back. We can set up an equation to say such

$$S_n=\frac{S_n+a}{b}$$
Multiply both sides by b

$$S_n b=S_n+a$$
Subtract $S_n$ from both sides

$$S_n b-S_n=a$$
Solve for $S_n$

$$S_n(b-1)=a \implies S_n=\frac{a}{b-1}$$
So now (if we are willing to assume that there is only one value the process converges on) we have proven that

$$\lim_{n\to\infty}S_n=\frac{a}{b-1}$$
Or, the process will converge to the fraction a over b minus one.
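One quick numeric sanity check (a sketch, with a few arbitrary choices of a and b): plug a/(b − 1) back into the rule and confirm it comes back unchanged.

```python
# For each (a, b), the claimed fixed point a / (b - 1) should survive
# one more round of "add a, divide by b" essentially untouched.
for a, b in [(9, 5), (7, 2), (3, 10)]:
    s = a / (b - 1)
    print(a, b, "->", s, (s + a) / b)
```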

But we have proven much more than that! Allow me to restate two equations which we have seen to be valid

$$\lim_{n\to\infty}S_n=a\sum_{k=1}^{\infty}\frac{1}{b^k} \qquad \lim_{n\to\infty}S_n=\frac{a}{b-1}$$
Then we can set them equal to each other

$$a\sum_{k=1}^{\infty}\frac{1}{b^k}=\frac{a}{b-1}$$
We can then divide out the a

$$\sum_{k=1}^{\infty}\frac{1}{b^k}=\frac{1}{b-1}$$
And now, all from playing a silly number game, we have a simple identity for what a complicated infinite sum evaluates to. I hope you now understand why I took the time to go down the ‘dead end road.’
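The identity is easy to spot-check numerically. A hedged Python sketch (truncating the infinite sum at 60 terms, which is plenty for b > 1; the helper name is mine):

```python
def reciprocal_power_sum(b, terms=60):
    """Partial sum 1/b + 1/b**2 + ... + 1/b**terms, approximating the infinite sum."""
    return sum(1 / b**k for k in range(1, terms + 1))

# The truncated sum matches 1 / (b - 1) for several values of b.
for b in (2, 5, 10):
    print(b, reciprocal_power_sum(b), 1 / (b - 1))
```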


The infinite sum identity discovered in this article is nothing new; it’s well known and called the geometric series. I’ve been curious about add nine, and divide by five for more than a year at this point, the problem bouncing around in the back of my mind, and only in the past few months did I solve it. I first made a computer program that brute-forced plenty of different values just to get an idea of the problem, then went on to solve it in a very similar way to how I’ve written it out today. I’m still a bit fascinated that a problem as simple as add nine, and divide by five could reveal a seemingly obscure identity for an infinite sum.

Technical notes

(bonus stuff for nerds and pedants)

The values of a and b within the following statement are assumed to be positive, with b strictly greater than one (otherwise the sum diverges)

$$\sum_{k=1}^{\infty}\frac{1}{b^k}=\frac{1}{b-1}$$
And in general, any time you see a fraction in this article, it is assumed that division by zero is not considered.

The series I called the geometric series is not quite a real geometric series

$$\sum_{n=1}^{\infty}\frac{1}{b^n}$$
The real one has a lower bound of n = 0 and is typically not expressed as a fraction

$$\sum_{n=0}^{\infty}p^n$$

where we define $p=\frac{1}{b}$.

These two sums are equivalent except that the real geometric series has one more term, at n = 0. We can find that term by evaluating $p^0$, which equals 1. Therefore, we can state that the ‘fake’ geometric series is one less than the real one, and set them equal to each other

$$\sum_{n=0}^{\infty}p^n=1+\sum_{n=1}^{\infty}p^n$$
And as established earlier, we know the solution to the ‘fake’ geometric sum, so we can solve for the real one

$$\sum_{n=0}^{\infty}p^n=1+\frac{1}{b-1}$$
Let’s substitute out b in favor of p to make the equation clearer, then crunch some algebra

$$1+\frac{1}{\frac{1}{p}-1}=1+\frac{p}{1-p}=\frac{1-p}{1-p}+\frac{p}{1-p}=\frac{1}{1-p}$$
Which yields us the standard form you may be familiar with

$$\sum_{n=0}^{\infty}p^n=\frac{1}{1-p}, \qquad |p|<1$$
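And one last numeric spot-check of the standard form, for a few values with |p| < 1 (a sketch; the helper name is mine, and the infinite series is truncated at 400 terms):

```python
def geometric_sum(p, terms=400):
    """Partial sum p**0 + p**1 + ..., approximating the series for |p| < 1."""
    return sum(p**n for n in range(terms))

for p in (0.5, 0.2, -0.9):
    print(p, geometric_sum(p), 1 / (1 - p))
```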
If you catch any mistakes in my reasoning, math, or notation: please let me know.