Do you remember when you first learned how to round numbers? For example, to round 687 to the nearest hundred, we notice it’s between 600 and 700. Since 687 is closer to 700 than it is to 600, we round up to 700. Maybe you even learned a rule: Look at the digit to the right of the one you’re rounding. If it’s 5 or greater, round up. If it’s 4 or less, round down. Rules can be good, but they can also be dangerous.
Now, consider rounding a decimal. The same rule basically applies. To round 0.687 to the nearest tenth, look at the hundredths place: the 8 tells us we’re closer to 0.7 than to 0.6, so again we round up to 0.7.
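As a quick sanity check of the schoolbook rule, here is a minimal sketch using Python’s standard `decimal` module. Note that `ROUND_HALF_UP` is the mode that matches the “5 or greater rounds up” rule from above; Python’s built-in `round()` instead uses round-half-to-even, so it is deliberately avoided here.

```python
from decimal import Decimal, ROUND_HALF_UP

# Round 687 to the nearest hundred: scale down, quantize to a whole
# number with the schoolbook half-up rule, then scale back up.
hundreds = (Decimal(687) / 100).quantize(Decimal("1"), rounding=ROUND_HALF_UP) * 100
print(hundreds)  # 700

# Round 0.687 to the nearest tenth by quantizing to one decimal place.
# The 8 in the hundredths place is >= 5, so we round up.
tenth = Decimal("0.687").quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)
print(tenth)  # 0.7
```

Of course, a finite-precision library like this can only ever see finitely many digits, which is exactly why the infinite-decimal question below cannot be settled by mechanically applying the rule.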
Where this gets tricky is when you focus too much on a rule itself, rather than why the rule works in the first place.
So consider this tricky problem: What is 0.64999\dots (or, written more compactly, 0.64\overline{9}) rounded to the nearest tenth?
If we apply our rule, we see the 4 in the hundredths place and immediately round down to 0.6. This would be bad, because we didn’t do any thinking. If we paused to think about that number for a second, we’d notice it’s a little different, a little weirder, than the numbers we’ve dealt with before. It has an infinite number of digits extending to the right, so it’s not obvious whether our rule actually applies.
Taking the time to think a little deeper, you may remember that even though 0.65 is halfway between 0.6 and 0.7, it’s an accepted tradition to always round that number up to 0.7. This may lead you to believe that 0.64\overline{9} is the largest number that rounds down to 0.6 when rounding to the nearest tenth. That would be a reasonable assumption, because it keeps everything aligned with the rule we learned before.
But that’s the issue: the rule isn’t quite correct as stated. It’s missing something, because 0.64\overline{9} is exactly equal to 0.65, and thus rounds up to 0.7 when rounding to the nearest tenth. The issue here is that we taught a rule, then asked a question we weren’t prepared to handle. Have we seen numbers with infinitely many decimal digits? Have we explored how 0.\overline{9} = 1 and everything that entails? Do we understand thoroughly why we can “look at the digit to the right” to round? We need the answers to all of these to be yes if we hope to answer the tricky question correctly.
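The equality can be checked with the classic algebraic argument for repeating decimals. Let $x = 0.64\overline{9}$ and subtract two shifted copies so that the repeating tails cancel:

\begin{align*}
100x &= 64.999\dots \\
1000x &= 649.999\dots \\
1000x - 100x &= 649.999\dots - 64.999\dots = 585 \\
x &= \frac{585}{900} = 0.65
\end{align*}

So 0.64\overline{9} and 0.65 are not merely close; they are two decimal representations of the same number.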
If you teach mathematics using rules, you have to keep them straight. This is why sequencing is such a huge deal: you don’t want to blindside a student with a question they are not yet equipped to answer.
However, it’s better to teach mathematics by avoiding rules, except those that are assumptions we must hold or those we can rigorously understand. This encourages deeper thinking and a reliance on problem-solving, rather than the application of rules stated in loose language that invites confusion.
It’s a fine line to walk, but one that is important to keep in mind if you want students to get the most out of their mathematical education.