Numerical vs Analytical Solutions

There’s a good chance that most (if not all) of the math you’ve learned so far has been analytical, meaning that, for a given problem, there is a finite series of steps that leads to an exact solution. Square the first side length, square the second side length, add the two together, take the square root, and you get the exact hypotenuse every time. Numerical solutions work differently. You would first guess the hypotenuse; then ask the triangle if you’re high or low; then just repeat those two steps until you’re sufficiently confident in your guess. Obviously this is messy and time-consuming, but it’s also shockingly easy.
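To make the guess-and-check idea concrete, here’s one possible sketch in Python (the function name and tolerance are my own choices, not anything standard). It "asks the triangle if you’re high or low" by comparing the squared guess to $a^2 + b^2$ and bisecting the remaining range:

```python
def hypotenuse_numerical(a, b, tol=1e-9):
    """Find the hypotenuse by guessing, asking 'high or low?', and repeating."""
    lo, hi = max(a, b), a + b          # the hypotenuse must lie in this range
    target = a * a + b * b             # what the squared hypotenuse should equal
    while hi - lo > tol:
        guess = (lo + hi) / 2          # guess the midpoint of the range
        if guess * guess < target:     # too low: raise the floor
            lo = guess
        else:                          # too high: lower the ceiling
            hi = guess
    return (lo + hi) / 2

print(hypotenuse_numerical(3.0, 4.0))  # close to the exact answer, 5.0
```

Messy and repetitive compared to the Pythagorean one-liner, but notice that the loop never needed a square root: it only ever multiplies and compares.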

All numerical methods (at least all the ones I’m aware of) follow a guess-and-update pattern. For example, say you’re trying to find the minimum of a function:

Find $k$ such that $f(k) \leq f(x)$ for all $x$. Unless you have some prior knowledge, your initial guess $k_0$ might as well be random. Then on each iteration, you check the slope of the function and update the next guess accordingly:

$$k_{n+1} = k_n - \alpha f'(k_n),$$

where $n$ is the iteration number and $\alpha$ is a feedback parameter that determines how big a step to take on each iteration. There are three possible outcomes of this algorithm. If the function is something like $f(x)=x^2$, you’ll sink straight to the bottom, no matter where you start. With each iteration, your guess will change less and less. This is called convergence. If the function is something like $f(x)=-x^2$, your updates will become larger and larger, as the algorithm races down an ever-steepening slope. This is called divergence. If your function is more complicated—which it almost always will be—you will probably end up converging to a local minimum. The best algorithms have systematic ways to avoid getting stuck in these local minima.
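The update rule above can be sketched in a few lines of Python (the function name and default values here are illustrative, not standard):

```python
def minimize(f_prime, k0, alpha=0.1, iters=100):
    """Iterate k_{n+1} = k_n - alpha * f'(k_n) and return the final guess."""
    k = k0
    for _ in range(iters):
        k = k - alpha * f_prime(k)
    return k

# f(x) = x^2, so f'(x) = 2x: converges toward the minimum at 0 from any start.
print(minimize(lambda x: 2 * x, k0=10.0))

# f(x) = -x^2, so f'(x) = -2x: each update multiplies k by 1.2, so the
# guesses grow without bound -- this is the divergent case.
```

Try varying `alpha`: too small and convergence crawls; too large and even $f(x)=x^2$ can overshoot and diverge.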

Numerical methods often require hundreds or even thousands of iterations before converging and, unless you’re very lucky, the result will never be the exact solution. So you might wonder, "Why would these methods even exist?" Here’s how I like to think about it. In the toolkit known as "math," analytical methods are the wrenches, screwdrivers, and Allen keys. Each one goes with a specific type of problem. Numerical methods exist for the same reason pliers exist—sometimes you just don’t have the right size wrench. Sure, it might only be an approximation, but that approximation can be very good. Sure, it might be absurdly tedious, but that’s literally what computers are for. And sure, it’s not the most elegant process, but it can solve problems that don’t have a known analytical solution.

There’s a good thread on Stack Exchange discussing the two methods. The Wikipedia article on numerical analysis is also a good source.