Because it shows less banding at the same bitrate. With 8-bit encoding, you eventually reach a point where the image quality (motion, blockiness, artifacts, etc.) is almost identical to the source, except that smooth color gradients still show banding. Fixing that with 8 bits requires an even higher bitrate, which is extremely wasteful: most of those extra bits are burned on unnoticeably "improving" parts of the image that are already fine. 10-bit encoding stores colors at higher precision, effectively trading some of that overkill quality for less banding. The result is a much more balanced encode, meaning you don't need to bump up the overall bitrate just to fix banding specifically.
The effective color precision in an 8-bit encoded video is usually only around 6-7 bits, depending on bitrate. With 10 bits you get more precision than an 8-bit-per-channel (usually called 24-bit) monitor can display, but thanks to dithering the extra precision still makes a visible difference. In very dark scenes, it is easy to spot the difference between undithered 8-bit color (not H.264 8-bit specifically, just a plain pixel with 256 possible shades per channel) and dithered color.
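The banding-versus-dithering effect is easy to reproduce numerically. Here is a minimal NumPy sketch (the gradient range, random seed, and averaging window are arbitrary illustration choices, not anything taken from a real codec): it quantizes a smooth near-black ramp to 8 bits with and without dithering, then low-pass filters both to mimic how the eye averages neighbouring pixels.

```python
import numpy as np

# A smooth near-black ramp, the kind of gradient where banding is
# most visible on screen. The 0-0.02 range is an arbitrary choice.
gradient = np.linspace(0.0, 0.02, 1920)

# Plain quantization to 8 bits: the whole ramp collapses to a
# handful of distinct levels, which appear as visible bands.
banded = np.round(gradient * 255).astype(np.uint8)

# Dithered quantization: add up to half a quantization step of
# uniform noise before rounding, so neighbouring pixels average
# out to shades the 8-bit grid cannot represent directly.
rng = np.random.default_rng(0)
dither = rng.uniform(-0.5, 0.5, gradient.shape)
dithered = np.clip(np.round(gradient * 255 + dither), 0, 255).astype(np.uint8)

print("distinct levels, plain:   ", len(np.unique(banded)))    # only 6 bands
print("distinct levels, dithered:", len(np.unique(dithered)))

# The eye (or any low-pass filter) averages nearby pixels; after a
# 32-pixel moving average the dithered signal tracks the original
# gradient far more closely than the banded one does.
kernel = np.ones(32) / 32
err_banded = np.abs(np.convolve(banded / 255, kernel, "same") - gradient).mean()
err_dithered = np.abs(np.convolve(dithered / 255, kernel, "same") - gradient).mean()
print(f"mean error after smoothing: banded {err_banded:.6f}, dithered {err_dithered:.6f}")
```

The same idea is why a good 10-bit-to-8-bit playback chain dithers on output: the noise is far less objectionable than the staircase it replaces.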