There are a couple of rules of thumb for this:
- At least 20% of the total/raw flash free. With a 120GB drive this would mean filling no more than about 102GiB of the ~112GiB of user-accessible space (128GiB of total/raw flash × 0.8).
- At least 15% of the user-accessible flash free. With a 120GB drive this would mean no more than about 95GiB (the arithmetic for both rules is sketched right below this list).
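For concreteness, here's a quick sketch of the arithmetic behind both rules. It assumes the 120GB drive is built on 128GiB of raw NAND, which is typical for that class but worth checking for your specific model:

```python
# Rough arithmetic for the two rules of thumb. Assumes a 120GB drive
# built on 128GiB of raw NAND; adjust if your drive differs.
GIB = 2**30

raw_flash = 128 * GIB            # total/raw NAND
user_space = 120 * 1000**3       # advertised 120GB (decimal), ~111.8GiB

# Rule 1: keep at least 20% of the raw flash free overall.
max_fill_rule1 = 0.8 * raw_flash
# Rule 2: keep at least 15% of the user-accessible space free.
max_fill_rule2 = 0.85 * user_space

print(f"user-accessible: {user_space / GIB:.1f} GiB")      # ~111.8 GiB
print(f"rule 1 max fill: {max_fill_rule1 / GIB:.1f} GiB")   # ~102.4 GiB
print(f"rule 2 max fill: {max_fill_rule2 / GIB:.1f} GiB")   # ~95.0 GiB
```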
The first number comes from the fact that 20% of over-provisioning ("OP") in total provides roughly three times the endurance of the typical ~7%, which comes simply from the binary-to-decimal conversion (e.g. 128GiB of raw flash sold as 128GB, which is about 119GiB). This is, or at least was, a bit of a magic number for consumer usage. A 120GB drive already has more than the typical amount of OP, often to mitigate drive weaknesses such as being DRAM-less or reliant on a large SLC cache, although more OP can also improve performance, especially writes.
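As a side note, OP is usually quoted relative to the user-accessible capacity. A minimal sketch of where the ~7% figure and the 120GB drive's extra OP come from, again assuming 128GiB of raw NAND:

```python
GIB = 2**30

def op_percent(raw_bytes, user_bytes):
    """Over-provisioning as a percentage of the user-accessible capacity."""
    return (raw_bytes - user_bytes) / user_bytes * 100

raw = 128 * GIB  # assumed raw NAND for both drive classes below
print(f"128GB drive: {op_percent(raw, 128 * 1000**3):.1f}% OP")  # ~7.4%: the GiB -> GB gap
print(f"120GB drive: {op_percent(raw, 120 * 1000**3):.1f}% OP")  # ~14.5%: roughly double
```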
The second number comes from the fact that modern drives/controllers can use any free space as "dynamic over-provisioning", thanks to how aggressively TRIM is issued nowadays. Dynamic OP is not as effective as dedicated/reserved OP (which sits outside the user space), so to balance things out - there's always at least ~7% dedicated/reserved from the conversion mentioned above - you want a bit more dynamic (and thus overall) OP to compensate.
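To make "dedicated + dynamic" concrete, a small sketch (same assumed 128GiB of raw NAND, ~112GiB user space) showing how the free-NAND fraction that rule #1 targets is built from the reserved OP plus free, TRIMmed user space:

```python
GIB_RAW = 128   # assumed raw NAND in GiB
GIB_USER = 112  # ~user-accessible GiB on a 120GB drive (assumption)

def free_nand_fraction(used_gib, raw_gib=GIB_RAW, user_gib=GIB_USER):
    """Dedicated OP (raw minus user space) plus dynamic OP (free, TRIMmed
    user space), as a fraction of the raw flash."""
    dedicated = raw_gib - user_gib
    dynamic = user_gib - used_gib
    return (dedicated + dynamic) / raw_gib

print(f"{free_nand_fraction(95) * 100:.0f}% of the NAND free at 95GiB used")    # ~26%
print(f"{free_nand_fraction(102) * 100:.0f}% of the NAND free at 102GiB used")  # ~20%
```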
Given a 120GB drive already has more OP than typical, you are probably fine with #1, unless that extra OP is there to compensate for the drive being DRAM-less, for example, in which case #2 is optimal.