sminlal :
No matter how much GC has changed, there's still a fundamental problem if the SSD controller isn't aware that blocks are no longer being used by the file system. It means a lot more work on its part to produce erased flash pages that are ready to accept writes. TRIM and Secure Erase are the only mechanisms by which the system can inform the SSD that the information in specific sectors no longer needs to be retained and can be returned to the free page pool.
Onboard write cache, compression, and parallelism are being used to improve write performance under duress, but under the covers there's going to be a lot more work that needs to be done, and ultimately that means more write wear on the drive.
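To make the extra-work point concrete, here's a toy model (purely hypothetical numbers, nothing like real firmware) of why GC must copy more pages when the controller hasn't been told about filesystem deletions:

```python
# Toy model of why an SSD controller benefits from TRIM: without it, pages
# the filesystem has deleted still look "valid" to the flash translation
# layer (FTL), so garbage collection must copy them before erasing a block.
# All names and numbers here are made up for illustration.

def gc_copies(pages_per_block, live_pages, fs_deleted_pages, trimmed):
    """Pages GC must relocate in order to erase one block.

    live_pages       -- pages holding data the filesystem still uses
    fs_deleted_pages -- pages the filesystem freed but the FTL still maps
    trimmed          -- whether the drive was told those pages are dead
    """
    # With TRIM the FTL knows deleted pages are invalid and skips them;
    # without it they get copied right along with the live data.
    valid = live_pages + (0 if trimmed else fs_deleted_pages)
    assert valid <= pages_per_block
    return valid

# One 128-page block: 40 pages of live data, 60 pages deleted by the OS.
copies_no_trim = gc_copies(128, 40, 60, trimmed=False)  # 100 pages copied
copies_trim    = gc_copies(128, 40, 60, trimmed=True)   # 40 pages copied
print(copies_no_trim, copies_trim)
```

Every page GC copies is an extra program cycle, which is exactly the added write wear being described.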
If TRIM and Secure Erase were the only mechanisms to do that, then apps like AS Cleaner (a free-space cleaner) would never work the way they do on some controllers. I run RAID on all my systems (and have even tested a few different single drives with TRIM turned off), and the newer firmwares do in fact keep up if garbage collection is given time to run.
TRIM (not available on RAIDed volumes) and Secure Erase aren't needed, since I can and often do write many times the capacity of my drives without ever seeing read/write/modify speed degradation. I do have to avoid doing that all in one session, though, and allow idle-time recovery/GC between logins to reclaim dirty blocks for the next session's workflow.
The drive keeps maps comparing logically valid data against physically valid data, and it easily cleans up what's been deleted, with idle time being all it needs to do so most effectively. You can even help the GC process along by shrinking the partition down to let the controller know (again, through those mapping comparisons) that the unallocated space no longer contains valid data. It still needs time to GC that space, though, whereas TRIM-marked blocks will be reclaimed that much more quickly on controllers that process TRIM in near real time. TRIM is just a quicker means to the same end as GC, is all.
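The mapping-comparison idea above can be sketched like this. This is a deliberately tiny, hypothetical FTL (no vendor's actual firmware): any physical page no longer referenced by the logical-to-physical map is garbage, and TRIM just drops mappings immediately instead of waiting for GC to figure it out:

```python
# Minimal, hypothetical sketch of an FTL's logical->physical mapping.
# Pages that were written but are no longer referenced by any logical
# sector are stale, and the controller may reclaim them during idle time.

class ToyFTL:
    def __init__(self, total_pages):
        self.total_pages = total_pages
        self.map = {}        # logical sector (LBA) -> physical page
        self.next_free = 0   # naive append-only page allocator

    def write(self, lba):
        # Flash writes are out-of-place: a rewrite leaves the old
        # physical page stale (mapped to nothing).
        self.map[lba] = self.next_free
        self.next_free += 1

    def trim(self, lba):
        # TRIM: drop the mapping right away; the page is known-garbage
        # immediately, no GC scan needed to discover it.
        self.map.pop(lba, None)

    def reclaimable(self):
        # Pages written but no longer referenced by any logical sector.
        return self.next_free - len(self.map)

ftl = ToyFTL(total_pages=1000)
for lba in range(10):
    ftl.write(lba)        # 10 pages used, all live
ftl.write(3)              # rewrite LBA 3 -> 1 stale page
ftl.trim(7)               # TRIM LBA 7  -> 1 more page instantly reclaimable
print(ftl.reclaimable())  # 2
```

Shrinking a partition works on the same principle, just slower: the drive eventually notices the unmapped space is dead, while TRIM tells it outright.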
You are completely right about the overhead/WA (write amplification) increases that such complex algorithms bring, but that seems a very small price to pay for such an awesome recovery algorithm, IMO. Especially when you consider just how many TBs' worth of data can be written before NAND burnout even becomes a concern.