UserBenchmark is generally considered a bad site for product comparisons. Those running it are incredibly biased and do everything they can to make AMD look bad, going so far as to claim that all other reviewers are lying and that their site's often questionable synthetic-test numbers are the only source for "unbiased" comparisons. They also add a joke of a writeup for pretty much any piece of AMD hardware, where they come off sounding like 12-year-old fanboys. Their RX 6600 "review" on that page is a typical example...
Their bizarre and often inaccurate rants read like something one might expect to find written by a little kid in the comments of an article at WCCFTech, not as a description of a piece of hardware at a supposedly professional site. They don't even attempt to be discreet about it, which gives a strong impression that they might be paid shills. At the very least, they refuse to acknowledge that the synthetic tests their site is built upon are often not representative of real-world performance. It's a complete sham, which is unfortunate, because the concept for the site is actually rather neat. If someone is checking whether a piece of hardware in their system might be under-performing relative to others with the same hardware, or if they are comparing graphics cards with similar architectures, the numbers might be somewhat meaningful, but one definitely shouldn't use them to compare graphics cards of different brands or architectures. Use actual professional review sites for that, not dodgy numbers based on synthetic tests that don't align much with performance in actual games.
In any case, a 1080 Ti is definitely not anywhere remotely close to 44% faster than an RX 6600. The video you linked is a lot more representative of what one should expect, with the 1080 Ti typically performing around 10% faster at 1440p, and closer still at 1080p (or when utilizing upscaling from a similar render resolution). That aligns pretty well with what reviews show, though since the 1080 Ti is a bit old at this point, most recent reviews don't include 10-series hardware, so you may need to compare two reviews covering hardware a couple of generations apart. And if the card happened to be a 6600 "XT", it would actually be faster than a 1080 Ti in most games, though I'm assuming it's just the standard 6600.
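If you do end up bridging two reviews like that, the math is just chaining relative-performance ratios through a card that appears in both. A quick sketch, with all numbers made up purely to show the method (using the RTX 3060 as the shared card is just an example):

```python
# Hypothetical figures pulled from two different reviews sharing a common card.
ratio_1080ti_vs_3060 = 1.05  # review A: 1080 Ti ~5% faster than an RTX 3060 (assumed)
ratio_6600_vs_3060 = 0.95    # review B: RX 6600 ~5% slower than an RTX 3060 (assumed)

# Chain through the shared card to estimate the direct comparison:
ratio_1080ti_vs_6600 = ratio_1080ti_vs_3060 / ratio_6600_vs_3060
print(f"1080 Ti is roughly {ratio_1080ti_vs_6600:.2f}x an RX 6600")  # ~1.11x with these numbers
```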
There are of course a number of other things to consider though...
1) PCIe bandwidth: As was pointed out, the 6600 utilizes a PCIe 4.0 x8 connection, which on a PCIe 4.0 motherboard provides the same bandwidth as the PCIe 3.0 x16 connection that the 1080 Ti uses (see the quick bandwidth sketch after this list). On an older 2.0 motherboard, the 1080 Ti may see some minor performance regressions, and the 6600 would be affected a bit more. However, it's unlikely that PCIe 2.0 would "choke the card VERY hard", as the link only comes into play for data transfers over the PCIe bus, though it could widen the performance gap a little. TechPowerUp tested this very thing with the similar 6600 XT, and found that PCIe 2.0 only reduced performance by around 4% on average at 1440p across the 22 games they tested...
From the article (www.techpowerup.com): "When the Radeon RX 6600 XT launched with an interface limited to PCI-Express 4.0 x8, lots of discussion emerged about how AMD crippled the bandwidth, and how much it affects the gaming experience. In this article, we're taking a close look at exactly that, comparing 22 titles running at PCIe..."
2) VRAM: 8GB of VRAM is still generally fine for the vast majority of games at max settings, and Nvidia even thought it was acceptable for their $400 4060 Ti released a couple months back, but in future graphically-demanding games the 1080 Ti's 11GB should provide more headroom, making it less likely you'd need to lower texture settings to avoid hurting performance.
3) Raytracing: Arguably neither card provides "good" performance with RT lighting effects enabled in the games that feature them, so it would probably be best to leave those turned off in most cases. But in games that make limited use of RT, the 6600 should tend to take the lead and, combined with upscaling, provide "more usable" performance. The 1080 Ti's Pascal architecture wasn't designed with RT in mind, so it tends to take more of a performance hit in scenes that lean heavily on RT.
4) Age: The 1080 Ti was high-end when it first came out, but that was over six years ago, and its successor came out nearly five years ago. Any second-hand 1080 Ti has likely been in use for at least several years by this point, so it will have no warranty coverage from the manufacturer in the event that it fails, and could be closer to failure than new hardware (new cards often come with 3 years of warranty coverage). The RX 6600, by comparison, first came out less than two years ago, and will likely still be covered by warranty for some time.
5) Power draw: The RX 6600's graphics processor is built on a newer manufacturing process, making it much more efficient. It tends to draw only around 120-130 watts while gaming, while a 1080 Ti can draw around 250 watts, roughly double the power (a rough running-cost sketch follows this list). That also means more heat output to contend with, and you would need to make sure your PSU has sufficient output to handle it. The power draw of your existing RX 570 would be roughly in between the two.
6) CPU limitations: In many recent games, your aging FX system will likely limit your frame rates a fair amount. It might not be so bad in graphically-demanding titles running at 1440p on these cards, but you are bound to encounter titles that don't get ideal performance due to demand on the CPU, and lowering settings or resolution isn't likely to help with that. And if the CPU is what's limiting performance in a title, you may not see much performance difference between a 1080 Ti and an RX 6600, as either will be waiting on the CPU to finish its calculations much of the time (the last sketch below illustrates this). So, performance-wise, the choice between the two might not matter quite as much as it would otherwise.
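To put point 1 in concrete terms, the bandwidth equivalence is just per-lane throughput times lane count. A minimal sketch using the usual approximate effective per-lane rates (after encoding overhead):

```python
# Approximate effective per-lane PCIe throughput in GB/s, after the
# 8b/10b (gen 2) or 128b/130b (gen 3/4) encoding overhead.
PER_LANE_GBPS = {"2.0": 0.5, "3.0": 0.985, "4.0": 1.969}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Total one-direction link bandwidth in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

print(link_bandwidth("4.0", 8))   # RX 6600 in a 4.0 slot:  ~15.8 GB/s
print(link_bandwidth("3.0", 16))  # 1080 Ti in a 3.0 slot:  ~15.8 GB/s
print(link_bandwidth("2.0", 8))   # RX 6600 in a 2.0 slot:  ~4.0 GB/s
print(link_bandwidth("2.0", 16))  # 1080 Ti in a 2.0 slot:  ~8.0 GB/s
```

So on a 2.0 board the 6600 drops to a quarter of its design bandwidth while the 1080 Ti keeps half, which is why the 6600 loses a little more, yet TechPowerUp's testing shows even that only costs a few percent in most games.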
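For point 5, here's a rough sketch of what the efficiency gap means in running costs. The 6600 and 1080 Ti wattages come from the figures above, the RX 570 figure is an assumption, and the hours and electricity price are placeholders to adjust for your own situation:

```python
# Typical gaming power draw in watts (RX 570 figure is an assumption).
GAMING_WATTS = {"RX 6600": 125, "GTX 1080 Ti": 250, "RX 570": 180}

HOURS_PER_DAY = 3      # assumed gaming time
PRICE_PER_KWH = 0.15   # assumed electricity price, in your local currency

for card, watts in GAMING_WATTS.items():
    kwh_per_year = watts / 1000 * HOURS_PER_DAY * 365
    cost = kwh_per_year * PRICE_PER_KWH
    print(f"{card}: ~{kwh_per_year:.0f} kWh/year, ~{cost:.2f}/year")
```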
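And for point 6, the reason a faster GPU stops mattering is that the delivered frame rate is roughly the minimum of what the CPU and GPU can each sustain. A toy model, with all fps figures invented purely for illustration:

```python
def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    # Whichever side takes longer per frame sets the pace.
    return min(cpu_fps, gpu_fps)

CPU_CEILING = 60  # hypothetical limit from an aging FX chip in a CPU-heavy game

print(delivered_fps(CPU_CEILING, gpu_fps=90))   # RX 6600:  60 fps
print(delivered_fps(CPU_CEILING, gpu_fps=100))  # 1080 Ti:  60 fps, no difference
```

Lowering resolution or settings raises the GPU number but not the CPU one, so the ceiling stays put.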
As for Freesync, it's AMD's branding for their implementation of adaptive sync. At the time it came out, Nvidia only supported their proprietary version of the tech, G-Sync, which required special hardware to be installed in a monitor, significantly driving up the cost. AMD's Freesync, on the other hand, utilized capabilities that became standard in a wide range of monitor chipsets, making the feature widely available at a low cost. Eventually, Nvidia caved and supported the standard as well. They still offer a "G-Sync Compatible" certification program for manufacturers to advertise that their screen meets Nvidia's standards, but nearly all Freesync monitors should work. The main exception would be some lower-end screens that only have HDMI, as Nvidia only supports adaptive sync over a DisplayPort connection.