To be clear, temps relate to fan noise, not to how much your PC heats up the room it's in; room heating depends on total power draw, not chip temperature. My Raspberry Pi runs at 70C, but it would take 20+ of them to make any noticeable change to the temperature in my office.
On temps - Most GPUs have a target temperature somewhere between 65C and 75C, and the card adjusts its fan speed to hold that target. For AMD GPUs, if you go to AMD Settings -> WattMan Tab (Global) and scroll to the bottom of the tab, you'll see a "Target Temperature" slider you can adjust. This will affect fan speeds. More on the WattMan tab below...
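That "hit a target temperature" behavior is basically a feedback loop on fan speed. Here's a toy sketch of the concept; the target, gain, and limits are made up for illustration and are not AMD's actual fan control logic:

```python
# Toy proportional fan controller: conceptually, the driver does something
# similar -- spin the fans faster the further the temp rises above target.
# All numbers here are illustrative, not AMD's real values.
def fan_speed_pct(temp_c, target_c=70, gain=4, floor=20, ceiling=100):
    """Fan duty cycle (%) proportional to how far temp is above target."""
    error = temp_c - target_c
    return max(floor, min(ceiling, floor + gain * error))

print(fan_speed_pct(65))  # below target: fans stay at the floor -> 20
print(fan_speed_pct(75))  # 5C over target: 20 + 4*5 -> 40
```

This is why lowering the target temperature slider makes the card louder: the controller starts ramping fans at a lower temp.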
Wattage = Heat. If you want to reduce heat, you have to reduce power, and AMD provides a number of options:
(1) Under global settings, there are toggles for "Power Saver", "Frame Rate Target Control", and "Chill".
FRTC works to limit the GPU frequency so that the delivered frame rates do not exceed a given value. You can use this very effectively on 60Hz monitors so the GPU isn't cranking away at full speed delivering 120fps while the monitor only shows you every other frame.
Chill works similarly to FRTC for the upper-bound frame rate setting, but Chill will also throttle the GPU back to the lower-bound frame rate in "scenes with little/no motion on screen", which saves additional power. I'd only recommend this for variable refresh rate monitors.
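The core idea behind a frame cap is simple: if a frame finishes early, idle until the next frame is due instead of immediately starting another one. A minimal sketch of the concept (this is the general technique, not AMD's actual FRTC implementation):

```python
import time

def run_capped(render_frame, target_fps, num_frames):
    """Render num_frames, idling so delivery never exceeds target_fps."""
    frame_time = 1.0 / target_fps
    start = time.perf_counter()
    for _ in range(num_frames):
        frame_start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - frame_start
        if elapsed < frame_time:
            # The hardware idles here instead of racing ahead to render
            # frames the monitor will never show.
            time.sleep(frame_time - elapsed)
    return time.perf_counter() - start

# A trivial "frame" that renders far faster than 60fps:
total = run_capped(lambda: None, target_fps=60, num_frames=30)
print(f"30 capped frames took {total:.2f}s")  # ~0.5s instead of near-instant
```

All the time the GPU spends idling between frames is time it isn't burning power at full clocks.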
Not necessarily a power saver, but I'd recommend enabling "AMD Adaptive Sync" in the global settings.
(2) AMD's WattMan utility is built into the driver software, so unlike with Nvidia cards, there's no need to download third-party software such as MSI Afterburner.
**I will say that I've had consistent trouble getting WattMan to apply voltages properly after a cold boot, but simply restarting your machine will fix the problem (aka power up, restart, then you're fine).**
Within WattMan you can select "Auto" or "Manual" for GPU frequency and voltage. If you want to save power across the board, select "Manual" for both and do an undervolt. This shows you the frequency and voltage at each performance "State" for your graphics card.

From my experience, as well as others' I've seen online, it seems AMD took the stable voltage for each frequency, then added 50mV to that number. Therefore, it's pretty safe to say you can subtract 50mV from each State on the curve and shouldn't experience any issues. I recommend graphing the frequency/voltage curve in Excel or similar so you can visualize the curve you're making.

VRAM voltage acts as the lower limit on how much voltage the core can get under load. I've had good results with a 900-910mV VRAM setting, but you can test lower if you want.

Once you've gotten everything set the way you want it, there's a box in the upper right corner to "Save Profile" so you can later "Load Profile" without having to redo everything.

Experience says the GloFo 14nm process "kinks" around 940mV - that is, you'll notice the slope of the freq/voltage curve increases above this point. For the clockspeeds on an RX570, I'd imagine you can set your 1244MHz to 945mV.
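If you'd rather script the curve than build it in Excel, the "subtract 50mV, but never below the VRAM floor" rule is a one-liner. The state table below is made up for illustration; read your card's actual values out of WattMan before doing this:

```python
# Sketch: derive an undervolt curve by subtracting a 50mV margin from each
# P-state, clamped to the VRAM-voltage floor. STOCK_STATES is invented for
# illustration -- use your own card's WattMan values.
STOCK_STATES = [  # (frequency MHz, stock voltage mV)
    (600, 800), (900, 900), (1100, 975), (1244, 1075),
]
VRAM_FLOOR_MV = 905   # VRAM voltage acts as the core's lower limit under load
MARGIN_MV = 50        # AMD's apparent safety pad over the stable voltage

undervolted = [(f, max(v - MARGIN_MV, VRAM_FLOOR_MV)) for f, v in STOCK_STATES]
for (f, v_old), (_, v_new) in zip(STOCK_STATES, undervolted):
    print(f"{f:>5} MHz: {v_old} mV -> {v_new} mV")
```

Note how the low states just pin to the VRAM floor, so undervolting VRAM too is what actually lowers idle and light-load voltages.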
As with CPU OCing, if you go to manual frequency or voltage on your GPU, it's a good idea to stress test your settings to ensure stability. FurMark is a good one to try. Run the test for a few hours at each frequency/voltage setting.
By undervolting my personal RX480, I've reduced its power draw by 25% without changing core frequency. Plenty of others online have reported similar results.
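For intuition on why a voltage drop alone saves that much: dynamic power in CMOS scales roughly with frequency times voltage squared, so at a fixed clock the savings go with the square of the voltage ratio. The voltages below are illustrative, not measurements from my card:

```python
# Dynamic power scales roughly as P ~ f * V^2, so at the same frequency an
# undervolt still cuts power quadratically. Example voltages are made up.
def relative_dynamic_power(v_new_mv, v_old_mv):
    """Ratio of dynamic power after vs. before, at a fixed frequency."""
    return (v_new_mv / v_old_mv) ** 2

ratio = relative_dynamic_power(1000, 1150)  # e.g. 1150mV -> 1000mV
print(f"Dynamic power at the same clock: {ratio:.0%} of stock")  # -> 76%
```

A ~150mV drop at the same clock landing near a 25% reduction is consistent with this rough model, and real savings also pick up a bit extra from reduced leakage and fan power.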