"the 1080ti is still a beast at 1080p though, why do you feel the need to upgrade?"

Same here. I'm ready to move on from my 1080ti, and it's gonna be an AMD card after the next gen comes out.
Makes sense given the speed difference between storage and onboard VRAM. I wonder if it can compress it like Windows can.
Depends on the API I think; textures take up the bulk and they're already compressed. I just know that people look at GPU-Z's "memory in use" and freak out thinking that's how much the game "needs", when in reality it's similar to Windows Resource Monitor's "memory free".
Quite a few reasons, not the least of which would be a possible announcement of a 5000 series. Lower demand, all sorts of things.

"In response to AMD's Radeon RX 7800 XT?"

Why else would they do that!!
It would just dump data on shutdown, since it's read-only, unlike Windows, which saves it to storage at shutdown. I'm using 1.7GB; I wonder what it's actually got in there... You'd think there would be tools that can access the VRAM to see what's in there... like RAMMap.
Managing a COM object's lifetime
When an object is created, the system allocates the necessary memory resources. When an object is no longer needed, it should be destroyed. The system can use that memory for other purposes. With C++ objects, you can control the object's lifetime directly with the new and delete operators in cases where you're operating at that level, or just by using the stack and scope lifetime. COM doesn't enable you to directly create or destroy objects. The reason for this design is that the same object may be used by more than one part of your application or, in some cases, by more than one application. If one of those references were to destroy the object, then the other references would become invalid. Instead, COM uses a system of reference counting to control an object's lifetime.
An object's reference count is the number of times one of its interfaces has been requested. Each time that an interface is requested, the reference count is incremented. An application releases an interface when that interface is no longer needed, decrementing the reference count. As long as the reference count is greater than zero, the object remains in memory. When the reference count reaches zero, the object destroys itself. You don't need to know anything about the reference count of an object. As long as you obtain and release an object's interfaces properly, the object will have the appropriate lifetime.
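As a rough illustration of how the count moves, here is a minimal C++ sketch using the shell's file-open dialog as a stand-in for any COM class; the class itself doesn't matter, only the create/QueryInterface/Release pattern:

```cpp
#include <windows.h>
#include <shobjidl.h>

int main()
{
    // Initialize the COM library on this thread.
    if (FAILED(CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED)))
        return 1;

    // CoCreateInstance hands back the first interface with a reference count of 1.
    IFileOpenDialog *pDialog = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_FileOpenDialog, nullptr,
                                  CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pDialog));
    if (SUCCEEDED(hr))
    {
        // Requesting another interface increments the count: 1 -> 2.
        IModalWindow *pWindow = nullptr;
        if (SUCCEEDED(pDialog->QueryInterface(IID_PPV_ARGS(&pWindow))))
        {
            pWindow->Release();   // done with this interface: 2 -> 1
        }
        pDialog->Release();       // last reference: 1 -> 0, object destroys itself
    }

    CoUninitialize();
    return 0;
}
```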
Properly handling reference counting is a crucial part of COM programming. Failure to do so can easily create a memory leak or a crash. One of the most common mistakes that COM programmers make is failing to release an interface. When this happens, the reference count never reaches zero, and the object remains in memory indefinitely.
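A common way to avoid that mistake is a COM smart pointer, which calls Release in its destructor so every acquired reference is paired with a release automatically. A minimal sketch using Microsoft::WRL::ComPtr (the same idea applies to ATL's CComPtr):

```cpp
#include <windows.h>
#include <shobjidl.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main()
{
    if (FAILED(CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED)))
        return 1;
    {
        // ComPtr releases the interface when it goes out of scope, so an
        // early return can't leave the reference count stuck above zero.
        ComPtr<IFileOpenDialog> dialog;
        HRESULT hr = CoCreateInstance(CLSID_FileOpenDialog, nullptr,
                                      CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&dialog));
        if (SUCCEEDED(hr))
        {
            dialog->Show(nullptr);  // use the object; no manual Release needed
        }
    }   // dialog's destructor calls Release here
    CoUninitialize();
    return 0;
}
```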
VRAM amount is important, especially with current gen games. For example, even at 1080p with very high settings and ray tracing disabled, Ratchet & Clank: Rift Apart uses nearly 12GB.
No it's not, it just doesn't immediately evict data from graphics memory when it's no longer needed. If there's no more room and new data is requested, the oldest asset with the lowest reference count gets evicted. The only time there are performance issues is if you're attempting to use more data in a scene than the graphics card has room for, meaning you would need a room / area with more than 12GB worth of data. Otherwise the engine will just evict the older unused data to make room for the new.
That is how all the new graphics engines work. In the past a game would manually load assets whenever you loaded into an area, then evict and reload new ones during area transitions as part of a load screen. The developers would have an "expected GPU memory" target they worked around, and that value is what we'd say the game "needed". Modern engines did away with this and instead adopted the same model used for main memory, with unused memory acting as a cache. Unfortunately, programs like GPU-Z only show the total amount with data allocated, not how old that data is or whether it's actually being used. Instead we'd have to test for stuttering, which happens when the graphics engine has to unload assets that are actually being used, load new ones, then unload those to reload what it just evicted.
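To make that concrete, here's a rough C++ sketch of the eviction idea described above: assets stay resident until the budget is exceeded, then the least-referenced, least-recently-used one goes first. All names and fields are made up for illustration; no real engine works exactly like this.

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

struct Asset {
    std::string name;
    uint64_t    sizeBytes;
    uint32_t    refCount;   // how many live scene objects still use this asset
    uint64_t    lastUsed;   // frame number of the last access
};

class VramCache {
public:
    explicit VramCache(uint64_t capacity) : capacity_(capacity) {}

    // Loading new data evicts old entries only once the budget is exceeded;
    // until then, everything stays resident as a cache.
    void load(Asset asset) {
        used_ += asset.sizeBytes;
        resident_.push_back(std::move(asset));
        while (used_ > capacity_) evictOne();
    }

private:
    void evictOne() {
        // Pick the asset with the lowest reference count, breaking ties by age.
        // If everything is still referenced, something in use gets evicted
        // anyway -- that's the stutter case described above.
        auto victim = std::min_element(
            resident_.begin(), resident_.end(),
            [](const Asset& a, const Asset& b) {
                if (a.refCount != b.refCount) return a.refCount < b.refCount;
                return a.lastUsed < b.lastUsed;
            });
        used_ -= victim->sizeBytes;
        resident_.erase(victim);
    }

    uint64_t capacity_;
    uint64_t used_ = 0;
    std::vector<Asset> resident_;
};
```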