AMD or Nvidia?
There's gpu-burn for Nvidia GPUs, available on GitHub with links to compiled binaries, though I haven't much experience with it myself. I burned in my most recent card with a few days' worth of Folding@Home GPU work units, but that was under Windows and I'm not sure it would behave the same way under Linux, though I'd expect it to.
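For what it's worth, the basic idea behind burn-in tools like gpu-burn is simple: run the same heavy computation over and over and flag any run whose result differs from a known-good reference, since a mismatch points to unstable hardware. Here's a rough CPU-side sketch of that loop in Python/NumPy (a real tool would dispatch the kernel to the GPU; the function name and parameters are just mine):

```python
import numpy as np

def burn_in(iterations=5, size=256, seed=0):
    """Sketch of a compute-and-verify burn-in loop (CPU stand-in)."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((size, size))
    b = rng.standard_normal((size, size))
    reference = a @ b                  # known-good result computed once
    errors = 0
    for _ in range(iterations):
        result = a @ b                 # the repeated "stress" kernel
        if not np.array_equal(result, reference):
            errors += 1                # any mismatch suggests instability
    return errors

print(burn_in())  # a stable machine should report 0 errors
```

On stable hardware this reports 0 errors; the stress tools just make the kernel big enough to keep the chip hot for hours.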
Other than those, there are mfaktc (CUDA) and mfakto (OpenCL), which are used for trial factoring Mersenne numbers.
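If you're curious what those are actually computing: any factor q of a Mersenne number 2^p - 1 (for prime exponent p) has the form q = 2kp + 1, so trial factoring just tests candidates of that form with a modular exponentiation. A toy sketch (function name and bound are mine; the real tools do this massively in parallel on the GPU):

```python
def trial_factor(p, k_max=100_000):
    """Look for a small factor of 2**p - 1, for prime exponent p.

    q divides 2**p - 1 exactly when pow(2, p, q) == 1, and any such
    factor must have the form q = 2*k*p + 1.
    """
    for k in range(1, k_max + 1):
        q = 2 * k * p + 1
        if pow(2, p, q) == 1:
            return q
    return None

print(trial_factor(11))  # 2**11 - 1 = 2047 = 23 * 89, so this finds 23
```

The GPU tools run exactly this kind of candidate sieve, just over enormously larger ranges of k, which is why they hammer the integer units hard but leave other parts of the chip mostly idle.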
AFAIK all of them stress only the parts of the GPU relevant to their use case, and may not uncover every instability.
I'd be interested to know if there's something better, as well.
EDIT: Also, please use caution with stress tests. Damage may result on aging or improperly manufactured or configured hardware, and, while regrettable, no author of such software (nor anyone mentioning it 😉) accepts any liability for such mishaps.