Hi,
I trained the algorithm on my oldish GTX 1070, but once I run it on my machine with 2x RTX 2060 I hit this error:
InternalError (see above for traceback): Blas SGEMM launch failed : m=1384448, n=32, k=64
[[Node: conv2d_3/convolution = Conv2D[T=DT_FLOAT, _class=["locbatch_normalization_3/cond/batchnorm/mul_1/Switch"], data_format="NHWC", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](leaky_re_lu_2/LeakyRelu/Maximum, conv2d_3/kernel/read)]]
[[Node: yolo_loss/while_1/strided_slice_1/stack_1/_2879 = _HostRecv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_3968_yolo_loss/while_1/strided_slice_1/stack_1", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](^_cloopyolo_loss/while_1/strided_slice_1/stack_2/_2795)]]
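From what I've read, "Blas SGEMM launch failed" often means the GPU ran out of memory when cuBLAS tried to launch a kernel (for example, when another process already holds most of the GPU memory). Would enabling on-demand memory growth be the right first thing to try? A minimal sketch, assuming TensorFlow 1.x with a Keras session:

```python
import tensorflow as tf

# Sketch (TensorFlow 1.x API): ask TensorFlow to allocate GPU memory
# on demand instead of reserving nearly all of it up front, which can
# trigger "Blas SGEMM launch failed" if memory is already taken.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

# Create a session with this config and hand it to Keras so the model
# (here, the YOLO training code) uses it.
sess = tf.Session(config=config)
tf.keras.backend.set_session(sess)
```

Or is this more likely a multi-GPU issue specific to the 2x RTX 2060 setup?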