News Intel's DG2 GPU Shows GTX 1050-Like Performance in Early Benchmarks

Howardohyea

I'm actually very interested in seeing all the large semiconductor companies making GPUs (Intel, Nvidia, AMD).
Can't wait to see benchmarks and reviews of the actual production chips.
 

Matt_ogu812

I have an EVGA GTX 1050 Ti in the system I built about a year ago; I used it because it was new and free.
My first thought when I read that Intel's first stab at a graphics card is already outdated was: why bother?
I'm chomping at the bit to get a new video card, but not at these shark-infested prices.
I'd settle for an RTX 2060 right now, but given the feeding frenzy in the GPU market I'll continue to wait.
 

usiname

Based on DG1 benchmarks assuming nearly perfect scaling with EU count and clocks, the top-end DG2 part might land in the RTX3060Ti's neighbourhood, which would be decent if the rumoured $300 MSRP is accurate.
If they scale linearly, and if we start from the Asus variant that Steve from Gamers Nexus tested (80 EUs according to his review, 96 EUs according to ETA PRIME), we have an Intel 10nm part with 80 or 96 EUs, 30W, and 70 GB/s of bandwidth versus the GT 1030 at 30W on 14nm with 50 GB/s of bandwidth. The Asus variant loses to a five-year-old architecture despite a full node advantage and 40% faster memory. That's not Pascal level, it's not even Maxwell; it's more like Kepler level. If the 512 EU variant is on TSMC 6nm at 230W, maybe it will be close to a 1080 Ti, but if the 512 EU variant is on Intel 10nm, even a GTX 1080 will be too fast to catch.
edit: I forgot that Intel 10nm has about the same density as TSMC 7nm, which means the situation with the Asus variant is even worse, and even TSMC 6nm won't help much, so maybe GTX 1070 Ti level.
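For what it's worth, the linear-scaling guess above can be roughed out in a few lines. This is a back-of-the-envelope sketch only: the base score and clocks below are made-up placeholder numbers, and real GPUs never scale perfectly with EU count and clock (bandwidth and power cap the gains).

```python
# Naive EU-scaling estimate (hypothetical inputs, illustration only).
# Assumes performance scales perfectly linearly with EU count and
# clock speed, which real GPUs never achieve.

def scaled_estimate(base_score, base_eus, base_clock_ghz,
                    target_eus, target_clock_ghz):
    """Scale a benchmark score linearly by EU count and clock ratio."""
    return (base_score
            * (target_eus / base_eus)
            * (target_clock_ghz / base_clock_ghz))

# Placeholder numbers: a 96 EU part at 1.5 GHz with a made-up score
# of 1000, extrapolated to a 512 EU part at 1.8 GHz.
estimate = scaled_estimate(1000, 96, 1.5, 512, 1.8)
print(f"Naive 512 EU estimate: {estimate:.0f}")  # ~6.4x the base score
```

Under these assumptions the 512 EU part comes out around 6.4x the base score, which is exactly why the "if they scale linearly" caveat matters: memory bandwidth and power limits make the real multiplier smaller.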
 
we have an Intel 10nm part with 80 or 96 EUs, 30W, and 70 GB/s of bandwidth versus the GT 1030 at 30W on 14nm with 50 GB/s of bandwidth. The Asus variant loses to a five-year-old architecture despite a full node advantage and 40% faster memory.
I doubt the theoretical max bandwidth of LPDDR4X is applicable here. Did any of the sites do an actual bandwidth test?
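Those 70 GB/s and 50 GB/s figures do look like theoretical peaks (transfer rate times bus width), not measured numbers. A quick sanity check, assuming a 128-bit LPDDR4X-4266 bus on DG1 and the GT 1030's 64-bit 6 Gbps GDDR5 (the bus configurations are assumptions on my part):

```python
# Theoretical peak bandwidth = transfer rate (MT/s) * bus width (bits)
# / 8 bits-per-byte. Bus configurations below are assumed, not measured.

def peak_gb_per_s(mt_per_s, bus_bits):
    """Peak memory bandwidth in GB/s from transfer rate and bus width."""
    return mt_per_s * bus_bits / 8 / 1000

print(peak_gb_per_s(4266, 128))  # LPDDR4X-4266, 128-bit: 68.256 GB/s (~the "70")
print(peak_gb_per_s(6000, 64))   # 6 Gbps GDDR5, 64-bit: 48.0 GB/s (~the "50")
```

Either way, these are ceilings; an actual bandwidth benchmark would almost certainly come in lower, especially on shared LPDDR4X.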

Also, the DG1 has hardware acceleration that the Nvidia/AMD offerings in the same tier lack, so there is going to be a market for it even if it's weak at gaming.
 

rluker5

Based on DG1 benchmarks assuming nearly perfect scaling with EU count and clocks, the top-end DG2 part might land in the RTX3060Ti's neighbourhood, which would be decent if the rumoured $300 MSRP is accurate.
DG will probably be higher. This integrated Geekbench score is slightly above what my Haswell Iris just scored on a locked HP office motherboard with 1333 RAM: https://browser.geekbench.com/v5/compute/3030766
I'm pretty sure multiplying it by 2 would be a safe estimate for a proper benchmark. Maybe 3 or 4, depending on architecture improvements since 2013.

Edit: my GPU was throttling due to a 47W max TDP, so I had to reduce the clocks and volts a bit, and here: https://browser.geekbench.com/v5/compute/3030823
6545 from a 47W Haswell that is still throttling a bit. Pretty sure more than double the EUs, on less than half the node size, seven years later, can do better.
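Spelled out, that extrapolation is just multiplication on the posted score. The multipliers are the post's own guesses ("2x would be safe, maybe 3 or 4"), nothing measured:

```python
# Rough extrapolation from the posted Geekbench 5 compute score.
# 6545 is the score from the link above; the multipliers are the
# poster's guesses for EU count plus node/architecture gains.
haswell_score = 6545

for guess in (2, 3, 4):
    print(f"{guess}x -> {haswell_score * guess}")
```

So even the conservative 2x guess puts the estimate around 13,000, well clear of the DG1-class numbers being discussed.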
 
I consider a GTX 1050 to be a bit 'meh' these days for the newest games, but I'm still content with my GTX 1060 for four-year-old games like BF1. So maybe some of the more powerful Intel offerings will equal a GTX 1080 Ti / RTX 2070 some day, and at a reasonable cost! (Which for me would be an upgrade!)