Intel Core i3, i5 Arrandale and Clarkdale in Photos

[citation][nom]ben850[/nom]I'm kind of out of the "news" as far as this CPU goes, but what can its on-board GPU be compared to? For example: an Nvidia 9800GT? 8800? Or is it not even meant to be that beefy of a GPU?[/citation]

You got a printer? Yeah, it'll be a tad faster than that... 😉
 
That IGP will be great for things like web browsing, Word, and Flash, and... erm... photo viewing, maybe some video use too. Think of Win 98-era GPUs.
 
[citation][nom]Boxa786[/nom]That IGP will be great for things like web browsing, Word, and Flash, and... erm... photo viewing, maybe some video use too. Think of Win 98-era GPUs.[/citation]
And how do all of you guys know or assume this???
 
liquidsnake718: Evidence to suggest Intel has (or will ever have) a decent graphics solution or GTFO plz thx.


The laws of probability give the IGP a roughly 100% chance of sucking.
 
[citation][nom]34kl3l4k[/nom]"I'm kind of out of the "news" as far as this CPU goes, but what can its on-board GPU be compared to? For example: an Nvidia 9800GT? 8800? Or is it not even meant to be that beefy of a GPU?" Think more like an Nvidia 4200... MAYBE a 6200.[/citation]

From what I hear and have seen, it's about on par with the IGP on an ATI 785G chipset. Not too bad for such a little change.

[citation][nom]zipzoomflyhigh[/nom]Great. A crappy 45nm graphics chip to heat up your shiny new 32nm processor. YUCK.[/citation]

And I am sure AMD's Fusion will be better. Hell, let's slap a 5870 on or next to the CPU. I am sure a GPU that's used to a 50°C idle will be happy with a CPU that's used to a 30°C idle. And when the GPU heats up to 70-80°C under load and the CPU is set to shut off at those temps, it will be awesome.

It is what it is: a low-power, cheap, decent IGP. That's why Intel has the majority market share in GPUs. Cheap.

[citation][nom]beefy_mcpoo[/nom]zipzoomflyhigh got it right, this is to AMD's Fusion what Intel's hyperthreading was to AMD's dual-core. All gimmick, no substance.[/citation]

Wait... Intel had their dual-core CPUs with no hyperthreading out a week before AMD had their dual cores out. The Pentium D came out on May 26, 2005; the Athlon X2 came out on June 5, 2005. Hyperthreading was only in the single-core Pentium 4 CPUs that competed with AMD's Athlon XP/64, so I am not sure how hyperthreading was meant to compete with a dual core rather than pave the way for multicore programming.

As for the item itself, it is an on-chip GPU; the entire package is the chip. On-die is a different story and, TBH, will probably be harder to do, since the lithography and process are normally different for a CPU and a GPU. Those who think AMD will pull it off without a hitch, beware: I doubt they will do it problem-free, nor will Intel. I bet on-die GPUs are still a few years away.
 
@jimmysmitty: But... I was referring to hyperthreading being a negligible performance gain that merely looks like two cores, vs. the near doubling of power that an extra core can give. AMD is aiming to move GPGPU to the mainstream; Intel has never proven itself capable in highly parallel computing. Even in their darkest days, AMD/ATI graphics were light years ahead of anything Intel has ever done, and the recent Larra-fail fiasco only underscores that. I'd go as far as to say that the billions spent on Larrabee R&D were a complete waste and will never become a worthwhile product.
 
Having an integrated graphics chip is a bad idea because it's useless and it will create additional heat. Nvidia's approach is the only right one at the moment, where things are being done well with Fermi.
 
At the end of the day it's a 32nm CPU that will offer superior performance and reduced power usage. The GPU will be good enough for playing Blu-ray movies and games such as WoW that have less intense graphics requirements.
 
lradunovic: Yes and no... I think the aim was to reduce latency. The problem with GPGPU now is that offloading to the GPU carries enormous latency: everything must be copied to the GPU's memory, processed, and then sent back over to the CPU. If the GPU is integrated into the CPU, it can share the L3 cache at a latency of perhaps 50 cycles, instead of a round trip over the bus measured in many thousands of cycles. You'll never be able to fit a teraflop+ GPU under the CPU's heatsink, but it could allow floating-point operations to be scheduled automatically by the CPU onto the GPU.
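To put rough numbers on that round trip yourself, here is a minimal host-side sketch (plain C++ calling the CUDA runtime API; the buffer size and loop count are arbitrary choices for illustration, not anything measured on these chips) that simply times copying a buffer out to the GPU and back, which is the overhead an on-die GPU sharing cache could mostly skip:

[code]
// Illustration only: time the host->device->host copy that every GPGPU
// offload pays today. No kernel is launched; the point is the transfer cost.
#include <cuda_runtime.h>   // link against -lcudart
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const size_t n = 1 << 20;                     // ~4 MB of floats, arbitrary
    std::vector<float> host(n, 1.0f);
    float* dev = nullptr;
    if (cudaMalloc(&dev, n * sizeof(float)) != cudaSuccess) return 1;

    const int reps = 100;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < reps; ++i) {
        // Copy the input to GPU memory, then copy the "result" straight back.
        cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    }
    auto t1 = std::chrono::steady_clock::now();

    double avg_us = std::chrono::duration<double, std::micro>(t1 - t0).count() / reps;
    std::printf("average round trip: %.1f microseconds\n", avg_us);

    cudaFree(dev);
    return 0;
}
[/code]

Whatever the exact figure turns out to be on a given board, it dwarfs a cache hit, which is the point being made above.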
 
Not too bad for an on-chip GPU. They'll get better in the years to come. And better. And better. Like it or not, gamers, this is the future for all but the tippity-top, top-of-the-line hardcore ATX form-factor tower computers. Computers will continue to get smaller and more energy efficient. That means goodbye to ATX form factors and goodbye to graphics cards.

Never say never, oP3n_CL_pr0gramm3r. When they get down to 11 nanometers it'll be a whole 'nother ballgame.
 
Intel's Hyper-Threading is not a gimmick by any means; look at the benefits of it on the Core i7s in CPU-intensive tasks like rendering. It's not as good as a full dedicated hardware core, but it certainly helps a great deal, at a much smaller cost to Intel and the end user.

We don't know how good these chips are, but it's a step in the right direction. Baby steps must be taken before pulling something huge off. I'm not an Intel fan, but I can't help but be impressed by their streak of wins recently.

This technology isn't aimed at the high-end desktop user (gamer), but it has its place in many different scenarios, such as netbooks, media-streaming boxes, ultra-portable internet devices, and so on. Give it time, folks; why the negative criticism?
 
Niva: The typical real-world performance gain from hyperthreading is -10% to +10%, and personally I hope AMD never goes that route. Most of the i7's (occasional) superiority is from the new SSE instructions, not the god-awful return of hyperthreading. Phenom II and i7 usually tie in games, because games don't take advantage of the new SSE instructions, whereas video encoding does; hence the i7 wins that one hands down.
 
For serious use like a graphics workstation, this is ridiculous; this chip is for the corporations. And where in hell does the bandwidth go? Right out to lunch; it started with the 775. I hope a people's computer returns again. Damn, I sound like Hitler pitching the Beetle.
 
[citation][nom]riversdirect[/nom]"On-chip" sounds misleading. Usually people say a "chip" is a piece of silicon. The picture is obviously not one die. All they did is move the video card closer to the processor. Technically this is not on-chip graphics, it's in-package graphics.[/citation]

It is in fact "on-chip"... you're confusing "on-chip" with "on-die". It's essentially the same as what Intel was doing with the Pentium D: two dies, one chip.
 
[citation][nom]Anonymous[/nom]Niva: The typical real-world performance gain from hyperthreading is -10% to +10%, and personally I hope AMD never goes that route. Most of the i7's (occasional) superiority is from the new SSE instructions, not the god-awful return of hyperthreading. Phenom II and i7 usually tie in games, because games don't take advantage of the new SSE instructions, whereas video encoding does; hence the i7 wins that one hands down.[/citation]

Whoever posted this comment... please refrain from posting again. You don't have even the slightest clue what you're talking about. Now, on to facts: the Phenom II and Core i7 perform so closely in gaming benchmarks because games only use 1-2 threads and therefore don't make any use of HyperThreading; the processors are left to handle all the processing on physical cores. The +/-10% performance impact was for the Pentium 4. In cases where HyperThreading is used on the Core i7, its impact is upwards of +25% according to the benchmarks posted everywhere on the net.
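If you want to see that effect for yourself, here is a rough sketch (plain C++ with a made-up spin-loop workload; the chunk and iteration counts are arbitrary) that runs the same fixed amount of work on 2 threads, the way a game of this era would, and then on one thread per logical processor, which on a quad-core i7 with HyperThreading means 8:

[code]
// Illustration only: HyperThreading can only show a gain when a program
// spawns enough threads to keep every logical processor busy. The same
// total work split across just 2 threads leaves the extra hardware threads idle.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

static const int kChunks = 64;              // arbitrary total amount of work

static void crunch_chunk() {
    volatile double x = 1.0;                // volatile so the loop isn't optimized away
    for (long i = 0; i < 20000000L; ++i) x = x * 1.0000001 + 1e-9;
}

static double seconds_with(unsigned workers) {
    std::atomic<int> next(0);
    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back([&next] {
            while (next.fetch_add(1) < kChunks) crunch_chunk();  // grab chunks until all are done
        });
    for (auto& t : pool) t.join();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
}

int main() {
    unsigned logical = std::thread::hardware_concurrency();   // e.g. 8 on an i7 with HT
    std::printf("same work, 2 threads:  %.2f s\n", seconds_with(2));
    std::printf("same work, %u threads: %.2f s\n", logical, seconds_with(logical));
    return 0;
}
[/code]

The two-thread run won't budge whether HyperThreading is on or off; any gap you measure on the full-thread run with HT enabled versus disabled in the BIOS is the SMT contribution everyone is arguing over.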

[citation][nom]virtualban[/nom]Great, nobody mentioned "will it play SuperMario"[/citation]

No... it won't run Crysis either.
 