The Core i7-4770K Review: Haswell Is Faster; Desktop Enthusiasts Yawn


mapesdhs

Distinguished


You stated some of your points several times, but then maybe one has to, given the intelligence level
of some readers. :D Still, I agree with you re the general idea about CUDA, but perhaps Chris would
point out that the OpenCL-related test on p. 13 does use the same Titan with all CPU/mbd combos,
and thus the intent is to reveal how each CPU is able to exploit the available OpenCL resources.
Though I have to say, given the CPU ordering, it does look as though the OpenCL path is limited by
something that's only single-threaded, otherwise the 3930K would be a lot quicker. So this test really
only shows what it shows, namely how well each CPU can exploit available OpenCL processing power,
rather than being any kind of OpenCL vs. CUDA test. To me it merely shows that OpenCL is not very
efficient, because of the 3930K data.
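
(If anyone wants to poke at the single-thread theory themselves, one rough way is to compare how long the host CPU spends issuing OpenCL work against how long the GPU is actually busy. The sketch below is just that - a sketch - and assumes PyOpenCL is installed; the "scale" kernel and the buffer size are throwaway placeholders, not anything from Chris's test suite.)

```python
# Rough sketch: measure host-side launch overhead vs. GPU busy time for many
# small OpenCL kernel launches. Kernel and sizes are arbitrary placeholders.
import time
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)

src = """
__kernel void scale(__global float *buf, float factor) {
    int i = get_global_id(0);
    buf[i] *= factor;
}
"""
prg = cl.Program(ctx, src).build()

data = np.random.rand(1 << 20).astype(np.float32)
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR, hostbuf=data)

host_start = time.perf_counter()
events = []
for _ in range(1000):
    events.append(prg.scale(queue, data.shape, None, buf, np.float32(1.0001)))
queue.finish()
host_total = time.perf_counter() - host_start

# GPU execution time from OpenCL event profiling (values are in nanoseconds)
gpu_total = sum(e.profile.end - e.profile.start for e in events) * 1e-9

print(f"Wall-clock for 1000 launches: {host_total:.3f} s")
print(f"GPU busy time:                {gpu_total:.3f} s")
```

If the wall-clock time dwarfs the GPU-busy time, the limiter is the host-side driver path, which is largely single-threaded - exactly the kind of effect that would leave the 3930K looking no better than chips with higher per-core clocks.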

What would indeed have been useful and interesting is to run the same test but using After Effects,
exploiting the Titan's CUDA, and ensuring the test platforms have a lot more RAM as a minimum, i.e. 32GB
(otherwise, testing Adobe apps reveals nothing except that one doesn't have enough RAM). That would
then show how each CPU/mbd combo is able to exploit available CUDA resources from the same card,
which, as you point out, is the Titan's native GPU acceleration mode.

The existing results allow one to infer that the AMD CPUs are not very good at exploiting OpenCL, but I
think it might be a single-threaded IPC issue, given the positioning of the 3930K result. I don't know
whether this effect is specific to NV cards though, so it would be better to run an OpenCL test of this
kind using an AMD card, and then run an equivalent test with an NV card for CUDA. Not only would
this show in more detail how each CPU is able to exploit GPU resources, i.e. whether they differ in
their efficacy for OpenCL vs. CUDA (would a CUDA test show the 3930K doing better?), it would
also finally give an example data point for how well OpenCL compares to CUDA for acceleration in
the same application and render test, especially AE (though the way NV's newer cards work for
CUDA is such that running the same test with two 580 3GB cards would be shockingly revealing,
i.e. faster than a Titan).

I suggest using a 290 or 290X for the OpenCL test, and a 780 or 780 Ti for the CUDA test (the Titan
really isn't relevant anymore - these apps simply don't need the 64-bit FP capability most of the time).

Ian.

 


I agree; most of the customers I've talked to like Ivy Bridge, but moving over to Haswell hasn't been that big a deal to them. Of course, it's all moot anyway; the choice of what people can buy is shrinking as we "speak". Most first- and second-generation Sandy Bridge parts are gone, and Ivy Bridge is starting to show some shortages. AMD hasn't had a good new FX model since the FX-8350, unless you want to go with the FX 9000 series, which are expensive, run at 220 W, and require more expensive cooling solutions. The Haswells outperform the FXs core for core, and the Broadwell chip coming out next year will also use the same LGA1150 socket.
 

InvalidError

Titan
Moderator

While there is an LGA1150 Broadwell on the desktop roadmap, the "Broadwell-K" name slotted into Q4 2014 seems to imply there won't be any non-K variants to choose from, other than the BGA/PGA mobile chips launching in Q1 2014 if there aren't any new delays.

If you read the reports about the 9-series chipsets, I would not count on LGA1150 being much of a blessing, since it seems Intel changed something about voltage regulation on Broadwell which may not be compatible with the 8-series. So it seems LGA1150 could be one more entry on Intel's list of effectively single-CPU-generation motherboards.
 

If the FIVR change and other adjustments make Broadwell overclock like Sandy Bridge, I think enthusiasts will be happy. Keep in mind that Broadwell is supposed to be the small upgrade, while Haswell was supposed to be the large one (and failed to live up to that expectation).
 
Haswell was the "Tick" and Broadwell is supposed to be the "Tock" in Intel's tick-tock pattern. Rumors were that Haswell was going to be the only LGA1150 model and the last socketed CPU; Broadwell was going to be BGA and would require it to be soldered onto the motherboard. It was a bit of a surprise when Intel announced that Broadwell was going to be LGA1150. I don't think Intel would have done that and not made it compatible with the Haswell models, but time will tell.
 

No. Haswell was a tock and Broadwell is a tick. A tock is when they introduce a new microarchitecture, like Haswell or Sandy Bridge, and a tick is when they do a die shrink, e.g. from 32 to 22 nm with Ivy Bridge, or from 22 to 14 nm with Broadwell.

https://en.wikipedia.org/wiki/Intel_Tick-Tock
 



You may be right; it just sounds right to me going tick-tock rather than tock-tick, lol.
 


It won't. I think the one thing that's clear with the Core i architecture is that as it gets smaller, overclocking headroom shrinks. Every new Intel chip since SB has seen a 10-11% loss in overclocking ability, which almost perfectly matches the gains in IPC. I remember seeing some math which showed that the average SB overclock vs. the average Haswell overclock, once the IPC gain was factored in, made Haswell about 4% faster.

To expect Broadwell to break that cycle is a bit much, especially since all Intel will talk about with Broadwell is the iGPU and lower power draw.
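
As a rough sanity check of that figure (the clock speeds and IPC gain below are assumptions for illustration, not measured averages), the math works out something like this:

```python
# Back-of-the-envelope check of the "~4% faster" claim above.
# All figures here are assumed/typical values, not measured averages.
sb_oc_ghz = 4.7      # assumed typical Sandy Bridge max overclock
hsw_oc_ghz = 4.3     # assumed typical Haswell max overclock (~9% lower)
ipc_gain = 1.13      # assumed cumulative IPC gain from SB to Haswell

# For CPU-bound work, effective throughput scales roughly as clock * IPC.
effective_advantage = (hsw_oc_ghz * ipc_gain) / sb_oc_ghz - 1
print(f"Haswell advantage at max overclock: {effective_advantage:.1%}")  # ~3-4%
```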
 

I didn't say it would, but anyway... most of the loss in OC ability since SB came with Haswell, which is NOT a smaller architecture. It's 22 nm just like IB. The shrink from SB to IB did hamper OC a little, but that seems to be caused by the switch to thermal paste under the IHS, and a poor configuration of the IHS leaving too much of a gap down to the silicon. So the shrink doesn't seem to be much of a problem, but Haswell still sucked... which we then have to find other culprits for. The FIVR was already on the shortlist, and now it seems Intel is going to ditch it... that certainly doesn't weaken the case against the FIVR.
 

InvalidError

Titan
Moderator

I would not be so sure about Intel ditching it: a few weeks ago there was news that Broadwell-K would have 128MB of eDRAM, and I doubt the LGA1150 socket could spare that many pins to feed yet one more rail if the VRM was taken off-package.

If Intel wants to remove the FIVR from the CPU die for whatever reason but still keep most of the benefits of an on-package VRM, they could relocate the circuitry to their eDRAM chip and leverage it as a heat spreader. As an added benefit, the DRAM manufacturing process is probably more suitable for power electronics.
 

mapesdhs

Distinguished


The market may be fragmenting somewhat for home consumers, but there's no evidence
whatsoever that professional users are ditching desktops. One cannot use pro apps on
a touch device; the precision required isn't possible. When I think of how the menu system
in Maya operates, I shudder to think how that could ever be done on anything other than
a desktop.

Ian.

 

InvalidError

Titan
Moderator

The professional/prosumer/enthusiast market is less than 10% of the market, and if input device precision is the only thing you are worried about, keep in mind that you can plug a USB keyboard and mouse into, or pair Bluetooth ones with, most smartphones and tablets, so that argument does not hold water very well.

The bulk of the PC market is home PCs/laptops, office desktops and terminals. Tablets and smartphones are quickly catching up with the lower end of people's everyday computing requirements - the most common CPU-intensive task the avid social networking person does on a regular basis is transcoding H.264 video, and most modern devices can encode live 1080p to H.264 on the fly, which eliminates that transcoding altogether.
 
All the entry-level usage is certainly leaving desktops. That's been obvious for years; the more recent development is that it's beginning to leave laptops as well.

But high-end usage, whether professional or gaming, isn't leaving the desktop. At least not anytime soon. The migration of the entry-level usage to different devices isn't that much of a problem. It does have some detrimental effects, like Intel focusing on improving integrated GPU performance and power consumption rather than x86 performance. But it's not enough to destroy the desktop PC.
 

mapesdhs

Distinguished


You're welcome to try movie content creation with a smartphone or tablet! ROFL! :D What a joke...

Come on dude, get serious. I'm talking about pro apps here, not faffing around on a smart phone with
social media junk. Design an oil rig, animation, you name it. Titchy screen for that? I don't think so.

Ian.



 

InvalidError

Titan
Moderator

Engineers and architects running high-end CAD and other software on a professional basis would normally be running workstations with Xeon/Opteron CPUs and FirePro/Quadro GPUs instead of regular desktops. More or less the same goes for professional movie editing and special effects, often with a whole render farm processing previews and final cuts out of sight.

Most people do not do any such high-end stuff, or anything remotely close to it, on their PC, be that at home or at work. There is potential for 60-80% of the conventional PC/laptop market to simply disappear over the next few years, and that should have interesting effects on AMD's and Intel's product lines.
 

mapesdhs

Distinguished
That depends on the application - some are best run on gamer cards (Ensight, for example), others don't
need multiple CPUs. I know lots of such users. I can't imagine any of them doing their work on a non-desktop.

Plus, what you refer to as "high-end stuff" is now close to the kind of workload professionals used to cope
with a few years ago, especially manipulating HD video (and, not too far away, 4K), so the processing demands
of HD in the professional market some years back are now hitting home users as they try to cope with the video
data they're creating. Less relevant for things like pre-rendered effects of course, but definitely true for video.

Also, you've missed out a whole segment, namely the thousands of small businesses that cannot afford 'pro'
workstation hardware. I deal with lots of these; they use 2nd-hand hardware, gamer/consumer tech, or a
combo of both. Oh, and the topic was desktops btw, not laptops; the latter have limited usage IMO in the pro
space these days, as virtually all the modern displays are horrible widescreens (not good for many tasks,
given the small vertical height).

The demands of pro tasks and the increasing workload consumers are coping with are not tapering off, even
if the CPU market has utterly stagnated due to lack of competition. I just hope some company will fill that
void, which is supposed to be how the market works. There is a demand, and it ought to be met; IBM, where
are you?... (They have the resources and plenty of experience producing high-clocked CPUs with large
caches.)

Btw, maybe you're not familiar with how many pro apps work, but a lot of them do not tax modern CPUs
and GPUs to anywhere near 100% efficiency. Lots of them are single/dual-threaded only, stuffed with older
code; as such, modern games actually push PC tech much harder. There are numerous exceptions
too, like AE, but I've dealt with various pro apps which don't hammer PC hardware as much as games do. And
then even AE has oodles of bits of code which are more than a decade old, hamstrung by single-thread
bottlenecks or poor exploitation of multiple system components (e.g. GPUs).

Ian.

 

somebodyspecial

Honorable


Is anybody listening??... :)

I didn't realize memory would be such an issue, but I'd certainly like to see some AMD vs. NV (OpenCL vs. CUDA) testing to prove that in Adobe. In some instances, yes, I know data size is the biggest issue, but I'm saying I didn't realize it was an overall problem in all things Adobe (I thought that was more of a niche comment, but maybe not). Your comment brings up an interesting test I'd like to see, and maybe the difference between 8GB and 32GB too, since I don't know anyone doing any content work (creating, editing, etc.) on 4GB or less anyway, and memory is very cheap for a person EARNING from their PC all day (I would only have 32GB at today's prices if I were earning on my PC daily). So add that to my request: first CUDA vs. OpenCL, and if time allows, how much memory changes a semi-normal job (whatever people agree is a normal-sized data set - not looking for extremes, but at this point I'd take even that scenario... LOL). At least we'd have some relevant CUDA vs. OpenCL data we could make purchase decisions on. I'm sometimes baffled by how many different ways you have to say the same thing to get the point across, but I digress... ;) Sorry people like you have to keep seeing it.. :(

I think we're pretty much on the same page: we want more relevant data in a few areas of Tom's tests :) But I don't care to see ANY OpenCL test run on NV hardware if there is a CUDA option. OpenCL will NEVER be faster than CUDA if CUDA is already available in the same app, or even probably in a competing app that does the job. I mean, why would NV even bother if CUDA is already done for app X? They assume you're already using CUDA because you have a brain, and that is what you should do, right? Why is it like pulling teeth to pit one against the other? Considering AMD's lean towards OpenCL, I'd always like to see that used against CUDA (rather than anything else, unless we already know OpenGL is faster in a given test or something). Or more precisely, use the best each side can do, period, because that is why I'm reading these reviews. What is the best-case scenario for both sides in any test, and then can I afford the winner? :) When I get done reading an article on Tom's (or most other places) I have no idea if CUDA or OpenCL is the way to go. I know Tom's etc. likes OpenCL (I half understand why - it's open) but I have no idea why they pretend the two are not at war. They are, and all we want to know is who wins when one is pitted against the other. Citing one side's scores is useless for the most part without the other. I'm repeating myself again... LOL.

You may be right about what Chris would say his reasons are (but why do they avoid responding to questions about this?). But I'd ask: why bother showing me something that I won't ever do, or at best will rarely do, in real-world use? I mean like testing CPUs at 640x480 when that will never happen in my life again (heck, I'll never play at 1024x768 without wanting to jump off a cliff first). As you crank things up a bit in that situation, the platform may completely change the results anyway. While interesting, it's really not very useful to me to know who wins that contest (everything comes into play, and it's rarely a pure CPU contest in anyone's day). You might throw in one benchmark there for comparison/frame of reference, but I'd rather have the bulk closer to where I'd run (at least 1024x768, for example - still far below where I'm running, but a bit closer to reality while still illustrating the point).

Either way, I enjoy your posts and hope Tom's reads them and starts giving us more complete data (there's probably a better way to put that, but you get the point - more relevant data? Usable data? :)). You get it, and I'm certainly not repeating myself for your benefit :) I get more from some of your comments than from the parts of Tom's articles that pertain to this topic. That just seems wrong, since I expect to get YOUR type of data from the review itself, but thanks anyway :) I think you have a better idea of how it plays out in different situations than I do these days (with no PC business I'm now test-limited, so to speak, with few configs available to me). I know what I want to see, but I don't have as many ways to get the data on my own now. I used to just benchmark what I wanted myself if a site ignored the data I needed; I had to have that info to sell to engineering departments. I was building Pro/E/SolidWorks/CAD/FEA etc. stations for engineers and had access to the software and the cards, since I sold both back then, and for a time I worked with DEC Alphas (with FX!32 or not, depending on the app) and had helpers to give me trail files etc. Now I have to depend on websites, and it kind of sucks :) I miss those days (well, all the toys every week... LOL)... It was more fun back then in more ways than one.

Happy New Year :)
 

InvalidError

Titan
Moderator

For "Pro apps that do not tax anything anywhere near 100%," take a tablet, attach a keyboard, mouse, external displays and whatever else may be necessary, call it done.
 

No. Tablets have far lower single-threaded performance.
 

mapesdhs

Distinguished


Ah, no, it's not as simple as that, and to believe so is bizarre IMO. Such tasks almost always do require
GPU power far beyond that of a tablet (a Quadro 4000 in the last such system I built for an engineering
company). Then there's reliability, maintenance, support, OS issues, drivers, etc. It's just not as simple
as you're making out at all. Such apps may not be efficient, but they still need strong CPU/GPU/RAM
resources, and often I/O as well. It's just that they could be a lot better if they were recoded for modern tech,
but until then brute-force upgrades continue to be the main way companies using these apps obtain
speed improvements. That isn't going to change, and it's nothing to do with tablets. This obsession with
tablets these days is ridiculous. It's becoming as bad as the fever in the movie industry over 3D (another
pointless venture).

Ian.

 