The Core i7-4770K Review: Haswell Is Faster; Desktop Enthusiasts Yawn


InvalidError

Titan
Moderator

And in that case, you are into Xeon territory. Not necessarily 8-socket territory but at least LGA2011, preferably with ECC RAM.
 

mapesdhs

Distinguished
somebodyspecial writes:
> I didn't realize mem would be such an issue, but I'd certainly like
> to see some AMD vs. NV (opencl vs. cuda) to prove that in adobe.

Adobe what? ;D From what I've read, the various Adobe apps behave
quite differently for these things.


> saying I didn't realize it to be an overall problem in all things
> adobe (thought that to be more of a niche comment but maybe not).

It might not be, I don't know. I'm not familiar with how Photoshop
& Premiere handle data & RAM allocation. I only know it's very
important for AE.


> But your comment brings up an interesting test I'd like to see, and
> maybe the difference between 8/32gb, ...

Tom's has already noted many times in past articles that AE tests
done on 8GB systems should be taken with a pinch of salt, since
they're RAM-limited and thus not realistic.


> I'd take even that scenario...LOL). At least we'd have some
> relevant CUDA vs. OpenCL data we could make purchase decisions on. I

I was going to say I'd try this, but I don't have a modern AMD
card yet. Still hunting. I do plan on testing AE on a couple of
older configs with 16GB to see how it behaves, but not yet; other
things to sort first.


> the same thing to get the point across but I digress... ;) Sorry
> people like you have to keep seeing it.. :(

I understand your implied point, but alas in the real world it's
more difficult to demonstrate some of the things you're asking
about. Even with the same application, two different datasets
might behave quite differently. Simply using or not using an old
effect like Shatter can completely alter the amount of RAM the
app grabs, how many threads are involved, the GPU loading, etc.,
which means the bottlenecks can change from one part of a render
to another, or indeed during the same frame.


> data in a few areas of toms tests :) But I don't care to see ANY
> OpenCL test run on NV hardware if there is a CUDA option. Opencl

I understand why, but including both types of test would let someone
with an existing AMD setup compare what happens when upgrading to a
newer AMD card vs. switching to NVIDIA.


> one is pitted against the other. Citing one side's scores is

I agree, though sadly I do get the impression that Adobe itself is
deliberately leaning towards AMD GPUs. Maybe it's a cost issue.


> You may be right in what Chris would say for his reasons (but why
> do they avoid responding to questions about this?). But I'd ask why

Avoid? Could be they're just busy. They have a huge workload. :D


>in real world use? I mean like testing cpus in 640x480 when it will

Usually such measures are taken in order to isolate the testing in
a particular way, to focus the workload on the CPU; otherwise it
becomes a system test rather than a CPU test. But I know what you mean.


> Either way, I enjoy your posts and hope Tom's read them and starts
> giving us more complete data (probably a better way to put that but

A friend of mine is sorting out an AE CUDA test they could use, but
whether or not they can do the kind of tests you've been describing,
that I don't know. Limited time, resources, etc. Hard to say.


> ... I get more from some
> of your comments than the parts of toms articles that pertain to
> this topic. ...

Careful, if my head gets any bigger it'll need its own post code. :D


> with few configs available to me now). ...

I have a fairly wide range now, more than a dozen mbds/systems
with over a hundred different possible CPU/GPU combinations. I've
certainly come to appreciate Tom's authors' comments about the
practical issues involved in doing such tests. It's hellishly time
consuming.


> building pro-e/solidworks/cad/FEA etc stations for engineers and had

That's sort of what I've started doing, though ironically I don't
have any of those apps to use for real-world tests, only benchmarks
like Viewperf, so I can't test some of the things I mentioned
earlier such as big data sets that might stretch main RAM or GPU RAM.


> for a time worked with DEC Alphas also with FX!32 or not depending

Ahh, DEC Alphas... how I once longed for a 21264 just for
fiddling. :D But that was before I bagged a 32-CPU SGI... ;)


> Happy New Year :)

Ditto. 8)

Ian.

 

mapesdhs

Distinguished


Yes, entirely agree, which is why - for someone without the budget for a Xeon setup - I built
an X79 system (ASUS P9X79 WS): 3930K C2 @ 4.7, 64GB @ 2133, Quadro 4000, with three
GTX 580 3GB cards for CUDA, etc.

Ian.





 

InvalidError

Titan
Moderator

With SoC performance almost doubling every year while PC performance has stagnated for much of the past four years, it looks to me like the gap is closing furiously fast.
 

Some of the SoC performance gains are from adding cores though, which doesn't affect single-threaded performance.
 

InvalidError

Titan
Moderator

Most of this year's SoCs are quads and most of last year's SoCs were also quads. Most of the performance improvements over the past year came from architectural improvements in the ARM cores, faster RAM and 50-75% higher peak clock rates... 2013's quads were about twice as fast as 2012's.

This year should see more mainstream use of ARMv8 and manufacturing at 20nm or smaller so another substantial jump in IPC and clock rates is quite likely for 2014. I would not be surprised if per-core performance almost doubled again and that would bring ARM throughput close to par with x86.
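
To put rough numbers on that (purely illustrative figures, not measurements), here is a quick Python sketch of how fast a yearly-doubling ARM core would close the gap on an x86 core improving ~10% per year, assuming ARM starts at a quarter of x86 per-core performance:

```python
# Purely illustrative projection; the starting gap and growth rates below
# are assumptions for the sake of argument, not measured figures.
arm, x86 = 0.25, 1.0  # assume ARM per-core perf starts at 25% of x86
year = 2013
while arm < x86:
    arm *= 2.0    # assumed yearly doubling of ARM per-core performance
    x86 *= 1.10   # assumed ~10%/year x86 per-core improvement
    year += 1
    print(f"{year}: ARM at {arm / x86:.0%} of x86 per-core performance")
# 2014: 45%, 2015: 83%, 2016: ARM overtakes under these assumptions
```

Under those assumptions, the gap closes in about three years, which is why even rough yearly doubling matters.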
 

biohazrdfear

Honorable
Mar 1, 2013
340
0
10,860
I don't plan on upgrading any time soon. I waited years after the original i7 lineup before swapping out my Core 2 Quad. And even then, I wasn't going to upgrade simply because my Core 2 was doing all it needed. Then I was forced to upgrade. I'll just keep holding onto my i7 3930K until Intel makes a chip that blows me out of my seat. Like, literally blows me out of my seat.
 

InvalidError

Titan
Moderator

That probably won't happen the way you expect it to.

About two years from now, you'll probably start itching for USB 3.1, PCIe 4.0, SATA 3.1/3.2, 3200+ MT/s DDR4, etc.

So even if the CPU is still adequate performance-wise, the platform will start feeling considerably outdated, even though none of those updates are much to get excited about for most people... you may end up itching for Skylake-E anyway.
 

PCIe 4.0? PCIe 3.0 is still not really necessary today, there's no way PCIe 4.0 would be an advantage within just a couple of years.
 


The point he is trying to make is that platforms become outdated before the CPU speed starts to become lacking.
Especially when going for the high end like the 2011 platform.
 

I don't disagree. SATA Express and USB3.1 are nice upgrades. I'm just saying PCIe 4.0 isn't going to matter anytime soon.
 

InvalidError

Titan
Moderator

One word: GPGPU.

The practical limit on how small a GPGPU computation can be before it is no longer worth offloading depends heavily on the time spent feeding data to the GPU and extracting results afterwards. After all, it does not matter how much more efficient or faster a GPGPU might be at a given time-critical task if shoving the data in and out of the GPU takes longer than doing the same computation directly on the more expensive, power-hungry and busy-with-other-things host CPU.
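
For anyone who wants to play with that break-even point, here is a back-of-envelope model in Python. Every number in it is an illustrative assumption, not a measurement from real hardware:

```python
# Back-of-envelope offload model; all figures below are made-up
# illustrative assumptions, not benchmarks of any real CPU or GPU.
LAUNCH_OVERHEAD = 20e-6  # assumed fixed cost per offload (launch + sync), s
PCIE_GBPS  = 12.0        # assumed effective host<->GPU transfer rate, GB/s
CPU_GFLOPS = 50.0        # assumed sustained CPU throughput
GPU_GFLOPS = 500.0       # assumed sustained GPU throughput
FLOPS_PER_BYTE = 20.0    # assumed arithmetic intensity of the workload

def cpu_time(nbytes):
    return nbytes * FLOPS_PER_BYTE / (CPU_GFLOPS * 1e9)

def gpu_time(nbytes):
    transfer = 2 * nbytes / (PCIE_GBPS * 1e9)  # data in + results back out
    compute = nbytes * FLOPS_PER_BYTE / (GPU_GFLOPS * 1e9)
    return LAUNCH_OVERHEAD + transfer + compute

for mb in (0.01, 0.1, 1.0, 10.0):
    nbytes = mb * 1e6
    winner = "GPU" if gpu_time(nbytes) < cpu_time(nbytes) else "CPU"
    print(f"{mb:5.2f} MB: CPU {cpu_time(nbytes)*1e6:7.1f} us, "
          f"GPU {gpu_time(nbytes)*1e6:7.1f} us -> {winner}")
```

Under these assumptions the CPU wins below roughly 0.1MB and the GPU wins above it; raising the transfer rate (e.g. a faster PCIe generation) pushes that break-even size down, which is the whole argument for faster lanes.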

If you haven't been living under a rock for the past year, you should know by now that just about every significant CPU, GPU, SoC, IGP IP core, middleware, etc. vendor has set at least one foot on the GPGPU train. In principle, that should mean GPGPU has a high probability of happening on a significant scale.
 
Well... then there is DDR4.

On a point-to-point bus with dedicated address space, PCIe Gen4 lanes could potentially become a bottleneck. I haven't done the math, but take a quad-channel IMC, fast DDR4, dedicated address space for drives, GPUs and other 'peripherals'... all in 'serial', and you will need XXXX MT/s to keep everything well fed.
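
For what it's worth, the standard published rates make the comparison easy to sketch; DDR4-3200 and x16 link widths below are just examples:

```python
# DDR4: one channel is 64 bits (8 bytes) wide; MT/s = mega-transfers/s.
ddr4_mts, channels = 3200, 4
mem_gbps = ddr4_mts * 1e6 * 8 * channels / 1e9
print(f"Quad-channel DDR4-{ddr4_mts}: {mem_gbps:.1f} GB/s")  # 102.4 GB/s

# PCIe: 8 GT/s (Gen3) / 16 GT/s (Gen4) per lane with 128b/130b encoding.
for gen, gts in (("3.0", 8), ("4.0", 16)):
    lane_gbps = gts * 1e9 * (128 / 130) / 8 / 1e9
    print(f"PCIe {gen} x16: {lane_gbps * 16:.1f} GB/s")  # ~15.8 / ~31.5 GB/s
```

So a quad-channel DDR4-3200 IMC can move ~102 GB/s while even a Gen4 x16 link tops out around 31.5 GB/s; a device trying to stream memory-resident data over PCIe really could find the lanes to be the bottleneck.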

 

Ahadihunter1

Honorable
Jan 16, 2014
7
0
10,510
Well, what did you expect, guys? Intel is owned by the rich to stay rich... You think those bastards would generously make a CPU ten generations ahead to revolutionize the world? Absolutely not! Until these sons of bitches see a hand-made CPU, built by a self-made scientist in his basement, that runs stable at 300GHz and 30 degrees under load, this company of the rich will do nothing.
 

InvalidError

Titan
Moderator

If doing better than what Intel is doing now were so easy, the 50 or so lesser-known companies that also design CPUs, SoCs and microcontrollers would be doing it too. Nobody is making miracle chips because nobody knows how to make them in a cost-effective, mass-producible way.

As for your basement scientist creating a 300GHz CPU: the best labs in the world are barely able to make more than a handful of transistors work together at anywhere near that speed, so good luck getting billions of them to operate on a ~3ps cycle. At 3ps, even thermal expansion becomes significant enough to skew timings pretty badly, and the amount of power wasted on clock signal distribution would likely be enormous.
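
The speed-of-light arithmetic alone makes the point; nothing controversial in these numbers:

```python
# At 300 GHz, one clock cycle is ~3.3 ps, and even light in vacuum only
# covers ~1 mm in that time. On-chip signals are considerably slower than
# c, so a die more than a fraction of a millimetre across couldn't even
# distribute a clean clock edge within one cycle.
f = 300e9                     # 300 GHz clock
period = 1 / f                # seconds per cycle
c = 3.0e8                     # speed of light in vacuum, m/s
print(f"Cycle time: {period * 1e12:.2f} ps")             # ~3.33 ps
print(f"Light travels {c * period * 1e3:.2f} mm/cycle")  # ~1.0 mm
```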
 

InvalidError

Titan
Moderator

What most people think of when they read that name is performance doubling every 18 months, but lately, doubling performance has taken closer to 4-5 years. That interpretation of it is definitely dead and buried.
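
Expressed as annual growth, the difference between those two doubling rates is stark:

```python
# Convert a time-to-double into an equivalent yearly growth rate.
for label, years_to_double in (("every 18 months", 1.5),
                               ("every 4.5 years", 4.5)):
    growth = 2 ** (1 / years_to_double) - 1
    print(f"Doubling {label}: {growth:.1%} per year")
# Doubling every 18 months: 58.7% per year
# Doubling every 4.5 years: 16.7% per year
```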

The transistor-count part is still alive, but I have doubts about how much longer it can keep going - most of the transistor count on AMD's and Intel's newer chips comes from IGPs on the desktop side and caches on the server side. That helps pad the transistor count and maintain Moore's Law's self-fulfilling-prophecy status, but it does not help raw CPU performance much.
 


There is a lot more to Moore's observation than the 18-month doubling.

http://en.wikipedia.org/wiki/Moore's_law

Read "Ultimate limits of the law". Its a little on the extreme limit. But the concept remains the same. The closer we get to that barrier, the more difficult it becomes to move forward in the leaps and bounds from yesteryear.
 

JonnyDough

Distinguished
Feb 24, 2007
2,235
3
19,865
Most over-quoted quote ever. Even Moore himself admitted it was taken out of context. Unrelated: I wish it were legal to virtually hit bronies in the face just for taking a girlie children's show and following it like fanboys. But alas, I digress; there are worse things than trolling, although trolling is still trolling, so I'll stop there... staying on topic...

Even if you can afford Haswell, it would make financial sense to buy an LGA 1155 mobo and throw Ivy Bridge in it - except that 1155 hasn't really come down in price at all since Haswell was released, a testament to just how "yawn" this topic always was... This discussion was obviously tanked from the start and should never have been created. Let's lay it to rest now and ask the moderators to close the thread.
 


I would much rather be scorned for being a Brony than be like THAT to people.

Hope you live a long and happy life. :p

 

JonnyDough

Distinguished
Feb 24, 2007
2,235
3
19,865
Dude, the Pinkie Pie crap on Steam is just ridiculous. It's like people are following this just so they can be ridiculed and be a part of something. If only that something were actually important. :\
 