Skylake: Intel's Core i7-6700K And i5-6600K

I really liked this:

"WinZip Pro 19 (Compression)

"The trick with this benchmark is to compress different types of content, such as text, pictures, multimedia files, videos and applications, without producing troublesome overhead due to time-sensitive file operations. This is why we copy all 3.02GB worth of data to an ISO file that can be compressed in one go."

This is an excellent approach... I've often wondered how much of compression throughput is limited by file-system overhead, especially with 8 or more threads going. This approach takes file-system artifacts off the table.

FYI, in case you ever want a comparison point: a mainframe zEDC compression card sustains a bit more than a gigabyte/sec of deflate compression on a single stream....
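
Not from the article, but a rough sketch of how that file-system-overhead question could be checked at home: time deflate over a directory of many small files versus over one pre-built ISO of the same data. Paths and sizes below are placeholders, not the harness Tom's Hardware actually ran, and Python's zipfile is single-threaded, so this only isolates the file-system side, not WinZip's multi-thread scaling.

```python
import time
import zipfile
from pathlib import Path

def zip_many(src_dir: Path, out: Path) -> float:
    """Deflate every file in a directory tree individually (per-file overhead included)."""
    start = time.perf_counter()
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in src_dir.rglob("*"):
            if f.is_file():
                zf.write(f, f.relative_to(src_dir))
    return time.perf_counter() - start

def zip_one(iso_path: Path, out: Path) -> float:
    """Deflate one big ISO in a single go (file-system artifacts off the table)."""
    start = time.perf_counter()
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(iso_path, iso_path.name)
    return time.perf_counter() - start

if __name__ == "__main__":
    # Placeholder paths: point both at the same few GB of mixed content.
    many = zip_many(Path("testdata"), Path("many_files.zip"))
    one = zip_one(Path("testdata.iso"), Path("single_iso.zip"))
    print(f"many small files: {many:.1f}s, single ISO: {one:.1f}s")
```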
 
What the heck, Intel? So, you put great integrated graphics into Broadwell, then nerf it for Skylake? I guess you had to find a way to help sell your 'paper launch' of Broadwell. I really hope Zen makes you guys wake up, although it more than likely won't.

..."nerf it"? That's a new one on me. But I like it. Adding it to my personal lexicon.
 
"nerf it" - a developer bringing out the "nerf bat" to change, for example, an item in a video game because it had unrealistic, over-powered properties, and was toned down to a more reasonable level.

Nowadays, like right here, it isn't used properly. I may also be wrong (it's not uncommon, tbh).

Skylake is looking pretty good, but my needs are still quite satisfied by Sandy Bridge. I recently upped the clock rate by another 1,000 MHz to try to keep pace with all that fancy new software demanding more and more from it (this last part would be italicized if I knew how to do that).
 
"Should everything we learn support the data we generated today, then I think it’s safe to say Skylake will become the first architecture to really get enthusiasts excited since Sandy Bridge—and not even entirely because of the processors themselves."

It's pretty hard to get excited when, taking overclocking into account on all sides, you get just a ~25% performance increase over what is now a FOUR-YEAR-OLD chip. Oh, and of course, to get your massive 25% increase, you need, in addition to the new CPU, a new motherboard and new RAM. Is the "excitement" threshold so low these days? I'll pass on this one.
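
For what it's worth, a quick back-of-the-envelope on that figure: if the ~25% claimed above is treated as compounding over the four years since Sandy Bridge, it works out to under 6% per year (a rough illustration of the poster's point, not a measured number).

```python
gain_total = 1.25   # ~25% over Sandy Bridge, per the post above (assumed figure)
years = 4
per_year = gain_total ** (1 / years) - 1
print(f"~{per_year:.1%} per year")  # -> ~5.7% per year
```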
 


1. I'd like to see some dedicated text discussing the QUALITY of rips with Skylake vs. Haswell (and Broadwell if possible) in Handbrake (since everyone uses it, I guess), and possibly vs. Adobe Media Encoder with CUDA (since you used Adobe already, ticking the CUDA box when 80% of us own Nvidia would be a good test too). Are they still ripping fast but suffering in quality? Haswell was a regression in quality for speed. NVENC has gotten much better, I've heard, and would likely be my next purchase coming from my 5850. Heck, test AMD in there too if you can. Who's better at ripping quality stuff fast these days? Given the speeds I saw recently with Adobe (and a monthly price so cheap now), I'm wondering whether it's better to go with them, use CUDA, and save massive time, or go with Skylake (a possible next CPU; I'd hand my 4790 down to my dad's PC) and QuickSync. My point is, I need to rip a LOT faster than the Haswell CPU-only setup I use now. 🙁 AnandTech did two articles showing the weakness of Haswell but did nothing with Broadwell last week (not a peep), and now there's nothing here with Skylake. Is quality better now or not? (A rough idea of how quality could be scored is sketched after this post.)

2. I'd like to see a ~$200-250 discrete card paired with a $100-150 CPU (depending on GPU price) to show what you can do gaming with a cheaper CPU + discrete combo (what's the point of GTX 750s that basically replace integrated graphics with ZERO gain?), so what you end up with is basically the PRICE of Intel's $350 chip alone.

That would show what a gamer who perhaps doesn't need the most powerful CPU, but just wants great gaming on the cheap, could do. I'm thinking Intel would look silly vs. a $150 CPU (pick a great one here) + $200 video card. The question is, what's the best bang for the buck with those two picks? I keep seeing you test what basically amounts to almost no upgrade (a 750) vs. straight Intel. What you NEED to test is something that actually changes the scores a LOT (meaning $150 CPU + $200 GPU). Obviously that wouldn't be a 980, but something in the $200 range, like a Radeon 380 2GB (Newegg has multiple brands for $199), or a $130 CPU with one of the Radeon 380 4GB models at $219. If a gamer doesn't care about pure CPU power, pretty much nothing you showed helps that guy. He's interested in a "good enough" CPU and a GAMING GPU. You really can't have fun on low details, etc., in this article. AnandTech tested with multiple GPUs, but alas only CPUs over $300, defeating my questions here. Sure, you'd buy an AMD APU alone if you're completely broke, but what about people (gamers who aren't completely broke) with $300-350 to spend on CPU/GPU TOTAL, instead of just an i7 and such? They tested with $250/$560 CPUs instead of a cheaper CPU plus a decent GPU.

I hope you can add these two things at some point. I have other things I'd like to see, but these two pieces of info would be far more useful to a LOT of people trying to do both things (ripping for quality, and REAL gaming for the price of Intel's chips alone), myself included (ripping for me, builds for others inside and outside the family).
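
As referenced in item 1, here is a hedged sketch of how "rip quality" could be scored instead of just speed: encode the same source once per encoder (QuickSync, NVENC, x264, etc.), then compare each output against the original with ffmpeg's ssim filter. This assumes ffmpeg is on the PATH; the file names are placeholders, not anything from the article's test suite.

```python
import re
import subprocess

def ssim_score(encoded: str, reference: str) -> float:
    """Run ffmpeg's ssim filter and pull the overall 'All:' score from stderr."""
    result = subprocess.run(
        ["ffmpeg", "-i", encoded, "-i", reference,
         "-lavfi", "ssim", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    match = re.search(r"All:([\d.]+)", result.stderr)
    if not match:
        raise RuntimeError("ssim result not found in ffmpeg output")
    return float(match.group(1))

if __name__ == "__main__":
    # Placeholder outputs from each encoder, all made from the same source clip.
    for name, clip in [("quicksync", "out_qsv.mp4"),
                       ("nvenc", "out_nvenc.mp4"),
                       ("x264", "out_x264.mp4")]:
        print(f"{name}: SSIM {ssim_score(clip, 'source.mkv'):.4f}")
```

Closer to 1.0 means closer to the source; pairing that score with encode time would give the speed-vs-quality picture Haswell was criticized for.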
 
Gotta love how Intel found a way to look good in at least one performance area: the IGP. They just add a bit each time, but "100% of nothing is still nothing" holds true (when comparing to a discrete card). I wish they'd spent that silicon real estate on CPU performance instead....
 



Might want to consider being a little more amiable on the open forum threads, man...
 

Even Broadwell with 128MB of eDRAM falls ~20% short of beating the GTX 750 at low detail, and the IGP would likely start to fall apart once texture and other details get knocked up a few notches, to where 128MB no longer cuts it while 1-2GB of VRAM still does.

Nobody seems to have bothered benchmarking GT3e beyond low details to see how quickly its performance trails off when details get bumped up, compared to an R7 260 or GTX 750.
 


Did you read the links closely, or just quickly look at which CPU was at the top and stop there? There was almost no difference in performance in every single test. They swapped positions a lot, too. The only CPU that sometimes showed notable improvements was the six-core CPU, not the Broadwell.

Everything was within 1-2% on every link, and even then they swapped spots, which falls within the margin of error.
 
Re: my previous post - please test at playable settings. Forgot to mention that. They did tests at AnandTech showing some good info, but 17-20fps average with ultra settings on others (Attila)? Pointless, never mind the slideshow if you look at MIN fps. Pick settings (per game) that hit >40fps average so we get a good idea of playable settings (one reason I like reading HardOCP). Why they wasted time on 4K is beyond me on $245 cards. I'd rather see far more games tested; I mean, 4K ultra settings are barely worth discussing at 980 Ti levels, especially knowing <5% of us use it at all. Heck, Attila over there is barely in the mid-30s average at 1080p with a 980 Strix card ($550 GPU), and only on $250+ CPUs!
 


Might be true, but you get the point. I would never buy a 750 if I had one of those CPUs. I would aim much higher, and (if I'm a gamer on a budget) would drop down on the CPU to get there, as you might not lose much on the CPU side but gain MASSIVELY on the GPU side for the fun stuff. I.e., $150 CPU, $200 GPU. For a gamer, this would be a much better purchase than a GTX 750 plus anything; even broke people can make better decisions than that. It's OK to show what Tom's showed, but it's a situation not really suited for middle-of-the-road gamers. A few more tests would have shown the gamer side instead of completely ignoring the middle ground, so to speak. AnandTech got close and basically covered mid-range, but some of it with strange settings that prove useless.

I guess their point in going so low (all sites do this) is that you can actually play, even if the details completely suck and your game looks like 1999... LOL.
 


Oh yeah, totally, it's not like Iris Pro doesn't take up a large part of the CPU die and resources or anything.
 


That's because the 5960X is in a DIFFERENT CATEGORY.

The non-Extreme Edition i7s are only quad-core because they are aimed at the mainstream market, while the hexa-core and octa-core Extreme Edition CPUs are designed for the high-end market.
 
Guys saying "Zen" this and "Zen" that realize it's not coming out until this time next year, right? AMD has said they don't plan to release it until LATE 2016.

Meanwhile, Intel is still planning to release another new architecture on the 14nm process shortly, and I think they planned this so that any Zen hype is very short-lived.
 
On the "How We Tested" page you have a X79 motherboard listed, but I do not see any Sandy Bridge-E or Ivy Bridge-E CPUs on any of the test results.
You also have the X99 board listed as Z99.
 
Still 4 cores.... I'm sticking with my Q6600.

Then you really are missing out; four cores or not, a current i5 (let alone an i7) will simply destroy the old Q6600 C2Q. It was great in its day, but it's very old hat now, and the lack of features on the board is worse still.
You do know that your Q6600 is astronomically slower than Skylake in every single department, right? By your logic, the Phenom II X6 is better than the i7-6700K.

I think you should consider upgrading. You won't regret it, promise.

I'm on a Phenom II 955 BE, no OC. Until recently, just playing games, there was next to no reason to upgrade my CPU.

You have to remember, once PCs went dual-core and quad-core, there's been damn near no reason to upgrade outside of hardware failure for most people. There are only a few games where I can't enjoy 60fps with everything maxed at 1920x1200.

If you're rendering, then there's a very good reason to upgrade every few CPU generations, but for general computer use... please, an SSD or a better GPU will make everything you care about faster, as opposed to a new CPU.

Also, for any of the games that don't run at 60fps, there's an argument to be made for poor coding as opposed to hardware limitations.
 
I use my SanDisk Extreme 16GB and get over 250 MB/sec in Windows, and I store a lot of video. I guess it'd be more useful for "digital packrats" :)



 


Yeah, I am sorry; I just assumed that people posting something like that would inform themselves properly and thus know about the dozens of reviews that show exactly that.
http://www.legitreviews.com/intel-core-i7-6700k-skylake-processor-review_169935/15
http://techreport.com/review/28751/intel-core-i7-6700k-skylake-processor-reviewed/6
http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/10
http://www.hardwareluxx.de/index.php/artikel/hardware/prozessoren/36200-skylake-core-i7-6700k-und-core-i5-6600k-im-test.html?start=9


Good work, Intel. Gaming performance is NOT better than Haswell's, and system cost is way up.
 
Which is why we need clock speed increases now.



 