AMD Ryzen 7 1800X CPU Review



I can understand that, but to me it screams that we will witness something similar to Phenom II, which launched and was quickly overshadowed by a new Intel lineup. It got a few refreshes, but from 2008 until 2011 it really did not do much, and its successor initially performed slightly worse than it did.

I don't see why AMD could not put 32 PCIe 3.0 lanes in its top-end CPUs, the way Broadwell-E has 40 from the CPU alone. That way it could actually compete at the full platform level.

While I will probably never use SLI or CFX, it is still a viable option that people sometimes want, and AMD will possibly lose sales for not being able to provide it, even if x16 3.0 doesn't benefit any SLI/CFX setup.

Time will tell.
 
This thread really is driving me kind of nuts, but I still keep replying here.

It's full of glass-half-empty/glass-half-full replies:
Screaming AMD fanboys crying 'foul' and calling the review biased.
Screaming Intel fanboys laughing inanely in their comments while secretly crying on the inside.
People with too much money and too little sense who preordered these CPUs while really having very little idea of, or need for, them.

And a sensible few who realise what this means for the consumer PC market as a whole: the disintegration of the monopoly Intel has held for the last 8 or 9 years. This really is what matters; be you Intel- or AMD-biased, nobody should be unhappy at this moment in time.

Me? I'm quite simply 'quietly' impressed, and I don't see any need for the shouting or screaming that's going on.

I'm going to sit back and wait for the hardware to stabilise a little more. I really, really want to see if the cheapest 1700 (non-X) can be overclocked to any significant degree on one of the fairly budget B350 boards.
If it can do even 3.5GHz stable on something like the B350 Prime, then it may be the chip I buy for my next build. The 1700X/1800X are too expensive for me as a general consumer; the 1700 is within my practical budget.

Otherwise I'm waiting for the 5 and 6 series chips to drop; they're the ones that hold a big interest for me and are what I'd consider the consumer-market chips.

A 4C/8T part with a 3.5-3.8GHz base clock at a $200-ish price point is just going to go straight for Intel's jugular; that's the price point I'd like to see.
 


It is interesting that you point out that Ryzen IS NOT a "general consumer gaming CPU". I am not exactly certain what your point is, either.

However, I can tell you this: the 8-core/16-thread Ryzen far outperforms ALL Intel 8-core/16-thread CPUs WITHOUT a dGPU.

Using FritzMark and several other chess benchmarks, Ryzen is demonstrated to be far superior.

ChessBase's FritzMark can support 64 cores.

http://en.chessbase.com/post/amd-releases-new-ryzen-processor#discuss

European consumers take FritzMark seriously. Tom's did too, I guess, until Intel stopped doing so well in it.

 


The other thing I've noticed on various review sites is that the 1700X and the 1700 provide darned-near the same gaming performance at 2560x1440 and 4K as the i7-7700K, as well as compared to each other (1700X vs. 1700). In the cases where they do not match the 7700K's performance, the framerates are still well above the 60 FPS mark.

Also, from what I understand, there's speculation that R5 6C/12T chips at 3.6GHz/4.0GHz will be released. I'm guessing these will perform similarly at above-1080p resolutions as well.

This is definitely going to get interesting...
 


Your point is well taken, and it is not about the fanboys being outraged over bias.

Collectively, the online media, of which Tom's is a part, has performed quite a hatchet job on AMD and cost them over $2 billion in valuation.

People's lives are impacted by this. Tom's needs to be a responsible journalist, not just another hack website like Motley Fool, etc.

The simple fact is that RYZEN was judged on the basis of an obsolete API and its competitor's dGPU, which Nvidia has not optimized to be fully compatible with ANY AMD CPU, for good reason. Nvidia pushes game enthusiasts toward Intel silicon with incompatible hardware. Nvidia has no incentive to make GTX cards work well with AMD x86-64.

Ryzen should NOT even have been tested as a gaming CPU until Vega is released, likely next month.

I am willing to bet that there is NO Ryzen purchaser who intends to use that CPU with Nvidia gaming kit.

Likely uses are with FirePro or Radeon kit. No self-respecting AMD user would even consider the alternative.

Tom's Hardware just could not wait to rush to judgment, and in so doing it joined the rest of the online media hacks performing a hatchet job on AMD.

And that hurt people.

What are the facts?

Outside of any reliable graphics benchmark, AMD RYZEN CRUSHES ALL 8-CORE/16-THREAD INTEL CPUs IN ALL COMPUTE BENCHMARKS.

When Vega is released, will Tom's revisit this piece? If they have integrity, then they will.

Are they just another online hack, or does Tom's Hardware want to be taken seriously?



 


It is funny that you should mention your use of SLI or CFX. Both are DX11 features, and Microsoft has not revised DX11 since DX11.3 in 2015.
AMD has also announced that it has stopped supporting CFX, and Nvidia has evidently dropped 3- and 4-way SLI support.

Why?

DX12 gives developers the ability to code directly to all available GPU resources regardless of the GPU's vendor. This is called EMA: Explicit Multi-Adapter.
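For anyone curious what EMA looks like at the API level, here is a minimal sketch of the starting point: enumerating every adapter in the system, whatever its vendor, and creating a D3D12 device on each. (This is my own illustration, assuming the Windows 10 SDK; error handling is omitted.)

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Walk every GPU in the system and create a D3D12 device on each one,
// regardless of vendor; this is the entry point for explicit multi-adapter.
std::vector<ComPtr<ID3D12Device>> CreateDevicesOnAllAdapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            devices.push_back(device);
        }
    }
    return devices;
}
```

Everything past this point, sharing resources and dividing work between the devices, is left to the developer, which is exactly why EMA support has to be implemented per game.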

SLI and CFX are just as obsolete as DX11.1. However, most folks don't even know it.

I find it ironic that the online media makes a big deal about the latest hardware kit but benchmarks it with an obsolete and now unsupported API: DX11.

The last revision of DX11 was DX11.3, back in 2015. The last version for Windows 7 was DX11.1.

Microsoft shipped the old DX11.3 with Windows 10 and plans no further revisions, as DX12 has replaced it, and ALL gaming will support DX12 by Christmas 2017. There was a rumour of a DX11.4, or of the DX11.x used with the Xbox, but that died with Windows 9.

Microsoft wants consumers on Windows 10 and off of Win 7 and Win 8. DX11 is dead, with no further revisions planned.

Yet the viability of AMD's Ryzen was judged with its competitor's dGPU running an obsolete API, DX11, while AMD's most competitive advantage, Asynchronous Compute, was disabled.

Tom's Hardware DISABLED AMD's most competitive hardware advantage under DX12: Asynchronous Compute!! By now everybody should know this is AMD hardware IP, executed poorly by Nvidia and ONLY through software emulation.

 


I still don't understand why people make this statement.

I have a 4770K and GTX 1070 and I can stream my games just fine. Never had an issue.
My buddy is running a 4770K with a 7970 and can stream his games just fine too.

I don't get why people make it sound like, for some reason, Ryzen will let you stream games better, or that it is the only CPU that can stream games.
 

DX12 is exclusive to Windows 10.

Windows 7 still has a 48% market share, while Windows 8.x holds another ~25%. That's roughly three quarters of PCs unable to run DX12, which is why most game developers are still targeting DX11 even for new games.

Microsoft may have stopped active development of DX11 (no more new features), but it will remain a supported API, receiving bug fixes if bugs severe enough to warrant them are found, until every OS that depends on it is officially retired.
 


I asked some very pointed questions, and you chose to respond NOT with a cogent, reasoned response addressing my questions on their merits, but rather with a personal attack and insult.

Who is the TROLL?

Tom's Hardware is part and parcel of the online media that collectively launched a hatchet job on AMD that cost them over $2 billion in valuation.

That puts people's livelihoods at risk.

And the best you can come up with is calling me a TROLL?

I am heavily critical of Tom's Hardware, and justifiably so. If Tom's can criticize a product, then they should be able to take criticism as well.

The responsibility of a FREE PRESS is integrity and balance. Tom's Hardware has demonstrated neither with this piece.

When Tom's Hardware judges a major product release on the basis of its competitor's GPU, an obsolete API, and with AMD's major hardware advantage, Async Compute, disabled, then yes, I am going to criticize their procedures.

If all you can do, instead of debating my argument with a cogent response, is call for my banning as a troll, then I think the deficiency is somehow with YOU and not with my criticism.

You have offered nothing but non sequiturs.

 
akamateau, async compute is a GPU-based feature. Had it been enabled, it would have improved performance on both systems. Had it been used on an RX 480, it would have improved the RX's performance more. However, to level the playing field, Tom's (and everyone else, for that matter) used an overpowered GPU to take that out of the equation, since it's not fair to benchmark a CPU on the special features of a GPU.
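(For reference, at the API level "async compute" just means a second D3D12 command queue of type COMPUTE that the hardware can service alongside the graphics queue. A minimal sketch, assuming an already-created `device`; fencing and synchronization are omitted:)

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create a graphics queue and a compute-only queue on the same device.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // DIRECT queue: accepts draw, copy, and compute work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // COMPUTE queue: dispatches submitted here may overlap rendering
    // on the DIRECT queue; this overlap is what "async compute" means.
    D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
    cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));
}
```

On GCN cards the compute queue maps onto the Asynchronous Compute Engines; on hardware without equivalent scheduling, the driver can end up serializing the two queues, which is part of why the benefit varies by vendor.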

Multi-adapter is supported in ONE GAME. And either way, the point is to take the GPU out of the performance equation, not to introduce the GPU's benefits and faults, skewing the results.

There are plenty of very good and clear explanations for your concerns (including the valuation drop, which is NORMAL once the hype train ends for any product; it's not going to affect anyone's job).

As for your other concerns, they have been addressed by the reviewers and authors here. We understand what you are thinking, but at this point all you are doing is inciting unrest in the forum. As I posted in your other thread, you need to let this go if you want to continue to post here.
 
Curious to know if this was tested with the same amount of memory/channels as the Ryzen board? I believe the Intel board has quad-channel memory.
 



"Async compute is a GPU based feature."

Not entirely true!

Asynch Compute allows the CPU to send data to the GPU at a much faster rate. With the OBSOLETE DX11 data can ONLY be sent serially; CORE 1 followed by CORE 2 etc. Asynch Compute Engines on the GPU manage the data the CPU sends asynchronously vastly improving the throughput of graphic data to the GPU. Ie; C1, C8, C2, C4, C1 etc... when the data is ready to shoot down the shader pipeline it goes. The GPU manages it. This was solved by AMD for the 8 core Jaguar in XBOX and PS4

http://wccftech.com/async-compute-praised-by-several-devs-was-key-to-hitting-performance-target-in-doom-on-consoles/

Why do you think the consoles used an 8-core APU? For looks?

https://www.pcper.com/reviews/Graphics-Cards/Whats-Asynchronous-Compute-3DMark-Time-Spy-Controversy

http://gamingbolt.com/fable-legends-xbox-one-using-asynchronous-compute-and-efficient-multi-threaded-rendering

http://www.redgamingtech.com/asynchronous-shaders-analysis-their-role-on-ps4-xbox-one-pc-according-to-amd/

The multi-core CPU then moves multi-threaded data faster.
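As a rough sketch of the DX12-side mechanism being described (my own illustration, not from any of the links above; it assumes an existing `device` and `queue`, and omits pipeline state and fencing): several CPU threads can record command lists in parallel and hand them to the GPU in one batch, where DX11 forced a single submission path.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <array>
#include <thread>

using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    constexpr int kThreads = 4;
    std::array<ComPtr<ID3D12CommandAllocator>, kThreads> allocators;
    std::array<ComPtr<ID3D12GraphicsCommandList>, kThreads> lists;
    std::array<std::thread, kThreads> workers;

    for (int i = 0; i < kThreads; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        // Each CPU thread records its own command list concurrently;
        // the single-threaded submission model of DX11 is gone.
        workers[i] = std::thread([&, i] {
            // ... record draw/dispatch calls on lists[i] here ...
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // One batched submission hands everything to the GPU at once.
    std::array<ID3D12CommandList*, kThreads> raw;
    for (int i = 0; i < kThreads; ++i) raw[i] = lists[i].Get();
    queue->ExecuteCommandLists(kThreads, raw.data());
}
```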

You are ALSO WRONG about Explicit Multi-Adapter.

ALL GAMES CODED WITH DX12 will support EMA.

You can read it here on Tom's Hardware:

http://www.tomshardware.com/answers/id-3028955/enable-directx-multi-gpu-support.html

https://www.onmsft.com/news/directx-12-update-brings-explicit-multi-adapter-support-gpus-work-better-together

LET ME REPEAT THAT: EXPLICIT MULTI-ADAPTER WORKS IN ALL DX12 GAMING.

SINCE VIRTUALLY ALL GAMING BY CHRISTMAS 2017 WILL BE DX12 GAMING, EXPLICIT MULTI-ADAPTER IS HUGE.

The consoles have given developers the incentive to code for Async Compute, which is why it was so important for AMD to supply the Xbox and PS4.
 


You do realize you linked to a post where *I* gave the best answer and confirmed that you CANNOT do that. There is nothing in that thread that says all DX12 games support it. Again, only AotS supports multi-adapter.

Regarding the article you just added, read it again:

DirectX 12 also makes it possible to maximize the use of two video cards simultaneously to improve performance, though it's not an automatic process, as developers have to work to support it in their games. However, according to a new report from Neowin, Microsoft will soon make it easier for developers to implement multi-GPU scaling.

Right now, the Explicit Multi-Adapter (EMA) feature that allows two DirectX 12 video cards to work in synergy is actually a low-level tool that is hard to implement in games and can even hurt performance if used incorrectly by developers. But Microsoft is working to add a new abstraction layer that will allow game developers to add basic multi-GPU support with only minimal coding. However, unlocking the full capabilities of EMA will still require them to implement it the hard way.

Also, you need to re-read the article you posted and realize that both Intel and AMD processors would have benefited from async compute if an AMD GPU had been utilized. Right now AMD does not have any current-model GPUs that can max out a system; hence, they were not used. And against the 6900K, an 8C/16T processor, it would have negated any benefit anyway.

The funny part here is that I pre-ordered an R7 1800X; I have it at home now, waiting for the motherboard to arrive, and I can't wait to use it. However, I am realistic about how it works. So before you call me a paid Intel shill, realize that (and note my current gaming system in my signature).

Again, we got it: you don't like the methods. We got it. Let it go.
 


Testing was done with dual-channel memory on both Intel and AMD to level the playing field.


 

Only Intel's LGA2011 CPUs are capable of running quad-channel memory, and if you are going to pay $600+ for a CPU, you aren't going to needlessly cripple it by populating only half of the channels, even less so for benchmarking purposes, where doing so would hand AMD an advantage that next to nobody actually buying LGA2011 CPUs and motherboards would give it.
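For a sense of the raw bandwidth at stake, here is a back-of-the-envelope calculation (using hypothetical DDR4-2400 figures purely for illustration):

```cpp
#include <cstdio>

int main()
{
    const double transfersPerSec = 2400e6; // DDR4-2400: 2400 MT/s
    const double bytesPerXfer    = 8.0;    // 64-bit channel = 8 bytes/transfer

    // Peak theoretical bandwidth = channels * transfer rate * bus width.
    const double dual = 2 * transfersPerSec * bytesPerXfer / 1e9;
    const double quad = 4 * transfersPerSec * bytesPerXfer / 1e9;

    std::printf("dual-channel: %.1f GB/s\n", dual); // ~38.4 GB/s
    std::printf("quad-channel: %.1f GB/s\n", quad); // ~76.8 GB/s
    return 0;
}
```

Halving the populated channels on an LGA2011 part halves its peak theoretical memory bandwidth, which is exactly the kind of artificial handicap a benchmark should avoid.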
 


"Also you need to re-read the article you posted and realize that both Intel and AMD processors would have benefited from Async Compute, if an AMD gpu was utilized. Right now AMD does not have any current model GPUs that can max out a system. Hence they were not used.


Intel and AMD would NOT have responded equally to the benchmark. Yes Intel benefits. However the point of RX 480 was SCALABILITY.

" Again we got it, you don't like the methods. We got it. let it go. "

Actually you do not get it.
Until you understand that you are part of a collective media sub-culture whose bias and agenda caused the a massive loss over $2BILLION in valuation and admit that you will not "GET IT'

And I will continue to criticize you just as you criticize others. Somebody once said "if you can't take the heat... get out of the kitchen".

What you so blithely write about impacts people lives and is very serious stuff.

And if you wrote this piece the way that you did with the intent to cause harm....?

If you want to be taken as a serious journalist then write as a serious journalist. Report the facts without bias.

You were in such a rush to get this online that you did not run a balanced test or write a factual piece. Omitting salient facts is a lie of omission, especially if done deliberately to support an agenda.

Since you are here, will you respond regarding the FritzMark, Star Swarm, 3DMark, and Super Pi omissions?

Did you run the Persistence of Vision Raytracer (POV-Ray)? Of course not; it would be another win for AMD. Ray tracing is a huge graphics compute test, and another one that did not fit your agenda, so you omitted it.

Run Star Swarm using an RX 480. The results will astonish you.

These are four benchmarks that Tom's did not include. Why? In the past you have.

Show your readers that you have no bias and run them.



 
I did not write the article. The writers and other contributors here have responded to your concerns throughout this thread. I will not repeat them for you; feel free to search through the thread.

The scalability of using two GPUs has nothing to do with the absolute performance of a CPU on a level playing field. That's how an unbiased review works.

And, again, AMD's valuation loss is completely normal: the hype train is over and the product is out. They are still at $11 billion, which is only the second time in their history the valuation has been this high. Projections show it going higher. Nobody is going to lose their job over this; in fact, AMD is headed for record profits.
 


You know, it's kind of hard to take someone seriously when they act like a crazed fanboy and just keep spouting the same thing over and over again.
 



Another personal attack and insult. I have written nothing but reasoned, though highly critical, posts, and the best that you can do is insult me and call me crazed. Isn't that the behavior of a troll? I'm not sure. I have always believed in rational discourse; I can't say I've always had that discipline, but here I'll make an exception. It is important that you understand why I am criticizing you, and that you do something about it.

Resorting to insults does nothing but weaken your argument, no matter how good it makes you feel.

In debate, the first sign that you have won the argument is when your opponent offers nothing but insults and a personal attack.

I get it: you are thin-skinned and enjoy the power of your website. You hate it when your readers insist on clarity and reasoned writing.


 


No, we hate it when we are argued against without facts, logical arguments, critical thinking, or even the slightest bit of respect.
 
I didn't call you a crazed fanboy; I simply said you are acting like one, with your frenzied caps lock and accusation-filled posts/rants. Instead of just asking why these tests weren't run (possibly because the review was rushed due to technical issues), you seemed to take personal insult, and you started (and continued) spouting off about how Tom's must be paid shills and the results are biased (by the way, this is what fanboys do). Your replies come off as unhinged. You'd have gotten a lot further if you had asked nicely (or at least civilly) for more information rather than going on the offensive with accusations. As I said before, it's hard to take someone seriously when they act as you have been acting.
 


I don't necessarily disagree with you; I'm just stating facts and what the reality is for the majority of PC users.
That is, that games, desktop/office, and workstation uses are the majority uses for most people. I think we can agree on that.
So if, for all those uses, a 4-core chip can perform better than an 8-core chip, and it's way cheaper, then obviously there is no need to buy the 8-core one.
The 4-core is the better choice, and that holds for the majority of people. And that is happening because, as you very well said, most games and apps don't do well with more than 4 cores, because the developers didn't choose to make them do so.
So yeah, maybe in the very specific uses where 8 cores matter the most, we see Ryzen outperform Intel, and that is good. But in all other cases an Intel 4-core chip is way cheaper and way better than ANY 8-core chip (be it AMD OR Intel).

I think for most people there shouldn't even be any debate now.
They can spend $300-350 and get an i7-7700 chip that is better than a $1,000 i7-6900K and a $500 Ryzen 1800X in most of their uses.
Things could get much more interesting when AMD releases a 4-core chip against the i7-7700 that performs similarly with a better price tag.
I think it's as simple as that.
 
Considering it's a brand-new design on a brand-new process, I think the results are a definite win for AMD. It will be exciting to see the improvements they bring out and how Intel responds in the near future!
 
I have read much of what you have written, and you are obviously an AMD fanboi, akamateau, and quite manic, as far as all of your posts indicate. Take a deep breath and think, then write.

Perhaps you should shed emotion in favor of logic, particularly in light of your chess references.

If you indeed value chess-program testing, you should be following the Rybka forums; if you don't play chess, and more particularly computer chess, I don't understand your point other than fanboi nonsense. I think that at present, clusters based on the 1800X *could* have promise on a cost basis with some work, but there has been no testing to confirm it. I expect that will happen. Mature discussions are starting at the Rybka forums, though; emphasis on mature.

My personal belief is that the 1800X has promise for computationally intense programs that are optimized for multiple cores, but a day or so after release is a little soon to be completely sold on a single idea, however promising.

Hopefully, you get back on your meds and allow reason to triumph over your obvious emotion.

An idea for you: do some actual real-world tests and then publish them in the forums, or just keep blathering about nonsense.
 


Hey now. Speaking on behalf of AMD fanbois... he's not with us. Still, I'll start a GoFundMe to get him some more tinfoil.
 