Blade Stealth + Core + GTX 1080 benchmarks, first impressions

Discussion in 'Systems' started by vitorious, May 27, 2016.

  1. BuckRogers_no_id

    BuckRogers_no_id New Member

    No problem, I hate superstition as well. I have more links below that are worth taking a look at.

    Because you're wasting your time with synthetic benchmarks like Fire Strike. They mean nothing at all in real applications. No correlation or causation.
    Sometimes I wonder if people would prefer their system be #1 on the 3DMark charts and #10 in real-world performance, as long as a benchmark validated their feeling that it was fast.

    Yes, like everything, there is some overhead to Thunderbolt. Worst case, after accounting for overhead, you have about 4GB/sec of bidirectional data transfer to work with.
    It's been shown that 4GB/sec is more than enough for a 980, and from results I've seen elsewhere online, a 1080 is still not bottlenecked in any way except maximum FPS - and only at resolutions you should never be using a GTX 1080 at anyway, in 200FPS+ scenarios. Once the resolution goes up, it's not even worth discussing the difference between Thunderbolt 3 and a PCIe 3.0 x16 link. For reference, Thunderbolt 3 is roughly on par with PCIe 1.1 x16 or PCIe 3.0 x4.
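    To put those link widths in perspective, here's a rough back-of-the-envelope comparison of per-direction bandwidth. It's only a sketch using published line rates and encoding overheads; the ~4GB/sec Thunderbolt 3 figure is the usable number quoted above, not something computed from the raw 40Gbps link.

```python
# Rough per-direction bandwidth comparison (a sketch, not a measurement).
# PCIe 1.x runs at 2.5 GT/s per lane with 8b/10b encoding; PCIe 3.0 runs at
# 8 GT/s per lane with 128b/130b encoding.

def pcie_gb_per_s(gt_per_s, encoding_efficiency, lanes):
    """Approximate usable GB/s per direction for a PCIe link."""
    return gt_per_s * encoding_efficiency * lanes / 8  # 1 GT/s ~ 1 Gb/s per lane

links = {
    "PCIe 1.1 x16": pcie_gb_per_s(2.5, 8 / 10, 16),     # ~4.0 GB/s
    "PCIe 3.0 x4":  pcie_gb_per_s(8.0, 128 / 130, 4),   # ~3.9 GB/s
    "PCIe 3.0 x16": pcie_gb_per_s(8.0, 128 / 130, 16),  # ~15.8 GB/s
    "Thunderbolt 3 (usable figure quoted above)": 4.0,
}

for name, bw in links.items():
    print(f"{name:<44} ~{bw:.1f} GB/s")
```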

    Hard evidence, not superstition, feelings or mythology-
    1. http://www.techpowerup.com/reviews/NVIDIA/GTX_980_PCI-Express_Scaling/21.html
    2. http://www.guru3d.com/articles_pages/pci_express_scaling_game_performance_analysis_review,7.html

    Try to enjoy your system.
     
    Last edited: Jul 20, 2016
    KillerFry likes this.
  2. KillerFry

    KillerFry New Member

    I agree with this. From what I've seen, the Thunderbolt link is only a problem when the game does not push the GPU; at higher resolutions or higher quality settings, the performance gap is smaller. My take is that more time is being spent rendering instead of "transmitting".

    Except in CPU-bound games. Before the Vulkan patch, on a Blade Stealth, Doom was giving me around 30-40fps. HotS, which is based on the StarCraft engine - known to be more CPU-hungry - also had a larger performance gap for me.

    After the Vulkan patch in Doom, fps went to 80-ish with everything maxed out and no AA. As for HotS, they recently had a patch enabling DX11; performance also went up significantly.

    This is with a Blade Stealth, a 980 Ti, and a ROG Swift at 1440p. We could discuss canned benchmarks and theoreticals, but the actual performance and enjoyment of my games is more than enough.

    Sent from my Nexus 6P using Tapatalk
     
  3. Possibly it's latency related then? I see definite performance gaps in UE4- and Unity-based games in VR on both my Rift and Vive. I've noticed very obvious frame latency and judder in Raw Data, Project CARS, etc. that is not there in the same desktop scenario with the 1080 + a stock i5-4690k (pretty close to the i7-6700HQ in CPU PassMark).

    I'm not going off benchmarks just for the numbers' sake; I'm trying to find out why I'm getting frame latency that introduces judder, which is awful in VR. On the Rift it's not as bad or as noticeable thanks to ATW (asynchronous timewarp). So possibly bandwidth isn't the issue and it's latency related?
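    For what it's worth, here's why even a little added latency is so visible in VR: at 90Hz each frame has roughly an 11.1ms budget, and a frame that misses it gets reprojected (ATW) or dropped, which reads as judder. A minimal sketch - the render time and eGPU overhead below are made-up illustrative values, not measurements from my setup:

```python
# Illustrative VR frame-budget math (assumed values, not measured ones).
refresh_hz = 90
budget_ms = 1000 / refresh_hz        # ~11.1 ms available per frame at 90 Hz

render_ms = 10.2                     # assumed GPU render time per frame
egpu_overhead_ms = 1.5               # assumed extra transfer/latency cost via TB3

for label, frame_ms in [("desktop", render_ms), ("eGPU", render_ms + egpu_overhead_ms)]:
    verdict = "makes" if frame_ms <= budget_ms else "misses (reprojection/judder)"
    print(f"{label:>7}: {frame_ms:.1f} ms vs {budget_ms:.1f} ms budget -> {verdict} vsync")
```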
     
  4. Deathalo

    Deathalo Active Member

    Why not get actual gameplay FPS numbers on the two machines and judge that way? From what I've seen the Core does an excellent job, especially considering the CPU and RAM it's paired with. I think you're making a mountain out of a molehill, or you had your expectations way too high thinking this would turn a laptop into a full-on desktop with just a graphics card.
     
  5. BuckRogers_no_id

    BuckRogers_no_id New Member

    It's hard to tell without some data to look at and compare. I doubt latency is a problem; the Razer Core cable is so short for a reason. It has to be within the range of PCIe system bus latency or it wouldn't work at all - you'd be seeing all sorts of errors and driver failures. The Core has a Thunderbolt 3 <-> PCIe bridge in it, but the end result has to be PCIe compliant.

    My guess, if your hunch is correct, is that Razer has some more fine-tuning to do with their product overall. It's still early days; you are all early adopters and Razer is blazing a new trail here. Give it some time. But I think it's working very well already. I don't think there's a bandwidth issue, which is what most people are going to point the finger at. If there is a problem - and I'd need some sort of quantifiable proof to even start speculating about it - it's probably a "Razer" problem with their specific implementation, not Intel Thunderbolt 3.

    If there were another TB3 GPU case on the market, I'd say compare with that. It won't be long till it can be done. But I think that's the most productive way forward.

    If it doesn't feel good enough, the only solution today is to build an old-school ATX rig. Something I'll personally never be doing again; I've built a dozen or so since the '80s. It's time to move on, and I'm glad Razer is pushing this product out (even though it's a little pricey). With the release of Skull Canyon it's NUCs or laptops for me from here on out.

    Obviously like everyone, I don't want something that's not right or sub-par, but I would personally use a Razer Blade and Core or Skull Canyon (I have one of these) and a Core. So I'm convinced it's good enough even in these early days.

    For the VR judder complaints, direct those towards Razer or one of their engineers. If you get a response, post back here; I'd be interested to hear it. Having a professional review site look into it, or taking the proper measurements yourself to prove it, would be a lot of work, so you might not get a great answer.

    It's really a matter of either giving Razer time to fine-tune or getting competitor's TB3 based products on the market and seeing if there's any difference. I hate to say the early-adopter thing but it's true in this case, I wouldn't go so far as to say anyone is "beta testing" this for Razer though. It's ready for primetime, it's just V1.
     
  6. Firebat246

    Firebat246 The One

    My only hope with all this being said is that it's mostly software tuning and not hardware. I have yet to buy the Core for my 2016 Blade, but I intend to later this year. I would hate to see a NEW Core released next year because they can't tweak the current one without changing hardware around. Hope that makes sense.
     
  7. BuckRogers_no_id

    BuckRogers_no_id New Member

    Yup, it's tough to tell. But to be clear, we don't have real evidence that there are any problems at all. I'm not discounting his personal experience, but I can't really say much more about it till we have something we can verify. I doubt there are any bugs or unoptimized parts of the system that can't be resolved with drivers/firmware. If they were to suggest such a thing, I'd tell them they were lying.

    Now, it may be out of their control - say they need better cooperation with Microsoft or NV/AMD. That's always going to be a possibility. This is V1 of a rather complex system, built on top of what is already a complex system :D - your computer and all the software involved.
    That's why it's amazing everything works as well as it does (not just referring to the Core); game consoles are not a terrible idea for this reason. Everyone knows exactly what they're working with.

    I'm just speculating a little bit and trying to give the best suggestion there is: try a competitor's product, try contacting Razer. But I'm not convinced there's a problem, other than if you have a couple years to wait - sure, it might work slightly better then.
    It's either buy a Razer Core now, wait for V2 or V3, wait for a competitor's alternative, or build an ATX system.

    Would I stay away from it today? No. It's clear it works very well right now. I think modular computing is awesome - so many reasons/use cases. A kid at college needs a laptop and wants to come back to his dorm and play some games (without maintaining two machines). A family with a desk can use the Core essentially as a docking station: many of them have their own laptops and can plug into the Core and enjoy a GTX 1080 (without having to buy 3-4 GTX 1080s for a 3-4 person household).

    For me, I have a NUC, and while I need both a desktop (NUC) and a laptop, I may move to just a laptop someday; I like the Razer Blade a lot. I'm eyeballing those because I think for VR, laptops are the future - get rid of the wire, you just need some sort of battery backpack. So I might just have an MSI GS60 or Razer Blade and a Razer Core type product someday. But I would want a high-power GPU solution that works with both laptops and desktops (a NUC in my case).

    Other than my interest in VR, I'm not much of a PC gamer anymore. I'm satisfied by the performance of my Skull Canyon NUC as it is (blasphemy compared to this GTX1080+Razer Core stuff). But what tipped me over the edge was the Thunderbolt GPU possibilities.
     
  8. Answer me this: why the gap between the Alienware Amplifier and the Razer Core (direct PCIe vs. TB3)? The Alienware Amplifier is running x4 lanes of PCIe 3.0.

    http://www.3dmark.com/compare/spy/62103/spy/87120

    Comparing my Blade 14 (i7-6700HQ) + Core + 1080 vs an Alienware (i7-6700HQ) + AGA + 1070 gives almost identical scores? With a 1070??

    I know it's a "synthetic benchmark," but I assure you it translates into gaming performance, especially in VR, which is double-duty rendering. Every little ounce of performance equates to frames.

    http://www.3dmark.com/compare/fs/8877136/fs/9273607
    http://www.3dmark.com/compare/fs/9273607/fs/8903869
    http://www.3dmark.com/compare/fs/8943914/fs/9273607

    Comparing my Blade 14 (i7-6700HQ) + Core + 1080 vs an Alienware 15 R2 (i7-6700HQ) + AGA + 1080: the Alienware is more in line with what I would expect from a desktop (only about 1,000 GPU points off a desktop score). 20-22% faster than mine? Something is off. (Added a second comparison to another one of the AGA + 1080 + i7-6700HQ guys; added a third with a slight OC on the 1080, +150 core / +200 mem.)
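    As a sanity check on what "20-22% faster" means here, the relative gap is just the score difference divided by the lower score. The values below are placeholders to show the arithmetic, not the actual numbers from the linked results:

```python
# Hypothetical Fire Strike graphics scores -- placeholders, not the real results.
core_gpu_score = 17000        # e.g. Blade 14 + Core + GTX 1080
amplifier_gpu_score = 20700   # e.g. Alienware + Amplifier + GTX 1080

gap = (amplifier_gpu_score - core_gpu_score) / core_gpu_score
print(f"Amplifier leads the Core setup by {gap:.1%}")   # ~21.8% with these numbers
```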
     
    Last edited: Jul 20, 2016
  9. KillerFry

    KillerFry New Member

    I could be wrong - I don't know much about the Alienware Amplifier - but I believe it's straight-up PCIe with some sort of proprietary connection Dell created. No doubt there is a little overhead in the Core's PCIe -> TB conversion.

    Some weeks ago I actually read a post somewhere (can't remember where) where they compared the Amplifier vs the Core. Indeed, the Amplifier has a smaller performance hit than the Core, but you need an Alienware system. The Core is more universal in that regard (I know not all TB3 laptops will support it; it's up to the manufacturer).

    Personally, I do not like Alienware laptops; they're too ugly for me. So the Alienware Amplifier was never an option I entertained.

    As I mentioned previously, the resolution and image quality can reduce the performance gap. Those comparisons you posted were done at 1080p. Go to 1440p or higher. I explored those scenarios a little bit in my original Blade/Blade Stealth + Core vs. desktop comparison. As you move from 1080p to more demanding settings, the gap closes because the cable/conversion latency is less of a factor; GPU rendering time matters more. I also saw some differences between using the laptop's internal display vs. an external display, and having USB devices attached has an impact too.
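    A toy model of that point, assuming the per-frame cost of the TB3 link is roughly fixed while GPU render time scales with pixel count. The link cost and per-megapixel render cost below are made-up numbers purely to illustrate why the gap shrinks at higher resolutions:

```python
# Toy model: fixed link overhead + render time proportional to pixel count.
fixed_link_cost_ms = 1.0        # assumed per-frame transfer/latency overhead
render_ms_per_mpix = 3.0        # assumed GPU cost per million pixels

resolutions = {"1080p": 1920 * 1080, "1440p": 2560 * 1440, "4K": 3840 * 2160}

for name, pixels in resolutions.items():
    render_ms = render_ms_per_mpix * pixels / 1e6
    fps_desktop = 1000 / render_ms
    fps_egpu = 1000 / (render_ms + fixed_link_cost_ms)
    gap = (fps_desktop - fps_egpu) / fps_desktop
    print(f"{name}: desktop ~{fps_desktop:.0f} fps, eGPU ~{fps_egpu:.0f} fps, gap ~{gap:.0%}")
```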

    As for VR... that is a completely different animal with a whole lot of additional variables. I have a Vive, but I have not tried it out on the Blade + Core. It is a scenario in which the hardware (Core and Rift/Vive) and software are in their infancy. Personally, I wouldn't expect it to work as well as it does on a desktop. If I have free time over the weekend, I'll try Raw Data with the Core; I could even put in the 1080 from my desktop to have a scenario similar to yours.

    In any case, I didn't expect this to be perfect from the start :smile: But I like being an early adopter of things, and sometimes imperfection is the price paid.
     
    BuckRogers likes this.
  10. BuckRogers_no_id

    BuckRogers_no_id New Member

    I'd personally dismiss any 3DMark anything. It's just not worth even looking at. If you could find real-world benchmarks with a gap between the Alienware and the Core at 4K resolution, then I'll start talking. But I bet you won't find any gap that's outside of ~2% - margin-of-error-level stuff - between the two anyway. :)

    I build complex systems for a living; that's one reason I say ignore 3DMark. A benchmark is good at telling you one thing: that the system is fast at that benchmark. Nothing else at all.

    I bet for every game where the Alienware beats the Core, I could find an example where the Core beats the Alienware. That's the point of the 3DMark comments; in the end it's meaningless.

    If you want that oddball Alienware Amp instead of a slick Thunderbolt GPU case, then by all means... but I'll pass, even if Razer is still working things out (which they are). You know Dell is moving off of that anyway, and onto Thunderbolt for their next GPU case?

    KillerFry is exactly right. If you're running 1080p benchmarks with the Amp vs the Core, you're really wasting your time. Spending $500 on a Core and pairing it with anything less than an RX 480 (or GTX 970), GTX 1070 (or 980 Ti), or 1080 is kind of pointless. Putting all this together and running at a lowly 1080p instead of ultrawide or 4K is kind of silly.

    But even if people ARE doing that - because there's nothing wrong with 1080p - the performance from both (and all) solutions is so great that there's no need to worry whether the Core is faster than the Amp. Right? Both are blazing at a lowly 1080p res...

    But you (ShredderMiller) did hit the point I made: compare with other implementations. You can ask Razer about the differences. I suspect you don't have anything to worry about; like that guy said, mountain out of a molehill.

    Yes, you are right - something is off. As noted though, LOTS of things are off with all the hardware in your system. I bet there are a ton of bottlenecks and shortcomings that you're not even aware of.
    I know this because I build complex systems myself. :D There's so much shoddy engineering out there that it's surprising anything works at all.

    There are lots of products here and there that perform better than another. In this case, from what I've seen, the Core is STILL the product to buy today. No question, even with the price. Screw that 4-USB-cabled Alienware Amp thing. I don't care if it's slightly faster at 1080p... what a joke really.

    Again the options remain: buy a Razer Core now (this is what I'd do), wait for V2 or V3 (if you don't mind waiting years, that is), wait for a competitor's (preferably Thunderbolt) alternative, or build an ATX system.

    I actually dislike Razer products, to be honest - always have, because I've had so many of them fail on me prematurely. I'm a Corsair/Zowie/MSI fanboy (sorry, Razer admins/employees/fans), but I commend them on the Core. I'll defend them when they do well. They took a chance, did a good job with it, and rest assured they're working to improve it as much as they can.

    Give them some time; you're an early adopter and this is V1.
    Also remember that the other answer is called upgrading. When V2 does arrive, the V1 Core will still run as well as or better than it does today - you can just upgrade and put the V1 on eBay. It's not a lifetime commitment.

    Rereading this, I honestly think you should sell your stuff off and build an ATX/ITX rig. I just don't think you're going to be made happy, and Razer isn't going to kneel at one guy's feet to satisfy your demands immediately. Everyone else seems to simply be enjoying their stuff.

    And their stuff is not bottlenecked. Bump it up to 1440p, 2K ultrawide, or 4K in a real game and all these concerns/complaints vanish.

    Exactly - the Model T wasn't a Maserati either. But that's an unfair comparison; the Core isn't a "Model T" by any means. I think the Core complaints are way overblown. This thing is ready for primetime; it's a solid V1 product and the only way to fly today if you don't want to build an ATX system.
     
    Last edited: Jul 20, 2016
  11. I'm hoping it doesn't fall on deaf ears; synthetic or not, those raw numbers equate to frame losses. I can't physically render 2160x1200 @ 1.5 render scale (3240x1800) on the Core + 1080 setup the way a slower desktop 1080 setup can. My performance numbers in SteamVR frame-timing dumps are more in line with a desktop 1070. 4K gaming is the same (I run all games at 4K), and running through my UE4 projects in the editor is the same; it's all noticeable. Since real-world scenarios are giving me numbers more on par with a 1070, and supposedly there are no bandwidth limitations, I wonder what the factor is here?
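    For reference, the pixel throughput behind those numbers works out roughly like this (just the arithmetic from the figures above, with 4K at 60Hz as the comparison point):

```python
# Pixel-throughput arithmetic for the figures mentioned above.
base_w, base_h, render_scale, vr_hz = 2160, 1200, 1.5, 90
render_w, render_h = int(base_w * render_scale), int(base_h * render_scale)  # 3240x1800

vr_mpix_per_s = render_w * render_h / 1e6 * vr_hz
uhd_mpix_per_s = 3840 * 2160 / 1e6 * 60

print(f"VR target: {render_w}x{render_h} ({render_w * render_h / 1e6:.1f} Mpix/frame)")
print(f"VR @ {vr_hz} Hz:  ~{vr_mpix_per_s:.0f} Mpix/s")
print(f"4K @ 60 Hz:  ~{uhd_mpix_per_s:.0f} Mpix/s")
```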

    So I ask myself: is that acceptable on a $500 TB3 eGPU? It's debatable. Spending $700 on a 1080 and looking at $450 1070 performance - is that acceptable? It's debatable.

    I assume anyone investing in a $2300-2400 Blade + $500 Core + $700 1080 is hoping to get the most out of their 1080 with a $3600-$3700 investment. That's all I'm saying. I'm hoping and assuming Razer "is" working on updates/firmware for both the Blade and the Core to improve performance. I'm curious whether an AMD equivalent (Fury X) suffers from the same performance loss as the 10-series cards, since the Core and its fundamental workings are built on AMD XConnect technology. Someone in another thread was going to test the 290X from his desktop in the Core to see if the performance gap was smaller on an XConnect-based card.

    I'm satisfied with the setup, don't get me wrong, but I might have saved the money and stuck a 1070 in this thing if I'd realized I would be getting the equivalent of 1070 performance. Seems like good food for thought for everyone investing in a 1080: a 1070 will run 4K gaming just the same at half the cost (no point investing $700 if you can't use it). I've been doing the whole PC thing personally and professionally for 14-15 years now, so I'm no stranger to how it all works (I engineer complex systems; I know the how/why/should/shouldn't-work aspects).

    Like you said, Buck, it's ready for prime time; it's a great "consumer" product that gives you some desktop graphics on a laptop. But considering Razer targets gaming enthusiasts, many of them want to stick 1080s / *insert next Ti card here* / etc. in these and get the maximum performance possible.
     
    Last edited: Jul 21, 2016
    Destrok likes this.
  12. Deathalo

    Deathalo Active Member

    ...Then buy a 1070, compare the performance IN-GAME, pick which one you like more, and weigh it against the amount you're paying. I don't see the issue here; it's your decision how much performance you want vs. how much you pay. The Core does a great job. Sorry it's not a holy grail for the price of a cup of coffee, but it's pretty damn great for what it is, IMO.
     
  13. Destrok

    Destrok Well-Known Member

    SWTOR is an interesting choice for a graphical comparison..... it isn't exactly demanding graphically. But the rest looks great.
     
  14. Exactly, Deathalo! I wanted 1080 performance so I bought a 1080; I'm getting 1070 performance IN-GAME and I'm not too thrilled about that. Everyone notes that TB3 is not a bottleneck for x4 lanes of PCIe 3.0, so I'm wondering why we are not getting our cards' performance levels. I understand some frame loss on an external GPU, I get that. But not at the level I'm seeing here compared to another x4-lane PCIe 3.0 solution.

    I expect it to be the industry standard-setter for $500; once we start seeing other TB3 PCIe enclosures, that'll tell the tale. If the performance loss is in fact TB3-related, it is what it is; but if everyone here is saying no bottleneck exists, then obviously something is wrong with the product that hopefully Razer "is" working on with firmware updates for the Stealth / Blade / Core. No reason to get up in arms. It's a great piece of tech that's V1, early adopter, and needs some work.



    Curious to see his completely separate video dedicated to performance and his findings. At 5:53 he alludes to the long answer being "complicated," lol.
     
    Last edited by a moderator: Jul 21, 2016
    Destrok likes this.
  15. Deathalo

    Deathalo Active Member

    No, you're getting 1080 performance; you're not getting 1080 + desktop-grade i7 performance, which is to be expected. You're limited at some point by the rest of your laptop; that's just how it is. Maybe if there's a desktop that has TB3 and supports the Core, you'll see the 1080 perform much closer to a 1080 inside the tower, but on a laptop I don't know how you expected any more than what you're getting (which is still quite high when put into perspective).
     
    BuckRogers likes this.
  16. BuckRogers_no_id

    BuckRogers_no_id New Member

    I think Deathalo nailed it there, and you were also the wrong person to be adopting the first release of a completely new product. If you want a great 3DMark score, then look at those before buying, copy those rigs, and buy that stuff. You can't go buy a laptop and an external GPU and then complain about it not matching the exact equipment you purposely avoided.
     
  17. Sigh, I give up here. I posted in another thread about the performance gaps; I'm not the only one seeing this. If the Alienware Amp can give you within a 2-3% loss versus x4-lane PCIe 3.0, and TB3 utilizes underlying x4-lane PCIe 3.0 (per Intel's spec docs), there's no reason a TB3 setup should take a 12% loss. No matter how you spin it, I'm not getting 1080 performance in a PCIe 3.0 x4 setup; I'm getting something like 9% less.

    Did you not read the benches, Deathalo? http://www.3dmark.com/compare/fs/8877136/fs/9273607

    Those comparisons were with the same model mobile i7-6700HQ (2.6GHz clock), the same 16GB of laptop DDR4, the same laptop Intel chipset (Skylake-based), etc. The factors were completely the same; swap the PCIe connector on the Amp for the TB3 connector on the Core and the delta should be no different. I'll continue on the other threads as we get this figured out. A competing PCIe 3.0 x4-lane eGPU enclosure on an exactly duplicate-spec'd laptop pulled a 22% lead in GPU score - meaning the GPU itself, not any other factor (not that this matters, since we have duplicate laptop specs).
     
    Last edited: Jul 21, 2016
  18. Deathalo

    Deathalo Active Member

    That benchmark is comparing two completely different graphics cards, so I'm not sure how scientific your conclusions can be just based on that...
     
  19. Every Razer Blade 14 + Core that I'm aware of reports the video card as a "970M" no matter what. I wondered the same thing when I saw others posting their Core benchmarks with the Razer Blade 14. It must be something to do with the dedicated GPU in the laptop and the external GPU (note the manufacturer, Razer, and the 8GB of GPU memory).

    http://www.3dmark.com/fs/9273607 (mine)
    http://www.3dmark.com/3dm/12728263 (nightmyth on this forum)

    So I am benching the exact same 1080 FE card as the other identical setup I was comparing to, with a 22% gap. I and the other Razer Blade + Core + 1080 owners are neck and neck; similarly spec'd Alienware guys with Amp + 1080s are pushing 22% higher GPU scores, hence I assume a bottleneck with the TB3/Core setup compared to the PCIe connector (both use x4 lanes of PCIe 3.0).
     
    Last edited: Jul 21, 2016
  20. Deathalo

    Deathalo Active Member

    Ever think it's possible the benchmark isn't optimized correctly for the Core? How about two real-world gaming tests as comparisons - have any real-world data?
     