[*updated 1/4/2019 with new link for Citrix article]
Steven Wright of Citrix Consulting released new guidance on June 9th, 2016 for getting an A+ NetScaler Rating at SSL Labs (SSLLabs.com). The good news is that I've validated it works - read on to see the proof!
Why You Want an A+ NetScaler Rating at SSLLabs.com
Security is very much front-of-mind these days, and fortunately SSLLabs.com has a tool to scan your site, including NetScaler Gateway, for known problems against current threats.
In case you missed it, you have a whole new reason to revisit your NetScaler SSL configuration, even if it is a VPX, which previously didn't support nifty security like TLS 1.2. This changed after the last round of updates, so you are no longer forced into an MPX to get good security (though admittedly CPU usage is a bit higher without the offload chip offered in the MPX and SDX platforms).
If you are running a NetScaler VPX, your out-of-the-box configuration will give you a NetScaler Rating of either an F or a C in most cases. Around here, we go for the big grade - so here's how to get an A+ NetScaler Rating, even with a VPX.
Words of Warning
A few caveats that I know of. First off, I don't really consider myself an authority on NetScaler, so take all of this with a grain of salt and ALWAYS TEST BEFORE YOU GO LIVE IN PRODUCTION. Messing with SSL ciphers can cause outages, especially for NetScaler Gateway.
Second, if you need to support older clients, especially Windows XP clients, be VERY CAREFUL deploying all of these settings. You may be stuck with a "C" score if SSL v3 needs to stay turned on in some cases. Even a C rating can still be very secure; this is just how SSLLabs.com rates things when even one attack vector is left (and unfortunately, SSL v3 is a big one).
But… if not, you can get a score that looks more like this:
What an A+ Rating looks like from a NetScaler Gateway VPX
Before we go further, I want to reiterate that I'm just validating what someone else created - don't credit me with this. Steven Wright and Citrix Consulting Services (CCS) did all the work making this possible! Even though I still do occasional work for CCS, I want to make sure no one gets confused!
The nice thing here is that the blog article has all the steps you need, so break out that PuTTY connection and get started!
First things first: note your current rating at SSLLabs.com. I typically do NOT share my results, but feel free if you'd like to brag.
My configuration included a more modern GoDaddy SSL cert with SHA256 and RSA 2048 strength on a NetScaler VPX 200 with the Enterprise license.
I tested this with firmware 11.0 65.72.nc using the NetScaler Gateway wizard. In my case, it works, so don't hate me for taking a shortcut 🙂
As I mentioned above, this gave me a NetScaler Rating of "C". You can test yours by going back to SSLLabs.com and hitting 'Clear cache' to re-test.
SSLLabs C Rating on NetScaler VPX
Going from C to B
Disabled SSL v3
Disabled TLS 1.0 and 1.1
Enabled the ECDHE cipher group settings included as a default
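If you prefer the CLI over the wizard, these protocol changes can be made with a single command - a sketch, assuming a Gateway virtual server named gw-vserver (substitute your own vserver name):

```
set ssl vserver gw-vserver -ssl3 DISABLED -tls1 DISABLED -tls11 DISABLED -tls12 ENABLED
```

Remember the warning above about older clients before disabling TLS 1.0/1.1 in production.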
Not too bad - a solid B with this change! I thought it would be an A, but there may be a few things in the default ECDHE group that will rob you of the rating. You'll need to define your ciphers manually.
SSLLabs B Rating on NetScaler VPX
Getting a NetScaler Rating of A+
Removed all bound ciphers
Implemented Strict Transport Security (STS)
Added the cipher list that Steven came up with, below
Bound the new cipher set and made sure to use the ECC curve configuration
Here are the commands to use in the CLI. Note that custom-ssllabs-cipher is a name you give it yourself, not part of the command syntax - substitute your own.
add ssl cipher custom-ssllabs-cipher
bind ssl cipher custom-ssllabs-cipher -cipherName TLS1.2-ECDHE-RSA-AES256-GCM-SHA384
bind ssl cipher custom-ssllabs-cipher -cipherName TLS1.2-ECDHE-RSA-AES128-GCM-SHA256
bind ssl cipher custom-ssllabs-cipher -cipherName TLS1.2-ECDHE-RSA-AES-256-SHA384
bind ssl cipher custom-ssllabs-cipher -cipherName TLS1.2-ECDHE-RSA-AES-128-SHA256
bind ssl cipher custom-ssllabs-cipher -cipherName TLS1-ECDHE-RSA-AES256-SHA
bind ssl cipher custom-ssllabs-cipher -cipherName TLS1-ECDHE-RSA-AES128-SHA
bind ssl cipher custom-ssllabs-cipher -cipherName TLS1.2-DHE-RSA-AES256-GCM-SHA384
bind ssl cipher custom-ssllabs-cipher -cipherName TLS1.2-DHE-RSA-AES128-GCM-SHA256
bind ssl cipher custom-ssllabs-cipher -cipherName TLS1-DHE-RSA-AES-256-CBC-SHA
bind ssl cipher custom-ssllabs-cipher -cipherName TLS1-DHE-RSA-AES-128-CBC-SHA
bind ssl cipher custom-ssllabs-cipher -cipherName TLS1-AES-256-CBC-SHA
bind ssl cipher custom-ssllabs-cipher -cipherName TLS1-AES-128-CBC-SHA
bind ssl cipher custom-ssllabs-cipher -cipherName SSL3-DES-CBC3-SHA
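To finish the remaining steps from the list above, bind the new cipher group and the ECC curves to your Gateway virtual server and insert the Strict Transport Security header with a rewrite policy. A sketch, assuming a vserver named gw-vserver and policy/action names I made up - pick your own:

```
bind ssl vserver gw-vserver -cipherName custom-ssllabs-cipher
unbind ssl vserver gw-vserver -cipherName DEFAULT
bind ssl vserver gw-vserver -eccCurveName ALL
add rewrite action sts_header_act insert_http_header Strict-Transport-Security "\"max-age=157680000\""
add rewrite policy sts_header_pol true sts_header_act
bind vpn vserver gw-vserver -policy sts_header_pol -priority 100 -type RESPONSE
```

As always, save the config and re-test at SSLLabs.com before calling it done.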
Gotta boost the signal on this. My friend Nick Rintalan of Citrix Consulting has put together a new 'best practice' (or 'leading practices' for the lawyers) update that I feel is important for people to see!
Nick Rintalan, Lead Architect at Citrix Consulting
New Best Practice(s)?
Here are some of the highlights of the article, sorted here by what I feel is most important for you to read:
PVS and Memory Buffers. Yes, yes, for the love of all that is holy, yes. I haven't yet deployed or validated the Write Cache features now in MCS, but I can tell you from experience that XenApp with 2-4 GB of RAM cache with failover to disk has been giving roughly 20-30% faster logons and an overall better experience for most of my customers.
Protocols (as in HDX). One of my primary frustrations for quite some time now is that Citrix XenDesktop ships by default with a protocol that delivers a good experience on the LAN but tends to be problematic at distance. H.264 is great for video, but frankly I hate it everywhere else. I think it almost singlehandedly ruined things for Citrix, since PCoIP can perform better than this hog (my opinion). Thinwire, and even the legacy encoder, actually deliver on the promises and need to be investigated in nearly every single use case I see. So I agree with Nick: use the policy templates included with 7.6 FP3 and above (including LTSR) as a starting point. Odds are good you won't be disappointed. When I say 'use' here, remember that you can apply these codecs on a per-user, per-connection, or even per-delivery-group basis - meaning filters are your friend! It is perfectly acceptable to have multiple codecs going for various use cases. One size nearly NEVER fits all, so test these out!
vSphere Cluster Sizing. Number 3 on my list right now. You need a dedicated resource cluster for enterprise workloads - but honestly, for XenApp workloads, consider more hosts per cluster. You should be using bigger VMs anyway, so the number of managed VMs is about the same - just more computing power. CCS is seeing 24+ hosts per cluster work just fine for XenApp. For XenDesktop with more than 5,000 VMs, I will add that a dedicated vCenter may save you a lot of pain... my opinion, and of course... you guessed it. TEST!
XenApp CPU Over-Subscription. Seriously. The “1.5x” thing needed an update so I'm glad to see some clarification here. In all things- I still encourage practical testing instead of just implementing something because “Citrix said to.”
PVS Ports and Threads. Those of you who know me know I bang this drum a lot- so here's some backup for what I'm saying. The defaults are not good enough. Good design is still required!
Farm Design. You're probably like me, coming along kicking and screaming from XenApp 6.5, which most would agree has been the "Windows XP" of the Citrix world. The newer platform just hasn't been as good yet, and I still feel 7.9 doesn't have true feature parity... but as Nick describes, they are getting there. As always... TEST, TEST and then TEST some more before you implement zones with FMA!
XenMobile Optimizations. I guess we have to talk about it. XenMobile is here to stay, so best to not take the ‘out of the box' experience there either.
PVS (Provisioning Services) or MCS (Machine Creation Services, aka linked clones)? This is a long-standing debate that I'm hoping to have time to address after Citrix Synergy. But I did appreciate this breakdown, since I keep getting this question all the time: should I choose PVS or MCS for my deployment?
Well, in our debate-obsessed culture (US elections, Batman vs Superman, Captain America vs Iron Man... the list is endless), this one is heating up. In some ways, it's like having to choose the lesser of multiple evils...
But in all honesty- how do you make the decision?
Well- of course it depends- but one thing you may want to consider first from Dan Feller's recent blog- which bottleneck will you be experiencing?
In a nutshell- if your storage is awesome (super fast with good deduplication capability) but your network may not be… MCS is an easy win.
If you plan to deploy to the cloud- MCS is an easy win.
If you need it deployed quick for a POC- MCS is an easy win.
Considering that the real network consumption to boot a VM is less than 300 MB, and that PVS makes diskless or near-diskless configurations possible...
PVS is still my reference standard, even for smaller environments. Here's why:
PVS has a proven track record and the ability to deploy a single image to multiple hypervisor pools. MCS struggles here, requiring copies of the master VM on each storage repository. While this has gotten better, PVS is still epic in this regard.
Networks have evolved and are rarely a bottleneck that makes PVS struggle. Even a single 1 Gbit connection can boot and maintain several hundred target VMs. Given that most enterprise networks run at 10 Gbit or more today, and that the load can be spread across multiple PVS servers... this factor barely exists any longer.
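To put rough numbers on that claim, here's a back-of-the-envelope calculation. The 300 MB-per-boot figure comes from above; the target counts and link speeds are my own illustrative assumptions, and real-world results will vary with protocol overhead and caching.

```python
def boot_storm_minutes(targets, mb_per_boot=300, link_gbit=1.0):
    """Rough time to stream boot data for a batch of PVS targets
    over a single network link, ignoring protocol overhead."""
    total_bits = targets * mb_per_boot * 8 * 10**6  # MB -> bits
    seconds = total_bits / (link_gbit * 10**9)      # bits / (bits per second)
    return seconds / 60

# 500 targets over a single 1 Gbit link: roughly 20 minutes of streaming
print(round(boot_storm_minutes(500), 1))
# the same boot storm over 10 Gbit: roughly 2 minutes
print(round(boot_storm_minutes(500, link_gbit=10.0), 1))
```

And in practice the PVS server's RAM cache serves most of those reads, so the wire is rarely the limit anyway.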
PVS has always reduced IOPS requirements overall, but in the past 4 years it has seriously jumped forward because of two things:
Your .vhdx file is read once and cached in RAM, so subsequent reads for target device requests come from RAM. This means you can scale nearly endlessly with virtually no IOPS impact from reading the base vDisk.
Write Cache in RAM with Failover to Hard Disk - while that may be the longest feature name ever, it is perhaps one of the single most epic bits of Citrix technology deployed in the past 10 years. Write IOPS, which used to be almost 90% of the overall IOPS required for PVS targets, are now cached in RAM and in some cases never hit the disk at all!
PVS makes a pod-based architecture viable, lowering downtime significantly. With the right design, you can have an entire rack of servers go offline and your users won't even know. You can design in ways that allow you to mix storage and hypervisor pools, which MCS has trouble matching. So when I say it scales "better," I am rarely talking about quantity, but operational quality. Of course, it all depends on good design - but if you want to hear more about that, I'd love to discuss it!
PVS prevents the SAN battle. Nearly every time I go into a XenDesktop deployment, the team managing the SAN storms into the conference room with a unified front, ready to say 'no.' When I tell them we may not need them at all (local storage really is possible for PVS targets), or that our IOPS will be less than 1 per user, their shoulders drop, they smile, and they tell me to have a nice day. And I do. Because they said "yes" - because I've made their life easier.
PVS can version and roll back images with much more speed and efficiency than any other technology I have ever seen.
Now, does PVS represent a learning curve? Absolutely - and I think that's the other thing that needs to be discussed further. I continue to see bad practices out there... but first I want to hear from you: what experiences have you had with MCS and PVS, and what are your thoughts? What kind of questions do you have? Comment below!
But if you want my advice, in most cases (subject to a whole bucket of 'it depends'), here it is:
POC with MCS
Small deployments and cloud-based deployments with MCS
Go to Production with PVS
Put on your gloves and get ready for a fight
Good luck! Share this with your colleagues – I'd love to hear more from people before I start the Citrix Imaging topic in a few weeks!