Thursday, December 4, 2014

Automatic VMware VLAN connectivity testing

Story behind this post

We recently built a new datacenter network because the old one was no longer modern enough.
The new network structure contains many more VLANs than the old one, because servers are now placed in their own VLANs based on who owns them and what their role is.

After our network provider had created the new VLANs and configured them on the VMware hosts, I realized we should somehow test that all of them are configured correctly and that at least the gateway is reachable from each VLAN on each VMware host.

Automatic testing

As always, there are many ways to automate this, so I went with the first idea I got.

I created a PowerCLI script which does the following things with a Windows Server 2012 R2 test VM (it probably works with Windows Server 2012 too):

  • Migrates the virtual machine to each host in the VMware cluster, one by one.
  • Moves the virtual machine to each VLAN listed in VLANs.csv, one by one.
  • Sets the IP address listed in VLANs.csv on the virtual machine.
  • Uses ICMP (Ping) to test gateway connectivity.
  • Writes results to a log file (updated after every test) and to a report file (generated after all the tests).

NOTE! You must disable UAC on the virtual machine. Otherwise the script can't change the IP address on the VM.
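
For reference, here is one way to do that. This snippet is my addition, not from the original setup; it assumes you accept the security impact on a disposable test VM, and a reboot is required before it takes effect.
# Disable UAC on the test VM by setting EnableLUA to 0 (reboot required)
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System' -Name 'EnableLUA' -Value 0
Restart-Computer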

Configuring the script

Because I didn't want to re-test all the old networks, I needed to generate a list of the VLANs I wanted to test.
This can easily be done by exporting a list of all VLANs from VMware with the following command and then removing the VLANs you don't want to test.

Get-VirtualPortGroup | Select-Object Name, VLanID | Export-Csv .\VLANs.csv -NoTypeInformation
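
If only certain VLAN IDs are relevant, the export can also be filtered directly. This is just a hypothetical example; the cut-off value 100 is made up and not from the original post.
# Export only port groups with VLAN ID 100 or higher to VLANs.csv
Get-VirtualPortGroup | Where-Object { [int]$_.VLanID -ge 100 } | Select-Object Name, VLanID | Export-Csv .\VLANs.csv -NoTypeInformation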

Because the script needs to configure an IP address on the virtual machine and ping the gateway, I also added a "Subnet" column to the CSV file, which contains the subnet prefix without the last octet.

Example CSV:
"Name","VLanId","Subnet"
"Frontend","100","192.168.100."
"Backend","200","192.168.200."

The script itself

The script is below. I hope you find it useful too.
$vCenter = "vcenter.domain.local"
$ClusterName = "VM cluster"
$TestVMname = "VLAN-Tester"
$VLANsList = Import-Csv ".\VLANs.csv"
$GatewayIP = "1"
$TestVMIP = "253"
$Netmask = "255.255.255.0"
$vCenterCred = Get-Credential -Message "Give vCenter account"
$HostCred = Get-Credential -Message "Give shell account to VMware hosts"
$GuestCred = Get-Credential -Message "Give guest vm credentials"
$LogFile = ".\VLAN_test.log"
$ReportFile = ".\VLAN_test_report.csv"

### 
Connect-VIServer -Server $vCenter -Credential $vCenterCred

$Cluster = Get-Cluster -Name $ClusterName
$vmHosts = $Cluster | Get-VMHost
$TestVM = Get-VM -Name $TestVMname

ForEach ($vmHost in $vmHosts) {
 # Migrate VM to vmHost
 $TestVM | Move-VM -Destination $vmHost
 
 # Find networks which are available for testing on current host
 $vmHostVirtualPortGroups = $vmHost | Get-VirtualPortGroup
 ForEach ($VLAN in $vmHostVirtualPortGroups) {
  ForEach ($VLANtoTest in $VLANsList) {
   If ($VLANtoTest.Name -eq $VLAN.Name) {
    $NetworkAdapters = $TestVM | Get-NetworkAdapter
    Set-NetworkAdapter -NetworkAdapter $NetworkAdapters[0] -Connected:$true -NetworkName $VLAN.Name -Confirm:$False
    
    # Set IP address to guest VM
    $IP = $VLANtoTest.Subnet + $TestVMIP
    $GW =  $VLANtoTest.Subnet + $GatewayIP
    $netsh = "c:\windows\system32\netsh.exe interface ip set address Ethernet static $IP $Netmask 0.0.0.0 1"
    Invoke-VMScript -VM $TestVM -HostCredential $HostCred -GuestCredential $GuestCred -ScriptType bat -ScriptText $netsh
    
    # Wait a little and then ping the gateway
    Start-Sleep -Seconds 5
    $PingGWResult = Invoke-VMScript -VM $TestVM -HostCredential $HostCred -GuestCredential $GuestCred -ScriptType PowerShell -ScriptText "Test-NetConnection $GW"
    $ParsedPingGWResult = $PingGWResult.ScriptOutput | Select-String True -Quiet
    If ($ParsedPingGWResult -ne $True) { 
     Start-Sleep -Seconds 30
     $PingGWResult = Invoke-VMScript -VM $TestVM -HostCredential $HostCred -GuestCredential $GuestCred -ScriptType PowerShell -ScriptText "Test-NetConnection $GW"
     $ParsedPingGWResult = $PingGWResult.ScriptOutput | Select-String True -Quiet
    }
    
    # Generate report line
    $ReportLine = New-Object -TypeName PSObject -Property @{
     "VMhost" = $vmHost.Name
     "Network" = $VLAN.Name
     "GatewayConnection" = $ParsedPingGWResult
    }
    
    $ReportLine.VMhost+"`t"+$ReportLine.Network+"`t"+$ReportLine.GatewayConnection | Out-File $LogFile -Append
    [array]$Report += $ReportLine
    Remove-Variable ParsedPingGWResult
   }
  }
 }
}
$Report | Export-Csv $ReportFile -NoTypeInformation

Wednesday, November 19, 2014

Chinese mini PC review

Story behind this post

These days all electronics are made in China.
I didn't see any good reason to pay middlemen, so I decided to order my next PC directly from China.

My employer provides me with a laptop, so I didn't need another one, and I didn't want a full ATX tower in my living room anymore, so I decided to order a mini PC.

This is a review of that device.

Mini PC

Selection

After some research I decided to order this device.
My version has an i5-4200U CPU, so it was a little more expensive than the one in that link.

The other parts are:

  • G.Skill DDR3 1600 MHz SO-DIMM 4GB x 2
  • Samsung 850 Pro 120GB

Installation

Hardware installation

Hardware installation is very simple.
Just put the memory modules and the HDD/SSD inside the device.

The package contains cables for one hard disk, but there is enough space and a free cable slot for a second one too.

Operating System installation

The reseller says on their page that this device is tested with Windows 7, but it works fine with Windows 8.1 too, and I verified that the Windows 10 preview can also be installed on it.

An important note at this point is that the device is sold without a Windows OEM license, so if you want to run Windows on it you need to buy a retail license. I'm using an MSDN-licensed version of Windows 8.1 because this is my test machine.


As for the operating system installation, you need to create USB media that supports UEFI. I used Rufus for that.

Installation from USB can be started by following these steps:

  • Connect the USB stick to a USB port.
  • Press the power button.
  • Press ESC during system start. The UEFI BIOS will load.
  • Go to the "Save & Exit" screen.
  • Select "Launch EFI Shell from filesystem device"; the operating system installation will load if your USB media is valid.

About drivers

Windows finds drivers for all the devices by default, but they are not the best ones for this device. There also isn't any manufacturer's page from which to download the correct ones, so you need to find them one by one.

Here are the most important ones:

WLAN works with the driver that comes with Windows, but from time to time it loses the connection.

The solution to this problem was to manually "update" the driver to the Broadcom 802.11 Multiband Network Adapter driver, even though Windows thinks it is not the right one for this device.




Windows tuning

I also found from the Windows event log that hibernation, and fast boot after it, didn't work correctly.



I don't know the reason for that, but because a normal boot from power button press to Windows up and running with the user logged in (using automatic login) only takes about 10 seconds, I just disabled hibernation completely using this command:
powercfg.exe /hibernate off

Currently the device still gives a warning like this on every boot for every CPU core. I haven't found any reason for it, but the device works without any problems, so I assume it is a false event.


Performance tests

Windows 8.1 no longer contains the graphical version of the Windows Experience Index tool, but you can still get it using this tool.
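
If you prefer the command line, roughly the same scores can also be pulled with WinSAT and WMI. This snippet is my own addition and not part of the linked tool:
# Re-run the full assessment (elevated prompt required) and read the stored scores
winsat formal
Get-CimInstance -ClassName Win32_WinSAT | Select-Object CPUScore, MemoryScore, GraphicsScore, D3DScore, DiskScore, WinSPRLevel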

The result of that test:

The highest score comes from the SSD, so here is also more detailed information about it:


Because the device is completely passively cooled, I was also interested to see what happens when it runs under full load for a longer time. In this picture it has been under full load for over an hour:

No problems were found in that test.

Review summary

Because these devices come without an operating system license and there are no official, tested drivers available, they are not the best choice for end users.

Possible use cases, as I see them, are:

  • Companies that have a volume license contract with Microsoft can use these as workstations. The price/performance ratio is very good for that use case because they don't need to buy double licenses (OEM + volume).
  • Universities and companies that use Linux workstations can use these devices without needing to buy a Windows license.
  • Companies that need PCs in industrial halls might be interested, because the device doesn't contain any moving parts, so dust won't easily break it, and it is very cheap to keep a couple of spares in storage. The device is also very small and comes with a wireless connection, so it would work fine, for example, on warehouse trucks.
  • IT nerds like me can use these as test machines or, for example, as an HTPC.

Anyway, if you know what you are looking for, I can recommend trying one of these. If you are not sure about all the device details, you can ask the reseller; they reply very fast.

Sunday, October 19, 2014

How to build ADFS (SAML 2.0) to KCD "proxy" using Citrix NetScaler - Part 2

This is the second part of my 'How to build ADFS (SAML 2.0) to KCD "proxy" using Citrix NetScaler' guide.
You can find the first part here: How to build ADFS (SAML 2.0) to KCD "proxy" using Citrix NetScaler - Part 1

In the first part of this guide I said that we would join the Netscaler to the domain. Well, I noticed later that it is not needed in this configuration, because no client connects to the Netscaler using Kerberos authentication.

Anyway, here are the rest of the configurations needed to get this working.

Enable Kerberos authentication to IIS page


I don't want to copy that whole checklist here, but in short you need to:
  • Disable "Anonymous Authentication"
  • Enable "Windows Authentication" to IIS web site.
  • Disable kernel mode authentication.
  • Add "Negotiate:Kerberos" authentication provider and remove all others.
My IIS settings looks like this.
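
If you prefer to script those changes, a rough equivalent with the WebAdministration module could look like this. It is only a sketch; the site name "Default Web Site" is an assumption, so adjust it to your environment.
Import-Module WebAdministration
$site = 'Default Web Site'

# Disable anonymous authentication and enable Windows authentication
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location $site -Filter 'system.webServer/security/authentication/anonymousAuthentication' -Name enabled -Value $false
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location $site -Filter 'system.webServer/security/authentication/windowsAuthentication' -Name enabled -Value $true

# Disable kernel mode authentication
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location $site -Filter 'system.webServer/security/authentication/windowsAuthentication' -Name useKernelMode -Value $false

# Remove the default providers and add Negotiate:Kerberos only
Remove-WebConfigurationProperty -PSPath 'IIS:\' -Location $site -Filter 'system.webServer/security/authentication/windowsAuthentication/providers' -Name '.' -AtElement @{value='Negotiate'}
Remove-WebConfigurationProperty -PSPath 'IIS:\' -Location $site -Filter 'system.webServer/security/authentication/windowsAuthentication/providers' -Name '.' -AtElement @{value='NTLM'}
Add-WebConfigurationProperty -PSPath 'IIS:\' -Location $site -Filter 'system.webServer/security/authentication/windowsAuthentication/providers' -Name '.' -Value @{value='Negotiate:Kerberos'}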

Test that Kerberos authentication works

At this point it is a good idea to test that you really can connect to the application and that authentication works, from some machine that is in the same domain as the backend server.


Because we are using Kerberos authentication, you will notice that you must connect to the application using a name which has a registered SPN (Service Principal Name) in Active Directory (a way to check the registered SPNs is shown after the lists below).

That means that with default settings these URLs work:
  • http://iis.contoso.local
  • http://iis

And these do not work:
  • http://192.168.100.21
  • http://iis.contoso.com
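
You can check which SPNs actually exist with setspn.exe; the account and SPN names below just follow this example environment:
# List the SPNs registered for the computer account "iis"
setspn -L iis
# Search the forest for a specific SPN (HTTP falls back to the HOST SPN by default)
setspn -Q HOST/iis.contoso.local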

I used ASP.NET code like this on IIS to show which account was actually authenticated (the IIS ASP.NET feature is needed):
<asp:LoginName id="LoginName1" runat="server" FormatString ="Welcome, {0}" />

An important note here is that when you connect to this page through the Netscaler, it will always use the server's FQDN to connect to it. That means that even though the public URL in this example is iis.contoso.com, you don't need to register an SPN for it.

Custom monitor

After you have forced IIS to use Kerberos authentication, you will notice that the service on the Netscaler goes down (at least if you used the HTTP monitor).

The reason is that the Netscaler can no longer get the expected response from the IIS server.
To solve this issue I created a "HealthCheck" folder on the IIS side and enabled anonymous authentication for it.

Then I created a custom monitor like this and linked it to the IIS service.
add lb monitor http_HealthCheck HTTP -respCode 200 -httpRequest "HEAD /HealthCheck/" -LRTM DISABLED
unbind service svc_IIS -monitorName http
bind service svc_IIS -monitorName http_HealthCheck


Allow Kerberos delegation in AD

  • Created a domain account svc_ns_kcd
  • Created a new SPN for that account using the following command:
  • setspn -S host/nsidp.contoso.com svc_ns_kcd
    • This is only needed to enable the "Delegation" tab for that service account, but of course it needs to be unique in the domain. I used the name nsidp.contoso.com, which is used as the SAML provider.
  • Added the following delegation:


Traffic policy

When we have everything else in place, we just need to create a KCD account on the Netscaler and assign it to the service using a traffic policy.
add aaa kcdAccount svc_ns_kcd -realmStr CONTOSO.LOCAL -delegatedUser svc_ns_kcd -kcdPassword Qwerty7

add tm trafficAction trafficKCDSSO -SSO ON -kcdAccount svc_ns_kcd
add tm trafficPolicy trafficKCDSSO TRUE trafficKCDSSO
bind lb vserver vsrv_IIS -policy trafficKCDSSO -priority 100

After you add that configuration, you should be able to connect to https://iis.contoso.com using SAML federation, and you should be authenticated to the application using Kerberos.

Session handling trick

After some testing I noticed that the Netscaler got a Kerberos ticket only for the first user, and after that it authenticated the second user to the application using the first user's credentials. Because I didn't find a solution to the problem myself, I created a support request to Citrix. They found from the log files that the problem in this configuration is that the Netscaler always connects to the backend application using the same source port; that is why IIS didn't request user authentication (with a 401 response) and the second user was delegated to the application using the first user's session.

The solution to this problem is to set maxClient = 1 on the service. With that configuration the Netscaler always uses a different source port when it connects to the application. Then IIS always responds with a 401 to the first request and the Netscaler gets a Kerberos ticket for the user.
We can enable this setting with the following commands:
rm service svc_IIS

add service svc_IIS IIS HTTP 80 -maxClient 1

bind lb vserver vsrv_IIS svc_IIS
bind service svc_IIS -monitorName http_HealthCheck

Add another page behind this proxy

After you have used a lot of time to get this working, the relevant question is: "What is needed to add another vserver behind this system?"
First you need to have:
  • An AssertionConsumerService value specified in the SAML metadata.
    • In the example in the first part of this guide I already included iis2.contoso.com there.
  • Kerberos delegation to the new IIS server added for the svc_ns_kcd account.
Then you just create a new vserver with the needed configuration, like this:
add server DC 192.168.110.11
add service svc_DC DC HTTP 80 -maxClient 1
add lb vserver vsrv_DC SSL 192.168.110.22 443 -persistenceType NONE
bind lb vserver vsrv_DC svc_DC
bind ssl vserver vsrv_DC -certkeyName wildcard
set ssl vserver vsrv_DC -tls11 DISABLED -tls12 DISABLED
bind service svc_DC -monitorName http_HealthCheck

set lb vserver vsrv_DC -AuthenticationHost iis2.contoso.com -Authentication ON -authnVsName auth_vsrv
bind lb vserver vsrv_DC -policy trafficKCDSSO -priority 100

NOTE! In this configuration both of these web pages (iis.contoso.com and iis2.contoso.com) are authenticated using the auth.contoso.com authentication vserver, and they use the same relying party rule on the ADFS side. If you are a service provider that wants to publish different web pages to different customers, then you need one authentication vserver per customer; that configuration allows you to use different relying parties for different web servers.

The final words

It has been a very nice experience learning how Kerberos authentication and SAML federation work at a deep level, and I really like that the Netscaler allows us to do this. I hope this will be a useful guide for people who are looking for the same kind of solution. My trip here is now over and it is time to move on to the next technologies.

Thursday, September 18, 2014

Easily modify UAC protected config files

Story behind this post

Especially in testing and development environments I have run into situations where I need to manually modify config files.

Personally I mostly use Notepad++ for that task because it is a free tool with a good syntax highlighting feature. With the Compare plugin it is also a very powerful tool for finding errors in manually created configurations.


Challenge with UAC protected files

If you want to modify, for example, a web.config file using Notepad++, you will notice that it can't save files which are protected by UAC (User Account Control).

I have seen people using three different solutions for this problem:
  1. Disable UAC
    1. This of course works, but it is a bad solution from a security point of view.
  2. Run CMD or PowerShell using the "Run as administrator" function and run the command notepad config.file there.
    1. This also works, but now you don't have syntax highlighting, so you can easily break, for example, XML syntax.
  3. Run CMD or PowerShell using the "Run as administrator" function, start Notepad++ from there and open the config file in it.
    1. This also works, but it is too painful a way to do it.

Better solution

Once, when I was planning what should be included in our virtual machine templates, I got the idea that it would be nice to have an "Edit with Notepad++ (Run as administrator)" function in the right-click menu.

What I tried first was just enabling the "Run this program as an administrator" feature on notepad++.exe; unfortunately the result wasn't so good:

The reason for the error is the way the "Edit with Notepad++" feature is implemented. It uses the ShellExecute function, which just doesn't work here.


After some Googling and testing I found a solution: this can be done if it is configured differently in the Windows registry. Just in case, I created a new function instead of touching the old one, so now I have both "Edit with Notepad++" and "Edit with Notepad++ (Run as administrator)" selections in the right-click menu.
The result looks like this:



Because I like PowerShell, here is a script for you which does the following tasks:
  • Installs Notepad++
  • Installs the Compare plugin
  • Disables Notepad++ auto-update
  • Disables plugin auto-update
  • Creates the "Edit with Notepad++ (Run as administrator)" function in the right-click menu
You just need to put this script in the same folder as the Notepad++ installer and the Compare plugin's DLL file.

$ScriptPath = Split-Path $script:MyInvocation.MyCommand.Path
$NPPInstallerName = "$ScriptPath\npp.6.6.9.Installer.exe"
$NPPInstallerParameters = "/S"
$InstallFolder = "C:\Program Files (x86)\Notepad++"

# Install
Start-Process -NoNewWindow -Wait -FilePath $NPPInstallerName -ArgumentList $NPPInstallerParameters

# Install compare plugin
Copy-Item "$ScriptPath\ComparePlugin.dll" "$InstallFolder\plugins\"

# Disable Notepad++ updater
$CommonConfig = "C:\Program Files (x86)\Notepad++\config.model.xml"
[xml]$Config = Get-Content $CommonConfig 
($Config.NotepadPlus.GUIConfigs.GUIConfig | Where-Object {$_.name -eq "noUpdate"})."#text" = "yes"
$Config.Save($CommonConfig)


# Disable plugins auto update on all profiles
$PluginManagerINI = @"
[Settings]
NotifyUpdates=0
DaysToCheck=0
"@
$ProfileFolders = Get-ChildItem C:\Users -Directory -Exclude Public,"Default User" -Force
ForEach ($ProfileFolder in $ProfileFolders) {
 New-Item -ItemType Directory -Path "$($ProfileFolder.FullName)\Notepad++\plugins\Config" -Force
 $PluginManagerINI | Out-File "$($ProfileFolder.FullName)\Notepad++\plugins\Config\PluginManager.ini"
}


# Create "Edit with &Notepad++ (Run as administrator)" to right click menu
Copy-Item "$InstallFolder\notepad++.exe" "$InstallFolder\notepad++admin.exe"
New-Item -Path 'HKLM:SOFTWARE\Classes\*\shell\OpenWithNotepad' -Force
New-Item -Path 'HKLM:SOFTWARE\Classes\*\shell\OpenWithNotepad\Command' -Force
New-Item -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers' -Force
Set-ItemProperty -Path 'HKLM:SOFTWARE\Classes\`*\shell\OpenWithNotepad' -Name "(Default)" -Value "Edit with &Notepad++ (Run as administrator)"
New-ItemProperty -Path 'HKLM:SOFTWARE\Classes\`*\shell\OpenWithNotepad' -Name "icon" -Value "%SystemRoot%\system32\imageres.dll,73"
Set-ItemProperty -Path 'HKLM:SOFTWARE\Classes\`*\shell\OpenWithNotepad\Command' -Name "(Default)" -Value '"C:\Program Files (x86)\Notepad++\notepad++admin.exe" "%1"'
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers" -Name "C:\Program Files (x86)\Notepad++\notepad++admin.exe" -Value "~ RUNASADMIN"


I have also submitted this as a Notepad++ feature request. Hopefully it will be included in the official version at some point: https://sourceforge.net/p/notepad-plus/feature-requests/2517/

Friday, July 25, 2014

How to build ADFS (SAML 2.0) to KCD "proxy" using Citrix NetScaler - Part 1

Story behind this post

Some time ago I got a request from a customer project: they needed to give the customer Excel access to SQL Analysis Services located in our cloud environment, with the customer connecting to it from their own network over the internet.

The first tricky part was that this connection should be single sign-on from the user's point of view.

Using SharePoint's Excel Services it is possible to create an Excel sheet which is located on SharePoint, and Excel Services will connect to the back-end service. Because the data in SQL Analysis Services would also differ depending on who is connecting to it, Kerberos delegation was the only possible way to continue.

SharePoint contains native support for ADFS federation, but another tricky part was that if you enable ADFS federation on SharePoint, it authorizes users without authenticating them. Because the users are not authenticated, you can't get Kerberos tickets for them, so authentication of the connection to SQL Analysis Services no longer works. More info about that here: http://blogs.msdn.com/b/andrasg/archive/2010/05/04/setting-up-sharepoint-2010-excel-services-to-get-external-data.aspx


How can Citrix NetScaler help with this situation?

Unlike SharePoint, NetScaler supports extracting user information from ADFS (SAML 2.0) claims and retrieving a Kerberos ticket for the user. That concept is called Kerberos constrained delegation (KCD). With that Kerberos ticket the NetScaler can forward the user's session to any web service which supports Kerberos authentication.

When I say any web service, I really mean it. So what actually happened was that I found a solution which can provide SAML 2.0 federation support to any application that uses IIS native authentication, without coding it manually into every application, and because it uses Kerberos authentication to the back-end services, double or even triple hops are not a problem any more.

This basically means that you can, for example, have a web page which gets data from another web server which gets data from a SQL server, and still use integrated authentication on the SQL server side.

How authentication process works on this concept

This picture shows how the authentication process works in this concept.
What is missing from the picture is the communication with the Claims Provider, which actually issues the SAML claims for the user.

If you want to get this working like federation to Office 365, you also need these:
  • An ADFS server in the customer's network where the user can actually be authenticated using their own domain's accounts.
    • In the picture above that means that after Step 3, ADFS would first redirect the user's browser to the customer's ADFS server and wait for it to come back with the correct ADFS claim.
  • Because we want to use Kerberos delegation to the back-end service(s), you need to have users created in the cloud environment's Active Directory with the same identifier field (best practice is to use the UPN) as they have in the customer's domain.
    • An important note is that these are standards-based solutions. This means that you can use any SAML 2.0 product (ADFS, Shibboleth, etc.) on your side or the customer's, and you can mix them if needed. And on the customer's side there can be any directory (Active Directory, OpenLDAP, SQL, etc.) from which you can get authentication information.

Configuring

Next I will explain step by step how to configure this concept in a lab environment. I will use one empty IIS server in this example.

I used a completely empty Netscaler for building this lab, so all the needed steps should be in this guide. I used Netscaler version 10.1, but all configurations are done from the command line, so they should work on at least all 10.x versions.

In this example the Domain Controller, the IIS server and the Netscaler are all part of the same Active Directory domain (yes, we will join the Netscaler to the domain :) )

Basic configs

Because the Netscaler needs to get Kerberos tickets from the domain controllers, it needs to have working DNS settings. And because in an Active Directory domain Kerberos tickets are only valid for five minutes (default setting), the Netscaler's clock must be in sync with the domain.

You can configure both of these using the following commands:
add dns nameServer 192.168.100.11
add dns suffix contoso.com
add ntp server 192.168.100.11

We also need to create a virtual server in front of the real IIS server on the Netscaler.
You can create this using the normal procedure, but there are two important things you should remember:
  1. You must create a server record on the Netscaler instead of using the destination server's IP address directly on the service.
  2. You must use the server's real hostname on the server record.
    1. That means that you can't use, for example, an "srv_" prefix on server records.
    2. This is important because the Netscaler will request Kerberos tickets for that hostname, so if it isn't exactly the same as the server's real name, the domain controller will reject the request.



I created a *.contoso.com certificate for the Netscaler, imported it using the keypair name wildcard, and added it to all servers' Trusted Root Certificates. That solved all the certificate problems I had in this lab.

The easiest way I know is to generate a self-signed certificate using the following PowerShell command (PowerShell 4.0 is needed):
New-SelfSignedCertificate -DnsName *.contoso.com -CertStoreLocation cert:\LocalMachine\My
Then you can export the certificate in PFX format and import it to the Netscaler in accordance with this instruction: http://support.citrix.com/article/CTX136444
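
The export step can also be done with PowerShell; this is only a sketch, and the file path and password are placeholders:
# Find the wildcard certificate and export it with its private key in PFX format
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like '*contoso.com*' } | Select-Object -First 1
$pfxPassword = ConvertTo-SecureString 'Password1' -AsPlainText -Force
Export-PfxCertificate -Cert $cert -FilePath C:\Temp\wildcard.pfx -Password $pfxPassword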


I used the following commands to configure load balancing for the IIS service (and to enable the required features).
enable feature LB
enable feature SSL
enable feature AAA

add server IIS 192.168.100.21
add service svc_IIS IIS HTTP 80
add lb vserver vsrv_IIS SSL 192.168.100.20 443 -persistenceType NONE
bind lb vserver vsrv_IIS svc_IIS 
bind service svc_IIS -monitorName http
set ssl vserver vsrv_IIS -tls11 DISABLED -tls12 DISABLED
bind ssl vserver vsrv_IIS -certkeyName wildcard
At this point it is a good idea to create a DNS record for your web page and check that you can connect to it. I'm using the URL https://iis.contoso.com in this example.
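
If your DNS runs on a Windows server, the record can be created with the DnsServer module; this snippet is my addition, so adjust the zone and IP to your environment:
# Create an A record iis.contoso.com pointing to the load balancing vserver VIP
Add-DnsServerResourceRecordA -ZoneName 'contoso.com' -Name 'iis' -IPv4Address '192.168.100.20'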

Creating authentication vserver

We need an authentication vserver in this concept, so I created that one next.
I first configured it to use LDAP authentication, because it is much easier to configure than SAML, so I was able to test that the auth vserver works.

There is a good guide for this part here, so I won't explain it further, but my commands are below: http://support.citrix.com/article/CTX126852

add authentication vserver auth_vsrv SSL 192.168.100.30 443 -AuthenticationDomain contoso.com
bind ssl vserver auth_vsrv -certkeyName wildcard

add authentication ldapAction auth_ldap_srv -serverIP 192.168.100.11 -ldapBase "dc=contoso,dc=local" -ldapBindDn ns@contoso.local -ldapBindDnPassword Password1 -ldapLoginName samAccountName
add authentication ldapPolicy auth_ldap_policy ns_true auth_ldap_srv
bind authentication vserver auth_vsrv -policy auth_ldap_policy -priority 100

add tm sessionAction sessionLDAPSSO -SSO ON -ssoCredential PRIMARY -ssoDomain contoso.local
add tm sessionPolicy sessionLDAPSSO ns_true sessionLDAPSSO
bind authentication vserver auth_vsrv -policy sessionLDAPSSO -priority 1

When the authentication vserver was ready, I enabled it on my IIS page using the following command and tested that authentication with a username and password works.
set lb vserver vsrv_IIS -AuthenticationHost auth.contoso.com -Authentication ON -authnVsName auth_vsrv

Configuring SAML support on the Netscaler

Here is a guide on how to configure SAML support on the Netscaler: http://support.citrix.com/article/CTX133919

You can follow that guide, but there is one important note: "Metadata file is not created by default. NetScaler administrator has to create the metadata file".

That guide also contains an example metadata file, but it is in screenshot format and the guide doesn't really explain what the file should contain. So if you are not familiar with the SAML protocol it can be hard to get it done.


That is why I will provide my Netscaler's metadata file from my lab and try to explain all the relevant parts of it.

First of all, you need to give some DNS name to your Netscaler SAML IdP. You can use the same IdP for multiple URLs as long as all of them can use the same ADFS (or any SAML provider) policies.


I used the DNS name nsidp.contoso.com in this example. You don't need to configure that name in DNS, but it will be included in the SAML signing certificate. The SAML certificate can be self-signed, because only the other SAML providers (ADFS in this example) need to trust it, and it will be included in the metadata XML. I used the same method as earlier for generating this certificate.

After that I exported the nsidp.contoso.com certificate again from the server, but this time only the public key, and saved it in Base64-encoded format.

Then I opened that .cer file in Notepad, removed all the line breaks from it and copied the certificate without the headers into the metadata XML.
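
The same single-line Base64 string can also be produced directly with PowerShell; a small sketch, assuming the certificate is in the local machine store:
# Export the public key of the nsidp certificate as one Base64 line without headers
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like '*nsidp.contoso.com*' } | Select-Object -First 1
[Convert]::ToBase64String($cert.RawData) | Out-File .\nsidp-base64.txt -Encoding ascii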


My whole metadata file is visible here:
The important settings in that file are:
  • entityID
    • This is your Netscaler IdP's unique identity. It will be sent to ADFS in all requests so ADFS knows which policy it should use.
  • ds:X509SubjectName
    • Your IdP's name.
  • ds:X509Certificate
    • Your IdP's public certificate.
  • md:AssertionConsumerService
    • This is the URL where ADFS will redirect the user's session after successful authentication. The Netscaler sends this URL to ADFS in all requests, but ADFS rejects them if the URL is not configured on it.
    • The URL is automatically generated for all sites where you are using SAML authentication.
    • You must add a new URL to the metadata every time you add a new vserver that uses SAML authentication. Each URL must have a unique index.
When the metadata file is ready, you can import it into the relying party trusts in the ADFS console and follow the guide for all the other steps.

Because the Netscaler's certificate is self-signed, I also disabled its CRL check using the following PowerShell command on the ADFS server:
Set-AdfsRelyingPartyTrust -SigningCertificateRevocationCheck None -TargetName nsidp.training.lab


I uploaded the ADFS server's signing certificate to the Netscaler and configured the ADFS server as the Netscaler's claims provider trust using the following commands:
add ssl certKey adfs-signing -cert adfs-signing.cer 
add authentication samlAction auth_saml -samlIdPCertName adfs-signing -samlSigningCertName nsidp -samlRedirectUrl "https://adfs.contoso.com/adfs/ls/" -samlUserField "Name ID" -samlIssuerName auth.contoso.com


Change authentication vserver to use SAML authentication

At this point I created a new authentication policy which uses SAML and changed the authentication vserver to use it.
unbind authentication vserver auth_vsrv -policy auth_ldap_policy

add authentication samlPolicy auth_saml_policy ns_true auth_saml
bind authentication vserver auth_vsrv -policy auth_saml_policy

Now you should be able to connect to your web page using ADFS federation. To verify it works, you probably need to test with a browser which does not do automatic login to the ADFS server. Otherwise you can't see whether ADFS was used or not.


The next steps are joining the Netscaler to the domain and generating the needed Kerberos configurations.
I will write part 2 of this guide about them later.

Monday, July 14, 2014

Automatic failover in a two-data-center SQL AlwaysOn solution with a minimum number of components


Using SQL AlwaysOn you can very easily create a solution where you have two SQL servers (physical or virtual) located in different data centers with all data replicated between them.

Because shared storage is not needed with AlwaysOn, you don't need expensive storage replication systems either, and you can even keep all the data on local storage.

Automatic failover challenges

If you want failover to be automatic when the active node goes down or loses network connectivity, you need to plan and test carefully how traffic between the nodes, and from the nodes to the quorum, works in all situations.

There are of course multiple ways to solve this, but I will explain how we solved it with our network provider.

We are using Node and File Share Majority as our quorum model because it is the only model which can be used in this case. More information on why a witness share is the only choice here can be found at: http://blogs.technet.com/b/askpfeplat/archive/2012/06/27/clustering-what-exactly-is-a-file-share-witness-and-when-should-i-use-one.aspx

Our network provider also created a support request to Microsoft and got the recommendation that both cluster nodes should always be able to see the witness share, even if the connection between the data centers is lost. That was one important reason behind our solution.
Another one was that if both cluster nodes can see each other but lose the connection to the witness share, it only causes an alert in the event log (which you should be monitoring) and the services stay alive.
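
For reference, the quorum model can be set from PowerShell like this; the UNC path is just a placeholder for the share in the third data center:
# Configure Node and File Share Majority quorum for the cluster
Import-Module FailoverClusters
Set-ClusterQuorum -NodeAndFileShareMajority '\\witness.domain.local\SQLClusterWitness'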

Our solution

Our solution to the problem was to create completely separate routes from both data centers to a third data center where the witness share server is located. These routes use, even at the physical layer, totally different devices than the connections between VLANs or to the default gateway.

In our solution, traffic between data centers 1 and 2 uses layer 2, so both cluster nodes are in the same subnet and only the witness share connections are routed.

With that solution the route to the witness share keeps working even if the connection between the data centers is lost, or the normal route between VLANs / to the internet is lost, just as Microsoft's support recommended.

Logical picture

The following picture explains how the SQL network is split at the logical layer and how the routes to and from the witness share server are done.

In this example you would use the following IP settings on the SQL cluster nodes, plus the persistent routes which you can see in the picture (a sketch of example routes is shown after the list).
  • SQL cluster node 1
    • IP: 10.10.10.100
    • Netmask: 255.255.255.0
    • Gateway: 10.10.10.1
  • SQL cluster node 2
    • IP: 10.10.10.200
    • Netmask: 255.255.255.0
    • Gateway: 10.10.10.1
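
Since the routes themselves are only shown in the picture, here is a sketch of what the persistent routes could look like; the witness subnet 10.10.30.0/24 and the router addresses 10.10.10.2 and 10.10.10.3 are my assumptions, not values from the picture:
# On SQL cluster node 1 (uses the separate witness router in data center 1)
route -p ADD 10.10.30.0 MASK 255.255.255.0 10.10.10.2
# On SQL cluster node 2 (uses the separate witness router in data center 2)
route -p ADD 10.10.30.0 MASK 255.255.255.0 10.10.10.3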

Limitations/challenges on our solution

I identified at least these limitations/challenges in our solution which you should keep in mind.
  • You need to check very carefully that you are using the correct router for the witness share connection. If you misconfigure it, you will lose the route to the witness share when the connection between data centers 1 and 2 is lost.
  • You need to check very carefully that you are using an IP address from the correct part of the address range. If you misconfigure it, you will lose the route from the witness share server back to your cluster node.

Benefits of this solution

The biggest benefit of this solution is that you get a low-cost SQL cluster which can automatically handle even a whole data center crash. That is very important, especially with applications that store all their important data in databases.

Saturday, May 10, 2014

PowerShell automations with control database

Story behind this post

A while ago I noticed that it isn't possible to create smart automation if you have no way to provide control data to your scripts.
After a lot of thinking and discussion with my colleagues, I concluded that it would be useful to have a SQL database which contains all the control data, so all the scripts can read it from there.

In this blog post, I will explain how you can create a simple server provisioning database and a script which automatically creates new virtual machines using the control data in the database.

Plan and generate control database

I used DbSchema for generating this database schema. Its trial works for 15 days with all features, which is enough for generating this kind of test database.

Here is a picture of the DB and the SQL script to create it.

CREATE DATABASE ServerProvision
GO
USE ServerProvision
GO
CREATE TABLE dbo.Server ( 
 Id                   int NOT NULL   IDENTITY,
 Name                 varchar(15) NOT NULL   ,
 Description          varchar(100)    ,
 Installed            bit NOT NULL CONSTRAINT defo_Installed DEFAULT 0  ,
 CONSTRAINT Pk_Server PRIMARY KEY ( Id )
 );


Deploy servers using control database

The following functions can be used for communicating with the SQL server. You can just copy/paste them into PowerShell.
Function Connect-ControlDB {
 param (
  [Parameter(Mandatory=$True)]$SQLinstance,
  [Parameter(Mandatory=$True)]$Database,
  [Parameter(Mandatory=$True)]$SQLUser,
  [Parameter(Mandatory=$True)]$SQLPwd
 )
 $global:SQLConnection = New-Object System.Data.SqlClient.SqlConnection("Data Source=$SQLinstance;Initial Catalog=$Database;User ID=$SQLUser;Password=$SQLPwd");
}

Function New-VMSchedule {
 param (
  [Parameter(Mandatory=$True)]$VMName,
  [Parameter(Mandatory=$True)]$VMDescription
 )
 If (!($SQLConnection)) { Connect-ControlDB }
 $SQLConnection.Open()
 $SqlCmd = New-Object System.Data.SqlClient.SqlCommand("INSERT INTO Server (Name,Description) VALUES ('$VMName','$VMDescription')", $SQLConnection);
 $SqlCmd.ExecuteNonQuery()
 $SQLConnection.Close()
}

Function Get-ScheduledProvisions {
 If (!($SQLConnection)) { Connect-ControlDB }
 $SQLConnection.Open()
 $SqlCmd = New-Object System.Data.SqlClient.SqlCommand("SELECT * FROM Server WHERE Installed = 0", $SQLConnection);
 $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
 $SqlAdapter.SelectCommand = $SqlCmd
 $DataSet = New-Object System.Data.DataSet
 $SqlAdapter.Fill($DataSet)
 $SQLConnection.Close()
 Return $DataSet.Tables[0]
}

Function Set-ProvisionDone {
 param (
  [Parameter(Mandatory=$True)]$Id
 )
 If (!($SQLConnection)) { Connect-ControlDB }
 $SQLConnection.Open()
 $SqlCmd = New-Object System.Data.SqlClient.SqlCommand("UPDATE Server SET Installed = 1 WHERE Id = $Id", $SQLConnection);
 $SqlCmd.ExecuteNonQuery()
 $SQLConnection.Close()
}
Then you can schedule server deployments using these commands:
Connect-ControlDB -SQLInstance "SQLserver" -Database "ServerProvision" -SQLuser "SQLuser" -SQLPwd "SQLpwd"
New-VMSchedule -VMName "testserver1" -VMDescription "First provision test"
Then you would have, for example, this kind of script scheduled on the Hyper-V server (of course you need to include the functions above in it). The script reads the scheduled installations from SQL, deploys them and marks them done.
Connect-ControlDB -SQLInstance "SQLserver" -Database "ServerProvision" -SQLuser "SQLuser" -SQLPwd "SQLpwd"
$VMs = Get-ScheduledProvisions
If ($VMs[0] -gt 0) {
 ForEach ($VM in ($VMs[1..$VMs.count])) {
  New-VM -Name $VM.Name
  Set-VM -Name $VM.Name -Notes $VM.Description
  Set-ProvisionDone -Id $VM.Id
 }
}

Wednesday, February 19, 2014

Manage FloodLight using PowerShell

PowerShell is a very powerful tool for managing Hyper-V and VMware environments.

That is why it would be useful to be able to manage networks using the same tool.

FloodLight contains a REST API which is very simple to call from PowerShell. I created a PowerShell module for basic FloodLight management. The PowerShell module and its documentation are available here: https://floodlightpscmdlet.codeplex.com/documentation
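
If you just want to call the REST API directly without the module, Invoke-RestMethod is enough. A minimal sketch, assuming the controller listens on the default port 8080 and the hostname is a placeholder:
# List all switches currently known to the Floodlight controller
$controller = "http://floodlight.domain.local:8080"
Invoke-RestMethod -Uri "$controller/wm/core/controller/switches/json" -Method Get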

Tuesday, February 18, 2014

Build latest of FloodLight's version from GitHub

FloodLight's documentation includes information about Virtual Network Filter v1.1:
http://docs.projectfloodlight.org/display/floodlightcontroller/VirtualNetworkFilter+%28Quantum+Plugin%29+%28Dev%29

The FloodLight version included in Ubuntu only supports the older Quantum Plugin 1.0. That is why I wanted to build the latest version from GitHub into a deb package and use it.

The reason for using a deb package is that when, some day, a newer version is included in Ubuntu's repository, you can upgrade FloodLight using apt-get.

Here is a short guide on how you can build your own deb package:
# Installing pre-requirements
sudo apt-get install build-essential git default-jdk ant python-dev devscripts debhelper junit4 thrift-compiler libjs-twitter-bootstrap libjs-backbone libjs-jquery libjs-underscore yui-compressor

# Clone latest source code from GitHub
git clone git://github.com/floodlight/floodlight.git
mv floodlight floodlight-0.90+dfsg-9custom1
cd floodlight-0.90+dfsg-9custom1

GitHub contains only a very basic version of the deb packaging information. That is why it is easier to remove the debian folder cloned from GitHub and use Ubuntu's own version as a template.
rm -rf debian
wget http://archive.ubuntu.com/ubuntu/pool/universe/f/floodlight/floodlight_0.90+dfsg-0ubuntu1.debian.tar.gz
tar -zxvf floodlight_0.90+dfsg-0ubuntu1.debian.tar.gz

You need to add new version information. Otherwise apt-get upgrade will replace your package.
debchange -i
floodlight (0.90+dfsg-9custom1) raring; urgency=low

  * Latest version from GitHub

 -- Olli Janatuinen <olli.janatuinen@gmail.com>  Tue, 18 Feb 2014 18:18:06 +0000

Some modifications were also needed because they were required by the debian/rules script.
rm debian/source/format
cp README.md README.txt

Then we just need to build the new package and install it.
dpkg-buildpackage -b
cd ..
sudo dpkg -i floodlight_0.90+dfsg-9custom1_all.deb

Sunday, February 16, 2014

How to create OpenFlow testing virtual machine over Hyper-V

As I said in my welcome post, the idea is to create an OpenFlow testing virtual machine (VM) on Hyper-V. So here is a guide on how to do that :)

This configuration isn't production ready, but it is still good enough for testing and learning OpenFlow. I will create other posts about production-ready configurations.

I'm only talking about Hyper-V here, but it should be possible to do everything I describe in this post on VMware too. You just need to create the VM and the networks manually.

Components used here and the reasons for selecting them:
  • Floodlight OpenFlow Controller
    • Active OpenFlow Controller project
    • Good REST API which is easy to call using PowerShell.
    • OpenStack has support for FloodLight. (I will probably talk about OpenStack later.)
  • Open vSwitch
    • Included in Ubuntu
    • Easy to create a virtual OpenFlow switch
  • Ubuntu Server 13.10
    • I'm already familiar with Ubuntu.
    • Latest LTS (Long Term Support) version doesn't include Floodlight.
    • Hyper-V drivers are included in Ubuntu.

A couple of words about network topology

On Hyper-V you can create virtual switches whose type is "Private network", but these networks are still shared between VMs. That is why we need to create as many virtual switches on Hyper-V as we want ports on our Open vSwitch.

Hyper-V supports extensible switch extensions, which would make it possible to connect it directly to an OpenFlow controller, but unfortunately there isn't yet any free/open source solution for that.

A logical picture of how the virtual machines and virtual switches will be connected in this scenario.


Creating virtual machine to Hyper-V

As my colleague put it, "Because real admins don't use a GUI", here is a PowerShell script for generating the virtual switches on Hyper-V and a virtual machine which uses them. That VM will contain the OpenFlow controller and the OpenFlow switch.

You can specify the number of switch ports in the variable $SwitchPortNum, but then you need to change the same value in the Open vSwitch port configuration script too.

# Settings
$vmMemory = 2048MB
$vmCPU = 2
$vmName = "OpenFlowTestVM"
$vmSwitchBaseName = "OpenFlow-SwitchPort"
$SwitchPortNum = 5

# Create VM
New-VM -Name $vmName -MemoryStartupBytes $vmMemory
Set-VM -Name $vmName -ProcessorCount $vmCPU

# Create private networks for switch ports and connect them to VM
For ($i = 1; $i -le $SwitchPortNum; $i++) {
     New-VMSwitch -Name "$vmSwitchBaseName`-$i" -SwitchType Private
     Add-VMNetworkAdapter -VMName $vmName -SwitchName "$vmSwitchBaseName`-$i" -Name "eth$i"
}

# Allow MAC address spoofing for all NICs (needed by Open vSwitch)
Set-VMNetworkAdapter -VMName $vmName -MacAddressSpoofing On

Installing VM


Here are two possible ways of installing this VM. You can use the kickstart script which I already created for you, or do everything manually.

Automatic installation and configuration of the VM

Connect the VM to a network where DHCP is available.
Boot the VM from the Ubuntu ISO file.
Select the language, press F6, press ESC, type ks=http://pastebin.com/raw.php?i=N1unnRjp and press Enter.
If you are not using a US keyboard and you want to copy-paste that URL into the VM, you also need to press F3 and select your keymap.

Then the VM will be installed automatically.
The Open vSwitch port configuration can be generated with the following command (remember to fix the number of ports in it first):
sudo /configure-switch-ports.sh

The credentials for the VM are:
Username: user
Password: Qwerty7!

Manual installation and configuration of the VM

Connect the VM to a network where DHCP is available.
Boot the VM from the Ubuntu ISO file.
Use the default settings during installation (including OpenSSH is a good idea).

Installing FloodLight

sudo apt-get install floodlight
sudo update-rc.d floodlight start 90 2 3 4 5 . stop 10 0 1 6 .
sudo reboot

Installing Open vSwitch, configuring the switch ports and connecting it to Floodlight

# Installing Open vSwitch
sudo apt-get install openvswitch-switch

# Enable automatic start and stop for service
sudo update-rc.d openvswitch-switch start 91 2 3 4 5 . stop 10 0 1 6 .

# Config switch ports
sudo -s
# Guide: cat /usr/share/doc/openvswitch-switch/README.Debian
PORTSNUM=5
CONFIG="/etc/network/interfaces"
echo >> $CONFIG
echo "allow-ovs br0" >> $CONFIG
echo "iface br0 inet manual" >> $CONFIG
echo "ovs_type OVSBridge" >> $CONFIG
for ((i=1; i<=$PORTSNUM; i++)) {
  OVSPORTS+="eth$i "
}
echo "ovs_ports $OVSPORTS" >> $CONFIG
echo "ovs_extra set-controller br0 tcp:127.0.0.1:6633" >> $CONFIG
echo >> $CONFIG

for ((i=1; i<=$PORTSNUM; i++)) {
  echo "allow-br0 eth$i" >> $CONFIG
  echo "iface eth$i inet manual" >> $CONFIG
  echo "ovs_bridge br0" >> $CONFIG
  echo "ovs_type OVSPort" >> $CONFIG
  echo >> $CONFIG
}

sudo reboot

Testing

You need at least two virtual machines to test this configuration.
Connect the first VM to the network OpenFlow-SwitchPort-1 and the second one to the network OpenFlow-SwitchPort-2.

Because forwarding is enabled by default in Ubuntu's FloodLight package, ping between these machines should work now (disable the firewall on them if it doesn't).

Just to be sure that OpenFlow really controls the traffic, you can try disabling the forwarding module:
sudo nano /etc/floodlight/floodlightdefault.properties
Remove the line "net.floodlightcontroller.forwarding.Forwarding,\" from the file.
sudo service floodlight restart
After that the ping replies should stop working. More information about FloodLight's modules: http://www.openflowhub.org/display/floodlightcontroller/Module+Applications


In the next post I will talk more about how to control traffic using OpenFlow.

Tuesday, February 11, 2014

Welcome to my blog

Hi

I work as a Solution Architect in the Cloud Architecture and Availability team at a SaaS provider company.

Our company is not yet using network virtualization, but I can see that it would give us an easier way to handle network security.

I created this blog because I didn't find any guide on how to test OpenFlow with Hyper-V and/or multi-hypervisor environments.

The plan is that I will start testing and learning OpenFlow on Hyper-V and share my experiences with you.


The reason for my OpenFlow interest is that in a mixed VMware + Hyper-V environment you would otherwise need two different network virtualization technologies (VXLAN and NVGRE), and I also prefer vendor-independent solutions.


I hope that this blog will give you useful information.


EDIT on 2014-07-13: It looks like this blog will also contain a lot of my ideas and experiences from all the cloud technology areas I'm working with.