Wednesday, October 12, 2016

Taking first steps to DevOps world - part 3

This is the third part of my multi-part blog post series about suggested first steps (from my point of view) into the DevOps world.

Link to part 1
Link to part 2

IIS configuration defined by code

Here is the code-defined IIS configuration for my example application.

I used the Core version of Windows Server 2012 R2 together with Windows Management Framework 5.0 and .NET Framework 4.6.2 to test this configuration, but it should also work on the Core version of Windows Server 2016. The Nano version of Windows Server 2016 does not yet contain all the features needed by this example application, so it was not possible to use it for this purpose.
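The configuration uses the xWebAdministration DSC module, which does not ship with Windows. A minimal sketch of getting it from the PowerShell Gallery on WMF 5.0 (assuming the machine has internet access; on offline servers copy the module to the modules folder manually):

# Download and install the xWebAdministration DSC module from the PowerShell Gallery
Install-Module -Name xWebAdministration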

Configuration WebApplication
{
    param
    (
        [String[]]$NodeName = 'localhost',

        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [String]$SourcePath,
  
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [String]$AppName
    )

    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Import-DscResource -ModuleName xWebAdministration

    Node $NodeName
    {
        WindowsFeature IIS
        {
            Ensure                         = 'Present'
            Name                           = 'Web-Server'
        }

        WindowsFeature AspNet45
        {
            Ensure                         = 'Present'
            Name                           = 'Web-Asp-Net45'
            DependsOn                      = '[WindowsFeature]IIS'
        }
  
        WindowsFeature NET-WCF-HTTP-Activation45
        {
            Ensure                         = 'Present'
            Name                           = 'NET-WCF-HTTP-Activation45'
            DependsOn                      = '[WindowsFeature]AspNet45'
        }
  
        File WebContent
        {
            Ensure                         = 'Present'
            SourcePath                     = $SourcePath
            DestinationPath                = "C:\inetpub\" + $AppName
            Recurse                        = $true
            Type                           = 'Directory'
            DependsOn                      = '[WindowsFeature]NET-WCF-HTTP-Activation45'
            Checksum                       = "modifiedDate"
            MatchSource                    = $true
        }
  
        xWebAppPool AppPool
        {
            Name                           = $AppName
            Ensure                         = 'Present'
            State                          = 'Started'
            autoStart                      = $true
            enable32BitAppOnWin64          = $false
            startMode                      = 'AlwaysRunning'
            DependsOn                      = "[File]WebContent"
        }

        xWebApplication WebApp
        {
            Website                        = "Default Web Site"
            Name                           = $AppName
            WebAppPool                     = $AppName
            PhysicalPath                   = "C:\inetpub\" + $AppName
            Ensure                         = "Present"
            PreloadEnabled                 = $true
            DependsOn                      = "[xWebAppPool]AppPool"
        }
 
        # Enable IIS remote management
        WindowsFeature Web-Mgmt-Service
        {
            Ensure                         = 'Present'
            Name                           = 'Web-Mgmt-Service'
            DependsOn                      = '[WindowsFeature]IIS'
        }
  
        Registry RemoteManagement
        {
            Key                            = 'HKLM:\SOFTWARE\Microsoft\WebManagement\Server'
            ValueName                      = 'EnableRemoteManagement'
            ValueType                      = 'Dword'
            ValueData                      = '1'
            DependsOn                      = @('[WindowsFeature]IIS','[WindowsFeature]Web-Mgmt-Service')
        }

        Service StartWMSVC
        {
            Name                           = 'WMSVC'
            StartupType                    = 'Automatic'
            State                          = 'Running'
            DependsOn                      = '[Registry]RemoteManagement'
        }
    }
}

WebApplication -NodeName "DevFE01","DevFE02" -SourcePath "\\server\WebAppContent" -AppName "Web"
WebApplication -NodeName "DevBL01","DevBL02" -SourcePath "\\server\DataAccessAppContent" -AppName "DataAccess"
This configuration installs the needed Windows roles, creates the IIS application pools and applications, and copies the application binaries from a UNC share to these servers (the computer accounts need read access to that share).

When you have these servers installed and joined to the domain, you can apply this configuration to them using these commands:
.\WebApplication.ps1
Start-DscConfiguration -Path .\WebApplication
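A couple of optional parameters I find useful here (a sketch, not part of the original commands): -Wait -Verbose lets you follow the progress on the console, and Test-DscConfiguration tells you later whether a server has drifted from the configuration.

# Push the configuration and show verbose progress on the console
Start-DscConfiguration -Path .\WebApplication -Wait -Verbose
# Later: check whether a node still matches its configuration
Test-DscConfiguration -ComputerName DevFE01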

IIS remote management

Another interesting detail is that this configuration also enables IIS remote management, so you can use the IIS console on your management server to manage these applications on the Core server(s) like this:


The fifth lesson learned: install and configure the IIS Management Service role on your IIS servers so you are able to manage them remotely.

Database(s) defined by code

Using Entity Framework, it is possible to define the data model using the "code first" approach. In that model you define in source code what kind of data you will have, how the entities are linked together, and so on, and Entity Framework can automatically create and update your database(s) based on that model.

In my example application there are two entity types, Companies and People, and based on these code files Entity Framework will create a database like this:


It is not possible to define all the database settings needed in production using Entity Framework, so I suggest that you create empty database(s) with the correct initial size, auto-growth, and other settings for your application(s), and let Entity Framework handle creating and updating the database structure.
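For creating and updating that structure, the EF6 code-first migrations workflow looks roughly like this in the Visual Studio Package Manager Console (a minimal sketch assuming an EF6 project; the migration name is just an example):

# Run once to add the Migrations folder to the project
Enable-Migrations
# Scaffold a migration from the current code-first model
Add-Migration InitialCreate
# Apply pending migrations to the database from the connection string
Update-Database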

The sixth lesson learned: create SQL users and databases manually and use the Entity Framework code-first approach to create and update your databases.

Screenshots from example application

Now we have the example application installed and working in our environment, so here are screenshots of how it looks:





This was the third part of this blog post series. I'm not sure if there will be more parts, but I will leave it open for now and post more if I get a good idea for the content.
I hope that you have found useful information in these posts.

Sunday, September 25, 2016

Taking first steps to DevOps world - part 2

This is the second part of my multi-part blog post series about suggested first steps (from my point of view) into the DevOps world.

Link to part 1

Environments defined by code

The purpose of scripting is to automate routine tasks and free up time for more important things. Another benefit of scripting is that a script will always perform the steps in the same way.


The issue with traditional scripts is that dependency and error handling, and especially version control, are complex because it is impossible to say which scripts have been applied to which environments.
Fortunately there are new tools like PowerShell DSC (Desired State Configuration) which can be used to define the target environment/configuration and which automatically make sure that the correct scripts are run in the correct order.

The third lesson learned: use code to define the target environment/configuration instead of traditional scripts whenever possible.

Virtual servers

Here is a very simple example of how to define the configuration of all 12 needed virtual servers using a small piece of code. This example is done in a Hyper-V environment (using the xHyper-V module), but a similar module is also available for VMware here.
configuration TestAppVMsToHyperV
{
    param
    (
        [string[]]$NodeName = 'localhost',

        [Parameter(Mandatory)]
        [string]$VMName,
        
        [Parameter(Mandatory)]
        [Uint64]$StartupMemory,

        [Parameter(Mandatory)]
        [Uint64]$MinimumMemory,

        [Parameter(Mandatory)]
        [Uint64]$MaximumMemory,

        [Parameter(Mandatory)]
        [String]$SwitchName,

        [Parameter(Mandatory)]
        [String]$Path,

        [Parameter(Mandatory)]
        [Uint32]$ProcessorCount,

        [ValidateSet('Off','Paused','Running')]
        [String]$State = 'Off',

        [Switch]$WaitForIP
    )

    Import-DscResource -module xHyper-V

    Node $NodeName
    {  
        xVhd DiffVhd
        {
            Ensure          = 'Present'
            Name            = $($VMName + ".vhd")
            Path            = $Path
            ParentPath      = "D:\Templates\NanoIIS_v1\NanoIIStemplate.vhd"
            Generation      = "Vhd"
        }
  
        xVMHyperV NewVM
        {
            Ensure          = 'Present'
            Name            = $VMName
            VhdPath         = $($Path + "\" + $VMName + ".vhd")
            SwitchName      = $SwitchName
            State           = $State
            Path            = $Path
            Generation      = 1
            StartupMemory   = $StartupMemory
            MinimumMemory   = $MinimumMemory
            MaximumMemory   = $MaximumMemory
            ProcessorCount  = $ProcessorCount
            MACAddress      = ""
            Notes           = ""
            SecureBoot      = $False
            EnableGuestService = $True
            RestartIfNeeded = $true
            WaitForIP       = $WaitForIP 
            DependsOn       = '[xVhd]DiffVhd'
        }
    }
}

########## Dev environment servers ########## 
# StartupMemory/ MinimumMemory = 2 GB
# MaximumMemory = 4GB
TestAppVMsToHyperV -NodeName "localhost" -VMName "DevFE01" -SwitchName "DevFE" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2
TestAppVMsToHyperV -NodeName "localhost" -VMName "DevFE02" -SwitchName "DevFE" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2

TestAppVMsToHyperV -NodeName "localhost" -VMName "DevBL01" -SwitchName "DevBL" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2
TestAppVMsToHyperV -NodeName "localhost" -VMName "DevBL02" -SwitchName "DevBL" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2


########## QA environment servers ##########
TestAppVMsToHyperV -NodeName "localhost" -VMName "QAFE01" -SwitchName "QAFE" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2
TestAppVMsToHyperV -NodeName "localhost" -VMName "QAFE02" -SwitchName "QAFE" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2

TestAppVMsToHyperV -NodeName "localhost" -VMName "QABL01" -SwitchName "QABL" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2
TestAppVMsToHyperV -NodeName "localhost" -VMName "QABL02" -SwitchName "QABL" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2

########## Production environment servers ##########
TestAppVMsToHyperV -NodeName "localhost" -VMName "ProdFE01" -SwitchName "ProdFE" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2
TestAppVMsToHyperV -NodeName "localhost" -VMName "ProdFE02" -SwitchName "ProdFE" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2

TestAppVMsToHyperV -NodeName "localhost" -VMName "ProdBL01" -SwitchName "ProdBL" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2
TestAppVMsToHyperV -NodeName "localhost" -VMName "ProdBL02" -SwitchName "ProdBL" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2

That code creates these servers based on my Windows Server 2016 Nano template, which I created using this command:
New-NanoServerImage -Edition Standard -DeploymentType Guest -MediaPath E:\ -BasePath .\Base -TargetPath .\Nano\NanoIIStemplate.vhd -ComputerName NanoTemp -Packages Microsoft-NanoServer-DSC-Package,Microsoft-NanoServer-IIS-Package
You can find more information about how to create Nano Server templates here.


I have been working with automation for years, and the biggest issue I have seen is that there is always someone who goes and does some unexpected configuration/upgrade/etc. manually, and that breaks the automation.

The fourth lesson learned: use Core or Nano servers instead of the full GUI version of servers in your code-defined environments. That will keep people who do not know what they are doing out of your servers ;)

When you have that environment-defining code ready, you need to apply it to the virtualization platform. Here is a simple example of how to apply it locally on a Hyper-V server:
.\TestAppEnvironmentsHyperV.ps1
Start-DscConfiguration -Path .\TestAppVMsToHyperV
That can of course also be done remotely, and in production you probably want to use a DSC pull server instead of pushing these configurations directly to the servers.
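For a remote push the MOF file name must match the target node, so compile the configuration with -NodeName set to the host name. A minimal sketch (HV01 is a hypothetical Hyper-V host):

# Compile a MOF named HV01.mof into the .\TestAppVMsToHyperV folder
TestAppVMsToHyperV -NodeName "HV01" -VMName "DevFE01" -SwitchName "DevFE" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2
# Push it to the host over WinRM and follow the progress
Start-DscConfiguration -Path .\TestAppVMsToHyperV -ComputerName HV01 -Credential (Get-Credential) -Wait -Verbose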


This was the second part of this blog post series. I will try to post the next one in the near future. Thanks for reading :)

Friday, September 23, 2016

Taking first steps to DevOps world - part 1

I have been saying for 1-2 years already that the DevOps world is the direction our company should go. Now (finally) management has also started to talk about evaluating DevOps principles and how to include them in our daily operations.

That is why it is a good time to start a multi-part blog post series (I'm not sure yet how many parts there will be) where I will describe my opinions on what the correct steps are on this "road".

In my working career I spent the first eight years with infrastructure, and for the last two years I have worked more closely with application development. With that background I think I have quite a good view from both the "Dev" and "Ops" points of view.

I will also pick the most critical "lessons learned" from my text and summarize them in the last post for those who are too lazy/not interested to read the whole story.

Example application

The best place to start following DevOps principles is of course when you are creating a totally new application or rebuilding an existing one from scratch.

I will present my example application from that point of view.

Needed environment

When we think about a modern, scalable cloud application which is as simple as possible, it would contain the following server roles:
  • Frontend
  • Business logic
  • Backend

And a logical picture of the needed infrastructure would look something like this:

And as we all know, the minimum set of environments for that kind of application is:
  • Development environment
  • Quality assurance environment
  • Production environment

Using the traditional approach it would be a very big task for the Ops team to get all these environments up and running, so here is the first place where we can benefit from DevOps principles.

The first lesson learned: create firewall rules and load balancer configs manually (because you only need to create them once) and automate server installations (because you need multiple servers with the same settings, you need to be able to scale your environment up/down, and it makes upgrades much easier if you can always install new servers with the latest binaries instead of upgrading old ones).

Handling configurations

Everyone who has even tried to configure an environment like this manually knows that the needed work effort is huge, because each server role can contain multiple application components and each of them can have multiple config files.

Our packaging team has spent a lot of time developing installers which can handle all these configurations, and to be honest they have done very good and important work with them. That was especially important earlier, when all applications were single-tenant and most installations were in customers' in-house environments.

Now that our company's focus is on the cloud business and all new/rebuilt applications are multi-tenant, I see that we can (and actually even need to) take "shortcuts" here to be able to create and update environments in a cost-effective way.


Fortunately Visual Studio contains a very nice feature which can be used to generate Web.config files for different environments automatically.

Here are example pictures of it in use:


There is also a nice extension which allows you to do the same thing for all other *.config files too.
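To give an idea of what a transform file contains, here is a minimal sketch of a Web.Release.config (the connection string name and server are made-up examples):

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replace the connection string when building the Release configuration -->
    <add name="AppDb" connectionString="Server=prodsql;Database=App;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
  <system.web>
    <!-- Remove debug compilation outside of development -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>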


I can already see that someone will ask "What if we still need to provide applications for in-house customers?", so I will answer it right away. The nice thing about this feature is that you can still have one configuration which creates the *.config files for your packaging process, and those packages can be used for in-house installations.

The purpose of this configuration is to define the cloud environments in code, which allows you to do application deployments/upgrades without caring about configs (an important step on your way to continuous deployment). Another, maybe even bigger, benefit of this configuration is that the dev, QA, and production configurations are all documented in code, under version control, and you can be sure that the settings are 100% the same across all these environments.

The second lesson learned: use the Visual Studio *.config file transform feature to generate and document the *.config files for different environments.


This was the first part of this blog post series. I will try to post the next one in the near future. Thanks for reading :)



Thursday, September 15, 2016

Troubleshooting performance issues caused by performance monitoring tool

Background

It has been a while since my last post, because I'm now working mostly with applications and not so much with infrastructure anymore. Anyway, now I have a nice real-world troubleshooting story which I would like to share with you.


Everyone who has even tried to troubleshoot application performance issues knows that it is very time consuming. That is why we decided to acquire an Application Performance Management/Monitoring tool to help with it.

After comparing our requirements to the tools available, we decided to start a proof of concept project using Dynatrace APM, provided to us by their local partner company here in Finland, so we could be sure that the log data does not leave the country (which was a business requirement).

Findings in PoC

Installation package issues

After testing the Dynatrace Agent on a couple of servers, I noticed what look like these two bugs in the installation MSI package:

  • On some servers the IIS agent installation fails, but I did not find any clear reason for it.
  • On some servers the MSI package fails to store the IIS module registration in the IIS configuration.
    • In this case the MSI log even says that the installation completed successfully.
Another issue was that quite a few manual configuration tasks are needed after installation.

Because we are planning to deploy Dynatrace Agents to multiple servers and enable/disable them where needed, I decided to create PowerShell scripts which can be used to install the Dynatrace agents and enable/disable them when needed, and to include workarounds for these issues in the scripts (the scripts are, by the way, available here).

Performance issues caused by performance monitoring

The first test results from the production environment were very shocking. The reports showed us that there was huge slowness in one of the environments.

When I investigated the reasons for this issue, I found that the slowness mostly happened in the evening/night time, when there were just a couple of users on the system.


Then one evening, while troubleshooting this issue, I noticed that the biggest delays occurred immediately after an iisreset, and the slowness was much worse than I had seen earlier in any other environment, even when I was the only user on the system.

Because the Dynatrace Agent was the only difference between this environment and the others which worked fine, I decided to try removing it from this environment too, and surprise surprise: the first load time after an iisreset decreased to one tenth of what it was before.


The situation was very interesting: the tool which was supposed to help us improve application performance actually caused performance problems.

When we investigated this with Dynatrace support, we got the information that it is actually normal that instrumenting .NET profilers slows down application start a little bit, even after we did some tuning to the configurations.

As a solution we decided to configure IIS to keep the application pools always running and to preload them immediately after an iisreset, based on this guide: http://weblog.west-wind.com/posts/2013/Oct/02/Use-IIS-Application-Initialization-for-keeping-ASPNET-Apps-alive

That looked like it fixed the issue, so I also included that feature in my enable.ps1 script.
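The relevant part of that configuration, as a minimal PowerShell sketch (the pool and application names are examples, not the real ones from our environment):

# Install the IIS Application Initialization feature
Install-WindowsFeature Web-AppInit
Import-Module WebAdministration
# Keep the application pool running even when it is idle
Set-ItemProperty 'IIS:\AppPools\ExampleAppPool' -Name startMode -Value AlwaysRunning
# Preload the application immediately after the pool starts
Set-ItemProperty 'IIS:\Sites\Default Web Site\ExampleApp' -Name preloadEnabled -Value $true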

Difficulties moving to production mode

After doing a lot of testing in the test environment, we assumed that we were finally ready to go to production with this tool, and we decided to enable the monitoring agents again in a couple of production environments.

Assuming was wrong once again. This time the issue was that most of the application pools did not start at all after enabling the Dynatrace agent. The logs showed that the IIS agents started fine without any errors, but the .NET agents did not even try to start.

Hidden IIS feature

This issue seemed tricky, and it took a long time for me, my colleagues, and the Dynatrace partner company to investigate. In the end we noticed that when we removed all the configurations done by the script and did all the configuration steps manually, it suddenly started working. That happened in two different environments, but not in a third one, even though we did all the steps the same way as earlier.


After comparing working and non-working installations, I found that there is actually one hidden feature in the IIS Management Console.
When you register a module using the IIS console, it checks whether the binary is a 32- or 64-bit version and automatically adds a precondition so that the module will only be loaded by application pools running with the same bitness. The PowerShell cmdlet "New-WebGlobalModule" does not do that automatically, so if you do not give the -Precondition parameter (which is not mandatory), IIS will try to load the module into both 32- and 64-bit applications. And you actually need to give the same precondition again when you run the "Enable-WebGlobalModule" cmdlet.
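As a minimal sketch, this is how a 64-bit module can be registered with the precondition included (the module name and path are examples):

Import-Module WebAdministration
# Register a 64-bit native module and tell IIS to load it only in 64-bit pools
New-WebGlobalModule -Name 'ExampleAgent' -Image 'C:\Program Files\ExampleAgent\agent64.dll' -Precondition 'bitness64'
# The same precondition must be given again when the module is enabled
Enable-WebGlobalModule -Name 'ExampleAgent' -Precondition 'bitness64'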

Here you can see that the precondition is not visible in the IIS console:

But here you can see that the precondition is still in the IIS config file:

Or it can be missing, like in our case; you cannot see that from the IIS console, and it caused the application pool startup issue.

This is now fixed in my scripts too.

Whose fault was it?

When you spend a lot of time troubleshooting and finally get the problem fixed, it is also interesting to spend some time thinking about whose fault it was after all.


If we think about all the issues I have explained here, then we can probably say that:

  • It is Dynatrace's fault that their MSI package does not fail if installing the IIS modules fails.
    • Or maybe it is Microsoft's fault because their appcmd.exe tool is used inside the MSI?
    • Or maybe it is Microsoft's fault because it is their AppFabric process which locks the IIS config when this happens?
  • It is Microsoft's fault that IIS module registration works differently in the IIS console and in PowerShell.
    • Or maybe it is Dynatrace's fault that their documentation does not say we should take care of this, even though they are using the "/precondition" parameter inside the MSI package?
    • Or maybe it was my fault that I did not read all the Microsoft documentation on how IIS module registration should be done using PowerShell?


Anyway, it was Olli's problem, and in the end I was able to fix it :)

Thursday, December 4, 2014

Automatic VMware VLAN connectivity testing

Story behind this post

Recently we built a new datacenter network because the old one wasn't modern enough anymore.
The new network structure contains a lot more VLANs than the earlier one, because servers are now in their own VLANs based on who owns them and what their role is.

After our network provider had created the new VLANs and configured them on the VMware hosts, I noticed that we should somehow test that all of them are configured correctly and that at least the connection to the gateway works from each VLAN on each VMware host.

Automatic testing

As always, there are many ways to do automation, so I went with the first idea I got.

I created a PowerCLI script which does the following things with Windows Server 2012 R2 (it probably works with Windows Server 2012 too):

  • Migrates the virtual machine to each host in the VMware cluster, one by one.
  • Moves the virtual machine to each VLAN configured in VLANs.csv, one by one.
  • Sets the IP address listed in VLANs.csv on the virtual machine.
  • Uses ICMP (ping) to test gateway connectivity.
  • Writes the results to a log file (updated after every test) and to a report file (generated after all the tests).

NOTE! You must disable UAC on the virtual machine; otherwise the script can't change the IP address on the VM.
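One way to disable it inside the test VM is via the registry (a sketch, not part of the script itself; a reboot is required):

# Turn off the EnableLUA policy to disable UAC (takes effect after reboot)
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System' -Name EnableLUA -Value 0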

Configuring script

Because I didn't want to re-test all the old networks, I needed to generate a list of the VLANs I wanted to test.
This can easily be done by exporting a list of all VLANs in VMware using the following command and then removing the VLANs you don't want to test from it.

Get-VirtualPortGroup | Select-Object Name, VLanID | Export-Csv .\VLANs.csv -NoTypeInformation

Because the script needs to configure the IP address on the virtual machine and ping the gateway, I also added a "Subnet" column to the CSV file containing the subnet prefix without the last octet.

Example CSV:
"Name","VLanId","Subnet"
"Frontend","100","192.168.100."
"Backend","200","192.168.200."

The script itself

The script is below. I hope you find it useful too.
$vCenter = "vcenter.domain.local"
$ClusterName = "VM cluster"
$TestVMname = "VLAN-Tester"
$VLANsList = Import-Csv ".\VLANs.csv"
$GatewayIP = "1"
$TestVMIP = "253"
$Netmask = "255.255.255.0"
$vCenterCred = Get-Credential -Message "Give vCenter account"
$HostCred = Get-Credential -Message "Give shell account to VMware hosts"
$GuestCred = Get-Credential -Message "Give guest vm credentials"
$LogFile = ".\VLAN_test.log"
$ReportFile = ".\VLAN_test_report.csv"

### 
Connect-VIServer -Server $vCenter -Credential $vCenterCred

$Cluster = Get-Cluster -Name $ClusterName
$vmHosts = $Cluster | Get-VMHost
$TestVM = Get-VM -Name $TestVMname

ForEach ($vmHost in $vmHosts) {
 # Migrate VM to vmHost
 $TestVM | Move-VM -Destination $vmHost
 
 # Find networks which are available for testing on current host
 $vmHostVirtualPortGroups = $vmHost | Get-VirtualPortGroup
 ForEach ($VLAN in $vmHostVirtualPortGroups) {
  ForEach ($VLANtoTest in $VLANsList) {
   If ($VLANtoTest.Name -eq $VLAN.Name) {
    $NetworkAdapters = $TestVM | Get-NetworkAdapter
    Set-NetworkAdapter -NetworkAdapter $NetworkAdapters[0] -Connected:$true -NetworkName $VLAN.Name -Confirm:$False
    
    # Set IP address to guest VM
    $IP = $VLANtoTest.Subnet + $TestVMIP
    $GW =  $VLANtoTest.Subnet + $GatewayIP
    $netsh = "c:\windows\system32\netsh.exe interface ip set address Ethernet static $IP $Netmask 0.0.0.0 1"
    Invoke-VMScript -VM $TestVM -HostCredential $HostCred -GuestCredential $GuestCred -ScriptType bat -ScriptText $netsh
    
    # Wait little bit and try ping to gateway
    Start-Sleep -Seconds 5
    $PingGWResult = Invoke-VMScript -VM $TestVM -HostCredential $HostCred -GuestCredential $GuestCred -ScriptType PowerShell -ScriptText "Test-NetConnection $GW"
    $ParsedPingGWResult = $PingGWResult.ScriptOutput | Select-String True -Quiet
    If ($ParsedPingGWResult -ne $True) { 
     Start-Sleep -Seconds 30
     $PingGWResult = Invoke-VMScript -VM $TestVM -HostCredential $HostCred -GuestCredential $GuestCred -ScriptType PowerShell -ScriptText "Test-NetConnection $GW"
     $ParsedPingGWResult = $PingGWResult.ScriptOutput | Select-String True -Quiet
    }
    
    # Generate report line
    $ReportLine = New-Object -TypeName PSObject -Property @{
     "VMhost" = $vmHost.Name
     "Network" = $VLAN.Name
     "GatewayConnection" = $ParsedPingGWResult
    }
    
    $ReportLine.VMhost+"`t"+$ReportLine.Network+"`t"+$ReportLine.GatewayConnection | Out-File $LogFile -Append
    [array]$Report += $ReportLine
    Remove-Variable ParsedPingGWResult
   }
  }
 }
}
$Report | Export-Csv $ReportFile -NoTypeInformation

Wednesday, November 19, 2014

Chinese mini PC review

Story behind this post

These days all electronics are made in China.
I didn't see any good reason to pay middlemen anything, so I decided to order my next PC directly from China.

My employer offers me a laptop, so I didn't need another one, and I didn't want a full ATX tower in my living room anymore, so I decided to order a mini PC.

This is a review of that device.

Mini PC

Selection

After some research I decided to order this device.
My version has the i5-4200U CPU, so it was a little more expensive than the one at that link.

The other parts are:

  • G.Skill DDR3 1600 MHz SO-DIMM 4GB x 2
  • Samsung 850 Pro 120GB

Installation

Hardware installation

Hardware installation is very simple.
Just put the memory modules and the HDD/SSD inside the device.

The package contains cables for one hard disk, but there is enough space and free cable slots for another hard disk too.

Operating System installation

The reseller says on their page that this device is tested with Windows 7, but it works fine with Windows 8.1 too, and I actually tested that the Windows 10 preview can also be installed on it.

An important note at this point: the device is sold without a Windows OEM license, so if you want to run Windows on it you need to buy a retail license. I myself am using it with an MSDN-licensed version of Windows 8.1 because this is my test machine.


When it comes to operating system installation, you need to create USB media which supports UEFI. I used Rufus for that.

Installation from USB can be started by following these steps:

  • Connect the USB stick to a USB port.
  • Press the power button.
  • Press ESC during system start; the UEFI BIOS will load.
  • Move to the "Save & Exit" screen.
  • Select "Launch EFI Shell from filesystem device"; the operating system installation will load if your USB media is valid.

About drivers

Windows finds drivers for all the devices by default, but they are not the best ones for this device. There also isn't any device manufacturer's page where you could download the correct ones, so you need to find them one by one.

Here are the most important ones:

WLAN works with the driver that comes with Windows, but from time to time it loses the connection.

The solution to this problem was to manually "update" the driver to the Broadcom 802.11 Multiband Network Adapter driver, even though Windows thinks it is not the right one for this device.




Windows tuning

I also found from the Windows event log that hibernation, and fast boot after it, didn't work correctly.



I don't know the reason for that, but because a normal boot from power button press to Windows up and running with the user logged in (using automatic login) only takes about 10 seconds, I just disabled hibernation completely using this command:
powercfg.exe /hibernate off

The device currently still gives a warning like this on every boot for every CPU core. I haven't found any reason for it, but the device works without any problems, so I assume it is a false alarm.


Performance tests

Windows 8.1 doesn't contain the graphical version of the Experience Index tool anymore, but you can still get it using this tool.

The result of that test:

The highest rating comes from the SSD, so here is also more detailed information about it:


Because the device is totally passively cooled, I was also interested to see what happens when it runs under full load for a longer time. In this picture it has been under full load for over an hour:

No problems were found in that test.

Review summary

Because these devices come without an operating system license and there are no official, tested drivers available, they are not the best choice for end users.

The possible use cases, as I see them, are:

  • Companies which have a volume license contract with Microsoft can use these as workstations. The price/performance ratio is very good for that use case because they don't need to buy double licenses (OEM + volume).
  • Universities and companies using Linux workstations can use these devices without needing to buy a Windows license.
  • Companies which need PCs on industrial floors would be interested, because the device doesn't contain any moving parts, so dust won't easily break it, and it is very cheap to keep a couple of spares in storage. The device is also very small and comes with a wireless connection, so it would work fine, for example, on storage trucks.
  • IT nerds like me can use these as test machines or, for example, as an HTPC.

Anyway, if you know what you are looking for, I can recommend trying one of these. If you are not sure about all the device details, you can ask the reseller. They reply very fast.

Sunday, October 19, 2014

How to build ADFS (SAML 2.0) to KCD "proxy" using Citrix NetScaler - Part 2

This is the second part of my 'How to build ADFS (SAML 2.0) to KCD "proxy" using Citrix NetScaler' guide.
You can find the first part of the guide here: How to build ADFS (SAML 2.0) to KCD "proxy" using Citrix NetScaler - Part 1

In the first part of this guide I said that we would join the NetScaler to the domain. Well, I noticed later that this is not needed in this configuration, because no client connects to the NetScaler using Kerberos authentication.

Anyway, here is the rest of the configuration needed to get this working.

Enable Kerberos authentication on the IIS site


I don't want to copy that whole checklist here, but in short you need to:
  • Disable "Anonymous Authentication".
  • Enable "Windows Authentication" on the IIS web site.
  • Disable kernel mode authentication.
  • Add the "Negotiate:Kerberos" authentication provider and remove all the others.
My IIS settings look like this.

Test that Kerberos authentication works

At this point it is a good idea to test that you really can connect to the application and that authentication works, from some machine which is in the same domain as the backend server.


Because we are using Kerberos authentication, you will notice that you must connect to the application using a name which has a registered SPN (Service Principal Name) in Active Directory.

That means that with the default settings these URLs work:
  • http://iis.contoso.local
  • http://iis

And these do not work:
  • http://192.168.100.21
  • http://iis.contoso.com
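You can check which SPNs an account has registered with the setspn tool; for example, for the backend server's computer account:

# List the SPNs registered for the computer account "iis"
setspn -L iis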

I used ASP.NET code like this on IIS to show me which account was actually authenticated (the IIS ASP.NET feature is needed):
<asp:LoginName id="LoginName1" runat="server" FormatString ="Welcome, {0}" />

An important note here is that when you connect to this page through the NetScaler, it will always use the server's FQDN to connect to it. That means that even though the public URL in this example is iis.contoso.com, you don't need to register an SPN for it.

Custom monitor

After you have forced IIS to use Kerberos authentication, you will notice that the service on the NetScaler goes down (at least if you are using the HTTP monitor).

The reason for this is that the NetScaler can no longer get the right response from the IIS server.
To solve this issue I created a "HealthCheck" folder on the IIS side and enabled anonymous authentication for it.

Then I created a custom monitor like this and linked it to the IIS service:
add lb monitor http_HealthCheck HTTP -respCode 200 -httpRequest "HEAD /HealthCheck/" -LRTM DISABLED
unbind service svc_IIS -monitorName http
bind service svc_IIS -monitorName http_HealthCheck


Allow Kerberos delegation in AD

  • Created a domain account svc_ns_kcd.
  • Created a new SPN for that account using the following command:
  • setspn -S host/nsidp.contoso.com svc_ns_kcd
    • This is only needed to enable the "Delegation" tab for that service account, but of course it needs to be unique in the domain. I used the name nsidp.contoso.com, which was used as the SAML provider.
  • Added the following delegation:


Traffic policy

When we have everything else in place, we just need to create a KCD account on the NetScaler and assign it to the service using a traffic policy.
add aaa kcdAccount svc_ns_kcd -realmStr CONTOSO.LOCAL -delegatedUser svc_ns_kcd -kcdPassword Qwerty7

add tm trafficAction trafficKCDSSO -SSO ON -kcdAccount svc_ns_kcd
add tm trafficPolicy trafficKCDSSO TRUE trafficKCDSSO
bind lb vserver vsrv_IIS -policy trafficKCDSSO -priority 100

After you add that configuration, you should be able to connect to https://iis.contoso.com using SAML federation, and you should be authenticated to the application using Kerberos.

Session handling trick

After some testing I noticed that the NetScaler got a Kerberos ticket only for the first user, and after that it authenticated the second user to the application using the first user's credentials. Because I didn't find a solution to that problem myself, I created a support request to Citrix. They found from the log files that the problem in this configuration was that the NetScaler always connects to the backend application using the same source port; that is why IIS didn't request user authentication (with a 401 response) and the second user was delegated to the application using the first user's session.

The solution to this problem is to set maxClient = 1 on the service. With that configuration the NetScaler always uses a different source port when it connects to the application. Then IIS always responds with 401 to the first request, and the NetScaler gets a Kerberos ticket for the user.
We can enable this setting with the following commands:
rm service svc_IIS

add service svc_IIS IIS HTTP 80 -maxClient 1

bind lb vserver vsrv_IIS svc_IIS
bind service svc_IIS -monitorName http_HealthCheck

Add another page behind this proxy

After you have used a lot of time to get this working, a relevant question is: "What is needed to add another vserver behind this system?"
First you need to have:
  • An AssertionConsumerService value specified in the SAML metadata.
    • In the example in the first part of this guide I already included iis2.contoso.com there.
  • Kerberos delegation to the new IIS server added for the svc_ns_kcd account.
Then you just create a new vserver with the needed configuration like this:
add server DC 192.168.110.11
add service svc_DC DC HTTP 80 -maxClient 1
add lb vserver vsrv_DC SSL 192.168.110.22 443 -persistenceType NONE
bind lb vserver vsrv_DC svc_DC
bind ssl vserver vsrv_DC -certkeyName wildcard
set ssl vserver vsrv_DC -tls11 DISABLED -tls12 DISABLED
bind service svc_DC -monitorName http_HealthCheck

set lb vserver vsrv_DC -AuthenticationHost iis2.contoso.com -Authentication ON -authnVsName auth_vsrv
bind lb vserver vsrv_DC -policy trafficKCDSSO -priority 100

NOTE! In this configuration both of these web pages (iis.contoso.com and iis2.contoso.com) are authenticated using the auth.contoso.com authentication vserver, and they use the same relying party rule on the ADFS side. If you are a service provider which wants to publish different web pages to different customers, then you need one authentication vserver per customer; that configuration allows you to use different relying parties for different web servers.

The final words

It has been a very nice experience learning how Kerberos authentication and SAML federation work at a deep level, and I really like that the NetScaler allows us to do this. I hope this guide will be useful for people who are looking for the same kind of solution. My trip here is now over and it is time to move on to the next technologies.