Tuesday, December 12, 2017

Hide SQL Server databases from users who don't have rights to them

Background

As part of our DevOps implementation we are now delegating rights to DevOps teams by following the least privilege principle.

That is why I also started to look at how to delegate SQL permissions so that users can only see the databases they have access to.

Default settings on SQL Server

By default the public role has a permission called "VIEW ANY DATABASE". It can be removed using this SQL query:
REVOKE VIEW ANY DATABASE TO PUBLIC

The problem with that solution is that users then cannot see even the databases they do have access to, unless the user is made the database owner (dbo).
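If you want to try this yourself, here is a minimal sketch using PowerShell (it assumes the SqlServer module is installed and that a SQL login named testuser exists; the instance name and password are placeholders):

# Remove VIEW ANY DATABASE from the public role (run with sysadmin rights)
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'master' -Query 'REVOKE VIEW ANY DATABASE TO PUBLIC'

# Connect as the restricted login and list the visible databases;
# only master, tempdb and databases owned by that login will show up
Invoke-Sqlcmd -ServerInstance 'localhost' -Username 'testuser' -Password 'P@ssw0rd!' -Query 'SELECT name FROM sys.databases'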

Here is the list of my test databases when I look at them as sa:


And here is what the test user will see. The user has the db_datareader role on ShouldBeSeenByTestUser1 and the db_owner role on the ShouldBeSeenByTestUser2 database.

Note that if the user knows the name of a database he has access to, he can still connect to it.

Maybe Microsoft will fix this issue?

This issue has been reported to Microsoft and they have actually "promised" to fix it in a future version of SQL Server. There is just one small BUT: they made that promise back in 2008 and still have not implemented anything for it...

So let Olli make it work

Because there is no official solution offered by Microsoft, it is time to see if Olli can find some "good enough" workaround for it.

How does SQL Server Management Studio look for databases?

To figure out a solution/workaround for this issue, I first needed to reverse engineer the logic that SQL Server Management Studio uses to get the list of databases.

I did that by running SQL Profiler at the same time as I connected to the instance with SQL Server Management Studio.

A simplified version of the query looks like this:
SELECT * FROM master.sys.databases

Using this query I was able to see the content of the sys.databases view:
select object_definition(object_id('[sys].[databases]')) AS [processing-instruction(x)] FOR XML PATH('')

A simplified version of it looks like this:
SELECT * FROM sys.sysdbreg WHERE has_access('DB', id) = 1 

It gets the list of databases from the system table sys.sysdbreg and uses the undocumented has_access function to filter out databases.

It is possible to run queries against the sys.sysdbreg table using the dedicated admin connection (DAC), but the has_access function is not accessible even from there.
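For the curious, querying that internal table over the DAC looks something like this (just a sketch; it requires sysadmin rights, and the ADMIN: prefix is what opens the dedicated admin connection):

# sqlcmd with the ADMIN: prefix opens the dedicated admin connection
sqlcmd -S "ADMIN:localhost" -E -Q "SELECT * FROM sys.sysdbreg"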

Creating a stored procedure which lists only the databases where the user has access

There are actually a couple of ways to get the list of databases where a user has access, but this is the simplest one I found.

First we create a stored procedure which lists all databases. It contains the clause WITH EXECUTE AS OWNER, so it will run with the rights of the master database owner, which in this example is sa.
CREATE PROCEDURE sp_all_dbs
WITH EXECUTE AS OWNER
AS
SELECT name FROM sys.databases
GO

Then we create another stored procedure which runs with the user's own rights and filters out the databases the user does not have access to:
CREATE PROCEDURE sp_my_dbs
AS
CREATE TABLE #databases (
 name sysname not null
)
INSERT INTO #databases
EXEC sp_all_dbs
SELECT name FROM #databases WHERE HAS_DBACCESS(name) = 1
GO

and the last step is to grant the public role permission to run that stored procedure:
GRANT EXECUTE ON OBJECT::sp_my_dbs TO PUBLIC

So now the user can use this stored procedure to see which databases he actually has access to:

and because we are using the HAS_DBACCESS function, we get the list of all databases where the user has at least the public role.
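You can also verify the result from PowerShell; a small sketch assuming the SqlServer module and the same test login as earlier:

# Run the procedure with the restricted login's own rights
Invoke-Sqlcmd -ServerInstance 'localhost' -Username 'testuser' -Password 'P@ssw0rd!' -Database 'master' -Query 'EXEC dbo.sp_my_dbs'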

But how to get that into SQL Server Management Studio?

Unfortunately I don't have access to the SQL Server Management Studio source code (because I don't work at Microsoft), so I cannot fix this issue there.


But there is a new open source tool called SQL Operations Studio which also uses the sys.databases view to get the list of databases, and because it is open source I was able to make a modified version of it.

You can see my code change and download the modified DLL from here. Just download version 0.23.6 of SQL Operations Studio and drop that MicrosoftSqlToolsServiceLayer.dll into the \resources\app\extensions\mssql\sqltoolsservice\Windows\1.2.0-alpha.37\ folder over the existing one.

And here is an example of how it looks. Notice that on the left side you can still see only the system databases (because these are queried the same way as in SSMS), but on the right side you can actually see the databases which were hidden without this modification.



Closing words

I will paste this blog post as a comment on the bug report I mentioned earlier and hope that some day Microsoft will include a fix in SQL Server. My opinion is that this should be fixed in the sys.databases view.

In the meanwhile my solution can be used as a workaround.



Wednesday, October 12, 2016

Taking first steps to DevOps world - part 3

This is the third part of my multi-part blog post series about suggested first steps (from my point of view) into the DevOps world.

Link to part 1
Link to part 2

IIS configuration defined by code

Here I have the code-defined IIS configuration for my example application.

I used the Core version of Windows Server 2012 R2 together with Windows Management Framework 5.0 and .NET Framework 4.6.2 to test this configuration, but it should also work on the Core version of Windows Server 2016. The Nano version of Windows Server 2016 does not yet contain all the features needed by this example application, so it was not possible to use it for this purpose.

Configuration WebApplication
{
    param
    (
        [String[]]$NodeName = 'localhost',

        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [String]$SourcePath,
  
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [String]$AppName
    )

    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Import-DscResource -ModuleName xWebAdministration

    Node $NodeName
    {
        WindowsFeature IIS
        {
            Ensure                         = 'Present'
            Name                           = 'Web-Server'
        }

        WindowsFeature AspNet45
        {
            Ensure                         = 'Present'
            Name                           = 'Web-Asp-Net45'
            DependsOn                      = '[WindowsFeature]IIS'
        }
  
        WindowsFeature NET-WCF-HTTP-Activation45
        {
            Ensure                         = 'Present'
            Name                           = 'NET-WCF-HTTP-Activation45'
            DependsOn                      = '[WindowsFeature]AspNet45'
        }
  
        File WebContent
        {
            Ensure                         = 'Present'
            SourcePath                     = $SourcePath
            DestinationPath                = "C:\inetpub\" + $AppName
            Recurse                        = $true
            Type                           = 'Directory'
            DependsOn                      = '[WindowsFeature]NET-WCF-HTTP-Activation45'
            Checksum                       = "modifiedDate"
            MatchSource                    = $true
        }
  
        xWebAppPool AppPool
        {
            Name                           = $AppName
            Ensure                         = 'Present'
            State                          = 'Started'
            autoStart                      = $true
            enable32BitAppOnWin64          = $false
            startMode                      = 'AlwaysRunning'
            DependsOn                      = "[File]WebContent"
        }

        xWebApplication WebApp
        {
            Website                        = "Default Web Site"
            Name                           = $AppName
            WebAppPool                     = $AppName
            PhysicalPath                   = "C:\inetpub\" + $AppName
            Ensure                         = "Present"
            PreloadEnabled                 = $true
            DependsOn                      = "[xWebAppPool]AppPool"
        }
 
        # Enable IIS remote management
        WindowsFeature Web-Mgmt-Service
        {
            Ensure                         = 'Present'
            Name                           = 'Web-Mgmt-Service'
            DependsOn                      = '[WindowsFeature]IIS'
        }
  
        Registry RemoteManagement {
            Key                            = 'HKLM:\SOFTWARE\Microsoft\WebManagement\Server'
            ValueName                      = 'EnableRemoteManagement'
            ValueType                      = 'Dword'
            ValueData                      = '1'
            DependsOn                      = @('[WindowsFeature]IIS','[WindowsFeature]Web-Mgmt-Service')
       }
 
       Service StartWMSVC {
            Name                           = 'WMSVC'
            StartupType                    = 'Automatic'
            State                          = 'Running'
            DependsOn                      = '[Registry]RemoteManagement'
       }
    }
}

WebApplication -NodeName "DevFE01","DevFE02" -SourcePath "\\server\WebAppContent" -AppName "Web"
WebApplication -NodeName "DevBL01","DevBL02" -SourcePath "\\server\DataAccessAppContent" -AppName "DataAccess"
This configuration will install the needed Windows roles, create the IIS application pools and applications, and copy the application binaries from the UNC share to these servers (the computer accounts need to have read access to that share).

When you have these servers installed and joined to the domain, you can apply this configuration to them using these commands:
.\WebApplication.ps1
Start-DscConfiguration -Path .\WebApplication
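
After the push you can also check with the standard DSC cmdlets that the nodes really ended up in the desired state; a quick sketch, assuming the same node names as above:

# Compare the current state of the nodes against the applied configuration
Test-DscConfiguration -ComputerName DevFE01,DevFE02,DevBL01,DevBL02 -Detailed
# Show when the configuration was last applied and whether it succeeded
Get-DscConfigurationStatus -CimSession DevFE01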

IIS remote management

Another interesting detail is that this configuration also enables IIS remote management, so you can use the IIS console from your management server to manage these applications on the Core server(s) like this:


Fifth lesson learned: Install and configure the IIS Management Service role on your IIS servers so you are able to manage them remotely.

Database(s) defined by code

Using Entity Framework it is possible to define the data model using the "code first" approach. In that model you define in source code what kind of data you will have, how the entities are linked together, etc., and Entity Framework can automatically create and update your database(s) based on that model.

In my example application there are two entity types, Companies and People, and based on these code files Entity Framework will create a database like this:


It is not possible to define all the database settings needed in production using Entity Framework, so I suggest that you create the empty database(s) with the correct initial size, auto-grow, etc. settings for your application(s) and let Entity Framework handle creating and updating the database structure.
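As a hedged example, pre-creating such an empty database could look something like this (the database name, file paths and sizes are just placeholders, and it assumes the SqlServer PowerShell module):

# Create the database with explicit initial size and auto-grow settings,
# then let Entity Framework create the tables inside it
$createDb = @"
CREATE DATABASE [ExampleApp]
ON PRIMARY (NAME = N'ExampleApp', FILENAME = N'D:\SQLData\ExampleApp.mdf', SIZE = 1GB, FILEGROWTH = 256MB)
LOG ON (NAME = N'ExampleApp_log', FILENAME = N'D:\SQLLogs\ExampleApp_log.ldf', SIZE = 256MB, FILEGROWTH = 128MB)
"@
Invoke-Sqlcmd -ServerInstance 'localhost' -Query $createDb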

Sixth lesson learned: Create SQL users and databases manually and use the Entity Framework code first approach to create and update your databases.

Screenshots from example application

Now we have the example application installed and working in our environment, so here are screenshots of how it looks:





This was the third part of this blog post series. I'm not sure if there will be more parts, but I will leave it open for now and post more if I get a good idea for content.
I hope that you found useful information in these posts.

Sunday, September 25, 2016

Taking first steps to DevOps world - part 2

This is the second part of my multi-part blog post series about suggested first steps (from my point of view) into the DevOps world.

Link to part 1

Environments defined by code

The purpose of scripting is to automate routine tasks and free up time for more important things. Another benefit of scripting is that scripts always perform the steps in the same way.


The issue with traditional scripts is that dependency handling, error handling and especially version control are complex, because it is impossible to say which scripts have been applied to which environments.
Fortunately there are new tools like PowerShell DSC (Desired State Configuration) which can be used to define the target environment/configuration and which automatically make sure that the correct steps are run in the correct order.

Third lesson learned: Use code to define the target environment/configuration instead of traditional scripts when possible.

Virtual servers

Here I have a very simple example of how to define the configuration of all 12 needed virtual servers using a very small piece of code. This example is done in a Hyper-V environment (using the xHyper-V module), but there is also a similar module available for VMware here.
configuration TestAppVMsToHyperV
{
    param
    (
        [string[]]$NodeName = 'localhost',

        [Parameter(Mandatory)]
        [string]$VMName,
        
        [Parameter(Mandatory)]
        [Uint64]$StartupMemory,

        [Parameter(Mandatory)]
        [Uint64]$MinimumMemory,

        [Parameter(Mandatory)]
        [Uint64]$MaximumMemory,

        [Parameter(Mandatory)]
        [String]$SwitchName,

        [Parameter(Mandatory)]
        [String]$Path,

        [Parameter(Mandatory)]
        [Uint32]$ProcessorCount,

        [ValidateSet('Off','Paused','Running')]
        [String]$State = 'Off',

        [Switch]$WaitForIP
    )

    Import-DscResource -module xHyper-V

    Node $NodeName
    {  
        xVhd DiffVhd
        {
            Ensure          = 'Present'
            Name            = $($VMName + ".vhd")
            Path            = $Path
            ParentPath      = "D:\Templates\NanoIIS_v1\NanoIIStemplate.vhd"
            Generation      = "Vhd"
        }
  
        xVMHyperV NewVM
        {
            Ensure          = 'Present'
            Name            = $VMName
            VhdPath         = $($Path + "\" + $VMName + ".vhd")
            SwitchName      = $SwitchName
            State           = $State
            Path            = $Path
            Generation      = 1
            StartupMemory   = $StartupMemory
            MinimumMemory   = $MinimumMemory
            MaximumMemory   = $MaximumMemory
            ProcessorCount  = $ProcessorCount
            MACAddress      = ""
            Notes           = ""
            SecureBoot      = $False
            EnableGuestService = $True
            RestartIfNeeded = $true
            WaitForIP       = $WaitForIP 
            DependsOn       = '[xVhd]DiffVhd'
        }
    }
}

########## Dev environment servers ########## 
# StartupMemory/ MinimumMemory = 2 GB
# MaximumMemory = 4GB
TestAppVMsToHyperV -NodeName "localhost" -VMName "DevFE01" -SwitchName "DevFE" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2
TestAppVMsToHyperV -NodeName "localhost" -VMName "DevFE02" -SwitchName "DevFE" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2

TestAppVMsToHyperV -NodeName "localhost" -VMName "DevBL01" -SwitchName "DevBL" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2
TestAppVMsToHyperV -NodeName "localhost" -VMName "DevBL02" -SwitchName "DevBL" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2


########## QA environment servers ##########
TestAppVMsToHyperV -NodeName "localhost" -VMName "QAFE01" -SwitchName "QAFE" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2
TestAppVMsToHyperV -NodeName "localhost" -VMName "QAFE02" -SwitchName "QAFE" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2

TestAppVMsToHyperV -NodeName "localhost" -VMName "QABL01" -SwitchName "QABL" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2
TestAppVMsToHyperV -NodeName "localhost" -VMName "QABL02" -SwitchName "QABL" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2

########## Production environment servers ##########
TestAppVMsToHyperV -NodeName "localhost" -VMName "ProdFE01" -SwitchName "ProdFE" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2
TestAppVMsToHyperV -NodeName "localhost" -VMName "ProdFE02" -SwitchName "ProdFE" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2

TestAppVMsToHyperV -NodeName "localhost" -VMName "ProdBL01" -SwitchName "ProdBL" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2
TestAppVMsToHyperV -NodeName "localhost" -VMName "ProdBL02" -SwitchName "ProdBL" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2

That code creates these servers based on my Windows Server 2016 Nano template, which I created using this command:
New-NanoServerImage -Edition Standard -DeploymentType Guest -MediaPath E:\ -BasePath .\Base -TargetPath .\Nano\NanoIIStemplate.vhd -ComputerName NanoTemp -Packages Microsoft-NanoServer-DSC-Package,Microsoft-NanoServer-IIS-Package
More information about how to create Nano Server templates can be found here.


I have been working with automation for years, and the biggest issue I have seen is that there is always someone who will go and do some unexpected configuration/upgrade/etc. manually, and that will break the automation.

Fourth lesson learned: Use Core or Nano servers instead of the full GUI version in your code-defined environments. That will keep people who do not know what they are doing out of your servers ;)

When you have that environment definition code ready, you need to apply it to the virtualization platform. Here is a simple example of how to apply it locally on a Hyper-V server:
.\TestAppEnvironmentsHyperV.ps1
Start-DscConfiguration -Path .\TestAppVMsToHyperV
That can of course also be done remotely, and in production you probably want to use a DSC pull server instead of pushing these configurations directly to the servers.
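For example, pushing the same configuration to a remote Hyper-V host over WinRM could look like this (the host name HV01 is just a placeholder, and the xHyper-V module must be installed on that host):

# Compile the MOF for the remote host instead of localhost...
TestAppVMsToHyperV -NodeName "HV01" -VMName "DevFE01" -SwitchName "DevFE" -State "Running" -Path "D:\ExampleAppVMs" -StartupMemory 2147483648 -MinimumMemory 2147483648 -MaximumMemory 4294967296 -ProcessorCount 2
# ...and push it to that host over WinRM
Start-DscConfiguration -Path .\TestAppVMsToHyperV -ComputerName HV01 -Wait -Verbose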


This was the second part of this blog post series. I will try to post the next one in the near future. Thanks for reading :)

Friday, September 23, 2016

Taking first steps to DevOps world - part 1

I have been saying for 1-2 years already that the DevOps world is the direction our company should go. Now (finally) management has also started to talk about evaluating DevOps principles and how to include them in our daily operations.

That is why it is a good time to start a multi-part blog post series (I'm not sure yet how many parts there will be) where I describe my opinions about the correct steps on this "road".

In my career I worked the first eight years with infrastructure and the last two years more closely with application development. With that background I feel I have a pretty good view of both the "Dev" and "Ops" sides.

I will also pick the most critical "lessons learned" from my text and summarize them in the last post for those who are too lazy/not interested to read the whole story.

Example application

The best place to start following DevOps principles is of course when you are creating a totally new application or rebuilding an existing one from scratch.

I will present my example application from that point of view.

Needed environment

When we think about a modern, scalable cloud application which is as simple as possible, it would contain the following server roles:
  • Frontend
  • Business logic
  • Backend

And a logical picture of the needed infrastructure would look something like this:

And as we all know, the minimum set of environments for that kind of application is:
  • Development environment
  • Quality assurance environment
  • Production environment

Using the traditional approach it would be a very big task for the Ops team to get all these environments up and running, so here is the first place where we can get benefits from DevOps principles.

First lesson learned: Create firewall rules and load balancer configs manually (because you only need to create them once) and automate server installations (because you need multiple servers with the same settings, you need to be able to scale your environment up/down, and it makes upgrades much easier if you can always install new servers with the latest binaries instead of upgrading old ones).

Handling configurations

Everyone who has even tried to configure an environment like this manually knows that the work effort needed is huge, because each server role can contain multiple application components and each of them can have multiple config files.

Our packaging team has used a lot of time to develop installers which can handle all these configurations and, to be honest, they have done very good and important work with it. That was especially important earlier, when all applications were single-tenant and most of the installations were in customers' in-house environments.

Now that our company focus is on the cloud business and all new/rebuilt applications are multi-tenant, I see that we can (and actually even need to) use "shortcuts" here to be able to create and update environments in a cost-effective way.


Fortunately Visual Studio contains this very nice feature which can be used to generate Web.config files for different environments automatically.

Here are example pictures of it configured for use:


There is also this nice extension which allows you to do the same thing for all other *.config files too.


I can already see that someone will ask the question "What if we still need to provide applications for in-house customers?" so I will answer it right away. The nice thing about this feature is that you can still have one configuration which creates the *.config files for your packaging process, and those packages can be used for in-house installations.

The purpose of this configuration is to define the cloud environments in code, which allows you to do application deployments/upgrades without caring about configs (an important step on your way to continuous deployment). Another and maybe even bigger benefit is that the dev, QA and production configurations are all documented in code, under version control, and you can be sure that the settings are 100% the same in all of these environments.

Second lesson learned: Use the Visual Studio *.config file transform feature to generate and document the *.config files for different environments.
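
As a rough sketch, the transformed configs can also be generated as part of an automated build by publishing with the wanted configuration (the project path and publish profile name are assumptions; msbuild.exe needs to be on the PATH):

# Publishing with a given configuration applies the matching Web.QA.config transform
& msbuild .\WebApp\WebApp.csproj /p:Configuration=QA /p:DeployOnBuild=true /p:PublishProfile=QA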


This was the first part of this blog post series. I will try to post the next one in the near future. Thanks for reading :)



Thursday, September 15, 2016

Troubleshooting performance issues caused by performance monitoring tool

Background

It has been a while since my last post, because I'm now working mostly with applications and not so much with infrastructure anymore. Anyway, now I have a nice real-world troubleshooting story which I would like to share with you.


Everyone who has even tried to troubleshoot application performance issues knows that it is very time consuming. That is why we decided to acquire an Application Performance Management/Monitoring (APM) tool to help with it.

After comparing our requirements against the available tools, we decided to start a proof of concept project using Dynatrace APM, provided to us by their local partner company here in Finland, so we could be sure that the log data does not leave the country (which was a business requirement).

Findings in PoC

Installation package issues

After testing the Dynatrace Agent on a couple of servers I noticed that there seem to be these two bugs in the installation MSI package:

  • On some servers the IIS agent installation fails, but I did not find any real reason for it.
  • On some servers the MSI package fails to store the IIS module registration in the IIS configuration.
    • In this case the MSI log even says that the installation was completed successfully.
Another issue was that there are also quite many manual configuration tasks needed after installation.

Because we are planning to deploy Dynatrace Agents to multiple servers and enable/disable them where needed, I decided to create PowerShell scripts which can be used to install the Dynatrace agents and enable/disable them when needed, and to include workarounds for these issues in those scripts (the scripts are, by the way, available here).

Performance issues caused by performance monitoring

The first test results from the production environment were very shocking. The reports showed us that there was huge slowness in one of the environments.

When I investigated the reasons for this issue, I found that the slowness mostly happened in the evening/night time when there were just a couple of users on the system.


Then one evening when I was troubleshooting this issue I noticed that the biggest delays happened immediately after an iisreset, and the slowness was much worse than I had seen earlier in any other environment, even when I was the only user on the system.

Because the Dynatrace Agent was the only difference between this and the other environments which worked fine, I decided to try removing it from this environment as well, and surprise surprise, the first load time after iisreset decreased to one tenth of what it was.


The situation was very interesting: the tool which was supposed to help us improve application performance actually caused performance problems.

When we investigated this with Dynatrace support, we got the information that it is actually normal that instrumenting .NET profilers slows down application start a little bit, even after we did some tuning to the configuration.

As a solution to this we decided to configure IIS to keep the application pools always running and preload them immediately after iisreset, based on this guide: http://weblog.west-wind.com/posts/2013/Oct/02/Use-IIS-Application-Initialization-for-keeping-ASPNET-Apps-alive

That looked like it fixed the issue, so I also included that feature in my enable.ps1 script.
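For reference, the same settings can also be applied from PowerShell; a minimal sketch using the WebAdministration module (the pool and site names are just examples, and the IIS Application Initialization feature must be installed for the preload to have an effect):

Import-Module WebAdministration
# Keep the application pool running all the time instead of starting it on the first request
Set-ItemProperty 'IIS:\AppPools\ExampleAppPool' -Name startMode -Value 'AlwaysRunning'
# Preload applications on the site so the first user does not pay the warm-up cost
Set-ItemProperty 'IIS:\Sites\Default Web Site' -Name applicationDefaults.preloadEnabled -Value $true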

Difficulties moving to production mode

After doing a lot of testing in the test environment, we assumed that we were finally ready to go to production with this tool, and we decided to enable the monitoring agents again in a couple of production environments.

The assumption was wrong once again. This time the issue was that most of the application pools did not start at all after enabling the Dynatrace agent. The logs showed that the IIS agents started fine without any errors, but the .NET agents did not even try to start.

Hidden IIS feature

This issue seemed to be tricky and it took a long time for me, my colleagues and the Dynatrace partner company to investigate. In the end we noticed that when we removed all the configurations done by the script and did all the configuration steps manually, it suddenly started working. That happened in two different environments but not in a third one, even though we did all the steps the same way as earlier.


After comparing the working and non-working installations, I found that there is actually one hidden feature in the IIS Management Console.
When you register a module using the IIS console, it checks whether the binary is a 32- or 64-bit version and automatically adds a precondition so that the module will only be loaded by application pools running with the same bitness. The PowerShell cmdlet "New-WebGlobalModule" does not do that automatically, so if you do not give the -Precondition parameter (which is not mandatory), IIS will try to load that module into both 32- and 64-bit applications. And you actually need to give the same precondition again when you run the "Enable-WebGlobalModule" cmdlet.
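In other words, when you register a 64-bit native module from PowerShell you should pass the precondition explicitly; here is a hedged example (the module name and path are placeholders, not the real Dynatrace ones):

Import-Module WebAdministration
# Register the module only for 64-bit worker processes, like the IIS console would do
New-WebGlobalModule -Name 'ExampleAgentModule' -Image 'C:\Program Files\Example\agent64.dll' -Precondition 'bitness64'
# The same precondition has to be repeated when the module is enabled
Enable-WebGlobalModule -Name 'ExampleAgentModule' -Precondition 'bitness64'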

Here you can see that the precondition is not visible in the IIS console:

But here you can see that the precondition is still in the IIS config file:

or it can be missing, like in our case, and you cannot see that from the IIS console; this is what caused the application pool start issue.

This is also now fixed in my scripts.

Whose fault was it?

When you spend a lot of time troubleshooting and finally get the problem fixed, it is also interesting to spend some time thinking about whose fault it was after all.


If we think about all the issues I have explained here, then we can probably say that:

  • It is Dynatrace's fault that their MSI package does not fail if installing the IIS modules fails.
    • Or maybe it is Microsoft's fault because their appcmd.exe tool is used inside the MSI?
    • Or maybe it is Microsoft's fault that it is their AppFabric process which locks the IIS config when this happens?
  • It is Microsoft's fault that IIS module registration works differently via the IIS console and via PowerShell.
    • Or maybe it is Dynatrace's fault that they don't mention in their documentation that we should care about this, even though they are still using the "/precondition" parameter inside the MSI package?
    • Or maybe it was my fault that I did not read all the Microsoft documentation about how IIS module registration should be done using PowerShell?


Anyway, it was Olli's problem, and in the end I was able to fix it :)

Thursday, December 4, 2014

Automatic VMware VLAN connectivity testing

Story behind this post

Recently we built a new datacenter network because the old one wasn't modern enough anymore.
The new network structure contains a lot more VLANs than the earlier one, because servers are now placed in their own VLANs based on who owns them and what their role is.

After our network provider got the new VLANs created and configured on the VMware hosts, I noticed that we should somehow test that all of them are configured correctly and that at least the connection to the gateway works from each VLAN on each VMware host.

Automatic testing

Like always, there are many ways to do automation, so I chose the first idea I got.

I created a PowerCLI script which does the following things with a Windows Server 2012 R2 test VM (it probably works with a 2012 server too):

  • Migrates the virtual machine to each host in the VMware cluster one by one.
  • Moves the virtual machine to each VLAN configured in VLANs.csv one by one.
  • Sets the IP address listed in VLANs.csv on the virtual machine.
  • Uses ICMP (ping) to test gateway connectivity.
  • Writes the results to a log file (which is updated after every test) and to a report file (which is generated after all the tests).

NOTE! You must disable UAC on the virtual machine. Otherwise the script can't change the IP address on the VM.

Configuring script

Because I didn't want to re-test all the old networks, I needed to generate a list of the VLANs I want to test.
This can easily be done by exporting the list of all VLANs in VMware using the following command and removing the VLANs you don't want to test from it.

Get-VirtualPortGroup | Select-Object Name, VLanID | Export-Csv .\VLANs.csv -NoTypeInformation

Because the script needs to configure an IP address on the virtual machine and ping the gateway, I also added a "Subnet" column to the CSV file which contains the subnet prefix without the last octet.

Example CSV:
"Name","VLanId","Subnet"
"Frontend","100","192.168.100."
"Backend","200","192.168.200."

The script itself

The script is below. I hope you find it useful too.
$vCenter = "vcenter.domain.local"
$ClusterName = "VM cluster"
$TestVMname = "VLAN-Tester"
$VLANsList = Import-Csv ".\VLANs.csv"
$GatewayIP = "1"
$TestVMIP = "253"
$Netmask = "255.255.255.0"
$vCenterCred = Get-Credential -Message "Give vCenter account"
$HostCred = Get-Credential -Message "Give shell account to VMware hosts"
$GuestCred = Get-Credential -Message "Give guest vm credentials"
$LogFile = ".\VLAN_test.log"
$ReportFile = ".\VLAN_test_report.csv"

### 
Connect-VIServer -Server $vCenter -Credential $vCenterCred

$Cluster = Get-Cluster -Name $ClusterName
$vmHosts = $Cluster | Get-VMHost
$TestVM = Get-VM -Name $TestVMname

ForEach ($vmHost in $vmHosts) {
 # Migrate VM to vmHost
 $TestVM | Move-VM -Destination $vmHost
 
 # Find networks which are available for testing on current host
 $vmHostVirtualPortGroups = $vmHost | Get-VirtualPortGroup
 ForEach ($VLAN in $vmHostVirtualPortGroups) {
  ForEach ($VLANtoTest in $VLANsList) {
   If ($VLANtoTest.Name -eq $VLAN.Name) {
    $NetworkAdapters = $TestVM | Get-NetworkAdapter
    Set-NetworkAdapter -NetworkAdapter $NetworkAdapters[0] -Connected:$true -NetworkName $VLAN.Name -Confirm:$False
    
    # Set IP address to guest VM
    $IP = $VLANtoTest.Subnet + $TestVMIP
    $GW =  $VLANtoTest.Subnet + $GatewayIP
    $netsh = "c:\windows\system32\netsh.exe interface ip set address Ethernet static $IP $Netmask 0.0.0.0 1"
    Invoke-VMScript -VM $TestVM -HostCredential $HostCred -GuestCredential $GuestCred -ScriptType bat -ScriptText $netsh
    
    # Wait little bit and try ping to gateway
    Start-Sleep -Seconds 5
    $PingGWResult = Invoke-VMScript -VM $TestVM -HostCredential $HostCred -GuestCredential $GuestCred -ScriptType PowerShell -ScriptText "Test-NetConnection $GW"
    $ParsedPingGWResult = $PingGWResult.ScriptOutput | Select-String True -Quiet
    If ($ParsedPingGWResult -ne $True) { 
     Start-Sleep -Seconds 30
     $PingGWResult = Invoke-VMScript -VM $TestVM -HostCredential $HostCred -GuestCredential $GuestCred -ScriptType PowerShell -ScriptText "Test-NetConnection $GW"
     $ParsedPingGWResult = $PingGWResult.ScriptOutput | Select-String True -Quiet
    }
    
    # Generate report line
    $ReportLine = New-Object -TypeName PSObject -Property @{
     "VMhost" = $vmHost.Name
     "Network" = $VLAN.Name
     "GatewayConnection" = $ParsedPingGWResult
    }
    
    $ReportLine.VMhost+"`t"+$ReportLine.Network+"`t"+$ReportLine.GatewayConnection | Out-File $LogFile -Append
    [array]$Report += $ReportLine
    Remove-Variable ParsedPingGWResult
   }
  }
 }
}
$Report | Export-Csv $ReportFile -NoTypeInformation
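
Afterwards the results can be reviewed for example like this:

# Show the generated report as a table
Import-Csv .\VLAN_test_report.csv | Format-Table VMhost, Network, GatewayConnection -AutoSize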

Wednesday, November 19, 2014

Chinese mini PC review

Story behind this post

These days all electronics are made in China.
I didn't see any good reason to pay anything to middlemen, so I decided to order my next PC directly from China.

My employer provides me a laptop so I didn't need another one, and I didn't want a full ATX tower in my living room anymore, so I decided to order a mini PC.

This is a review of that device.

Mini PC

Selection

After some research I decided to order this device.
My version has the i5-4200U CPU, so it was a little bit more expensive than the one in that link.

Other parts are:

  • G.Skill DDR3 1600 MHz SO-DIMM 4GB x 2
  • Samsung 850 Pro 120GB

Installation

Hardware installation

Hardware installation is very simple.
Just put the memory and the HDD/SSD inside the device.

The package contains cables for one hard disk, but there is enough space and free cable slots for another one too.

Operating System installation

The reseller says on their page that this device is tested with Windows 7, but it works fine with Windows 8.1 too, and I actually tested that the Windows 10 preview can also be installed on it.

An important note at this point is that the device is sold without a Windows OEM license, so if you want to run Windows on it you need to buy a retail license. I'm using it with an MSDN-licensed version of Windows 8.1 because this is my test machine.


As for the operating system installation, you need to create USB media which supports UEFI. I used Rufus for that.

Installation from USB can be started by following these steps:

  • Connect the USB stick to a USB port.
  • Press the power button.
  • Press ESC during system start. The UEFI BIOS will be loaded.
  • Move to the "Save & Exit" screen.
  • Select "Launch EFI Shell from filesystem device"; the operating system installation will load if your USB media is valid.

About drivers

Windows finds drivers for all the devices by default, but they are not the best ones for this device. There also isn't any device manufacturer's page where you could download the correct ones, so you need to find them one by one.

Here are the most important ones:

WLAN works with the driver that comes with Windows, but from time to time it loses the connection.

The solution to this problem was to manually "update" the driver to the Broadcom 802.11 Multiband Network Adapter driver, even if Windows thinks that it is not a good one for this device.




Windows tuning

I also found from the Windows event log that hibernation, and fast boot after it, didn't work correctly.



I don't know the reason for that, but because a normal boot from power button press to Windows being up and running with the user logged in (using automatic login) only takes about 10 seconds, I just disabled hibernation completely using the command:
powercfg.exe /hibernate off

Currently the device still gives a warning like this on every boot for every CPU core. I haven't found any reason for it, but the device works without any problems, so I assume that it is a false alarm.


Performance tests

Windows 8.1 doesn't contain the graphical version of the Experience Index tool anymore, but you can still get it using this tool.

Result of that test:

The highest rating comes from the SSD, so here is also more detailed information about it:


Because the device is totally passively cooled, I was also interested to see what happens when the device runs under full load for a longer time. In this picture it has been under full load for over one hour:

No problems were found in that test.

Review summary

Because these devices come without an operating system license and there are no official, tested drivers available, they are not the best choice for end users.

Possible use cases as I see them are:

  • Companies which have a volume license contract with Microsoft can use these as workstations. The price/performance ratio is very good for that use case because they don't need to buy double licenses (OEM + volume).
  • Universities and companies which are using Linux workstations can use these devices without needing to buy a Windows license.
  • Companies which need PCs for industrial halls could be interested, because the device doesn't contain any moving parts, so dust won't easily break it, and it is very cheap to keep a couple of spares in storage. The device is also very small and has a wireless connection, so it would work fine for example on warehouse trucks.
  • IT nerds like me can use these as test machines or for example as an HTPC.

Anyway, if you know what you are looking for, I can recommend trying one of these too. If you are not sure about all the device details, you can ask the reseller. They reply very fast.