Developer42

2017-02-04

PowerShell Suggestion: Simplify Write-Verbose in Modules

Write-Verbose is a really useful command in PowerShell; it lets you add code to see what’s going on behind the scenes, which you can easily toggle on and off with a simple parameter, rather than by amending your code each time you need to investigate some issue. Here’s a blog nicely summarising it for those unfamiliar: https://blogs.technet.microsoft.com/poshchap/2014/09/18/scripting-tips-and-tricks-write-verbose/.

However, in my opinion there's one major flaw with this: it doesn't work the way I'd expect where modules are concerned. That is, if I have Write-Verbose statements in my code, and I put my common code into a .psm1 module file then import that module into my script (.ps1 file), the behaviour is not the same as if those functions were defined directly in the script.

To illustrate, say I had the following code:

# MyScript.ps1
[CmdletBinding()]
param()

function Invoke-Demo {
    [CmdletBinding()]
    param ([Parameter()][string]$Message)
    process {Write-Verbose $Message}
}

Invoke-Demo 'This DOES show when I run ".\MyScript.ps1 -Verbose"'
Write-Verbose 'This DOES show when I run ".\MyScript.ps1 -Verbose"'

Running this script with the Verbose parameter outputs 2 lines:

.\MyScript.ps1 -Verbose
VERBOSE: This DOES show when I run ".\MyScript.ps1 -Verbose"
VERBOSE: This DOES show when I run ".\MyScript.ps1 -Verbose"

However, were I to put my function into a module (i.e. to simplify its re-use in other scripts) as below, the behaviour changes, making it harder for me to investigate issues.

# MyModule.psm1
function Invoke-Demo {
    [CmdletBinding()]
    param ([Parameter()][string]$Message)
    process {Write-Verbose $Message}
}
# MyScript.ps1
[CmdletBinding()]
param()
Import-Module '.\MyModule.psm1' -Force
Invoke-Demo 'This does NOT show when I run ".\MyScript.ps1 -Verbose"'
Write-Verbose 'This DOES show when I run ".\MyScript.ps1 -Verbose"'

Running this with the Verbose parameter now outputs only 1 line:

.\MyScript.ps1 -Verbose
VERBOSE: This DOES show when I run ".\MyScript.ps1 -Verbose"

I can understand that perhaps MS decided that modules contain code that's at a stage where it's ready to be packaged and shared, and therefore assumed that people may not wish to see verbose output from the packaged module; but I'd expect there to be some way to override this for anyone who does need to look under the covers.

There is a workaround suggested by Craig on StackOverflow: http://stackoverflow.com/a/16442063/361842. That is, whenever calling a function from a module, pass the caller's Verbose value to the function's Verbose parameter, like so:

Invoke-Demo 'This DOES show when I run ".\MyScript.ps1 -Verbose"; but it smells' -Verbose:($PSBoundParameters['Verbose'] -eq $true)
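
An alternative workaround (an untested sketch, not from Craig's answer) is to set $PSDefaultParameterValues in the calling script, so that every subsequent command supporting -Verbose defaults it based on how the script itself was invoked:

# MyScript.ps1
[CmdletBinding()]
param()
Import-Module '.\MyModule.psm1' -Force

if ($PSBoundParameters['Verbose']) {
    $PSDefaultParameterValues['*:Verbose'] = $true #default -Verbose for everything called from this script
}

Invoke-Demo 'This DOES show when I run ".\MyScript.ps1 -Verbose"'

This avoids decorating every call, at the cost of the setting applying to every command the script invokes.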

Suggested Improvement

Add a parameter to the Import-Module cmdlet which states whether to inherit the caller's Verbose setting, be that from $VerbosePreference or because the calling function/script was called with the Verbose switch (whether explicitly or through inheritance). i.e.

# MyModule.psm1
function Invoke-Demo {
    [CmdletBinding()]
    param ([Parameter()][string]$Message)
    process {Write-Verbose $Message}
}
# MyScript.ps1
[CmdletBinding()]
param()
Import-Module '.\MyModule.psm1' -Force -InheritVerbose # <-- new switch on Import-Module
Invoke-Demo 'This now DOES show when I run ".\MyScript.ps1 -Verbose" (if above suggestion implemented by MS)'
Write-Verbose 'This DOES show when I run ".\MyScript.ps1 -Verbose"'

NB: Currently MS Connect for PowerShell (https://connect.microsoft.com/PowerShell/) is not taking submissions, hence blogging this.

If you like this idea, please share; I'm hoping to get it onto Microsoft's radar so that it, or some improved solution, can be implemented.

Update: Thanks to Zachary Alexander for pointing out that suggestions can be submitted on GitHub.  This is now logged here: https://github.com/PowerShell/PowerShell/issues/3106

 

2016-12-19

Filtering on a Date Window

Filed under: SQL Server, Technology — Developer42 @ 17:21

This post is to cover an explanation I have to repeat quite often when demoing code, or when suggesting improvements to others’ logic.

A lot of the time we’re asked to filter for results falling within a date window and you’ll see code implemented such as this:
(I’ve used SQL in this example; but the concept illustrated here is the same regardless of implementation language).

declare @StartDate datetime = '2016-01-01 00:00'
, @EndDate datetime = '2016-12-31 23:59:59.997' --997 instead of 999 due to sql date quirk: see http://stackoverflow.com/a/3584913/361842

select *
from myTable
where myStartDate >= @StartDate
and myEndDate <= @EndDate

--or

select *
from myTable
where myStartDate between @StartDate and @EndDate
and myEndDate between @StartDate and @EndDate

That’s fine… sometimes.
The logic above will return any results where the entire result’s duration falls within the window’s range; i.e. the record starts after the window opens and completes before it’s closed.
This is best illustrated in the below graphic.

Example of Time Window with Whole Record falling within Window

However, a lot of the time the requirement is that any part of the record falls into our window. i.e. the results we want are:

Date window where any part of record falls within window

Given this requirement, lots of people still implement the above logic, causing any records which cross the window's boundaries to be incorrectly excluded.
Those who do spot this issue often overcomplicate the logic, providing a rule for each scenario (and frequently missing the last scenario illustrated below):

select *
from myTable
where 
--whole record in range
(
	myStartDate between @StartDate and @EndDate
	and myEndDate between @StartDate and @EndDate
)
--record starts in range
or
(
	myStartDate between @StartDate and @EndDate
)
--record ends in range
or
(
	myEndDate between @StartDate and @EndDate
)
--record crosses entire range
or
(
	myStartDate <= @StartDate 
	and myEndDate >= @EndDate
)

Preferred Solution

However, this can be drastically simplified, covering all of the above scenarios with a lot less code, and a lot more efficiently:

select *
from myTable
where myStartDate <= @EndDate --choice of < vs <= depending on requirement
and myEndDate >= @StartDate  --choice of > vs >= depending on requirement

To illustrate why this works, and why we use an AND rather than an OR, take a look at the previous image with lines added showing where each condition is met (green) or not met (red). Records where both conditions are met appear green; where one condition is not met, red. There is no scenario where both conditions fail, since that would require a record which finished before it started.

Records with any part in date window with lines showing condition results

Bonus Info

NB: In the above code's comments I mention that you have a choice of < vs <= (and > vs >=) depending on your requirement.
I'd also closed my datetime window with an inclusive last value ('2016-12-31 23:59:59.997').
In many scenarios I'd instead use an inclusive start date and an exclusive end date, such as is demoed below:

declare @StartDate datetime = '2016-01-01 00:00' --inclusive
, @EndDate datetime = '2017-01-01 00:00'  --exclusive

select *
from myTable
where myStartDate < @EndDate 
and myEndDate >= @StartDate  

The advantage here is that we don't need to think about how many days there are in a given month, or how many milliseconds we can have before we hit rounding errors; we just work with whole dates. It also means that the end date of our previous window can be the start date of our next window without risking issues caused by the windows overlapping (though note: we may still get results which legitimately fall into both windows).
That said, always think these decisions through; you need to understand what requirement you're meeting, and how your code will meet that requirement for any given data.

2016-07-28

PowerShell :: Display AD Group Membership Hierarchy

Filed under: Microsoft, powershell, Uncategorized — Developer42 @ 14:26
#requires -Modules ActiveDirectory -Version 3.0

clear-host

#NB: currently assumes that all users & groups are in the same domain
function Get-AdGroupHierarchy {
    [cmdletbinding()]
    param(
        [parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [string]$AdGroupSamAccountName
        ,
        [parameter(Mandatory = $false)]
        [int]$Level = 0
        ,
        [parameter(Mandatory = $false)]
        [string]$DisplayField = 'SamAccountName' #allows us to use names instead of samAccountNames if desired; though faster if we just use the sAmAccountName as avoids a lookup per call
        ,
        [parameter(Mandatory = $false)]
        [switch]$IncludeManagedBy 
    )
    begin {
        [string[]]$DefaultDisplayProperties = @('Directory')
        if($IncludeManagedBy){
            $DefaultDisplayProperties += 'ManagedBy'
        }
    }
    process {
        [string]$managedBy = ''
        [string]$displayName = $AdGroupSamAccountName
        if(($DisplayField -ne 'SamAccountName') -or ($IncludeManagedBy)) { #save an ad lookup if not required
            $group = Get-AdGroup $AdGroupSamAccountName -Properties ManagedBy, $DisplayField 
            $displayName = $group | Select-Object -ExpandProperty $DisplayField
            if (($IncludeManagedBy) -and ($group.ManagedBy)) {
                $managedBy = $group.ManagedBy | Get-AdUser -Properties DisplayName | Select-Object -ExpandProperty DisplayName
            }
        }
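        #build the output object (Level & DisplayName), add ManagedBy only when requested, then attach the DirectorySpacer/Directory script properties and set the default display columns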
        (new-object -TypeName PSObject -Property ([ordered]@{
            Level = $Level
            DisplayName = $DisplayName
        })) `
        | %{if($IncludeManagedBy){$_ | Add-Member -MemberType NoteProperty -Name ManagedBy -Value $managedBy -PassThru}else{$_}} `
        | Add-Member -MemberType ScriptProperty -Name DirectorySpacer -Value {("`t" * $this.Level) + ('|- ' * ($this.Level -gt 0))} -PassThru `
        | Add-Member -MemberType ScriptProperty -Name Directory -Value {$this.DirectorySpacer + $this.DisplayName} -PassThru `
        | Add-Member -MemberType MemberSet -Name PSStandardMembers -Value ([System.Management.Automation.PSMemberInfo[]](New-Object System.Management.Automation.PSPropertySet DefaultDisplayPropertySet, $DefaultDisplayProperties)) -PassThru
        Get-AdGroupMember $AdGroupSamAccountName | ?{$_.objectClass -eq 'group'} | Select-Object -ExpandProperty SamAccountName | Sort-Object | Get-AdGroupHierarchy -Level ($level + 1) -DisplayField $DisplayField -IncludeManagedBy:$IncludeManagedBy
    }
}

Get-AdGroupHierarchy 'MyAdGroupSamAccountName' #see just the directory structure
Get-AdGroupHierarchy 'MyAdGroupSamAccountName' -IncludeManagedBy | ft -AutoSize #see directory structure and manager
Get-AdGroupHierarchy 'MyAdGroupSamAccountName' -IncludeManagedBy | select * #see all properties being returned

2016-03-08

Update Active Directory Thumbnail Photo for All Users Without a Pic to Show Their Initials

We recently upgraded to Lync 2015 (aka Skype for Business). One feature which frustrated me was group conversations: you can either see people's names, in which case they're listed in such a way as to use up a large amount of screen space, or you can see people's photos/thumbnails, which is great except that most people don't have a photo associated with their account.

To resolve this quickly I came up with the below script which:

  • Gets all AD users under a given OU / SearchBase
  • Filters for those without a thumbnail photo associated
  • Determines the users’ initials
  • Creates a JPG of their initials
  • Assigns this JPG as their thumbnail image

Thus giving an easier way to tell who you’re talking to until they get around to uploading a photo.

NB: The below script is currently untested as I don’t have sufficient rights to update AD / will have to wait for our infrastructure team to assist there… but those parts that I could test have been tested.

Add-Type -AssemblyName System.Drawing
Add-Type -AssemblyName System.IO
Add-Type -AssemblyName Microsoft.ActiveDirectory.Management
Import-Module ActiveDirectory

function Convert-InitialsToJpegImageByteArray {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [string]$Initials
        ,
        [Parameter(Mandatory = $false)]
        [int]$WidthInPixels = 96  #Ref https://technet.microsoft.com/en-gb/library/jj688150.aspx: decided not to go 648x648 just for text
        ,
        [Parameter(Mandatory = $false)]
        [int]$HeightInPixels = 96
        ,
        [Parameter(Mandatory = $false)]
        [System.Drawing.Font]$Font = (new-object System.Drawing.Font ('Proxima Nova Alt',32, [System.Drawing.FontStyle]::Bold))
        ,
        [Parameter(Mandatory = $false)]
        [System.Drawing.Color]$BackgroundColor = "#FF000434" #ARGB Colors
        ,
        [Parameter(Mandatory = $false)]
        [System.Drawing.Color]$ForegroundColor = "#FF0099D8"
    )
    process {

        [System.Drawing.Bitmap]$bmp = new-object System.Drawing.Bitmap ($widthInPixels, $heightInPixels)
        [System.Drawing.Brush]$BrushBg = (new-object System.Drawing.SolidBrush ($BackgroundColor))
        [System.Drawing.Brush]$brushFg = (new-object System.Drawing.SolidBrush ($ForegroundColor))

        [System.Drawing.Graphics]$graphics = [System.Drawing.Graphics]::FromImage($bmp) 
        $graphics.FillRectangle($brushBg,0,0,$bmp.Width,$bmp.Height) 
        $graphics.DrawString($initials,$font,$brushFg,15,18) 
        $graphics.Dispose() 

        [System.IO.MemoryStream]$stream = new-object System.IO.MemoryStream
        $bmp.Save($stream,[System.Drawing.Imaging.ImageFormat]::Jpeg)
        $stream.Close()
        $stream.ToArray()
        $stream.Dispose()

    }
}

function Get-AdUserInitials {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [Microsoft.ActiveDirectory.Management.ADUser]$User
    )
    process {
        #$User = Get-ADUser $Identity #we could do parameter sets to allow passing only an identity
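        #the "{0}?" -f ... padding means a blank GivenName or Surname yields '?' for that initial rather than erroring when taking character [0]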
        $User | select-object @{Name='User';Expression={$User}}, @{Name='Initials';Expression={("{0}{1}" -f ("{0}?" -f $_.GivenName)[0],("{0}?" -f $_.Surname)[0]).ToUpperInvariant()}}
    }
}

function Get-AdUsersWithNoThumbnail {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [string]$SearchBase
        ,
        [Parameter(Mandatory = $false)]
        [ValidateSet('Base','OneLevel','Subtree')]
        [string]$SearchScope = 'Subtree'
        ,
        [Parameter(Mandatory = $false)]
        [string]$Filter = 'Enabled -eq $true'
    )
    process {
        get-aduser -Filter $Filter -SearchBase $SearchBase -SearchScope $SearchScope -Properties thumbnailPhoto | ?{!$_.thumbnailPhoto}
    }
}


@('OU=UK,DC=myCompany,DC=com') `
| Get-AdUsersWithNoThumbnail `
| Get-AdUserInitials `
| %{
    $User = $_.User
    [byte[]]$Image = $_.Initials | Convert-InitialsToJpegImageByteArray 
    Set-Aduser $user -replace @{thumbnailPhoto=$Image}
}

2016-02-05

PowerShell PlayTime :: Project Oxford

I recently stumbled across Microsoft’s Project Oxford; a collection of “AI” APIs for computer vision, speech and language. More info can be found here:

  1. MSDN: https://msdn.microsoft.com/en-us/library/mt422983.aspx
  2. Demo: https://www.projectoxford.ai/demo
  3. GitHub: https://github.com/Microsoft/ProjectOxford-ClientSDK

The project has links to a C# SDK, but that felt a bit heavy-weight for what are essentially just web services; so I thought I’d have a quick play with these functions using PowerShell.

Before coding, an API key is required. This can be obtained by logging into the following site using your MS Account (aka .net passport / live account) and registering for the free API keys: https://www.projectoxford.ai/Subscription.

I began by looking at the Emotion API, trawling through the client SDK code found here: https://github.com/Microsoft/ProjectOxford-ClientSDK/blob/master/Emotion/Windows/ClientLibrary/EmotionServiceClient.cs

Based on that code I was able to find all the info needed to call the rest services through PowerShell, resulting in this short script:

Clear-Host
$imageFilePath = 'c:\SomeFolder\ImageOfMeLookingAwesome.jpg'
$emotionApiKeyFree = '**my emotion api key goes here**' #api keys: https://www.projectoxford.ai/Subscription

$RecogniseUri = ("https://api.projectoxford.ai/emotion/v1.0/recognize?subscription-key={0}" -f $emotionApiKeyFree)
$ContentType = 'application/octet-stream'
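#request body is the raw image bytes (the base64 round-trip below is just one way of turning the file into a byte array)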
$Body = [System.IO.MemoryStream][System.Convert]::FromBase64String([convert]::ToBase64String((get-content $imageFilePath -Encoding Byte)))
$Result = Invoke-RestMethod -Method Post -Uri $RecogniseUri -ContentType $ContentType -Body $Body 

#show the results
$Result | format-list #information saying I don't look awesome; just angry, contemptable, and sad... until you spot the Es

#open the image file so we can compare the results with the picture
invoke-item $imageFilePath

To make it easier to see how the results map to the image, I then appended this to the end of my script, generating an HTML page with an image map. Hovering your mouse over the image’s faces gives you the emotions.

$htmlResultPath = 'c:\SomeFolder\EmotionApiMap.html'

$html = @"
<html>
<head><title>really simple html page for demoing project oxford's emotion api</title></head>
<body>
<img src="$imageFilePath" usemap="#emotionMap">
<map name="emotionMap">
{0}
</map>
</body>
</html>
"@

$html = $html -f ($Result | %{
        $top = $_.faceRectangle.top
        $left = $_.faceRectangle.left
        $bottom = ($_.faceRectangle.top + $_.faceRectangle.height)
        $right = ($_.faceRectangle.left + $_.faceRectangle.width)

        "<area shape='rect' coords='{0},{1},{2},{3}' title='{4}'>" -f $left, $top, $right, $bottom, ($_.scores | out-string) 
    } | Out-String) #Out-String used as Join-String isn't available in Windows PowerShell without an extra module (e.g. PSCX)

$html | out-file $htmlResultPath -Force
invoke-item $htmlResultPath 
PowerShell Project Oxford Emotion API PoC (my emotional state)

I plan to play with these APIs more over the next few months, and hope to find time to knock up a PowerShell module for them which I could share on GitHub.

2016-02-03

PowerShell & SQL :: A Quick Fix for the SQLPS Problems

Filed under: Microsoft, powershell, SQL Server, Technology — Developer42 @ 18:19

There are a couple of issues with the SQLPS PowerShell module.

  1. It changes the current directory to PS SQLSERVER:\>.
  2. It may cause warnings (see end of this post for examples).

Calling the Import-Module-SQLPS function (the definition of which you’ll find below) instead of running Import-Module SQLPS will prevent such issues.

cls
function Import-Module-SQLPS {
    #pushd and popd to avoid import from changing the current directory (ref: http://stackoverflow.com/questions/12915299/sql-server-2012-sqlps-module-changing-current-location-automatically)
    #3>&1 puts warning stream to standard output stream (see https://connect.microsoft.com/PowerShell/feedback/details/297055/capture-warning-verbose-debug-and-host-output-via-alternate-streams)
    #out-null blocks that output, so we don't see the annoying warnings described here: https://www.codykonior.com/2015/05/30/whats-wrong-with-sqlps/
    push-location
    import-module sqlps 3>&1 | out-null
    pop-location
}

"Is SQLPS Loaded?"
if(get-module sqlps){"yes"}else{"no"}

Import-Module-SQLPS

"Is SQLPS Loaded Now?"
if(get-module sqlps){"yes"}else{"no"}

NB: Though from PowerShell 3.0 onwards you don't need to import modules explicitly, for SQLPS I'd recommend that you do; this ensures you control when in your script the import happens, rather than the first command from that module (e.g. an Invoke-SqlCmd statement) causing it to load, and thus your directory being changed and warnings being spewed part-way through your script.
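
For example (a minimal sketch; the instance name is just a placeholder), importing explicitly at the top keeps those side effects at a point you control:

Import-Module-SQLPS #location change & warnings are dealt with here, up front
Invoke-Sqlcmd -ServerInstance 'myServer\myInstance' -Query 'select @@servername' #placeholder instance; without the explicit import, this call would trigger the module auto-load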


2015-12-04

SQL Server :: Compare Database Schemas

This is a script to list basic differences in table definitions between two databases.

Background Waffle
When we make database changes in a transactional system, we need to ensure that these won't adversely affect our BI solution. Normally our developers give us this information when we plan which items to put into a release, so we can liaise with BI; unfortunately the developers' machines are down at present due to a disaster at the data center of the company to which we outsourced development. Since the changes for the release are already in our UAT environment (and only those changes are there), we could determine what would be changing in production by comparing the schemas of the two environments.

My first thought was to use the Database Comparison Tool that comes with Visual Studio. However, whilst this does a great job for a developer looking for and resolving differences, it's not so easy to communicate this information to others (i.e. to give a before and after image of columns side by side). You have to either click on each difference to get a definition of the before and after, then make notes or take screenshots, or you have to generate the DDL, in which case you don't see the before picture, only the code to amend the current definition to the to-be one.

Instead I realised that I could simply pull the definitions of the tables & columns from SQL, compare these and list the differences; resulting in the code below.
This code can handle where both databases are on the same instance, or where they’re on different instances. For the latter scenario you’d need to create a linked server.
If you frequently compare different combinations of databases and want to avoid amending the instance and catalog names in the code each time you may want to look into views or synonyms (potentially aliases work too if the catalog name is consistent across instances).

Since our BI solution uses replication to get the data from the transactional system, I also included information on which columns were replicated; thus we can filter on those replicated to see just those which BI would be interested in, or can ignore replication to see all differences.

Caveats

  1. We’re not comparing everything; only basic table and column data.
  2. The maxLength value gives length in bytes; not characters (see previous post on how to correct for this).
  3. There are probably other issues; this was a quick fix to resolve an immediate issue which worked for us; there may be issues we’ve not encountered (e.g. maybe the linked server only gives access to a subset of the tables; thus giving false responses)…

Code

set transaction isolation level read uncommitted
go

;with uat as
(
       select t.name TableName
       , c.Name ColumnName
       , ty.name ColType
       , c.max_length ColMaxLength
       , coalesce(c.is_nullable,0) ColIsNullable
       , c.scale ColScale
       , c.[precision] ColPrecision
       , coalesce(c.collation_name,'') ColCollation
       , c.is_replicated
       from [myUatDbServer\uatInstance].[myUatDbCatalog].sys.tables t
       inner join [myUatDbServer\uatInstance].[myUatDbCatalog].sys.columns c
              on c.object_id = t.object_id
       inner join [myUatDbServer\uatInstance].[myUatDbCatalog].sys.types ty
              on ty.system_type_id = c.system_type_id
              and ty.user_type_id = c.user_type_id
)
, prd as
(
       select t.name TableName
       , c.Name ColumnName
       , ty.name ColType
       , c.max_length ColMaxLength
       , coalesce(c.is_nullable,0) ColIsNullable
       , c.scale ColScale
       , c.[precision] ColPrecision
       , coalesce(c.collation_name,'') ColCollation
       , c.is_replicated
       from [myProdDbServer\prodInstance].[myProdDbCatalog].sys.tables t
       inner join [myProdDbServer\prodInstance].[myProdDbCatalog].sys.columns c
              on c.object_id = t.object_id
       inner join [myProdDbServer\prodInstance].[myProdDbCatalog].sys.types ty
              on ty.system_type_id = c.system_type_id
              and ty.user_type_id = c.user_type_id
)
select coalesce(uat.TableName, prd.TableName) TableName
, coalesce(uat.ColumnName, prd.ColumnName) ColumnName
, case
       when prd.TableName is null and not exists (select top 1 1 from prd x where x.TableName = uat.TableName) then 'Add Table'
       when uat.TableName is null and not exists (select top 1 1 from uat x where x.TableName = prd.TableName)  then 'Remove Table'
       when prd.ColumnName is null then 'Add Column'
       when uat.ColumnName is null then 'Remove Column'
       else 'Change Column Definition'
end [AXtion] --our transaction system's Dynamics AX; I'm hilarious :S
, prd.ColType ColTypeFrom
, uat.ColType ColTypeTo
, prd.ColMaxLength ColMaxLengthFrom --not going to fuss about char byte size for now; just want to get a list of changes
, uat.ColMaxLength ColMaxLengthTo --not going to fuss about char byte size for now; just want to get a list of changes
, prd.ColIsNullable ColIsNullableFrom
, uat.ColIsNullable ColIsNullableTo
, prd.ColCollation ColCollationFrom
, uat.ColCollation ColCollationTo
, prd.ColPrecision ColPrecisionFrom
, uat.ColPrecision ColPrecisionTo
, prd.ColScale ColScaleFrom
, uat.ColScale ColScaleTo
, prd.is_replicated PrdIsReplicated --\_these aren't compared; just returned to make it easy to tell what's interesting to BI
, uat.is_replicated UatIsReplicated --/
from uat
full outer join prd
       on prd.TableName = uat.TableName
       and prd.ColumnName = uat.ColumnName
where prd.TableName is null
or uat.TableName is null
or prd.ColumnName is null
or uat.ColumnName is null
or (uat.ColType != prd.ColType)
or (uat.ColMaxLength != prd.ColMaxLength)
or (uat.ColIsNullable != prd.ColIsNullable) 
or (uat.ColCollation != prd.ColCollation) 
or (uat.ColPrecision != prd.ColPrecision) 
or (uat.ColScale != prd.ColScale)
order by coalesce(uat.TableName, prd.TableName) 
, coalesce(uat.ColumnName, prd.ColumnName) 

2015-12-03

SQL Server :: Getting Column Length of Various Columns

Filed under: Microsoft, SQL Server, Technology — Developer42 @ 19:17

Sometimes we need to look up the maximum length allowed for various columns in a number of tables.
This script allows you to specify a list of table/column pairs in the table variable @interesting, then pulls back the max_length information for these, taking the data type into account (i.e. so data types storing 2-byte character sets are reported predictably).

It works by creating a temp table with a column holding a single character of each (string) data type, then reading the length of that field to get the length of a single character of that data type, and dividing the max length of the fields we’re after by this multiplier to get the length in characters rather than bytes.


declare @interesting table(col sysname, tab sysname)
insert @interesting
values ('someColumn','someTable')
,('secondColumn','someTable')
,('column3','anotherTable')

declare @JBDataTypes table(system_type_id int, user_type_id int, max_length int)
declare @sql nvarchar(max) 
select @sql = coalesce(@sql + ',','create table #dataTypesJB (') + quotename(name) + ' ' + name  
from sys.types 
where collation_name is not null --string fields only
set @sql = @sql + '); 
select system_type_id
, user_type_id 
, max_length 
from tempdb.sys.columns 
where object_id = object_id(''tempdb..#dataTypesJB''); 
drop table #dataTypesJB'
insert into @JBDataTypes exec (@sql)

select c.max_length / coalesce(jb.max_length,1) max_length 
, i.col
, i.tab
from @interesting i
inner join sys.columns c 
on c.object_id = object_id(i.tab)
and c.name = i.col
left outer join @JBDataTypes jb
on jb.system_type_id = c.system_type_id
and jb.user_type_id = c.user_type_id 
order by i.tab,i.col

2015-06-16

FTP Traffic Simulator / Test Script (PowerShell)

Filed under: Microsoft, Technology — Developer42 @ 01:38

We recently had an intermittent communication issue between an FTP server and the file share on which it stored its files.
In order to ensure frequent activity (with some variety to make it realistic) between the servers whilst investigating I created the below script.

The script connects to an FTP server, then loops (a given number of times) creating a dummy (Hunka.Junk) file of a random size, then uploading this file before pausing a random amount of time, then repeating for the next iteration.

cls
 
$fnTemplate = "hunka.junk.{0}.jpg"
$fnTempPath = "c:\temp\ftpTestScript\"
$user = "myFtpUser"
$pass = "myFtpPassword"
$ftp = "ftp://myFtpDns.myCompany.com/{0}"; 
$minFileSizeBytes =  20 * 1024 #min file size is 20kb
$maxFileSizeBytes = 100 * 1024 #max file size is 100kb
$minWaitSecs = 0 
$maxWaitSecs = 30
$stopConditionCounter = 1000000 #just in case we forget about the script, have a stopping condition so it doesn't fill the server with junk
 
$webclient = New-Object -TypeName System.Net.WebClient;
$webclient.Credentials = New-Object System.Net.NetworkCredential($user,$pass) 
#$webclient.ContentType = "application/octet-stream";
#$webclient.Proxy = $null;
 
0..$stopConditionCounter | %{
    #setup variables for this iteration
    $filesize = Get-Random -Minimum $minFileSizeBytes -Maximum $maxFileSizeBytes #filesize in bytes
    $sleepTimeSecs = Get-Random -Minimum $minWaitSecs -Maximum ($maxWaitSecs-1)  #time to wait between uploads in seconds
    $sleepTimeSecs = Get-Random -Minimum $minWaitSecs -Maximum ($sleepTimeSecs+1) #increase likelihood of getting smaller wait times
    $fn = ($fnTemplate -f [string]([System.Guid]::NewGuid())) #get a unique filename
    write-output ("{0:0,000,000}: Generating {1:000} kb hunka.junk file ({2}), uploading to ftp, then sleeping {3} seconds" -f $_,($filesize/1024),$fn,$sleepTimeSecs)
    #create file
    $fnFullPath = (join-path $fnTempPath $fn)
    fsutil file createnew $fnFullPath $filesize
    #upload file
    $uri = New-Object -TypeName System.Uri -ArgumentList ($ftp -f $fn);
    $webclient.UploadFile($uri, $fnFullPath);
    #clean-up
    remove-item $fnFullPath
    #sleep
    Start-Sleep -seconds $sleepTimeSecs
}

2015-05-19

Hunka.Junk Files

Filed under: Microsoft, Movies, Technology — Developer42 @ 17:21

A few times, even with capacity planning in place, we’ve hit disk capacity limits.
When this occurs you search for content which can be deleted to free up some space to get you moving again.
Here’s my new quick fix for this: HunkaJunk files.
Create dummy files which take up space, but can be quickly & easily deleted without risk, giving you the wiggle room you need to get moving again.

There are various ways this could be done, but a useful tool is FSUtil.
This allows you to create blank files which take up a defined number of bytes.
e.g. to create a 1GB file you could run the following code from the Windows command line.

fsutil file createnew c:\PurgeForDiskSpace\Hunka.Junk 1073741824

– c:\PurgeForDiskSpace\Hunka.Junk – is the filename of your junk file (you can call it anything; just make it obvious)
– 1073741824 – is the filesize in bytes. NB: 1GB = 1024MB = 1024 * 1024 KB = 1024 * 1024 * 1024 bytes = 1,073,741,824 bytes.
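
If you're running the command from PowerShell rather than cmd, the size literal can do the byte maths for you (a trivial sketch of the same command):

fsutil file createnew c:\PurgeForDiskSpace\Hunka.Junk (1GB) #PowerShell evaluates 1GB to 1073741824 before passing it to fsutil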

 

Note: These files don’t work as expected in all scenarios.  The created files are populated with 0s, whilst real files are a mix of various values.  This can impact some things; e.g. these files generally can be transferred much faster than most files, as they’re far simpler to compress.  If you need more realistic files, you can generate files which are a collection of various (random) byte values using the script proposed here: https://stackoverflow.com/a/49836223/361842

$randomGenerator = [System.Security.Cryptography.RNGCryptoServiceProvider]::new()
$bytes = [Byte[]]::new((1GB - 2))
$randomGenerator.GetBytes($bytes)
[System.IO.File]::WriteAllBytes('c:\temp\hunka.junk', $bytes)

 

