Developer42

2016-02-05

PowerShell PlayTime :: Project Oxford

I recently stumbled across Microsoft’s Project Oxford: a collection of “AI” APIs for computer vision, speech, and language. More info can be found here:

  1. MSDN: https://msdn.microsoft.com/en-us/library/mt422983.aspx
  2. Demo: https://www.projectoxford.ai/demo
  3. GitHub: https://github.com/Microsoft/ProjectOxford-ClientSDK

The project has links to a C# SDK, but that felt a bit heavyweight for what are essentially just web services, so I thought I’d have a quick play with these functions using PowerShell.

Before coding, an API key is required. This can be obtained by logging into the following site using your MS Account (aka .net passport / live account) and registering for the free API keys: https://www.projectoxford.ai/Subscription.

I began by looking at the Emotion API, trawling through the client SDK code found here: https://github.com/Microsoft/ProjectOxford-ClientSDK/blob/master/Emotion/Windows/ClientLibrary/EmotionServiceClient.cs

Based on that code I was able to find all the info needed to call the rest services through PowerShell, resulting in this short script:

Clear-Host
$imageFilePath = 'c:\SomeFolder\ImageOfMeLookingAwesome.jpg'
$emotionApiKeyFree = '**my emotion api key goes here**' #api keys: https://www.projectoxford.ai/Subscription

$RecogniseUri = ("https://api.projectoxford.ai/emotion/v1.0/recognize?subscription-key={0}" -f $emotionApiKeyFree)
$ContentType = 'application/octet-stream'
$Body = [System.IO.File]::ReadAllBytes($imageFilePath) #read the image in as raw bytes for the octet-stream request body
$Result = Invoke-RestMethod -Method Post -Uri $RecogniseUri -ContentType $ContentType -Body $Body 

#show the results
$Result | format-list #information saying I don't look awesome; just angry, contemptible, and sad... until you spot the Es

#open the image file so we can compare the results with the picture
invoke-item $imageFilePath
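
The scores object has one property per emotion (anger, contempt, disgust, fear, happiness, neutral, sadness, surprise), with most values in scientific notation (hence the Es). If you just want the dominant emotion per face, something like this rough sketch works against the $Result object above:

#for each detected face, pick the emotion with the highest score
$Result | %{
    $top = $_.scores.PSObject.Properties | sort Value -Descending | select -First 1
    '{0}: {1:P1}' -f $top.Name, $top.Value
}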

To make it easier to see how the results map to the image, I then appended the following to the end of my script, generating an HTML page with an image map. Hovering your mouse over the faces in the image shows the emotions.

$htmlResultPath = 'c:\SomeFolder\EmotionApiMap.html'

$html = @"
<html>
<head><title>really simple html page for demoing project oxford's emotion api</title></head>
<body>
<img src="$imageFilePath" usemap="#emotionMap">
<map name="emotionMap">
{0}
</map>
</body>
</html>
"@

$html = $html -f (($Result | %{
        $top = $_.faceRectangle.top
        $left = $_.faceRectangle.left
        $bottom = ($_.faceRectangle.top + $_.faceRectangle.height)
        $right = ($_.faceRectangle.left + $_.faceRectangle.width)

        "<area shape='rect' coords='{0},{1},{2},{3}' title='{4}'>" -f $left, $top, $right, $bottom, ($_.scores | out-string) 
    }) -join "`n") #join the <area> tags into a single string

$html | out-file $htmlResultPath -Force
invoke-item $htmlResultPath 
[Image: My emotional state (PowerShell Project Oxford Emotion API PoC)]

I plan to play with these APIs more over the next few months, and hope to find time to knock up a PowerShell module for them which I could share on GitHub.

2016-02-03

PowerShell & SQL :: A Quick Fix for the SQLPS Problems

Filed under: Microsoft, Technology, SQL Server, powershell — Developer42 @ 18:19

There are a couple of issues with the SQLPS PowerShell module.

  1. It changes the current location to SQLSERVER:\ (so your prompt becomes PS SQLSERVER:\>).
  2. It may cause warnings (see end of this post for examples).

Calling the Import-Module-SQLPS function (the definition of which you’ll find below) instead of running Import-Module SQLPS will prevent such issues.

cls
function Import-Module-SQLPS {
    #pushd and popd to avoid import from changing the current directory (ref: http://stackoverflow.com/questions/12915299/sql-server-2012-sqlps-module-changing-current-location-automatically)
    #3>&1 puts warning stream to standard output stream (see https://connect.microsoft.com/PowerShell/feedback/details/297055/capture-warning-verbose-debug-and-host-output-via-alternate-streams)
    #out-null blocks that output, so we don't see the annoying warnings described here: https://www.codykonior.com/2015/05/30/whats-wrong-with-sqlps/
    push-location
    import-module sqlps 3>&1 | out-null
    pop-location
}

"Is SQLPS Loaded?"
if(get-module sqlps){"yes"}else{"no"}

Import-Module-SQLPS

"Is SQLPS Loaded Now?"
if(get-module sqlps){"yes"}else{"no"}

NB: Although from PowerShell 3.0 onwards you don’t generally need to import modules explicitly, for SQLPS I’d recommend that you do: this ensures you control when in your script the import happens, rather than having the first command from that module (e.g. an Invoke-SqlCmd statement) trigger the load, changing your directory and spewing warnings part-way through your script.
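
For example, with the wrapper in place a script can get the import (and its side effects) out of the way right at the top, before the first SQL cmdlet runs; a minimal sketch (server and query are placeholders):

Import-Module-SQLPS #import happens here, on our terms, with the directory change and warnings suppressed

$databases = Invoke-SqlCmd -ServerInstance 'myServer\myInstance' -Query 'select name from sys.databases'
$databases | ft -AutoSize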


2015-12-04

SQL Server :: Compare Database Schemas

This is a script to list basic differences in table definitions between two databases.

Background Waffle
When we make database changes in a transactional system, we need to ensure that these won’t adversely affect our BI solution. Normally our developers give us this information when we plan which items to put into a release, so we can liaise with BI; unfortunately the developers’ machines are down at present due to a disaster at the data center of the company to whom we outsourced development. Since the changes for the release are already in our UAT environment (and only those changes are there), we could determine what would be changing in production by comparing the schemas of the two environments.

My first thought was to use the Database Comparison Tool that comes with Visual Studio. However, whilst this does a great job for a developer looking for and resolving differences, it’s not so easy to communicate the information to others (i.e. to give a before and after image of columns side by side). You have to either click on each difference to get a definition of the before and after, then make notes or take screenshots; or you have to generate the DDL, in which case you don’t see the before picture, only the code to amend the current definition to the to-be definition.

Instead I realised that I could simply pull the definitions of the tables & columns from SQL, compare these and list the differences; resulting in the code below.
This code can handle where both databases are on the same instance, or where they’re on different instances. For the latter scenario you’d need to create a linked server.
If you frequently compare different combinations of databases and want to avoid amending the instance and catalog names in the code each time, you may want to look into views or synonyms (potentially aliases work too, if the catalog name is consistent across instances).
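
If you do need to create the linked server, a sketch along these lines should do it (uses the placeholder instance names from the script below, and assumes you have permissions to create linked servers):

#create a linked server pointing at the UAT instance, logging in with your own credentials
Invoke-SqlCmd -ServerInstance 'myProdDbServer\prodInstance' -Query @"
exec sys.sp_addlinkedserver @server = N'myUatDbServer\uatInstance';
exec sys.sp_addlinkedsrvlogin @rmtsrvname = N'myUatDbServer\uatInstance', @useself = N'True';
"@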

Since our BI solution uses replication to get the data from the transactional system, I also included information on which columns were replicated; thus we can filter on those replicated to see just those which BI would be interested in, or can ignore replication to see all differences.

Caveats

  1. We’re not comparing everything; only basic table and column data.
  2. The maxLength value gives length in bytes; not characters (see previous post on how to correct for this).
  3. There are probably other issues; this was a quick fix to resolve an immediate problem, and it worked for us, but there may be scenarios we’ve not encountered (e.g. maybe the linked server only gives access to a subset of the tables, thus giving false responses)…

Code

set transaction isolation level read uncommitted
go

;with uat as
(
       select t.name TableName
       , c.Name ColumnName
       , ty.name ColType
       , c.max_length ColMaxLength
       , coalesce(c.is_nullable,0) ColIsNullable
       , c.scale ColScale
       , c.[precision] ColPrecision
       , coalesce(c.collation_name,'') ColCollation
       , c.is_replicated
       from [myUatDbServer\uatInstance].[myUatDbCatalog].sys.tables t
       inner join [myUatDbServer\uatInstance].[myUatDbCatalog].sys.columns c
              on c.object_id = t.object_id
       inner join [myUatDbServer\uatInstance].[myUatDbCatalog].sys.types ty
              on ty.system_type_id = c.system_type_id
              and ty.user_type_id = c.user_type_id
)
, prd as
(
       select t.name TableName
       , c.Name ColumnName
       , ty.name ColType
       , c.max_length ColMaxLength
       , coalesce(c.is_nullable,0) ColIsNullable
       , c.scale ColScale
       , c.[precision] ColPrecision
       , coalesce(c.collation_name,'') ColCollation
       , c.is_replicated
       from [myProdDbServer\prodInstance].[myProdDbCatalog].sys.tables t
       inner join [myProdDbServer\prodInstance].[myProdDbCatalog].sys.columns c
              on c.object_id = t.object_id
       inner join [myProdDbServer\prodInstance].[myProdDbCatalog].sys.types ty
              on ty.system_type_id = c.system_type_id
              and ty.user_type_id = c.user_type_id
)
select coalesce(uat.TableName, prd.TableName) TableName
, coalesce(uat.ColumnName, prd.ColumnName) ColumnName
, case
       when prd.TableName is null and not exists (select top 1 1 from prd x where x.TableName = uat.TableName) then 'Add Table'
       when uat.TableName is null and not exists (select top 1 1 from uat x where x.TableName = prd.TableName)  then 'Remove Table'
       when prd.ColumnName is null then 'Add Column'
       when uat.ColumnName is null then 'Remove Column'
       else 'Change Column Definition'
end [AXtion] --our transactional system is Dynamics AX; I'm hilarious :S
, prd.ColType ColTypeFrom
, uat.ColType ColTypeTo
, prd.ColMaxLength ColMaxLengthFrom --not going to fuss about char byte size for now; just want to get a list of changes
, uat.ColMaxLength ColMaxLengthTo --not going to fuss about char byte size for now; just want to get a list of changes
, prd.ColIsNullable ColIsNullableFrom
, uat.ColIsNullable ColIsNullableTo
, prd.ColCollation ColCollationFrom
, uat.ColCollation ColCollationTo
, prd.ColPrecision ColPrecisionFrom
, uat.ColPrecision ColPrecisionTo
, prd.ColScale ColScaleFrom
, uat.ColScale ColScaleTo
, prd.is_replicated PrdIsReplicated --\_these aren't compared; just returned to make it easy to tell what's interesting to BI
, uat.is_replicated UatIsReplicated --/
from uat
full outer join prd
       on prd.TableName = uat.TableName
       and prd.ColumnName = uat.ColumnName
where prd.TableName is null
or uat.TableName is null
or prd.ColumnName is null
or uat.ColumnName is null
or (uat.ColType != prd.ColType)
or (uat.ColMaxLength != prd.ColMaxLength)
or (uat.ColIsNullable != prd.ColIsNullable) 
or (uat.ColCollation != prd.ColCollation) 
or (uat.ColPrecision != prd.ColPrecision) 
or (uat.ColScale != prd.ColScale)
order by coalesce(uat.TableName, prd.TableName) 
, coalesce(uat.ColumnName, prd.ColumnName) 
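
To circulate the differences (e.g. to the BI team), one option is to run the above from PowerShell and export to CSV; a rough sketch, assuming the script is saved as CompareSchemas.sql (a hypothetical filename):

#run the comparison (from an instance that can reach both servers) and dump the differences to CSV
$diffs = Invoke-SqlCmd -ServerInstance 'myProdDbServer\prodInstance' -InputFile 'c:\SomeFolder\CompareSchemas.sql'
$diffs | Export-Csv 'c:\SomeFolder\SchemaDifferences.csv' -NoTypeInformation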

2015-12-03

SQL Server :: Getting Column Length of Various Columns

Filed under: Microsoft, SQL Server, Technology — Developer42 @ 19:17

Sometimes we need to look up the maximum length allowed for various columns in a number of tables.
This script allows you to specify a list of table/column pairs in the table variable @interesting, then pulls back the max_length information for these, taking the data type into account (i.e. so columns using 2-byte character sets report their lengths predictably).

It works by creating a temp table with a column of each (string) data type sized to hold a single character, reading the length of each such field to get the byte size of one character of that data type, then dividing the max length of the fields we’re after by this multiplier to give the length in characters rather than bytes (e.g. an nvarchar(10) column reports a max_length of 20 bytes, and a single nvarchar character occupies 2 bytes, so 20 / 2 = 10 characters).


declare @interesting table(col sysname, tab sysname)
insert @interesting
values ('someColumn','someTable')
,('secondColumn','someTable')
,('column3','anotherTable')

declare @JBDataTypes table(system_type_id int, user_type_id int, max_length int)
declare @sql nvarchar(max) 
select @sql = coalesce(@sql + ',','create table #dataTypesJB (') + quotename(name) + ' ' + name  
from sys.types 
where collation_name is not null --string fields only
set @sql = @sql + '); 
select system_type_id
, user_type_id 
, max_length 
from tempdb.sys.columns 
where object_id = object_id(''tempdb..#dataTypesJB''); 
drop table #dataTypesJB'
insert into @JBDataTypes exec (@sql)

select c.max_length / coalesce(jb.max_length,1) max_length 
, i.col
, i.tab
from @interesting i
inner join sys.columns c 
on c.object_id = object_id(i.tab)
and c.name = i.col
left outer join @JBDataTypes jb
on jb.system_type_id = c.system_type_id
and jb.user_type_id = c.user_type_id 
order by i.tab,i.col

2015-06-16

FTP Traffic Simulator / Test Script (PowerShell)

Filed under: Microsoft, Technology — Developer42 @ 01:38

We recently had an intermittent communication issue between an FTP server and the file share on which it stored its files.
In order to ensure frequent activity between the servers (with some variety to make it realistic) whilst investigating, I created the script below.

The script connects to an FTP server, then loops (a given number of times): each iteration creates a dummy (Hunka.Junk) file of a random size, uploads it, then pauses for a random amount of time before repeating.

cls
 
$fnTemplate = "hunka.junk.{0}.jpg"
$fnTempPath = "c:\temp\ftpTestScript\"
$user = "myFtpUser"
$pass = "myFtpPassword"
$ftp = "ftp://myFtpDns.myCompany.com/{0}"; 
$minFileSizeBytes =  20 * 1024 #min file size is 20kb
$maxFileSizeBytes = 100 * 1024 #max file size is 100kb
$minWaitSecs = 0 
$maxWaitSecs = 30
$stopConditionCounter = 1000000 #just in case we forget about the script, have a stopping condition so it doesn't fill the server with junk
 
$webclient = New-Object -TypeName System.Net.WebClient;
$webclient.Credentials = New-Object System.Net.NetworkCredential($user,$pass) 
#$webclient.ContentType = "application/octet-stream";
#$webclient.Proxy = $null;
 
0..$stopConditionCounter | %{
    #setup variables for this iteration
    $filesize = Get-Random -Minimum $minFileSizeBytes -Maximum $maxFileSizeBytes #filesize in bytes
    $sleepTimeSecs = Get-Random -Minimum $minWaitSecs -Maximum ($maxWaitSecs-1)  #time to wait between uploads in seconds
    $sleepTimeSecs = Get-Random -Minimum $minWaitSecs -Maximum ($sleepTimeSecs+1) #increase likelihood of getting smaller wait times
    $fn = ($fnTemplate -f [string]([System.Guid]::NewGuid())) #get a unique filename
    write-output ("{0:0,000,000}: Generating {1:000} kb hunka.junk file ({2}), uploading to ftp, then sleeping {3} seconds" -f $_,($filesize/1024),$fn,$sleepTimeSecs)
    #create file
    $fnFullPath = (join-path $fnTempPath $fn)
    fsutil file createnew $fnFullPath $filesize
    #upload file
    $uri = New-Object -TypeName System.Uri -ArgumentList ($ftp -f $fn);
    $webclient.UploadFile($uri, $fnFullPath);
    #clean-up
    remove-item $fnFullPath
    #sleep
    Start-Sleep -seconds $sleepTimeSecs
}

2015-05-19

Hunka.Junk Files

Filed under: Microsoft, Movies, Technology — Developer42 @ 17:21

A few times, even with capacity planning in place, we’ve hit disk capacity limits.
When this occurs you search for content which can be deleted to free up some space to get you moving again.
Here’s my new quick fix for this: HunkaJunk files.
Create dummy files which take up space, but can be quickly & easily deleted without risk; giving you the wiggle room you need to get moving again.

There are various ways this could be done, but a useful tool is FSUtil.
This allows you to create blank files which take up a defined number of bytes.
e.g. to create a 1GB file you could run the following code from the Windows command line.

fsutil file createnew c:\PurgeForDiskSpace\Hunka.Junk 1073741824

– c:\PurgeForDiskSpace\Hunka.Junk – is the filename of your junk file (you can call it anything; just make it obvious)
– 1073741824 – is the filesize in bytes. NB: 1GB = 1024MB = 1024 * 1024 KB = 1024 * 1024 * 1024 bytes = 1,073,741,824 bytes.
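
If you’d rather stay in PowerShell than shell out to fsutil, the same can be achieved via .NET; a minimal sketch (note that PowerShell understands the 1GB literal natively):

#create a 1GB junk file without writing a gigabyte of data through the pipeline
$fs = [System.IO.File]::Create('c:\PurgeForDiskSpace\Hunka.Junk')
$fs.SetLength(1GB) #1GB = 1,073,741,824 bytes
$fs.Close()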

2015-05-18

Microsoft Dynamics AX 2012 Chart of Accounts Advanced Validation Rules SQL

A follow up to an earlier article: Microsoft Dynamics AX 2012 Chart of Accounts Validation Rules SQL.

The Advanced Rules can be found in AX under: General Ledger > Setup > Chart of accounts > Configure account structures > select account structure > Advanced rule.
Here’s a small sample of our advanced rules from AX:

[Screenshot: Advanced Rules in AX]

Here’s the same rule (results row 9), as produced by the below SQL:

[Screenshot: AX Advanced Rules in SQL]

Full SQL:

select dr.name AdvancedRule
, dr.description Name
, a.Name AccountStructure
, a.Description ASDescription
, case when dr.IsDraft =1 then 'Draft' else 'Active' end [Status] --seems to just duplicate the IsDraft info
, dr.IsDraft Draft
, drc.Filter
, dh.Name ARSAdvancedRuleStructure
, dh.DESCRIPTION ARSName
from DimensionRule dr
inner join DimensionHierarchy a on a.recid = dr.AccountStructure and a.Partition = dr.Partition
inner join (
       select drcO.DimensionRule
       , drcO.Partition
       , stuff
       (
              (
                     select ' And ' + da.Name + ' '
                     + case
                           when coalesce(drcI.WildCardString,'') > ''
                                  then case
                                         when drcI.WildCardString like '\%%\%' escape '\' then 'contains '
                                         when drcI.WildCardString like '%\%' escape '\' then 'begins with '
                                         when drcI.WildCardString like '\%%' escape '\' then 'ends with '
                                         else 'equals '
                                  end + quotename(replace(drcI.WildCardString,'%',''),'''')
                           when drcI.IsFromOpen = drcI.IsToOpen then --if both are 0 or both are 1 (second scenario seems weird, but that's how the app behaves)
                                  case
                                         when drcI.RangeFrom = drcI.RangeTo then 'Is ' + quotename(drcI.RangeFrom,'''')
                                         else 'Between ' + quotename(drcI.RangeFrom,'''') + ' through ' + quotename(drcI.RangeTo,'''')
                                  end
                                            --in the below statements IsFromOpen and IsToOpen seem to behave backwards to what you'd expect; but again that's what the app does
                           when drcI.IsFromOpen=1 and drcI.IsToOpen=0 then 'Greater than or equal to ' + quotename(drcI.RangeFrom,'''')
                           when drcI.IsFromOpen=0 and drcI.IsToOpen=1 then 'Less than or equal to ' + quotename(drcI.RangeTo,'''')
                           else '-UNEXPECTED SCENARIO; SPEAK TO JB-' --this should never happen
                     end 
                     FROM DimensionRuleCriteria drcI
                     inner join DimensionAttribute da on da.RecId = drcI.DimensionAttribute and da.Partition = drcI.Partition
                     where drcI.DimensionRule = drcO.DimensionRule and drcI.Partition = drcO.Partition
                     order by da.Name --drcI.RecId
                     for xml path(''), type
              ).value('.','nvarchar(max)')
       ,1,4,'Where') Filter
       FROM DimensionRuleCriteria drcO
       group by drcO.Partition
       , drcO.DimensionRule
) drc on drc.DimensionRule = dr.RecId and drc.Partition = dr.Partition
inner join DimensionRuleAppliedHierarchy drah on drah.DimensionRule = dr.RecId and drah.Partition = dr.Partition
inner join DimensionHierarchy dh on dh.recid = drah.DimensionHierarchy and dh.Partition = drah.Partition
order by a.Name, dr.name, dh.Name, dr.IsDraft

NB: One anomaly you may notice is some rules appear twice; once as Draft and once as Active. This is where an existing rule is being edited; the existing rule remains active whilst its replacement is created in draft status. When you browse the rules you’ll see the draft rule; however the active rule is the one being applied in any validation.

PowerShell Script :: Combine-Paths

Filed under: Microsoft, Technology — Developer42 @ 18:34

When you wish to create paths based on various combinations of strings, the following accepts a root plus an array of arrays and returns all combinations:

cls
function combine-paths([string]$root,[array]$paths)
{
    if(($paths -ne $null) -and ($paths.length -gt 0))
    {
        [array]$roots = $paths | select -first 1
        [array]$tail = @($paths | select -skip 1) #@() used to prevent final array from becoming an array of strings instead of a single-element array containing an array of strings
        $roots | %{combine-paths -root (join-path $root $_) -paths $tail}
    }
    else
    {
        $root
    }
}

$x = ('PROD','UAT','SIT'),('SystemA','SystemB','SystemC'),'Interfaces',('In','Out')
combine-paths -root '\\myFileShare\' -paths $x

NB: If you had multiple roots you could do the following:

'c:\','\\somewhere' | %{combine-paths -root $_ -paths $x}

Alternatively, amend the function to become a cmdlet taking the root from the pipeline, to simplify the above further; or add logic to allow the root to be blank/unspecified, passing `$_` instead of `(join-path $root $_)` where the root is blank/null.
I’ve not done that here in order to keep the code short & simple.
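
For reference, a pipeline-enabled variant might look something like the below; an untested sketch (Combine-Paths2 is just a name chosen to avoid clashing with the function above):

function Combine-Paths2
{
    param([Parameter(ValueFromPipeline=$true)][string]$root, [array]$paths)
    process
    {
        if(($paths -ne $null) -and ($paths.length -gt 0))
        {
            [array]$heads = $paths | select -first 1
            [array]$tail = @($paths | select -skip 1)
            $heads | %{ (join-path $root $_) | Combine-Paths2 -paths $tail }
        }
        else
        {
            $root
        }
    }
}

'c:\','\\somewhere' | Combine-Paths2 -paths $x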

2015-03-09

Microsoft Dynamics AX 2012 Chart of Accounts Validation Rules SQL

In doing some analysis work on performance, a consultant recently asked for an extract of all of our AX2012’s COA validation rules.
The below SQL is an attempt at regenerating the rules based on the data in the various related tables.
NB: I’m not 100% certain that this gives all rules as they’d appear in the application, but a few initial tests prove promising; so hopefully this SQL can be reused by others with similar requirements.

Here’s a small sample of our validation rules from AX:

[Screenshot: AX 2012 Chart Of Accounts Validation Rules]

Here are the same rules, as produced by the below SQL:

[Screenshot: AX 2012 Chart Of Accounts Validation Rules SQL Results]

Full SQL:

set transaction isolation level read uncommitted
go

;with c1 as
(
       select DimensionConstraintNode
       , STUFF
       (
              (
                     SELECT ';' + case 
                           when WILDCARDSTRING > '' then WILDCARDSTRING 
                           when RangeFrom=RangeTo then RangeFrom 
                           else concat(RangeFrom,'..',RangeTo) 
                     end 
                     FROM DimensionConstraintNodeCriteria inside
                     where inside.DimensionConstraintNode = outside.DimensionConstraintNode
                     ORDER BY ORDINAL, RECID
                     FOR XML PATH (''),TYPE
              ).value('.','nvarchar(max)')
              ,1,1,''
       ) AS Rules
       from DimensionConstraintNodeCriteria outside
       group by DimensionConstraintNode
)
, ActiveDCN as
(
       select dct.dimensionhierarchy 
       , dcn.DIMENSIONCONSTRAINTTREE
       , dhl.level_
       , dhl.dimensionattribute
       , da.name AttributeName
       , dcn.dimensionhierarchylevel
       , dcn.PARENTCONSTRAINTNODE
       , dcn.RecId
       , c1.Rules
       from DimensionConstraintTree dct 
       inner join DIMENSIONHIERARCHYLEVEL dhl on dhl.dimensionhierarchy = dct.dimensionhierarchy
       inner join DIMENSIONATTRIBUTE da on da.recid = dhl.dimensionattribute 
       inner join DimensionConstraintNode dcn on dcn.DIMENSIONCONSTRAINTTREE = dct.recid and dcn.dimensionhierarchylevel = dhl.RECID
       left outer join c1
       on c1.DimensionConstraintNode = dcn.recid
    where (dcn.activeto = '1900-01-01 00:00' or dcn.activeto >= getutcdate())
    and (dcn.activefrom = '1900-01-01 00:00' or dcn.activefrom <= getutcdate())
)
select dh.name HierarchyName
, concat(dcn1.AttributeName,': ',coalesce(dcn1.Rules,'*')) R1
, concat(dcn2.AttributeName,': ',coalesce(dcn2.Rules,'*')) R2
, concat(dcn3.AttributeName,': ',coalesce(dcn3.Rules,'*')) R3
, concat(dcn4.AttributeName,': ',coalesce(dcn4.Rules,'*')) R4
from DIMENSIONHIERARCHY dh
inner join ActiveDCN dcn1 on dcn1.dimensionhierarchy = dh.recid and dcn1.level_ = 1 --not sure if level’s required; it does affect results; keeping in for now as improves performance (though also reduces result set by 75%)
left outer join ActiveDCN dcn2 on dcn2.ParentConstraintNode = dcn1.Recid --and dcn2.level_ = 2
left outer join ActiveDCN dcn3 on dcn3.ParentConstraintNode = dcn2.Recid --and dcn3.level_ = 3
left outer join ActiveDCN dcn4 on dcn4.ParentConstraintNode = dcn3.Recid --and dcn4.level_ = 4
--where dcn1.Rules like '%640103%' --to produce screenshot
order by dh.name, dcn1.DIMENSIONCONSTRAINTTREE

2014-10-13

PowerShell Script :: Get HotFix Info from Local Machine & Web

Filed under: Microsoft, Technology — Developer42 @ 22:02

In answering a question on Stack Overflow[1], I discovered that the Get-Hotfix cmdlet doesn’t list all hotfixes on your machine. Rather, if the fix is included in a CU, details of the installed CU will be presented, with no apparent way to drill down into the information and check for a specific fix.
So far as I can tell, there are no services / open databases which would allow you to programmatically retrieve this information. The only way currently is to manually head to the CU or fix’s related knowledge base site and read the contents.
As a very rough hack-around, I created a scraping script which interrogates the MS knowledge base for information on each hotfix, determining whether it’s a CU or not (based on the hotfix’s description in the page’s title) and, if so, trying to find all related fixes. I only go down one level, to avoid the risk of an infinite loop and to save time; should anyone want to go deeper, a caching mechanism could be used to store previous results, preventing loops and potentially improving performance (see the sketch after the demos below).
As with most of the scripts on this site, this is more for me to play with PowerShell than to be of much practical use; but hopefully it’s of use to someone…

function get-hotfixInfo()
{
	process 
	{
		#$url = "http://support2.microsoft.com/kb/{0}" -f ($_.HotFixId -replace "KB(\d*)",'$1')
		$url = $_.Caption
		if($url -like "*.microsoft.com")
		{
			$url = "{0}/{1}" -f $url,($_.HotFixId -replace "(KB)(\d*)",'$1/$2') 
		}
		try
		{
			$response = (Invoke-Webrequest $url -ea stop)
		} catch {
			$response = @{
				ParsedHTML = @{
					Title = "{0}`n`nURL: {1}" -f $error[0].Exception,$url 
				}
			}
		}
		$html = $response.ParsedHTML
		$isCU = $html.title -like "*cumulative*update*"
		$kblets = $null
		if($isCU) #this bit can be even slower than the above, hence only run if we believe we have a CU
		{
			$baseUri = $response.BaseResponse.ResponseURI
			$kblets = $html.getElementsByTagName('a') `
				| ? { ($_.parentNode.tagname -eq 'TD') -and ($_.parentNode.nextsibling.tagname -eq 'TD') } `
				| ? { $_.className -eq 'KBlink' } `
				| % { New-Object -TypeName PSObject -Prop @{
					Id = "KB{0}" -f $_.innerText
					Uri = (new-object System.URIBuilder($baseUri.scheme,$baseUri.dnssafehost,$baseUri.port,($_.href -replace "about:/(.*)",'$1'))).ToString()
					Title = $_.parentNode.nextsibling.innerText
				}}
		}
		New-Object -TypeName PSObject -Prop @{
			Id = $_.HotFixId
			IsCU = $isCU 
			Title = $html.title
			Source = $_.source
			Description = $_.description
			InstalledBy = $_.installedby
			InstalledOn = $_.installedon
			Uri = "http://support2.microsoft.com/kb/{0}" -f ($_.HotFixId -replace "KB(\d*)",'$1')
			KBlets = $kblets
		}
	}
}

#demos

#get the hotfix id for IE11 CU Feb 2014, and its component hotfixes
get-hotfix -id kb2909921 | get-hotfixInfo | %{ $_; $_.KBlets | %{ $_; }} | ft -property id,title -autosize

#get the first 2 cumulative update hotfixes
get-hotfix | select -first 2 | get-hotfixInfo | ?{$_.isCU} 
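
As mentioned above, anyone recursing into each related fix would want a cache to guard against loops and repeat lookups; a minimal sketch of the idea (get-hotfixInfoCached is a hypothetical wrapper, not part of the script above):

$script:kbCache = @{} #results of previous lookups, keyed on hotfix id
function get-hotfixInfoCached()
{
	process
	{
		if(-not $script:kbCache.ContainsKey($_.HotFixId))
		{
			$script:kbCache[$_.HotFixId] = ($_ | get-hotfixInfo)
		}
		$script:kbCache[$_.HotFixId]
	}
}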

