PowerShell & SQL :: A Quick Fix for the SQLPS Problems

Filed under: Microsoft, powershell, SQL Server, Technology — Developer42 @ 18:19

There are a couple of issues with the SQLPS PowerShell module.

  1. It changes the current location to the SQLSERVER:\ provider path (so your prompt becomes PS SQLSERVER:\>).
  2. It may cause warnings (see end of this post for examples).

Calling the Import-Module-SQLPS function (the definition of which you’ll find below) instead of running Import-Module SQLPS will prevent such issues.

function Import-Module-SQLPS {
    #pushd and popd prevent the import from changing the current directory
    #3>&1 redirects the warning stream to the standard output stream
    #out-null then swallows that output, so we don't see the annoying warnings described above
    pushd
    import-module sqlps 3>&1 | out-null
    popd
}

"Is SQLPS Loaded?"
if(get-module sqlps){"yes"}else{"no"}

Import-Module-SQLPS

"Is SQLPS Loaded Now?"
if(get-module sqlps){"yes"}else{"no"}

NB: Though from PowerShell 3.0 onwards you don’t need to explicitly import modules, for SQLPS I’d recommend that you do import it. This ensures you control when in your script the import happens, rather than the first command from that module (e.g. an Invoke-SqlCmd statement) triggering the load, and thus your directory being changed and warnings being spewed mid-script.



SQL Server :: Compare Database Schemas

This is a script to list basic differences in table definitions between two databases.

Background Waffle
When we make database changes in a transactional system, we need to ensure that these won’t adversely affect our BI solution. Normally our developers give us this information when we plan which items to put into a release, so we can liaise with BI; unfortunately the developers’ machines are down at present due to a disaster at the data center of the company to whom we outsourced development. Since the changes for the release are already in our UAT environment (and only those changes are there), we could determine what would change in production by comparing the schemas of the two environments.

My first thought was to use the Database Comparison Tool that comes with Visual Studio. However, whilst this does a great job for a developer looking for and resolving differences, it’s not so easy to communicate the information to others (i.e. giving a before and after image of columns side by side). You either have to click on each difference to get a definition of the before and after, then make notes or take screenshots, or you have to generate the DDL, in which case you don’t see the before picture; only the code to amend from the current to the to-be definition.

Instead I realised that I could simply pull the definitions of the tables & columns from SQL, compare these and list the differences; resulting in the code below.
This code can handle where both databases are on the same instance, or where they’re on different instances. For the latter scenario you’d need to create a linked server.
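If you’ve not already got one, a linked server can be registered with the standard system procedures. A minimal sketch (the server\instance name below is a placeholder matching the ones used in the script):

```sql
--register the UAT instance as a linked server;
--the name must match the four-part names used in the comparison query
exec sp_addlinkedserver @server = N'myUatDbServer\uatInstance', @srvproduct = N'SQL Server'

--map the current login to its equivalent on the linked server
exec sp_addlinkedsrvlogin @rmtsrvname = N'myUatDbServer\uatInstance', @useself = N'TRUE'
```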
If you frequently compare different combinations of databases and want to avoid amending the instance and catalog names in the code each time you may want to look into views or synonyms (potentially aliases work too if the catalog name is consistent across instances).

Since our BI solution uses replication to get the data from the transactional system, I also included information on which columns were replicated; thus we can filter on those replicated to see just those which BI would be interested in, or can ignore replication to see all differences.


  1. We’re not comparing everything; only basic table and column data.
  2. The maxLength value gives length in bytes; not characters (see previous post on how to correct for this).
  3. There are probably other issues; this was a quick fix to resolve an immediate issue which worked for us; there may be issues we’ve not encountered (e.g. maybe the linked server only gives access to a subset of the tables; thus giving false responses)…
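On the byte-vs-character point: an nvarchar(10) column reports max_length = 20, because nchar/nvarchar store two bytes per character. If all the columns you care about are Unicode string types, a crude correction (a standalone sketch, not part of the comparison script) is simply to halve the reported value:

```sql
--nchar/nvarchar report max_length in bytes (2 per character), so halve it
select c.name, c.max_length / 2 as length_in_chars
from sys.columns c
inner join sys.types ty
       on ty.user_type_id = c.user_type_id
where in ('nchar','nvarchar')
and c.max_length > 0 --nvarchar(max) reports -1, which this doesn't handle
```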


set transaction isolation level read uncommitted

;with uat as
(
       select t.name TableName
       , c.Name ColumnName
       , ty.name ColType
       , c.max_length ColMaxLength
       , coalesce(c.is_nullable,0) ColIsNullable
       , c.scale ColScale
       , c.[precision] ColPrecision
       , coalesce(c.collation_name,'') ColCollation
       , c.is_replicated
       from [myUatDbServer\uatInstance].[myUatDbCatalog].sys.tables t
       inner join [myUatDbServer\uatInstance].[myUatDbCatalog].sys.columns c
              on c.object_id = t.object_id
       inner join [myUatDbServer\uatInstance].[myUatDbCatalog].sys.types ty
              on ty.system_type_id = c.system_type_id
              and ty.user_type_id = c.user_type_id
)
, prd as
(
       select t.name TableName
       , c.Name ColumnName
       , ty.name ColType
       , c.max_length ColMaxLength
       , coalesce(c.is_nullable,0) ColIsNullable
       , c.scale ColScale
       , c.[precision] ColPrecision
       , coalesce(c.collation_name,'') ColCollation
       , c.is_replicated
       from [myProdDbServer\prodInstance].[myProdDbCatalog].sys.tables t
       inner join [myProdDbServer\prodInstance].[myProdDbCatalog].sys.columns c
              on c.object_id = t.object_id
       inner join [myProdDbServer\prodInstance].[myProdDbCatalog].sys.types ty
              on ty.system_type_id = c.system_type_id
              and ty.user_type_id = c.user_type_id
)
select coalesce(uat.TableName, prd.TableName) TableName
, coalesce(uat.ColumnName, prd.ColumnName) ColumnName
, case
       when prd.TableName is null and not exists (select top 1 1 from prd x where x.TableName = uat.TableName) then 'Add Table'
       when uat.TableName is null and not exists (select top 1 1 from uat x where x.TableName = prd.TableName)  then 'Remove Table'
       when prd.ColumnName is null then 'Add Column'
       when uat.ColumnName is null then 'Remove Column'
       else 'Change Column Definition'
end [AXtion] --our transaction system's Dynamics AX; I'm hilarious :S
, prd.ColType ColTypeFrom
, uat.ColType ColTypeTo
, prd.ColMaxLength ColMaxLengthFrom --not going to fuss about char byte size for now; just want to get a list of changes
, uat.ColMaxLength ColMaxLengthTo --not going to fuss about char byte size for now; just want to get a list of changes
, prd.ColIsNullable ColIsNullableFrom
, uat.ColIsNullable ColIsNullableTo
, prd.ColCollation ColCollationFrom
, uat.ColCollation ColCollationTo
, prd.ColPrecision ColPrecisionFrom
, uat.ColPrecision ColPrecisionTo
, prd.ColScale ColScaleFrom
, uat.ColScale ColScaleTo
, prd.is_replicated PrdIsReplicated --\_these aren't compared; just returned to make it easy to tell what's interesting to BI
, uat.is_replicated UatIsReplicated --/
from uat
full outer join prd
       on prd.TableName = uat.TableName
       and prd.ColumnName = uat.ColumnName
where prd.TableName is null
or uat.TableName is null
or prd.ColumnName is null
or uat.ColumnName is null
or (uat.ColType != prd.ColType)
or (uat.ColMaxLength != prd.ColMaxLength)
or (uat.ColIsNullable != prd.ColIsNullable) 
or (uat.ColCollation != prd.ColCollation) 
or (uat.ColPrecision != prd.ColPrecision) 
or (uat.ColScale != prd.ColScale)
order by coalesce(uat.TableName, prd.TableName) 
, coalesce(uat.ColumnName, prd.ColumnName) 


SQL Server :: Getting Column Length of Various Columns

Filed under: Microsoft, SQL Server, Technology — Developer42 @ 19:17

Sometimes we need to lookup the maximum length allowed for various columns in a number of tables.
This script allows you to specify a list of table/column pairs into the table variable @interesting, then pulls back the max_length information for these, taking into account the data type (i.e. so data types storing 2 byte character sets can cope predictably).

It works by creating a temp table with a column holding a single character of each (string) data type, then reading the length of that field to get the length of a single character of that data type, and dividing the max length of the fields we’re after by this multiplier to get the length in characters rather than bytes.

declare @interesting table(col sysname, tab sysname)
insert @interesting
values ('someColumn','someTable')

declare @JBDataTypes table(system_type_id int, user_type_id int, max_length int)
declare @sql nvarchar(max) 
select @sql = coalesce(@sql + ',','create table #dataTypesJB (') + quotename(name) + ' ' + name  
from sys.types 
where collation_name is not null --string fields only
set @sql = @sql + '); 
select system_type_id
, user_type_id 
, max_length 
from tempdb.sys.columns 
where object_id = object_id(''tempdb..#dataTypesJB''); 
drop table #dataTypesJB'
insert into @JBDataTypes exec (@sql)

select c.max_length / coalesce(jb.max_length,1) max_length 
, i.col
from @interesting i
inner join sys.columns c 
on c.object_id = object_id(
and = i.col
left outer join @JBDataTypes jb
on jb.system_type_id = c.system_type_id
and jb.user_type_id = c.user_type_id 
order by, i.col


Microsoft Dynamics AX 2012 Chart of Accounts Advanced Validation Rules SQL

A follow up to an earlier article: Microsoft Dynamics AX 2012 Chart of Accounts Validation Rules SQL.

The Advanced Rules can be found in AX under: General Ledger > Setup > Chart of accounts > Configure account structures > select account structure > Advanced rule.
Here’s a small sample of our advanced rules from AX:

Advanced Rules in AX

Here’s the same rule (results row 9), as produced by the below SQL:

AX Advanced Rules in SQL

Full SQL:

select AdvancedRule
, dr.description Name
, a.Name AccountStructure
, a.Description ASDescription
, case when dr.IsDraft = 1 then 'Draft' else 'Active' end [Status] --seems to just duplicate the IsDraft info
, dr.IsDraft Draft
, drc.Filter
, dh.Name ARSAdvancedRuleStructure
from DimensionRule dr
inner join DimensionHierarchy a on a.recid = dr.AccountStructure and a.Partition = dr.Partition
inner join (
       select drcO.DimensionRule
       , drcO.Partition
       , stuff(
              (
                     select ' And ' + da.Name + ' '
                     + case
                           when coalesce(drcI.WildCardString,'') > ''
                                  then case
                                         when drcI.WildCardString like '\%%\%' escape '\' then 'contains '
                                         when drcI.WildCardString like '%\%' escape '\' then 'begins with '
                                         when drcI.WildCardString like '\%%' escape '\' then 'ends with '
                                         else 'equals '
                                  end + quotename(replace(drcI.WildCardString,'%',''),'''')
                           when drcI.IsFromOpen = drcI.IsToOpen then --if both are 0 or both are 1 (second scenario seems weird, but that's how the app behaves)
                                  case
                                         when drcI.RangeFrom = drcI.RangeTo then 'Is ' + quotename(drcI.RangeFrom,'''')
                                         else 'Between ' + quotename(drcI.RangeFrom,'''') + ' through ' + quotename(drcI.RangeTo,'''')
                                  end
                                            --in the below statements IsFromOpen and IsToOpen seem to behave backwards to what you'd expect; but again that's what the app does
                           when drcI.IsFromOpen=1 and drcI.IsToOpen=0 then 'Greater than or equal to ' + quotename(drcI.RangeFrom,'''')
                           when drcI.IsFromOpen=0 and drcI.IsToOpen=1 then 'Less than or equal to ' + quotename(drcI.RangeTo,'''')
                           else '-UNEXPECTED SCENARIO; SPEAK TO JB-' --this should never happen
                     end
                     FROM DimensionRuleCriteria drcI
                     inner join DimensionAttribute da on da.RecId = drcI.DimensionAttribute and da.Partition = drcI.Partition
                     where drcI.DimensionRule = drcO.DimensionRule and drcI.Partition = drcO.Partition
                     order by da.Name --drcI.RecId
                     for xml path(''), type
              ).value('.','nvarchar(max)')
       ,1,4,'Where') Filter
       FROM DimensionRuleCriteria drcO
       group by drcO.Partition
       , drcO.DimensionRule
) drc on drc.DimensionRule = dr.RecId and drc.Partition = dr.Partition
inner join DimensionRuleAppliedHierarchy drah on drah.DimensionRule = dr.RecId and drah.Partition = dr.Partition
inner join DimensionHierarchy dh on dh.recid = drah.DimensionHierarchy and dh.Partition = drah.Partition
order by a.Name,, dh.Name, dr.IsDraft

NB: One anomaly you may notice is some rules appear twice; once as Draft and once as Active. This is where an existing rule is being edited; the existing rule remains active whilst its replacement is created in draft status. When you browse the rules you’ll see the draft rule; however the active rule is the one being applied in any validation.
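So if you only want the rules that validation is actually enforcing, you can filter the draft copies out of the query above; a one-line tweak rather than part of the original script:

```sql
--restrict the main query to active rules only; in-progress draft edits are excluded
where dr.IsDraft = 0
```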


Microsoft Dynamics AX 2012 Chart of Accounts Validation Rules SQL

In doing some analysis work on performance, a consultant recently asked for an extract of all of our AX2012’s COA validation rules.
The below SQL is an attempt at regenerating the rules based off the data in the various related tables.
NB: I’m not 100% certain that this gives all rules as they’d appear in the application, but a few initial tests prove promising; so hopefully this SQL can be reused by others with similar requirements.

Here’s a small sample of our validation rules from AX:

AX 2012 Chart Of Accounts Validation Rules Screenshot

Here’s the same rules, as produced by the below SQL:

AX 2012 Chart Of Accounts Validation Rules SQL Results Screenshot

Full SQL:

set transaction isolation level read uncommitted

;with c1 as
(
       select DimensionConstraintNode
       , STUFF
       (
              (
                     SELECT ';' + case 
                           when WILDCARDSTRING > '' then WILDCARDSTRING 
                           when RangeFrom=RangeTo then RangeFrom 
                           else concat(RangeFrom,'..',RangeTo) 
                     end
                     FROM DimensionConstraintNodeCriteria inside
                     where inside.DimensionConstraintNode = outside.DimensionConstraintNode
                     ORDER BY ORDINAL, RECID
                     FOR XML PATH (''),TYPE
              ).value('.','nvarchar(max)')
       ,1,1,'') AS Rules
       from DimensionConstraintNodeCriteria outside
       group by DimensionConstraintNode
)
, ActiveDCN as
(
       select dct.dimensionhierarchy 
       , dhl.level_
       , dhl.dimensionattribute
       , da.Name AttributeName
       , dcn.dimensionhierarchylevel
       , dcn.ParentConstraintNode
       , dcn.RecId
       , c1.Rules
       from DimensionConstraintTree dct 
       inner join DIMENSIONHIERARCHYLEVEL dhl on dhl.dimensionhierarchy = dct.dimensionhierarchy
       inner join DIMENSIONATTRIBUTE da on da.recid = dhl.dimensionattribute 
       inner join DimensionConstraintNode dcn on dcn.DIMENSIONCONSTRAINTTREE = dct.recid and dcn.dimensionhierarchylevel = dhl.RECID
       left outer join c1
       on c1.DimensionConstraintNode = dcn.recid
    where (dcn.activeto = '1900-01-01 00:00' or dcn.activeto >= getutcdate())
    and (dcn.activefrom = '1900-01-01 00:00' or dcn.activefrom <= getutcdate())
)
select HierarchyName
, concat(dcn1.AttributeName,': ',coalesce(dcn1.Rules,'*')) R1
, concat(dcn2.AttributeName,': ',coalesce(dcn2.Rules,'*')) R2
, concat(dcn3.AttributeName,': ',coalesce(dcn3.Rules,'*')) R3
, concat(dcn4.AttributeName,': ',coalesce(dcn4.Rules,'*')) R4
from DimensionHierarchy dh
inner join ActiveDCN dcn1 on dcn1.dimensionhierarchy = dh.recid and dcn1.level_ = 1 --not sure if level's required; it does affect results; keeping in for now as it improves performance (though it also reduces the result set by 75%)
left outer join ActiveDCN dcn2 on dcn2.ParentConstraintNode = dcn1.Recid --and dcn2.level_ = 2
left outer join ActiveDCN dcn3 on dcn3.ParentConstraintNode = dcn2.Recid --and dcn3.level_ = 3
left outer join ActiveDCN dcn4 on dcn4.ParentConstraintNode = dcn3.Recid --and dcn4.level_ = 4
--where dcn1.Rules like '%640103%' --to produce screenshot


T-SQL: Generate Series: Getting a list of numbers in a given range.

Filed under: Microsoft, SQL Server, Technology — Developer42 @ 01:27

I recently came across the Postgres generate_series function whilst reading a blog post.
So far as I can tell, there’s no equivalent in T-SQL. To make up for this, I coded my own, making use of the recursive nature of common table expressions:

create function dbo.generate_series
(
	  @start bigint
	, @stop bigint
	, @step bigint = 1
	, @maxResults bigint = 0 --0 = unlimited
)
returns @results table(n bigint)
as
begin

	--avoid infinite loop (i.e. where we're stepping away from stop instead of towards it)
	if @step = 0 return
	if @start > @stop and @step > 0 return
	if @start < @stop and @step < 0 return
	--ensure we don't overshoot
	set @stop = @stop - @step

	--treat negatives as unlimited
	set @maxResults = case when @maxResults < 0 then 0 else @maxResults end

	--generate output
	;with myCTE (n,i) as 
	(
		--start at the beginning
		select @start
		, 1
		union all
		--increment in steps
		select n + @step
		, i + 1
		from myCTE 
		--ensure we've not overshot (accounting for direction of step)
		where (@maxResults=0 or i<@maxResults)
		and (
			   (@step > 0 and n <= @stop)
			or (@step < 0 and n >= @stop)
		)
	)
	insert @results
	select n 
	from myCTE
	option (maxrecursion 0) --sadly we can't use a variable for this; however checks above should mean that we have a finite number of recursions / @maxResults gives users the ability to manually limit this 

	--all good	
	return
end

Example Usage:

--check we get expected results
select * from dbo.generate_series(1, 10, default, default)
select * from dbo.generate_series(10, 5, -1, default)
select * from dbo.generate_series(1, 10, 4, default)
select * from dbo.generate_series(1, 10, default, 2)
select * from dbo.generate_series(1, 10, 4, -1)

--check we don't get results if we have "invalid" input
select * from dbo.generate_series(1, 10, 0, default)
select * from dbo.generate_series(10, 1, default, default)
select * from dbo.generate_series(10, 5, 1, default)
select * from dbo.generate_series(1, 10, -4, default)

NB: Should you wish to generate a series of dates instead of a series of numbers, check my comments here:
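For instance (a sketch assuming the function above has been created in dbo), a date range is just the numeric series added to a start date:

```sql
--all 31 dates in January 2015
select dateadd(day, n, '20150101') as theDate
from dbo.generate_series(0, 30, default, default)
```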


Countdown Numbers Round Solver in T-SQL

Filed under: Microsoft, SQL Server, Technology — Developer42 @ 14:00

Every now and then I try to create programs to solve puzzles as a way to introduce myself to new ideas & test my coding skills. My latest self-challenge was to write SQL code to solve the Countdown Numbers Round. The idea: given 6 random numbers (each of which may be used at most once), use only basic arithmetic (i.e. addition, subtraction, multiplication and division) to get to a randomly generated target number. e.g. given the numbers 3, 7, 9, 10, 50 & 75 and the target 396, one solution would be 3 * (75 + 50 + 7).

Here’s how I approached it in T-SQL:
Run this solution at SQL Fiddle

create procedure dbo.CountdownNumberSolver
(
	@target int output --output used to allow these to be referenced outside of the proc if rnd numbers are generated
	, @number1 int output
	, @number2 int output
	, @number3 int output
	, @number4 int output
	, @number5 int output
	, @number6 int output 
) as
	--if the user didn't specify values randomly pick some.
	--   Small Number (1-10):			1 + 10 * rand() 
	--   Large Number (25,50,75,100):	floor(1 + 4 * rand()) * 25
	--   Target (1-999):				1 + (999 * rand())
	select @target = ISNULL(@target, 1 + (999 * rand())) --I assume 0 isn't a valid solution in countdown?
		, @number1 = ISNULL(@number1,1 + 10 * rand())
		, @number2 = ISNULL(@number2,1 + 10 * rand())
		, @number3 = ISNULL(@number3,1 + 10 * rand())
		, @number4 = ISNULL(@number4,1 + 10 * rand())
		, @number5 = ISNULL(@number5,floor(1 + 4 * rand()) * 25)
		, @number6 = ISNULL(@number6,floor(1 + 4 * rand()) * 25)

	--output question
    /* commented out as sql fiddle only returns first result set
	select @target [target]
		, @number1 [#1]
		, @number2 [#2]
		, @number3 [#3]
		, @number4 [#4]
		, @number5 [#5]
		, @number6 [#6]
	*/
	--records combinations tried so far / partial equations
	create table #solutions
	(
		id bigint not null identity(1,1) primary key clustered
		, binaryFlags tinyint not null
		, number float not null
		, equation nvarchar(256) not null
	)
	create index ix_solutions_number on #solutions(number)
	--start with the given values - the id stuff is just to make it easy to reuse this procedure should we want a different # of source numbers
	insert #solutions
	select power(2,id) 
	, number
	, CAST(number as nvarchar(3))
	from (values (1, @number1)
		, (2, @number2)
		, (3, @number3)
		, (4, @number4)
		, (5, @number5)
		, (6, @number6)
	)x(id, number)

	declare @previousCount bigint = 0
	, @newCount bigint = (select COUNT(1) from #solutions)
	, @tempCount bigint
	while @previousCount < @newCount --repeat whilst there are still untried combos
		and not exists (select top 1 1 from #solutions where number=@target) --and we're still looking for a solution
	begin
		set @tempCount = @newCount
		insert #solutions
		select a.binaryFlags | b.binaryFlags
		, case x.o
			when '+' then a.number + b.number
			when '-' then a.number - b.number
			when '*' then a.number * b.number
			when '/' then a.number / b.number
		  end
		, '(' + a.equation + ' ' + x.o + ' ' + b.equation + ')'
		from #solutions a
		inner join #solutions b
			on a.binaryFlags & b.binaryFlags = 0 --ensure we're not reusing source numbers
			and != --minor (potential) optimisation
			and --no point comparing things we've already checked (this may allow for some performance improvement- it doesn't affect the logic)
			( > @previousCount
				or > @previousCount
			)
		inner join --doing things this way means we only need to find the new combinations from a&b once, not 4 times
		(
			values ('+'), ('-'), ('*'), ('/')
		) x(o)
			on  not(x.o = '/' and b.number = 0) --avoid div 0 errors
			and not(x.o = '-' and a.number - b.number = 0) --don't create 0 value entries (if 0's not an allowed solution this result just adds overhead without value)
			and not(x.o = '+' and a.number + b.number = 0) --don't create 0 value entries (if 0's not an allowed solution this result just adds overhead without value)
			and not(x.o = '+' and > --commutative function optimisation
			and not(x.o = '*' and > --commutative function optimisation
		set @previousCount = @tempCount
		select @newCount = COUNT(1) from #solutions	
	end

	--limited to one answer to fit with the game / avoid the assumption that all possible solutions would be listed 
	--(since we exit the above loop the moment we have an answer rather than continuing to check all combos)
	select top 1 
	  @number1 [#1]
	, @number2 [#2]
	, @number3 [#3]
	, @number4 [#4]
	, @number5 [#5]
	, @number6 [#6]
	, @target  [target] 
	, equation [solution]
	from #solutions 
	where number = @target
	order by len(equation) --show the shortest equation as that's probably simplest
	option (fast 1)	

	if object_id('tempdb..#solutions','U') is not null drop table #solutions

	return 0

Here’s an example of how to run it:

--run the proc using random numbers (the proc replaces nulls with random #s)
exec dbo.CountdownNumberSolver null, null, null, null, null, null, null

--run the proc on user defined values
exec dbo.CountdownNumberSolver 396, 7, 3, 10, 9, 50, 75


Run VBScript from SQL

Filed under: Microsoft, SQL Server, Technology — Developer42 @ 18:13

Though it’s fairly well publicised how to call an existing VBS file from SQL, or how to execute VBScript within a SQL agent job, there’s not much info out there on how to have SQL run VBS directly. The code below allows you to manage your scripts in a database table, creating temp files to host and run the scripts as required. The sp_RunScript stored proc can also be used to run any VBS script, including dynamically generated code.
As always, feedback, suggestions, etc are welcome.


    use [insert-db-name-here]
    go
    ----use below code to enable commands (required for xp_cmdshell to work)
    --exec sp_configure 'show advanced options', 1
    --reconfigure
    --exec sp_configure 'xp_cmdshell', 1  
    --reconfigure
    if OBJECT_ID('vbsScripts') is not null drop table vbsScripts
    create table vbsScripts 
    (
        id bigint not null identity(1,1) constraint pk_vbsScripts primary key clustered 
        , name nvarchar(256) not null constraint uk_vbsScripts_name unique
        , script nvarchar(max) not null
        , timeoutSecs int null constraint df_vbsScripts_timeoutSecs default(86400)--leave as null if you don't want a timeout / defaults to 1 day / 24*60*60
        , batchMode bit not null constraint df_vbsScripts_batchMode default(1)
    )
    go
    /* Gets the temp directory from environment variables; example usage:
        declare @tempPath nvarchar(max)
        exec GetTempDirectory @tempPath out 
        select @tempPath 
    */
    if OBJECT_ID('GetTempDirectory') is not null drop proc GetTempDirectory
    go
    create proc GetTempDirectory(@path nvarchar(max) out)
    as
    begin
        set @path = ''
        declare @tempTable table(data nvarchar(max))
        insert @tempTable exec master..xp_cmdshell 'echo %temp%'
        select top 1 @path = data from @tempTable
        if SUBSTRING(@path,len(@path),1) <> '\' set @path = @path + '\'
    end
    go
    /* Creates a unique filename (using a guid to ensure uniqueness and a datetime to make the name friendlier) */
    if OBJECT_ID('GetTempFilename') is not null drop proc GetTempFilename
    go
    create proc GetTempFilename(@fn nvarchar(max) out)
    as
    begin
        --exec GetTempDirectory @fn out --can just use environment variable - originally had issues testing as was looking at the wrong user's temp directory :/
        set @fn = '%temp%\' + 'sqlTemp_' + replace(replace(replace(convert(nvarchar(24), getutcdate(),127),'-',''),':',''),'.','') + '_' + CAST(NEWID() as nvarchar(36)) + '.tmp'
    end
    go
    if OBJECT_ID('dbo.fn_EscapeDosCharachters') is not null drop function dbo.fn_EscapeDosCharachters
    go
    create function dbo.fn_EscapeDosCharachters
    (
        @text nvarchar(max)
    )
    returns nvarchar(max)
    as
    begin
        set @text = REPLACE(@text,'^','^^')
        set @text = REPLACE(@text,'!','^!')
        set @text = REPLACE(@text,'&','^&')
        set @text = REPLACE(@text,'|','^|')
        set @text = REPLACE(@text,'%','%%')

        --based on experience
        set @text = REPLACE(@text,'>','^>')
        set @text = REPLACE(@text,'<','^<')
        return @text
    end
    go
    if OBJECT_ID('createTempTextFile') is not null drop proc createTempTextFile
    go
    create proc createTempTextFile
        @fn nvarchar(max) out 
                          --the filename to output to (nb: environment variables don't currently work (e.g. you can't use %temp%\myFile.vbs)
                          --works fine with spaces in filename (so far at least)
                          --if user passes null a temporary filename will be auto allocated & returned in this variable
        , @content nvarchar(max)
    as
    begin
        declare @charPos int
        , @cmd varchar(8000) --has to be varchar rather than nvarchar due to xp_cmdshell implementation

        if @fn is null or LEN(@fn)=0
            exec GetTempFilename @fn out

        set @cmd = '@echo.>' + @fn --create a new file for our script output
        EXEC master..xp_cmdshell @cmd, no_output

        set @content = replace(@content,char(13) + char(10), char(10))--ensure uniform line endings (i.e. \r\n -> \n)
        set @content = replace(@content,char(13), char(10))--ensure uniform line endings (i.e. \r -> \n)
        set @content = @content + CHAR(10) --ensure last character of script is new line
        set @charPos = CHARINDEX(char(10),@content)
        while (@charPos > 0)
        begin
            --todo: consider what additional escaping is required to prevent injection issues
            set @cmd = '@echo.' + dbo.fn_EscapeDosCharachters(SUBSTRING(@content,1,@charPos-1)) + '>> ' + @fn
            EXEC master..xp_cmdshell @cmd, no_output
            set @content = SUBSTRING(@content,@charPos+1,len(@content))
            set @charPos = CHARINDEX(char(10),@content)
        end
    end
    go

    if OBJECT_ID('deleteTempTextFile') is not null drop proc deleteTempTextFile
    go
    create proc deleteTempTextFile
        @fn nvarchar(max)
    as
    begin
        declare @cmd varchar(8000)
        if CHARINDEX(' ',@fn)>0 and CHARINDEX('"',@fn)=0 set @fn = QUOTENAME(@fn,'"') --quote paths containing spaces, unless already quoted
        set @cmd = 'del ' + @fn
        EXEC master..xp_cmdshell @cmd, no_output
    end
    go
    if OBJECT_ID('sp_RunScript') is not null drop proc sp_RunScript
    go
    create proc sp_RunScript
        @script nvarchar(max)
        , @arguments nvarchar(max)
        , @timeoutSecs int = null
        , @batchMode bit = 1
        , @tempfileUri nvarchar(max) out
    as
    begin
        declare @cmd varchar(8000) --has to be varchar rather than nvarchar due to xp_cmdshell implementation
        exec createTempTextFile @tempfileUri out, @script
        if CHARINDEX(' ',@tempfileUri)>0 and CHARINDEX('"',@tempfileUri)=0 set @tempfileUri = QUOTENAME(@tempfileUri,'"') --quote paths containing spaces, unless already quoted
        set @cmd = 'cscript ' + @tempfileUri + ' //E:vbscript //NOLOGO '
        --batch mode or interactive
        if @batchMode=1 
            set @cmd = @cmd + '//B '
        else
            set @cmd = @cmd + '//I '
        --should script timeout after x seconds?
        if @timeoutSecs is not null
            set @cmd = @cmd + '//T:' + CAST(@timeoutSecs as nvarchar(18)) + ' '
        set @cmd = @cmd + isnull(@arguments,'')
        --select @cmd --if debugging enable this line to see the script file / etc
        EXEC master..xp_cmdshell @cmd --if required we can capture output as has been done in GetTempDirectory
        exec deleteTempTextFile @tempfileUri --tidyup the temp script - disable this line for debugging
    end
    go
    if OBJECT_ID('sp_RunScriptByID') is not null drop proc sp_RunScriptByID
    go
    create proc sp_RunScriptByID
        @scriptId bigint
        , @arguments nvarchar(max)
    as
    begin
        declare @timeoutSecs int
        , @batchMode bit
        , @script nvarchar(max)
        , @tempfileUri nvarchar(max)
        select @timeoutSecs = timeoutSecs 
        , @batchMode = batchMode
        , @script = script
        from vbsScripts 
        where id = @scriptId
        exec sp_RunScript @script
			, @arguments 
			, @timeoutSecs 
			, @batchMode 
			, @tempfileUri out 
    end
    go
    if OBJECT_ID('sp_RunScriptByName') is not null drop proc sp_RunScriptByName
    go
    --provides a friendly interface to sp_RunScriptByID
    create proc sp_RunScriptByName
        @scriptName nvarchar(256)
        , @arguments nvarchar(max)
    as
    begin
        declare @id bigint

        select @id = id
        from vbsScripts 
        where name = @scriptName

        exec sp_RunScriptByID @id, @arguments
    end


Example Usage:


    --register a new script in the scripts table
    insert vbsScripts 
    select 'demo', '
    option explicit
    dim objFSO, i, path
    path = "c:\example1\"
    wscript.echo "hello" ''show what console output looks like (if interactive)
    for i = 0 to wscript.Arguments.Count-1 ''show that we can handle command line arguments
        wscript.echo wscript.arguments.item(i)
    set objFSO = CreateObject("Scripting.FileSystemObject")    
    if not objFSO.FolderExists(path) then
		on error resume next
        objFSO.CreateFolder(path) ''create a folder to demonstrate that the vbs is running / affecting the outside environment
        if err.number = 0 then
			wscript.echo "Folder " & path & " successfully created."
		else
			wscript.echo "Folder " & path & " was not created.  " & cstr(Err.number) & ": " & Err.Description
		end if
        on error goto 0
    else
        wscript.echo "Folder " & path & " already exists."
    end if
    set objFSO = Nothing
    wscript.echo "Done"
    ', null, 0

    --execute above script via friendly name
    exec sp_RunScriptByName 'demo','"this is a demo" "hopefully it will work" yes it does'


Intelligently Splitting an Address

Filed under: SQL Server, Technology, Uncategorized — Developer42 @ 08:05

An issue recently arose at work where I was asked to take a street address from one field, and split it into two for use with another system. The simple solutions available were:

  • simply splitting on the first comma to appear in the string, and putting each side of this split into the respective fields.
  • breaking the string down the middle.
  • putting everything I could into the first field, then any remainder into the second.

However, none of these felt quite right: the first would work well but takes no account of the destination fields' sizes; the second and third could break half-way through a word; and the third may not use the second field at all.
As a result, I came up with the following function. It's probably not the optimal solution (it was written in half an hour after getting home from a night out, with just a quick check the next morning that everything worked), but hopefully it'll be useful to some people.

The function takes 3 arguments: the string to be split, the length of the first field to be populated, and the length of the second. It returns a table with an ID field giving the order of the elements (not strictly required) and a String field holding the two return values in separate rows. The function always returns exactly 2 rows.
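To make that contract concrete, here's a rough Python sketch of the same idea (my own names and simplified fallback rules, not a line-for-line port of the T-SQL below): collapse duplicate spaces, find the latest comma or space that leaves both halves within their limits, and fall back to a hard cut at the first field's length.

```python
def split_special(s, len1, len2):
    """Split s into two parts of at most len1 and len2 characters,
    preferring to break on a comma, then on a space, near the boundary."""
    s = ' '.join(s.split())  # trim and collapse duplicate spaces, as the T-SQL does
    for delim in (', ', ' '):
        # latest occurrence of this delimiter that could still fit the first field
        i = s.rfind(delim, 0, len1 + len(delim))
        while i != -1:
            head, tail = s[:i], s[i + len(delim):]
            if len(head) <= len1 and len(tail) <= len2:
                return [head, tail]  # both halves fit: use this break point
            i = s.rfind(delim, 0, i)  # otherwise try an earlier occurrence
    # no usable delimiter: hard cut at len1, truncating anything beyond len1 + len2
    return [s[:len1].strip(), s[len1:len1 + len2].strip()]
```

For example, `split_special('33 Rue De Maisons, Paris', 100, 100)` gives `['33 Rue De Maisons', 'Paris']` in this sketch, while a string too long for both fields falls back to a mid-word cut, just as the T-SQL version does.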

The function is written in T-SQL for SQL Server, though I believe it should also work with Sybase.





CREATE FUNCTION dbo.SplitSpecial
(
      @stringToSplit nvarchar(max)
      , @len1 int
      , @len2 int
)
RETURNS @returnVal table (id int identity(1,1), string nvarchar(max))
AS
BEGIN
      declare @i int = 0                  --used as a counter when searching for chars
      , @j int = 0                        --another char counter
      , @ldiff int = 0                    --says how many characters we have spare to play with
      , @mark int = 0                     --a marker
      , @tempmark int = 0                 --a marker
      , @back nvarchar(max)               --stringToSplit in reverse
      , @temp nvarchar(max)               --for storing works in progress
      , @forward bit = 0                  --used to indicate the direction of the string from the marker

      --remove any starting, ending or duplicate spaces should they exist, to try to cram in the characters
      select @stringToSplit = ltrim(rtrim(@stringToSplit))
      while (charindex('  ', @stringToSplit) > 0)
            select @stringToSplit = replace(@stringToSplit, N'  ', N' ')

      --find out what we have to play with / reduce max size of strings to size of input
      select @tempmark = len(@stringToSplit)
      select @len1 = 0.5 * ((@len1 + @tempmark) - abs(@len1 - @tempmark))   --i.e. min(@len1, input length)
            , @len2 = 0.5 * ((@len2 + @tempmark) - abs(@len2 - @tempmark))  --i.e. min(@len2, input length)
      select @ldiff = (@len1 + @len2) - @tempmark

      if @ldiff > 0
      begin
            --get a good starting point / break point: no longer required
            --select @mark = round(len(@stringToSplit)/2,0) --assume we can get away with halving the string
            --select @mark = 0.5*((@mark + @len1) - abs(@mark - @len1)) --account for first field being too short (least function math hack)
            --select @mark = len(@stringToSplit) - 0.5*(((len(@stringToSplit)-@mark) + @len2) - abs((len(@stringToSplit)-@mark) - @len2)) --account for 2nd field being too short (least function math hack)

            --the two fields are big enough for us to try some funky stuff with
            --is there a delimiter we can use?
            select @temp = substring(@stringToSplit, len(@stringToSplit) - @len2, @ldiff + 1) --get crossover chars; the +1 accounts for dropping the delim
            while patindex('%[a-z0-9]%', @temp) > 0
                  set @temp = stuff(@temp, patindex('%[a-z0-9]%', @temp), 1, '')
            if (len(@temp) + charindex(N' ', @temp)) > 0 --a suitable delim exists in the crossover area
            begin
                  if charindex(N',', @temp) > 0 --if commas are available, these are the preferred delimiter
                        select @temp = N','
                  else if charindex(N' ', @temp) > 0 --spaces are the second favourite
                        select @temp = N' '

                  select @back = reverse(@stringToSplit)
                  , @i = 1
                  , @j = ceiling(len(@stringToSplit) / 2) - 1
                  , @forward = 1
                  , @mark = len(@stringToSplit)

                  while @i <= (len(@temp) + charindex(N' ', @temp)) --try out all available delimiters (if comma or space are present, others are ignored)
                  begin
                        while @j > 0
                        begin
                              select @tempmark = charindex(substring(@temp, @i, 1), @stringToSplit, @j)
                              if @tempmark > 0 and @tempmark < @mark
                                    select @mark = @tempmark, @forward = 1
                              select @tempmark = charindex(substring(@temp, @i, 1), @back, @j)
                              if @tempmark > 0 and @tempmark < @mark
                                    select @mark = @tempmark, @forward = 0
                              if (@mark <= @len1 and (len(@stringToSplit) - @mark) <= @len2)
                                    break --this break point fits both fields
                              select @j = @j - 1
                        end
                        select @i = @i + 1
                  end

                  --select @mark = case @forward when 0 then len(@stringToSplit)-(@mark+1) else @mark end
                  select @mark = case @forward when 0 then (1 + len(@stringToSplit)) - @mark else @mark end
            end
      end

      --catch any issues & account for when we can't find a good break point / there isn't space
      if (@mark <= 0 or @mark > @len1)
            select @mark = @len1

      insert into @returnVal (string) values (ltrim(rtrim(substring(@stringToSplit, 1, @mark))))
      insert into @returnVal (string) values (ltrim(rtrim(substring(@stringToSplit, 1 + @mark, @len2))))

      return
END
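The `0.5 * ((a + b) - abs(a - b))` expressions in the function are a maths hack for the least of two values, handy in T-SQL versions that lack a scalar `LEAST` function. Sketched in Python, the identity is easy to sanity-check:

```python
def least(a, b):
    # min(a, b) == ((a + b) - |a - b|) / 2; symmetrically,
    # max(a, b) == ((a + b) + |a - b|) / 2
    return 0.5 * ((a + b) - abs(a - b))

# exhaustive check over a small range of integers
assert all(least(a, b) == min(a, b) for a in range(-5, 6) for b in range(-5, 6))
print("identity holds")
```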








--examples of the function in action
select * from dbo.SplitSpecial('33 Rue De Maisons, Paris', 100, 100)
select * from dbo.SplitSpecial('33 Rue De Maisons, Paris, France 123456', 100, 100)
select * from dbo.SplitSpecial('33,Rue,De,Maisons,Paris,France,123456', 100, 100)
select * from dbo.SplitSpecial('33 Rue De Maisons Paris France 123456', 100, 100)
select * from dbo.SplitSpecial('33 Rue De Maisons, Paris, France 123456', 8, 50)
select * from dbo.SplitSpecial('this string is too big for the fields', 10, 10)

At some point I hope to update this to accept any number of fields of differing lengths, making it more general (e.g. should you ever need to split a full address into 4 or 5 lines); for now, though, it works (so far, at least).


Ideas for Wave Robots

Filed under: Google, Ideas, Technology, Wave — Developer42 @ 21:32

One of the hardest things about developing a Wave robot is coming up with an original idea. My smiley bot was just for me to play with Wave, get used to uploading apps to App Engine, and get back into Java coding with Eclipse. Now that that's done, it's time to think of something useful: an application that people will actually want to use. This post is partly a brainstorming session for myself, partly a way to put these ideas out there for anyone stuck for ideas who thinks they could code one of these, and finally an encouragement for others to submit their own ideas.

Webby – The Web Service Robot
A robot which, when added, detects any URLs pointing to web services, generates a form based on their parameters, submits the data via SOAP, and puts the return value into a new blip.

Squely – A Database Query Bot
A robot which can run SQL statements against given connection strings. At present, this will only be possible if the database is publicly accessible, or if you have a wave server installed on the same network as your database.

QIFry – Interesting
This robot would monitor waves for key words, then on detection, pop in an interesting fact about that word (e.g. if it spots the word Banana, it says “did you know that bananas are herbs, and these shrubs can walk?”). It also monitors for boring words, and replaces them with more eloquent ones.

Pretty – Pretty Print
Detects common languages / markup & changes the layout to make it more readable.

Finds recipes based on ingredients or dish names in the wave.

Uses the GEO features of HTML 5 to locate where each wave user is and lets wavers know where their nearest wavers are. This could also take info about their interests to help match people up to folks whose hobbies they share.
