Developer42

2016-12-19

Filtering on a Date Window

Filed under: SQL Server, Technology — Developer42 @ 17:21

This post is to cover an explanation I have to repeat quite often when demoing code, or when suggesting improvements to others’ logic.

A lot of the time we’re asked to filter for results falling within a date window, and you’ll see code implemented like this:
(I’ve used SQL in this example; but the concept illustrated here is the same regardless of implementation language).

declare @StartDate datetime = '2016-01-01 00:00'
, @EndDate datetime = '2016-12-31 23:59:59.997' --997 instead of 999 due to sql date quirk: see http://stackoverflow.com/a/3584913/361842

select *
from myTable
where myStartDate >= @StartDate
and myEndDate <= @EndDate

--or

select *
from myTable
where myStartDate between @StartDate and @EndDate
and myEndDate between @StartDate and @EndDate

That’s fine… sometimes.
The logic above will return only those results where the record’s entire duration falls within the window’s range; i.e. the record starts after the window opens and completes before it closes.
This is best illustrated in the below graphic.

Example of Time Window with Whole Record falling within Window

However, a lot of the time the requirement is that any part of the record falls into our window; i.e. the results we want are:

Date window where any part of record falls within window

Given this requirement, lots of people still implement the above logic, causing any records which cross the window’s boundaries to be incorrectly excluded.
Those who do spot this issue often make a different mistake: overcomplicating the logic by providing a rule for each scenario (and often missing the last scenario illustrated below):

select *
from myTable
where 
--whole record in range
(
	myStartDate between @StartDate and @EndDate
	and myEndDate between @StartDate and @EndDate
)
--record starts in range
or
(
	myStartDate between @StartDate and @EndDate
)
--record ends in range
or
(
	myEndDate between @StartDate and @EndDate
)
--record crosses entire range
or
(
	myStartDate <= @StartDate 
	and myEndDate >= @EndDate
)

Preferred Solution

However, this can be drastically simplified, covering all of the above scenarios with a lot less code, and a lot more efficiently:

select *
from myTable
where myStartDate <= @EndDate --choice of < vs <= depending on requirement
and myEndDate >= @StartDate  --choice of > vs >= depending on requirement

To illustrate why this works, and why we use an AND rather than an OR, take a look at the previous image once we add lines showing where each condition is met (green) or not met (red). Records where both conditions are met are shown as green; those where one condition is not met are shown as red. There is no scenario in which both conditions fail, since that would require a record which finished before it started.

Records with any part in date window with lines showing condition results
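Since the concept is language-independent, the preferred predicate can be sketched in Python too (the function name and sample dates here are mine, purely for illustration):

```python
from datetime import date

def overlaps(start, end, window_start, window_end):
    """True when any part of [start, end] falls within the window.
    Mirrors the SQL: myStartDate <= @EndDate AND myEndDate >= @StartDate."""
    return start <= window_end and end >= window_start

# Window: the whole of 2016.
w_start, w_end = date(2016, 1, 1), date(2016, 12, 31)

# Record entirely inside the window:
assert overlaps(date(2016, 3, 1), date(2016, 4, 1), w_start, w_end)
# Record crossing the window's start boundary (missed by the naive filter):
assert overlaps(date(2015, 12, 1), date(2016, 1, 15), w_start, w_end)
# Record spanning the entire window:
assert overlaps(date(2015, 1, 1), date(2017, 1, 1), w_start, w_end)
# Record entirely before the window:
assert not overlaps(date(2015, 1, 1), date(2015, 6, 1), w_start, w_end)
```

Note that the second and third cases are exactly the ones the naive "whole record in range" logic would exclude.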

Bonus Info

NB: In the above code’s comments I mention that you can choose between < and <= (and between > and >=) depending on your requirement.
I’d also closed my datetime window with the last inclusive value.
In many scenarios I’d instead use an inclusive start date and an exclusive end date, as demoed below:

declare @StartDate datetime = '2016-01-01 00:00' --inclusive
, @EndDate datetime = '2017-01-01 00:00'  --exclusive

select *
from myTable
where myStartDate < @EndDate 
and myEndDate >= @StartDate  

The advantage here is that we don’t need to think about how many days there are in a given month, or how many milliseconds we can have before we see rounding errors; we just work with whole dates. It also means that the end date of our previous window can be the start date of our next window without risking issues caused by the windows overlapping (though note: we may still get results which legitimately fall into both windows).
That said, always think about these decisions: you need to understand what requirement you’re meeting, and how your code will meet that requirement for any given data.
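A minimal Python sketch of the half-open convention (names and dates hypothetical) shows that adjacent windows don’t double-count a record starting exactly on the boundary, while a genuinely spanning record legitimately appears in both:

```python
from datetime import datetime

def in_window(start, end, window_start, window_end):
    """Any overlap with the half-open window [window_start, window_end):
    inclusive start, exclusive end."""
    return start < window_end and end >= window_start

jan = (datetime(2016, 1, 1), datetime(2016, 2, 1))
feb = (datetime(2016, 2, 1), datetime(2016, 3, 1))

# A record starting exactly at midnight on 1 Feb belongs to February only,
# even though 1 Feb is also January's (exclusive) end date:
rec = (datetime(2016, 2, 1), datetime(2016, 2, 10))
assert not in_window(*rec, *jan)
assert in_window(*rec, *feb)

# A record spanning the boundary legitimately falls into both windows:
spanning = (datetime(2016, 1, 20), datetime(2016, 2, 5))
assert in_window(*spanning, *jan)
assert in_window(*spanning, *feb)
```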

2016-02-03

PowerShell & SQL :: A Quick Fix for the SQLPS Problems

Filed under: Microsoft, powershell, SQL Server, Technology — Developer42 @ 18:19

There are a couple of issues with the SQLPS PowerShell module.

  1. It changes the current location to SQLSERVER:\ (i.e. your prompt becomes PS SQLSERVER:\>).
  2. It may cause warnings (see end of this post for examples).

Calling the Import-Module-SQLPS function (the definition of which you’ll find below) instead of running Import-Module SQLPS will prevent such issues.

cls
function Import-Module-SQLPS {
    #pushd and popd to avoid import from changing the current directory (ref: http://stackoverflow.com/questions/12915299/sql-server-2012-sqlps-module-changing-current-location-automatically)
    #3>&1 puts warning stream to standard output stream (see https://connect.microsoft.com/PowerShell/feedback/details/297055/capture-warning-verbose-debug-and-host-output-via-alternate-streams)
    #out-null blocks that output, so we don't see the annoying warnings described here: https://www.codykonior.com/2015/05/30/whats-wrong-with-sqlps/
    push-location
    import-module sqlps 3>&1 | out-null
    pop-location
}

"Is SQLPS Loaded?"
if(get-module sqlps){"yes"}else{"no"}

Import-Module-SQLPS

"Is SQLPS Loaded Now?"
if(get-module sqlps){"yes"}else{"no"}

NB: Though from PowerShell 3.0 onwards you don’t need to explicitly import modules, for SQLPS I’d recommend that you do import it; this ensures that you control when in your script the import happens, rather than the first command from that module (e.g. an Invoke-SqlCmd statement) triggering the load, and thus your directory being changed and warnings being spewed at an arbitrary point.


2015-12-04

SQL Server :: Compare Database Schemas

This is a script to list basic differences in table definitions between two databases.

Background Waffle
When we make database changes in a transactional system, we need to ensure that these won’t adversely affect our BI solution. Normally our developers give us this information when we plan which items to put into a release, so we can liaise with BI; unfortunately the developers’ machines are down at present due to a disaster at the data centre of the company to whom we outsourced development. Since the changes for the release are already in our UAT environment (and only those changes are there), we could determine what would be changing in production by comparing the schemas of the two environments.

My first thought was to use the Database Comparison Tool that comes with Visual Studio. However, whilst this does a great job for a developer looking for and resolving differences, it’s not so easy to communicate this information to others (i.e. giving a before and after image of columns side by side). You have to either click on each difference to get a definition of the before and after, then make notes or take screenshots, or you have to generate the DDL, in which case you don’t see the before picture; only the code to amend from the current to the to-be definition.

Instead I realised that I could simply pull the definitions of the tables & columns from SQL, compare these and list the differences; resulting in the code below.
This code can handle where both databases are on the same instance, or where they’re on different instances. For the latter scenario you’d need to create a linked server.
If you frequently compare different combinations of databases and want to avoid amending the instance and catalog names in the code each time you may want to look into views or synonyms (potentially aliases work too if the catalog name is consistent across instances).

Since our BI solution uses replication to get the data from the transactional system, I also included information on which columns were replicated; thus we can filter on those replicated to see just those which BI would be interested in, or can ignore replication to see all differences.
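The shape of that comparison (a full outer join on table/column pairs, filtered down to differences) can be sketched in Python; the structure and sample data here are hypothetical, and the real script below also distinguishes table-level from column-level adds and removes:

```python
def diff_schemas(uat, prd):
    """uat/prd: {(table, column): definition}. Emulates the full outer join:
    a key missing on one side is an add/remove; differing values are a change."""
    changes = []
    for key in sorted(uat.keys() | prd.keys()):
        u, p = uat.get(key), prd.get(key)
        if p is None:
            changes.append((key, 'Add'))
        elif u is None:
            changes.append((key, 'Remove'))
        elif u != p:
            changes.append((key, 'Change'))
    return changes

uat = {('Orders', 'Id'): {'type': 'int'},
       ('Orders', 'Note'): {'type': 'nvarchar', 'max_length': 200}}
prd = {('Orders', 'Id'): {'type': 'int'},
       ('Orders', 'Note'): {'type': 'nvarchar', 'max_length': 100},
       ('Orders', 'Legacy'): {'type': 'int'}}
print(diff_schemas(uat, prd))
# → [(('Orders', 'Legacy'), 'Remove'), (('Orders', 'Note'), 'Change')]
```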

Caveats

  1. We’re not comparing everything; only basic table and column data.
  2. The maxLength value gives length in bytes; not characters (see previous post on how to correct for this).
  3. There are probably other issues; this was a quick fix to resolve an immediate issue which worked for us; there may be issues we’ve not encountered (e.g. maybe the linked server only gives access to a subset of the tables; thus giving false responses)…

Code

set transaction isolation level read uncommitted
go

;with uat as
(
       select t.name TableName
       , c.Name ColumnName
       , ty.name ColType
       , c.max_length ColMaxLength
       , coalesce(c.is_nullable,0) ColIsNullable
       , c.scale ColScale
       , c.[precision] ColPrecision
       , coalesce(c.collation_name,'') ColCollation
       , c.is_replicated
       from [myUatDbServer\uatInstance].[myUatDbCatalog].sys.tables t
       inner join [myUatDbServer\uatInstance].[myUatDbCatalog].sys.columns c
              on c.object_id = t.object_id
       inner join [myUatDbServer\uatInstance].[myUatDbCatalog].sys.types ty
              on ty.system_type_id = c.system_type_id
              and ty.user_type_id = c.user_type_id
)
, prd as
(
       select t.name TableName
       , c.Name ColumnName
       , ty.name ColType
       , c.max_length ColMaxLength
       , coalesce(c.is_nullable,0) ColIsNullable
       , c.scale ColScale
       , c.[precision] ColPrecision
       , coalesce(c.collation_name,'') ColCollation
       , c.is_replicated
       from [myProdDbServer\prodInstance].[myProdDbCatalog].sys.tables t
       inner join [myProdDbServer\prodInstance].[myProdDbCatalog].sys.columns c
              on c.object_id = t.object_id
       inner join [myProdDbServer\prodInstance].[myProdDbCatalog].sys.types ty
              on ty.system_type_id = c.system_type_id
              and ty.user_type_id = c.user_type_id
)
select coalesce(uat.TableName, prd.TableName) TableName
, coalesce(uat.ColumnName, prd.ColumnName) ColumnName
, case
       when prd.TableName is null and not exists (select top 1 1 from prd x where x.TableName = uat.TableName) then 'Add Table'
       when uat.TableName is null and not exists (select top 1 1 from uat x where x.TableName = prd.TableName)  then 'Remove Table'
       when prd.ColumnName is null then 'Add Column'
       when uat.ColumnName is null then 'Remove Column'
       else 'Change Column Definition'
end [AXtion] --our transaction system's Dynamics AX; I'm hilarious :S
, prd.ColType ColTypeFrom
, uat.ColType ColTypeTo
, prd.ColMaxLength ColMaxLengthFrom --not going to fuss about char byte size for now; just want to get a list of changes
, uat.ColMaxLength ColMaxLengthTo --not going to fuss about char byte size for now; just want to get a list of changes
, prd.ColIsNullable ColIsNullableFrom
, uat.ColIsNullable ColIsNullableTo
, prd.ColCollation ColCollationFrom
, uat.ColCollation ColCollationTo
, prd.ColPrecision ColPrecisionFrom
, uat.ColPrecision ColPrecisionTo
, prd.ColScale ColScaleFrom
, uat.ColScale ColScaleTo
, prd.is_replicated PrdIsReplicated --\_these aren't compared; just returned to make it easy to tell what's interesting to BI
, uat.is_replicated UatIsReplicated --/
from uat
full outer join prd
       on prd.TableName = uat.TableName
       and prd.ColumnName = uat.ColumnName
where prd.TableName is null
or uat.TableName is null
or prd.ColumnName is null
or uat.ColumnName is null
or (uat.ColType != prd.ColType)
or (uat.ColMaxLength != prd.ColMaxLength)
or (uat.ColIsNullable != prd.ColIsNullable) 
or (uat.ColCollation != prd.ColCollation) 
or (uat.ColPrecision != prd.ColPrecision) 
or (uat.ColScale != prd.ColScale)
order by coalesce(uat.TableName, prd.TableName) 
, coalesce(uat.ColumnName, prd.ColumnName) 

2015-12-03

SQL Server :: Getting Column Length of Various Columns

Filed under: Microsoft, SQL Server, Technology — Developer42 @ 19:17

Sometimes we need to lookup the maximum length allowed for various columns in a number of tables.
This script allows you to specify a list of table/column pairs in the table variable @interesting, then pulls back the max_length information for these, taking the data type into account (i.e. so that data types using 2-byte character sets are handled predictably).

It works by creating a temp table with a column holding a single character of each (string) data type, then reading the length of that field to get the length of a single character of that data type, and dividing the max length of the fields we’re after by this multiplier to get the length in characters rather than bytes.
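The byte-to-character conversion at the heart of this can be illustrated in Python; the byte sizes below are hard-coded for illustration, whereas the script derives them at runtime from the temp table:

```python
# Illustrative bytes-per-character for SQL Server string types:
# nchar/nvarchar store UTF-16 code units (2 bytes each); char/varchar use 1 byte.
BYTES_PER_CHAR = {'char': 1, 'varchar': 1, 'nchar': 2, 'nvarchar': 2}

def max_length_in_chars(data_type, max_length_bytes):
    """Divide the byte length reported by sys.columns by the size of a
    single character of that type, yielding a length in characters."""
    return max_length_bytes // BYTES_PER_CHAR.get(data_type, 1)

assert max_length_in_chars('nvarchar', 100) == 50   # nvarchar(50)
assert max_length_in_chars('varchar', 100) == 100   # varchar(100)
```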


declare @interesting table(col sysname, tab sysname)
insert @interesting
values ('someColumn','someTable')
,('secondColumn','someTable')
,('column3','anotherTable')

declare @JBDataTypes table(system_type_id int, user_type_id int, max_length int)
declare @sql nvarchar(max) 
select @sql = coalesce(@sql + ',','create table #dataTypesJB (') + quotename(name) + ' ' + name  
from sys.types 
where collation_name is not null --string fields only
set @sql = @sql + '); 
select system_type_id
, user_type_id 
, max_length 
from tempdb.sys.columns 
where object_id = object_id(''tempdb..#dataTypesJB''); 
drop table #dataTypesJB'
insert into @JBDataTypes exec (@sql)

select c.max_length / coalesce(jb.max_length,1) max_length 
, i.col
, i.tab
from @interesting i
inner join sys.columns c 
on c.object_id = object_id(i.tab)
and c.name = i.col
left outer join @JBDataTypes jb
on jb.system_type_id = c.system_type_id
and jb.user_type_id = c.user_type_id 
order by i.tab,i.col

2015-05-18

Microsoft Dynamics AX 2012 Chart of Accounts Advanced Validation Rules SQL

A follow up to an earlier article: Microsoft Dynamics AX 2012 Chart of Accounts Validation Rules SQL.

The Advanced Rules can be found in AX under: General Ledger > Setup > Chart of accounts > Configure account structures > select account structure > Advanced rule.
Here’s a small sample of our advanced rules from AX:

Advanced Rules in AX

Here’s the same rule (results row 9), as produced by the below SQL:

AX Advanced Rules in SQL

Full SQL:

select dr.name AdvancedRule
, dr.description Name
, a.Name AccountStructure
, a.Description ASDescription
, case when dr.IsDraft =1 then 'Draft' else 'Active' end [Status] --seems to just duplicate the IsDraft info
, dr.IsDraft Draft
, drc.Filter
, dh.Name ARSAdvancedRuleStructure
, dh.DESCRIPTION ARSName
from DimensionRule dr
inner join DimensionHierarchy a on a.recid = dr.AccountStructure and a.Partition = dr.Partition
inner join (
       select drcO.DimensionRule
       , drcO.Partition
       , stuff
       (
              (
                     select ' And ' + da.Name + ' '
                     + case
                           when coalesce(drcI.WildCardString,'') > ''
                                  then case
                                         when drcI.WildCardString like '\%%\%' escape '\' then 'contains '
                                         when drcI.WildCardString like '%\%' escape '\' then 'begins with '
                                         when drcI.WildCardString like '\%%' escape '\' then 'ends with '
                                         else 'equals '
                                  end + quotename(replace(drcI.WildCardString,'%',''),'''')
                           when drcI.IsFromOpen = drcI.IsToOpen then --if both are 0 or both are 1 (second scenario seems weird, but that's how the app behaves)
                                  case
                                         when drcI.RangeFrom = drcI.RangeTo then 'Is ' + quotename(drcI.RangeFrom,'''')
                                         else 'Between ' + quotename(drcI.RangeFrom,'''') + ' through ' + quotename(drcI.RangeTo,'''')
                                  end
                                            --in the below statements IsFromOpen and IsToOpen seem to behave backwards to what you'd expect; but again that's what the app does
                           when drcI.IsFromOpen=1 and drcI.IsToOpen=0 then 'Greater than or equal to ' + quotename(drcI.RangeFrom,'''')
                           when drcI.IsFromOpen=0 and drcI.IsToOpen=1 then 'Less than or equal to ' + quotename(drcI.RangeTo,'''')
                            else '-UNEXPECTED SCENARIO; SPEAK TO JB-' --this should never happen
                     end 
                     FROM DimensionRuleCriteria drcI
                     inner join DimensionAttribute da on da.RecId = drcI.DimensionAttribute and da.Partition = drcI.Partition
                     where drcI.DimensionRule = drcO.DimensionRule and drcI.Partition = drcO.Partition
                     order by da.Name --drcI.RecId
                     for xml path(''), type
              ).value('.','nvarchar(max)')
       ,1,4,'Where') Filter
       FROM DimensionRuleCriteria drcO
       group by drcO.Partition
       , drcO.DimensionRule
) drc on drc.DimensionRule = dr.RecId and drc.Partition = dr.Partition
inner join DimensionRuleAppliedHierarchy drah on drah.DimensionRule = dr.RecId and drah.Partition = dr.Partition
inner join DimensionHierarchy dh on dh.recid = drah.DimensionHierarchy and dh.Partition = drah.Partition
order by a.Name, dr.name, dh.Name, dr.IsDraft

NB: One anomaly you may notice is some rules appear twice; once as Draft and once as Active. This is where an existing rule is being edited; the existing rule remains active whilst its replacement is created in draft status. When you browse the rules you’ll see the draft rule; however the active rule is the one being applied in any validation.
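The wildcard classification in the case expression above (leading and trailing % means "contains", trailing % alone "begins with", leading % alone "ends with") can be sketched language-agnostically; here in Python, with output phrasing mirroring the SQL:

```python
def classify_wildcard(value):
    """Classify an AX wildcard string the same way the case expression does."""
    stripped = value.replace('%', '')
    if value.startswith('%') and value.endswith('%') and len(value) >= 2:
        return f"contains '{stripped}'"
    if value.endswith('%'):
        return f"begins with '{stripped}'"
    if value.startswith('%'):
        return f"ends with '{stripped}'"
    return f"equals '{stripped}'"

assert classify_wildcard('%40%') == "contains '40'"
assert classify_wildcard('40%') == "begins with '40'"
assert classify_wildcard('%40') == "ends with '40'"
assert classify_wildcard('40') == "equals '40'"
```

As in the SQL, the order of the checks matters: the "contains" test must come before the one-sided tests, since a string like '%40%' would also match either of them.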

2015-03-09

Microsoft Dynamics AX 2012 Chart of Accounts Validation Rules SQL

In doing some analysis work on performance, a consultant recently asked for an extract of all of our AX2012’s COA validation rules.
The below SQL is an attempt at regenerating the rules based off the data in the various related tables.
NB: I’m not 100% certain that this gives all rules as they’d appear in the application, but a few initial tests prove promising; so hopefully this SQL can be reused by others with similar requirements.

Here’s a small sample of our validation rules from AX:

AX 2012 Chart Of Accounts Validation Rules Screenshot

AX 2012 Chart Of Accounts Validation Rules Screenshot

Here’s the same rules, as produced by the below SQL:

AX 2012 Chart Of Accounts Validation Rules SQL Results Screenshot

AX 2012 Chart Of Accounts Validation Rules SQL Results Screenshot

Full SQL:

set transaction isolation level read uncommitted
go

;with c1 as
(
       select DimensionConstraintNode
       , STUFF
       (
              (
                     SELECT ';' + case 
                           when WILDCARDSTRING > '' then WILDCARDSTRING 
                           when RangeFrom=RangeTo then RangeFrom 
                           else concat(RangeFrom,'..',RangeTo) 
                     end 
                     FROM DimensionConstraintNodeCriteria inside
                     where inside.DimensionConstraintNode = outside.DimensionConstraintNode
                     ORDER BY ORDINAL, RECID
                     FOR XML PATH (''),TYPE
              ).value('.','nvarchar(max)')
              ,1,1,''
       ) AS Rules
       from DimensionConstraintNodeCriteria outside
       group by DimensionConstraintNode
)
, ActiveDCN as
(
       select dct.dimensionhierarchy 
       , dcn.DIMENSIONCONSTRAINTTREE
       , dhl.level_
       , dhl.dimensionattribute
       , da.name AttributeName
       , dcn.dimensionhierarchylevel
       , dcn.PARENTCONSTRAINTNODE
       , dcn.RecId
       , c1.Rules
       from DimensionConstraintTree dct 
       inner join DIMENSIONHIERARCHYLEVEL dhl on dhl.dimensionhierarchy = dct.dimensionhierarchy
       inner join DIMENSIONATTRIBUTE da on da.recid = dhl.dimensionattribute 
       inner join DimensionConstraintNode dcn on dcn.DIMENSIONCONSTRAINTTREE = dct.recid and dcn.dimensionhierarchylevel = dhl.RECID
       left outer join c1
       on c1.DimensionConstraintNode = dcn.recid
    where (dcn.activeto = '1900-01-01 00:00' or dcn.activeto >= getutcdate())
    and (dcn.activefrom = '1900-01-01 00:00' or dcn.activefrom <= getutcdate())
)
select dh.name HierarchyName
, concat(dcn1.AttributeName,': ',coalesce(dcn1.Rules,'*')) R1
, concat(dcn2.AttributeName,': ',coalesce(dcn2.Rules,'*')) R2
, concat(dcn3.AttributeName,': ',coalesce(dcn3.Rules,'*')) R3
, concat(dcn4.AttributeName,': ',coalesce(dcn4.Rules,'*')) R4
from DIMENSIONHIERARCHY dh
inner join ActiveDCN dcn1 on dcn1.dimensionhierarchy = dh.recid and dcn1.level_ = 1 --not sure if level’s required; it does affect results; keeping in for now as improves performance (though also reduces result set by 75%)
left outer join ActiveDCN dcn2 on dcn2.ParentConstraintNode = dcn1.Recid --and dcn2.level_ = 2
left outer join ActiveDCN dcn3 on dcn3.ParentConstraintNode = dcn2.Recid --and dcn2.level_ = 3
left outer join ActiveDCN dcn4 on dcn4.ParentConstraintNode = dcn3.Recid --and dcn2.level_ = 4
--where dcn1.Rules like '%640103%' --to produce screenshot
order by dh.name, dcn1.DIMENSIONCONSTRAINTTREE
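The STUFF(… FOR XML PATH('') …) construct in the c1 CTE is just string aggregation; the per-criterion formatting and the ';'-join can be sketched in Python (the tuple layout and sample data are hypothetical):

```python
def build_rules(criteria):
    """criteria: list of (wildcard, range_from, range_to) tuples; mirrors the
    case expression feeding the FOR XML PATH concatenation."""
    parts = []
    for wildcard, range_from, range_to in criteria:
        if wildcard:
            parts.append(wildcard)          # wildcard rules pass through as-is
        elif range_from == range_to:
            parts.append(range_from)        # single-value range
        else:
            parts.append(f'{range_from}..{range_to}')  # from..to range
    return ';'.join(parts)

assert build_rules([('', '100', '199'), ('', '640103', '640103'), ('6%', '', '')]) \
    == '100..199;640103;6%'
```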

2014-09-17

T-SQL: Generate Series: Getting a list of numbers in a given range.

Filed under: Microsoft, SQL Server, Technology — Developer42 @ 01:27

I recently came across the Postgres generate_series function whilst reading a blog post.
So far as I can tell, there’s no equivalent in T-SQL. To make up for this, I coded my own, making use of the recursive nature of common table expressions:

create function dbo.generate_series
(
	  @start bigint
	, @stop bigint
	, @step bigint = 1
	, @maxResults bigint = 0 --0 = unlimited
)
returns @results table(n bigint)
as
begin

	--avoid infinite loop (i.e. where we're stepping away from stop instead of towards it)
	if @step = 0 return
	if @start > @stop and @step > 0 return
	if @start < @stop and @step < 0 return
	
	--ensure we don't overshoot
	set @stop = @stop - @step

	--treat negatives as unlimited
	set @maxResults = case when @maxResults < 0 then 0 else @maxResults end

	--generate output
	;with myCTE (n,i) as 
	(
		--start at the beginning
		select @start
		, 1
		union all
		--increment in steps
		select n + @step
		, i + 1
		from myCTE 
		--ensure we've not overshot (accounting for direction of step)
		where (@maxResults=0 or i<@maxResults)
		and 
		(
			   (@step > 0 and n <= @stop)
			or (@step < 0 and n >= @stop)
		)  
	)
	insert @results
	select n 
	from myCTE
	option (maxrecursion 0) --sadly we can't use a variable for this; however checks above should mean that we have a finite number of recursions / @maxResults gives users the ability to manually limit this 

	--all good	
	return
	
end

Example Usage:

--check we get expected results
select * from dbo.generate_series(1, 10, default, default)
select * from dbo.generate_series(10, 5, -1, default)
select * from dbo.generate_series(1, 10, 4, default)
select * from dbo.generate_series(1, 10, default, 2)
select * from dbo.generate_series(1, 10, 4, -1)

--check we don't get results if we have "invalid" input
select * from dbo.generate_series(1, 10, 0, default)
select * from dbo.generate_series(10, 1, default, default)
select * from dbo.generate_series(10, 5, 1, default)
select * from dbo.generate_series(1, 10, -4, default)

NB: Should you wish to generate a series of dates instead of a series of numbers, check my comments here: http://stackoverflow.com/questions/1478951/generate-a-resultset-of-incrementing-dates-in-tsql/25881077#25881077
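For comparison, an equivalent generator in Python (mirroring the same guards against stepping away from the stop value, and the same optional result cap, with negatives treated as unlimited):

```python
def generate_series(start, stop, step=1, max_results=0):
    """Inclusive series, mirroring the T-SQL function: yields nothing when the
    step points away from stop; max_results of 0 (or below) means unlimited."""
    if step == 0:
        return  # avoid an infinite loop
    if (start > stop and step > 0) or (start < stop and step < 0):
        return  # stepping away from stop instead of towards it
    if max_results < 0:
        max_results = 0  # treat negatives as unlimited, as the T-SQL does
    n, count = start, 0
    while (step > 0 and n <= stop) or (step < 0 and n >= stop):
        if max_results and count >= max_results:
            return
        yield n
        n += step
        count += 1

assert list(generate_series(1, 10, 4)) == [1, 5, 9]
assert list(generate_series(10, 5, -1)) == [10, 9, 8, 7, 6, 5]
assert list(generate_series(1, 10, 0)) == []
```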

2013-07-28

Countdown Numbers Round Solver in T-SQL

Filed under: Microsoft, SQL Server, Technology — Developer42 @ 14:00

Every now and then I try to create programs to solve puzzles as a way to introduce myself to new ideas and test my coding skills. My latest self-challenge was to write SQL code to solve the Countdown Numbers Round. The idea: given 6 random numbers (each of which may be used at most once) and using only basic arithmetic (i.e. addition, subtraction, multiplication and division), reach a randomly generated target number. e.g. Given the numbers 3, 7, 9, 10, 50 & 75, and the target 396, one solution would be: 3 * (75 + 50 + 7).

Here’s how I approached it in T-SQL:
Run this solution at SQL Fiddle

create procedure dbo.CountdownNumberSolver
(
	@target int output --output used to allow these to be referenced outside of the proc if rnd numbers are generated
	, @number1 int output
	, @number2 int output
	, @number3 int output
	, @number4 int output
	, @number5 int output
	, @number6 int output 
) as
begin
	
	--if the user didn't specify values randomly pick some.
	--   Small Number (1-10):			1 + 10 * rand() 
	--   Large Number (25,50,75,100):	floor(1 + 4 * rand()) * 25
	--   Target (1-999):				1 + (999 * rand())
	select @target = ISNULL(@target, 1 + (999 * rand())) --I assume 0 isn't a valid solution in countdown?
		, @number1 = ISNULL(@number1,1 + 10 * rand())
		, @number2 = ISNULL(@number2,1 + 10 * rand())
		, @number3 = ISNULL(@number3,1 + 10 * rand())
		, @number4 = ISNULL(@number4,1 + 10 * rand())
		, @number5 = ISNULL(@number5,floor(1 + 4 * rand()) * 25)
		, @number6 = ISNULL(@number6,floor(1 + 4 * rand()) * 25)

	--output question
    /* commented out as sql fiddle only returns first result set
	select @target [target]
		, @number1 [#1]
		, @number2 [#2]
		, @number3 [#3]
		, @number4 [#4]
		, @number5 [#5]
		, @number6 [#6]
	*/
	--records combinations tried so far / partial equations
	create table #solutions
	(
		id bigint not null identity(1,1) primary key clustered
		, binaryFlags tinyint not null
		, number float not null
		, equation nvarchar(256) not null
	)
	create index ix_solutions_number on #solutions(number)
	
	--start with the given values - the id stuff is just to make it easy to reuse this procedure should we want a different # of source numbers
	insert #solutions
	select power(2,id) 
	, number
	, CAST(number as nvarchar(3))
	from 
	(values 
		 (0,@number1)
		,(1,@number2)
		,(2,@number3)
		,(3,@number4)
		,(4,@number5)
		,(5,@number6)
	)x(id, number)

	declare @previousCount bigint = 0
	, @newCount bigint = (select COUNT(1) from #solutions)
	, @tempCount bigint
	
	while @previousCount < @newCount --repeat whilst there are still untried combos
		and not exists (select top 1 1 from #solutions where number=@target) --and we're still looking for a solution
	begin

		set @tempCount = @newCount
		
		insert #solutions
		select a.binaryFlags | b.binaryFlags
		, case x.o
			when '+' then a.number + b.number
			when '-' then a.number - b.number
			when '*' then a.number * b.number
			when '/' then a.number / b.number
		end
		, '(' + a.equation + ' ' + x.o + ' ' + b.equation + ')'
		from #solutions a
		inner join #solutions b
			on a.binaryFlags & b.binaryFlags = 0 --ensure we're not reusing source numbers
			and a.id != b.id --minor (potential) optimisation
			and --no point comparing things we've already checked (this may allow for some performance improvement- it doesn't affect the logic)
			(
				   a.id > @previousCount
				or b.id > @previousCount
			)
		inner join --doing things this way means we only need to find the new combinations from a&b once, not 4 times
		(
			values ('+'), ('-'), ('*'), ('/')
		) x(o)
			on  not(x.o = '/' and b.number = 0) --avoid div 0 errors
			and not(x.o = '-' and a.number - b.number = 0) --don't create 0 value entries (if 0's not an allowed solution this result just adds overhead without value)
			and not(x.o = '+' and a.number + b.number = 0) --don't create 0 value entries (if 0's not an allowed solution this result just adds overhead without value)
			and not(x.o = '+' and a.id > b.id) --commutative function optimisation
			and not(x.o = '*' and a.id > b.id) --commutative function optimisation
		
		set @previousCount = @tempCount
		select @newCount = COUNT(1) from #solutions	
		
	end

	--limited to one answer to fit with the game / avoid the assumption that all possible solutions would be listed 
	--(since we exit the above loop the moment we have an answer rather than continuing to check all combos)
	select top 1 
	  @number1 [#1]
	, @number2 [#2]
	, @number3 [#3]
	, @number4 [#4]
	, @number5 [#5]
	, @number6 [#6]
	, @target  [target] 
	, equation [solution]
	from #solutions 
	where number = @target
	order by len(equation) --show the shortest equation as that's probably simplest
	option (fast 1)	

	if object_id('tempdb..#solutions','U') is not null drop table #solutions

	return 0
end

Here’s an example of how to run it:

--run the proc using random numbers (the proc replaces nulls with random #s)
exec dbo.CountdownNumberSolver null, null, null, null, null, null, null

--run the proc on user defined values
exec dbo.CountdownNumberSolver 396, 7, 3, 10, 9, 50, 75
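The search strategy translates readily to other languages. Here’s a Python sketch of the same idea: a bitmask tracks which source numbers each partial result has used, pairs of disjoint partials are repeatedly combined with the four operators, and a seen-set stands in for the temp table’s deduplication (the SQL’s commutativity and zero-skipping optimisations are omitted for brevity):

```python
def solve_countdown(numbers, target):
    """Return an equation string reaching target, or None if unreachable."""
    # Each state: (bitmask of source numbers used, value, equation string).
    states = [(1 << i, float(n), str(n)) for i, n in enumerate(numbers)]
    seen = {(m, v) for m, v, _ in states}
    prev_count = 0
    while len(states) != prev_count:  # repeat whilst there are untried combos
        snapshot = list(states)
        tried_before = prev_count
        prev_count = len(snapshot)
        for i, (am, av, ae) in enumerate(snapshot):
            for j, (bm, bv, be) in enumerate(snapshot):
                if i == j or am & bm:
                    continue  # each source number may be used at most once
                if i < tried_before and j < tried_before:
                    continue  # this pair was already tried in an earlier pass
                for op, val in (('+', av + bv), ('-', av - bv),
                                ('*', av * bv),
                                ('/', av / bv if bv else None)):
                    if val is None or (am | bm, val) in seen:
                        continue  # div-by-zero, or combination already known
                    seen.add((am | bm, val))
                    eq = f'({ae} {op} {be})'
                    if val == target:
                        return eq
                    states.append((am | bm, val, eq))
    return None
```

For instance, `solve_countdown([3, 7, 9, 10, 50, 75], 396)` corresponds to the second `exec` above (a solution is returned, though not necessarily the same one the proc would pick, since both stop at the first hit).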

2012-08-21

Run VBScript from SQL

Filed under: Microsoft, SQL Server, Technology — Developer42 @ 18:13

Though it’s fairly well publicised how to call an existing VBS file from SQL, or how to execute VBScript within a SQL Agent job, there’s not much info out there on how to have SQL run VBS directly. The code below allows you to manage your scripts in a database table, creating temp files to host and run the scripts as required. The sp_RunSQL stored proc can also be used to run any VBS script, including dynamically generated code.
As always, feedback, suggestions, etc are welcome.

Code:

    use [insert-db-name-here]
    go
    ----use below code to enable commands (required for xp_cmdshell to work)
    --exec sp_configure 'show advanced options', 1
    --go
    --reconfigure
    --go
    --exec sp_configure 'xp_cmdshell', 1  
    --go
    --reconfigure
    --go
    if OBJECT_ID('vbsScripts') is not null drop table vbsScripts
    go
    create table vbsScripts 
    (
        id bigint not null identity(1,1) constraint pk_vbsScripts primary key clustered 
        , name nvarchar(256) not null constraint uk_vbsScripts_name unique
        , script nvarchar(max) not null
        , timeoutSecs int null constraint df_vbsScripts_timeoutSecs default(86400)--leave as null if you don't want a timeout / defaults to 1 day / 24*60*60
        , batchMode bit not null constraint df_vbsScripts_batchMode default(1)
    )
    go
    /*
    Gets the temp directory from environment variables
    usage:
        declare @tempPath nvarchar(max)
        exec GetTempDirectory @tempPath out 
        select @tempPath 
    */
    if OBJECT_ID('GetTempDirectory') is not null drop proc GetTempDirectory
    go
    create proc GetTempDirectory(@path nvarchar(max) out)
    as
    begin
        set @path = ''
        declare @tempTable table(data nvarchar(max))
        insert @tempTable exec master..xp_cmdshell 'echo %temp%'
        select top 1 @path = data from @tempTable
        if SUBSTRING(@path,len(@path),1) <> '\' set @path = @path + '\'
    end
    go
    /*
    Creates a unique filename (using guid to ensure uniqueness and datetime to make the name friendlier)
    usage:
        declare @tempPath nvarchar(max)
        exec GetTempDirectory @tempPath out 
        select @tempPath 
    */
    if OBJECT_ID('GetTempFilename') is not null drop proc GetTempFilename
    go
    create proc GetTempFilename(@fn nvarchar(max) out)
    as
    begin
        --exec GetTempDirectory @fn out --can just use environment variable - originally had issues testing as was looking at the wrong user's temp directory :/
        --set @fn = @fn + 'sqlTemp_' + replace(replace(replace(convert(nvarchar(24), getutcdate(),127),'-',''),':',''),'.','') + '_' + CAST(NEWID() as nvarchar(36)) + '.tmp'
        set @fn = '%temp%\' + 'sqlTemp_' + replace(replace(replace(convert(nvarchar(24), getutcdate(),127),'-',''),':',''),'.','') + '_' + CAST(NEWID() as nvarchar(36)) + '.tmp'
    end
    go
    if OBJECT_ID('dbo.fn_EscapeDosCharachters') is not null drop function dbo.fn_EscapeDosCharachters
    go
    create function dbo.fn_EscapeDosCharachters
    (
        @text nvarchar(max)
    )
    returns nvarchar(max)
    as
    begin
        --http://www.dostips.com/?t=Snippets.Escape
        set @text = REPLACE(@text,'^','^^')
        set @text = REPLACE(@text,'!','^!')
        set @text = REPLACE(@text,'&','^&')
        set @text = REPLACE(@text,'|','^|')
        set @text = REPLACE(@text,'%','%%')

        --based on experience
        set @text = REPLACE(@text,'>','^>')
        set @text = REPLACE(@text,'<','^<')
        return @text
    end
    go
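For reference, the same escape rules can be exercised outside SQL Server. Below is an illustrative Python transcription of the function above; note the ordering matters: the caret must be escaped first, otherwise the carets introduced by the later replacements would themselves get re-escaped.

```python
def escape_dos_characters(text: str) -> str:
    # Same replacement chain as dbo.fn_EscapeDosCharachters, in the same order.
    text = text.replace('^', '^^')   # the escape character itself, first
    text = text.replace('!', '^!')
    text = text.replace('&', '^&')
    text = text.replace('|', '^|')
    text = text.replace('%', '%%')   # percent is doubled rather than caret-escaped
    # redirection operators, added "based on experience" in the original
    text = text.replace('>', '^>')
    text = text.replace('<', '^<')
    return text
```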
    if OBJECT_ID('createTempTextFile') is not null drop proc createTempTextFile
    go
    create proc createTempTextFile
    (
        @fn nvarchar(max) out 
                          --the filename to output to (nb: environment variables don't currently work; e.g. you can't use %temp%\myFile.vbs)
                          --works fine with spaces in filename (so far at least)
                          --if user passes null, a temporary filename will be auto-allocated & returned in this variable
        , @content nvarchar(max)
    )
    as
    begin

        declare @charPos int
        , @cmd varchar(8000) --has to be varchar rather than nvarchar due to xp_cmdshell implementation

        if @fn is null or LEN(@fn)=0
        begin
            exec GetTempFilename @fn out
        end

        set @cmd = '@echo.>' + @fn --create a new file for our script output
        EXEC master..xp_cmdshell @cmd, no_output

        set @content = replace(@content,char(13) + char(10), char(10))--ensure uniform line endings (i.e. \r\n -> \n)
        set @content = replace(@content,char(13), char(10))--ensure uniform line endings (i.e. \r -> \n)
        set @content = @content + CHAR(10) --ensure last character of script is new line
        set @charPos = CHARINDEX(char(10),@content)
        while (@charPos > 0)
        begin
            --todo: consider what additional escaping is required to prevent injection issues
            set @cmd = '@echo.' + dbo.fn_EscapeDosCharachters(SUBSTRING(@content,1,@charPos-1)) + '>> ' + @fn
            EXEC master..xp_cmdshell @cmd, no_output
            set @content = SUBSTRING(@content,@charPos+1,len(@content))
            set @charPos = CHARINDEX(char(10),@content)
        end

    end 
    go
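The heart of createTempTextFile is the normalise-then-split loop: every line ending is coerced to \n, a trailing \n is appended so the final line is never dropped, and then one escaped echo is issued per line. The same loop, sketched in Python purely for clarity:

```python
def split_script_lines(content: str) -> list[str]:
    # Normalise CRLF and lone CR line endings to LF, as the T-SQL proc does
    content = content.replace('\r\n', '\n').replace('\r', '\n')
    content += '\n'  # guarantee the final line is newline-terminated
    lines = []
    while (pos := content.find('\n')) != -1:
        lines.append(content[:pos])  # the proc issues one escaped echo per entry
        content = content[pos + 1:]
    return lines
```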
    if OBJECT_ID('deleteTempTextFile') is not null drop proc deleteTempTextFile
    go
    create proc deleteTempTextFile
    (
        @fn nvarchar(max)
    )
    as
    begin
        declare @cmd varchar(8000)
        if CHARINDEX(' ',@fn) > 0 and CHARINDEX('"',@fn) < 1 set @fn = QUOTENAME(@fn,'"') --quote filenames containing spaces, unless already quoted
        set @cmd = 'del ' + @fn
        EXEC master..xp_cmdshell @cmd, no_output
    end
    go
    if OBJECT_ID('sp_RunScript') is not null drop proc sp_RunScript
    go
    create proc sp_RunScript
    (
        @script nvarchar(max)
        , @arguments nvarchar(max)
        , @timeoutSecs int = null
        , @batchMode bit = 1
        , @tempfileUri nvarchar(max) out
    )
    as
    begin
    
        declare @cmd varchar(8000) --has to be varchar rather than nvarchar due to xp_cmdshell implementation
        
        exec createTempTextFile @tempfileUri out, @script
        
        if CHARINDEX(' ',@tempfileUri) > 0 and CHARINDEX('"',@tempfileUri) < 1 set @tempfileUri = QUOTENAME(@tempfileUri,'"')
        set @cmd = 'cscript ' + @tempfileUri + ' //E:vbscript //NOLOGO '
        
        --batch mode or interactive
        if @batchMode=1 
            set @cmd = @cmd + '//B '
        else
            set @cmd = @cmd + '//I '
        
        --should script timeout after x seconds?
        if @timeoutSecs is not null
            set @cmd = @cmd + '//T:' + CAST(@timeoutSecs as nvarchar(18)) + ' '
        
        set @cmd = @cmd + isnull(@arguments,'')
        --select @cmd --if debugging enable this line to see the script file / etc
        
        EXEC master..xp_cmdshell @cmd --if required we can capture output as has been done in GetTempDirectory
        
        exec deleteTempTextFile @tempfileUri --tidyup the temp script - disable this line for debugging
            
    end
    go
    if OBJECT_ID('sp_RunScriptByID') is not null drop proc sp_RunScriptByID
    go
    create proc sp_RunScriptByID
    (
        @scriptId bigint
        , @arguments nvarchar(max)
    )
    as
    begin
    
        declare @timeoutSecs int
        , @batchMode bit
        , @script nvarchar(max)
        , @tempfileUri nvarchar(max)
        , @cmd varchar(8000) --has to be varchar rather than nvarchar due to xp_cmdshell implementation
            
        select @timeoutSecs=timeoutSecs 
        , @batchMode = batchMode
        , @script = script
        from vbsScripts 
        where id = @scriptId
        
        exec sp_RunScript
            @script
            , @arguments
            , @timeoutSecs
            , @batchMode
            , @tempfileUri out
        
    end
    go
    if OBJECT_ID('sp_RunScriptByName') is not null drop proc sp_RunScriptByName
    go

    /*
    provides a friendly interface to sp_RunScriptByID
    */
    create proc sp_RunScriptByName
    (
        @scriptName nvarchar(256)
        , @arguments nvarchar(max)
    )
    as
    begin

        declare @id bigint

        select @id = id
        from vbsScripts 
        where name = @scriptName

        exec sp_RunScriptByID @id, @arguments

    end
    go

Example Usage:

    --demo

    --register a new script in the scripts table
    insert vbsScripts 
    select 'demo', '
    option explicit
    dim objFSO, i, path
    path = "c:\example1\"
    wscript.echo "hello" ''show what console output looks like (if interactive)
    for i = 0 to wscript.Arguments.Count-1 ''show that we can handle command line arguments
        wscript.echo wscript.arguments.item(i)
    next
    set objFSO = CreateObject("Scripting.FileSystemObject")    
    if not objFSO.FolderExists(path) then
        on error resume next
        objFSO.CreateFolder(path) ''create a folder to demonstrate that the vbs is running / affecting the outside environment
        if err.number = 0 then
            wscript.echo "Folder " & path & " successfully created."
        else
            wscript.echo "Folder " & path & " was not created.  " & cstr(Err.number) & ": " & Err.Description
            err.clear
        end if
        on error goto 0
    else
        wscript.echo "Folder " & path & " already exists."
    end if
    set objFSO = Nothing
    wscript.echo "Done"
    ', null, 0
    go

    --execute above script via friendly name
    sp_RunScriptByName 'demo','"this is a demo" "hopefully it will work" yes it does'

2011-12-09

SQL Optimisation :: Convert Primary Keys to Clustered Indexes

We recently spotted that one of the systems we use did not, out of the box, make use of clustered indexes on any of its tables, and as a result performance was not as good as it could have been. The below script allows for an easy-win optimisation: it finds all tables which do not have a clustered index, and converts each table's primary key into a clustered index.

In a future post I’ll put up more details on what clustered indexes are, why you should always (pretty much) use them and other useful info for anyone playing with databases.

--This script is designed for MS SQL Server
use DbNameToOptimise
go

--disable all constraints on all tables (to avoid these causing errors when altering the indexes)
sp_msforeachtable 'alter table ? nocheck constraint all'
go
 
declare @sqls table(object_id bigint, sort int, sql nvarchar(max))
 
insert @sqls
select t.object_id, ic.key_ordinal, case when ic.key_ordinal=1 then  'CREATE UNIQUE CLUSTERED INDEX [' + i.name + '] ON [' + t.name + ']([' else ',[' end + c.name + case when ic.key_ordinal=icagg.maxko then ']) WITH DROP_EXISTING' else ']' end sql
from sys.tables t
inner join sys.indexes i on t.object_id = i.object_id
inner join sys.index_columns ic on i.object_id=ic.object_id and  i.index_id = ic.index_id
inner join sys.columns c on ic.object_id = c.object_id and ic.column_id = c.column_id
inner join (
      select object_id, index_id, MAX(key_ordinal) maxko from sys.index_columns group by object_id,index_id
) icagg on i.object_id = icagg.object_id and i.index_id = icagg.index_id
where t.is_ms_shipped=0
and i.is_primary_key=1
and not exists (
      --ignore tables which already have a clustered index
      select 1
      from sys.indexes i2
      where t.object_id = i2.object_id
      and i2.type = 1
)
order by t.name, i.name, ic.key_ordinal
 
declare @objid bigint
, @sql nvarchar(max)
 
while exists (select top 1 1 from @sqls)
begin
      set @sql=''
      select top 1 @objid=object_id from @sqls
      select @sql = @sql + sql from @sqls where object_id=@objid order by sort
      delete from @sqls where object_id = @objid
      exec (@sql)
end
 
go

--reenable constraints to leave the db as we found it (aside from the fix)
sp_msforeachtable 'alter table ? check constraint all'
go
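The while-loop above stitches the per-column fragments for each table (one row per key column, ordered by key_ordinal) into a single CREATE UNIQUE CLUSTERED INDEX statement before executing it. The grouping logic can be sketched in Python, using hypothetical fragment data purely for illustration:

```python
from collections import defaultdict

def assemble_statements(rows):
    """rows: (object_id, sort, fragment) tuples, mimicking the @sqls table.
    Each table's fragments are concatenated in key-column order to form
    one complete CREATE INDEX statement, as the T-SQL loop does."""
    grouped = defaultdict(list)
    for object_id, sort, fragment in rows:
        grouped[object_id].append((sort, fragment))
    # sort each table's fragments by their key ordinal, then join them
    return [''.join(f for _, f in sorted(parts)) for parts in grouped.values()]
```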