Developer42

2016-02-03

PowerShell & SQL :: A Quick Fix for the SQLPS Problems

Filed under: Microsoft, powershell, SQL Server, Technology — Developer42 @ 18:19

There are a couple of issues with the SQLPS PowerShell module.

  1. It changes the current location to SQLSERVER:\ (so your prompt becomes PS SQLSERVER:\>).
  2. It may cause warnings (see end of this post for examples).

Calling the Import-Module-SQLPS function (defined below) instead of running Import-Module SQLPS will prevent both issues.

cls
function Import-Module-SQLPS {
    #pushd and popd to avoid import from changing the current directory (ref: http://stackoverflow.com/questions/12915299/sql-server-2012-sqlps-module-changing-current-location-automatically)
    #3>&1 puts warning stream to standard output stream (see https://connect.microsoft.com/PowerShell/feedback/details/297055/capture-warning-verbose-debug-and-host-output-via-alternate-streams)
    #out-null blocks that output, so we don't see the annoying warnings described here: https://www.codykonior.com/2015/05/30/whats-wrong-with-sqlps/
    push-location
    import-module sqlps 3>&1 | out-null
    pop-location
}

"Is SQLPS Loaded?"
if(get-module sqlps){"yes"}else{"no"}

Import-Module-SQLPS

"Is SQLPS Loaded Now?"
if(get-module sqlps){"yes"}else{"no"}

NB: From PowerShell 3.0 onwards you don’t strictly need to import modules; for SQLPS, however, I’d recommend that you do import it explicitly. This ensures you control when in your script the import happens, rather than having the first command from that module (e.g. an Invoke-SqlCmd statement) trigger the load, changing your directory and spewing warnings partway through the script.
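To see why this matters, consider what happens if you skip the explicit import entirely (a sketch, assuming SQLPS is installed but not yet loaded; the server instance here is a placeholder):

```powershell
Get-Location                                        # e.g. C:\Users\me
Invoke-Sqlcmd -Query "select 1" -ServerInstance "." # auto-loads SQLPS as a side effect
Get-Location                                        # now SQLSERVER:\ - the load changed your location
```

The auto-load fires wherever the first SQLPS cmdlet happens to appear, which may be deep inside your script.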


2015-12-04

SQL Server :: Compare Database Schemas

This is a script to list basic differences in table definitions between two databases.

Background Waffle
When we make database changes in a transactional system, we need to ensure that these won’t adversely affect our BI solution. Normally our developers give us this information when we plan which items to put into a release, so we can liaise with BI; unfortunately the developers’ machines are down at present due to a disaster at the data centre of the company to whom we outsourced development. Since the changes for the release are already in our UAT environment (and only those changes are there), we could determine what would be changing in production by comparing the schemas of the two environments.

My first thought was to use the Database Comparison Tool that comes with Visual Studio. However, whilst this does a great job for a developer looking for and resolving differences, it’s not so easy to communicate this information to others (i.e. giving a before-and-after image of columns side by side). You have to either click on each difference to get a definition of the before and after, then make notes or take screenshots, or you have to generate the DDL, in which case you don’t see the before picture; only the code to amend from the current to the to-be definition.

Instead I realised that I could simply pull the definitions of the tables & columns from SQL, compare these and list the differences; resulting in the code below.
This code can handle where both databases are on the same instance, or where they’re on different instances. For the latter scenario you’d need to create a linked server.
If you frequently compare different combinations of databases and want to avoid amending the instance and catalog names in the code each time, you may want to look into views or synonyms (aliases could potentially work too, if the catalog name is consistent across instances).
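For example, a synonym can hide the server and catalog names so the query body stays constant between comparisons (a sketch; the names here are placeholders, and whether this suits you depends on your environment):

```sql
--point these at whichever pair of environments you're comparing
create synonym dbo.uatTables  for [myUatDbServer\uatInstance].[myUatDbCatalog].sys.tables;
create synonym dbo.uatColumns for [myUatDbServer\uatInstance].[myUatDbCatalog].sys.columns;
--the comparison query can then reference dbo.uatTables etc.,
--and only the synonym definitions need amending per comparison
```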

Since our BI solution uses replication to get the data from the transactional system, I also included information on which columns were replicated; thus we can filter on those replicated to see just those which BI would be interested in, or can ignore replication to see all differences.

Caveats

  1. We’re not comparing everything; only basic table and column data.
  2. The maxLength value gives length in bytes; not characters (see previous post on how to correct for this).
  3. There are probably other issues; this was a quick fix to resolve an immediate issue which worked for us; there may be issues we’ve not encountered (e.g. maybe the linked server only gives access to a subset of the tables; thus giving false responses)…
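On caveat 2, a rough correction is to halve max_length for the double-byte n-types (a sketch; -1 denotes the (max) types and is left alone):

```sql
select c.name ColumnName
, case
       when c.max_length = -1 then -1 --(max) types
       when ty.name in ('nchar','nvarchar') then c.max_length / 2 --2 bytes per character
       else c.max_length
  end ColMaxLengthChars
from sys.columns c
inner join sys.types ty
       on ty.system_type_id = c.system_type_id
       and ty.user_type_id = c.user_type_id
```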

Code

set transaction isolation level read uncommitted
go

;with uat as
(
       select t.name TableName
       , c.Name ColumnName
       , ty.name ColType
       , c.max_length ColMaxLength
       , coalesce(c.is_nullable,0) ColIsNullable
       , c.scale ColScale
       , c.[precision] ColPrecision
       , coalesce(c.collation_name,'') ColCollation
       , c.is_replicated
       from [myUatDbServer\uatInstance].[myUatDbCatalog].sys.tables t
       inner join [myUatDbServer\uatInstance].[myUatDbCatalog].sys.columns c
              on c.object_id = t.object_id
       inner join [myUatDbServer\uatInstance].[myUatDbCatalog].sys.types ty
              on ty.system_type_id = c.system_type_id
              and ty.user_type_id = c.user_type_id
)
, prd as
(
       select t.name TableName
       , c.Name ColumnName
       , ty.name ColType
       , c.max_length ColMaxLength
       , coalesce(c.is_nullable,0) ColIsNullable
       , c.scale ColScale
       , c.[precision] ColPrecision
       , coalesce(c.collation_name,'') ColCollation
       , c.is_replicated
       from [myProdDbServer\prodInstance].[myProdDbCatalog].sys.tables t
       inner join [myProdDbServer\prodInstance].[myProdDbCatalog].sys.columns c
              on c.object_id = t.object_id
       inner join [myProdDbServer\prodInstance].[myProdDbCatalog].sys.types ty
              on ty.system_type_id = c.system_type_id
              and ty.user_type_id = c.user_type_id
)
select coalesce(uat.TableName, prd.TableName) TableName
, coalesce(uat.ColumnName, prd.ColumnName) ColumnName
, case
       when prd.TableName is null and not exists (select top 1 1 from prd x where x.TableName = uat.TableName) then 'Add Table'
       when uat.TableName is null and not exists (select top 1 1 from uat x where x.TableName = prd.TableName)  then 'Remove Table'
       when prd.ColumnName is null then 'Add Column'
       when uat.ColumnName is null then 'Remove Column'
       else 'Change Column Definition'
end [AXtion] --our transactional system is Dynamics AX; I'm hilarious :S
, prd.ColType ColTypeFrom
, uat.ColType ColTypeTo
, prd.ColMaxLength ColMaxLengthFrom --not going to fuss about char byte size for now; just want to get a list of changes
, uat.ColMaxLength ColMaxLengthTo --not going to fuss about char byte size for now; just want to get a list of changes
, prd.ColIsNullable ColIsNullableFrom
, uat.ColIsNullable ColIsNullableTo
, prd.ColCollation ColCollationFrom
, uat.ColCollation ColCollationTo
, prd.ColPrecision ColPrecisionFrom
, uat.ColPrecision ColPrecisionTo
, prd.ColScale ColScaleFrom
, uat.ColScale ColScaleTo
, prd.is_replicated PrdIsReplicated --\_these aren't compared; just returned to make it easy to tell what's interesting to BI
, uat.is_replicated UatIsReplicated --/
from uat
full outer join prd
       on prd.TableName = uat.TableName
       and prd.ColumnName = uat.ColumnName
where prd.TableName is null
or uat.TableName is null
or prd.ColumnName is null
or uat.ColumnName is null
or (uat.ColType != prd.ColType)
or (uat.ColMaxLength != prd.ColMaxLength)
or (uat.ColIsNullable != prd.ColIsNullable) 
or (uat.ColCollation != prd.ColCollation) 
or (uat.ColPrecision != prd.ColPrecision) 
or (uat.ColScale != prd.ColScale)
order by coalesce(uat.TableName, prd.TableName) 
, coalesce(uat.ColumnName, prd.ColumnName) 

2014-09-17

T-SQL: Generate Series: Getting a list of numbers in a given range.

Filed under: Microsoft, SQL Server, Technology — Developer42 @ 01:27

I recently came across the Postgres generate_series function whilst reading a blog post.
So far as I can tell, there’s no equivalent in T-SQL. To make up for this, I coded my own, making use of the recursive nature of common table expressions:

create function dbo.generate_series
(
	  @start bigint
	, @stop bigint
	, @step bigint = 1
	, @maxResults bigint = 0 --0 = unlimited
)
returns @results table(n bigint)
as
begin

	--avoid infinite loop (i.e. where we're stepping away from stop instead of towards it)
	if @step = 0 return
	if @start > @stop and @step > 0 return
	if @start < @stop and @step < 0 return
	
	--ensure we don't overshoot
	set @stop = @stop - @step

	--treat negatives as unlimited
	set @maxResults = case when @maxResults < 0 then 0 else @maxResults end

	--generate output
	;with myCTE (n,i) as 
	(
		--start at the beginning
		select @start
		, 1
		union all
		--increment in steps
		select n + @step
		, i + 1
		from myCTE 
		--ensure we've not overshot (accounting for direction of step)
		where (@maxResults=0 or i<@maxResults)
		and 
		(
			   (@step > 0 and n <= @stop)
			or (@step < 0 and n >= @stop)
		)  
	)
	insert @results
	select n 
	from myCTE
	option (maxrecursion 0) --sadly we can't use a variable for this; however checks above should mean that we have a finite number of recursions / @maxResults gives users the ability to manually limit this 

	--all good	
	return
	
end

Example Usage:

--check we get expected results
select * from generate_series(1, 10, default, default)
select * from generate_series(10, 5, -1, default)
select * from generate_series(1, 10, 4, default)
select * from generate_series(1, 10, default, 2)
select * from generate_series(1, 10, 4, -1)

--check we don't get results if we have "invalid" input
select * from generate_series(1, 10, 0, default)
select * from generate_series(10, 1, default, default)
select * from generate_series(10, 5, 1, default)
select * from generate_series(1, 10, -4, default)

NB: Should you wish to generate a series of dates instead of a series of numbers, check my comments here: http://stackoverflow.com/questions/1478951/generate-a-resultset-of-incrementing-dates-in-tsql/25881077#25881077
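As a flavour of that approach, the function above can be combined with dateadd to turn a number series into a date series (a sketch; see the linked answer for fuller options):

```sql
--one row per day for the first week of 2014
select dateadd(day, n, '2014-01-01') theDate
from dbo.generate_series(0, 6, default, default)
```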

2012-08-21

Run VBScript from SQL

Filed under: Microsoft, SQL Server, Technology — Developer42 @ 18:13

Though it’s fairly well publicised how to call an existing VBS file from SQL, or how to execute VBScript within a SQL Agent job, there’s not much info out there on how to have SQL run VBS directly. The code below allows you to manage your scripts in a database table, creating temp files to host and run the scripts as required. The sp_RunScript stored proc can also be used to run any VBS script, including dynamically generated code.
As always, feedback, suggestions, etc are welcome.

Code:

    use [insert-db-name-here]
    go
    ----use below code to enable commands (required for xp_cmdshell to work)
    --exec sp_configure 'show advanced options', 1
    --go
    --reconfigure
    --go
    --exec sp_configure 'xp_cmdshell', 1  
    --go
    --reconfigure
    --go
    if OBJECT_ID('vbsScripts') is not null drop table vbsScripts
    go
    create table vbsScripts 
    (
        id bigint not null identity(1,1) constraint pk_vbsScripts primary key clustered 
        , name nvarchar(256) not null constraint uk_vbsScripts_name unique
        , script nvarchar(max) not null
        , timeoutSecs int null constraint df_vbsScripts_timeoutSecs default(86400)--leave as null if you don't want a timeout / defaults to 1 day / 24*60*60
        , batchMode bit not null constraint df_vbsScripts_batchMode default(1)
    )
    go
    /*
    Gets the temp directory from environment variables
    usage:
        declare @tempPath nvarchar(max)
        exec GetTempDirectory @tempPath out 
        select @tempPath 
    */
    if OBJECT_ID('GetTempDirectory') is not null drop proc GetTempDirectory
    go
    create proc GetTempDirectory(@path nvarchar(max) out)
    as
    begin
        set @path = ''
        declare @tempTable table(data nvarchar(max))
        insert @tempTable exec master..xp_cmdshell 'echo %temp%'
        select top 1 @path = data from @tempTable
        if SUBSTRING(@path,len(@path),1) <> '\' set @path = @path + '\'
    end
    go
    /*
    Creates a unique filename (using guid to ensure uniqueness and datetime to make the name friendlier)
    usage:
        declare @fn nvarchar(max)
        exec GetTempFilename @fn out 
        select @fn 
    */
    if OBJECT_ID('GetTempFilename') is not null drop proc GetTempFilename
    go
    create proc GetTempFilename(@fn nvarchar(max) out)
    as
    begin
        --exec GetTempDirectory @fn out --can just use environment variable - originally had issues testing as was looking at the wrong user's temp directory :/
        --set @fn = @fn + 'sqlTemp_' + replace(replace(replace(convert(nvarchar(24), getutcdate(),127),'-',''),':',''),'.','') + '_' + CAST(NEWID() as nvarchar(36)) + '.tmp'
        set @fn = '%temp%\' + 'sqlTemp_' + replace(replace(replace(convert(nvarchar(24), getutcdate(),127),'-',''),':',''),'.','') + '_' + CAST(NEWID() as nvarchar(36)) + '.tmp'
    end
    go
    if OBJECT_ID('dbo.fn_EscapeDosCharachters') is not null drop function dbo.fn_EscapeDosCharachters
    go
    create function dbo.fn_EscapeDosCharachters
    (
        @text nvarchar(max)
    )
    returns nvarchar(max)
    as
    begin
        --http://www.dostips.com/?t=Snippets.Escape
        set @text = REPLACE(@text,'^','^^')
        set @text = REPLACE(@text,'!','^!')
        set @text = REPLACE(@text,'&','^&')
        set @text = REPLACE(@text,'|','^|')
        set @text = REPLACE(@text,'%','%%')

        --based on experience
        set @text = REPLACE(@text,'>','^>')
        set @text = REPLACE(@text,'<','^<')
        return @text
    end
    go
    if OBJECT_ID('createTempTextFile') is not null drop proc createTempTextFile
    go
    create proc createTempTextFile
    (
        @fn nvarchar(max) out 
                          --the filename to output to (nb: environment variables don't currently work; e.g. you can't use %temp%\myFile.vbs)
                          --works fine with spaces in filename (so far at least)
                          --if user passes null a temporary filename will be auto allocated & returned in this variable
        , @content nvarchar(max)
    )
    as
    begin

        declare @charPos int
        , @cmd varchar(8000) --has to be varchar rather than nvarchar due to xp_cmdshell implementation

        if @fn is null or LEN(@fn)=0
        begin
            exec GetTempFilename @fn out
        end

        set @cmd = '@echo.>' + @fn --create a new file for our script output
        EXEC master..xp_cmdshell @cmd, no_output

        set @content = replace(@content,char(13) + char(10), char(10))--ensure uniform line endings (i.e. \r\n -> \n)
        set @content = replace(@content,char(13), char(10))--ensure uniform line endings (i.e. \r -> \n)
        set @content = @content + CHAR(10) --ensure last character of script is new line
        set @charPos = CHARINDEX(char(10),@content)
        while (@charPos > 0)
        begin
            --todo: consider what additional escaping is required to prevent injection issues
            set @cmd = '@echo.' + dbo.fn_EscapeDosCharachters(SUBSTRING(@content,1,@charPos-1)) + '>> ' + @fn
            EXEC master..xp_cmdshell @cmd, no_output
            set @content = SUBSTRING(@content,@charPos+1,len(@content))
            set @charPos = CHARINDEX(char(10),@content)
        end

    end 
    go
    if OBJECT_ID('deleteTempTextFile') is not null drop proc deleteTempTextFile
    go
    create proc deleteTempTextFile
    (
        @fn nvarchar(max)
    )
    as
    begin
        declare @cmd varchar(8000)
        if CHARINDEX(' ',@fn)>0 and CHARINDEX('"',@fn)=0 set @fn = QUOTENAME(@fn,'"') --quote paths containing spaces (unless already quoted)
        set @cmd = 'del ' + @fn
        EXEC master..xp_cmdshell @cmd, no_output
    end
    go
    if OBJECT_ID('sp_RunScript') is not null drop proc sp_RunScript
    go
    create proc sp_RunScript
    (
        @script nvarchar(max)
        , @arguments nvarchar(max)
        , @timeoutSecs int = null
        , @batchMode bit = 1
        , @tempfileUri nvarchar(max) out
    )
    as
    begin
    
        declare @cmd varchar(8000) --has to be varchar rather than nvarchar due to xp_cmdshell implementation
        
        exec createTempTextFile @tempfileUri out, @script
        
        if CHARINDEX(' ',@tempfileUri)>0 and CHARINDEX('"',@tempfileUri)=0 set @tempfileUri = QUOTENAME(@tempfileUri,'"') --quote paths containing spaces (unless already quoted)
        set @cmd = 'cscript ' + @tempfileUri + ' //E:vbscript //NOLOGO '
        
        --batch mode or interactive
        if @batchMode=1 
            set @cmd = @cmd + '//B '
        else
            set @cmd = @cmd + '//I '
        
        --should script timeout after x seconds?
        if @timeoutSecs is not null
            set @cmd = @cmd + '//T:' + CAST(@timeoutSecs as nvarchar(18)) + ' '
        
        set @cmd = @cmd + isnull(@arguments,'')
        --select @cmd --if debugging enable this line to see the script file / etc
        
        EXEC master..xp_cmdshell @cmd --if required we can capture output as has been done in GetTempDirectory
        
        exec deleteTempTextFile @tempfileUri --tidyup the temp script - disable this line for debugging
            
    end
    go
    if OBJECT_ID('sp_RunScriptByID') is not null drop proc sp_RunScriptByID
    go
    create proc sp_RunScriptByID
    (
        @scriptId bigint
        , @arguments nvarchar(max)
    )
    as
    begin
    
        declare @timeoutSecs int
        , @batchMode bit
        , @script nvarchar(max)
        , @tempfileUri nvarchar(max)
        , @cmd varchar(8000) --has to be varchar rather than nvarchar due to xp_cmdshell implementation
            
        select @timeoutSecs=timeoutSecs 
        , @batchMode = batchMode
        , @script = script
        from vbsScripts 
        where id = @scriptId
        
        exec sp_RunScript  
			@script
			, @arguments 
			, @timeoutSecs 
			, @batchMode 
			, @tempfileUri out 
        
    end
    go
    if OBJECT_ID('sp_RunScriptByName') is not null drop proc sp_RunScriptByName
    go

    /*
    provides a friendly interface to sp_RunScriptByID
    */
    create proc sp_RunScriptByName
    (
        @scriptName nvarchar(256)
        , @arguments nvarchar(max)
    )
    as
    begin

        declare @id bigint

        select @id = id
        from vbsScripts 
        where name = @scriptName

        exec sp_RunScriptByID @id, @arguments

    end
    go

Example Usage:

    --demo

    --register a new script in the scripts table
    insert vbsScripts 
    select 'demo', '
    option explicit
    dim objFSO, i, path
    path = "c:\example1\"
    wscript.echo "hello" ''show what console output looks like (if interactive)
    for i = 0 to wscript.Arguments.Count-1 ''show that we can handle command line arguments
        wscript.echo wscript.arguments.item(i)
    next
    set objFSO = CreateObject("Scripting.FileSystemObject")    
    if not objFSO.FolderExists(path) then
		on error resume next
        objFSO.CreateFolder(path) ''create a folder to demonstrate that the vbs is running / affecting the outside environment
        if err.number = 0 then
			wscript.echo "Folder " & path & " successfully created."
		else
			wscript.echo "Folder " & path & " was not created.  " & cstr(Err.number) & ": " & Err.Description
			err.clear
		end if
        on error goto 0
    else
        wscript.echo "Folder " & path & " already exists."
    end if
    set objFSO = Nothing
    wscript.echo "Done"
    ', null, 0
    go

    --execute above script via friendly name
    sp_RunScriptByName 'demo','"this is a demo" "hopefully it will work" yes it does'

2011-12-09

SQL Optimisation :: Convert Primary Keys to Clustered Indexes

We recently spotted that one of the systems we use did not make use of clustered indexes on any tables out of the box. As a result performance was not as good as it could have been. The below script allows for an easy win optimisation by finding all tables which do not include a clustered index, and converting the table’s primary key to be a clustered index.

In a future post I’ll put up more details on what clustered indexes are, why you should always (pretty much) use them and other useful info for anyone playing with databases.
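For a sense of what the script below produces, the generated DDL for a table whose primary key index [PK_Orders] covers [OrderId] would be shaped like this (illustrative names):

```sql
--rebuild the existing primary key index as clustered, keeping the same name and key columns
CREATE UNIQUE CLUSTERED INDEX [PK_Orders] ON [Orders]([OrderId]) WITH DROP_EXISTING
```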

--This script is designed for MS SQL Server
use DbNameToOptimise
go

--disable all constraints on all tables (to avoid these causing errors when altering the indexes)
sp_msforeachtable 'alter table ? nocheck constraint all'
go
 
declare @sqls table(object_id bigint, sort int, sql nvarchar(max))
 
insert @sqls
select t.object_id, ic.key_ordinal, case when ic.key_ordinal=1 then  'CREATE UNIQUE CLUSTERED INDEX [' + i.name + '] ON [' + t.name + ']([' else ',[' end + c.name + case when ic.key_ordinal=icagg.maxko then ']) WITH DROP_EXISTING' else ']' end sql
from sys.tables t
inner join sys.indexes i on t.object_id = i.object_id
inner join sys.index_columns ic on i.object_id=ic.object_id and  i.index_id = ic.index_id
inner join sys.columns c on ic.object_id = c.object_id and ic.column_id = c.column_id
inner join (
      select object_id, index_id, MAX(key_ordinal) maxko from sys.index_columns group by object_id,index_id
) icagg on i.object_id = icagg.object_id and i.index_id = icagg.index_id
where t.is_ms_shipped=0
and i.is_primary_key=1
and not exists (
      --ignore tables which already have a clustered index
      select 1
      from sys.indexes i2
      where t.object_id = i2.object_id
      and i2.type = 1
)
order by t.name, i.name, ic.key_ordinal
 
declare @objid bigint
, @sql nvarchar(max)
 
while exists (select top 1 1 from @sqls)
begin
      set @sql=''
      select top 1 @objid=object_id from @sqls
      select @sql = @sql + sql from @sqls where object_id=@objid order by sort
      delete from @sqls where object_id = @objid
      exec (@sql)
end
 
go

--reenable constraints to leave the db as we found it (aside from the fix)
sp_msforeachtable 'alter table ? check constraint all'
go
