Trigger Happy

Transfer files with RDP

In the spirit of “you learn something new every day”: I’ve been using Remote Desktop for years.  I use it almost every day.  Sometimes I’m remoted into machines that are themselves remoted into other machines, and so on (it can be very confusing).  But I did not know that you can map a drive on the machine you’re remoting into back to the machine you’re remoting from.

In many scenarios you are remoting into machines on the same network, so transferring files is not an issue.  Sometimes, though, you’re remoting into a machine on the other end of the internet – it may not be running an FTP service, or it may just be a pain to upload files somewhere that the machine can download them from securely or conveniently.

If, before you establish the RDP connection, you go into options and look at the local resources tab (and depending on your version of RDP, click the “more” button) you’ll see you can check off “drives”.  Once in the session, you’ll see the local drives under “My Computer”.  It’s not the fastest interface, but it is pretty slick.
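If you save your connections as .rdp files, the same checkbox corresponds to the drive-redirection setting in the file.  As a sketch, the relevant line looks like this (the `*` redirects all drives; you can list specific drive letters instead, e.g. `drivestoredirect:s:C:;`):

    drivestoredirect:s:*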

SQL MVP Nominee

I am proud to say that I have been nominated for the MVP award.  It is gratifying to know that somebody out there thinks that what I do is helpful.  I will find out in July if I made it.

How to run a server-side trace

This is my first “how to” post.  In stark contrast to the “for dummies” and idiot’s books, my posts will assume a little basic knowledge.  For example, when I say “run profiler” below, I assume you don’t need a screenshot showing you where in the start menu it probably is on your computer.  If you need a little more detail, that doesn’t make you a dummy or an idiot, just post a comment and I’ll do my best to help out.

I assume you already know how to run a profiler trace.  When I distinguish a “profiler trace” from a “server-side trace,” I mean a trace that is run from within Profiler.  Both types run traces on the server, but a profiler trace captures the output and displays it on your screen, while a server-side trace sends the output directly to an output device (usually a file on the server) with no interaction from the client while it is running.  Linchi Shea’s article illustrates why you might want a server-side trace – this article is not about the “why.”  I will only mention my rule of thumb: if the server you are tracing is a production server, or if other users are relying on their connections to it in any way (which is pretty much the definition of a production server, though it could be a UAT server or some other well-used testing server), I don’t go near it with a profiler trace.  (Well, you’ll see later that I go *near* it, but you know what I mean.)

In broad strokes, we will be generating a script for our trace from profiler, running that script on our server, then looking at a couple of ways to analyze the output.

Here are the steps:

  • Run profiler and configure the trace as you would normally.  As you probably know, it’s a good idea to add some filters, limit the number of events you are tracing and limit the columns to what you are looking for – this post isn’t about “the what” either.  What to trace is dependent on what you’re looking for – I hope you know that if you’re messing with profiler in the first place.
  • Run the trace in profiler (yes, you heard me) but stop it right away.  If you feel you absolutely can’t even do this to your server, you won’t be able to generate the script from profiler, you’ll have to go through the tedious process of writing the script yourself.  We need to do this in order to get access to the menu item we’ll need in the next step.
  • Now that you have access to the File menu, go File | Export | Script Trace Definition and choose the appropriate option for your server.  You will be asked to save the script file somewhere.
  • Open the script file in Management Studio.  You will need to make one change: you will see the string “InsertFileNameHere” (probably on line 19).  The comment above this line has useful information – read it.  I will only add that, depending on the nature of your server and your trace, your DBA might be quite particular about where files like this go.  If you’re not sure, run the following line to get the default backup directory and put it there – this should be relatively safe.

EXEC master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'BackupDirectory'
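For context, the string you are replacing lives in the sp_trace_create call that Profiler generates.  After editing, that line might look something like this (the path here is a hypothetical example, and @rc, @TraceID and @maxfilesize are declared earlier in the generated script; note that sp_trace_create appends the .trc extension for you, so you don’t include it):

EXEC @rc = sp_trace_create @TraceID output, 0, N'E:\SQLBackups\MyTrace', @maxfilesize, NULL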

  • Execute the script – take note of the number it gives you at the end for TraceID – you will need it to shut down the trace.  At this point your trace is running.  If you shut down your computer and go on holidays, it will continue running on the server (until it is stopped or the SQL Server service restarts) – it is a server-side trace, after all.
  • You can view the results of the trace at any time in profiler by just double-clicking on the TRC file you chose in the script – even while it is running (note that in Windows 7 you probably have to explicitly give yourself access to the file).  You can close the results and re-open them to get fresh entries.  This is not the same as a profiler trace because it’s just showing you what was in the file when you double-clicked it – profiler is not actively retrieving updates from your server.  This approach has most of the advantages of running profiler traces without the disadvantages.
  • When you need to stop the trace, you’ll need the following code:

DECLARE @TraceID int
SET @TraceID = 2  -- the TraceID returned when you created the trace

EXEC sp_trace_setstatus @TraceID, 0
EXEC sp_trace_setstatus @TraceID, 2

  • The call with status 0 stops the trace; the call with status 2 deletes the trace definition from the server.  Note that you need to supply the TraceID you made note of above – if you didn’t make note of it, you’ll have to figure it out by looking at the results of this:

SELECT * FROM fn_trace_getinfo(NULL)

  • Hint: it’s probably the one with the highest number
  • You can get the trace results into SQL Server with the following command:

SELECT * FROM fn_trace_gettable('InsertFileNameHere',1)
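For heavier analysis, it is often handy to load the results into a real table first and aggregate from there.  A minimal sketch (the TraceResults table name is just an example, and the aggregation assumes you included the TextData and Duration columns in your trace definition):

-- load the trace file(s) into a table for easier querying
SELECT * INTO TraceResults
FROM fn_trace_gettable('InsertFileNameHere', DEFAULT)

-- e.g. total duration by statement text
-- (TextData comes back as ntext, so cast it before grouping)
SELECT TOP 10
       CAST(TextData AS nvarchar(4000)) AS Statement,
       COUNT(*) AS Executions,
       SUM(Duration) AS TotalDuration
FROM TraceResults
GROUP BY CAST(TextData AS nvarchar(4000))
ORDER BY TotalDuration DESC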

That’s about it – there are many options with rotating files once they are of a certain size and stuff like that – read up on it and have fun – and don’t forget to stop your trace 🙂

Explanation for TSQL Challenge 14

First I want to mention that the challenges on databasechallenges.com are a blast.  I recommend trying some of them to hone your TSQL skills.  I haven’t had as much time to try them as I would have liked, but I submitted solutions to a couple, and one was chosen as an accepted solution.

Jacob Sebastian has requested that I write an explanation for my submission, so here goes…

For a statement of the challenge itself, go here: http://beyondrelational.com/blogs/tc/archive/2009/09/28/tsql-challenge-14-identify-the-longest-sequence-of-characters-in-a-string.aspx

For my posted solution, go here: http://databasechallenges.com/SQLServer/TSQL/Challenge14/Submissions/Mike_DeFehr

In a nutshell, you’re given some strings and asked to identify and quantify repeating characters within each string.  As with many TSQL problems (and particularly with problems on the challenges site), the solution begins with a table of numbers, or tally table.  A table of numbers is just that: a table with one row for each of the integers from 1 to as many as you need.  Typically you might keep such a table, with potentially millions of rows, somewhere on your SQL Server – the rules of the challenge do not allow you to create one, but they do allow you to use one of the system tables (spt_values) as a table of numbers.  The following query gets you a table of a couple of thousand numbers, which is plenty for this challenge:

    SELECT number
    FROM master..spt_values
    WHERE number > 0
        AND type = 'P'

One of the things we can do with a table of numbers is to effectively turn a string into an array of characters.  By feeding the number from the table into the SUBSTRING function, this can be accomplished quite easily:

    DECLARE @String varchar(100)
    SET @String = 'abcdefghijk'

    SELECT      number AS ArrayIndex,
                SUBSTRING(@String,number,1) AS [Character]
    FROM master..spt_values
    WHERE number > 0
          AND type = 'P'
          AND number <= LEN(@String)

    ArrayIndex  Character
    ----------- ---------
    1           a
    2           b
    3           c
    4           d
    5           e
    6           f
    7           g
    8           h
    9           i
    10          j
    11          k

    (11 row(s) affected)

Note that if you like a zero-based array, a few subtle changes are required, but it’s no problem:

    DECLARE @String varchar(100)
    SET @String = 'abcdefghijk'

    SELECT  number AS ArrayIndex,
            SUBSTRING(@String,number+1,1) AS [Character]
    FROM master..spt_values
    WHERE number >= 0
        AND type = 'P'
        AND number < LEN(@String)

    ArrayIndex  Character
    ----------- ---------
    0           a
    1           b
    2           c
    3           d
    4           e
    5           f
    6           g
    7           h
    8           i
    9           j
    10          k

    (11 row(s) affected)

Now that we have our string in array form, we can partition a new row number by Character.  This puts all like characters together whether they are in sequence or not.  This is where the solution starts to depart from traditional iterative thinking.  We will not be “going through” the string looking at the characters and checking whether they are in sequence.  We are establishing properties about our set of characters that we will use to form our solution.  Here is what it looks like with the partitioned number.  I’ll use a slightly more interesting string in this example.

    DECLARE @String varchar(100)
    SET @String = 'AFBB04BFFF5'

    SELECT  Number AS ArrayIndex,
            SUBSTRING(@String,Number,1) AS [Character],
            ROW_NUMBER() OVER (
                    PARTITION BY SUBSTRING(@String,Number,1)
                    ORDER BY Number) AS PartitionedNumber
    FROM master..spt_values
    WHERE Number <= LEN(@String)
        AND number > 0
        AND [type] = 'P'

    ArrayIndex  Character PartitionedNumber
    ----------- --------- --------------------
    5           0         1
    6           4         1
    11          5         1
    1           A         1
    3           B         1
    4           B         2
    7           B         3
    2           F         1
    8           F         2
    9           F         3
    10          F         4

    (11 row(s) affected)

Notice that when you partition a row number, it starts over every time the character changes.  Our string is “out of order” now, but that is fine – this is one of the challenges when starting to think in sets: order is not important.  We have preserved the property (the idea of string order) in the ArrayIndex column.  The result set happened to come out this way because the optimizer found it most efficient to change the order to satisfy the requirements of the ROW_NUMBER operation, but it could come out in any order for the purposes of our solution.  This order *is* convenient, however, for visualizing the next step.  Notice that the ArrayIndex and the PartitionedNumber columns track together when there is a sequence of characters.  They can be completely different numbers, but when a sequence starts happening, they go up together.  When the same character exists but is out of sequence, the pattern is broken (as happens with the ‘B’).  This is a technique I adapted from Itzik Ben-Gan’s solution for finding islands (unbroken ranges of values in a set).  If you get a chance to see his “tips and tricks” demos (or any of his presentations, for that matter), there is much gold there.  Taking the difference between the two columns not only illustrates this better, but gives us something we can use:

    DECLARE @String varchar(100)
    SET @String = 'AFBB04BFFF5'

    SELECT  Number AS ArrayIndex,
            SUBSTRING(@String,Number,1) AS [Character],
            ROW_NUMBER() OVER (
                    PARTITION BY SUBSTRING(@String,Number,1)
                    ORDER BY Number) AS PartitionedNumber,
            number - ROW_NUMBER() OVER (
                    PARTITION BY SUBSTRING(@String,Number,1)
                    ORDER BY Number) AS RNDiff
    FROM master..spt_values
    WHERE Number <= LEN(@String)
        AND number > 0
        AND [type] = 'P'

    ArrayIndex  Character PartitionedNumber    RNDiff
    ----------- --------- -------------------- --------------------
    5           0         1                    4
    6           4         1                    5
    11          5         1                    10
    1           A         1                    0
    3           B         1                    2
    4           B         2                    2
    7           B         3                    4
    2           F         1                    1
    8           F         2                    6
    9           F         3                    6
    10          F         4                    6

    (11 row(s) affected)

We have two sequences in our string – two B’s and three F’s – and they are highlighted by identical amounts in the new RNDiff column.  In fact, within any given character, if the RNDiff amounts are the same, the rows represent a sequence, so we can apply a GROUP BY.  We will make our existing query into a derived table and feed it into a GROUP BY query:

    DECLARE @String varchar(100)
    SET @String = 'AFBB04BFFF5'

    SELECT  [Character],
            MIN(ArrayIndex) AS Pos,
            COUNT(*) AS [Len]
    FROM (  SELECT  Number AS ArrayIndex,
                    SUBSTRING(@String,Number,1) AS [Character],
                    ROW_NUMBER() OVER (
                        PARTITION BY SUBSTRING(@String,Number,1)
                        ORDER BY Number) AS PartitionedNumber,
                    number - ROW_NUMBER() OVER (
                        PARTITION BY SUBSTRING(@String,Number,1)
                        ORDER BY Number) AS RNDiff
            FROM master..spt_values
            WHERE Number <= LEN(@String)
                AND number > 0
                AND [type] = 'P'    ) a
    GROUP BY [Character], RNDiff

    Character Pos         Len
    --------- ----------- -----------
    A         1           1
    F         2           1
    B         3           2
    0         5           1
    B         7           1
    4         6           1
    F         8           3
    5         11          1

    (8 row(s) affected)

What this is telling us is that we have 8 sequences of characters – most of which have a length of only one.  We’re only interested in sequences greater than one, so a HAVING clause will take care of that:

    DECLARE @String varchar(100)
    SET @String = 'AFBB04BFFF5'

    SELECT  [Character],
            MIN(ArrayIndex) AS Pos,
            COUNT(*) AS [Len]
    FROM (  SELECT  Number AS ArrayIndex,
                    SUBSTRING(@String,Number,1) AS [Character],
                    ROW_NUMBER() OVER (
                        PARTITION BY SUBSTRING(@String,Number,1)
                        ORDER BY Number) AS PartitionedNumber,
                    number - ROW_NUMBER() OVER (
                        PARTITION BY SUBSTRING(@String,Number,1)
                        ORDER BY Number) AS RNDiff
            FROM master..spt_values
            WHERE Number <= LEN(@String)
                AND number > 0
                AND [type] = 'P'    ) a
    GROUP BY [Character], RNDiff
    HAVING COUNT(*) > 1

    Character Pos         Len
    --------- ----------- -----------
    B         3           2
    F         8           3

    (2 row(s) affected)

Now this is starting to look suspiciously like the answer to our question – note that the minimum of the ArrayIndex column is the position of the start of the sequence and the count is the length of the sequence.  We’re ready to apply this technique to the data in the challenge.  Funny I should use the word “apply,” because the CROSS APPLY operator is what we need here.  CROSS APPLY executes the inner query once for each row of the outer input and expands the result set by the number of rows the inner query returns – we see that this is exactly what is required by the expected output.  Replacing the @String variable with the data from the sample data table, here is the final result:

    DECLARE @t TABLE (Data VARCHAR(40))

    INSERT @t (Data) SELECT '9992EDC6-D117-4DEE-B410-4E5FAE46AE97'
    INSERT @t (Data) SELECT '0BFC936B-BD9A-4C6A-AFB2-CF3F1752F8B1'
    INSERT @t (Data) SELECT '4A73E7EB-7777-4A04-9258-F1E75097977C'
    INSERT @t (Data) SELECT '5AAF477C-274D-400D-9067-035968F33B19'
    INSERT @t (Data) SELECT '725DA718-30D0-44A9-B36A-89F27CDFEEDE'
    INSERT @t (Data) SELECT '8083ED5A-D3B9-4694-BB04-F0B09C588888'

    SELECT  t.Data,
            c.[Char],
            c.[Pos],
            c.[Len]
    FROM @t t
        CROSS APPLY (SELECT [Char],
                            MIN(RowNum) AS Pos,
                            COUNT(*) AS [Len]
                     FROM ( SELECT  Number AS RowNum,
                                    SUBSTRING(t.Data,Number,1) AS [Char],
                                    ROW_NUMBER() OVER (
                                        PARTITION BY SUBSTRING(t.Data,Number,1)
                                        ORDER BY Number) - Number AS RNDiff
                            FROM master..spt_values
                            WHERE Number <= LEN(t.Data)
                                AND number > 0
                                AND [type] = 'P'
                          ) b
                     GROUP BY [Char], RNDiff
                     HAVING COUNT(*) > 1
                    ) c
    ORDER BY MAX(c.[Len]) OVER (PARTITION BY t.Data) DESC, Data, Pos

    Data                                     Char Pos         Len
    ---------------------------------------- ---- ----------- -----------
    8083ED5A-D3B9-4694-BB04-F0B09C588888     B    20          2
    8083ED5A-D3B9-4694-BB04-F0B09C588888     8    32          5
    4A73E7EB-7777-4A04-9258-F1E75097977C     7    10          4
    4A73E7EB-7777-4A04-9258-F1E75097977C     7    34          2
    9992EDC6-D117-4DEE-B410-4E5FAE46AE97     9    1           3
    9992EDC6-D117-4DEE-B410-4E5FAE46AE97     1    11          2
    9992EDC6-D117-4DEE-B410-4E5FAE46AE97     E    17          2
    5AAF477C-274D-400D-9067-035968F33B19     A    2           2
    5AAF477C-274D-400D-9067-035968F33B19     7    6           2
    5AAF477C-274D-400D-9067-035968F33B19     0    16          2
    5AAF477C-274D-400D-9067-035968F33B19     3    32          2
    725DA718-30D0-44A9-B36A-89F27CDFEEDE     4    15          2
    725DA718-30D0-44A9-B36A-89F27CDFEEDE     E    33          2

    (13 row(s) affected)

Note that the final piece was to get the sorting right.  The challenge required that the string with the longest sequence be on top, but then it should be sorted by the order in which the sequences appear in the string.  This is accomplished by partitioning a maximum of the length of the sequences by Data.  The important thing to note here is that the OVER clause can be applied to any aggregate and does not require a group by since the value is simply repeated for each member of the group.
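That last point deserves a tiny standalone illustration (the table and values here are made up for the example): an aggregate with an OVER clause repeats the group result on every row instead of collapsing the rows the way GROUP BY would.

    DECLARE @x TABLE (Grp char(1), Val int)
    INSERT @x SELECT 'A', 1
    INSERT @x SELECT 'A', 3
    INSERT @x SELECT 'B', 2

    SELECT Grp, Val,
           MAX(Val) OVER (PARTITION BY Grp) AS GrpMax  -- 3, 3, 2: no GROUP BY needed
    FROM @x

Because every row still exists after the windowed aggregate, the expression can be used directly in an ORDER BY, which is exactly how the final sort above works.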

This completes the solution.  This was an excellent challenge because it required an interesting set-based solution to what is intuitively an iterative problem and made good use of the newer language elements (CROSS APPLY and the OVER clause applied to both an aggregate and ROW_NUMBER).

Enjoy and be sure to try the next TSQL challenge.

 

Winnipeg Code Camp 2010 Scripts

This post is for those who attended my session on query tuning at the Winnipeg Code Camp on Saturday, February 27.

I hope you enjoyed the session.  Attached is the demo script I was using along with a setup script that will install the database I was using.  Note that the setup script does some randomization, so the results will be similar, but not identical to what we saw in the session.  Don’t hesitate to let me know if you have any questions.

Intro Query Tuning Scripts

Announcing March 2010 SQL Users Group Event

This month we will be hearing from Microsoft as part of their SQL Server 2008 R2 Community tour.

When: March 9th, 5:30 PM (The registration page says 6:00, but we’ll have doors open and food for 5:30)

Where: 17th Floor Conference Room, Richardson Building, One Lombard Place

 

SQL Server 2008 R2 Community Tour

SQL Server 2008 R2 is coming soon featuring several breakthrough capabilities that will enable your organization to scale database operations with confidence, help improve IT and developer efficiency, and enable highly scalable and well managed Business Intelligence on a self-service basis for your users.  Come and find out how these changes will impact your role as database administrator and see first-hand some of the new capabilities demonstrated in this session being delivered by Damir Bersinic of Microsoft Canada.

 

You can register for this event here:

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032443664&Culture=en-CA

Announcing Winnipeg Code Camp

On Saturday, February 27th there will be a code camp in Winnipeg at the Red River College Princess St. campus.  Details can be found at the code camp website.  I will be giving a talk on query tuning:

Introduction to Query Tuning

Mike will go over the basics of query plans, query IO statistics and profiler output as tools for tuning a query. We will look at what considerations to make when deciding whether to create an index and what it means to INCLUDE a column. This is more of an introductory talk, but there will definitely be something there for anyone who doesn’t tune queries every day.

Be sure to register and come out – it will be a great day!

Why Trigger Happy?

For my first blog post, I thought I would explain the name of my blog.  I have always enjoyed working with database triggers – they represent a fairly unique and challenging way of programming – but it really goes back to when, and why, I started working with SQL Server.

Back in the day, I was working with MS Access (I started with 1.0, but I believe I was already on 2.0 at the time I discovered SQL Server).  I did not come to SQL Server because of scalability – my application was pretty small, and believe it or not, Access was actually handling it fairly well.  Somehow I caught wind of the concept of a database trigger, though, and it intrigued me.  I read up on it and discovered that with a “real” DBMS, you could make the tables themselves run programs when you did data modifications on them – and these programs could go off and do anything you wanted them to.  This was such a cool and novel idea that I upgraded my application to SQL Server straightaway and became very “trigger happy” in a bad way.  I used triggers for everything and went too far with them, but this is what tends to happen when you’re self-teaching a technology.

Since then I have, of course, discovered many other advantages of SQL Server over Access, and I have never looked back.  I spent years writing applications front-ended with Access, but I haven’t used Access’s JET database engine for probably about 10 years now.  Toward the end of my Access career, I was working primarily with Access ADPs.  Today I’m almost exclusively a database developer and administrator and pretty much never touch front ends at all.  The title of my blog reminds me of how I started on the path to where I am.

The title is also a stark reminder to be careful about jumping on the bandwagon of a new technology.  I still have an affinity for triggers and jump at the chance to write one when the situation warrants, but as we all should know, a trigger runs as part of the transaction that fired it, and (typically) after the data modification it is responding to, so triggers are not the best place for things like validation logic.  They can easily be misused and cause concurrency issues.  From this standpoint, it is important that I am not too “trigger happy”.
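To make that concrete, here is the classic shape of an AFTER trigger doing validation (the Orders table and the quantity rule are invented for the example).  By the time the trigger body runs, the modification has already happened inside the transaction, and the ROLLBACK undoes the whole transaction – which is exactly why a CHECK constraint is usually the better tool for a rule this simple:

    CREATE TRIGGER trg_Orders_Validate ON Orders
    AFTER INSERT, UPDATE
    AS
    BEGIN
        -- the new rows are already in the table at this point
        IF EXISTS (SELECT 1 FROM inserted WHERE Quantity <= 0)
        BEGIN
            RAISERROR('Quantity must be positive', 16, 1)
            ROLLBACK TRANSACTION  -- undoes the entire surrounding transaction
        END
    END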

Finally, I am into hunting, so there is even the whole double-meaning thing going on.  I hope you are picking up on the sense of irony here as opposed to becoming more and more alarmed:  regardless of how one can be “trigger happy”, it is rarely a good thing.  The hope is that, in contrast to common implications of its title, this blog will definitely be a good thing.

Hello world!

OK – now that WordPress and hosting is set up, all I need to do is write a real blog post.  Stay tuned!