The Potential of LINQ. Why I Really Wish I Could Love LINQ to Entities.

Executive Summary

Generalized search (or catch-all, or Swiss army knife) stored procedures are one of the cases where I personally feel that T-SQL does not offer a fantastic solution – we usually need to choose between maintainability (LINQ is likely to seem more maintainable than the “sea of red” that the stored procedure approach becomes) and performance (when parameter sniffing becomes an issue, LINQ will suffer). Careful use of LINQ to Entities could help us if it were not for the fact that there is no way to supply a query hint for those cases where parameter sniffing becomes an issue. The only workaround I have discovered for this shortcoming is plan guides, which are not very maintainable IMO. That’s a long-winded way of saying that dynamic SQL or dynamic string execution in T-SQL may currently be the least bad solution.
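For reference, the plan guide workaround looks roughly like the sketch below. The statement text and parameter list have to match what LINQ to Entities actually sends character for character, which is exactly what makes the approach so brittle; the guide name, statement, and parameter names here are purely illustrative placeholders.

-- Hedged sketch only: @stmt and @params must exactly match the EF-generated text,
-- which has to be captured from the plan cache or a trace first.
EXEC sp_create_plan_guide
	@name   = N'PG_TicketSearch_Recompile',
	@stmt   = N'SELECT ... FROM [dbo].[Ticket] ...',   -- paste the exact generated statement here
	@type   = N'SQL',
	@module_or_batch = NULL,
	@params = N'@p__linq__0 datetime2(7), @p__linq__1 datetime2(7)',
	@hints  = N'OPTION (RECOMPILE)';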

The Problem

Whether a database stores purchase order information, technical support tickets, employee data, descriptions of Doctor Who episodes, or something else entirely, these days it is likely that a software frontend will eventually sit between the database and the data consumer. Frequently the software will include some kind of search screen with about a gazillion optional fields where the user can potentially enter data. For example, if we are working on a system for creating and tracking tech support tickets, the user might expect to see a screen someplace in our product that lets them filter by things like creation date, assigned technician, customer, system location, completion date, etc.

Typically the expectation is that this search functionality will be implemented by calling a single stored procedure regardless of what combination of parameters is supplied. If the development team has not been down this road before or been warned about the pitfalls, the optional parameters will typically be handled using the “@parm IS NULL OR @parm=val” approach. For example, a greatly simplified procedure for our ticketing system might look like this

CREATE PROCEDURE dbo.SearchAllTheThings
  @StartDateCreation DATE = NULL,
  @EndDateCreation DATE = NULL,
  @TechnicianID TINYINT = NULL
AS
	SELECT TicketID, CustomerID, CompletionDate, TicketText
	FROM dbo.Ticket
	WHERE (@StartDateCreation IS NULL OR Ticket.CreationDate >= @StartDateCreation)
	AND (@EndDateCreation IS NULL OR Ticket.CreationDate <= @EndDateCreation)
	AND (@TechnicianID IS NULL OR EXISTS(SELECT * FROM dbo.TicketTechnician
		WHERE TicketTechnician.TicketID = Ticket.TicketID AND TicketTechnician.TechID = @TechnicianID))
GO

and if we ask for the actual plan when calling it like this (requesting all tickets created on October 1, i.e. all tickets created between October 1 and October 1)

exec dbo.SearchAllTheThings '20131001', '20131001'

we may see a plan like this

What’s up with that nested loop and the concatenation operator? Since we did not specify a parameter for the technician there is no reason for us to look at anything but the ticket table. The clustered index scan should have us covered – well, in the absence of a better index.

The reason, of course, is that SQL Server will reuse the plan for future calls. Even though the plan is optimized for the current set of parameters, it needs to produce correct results for all possible combinations of parameters. The only way this is possible is for the plan to include these extra operators.
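If you want to watch that reuse happening, one quick (if blunt) way is to peek at the plan cache and watch usecounts climb as the procedure is called with different parameter combinations. This assumes VIEW SERVER STATE permission:

-- Confirm that the same cached plan is being reused across calls
SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%SearchAllTheThings%'
  AND st.text NOT LIKE '%dm_exec_cached_plans%';   -- filter out this query itself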

Fixing With Dynamic SQL

Typically a good solution for this issue is to rewrite the stored procedure so that it builds an appropriate query on the fly, depending on which parameters are actually used, and then to execute using either exec or sp_executesql depending on whether compilation overhead or parameter sniffing are of greater concern. So in this case we could rewrite the stored procedure as

CREATE PROCEDURE dbo.ImprovedSearchAllTheThings
  @StartDateCreation DATE = NULL,
  @EndDateCreation DATE = NULL,
  @TechnicianID TINYINT = NULL
AS
	DECLARE @query NVARCHAR(MAX);
	DECLARE @parms NVARCHAR(300);

	SET @parms = N'@StartDateCreation DATE, @EndDateCreation DATE, @TechnicianID TINYINT';

	SET @query = N'SELECT TicketID, CustomerID, CompletionDate, TicketText FROM dbo.Ticket WHERE 1=1';
	IF @StartDateCreation IS NOT NULL
	BEGIN
		SET @query = @query + N' AND Ticket.CreationDate >= @StartDateCreation';
	END
	IF @EndDateCreation IS NOT NULL
	BEGIN
		SET @query = @query + N' AND Ticket.CreationDate <= @EndDateCreation';
	END
	IF @TechnicianID IS NOT NULL
	BEGIN
		SET @query = @query + N' AND EXISTS(SELECT * FROM dbo.TicketTechnician WHERE TicketTechnician.TicketID = Ticket.TicketID AND TicketTechnician.TechID = @TechnicianID)';
	END

	EXEC dbo.sp_executesql @query, @parms, @StartDateCreation, @EndDateCreation, @TechnicianID
GO

When we request the actual execution plan while calling

exec dbo.ImprovedSearchAllTheThings '20131001', '20131001'

the plan looks a little cleaner

2013_12_isnull_improved_plan

A Face Only a DBA Could Love

As a rule, my experience has been that developers hate any form of dynamic SQL. At a visceral level, I believe it feels as if dynamic SQL flies in the face of at least 15 years of database programming best-practice thinking (to be clear, it does not, at least IMO). There was a time when it was common to slam tons of strings, some of which may have come from an end user, together in code to assemble a query. This practice is what made SQL injection attacks so effective at the time. Far and away the most popular mitigation against SQL injection has been parameterized SQL in its various forms, such as the sp_executesql procedure used above or parameters on the humble SqlCommand object. Over time, as the community continued to hammer into developers’ heads that parameterized SQL is a very, very good thing, many lost sight of the fact that the real issue is including user-supplied input in dynamically assembled SQL … many today are of the opinion that ALL dynamic SQL is dangerous (or at least icky).

And I actually agree with them: dynamic SQL is icky. Don’t get me wrong, I’m not saying that dynamic SQL should not be used. I actually think it is often demonstrably the best approach in cases like the above. What I am saying is that it is the best of a handful of bad options. We keep hoping for a good one, and that’s one of the reasons LINQ has become so popular over the objections of many, many DBAs.

I’m sure different folks have different reasons for disliking dynamic SQL. My issues with it are testing and maintainability. Testing, because when there are a large number of parameters and complicated logic, the number of query variations that can be produced grows exponentially, which makes it impossible to be absolutely sure there is no combination that is missing a space somewhere important. Maintainability, because there is no syntax help from the IDE when working inside text strings – everything is in a “sea of red” – which means anyone who isn’t familiar with how the procedure is put together will need to spend some time orienting themselves before they touch anything.

Not such big problems if you hire some fancy-pants consultant with a blog and never have to modify the code after they leave, but these are problems for mere mortals.

So How Was LINQ Supposed to Help?

I don’t want to get too bogged down in discussing why LINQ is so exciting from a development perspective, but briefly it is an acronym for “Language-Integrated Query” and is pretty awesome because that’s exactly what it does. It allows a developer to drop something that looks an awful lot like a query into the middle of their code and work with the results of the query as objects instead of flat data. All this without tens or hundreds of lines mapping parameters, looping through readers, and stuffing data into objects by hand. As somebody who uses LINQ to Entities in secret, late at night, when nobody else is looking and I don’t care about parameter sniffing issues, I can say it really is pretty awesome.

The reason I wish I could love LINQ, as a DBA, is that all of this happens through SQL rather than through some proprietary library. In other words, LINQ is a SQL generator that is fully supported by IntelliSense and syntax checkers in Visual Studio. It does not generate concise SQL. It does not always generate comprehensible SQL. But it does generate correct SQL without leaving the developer an opportunity to forget an important space or misspell “SELECT”.

Here is a sample of what some LINQ code might look like for this problem. I kept things kind of simple for this example; this isn’t necessarily code that I would put into production. Notice how, even though the syntax and order of clauses is a bit different, the database access code almost looks a little bit like an inline SQL query.

		Ticket[] LongVersion(int? TechnicianID, DateTime? StartDate, DateTime? EndDate)
		{
			ThrowAway.Blog_2013_11_LameTicketDBEntities ctx = new ThrowAway.Blog_2013_11_LameTicketDBEntities();

			IEnumerable<Ticket> pancakes = null;   // "var" is for squids.

			if (TechnicianID == null)
			{
				pancakes = from tkt in ctx.Tickets
						   orderby tkt.TicketID
						   select tkt;
			}
			else
			{
				pancakes = (from tech in ctx.Technicians
							where tech.TechID == (int)TechnicianID
							select tech.Tickets).SingleOrDefault();
			}

			//Narrow down by StartDate
			if (StartDate != null)
			{
				pancakes = from tkt in pancakes
						   where tkt.CreationDate >= StartDate.Value
						   select tkt;
			}

			//Narrow down by EndDate
			if (EndDate != null){
				pancakes = from tkt in pancakes
						   where tkt.CreationDate <= EndDate.Value
						   select tkt;
			}

			return pancakes.ToArray();
		}

Or, more succinctly, if lambda expressions don’t make your head explode and if you aren’t that excited about the SQL-ish syntax

		Ticket[] CompactVersion(int? TechnicianID, DateTime? StartDate, DateTime? EndDate)
		{
			ThrowAway.Blog_2013_11_LameTicketDBEntities ctx = new ThrowAway.Blog_2013_11_LameTicketDBEntities();

			IEnumerable<Ticket> pancakes = null;   // "var" is for squids.

			if (TechnicianID == null)
			{
				pancakes = ctx.Tickets;
			}
			else
			{
				pancakes = ctx.Technicians.Where( x => x.TechID == (int)TechnicianID ).SingleOrDefault().Tickets;
			}

			//Narrow down by StartDate
			if (StartDate != null)
			{
				pancakes = pancakes.Where( x => x.CreationDate >= StartDate.Value );
			}

			//Narrow down by EndDate
			if (EndDate != null){
				pancakes = pancakes.Where(x => x.CreationDate <= EndDate.Value);
			}

			return pancakes.OrderBy(x => x.TicketID).ToArray();
		}

One key thing to understand about LINQ to Entities is that the query is not executed against the database until the results are accessed. So in the above examples, the only thing that happens between the start of the function and the return statement is that the text of a SQL query is assembled. Since the optional parts of the query are enclosed in if statements, the query will not contain any reference to conditions which do not apply, and since the query is not sent to the server until the results are actually needed, no time is wasted generating results that may never be looked at. In both versions of the function, the query is actually executed when the ToArray() method is called.
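As a rough illustration only – this is not the literal output of any particular Entity Framework version, which tends to be wordier and to use generated aliases and parameter names – the batch that reaches the server when only a start date is supplied is conceptually a parameterized call along these lines:

-- Conceptual illustration; actual EF-generated SQL differs in aliasing and verbosity
exec sp_executesql
	N'SELECT [Extent1].[TicketID], [Extent1].[CustomerID], [Extent1].[CompletionDate], [Extent1].[TicketText]
	  FROM [dbo].[Ticket] AS [Extent1]
	  WHERE [Extent1].[CreationDate] >= @p__linq__0
	  ORDER BY [Extent1].[TicketID]',
	N'@p__linq__0 datetime2(7)',
	@p__linq__0 = '2013-10-01';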

But The Reader Gets the Impression Rick Dislikes LINQ
That would be correct; I do have issues with LINQ, at least as it exists today. But this post is probably already more than long enough … stay tuned for my next post on why I feel LINQ fails to live up to its potential.

 

Why did execution times get unpredictable AFTER going into production? Could be the ascending key.

Executive Summary

It is common for development to be performed against representative data which is much smaller than what may be expected in real life and, more significantly for this discussion, which does not change during the development process. If this data contains an ascending key (or a non-key date column that is frequently used to filter data) and statistics are not managed, there is a good chance that the statistics will auto-update frequently and predictably immediately after deployment, but infrequently and unpredictably later on as the data set grows. This can lead to drastically different execution plans and wild swings in execution time for some operations from day to day. The most direct way to fix this issue is to manage the relevant statistics more closely. For example, if a large ETL operation is involved, then updating statistics immediately after the data is loaded can work wonders for consistent performance. For more information on this issue beyond what is covered below see, for example, sqlinthewild.

The Meat

I can’t quite recall why exactly I thought I wouldn’t be very busy this fall, but I do need to be better about making time to blog. As will probably be usual for me, the issue discussed in this post is well understood by the community but not by many of the ordinary database developers I encounter in my work. In other words, I’m not covering any new ground here, but the knowledge does not seem to have been adequately transferred from the SQL gurus to the rank-and-file professionals (especially those in smaller shops), so another write-up certainly can’t hurt. And of course there are many, many things that can cause performance to go wonky after going into production; this is only one of them.

This is a toy example loosely based on a recent case. By the time I was called in to help, a group of developers which did not include a dedicated DBA had been working on the project for a few years. The biggest concern the development team had at that point was unpredictable performance in one of their ETL jobs. Each morning data was aggregated from a handful of different database servers and then some computation was done on the aggregated values. On some days the computation portion of this job was blazing fast, and on other days it was extremely slow. This behavior did not manifest itself during their testing. So let’s say that this is what we see when we graph out the computation time.

Chart of crazy runtimes

Except obviously in the real world the times would probably be measured in something other than milliseconds.

Without knowing more about the problem, this kind of extreme difference in execution times screams that an inappropriate execution plan may be in play due to a statistics issue. As a starting point, let’s take a gander at the plan for the most recent day executed.

EXEC dbo.SimulateCalculation '20130930'

Wrong plan for this data

Sure enough, I see plenty to hate here just picking on the first statement in the plan. Without digging into any of the numbers, two things jump out immediately. First, we are using nested loops, which I would normally expect to see with smaller data sets. Second, all of the operations are seeks, when I would have expected to see scans on two of the tables since thousands of records are coming back from each. Looking at the properties for the seek on LaborDetail would seem to clinch it: the plan expected a single row to come back when there were actually almost 9000 rows.
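For anyone who would rather not hunt through the graphical plan, a quick and dirty way to see the same estimated-versus-actual mismatch in text form is something like the following; the Rows and EstimateRows columns are the interesting ones.

-- Text-mode alternative to the graphical actual plan
SET STATISTICS PROFILE ON;
EXEC dbo.SimulateCalculation '20130930';
SET STATISTICS PROFILE OFF;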

At this point, it is tempting to say “I know what this is. This is a parameter sniffing issue! Let’s throw a recompile or “optimize for” hint on this puppy and then get on with our day.” Unfortunately, it is not that simple. The compiled and runtime parameter values are identical (and if we wanted to cheat, we could look at the stored procedure definition and see that it actually has “WITH RECOMPILE” to simplify my problem recreation. Kids, don’t ever use “WITH RECOMPILE” in real life).

Properties window showing matching values

So where do we go from here? When it seems that the optimizer is making very poor decisions that does usually indicate some kind of statistics issue so this would probably be a good time to look directly at the relevant statistics.
Histogram showing no recent dates

Note that the window is actually scrolled all the way to the end. The last step in the histogram really is for September 12.
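That screenshot comes from the statistics properties dialog in SSMS, but roughly the same information is available in T-SQL. The sketch below assumes the statistic of interest is the one behind the clustered primary key, PK_LaborDetail.

-- When was each statistic on LaborDetail last updated?
SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.LaborDetail');

-- And the histogram itself
DBCC SHOW_STATISTICS ('dbo.LaborDetail', 'PK_LaborDetail') WITH HISTOGRAM;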

What’s going on here? The immediate problem is that the statistics are out of date. The last step in the histogram is for September 12, which means that for any date after September 12 the statistics indicate there “should” be no rows. Put simply, SQL Server “thinks” that it “knows” for certain that there is no data in the table more recent than September 12. SQL Server generally will not (never will?) actually use zero for an estimated cost or number of rows; it uses 1 instead. I assume the reason for this is that estimating zero rows would make it impossible for the optimizer to distinguish between good and bad plans on the affected branch of the query plan (I do need to remember to get verification on that someday). Anyway, when the stored procedure is compiled the optimizer operates on the assumption that only 1 row will come back from each of the three tables, which is the reason seeks appear instead of scans, and also the reason we see a nested loop join.

The reason the statistics are out of date is that this example relies on auto updating of statistics. Recall that for larger tables, statistics are not invalidated until the table has accumulated at least (500 + 20% of the table size) changes, and statistics are not updated until the first use after invalidation. This means that if we load the same amount of data into the table each morning, eventually we will reach a point where statistics are not updated each morning, or even every week, because each day’s load is a progressively smaller portion of the total data (a quick way to check how close a statistic is to that threshold is sketched just after the list below). There are at least a couple of reasons this is an issue for projects like this:

  • Typically the unit tests that exercise the relevant code work against a static data set (because the load functionality is exercised in a separate set of unit tests – that is the nature of unit testing). No matter how robust and interesting this data set is, the fact that it doesn’t change means we have no chance of detecting this behavior in unit testing. Thorough integration testing could detect this but most smaller development shops that I’ve encountered are much stronger on unit than integration testing (if they are strong on testing at all).
  • When the system first goes into production, the tables are probably small at first so this issue will not appear for weeks or even months. It is entirely possible that by the time this happens the development team will have started to work on another project which means it may take them a while to come back up to speed on the code if their assistance is needed.
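As promised above, here is a rough way to check how close a statistic is to that (500 + 20%) invalidation threshold. Note that sys.dm_db_stats_properties requires a reasonably recent build (2008 R2 SP2 or 2012 SP1 and later); on older builds the rowmodctr column in sys.sysindexes is the less trustworthy fallback.

-- How close is each statistic on LaborDetail to its auto-update threshold?
SELECT	s.name,
	sp.rows,
	sp.modification_counter,
	500 + 0.20 * sp.rows AS approx_invalidation_threshold,
	sp.last_updated
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.LaborDetail');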

Since this post has been focused on large daily data loads, my preferred fix is to simply update the statistics at the end of each day’s data load using the UPDATE STATISTICS statement. Since each day extends the range of the histogram, each day’s load makes a statistically significant change in a very focused period, which should make a statistics update a no-brainer. That is not to say AUTO_UPDATE_STATISTICS should be turned off – it should definitely be left on. We just should not be depending on it for this particular index.
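Concretely, something along these lines tacked onto the end of the daily load (the end of SimulateDailyLoad in the toy example) is usually all it takes. FULLSCAN is shown because the example table is small; a sampled update may be perfectly adequate on a larger table.

-- Refresh just the statistic that the ascending WorkDay column invalidates every day
UPDATE STATISTICS dbo.LaborDetail PK_LaborDetail WITH FULLSCAN;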

I would not use trace flag 2389 in this case because the entire data load happens at once, so there is an obvious point in time at which statistics can be manually updated. There is no reason for us to consider mucking with the way statistics updates work based on this information alone. Along the same lines, I also would not personally use trace flag 2371, for the same reason – increasing the frequency at which automatic updates happen does not change the fact that we ought to be managing this particular statistic. Further, flag 2371 doesn’t really kick in until the table is starting to get pretty large. Not only would the issue have appeared long before 2371 starts to operate, but by the time 2371 starts to make a difference we probably would have already started breaking the table into smaller pieces for performance reasons (see for example Kimberly Tripp).

The Gristle

The observant reader will notice an issue with my chart. My example of September 30, one of the dates which had the statistics issue, actually corresponds to a fast runtime. In fact, towards the end of the chart where several days pass between updates, there are only a handful of spikes at a time when almost all of the days should have inappropriate execution plans. The really observant reader will notice that the last spike in the chart actually corresponds to September 12, which was the last time the statistics auto updated. So this is a day that should have had a great execution plan.

What???!!!??? What is happening in this example is that, on the days where the statistics are auto updated, the statistics are invalidated when the data is loaded but are not actually recomputed until the tables are used by the daily computation. Since the example marks time in milliseconds, the statistics update actually dominates the runtime on the days where it occurs – that will not be the case in the real world because the computations will be a lot more expensive. The last time I encountered a real-world example of this issue, the job would run in tens of minutes on days where the statistics were current but would take several hours on days where they were not.

More to the point, even in the extremely unlikely event we saw this exact pattern of the statistics update being more expensive than the computation that triggered it, in the real world it probably would not be a good idea to avoid updating statistics for the sake of speeding up the nightly jobs. The entire point of doing this kind of work overnight is to make the daily workload run as quickly as possible, and the daily workload will run better if stats are current. If the stats need to be updated anyway, they may as well be updated before any computation or aggregation is done on the data. I wouldn’t even really advise using AUTO_UPDATE_STATISTICS_ASYNC in this case. I could see an argument for asynchronous update when data is coming in gradually, but when data changes suddenly I really do think it’s best to wait for the statistics before proceeding.
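For reference only, since I am explicitly not recommending it for this workload, asynchronous statistics update is a database-level setting and the current values are easy to check:

-- Database-level switch (again: not my recommendation for this particular pattern)
ALTER DATABASE AscendingDate SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- Check current settings
SELECT name, is_auto_update_stats_on, is_auto_update_stats_async_on
FROM sys.databases
WHERE name = 'AscendingDate';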

I thought about leaving the chart out entirely, but thought that it still had some value in giving the reader the flavor of the swings in execution time.

The Guts

The tables used for this example are defined below. I decided to use a time and effort system for this example. The employee table is populated by copying approximately 10,000 records from AdventureWorks2012 (the Person.Person table) and then deleting those rows that have duplicated employee names. There are 8 Project_Hours tables. Imagine that the data from these 8 tables each came from a separate data source, so computations could not be run across projects until all 8 tables were gathered onto the same server. No, I have never seen an actual time and effort system distributed in this way, but my real-world model for this was not a T&E system. Also note that the join between employee and the project tables is done by name instead of employee ID. Again, the real-world model for this example was not a T&E system and the real-world equivalent of this join was not as bizarre – but I needed to use something besides employee ID to ensure that table scans, as opposed to seeks, would be optimal for large amounts of data. Pretend each project manager manually tracked time themselves and did not know the IDs of the employees on the project. Data from all 8 project tables is accumulated into a labor detail table (by date and employee ID). The TimeHistogram table keeps track of execution times for reporting purposes.

USE AscendingDate

CREATE TABLE dbo.Employee(
	EmployeeID  INT NOT NULL IDENTITY,
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	DeptID      INT NOT NULL CONSTRAINT DF_DeptID DEFAULT 0,
	Wage        NUMERIC(4,2) NOT NULL,
	AGazillionMoreFields NCHAR(2000) NOT NULL CONSTRAINT DF_AGazillionMoreFieds DEFAULT N'',
	CONSTRAINT PK_Employee PRIMARY KEY CLUSTERED(EmployeeID)
);

INSERT INTO	dbo.Employee(FirstName, MiddleName, LastName, Wage)
	SELECT FirstName, MiddleName, LastName, CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 50 + 10
	FROM Adventureworks2012.Person.Person WHERE BusinessEntityID <= 10000;

DELETE e1
FROM dbo.Employee e1
INNER JOIN(
	SELECT FirstName, MiddleName, LastName
	FROM dbo.Employee
	GROUP BY FirstName, MiddleName, LastName
	HAVING COUNT(*) > 1
) e2 ON e1.FirstName = e2.FirstName AND ISNULL(e1.MiddleName, '') = ISNULL(e2.MiddleName, '') AND e1.LastName = e2.LastName;

CREATE TABLE dbo.ProjectHours_A(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_ProjectHoursA_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursA_LastName_FirstName_MiddleName ON dbo.ProjectHours_A(LastName, FirstName, MiddleName);

CREATE TABLE dbo.ProjectHours_B(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_ProjectHoursB_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursB_LastName_FirstName_MiddleName ON dbo.ProjectHours_B(LastName, FirstName, MiddleName);

CREATE TABLE dbo.ProjectHours_C(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_ProjectHoursC_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursC_LastName_FirstName_MiddleName ON dbo.ProjectHours_C(LastName, FirstName, MiddleName);

CREATE TABLE dbo.ProjectHours_D(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_ProjectHoursD_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursD_LastName_FirstName_MiddleName ON dbo.ProjectHours_D(LastName, FirstName, MiddleName);

CREATE TABLE dbo.ProjectHours_E(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_ProjectHoursE_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursE_LastName_FirstName_MiddleName ON dbo.ProjectHours_E(LastName, FirstName, MiddleName);

CREATE TABLE dbo.ProjectHours_F(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_ProjectHoursF_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursF_LastName_FirstName_MiddleName ON dbo.ProjectHours_F(LastName, FirstName, MiddleName);

CREATE TABLE dbo.ProjectHours_G(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_ProjectHoursG_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursG_LastName_FirstName_MiddleName ON dbo.ProjectHours_G(LastName, FirstName, MiddleName);

CREATE TABLE dbo.ProjectHours_H(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursH_LastName_FirstName_MiddleName ON dbo.ProjectHours_H(LastName, FirstName, MiddleName);

CREATE TABLE dbo.LaborDetail(
	EmployeeID  INT  NOT NULL,
	WorkDay     DATE NOT NULL,
	ProjAHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjAHours DEFAULT 0,
	ProjBHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjBHours DEFAULT 0,
	ProjCHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjCHours DEFAULT 0,
	ProjDHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjDHours DEFAULT 0,
	ProjEHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjEHours DEFAULT 0,
	ProjFHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjFHours DEFAULT 0,
	ProjGHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjGHours DEFAULT 0,
	ProjHHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjHHours DEFAULT 0,
	HoursWorked DECIMAL(4,2) NOT NULL CONSTRAINT DF_LaborDetail_LaborCost DEFAULT 0,
	LaborCost   DECIMAL(5,2) NOT NULL CONSTRAINT DF_LaborCost DEFAULT 0,
	RecordBiggerInRealLife NCHAR(2000) NOT NULL CONSTRAINT DF_RecordBiggerInRealLife DEFAULT '',
	CONSTRAINT PK_LaborDetail PRIMARY KEY CLUSTERED(WorkDay, EmployeeID)
);

CREATE TABLE dbo.TimeHistogram(
	WorkDay  DATE NOT NULL,
	LoadTime INT  NOT NULL,
	CalcTime INT  NOT NULL,
	NewStats BIT  NOT NULL,
	TotalTime AS LoadTime + CalcTime,
	CONSTRAINT PK_TimeHistogram PRIMARY KEY CLUSTERED(WorkDay)
);

I used three stored procedures to generate the data for this writeup. SimulateDailyLoad and SimulateCalculation simulate the process of loading and transforming a day’s worth of data. To simplify my life, the third procedure, GenerateHistogram, loops through 3/4 of a year’s worth of calls to these routines and accumulates timing data into the TimeHistogram table.

CREATE PROCEDURE dbo.SimulateDailyLoad
	@LoadDate DATE
AS
BEGIN
	TRUNCATE TABLE dbo.ProjectHours_A;
	TRUNCATE TABLE dbo.ProjectHours_B;
	TRUNCATE TABLE dbo.ProjectHours_C;
	TRUNCATE TABLE dbo.ProjectHours_D;
	TRUNCATE TABLE dbo.ProjectHours_E;
	TRUNCATE TABLE dbo.ProjectHours_F;
	TRUNCATE TABLE dbo.ProjectHours_G;
	TRUNCATE TABLE dbo.ProjectHours_H;
	INSERT INTO dbo.LaborDetail(EmployeeID, WorkDay) SELECT EmployeeID, @LoadDate FROM dbo.Employee;
	INSERT INTO dbo.ProjectHours_A(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
	INSERT INTO dbo.ProjectHours_B(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
	INSERT INTO dbo.ProjectHours_C(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
	INSERT INTO dbo.ProjectHours_D(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
	INSERT INTO dbo.ProjectHours_E(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
	INSERT INTO dbo.ProjectHours_F(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
	INSERT INTO dbo.ProjectHours_G(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
	INSERT INTO dbo.ProjectHours_H(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
END;

GO

CREATE PROCEDURE dbo.SimulateCalculation
	@LoadDate DATE
WITH RECOMPILE AS
BEGIN
	UPDATE d SET d.ProjAHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_A h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE d SET d.ProjBHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_B h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE d SET d.ProjCHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_C h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE d SET d.ProjDHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_D h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE d SET d.ProjEHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_E h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE d SET d.ProjFHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_F h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE d SET d.ProjGHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_G h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE d SET d.ProjHHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_H h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE dbo.LaborDetail SET HoursWorked = ProjAHours + ProjBHours + ProjCHours + ProjDHours + ProjEHours + ProjFHours + ProjGHours + ProjHHours
		WHERE WorkDay = @LoadDate;
	UPDATE d SET LaborCost = d.HoursWorked * e.Wage, RecordBiggerInRealLife = e.AGazillionMoreFields
		FROM dbo.LaborDetail d INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		WHERE d.WorkDay = @LoadDate;
END;

GO

CREATE PROCEDURE dbo.GenerateHistogram AS
BEGIN
	DECLARE @dt DATE;
	DECLARE @t0 DATETIME;
	DECLARE @t1 DATETIME;
	DECLARE @t2 DATETIME;
	DECLARE @t3 DATETIME;
	DECLARE @flag BIT;

	TRUNCATE TABLE dbo.LaborDetail;
	TRUNCATE TABLE dbo.TimeHistogram;
	CHECKPOINT;
	DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
	SET @dt = '20130101';

	WHILE @dt < '20131001'
	BEGIN
		SET @t0 = CURRENT_TIMESTAMP;
		EXEC dbo.SimulateDailyLoad @dt;
		SET @t1 = CURRENT_TIMESTAMP;
		CHECKPOINT;
		DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
		SET @t2 = CURRENT_TIMESTAMP;
		EXEC dbo.SimulateCalculation @dt;
		SET @t3 = CURRENT_TIMESTAMP;
		SET @flag = 0;
		IF STATS_DATE(OBJECT_ID('dbo.LaborDetail'), 1) >= @t0
		BEGIN
			SET @flag = 1;
		END;
		INSERT INTO dbo.TimeHistogram(WorkDay, LoadTime, CalcTime, NewStats)
			VALUES(@dt, DATEDIFF(ms, @t0, @t1), DATEDIFF(ms, @t2, @t3), @flag);
		SET @dt = DATEADD(d, 1, @dt);
	END;
END;

The CHECKPOINT and DROPCLEANBUFFERS in the last stored procedure are probably completely unnecessary. They were originally added when I was debugging an issue and I simply forgot to remove them before the last time I ran this code.

So … It Appears that September Will Be All About the MCM

News about the death of the MCM / MCSM broke right around the time I was starting to write up an interesting case study involving the ascending key problem (for those unfamiliar, see Gail Shaw’s excellent writeup, for example), so that post will probably need to wait for a while. It’s not my intent to rehash the #SQLMCM issue here; if anyone who cares about the MCM (Microsoft Certified Master) program isn’t already up to speed on the basic issues, a good starting point can be found from Jason Brimhall and at the #SQLMCM hash tag on Twitter. Part of the reason that I am not really ranting is that I am actually relatively fortunate. I had been planning to take the lab exam (the final hurdle before becoming an MCM) at the end of September anyway, so the only real impacts on me are:

  • I need to decide whether or not it’s worth following through with the exam when the certification is dying (the answer is probably yes).
  • I am sure I will need to cover this out of my own pocket now. I’ve been fortunate enough to have a part-time W-2 gig (on top of my consulting load) that has actually been quite supportive of my MCM quest up to this point. Now that “the email” has gone out, I am frankly too embarrassed to even ask if they would care to pony up a couple of grand more for the final test when the plug is getting pulled the day after I take the exam (maybe literally). Maybe the only reason to consider doing so is that I actually wrote the MCM attempt into my performance plan for the year. This is actually not as bad as it sounds; I am dual employed, so it really is only fair that Rick-the-consultant pays part of the certification expense, which will benefit me equally in both of my current roles.
  • Some lost prep time. When I said I planned to take the exam at the end of September, that was my optimistic estimate. In the back of my head I had actually started telling myself I might wait until the end of October. That exam is no joke; an extra month of prep time would have been handy.
  • It’s all riding on this attempt. Before the announcement I figured that if I did not pass this attempt I could consider regrouping for another try before the MCM exams were retired in favor of the new MCSM exams.

First : MCM vs MCSM – A Long Tangent

But I’m not interested in dwelling because I really am one of the lucky ones. I’m more interested in offering my perspective as someone who is in the pipeline at this point in history. First and foremost, I was always a lot more excited about the legacy MCM certification than I was about the MCSM. I just felt “Microsoft Certified Master” was an awesome description. It is intuitive. It rolls off the tongue. It is clear. I imagine my future self shaking hands slightly more firmly and standing slightly straighter as “Rick Lowe, Microsoft Certified Master”. It is immediately clear to any person who hears this exactly what I am claiming (that I am awesome), why I feel justified in claiming it (my awesomeness has been certified), and what evidence is available (this Rick guy should be able to cough up a transcript sharing code). This is actually why I am ready to take the exam; if I had not been concerned with becoming an MCM I probably would have been hanging back, updating my credentials from MCITP to MCSE and waiting for the MCSM exams to be published.

On the other hand, when I imagine a possible future self introducing himself as “Rick Lowe, MCSM”, or worse “Rick Lowe, Microsoft Certified Solution Master on the Data Platform”, I am pretty sure I will be slouching as the eyes of whoever I’m speaking to glaze over slightly. “That is not just more alphabet soup”, my future self says, “that is a pinnacle certification. Here, let me draw you a diagram of all the new SQL Server certifications so you can see this blue pyramid thingie. Look, there’s no gray at all in this pyramid, which means this is the good one. You should really be rather impressed with me about now”. I suppose the alphabet soup issue is what really bothered me about the change from MCM to MCSM. Aside from just being a cool title, MCM, both when spoken and written, looks quite different from MCITP, which could be important when speaking with somebody outside of the SQL Server community. Even if they may have seen hundreds of resumes for “paper” MCITPs, it is possible that the fact MCM sounds different may be enough to get their attention. MCSM, on the other hand, visually and verbally kind of blends in with MCSA and MCSE.

If it seems like I’m taking cheap shots at a particular set of visual aids, that is not my intent. I love visual aids and have nothing against blue and gray pyramids. If I haven’t been clear enough, the point I am really trying to make is that the visual aids are not just helpful for understanding the certification roadmap – they may actually be necessary for understanding the current roadmap unless one has a very good head for acronyms. And this does not help when it comes to acceptance of the current generation of credentials by business leaders.

But More Importantly, Can We Chart Our Own Destiny?

Plenty has been written on the tricky revenue problem posed by the MCM exams (rather steep fixed costs would need to be offset by a relative handful of test takers before the test is profitable). One commonly expressed concern is that it may just be too difficult for Microsoft to make a profit from the MCM/MCSM program, and that they may simply never reintroduce the concept of a pinnacle certification. Another is that they may fix the revenue issue by dumbing down the tests enough to increase the percentage of SQL Server MCPs who would be able to pass. More potential conversions would probably mean more test takers, which would definitely mean more revenue. Both of these cases are troublesome for many in the community who value the rigor and the “unfakeability” of the MCM/MCSM exams.

My question at this point is this : if we as a community do not have a lot of faith that Microsoft will bring back the MCM in a satisfactory form, or if we are concerned that we just care more about this particular certification than they do, why are we waiting for them to do so?

It’s probably not realistic for anyone to expect the creation of a “Community Certified Master of SQL Server” program. Don’t get me wrong, it would be fantastic if somebody could come to a conference, sign up for a “test your mettle” hands-on precon or postcon, and potentially walk out as a “CCM”. But getting the community involved in the master testing process does not change the underlying economic issues. Developing the test would take a tremendous amount of effort. Administering the test would be a nightmare. Do we offer the test online? Probably not because it would be too easy for somebody to either cheat or capture the questions to look up later. Establish a dedicated testing center? Probably not realistic for this volume of test takers. Remote proctor? Maybe, but that could be very expensive because it probably requires something close to a 1:1 ratio of proctors to test takers. Co-locate with a conference? Might be the most workable but is it a problem if the exam can only be attempted once or twice a year?

And of course, even if a delivery method is found, this does not change the underlying issues. Developing an MCM-type test would be very expensive (it bears repeating). Convincing industry that it should care about the CCM would be even more difficult than it was to convince them to care about the MCM. I am sure there are many, many more issues.

Unfortunately I do not actually have any productive suggestions here; all I can really do at this point is suggest that the death of the MCM could be an opportunity for us to do something even better. And it may be foolish to assume this has to mean testing – as Brent Ozar points out, there are a lot of cool experiments we could conduct that have nothing to do with a traditional certification program. But if anyone does have an idea I may be interested in pitching in. After October 1, of course. The rest of September is already spoken for.

Coming soon

Hi all,

I have a little extra time on my hands for the next few months, which means this isn’t only the ideal time to start blogging but also that I should be able to post relatively frequently for a while. That said, I’m getting ready to disappear on a rafting trip for a week. If you discover this space while I’m gone and are wondering whether or not it’s worth coming back, here are some topics I’m planning to write about in the next few months.

  • The correct way of getting Oracle Instant Client to work with SSRS / SSIS. Google search may be leading you astray.
  • For the DBAs : The potential of Entity Framework. Why I really wish I could love EF.
  • For the developers : The failure of Entity Framework to live up to its potential. Why EF may be causing your DBA to drink in the morning.
  • The ascending key problem. Why did performance suddenly get inconsistent shortly after we deployed?
  • CRUD squared. When stored procedures go awry (AKA Rick loses some friends part 1).
  • That time I turned on RCSI for the sole purpose of getting the developers to stop using nolock. Wasn’t that awesome? Or was it more of an evil waste of resources? (AKA Rick loses some friends part 2)
  • Social capital at the office. How to get the mean kids to realize how brilliant you are and start listening to you.
  • The limitations of self learning from the internet. Why I frequently pay out of my own pocket to go to conferences.

But more importantly, feel free to contact me to ask questions or even just to suggest that I cover a particular topic. This request may be more relevant in the future because you probably can’t tell from a single blog post how valuable my opinion is, but ultimately I do this because I love talking about SQL Server. If nobody is reading this then I’m just talking to myself, which I have been known to do on occasion, but I would much rather talk to somebody else. The more I know about which issues you would most like my warped perspective on, the more productive that conversation can be.

Error 22050 in SQL Server 2008 Agent Jobs Can Be a Red Herring

Executive Summary

When working with legacy versions of SQL Server, if an agent job fails with error 22050 (“error formatting query…”), it is entirely possible that the issue has nothing whatsoever to do with query formatting or parameterization. A job can fail for other reasons, even if the query is formatted perfectly, and still report this error. The original error message can often be obtained by using Profiler. Thank you to Gianluca Sartori for suggesting that I try Profiler when I was stuck on this issue.

Long Version

The following was inspired by an actual story, but the names of all tables and indexes have been changed to protect the innocent. The specific system that I was working with at the time was running SQL Server 2008 R2 Standard edition on Windows Server 2008 R2. And yes, I know this blog post is already obsolete as I write it because I have not been able to reproduce the issue in SQL Server 2012, but it is still one of the more interesting issues that I’ve looked into lately. Additionally, at least as of the time of this writing, I’ve found that web search results for this particular error are less than helpful; most treatments of this error are rather terse.

Enough with the disclaimers. A few months ago I was trying to get a handle on which tables had the most serious fragmentation issues. Intending to do something quick and dirty, I just threw together a quick SQL Agent job to send me email every morning while I got a gut feel for the database. If the below query doesn’t look familiar to you and you’re curious what I’m up to, you can look, for example, here. The job contained a single step, which was

 

exec msdb.dbo.sp_send_dbmail @recipients='me@wherever.com', @subject='Fragmentation Report',
   @execute_query_database='IBrokeMyDB',
   @query = 'SELECT OBJECT_NAME(a.object_id) AS [Table], b.name, a.avg_fragmentation_in_percent,
      a.avg_page_space_used_in_percent, a.page_count
   FROM sys.dm_db_index_physical_stats(5, NULL, NULL, NULL, ''Sampled'') a
      INNER JOIN sys.indexes b ON a.object_id = b.object_id and a.index_id = b.index_id
   WHERE page_count >= 750
   ORDER BY avg_fragmentation_in_percent DESC'


Or if you prefer a screenshot

shot0

The first few steps in the deployment process went fine. There were no issues setting up Database Mail and all of the test emails came through fine. Also, executing the above call in a query window worked fine and the email arrived as expected. When set up as an agent job, though, there was no joy.

shot1

Hmmmm. So the error suggests there is an issue with parameters, and I do have a quoted string in the parameter, which had to be put in doubled single quotes since the entire query was quoted. The default for the mode parameter is ‘Limited’, which is actually fine with me, so I updated my exec statement to

 

exec msdb.dbo.sp_send_dbmail @recipients='me@wherever.com', @subject='Fragmentation Report',
   @execute_query_database='IBrokeMyDB',
   @query = 'SELECT OBJECT_NAME(a.object_id) AS [Table], b.name, a.avg_fragmentation_in_percent,
      a.avg_page_space_used_in_percent, a.page_count
   FROM sys.dm_db_index_physical_stats(5, NULL, NULL, NULL, NULL) a
      INNER JOIN sys.indexes b ON a.object_id = b.object_id and a.index_id = b.index_id
   WHERE page_count >= 750
   ORDER BY avg_fragmentation_in_percent DESC'


Which, unfortunately, still doesn’t work

shot2

Hmmmmm. I swear I’ve done this before and it’s worked fine. At this point I’m not sure if I’m looking at a problem with dm_db_index_physical_stats or some kind of permissions issue with the “sys” schema. I decide it’s easier to test for the latter and update the job step to

 

exec msdb.dbo.sp_send_dbmail @recipients='me@wherever.com', @subject='Fragmentation Report',
   @execute_query_database='IBrokeMyDB',
   @query = 'SELECT b.name FROM sys.indexes b'


Upon re-running the job, an email message listing all of the indexes created in the database appeared in my mailbox. Hmmmm. So the good news is that I’m not looking at some kind of weird issue with the sys schema, but the bad news is that I am stumped. Since I’ve done exactly this kind of thing many times before, I can’t really convince myself there is an issue with using dm_db_index_physical_stats in this way. But is there any chance I’ve just been lucky so far? I wish I could say I figured the rest of this out on my own, but at this point I hit #sqlhelp on Twitter to ask if there were any known issues like this with dm_db_index_physical_stats, and after a few tweets Gianluca Sartori suggested that I try Profiler.

In my case, the key to getting the real error was to add the “User Error Message” event to the trace. I started with the standard trace template, added this event, started the trace, and ran the job as before. It helps a lot that the error messages appear in red, but after slogging through the trace output I eventually found the error message “The user does not have permission to perform this action.” In other words, the user that SQL Server Agent runs as has permission to access sys.indexes but not sys.dm_db_index_physical_stats.

shot3

This is the nudge that I needed. At this point it finally dawned on me that dm_db_index_physical_stats is actually a function, not a table or view. So one really big difference between it and sys.indexes is that a function requires execute permission. After a little more digging I discovered that

  1. The agent user was not a member of the sysadmins server role. I suspect this was accidental, the developer who handled the initial deployment phases probably did not fully appreciate the importance of using Configuration Manager.
  2. The user had been manually added to the SQLAgentOperatorRole, SQLAgentReaderRole, and SQLAgentUserRole database roles in the msdb database, so the agent itself worked fine as long as appropriate database permissions were granted.
  3. The user had been added to the db_datareader and db_datawriter database roles in the database IBrokeMyDB, which means that all of the agent jobs which only operate on views and tables in this database worked fine.
  4. The user was not granted any other permissions in the database. Among other things, this means that the user could not execute any stored procedures or functions.
  5. Added 2013 Aug 28 Not so much a discovery as a lesson from the “do as I say, not as I did” department. It’s usually best not to run agent jobs as the agent user, which usually has administrative privileges. When possible it really is best to follow the principle of least privilege and run jobs as users with minimal permissions. Doing this will not make a user any more or less likely to encounter this particular issue, but as long as I am dwelling on agent configuration it is important that I try not to get the reader into bad habits. More props to Gianluca for pointing out this omission in the first version of this post.

But that’s not the point of this post. The takeaway is that the information which I really needed to solve the problem did not find its way into the job history but was available when I dug deeper with Profiler. I have not done the legwork to verify this, but I would expect that Extended Events could also be used to expose this information.
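I have not gone back and tested this against the 2008 R2 instance in question, but an Extended Events session along these lines ought to surface the same user error messages that Profiler did; the session name and the severity filter are just my assumptions.

-- Untested sketch: capture error messages raised inside job steps
CREATE EVENT SESSION capture_user_errors ON SERVER
ADD EVENT sqlserver.error_reported(
	ACTION (sqlserver.session_id, sqlserver.sql_text)
	WHERE severity >= 14)
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION capture_user_errors ON SERVER STATE = START;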

So What About SQL Server 2012?

As I mentioned earlier, this seems to be much less of an issue in SQL Server 2012 because, in my experience so far, 2012 appends the actual error that caused the query to fail to the generic 22050 error message. YMMV; I did not spend a tremendous amount of time trying to verify that this is always the case. But here is a screenshot taken from my 2012 instance, which I tried to break in a similar way.

shot_2012