
Why did execution times get unpredictable AFTER going into production? Could be the ascending key.

Executive Summary

It is common for development to be performed against representative data which is much smaller than what may be expected in real life and, more significantly for this discussion, which does not change during the development process. If this data contains an ascending key (or a non-key date column that is frequently used to filter data) and statistics are not managed, there is a good chance that the statistics will auto update frequently and predictably immediately after deployment but infrequently and unpredictably later on as the data set grows. This can lead to drastically different execution plans and wild swings in execution time for some operations from day to day. The most direct way to fix this issue is to manage the relevant statistics more closely. For example, if a large ETL operation is involved, updating statistics immediately after the data is loaded can work wonders for consistent performance. For more information on this issue beyond what is below, see, for example, sqlinthewild.

The Meat

Can’t quite recall why exactly I thought I wouldn’t be very busy this fall, but I do need to be better about making time to blog. As will probably be usual for me, the issue discussed in this post is well understood by the community but not by many of the working database developers that I encounter in my work. In other words, I’m not covering any new ground here, but the knowledge does not seem to have been adequately transferred from the SQL gurus to the rank-and-file professionals (especially those in smaller shops), so another write-up certainly can’t hurt. And of course there are many, many, many things that can cause performance to go wonky after going into production; this is only one of them.

This is a toy example loosely based on a recent case. At the point I was called in to help, a group of developers which did not include a dedicated DBA had been working on the project for a few years. The development team’s biggest concern at that point was unpredictable performance in one of their ETL jobs. Each morning data was aggregated from a handful of different database servers and then some computation was done on the aggregated values. On some days the computation portion of this job was blazing fast; on other days it was extremely slow. This behavior did not manifest itself during their testing. So let’s say that this is what we see when we graph out the computation time.

Chart of crazy runtimes

Except obviously in the real world the times would probably be measured in something other than milliseconds.

Without knowing more about the problem, this kind of extreme difference in execution times screams that an inappropriate execution plan may be in play due to a statistics issue. As a starting point, let’s take a gander at the plan for the most recent day executed.

EXEC dbo.SimulateCalculation '20130930'

Wrong plan for this data

Sure enough, I see plenty to hate here just picking on the first statement in the plan. Without digging into any of the numbers, two things jump out immediately: we are using nested loop joins, which I would normally expect to see with smaller data sets, and all of the operations are seeks when I would have expected scans on two of the tables since thousands of records come back from each. Looking at the properties for the seek on LaborDetail would seem to clinch it: the plan expected a single row to come back when there were actually almost 9000 rows.

At this point, it is tempting to say “I know what this is. This is a parameter sniffing issue! Let’s throw a recompile or “optimize for” hint on this puppy and then get on with our day.” Unfortunately, it is not that simple. The compiled and runtime parameter values are identical (and, if we wanted to cheat, we could look at the stored procedure definition, which actually includes “WITH RECOMPILE” to simplify my problem recreation. Kids, don’t ever use “WITH RECOMPILE” in real life).

Properties window showing matching values

So where do we go from here? When the optimizer seems to be making very poor decisions, that usually does indicate some kind of statistics issue, so this would probably be a good time to look directly at the relevant statistics.
Histogram showing no recent dates

Note that the window is actually scrolled all the way to the end. The last step in the histogram really is for September 12.
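
If you want to pull up the same histogram yourself, DBCC SHOW_STATISTICS will show it. A minimal sketch against the example schema defined in “The Guts” below (the statistic of interest here rides on the clustered primary key, PK_LaborDetail):

-- Show the histogram for the statistic on the clustered PK of LaborDetail.
DBCC SHOW_STATISTICS ('dbo.LaborDetail', PK_LaborDetail) WITH HISTOGRAM;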

What’s going on here? The immediate problem is that the statistics are out of date. The last step in the histogram is for September 12, which means that for any date after September 12 the statistics indicate there “should” be no rows. Put simply, SQL Server “thinks” that it “knows” for certain that there is no data in the table more recent than September 12. SQL Server generally will not (never will?) actually use zero for an estimated cost or number of rows; it uses 1 instead. I assume the reason for this is that estimating zero rows would make it impossible for the optimizer to distinguish between good and bad plans on the affected branch of the query plan (I do need to remember to get verification on that someday). Anyway, when the stored procedure is compiled the optimizer operates on the assumption that only 1 row will come back from each of the three tables, which is the reason seeks appear instead of scans, and also the reason we see a nested loop join.

The reason the statistics are out of date is that this example is relying on auto updating of statistics. Recall that for larger tables, statistics are not invalidated until the table has accumulated at least (500 + 20% of the table size) changes, and statistics are not updated until the first use after invalidation. This means that if we load the same amount of data into the table each morning, we will eventually reach a point where statistics are not updated each morning, or even every week, because each day’s load is a progressively smaller portion of the total data (a quick back-of-the-envelope illustration follows the list below). There are at least a couple of reasons this is an issue for projects like this:

  • Typically the unit tests that exercise the relevant code work against a static data set (because the load functionality is exercised in a separate set of unit tests – that is the nature of unit testing). No matter how robust and interesting this data set is, the fact that it doesn’t change means we have no chance of detecting this behavior in unit testing. Thorough integration testing could detect this, but most smaller development shops that I’ve encountered are much stronger on unit testing than integration testing (if they are strong on testing at all).
  • When the system first goes into production the tables are probably small, so this issue will not appear for weeks or even months. It is entirely possible that by the time it does, the development team will have moved on to another project, which means it may take them a while to come back up to speed on the code if their assistance is needed.
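
To make the auto update math concrete, here is a quick back-of-the-envelope sketch using hypothetical numbers (a 10,000-row daily load, roughly the size of the load in this example). The invalidation threshold of 500 + 20% of the table grows with every load, so the gap between auto updates keeps widening:

-- Hypothetical illustration: days between auto updates for a 10,000-row daily load.
-- Threshold to invalidate statistics = 500 + 20% of the current table size.
DECLARE @DailyRows INT = 10000;
SELECT	DayNumber,
	TableRows = DayNumber * @DailyRows,
	ChangesToInvalidate = 500 + 0.2 * DayNumber * @DailyRows,
	ApproxDaysToNextUpdate = CEILING((500 + 0.2 * DayNumber * @DailyRows) / @DailyRows)
FROM (VALUES (1), (10), (30), (90), (270)) AS d(DayNumber);

By day 90 the gap is already more than two weeks.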

Since this post has been focused on large daily data loads, my preferred fix is to simply update the statistics at the end of each day’s data load using the UPDATE STATISTICS statement. Since each day extends the range of the histogram, each day makes a statistically significant change in a very focused period, which should make a statistics update a no-brainer. That is not to say AUTO_UPDATE_STATISTICS should be turned off – it should definitely be left on. We just should not be depending on it for this particular index.
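
In this example that boils down to one extra statement at the end of the daily load; something along these lines (FULLSCAN is my assumption here – a sampled update may also be perfectly adequate):

-- Refresh the statistic on the ascending key immediately after the day's load.
UPDATE STATISTICS dbo.LaborDetail PK_LaborDetail WITH FULLSCAN;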

I would not be tempted to use trace flag 2389 in this case because the entire data load happens at once, so there is an obvious point in time at which statistics can be manually updated. There is no reason for us to consider mucking with the way statistics updates work based on this information alone. Along the same lines, I also would not personally use trace flag 2371, for the same reason – increasing the frequency at which automatic updates happen does not change the fact that we ought to be managing this particular statistic. Further, flag 2371 doesn’t really kick in until the table is getting pretty large. That not only means the issue would have appeared long before 2371 starts to operate, but in reality, by the time 2371 makes a difference we probably would have already started breaking the table into smaller pieces for performance reasons (see, for example, Kimberly Tripp).

The Gristle

The observant reader will notice an issue with my chart. My example of September 30, one of the dates which had the statistics issue, actually corresponds to a fast runtime. In fact, towards the end of the chart where several days pass between updates, there are only a handful of spikes at a time when almost all of the days should have inappropriate execution plans. The really observant reader will notice that the last spike in the chart actually corresponds to September 12, which was the last time the statistics auto updated – so this is a day that should have had a great execution plan.

What???!!!??? What is happening in this example is that, on the days where the statistics are auto updated, the statistics are invalidated when the data is loaded but are not actually recomputed until the tables are used by the daily computation. Since the example marks time in milliseconds, the statistics update actually dominates the runtime on the days where it occurs – that will not be the case in the real world, where the computations will be a lot more expensive. The last time I encountered a real-world example of this issue, the job would run in tens of minutes on days where the statistics were current but would take several hours on days where they were not.

More to the point, even in the extremely unlikely event that we saw this exact pattern of the statistics update being more expensive than the computation that triggered it, in the real world it probably would not be a good idea to avoid updating statistics for the sake of speeding up the nightly jobs. The entire point of doing this kind of work overnight is to make the daily workload run as quickly as possible, and the daily workload will run better if stats are current. If the stats need to be updated anyway, it may as well get done before any computation or aggregation is done on the data. I wouldn’t even really advise using AUTO_UPDATE_STATISTICS_ASYNC in this case. I could see an argument for asynchronous update when data is coming in gradually, but when data changes suddenly I really do think it’s best to wait for the statistics before proceeding.
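
For reference, asynchronous statistics update is a database-level setting; this is what turning it on would look like (again, I am not recommending it for this workload):

-- Database-level toggle for asynchronous statistics updates.
ALTER DATABASE AscendingDate SET AUTO_UPDATE_STATISTICS_ASYNC ON;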

I thought about leaving the chart out entirely, but thought that it still had some value in giving the reader the flavor of the swings in execution time.

The Guts

The tables used for this example are defined below. I decided to use a time and effort system for this example. The Employee table is populated by copying approximately 10,000 records from the AdventureWorks2012 Person.Person table and then deleting the rows that have duplicated employee names. There are 8 ProjectHours tables. Imagine that the data in these 8 tables each came from a separate data source, so computations could not be run across projects until all 8 tables were gathered onto the same server. No, I have never seen an actual time and effort system distributed in this way, but my real-world model for this was not a T&E system. Also note that the join between Employee and the project tables is done by name instead of employee ID. Again, the real-world model for this example was not a T&E system and the real-world equivalent of this join was not as bizarre – but I needed to use something besides employee ID to ensure that table scans, as opposed to seeks, would be optimal for large amounts of data. Pretend each project manager manually tracked time themselves and did not know the IDs of the employees on the project. Data from all 8 project tables is accumulated into a labor detail table (by date and employee ID). The TimeHistogram table keeps track of execution times for reporting purposes.

USE AscendingDate

CREATE TABLE dbo.Employee(
	EmployeeID  INT NOT NULL IDENTITY,
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	DeptID      INT NOT NULL CONSTRAINT DF_DeptID DEFAULT 0,
	Wage        NUMERIC(4,2) NOT NULL,
	AGazillionMoreFields NCHAR(2000) NOT NULL CONSTRAINT DF_AGazillionMoreFields DEFAULT N'',
	CONSTRAINT PK_Employee PRIMARY KEY CLUSTERED(EmployeeID)
);

INSERT INTO	dbo.Employee(FirstName, MiddleName, LastName, Wage)
	SELECT FirstName, MiddleName, LastName, CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 50 + 10
	FROM AdventureWorks2012.Person.Person WHERE BusinessEntityID <= 10000;

DELETE e1
FROM dbo.Employee e1
INNER JOIN(
	SELECT FirstName, MiddleName, LastName
	FROM dbo.Employee
	GROUP BY FirstName, MiddleName, LastName
	HAVING COUNT(*) > 1
) e2 ON e1.FirstName = e2.FirstName AND ISNULL(e1.MiddleName, '') = ISNULL(e2.MiddleName, '') AND e1.LastName = e2.LastName;

CREATE TABLE dbo.ProjectHours_A(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_ProjectHoursA_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursA_LastName_FirstName_MiddleName ON dbo.ProjectHours_A(LastName, FirstName, MiddleName);

CREATE TABLE dbo.ProjectHours_B(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_ProjectHoursB_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursB_LastName_FirstName_MiddleName ON dbo.ProjectHours_B(LastName, FirstName, MiddleName);

CREATE TABLE dbo.ProjectHours_C(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_ProjectHoursC_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursC_LastName_FirstName_MiddleName ON dbo.ProjectHours_C(LastName, FirstName, MiddleName);

CREATE TABLE dbo.ProjectHours_D(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_ProjectHoursD_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursD_LastName_FirstName_MiddleName ON dbo.ProjectHours_D(LastName, FirstName, MiddleName);

CREATE TABLE dbo.ProjectHours_E(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_ProjectHoursE_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursE_LastName_FirstName_MiddleName ON dbo.ProjectHours_E(LastName, FirstName, MiddleName);

CREATE TABLE dbo.ProjectHours_F(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_ProjectHoursF_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursF_LastName_FirstName_MiddleName ON dbo.ProjectHours_F(LastName, FirstName, MiddleName);

CREATE TABLE dbo.ProjectHours_G(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_ProjectHoursG_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursG_LastName_FirstName_MiddleName ON dbo.ProjectHours_G(LastName, FirstName, MiddleName);

CREATE TABLE dbo.ProjectHours_H(
	FirstName   NVARCHAR(50) NOT NULL,
	MiddleName  NVARCHAR(50)     NULL,
	LastName    NVARCHAR(50) NOT NULL,
	HoursWorked DECIMAL(3,2) NOT NULL CONSTRAINT DF_ProjectHoursH_HoursWorked DEFAULT CAST(CAST(CRYPT_GEN_RANDOM(1) AS TINYINT) as DECIMAL(5,2)) / 128
);

CREATE UNIQUE CLUSTERED INDEX UQ_ProjectHoursH_LastName_FirstName_MiddleName ON dbo.ProjectHours_H(LastName, FirstName, MiddleName);

CREATE TABLE dbo.LaborDetail(
	EmployeeID  INT  NOT NULL,
	WorkDay     DATE NOT NULL,
	ProjAHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjAHours DEFAULT 0,
	ProjBHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjBHours DEFAULT 0,
	ProjCHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjCHours DEFAULT 0,
	ProjDHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjDHours DEFAULT 0,
	ProjEHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjEHours DEFAULT 0,
	ProjFHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjFHours DEFAULT 0,
	ProjGHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjGHours DEFAULT 0,
	ProjHHours  DECIMAL(3,2) NOT NULL CONSTRAINT DF_LaborDetail_ProjHHours DEFAULT 0,
	HoursWorked DECIMAL(4,2) NOT NULL CONSTRAINT DF_LaborDetail_HoursWorked DEFAULT 0,
	LaborCost   DECIMAL(5,2) NOT NULL CONSTRAINT DF_LaborDetail_LaborCost DEFAULT 0,
	RecordBiggerInRealLife NCHAR(2000) NOT NULL CONSTRAINT DF_RecordBiggerInRealLife DEFAULT N'',
	CONSTRAINT PK_LaborDetail PRIMARY KEY CLUSTERED(WorkDay, EmployeeID)
);

CREATE TABLE dbo.TimeHistogram(
	WorkDay  DATE NOT NULL,
	LoadTime INT  NOT NULL,
	CalcTime INT  NOT NULL,
	NewStats BIT  NOT NULL,
	TotalTime AS LoadTime + CalcTime,
	CONSTRAINT PK_TimeHistogram PRIMARY KEY CLUSTERED(WorkDay)
);

I used three stored procedures to generate the data for this writeup. SimulateDailyLoad and SimulateCalculation simulate the process of loading and transforming a day’s worth of data. To simplify my life, the third procedure, GenerateHistogram, loops through 3/4 of a year’s worth of calls to these routines and accumulates timing data into the TimeHistogram table.

CREATE PROCEDURE dbo.SimulateDailyLoad
	@LoadDate DATE
AS
BEGIN
	TRUNCATE TABLE dbo.ProjectHours_A;
	TRUNCATE TABLE dbo.ProjectHours_B;
	TRUNCATE TABLE dbo.ProjectHours_C;
	TRUNCATE TABLE dbo.ProjectHours_D;
	TRUNCATE TABLE dbo.ProjectHours_E;
	TRUNCATE TABLE dbo.ProjectHours_F;
	TRUNCATE TABLE dbo.ProjectHours_G;
	TRUNCATE TABLE dbo.ProjectHours_H;
	INSERT INTO dbo.LaborDetail(EmployeeID, WorkDay) SELECT EmployeeID, @LoadDate FROM dbo.Employee;
	INSERT INTO dbo.ProjectHours_A(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
	INSERT INTO dbo.ProjectHours_B(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
	INSERT INTO dbo.ProjectHours_C(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
	INSERT INTO dbo.ProjectHours_D(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
	INSERT INTO dbo.ProjectHours_E(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
	INSERT INTO dbo.ProjectHours_F(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
	INSERT INTO dbo.ProjectHours_G(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
	INSERT INTO dbo.ProjectHours_H(FirstName, MiddleName, LastName)
		SELECT FirstName, MiddleName, LastName FROM dbo.Employee
END;

GO

CREATE PROCEDURE dbo.SimulateCalculation
	@LoadDate DATE
WITH RECOMPILE AS
BEGIN
	UPDATE d SET d.ProjAHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_A h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE d SET d.ProjBHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_B h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE d SET d.ProjCHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_C h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE d SET d.ProjDHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_D h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE d SET d.ProjEHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_E h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE d SET d.ProjFHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_F h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE d SET d.ProjGHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_G h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE d SET d.ProjHHours = h.HoursWorked
	FROM dbo.LaborDetail d
		INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		INNER JOIN dbo.ProjectHours_H h ON e.FirstName = h.FirstName AND e.MiddleName = h.MiddleName AND e.LastName = h.LastName
	WHERE d.WorkDay = @LoadDate
	UPDATE dbo.LaborDetail SET HoursWorked = ProjAHours + ProjBHours + ProjCHours + ProjDHours + ProjEHours + ProjFHours + ProjGHours + ProjHHours
		WHERE WorkDay = @LoadDate;
	UPDATE d SET LaborCost = d.HoursWorked * e.Wage, RecordBiggerInRealLife = e.AGazillionMoreFields
		FROM dbo.LaborDetail d INNER JOIN dbo.Employee e ON d.EmployeeID = e.EmployeeID
		WHERE d.WorkDay = @LoadDate;
END;

GO

CREATE PROCEDURE dbo.GenerateHistogram AS
BEGIN
	DECLARE @dt DATE;
	DECLARE @t0 DATETIME;
	DECLARE @t1 DATETIME;
	DECLARE @t2 DATETIME;
	DECLARE @t3 DATETIME;
	DECLARE @flag BIT;

	TRUNCATE TABLE dbo.LaborDetail;
	TRUNCATE TABLE dbo.TimeHistogram;
	CHECKPOINT;
	DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
	SET @dt = '20130101';

	WHILE @dt < '20131001'
	BEGIN
		SET @t0 = CURRENT_TIMESTAMP;
		EXEC dbo.SimulateDailyLoad @dt;
		SET @t1 = CURRENT_TIMESTAMP;
		CHECKPOINT;
		DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
		SET @t2 = CURRENT_TIMESTAMP;
		EXEC dbo.SimulateCalculation @dt;
		SET @t3 = CURRENT_TIMESTAMP;
		SET @flag = 0;
		IF STATS_DATE(OBJECT_ID('dbo.LaborDetail'), 1) >= @t0
		BEGIN
			SET @flag = 1;
		END;
		INSERT INTO dbo.TimeHistogram(WorkDay, LoadTime, CalcTime, NewStats)
			VALUES(@dt, DATEDIFF(ms, @t0, @t1), DATEDIFF(ms, @t2, @t3), @flag);
		SET @dt = DATEADD(d, 1, @dt);
	END;
END;

The CHECKPOINT and DROPCLEANBUFFERS in the last stored procedure are probably completely unnecessary. They were originally added when I was debugging an issue, and I simply forgot to remove them before the last time I ran this code.
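
Incidentally, the chart at the top of the post comes straight out of TimeHistogram; a query like this is enough to see which days paid the statistics update tax:

-- NewStats = 1 marks the days where statistics were recomputed during the calculation.
SELECT WorkDay, LoadTime, CalcTime, TotalTime, NewStats
FROM dbo.TimeHistogram
ORDER BY WorkDay;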

Error 22050 in SQL Server 2008 Agent Jobs Can Be a Red Herring

Executive Summary

When working with legacy versions of SQL Server, if an agent job fails with error 22050 (“error formatting query…”), it is entirely possible that the issue has nothing whatsoever to do with query formatting or parameterization. A job can fail with this error due to other causes even if the query is formatted perfectly. The original error message can often be obtained by using Profiler. Thank you to Gianluca Sartori for suggesting that I try Profiler when I was stuck on this issue.

Long Version

The following was inspired by an actual story, but the names of all tables and indexes have been changed to protect the innocent. The specific system that I was working with at the time was running SQL Server 2008 R2 Standard Edition on Windows Server 2008 R2. And yes, I know this blog post is already obsolete as I write it, because I have not been able to reproduce the issue in SQL Server 2012, but it is still one of the more interesting issues that I’ve looked into lately. Additionally, at least as of this writing, I’ve found that web search results for this particular error are less than helpful; most treatments of this error are rather terse.

Enough with the disclaimers. A few months ago I was trying to get a handle on which tables had the most serious fragmentation issues. Intending to do something quick and dirty, I threw together a SQL Agent job to send me email every morning while I got a gut feel for the database. If the query below doesn’t look familiar to you and you’re curious what I’m up to, you can look, for example, here. The job contained a single step, which was


exec msdb.dbo.sp_send_dbmail @recipients='me@wherever.com', @subject='Fragmentation Report',
   @execute_query_database='IBrokeMyDB',
   @query = 'SELECT OBJECT_NAME(a.object_id) AS [Table], b.name, a.avg_fragmentation_in_percent,
      a.avg_page_space_used_in_percent, a.page_count
   FROM sys.dm_db_index_physical_stats(5, NULL, NULL, NULL, ''Sampled'') a
      INNER JOIN sys.indexes b ON a.object_id = b.object_id and a.index_id = b.index_id
   WHERE page_count >= 750
   ORDER BY avg_fragmentation_in_percent DESC'


Or if you prefer a screenshot

shot0

The first few steps in the deployment process went fine. There were no issues setting up Database Mail and all of the test emails came through fine. Also, executing the above call in a query window worked fine and the email arrived as expected. When set up as an agent job, though, there was no joy.

shot1

Hmmmm. So the error suggests there is an issue with parameters, and I do have a quoted string in the parameter, which had to be doubled up since the entire query is itself a quoted string. The default for the mode parameter is ‘Limited’, which is actually fine with me, so I updated my exec statement to


exec msdb.dbo.sp_send_dbmail @recipients='me@wherever.com', @subject='Fragmentation Report',
   @execute_query_database='IBrokeMyDB',
   @query = 'SELECT OBJECT_NAME(a.object_id) AS [Table], b.name, a.avg_fragmentation_in_percent,
      a.avg_page_space_used_in_percent, a.page_count
   FROM sys.dm_db_index_physical_stats(5, NULL, NULL, NULL, NULL) a
      INNER JOIN sys.indexes b ON a.object_id = b.object_id and a.index_id = b.index_id
   WHERE page_count >= 750
   ORDER BY avg_fragmentation_in_percent DESC'


Which, unfortunately, still doesn’t work

shot2

Hmmmmm. I swear I’ve done this before and it’s worked fine. At this point I’m not sure if I’m looking at a problem with dm_db_index_physical_stats or some kind of permissions issue with the “sys” schema. I decide it’s easier to test for the latter and update the job step to


exec msdb.dbo.sp_send_dbmail @recipients='me@wherever.com', @subject='Fragmentation Report',
   @execute_query_database='IBrokeMyDB',
   @query = 'SELECT b.name FROM sys.indexes b'


Upon re-running the job, an email message listing all of the indexes in the database appeared in my mailbox. Hmmmm. So the good news is that I’m not looking at some kind of weird issue with the sys schema, but the bad news is that I am stumped. Since I’ve done exactly this kind of thing many times before, I can’t really convince myself there is an issue with using dm_db_index_physical_stats in this way. But is there any chance I’ve just been lucky so far? I wish I could say I figured the rest of this out on my own, but at this point I hit #sqlhelp on Twitter to ask if there were any known issues like this with dm_db_index_physical_stats, and after a few tweets Gianluca Sartori suggested that I try Profiler.

In my case, the key to getting the real error was to add the “User Error Message” event to the trace. I used the standard template and added this event to it. I then started the trace and ran the job as I had before. It helps a lot that the error messages appear in red; after slogging through the trace output I eventually found the error message “The user does not have permission to perform this action.” In other words, the account that the SQL Server Agent job runs as has permission to access sys.indexes but not sys.dm_db_index_physical_stats.

shot3

This is the nudge that I needed. At this point it finally dawned on me that dm_db_index_physical_stats is actually a function, not a table or view, so one really big difference between it and sys.indexes is that it demands a permission (VIEW DATABASE STATE, in the case of this DMF) that plain tables and views do not. After a little more digging I discovered the following (a sketch of the fix appears after the list):

  1. The agent account was not a member of the sysadmin server role. I suspect this was accidental; the developer who handled the initial deployment phases probably did not fully appreciate the importance of using Configuration Manager.
  2. The account had been manually added to the SQLAgentOperatorRole, SQLAgentReaderRole, and SQLAgentUserRole database roles in the msdb database, so the agent itself worked fine as long as appropriate database permissions were granted.
  3. The user had been added to the db_datareader and db_datawriter database roles in the database IBrokeMyDB, which means that all of the agent jobs which only operate on views and tables in this database worked fine.
  4. The user was not granted any other permissions in the database. Among other things, this means that the user could not execute any stored procedures or functions.
  5. Added 2013 Aug 28: Not so much a discovery as a lesson from the “do as I say, not as I did” department. It’s usually best not to run agent jobs as the agent service account, which usually has administrative privileges. When possible it really is best to follow the principle of least privilege and run jobs as accounts with minimal permissions. Doing this will not make a user any more or less likely to encounter this particular issue, but as long as I am dwelling on agent configuration it is important that I try not to get the reader into bad habits. More props to Gianluca for pointing out this omission in the first version of this post.
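
For completeness, the narrow fix is a single grant. A sketch, with [AgentUser] standing in for whatever account the job step actually runs as:

USE IBrokeMyDB;
GO
-- VIEW DATABASE STATE covers the database-scoped DMVs and DMFs,
-- including sys.dm_db_index_physical_stats. [AgentUser] is a placeholder.
GRANT VIEW DATABASE STATE TO [AgentUser];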

But that’s not the point of this post. The takeaway is that the information which I really needed to solve the problem did not find its way into the job history but was available when I dug deeper with Profiler. I have not done the leg work to verify this, but I would expect that extended events could also be used to expose this information.
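
For the curious, a session along these lines would be the place to start (a sketch only – I have not run this against 2008 R2):

-- Capture reported errors server-wide into a ring buffer.
CREATE EVENT SESSION capture_user_errors ON SERVER
ADD EVENT sqlserver.error_reported(
	ACTION(sqlserver.session_id, sqlserver.sql_text)
	WHERE severity >= 11)  -- user errors start at severity 11
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION capture_user_errors ON SERVER STATE = START;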

So what about SQL Server 2012?

As I mentioned earlier, this seems to be much less of an issue in SQL Server 2012 because, in my experience so far, 2012 appends the actual error that caused the query to fail to the generic 22050 error message. YMMV; I did not spend a tremendous amount of time trying to verify that this is always the case. But here is a screenshot taken from my 2012 instance, which I tried to break in a similar way.

shot_2012