When trying to gauge the level of database activity, you can use SQL Profiler to view the type and volume of transactions in flight and the overall level of database I/O, but you can't use it to directly tell which database tables are being updated.
However, there's a handy dynamic management view called sys.dm_db_index_usage_stats that tells you the number of rows that have been updated in each database index since the instance was last restarted (or since the table/index was created, if that happened more recently):
SELECT * FROM sys.dm_db_index_usage_stats
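Since these counters reset when the instance restarts, it helps to know how long SQL Server has actually been collecting them. As a quick sanity check (a minimal sketch; sqlserver_start_time is available in SQL Server 2008 and later):

-- Check when the instance last started, so you know how long
-- the usage counters have been accumulating
SELECT sqlserver_start_time,
       DATEDIFF(HOUR, sqlserver_start_time, GETDATE()) AS HoursOfCollectedStats
  FROM sys.dm_os_sys_info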
The view also has some additional information on index usage, including the number of scans, seeks, and lookups performed on each index - very helpful information if you're looking for unused indexes or trying to find the heaviest-hit objects. If you look at index IDs 0 and 1 (0 is the table heap, 1 is the clustered index), you'll see activity on the underlying table data itself.
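For example, here's one way to compare reads against writes per index in the current database (a sketch only - the filters are a starting point, and si.name will be NULL for heaps):

-- Read activity (seeks + scans + lookups) vs. writes for each index
-- in the current database; indexes with many writes and no reads
-- are candidates for review
SELECT object_name(us.object_id) AS TableName,
       si.name AS IndexName,
       us.user_seeks + us.user_scans + us.user_lookups AS Reads,
       us.user_updates AS Writes
  FROM sys.dm_db_index_usage_stats us
  JOIN sys.indexes si
    ON us.object_id = si.object_id
   AND us.index_id = si.index_id
 WHERE us.database_id = db_id()
 ORDER BY Writes DESC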
I needed to see the row updates per second for every table in the database, so rather than run that select over and over (and compare the results), I wrote a quick script to do the comparison repeatedly for me:
SET NOCOUNT ON

-- Remove the working table if it already exists
-- so it doesn't get in the way
IF OBJECT_ID('tempdb..#TableActivity_After') IS NOT NULL
    DROP TABLE #TableActivity_After

-- Collect our working data
SELECT object_name(us.object_id) as TableName,
       user_updates as UpdatedRows,
       last_user_update as LastUpdateTime
  INTO #TableActivity_After
  from sys.dm_db_index_usage_stats us
  join sys.indexes si
    on us.object_id = si.object_id
   and us.index_id = si.index_id
 where database_id = db_id()
   and user_seeks + user_scans + user_lookups + user_updates > 0
   and si.index_id in (0,1)
 order by object_name(us.object_id)

-- Figure out if we're running it the first time or again
-- Put the data into the correct tables
IF OBJECT_ID('tempdb..#TableActivity_Before') IS NULL
BEGIN
    -- First time it's being run - stage the existing data
    PRINT 'Initial table usage collected - execute again for changes'
END
ELSE
BEGIN
    -- Running script a subsequent time
    -- Compare this set of data to our last set

    -- See how long it's been since we ran this script last
    -- Or at least since last change in any table in the database
    DECLARE @SecondsSince DECIMAL(10,2)

    SELECT @SecondsSince = CONVERT(FLOAT, DATEDIFF(ms, MAX(LastUpdateTime), GETDATE())) / 1000
      FROM #TableActivity_Before

    SELECT @SecondsSince as 'Seconds since last execution'

    -- Do actual table comparison and give results
    SELECT a.TableName,
           a.UpdatedRows - isnull(b.UpdatedRows, 0) as RowsUpdated,
           CONVERT(INT, (a.UpdatedRows - isnull(b.UpdatedRows, 0)) / @SecondsSince) as RowsPerSecond
      FROM #TableActivity_After a
      LEFT JOIN #TableActivity_Before b
        ON b.TableName = a.TableName
     WHERE a.UpdatedRows - isnull(b.UpdatedRows, 0) > 0
     ORDER BY RowsUpdated DESC
END

-- Swap the tables so the AFTER table becomes the new BEFORE
-- Then clean up AFTER table since we'll get a new one next time
IF OBJECT_ID('tempdb..#TableActivity_Before') IS NOT NULL
    DROP TABLE #TableActivity_Before

SELECT * INTO #TableActivity_Before FROM #TableActivity_After

DROP TABLE #TableActivity_After
Running that script the first time will grab a snapshot of table activity. Running it again will tell you what has changed since the first run, and each subsequent run will continue to report changes (updating the "before" image each time, so you're only seeing the most recent database activity).
If you wanted to see activity on all database indexes, you could update the collection query at the top of the script to include the index name and remove the "and si.index_id in (0,1)" filter, and you'd see the details for every index - as in the sketch below.
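The collection step might look something like this instead (a sketch only - the comparison queries later in the script would also need the IndexName column added to their joins and output):

-- Collect activity for every index, not just the heap/clustered index
SELECT object_name(us.object_id) as TableName,
       si.name as IndexName,
       user_updates as UpdatedRows,
       last_user_update as LastUpdateTime
  INTO #TableActivity_After
  from sys.dm_db_index_usage_stats us
  join sys.indexes si
    on us.object_id = si.object_id
   and us.index_id = si.index_id
 where database_id = db_id()
   and user_seeks + user_scans + user_lookups + user_updates > 0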
I hope this is helpful - if you have any feedback or would like to see something added, please feel free to leave a comment below!
SQL Reporting Services doesn't do a very good job keeping the SQL Agent clean by removing expired or otherwise unusable subscriptions from the job list. To deal with this, we created a script that pulls some details about these old subscriptions, including the report responsible, the last run time and status, and the user who originally scheduled it. If you notice your SQL Agent job list getting excessively long, you can use this query to identify the culprit reports and owners, and then either notify them or remove the old subscriptions manually yourself (run this on the server with your SSRS databases):
select c.Name as ReportName,
       s.EventType,
       s.Description as SubscriptionDescription,
       s.LastStatus as LastSubscriptionStatus,
       s.LastRunTime as SubscriptionLastRunTime,
       case when sch.RecurrenceType = 1 then 'One Time'
            when sch.RecurrenceType = 2 then 'Hourly'
            when sch.RecurrenceType = 4 then 'Daily'
            when sch.RecurrenceType = 5 then 'Monthly'
            when sch.RecurrenceType = 6 then 'Month Week'
            else 'Other' end as RecurrenceType,
       s.DeliveryExtension,
       u.UserName as SubscriptionSetUpBy,
       s.ModifiedDate as SubscriptionLastModifiedDate
  from [ReportServer].[dbo].[Subscriptions] s
  join [ReportServer].[dbo].[Catalog] c
    on c.ItemID = s.Report_OID
  join [ReportServer].[dbo].[Users] u
    on u.UserID = s.OwnerID
  join [ReportServer].[dbo].[ReportSchedule] rs
    on c.ItemID = rs.ReportID
   and s.SubscriptionID = rs.SubscriptionID
  join [ReportServer].[dbo].[Schedule] sch
    on rs.ScheduleID = sch.ScheduleID
 where s.EventType <> 'RefreshCache'
   and s.LastRunTime < dateadd(m, -3, getdate())
 order by c.Name
There are a number of similar scripts out there that pull much of this information together, but we couldn't find one that collected all the details we were looking for in one place. From here, you can deal with the subscriptions as you see fit.
Note that you can just remove the old subscriptions by brute force if you'd prefer - SSRS will clean up the orphaned SQL Agent jobs - but I've preferred to review the list and notify users, as we've never had too much volume to deal with. If you want to just delete them straight away, you can do so here:
DELETE ReportServer.dbo.Subscriptions WHERE InactiveFlags != 0 OR LastRunTime < dateadd(m, -3, getdate())
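If you do delete subscriptions directly, you can check that the corresponding SQL Agent jobs eventually disappear. Here's a rough sketch - it assumes the default setup, where SSRS names its Agent jobs after the ScheduleID GUID and files them under the "Report Server" job category:

-- Find SSRS-created Agent jobs that no longer have a matching schedule
SELECT j.name AS OrphanedJobName
  FROM msdb.dbo.sysjobs j
  JOIN msdb.dbo.syscategories cat
    ON j.category_id = cat.category_id
  LEFT JOIN ReportServer.dbo.Schedule sch
    ON j.name = CONVERT(NVARCHAR(128), sch.ScheduleID)
 WHERE cat.name = 'Report Server'
   AND sch.ScheduleID IS NULL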
SQL Server can easily export to a CSV file, but it exports just the data, without the column names included. In order to export the column names as well, you need to actually perform two exports - one with the column names, and one with the data - and then combine the two files into a single file. The script below does exactly that: it populates a global temp table with your query results, exports the headers and the data separately with BCP, and then combines the two files.
You could do this using any query you want - native SQL, a linked server, a stored procedure, or anything else - and the results will export the same way once they're in the temp table. Since it builds the list of column names dynamically as well, you only need to change out the query being executed and set the export location - no other configuration is necessary.
-- Declare the variables
DECLARE @CMD VARCHAR(4000),
        @DelCMD VARCHAR(4000),
        @HEADERCMD VARCHAR(4000),
        @Combine VARCHAR(4000),
        @Path VARCHAR(4000),
        @COLUMNS VARCHAR(4000)

-- Set values as appropriate
SET @COLUMNS = ''
SET @Path = '\\servername\share\outputpath'

-- Set up the external commands and queries we'll use through xp_cmdshell
-- Note that they won't execute until we populate the temp tables they refer to
SET @CMD = 'bcp "select * from ##OutputTable" queryout "' + @Path + '\Temp_RawData.csv" -S ' + @@SERVERNAME + ' -T -t , -c'
SET @HEADERCMD = 'bcp "SELECT * from ##Cols" queryout "' + @Path + '\Temp_Headers.csv" -S ' + @@SERVERNAME + ' -T -t , -c'
SET @Combine = 'copy "' + @Path + '\Temp_Headers.csv" + "' + @Path + '\Temp_RawData.csv" "' + @Path + '\MyCombinedFile.csv"'
SET @DelCMD = 'del "' + @Path + '\Temp_*.csv"'

-- Create and populate our temp table with the query results
SELECT * INTO ##OutputTable FROM YourSourceTable

-- Generate a list of columns
SELECT @COLUMNS = @COLUMNS + c.name + ','
  from tempdb..syscolumns c
  join tempdb..sysobjects t
    on c.id = t.id
 where t.name = '##OutputTable'
 order by c.colid

SELECT @COLUMNS as Cols INTO ##Cols

-- Run the two export queries - first for the header, then for the data
exec xp_cmdshell @HEADERCMD
exec xp_cmdshell @CMD

-- Combine the two files into a single file
exec xp_cmdshell @Combine

-- Clean up the two temp files we created
exec xp_cmdshell @DelCMD

-- Clean up our temp tables
drop table ##Cols
drop table ##OutputTable
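Note that the script relies on xp_cmdshell, which is disabled by default. If it isn't already enabled (and your security policy allows it), you can turn it on like this:

-- Enable xp_cmdshell (requires sysadmin; review the security
-- implications before enabling this on a production server)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE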
If you have any suggestions or run into any issues, please let me know!
Decoding bitwise masks can be a bit confusing, so I wanted to share a couple of T-SQL functions I've created to make them easier to deal with. If you'd like to read more about bitmasks and applying them in T-SQL, you can read about it at SQL Fool: T-SQL Bitwise Operators.
The first will return the value in a given bit position of an integer - it accepts two parameters (the lookup value and the bit position) and returns a bit for the value in that position. Note that it starts with position zero, so make sure you're counting correctly:
CREATE FUNCTION dbo.BitwisePosition (
    @value BIGINT,
    @PositionNumber INT
)
RETURNS BIT
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @Result BIT
    DECLARE @Mask BIGINT

    SET @Mask = POWER(2, @PositionNumber)
    SET @Result = (CASE @value & @Mask WHEN @Mask THEN 1 ELSE 0 END)

    RETURN @Result
END
GO
The second function returns a bit (true/false) based on whether a provided bitmask applies to a reference value:
CREATE FUNCTION dbo.BitwiseMask (
    @value BIGINT,
    @Mask BIGINT
)
RETURNS BIT
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @Result BIT

    SET @Result = (CASE @value & @Mask WHEN @Mask THEN 1 ELSE 0 END)

    RETURN @Result
END
GO
Don't forget to grant them permissions:
GRANT EXECUTE ON dbo.BitwiseMask TO PUBLIC
GRANT EXECUTE ON dbo.BitwisePosition TO PUBLIC
To use these functions, you'd call them as in these examples:
-- Value:    1110001000
-- Position: 9876543210
-- Checking positions 7, 4, and 0 should return 1, 0, 0
select dbo.BitwisePosition(904, 7),
       dbo.BitwisePosition(904, 4),
       dbo.BitwisePosition(904, 0)

-- Value:   1110001000 = 904
-- Bitmask: 0010001000 = 136
-- Will return true since mask "fits inside" value
select dbo.BitwiseMask(904, 136)

-- Value:   1110001000 = 904
-- Bitmask: 0010001001 = 137
-- Will return false since mask has a bit "outside" value
select dbo.BitwiseMask(904, 137)
I hope you find them helpful!
We use Oracle GoldenGate (expensive, and probably overkill for Oracle->MSSQL, but good at what it does) to replicate data from an Oracle database into SQL Server. However, I got an alert the other day that replication had stopped, and when I checked the status of replication, all the REPs we had set up were in status "Starting...", but none of them were actually doing anything.
Attempting to stop them got the following error:
GGSCI (GGSERVER) 68> stop rep MYREP

Sending STOP request to REPLICAT MYREP ...

ERROR: opening port for REPLICAT MYREP (TCP/IP error: Connection refused).
Stopping and starting the Manager service, or even rebooting the machine, didn't help - the REPs still said "Starting" and were unresponsive. Even stranger, deleting and recreating a REP gave the same result - before I even attempted to start it for the first time, it showed "Starting", and an attempt to start it returned "Process is starting up - try again later".
The cause was the REP process status file, located in the dirpcs folder under the GoldenGate root - there should be a file there for each REP that's currently running, giving details about its status. When a REP stops, this file is deleted. Since none of the current REPs were doing anything (they were all sitting at the end of the previous trail file), they should have been stopped. I deleted the status files for the affected REP streams, and the Manager then reported them as "STOPPED" - at that point, I was able to start up each REP without issue.
I'm not sure how they got into that state, but once started again, they all worked without issue. I hope this saves you the troubleshooting time of hunting down these files!
Anybody who has the rights to run a report in SSRS can set up a subscription, delivering the report to their own email address. However, if you try to deliver the report as an email to somebody else, you might be greeted with this unclear error message:
A subscription delivery error has occurred. (rsDeliveryError) One of the extension parameters is not valid for the following reason: The account you are using does not have administrator privileges. A subscription cannot be created for firstname.lastname@example.org. (rsInvalidExtensionParameter)
I was setting up a subscription on our SharePoint-integrated SQL Reporting Services site, and was attempting to send the report to our shared group email address. I could set it up to deliver to me and then forward it, but it wouldn't let me send the email to anybody else's address, either inside or outside the organization.
Solution: Grant your user the "Create Alerts" and "Manage Alerts" permissions
I received this error because I lacked the "Create Alerts" and "Manage Alerts" permissions on the report in question (or on its folder, or higher up). The error says you need to be an administrator, but doesn't really tell you what's wrong with your request - it's actually complaining that you're delivering an alert to somebody other than yourself. Granting those rights to your user account (on that report, or on an object above it so they propagate down) allows you to send the report to any email address you want.
I hope this helps!
Computed columns can be a great tool if you want to add a value to a table that's dependent on other columns in the table, and you don't want to maintain it on its own every time the source columns change. In a view, these can provide a single, consistent calculation to the end users - say, a derived value that every consumer of the view needs computed the same way. The catch is that if the calculation can fail for certain rows, the whole view becomes difficult to query. To demonstrate, we'll set up a view where one of the calculations errors out on some of the rows:
-- Set up a view with two key columns and two calculations
-- The first calculation will always work
-- The second gives a divide by zero error on every 10th row
CREATE VIEW SourceObject AS
SELECT object_id AS KeyColumn1,
       column_id AS KeyColumn2,
       object_id - column_id AS CalcColumn1,
       object_id / (column_id % 10) AS CalcColumn2
  FROM msdb.sys.columns
Now that it's set up, we can try selecting all the rows from the view we just created. We'll see about 100 rows of output, and then the query will stop with a "Divide by zero" error:
SELECT * FROM SourceObject
The calculation in this query is pretty straightforward, and you can see which rows are causing a problem (those where column_id is divisible by 10), but what if it were more complicated? The problem is that SQL Server doesn't display the row that had the problem - it stops on the row before it - so finding the row with the error is a bit more difficult. What if there were multiple columns involved in the calculation, or different combinations of values that could result in an error? Tracking down the rows causing an error can be difficult - you have to find all the possible conditions that could cause an error, and then query for each of them.
This script will allow you to find all the rows in a view with a calculation error, all at once. It uses a cursor to pull the rows from the view one at a time, test the calculation, and then write any errors it finds to a table where you can see the rows that are causing problems. Using a CURSOR generally isn't ideal, but in this case, it's the only way to react to a bad calculation on a row-by-row basis and deal with it.
The script uses two key values from your view - they're called KeyColumn1 and KeyColumn2 - and you can modify the script to name them whatever you want, or use just a single value if that makes more sense in your scenario. It also supports two computed columns - CalcColumn1 and CalcColumn2 - though again, it could be changed to check just a single column.
-- Set up variables
DECLARE @KeyColumn1 INT,
        @KeyColumn2 INT,
        @CalcColumn1 INT,
        @CalcColumn2 INT
DECLARE @CurrentRow BIGINT
SET @CurrentRow = 1

-- Set up a place to hold key values for rows that work
SELECT TOP 0 KeyColumn1, KeyColumn2
  INTO #WorkingRows
  FROM SourceObject

-- Set up a place to hold errors for rows that don't work
CREATE TABLE #ErrorRows (
    RowNumber BIGINT,
    KeyColumn1 INT,
    KeyColumn2 INT,
    [ERROR_NUMBER] INT,
    [ERROR_MESSAGE] NVARCHAR(4000)
)

-- Begin loop to look through rows in the view
DECLARE cur INSENSITIVE CURSOR FOR
SELECT KeyColumn1, KeyColumn2
  FROM SourceObject
 ORDER BY KeyColumn1, KeyColumn2

OPEN cur

FETCH NEXT FROM cur INTO @KeyColumn1, @KeyColumn2

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        -- Try to select the calculated columns
        -- If there's an error, it will jump to the CATCH block
        SELECT @CalcColumn1 = CalcColumn1,
               @CalcColumn2 = CalcColumn2
          FROM SourceObject
         WHERE KeyColumn1 = @KeyColumn1
           AND KeyColumn2 = @KeyColumn2

        -- This lookup succeeded
        INSERT INTO #WorkingRows
        SELECT @KeyColumn1, @KeyColumn2
    END TRY
    BEGIN CATCH
        -- The lookup failed - save details
        INSERT INTO #ErrorRows
        SELECT @CurrentRow, @KeyColumn1, @KeyColumn2, ERROR_NUMBER(), ERROR_MESSAGE()
    END CATCH

    SET @CurrentRow = @CurrentRow + 1

    FETCH NEXT FROM cur INTO @KeyColumn1, @KeyColumn2
END

-- Show the key columns of rows with errors
SELECT * FROM #ErrorRows

-- Show the key columns of working rows
SELECT * FROM #WorkingRows

-- Clean things up
CLOSE cur
DEALLOCATE cur
DROP TABLE #ErrorRows
DROP TABLE #WorkingRows
At the end, you'll see two result sets - one from #ErrorRows, which contains the key values for the rows with errors, along with details about each error, and one from #WorkingRows, which contains the key values for all of the rows that calculated cleanly (the script drops both temp tables once it has displayed them).
Note: I could just as easily set up a table with a computed column in it that causes the same problem. You'd be unable to select the entire table without an error, and hunting down the rows with errors is just as painful. The script to find the errors is the same, but here's an example of a table that has a computed column with this problem:
-- Set up a table with a list of numbers in it
SELECT object_id AS KeyColumn1,
       RANK() OVER (PARTITION BY 0 ORDER BY NEWID()) AS KeyColumn2
  INTO SourceObject
  FROM msdb.sys.columns

-- Add two calculations to the table
-- The first will always work
-- The second will give a "Divide by zero" every 100 rows
ALTER TABLE SourceObject ADD CalcColumn1 AS (KeyColumn1 - KeyColumn2)
ALTER TABLE SourceObject ADD CalcColumn2 AS (KeyColumn1 / (KeyColumn2 % 100))

-- Note that you can't add a persisted computed column to a table
-- if there's a calculation error in any of the rows, so this
-- command will fail with a "Divide by zero" error
ALTER TABLE SourceObject ADD CalcColumn3 AS (KeyColumn1 / (KeyColumn2 % 100)) PERSISTED
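When you're done experimenting, you can clean up the demo objects - since the view and table examples both use the name SourceObject, only one will exist at a time, so a sketch like this handles either case:

-- Remove whichever version of the demo object exists
IF OBJECT_ID('SourceObject', 'V') IS NOT NULL
    DROP VIEW SourceObject
IF OBJECT_ID('SourceObject', 'U') IS NOT NULL
    DROP TABLE SourceObject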