TryCatchFinally.net - Some SQL, some .NET, and whatever else

24 Aug 2017

Passing multi-value report parameters to SSRS using the URL querystring

Passing parameters to reports in SSRS using the URL can be really helpful, especially when you want to link to a report with some of the data already filled out. We had an email alert that some accounts in Active Directory were incorrect, and we wanted a way to link to a report that provided some additional detail for those accounts. Querystring parameters aren't very well documented (and I couldn't find anything on multi-value parameters), so I hope this saves you some time if you're in the same situation.

There's a pretty good reference on Technet, but it's got some gaps (for me, the biggest was that it doesn't mention multi-value/multi-select parameters at all). The basic syntax is pretty straightforward - normally, a report is called with a URL like the following:

http://YourSsrsServer/Reports/Some%20Folder/YourReportName

From there, you can select the parameters and run the report. To provide a link to the report with the parameters already filled out (whether they're blank or you want to override the defaults), just append this to the end of the URL:

?ReportParam1=SomeValue&ReportParam2=SomeOtherValue

You can pass any type of parameter with this same syntax - text, datetime, integer, float, or boolean (use Yes/No). Your full URL would look like this:

http://YourSsrsServer/Reports/Some%20Folder/YourReportName?ReportParam1=SomeValue&ReportParam2=SomeOtherValue

To pass a multi-value parameter, you repeat the name of the parameter to provide it multiple values:

http://YourSsrsServer/Reports/Some%20Folder/YourReportName?ReportMultiParam=SomeValue&ReportMultiParam=SomeOtherValue

If this is the only parameter on the report and it doesn't need any other user input, the report will automatically execute because it has everything it needs. If you want the report to wait for the user to hit "View Report", you need to leave at least one parameter empty.
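
For example, if you were generating these links in SQL (say, for the email alert mentioned above), a minimal sketch might look like this - the server, folder, report, and parameter names are all placeholders:

-- Sketch: build a report link with one single-value and one multi-value parameter
-- (all names below are placeholders - substitute your own server, report, and parameters)
DECLARE @ReportUrl VARCHAR(1000)

SET @ReportUrl = 'http://YourSsrsServer/Reports/Some%20Folder/YourReportName'
               + '?Office=42'                -- single-value parameter (value, not label)
               + '&AccountStatus=Disabled'   -- multi-value parameter, first value
               + '&AccountStatus=Expired'    -- same parameter repeated for the second value

SELECT @ReportUrl AS ReportLink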

Two quick notes if you're having trouble:

  • Use the parameter name, not the prompt text. If your parameter is called "Office" but the prompt says "Which office?", use "Office". Parameter names aren't always intuitive, and you may need to open the report in Visual Studio (or query the report server catalog - see the sketch after this list) to get these names if you can't guess them.
  • In the same way, you need to use the field's value, not the label. If the dropdown contains the full name of the office, but the value behind the list is the office number, you'll need to provide the numbers on the querystring.
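
If you can't open the report in Visual Studio, one unofficial way to find the parameter names is to peek at the report server catalog directly. This is only a sketch, and it assumes a native-mode install with the default ReportServer database (the Parameter column holds the parameter definitions as XML):

-- Sketch: list the parameter definitions stored with each deployed report
-- (native-mode SSRS, default ReportServer database; Type = 2 is a report)
SELECT c.Path,
       c.Name AS ReportName,
       CONVERT(XML, CONVERT(NVARCHAR(MAX), c.Parameter)) AS ParameterXml
  FROM ReportServer.dbo.Catalog c
 WHERE c.Type = 2
 ORDER BY c.Path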

Good luck!

10 Jan 2017

Microsoft Connect item for an SSRS pre-report canvas

Current state:

About six months ago, I submitted a connect request for SQL Server (my first one) to suggest an improvement to the SSRS interface - a customizable canvas that's displayed before an SSRS report renders, while parameters are being entered. My issue was that there was information I wanted report users to have (like expected wait time, suggestions for the parameters, the report title and a short description that might help them, etc.) and there was no way to get it to them.

With a giant empty screen staring at them, it seemed like a great place to add that information:

SSRS Report pane current layout

The suggestion:

Ideally, while the user is considering parameters, I'd love to see something like this (ignoring that it's not a super-photogenic report):

proposed-layout

This would be a great place for details about the report that help the user before (or while) they run it, like:

  • The name of the report (not always clearly visible in the URL or browser title bar, especially when you're Sharepoint-integrated). This is especially helpful if they've left multiple report tabs open or if report generation fails - they're often left with a white screen and no easy way to tell the tabs apart.
  • A short description of the report, including when you'd use it or any notable caveats.
  • An estimated runtime - this isn't always possible to state exactly, since it varies with the parameters, but even a general estimate (or better yet, an average or a 95% confidence range, pulled from the report server database) would be better than the complete blank they get today.
  • Suggestions/restrictions on the parameters - If a report can't be run for all offices at once, you can say so here. If running it for everybody at once means you can't export to Excel because it's too large, let the user know.
  • A mock-up/screenshot of the report - so the user can see if it's the one they want. They may be looking for a report with a specific chart, and waiting three minutes for each one to render may not be ideal.
  • A link to any other documentation/resources, or related reports, or pretty much anything else.

It wouldn't need to be anything fancy - I'd even settle for text you can format, or some kind of additional panel on the report that you can design but that isn't rendered with the report itself. Access to data (to pull estimates of execution times or dynamic owner/description details, for example) would be nice, but not required - I'd rather have a design-only, no-data canvas than nothing at all.
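
For the runtime estimate in particular, the raw data already exists in the report server database today. As a rough sketch (assuming a native-mode install, where the standard ExecutionLog3 view keeps roughly the last 60 days of history), something like this could feed an average and worst-case figure per report:

-- Sketch: rough per-report runtime statistics from the standard execution log
SELECT ItemPath,
       COUNT(*)                                  AS Executions,
       AVG(DATEDIFF(SECOND, TimeStart, TimeEnd)) AS AvgSeconds,
       MAX(DATEDIFF(SECOND, TimeStart, TimeEnd)) AS WorstSeconds
  FROM ReportServer.dbo.ExecutionLog3
 WHERE ItemAction = 'Render'
 GROUP BY ItemPath
 ORDER BY AvgSeconds DESC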

The plea:

Thanks to the motivation from the folks at Brent Ozar Unlimited, I'm asking for your help in voting for this suggestion (if you think it would be beneficial, of course). As part of T-SQL Tuesday, they've asked people to find a Connect request that they'd like to see addressed and post about it, so here I am asking humbly for your support. : )

If you like the idea, please vote for it - if you don't, please comment and let me know why not (I'm always open to understanding the opposition). There are a ton of great Connect items out there and this is only one, but I think it would help out (especially for our longer-running reports).

Thanks for your help!

Direct URL for the Connect item: https://connect.microsoft.com/SQLServer/feedback/details/2809098/ability-to-customize-report-splash-screen-during-parameter-submission

Side note:

I've attached the two images (current/proposed) to my Connect item three different times and they're still not showing up there - if anybody knows how to get those images published, please let me know so people can see what I'm talking about when I describe my suggestion! : )

8 Sep 2016

Query Business Objects repository for list of reports

We're migrating away from Business Objects Enterprise (XI-R2, not the newer 3.X version) and into SQL Reporting Services (it's about time!), and I needed an inventory of everything that was still in BOE. The repository is obfuscated so that it's not human-readable, but thanks to some sleuthing and an 8-year-old post from a Business Objects forum, I wanted to share the script I was able to come up with.

The hard part (decoding the obfuscated text from the database into human-readable format) came from a post by "marklee" on a Business Objects board (his post is about 1/4 of the way down):

http://www.forumtopics.com/busobj/viewtopic.php?t=124272

His query was written for Oracle, so I converted it to run against SQL Server, added some additional content types based on what we had in our database, and then attached a few additional rollups (like showing the root folder, building out the full path to each report, and returning the number of instances of each report that had been generated).

You can download the full script if you'd like, or you can view the contents below. Enjoy!

select 
 objectid, 
 parentid, 
 ownerid, 
 convert(datetime, REPLACE(left(lastmodifytime, 10),' ','-') + ' ' +
			replace(substring(lastmodifytime, 12, 8), ' ', ':') +
			'.' + substring(lastmodifytime, 21, 3)) as lastmodifytime, 
 upper(left(objectname,len(objectname)-2)) ObjectName, 
 typeid, 
 type, 
 convert(bit, 0) as IsInstance,
 si_cuid 
 into #results
 from 
 ( 
 SELECT 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 replace( 
 cast(objname as varchar(2000)) 
 ,'S','v') 
 ,'M','s') 
 ,'A','m') 
 ,')','a') 
 ,'+','b') 
 ,'C','n') 
 ,'-','c') 
 ,'/','d') 
 ,'O','t') 
 ,'E','o') 
 ,'1','e') 
 ,'3','f') 
 ,'G','p') 
 ,'5','g') 
 ,'7','h') 
 ,'W','x') 
 ,'U','w') 
 ,'Q','u') 
 ,'I','q') 
 ,'9','i') 
 ,':','i') 
 ,';','j') 
 ,'K','r') 
 ,'=','k') 
 ,'?','l') 
 ,'[','y') 
 ,']','z') 
 ,'!@',' ') 
 ,'B~S','&') 
 ,'!BO','.') 
 ,'B|C"','(') 
 ,'!B|D',')') 
 ,'M|Z','-') 
 ,'M}L',',') 
 ,'M|N','_') 
 ,'M}Z',':') 
 ,'!B{B','''') 
 ,'`|<','0') 
 ,'`|>','1') 
 ,'`|@','2') 
 ,'`|B','3') 
 ,'`|D','4') 
 ,'`|F','5') 
 ,'`|H','6') 
 ,'`|J','7') 
 ,'`|L','8') 
 ,'`|N','9') 
 ,'{','') 
 ,'!','') 
 ,'"','') 
 ,'@','') 
 ObjectName, 
 convert(varchar(100), case 
	 when TypeID = 1 then 'Folder' 

	 when TypeID = 2 then 'Crystal Report'
	 when TypeID = 260 then 'Web Intelligence Document'
	 when TypeID = 290 then 'Object Package' 
	 when TypeID = 265 then 'Report Instance (TXT)'
	 when TypeID = 262 then 'Webi Report' 
	 when TypeID = 314 then 'Deski Report' 

	 when TypeID = 283 then 'PDF' 
	 when TypeID = 267 then 'Text' 
	 when TypeID = 323 then 'Excel' 
	 when TypeID = 319 then 'Excel (XLS)'
	 when TypeID = 259 then 'Word (DOC)'
	 when TypeID = 279 then 'Report Instance (PDF)'

	 when TypeID = 266 then 'Universe' 
	 when TypeID = 278 then 'Publication' 
	 when TypeID = 299 then 'Connection' 
	 when TypeID = 8  then 'Shortcut' 
	 when TypeID = 20 then 'Groups' 
	 when TypeID = 13 then 'Server' 
	 when TypeID = 16 then 'BO Server' 
	 when TypeID = 21 then 'Event' 
	 when TypeID = 24 then 'License Key' 

	 when TypeID = 19 then 'User type 19' 
	 when TypeID = 18 then 'User type 18' 
	 when TypeID = 47 then 'User type 47' 
	 when TypeID = 48 then 'User type 48' 
 
	 else 'Other' 
 end) Type, 
 * 
 FROM [dbo].[CMS_InfoObjects5] 
 where typeid <> 284 -- appears to be some kind of temporary file
 ) BORepository 

create clustered index #idx_c_results on #results (objectid)
create index #idx_results_parentid on #results (parentid)
create index #idx_results_ownerid on #results (ownerid)

-- Mark the ones that are instances of another object (children of non-folder)
update r
   set r.IsInstance = 1
 from #results r
join #results r2
  on r.ParentID = r2.ObjectID
where r2.TypeID <> 1
  and r.TypeID <> 1




-- Define all the levels of the objects and add them to a lookup table
DECLARE @ObjectLevel INT = 0

select 23 as objectid, 0 as parentid, 0 as ownerid, GETDATE() as lastmodifytime,
		convert(varchar(100), '$') as objectname, convert(int, null) as ChildInstances, convert(datetime, null) as LastInstanceTime,
		1 as typeid, convert(varchar(100), 'Root Folder') as type, @ObjectLevel as objectlevel 
into #objects

while @@rowcount > 0
begin
	;with p as (select parentid, count(*) as instances, max(lastmodifytime) as LastInstanceTime from #results group by parentid)
	update #objects
	   set ChildInstances = p.instances,
		   LastInstanceTime = p.LastInstanceTime
	  from #objects o
	  join p
	    on o.objectid = p.ParentID
	 where o.ChildInstances is null
	   and o.typeid <> 1
	
	SET @ObjectLevel = @ObjectLevel + 1

	insert into #objects
	select r.objectid, r.parentid, r.ownerid, r.lastmodifytime, r.objectname, null, null, r.typeid, r.type, @ObjectLevel as objectlevel
	from #results r
	join #objects o
      on r.parentid = o.objectid
	 and o.typeid = 1
	left
	join #objects o2
	  on r.objectid = o2.objectid
   where o2.objectid is null

end


-- Set up the object paths
select objectid, parentid, lastmodifytime, convert(varchar(4000), '') as ObjectPath,
		objectname, childinstances, lastinstancetime, typeid, type
  into #objectswithpath
  from #objects
 where typeid &lt;&gt; 1

while @@rowcount > 0
BEGIN
	update #objectswithpath
	   set parentid = o.parentid,
			objectpath = o.objectname + '\' + objectpath
	  from #objectswithpath owp
	  join #objects o
	    on owp.parentid = o.objectid
END

select objectpath, objectname, lastmodifytime, ChildInstances, LastInstanceTime, type
 from #objectswithpath
order by ObjectPath, objectname


--select * from #results
--select typeid, type, count(*) from #objects group by typeid, type order by count(*) DESC

/*
select type, case when isnull(ChildInstances,0) = 0 then 0 else 1 end, count(*), max(LastInstanceTime) --objectpath, objectname, lastmodifytime, ChildInstances, LastInstanceTime, type
 from #objectswithpath
 where ObjectPath not like '$\REPORTS OFF LINE\%'
group by type, case when isnull(ChildInstances,0) = 0 then 0 else 1 end
order by type, case when isnull(ChildInstances,0) = 0 then 0 else 1 end
*/

--select * from #objectswithpath where type = 'Word (DOC)'


drop table #objectswithpath
drop table #objects
drop table #results

2 Mar 2016

SQL Agent job fails with QUOTED_IDENTIFIER error

If you have a SQL Agent job that's failing, but the query succeeds when you run it manually in SQL Server Management Studio, it may be a difference between the SQL Server instance's default connection settings and the defaults SSMS uses. In my case, I was inserting from a stored procedure and received the following error:

[SQLSTATE 42000] (Error 1934) INSERT failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.

To view the default connection options for your SQL Server instance, right-click on your instance in SSMS and select "Properties", and then browse to "Connections" - you'll see something like the following list (on my server, none of these are enabled by default for new connections):

Server Defaults

To compare it to the default settings for your installation of SSMS, click on "Tools" and then "Options...", and then browse to "Query Execution" -> "SQL Server" and view the settings under both "Advanced" and "ANSI" (in my case, SSMS was enabling a number of settings by default for my connections that SQL Agent jobs wouldn't have enabled):

SSMS Results ANSI

SSMS Results Advanced

In my case, I just added an explicit "SET QUOTED_IDENTIFIER ON" to the beginning of the script in my SQL Agent job and it resolved the error message. Hope this helps you too!
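
For reference, the change was nothing more than this at the top of the job step (the table and procedure names here are just placeholders):

-- Set the option explicitly so the Agent connection matches what the
-- indexed view / computed column / filtered index requires
SET QUOTED_IDENTIFIER ON

-- Placeholder names - the rest of the job step continues as before
INSERT INTO dbo.YourTargetTable (Col1, Col2)
EXEC dbo.YourStoredProcedure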

23 Nov 2015

Viewing Supplemental Logging status in your Oracle database

Viewing the status of supplemental logging on all of our Oracle tables is done via the all_log_groups view, but it's not in an easy-to-use format: it needs to be pivoted, and it only lists tables that have logging of some kind, which makes it difficult to tell which tables don't have anything. I've done that below to provide an easier view of the status of supplemental logging on all tables, regardless of their current status:

  select t.owner, t.table_name,
         max(case when lg.log_group_type = 'USER LOG GROUP'      then lg.always end ) as "User Log Group",
         max(case when lg.log_group_type = 'ALL COLUMN LOGGING'  then lg.always end ) as "All Column",
         max(case when lg.log_group_type = 'PRIMARY KEY LOGGING' then lg.always end ) as "Primary Key",
         max(case when lg.log_group_type = 'UNIQUE KEY LOGGING'  then lg.always end ) as "Unique Key",
         max(case when lg.log_group_type = 'FOREIGN KEY LOGGING' then lg.always end ) as "Foreign Key"
    from dba_tables t
    left
    join all_log_groups lg
      on t.owner = lg.owner
     and t.table_name = lg.table_name
   where t.owner = 'Table Owner'
   group by t.owner, t.table_name
   order by t.table_name

In the recent spate of Goldengate troubleshooting, I've had to check these values a number of times, and I hope this query is useful to others (since I couldn't find a good example online that made this information easy to view).

Please let me know below if you have any issues!

22 May 2015

Correcting OGG-01733 in Goldengate – Trail file header file size mismatch

Warning - Oracle/Goldengate support will probably get mad at you if you try this. It worked great for me, but they recommended we reload the data from scratch, so that's probably what they'd recommend for you too. Just know that this is 100% unofficial and unsupported 🙂

Quick summary:

We have Goldengate replication from Oracle 11.2 on Linux to MSSQL 2012 on Windows, and we ran into an OGG-01733 error, "Trail file header file size value {X} for trail file {Y} differs from actual size of the file ({Z})", which caused an ABEND we were stuck at. We opened a ticket with Oracle support, and after a week with very little response, they concluded that I should just perform a new initial load on the destination - since the trail files had already been pumped to the destination server and removed from the extract server, they were unable to troubleshoot further.

It turns out the work-around was to open the trail file in a hex editor and manually update the trail file header to make it think it was supposed to be the size it actually was. After saving the file again and resuming replication, it continued on its merry way and applied the transactions without another complaint.

Steps to resolve this error message:

  1. Make a backup of your trail file - you know, since you're editing it and might want a second shot.
  2. Open the report file and make a note of the size the file is currently ("Z") and the size it's supposed to be ("X"). I'll refer to those as X and Z further down.
  3. Use a decimal-to-hex converter (like this one) to convert both of these values to their hex equivalents (now I'll call them "HX" and "HZ"), or see the T-SQL sketch after this list
  4. Load up the trail file in your favorite hex editing tool - I like using Notepad++ in combination with the HEX-editor plug-in (once the file is loaded, select "HEX-Editor" from the plug-ins menu, and then select "View in Hex")
  5. Perform a search (if you're using Notepad++, ensure the data type is set to "Hexadecimal") for your "HX" value - the size the file thinks it should be. However, you need to search for an even number of digits - if your hex value is an odd number of digits, either drop the leftmost (largest) one or add a zero to the left (I dropped a digit):
    Goldengate - Hex Editor

    • Side note: You can see that my trail file size isn't too far into the file - under 300 bytes from the beginning. However, since it's stored in hex, it's not easily viewable as text (though you will see some file path and server version information if you look to the right where the ASCII is displayed). Also, in my image, the file size is preceded by quite a few zeroes - my trail files are set to 100MB, but it appears Goldengate supports up to 4GB trail files with the 32 bits allotted in the header. Back to fixing this...
  6. CAREFULLY edit the HX value you've found to be the new HZ value - the actual size of the file. In particular, don't move any of the bytes around or add/remove anything; just change the values you need so that the file size stays stored in the same location.
  7. Save the file and close it.
  8. Resume replication right where you left off (assuming you made a backup and edited the original trail file) - it should check the new file size, see the transaction that was previously beyond the file size limit, and then apply it and move on!
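
If you'd rather not rely on an online converter for step 3, the same conversion can be done in T-SQL - a quick sketch (FORMAT requires SQL Server 2012 or later, and the sample value is just the file size from the directory listing below, not your own X or Z):

-- Sketch: convert a decimal file size to its hex equivalent
DECLARE @Size BIGINT = 72017446
SELECT FORMAT(@Size, 'X') AS SizeInHex   -- returns 44AE626 for this sample value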

Conclusion?

What causes this behavior? I can't find any clear documentation or explanation at all - when searching for this error, the only meaningful links I can find are either in a language I can't read (with basic details and a dire warning to call Oracle support immediately) or a case where somebody receives it on an initial load and the forum's advice is "your table is too small to mess with this - just export it to CSV and reload it that way".

When we looked at the list of trail files, we noticed something particularly odd - the trail files near the offending file all had ascending "last modified" timestamps, as you'd expect, but this file was actually out of order:

05/01/2015  03:45 AM        99,999,462 SV002351
05/01/2015  04:38 AM        99,999,802 SV002352
05/01/2015  08:13 AM        99,999,367 SV002353
05/01/2015  10:09 AM        99,999,936 SV002354
05/01/2015  11:05 AM        99,999,630 SV002355
                                                 <-- File should be right here
05/01/2015  11:41 AM               891 SV002357
05/01/2015  11:47 AM        99,999,462 SV002358
05/01/2015  11:50 AM        99,999,280 SV002359
05/01/2015  11:58 AM        99,999,314 SV002360
05/01/2015  12:09 PM        99,999,910 SV002361
05/01/2015  12:40 PM        99,998,043 SV002362
05/01/2015  01:16 PM        99,999,754 SV002363
05/01/2015  01:34 PM        72,017,446 SV002356  <-- But it's down here
05/01/2015  02:05 PM        99,999,516 SV002364
05/01/2015  02:40 PM        99,999,966 SV002365

The file contained two additional transactions between the size stated in the header and the actual end of the file, and they were both time-stamped correctly to have been located in that file (they were both stamped 10:34 AM, along with the transactions earlier in the file, and since the server is an hour off because of time zone, they were in the right file).

The fact that it's smaller than the others, and that it's followed by a file containing no transactions (just a header), led me to believe the file was cut short by a network interruption of some kind. We're using a local extract and a separate pump, as we're advised to do, but the connection still drops from time to time. In this case, I can only imagine it was in the middle of committing something, was interrupted, and the transactions were somehow suspended and then added to the file later. I can't imagine why, but when they're added, the file header isn't updated.

Hopefully this explanation and work-around have helped somebody else - we pulled our hair out for a week going back and forth with Oracle support and scouring the internet (unsuccessfully) for any relevant information - in the end, going rogue and editing the file was the only way (short of a complete reload) to get things moving again!

23 Jan 2015

View SQL Server table updates per second

When trying to gauge the level of database activity, you can use SQL Profiler to view the type and volume of transactions in motion at any given time, and to view the overall level of database IO, but you can't use it to directly tell which database tables are being updated.

However, there's a handy dynamic management view called sys.dm_db_index_usage_stats that tells you the number of rows that have been updated in each database index since the instance was last restarted (or since the table/index was created, if that happened more recently):

SELECT *
FROM sys.dm_db_index_usage_stats

The view also has some additional information on index usage, including the number of scans, seeks, and lookups performed on each index - super helpful information if you're looking for unused indexes or which objects are heaviest-hit. If you look at indexes 0 and 1 (zero is the table heap, 1 is the clustered index), you'll see activity on the underlying table data itself.

I needed to see the row updates per second for every table in the database, so rather than run that select over and over (and compare the results), I wrote a quick script to do the comparison repeatedly for me:

SET NOCOUNT ON

-- Remove the working table if it already exists
-- so it doesn't get in the way
IF OBJECT_ID('tempdb..#TableActivity_After') IS NOT NULL
DROP TABLE #TableActivity_After


-- Collect our working data
SELECT object_name(us.object_id) as TableName,
		user_updates as UpdatedRows,
		last_user_update as LastUpdateTime
INTO #TableActivity_After
from sys.dm_db_index_usage_stats us
join sys.indexes si
	on us.object_id = si.object_id
	and us.index_id = si.index_id
where database_id = db_id()
and user_seeks + user_scans + user_lookups + user_updates > 0
and si.index_id in (0,1)
order by object_name(us.object_id)


-- Figure out if we're running it the first time or again
-- Put the data into the correct tables 
IF OBJECT_ID('tempdb..#TableActivity_Before') IS NULL
BEGIN
	-- First time it's being run - stage the existing data
	PRINT 'Initial table usage collected - execute again for changes'

END
ELSE
BEGIN
	-- Running script a subsequent time
	-- Compare this set of data to our last set

	-- See how long it's been since we ran this script last
	-- Or at least since last change in any table in the database
   DECLARE @SecondsSince DECIMAL(10,2)
	SELECT @SecondsSince = CONVERT(FLOAT, DATEDIFF(ms, MAX(LastUpdateTime ), GETDATE()))/1000
	  FROM #TableActivity_BEFORE

	SELECT @SecondsSince as 'Seconds since last execution'

	-- Do actual table comparison and give results
	SELECT a.TableName,
		   a.updatedrows - isnull(b.UpdatedRows,0) as RowsUpdated,
		  CONVERT(INT, (a.updatedrows - isnull(b.UpdatedRows,0)) / @SecondsSince) as RowsPerSecond
	 FROM #TableActivity_After a
	 LEFT
	 JOIN #TableActivity_Before b
	   ON b.TableName = a.TableName
    WHERE a.updatedrows - isnull(b.UpdatedRows,0) > 0
	ORDER BY RowsUpdated DESC

	
END

-- Swap the tables so the AFTER table becomes the new BEFORE
-- Then clean up AFTER table since we'll get a new one next time
IF OBJECT_ID('tempdb..#TableActivity_Before') IS NOT NULL
DROP TABLE #TableActivity_Before

SELECT *
  INTO #TableActivity_Before
  FROM #TableActivity_After

DROP TABLE #TableActivity_After

Running that script the first time will grab a snapshot of table activity. Running it again will tell you what has changed since the first run, and each subsequent run will keep doing so (updating the "before" image each time, so you're only seeing the most recent database activity).

If you wanted to see activity on all database indexes, you could update the query at the top to show the index name and remove the "si.index_id in (0,1)" filter, and you'd see all the index details - something like the sketch below.
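
The collection query at the top would then look something like this - a sketch of the variation, not a tested drop-in replacement (the comparison join later in the script would also need the index name added to it):

-- Sketch: collect activity for every index, including the index name
-- (si.name is NULL for heaps, i.e. index_id 0)
SELECT object_name(us.object_id) AS TableName,
       si.name                   AS IndexName,
       us.user_updates           AS UpdatedRows,
       us.last_user_update       AS LastUpdateTime
  INTO #TableActivity_After
  FROM sys.dm_db_index_usage_stats us
  JOIN sys.indexes si
    ON us.object_id = si.object_id
   AND us.index_id = si.index_id
 WHERE us.database_id = db_id()
   AND us.user_seeks + us.user_scans + us.user_lookups + us.user_updates > 0
 ORDER BY TableName, IndexName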

I hope this is helpful - if you have any feedback or would like to see something added, please feel free to leave a comment below!

Download the full script here

14 Jan 2015

Querying Active Directory from SQL Server

SQL Server provides some pretty flexible integration with Active Directory through the ADSI Linked Server provider, something that's present by default when you install SQL Server. If you've never used it before, it allows you to connect to a domain controller and query AD the same way you'd query any other linked server. For example, it gives you the option to:

  • Identify logins to SQL Servers or databases that support financial applications that no longer have a matching AD account (either direct integrated logins, or SQL logins or rows in a "User" table that were set up to match an AD login)
  • Kick off alerts to provision the user in various systems based on their AD group membership
  • Automatically trigger an action when a new account appears in active directory (for example, we auto-provision security badges and send an email alert to our head of security to assign the appropriate rights)

While much of this could also be done from Powershell, we use the SQL Server Agent to manage many of our scheduled jobs (because it's so handy to have the agent remotely accessible), and sometimes we just need data from AD in a query. To support a number of processes we have in place, we run a synchronization job every so often throughout the day that pulls about two dozen fields for all users and synchronizes them into a table if anything has changed.

Setting up the linked server itself is pretty straightforward (courtesy of http://community.spiceworks.com/how_to/show/27494-create-a-sql-linked-server-to-adsi):

  1. Create the linked server itself
  2. Set the security context (if you want to query AD as something other than the SQL Server Service account - by default, all domain users can do this and it's only required if the domain is remote or if, for some reason, your SQL Service account's AD rights have been restricted, like if you're running as "LOCAL SERVICE")
  3. Enable OPENQUERY (Ad Hoc Distributed Queries)
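
For reference, a minimal sketch of those three steps might look like the following - the linked server name, account, and password are placeholders, and step 2 is only needed in the situations described above:

-- 1. Create the ADSI linked server (the name is a placeholder)
EXEC sp_addlinkedserver
     @server     = 'ADLinkedServerName',
     @srvproduct = 'Active Directory Service Interfaces',
     @provider   = 'ADsDSOObject',
     @datasrc    = 'adsdatasource'

-- 2. (Optional) query AD as a specific account instead of the SQL Server service account
EXEC sp_addlinkedsrvlogin
     @rmtsrvname  = 'ADLinkedServerName',
     @useself     = 'False',
     @locallogin  = NULL,
     @rmtuser     = 'CORP\SomeServiceAccount',   -- placeholder
     @rmtpassword = 'SomePassword'               -- placeholder

-- 3. Enable ad hoc distributed queries
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'Ad Hoc Distributed Queries', 1
RECONFIGURE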

You'll notice that setting up the linked server itself doesn't actually specify where Active Directory is located or what domain/forest you'll be querying - that's done in the query itself. In each query, you'll need to specify the FQDN (fully-qualified domain name) of the domain (or OU) you're querying. For example, we'd get all users from a domain by issuing the following query (in this example, "ADLinkedServerName" is the linked server we just created, and our domain is "corp.mycompany.local"):

SELECT EmployeeNumber, Name AS FullName, givenName as FirstName, sn as LastName,
L AS Location, samAccountName as ADAccount
FROM OPENQUERY(ADLinkedServerName,'SELECT Name, L, givenName, sn,
EmployeeNumber, EmployeeID,samAccountName,createtimestamp
FROM ''LDAP://OU=Users,DC=corp,DC=mycompany,DC=local''
WHERE objectClass =''user''') ad

This query will search that OU ("Users", in this case) and everything below it, so changing the FROM to "LDAP://DC=corp,DC=mycompany,DC=local" would fetch the entire directory (all of the "user" objects), regardless of what folder they appear in - if your directory puts users in another OU (like "Associates", for example), you should adjust the query accordingly.

For column names, you can pull any AD properties at all that you’re looking for – even custom ones that aren't part of a standard AD configuration. To get an easy list of AD properties to choose from, I like using ADSIEDIT (part of Microsoft’s Remote Server Administration Tools - download RSAT for Windows 7 or RSAT for Windows 8.1) – just drill down all the way down to an object, like a user, right click on them and select “Properties”, and you can see a list of all the properties on that account. If you’ve got Domain Admin rights, this tool can be used to modify these values too, but for querying, you only need to be a domain user or somebody who has rights to browse AD. Make a note of the names of particular properties that you're interested in - also note that AD queries are case-sensitive, so you'll need to note the casing of these properties as well.

One potential gotcha that I've run into is that the maximum number of results AD will return in a single query can be set as part of domain policy - by default it's 1000 records at once, and it can be configured by setting or adjusting the "PageSize" property on your domain controllers (see https://support.microsoft.com/kb/315071/en-us). There's also a "MaxResultSetSize" property that's set to 256KB by default, but I've never hit it - unless you're pulling every single property back, you'd likely hit the PageSize row limit before you hit the result-size byte limit, but remember that both are there. If you do hit the AD result count limit, it will return the rows up to the limit, but then execution stops with a kind of cryptic error:

Msg 7330, Level 16, State 2, Line 2
Cannot fetch a row from OLE DB provider "ADsDSOObject" for linked server "YOURDOMAIN".

If your domain is larger than the PageSize limit, you'll need to cut your query into multiple return sets of data so you don't exceed the limit on any single query. Since our domain contains about 2400 users, we were able to do it in two queries, broken up like this:

SELECT samAccountName
  FROM OPENQUERY(ADLinkedServerName,'SELECT samAccountName
                                       FROM ''LDAP://OU=Users,DC=corp,DC=mycompany,DC=local''
                                      WHERE objectClass =''user''
                                        AND givenName<''L''') as c
UNION ALL
SELECT samAccountName
  FROM OPENQUERY(ADLinkedServerName,'SELECT samAccountName
                                       FROM ''LDAP://OU=Users,DC=corp,DC=mycompany,DC=local''
                                      WHERE objectClass =''user''
                                        AND givenName>=''L''') as c

By dividing the names on L, this cut the directory roughly in half - if yours was larger, you could divide it by querying each OU separately, or by looping through letters of the alphabet, or whatever makes sense in your setting. You could even do something dynamic like pull as many records as you can, then grab the value from the last record you pulled and use it as the baseline to pull the next set as far as you can, and then repeat until you run out of records. Linked servers don’t allow you to dynamically assemble your query at run-time – it has to be hard-coded in the query – but there are some ways around that (like building your OPENQUERY as a string and then executing it via sp_executesql, for example).
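
A rough sketch of that last approach - OPENQUERY only accepts a literal, so the whole statement gets assembled as a string and executed with sp_executesql (the boundary letter and linked server name are placeholders):

-- Sketch: build the OPENQUERY statement dynamically around a boundary value
DECLARE @Boundary NVARCHAR(10) = N'L',
        @sql      NVARCHAR(MAX)

SET @sql = N'SELECT samAccountName
  FROM OPENQUERY(ADLinkedServerName,''SELECT samAccountName
      FROM ''''LDAP://OU=Users,DC=corp,DC=mycompany,DC=local''''
     WHERE objectClass =''''user''''
       AND givenName<''''{Boundary}'''''') as c'

SET @sql = REPLACE(@sql, N'{Boundary}', @Boundary)

EXEC sp_executesql @sql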

Once you have your AD records stored in a temp table, you can identify new/changed records and merge them into an existing SQL table using an INSERT/UPDATE/DELETE or MERGE statement (a rough sketch follows), or possibly trigger notifications or some other business process.
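
A minimal MERGE sketch, assuming the AD results landed in a temp table (#ADPull) and are being synchronized into a permanent table - the table and column names are placeholders, though the columns echo the earlier query:

-- Sketch: synchronize the AD pull into a permanent table
MERGE dbo.ADUsers AS target
USING #ADPull AS source
   ON target.ADAccount = source.ADAccount
WHEN MATCHED AND (target.FullName <> source.FullName OR target.Location <> source.Location)
     THEN UPDATE SET target.FullName = source.FullName,
                     target.Location = source.Location
WHEN NOT MATCHED BY TARGET
     THEN INSERT (ADAccount, FullName, Location)
          VALUES (source.ADAccount, source.FullName, source.Location)
WHEN NOT MATCHED BY SOURCE
     THEN DELETE;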

I hope this is helpful - if you'd like some more detail, please leave a comment and I'm happy to elaborate where it's necessary!

2 Jan 2015

Removing expired/unused SSRS subscriptions

SQL Reporting Services doesn't do a very good job keeping the SQL Agent clean by removing expired or otherwise unusable subscriptions from the job list. To deal with this, we created a script that pulls some details about these old subscriptions, including the report responsible, the last run time and status, and the user who originally scheduled it. If you notice your SQL Agent job list getting excessively long, you can use this query to identify the culprit reports and owners, and then either notify them or remove the old subscriptions manually yourself (run this on the server with your SSRS databases):

  select c.Name as ReportName,
         s.EventType,
         s.Description as SubscriptionDescription,
         s.LastStatus as LastSubscriptionStatus,
         s.LastRunTime SubscriptionLastRunTime,
         case
            when recurrencetype = 1 then 'One Time'
            when recurrencetype = 2 then 'Hourly'
            when recurrencetype = 4 then 'Daily'
            when recurrencetype = 5 then 'Monthly' 
            when recurrencetype = 6 then 'Month Week' 
            else 'Other'
         end as RecurranceType,
         s.DeliveryExtension,
         u.UserName as SubscriptionSetUpBy,
         s.ModifiedDate as SubscriptionLastModifiedDate
    from [ReportServer].[dbo].[Subscriptions] s
    join [ReportServer].[dbo].[Catalog] c
      on c.ItemID = s.Report_OID
    join [ReportServer].[dbo].[Users] u
      on u.UserID = s.OwnerID
    join [ReportServer].[dbo].[reportschedule] rs
      on c.itemid = rs.reportid 
     and s.subscriptionid = rs.subscriptionid
    join [ReportServer].[dbo].[schedule] sch
      on rs.scheduleid = sch.scheduleid
   where s.EventType <> 'RefreshCache'
     and s.LastRunTime < dateadd(m, -3, getdate())
order by c.name

There are a number of similar scripts out there that pull much of this information together, but there wasn't one that collected all the details we were looking for in one place. From here, you can deal with the subscriptions as you see fit.

Note that you can just remove the old subscriptions by brute force if you'd prefer, and SSRS will clean up the orphaned SQL jobs, but I've preferred to review the list and notify users as we've never had too much volume to deal with. If you want to just delete them straight away, you can do so here:

DELETE ReportServer.dbo.Subscriptions
WHERE InactiveFlags != 0
	OR LastRunTime < dateadd(m, -3, getdate())

22 Jul 2014

Exporting from SQL Server to CSV with column names

SQL Server can easily export to a CSV file, but it exports just the data, without the column names included. In order to export the column names as well, you need to perform two exports - one with the column names and one with the data - and then combine the two files into a single file. The script below does exactly that: it populates a global temp table with your query results, exports the headers and the data separately with BCP, and then combines them from the command line.

You could do this using any query you want - native SQL, a linked server, a stored procedure, or anything else - and the results will export the same way once they're in the temp table. Since it builds the list of column names dynamically as well, you only need to change out the query being executed and set the export location - no other configuration is necessary.

-- Declare the variables
DECLARE @CMD VARCHAR(4000),
        @DelCMD VARCHAR(4000),
        @HEADERCMD VARCHAR(4000),
        @Combine VARCHAR(4000),
        @Path VARCHAR(4000),
        @COLUMNS VARCHAR(4000)

-- Set values as appropriate
    SET @COLUMNS = ''
    SET @Path = '\\servername\share\outputpath'

-- Set up the external commands and queries we'll use through xp_cmdshell
-- Note that they won't execute until we populate the temp tables they refer to
    SET @CMD = 'bcp "select * from ##OutputTable" queryout "' + @Path + '\Temp_RawData.csv" -S ' + @@SERVERNAME + ' -T -t , -c'
    SET @HEADERCMD = 'bcp "SELECT * from ##cols" queryout "' + @Path + '\Temp_Headers.csv" -S ' + @@SERVERNAME + ' -T -t , -c'
    SET @Combine = 'copy "' + @Path + '\Temp_Headers.csv" + "' + @Path + '\Temp_RawData.csv" "' + @Path + '\MyCombinedFile.csv"'
    SET @DelCMD = 'del "' + @Path + '\Temp_*.csv"'

-- Create and populate our temp table with the query results
SELECT *
  INTO ##OutputTable
  FROM YourSourceTable

-- Generate a list of columns	
 SELECT @COLUMNS = @COLUMNS + c.name + ','
   from tempdb..syscolumns c
   join tempdb..sysobjects t
     on c.id = t.id
  where t.name like '##OutputTable%'
  order by colid
  
  SELECT @COLUMNS as Cols INTO ##Cols
		
-- Run the two export queries - first for the header, then for the data
exec xp_cmdshell @HEADERCMD
exec xp_cmdshell @CMD

-- Combine the two files into a single file
exec xp_cmdshell @Combine

-- Clean up the two temp files we created
exec xp_cmdshell @DelCMD

-- Clean up our temp tables
drop table ##cols
drop table ##OutputTable

If you have any suggestions or run into any issues, please let me know!