SQL Server can easily export to a CSV file, but it exports just the data, without the column names included. In order to export the column names as well, you need to actually perform two exports - one with the column names, and one with the data - and then combine the two files into a single file. The script below populates a global temp table with your query results, exports the headers and the data separately via bcp, and then combines the two files into one.
You could do this using any query you want - native SQL, a linked server, a stored procedure, or anything else - and the results will export the same way once they're in the temp table. Since it builds the list of column names dynamically as well, you only need to change out the query being executed and set the export location - no other configuration is necessary.
-- Declare the variables
DECLARE @CMD VARCHAR(4000),
        @DelCMD VARCHAR(4000),
        @HEADERCMD VARCHAR(4000),
        @Combine VARCHAR(4000),
        @Path VARCHAR(4000),
        @COLUMNS VARCHAR(4000)

-- Set values as appropriate
SET @COLUMNS = ''
SET @Path = '\\servername\share\outputpath'

-- Set up the external commands and queries we'll use through xp_cmdshell
-- Note that they won't execute until we populate the temp tables they refer to
SET @CMD = 'bcp "select * from ##OutputTable" queryout "' + @Path + '\Temp_RawData.csv" -S ' + @@SERVERNAME + ' -T -t , -c'
SET @HEADERCMD = 'bcp "SELECT * from ##cols" queryout "' + @Path + '\Temp_Headers.csv" -S ' + @@SERVERNAME + ' -T -t , -c'
SET @Combine = 'copy "' + @Path + '\Temp_Headers.csv" + "' + @Path + '\Temp_RawData.csv" "' + @Path + '\MyCombinedFile.csv"'
SET @DelCMD = 'del "' + @Path + '\Temp_*.csv"'

-- Create and populate our temp table with the query results
SELECT *
  INTO ##OutputTable
  FROM YourSourceTable

-- Generate a comma-separated list of the column names
SELECT @COLUMNS = @COLUMNS + c.name + ','
  FROM tempdb..syscolumns c
  JOIN tempdb..sysobjects t ON c.id = t.id
 WHERE t.name LIKE '##OutputTable%'
 ORDER BY c.colid

SELECT @COLUMNS AS Cols
  INTO ##Cols

-- Run the two export queries - first for the header, then for the data
EXEC xp_cmdshell @HEADERCMD
EXEC xp_cmdshell @CMD

-- Combine the two files into a single file
EXEC xp_cmdshell @Combine

-- Clean up the two temp files we created
EXEC xp_cmdshell @DelCMD

-- Clean up our temp tables
DROP TABLE ##Cols
DROP TABLE ##OutputTable
If you have any suggestions or run into any issues, please let me know!
Decoding bitwise masks can be a bit confusing, so I wanted to share a couple of T-SQL functions I've created to make them easier to deal with. If you'd like to read more about bitmasks and applying them in T-SQL, you can read about it at SQL Fool: T-SQL Bitwise Operators.
The first will return the value in a given bit position of an integer - it accepts two parameters (the lookup value and the bit position) and returns a bit for the value in that position. Note that it starts with position zero, so make sure you're counting correctly:
CREATE FUNCTION dbo.BitwisePosition(
    @value BIGINT,
    @PositionNumber INT
)
RETURNS BIT
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @Result BIT
    DECLARE @Mask BIGINT
    SET @Mask = POWER(2, @PositionNumber)
    SET @Result = (CASE @value & @Mask WHEN @Mask THEN 1 ELSE 0 END)
    RETURN @Result
END
GO
The second function returns a bit (true/false) based on whether a provided bitmask applies to a reference value:
CREATE FUNCTION dbo.BitwiseMask(
    @value BIGINT,
    @Mask BIGINT
)
RETURNS BIT
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @Result BIT
    SET @Result = (CASE @value & @Mask WHEN @Mask THEN 1 ELSE 0 END)
    RETURN @Result
END
GO
Don't forget to grant them permissions:
GRANT EXECUTE ON dbo.BitwiseMask TO PUBLIC
GRANT EXECUTE ON dbo.BitwisePosition TO PUBLIC
To use these functions, you'd call them as in these examples:
-- Value:    1110001000
-- Position: 9876543210
-- Checking positions 7, 4, and 0 should return 1, 0, 0
select dbo.BitwisePosition(904, 7),
       dbo.BitwisePosition(904, 4),
       dbo.BitwisePosition(904, 0)

-- Value:   1110001000 = 904
-- Bitmask: 0010001000 = 136
-- Returns true since the mask "fits inside" the value
select dbo.BitwiseMask(904, 136)

-- Value:   1110001000 = 904
-- Bitmask: 0010001001 = 137
-- Returns false since the mask has a bit "outside" the value
select dbo.BitwiseMask(904, 137)
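These are also handy against SQL Server's own bitmasked values - for example, @@OPTIONS, where the documented bit value 512 (position 9) indicates SET NOCOUNT is on for the current session:

-- Both return 1 if NOCOUNT is ON for the current session
-- (512 = 2^9, the documented NOCOUNT bit in @@OPTIONS)
SELECT dbo.BitwiseMask(@@OPTIONS, 512)    AS NoCountIsOn,
       dbo.BitwisePosition(@@OPTIONS, 9) AS NoCountIsOn_ByPosition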
I hope you find them helpful!
We use Oracle GoldenGate (expensive and probably overkill for Oracle->MSSQL, but good at what it does) to replicate data from an Oracle database into a SQL Server. However, I got an alert the other day that replication had stopped, and when I checked the status of replication, all the REPs we had set up were in status "Starting...", but none were actually doing anything.
Attempting to stop them got the following error:
GGSCI (GGSERVER) 68> stop rep MYREP

Sending STOP request to REPLICAT MYREP ...

ERROR: opening port for REPLICAT MYREP (TCP/IP error: Connection refused).
Stopping/Starting the manager service or rebooting the PC didn't help either - they still said "Starting" and were unresponsive. Even stranger, deleting and recreating the REP gave the same result - before I even attempted to start the REP for the first time, it said "Starting", and an attempt to start it gave me "Process is starting up - try again later".
The cause was the REP process status file, located in the dirpcs folder under the GoldenGate root - there should be a file there for each REP that's currently running, giving details about its status. When a REP stops, this file is deleted. Since none of the current REPs were doing anything (they were all sitting at the end of the previous trail file), they should have been stopped. I deleted the PCR files for the affected REP streams, and the manager then reported "STOPPED" - at that point, I was able to start up each REP again.
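For illustration, the cleanup amounted to something like this from a command prompt (the install root and REP name here are hypothetical - use your own GoldenGate root and the names of the stuck REPs):

REM Hypothetical paths - adjust to your GoldenGate install root and REP names
cd /d C:\GoldenGate\dirpcs
REM Each running REP has a .pcr status file here; delete the one for the stuck REP
del MYREP.pcr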
I'm not sure how they got that way, but once started again, they all worked without issue. I hope this saves you the troubleshooting time of hunting down these files!
Anybody who has the rights to run a report in SSRS can set up a subscription, delivering the report to their own email address. However, if you try to deliver the report as an email to somebody else, you might be greeted with this unclear error message:
A subscription delivery error has occurred. (rsDeliveryError)
One of the extension parameters is not valid for the following reason: The account you are using does not have administrator privileges. A subscription cannot be created for firstname.lastname@example.org. (rsInvalidExtensionParameter)
I was setting up a subscription in our SharePoint-integrated SQL Reporting Services site, and was attempting to send the report to our shared group email address. I could set it up to deliver to me and I could forward it, but it wasn't letting me send the email to anybody else's email address, either inside or outside the organization.
Solution: Grant your user the "Create Alerts" and "Manage Alerts" permissions
I received this error because I lacked the "Create Alerts" and "Manage Alerts" permissions on the report in question (or on the folder, or higher up). The error says you need to be an administrator, but doesn't really tell you what's wrong with your request - it's really complaining that you're delivering an alert to somebody else. Granting those rights to your user account (for that report, or to an object above it so they propagate down to that report) allows you to send the report to any email address you want.
I hope this helps!
Computed columns can be a great tool if you want to add a value to a table that's dependent on other columns in the table, and you don't want to maintain it on its own every time the source columns change. In a view, these can provide a single, consistent calculation to the end users - say, a standard calculation that every consumer of the view gets the same way, without having to reimplement it themselves. To see what happens when such a calculation fails, let's set up a view with an error in it:
-- Set up a view with calculation errors in it
-- The first calculation will always work
-- The second gives a divide by zero error on every 10th row
CREATE VIEW SourceObject
AS
SELECT object_id AS KeyColumn1,
       column_id AS KeyColumn2,
       object_id - column_id AS CalcColumn1,
       (object_id / (column_id % 10)) AS CalcColumn2
  FROM msdb.sys.columns
Now that it's set up, we can try selecting all the rows from the view we just created, and we'll see about 100 rows output, and then the query will stop with a "Divide by zero" error:
SELECT * FROM SourceObject
The calculation in this query is pretty straightforward, and you can see which rows are causing a problem (those where column_id is divisible by 10), but what if it were more complicated? The problem is that SQL Server doesn't display the row that had the problem - it stops on the row before it, so finding the row with the error is a bit more difficult. What if there were multiple columns involved in the calculation, or different combinations of values that could result in an error? Tracking down the rows causing an error can be difficult - you have to find all the possible conditions that could cause an error, and then query for each of them.
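In this contrived case, the failing condition is known up front, so a manual hunt would work:

-- Works here only because we already know the failing condition
SELECT object_id, column_id
  FROM msdb.sys.columns
 WHERE column_id % 10 = 0

With a realistic calculation, though, you rarely know every failing condition in advance.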
This script will allow you to find all the rows in a view with a calculation error, all at once. It uses a cursor to pull the rows from the view one at a time, test the calculation, and then write any errors it finds to a table where you can see the rows that are causing problems. Using a CURSOR generally isn't ideal, but in this case, it's the only way to react to a bad calculation on a row-by-row basis and deal with it.
The script expects two key values from your view - they're called KeyColumn1 and KeyColumn2 - and you can modify the script to name them whatever you want, or use just a single value if that makes more sense in your scenario. It also supports two computed columns - CalcColumn1 and CalcColumn2 - though again, it could be changed to check just a single column.
-- Set up variables
DECLARE @KeyColumn1 INT,
        @KeyColumn2 INT,
        @CalcColumn1 INT,
        @CalcColumn2 INT
DECLARE @CurrentRow BIGINT
SET @CurrentRow = 1

-- Set up a place to hold key values for rows that work
SELECT TOP 0 KeyColumn1, KeyColumn2
  INTO #WorkingRows
  FROM SourceObject

-- Set up a place to hold errors for rows that don't work
CREATE TABLE #ErrorRows (
    RowNumber BIGINT,
    KeyColumn1 INT,
    KeyColumn2 INT,
    [ERROR_NUMBER] INT,
    [ERROR_MESSAGE] NVARCHAR(4000)
)

-- Begin loop to look through rows in the view
DECLARE cur INSENSITIVE CURSOR FOR
SELECT KeyColumn1, KeyColumn2
  FROM SourceObject
 ORDER BY KeyColumn1, KeyColumn2

OPEN cur
FETCH NEXT FROM cur INTO @KeyColumn1, @KeyColumn2

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        -- Try to select the calculated columns
        -- If there's an error, it will jump to the CATCH block
        SELECT @CalcColumn1 = CalcColumn1,
               @CalcColumn2 = CalcColumn2
          FROM SourceObject
         WHERE KeyColumn1 = @KeyColumn1
           AND KeyColumn2 = @KeyColumn2

        -- This lookup succeeded
        INSERT INTO #WorkingRows
        SELECT @KeyColumn1, @KeyColumn2
    END TRY
    BEGIN CATCH
        -- The lookup failed - save the details
        INSERT INTO #ErrorRows
        SELECT @CurrentRow, @KeyColumn1, @KeyColumn2, ERROR_NUMBER(), ERROR_MESSAGE()
    END CATCH

    SET @CurrentRow = @CurrentRow + 1
    FETCH NEXT FROM cur INTO @KeyColumn1, @KeyColumn2
END

-- Show the key columns of rows with errors
SELECT * FROM #ErrorRows

-- Show the key columns of working rows
SELECT * FROM #WorkingRows

-- Clean things up
CLOSE cur
DEALLOCATE cur
DROP TABLE #ErrorRows
DROP TABLE #WorkingRows
At the end, you'll have two tables with results in them - #ErrorRows, which contains the key values for rows with errors in them, as well as details about the error message, and #WorkingRows, which contains the key values for all of the working rows from the view.
Note: I could just as easily have set up a table with a computed column in it that causes the same problem. You'd be unable to select the entire table without an error, and hunting down the row with the error would be just as painful. The script to find the errors is the same, but here's an example of a table that has a computed column with this problem:
-- Set up a table with a list of numbers in it
SELECT object_id AS KeyColumn1,
       RANK() OVER (PARTITION BY 0 ORDER BY NEWID()) AS KeyColumn2
  INTO SourceObject
  FROM msdb.sys.columns

-- Add two calculations to the table
-- The first will always work
-- The second will give a "Divide by zero" every 100 rows
ALTER TABLE SourceObject ADD CalcColumn1 AS (KeyColumn1 - KeyColumn2)
ALTER TABLE SourceObject ADD CalcColumn2 AS (KeyColumn1 / (KeyColumn2 % 100))

-- Note that you can't add a persisted computed column to a table
-- if there's a calculation error in any of the rows, so this
-- command will fail with a "Divide by zero" error
ALTER TABLE SourceObject ADD CalcColumn3 AS (KeyColumn1 / (KeyColumn2 % 100)) PERSISTED
I recently stumbled across a great system view, sys.dm_db_persisted_sku_features, which identifies any Enterprise features in use in the current database - features that would prevent you from moving that database to a Standard Edition instance. Unfortunately, it first appeared in SQL 2008, and I wanted to run this check on a SQL 2005 system.
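For reference, on SQL 2008 and later the check is just a select against that view:

-- Returns one row per Enterprise-only feature persisted in the current database
-- (no rows means the database can move to a Standard Edition instance)
SELECT feature_name
  FROM sys.dm_db_persisted_sku_features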
There are a number of server-level features of SQL 2005 that require Enterprise Edition, but only two database-level features - partitioning and the VarDecimal storage format. Both are easy to check for, so I put together this quick query to do it:
select *
  from (-- VarDecimal storage format
        select case when max(objectproperty(object_id, N'TableHasVarDecimalStorageFormat')) = 0
                    then ''
                    else 'VarDecimal Storage'
               end as feature_name
          from sys.objects
        UNION ALL
        -- Partitioning
        select case when max(partition_number) > 1
                    then 'Partitioning'
                    else ''
               end
          from sys.partitions) t
 where feature_name <> ''
On a SQL 2005 server, this query will serve the same purpose that sys.dm_db_persisted_sku_features does on SQL 2008/2012 servers.
In SQL Server, the ISNULL function takes two arguments - the first is a potentially NULL value, and the second is the value to return if the first turns out to be NULL. It's always been this way - for example:
SELECT V1, V2, ISNULL(V1, V2) AS [Result], Descr
  FROM (SELECT 10 AS V1, 20 AS V2, 'Returns first' AS Descr
        UNION
        SELECT 10 AS V1, NULL AS V2, 'Returns first' AS Descr
        UNION
        SELECT NULL AS V1, 20 AS V2, 'Returns second' AS Descr
        UNION
        SELECT NULL AS V1, NULL AS V2, 'Both NULL - return NULL' AS Descr) t
In SQL Server, you get the following results:
V1    V2    Result  Descr
10    20    10      Returns first
10    NULL  10      Returns first
NULL  20    20      Returns second
NULL  NULL  NULL    Both NULL - return NULL
However, SQL Compact implements ISNULL differently, in a way that's not compatible at all - instead, the ISNULL function returns a 0 or 1 based on whether the first parameter is NULL! It doesn't even process the second parameter, but instead of failing with an "Invalid Syntax" error, it happily accepts it, executes, and returns unexpected results. In SQL Compact, you get the following result set:
V1    V2    Result  Descr
10    20    0       Returns first
10    NULL  0       Returns first
NULL  20    1       Returns second
NULL  NULL  1       Both NULL - return NULL
Notice the difference? The ISNULL column is returning a value based only on the first parameter (a "0" if it's not NULL, and a "1" if it is), and if you're not expecting it, it silently mis-calculates any formulas containing that function. If you're relying on the function in any statements that add values together or make decisions based on the results, you're in for some very unexpected behavior (in my case, I noticed totals that were wildly incorrect).
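To see how the totals go wrong, consider a hypothetical query like this (the Orders table and Amount column are made up for illustration):

-- On SQL Server, this sums Amount, treating NULLs as 0
-- On SQL Compact, each ISNULL(Amount, 0) evaluates to 0 or 1,
-- so "Total" silently becomes a count of the NULL Amounts instead
SELECT SUM(ISNULL(Amount, 0)) AS Total
  FROM Orders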
As an alternative, you should be using COALESCE - it accepts multiple parameters and walks down the list until it finds one that's not NULL, returning NULL only if all parameters are NULL. It can be used in place of ISNULL in SQL Server code with exactly the same behavior (i.e. no changes), and as a bonus, it acts the same in SQL Compact, so your formulas will work properly there as well. The example above becomes:
SELECT V1, V2, COALESCE(V1, V2) AS [Result], Descr
  FROM (SELECT 10 AS V1, 20 AS V2, 'Returns first' AS Descr
        UNION
        SELECT 10 AS V1, NULL AS V2, 'Returns first' AS Descr
        UNION
        SELECT NULL AS V1, 20 AS V2, 'Returns second' AS Descr
        UNION
        SELECT NULL AS V1, NULL AS V2, 'Both NULL - return NULL' AS Descr) t
And on both SQL Server and SQL Compact, the result set is the same:
V1    V2    Result  Descr
10    20    10      Returns first
10    NULL  10      Returns first
NULL  20    20      Returns second
NULL  NULL  NULL    Both NULL - return NULL
+1 for consistency!
MORAL of the story:
When executing statements on multiple platforms, ensure the functions you use behave in the same manner on each - and even when the platforms come from the same vendor and are named almost identically, don't assume anything behaves the same way until you've tested it.
When initializing replication to a .NET Compact Framework client on a mobile device, I was receiving an error message when I attempted to start the synchronization:
Initializing the SQL Server Reconciler has failed. Try again.
I had confirmed that SQL Compact web replication was set up correctly, and checking the URL came back as expected. Searching for the error online comes back with a dozen recommendations, but when I traced the replication sync attempt, I saw the following statement executed:
exec sp_helpdistpublisher N'SQLSERVERNAME'
Followed immediately by the error message:
The remote server "SQLSERVERNAME" does not exist, or has not been designated as a valid Publisher, or you may not have permission to see available Publishers.
Sure enough, executing that command in SSMS, logged in as my replication user, gave me the same error message. At some point, I'd changed the user I was using to set up the subscription, and that user didn't have rights to view the publication list on my SQL Server. The fix was pretty easy:
- In SQL Management Studio, right-click the publication
- Select "Properties" and then open the "Publication Access List" tab
- Add the user you're connecting your subscriber with to this list
Here's a shot of the screen where I had to make this change, in case there's any confusion:
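If you'd rather make the same change with T-SQL, the replication stored procedure sp_grant_publication_access does the equivalent of adding a login to the Publication Access List (the publication and login names below are placeholders - substitute your own):

-- Run at the publisher, in the publication database
-- Placeholder names - use your publication and the subscriber's login
EXEC sp_grant_publication_access
     @publication = N'MyPublication',
     @login = N'DOMAIN\ReplicationUser'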