Querying msdb for job history


I recently needed to create an SSRS report showing the status of some jobs over the last 10 days, so I wrote this script, thinking it could come in handy for others as well. The contents should be pretty self-explanatory, except maybe for the RowNum column, which is there to separate each iteration of the job:

WITH cte_jobhistory AS (
		--> Creating a cte only to convert the run_date and run_time columns into a "datetime-friendly" format
		--  so they can be used for filtering
		SELECT 
			RunDateString = SUBSTRING(CAST(run_date as VARCHAR(20)), 1, 4) + '-' +  SUBSTRING(CAST(run_date as VARCHAR(20)), 5, 2) + '-' + SUBSTRING(CAST(run_date as VARCHAR(20)), 7, 2),
			RunTimeString = SUBSTRING(RIGHT('000000' + CAST(run_time as VARCHAR(20)), 6), 1, 2) + ':' + SUBSTRING(RIGHT('000000' + CAST(run_time as VARCHAR(20)), 6), 3, 2) + ':' + SUBSTRING(RIGHT('000000' + CAST(run_time as VARCHAR(20)), 6), 5, 2),
			*
		FROM msdb.dbo.sysjobhistory
		)
SELECT 
	a.name,
	RowNum = ROW_NUMBER() OVER (PARTITION BY a.name, b.step_id ORDER BY instance_id DESC),
	b.step_id,
	b.step_name,
	b.message,
	RunStatus = CASE b.run_status 
		WHEN 0 THEN 'Failed'
		WHEN 1 THEN 'Succeeded' 
		WHEN 2 THEN 'Retry' 
		WHEN 3 THEN 'Cancelled' 
		WHEN 4 THEN 'In Progress' 
		END,
	b.run_duration,
	b.RunDateString,
	b.RunTimeString
FROM msdb.dbo.sysjobs a
	INNER JOIN cte_jobhistory b
		ON a.job_id = b.job_id
WHERE CONVERT(datetime, b.RunDateString) > GETDATE()-10
ORDER BY a.name ASC, b.instance_id DESC
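
As a bonus, the RowNum column makes it easy to pull out only the most recent execution of each job step, for instance for the front page of the report. A minimal sketch along the same lines (wrap the query above in a second CTE, minus the ORDER BY, and filter on RowNum = 1):

;WITH cte_lastrun AS (
	--> Paste the full query from above in here (without the final ORDER BY)
	SELECT 
		a.name,
		RowNum = ROW_NUMBER() OVER (PARTITION BY a.name, b.step_id ORDER BY b.instance_id DESC),
		b.step_id,
		b.message
	FROM msdb.dbo.sysjobs a
		INNER JOIN msdb.dbo.sysjobhistory b
			ON a.job_id = b.job_id
	)
SELECT *
FROM cte_lastrun
WHERE RowNum = 1 --> 1 = the latest iteration of each job/step
ORDER BY name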

ORA-00936 Missing Expression in SSIS 2005


I’ve been doing quite a bit of SSIS 2005 lately and I’ve come across a weird problem that almost got the best of me. It’s no secret that I absolutely *hate* Oracle, and this problem I just had didn’t make it any better.

What I had was a quite long and complex PL/SQL query (over 250 lines of code!) that was running smoothly in SQL Developer, and the Oracle OLE DB provider in SSIS had absolutely no problems with the code when previewing data or mapping the column metadata. But when I saved the package and ran it, I got an error message saying “ORA-00936: missing expression”. Sooo…what to do? Obviously there was some kind of syntax error, but I had a query that ran perfectly in SQL Developer, validated without a hitch in SSIS, and I could even preview the data in the OLE DB source object in SSIS. Quite amazing, to be honest!

So, after 7 hours (!! I kid you not!) of debugging this query, I had a colleague step into my office for a few minutes. And then he dropped the bomb: “Did you remove all the comments in your query??”. I just stared back at him in disbelief, but at that point I was ready to try just about anything, so I did and, voilà, the query ran through. After a bit more debugging I learned that it was the “--” (double dash) it didn’t like; “/* */” works just fine. I think it has something to do with the driver actually removing all the carriage returns in the statement, so anything after the first comment is also commented out. Go Oracle!

In conclusion:
Don’t ever comment your PL/SQL queries in SSIS using “--” (double dash); it simply will not work!
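
To make it concrete, here is a contrived example (the table and column names are made up). If the provider strips the line breaks, the first statement collapses into a single line where everything from the “--” onwards is swallowed by the comment, while the block-commented version still parses:

-- Unsafe in SSIS: once the carriage returns are stripped, the WHERE
-- clause below ends up inside this comment
SELECT col1, col2
FROM my_table -- fetching the base rows
WHERE col1 = 1

/* Safe in SSIS: the comment is explicitly terminated */
SELECT col1, col2
FROM my_table /* fetching the base rows */
WHERE col1 = 1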

Create a point-in-time restore automatically – SQL 2005 version


A new version has been created! Check it out here: Updated: create point-in-time restore script automatically

This is a 2005-version of the script originally posted here: Create a full Point-In-Time restore script automatically! – 2008 version

/*
Generate Point-in-time restore script, given only DBName and Point-in-time timestamp
Works only on SQL2005 databases!!

Script for generating restore script for full backup originally created by Michael 
Valentine Jones, SQLTeam.com

The rest is created by Henning Frettem, www.thefirstsql.com
*/
SET NOCOUNT ON
GO

DECLARE 
  @DBName varchar(200),
  @PointInTime datetime,
  @Filename varchar(200),
  @tab varchar(1),
  @cr varchar(2),
  @Full_BackupStartDate datetime,
  @Diff_BackupStartDate datetime,
  @Log_BackupStartDate datetime,
  @SQL nvarchar(max)
  
SET @DBName = 'MyDatabaseName'
SET @PointInTime = '2011-11-22 09:45:00'
SET @tab = char(9)
SET @cr = char(13)+char(10)
SET @SQL = ''


BEGIN TRY
  --> Performing some checks
  IF NOT EXISTS (SELECT 1 FROM msdb.dbo.backupset WHERE database_name = @DBName AND type = 'D' AND backup_start_date <= @PointInTime)
    RAISERROR(N'No full backup exists prior to the specified PointInTime, restore is not possible at the given PointInTime.', 16, 1)
  ELSE IF NOT EXISTS (SELECT 1 FROM msdb.dbo.backupset WHERE database_name = @DBName AND type = 'L' AND backup_start_date > @PointInTime)
    RAISERROR(N'No backup of the log exists after the specified PointInTime; do a tail-log backup before running this script again', 16, 1)

  --> Getting the filename and the date of the last full backup prior to the PointInTime
  SELECT TOP 1 
    @Filename = b.physical_device_name,
    @Full_BackupStartDate = backup_start_date
  FROM msdb.dbo.backupset a
    INNER JOIN msdb.dbo.backupmediafamily b
      ON a.media_set_id = b.media_set_id
  WHERE a.database_name = @DBName
    AND a.type = 'D'
    AND a.backup_start_date <= @PointInTime
  ORDER BY a.backup_start_date DESC

  --> Create temp-tables for file header information
  DECLARE @header table (
	  BackupName varchar(128) NULL,
	  BackupDescription varchar(255) NULL,
	  BackupType smallint NULL,
	  ExpirationDate datetime NULL,
	  Compressed tinyint NULL,
	  Position smallint NULL,
	  DeviceType tinyint NULL,
	  UserName varchar(128) NULL,
	  ServerName varchar(128) NULL,
	  DatabaseName varchar(128) NULL,
	  DatabaseVersion int NULL,
	  DatabaseCreationDate datetime NULL,
	  BackupSize numeric(20,0) NULL,
	  FirstLSN numeric(25,0) NULL,
	  LastLSN numeric(25,0) NULL,
	  CheckpointLSN numeric(25,0) NULL,
	  DatabaseBackupLSN numeric(25,0) NULL,
	  BackupStartDate datetime NULL,
	  BackupFinishDate datetime NULL,
	  SortOrder smallint NULL,
	  CodePage smallint NULL,
	  UnicodeLocaleId int NULL,
	  UnicodeComparisonStyle int NULL,
	  CompatibilityLevel tinyint NULL,
	  SoftwareVendorId int NULL,
	  SoftwareVersionMajor int NULL,
	  SoftwareVersionMinor int NULL,
	  SoftwareVersionBuild int NULL,
	  MachineName varchar(128) NULL,
	  Flags int NULL,
	  BindingID uniqueidentifier NULL,
	  RecoveryForkID uniqueidentifier NULL,
	  Collation varchar(128) NULL,
	  FamilyGUID uniqueidentifier NULL,
	  HasBulkLoggedData bit NULL,
	  IsSnapshot bit NULL,
	  IsReadOnly bit NULL,
	  IsSingleUser bit NULL,
	  HasBackupChecksums bit NULL,
	  IsDamaged bit NULL,
	  BeginsLogChain bit NULL,
	  HasIncompleteMetaData bit NULL,
	  IsForceOffline bit NULL,
	  IsCopyOnly bit NULL,
	  FirstRecoveryForkID uniqueidentifier NULL,
	  ForkPointLSN numeric(25,0) NULL,
	  RecoveryModel varchar(60) NULL,
	  DifferentialBaseLSN numeric(25,0) NULL,
	  DifferentialBaseGUID uniqueidentifier NULL,
	  BackupTypeDescription varchar(60) NULL,
	  BackupSetGUID uniqueidentifier NULL,
	  Seq int NOT NULL identity(1,1)
    )

  --> Create temp-table for db file information
  DECLARE @filelist TABLE  (
	  LogicalName varchar(128) NULL,
	  PhysicalName varchar(260) NULL,
	  Type char(1) NULL,
	  FileGroupName varchar(128) NULL,
	  Size numeric(20,0) NULL,
	  MaxSize numeric(20,0) NULL,
	  FileID bigint NULL,
	  CreateLSN numeric(25,0) NULL,
	  DropLSN numeric(25,0) NULL,
	  UniqueID uniqueidentifier NULL,
	  ReadOnlyLSN numeric(25,0) NULL,
	  ReadWriteLSN numeric(25,0) NULL,
	  BackupSizeInBytes bigint NULL,
	  SourceBlockSize int NULL,
	  FileGroupID int NULL,
	  LogGroupGUID uniqueidentifier NULL,
	  DifferentialBaseLSN numeric(25,0) NULL,
	  DifferentialBaseGUID uniqueidentifier NULL,
	  IsReadOnly bit NULL,
	  IsPresent bit NULL,
	  Seq int NOT NULL identity(1,1)
    )

  --> Get header and filelist information from the backup file
  INSERT INTO @header
  EXEC ('RESTORE HEADERONLY FROM DISK = ''' + @Filename + '''')

  INSERT INTO @filelist
  EXEC ('RESTORE FILELISTONLY FROM DISK = ''' + @Filename + '''')

  --> Generate the full backup restore script
  SELECT
    @SQL = @SQL + 
      CASE
        WHEN a.Seq = 1 THEN 
          @cr + 'RESTORE DATABASE [' + c.DatabaseName + ']' +
          @cr + 'FROM DISK =' + @cr + @tab + '''' +
          @Filename + '''' + @cr + 'WITH' 
        ELSE ''
      END
      + @cr + @tab + 'MOVE ''' + a.LogicalName + ''' TO ''' + a.PhysicalName + ''',' 
      +
      CASE 
        WHEN a.Seq = b.Seq THEN
          @cr + @tab + 'REPLACE, STATS = 5, NORECOVERY'
        ELSE ''
      END
  FROM
    @filelist a
    CROSS JOIN
      (SELECT Seq = MAX(b1.Seq) FROM @filelist b1 ) b
    CROSS JOIN
      (SELECT DatabaseName = MAX(c1.DatabaseName) FROM @header c1) c
  ORDER BY
    a.Seq
    
  SELECT @SQL = @SQL + @cr + 'GO' + @cr + @cr 
    
  --> Restore the last differential backup if it exists
  SELECT TOP 1
    @SQL = @SQL + 'RESTORE DATABASE [' + @DBName + '] FROM DISK = ''' + b.physical_device_name + ''' WITH NORECOVERY' + @cr + 'GO',
    @Diff_BackupStartDate = a.backup_start_date
  FROM msdb.dbo.backupset a
    INNER JOIN msdb.dbo.backupmediafamily b
      ON a.media_set_id = b.media_set_id
  WHERE a.database_name = @DBName
    AND a.type = 'I'
    AND a.backup_start_date > @Full_BackupStartDate
    AND a.backup_start_date < @PointInTime
  ORDER BY a.backup_start_date DESC
  
  IF @Diff_BackupStartDate IS NULL
    SET @Diff_BackupStartDate = @Full_BackupStartDate
    
  --> Generate all log restores except the last one
  SELECT 
    @SQL = @SQL + 'RESTORE LOG [' + @DBName + '] FROM DISK = ''' + b.physical_device_name + ''' WITH NORECOVERY' + @cr + 'GO' + @cr,
    @Log_BackupStartDate = a.backup_start_date   
  FROM msdb.dbo.backupset a
    INNER JOIN msdb.dbo.backupmediafamily b
      ON a.media_set_id = b.media_set_id
  WHERE a.database_name = @DBName
    AND a.type = 'L'
    AND a.backup_start_date > @Diff_BackupStartDate
    AND a.backup_start_date < @PointInTime
  ORDER BY a.backup_start_date
  
  --> Generate last log restore script with the stopat command and recovery
  SELECT TOP 1
    @SQL = @SQL + 'RESTORE LOG [' + @DBName + '] FROM DISK = ''' + b.physical_device_name + ''' 
      WITH RECOVERY, STOPAT = ''' + CONVERT(varchar(20), @PointInTime, 120) + '''' + @cr + 'GO' + @cr
  FROM msdb.dbo.backupset a
    INNER JOIN msdb.dbo.backupmediafamily b
      ON a.media_set_id = b.media_set_id
  WHERE a.database_name = @DBName
    AND a.type = 'L'
    AND a.backup_start_date > @PointInTime
  ORDER BY a.backup_start_date ASC

  PRINT @SQL

END TRY
BEGIN CATCH
  PRINT ERROR_MESSAGE()
END CATCH
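
For reference, the PRINT at the end produces a complete restore sequence that you can paste into a new query window. With hypothetical file names, two data files and two log backups, the output looks roughly like this:

RESTORE DATABASE [MyDatabaseName]
FROM DISK =
	'X:\Backup\MyDatabaseName_full.bak'
WITH
	MOVE 'MyDatabaseName_Data' TO 'X:\Data\MyDatabaseName.mdf',
	MOVE 'MyDatabaseName_Log' TO 'X:\Log\MyDatabaseName_log.ldf',
	REPLACE, STATS = 5, NORECOVERY
GO

RESTORE DATABASE [MyDatabaseName] FROM DISK = 'X:\Backup\MyDatabaseName_diff.bak' WITH NORECOVERY
GO
RESTORE LOG [MyDatabaseName] FROM DISK = 'X:\Backup\MyDatabaseName_log1.trn' WITH NORECOVERY
GO
RESTORE LOG [MyDatabaseName] FROM DISK = 'X:\Backup\MyDatabaseName_log2.trn' 
      WITH RECOVERY, STOPAT = '2011-11-22 09:45:00'
GO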

SQL Server Profiler – a tutorial


During all kinds of database interaction there are times when you want to see what is actually being sent to the database, the actual queries that are executed by the database engine. Sometimes you have 3rd party apps that do all kinds of different stuff under the hood, or maybe you have inherited a website from some developer that made a serious spaghetti of everything. The only way of knowing what is really going on in the database is to use SQL Server Profiler.

Now, although Profiler is a simple yet *very* powerful tool, it does take some tweaking to make it work the way you want it to. It comes with some default templates, but in my experience they are not nearly good enough to be useful in a working environment. There is a whole bunch of objects/events that can be monitored, but what I’m gonna do is show you how to make a trace template that will provide the information you need most of the time.

Here is the order of things you need to do:

  1. First of all you need to open up Profiler. You can either find it in the Tools menu in SQL Server Management Studio, or you can find it on the Windows Start Menu under Microsoft SQL Server -> Performance Tools.
  2. When Profiler opens it will ask you to connect to the server you want to profile. Click “Cancel” in this window; no connection is needed to create a Profiler template like we are doing now.
  3. You should now have a blank Profiler window and it’s time to create the new template. Go to File -> Templates and then click “New Template”
  4. The General window of the Template generator pops up and you need to fill out the selected server type for the template and a new template name. I usually go for “MyTemplate” (quite original, right? :) as the name, as I usually only have one custom template in there. You should also check “Base the new template on an existing one” and choose “TSQL_Duration” as your existing template. Finally check the last checkbox to make this the default template for new traces. Do NOT click Save!
  5. Now you need to click “Event Selection” which is the second tab in the open window and for people not familiar with Profiler this is where the tricky part starts. After all, how do you know which events you need to choose if you’ve never used it before??
  6. Since we chose to base the new template on an existing one (TSQL_Duration), the event selection window will be preconfigured with some events. Be aware that the two events that are chosen will only take into account those queries/procedures that have actually finished. Check the checkbox in the lower right part of the window that says “Show all columns”. A bunch of checkboxes will appear next to the two preconfigured events.
  7. Now you need to do the following:
    • Remove the checkbox for the column BinaryData for the event RPC:Completed.

    Then make sure that the following checkboxes are checked for BOTH events:

    • Duration (the number of milliseconds it took to run the query)
    • TextData (the actual sql query text/stored procedure call that is sent to the database, including parameters)
    • SPID (the system process ID of the current connection)
    • ApplicationName (the name of the application the query/procedure originates from, could sometimes be empty)
    • DatabaseName (the database name the query is sent to)
    • HostName (the name of the host/machine the connection originates from)
    • LoginName (the username of the authenticated user that opened the connection)
    • StartTime (the timestamp when the query was started)
  8. Ok, we’ve come a long way now! What we need to do next is set some Column Filters so our traces won’t be cluttered with mumbo-jumbo that is not relevant to what we are doing:
    • Uncheck the “Show all columns”
    • Click “Column Filters”
  9. Setting the correct Column Filters is essential to getting good trace output; if you don’t do it right, your trace might get flooded with a lot of stuff that makes it harder to find what you’re looking for. The only column filter we will set for the *template* is a SPID filter of Greater than or equal to 50, since all SPIDs with a number less than 50 are reserved for internal processes and are usually not of any value to us. Also make sure that “Exclude rows that do not contain values” is checked, then click OK to save your column filters. When you are using this template at a later stage, it will often be useful to add additional filters. Remember that this is only a template, so when you’re running a trace based on it you can change all the events, columns and filters to your liking. What I usually do is create filters on TextData (the sql query/procedure that is being executed) or DatabaseName. Filters are usually specified as LIKE with a value of e.g. %CREATE TABLE% (without any quotation marks) or %MyDatabaseName%, and always remember to check “Exclude rows that do not contain values” when making filters.
  10. Now you need to click “Organize Columns”. I have always found Profiler to be a little weird when it comes to this organizing, because Profiler actually sorts the output of the trace according to the order you specify here. This in itself might not be so weird, but the fact that it is impossible to re-sort the output of a trace after it has started makes this step really important! It took me quite a while to figure that out actually :) To me at least it’s vital that the trace is always sorted by StartTime. If you sort by StartTime, the order of the other columns has no real importance, so put them in whichever order you prefer. The important thing is that StartTime is on top!
  11. Once you have done all this you are set to go, and all you need to do now is to click “Save”. Your template is done!
  12. To test the newly created template you need to start a new trace by going to File -> New Trace. You need to log on to the server you want to trace, and once you have done that the new trace window will open. If you set the template you just created to be the default template it will automatically be selected for you, if not you will have to select it manually in the “Use the template” dropdown.
  13. Click Run and all new trace events that satisfy your filters will now appear in the order they were started :) By selecting one or more lines (use the Ctrl key to select several), the actual T-SQL that was sent to the database will be displayed in the details window. If you select several lines, they will be separated by the GO statement automatically. (And if you’d rather script the whole template than click it together, see the sketch right after this list.)
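
The GUI is usually the easiest way, but the same template can also be expressed as a server-side trace with the sp_trace_* procedures (Profiler can export a trace this way too, via File -> Export -> Script Trace Definition). A rough sketch, assuming the standard event IDs (10 = RPC:Completed, 12 = SQL:BatchCompleted), the column IDs from sys.trace_columns, and a hypothetical output path:

DECLARE @TraceID int, @maxfilesize bigint, @on bit, @spidfilter int, @col int
SET @maxfilesize = 5   --> max 5 MB per trace file
SET @on = 1
SET @spidfilter = 50

--> Create the trace (the path is just an example; .trc is appended automatically)
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\MyTemplate', @maxfilesize

--> Add the columns we picked to both events
--> (1 = TextData, 8 = HostName, 10 = ApplicationName, 11 = LoginName,
--> 12 = SPID, 13 = Duration, 14 = StartTime, 35 = DatabaseName;
--> note that BinaryData (2) is simply never added)
DECLARE @cols TABLE (col int)
INSERT INTO @cols (col)
SELECT 1 UNION ALL SELECT 8 UNION ALL SELECT 10 UNION ALL SELECT 11
UNION ALL SELECT 12 UNION ALL SELECT 13 UNION ALL SELECT 14 UNION ALL SELECT 35

DECLARE curCols CURSOR LOCAL FAST_FORWARD FOR SELECT col FROM @cols
OPEN curCols
FETCH curCols INTO @col
WHILE @@FETCH_STATUS = 0
  BEGIN
    EXEC sp_trace_setevent @TraceID, 10, @col, @on  --> RPC:Completed
    EXEC sp_trace_setevent @TraceID, 12, @col, @on  --> SQL:BatchCompleted
    FETCH NEXT FROM curCols INTO @col
  END
CLOSE curCols
DEALLOCATE curCols

--> Filter: SPID >= 50 (column 12, logical operator 0 = AND, comparison 4 = >=)
EXEC sp_trace_setfilter @TraceID, 12, 0, 4, @spidfilter

--> Start the trace and return its ID so it can be stopped later with sp_trace_setstatus
EXEC sp_trace_setstatus @TraceID, 1
SELECT TraceID = @TraceID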

Happy Tracing!

How to monitor replication latency using code?


A while back I had the need to monitor quite a few replicated databases, so I made a procedure to automate the monitoring process. This procedure will insert a tracer token for each publication and measure how long it takes for the token to be committed at each subscriber.

CREATE PROCEDURE [dbo].[proc_replication_latency_monitor]
AS

-- Procedure created by Henning Frettem, www.thefirstsql.com

BEGIN

SET NOCOUNT ON

DECLARE 
  @PublishedDBName varchar(200),
  @SQL nvarchar(2000),
  @PublicationPK int, 
  @PublicationName varchar(200),
  @PublicationDatabase varchar(200),
  @TracerTokenID int,
  @Parameters nvarchar(500)
  
DECLARE @Publications table (
  PublicationPK int IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
  PublicationDatabase varchar(200),
  PublicationName varchar(200),
  TracerTokenID int,
  TracerTokenPostTime datetime
)

DECLARE @Latency table (
  PublicationPK int,
  DistributorLatency bigint,
  Subscriber sysname,
  SubscriberDB sysname,
  SubscriberLatency bigint,
  OverallLatency bigint
)

--> Cursor for fetching all publications in all databases
DECLARE curPublishedDatabases CURSOR LOCAL FAST_FORWARD FOR 
  SELECT [name] FROM sys.databases WHERE is_published > 0
  
OPEN curPublishedDatabases
FETCH curPublishedDatabases INTO @PublishedDBName
WHILE @@FETCH_STATUS = 0
  BEGIN 
    SET @SQL = N'SELECT ''' + @PublishedDBName + ''', [name] FROM [' + @PublishedDBName + '].dbo.syspublications'
    INSERT INTO @Publications (PublicationDatabase, PublicationName)
    EXEC sp_executesql @SQL
    
    FETCH NEXT FROM curPublishedDatabases INTO @PublishedDBName
  END

CLOSE curPublishedDatabases
DEALLOCATE curPublishedDatabases 

--> Cursor for posting tracer tokens
DECLARE curPublications CURSOR LOCAL FAST_FORWARD FOR 
  SELECT PublicationPK, PublicationDatabase, PublicationName FROM @Publications

OPEN curPublications
FETCH curPublications INTO @PublicationPK, @PublicationDatabase, @PublicationName
WHILE @@FETCH_STATUS = 0
  BEGIN 
    SET @Parameters = N'@PublicationName varchar(200), @TracerTokenID_OUT int OUTPUT';
    SET @SQL = N'EXEC [' + @PublicationDatabase + '].sys.sp_posttracertoken @publication = @PublicationName, @tracer_token_id = @TracerTokenID_OUT OUTPUT'
    EXEC sp_executesql @SQL, @Parameters, @PublicationName = @PublicationName, @TracerTokenID_OUT = @TracerTokenID OUTPUT
  
    UPDATE @Publications SET 
      TracerTokenID = @TracerTokenID,
      TracerTokenPostTime = GETDATE()
    WHERE PublicationPK = @PublicationPK

    FETCH NEXT FROM curPublications INTO @PublicationPK, @PublicationDatabase, @PublicationName
  END
CLOSE curPublications
DEALLOCATE curPublications

--> Wait two minutes for all tokens to be committed at all subscribers
WAITFOR DELAY '000:02:00.000'

--> Then check the results for each posted token 
DECLARE curTokens CURSOR LOCAL FAST_FORWARD FOR 
  SELECT PublicationPK, PublicationDatabase, PublicationName, TracerTokenID FROM @Publications

OPEN curTokens
FETCH curTokens INTO @PublicationPK, @PublicationDatabase, @PublicationName, @TracerTokenID
WHILE @@FETCH_STATUS = 0
  BEGIN 
    --> Insert token history for each token 
    SET @SQL = N'EXEC [' + @PublicationDatabase + '].dbo.sp_helptracertokenhistory @publication = ''' + @PublicationName + ''', @tracer_id = ' + CAST(@TracerTokenID AS VARCHAR(50))
    INSERT INTO @Latency
      (DistributorLatency, Subscriber, SubscriberDB, SubscriberLatency, OverallLatency)
    EXEC sp_executesql @SQL

    --> Make sure that the PublicationPK is added to the token history
    UPDATE @Latency SET PublicationPK = @PublicationPK WHERE PublicationPK IS NULL
  
    --> Clean up the tracer token
    SET @SQL = N'EXEC [' + @PublicationDatabase + '].sys.sp_deletetracertokenhistory @publication = ''' + @PublicationName + ''', @tracer_id = ' + CAST(@TracerTokenID AS VARCHAR(50))
    EXEC sp_executesql @SQL

    FETCH NEXT FROM curTokens INTO @PublicationPK, @PublicationDatabase, @PublicationName, @TracerTokenID
  END
CLOSE curTokens
DEALLOCATE curTokens

SELECT 
  b.PublicationDatabase,
  b.PublicationName,
  a.Subscriber,
  a.SubscriberDB,
  a.OverallLatency,
  b.TracerTokenPostTime 
FROM @Latency a
  INNER JOIN @Publications b
    ON a.PublicationPK = b.PublicationPK

END
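
A typical way to use this is to schedule it in a SQL Server Agent job and keep the output for trending. Since the procedure returns a single result set, INSERT…EXEC works; a small sketch (the history table is hypothetical and must be created first with matching columns):

--> Run the monitor ad hoc
EXEC dbo.proc_replication_latency_monitor

--> Or capture the output into a (hypothetical) history table for trending
INSERT INTO dbo.replication_latency_history
  (PublicationDatabase, PublicationName, Subscriber, SubscriberDB, OverallLatency, TracerTokenPostTime)
EXEC dbo.proc_replication_latency_monitor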

Finding the most active 15 minute time period


A guy over at sqlteam.com wanted to find the most active 15-minute period for a series of different stores his company runs and I thought this would be a nice topic for a blog post. I modified the problem a little to fit a sample table I had lying around and the result wasn’t too hard to come by.

Basically I have a table with sample call records for a telephone company, and I want to find the 15-minute period in which the most calls are started. First we have to generate a table with some sample data:

create table calls (
	call_id int identity(1, 1) primary key clustered,
	a_number bigint,
	b_number bigint,
	start_time datetime,
	duration int)
go

--> Function to generate random numbers, used to create sample data
create function randomnumber (@min bigint, @max bigint, @rand float)
returns bigint
as
begin
	return CAST(((@Max + 1) - @Min) * @Rand + @Min AS bigint)
end
go

--> Generate 500 rows of sample data
declare 
	@end int = 500,
	@current int = 1
	
while (@current <= @end)
	begin
		insert into calls (a_number, b_number, start_time, duration)
		select 
			dbo.randomnumber (22000000, 22999999, RAND()),
			dbo.randomnumber (22000000, 22999999, RAND()),
			DATEADD(MINUTE, dbo.randomnumber (1, 300, RAND()),GETDATE()),
			dbo.randomnumber (3, 3600, RAND())
			
		select @current = @current + 1
	end
go

Now once the test table with sample data has been established, we can run the actual query:

DECLARE @dummydate datetime = '2010-06-23 00:00:00' --> pick an anchor date within ~24 days of your data, see the note below the query

;WITH cte AS (
	SELECT 
		[Mod] = DATEDIFF(MILLISECOND, @dummydate, start_time) % (15 * 60 * 1000), --> 15 mins to milliseconds
		*
	FROM calls
	)
SELECT TOP 10 
	Period = DATEADD(MILLISECOND, -[Mod], start_time), 
	CountOfTransID = COUNT(call_id)
FROM cte
GROUP BY DATEADD(MILLISECOND, -[Mod], start_time)
ORDER BY COUNT(call_id) DESC
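
Note that @dummydate just anchors the 15-minute grid, and DATEDIFF in milliseconds overflows an int when the two dates are more than about 24 days apart. An equivalent formulation at minute granularity sidesteps the overflow and needs no anchor variable at all (integer division snaps each start_time down to its 15-minute boundary):

SELECT TOP 10 
	Period = DATEADD(MINUTE, (DATEDIFF(MINUTE, 0, start_time) / 15) * 15, 0), --> snap down to the nearest 15-minute boundary
	CountOfTransID = COUNT(call_id)
FROM calls
GROUP BY DATEADD(MINUTE, (DATEDIFF(MINUTE, 0, start_time) / 15) * 15, 0)
ORDER BY COUNT(call_id) DESC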

Poor man's partitioning


There is absolutely nothing new about what I’m writing here; this is a really old trick from the SQL 2000 days, but quite a few people are not aware of the powers of partitioning your data, and that is why I’m posting. Unfortunately, native partitioning is for large corporations and companies with thick wallets where Enterprise edition is the only way to go, while the rest of us have to tweak that Standard edition until our eyes pop out. Well…for those of you that are not aware: there is a way to partition even with Standard edition, and I’m gonna show you how. It takes a bit more effort than the native way and it doesn’t do everything you would hope for, but it’s still pretty darn powerful.

And this little script here illustrates the whole concept: you create a few tables with some check constraints, you create a UNION ALL view and VOILA! There is your partitioning! Before you run the script, make sure to turn on “Include Actual Execution Plan” in your Management Studio.

CREATE TABLE part_table1 (
  ID INT NOT NULL PRIMARY KEY CLUSTERED,
  SomeValue VARCHAR(20)
  )
GO
CREATE TABLE part_table2 (
  ID INT NOT NULL PRIMARY KEY CLUSTERED,
  SomeValue VARCHAR(20)
  )
GO
CREATE TABLE part_table3 (
  ID INT NOT NULL PRIMARY KEY CLUSTERED,
  SomeValue VARCHAR(20)
  )
GO

ALTER TABLE part_table1 WITH CHECK ADD CONSTRAINT CK_part_table1 CHECK (ID >= 0 AND ID <= 10) 
ALTER TABLE part_table1 CHECK CONSTRAINT CK_part_table1 

ALTER TABLE part_table2 WITH CHECK ADD CONSTRAINT CK_part_table2 CHECK (ID >= 11 AND ID <= 100) 
ALTER TABLE part_table2 CHECK CONSTRAINT CK_part_table2 

ALTER TABLE part_table3 WITH CHECK ADD CONSTRAINT CK_part_table3 CHECK (ID >= 101)
ALTER TABLE part_table3 CHECK CONSTRAINT CK_part_table3
GO

CREATE VIEW part_table
AS
SELECT * FROM part_table1
UNION ALL
SELECT * FROM part_table2
UNION ALL
SELECT * FROM part_table3
GO

INSERT INTO part_table SELECT 9, 'sqlteam'
INSERT INTO part_table SELECT 24, 'sqlteam'
INSERT INTO part_table2 SELECT 25, 'sqlteam'

SELECT * FROM part_table WHERE ID = 25

Notice the execution plans for the inserts and the select. The insert is a clustered index insert, but as you can see it scans all the tables to find the right one, which is somewhat annoying. The final select is where the “magic” happens: when you filter on the “partitioning column”, the query optimizer actually knows which data lives where, so it only has to search one of the tables! Pretty cool stuff…at least somewhat cool hehe. Using this method you can utilize quite a few of the techniques built into native partitioning. You can for example have each partition on a different disk, and you can add and remove partitions from the view as you like (as sketched below); just remember to change the view first before you delete or rename any tables permanently…
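
For instance, removing the oldest partition is just a matter of altering the view before touching the table; a minimal sketch using the tables from above:

--> Take part_table1 out of the view first...
ALTER VIEW part_table
AS
SELECT * FROM part_table2
UNION ALL
SELECT * FROM part_table3
GO

--> ...then the underlying table can be archived, renamed or dropped
--> without breaking anything that queries the view
DROP TABLE part_table1
GO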