Tuesday, February 26, 2008

SQL Server 2008 - sp_repladdcolumn and sp_repldropcolumn deprecated

This is one of the common questions in a SQL Server 2000 DBA interview: how do you add a column to a replicated/published table? The answer is to use sp_repladdcolumn and sp_repldropcolumn. If we get this question now, we may ask: which version of SQL Server is this? Since SQL Server 2005 you may use a simple DDL command like ALTER TABLE to add or drop a column on a published table (of course, conditions apply (***); for example, if the subscriber republishes the data then you should still use the system stored procedures). In SQL Server 2008 these system stored procedures are deprecated, so they will not be available in the next version.
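As a sketch (table and column names are hypothetical), the SQL Server 2005+ approach is plain DDL, while the deprecated system procedure call looked like this:

```sql
-- SQL Server 2005 and later: plain DDL on a published table
-- replicates the schema change to subscribers.
ALTER TABLE dbo.PublishedTable ADD NewCol int NULL;
ALTER TABLE dbo.PublishedTable DROP COLUMN NewCol;

-- The deprecated SQL Server 2000 way:
-- EXEC sp_repladdcolumn @source_object = 'PublishedTable',
--      @column = 'NewCol', @typetext = 'int NULL';
```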

SQL Server 2008- Compatibility Level Supported

This is one important thing to know before switching, or planning to switch, to SQL Server 2008. SQL Server 2008 supports only the compatibility levels of the last two versions, i.e. SQL Server 2000 (80) and SQL Server 2005 (90). You cannot set the compatibility level to 70 (SQL Server 7.0) in SQL Server 2008.
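To check or change the level, a quick sketch (database name hypothetical):

```sql
-- Check the current compatibility level
SELECT name, compatibility_level FROM sys.databases WHERE name = 'Test';

-- SQL Server 2008 ALTER DATABASE syntax; 90 = SQL Server 2005 mode.
-- Setting 70 here fails, since only 80, 90 and 100 are supported.
ALTER DATABASE Test SET COMPATIBILITY_LEVEL = 90;
```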

Saturday, February 23, 2008

SQL Server 2008 - Truncating Transaction Log

How to shrink (re-size ) the transaction log in SQL Server 2008

Scenario :

I have a database with the following size

SELECT name,size from sys.database_files

Test 173056
Test_Log 579072 -- 565 MB

Now I want to truncate the transaction log. In earlier versions, what we did was truncate the log and then shrink the file.

BACKUP LOG Test with Truncate_only

This statement throws an error with the message below:

Backup log Test with Truncate_Only
LHI-115\SQL2008(sa): Msg 155, Level 15, State 1, Line 1
'Truncate_Only' is not a recognized BACKUP option.

The error is obvious; this command no longer exists in SQL Server 2008. So the question is: what is the alternative? As per Books Online, "The transaction log is automatically truncated when the database is using the simple recovery model. If you must remove the log backup chain from a database, switch to the simple recovery model." So this is one command you should check before migrating to SQL Server 2008. If you have any script which uses the WITH TRUNCATE_ONLY option, you will have to rewrite the script.

How to shrink (re-size ) the transaction log in SQL Server 2008

As per Books Online, if you switch the recovery model to Simple, the inactive part of the transaction log should be removed. Let us see what happens.

(a) select name, recovery_model_desc from sys.databases

name recovery_model_desc
Test Full

(b) Alter database Test SET Recovery Simple

name recovery_model_desc
Test Simple

(c) select name, size from sys.database_files

Test 173056
Test_Log 451352 -- 440 MB

This process reduced the transaction log file size, but not to the size I want. There is no option to set the required size, as we have with SHRINKFILE. Do we need to shrink the file after switching the recovery model? Not sure. I tested the same database in 2005 and I was able to shrink the file to 1024 KB.

I tried to shrink the transaction log after switching the recovery model, and the result is as follows:

DBCC SHRINKFILE (N'test_log' , 1)

DbId FileId CurrentSize MinimumSize UsedPages EstimatedPages
------ ----------- ----------- ----------- ----------- --------------
6 2 451352 128 451352 128

So... it's obvious that it's not shrinking. I also checked that there is no active transaction in the transaction log.

Note: From my SQL Server 2005 and 2000 servers I restored a few databases to 2008 and tried to truncate the transaction log. For some databases this process worked; for others it did not. I need to look into what the problem is with the databases which could not shrink.


In SQL Server 2008, the transaction log can be shrunk in two steps:
(a) Change the recovery model to Simple
(b) Shrink the file using DBCC SHRINKFILE
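Putting the two steps together as a sketch (database and file names are from the scenario above):

```sql
-- Step 1: switch to the simple recovery model so the log can truncate.
ALTER DATABASE Test SET RECOVERY SIMPLE;

-- Step 2: shrink the log file; the second argument is the target size in MB.
DBCC SHRINKFILE (N'Test_Log', 1);

-- If point-in-time recovery is required, switch back and take a full
-- backup to restart the log backup chain.
ALTER DATABASE Test SET RECOVERY FULL;
```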

Wednesday, February 13, 2008

SQL Server 2008 - Transact-SQL Error List Window

SQL Server Management Studio includes an Error List window that displays the syntax and semantic errors generated by IntelliSense in the Transact-SQL Query Editor. It looks more or less like the Error List window available in .NET.

To display the Error List, do one of the following:
• On the View menu, click Error List.

In a development environment this enhancement will really come in handy.

SQL Server 2008 - Change Data Capture

What is Change Data Capture?

This new feature of SQL Server 2008 will capture all the data modifications (i.e. DML: insert/update/delete) on a table if you configure that table for Change Data Capture. The SQL Server community has been waiting for this feature for a long time. Though earlier versions provided a few functions and DBCC commands to read the transaction log, it was never a complete solution. In a couple of projects we had this kind of requirement and had to track each data change for audit purposes. With a lot of effort, we wrote our own application using triggers and history table(s).

Change Data Capture is the mechanism to track and capture DML (insert/update/delete) on a table and store the captured data in relational format. A table, or a few columns in a table, can be configured for data capturing. All data manipulation language (DML) changes are read from the transaction log and captured in the associated change table. This may be considered a customized replication log reader. It is an asynchronous mechanism, so there will not be any performance impact on the actual DML which changed the data. SQL 2008 has a set of table-valued functions to query this captured data, to create reports or to synchronize a standby database. Of course, when you log/track data you have extra I/O that you cannot avoid. You can also capture changes to only the required columns of the table.

How it works?

Each insert or delete operation that is applied to a source table appears as a single row within the change table. The data columns of the row resulting from an insert operation contain the column values after the insert; the data columns of the row resulting from a delete operation contain the column values prior to the delete. An update operation, however, requires two row entries: one to identify the column values before the update and a second to identify the column values after the update.

Pre-Requisite :

(a) SQL Server 2008 Enterprise, Developer, and Evaluation editions: many of us would not like to hear this, but it's a fact.
(b) SQL Server Agent should be up and running


(a) You should not have any schema already existing with the name cdc. Change data capture requires exclusive use of the cdc schema and cdc user. If either a schema or a database user named cdc currently exists in a database, the database cannot be enabled for change data capture until the schema and/or user are dropped or renamed.

(b) The table columns which you are going to capture should not contain the following reserved column names: __$start_lsn, __$end_lsn, __$seqval, __$operation, and __$update_mask.

(c) captured_column_list cannot contain columns defined with a data type introduced in SQL Server 2008.

Best Practices

(a) Create a separate filegroup for change data capture change tables
(b) Configure only the required table and required columns for CDC


A member of the SYSADMIN fixed server role can enable CDC on the database.
A member of the DB_OWNER fixed database role can create a capture instance for individual source tables.

My questions before starting Change Data Capture process testing

(a) Which editions will support this feature?
(b) Do we need a primary key on the table which we need to track?
(c) What will happen when a CDC-enabled table's schema changes?
(d) Do we need to take any extra care when we make schema changes to CDC-enabled tables?
(e) If you disable CDC on a table, what will happen to the previously captured data?

1. Enable database for CDC

Check whether the database is already enabled for CDC or not. Run this query:
Select is_cdc_enabled From sys.databases where name='GPx'

If this query returns 0 then it's not enabled.

USE AdventureWorks;
EXECUTE sys.sp_cdc_enable_db ;

When a database is enabled for change data capture, the cdc schema, cdc user, metadata tables, and other system objects are created for the database. Just make sure these objects are created.
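A quick sanity check (a sketch; the exact object list will vary by build):

```sql
-- List the metadata tables and other objects created in the cdc schema.
SELECT name, type_desc
FROM sys.objects
WHERE schema_id = SCHEMA_ID('cdc')
ORDER BY type_desc, name;
```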

2. Enabling a Source Table for Change Data Capture :-

By default, all of the columns in the source table are identified as captured columns. If only a subset of columns needs to be tracked, use the @captured_column_list parameter to specify the subset of columns.
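For example (the column list here is an assumption for illustration, using columns from my test table):

```sql
-- Capture only two columns of the source table.
EXECUTE sys.sp_cdc_enable_table
      @source_schema        = N'dbo'
    , @source_name          = N'CDCTest'
    , @role_name            = N'cdc_Admin'
    , @captured_column_list = N'PLIContainerID, Code';
```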

Dynamic Management views for CDC.

Following DMVs can be used to display information about Change Data Capture and Log sessions

(a) sys.dm_cdc_log_scan_sessions
(b) sys.dm_cdc_errors
(c) sys.dm_repl_traninfo
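A sketch of how the first two can be queried:

```sql
-- Recent log-scan sessions run by the capture job.
SELECT * FROM sys.dm_cdc_log_scan_sessions;

-- Any errors encountered by the capture process.
SELECT * FROM sys.dm_cdc_errors;
```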

Test Case :-

I have a table called CDCTest to be enabled for CDC, and I planned to capture all the columns of the table.

Enable Table CDCTest for Change Data Capturing

EXECUTE sys.sp_cdc_enable_table
@source_schema = N'dbo'
, @source_name = N'CDCTest'
, @role_name = N'cdc_Admin';

Job 'cdc.GPx_capture' started successfully.
Job 'cdc.GPx_cleanup' started successfully.

Do some DML operations on this table:

delete top (3) from cdctest
Insert CDCTest (PLIContainerID,CenterTypeID,TotalVisits) select 1,222,333

Update CDCTest set Code='xxxxx' where plicontainerid between 1 and 20

Query functions to get the captured data:

DECLARE @begin_time datetime, @end_time datetime, @from_lsn binary(10), @to_lsn binary(10);

SET @begin_time = GETDATE()-1;
SET @end_time = GETDATE();
-- Map the time interval to a change data capture query range.
SELECT @from_lsn = sys.fn_cdc_map_time_to_lsn('smallest greater than or equal', @begin_time);
SELECT @to_lsn = sys.fn_cdc_map_time_to_lsn('largest less than or equal', @end_time);

-- Return the changes occurring within the query window.
SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_CDCTest(@from_lsn, @to_lsn, 'all');

Wrapper Stored Procedure to get CDC captured information for a given table

IF OBJECT_ID(N'dbo.spGetCDCInformationForATable') IS NOT NULL
BEGIN
    PRINT 'Dropping Stored Procedure: [dbo].spGetCDCInformationForATable'
    DROP PROCEDURE dbo.spGetCDCInformationForATable
END
GO

PRINT 'Creating Stored Procedure: [dbo].spGetCDCInformationForATable'
GO


/*
Copyright 2008

Procedure     : [spGetCDCInformationForATable]
Author        :
On            : <08-Feb-2007>
Description   : Get data captured through Change Data Capture for a given table

Pending Issue : Nil

Returns       : @Err

Date       Modified By        Description of Change
-------    ----------------   --------------------------------
*/

Create Procedure spGetCDCInformationForATable
    @TableName sysname,
    @SourceTableSchema sysname,
    @begin_time datetime,
    @end_time datetime
AS
BEGIN
    DECLARE @from_lsn binary(10), @to_lsn binary(10);

    -- Map the time interval to a change data capture query range.
    SELECT @from_lsn = sys.fn_cdc_map_time_to_lsn('smallest greater than or equal', @begin_time);
    SELECT @to_lsn = sys.fn_cdc_map_time_to_lsn('largest less than or equal', @end_time);

    -- Return the changes occurring within the query window.
    Declare @SqlStmt nvarchar(1000)
    Set @SqlStmt = 'SELECT * FROM cdc.fn_cdc_get_all_changes_' + @SourceTableSchema + '_' + @TableName + '(@from_lsn, @to_lsn, ''all'')'

    Exec sp_executesql @SqlStmt, @params = N'@from_lsn binary(10), @to_lsn binary(10)', @from_lsn = @from_lsn, @to_lsn = @to_lsn
END


What will happen when a CDC-enabled table's schema changes?

You can very well add a new column. Obviously, data in this column will not be tracked. Then what will happen if a column is dropped? CDC does not track any schema changes, it seems. If you add or drop a column, it doesn't care. The CDC function still returns the same number of columns as when CDC was enabled.

alter table cdctest Add column ColAddedAfterEnableCDC int

insert into cdctest (PLIContainerID, ColAddedAfterEnableCDC) select 111111111, 222

alter table cdctest drop column completedon

DDL modifications to the source table that change the source table column structure, such as adding or dropping a column, or changing the data type of an existing column, are maintained in the cdc.ddl_history table. These changes can be reported by using this stored procedure. Entries in cdc.ddl_history are made at the time the capture process reads the DDL transaction in the log.

You can track the DDL activity on a CDC-enabled table using the sys.sp_cdc_get_ddl_history system stored procedure. This procedure returns the data definition language (DDL) change history associated with the specified capture instance since change data capture was enabled for that capture instance.
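A sketch of the call (the capture instance name follows the default schema_table pattern):

```sql
-- List the DDL changes recorded for the dbo.CDCTest capture instance.
EXEC sys.sp_cdc_get_ddl_history @capture_instance = N'dbo_CDCTest';
```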

Disable Change Data Capturing

sys.sp_cdc_disable_db_change_data_capture :
Disables Change Data Capture on the current database.

Disables change data capture for the specified source table in the current database. sys.sp_cdc_disable_table_change_data_capture drops the change data capture change table and system functions associated with the specified source table and capture instance. It deletes any rows associated with the specified capture instance from the change data capture system tables and sets the is_tracked_by_cdc column in the sys.tables catalog view to 0.
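A sketch using the procedure names from this build (I have assumed the parameter list mirrors sp_cdc_enable_table):

```sql
-- Disable capture for one table / capture instance (parameters assumed).
EXEC sys.sp_cdc_disable_table_change_data_capture
      @source_schema    = N'dbo'
    , @source_name      = N'CDCTest'
    , @capture_instance = N'dbo_CDCTest';

-- Disable Change Data Capture for the whole database.
EXEC sys.sp_cdc_disable_db_change_data_capture;
```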

SQL Server 2008 - Table-Valued Parameter

It has always been a problem in earlier versions to pass a multi-row, multi-column (record set) parameter to a stored procedure or a function. In 2008 this problem is solved, and now you can pass a table variable as a parameter between stored procedures and functions.

How it works?
The first step is to create a user-defined table data type in the database. Once you have created the user-defined table type, you can declare a variable of that type as an input parameter of a stored procedure or function.

(a) Create a User TABLE Type

CREATE TYPE TestTableTypeParm AS TABLE (EmpID int primary key,
EmpType varchar(10),
EmpName varchar(50),
CHECK (EmpType IN ('MNGR', 'CLRK', 'TECH') ))

(b) Create stored procedure with a Table type as input parameter.
Create PROCEDURE spTestTableTypeParm
(@TVP TestTableTypeParm READONLY,
@someotherparam int)
AS
Select * From @TVP

(c) How to call this stored procedure

Declare @TVP as TestTableTypeParm
Insert @TVP select 1,'MNGR','aaaa'
Insert @TVP select 2,'CLRK','bbb'
Insert @TVP select 3,'TECH','ccc'

Exec spTestTableTypeParm @TVP,1

(d) Create a function with table type input parameter

CREATE FUNCTION ufn_TestTablevaluedParameter (@TVP TestTableTypeParm READONLY)
RETURNS TABLE
AS
RETURN (Select * From @TVP)

(e) How to call the function with a table type input parameter

Declare @TVP as TestTableTypeParm
Insert @TVP select 1,'MNGR','aaaa'
Insert @TVP select 2,'CLRK','bbb'
Insert @TVP select 3,'TECH','ccc'

Select *From ufn_TestTablevaluedParameter(@TVP)

There are many limitations mentioned in Books Online. These are probably the most significant among them:

(a) SQL Server does not maintain statistics on columns of table-valued parameters.

(b) Table-valued parameters must be passed as input READONLY parameters to Transact-SQL routines. You cannot perform DML operations such as UPDATE, DELETE, or INSERT on a table-valued parameter in the body of a routine.

(c) You cannot use a table-valued parameter as target of a SELECT INTO or INSERT EXEC statement. A table-valued parameter can be in the FROM clause of SELECT INTO or in the INSERT EXEC string or stored-procedure.

(d) A user-defined table type cannot be used as a column in a table or a field in a structured user-defined type.

(e) A DEFAULT value cannot be specified in the definition of a user-defined table type.

(f) The user-defined table type definition cannot be modified after it is created.

(g) User-defined functions cannot be called within the definition of computed columns of a user-defined table type.


(a) We can declare table-valued variables within dynamic Transact-SQL statements and pass these variables as table-valued parameters to stored procedures and functions.
(b) The table-valued variable data cannot be modified inside the object where it is referred; it has to be READONLY.
(c) In SQL Server 2000, it was not allowed to insert the output of a stored procedure into a table-type variable. This was solved in SQL Server 2005, where it became possible to insert the output of a stored procedure into a table variable. But table-valued parameters again have this limitation: you cannot use them in SELECT INTO or INSERT EXEC.
(d) sys.table_types: this system view will return all the table types in the database.
select * from sys.table_types

SQL Server 2008 - New functions for Date and Time

I am not aware of any date function deprecations in 2008. There are new functions added to complement the new datatypes.

(a) SYSDATETIME() : Returns the current database system timestamp as a datetime2(7) value. The database time zone offset is not included. This value is derived from the operating system of the computer on which the instance of SQL Server is running.
select SYSDATETIME() -- Result : 2008-02-10 10:12:54.8750000

(b) SYSDATETIMEOFFSET() : Returns the current database system timestamp as a datetimeoffset(7) value.
select SYSDATETIMEOFFSET() -- Result : 2008-02-10 10:12:09.6406250 +05:30

(c) SYSUTCDATETIME() : Returns the current database system timestamp, represented in UTC, as a datetime2(7) value.
Select SYSUTCDATETIME() -- Result : 2008-02-10 04:44:52.1875000

(d) SWITCHOFFSET() : Returns a datetimeoffset value that is changed from the stored time zone offset to a specified new time zone offset.
SELECT SWITCHOFFSET('2008-02-10 7:45:50.71345 -5:00', '-08:00') as SWITCHOFFSETResult

(e) TODATETIMEOFFSET() : Sets the time zone offset for a date and time value. This function can be used to update a datetimeoffset column. Note that its first argument is a date and time value without an offset.
SELECT TODATETIMEOFFSET('2008-02-10 7:45:50.71345', '-08:00') as TODATETIMEOFFSETResult

Books online have sample script to explain these functions.

SQL Server 2008 - New Datatypes

Major enhancement in this front is new Date and Time datatypes and related functions.

Here Overview of the new Date and time datatypes

(a) Date : Date only, which can store dates ranging from 0001-01-01 through 9999-12-31, i.e. January 1, 1 A.D. through December 31, 9999 A.D. The storage size is 3 bytes and the date datatype gives one-day accuracy.

(b) Datetime2 : An extension of the existing datetime datatype; datetime2 has a larger range and greater accuracy. The existing datetime range is limited to January 1, 1753 through December 31, 9999, whereas datetime2 provides a range from 0001-01-01 through 9999-12-31. The storage size is 8 bytes for datetime, and 6 to 8 bytes for datetime2 depending on the precision (8 bytes at the default precision of 7). For all new development it is recommended to use datetime2, because it is more portable and gives more seconds precision.

(c) Time : Defines a time of day. The time is without time zone awareness and is based on a 24-hour clock. This provides more accuracy than the existing datetime datatype: the time datatype ranges from 00:00:00.0000000 through 23:59:59.9999999, whereas datetime only gives 00:00:00 through 23:59:59.997. It also lets the user specify the fractional seconds precision they want, from 0 to 7 digits. The storage size is 5 bytes, fixed, at the default precision of 7, which gives 100 ns fractional-second accuracy.
SELECT CAST(getdate() AS time) AS 'time' -- Default
Result : 09:43:13.6230000
SELECT CAST(getdate() AS time(2)) AS 'time' -- User-specified precision of 2
Result : 09:43:57.69
In the above example, the figure within brackets (i.e. 2) specifies what precision we want. By default (if we do not specify), it is 7.

(d) datetimeoffset : Defines a date that is combined with a time of day, has time zone awareness, and is based on a 24-hour clock. Datetimeoffset has three parts: a date part, a time part and a time offset part. Datetimeoffset ranges from 0001-01-01 through 9999-12-31 and takes 10 bytes for storage.
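A small sketch declaring the new types side by side (the values are arbitrary; note it also uses the new declare-and-initialize syntax covered below):

```sql
DECLARE @d   date            = '2008-02-10';
DECLARE @dt2 datetime2(7)    = '2008-02-10 07:45:50.7134567';
DECLARE @t   time(3)         = '07:45:50.713';
DECLARE @dto datetimeoffset  = '2008-02-10 07:45:50.71345 -05:00';
SELECT @d AS d, @dt2 AS dt2, @t AS t, @dto AS dto;
```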

(e) Hierarchyid : I have not tried this yet; for now I just wanted to mention that there is a new system datatype called hierarchyid in SQL Server 2008. Books Online has a very detailed section on this, which I need to check.


These date and time datatypes have definitely been among the most wanted datatypes for a long time. For any new development, use these new datatypes where applicable. I need to explore more with these datatypes. Books Online has very detailed documentation with examples which should be referred to.

SQL Server 2008 - MERGE DML Statement

What is MERGE Statement ?

Merge is a single DML statement which can be used in place of multiple DML statements. In other words, it can do insert/update/delete in a single statement. MERGE will be an ideal choice in ETL scenarios or in a data synchronizing application. MERGE is SQL:2006 standard compliant.

How MERGE works ?

The Merge statement has a source and a target object. The source can be a table, query or view which provides the data to merge into the target table. The target can be a table or an updatable view.

In an ETL process, it is very common to synchronize a target table from a source. So what do we generally do? What are the conditions we check?

(a) The new rows in the source which are not in the target need to be inserted.
(b) The rows which match according to the specified matching columns need to be updated.
(c) The extra rows in the target which are not in the source should be deleted.

In earlier versions, we had to write this logic in three different statements.

Test Case

I have two tables, MergeSource and MergeTarget, and the schema is something like this:

Create table MergeTarget (Col1 Int, Name varchar(30), Address varchar(100))

Create table MergeSource (Col1 Int, Name varchar(30), Address varchar(100))

I have the target table with rows as follows:

Insert MergeTarget select 1,'Madhu','Chennai'
Insert MergeTarget select 2,'Mike','delhi'
Insert MergeTarget select 4,'aaa','bangalore'

And the source table rows are as follows :-

Insert MergeSource select 1,'Madhu','Hyderabad'
Insert MergeSource select 2,'Mike','delhi'
Insert MergeSource select 3,'Ram','bangalore'

Points to be noted

(a) The source table has a new address value for 'Madhu', which needs to be updated (only the address column), i.e. the row exists but with different data.
(b) The source table has a new row (col1=3 and name='Ram') which needs to be inserted into the target.
(c) The target table has an extra row (col1=4 and name='aaa') which is not in the source, and which needs to be deleted from the target.

The Merge statement for the above requirement will be something like this:

Merge Mergetarget T USING MergeSource S
On T.Col1=S.COl1
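The WHEN clauses did not survive in the post; a reconstruction of the standard pattern for the three conditions listed above would be:

```sql
MERGE MergeTarget T
USING MergeSource S
    ON T.Col1 = S.Col1
WHEN MATCHED THEN                      -- condition (a): update changed rows
    UPDATE SET T.Name = S.Name, T.Address = S.Address
WHEN NOT MATCHED BY TARGET THEN        -- condition (b): insert new source rows
    INSERT (Col1, Name, Address) VALUES (S.Col1, S.Name, S.Address)
WHEN NOT MATCHED BY SOURCE THEN        -- condition (c): delete extra target rows
    DELETE;
```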


What I learnt here

(a) Merge works only with a single target table.

(b) You need to terminate the merge statement with a semi-colon (;):
Msg 10713, Level 15, State 1, Line 6
A MERGE statement must be terminated by a semi-colon (;).

(c) The Merge statement will fail when a target row matches more than one source row (i.e. the source table has duplicate rows for the given condition). For example, in my scenario, if I have two rows for col1=1 in the source table, the merge statement will fail with the following error message:

Msg 8672, Level 16, State 1, Line 2
The MERGE statement attempted to UPDATE or DELETE the same row more than once. This happens when a target row matches more than one source row. A MERGE statement cannot UPDATE/DELETE the same row of the target table multiple times. Refine the ON clause to ensure a target row matches at most one source row, or use the GROUP BY clause to group the source rows.

I think the error message is self-explanatory: you must refine the ON clause or use GROUP BY to group the source rows.

OUTPUT Clause with MERGE Statement

The OUTPUT clause (which was introduced in SQL 2005) in conjunction with Merge is very powerful. You can get which action fired and what the values were.

Merge Mergetarget T USING MergeSource S
On T.Col1=S.COl1


OUTPUT $action, Inserted.Col1,deleted.col1;
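Again, the WHEN clauses were lost from the post; a reconstructed sketch of the full statement with OUTPUT:

```sql
MERGE MergeTarget T
USING MergeSource S
    ON T.Col1 = S.Col1
WHEN MATCHED THEN
    UPDATE SET T.Name = S.Name, T.Address = S.Address
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Col1, Name, Address) VALUES (S.Col1, S.Name, S.Address)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE
OUTPUT $action, Inserted.Col1, Deleted.Col1;   -- reports the action per row
```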

Screenshot of Merge with the OUTPUT clause.

Source replaced with a query (it can be a query / table / view):

Merge Mergetarget T
USING ( SELECT 1 as Col1, 'Madhu' As Name, 'Hyderabad' as Address
        UNION ALL SELECT 2, 'Mike', 'delhi'
        UNION ALL SELECT 3, 'Ram', 'bangalore' ) S
On T.Col1 = S.Col1
-- (WHEN clauses as in the earlier statement)
OUTPUT $Action, Inserted.Col1, deleted.col1, inserted.address;


No doubt this is one of the major features of SQL Server 2008. As data migration is part of more or less every project, this will be one of the most desired DML statements for SQL developers.


SQL Server 2008 - Backup Compression

It looks really good. I tested it, and it really gives me 3-4 times better performance and a smaller size than a normal / non-compressed backup.

Compressed Backup

BACKUP DATABASE successfully processed 172926 pages in 40.806 seconds (34.715 MB/sec).
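The backup command itself did not survive in the post; what likely produced the result above is the new WITH COMPRESSION option (path hypothetical):

```sql
BACKUP DATABASE MyDatabase
TO DISK = 'E:\MyDatabaseCompression.BAK'
WITH COMPRESSION;
```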

Not Compressed Backup

BACKUP DATABASE MyDatabase TO DISK='E:\MyDatabaseNoCompression.BAK'
BACKUP DATABASE successfully processed 172926 pages in 111.007 seconds (12.761 MB/sec).

See the time taken; it's almost 1/3 of the non-compressed backup. The size of the backup files: compressed is 197 MB and uncompressed is 1386 MB.

So the next question is: is there any difference in restoring from these two backups? How does compression affect restoration? So here we go…

Normal / NOCOMPRESSED Database Backup Restoration Result

RESTORE DATABASE successfully processed 172927 pages in 245.748 seconds (5.764 MB/sec).

COMPRESSED Database Backup Restoration Result

RESTORE DATABASE successfully processed 172927 pages in 199.832 seconds (7.089 MB/sec).

See the time each one took. Restoration from the compressed backup took less time, so it seems to be a very good enhancement.

By default this property is off, i.e. backups are not compressed. You can set this server property to true to make all backups compressed by default. See the Books Online BACKUP DATABASE page for more details.
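A sketch of flipping the server-wide default:

```sql
-- Make WITH COMPRESSION the default for all backups on this instance.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```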

I would love to use this. But what if this feature is only available in Enterprise Edition? We need to wait and see.

SQL Server 2008 - Assignment operators

For instance, while looping, when we need to increment a counter, in earlier versions we used to write counter = counter + 1 kind of code. The new assignment operators make assignment simpler (maybe less readable).

SQL 2005/ 2000 way of Assigning .

Declare @Counter int
Set @Counter = 1
While @Counter < 10
Begin
    Print @Counter
    Set @Counter = @Counter + 1
End

SQL 2008 way of Assigning .

Declare @Counter int = 1
While @Counter < 10
Begin
    Print @Counter
    Set @Counter += 1
End

Basically, in earlier versions, equal sign (=) was the only Transact-SQL assignment operator. Now in 2008 you have += , -=, *=, /=
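A small sketch of the compound operators:

```sql
DECLARE @x int = 10;
SET @x += 5;   -- @x is now 15
SET @x -= 3;   -- 12
SET @x *= 2;   -- 24
SET @x /= 4;   -- 6
SELECT @x AS FinalValue;
```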

SQL Server 2008 - Declaration and initialization of Variable

I would say one of the simple but genuine enhancements in 2008 is the declaration and initialization of a variable in the same step. Something like this:

Declare @Variable Varchar(10) = 'Madhu'

In earlier versions we had to do this in two steps:

Declare @Variable Varchar(10)
Set @Variable = 'Madhu' -- or: Select @Variable = 'Madhu'

Small but very effective

SQL Server 2008 - IntelliSense

Since the release of SQL 2008, I have been attending webcasts, user group meetings and Microsoft virtual lab courses, blah, blah... to get acquainted with the product. It's really nice to hear that many features pending for a long time are addressed in this version. There are many new features which, even if they look small, make the programmer's life very easy.

You can get the list of new features from the Microsoft site; check the white paper.

But what I intend to do here is try it myself first and then pen down my understanding and comments about each feature. I planned to start with the simple features, and here I go…

The first thing one observes when opening a query editor in SQL Server 2008 is IntelliSense. Though I am not used to IntelliSense, I have seen many folks who are very fond of this kind of feature and use third-party tools in earlier SQL versions. This is of course a nice feature. But there are a few discrepancies between BOL and the functionality available in Management Studio. A few settings, like Parameter Info, are not currently available. I may be wrong; if not, I hope the RTM version will address this. You can also have options like line numbers available in the editor itself; you just need to switch on that feature from Tools → Options. This of course demands more tweaking, and it's worth doing.

I think there are some issues. I tested the following script:

create table test(col int, col1 int)

If I type select * from dbo. I should get the objects in the dbo schema, and currently I am not getting this. It may be a bug.