Wednesday, November 30, 2011

Free Training Webinar Series

December Speakers and Topics
Speaker: Adam Jorgensen
Topic: Performance Tuning SSAS
Date: December 1, 2011, 11:00AM EDT
This session will teach you how to identify and resolve performance problems in your SSAS environment along with some internals as to how things work to improve performance.
Speaker: Mike Davis
Topic: New SSIS Features in Denali
Date: December 6, 2011, 11:00AM EDT
The newest version of SQL Server is just around the corner. Come see all of the newest features and best practices in SSIS. Learn about the new SSIS parameters, and all of the great improvements that will make your life easier.
Speaker: Mike Davis
Topic: Monitoring SQL Server with SSIS
Date: December 8, 2011, 11:00AM EDT
So your boss just called and said his database has been getting slower for weeks. Wouldn’t it be nice to have a trend analysis of all of the DMO’s to help determine the issue? Well now you can, with SSIS you can monitor all of your servers and databases. Mike will give you a complete BI solution that you can immediately start using to pull in all of the DMO data to a data warehouse and a cube. You will be able to catch trends that indicate problems before the boss calls.
Speaker: Adam Jorgensen
Topic: Building Tier 1 SSAS Solutions
Date: December 13, 2011, 11:00AM EDT
Come listen to Adam as he shares his Tech-ed presentation with the masses. This is focused on building highly scalable cubes and driving ultimate performance in the process.
Speaker: Adam Jorgensen
Topic: Operational Best Practices in SSAS
Date: December 15, 2011, 11:00AM EDT
Come learn from Adam as he explores all the best practices you should be taking in managing SSAS in your enterprise. Great for developers and administrators.
Speaker: Mark Brown
Topic: Beginning Development for Business – Part 3: Playing in closets
Date: December 20, 2011, 11:00AM EDT
In the third session, we will finish looking at controlling application flow using standard techniques. We will look at the most commonly used data types in business software development and how to keep our data around after the program closes. We will look at what flat files are and ways to create them, XML files, and provide a brief look at what databases are and how they can be used to store data. Finally, we will talk about how these will tie into our evolving application we started designing in our last session.
Speaker: Brian Knight
Topic: Dynamically Configuring Packages
Date: December 22, 2011, 11:00AM EDT
In this session, you’ll learn how to control your SSIS packages in a more powerful way by using expressions. The expression language is a bit of an elusive language that can be used to make any component in SSIS dynamic. Then, you’ll learn how to use configuration files to externalize the configuration of your package and lastly how to pass variables into a package.

Thursday, November 24, 2011

OUTPUT clause in SQL Server 2005/2008

The execution of a DML statement such as INSERT, UPDATE, or DELETE does not produce any results that indicate what was changed. Prior to SQL Server 2005, an extra round trip to the database was required to determine the changes. In SQL Server 2005/2008 the INSERT, UPDATE, and DELETE statements have been enhanced to support an OUTPUT clause, so a single round trip is all that is required to modify the database and determine what changed. You use the OUTPUT clause together with the inserted and deleted virtual tables, much as in a trigger. To capture the results, the OUTPUT clause is combined with an INTO expression that fills a table, typically a table variable (without INTO, the rows are simply returned to the caller). The following example creates a table, inserts some data, and finally deletes some records.

--create table and insert data
CREATE TABLE outputtbl (id INT IDENTITY(1,1), col1 VARCHAR(15))

INSERT INTO outputtbl VALUES('row1')
INSERT INTO outputtbl VALUES ('row2')
INSERT INTO outputtbl VALUES ('row5')
INSERT INTO outputtbl VALUES ('row6')
INSERT INTO outputtbl VALUES ('row7')
INSERT INTO outputtbl VALUES ('row8')
INSERT INTO outputtbl VALUES ('row9')
INSERT INTO outputtbl VALUES ('row10')

--make a table variable to hold the results of the OUTPUT clause
DECLARE @del AS TABLE (deletedId INT, deletedValue VARCHAR(15))
--delete two rows and return through
--the output clause
DELETE outputtbl
OUTPUT deleted.id, deleted.col1
INTO @del
WHERE id < 3
SELECT * FROM @del

deletedId   deletedValue
----------- ------------
1           row1
2           row2

(2 row(s) affected)
The previous example inserted the id and col1 values of the rows that were deleted into the table variable @del.
When used with an UPDATE command, OUTPUT produces both a DELETED and an INSERTED table. The DELETED table contains the values before the UPDATE command, and the INSERTED table has the values after the UPDATE command. An example follows that shows OUTPUT being used to capture the result of an UPDATE.
--update records, this populates
--both the inserted and deleted tables
DECLARE @changes AS TABLE
(id INT, oldValue VARCHAR(15), newValue VARCHAR(15))
UPDATE outputtbl
SET col1 = 'updated'
OUTPUT inserted.id, deleted.col1, inserted.col1
INTO @changes
WHERE id < 5
SELECT * FROM @changes

id          oldValue        newValue
----------- --------------- ---------------
3           row5            updated
4           row6            updated

(2 row(s) affected)
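OUTPUT also works with an INSERT statement, which makes it easy to capture identity values generated by the engine. A minimal sketch against the same outputtbl table (the @new table variable name is illustrative):

```sql
--capture the identity value assigned by the INSERT
DECLARE @new AS TABLE (insertedId INT, insertedValue VARCHAR(15))
INSERT INTO outputtbl (col1)
OUTPUT inserted.id, inserted.col1 INTO @new
VALUES ('row11')
SELECT * FROM @new
```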

TOP clause in SQL Server 2005/2008

TOP was introduced in SQL Server 7. Until SQL Server 2005, the TOP clause allowed the user to specify the number or percent of rows to be returned by a SELECT statement. In SQL Server 2005, the TOP clause can also be used with INSERT, UPDATE, and DELETE (in addition to SELECT), and the syntax is as follows: TOP (expression) [PERCENT]. Notice the parentheses around the expression; they are required when TOP is used with UPDATE, INSERT, and DELETE.

The following code shows some examples of using TOP.
--create a table and insert some data
CREATE TABLE #tmpTopSample (Name VARCHAR(150))
INSERT INTO #tmpTopSample (Name)  VALUES('Harpreet1'),('Harpreet2'),('Harpreet3'),('Harpreet4'),('Harpreet5')

TOP with SELECT statement: 
SELECT TOP(2) * FROM #tmpTopSample

The above query will return 'Harpreet1' and 'Harpreet2'

TOP with UPDATE statement:
UPDATE TOP(2) #tmpTopSample SET Name= 'hi'

The above query will update 'Harpreet1' and 'Harpreet2' to 'hi'

TOP with DELETE statement:
DELETE TOP(2)  #tmpTopSample

The above query will delete two rows. Note that TOP in a DELETE statement does not accept an ORDER BY, so which rows are removed is not guaranteed.
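If the rows to delete must be chosen deterministically, a common workaround (a sketch, not part of the original examples) is to put TOP and ORDER BY in a common table expression and delete through it:

```sql
--delete the first two rows by name, deterministically
;WITH OrderedRows AS
(
    SELECT TOP(2) Name
    FROM #tmpTopSample
    ORDER BY Name
)
DELETE FROM OrderedRows
```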

TOP with INSERT statement:

CREATE TABLE #tmpTopSample1 (Name VARCHAR(150))
INSERT INTO #tmpTopSample1 (Name)  VALUES('Harpreet1'),('Harpreet2')

INSERT TOP(2) INTO #tmpTopSample
SELECT * FROM #tmpTopSample1

The above query will insert the top two rows from the #tmpTopSample1 table into the #tmpTopSample table.

In SQL Server 2005/2008, the TOP clause also accepts an expression that defines the number of rows. See the example below for reference.

DECLARE @a INT,
    @b INT,
    @c INT;
--set values
SET @a = 10
SET @b = 5
SELECT @c = @a/@b

SELECT TOP(@c) * FROM #tmpTopSample

The above query will return 2 rows from #tmpTopSample table.

How many types of SSIS Log Providers are available for native SSIS logging?

There are five native log providers available; the most commonly used are the SQL Server and Text File log providers.

  1. SQL Server Log Provider
  2. Text file Log Provider
  3. Windows Event Log Provider
  4. XML File Log Provider
  5. SQL Server Profiler Log Provider

FILESTREAM data type in SQL Server 2008

FILESTREAM is a new data type in SQL Server 2008. To use FILESTREAM, a database needs to contain a FILESTREAM filegroup and a table with a varbinary(max) column that has the FILESTREAM attribute set. This causes the Database Engine to store all data for that column in the file system rather than in the database file. A FILESTREAM filegroup is a special folder that contains file system directories known as data containers. These data containers are the interface between Database Engine storage and file system storage; the files in them are maintained by the Database Engine.

Keynotes about FileStream data type:

  • FILESTREAM allows large binary data (documents, images, videos, etc.) to be stored directly in the Windows file system. This binary data remains an integral part of the database and maintains transactional consistency. 
  • FILESTREAM enables the storage of large binary data, traditionally managed by the database, to be stored outside the database as individual files that can be accessed using an NTFS streaming API. Using the NTFS streaming APIs allows efficient performance of common file operations while providing all of the rich database services, including security and backup.
  • When FILESTREAM data is modified through DML statements, the Database Engine takes care of maintaining the corresponding files in the data container.
  • FILESTREAM uses the Windows API for streaming the files so that they can be accessed faster. Also, instead of the SQL Server buffer cache, it uses the Windows system cache for caching the file data accessed.
  • When you use FileStream storage, keep the following things in mind:
    • If the table contains a FILESTREAM column, each row must have a unique row ID (a non-null uniqueidentifier column with the ROWGUIDCOL property).
    • FILESTREAM filegroups can be on compressed volumes.
    • FILESTREAM data containers cannot be nested.
  • When applications need to store large files (i.e., larger than 1 MB) without affecting database performance when reading the data, FILESTREAM provides a better solution.
  • FILESTREAM data is secured by granting permissions at the table or column level.
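The pieces above fit together roughly as follows. This is a sketch (the database name, file paths, and table and column names are illustrative) and assumes FILESTREAM has already been enabled at the instance level:

```sql
--database with a FILESTREAM filegroup (paths are illustrative)
CREATE DATABASE FileStreamDemo
ON PRIMARY (NAME = fsd_data, FILENAME = 'C:\Data\fsd_data.mdf'),
FILEGROUP FileStreamGroup CONTAINS FILESTREAM
    (NAME = fsd_fs, FILENAME = 'C:\Data\fsd_container')
LOG ON (NAME = fsd_log, FILENAME = 'C:\Data\fsd_log.ldf')
GO
USE FileStreamDemo
GO
--the unique ROWGUIDCOL column is required in a FILESTREAM table
CREATE TABLE Documents
(
    DocId UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    DocName VARCHAR(150),
    DocContent VARBINARY(MAX) FILESTREAM NULL
)
```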

Wednesday, November 23, 2011

Monitoring SQL Server with SSIS

12/8/2011 11:00:00 AM
Mike Davis

So your boss just called and said his database has been getting slower for weeks. Wouldn’t it be nice to have a trend analysis of all of the DMO’s to help determine the issue? Well now you can, with SSIS you can monitor all of your servers and databases. Mike will give you a complete BI solution that you can immediately start using to pull in all of the DMO data to a data warehouse and a cube. You will be able to catch trends that indicate problems before the boss calls.

Click here to register

Performance Tuning SSAS

Date: 12/1/2011 11:00:00 AM

Adam Jorgensen

This session will teach you how to identify and resolve performance problems in your SSAS environment along with some internals as to how things work to improve performance.

Click here to Register

Tuesday, November 22, 2011

Except and Intersect operators in SQL Server

EXCEPT and INTERSECT return distinct values by comparing the results of two queries.

EXCEPT returns the distinct values from the left query (the query on the left side of the operand) that do not exist in the right query (the query on the right side of the operand). INTERSECT returns the distinct values that are common to both queries. In Oracle, the equivalent of the EXCEPT operator is MINUS, while INTERSECT is the same in both products. In SQL Server 2005, EXCEPT and INTERSECT can also be used to simulate Oracle's MERGE command.


CREATE TABLE #tmpName1 (name VARCHAR(100))
CREATE TABLE #tmpName2 (name VARCHAR(100))

INSERT INTO #tmpName1 VALUES('Harpreet'),('Sundeep'),('Mithun')

INSERT INTO #tmpName2 VALUES('Harpreet'),('Shashi'),('Nitin')

Now let's see the results by running queries that use the EXCEPT and INTERSECT operators.

SELECT Name FROM #tmpName1 EXCEPT SELECT Name FROM #tmpName2

The above query returns 'Sundeep' and 'Mithun': the data that exists in the left query but not in the right query.

SELECT Name FROM #tmpName2 EXCEPT SELECT Name FROM #tmpName1

This query returns 'Shashi' and 'Nitin'. This time we have reversed the table order.

SELECT Name FROM #tmpName1 INTERSECT SELECT Name FROM #tmpName2

And this query returns 'Harpreet', the value present in both tables.

Things to remember when using these operators:
  • When the data types of comparable columns that are returned by the queries to the left and right of the EXCEPT or INTERSECT operands are character data types with different collations, the required comparison is performed according to the rules of collation precedence. If this conversion cannot be performed, the SQL Server Database Engine returns an error.
  • When you compare rows for determining distinct values, two NULL values are considered equal.
  • The column names of the result set that are returned by EXCEPT or INTERSECT are the same names as those returned by the query on the left side of the operand.
  • Column names or aliases in ORDER BY clauses must reference column names returned by the left-side query.
  • The nullability of any column in the result set returned by EXCEPT or INTERSECT is the same as the nullability of the corresponding column that is returned by the query on the left side of the operand.
  • If EXCEPT or INTERSECT is used together with other operators in an expression, it is evaluated in the context of the following precedence:
            a). Expressions in parentheses
            b). The INTERSECT operand
            c). EXCEPT and UNION evaluated from left to right based on their position in the expression

  • If EXCEPT or INTERSECT is used to compare more than two sets of queries, data type conversion is determined by comparing two queries at a time, and following the previously mentioned rules of expression evaluation.
  • EXCEPT and INTERSECT cannot be used in distributed partitioned view definitions, query notifications, or together with COMPUTE and COMPUTE BY clauses.
  • EXCEPT and INTERSECT may be used in distributed queries, but are only executed on the local server and not pushed to the linked server. Therefore, using EXCEPT and INTERSECT in distributed queries may affect performance.
  • Fast forward-only and static cursors are fully supported in the result set when they are used with an EXCEPT or INTERSECT operation. If a keyset-driven or dynamic cursor is used together with an EXCEPT or INTERSECT operation, the cursor of the result set of the operation is converted to a static cursor.
  • When an EXCEPT operation is displayed by using the Graphical Showplan feature in SQL Server Management Studio, the operation appears as a left anti semi join, and an INTERSECT operation appears as a left semi join.

Monday, November 21, 2011

Report Model in SQL Server Reporting Services


The report model is the metadata layer component that serves as the basis for building reports. Report designers, both end users and IT staff, will use the report model as the data source for the report instead of directly accessing a relational database or OLAP cube data source when designing reports.

The report model contains three key elements necessary for the ad hoc reporting process to function smoothly: bindings to the underlying data-source schema, a logical representation of the schema in the form of business entities that are familiar to end users, and the default information to be used according to the role that the entity plays in the report (primary entity, lookup entity, summary, and so on).

A report model definition is an XML file that follows the Semantic Model Definition Language (SMDL) specification. Therefore, report models are also referred to as semantic models, SMDL, or SMDL models.

Report Model Components
Report models primarily consist of the following three item types:

  • Entities are the equivalent of objects or tables in a database. Entities are composed of attributes and roles. 
  • Attributes are the equivalent of object properties or columns in a database table.
  • Roles are the equivalent of object references or relational foreign keys and join operations.

Thursday, November 17, 2011

Unique Index in SQL Server 2005/2008

An index that ensures the uniqueness of each value in the indexed column. If the index is a composite, the uniqueness is enforced across the columns as a whole, not on the individual columns. For example, if you were to create an index on the FirstName and LastName columns in a table, the names together must be unique, but the individual names can be duplicated.

Unique indexes are implemented in the following ways:

  • PRIMARY KEY or UNIQUE constraint : When you create a PRIMARY KEY constraint, a unique clustered index on the column or columns is automatically created if a clustered index on the table does not already exist and you do not specify a unique nonclustered index. The primary key column cannot allow NULL values. When you create a UNIQUE constraint, a unique nonclustered index is created to enforce a UNIQUE constraint by default. You can specify a unique clustered index if a clustered index on the table does not already exist.
  • Index independent of a constraint: Multiple unique nonclustered indexes can be defined on a table
  • Indexed view: To create an indexed view, a unique clustered index is defined on one or more view columns. The view is executed (materialized) and the result set is stored in the leaf level of the index in the same way table data is stored in a clustered index. 

Handling NULL Values
You cannot create a unique index, or UNIQUE constraint, if the key values are NULL in more than one row. Choose columns that are defined as NOT NULL when you select columns for a unique index or unique constraint.

For example, the composite unique index described above for the FirstName and LastName columns would be created as follows (the index name is illustrative):

CREATE UNIQUE NONCLUSTERED INDEX IX_Employee_Name
ON Employee (FirstName, LastName)

Indexed View in SQL Server

An indexed view is a view that has a unique clustered index created on it. Normally views do not exist on disk as rows. That changes for indexed views, which exist in the database as rows that realize the view. There can also be non-clustered indexes on the view so long as it has the unique clustered index.
Since the indexed view exists on disk there are two types of overhead added to the database:
  • The disk space taken up by the view
  • The cost of maintaining the view on disk as the base tables are modified.
Both of these costs can be substantial and limit the usefulness of indexed views to situations where there is a high ratio of data retrieval operations to data modifications. They are best for decision support and analytic applications where the frequency of data modification is low. Adding indexed views to an OLTP type application may slow it down more than it helps.

What are the requirements for Indexed views?
There are several requirements that you must take into consideration when using Indexed views.
  1. View definition must always return the same results from the same underlying data.
  2. Views cannot use non-deterministic functions.
  3. The first index on a View must be a clustered, UNIQUE index.
  4. If you use Group By, you must include the new COUNT_BIG(*) in the select list.
  5. View definition cannot contain the following
    1. TOP
    2. Text, ntext or image columns
    3. SUM on a nullable expression
    4. A derived table
    5. Rowset functions
    6. Another view
    7. UNION
    8. Subqueries, outer joins, self joins
    9. Full-text predicates like CONTAINS or FREETEXT
    10. ORDER BY
How do I create an Indexed View?
  1. Make sure that session properties are properly set.
  2. Create a deterministic view with new SCHEMABINDING syntax.
  3. Create unique Clustered Index.
  4. Optionally, create additional nonclustered indexes.

Follow the steps below to create an indexed view. This example is based on the AdventureWorks sample database.
1. Set the session options required to support indexed views

SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT,
    CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;

2. Create Indexed view with schemabinding

IF OBJECT_ID ('Sales.vOrders', 'view') IS NOT NULL
DROP VIEW Sales.vOrders ;
CREATE VIEW Sales.vOrders
WITH SCHEMABINDING
AS
    SELECT SUM(UnitPrice*OrderQty*(1.00-UnitPriceDiscount)) AS Revenue,
        OrderDate, ProductID, COUNT_BIG(*) AS COUNT
    FROM Sales.SalesOrderDetail AS od, Sales.SalesOrderHeader AS o
    WHERE od.SalesOrderID = o.SalesOrderID
    GROUP BY OrderDate, ProductID;

3. Create an index on the view. The first index on the view must be a unique clustered index.

    CREATE UNIQUE CLUSTERED INDEX IDX_V1
        ON Sales.vOrders (OrderDate, ProductID);

Indexed views can provide a very significant improvement in performance. While the improvement in query performance can be substantial, the costs can be large enough to offset any benefits. Maintaining the index on a view has performance costs: it adds overhead to every insert, update, or delete on a base table that participates in the view. This limits the applicability of indexed views to situations where data modifications are limited, preferably occurring in batches. Also, don't forget about the extra space consumed by the index.

Optimising Lookups with the Cache Transform

The enhanced Lookup transform works hand-in-hand with the Cache transform. The Cache transform is a brand new feature in SQL 2008, allowing you to cache the data used in the Lookup transform. The Lookup transform can then utilize this cached data to perform the lookup operation. Working with cached data will most likely be more efficient than querying the database.  In addition the Cache Connection Manager (another new feature) can persist the cache to a local file, allowing the cache to be shared between multiple SSIS packages and steps within a package.  A perfect example of where this will be useful is in the extract, transform and load (ETL) packages that we create to update a data warehouse. We typically build dimension tables in the data warehouse that use a surrogate key as a primary key (in lieu of the source key in the business application). As we are processing our fact tables we need to lookup the dimension surrogate keys based on the key in our source system and store the surrogate key in our fact tables.

To demonstrate how to use the new Lookup and Cache transforms, click here to see the video example by Allan Mitchell. I found this example very useful.

Wednesday, November 16, 2011

Business Intelligence (BI) and Data Warehousing Session

Join this exciting session on NAV Day at Decisions Fall 2011,
Wednesday, December 7th

Sanjay Soni, Senior Technology Evangelist - Business Intelligence, Microsoft  
Tom Totten, Business Intelligence Strategist, Columbus IT 

Learn how BI is being delivered to the masses (39,600+ users globally) at Microsoft using SharePoint, Excel, PerformancePoint, SQL Server and other Microsoft BI products. Learn about Microsoft's Consolidated Business Intelligence framework, a successful BI story, 6 years in the making. We will showcase and share technical details and best practices on how Microsoft IT was able to consolidate 100+ applications from 50+ business organizations in a single enterprise report catalog and a BI portal. Learn best practices and technical details on how to implement large scale self-service Business Intelligence systems in your organization. 

Click Here to Register

Tuesday, November 15, 2011

Intro to SSRS Expressions

Date & Time: 11/29/2011 11:00:00 AM

Presenter: Chris Albrektson

In this session, join Chris as he walks you through the basics of creating SSRS expressions. This will be an all-demo session where we cover an array of examples in a short amount of time. This session is designed for someone who needs a crash course in the SSRS expression language or for anyone who needs a refresher.

Click here to register.

Introduction to PowerPivot

Date & Time : 1/3/2012 11:00:00 AM

Presenter: Brian Knight

In this session, you’ll learn how to use PowerPivot to create self-service analytics in your organization. You’ll see how Office 2010 and SQL Server 2008 R2 will help your users answer their own questions and bring together datasets that may be heterogeneous like a SQL Server table and data feed from a website. Then, you’ll see how to take that data and create a striking report for your end users and collaborate with the data in SharePoint.

Click here to Register

Covering Indexes in SQL Server

A covering index is a form of a non-clustered composite index, which includes all of the columns referenced in the SELECT, JOIN, and WHERE clauses of a query. Because of this, the index contains the data the query is looking for and SQL Server does not have to look up the actual data in the table, reducing logical and/or physical I/O, and boosting performance.

Covering indexes are used to boost query performance because the index includes all the columns in the query. Non-clustered indexes include a row with an index key value for every row in a table. In addition, SQL Server can use these entries in the index’s leaf level to perform aggregate calculations. This means that SQL Server does not have to go to the actual table to perform the aggregate calculations, which can boost performance.

While covering indexes boost retrieval performance, they can slow down INSERT, DELETE, and UPDATE queries. This is because extra work is required for these procedures to maintain a covering index. This is generally not a problem, unless your database is subject to a very high level of INSERTs, DELETEs, and UPDATEs. You may have to experiment with covering indexes to see if they help more than they hurt performance before you implement them in your production systems.

For example, assume that you want to design an index to cover the following query.

Select [PostTime],[DatabaseUser],[Event],[Schema] from DatabaseLog

To cover the query, each column must be present in the index. The following statement creates a nonclustered index with included columns to cover the query (the index name is illustrative).

CREATE NONCLUSTERED INDEX IX_DatabaseLog_Covering
ON DatabaseLog ([PostTime])
INCLUDE ([DatabaseUser],[Event],[Schema])

Case 1: Without Any Indexes
Where there are no indexes on the table, there is no other way of returning data except to perform a table scan. Your query will run through the entire table, row by row, to fetch the record(s) that matches your query conditions.
Case 2: With Covering Index
Our next scenario includes a covering index. As a covering index includes all the information for the query, SQL Server will retrieve the data faster and with less resource utilization. In addition, with a covering index, you won’t get as complex an Execution Plan.

With a covering index, the execution time of the SELECT query has been reduced. When you compare this result to not using any indexes, you can see that it has an improvement. CPU cost and I/O Cost also improved, which means that after the covering index was introduced, the query uses fewer resources for SELECT queries.
As the above statistics suggest, covering indexes offer both advantages and disadvantages. It is your job as the SQL Developer to determine whether the advantages outweigh the disadvantages, and whether implementing a covering index is best for your specific needs.

Monday, November 14, 2011

Filtered Index in SQL Server 2008

Filtered Index is a new feature in SQL Server 2008. A filtered index indexes only a portion of the rows in a table; it applies a filter to the index, which improves query performance, reduces index maintenance costs, and reduces index storage costs compared with full-table indexes.

When we see an Index created with some WHERE clause then that is actually a FILTERED INDEX.

For example (the index name is illustrative):

CREATE NONCLUSTERED INDEX FI_DatabaseLog_Event
ON DatabaseLog(Event)
WHERE Event = 'CREATE_TABLE'

Points to remember when creating Filtered Index:
  • They can be created only as Nonclustered Index
  • They cannot be created on views.
  • They cannot be created on full-text Indexes.
Execute the queries below and see the performance impact of the filtered index in the execution plan.

Query1: Select * from DatabaseLog where Event='CREATE_TABLE'
Query2: Select * from DatabaseLog where Event='ALTER_TABLE'

Query1 uses the filtered index and its query cost is 29%. Query2 does not use the filtered index and its cost is 71%. See the execution plan below for reference.

A filtered index is an optimized nonclustered index and one of the great performance improvements in SQL Server 2008, reducing both index storage and index maintenance costs.
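Another common use of a filtered index (a sketch; the table and index names are illustrative) is to exclude NULLs, so the index covers only the rows that actually carry a value:

```sql
--index only the rows where ShipDate has a value
CREATE NONCLUSTERED INDEX FI_Orders_ShipDate
ON Orders (ShipDate)
WHERE ShipDate IS NOT NULL
```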

Sunday, November 13, 2011

Sign Up for the Free SSIS Expo Next Friday

SSWUG’s virtual expo will review ways to build dynamic packages and develop data integration solutions with SQL Server Integration Services (SSIS), which is an ETL (extraction, transformation and load) tool that continues to grow in popularity.

Through our in-depth sessions with some of the leading SQL Server and Business Intelligence experts in the IT field, you will see many demonstrations and examples on building your own real world applications using this SQL Server subsystem.

By the end of our event, you should have the tools and understanding needed to develop high performance data integration solutions using SSIS.

Thursday, November 10, 2011

Index Types in SQL Server 2008

The following index types are available in SQL Server 2008.

Clustered:
A clustered index sorts and stores the data rows of the table or view in order based on the clustered index key. The clustered index is implemented as a B-tree index structure that supports fast retrieval of the rows, based on their clustered index key values.

Nonclustered:
A nonclustered index can be defined on a table or view with a clustered index or on a heap. Each index row in the nonclustered index contains the nonclustered key value and a row locator. This locator points to the data row in the clustered index or heap having the key value. The rows in the index are stored in the order of the index key values, but the data rows are not guaranteed to be in any particular order unless a clustered index is created on the table.

Unique:
A unique index ensures that the index key contains no duplicate values and therefore every row in the table or view is in some way unique. Both clustered and nonclustered indexes can be unique.
Index with included columns:
A nonclustered index that is extended to include nonkey columns in addition to the key columns.

Full-text:
A special type of token-based functional index that is built and maintained by the Microsoft Full-Text Engine for SQL Server. It provides efficient support for sophisticated word searches in character string data.

Spatial:
A spatial index provides the ability to perform certain operations more efficiently on spatial objects (spatial data) in a column of the geometry data type. The spatial index reduces the number of objects on which relatively costly spatial operations need to be applied.

Filtered:
An optimized nonclustered index, especially suited to cover queries that select from a well-defined subset of data. It uses a filter predicate to index a portion of rows in the table. A well-designed filtered index can improve query performance, reduce index maintenance costs, and reduce index storage costs compared with full-table indexes.

XML:
A shredded, and persisted, representation of the XML binary large objects (BLOBs) in the xml data type column.
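As a sketch (the table and index names are illustrative), an XML column gets a primary XML index first; secondary XML indexes can then be layered on top of it:

```sql
--a clustered primary key is required before a primary XML index
CREATE TABLE XmlDocs
(
    DocId INT IDENTITY PRIMARY KEY,
    DocData XML
)
CREATE PRIMARY XML INDEX PXML_XmlDocs_DocData
ON XmlDocs (DocData)
--secondary XML index that speeds up path-based queries
CREATE XML INDEX SXML_XmlDocs_Path
ON XmlDocs (DocData)
USING XML INDEX PXML_XmlDocs_DocData FOR PATH
```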

Compound Assignment Operators in SQL Server 2008

SQL Server 2008 introduces the concept of compound assignment operators, which have been available in many other programming languages for quite some time. A compound assignment operator operates on a variable and assigns the result in the same expression.

The new operators are:

  •  += (plus equals)
  •  -=  (minus equals)
  • *=  (multiplication equals)
  •  /=  (division equals)
  • %=  (modulo equals)

You can use these operators wherever assignment is normally allowed—for example, in the SET clause of an UPDATE statement or in a SET statement that assigns values to variables. The following code example demonstrates the use of the += operator:

DECLARE @price AS MONEY = 10.00;
SET @price += 2.00;
SELECT @price;

This code sets the variable @price to its current value, 10.00, plus 2.00, resulting in 12.00.
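The remaining operators behave the same way; a quick sketch:

```sql
DECLARE @qty AS INT = 100;
SET @qty -= 30;   -- 70
SET @qty *= 2;    -- 140
SET @qty /= 7;    -- 20
SET @qty %= 6;    -- 2
SELECT @qty;
```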