Pages

Thursday, December 29, 2011

Free January Training Webinar Series



Pragmatic Works, SQL Server Central and Microsoft would like to invite you to another month of Free SQL Server Training delivered by the people who write the books!
January Speakers and Topics
Speaker: Brian Knight
Topic: Introduction to PowerPivot
Date: January 3, 2012, 11:00AM EDT
In this session, you'll learn how to use PowerPivot to create self-service analytics in your organization. You'll see how Office 2010 and SQL Server 2008 R2 help your users answer their own questions and bring together heterogeneous data sets, such as a SQL Server table and a data feed from a website. Then, you'll see how to take that data, create a striking report for your end users, and collaborate on the data in SharePoint.
Speaker: Mark Stacey
Topic: Analysis Services for the DBA
Date: January 5, 2012, 11:00AM EDT
Many DBAs have heard the term Analysis Services, but don't really know what it's for or who would use it. In this session, Mark will explain the use cases for Analysis Services, illustrate how it improves performance for large data sets and eases report writing, and will go through a practical example of building a first Analysis Services solution, and managing Analysis Services in a production environment.
Speaker: Arie Jones
Topic: Master Data Services in SQL Server 2012
Date: January 10, 2012, 11:00AM EDT
Most DBAs at one time or another have been taught the importance of master data management (MDM), a set of processes and tools for keeping your non-transactional data in a consistent state. However, in today's fast-paced environment and with tightening budgets, most DBAs lack the resources to implement it properly. With the upcoming release of SQL Server 2012, Microsoft has taken its second swipe at providing a tool, Master Data Services (MDS), to aid the DBA in this endeavor. In this session, you will learn how Master Data Services is implemented in SQL Server 2012, including setting up models, working through the Excel plug-in, and utilizing DQS and business rules to help with that dirty data. This session will definitely be an eye opener for those who were on the fence with the first release and for those coming to the platform with a fresh set of eyes!
Speaker: K. Brian Kelley
Topic: Using Powershell to Check Your SQL Server Security
Date: January 12, 2012, 11:00AM EDT
Monitoring the security on your SQL Servers can be a tedious chore. Tune in to learn how to use Powershell to help automate this work. We'll look at how to use scripts to capture basic details, to do comparisons from scan to scan, as well as how to automate remediation of issues discovered.
Speaker: Hope Foley
Topic: Super-size Your SSIS Breakfast Sandwich: Performance Tuning 2012 SSIS Data Loads
Date: January 17, 2012, 11:00AM EDT
Importing data is as common to SQL Server as bacon is to a savory breakfast sandwich. Just as most of us love to eat those breakfast sandwiches, SQL Server Integration Services (SSIS) loves to eat data in an Import/Export kind of way. Although a green chain of boxes in SSIS can set off the "Happy Dance" of any DBA or SQL Developer, we must all still consider the performance of those tasks. In this session, we will go through performance tuning in SSIS 2012 after a successful SSIS package has been developed. When we are finished with this session, not only will you still have the mystique of green, successfully executed SSIS import and export tasks, but the turbo boosters will be turned on as well.
Speaker: Mark Brown
Topic: Beginning Development for Business – Part 4: Running in the Halls
Date: January 24, 2012, 11:00AM EDT
In this session we take a closer look at what databases are and what roles they play in business. We will talk about database design and how it can differ depending on its primary use, create the database for our application, look at the tools used to create and manage databases, and go over multiple methods of using the database from within an application.
Speaker: Eric Hanson
Topic: Performance Tuning with Column Store Indexes
Date: January 26, 2012, 11:00AM EDT
SQL Server columnstore indexes and the vector-based batch mode of query execution are new in the SQL Server 2012 release, and can speed up data warehouse queries by 100X or more. In their sweet spot, columnstores give jaw-dropping performance, but there are limitations in the initial release where performance may not improve as much as the technology allows, unless you tune your application. This talk presents how to work around the main limitations in the columnstore feature to get consistently great performance. We’ll address how to compensate for query performance limitations for outer joins and union operations, data type restrictions, and the fact that columnstore indexes make the data read-only (except for the ability to switch in partitions). We’ll also discuss how to capitalize on the new segment elimination feature that can skip million-row chunks of a table with a constant-time check.
Speaker: Mike Davis
Topic: Creating KPIs, Actions, and Calculations in SSAS Cubes
Date: January 31, 2012, 11:00AM EDT
Mike demonstrates different methods for creating KPIs, actions, and common calculations. After this session you will have a good understanding of how to build all of these features in your SSAS cube. Mike will show dynamic KPIs driven from a table, opening a URL with an action, and some common MDX calculations.

Wednesday, December 14, 2011

Download Whitepaper - Migrating DTS Packages to Integration Services


A SQL Server Technical Article by Brian Knight & Dustin Ryan
SQL Server Integration Services (SSIS) brings a revolutionary concept of enterprise-class ETL to the masses. The engine is robust enough to handle hundreds of millions of rows with ease, but is simple enough to let both developers and DBAs engineer an ETL process. Learn about the benefits of migrating your SQL Server 2000 Data Transformation Services (DTS) packages to Integration Services by using two proven methods. Download this whitepaper by Brian Knight and see how you can run and manage your current DTS packages inside of the SQL Server 2005 and 2008 management tools.



Click Here to download

Monday, December 12, 2011

DATA WAREHOUSE ARCHITECTURE

A data warehouse is normally designed as a three-tier architecture.

  • The lowest tier is a relational database into which data from various source systems is loaded after the data cleansing and transformation steps performed by the ETL tools. In this relational database the data is prepared so that all the relevant information can be easily loaded into the data warehouse. This lower-tier database is also referred to as a staging database.
  • The middle tier is the multidimensional OLAP cube, where data from the relational database is populated in the form of facts and measures. The data does not undergo much transformation between the lower and middle tiers; it is mainly organized into dimensions and measures.
  • The topmost tier is the front end for the analysts and executives who view the data from the cube. It contains tools that aid in analysis and data mining.

OLAP operations

ROLL UP
The roll-up operation aggregates data upward in a concept hierarchy, from a lower level to a higher level. For example, in a time dimension, data aggregated by week is rolled up to the next level of the hierarchy, month, and so on.
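A rough relational analogy in T-SQL (the table and column names below are hypothetical; GROUP BY ROLLUP is the SQL Server 2008 syntax): ROLLUP produces the same kind of progressively higher-level totals.

--Hypothetical Sales table with a Year > Month > Week time hierarchy.
--ROLLUP aggregates from the lowest level up through each higher level.
SELECT [Year], [Month], [Week], SUM(SalesAmount) AS TotalSales
FROM dbo.Sales
GROUP BY ROLLUP ([Year], [Month], [Week])
--Rows with NULL in [Week] are month-level totals, rows with NULL in [Month]
--are year-level totals, and the all-NULL row is the grand total.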

DRILL DOWN
Drill down is the opposite of roll up: it moves from a higher level of the concept hierarchy to a lower, more detailed level. In the time dimension, data grouped by 'week' is drilled down to the next lower level, 'day'.

SLICE AND DICE
The slice operation selects one particular dimension of the n-dimensional cube, producing a sub-cube, whereas the dice operation selects two or more dimensions to produce a sub-cube.
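As a rough relational analogy (hypothetical star-schema table names), a slice fixes a single dimension member while a dice restricts two or more dimensions:

--"Slice": fix one dimension member; all other dimensions remain.
SELECT d.[Year], p.Category, SUM(f.SalesAmount) AS TotalSales
FROM dbo.FactSales f
JOIN dbo.DimDate d ON d.DateKey = f.DateKey
JOIN dbo.DimProduct p ON p.ProductKey = f.ProductKey
JOIN dbo.DimStore s ON s.StoreKey = f.StoreKey
WHERE s.StoreName = 'Store 42'          --one dimension filtered
GROUP BY d.[Year], p.Category

--"Dice": restrict two or more dimensions to sets of members.
SELECT d.[Year], s.StoreName, SUM(f.SalesAmount) AS TotalSales
FROM dbo.FactSales f
JOIN dbo.DimDate d ON d.DateKey = f.DateKey
JOIN dbo.DimStore s ON s.StoreKey = f.StoreKey
WHERE d.[Year] IN (2010, 2011)          --first dimension restricted
  AND s.Region = 'West'                 --second dimension restricted
GROUP BY d.[Year], s.StoreName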

PIVOT
Pivot, also referred to as rotate, rotates the axes of the cube to provide a different presentation of the data in the n-dimensional cube.

Download Whitepaper - Managing and Configuring SSIS



A SQL Server Technical Article by Brian Knight, Devin Knight, & Mike Davis
Administering SSIS can be a challenging task for a DBA or developer who is not familiar with the SSIS environment and some of the new hurdles it can present. This whitepaper discusses how to configure packages so that you can have zero-touch deployments. We'll take a deep-dive look into a notification and logging framework that sends alerts if a problem occurs or if capacity is running out. How to deploy and schedule SSIS packages will also be covered.
Click Here to Download.

Wednesday, December 7, 2011

Working with SSIS Expressions Whitepaper by Brian Knight!


Whether you've been involved with SQL Server Integration Services (SSIS) for years or are just learning SSIS, you've probably found the learning curve of the SSIS expression language quite challenging. The expression language is a core component of SSIS that helps users make any component in an SSIS package dynamic and also helps the package make workflow decisions. This whitepaper walks you through a crash course in the SSIS expression language across all components of SSIS. After you complete this whitepaper, you'll know how to use expressions in the control flow, data flow and in connections to make your package change at runtime and maximize the full power of SSIS.


Download the Whitepaper Now!



What you'll learn:

  • Learn the Language: Mathematical Expression Functions, String Expression Functions, Date and Time Expression Functions
  • Task Expressions
  • Precedence Constraint Expressions
  • Connection Manager Expressions
  • And much more!


Download the Whitepaper Now!

Tuesday, December 6, 2011

What is a Fact Table?

Fact Table
The central table in a star schema is called the fact table. A fact table typically has two types of columns: those that contain facts and those that are foreign keys to dimension tables. The primary key of a fact table is usually a composite key made up of all of its foreign keys.
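As a minimal sketch (illustrative names only, and assuming the referenced dimension tables already exist), a sales fact table might look like the following, with the composite primary key formed entirely from the dimension foreign keys:

CREATE TABLE dbo.FactSales
(
    DateKey     INT            NOT NULL,  --foreign key to the date dimension
    StoreKey    INT            NOT NULL,  --foreign key to the store dimension
    ProductKey  INT            NOT NULL,  --foreign key to the product dimension
    SalesAmount DECIMAL(18, 2) NOT NULL,  --additive measure
    SalesQty    INT            NOT NULL,  --additive measure
    CONSTRAINT PK_FactSales PRIMARY KEY (DateKey, StoreKey, ProductKey),
    CONSTRAINT FK_FactSales_Date FOREIGN KEY (DateKey) REFERENCES dbo.DimDate (DateKey),
    CONSTRAINT FK_FactSales_Store FOREIGN KEY (StoreKey) REFERENCES dbo.DimStore (StoreKey),
    CONSTRAINT FK_FactSales_Product FOREIGN KEY (ProductKey) REFERENCES dbo.DimProduct (ProductKey)
)
GO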


Measure Types

  • Additive - Measures that can be added across all dimensions.
  • Non Additive - Measures that cannot be added across all dimensions.
  • Semi Additive - Measures that can be added across some dimensions but not others.

A fact table might contain either detail level facts or facts that have been aggregated (fact tables that contain aggregated facts are often instead called summary tables).


In the real world, it is possible to have a fact table that contains no measures or facts. Such tables are called factless fact tables.


Steps in designing Fact Table

  • Identify a business process for analysis.
  • Identify measures or facts.
  • Identify dimensions for facts.
  • List the columns that describe each dimension.
  • Determine the lowest level of summary in a fact table.

Monday, December 5, 2011

What is a Dimensional Data Model?

Dimensional data model is most often used in data warehousing systems. This is different from the 3rd normal form, commonly used for transactional (OLTP) type systems. As you can imagine, the same data would then be stored differently in a dimensional model than in a 3rd normal form model.

To understand dimensional data modeling, let's define some of the terms commonly used in this type of modelling:

Dimension: A category of information. For example, the time dimension.

Attribute: A unique level within a dimension. For example, Month is an attribute in the Time Dimension.

Hierarchy: The specification of levels that represents relationship between different attributes within a dimension. For example, one possible hierarchy in the Time dimension is Year → Quarter → Month → Day.

Fact Table: A fact table is a table that contains the measures of interest. For example, sales amount would be such a measure. This measure is stored in the fact table with the appropriate granularity. For example, it can be sales amount by store by day. In this case, the fact table would contain three columns: A date column, a store column, and a sales amount column.

Lookup Table: The lookup table provides the detailed information about the attributes. For example, the lookup table for the Quarter attribute would include a list of all of the quarters available in the data warehouse. Each row (each quarter) may have several fields, one for the unique ID that identifies the quarter, and one or more additional fields that specifies how that particular quarter is represented on a report (for example, first quarter of 2001 may be represented as "Q1 2001" or "2001 Q1").

A dimensional model includes fact tables and lookup tables. Fact tables connect to one or more lookup tables, but fact tables do not have direct relationships to one another. Dimensions and hierarchies are represented by lookup tables. Attributes are the non-key columns in the lookup tables.
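To make this concrete, here is a minimal sketch (hypothetical names) of the "sales amount by store by day" example above: two lookup tables, with the Time hierarchy carried as attribute columns, and a fact table that references each lookup table while the lookup tables have no direct relationships to one another.

CREATE TABLE dbo.DimDate
(
    DateKey   INT      NOT NULL PRIMARY KEY,
    [Date]    DATE     NOT NULL,
    [Day]     TINYINT  NOT NULL,
    [Month]   TINYINT  NOT NULL,
    [Quarter] CHAR(7)  NOT NULL,   --display form, e.g. 'Q1 2001'
    [Year]    SMALLINT NOT NULL
)
GO

CREATE TABLE dbo.DimStore
(
    StoreKey  INT          NOT NULL PRIMARY KEY,
    StoreName VARCHAR(100) NOT NULL,
    Region    VARCHAR(50)  NOT NULL
)
GO

--the fact table references each lookup table; the lookup tables do not reference each other
CREATE TABLE dbo.FactDailyStoreSales
(
    DateKey     INT            NOT NULL REFERENCES dbo.DimDate (DateKey),
    StoreKey    INT            NOT NULL REFERENCES dbo.DimStore (StoreKey),
    SalesAmount DECIMAL(18, 2) NOT NULL
)
GO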

In designing data models for data warehouses / data marts, the most commonly used schema types are Star Schema and Snowflake Schema.

Wednesday, November 30, 2011

Free Training Webinar Series


December Speakers and Topics
Speaker: Adam Jorgensen
Topic: Performance Tuning SSAS
Date: December 1, 2011, 11:00AM EDT
This session will teach you how to identify and resolve performance problems in your SSAS environment along with some internals as to how things work to improve performance.
Speaker: Mike Davis
Topic: New SSIS Features in Denali
Date: December 6, 2011, 11:00AM EDT
The newest version of SQL Server is just around the corner. Come see all of the newest features and best practices in SSIS. Learn about the new SSIS parameters, and all of the great improvements that will make your life easier.
Speaker: Mike Davis
Topic: Monitoring SQL Server with SSIS
Date: December 8, 2011, 11:00AM EDT
So your boss just called and said his database has been getting slower for weeks. Wouldn't it be nice to have a trend analysis of all of the DMOs to help determine the issue? Well, now you can: with SSIS you can monitor all of your servers and databases. Mike will give you a complete BI solution that you can immediately start using to pull all of the DMO data into a data warehouse and a cube. You will be able to catch trends that indicate problems before the boss calls.
Speaker: Adam Jorgensen
Topic: Building Tier 1 SSAS Solutions
Date: December 13, 2011, 11:00AM EDT
Come listen to Adam as he shares his Tech-ed presentation with the masses. This is focused on building highly scalable cubes and driving ultimate performance in the process.
Speaker: Adam Jorgensen
Topic: Operational Best Practices in SSAS
Date: December 15, 2011, 11:00AM EDT
Come learn from Adam as he explores all the best practices you should be taking in managing SSAS in your enterprise. Great for developers and administrators.
Speaker: Mark Brown
Topic: Beginning Development for Business – Part 3: Playing in closets
Date: December 20, 2011, 11:00AM EDT
In the third session, we will finish looking at controlling application flow using standard techniques. We will look at the most commonly used data types in business software development and how to keep our data around after the program closes. We will look at what flat files are and ways to create them, XML files, and provide a brief look at what databases are and how they can be used to store data. Finally, we will talk about how these will tie into our evolving application we started designing in our last session.
Speaker: Brian Knight
Topic: Dynamically Configuring Packages
Date: December 22, 2011, 11:00AM EDT
In this session, you’ll learn how to control your SSIS packages in a more powerful way by using expressions. The expression language is a bit of an elusive language that can be used to make any component in SSIS dynamic. Then, you’ll learn how to use configuration files to externalize the configuration of your package and lastly how to pass variables into a package.

Thursday, November 24, 2011

OUTPUT clause in SQL Server 2005/2008

The execution of a DML statement such as INSERT, UPDATE, or DELETE does not, by itself, produce any results that indicate what was changed. Prior to SQL Server 2005, an extra round trip to the database was required to determine the changes. In SQL Server 2005/2008 the INSERT, UPDATE, and DELETE statements have been enhanced to support an OUTPUT clause, so a single round trip is all that is required to modify the database and determine what changed. You use the OUTPUT clause together with the inserted and deleted virtual tables, much as in a trigger. The OUTPUT clause can be combined with an INTO clause to capture the results in a table, typically a table variable. The following example creates a table, inserts some data, and finally deletes some records.

--create table and insert data
CREATE TABLE outputtbl 
(id INT IDENTITY, col1 VARCHAR(15))
go

INSERT INTO outputtbl VALUES('row1')
INSERT INTO outputtbl VALUES ('row2')
INSERT INTO outputtbl VALUES ('row5')
INSERT INTO outputtbl VALUES ('row6')
INSERT INTO outputtbl VALUES ('row7')
INSERT INTO outputtbl VALUES ('row8')
INSERT INTO outputtbl VALUES ('row9')
INSERT INTO outputtbl VALUES ('row10')

--make a table variable to hold the results of the OUTPUT clause
DECLARE @del AS TABLE (deletedId INT, deletedValue VARCHAR(15))
--delete two rows and return through
--the output clause
DELETE outputtbl
OUTPUT DELETED.id, DELETED.col1 INTO @del
WHERE id < 3
SELECT * FROM @del
GO
deletedId  deletedValue
---------  ------------
1          row1
2          row2

(2 row(s) affected)
The previous example inserted the id and col1 values of the rows that were deleted into the table variable @del.
When used with an UPDATE command, OUTPUT exposes both a DELETED and an INSERTED table. The DELETED table contains the values before the UPDATE command, and the INSERTED table contains the values after the UPDATE command. An example follows that shows OUTPUT being used to capture the result of an UPDATE.
--update records, this populates
--both the inserted and deleted tables
DECLARE @changes TABLE 
(id INT, oldValue VARCHAR(15), newValue VARCHAR(15))
UPDATE outputtbl
SET col1 = 'updated'
OUTPUT inserted.id, deleted.col1, inserted.col1
INTO @changes
WHERE id < 5
SELECT * FROM @changes
GO
id          oldValue        newValue
----------  --------------  --------------
3           row5            updated
4           row6            updated

(2 row(s) affected)
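OUTPUT works with INSERT as well; a small sketch in the same style, capturing the identity value assigned to a newly inserted row in outputtbl:

--insert a row and capture the generated identity value
DECLARE @ins AS TABLE (insertedId INT, insertedValue VARCHAR(15))
INSERT INTO outputtbl (col1)
OUTPUT INSERTED.id, INSERTED.col1 INTO @ins
VALUES ('row11')
SELECT * FROM @ins
GO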




TOP clause in SQL Server 2005/2008

TOP was introduced in SQL Server 7.0. Prior to SQL Server 2005, the TOP clause only allowed the user to specify the number or percent of rows to be returned by a SELECT statement. In SQL Server 2005, the TOP clause can also be used with INSERT, UPDATE, and DELETE (in addition to SELECT), and the syntax is as follows: TOP (expression) [PERCENT]. Notice the parentheses around the expression; they are required when TOP is used with UPDATE, INSERT, and DELETE.


The following code shows some examples of using TOP.
--create a table and insert some data
CREATE TABLE #tmpTopSample (Name VARCHAR(150))
INSERT INTO #tmpTopSample (Name)  VALUES('Harpreet1'),('Harpreet2'),('Harpreet3'),('Harpreet4'),('Harpreet5')

TOP with SELECT statement: 
SELECT TOP(2) * FROM #tmpTopSample

The above query will return 'Harpreet1' and 'Harpreet2' (note that without an ORDER BY clause, TOP does not guarantee which rows are returned).

TOP with UPDATE statement:
UPDATE TOP(2) #tmpTopSample SET Name= 'hi'

The above query will update 'Harpreet1' and 'Harpreet2' to 'hi'

TOP with DELETE statement:
DELETE TOP(2)  #tmpTopSample


The above query will delete top two rows.


TOP with INSERT statement:

CREATE TABLE #tmpTopSample1 (Name VARCHAR(150))
INSERT INTO #tmpTopSample1 (Name) VALUES('Harpreet1'),('Harpreet2')



INSERT TOP(2) INTO #tmpTopSample (Name)
SELECT * FROM #tmpTopSample1

The above query will insert the top two rows from the #tmpTopSample1 table into the #tmpTopSample table.

In SQL Server 2005/2008 we can also use the TOP clause with an expression that defines the number of rows. See the example below for reference.

DECLARE @a INT,
        @b INT,
        @c INT;
--set values
SET @a = 10
SET @b = 5
SELECT @c = @a / @b


SELECT TOP(@c) * FROM #tmpTopSample


The above query will return 2 rows from the #tmpTopSample table.
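The PERCENT option shown in the syntax above works the same way; for example (the exact rows returned depend on the current contents of the table, since there is no ORDER BY):

--return 40 percent of the rows in the table (rounded up to a whole number of rows)
SELECT TOP(40) PERCENT * FROM #tmpTopSample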

How many types of SSIS log providers are available for native SSIS logging?

There are five native log providers available, but the most commonly used are the SQL Server and Text File log providers.

  1. SQL Server Log Provider
  2. Text file Log Provider
  3. Windows Event Log Provider
  4. XML File Log Provider
  5. SQL Server Profiler Log Provider

FILESTREAM data type in SQL Server 2008

Introduction:
FILESTREAM is a new feature in SQL Server 2008. To use FILESTREAM, a database needs to contain a FILESTREAM filegroup and a table that contains a varbinary(max) column with the FILESTREAM attribute set. This causes the Database Engine to store all data for that column in the file system rather than in the database file. A FILESTREAM filegroup is a special filegroup that contains file system directories known as data containers; these data containers are the interface between Database Engine storage and file system storage, and the files within them are maintained by the Database Engine. A minimal setup sketch appears after the keynotes below.


Keynotes about FileStream data type:

  • FILESTREAM allows large binary data (documents, images, videos, etc.) to be stored directly in the Windows file system. This binary data remains an integral part of the database and maintains transactional consistency. 
  • FILESTREAM enables the storage of large binary data, traditionally managed by the database, to be stored outside the database as individual files that can be accessed using an NTFS streaming API. Using the NTFS streaming APIs allows efficient performance of common file operations while providing all of the rich database services, including security and backup.
  • A data container is created which will take care of DML statements.
  • FILESTREAM uses the Windows streaming APIs so that files can be accessed quickly, and it uses the Windows system cache rather than the SQL Server buffer cache for caching the files accessed.
  • When you use FileStream storage, keep the following things in mind:
    • If the table contains a FILESTREAM column, each row must have a unique row id.
    • FILESTREAM filegroups can be on compressed volumes.
    • FILESTREAM data containers cannot be nested.
  • When an application needs to store large files (larger than about 1 MB) without affecting database read performance, FILESTREAM provides a better solution.
  • FILESTREAM data is secured by granting permissions at the table or column level.
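A minimal sketch of this setup (paths and names are illustrative, and FILESTREAM must already be enabled at the instance level): the database gets a FILESTREAM filegroup whose FILENAME is the data container folder, and the table pairs a varbinary(max) FILESTREAM column with the required unique ROWGUIDCOL.

--database with a FILESTREAM filegroup; the second FILENAME is the data container folder
CREATE DATABASE FileStreamDemo
ON PRIMARY
    (NAME = FileStreamDemo_data, FILENAME = 'C:\Data\FileStreamDemo.mdf'),
FILEGROUP FileStreamGroup CONTAINS FILESTREAM
    (NAME = FileStreamDemo_fs, FILENAME = 'C:\Data\FileStreamContainer')
LOG ON
    (NAME = FileStreamDemo_log, FILENAME = 'C:\Data\FileStreamDemo_log.ldf')
GO

--table with a FILESTREAM column; a unique ROWGUIDCOL is required on the table
CREATE TABLE dbo.Documents
(
    DocumentId   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    DocumentName VARCHAR(260) NOT NULL,
    DocumentData VARBINARY(MAX) FILESTREAM NULL
)
GO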

Wednesday, November 23, 2011

Monitoring SQL Server with SSIS


12/8/2011 11:00:00 AM
Mike Davis

So your boss just called and said his database has been getting slower for weeks. Wouldn't it be nice to have a trend analysis of all of the DMOs to help determine the issue? Well, now you can: with SSIS you can monitor all of your servers and databases. Mike will give you a complete BI solution that you can immediately start using to pull all of the DMO data into a data warehouse and a cube. You will be able to catch trends that indicate problems before the boss calls.


Click here to register

Performance Tuning SSAS


Date: 12/1/2011 11:00:00 AM

Adam Jorgensen

This session will teach you how to identify and resolve performance problems in your SSAS environment along with some internals as to how things work to improve performance.

Click here to Register

Tuesday, November 22, 2011

Except and Intersect operators in SQL Server

Introduction:
EXCEPT and INTERSECT return distinct values by comparing the results of two queries.


EXCEPT returns the distinct values from the left query (the query on the left side of the operand) that do not exist in the right query (the query on the right side of the operand). INTERSECT returns the distinct values that are common to both queries. In Oracle, the equivalent of the EXCEPT operator is MINUS, while INTERSECT goes by the same name. Using EXCEPT and INTERSECT, one can also simulate part of the behavior of Oracle's MERGE command, as sketched after the examples below.


Example:

CREATE TABLE #tmpName1 (name VARCHAR(100))
GO
CREATE TABLE #tmpName2 (name VARCHAR(100))
GO


INSERT INTO #tmpName1 VALUES('Harpreet'),('Sundeep'),('Mithun')
GO


INSERT INTO #tmpName2 VALUES('Harpreet'),('Shashi'),('Nitin')
GO

Now let's see the results of running queries that use the EXCEPT and INTERSECT operators.


SELECT Name FROM #tmpName1 EXCEPT SELECT Name FROM #tmpName2
GO


The above query will return the following values: the data that exists in the left query but not in the right query.
Mithun
Sundeep

SELECT Name FROM #tmpName2 EXCEPT SELECT Name FROM #tmpName1
GO


This query will return the following values; this time we have reversed the table order.
Nitin
Shashi

SELECT Name FROM #tmpName2 INTERSECT SELECT Name FROM #tmpName1
GO
And this query returns the values common to both tables.
Harpreet
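Building on the MERGE point above, a rough sketch using the same temporary tables: EXCEPT identifies the rows missing from the target, which can then be inserted (the "update matching rows" half of a full MERGE is omitted here).

--insert into #tmpName2 the names that exist in #tmpName1 but not yet in #tmpName2
INSERT INTO #tmpName2 (name)
SELECT Name FROM #tmpName1
EXCEPT
SELECT Name FROM #tmpName2
GO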

Things to remember when using these operators:
  • When the data types of comparable columns that are returned by the queries to the left and right of the EXCEPT or INTERSECT operands are character data types with different collations, the required comparison is performed according to the rules of collation precedence. If this conversion cannot be performed, the SQL Server Database Engine returns an error.
  • When you compare rows for determining distinct values, two NULL values are considered equal.
  • The column names of the result set that are returned by EXCEPT or INTERSECT are the same names as those returned by the query on the left side of the operand.
  • Column names or aliases in ORDER BY clauses must reference column names returned by the left-side query.
  • The nullability of any column in the result set returned by EXCEPT or INTERSECT is the same as the nullability of the corresponding column that is returned by the query on the left side of the operand.
  • If EXCEPT or INTERSECT is used together with other operators in an expression, it is evaluated in the context of the following precedence:
            a). Expressions in parentheses
            b). The INTERSECT operand
            c). EXCEPT and UNION evaluated from left to right based on their position in the expression

  • If EXCEPT or INTERSECT is used to compare more than two sets of queries, data type conversion is determined by comparing two queries at a time, and following the previously mentioned rules of expression evaluation.
  • EXCEPT and INTERSECT cannot be used in distributed partitioned view definitions, query notifications, or together with COMPUTE and COMPUTE BY clauses.
  • EXCEPT and INTERSECT may be used in distributed queries, but are only executed on the local server and not pushed to the linked server. Therefore, using EXCEPT and INTERSECT in distributed queries may affect performance.
  • Fast forward-only and static cursors are fully supported in the result set when they are used with an EXCEPT or INTERSECT operation. If a keyset-driven or dynamic cursor is used together with an EXCEPT or INTERSECT operation, the cursor of the result set of the operation is converted to a static cursor.
  • When an EXCEPT operation is displayed by using the Graphical Showplan feature in SQL Server Management Studio, the operation appears as a left anti semi join, and an INTERSECT operation appears as a left semi join.

Monday, November 21, 2011

Report Model in SQL Server Reporting Services

Introduction:

The report model is the metadata layer component that serves as the basis for building reports. Report designers, both end users and IT staff, will use the report model as the data source for the report instead of directly accessing a relational database or OLAP cube data source when designing reports.

The report model contains three key elements necessary for the ad hoc reporting process to function smoothly: bindings to the underlying data-source schema, a logical representation of the schema in the form of business entities that are familiar to end users, and the default information to be used according to the role that the entity plays in the report (primary entity, lookup entity, summary, and so on).

A report model definition is an XML file that follows the Semantic Model Definition Language (SMDL) specification. Therefore, report models are also referred to as semantic models, SMDL, or SMDL models.


Report Model Components
Report models primarily consist of the following three item types:

  • Entities are the equivalent of objects or tables in a database. Entities are composed of attributes and roles. 
  • Attributes are the equivalent of object properties or columns in a database table.
  • Roles are the equivalent of object references or relational foreign keys and join operations.


Thursday, November 17, 2011

Unique Index in SQL Server 2005/2008

Introduction
A unique index is an index that ensures the uniqueness of each value in the indexed column. If the index is a composite index, the uniqueness is enforced across the columns as a whole, not on the individual columns. For example, if you were to create an index on the FirstName and LastName columns in a table, the names together must be unique, but the individual names can be duplicated.



Implementations
Unique indexes are implemented in the following ways:

  • PRIMARY KEY or UNIQUE constraint : When you create a PRIMARY KEY constraint, a unique clustered index on the column or columns is automatically created if a clustered index on the table does not already exist and you do not specify a unique nonclustered index. The primary key column cannot allow NULL values. When you create a UNIQUE constraint, a unique nonclustered index is created to enforce a UNIQUE constraint by default. You can specify a unique clustered index if a clustered index on the table does not already exist.
  • Index independent of a constraint: Multiple unique nonclustered indexes can be defined on a table.
  • Indexed view: To create an indexed view, a unique clustered index is defined on one or more view columns. The view is executed (materialized) and the result set is stored in the leaf level of the index in the same way table data is stored in a clustered index. 

Handling NULL Values
You cannot create a unique index or UNIQUE constraint if the key value is NULL in more than one row. Select columns that are defined as NOT NULL when you choose columns for a unique index or unique constraint.
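On SQL Server 2008, one common workaround for a nullable column that must otherwise be unique is a filtered unique index that excludes the NULL rows (the table and column names below are hypothetical):

--uniqueness is enforced only for rows where Email is present;
--multiple NULLs are allowed because they fall outside the filter
CREATE UNIQUE NONCLUSTERED INDEX IX_Customer_Email
ON dbo.Customer (Email)
WHERE Email IS NOT NULL
GO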


Syntax:
CREATE UNIQUE CLUSTERED INDEX IX_Employee
ON Employee ([Id], [Name] DESC)
GO

Indexed View in SQL Server

Introduction:
An indexed view is a view that has a unique clustered index created on it. Normally views do not exist on disk as rows. That changes for indexed views, which exist in the database as rows that realize the view. There can also be non-clustered indexes on the view so long as it has the unique clustered index.
Since the indexed view exists on disk there are two types of overhead added to the database:
  • The disk space taken up by the view
  • The cost of maintaining the view on disk as the base tables are modified.
Both of these costs can be substantial and limit the usefulness of indexed views to situations where there is a high ratio of data retrieval operations to data modifications. They are best for decision support and analytic applications where the frequency of data modification is low. Adding indexed views to an OLTP type application may slow it down more than it helps.

What are the requirements for Indexed views?
There are several requirements that you must take into consideration when using Indexed views.
  1. View definition must always return the same results from the same underlying data.
  2. Views cannot use non-deterministic functions.
  3. The first index on a View must be a clustered, UNIQUE index.
  4. If you use Group By, you must include the new COUNT_BIG(*) in the select list.
  5. View definition cannot contain the following
    1. TOP
    2. Text, ntext or image columns
    3. DISTINCT
    4. MIN, MAX, COUNT, STDEV, VARIANCE, AVG
    5. SUM on a nullable expression
    6. A derived table
    7. Rowset function
    8. Another view
    9. UNION
    10. Subqueries, outer joins, self joins
    11. Full-text predicates like CONTAINS or FREETEXT
    12. COMPUTE or COMPUTE BY
    13. ORDER BY
How do I create an Indexed View?
  1. Make sure that session properties are properly set.
  2. Create a deterministic view with new SCHEMABINDING syntax.
  3. Create unique Clustered Index.
  4. Optionally, create additional nonclustered indexes.

Follow the steps below to create an indexed view. This example is based on the AdventureWorks sample database.
1. Set the options to support indexed views

SET NUMERIC_ROUNDABORT OFF;
SET ANSI_PADDING, ANSI_WARNINGS, CONCAT_NULL_YIELDS_NULL, ARITHABORT,
    QUOTED_IDENTIFIER, ANSI_NULLS ON;

2. Create Indexed view with schemabinding

IF OBJECT_ID ('Sales.vOrders', 'view') IS NOT NULL
DROP VIEW Sales.vOrders ;
GO
CREATE VIEW Sales.vOrders
WITH SCHEMABINDING
AS
    SELECT SUM(UnitPrice*OrderQty*(1.00-UnitPriceDiscount)) AS Revenue,
        OrderDate, ProductID, COUNT_BIG(*) AS COUNT
    FROM Sales.SalesOrderDetail AS od, Sales.SalesOrderHeader AS o
    WHERE od.SalesOrderID = o.SalesOrderID
    GROUP BY OrderDate, ProductID;
GO

3. Create an index on the view. The first index on the view must be a unique clustered index.

CREATE UNIQUE CLUSTERED INDEX IDX_V1 
    ON Sales.vOrders (OrderDate, ProductID);
GO
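Once the index exists, the view can be queried directly. To be certain the stored result set is used rather than the view being expanded back to its base tables, reference it with the NOEXPAND hint; outside Enterprise Edition this hint is generally required for the optimizer to use the view's index.

--query the materialized rows of the indexed view directly
SELECT TOP(10) OrderDate, ProductID, Revenue
FROM Sales.vOrders WITH (NOEXPAND)
ORDER BY Revenue DESC
GO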

Conclusion
Indexed views can provide a very significant improvement in performance. While the improvement in query performance can be substantial, the costs can be large enough to offset any benefits. Maintaining the index on a view has performance costs: it adds overhead to every insert, update, or delete on a base table that participates in the view. This limits the applicability of indexed views to situations where data modifications are limited; preferably, where the data modification occurs in batches. Also, don't forget about the extra space consumed by the index.

Optimising Lookups with the Cache Transform


The enhanced Lookup transform works hand-in-hand with the Cache transform. The Cache transform is a brand new feature in SQL 2008, allowing you to cache the data used in the Lookup transform. The Lookup transform can then utilize this cached data to perform the lookup operation. Working with cached data will most likely be more efficient than querying the database.  In addition the Cache Connection Manager (another new feature) can persist the cache to a local file, allowing the cache to be shared between multiple SSIS packages and steps within a package.  A perfect example of where this will be useful is in the extract, transform and load (ETL) packages that we create to update a data warehouse. We typically build dimension tables in the data warehouse that use a surrogate key as a primary key (in lieu of the source key in the business application). As we are processing our fact tables we need to lookup the dimension surrogate keys based on the key in our source system and store the surrogate key in our fact tables.


To demonstrate how to use the new Lookup and Cache transforms, click here to see the video example by Mr. Allan Mitchell. I found this example very useful.