Generate an Empty Raw File Without Running your SSIS Package

A new feature was added to the Raw File Destination in SQL Server 2012 that allows you to output an empty raw file from the editor UI.

  1. Add a Raw File Destination to your data flow, and connect it to the previous transform
  2. Double click to open the Raw File Destination Editor
  3. Configure the file name
  4. Click on the Columns tab, and select the columns you want included
  5. Click on the Connection Manager tab
  6. Click the Generate initial raw file … button
  7. A dialog confirming the output columns will pop up. Click OK to create your file.

This functionality is handy when you are using raw files to stage data between data flows. You can use this empty raw file in the Raw File Source to configure the metadata it needs. In previous versions, you would have had to run the package to produce the initial file.

Another note – Raw Files were improved in SQL Server 2012 to include sort information, so now the IsSorted flag is automatically set in the Raw File Source.



Links from the SSIS Roadmap Session

Here are some of the resources I mentioned in the SSIS Roadmap session at the PASS Summit.

SSIS Reporting Pack from Jamie Thomson



DQS Matching Transform from OH22 Data



DQS Domain Value Import Destination from OH22 Data


There is also a great series of blog posts (part 1, part 2, part 3) on using these transforms on the DQS team blog.

Data Feed Publishing Components


SQL Server Data Tools – Business Intelligence downloads

(update 2014/11/14: added link to the VS 2013 release which came out in April)

Looking for the SSIS development tools? Formerly known as Business Intelligence Development Studio (BIDS), the designer is now called SQL Server Data Tools – Business Intelligence (SSDT-BI) (not to be confused with the other SQL Server Data Tools). The version you want will depend on two things: the version of SQL Server you are targeting, and the version of Visual Studio you want to work with.

SQL Server 2012

SQL Server 2014



Some notes:

  • The SQL Server 2014 CTP2 version of SSDT-BI should not be installed on the same machine as SSDT-BI for SQL Server 2012
  • There is no SQL Server 2014 SSDT-BI for Visual Studio 2010; however, SSIS packages developed for SQL Server 2012 can be automatically upgraded to SQL Server 2014

SELECT * From SSIS.DataFlow

If you’ve been looking through the documentation for the Power BI preview, you might have noticed a section on Publishing SSIS Packages as OData Feeds. This functionality lets you create a T-SQL View over an SSIS data flow using a new SSIS add-on called the Data Feed Publishing Components. This add-on works with SQL Server 2012, and is a free download from the Microsoft download center. While the components are useful for a number of Power BI scenarios, they don’t require a Power BI subscription – all you need is a SQL Server/SSIS 2012 installation.

The Data Feed Publishing Components consist of three main pieces:

  1. Data Streaming Destination – A new Data Flow Destination that creates an “endpoint”, letting you stream data back to the caller (similar in concept to the DataReader Destination)
  2. OLE DB Provider for SSIS – A special OLE DB provider that allows SQL Server to treat an SSIS package as a Linked Server
  3. Data Feed Publishing Wizard – A wizard that deploys a project/package containing a Data Streaming Destination and creates a T-SQL View (and linked server) that kicks off the SSIS package when accessed

Publish SSIS Packages - Conceptual Diagram

If this sounds interesting to you, be sure to check out the step-by-step guide in the Power BI documentation. Note – if you’re currently not in the Power BI preview, you can stop at step #3.


I expect I’ll be blogging more about this in the coming months (as well as talking about it at PASS), but I wanted to briefly mention some of the main scenarios we’ve been working with customers on.

Invoking an SSIS Package from a Report

You’d do this in the case where a simple query isn’t enough – there are workflow steps (e.g. FTPing files from a remote server), you’re merging/transforming disparate data sources, you require .NET scripting logic, or your data source requires a custom connector. Internally we’ve been referring to this scenario as “Complex Data Feeds”.

While it is possible to configure Reporting Services to read from an SSIS package, the approach has some limitations as to the account the package will execute as (and is actually removed from the default configuration file in SQL 2012). The Data Feed components provide a similar approach, but also let you leverage the logging and configuration provided by the SSIS Catalog.

On-Demand Execution of an SSIS Package

SELECT’ing from the View created by the Publishing Wizard dynamically invokes the SSIS package with a data flow, and streams back the results. While the majority of SSIS packages would run on a schedule, or write data to a fixed destination, there are cases where dynamic invocation and streaming results are preferred.
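
As a minimal sketch, invoking the package is just a normal query – the schema and view names are whatever you chose in the Publishing Wizard ([Feeds].[SupplierDataFeed] below is hypothetical):

SELECT * FROM [Feeds].[SupplierDataFeed];

Note that any WHERE clause you add is applied by SQL Server after the package streams its results back – as mentioned below, the provider doesn’t push filters down into the package.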

One customer we worked with had 500+ “Data Feeds” – data sets that were more than just simple queries. These data sets were typically small and used for ad hoc reporting purposes. The feeds weren’t accessed regularly – some would not be used for months, and then be used heavily for a day or two (perhaps at the end of a quarter). Unfortunately, the access patterns weren’t predictable. Because the data had to be there when it was needed, the customer ended up with a very large infrastructure keeping every feed up to date. What they needed was something that could be run on demand (perhaps with built-in caching), so the data could be refreshed only when it was needed.

Another customer was looking for a way to do dynamic auditing in their environments using SSIS. They had a set of packages with custom auditing logic that they’d deploy and run in various environments, getting a real-time snapshot of their systems.

Alternative to Linked Servers

Want to use a linked server, but don’t have an OLE DB provider for your data source? Want to enforce custom logic, or do dynamic transformations on the results of the data? No problem – use an SSIS package!


Just like Power BI, the current release of these components is in preview, and might not have all of the functionality you’re looking for (just yet). One thing to note is that the SSIS OLE DB provider currently does not support Distributed Query optimizations – it doesn’t provide statistics or push filters down the way other OLE DB providers used for Linked Servers do. This functionality is best suited for one-time executions of an SSIS package – if you find it’s something you’re accessing over and over, then you should probably be running your package on a schedule.


For more information, see one of the following:

The Natural Movement of Data

Data, information in raw form, is like a new force of nature in human civilizations. The Information Revolution produces too much data to manage, interpret, or even conceptualize without advanced computing tools. Like the real weather made of winds and rain, data has organic patterns and amazing ways to move and become dynamic. Even simple websites these days can and do rely upon sophisticated programming that delivers responsive content, rather than static content that fails to take full advantage of today’s multimedia Web.


HTML5 and Data

With the advances to the native language of the Web, HTML, which took place in 2013 (version 5), people have been freed from the burden of sundry plugins to get pages to deliver live content. As the people of the Web – its citizens, designers, and architects – learn to juice HTML5 for all it’s worth, we will see another explosion in the amount of data transacted online.

The HTML5 global update would not have been as meaningful as it has been without two important conditions that we have today: 1) normalized 3G and 4G bandwidth, and 2) the mobile device revolution.


We Are Mobile, After All

The natural progression of personal computing technologies is to dislodge from the desk environment and become ever-present and useful in vast new situations of human activity. When you think about it, the mobile computing revolution is more startling in terms of personal productivity and self-expression than the dawn of the PC itself.

PCs (on desks, with their own simulated desktops, to boot) gave us new powers of individual publishing, but that stage was still dovetailing off ordinary paper-based business as usual. Mobile phones, tablets, and the unusual designs and shapes entering the market every year promise the convergence of all personal energies with the tools to follow through – the result is more influential people sharing, and an overall rise in global information, which of course means even more data for programmers and the hardware running the Web to deal with.


More Natural Programming

In a nutshell, the ways in which computer engineers and scientists relate to massive flows of data will begin to model Nature in order to achieve the same level of efficiency. The processes of weather patterns, water flows, and animal migrations may lend powerful insights into how humans can work with the element of pure data.

The fact that there already are advanced systems for managing the complexity and security of real-cash gaming at popular sites lends hope that we will be able to not only cope with the data glut but also discover more natural ways to thrive in oceans of information.

Server Execution Parameters with DTEXEC

The SSIS team blogged about executing packages deployed to the SSIS Catalog using DTEXEC a while ago. The post mentions the $ServerOption::SYNCHRONIZED parameter as a way to control whether the execution is synchronous or asynchronous, but there are some other server options you can set as well. Phil Brammer actually blogged about the options last year. You can also see the full list of options when you view the SSIS Catalog Execution report (note the Parameters Used section in the screenshot below).


These options can be specified on the command line when you run a catalog package with DTExec. For example, to change the logging level to VERBOSE (3) for a specific execution, you’d add the following to your dtexec command:

/Par "$ServerOption::LOGGING_LEVEL(Int32)";3

More information on logging levels can be found here.

Bulk Loading into MDS using SSIS

Each entity in SQL Server 2012 Master Data Services (MDS) has its own staging table (stg.<name>_Leaf). Using this staging table, you can create, update, deactivate, and delete leaf members in bulk. This post describes how to bulk load into an entity staging table and trigger the stored procedure that starts the batch import process.

Staging Tables and Stored Procedures

The new entity-based staging tables are an excellent feature in MDS 2012, and they make it very easy to bulk load into MDS from SSIS. If you take a look at the SQL database used by your MDS instance, you’ll see at least one table in the stg schema for each entity. For this example I’ve created a Suppliers entity, and I see a matching table called [stg].[Suppliers_Leaf]. If your entity is using hierarchies, you will have three staging tables (see BOL for details). If we expand the columns, we’ll see that all of the attributes have their own columns, along with some system columns that every staging table has.


Each staging table will also have a stored procedure that is used to tell MDS that new data is ready to load. Details of the arguments can be found in BOL.


Import Columns

To load into this table from SSIS, our data flow will need to do the following:

  • Set a value for ImportType (see below)
  • Set a value for BatchTag
  • Map the column values in the data flow to the appropriate attribute columns

See the Leaf Member Staging Table BOL entry for details on the remaining system columns. If your Code value isn’t set to be generated automatically, then you’d also need to specify it in your data flow. Otherwise, the default fields can be safely ignored when we’re bulk importing.

The BatchTag column is used as an identifier in the UI – it can be any string value, as long as it’s unique (and under 50 characters).

MDS uses the same staging table for creating, updating, and deleting members. The ImportType column indicates which action you want to perform. The possible values are listed below.

  • 0 – Create new members. Replace existing MDS data with staged data, but only if the staged data is not NULL; NULL values are ignored. To change a string attribute value to NULL, set it to ~NULL~. To change a number attribute value to NULL, set it to -98765432101234567890. To change a datetime attribute value to NULL, set it to 5555-11-22T12:34:56.
  • 1 – Create new members only. Any updates to existing MDS data fail.
  • 2 – Create new members. Replace existing MDS data with staged data. If you import NULL values, they will overwrite existing MDS values.
  • 3 – Deactivate the member, based on the Code value. All attributes, hierarchy and collection memberships, and transactions are maintained but no longer available in the UI. If the member is used as a domain-based attribute value of another member, the deactivation will fail. See ImportType 5 for an alternative.
  • 4 – Permanently delete the member, based on the Code value. All attributes, hierarchy and collection memberships, and transactions are permanently deleted. If the member is used as a domain-based attribute value of another member, the deletion will fail. See ImportType 6 for an alternative.
  • 5 – Deactivate the member, based on the Code value. All attributes, hierarchy and collection memberships, and transactions are maintained but no longer available in the UI. If the member is used as a domain-based attribute value of other members, the related values will be set to NULL. ImportType 5 is for leaf members only.
  • 6 – Permanently delete the member, based on the Code value. All attributes, hierarchy and collection memberships, and transactions are permanently deleted. If the member is used as a domain-based attribute value of other members, the related values will be set to NULL. ImportType 6 is for leaf members only.

When you are bulk loading data into MDS, you’ll use 0, 1 or 2 as the ImportType. To summarize the different modes:

  • Use 0 or 2 when you are adding new members and/or updating existing ones (i.e. doing a merge)
    • The difference between 0 and 2 is the way they handle NULLs when updating an existing member. With 0, NULL values are ignored (and require special handling if you actually want to set a NULL value). With 2, all values are replaced, even when the values are NULL.
  • Use 1 when you are only inserting new members. If you are specifying a code, then a duplicate value will cause the import to fail.
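
To make the mapping concrete, here is what an equivalent bulk load looks like in plain T-SQL, using the Suppliers entity from earlier (the Country attribute column and the tag value are made up for illustration; ImportStatus_ID is a system column that should be set to 0 to mark a record as ready for staging):

-- Stage one leaf member for the Suppliers entity
-- ImportType 2 = create/merge, with NULLs overwriting existing values
INSERT INTO [stg].[Suppliers_Leaf]
    (ImportType, ImportStatus_ID, BatchTag, Code, Name, Country)
VALUES
    (2, 0, N'SupplierLoad_001', N'SUP001', N'Contoso Ltd', N'Canada');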

Package Design

Your control flow will have at least two tasks:

  1. A Data Flow Task that loads your incoming data into the MDS staging table for your entity
  2. An Execute SQL Task that runs the staging table’s stored procedure, which tells MDS to start processing the batch


Your data flow will have (at least) three steps:

  1. Read the values you want to load into MDS
  2. Add the BatchTag and ImportType column values (using a derived column transform)
  3. Load into the MDS staging table


As noted above, in your OLE DB Destination you’ll need to map your data flow columns to your member attributes (including Code if it’s not auto-generated), the BatchTag value (which can be automatically generated via expression), and the ImportType.


After the Data Flow, you’ll run the staging table stored procedure.

The first three parameters are required:

  1. The version name (e.g. VERSION_1)
  2. Whether this operation should be logged as an MDS transaction (i.e. do you want to record the change history, and make the change reversible?)
  3. The BatchTag value that you specified in your data flow
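
Continuing the Suppliers example, the Execute SQL Task statement might look like the following – the procedure name follows the stg.udp_<name>_Leaf pattern, and the batch tag must match the value written by the data flow:

EXEC [stg].[udp_Suppliers_Leaf]
    @VersionName = N'VERSION_1',     -- version to load the members into
    @LogFlag = 1,                    -- 1 = log the changes as MDS transactions
    @BatchTag = N'SupplierLoad_001'; -- same tag the data flow assigned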


Additional resources:

Advanced SSIS Catalog presentation from TechEd North America 2013

Matthew Roche (blog | twitter) and I teamed up once again to present an advanced SSIS Catalog session at TechEd North America 2013 – Deep Inside the Microsoft SQL Server Integration Services Server. The video and slide deck are now available on the Channel9 site. The slide deck actually contains 10 additional slides that we didn’t have time to cover during the regular session (with some further details about the security model).

I want to extend a big thank you to everyone who attended, and for all the great feedback we received. It can be tough doing a 400-level SQL session at TechEd, and while I could see some people’s heads spinning, it sounded like most people were able to learn something new.

The TechEd team picked an excellent preview picture for the session (below). It comes from Matthew’s intro – you’ll have to watch the video to see how he worked a picture of kittens into a 400 level SSIS session.


If you didn’t already know, Channel 9 has many TechEd presentations available online. You can see recordings of my previous events on my speaker page, and Matthew’s as well.

Can I Automate SSIS Project Deployment?

Yes, yes you can. Scripted or automated deployment can be done in a number of ways in SQL Server 2012.

Integration Services Deployment Wizard

Yes, this is the primary UI tool for SSIS project deployment, but it can also be run silently on the command line. When you run through the wizard, the Review page will actually list all of the parameters you need to do the same deployment from the command line.


Run ISDeploymentWizard.exe /? from a command prompt, and you’ll see the full list of arguments it supports.
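
For example, a silent deployment might look like the following (the switch names come from that help output; the paths and server name are placeholders):

ISDeploymentWizard.exe /Silent /SourcePath:"C:\Projects\MyProject.ispac" /DestinationServer:"localhost" /DestinationPath:"/SSISDB/MyFolder/MyProject"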

T-SQL

The SSISDB [catalog] schema has a number of public stored procedures, including one that can be used for deployment. We even provide samples on how to use it in Books Online.
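
As a rough sketch of that approach – the folder, project, and file names are placeholders, and the full version is in the Books Online sample – the procedure takes the .ispac file as a binary stream:

DECLARE @ProjectBinary varbinary(max);
DECLARE @OperationId bigint;

-- Read the compiled .ispac deployment file into a binary variable
SELECT @ProjectBinary = CAST(BulkColumn AS varbinary(max))
FROM OPENROWSET(BULK N'C:\Projects\MyProject.ispac', SINGLE_BLOB) AS ispac;

-- Deploy the project to a folder in the SSIS Catalog
EXEC [SSISDB].[catalog].[deploy_project]
    @folder_name = N'MyFolder',
    @project_name = N'MyProject',
    @project_stream = @ProjectBinary,
    @operation_id = @OperationId OUTPUT;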

PowerShell

All SSIS Catalog operations can be automated through PowerShell. I previously blogged about a deployment script I use to set up my demos.

Custom Utility

The SSIS Catalog management object model (MOM) exposes a set of SMO classes you can use to code your own catalog utilities. You’ll want to use the CatalogFolder.DeployProject method to do the actual deployment. If SMO or .NET isn’t your thing, you can also code a custom utility which interacts directly with the T-SQL API.