To test my Tableau knowledge, I took the Tableau product certification exam and earned the “Tableau Desktop 8 Qualified Associate” certificate.
Problem:
Convert the following source data into the schema shown below:
Here’s the code that uses the PIVOT function to get to the solution; please use this as a starting point.
Note the use of the aggregation function AVG; the aggregation you choose will depend on the requirement. In this example, the Test_Val needs to be averaged if more than one test was performed.
[code language="SQL"]
-- Source data
SELECT [Product_ID], [Test_Desc], [Test_Val]
FROM [dbo].[Address]
GO

-- Destination data using the PIVOT function
SELECT *
FROM [dbo].[Address]
PIVOT ( AVG([Test_Val]) FOR [Test_Desc] IN ([Test1], [Test2], [Test3], [Test4], [Test5]) ) AS Tests
[/code]
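To make the shape of the transformation concrete, here’s a minimal, self-contained sketch with made-up products and test values (the temp table and numbers are illustrative, not from the original example):
[code language="SQL"]
-- Illustrative sample data (hypothetical values)
CREATE TABLE #TestResults ([Product_ID] INT, [Test_Desc] VARCHAR(10), [Test_Val] DECIMAL(10,2));
INSERT INTO #TestResults VALUES
    (1, 'Test1', 10.0), (1, 'Test2', 20.0), (1, 'Test2', 30.0),
    (2, 'Test1', 15.0), (2, 'Test3', 45.0);

-- One row per product, one column per test; AVG kicks in when a test was run more than once
SELECT *
FROM #TestResults
PIVOT ( AVG([Test_Val]) FOR [Test_Desc] IN ([Test1], [Test2], [Test3], [Test4], [Test5]) ) AS Tests;
-- Product 1 shows Test2 = 25 (the average of 20 and 30); tests that were never run come back as NULL.

DROP TABLE #TestResults;
[/code]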
Take a look at the following chart: do you see any issues with it?
Notice that the month values are shown as “discrete” values instead of as “continuous” values, and this misleads the person looking at the chart. Agree? Great! You already know, based on instinct, what continuous and discrete values are; we just need to put labels on what you already know.
In the example above, the “Date & Time” field shown as “Sales Date” is a continuous value, since you can never state the “exact” time the event occurred… 1/1/2008 22 hours, 15 minutes, 7 seconds, 5 milliseconds… and it goes on… it’s continuous.
But let’s say you wanted to see Number of Units Sold vs. Product Name. Now that’s countable, isn’t it? You can say that we sold 150 units of Product X and 250 units of Product Y. In this case, Units Sold is a discrete value.
The chart shown above was treating Sales Date as a discrete value and hence causing confusion… let’s fix it, now that you know the difference between continuous and discrete variables:
Conclusion:
To develop effective data visualizations, it’s important to understand the data types of your data. In this post, you saw the difference between continuous and discrete variables and their importance in data visualization.
As part of Business Intelligence projects, we spend a significant amount of time extracting, transforming and loading data from source systems. So it’s always helpful to know as much as you can about the data sources: NULLs, keys and statistics, among other things. One of the things I like to do when the data is unfamiliar is verify the candidate keys, to make sure the key I plan to use can uniquely identify the rows in the data. Doing this upfront is really helpful because it avoids a lot of duplicate value errors later in your project.
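Before reaching for SSIS, you can also spot-check a candidate key with plain T-SQL. Here’s a minimal sketch, assuming a hypothetical [dbo].[SourceTable] with CustomerID as the proposed key:
[code language="SQL"]
-- Any rows returned mean the proposed key does NOT uniquely identify rows
-- ([dbo].[SourceTable] and CustomerID are placeholders; substitute your table and key column(s))
SELECT [CustomerID], COUNT(*) AS [DuplicateCount]
FROM [dbo].[SourceTable]
GROUP BY [CustomerID]
HAVING COUNT(*) > 1;
[/code]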
So here’s a quick tutorial on how you can check the candidate key profile using the Data Profiling Task in SSIS. You need to perform two main tasks:
1. Generate the xml file using the Data profiling task in SSIS
2. View the content of the xml file using the Data Profile Viewer Tool or using the Open Profile Viewer option in the Data Profiling task editor in SSIS.
Here are the steps:
1a. Open SQL Server Data Tools (Visual Studio/BIDS) and create or open an SSIS project.
1b. Bring a Data Profiling Task onto the Control Flow.
1c. Open the Data Profiling Task editor and configure the destination that the task uses to create the XML file. You can either create a new connection or use an existing one. If you use an existing connection, make sure you set the OverwriteDestination property to True if you want the file to be overwritten at the destination.
1d. Click Quick Profile to configure the data source for the Data Profiling Task.
1e. In the Quick Profile form, select the connection and the table/view, and specify what you need to compute. For the candidate key profile, make sure the Candidate Key Profile box is checked.
1f. Run the task, and an XML file should be placed at the destination you specified in step 1c.
Now it’s time to view what the profiler captured.
2a. Open the “Data Profile Viewer” by searching for it from the Start menu.
2b. Once it opens, click Open and browse to the XML file generated by the Data Profiling Task.
2c. Once the file opens, you can see the candidate key profiles.
2d. Alternatively, you can open the Data Profile Viewer from the Data Profiling Task in SSIS: go to the editor and click Open Profile Viewer:
Conclusion:
In this post, you saw how to profile data using the Data Profiling Task in SSIS.
Thu, Jul 17, 2014 12:00 PM – 1:00 PM EDT
Abstract:
Many companies are starting or expanding their use of data mining and machine learning. This presentation covers seven practical ideas for encouraging advanced analytics in your organization.
Bio:
Mark Tabladillo is a Microsoft MVP and SAS expert based in Atlanta, GA. His Industrial Engineering doctorate (including applied statistics) is from Georgia Tech. Today, he helps teams become more confident in making actionable business decisions through the use of data mining and analytics. Mark provides training and consulting for companies in the US and around the world. He has spoken at major conferences including Microsoft TechEd, PASS Summit, PASS Business Analytics Conference, Predictive Analytics World, and SAS Global Forum. He tweets @marktabnet and blogs at http://marktab.net.
Hope to see you there!
Paras Doshi
Business Analytics Virtual Chapter’s Co-Leader
Business Goal:
Design and Develop a Business Leader Dashboard to keep an eye on the health of multiple business units under his leadership.
In other words,
The dashboard should provide a one-stop shop for executives to monitor the health of their business unit(s). It’s analogous to a car’s dashboard, which helps the driver monitor the important performance indicators they need to focus on while driving, while making sure the driver gets alerted to things such as “check engine” and “oil level”. Dashboards use features like key performance indicators (KPIs), interactive data visualizations and drill-down capability to create an immersive user experience for an executive.
Technical Summary:
– Work with the business leader to identify the key metrics he needed to see on the dashboard to keep an eye on the health of the business units.
– Work with the IT leaders of each business unit to map the available data and come up with (consistent) formulas for the metrics needed by the business leader.
– Develop the dashboard. (Built iteratively, with two checkpoint meetings with the business leader and ongoing work with business analysts to make sure the data is right.)
– Develop drill-down reports for each metric and each business unit to show detailed data plus trends.
Mockup
(I can’t write about the role of the business leader or the metrics displayed because of non-disclosure agreements, so this mockup may look generic; it’s intended to be this way.)
Problem:
You are working on a query where you are trying to convert source data to the numeric data type, and you get an “Arithmetic overflow error”.
Solution:
Let’s understand this with an example:
Here’s the source data: 132.56000000. You want to store just 132.56, so you write a query that looks like:
cast([source_column] as numeric(3,2)) as destination_column_name
and after you run the query, it throws an “Arithmetic overflow error”. So what’s wrong?
The issue is that you specified the precision and scale incorrectly. By writing numeric(3,2) you are saying: I want 3 digits in total, with 2 to the right of the decimal point, which leaves just 1 digit for the left.
What you need to write is numeric(5,2): this keeps 2 digits on the right and leaves 3 digits for the left.
After you run this, it shouldn’t complain about the arithmetic overflow error. You just need to make sure that the precision and scale of the numeric data type are correct.
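Here’s a minimal repro sketch of both the failing and the corrected cast (the value is the one from the example above; the column alias is just illustrative):
[code language="SQL"]
-- Fails: numeric(3,2) can only hold values up to 9.99, so 132.56 overflows
-- Msg 8115: Arithmetic overflow error converting numeric to data type numeric.
SELECT CAST(132.56000000 AS numeric(3,2)) AS destination_column_name;

-- Works: numeric(5,2) allows 3 digits to the left of the decimal point and 2 to the right
SELECT CAST(132.56000000 AS numeric(5,2)) AS destination_column_name;   -- returns 132.56
[/code]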
Conclusion:
In this post, you saw an example of how to correctly set the precision and scale of the numeric data type, which should help you solve arithmetic overflow errors.
Problem:
How do you use the Execute SQL Task in SSIS to assign a value to a variable?
Solution:
This is a beginner-level post, so I’ll show you how you can use the Execute SQL Task to assign a value to a variable. Note that a variable can also hold a full result set. With that said, here are the steps:
1. Create the query against the source system.
Example: (Note the column name; it will be handy later!)
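For reference, the query might look something like this; the table is hypothetical, and what matters is the explicit column alias, since that’s what you’ll map in the Result Set section later:
[code language="SQL"]
-- Hypothetical example: the alias TotalRows is the Result Name you map to the Variable Name in step 5
SELECT COUNT(*) AS TotalRows
FROM [dbo].[SourceTable];
[/code]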
2. Open SSIS Project > Create the variable
Example
3. Now, drag an Execute SQL Task onto the Control Flow. Rename it, go to Edit, and configure the SQL Statement section.
4. Now, since we want to store a value in the variable, change the ResultSet property to Single row.
5. One last step: go to the Result Set section and map the Result Name (remember the column name from #1?) to the Variable Name:
That’s it! Related article: How to see value of variable during Run Time?
Conclusion:
In this post, you saw how to use the Execute SQL Task in SQL Server Integration Services to assign a value to a variable.
Problem:
How do you sort a dimension attribute by something other than the key or name column? How do you set the “OrderBy” property?
Example: You have created inventory age buckets 1-50, 51-100, 101-150. If a business user uses this dimension attribute, the default sorting won’t be logical; it would be 1-50, 101-150, 51-100. So how do you show the buckets in the logical order?
Solution:
1. Make sure that the table/view that you are bringing in has a sort key column.
Example:
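Here’s a minimal sketch of what such a view might look like; the table name, column names and bucket boundaries are illustrative, but the idea is to emit a numeric sort key alongside each bucket label:
[code language="SQL"]
-- Hypothetical view: AgingBucketSortKey gives SSAS a numeric column to order the buckets by
CREATE VIEW [dbo].[vwInventoryAging] AS
SELECT
    [Product_ID],
    CASE WHEN [DaysInInventory] <= 50  THEN '1-50'
         WHEN [DaysInInventory] <= 100 THEN '51-100'
         ELSE '101-150' END AS [AgingBucket],
    CASE WHEN [DaysInInventory] <= 50  THEN 1
         WHEN [DaysInInventory] <= 100 THEN 2
         ELSE 3 END AS [AgingBucketSortKey]
FROM [dbo].[Inventory];
[/code]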
2. Now, switch to SSAS and open your dimension. I am assuming that you’ve already configured your data source views and are already bringing these columns into the dimension:
3. Let’s start by hiding the Aging Bucket Sort Key so that it’s not visible to users: change its AttributeHierarchyVisible property to False.
4. Now, switch to Attribute Relationships, right-click on Aging Bucket and click New Attribute Relationship. Set the attribute relationship between Aging Bucket and Aging Bucket Sort Key.
And you should see something like this in your attribute relationship section:
5. Now, one more thing to configure. Go back to the Dimension Structure section, open the Properties pane for the Aging Bucket attribute and change the OrderBy property to AttributeKey. Also, change the OrderByAttribute property to Aging Bucket Sort Key (in your case, choose the sort key that you have).
That’s it! After you process the model, you should see the attribute sorted based on the sort key.
Conclusion:
In this post, you saw how to configure sort/order property of a dimension attribute.
Summary:
This is a beginner-level post targeted at developers who are new to SSIS and may not have worked on making an SSIS staging load package incremental. In this post, I’ll share a design pattern that I’ve used to make staging loads incremental, pulling in just the new or changed rows from the source system.
Tutorial:
Before we begin, why would you want to make a staging load incremental when pulling data from source systems? Two main reasons: 1) the source system may not keep historical data, but your Business Intelligence system needs to have it; 2) it is also faster and puts less strain on the source system during the data pull.
Since this is a beginner-level post, I am going to show you a design pattern for when you have a column in the source system that can identify new or changed rows. If you do not have such a column, this becomes an advanced topic and is out of scope for now.
With that said, let’s see the steps involved.
1) I have this kill-and-fill (a.k.a. full load) package in my SSIS dev environment:
2) Now, let’s make this incremental, so I’ll go ahead and delete the Execute SQL Task that truncates the data.
3) Now, we need a way to pass a query into our Data Flow Task (DFT) that gets only the new or changed rows. The source system that I am using has a field called ModifiedDate, and that’s what I’ll be using to pull in new or changed data.
4) Let’s create the query with the help of variables, an Execute SQL Task and a Script Task. (Later, we’ll store the query in a variable and use that variable in the Data Flow Task.)
4a) Create the ModifiedDate and Query variables.
4b) Create an Execute SQL Task to run the query that gets the max ModifiedDate and writes it to the ModifiedDate variable that you created.
Related Post: How to use Execute SQL Task to assign value to a variable?
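The query in that Execute SQL Task might look something like the one below. This is a sketch that assumes the high-water mark comes from the staging table itself; the table name is a placeholder:
[code language="SQL"]
-- High-water mark: the most recent ModifiedDate already present in the staging table
SELECT MAX([ModifiedDate]) AS [ModifiedDate]
FROM [staging].[SalesOrderDetail];
[/code]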
4c) Create a Script Task that builds the query using the ModifiedDate variable. This query will extract only new or changed rows from your source system:
[code language="vb"]
Dim ModifiedDate As String
Dim sQuery As String
' Read the high-water mark captured by the Execute SQL Task
ModifiedDate = Dts.Variables("ModifiedDate").Value.ToString()
' Build the extract query so only rows modified on/after the high-water mark are pulled
sQuery = String.Concat("SELECT [SalesOrderID],[SalesOrderDetailID],[CarrierTrackingNumber],[OrderQty],[ProductID],[SpecialOfferID],[UnitPrice],[ModifiedDate] FROM [sales].[SalesOrderDetail] where [ModifiedDate] >= '", String.Concat(ModifiedDate, "'"))
' Show the generated query (handy while debugging), then hand it back via the Query variable
MsgBox(String.Concat(" ", sQuery))
Dts.Variables("Query").Value = sQuery
[/code]
5) Now, go to the Variables section and give a default value to the User::Query variable; if you do not do this, you won’t be able to proceed to the next steps.
6) Go to the Data Flow and change the OLE DB Source to use “SQL command from variable”, pointing it at the User::Query variable.
7) Switch to the Control Flow and make sure your precedence constraints are set to run Execute SQL Task > Script Task > Data Flow Task.
8) Run the package and you should see the dynamic query that gets generated.
Tip: sometimes it’s helpful to run the generated query directly against the source system for troubleshooting purposes.
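The generated query would look something like this; the date shown is just an illustration of whatever value the ModifiedDate variable holds at run time:
[code language="SQL"]
-- Example of the query the Script Task builds (date value is illustrative)
SELECT [SalesOrderID],[SalesOrderDetailID],[CarrierTrackingNumber],[OrderQty],[ProductID],[SpecialOfferID],[UnitPrice],[ModifiedDate]
FROM [sales].[SalesOrderDetail]
WHERE [ModifiedDate] >= '2014-06-01 00:00:00'
[/code]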
9) On a successful run of the package, verify that only new or changed rows got added to the staging table. Also, if there are duplicate rows in the staging table, this might need to be handled during the dimension load or fact load; you can also consider putting logic in place here to avoid duplicate records in your staging table (a sketch follows).
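If you do choose to de-duplicate in staging, here’s one minimal sketch using ROW_NUMBER; the key columns and staging table name are assumptions based on the example above, so adjust them to your own grain:
[code language="SQL"]
-- Keep only the most recently modified row per business key (hypothetical staging table)
;WITH Ranked AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY [SalesOrderID], [SalesOrderDetailID]
                              ORDER BY [ModifiedDate] DESC) AS rn
    FROM [staging].[SalesOrderDetail]
)
DELETE FROM Ranked WHERE rn > 1;
[/code]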
That’s it!
Conclusion:
In this post, you saw how to make a staging load package incremental.
Similar Blog:
SQL Server Integration services: How to write a package that does Set based updates?