How to swap the values of two columns in a table?
Let me explain the scenario with an example –
Let’s create a table named “Employees” with columns “First_Name” and “Last_Name”, populate it with some fictitious data, and view the results. Then interchange the values in the “First_Name” and “Last_Name” columns, and query the table again to verify. The T-SQL below does all this:
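A minimal sketch of that T-SQL (the screenshot from the original post is not reproduced here; the table and column names follow the description above, and the sample data is fictitious):

```sql
-- Create and populate a sample table with fictitious data
CREATE TABLE Employees
(
    First_Name VARCHAR(50),
    Last_Name  VARCHAR(50)
);

INSERT INTO Employees (First_Name, Last_Name)
VALUES ('John', 'Smith'),
       ('Jane', 'Doe');

-- View the original values
SELECT First_Name, Last_Name FROM Employees;

-- Swap the two columns in a single UPDATE: both assignments are
-- evaluated against the row's pre-update values, so no explicit
-- temporary column is needed
UPDATE Employees
SET First_Name = Last_Name,
    Last_Name  = First_Name;

-- Verify the swap
SELECT First_Name, Last_Name FROM Employees;
```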
At first glance, an UPDATE statement like this gives the impression that both columns would end up with the same set of values (here, the Last_Name values in both columns). But SQL Server behaves intelligently and interchanges the values in the two columns: every expression on the right-hand side of the SET clause is evaluated against the row’s pre-update values, as if the original values were held in a temporary copy before the interchange.
Hope you enjoyed reading this post and found it an interesting bit of learning. Keep coming back, or follow this “Data Language” blog for more such posts.
I just came across the same old question of deciding where to calculate certain metrics, such as percentages.
Here is my take on it with a fictitious example –
Scenario 1: Calculating it at database level and preserving it in the database:
See Fig. 1 (Associate Level Sales), which shows a table with the sales made by a few sales executives in the years 2011 and 2012. Assuming that the percentage increase or decrease in each executive’s sales is calculated at the database level and preserved there, the table appears as in Fig. 1 (Associate Level Sales).
Benefits: Since the calculation is materialized/preserved in the database, reports containing this percentage calculation may perform better, as they can pull the value directly from the database instead of computing it on the fly.
Caveats: However, in this scenario the problem surfaces when these sales figures need to be aggregated at the manager level. See Fig. 2 (Manager Level Sales Pivot Table), which shows a pivot table built on top of the data in Fig. 1 (Associate Level Sales) to arrive at aggregated percentage-change figures. These percentage-change calculations are wrong, because they sum the associate-level percentage changes to aggregate at the manager level, whereas the calculation should have been done as in Fig. 4.
In Fig. 4, the manager-level volumes are first arrived at by adding up the associate-level volumes; the percentage changes are then calculated on these aggregated volumes.
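The difference can be sketched in T-SQL (the table and column names here are illustrative, chosen to loosely match the figures, and the numbers are made up):

```sql
-- Associate-level sales with a pre-computed percentage change
CREATE TABLE Associate_Sales
(
    Manager    VARCHAR(50),
    Associate  VARCHAR(50),
    Sales_2011 DECIMAL(12, 2),
    Sales_2012 DECIMAL(12, 2),
    Pct_Change DECIMAL(12, 4)  -- (Sales_2012 - Sales_2011) / Sales_2011
);

INSERT INTO Associate_Sales
VALUES ('M1', 'A1', 100, 150, 0.50),  -- +50%
       ('M1', 'A2', 400, 400, 0.00);  --  0%

-- WRONG: summing the associate-level percentages
-- reports +50% for manager M1
SELECT Manager, SUM(Pct_Change) AS Wrong_Pct
FROM Associate_Sales
GROUP BY Manager;

-- RIGHT: aggregate the volumes first, then compute the percentage:
-- (150 + 400 - 100 - 400) / (100 + 400) = 0.10, i.e. +10%
SELECT Manager,
       (SUM(Sales_2012) - SUM(Sales_2011)) / SUM(Sales_2011) AS Correct_Pct
FROM Associate_Sales
GROUP BY Manager;
```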
Scenario 2: Calculating it at the report level (e.g. a pivot table in MS Excel, OBIEE, or some other reporting tool) on the fly:
In this scenario, a pivot table is built as in Scenario 1, but the percentage-change calculation is done inside the pivot table as a calculated item, using the formula (sales of 2012 – sales of 2011) / sales of 2011. See the results in Fig. 3. These calculations now match those in Fig. 4 exactly.
Benefits: Aggregations at higher levels are calculated correctly.
Drawbacks: Since the percentage-change calculation is not preserved in the database, it is computed on the fly every time the pivot table is refreshed, resulting in a drop in report performance.
Conclusion: It’s just common sense that certain measures cannot be aggregated directly; such measures are called non-additive measures. Additive measures can be preserved in the database, while non-additive measures should be calculated at the report level on the fly.
Hope you enjoyed reading this post. Please do share your comments, suggestions, or questions, if any, and help me improve this further.
Do rate this blog post at the top in terms of the number of stars. The more you like it, the more the stars.. Happy times until my next blog post.. 🙂
I have seen that comparing a datetime column in a SQL Server table, one that holds both date and time parts (e.g. 2012-12-05 10:37:36.897), against a date literal that has only a date part is not straightforward. Let me explain with an example:
I created a temporary table with two date columns, From_Date and To_Date, and inserted a couple of random date values. The first output shows the table contents; observe that the timestamps are included with the date values.
Next, I declared a date variable and assigned it a date value without a timestamp, then retrieved the rows from the table where From_Date is less than or equal to this variable, in two ways:
– once by converting the From_Date column into a string in the “mm/dd/yyyy” format (note style 101 in the CONVERT function syntax) and then casting that date string back to the datetime data type, which removes the timestamp from the datetime values;
– the other time by directly comparing the From_Date datetime column with the datetime variable. This comparison takes both the date and the time parts of From_Date and the variable into account, and hence returns an incorrect (smaller) number of results.
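The two comparisons above can be sketched as follows (the original post’s screenshot is not reproduced here; the temporary table name, sample values, and variable name are illustrative):

```sql
-- Temporary table with datetime columns; the values include a time part
CREATE TABLE #Dates
(
    From_Date DATETIME,
    To_Date   DATETIME
);

INSERT INTO #Dates
VALUES ('2012-12-05 10:37:36.897', '2012-12-20 08:15:00.000'),
       ('2012-12-05 00:00:00.000', '2012-12-31 23:59:59.000');

-- Date literal with no time part (time defaults to midnight)
DECLARE @Cutoff DATETIME = '2012-12-05';

-- Comparison 1: strip the time part by converting to an mm/dd/yyyy
-- string (style 101) and casting back to DATETIME; both rows match
SELECT *
FROM #Dates
WHERE CAST(CONVERT(VARCHAR(10), From_Date, 101) AS DATETIME) <= @Cutoff;

-- Comparison 2: the direct comparison includes the time part, so the
-- 10:37:36.897 row falls after midnight and only one row is returned
SELECT *
FROM #Dates
WHERE From_Date <= @Cutoff;

DROP TABLE #Dates;
```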
Hope that helps. Do leave your comments or questions, if any.. 🙂