<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>jhunterj.com &#187; Database</title>
	<atom:link href="https://jhunterj.com/category/database/feed/" rel="self" type="application/rss+xml" />
	<link>https://jhunterj.com</link>
	<description>J. Hunter Johnson—I&#039;m just this geek you (should) know.</description>
	<lastBuildDate>Sat, 14 Mar 2015 12:10:39 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Transforming mostly static data to date ranges</title>
		<link>https://jhunterj.com/2013/04/06/transforming-mostly-static-data-to-date-ranges/</link>
		<comments>https://jhunterj.com/2013/04/06/transforming-mostly-static-data-to-date-ranges/#comments</comments>
		<pubDate>Sat, 06 Apr 2013 13:59:27 +0000</pubDate>
		<dc:creator><![CDATA[Hunter]]></dc:creator>
				<category><![CDATA[Database]]></category>
		<category><![CDATA[sql]]></category>
		<category><![CDATA[transact-sql]]></category>

		<guid isPermaLink="false">http://jhunterj.com/?p=288</guid>
		<description><![CDATA[I recently had to transform a set of data from typical date &#38; measurement to a more compact date range &#38; measurement format. The data in this case was very static: as the date incremented, the measurement was much more <a class="more-link" href="https://jhunterj.com/2013/04/06/transforming-mostly-static-data-to-date-ranges/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<div id="attachment_293" style="width: 310px" class="wp-caption alignright"><a href="http://www.flickr.com/photos/husseinabdallah/4035530069/" target="_blank"><img class="size-medium wp-image-293" alt="A cityscape showing flat peaks and troughs" src="http://jhunterj.com/wp-content/uploads/2013/04/FlatPeaksAndTroughs-300x228.jpg" width="300" height="228" /></a><p class="wp-caption-text">For data that has flat plateaus and canyons, like this cityscape. Derived from a <a href="http://creativecommons.org/licenses/by/2.0/" target="_blank">CC-BY-2.0</a> image by abdallah.</p></div>
<p>I recently had to transform a set of data from typical date &amp; measurement to a more compact date range &amp; measurement format. The data in this case was very static: as the date incremented, the measurement was much more likely to remain the same than it was to change. So storing the starting date and ending date for each measurement takes less space than storing each date&#8217;s measurement separately. Sure, it makes some subsequent queries more convoluted, but let&#8217;s say that you found this post because you also need a similar transformation.</p>
<p>I stumbled at first by subconsciously assuming that a measurement would not repeat once its range was ended. This assumption works if your measurements never decrease (or if they never increase), say for the total number of copies of a book printed. They&#8217;re printed in batches, and most days no new copies are printed. If your data does meet that criterion, the query is simple (and should be portable from Microsoft Transact-SQL, where I wrote it):</p>
<pre>SELECT d.[group],
       MIN(d.[date]) AS [start_date],
       MAX(d.[date]) AS [end_date],
       d.[measurement]
   FROM mydata d
   GROUP BY d.[group], d.[measurement]
   ORDER BY d.[group], MIN(d.[date])</pre>
<p>So for data like this:</p>
<table border="1">
<tbody>
<tr>
<td>group</td>
<td>date</td>
<td>measurement</td>
</tr>
<tr>
<td>Beta</td>
<td>2013-03-01</td>
<td>12</td>
</tr>
<tr>
<td>Beta</td>
<td>2013-03-02</td>
<td>12</td>
</tr>
<tr>
<td style="text-align: center;" colspan="3">…</td>
</tr>
<tr>
<td>Beta</td>
<td>2013-03-12</td>
<td>12</td>
</tr>
<tr>
<td>Beta</td>
<td>2013-03-13</td>
<td>18</td>
</tr>
<tr>
<td>Beta</td>
<td>2013-03-14</td>
<td>18</td>
</tr>
<tr>
<td style="text-align: center;" colspan="3">…</td>
</tr>
<tr>
<td>Beta</td>
<td>2013-03-31</td>
<td>18</td>
</tr>
</tbody>
</table>
<p>this generates the desired output:</p>
<table border="1">
<tbody>
<tr>
<td>group</td>
<td>start_date</td>
<td>end_date</td>
<td>measurement</td>
</tr>
<tr>
<td>Beta</td>
<td>2013-03-01</td>
<td>2013-03-12</td>
<td>12</td>
</tr>
<tr>
<td>Beta</td>
<td>2013-03-13</td>
<td>2013-03-31</td>
<td>18</td>
</tr>
</tbody>
</table>
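<p>(A runnable sketch, not part of the original post: the same query checked against the Beta sample data using Python&#8217;s sqlite3 module, which stands in for SQL Server here. sqlite happens to accept T-SQL-style bracketed identifiers, so the query ports nearly verbatim.)</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mydata ([group] TEXT, [date] TEXT, [measurement] INTEGER)")

# The "Beta" sample data: measurement 12 through 2013-03-12, then 18 to month end.
rows = [("Beta", f"2013-03-{d:02d}", 12 if d <= 12 else 18) for d in range(1, 32)]
conn.executemany("INSERT INTO mydata VALUES (?, ?, ?)", rows)

result = conn.execute("""
    SELECT d.[group],
           MIN(d.[date]) AS start_date,
           MAX(d.[date]) AS end_date,
           d.[measurement]
       FROM mydata d
       GROUP BY d.[group], d.[measurement]
       ORDER BY d.[group], MIN(d.[date])
""").fetchall()

print(result)
# [('Beta', '2013-03-01', '2013-03-12', 12), ('Beta', '2013-03-13', '2013-03-31', 18)]
```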
<p>Unfortunately, my data did not meet this criterion, and my results from that query had overlapping ranges, which was quite incorrect:</p>
<table border="1">
<tbody>
<tr>
<td>group</td>
<td>date</td>
<td>measurement</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-01</td>
<td>12</td>
</tr>
<tr>
<td style="text-align: center;" colspan="3">…</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-07</td>
<td>12</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-08</td>
<td>15</td>
</tr>
<tr>
<td style="text-align: center;" colspan="3">…</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-11</td>
<td>15</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-12</td>
<td>12</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-13</td>
<td>18</td>
</tr>
<tr>
<td style="text-align: center;" colspan="3">…</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-22</td>
<td>18</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-23</td>
<td>21</td>
</tr>
<tr>
<td style="text-align: center;" colspan="3">…</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-27</td>
<td>21</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-28</td>
<td>18</td>
</tr>
<tr>
<td style="text-align: center;" colspan="3">…</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-31</td>
<td>18</td>
</tr>
</tbody>
</table>
<p>yields:</p>
<table border="1">
<tbody>
<tr>
<td>group</td>
<td>start_date</td>
<td>end_date</td>
<td>measurement</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-01</td>
<td>2013-03-12</td>
<td>12</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-08</td>
<td>2013-03-11</td>
<td>15</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-13</td>
<td>2013-03-31</td>
<td>18</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-23</td>
<td>2013-03-27</td>
<td>21</td>
</tr>
</tbody>
</table>
<p>I solved it by counting off my start dates and end dates, and then for each group and measurement placing the first start date with the first end date, the second with the second, and so on. This still seems like overkill; if you know an optimization I missed, please add your comment below!</p>
<p>In Microsoft&#8217;s Transact-SQL, counting off like that involves the <a title="OVER Clause (Transact-SQL)" href="http://msdn.microsoft.com/en-us/library/ms189461(v=sql.90).aspx" target="_blank">OVER clause</a> and <a title="CASE (Transact-SQL)" href="http://msdn.microsoft.com/en-us/library/ms181765(v=sql.90).aspx" target="_blank">CASE expressions</a> against some <a title="Using Outer Joins" href="http://msdn.microsoft.com/en-us/library/ms187518(v=sql.90).aspx" target="_blank">LEFT JOINs</a>. I LEFT JOIN the table against itself twice, once on the previous date and once on the subsequent date. Finding the NULLs on those joins (in the OVER + CASE constructs) allows me to count the starts and ends of each block of measurements. I also need a future date to move all of the &#8220;middle&#8221; dates out of order to the end—and they get thrown out by the <code>WHERE COALESCE([start_date], [end_date]) IS NOT NULL</code> part later.</p>
<pre>/* any date well after all of the dates in the database will do */
DECLARE @futuredate DATE = '2100-01-01';

SELECT t.[group],
       MIN(t.[start_date]) AS [start_date],
       MIN(t.[end_date]) AS [end_date],
       t.[measurement]
   FROM (SELECT d.[group],
                CASE 
                   WHEN d2.[date] IS NULL THEN 
                      ROW_NUMBER()
                         OVER (PARTITION BY d.[group]
                               ORDER BY CASE
                                           WHEN d2.[date] IS NULL THEN d.[date]
                                           ELSE @futuredate
                                        END)
                   ELSE NULL
                END AS start_seq,
                CASE 
                   WHEN d3.[date] IS NULL THEN 
                      ROW_NUMBER()
                         OVER (PARTITION BY d.[group]
                               ORDER BY CASE
                                           WHEN d3.[date] IS NULL THEN d.[date]
                                           ELSE @futuredate
                                        END)
                   ELSE NULL
                END AS end_seq,
                CASE
                   WHEN d2.[date] IS NULL THEN d.[date]
                   ELSE NULL
                END AS [start_date],
                CASE
                   WHEN d3.[date] IS NULL THEN d.[date]
                   ELSE NULL
                END AS [end_date],
                d.[measurement]

            FROM mydata d
               LEFT JOIN mydata d2 ON d2.[group] = d.[group]
                                      AND d2.[date] = DATEADD(DD, -1, d.[date])
                                      AND d2.[measurement] = d.[measurement]
               LEFT JOIN mydata d3 ON d3.[group] = d.[group]
                                      AND d3.[date] = DATEADD(DD, 1, d.[date])
                                      AND d3.[measurement] = d.[measurement]

        ) t
   WHERE COALESCE([start_date], [end_date]) IS NOT NULL
   GROUP BY [group], COALESCE([start_seq], [end_seq]), [measurement]
   ORDER BY [group], [start_date]</pre>
<p>which gives me the correct results:</p>
<table border="1">
<tbody>
<tr>
<td>group</td>
<td>start_date</td>
<td>end_date</td>
<td>measurement</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-01</td>
<td>2013-03-07</td>
<td>12</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-08</td>
<td>2013-03-11</td>
<td>15</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-12</td>
<td>2013-03-12</td>
<td>12</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-13</td>
<td>2013-03-22</td>
<td>18</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-23</td>
<td>2013-03-27</td>
<td>21</td>
</tr>
<tr>
<td>Alpha</td>
<td>2013-03-28</td>
<td>2013-03-31</td>
<td>18</td>
</tr>
<tr>
<td>Beta</td>
<td>2013-03-01</td>
<td>2013-03-12</td>
<td>12</td>
</tr>
<tr>
<td>Beta</td>
<td>2013-03-13</td>
<td>2013-03-31</td>
<td>18</td>
</tr>
</tbody>
</table>
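<p>(Also not part of the original post: the full pairing query, ported to sqlite, which gained window functions in version 3.25, as a runnable check against the Alpha data. DATEADD becomes sqlite&#8217;s date() modifier, and the @futuredate variable becomes a literal.)</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mydata ([group] TEXT, [date] TEXT, [measurement] INTEGER)")

# The "Alpha" data: 12, 15, 12, 18, 21, then 18 across March 2013.
def m(day):
    if day <= 7:  return 12
    if day <= 11: return 15
    if day == 12: return 12
    if day <= 22: return 18
    if day <= 27: return 21
    return 18

conn.executemany("INSERT INTO mydata VALUES (?, ?, ?)",
                 [("Alpha", f"2013-03-{d:02d}", m(d)) for d in range(1, 32)])

result = conn.execute("""
    SELECT t.[group],
           MIN(t.start_date) AS range_start,
           MIN(t.end_date) AS range_end,
           t.[measurement]
       FROM (SELECT d.[group],
                    CASE WHEN d2.[date] IS NULL THEN
                       ROW_NUMBER() OVER (PARTITION BY d.[group]
                          ORDER BY CASE WHEN d2.[date] IS NULL
                                        THEN d.[date] ELSE '2100-01-01' END)
                    END AS start_seq,
                    CASE WHEN d3.[date] IS NULL THEN
                       ROW_NUMBER() OVER (PARTITION BY d.[group]
                          ORDER BY CASE WHEN d3.[date] IS NULL
                                        THEN d.[date] ELSE '2100-01-01' END)
                    END AS end_seq,
                    CASE WHEN d2.[date] IS NULL THEN d.[date] END AS start_date,
                    CASE WHEN d3.[date] IS NULL THEN d.[date] END AS end_date,
                    d.[measurement]
                FROM mydata d
                   LEFT JOIN mydata d2 ON d2.[group] = d.[group]
                      AND d2.[date] = date(d.[date], '-1 day')
                      AND d2.[measurement] = d.[measurement]
                   LEFT JOIN mydata d3 ON d3.[group] = d.[group]
                      AND d3.[date] = date(d.[date], '+1 day')
                      AND d3.[measurement] = d.[measurement]
            ) t
       WHERE COALESCE(start_date, end_date) IS NOT NULL
       GROUP BY t.[group], COALESCE(start_seq, end_seq), t.[measurement]
       ORDER BY t.[group], range_start
""").fetchall()

for row in result:
    print(row)
```

<p>The output matches the table above, including the one-day range where 12 reappears on 2013-03-12.</p>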
<p>I&#8217;m still making one assumption: that a measurement was stored in the data for every date. If the measurements are not taken that rigorously, you will need to accommodate that gap with a generated table of the appropriate dates to join against. If you&#8217;re up against that, let me know and I&#8217;ll do a follow-up post.</p>
<p style="text-align: right;">—jhunterj</p>
]]></content:encoded>
			<wfw:commentRss>https://jhunterj.com/2013/04/06/transforming-mostly-static-data-to-date-ranges/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Decoding SQL&#8217;s Decode</title>
		<link>https://jhunterj.com/2013/02/05/decoding-sqls-decode/</link>
		<comments>https://jhunterj.com/2013/02/05/decoding-sqls-decode/#comments</comments>
		<pubDate>Tue, 05 Feb 2013 12:32:29 +0000</pubDate>
		<dc:creator><![CDATA[Hunter]]></dc:creator>
				<category><![CDATA[Database]]></category>
		<category><![CDATA[decode]]></category>
		<category><![CDATA[oracle]]></category>
		<category><![CDATA[sql]]></category>

		<guid isPermaLink="false">http://jhunterj.com/?p=205</guid>
		<description><![CDATA[I came across a SQL function I was not familiar with: decode. I looked it up and immediately replaced it with a CASE statement (in addition to other code cleanup). I&#8217;m afraid I don&#8217;t understand the existence of Oracle SQL&#8217;s <a class="more-link" href="https://jhunterj.com/2013/02/05/decoding-sqls-decode/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>I came across a SQL function I was not familiar with: decode. I looked it up and immediately replaced it with a CASE statement (in addition to other code cleanup). I&#8217;m afraid I don&#8217;t understand the existence of Oracle SQL&#8217;s decode() function:</p>
<pre>decode(expression,
       search, result
       [, search , result]...
       [, default]
      )</pre>
<p>seems functionally equivalent to</p>
<pre>CASE expression 
   WHEN search THEN result
   [WHEN search THEN result]...
   [ELSE default]
END</pre>
<p><div style="width: 260px" class="wp-caption alignright"><a href="http://commons.wikimedia.org/wiki/File:Gorilla_Scratching_Head.jpg"><img alt="Gorilla Scratching Head" src="http://upload.wikimedia.org/wikipedia/commons/0/08/Gorilla_Scratching_Head.jpg" width="250" /></a><p class="wp-caption-text">By Steven Straiton (originally posted to Flickr as Gorilla) [<a href="http://creativecommons.org/licenses/by/2.0">CC-BY-2.0</a>], via Wikimedia Commons</p></div>except the CASE version</p>
<ul>
<li>has no limitation of &#8220;only&#8221; 255 total expression + search + result + default parameters</li>
<li>is easier to read</li>
<li>works in PL/SQL context</li>
<li>is ANSI-compliant</li>
</ul>
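<p>(A concrete rewrite, added for illustration and not from the original post; the table and values here are made up. sqlite3, like most non-Oracle engines, has CASE but no decode(), so this is exactly the kind of translation described above.)</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, status TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("Ada", "A"), ("Brian", "I"), ("Cass", "X")])

# Oracle: decode(status, 'A', 'Active', 'I', 'Inactive', 'Unknown')
result = conn.execute("""
    SELECT name,
           CASE status
              WHEN 'A' THEN 'Active'
              WHEN 'I' THEN 'Inactive'
              ELSE 'Unknown'
           END AS status_label
       FROM accounts
""").fetchall()

print(result)
# [('Ada', 'Active'), ('Brian', 'Inactive'), ('Cass', 'Unknown')]
```

<p>One behavioral difference worth watching when you do this rewrite: Oracle&#8217;s decode treats two NULLs as equal, while a simple CASE comparison does not, so a NULL search value needs an explicit <code>WHEN status IS NULL THEN &#8230;</code> branch in a searched CASE.</p>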
<p>The only &#8220;benefit&#8221; to decode I could find is that the decode function will attempt to convert all of the results to the type of the first result, while CASE just errors if you mix types. To me, that benefit would simply encourage sloppy coding and keep you from noticing buggy code as quickly.</p>
<p>So, what&#8217;s the point? Are there coders out there who &#8220;get&#8221; the function syntax more readily than the CASE syntax, so it helps their coding efficiency? Or is there a benefit I&#8217;m missing?</p>
<p style="text-align: right;">—jhunterj</p>
]]></content:encoded>
			<wfw:commentRss>https://jhunterj.com/2013/02/05/decoding-sqls-decode/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Spikes, Slopes, and SQL</title>
		<link>https://jhunterj.com/2012/11/08/spikes-slopes-and-sql/</link>
		<comments>https://jhunterj.com/2012/11/08/spikes-slopes-and-sql/#comments</comments>
		<pubDate>Thu, 08 Nov 2012 20:45:11 +0000</pubDate>
		<dc:creator><![CDATA[Hunter]]></dc:creator>
				<category><![CDATA[Database]]></category>

		<guid isPermaLink="false">http://jhunterj.com/?p=85</guid>
		<description><![CDATA[So you have data you&#8217;re tracking every day. Maybe a lot of data. Maybe tracking it a lot of ways. You might even have graphs of the data over time, so that you can spot trends and anomalies. And then <a class="more-link" href="https://jhunterj.com/2012/11/08/spikes-slopes-and-sql/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>So you have data you&#8217;re tracking every day. Maybe a lot of data. Maybe tracking it a lot of ways. You might even have graphs of the data over time, so that you can spot trends and anomalies. And then you find that you&#8217;ve got too many graphs…</p>
<p>It&#8217;s time to let the data tell you when it needs attention.</p>
<p>Get your data into a format (through a table, a view, or a subquery; we&#8217;ll say you have it in <code>measurements</code>) where you have records that have the thing you&#8217;re measuring (we&#8217;ll call it <code>aspectID</code>), when you measured it (we&#8217;ll use <code>date</code>, although this will work for monthly measurements, weekly, quarterly, yearly, hourly, or anything else), and what its measurement was then (<code>value</code>).</p>
<div id="attachment_106" style="width: 310px" class="wp-caption alignright"><a href="http://jhunterj.com/wp-content/uploads/2012/11/Linear_regression.png"><img class="size-medium wp-image-106" title="Linear regression" src="http://jhunterj.com/wp-content/uploads/2012/11/Linear_regression-300x198.png" alt="A graph of data points and simple linear regression" width="300" height="198" /></a><p class="wp-caption-text">Linear regression</p></div>
<p>What management calls a trend, statistics calls a slope. If your data happens to fall in a line, the slope is how much it goes up (or down) each unit of time. −2 dollars per day. +100 page views per hour. +5 accounts per month. Probably your data doesn&#8217;t line up nicely for you though.</p>
<p>Some flavors of SQL will make this pretty easy to get, through <a href="http://en.wikipedia.org/wiki/Simple_linear_regression">linear regression</a> functions. Oracle PL/SQL, for instance:</p>
<pre>SELECT aspectID,
       REGR_SLOPE(value, TO_NUMBER(TO_CHAR(date, 'J'))) AS slope
   FROM measurements
   GROUP BY aspectID;</pre>
<p>while it&#8217;s a little less obvious in flavors without those built-in functions, such as Microsoft T-SQL:</p>
<pre>SELECT aspectID,
       ( (COUNT(1) * SUM(CAST([date] AS FLOAT) * [value])) -
         (SUM(CAST([date] AS FLOAT)) * SUM([value]))
       ) /
       ( (COUNT(1) * SUM(POWER(CAST([date] AS FLOAT), 2))) -
         (POWER(SUM(CAST([date] AS FLOAT)), 2))
       ) AS slope
   FROM measurements
   WHERE [value] IS NOT NULL
   GROUP BY aspectID;</pre>
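<p>(A quick check, not from the original post: the same closed-form slope formula in plain Python, run against data that lies exactly on a line. The day numbers stand in for <code>CAST([date] AS FLOAT)</code>.)</p>

```python
def slope(points):
    # points: list of (x, y) pairs; same formula as the SQL above:
    # (n*sum(xy) - sum(x)*sum(y)) / (n*sum(x^2) - sum(x)^2)
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, _ in points)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# y = 5x + 100: five new accounts per day, every day of a 30-day month.
data = [(day, 5 * day + 100) for day in range(1, 31)]
print(slope(data))  # 5.0
```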
<p>Some additional things you may want to do:</p>
<ul>
<li>restrict (in the <code>WHERE</code> clause) the data to the recent past (30 days or 90 days for dailies, perhaps, or 72 hours for hourlies, etc.)</li>
<li>scale the slopes (something that&#8217;s going from 1 to 100 is probably more interesting than something that&#8217;s going from 50,001 to 50,100), by dividing them by the <code>MAX(value)</code> for the period. This will give you a more useful sorting column than the &#8220;raw&#8221; slope, for some problem spaces.</li>
</ul>
<p>The other thing you&#8217;ll often want to know is when is the current value &#8220;spiking&#8221; (up or down). A brief spike may not have a big impact on the linear regression, but it&#8217;s still a point of interest. Oracle PL/SQL first again:</p>
<pre>SELECT aspectID,
       CASE
          WHEN STDDEV(value) = 0 THEN 0
          ELSE (MAX(value) KEEP (DENSE_RANK LAST ORDER BY date)
                   -  AVG(value)) /
               STDDEV(value)
       END AS spike
   FROM measurements
   GROUP BY aspectID;</pre>
<p>&#8220;Spike&#8221; here is the number of standard deviations above (or below) average the most recent data point is. I like to look for instances that are more than two standard deviations from the current average (which should mean I&#8217;m getting hits 5% of the time for each measurement, or hits on 5% of the metrics with any given run) by adding</p>
<pre>WHERE CASE
         WHEN STDDEV(value) = 0 THEN 0
         ELSE ABS(MAX(value) KEEP (DENSE_RANK LAST ORDER BY date)
                  -  AVG(value)) /
              STDDEV(value)
      END &gt;= 2</pre>
<p>(Note the addition of <code>ABS</code> there—you want sudden drops as well as sudden increases.) If you put that into a view, your <code>WHERE</code> clause will be a little prettier. Restrict the time scale as needed, just like with slopes.</p>
<p>Getting to the most recent value as well as aggregations of the values in Microsoft T-SQL is a little more cumbersome, but I don&#8217;t find either particularly more readable than the other:</p>
<pre>SELECT m.aspectID,
       CASE
          WHEN STDEV(m.value) = 0 THEN 0
          ELSE (lastm.value - AVG(m.value)) / STDEV(m.value)
       END AS spike
   FROM measurements m
      JOIN (SELECT aspectID,
                   [value],
                   ROW_NUMBER()
                      OVER (PARTITION BY aspectID
                            ORDER BY MAX([date]) DESC) AS rank
                FROM measurements
               GROUP BY aspectID,
                        [value]
           ) lastm ON lastm.aspectID = m.aspectID
   WHERE m.[value] IS NOT NULL
      AND lastm.rank = 1
   GROUP BY m.aspectID,
            lastm.value;</pre>
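<p>(The &#8220;spike&#8221; measure boiled down to plain Python, added here as a sketch rather than taken from the post: <code>statistics.stdev</code> computes the same sample standard deviation that T-SQL&#8217;s STDEV does.)</p>

```python
import statistics

def spike(values):
    # values in date order; the last entry is the most recent measurement
    sd = statistics.stdev(values)
    if sd == 0:
        return 0
    return (values[-1] - statistics.mean(values)) / sd

# A flat series with one sudden jump at the end trips the 2-sigma threshold.
print(spike([10] * 20 + [25]))  # well above 2
print(spike([10] * 5))          # a perfectly flat series spikes at 0
```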
<p>And those are the base tools. Get them into a view or schedule a stored procedure to run periodically and email you the metrics that trip your thresholds. And please let me know if you have better (more readable) approaches in these or other SQL flavors, or if you&#8217;ve put these kinds of measurements to interesting uses.</p>
<p style="text-align: right;">—jhunterj</p>
]]></content:encoded>
			<wfw:commentRss>https://jhunterj.com/2012/11/08/spikes-slopes-and-sql/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>SSRS Drillthrough Reports for Mobi Reports Pro</title>
		<link>https://jhunterj.com/2012/10/24/ssrs-drillthrough-reports-for-mobi-reports-pro/</link>
		<comments>https://jhunterj.com/2012/10/24/ssrs-drillthrough-reports-for-mobi-reports-pro/#comments</comments>
		<pubDate>Wed, 24 Oct 2012 11:46:14 +0000</pubDate>
		<dc:creator><![CDATA[Hunter]]></dc:creator>
				<category><![CDATA[Database]]></category>
		<category><![CDATA[drillthrough]]></category>
		<category><![CDATA[linked reports]]></category>
		<category><![CDATA[mobi reports]]></category>
		<category><![CDATA[ssrs]]></category>

		<guid isPermaLink="false">http://jhunterj.com/?p=10</guid>
		<description><![CDATA[The short answer: use Globals!ReportFolder in the middle drillthrough reports to fully qualify the target ReportName in any Action. The tech involved in this problem, although probably you know this if you reached this page: SSRS is a database reporting <a class="more-link" href="https://jhunterj.com/2012/10/24/ssrs-drillthrough-reports-for-mobi-reports-pro/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p style="padding-left: 8%;">The short answer: use <code>Globals!ReportFolder</code> in the middle drillthrough reports to fully qualify the target ReportName in any Action.</p>
<p>The tech involved in this problem, although probably you know this if you reached this page: <strong>SSRS</strong> is a database reporting solution for Microsoft SQL Server—you can turn data into graphs! <strong>Mobi Reports Pro</strong> is an iOS application for viewing SSRS reports on your iPad or iPhone—you can see your graphs and check your reports on the go!</p>
<p>The long problem description and solution explanation:</p>
<p>Mobi Reports has some caveats and can be a bit ticklish with some SSRS features, though. The feature that caused me the most headaches is <strong>Drillthrough</strong> reports.</p>
<p>SSRS gives you several ways to keep from burying your report user: Drillthrough, Drilldown, Subreports, and Nested Data Regions. Drillthroughs are a simple way to allow the user to click on one element of one report and load another report—drilling through the first, hopefully down to a second that makes sense and expands on the part they clicked on. (See the SQL Server TechCenter article <a href="http://technet.microsoft.com/en-us/library/dd207141(v=sql.105).aspx">Drillthrough, Drilldown, Subreports, and Nested Data Regions (Report Builder 3.0 and SSRS)</a> for full details.)</p>
<p>Mobi Reports supports Drillthroughs, as long as you keep all the reports in the same directory, or use SSRS <strong>Linked Reports</strong> to put a link to a remote report in the same directory as the base report. (See Mobi Weave Support Center&#8217;s <a href="http://support.mobiweave.com/customer/portal/articles/112946-does-mobi-reports-support-drill-down-and-drill-through-report-navigation-">Does Mobi Reports support Drill down and Drill through report navigation?</a>)</p>
<p>So far so good. I found that you also had to pass all parameters all the time (you cannot omit some parameters some of the time from SSRS&#8217;s Actions properties), and passing <code>NULL</code> (or <code>Nothing</code> in SSRS terms) as a parameter value seemed to cause problems too:</p>
<div id="attachment_15" style="width: 310px" class="wp-caption aligncenter"><a href="http://jhunterj.com/wp-content/uploads/2012/10/MobiNullError.png"><img class="size-medium wp-image-15" title="MobiNullError" src="http://jhunterj.com/wp-content/uploads/2012/10/MobiNullError-300x225.png" alt="Mobi error message screenshot: &quot;An attempt was made to set a report parameter 'subtype:isnull' that is not defined in this report. Microsoft.ReportingServices.Diagnostics.Utilities.UnknownReportParameter" width="300" height="225" /></a><p class="wp-caption-text">Mobi error message for omitted parameter.</p></div>
<p>I have no clever workaround here. Just don&#8217;t omit any parameters, and don&#8217;t try to pass NULL values (pass <code>'DUMMY'</code> or <code>-99</code> or something else that won&#8217;t show up in real data, and add code on the Drillthrough report to turn it back into <code>NULL</code>).</p>
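<p>(One way to do that sentinel-to-NULL conversion on the drillthrough side, sketched here rather than taken from the post: <code>NULLIF</code> in the target report&#8217;s dataset query. sqlite3 makes it runnable; the <code>'DUMMY'</code> sentinel and the parameter are made up.)</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# NULLIF(x, y) returns NULL when x equals y, otherwise x,
# so the sentinel quietly becomes NULL again in the dataset query.
row = conn.execute("SELECT NULLIF(?, 'DUMMY')", ("DUMMY",)).fetchone()
print(row)   # (None,)
row2 = conn.execute("SELECT NULLIF(?, 'DUMMY')", ("real",)).fetchone()
print(row2)  # ('real',)
```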
<p>The real problem happened when trying to launch a Top-Level Report, then drilling through to a remote 2nd-Level Report, <em>and then </em>drilling through again to a remote 3rd-Level report. Creating linked reports in every directory to all the remote reports won&#8217;t help; you&#8217;ll still see this:</p>
<div id="attachment_16" style="width: 310px" class="wp-caption aligncenter"><a href="http://jhunterj.com/wp-content/uploads/2012/10/MobiNotFoundError.png"><img class="size-medium wp-image-16" title="MobiNotFoundError" src="http://jhunterj.com/wp-content/uploads/2012/10/MobiNotFoundError-300x225.png" alt="Mobi error screenshot &quot;Loading 3rd-Level Report failed.Unable to find linked Report /System/Path/to/Top-Level Report &amp; Linked Reports not found." width="300" height="225" /></a><p class="wp-caption-text">Mobi error message for 3rd-Level Linked Drillthrough</p></div>
<p>The problem&#8217;s source is the handling of those Linked Reports. Yes, the Linked Report is in the same directory as the base report. Mobi Reports will dutifully load it. But go back and load it in Report Manager on a desktop web browser again, and drill through there. Notice what happens in the Report Server Toolbar as each report loads. Even though you&#8217;re using Linked Reports, all in the same directory, Report Manager recognizes when you&#8217;ve gone afield. <em>This </em>is why Mobi Reports complains when you try to drill through a 2nd-level report into a 3rd-level report.</p>
<p>You can keep Report Manager and Mobi Reports from straying out of the base directory by giving a full path to each drillthrough report in the Action properties. At first I simply passed the base directory as a parameter all the way down the line, but there&#8217;s an easier way: the built-in global Globals!ReportFolder. Everywhere you want to set a drillthrough, follow these steps:</p>
<ul>
<li>Properties → Action</li>
<li>Enable as an action: Go to report</li>
<li>Specify a report: [<em>ƒ<sub>x</sub></em>]</li>
<li>Set expression for: Action.Drillthrough.ReportName <code>= Globals!ReportFolder + "/LinkedReportName"</code></li>
</ul>
<p>(replacing the &#8220;/LinkedReportName&#8221; with the name of your linked report, of course). That&#8217;s it! Report Manager will report that you never leave the starting directory, and Mobi Reports will happily load drillthrough after drillthrough.</p>
<p style="text-align: right;">—jhunterj</p>
]]></content:encoded>
			<wfw:commentRss>https://jhunterj.com/2012/10/24/ssrs-drillthrough-reports-for-mobi-reports-pro/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
