Friday 2 December 2016

SQL Server - The Future of Query Processing?


Earlier in the week I posted a link to a Data Platform Blog article introducing the new memory grant feedback functionality that is currently available in SQL Server vNext. Here’s the link again in case you missed it:
This new functionality recalculates the memory requirement for a query and updates the memory grant value in the cached execution plan. Subsequent executions of the query can then take advantage of the updated value and operate more efficiently.
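
If you want a feel for the problem this is solving, a quick look at the memory grant DMV (around since SQL Server 2008, nothing specific to vNext) shows the gap between what a query asks for and what it actually ends up using; this is just my own illustration, not something from the article:

```sql
-- Compare requested/granted memory with what executing queries actually use.
-- A big gap between granted_memory_kb and max_used_memory_kb is exactly the
-- sort of waste memory grant feedback is aimed at correcting over time.
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       ideal_memory_kb,
       used_memory_kb,
       max_used_memory_kb
FROM   sys.dm_exec_query_memory_grants
ORDER  BY granted_memory_kb DESC;
```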
That’s pretty cool, but there’s more…
Here’s one line of the article that stood out for me:
“…is the first improvement under the adaptive query processing family of features”
This is just the beginning; there’s more to come!
I’ve spent a little time reading up and there’s some talk around SQL Server gaining self-tuning capabilities. That means queries making decisions during execution, plans changing the way they were going to operate, even automated index management and so on. Needless to say, this is exciting stuff.
So naturally I’ve been wondering about the how.
SQL Server introduced the Query Store in 2016: a fantastic feature for DBAs to monitor the history of queries along with their runtime statistics. Useful for DBAs, sure, but is it useful for SQL Server itself to look into? Absolutely. Don’t forget the 2016 version also saw the introduction of SQL Server R Services, allowing powerful data analysis – put the two together and you can see how this could be used.
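
Just as an illustration of the kind of history the Query Store already holds (and that any self-tuning logic could, in theory, chew through), here’s a simple query over the standard Query Store catalog views; my own example, not anything from Microsoft’s roadmap:

```sql
-- Requires Query Store to be enabled on the database first:
--   ALTER DATABASE CURRENT SET QUERY_STORE = ON;
SELECT   qt.query_sql_text,
         rs.count_executions,
         rs.avg_duration,               -- microseconds
         rs.avg_query_max_used_memory,  -- 8 KB pages
         rs.last_execution_time
FROM     sys.query_store_query         AS q
JOIN     sys.query_store_query_text    AS qt ON qt.query_text_id = q.query_text_id
JOIN     sys.query_store_plan          AS p  ON p.query_id       = q.query_id
JOIN     sys.query_store_runtime_stats AS rs ON rs.plan_id       = p.plan_id
ORDER BY rs.avg_duration DESC;
```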
Just to add, it isn’t happening right now; the article also has this:
Memory grant feedback will only change the cached plan. Changes are not captured in Query Store for this version.
But that isn’t ruling it out and, to me, might be suggesting it’s coming.
I’m really interested in knowing what else is in the pipeline for the adaptive query processing “family”. SP1 of SQL Server 2016 makes wait statistics available in actual execution plans, and again this is great information for DBAs, but there’s also so much potential for an instance to perform its own analysis of its performance.
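
The session-level wait information behind that plan-level view is already exposed through a DMV that arrived in 2016, so here’s a quick sketch of pulling the top waits for the current session (again, just my own example):

```sql
-- Top waits accumulated by the current session; SQL Server 2016 SP1 surfaces
-- the same kind of information in the WaitStats node of an actual execution plan.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM   sys.dm_exec_session_wait_stats
WHERE  session_id = @@SPID
ORDER  BY wait_time_ms DESC;
```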
Of course I am speculating but at the same time it does make sense. I think we’re at the start of a very exciting time and now more than ever we should be looking into the new features because they might just end up being critical to how SQL Server operates...itself :-)

4 comments:

  1. That is certainly an interesting development. If I am understanding correctly, the optimizer will be able to change a plan based on wait stats it observes during execution. This means that plans optimized for data structures and explicit limits on memory, CPU, cacheable/cached items etc. can react to contention more effectively: not just by acquiring locks and waiting on them, but by building plans that consider the real operating environment. That certainly saves building and scheduling recompile and cache eviction jobs based on historical wait parameters.

    1. Oh yes, very interesting. Memory grants seem a very useful place to start; what else is in store? Lots, by the sounds of it. I may be speculating somewhat with the Query Store and R hookup, but it seems feasible and I certainly expect the Query Store to be a big part of SQL Server's future.

  2. Note that Azure is already doing aggressive self-tuning, just via indexes, and maybe a little bit via elastic pools - they are not very open about what it is they *are* doing.

    1. It is indeed. The automated index management partly excites me and partly frightens me at the same time. It's worth adding that Azure does seem to be an ideal platform for just that sort of automation, *with the right workload*.
