Friday 30 December 2016

As one year ends another one starts...


My last post covered my thoughts on SQL Server over the past 12 months, with a look forward to 2017. This post, the last one of 2016, is about what I am planning to do next year and the direction I hope to move DTA in.
Firstly there’s SQL Clarity, the humble little blog that hit well over 20k views this year. OK, that number might seem insignificant compared to some, but for me it was a huge milestone, especially considering the amount of time I spent away from writing.
So what about 2017? Well, the first thing that springs to mind is concluding the monitoring script series. It’s at a nice halfway point at the moment, with a couple of parts on wait statistics to publish very soon. Next year will see more series posts, as I have a couple of topics that I really want to put out there as soon as I can! I’ll also be linking more to articles that I’ve been reading and generally getting a wider range of content out there.
There will also be the welcome return of the Q&A sessions. I had always hoped these would become a regular feature; the feedback was always positive and there were certainly a lot of awesome people interested in doing them. Once more, huge thanks to Kevin and Sergey for contributing! If you are interested then keep a look out, as I will be reaching out to the SQL community for volunteers very soon!
There is also an announcement that I am really excited about, which I could be making as early as the beginning of January, so keep your eyes peeled for that.
Then there is DTA; not to be confused with the Database Engine Tuning Advisor, by the way! For those that don’t know, I entered the world of consulting this year, finally fulfilling a long-term ambition of mine. I’m delighted to have got to this point, and if I had to thank everyone who has helped me along the way we would be here for a long time. Anyway, I hope you know who you are and how grateful I am.
DTA itself will start to build next year. There are a couple of areas that we will be focusing on to begin with, but before anything else happens the online presence needs to be put into place, so that will be happening very early next year. It will be a very separate entity from the SQL Clarity blog; the latter will be much more from a personal perspective. Needless to say, there will be lots happening over the next couple of months as far as DTA is concerned.
As for technology, there is so much to look into that it’s difficult to pin down a specific plan at the moment (I am, after all, reading three books in parallel!). Obviously SQL Server 2016 comes with lots of bells and whistles to look into and get accustomed to using. vNext is also something I will be spending lots of time with; the adaptive query processing features are a particular area I want to keep a keen eye on.
There's also R. I've spent a little bit of time using R this year and already I'm hooked. My personal opinion is that there will be another BI explosion next year, so it makes sense to hone my skills in that area.
All in all it has been an amazing year, and a big personal wish of mine is to be much more involved in the SQL community over the next year. Whilst it might seem that I have enough on my plate, I’m sure that isn’t the case, so I am looking forward to seeing what is in store on that front!
So all that is left is for me to say a huge thank you to everyone. I really hope you all have a happy and successful 2017!
All the very best,
David.

Thoughts on 2016 - 2017


2016 has been, as far as SQL Server is concerned, one of the best years that I can remember. The biggest event for sure was the release of the 2016 version, which many people have classed as the biggest release since 2005 - a view I wholeheartedly share.
Microsoft have added lots of new functionality, lifted some existing limitations and, rather surprisingly, made several features available to editions that don’t come with the hefty Enterprise fee. In-Memory OLTP on Express Edition, anyone?! I won’t dwell too much on the new features; you can read about them here.
Then came the announcement that SQL Server will be available on Linux, a move heavily broadcast by Microsoft and bloggers alike. Personally, I haven’t tried it yet. I will get round to it, and I am wondering if and when we’ll hear about Linux estates taking SQL Server on board, because above all else this is about increasing SQL Server’s user base.
One of the new features in 2016 that really interests me is the introduction of R Services for SQL Server. R is a rich and powerful analytical language used for data science activities such as data exploration and predictive modelling. The integration with SQL Server means that we can develop analytical solutions using very familiar tools, but also scale our analysis to huge sets of data without any of the costly traditional movement of data - a very important factor in how we will approach data analytics.
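As a very rough illustration, here's a minimal sketch of running R from T-SQL with sp_execute_external_script. The table and column names are made up for the example, and it assumes R Services is installed and 'external scripts enabled' has been switched on via sp_configure:

EXEC sp_execute_external_script
     @language = N'R',
     -- Average a column inside R and hand a one-row result back to SQL Server:
     @script = N'OutputDataSet <- data.frame(AvgAmount = mean(InputDataSet$SalesAmount));',
     @input_data_1 = N'SELECT SalesAmount FROM dbo.Sales' -- hypothetical table and column
WITH RESULT SETS ((AvgAmount float));

The point being that the data never leaves the instance; only the small result set comes back.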
On that note, it’s worth reading Microsoft’s vision for the data platform, as this really is the way they’re heading. Traditional Business Intelligence platforms just don’t cut it anymore and must make way for technologies that can accommodate larger, broader sets of data as well as present much more immersive and interactive user experiences. The world of Business Intelligence (as an umbrella term) is going to have a huge focus in 2017, with the likes of predictive analytics, AI and Machine Learning!
It goes without saying, but the cloud is a fundamental part of all of this. Whether you're thinking of using hosted SQL instances in Azure/AWS or utilising Power BI with Office 365, there is a wealth of options and features to look into. Cloud based solutions get an awful lot of publicity and whilst there are some clear advantages there is still very much a case for on-premise, regardless of what anyone tries to tell you. For me, the new features coming into vNext demonstrate Microsoft's commitment to the on-premise versions.
So for administrators and developers alike it really is an exciting time to be working with SQL Server. I certainly can't recall a time when there have been so many different things happening at once and so many new learning opportunities (I haven't even mentioned PowerShell!). All in all, 2017 promises to be a really interesting year.


I’ll definitely cover my plans for DTA (not to be confused with "the DTA") in another post, but for now I would like to wish everyone a very happy new year and I hope you all have a hugely successful 2017 - whichever roads you are taking!

Friday 16 December 2016

Identity Columns and Rollbacks


Identity columns are a very commonly used feature within tables in SQL Server. When a column is specified as an identity it will automatically increment by the specified value; so if we have an identity increment of 1 and insert 5 rows they will be numbered 1 to 5 automatically.
One cautionary point with identities is that they don’t reset themselves when rows are deleted. If we delete rows 4 and 5 the next row inserted will still get identity value 6. That’s fine, but what happens if we roll back an insert?
Here’s a quick table with three columns, one of them ID, which is set as an identity with an increment of one.
CREATE TABLE [dbo].[IdentityRollback](
       [ID] [smallint] IDENTITY(1,1) NOT NULL,
       [FirstName] [varchar](50) NULL,
       [LastName] [varchar](50) NULL
) ON [PRIMARY]

GO

Very simple and straightforward. Let’s insert a row with the following:
INSERT INTO IdentityRollback (FirstName, LastName) VALUES ('David','Alcock')
Note that I haven’t inserted a value for ID. Let’s make sure all is well in there:

SELECT * FROM IdentityRollback 


As expected there is the one row, and the ID column has started at our first increment. Now we’ll add the same row but roll back the transaction:
BEGIN TRAN
 INSERT INTO IdentityRollback (FirstName, LastName) VALUES ('David','Alcock')
 
ROLLBACK TRAN

We can run the select statement again to make sure no row has been inserted:
SELECT * FROM IdentityRollback

Exactly the same, which is good because it shows the transaction rolled back just as intended. So what happens when we insert a brand new row? What will the ID column be?

I’ve always wanted to meet Tom Hanks so let’s put him in the table.

INSERT INTO IdentityRollback (FirstName, LastName) VALUES ('Tom','Hanks')

The same select again,

SELECT * FROM IdentityRollback

but this time…

Although the transaction was rolled back, it hasn’t reset the identity; the rolled back insert still consumed a value, so the new row has an ID of 3 rather than 2. That actually happens by design; if there were other transactions creating rows (and identities) then imagine having to wait to see if every identity was successfully committed or not. If you need to rely on an unbroken sequential column of integers then perhaps the identity functionality isn't going to be the best option for you.
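If you do want to inspect, or carefully reset, the identity value then DBCC CHECKIDENT is the tool for the job. A small sketch against our demo table:

-- Report the current identity value without changing anything:
DBCC CHECKIDENT ('dbo.IdentityRollback', NORESEED);

-- Had we run this straight after the rollback, the next insert would
-- have been given ID 2 again. Only safe when you know nothing else
-- is inserting into the table:
DBCC CHECKIDENT ('dbo.IdentityRollback', RESEED, 1);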

For further reading, Pinal Dave has a solution to the problem here.

Wednesday 14 December 2016

A bit more on SQL Server alerting


If you ran the creation script from the previous post you’ll now have a group of alerts under SQL Agent which, in Management Studio, will look a bit like this:
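You can also list them straight from msdb with a quick query:

-- List the alerts the creation script added:
SELECT name, enabled
FROM msdb.dbo.sysalerts
ORDER BY name;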
The most important thing to remember about alerts is that, no matter what, they will only ever be as effective as the response to them, so this is where a sensible approach to alerting is required.
But what do I mean by sensible? Typically I see a number of problems with alerting setups. Either the alerts are inadequate and don’t cover the necessary errors (or there are none at all), or the notifications for the alerts are not set up correctly, meaning problems go backwards and forwards, delaying any fixes.
The other problem I see is an over-provision of alerts. This is usually because one or more other monitoring systems have been deployed and error notifications have been duplicated as a result. Imagine having an operational tool like System Center, some SQL monitoring software and native alerting all pinging the same message to the one recipient mailbox. Now on top of that let’s say the alerts have not been configured correctly, so informational emails are being issued every second. It’s a scary thought, but it is easy to see how a critical error might be missed in this scenario.
So how to resolve these types of problems?
In the first instance, having the alerts set up from the last post will cover a number of errors, so that is good; what we need to do now is set up appropriate responses. Natively we do this by either setting the alert to notify an operator or using it to trigger a SQL Agent job.
Please note that at this point we’re working on the assumption that Database Mail has been set up and the mail profile enabled in SQL Agent.
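If you want to sanity check that first, a quick test email does the trick. The profile name and recipient here are examples, so substitute your own:

-- Send a test email through Database Mail:
EXEC msdb.dbo.sp_send_dbmail
              @profile_name = N'DBA Mail Profile', -- example name
              @recipients = N'database-management@company.com',
              @subject = N'Database Mail test',
              @body = N'If this arrives then Database Mail is working.';
GO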
Under SQL Agent, if we right-click Operators and select New Operator, the following screen is displayed (I’ve added a random email address in this example):
Here’s the T-SQL to create an operator:
EXEC msdb.dbo.sp_add_operator @name=N'DBA',
              @enabled=1,
              @weekday_pager_start_time=90000,
              @weekday_pager_end_time=180000,
              @saturday_pager_start_time=90000,
              @saturday_pager_end_time=180000,
              @sunday_pager_start_time=90000,
              @sunday_pager_end_time=180000,
              @pager_days=0,
              @email_address=N'database-management@company.com',
              @category_name=N'[Uncategorized]'
GO


To have this operator informed of any errors covered by alerts we click on Notifications and tick any alerts we want to notify the operator of. As the email address is used for potentially critical notifications I do not like to use a personal email address; if that person is off or even leaves then alerts can obviously go unnoticed, so I prefer to use a shared mailbox that team members can access. Note: as part of a SQL audit, check the alert notifications for invalid email addresses.
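For that audit, a simple query over msdb does the job; any personal or stale addresses should stand out straight away:

-- List every operator and the address it notifies:
SELECT name, enabled, email_address
FROM msdb.dbo.sysoperators
ORDER BY name;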
The screenshot above shows the database management mailbox being assigned to all the alerts we had set up, so we’ve achieved our first aim. We can also double-click the alert in Management Studio, click Response and configure the operator as in the example below:
The T-SQL to set up a notification is as follows (notification_method 1 being email):
USE msdb ;
GO

EXEC dbo.sp_add_notification
 @alert_name = N'Severity 017 - Insufficient Resource',
 @operator_name = N'DBA',
 @notification_method = 1 ;
GO
 
The next thing to think about is utilising different operators. I'm not saying don't include the DBA on alerts, but there are certainly scenarios where you want other teams to be notified in the very first instance of an error happening.
Other teams may want to see performance alerts or user-based errors perhaps, certainly in testing or development systems (another rule: YES, you should have alerts set up in non-production). As mentioned before, the trick is to be relevant; development might not want to be notified of hardware-based issues like an Error 825 alert, but you can be certain infrastructure or storage teams would. Alternatively, perhaps some errors can be directed to a service desk as part of an incident handling process?
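As a sketch of that idea, here's an Error 825 alert being pointed at a storage team. The operator name, mailbox and alert name are all examples; the alert name in particular must match whatever your creation script used:

-- Example operator for the storage team:
EXEC msdb.dbo.sp_add_operator
              @name = N'Storage Team',
              @enabled = 1,
              @email_address = N'storage-team@company.com';
GO

-- Notify them when the Error 825 alert fires (notification_method 1 = email):
EXEC msdb.dbo.sp_add_notification
 @alert_name = N'Error 825 - Read-Retry Required',
 @operator_name = N'Storage Team',
 @notification_method = 1;
GO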
EDIT: On a side note, take the same approach with SQL Agent job failures. If you find yourself forwarding failure messages on to other mailboxes then those recipients need adding as operators and including in the notification process!
Finally we need to cover over-alerting, and for this I’ll use the Page Life Expectancy performance-based alert. If you double-click the alert in Management Studio and click Options, the following screen is displayed:
This allows us to include the alert error text in the notification email and also add additional text to the message. My preference here is to add a brief description of the error and perhaps some next steps; errors 823-825 should be followed up with a consistency check, so it’s good to add this bit of advice to the alert.
Back to the over-alerting bit: this screen also allows us to specify a delay between responses. For performance alerts you HAVE to do this. Imagine if our Page Life Expectancy had dipped under the threshold for a considerable amount of time; we’d get alerted hundreds if not thousands of times, and if a critical alert had occurred in the midst of the notifications it could very easily get missed. This makes setting an appropriate delay to the alert very important, and it is dependent on both the type of alert and the instance.
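The same setting can be scripted with sp_update_alert, where the delay is specified in seconds. The alert name below is an example - use whatever yours is called:

-- Wait at least 30 minutes between responses for this alert:
EXEC msdb.dbo.sp_update_alert
 @name = N'Page Life Expectancy',
 @delay_between_responses = 1800;
GO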
The final point to go over is where different monitoring systems are used in addition to our SQL Agent alerts. In this case it is very easy to end up with two or three notifications for the same alert, and whilst it’s nice to have all bases covered, the reality is that it can be a real pain.
Overall there is no set practice; it is very dependent on what tools you already have installed. If you have SQL monitoring applications already running then I would leave the performance-related alerts to those, and perhaps other alerts like failed logins too. Things can get even more complicated if, in addition to native and third party monitoring tools for SQL, you also have a cross-platform monitoring application like SCOM with its SQL monitoring included. In this case, from a SQL point of view, I tend to leave SCOM for the high-level alerts: service and resource related errors, those kinds of things.
SQL’s native alerting functionality will give you a good practical layer of error reporting, and if you don’t have the budget for third party tools this can be a godsend. Whether you should buy third party tools is another question altogether; for busy multi-server estates where there are lots of tier 1 instances I would say they are pretty much essential.
Overall there are a number of ways we can handle errors coming from SQL Server. The key to doing it successfully is making the most of the resources we have available and using them sensibly. A carefully planned alerting setup with different operators/recipients is a very useful approach, so think about who needs to know about the different error types and tailor your alerts accordingly. Configure alerts (from all sources) so they don't flood mailboxes and you'll have a seriously effective error handling process at your fingertips!