Thursday, 31 December 2015

2015-2016

Seeing as it's New Year's Eve, it's rather likely that this will be my last post of 2015!

In a relatively short space of time the blog has had a rather remarkable number of visitors and looking through the stats I have to say I really am astounded! What I do know is that without the support and interest from fellow professionals on the various social media sites the blog wouldn't have been worth doing; so for that I am extremely grateful.

Moving forwards I have all kinds of plans and ideas for new content. I always wanted the blog to be a place where new and would-be DBAs could come and share in other people's knowledge and experience. The Q&A sessions are something that I believe will offer an incredible insight and will really benefit those people new to SQL Server.

(I have finalised the first draft of questions and will be sending them out some time next week btw) 

Of course, none of that would be possible without the people that are actually doing it, so for those people who have offered their time I sincerely thank you.

Being part of the SQL family is perhaps the biggest factor in why I enjoy life as a DBA. If I had to thank people individually I would still be here typing well into next year, but for those people that I have connected with over the past 12 months especially, it has been an absolute pleasure. I have learnt a heck of a lot from all of you and all that is left for me to do (for this year) is wish you an extremely successful 2016.

All the best guys & many thanks.
David 


Wednesday, 30 December 2015

Bad fixes - When a solution is actually a new problem.


My favourite SQL meme of 2015

One of the things that frustrates any DBA is bad advice, especially when it appears to 'fix' a problem when in actual fact it could well be making things much worse. 

Why you shouldn't blindly follow advice is covered in my SQL's 50 Shades of Grey...Areas article but for now, here are a few examples of how problems can be made worse by applying a quick fix:

I'm seeing lots of CXPACKET waits, how do I fix these?
Bad Advice = Set the maximum degree of parallelism to 1, no more CXPACKET waits!

I’m seeing index fragmentation occur quite frequently on some of my indexes, what should I do?
Bad Advice = Set fill factor on the server to 70, no more fragmentation problems!

I’m worried about TempDB contention. What should I do?
Bad Advice = Set the number of files to the number of cores, no more contention issues!

Yes all of the above might look like they have magically made your system better. The bad news is, they haven't. To make matters worse, without taking the time to correctly diagnose (and understand) an issue you are putting your systems at risk. 

Setting MAXDOP to 1 will stop CXPACKET waits, but were they really a problem in the first place? With a setting of 1, any queries that might benefit from parallelism can't use it. That means anything with even a slight level of complexity is going to be slower.

Blindly setting fill factor server-wide will leave free space at the page level whenever an index is rebuilt, which will reduce fragmentation. That "room", though, will eat up space on disk and in memory, and that can and will cause performance problems.

So what about TempDB? Well, what if you have 32 cores?! I’ve seen this and sure enough there was no evidence of contention but the performance impact of having so many files was quite evident.

Here's the point. Each of these cases could have been resolved correctly with a bit of research. 

So what about those CXPACKET waits? Well, they are a natural product of parallelism and can often be considered benign. You should, though, check your MAXDOP and cost threshold for parallelism settings to see if they are appropriate for your workload. Only in specific cases should a MAXDOP setting of 1 be used.
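If you want to check what those two options are currently set to, here's a minimal sketch using sp_configure (both are advanced options, hence the first two lines):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';      -- 0 means let SQL Server decide
EXEC sp_configure 'cost threshold for parallelism'; -- the default of 5 is widely considered low for modern workloads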

A high frequency of index fragmentation should never be 'fixed' by a server-wide fill factor setting. Instead, monitor any potentially troublesome indexes closely and decrease the individual fill factor (on the index) by 5% at a time.
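As a minimal sketch of that (the index and table names here are purely hypothetical), lowering the fill factor on just the problem index looks like this:

ALTER INDEX IX_MyIndex ON dbo.MyTable
REBUILD WITH (FILLFACTOR = 95); -- start small, re-monitor, then decrease again if needed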

For TempDB you can read more in my post SQL 2016: Number of TempDB files; again, it's a question of performing analysis to find the right balance between the number of files and reducing PAGELATCH waits.

So one of the golden rules for a DBA is to never blindly apply a setting within SQL Server, especially when looking to fix an issue. Always look to understand the problem: what actually is it and why is it occurring? Then take an evidence-based approach to appropriately tune your instance.


Tuesday, 29 December 2015

Groan

I am trying to fix the links to stay in the same window!

The DBA job interview

I previously mentioned how a lot of new content on the blog is going to be aimed at would-be DBAs, those people who are looking to get into working with SQL Server and are perhaps looking for a bit of a heads up on what to expect.

By the way, it probably won't look like this...




This particular series of posts is going to be focused on the DBA job interview. I have been fortunate enough to have been on both sides of the interview table many times, so these posts will show you what sort of questions get asked, what answers we are looking for and also what questions you should be asking too. After all, the interview is a two-way process.

The first post is going to focus on the non-technical questions that you are likely to be asked. That might actually surprise some of you because of course a DBA is a very technical role, but when I am interviewing candidates I don't ask that many technical questions and I tend to focus on other aspects that are equally as important. Certainly in a first interview, or if I am interviewing for a junior DBA, the technical bits would be at an absolute minimum, but don't worry, the technical parts will get covered too.

OK, so on with the first post: the DBA job interview, non-technical questions.

SQLIO Retired

SQLIO has now been retired by Microsoft. The download is no longer available and the utility has been replaced with Diskspd. More information is available from the SQL Server team here.

Friday, 25 December 2015

The DBA Interview: The questions you need to ask.

It's the golden rule of interviews: make sure you ask questions! The interview is very much a two-way process, but what should you ask that is actually going to be of some benefit to you rather than just fill time?

I've compiled a list of questions that you can ask at a job interview to help you understand what type of DBA they are looking for and what the role will entail. The information will also help you get a few pointers about what the company is like and give you a few ideas of things you could be looking into to help with the new role.

Does the role sit within a dedicated DBA team or does it sit in another area like infrastructure or development? 
Where the role sits could dictate the type of work you're expected to do. In a development team you could well be code heavy; conversely, in an infrastructure team you might be expected to perform system administration tasks. I can hear many DBAs reading this and saying "well, I'm supposed to do everything!". There is that too :-)

Ask questions about the SQL estate. 
What is the largest database and what is the maximum number of concurrent users? Are the instances physical, virtual or cloud based? Are there defined maintenance periods where downtime is accepted? Ask about recovery objectives and what high availability solutions are currently employed. You might also want to ask if the estate is documented, because you want to know if you're paying Inheritance Tax!

All of these questions will give you a good impression of what you will be working with. Lots of "don't knows" to these questions might raise an alarm bell or two! 

Don't just ask about production instances.
Ask about a typical route to production. Are changes promoted through development, test and pre-prod etc? I also like to ask if resource is available for dedicated SQL sandbox and development environments. Why? Well, I want to test my database restores, query tuning, patching etc etc.

Will you be working with other database types? 
This is an extremely important question to ask. Even more so if you'll be the only DBA in the office, because that means you're the only database guy, and that means every single database, whether it's MySQL or Oracle. Maybe even Access if you're really unlucky!

In addition to databases, are you going to be working with other products in the Microsoft SQL stack?
In my time as a DBA I have had to undertake a lot of work using Reporting Services and Integration Services too. For the record, not so much Analysis Services apart from backups, but even so, you might be expected to.

What applications will I be supporting? 
SQL is the backbone of many applications used today: BizTalk, CRM and Dynamics AX to name just a few. If you've managed any of these applications before then you have now opened up a great opportunity to discuss your previous experience. If you haven't, then getting a heads up now gives you the opportunity to research their typical behaviour, configuration and maintenance requirements.

Is home working provisioned for? 
Silly question I hear you say, this is the 21st century man! Well take it from me, not every company out there embraces "new technology" like being able to dial in from home. If you're not able to do that then consider how you're going to fix a full transaction log at 3am.

What training opportunities are available?
The company may have a training library or prefer to send you on training courses. If you're really lucky, maybe they want to send you to SQL Cruise! For me, I like to know that a company is just as keen on keeping my skills up to date as I am.

All of the above should give you plenty of information about the type of company you could be working for, what type of DBA they need and highlight some areas that might need your attention when you get there. 

Armed with this information you can hit the ground running like an Olympic champion when you arrive!

Good luck.

The DBA interview: Non-technical Questions

Welcome to the first post in a series dedicated to the DBA job interview process. 

This particular post isn't going to go into any of the technical questions you might get asked but rather stay on the more general side of things. Whilst relatively high level, I use these questions (or rather your answers) to look into how you operate as a DBA, what your attitude is to learning and whether you're passionate about what you do.

I find that these questions are particularly useful for first interview situations or perhaps when interviewing for a junior DBA position.

One of the most common questions you will be asked is to name some of the industry experts that you follow. Maybe you read their blogs or watch their videos. I always ask this question and I'm not just looking for a list of names either; I will always follow up a response and ask why.

And it is the 'why?' that is most important. Anybody can mention a Brent Ozar or a Paul Randal; anyone could have looked those names up that very morning. If a candidate can go into extra detail, perhaps talk about specific articles and training videos and why they like them, then that is a big plus in my book.

This shows an active interest in the DBA life, because we all use an awful lot of the resources out there. It also highlights whether you enjoy learning new things, so another question that I often ask is what a candidate's preferred learning style is, i.e. do you enjoy reading technical manuals or watching video how-to guides?




Now I will always follow this question up by asking a candidate what the last thing they learnt was. You would be amazed at how often people freeze at this question, and it isn't even a googly (curve ball for my US friends). An appropriate answer doesn't have to be too in-depth, just enough to show me that you take learning seriously or, perhaps more to the point, that you enjoy it.

I can actually remember one particular candidate answering this question by talking about the (then forthcoming) In-Memory technology, and we discussed some of its key benefits and limitations. It was a great little discussion and it really stood out for me. And when you think about it, that actually is the main point of an interview: to stand out! So keep these sorts of things in mind.




You'll stand out for sure but for all the wrong reasons!

I mentioned the DBA community earlier. Another common question that you might get asked is how you interact with fellow professionals. Not so much team members (that's a given, you'll get asked that!) but in the wider community. Not just by reading blogs, but do you contribute by posting in forums perhaps? Have you ever asked the community for help and if so, how? It could be specific websites or using social media like the Twitter hashtag #sqlhelp. Just being able to demonstrate how you interact with other database professionals is another big tick in the box, especially if you are helping others too.

To finish this particular post I will go into the #1 question in the DBA interview, and that is: why be a DBA? It encompasses answers from the other questions that I have mentioned - the community, learning, helping others - and it could cover certain technical aspects too. For me, asking this question shows me whether a person is passionate about working as a DBA or whether they are just ticking over. No prizes for guessing which is better in my eyes!

So there you go, I've gone over some of the general non-technical questions that are often asked in DBA interviews. Of course there is a lot more to cover, soft skills and the technical questions too, but for now hopefully there's a little bit to think about on what people are looking for when hiring a DBA.

Wednesday, 23 December 2015

A new content idea...and I need you!

As of next year I plan to dedicate a large chunk of this blog to new DBAs and people looking to move into the role. It was actually the original intention of this blog, hence the clarity name, but so much has been going on in the world of SQL that the posts have been a little scattered.

One area I am keen to explore is the habits and routines of fellow SQL professionals, but I also want to dig a bit deeper too. What do they like and dislike about the role? What personal attributes do they feel are important and what golden nuggets of advice can they give?

I think this will be really useful not just for new DBAs and perhaps developers but for all of us. Hopefully it will provide an insight into not just the role but the people themselves that are doing it day in and day out.

So what do you think? Would you be willing to do a Q & A session with me or would you like to put forward some questions?

Tuesday, 22 December 2015

Monday, 21 December 2015

SQL's 50 Shades of Grey...Areas

No, this is not a forthcoming erotic SQL novel, that's next year’s big project :-) This piece has actually been brought about by a post on one of the LinkedIn forums that I frequently visit.

Recently a post went up about best practices for a new SQL Server. The article included a number of things we should do to configure and improve a SQL instance post-install.

Let's take an example question: how many TempDB files should I have? Commonly we might see one of these answers (btw, all of the answers below have been taken from actual posts in reply to the question):

1 – Leave it at one, the system should look after itself.
2 – It should have one file per core.
3 – It depends.

Answer 1 is absolute rubbish. Answer 2 might be right in some cases but the guideline is wrong. Answer 3, well it’s kind of right but not very helpful!

Clearly the conflicting advice out there can be very confusing, especially for those new to SQL and the fact is that for many settings within SQL there are no definitive answers or best settings that can just be applied. Thanks to the varying mixture of advice out there many settings and actions get referred to as "grey areas", take parallelism, memory, maintenance, indexing to name just a few.

But are they really grey areas?

You see, we DBAs are rather evidence based. Many of these grey area settings in SQL depend quite simply on what is going on.

Take our TempDB question above. There is no single right answer. It isn't 2, 4, 8 or even 32. The answer lies in evidence: are we seeing TempDB contention or not? If we are, we need to add files. Have we added too many? Well, are we seeing an increase in IO waits?
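As a rough sketch of that evidence gathering, the aggregate wait stats give a feel for both sides of the balance (note these are instance-wide totals since startup, so pair them with a look at sys.dm_os_waiting_tasks to confirm TempDB specifically):

SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGELATCH_%'   -- allocation contention, points towards adding files
   OR wait_type LIKE 'PAGEIOLATCH_%' -- IO pressure, perhaps too many files or a struggling disk
ORDER BY wait_time_ms DESC;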

Ultimately guidelines serve as just that, a guide to areas that we should look into, and not just to suggest a supposed magic number that should be used. It is the analysis and problem solving technique that needs to be taught, because although there are grey areas, all of the answers lie in black and white evidence.

Wednesday, 16 December 2015

SQL 2005 - As one door closes


I've seen lots of articles circulating about the end of life for SQL 2005, which is happening very soon, on the 12th of April next year. At the same time we're all seriously anticipating the release of SQL Server's next version, 2016, which is claimed to be the biggest release of SQL Server ever.

Talk about when one door closes...

When I first started using SQL I used SQL 2000. I don't recall too much about the product itself in any detail, but being a developer as I was back then I do remember tools like DTS, Enterprise Manager and of course, good old Query Analyzer. But oddly enough what I remember most about SQL 2000 was upgrading to 2005.

You see 2005 brought with it a bundle of new goodies. Database Mirroring and Snapshots, the Service Broker, enhancements to T-SQL like TRY...CATCH, CLR integration, a whole bunch of DMVs and DMFs, and plenty more enhancements and features than I can list here, but make no mistake, 'game changer' was an understatement!



So when I say I remember upgrading to SQL 2005 I don't just mean the product. I changed with it too. Not least in that I started to move from development into database administration, starting a career that I am still in and thoroughly enjoying now.

And why was SQL 2005 responsible? It wasn't just that it was around at the time. For me that version helped lift the lid on what was going on inside SQL; everything became more accessible. For example, the dynamic management objects provided me with new information, and with that came the necessity to gain a new understanding of how it all worked. It was amazing.


Of course there have been releases since and bit by bit our 2005 estates began to disappear. 2008 and 2008 R2 were fine, yeah they were good. 2012 and 2014 brought some more exciting new functionality to the table (pun intended) and started to pave the way forward, but nothing had the same impact as 2005 did. That is, until now.

SQL2016 CTP 3.1 has arrived. I've listened to all of the talk. I've read an awful lot about it for months now; release notes, blog posts and training material. I then jumped in. In-Memory OLTP has been enhanced from the rather disappointing SQL2014 introduction. We now have PolyBase, Stretch Database, GZip Compression, JSON support and the Query Store - among other things! 

This now feels very familiar. New and exciting functionality, new ways to explore what our instances are doing. New skills and new discoveries. This is once again rather amazing and for anyone considering a career as a SQL DBA, there really is no better time - believe me!

So to SQL2005, "so long old friend".

Hello to my new friend!




Script: Last x Full Backup History

It's a very common task to have to query backup timings and other bits of info from msdb.

Now it's pretty straightforward to select this data for a particular database, or to use the MAX function to return the last backup, but how do we get, say, the last five FULL backups per database?

You might want to check for spikes in backup duration, for example, and for this particular bit of functionality the RANK function with a PARTITION BY clause in T-SQL comes in really handy.

SELECT database_name, backup_start_date, backup_finish_date, backup_duration
FROM (
    SELECT database_name, backup_start_date, backup_finish_date,
           DATEDIFF(MINUTE, backup_start_date, backup_finish_date) AS backup_duration,
           RANK() OVER (PARTITION BY database_name       -- "group my data" per database
                        ORDER BY backup_start_date DESC) -- newest backup ranked first
               AS backup_rank -- renamed: an unquoted RANK alias clashes with the RANK() keyword
    FROM msdb.dbo.backupset
    WHERE [type] = 'D' -- full backups
) rs
WHERE backup_rank <= 5 -- controls the top x of returned records for each "group" of data
ORDER BY database_name, backup_start_date DESC;

So now we can take the output of this script and include it in Excel or SSRS to provide some trend analysis charts on backup timings etc. A very useful part of your monitoring script collection.

Some things to bear in mind: the script will include backups taken with the COPY_ONLY option (from SQL Server 2008 onwards you can exclude these by filtering on is_copy_only = 0), and if you perform maintenance on backup history, removing data over x weeks old, make sure the script accommodates this.

Monday, 14 December 2015

SQL Server 2016 - TempDB Number of Files

One change to the installation process of SQL Server is the ability to configure TempDB at the point of install rather than afterwards...which would mean another restart of the SQL Server.

The installation process now gives us the ability to specify the number of files, the initial size and the autogrowth options for our TempDB database.


Yes, I know it's the C drive - this is my sandpit VM - in ALL other instances TempDB files are on a dedicated volume.

Why would you need to do this? 
Well, multiple files help to ease TempDB contention, which roughly speaking refers to a bottleneck in accessing allocation pages: when an object like a temporary table is created in TempDB, pages are allocated and the allocation pages that track them have to be updated to essentially map pages to the table.

Multiple files assist in reducing the problem by spreading the allocation workload over the files. TempDB contention is seen on our SQL instances as PAGELATCH_UP or PAGELATCH_EX waits where the resource description is 2:1:1 (the PFS page), 2:1:2 (GAM) or 2:1:3 (SGAM).

Here's a little code snippet to look for these wait types:

select session_id, wait_duration_ms, resource_description
from sys.dm_os_waiting_tasks
where wait_type like 'PAGELATCH_%' -- allocation contention is PAGELATCH; PAGEIOLATCH indicates IO instead
and resource_description like '2:%'; -- database id 2 is TempDB

I've encountered SQL instances with only one file for TempDB so many times and a quick check for these waits can mean a very simple but effective fix for some resource problems. 

OK, that's fine, but how many files?
Good question, as a general rule it is widely accepted that there are no particular drawbacks for having four TempDB files as a minimum. You may read advice that you should assign one file per core but this is another one of those famous SQL Server myths. Evidence should be the driver here so add four files and check for contention. If it exists, add more files until a happy balance is found (note, too many files can increase PAGEIOLATCH waits).
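As a minimal sketch of adding a file (the logical name, path and sizes here are hypothetical - use your own standards and a dedicated volume):

ALTER DATABASE tempdb
ADD FILE (
    NAME = N'tempdev2',                   -- hypothetical logical name
    FILENAME = N'T:\TempDB\tempdev2.ndf', -- hypothetical path
    SIZE = 4096MB,
    FILEGROWTH = 512MB
);

Keep every TempDB file at the same size and growth settings so that SQL Server's proportional fill algorithm spreads allocations evenly across them.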

Configuring TempDB in the install process is a real time saver and a welcome enhancement. It will (hopefully) also increase awareness of potential problems like contention...and that is only a good thing.

For more information, check out Paul Randal's article on the subject of TempDB contention here: http://www.sqlskills.com/blogs/paul/the-accidental-dba-day-27-of-30-troubleshooting-tempdb-contention/

Sunday, 13 December 2015

In-Memory OLTP

SQL Server blog article on the benefits of using In-Memory OLTP in SQL Server 2016.

Although it was one of the most eagerly anticipated features of the 2014 release of SQL Server, In-Memory OLTP (Hekaton) had plenty of limitations to go with it. Many of these have been lifted in SQL 2016, so will it lead to more takers?

http://blogs.technet.com/b/dataplatforminsider/archive/2015/12/10/speeding-up-transactions-with-in-memory-oltp-in-sql-server-2016-and-azure-sql-database.aspx

Wednesday, 9 December 2015

Another great read from Redgate...



Following on from my post saying that Troubleshooting SQL Server: A Guide for the Accidental DBA was my favourite SQL Server read, here comes another from Redgate, and again it's available as a free PDF download: SQL Server Hardware by Glenn Berry.


http://www.red-gate.com/community/books/sql-server-hardware?utm_source=ssc&utm_medium=publink&utm_campaign=books&utm_content=hardware_ebook&utm_term=sqlserverhardware

Here is what they say about the book:
"SQL Server Hardware will provide the fundamental knowledge and resources you need to make intelligent decisions about choice, and optimal installation and configuration, of SQL Server hardware, operating system and the SQL Server RDBMS.2"


Tuesday, 8 December 2015

Sample database and scripts for SQL 2016 CTP3



You can download the AdventureWorks sample database and sample scripts from the Microsoft website; here are the sample script topics:

  • Advanced Analytics          
  • Always Encrypted            
  • Clustered Columnstore Index 
  • Data Masking                
  • In-Memory OLTP              
  • JSON                        
  • PolyBase                   
  • Row-Level Security          
  • Query Store                 
  • Stretch DB                  
  • Temporal Tables 


And here's the link!

http://www.microsoft.com/en-us/download/details.aspx?id=49502

Monday, 7 December 2015

It's that time of year again...time to cross train!



No, I don't mean the post-Xmas gym binge to burn off all those extra calories! I'm talking about skill set development, specifically those skills that fall outside of our primary area. Personally speaking, I haven't done too much outside of SQL for a couple of years now, having been very focused on the internals of MSSQL. That is changing this year!

Just to clear something up first: I won't be dropping any learning or development in SQL Server. SQL 2016 is coming after all, CTP 3.1 is out now and there are plenty of things to be getting stuck into in the new release - new data capabilities, analytics, data warehousing improvements and of course core SQL changes too (I know I keep mentioning it, but the In-Memory OLTP functionality is getting more and more practical; it deserves more focus!).

The cloud too. Make no mistake, the cloud is the future. It might just be me but I haven't witnessed a big embrace of this technology by the leading SQL experts, not socially anyway. That is clearly changing now and let's be honest, even if your organisation isn't using it at the moment they're sure as hell thinking about it, and you should be clued up.

So what else should we be focusing on? We are talking cross training here! My opening paragraph mentioned going outside of SQL Server and I haven't really mentioned anything particularly 'out there' yet.

Well, this is where it comes down to what interests you. The web is inundated with articles explaining what skills we should be focused on next year, what skills will be commanding the biggest salaries and, of course, what skills are just plain trendy, but I don't believe following trends for those reasons works out well in the long term, or the medium...maybe even the short!

So what about me? Well, I used to be a developer and it might surprise a few that know me when I say I quite fancy at least getting my hands a little bit dirty in the development game again. The .NET framework has evolved somewhat, JavaScript is right up there of course, JSON too...which leads rather nicely to NoSQL...I know, databases again!

Like the cloud, more and more organisations are using non-relational databases (just look at the big players that are using MongoDB presently!) and it's an area that I really want to know more about.

So, I've only scratched the surface of my training plans for next year, but I would like to know other people's views on this. Is cross training beneficial or should we stay focused on our main area of expertise? Can we stray too far from our core skill set, and if you are training in new or different areas, what are they?



Thursday, 3 December 2015

Incorrect PFS free space information for page...

This article is a bit of a follow up to my why corruption shouldn't be that scary article. Actually, I guess for this example it should be called why failed consistency checks aren't that scary.

Anyway, here's a real life example:

So let's say a consistency check (SQL 2005, btw) has failed with the following:

Incorrect PFS free space information for page (1:538) in object ID 60, index ID 1, partition ID 281474980642816, alloc unit ID 71776119065149440 (type LOB data). Expected value   0_PCT_FULL, actual value 100_PCT_FULL.

Oh oh, consistency check failures are bad; time to...



Now before I jump into what is going on here and how to fix it, I did a quick search on the internet to see what advice people were giving. I found restarting SQL Server to be fairly common, one post mentioned connecting via the DAC and running some commands, and the most common bit of advice I found was to completely restore the database!

They're all wrong, and this is why in my previous article I said to know exactly what you're up against when corruption occurs. You see, in this case it isn't actually corruption causing the consistency check to fail but a bug in an internal tracking algorithm.

This particular consistency error is one that can be fixed by running a CHECKDB with REPAIR_ALLOW_DATA_LOSS. Whoa, what did you say there!? Surely you know you should never, ever run that command unless it's an absolute last resort?!

Well, that would be true if we were dealing with corruption at the page level, but in this case, as no data is actually involved, it is fine (and the consistency check hasn't given us any additional errors).

So let's fix this by running a repair with the following command:

DBCC CHECKDB ('MyBrokenDatabase', REPAIR_ALLOW_DATA_LOSS)

That isn't going to work; in order to run a repair we have to put the database in single user mode:

Msg 7919, Level 16, State 3, Line 1
Repair statement not processed. Database needs to be in single user mode.

Here's the remaining commands:

ALTER DATABASE MyBrokenDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE -- ROLLBACK IMMEDIATE kicks out any existing connections
GO

DBCC CHECKDB ('MyBrokenDatabase', REPAIR_ALLOW_DATA_LOSS) -- run the repair
GO

ALTER DATABASE MyBrokenDatabase SET MULTI_USER -- let everyone back in
GO

DBCC CHECKDB ('MyBrokenDatabase') -- re-check to confirm a clean result

CHECKDB found 0 allocation errors and 0 consistency errors in database 'MyBrokenDatabase'.

And there we have it: a database that has failed a consistency check, and we haven't had to restore from backup. We didn't actually need to do any of this straight away either; we could have waited for some acceptable downtime to avoid any resource usage by the process.

For another approach on dealing with this error check out this post from Dave Mason from his blog.

Wednesday, 2 December 2015

GREAT READS: Troubleshooting SQL Server: A Guide for the Accidental DBA

Troubleshooting SQL Server: A Guide for the Accidental DBA

By Jonathan Kehayias and Ted Krueger



I love this book. I believe that everyone who works, or is planning to work, with SQL Server should read it.

OK, I could leave it there, but I should explain why I love it so much.

I first read this book very early on in my career as a DBA. As we all know, when you first jump into SQL it can be an incredibly overwhelming experience, especially when things aren't working properly.

And this is where the book comes in. Its authors recognise just how many people are thrown in at the deep end, and as such it is written specifically for the "Accidental DBA". It's because of this approach that I find it such an easy read, yet the technical content is not sacrificed in the slightest.

What this book does is teach the correct approach to troubleshooting SQL Server: how to gather evidence from a number of sources, put all of the jigsaw pieces together to see what the server is actually doing and where the problem lies, and of course, what to do about it.

The fundamental techniques taught in this book have stayed with me for years, to this day, and whenever I get asked about what SQL Server books someone should read, this one is always the first I recommend. 

And you can get it HERE

You don't have to take my word for it though; did I mention it's available as a free PDF download too?

Enjoy!

Tuesday, 1 December 2015

Get the most out of SQL Server 2016...

An upcoming blog series from the SQL Server Team about the 2016 release kicks off on the 3rd of December. The Corporate VP of Microsoft's Data Group is calling it the best SQL Server release in history - do you agree from what you have seen so far?!

http://blogs.technet.com/b/dataplatforminsider/archive/2015/12/01/get-the-most-out-of-sql-server-2016.aspx


Why database corruption shouldn't be that scary.

A rather clever SQL person by the name of Paul Randal once said: "There are two types of people in this world. Those that have experienced database corruption, and those that will."

He's absolutely right. It's more than an occupational hazard, it happens all of the time, but despite its commonality the mere mention of it can still strike fear into many a would-be database administrator.

Now be assured, I am not trivialising corruption here, not one bit! I've been there. It can turn a good day into a very bad one (the night too), but there are good reasons why you should not live in fear of it.

Firstly, database corruption comes in different shapes and sizes and you need to know what you're up against. Yes, it may turn out to be irrecoverable, meaning a full restore is the only way things are going back to normal. But it could be a problem with something like metadata, such as a view rowcount, meaning the object just needs recreating. The time difference between doing that and a full restore could be huge, so lesson one: don't panic, and know your enemy!

The Panic Button - DON'T press it!


There is, though, a much more important reason why you shouldn't fear database corruption, and that is because you are prepared. As another wise man once said, we don't plan to fail, we fail to plan.

If you're having to restore a database into production then you should already be armed with a few bits of critical information: what services are affected, who needs informing and also how long it is going to take. See my SQL inheritance tax article here on why you really need this info!

You also need to know that your backups are up to date and consistent. I have lost count of the number of times that I have read on a forum that a database has become corrupt and the only backup is from four weeks ago, if one even exists that is. Seriously, all the time.

This is where fear really comes from. It's the not knowing. You have to know the backups are there and ready. You have to know how long a restore is going to take, and do you know why you should know? Because you should have already done it. Recently, and documented it. You've consistency checked it too, because you don't just rely on verifying your backups!

So here's a few guidelines:

Back up your databases, regularly.
It's a given, I know. No, it should be a given, but like I mentioned earlier it is often not the case!

Know your backup jobs.
Do you know if your backup jobs are running successfully? Your backup jobs should definitely notify you in the event of failure, but also consider how you are going to know if the job actually ran. If SQL Agent is down you won't be getting any notifications that way, so think about success notifications and customised reporting on your backup tables.
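A simple sketch of that kind of customised check, querying msdb directly rather than trusting the job alone (the seven-day threshold is purely an example):

SELECT d.name, MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
    ON b.database_name = d.name
   AND b.[type] = 'D' -- full backups
WHERE d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) < DATEADD(DAY, -7, GETDATE())
    OR MAX(b.backup_finish_date) IS NULL; -- never backed up at all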

Know your RPO and RTO. 
Standard backup routines are fine but different databases have different recovery objectives and you need to ensure they can be met in the event of disaster. You also need to know what the databases are doing, what services will be affected and who to notify when things go wrong.

Are your databases consistent?
Consistency checking is another given, but how often should you do it? Personally, I like to check as often as possible (resources allowing), as a minimum one check for every full backup. System databases, every day...why not? And yes, that includes TempDB. Remember, corruption often indicates a problem in the IO subsystem and you need to know about it sooner rather than later. Speaking of which...
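As a minimal sketch of those system database checks (the scheduling is up to you):

DBCC CHECKDB ('master') WITH NO_INFOMSGS;
DBCC CHECKDB ('model') WITH NO_INFOMSGS;
DBCC CHECKDB ('msdb') WITH NO_INFOMSGS;
DBCC CHECKDB ('tempdb') WITH NO_INFOMSGS; -- yes, TempDB too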

Are you alert?
Consistency checking is all well and good, but it's scheduled, and you need near real-time notification that things are not quite as they should be. Add SQL Agent alerts for errors 823, 824 and 825, among others.
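Here's a minimal sketch of one of those alerts (repeat for 824 and 825, and wire up an operator with sp_add_notification to suit your environment):

USE msdb;
GO
EXEC dbo.sp_add_alert
    @name = N'Error 823 - IO subsystem error',
    @message_id = 823,
    @severity = 0, -- must be 0 when alerting on a specific message id
    @include_event_description_in = 1; -- include the error text in the notification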

Your backup strategy is only as good as your restore strategy.
Many times I have encountered SQL instances with sound backup plans and absolutely no restore checks. VERIFYONLY is not enough, neither is CHECKSUM for that matter...seriously, I'm not kidding! The only way to guarantee that your backup can be restored is to restore it and consistency check it. For critical databases, plan a restore check on a regular schedule.
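A minimal sketch of such a restore check (every name and path here is hypothetical; adjust the logical file names to match your own database):

RESTORE DATABASE MyDatabase_RestoreCheck
FROM DISK = N'B:\Backups\MyDatabase.bak' -- hypothetical backup file
WITH MOVE N'MyDatabase' TO N'D:\Data\MyDatabase_RC.mdf',
     MOVE N'MyDatabase_log' TO N'L:\Logs\MyDatabase_RC.ldf',
     STATS = 10;

DBCC CHECKDB ('MyDatabase_RestoreCheck') WITH NO_INFOMSGS; -- the only true proof the backup is good

DROP DATABASE MyDatabase_RestoreCheck; -- tidy up afterwards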

In the event of corruption, know what you're dealing with.
The golden rule is simple...don't panic! Make sure you know what is being affected here and how. There are several different types of corruption, with different methods of resolving the problem, and it might not necessarily mean a stack of downtime (although it still might). If you are going to have to restore then make sure of one thing: that this is not the first time you have done it!! You want to be working from a plan that is documented, proven to work and rehearsed.

So remember, corruption is not a mythical beast it is very real, it will happen, but rather than be scared of it, be prepared for it.


SQL Server 2016 CTP 3.1

Community Technology Preview 3.1 for SQL 2016 is out now and includes enhancements in several areas.

For me, I'm really looking forward to seeing In-Memory OLTP develop. This release enhances a feature that I was initially quite underwhelmed by. Very promising, sure, but extremely limited; perhaps now it is becoming more of a practical solution?

We can now look forward to unique indexes, LOB data types and indexes with NULLable key columns!

Here's the link:

http://blogs.technet.com/b/dataplatforminsider/archive/2015/11/30/introducing-sql-server-2016-community-technology-preview-3-1.aspx

Monday, 30 November 2015

SQL 2016 Enhancements to In-Memory OLTP

SQL2014's introduction of In-Memory OLTP was GREAT, until you read the small print. SQL 2016 has enhanced the technology somewhat, making it a little better! 

Details here from Aaron Bertrand over on sqlperformance.com:

http://sqlperformance.com/2015/11/sql-server-2016/in-memory-oltp-enhancements

Sunday, 29 November 2015

SP3: Improvements to troubleshooting

Improvements for troubleshooting are in the latest service pack for SQL Server 2012. 

Applying Service Pack 3 for SQL Server 2012 means that the query statistics DMV (sys.dm_exec_query_stats) will now report on memory grant, parallelism and thread information.

For a full list of new columns and further information look here: https://support.microsoft.com/en-us/kb/3107398

More information on the service pack can be found here: http://blogs.msdn.com/b/sqlreleaseservices/archive/2015/11/24/sql-server-2012-service-pack-3-is-now-available.aspx


SQL 2012 Service Pack 3 is now available.

Service Pack 3 for SQL Server 2012 is now available. 

Obtain it here: https://support.microsoft.com/en-us/kb/3072779

Information on the service pack and its improvements can be found at the following link:

http://blogs.msdn.com/b/sqlreleaseservices/archive/2015/11/24/sql-server-2012-service-pack-3-is-now-available.aspx

Friday, 27 November 2015

Simple SQL?

I'm not sure how often you have encountered the comment, "SQL Server is easy. All you have to do is click next, next and finish". Well, I'm good friends with quite a few Oracle DBAs (with no real sense of humour) so I hear it a lot; I hear it all the time!

Of course, it isn't true, not by a long way, but it did make me wonder: is the simplicity of SQL Server also one of its downfalls?

It is of course wonderfully easy to use. Microsoft's user interface is second nature to most if not all of us and yes, there are wizards, lots and lots of wizards. Installing SQL Server is so easy that anyone could do it, really. You want a new database? Easy, right click over there, select new and done. Brilliant, isn't it?

You see, the problem I have is that anyone can do it. I am sure that everyone reading this has had to fix a SQL Server at some point where all of its default settings are intact, left completely unaltered. You also know the specific ones that I am talking about here, because these are the ones that can cause you all kinds of pain (and we'd change them)! Parallelism, maximum memory, tempdb, autogrowth etc etc. I haven't even mentioned maintenance yet!

For the non-database person all of that simply doesn't matter. They've got their instance. The application is connecting wonderfully, queries are running beautifully and all is fantastic.

It's just that it isn't...

Sure in the beginning things are great but after a while strange things start to happen. Those queries start to run a heck of a lot slower than before. The OS comes under pressure, the CPU usage has gone through the roof and what the heck is a CXPACKET wait...is it the network?!?!

The person who set it all up is not to blame. We are all familiar with the term "accidental DBA", but what about the seriously reluctant one? They didn't ask for this and now they are doing all kinds of weird and wonderful things to rectify a problem they probably don't even understand.

This is an all too common scenario and it might be in part because the role is often misunderstood (seriously, we don't just look at graphs all day!). But the way I see it, the main reason is that there are often seemingly easier alternatives to bringing in a DBA to handle everything. And you know why? Because they believe SQL is simple, and anyone can do it. Right?

We know better.

Thursday, 19 November 2015

SQL Server Misconceptions - The Remote Query Timeout Setting

So this is the first post in a little series I'm doing about misconceptions within SQL Server. I'm going to be looking at a number of items that don't work as you might think they do, and a few things that might look like a good idea...but really aren't.

So the particular setting that I am discussing today is the Remote Query Timeout option and you can find what this is currently set to by running the following bit of code in Management Studio:

sp_configure 'remote query timeout (s)'

Or, because sp_configure accepts a partial option name, we could just use sp_configure 'remote query' instead; in both cases we'll get results similar to the following, depending on your setting:


Now the common misconception with this option is probably because of the following screen within SQL Server Management Studio (Server Properties -> Connections) where we see the following.


So the first checkbox allows remote connections to the server, so we could assume that the remote query timeout value underneath (here set to 600, the default) applies to incoming connections to the server. We might think we should change this because we're getting timeout errors within a client application when calling SQL, and maybe changing it will help prevent the errors.

But we’d be wrong!

Here’s TechNet to explain why:

Use the remote query timeout option to specify how long, in seconds, a remote operation can take before Microsoft SQL Server times out. The default is 600, which allows a 10-minute wait. This value applies to an outgoing connection initiated by the Database Engine as a remote query. This value has no effect on queries received by the Database Engine.

That last line pretty much says everything we need to know: the value has NO EFFECT on queries RECEIVED by the Database Engine. None whatsoever; it is used purely for outgoing connections, such as linked server queries initiated by the engine itself.
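If you do have a genuine need to change it, for example for a long-running linked server query, here's a minimal sketch (the 1200 second value is purely illustrative):

EXEC sp_configure 'remote query timeout (s)', 1200; -- in seconds; 0 means no timeout
RECONFIGURE;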

You can read more about this option here