Earlier today I saw an interesting book being advertised on a social media channel I use regularly. The book's subject matter was interview questions for the DBA, gathered by a group of people who had interviewed at various organisations, and presumably the book (because I haven't read it) is aimed as the how-to-get-that-DBA-job manual.
I'm not going to single the book out because I hope that the author had genuinely good intentions when writing it. However, having been on both sides of the interview table, I can honestly say that an interview guide for this type of role isn't actually going to help as much as having two particular things: relevant skills and experience.
For a technical role, I'm sorry to say, that is all you need. If you've simply studied the "popular" interview questions and their corresponding answers and you don't have any technical understanding, then I'm afraid you'll be found out in a matter of seconds by whoever is conducting the interview, and it's probably going to end very quickly.
The other thing to bear in mind is that no two interviews are ever the same. Some favour intensive technical tests whilst others can be very informal, but in both cases they're designed to assess your actual abilities, and no matter how hard you try, you cannot take any shortcuts, no matter how well they're advertised!
Sadly this particular book isn't on its own; there are quite literally hundreds of books, ebooks and web articles out there that, quite frankly, are setting you up to fail, and that's wrong. So if you are looking for a new role in the SQL platform, please avoid the temptation to take the fast route, because there isn't one.
Now it's fair to say that actually preparing for an interview is very different, and there are certain things you can do to help your cause, although it's pretty high-level advice: make sure you can talk about (and back up) the skills listed on your CV, give examples that relate to your previous and/or current job(s), and if the job description asks for subject matter you might not be as clued up on, research it.
The last point is crucial. I don't mean look into the top 5 interview questions about Always On, but if the job asks for it and you haven't had much exposure, then look into the technical guides that are out there and spin up some test scenarios. You can't know everything, the interviewer should be aware of that, and a candidate who makes that sort of effort does stand out in an interview, that I can assure you.
So if you are looking for a role right now in the data platform, don't forget there is also a wealth of people in the community who are there to help; just avoid anyone who is offering a clear shortcut, because it will only lead to a dead end.
Friday, 17 November 2017
Saturday, 23 September 2017
Microsoft Visual Studio Dev Essentials
The last article that I posted was about my thoughts on the future of the DBA role and the direction that it and many others are going. If you haven't read it then please give it a read; it's been really interesting to hear other people's views and opinions on this topic and, of course, a huge thank you to anyone who has taken the time to do so already.
The TL;DR version of the post is that whilst job roles will be changing to keep pace with all of the technical advancements going on around us, this isn't necessarily something to be worried about; it's actually quite an exciting time for us, with lots of new avenues to explore.
That's all fine, but how do we go about gaining these new skills, and will it be cost-effective to do so? Keeping our skills up to date has always been of paramount importance to IT professionals, and traditionally it's been down to the individual to shell out for courses and training material just to stay current. Now there has been a bit of a shift in regards to training, and thankfully it has swung very much in favour of those seeking to learn the technologies that are becoming more commonplace.
Behind this shift are the very same organisations advancing and pushing their platforms into the commercial space. The bottom line is that as well as offering these technical solutions, they also need people to be able to both use and support them. The more people who can do that, the more adoption rates increase, and with pay-as-you-use services such as the cloud this is vital.
In a nutshell, this means that they're giving us lots of training, mainly for free!
I don't want to sound like a TV/radio advert and say things like "THIS OFFER WON'T BE HERE FOREVER", but there is a little bit of truth to this. Whilst there are skills shortages in areas such as the cloud platforms, these really won't last forever, particularly with adoption rates on such a steep upward curve. Whilst I'm sure the free training options won't disappear, it does make a lot of sense to get on board now.
One option that I would certainly recommend you go and look at is Microsoft Visual Studio Dev Essentials. Although the name suggests it is very development focused, it's definitely been designed and put together for anyone working in Microsoft's Data Platform.
There's a bunch of goodies to download such as Visual Studio (surprise, surprise!), Developer Editions of Microsoft R and SQL Server, plans for Office Online and Power BI and crucially a trial subscription for Microsoft Azure.
Then there's the training options:
Now the image is a little blurry (maybe there's some copy and paste courses for me?!) but this is what you get:
3 months of online training with Opsgility (Microsoft Azure training),
3 months of full access to Pluralsight (um, everything!),
2 month subscription to Linux Academy (makes sense with SQL 2017 etc),
3 month subscription to WintellectNOW (for developers) and the various courses offered by Microsoft's Virtual Academy.
That is a lot of free training material, and when you factor in all the resources already available out there, like tutorials, labs and of course the community-contributed materials, it all makes for one superb learning platform.
Choice is great, but I would also recommend pausing for just a second before you hit the activate button on the training modules! Before you do, make sure you have a good look at what courses are on offer and what interests you, and start to formulate a plan for your learning. It doesn't have to be a strict timetable, but being smart upfront will avoid any waste; after all, if you activate every training option at once and you are already pushed for time, then some bits will be missed, it's bound to happen (and that would be a shame).
It is worth mentioning, for those wondering, that it is similar to a cut-down version of an MSDN subscription. Last time I looked, MSDN offers some of the same benefits but for double the subscription period, so if you want a paid option, or your organisation will pay for one, then it might be worth going down that route.
It's a good time for many reasons; SQL Server 2017 now has a general availability date of the 2nd of October, and with its native support for Linux and languages like R and Python, training is, as always, going to be really important. Right now there is a lot of material out there for us to start exploring new areas, and that is exactly what organisations like Microsoft want (and need), and as such they're heavily supporting it.
It's a really important time to be involved in the data platform, and with things changing very quickly it makes a lot of sense to be both keeping up with the changes and learning more about them. I'll post again shortly and explain some of the areas that I am focusing on, but for now I highly recommend, if you haven't already, taking the time to learn a bit more about Azure, or Linux, or whatever appeals to you to advance your career as a data professional.
As always, really interested to hear others views.
Wednesday, 30 August 2017
The future of the DBA role.
For quite some time now there has been a lot of talk on the various social media platforms regarding the future of the DBA role and whether or not it still has a place in the not so distant future.
I've actually written this post a few times, but it's always ended up being a very lengthy read of epic proportions, so I've decided to hack it to bits and get straight to the point(s) and hopefully open it up for some more discussion, because I think it's still a very hot topic and I'm really interested to hear people's opinions on where the role is heading.
Let's get straight to the point; is there a place for the DBA role? The answer to that is most definitely yes. Whilst databases exist there will always be a need for administration, but as core administrative tasks are being automated there will be less for the DBA to do along these lines.
One example I hear of why the DBA will still be very important is performance tuning; after all, in cloud platforms you are literally going to pay for poor performance. But with the likes of automated index management and the arrival of the adaptive query processing family in SQL 2017, we can see that the time we spend on tuning activities could well be shrinking as well.
This is really where the concern is coming from, but perhaps this is the wrong way of looking at things. Instead of worrying about what we're going to be doing, or rather not doing, we should be looking at how the technical landscape is changing and at the opportunities that lie within it.
Now I'm not saying that we should all become data scientists (and nobody else is, by the way); data science is hard. But it is a great example of an emerging area within the data platform that we might seek to explore for our own careers; in fact, there is no real reason why anyone shouldn't spend at least a bit of time familiarising themselves with the technology and its capabilities. This goes for a lot of functionality now present within SQL Server: its native support for R and Python, the likes of Always On and In-Memory OLTP becoming more prominent, the rise and rise of PowerShell automation, the fact that we can even run on Linux now and, of course, that cloud thing that everyone is talking about.
All of these technologies are integral parts of Microsoft's vision for an ever-widening data platform, and as organisations look to implement them and leverage their advantages, it is the DBA that can be at the forefront of this technical transformation, if they want to be.
This for me is the real point. The changing technical landscape is only a threat to those unwilling to explore new areas and learn new skills, and this certainly doesn't apply exclusively to DBAs; whatever your involvement in IT, this technical shift affects you and, to put it bluntly, you can either go with it or be left well behind.
For DBAs there could be some areas that are out of the comfort zone, perhaps the Dev/BI stacks or architecture, but thankfully there is an abundance of training material out there which doesn't cost a small fortune, or in some cases anything at all, not to mention all the support coming from within the technical communities. The decision really is yours how you'd like to advance.
Now it is fair to say that organisations won't simply be moving to a new platform overnight; after all, how many companies are still on SQL 2005, for example (if you need upgrading, give me a call)? But rather than sit back and worry about what might happen, or even worse do nothing about it, it's time to start looking at how the emerging technologies can benefit not just the organisations that you work with but also you as a data professional.
Labels:
automation,
career,
cloud,
data,
data platform,
databases,
DBA,
management,
SQL Server
Sunday, 16 July 2017
A question on index and statistic columns.
This post is a question around how SQL Server creates statistics for a new index, or in other words: how can the columns for an index and its statistics be the opposite way round from one another?!
Here's what I originally found when having a poke around a database; it's a pretty basic clustered index (with names blanked out to protect the innocent).
We can see that the leading column is a varchar(20) type and the next one is varchar(50). Now let's have a look at the statistics for this particular index, just for info this is the only index on the table.
This time the leading column is the varchar(50), which is then followed by the varchar(20) column, hmm. Now column order is pretty important, and interestingly enough the varchar(50) column is actually the more selective of the two, so I wondered if this is why the statistics are in a different order.
In order to test this I've used an old SQL 2014 test database that's been hanging around on a dev instance of mine. It has a very simple table consisting of an ID field, first name and last name. Incidentally, the first name and last name columns are varchar fields with lengths of 20 and 50.
Here's a new clustered index based on the first and last name columns:
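Since the screenshot doesn't reproduce well here, a sketch of the definition in T-SQL, with hypothetical stand-in names for my test table and its columns:

```sql
-- Hypothetical names: a dbo.Person table with an ID column,
-- FirstName varchar(20) and LastName varchar(50).
CREATE CLUSTERED INDEX CIX_Person
    ON dbo.Person (FirstName, LastName);
```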
After I have created it (yeah I know the name sucks btw) I'll check the statistics:
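Rather than relying on the SSMS dialog alone, the statistics column order can also be checked from the catalog views; something along these lines, again assuming a hypothetical dbo.Person table standing in for my test table:

```sql
-- Lists each statistic on the table and its columns, in statistics order.
-- dbo.Person is a hypothetical stand-in name.
SELECT s.name  AS stats_name,
       sc.stats_column_id,
       c.name  AS column_name
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
    ON sc.[object_id] = s.[object_id] AND sc.stats_id = s.stats_id
JOIN sys.columns AS c
    ON c.[object_id] = sc.[object_id] AND c.column_id = sc.column_id
WHERE s.[object_id] = OBJECT_ID('dbo.Person')
ORDER BY s.stats_id, sc.stats_column_id;
```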
That seems fine, or at least the order is the same as the index, which we'd expect.
Now let's recreate the index but modify the column order so it looks like this, with the last name now the leading column:
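In T-SQL terms the recreate looks something like this (again with hypothetical stand-in names), using DROP_EXISTING to rebuild the index with the reversed key order in a single statement:

```sql
-- Rebuild the clustered index with LastName as the leading column.
CREATE CLUSTERED INDEX CIX_Person
    ON dbo.Person (LastName, FirstName)
    WITH (DROP_EXISTING = ON);
```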
Now if I check the statistics...
They are in a different order to how I have just defined my index.
Now originally I did wonder if the statistics had been manually altered; however, just to rule that out, if you try to change the columns of statistics in SSMS you get the following:
Now of course this is with a clustered index; what happens if I try the same with a non-clustered index?
Here is my new index where once again I have altered the column order. The statistics this time look like this:
Okaaay, this time the statistics reflect the column order of the non-clustered index that I've just created. This makes (at least in this example) the statistics creation process different between a clustered and a non-clustered index.
So the question is: why the difference? Has SQL Server decided on the best column order for statistics for the clustered and non-clustered indexes, has the creation process for the clustered index just not picked up on the column modification, or does it use another method entirely when creating stats?
Monday, 19 June 2017
VIEW SERVER STATE
Quick reference post on the VIEW SERVER STATE permission within SQL Server. This is a server level permission that once granted enables a login to view the results of Dynamic Management Objects.
I find that it's typically used for troubleshooting or performance tuning related activities and is a good alternative to the good old sysadmin role membership route, especially for external people.
To demonstrate what the permission allows I'll first create a new login on a test instance with the following command:
CREATE LOGIN SQLClarity WITH PASSWORD = 'SQLCl@r1ty'
Now I've logged into Management Studio with the credentials I've created above. So let's try to select records from a DMV, in this case my instance's cumulative wait statistics:
SELECT * FROM sys.dm_os_wait_stats
I get the following error:
Msg 300, Level 14, State 1, Line 1
VIEW SERVER STATE permission was denied on object 'server', database 'master'.
Msg 297, Level 16, State 1, Line 1
The user does not have permission to perform this action.
SQL Server has been quite specific on how to resolve the issue by stating that the VIEW SERVER STATE permission was denied.
There are a couple of ways we can grant this permission. One is from the server properties > permissions window, as in the image below. Remember that although the error message indicates the issue is on the master database, it is a server-level permission, not a database one (such as VIEW DATABASE STATE).
Or we can use T-SQL syntax such as the following:
GRANT VIEW SERVER STATE TO SQLClarity
Now the results from the DMV are visible without error.
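As a quick sanity check, the login itself can confirm that the grant took effect; sys.fn_my_permissions lists the effective permissions at the server scope:

```sql
-- Run while connected as the SQLClarity login.
SELECT permission_name
FROM sys.fn_my_permissions(NULL, 'SERVER')
WHERE permission_name = 'VIEW SERVER STATE';
```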
This is a really useful way of restricting access for what could typically be viewed as an administrative task; however, one final word of caution. This permission is applied at the server level and gives access to all of the Dynamic Management Objects, and whilst in this particular case something like wait statistics might not be that sensitive, the DMVs and DMFs do expose a lot of information, so you have to bear this in mind when applying this level of permission.
Tuesday, 13 June 2017
Databases and DevOps
This is my post for T-SQL Tuesday #91, hosted this month by Grant Fritchey; the subject this time around is Databases and DevOps. For those who aren't aware what T-SQL Tuesday is, it's essentially a monthly blog party where the host (Grant this time) decides on a topic and fellow bloggers write a related post on the subject; you can read more about it here.
My post is going to be rather high level (what's new, I hear you say!) and that's because this is where I often see DevOps fail: people don't quite grasp the fundamental concepts and requirements to make it work. But to begin with, sing along with me for a second:
Now this is a story all about how
My life got flipped-turned upside down
And I'd like to take a minute
Just sit right there
and I'll tell you about how implementing DevOps sometimes fails.
Okay, it doesn't rhyme, I stole the lyrics and I certainly can't rap (not without whisky anyway) but for some people the concept of DevOps does bring with it the idea of having their (working) life being flipped upside down. People get confused about what it all means and this can cause resistance, an unwillingness to look at what DevOps is trying to achieve and essentially hold on to their current way of working.
The main cause of this tends to be how people go about implementing DevOps. I've seen organisations sort of grasp at the concept, try to introduce it too quickly or even try to impose it and then, quite understandably it fails miserably each time.
People (or indeed companies) tend to focus on the lower levels of DevOps or even try to get the benefits straight away, the "continuous this" and the "continuous that" when in actual fact they're not even starting at the right place, a case of crawl before you can walk if you like.
The phrase DevOps brings together two different terms, Development and Operations, so to make a success of it we need to think along those exact same lines. That means we need to focus on two things: communication and collaboration.
Communication is easy, right? After all, everyone kind of talks to one another, so what's the problem? Well, look at the traditional relationship between the Developer and the DBA (operations). Both have been working in very different styles for many years now; developers are making constant changes, pushing out releases as often as possible, whereas the DBA is trying to put the brakes on and keep the systems in a stable state.
This often results in push-backs, and whilst they will certainly communicate, it might not necessarily be the right kind of communication, and now we've got to try to get them to meet in the middle somehow and work in a very coupled fashion!
Fundamentally what is needed is an understanding of each other's roles. For me this is the real starting point of DevOps, and although in some cases this will mean the breaking down of walls, in no way is it an impossible task. Introducing each other's way of thinking without trying to abolish the existing mindset, but rather with a purpose of helping one another, is how this common approach should be formed, and taken advantage of.
Side Note: I have noticed that this sounds a lot like couples therapy!? Is this the real meaning of DevOps - are we being healed somehow!?
Ultimately this mutual understanding results in a much more solid foundation that can then be used to implement the lower levels of DevOps, such as the different technical methodologies and toolsets.
Some of the most successful DevOps cultures that I have seen are where teams contain developers that are ex-database admins and vice versa - yeah it's true, people actually do this! In these cases people haven't just brought their technical skills over to a new team, they've brought their understanding of the other functions too, and will often use that in a co-operative manner to find the best solution - essentially, isn't this what DevOps is all about?
Now I am not saying that anyone should start shuffling around their IT department because that's the last thing you want to be doing. You can't force or impose this concept, it needs to grow and to some degree let people find their own ways of understanding and working with one another. Whilst challenging yes, the process doesn't need to be threatening or overwhelming in any way and done correctly it won't feel like that.
So to go back to the lyrics right at the beginning of this post, no it won't flip your life upside down we just all need to take a minute, sit right there, and learn from one another.
Monday, 12 June 2017
SQL Server on Ubuntu - Installation Overview
As I have gotten into the habit of writing follow-up articles lately, this one is no different: it is an overview of my last post, where I installed Microsoft SQL Server onto the Ubuntu operating system.
The article is a bit link-heavy, and that's because I wanted to provide links to the web pages that I have been using to construct it. One of the great things that I have found about the Linux platform is its documentation; there is a wealth of information out there, both official and via blogs and forums. You certainly get the community sense from these pages, as you do with SQL Server, I hasten to add!
I started out using the guide available from Microsoft: https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-ubuntu which is a pretty standard instruction document for getting SQL Server installed on to Linux. Although it is a pretty straightforward process, I did have to deviate from the document from time to time; that's mainly because I have very little Linux experience, so it was also a good way to get used to the CLI (Command Line Interface).
Here's the first command:
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
This is actually where I hit my first problem, and I kind of skipped over it in the first article and went straight to the solution. Essentially, if you're following the guide without having installed curl, you will get an error like I did:
So, this didn't work but the message rather handily gives us a solution!
sudo apt install curl
Let's break it down a little bit. First sudo, which gives root permissions to a particular command; this is as opposed to sudo su, which I had to do later on in the install to switch to superuser mode for the session.
Next is apt. apt is a command-line tool which works with the Advanced Packaging Tool and enables us to perform installs, updates and removals of software packages. In this case we're installing curl, so we use the install command.
At this point our command is saying: as a superuser, use the Advanced Packaging Tool to install; and finally, curl. That went off fine, and now I tried to run the command once more:
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
This failed again with a connection refused error, and my initial thought was that perhaps there was some network configuration that I needed to do in the VM or indeed Ubuntu, but a quick search brought me to the sudo su command.
Now I have to admit, I'm still reading into the differences between sudo, sudo su etc and I encourage anyone to pick the brains of any Linux friends they have on the security layers because whilst at a high level I can see that sudo is a one time prompt for root permission whereas sudo su actually switches user and because no parameter is specified it switches to the superuser account by default.
This enabled me to install the GPG key; the apt-key command is used to manage the keys within the Advanced Packaging Tool, add is going to add a new key to the list of keys. My assumption is that because we have specified the microsoft.asc file that the - specifies that the key is retrieved from there:
So now we're ready to register the SQL Server Ubuntu repository:
curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list
| sudo tee /etc/apt/sources.list.d/mssql-server.list
A repository is essentially a collection of software for Linux. We use tools to get information about the repository then download and install the software from the designated servers. Microsoft uses two repositories for software that it builds for Linux, prod which is used for commercially supported software and mssql-server which contains the packages for SQL Server.
Once registered we can install SQL Server. The apt-get command is another command for APT, this time we are using apt-get update which download the latest package lists and latest information for all the repositories. We then use apt-get install to tell APT we're installing a package, -y to automatically answer yes to all prompts and then finally mssql-server which is our package.
sudo apt-get update sudo apt-get install -y mssql-server
That is it as far as the actual install is concerned but we now need to configure our SQL Server. To do this we use mssql-conf tool. mssql-conf allows us to make several changes that are very familiar for those who are used to administering SQL Server such as modifying file locations or enabling/disabling trace flags.
sudo /opt/mssql/bin/mssql-conf setup
In this case we are using the tool to perform setup which allows us to specify the administrator password and once set we are informed that SQL Server has started. The final command systemctl is a central management tool that enables us to perform various service management tasks.
systemctl status mssql-server
Here's the final screenshot again that shows the Microsoft SQL Server service up and running on Linux. The whole process was extremely straightforward and I'm looking forward to getting some of the other tools installed and start putting the server through it's paces. It's worth adding that the VM is running on my laptop quite happily so as long as you have 3.5Gb RAM available for a Linux box then a fully working test instance is something that is very simple (and free) to create.
The article is a bit link-heavy, and that's because I wanted to provide links to the web pages I used while writing it. One of the great things I have found about the Linux platform is its documentation: there is a wealth of information out there, both official and via blogs and forums. You certainly get a sense of community from these pages, just as you do with SQL Server, I hasten to add!
I started out using the guide available from Microsoft: https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-ubuntu, which is a fairly standard instruction document for getting SQL Server installed on Linux. Although it is a straightforward process, I did have to deviate from the document from time to time, mainly because I have very little Linux experience, so it's a good way to get used to the CLI (Command Line Interface).
Here's the first command:
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
This is actually where I hit my first problem, which I skipped over in the first article by going straight to the solution. Essentially, if you're following the guide without having curl installed, you will get an error like I did:
The program 'curl' is currently not installed. You can install it by typing:
sudo apt-get install curl
Curl is a tool that enables us to transfer data to or from a server, and in this command we're attempting to import the public repository GPG key from https://packages.microsoft.com/keys/microsoft.asc, which will enable us to install from the SQL Server Ubuntu repository. So, the command didn't work, but the error message rather handily gives us a solution!
sudo apt install curl
Let's break it down a little. First, sudo, which runs a single command with root permissions; this is as opposed to sudo su, which I had to use later on in the install to switch to superuser mode for the session.
Next is apt, a command line tool which works with the Advanced Packaging Tool and enables us to perform installs, updates and removals of software packages. In this case we're installing curl, so we use the install command.
At this point our command is saying: as a superuser, use the Advanced Packaging Tool to install, and finally, curl. That went off fine, so I tried to run the original command once more:
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
This failed again with a connection refused error. My initial thought was that perhaps there was some network configuration I needed to do in the VM, or indeed Ubuntu, but a quick search brought me to the sudo su command.
Now I have to admit, I'm still reading up on the differences between sudo, sudo su and so on, and I encourage anyone to pick the brains of any Linux friends they have on the security layers. At a high level, though, I can see that sudo is a one-time grant of root permission for a single command, whereas sudo su actually switches user, and because no user is specified it switches to the superuser account by default.
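As a minimal sketch of that difference (my own illustration, not something from the Microsoft guide), the commented commands below contrast the two modes, and whoami is a quick way to check which user a shell is currently running as:

```shell
# sudo elevates a single command and then drops back to the normal user:
#   sudo apt install curl      # one-off root permission
# sudo su starts a new shell as another user (root by default, since no
# user is named), so everything is elevated until you type exit:
#   sudo su
#   apt-key add -              # now running as root
#   exit                       # back to the normal user
# whoami reports the effective user, a quick check of which mode
# the current shell is in:
whoami
```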
This enabled me to install the GPG key. The apt-key command is used to manage the keys trusted by the Advanced Packaging Tool, and add adds a new key to the list. The - tells apt-key to read the key from standard input, which in this case is the contents of microsoft.asc that curl has piped to it:
add filename: Add a new key to the list of trusted keys. The key is read from filename, or standard input if filename is -.
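That filename-or-stdin convention isn't unique to apt-key; here's a harmless sketch using cat, which also treats - as "read from standard input":

```shell
# Piping text into cat with "-" as the filename makes cat read the
# piped data from standard input, exactly as apt-key add - reads the
# key that curl pipes to it.
printf 'microsoft.asc key data\n' | cat -
```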
So now we're ready to register the SQL Server Ubuntu repository:
curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list | sudo tee /etc/apt/sources.list.d/mssql-server.list
A repository is essentially a collection of software for Linux. We use tools to get information about the repository, then download and install the software from the designated servers. Microsoft uses two repositories for the software it builds for Linux: prod, which is used for commercially supported software, and mssql-server, which contains the packages for SQL Server.
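To see what the tee half of that pipeline does without touching /etc/apt, here's a sketch using a throwaway file in /tmp (the repository line is purely illustrative, not the real contents of mssql-server.list):

```shell
# tee writes its standard input both to the named file and to standard
# output, which is why the repository definition appears on screen as
# it is saved. /tmp is used here so no root permissions are needed.
echo "example repository line" | tee /tmp/mssql-server-demo.list
# Confirm the file now holds the same line:
cat /tmp/mssql-server-demo.list
```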
Once registered we can install SQL Server. The apt-get command is another command for APT; this time we are using apt-get update, which downloads the latest package lists and information for all the repositories. We then use apt-get install to tell APT we're installing a package, -y to automatically answer yes to all prompts, and finally mssql-server, which is our package.
sudo apt-get update
sudo apt-get install -y mssql-server
That is it as far as the actual install is concerned, but we now need to configure our SQL Server. To do this we use the mssql-conf tool, which allows us to make several changes that will be very familiar to anyone used to administering SQL Server, such as modifying file locations or enabling/disabling trace flags.
sudo /opt/mssql/bin/mssql-conf setup
In this case we are using the tool to perform setup, which allows us to specify the administrator password; once that is set, we are informed that SQL Server has started. The final command, systemctl, is a central management tool that enables us to perform various service management tasks.
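As a hedged sketch, the systemctl subcommands you are most likely to reach for with the mssql-server service are below; the stop/start/restart lines are standard systemd usage rather than anything from the install guide, so they are left as comments:

```shell
# systemctl talks to systemd, the init system that supervises services
# on Ubuntu 16.04. Typical service management tasks look like:
#   systemctl status mssql-server     # check whether the service is up
#   sudo systemctl stop mssql-server
#   sudo systemctl start mssql-server
#   sudo systemctl restart mssql-server
# This line simply reports whether systemctl is available on the box:
command -v systemctl || echo "systemctl not found on this system"
```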
systemctl status mssql-server
Here's the final screenshot again that shows the Microsoft SQL Server service up and running on Linux. The whole process was extremely straightforward and I'm looking forward to getting some of the other tools installed and putting the server through its paces. It's worth adding that the VM runs quite happily on my laptop, so as long as you have 3.5 GB of RAM available for a Linux box, a fully working test instance is very simple (and free) to create.
Saturday, 10 June 2017
Installing SQL Server on Ubuntu 16.04.2
The plan to make SQL Server available on Linux was announced way back in March 2016, and with the recent announcement of SQL Server 2017 (and the subsequent CTP releases) things certainly appear to be right on track for SQL on Linux.
It's worth adding that in recent weeks I have started to see organisations really take up the idea, and I have spoken to a few people who are creating their own test boxes and starting to think about how to use this combination. Not only that, organisations have also started to ask for people with the right technical knowledge, so if you are a DBA who hasn't had any exposure to Linux then now is probably the right time to start! All in all, it's an encouraging sign for Microsoft.
Anyway, before we get going: I'm going to be using VMware Workstation 12 Player to create the Ubuntu virtual machine. You can download the software from this link and use it to run VMs for non-commercial use.
To start off I need to download the Ubuntu Operating System ISO, which is available from here where you will find the following two download options:
I went for the 16.04.2 LTS version, and once it had downloaded the following message was displayed, which I had great delight sharing with my open-source buddies (this one is for you, Adrian).
Once the download has finished I can open up VMWare Player and select the Create a New Virtual Machine option as shown in the image below:
From here I can choose how the operating system will be installed: from a DVD in my machine, from an ISO (what we will select), or I can install an OS later. Here you can see that I have browsed to the downloaded ISO file and the install process has recognised that it's the Ubuntu 16.04.2 OS.
This actually tells me off, as usernames can apparently only be lowercase, so I fixed that and carried on to the next part, where I need to specify a name for my new virtual machine:
Clicking next takes me to the disk capacity screen. I left the options at their defaults, so a 20 GB maximum disk size with the split virtual disk into multiple files option selected:
After clicking next we move on to creating the virtual machine; however, before we click Finish and proceed with the create/install process I need to make a slight modification to the configuration of my VM.
The system requirements for running SQL Server on Ubuntu 16.04.2 contain the following:
You need at least 3.25GB of memory to run SQL Server on Linux. For other system requirements, see System requirements for SQL Server on Linux.
On the create VM window the memory is currently set to 1024 MB, so by clicking the Customize Hardware button I can change the allocated memory to 4 GB (4096 MB), as in the screenshot below:
I can then click Close as there are no more hardware configurations that I need to make and now I can click Finish and the install process will start; if prompted to install VMware Tools for Linux then go ahead and Download and Install.
Pretty neat install screen, you just don't see enough purple these days!
Once installed the virtual machine will reboot and Ubuntu will start. I get presented with a login screen where I need to enter the username and password that I specified during the install process and now I am ready to go!
Ubuntu!
Now, I was following the initial guide that is available here but ran into an error at the very beginning when trying to import the GPG keys. It wasn't a biggie, as it just meant I didn't have the curl tool, so I had to run the following command first:
sudo apt install curl
Before I could successfully run the curl command:
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
This brought up an install progress type screen, and because the Time Spent value was increasing I figured things were progressing... however, after a few minutes I was given a connection refused error!
A little bit of digging around soon led me to a solution: superuser mode, which reminded me a little of the run as administrator option in Windows.
To start superuser mode type the following:
sudo su
Then I was able to run the curl command once more. For info, here's a screenshot containing the connection error and the subsequent sudo su and completed curl command:
Now for the next step I need to register the mssql-server repository:
curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list |
sudo tee /etc/apt/sources.list.d/mssql-server.list
No errors, this is good and now I can also quit using superuser mode at this point:
exit
Now for the actual installation and to do this I need to run the following commands which will go ahead and install SQL Server:
sudo apt-get update
sudo apt-get install -y mssql-server
The next step is to run mssql-conf setup to specify and confirm the administrator password for SQL Server:
sudo /opt/mssql/bin/mssql-conf setup
Success!
SQL Server is now installed, and I can run a quick test to see if the service is running correctly by using the following command:
systemctl status mssql-server
Which brings up the following screen (with a reassuring green selection of text):
That's it, I now have a brand new test instance of Ubuntu with SQL Server running quite happily, for now at least!