
Thursday, 21 March 2019

Presenting

It seems quite fitting that my last post, "dipping into the cookie jar", made reference to my battles with anxiety, and yet this post is all about me presenting for the first time, as I did last week at the Manchester and Leeds Data Platform User Groups. OMG, I still can't believe it!

Anyway.

It wasn't that long ago that I wouldn't have been able to achieve this, and not for lack of wanting either, as it's been a bit of a dream of mine for a long time. So as you can imagine I'm feeling quite elated by the whole experience! I took an awful lot from both sessions: met some amazing people, had some great discussions about all sorts of things, and took away some great advice and feedback about presenting in general and about my session too.

So for this post I thought I'd jot down some of the things I learnt about presenting and hopefully, if you're contemplating doing it some time, it might help a bit.

Nerves
My dad told me many, many times that if you don't feel nerves there's something wrong with you; you're just not human. He's not wrong, and if I'm totally honest, as I headed over the M62 motorway on that Wednesday afternoon towards Manchester I couldn't help but think, "WHAT ARE YOU DOING?!?!".

I've come to not exactly embrace nerves, far from it, but I certainly accept them for what they are. If I couldn't care less about something then the chances are I wouldn't be nervous at all, but when something means something, there they are. With that in mind I try not to be too concerned with overcoming them but instead use them to ensure I've prepared as well as I can and that I'm focused on what I'm trying to achieve. I guess it might be called "channelling the energy" if you're into that sort of thing.

TL/DR: Everyone gets nerves, so don't try to battle them; instead use them to focus, channelling that energy into what you're trying to achieve.

Practice doesn't make perfect, but it makes you much better.
Things go wrong, fact. Preparation is without question key, but there's always something. For the second session I forgot a key prop (chocolates) that I use to make a joke about bribery being such an evil word. I'd prepared everything, done the session (with chocolates) the night before, and yet here we are.

I didn't even realise until the slide came up. Talk about a "DOH" moment, but then again, what could I do? So I apologised to the audience, made a mental note not to do that again (I promise), and carried on. The lesson for me was that it's impossible to prepare for every single little eventuality of "what could go wrong", so instead I focused (yeah, that again) my preparation on the good bits; I knew my slides (mostly), knew the code sections and knew my lines (that's how I sort of viewed them, but instead of being too rigid I tried to develop a bit of flow and allow a natural conversation).

Above all, prepare as best you can, don't fear the what-ifs, and if something doesn't go to plan, don't beat yourself up; there's always next time.

TL/DR: Don't waste time fearing mistakes, they happen to everyone. Relax. You know your stuff.

Self-Doubt and Imposter Syndrome
Another fact: I've never been to a session where the presenter knew everything.

It's true, and yet my old mate imposter syndrome was determined to make me question absolutely everything, even just standing up there. I went through it, seriously I did. Do you think you're some sort of expert? What happens when someone asks you a question and you don't know? What if somebody spots a mistake? What if you've got no idea what you're talking about?! You don't belong up there!!!

Oh yeah, that shit is real.

Imposter syndrome is nasty. It's as if your very own self becomes hell-bent on making you doubt pretty much everything that you're trying to do, and it comes at you from every imaginable angle to try to stop you. Lovely, isn't it?

Unlike nerves, this is something to overcome: do not accept imposter syndrome.

I don't know or pretend to understand the psychology of it, but what I do know is that there is nothing trying to get in my way and stop me from achieving something - apart from my own self-doubt. A coach of sorts once told me that in those situations you should remind yourself that you have every single right to be where you are; you've done the hard yards (or metres), so look back on them and know you've earned it [another cookie]. I love those words (thanks Mr Fisher).

Another thing that sprung to mind, and I can't remember if it was Paul Randal, Gandalf or some other wise old soul who said it, is to remember that everyone you look up to has been in the very same place you are now. They might be very established speakers, but at one point they were right where you are. I took a lot from that.

Leading on from that: nerves, self-doubt, imposter syndrome, they affect absolutely everyone. The reason why you are where you are is because you've earned it; now is your time.

TL/DR: Your own self-doubt is yours, so own it; you're exactly where you deserve to be. You've done the hard bit, and self-doubt doesn't know what it's talking about.

Breaking the ice
The first five or so minutes are quite a rollercoaster. I turned to humour: for Manchester I had a joke about it being my first time speaking, only that was a bit of a lie; it wasn't as if I'd just driven over and words had started coming out of my mouth. I also have a particular slide, my "about me - part two", that shows I'm quite happy to poke fun at myself. I quite like laughing and, to date, it's always got a bit of a laugh from the audience as well.

The fun-filled game EVERYONE is playing!!

For me the first few minutes are a great opportunity to give a pretty informal introduction to who you are and what your session is about. After the intro I used a bit of a quiz element, "fun facts about guillotines" (I sort of borrowed this idea from Paul Andrew's Azure Icon Game), to promote a bit of audience interaction. That really helped; it was very light-hearted, a bit of a laugh (if you can laugh about guillotines), and it just broke the ice a little more.

TL/DR: The audience are real people. They know and appreciate what you're doing, they're there to support you so get to know them a bit.

Learn from the experience
Doing two sessions on two nights actually turned out to be a great decision, despite a lot of self-doubt telling me it was a ridiculous idea. I got a lot of feedback after night one, so being able to put that into practice straight away was a big help. I just wish someone had said "don't forget the chocolates".

I also got a lot from the questions being asked in the session, and I made mental notes to include a couple of them the following night (the cost/optimisation relationship). They were really excellent points and, for me, improved the session for next time. Leeds was no different: I got asked about what query tools I use, so when I next present the session there's a slide on that (with some new bits I've learnt about recently)!

TL/DR: Take all the learning points that you can from the experience.

Enjoy it
Yes I had nerves, yes I suffered greatly with imposter syndrome, yes I was absolutely dreading it.

At the same time I rehearsed, I had an approach in mind, and I used some bits and pieces to gradually ease into delivering the presentation. I didn't aim for perfection; I really didn't need the pressure. Some minor things went wrong and I forgot a couple of bits, but hey, these things happen: mental note, carry on.

TL/DR: We get the best rewards when we go out of our comfort zone and "DO IT".

I'm no expert, full stop, but I learnt stacks from what was an absolutely brilliant experience. Am I glad I did it? Too right I am, and now I'm focusing on the next step! Is that a scary prospect? Too right. Is that, or any element of self-doubt, going to stop me?

Hell no.

Sunday, 12 August 2018

SQL Server DBA: The worst days

In a recent blog post Steve Jones posed the question: what was the worst day in your career? Great idea, by the way.

A couple of experiences that occurred early on in my DBA career sprung to mind. There was the rather nasty corruption of a critical but not highly available application database that happened mid-afternoon, which led to a very manual, overnight full restore (a legacy system means very legacy hardware).

The subsequent post-restore checks were also quite lengthy, meaning the entire recovery process concluded at around 5.30AM the next morning, which actually wasn't far off my estimate for a working system. Operationally the effects weren't too bad; transactions were captured using a separate system and then migrated into the restored database when it came back online. I'll never forget the post-incident discussion either; no finger pointing, no blame whatsoever, just a well done to all for a successful recovery operation and a genuine interest in how we could further minimise any impact in future.
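
For anyone curious what that sort of recovery looks like at the keyboard, here's a minimal T-SQL sketch of the general pattern described above: a full restore followed by an integrity check before handing the system back. The database name and backup path are purely illustrative, not the actual system involved.

-- Full restore from the most recent backup (names and paths are made up for illustration)
RESTORE DATABASE [CriticalAppDB]
FROM DISK = N'E:\Backups\CriticalAppDB_Full.bak'
WITH RECOVERY, STATS = 10;

-- Post-restore check: confirm the restored database is free of corruption
DBCC CHECKDB (N'CriticalAppDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;

On legacy hardware a CHECKDB over a sizeable database can easily run for hours on its own, which goes some way to explaining why post-restore checks end up taking so long.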

Then there was the time the execution of an application patch with a slight (and until then undiscovered) code imperfection brought down an entire production server, which just happened to be attempting to process some rather critical financial workloads from various systems at the same time. In truth it was a completely freak event that happened on a combination of very old systems that were considered flaky at best.

The systems were brought back online quickly enough, but tying together the results of the various processes that may or may not have worked took hours and hours of querying with lots of manual updates. It might sound terrible, but because of the coordinated effort between different teams and individuals it actually took a fraction of the time that it could have done and, not only that, the data was confirmed to be 100% accurate.

Want another corruption tale? Why not. How about the time a system database on a 2005 instance went all corrupt, rendering the instance completely useless? Of course it never happens to a system that nobody cares about; no, this was yet another critical system. The operational teams went to plan B very quickly but, even better, a solution that avoided large restores was implemented quickly, so the downtime, although handled well, was still significantly reduced.

Looking back there are plenty more; I think it's fair to say that disaster is a real occupational hazard for database professionals. And yet, despite being labelled "worst days", I actually look back on them with a large degree of genuine fondness.

You see, disasters are always going to happen when databases are involved, it's a fact, and how we deal with them at the time is just as important as how we learn from them. In each of these examples a recovery plan existed from both technical and operational viewpoints; as well as that, everyone involved knew what was happening, knew what to do and, critically, knew not to add any additional pain to the situation but to arrive at the solution as quickly as possible.

Learning from these events meant asking the right questions and not taking a viewpoint of blame. How can we prevent this? How can we make the recovery process more robust? What can we implement technically and operationally to improve our response times? And, critically, when can we schedule the next Disaster Recovery test?

Worst days? In one sense, most definitely yes. Nobody wants to be in the middle of a technical disaster that's going to take hours to resolve, but a solid recovery plan, a collaborative effort towards a solution and an open forum to analyse and learn from the event make these memories much less painful!

Building a DevOps culture

In my last post I described some of the reasons why organisations fail to implement a successful DevOps methodology. Often there is a misunderstanding of what DevOps actually is, but just as often it's existing working cultures that hinder progress.

From webopedia: "DevOps (development and operations) is an enterprise software development phrase used to mean a type of agile relationship between development and IT operations."

Being a consultant I often work in the "space" between different technical roles, which gives me an ideal view of how well companies are utilising DevOps practices or, sometimes, where they're going wrong.

For me the most crucial part is building strong collaborative working relationships between teams. In the database world this isn't just between developers and DBAs, but between any teams that in some way interact with SQL Server. This includes support teams, testers, release and change teams, architects and technical management.

How we seek to build these relationships is pivotal. As I mentioned in the last post, forced collaboration is a common approach that ends up being counterproductive. Organisations, in their rush to build a DevOps culture, can be too rigid in how they look to develop increased inter-team working, often over-formalising and creating very process-driven activities.

Instead, organisations should look to encourage rather than dictate, and I've seen many successful ways that this is achieved, often in a hands-off management style that lets technical teams freely integrate and discuss innovative ways of doing things in much more open forums. When consulting with database professionals we explore common pain points that are shared between teams and how solutions are, in some way, arrived at by leveraging one another's expertise.

I say in some way because often the issue isn't strictly technical but comes down to process instead. Release and change management are great examples of this; developers naturally want to make more and more frequent changes to systems, which is against the better nature of traditional DBAs.

Understanding each other's objectives is the first stage of developing a collaborative effort to build upon existing processes (not work around them) to help each other achieve common aims. The word I never use is compromise, and it should never feel like that. All involved should feel like they are building solutions together and should not feel like they are required to relinquish something to get there.

This is a common side effect when the approach to DevOps is unbalanced and teams become involved at different stages. Instead, organisations must involve all parties as early as possible and avoid maintaining those traditional silos.

Increased cross-functional working means that teams work much faster together, and this affects both development and problem management. One of the obstacles to moving to a platform of more frequent deployment is the risk of introducing much more failure to production systems. Done correctly, a DevOps methodology negates this by increasing the stability of systems and reducing the complexity of releases to environments, which in turn makes faults much easier not just to recognise but also to rapidly back out from.

It sounds like a case of everyone wins, and typically I would honestly agree with that statement. A DevOps methodology has benefits for teams and businesses alike: better employee engagement, more personal development opportunities, increased productivity, more stable environments, more frequent enhancements and improved response times to defects.

Issues that prevent a DevOps methodology from being implemented can often be resolved from a cultural perspective. A key starting point for organisations is to encourage collaborative relationships early on, and for teams and individuals to seize the initiative and start talking about common pain points, desired solutions and building shared knowledge.

Wednesday, 8 August 2018

Reasons why DevOps implementations fail.

Over the last few years we have seen a monumental rise in the number of organisations adopting a DevOps working culture. This trend shows no signs of slowing down whatsoever and, whilst many are now realising the benefits of these working practices, many are also struggling with adoption; in some cases it has either stopped completely or not even started.

There are a number of reasons why this happens and I've seen some common causes, which is actually a good thing because we can recognise where they are occurring, or even prevent them before they start to become a problem.

No clear direction.
It's very difficult to drive to a destination if you don't know where you're actually going (trust me on this, I've tried it). This is obviously very reasonable advice, yet many DevOps projects fail because of a lack of direction. It's actually a common issue with many buzzing trends, particularly in IT, where organisations rush into a technology stack or movement just for the sake of doing it. This almost inevitably leads to failure.

Organisations need to fully understand what a DevOps culture is, its objectives and its close relationship with their existing business processes and frameworks. A common misconception is to view DevOps as a direct replacement for ITIL when in actual fact it's more of a cultural change built on top of ITIL principles. By fully understanding DevOps, the benefits of adoption become much more viable and ultimately the path to its success becomes much simpler.

Adopting a silo approach to DevOps.
I often see individual teams being very successful in implementing components of DevOps practices, only for other teams to be behind in terms of adoption and/or understanding. The classic case is the developer and the DBA: the developer is pushing for much more frequent releases (a realised benefit of DevOps), but the DBA, who perhaps isn't on board, is trying their best to slow down all of these changes to their production databases. In the words of Bon Jovi, "we're half way there".

This lack of cohesion or shared direction can result in a significant bottleneck, and the DevOps practices start to creak a little. Then other unintended side effects start to creep in, such as blame and finger pointing (some of the things that a healthy DevOps culture seeks to eliminate), and then it can all start to fall apart.

DevOps for databases is one particular area that is heavily reliant on lots of people in different roles working together in a collaborative manner. An organisation must recognise this and start to encourage teams to engage and build with each other in the early phases of a DevOps implementation, but organisations also have to be very careful in how they seek to develop this collaborative culture...

Forced collaboration.
I believe collaboration underpins the entire DevOps methodology, so it makes perfect sense for organisations to work towards developing much closer working relationships between teams. However, organisations can also over-formalise things, making the activity seem very process-driven, which often leads to much less buy-in from individuals, or even entire teams.

This causes obvious problems, not least the silo approach mentioned in the previous point, so organisations have to find the balance between being almost relaxed in how they let relationships build and, at the same time, providing a certain degree of steer. This isn't as easy as it sounds and it is certainly reliant on strong leadership. In my experience, successful implementations have been led by those who enable positive change rather than those who try to dictate it.

Rushing into new tools.
New tools are awesome, fact, and in a DevOps ecosystem there are so many to pick and choose from, each bringing new ways of doing things and [hopefully] improving productivity. The advantages are great, without a doubt, but tools are often implemented way too early without a focus on the underlying processes. This can significantly reduce the effectiveness of what a particular toolset/platform is trying to achieve; a release management tool, for example, won't improve a change/release process if the process is fundamentally flawed.

The most successful DevOps implementations focus on people and process first, leveraging the strengths and principles of existing frameworks and building strong collaborative working practices. New tools are going to be an important part of a system designed with DevOps principles in mind, but they need to be leveraged correctly and for the right reasons.


These are some of the common pitfalls that I've seen occur during DevOps implementations, many of which are deeply rooted in the working culture, not the technologies. There is undoubtedly so much to gain by adopting the principles, but often it requires organisations to step back, observe their working practices first and spend time exploring the benefits of working with DevOps.

In my next post I'll cover how to address some of these issues and offer some insights into the benefits of building collaborative relationships between data professionals.

Saturday, 23 September 2017

Microsoft Visual Studio Dev Essentials

The last article that I posted was about my thoughts on the future of the DBA role and the direction that it and many other roles are going in. If you haven't read it then please give it a read; it's been really interesting to hear other people's views and opinions on this topic and, of course, a huge thank you to anyone who has taken the time to do so already.

The TL/DR version of the post is that whilst job roles will be changing to keep up with all of the technical advancements going on around us, this isn't necessarily something to be worried about; it's actually quite an exciting time for us, with lots of new avenues to explore.

That's all fine, but how do we go about gaining these new skills, and will it be cost-effective to do so? Keeping our skills up to date has always been of paramount importance to IT professionals, and traditionally it's been down to the individual to shell out for courses and training material just to stay current. Now there has been a bit of a shift in regards to training and thankfully it has swung very much in the favour of those seeking to learn the technologies that are now becoming more commonplace.

Behind this shift are the very same organisations advancing and pushing their platforms into the commercial space. The bottom line is that as well as offering these technical solutions, they also need people to be able to both use and support them. The more people who can do that, the more adoption rates increase, and with pay-as-you-use services such as the cloud this is vital.

In a nutshell, this means that they're giving us lots of training, mainly for free!

I don't want to sound like a TV/radio advert and say things like "THIS OFFER WON'T BE HERE FOREVER", but there is a little bit of truth to this. Whilst there are skills shortages in areas such as the cloud platforms, these really won't last forever, particularly with adoption rates on such a steep upward curve. Whilst I'm sure the free training options won't disappear, it does make a lot of sense to get on board now.

One option that I would certainly recommend you go and look at is Microsoft Visual Studio Dev Essentials. Although the name suggests it's very development-focused, it's definitely been designed and put together for anyone working in Microsoft's Data Platform.

There's a bunch of goodies to download such as Visual Studio (surprise, surprise!), Developer Editions of Microsoft R and SQL Server, plans for Office Online and Power BI and, crucially, a trial subscription for Microsoft Azure.

Then there's the training options:


Now the image is a little blurry (maybe there are some copy-and-paste courses for me?!) but this is what you get:

3 months of online training with Opsgility (Microsoft Azure training),
3 months of full access to Pluralsight (um, everything!),
a 2-month subscription to Linux Academy (makes sense with SQL 2017 etc),
a 3-month subscription to WintellectNOW (for developers), and the various courses offered by Microsoft's Virtual Academy.

That is a lot of free training material, and when you factor in all the resources already available out there, like tutorials, labs and of course the community-contributed materials, all in all it makes for one superb learning platform.

Choice is great, but I would also recommend pausing for just a second before you hit the activate button on the training modules! Before you do, make sure you have a good look at what courses are on offer and what interests you, and start to formulate a plan for your learning. It doesn't have to be a strict timetable, but being smart upfront will avoid any waste. After all, if you activate every training option at once and you're already pushed for time then some bits will be missed; it's bound to happen (and that would be a shame).

It is worth mentioning, for those wondering whether it's similar to an MSDN subscription: yes, it's very much like a cut-down version. Last time I looked, MSDN offers some of the same benefits but for double the subscription period, so if you want a paid option, or your organisation will pay for one, then it might be worth going down that route.

It's a good time for many reasons; SQL Server 2017 now has a general availability date of the 2nd of October and, with its native support for Linux and languages like R and Python, training is, as always, going to be really important. Right now there is a lot of material out there for us to start exploring new areas, which is exactly what organisations like Microsoft want (and need), and as such they're heavily supporting it.

It's a really important time to be involved in the data platform and, with things changing very quickly, it makes a lot of sense to be both keeping up with the changes and learning more about them. I'll post again shortly and explain some of the areas that I am focusing on, but for now I highly recommend, if you haven't already, taking the time to learn a bit more about Azure, Linux or whatever appeals to you to advance your career as a data professional.

As always, really interested to hear others views.