You’ve heard that In-Memory OLTP is fast, but how do you know if your workload can take advantage of this awesome feature?
We will explore In-Memory OLTP workload evaluation, use cases, architecture, capacity planning, data migration, hot/cold data, potential impact on disk-based workloads, and more.
By the end of the session, you will have an understanding of how In-Memory OLTP works, and whether your workload is likely to see an increase in performance.
Get full notes, resources, and more information here: https://groupby.org/conference-session-abstracts/in-memory-oltp-survival-guide/
For years, you've heard that you're supposed to reorganize your indexes to make SQL Server go faster. It sounds like it makes sense - keep things in order, right? But you keep doing it, and SQL Server isn't getting any faster. You've even heard that setting fill factor will help prevent fragmentation, and you're doing that too - but your indexes still keep getting fragmented every day, and users aren't happy with performance.
This advice made a lot of sense at the turn of the century, but today, things are different - and we're not just talking solid state drives. In just the first 15 minutes, you'll have a series of ah-ha moments when you realize that your daily index maintenance jobs might just be making the problem worse instead of better. Then, you'll learn what you need to do instead.
Imagine your Data Warehouse is growing exponentially to multi-terabytes and you have been tasked to make analytics queries even faster while keeping the cost of storage low. Industry-leading SQL Server 2016 columnstore technology can help you solve these challenges by reducing the storage footprint by 10x (on average) and speeding up analytics queries by up to 100x. The SQL Server Tiger team has leveraged columnstore technology with many Tier-1 workloads to achieve these goals. This session will cover three different types of workloads. For each workload, we will cover the application architecture, the challenges, and how the SQL Tiger team used columnstore index technology to address them.
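As a flavor of the technique, here is a minimal, hypothetical sketch of building a clustered columnstore index on a fact table (the table and column names are illustrative, not from the session):

```sql
-- Hypothetical fact table; names are invented for illustration.
CREATE TABLE dbo.FactSales
(
    SaleDate  DATE          NOT NULL,
    ProductId INT           NOT NULL,
    Quantity  INT           NOT NULL,
    Amount    DECIMAL(18,2) NOT NULL
);

-- A clustered columnstore index stores the table column-wise,
-- compressing it heavily and enabling batch-mode analytics scans.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales;
```

The compression and query gains depend heavily on data shape and rowgroup sizes, which is exactly the kind of trade-off the session digs into.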
I was required to prove that I had successfully installed and configured a backup solution across a large estate. I had a number of success criteria that had to be met. Checking all of these by hand (eye) would have been error prone, so using PowerShell and Pester I wrote a test to do this for me, along with an easy-to-read HTML report for management.
This session has come from that situation and is about enabling you to provide easy-to-read output that quickly and repeatably shows that infrastructure is as expected for a set of checks, also known as Operational Validation using Pester. There are many use cases for this type of solution: DR testing, installations, first-line checks, and presentation setups.
What is Pester?
Pester is a unit testing framework for PowerShell which can be used for testing your code, but also, as shown in this session, for validating your infrastructure. Adam Bertram has written an excellent post introducing Pester. It is included with PowerShell on modern operating systems, and is free to download from the PowerShell Gallery or GitHub if not included.
After this session, you will have a basic understanding of how Pester works, and the capability to examine your checklists, create your own validation tests, and provide reporting for management.
Attendees will be taken through the following:
How do you know your database change won’t affect something you haven’t thought of?
Database objects can have many dependent objects in the database and in the application. Keeping track of these dependencies can be difficult, especially if the database is used by multiple applications or services.
The first step is to have a project for the database and get it into source control.
Source control is the single source of truth that all deployments should be kept in sync with. Source controlling the database also acts as the foundation for automation of:
This automation is key to the Continuous Integration methodology. After every commit, builds and tests will run in the background and only alert if there is a problem. Builds test the deployment of the change and tests check that everything affected still works. Coverage reports indicate any gaps in the testing suite.
The tools I use to make this possible with SQL Server are:
By the end of this session, I hope you will see how a CI approach to database development can remove the unknowns from deploying database changes.
The war between DBAs and developers has been raging since the dawn of relational databases. One reason for disagreement comes from developers who want to store their data in JSON because it is fast, standard, and flexible. DBAs cringe when they hear of long text strings being stored in their SQL databases; they cry with concern, “No data validation? No schema binding?”. Is there any hope for these two warring factions to see eye-to-eye?
This session will explore the new JSON functionality introduced in SQL Server 2016. We will use T-SQL examples to learn how these functions can be used to parse, create, and modify JSON data. More importantly, we will discuss how to optimize performance when using these functions.
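As a taste of what the session covers, here is a minimal T-SQL sketch using the JSON functions introduced in SQL Server 2016 (the sample JSON document is invented for illustration):

```sql
DECLARE @json NVARCHAR(MAX) = N'{"name":"Ada","skills":["SQL","JSON"]}';

-- Parse: extract a scalar value from the JSON text
SELECT JSON_VALUE(@json, '$.name') AS [Name];

-- Shred an array into rows
SELECT value AS Skill
FROM OPENJSON(@json, '$.skills');

-- Modify: change a property and return the new JSON string
SELECT JSON_MODIFY(@json, '$.name', 'Grace') AS UpdatedJson;

-- Create: project relational rows as JSON
SELECT 1 AS Id, 'Ada' AS [Name]
FOR JSON PATH;
```

Since JSON is stored as plain NVARCHAR, performance hinges on techniques like computed columns over `JSON_VALUE` with indexes on top, which the session addresses.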
By the end of this session, DBAs and developers will know how to efficiently work with JSON in SQL Server 2016. It will also usher in an era of peace between DBAs and developers…
… at least until someone brings up the topics of cursors, NOLOCKs, or Entity Framework.
Get full session notes, resources and more here: https://groupby.org/2017/03/dbas-vs-developers-json-in-sql-server-2016/
SQL Server releases every 2 years. SSMS releases every month. Power BI releases every week. It’s impossible to “keep up” with technology, and it’s foolish to try. Instead, you should realize you have a limited amount of TrainingBucks: time, energy and focus. And then you have to decide how to make the most of those TrainingBucks.
First, we’ll talk about how exposure and mastery are different goals with different paths. We’ll cover how exposure does not equal mastery. Second, we’ll talk about strategies for adding more learning with the time you already have available. Third, we’ll cover what types of training lead to deeper, stronger learning. Finally, we’ll talk about having a theme in your learning. We’ll talk about how learning the wrong things is wasting your time.
During the presentation, I will use an interactive Power BI report to visualize how using these strategies will lead to more growth for less effort. This report will be publicly available, so the viewers can follow along as they watch.
The goal of this talk is to give you an analytical framework and multiple dimensions to look at, instead of the simplistic “more is better”. More is not better, and if you try to drink straight from the firehose, you’ll drown. By the end of this talk, you’ll have a roadmap for how to focus your training and grow your career.
Are you new to SQL Server and not sure where to begin with T-SQL? Does it feel like a foreign language? The objective of this session is to go over simple T-SQL statements and to show you how to build on them. We will focus on simple SELECT statements and translating English into T-SQL. Once you have the Rosetta Stone, you will find that T-SQL becomes second nature. We will also focus on adding a simple WHERE clause, ORDER BY, and JOIN. We will also create a simple backup script and learn about the built-in IntelliSense in SQL Server Management Studio.
This isn’t the dark ages anymore. You’ve learned that you need to put your database in source control and you’re competent with source control systems like TFS or Git. You’ve also learned how to express your database in script form using a tool like SSDT, DbUp or Redgate.
However, you still haven’t written as many automated tests as you know you should and you aren’t convinced you’re going about it the right way. You haven’t looked at the build functionality in VSTS yet or gotten to grips with build servers like TeamCity or Jenkins, and even if you have you aren’t sure how the process should work for SQL Server builds and tests.
In this session, I’ll explain how to use tSQLt to build a suite of automated tests to give you confidence in the quality of your code. Then I’ll talk through various ways to automate your database builds and run those tests on some schedule or trigger. I’ll also discuss the pros and cons of various different approaches so that you can understand which approach would suit your teams and projects.
Are you accountable for databases or database servers? Do you fix problems only to see them pop up again months or years later in slightly different scenarios? Have you been called at 2am and asked to “health check” hundreds of servers for an issue that nobody in the world has ever documented before and wished there was an easy mode to do that or even avoid the problem in the first place?
If so then Operational Validation can help you. In this session you’re going to get:
A mindset on how to handle your day to day duties in a way that keeps you efficient, honest to yourself and to others, while covering your posterior.
Free tools to fix things and make sure they stay fixed over time. Really free. As in you couldn’t pay for them even if you wanted to.
If you work with one instance you’re going to learn how to manage five hundred instances. And if you work with five hundred instances you’re going to learn how to have fun doing it.
You’re going to learn:
Jenkins, a cool web application you can install on your workstation to provide orchestration and reports.
How to integrate PowerShell with Jenkins and multi-thread efficiently in the background with PoshRSJob using ready-to-go templates so you don’t need to worry about how it works.
Then we’re going to write some tests. They’re not going to be beautiful tests. We’re going to be testing some of the scummiest, most crapulent edge cases SQL Server has to offer. Please bring tissues because there will be tears. If you’ve ever said, “I don’t know what to test”, I’ll make you regret it.
I’ll also discuss a few other PowerShell modules like CimSession and DbData and why you might want to use them for reliable and scalable tests.
By the end of the session, you should be bouncing off the wall and reconsidering your life. I want you to start screaming about Operational Validation from the rooftops, and have the tools and know-how to quickly start on your own.
How easy is it to hack a SQL Server? In this session, we'll see examples on how to exploit SQL Server, modify data and take control, while at the same time not leaving a trace.
We'll start by gaining access to a SQL Server (using some "creative" ways of making man-in-the-middle attacks), escalating privileges and tampering with data at the TDS protocol level (e.g. changing your income level and reverting without a trace after payment), hacking DDM, and more.
Most importantly, we'll also cover recommendations on how to avoid these attacks, and take a look at the pros and cons of new security features in SQL Server 2016.
This is a demo-driven session, suited for DBAs, developers and security consultants.
You know about the cloud but you’re not there yet. Is it hard? Is it easy? How do you get started? Come to this session and see for yourself. We’ll start with nothing and end up with a deployed Azure SQL Database. We’ll even run a quick PowerBI report and enable Geo-Redundant Disaster Recovery with a couple clicks.
The goal is to take the mystery out, to show the capabilities and get you thinking about what going to the cloud could look like and what it can do for you and your company. I believe the future belongs to those who have this knowledge and know where to apply it.
This will be nearly PowerPoint free: we’ll log into my Azure Portal and build out an environment from scratch, learning as we go. We’ll migrate data from an “on-premises” database into our SQL DB and we’ll query it. You’ll leave with an understanding of the capabilities, some resource links outlining what we did, and hopefully some curiosity to see what else is up there in the cloud as you start exploring with your own trial. Platform as a Service isn’t the answer to every problem, but after you see how simple it is to get started, maybe you’ll get some ideas of where it is the answer.
DevOps is a hot topic in today’s software development world, however, most of the knowledge and experience with DevOps is based around application software and ignores the database. We will examine how the concepts and principles of DevOps can be applied to database development by explaining the core concepts of managing scripts, automated building, testing, and deployment of database changes. Once you understand the principles of a DevOps database development process, you can generate scripts used for direct deployment on your instances or sent to customers for them to execute.
In this session, we will cover tracking the changes made to your database and capturing them in version control. We will show automated building and testing of your code on a separate environment and the generation of an upgrade script that a DBA and developer can review and deploy to QA, test, staging, production, or any other environment.
This session will use Redgate Software tools to accomplish specific actions, with explanations on how the same actions can be alternatively built by anyone.
When storage doesn't perform and scale it causes issues for SQL Server and can make DBA life extremely challenging. In many cases, SQL Server storage is provisioned on a SAN. However, very few DBAs even get to see a SAN or understand how it works. Even though the SAN is someone else's responsibility, as a DBA you feel the pain and you're typically a first responder when there is a problem. Understanding the "magic" behind it will make you a better DBA and help you have productive conversations with your storage administrator. After all, pointing fingers has a negative impact on your superhero status!
In this session, you will learn what a SAN is, how it works, and why it's typically used vs. direct-attached storage. These "black boxes" have a lot of capabilities, so you will also learn about some of the common features they offer today and how they can help you. You will also learn about what can go wrong, some basic troubleshooting when you're not getting what you expected, and how to communicate with your SAN administrator if you're not getting what you need.
It’s a work day, and you’re in front of your monitor battling a vicious query. You’ve already tuned a query or two in your life and have some tricks up your sleeve, but nothing seems to get this query down to a reasonable time and allow you to drink that cup of coffee you want so bad.
SQL Server 2016 adds a few impressive new features that might help you with that.
During the session, we will look at the different performance tuning aspects the new features enhance: Execution plan history and enforcement using the Query Store, live query execution visualization, DMV enhancements, new query hints, and more.
Join this session to see how SQL Server 2016 makes performance tuning easier and more efficient.
SQL Development is a broad subject with many roads to choose from. Oftentimes, SQL developers feel overwhelmed and unsure of which areas they need to focus on. The purpose of this session is to give a high-level overview and present paths for further study and growth. Subjects reviewed in this session will include coding patterns, error handling, and execution plans. Demonstrations of free tools will also help developers find ways of simplifying their daily tasks. Attendees will leave this session empowered with knowledge and directions for further growth.
Query Store in SQL Server 2016 is one of the most relevant improvements for production DBAs. It introduces a click-once mitigation approach to query performance regressions.
But that’s not the sum of all enhancements in the query performance analysis and troubleshooting space introduced in SQL Server recently.
In this session, we will learn about other enhancements that can help you troubleshoot query performance.
Ranging from new xEvents to Showplan improvements, from LQS (and underlying infrastructure) to the revised Plan Comparison tool, learn how these can help you streamline the process of troubleshooting query performance and gain faster insights.
Have you ever considered a situation where a Columnstore Index can be quite the opposite of what one would expect from it? A slow, wasteful source of painfully sluggish queries, dragging down performance and consuming an irresponsible amount of resources...
Setting the wrong expectations (it won't run 100 times faster on EVERY query), selecting the wrong architecture (partitioning by hundreds of rows instead of millions), using and aggregating large strings in the fact tables: this list is actually quite long.
What about some of the less known limitations for building Columnstore Indexes? The ones that will bite you suddenly in the middle of the project - when you do not expect it at all?
Let me show you how to achieve those painful mistakes and you will surely know how to avoid them :)
With the release of SQL Server 2016, we were finally able to play with what is, in my opinion, one of its most exciting new features: the Query Store!
The Query Store serves as a flight recorder for your query workload and provides valuable insights into the performance of your queries. It doesn’t stop there, however: using the performance metrics the Query Store records, we can decide which execution plan SQL Server should use when executing a specific query. If those two features aren’t enough, the Query Store provides all this information inside easy-to-use reports and Dynamic Management Views (DMVs), removing a great deal of the complexity of query performance analysis.
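The plan-forcing workflow described above can be sketched in T-SQL along these lines (the query and plan ids at the end are placeholders you would first read from the DMVs):

```sql
-- Enable the Query Store on the current database
ALTER DATABASE CURRENT SET QUERY_STORE = ON;

-- Inspect recorded queries, plans, and runtime stats via the DMVs
SELECT TOP (10)
       q.query_id, p.plan_id, rs.avg_duration
FROM sys.query_store_query         AS q
JOIN sys.query_store_plan          AS p  ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;

-- Force a known-good plan for a query
-- (42 and 7 are placeholder ids taken from the query above)
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;
```

The session goes beyond the DMVs into the built-in reports, which wrap this same data in a point-and-click experience.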
During this session, we will take a thorough look at the Query Store, its architecture, the performance impact of enabling the Query Store, DMVs, the built-in reporting and the custom Query Store Dashboard.
No matter if you are a DBA or developer, the Query Store has all the data you need to make your time analyzing query performance a whole lot more fun!
Parameters are a fundamental part of T-SQL programming, whether they are used in stored procedures, in dynamic statements or in ad-hoc queries. Although widely used, most people aren’t aware of the crucial influence they have on query performance. In fact, wrong use of parameters is one of the common reasons for poor application performance.
Does your query sometimes run fast and sometimes slow – even when nothing’s changed? Has it ever happened to you that a stored procedure, which had always run in less than a second, suddenly started to run for more than 5 seconds consistently – even when nothing had changed?
In this session, we will learn about plan caching and how the query optimizer handles parameters. We will talk about the pros and cons of parameter sniffing (don’t worry if you don’t know what that means) as well as about simple vs. forced parameterization. But most important – we will learn how to identify performance problems caused by poor parameter handling, and we will also learn many techniques for solving these problems and boosting your application performance.
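One common family of fixes for poor parameter handling can be sketched like this (the procedure and table names are hypothetical, purely for illustration):

```sql
-- Hypothetical procedure; dbo.Orders and its columns are invented.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    -- RECOMPILE sidesteps parameter sniffing: the plan is built for
    -- the current value on every call, at some CPU cost per execution.
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);

    -- Alternative hint: build one plan for the "average" value
    -- OPTION (OPTIMIZE FOR (@CustomerId UNKNOWN));
END;
```

Which technique is appropriate depends on call frequency and data skew, which is exactly the kind of trade-off the session walks through.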
Move up to Management Studio Superstar with these great productivity tips! We’ll explore stuff like keyboard shortcuts, configuration options, how to keep your SSMS settings synchronized across multiple environments and other really useful everyday tricks and hacks. Think “hey, wait, how’d you do that!?”
Did you, for instance, know that you can hold down the Alt key while selecting text? The “Alt trick” works for copying, pasting, moving, even typing. Handy if you want to re-use text horizontally instead of just vertically, like comparing the contents of columns in two tables.
Some of the other things I’ll cover:
- Previewing and quick-reference stuff you can do with keyboard shortcuts
- Quickly navigating your code to find errors or bookmarks
No undocumented stuff, no registry editor, and nothing you wouldn’t put into production. Just really clever productivity tips that will take some of the burden off your everyday work.
But I’m not going to sugarcoat it. Your code may still run slow, and you’ll probably write just as many bugs – but you’ll write them quicker and with so much more style and panache.
Let’s face it, using SQL PowerShell prior to SQL Server 2016 was like installing training wheels on a Ducati. But things have massively changed. Your community feedback was incorporated before SQL 2016 shipped and presto, SQL PowerShell is actually useful all of a sudden.
But it gets better! The SQL Tools team at Microsoft has promised to keep adding new cmdlets every month and keep making improvements to the SQL Provider.
SQL PowerShell offers faster ways to manage SQL Servers, develop & deploy SQL databases, and identify performance bottlenecks. I will show you these new features in the new SqlServer module and why you will find it useful, whether you are a SQL Developer, B/I Developer, or DBA.
The SQL Server team has already delivered cmdlets to help you manage SQL Server Agent Jobs, review the SQL Error Log, Add / Manage & Remove Logins, and write data to tables in SQL Server.
2017 promises even more cmdlets and you don’t want to be left behind!
(P.S. That picture of the TreeMap up there, that was generated in PowerShell and I will show you why it’s so awesome for Performance Tuning.)
No one wants their application to perform slowly.
In this session, you will learn about three mistakes DBAs often make that kill SQL Server’s performance. We will understand what those errors are, why those particular errors are prevalent in the industry, and how we can fix them.
We will explore various settings as well as code that will kill SQL Server performance. At the end of this session, every attendee will receive scripts that will help them improve their SQL Server’s performance. This session is at an intermediate level: you just have to know a little bit about SQL Server, and you will learn a lot more in the session. The session is carefully crafted from real-world experience so DBAs can learn tricks that help them improve their servers’ performance.
We will discuss various SQL Server Settings, File Placements as well as ignored details related to SQL Server Performance.
In 2009 John Allspaw and Paul Hammond delivered the session “10 deploys per day – Dev & ops cooperation at Flickr.” In forty-six minutes they changed the way millions of people would think about the software delivery process for years to come. It didn’t have a name yet, but DevOps was born. DevOps folk preached about the cloud, automation, rapid delivery and any database technology that wasn’t relational…
In 2013 Kenny Gorman declared “The DBA is Dead”.
For the record, I don’t believe that, but a lot of people do. What is certain is that the world of IT is changing, and the traditional DBA role, and most other data roles, are changing with it.
I’m going to explain what DevOps is, where it came from, and its implications for SQL Server. We’ll cover the human and technical basics of database DevOps – and I’m going to discuss some changes that data folk need to make.