There are tools available from Microsoft, as well as from a number of third-party vendors, to ease the process of integrating database unit testing into your development environments and your continuous delivery pipeline.
This session isn't about any of those tools, but is an attempt to get back to basics - with the open source tSQLt framework - and address questions such as:
No particular experience of automated testing is required; we'll be talking about database testing "from the ground up". A good grasp of Transact-SQL constructs will be helpful in understanding the examples.
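To set expectations for the examples, here is a minimal sketch of what a tSQLt test looks like; the table dbo.OrderLines and the function dbo.GetOrderTotal under test are hypothetical stand-ins:

```sql
-- Create a test class (a schema) to hold related tests
EXEC tSQLt.NewTestClass 'testOrders';
GO
-- A test: fake the table, insert known rows, assert on the result
-- (assumes dbo.OrderLines and dbo.GetOrderTotal already exist; names are hypothetical)
CREATE PROCEDURE testOrders.[test that GetOrderTotal sums line amounts]
AS
BEGIN
    EXEC tSQLt.FakeTable 'dbo.OrderLines';   -- isolate the test from real data
    INSERT INTO dbo.OrderLines (OrderId, Amount) VALUES (1, 10.00), (1, 5.00);

    DECLARE @actual money = dbo.GetOrderTotal(1);

    EXEC tSQLt.AssertEquals @Expected = 15.00, @Actual = @actual;
END;
GO
-- Run every test in the class
EXEC tSQLt.Run 'testOrders';
```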
Do you work for an institution that fears its DBAs could go rogue on their production databases?
Do you want to revoke continuous sysadmin permissions from your DBA team in an attempt to avoid being the next WikiLeaks?
Do you have concerns that the DBA team is querying sensitive databases?
Then you need Uptight Database Security. Come to this session to learn how to deploy a configurable permission model that provides on-demand access to SQL Server databases. This model allows DBAs to do routine work without sysadmin, and elevate permissions to sysadmin only during production emergencies or disaster recovery scenarios.
Attendees will leave this session with the fundamental knowledge and scripts to implement a low hassle solution that provides DBAs the minimum necessary access required to maintain a production database server.
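The general shape of such a model can be sketched (loosely, and not as the session's exact scripts) with a user-defined server role carrying the routine rights, plus break-glass elevation; the group name CONTOSO\DbaTeam is hypothetical:

```sql
USE master;
GO
-- Hypothetical sketch: a server role with routine DBA rights, no sysadmin
CREATE SERVER ROLE DBA_Operations;
GRANT VIEW SERVER STATE  TO DBA_Operations;  -- monitoring via DMVs
GRANT ALTER ANY DATABASE TO DBA_Operations;  -- routine maintenance
GRANT ALTER ANY LOGIN    TO DBA_Operations;  -- login management
ALTER SERVER ROLE DBA_Operations ADD MEMBER [CONTOSO\DbaTeam];  -- hypothetical group

-- Break-glass: elevate only for a declared emergency, then revoke
ALTER SERVER ROLE sysadmin ADD MEMBER [CONTOSO\DbaTeam];
-- ... resolve the incident ...
ALTER SERVER ROLE sysadmin DROP MEMBER [CONTOSO\DbaTeam];
```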
Congratulations on your recent move into the world of SQL Server development!
You've got a handle on the basics -- SELECT, INSERT, UPDATE, and DELETE -- but now what? How do you know what's important to learn next, and what's most likely a waste of time?
In this session, we're going to answer these questions and more. You'll discover why set-based thinking is the foundation of SQL Server. You'll learn to use valuable T-SQL building blocks like window functions. I'll also share a handful of time-saving shortcuts that could save you an insane amount of typing, meaning you don't have to take another working lunch to clean up a poorly-formatted script.
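As a taste of those building blocks, a window function computes a running total in a single pass, with no self-join (dbo.Orders is a hypothetical table):

```sql
-- Running total per customer, computed in one scan
SELECT  CustomerId,
        OrderDate,
        Amount,
        SUM(Amount) OVER (PARTITION BY CustomerId
                          ORDER BY OrderDate
                          ROWS UNBOUNDED PRECEDING) AS RunningTotal
FROM dbo.Orders;
```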
BUT WAIT, THERE'S MORE!
In addition to learning new skills for today, you need to set yourself up for success next week, next month, and next year. That's why we'll also cover two things you can start doing right now to raise the trajectory of your career and get you the recognition you deserve.
Journey with me through stories, concepts, and live demos, and you will emerge ready to kick-start your young SQL Server career.
You’re a DBA or Developer, and you have a gut feeling that these simple queries with an egregious number of columns in the SELECT list are dragging your server down. You’re not quite sure why, or how to index for them. Worst of all, no one seems to be okay with you returning fewer columns. In this session, you’ll learn why and when queries like this are a problem, your indexing options, and even query tuning methods to make them much faster.
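One of the indexing options we'll cover can be sketched like this: a covering index whose INCLUDE list carries the extra columns a wide SELECT needs, without widening the index key itself (all names are hypothetical):

```sql
-- A narrow key plus INCLUDE columns can cover a wide SELECT list
-- without bloating the key or the upper levels of the index
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Covering
ON dbo.Orders (CustomerId, OrderDate)
INCLUDE (ShipAddress, ShipCity, ShipRegion, Freight);
```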
I’m going to look into the basics of how encryption works, and then we’ll learn how we can go about writing our own encryption routines within SQL Server. Once we’re happy that those routines are secure, we’ll look at ways we can go about cracking them.
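As a minimal illustration of the kind of building block we'll start from, SQL Server's built-in passphrase functions encrypt and decrypt a value in a couple of lines (a sketch, not the session's exact code; the passphrase and value are placeholders):

```sql
-- Symmetric encryption with a passphrase: the simplest starting point
DECLARE @secret varbinary(8000) =
    ENCRYPTBYPASSPHRASE(N'correct horse battery staple', N'my sensitive value');

-- Decryption returns varbinary, so convert back to the original type
SELECT CONVERT(nvarchar(100),
       DECRYPTBYPASSPHRASE(N'correct horse battery staple', @secret)) AS Decrypted;
```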
SQL Server Failover Clustered Instances (FCI) and Availability Groups (AG) depend a lot on Windows Server Failover Clustering (WSFC). But when mission-critical databases go offline, bringing them back online could be a nerve-wracking experience.
This session will look at understanding how the underlying WSFC platform works to help us maintain availability of our databases. This is for DBAs and systems administrators who are responsible for administering SQL Server FCIs and AGs but are not so comfortable with figuring out why the databases went offline and what they can do about it. We’ll walk through the troubleshooting process that I follow to quickly identify and resolve SQL Server FCI and AG availability issues.
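As a first stop in that process, the WSFC node state and replica health are visible from inside SQL Server; a sketch:

```sql
-- WSFC node state as SQL Server sees it
SELECT NodeName, status_description, is_current_owner
FROM sys.dm_os_cluster_nodes;

-- Health and connection state of each availability replica
SELECT role_desc, connected_state_desc, synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states;
```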
You’ve just been handed a SQL Server, and you need to make sure there aren’t any skeletons in the closet. What things should you check to make sure everything is healthy? In this presentation, we’ll run you through the checklist I use when I provide a health check to a customer. First, I check for any resume generating events (RGEs) like broken backups. Then I check for best practices like email alerts and MAXDOP settings. Finally, these checks turn into more of a scuba diving expedition, where we look for things that seem off and dig deeper. You won’t walk away with a perfect strategy, but you’ll have an idea of how one data professional does it.
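As a sketch of that first RGE check, here is the sort of query that flags databases whose most recent full backup is missing or stale (the 7-day threshold is an arbitrary example):

```sql
-- Databases whose most recent full backup is older than 7 days, or missing
SELECT d.name,
       MAX(b.backup_finish_date) AS LastFullBackup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
       ON b.database_name = d.name
      AND b.type = 'D'              -- 'D' = full database backup
WHERE d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(DAY, -7, GETDATE());
```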
One of the most exciting new features of SQL Server 2016 is the integration of the R statistical programming language into the platform with the brand-new SQL Server R Services. In this introductory session we’ll briefly introduce the R language and then look at how to install and administer SQL Server R Services. We’ll start to demonstrate the power of R by showing you how to create and run simple R scripts from within T-SQL. We’ll then discuss some of the powerful open source packages that are available to aid your data analyses. Finally, we’ll go through some examples of how to use R to integrate data analytical methods in real-world situations, and wow your customers with the power of statistics!
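The T-SQL entry point is sp_execute_external_script; a minimal sketch, assuming R Services is installed, the 'external scripts enabled' option is set, and a hypothetical dbo.FactSales table:

```sql
-- Run a simple R script from T-SQL: compute the mean of a column
EXEC sp_execute_external_script
    @language = N'R',
    @script   = N'OutputDataSet <- data.frame(mean_value = mean(InputDataSet$val));',
    @input_data_1 = N'SELECT CAST(SalesAmount AS float) AS val FROM dbo.FactSales'
WITH RESULT SETS ((mean_value float));
```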
T-SQL can help solve many problems you are faced with at your job. Each problem can have many solutions, but not every solution is simple to write, understand, and maintain, and not every solution is good in terms of performance. In this session, we will look at real-world problems and solve them using T-SQL. We won’t settle for just any solution; we'll show elegant, simple solutions that yield optimal performance.
In this session, I will talk about the advantages of Query Store: its features, its configuration, and query optimization using Query Store in SQL Server 2017. The session will also cover how you can monitor query performance, how Query Store gathers data, and the best practices for using Query Store. There will be various demos throughout the session.
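As a preview, a sketch of turning Query Store on and pulling the top resource consumers from its catalog views:

```sql
-- Enable Query Store for the current database
ALTER DATABASE CURRENT SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);

-- Top 10 queries by total duration captured so far
SELECT TOP (10)
       q.query_id,
       qt.query_sql_text,
       SUM(rs.avg_duration * rs.count_executions) AS total_duration
FROM sys.query_store_query q
JOIN sys.query_store_query_text qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan p        ON p.query_id       = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id    = p.plan_id
GROUP BY q.query_id, qt.query_sql_text
ORDER BY total_duration DESC;
```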
Once data leaves your SQL Server, do you know what happens, or is the world of networking a black box to you? Would you like to know how data is packaged up and transmitted to other systems, and what to do when things go wrong? Are you tired of being frustrated with the network team?
In this session, we introduce how data moves between systems on networks, then look at TCP/IP internals. We’ll discuss real-world scenarios showing you how your network’s performance impacts the performance of your SQL Server and even your recovery objectives.
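As a preview of where the network shows up inside SQL Server, two sketch queries:

```sql
-- ASYNC_NETWORK_IO waits often mean the client (or the network path)
-- can't consume results as fast as SQL Server produces them
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'ASYNC_NETWORK_IO';

-- How each session is connected: TCP vs. shared memory, packet size, client address
SELECT session_id, net_transport, net_packet_size, client_net_address
FROM sys.dm_exec_connections;
```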
Based on the successful 500-level talk at PASS 2016, this session will provide an internals view of how In-Memory OLTP works for SQL Server 2016 and 2017. Because this is an advanced session, it is intended for those who understand the basic fundamentals of In-Memory OLTP but want to learn how the “Hekaton” engine works behind the scenes to deliver the amazing 30x performance improvements seen by customers moving to In-Memory OLTP. This session will include a look behind the scenes at threads, data and index design, transactions and concurrency, logging, storage, and natively compiled procedures.
I’m a big believer in visuals and demos so you will see plenty of that as I describe how the Hekaton engine is truly lock-free and latch-free. And of course since this is an advanced talk, the Windows Debugger will definitely make an appearance during the session.
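For orientation before the deep dive, a minimal sketch of the kind of objects the session dissects: a memory-optimized table and a natively compiled procedure. This assumes the database already has a memory-optimized filegroup; all names are hypothetical.

```sql
-- A durable memory-optimized table with a hash index on the key
CREATE TABLE dbo.SessionState (
    SessionId int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload   varbinary(2000) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
-- A natively compiled procedure: compiled to machine code, runs lock- and latch-free
CREATE PROCEDURE dbo.usp_UpsertSessionState
    @SessionId int, @Payload varbinary(2000)
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    UPDATE dbo.SessionState SET Payload = @Payload WHERE SessionId = @SessionId;
    IF @@ROWCOUNT = 0
        INSERT dbo.SessionState (SessionId, Payload) VALUES (@SessionId, @Payload);
END;
```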
Are you faced with complaints from users, poor performing code from developers, and regular requests to build reports? Do you uncover installation and configuration issues on your SQL Server instances? Have you ever thought that in dire times avoiding Worst Practices could be a good starting point? If the answer is “yes”, then this session is for you: together we will discover how not to torture a SQL Server instance and we will see how to avoid making choices that turn out to be not so smart in the long run.
You are probably thinking: “Hey, wait, what about Best Practices?”. Sometimes Best Practices are not enough, especially for beginners, and it is not always clear what happens if we fail to follow them. Worst Practices can show the mistakes to avoid. I have made lots of mistakes throughout my career: come and learn from my mistakes!
Circles in the SQL Server Hell:
Microsoft has introduced native capabilities to the database engine for masking sensitive data and restricting access at the row level, allowing us new options for building more secure Data Platform solutions. However, it is important to understand how these differ from our previous options in order to gain the most benefit from these new technologies.
Join me as we look in more detail at how we can engineer these features into our Data Platform solutions, starting with identifying the problems they are trying to solve, moving through their core architecture, and on to potential design patterns for their use. As with any security technology, there are a number of ways to use these features; there are also gaps in coverage, so using them effectively as part of a layered approach is vital.
After this session, you will be in a position to start looking at whether your systems can benefit from these features, and at how you can start building this functionality into your applications.
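For a flavour of the two features, a hedged sketch against a hypothetical dbo.Customers table: a mask on one column, then a security policy that filters rows by a tenant id carried in SESSION_CONTEXT:

```sql
-- Dynamic Data Masking: hide the column's contents from non-privileged users
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');
GO
-- Row-Level Security: an inline predicate function...
CREATE FUNCTION dbo.fn_TenantFilter (@TenantId int)
RETURNS TABLE WITH SCHEMABINDING
AS RETURN SELECT 1 AS allowed
          WHERE @TenantId = CAST(SESSION_CONTEXT(N'TenantId') AS int);
GO
-- ...bound to the table by a security policy
CREATE SECURITY POLICY dbo.TenantIsolation
    ADD FILTER PREDICATE dbo.fn_TenantFilter(TenantId) ON dbo.Customers
    WITH (STATE = ON);
```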
In this session, we follow the movement of data through batch and speed layers via Azure Data Lake Store & Analytics, Data Factory, SQL Data Warehouse, and Streaming Analytics, before looking briefly at Azure Analysis Services with Power BI. This is a largely theory-based session to prime you for the future.
It just works – performance and scale in the SQL Server 2016 database engine, and what is being added to in-market versions. This session will showcase several improvements in SQL Server, focusing on the latest enhancements that address some of the most common customer pain points in the Database Engine, involving tempdb, the new Cardinality Estimator, memory management, and T-SQL constructs, as well as diagnostics for troubleshooting query plans, memory grants, and backup/restore. You'll come away understanding these changes in performance and scale, and the new and improved diagnostics for faster troubleshooting and mitigation.
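As a taste of those diagnostics, a sketch that inspects live memory grants:

```sql
-- Which active requests have (or are still waiting for) a memory grant
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,   -- NULL means the grant hasn't been given yet
       wait_time_ms
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;
```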
T-SQL was voted best programming language in 2013; let’s explore why!
In this session, we will look at what makes SQL such a fascinating language. We will do this by working through a few real-life development problems and common design challenges based on real questions asked on Ask.SQLServerCentral.com.
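To give a flavour of the kind of design challenge that comes up, here is a sketch of the classic "islands" problem, collapsing consecutive dates into contiguous ranges (dbo.DailyLog is a hypothetical table of distinct dates):

```sql
-- Rows whose date minus their row number land in the same group
-- belong to the same unbroken run of days
WITH grouped AS (
    SELECT LogDate,
           DATEADD(DAY, -CAST(ROW_NUMBER() OVER (ORDER BY LogDate) AS int),
                   LogDate) AS grp
    FROM dbo.DailyLog
)
SELECT MIN(LogDate) AS RangeStart,
       MAX(LogDate) AS RangeEnd
FROM grouped
GROUP BY grp;
```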
Once you have successfully configured Availability Groups, what comes next? In this session, we will go beyond setup and look at how to monitor your Availability Groups. We will define and cover important metrics and alerts you need to manage a database in an Availability Group.
You will walk away from this session with tools you need to monitor your environment and know how to respond to alerts.
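Among the metrics worth alerting on are the log send and redo queues; as a sketch, both are visible per database through the HADR DMVs:

```sql
-- How far behind each secondary is, per database
SELECT ag.name AS AGName,
       ar.replica_server_name,
       drs.database_id,
       drs.log_send_queue_size,   -- KB of log waiting to be sent to the secondary
       drs.redo_queue_size        -- KB of log waiting to be redone on the secondary
FROM sys.dm_hadr_database_replica_states drs
JOIN sys.availability_replicas ar ON ar.replica_id = drs.replica_id
JOIN sys.availability_groups  ag ON ag.group_id    = drs.group_id;
```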
We can be better at our jobs if we have a good grasp of basic statistics.
It doesn’t matter if you’re a DBA looking to understand query plan performance, a data warehouse person needing to come up with ETL load time estimates, or an analyst needing to report figures to managers. Statistics can help you all.
If only maths classes hadn’t been so darn boring!
Instead of going all mathsy, we’ll be doing some real-time data capture and taking an intuitive and visual approach through summary statistics right up to understanding how to produce simple predictive models.
By the end of the session, you’ll understand concepts like sampling, error, regression, and outliers – important day-to-day stuff and a great base upon which to build. You’ll also wonder how people could have made it seem so hard for so many years.
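Some of this you can even try without leaving T-SQL; a sketch of summary statistics over a hypothetical dbo.QueryTimings table:

```sql
-- Mean and spread in one pass
SELECT AVG(DurationMs)   AS MeanDuration,
       STDEV(DurationMs) AS StdDevDuration,
       MIN(DurationMs)   AS MinDuration,
       MAX(DurationMs)   AS MaxDuration
FROM dbo.QueryTimings;

-- Median: PERCENTILE_CONT is a window function, so collapse with DISTINCT
SELECT DISTINCT
       PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY DurationMs) OVER () AS MedianDuration
FROM dbo.QueryTimings;
```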
You’ve heard that In-Memory OLTP is fast, but how do you know if your workload can take advantage of this awesome feature?
We will explore In-Memory OLTP workload evaluation, use cases, architecture, capacity planning, data migration, hot/cold data, potential impact on disk-based workloads, and more.
By the end of the session, you will have an understanding of how In-Memory OLTP works, and whether your workload is likely to see an increase in performance.
Get full notes, resources, and more information here: https://groupby.org/conference-session-abstracts/in-memory-oltp-survival-guide/
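As one capacity-planning starting point, a sketch that checks how much memory your memory-optimized tables actually consume:

```sql
-- Memory allocated vs. used by memory-optimized tables in the current database
SELECT OBJECT_NAME(object_id) AS TableName,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats;
```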
For years, you've heard that you're supposed to reorganize your indexes to make SQL Server go faster. It sounds like it makes sense - keep things in order, right? But you keep doing it, and SQL Server isn't getting any faster. You've even heard that setting fill factor will help prevent fragmentation, and you're doing that too - but your indexes still keep getting fragmented every day, and users aren't happy with performance.
This advice made a lot of sense at the turn of the century, but today, things are different - and we're not just talking solid state drives. In just the first 15 minutes, you'll have a series of ah-ha moments when you realize that your daily index maintenance jobs might just be making the problem worse instead of better. Then, you'll learn what you need to do instead.
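A reasonable first step is to measure before you maintain; a sketch that checks fragmentation while filtering out the small indexes where it cannot matter (the 1000-page cutoff is a common rule of thumb, not a hard rule):

```sql
-- Is fragmentation actually your problem? Measure it first.
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name                     AS IndexName,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.page_count > 1000;   -- tiny indexes aren't worth worrying about
```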
Imagine your Data Warehouse is growing exponentially to multi-terabytes and you have been tasked with making analytics queries even faster while keeping the cost of storage low. Industry-leading SQL Server 2016 columnstore technology can help you solve these challenges by reducing the storage footprint by 10x (on average) and speeding up analytics queries by up to 100x. The SQL Server Tiger Team has leveraged columnstore technology with many Tier-1 workloads to achieve these goals. This session will cover three different types of workload; for each workload, we will cover the application architecture, the challenges, and how the Tiger Team used columnstore index technology to address them.
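The entry point to the technology is a single statement; a sketch against a hypothetical dbo.FactSales fact table (on a heap, or after dropping the existing clustered index):

```sql
-- Store the whole fact table column-wise: typically ~10x compression
-- and batch-mode execution for analytics queries
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
ON dbo.FactSales;
```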
I was required to prove that I had successfully installed and configured a backup solution across a large estate, against a number of success criteria that had to be met. Checking all of these by hand (or rather, by eye) would have been error-prone, so I wrote a test to do it for me, plus an HTML report that is easy for management to read, using PowerShell and Pester.
This session grew out of that situation. It is about enabling you to provide an easy-to-read output that quickly and repeatedly shows that your infrastructure is as expected for a set of checks, also known as Operational Validation, using Pester. There are many use cases for this type of solution: DR testing, installations, first-line checks, presentation setups.
What is Pester?
Pester is a unit testing framework for PowerShell which can be used for testing your code but also, as shown in this session, for validating your infrastructure. There is an excellent post by Adam Bertram that introduces Pester. Pester is included with PowerShell on modern operating systems, and is free to download from the PowerShell Gallery or GitHub if not.
After this session, you will have a basic understanding of how Pester works, along with the ability to examine your checklists, create your own validation tests, and provide some reporting for management.
Attendees will be taken through the following:
How do you know your database change won’t affect something you haven’t thought of?
Database objects can have many dependent objects in the database and in the application. Keeping track of these dependencies can be difficult, especially if the database is used by multiple applications or services.
The first step is to have a project for the database and get it into source control.
Source control is the single source of truth that all deployments should be kept in sync with. Source controlling the database also acts as the foundation for automation of:
This automation is key to the Continuous Integration methodology. After every commit, builds and tests will run in the background and only alert if there is a problem. Builds test the deployment of the change and tests check that everything affected still works. Coverage reports indicate any gaps in the testing suite.
The tools I use to make this possible with SQL Server are:
By the end of this session, I hope you will see how a CI approach to database development can remove the unknowns from deploying database changes.