3. Presenter Info
1982 I started working with computers.
1988 I started my professional career in the IT industry.
1996 I started working with SQL Server 6.0.
1998 I earned my first Microsoft certification as a Microsoft
Certified Solution Developer (3rd in Greece) and began my
career as a Microsoft Certified Trainer (MCT), with more than
25,000 hours of training delivered since then.
2010 I became a Microsoft MVP on SQL Server for the first time
and created SQL School Greece (www.sqlschool.gr).
2012 I became an MCT Regional Lead in the Microsoft Learning
program.
2013 I was certified as MCSE: Data Platform & Business
Intelligence.
Antonios Chatzipavlis
Database Architect,
SQL Server Evangelist
MCT, MCSE, MCITP, MCPD, MCSD, MCDBA, MCSA, MCTS,
MCAD, MCP, OCA, ITIL-F
4. SQLschool.gr
Team
Antonios Chatzipavlis
SQL Server Evangelist • Trainer
Vassilis Ioannidis
SQL Server Expert • Trainer
Fivi Panopoulou
System Engineer • Speaker
Sotiris Karras
System Engineer • Speaker
8. The Conference for Technical Data Professionals
• 200+ technical sessions
• New and expert industry speakers
• Networking opportunities with thousands of attendees
from around the world
Use Local Chapter Discount Code: LC15CPJ8 for $150 off*
October 25-28
Seattle
*Cannot be applied retroactively or combined with other offers.
18. Why SQL Server 2016?
Everything built-in
Industry leading TCO
Introducing SQL Server 2016
Core Database Platform
Enterprise Business Intelligence
Cloud Database Solutions
SQL Server 2016 Editions
SQL Server 2016 Components
SQL Server 2016 Compared to Azure SQL Database
Upgrading from Previous Versions
SQL Server Management Studio Enhancements
Enhanced Azure Support
19. SQL Server 2016: Everything built-in
The above graphics were published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft. Gartner does not endorse any vendor, product or service depicted in its research
publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties,
expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Consistent experience from on-premises to cloud
Self-service BI per user: Microsoft $120, Tableau $480, Oracle $2,230
In-memory built-in across all workloads, at massive scale
TPC-H non-clustered 10TB results: SQL Server holds #1, #2, and #3; Oracle is #5
[Chart: reported vulnerabilities per year, 2010-2015, for SQL Server, Oracle, MySQL, and SAP HANA]
Source: National Institute of Standards and Technology, Comprehensive Vulnerability Database, update 10/2015
20. Industry leading TCO
SQL Server 2016: OLTP, data warehousing, business intelligence, and advanced
analytics all built-in for $648K, plus $120 per user for Power BI
Oracle: the same capabilities via expensive add-ons cost $2.2M, plus $2,230
per user for BI
That is roughly 3.4x lower platform cost, and 20x lower cost per user
Note: for the OLTP and DW scenario, the price comparison is based on a server
with 2 proc, 8 cores each
Built-in with SQL Server vs. expensive add-ons with Oracle:
• Complete mobile BI built-in
• In-memory built-in
• End-to-end security built-in
• Advanced Analytics built-in
21. •Enterprise Data Platform
• Scalability and Performance
• High Availability
• Data Recoverability
•Business Intelligence
• Self-Service BI
• Reduced IT workload
• Big Data
•Cloud Infrastructure Solutions
• Public cloud
• Private cloud
• Hybrid cloud
Introducing SQL Server 2016
22. • Scalability and Performance
• Need to support thousands of simultaneous users with near-instantaneous response
• Database layer of the application must be able to optimize its use of OS and hardware resources to provide
concurrency without compromising data integrity
• High Availability
• Business users need continuous access to applications and their data:
• From PCs in the office
• A variety of devices outside of traditional business hours
• From anywhere in the world
• Increased requirement for high availability solutions
• Data Recoverability
• Business information is critical to an organization and they can tolerate little or no data loss
• A data recovery solution must:
• Minimize the time taken to restore data availability
• Ensure that data is recovered right up to the point of failure
Core Database Platform
23. • Self-Service BI
• Designed to empower users to explore data
• Users can create their own reports
• Supports inclusion of data from a wide range of sources
• Reduced IT Workload
• Business users perform much of their own analysis and reporting
• IT specialists can be freed to focus on high-value activities
• Big Data
• A huge volume of potentially useful information can now be generated from several sources
• SQL Server facilitates big data analysis using familiar tools
• Users are empowered to explore and combine data from a wide range of internal and external
sources
Enterprise Business Intelligence
24. • Benefits
• Reduced infrastructure management
• On-demand scalability
• Better control of finances
• Public Cloud
• Internet-based infrastructure
• Eliminates the need to install and manage hardware
• Reduces operational overheads
• Reduces data storage costs
• Private Cloud
• Retains internal control for operational, compliance or security requirements
• Uses company-dedicated hardware
• Can reduce database or server provisioning times
• Hybrid Cloud
• Enables an incremental move to the cloud
• Supports data storage across public and private cloud
Cloud Database Solutions
26. •Database Engine
•Analysis Services
•Reporting Services
•Integration Services
•Master Data Services
•Data Quality Services
•SQL Server R Services
•Replication
SQL Server 2016 Components
Not just a
database
engine
27. Upgrading from Previous Versions
In-place Upgrade Side-by-side Upgrade
• Easier, mostly automated • More granular control over process
• System data upgraded • Can be used to perform test migration
• No additional hardware • Relatively straightforward rollback
• No change to applications • Can leverage failover/switchover
SQL Server 2008 SP3
SQL Server 2008 R2 SP2
SQL Server 2012 SP1
SQL Server 2014
SQL Server 2016
Run the Upgrade Advisor to check for upgrade issues
28. •Separate release cycle from the SQL Server Database Engine
•Updates can be made independently
•Updates to SSMS can be made available as soon as they are ready
•No need to wait for next update to SQL Server
SQL Server Management Studio Enhancements
29. •Always Encrypted technology
• Helps you protect your data
• Regardless of where stored or accessed
•Stretch Database technology
• Transparently store data in databases stretched between on-premises and Microsoft Azure (cloud)
•Enhanced backup/restore to/from Microsoft Azure
• Faster hybrid backups
• High availability
• Disaster recovery scenarios
Enhanced Azure Support
31. • Automatic Soft-NUMA Configuration
• By default, SQL Server 2016 internally leverages soft-NUMA partitioning to achieve double-digit
performance gains on large systems
• DBCC Scales 7x Better
• Out of the box, DBCC provides better performance and scale while shrinking your maintenance window
• Native Spatial Implementation(s)
• Spatial activities are faster and scale better with the native spatial implementation
• SQL Server Parallel Query Placement Decision Logic
• TVPs with Spatial
• In SQL Server 2016, TVPs with spatial columns are 15x faster or better
• Spatial index building is 2 or more times faster
• Out of the box, SQL Server 2016 enables the -T1117 and -T1118 behavior for TEMPDB, providing better
scalability and performance
• XEvent Linq Reader
• The SQL Server 2016 client component processes XEvent files 10x+ faster improving the responsiveness of the
XEvent UI and reader capabilities.
It Just Runs Faster
32. •Automatic TEMPDB Configuration
•LDF Stamped
• SQL Server 2016 changes the stamp to 0xC0s instead of 0x00s.
• Changing the pattern to 0xC0’s avoids common reclamation techniques, improving performance
•Updated Scheduling Algorithms
• SQL Server 2016 scheduling algorithms balance the workload better, leading to enhanced scalability
•SOS_RWLock Redesign
• The SOS_RWLock is a synchronization primitive used in various places throughout the SQL Server code base.
• As the name implies the code can have multiple shared (readers) or single (writer) ownership.
• In SQL Server 2016, code paths leveraging reader/writer locks use fewer resources and scale better
•Indirect Checkpoint Default
• New databases in SQL Server 2016 use indirect checkpoint, improving performance of checkpoint activities
•Larger Data File Writes
• SQL Server uses WriteFileGather for the vast majority of data file write requests
• SQL Server 2016 takes advantage of newer hardware scalability by increasing the database file write
operations
It Just Runs Faster
33. •Multiple Log Writer Workers
• SQL Server 2016 uses up to four log writer workers to service log write activities, improving LDF throughput
capabilities
•Column Store Uses Vector Instructions (SSE/AVX)
• SQL Server 2016 detects the CPU capabilities for AVX (Advanced Vector Extensions) or SSE (Streaming SIMD
Extensions) and leverages the hardware-based vector capabilities to improve scalability and performance.
•BULK INSERT Uses Vector Instructions (SSE/AVX)
• SQL Server 2016 takes advantage of CPU vector instructions to improve bulk insert performance
•In-Memory Optimized Database Worker Pool
• SQL Server 2016 dynamically adjusts the In-Memory Optimized worker pool to maximize scalability and
performance.
•Leverages On Demand MSDTC Startup
• SQL Server 2016 dynamically starts MSDTC as needed allowing resources to be used for other activities until
required
It Just Runs Faster
34. •AlwaysOn Log Transport Reduced Context Switches
• The AlwaysOn log transport uses a SQL Broker based design to send and receive messages between the
primary and secondary replicas.
• SQL Server 2016 improves the log block transportation throughput scaling AlwaysOn by 4x.
•AlwaysOn Parallel Compression / Improved Algorithms
• SQL Server 2016 introduces two distinct changes in the AlwaysOn transport compression design:
• Improved compression algorithms
• Parallel compression of log block data
•AlwaysOn AES-NI Encryption
• SQL Server 2016 defaults endpoint creation to AES based encryption allowing hardware based AES-NI
encryption.
• The hardware-based capabilities improve AlwaysOn log transport scalability and performance by a significant
factor.
It Just Runs Faster
                                      Today    SQL Server 2016
Throughput (MB/s)                     82       540
Average CPU utilization (secondary)   17       36
MB sent on wire/sec                   35       230
35. It Just Runs Faster
• Simply upgrading to SQL Server 2016 could bring a 25% performance improvement
• SQL Server 2016 supports 3x more physical memory than previous versions
• The new columnstore engine and query processing technology could increase query performance up to 100x
• The new In-Memory OLTP engine can process 1.25 million batches/sec on a single 4-socket server, 3x that of SQL Server 2014
37. Operational Analytics
Operational Analytics with a Time Delay
What is Real-Time Operational Analytics
SQL Server 2016 Operational Analytics
What are Columnstore Indexes?
How to Use Real-Time Operational Analytics
38. Operational Analytics with a Time Delay
Key issues:
• Complex implementation
• Requires two servers (capital expenditures and operational expenditures)
• Data latency in analytics
• High demand; requires real-time analytics
39. What is Real-Time Operational Analytics?
Challenges:
• Analytics queries are resource intensive and can cause blocking
• Minimizing impact on operational workloads
• Sub-optimal execution of analytics on a relational schema
Benefits:
• No data latency
• No ETL
• No separate data warehouse
40. The ability to run analytics queries concurrently with
operational workloads using the same schema
Not a replacement for:
• Extreme analytics performance queries possible only using customized schemas (e.g.
Star/Snowflake) and pre-aggregated cubes
• Data coming from non-relational sources
• Data coming from multiple relational sources requiring integrated analytics
Goals:
• Minimal impact on operational workloads with concurrent analytics
• Performance analytics for operational schema
SQL Server 2016 Operational Analytics
41. •Columnstore indexes store data in a column-wise format
•Only one columnstore index can exist per table
•Improves query performance for large datasets, particularly in data
warehousing
•A filter expression can be included when creating the index
What are Columnstore Indexes?
CREATE CLUSTERED COLUMNSTORE INDEX CCIIdxFactOnlineSales
ON dbo.FactOnlineSales;
42. How to Use Real-Time Operational Analytics
CREATE TABLE dbo.Product
(
ProductId int IDENTITY(1,1) PRIMARY KEY,
ProductCode nvarchar(10),
ListPrice money,
UnitsInStock int,
UnitsSold int
);
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCIdx_Products
ON dbo.Product (ProductId, ProductCode, UnitsInStock, UnitsSold);
44. In-Memory OLTP Enhancements
What is In-Memory OLTP?
New In-Memory OLTP Features
Updated In-Memory OLTP Processing
Greater T-SQL coverage
Updated In-Memory Garbage Collection
45. In-Memory OLTP boosts performance:
•Commonly known by its code name of “Hekaton”
•Tables reside in-memory, and are faster to access for reads and writes, having
no locks or latches
•Increased performance for short-running transactions in an OLTP system
•The CPU can execute natively compiled stored procedures
What is In-Memory OLTP?
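As a minimal sketch of these concepts (the database, filegroup, table, and procedure names are all illustrative; a memory-optimized filegroup must exist before memory-optimized tables can be created):

```sql
-- Assumption: a database named SalesDB already exists.
ALTER DATABASE SalesDB ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDB ADD FILE
    (NAME = 'imoltp_file', FILENAME = 'C:\Data\imoltp_file')
    TO FILEGROUP imoltp_fg;

-- Memory-optimized table: rows live in memory; access takes no locks or latches.
CREATE TABLE dbo.ShoppingCart
(
    CartId int IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    UserId int NOT NULL,
    CreatedDate datetime2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Natively compiled stored procedure: the T-SQL body is compiled to machine code.
CREATE PROCEDURE dbo.usp_AddCart @UserId int
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (UserId, CreatedDate)
    VALUES (@UserId, SYSUTCDATETIME());
END;
```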
46. In-Memory OLTP enhancements include:
•Up to 2 TB of user data in memory (up from 250 GB)
•Support for system-versioned (temporal) tables
•Support for row-level security (RLS)
•Query Store supports natively-compiled procedures
•Support for all collations and Unicode character data
•ALTER PROCEDURE supports natively-compiled procedures
•ALTER TABLE supports memory-optimized tables
•Support for Transparent Data Encryption (TDE)
•MARS connections can use memory-optimized tables and natively compiled
procedures
New In-Memory OLTP Features
47. In-Memory OLTP processing updates include:
•FOREIGN KEY constraints between memory-optimized tables
•NULLable index key columns
•LOB column types and UNIQUE indexes
•CHECK constraints
•UNION and UNION ALL
•SELECT DISTINCT
•OUTER JOIN
•OR, NOT
•Subqueries within SELECT statements, such as EXISTS, IN, and scalar
subqueries
Updated In-Memory OLTP Processing
48. • CREATE PROCEDURE (Transact-SQL)
• DROP PROCEDURE (Transact-SQL)
• ALTER PROCEDURE (Transact-SQL)
• SELECT (Transact-SQL) and INSERT SELECT statements
• SCHEMABINDING and BEGIN ATOMIC (required for natively compiled stored procedures)
• NATIVE_COMPILATION
• Parameters and variables can be declared as NOT NULL
• Table-valued parameters.
• EXECUTE AS OWNER, SELF, and user.
• GRANT and DENY permissions on tables and procedures.
• Nesting natively compiled stored procedures
• RIGHT OUTER JOIN, LEFT OUTER JOIN, INNER JOIN, and CROSS JOIN in SELECT statements
• NOT, OR, and IN operators in SELECT, UPDATE and DELETE statement
• UNION ALL and UNION
• SELECT DISTINCT
• GROUP BY clause with no aggregate functions in the SELECT clause (<select> list).
• COLUMNSTORE
• COLLATE
Greater T-SQL coverage
49. •In-Memory Garbage Collector:
• Controlled by the main garbage collection thread
• Runs every minute, or
• Runs when number of committed transactions exceeds an internal threshold
•In-Memory Garbage Collector Characteristics:
• Non-blocking
• Cooperative
• Efficient
• Responsible
• Scalable
Updated In-Memory Garbage Collection
51. Native JSON
Native JSON Support for SQL Server 2016
JSON in SQL Server 2016
JSON Facts
The FOR JSON Clause
JSON Functions
Indexing JSON Documents
52. Native JSON Support for SQL Server 2016
A top feature request on Microsoft Connect (1,050 votes)
OneDrive • Office • Dynamics • Bing • Yammer • TFS
Use cases:
• Service integration: exchange information with various services
• Web service content: generate JSON that will be returned to clients
• Flexible database schema: make reasonable trade-offs in database schema design
• Analyzing JSON documents: parse, query, and analyze JSON documents
53. JSON in SQL Server 2016
[
  {
    "Number":"SO43659",
    "Date":"2011-05-31T00:00:00",
    "AccountNumber":"AW29825",
    "Price":59.99,
    "Quantity":1
  },
  {
    "Number":"SO43661",
    "Date":"2011-06-01T00:00:00",
    "AccountNumber":"AW73565",
    "Price":24.99,
    "Quantity":3
  }
]
SO43659  2011-05-31T00:00:00  MSFT   59.99  1
SO43661  2011-06-01T00:00:00  Nokia  24.99  3
Table to JSON: formats a result set as JSON text
JSON to table: migrates JSON text into a table
Built-in functions:
ISJSON
JSON_VALUE
JSON_MODIFY
54. No custom type or index: store JSON as NVARCHAR
Does JSON work with X? Ask instead: does NVARCHAR work with X?
Yes, with In-Memory OLTP, row-level security, Stretch Database, compression, encryption, and more
Yes, with all client drivers
Different from DocumentDB:
SQL DB: relational, easily exchanges data with modern services, unified insights
DocumentDB: transactional schema-less document store
JSON Facts
55. •Format data for JSON with control over objects:
•Format data for JSON using automatic formatting:
The FOR JSON Clause
SELECT ProductNumber, Name, ListPrice
FROM Production.Product
FOR JSON PATH;
SELECT ProductNumber, Name, ListPrice
FROM Production.Product
FOR JSON AUTO;
56. JSON output:
Table to JSON
The FOR JSON Clause
SO43659 2011-05-31T00:00:00 MSFT 59.99 1
SO43661 2011-06-01T00:00:00 Nokia 24.99 3
SELECT Number AS [Order.Number], Date AS [Order.Date],
Customer AS Account,
Price AS 'Item.UnitPrice', Quantity AS 'Item.Qty'
FROM SalesOrder
FOR JSON PATH
57. JSON to Table
The OPENJSON Function
@json:
SELECT *
FROM OPENJSON (@json)
WITH (
Number varchar(200) N'$.Order.Number',
Date datetime N'$.Order.Date',
Customer varchar(200) N'$.Account',
Quantity int N'$.Item.Quantity'
)
SO43659 2011-05-31T00:00:00 Microsoft 1
SO43661 2011-06-01T00:00:00 Nokia 3
58. •Use JSON functions to convert SQL Server data to JSON formatted data, and
extract data from JSON:
• ISJSON tests if a string contains valid JSON
• JSON_VALUE extracts a scalar value from a JSON string
• JSON_QUERY extracts an object or an array from a JSON string
JSON Functions
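A short sketch of the three functions (the @info variable and its JSON content are made up for illustration):

```sql
DECLARE @info nvarchar(max) =
    N'{"Order":{"Number":"SO43659"},"Items":[{"Price":59.99,"Quantity":1}]}';

SELECT
    ISJSON(@info)                       AS IsValidJson, -- returns 1 for valid JSON
    JSON_VALUE(@info, '$.Order.Number') AS OrderNumber, -- scalar value
    JSON_QUERY(@info, '$.Items')        AS ItemsArray;  -- object or array fragment
```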
59. •JSON data is not a built-in data type in SQL Server 2016
•Create an index on JSON data:
• Create a computed column using the expression to be used in search queries
• Create the non-clustered index on the computed column
• Use the same expression in your queries as used to create the computed column, so SQL Server can perform a
seek, rather than a table scan
Indexing JSON Documents
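The pattern above can be sketched like this, assuming a hypothetical dbo.SalesOrder table with a JSON document stored in an OrderInfo NVARCHAR column:

```sql
-- 1. Computed column over the JSON expression used in search queries
ALTER TABLE dbo.SalesOrder
ADD OrderNumber AS JSON_VALUE(OrderInfo, '$.Order.Number');

-- 2. Non-clustered index on the computed column
CREATE NONCLUSTERED INDEX IX_SalesOrder_OrderNumber
ON dbo.SalesOrder (OrderNumber);

-- 3. Query with the same expression, so the optimizer can seek on the index
SELECT * FROM dbo.SalesOrder
WHERE JSON_VALUE(OrderInfo, '$.Order.Number') = N'SO43659';
```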
61. Temporal Tables
What is Temporal Data?
How to start with temporal
Temporal database support: BETWEEN
How does system-time work?
Application-time temporal
Querying Temporal Tables
Temporal data continuum
In-Memory OLTP and temporal
62. What is Temporal Data?
Data changes over time, and tracking and analyzing those changes is often important
Temporal in the DB:
• Automatically tracks the history of data changes
• Enables easy querying of historical data states
Advantages over workarounds:
• Simplifies app development and maintenance
• Efficiently handles complex logic in the DB engine
Scenarios: time travel, data audit, slowly changing dimensions, repairing record-level corruptions
63. How to start with temporal
No change in programming model, new insights
DDL:
• CREATE TABLE ... PERIOD FOR SYSTEM_TIME ... (new temporal table)
• ALTER TABLE ... ADD PERIOD ... (existing regular table)
DML (unchanged): INSERT / BULK INSERT, UPDATE, DELETE, MERGE
Querying (unchanged): SELECT * FROM temporal
Temporal querying (ANSI SQL:2011 compliant): FOR SYSTEM_TIME
• AS OF
• FROM..TO
• BETWEEN..AND
• CONTAINED IN
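The CREATE TABLE side of the system-versioned DDL can be sketched like this (table and column names are illustrative):

```sql
CREATE TABLE dbo.Department
(
    DeptId int NOT NULL PRIMARY KEY CLUSTERED,
    DeptName nvarchar(50) NOT NULL,
    -- Period columns maintained automatically by the engine
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.DepartmentHistory));
```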
64. Provides correct information about stored facts at any point in time, or
between two points in time.
There are two orthogonal sets of scenarios with regard to temporal data:
• System (transaction)-time
• Application-time

SELECT * FROM
Person.BusinessEntityContact
FOR SYSTEM_TIME BETWEEN @Start
AND @End
WHERE ContactTypeID = 17

Temporal database support: BETWEEN
65. INSERT / BULK INSERT writes into the temporal table (actual data);
UPDATE and DELETE move the old row versions into the history table
How does system-time work?
66. Regular queries (current data) read only the temporal table; temporal
queries (time travel, etc.) also include historical versions from the
history table
How does system-time work?
67. Limits of system-time
• Time flows 'forward only'
• System-time ≠ business-time (sometimes)
• Immutable history; the future does not exist
App-time = new scenarios
• Correct past records as new information becomes available (HR, CRM, insurance, banking)
• Project future events (budgeting, what-if, loan repayment schedules)
• Batch DW loading (with delay)
CREATE TABLE Employee
(
[EmployeeNumber] int NOT NULL,
[Name] nvarchar(100) NOT NULL,
[LocationId] int NOT NULL,
[Position] varchar(50) NOT NULL,
[AnnualSalary] decimal (10,2) NOT NULL,
ValidFrom datetime2 NOT NULL,
ValidTo datetime2 NOT NULL,
PERIOD FOR VALID_TIME (ValidFrom,ValidTo),
CONSTRAINT PK_Employee
PRIMARY KEY CLUSTERED
(EmployeeNumber, VALID_TIME WITHOUT OVERLAPS)
)
ALTER TABLE Employee
ADD CONSTRAINT FK_Employee_Department
FOREIGN KEY (LocationId, PERIOD VALID_TIME)
REFERENCES Location (LocationId, PERIOD VALID_TIME);
UPDATE Employee
FOR PORTION OF VALID_TIME
FROM '2010-01-01' TO '2012-01-01'
SET [Position] = 'CEO'
WHERE EmployeeNumber = 1
DELETE FROM Employee
FOR PORTION OF VALID_TIME
FROM '2012-01-01' TO '2013-01-01'
WHERE EmployeeNumber = 1
SELECT * FROM Employee
WHERE VALID_TIME CONTAINS '2013-06-30'
SELECT * FROM Employee
WHERE EmployeeNumber = 1 AND
VALID_TIME OVERLAPS PERIOD ('2013-06-30', '2014-01-01')
/* Temporal join */
SELECT * FROM Employee E
JOIN Position D ON E.Position = D.Position AND
D.VALID_TIME CONTAINS PERIOD E.VALID_TIME
Consistency • Temporal edits • Easy time-travel querying
Application-time temporal
Application-time temporal
68. •System-versioned tables can be queried using the FOR SYSTEM_TIME clause
and one of the following four sub-clauses:
• AS OF <date_time>
• FROM <start_date_time> TO <end_date_time>
• BETWEEN <start_date_time> AND <end_date_time>
• CONTAINED IN (<start_date_time>, <end_date_time>)
•Or use ALL to return everything
Querying Temporal Tables
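Assuming a system-versioned table named dbo.Department, the sub-clauses look like this (the dates are illustrative):

```sql
-- State of the data at a point in time
SELECT * FROM dbo.Department
FOR SYSTEM_TIME AS OF '2015-09-01T10:00:00';

-- Row versions that were active at any point in the interval
SELECT * FROM dbo.Department
FOR SYSTEM_TIME BETWEEN '2015-01-01' AND '2015-12-31';

-- Row versions that both opened and closed within the interval
SELECT * FROM dbo.Department
FOR SYSTEM_TIME CONTAINED IN ('2015-01-01', '2015-12-31');

-- Current rows plus all historical versions
SELECT * FROM dbo.Department
FOR SYSTEM_TIME ALL;
```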
70. Extreme OLTP with cost-effective data history:
• Super-fast DML and current data querying on the memory-optimized table
• Disk-based history table with internal data retention
• Temporal querying in interop mode
In-Memory OLTP and temporal
72. Using Always Encrypted
The need for Always Encrypted
Why Use Always Encrypted?
How it works
Always Encrypted Encryption Types
Always Encrypted Keys
Implementing Always Encrypted
73. The need for Always Encrypted
• Prevents data disclosure: client-side encryption of sensitive data using keys that are never
given to the database system
• Queries on encrypted data: support for equality comparison, including join, group by, and
distinct operators
• Application transparency: minimal application changes via server and client library
enhancements
Allows customers to securely store sensitive data outside of their trust boundary.
Data remains protected from high-privileged, yet unauthorized users.
74. •Always Encrypted is new in SQL Server 2016
•Always Encrypted solves the problem of data being displayed in plain text,
even though it is encrypted on disk
•Always Encrypted encrypts specific columns
•Encryption “at rest” and “in flight”
•Encryption keys are stored on the client, not the server
•Use Always Encrypted together with TDE
•Always Encrypted is already implemented in Azure SQL Database
Why Use Always Encrypted?
75. How it works
The client driver (ADO.NET) encrypts sensitive parameters, so only ciphertext crosses
the trust boundary into SQL Server or SQL Database:
"SELECT Name FROM Customers WHERE SSN = @SSN", "111-22-3333" becomes
"SELECT Name FROM Customers WHERE SSN = @SSN", 0x7ff654ae6d
The dbo.Customers table stores ciphertext (e.g. Name = 0x19ca706fbd9a, SSN = 0x7ff654ae6d),
and the result set returns ciphertext that the client decrypts (Name = Wayne Jefferson).
Encrypted sensitive data and the corresponding keys are never seen in plaintext in SQL Server.
76. • Randomized encryption
• Encrypt '123-456-789' = 0x573hshg2
• Repeat '123-456-789' = 0x8b3pdi23
• Enables transparent retrieval of encrypted data
• Cannot perform operations with randomized encryption
• Cannot use randomized encryption on indexed columns
• More secure
• Deterministic encryption
• Encrypt '123-456-789' = 0x573hshg2
• Repeat '123-456-789' = 0x573hshg2
• Enables transparent retrieval of encrypted data
• Can perform equality comparisons
• Columns can be indexed
Always Encrypted Encryption Types
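A column-level sketch of the two encryption types (the table, column, and key names are illustrative, and the column encryption key MyCEK must already exist):

```sql
CREATE TABLE dbo.Patients
(
    PatientId int IDENTITY(1,1) PRIMARY KEY,
    -- Deterministic: equality comparisons and indexing are possible;
    -- character columns require a BIN2 collation
    SSN char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = MyCEK,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    -- Randomized: more secure, but no operations or indexes on the column
    AnnualIncome money
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = MyCEK,
                        ENCRYPTION_TYPE = RANDOMIZED,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
);
```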
77. •Two keys are required:
• Column Master Key
• Column Encryption Key
•The Column Master Key encrypts the Column Encryption Key
•The Column Encryption Key encrypts the data
•Keys are stored client side, not on the server
•New catalog views:
SELECT * FROM sys.column_master_keys
SELECT * FROM sys.column_encryption_keys
Always Encrypted Keys
78. Security officer workflow:
1. Generate column encryption keys (CEKs) and a column master key (CMK)
2. Encrypt each CEK with the CMK
3. Store the master key securely in a CMK store (certificate store, HSM, Azure Key Vault, ...)
4. Upload the encrypted CEK to the database
Key provisioning
79. The enhanced ADO.NET driver first asks SQL Server how each parameter must be encrypted:

exec sp_describe_parameter_encryption
@params = N'@SSN VARCHAR(11)'
, @tsql = N'SELECT * FROM Customers WHERE SSN = @SSN'

For each parameter, SQL Server returns the encryption type and algorithm (e.g. deterministic
AES-256), the encrypted CEK value, the CMK store provider name (e.g. CERTIFICATE_STORE), and
the CMK path (e.g. Current User/My/f2260...).
The driver caches the plaintext CEK, encrypts the parameter values, and executes:

EXEC sp_executesql
N'SELECT * FROM Customers WHERE SSN = @SSN'
, @params = N'@SSN VARCHAR(11)', @SSN=0x7ff654ae6d

The result set comes back as ciphertext (Name = 0x19ca706fbd9...) and is decrypted on the
client (trusted), never on SQL Server (untrusted): Name = Jim Gray.
Application code stays almost unchanged:

using (SqlCommand cmd = new SqlCommand(
    "SELECT Name FROM Customers WHERE SSN = @SSN", conn))
{
    cmd.Parameters.Add(new SqlParameter(
        "@SSN", SqlDbType.VarChar, 11)).Value = "111-22-3333";
    SqlDataReader reader = cmd.ExecuteReader();
}

Example
80. •Prerequisites: .Net Framework 4.6 and SQL Client
•Using Object Explorer, select your database, then Security
• Create a Column Master Key
• Create a Column Encryption Key
• Encrypt individual columns by right-clicking and choosing Encrypt Column from the menu. The Always
Encrypted wizard steps through the process
•Encrypt individual columns using CREATE TABLE or ALTER TABLE
•Remove Always Encrypted by choosing “plain text”
Implementing Always Encrypted
82. High Availability Enhancements
The need for mission-critical availability
Enhanced AlwaysOn Availability Groups
Load balancing in readable secondaries
Distributed Transaction Coordinator (DTC) support
Database-level failover trigger
gMSA support
Domain-independent Availability Groups
83. The need for mission-critical availability
• Unified, simplified solution
• Easy to deploy, manage, and monitor
• Reuse existing investments
• SAN/DAS environments
• Able to use HA hardware resources
• Fast, transparent failover
• Detects failures reliably
• Able to handle multiple failures
84. Enhanced AlwaysOn Availability Groups
AG_Listener
New York
(Primary)
Asynchronous data
Movement
Synchronous data
Movement
Unified HA solution
AG
Hong Kong
(Secondary)
AG
New Jersey
(Secondary)
AG
Greater scalability
Load balancing readable secondaries
Increased number of automatic failover targets
Log transport performance
Improved manageability
DTC support
Database-level health monitoring
Group Managed Service Account
Domain-independent Availability Groups
85. Load balancing in readable secondaries
In SQL Server 2014, read-only transactions routed by the listener went to the
first secondary that was available.
In SQL Server 2016, you can configure read-only routing (ROR) lists to
round-robin among a specific set of secondaries (for each primary):
READ_ONLY_ROUTING_LIST=
(('COMPUTER2', 'COMPUTER3', 'COMPUTER4'), 'COMPUTER5')
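Configuring a round-robin routing list of this shape might look like the following sketch (the availability group and replica names are illustrative):

```sql
ALTER AVAILABILITY GROUP AG1
MODIFY REPLICA ON N'COMPUTER1' WITH
(
    PRIMARY_ROLE
    (
        -- The nested parentheses define a round-robin set;
        -- COMPUTER5 is used only if none of the first three is available
        READ_ONLY_ROUTING_LIST = (('COMPUTER2', 'COMPUTER3', 'COMPUTER4'), 'COMPUTER5')
    )
);
```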
86. •Before SQL Server 2016, distributed transactions touching a database in an
availability group were not supported
• Many customers run unsupported, at risk to their data and reputation
• Many enterprise applications need cross-database transactions
•Fully supported in SQL Server 2016
• Joint effort with Windows
• Requires specific patch in order to work cleanly
• Other requirements:
• Availability groups must be running on Windows Server 2016 Technical Preview 2
• Availability groups must be created with the CREATE AVAILABILITY GROUP command and the WITH
DTC_SUPPORT = PER_DB clause. You cannot currently alter an existing availability group
• Learn more: https://msdn.microsoft.com/en-us/library/ms366279.aspx
Distributed Transaction Coordinator (DTC) support
87. •Currently, Availability Groups only monitor the health of the instance
• A database can be offline or corrupt, but will not trigger a failover as long as the instance itself
is healthy
•SQL Server 2016: option to also monitor the health of the databases in the
Availability Group
• Databases going offline trigger a change in the health status
Database-level failover trigger
88. •Group Managed Service Accounts (gMSA)
• Automatically set domain scope for Managed Service Accounts
• Automatic password rotation
• Much more secure than regular domain accounts
• Enables cross-system security context
•Why would I want a gMSA?
• No need to manually change passwords on all AlwaysOn instances
•How does it work?
• Passwords are managed by the domain
•What versions will it be supported in?
• Supported in SQL Server 2014 and SQL Server 2016
gMSA support
89. •New feature in Windows Server 2016
•Environments supported:
• Cross domains (with trust)
• Cross domains (no trust)
• No domain at all
•Cluster management via PowerShell only
•SQL management as normal
•Uses certificate-secured endpoints like DBM
Domain-independent Availability Groups
90. Stretch Database
Why Stretch Database?
Stretch SQL Server into Azure
Stretch Database Architecture
How to enable Stretch database
Queries continue working
Advanced security features supported
Backup and Restore
91. Why Stretch Database?
The problem:
• Massive tables (hundreds of millions/billions of rows, TBs in size)
• Users want/need to retain data indefinitely
• Cold data is infrequently accessed but must remain online
• Datacenter consolidation
• Maintenance challenges
• Business SLAs at risk
What to do? Expand server and storage, move data elsewhere, or delete it
92. Stretch SQL Server into Azure
Capability: stretch large operational tables from on-premises SQL Server 2016
to Azure, with the ability to query them
93. Stretch Database architecture
How it works
• Creates a secure linked server
definition in the on-premises SQL
Server
• Linked server definition has the
remote endpoint as the target
• Provisions remote resources and
begins to migrate eligible data, if
migration is enabled
• Queries against tables run against
both the local database and the
remote endpoint
The remote endpoint and remote data live in Azure; the local database, local
data, and eligible data stay on-premises, connected across the Internet
boundary through the linked server definition
94. How to enable Stretch database
-- Enable local server
EXEC sp_configure 'remote data archive', '1';
RECONFIGURE;
-- Provide administrator credential to connect to
-- Azure SQL Database
CREATE CREDENTIAL <server_address> WITH
IDENTITY = '<administrator_user_name>',
SECRET = '<administrator_password>';
-- Alter database for remote data archive
ALTER DATABASE <database_name>
SET REMOTE_DATA_ARCHIVE = ON (SERVER = '<server_name>');
GO
-- Alter table for remote data archive
ALTER TABLE <table_name>
ENABLE REMOTE_DATA_ARCHIVE
WITH ( MIGRATION_STATE = ON );
GO
High-level steps
• Configure local server for remote
data archive
• Create a credential with
administrator permission
• Alter specific database for remote
data archive
• Alter table for remote data
archive
95. Queries continue working
• Business applications continue working
without disruption
• DBA scripts and tools work as before; all
controls remain in the local SQL Server
• Developers continue building or
enhancing applications with existing
tools and methods
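The transparency claimed above can be sketched with an ordinary query (table and column names are illustrative, not from the deck):

```sql
-- A query against a Stretch-enabled table is written exactly as before;
-- the engine transparently combines local (hot) rows with rows already
-- migrated to Azure. No application change is required.
SELECT OrderID, OrderDate, TotalDue
FROM dbo.Orders                 -- hypothetical Stretch-enabled table
WHERE OrderDate < '2010-01-01'; -- predicate may touch remote (cold) data
```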
96. Advanced security features supported
• Data in motion always travels over secure
channels (TLS 1.1 / 1.2)
• Always Encrypted supported if enabled
by user
Encryption key remains on-premises
• Row-level security already working
• SQL Server and SQL Azure audit already
working
97. Backup and Restore
• DBAs back up and restore only the local
SQL Server hot data
• StretchDB ensures remote data is
transactionally consistent with local
• Upon completion of local restore,
SQL Server reconciles with remote
using metadata operation, not data
copy
• Time of restore for remote not
dependent on size of data
98. Enhanced backup
Enhanced backup to Azure
Managed backup
Customized scheduling
Backup to Azure block blobs
Backup to Azure with file snapshots (SQL Server 2016)
Backup to Azure with file snapshots
Point-in-time restore with file snapshots
Summary: enhanced backup
99. Managed backup
• Granular control of the backup schedule
• Local staging support for faster recovery and resilience to transient network issues
• Support for system databases
• Supports simple recovery mode
Backup to Azure block blobs
• Cost savings on storage
• Significantly improved restore performance
• More granular control over Azure Storage
Azure Storage snapshot backup
• Fastest method for creating backups and running restores
• Uses SQL Server database files on Azure Blob storage
Hybrid solutions
Enhanced backup to Azure
100. •Support for system databases
•Supports databases in simple recovery mode
•Leveraging backup to block blobs: more granular control
•Allows customized backup schedules: full backup and log backup
Managed backup
Hybrid solutions
101. Customized scheduling
Step 1: Run the scheduling SP to configure custom scheduling
EXEC msdb.managed_backup.sp_backup_config_schedule
@database_name = 'testDB'
,@scheduling_option = 'Custom'
,@full_backup_freq_type = 'Weekly'
,@days_of_week = 'Saturday'
,@backup_begin_time = '11:00'
,@backup_duration = '02:00'
,@log_backup_freq = '00:05';
Step 2: Run the basic SP to configure Managed Backup
EXEC msdb.managed_backup.sp_backup_config_basic
@database_name = 'testDB',
@enable_backup = 1,
@container_url = 'https://<storage_account>.blob.core.windows.net/<container>',
@retention_days = 30;
Hybrid solutions
102. 2x cheaper storage
Backup striping and faster restore
Maximum backup size is 12 TB+
Granular access and unified credential story (SAS URIs)
Supports all existing backup/restore features (except append)
Backup to Azure block blobs
CREATE CREDENTIAL [https://<account>.blob.core.windows.net/<container>]
WITH IDENTITY = 'Shared Access Signature',
SECRET = 'sig=mw3K6dpwV%2BWUPj8L4Dq3cyNxCI'
BACKUP DATABASE <database> TO
URL = N'https://<account>.blob.core.windows.net/<container>/<blob1>',
URL = N'https://<account>.blob.core.windows.net/<container>/<blob2>';
Hybrid solutions
103. BACKUP DATABASE <database> TO
URL = N'https://<account>.blob.core.windows.net/<container>/<backupfileblob1>'
WITH FILE_SNAPSHOT;
Backup to Azure with file snapshots (SQL Server 2016)
[Diagram: the SQL Server instance keeps its database MDF and LDF files in Azure Storage; the file-snapshot backup (BAK) is created alongside them in Azure Storage]
Hybrid solutions
104. •Available to users whose database files are located in Azure Storage
•Makes a copy of a database using a virtual snapshot within Azure Storage
The database data does not move between the storage system and the server instance, removing the I/O bottleneck
•Uses only a fraction of the space that a traditional backup would consume
Backup to Azure with file snapshots
Hybrid solutions
105. Traditional backup
• Multiple backup types
• Complex point-in-time restore process
Full Log Log Log Diff Log Log Log Diff Log Log Log
Backup to Azure with file snapshots
• Full backup only once
• Point-in-time restore only needs two adjacent backups
Full . . . . . Log Log Log Log Log Log Log Log Log Log Log
Hybrid solutions
Point-in-time restore with file snapshots
109. •The new ALTER DATABASE SCOPED CONFIGURATION statement gives you
control of certain configurations for your particular database.
•The configuration settings affect application behavior.
•The new statement is available in both SQL Server 2016 and SQL Database
V12.
Database Scoped Configurations
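A minimal sketch of the new statement, using a few of its documented options (the values chosen here are illustrative):

```sql
-- Per-database settings that previously required server-wide
-- configuration or trace flags (SQL Server 2016 / SQL Database V12)
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
-- Clear the plan cache for the current database only
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
```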
110. •Trace flags 1117 and 1118 are no longer required for tempdb.
• If there are multiple tempdb database files, all files grow at the same time depending on growth settings.
In addition, all allocations in tempdb use uniform extents.
•By default, setup adds as many tempdb files as the CPU count or 8, whichever
is lower.
•During setup, you can configure the number of tempdb database files, initial
size, autogrowth and directory placement using the new UI input control on
the Database Engine Configuration - TempDB section of SQL Server
Installation Wizard.
•The default initial size is 8 MB and the default autogrowth is 64 MB.
•You can specify multiple volumes for tempdb database files. If multiple
directories are specified tempdb data files will be spread across the directories
in a round-robin fashion.
TempDB Database
111. •A table can reference a maximum of 253 other tables and columns as foreign
keys (outgoing references).
•SQL Server 2016 increases the limit on the number of other tables and columns
that can reference columns in a single table (incoming references), from 253 to
10,000.
Foreign Key Relationship Limits
113. •New values for the model database and default values for new databases
(which are based on model).
•The initial size of the data and log files is now 8 MB.
•The default auto-growth of data and log files is now 64 MB.
New Default Database Size and Autogrow Values
114. •Replication of memory-optimized tables is now supported
•Replication to Azure SQL Database is now supported
Replication Enhancements
115. •ALTER ANY SECURITY POLICY permission
• is available as part of the implementation of row level security.
•ALTER ANY MASK and UNMASK permissions
• are available as part of the implementation of dynamic data masking.
•ALTER ANY COLUMN ENCRYPTION KEY, VIEW ANY COLUMN ENCRYPTION
KEY, ALTER ANY COLUMN MASTER KEY DEFINITION, and VIEW ANY
COLUMN MASTER KEY DEFINITION permissions
• are available as part of the implementation of the Always Encrypted feature.
New Permissions
116. •ALTER ANY EXTERNAL DATA SOURCE and ALTER ANY EXTERNAL FILE
FORMAT permissions
• are visible in SQL Server 2016 but only apply to the Analytics Platform System (SQL Data Warehouse).
•EXECUTE ANY EXTERNAL SCRIPT permissions
• are available as part of the support for R scripts.
•ALTER ANY DATABASE SCOPED CONFIGURATION permissions
New Permissions
117. •The TRUNCATE TABLE statement now permits the truncation of specified
partitions
•ALTER TABLE now allows many alter column actions to be performed while the
table remains available.
•A new query hint NO_PERFORMANCE_SPOOL can prevent a spool operator
from being added to query plans. This can improve performance when many
concurrent queries are running with spool operations
•The FORMATMESSAGE statement is enhanced to accept a msg_string
argument.
Additional Enhancements
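The first three bullets can be sketched in T-SQL as follows (table and partition numbers are illustrative):

```sql
-- Truncate only selected partitions of a partitioned table
TRUNCATE TABLE dbo.SalesHistory          -- hypothetical partitioned table
WITH ( PARTITIONS (1, 2, 4) );

-- Keep a spool operator out of the plan for a specific query
SELECT *
FROM dbo.SalesHistory
OPTION ( NO_PERFORMANCE_SPOOL );

-- FORMATMESSAGE with an ad hoc message string instead of a msg_id
SELECT FORMATMESSAGE('Processed %d rows for customer %s.', 42, 'Contoso');
```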
118. •The maximum index key size for NONCLUSTERED indexes has been increased
to 1700 bytes.
•New DROP IF syntax is added for drop statements related to
• AGGREGATE, ASSEMBLY, COLUMN, CONSTRAINT, DATABASE, DEFAULT, FUNCTION, INDEX, PROCEDURE,
ROLE, RULE, SCHEMA, SECURITY POLICY, SEQUENCE, SYNONYM, TABLE, TRIGGER, TYPE, USER, and VIEW.
•SESSION_CONTEXT can now be set.
• Includes the SESSION_CONTEXT function,
• CURRENT_TRANSACTION_ID function,
• and the sp_set_session_context procedure.
Additional Enhancements
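The DROP IF and session-context bullets above look like this in practice (object names and the TenantId key are illustrative):

```sql
-- DROP ... IF EXISTS: no error is raised if the object is absent
DROP TABLE IF EXISTS dbo.Staging;
DROP PROCEDURE IF EXISTS dbo.usp_LoadStaging;

-- Writable per-session key/value store
EXEC sp_set_session_context @key = N'TenantId', @value = 42;
SELECT SESSION_CONTEXT(N'TenantId') AS TenantId;  -- sql_variant
SELECT CURRENT_TRANSACTION_ID() AS TxnId;
```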
119. •Advanced Analytics Extensions allow users to execute scripts written in a
supported language such as R.
• Transact-SQL supports R by introducing the sp_execute_external_script stored procedure, and the external
scripts enabled Server Configuration Option.
• Also to support R, you can now create an external resource pool.
•The COMPRESS and DECOMPRESS functions
• convert values into and out of the GZIP algorithm.
•A MAXDOP option has been added to
• DBCC CHECKTABLE DBCC CHECKDB and DBCC CHECKFILEGROUP to specify the degree of parallelism.
Additional Enhancements
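A short sketch of the COMPRESS/DECOMPRESS round trip and the new DBCC MAXDOP option (database name is illustrative):

```sql
-- COMPRESS returns a GZIP-compressed VARBINARY(MAX);
-- DECOMPRESS reverses it. Cast back to the original type to read it.
DECLARE @blob VARBINARY(MAX) = COMPRESS(N'Some repetitive payload text');
SELECT CAST(DECOMPRESS(@blob) AS NVARCHAR(MAX)) AS original_text;

-- Cap the parallelism of a consistency check at 4 CPUs
DBCC CHECKDB (N'AdventureWorks2016') WITH MAXDOP = 4;
```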
120. •The DATEDIFF_BIG and AT TIME ZONE functions and the sys.time_zone_info
view are added to support date and time interactions.
•A credential can now be created at the database level (in addition to the
server level credential that was previously available)
•The input length limit of 8,000 bytes for the HASHBYTES function is removed.
•New string functions STRING_SPLIT and STRING_ESCAPE are added.
Additional Enhancements
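The new date/time and string functions above in a minimal sketch (dates and inputs are illustrative):

```sql
-- Difference too large for DATEDIFF's INT return type
SELECT DATEDIFF_BIG(nanosecond, '2016-01-01', '2016-12-31') AS ns;

-- Interpret a local value as UTC, then convert to another zone
SELECT CAST('2016-06-15 12:00' AS datetime)
       AT TIME ZONE 'UTC'
       AT TIME ZONE 'Central European Standard Time' AS cet_time;

-- Split a delimited list into rows; escape a string for JSON
SELECT value FROM STRING_SPLIT('red,green,blue', ',');
SELECT STRING_ESCAPE('line1' + CHAR(10) + 'line2', 'json') AS escaped;
```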
121. •Eight new properties are added to SERVERPROPERTY :
• InstanceDefaultDataPath, InstanceDefaultLogPath
• ProductBuild, ProductBuildType,
• ProductMajorVersion, ProductMinorVersion,
• ProductUpdateLevel, ProductUpdateReference.
•Trace flag 1117 is replaced by the AUTOGROW_SINGLE_FILE and
AUTOGROW_ALL_FILES options of ALTER DATABASE, and trace flag 1117 has no
effect.
•For user databases, default allocation for the first 8 pages of an object will
change from using mixed page extents to using uniform extents.
• Trace flag 1118 is replaced with the SET MIXED_PAGE_ALLOCATION option of ALTER DATABASE, and trace flag
1118 has no effect.
Additional Enhancements
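The SERVERPROPERTY additions and the trace-flag replacements can be sketched as follows (the database name MyDb is illustrative):

```sql
-- Query a few of the eight new SERVERPROPERTY values
SELECT SERVERPROPERTY('InstanceDefaultDataPath') AS DefaultDataPath,
       SERVERPROPERTY('InstanceDefaultLogPath')  AS DefaultLogPath,
       SERVERPROPERTY('ProductBuild')            AS Build,
       SERVERPROPERTY('ProductUpdateLevel')      AS UpdateLevel;

-- Per-filegroup replacement for trace flag 1117
ALTER DATABASE MyDb MODIFY FILEGROUP [PRIMARY] AUTOGROW_ALL_FILES;

-- Per-database replacement for trace flag 1118
ALTER DATABASE MyDb SET MIXED_PAGE_ALLOCATION OFF;
```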