Showing posts with label SQL Server. Show all posts

Sunday, November 24, 2019

Monitoring SQL Sessions with necessary parameters

Hello Readers!

Below is a useful T-SQL query that gathers the key parameters for live monitoring of SQL Server sessions in SSMS. You can add more filters to the WHERE clause to make it more relevant to your environment.

It includes the isolation level used by each session, along with commonly monitored columns such as host name, wait time, and program name.

USE master
GO
SELECT s.session_id,
       r.status,
       r.blocking_session_id AS BlockedBy,
       r.wait_type,
       r.wait_resource,
       DB_NAME(s.database_id) AS DBName,
       s.host_name,
       CASE r.transaction_isolation_level
            WHEN 1 THEN 'ReadUncommitted'
            WHEN 2 THEN 'ReadCommitted'
            WHEN 3 THEN 'RepeatableRead'
            WHEN 4 THEN 'Serializable'
            WHEN 5 THEN 'Snapshot'
            ELSE 'Unspecified'
       END AS transaction_isolation_level,
       r.wait_time / 1000.0 AS [WaitSec],
       r.total_elapsed_time / 1000.0 AS [ElapSec],
       r.cpu_time,
       r.logical_reads,
       r.reads,
       r.writes,
       SUBSTRING(st.text, (r.statement_start_offset / 2) + 1,
                 ((CASE r.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE r.statement_end_offset
                   END - r.statement_start_offset) / 2) + 1) AS SQL_text,
       COALESCE(QUOTENAME(DB_NAME(st.dbid)) + N'.' + QUOTENAME(OBJECT_SCHEMA_NAME(st.objectid, st.dbid)) + N'.' + QUOTENAME(OBJECT_NAME(st.objectid, st.dbid)),
                '') AS command_text,
       r.command,
       s.login_name,
       s.program_name,
       -- the 3-hour offset shifts the UTC times reported by these DMVs to local time (UTC+3 here)
       DATEADD(HOUR, 3, s.last_request_end_time) AS last_request_end_time,
       DATEADD(HOUR, 3, s.login_time) AS Login_Time,
       r.open_transaction_count
FROM   sys.dm_exec_sessions AS s
       JOIN sys.dm_exec_requests AS r
         ON r.session_id = s.session_id
       CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE  r.session_id != @@SPID
ORDER BY r.cpu_time DESC,
         r.status,
         r.blocking_session_id,
         s.session_id
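
If you mainly care about blocking chains, one variation (my addition, not part of the original query) is to tighten the WHERE clause so that only blocked or blocking sessions remain:

```sql
-- Hypothetical extra filter: keep only sessions involved in blocking.
-- Substitute this for the WHERE clause of the monitoring query above.
WHERE r.session_id != @@SPID
  AND (r.blocking_session_id <> 0                       -- session is blocked
       OR r.session_id IN (SELECT blocking_session_id   -- session is blocking others
                           FROM sys.dm_exec_requests
                           WHERE blocking_session_id <> 0))
```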

Please let me know in the comments section if you find any other columns that would be useful for improving this query.


Sunday, October 20, 2019

Migrating your SQL Server databases to Azure Managed Instance in offline mode using the native backup/restore method

Hello Readers!

This is a short blog on migrating SQL Server databases to an Azure Managed Instance using the native SQL backups we take from our on-premises SQL Servers. It is an offline method: you need downtime for the whole activity, until connections are re-established to your new database in the cloud, so it is not suitable for giant databases where the business cannot accept a long downtime window. It is still a convenient option for small databases.

Managed Instance databases do not support restoring WITH NORECOVERY, which means we cannot use a restore plan that applies the latest full, differential, and subsequent log backups in sequence.

Let's get to it.

Step 1: Creating a Blob container in Azure storage account

Go to your storage account in the Azure portal and click Containers (basically blob folders).

Here we can create a new container.



You can refer to my previous detailed blog series on storage accounts to learn more: Part 1, 2, 3.

Step 2: Creating a SQL credential with the SAS key of the storage account

The concept of "credentials" has existed in SQL Server for a long time. A credential is used by SQL Server to connect to resources outside the instance; in our case, the Azure storage account where we created the blob container.

The name of the credential we create must be the complete URL of the blob container that will hold the backup files.
You can find it easily by navigating to your storage account and clicking the Properties setting in the left pane.

Please check the top portion of my blog here to learn how to generate the SAS token, which contains the authentication information needed to connect to the storage account.

Below is a sample T-SQL script to create the SQL credential using the SAS token:


USE master
GO
CREATE CREDENTIAL [https://<Yourstorageaccount>.blob.core.windows.net/<Containername>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2017-11-09&ss=bfqt&srt=sco&sp=rwdlacup&se=2028-09-06T02:52:55Z&st=2018-09-04T18:52:55Z&spr=https&sig=WOTifdfdfdfdfdf%2BwndxW2CQ7%2B5fHd7Is%3D'


Please make sure to remove the '?' at the front of the SAS token; that is a common mistake when creating the credential. The SQL credential must be created both on the source SQL Server instance (if you are backing up directly to the Azure storage account/blob container) and on the managed instance.
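
Once created, you can confirm the credential exists with a quick check (my addition, not part of the original post):

```sql
-- The credential name should match the container URL exactly
SELECT name, credential_identity, create_date
FROM sys.credentials
WHERE credential_identity = 'SHARED ACCESS SIGNATURE'
```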

Step 3: Backing up your databases to the Azure blob container from the source DB instance

Use the regular BACKUP T-SQL command, with the only difference being "TO URL", to write your backup set to the Azure blob container. Authentication for writing the backup to Azure storage is handled by the credential we created above with the container URL and the SAS token.

T-SQL command



USE master
GO
BACKUP DATABASE <DBName>
TO URL = 'https://<YourStorageaccount>.blob.core.windows.net/<Yourblobcontainer>/Dbname.bak'
WITH COMPRESSION


You may also use COPY_ONLY if you are doing a test migration and don't want to affect the backup schedule on your DB instance. Backup to URL also supports striping large database backups across multiple files for ease of management. All you need to do is add as many URL = '<backup folder + file name>' entries to the backup command, like below.


Use master
go
BACKUP DATABASE <DBName> TO
URL = 'https://<YourStorageaccount>.blob.core.windows.net/<Yourblobcontainer>/Dbname1.bak' ,
URL = 'https://<YourStorageaccount>.blob.core.windows.net/<Yourblobcontainer>/Dbname2.bak' ,
URL = 'https://<YourStorageaccount>.blob.core.windows.net/<Yourblobcontainer>/Dbname3.bak'
with compression

Step 4: Restoring the backup from the blob container into the managed instance
T-SQL command
use master
go
RESTORE DATABASE <DBname> FROM URL = 'https://<YourStorageaccount>.blob.core.windows.net/<Yourblobcontainer>/WideWorldImporters-Standard.bak'

If you created a striped backup in Step 3, make sure to supply the URLs of all your backup files in the restore command.

use master
go
RESTORE DATABASE <DBname> FROM
URL = 'https://<YourStorageaccount>.blob.core.windows.net/<Yourblobcontainer>/Dbname1.bak',
URL = 'https://<YourStorageaccount>.blob.core.windows.net/<Yourblobcontainer>/Dbname2.bak' ,
URL = 'https://<YourStorageaccount>.blob.core.windows.net/<Yourblobcontainer>/Dbname3.bak'



You can view your restored database in SSMS or in the portal in your managed instance page.
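
You can also confirm from T-SQL (a quick check I'm adding, not in the original post) that the restored database has come online:

```sql
-- The restored database should appear with state_desc = 'ONLINE'
SELECT name, state_desc, create_date
FROM sys.databases
WHERE name = '<DBname>'
```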

You can also monitor the progress of the restore using DMVs, like below.



USE master
GO
SELECT percent_complete
FROM sys.dm_exec_requests
WHERE session_id = <SPID of your restore session>


I hope you are able to find your restore session's SPID using sp_who or sp_who2 'active'.
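
If hunting through sp_who2 output is tedious, a variation I find handy (my addition, not from the original post) locates the restore session and its progress in one query:

```sql
-- Find the restore session and its progress in one go;
-- estimated_completion_time is reported in milliseconds
SELECT session_id,
       command,
       percent_complete,
       estimated_completion_time / 60000.0 AS est_minutes_left
FROM sys.dm_exec_requests
WHERE command LIKE 'RESTORE%'
```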

For online migration to minimize downtime, there are methods like SQL replication and the Azure Database Migration Service (DMS), a premium service listed in the Azure Marketplace.

That brings us to the end of this blog. I hope you find it useful and simple. I will write on online DB migration in the future, with all prerequisites.

Saturday, October 5, 2019

Configuring Transactional replication between two Azure managed instances

Hello Readers!

Welcome to another blog on Azure Managed Instance. This time I will walk you through a demo of configuring transactional replication between two managed instances.
I hope you have read my blog on the HA and DR options in Azure Managed Instance; if you have not, please check it here. In the replication section, I gave some details on the limitations of transactional replication in managed instances, and I have done the POC below with those limitations in mind.


We will use T-SQL scripts for the configuration this time, as it is much simpler than the UI steps.

Prerequisites for configuring replication between two instances

1. A storage account with an SMB file share in it, to be used as the snapshot folder.
2. Port 445 (TCP outbound) open in the security rules of the NSG, so the managed instances can access the Azure file share.
3. Per the Microsoft documentation, the managed instances should not be participating in active geo-replication, and VNet peering is required if the instances are in different VNets.

My setup for the POC Test

I have two managed instances sharing the same VNet/subnet, and I am connected to them in SSMS over a point-to-site VPN connection that we set up earlier (I will write a separate blog on configuring this). I have renamed the private endpoint URLs for security reasons, but have kept the same pattern in the scripts so you can identify them as publisher and subscriber.

Publisher/Distributor : tcp:xxxxxxmi1.xxxxxx.database.windows.net,1433
Subscriber : tcp:xxxxxxgp.xxxxxx.database.windows.net,1433
Publisher Database(Source DB)  : AODB
Article to replicate : Action (a table I have in the AODB database)
Login used: AzureAdmin
 (I have used the same account name for the SQL logins on both instances and granted sysadmin rights earlier for administration. You can choose different logins as you please.)
Storage Account : xxxxxstorageac

Subscriber Database : AODB_Repl_POC

I have created a file share in my storage account named 'ReplicationSnapshot' with a decent storage quota of 10 GB. For testing tiny table replications, you don't even need that much.
I went to the Network Security Group (NSG) of my managed instance subnet and added a 445 TCP outbound rule to enable connectivity from the managed instances to the Azure storage file share. The service tag you see below corresponds to the location of my storage account (basically, every one of my POC resources is in East US 2). You should choose the service tag based on your storage account's location.

You should also take note of your storage account connection string, which contains the access keys for authenticating connections from your managed instance to the file share in your storage account.

Note the value given against ConnectionString, as shown in the screenshot below; it needs to be supplied in the "setvar file_storage_key" variable in the script.

Scripts for configuring Replication

All the scripts below are to be executed in SQLCMD mode (you can enable it from the Query menu in SQL Server Management Studio when a new query window is open) and must be run on the publisher server. Please make sure to change the parameter values to match your setup.


Configuring distribution and distribution DB
EXEC sp_adddistributor @distributor = @@ServerName;
EXEC sp_adddistributiondb @database = N'distribution';
GO
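
To verify that the distributor was configured, a quick check (my addition, not part of the original scripts):

```sql
-- Returns the distributor name and whether the distribution database is installed
EXEC sp_get_distributor
```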

Adding the publisher-distributor relationship in the distribution database, and setting the working directory for the distributor

--Please select everything from the line below through "End of the batch" and execute it as a single batch.

--Set Variables
:setvar username AzureAdmin
:setvar password <Password>
:setvar file_storage "\\xxxxxstorageac.file.core.windows.net\replicationsnapshot"
:setvar file_storage_key "DefaultEndpointsProtocol=https;AccountName=xxxxxstorageac;AccountKey=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=;EndpointSuffix=core.windows.net"


USE [master] EXEC sp_adddistpublisher 
@publisher = @@ServerName,
@distribution_db = N'distribution',
@security_mode = 0,
@login = N'$(username)',
@password = N'$(password)',
@working_directory = N'$(file_storage)',
@storage_connection_string = N'$(file_storage_key)';

--End of the batch

Create Publication and Subscriber

--You may execute the below as a single batch against the publication Database. 

-- Set variables 
:setvar username AzureAdmin 
:setvar password <Password>
:setvar source_db AODB 
:setvar publication_name Publish_poc 
:setvar object Action 
:setvar schema dbo 
:setvar target_server "xxxxxxgp.xxxxxx.database.windows.net" 
:setvar target_username AzureAdmin 
:setvar target_password <Password> 
:setvar target_db AODB_Repl_poc 

-- Enable replication for your source database 
USE [$(source_db)] 
EXEC sp_replicationdboption 
  @dbname = N'$(source_db)', 
  @optname = N'publish', 
  @value = N'true'; 

-- Create your publication 
EXEC sp_addpublication 
  @publication = N'$(publication_name)', 
  @status = N'active'; 


-- Configure your log reader agent 
EXEC sp_changelogreader_agent 
  @publisher_security_mode = 0, 
  @publisher_login = N'$(username)', 
  @publisher_password = N'$(password)', 
  @job_login = N'$(username)', 
  @job_password = N'$(password)'; 

-- Add the publication snapshot 
EXEC sp_addpublication_snapshot 
  @publication = N'$(publication_name)', 
  @frequency_type = 1, 
  @publisher_security_mode = 0, 
  @publisher_login = N'$(username)', 
  @publisher_password = N'$(password)', 
  @job_login = N'$(username)', 
  @job_password = N'$(password)'; 

-- Add the Action table to the publication 
EXEC sp_addarticle  
  @publication = N'$(publication_name)', 
  @type = N'logbased', 
  @article = N'$(object)', 
  @source_object = N'$(object)', 
  @source_owner = N'$(schema)'; 

-- Add the subscriber 
EXEC sp_addsubscription 
  @publication = N'$(publication_name)', 
  @subscriber = N'$(target_server)', 
  @destination_db = N'$(target_db)', 
  @subscription_type = N'Push'; 

-- Create the push subscription agent 
EXEC sp_addpushsubscription_agent 
  @publication = N'$(publication_name)', 
  @subscriber = N'$(target_server)', 
  @subscriber_db = N'$(target_db)', 
  @subscriber_security_mode = 0, 
  @subscriber_login = N'$(target_username)', 
  @subscriber_password = N'$(target_password)', 
  @job_login = N'$(target_username)', 
  @job_password = N'$(target_password)'; 

-- Initialize the snapshot 
EXEC sp_startpublication_snapshot 
  @publication = N'$(publication_name)'; 
 --End of the batch


It's recommended to change the timeout of the agent jobs, to work around a known issue in Azure, as recommended by Microsoft.

Changing timeout for agent jobs 
-- Increase login timeout to 150s
UPDATE msdb..sysjobsteps
SET command = command + N' -LoginTimeout 150'
WHERE subsystem IN ('Distribution','LogReader','Snapshot')
  AND command NOT LIKE '%-LoginTimeout %'
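
You can confirm the change took effect with a quick check (my addition, not part of the original post):

```sql
-- Each replication agent job step should now carry the -LoginTimeout switch
SELECT j.name, s.subsystem, s.command
FROM msdb..sysjobs AS j
JOIN msdb..sysjobsteps AS s
  ON s.job_id = j.job_id
WHERE s.subsystem IN ('Distribution','LogReader','Snapshot')
```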

After executing the scripts, I can see that the table has been replicated to the subscriber database, and you will also see files added to your Azure storage file share.


I am not adding screenshots of sample inserts at the publisher, to limit the length of this blog. I am leaving that to you; you may also add more articles to the publication and reinitialize the subscription to make them appear in your subscriber database.


EXEC sp_addarticle
  @publication = N'Publish_poc',
  @article = N'<Tablename>',
  @source_object = N'<Tablename>'
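
After adding an article, the subscription has to be reinitialized and a fresh snapshot generated for it to reach the subscriber; a minimal sketch (my addition, using the publication name from this POC):

```sql
-- Reinitialize the existing subscription(s) so the new article is delivered
EXEC sp_reinitsubscription
  @publication = N'Publish_poc',
  @subscriber = N'all',
  @destination_db = N'all'

-- Generate a fresh snapshot that includes the new article
EXEC sp_startpublication_snapshot
  @publication = N'Publish_poc'
```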

I will write another blog on my experience fixing a replication monitor error in Azure Managed Instance.