Tuesday 21 March 2017

Partition in SQL Server

Hi All,

I would like to share my knowledge on partitioning in SQL Server.

You can follow me here if you like this post.

Creating a table partition on a Varchar column (YEAR-MONTH wise)


1. Create file groups equivalent to the ranges of months we have in the column YEARMONTH_SNAP (example column name).

    For example, if we are inserting data month wise and we have 12 months of data in total, then create 13 file groups.


    Note: We have to create one extra file group beyond the range values we have in the column.
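
The file groups themselves can be created with ALTER DATABASE. A minimal sketch, assuming a database named MyDB and file paths chosen for illustration (the logical and physical file names below are assumptions):

```sql
-- Add one file group and one data file to it; repeat for FG2..FG13 the same way
ALTER DATABASE MyDB ADD FILEGROUP FG1;

ALTER DATABASE MyDB ADD FILE
(
    NAME = 'FG1_Data',                        -- logical file name (assumed)
    FILENAME = 'D:\SQLData\MyDB_FG1.ndf',     -- physical path (assumed)
    SIZE = 100MB,
    FILEGROWTH = 100MB
) TO FILEGROUP FG1;
```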


2. Create a partition function with all the ranges of months we have. It will map all the column values to their respective partitions. Make sure the data type in the partition function is the same as the partition column's data type; for example, if you are partitioning on an int column then specify the int data type in the PARTITION FUNCTION.
         
    CREATE PARTITION FUNCTION [PF_YM2] (varchar(7)) AS RANGE RIGHT FOR VALUES
    (N'2016-11', N'2016-12', N'2017-01', N'2017-02', N'2017-03', N'2017-04',
     N'2017-05', N'2017-06', N'2017-07', N'2017-08', N'2017-09')

Note: The partition function assigns values to partitions using the RANGE RIGHT logic: each boundary value belongs to the partition on its right.

Example (on an int column):

        CREATE PARTITION FUNCTION PF_Int (int)
        AS RANGE RIGHT FOR VALUES (1, 100, 1000);

This creates 4 partitions: values below 1, values from 1 to 99, values from 100 to 999, and values of 1000 and above.
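
To see which partition a given value maps to, we can call the partition function through $PARTITION. A sketch, assuming the PF_YM2 function shown above has already been created:

```sql
-- Values below the first boundary go to partition 1
SELECT $PARTITION.PF_YM2(N'2016-10') AS PartitionNumber;

-- With RANGE RIGHT, a boundary value maps to the partition on its right
SELECT $PARTITION.PF_YM2(N'2016-12') AS PartitionNumber;
```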



3. Create a partition scheme using the partition function and file groups; it will map all the partitions to their respective file groups.

      CREATE PARTITION SCHEME [PS_YM2] AS PARTITION [PF_YM2] TO
      ([FG1], [FG2], [FG3], [FG4], [FG5], [FG6], [FG7], [FG8], [FG9], [FG10], [FG11], [FG12])

Note: We have to add one extra file group in the partition scheme beyond the partition function ranges. If we insert new range values in future, SQL Server will use this additional file group. Right now we have 12 partitions to store 12 months of data; if in future you insert data for a 13th month, that data will be stored in this additional file group.

Example: If we have 11 ranges in the partition function, we need to create the partition scheme using 12 file groups.


4. Now create a clustered index on the partitioned column using the partition scheme; it will map all the range values to their respective partitions.

CREATE CLUSTERED INDEX [ClusteredIndex_on_PS_YM] ON [dbo].[stgp_dcsinvb_cur_monthly]
(
       [YEARMONTH_SNAP]
) WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF)
ON [PS_YM2] ([YEARMONTH_SNAP])


Validation:

       We can validate the partitions and the rows mapped to each partition using the below script.

SELECT o.name objectname,i.name indexname, [partition_id], partition_number, [rows]
FROM sys.partitions p
INNER JOIN sys.objects o ON o.object_id=p.object_id
INNER JOIN sys.indexes i ON i.object_id=p.object_id and p.index_id=i.index_id
WHERE o.name = 'stgp_dcsinvb_cur_monthly'








Saturday 18 March 2017

SQL Server Query Performance- Free e-book


Hi friends, I would like to share a very useful e-book for troubleshooting query performance issues in SQL Server. Download the e-book using the link below.

SQL Server Query Performance e-book : Download e-book here


If you like my posts, follow me here

Friday 17 March 2017

Cross-Database Permissions on Views


  •  Let’s assume we have 2 databases, Database_A & Database_B

o   We have a table dbo.Employee in Database_A

  • Now create a view in Database_B for the table dbo.Employee, which is in Database_A, like below


Use Database_B
go
CREATE View Employee_View
as

Select * from Database_A.dbo.Employee

  •  Assume we have a user called ‘webuser’. We have to grant SELECT permission on the view dbo.Employee_View to this user; at the same time, this user shouldn’t have any access to the base table, i.e., dbo.Employee in Database_A
  •  Now create the user ‘webuser’ in Database_B, then grant SELECT permission on the view.


use Database_B
go
create user webuser for login webuser

  •  Now grant SELECT permission to user ‘webuser’ on the view ‘dbo.Employee_View’



USE Database_B
GO
GRANT SELECT ON [dbo].[Employee_View] TO [webuser]
  • Now let’s test whether user ‘webuser’ is able to select data from the view
  • Log in to the server as user ‘webuser’ and select from the view



  • The user is not able to access the view and gets the below error



Msg 916, Level 14, State 1, Line 2
The server principal "webuser" is not able to access the database "Database_A" under the current security context.
  • To fix this error, we need to create the user in ‘Database_A’ but not grant any permissions to the user in this DB, to restrict him from accessing the base table, i.e., dbo.Employee.



  • Now let’s try to select from the view in Database_B



  • This time also the user is not able to get the data, and gets the below error.


Msg 229, Level 14, State 5, Line 2
The SELECT permission was denied on the object 'Employee', database 'Database_A', schema 'dbo'.

  • In order to fix the issue, we need to enable “cross db ownership chaining” on both databases


        ALTER DATABASE Database_A SET DB_CHAINING ON;
        ALTER DATABASE Database_B SET DB_CHAINING ON;

  • Now let’s try to select from the view in Database_B again



  • SUCCESS! Now user ‘webuser’ is able to access the view ‘dbo.Employee_View’ in Database_B
  • Now let’s test whether the user ‘webuser’ is able to access the base table dbo.Employee in database ‘Database_A’



  • We can clearly see that ‘webuser’ is not able to see the table dbo.Employee in Database_A.
  • It means webuser has access to the view in database Database_B but doesn’t have any access to the base table dbo.Employee in database Database_A.


Note: 1. The user should be created in both the databases.
      2. Make sure not to grant any permission to the user in the base table’s database.
      3. Make sure to enable cross-database chaining on both the databases.
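
The user creation in Database_A described above can be scripted as below. The DENY shown here is an optional extra for illustration: one subtlety of ownership chaining is that, when the chain is intact, SQL Server does not evaluate permissions on the underlying table at all, so even an explicit DENY on the base table does not break access through the view (a sketch, assuming the login webuser already exists):

```sql
USE Database_A
GO
-- Create the user with no permissions granted in this database
CREATE USER webuser FOR LOGIN webuser;

-- Optional: deny direct access to the base table; access through the
-- view still works because the ownership chain skips this check
DENY SELECT ON dbo.Employee TO webuser;
```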

SQL Server Database - Best Practices


1.     The primary file group should be kept separate and left to hold only system objects; no user-defined object should be created in the primary file group. Also, the primary file group should not be set as the default file group. Separating the system objects from other user objects will increase performance and enhance the ability to access tables in cases of serious data failures. We can follow this best practice for huge databases.

2.     If there are N physical disk drives available in the system, then try to create N files per file group and put each one in a separate disk. This will allow distributing disk I/O load over multiple disks, and will increase performance.

3.     Put the transaction log file on a different physical disk that is not used by the data files. The logging operation (Transaction log writing operation) is more write-intensive, and hence it is important to have the log on the disk that has good I/O performance.

4.     Do not let SQL Server fire the "Auto grow" feature too often because it is a costly operation. Set an "Auto grow" increment value so that the database size is increased less frequently (say, once per week). Similarly, do not use the "Auto shrink" feature for the same reason. Disable it, and either shrink the database size manually, or use a scheduled operation that runs in a timed interval (say, once a month)

5.     Compatibility level: Specify the latest version of SQL Server that the database supports. Possible values are SQL Server 2014 (120), SQL Server 2012 (110), and SQL Server 2008/2008 R2 (100). Make sure the proper compatibility level is set as per the SQL Server version.

6.     Recovery Model: Set the recovery model for user-defined databases depending on the requirement.

Ex: For a Production DB, the recovery model should be FULL
      For Dev/Test/UAT the recovery model can be set to either Full or Simple

Note: If we set the Full recovery model then transaction log backups must be configured; if we set the Simple recovery model then transaction log backups need not be configured.

7.     Auto Shrink: Should be disabled. Make sure the Auto Shrink option is set to FALSE.

8.     Auto Update Statistics: Should be enabled. Make sure the Auto Update Statistics option is set to TRUE.


a.       Auto growth increments:

        i.      Small database (<10 GB): 128 MB
       ii.      Medium database (10 - 100 GB): 256 MB
       iii.     Large database (100 - 500 GB): 512 MB
       iv.     Very large database (>500 GB): 1024 MB
       v.     Set log files to grow appropriately – start with 128 MB and increase as necessary. (Keep an eye on the log file. At some point, it should level off.)


b.      Set databases to standards for:
     i.     Auto Close - Off
     ii.    Auto Shrink - Off
     iii.   Auto Create Statistics - On
     iv.    Auto Update Statistics - On
     v.     Appropriate recovery model (Simple, Full)
     vi.    Appropriate page verify option (Checksum - 2005/2008, Torn Page Detection - 2000)
     vii.   Appropriate compatibility level (80 - 2000, 90 - 2005, 100 - 2008/2008 R2, 110 - 2012, 120 - 2014)
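
These standard settings can also be applied with T-SQL. A minimal sketch (the database name MyDB and the compatibility level 120 are assumptions; adjust per your environment):

```sql
ALTER DATABASE MyDB SET AUTO_CLOSE OFF;
ALTER DATABASE MyDB SET AUTO_SHRINK OFF;
ALTER DATABASE MyDB SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE MyDB SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE MyDB SET RECOVERY FULL;            -- or SIMPLE for Dev/Test/UAT
ALTER DATABASE MyDB SET PAGE_VERIFY CHECKSUM;
ALTER DATABASE MyDB SET COMPATIBILITY_LEVEL = 120; -- SQL Server 2014
```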

Creation of SQL Server Database ( For Beginners )


We can create Database in 2 ways  :

1.       Using GUI
2.       Using T-SQL Script

Using GUI:

Step 1: From the Windows Start Menu, select “Microsoft SQL Server”, and then “SQL Server Management Studio”. Once Management Studio starts, connect to the SQL instance, then right-click the Databases folder and select New Database. Enter a name in the “Database name” text box.



Step 2:  Click on New Database and provide the database name.
        For this example, we'll use the name “MyNewDB”.




a. Logical Name:
        MyNewDB     - the data file, where the actual data is saved on the hard drive.
        MyNewDB_log - the log file, which keeps track of any changes to that data.

b.  File Type:
Data files will be defined as ROWS and the log file will be defined as LOG.

c.  Filegroup:

        Primary: The primary data file contains the start-up information for the database and points to the other files in the database. User data and objects can be stored in this file or in secondary data files. Every database has one primary data file. The recommended file name extension for the primary data file is .mdf.

        Transaction Log: The transaction log files hold the log information that is used to recover the database. There must be at least one log file for each database. The recommended file name extension for transaction logs is .ldf. We can add multiple log files to a database.

        Secondary: Secondary data files are optional, user-defined, and store user data. Secondary files can be used to spread data across multiple disks by putting each file on a different disk drive. Additionally, if a database exceeds the maximum size for a single Windows file, you can use secondary data files so the database can continue to grow.

d.  Initial Size (MB):

        It is always recommended to give some initial size to a user-defined database based on its average growth. SQL Server files can grow automatically from their originally specified size. When you define a file, you can specify a growth increment; every time the file fills up, it increases its size by that increment. If there are multiple files in a filegroup, they will not auto-grow until all the files are full; growth then occurs using a round-robin algorithm. The default value can be set to 100 MB for the .mdf and 1024 MB for the .ldf. Later we can analyse the average monthly DB growth and adjust the initial size for the .mdf file.

e.  Autogrowth:

        It is always recommended to set auto growth for both the .mdf and .ldf files. Initially we can set auto growth to 512 MB and later change the value based on the growth of the DB.

For .mdf - should be given in MBs (recommended)





For .ldf - should be given in MBs or percent (e.g., 10%), depending on the expected size of your log file.
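
The same autogrowth settings can be applied with T-SQL using MODIFY FILE. A minimal sketch, assuming the MyNewDB database and the logical file names from the example above:

```sql
-- Set a fixed-size growth increment for the data file and the log file
ALTER DATABASE MyNewDB MODIFY FILE (NAME = MyNewDB,     FILEGROWTH = 512MB);
ALTER DATABASE MyNewDB MODIFY FILE (NAME = MyNewDB_log, FILEGROWTH = 512MB);
```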


f.  Path:

       You have to select the path where the .mdf and .ldf files will be placed, based on your requirement.

g.  File Name:

      We do not need to provide any file name, since the files will be generated automatically based on the logical name once we click OK. If you want to give a separate name, you can provide one.

Using T-SQL Script:

Use the below T-SQL script and execute it in master.

USE master
GO
CREATE DATABASE MyNewDB
ON
( NAME = MyNewDB,
   FILENAME = 'c:\program files\microsoft sql server\mssql\data\MyNewDB.mdf',
   SIZE = 102400KB,
   MAXSIZE = 10GB,
   FILEGROWTH = 10MB )
LOG ON
( NAME = 'MyNewDB_log',
   FILENAME = 'c:\program files\microsoft sql server\mssql\data\MyNewDB_log.ldf',
   SIZE = 102400KB,
   MAXSIZE = 10GB,
   FILEGROWTH = 10% )
GO




Resource Governor in SQL Server


Resource Governor is a feature that can manage SQL Server workload and system resource consumption. We can limit the amount of CPU and memory consumption by limiting/governing/throttling workloads on the SQL Server.

If there are different workloads running on SQL Server and each workload needs different resources, or when workloads are competing for resources with each other and affecting the performance of the whole server, the Resource Governor becomes very useful.

For example, if two users are running queries in parallel and one user's query should run at high priority, then the ideal solution is the Resource Governor.

SQL Server has two resource governor components created by default:
1) Internal – used exclusively by the database engine; the user has no control over it.
2) Default – used by all the workloads that are not assigned to any other group.
What are the major components of the resource governor?
  • Resource Pools
  • Workload Groups
  • Classification
We can configure the resource governor using below process
  • Create resource pool
  • Create a workload group
  • Create classification function based on the criteria specified
  • Enable Resource Governor with classification function

Step 1: Creating Resource Pool
We are creating two resource pools. 1) High_PriorityPool and 2) Low_PriorityPool





-----------------------------------------------
-- Step 1: Create Resource Pool
-----------------------------------------------
-- Creating Resource Pool for low priority users

CREATE RESOURCE POOL Low_PriorityPool
WITH
( MIN_CPU_PERCENT=0,
MAX_CPU_PERCENT=30,
MIN_MEMORY_PERCENT=0,
MAX_MEMORY_PERCENT=30)
GO

-- Creating Resource Pool for high priority users

CREATE RESOURCE POOL High_PriorityPool
WITH
( MIN_CPU_PERCENT=50,
MAX_CPU_PERCENT=100,
MIN_MEMORY_PERCENT=50,
MAX_MEMORY_PERCENT=100)
GO

 Note: We can change the above values as per our requirement.
Step 2: Creating Workload Group
We are creating two workloads each mapping to each of the resource pool which we have just created.
-----------------------------------------------
-- Step 2: Create Workload Group
-----------------------------------------------

-- Creating Workload Group for High priority pool

CREATE WORKLOAD GROUP HighPriorityGroup
WITH
(IMPORTANCE = HIGH, MAX_DOP = 8)
USING High_PriorityPool;
GO
-- Creating Workload Group for Low priority pool

CREATE WORKLOAD GROUP LowPriorityGroup
WITH
(IMPORTANCE = LOW, MAX_DOP = 0)
USING Low_PriorityPool;
GO







Step 3: Creating a user-defined function that routes the workload to the appropriate workload group.
In this example we are checking SUSER_NAME() and making the decision of Workgroup selection. We can use other functions such as HOST_NAME(), APP_NAME(), IS_MEMBER() etc.

-----------------------------------------------
-- Step 3: Create UDF to Route Workload Group
-----------------------------------------------
CREATE FUNCTION dbo.UDFClassifier()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @WorkloadGroup AS SYSNAME
    IF (SUSER_NAME() = 'HighPriorityUser')
        SET @WorkloadGroup = 'HighPriorityGroup'
    ELSE IF (SUSER_NAME() = 'LowPriorityUser')
        SET @WorkloadGroup = 'LowPriorityGroup'
    ELSE
        SET @WorkloadGroup = 'default'
    RETURN @WorkloadGroup
END
GO


Note: Users that are not mapped to any workload group will go to the default workload group.
Step 4: In this final step we enable the Resource Governor with the classifier function created in step 3.
-----------------------------------------------
-- Step 4: Enable Resource Governor
-- with UDFClassifier
-----------------------------------------------
ALTER RESOURCE GOVERNOR
WITH (CLASSIFIER_FUNCTION=dbo.UDFClassifier);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO
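
We can confirm that the Resource Governor is enabled and which classifier function is active using the standard catalog view (a short verification sketch):

```sql
-- is_enabled = 1 means the Resource Governor is active
SELECT is_enabled,
       OBJECT_NAME(classifier_function_id) AS ClassifierFunction
FROM sys.resource_governor_configuration;
```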





To verify the requests that are sent to the different pools, use the below script:

SELECT rpool.name AS PoolName,
       COALESCE(SUM(rgroup.total_request_count), 0) AS TotalRequest,
       COALESCE(SUM(rgroup.total_cpu_usage_ms), 0) AS TotalCPUinMS,
       CASE WHEN SUM(rgroup.total_request_count) > 0
            THEN SUM(rgroup.total_cpu_usage_ms) / SUM(rgroup.total_request_count)
            ELSE 0
       END AS AvgCPUinMS
FROM sys.dm_resource_governor_resource_pools AS rpool
LEFT OUTER JOIN sys.dm_resource_governor_workload_groups AS rgroup
    ON rpool.pool_id = rgroup.pool_id
GROUP BY rpool.name;
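
Similarly, to see which workload group each current session was classified into, we can join the sessions DMV to the workload groups DMV (a sketch using standard DMVs):

```sql
SELECT s.session_id, s.login_name, g.name AS WorkloadGroup
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_resource_governor_workload_groups AS g
    ON s.group_id = g.group_id;
```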


Below is an example of a classifier function from a real environment:


CREATE FUNCTION dbo.UDFClassifier()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @WorkloadGroup AS SYSNAME
    IF (SUSER_NAME() IN ('BOADMINUSR', 'Bobjrepusr', 'Dataocean_Redwood',
        'Infa_SFDC', 'Infasfdc', 'info_do', 'informatica', 'Infouser',
        'intg_otm', 'Intg_SIOP_INFO', 'Intg_Vannet_INFO', 'iwetlusr',
        'iwusr', 'lot_attr', 'realtimeuserbissas', 'Redwood_User',
        'sapecc_do_fin', 'SSAS_ANALYTICS', 'sa', 'ssas01_connect',
        'TaskEDWAuditTst', 'VWRI\Informatica'))
        SET @WorkloadGroup = 'HighPriorityGroup'
    ELSE IF (SUSER_NAME() LIKE 'VWRI\%')
        SET @WorkloadGroup = 'LowPriorityGroup'
    ELSE IF (SUSER_NAME() LIKE 'NT SERVICE\%')
        SET @WorkloadGroup = 'LowPriorityGroup'
    ELSE
        SET @WorkloadGroup = 'default'
    RETURN @WorkloadGroup
END
GO