SQL Best Practices
SQL Server has evolved a great deal since I first started working with SQL Server 6.5 in 1998. Unfortunately, I continually come across newer systems configured with old ‘best practices’: one tempdb file per core, one core per socket on VMware machines, nightly index rebuilds, and more.
Below are some current best practices that I feel should be followed.
Updating Statistics – I prefer to update statistics daily with a SQL Agent job. The query optimizer uses statistics to generate execution plans, so I like the system to have statistics that are as up to date as possible.
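A minimal sketch of what that nightly job step might run (the table name dbo.Orders is just a placeholder for illustration):

```sql
-- Refresh statistics in the current database. sp_updatestats only touches
-- statistics whose underlying tables have had row modifications, so it is
-- usually cheap enough to run daily from a SQL Agent job step.
EXEC sp_updatestats;

-- For a specific hot table, a full scan gives the optimizer the best histogram:
-- UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```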
Changing MAXDOP – The old-school value was half the number of processors. With SQL Server's soft-NUMA, things have changed. In my experience, determining the proper value means measuring the SQL Server workload's response with different values. There is no one-setting-fits-all answer or calculator that gets it right every time.
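For reference, MAXDOP is changed instance-wide with sp_configure; the value 8 below is only a placeholder, since as noted above the right number comes from measuring your own workload:

```sql
-- MAXDOP is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- 8 is a placeholder; test your workload with different values.
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```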
Cost Threshold for Parallelism – The default value is 5, which is way too low. What is the correct value for you? You’ll have to review your system; however, I always start with 50 and adjust up from there.
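Setting the starting value of 50 mentioned above looks like this:

```sql
-- Cost threshold is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Start at 50 and adjust upward based on observed plan behavior.
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;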
Index Maintenance – Some will debate the value of dealing with index fragmentation. I’ve found that it can still be beneficial. Fragmentation leaves empty space on data pages, and that empty space takes up room in the buffer pool. Rarely do I find a SQL Server with so much more RAM than it needs that I can afford to waste it on empty space.
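Before doing any maintenance, it helps to see which indexes are actually fragmented. A sketch using the physical-stats DMV; the 5% and 1,000-page thresholds are common starting points, not hard rules:

```sql
-- List fragmented indexes in the current database, worst first.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5   -- starting threshold, tune to taste
  AND ips.page_count > 1000                  -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
```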
Disk IO – It really doesn’t make sense to have business-critical database servers on anything other than solid state storage. With the cost of SQL Server licensing, why cheap out on the disk IO subsystem when solid state improves both throughput and IO latency?
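One way to see whether the current storage is a problem is to check per-file latency from the virtual file stats DMV (figures are cumulative since the last restart):

```sql
-- Average read/write latency in ms per database file since SQL Server started.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_read_ms DESC;
```

Sustained averages in the tens of milliseconds on data files are a sign the IO subsystem is holding the server back.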
Enable Optimize for Adhoc Workloads – This setting helps stop single-use plans from bloating the plan cache. I’ve only heard of one situation where enabling it was bad, and in that case it was a code issue on the client’s side.
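Enabling it takes effect immediately, no restart required:

```sql
-- Optimize for ad hoc workloads is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cache only a small plan stub until a query is seen a second time.
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```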
Setting a value for Max Server Memory – Lately I’ve found that setting a value that leaves at least 10% of memory available to the OS makes for a much more stable environment. Years ago, letting SQL Server use 95% or more was OK as long as we left a gig or so for the OS. With modern security tools running on the servers and updated guidance from Microsoft, leaving more memory to the OS has become the new normal.
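As a worked example of the 10% guideline: a 64 GB server has 65,536 MB, and 90% of that is roughly 58,982 MB (the exact number for your server will differ):

```sql
-- Example only: 64 GB server, capping SQL Server at ~90% of RAM
-- so roughly 6.5 GB stays available to the OS and other tools.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 58982;
RECONFIGURE;
```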