John’s post-install configuration recommendations
Let me start by saying this is the procedure I perform when configuring a new SQL Server instance, immediately after installation. “Best Practice” is a term a lot of people love to throw around, but I will not definitively claim that performing the following is a “Best Practice,” as your environment and situation could be wildly different from mine. I find, though, that these steps work best for me in most situations. That said, many of the steps outlined below do have a “Best Practice” associated with them from one source or another, but they could range anywhere from highly useful to absolutely worthless for your environment, so keep that in mind as you read on.
Properly configuring the instance is one of the most important things to do after installation. It’s much easier to handle these configurations ahead of time instead of scrambling to reconfigure the instance mid-day when you can’t afford an outage.
First let’s start with some sp_configure commands. Out-of-the-box, SQL Server isn’t configured all that well. We can’t even see a majority of the settings because the “show advanced options” option is disabled. Let’s rectify that now:
-- Enable Show Advanced Options
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
Much better; now we can start configuring things without running into the “configuration option ‘xxxx’ does not exist, or it may be an advanced option” error.
If you’re running SQL 2008 or SQL 2008 R2, I highly recommend you enable backup compression on the instance. Not only are backups quicker, they’re also smaller and I find people prefer fast over slow when it comes to database performance. There are, of course, some trade offs here but I find that CPU is rarely the performance bottleneck of a SQL Server, so unless I’m expecting heavy CPU loads on an instance, I enable this option:
-- Enable Backup Compression as Default
EXEC sp_configure 'backup compression default', 1
We should also configure the instance to properly utilize all the logical CPUs we have available. MS recommends that, in general, you dedicate one thread per processor up to a maximum of 8. There are some exceptions here, so I encourage you to look at KB 329204 if the server is utilizing Non-Uniform Memory Access (NUMA), Hyperthreading, anything other than OLTP, or it’s not a dedicated SQL Server machine. To properly configure the default max degree of parallelism for the instance, run the following:
-- Configure MAXDOP (depends on CPU configuration of server) - http://support.microsoft.com/kb/329204
EXEC sp_configure 'max degree of parallelism', N -- Where N is number of logical CPUs
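If you want to avoid hard-coding the value, a minimal sketch like the following can derive it at configuration time. This assumes the simple case from the KB guidance (a dedicated OLTP server, no NUMA considerations); the 8-thread cap and the use of `sys.dm_os_sys_info` to read the logical CPU count are my choices here, not something prescribed by the KB:

```sql
-- Sketch: cap MAXDOP at the logical CPU count, up to a maximum of 8,
-- per the general guidance in KB 329204 (dedicated OLTP server assumed)
DECLARE @maxdop INT

SELECT @maxdop = CASE WHEN cpu_count > 8 THEN 8 ELSE cpu_count END
FROM sys.dm_os_sys_info  -- cpu_count = number of logical CPUs

EXEC sp_configure 'max degree of parallelism', @maxdop
RECONFIGURE
```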
Now, let’s configure the instance to utilize the proper amount of RAM. If you are working in a 32-bit environment, you will have some additional things to configure not mentioned in this article. An assumption that I’m making is you are working on a 64-bit instance as they are now becoming the standard build for new environments. On a 64-bit platform, memory is no longer a main driver to upgrade to Enterprise Edition, as Standard Edition can support up to 64GB of RAM (which tends to be more than enough for most applications). Setting up the server to only utilize memory it needs is rather important on 64-bit platforms as keeping the defaults will most likely end with the SQL Server Service consuming all the memory it can get its greedy hands on. Using Glenn Barry’s article on this setting as a reference, configure the instance to utilize up to a proper maximum amount of memory, as follows:
-- Configure Server Memory Thresholds (depends on RAM)
EXEC sp_configure 'min server memory (MB)', I -- I is generally 0, but should be anything less than N below
EXEC sp_configure 'max server memory (MB)', N -- N is dependent on system RAM available
I also elect to set the min server memory option. If you want to keep the server default (0), feel free to ignore this option outright; however, there may be reasons for setting it higher (most likely equivalent to N). This might be required on a server hosting more than SQL Server, or a server that has services constantly starting and stopping. Additionally, you will want to ensure that the service account running the SQL Server Service is included in the Lock Pages in Memory security policy. Jonathan Kehayias explains how including the service account in this security policy enables 64-bit instances to utilize the AWE mechanism when allocating memory, which helps to minimize paging.
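As a concrete (and purely illustrative) example: for a hypothetical dedicated server with 64 GB of RAM, a commonly cited starting point in Glenn Berry's guidance is to leave several GB for the OS and other processes. The exact numbers below are my assumptions for this example, not a universal recommendation:

```sql
-- Hypothetical example: dedicated 64-bit server with 64 GB of RAM,
-- leaving roughly 8 GB for the OS and other processes
EXEC sp_configure 'min server memory (MB)', 0
EXEC sp_configure 'max server memory (MB)', 57344  -- 56 GB
GO
RECONFIGURE
GO
```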
Now let’s concentrate on being proactive about being reactive. Many of you are (hopefully) aware of what a dedicated administrator connection (DAC) is, but you may not realize that it’s disabled for remote connections by default. It can be cumbersome (or in cases impossible) to RDP into the server in order to establish a DAC during a time of crisis, so enabling this is something I believe should be considered a necessity (frankly it should be enabled out of the box). Let’s enable it now:
-- Allow for a remote DAC to be established
EXEC sp_configure 'remote admin connections', 1
GO
-- Apply all sp_configure changes
RECONFIGURE
GO
That should cover our basic sp_configure-ations, but let’s be sure to loop back to the default file paths for the instance. If you took full advantage of the default directories screen (probably my favorite) during the installer, feel free to skip to the next section.
Setting the default file paths is critical for two main reasons. Most importantly, segregating your user databases and logs from your system databases and logs can equate to improved I/O if these locations are on different LUNS (this is especially important with TempDB). It will also help to standardize your instances across the enterprise which is very handy when deploying scripts across multiple instances.
If you were in a rush during the instance installation and bypassed that screen, first kick yourself, then open up SQL Server Management Studio and open the instance properties. Click the Database Settings page and adjust the default Data and Log locations.
So where are the Backup and TempDB paths? Sadly, nowhere to be found in the GUI. You’re going to have to run some TSQL statements and/or modify the registry. Luckily if we script out the changes we want to make to the default Data and Log file paths using the GUI, we can duplicate one of them and modify it for the Backup directory as well (alternatively you can use regedit to edit all three in the registry). We will need to issue an ALTER DATABASE statement against TempDB in order to move it to another file location though, so there’s no GUI workaround here. The following is an example of what I run if default paths are not set up; DO NOT COPY/PASTE AND RUN IT WITHOUT MODIFYING IT FOR YOUR ENVIRONMENT:
-- DO NOT COPY AND PASTE THE FOLLOWING. This is an EXAMPLE, so use your own environment to generate these scripts.
-- It's safest to generate the data/log path changes using the GUI and creating a duplicate entry for the backup path based on what's below.
USE [master]
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData', REG_SZ, N'D:\UserData'
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog', REG_SZ, N'E:\UserLog'
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'BackupDirectory', REG_SZ, N'I:\Backup'
GO
-- Ensure these file paths exist before executing the following
ALTER DATABASE [tempdb] MODIFY FILE (NAME = tempdev, FILENAME = N'G:\TempDB\Data\tempdb.mdf')
GO
ALTER DATABASE [tempdb] MODIFY FILE (NAME = templog, FILENAME = N'H:\TempDB\Log\templog.ldf')
GO
After you execute the scripts above, a restart of the instance is required for the changes to take effect (you can do that now or wait until the end of all the configurations). Additionally, you will need to remove tempdb.mdf and templog.ldf from the old (default) location, as they will not be removed automatically (if you do wait to restart, be sure to make a note to do this after things come back up).
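After the restart, you can confirm the changes took hold. A quick verification sketch; `xp_instance_regread` reads the same registry values written above, and `sys.master_files` shows where TempDB now lives:

```sql
-- Verify the default path changes by reading the registry values back
DECLARE @data NVARCHAR(512), @log NVARCHAR(512), @backup NVARCHAR(512)
EXEC master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData', @data OUTPUT
EXEC master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog', @log OUTPUT
EXEC master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'BackupDirectory', @backup OUTPUT
SELECT @data AS DefaultData, @log AS DefaultLog, @backup AS BackupDirectory

-- Verify TempDB's new file locations
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('tempdb')
```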
Model DB Changes
Configuring the Model database immediately after installation is another of the many things that will make your life easier throughout the lifetime of an instance. Since the Model database is used as a template for all new databases created on the instance (not restored to, but created on via CREATE DATABASE command), it is handy to set some default settings that will be inherited with each new database.
A common change is if you don’t want all of your new user databases requiring transaction log backups (e.g. this instance is a non-critical/non-production environment); setting the recovery model of the Model database to SIMPLE is the quickest way to achieve this. Just adjust the recovery model of the Model database to SIMPLE, like follows:
-- Adjust Recovery Model to Simple USE [master] GO ALTER DATABASE [model] SET RECOVERY SIMPLE WITH NO_WAIT GO
Autogrowth is a feature much contested in the DBA community. I personally find it useful, but I don’t like the defaults of 1MB and 10% growth (for the data and log files, respectively). Using a small static amount such as 1MB will lead to fragmentation within a file (not to mention a ridiculous number of VLFs in the tlog). Using a percentage for growth is also troublesome: in the beginning of a database’s lifetime, these events will be numerous and small, causing fragmentation, and when the database gets larger (e.g. hundreds of GB), a single autogrowth event can easily consume a vast quantity of disk space which you may not be ready for. Because of the various issues related to percentage-based autogrowth settings, I find it safest to set this to a static amount (I find 512MB works well in most cases), as follows:
-- Adjust Autogrowth Settings to grow Log and Data files by .5 GB at a time
USE [master]
GO
ALTER DATABASE [model] MODIFY FILE ( NAME = N'modeldev', FILEGROWTH = 512MB )
ALTER DATABASE [model] MODIFY FILE ( NAME = N'modellog', FILEGROWTH = 512MB )
GO
Others may argue that disabling autogrowth outright is the best practice, but I will officially disagree with that stance. MSSQL is known for its ability to stage databases quickly which in turn tends to lead to having quite a few databases to maintain (be they on a single instance or across the enterprise). Keeping track of free space in hundreds to thousands of databases just isn’t feasible so I recommend keeping autogrowth settings enabled and picking a static amount for reasons already stated.
Making modifications to the model database presents the perfect opportunity to create a custom database role that really should be included in MSSQL out of the box. The missing role that I’m talking about is one that grants EXECUTE rights to all stored procedures in the database. I find that users requiring read/write/execute permissions are common enough that this role should be included with the built-in roles, but MS disagrees with me on this point. Thankfully it’s a rather simple problem to solve, just by running the following statement:
USE [model]
GO
CREATE ROLE [db_execute]
GRANT EXECUTE TO [db_execute]
GO
Feel free to call this custom role whatever you want. I’ve seen it called db_executer to maintain the -er suffix structure of the built-in roles, though the proper way to spell executor is with an “o”. Because I think it’s silly either way, I just call it db_execute as is above.
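Once the role exists in Model, every new database created on the instance will have it, and granting a user read/write/execute becomes a three-liner. A sketch of typical usage; the database and login names below are hypothetical placeholders:

```sql
-- Hypothetical example: grant a user read/write/execute in a newly created database
USE [SomeNewDatabase]  -- hypothetical database name
GO
EXEC sp_addrolemember N'db_datareader', N'DOMAIN\SomeUser'
EXEC sp_addrolemember N'db_datawriter', N'DOMAIN\SomeUser'
EXEC sp_addrolemember N'db_execute',   N'DOMAIN\SomeUser'
GO
```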
The last modifications I tend to make to Model are setting database-level configurations that I want to propagate to any new database created on the instance. Some of these will be set as follows in future versions of MSSQL and others are just an attempt to waylay possible issues when dealing with indexed views and computed columns. Here are the database-level configurations I recommend making:
USE [master]
GO
-- MS Recommendations (will be set ON automatically in future MSSQL versions)
ALTER DATABASE [model] SET ANSI_NULLS ON --http://msdn.microsoft.com/en-us/library/ms188048.aspx
ALTER DATABASE [model] SET ANSI_PADDING ON --http://msdn.microsoft.com/en-us/library/ms187403.aspx
ALTER DATABASE [model] SET CONCAT_NULL_YIELDS_NULL ON --http://msdn.microsoft.com/en-us/library/ms176056.aspx
-- Required for creating/modifying indexes on computed columns or indexed views
ALTER DATABASE [model] SET ANSI_WARNINGS ON --http://msdn.microsoft.com/en-us/library/ms190368.aspx
ALTER DATABASE [model] SET ARITHABORT ON --http://msdn.microsoft.com/en-us/library/ms190306.aspx
ALTER DATABASE [model] SET QUOTED_IDENTIFIER ON --http://msdn.microsoft.com/en-us/library/ms174393.aspx
ALTER DATABASE [model] SET NUMERIC_ROUNDABORT OFF --http://msdn.microsoft.com/en-us/library/ms188791.aspx
GO
One change we’re not allowed to make is in regards to filegroups in the Model database. I would love to be able to reconfigure the default filegroup configuration so the PRIMARY filegroup is no longer the default. I would also like to have a dedicated INDEX filegroup in each database as well. Sadly when attempting to adjust filegroup settings in the Model database, you get the following error:
Msg 1826, Level 16, State 6, Line n User-defined filegroups are not allowed on "model".
If you are of the same opinion, I encourage you to vote up my Microsoft Connect item to add this functionality to a future release of MSSQL.
TempDB is recreated upon restart of the SQL Server service and like any new user database is created based off of the Model database. However, any explicit configurations set against TempDB override settings made in model. Depending on the size of the instance or expected load it will be under, you can increase the number of data files TempDB uses. I would encourage you to read Paul Randal’s blog entry on this topic in case you are considering adding additional data files. For most instances though, this is not necessary so I recommend you make this determination on an instance-by-instance basis.
Database mail settings
You can configure Database Mail either through the GUI or using scripts. Since I like to standardize my instances throughout the company, I find scripts are the quicker option for most of the work. Below is a series of commands you should modify with your own settings; they will execute as-is, but they most definitely won’t work until you do:
-- Create Database Mail Account
EXEC msdb.dbo.sysmail_add_account_sp
    @account_name = 'DBMail Account',
    @description = 'Mail account for use by all database users.',
    @email_address = 'email@example.com',
    @replyto_address = 'firstname.lastname@example.org',
    @display_name = 'DBMail Account',
    @mailserver_name = 'smtpserver.domain.com'
-- Setup Mail Profile
EXEC msdb.dbo.sysmail_add_profile_sp
    @profile_name = 'DBMail Profile',
    @description = 'Default Database Mail Profile'
-- Add the Standard Account to the Profile
EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = 'DBMail Profile',
    @account_name = 'DBMail Account',
    @sequence_number = 1
-- Make the Profile Public
EXEC msdb.dbo.sysmail_add_principalprofile_sp
    @profile_name = 'DBMail Profile',
    @principal_name = 'public',
    @is_default = 0
-- Create Operator in SQL Server Agent
EXEC msdb.dbo.sp_add_operator
    @name = 'DBAdmins',
    @enabled = 1,
    @pager_days = 0,
    @email_address = 'DL_DBAdmin@domain.com'
GO
Make sure that the server the instance is being installed on has relay access to your company’s SMTP server, otherwise you won’t see any alerts. Additionally, I recommend you have your Exchange admins set up a “blackhole” email address that you use for the @replyto_address parameter of the DB Mail account. This will save your inbox from the inevitable mailbox-not-found notifications you will most likely receive from Exchange when someone uses the Reply All button.
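Once the profile is in place, it's worth sending a quick test message before relying on it for alerts. A sketch using `sp_send_dbmail`; the recipient address is a placeholder for your own:

```sql
-- Send a test message through the new profile (recipient is a placeholder)
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBMail Profile',
    @recipients = 'you@domain.com',
    @subject = 'Database Mail test',
    @body = 'If you can read this, Database Mail relay is working.'
GO
-- Check whether the message actually left the queue
SELECT sent_status, subject, sent_date
FROM msdb.dbo.sysmail_allitems
ORDER BY mailitem_id DESC
```

If the message sits in an `unsent` or `failed` status, check `msdb.dbo.sysmail_event_log` for the underlying SMTP error.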
The final step to ensure Database Mail is completely set up is to enable the mail profile for the SQL Server Agent. Doing this requires a change to the registry, and because registry paths are not consistent across instances, it’s easier/safer to enable this using the GUI. In SSMS, right-click on the SQL Server Agent, select Properties, and select the Alert System page in the left pane. Then check the box next to Enable mail profile and click OK. You will need to restart the SQL Server Agent for this change to take effect (again, you can restart this service now or wait until the end).
Setting up maintenance plans at the inception of the instance ensures you have a minimal level of proactive performance tuning happening for all of your databases. It’s much easier to loop back and modify maintenance routines instead of creating them out of panic with no knowledge of how long they will execute, if they will be effective, or if they will cause unforeseen performance issues. Establishing nightly/weekly maintenance routines initially ensures you only need to make small changes (in most cases) to help ensure optimal performance during the lifetime of the instance.
Like many, many other DBAs, I use Ola Hallengren’s Maintenance Solution. It’s very easy to implement and is a comprehensive maintenance plan from start to finish. I completely bypass the built-in Maintenance Plans because they are not nearly as thorough. You will need to pick schedules that fit your environment and situation best, but I recommend you initially run all jobs nightly, as it’s much easier to reduce the frequency of maintenance routines than to increase it as time goes on.
There are several trace flags that I like to enable at all times on every instance in my environment, specifically related to deadlock logging (1204 & 1222) and suppressing successful backup notifications (3226). The deadlock flags are obviously handy because when someone comes to you asking about deadlocking, you can just check the SQL Server Log instead of replying “Uh, I wasn’t capturing that activity.” Suppressing successful backups helps to reduce clutter in your SQL Server Logs but isn’t nearly as necessary to enable.
There are two ways you can ensure these flags are always enabled. The preferred method is to add the trace flags to the startup parameters for the instance. To do this, open SQL Server Configuration Manager, select the SQL Server Services page, right-click on the SQL Server service, click Properties, and click the Advanced tab. Append each trace flag to the end of the Startup Parameters line in the form -Txxxx, separated by semicolons (NOTE: make sure you include the leading semicolon; if you don’t, the parameters will be malformed and the service may fail to start).
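As a concrete example (reconstructed from the trace flags discussed above), the string appended to the Startup Parameters line would look like this:

```
;-T1204;-T1222;-T3226
```

Note the leading semicolon, which separates the trace flags from the existing parameters already on the line.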
Once modified, you will need to restart the instance in order for the new startup parameters to take effect (again, feel free to wait until the end if you don’t want to constantly restart services).
The alternative method, which causes a little more clutter, is to create a job that runs when the SQL Server Agent starts up, executing the following code:
-- Enable Trace flags 1204, 1222, and 3226
DBCC TRACEON (1204, 1222, 3226, -1)
GO
Note: A caveat on trace flag 1204 above: I was informed by several MVPs that, in addition to providing a limited set of deadlock information relative to trace flag 1222, it can also muddle the output in the error logs, making the trace flag 1222 output more difficult to read. You may therefore want to leave 1204 out. I figured it best to make this note rather than quietly remove it, as this could also be a misconception for others stumbling across this post.
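Either way you enable them, it's worth confirming the flags are actually active after the restart (or after the Agent startup job fires). A quick check:

```sql
-- List all trace flags currently enabled globally;
-- 1222 and 3226 (and 1204, if you kept it) should appear with Global = 1
DBCC TRACESTATUS(-1)
GO
```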
Cycling the SQL Error Logs
Error logs will become one of the first places you check when an issue finally hits this instance. If you let the log grow too large, it can take quite some time to render in the native SSMS Log Reader. Cycling the log on a scheduled basis helps reduce load times, with the added benefit of keeping disk consumption in the log directory to a minimum. Picking a cycle frequency will depend on your situation, but I prefer to cycle the error logs weekly on Thursday evenings, as weekends and the beginning of the week tend to be when problems most frequently happen for me. Some may elect to perform daily cycles, but I find that in most cases, if the log is cycled daily, you either don’t have enough history to troll through when an issue pops up or you’re constantly opening other logs to get more history. If you do elect to cycle the logs daily, I advise you to increase the number of logs you retain on the server. Here is a script to create a job that runs every Thursday evening at 8:00pm:
USE [msdb]
GO
EXEC msdb.dbo.sp_add_job @job_name=N'Cycle SQL Server Error Log',
    @enabled=1, @notify_level_eventlog=0, @notify_level_email=2,
    @notify_level_netsend=2, @notify_level_page=2, @delete_level=0,
    @category_name=N'[Uncategorized (Local)]', @owner_login_name=N'sa'
GO
EXEC msdb.dbo.sp_add_jobserver @job_name=N'Cycle SQL Server Error Log', @server_name = @@SERVERNAME
GO
EXEC msdb.dbo.sp_add_jobstep @job_name=N'Cycle SQL Server Error Log', @step_name=N'Cycle Log',
    @step_id=1, @cmdexec_success_code=0, @on_success_action=1, @on_fail_action=2,
    @retry_attempts=0, @retry_interval=0, @os_run_priority=0, @subsystem=N'TSQL',
    @command=N'EXEC sp_cycle_errorlog', @database_name=N'master', @flags=0
EXEC msdb.dbo.sp_update_job @job_name=N'Cycle SQL Server Error Log',
    @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @notify_level_email=2,
    @notify_level_netsend=2, @notify_level_page=2, @delete_level=0, @description=N'',
    @category_name=N'[Uncategorized (Local)]', @owner_login_name=N'sa',
    @notify_email_operator_name=N'', @notify_netsend_operator_name=N'', @notify_page_operator_name=N''
EXEC msdb.dbo.sp_add_jobschedule @job_name=N'Cycle SQL Server Error Log', @name=N'Thursday Evenings',
    @enabled=1, @freq_type=8, @freq_interval=16, @freq_subday_type=1, @freq_subday_interval=0,
    @freq_relative_interval=0, @freq_recurrence_factor=1,
    @active_start_date=19900101, @active_end_date=99991231,
    @active_start_time=200000, @active_end_time=235959
GO
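If you do want to increase the number of retained logs (the default is 6), there is no sp_configure option for it; it lives in the registry, though it can also be set through the SSMS Log Reader GUI. A sketch using the same `xp_instance_regwrite` technique as the default-path changes earlier; the value of 26 is my example (roughly six months of weekly cycles), not a prescribed number:

```sql
-- Retain 26 error logs instead of the default 6
-- (same as right-clicking SQL Server Logs > Configure in SSMS)
USE [master]
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'NumErrorLogs', REG_DWORD, 26
GO
```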
Thankfully, Microsoft has made an effort to decrease the reconfiguration necessary to lock down a SQL Server 2008 or 2008 R2 instance immediately after installation. Unlike SQL Server 2005, no unnecessary built-in groups (i.e. BUILTIN\Administrators) are included in the sysadmin role by default. Only NT AUTHORITY\SYSTEM (see the bottom of this KB for an explanation of this account), sa, and the group(s)/user(s) explicitly specified during the installation have sysadmin permissions, which helps to reduce security concerns. The only other out-of-the-box security accounts are Certificate-based SQL Server Logins (accounts surrounded by ##), which are for internal use only and should not be deleted. Aside from ensuring that your service accounts only have the minimal permissions required, no additional reconfiguration is needed in regards to standard security setup.
Service Principal Names (SPNs)
SPNs are an often-forgotten configuration step after installing an instance. SQL Server 2008 will attempt to register the server’s SPN in AD, but for a laundry list of reasons, it may not be successful. Instead of attempting to troubleshoot why it doesn’t work automatically, let’s just manually check whether the proper SPN exists, and if not, create it (or request it be created by the appropriate party). The first step I recommend is to save the SPNQuery.vbs file to your desktop (or some other easily accessible location). Once this is done, open a command prompt, navigate to the location of the SPNQuery.vbs file, and type the following command:
cscript SPNQuery.vbs MSSQLSvc* > MSSQLSPNs.txt
This will generate an output file (MSSQLSPNs.txt) that you can easily search, looking for your instance name. If you do not find your instance, you can count on only being able to authenticate via NTLM. If you wish to enable Kerberos authentication, you will need to register this instance’s SPN with the SetSPN.exe utility. I recommend you do this now instead of tracing some anonymous authentication bug back to the fact that this instance doesn’t have an SPN.
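If the SPN is missing, registering it looks roughly like the following. This is a sketch with hypothetical names: the host, port, and service account are placeholders for your own, and you (or a domain admin with the appropriate rights) would run it against AD. The convention is one SPN with the port and, for a default instance, one without:

```
REM Hypothetical names: replace the host, port, and service account with your own
setspn -A MSSQLSvc/sqlserver01.domain.com:1433 DOMAIN\sqlservice
setspn -A MSSQLSvc/sqlserver01.domain.com DOMAIN\sqlservice
```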
Finishing Up and Final Thoughts
When all is said and done, I do one final restart of the services (or the initial restart if you’ve been waiting throughout the entire process) to ensure all the updates made to the registry have taken effect and also to ensure the instance comes back up as expected (also if this was the first restart, don’t forget about the tempdb mdf and ldf file cleanup mentioned above). Once the services are back online, I like to ensure my default paths are updated appropriately and my trace flags are enabled as expected.
If this instance is for production databases, I also set up a Perfmon performance baseline, just in case I need an objective reference when the inevitable “the database is extra slow today” comment hits my inbox.
The final task is to ensure your monitoring solution is set up for this instance (e.g. when an unexpected reboot happens, etc. the proper pages are sent out to the proper teams). You want to be the person contacting the client about an outage or performance issue. Being contacted by the client is not how you should find out about an issue. Generally, setting this up doesn’t fall on you, the DBA (unless you’re in a smaller shop), but it is your responsibility to verify it is working as expected.
Now that everything is configured, you’ve put yourself in a much better position to handle issues going forward. Feel free to start staging databases and pat yourself on the back as your life will be much easier going forward when issues do arise.
Let me know if you have any configuration steps you do that I may have missed. It’s always nice to hear about a feature or setting I can add to my guide.