Changing Log file retention using PowerShell

I love SQL Server. No matter how long you play with it, there is always a little setting here and there that I did not know about, or had forgotten over the years. This morning Mr Denny clued me into one of these on his blog (via Twitter): the number of SQL Server error logs to retain. He showed us how to set this in SSMS and via a T-SQL change to the registry.

My first thought as I read this was “how can I do this in PowerShell?”.

Sometimes it is easy, sometimes it is harder than anticipated. In this case it was way too easy; it took me all of two minutes looking at the SMO Server object to find a wonderful little property, NumberOfLogFiles. That led me to write this little script, which I am going to add to my standard post-deployment configuration script:

#get an SMO Server object for the target instance (gss is a helper function, explained below)
$server = gss -computername SERVER1
#set the number of error logs to keep, then push the change to the server
$server.NumberOfLogFiles = 20
$server.Alter()
$server.Refresh()

gss is an alias for my own profile-loaded function that simply returns an SMO Server object referencing the instance I am looking at. Of course, as Mr Denny states, the number you want to configure will vary from install to install, so make sure you use the appropriate value.
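For anyone without a similar helper, a minimal sketch of what such a function might look like is below (the function body is my assumption here, not the exact code from my profile):

function gss
{
    param([string]$computername)
    #load SMO and return a Server object for the named instance
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
    New-Object Microsoft.SqlServer.Management.Smo.Server $computername
}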

Thanks for reading.


Checking Alerts on remote server using PowerShell and SMO

It is common practice to configure a set of alerts across all of your servers. These alerts will cover standard events (severity 017–025) and SQL Server conditions (log space used, etc.). You can manage these alerts through SSMS and T-SQL scripting, but PowerShell has made the job a lot easier with some very handy SMO classes (Server, Database, Agent).

For example, if you wanted to check what alerts are configured on all of your servers you can iterate through a table of your instances, or a text file with all of your servers listed, and pull back a list of all of the alerts configured on each instance. You can even expand this to find out how many times each alert has been fired, and when.

The script below shows how to pull this information from one server, and it can easily be adapted to cover all of your servers:

cls
#load the SMO assembly
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null

$server = "SERVER1"
$Instance = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $server

#the Agent alerts hang off the JobServer object
$Alerts = $Instance.JobServer.Alerts

foreach ($Alert in $Alerts)
{
    Write-Host "Name: " $Alert.Name
    Write-Host "Occurrences: " $Alert.OccurrenceCount
    Write-Host "Type: " $Alert.AlertType
    Write-Host "Count Reset Date: " $Alert.CountResetDate
    Write-Host "Last Occurrence: " $Alert.LastOccurrenceDate
    Write-Host
}
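To run the same check across your whole estate, you could wrap this in a loop that reads a plain text file of instance names; a rough sketch (c:\temp\servers.txt is just an assumed path here):

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null

#one server or instance name per line, blank lines ignored
$server_list = Get-Content 'c:\temp\servers.txt' | Where-Object {$_ -ne ''}

foreach ($server in $server_list)
{
    $Instance = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $server
    Write-Host "Server: " $server
    foreach ($Alert in $Instance.JobServer.Alerts)
    {
        Write-Host "  " $Alert.Name " - fired " $Alert.OccurrenceCount " time(s), last on " $Alert.LastOccurrenceDate
    }
    Write-Host
}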

Thanks for reading.


[sqagtres] CheckServiceAlive: returning TRUE (success) SQL 2008 R2 Cluster

A quick blog post for this one; it took me a little while to track down the exact problem and fix, so hopefully this will speed up someone else's fix time.

I recently installed a new SQL Server 2008 R2 cluster on Windows Server 2008. A week or so after the install, and before it went live, I noticed that the Application log was being spammed every 15 seconds with these messages:

  • [sqagtres] CheckServiceAlive: returning TRUE (success)
  • [sqagtres] LooksAlive request.

This is a known bug in SQL Server 2008 R2 and SQL Server 2012 that has been addressed by Microsoft in the (currently) latest CUs. A detailed description of the problem can be found here.


Getting Disk Sizes using WMI and PowerShell

I love the third-party products that are on the market for SQL Server. Idera, Redgate and Quest (the big three) have some amazing options for helping you manage your environment, but they can come at a pretty steep cost, and sometimes that budget just isn't available to you. Without access to any of these tools, it can be a real challenge to stay on top of things, especially when you are dealing with thousands of databases across multiple instances.

PowerShell to the rescue. Here I go again… I have said it before and I will keep saying it: PowerShell is a very powerful tool and, once you get to know it, a pretty easy one as well.

One of the areas I have always had trouble with is retrieving disk information. Before I knew PowerShell, I spent an inordinate amount of time writing overly complex T-SQL scripts to try to pull the data, but there were always barricades in the way (OLE automation not enabled, xp_cmdshell disabled, etc.). Even VBScript seemed to be a pain. Then I started playing with PowerShell and WMI and realized I had been making my life a lot harder than it needed to be. With a simple one-line script I could pull all the information I required:

Get-WmiObject Win32_LogicalDisk -Namespace "root\CIMV2" -Filter "DriveType=3" -ComputerName $server

I am not going to get into the details of WMI here (this is an excellent starting point, if you are interested), but I never realized quite how powerful it was.

The script below takes a list of servers from a simple text file and iterates through each one, using WMI to pull all of the available disks, their size and their free space. There is a whole host of other information available in the WMI classes, but for the sake of this script, let's keep it simple.

cls
#get the list of servers
$server_list = get-content 'c:\temp\servers.txt' | where-object{$_ -ne ''}
#loop through all of the servers individually
for($i=0; $i -lt $server_list.length; $i++)
{
    $server = $server_list[$i]
    $server
    #DriveType=3 restricts the results to local (fixed) disks
    $diskCol = Get-WmiObject Win32_LogicalDisk -Namespace "root\CIMV2" -Filter "DriveType=3" -ComputerName $server
    foreach($disk in $diskCol)
    {
        #reset values, make sure nothing is carried across
        $disk_name = ''
        $disk_size = 0
        $free_space = 0

        #now get the real values and then show them in the console
        $disk_name = $disk.Name
        $disk_size = [Math]::Round(($disk.Size / 1gb), 2 )
        $free_space = [Math]::Round(($disk.FreeSpace / 1gb), 2 )

        Write-Host "Drive : " $disk_name
        Write-Host "Size : " $disk_size
        Write-Host "free space : " $free_space
        Write-Host
    }
    write-host
}

And there we have it. I am simply outputting the returned data to the console here, but you can do whatever you need to with the values: by adding a couple of SMO statements you could feed the data back into a monitoring database, or you could output to an HTML report. Below is an example of the output I got for two of my servers:

[Screenshot: console output showing the drive, size and free space for each disk on two servers]
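If you would rather have a report than console output, the same WMI data can be collected into objects and piped through ConvertTo-Html; a minimal sketch (the file paths are just examples):

$report = foreach ($server in (Get-Content 'c:\temp\servers.txt' | Where-Object {$_ -ne ''}))
{
    Get-WmiObject Win32_LogicalDisk -Namespace "root\CIMV2" -Filter "DriveType=3" -ComputerName $server |
        Select-Object @{Name='Server';Expression={$server}},
                      Name,
                      @{Name='SizeGB';Expression={[Math]::Round($_.Size / 1gb, 2)}},
                      @{Name='FreeGB';Expression={[Math]::Round($_.FreeSpace / 1gb, 2)}}
}
#turn the collected objects into a simple HTML table
$report | ConvertTo-Html -Title "Disk Space" | Out-File 'c:\temp\disk_report.html'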

I hope this helps someone out in some small way, and thanks for reading.


Scripting Log re-sizing

I know, shrinking log files is bad, but sometimes it is also necessary. Sometimes things go wrong: queries go bad, index rebuilds run forever, and any of these events can cause a log file to grow beyond its usual size. When this happens, the best practice is to shrink the file down to its smallest possible size and then re-grow it in chunks to the desired size. The size of those chunks, and the size you grow your log to, are completely down to the nature of the database and its business requirements. The following script doesn’t address those sizes; it simply lets you get away from the tedious process of growing your log one chunk at a time using scripts or the SSMS interface. Simply set your database, the maximum size and the chunk size, and you can run and go.

Disclaimer – all growth operations (both data and log) should be coordinated to avoid potential conflicts with user activity.
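For completeness, the initial shrink is a one-liner in itself; a minimal example against AdventureWorks (assuming the default logical log file name) might look like this:

USE AdventureWorks;
GO
--shrink the log as far as it will go before re-growing it in controlled chunks
DBCC SHRINKFILE (N'AdventureWorks_log', 0);
GO

Once the log is back down to its minimum size, the script below takes care of the controlled re-growth.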

USE master
GO

--our variables
DECLARE @maxsize_MB INT;
DECLARE @currentsize_MB INT;
DECLARE @growth_size INT;
DECLARE @first_growth INT;
DECLARE @log_name VARCHAR(60);
DECLARE @command VARCHAR(255);
DECLARE @db_name VARCHAR(60);

--the database you are growing, max size of the log and the block size you want to grow it in
SET @db_name = 'AdventureWorks'
SET @maxsize_MB = 2048;
SET @growth_size = 512;

--get the details for our database
SELECT
@log_name = name,
@currentsize_MB = (size * 8) / 1024 -- size is recorded in # of 8KB pages
FROM
sys.master_files
WHERE
database_id = DB_ID(@db_name)
AND
type = 1 --log

--set the initial size up to our growth size
SET @first_growth = @growth_size - @currentsize_MB

--grow the log file until it is at a size we like
WHILE @currentsize_MB < @maxsize_MB
BEGIN

    SET @currentsize_MB = CASE
                              WHEN @first_growth = 0 THEN @currentsize_MB + @growth_size
                              ELSE @currentsize_MB + @first_growth
                          END

    SET @command = 'ALTER DATABASE [' + @db_name + '] MODIFY FILE ( NAME = N''' + @log_name + ''', SIZE = ' + CAST(@currentsize_MB AS VARCHAR(25)) + ' MB )';

    PRINT @currentsize_MB
    PRINT @command
    EXECUTE(@command)

    --ignore this after the first growth
    SET @first_growth = 0

END

I usually comment out the execute statement to check the growth pattern before running live. Please feel free to add any comments on possible improvements or concerns.

Thanks for reading.


SQL Server 2012 automated Install ini Feature list

Any DBA worth their salt will have automated the deployment of SQL Server using a configuration ini file and a standard installation script. Through the ini you can control exactly what you want installed, where you want it installed and how you want the features to run.

With SQL Server 2012, Microsoft introduced some new features that can be included in your standard network installation, so I figured it was time to update the list of features and the abbreviated names that need to go into the ini.

In your ini, you will have a line similar to this:

FEATURES=SQLENGINE,REPLICATION,FULLTEXT,DQ,AS,RS,RS_SHP,RS_SHPWFE,DQC,BIDS,CONN,IS,BC,SDK,BOL,SSMS,ADV_SSMS,DREPLAY_CTLR,DREPLAY_CLT,SNAC_SDK,MDS

For reference, here are the full names:

Ini name        Feature
SQLENGINE       SQL Server Engine
REPLICATION     Replication
FULLTEXT        Full Text Search
DQ              Data Quality Services
AS              Analysis Services
RS              Reporting Services – Native
RS_SHP          Reporting Services – SharePoint
RS_SHPWFE       Reporting Services Add-in for SharePoint Products
DQC             Data Quality Client
BIDS            SQL Server Data Tools
CONN            Client Tools Connectivity
IS              Integration Services
BC              Client Tools Backward Compatibility
SDK             Client Tools SDK
BOL             Documentation Components
SSMS            Management Tools – Basic
ADV_SSMS        Management Tools – Complete
DREPLAY_CTLR    Distributed Replay Controller
DREPLAY_CLT     Distributed Replay Client
SNAC_SDK        SQL Client Connectivity SDK
MDS             Master Data Services
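To put that FEATURES line in context, here is a stripped-down, purely illustrative fragment of a configuration ini; every value is a placeholder and should be replaced with your own standards:

;SQL Server 2012 unattended install - illustrative fragment only
[OPTIONS]
ACTION="Install"
QUIET="True"
INSTANCENAME="MSSQLSERVER"
FEATURES=SQLENGINE,SSMS,CONN

The install is then kicked off from the media root with something like setup.exe /ConfigurationFile=MyConfig.ini /IACCEPTSQLSERVERLICENSETERMS (accepting the license terms is required for any quiet install).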

I hope this list proves useful in laying out your automated 2012 installs.

Thanks for reading.


How many, is too many?

When I started in my current role, I was told there would be about 25 instances and around 100 databases. “That seems manageable,” thought I. I should have known better, I guess.

One year later, I am managing 88 instances and over 1200 databases. Sometimes it seems a little overwhelming, and I know there are certain aspects of my job that are being neglected. As much as I would love to, I simply don’t have the time to dig into every single database proactively and see what is what; as the lone SQL DBA in a company of 18,000, I have had to be realistic.

I say this almost every time I post in a forum or on my blog, and I cannot overstate it: POWERSHELL IS AWESOME. It has helped me keep a handle on all of the instances. I can pull data from all of my servers into a central repository and use SSRS to send me daily reports on what is going on and what needs my attention. I can even be somewhat proactive with databases by keeping an eye on growth patterns and free space without ever having to connect to the remote instance. Alerts are configured on every server, and I can rest (somewhat) easy, knowing that I have the majority of the servers under control…

Until… Recently we had a domain and network consolidation, plus the installation of LANDesk on the network, so now we have everything in a neat little area and servers are beginning to be centralized in one data center. I thought I would be a little adventurous and check on all of the SQL Server engines running on our network. To my horror, a quick query of LANDesk informed me that we had 625 unique instances of SQL Server running on servers in our domain! 625! (And that is just servers; I almost fainted when I saw the original count, workstations included!)

I am sure some of these are just dead installs with no actual activity, but still… 625! I am steeling myself and trying to build up the courage to write some PowerShell scripts to work through the list and find out which servers are active with functioning user databases, but I am dreading the results.
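When I do get around to it, the triage script will probably start out as something like this rough sketch (assuming the LANDesk export is saved as a plain text list of instance names; the file path is made up):

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null

foreach ($name in (Get-Content 'c:\temp\landesk_instances.txt' | Where-Object {$_ -ne ''}))
{
    $instance = New-Object Microsoft.SqlServer.Management.Smo.Server $name
    try
    {
        #touching the Databases collection forces a connection; it throws if the instance is dead
        $userDbs = @($instance.Databases | Where-Object {$_.IsSystemObject -eq $false}).Count
        Write-Host "$name : $userDbs user database(s)"
    }
    catch
    {
        Write-Host "$name : unreachable, or not a live SQL Server instance"
    }
}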

I would love to just reach out and hire a junior DBA to help deal with all of this, but we currently lack the required number of SAN admins, server admins and DC admins, so a new DBA is a long way down the list of priorities.

No real purpose to this post, other than to vent and let off a little steam.

The positive? At least I won’t be bored…

Thanks for reading.
