Category Archives: PowerShell

Automating SQL Express database backups

SQL Server Express is a fantastic, free database engine for small, standalone databases. However, the limitations of the Express engine can make automating database backups tricky. Because Express lacks SQL Server Agent, you will likely have to roll your own process for backing up your databases. Enterprise database backup solutions exist, but they are typically licensed per server, so burning a valuable license on a small, standalone Express database can be cost prohibitive. There is, however, an easy way to build your own automated backup solution using a stored procedure, PowerShell, and a Windows scheduled task.

First, we need a stored procedure that accepts a parameter for the directory where we want to save backups. This stored procedure should back up every database except tempdb. Also, the stored procedure needs to be saved in a database that will never get deleted (e.g. master).

Here is a script to create the spBackupDatabases procedure in the master database…
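A minimal sketch of such a procedure follows; the cursor-based loop and the yyyyMMddHHmm timestamp format are assumptions based on the naming convention described below, and the path is assumed to end with a trailing backslash.

USE master;
GO

CREATE PROCEDURE dbo.spBackupDatabases
    @path NVARCHAR(256)  -- backup directory; assumed to end with a trailing backslash
AS
BEGIN
    DECLARE @name NVARCHAR(128);     -- current database name
    DECLARE @fileDate NVARCHAR(20);  -- timestamp portion of the file name (yyyyMMddHHmm)
    DECLARE @fileName NVARCHAR(512); -- full path of the backup file

    SET @fileDate = CONVERT(NVARCHAR(8), GETDATE(), 112)
                  + REPLACE(CONVERT(NVARCHAR(5), GETDATE(), 108), ':', '');

    -- Back up every database except tempdb
    DECLARE dbCursor CURSOR FOR
        SELECT name FROM sys.databases WHERE name <> 'tempdb';

    OPEN dbCursor;
    FETCH NEXT FROM dbCursor INTO @name;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @fileName = @path + @name + '_' + @fileDate + '.bak';
        BACKUP DATABASE @name TO DISK = @fileName;
        FETCH NEXT FROM dbCursor INTO @name;
    END

    CLOSE dbCursor;
    DEALLOCATE dbCursor;
END
GO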

Each database in the instance will be backed up to the location specified by the path parameter. As written, the file name for each backup will be in the form DatabaseName_yyyyMMddHHmm.bak (e.g. InventoryDB_201401072105.bak); however, you can change this to whatever naming convention you prefer. One important factor to note is that the directory you pass to the stored procedure needs to allow the SQL Server service account “write” permissions. If you are running the SQL Express service as the system account, you will have to grant appropriate permissions on the backup directory for that system account.

Let’s test the stored procedure to confirm everything is working. Be sure the destination folder already exists, or the backup will fail with SQL error 3013 (“BACKUP DATABASE is terminating abnormally”).
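For example, from Management Studio or SQLCMD (the backup path below is just a placeholder):

EXEC master.dbo.spBackupDatabases @path = 'C:\SQLBackups\';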

Now that we have a stored procedure to back up all databases, we need a script that will run it. As an added requirement, the script needs to delete backup files that are older than 30 days. PowerShell is a perfect fit because it can natively interact with SQL Server through the .NET Framework. The following script will connect to the SQL Express instance, run the new stored procedure, and then delete all files that are over 30 days old. Note that the databases are backed up using the SQL Server service account, but the old files are deleted using the account that is running the PowerShell script. Therefore, be sure both accounts can read and write to the backup directory.
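Here is a minimal sketch of such a script; the instance name (.\SQLEXPRESS) and backup path are assumptions you will need to adjust for your environment.

$backupPath = 'C:\SQLBackups\'   # must be readable and writable by both accounts

# Connect to the local SQL Express instance with Windows authentication
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = 'Server=.\SQLEXPRESS;Database=master;Integrated Security=True;'
$connection.Open()

# Run the stored procedure created earlier
$command = $connection.CreateCommand()
$command.CommandText = 'EXEC dbo.spBackupDatabases @path = @backupPath'
[void]$command.Parameters.AddWithValue('@backupPath', $backupPath)
$command.CommandTimeout = 600    # allow time for large databases to back up
[void]$command.ExecuteNonQuery()
$connection.Close()

# Delete backup files older than 30 days
$cutoff = (Get-Date).AddDays(-30)
Get-ChildItem -Path $backupPath -Filter '*.bak' |
    Where-Object { $_.LastWriteTime -lt $cutoff } |
    Remove-Item -Force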

You will note that the database connection string above is using “Integrated Security=True”. This is because I did not want to store the database connection user ID and password in the PowerShell script. Instead, I will schedule the PowerShell script via a Windows Scheduled Task with the credentials for a domain account saved in the Scheduled Task. This makes it harder for someone to obtain a user ID and password for the database. If you are using a SQL Account, and you are okay with putting the credentials in the connection string, then you can use the following…
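Something along these lines should work (the user ID and password shown are placeholders):

$connection.ConnectionString = 'Server=.\SQLEXPRESS;Database=master;User ID=BackupUser;Password=YourPasswordHere;'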

Finally, the last step for automated SQL Express database backups is creating a Windows scheduled task. The trick to running a PowerShell script from a Windows scheduled task is to specify “PowerShell” in the Program/Script section of the Edit Action setting. Then, in the “Add arguments” section, add the path to your saved PowerShell script.
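For example, the Edit Action fields might look like this (the script path is a placeholder):

Program/script:  PowerShell
Add arguments:   -File "C:\Scripts\BackupSqlExpress.ps1"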

Note… If you have a problem running the PowerShell script, you can add the “-NoExit” switch to the “Add arguments” section so that the error text stays on-screen after the script runs. Just be sure to remove the “-NoExit” switch after testing so that PowerShell closes after it finishes running.

Running Linux Commands from PowerShell

In my lab, I occasionally need to automate maintenance tasks that involve both Windows and Linux systems. For example, I need to back up Windows directories to a Linux-based NAS device, compress and decompress files, delete old backups, etc. Sometimes I need to run SSH commands from PowerShell in a dynamic way. I found some examples online, but they only ran one command at a time. For me, it would be better to dynamically build a set of commands and then have them all run consecutively in one SSH call.

To do this, first you need to define the statements you want to run in an array. In my case, I wanted something dynamic so I came up with the following.
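A sketch of what that array might look like; the directory and file name are placeholders, and in practice you would build them dynamically.

$workDir = '/share/backups'   # placeholder remote directory
$zipFile = 'archive.zip'      # placeholder archive name

$sshCommands = @(
    'cat /etc/*release ;'   # display the Linux distribution release info
    "cd $workDir ;"         # change the working directory
    'pwd ;'                 # print the working directory
    "unzip -o $zipFile ;"   # unzip the file
    "rm $zipFile ;"         # remove the zip file
)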

Basically, the above commands will display the Linux distribution release info, change the working directory, print the working directory, unzip a file, and then remove the zip file. Note that the “;” after each command is required. Alternatively, you can use an “and list” (&&) or an “or list” (||) instead of “;” if you understand how they work.

Now that I have the SSH commands that I want to run, how do I pass them to Linux? Manually, when I want to remotely connect to Linux in an interactive way, I use PuTTY. However, by itself, PuTTY doesn’t have a Windows command-line interface. Thankfully, the makers of PuTTY released Plink, aka “PuTTY Link”, which is a command-line connection tool for PuTTY. Armed with this new information, I downloaded Plink to the same directory as PuTTY and added an alias to my PowerShell script.
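Assuming PuTTY and Plink are installed in the default location (adjust the path as needed), the alias looks something like this:

# Point the alias at wherever plink.exe lives on your machine
Set-Alias -Name plink -Value 'C:\Program Files (x86)\PuTTY\plink.exe'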

Now that I have an alias for Plink, I can pass my array of SSH commands directly to my Linux machine in one line of code.
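A sketch of that one line; the host name, user name, and password are placeholders:

# -batch suppresses interactive prompts; each array element becomes part of one remote command line
plink -batch -ssh -pw 'P@ssw0rd' admin@nas01 $sshCommands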

One nice thing about this approach is that the output of the SSH commands is displayed in the PowerShell console. That way, you can see if any Linux-based warnings or errors occur.

In the above example, I’ve added my user name and password as parameters in the command-line. Obviously, in a production environment this is not desirable. You can get around this by using public keys for SSH authentication. For more information, check out PuTTY’s help documentation. At the time of this writing, Chapter 8 covered how to set up public keys for SSH authentication.

Here is the finished script; as before, the host name, credentials, and paths are placeholders to adjust for your environment.
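# Putting it all together (a sketch; placeholders throughout)
Set-Alias -Name plink -Value 'C:\Program Files (x86)\PuTTY\plink.exe'

$workDir = '/share/backups'
$zipFile = 'archive.zip'

$sshCommands = @(
    'cat /etc/*release ;'
    "cd $workDir ;"
    'pwd ;'
    "unzip -o $zipFile ;"
    "rm $zipFile ;"
)

# Run every command consecutively in a single SSH session;
# output from the remote commands is written to the PowerShell console
plink -batch -ssh -pw 'P@ssw0rd' admin@nas01 $sshCommands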

Some notes worth sharing… Initially, my instinct told me that zipping a large directory locally on the NAS device would be faster than zipping the files remotely from my Windows PC. I assumed the network overhead of downloading the files and then uploading the compressed archive back to the NAS would be the bottleneck. In fact, in my case, it was faster to do it remotely from Windows, because the limited RAM and CPU of my consumer-grade NAS device were quickly overwhelmed by the compression task. My Windows box, with a dual-core CPU, 4GB of RAM, a Gigabit NIC, and an SSD, could compress the files faster than the NAS device despite having to send the data over the network both ways. Some tasks, such as deleting large directories, were significantly faster when run locally on the NAS. Therefore, you will have to experiment to find out what works best for you.

Monitoring Host System Temperature with PowerShell

I recently made some custom modifications to an old ESX virtual host to limit the noise produced by the CPU fans. After injecting resistors into the CPU fan circuits, I wanted to monitor the system temperature in case it ever exceeded the normal operating range. On my laptop, I prototyped a temperature-monitoring script using WMI and PowerShell.
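A sketch of that kind of prototype, assuming the hardware exposes the ACPI thermal zone class through WMI (many machines do not):

# Read ACPI thermal zone temperatures via WMI; CurrentTemperature is in tenths of a Kelvin
Get-WmiObject -Namespace 'root\wmi' -Class MSAcpi_ThermalZoneTemperature |
    Select-Object InstanceName,
        @{ Name = 'TempCelsius'; Expression = { ($_.CurrentTemperature / 10) - 273.15 } }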

Everything was working beautifully until I tried scheduling the script on one of my VMs. The problem is that guest VMs know very little about the underlying host they are running on. Since VMs can be migrated from host to host without interruption, it’s pointless for the guest OS to ask, “What’s my temperature?” What I should have done instead is have the guest OS ask, “EsxHost1, what is your temperature?” I admit I had to laugh when I realized the problem, because I should have seen it coming. Back to the drawing board I went.

After my initial SNAFU, I abandoned WMI because it wasn’t going to work against ESX. Next, I decided to try VMware’s PowerCLI. Unfortunately, after a 200MB download and an hour of digging around, host temperature was nowhere to be found. Then I discovered VMware’s CIM interface. CIM is an open standard, similar to WMI, that allows you to control and manage hardware. PowerShell 3.0 has some new cmdlets that improve the user experience of working with CIM. After a little Googling, I found the class “CIM_NumericSensor”, which contains, among other things, “Ambient Temp”.
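A sketch using the PowerShell 3.0 CIM cmdlets against an ESX host; the host name and credentials are placeholders, and the session options below are what ESXi’s CIM service typically requires.

# Placeholders: ESX host name and root credentials
$esxHost = 'EsxHost1'
$credential = Get-Credential -UserName 'root' -Message "Credentials for $esxHost"

# ESXi serves CIM over HTTPS with a self-signed certificate, hence the relaxed checks
$options = New-CimSessionOption -UseSsl -SkipCACheck -SkipCNCheck -Encoding Utf8
$session = New-CimSession -ComputerName $esxHost -Credential $credential `
    -Authentication Basic -Port 443 -SessionOption $options

# Pull the numeric sensors and keep the ambient temperature reading;
# CurrentReading must be scaled by 10^UnitModifier to get real units
Get-CimInstance -CimSession $session -ClassName CIM_NumericSensor |
    Where-Object { $_.ElementName -like '*Ambient*' } |
    Select-Object ElementName,
        @{ Name = 'TempCelsius'; Expression = { $_.CurrentReading * [math]::Pow(10, $_.UnitModifier) } }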


Using the above script, I can remotely ask an ESX host for its system temperature; so far, everything has been “cool”. If you need to monitor your host from PowerShell 2.0, check out the cmdlet “New-WSManInstance”. Carter Shanklin wrote a post entitled “Monitoring ESX hardware with Powershell” on blogs.vmware.com. It should get you going in the right direction.