Visual Aids for Communicating Project Management Concepts

With project members and stakeholders, I occasionally find that it’s difficult to communicate the benefits of using certain project management tools or methods. For example, conversations about the benefits of using SCRUM or Test Driven Development (TDD) to improve a project’s probability of success are sometimes met with blank stares or even friction. For these meetings, it’s good to have a few visual aids that help explain how using the right tools can mitigate certain kinds of risk. Below is a sample of some of the visual aids I’ve used over the years.

The Triple Constraint

The first diagram is typically the one almost everyone knows. The Triple Constraint, also known as the Project Management Triangle, is a good visual aid for communicating how a project’s cost, scope, and schedule relate to each other. Changes to one or two of the three constraints will adversely affect the remaining one(s). For example, reducing cost will negatively impact the project’s scope or schedule. Compressing the schedule will negatively impact the scope or cost. Widening a project’s scope will negatively impact the schedule or cost. Trying to tightly control all three constraints simultaneously is impossible. Additionally, there is significant risk to a project’s success if you materially change which constraints are most important midstream. Successful projects – i.e. on time, on budget, and within scope – strike a delicate equilibrium between the three constraints.

Product Management Trap

Another visual aid I like to use is the Product Management Trap diagram. In this diagram, the stakeholder controls the value axis while the development team controls the complexity axis. Often a project will have a wide portfolio of features or components, and identifying where each feature ranks helps stakeholders weigh risk against ROI. When trimming scope, focus on low-value, high-complexity features. When trying to balance resource schedules, add or remove low-complexity features. If possible, when negotiating scope, remove features in descending order (4, 3, 2, 1) so that you eliminate the most difficult features first. Be mindful, however, that although quadrant 3 contains difficult and complex features, trimming here could significantly change the overall ROI for the project. There is a great YouTube video by sketchcaster about this chart.

POS Vs Complexity

Visualizing the relationship between probability of success (POS) and complexity is useful when communicating the effectiveness of project management methodologies or tools (e.g. SCRUM, Kanban, TDD, etc.). The POS versus complexity diagram illustrates how using proven methodologies and tools can improve the POS without significantly affecting complexity or scope. For example, SCRUM improves POS (on the y axis), but complexity (on the x axis) remains relatively unchanged with or without SCRUM. The relative improvement in POS from using project management methods and tools becomes more pronounced on projects of higher complexity. For projects of low complexity, the benefit is marginal.

Time Vs Complexity

Similar to the previous graph, time to develop versus complexity is a good tool for communicating the effectiveness of good project management methodologies and tools. Typically, the more complex a project, the more time it takes to complete. The benefits of effective project management methods and tools often compound, bending the time-versus-complexity curve downward: low-complexity projects only benefit marginally, but high-complexity projects see a significant time improvement. For example, a good TDD (Test Driven Development) strategy continuously exercises code and reduces the likelihood of bugs introduced by refactoring or changes in scope. Finding and fixing bugs before they surface in Quality Control (QC) translates to reduced communication (e.g. tickets, emails, phone calls, and meetings), reduced context switching, and reduced follow-up testing for QC agents. Over time, these improvements compound and can significantly reduce the overall time needed to develop a project.

Efficiency Vs Flexibility

Finally, efficiency versus flexibility is a good visual aid for illustrating that highly flexible systems generally perform less efficiently than rigid systems. I typically break out this diagram when I hear requirements that suggest the need to create a system that can do everything. For example, requirements might suggest that users be able to process data from many different data sources; or they might propose that users be able to dynamically build, customize, and maintain reports; or they might advocate that users be able to view and manipulate the data from multiple front-end systems. Beyond the fact that these requirements may add significant complexity to a given project, the typical trade-off is inefficiency in resource consumption. Disk, memory, and CPU consumption are all greater in highly flexible systems. As flexibility increases, views or reports typically shift from real-time to deferred, or batched. Also, highly flexible systems are more complex and therefore take longer to build. Ultimately, the currency for measuring efficiency in such systems is time: how fast does the system need to perform, and how fast do you need it built?

I have many other visual aids that, unfortunately, I’ve omitted here for brevity (e.g. risk versus complexity, supportability versus complexity, etc.). Also, there is a whole host of artifacts generated in well-managed projects (Gantt charts, burn-down charts, etc.) that make great visual aids for communication. For now, I’ll save those for future blog posts. Until then, I’ll leave that homework up to you.

Automating SQL Express database backups

SQL Server Express is a fantastic, free database engine for small, standalone databases. However, the limitations of the Express engine can make automating database backups tricky. Because Express lacks SQL Server Agent, you will likely have to roll your own process for backing up its databases. There are enterprise database backup solutions, but they are typically licensed per server, so burning a valuable license on a small, standalone Express database might be cost-prohibitive. There is, however, an easy way to roll your own automated backup solution using a stored procedure, PowerShell, and a Windows scheduled task.

First, we need a stored procedure that accepts a parameter for the directory where we want to save backups. This stored procedure should back up every database except tempdb. Also, the stored procedure needs to be saved in a database that will never get deleted (e.g. master).

Here is a script to create the spBackupDatabases procedure in the master database…
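(The version below is a minimal sketch; the cursor over sys.databases, the yyyyMMddHHmm timestamp, and the @path parameter name are illustrative choices you can adapt.)

    USE master;
    GO

    -- Sketch: back up every database except tempdb to the supplied directory
    CREATE PROCEDURE dbo.spBackupDatabases
        @path NVARCHAR(256)   -- destination directory, e.g. N'D:\Backups\' (placeholder)
    AS
    BEGIN
        DECLARE @name SYSNAME, @fileName NVARCHAR(512), @timestamp CHAR(12);

        -- Timestamp in the form yyyyMMddHHmm (e.g. 201401072105)
        SET @timestamp = REPLACE(REPLACE(REPLACE(CONVERT(CHAR(16), GETDATE(), 120), '-', ''), ':', ''), ' ', '');

        DECLARE dbCursor CURSOR FOR
            SELECT name FROM sys.databases WHERE name <> 'tempdb';

        OPEN dbCursor;
        FETCH NEXT FROM dbCursor INTO @name;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @fileName = @path + @name + '_' + @timestamp + '.bak';
            BACKUP DATABASE @name TO DISK = @fileName;
            FETCH NEXT FROM dbCursor INTO @name;
        END

        CLOSE dbCursor;
        DEALLOCATE dbCursor;
    END
    GO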

Each database in the instance will be backed up to the location specified by the path parameter. As written, the file name for each backup takes the form DatabaseName_Timestamp.bak (e.g. InventoryDB_201401072105.bak); however, you can change this to whatever naming convention you prefer. One important factor to note is that the directory you pass to the stored procedure needs to grant the SQL Server service account “write” permissions. If you are running the SQL Express service as the system account, you will have to grant the appropriate permissions on the backup directory to that account.
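For example, on a default named SQLEXPRESS instance you could grant modify rights to the per-service account from an elevated Command Prompt (the instance name and directory here are assumptions; substitute your own):

    REM D:\Backups and the instance name below are placeholders
    icacls "D:\Backups" /grant "NT SERVICE\MSSQL$SQLEXPRESS:(OI)(CI)M"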

Let’s test the stored procedure to confirm everything is working. Be sure the destination folder already exists or you will get SQL error number 3013.
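A quick test call looks like this (the folder path is a placeholder):

    -- Back up every database except tempdb to D:\Backups\ (placeholder path)
    EXEC master.dbo.spBackupDatabases @path = N'D:\Backups\';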

Now that we have a stored procedure to back up all databases, we need a script that will run it. As an added requirement, the script needs to delete backup files that are older than 30 days. PowerShell is a perfect fit because it can natively interact with SQL Server through the .NET Framework. The following script will connect to the SQL Express instance, run the new stored procedure, and then delete ALL files in the backup directory that are over 30 days old. Note that the databases are backed up using the SQL Server service account, but the old files are deleted using the account that runs the PowerShell script. Therefore, be sure both accounts can read and write to the backup directory.
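(This is a minimal sketch; the instance name, backup path, and command timeout are assumptions to adjust for your environment.)

    # Placeholders: default .\SQLEXPRESS instance, D:\Backups\ as the destination
    $backupPath = "D:\Backups\"
    $connString = "Server=.\SQLEXPRESS;Database=master;Integrated Security=True"

    # Run the backup stored procedure via ADO.NET
    $connection = New-Object System.Data.SqlClient.SqlConnection($connString)
    $command    = New-Object System.Data.SqlClient.SqlCommand("EXEC dbo.spBackupDatabases @path", $connection)
    $command.Parameters.AddWithValue("@path", $backupPath) | Out-Null
    $command.CommandTimeout = 600   # large databases can take a while
    $connection.Open()
    $command.ExecuteNonQuery() | Out-Null
    $connection.Close()

    # WARNING: deletes every file in the backup folder older than 30 days, not just .bak files
    $cutoff = (Get-Date).AddDays(-30)
    Get-ChildItem -Path $backupPath -File |
        Where-Object { $_.LastWriteTime -lt $cutoff } |
        Remove-Item -Force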

You will note that the database connection string above uses “Integrated Security=True”. This is because I did not want to store the database user ID and password in the PowerShell script. Instead, I will run the PowerShell script via a Windows scheduled task, with the credentials of a domain account saved in the scheduled task. This makes it harder for someone to obtain a user ID and password for the database. If you are using a SQL account and you are okay with putting the credentials in the connection string, then you can use the following…
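(The server, user ID, and password below are placeholders.)

    # SQL authentication instead of integrated security; credentials are placeholders
    $connString = "Server=.\SQLEXPRESS;Database=master;User ID=BackupUser;Password=YourPasswordHere"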

Finally, the last step for automating SQL Express database backups is creating a Windows scheduled task. The trick to running a PowerShell script from a scheduled task is to specify “PowerShell” in the Program/script box of the Edit Action dialog. Then, in the “Add arguments” box, add the path to your saved PowerShell script.
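If you prefer to script the task itself rather than click through the Task Scheduler UI, something along these lines can also work (the task name, schedule, script path, and account are placeholders; /RP * prompts for the password):

    REM Placeholder task name, start time, script path, and account
    schtasks /Create /TN "SQL Express Backup" /SC DAILY /ST 21:00 /TR "PowerShell C:\Scripts\Backup-SqlExpress.ps1" /RU MYDOMAIN\BackupUser /RP *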

Note… If you have a problem running the PowerShell script, you can add the “-noexit” switch to the “Add arguments” section so that the error text stays on-screen after the script runs. Just be sure to remove the “-noexit” switch after testing so that PowerShell closes after it finishes running.

Running Linux Commands from PowerShell

In my lab, I occasionally need to automate maintenance tasks that involve both Windows and Linux systems. For example, I need to back up Windows directories to a Linux-based NAS device, compress and decompress files, delete old backups, etc. Sometimes I need to run SSH commands from PowerShell in a dynamic way. I found some examples online, but they only ran one command at a time. For me, it would be better if I could dynamically build a set of commands and then have them all run consecutively in one SSH call.

To do this, you first need to define the statements you want to run in an array. In my case, I wanted something dynamic, so I came up with the following.
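(The working directory and zip file name below are placeholders; the structure of the array is the point.)

    # Placeholder values for the remote working directory and the archive to process
    $workDir = "/share/backups"
    $zipFile = "backup_20140107.zip"

    # Build the command list dynamically; note the trailing ";" on each command
    $sshCommands = @(
        "cat /etc/*-release;",   # display the Linux distribution release info
        "cd $workDir;",          # change the working directory
        "pwd;",                  # print the working directory
        "unzip -o $zipFile;",    # unzip the file
        "rm $zipFile;"           # remove the zip file
    )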

Basically, the above commands will display the Linux distribution release info, change the working directory, print the working directory, unzip a file, and then remove the zip file. Note that the “;” after each command is required. Alternatively, you can use an “and list” (&&) or an “or list” (||) instead of “;” if you understand how they work.

Now that I have the SSH commands I want to run, how do I pass them to Linux? When I want to connect to a Linux machine interactively, I use PuTTY. However, by itself, PuTTY doesn’t have a Windows command-line interface. Thankfully, the makers of PuTTY released Plink, aka “PuTTY Link”, a command-line connection tool for PuTTY. Armed with this information, I downloaded Plink to the same directory as PuTTY and added an alias for it to my PowerShell script.
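An alias definition like the following works (the install path is an assumption; point it at wherever plink.exe lives on your machine):

    # Let "plink" be called like any other PowerShell command; the path is a placeholder
    Set-Alias plink "C:\Program Files (x86)\PuTTY\plink.exe"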

Now that I have an alias for Plink, I can pass my array of SSH commands directly to my Linux machine in one line of code.
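Using the alias and command array defined above, that line could look like this (the host name and credentials are placeholders):

    # Join the commands into one string and run them in a single SSH session
    plink -ssh nas.local -l backupuser -pw "P@ssw0rd" ($sshCommands -join " ")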

One nice thing about this approach is that the output of the SSH commands is displayed in the PowerShell console. That way, you can see whether any Linux-based warnings or errors occur.

In the above example, I’ve added my user name and password as parameters on the command line. Obviously, in a production environment this is not desirable. You can get around this by using public keys for SSH authentication. For more information, check out PuTTY’s help documentation. At the time of this writing, Chapter 8 covered how to set up public keys for SSH authentication.
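Once a key pair is in place, the -pw parameter can be swapped for Plink’s -i option and a private key file (the key path is a placeholder):

    # Authenticate with a PuTTY private key instead of a password; key path is a placeholder
    plink -ssh nas.local -l backupuser -i "C:\Keys\nas_backup.ppk" ($sshCommands -join " ")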

Here is the finished script.
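(Assembled from the pieces above; the host, credentials, and paths remain placeholders.)

    # Alias for Plink; the path is a placeholder
    Set-Alias plink "C:\Program Files (x86)\PuTTY\plink.exe"

    # Placeholder values for the remote working directory and the archive to process
    $workDir = "/share/backups"
    $zipFile = "backup_20140107.zip"

    # Build the command list dynamically
    $sshCommands = @(
        "cat /etc/*-release;",
        "cd $workDir;",
        "pwd;",
        "unzip -o $zipFile;",
        "rm $zipFile;"
    )

    # Run everything in a single SSH session; host and credentials are placeholders
    plink -ssh nas.local -l backupuser -pw "P@ssw0rd" ($sshCommands -join " ")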

Some notes worth sharing… Initially, my instinct told me that zipping a large directory locally on the NAS device would be faster than trying to remotely zip the files from my Windows PC. I assumed the network overhead of downloading the files and then uploading the compressed archive back to the NAS would be a bottleneck. In fact, in my case, it was faster to do it remotely from Windows. This is because the limited RAM and CPU of my consumer-grade NAS device were quickly overwhelmed by the compression task. My Windows box, with a dual-core CPU, 4 GB of RAM, a Gigabit NIC, and an SSD, could compress the files faster than the NAS device despite having to send the data over the network both ways. Some tasks, such as deleting large directories, were significantly faster when run locally on the NAS. Therefore, you will have to experiment to find out what works best for you.