Using PowerShell Parameter Validation to Make Your Day Easier

Writing functions or scripts requires a variety of parameters, each with its own requirements: one might need a collection, another an object of a certain type, and another might only accept values within a certain range.

The idea of parameter validation is that you can place specific checks on a parameter being used in a function or script. If the value or collection passed to the parameter doesn’t meet the specified requirements, a terminating error is thrown, execution of the code halts, and you get a (usually readable) error stating the reason for the halt. This is very powerful and gives you much tighter control over the input going into the function. You don’t want your script to go crazy halfway into execution because the values sent to a parameter were completely off the wall.

You can have multiple unique validations used on a single parameter and the style is similar to this:

[parameter()]
[ValidateSomething()] #Not a real attribute; just an example
[string[]]$Parameter

Another important item is that a default value on your parameter will never go through parameter validation; only values you explicitly supply are checked. While this will probably rarely come up in your code, it is still worth pointing out in case you set an invalid default and then wonder why the function fails later in the code rather than at the beginning.
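To see this in action, here is a minimal sketch (the function name and set values are made up for illustration) showing an invalid default value sailing right past validation, at least in the Windows PowerShell versions this post covers:

```powershell
Function Test-Default {
    [cmdletbinding()]
    Param(
        [ValidateSet('Low','High')]
        $Level = 'Medium' # Invalid default, but it is never validated
    )
    $Level
}

Test-Default                  # Returns 'Medium'; the default skipped validation
Test-Default -Level 'Medium'  # Throws, because explicit values are validated
```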

I am going to go over some of the validation types and give examples of each as well as discuss potential issues with each approach.

[ValidateNotNullOrEmpty()] and [ValidateNotNull()]

I have listed both of these together instead of separately for a reason: pick one or the other! I have seen instances where both are used to validate a single parameter, and that simply does not need to happen. Here is why:

  • ValidateNotNull only checks whether the value being passed to the parameter is null. It will still accept an empty string.
  • ValidateNotNullOrEmpty checks whether the value being passed is null and also whether it is an empty string or an empty collection.

Let’s check out some examples with ValidateNotNull.

Function Test-Something {
    [cmdletbinding()]
    Param(
        [parameter(ValueFromPipeline)]
        [ValidateNotNull()] #No null values allowed
        $Item
    )
    Process {
        $Item
    }
}
# Will fail
Test-Something -Item $Null


# Will work because it is just an empty string, not a null value
Test-Something -Item ''

# Will work because we aren't checking for empty collections
Test-Something -Item @()


Notice that I didn’t specify a type for the parameter. If you specify a type such as [string], $Null gets converted to an empty string before validation runs, so ValidateNotNull will not work as you might expect. If you need a type on your parameter, use ValidateNotNullOrEmpty instead.

Function Test-Something {
    [cmdletbinding()]
    Param(
        [parameter(ValueFromPipeline)]
        [ValidateNotNull()]
        [string]$Item
    )
    $Item
}
 
# Both will work; [string] converts $Null to an empty string, which passes ValidateNotNull
Test-Something -Item $Null
Test-Something -Item ''


Up next is ValidateNotNullOrEmpty which is great if you are using collections and/or require an object of a specific type.

Function Test-Something {
    [cmdletbinding()]
    Param(
        [parameter(ValueFromPipeline)]
        [ValidateNotNullOrEmpty()] 
        [string[]]$Item
    )
    Process {
        $Item
    }
}

It doesn’t matter whether the parameter is typed, receives a collection, or gets just a simple string: all of the calls below will fail when attempted.

Test-Something -Item $Null


Test-Something -Item @()


Test-Something -Item ''


As you can see, this handles all of the possible empty and null values thrown at it. I will reiterate that you should choose one or the other of these two validations; there is no need to duplicate effort if you are just trying to keep null values out of the parameter.

[ValidateLength()]

This is useful if you are expecting values of a certain length, such as usernames.

Some important conditions to take note of that will throw an error with this validation attribute:

  • Max length less than min length
  • Max length set to 0
  • Argument is NOT a string or integer

Function Test-Something {
    [cmdletbinding()]
    Param(
        [parameter(ValueFromPipeline)]
        [ValidateLength(1,8)]
        [string]$Item
    )
    Process {
        $Item
    }
}

The first value given is the minimum value and the second value will always be the maximum value. In this case, I am expecting a string that is at least 1 character and at most 8 characters long. Anything outside of those boundaries will throw an error.

# Works
Test-Something -Item Boe


# Will fail
Test-Something -Item Thisisalongstring


Note that this tells you the length of the value that was submitted (17).

[ValidateRange()]

This is useful when you want to validate a specific range of integers, such as testing age.

Some important conditions to take note of that will throw an error with this validation attribute:

  • Value of MinRange is greater than MaxRange
  • Argument is NOT the same type as the Min and Max range values

Function Test-Something {
    [cmdletbinding()]
    Param(
        [parameter(ValueFromPipeline)]
        [ValidateRange(21,90)]
        [int[]]$Age
    )
    Process {
        $Age
    }
}
# Will work
Test-Something -Age 34

21,36 | Test-Something


# Will fail
Test-Something -Age 16

Test-Something -Age 100,25

25,115,21 | Test-Something


[ValidateCount()]

This is useful for allowing only a certain number of items in the collection passed to a parameter.

Some important conditions to take note of that will throw an error with this validation attribute:

  • Value of MinCount is greater than MaxCount
  • Count values must be Int32
  • The parameter must be an array type ([string[]])
    • If you just use [string] (or similar), then you are bound to only one item being passed into the parameter
  • Min cannot be less than 0

Function Test-Something {
    [cmdletbinding()]
    Param(
        [parameter(ValueFromPipeline)]
        [ValidateCount(1,4)]
        [string[]]$Item
    )
    Process {
        $Item
    }
}
# Will work
Test-Something -Item 9,10


# Will fail
Test-Something -Item 9,6,7,8,9


As you can see, it will tell you in the error how many items were being assigned to the parameter as well as how many items are allowed.

Note that it has little effect on items being passed through the pipeline (this is by design; the pipeline hands items to the function one at a time, so each is validated individually).

1,2,5,8,10 | Test-Something


But what if we pass a collection of collections?

@(1,2),@(1,2,5,8,6),@(10,15,6)  | Test-Something


It will in fact fail on the collection that had more than the allotted items.

[ValidateSet()]

Useful for limiting input to a certain set of items. It also allows for case-sensitive sets: pass IgnoreCase=$False after defining the set (the default is $True, i.e. case insensitive).

[ValidateSet('Bob','Joe','Steve', IgnoreCase=$False)]
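As a quick sketch (the function name is hypothetical) of how the case-sensitive version behaves:

```powershell
Function Test-CaseSet {
    [cmdletbinding()]
    Param(
        [ValidateSet('Bob','Joe','Steve', IgnoreCase=$False)]
        [string]$Name
    )
    $Name
}

Test-CaseSet -Name 'Joe'  # Works; exact case match
Test-CaseSet -Name 'joe'  # Throws; 'joe' is not in the case-sensitive set
```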

Some important conditions to take note of that will throw an error with this validation attribute:

  • It is used more than once on a parameter (multiple sets of sets)
  • An element of an array being passed is not in the set; either every element matches or the call fails completely
  • The parameter doesn’t accept an array and more than one item is passed

Function Test-Something {
    [cmdletbinding()]
    Param(
        [parameter(ValueFromPipeline)]
        [ValidateSet('Bob','Joe','Steve')]
        [string[]]$Item
    )
    Process {
        $Item
    }
}
# Will work
Test-Something -Item 'joe'


@('Joe',@('Bob','Steve')) | Test-Something


# Will not work
Test-Something -Item 'Boe'
Test-Something -Item 'Boe','Joe'


#Partial; note how the collection of Bill and Joe doesn't work
@('Bob',@('Bill','Joe'),'Boe','Steve') | Test-Something


For more cool stuff you can do with ValidateSet, check out this article from Matt Graeber (Blog | Twitter): http://www.powershellmagazine.com/2013/12/09/secure-parameter-validation-in-powershell/

[ValidatePattern()]

Useful to validate that input matches a specific regex pattern; it allows for case-sensitive matches, and RegexOptions flags allow for more customization. It can be very hard to gather the requirements from the error message that is thrown when validation fails unless the reader has some regex experience.

The available RegexOptions values are:

  • Compiled: The regular expression is compiled to an assembly. This yields faster execution but increases startup time.
  • CultureInvariant: Cultural differences in language are ignored.
  • ECMAScript: Enables ECMAScript-compliant behavior for the expression. This value can be used only in conjunction with the IgnoreCase, Multiline, and Compiled values; using it with any other values results in an exception.
  • ExplicitCapture: The only valid captures are explicitly named or numbered groups of the form (?<name>…). This allows unnamed parentheses to act as noncapturing groups without the syntactic clumsiness of (?:…).
  • IgnoreCase: Case-insensitive matching.
  • IgnorePatternWhitespace: Eliminates unescaped white space from the pattern and enables comments marked with #. This does not affect white space inside character classes.
  • Multiline: Multiline mode. Changes the meaning of ^ and $ so they match at the beginning and end, respectively, of any line, and not just of the entire string.
  • None: No options are set.
  • RightToLeft: The search runs from right to left instead of from left to right.
  • Singleline: Single-line mode. Changes the meaning of the dot (.) so it matches every character (instead of every character except \n).
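As a quick illustration of combining these flags (a sketch using the .NET Regex class directly rather than a validation attribute):

```powershell
# Multiline changes ^ to match at the start of each line;
# IgnoreCase makes the letters match regardless of case
$options = [System.Text.RegularExpressions.RegexOptions]'Multiline, IgnoreCase'
$regex = New-Object System.Text.RegularExpressions.Regex '^item', $options
$regex.Matches("Item one`nITEM two").Count  # Returns 2; both lines match
```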

Some important conditions to take note of with this validation attribute:

  • It should only be used once per parameter
  • Every item in a collection being passed must match the pattern, or the call fails completely (unless the items come in one at a time from the pipeline)

Function Test-Something {
    [cmdletbinding()]
    Param(
        [parameter(ValueFromPipeline)]
        [ValidatePattern('^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$')]
        [string[]]$Item
    )
    Process {
        $Item
    }
}
 

I am intentionally using a complex RegEx string for the IP address to prove a point on how difficult it could be to understand the error.

# Will work
Test-Something -Item 192.168.1.1
Test-Something -Item 192.168.1.1,168.125.12.15


# Will not work; note the error shows the regex, which only helps those that know regex
Test-Something -Item 'Joe'
Test-Something -Item 1
Test-Something -Item 192.168.1.1,23


As you can see, the error messages are pretty hard to read unless you know RegEx.

One last example showing input from the pipeline.

# Works a little better when using input from pipeline
@('192.168.1.1','23') | Test-Something


The error leads me to the last type of validation that we can use to make the error a little better.

[ValidateScript()]

Very powerful: you can test for nearly any requirement, it can do what the other validations do, and it can provide better (custom) errors depending on how you structure the code. It can also slow down your script execution if you have too many checks, or one long-running check, in the scriptblock.

Some important conditions to take note of with this validation attribute:

  • The scriptblock must return $True for the value to pass; returning $False (or nothing) causes a generic validation failure
    • I would highly recommend not relying on a return value of $False; instead, use Throw with a custom error message so the user knows what should be happening.

Function Test-Something {
    [cmdletbinding()]
    Param(
        [parameter(ValueFromPipeline)]
        [ValidateScript({If ($_ -match '^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$') {
            $True
        } Else {
            Throw "$_ is not an IPV4 Address!"
        }})]
        [string[]]$Item
    )
    Process {
        $Item
    }
}

 

# Will work
Test-Something -Item 192.168.1.1
Test-Something -Item 192.168.1.1,168.14.12.15


Now we can see a better error message when this fails.

# Will not work
Test-Something -Item 'Joe'
Test-Something -Item 1
Test-Something -Item 192.168.1.1,23


Now, instead of a regex error message, this actually tells you that it expects an IPV4 address. The same goes for the pipeline example below.

@('192.168.1.1','23') | Test-Something


My last example with this is to check for invalid characters in a given path.

Function Test-Something {
    [cmdletbinding()]
    Param (
        [parameter(ValueFromPipeline)]
        [ValidateScript({
            If ((Split-Path $_ -Leaf).IndexOfAny([io.path]::GetInvalidFileNameChars()) -ge 0) {
                Throw "$(Split-Path $_ -Leaf) contains invalid characters!"
            } Else {$True}
        })]
        [string[]]$NewFile
    )
    Process {
        $NewFile
    }
}
#Works
Test-Something -NewFile "C:\Temp\File.txt"


#Fails
Test-Something -NewFile "C:\test\temp\File?.txt"


This was just one example of using ValidateScript, but you can test pretty much anything; as long as you return $True when the value is good and Throw a custom error (or return $False) when it is not, you will have a powerful method for validating parameters. I will reiterate the need to keep the scriptblock as efficient as possible so it doesn’t slow down your function when you pass a large collection with each item being validated.
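As another quick sketch along the same lines (a hypothetical function; Test-Path is the only check performed), here is a scriptblock that verifies a path exists before the function body ever runs:

```powershell
Function Get-SomeFile {
    [cmdletbinding()]
    Param(
        [parameter(ValueFromPipeline)]
        [ValidateScript({
            If (Test-Path -Path $_) {
                $True
            } Else {
                Throw "$_ does not exist!"
            }
        })]
        [string[]]$Path
    )
    Process {
        Get-Item -Path $Path
    }
}
```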

That’s it for working with parameter validation in PowerShell. Hopefully some of these examples and explanations will help you out in a future script/function!


Sidebar on Ed Wilson’s (Scripting Guy) Latest Book

A while back, I was asked to submit a sidebar for Ed Wilson’s (Microsoft’s Hey, Scripting Guy!) latest book, titled “Windows PowerShell Best Practices”, on my use of PowerShell in the environment.

Of course, I couldn’t refuse this offer and proceeded to submit one to him. If you want to check out my sidebar (as well as many other excellent sidebars from other members of the PowerShell community), then click on the link below to pick up a copy of the book. Hint: my sidebar is in the chapter about Modules (Chapter 10).

Windows PowerShell Best Practices (V3)


Avoiding System.Object[] (or Similar Output) when using Export-Csv

I’ve run into this issue a number of times, and have seen others hit it as well, when they pipe data that has a collection of items in one of its properties to a CSV file using Export-Csv. What happens can drive you batty.

[pscustomobject]@{
    First = 'Boe'
    Last = 'Prox'
    ExtraInfo = @(1,3,5,6)
    State = 'NE'
} | Export-Csv -notype Random.csv


As you can see, the ExtraInfo column contains System.Object[] (or a different object type name) instead of 1,3,5,6. This can be frustrating to look at, especially when you have hundreds or thousands of rows of data that may have multiple columns containing this type of information. Why does this happen? It is because anything that goes through Export-Csv is cast as a string before being written, as in this example.

@(1,2,3,5).ToString()


There are a few ways you can resolve this so that the collection is unrolled (or expanded, if you will). They require a little bit of extra code, but they will help make sure you are getting human-readable information in the spreadsheet.

Using –Join

One approach to this is to use the –Join operator on those properties which will have a collection of items in it.

[pscustomobject]@{
    First = 'Boe'
    Last = 'Prox'
    ExtraInfo = (@(1,3,5,6) -join ',')
    State = 'NE'
} | Export-Csv -notype Random.csv


Looks nice and is presentable to a person looking at the spreadsheet; depending on the information, this may be the way to go for you. But I’ve had data with 20 items in a collection, which can make that cell very long, and if there is other punctuation involved (such as when working with IP addresses), it can be even harder to read.

[pscustomobject]@{
    First = 'Boe'
    Last = 'Prox'
    ExtraInfo = (@(1,3,5,6) -join ',')
    State = 'NE'
    IPs = (@('111.222.11.22','55.12.89.125','125.48.2.1','145.23.15.89','123.12.1.0') -join ',')
} | Export-Csv -notype Random.csv


I don’t know about you, but even if there were a space after each comma, it would still be painful to read. Because of that, I prefer the following approach of adjusting the output of the collection object.

Out-String and Trim()

My favorite approach (which requires a little more code and a little extra work at the end) is to display the expanded collection in the spreadsheet by using a combination of Out-String and Trim().

[pscustomobject]@{
    First = 'Boe'
    Last = 'Prox'
    ExtraInfo = (@(1,3,5,6) | Out-String).Trim()
    State = 'NE'
    IPs = (@('111.222.11.22','55.12.89.125','125.48.2.1','145.23.15.89','123.12.1.0') | Out-String).Trim()
} | Export-Csv -notype Random.csv


Ok, first off, you might be wondering where the rest of the data went. Here is the part where you have to do a little formatting on the spreadsheet to get all of the data to show up. I typically click the upper-left corner to select everything, then double-click a row border to expand all of the cells, and double-click the column borders to make sure it all looks good. I also make sure to set the vertical alignment to top.


After that, the IP addresses and the ExtraInfo values show up as they normally would if we expanded them in the console. To me, and this is my own personal opinion, this is much preferable to the other method. When I prepare my reports, I typically use the ‘Format as table’ button in Excel to give it a little more color, and then I ship it off to whoever needs it.


So there you go! These are just a couple of the available options (I have no doubt there are others) that you can use to make sure your report is presentable to whoever needs to see it. As always, I am interested in seeing what others have done to get around this hurdle of sending objects with collections as properties to a spreadsheet.

A function to make things easier

I put together a function called Convert-OutputForCsv which serves as a middle man between the query for data and the exporting of that data to a CSV file using Export-Csv.

The function accepts input via the pipeline (recommended approach) and allows you to determine if you want the property to have the collection expanded to a comma separated value (comma) or if you want the stacked version that I showed above (stack). By default, the data being passed from this function to Export-Csv will not retain its order of properties (I am working on finding a solution to this) but you do have the option of defining the order manually which can be passed into the function.

Updated 02 FEB 2014: Removed OutputOrder parameter as it is no longer needed for this function. Bug has been fixed where output order didn’t match the order of the input object.

After dot sourcing the script file (. .\Convert-OutputForCsv.ps1) to load the function into the current session, I will demonstrate an example of how this works.

The following example will gather information about the network adapter and display its properties first without the use of the function and then using the function.

$Output = 'PSComputername','IPAddress', 'IPSubnet',
'DefaultIPGateway','DNSServerSearchOrder'

Get-WMIObject -Class Win32_NetworkAdapterConfiguration -Filter "IPEnabled='True'" |
Select-Object $Output | Export-Csv -NoTypeInformation -Path NIC.csv 

 


Pretty much useless at this point. Now let’s run it with my function in the middle.

$Output = 'PSComputername','IPAddress', 'IPSubnet', 'DefaultIPGateway','DNSServerSearchOrder'

Get-WMIObject -Class Win32_NetworkAdapterConfiguration -Filter "IPEnabled='True'" |
Select-Object $Output | Convert-OutputForCSV -OutputOrder $Output | 
Export-Csv -NoTypeInformation -Path NIC.csv   


That looks a whole lot better! And just for another example, let’s see this using the comma format as well.

$Output = 'PSComputername','IPAddress', 'IPSubnet', 'DefaultIPGateway','DNSServerSearchOrder'

Get-WMIObject -Class Win32_NetworkAdapterConfiguration -Filter "IPEnabled='True'" |
Select-Object $Output | Convert-OutputForCSV -OutputOrder $Output -OutputPropertyType Comma | 
Export-Csv -NoTypeInformation -Path NIC.csv   

 


One more, this time with Get-ACL

$Output = 'Path','Owner', 'Access'

Get-ACL .\.gitconfig | Select-Object Path, Owner, Access, SDDL, Group| 
Convert-OutputForCSV -OutputOrder Path,Owner,Access |
Export-Csv -NoTypeInformation -Path ACL.csv

 


Works like a champ! Anything that I didn’t specify in the OutputOrder will just get tossed in at the end in no particular order.

The download for this function is below. Give it a spin and let me know what you think!

Download Convert-OutputForCsv.ps1



Winter Scripting Games 2014 Tip #2: Use #Requires to let PowerShell do the work for you

In Version 2 of PowerShell, you gained the ability to use #Requires -Version 2.0 to ensure that your scripts/functions would only run on the specified PowerShell version, preventing folks running an older version from wondering why things weren’t working.
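The statement itself is just a line at the top of the script; a minimal sketch:

```powershell
#Requires -Version 2.0
# If this script is run on PowerShell 1.0, it stops here with a
# terminating error instead of failing somewhere further below.
$PSVersionTable.PSVersion
```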


In this article, I will show you a couple of new additions to the #Requires statement that will make your life easier when writing functions that require specific pre-requisites rather than coding your own methods.

Modules

This was fine, but it only helped with scripts that were not version compatible. Fortunately, with Version 3 we gained a better #Requires statement for modules. Rather than adding extra code to check whether a module exists, we can just add the following statement: if the module exists, the code continues to run; if it doesn’t, a terminating error is thrown and the code stops running.

Let’s try it out!

#Requires -Module ActiveDirectory

Ok, I have this statement placed right after my commented help block and before the [cmdletbinding()] statement, like so:

Function Get-SomeUser {
    #Requires -Module ActiveDirectory
    [cmdletbinding()]
    Param ($User)
    Get-ADUser -Identity $User
}

When I dot source the function, nothing happens, which is exactly what we expect: the required module was found and loaded. So what happens when a module doesn’t exist on the system where we are calling the function?


My DoesntExists module…well…doesn’t actually exist, and when I try to dot source the script to load the function that requires it, it fails, stating that the module is missing and it is therefore unable to proceed any further. Pretty handy if you do not want your code to run without a specific module available.

Running as an Administrator

I’ve actually written a small script back in the day, as well as a Hey, Scripting Guy! article, that detects whether the current user is running PowerShell ‘as an administrator’ before running a script. Another possibility is this:

[bool]((whoami /all) -match "S-1-16-12288")
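Another common pre-V4 approach (a sketch; not necessarily what my original script used) leans on the .NET WindowsPrincipal class:

```powershell
# Build a principal for the current user and ask whether the
# token carries the built-in Administrator role
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal $identity
$principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)
# Returns $True only when the console was opened 'as administrator'
```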

In PowerShell V4 we were gifted with one of the best little additions, one that can really cut down on the amount of code needed to detect whether the script/function is being ‘run as an administrator’. That little #Requires gem is called RunAsAdministrator.

Function Set-Something {
<#
    .SYNOPSIS
#>
#Requires -RunAsAdministrator
    [cmdletbinding()]
    Param ($Item)

}


Now let’s try this when I am not running my console as an administrator and see what happens.


Perfect! With just one small line of text, we have made sure that this script can only be run by someone with administrator rights who is running it in a console opened using the ‘run as administrator’ context.

By utilizing these two small things, you can ensure that you are letting PowerShell do the work for you and saving time coding checks that are already available to you out of the box!


Winter Scripting Games 2014 Tip #1: Avoid the aliases

Having been a judge for the previous 2 Scripting Games competitions, as well as competing in the 2 before that, I have seen my share of submitted scripts that didn’t quite meet the cut of what I felt were the best scripts. It doesn’t mean they wouldn’t work in a real-world production environment (Ok, some wouldn’t!), but some were just really hard to read, and others did things I wouldn’t consider good practice.

I’m not judging this year and am instead taking on the role as a coach which gives me the great opportunity to provide input on a submission while the event is ongoing which also allows me to blog about what I am seeing to help everyone out. My goal over the course of the next few weeks is to provide some feedback based on the scripts that I have seen as well as bringing up some past things that have hindered some otherwise excellent scripts. Maybe you will agree with me, maybe you won’t. But if anything, it will make you think about what you might be writing and using in your environment.

I will start this little excursion by talking about the use of aliases in scripts. An alias is a shorthand way to run a command or use a parameter in a script/function. An example of this is here:

ls -di | ? {
    $_.LastWriteTime -gt (date).AddMonths(-24)
} | % {
    mv -pat $_.fullname -des C:\Temp -wh 
}

This is probably a little extreme, but I think you can appreciate what I am trying to point out, which is that aliases make it pretty hard to read the code (especially if you are just learning PowerShell) or if you are trying to read someone else’s code and make sense of the direction that they were going.

If you are just running code ad-hoc from the shell, then this is perfectly fine to do, as only you are concerned with what is being done and you have no plans on giving it to someone else (maybe you are, but then you might just say “run this and don’t ask questions!”).

So back to our little code snippet above. Perfectly fine for a console run, but in a script, this may present a headache for others. Lets clean this up so everything has been expanded out to be readable.

Get-ChildItem -Directory | Where-Object {
    $_.LastWriteTime -gt (Get-Date).AddMonths(-24)
} | ForEach-Object {
    Move-Item -Path $_.fullname -Destination C:\Temp -WhatIf 
}

Now I have a better idea of what is being done in this code snippet. Unless otherwise noted, it is not about using the fewest characters you can; it is about making the script do what you want while keeping it readable to whoever happens to look at it. This not only helps the next person understand what is going on, but also aids in troubleshooting if the script doesn’t work properly or more development is done on it.

Posted in powershell, Winter Scripting Games 2014 | Tagged , , , | 5 Comments