
Faster: ConfigMgr Collection Manipulation Speed Test


Recently at work, we had a task come up which saw us needing to move tens of thousands of devices between collections in CM. We decided to run some tests to find the fastest way! We compared:

  • The SCCM 1511 Era Collection Cmdlets
  • The newly released speedier Collection Cmdlets which shipped with Tech Preview 1803
  • Using Keith Garner’s super powerful CMPSLib Module
  • Query Based Membership
  • AD Group Query Membership
  • Direct SQL Membership Tampering ☠

I’d always kind of wondered myself, so it was a fun challenge to come up with some hard numbers.  And for the last item in the list…this is just for fun, I do not recommend using this in your production…or your testlab.  Or anywhere.

The test lab

All testing occurred in my VM Testlab, a Ryzen 7 1700 with 64 GB of RAM, with storage served on NVMe m.2 SSD drives.  A beastly machine (also hello to viewers from the year 2025 where we have 6TBs of storage on our phones and this is laughably quaint.  Here in 2018, we believed more RGB = more better, and we were happy, damn it!)

My ConfigMgr VM runs on Server 2016, 32 GB of RAM, SQL gets 16GB of that, and the SQL database and log files live on a separate NVMe drive for maximum performance.

The testing methodology

In this test, we’ll test two scenarios: adding 10,000 devices to a collection, and adding 30,000 devices to a collection.  In our experience we start to see collection slowdown at around 30K, and this amount isn’t so big as to exclude the majority of CM environments in the world.  Let me know if you think of something I forgot to test though!

We will resolve our input devices using DBATools Invoke-DBASqlQuery, with the following Syntax:

Function Get-CMDevice{
    param($CollectionID)

    Invoke-DbaSqlQuery "Select Distinct Name,ResourceID from dbo.v_FullCollectionMembership where CollectionID = '$CollectionID'" -SqlInstance SCCM -Database CM_F0x
}

I used this method because I found it more performant than the built-in command, and it gave me just the two columns I needed: Name and ResourceID.

We will add the devices (resetting the collection membership between tests) and measure the time it takes to complete the membership change command, with a refresh at the end of the process, then monitor CollEval.log for the following line items:

Results refreshed for collection F0X0001E, 30300 entries changed.
Notifying components that collection F0X0001E has changed.
PF: [Single Evaluator] successfully evaluated collection [F0X0001E] and used 2.875 seconds

Specifically the final line indicates that Collection Rules have finished processing and the devices will now be visible in CM.  Now let’s dive in!

The SCCM 1511 Era Collection Cmdlets

These cmdlets have something of a bad rap, I feel, for being slow.  Without digging into the code, I couldn’t tell you specifically how they’re written, but I’ve heard it described that when you use them to add multiple devices to a collection, rather than adding all of the rules and saving the changes once, they add each rule one at a time.

When I tried to directly add 10,000 rules at once, I ran into out-of-memory errors!


$d = Get-CMDevice -CollectionID SMS00001 | select -First 10000 -Property Name,ResourceID
Add-CMDeviceCollectionDirectMembershipRule -CollectionId "F0X00016" -ResourceId $d.ResourceID

>Add-CMDeviceCollectionDirectMembershipRule : One or more errors occurred.
At line:1 char:1
+ Add-CMDeviceCollectionDirectMembershipRule -CollectionId "F0X0001F" - ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Add-CMDeviceCol...tMembershipRule], AggregateException
+ FullyQualifiedErrorId : System.AggregateException,Microsoft.ConfigurationManagement.Cmdlets.Collections.Commands.AddDeviceCollectionDirec
tMembershipRuleCommand

This would continue no matter what I tried, until I found a stable number of devices to add at a time.  527 was the max I could ever add in one step, so for consistency’s sake, I added just 500 rules at a time.


$d = Get-CMDevice -CollectionID SMS00001 | select -First 10000 -Property Name,ResourceID
for ($i = 0; $i -lt $d.Count; $i += 500)
{
    "processing $i -- $($i+499)"
    Add-CMDeviceCollectionDirectMembershipRule -CollectionId "F0X00016" -ResourceId $d.ResourceID[$i..($i+499)]
}

The performance wasn’t great.  OK, it was bad.

10,000 devices took a leisurely 05:02 minutes to process, while 30,000 took a snooze-inducing 50:51 to process!  Nearly FIFTY-ONE minutes.  Clearly something slow is happening under the covers here.

Super Speed Collection Moves with CMPSLib

Keith Garner wrote his own set of PowerShell cmdlets to deal with collection rules, after we experienced some frustration with the options that ship in the box.  You can download them here, and to use them you pass a collection of devices with Name and ResourceID properties to the -System parameter.
$d = Get-CMDevice -CollectionID SMS00001 |
    Select -First 10000 -Property Name,ResourceID
Add-CMDeviceToCollection -CollectionID F0X0001E -System $d -Verbose 
The performance is awesome.
10,000 rules were applied in only 1:54, and CollEval processed the devices in practically no time at all:
This is a very nice improvement over the built-in cmdlets, and I was eager to see what happened with 30K rules.
It turns out that when we applied 30,000 rules, performance scaled linearly, taking 4:44 to create and apply the rules, with processing taking just a bit longer.

The total processing time for 30K devices is 4 minutes, 52 seconds using this method of adding direct rules.  By far the fastest!

1806 Cmdlets

The 1806 CM Cmdlets bring some nice features, and bug fixes.  On top of that, something has changed under the covers of the Collection Direct Membership Add cmdlet, giving us a HUGE speed improvement too! 

One caveat: the syntax has changed quite a bit, and you need to use the new cmdlets in a specific manner to ensure that you’ll experience the SUPER speed!

First, don’t batch your collection addition rules, like we did previously.  Or, if you do batch them, do it in batches of 10K.  Next, the parameters have changed.  If you use the cmdlet in stand-alone mode, like so
Add-CMDeviceCollectionDirectMembershipRule -CollectionID <SomeCollection> -ResourceID $arrayOfResourceIDs

You will end up with the previous cmdlet performance.  From what I can tell, it looks like there may be an internal branch in the logic and the old code is alive and kicking down there!  What, I told you it was weird! 

The sweet spot to get super speed is like so:

#Load devices to add to the collection (should be a full device from Get-CMDevice)
$devices = Get-CMDevice -CollectionID SMS00001 | select -first 30000 
Get-CMCollection -CollectionId F0x00025 | Add-CMDeviceCollectionDirectMembershipRule -Resource $devices 

Note that we are not using -ResourceID, and furthermore we must pipe an IResultObject#SMS_Collection object into Add-CMDeviceCollectionDirectMembershipRule to get it to work.  It’s wonky, it’s weird, and it’s always verbose too.

Like, it’s mega SUPER verbose (note that this is line thirty THOUSAND of the output)

But you’re allowed to be weird when you’re fast as hell!  This cmdlet is the Usain Bolt of CM Cmdlets.

Adding 10,000 device rules is five times faster than the old way, clocking in at 1:05!  And adding 30,000 rules took only  3:16!!

For comparison, the new cmdlet is a beast for big collection moves, as it completes the same operation in 6% of the time of the old cmdlet, a performance increase of 17 times!

Query Rules

I’m going to go on the record and say that I was wrong about Query rules.  When I asked on Twitter, you guys had some interesting feedback for me about my ideas of what to do with Query rules…
So, I decided to test for myself…and they were amazing!  My plan was to add a query rule containing a big IN WQL Statement with the resource IDs I wanted to include, like this:
select SMS_R_System.ResourceId,
    SMS_R_System.Name,
    SMS_R_System.SMSUniqueIdentifier,
    SMS_R_System.ResourceDomainORWorkgroup,
    SMS_R_System.Client
from SMS_R_System
where ResourceID in ('$IDArray')

and bundle them up in batches of 1K devices at a time.  Here’s the code I used, edited for your viewing pleasure.  You will need to make sure your own WQL query is on one line; SCCM doesn’t like a multi-line string:

$d = Get-CMDevice -CollectionID SMS00001 | select -First 10000 -Property Name,ResourceID
$d.Count
for ($i = 0; $i -lt $d.Count; $i += 1000)
{
    Write-Host "processing $i..$($i+999) ..."
    $IDArray = $d[$i..($i+999)]
    $IDArray = $IDArray.ResourceID -join "','"

    $query = "
    select SMS_R_System.ResourceId,
        SMS_R_System.Name,
        SMS_R_System.SMSUniqueIdentifier,
        SMS_R_System.ResourceDomainORWorkgroup,
        SMS_R_System.Client
    from SMS_R_System
    where ResourceID in ('$IDArray')
    "
    #Add the query rule to the collection
    Add-CMDeviceCollectionQueryMembershipRule -CollectionID F0X0001C -RuleName "AddRule$(($i+1000)/1000)" `
        -QueryExpression $query
    Write-Host -NoNewline "Done!"
}

At first I thought I had a typo in my code!

This is in real-time.

The speed…AMAZING! Only seven seconds to apply the rules!

CollEval fired up a few seconds later, and interestingly it does take a longer time to crunch the Query rules than it did the Direct Rules, but we’re talking 10K devices added to a collection in under 20 seconds.

At this point, I knew 30K would be equally fast.


Wow.  Only 26 seconds to apply the rules, and a total crunch time of 45 seconds to calculate membership.  Just one minute…let’s see what happens if we…

Let’s just add every device using a query rule

This was required of me at this point, all 115K machines in my testlab would be added with massive IN queries to really test performance.

A weird screen shot. The PowerShell line reflects the total time to run the command (2:18 for 115K rules), while the bottom half is the relevant lines from Collection Evaluation

Only 2 minutes, 18 seconds to apply the rules, and two minutes to run the query!  Incredible!   This is a huge improvement compared to adding devices with direct rules, in which case using CMPSLib took 1 hour, 15 minutes to add 115K rules.

Using AD Membership Queries

Using AD Group Membership queries is super fast, provided your AD replication is good and healthy.

If you begin to use AD Groups for membership in CM, keep in mind that if you make group changes at the periphery of your network, it will take some time to replicate from your remote site, to a global catalog, and then have to wait for CM to requery the Active Directory to see the change.
Unfortunately I don’t have a giant AD environment to play with Group Membership, but from what I’ve seen I would expect very good speed here too.  Sorry this section is lame.
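For reference, an AD group rule is still just a query rule on the SystemGroupName attribute under the covers.  A rough sketch of adding one from PowerShell would look something like this (the collection ID and group name below are placeholders, not from my lab):

#Add an AD security group as a query membership rule (illustrative values)
$groupQuery = 'select SMS_R_System.ResourceId, SMS_R_System.Name from SMS_R_System ' +
              'where SMS_R_System.SystemGroupName = "CONTOSO\\Win10 Migration Wave 1"'

Add-CMDeviceCollectionQueryMembershipRule -CollectionId 'F0X0002A' `
    -RuleName 'AD Group - Win10 Migration Wave 1' -QueryExpression $groupQuery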

Direct Membership Control using SQL

I’ve always kind of wondered what Collection Evaluation was doing under the covers.  We know that when you add rules to a collection, they’re processed in this order:

Image from Scott’s blog ‘Collection Evaluation Overview’

Which was covered in detail in this awesome blog post by Scott Breen [MSFT] titled ‘Collection Evaluation Overview’.  But what does CollEval do with this information?  Just keep it in memory?  Write it to a file?  E-mail it to DJam who is the furiously working Mechanical Turk inside the machine?  …Or does it store the information in the CM Database somewhere?

How it really works

In digging around under the covers, I spent a lot of time watching arcane log files and trying to make sense of strange views in SQL trying to uncover where certain info was stored.  I had to grant myself super admin rights, break all of the warranty labels and in the end, took the CM Database out to dinner and then dug around with my flashlight under the covers, looking for goodies.  And I found a totally unsupported method to directly manipulate collections with shocking speed.

How do I do it?

Well, a gentleman never tells. What I will share though is the impressive speed.  Using this method to directly control collection membership, I was able to place 30K devices in a collection in 0:00:01.  One second.

Code pixelated to protect you from yourself. Seriously, I’m not going to be the one arming you wild monkeys with razor bladed nunchuks

But at what cost?

Well, if we don’t actually add rules, but instead manipulate the collection by getting fresh with the database, we lose a lot.  We lose RBAC.  We don’t have include rules, we don’t have exclude rules; the collection membership is just whatever we say it should be.

Oh, and since we skipped CollEval, CollEval is going to have something to say about the weird ass stuff we’ve done to this poor innocent collection.  For instance, if we ever forget about the wonky, dark-magic joojoo we have performed on this poor collection and click ‘Update Membership’, CollEval will have its revenge.

CollEval Checked, found no rules, then deleted everyone

CM will helpfully look at the collection, look at its rules and say ‘WTF are you doing bro, are you drunk?’ and then delete everyone from the collection.  Not a member via a valid rule?  You’re not gonna stay in the collection.

I would not recommend using this approach. 

The speed of direct query rules is mindblowing enough, and the new CM Cmdlet aren’t far behind them, so we have plenty of performance options.  Seriously, don’t explore this route, if you do, the air conditioner will catch on fire with spiders coming out of it.

Don’t do this to your CM. And if you DO, don’t ask MSFT for support.

In Conclusion

So, to summarize our data in a chart

Basically any method is much, much better than the Old CM Cmdlets!

Basically anything is faster than using the old Cmdlets

If you’re considering your options outside of the old cmdlets, I’d recommend giving CMPSLib a try.  Lovingly written by Keith Garner, with help from yours truly, we believe this is a very resilient method of adding devices to a collection, without the somewhat odd syntax of the new Add-CMDeviceCollectionDirectMembershipRule cmdlet.

Want the new ones?  It’s easy, just download the media for the tech preview, and use it to Install the CM Console on your machine.  The CM Console will give you the new cmdlets and they’ll work on an old environment, super easy!  Just be mindful of the syntax!

Of course, for true performance, if you’re looking to manage your collections from outside of CM, I would only recommend maintaining membership using query rules; it’s just too fast not to mention.

Let me know if I missed anything!


Quickie – Join video files with PowerShell and FFMPEG


Caption Text says 'Join Video Files quickly, gluing stuff with PowerShell and ffMpeg', overlaid on an arts and craft scene of glues, papers, scissors and various harvest herbs

While I’m working on some longer posts, I thought I’d share a quick snippet I came up with this weekend as I was backing up a number of old DVDs of family movies.

FFMPeg has the awesome ability to join a number of video files together for you, but the syntax can be kind of strange.  Once I learned the syntax, I sought to make sure I never had to do it again, and created this cmdlet.

Usage notes

In this basic version, it will join every file in a directory, giving you Output.mkv.  Be sure the files in the directory are named sequentially as well, to control their order.

Ensure that FFMpeg’s binaries are available in your Path variable as well.
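Since the embedded gist may not render here, a minimal sketch of the idea (not necessarily the exact cmdlet from the post) looks roughly like this; it assumes ffmpeg is in your Path, per the note above:

#Join every video file in a folder into Output.mkv using ffmpeg's concat demuxer
Function Join-VideoFile {
    param(
        [string]$Path = (Get-Location).Path,
        [string]$OutputName = 'Output.mkv'
    )

    #The concat demuxer wants a text file of lines like:  file 'C:\videos\clip01.mkv'
    $listFile = Join-Path $Path 'concat.txt'
    Get-ChildItem -Path $Path -File |
        Where-Object { $_.Extension -match '\.(mkv|mp4|avi|mpg|vob)$' } |
        Sort-Object Name |
        ForEach-Object { "file '$($_.FullName)'" } |
        Set-Content -Path $listFile -Encoding ASCII

    #-c copy stitches the streams together without re-encoding
    ffmpeg -f concat -safe 0 -i $listFile -c copy (Join-Path $Path $OutputName)

    Remove-Item $listFile
}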

Later on, I may add the ability to provide which specific files you want to join, if desired 🙂

Enjoy 🙂

 

Life after Write-Debug


Hey y’all.  I’ve been getting verrrry deep into the world of Asp.net Model View Controller and working on some big updates to ClientFaux, but I saw this tweet and it spoke to me:

Why?  Because until recently, I was notorious for leaving Write-Debug statements everywhere.  I mean, just take a look at my local git folder.

A PowerShell console window running the following command. Dir c:\git -recurse | select-string 'write-debug' | measure This shows that there are over 150 uses of this command in my PowerShell modules. Uh, probably too many!
I *wasn’t* expecting it to be *this* bad. I’m so, so sorry.

My code was just littered with these after practically every logical operation…just in case I needed to pause my code there at some point in the future.  In fact, looking back at my old code, every Verbose or Debug statement marks a place where I got stuck while writing that cmdlet or script.  Using these tools isn’t wrong, but it always felt like there should be a better way.

Recently, I have learned of a much better way and I want to share it with everybody.

Why not use Write-Debug?

Write-Debug is wrong and if you use it you should feel bad

I’m just kidding!  You know, to be honest, something really gets under my skin about those super preachy posts like you always find on medium that say things like ‘You’re using strings wrong’, or “You’re all morons for not using WINS” or something snarky like that.

It’s like, I might have agreed with them or found the info useful, but the delivery is so irksome that I am forced to wage war against them by means of a passive aggressive campaign of refusing to like their Tweets any more as my retribution.

That being said, here’s why I think we should avoid Write-Debug.  It ain’t wrong, but you might like the alternative better.

Pester will annoy you

If you’re using Pester, you might like to use -CodeCoverage to help you identify which logical units of your code may not have test coverage.  Well, Pester will view each use of Write-Debug as a separate command and will prompt you in your code coverage reports to write a test for each.  A relatively simple function like this one:


Function My-ShoddyFactorialFunction {
    param($baseNumber)

    Write-Debug "Starting with base number of $baseNumber"

    $temp = $baseNumber

    ForEach($i in ($baseNumber-1)..1){
        Write-Debug "multiplying $temp by $($i)"
        $temp = $temp * $i
    }

    Write-Debug "Finished with a result of $temp"

    return $temp
}

When this short script is run through CodeCoverage, Pester will call out each Write-Debug as a separate entity that needs to be tested.  We both know that there’s no reason to write a Pester test for something like this, but if you work with sticklers for pristine CodeCoverage reports, then you’ll have to look out for this.
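For reference, the coverage report in question comes from an invocation along these lines (the file names here are placeholders):

#Pester v4-style run that produces the code coverage report described above
Invoke-Pester -Script .\MyShoddyFactorial.Tests.ps1 -CodeCoverage .\My-ShoddyFactorialFunction.ps1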

Not guaranteed to be present on every PowerShell host

Did you know that not every PowerShell host supports Write-Debug?  Since it is an interactive cmdlet, consoles that operate headlessly don’t support it.  This means that Azure Automation for one does not support the cmdlet, so it will basically be ignored, at best.

As developers of PowerShell scripts and tools, we’re accustomed to having the fully fledged PowerShell console available to us, but our code may not always execute in the same type of environment.

For instance, once I was working on a project for a customer with very long PowerShell Run Script steps embedded into System Center Orchestrator.  I wrote some functions for them, one of which involved creating and deleting ServiceNow Tickets.

I was very big at that time on creating ‘Full and Proper’ advanced cmdlets and “Doing it the right way™”, so I went totally overboard with $ConfirmImpact and $PSCmdlet.ShouldProcess() usage.  The code worked great in my local IDE, so we deployed it to production and our runbooks started failing.

Why?  Well the host in which Orchestrator runs PowerShell Scripts runs headless, and when it tried to run my cmdlets, it threw this error.

Exception calling "Invoke" with "0" argument(s): 
"A command that prompts the user failed because the host program or the 
command type does not support user interaction. The host was attempting to request confirmation with the following 
message : some error123

This lesson taught me that I shouldn’t count on all input streams and forms of user interaction being available to my code.

Not a great user experience anyway

Back to our first function, imagine if I wanted to debug the value of the output right before we exited.  To do so, look at how many times I have to hit ‘Continue’!

This sucks.  And it really sucks when you’re doing code reviews.

Write-Debug makes Peer Reviews super suck

If you’re fortunate enough to work on a team of Powershell slingers, you almost definitely have (and if you don’t, start on Tuesday!) a repository to check in and review code.

And if you’re doing this the right way, no one has access to push untested code until it goes through review in the form of a pull request.

So what happens when you need to test ‘why’ something happens in your coworker’s code?  If you were me, you would have to litter your colleague’s (hopefully) clean code with tons of debug statements.  These you have to remember to roll back, or you get annoying messages from git when you try to change branches again.

I was changing my peer’s code while reviewing it.  It was bad and I feel bad.

So what should I do instead?

It turns out that there has been an answer to this problem hiding in my consoles and editors for years, and I’ve mentally ignored it this whole time.

If you’ve never used a breakpoint before, prepare to be amazed!  Whether you use the ISE, Visual Studio, or VS Code, breakpoints are a great tool that let you set an ephemeral debug point without editing the original file!

Breakpoints allow for ephemeral debugging without editing the original file!

They essentially function just the same as a Write-Debug statement, but you can add and remove them without editing the original code, and are deeply integrated into our favorite editors to unlock all kinds of goodness.

How to use them

If you’re in the PowerShell ISE (obligatory WHAT YEAR IS THIS.png) , simply highlight a line on which you’d love to pause your code, then hit F9. Then run the code and PowerShell will automatically stop in a debug command line.

Hit ‘F9’ to set the breakpoint, then run the code.

The code will execute like it normally would until it reaches the breakpoint line at which point…

You get a Write-Debug style command prompt but never had to change the source code!

The same goes for Visual Studio Code, which is even better, as it includes a point-in-time listing of all variable values as well!

Depicts the Visual Studio Code application, paused at a breakpoint in a PowerShell script. The UI is broken into two columns, with the script on the left hand column with a command prompt beneath it. On the right column, there is a list of all variables and their current values.

It doesn’t stop here!  You can also hover over variables to see their value in real time! 

This was a huge game changer for me, as I used to type the names of variables over and over and over into the shell to see their current values.  Now, I just hover, like you see below.  Note the little boxes which appear over the cursor!

Shows a paused VS Code instance, where my cursor is moving above various variables, above which their current values are revealed! Awesome!

But the awesomeness doesn’t stop there!

When you’re paused at a breakpoint, you can also proceed through your code line by line.  The same keys work in either VS Code, VS or ISE.

Key         Function
F5          Continue running when paused
F9          Set a breakpoint on this line
F10         Step Over – run this line and stop
F11         Step Into – go INTO the functions called on this line
Shift+F11   Step Out – move your paused breakpoint out to the calling function

These commands will change your debugging life.
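As an aside (not covered further in this post), the editors are building on PowerShell’s own debugger, so the same idea is available in a bare console via Set-PSBreakpoint; the script path and line number below are just placeholders:

#Set a line breakpoint in a script without editing it
Set-PSBreakpoint -Script .\Invoke-Something.ps1 -Line 12

#Or break whenever a particular variable is written, anywhere in the session
Set-PSBreakpoint -Variable temp -Mode Write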

In the demo below, I show how Step-Over works, which runs the current line but doesn’t jump into the definition of any functions within it, like Step-Into does.

Now, let’s go back to our initial example and set a breakpoint to test the value on that last line.

See how easy that was?  This is why I believe that once you learn of the power of ultra instinct–er, once you learn about Breakpoints, you’ll simply never need Write-Debug again!

Security camera footage of me using Breakpoints for the first time

Still confused about the difference between Step Over, Step Into and Step Out?  I don’t blame you, checkout this great answer from StackOverflow which does a good job shining light on the distinction.


Quickie: ConvertTo-PSCustomObject


Do you ever need to quickly hop between PowerShell tabs in VScode, or have data you want to move from one session to another?

Sure, you could output your data into a .CSV file, a .JSon file, or one of hundreds of other options.  But sometimes it’s nice to just paste right into a new window and get up and running again.  For that, I wrote this small little cmdlet.

Function ConvertTo-PSCustomObject{
    Param($InputObject)
    $out = "[PSCustomObject]@{`n"

    #Grab the object's properties (NoteProperty too, so objects from Import-Csv/ConvertFrom-Json work)
    $Properties = $InputObject | Get-Member | Where MemberType -in 'Property','NoteProperty'
    ForEach ($prop in $Properties){
        $name = $prop.Name
        if ([String]::IsNullOrEmpty($InputObject.$name)){
            $value = $null
        }
        else {
            $value = $InputObject.$name
        }

        #Emit each property as a quoted Name = 'Value' line
        $out += "`t$name = '$value'`n"
    }

    $out += "}"
    $out
}

And the usage of it:

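The original post demonstrates this with an animated gif; a rough text equivalent (the process object and output below are illustrative) looks like this:

#Grab an object in one session...
Get-Process -Id $PID | Select-Object Name, Id, WorkingSet | ConvertTo-PSCustomObject

#...which emits a paste-able literal, roughly:
# [PSCustomObject]@{
#     Id = '12345'
#     Name = 'pwsh'
#     WorkingSet = '123456789'
# }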

ClientFaux 2.0 – Completely re-written, faster than ever


As mentioned on the stage at MMSMOA, ClientFaux 2.0 is now available.  Completely re-written as a WPF GUI with automated certificate generation, multi-threading, and all the bells and whistles.

Oh, and Hardware inventory now works!

Download it and give it a try now!  To use, install it on a desktop/laptop/VM which is on a network segment which can reach your CM server.

http://bit.ly/ClientFaux

Launch ClientFaux and click to the Configure CM tab and provide your CM Server FQDN and three letter Site code.

Then click to the Device Naming page and provide your desired naming pattern and starting and ending numbers.

You can also increase the number of threads (I’ve tested up to 12 threads and seven is a good happy medium for resource usage, but feel free to go crazy).

Then to see it in action…click to the ‘Ready’ page and hit ‘Ready!’ and away we go!

 

The Big Warning

This is designed for DEMO or TestLab CM instances.  I do not recommend running it against your Production CM instance as it can create thousands and thousands of CM clients if left running for a few hours!  This can be hard to filter out of data for reporting, dashboards and the like.


PowerShell – Testing endpoints that perform Anti-forgery verification


First off, big thanks go to 🐦Ryan Ephgrave, an incredibly talented and easy-to-work-with PowerShell and dotnet god whom I have the pleasure to learn from over at #BigBank™ (it’s a great thing LinkedIn doesn’t exist…)

We had a situation arise recently where we needed to create some Integration tests in Pester to validate a long list of web pages to be sure they responded after a deployment.  I started out manually writing a litany of Pester tests by hand like this:


Context 'Post Deployment Validation' {
    It 'Website #1 should be accessible' {
        $url = 'https://someserver:someport/someEndpoint'
        $results = Invoke-WebRequest -Uri $url -UseDefaultCredentials
        $results.StatusCode | should be 200
    }

    It 'Website #2 should be accessible' {
        $url = 'https://someOtherserver:someport/someEndpoint'
        $results = Invoke-WebRequest -Uri $url -UseDefaultCredentials
        $results.StatusCode | should be 200
    }[...]
}

I spoke with the team about what I was doing and Ryan drew my attention to the very neat TestCases feature of Pester, which you can read more about here.

With a bit of work, I converted my long list of tests (which I typed by hand…why?  Because I finally got a PS4 and I stayed up too late playing Sekiro!) into a JSON file like this.

[
    {
        "SiteName" : "Our Home Page",
        "Url" : "https://someserver:someport/someEndpoint"        
    },
    {
        "SiteName" : "Our WebApp #1",
        "Url" : "https://someOtherserver:someport/someEndpoint"        
    }
]

Then to hook this up to our Pester test from before and…

Context 'Post Deployment Validation' {
    $EndPointList = Get-Content $PSScriptRoot\Endpointlist.json | ConvertFrom-Json
    $paramArray = @()
    ForEach($instance in $EndPointList){
        $paramArray+= @{
            'SiteName' = $instance.SiteName
            'URL' = $instance.URL
        }
    }

    It '<SiteName> should be accessible' -TestCases $paramArray {
        Param(
            [string]$SiteName,
            [string]$URL
        )

        $results = Invoke-WebRequest -Uri $url -UseDefaultCredentials
        $results.StatusCode | should be 200
    }
}

Then we run it to see…

But what about the post title?

You guys, always sticklers for details.  So this covered a lot of our use cases but didn’t cover an important one, that of making sure that one of our internal apps worked after a new deployment. It was a generic MVC app where an authorized user could enter some information and click a button to perform some automation after an approval process.

The issue was that, as you can imagine, security is a concern, so time had been spent hardening these tools against attacks like Cross-Site Request Forgery.  Which is great and all, but it made automated testing a pain, namely because any attempt I made to submit a test request resulted in one of the following errors:

The required anti-forgery form field __RequestVerificationToken is not present.

The required anti-forgery cookie __RequestVerificationToken is not present

So what’s a dev to do?  Send a PR to disable security features?  Create some new super group who isn’t subject to the normal processes, just used for testing?

Of course not!

How MVC Antiforgery Tokens work

Any good production app is going to first and foremost use AspNet.Identity and some kind of user authorization system to ensure that only approved users have permission to use these tools.  If you don’t, anyone who can route to the web app can use it.  This is bad.

So let’s assume we’ve done our diligence and we have our web app.  A user has permission to the app and they’re following safe browsing behavior.

Let’s imagine the app is a simple user management app, something like this, which has a simple class of Users, perhaps with a field to track if they have admin rights or not.

class FoxDeployUser
{
    public String UserName {get;set;}
    public String SamAccountName {get;set;}
    public bool IsManager {get;set;}
}

Now imagine if your user account has administrative rights to make changes to this system. If so, your account could easily navigate to a Users/Edit endpoint, where you’d be prompted with a simple form like this to make changes to a user account.

The scary thing: if the account we are using for this portal is always permitted, and doesn’t have a log-in process, then any site we visit while browsing the web could make a change to this portal.

Here’s how it would work: assume I want to make a change to this user.  I load up the /Users/Stephen endpoint, type in my values and hit Save, right?  What happens in the background (and which we can see in Chrome Dev Tools) is that a form submission is completed.

It simply POSTs back to the web server the contents of a form.  And you know what else?  Any website you visit can contain JavaScript that performs the exact same kind of AJAX POST to the web server.  There are even JavaScript utilities that will automatically discover web servers on your network.  So with this in mind, imagine visiting a webpage that looks pretty innocuous:

Black mode = evil website

Clicking the Post button there will send an AJAX Post formatted like the following:

$("button").click(function (e) {
            e.preventDefault();
            $.ajax({
                type: "POST",
                url: "https://MyInternalApp:44352/Users/Edit/1",
                data: {
                    UserID: 1,
                    UserName: "Stephen",
                    SamAccountName: "AD-ENT\\Stephen",
                    IsManager: "true"
                },
                success: function (result) {
                    alert('ok');
                },
                error: function (result) {
                    alert('error');
                }
            });
        });

So this is an attack from one-website, through the user’s PC, to another website they have access to!

Will it work? If it does, I’ll click the button from one site and we’ll see the user’s ‘IsManager’ property change in the other site.

Wow that’s terrifying

Yep, I thought so too.  Fortunately for all of us, there are a lot of ways to mitigate this attack, and most MVC frameworks (citation needed) ship with them out of the box.  In ASP.NET MVC you signal that an endpoint should be protected against a CSRF attack by adding this annotation to the method.

// POST: Users/Edit/5
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Edit(int id, [Bind("UserID,UserName,SamAccountName,IsManager")] User user)
{

This adds a novel little hidden form box to the UI which contains a one-time use token, embedded in both the form and the cookies.

Here’s an example of the normally hidden element, which I’ve revealed using Chrome Dev tools.

Now if I attempt to submit this form, I’ll encounter an error, since my attack won’t be able to retrieve the form as the user, get the cookies, and then repost back to the endpoint.  Since my post won’t have the one-time code needed to do this, it will be rejected at the Controller level.

Testing an endpoint which has CSRF Protection

Now, to the meat of the issue.  As part of my Test Suite, I need to run a post through this endpoint and validate that the service after an update is able to perform this business function.

I can do this by maintaining a PowerShell WebSession to get the matching cookies and then submit them using Invoke-RestMethod.

Describe 'WebApp Testing' {
$Request = Invoke-WebRequest -Uri https://someserver:someport/Users -SessionVariable Session -UseBasicParsing -UseDefaultCredentials
$TokenValue = ''
ForEach($field in $Request.InputFields){
    if ($field.Name -eq '__RequestVerificationToken'){
        $TokenValue = $field.value
    }
}

$header = @{
    '__RequestVerificationToken' = $TokenValue
}

$fields = @{
    '__RequestVerificationToken' = $TokenValue
    'UserName' = 'TestUser'
    'SamAccountName' = 'QA\TestUser'
    'IsManager' = $false
}

It 'WebApp1 : Should edit a User' {
    $Response = Invoke-WebRequest -Uri https://someserver:someport/Users -WebSession $Session `
        -Method Post -UseBasicParsing -UseDefaultCredentials -Body $fields -Headers $header
    $Response.StatusCode | should be 200
}

It 'WebApp1 : Should throw when the user has no token' {
    {Invoke-WebRequest -Uri https://someserver:someport/Users `
        -Method Post -UseBasicParsing -UseDefaultCredentials -Body $fields -Headers $header } | should throw    
}
}

My first integration tests.  I’m so proud.  And I’m also kind of ashamed too, because up to this point I’d been manually loading two dozen web pages and making requests by hand to validate deployments.

Thanks for reading!


Progressive Automation: Part I


Progressive automation - real world automation in increasing complexity

In this series, I thought it’d be fun to walk through the common phases of an automation initiative and specifically show how I love to handle this sort of situation when it arises today.

We’ll walk through recognizing a good opportunity to move a manual task to automation covering these three main steps, over the next few posts here:

  • Begin with something terrible and manual and ease the pain by adding a simple script
  • Increase the sophistication and take it to the next level by adding a User Interface
  • Migrate our Automation from a PowerShell UI to a simple and easy asp.net portal which calls a script to run the task

Depending on the amount of steam I have left, we may even go one step further and make our dotnet site more advanced, if you all are interested ☺

Our goal is to go from ‘hey it actually worked’ to ‘It works pretty well now’, to ‘hey it actually still works!’

Tell me where it hurts

You should always start your automation by finding the biggest pain points or wastes of time and starting there.  Ideal cases are things that:

  • Require your specific manual intervention (+3 points)
  • Have to happen in an off hour or over the weekend (+5 points)
  • Are hard to do, or repetitive  (+5 points)
  • Have a nasty penalty if you get them wrong (+5 points)

Add them up and if you’re over 10 then you should think about automating it. Hell, if it’s over 6, you should automate it.

A printable checklist of the points from the 'when to automate' list above
Surely Stephen didn’t really spend three hours on this thing. Or make a ‘chillwave’ version of it for basically no reason!

😎🌴Alternate Super Synth Wave Version also available🌴😎

Background for the task at hand

A sysadmin and engineer friend of mine posed an interesting question at MMSMOA this year (easily the best conference I’ve been to in a long time, I’d go if you have the chance!)

He has a domain migration taking place at a client and they needed to put just the devices that were migrating that week into a collection which would have the appropriate Task Sequences and software available for it.  The penalty for missing this?  Machines not getting upgraded (+5 points)

When the primary sccm client is installed on the machine in the acquisition domain, he needed the machines to go into a collection in the primary sccm environment. That collection would have the cross-domain migration TS advertised to it as required.

His process for this had been to have some technicians deploy the client out to the target devices and then they’d e-mail him the computer names, and he would have to go edit the Collection, adding those new devices to it. Other folks couldn’t do it because they weren’t familiar with CM, so it had to be him too!  (Requires his attention?  +3 points) He ended up having to very closely watch his email during migration weekends… Working over the weekend?  (+5 points)

People, we are at a thirteen here, this of course is totally unacceptable. Get stuff from an email? Do things manually? No no we had to engineer a fix (and this kind of thing is why MMS is awesome, we had a few wines, enjoying the music and atmosphere of the Welcome Reception and whiteboarded out a solution)

Solving this problem with automation

If the technicians were trained in CM, they could have simply set the devices as collection members and called it a day. But there was no time or budget to train 5-10 people in CM. So we had to think of an alternative.

We couldn’t just add all devices ahead of time to a collection because their device name would change in the process, and furthermore we didn’t want to disturb users during the day with the migration and the tons of apps they would be getting. So we then thought about using the device’s BIOS Serial Number (GUID) which would stay the same even after a reimage (since he wanted the devices to stay in the collection as well).

But the devices who would get migrated could fluctuate even up to the hours before a migration, when my friend was already out of the office.  Furthermore, for reporting purposes, they wanted to ‘babysit’ the recently migrated devices for a few days to ensure they received all software, so we couldn’t just put everybody there.

But we were getting close to a solution.

  • Line of business admins would know who were definitely going to migrate towards the end of day on Friday
  • Those Users would leave their devices in the office to be updated over the weekend
  • Inventory data from their devices in the old CM environment would be available and the user’s computer names would be known and confirmed
  • Devices would be manually F12’ed and PXE boot into the USMT Migration Task Sequence for the new CM Environment and domain
  • If their devices could only somehow end up in the ‘Migrated Devices’ collection in the new CM, we would be set, because the Required apps in that collection would have all of the apps those users would need

The Process we came up with

There are probably a number of different and better ways this could be handled (I was thinking of something clever using the time since the devices were added to the new CM instance as a Query Rule for the collection, but didn’t vet it out), but we hashed out retrieving the BIOS Serial Number and using that as a query rule for the Collection.

We came up with a simple scheduled task that would run a script. It ain’t amazing but it’s enough to get though this need and we can then use the bought time to make something a bit nicer too.

The script will :

  • Look for devices which have been added to a CSV file by the LOB guys
    • If nothing there, exit
  • Compare them and see if any of them are not already in our Processed list ( a separate CSV file we will control which they cannot alter)
    • If all devices have been processed, exit
  • Hit the old CM server via SQL and retrieve the needed GUID info from V_R_System
  • Add new collection rules for each item found, trigger a collection refresh
  • Add records to processed list and then exit

Or, in flowchart form, complete with a little XKCD guy.

pictured is a flow diagram which repeats the bullet points of this process
Guess how long it took to make this flowchart? Now multiply it by four. Now we’re in the neighborhood of how long it took

Since we will only ever add a device once to the new collection, we could safely set this to run on a pretty aggressive schedule, maybe once every 15 minutes or so.  If the new CM were really under a lot of load, of course this could be altered greatly.
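As a sketch, wiring that up as a scheduled task might look something like this (the script path and task name are placeholders):

#Run the migration script every 15 minutes as SYSTEM
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Automation\Add-MigratedDevices.ps1'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Minutes 15)
Register-ScheduledTask -TaskName 'CM - Add Migrated Devices' -Action $action -Trigger $trigger -User 'SYSTEM'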

And now, let’s code

OK, enough theory and planning (although this is kind of my favorite part about having been an automation consultant, and now my current role).

To begin with, users have their own spreadsheet they update like this, it’s a simple CSV format.

HostName	Processed	ProcessedDate
SomePC123
SomePC234
SomePC345

They are free to add new hostnames whenever they like.   Their files live on a network drive which the central automation server can access.  The script is pretty self-explanatory for the first half, standard checking to see if the file shares are there, then checking the files themselves to see if we have any rows which we haven’t marked as processed yet.

$Date = Get-date
$LOBDrives = "\\someDC\fileShare\ITServices\LOB_01\migrationfile.csv",
             "\\someDC\fileShare\ITServices\LOB_02\migrationfile.csv",
             "\\someDC\fileShare\ITServices\LOB_03\migrationfile.csv"
$masterLog = "\\someDC\fileShare\ITServices\master\migrationfile_reference.csv"
$ValidLobFiles = @()
$RecentlyMigratedCollectionName = "w10_Fall_Merger_RecentlyMigratedDevices"

Write-Verbose "Online at $($date)"
Write-Verbose "Looking for new items to process"
Write-Verbose "Found $($LOBDrives.count) paths for processing"

If (Test-path $masterLog){
	Write-Verbose "Found master file for reference"
	$ProcessedLog = import-csv $masterLog -Delimiter "`t"
}
else{
	Throw "Master file missing!!!"
}
ForEach($LOBFile in $LOBDrives){
	If (Test-Path $LOBFile){
		Write-Verbose "Found $($LOBFile)"
		$ValidLobFiles += $LOBFile
	}
	else{
		Write-warning "Could not resolve $($LOBFile) for processing"
	}
}

$itemsToProcess = New-Object System.Collections.ArrayList
ForEach($validLObFile in $ValidLobFiles){
	$fileCSV = Import-CSV $ValidLobFile -Delimiter "`t"
	ForEach($item in $fileCSV){
		If ($item.Processed -ne $true){
			If($ProcessedLog.hostname -notContains $item.HostName){
				[void]$itemsToProcess.Add($item)
			}
			else {
				Write-warning "$($item.HostName) was already processed, ignoring"
			}

		}
	}
}

Write-Verbose "Found $($itemsToProcess.Count) items to process"

This was all pretty boilerplate, but it’s about to get more interesting.  Next up, we have a short custom PowerShell cmdlet which uses a custom SQL cmdlet that one of my peers–the venerable and always interesting Fred Bainbridge–published for lightweight SQL queries.

Function Get-FoxCMBiosInfo {
	param([string[]]$ComputerNames)

	$items = New-Object System.Collections.ArrayList
	ForEach($computerName in ($ComputerNames.Split("`n").Split())){
		If ($computerName.Length -ge 3){
			[void]$items.Add($computerName.Trim())
		}
	}

	$inStatement = "('$($items -Join "','")')"

	$query = "
	select vSystem.Name0,vSystem.ResourceID,BIOS.Caption0,Bios.SerialNumber0
		from v_r_system as vSystem
		join dbo.v_GS_PC_BIOS as BIOS on BIOS.ResourceID = vSystem.ResourceID

	where vSystem.Name0 in $inStatement"

	Invoke-mmsSQLCommand $query
}

It returns objects like this.

Name0	ResourceID	SerialNumber0                     Caption0
SCCM	16777219	4210-1978-6105-2643-9803-3385-35  NULL
DC2016	16777220	7318-9742-4948-8961-3362-1212-32  NULL
W10-VM	16901071	6145-4101-5130-6042-4046-8711-91  NULL
SomeFox	16901086	7318-9742-4948-8961-3362-1212-32  Hyper-V UEFI Release v3.0	

This lets me then run the rest of the script, stepping through each item we need to process and adding Query Rules for the BIOS Serial Number to CM in the new environment.

#Look up SQL values
$BIOSValues = Get-FoxCMBiosInfo $itemsToProcess.Name

#Add new direct rules

$Collection = Get-CMDeviceCollection -Name $RecentlyMigratedCollectionName

ForEach($item in $BIOSValues){
  Add-CMDeviceCollectionQueryMembershipRule -CollectionName $RecentlyMigratedCollectionName `
    -QueryExpression "select SMS_R_System.ResourceId, SMS_R_System.ResourceType, SMS_R_System.Name, SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Client from SMS_R_System inner join SMS_G_System_PC_BIOS on SMS_G_System_PC_BIOS.ResourceId = SMS_R_System.ResourceId where SMS_G_System_PC_BIOS.SerialNumber = '$($item.SerialNumber0)'" `
    -RuleName "$($item.Name0)"
} 

#loop back through original files and mark all as processed 

ForEach($LOBFile in $LOBDrives){
	If (Test-Path $LOBFile){
		Write-Verbose "Found $($LOBFile)"
		$newCSV = @()
		$fileCSV = Import-CSV $LOBFile -Delimiter "`t"
		forEach($line in $fileCSV){
			if ($itemsToProcess.HostName -contains $line.HostName){
				$line.Processed	= $true
				$line.ProcessedDate = get-date
			}

			$newCSV += $line
		}
		$newCSV | export-csv -Path $LOBFile -Delimiter "`t" -NoTypeInformation
	}
	else{
		Write-warning "Could not resolve $($LOBFile) for processing"
	}
}

#update master file
$itemsToProcess | ConvertTo-Csv -Delimiter "`t" -NoTypeInformation | select-object -skip 1 | add-content $masterLog

Finally, we update the migration files for each LOB, as well as our central master record and then sleep until the next hour comes along.

Why don’t you use the Pipe anywhere?

We found at work that there are various performance penalties which can add up when performing complex operations in the PowerShell pipeline.  For that reason, we still use the pipeline for one-off automation tasks, but in scripts it is just much easier to debug, test, and support ForEach loops instead.
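To make that concrete, here’s a trivial (and admittedly unscientific) comparison you can run yourself; your numbers will vary:

#Pipeline version
Measure-Command { 1..100000 | ForEach-Object { $_ * 2 } } | Select-Object TotalMilliseconds

#Language foreach version - typically noticeably faster for tight loops like this
Measure-Command { foreach ($i in 1..100000) { $i * 2 } } | Select-Object TotalMilliseconds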

Next time

So that takes us from the task, through ideation, through a pretty good working solution to handle this terrible task.

Join us in phase two where we make this more sophisticated with a UI, and then phase three where we move the whole process to a centralized web UI instead.  Have some other ideas?  Drop me a line on twitter or reddit and we’ll see if we can work it into a future post.

YouTube Video Metadata Scraping with PowerShell


Trigger Warning : I discuss eating disorders and my opinions pro-eating disorder media briefly in this post. If this content is difficult for some, I recommend scrolling past The Background and resuming at The Project instead.

Background

I ❤ YouTube. I have learned so much about development from folks like I am Tim Curry, or from the amazing Microsoft Virtual Academy courses from Jeffrey Snover and Jason Helmick (original link ).

Most days I catch the repeats from Stephen Colbert, and then jam out to synthwave or chillhop. In fact, I listened to one particular mix so many times while learning C# that I still get flashbacks when I hear the songs on it again…sleepless nights trying to uncover everything I didn’t know. I even have my own Intro to PowerShell video that I think my mom watched 70,000 times.

My kids grew up singing songs from Dave and Eva, Little Baby Bum, Super Simple Songs and now Rachel and the TreeSchoolers, and it was one of the first services I signed up for and still pay for today (aside from NetFlix, and that one stint where I got CDs through the mail, yeah…)

But a few months ago I heard that YouTube will recommend videos which are pro eating-restriction and bulimia within four videos of the sorts of content targeted at young children. I have a history with people who experience these disorders and want to be sure we face it head on in my family, but that doesn’t mean I will allow impressionable minds to be exposed to content which presents this issue in a positive light.

If YouTube is not going to be safe for the type of stuff my children want to watch, I needed to know.  Unfortunately the person who told me of this cannot remember their source, nor could I find any decent articles on the topic, but I thought that this smelled like a project in the making.

 The Project

I wanted to see which sorts of videos YouTube will recommend as a user continues to watch videos on their site. I started with two sets of videos, one for girls fashion and the other for weight loss information.

Fashion 1, Fashion 2, Fashion 3

Weight 1, Weight 2, Weight 3

For each video, we would get the video details, its tags, its thumbnail and then also the next five related videos.  We’d continue until we hit 250 videos.

 Getting set up

Setting up a YouTube API account is very simple. You can sign up here. Notice how there is no credit card link? Interestingly, from what I could tell, there is no cost to working with the YouTube API. But that is not to say that it’s unlimited. YouTube uses a quota-based program where you have 10,000 units of quota to spend per day on the site. Sounds like a lot, but it is really not when doing research.

Operation                               Cost   Description
v3/videos?part=snippet,contentDetails   5      retrieves info on the video, the creator, and also the tags and the description
v3/Search                               100    retrieves 99 related videos
SaveThumbnail                           0      retrieves the thumbnail of a video given the videoID

I hit my quota cap within moments and so had to run my data gathering over the course of a few days.

As for the thumbnail, I couldn’t find a supported method of downloading this using the API, but I did find this post on StackOverflow which got me started.

The Functions

Once I wrote these functions, I was ready to go:

Connect-PSYouTubeAccount is just another credential storage system using SecureString.  Be warned that other administrators on the device where you use this cmdlet could retrieve credentials stored as a SecureString.  If you’re curious for more info, read up on the DPAPI here, or here, or ask JeffTheScripter, as he is very knowledgeable on the topic.  FWIW this approach stores the key in memory as a SecureString, then converts it to string data only when needed to make the web call.
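The actual functions live in the PSYouTubeScrapes repo; as a hedged sketch of the SecureString pattern described above (the function and variable names here are illustrative, not the real module’s API):

Function Connect-PSYouTubeAccount {
    param([Parameter(Mandatory)][SecureString]$ApiKey)
    #Hold the key in memory only as a SecureString
    $script:YouTubeKey = $ApiKey
}

Function Get-PSYouTubeVideo {
    param([Parameter(Mandatory)][string]$VideoID)

    #Convert to plain text only at call time, then hand it straight to the web call
    $bstr = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($script:YouTubeKey)
    try     { $plainKey = [System.Runtime.InteropServices.Marshal]::PtrToStringBSTR($bstr) }
    finally { [System.Runtime.InteropServices.Marshal]::ZeroFreeBSTR($bstr) }

    $uri = "https://www.googleapis.com/youtube/v3/videos?part=snippet,contentDetails&id=$VideoID&key=$plainKey"
    Invoke-RestMethod -Uri $uri -Method Get
}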

The Summary

You can access the data I’ve already created here in this new repository, PSYouTubeScrapes.   But just be aware that it is kind of terrible UX looking through 8,000 tags and comments, so I took a dependency on the awesome PSWordCloud PowerShell module which I used to make a wordcloud out of the most common video tags.

A note on YouTube Comments: they contain the worst of humanity and should never ever be entered by any person.  I intentionally decided not to research them or publish the work I did on them, because, wow.

So, here is a word cloud of the two datasets, generated using this script.

A word cloud of the most commong tags for Weight loss videos traversed with this tool, including 'theStyleDiet', 'Commedy' Beauty', and 'Anna Saccone', who seems to be a YouTuber popular in this area
Anna Saccone has a LOT of fashion and weight videos, but seemed pretty positive from what I saw

The Conclusion

All in all, I felt that the content was pretty agreeable!  Even if the search for children’s videos DID surface some stranger videos like this one, I have to say that I didn’t think any of them were overly negative or exploitative, nor did I see any ‘Elsagate’ style content.  That’s not to say that YouTube is perfect, but I think it seems safe enough, even if I will probably review their YouTube history and let them use YouTube Kids instead of the full app.

Have a set of recommended videos you’d like me to search like this?  Post them in a thread on /r/FoxDeploy or leave a comment with your videos and I’ll see what we come up with.

If you conduct your own trial with this code and example and want to share, feel free to submit a pull request to the repo as well (note that we .gitignore all jpeg and png files to keep the repo size down).  You can access the data I’ve already created here in this new repository, PSYouTubeScrapes.


Progressive Automation Pt II – PowerShell GUIs


In our previous post in the series, we took a manual task and converted it into a script, but our users could only interface with it by ugly manual manipulation of a spreadsheet. And, while I think sheetOps (configuring and managing a Kubernetes cluster with a GoogleSheets doc!) are pretty cool, we can probably do better.

So in this post, I’ll show how I would typically go about building a PowerShell WPF GUI from an existing automation that kind of works OK.

Analysis

To begin making a UI we need to start by analyzing which values a user will be entering, considering what inputs make sense for that, and then thinking if there is anything the user will need to see in the UI as well, so, looking back to the first post…

To begin with, users have their own spreadsheet they update like this, it’s a simple CSV format.

 

HostName Processed ProcessedDate
SomePC123
SomePC234
SomePC345

Previously our users were manually adding computers to a list of computer names. That kind of scenario is best handled by a TextBox input. Or, if we hate our users, we can make them provide input with a series of sliders.

Me: The ideal phone number input control doesn’t exis–


Gif Credit – Twitter

So we need at least a TextBox.

We need a confirmation button too, to enter the new items. We also need some textblocks to explain the UI. Finally, a Cancel/Reset button to zero out the text box.

We should also provide feedback of how many items we see in their input, so we should add a label which we can update.

That brings us up to:

  • Inputs
    • TextBox for ComputerNames
    • Buttons
      • OK
      • Cancel
  • Display Elements
    •  Welcome / Intro Text
    •  Confirmation Area
    •  Updatable Label to show count for devices input
    • DataGrid to show current contents

A note on TextBoxes:
As soon as we provide TextBoxes to users, all kinds of weird scenarios might happen.  Expect it!

For instance, users will copy and paste from e-mails in Outlook, or from spreadsheets in Excel. They might also type a list of computers in Notepad, separated by newline (\r\n) carriage returns. Or maybe they’re more of the comma-separated type, and will try to separate entries with commas.  These are all predictable scenarios we should account for in our UI, so we should give the user some kind of confirmation of what we see from their typing in the TextBox, and our form should handle most of the weird things they’ll try.

That’s why we need Confirmation. If you provide UI without confirmation, users will hate you and e-mail (or worse, they might call you!!) for help, so be sure to do it the right way and think of their needs from the get go, or you will enjoy getting to hear from them a lot.
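As a simplified sketch of the kind of normalization the form code below ends up doing, pasted input can be split on commas, tabs, and newlines and then trimmed:

#Turn whatever the user pasted into a clean list of names (3+ characters)
$names = $wpfdeviceTextbox.Text -split '[,\t\r\n]' |
    ForEach-Object { $_.Trim() } |
    Where-Object { $_.Length -ge 3 }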

Don’t make UI that will make your users hate you, like this one
depicts a Microsoft Windows 95 Era application with Volume Control as the title.  Instead of a volume dial as normally seen, this app in the screenshot has 100 different radio buttons to click on to change volume.

With all of these components in mind, time to get started.

Making the thing

We’re going to open up Visual Studio, pick a WPF app and then do some drag and dropping. If you are getting a bit scared of how to do it, or what you should do to install it, check out some of the previous posts in my GUI Series, here!

You should end up with something like this:

Which will look like this when rendered!

Shows a pretty ugly UI
Easily the ugliest UI we’ve done so far

To wire up the buttons, I wrote a few helper functions for the button logic, which look like this.


function loadListView(){
    $global:deviceList = new-object -TypeName System.Collections.ArrayList
    $devices = import-csv "$PSScriptRoot\devices.csv" | Sort-Object Processed
    ForEach($device in $devices){
        # [void] keeps ArrayList.Add from emitting the new index into the output stream
        [void]$global:deviceList.Add($device)
    }
    $WPFdevice_listView.ItemsSource = $global:deviceList
}

function cancelButton(){
    $WPFok.IsEnabled = $false
    $wpfdeviceTextbox.Text = $null
    $wpflabelCounter.Text="Reset"
    }

$wpfdeviceTextbox.Add_TextChanged({
    if ($wpfdeviceTextbox.Text.Length -le 5){
        return
    }
    $WPFok.IsEnabled = $true
    $deviceTextbox = $wpfdeviceTextbox.Text.Split(',').Split([System.Environment]::NewLine).Where({$_.Length -ge 3})
    $count = $deviceTextbox.Count
    $wpflabelCounter.Text=$count
})

$WPFCancel.Add_Click({
    cancelButton
})

$WPFok.Add_Click({
    $deviceTextbox = $wpfdeviceTextbox.Text.Split(',').Split([System.Environment]::NewLine).Where({$_.Length -ge 3})
    ForEach($item in $deviceTextbox){
        $global:deviceList.Add([pscustomObject]@{HostName=$item})
    }
    set-content "$PSScriptRoot\devices.csv" -Value $($deviceList | ConvertTo-csv -NoTypeInformation)
    cancelButton
    loadListView
})

To walk through these, we set an arrayList to track our collection of devices from the input file in loadListView, then define behavior in the $WPFok.Add_Click method to save the new items to the output.csv file. This is simple, and much harder to mess up than our previous approach of telling users to update a .csv file manually.

🔗Get the complete source here 🔗

Wait, where’s the beef XAML Files?

You may also notice a new method of loading up the .XAML files.

[void][System.Reflection.Assembly]::LoadWithPartialName('presentationframework')

$xamlPath = "$($PSScriptRoot)\$((split-path $PSCommandPath -Leaf ).Split(".")[0]).xaml"
if (-not(Test-Path $xamlPath)){
    throw "Ensure that $xamlPath is present within $PSScriptRoot"
}
$inputXML = Get-Content $xamlPath
$inputXML = $inputXML -replace 'mc:Ignorable="d"','' -replace "x:N",'N' -replace '^<Win.*', '<Window'
[xml]$XAML = $inputXML

After some time away from writing PowerShell GUIs, I now think it is unnecessarily verbose to keep your .xaml content within the script, and now recommend letting your xaml layouts live happily next to the script and logic code. So I’ve modified the template as shown here, to now automatically look for a matching named .xaml file within the neighboring folder. Simple and easy to read!

Next time

And that’s that! Was this the world’s best GUI? Yes. Yes of course it was!

Join us next time where we explore a whole new world, don’t you dare close your eyes, of aspnet core as an alternative way of approaching automation.

If you’re still looking for something to do, try out this great walkthrough of terrible UI traits by a UI design consulting firm. Whatever you do, don’t do this in your UI and you’ll be off to a good start.


PowerShell quickie – function to make your Mocks faster



In C#, writing unit tests is king, and Moq is the hotness we use to Mock objects and methods, like the MockObjects we get with Pester in PowerShell.

But one rough part of it is the syntax for Moq, which requires you to write a handler and specify each input argument, which can get pretty verbose and tiresome.

To ease this up, try this function, which will take a method signature and convert it into a sample Mock.Setup or Mock.Verify block, ready for testing
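The function itself ships as a gist; in case it doesn’t render, a minimal sketch of the idea in PowerShell (regex-based, so it only handles simple signatures, and the names here are illustrative) might look like:

Function ConvertTo-MoqSetup {
    param(
        # e.g. 'Task<User> GetUserAsync(int id, string region)'
        [Parameter(Mandatory)][string]$MethodSignature,
        [string]$MockName = 'mockService'
    )

    if ($MethodSignature -notmatch '^\s*(?<return>[\w<>\[\],\s]+?)\s+(?<name>\w+)\s*\((?<args>.*)\)\s*$'){
        throw "Could not parse method signature: $MethodSignature"
    }

    #Turn each 'Type name' argument into an It.IsAny<Type>() placeholder
    $anyArgs = ($Matches['args'].Split(',') | Where-Object { $_.Trim() } | ForEach-Object {
        $type = ($_.Trim() -split '\s+')[0]
        "It.IsAny<$type>()"
    }) -join ', '

    $call = "x => x.$($Matches['name'])($anyArgs)"

    #Emit a Setup and a Verify block, ready to paste into a test and fill in
    @"
$MockName.Setup($call); // TODO: chain .Returns(...) if the method returns a value

$MockName.Verify($call, Times.Once);
"@
}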
