
Adding tab-completion to your PowerShell Functions


 

upgrade-your-code

This post is part of the series on AutoCompletion options for PowerShell! Click the banner for more posts in the series!


Probably my single favorite feature of PowerShell isn’t exciting to most people…but I love Auto-Completion.  I have my reasons:

As I have the typing skills of a praying mantis (why did I mention them…they’re easily the creepiest and worst insect…ewww) and constantly typo everything, I LOVE auto-completion.

Add to that the fact that I have lost a memory competition to a goldfish, and I REALLY depend upon it.

goldfish_1
If you have a memory like me, and like this guy, you’ll love Auto-complete

PowerShell helps deeply flawed people like me by offering tons of built-in help and autocomplete practically everywhere.  Some of it is done for us, automatically, while others require a bit more work from us as toolmakers in order to enable the sweet sweet tab expansion.

In the world of AutoCompletion, there are two real types of AutoComplete that PowerShell offers, and in this series we’ll cover both:

  • Part 1  – (This post) Parameter AutoComplete
  • Part 2 – (Coming soon) Output AutoComplete

This post is going to be all about the first one.

Parameter AutoComplete

In PowerShell, when you define a Function, any of your parameter names are automatically compiled and available via autocompletion.  For instance, in this very simple function:

Function Do-Stuff {
param(
    $Name,$count)

    For($i = 1 ; $i -le $count; $i++){

        "Displaying $name, time $i of $count"

    }

}

As you’ll see in the GIF below, PowerShell will compile my function and then automatically allow me to tab-complete through the available parameter names.

autocomplete-1

That’s nice and convenient, but what if I want to prepopulate some values for the user to tab through?  There are at least two ways of doing that.  If we constrain the values a user can provide using [ValidateSet()], we’ll automatically get some new autocomplete functionality, like so.

param(
    [ValidateSet("Don", "Drew", "Stephen")]
    $Name,

    $count)

autocomplete-2

Now, for most of our production scripts…this is actually pretty good. We might only want our code to run on one or two machines, or accounts, or whatever.

But what if we wanted our function to instead display a dynamic list of all the available options?  We can do this by adding dynamic parameters.

Dynamic Parameters

You can read about them at the very bottom of the help entry for about_Functions_Advanced_Parameters, but I don’t really like the description they give.  These parameters work by executing a script block and building up a list of the available options at the time of execution, dynamically.

In our example, we’re going to reinvent the wheel and make our own Restart-Service cmdlet, replicating the feeling of it auto-populating the available services.  But this time, it’s going to work on remote computers! The code and technique were both originally covered by Martin Schvartzman in his post Dynamic ValidateSet in DynamicParameters on Technet.

For a starting point, here’s a super basic function that uses Get-WmiObject to stop and start services on remote computers.  There is NO error handling here, either.

Function Restart-RemoteService{
Param($computername=$env:COMPUTERNAME,$srv="BITS")

  ForEach($machine in $computername){
    write-host "Stopping service $srv..." -NoNewline
    Get-WmiObject -Class Win32_Service -ComputerName $machine |
       Where Name -eq $srv | % StopService | Out-Null
    write-host "[OK]" -ForegroundColor Cyan

    Write-Host "Starting Service $srv..." -NoNewline
    Get-WmiObject -Class Win32_Service -ComputerName $machine |
       Where Name -eq $srv | % StartService | Out-Null
    write-host "[OK]" -ForegroundColor Cyan
    }
}

Thus far it will work, but it doesn’t give us dynamic autocomplete.  Let’s add that.

First things first, in order to have a Dynamic parameter, we have to be using [CmdletBinding()] and we also need to define our DynamicParam in its own special scriptblock, after the regular params.

Function Restart-RemoteService{
[CmdletBinding()]
Param($computername=$env:COMPUTERNAME)
    DynamicParam {
    #define DynamicParam here
    }

Now, within our DynamicParam block, we have to do a few things:

  • Name the param
  • Create a RuntimeDefinedParameterDictionary object
  • Build all of the properties of this param, including its position, whether it is mandatory or not, etc, and add all of these properties to a new AttributeCollection object
  • Define the actual logic for our param values by creating a dynamic ValidateSet object
  • Add these all up and return our completed DynamicParam, and end the dynamic block
  • Add a Begin and Process block to our code, and within the Begin block, commit the user input to a friendly variable (otherwise the value only lives within $PSBoundParameters)

First, we name the Param here:

DynamicParam {

            # Set the dynamic parameters' name
            $ParameterName = 'Service'

You know how when we normally define a parameter, we can specify all of these nifty values, like this?

[Parameter(Mandatory=$true,
                   ValueFromPipeline=$true,
                   ValueFromPipelineByPropertyName=$true,
                   ValueFromRemainingArguments=$false,
                   Position=0,
                   ParameterSetName='Parameter Set 1')]
        [ValidateNotNull()]
        [ValidateNotNullOrEmpty()]
        [ValidateCount(0,5)]
        [ValidateSet("sun", "moon", "earth")]
        [Alias("p1")]
        $Param1

If we want to do this for a dynamic parameter, we have to create a System.Management.Automation.RuntimeDefinedParameterDictionary and add all of the properties we want to it.  That’s the next thing we do: we make a new dictionary, then make a new collection of attributes (like Mandatory, Position, etc.), then we manually add all of the parameters to the dictionary.  Yeah, it totally blows.

# Create the dictionary
$RuntimeParameterDictionary = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary

# Create the collection of attributes
$AttributeCollection = New-Object System.Collections.ObjectModel.Collection[System.Attribute]

With that, we’re ready to make some attributes. Stick with me, I promise we’re about to do something fun. In the next step, we’ll make the ServiceName mandatory, and specify a position of 1 if the user is lazy.

# Create and set the parameters' attributes
$ParameterAttribute = New-Object System.Management.Automation.ParameterAttribute
$ParameterAttribute.Mandatory = $true
$ParameterAttribute.Position = 1

#Add the attributes to the attributes collection
$AttributeCollection.Add($ParameterAttribute)

Alright, finally the cool part! Here’s where we populate our dynamic parameter list! We do this by running our arbitrary code (remember to append Select -ExpandProperty <YourPropertyName> to the end of your statement, or the set will come back empty), and then we take the output of that code (the values the user will be able to tab through) and add them as a custom ValidateSet.

Yup, that’s all we were doing this whole time, setting up a big structure to let us do a script based ValidateSet. Sorry to spoil it for you.

#Code to generate the values that our user can tab through
$arrSet = Get-WmiObject Win32_Service -ComputerName $computername | select -ExpandProperty Name
$ValidateSetAttribute = New-Object System.Management.Automation.ValidateSetAttribute($arrSet)

OK, we’re in the home stretch. All that remains is to create a new parameter object using all of the stuff we’ve done in the previous 10 lines, then we add it to our collection, and Bob’s your uncle.

# Add the ValidateSet to the attributes collection
$AttributeCollection.Add($ValidateSetAttribute)

# Create and return the dynamic parameter
$RuntimeParameter = New-Object System.Management.Automation.RuntimeDefinedParameter($ParameterName, [string], $AttributeCollection)
$RuntimeParameterDictionary.Add($ParameterName, $RuntimeParameter)
return $RuntimeParameterDictionary
    }

begin {
# Bind the parameter to a friendly variable
$Service = $PsBoundParameters[$ParameterName]
    }

Of particular note is that last bit, in the Begin block. Strangely enough, PowerShell receives the value the user inputs but saves it within $PSBoundParameters; it’s up to us to actually commit that value into the friendly variable $Service so that we can use it.

Putting that all together, here’s the complete DynamicParam{} scriptblock.

DynamicParam {

            # Set the dynamic parameters' name
            $ParameterName = 'Service'

            # Create the dictionary
            $RuntimeParameterDictionary = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary

            # Create the collection of attributes
            $AttributeCollection = New-Object System.Collections.ObjectModel.Collection[System.Attribute]

            # Create and set the parameters' attributes
            $ParameterAttribute = New-Object System.Management.Automation.ParameterAttribute
            $ParameterAttribute.Mandatory = $true
            $ParameterAttribute.Position = 1

            # Add the attributes to the attributes collection
            $AttributeCollection.Add($ParameterAttribute)

            # Generate and set the ValidateSet
            $arrSet = Get-WmiObject Win32_Service -ComputerName $computername | select -ExpandProperty Name
            $ValidateSetAttribute = New-Object System.Management.Automation.ValidateSetAttribute($arrSet)

            # Add the ValidateSet to the attributes collection
            $AttributeCollection.Add($ValidateSetAttribute)

            # Create and return the dynamic parameter
            $RuntimeParameter = New-Object System.Management.Automation.RuntimeDefinedParameter($ParameterName, [string], $AttributeCollection)
            $RuntimeParameterDictionary.Add($ParameterName, $RuntimeParameter)
            return $RuntimeParameterDictionary
    }

    begin {
        # Bind the parameter to a friendly variable
        $Service = $PsBoundParameters[$ParameterName]
    }

And here it is in progress.  Keep your eyes on the birdy here: you’ll see the services start to populate almost immediately after I hit Tab, then the service on the left side will very quickly stop and start.

autocompleteexample

Here’s the Completed Function.
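Since the embed for the completed function may not survive syndication, here is a sketch of how the pieces above fit together, re-assembled by hand from the snippets in this post. Treat it as a starting point rather than a verbatim copy of the original.

Function Restart-RemoteService {
    [CmdletBinding()]
    Param($computername = $env:COMPUTERNAME)

    DynamicParam {
        # Name the dynamic parameter and build the dictionary + attribute collection
        $ParameterName = 'Service'
        $RuntimeParameterDictionary = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary
        $AttributeCollection = New-Object System.Collections.ObjectModel.Collection[System.Attribute]

        # Make the parameter mandatory and positional
        $ParameterAttribute = New-Object System.Management.Automation.ParameterAttribute
        $ParameterAttribute.Mandatory = $true
        $ParameterAttribute.Position = 1
        $AttributeCollection.Add($ParameterAttribute)

        # Generate the ValidateSet from the services on the target computer(s)
        $arrSet = Get-WmiObject Win32_Service -ComputerName $computername | select -ExpandProperty Name
        $ValidateSetAttribute = New-Object System.Management.Automation.ValidateSetAttribute($arrSet)
        $AttributeCollection.Add($ValidateSetAttribute)

        # Create and return the dynamic parameter
        $RuntimeParameter = New-Object System.Management.Automation.RuntimeDefinedParameter($ParameterName, [string], $AttributeCollection)
        $RuntimeParameterDictionary.Add($ParameterName, $RuntimeParameter)
        return $RuntimeParameterDictionary
    }

    begin {
        # Bind the dynamic parameter to a friendly variable
        $srv = $PSBoundParameters['Service']
    }

    process {
        ForEach ($machine in $computername) {
            Write-Host "Stopping service $srv..." -NoNewline
            Get-WmiObject -Class Win32_Service -ComputerName $machine |
                Where Name -eq $srv | % StopService | Out-Null
            Write-Host "[OK]" -ForegroundColor Cyan

            Write-Host "Starting Service $srv..." -NoNewline
            Get-WmiObject -Class Win32_Service -ComputerName $machine |
                Where Name -eq $srv | % StartService | Out-Null
            Write-Host "[OK]" -ForegroundColor Cyan
        }
    }
}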

Sources

This was a hard one to write, and really helped me a lot to formalize my knowledge on the matter.  I couldn’t have done it without these posts.

Join us next post as we delve into how to add additional autocomplete to our function, by means of defining the output type for our function.

Link to post 2 goes here



SOLVED: What happens to WINRM when certs die


the-case-of-the-ghost-certificate-p2

Oh boy, this has been a rollercoaster of emotions.  But guys…we made it.  We have finally, and definitively answered what happens to WinRM with HTTPs when certificates expire.  If you’re curious about why this is a big question, see my previous posts on this topic.

Up until now, I’ve been able to say, conclusively, that WinRM generally seems to work, even as Certs expire and are renewed.  But I’ve never known why: did WinRM automatically update the certs?  Does Windows just not care about certs?  What is the purpose of life?

Well, I can now shed light on at least some of those questions.  I knew what I needed to do:

Record a WireShark trace and extract the certificate to tell definitively which cert is being used to validate the session.  Then we’ll know what happens.

Setting the stage

Two VMs, one domain.  Server 2016 server, connected to from a Server 2012 R2 client. Newly created WinRM capable Certificate Template available to all domain members with a 4 hour expiration and 2 hour renewal period.

00-cert-temp

With the stage set and the cert present on both machines, I ran winrm quickconfig -transport:https on each, made sure they could see each other, and remoted from one into the other.  I recorded a WireShark trace of the remote session, uh, remoting, then ran a command or two and stopped recording.  Then I opened the trace.
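For reference, the setup and the remote session boiled down to commands along these lines. This is a sketch of the steps described above; 'server2016' is a placeholder hostname, and it assumes the certificate from the template has already been issued to each machine.

# Create the HTTPS listener on each box (requires a suitable Server Authentication cert to already be present)
winrm quickconfig -transport:https

# Confirm the HTTPS listener answers, then remote in over 5986
Test-WSMan -ComputerName server2016 -UseSSL
Enter-PSSession -ComputerName server2016 -UseSSL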

Swimming with the Sharks

sharks
How I felt looking at all of these packets

When you first open WireShark and start recording, you may be a bit dismayed…

01-wireshark-no-ssl

If you were to browse a website or do another transaction with SSL, WireShark is smart enough to break it down and show you each step in the transaction.  However, with PowerShell remoting using SSL over the non-standard port of 5986, you have to tell WireShark how to treat this data.  Do this by clicking one of the first SYN \ ACK \ ECN packets, then click Analyze \ Decode As...

02-decode

You’ll need to provide both the Source and Destination port (don’t worry, if you clicked one of the packets as I recommended, you can just select them from the dropdown for Value), and then pick ‘SSL’ from the dropdown list on the right.

02-5-decode-rules

03-now-decoding
This is a REALLY big image (captured from my 4k), open it in its own tab!

Now you can finally see the individual steps!

decoded

Since we can see these steps, we can now drill down and see which cert is being used.  That’s right, we can actually extract the certificate.

Extracting a certificate

Find the step which has the lines Server Hello, Certificate ... and other values in it.

Now, in the Details pane below, click on Secure Sockets Layer

05

06

Follow the arrows above, and click through to TLS, Handshake Protocol: Certificate, Certificates, and finally right-click Certificate

07

Choose Extract Packet Bytes and then choose where to dump the file.

08
Make sure to save as .DER format

With this done, you can now double-click to open the cert and see what was transmitted over the wire.  Pretty crazy, huh?  This is one reason why man-in-the-middle attacks are so scary.  But then again, they’d have to worry about network timing, cert chains and name resolution too in order to really appear as you.  But anyway, let’s look and see which cert was used to authenticate this WinRM session.

09
Click over to the details tab

In this next screen shot, on the left is the cert I recovered from WireShark.  The one on the right is the original cert from the MMC from the same computer.

10
Note that the Cert Thumbprint matches…this will become critical later

So, now we’ve found out how we can recover certificates from a WireShark trace.  Now all that remains is to wait the four hours for this cert to expire, and see what happens!

Waiting for the cert to renew

maxresdefault1

While I was away, I left a little chunk of code running, which will list the valid certs on the computer, and echo out their thumbprints.  It also echoes out the cert listed in the HTTPS listener of WinRM.  By keeping an eye on this, I know when the cert has been renewed.  Here’s the code:

while ($true){
"
$(get-date | select -expand DateTime) pulsing the cert store"| tee -append C:\temp\Winrm.log ;

$cert = get-childitem Cert:\LocalMachine\My |? ThumbPrint -ne '9D9362043DF0027552B1B41F6F68D208F8433152' | ? ThumbPrint -ne 'FEFFA38303FA0A3748683196E350D97F869AD690' | ? ThumbPrint -ne 'A878CC677E87D5FDC852A82ECD6AFDDD6EDC3C5C'| ? ThumbPrint -ne '315E6950EB9B8DD7BCBD8263BACBDB6B35F820DF' |  ? ThumbPrint -ne '232E14112D50209B2575451D63A3F7CA80AFC6EE'
"--current valid thumbprint $(get-childitem Cert:\LocalMachine\My |? ThumbPrint -ne '9D9362043DF0027552B1B41F6F68D208F8433152' | ? ThumbPrint -ne 'FEFFA38303FA0A3748683196E350D97F869AD690' | ? ThumbPrint -ne 'A878CC677E87D5FDC852A82ECD6AFDDD6EDC3C5C'| ? ThumbPrint -ne '315E6950EB9B8DD7BCBD8263BACBDB6B35F820DF' |  ? ThumbPrint -ne '232E14112D50209B2575451D63A3F7CA80AFC6EE' |select -ExpandProperty ThumbPrint)"| tee -append C:\temp\Winrm.log ;

"--current WSman thumbprint $((get-item WSMan:\localhost\Listener\Listener_1305953032\CertificateThumbprint | select -expand Value) -replace ' ')" | tee -append C:\temp\Winrm.log ;

"--cert valid $([math]::Round(($Cert.NotAfter - (get-date) | Select -expand TotalMinutes),2)) minutes, for pausing for 30 mins"

start-sleep (60*30)

}

So, I was really happy to see this when I came back

cert-renewed

The difference between the current thumbprint and the one listed in WinRM told me that the cert had renewed…but strangely enough WinRM on a Server 2016 machine still references the old thumbprint.

This old thumbprint was listed EVERYWHERE.  Now, the moment of truth: run a new WireShark trace and extract the cert.  I was sitting there with bated breath, very excited to see the results!  And…

Jesus Christ man, just tell us what happened

Alright, here is what I saw when I opened the cert from the machine and saw what was listed in the MMC.  It’s listed side by side with what you see in WinRM or WSMan

How long are you going to drag this on…

OK, the moment of truth.  Which actual cert was used for this communication?

Did WinRM:

  • A: Use the original, now expired cert
  • B: Not use a cert at all?
  • C: Actually use the renewed cert, even though all evidence points to the contrary?

To find out, I had to take another WireShark trace and run through all of these steps again.  But what I found shocked me…

15-conclusion-winrm-does-present-the-new-cert

Yep.  Sure enough!  When decoding the certificate on the machine, I found that WinRM does actually use the renewed certificate, even though all evidence (and many sources from MSFT) point to the contrary.  This is at least the case on a Server 2012 R2 machine remoting into Server 2016.  Later today I’ll update with the results of 2012 to 2012, 2016 to 2016, and two goats on a chicken while a sheep watches.

15-conclusion

What does it all mean?

In conclusion, WinRM does actually seem to handle cert expiry gracefully, at least on PowerShell 4 and up and Server 2012 R2 and newer.  I’ve tested client and server connection mode from Server 2012R2 and 2016 thus far.

Credit to these fellers:


Is WinRM Secure or do I need HTTPs?


One of the things I absolutely love about my job is being thrown into the deep end of the rapids with little to no time to prepare (er, given the opportunity to try new things and new technologies), pushing me out of my comfort zone.  It normally goes okay.

whitewater
actual camera footage of my last project

Case in point: a client of ours recently was investigating WinRM and whether or not it was secure, leading me down a rabbit hole of Certificates, Enterprise CA’s, SSL Handshakes, WireShark and more.

At the end of the initiative, I was asked to write up a summary to answer the question

Is WinRM secure or do I really need HTTPs too

In this post, I’ll talk us through my findings after days of research and testing, stepping through the default settings and some edge cases, hopefully covering the minimum you need to know in a short little post.

Authentication Security

Consider the following scenario: two computers, both members of the same domain.  We run winrm quickconfig on both computers and don’t take any additional steps to lock things down.  Is it secure?  Are credentials or results passed in the clear?  Assume HTTP as the transport until I mention otherwise.

From the very first communications and with no additional configuration, connections between the two computers will use Kerberos for initial authentication.  If you’re not familiar with it, the bare minimum to know is that Kerberos is a trusted mechanism which ensures that credentials are strongly protected, and has a lot of nifty features like hashing and tickets which are used to ensure that raw credentials never go over the wire.  So, domain joined computers do not pass creds in the clear.

Well, what if the two machines are in a workgroup instead?  Workgroup machines trust each other, but don’t have a domain controller to act as the central point of authority for identity, so they have to use the dated NT LAN Manager (NTLM) protocol instead.  NTLM is known to be less secure than Kerberos and has its own vulnerabilities, but it still obfuscates credentials with a strong one-way hash.  No credentials go over the wire in the clear in this scenario either.

On-going Security

For those keeping track, thus far we’ve found that neither domain-joined nor workgroup PCs will transmit creds in the clear, or in easily reversed encryption, for the initial connection.  But what about further communications?  Will those be in plaintext?

Once the authentication phase has completed, with either Kerberos (used in a domain) or NTLM (when machines aren’t in a domain) all session communications are encrypted using a symmetric 256-bit key, even with HTTP as the protocol.

This means that by default, even with plain old HTTP used as the protocol, WinRM is rolling encryption for our data.  Awesome!
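If you want to verify that on your own machines, the WSMan: drive exposes the relevant settings; a quick check looks like this:

# Both of these report False on a default configuration, meaning WinRM
# will refuse to send or accept unencrypted payloads over HTTP
Get-Item WSMan:\localhost\Service\AllowUnencrypted
Get-Item WSMan:\localhost\Client\AllowUnencrypted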

In that case, when do we need HTTPs?

Let’s go back to the workgroup / DMZ scenario.  In this world, NTLM is the authentication mechanism used.  We mentioned earlier, however, that NTLM has known issues: it is relatively trivial for a skilled attacker to impersonate another server.

Fortunately, we have a perfect remedy to this impersonation issue! We can simply use HTTPS as the transport for NTLM communications.  HTTPS’ inclusion of SSL resolves issues of server identity, but requires some configuration to deploy.  With SSL, both computers must be able to enroll for and receive a valid Server Authentication certificate from a mutually trusted Certification Authority.  These certificates satisfy the need to validate server identity, effectively patching the server impersonation vulnerability of NTLM.

In the world of WinRM over HTTPs, once initial authentication has concluded, client communication is now doubly secured: we’ve still got the default 256-bit symmetric encryption from WinRM mentioned earlier, wrapped inside the outer security layer of the SSL-secured transport tunnel.

I was told it would be in the clear?

In case you’re just reading the headings, at no point so far are connections sent in the clear with the steps we’ve outlined here.

However, if you’re really interested in doing it, it is possible to allow cleartext communications…it just requires taking the safety off, propping one’s foot up, and really, really circumventing all of the default security in order to shoot one’s self in the foot.

On both the client and server, one must make a handful of specific modifications to the winrm server and client, to specify Basic authentication mode and place the service in AllowUnencrypted mode.
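To give a sense of just how deliberate this is, those modifications look roughly like the following (a sketch using the WSMan: drive; needless to say, do not do this anywhere you care about):

# Server side: allow unencrypted traffic and Basic authentication
Set-Item WSMan:\localhost\Service\AllowUnencrypted -Value $true
Set-Item WSMan:\localhost\Service\Auth\Basic -Value $true

# Client side: the client must also agree to speak unencrypted Basic
Set-Item WSMan:\localhost\Client\AllowUnencrypted -Value $true
Set-Item WSMan:\localhost\Client\Auth\Basic -Value $true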

If we take these steps, and then force the actual remote connection into Basic mode with

Enter-PSSession -ComputerName $Name -Authentication Basic

Then and only then will we pass communications in the clear.  The actual payload of messages will be viewable by anyone on the network, while the credentials will be lightly secured with easily reversed Base64 encoding.  Base64 is used SO often to ‘lightly secure’ things that some folks call it ‘encraption’.  In fact, if you’re listening on a network and see some Base64 packets, you might want to try decoding them; it could be something interesting.  For more on this topic, read Lee’s excellent article Compromising yourself with WinRM.

Conclusion

For machines which are domain joined and will have access to a domain controller for Kerberos authentication, SSL is just not necessary.

However, for machines which may be compromised within a DMZ or workgroup, SSL provides an added layer of protection which elevates confidence in a potentially hazardous environment.

TL;DR: WinRM is actually pretty good and you probably don’t need HTTPs


Lessons on ProcMon and how to force On-screen keyboard


No On Screen Keyboard-

Recently, I had a customer looking at setting up potentially tens of thousands of Point of Sale Kiosks running Windows 10 on an LTSB branch.  We wanted users to have to input their password, but noticed that if a Windows 10 machine is in the docking station, the Touch Keyboard will never display!

Paradoxically, if the user has a Windows Hello Pin specified, that version of the touch keyboard will appear. But for a regular password?  Nope, no On-Screen Keyboard.  And using the dated compatibility keyboard (OSK.exe) was not an option.

To illustrate how weird this confluence of conditions was, I’ve provided a video

While we wait for Microsoft to create a patch to fix this, I’ve created my own workaround, using WMI Events and PowerShell!

In a perfect world, we’d wait for a hotfix. If it affected many people, Microsoft would roll out a patch for it.

Life isn’t perfect and we don’t have time to wait!  Sometimes all you really need is to open up Process Monitor and then write your own hack.

Why is this happening?

Regardless of the values you set elsewhere on the system for keyboard preference (like these keys below):

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\TabletTip\1.7\EnableDesktopModeAutoInvoke=1
LastUsedModalityWasHandwriting = 1

While these values allow the touch keyboard to appear anywhere within Windows, they have no effect on the lock screen if the system is in a docking station.

The weirdest part?  If the tablet is undocked, even if you plug a USB Keyboard into the tablet…the On Screen keyboard will display!

The Cause

This strange behavior told me that something related to the system being docked was telling Windows to suppress the keyboard on the login screen.  All of this pointed to some specific registry key being set when the tablet was docked, instructing the Touch Keyboard (TabTip.exe) to stay hidden at login.

How to use ProcMon

Because we could control the behavior (i.e. recreate the issue), we could narrow down the scope and look for changes.  This spells Process Monitor to me!  Now, ProcMon can DROWN you in data and bring even a powerful system to its knees, so effective filtering is key to getting anything done.

I opened the program, started a trace, logged off, tried to bring up the keyboard, then logged back in and paused the trace.  Next, because I suspected (and hoped, since it would be easier for me if it were a simple regkey) that a registry key was hosing me up here, I filtered everything else out by clicking these icons. Remember, we need to filter out superfluous data so we can find what we want!

This dropped me down to only 235K events instead of 267K.

Next, I knew the program for the keyboard is called TabTip, so I filtered for just that.  If you need to, you can click the crosshairs and drag down onto a process to lock to just that process. This should really drop the number of entries (down to 30k for me!).

Finally, let’s filter for only RegQueryValue events, which tell us that the process queried a registry value.  This is a hint that we might be able to influence things by changing a key.

And now…hit Control+F and get clever trying to find your value!  I knew that Windows called this SlateMode, so I searched around for that…and this one kept calling my name.

interesting

Both CSRSS and TabTip kept checking for this value …hmm…Let’s try fiddling with it

I set it to Zero and logged out and…BOOM baby!

I finally had a keyboard on the lock screen while docked!  Oh yeah!

giphy

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\PriorityControl]

"ConvertibleSlateMode"=dword:00000000

If this key is set to 0, the Touch Keyboard will always be available.

Unfortunately, when a device is docked or undocked, Windows recalculates the value of this key and no amount of restrictive permissions can prevent Windows from changing the value.

Nothing prevents us from changing the value right back though!  To sum it up in GIF form, this is what we’re about to do here:

The Fix

To resolve this issue, the following PowerShell script should be deployed as a scheduled task, to execute at boot and with the highest privilege.  It will run silently in the background and recognize docking/undocking events.  When one occurs, it will reset the value of the key to 0 again, ensuring the keyboard is always available.

set-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\PriorityControl -Name ConvertibleSlateMode -Value 0 -PassThru

#Register for device state change
Register-WMIEvent -query "Select * From Win32_DeviceChangeEvent where EventType = '2'" `
-sourceIdentifier "dockingEvent_Ocurred" `
-action {#Do Something when a device is added
$val = get-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\PriorityControl | select -expand ConvertibleSlateMode
write-output "$(get-date) current value of $val"  >> c:\utils\reglog.log
set-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\PriorityControl -Name ConvertibleSlateMode -Value 0 -PassThru
$val = get-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\PriorityControl | select -expand ConvertibleSlateMode
write-output "$(get-date) current value of $val"  >> c:\utils\reglog.log

}

while($true){
start-sleep 3600
#perform garbage collection in case we're getting clingy with our memory
[System.GC]::Collect()

}
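Registering it as the boot task described above can be done with the ScheduledTasks module. Here is a minimal sketch, assuming the script is saved to C:\utils\Watch-SlateMode.ps1 (the path and task name are my placeholders):

# Minimal sketch: run the watcher script at startup as SYSTEM with highest privileges
$action    = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -WindowStyle Hidden -ExecutionPolicy Bypass -File C:\utils\Watch-SlateMode.ps1'
$trigger   = New-ScheduledTaskTrigger -AtStartup
$principal = New-ScheduledTaskPrincipal -UserId 'SYSTEM' -LogonType ServiceAccount -RunLevel Highest

Register-ScheduledTask -TaskName 'Force-OnScreenKeyboard' -Action $action -Trigger $trigger -Principal $principal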

 

Is this safe for production?

Certainly! Now, ideally I’d rather find and set a single registry key value, and I think that Microsoft will eventually fix this in a Windows Update or new release of LTSB. If that happens, I’ll update this post, but as of today, this is the necessary workaround for Windows 10 2016 LTSB and release 1703.

Have you solved this problem too?  Have a better way?  I’d love to hear it!  Normally I would lock down permissions to the registry key to keep Windows from changing the value back, but that wouldn’t work in this case.  I’m open to other options if you’d like to share.


Advanced Autocompletion: adding output types


upgrade-your-code

This post is part of the series on AutoCompletion options for PowerShell! Click the banner for more posts in the series!


Previously in this series, we reviewed a few ways to add AutoComplete onto your functions, covering Param AutoCompletion and Dynamic Parameters.  In this post, we’ll spend a LOT of time typing in the present to help our future selves save fractions of a second, because there’s no way we’ll become less lazy, right?  At the end of the day, we will have achieved the holy grail of Attaboys, and have Output Autocomplete working in our function.

Output AutoComplete

You know how in PowerShell you can type a cmdlet, then pipe into Select-Object or another cmdlet and start tabbing through property names?  This is the type of Autocompletion we are going to add to our function in this post!

gif

Not only does this save you from making mistakes, but it is amazingly convenient and really gives our functions a polished and professional look and feel.  PowerShell’s ability to do this highlights one of its distinguishing features as well!

Dynamic Type System

Warning: this next part is probably kind of boring

If you’re like me, you read things and then just gloss over all of the words and symbols you don’t know, assuming they’re unimportant.  If I just described you, then I hate to be the one to tell you this, but that is kind of a tremendous character flaw.  I’ll get around to why this is bad and how it relates to PowerShell, but first, let’s take a detour into my past.

Back in High School,  I REALLY liked anime and wanted to learn Japanese.  I was a cool kid, believe you me.  So I took a semester of Japanese after which I kind-of, sort-of knew how to read their alphabet.

Well, one of their three.  And only the easy alphabet.  Surely that will not come back to bite me, right?

So I, being very cocky and attractive (read: a 200 lb redhead with braces and a predilection for silky anime shirts with muscle dudes on them), was sure that I knew enough Japanese to survive in Japan, and I signed up for the foreign exchange student program.

And on my first night in Japan, I was greeted with this in the washroom.

Except mine had only Japanese characters on it…and two of the three were kanji (which I couldn’t read at all).  What the heck could the other ones be?  I knew that one was Shampoo but the other two?

I’d seen that my host family had been taking their toothbrushes with them into the washroom, so one of these had to be toothpaste, right?  There’s no way they had a toothpaste tube in the shower…right?  (Hint: they did.)  So one of them had to be toothpaste!

That means the other had to be body wash!

And that’s how I spent a week in Japan, brushing my teeth with bodywash and trying to get clean using conditioner.  I will say this though…the hair on my arms was positively luxurious!  Eventually my host mom realized what I was doing and boy did she have a good laugh.

How does this relate to PowerShell again?

Well, I was guilty of skipping over things in the PowerShell world too…like the phrase ‘dynamically typed scripting language’.  I knew what a scripting language was, but had no clue what the hell types were, or why I’d want them to be dynamic.  If you stop reading right now and go off and google about PowerShell, chances are you’ll see it explained like this:

Windows PowerShell includes a dynamically typed scripting language which can implement complex operations […] – WikiPedia.

 

You’ll find it described this way EVERYWHERE: in books, forums, blog posts.  I even used to say the phrase in my training classes and just hoped no one would ask me what it meant.  If they did ask, I would call an emergency bathroom break and hide until they hopefully forgot their question.

Now, let’s talk about why DynamicTyping is awesome.

Why PowerShell’s dynamic type system is awesome

In a lot of programming languages, the type of variable or object must be specified before you can use it, like in C#.

int i, j, k;
char c, ch;
float f, salary;
double d;

If you want to use these variables, you’d better specify them ahead of time!

In PowerShell, variable types can be inferred based on the type of object. You can even have many types of object living in a variable, like so:

$a = 1, "ham", (get-date)

We don’t have to define the type of an object ahead of time; PowerShell does it all for us.  We can also convert items back and forth into different types.  This kind of flexibility is PowerShell’s Dynamic Type system in action!
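A quick illustration of that back-and-forth conversion, nothing fancy, just casts:

[int]'42' + 8         # the string '42' is converted to an integer, giving 50
[string](Get-Date)    # a DateTime converted back into a string
'10' * 3              # the int stays an int and the string is repeated: '101010'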

PowerShell further offers an adaptive type system. We can run Get-ChildItem, which gives us a list of files and, by default, shows us only the Mode, LastWriteTime, Length, and Name properties.

default-properties

How does PowerShell know what properties to display?  This all comes down to the PowerShell type system again.

If we pull a single object and pipe it over to Get-Member, we can see which type of object we’re working with:

seeingthetype

This means that somewhere, PowerShell knows what type of properties a System.IO.FileInfo object should emit, and informs IntelliSense so that we can autocomplete it.  It also knows which properties to display by default and how to display them.  This all comes down to a whole boatload of .ps1xml files that live on your system.
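You can see that boatload for yourself; in Windows PowerShell, the stock definitions sit right in the install directory:

# The built-in type and format definition files that ship with Windows PowerShell
Get-ChildItem $PSHOME -Filter *.ps1xml | Select-Object Name, Length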

However, we don’t have to go editing XML files if we want to tweak which properties are displayed, PowerShell is adaptive.  We just need to Adapt…or Update things a bit.

But wait, does that mean I can change the properties for the type?

That’s a great question, and it’s one of my absolute favorite tricks in PowerShell.  Thanks to its Adaptive Type System, we CAN change the properties for a type.

PowerShell 3.0 added the awesome Update-TypeData cmdlet, which lets us append new properties to existing types.  And it’s SO easy.

I used to always run some code like this, which would allow me to see the file size of a file in MBs, and show me some of the existing properties, then append my own calculated property to it.

Dir | select -first 4 LastWriteTime,Name,Length,`
@{Name='MB';exp={[math]::Round(($_.Length / 1mb))}}

Here it is in action:

But…there’s a better way!  I took the same logic, and implemented it by modifying the typedata for System.IO.FileInfo.  This is done using Update-TypeData and providing a scriptblock to instruct PowerShell as to how it should calculate our new property.  Just swap your $_ references for $this and you’re golden.

Update-TypeData -TypeName system.io.fileinfo -MemberName MB `
    -MemberType scriptproperty -Value {
	if ($this.Length -le 10000){
         'NA'
         }
         else{
        [math]::Round(($this.Length / 1mb),2)}
}

One caveat: you have to manually specify this new property with a select statement. I haven’t found a way around it…yet!
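In other words, once the Update-TypeData call above has run, the new property shows up as soon as you ask for it:

# MB only appears when explicitly selected; it isn't part of the default view
Get-ChildItem | Select-Object -First 4 Name, Length, MB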

The Type Information we’ve been talking about here is the key to how PowerShell knows which properties to display, and also how PowerShell cmdlets know which properties your cmdlet will output.  This in turn is how we’re able to populate AutoComplete Data!

How do we tell PowerShell what our cmdlet is going to output?

There are two things we need to do to instruct PowerShell as to what our cmdlet will be emitting, which is needed to enable that cool AutoCompletion.

  • Define a new object type by creating a .ps1xml  file
  • Add the .ps1xml file to our module manifest or module file
  • Modify our functions to add an [OutputType()] value
  • Wonder why Stephen can’t count to 3
PS1XML files aren’t that scary

If you’re like me, you’ve avoided .ps1xml files for your whole PowerShell career.  Time to buck up cowboy, they’re not so bad!

We’ll start by modifying one of the built-in files, or use this one from my soon-to-be-released PSAirWatch PowerShell module.

Let’s look into what we need to define here:

<?xml version="1.0" encoding="utf-8" ?>
<Types>
  <Type>
    <Name>AirWatch.Automation.Object.Device</Name>
    <Members>
      <ScriptProperty>
        <Name>AcLineStatus</Name>
        <GetScriptBlock>
          $this.AcLineStatus
        </GetScriptBlock>
    </ScriptProperty>

First, you define the name of this new type of object. You can pick literally anything but I like the format of ModuleName.Automation.Object.TypeOfObject. Next, you add a <Members> node and within it you place a pretty self-descriptive block which includes the name of a property, and then the code used to resolve it.

In this syntax, you’ll be using the special $this variable, which we don’t see too often in PowerShell.  Think of it as a stand-in for $PSItem or $_.

Rinse and repeat, defining each of the properties you want your object to emit.  This is also where you can use a nifty value called the DefaultDisplayPropertySet to choose a small subset of your properties as the default values to be displayed.

This is a very nice ‘warm-fuzzy’ feature to have in your functions, because it makes them act more like standard PowerShell cmdlets.  Go ahead and define a dozen properties for your objects and then also provide a default set, and when the user runs your cmdlet, they’ll see just the most relevant properties.  However, a PowerShell PowerUser will know to pipe into  Get-Member or Format-List and be impressed when they suddenly have a lot of extra properties to choose from.

Here’s how it looks to specify a DefaultDisplayPropertySet, if you’re interested.


<MemberSet>
<Name>PSStandardMembers</Name>
<Members>
<PropertySet>
<Name>DefaultDisplayPropertySet</Name>
<ReferencedProperties>
<Name>DeviceFriendlyName</Name>
<Name>DeviceID</Name>
<Name>Model</Name>
<Name>LastSeen</Name>
</ReferencedProperties>
</PropertySet>
</Members>
</MemberSet>
</Members>
</Type>

That’s it for creating the type in XML.  Now you need to modify your PowerShell module to import your type file.  You can do this in a manifest file (which I’ll cover in a future blog post), or you can very easily do it by adding a line like this to the bottom of your .psm1 module file.

Update-TypeData -PrependPath $PSScriptRoot\Types.ps1xml -Verbose

Finally, we simply modify our Functions in our module like so


function Get-AWDevice
{
[CmdletBinding()]
[Alias()]
[OutputType('AirWatch.Automation.Object.Device')]
Param
(
# How many entries to provide, DEFAULT: 100

Now when the module is imported and I pipe into Get-Member, my object type is displayed.

And all of my new properties are there too…but the real test…do I see my values?

VICTORY!

One last thing…

If you spent a lot of time in your .ps1xml file, or if you went over and above and made a Format.ps1xml file customizing how your objects should be formatted or displayed in -Table or -List view, you might be dismayed to see that PowerShell ignores your beautifully tailored formatting instructions.  I know I was.

So, earlier when we added an [OutputType()] to our function, we were providing instructions that the IntelliSense engine uses to provide AutoCompletion services to our end user.  However, PowerShell does not force our output or cast it into our desired OutputType; we’ve got to do that ourselves.

You could get really fancy and instantiate an instance of your type and use that to cast your object into it…but the really easy way to do this is to scroll to the bottom of your function, wherever you actually emit an output object, and add this line.


$output | % {$_.PSobject.TypeNames.Insert(0,'AirWatch.Automation.Object.Device') }

This will instruct PowerShell to interpret your custom object output as the desired type, at which point the formatting rules will be applied.

And if you haven’t created a Format.ps1xml file, worry not, as we’ll be covering it in a later blog post.

Sources

This was one of those posts that in the beginning seemed deceptively simple and made me say ‘hmm, I know enough about the topic…surely I can write this in two hours’.  Incorrect.  I probably spent a solid 40 hours researching and writing this post, easily.  And I had to do a lot of reading along the way.  If you’ve gotten this far and wonder how I learned about it, these articles might be of interest to you.


Extracting and monitoring web content with PowerShell


 

This kind of request comes up all the time on StackOverflow and /r/PowerShell:  “How can I extract content from a webpage using PowerShell?”

This post COULD have been called ‘Finding a Nintendo Switch with PowerShell’, in fact!  I have been REALLY wanting a Nintendo Switch, and since I’ll be flying up to NYC next month for Tome’s NYC TechStravaganza (come see me if you’ll be in Manhattan that day!), it’s the perfect justification for She-Who-Holds-The-Wallet for me to get one!

But EVERYWHERE is sold out.  Still!  😦

However, the stores have been receiving inventory every now and then, and I know that when GameStop has it in stock, I want to buy it from them!  So, since I’ve got a page I want to extract, my first step is to load the page!

GameStop Nintendo Switch with Neon Joycons

First things first: let’s load this page in PowerShell and store it in a variable. We’ll be using Invoke-WebRequest to handle this task.

$url ='http://www.gamestop.com/nintendo-switch/consoles/nintendo-switch-console-with-neon-blue-and-neon-red-joy-con/141887'
$response = Invoke-WebRequest -Uri $url

Next, I want to find a particular element on the page, which I’ll parse to see if it looks like they have some in stock. For that, I need to locate the ID or ClassName of the particular element, which we’ll do using Chrome Developer Tools.

On the page, right-click ‘Inspect Element‘ on an element of your choosing.  In my case, I will right-click on the ‘Unavailable’ text area.

This will launch the Chrome Developer Console, and it should have the element selected for you in the console, so you can just copy the class name.  You can see me moving the mouse around; I do this to see which element is the most likely one to contain the value.

 

You want the class name, in this case ats-prodBuy-inventory.  We can use PowerShell’s wonderful HTML parsing to do some heavy lifting here, by leveraging the HTMLWebResponseObject‘s useful ParsedHTML.getElementsByClassName method.

So, to select only the element in the body with the class name of ats-prodBuy-inventory, I’ll run:

$response.ParsedHtml.body.getElementsByClassName('ats-prodBuy-inventory')

This will list ALL the properties of this element, including lots of HTML info and properties that we don’t need.

To truncate things a bit, I’ll select only properties which have text or content somewhere in the property name.

$response.ParsedHtml.body.getElementsByClassName('ats-prodBuy-inventory') | select *text*,*content*

The output:

innerText         : Currently unavailable online
outerText         : Currently unavailable online
parentTextEdit    : System.__ComObject
isTextEdit        : False
oncontextmenu     :
contentEditable   : inherit
isContentEditable : False

Much easier to read.  So now I know that the innerText or outerText properties will let me know if the product is in stock or not.  To validate, I took a look at another product which was in stock, and saw that it exposed the same properties.

All that remained was to take this few-liner and convert it into a script which loops once every 30 minutes, exiting when the message text on the site changes.  When it does, I use a tool I wrote a few years ago, Send-PushMessage, to send a PushBullet message to my phone and give me a heads-up!


$url ='http://www.gamestop.com/nintendo-switch/consoles/nintendo-switch-console-with-neon-blue-and-neon-red-joy-con/141887'

While ($($InStock -eq $notInStock)){
$response = Invoke-WebRequest -Uri $url
$classname ='ats-prodBuy-inventory'
$notInStock = 'Currently unavailable online'

$InStock = $response.ParsedHtml.body.getElementsByClassName($classname) | select -expand innertext
"$(get-date) is device in stock? $($InStock -ne $notInStock)`n-----$InStock"
Start-Sleep -Seconds (60*30)
}
Send-PushMessage -Type Message -title "NintendoSwitch" -msg "In stock, order now!!!!"

This is what I’ve been seeing…but eventually I’ll get a Push Message when the site text changes, and then, I’ll have my Switch!

Willing to help!

Are you struggling to extract certain text from a site?  Don’t worry, I’m here to help!  Leave me a comment below and I’ll do my best to help you.  But before you ask, check out this post on Reddit to see how I helped someone else with a similar problem.

reddit/r/powershell: Downloading News Articles from the Web

 


POWERSHELL DECONSTRUCTED


DECONSTRUCTED

We’re all adventurers.  That’s why we wake up in the morning and do what we do in our fields, for that feeling of mastery and uncovering something new.  Some of us chart new maps, cross the great outdoors, or climb mountains.

And some of us explore code.

In this post, I’ll outline my own such PowerShell adventure, and show you the tools I used to come out the other side with a working solution.  We’ll meet in basecamp to prepare ourselves with the needed gear, plan our scaling strategy and climb the crags of an unknown PowerShell module.  We’ll belay into treacherous canyons, using our torch to reveal the DLLs that make Windows work, then chart new ground using DotPeek and eventually arrive on the summit, victorious and armed with new tools.

 

Basecamp – The Background

I’ve been working through a big MDM roll-out concept for a client recently, looking to use Windows 10’s new mobile device management capabilities as an interesting and resilient alternative to tools like ConfigMgr, for a new management scenario.

I needed to script the process of un-enrolling and re-enrolling devices in MDM, because we expect that a small percentage of devices will stop responding after a time, and we want to be prepared for that contingency.  This is done by removing and reinstalling a Provisioning Package, which is a new management artifact available to us in Windows 10.

Windows 10 1703 Release (the Creator’s update) conveniently has a nice new PowerShell module full of cmdlets we can use for this task!
00 module

However, we’re targeting a different release which doesn’t have this module available.  When I brought this information to the client, the response was ‘we have confidence that you can make this work’. Let’s break out the sherpa hats!

First things first, I tried just copying and pasting the module folder, but that didn’t work, sadly.

If only there were some way to look inside a cmdlet and see what’s going on under the covers….

Understanding our Gear

We’ll have a few pieces of gear which will be absolutely essential to this expedition.

Tool                   Function
----                   --------
Get-Command            Retrieves the definition of modules and cmdlets
DotPeek                Decompiler for binaries
Visual Studio Code     Pretty editor for code screenshots

Looking into a Cmdlet the easy way

Get-Command is definitely the tried and true method of uncovering what a function does and how it works.  Time to break out our handy climbing axe and start picking away here.

Take your command and run it through Get-Command <cmdname> | select -Expand Definition.  This will show you the definition of the cmdlet, which may have some good clues for us.

Any script cmdlets or functions, like the very useful Out-Notepad, will have a field called Definition which shows exactly how the cmdlet works.  You can pick up some neat tricks this way.


gcm Out-Notepad | select -ExpandProperty Definition

#this function is designed to take pipelined input
#example: get-process | out-notepad

Begin {
#create the FileSystem COM object
$fso=new-object -com scripting.filesystemobject
$filename=$fso.GetTempName()
$tempfile=Join-Path $env:temp $filename

#initialize a placeholder array
$data=@()
} #end Begin scriptblock

Process {
#add pipelined data to $data
$data+=$_
} #end Process scriptblock

End {
#write data to the temp file
$data | Out-File $tempfile

#open the tempfile in Notepad
Notepad $tempfile

#sleep for 5 seconds to give Notepad a chance to open the file
sleep 5

#delete the temp file if it still exists after closing Notepad
if (Test-Path $tempfile) {del $tempfile}
} #end End scriptblock

However in this case with our Provisioning Cmdlets, PowerShell was just NOT giving up any secrets.


Get-Command Install-ProvisioningPackage | select -exp Definition

Install-ProvisioningPackage [-PackagePath] <string> [-ForceInstall] [-QuietInstall] [-LogsDirectoryPath <string>] [-WprpFile <string>
] [-ConnectedDevice] [<CommonParameters>]

This will often be the case for binary, or dll-driven cmdlets.  We have to climb a little higher to see what’s going on here.

Looking inside a Cmdlet the fun way

When our Command Definition is so terse like that, it’s a clue that the actual logic for the cmdlet is defined somewhere else. Running Get-Command again, this time we’ll return all the properties.

It turns out that the hidden core of this whole Module is this DLL file.

If you’ve been working around Windows for a while, you have definitely seen a DLL before.  You may have even had to register one with regsvr32, but did you ever stop to ask…

What the heck is a DLL anyway? 

DLLs are Microsoft’s implementation of a programming concept known as shared libraries.

In shared libraries, common code which might be present in many applications (like dependencies) is instead bundled into a dynamic link library and loaded into memory once. In this model, many apps can share the same core functionality (like copy and paste) without having to roll their own solution or reload the same code.

This model allows for smaller, more portable applications while also providing more efficient use of a system’s resources.

 

TL;DR: if code contains something really useful that might be needed elsewhere (like procedures, icons, or core OS behaviors), store it in a DLL so other things can reference it.

And as it turns out, many PowerShell modules do just this!

We can find the path to this module’s DLL by running Get-Command Get-ProvisioningPackage | Select Dll
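Spelled out as a command you can paste (the output shown is the assembly location reported by the decompiler later in this post):

Get-Command Get-ProvisioningPackage | Select-Object -ExpandProperty DLL
# C:\Windows\System32\WindowsPowerShell\v1.0\Modules\Provisioning\provcmdlets.dll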

Now, let’s open it in my editor of choice, Notepad.

Yeah… we’re going to need a different tool.

Choosing a Decompiler

When it comes to picking a decompiler or text editor, if we’re not careful we’ll end up looking like this guy:

I choose mountain climbers because I think their tools look SOO cool. I should buy an ice-axe

There are a lot of options; I worked through .NET Reflector and IDA Pro before finally stopping on DotPeek, by JetBrains.  You can download it here.  I chose DotPeek because it’s free, not just a free trial like .NET Reflector, and it’s very up-to-date and designed with .NET in mind.  IDA Pro does a good job, but I got the impression that it is SO powerful and flexible that it isn’t as good as a tailor-made .NET tool.

It is free, as in beer, and is an AWESOME tool for digging into DLL files.  Install it, then launch it and click Open.

Next, paste in the path to our DLL file, then expand the little arrow next to the ProvCmdlets assembly.

Open DLL

Here’s a breakdown of what we’re seeing here.

Working our way through this, we can see the loaded Assemblies, or DLL files that we are inspecting.  If you expand an Assembly, you’ll see the NameSpaces and Metadata inside it.  We’re more concerned with NameSpaces here.

Protip: the References section lists out all of the assemblies (other DLL files) that this assembly references.  If you attempt an ill-advised effort to port a module to another version of Windows, you’ll need to bring along all of these files (or ensure they’re the right version) at a minimum to prevent errors.

Inside of NameSpaces, you can see Class definitions.  Most binary cmdlets are built around namespaces, and will often match the format of the cmdlets themselves.

Since I’m interested in seeing what happens when I call the Install-ProvisioningPackage I’ll take a look at the Class definition for the InstallProvisioningPackage Class by clicking the arrow.

This shows us the Methods and the Params that the class exposes.

We can also double-click the cmdlet itself to see the full source code, which is shown below.  I’ve highlighted the action bits, down in the ProcessRecordVirtual() method.


// Decompiled with JetBrains decompiler
// Type: Microsoft.Windows.Provisioning.ProvUtils.Commands.InstallProvisioningPackage
// Assembly: ProvCmdlets, Version=10.0.0.0, Culture=neutral, PublicKeyToken=null
// MVID: 2253B8FF-A698-4DE9-A7F2-E34EDF8A357E
// Assembly location: C:\Windows\System32\WindowsPowerShell\v1.0\Modules\Provisioning\provcmdlets.dll

using Microsoft.Windows.Provisioning.ProvCommon;
using System;
using System.IO;
using System.Management.Automation;

namespace Microsoft.Windows.Provisioning.ProvUtils.Commands
{
[Cmdlet("Install", "ProvisioningPackage")]
public class InstallProvisioningPackage : ProvCmdletBase
{
[Parameter(HelpMessage = "Path to provisioning package", Mandatory = true, Position = 0)]
[Alias(new string[] {"Path"})]
public string PackagePath { get; set; }

[Parameter]
[Alias(new string[] {"Force"})]
public SwitchParameter ForceInstall { get; set; }

[Parameter]
[Alias(new string[] {"Quiet"})]
public SwitchParameter QuietInstall { get; set; }

protected override void Initialize()
{
this.PackagePath = Path.IsPathRooted(this.PackagePath) ? this.PackagePath : Path.GetFullPath(Path.Combine(this.SessionState.Path.CurrentFileSystemLocation.Path, this.PackagePath));
if (File.Exists(this.PackagePath))
return;
this.ThrowAndExit((Exception) new FileNotFoundException(string.Format("Package '{0}' not found", (object) this.PackagePath)), ErrorCategory.InvalidArgument);
}

protected override void ProcessRecordVirtual()
{
this.WriteObject((object) PPKGContainer.Install(this.TargetDevice, this.PackagePath, (bool) this.ForceInstall, (bool) this.QuietInstall), true);
}
}
}

It feels familiar… it feels just like an Advanced Cmdlet, doesn’t it?  PowerShell has been sneakily tricking us into becoming Programmers, yet again!

Once we scroll past the param declarations, we can see that this cmdlet’s Initialize() method determines if the user provided a valid package path, and then .ProcessRecordVirtual() gets called.

protected override void ProcessRecordVirtual()
{
this.WriteObject((object) PPKGContainer.Install(this.TargetDevice, this.PackagePath, (bool) this.ForceInstall, (bool) this.QuietInstall), true);
}

This line of code determines which params have been provided, then calls the PPKGContainer class to use that class’ Install() method.  Let’s right-click on PPKGContainer, then ‘Go To Declaration’ to see how that works!

Higher and Higher

The useful PPKGContainer class is actually defined in a separate DLL file, Microsoft.Windows.Provisioning.ProvCommon, and contains a number of its own methods too.  We are concerned with Install().


public static ProvPackageMetadata Install(TargetDevice target, string pathToPackage, bool forceInstall, bool quietInstall)
{
    {...}
    int num = target.InstallPackage(pathToPackage, quietInstall);

There’s a lot to unpack here.  When this method is called, the cmdlet creates a new TargetDevice object, and refers to it as target, as seen in line 1.  Then, down on line 4, we call the  target's own InstallPackage() method.

That means just one more step and we’ll finally be there, the summit of this cmdlet.   We right-click on TargetDevice and then ‘Go to implementation’ and then hunt for the InstallPackage() Method.  Here it is y’all, feast your eyes!

Oh man, there’s a lot going on here…but if we pare it all away we see that it takes params of a path to the PPKG file, and then a switch of QuietInstall.  And then we…resolve the path to PROVTOOL.exe…huh, that’s weird.

Next…we build a string with the path to the file…then add a ‘/quiet' to the string…oh no, I see where this is going.  Surely the RemovePackage method is more advanced, so let’s take a look at that!


Double-clicking on TargetDevice, we can then scroll down to the RemovePackage method to see how this one works.

digging into uninstall method

We’re so close now guys, I’ve got a feeling that this will be worth it!

The closest thing that I could find to a fox in a winter coat.

The Summit

What do we actually see at the very top of this module?  The true payload, hidden from all eyes until now?

That’s it?  It just calls ProvTool.exe <pathtoFile.ppkg> /quiet?


I dunno, I expected a little more, something a bit cooler.  It’s like climbing Mt Fuji to see only this.

Image Courtesy of WaneringVegans

Well, after hours of work, I certainly had to see what happened if I just ran that exact command line on another machine.  Only one way to find out.

I ran it and then pulled the Windows EventLogs and…It worked!
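For the curious, the test amounted to a one-liner run from an elevated PowerShell prompt.  Treat this as a sketch: the .ppkg path is a placeholder, and it assumes provtool.exe resolves from the system path.

# Roughly the test I ran; the package path is a placeholder for your own .ppkg
provtool.exe 'C:\Temp\MyEnrollment.ppkg' /quiet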

That’s it kids, the Tooth Fairy ain’t real, I’m beginning to have some doubts about Santa, and sometimes deep, deep within a module is just an old unloved orphan .exe file.

Thanks DotPeek, I guess.

Where do we go from here?

I hope this post helped you!  Later on, I’ll be looking for other interesting PowerShell modules, and hope to learn about how they work under the covers!  If you find anything interesting yourself, be sure to share it here with us too!  Thanks for reading!

Want to read more about Climbing Mt. Fuji?  Check these out:


The quest for true silent MDM Enrollment


If you’ve been reading my blog recently, you’ve seen a lot of posts about MDM and Provisioning Options for Windows 10.  Previously we’ve covered:

And in this post we will dig further into the options available to us to deploy a Provisioning Package with the goal of allowing for silent MDM Enrollment and Silent application of a provisioning package!

Why are we doing this?

In my situation, my customer is deploying Win10 Devices managed with Air-Watch in order to use the native Windows 10 MDM client, but we need an easy way to enroll them into Air-Watch when they boot!

You can use the Windows Image Configuration Designer tool to capture all of the settings needed to enroll a device, then bake them into a Provisioning Package which an end-user can double-click to enroll after a short prompt.

However, for our purposes, devices arrive built and ready for service at our endpoints, so we needed to examine other approaches to find a path to truly silent enrollment!

Prerequisites

First things first, you’ll need to be signed up for an MDM Service.  In this guide I’ll assume you’re signed up for Air-Watch already (I’ll update it later with InTune once I am able to get this working as well)

From the Air-Watch console, browse to Settings \ Devices \ Windows \ Windows Desktop \ Staging & Provisioning.  You’ll see the necessary URLs.

Make note of these and fire up Windows Imaging Configuration Designer.  You can obtain this via the Windows Store on Windows 10 1703 or higher 🔗.  It also ships as part of the Windows ADK, and if you want to bake a Provisioning Package into a Windows image, you’ll need the ADK.

Click ‘New Simple Provisioning Package’ and provide a name for this project.

This screen gives you a nice taste of some of the things you can do in a PPKG, but we are going to be digging deeper into the options, so click ‘Switch to Advanced Editor’

Click ‘All Settings’ under Available Customizations at the top, then scroll down to Runtime Settings \ Workplace \ Enrollments

Fill this in with the info we noted from the AirWatch screen earlier.

At this point, we’re ready to export the Provisioning Package and think about our options for distribution.

Click Export, then Provisioning Package.

For now, we can Next, Next, Finish through the wizard.

And the output is two files: a .PPKG and a .CAT file.  The .CAT is a Security Catalog file, a management artifact that contains signatures for one or many files.

For 99% of your PPKG needs, you don’t need to worry about the .CAT file, just deploy the PPKG file and away you go.

How to distribute Provisioning Packages

We have a number of ways we can distribute this file, but this cool thing about it is that once invoked, the user is going to get automatically enrolled into MDM Management!  Here are our options, which we’ll cover for the rest of the post:

1. Send to Users (Not silent)
2. Apply during OOBE
3. Bake into Image
4. Truly Silent Enrollment and Control: Sign the PPKG

EASY – Send to Users

If you’re in a normal environment with users able to follow basic instructions (big assumption 🙂 ) you can just e-mail the PPKG file out to your end users and instruct them to double-click it.  They’ll be presented with the following prompt, which will enroll them in MDM and along they go.

However for my purposes, this wasn’t a viable option.  We’d heard about automatic provisioning being available at image time, so we decided to take a look into that approach.

Apply at OOBE

If you’re not familiar with the term, OOBE is the Out-Of-Box-Experience.  It’s a part of Windows Setup and can be thought of as the ‘Getting Devices Ready’ and Blue-background section of OS install, in which the user is asked to provide their name, password, etc.

Well, it turns out that if the PPKG file is present on the root of a Flash Drive or any volume during OOBE, the package will be automatically detected and the user prompted to accept it!

Protip: If your PPKG isn’t automatically invoked, hit the Windows Key Five times when at the ‘Let’s Pick a Region’ Screen.

However, this STILL requires someone to do something…and assumes we’ll have a keyboard attached to our systems.  This would be good for schools or other ‘light-touch’ scenarios, but it was a non-starter for me, so on to the next approach.

Bake into Image

You can also just bake all of your Provisioning Settings directly into an image.  Going back to WICD, you can choose ‘Export Production Media’ and follow the wizard, which will create a deployable file structure.  You can then deploy that with MDT, SCCM or (ugh) ImageX.  However, if you want to convert this into a .WIM file, follow Johan’s wonderful guide to the topic here.

http://deploymentresearch.com/Research/Post/495/Beyond-Basic-Windows-10-Provisioning-Packages

Pro-tip: Note that in the PowerShell example there, you’ll need to change line 19 to match the desired path you specify in line 3.

If you have access to your hardware while imaging, this is a great strategy.  You could even use the ‘Apply Provisioning Package’ step as an alternative method to enroll devices.

Truly Silent Deployment – Signed PPKGs

Finally, the real reason for this post.  We order customized hardware from a vendor tailored for our needs but couldn’t use any of the methods covered here.  However…we CAN leverage a PKI instead.

Note: For ease of testing, this guide will cover using a Self-Signed Certificate instead.  However, you can easily do this using an internal Public Key Infrastructure if you have one available.

To outline what we’ll do here:

  • On WICD Workstation
    • Create a Code-Signing Cert
    • Move a copy of it into your Trusted Root Cert Authorities
    • Export a .cer copy of the cert
    • Sign your PPKG
  • On Base image or on each computer
    • Import .cer file into Trusted Root Cert Authority
    • Move copy into Trusted Provisioners Store
  • Execute the PPKG, which will run silently

GIANT DISCLAIMER: This approach is VERY tricky and has a lot of moving parts.  It’s easy to get wrong, and it has been mostly replaced by a new PowerShell module titled ‘Provisioning’, which ships with the Windows 10 1703 (Creators Update) release.  Its Install-ProvisioningPackage cmdlet makes it a snap!

`Install-ProvisioningPackage -QuietInstall`

If you have that module / option available, you are advised to use it instead of the Signed PPKG approach.
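For reference, the cmdlet takes a path to the package plus Force and Quiet switches, so a fully silent install looks something like this.  Treat it as a sketch: the module import may happen automatically on 1703+, and the package path is a placeholder.

Import-Module Provisioning
Install-ProvisioningPackage -PackagePath 'C:\Temp\MyEnrollment.ppkg' -ForceInstall -QuietInstall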

Are you still here with me?  Then you’re my kinda coder!

On our PPKG Creation Computer

First, let’s create a new CodeSigning Certificate, then export a .cer version of it, which we reimport into Trusted Root Cert Authorities. We’re doing these steps on the workstation where we build our Provisioning Packages.


$NewCert = New-SelfSignedCertificate -DnsName 101Code.FoxDeploy.com -Type CodeSigning -CertStoreLocation Cert:\CurrentUser\My

#Export the cert
$NewCert | Export-Certificate -FilePath c:\temp\DistributeMe.cer

#Reimport to Trusted Root Cert Authorities
Import-Certificate -FilePath C:\temp\DistributeMe.cer -CertStoreLocation Cert:\CurrentUser\Root

You’ll see this prompt appear, asking if you’re really super sure you want to add a new Trusted Root Certificate Authority.  Say Yes.

With these steps done, fire up WICD again and go to Export Provisioning Package.

Provide a name and Version Number like normal and hit next.  The video below guides us through the rest.

Talking through that, in the next page, choose the Certificate to sign it.  This needs to be the same cert that will be trusted on your end computers as well.  If you don’t see your cert listed, make sure (for Self-Signed) that it’s also in your Trusted Root Cert Authority.  If you’re using PKI, be sure you have an authorized Server Auth or Code Signing Cert present from a CA that your computer trusts.

Copy the .cat and .PPKG file.  Yep, we must have the .CAT file this time, don’t forget it.

Preparing the image

Now, for our end-user actions.  There are a lot of ways to do this but the easiest way to do it is in your Windows Image before capturing it.

Take the cert we created earlier, called DistributeMe.cer, and push this out to your end computers.  You need to import this into the Trusted Root Cert Authority & the hidden Trusted Provisioners cert store, which is ONLY available via PowerShell and NOT the Certificate Manager snap-in.


Function Add-TrustedProvisioner
{
    Param ([String]$Path)

    Import-Certificate -FilePath $Path -CertStoreLocation Cert:\LocalMachine\Trust | Out-Null
    $Cert = Import-Certificate -FilePath $Path -CertStoreLocation Cert:\LocalMachine\My
    $Thumbprint = $Cert.Thumbprint
    New-Item HKLM:\SOFTWARE\Microsoft\Provisioning\TrustedProvisioners\ -Name Certificates -ErrorAction Ignore | Out-Null
    New-Item HKLM:\SOFTWARE\Microsoft\Provisioning\TrustedProvisioners\Certificates\ -Name $Thumbprint -ErrorAction Ignore | Out-Null
    Copy-ItemProperty "HKLM:\SOFTWARE\Microsoft\SystemCertificates\MY\Certificates\$Thumbprint" -Destination "HKLM:\SOFTWARE\Microsoft\Provisioning\TrustedProvisioners\Certificates\$Thumbprint" -Name blob
    Remove-Item Cert:\LocalMachine\My\$Thumbprint
}

Add-TrustedProvisioner C:\Temp\DistributeMe.cer

Import-Certificate -FilePath C:\temp\DistributeMe.cer -CertStoreLocation Cert:\LocalMachine\Root
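If you want to sanity-check the import before capturing the image, a quick peek at the two places the function writes to should show the new cert.  The Subject filter below assumes the self-signed FoxDeploy cert we made earlier; swap in your own.

# The signing cert should now be in the machine's Trusted Root store...
Get-ChildItem Cert:\LocalMachine\Root | Where-Object Subject -like '*FoxDeploy*'

# ...and its thumbprint should appear under the TrustedProvisioners registry key
Get-ChildItem HKLM:\SOFTWARE\Microsoft\Provisioning\TrustedProvisioners\Certificates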

Now, you can run SysPrep or otherwise capture this image, and the changes will persist. You could also run these steps by running a PowerShell script with SCCM, MDT, GPO or whatever you want.

With all of these steps in place, check out what happens when you invoke the Provisioning Package now!

Conclusion

Of course, in the cosmic ironies of the universe, the same week I worked through how to get Silent Enrollment working…AirWatch released a brand new .MSI based enrollment option which installs the AirWatch agent and handles all enrollment for you…but I thought that this must be documented for posterity.

Big, big thanks go to Microsoft’s Mark Kruger in the Enterprise Security R&D Team.  Without his advice, I would never have been able to get this working, so thanks to him!



Building a Windows 10 IoT C# traffic monitor: Part I


We’re counting down here at FoxDeploy, about to reach a traffic milestone (1 Million hits!) , and because I am pretty excited and like to celebrate moments like this, I had an idea…

I was originally inspired by MKBHD’s very cool YouTube subscriber tracker device, which you can see in his video here, and thought, boy I would love one of these!

It turns out that this is the La Metric Time, a $200 ‘hackable Wi-Fi clock’.  It IS super cool, and if I had one, I could get this working in a few hours of work.  But $200 is $200.

I then remembered my poor neglected rPi sitting in its box with a bunch of liquid flow control dispensers and thought that I could probably do this with just a few dollars instead (spoiler: WRONG)!

It’s been a LONGGG time since I’ve written about Windows IoT and Raspberry Pi, and to be honest, that’s mostly because I was getting lazy and hated switching my output on my monitor from the PC to the rPi.  I did some googling and found these displays which are available now, and mount directly to your Pi!

Join me on my journey as I dove into C#, bought parts on eBay from shady Chinese retailers and, in the end, got it all working.  And tried to do it spending less than $200 in additional dollars!

Necessary Materials

To properly follow along, you’ll need a Raspberry Pi of your own. Windows 10 IoT will work on either the Raspberry Pi 2B + or Raspberry Pi 3, so that’s your choice but the 3 is cheaper now.  Click here for one!

You’ll also need a micro SD card, but you probably have one already.  Get an 8 GB or bigger card and make sure it is fast and high-quality, like a Class 10 speed card.

Writing an SD Card is MUCH easier than it was in our previous post.  Now, it’s as simple as downloading the ‘IoT Dashboard‘ and following the dead simple wizard for Setting up a new device.  You can even embed Wi-Fi Connections so that it will connect to Wi-Fi too, very cool.  So, write this SD Card and then plug in your rPi to your monitor or…

Optional Get a display:

There are MANY, many options for displays available with the Raspberry Pi and a lot of them work…in Linux.  Or Raspbian.  Or PiBuntu or who knows what.  A lot of them are made by fly-by-night manufacturers who have limited documentation, or worse, expansive documentation that is actually a work of fiction.  I’ve bought and tried a number of them, here is what I’ve found.

Choosing the wrong display and hating your life

First out the gate, I saw this tiny little display, called the “LCD MODULE DISPLAY PCB 1.8 ” TFT SPI 128 x 160″.  I immediately slammed that ‘BUY’ button…then decided to look it up and see if it would work.

It’s literally the size of a postage stamp

While it works in some Linux distros, I could not make it work with Windows 10 IoT, as it just displays a white screen.  It is well, well below the supported minimum resolution for Windows (even if we could get it working, it could barely render the Start button and File Explorer icon on the taskbar), so it was no surprise.  There’s $10 down the drain.

Kind of off to a rocky start, at 25% of the price of the bespoke solution…surely, spending more money is the way out of this mess.

Next up, I found this guy, the 3.5″ Inch Resistive Touch Screen TFT LCD Display 480×320.

This one easily worked in Raspbian, but at such a low res, I could never get it to display a picture in Windows, just a white screen, indicating no driver.  I contacted the manufacturer and they confirmed support for Linux (via a driver written in Python!) but no Windows support.  At $35, it was more painful to box up.

From what I can tell, these are both Chinese counterfeits of displays made by WaveShare.   So at this point I decided to just legitimately buy the real deal from WaveShare, since they mention on their site that the screen does work with Windows 10 IoT.

If you’re doing the math, I was halfway there to the full solution already in pricing.

Wife: You spent HOW MUCH on this post?!

Choosing the right monitor and a sigh of relief

I eventually ponied up the dough and bought the 5inch HDMI LCD V2 800×480 HDMI display.  This one uses the HDMI connection on the rPi and also features a touch screen.  The screen even works with Windows 10 IoT!

It implements touch via a resistive touch panel rather than the standard capacitive touch.  And no one has written a driver for the touch panel  😦  So, it works, and it is a great size for a small project, but it doesn’t have touch.  At this point, I decided that this was good enough.

When I connected this display, I saw scrolling lines which changed patterns as the content on the screen changed.

This is a great sign, as it means that Windows is rendering to the display, but at the wrong refresh rate or resolution.

To fix this, remote in to your Raspberry Pi via the admin$ share, and change the Video section of your C:\EFIESP\Config.txt file.  Once you’ve made the change, reboot the device and the display will just work!
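If you haven’t browsed the admin share before, something along these lines gets you to the file; the IP address is a placeholder for your Pi’s, and you may be prompted for the device’s Administrator credentials.

# Connect to the Pi's admin share and open the config file for editing (the IP is a placeholder)
net use \\192.168.1.50\c$ /user:Administrator
notepad \\192.168.1.50\c$\EFIESP\Config.txt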

#
# Video
#

framebuffer_ignore_alpha=1  # Ignore the alpha channel for Windows.
framebuffer_swap=1          # Set the frame buffer to be Windows BGR compatible.
disable_overscan=1          # Disable overscan

hdmi_cvt 800 480 60 6 0 0 0 # Add custom 800x480 resolution (group 2 mode 87)
hdmi_group=2                # Use VESA Display Mode Timing over CEA
hdmi_mode=87

What we’re doing here is adding a new HDMI display mode and assigning it the numerical ID of 87 (since Windows ships with 86 HDMI modes, and none are 800 x 480!) and then telling windows to use that mode.  With all of these changes in place, simply restart your Pi and you should see the following

At this point I decided that I ALSO wanted touch, so I bought the 7″ model too (jeeze how much am I spending on this project??).  Here’s that one WaveShare 7inch HDMI LCD (C ).

I’ll follow up on this later about how to get touch working.  Update: scroll down to see how to enable the 7″ display as well!

Here’s my current balance sheet without the 7″ display included.  Don’t wanna give my wife too much ammunition, after all.

 

Intentionally not adding the last display to this list (and hiding credit card statements now too)

So, now that we’ve got our Pi working, let’s quietly set it off to the side, because we’ve got some other work to do first before we’re ready to use it.

Update: How to enable a 7″ Display

Here’s the display I mentioned, the WaveShare 7inch HDMI LCD (C ).

I love this screen!  I wholly recommend using this display for your Pi, it has built in touch which is 100% supported, it’s also a capacitive touch model with fused glass, so it looks like a high-end smart phone screen.  It’s super sexy.

If you buy this one, you can actually enable support for the screen when you first write the Win10 IoT image.  To go this route, when you write the OS onto the SD Card, open Explorer and go to the SD Card’s EFIESP partition.

EFIESP
If your Pi is on and the screen is off, or displaying scan-lines, you can hop in through the admin share instead.  Go to \\ipaddress\c$\EFIESP if you’re in that situation

Next, open Config.txt and add or change the final few lines to match this below.  Again only if you bought the 7″ display.  If you bought a different HDMI display, you can simply change the resolution to match.


init_uart_clock=16000000    # set uart clock to 16mhz
kernel_old=1                # load kernel.img at physical memory address 0x0

safe_mode_gpio=8            # a temp firmware limitation workaround
max_usb_current=1           # enable maximum usb current

gpu_mem=32
hdmi_force_hotplug=1        # enable hdmi display even if it is not connected
core_freq=250               # frequency of gpu processor core in mhz

framebuffer_ignore_alpha=1  # ignore the alpha channel for windows.
framebuffer_swap=1          # set the frame buffer to be windows bgr compatible.

disable_overscan=1          # disable overscan
hdmi_cvt 1024 600 60 6 0 0 0 # Add custom 1024x600 resolution (group 2 mode 87)

hdmi_group=2                # Use VESA Display Mode Timing over CEA
hdmi_mode=87

It’s that simple.  Be careful using this method, because if you go to the Device Portal on the device and check the Resolution settings there, our custom HDMI mode will not be displayed.  Fiddling with the settings in Device Portal can force your Pi to reboot and erase your settings, forcing you to go through this process again.

Getting Started with C#

Windows 10 IoT can run apps written in JavaScript, Python and C#.  It also supports PowerShell remoting, but if we go that route we can’t use the typical PowerShell and XAML approach we’ve been using.  And a GUI was crucial to this whole project. So, for the first time ever, we are going to write this natively in C# and XAML.

Since I was just getting my toes wet, I decided to start super simply with a basic hello world console app in C#.  I followed this guide here.  Soon enough, I had my own hello world app! Launch Visual Studio, make a new Project and then choose Visual C# \ Console Application. Then, erase everything and paste this in.

// A Hello World! program in C#.
using System;
namespace HelloWorld
{
    class Hello
    {
        static void Main()
        {
            Console.WriteLine("Hello Foxy!");

            // Keep the console window open in debug mode.
            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }
}

If you look through the code, it’s not THAT far away from PowerShell. Sure, there are way more nested code blocks than we’d normally have in PowerShell, but essentially all we do is call Console.WriteLine(), which is the C# equivalent of Write-Host, and pass it the string that gets written to the screen.  Then we end by waiting for the user to hit a key with Console.ReadKey();.
I hit Compile (F5) and boom!

What does using mean?

C# makes use of Namespaces.  Namespaces are a way of organizing code into different modules that might be importable (on systems that don’t have them, you could add new namespaces with DLLs or by installing software), and they prevent code collision.  Our new program begins with using System; (called a directive, we’re directing our program to use the System namespace), which contains a lot of cool functions we need, such as Console.WriteLine().  If we didn’t begin the code by importing the System namespace, we’d have to write System.Console.WriteLine() every time, instead of just Console.WriteLine().


With that out of the way, and now that we are C# experts (let’s pause and add ‘Developer’ to our LinkedIn and StackJobs profiles too) I decided to move on to a basic WebRequest, following this great example.

Babies first WebRequest

I copied and pasted the demo and hit F5, only to see that this is pretty boring; it essentially just loads the Contoso page and displays the HTTP status code.  That, frankly, will not fly.

To spice things up a bit under the covers, I decided to instead hit the awesome JSONTest.com page, which has a bunch of nifty test endpoints like ip.JSONTest.com.  Hitting this endpoint will give you back your public IP address.  Nifty!  I simply changed the URL on line 18 to string url ="http://ip.jsontest.com"; and BOOM smash that F5.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Net;
using System.Web;
using System.Threading.Tasks;
using System.IO;

namespace WebHelloWorldGetIp
{
    class Program
    {
        static void Main(string[] args)
        {

            string url = "http://ip.jsontest.com/";

            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);

            request.Credentials = CredentialCache.DefaultCredentials;
            // Get the response.
            WebResponse response = request.GetResponse();

            // Display the status.
            Console.WriteLine(((HttpWebResponse)response).StatusDescription);
            // Get the stream containing content returned by the server.
            Stream dataStream = response.GetResponseStream();
            // Open the stream using a StreamReader for easy access.
            StreamReader reader = new StreamReader(dataStream);
            // Read the content.
            string responseFromServer = reader.ReadToEnd();

            // Write the response out to screen
            Console.WriteLine(responseFromServer);

            //clean up
            reader.Close();
            response.Close();

            //Wait for user response to close
            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }
}

How different things are…

A quick little note here: as you may have noticed on line 19, when we created a variable with string url = ".." we had to specify the type of variable we want.  PowerShell is dynamically typed, meaning it can determine the right type of variable for us as we go, but C# is NOT; it is statically typed.  Keep this in mind.  Furthermore, where PowerShell is very forgiving and case insensitive, C# is case sensitive.  If I define string name = "Stephen" and then later write Console.WriteLine("Hello "  + NAME ); I will get an error about an undefined variable.

We hit F5 and…

Sweet!  Now that we’ve got a working web request, the next step is to swap in the URL for WordPress’s REST API and see if we can get stats to load here in the console.  If we can, then we move the code over to Windows 10 IoT and try to iron out the bugs there too.

Querying WordPress from C#

In my usage case, I wanted to query the WordPress API, and specifically the /Stats REST Endpoint.  However, this is a protected endpoint and requires Authentication as we covered in a previous post on oAuth.

WordPress handles authentication by adding an Authorization header, which is simply a key-value pair in this format:

Key            Value
Authorization  Bearer <YourBearerTokenHere>

We are using the System.Net.Http.HttpClient class, which supports adding extra headers using this format, as seen below.

request.Headers["Authorization"] = "Bearer <YourKeyHere>";

Then I spiff things up a bit more as seen here (mostly adding a cool Fox ascii art), and get the following results:

This is nice, but it’s JSON and I want just the numerical value for Hits.

Visual Studio 2013 and up integrates NuGet right into the IDE, so it’s very easy to reference awesome community tools.  We’re going to add Newtonsoft.Json to the project, following the steps seen here.

With that in place, all we have to do is create a new JObject, which has a nifty .SelectToken() method you can use to pick an individual property out of parsed JSON.

JObject Stats = JObject.Parse(responseFromServer);
Console.WriteLine(Stats.SelectToken("views"));  

If you’d like to see the completed code, it’s here, and here’s the output.

Alright, now all I have to do is make a GUI, and port this over to the Raspberry Pi (which runs on .NET Core and only supports some of the libraries that full .NET does). Surely that will be very easy, right?

A good stopping point

Phew, this was fun! We learned which components to use (and which to avoid), learned a bit of C# background terminology, and even wrote our first web request, parsing JSON using NuGet packages we installed into our application.  This was awesome!

Stay tuned for Part II, dropping later this week! (this will link to Part II when available)



Building a Windows 10 IoT C# traffic monitor: Part II


Previously 🔗, we took off our socks and put our feet into the sand, and wrote our first C# Console application.  We built on it and added the ability to complete a web request and even parsed some JSON as well!  Along the way, we learned a bit of how things work in C# and some background on programming terms and functionality.

In this post, we will take our code and port it over to run on .net core, and hook up the results to the GUI. Stick with me here, and by the end you’ll have a framework you can modify to list your Twitter followers, your Facebook Feed, or monitor your own blog stats as well.

And if you do modify it…

Then share it!  You’ll find a “LookWhatIbuilt” folder in the repository.  You are encouraged to share screenshots, snippets, even your own whole project if you like, by sending a PR.  Once we have a few of these, we’ll do a Spotlight post highlighting some of the cool things people are doing.

Cracking open the IoTDefaultApp

When we imaged our rPi with the IoT Dashboard, it wrote the OS and also delivered the ‘IoT Core Default App’ to the device.  It’s pretty slick looking and gives us a very good jumping-off point to reskin things and have our app look nice.  We can view the code for the 🔗 Default App here on the MS IoT GitHub.

Since this is ‘baby’s first application’, we are going to modify this existing app to suit our purposes.  Download the sample from the link above and then double-click the Visual Studio Project .SLN file.

Over in the Solution Explorer in the right-gutter, expand out to IotCoreDefaultApp \ Views then click MainPage.xaml.

Here is the template we’re going to be modifying.

There’s kind of a lot going on here too, so I recommend that you power on your Pi now and see what the default app looks like, here’s a screen shot…

Please don't hack my internet IP address!

Redecorating the app

Me being me, of course I’m going to make it look pretty before I make it work, so I spent some time adding files, dragging the layout around, that sort of thing.  To add a new file, first, click to the Solution Explorer \ Assets folder, then right-click and choose ‘Add Existing Item’

Next, go to the Left Gutter \ Toolbox\ and choose the Image Control, then drag the area you’d like your image to appear.

Now, back on the Right Gutter \ Properties \ Common, use the Source dropdown box and pick your image.

PROTIP: be sure to use this process of adding an image and selecting it relatively, rather than specifying the full path to the file.

If you don’t, you can end up with the file not getting delivered with the app to your pi.  Not good!

 

I did a little bit of tweaking here, and here is where I ended up

I forgot to screen shot my first pass, sorry!

One of the core values of my job is to Make it work before you make it look pretty.  It really speaks to me, namely because I never do it.

We made it look pretty, now, to make it work

Hitting F7, or right-clicking and choosing ‘View Code‘ will show the c# behind this View.  Developers like to call the code behind a view the code-behind.

We see here a whole lot of references to assemblies


using IoTCoreDefaultApp.Utils;
using System;
using System.Globalization;
using System.IO;
using System.Net;
using System.Net.Http;
using Windows.Data.Json;
using Windows.Networking.Connectivity;
using Windows.Storage;
using Windows.System;
using Windows.UI.Core;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media.Imaging;
using Windows.UI.Xaml.Navigation;
using MyClasses;
using Windows.System.Threading;

Then we define a namespace for our app, called IotCoreDefaultApp, then a class called MainPage, which is where the entirety of the code for this app will live.  We also define a Dispatcher, which might be familiar from our post on 🔗multi-threaded GUIs with PowerShell.  Because our GUI is going to be multithreaded, we can’t just say Label.Text = "New Value"; instead, we’ll use a Dispatcher to enact the change for us.

namespace IotCoreDefaultApp
{
    public sealed partial class MainPage : Page
    {
        public static MainPage Current;
        private CoreDispatcher MainPageDispatcher;
        private DispatcherTimer timer;
        private DispatcherTimer GetStattimer;
        private DispatcherTimer countdown;
        private ThreadPoolTimer timerInt;
        private ConnectedDevicePresenter connectedDevicePresenter;

        public CoreDispatcher UIThreadDispatcher
        {
            get
            {
                return MainPageDispatcher;
            }

            set
            {
                MainPageDispatcher = value;
            }
        }

Next the MainPage() constructor gets defined, which kicks off some interval timers which run, um, on an interval and update UI info.  We’ll skip over some boring stuff (which you can read here 🔗), which consists of kind of boring house-keeping functions of this app.  Most of these run when something is clicked, or when a timer interval counts down.

Within the timer, (beginning line 65 or so) you’ll see that it gets started, then counts down 20 seconds and calls a function called timer_Tick.  All we have to do is define our own method, and then add it to timer_Tick and bam, it will automatically run on the interval specified (20 seconds, in this sample).


timer = new DispatcherTimer();
timer.Tick += timer_Tick;
timer.Interval = TimeSpan.FromSeconds(20);

this.Loaded += async (sender, e) =>
{
    await MainPageDispatcher.RunAsync(CoreDispatcherPriority.Low, () =>
    {
        UpdateBoardInfo();
        UpdateNetworkInfo();
        UpdateDateTime();
        UpdateConnectedDevices();
        timer.Start();
    });
};

this.Unloaded += (sender, e) =>
{
    timer.Stop();
};
}

Let’s see what else happens when timer_Tick gets called.  Double-click timer_Tick and choose ‘Go to Definition’ to jump there.

private void timer_Tick(object sender, object e)
{
    UpdateDateTime();
}

So, every 20 seconds, it runs and calls UpdateDateTime().  Care to guess what this function does?

Now that we’re familiar with how this works so far, let’s make our own method.

Making our own Method

I found a nice innocuous spot to add my method, in between two other methods and started typing.

I’m defining this as a private method, meaning that only code inside this class can use it.  Next, because performing a web request can take a few seconds to complete, and we don’t want the code to lock up and freeze here, we add the async modifier.  Finally, we add void because this code block will run the web request and update the UI, but doesn’t return a value otherwise.

A word on Async and Await

We want our code to be responsive, and we definitely don’t want the UI to hang and crash, so running things asynchronously is a necessity.  We can do that using C#’s state machine (more on that here) to ensure that the app will not hang waiting for a slow web request.

When you define a method as asynchronous, you also have to specify an await statement somewhere, to identify which code is allowed to run asynchronously while the rest of the app keeps running.

 

Now, let’s copy and paste the code we had working previously in the last post into the method and see if we get any squiggles.

Copying and Pasting our old code…why doesn’t it work?

We will have some squiggles here because we are bringing code from a full-fledged .NET app and now targeting .NET Core.  Core is cool…but it’s only got some of the features of full .NET.  Some stuff just won’t work.  I am on a mission to kill these red squiggles.

First off, we don’t have a Console to write to, so let’s comment out or delete those lines (the double forward-slash // is used to comment in C#).

Next, the HttpWebRequest class doesn’t offer the GetResponse() method when we target .Net Core for Universal Windows Apps.

Let’s delete GetResponse() and see if there is an alternative.

Now that I’ve swapped this for GetResponseAsync(), I get MORE squiggles.  This time, the squiggles are because I’m telling the program to run this asynchronously and keep on going…but I don’t tell it to wait for the response anywhere.

The way to fix this is to add an await to the command as well.  This makes sense too, because there is always going to be a slight delay when I run a web request.  I want my app to know it can run this method we’re writing, and then proceed to do other things and come back when the webrequest has completed to finish the rest of my method.

Yay, no more squiggles, time to actually run this badboy

I’m going to want to test the results from this, so I’ll set a breakpoint within my Test() method, so that I can see the values and results when this code runs.  I’m going to highlight this line and hit F9 to create a breakpoint, which will tell the debugger and my program to stop right here.

With all that done, I’ll modify the timer_Tick method to have it call my new Test() method.

Once every twenty seconds, the timer will expire and it will both update the time, and call our new method!

Pushing code to the Raspberry Pi

Pushing code to the Pi is easy peasy.  In the Right Gutter \ Solution Explorer, right-click your project and choose Properties.

Next, click Debug  then specify the Target Device as a Remote Machine. Then click Find

 Simply click your device and that’s it!

You might not even be asked for credentials. Nope, I don’t know why it doesn’t need credentials…

Now, finally, hit F5!

You’ll see a kind of lengthy build process, as the first boot or two of a Pi is really pretty slow.  Then you’ll see a longggggggg Windows OOBE screen displayed, which counts down and eventually picks the English language and Pacific Time Zone.  You can change this later by plugging in a mouse and keyboard.

Download link: Our code at this point should look something like this🔗.

Live Debugging

While our code is running, it will eventually swap over to the main page and display something along these lines.

If we have Visual Studio in the foreground, the app will pause when it reaches our breakpoint and we can see the values for each variable, in real time!

So, it looks like our web request completed successfully, but somehow the responseFromServer value looks like garbage.  Why might that be? Maybe HttpClient is different between full .NET and .NET Core?

Spoiler warning: it is different. 

We’re able to hit the endpoint, but then we get this gibberish.

\b\0\0\0\0\0\0\a`I�%&/m�{JJ��t\b�`$ؐ@������iG#

Fortunately I recognized this gibberish as looking kind of like a GZipped payload.  See, all modern browsers support GZip as a pretty good style of compression.  It’s so common that even Invoke-RestMethod and HttpClient just natively support it.  However, in .NET Core it’s an option we have to turn on.

And we’ll do it by defining a new HttpClientHandler as a way of passing our preferences over to HttpClient when we spin up a new one.  Here’s how to do it, thanks to this StackOverflow answer.

HttpClientHandler handler = new HttpClientHandler()
{
    AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
};

using (var client = new HttpClient(handler))
{
    // your code
}

I simply move all of the HTTP code within the //your code space, like so.


private async void GetStats()
{
    HttpClientHandler handler = new HttpClientHandler()
    {
        AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
    };

    using (var client2 = new HttpClient(handler))
    {
        // your code
        string url = "https://public-api.wordpress.com/rest/v1.1/sites/56752040/stats/summary/?fields=views&period=year&num=5";
        //client.DefaultRequestHeaders.Add();
        client2.DefaultRequestHeaders.Add("Authorization", "Bearer YourKeyHere");

        HttpResponseMessage response1 = await client2.GetAsync(url);

        //assign the response to a variable called ham
        string ham = await response1.Content.ReadAsStringAsync();
    }
}

Running it again, I can see that the fix worked, and the response isn’t GZipped anymore!

But…well, crap, I can’t use JSON.net (or if it’s possible, I couldn’t figure it out). What am I going to do?!?1

Learning how to parse JSON, again

I hope I didn’t leave you hanging with that cliff hanger.  Fortunately, dotnetCore has its own built-in JSON parser, under Windows.Data.JSON.

We can instantiate one of these badboys like this.


var Response = JsonObject.Parse(ham);

This will put it into a better and more parsable format, and store that in Response.  The last step is to pull out the value we want.

In PowerShell, of course, we would just run $Views = $Response.Views  and it would just work because PowerShell is Love.

In C#, and with Windows.Data.JSON, we have to pull out the value, like snatching victory from the jaws of defeat.


var Response = JsonObject.Parse(ham);
var hits = Response.GetNamedValue("views").GetNumber();

Response.GetNamedValue("views") gives us the JSON representation of that property as in {1000}, while .GetNumber() strips off the JSON envelope and leaves our number in its unadorned natural form like so 1000.

I am FINALLY ready to update the text block.

Crashing the whole thing

I was a bright-eyed summer child, like I was before I started reading Game of Thrones, so I decided to happily just try to update the .Text property of my big fancy count-down timer like so:


var Response = JsonObject.Parse(ham);
var hits = Response.GetNamedValue("views").GetNumber();

var cat = "Lyla"; //this was a breakpoint, named after my cat

HitCounter.Text = hits.ToString("N0");

I hit F5, waited, really thrilled to see the number change and…it crashed.  The error message said

The calling thread cannot access this object because a different thread owns it.

This one was really puzzling, but this helpful StackOverflow post explains that it’s because the very nature of threading and asynchronous coding means that I can’t always expect to be able to change UI elements in real time.

Instead, we have to schedule the change, which is SUPER easy.

How to update UI from within a thread

I just modify the call above like so, which makes use of the Dispatcher to perform the update whenever the program is ready.


await MainPageDispatcher.RunAsync(CoreDispatcherPriority.Low, () =>
{
    //Move your UI changes into this area
    HitCounter.Text = hits.ToString("N0");
});

And now…it works.

Publishing the app and configuring Auto Start

When we’re finally done with the code (for now), publishing the finished version to the Pi is super easy.  Right-click the solution in the right-hand side and choose Properties.  In the window that appears, go to the Debug tab and change the Configuration dropdown to Release.

Change the configuration to Release, and then F5 one last time.

Once you do that, the app is written and configured to run without remote debug.  Our little Raspberry is almost ready to run on its own!

The very last step here is to configure our app to automatically run on power-on.  Since we fiddled with it so much, we’ll need to set this again.  You can do this from the Windows IoT app by right-clicking the device and choosing Launch Windows Device Portal.

This launches a web console that is actually really slick.

You can watch live performance graphs, launch apps, configure wifi and updates and change the time zone here as well.  This is also where we configure which app launches when you turn the Pi on.

From this page, click Apps \ App Manager and find our app (you may have changed the name, but I left it as IoTCoreDefaultApp) and then click the radio button for Startup.

And now, Restart it.

In just a few minutes, you should see the Pi reboot and automatically launch our monitoring app.  Awesome, we’re developers now!

Completed Code Download Link – Here 

How to modify this to query your own WHATEVER

Simply change the body of GetStats() here to modify this to query whatever you like.  So long as it returns a JSON body, this format will work.


private async void GetStats()
{
    //add your own query for ANYTHING here
    HttpClientHandler handler = new HttpClientHandler()
    {
        AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
    };

    using (var client2 = new HttpClient(handler))
    {
        // whatever URL you want to hit should be here
        string url = "https://yourAPIUrlhere.com";

        //if your URL or service uses Bearer Auth, use this example
        client2.DefaultRequestHeaders.Add("Authorization", "Bearer YourKeyHere");

        HttpResponseMessage response1 = await client2.GetAsync(url);
        string ham = await response1.Content.ReadAsStringAsync();

        var Response = JsonObject.Parse(ham);
        //var hits = Response.GetNamedValue("views").GetNumber();

        //set a breakpoint on the line below to inspect and see how your request worked.  Depending on the results, use the appropriate value for GetNamedValue() to get the syntax working
        var bestCatname = "Lyla";

        //this block below handles threading the request to change a UI element's value
        /*await MainPageDispatcher.RunAsync(CoreDispatcherPriority.Low, () =>
        {
            //Your UI change here
            //HitCounter.Text = hits.ToString("N0");
        });
        */
        //HitCounter.Text = viewsdesuka.ToString();
    }
}

 

Resources and thanks!

I could not have done this without the support of at least a dozen of my friends from Twitter.  Special thanks to Trond Hindes, and Stuart Preston, and those who took the time to weigh in on my StackOverflow question.

Additionally, these posts all helped get the final product cobbled together.

Now, what kind of stuff have you written or do you plan to write with this template?  Be sure to share here, or on Reddit.com/r/FoxDeploy!

Finally, please share!  If you come up with something cool, add it to a subfolder of Look what I made,  here!


Windows 10 Must-have Customizations


I’ve performed a number of Windows 10 Deployment projects, and have compiled this handy list of must-have customizations that I deploy at build time using SCCM, or that I bake into the image when capturing it.

Hope it helps, and I’ll keep updating it as I find more good things to tweak.

Remove Quick Assist

Quick Assist is very useful, but also on the radar of fake-Microsoft Support scammers, so we disable this on our image now.

Get-WindowsPackage -Online | Where PackageName -like *QuickAssist* | Remove-WindowsPackage -Online -NoRestart

Remove Contact Support Link

Because we were unable to customize this one to provide our own internal IT information, we disabled this one as well.

Get-WindowsPackage -Online | Where PackageName -like *Support*| Remove-WindowsPackage -Online -NoRestart 

Disable SMB 1

With the Petya and other similar scares, we also decided to just turn SMB 1 off.  Surprisingly, almost nothing that we cared about broke.

Set-SmbServerConfiguration -EnableSMB1Protocol $false -force
Disable-WindowsOptionalFeature -Online -FeatureName smb1protocol -NoRestart
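To double-check that the change stuck, both settings can be queried afterwards; this is just a sanity check, not part of the build script.

# Should return False once SMB 1 is off
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Should show Disabled (or DisablePending until the next reboot)
Get-WindowsOptionalFeature -Online -FeatureName smb1protocol | Select-Object State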

Disable People App

Users in testing became VERY confused when their Outlook contacts did not appear in the People app, so we got rid of it too.

Get-AppxPackage -AllUsers | Where-Object {$_.PackageFullName -like "*people*"} | Remove-AppxPackage 

Disable Music player

We deploy our own music app and were mistrusting of the music app bundled with Windows 10, so we got rid of this one too.


Get-AppxPackage -AllUsers | Where-Object {$_.PackageFullName -like "*zune*"} | Remove-AppxPackage

 

Disable Xbox App

Pretty silly that apps like this even get installed in the PRO version of Windows 10.  Maybe we need a non-shenanigan version of Win 10 ready for business…but…but I’ll finish this SCCM issue after a quick romp through Skellige.

Get-AppxPackage -AllUsers  |  Where-Object {$_.PackageFullName -like "*xboxapp*"} | Remove-AppxPackage 

 Disable Windows Phone, Messaging

We honestly aren’t sure who will want this or what purpose it serves in an organization.  Deleted.  Same goes for Messaging.

Get-AppxPackage -AllUsers  | Where-Object {$_.PackageFullName -like "*windowspho*"} | Remove-AppxPackage
Get-AppxPackage -AllUsers | Where-Object {$_.PackageFullName -like "*messaging*"} | Remove-AppxPackage 

 Disable Skype, Onenote Windows 10 App

Sure, let’s have a new machine deploy with FOUR different entries for Skype. No way will users be confused by this.  Oh yeah, and two OneNotes.  Great move.

Get-AppxPackage -AllUsers  | Where-Object {$_.PackageFullName -like "*skypeap*"} | Remove-AppxPackage
Get-AppxPackage -AllUsers | Where-Object {$_.PackageFullName -like "*onenote*"} | Remove-AppxPackage 

 Disable ‘Get Office’

This one just advertises Office 365 to users, so we got rid of it too.

Get-AppxPackage -AllUsers | Where-Object {$_.PackageFullName -like "*officehub*"} | Remove-AppxPackage

 Disable a bunch of other stuff

At this point I kind of got bored with making screen shots of each of these.  I also blocked a number of other silly things, so if you got bored too, here is the full script.

#this runs within the imaging process and removes all of these apps from the local user (SCCM / local system) and future users
#if it is desired to retain an app in imaging, just place a # comment character at the start of a line

#region remove current user
$packages = Get-AppxPackage -AllUsers

#mail and calendar
$packages | Where-Object {$_.PackageFullName -like "*windowscommun*"}     | Remove-AppxPackage

#social media
$packages | Where-Object {$_.PackageFullName -like "*people*"}            | Remove-AppxPackage

#microsoft promotions, product discounts, etc
$packages | Where-Object {$_.PackageFullName -like "*surfacehu*"}         | Remove-AppxPackage

#renamed to Groove Music, iTunes like music player
$packages | Where-Object {$_.PackageFullName -like "*zune*"}              | Remove-AppxPackage

#gaming themed application
$packages | Where-Object {$_.PackageFullName -like "*xboxapp*"}           | Remove-AppxPackage

# photo application (many leave this app)
$packages | Where-Object {$_.PackageFullName -like "*windowspho*"}        | Remove-AppxPackage

#
$packages | Where-Object {$_.PackageFullName -like "*skypeap*"}           | Remove-AppxPackage

#
$packages | Where-Object {$_.PackageFullName -like "*messaging*"}         | Remove-AppxPackage

# free/office 365 version of oneNote, can confuse users
$packages | Where-Object {$_.PackageFullName -like "*onenote*"}           | Remove-AppxPackage

# tool to create interesting presentations
$packages | Where-Object {$_.PackageFullName -like "*sway*"}              | Remove-AppxPackage

# Ad driven game
$packages | Where-Object {$_.PackageFullName -like "*solitaire*"}         | Remove-AppxPackage

$packages | Where-Object {$_.PackageFullName -like "*commsphone*"}        | Remove-AppxPackage
$packages | Where-Object {$_.PackageFullName -like "*3DBuild*"}           | Remove-AppxPackage
$packages | Where-Object {$_.PackageFullName -like "*getstarted*"}        | Remove-AppxPackage
$packages | Where-Object {$_.PackageFullName -like "*officehub*"}         | Remove-AppxPackage
$packages | Where-Object {$_.PackageFullName -like "*feedbackhub*"}       | Remove-AppxPackage

# Connects to your mobile phone for notification mirroring, cortana services
$packages | Where-Object {$_.PackageFullName -like "*oneconnect*"}        | Remove-AppxPackage
#endregion

#region remove provisioning packages (Removes for future users)
$appProvisionPackage = Get-AppxProvisionedPackage -Online

$appProvisionPackage | Where-Object {$_.DisplayName -like "*windowscommun*"} | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*people*"}        | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*surfacehu*"}     | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*zune*"}          | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*xboxapp*"}       | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*windowspho*"}    | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*skypeap*"}       | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*messaging*"}     | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*onenote*"}       | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*sway*"}          | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*solitaire*"}     | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*commsphone*"}    | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*3DBuild*"}       | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*getstarted*"}    | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*officehub*"}     | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*feedbackhub*"}   | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*oneconnect*"}    | Remove-AppxProvisionedPackage -Online
#endregion

<#restoration howto
To roll back the Provisioning Package removal, image a machine with an ISO and then copy the source files from
the c:\Program Files\WindowsApps directory.  There should be three folders per Windows 10 app.  These need to
be distributed w/ SCCM to the appropriate place, and then run
    copy-item .\* c:\Appx
    Add-AppxProvisionedPackage -Online -FolderPath c:\Appx

    $manifestpath = "c:\appx\*Appxmanifest.xml"
    Add-AppxPackage -Register $manifestpath -DisableDevelopmentMode
#>

#removes the Windows Fax feature but requires a reboot, returning a 3010 errorlevel.  Ignore this error
cmd /c dism /online /disable-feature /featurename:FaxServicesClientPackage /remove /NoRestart
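Once the script has run, a quick audit helps confirm nothing you actually wanted was swept up.  Something like this lists what’s left for future and existing profiles:

# Apps still provisioned for future user profiles
Get-AppxProvisionedPackage -Online | Sort-Object DisplayName | Select-Object DisplayName

# Apps still present for existing profiles
Get-AppxPackage -AllUsers | Sort-Object Name | Select-Object Name -Unique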

Do you have any recommendations?

Did I miss any?  If so, comment here or on /R/FoxDeploy and I’ll add it!


QuickStart PowerShell on Red Hat


PowerShell On RedHat in five minutes

Configuring PowerShell on RHEL 7

Hey y’all. There are a lot of guides out there to installing PowerShell on Linux, but I found that they expected a BIT more Linux experience than I had.

In this post, I’ll walk you through installing PowerShell on a RHEL 7 machine, assuming you are running a RHEL 7.4 VM on Hyper-V. There are a couple stumbling blocks you might run into, and I know, because I ran into ALL of them.

Real footage of my attempts

Downloading RHEL

Much like Microsoft’s approach to ISO access, Red Hat greedily hoards their installer DVDs like a classic fantasy dragon.

You’ll need to register here to download it.

RHEL Download Page

Once you have an account, choose to Continue to Red Hat Enterprise Linux Server, then Download.

You’ll download this one here, the 7.4 binary DVD.


Installing RHEL in Hyper-V

Once you have the image, follow the standard process to create a Gen 2 Hyper-V VM, disabling Dynamic Memory but otherwise making everything the same as you normally do.

Why disable Dynamic Memory?

Good question, as we typically just leave this on for all Windows systems!

Dynamic Memory AKA Memory Ballooning allows an enlightened VM Guest to release unneeded memory, allowing for RAM Over subscription and increased VM density.

Depending on the amount of RAM you have on your system, VMs may have PLENTY of free RAM and not feel ‘pressure’ to release memory, and in my new Altaro-Sponsored Ryzen 7 build with 64 GB of RAM, my VMs have plenty of resources.

However, I have witnessed many installs of Ubuntu and CentOS fail to complete, and in all cases, this was due to Dynamic Memory. So, don’t enable Dynamic Memory until at least the install has completed.


The next hurdle you’ll encounter is a failure to mount the ISO, as seen here.


The image’s hash and certificate are not allowed (DB).

This is due to the Secure Boot feature of Hyper-V. Secure Boot keeps your system from a number of attacks by only allowing approved boot images to load. It seems that Red Hat and Ubuntu boot images still are not included in this list.

You’ll need to disable Secure Boot in order to load the image. Right-click the VM, choose Settings \ Security \ Uncheck ‘Enable Secure boot’


With these obstacles cleared, we can proceed through the install.

Installing PowerShell

The next step, downloading the shell script to install PowerShell for us!

Because I couldn’t copy-paste into my VM, I made a shell script to install PowerShell using the script Microsoft provides here.

I stored it in a Gist, and you can download and execute it in one step by running this.

bash <(curl -L https://bit.ly/RhelPS)

The -L switch for curl allows it to traverse a redirector service like Bit.Ly, which I used to download the shell file in Gist, because Gist URLs are TOO damned long!

Download And Execute

And that’s it. Now you’ve got PowerShell installed on Red Hat and you’re ready to go!

PSonRhel

References

How to traverse short-link

How to download and execute

Image credit  Benjamin Hung


Use PowerShell to take automated screencaps


I saw a post on Reddit a few days ago, in which a poster took regular screenshots of weather radar and used that to make a gif tracking the spread of Hurricane Irma.  I thought it was neat, and then read a comment asking how this was done.

How did you do this? Did you somehow automate the saves? Surely you didn’t stay up all night?

/u/SevargmasComment Link

It brought to mind the time I used PowerShell four years ago to find the optimal route to work.

Solving my lifelong issues with being on-time

You ever notice how if you leave at 6:45, you’ll get to work twenty minutes early. But if you leave at 6:55, more often than not you’ll be late? Me too, and I hated missing out on sleep!  I had a conversation with my boss and was suddenly very motivated to begin arriving on time.

I knew if I could just launch Google Maps and see the traffic, I could time it to see the best time to leave for work.  But if I got on the PC in the morning, I'd end up posting cat gifs and be late for work.

Of course, Google Maps now provides a built in option to allow you to set your Arrive By time, which removes the need for a tool like this, but at the time, this script was the bees-knees, and helped me find the ideal time to go to work.  It saved my literal bacon.

There are many interesting uses for such a tool, like tracking the progress of a poll, tracking satellite or other imagery, or to see how a page changes over time, in lieu of or building on the approach we covered previously in Extracting and monitoring for changes on websites using PowerShell, when we learned how to scrape a web page.

How this works

First, copy the code over and save it as a .PS1 file.  Next, edit the first few lines

$ie         = New-Object -ComObject InternetExplorer.Application
$shell      = New-object -comObject Shell.Application
$url        = "http://goo.gl/1bFh5W"
$sleepInt   = 5
$count      = 20
$outFolder  = 'C:\temp'

Provide the following values:

$url      = the page you want to load
$sleepInt = how many seconds you want to pause
$count    = how many times you'd like to run
$outFolder= which directory to save the files

From this point, the tool is fully automated. We leverage the awesome Get-Screenshot function created by Joe Glessner of http://joeit.wordpress.com/.  Once we load the function, we simply use the $shell .NET instance we created earlier to minimize all apps, then display Internet Explorer using the $ie ComObject.  We navigate to the page, give it a moment to finish loading, and then take a screenshot.

Then we un-minimize all apps and we’re set.  Simple, and it works!
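One optional tweak: the script below uses a fixed Start-Sleep to give the page time to load.  If you'd rather wait on the page itself, you can poll the Internet Explorer COM object's Busy and ReadyState properties instead.  A minimal sketch, assuming the same $ie instance:

#Wait for IE to finish loading (ReadyState 4 = READYSTATE_COMPLETE), with a 30 second cap
$waited = 0
while (($ie.Busy -or $ie.ReadyState -ne 4) -and $waited -lt 30) {
    Start-Sleep -Seconds 1
    $waited++
}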

Hope you enjoy it!

$ie         = New-Object -ComObject InternetExplorer.Application
$shell      = New-object -comObject Shell.Application
$url        = "http://goo.gl/1bFh5W"
$sleepInt   = 45
$count      = 20
$outFolder  = 'C:\temp\WhenToGoToWork'

#region Get-Screenshot Function
   ##--------------------------------------------------------------------------
    ##  FUNCTION.......:  Get-Screenshot
    ##  PURPOSE........:  Takes a screenshot and saves it to a file.
    ##  REQUIREMENTS...:  PowerShell 2.0
    ##  NOTES..........:
    ##--------------------------------------------------------------------------
    Function Get-Screenshot {
        <#
        .SYNOPSIS
         Takes a screenshot and writes it to a file.
        .DESCRIPTION
         The Get-Screenshot Function uses the System.Drawing .NET assembly to
         take a screenshot, and then writes it to a file.
        .PARAMETER <Path>
         The path where the file will be stored. If a trailing backslash is used
         the operation will fail. See the examples for syntax.
        .PARAMETER <png>
         This optional switch will save the resulting screenshot as a PNG file.
         This is the default setting.
        .PARAMETER <jpeg>
         This optional switch will save the resulting screenshot as a JPEG file.
        .PARAMETER <bmp>
         This optional switch will save the resulting screenshot as a BMP file.
        .PARAMETER <gif>
         This optional switch will save the resulting screenshot as a GIF file.
         session.
        .EXAMPLE
         C:\PS>Get-Screenshot c:\screenshots

         This example will create a PNG screenshot in the directory
         "C:\screenshots".

        .EXAMPLE
         C:\PS>Get-Screenshot c:\screenshot -jpeg

         This example will create a JPEG screenshot in the directory
         "C:\screenshots".

        .EXAMPLE
         C:\PS>Get-Screenshot c:\screenshot -verbose

         This example will create a PNG screenshot in the directory
         "C:\screenshots". This usage will also write verbose output to the
         console (including the full filepath and name of the resulting file).

        .NOTES
         NAME......:  Get-Screenshot
         AUTHOR....:  Joe Glessner
         LAST EDIT.:  12MAY11
         CREATED...:  11APR11
        .LINK
         http://joeit.wordpress.com/
        #>
        [CmdletBinding()]
            Param (
                    [Parameter(Mandatory=$True,
                        Position=0,
                        ValueFromPipeline=$false,
                        ValueFromPipelineByPropertyName=$false)]
                    [String]$Path,
                    [Switch]$jpeg,
                    [Switch]$bmp,
                    [Switch]$gif
                )#End Param
        $asm0 = [System.Reflection.Assembly]::LoadWithPartialName(`
            "System.Drawing")
        Write-Verbose "Assembly loaded: $asm0"
        $asm1 = [System.Reflection.Assembly]::LoadWithPartialName(`
            "System.Windows.Forms")
        Write-Verbose "Assembly Loaded: $asm1"
        $screen = [System.Windows.Forms.Screen]::PrimaryScreen.Bounds
        $Bitmap = new-object System.Drawing.Bitmap $screen.width,$screen.height
        $Size = New-object System.Drawing.Size $screen.width,$screen.height
        $FromImage = [System.Drawing.Graphics]::FromImage($Bitmap)
        $FromImage.copyfromscreen(0,0,0,0, $Size,
            ([System.Drawing.CopyPixelOperation]::SourceCopy))
        $Timestamp = get-date -uformat "%Y_%m_%d_@_%H%M_%S"
        If ([IO.Directory]::Exists($Path)) {
            Write-Verbose "Directory $Path already exists."
        }#END: If ([IO.Directory]::Exists($Path))
        Else {
            [IO.Directory]::CreateDirectory($Path) | Out-Null
            Write-Verbose "Folder $Path does not exist, creating..."
        }#END: Else
        If ($jpeg) {
            $FileName = "\$($Timestamp)_screenshot.jpeg"
            $Target = $Path + $FileName
            $Bitmap.Save("$Target",
                ([system.drawing.imaging.imageformat]::Jpeg));
        }#END: If ($jpeg)
        ElseIf ($bmp) {
            $FileName = "\$($Timestamp)_screenshot.bmp"
            $Target = $Path + $FileName
            $Bitmap.Save("$Target",
                ([system.drawing.imaging.imageformat]::Bmp));
        }#END: If ($bmp)
        ElseIf ($gif) {
            $FileName = "\$($Timestamp)_screenshot.gif"
            $Target = $Path + $FileName
            $Bitmap.Save("$Target",
                ([system.drawing.imaging.imageformat]::Gif));
        }
        Else {
            $FileName = "\$($Timestamp)_screenshot.png"
            $Target = $Path + $FileName
            $Bitmap.Save("$Target",
                ([system.drawing.imaging.imageformat]::Png));
        }#END: Else
        Write-Verbose "File saved to: $target"
    }#END: Function Get-Screenshot
#endregion

for ($i=0;$i -le $count;$i++){

    $ie.Navigate($url)
    $shell.MinimizeAll()
    $ie.Visible = $true
    start-sleep 15
    Get-Screenshot $outFolder -Verbose

    "Screenshot Saved, sleeping for $sleepInt seconds"
    start-sleep $sleepInt

    $shell.UndoMinimizeALL()
    }

When this runs, you'll have a moment or two to rearrange the screen before the first screen capture is taken. While it's executing, you should leave the computer unattended, as we're simply automating taking a screencap. If you're using the computer, it will attempt to minimize your windows, display IE, SNAP, then restore your programs. If you have other windows up, they could be mistakenly included in the screen shot.

Afterwards, you will find the files in whichever path you specified for $outFolder.

Pro-tip, you can exit this at any point by hitting CONTROL+C.

Photo credit: Nirzar Pangarkar


At Microsoft Ignite? Come find me!


WhereToFindFox

I’m speaking at Microsoft Ignite again this year!  Come find me at the PowerShell Community Meetup and the Intro to PowerShell sessions to talk code, scripting, OSD, automation or beer!

Here are the links to my sessions

Tuesday, 3:15 PM, Microsoft Ignite PowerShell Meetup – BRK 1061 – OCCC West Hall, 204 AB.

Thursday, 1:40 PM, Microsoft Ignite Intro To PowerShell – OCCC South Hall, Expo, Theater 1

I’ve got a backpack full of PowerShell and DSC Girl stickers, and look forward to meeting you!


Ignite, decompressed


MSFTignite2017 (1)

Ignite Orlando WAS AWESOME! In this post, I’ll give you some of my fun memories and commentary about the event, and then end with a bunch of the best videos from Microsoft Ignite 2017.

My sessions

We had a HUGE turn out for the PowerShell Community Event, in fact, it was so big that we had an overflow room with 200 people in it!

2017-09-26 15.07.11

There were a lot of folks waiting outside who weren’t able to attend, and at this point Adam Bertram and Simon Whalin were the REAL MVPs.  They left the room and lead an impromptu session to get the discussion going in the overflow room.

Not pictured, Adam Bertram standing on a table, shouting into the crowd!  Oh, and did I mention that Jeffrey Snover came on stage as well?  Talk about a dream come true!

OnStageWJeffrey
Jeffrey is the four greenish pixels near the screen

Fortunately I was prepared and stammered through a terrible soft-ball question about Azure Cloud Shell.  Jeffrey said ‘that’s your question, Stephen?’

My final session was at the end of the day on Thursday, which effectively maximized my stress for the entirety of Ignite.  Fortunately I had plenty of time to practice and work on my transitions and I felt that I really gave it my all.

Next year, I’d like to lead a one hour session, or one focused on real world usage of PowerShell as a glue language.  We’ll see if they get approved!

Don and Jeffrey filled a colossal 5300 person auditorium to capacity, in their PowerShell Unplugged session.

Even these two unflappable speakers looked a tiny bit overwhelmed (just for a moment) by the colossal turnout!

Brad Anderson continued his ‘Lunchbreak With Brad series’, but this time opened it up to everyone at Ignite!  I joined and was actually featured in the video (around the 9 min mark)

Getting to meet Brad in person was great, as I’ve seen him deliver presentations so many times in person and virtually!  I would have liked to have had a full lunch break with him!

Other photos


Spinners were…everywhere.

My Top Ten Must Watch Sessions

I love the trend of recording all of the bigger sessions.  Here are some of my favorite (which happened to be recorded).

The keynote was…interesting, but ended with a deep dive on Quantum computing, which was a bit odd.  I could have done with more explanation of what Microsoft 365 is…

Fortunately, Brad Anderson explained that here in this session.  Microsoft 365 is essentially a new tier of Office 365 license which now includes Intune, Advanced Threat Protection, and all the O365 goodness we already had.  I believe it includes pricing for Windows Licensing as well.

Azure Automation session with Joey Aiello, Hemant and Aemon

Donovan Devops in any language, with Damian Brady and Donovan Brown.  A dynamic and exciting session talking through VSTS’s devops capabilities.

Expert level Windows 10 Deployment.  Johan and Mikael killed this talk, as expected!

Ask the experts, Windows 10 Deployment.  This was one of those ‘deeper word’ sessions.  A super, real-world deep dive into how the hell we’re supposed to OSD upgrade all of our machines twice a year.

Chris Jackson – Deep dive on Win 10 Fall Update Security Internals

Your attacker thinks like my attacker, an awesome security minded session

Red Teaming Windows

I love these business & personal growth style sessions.  Jeffrey had a great one here which covered staying relevant and providing value as keys to always remaining hirable.

Moving 65,000 Microsofties to DevOps, definitely going to be helpful for me in my new role here!

Securing your data at rest, which had some good info I need.

Coding at 88MPH, a session full of tricks and tips for working in Visual Studio.  The keyboard shortcuts alone were worth the price of entry.

Conference Feedback

It’s important to categorize and honestly think through takeaways for a conference like this one.  Here are my thoughts.

Venue

I really liked the venue, but my favorite aspect of it is how close it is to great after hours entertainment and hotels.  A huge jump from Atlanta (ironically, my home city).   Speaking of Hotels, I was placed in the wonderful Orlando Renaissance at Seawold, a beautiful property with stunning rooms and a lovely pool (that my children made use of!)

The architecture was cool and inspiring and I liked the huge outdoor bridges connecting the venues while keeping us up and out of traffic.  I also am relatively young and in shape with brand new nice running shoes.  Many people might not have liked the tremendous amount of walking involved in this venue, so I would understand the negative feedback I’m hearing there.  Additionally, the walk on the bridge could be sweltering!

I didn’t mind though, I was freezing my butt off in every session, so I welcomed the sun’s warm embrace.

Food and Snacks

I heard a lot of complaining about this, but I eat a LOT of vegetarian food anyway, so I'm accustomed to eating cardboard.  Actually, I thought the veggie options were very good.  We could have used more fresh fruit and veggies though.

The afternoon snacks were pretty good, with nice variation of snacks.  The expo floor could have used more water stations.  I found myself leaving the expo for water, which was odd to have to do.

I loved the pop-up coffee stations around the show floor.  I developed a two-a-day nitro iced coffee habit.

Session Quality and Topics

This part is challenging.  It was, frankly, shocking that at a conference in which we celebrated the 25th anniversary of SCCM, there were only two ConfigMgr sessions! One was 'What's new in SCCM', the other was 'System Center, what's coming' (in which we learned that Orchestrator and SMA are effectively dead 😦 )

Sure, it’s not a new product anymore, but the only sessions to truly feature ConfigMgr were ones showcasing add-ons to the product, in the case of Adaptiva and 1E.  I really appreciate what these companies have done for the community, but a drought in content like this has me a bit worried.

This leads me to my main concern.  If you’re a seasoned expert, you might find two or three ‘deeper word’ sessions worthy of attending, like Deploying Windows 10 in the real world.  It feels like the session catalog was heavy on business decision maker, 200 and 300 level content.

If you’re a beginner, good luck.  If you’re an expert, I dunno, talk to the dev team in the booths.

Do you think I’m approaching this from the wrong angle?  Should a conference like this have a beginners track for lucky newbies to get hands on learning?  Is it meant to be mostly messaging from the sponsors?  Is it really all about swag?


Glorious PowerShell Dashboards


I’ve covered the topic of dashboards on this blog a few times before, from layering CSS on PowerShell’s built-in HTML capabilities, to hacking together HTML 5 templates with PowerShell, as the hunt continues for the next great thing in PowerShell reporting. Guys, the hunt is OVER!  Time to ascend to the next level in reporting…

It’s the motherlode!  Adam Driscoll’s AWESOME PowerShell Universal Dashboard, a gorgeous and dead-simple dashboard tool which makes it super easy to retrieve values from your environment and spin them into adaptive, animated dashboards full of sexy transitions and colors.   Click here to see it in action. Or just look at these sexy animations and tasteful colors.  Deploy this and then show your boss.  It’s guaranteed to impress, blow his pants off, and get you a huge raise or maybe a $5 Starbucks gift card.

1GIF

In this post, we’ll learn what the PowerShell Universal Dashboard is, how to quickly get setup, and I’ll share my own TOTALLY PIMPED OUT CUSTOM Dashboard with you free, for you to modify to fit your environment, and get that free Pumpkin Spice, son!

What is it?

The PowerShell Universal Dashboard is an absolutely gorgeous module created by the great Adam Driscoll.  It seeks to make it dead-simple to create useful, interactive dashboards anywhere you can run PowerShell.  It’s built using .net Core Kestrel and ChartJS, and you can run it locally for folks to connect to see your dashboard, or deploy right to IIS or even Azure!

If you didn’t earlier, you really should click here to see it in action!!!

Getting Started

To begin, simply launch PowerShell and run the following command.

Install-Module UniversalDashboard

Next, copy the code for Adam’s sample Dashboard from here and run it.  You should see this screen appear

Now, PowerShell Pro Tools IS a paid piece of software.  But the trial license is super generous, so simply put in your e-mail and you’ll receive a license automatically in a few minutes.

Warning – Preachy part – And, between you and me, now that we're all adults, we should put our money where our mouths are and actually support the software we use and love.  In my mind, $20 is an absolute steal for this incredible application.

Once you receive your key, paste it in and you’re ready to go

 

A sign of a happily licensed PowerShell Pro Tools.

Let’s start customizing this badboy! 

Customizing the Dashboard

For my project, I wanted to replace the somewhat aging ("somewhat") front-end I put on my backup Dropbox script, covered here in this post: Automatically move old photos out of DropBox with PowerShell to free up space.  At the time, I thought it was the slickest thing since really oily sliced bread.

I still think you look beautiful

So, to kick things off, I copied and pasted the code Adam shares on the PowerShell Universal Dashboard homepage, to recreate that dashboard.  Once it’s pasted in, hit F5 and you should see the following, running locally on your machine:

First up, to delete the placeholder ‘About Universal Dashboard’, let’s delete the New-UDColumn from lines 15~17.

Start-UDDashboard -port $i -Content {
    New-UDDashboard -NavbarLinks $NavBarLinks -Title "PowerShell Pro Tools Universal Dashboard" -NavBarColor '#FF1c1c1c' -NavBarFontColor "#FF55b3ff" -BackgroundColor "#FF333333" -FontColor "#FFFFFFF" -Content {
        New-UDRow {
            New-UDColumn -Size 3 {
                New-UDHtml -Markup "
<div class='card' style='background: rgba(37, 37, 37, 1); color: rgba(255, 255, 255, 1)'>
<div class='card-content'>
<span class='card-title'>About Universal Dashboard</span>

Universal Dashboard is a cross-platform PowerShell module used to design beautiful dashboards from any available dataset. Visit GitHub to see some example dashboards.</div>
<div class='card-action'><a href='https://www.github.com/adamdriscoll/poshprotools'>GitHub</a></div>
</div>
"
}
                New-UDColumn -Size 3 {
                    New-UDMonitor -Title "Users per second" -Type Line -DataPointHistory 20 -RefreshInterval 15 -ChartBackgroundColor '#5955FF90' -ChartBorderColor '#FF55FF90' @Colors -Endpoint {
Get-Random -Minimum 0 -Maximum 100 | Out-UDMonitorData
}

With that removed, the cell vanishes.

I took a look at the Components page on the PowerShell Universal Dashboard, and really liked the way the Counter design looked, so I decided to copy the example for Total Bytes Downloaded and use that in-place of the old introduction.  I added these lines:


 New-UDColumn -Size 4 {
     New-UDCounter -Title "Total Bytes Saved" -AutoRefresh -RefreshInterval 3 -Format "0.00b" -Icon cloud_download @Colors -Endpoint {
             get-content c:\temp\picSpace.txt
         }
     }

     New-UDColumn -Size 3 {

I also created a new text file at C:\temp\picSpace.txt and added the value 1234 to it.  With those changes completed, I hit F5.
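If you'd rather seed that tracking file from PowerShell instead of Notepad, a one-liner does it (the value is just a starting number):

Set-Content -Path C:\temp\picSpace.txt -Value 1234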

Ohh this is a VERY nice start

Now, to actually populate this value when my code runs.  Editing Move-FilesOlderThan.ps1 (note: I'm very sorry about this name, I wrote the script when my daughter was not sleeping through the night yet…no clue why I chose that name), the function of that code is to accept a cut-off date, then search for files older than that date in a folder.  If it finds files that are too many days old, they get moved elsewhere. Here's the relevant snippet:


$MoveFilesOlderThanAge = "-18"
####End user params

$cutoverDate = ((get-date).AddDays($MoveFilesOlderThanAge))
write-host "Moving files older than $cutoverDate, of which there are `n`t`t`t`t" -nonewline
$backupFiles = new-object System.Collections.ArrayList

$filesToMove = Get-ChildItem $cameraFolder | Where-Object LastWriteTime -le $cutoverDate
$itemCount = $filesToMove | Measure-Object | select -ExpandProperty Count
$FileSize = $filesToMove | Measure-Object -Sum Length

In order to sum the file space saved every day, I only had to add these lines.  I also decided to add a tracking log of how many files are moved over time.  I decided to simply use a text file to track this.

[int](gc c:\temp\picSpace.txt) + [int]$FileSize.Sum | Set-content c:\temp\picSpace.txt
[int](gc c:\temp\totalmoved.txt) + [int]$itemCount | set-content c:\temp\totalmoved.txt

Now, after running the script a few times to move files, the card actually keeps track of how many files are moved!

Further Customizations

Now, to go really crazy customizing it!

Hook up the File Counter

I decided to also add a counter for how many files have been moved.  This was super easy, and included in the code up above.  I simply modified the Move-FilesOlderThan.ps1 script as depicted up above to pull the amount of files migrated from a file, and add today’s number of files to it.  Easy peasey (though at first I did a string concatenation, and instead of seeing the number 14 after two days of moving 7 files, I saw 77.  Whoops!)

To hook up the counter, I added this code right after the Byte Counter card.

New-UDColumn -Size 4 {
New-UDCounter -Title "Total Files Moved" -Icon file @colors -Endpoint {
get-content C:\temp\totalmoved.txt
}
}

 

Modify the table to display my values

Next up, I want to reuse the table we start with in the corner.  I wanted to tweak it to show some of the info about the files which were just moved.  This actually wasn’t too hard either.

Going back to Move-FilesOlderThan.ps1 I added one line to output a .csv file of the files moved that day, excerpted below:

$backupFiles |
    select BaseName,Extension,@{Name='FileSize';Expression={"$([math]::Round($_.Length / 1MB)) MB"}},Length,Directory |
        export-csv -NoTypeInformation "G:\Backups\FileList__$((Get-Date -UFormat "%Y-%m-%d"))_Log.csv"

This results in a super simple CSV file that looks like this

Day,Files,Jpg,MP4
0,15,13,2
1,77,70,7
2,23,20,3
3,13,10,3
4,8,7,1

Next, to hook it up to the dashboard itself.  Adam gave us a really nice example of how to add a table, so I just modified that to match my file types.

New-UDGrid -Title "$((import-csv C:\temp\movelog.csv)[-1].Files) Files Moved Today" @Colors -Headers @("BaseName", "Directory", "Extension", "FileSize") -Properties @("BaseName", "Directory", "Extension", "FileSize") -AutoRefresh -RefreshInterval 20 -Endpoint {
dir g:\backups\file*.csv | sort LastWriteTime -Descending | select -First 1 -ExpandProperty FullName | import-csv | Out-UDGridData
}

And a quick F5 later…

 

Add a graph

The final thing to really make this pop, I want to add a beautiful line graph like these that Adam provides on the Components site.

This was daunting at first, but the flow isn’t too bad in hindsight.

  • Create an array of one or more chart datasets using New-UDChartDataset; the -DataProperty parameter defines which property you want to chart, while the -Label parameter lets you define the name of the property in the legend
  • Pass your input data as the -Data property to the New-UDChart cmdlet, and define a -Title for the chart as well as the chart type: Area, Line, or Pie

Here’s the code sample of what my finished chart looked like:

New-UDChart -Title "Files moved by Day" -Type Line -AutoRefresh -RefreshInterval 7 @Colors -Endpoint {
 import-csv C:\temp\movelog.csv | Out-UDChartData -LabelProperty "Day" -DataProperty "Files" -Dataset @(
 New-UDChartDataset -DataProperty "Jpg" -Label "Photos" -BackgroundColor "rgb(134,342,122)"
 New-UDChartDataset -DataProperty "MP4" -Label "Movies" -BackgroundColor "rgb(234,33,43)"
)
}

And the result:

Satisfy my Ego and add branding

Now, the most important feature, branding this bad boy.

Up on line 14, change the -Title property to match what you’d like to name your dashboard.

New-UDDashboard -NavbarLinks $NavBarLinks -Title "FoxDeploy Space Management Dashboard - Photos"

You can also add an image file with a single card.  In my experience, this image needs to already live on the web somewhere.  You could spin up a quick Node http-server to serve up the files, leverage another online host, or use a standalone server like Abyss.  I always have an install of both Abyss and Node on my machines, so I tossed the file up and linked it.

 New-UDImage -Url http://localhost/Foxdeploy_DEPLOY_large.png

Finally, to clean up all of the extra cards I didn’t use, and fix some layout issues.

Finished Product

See, wasn’t that easy?

finished

And it only took me ~100 tabs to finish it.

Actual screenshot of my Chrome tab situation after an hour of tweaking

If you want to use my example and modify it, feel free to do so (and please share if you create something cool!)  Here are some ideas:

  • Server Health Dashboard
  • SCCM Dashboard
  • SCOM Dashboard
  • Active Directory monitoring dashboard

Source Files

The script that actually creates a dashboard and opens it, Create-BlogDashboard.ps1, followed by the updated Dropbox backup script, then a sample input file.

Download here

Afterword

I realize I was just preaching about paying for software, and yet this whole thing was spawned from my desire to cheaply get away with using Dropbox without paying for it.  OK…I've cracked.  I've actually now paid for Dropbox as well!  Time for me to practice what I preach too!

drop

Making an Azure Function Reddit Bot


Around the time I was celebrating my 100th post, I made a big to-do about opening my own subreddit at /r/FoxDeploy. I had great intentions, I would help people in an easier to read format than here in the comments…but then, I just kind of, you know, forgot to check the sub for four months.

But no longer!  I decided to solve this problem with the only tool I know…code.

Azure Functions

A few months ago, I went to ‘The Red Shirt’ tour with Scott Guthrie in which he talked all about  the new Azure Hotness.  He covered Functions, an awesome headless, serverless Platform as a Service offering which can run a variety of languages including C#, F#, Node.js, Java, and of, course, Best Language, PowerShell.

I was so intrigued by this concept when I first learned of it at an AWS event years ago in Chicago, where they introduced Lambda. Lambda was cool, but it couldn’t run bestgirl language, PowerShell.

With this in mind, I decided to think of how to attack this problem.

Monitoring a sub for new posts

I did some googling and found that you can get a list of the newest posts in a sub by just adding a keyword plus .json to the subreddit URL; for instance, https://www.reddit.com/r/FoxDeploy/new.json gets me a JSON response back with the newest posts.  You can also use top.json, controversial.json, etc.
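For instance, here's a quick sketch of poking at that feed from PowerShell; the property names come straight from Reddit's listing JSON:

#Each post in the listing lives under data.children, with the useful fields hanging off .data
$posts = Invoke-RestMethod 'https://www.reddit.com/r/FoxDeploy/new.json'
$posts.data.children.data | Select-Object title, author | Select-Object -First 5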

With $posts holding that response, I next needed a way to track whether I'd already processed a post.  That means a database of some kind.

The best DB is a CSV

At first, I planned to use Azure's new Cosmos DB for this task, but I quickly got bogged down trying to learn my way through creating Graphs, SQL Tables, etc.  All of these sounded cool but pushed me farther away from my goal.  So I decided to roll the world's simplest database format, and just track this in a CSV.

Making my schema was simple, just open Notepad and type:

PostName,Date

Done, Schema created in five seconds.  Now to write some logic to step through a post and see if it is in my ‘database’ .

#load DB file
$Processed = Import-CSV .\processed.txt

#process posts
$posts = Invoke-RestMethod https://www.reddit.com/r/FoxDeploy/new.json

ForEach ($post in $posts.data.children){
    if ($Processed.PostName -notcontains $post.data.title){
    #We need to send a message
    Write-output "We haven't seen $($post.data.title) before...breaking loop"
    break
    }

}

I decided that I didn’t want to get bombarded with alerts so I added the break command to pop out of the loop when it first encountered a post which was not in the ‘database’.   Next, to simply dig back into the Reddit REST API and just send a Message. How hard can that be?

Fun with the Reddit API

I dabbled with the Reddit API a few years back, in one of my first PowerShell modules.  It was poorly documented and so difficult that it turned me off of APIs for months.  I'd always suffered from imposter syndrome, and I felt that That Day (that dark day in which I finally wasn't smart enough to figure something out) had finally come for me.

Honestly, compared to other REST APIs, and especially the fully featured and well documented ones like Zenoss and ServiceNows, Reddit’s is terrible to learn.  Don’t give up!

In order for this script to work, it needs to access my credentials.  To do that, I have to delegate credentials using oAuth.  I first covered the topic in this blog post here, so read that if you have no clue what oAuth is.  If you don't want to, no worries, you'll be able to gather an idea from our next few steps:

  • Create an oAuth Application In Reddit
  • Grab my RedirectURI, ClientID, and ClientSecret
  • Plug these in to retrieve an AccessToken and RefreshToken
  • Send a message
  • Handle Refreshing an API token when the token Expires

Making an oAuth application

Getting access to Reddit’s API is easy.  Log on to your account, then click Preferences \ Apps.

Click ‘Apps’

Scroll down to Create Application and fill this form in.

Click “Create App’ to finish

The Redirect URI doesn't need to go anywhere specific (it's there because oAuth is normally used when one service grants another delegated access, for instance a user granting their DropBox access to their Office account; after they click 'OK' to delegate access, they need to be redirected somewhere), but you must provide one here, and you must use the same value when you request a token in the next step.

Now, make note of and save each of these in PowerShell.  You’ll need these to get your token, then we’ll embed them in our script as well.

$ClientID = 'ClientIDIsHere12345'
$ClientSecret = 'ThisLongStringIsYourSecret'
$redirectURI = 'http://www.foxdeploy.com'

Getting an oAuth Token

Now that we’ve completed this step, download the PSReddit module and run Connect-RedditAccount to exchange these IDs for an AccessToken and a RefreshToken.  Let’s call the cmdlet and see what happens.

Connect-RedditAccount -ClientID $ClientID -ClientSecret $ClientSecret `
   -redirectURI $redirectURI

Connect-RedditAccount builds the authorization request and then passes that along to Show-oAuthWindow (here's the code), which pops up a browser window like so.

Running the command stands up a number of $global: variables we can use to interact with the reddit API, including the all-important AccessToken, which we must provide with any API request.  Here's the full list of REST endpoints, but we're after the /compose endpoint.

Using our token to send a reddit private message

This part would not have been possible without the help of the awesome Mark Kraus, who helped me figure out the syntax.

We hit the endpoint of oauth.reddit.com/api/compose, which has a few restrictions.  First off, you have to provide headers to prove who you are.  Also, reddit insists that you identify yourself with your reddit user name in every API call, so you have to provide that info too.  Here's how I handled that.

$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("User-Agent", 'AzureFunction-SubredditBot:0.0.2 (by /u/1RedOne)')
$headers.Add("Authorization", "bearer $AccessToken")

Next, here’s the body params you MUST pass along.

$body = @{
api_type = 'json'
to = '1RedOne'
subject = 'Message sent via PowerShell'
text= 'Hello World'
}

Finally, pass all of this along using Invoke-RestMethod and you’ll see…
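If you want to see the call spelled out, here's a sketch of what that looks like with the $headers and $body we just built (the endpoint is Reddit's documented /api/compose):

#POST to the compose endpoint on the oauth subdomain
$result = Invoke-RestMethod -Uri 'https://oauth.reddit.com/api/compose' `
    -Method Post -Headers $headers -Body $body
$result.json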

Ohhh yeah, dat envelope.

I went ahead and prettied it all up and packaged it as a cmdlet.  Simply provide your values like so:

Send-RedditMessage -AccessToken $token.access_token -Recipient 1RedOne `
   -subject 'New Post Alert!' -post $post

This function is highly customized to my needs, thus the kind of weird -post param.  You’ll want to customize this for your own purposes, but the example usage describes how to pass in a JSON representation of a Reddit API Post object for a full featured body message.

Here’s the download for the completed function.  Send-RedditMessage.ps1.  One last wrinkle stands in the way though.

Don’t get too cocky! Reddit API tokens expire in an hour.

Yep.  Other APIs are reasonable, and expire only after an extended period of inactivity.  Or they last for three months, or forever.  Nope, not reddit’s; their tokens expire in one hour.

Fortunately though, refreshing a token is pretty easy.  When we made our initial request for a token earlier using the Connect-RedditAccount cmdlet, the cmdlet specified a URL parameter duration=permanent which instructed the reddit API to provide us a refresh token.

The cmdlet also helpfully stored this token for you, and can refresh your token as well.

How to refresh tokens

Refreshing your token isn’t actually that bad.  If you’re interested in doing this manually, simply send a REST Post to this URL https://www.reddit.com/api/v1/access_token with the following as the payload.  You’ll need the same values for scope, client_id, and redirect_uri, and should provide the refresh token you received with the first auth token as well.

$body = @{
    client_id     = 'YourApiKey'
    grant_type    = 'refresh_token'
    refresh_token = 'YourRefreshTokenHere'
    redirect_uri  = 'YourRedirectURL'
    duration      = 'permanent'
    scope         = 'Needs to be the same scope from before'
}

Finally, you need to provide a Basic authentication header.

What’s Basic Auth?

Basic Authentication is a relatively insecure and yet very common method of authenticating a request.  In Basic Auth, you provide credentials in the format username:password, and the string is then encoded in base64.  Curious what that looks like?  Click here to see.

It is barely a step up from sending a plaintext string, and in fact, can actually signal that something worth obfuscating is being transmitted.  Still, it’s what Reddit wants so…
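If you're curious what building that header by hand looks like, here's a sketch with placeholder values:

#Basic auth is just base64('clientID:clientSecret') stuffed into an Authorization header
$pair       = '{0}:{1}' -f 'YourClientID', 'YourClientSecret'
$encoded    = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
$authHeader = @{ Authorization = "Basic $encoded" }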

The easiest way to do this in PowerShell is to instantiate a Credential object and pass that along.  Username should be your clientID, while the ClientSecret should be your password.

$tempPW = ConvertTo-SecureString 'YourClientSecret' -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ('YourclientID', $tempPW)
Provide all of this in a POST like this:

Invoke-RestMethod https://www.reddit.com/api/v1/access_token `
    -Body $body -Method Post -Credential $credential

and you'll receive another auth code you can use for the next hour.

Of course, all of this is done for you with the PSReddit module.  On import, it will look in the module path for a pair of .ps1xml files, which contain some information about your reddit account, including your oAuth token and Refresh Token; if found, these will be loaded as $PSReddit_accessToken and $PSReddit_RefreshToken.  If you haven't linked an account yet, you're instead prompted on how to do so.

Making this work in Azure

With all of the work done locally, all that remained was to find a way to reproduce this in Azure.

I began by logging on to my Azure Portal and then clicking the + button to add a new resource.

Search for ‘Function App’ (I swear they were called Azure Functions like a week ago…)

Then fill in the mandatory questions.  Be sure to choose a region which makes sense.

The actual UI is a long vertical panel. I awkwardly cut and paste it into this equally awkward square. It looks bad, but at least it took way too long.

Once you’ve filled these in, all that remains is to wait a few minutes for the resource to be created.  Click ‘Go to resource’ when you see the prompt.

Next, we’ll want to click down to ‘Functions’ and then hit the Plus sign to trigger the wizard.

If we wanted to use JavaScript or C# we could choose from a variety of pre-made tools, but instead we’ll choose ‘Create your own custom function’

Next we’re prompted to choose how we want this thing to run.  Do we want the code to run when a URL is hit (commonly referred to as a ‘webhook)’, or when a file is uploaded?  Do we want it to run if the face recognition Cortana API finds a new photo of us on Imgur?  The options are endless.  We’re going plain vanilla today though, so choose Timer.

The last pages of the wizard, we're here!  Azure uses the cron standard for formatting schedules, which is a nightmare if you've only been around Windows and the vastly superior Task Scheduler.  (Except the part where it only generates configurations with XML, ew).  Fortunately you can easily create your own cron expression using this site.

I wanted mine to run once an hour from 09:00 to 13:00, and only on Monday through Friday.  I'm in UTC -6, so the expression worked out to:  0 0 15-20 * * 1-5.  In Azure's {second} {minute} {hour} {day} {month} {day-of-week} format, that translates roughly to: 0 seconds, 0 minutes, hours 15 through 20, any day, any month, days of week 1 - 5 (Monday through Friday)

Clicking Create will show you…

Writing PowerShell in (mostly) real-time in Azure

That UI excites me in my deepest nerdy places, down deep where I fantasize about having telekinesis or being able to do cool parkour moves.  I ❤ that they provide a PowerShell example for us to start hacking away!

The curious mind SHOULD be tempted to click Run and see what happens. So…

Right from the start, I knew I couldn't use the same method of displaying an oAuth window to authorize the delegated token in Azure, as Azure Functions, much like Orchestrator, SMA and PowerShell workflows, do not support interactivity, and thus commands like Write-Host (which writes to the console) and -Debug are not permitted.  There's simply no console to support that interaction.

Once the UI is displayed to a user a single time, you can forever refresh your token by posting the refresh token and credential back to the right endpoint, as mentioned above.  So, I decided to simply create a JSON file, in which I would store the relevant bits for the Refresh request, here’s what my file looked like.

Settings.json
{
    "scope":  [
                  "privatemessages",
                  "save",
                  "submit"
              ],
    "secret":  "xAqXHdh-mySecret_PleaseDontSteal_rV3MY",
    "client_id":  "123Ham4uandMe",
    "duration":  "permanent",
    "refresh_token":  "1092716171-RefreshMe123Please4meySifmKQ",
    "redirect_uri":  "http://www.foxdeploy.com"
}
Just click upload on the right side

Uploading files is easy, just click the upload icon on the far right side, then give it a moment.  It may take up to a minute for your file to appear, so don't hit Upload over and over, or you'll end up with multiple copies of it.  I uploaded the Refresh-Token.ps1 and Send-RedditMessage.ps1 functions as well.

Next, to modify my full script to work with settings stored in a .JSON file, and update the code to reflect its new headless life.

You’ll notice that I had to change the directory at the head of the script.  All the source files for an Azure function will be copied onto a VM and placed under D:\home\site\wwwroot\<functionName>\, so in order to find my content, I needed to Set-Location over to there.  In a future release of Azure Functions, we will likely see them default to the appropriate directory immediately.
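A sketch of how that might look at the top of the function script; the function name in the path is a placeholder, and the file names are the ones uploaded above:

#Move to the function's folder so relative paths resolve, then load settings and helpers
Set-Location 'D:\home\site\wwwroot\TimerTriggerPowerShell1'
$settings = Get-Content .\settings.json -Raw | ConvertFrom-Json
. .\Refresh-Token.ps1
. .\Send-RedditMessage.ps1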

With all of this completed, I hit Save and then…waited.

The first version of this function never checked to see if an alert had been sent before, so every four hours I received a private message for every post on my subreddit!

With this in place, I received notices every few hours until I was caught up, and had personally responded to every post on the sub!  And I now get a PM within hours of a new post, so posts will never go unanswered again!  It was a huge success and is still running today, smoothly.

In conclusion…how much does it cost?

I was curious to see how expensive this would be, so after a month (and about ~100 PMs sent), here’s my stats.  Mind you that as of this moment, Microsoft allows for a super generous free plan, which “…includes a monthly free grant of 1 million requests and 400,000 GB-s of resource consumption per month.”  More pricing details here.

To date, it has still yet to cost me a penny.  I think function apps are a wonderful addition to Azure, and will definitely be deploying them over VMs in the future!

I could not have written this blog post without the help of Mark Kraus, so you should definitely follow him on Twitter and check out his blog.

I also learned a lot about Azure Functions from Stefan Stranger’s post on the topic, here.

And last, but not least, I learned a load from David O’Brien as well.  Not just on Functions, but on a number of other topics too over the years from his wonderful blog.  He’s a super star!

 

Faster Web Cmdlet Design with Chrome 65


If you’ve been following my blog for a while, you know that I LOVE making PowerShell cmdlets, especially ones that consume an API or scrape a web site.

However, when it comes to tools that peruse the web, this can get a bit tricky, especially if a site doesn't publish an API, because then you're stuck parsing HTML or loading and manipulating an invisible Internet Explorer COM object which barfs in Japanese.  And even this terrible approach is closed to us if the site uses AJAX or dynamically loads content.

In that case, you're restricted to making changes on a site while watching Fiddler 4 and trying to find interesting looking method calls (this is how I wrote my PowerShell module for Zenoss, by the way: guessing and checking my way through, with their ancient and outdated Python API docs as my sole and dubious reference material, and with a Fiddler window MITM-ing my own requests in the search to figure out how things actually worked.  It…uh…took a bit longer than I expected…)

This doesn’t have to be the case anymore!  With the new release of Chrome 65 comes a PowerShell power tool so powerful that it’s like moving from a regular apple peeler to this badboy.

What’s this new hotness?

For a long time now if you load the Chrome Developer Tools by hitting F12, you’ve been able to go to the Network tab and copy a HTTP request as a curl statement.

Image Credit : google developers blog

This is super useful if you use a Linux or Mac machine, but cURL statements don’t help us very much in the PowerShell Scripting world.  But as was recently brought to my attention on Twitter, Chrome now amazingly features the option to copy to a PowerShell statement instead!

I had to check for myself and…yep, there it was!  Let’s try and slap something together real quick, shall we?

How do we use it

To use this cool new feature, we browse to a page or resource, interact with it (like filling out a form, submitting a time card entry, or querying for a result) and then RIGHT when we're about to do something interesting, we hit F12, go to the Network tab, then click 'Submit' and look for a POST, PUT or UPDATE method.

More often than not, the response to this web request will contain all or part of the interesting stuff we want to see.

I check the pollen count online a lot.  I live in the South-Eastern United States, home to some of the worst pollen levels recorded on the planet.  We get super high pollen counts here.

Once, I was out jogging in the pine forest of Kennesaw Mountain, back before I had children, when I had the time to exercise, or perform leisure activities, and a gust of wind hit the trees and a visible cloud of yellow pollen flew out.  I breathed it in deeply…and I think that was the moment I developed allergies.

Anyway, I often check the pollen counts to see how hosed I’ll be and if I need to take some medicine, and I really like Weather.com’s pollen tracker.

So I thought to see if I could test out this neat new feature.  I started to type in my zip code in the lookup form and then, decided to record the process.

Full screen recommended!  I’ve got a 4k monitor and recorded in native resolution, you’ll probably need a magnifying glass if you don’t full screen this.

So, to break that down:

  • Prepare to do something interesting – you need to know exactly what you’re going to click or type, and have an idea of what data you’re looking for.  It pays to practice.
  • Open Developer Tools and go to the Network tab and click Record
  • Look through the next few requests – if you see some going to a different domain (or an end-point like /api/v1 or api.somedomain.com), then you may be on the right track.

In my case, I ran through the steps of putting in my zip code, and then hitting enter to make the pollen count display.  I noticed on my dry run with the network tab open that a lot of the interesting looking stuff (and importantly, none of the .js or images) came from a subdomain with API in the name.  You can apply a filter at any point while recording or after using the filter box, so I added one.

Filtering out the cruft is a MUST. Use the filter box in the upper left to restrict which domains show up here.

Now, to click through these in Chrome and see the response data.  Chrome does a good job of formatting it for you.

Finally I found the right one which would give me today’s pollen count (actually I’m being dramatic, I was amazingly able to find the right one in about a minute, from the start of this project!)

All the values I need to know that it is the pine trees here which are making my nose run like a faucet.

All that remained was to see if this new stuff actually worked…

Simply Right Click the Request – Copy – Copy Request as PowerShell!

And now, the real test…

I popped over to the ISE and Control-V’ed that bad boy.  I observed this following PowerShell command.

Invoke-WebRequest -Uri "https://api.weather.com/v2/turbo/vt1pollenobs?apiKey=d522aa97197fd864d36b418f39ebb323&format=json&geocode=34.03%2C-84.69&language=en-US" `
   -Headers @{"Accept"="*/*"; "Referer"="https://weather.com/"; "Origin"="https://weather.com"; "User-Agent"="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"}

We can see in the geocode= part of the URL that entering my zip code converted the location code into lat/long coordinates and then the actual request for the local counts present the coordinates to the vt1PollenObs endpoint of their Turbo internal API.  You can learn a lot from a request’s formatting.

We could probably omit the majority of those header values and it would still work.  We could likely truncate the URL as well, but I had to see what would happen!


StatusCode        : 200
StatusDescription : OK
Content           : {"id": "34.03,-84.69",
                    "vt1pollenobs": 

                       {"reportDate":"2018-03-30T12:43:00Z","totalPollenCount":2928,"tree":4,"grass":0,"weed":1,"mold":null}

                        }
RawContent        : HTTP/1.1 200 OK
                    Access-Control-Allow-Origin: *
                    X-Region: us-east-1
                    Transaction-Id: e64e09d7-b795-4948-8e09-d7b795d948c6
                    Surrogate-Control: ESI/1.0
                    Connection: keep-alive
                    Content-Length: 159
                    Cac...
{...}

I mean, you can see it right there, in the Content field, a beautiful little JSON object!  At this point, sure, you could pipe the output into ConvertFrom-Json to get back a PowerShell object, but I would be remiss (and get an ear-full from Mark Kraus) if I didn't mention that Invoke-RestMethod automatically converts JSON into PowerShell objects!  I swapped that in place of Invoke-WebRequest and stuffed the long values into variables and…
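For reference, here's a sketch of that swap, with the long values pulled out into variables; the API key and coordinates are the ones Chrome captured above:

#Same request as before, but Invoke-RestMethod hands back objects instead of raw JSON
$apiKey  = 'd522aa97197fd864d36b418f39ebb323'
$coords  = '34.03%2C-84.69'
$headers = @{'Accept'='*/*'; 'Referer'='https://weather.com/'; 'Origin'='https://weather.com'}
$uri     = "https://api.weather.com/v2/turbo/vt1pollenobs?apiKey=$apiKey&format=json&geocode=$coords&language=en-US"
(Invoke-RestMethod -Uri $uri -Headers $headers).vt1pollenobs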

Wow, that ‘Just worked’! That never happens!!

Let’s make a cmdlet

OK, going back to that URL, I can tell that if I presented a different set of lat and lng coordinates, I could get the pollen count for a different place.

We could make this into a cool Get-PollenCount cmdlet if we could find a way to convert a ZIP over to a real set of coordinates…

A quick search lead me to Geoco.io, which is very easy to use and has superb Documentation.

Zenoss, why can’t you have docs like this?

Sign up was a breeze, and in just under a minute, I could convert a ZIP to Coords (among many other interesting things) in browser.

I needed them back in the format of [$lat]%2c[$lng], where $lat is the latitude to two degrees of precision and $lng is predictably also the same.  This quick and dirty cmdlet got me there.

Function Get-GeoCoordinate{
param($zip)
$lookup = Invoke-RestMethod "https://api.geocod.io/v1.3/geocode?q=$zip&api_key=$($global:GeocodeAPI)"
"$([math]::Round($lookup.results[0].location.lat,2))%2c$([math]::Round($lookup.results[0].location.lng,2))"
}

Make sure to set $global:GeocodeAPI first.  So, now a quick test and…
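That quick test looks something like this; the API key and ZIP code are placeholders:

$global:GeocodeAPI = 'YourGeocodioApiKeyHere'
Get-GeoCoordinate -zip 30144
#Returns a string in the lat%2clng format the weather API expects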

Okie doke, that was easy enough.   Now to simply modify the URL to parameterize the inputs


Function Get-PollenCount{
    param($coords)

    $headers = @{"Accept"="*/*"; "Referer"="https://weather.com/"; "Origin"="https://weather.com"; "User-Agent"="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"}
    $urlbase = "https://api.weather.com/v2/turbo/vt1pollenobs?apiKey=$global:PollenAPI&format=json&geocode=$coords&language=en-US"
    $totalPollen = Invoke-RestMethod -Uri $urlbase -Headers $headers
    $totalPollen.vt1pollenobs
}

On to the final test…
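Here's a sketch of that final test, chaining the two functions together; the pollen API key placeholder is the one lifted from the captured request:

$global:PollenAPI = 'TheApiKeyFromTheCapturedRequest'
Get-PollenCount -coords (Get-GeoCoordinate -zip 30144)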

It was…that easy??

What’s next?

This new tool built in to Chrome really is a game changer to help us quickly scrape together something working to solve an issue.  It’s AWESOME!

Do you need help developing a cmdlet for your scenario?  We can help!  Post a thread on reddit.com/r/FoxDeploy and I’ll respond in a timely manner and help you get started with a solution for free!  

 

Chrome65

Hard to test cases in Pester


Recently at work I have finally seen the light and begun adding Pester tests to my modules.  Why is this a recent thing, you may ask?  After all, I was at PowerShell Summit and heard the good word about it from Dave Wyatt himself way back in 2015; I've had years to start doing this.

Honestly, I didn’t get it…

To tell the truth, I didn’t understand the purpose of Pester. I always thought ‘Why do I need to test my code? I know it works if it accomplishes the job it’s supposed to do’.

For instance, I understood that Pester was a part of test-driven development, a paradigm in which you start by writing tests before you write any code.  You'd write an 'It should make a box' test and wire it up before you actually wrote the New-Box function.  But I was only looking at the outside of my code, or where it integrates into the environment.  In truth, all of the tests I wrote earlier on were actually integration tests.

See, Pester is a unit testing framework.  It's meant to test the internal logic of your code, so that you can develop with certainty that new features to your function don't break your cmdlet.

CodeCoverage made Pester finally click

It wasn't until I learned about the powerful -CodeCoverage parameter of Pester that it actually clicked.  For instance, here's a small piece of pseudo code, which would more or less add a user to a group in AD.

Function Add-ProtectedGroupMember {
    Param(
    [ValidateSet('PowerUsers','SpecialAdmins')]$GroupName,
    $UserName)

    if ($GroupName -eq 'SpecialAdmins'){
        $GroupOU = 'CN=DomainAdmins,OU=Groups,DC=FoxDeploy,DC=COM'
    }else{
        $GroupOU = 'CN=PowerUsers,OU=Groups,DC=FoxDeploy,DC=COM'
    }

    try {Add-ADGroupMember -Path $GroupOU -Member $UserName -ErrorAction Stop}
    catch {throw "Check UserName; Input [$UserName]" }

}

And to go along with this, I made up a pseudo function called Add-ADGroupMember, defined as the following.

Function Add-ADGroupMember {
    #Fake stand-in for the real AD cmdlet; param names match how it's called above (-Path and -Member)
    param($Path, $Member)
    [pscustomobject]@{Members=@('EAAdmin','Calico','PB&J', $Member);Name=$Path}
}

When I run Pester in -CodeCoverage mode and pass in the path to my Add-ProtectedGroupMember cmdlet, Pester will highlight every branch of logic which probably needs to be tested.  Here's what it looks like if I run Pester again in that mode, without having created any tests.

PS>Invoke-Pester -CodeCoverage .\Add-ProtectedGroupMember.ps1
Code coverage report:
Covered 0.00% of 5 analyzed commands in 1 file.

Missed commands:

File               Function          Line Command
----               --------          ---- -------
Add-ProtectedGroup Add-ProtectedGrou    6 if ($GroupName -eq 'SpecialAdmins'){...
Add-ProtectedGroup Add-ProtectedGrou    7 $GroupOU = 'CN=DomainAdmins,OU=Groups,DC=FoxDeploy,...
Add-ProtectedGroup Add-ProtectedGrou    9 $GroupOU = 'CN=PowerUsers,OU=Groups,DC=FoxDeploy,DC...
Add-ProtectedGroup Add-ProtectedGrou   12 Add-ADGroupMember -Path $GroupOU -Member $UserName ...
Add-ProtectedGroup Add-ProtectedGrou   13 throw "Check UserName; Input [$UserName]"             

As we can see, Pester is testing for the Internal Logic of our Function.  I can look at this report and realize that I need to write a test to make sure that the logic on line 6 works.  And more than highlighting which logic needs to be tested, it’s also basically a challenge.  Can you cover every case in your code?

Pester was stirring something within me, this gamified desire for completion and min-maxing everything.  (If it had Achievement Messages too, I would write Pester tests for everything!)

So, challenge accepted, let’s think through how to write a test to cover the first issue, line 6.  If a user runs my cmdlet and chooses to place the object in the SpecialAdmins OU, the output will always be ”CN=DomainAdmins,OU=Groups,DC=FoxDeploy,DC=COM”.  I can test for that with the following Pester test, saved in a file called Add-ProtectedGroupMember.tests.ps1

Describe "Add-ProtectedGroupMember" {
    It "The if branch for 'SpecialAdmin' use case should work" {
        $A = Add-ProtectedGroupMember -GroupName SpecialAdmins -UserName FoxAdmin
        $A.Name | Should -Be 'CN=DomainAdmins,OU=Groups,DC=FoxDeploy,DC=COM'
    }
}

I run the Pester test again now and…

Wow, with one test, I have now covered 80% of the guts of this cmdlet.  That was sweet!  That's because for this one test to execute successfully, all of these lines in my cmdlet are involved.

All of the lines in Blue were covered under just one test!

Completing The Tests

The next line that needs to be covered is called when the user runs with -GroupName PowerUsers, so we can cover that with this test.

It "The else branch for 'PowerUsers' use case should work" {

        $A = Add-ProtectedGroupMember -GroupName PowerUsers -UserName FoxAdmin
        $a.Name | Should -Be 'CN=PowerUsers,OU=Groups,DC=FoxDeploy,DC=COM'
}

After this test, we’re practically done

All that’s left now is to write a test for this chunk of code.

But I can only test that my error handling works if I can find some way to force the cmdlet in the try block to error somehow.  How the heck do I make my cmdlets poop the bed to test that this cmdlet has good error handling?

How to test your error handling

This is where the Pester keywords Mock and Context come into play.  Pester allows you to 'Mock' a command, which basically replaces that command with one of your own design, to mock up what the cmdlet would do.  For instance, when I'm running a test that uses Active Directory commands, I don't want the tests to actually touch AD.  I would mock Get-ADUser and then have this fake function just output the results from one or two users.

Run the real cmdlet, select the first two results, then paste them into the body of the fake function as a PowerShell object.  Easy-peasy.
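For instance, a quick sketch of what that kind of mock might look like (the user names and properties below are made up for illustration, not pulled from a real directory):

Mock Get-ADUser {
    # Canned output standing in for two real users (inside a Describe block)
    [pscustomobject]@{SamAccountName = 'FoxAdmin'  ; Enabled = $true }
    [pscustomobject]@{SamAccountName = 'CalicoJack'; Enabled = $false}
}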

🦊Take-away 🦊 Mock clips the wings of any cmdlet, preventing it from actually running

If I want to test error handling, I write a new test showing when I expect my function to error (when it should throw).  To make it throw, especially when I am calling external cmdlets, I just mock that cmdlet and replace that cmdlet's guts with something that will throw an error.  To paraphrase:

So, in order to write a test to see if my code respects error handling, I need to overwrite the default behavior of Add-AdGroupMember to a state which will reliably fail.  It’s really simple to do!

 #We need to be able to test that try/catch will work as expected
Mock Add-ADGroupMember {
    throw
}    

It "Should throw if we're unable to change group membership" {

    {Add-ProtectedGroupMember -GroupName PowerUsers -UserName FoxAdmin } | Should -Throw

}

I run the tests again and now…

Oh yeah, 100%!   In my development work, I work towards 100% code coverage to ensure that the guts of my logic are well covered by tests.  This is worth the time to do (so build it into your schedules and planning timelines), because having the tests ensures that I don't break something when I come back to make changes three months from now.

Let’s move on to some of the scenarios which really stumped me for a while, as I’m still basically a newbie at Pester.

Verify Credentials or params are passed as expected

I wrote a cmdlet which called Get-CimInstance; it was something like this.

Function Get-DiskInfo {
    param ($Credential)

    Get-CimInstance Win32_DiskDrive | select Caption,@{Name='SerialNumber';Expression={$_.SerialNumber.Trim()}},`
        @{Name='Size';Expression={$_.Size /1gb -as [int]}}

}

We decided to add support for an optional -Credential param, for cases in which we would need to use a different account.  The difficulty appeared when we wanted to ensure that if the user provided a credential, it was actually handed off down the line.

To solve this problem, first we had to rewrite the cmdlet a little, to prevent having multiple instances of Get-CimInstance in the same cmdlet.  Better to add some extra logic and build up a hashtable containing the parameters to provide than to have multiple instances of the same command in your function.

Function Get-DiskInfo {
    param ($Credential)

    if ($Credential){
        $ParamHash = @{Credential=$Credential;ClassName='Win32_DiskDrive'}
    }
    else{
        $ParamHash = @{ClassName='Win32_DiskDrive'}
    }
    Get-CimInstance @ParamHash | select Caption,@{Name='SerialNumber';Expression={$_.SerialNumber.Trim()}},`
        @{Name='Size';Expression={$_.Size /1gb -as [int]}}

}

Next, to test if the $Credential param was passed in, we mocked Get-CimInstance and configured the code to save the input params outside of the function scope for testing.

Mock Get-CimInstance {
        param($ClassName)
        # Stash what the mocked call received, so the test can inspect it afterwards
        $script:credential = $credential
        $global:ClassName = $ClassName

    } -Verifiable

Finally, in the test itself, we run the mocked cmdlet and then validated that after execution, the value of $Credential was not null.

It 'When -Credential is provided, Credential should be passed to Get-CimInstance' {
        $somePW = ConvertTo-SecureString 'PlainPassword' -AsPlainText -Force
        $cred = New-object System.Management.Automation.PSCredential('SomeUser', $somePW)
        Get-DiskInfo -Credential $cred
        $Credential | should -Not -be $null
    }

Once we came up with this structure to validate parameters were passed in to child functions, it really opened up a world of testing, and allowed us to validate that each of our parameters was tested and did what it was supposed to do.
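Worth mentioning: Pester also offers another way to make this kind of assertion, which we didn't use above, and that's Assert-MockCalled with a -ParameterFilter.  It only passes if the mock was actually invoked with the arguments you expect.  A rough sketch, reusing the same mock from before:

It 'When -Credential is provided, Get-CimInstance is called with it' {
    $somePW = ConvertTo-SecureString 'PlainPassword' -AsPlainText -Force
    $cred   = New-Object System.Management.Automation.PSCredential('SomeUser', $somePW)
    Get-DiskInfo -Credential $cred
    # The filter sees the parameters bound on the mocked call
    Assert-MockCalled Get-CimInstance -Times 1 -Exactly -Scope It -ParameterFilter { $Credential.UserName -eq 'SomeUser' }
}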

Test Remote Types which won’t exist in the test environment

Recently I was working on a PowerShell module which would reach into ConfigMgr over WMI and pull back an instance of the SMS_Collection Class, and then we would call two methods on it.

$CollectionQuery = Get-WMIObject @WMIArgs -class SMS_Collection -Filter "CollectionID = '$CollectionID' and CollectionType='2'"

This gives us an SMS_Collection object, which we can use to call the .AddMemberShipRules() method and add devices to this collection.
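For context, calling that method against a real site looks roughly like the sketch below.  This is illustrative only: the site path, rule name, and ResourceID are placeholders, and the class and method names come from the ConfigMgr SDK rather than from anything shown in this post.

# Build a direct-membership rule (placeholder values throughout) and hand it to the method
$rulePath = "\\SCCM\root\SMS\site_F0X:SMS_CollectionRuleDirect"
$rule = ([wmiclass]$rulePath).CreateInstance()
$rule.ResourceClassName = 'SMS_R_System'
$rule.RuleName          = 'Add Fox93481'
$rule.ResourceID        = 16777220    # the device's ResourceID in CM

$CollectionQuery.AddMemberShipRules(@($rule))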

I didn’t want my Pester tests to be dependent on being able to reach a CM Server to instantiate the object type (nor did I want my automated testing pipeline to have access to ConfigMgr) so…I just mocked everything that I needed.  It turns out that you can easily fake the methods your code needs to call using the Add-Member -MemberType ScriptMethod cmdlet.


Mock Get-WmiObject {
        param($ClassName)
        $script:credential = $credential
        $global:ClassName = $ClassName

        $mock = [pscustomobject]@{CollectionID='FOX0001'
                CollectionRules=''
                CollectionType =2
                Name = 'SomeCollection'
                PSComputerName = 'SomePC123'}

        Add-Member -InputObject $mock -MemberType ScriptMethod -Name AddMemberShipRules -Value { Write-Verbose 'Mocked' }
        $mock
} -Verifiable

Now I could validate that this line of code is run and that the rest of my code later on calls this method with the following code.

It 'Should Receive an Instance of the SMS_Collection object'{
  Add-CMDeviceToCollection -CollectionID SMS0001
  Assert-MockCalled -CommandName Get-WMIObject -Times 1 -Exactly -Scope It -ParameterFilter {$Class -eq 'SMS_CollectionRuleDirect'}

}

Move method calls into their own functions

Looking back to the code for Add-CMDeviceToCollection, note line 84.

$MemberCount = Get-WmiObject @WMIArgs -Class SMS_Collection -ErrorAction Stop -Filter $Filter
$MemberCount.Get()

You can try until you are blue in the face, but Pester does not have the capability to mock .NET objects or handle testing for methods being called. But it DOES excel with functions, so let's just put the method call from above into its own function; then we can check whether the method was called by adding Assert-MockCalled.

Function Call-GetMethod {
   param($InputObject)
    $InputObject.Get()
}
Function Add-CMDeviceToCollection {
        # ...excerpt; only the lines relevant to this test are shown...
        $MemberCount = Get-WmiObject @WMIArgs -Class SMS_Collection -ErrorAction Stop -Filter $Filter
        $MemberCount = Call-GetMethod -InputObject $MemberCount
        Write-Verbose "$Filter direct membership rule count: $($MemberCount.CollectionRules.Count)"

And the test to validate that this line of code works as expected.

It 'Should call the .Get() method for the collection count'{
  Add-CMDeviceToCollection -CollectionID SMS0001
  Assert-MockCalled -CommandName Call-GetMethod -Times 1 -Exactly -Scope It

}

And that’s it!

And that’s all for now folks!  Have you encountered any of these situations before?  Or run into your own tricky case that you’ve solved?  Leave a comment below or post it on reddit.com/r/FoxDeploy to share!

ClientFaux – the fastest way to fill ConfigMgr with Clients


Recently at work, we were debating the best way to handle mass collection moves in ConfigMgr.  We’re talking moving 10,000 or more SCCM devices a day into Configuration Manager collections.

To find out, I installed CM in my beastly Altaro VM Testlab (the build of which we covered here), and then wondered…

how the heck will I get enough clients in CM to test in the first place?

Methods we could use to populate CM with Clients

At first I thought of using SCCM PXE OSD Task Sequences to build dozens of VMs, which my lab could definitely handle.  But a PXE image was taking ~24 minutes to complete, which ruled that out.  Time to a thousand clients, even running four images at a time, would be over one hundred hours. No go.

Then I thought about using differencing disks coupled with AutoUnattend images created using WICD, like we covered here (Hands-off deployments), but that still takes ~9 minutes per device, which is a lot of time and will use up my VM resources.  Time to a thousand clients, assuming four at a time? About 36 hours.

I thought I remembered seeing someone come up with a tool to create fake ConfigMgr clients, so I started searching…and it turns out that, other than some C# code samples, it didn't exist.  I'd basically had a fever dream.

So I decided to make it, because after all, which is more fun to see when you open the console in your testlab, this?

Or this?

And it only took me ~40 hours of dev time and troubleshooting.  But my time per client?  Roughly eight seconds!  That means 450 clients PER hour, or a time to a thousand clients of only about two hours!  Now we're cooking…
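If you want to check my math in a console:

# Back-of-the-envelope math from the numbers above: ~8 seconds per enrollment
3600 / 8            # 450 clients per hour
(1000 * 8) / 3600   # about 2.2 hours to reach a thousand clients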

How is this possible?

This is all made possible using the powerful ConfigMgr SDK, available here.

But really, none of this would have been possible without the blog posts by Minfang Lu of Microsoft and the help of @Adam Meltzer also of Microsoft.  Minfang’s post provided some samples which helped me to understand how to Simulate a SCCM Client.   And Adam is a MSFT SUPERSTAR who responded to my emails at all hours of the night and helped me finally solve the pesky certificate issue which was keeping this from working.  His blog posts really helped me get this working.  It was his samples that got me on the right path in the first place.

So, what does it even do?

The ClientFaux Client Simulation tool allows us to use the super powerful ConfigMgr SDK and its assemblies to simulate a ConfigMgr Client.  We can register a client, which will appear in CM as a new Device. We are able to specify the name of our fake client, and some of its properties, and even run a client discovery.  This concludes the list of what is working at this point 🙂

On the roadmap, we will be able to populate and provide custom fake discovery classes which we can see in Resource Explorer (though this has some issues now).  Imagine testing queries in your test CM and being able to exactly replicate a deployment of an app with multiple versions, for Collection Queries or reporting…This is only the beginning, and I hope that with a good demo of what this does, we'll quickly add more and more features.  If you're interested…

Here’s the source, help make this better!

Standard Boilerplate Warning

The focus of this tool is to allow us to stage our CM with a bunch of clients, so we can do fun things like have huge numbers of devices appear in our console, test our skills with querying, and have interesting and real-looking data to include as we practice our custom SQL reporting skills.  This should be done in your test lab.  I don't see how this can cause your CM serious issues, but I've only got a sample size of one so far.  Consider yourself warned: I can't help you if you create 100K devices and your donut charts in CM suddenly look weird.  Do this in test.

How do I use the ClientFaux tool

Getting up and running is easy, simply click on the releases tab and download the newest binary listed there.  Extract it somewhere on your PC.

Next, download and install the ConfigMgr SDK, then open up Explorer and copy the ​Microsoft.ConfigurationManagement.Messaging.dll file from (“C:\Program Files (x86)\Microsoft System Center 2012 R2 Configuration Manager SDK\Redistributables\Microsoft.ConfigurationManagement.Messaging.dll”) to the same path where you put the ClientFaux.
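If you'd rather script that copy than click through Explorer, a one-liner like this does the trick (assuming you extracted ClientFaux to C:\temp\ClientFaux, the folder used in the examples below):

# Copy the Messaging DLL from the SDK install next to ClientFaux.exe
Copy-Item "C:\Program Files (x86)\Microsoft System Center 2012 R2 Configuration Manager SDK\Redistributables\Microsoft.ConfigurationManagement.Messaging.dll" `
    -Destination 'C:\temp\ClientFaux\' -Verbose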

Your directory should look like this now.

dir
Yes, that IS a handdrawn icon

At this point you’re probably noticing the .exe file and wondering…

Wait, no PowerShell Cmdlets?!

I know, I know, I deserve shame.  Especially given the theme of my blog is basically shoe-horning and making everything work in PowerShell.  I've been working in C# a bit at work now and sort of have a small clue, so it felt easier to start in C# and plan to add PowerShell later.  (It's on the plan, I swear!)  I also have a GUI planned as well; worry not, these are the early days.

To start creating clients, we need five things:

  • A desired name for the new client in CM
  • The path to a CM Compatible Certificate in PFX format
  • The Password to the above cert
  • The ConfigMgr Site Code
  • The Name of the CM Server

Making the certs was kind of tricky (I’ll cover the woes I faced in the upcoming ‘ClientFaux Build Log’ post, to come next week), so I wrote a PowerShell script to handle all of this.  Run this from a member server which can route to CM.  In my lab, I have a small domain with a CM Server, an Admin box and a Domain Controller.  I ran this from the Admin box.

$newCert = New-SelfSignedCertificate `
    -KeyLength 2048 `
    -HashAlgorithm "SHA256" `
    -Provider  "Microsoft Enhanced RSA and AES Cryptographic Provider" `
    -KeyExportPolicy Exportable -KeySpec KeyExchange `
    -Subject 'SCCM Test Certificate' -KeyUsageProperty All -Verbose 

    start-sleep -Milliseconds 650

    $pwd = ConvertTo-SecureString -String 'Pa$$w0rd!' -Force -AsPlainText

Export-PfxCertificate -cert cert:\localMachine\my\$($newCert.Thumbprint) -FilePath c:\temp\ClientFaux\CMCert.pfx -Password $pwd -Verbose
Remove-Item -Path cert:\localMachine\my\$($newCert.Thumbprint) -Verbose

ClientFaux MynewPC123 c:\temp\ClientFaux\CMCert.pfx 'Pa$$w0rd!' F0X SCCM

This will create the cert (which has to use the SHA1 or SHA256 hashing algorithm and be 2048 bits long), then export it with a password, and then delete the cert from your cert store.  I ran into issues when I had more than 10,000 certs in my store, and we don't need the cert there anymore to actually use it.

Then, it will trigger ClientFaux.exe with those params.
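For reference, here's how those positional arguments line up (this is just my annotation of the call above; as mentioned later, the alpha build doesn't take named parameters yet):

# ClientFaux <NewClientName> <PathToPfxCert> <CertPassword> <SiteCode> <ManagementPoint>
ClientFaux MynewPC123 c:\temp\ClientFaux\CMCert.pfx 'Pa$$w0rd!' F0X SCCM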

This particular configuration above says: "Register a new client using the cert found at C:\temp\ClientFaux\CMCert.pfx, with the password of 'Pa$$w0rd!', and then register with the F0X ConfigMgr site using the Management Point SCCM."  Here's what it will look like:

Enroll

If you run into errors, there will be a log file created with every enrollment in the same directory as the binary.  The log file is super verbose, but you can also find logging info on the Management Point itself: look at MP_RegistrationManager.log and report any errors you see (though if you use this configuration, you should not run into any).

What will it do?

At this point, we can see the log files on the Management Point, found on the SCCM drive under SMS_CCM\Logs\MP_RegistrationManager.log; a completed request will look like this:

Mp Reg: Reply message
MP Reg: Processing completed. Completion state = 0
MP Reg: Message ReplyTo : direct:DC2016:SccmMessaging
MP Reg: Message Timeout : 60000
Parsing done.
Processing Registration request from Client 'Fox93481.FoxDeploy.local'
Successfully created certificate context.
MP Reg: Successfully created context from the raw signing certificate.
Begin validation of Certificate [Thumbprint 941D7F46903BEE8A7A67BF7B416453BFC0F18FFE] issued to 'SCCM Test Certificate'
Completed validation of Certificate [Thumbprint 941D7F46903BEE8A7A67BF7B416453BFC0F18FFE] issued to 'SCCM Test Certificate'
Successfully created certificate context.
MP Reg: Successfully created context from the raw encryption certificate.
Registration Signature: SuperLongHashHere
MP Reg: DDR written to [E:\CM\inboxes\auth\ddm.box\regreq\RPB886P6.RDR] for Client [GUID:A698D203-C0F9-4E5D-8525-3AA55572BF5F] with Certificate Thumbprint [941D7F46903BEE8A7A67BF7B416453BFC0F18FFE]
Mp Reg: Reply message

Give it a moment, and it will appear in the ConfigMgr console!

NewDevice

But, how do I get–like–10k of them

If you want to get your console really filled with devices, then you can run this script to create boatloads of devices!  I'm assuming you placed ClientFaux under C:\Temp\ClientFaux.  Simply edit lines 1 and 2 with the starting and ending numbers, then edit line 7 with your desired name.  If you change nothing, this will create PCs labeled Fox1, Fox2, and so on up to 50,000.

$str = 1
$end = 50000
while ($str -le $end){
    if(-not(test-path C:\temp)){
        new-item -Path C:\temp -ItemType Directory -Force
    }
    $NewName = "Fox$str"
    $newCert = New-SelfSignedCertificate `
        -KeyLength 2048 `
        -HashAlgorithm "SHA256" `
        -Provider  "Microsoft Enhanced RSA and AES Cryptographic Provider" `
        -KeyExportPolicy Exportable -KeySpec KeyExchange `
        -Subject 'SCCM Test Certificate' -KeyUsageProperty All -Verbose 
    
    Start-Sleep -Seconds 3   # brief pause before exporting the cert

    $pwd = ConvertTo-SecureString -String 'Pa$$w0rd!' -Force -AsPlainText

    Export-PfxCertificate -cert cert:\localMachine\my\$($newCert.Thumbprint) -FilePath "c:\temp\Client_$NewName.pfx" -Password $pwd -Verbose 
    Remove-Item -Path cert:\localMachine\my\$($newCert.Thumbprint) -Verbose
    C:\temp\ClientFaux\ClientFaux.exe $NewName c:\temp\Client_$NewName.pfx 'Pa$$w0rd!' 'F0X' 'SCCM'
    $str+=1
}

You can also run three or four instances of this at a time as well! If you do that, I’d recommend using multiple copies of the .exe in their own folder, one per thread, to prevent two instances from trying to create the same named log file.
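Here's a sketch of what that could look like.  Treat it as an illustration only: the Thread folders and the New-FauxClients.ps1 script name are hypothetical, and each copy of the script should get its own non-overlapping $str and $end range.

# Launch four copies, each pointed at its own folder and its own number range
1..4 | ForEach-Object {
    Start-Process powershell.exe -ArgumentList '-NoProfile', '-File', "C:\temp\Thread$_\New-FauxClients.ps1"
}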

What’s Next

So, this represents my alpha build.  It is working reliably, but it could use a lot of features and testing.  For one, how about named parameters?  How about making a GUI for it?  What about making PowerShell cmdlets instead of a binary (more in line with the theme of this blog!)?

These are all planned and will come…eventually. But I could use some help!  If you’d like to contribute, please test the project here, and send me issues as you come across them.  If you want to resolve issues, I’ll happily accept pull Requests too!

Source Code here on GitHub!

Compiled Binary – Alpha 

Sources

I learned so much writing this post, so I wanted to call out down here a listing of all of the resources I used while writing this project.  In the build-log post, we'll talk about each of these and how they came up, in the hopes that it will help you on your own ConfigMgr integrations 🙂
