
Building a Windows 10 IoT C# traffic monitor: Part II


Previously 🔗, we took off our socks and put our feet into the sand, and wrote our first C# Console application.  We built on it and added the ability to complete a web request and even parsed some JSON as well!  Along the way, we learned a bit of how things work in C# and some background on programming terms and functionality.

In this post, we will take our code and port it over to run on .net core, and hook up the results to the GUI. Stick with me here, and by the end you’ll have a framework you can modify to list your Twitter followers, your Facebook Feed, or monitor your own blog stats as well.

And if you do modify it…

Then share it!  You’ll find a “LookWhatIbuilt” folder in the repository.  You are encouraged to share screenshots, snippets, even your own whole project if you like, by sending a PR.  Once we have a few of these, we’ll do a Spotlight post highlighting some of the cool things people are doing.

Cracking open the IoTDefaultApp

When we imaged our rPi with the IoT Dashboard, it wrote the OS and also delivered the ‘IoT Core Default App’ to the device.  It’s pretty slick looking and gives us a very good jumping-off point to reskin things and have our app look nice.  We can view the code for the 🔗 Default App here on the MS IoT GitHub.

Since this is ‘baby’s first application’, we are going to modify this existing app to suit our purposes.  Download the sample from the link above and then double-click the Visual Studio Project (.SLN) file.  There’s kind of a lot going on when you first open it, but the file we want to edit is MainPage.XAML.

Over in the Solution Explorer in the right-gutter, expand out to IotCoreDefaultApp \ Views then click MainPage.xaml.

Here is the template we’re going to be modifying.

There’s kind of a lot going on here too, so I recommend that you power on your Pi now and see what the default app looks like, here’s a screen shot…

Please don't hack my internet IP address!

Redecorating the app

Me being me, of course I’m going to make it look pretty before I make it work, so I spent some time adding files, dragging the layout around, that sort of thing.  To add a new file, first click the Solution Explorer \ Assets folder, then right-click and choose ‘Add Existing Item’.

Next, go to the Left Gutter \ Toolbox\ and choose the Image Control, then drag the area you’d like your image to appear.

Now, back in the Right Gutter \ Properties \ Common, use the Source dropdown box and pick your image.

PROTIP: be sure to use this process of adding an image and selecting it relatively, rather than specifying the full path to the file.

If you don’t, you can end up with the file not getting delivered with the app to your pi.  Not good!

 

I did a little bit of tweaking here, and here is where I ended up

I forgot to screen shot my first pass, sorry!

One of the core values of my job is to Make it work before you make it look pretty.  It really speaks to me, namely because I never do it.

We made it look pretty, now, to make it work

Hitting F7, or right-clicking and choosing ‘View Code‘, will show the C# behind this View.  Developers like to call the code behind a view the code-behind.

We see here a whole lot of references to assemblies


//using IoTCoreDefaultApp.Utils;
//using System;
//using System.Globalization;
//using System.IO;
//using System.Net;
//using System.Net.Http;
//using Windows.Data.Json;
//using Windows.Networking.Connectivity;
//using Windows.Storage;
//using Windows.System;
//using Windows.UI.Core;
//using Windows.UI.Xaml;
//using Windows.UI.Xaml.Controls;
//using Windows.UI.Xaml.Media.Imaging;
//using Windows.UI.Xaml.Navigation;
//using MyClasses;
//using Windows.System.Threading;

Then we define a namespace for our app, called IotCoreDefaultApp, then a class called MainPage, which is where the entirety of the code for this app will live.  We also define a Dispatcher, which might be familiar from our post on 🔗multi-threaded GUIs with PowerShell.  Because our GUI is going to be multithreaded, we can’t just say Label.Text = "New Value"; instead, we’ll use a Dispatcher to enact the change for us.

namespace IotCoreDefaultApp
{
    public sealed partial class MainPage : Page
    {
        public static MainPage Current;
        private CoreDispatcher MainPageDispatcher;
        private DispatcherTimer timer;
        private DispatcherTimer GetStattimer;
        private DispatcherTimer countdown;
        private ThreadPoolTimer timerInt;
        private ConnectedDevicePresenter connectedDevicePresenter;

        public CoreDispatcher UIThreadDispatcher
        {
            get
            {
                return MainPageDispatcher;
            }

            set
            {
                MainPageDispatcher = value;
            }
        }

Next, the MainPage() constructor gets defined, which kicks off some interval timers which run, um, on an interval and update UI info.  We’ll skip over the boring house-keeping functions of this app (which you can read here 🔗).  Most of these run when something is clicked, or when a timer interval counts down.

Within the timer setup (beginning around line 65), you’ll see that the timer gets started, counts down 20 seconds, and then calls a function called timer_Tick.  All we have to do is define our own method and add it to timer_Tick, and bam, it will automatically run on the interval specified (20 seconds, in this sample).


timer = new DispatcherTimer();
timer.Tick += timer_Tick;
timer.Interval = TimeSpan.FromSeconds(20);

this.Loaded += async (sender, e) =>
{
    await MainPageDispatcher.RunAsync(CoreDispatcherPriority.Low, () =>
    {
        UpdateBoardInfo();
        UpdateNetworkInfo();
        UpdateDateTime();
        UpdateConnectedDevices();
        timer.Start();
    });
};
this.Unloaded += (sender, e) =>
{
    timer.Stop();
};
}

Let’s see what else happens when timer_Tick gets called.  Double-click timer_Tick and choose ‘Go to Definition’ to jump there.

private void timer_Tick(object sender, object e)
{
    UpdateDateTime();
}

So, every 20 seconds, it runs and calls UpdateDateTime(), care to guess what this function does?

Now that we’re familiar with how this works so far, let’s make our own method.

Making our own Method

I found a nice innocuous spot to add my method, in between two other methods and started typing.

I’m defining this as a private method, meaning that only this body of code can use it.  Next, because performing a web request can take a few seconds to complete, and we don’t want the code to lock up and freeze here, we add the async modifier.  Finally, we add void because this code block will run the web request and update the UI, but doesn’t return a value otherwise.

A word on Async and Await

We want our code to be responsive, and we definitely don’t want the UI to hang and crash, so running things asynchronously is a necessity.  We can do that using C#’s state machine (more on that here) to ensure that the app will not hang waiting for a slow web request.

When you define a method as asynchronous, you also have to specify an await statement somewhere, to identify which code is allowed to run asynchronously while the rest of the app keeps running.

 

Now, let’s copy and paste the code we had working previously in the last post into the method and see if we get any squiggles.

Copying and Pasting our old code…why doesn’t it work?

We will have some squiggles here because we are bringing code from a full-fledged .net app and now targeting .net Core.  Core is cool…but it’s only got some of the features of full .net.  Some stuff just won’t work.  I am on a mission to kill these red squiggles.

First off, we don’t have a Console to write off to, so let’s comment out or delete those lines (the double forward-slash // is used to comment in C#).

Next, the HttpWebRequest class doesn’t offer the GetResponse() method when we target .Net Core for Universal Windows Apps.

Let’s delete GetResponse() and see if there is an alternative.

Now that I’ve swapped this for GetResponseAsync(), I get MORE squiggles.  This time, the squiggles are because I’m telling the program to run this asynchronously and keep on going…but I don’t tell it to wait for the response anywhere.

The way to fix this is to add an await to the command as well.  This makes sense too, because there is always going to be a slight delay when I run a web request.  I want my app to know it can run this method we’re writing, then proceed to do other things, and come back when the web request has completed to finish the rest of my method.

Yay, no more squiggles, time to actually run this badboy

I’m going to want to test the results from this, so I’ll set a breakpoint within my Test() method, so that I can see the values and results when this code runs.  I’m going to highlight this line and hit F9 to create a breakpoint, which will tell the debugger and my program to stop right here.

With all that done, I’ll modify the timer_Tick method to have it call my code, Test().

Once every twenty seconds, the timer will expire and it will both update the time, and call our new method!

Pushing code to the Raspberry Pi

Pushing code to the Pi is easy peasy.  In the Right Gutter \ Solution Explorer, right-click your project and choose Properties.

Next, click Debug, then specify the Target Device as a Remote Machine.  Then click Find.

 Simply click your device and that’s it!

You might not even be asked for credentials. Nope, I don’t know why it doesn’t need credentials…

Now, finally, hit F5!

You’ll see a kind of lengthy build process, as the first boot or two of a Pi is really pretty slow.  Then you’ll see a longggggggg Windows OOBE screen displayed, which counts down and eventually picks the English language and Pacific Time Zone.  You can change this later by plugging in a mouse and keyboard.

Download link: Our code at this point should look something like this🔗.

Live Debugging

While our code is running, it will eventually swap over to the main page and display something along these lines.

If we have Visual Studio in the foreground, the app will pause when it reaches our breakpoint and we can see the values for each variable, in real time!

So, it looks like our web request completed successfully, but somehow the responseFromServer value looks like garbage.  Why might that be?  Maybe HttpClient is different between full .net and .net Core?

Spoiler warning: it is different. 

We’re able to hit the endpoint, but then we get this gibberish.

\b\0\0\0\0\0\0\a`I�%&/m�{JJ��t\b�`$ؐ@������iG#

Fortunately I recognized this gibberish as looking kind of like a GZipped payload.  See, all modern browsers support GZip as a pretty good style of compression.  It’s so common that even Invoke-RestMethod and HttpClient just natively support it.  However, in .net Core it’s an option we have to turn on.

And we’ll do it by defining a new HttpClientHandler as a way of passing our preferences over to HttpClient when we spin up a new one.  Here’s how to do it, thanks to this StackOverflow Answer.

HttpClientHandler handler = new HttpClientHandler()
{
    AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
};

using (var client = new HttpClient(handler))
{
    // your code
}

I simply move all of the HTTP code within the //your code space, like so.


private async void GetStats()
{
    HttpClientHandler handler = new HttpClientHandler()
    {
        AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
    };

    using (var client2 = new HttpClient(handler))
    {
        // your code
        string url = "https://public-api.wordpress.com/rest/v1.1/sites/56752040/stats/summary/?fields=views&period=year&num=5";
        //client.DefaultRequestHeaders.Add();
        client2.DefaultRequestHeaders.Add("Authorization", "Bearer YourKeyHere");

        HttpResponseMessage response1 = await client2.GetAsync(url);

        //assign the response to a variable called ham
        string ham = await response1.Content.ReadAsStringAsync();
    }
}

Running it again, I can see that the fix worked, and the response isn’t GZipped anymore!

But…well, crap, I can’t use JSON.net (or if it’s possible, I couldn’t figure it out). What am I going to do?!?1

Learning how to parse JSON, again

I hope I didn’t leave you hanging with that cliff hanger.  Fortunately, the Universal Windows platform has its own built-in JSON parser, under Windows.Data.Json.

We can instantiate one of these badboys like this.


var Response = JsonObject.Parse(ham);

This will put it into a better and more parsable format, and store that in Response.  The last step is to pull out the value we want.

In PowerShell, of course, we would just run $Views = $Response.Views  and it would just work because PowerShell is Love.
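
For comparison, here’s a minimal sketch of that same call in PowerShell, using the same WordPress stats endpoint and Bearer header from the code above (the token is a placeholder):

#PowerShell equivalent of the C# we are building; 'YourKeyHere' is a placeholder token
$url      = "https://public-api.wordpress.com/rest/v1.1/sites/56752040/stats/summary/?fields=views&period=year&num=5"
$headers  = @{ Authorization = "Bearer YourKeyHere" }
$response = Invoke-RestMethod -Uri $url -Headers $headers
$views    = $response.views   #no envelope-unwrapping required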

In C#, and with Windows.Data.JSON, we have to pull out the value, like snatching victory from the jaws of defeat.


var Response = JsonObject.Parse(ham);
var hits = Response.GetNamedValue("views").GetNumber();

Response.GetNamedValue("views") gives us the JSON representation of that property as in {1000}, while .GetNumber() strips off the JSON envelope and leaves our number in its unadorned natural form like so 1000.

I am FINALLY ready to update the text block.

Crashing the whole thing

I was a bright-eyed summer child, like I was before I started reading Game of Thrones, so I decided to happily just try to update the .Text property of my big fancy count-down timer like so:


var Response = JsonObject.Parse(ham);
var hits = Response.GetNamedValue("views").GetNumber();

var cat = "Lyla"; //this was a breakpoint, named after my cat

HitCounter.Text = hits.ToString("N0");

I hit F5, waited, really thrilled to see the number change and…it crashed.  The error message said

The calling thread cannot access this object because a different thread owns it.

This one was really puzzling, but this helpful StackOverflow post explains that it’s because the very nature of threading and asynchronous coding means that I can’t always expect to be able to change UI elements in real time.

Instead, we have to schedule the change, which is SUPER easy.

How to update UI from within a thread

I just modify the call above like so, which makes use of the Dispatcher to perform the update whenever the program is ready.


await MainPageDispatcher.RunAsync(CoreDispatcherPriority.Low, () =>
{
//Move your UI changes into this area
HitCounter.Text = hits.ToString("N0");
});

And now…it works.

Publishing the app and configuring Auto Start

When we’re finally done with the code (for now), publishing the finished version to the Pi is super easy.  Right-click the solution in the right-hand side and choose Properties.  In the window that appears, go to the Debug tab and change the Configuration dropdown to Release.

Change the configuration to Release, and then F5 one last time.

Once you do that, the app is written and configured to run without remote debug.  Our little Raspberry is almost ready to run on its own!

The very last step here is to configure our app to automatically run on power on.  Since we fiddled with it so much, we’ll need to set this again.  You can do this from the Windows IoT app by right-clicking the device and choosing Launch Windows Device Portal.

This launches a web console that is actually really slick.

You can watch live performance graphs, launch apps, configure wifi and updates and change the time zone here as well.  This is also where we configure which app launches when you turn the Pi on.

From this page, click Apps \ App Manager and find our app (you may have changed the name, but I left it as IoTCoreDefaultApp) and then click the radio button for Startup.

And now, Restart it.

In just a few minutes, you should see the Pi reboot and automatically launch our monitoring app.  Awesome, we’re developers now!

Completed Code Download Link – Here 

How to modify this to query your own WHATEVER

Simply change the body of GetStats() here to modify this to query whatever you like.  So long as it returns a JSON body, this format will work.


private async void GetStats()
{
    //add your own query for ANYTHING here
    HttpClientHandler handler = new HttpClientHandler()
    {
        AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
    };

    using (var client2 = new HttpClient(handler))
    {
        // whatever URL you want to hit should be here
        string url = "https://yourAPIUrlhere.com";

        //if your URL or service uses Bearer Auth, use this example
        client2.DefaultRequestHeaders.Add("Authorization", "Bearer YourKeyHere");

        HttpResponseMessage response1 = await client2.GetAsync(url);
        string ham = await response1.Content.ReadAsStringAsync();

        var Response = JsonObject.Parse(ham);
        //var hits = Response.GetNamedValue("views").GetNumber();

        //set a breakpoint here to inspect and see how your request worked.  Depending on the results,
        //use the appropriate value for GetNamedValue() to get the syntax working

        var bestCatname = "Lyla"; //a handy line to set a breakpoint on

        //this block below handles threading the request to change a UI element's value

        /*await MainPageDispatcher.RunAsync(CoreDispatcherPriority.Low, () =>
        {
            //Your UI change here
            //HitCounter.Text = hits.ToString("N0");
        });
        */
        //HitCounter.Text = viewsdesuka.ToString();
    }
}

 

Resources and thanks!

I could not have done this without the support of at least a dozen of my friends from Twitter.  Special thanks to Trond Hindes and Stuart Preston, and to those who took the time to weigh in on my StackOverflow question.

Additionally, these posts all helped get the final product cobbled together.

Now, what kind of stuff have you written or do you plan to write with this template?  Be sure to share here, or on Reddit.com/r/FoxDeploy!

Finally, please share!  If you come up with something cool, add it to a subfolder of Look what I made,  here!



Wow, ONE MILLION HITS!


I hope you guys will tolerate a little bit of navel gazing for this post!

Today at the FoxDeploy Global Headquarters in Marietta, Georgia, we counted down to a very special milestone here!

1 MILLION HITS!

First and foremost, thank you very much for sticking with me through the years and through the awesome engagement, comments and corrections!  In this post, we’ll take a quick look back at some of the history of the site, what I’ve learned, some hard knocks and some of the fun to look forward to!

Looking back to my First post

Hard to believe that just under four years ago, on August 21st I made my first post here!  It was this helpful little post here, How to Reset your local admin password on Hyper-V.

This post has done well, accruing a little more than 1,000 hits over the years.  I’m not sure why this might be, but this post on how to recover your password spikes every year after Cinco de Mayo.  What may have happened so that people accidentally locked themselves out?

The Monday after Cinco de Mayo had the greatest hits ever for this post. Looks like people all forgot their passwords for some reason…maybe Margaritas?

This post slowly gained some traffic and gave me confidence to focus on my writing in the evenings, but children would prove to make that a little bit difficult.

I tried writing on vacations, while my daughter played with sand.

Care to guess how likely it was that sand got poured in my laptop?

And writing while singing songs to my newborn son.

Blogging while bouncing my three week old son to sleep.

Fortunately for me, the Internet loved me, and I knew I could count on their support.  Or so I thought…

That one time when I was wrong

Oh boy.  I researched and researched this post comparing the performance of Storage Spaces vs. RAID…and then totally got it wrong.

Actual video footage of my understanding of how Storage Spaces are meant to work

I knew I made an especially egregious error when I got an e-mail from people at Microsoft asking to meet with me, in person, when I was in Redmond the following week for the MVP Summit, hoping to explain the fundamentals of just how incorrect I’d been.  The comments from Reddit were…not super supportive.

Lesson learned : don’t be wrong.

That time Reddit banded together to make fun of me

I believe my most upvoted FoxDeploy post on Reddit was when I wrote a guide to Recovering Deduped files from Windows 10.

In the scenario the post covered, we had imported binaries from a Server version of Windows into Windows 8.1 desktop and installed them using DISM.  This was really unsupported, but worked and gave you full dedupe in Windows desktop!

However, Windows 10 didn’t know about this and would allow an in-place upgrade of Windows 8 and would remove the binaries, leaving users unable to access their files!

It affected me and a number of my friends, and the only solution was poorly outlined in forums on the web, so I wrote this guide.  Reddit thought that it was a hilarious predicament though and definitely gave me some, uh, constructive feedback.

That being said, this post brought in my single highest day of traffic EVER and continues to get good traffic and comments every month.  Maybe this issue impacted more Redditors than they would like to admit?

Most popular posts

By far, no contest, my Creating PowerShell GUIs in Minutes using Visual Studio – A New Hope post is my single most popular post ever written, representing more than 140K views since writing it back in April, 2015.

Since publishing, it has been viewed an average of 200 times a day.  Wow!

Note to self: write more GUI stuff

What I’ve learned about you all

The awesome thing about writing this site has been the two-way communication I’ve had throughout.  Whether on Reddit, Twitter, StackOverflow or here on the comments at FoxDeploy, you have been SURE to let me know your opinions on everything I’ve written.

Here are some things I’ve learned about you over the years.

You LOVE contests and challenges

These were probably my most fun posts ever, particularly when we look at the number of comments and the audience interaction.

These contests were the most fun and had the most entries.

I will keep this in mind and make sure to create more of these in the future months!

You’re probably not from Svalbard, Guinea, Western Sahara or the Central African Republic

I’ve gotten traffic from all over the world, mostly from the US, UK and EU, which makes sense. But never even a single hit from these countries.

If you’re backpacking or traveling with a sat phone there sometime, make sure to load the site and then hit me up and let me know 🙂

I had to look up Svalbard on a map to figure out what country that even was.

You love to check out the site at 8:00 on Wednesday

Not sure why this is, but it seems like Humpday is a good day for FoxDeploy.

My theory is that you’ve managed to put out of all of your sysadmin fires, and finally regained some composure and…most likely, want to make some cool GUIs 🙂

What’s New and What’s next?

So, here’s what we have to look forward to in the coming months at FoxDeploy!

New FoxDeploy Subreddit Now Open!

Commenting on WordPress is kind of hard to keep track of, particularly with the lack of threading and kind of iffy code-highlighting.  To solve this….

I’ve opened Reddit.com/R/FoxDeploy!  Yep!  It’s our very own subreddit!

From now on with every post, I will include a link to the comment thread for that post.  This will make it much easier for me to be aware of comments and make sure I am able to help out that much quicker.

New Image!

To celebrate this milestone, I asked the awesome Joie Foster of HeyJoie Art to design a new image for the site.  We went back and forth on designs a number of times, and she made so many excellent concepts that it was hard to choose one.  I absolutely love the way this image turned out and the way it feels.  She’s really incredible.

If you’re looking for an incredibly skilled and talented illustrator to help with your site, keep her in mind.

More Contests

I had a GREAT time with some of the fun contests we’ve done on the site so far.  Expect more to come!  (And comment below or on our Subreddit if you have an idea!)

New Hyper-V Lab with Walkthrough

In coordination with my sponsor Altaro Software, I’ll be building a BEASTLY Hyper-V server and outlining the build and how to safely backup your VMs with Altaro.  This will be a performance monster and should be a lot of fun!  The hardware is starting to come in the mail, so I will be working on this one in the next few months.

Going even further into devops

I anticipate some exciting changes in the coming months and expect that I’ll have more and more material to write about the interesting worlds of PowerShell and Devops, in addition to more C# and GUI related posts.  Stay tuned!

In closing

I’ve had such a wonderful time writing for this site these last four years and have learned so much in trying to share what I know and have learned.

Thank you so much for tuning in, offering feedback and helping me to grow as an author, and as a person.

Yours truly,

A humble Fox

 

 


MDM errors, failures and how to fix them


Over the course of this many-month AirWatch MDM project I’ve been conducting, I have run into WAY more than my fair share of MDM enrollment-related issues.

Troubleshooting MDM issues presents a whole new set of difficulties, because where SCCM provides glorious log files with tons of community engagement and answers, MDM gives you hard-to-locate Windows Event logs. Every SCCM error code is meticulously documented on the web, whereas MDM errors give you this result:

This is how you know you are WAY off the reservation!

Never fear though, for I have compiled the most common and frustrating errors which I have painstakingly worked through into this very originally named volume.

Where to find enrollment errors

You can monitor the status of an enrollment in the Windows Event Viewer, under this area:

Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Admin

It is routine to see some errors here, so not every error needs to be solved; however, when you’re trying to troubleshoot why a machine won’t enroll in MDM, this is the first place you should look.
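
If you’d rather pull these events with PowerShell than click through Event Viewer, a quick sketch like this (filtering the same log down to error-level entries) does the trick:

Get-WinEvent -LogName 'Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Admin' -MaxEvents 100 |
    Where-Object LevelDisplayName -eq 'Error' |
    Select-Object TimeCreated, Id, Message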

When you do find an error message, it’s not going to look like ‘Cannot find file’, instead, it will look something like this:

MDM ConfigurationManager: Command failure status. Configuration Source ID: (f5a99910-eb59-4f19-89d2-4cab0fa591b8),
Enrollment Name: (Provisioning), Provider Name: (Provisioning),
Command Type: (CmdType_Add), CSP URI: (./Vendor/MSFT/Provisioning/Enrollments/staging@FoxDeploy.com),
Result: (Unknown Win32 Error code: 0x80192ee7).

The error code at the very end is what we’re looking for.

How to decipher most errors

 

You can always use the reliable and venerable SCCM Log File Viewer, CMTrace.exe, to track down an error code.  Simply open the app and hit Control+L.

This utility contains most Windows core error messages, and is particularly good when it comes to SCCM errors, but some are not documented here…

Err.exe, an oldie but goodie

This tool, also known as the Microsoft Exchange Server Error Lookup Tool 2007, hearkens from an era in which Microsoft was still paid by the letter for application names, and we had tools with easy-to-say names like System Center 2012 R2 Configuration Manager with Service Pack 1 Cumulative Update 5.

The tool was deployed along with Exchange 2007, but is still awesome and amazingly useful. You can download it here.

Simply install it, and then add the folder to your path, or copy it into C:\Windows\System32. Then, call it like so

err 80192ee7
# as an HRESULT: Severity: FAILURE (1), Facility: 0x19, Code 0x2ee7

# for hex 0x2ee7 / decimal 12007 :
ERROR_INTERNET_NAME_NOT_RESOLVED inetmsg.h
ERROR_INTERNET_NAME_NOT_RESOLVED wininet.h
# 2 matches found for "80192ee7"

For the REALLY tough errors

For the weirdest of the weird ones, you can search the header source symbols for Windows, which have kindly been placed online in this GitHub repo for the Windows Software Development Kit. Amazingly, I found answers to my issues here more often than I should have.

Windows Software Development Kit GitHub Page

Simply go there and Control+F your way to victory. There are also a lot of interesting tidbits to be found there as well.

Common Enrollment Failure Codes and Resolutions

Now that I’ve covered how you can find your own answers, here are some of the most common MDM Enrollment errors I’ve encountered.  Oftentimes, the first few characters of the code may differ; the final five characters are the most important:

0x8004002 – File Not Found –

Something happened and the Provisioning Package file can’t be read.  We observed this happen when our antivirus blocked the provisioning engine, ProvTool.exe, from decompressing itself, or blocked a file copy operation.

If you’re running the PPKG from a thumb drive or network drive, try copying the file to C:\ and running it from there instead.

Ensure Bit9, Microsoft Forefront, or Symantec isn’t blocking the PPKG file.

0x8004005 – Access Denied

This occurs when UAC is disabled, or when someone clicks ‘No’ at the MDM enrollment prompt

0x8600023 – Already Imported this Package

This PPKG has been attempted before and failed.

Remove the PPKG file by navigating to PC Settings \ Accounts \ Access Work and School \ Add Remove a provisioning Package. Click the Provisioning Package and choose Remove.

This UI often freezes in Windows 2016 LTSB.  If it does, close the Settings page and attempt to remove again.

If the PPKG is missing upon returning to this screen, attempt to run the PPKG again.
If the package fails again, you can remove all PPKG references within the registry by deleting all children from these two keys (a PowerShell sketch follows below):

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Enrollments
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\EnterpriseResourceManager
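
Here is a rough PowerShell sketch of that cleanup, run from an elevated prompt (export or back up both keys first, because this removes every enrollment record under them):

#Back these keys up first!  This deletes every child key under both locations.
'HKLM:\SOFTWARE\Microsoft\Enrollments',
'HKLM:\SOFTWARE\Microsoft\EnterpriseResourceManager' |
    ForEach-Object { Get-ChildItem -Path $_ | Remove-Item -Recurse -Force }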

0x80180026 – Device is ExternallyManaged

This occurs when a device is locked in ProvisioningMode. Repair this by changing the following registry value:

HKLM\Software\Microsoft\Enrollments\ExternallyManaged – set the value to 0.
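
In PowerShell, that change looks something like this (assuming ExternallyManaged is a DWORD value directly under the Enrollments key, as the path above suggests):

#Set ExternallyManaged back to 0 so the device leaves ProvisioningMode
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Enrollments' -Name 'ExternallyManaged' -Value 0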

0x80192ee7 – Network Name Not resolved

This one, which resulted in ZERO Google Results before, simply means that either DNS isn’t available, or (more likely) that your machine does not have internet access.

Ensure your MDM target device has web access, then relaunch the package and it should enroll again.
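
A quick way to sanity-check connectivity before relaunching is something like the following; the hostname below is just a stand-in for your own MDM enrollment endpoint:

#Replace the hostname with your environment's enrollment server
Test-NetConnection -ComputerName 'mdm.yourcompany.com' -Port 443 |
    Select-Object ComputerName, TcpTestSucceeded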

Will Windows attempt to re-enroll?

If initial provisioning fails, the Provisioning Image will retry three times in a row.  If these attempts fail, a scheduled task will be created to retry four additional times at a decaying interval:

15 Minutes -> 1 hour -> 4 hours -> Next System Start

If this still fails, the machine will persistently attempt to re-enroll at each login, when idle.

I’ll update this document as I run into additional errors.  Find an error that I haven’t covered here?  Hit me up in the Comments or /r/FoxDeploy!


Windows 10 Must-have Customizations


I’ve performed a number of Windows 10 Deployment projects, and have compiled this handy list of must-have customizations that I deploy at build time using SCCM, or that I bake into the image when capturing it.

Hope it helps, and I’ll keep updating it as I find more good things to tweak.

Remove Quick Assist

Quick Assist is very useful, but also on the radar of fake-Microsoft Support scammers, so we disable this on our image now.

Get-WindowsPackage -Online | Where PackageName -like *QuickAssist* | Remove-WindowsPackage -Online -NoRestart

Remove Contact Support Link

Because we were unable to customize this one to provide our own internal IT information, we disabled this one as well.

Get-WindowsPackage -Online | Where PackageName -like *Support* | Remove-WindowsPackage -Online -NoRestart

Disable SMB 1

With the Petya and other similar scares, we also decided to just turn SMB 1 off.  Surprisingly, almost nothing that we cared about broke.

Set-SmbServerConfiguration -EnableSMB1Protocol $false -force
Disable-WindowsOptionalFeature -Online -FeatureName smb1protocol -NoRestart
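
If you want to confirm the change stuck, a quick check from an elevated prompt looks like this:

#Both of these should report SMB 1 as disabled after the commands above
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol
Get-WindowsOptionalFeature -Online -FeatureName smb1protocol | Select-Object FeatureName, State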

Disable People App

Users in testing became VERY confused when their Outlook contacts did not appear in the People app, so we got rid of it too.

Get-AppxPackage -AllUsers | Where-Object {$_.PackageFullName -like "*people*"} | Remove-AppxPackage 

Disable Music player

We deploy our own music app and were mistrusting of the music app bundled with Windows 10, so we got rid of this one too.


Get-AppxPackage -AllUsers | Where-Object {$_.PackageFullName -like "*zune*"} | Remove-AppxPackage

 

Disable Xbox App

Pretty silly that apps like this even get installed in the PRO version of Windows 10.  Maybe we need a non-shenanigan version of Win 10 ready for business…but…but I’ll finish this SCCM issue after a quick romp through Skellige.

Get-AppxPackage -AllUsers  |  Where-Object {$_.PackageFullName -like "*xboxapp*"} | Remove-AppxPackage 

 Disable Windows Phone, Messaging

We honestly aren’t sure who will want this or for what purpose this will fit into an organization.  Deleted.  Same goes with Messaging.

Get-AppxPackage -AllUsers  | Where-Object {$_.PackageFullName -like "*windowspho*"} | Remove-AppxPackage
Get-AppxPackage -AllUsers | Where-Object {$_.PackageFullName -like "*messaging*"} | Remove-AppxPackage 

 Disable Skype, Onenote Windows 10 App

Sure, let’s have a new machine deploy with FOUR different entries for Skype. No way will users be confused by this.  Oh yeah, and two OneNotes.  Great move.

Get-AppxPackage -AllUsers  | Where-Object {$_.PackageFullName -like "*skypeap*"} | Remove-AppxPackage
Get-AppxPackage -AllUsers | Where-Object {$_.PackageFullName -like "*onenote*"} | Remove-AppxPackage 

 Disable ‘Get Office’

The ‘Get Office’ app exists mostly to nag users into buying Office, which we already deploy ourselves, so we got rid of this one too.

Get-AppxPackage -AllUsers | Where-Object {$_.PackageFullName -like "*officehub*"} | Remove-AppxPackage 

 Disable a bunch of other stuff

At this point I kind of got bored with making screen shots of each of these.  I also blocked a number of other silly things, so if you got bored too, here is the full script.

#this runs within the imaging process and removes all of these apps from the local user (SCCM / local system) and future users
#if it is desired to retain an app in imaging, just place a # comment character at the start of a line

#region remove current user
$packages = Get-AppxPackage -AllUsers

#mail and calendar
$packages | Where-Object {$_.PackageFullName -like "*windowscommun*"}     | Remove-AppxPackage

#social media
$packages | Where-Object {$_.PackageFullName -like "*people*"}            | Remove-AppxPackage

#microsoft promotions, product discounts, etc
$packages | Where-Object {$_.PackageFullName -like "*surfacehu*"}         | Remove-AppxPackage

#renamed to Groove Music, iTunes like music player
$packages | Where-Object {$_.PackageFullName -like "*zune*"}              | Remove-AppxPackage

#gaming themed application
$packages | Where-Object {$_.PackageFullName -like "*xboxapp*"}           | Remove-AppxPackage

# Windows Phone companion app
$packages | Where-Object {$_.PackageFullName -like "*windowspho*"}        | Remove-AppxPackage

# Skype UWP app
$packages | Where-Object {$_.PackageFullName -like "*skypeap*"}           | Remove-AppxPackage

# Messaging app
$packages | Where-Object {$_.PackageFullName -like "*messaging*"}         | Remove-AppxPackage

# free/office 365 version of oneNote, can confuse users
$packages | Where-Object {$_.PackageFullName -like "*onenote*"}           | Remove-AppxPackage

# tool to create interesting presentations
$packages | Where-Object {$_.PackageFullName -like "*sway*"}              | Remove-AppxPackage

# Ad driven game
$packages | Where-Object {$_.PackageFullName -like "*solitaire*"}         | Remove-AppxPackage

$packages | Where-Object {$_.PackageFullName -like "*commsphone*"}        | Remove-AppxPackage
$packages | Where-Object {$_.PackageFullName -like "*3DBuild*"}           | Remove-AppxPackage
$packages | Where-Object {$_.PackageFullName -like "*getstarted*"}        | Remove-AppxPackage
$packages | Where-Object {$_.PackageFullName -like "*officehub*"}         | Remove-AppxPackage
$packages | Where-Object {$_.PackageFullName -like "*feedbackhub*"}       | Remove-AppxPackage

# Connects to your mobile phone for notification mirroring, cortana services
$packages | Where-Object {$_.PackageFullName -like "*oneconnect*"}        | Remove-AppxPackage
#endregion

#region remove provisioning packages (Removes for future users)
$appProvisionPackage = Get-AppxProvisionedPackage -Online

$appProvisionPackage | Where-Object {$_.DisplayName -like "*windowscommun*"} | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*people*"}        | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*surfacehu*"}     | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*zune*"}          | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*xboxapp*"}       | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*windowspho*"}    | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*skypeap*"}       | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*messaging*"}     | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*onenote*"}       | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*sway*"}          | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*solitaire*"}     | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*commsphone*"}    | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*3DBuild*"}       | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*getstarted*"}    | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*officehub*"}     | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*feedbackhub*"}   | Remove-AppxProvisionedPackage -Online
$appProvisionPackage | Where-Object {$_.DisplayName -like "*oneconnect*"}    | Remove-AppxProvisionedPackage -Online
#endregion

<#restoration howto
To roll back the Provisioning Package removal, image a machine with an ISO and then copy the source files from
the C:\Program Files\WindowsApps directory.  There should be three folders per Windows 10 app.  These need to
be distributed w/ SCCM to the appropriate place, and then run
    copy-item .\* c:\Appx
    Add-AppxProvisionedPackage -Online -FolderPath c:\Appx

    $manifestpath = "c:\appx\*Appxmanifest.xml"
    Add-AppxPackage -Register $manifestpath -DisableDevelopmentMode
#>

#removes the Windows Fax feature but requires a reboot, returning a 3010 errorlevel.  Ignore this error
cmd /c dism /online /disable-feature /featurename:FaxServicesClientPackage /remove /NoRestart

Do you have any recommendations?

Did I miss any?  If so, comment here or on /R/FoxDeploy and I’ll add it!


QuickStart PowerShell on Red Hat



Configuring PowerShell on RHEL 7

Hey y’all. There are a lot of guides out there to installing PowerShell on Linux, but I found that they expected a BIT more Linux experience than I had.

In this post, I’ll walk you through installing PowerShell on a RHEL 7 machine, assuming you are running a RHEL 7.4 VM on Hyper-V. There are a couple stumbling blocks you might run into, and I know, because I ran into ALL of them.

Real footage of my attempts

Downloading RHEL

Much like Microsoft’s approach to ISO access, Red Hat greedily hoards their installer DVDs like a classic fantasy dragon.

You’ll need to register here to download it.

RHEL Download Page

Once you have an account, choose to Continue to Red Hat Enterprise Linux Server Download.

You’ll download this one here, the 7.4 binary DVD.


Installing RHEL in Hyper-V

Once you have the image, follow the standard process to create a Gen 2 Hyper-V VM, disabling Dynamic Memory but otherwise making everything the same as you normally do.

Why disable Dynamic Memory?

Good question, as we typically just leave this on for all Windows systems!

Dynamic Memory, AKA memory ballooning, allows an enlightened VM guest to release unneeded memory, allowing for RAM oversubscription and increased VM density.

Depending on the amount of RAM you have on your system, VMs may have PLENTY of free RAM and not feel ‘pressure’ to release memory, and in my new Altaro-Sponsored Ryzen 7 build with 64 GB of RAM, my VMs have plenty of resources.

However, I have witnessed many installs of Ubuntu and CentOS fail to complete, and in all cases, this was due to Dynamic Memory. So, don’t enable Dynamic Memory until at least the install has completed.

NoDynamicMemory
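
If you’d rather build the VM from PowerShell than from the wizard, a rough sketch looks like this; the VM name, paths and sizes are just placeholders for my lab:

#Create a Gen 2 VM with static memory; adjust the name, paths and sizes for your lab
New-VM -Name 'RHEL74' -Generation 2 -MemoryStartupBytes 4GB -NewVHDPath 'C:\VMs\RHEL74.vhdx' -NewVHDSizeBytes 60GB
Set-VMMemory -VMName 'RHEL74' -DynamicMemoryEnabled $false
Add-VMDvdDrive -VMName 'RHEL74' -Path 'C:\ISOs\rhel-server-7.4-x86_64-dvd.iso'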

The next hurdle you’ll encounter is a failure to mount the ISO, as seen here.

CantBoot

The image’s hash and certificate are not allowed (DB).

This is due to the Secure Boot feature of Hyper-V. Secure Boot protects your system from a number of attacks by only allowing approved boot images to load. It seems that Red Hat and Ubuntu boot images still are not included in this list.

You’ll need to disable Secure Boot in order to load the image. Right-click the VM, choose Settings \ Security \ Uncheck ‘Enable Secure boot’

NOSecureBoot
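
The same setting can be flipped from PowerShell, assuming the VM name from the sketch above:

#Turn off Secure Boot so the RHEL ISO will boot on the Gen 2 VM
Set-VMFirmware -VMName 'RHEL74' -EnableSecureBoot Off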

With these obstacles cleared, we can proceed through the install.

Installing PowerShell

The next step, downloading the shell script to install PowerShell for us!

Because I couldn’t copy-paste into my VM, I made a shell script to install PowerShell, using the script Microsoft provides here.

I stored it in a Gist, and you can download and execute it in one step by running this.

bash <(curl -L https://bit.ly/RhelPS)

The -L switch for curl allows it to traverse a redirector service like Bit.Ly, which I used to download the shell file in Gist, because Gist URLs are TOO damned long!

Download And Execute

And that’s it. Now you’ve got PowerShell installed on Red Hat and you’re ready to go!

PSonRhel

References

How to traverse short-link

How to download and execute

Image credit  Benjamin Hung


Use PowerShell to take automated screencaps


I saw a post on Reddit a few days ago, in which a poster took regular screenshots of weather radar and used that to make a gif tracking the spread of Hurricane Irma.  I thought it was neat, and then read a comment asking how this was done.

How did you do this? Did you somehow automate the saves? Surely you didn’t stay up all night?

– /u/Sevargmas (Comment Link)

It brought to mind the time I used PowerShell four years ago to find the optimal route to work.

Solving my lifelong issues with being on-time

You ever notice how if you leave at 6:45, you’ll get to work twenty minutes early, but if you leave at 6:55, more often than not you’ll be late?  Me too, and I hated missing out on sleep!  I had a conversation with my boss and was suddenly very motivated to begin arriving on time.

I knew if I could just launch Google Maps and see the traffic, I could time it to see the best time to leave for work.  But if I got on the PC in the morning, I’d end up posting cat gifs and be late for work.

Of course, Google Maps now provides a built-in option to allow you to set your Arrive By time, which removes the need for a tool like this, but at the time, this script was the bee’s knees and helped me find the ideal time to go to work.  It saved my literal bacon.

There are many interesting uses for such a tool, like tracking the progress of a poll, tracking satellite or other imagery, or to see how a page changes over time, in lieu of or building on the approach we covered previously in Extracting and monitoring for changes on websites using PowerShell, when we learned how to scrape a web page.

How this works

First, copy the code over and save it as a .PS1 file.  Next, edit the first few lines

$ie         = New-Object -ComObject InternetExplorer.Application
$shell      = New-object -comObject Shell.Application
$url        = "http://goo.gl/1bFh5W"
$sleepInt   = 5
$count      = 20
$outFolder  = 'C:\temp'

Provide the following values:

$url      = the page you want to load
$sleepInt = how many seconds you want to pause
$count    = how many times you'd like to run
$outFolder= which directory to save the files

From this point, the tool is fully automated. We leverage the awesome Get-Screenshot function created by Joe Glessner of http://joeit.wordpress.com/.  Once we load the function, we simply use the $shell COM instance we created earlier to minimize all apps, then display Internet Explorer using the $ie ComObject.  We navigate to the page, wait until it’s finished loading, and then take a screenshot.

Then we un-minimize all apps and we’re set.  Simple, and it works!

Hope you enjoy it!

$ie         = New-Object -ComObject InternetExplorer.Application
$shell      = New-object -comObject Shell.Application
$url        = "http://goo.gl/1bFh5W"
$sleepInt   = 45
$count      = 20
$outFolder  = 'C:\temp\WhenToGoToWork'

#region Get-Screenshot Function
   ##--------------------------------------------------------------------------
    ##  FUNCTION.......:  Get-Screenshot
    ##  PURPOSE........:  Takes a screenshot and saves it to a file.
    ##  REQUIREMENTS...:  PowerShell 2.0
    ##  NOTES..........:
    ##--------------------------------------------------------------------------
    Function Get-Screenshot {
        <#
        .SYNOPSIS
         Takes a screenshot and writes it to a file.
        .DESCRIPTION
         The Get-Screenshot Function uses the System.Drawing .NET assembly to
         take a screenshot, and then writes it to a file.
        .PARAMETER <Path>
         The path where the file will be stored. If a trailing backslash is used
         the operation will fail. See the examples for syntax.
        .PARAMETER <png>
         This optional switch will save the resulting screenshot as a PNG file.
         This is the default setting.
        .PARAMETER <jpeg>
         This optional switch will save the resulting screenshot as a JPEG file.
        .PARAMETER <bmp>
         This optional switch will save the resulting screenshot as a BMP file.
        .PARAMETER <gif>
         This optional switch will save the resulting screenshot as a GIF file.
         session.
        .EXAMPLE
         C:\PS>Get-Screenshot c:\screenshots

         This example will create a PNG screenshot in the directory
         "C:\screenshots".

        .EXAMPLE
         C:\PS>Get-Screenshot c:\screenshot -jpeg

         This example will create a JPEG screenshot in the directory
         "C:\screenshots".

        .EXAMPLE
         C:\PS>Get-Screenshot c:\screenshot -verbose

         This example will create a PNG screenshot in the directory
         "C:\screenshots". This usage will also write verbose output to the
         comsole (inlucding the full filepath and name of the resulting file).

        .NOTES
         NAME......:  Get-Screenshot
         AUTHOR....:  Joe Glessner
         LAST EDIT.:  12MAY11
         CREATED...:  11APR11
        .LINK
         http://joeit.wordpress.com/
        #>
        [CmdletBinding()]
            Param (
                    [Parameter(Mandatory=$True,
                        Position=0,
                        ValueFromPipeline=$false,
                        ValueFromPipelineByPropertyName=$false)]
                    [String]$Path,
                    [Switch]$jpeg,
                    [Switch]$bmp,
                    [Switch]$gif
                )#End Param
        $asm0 = [System.Reflection.Assembly]::LoadWithPartialName(`
            "System.Drawing")
        Write-Verbose "Assembly loaded: $asm0"
        $asm1 = [System.Reflection.Assembly]::LoadWithPartialName(`
            "System.Windows.Forms")
        Write-Verbose "Assembly Loaded: $asm1"
        $screen = [System.Windows.Forms.Screen]::PrimaryScreen.Bounds
        $Bitmap = new-object System.Drawing.Bitmap $screen.width,$screen.height
        $Size = New-object System.Drawing.Size $screen.width,$screen.height
        $FromImage = [System.Drawing.Graphics]::FromImage($Bitmap)
        $FromImage.copyfromscreen(0,0,0,0, $Size,
            ([System.Drawing.CopyPixelOperation]::SourceCopy))
        $Timestamp = get-date -uformat "%Y_%m_%d_@_%H%M_%S"
        If ([IO.Directory]::Exists($Path)) {
            Write-Verbose "Directory $Path already exists."
        }#END: If ([IO.Directory]::Exists($Path))
        Else {
            [IO.Directory]::CreateDirectory($Path) | Out-Null
            Write-Verbose "Folder $Path does not exist, creating..."
        }#END: Else
        If ($jpeg) {
            $FileName = "\$($Timestamp)_screenshot.jpeg"
            $Target = $Path + $FileName
            $Bitmap.Save("$Target",
                ([system.drawing.imaging.imageformat]::Jpeg));
        }#END: If ($jpeg)
        ElseIf ($bmp) {
            $FileName = "\$($Timestamp)_screenshot.bmp"
            $Target = $Path + $FileName
            $Bitmap.Save("$Target",
                ([system.drawing.imaging.imageformat]::Bmp));
        }#END: If ($bmp)
        ElseIf ($gif) {
            $FileName = "\$($Timestamp)_screenshot.gif"
            $Target = $Path + $FileName
            $Bitmap.Save("$Target",
                ([system.drawing.imaging.imageformat]::Gif));
        }
        Else {
            $FileName = "\$($Timestamp)_screenshot.png"
            $Target = $Path + $FileName
            $Bitmap.Save("$Target",
                ([system.drawing.imaging.imageformat]::Png));
        }#END: Else
        Write-Verbose "File saved to: $target"
    }#END: Function Get-Screenshot
#endregion

for ($i=0;$i -le $count;$i++){

    $ie.Navigate($url)
    $shell.MinimizeAll()
    $ie.Visible = $true
    start-sleep 15
    Get-Screenshot $outFolder -Verbose

    "Screenshot Saved, sleeping for $sleepInt seconds"
    start-sleep $sleepInt

    $shell.UndoMinimizeALL()
    }

When this runs, you’ll have a moment or two to rearrange the screen before the first screen capture is taken. While it’s executing, you should leave the computer unattended, as we’re simply automating taking a screencap. If you’re using the computer, it will attempt to minimize your windows, display IE, SNAP, then restore your programs. If you have other windows up, they could be mistakenly included in the screenshot.

Afterwards, you will find the files in whichever path you specified for $outFolder.

Pro-tip, you can exit this at any point by hitting CONTROL+C.

Photo credit: Nirzar Pangarkar


At Microsoft Ignite? Come find me!



I’m speaking at Microsoft Ignite again this year!  Come find me at the PowerShell Community Meetup and the Intro to PowerShell sessions to talk code, scripting, OSD, automation or beer!

Here are the links to my sessions

Tuesday, 3:15 PM, Microsoft Ignite PowerShell Meetup – BRK 1061 – OCCC West Hall, 204 AB.

Thursday, 1:40 PM, Microsoft Ignite Intro To PowerShell – OCCC South Hall, Expo, Theater 1

I’ve got a backpack full of PowerShell and DSC Girl stickers, and look forward to meeting you!


Ignite, decompressed



Ignite Orlando WAS AWESOME! In this post, I’ll give you some of my fun memories and commentary about the event, and then end with a bunch of the best videos from Microsoft Ignite 2017.

My sessions

We had a HUGE turn out for the PowerShell Community Event, in fact, it was so big that we had an overflow room with 200 people in it!


There were a lot of folks waiting outside who weren’t able to attend, and at this point Adam Bertram and Simon Whalin were the REAL MVPs.  They left the room and led an impromptu session to get the discussion going in the overflow room.

Not pictured, Adam Bertram standing on a table, shouting into the crowd!  Oh, and did I mention that Jeffrey Snover came on stage as well?  Talk about a dream come true!

Jeffrey is the four greenish pixels near the screen

Fortunately I was prepared and stammered through a terrible soft-ball question about Azure Cloud Shell.  Jeffrey said ‘that’s your question, Stephen?’

My final session was at the end of the day on Thursday, which effectively maximized my stress for the entirety of Ignite.  Fortunately I had plenty of time to practice and work on my transitions and I felt that I really gave it my all.

Next year, I’d like to lead a one hour session, or one focused on real world usage of PowerShell as a glue language.  We’ll see if they get approved!

Don and Jeffrey filled a colossal 5300 person auditorium to capacity, in their PowerShell Unplugged session.

Even these two unflappable speakers looked a tiny bit overwhelmed (just for a moment) by the colossal turnout!

Brad Anderson continued his ‘Lunchbreak with Brad’ series, but this time opened it up to everyone at Ignite!  I joined and was actually featured in the video (around the 9 min mark).

Getting to meet Brad in person was great, as I’ve seen him deliver presentations so many times in person and virtually!  I would have liked to have had a full lunch break with him!

Other photos


Spinners were…everywhere.

My Top Ten Must Watch Sessions

I love the trend of recording all of the bigger sessions.  Here are some of my favorite (which happened to be recorded).

The keynote was…interesting, but ended with a deep dive on Quantum computing, which was a bit odd.  I could have done with more explanation on what MSFT365 is…

Fortunately, Brad Anderson explained that here in this session.  Microsoft 365 is essentially a new tier of Office 365 license which now includes Intune, Advanced Threat Protection, and all the O365 goodness we already had.  I believe it includes pricing for Windows Licensing as well.

Azure Automation session with Joey Aiello, Hemant and Aemon

DevOps in any language, with Damian Brady and Donovan Brown.  A dynamic and exciting session talking through VSTS’s DevOps capabilities.

Expert level Windows 10 Deployment.  Johan and Mikael killed this talk, as expected!

Ask the experts, Windows 10 Deployment.  This was one of those ‘deeper word’ sessions.  A super, real-world deep dive into how the hell we’re supposed to OSD upgrade all of our machines twice a year.

Chris Jackson – Deep dive on Win 10 Fall Update Security Internals

Your attacker thinks like my attacker, an awesome security minded session

Red Teaming Windows

I love these business & personal growth style sessions.  Jeffrey had a great one here which covered staying relevant and providing value as keys to always remaining hirable.

Moving 65,000 Microsofties to DevOps, definitely going to be helpful for me in my new role here!

Securing your data at rest, which had some good info I need.

Coding at 88MPH, a session full of tricks and tips for working in Visual Studio.  The keyboard shortcuts alone were worth the price of entry.

Conference Feedback

It’s important to categorize and honestly think through takeaways for a conference like this one.  Here are my thoughts.

Venue

I really liked the venue, but my favorite aspect of it is how close it is to great after-hours entertainment and hotels.  A huge jump from Atlanta (ironically, my home city).  Speaking of hotels, I was placed in the wonderful Orlando Renaissance at SeaWorld, a beautiful property with stunning rooms and a lovely pool (that my children made use of!)

The architecture was cool and inspiring and I liked the huge outdoor bridges connecting the venues while keeping us up and out of traffic.  I also am relatively young and in shape with brand new nice running shoes.  Many people might not have liked the tremendous amount of walking involved in this venue, so I would understand the negative feedback I’m hearing there.  Additionally, the walk on the bridge could be sweltering!

I didn’t mind though, I was freezing my butt off in every session, so I welcomed the sun’s warm embrace.

Food and Snacks

I heard a lot of complaining about this, but I eat a LOT of vegetarian food anyway, so I’m accustomed to eating cardboard.  Actually, I thought the veggie options were very good.  We could have used more fresh fruit and veggies though.

The afternoon snacks were pretty good, with a nice variety.  The expo floor could have used more water stations.  I found myself leaving the expo for water, which was odd to have to do.

I loved the pop-up coffee stations around the show floor.  I developed a two-a-day nitro iced coffee habit.

Session Quality and Topics

This part is challenging.  It was, frankly, shocking that at a conference in which we celebrated the 25th anniversary of SCCM, there were only two ConfigMgr sessions!  One was ‘What’s new in SCCM‘, the other was ‘System Center, what’s coming‘ (in which we learned that Orchestrator and SMA are effectively dead 😦 )

Sure, it’s not a new product anymore, but the only sessions to truly feature ConfigMgr were ones showcasing add-ons to the product, in the case of Adaptiva and 1E.  I really appreciate what these companies have done for the community, but a drought in content like this has me a bit worried.

This leads me to my main concern.  If you’re a seasoned expert, you might find two or three ‘deeper word’ sessions worthy of attending, like Deploying Windows 10 in the real world.  It feels like the session catalog was heavy on business-decision-maker content and 200/300-level material.

If you’re a beginner, good luck.  If you’re an expert, I dunno, talk to the dev team in the booths.

Do you think I’m approaching this from the wrong angle?  Should a conference like this have a beginners track for lucky newbies to get hands on learning?  Is it meant to be mostly messaging from the sponsors?  Is it really all about swag?



Glorious PowerShell Dashboards


I’ve covered the topic of dashboards on this blog a few times before, from layering CSS on PowerShell’s built-in HTML capabilities, to hacking together HTML 5 templates with PowerShell, as the hunt continues for the next great thing in PowerShell reporting. Guys, the hunt is OVER!  Time to ascend to the next level in reporting…

It’s the motherlode!  Adam Driscoll’s AWESOME PowerShell Universal Dashboard, a gorgeous and dead-simple dashboard tool which makes it super easy to retrieve values from your environment and spin them into adaptive, animated dashboards full of sexy transitions and colors.   Click here to see it in action. Or just look at these sexy animations and tasteful colors.  Deploy this and then show your boss.  It’s guaranteed to impress, blow his pants off, and get you a huge raise or maybe a $5 Starbucks gift card.


In this post, we’ll learn what the PowerShell Universal Dashboard is, how to quickly get setup, and I’ll share my own TOTALLY PIMPED OUT CUSTOM Dashboard with you free, for you to modify to fit your environment, and get that free Pumpkin Spice, son!

What is it?

The PowerShell Universal Dashboard is an absolutely gorgeous module created by the great Adam Driscoll.  It seeks to make it dead-simple to create useful, interactive dashboards anywhere you can run PowerShell.  It’s built using .net Core Kestrel and ChartJS, and you can run it locally for folks to connect to see your dashboard, or deploy right to IIS or even Azure!

If you didn’t earlier, you really should click here to see it in action!!!

Getting Started

To begin, simply launch PowerShell and run the following command.

Install-Module UniversalDashboard

Next, copy the code for Adam’s sample Dashboard from here and run it.  You should see this screen appear

Now, PowerShell Pro Tools IS a paid piece of software.  But the trial license is super generous, so simply put in your e-mail and you’ll receive a license automatically in a few minutes.

Warning – preachy part – And, between you and me, now that we’re all adults, we should put our money where our mouths are and actually support the software we use and love.  In my mind, $20 is an absolute steal for this incredible application.

Once you receive your key, paste it in and you’re ready to go

 

A sign of a happily licensed PowerShell Pro Tools.

Let’s start customizing this badboy! 

Customizing the Dashboard

For my project, I wanted to replace the somewhat aging (“somewhat”) front-end I put on my backup Dropbox script, covered here in this post: Automatically move old photos out of DropBox with PowerShell to free up space.  At the time, I thought it was the slickest thing since really oily sliced bread.

I still think you look beautiful

So, to kick things off, I copied and pasted the code Adam shares on the PowerShell Universal Dashboard homepage, to recreate that dashboard.  Once it’s pasted in, hit F5 and you should see the following, running locally on your machine:

First up, to delete the placeholder ‘About Universal Dashboard’, let’s delete the New-UDColumn from lines 15~17.

Start-UDDashboard -port $i -Content {
    New-UDDashboard -NavbarLinks $NavBarLinks -Title "PowerShell Pro Tools Universal Dashboard" -NavBarColor '#FF1c1c1c' -NavBarFontColor "#FF55b3ff" -BackgroundColor "#FF333333" -FontColor "#FFFFFFF" -Content {
        New-UDRow {
            New-UDColumn -Size 3 {
                New-UDHtml -Markup "
<div class='card' style='background: rgba(37, 37, 37, 1); color: rgba(255, 255, 255, 1)'>
<div class='card-content'>
<span class='card-title'>About Universal Dashboard</span>

Universal Dashboard is a cross-platform PowerShell module used to design beautiful dashboards from any available dataset. Visit GitHub to see some example dashboards.</div>
<div class='card-action'><a href='https://www.github.com/adamdriscoll/poshprotools'>GitHub</a></div>
</div>
"
}
                New-UDColumn -Size 3 {
                    New-UDMonitor -Title "Users per second" -Type Line -DataPointHistory 20 -RefreshInterval 15 -ChartBackgroundColor '#5955FF90' -ChartBorderColor '#FF55FF90' @Colors -Endpoint {
Get-Random -Minimum 0 -Maximum 100 | Out-UDMonitorData
}

With that removed, the cell vanishes.

I took a look at the Components page on the PowerShell Universal Dashboard, and really liked the way the Counter design looked, so I decided to copy the example for Total Bytes Downloaded and use that in-place of the old introduction.  I added these lines:


 New-UDColumn -Size 4 {
     New-UDCounter -Title "Total Bytes Saved" -AutoRefresh -RefreshInterval 3 -Format "0.00b" -Icon cloud_download @Colors -Endpoint {
             get-content c:\temp\picSpace.txt
         }
     }

     New-UDColumn -Size 3 {

I also created a new text file at C:\temp\picSpace.txt and added the value 1234 to it.  With those changes completed, I hit F5.

Ohh this is a VERY nice start
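By the way, if you’d rather script the seed file than hand-edit it, a one-liner sketch like this (using the same path and starting value from above) does the job:

# Seed the tracking file that the dashboard counter reads from
Set-Content -Path C:\temp\picSpace.txt -Value 1234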

Now, to actually populate this value when my code runs.  Editing Move-FilesOlderThan.ps1 (note: I’m very sorry about this name, I wrote the script when my daughter was not sleeping through the night yet…no clue why I chose that name), the function of that code is to accept a cut-off date, then search for files older than that date in a folder.  If it finds files that are too many days old, they get moved elsewhere. Here’s the relevant snippet:


$MoveFilesOlderThanAge = "-18"
####End user params

$cutoverDate = ((get-date).AddDays($MoveFilesOlderThanAge))
write-host "Moving files older than $cutoverDate, of which there are `n`t`t`t`t" -nonewline
$backupFiles = new-object System.Collections.ArrayList

$filesToMove = Get-ChildItem $cameraFolder | Where-Object LastWriteTime -le $cutoverDate
$itemCount = $filesToMove | Measure-Object | select -ExpandProperty Count
$FileSize = $filesToMove | Measure-Object -Sum Length

In order to sum the file space saved every day, I only had to add these lines.  I also decided to add a tracking log of how many files are moved over time.  I decided to simply use a text file to track this.

[int](gc c:\temp\picSpace.txt) + [int]$FileSize.Sum | Set-content c:\temp\picSpace.txt
[int](gc c:\temp\totalmoved.txt) + [int]$itemCount | set-content c:\temp\totalmoved.txt

Now, after running the script a few times to move files, the card actually keeps track of how many files are moved!

Further Customizations

Now, to go really crazy customizing it!

Hook up the File Counter

I decided to also add a counter for how many files have been moved.  This was super easy, and included in the code up above.  I simply modified the Move-FilesOlderThan.ps1 script as depicted up above to pull the amount of files migrated from a file, and add today’s number of files to it.  Easy peasy (though at first I did a string concatenation, and instead of seeing the number 14 after two days of moving 7 files, I saw 77.  Whoops!)
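If you’re curious what that concatenation gotcha looks like, here’s a quick sketch; the last line mirrors the tracking logic shown earlier, and the [int] casts are the important part:

# String concatenation vs. integer addition - the source of the '77' surprise
'7' + '7'             # returns the string '77'
[int]'7' + [int]'7'   # returns 14

# Casting both sides to [int] before adding keeps the running total honest
[int](Get-Content C:\temp\totalmoved.txt) + [int]$itemCount | Set-Content C:\temp\totalmoved.txt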

To hook up the counter, I added this code right after the Byte Counter card.

New-UDColumn -Size 4 {
New-UDCounter -Title "Total Files Moved" -Icon file @colors -Endpoint {
get-content C:\temp\totalmoved.txt
}
}

 

Modify the table to display my values

Next up, I want to reuse the table we start with in the corner.  I wanted to tweak it to show some of the info about the files which were just moved.  This actually wasn’t too hard either.

Going back to Move-FilesOlderThan.ps1 I added one line to output a .csv file of the files moved that day, excerpted below:

$backupFiles |
    select BaseName,Extension,@{Name='FileSize';Expression={"$([math]::Round($_.Length / 1MB)) MB"}},Length,Directory |
        export-csv -NoTypeInformation "G:\Backups\FileList__$((Get-Date -UFormat "%Y-%m-%d"))_Log.csv"

This results in a super simple CSV file that looks like this

Day,Files,Jpg,MP4
0,15,13,2
1,77,70,7
2,23,20,3
3,13,10,3
4,8,7,1

Next, to hook it up to the dashboard itself.  Adam gave us a really nice example of how to add a table, so I just modified that to match my file types.

New-UDGrid -Title "$((import-csv C:\temp\movelog.csv)[-1].Files) Files Moved Today" @Colors -Headers @("BaseName", "Directory", "Extension", "FileSize") -Properties @("BaseName", "Directory", "Extension", "FileSize") -AutoRefresh -RefreshInterval 20 -Endpoint {
dir g:\backups\file*.csv | sort LastWriteTime -Descending | select -First 1 -ExpandProperty FullName | import-csv | Out-UDGridData
}

And a quick F5 later…

 

Add a graph

The final thing to really make this pop, I want to add a beautiful line graph like these that Adam provides on the Components site.

This was daunting at first, but the flow isn’t too bad in hindsight.

  • Create an array of one or more chart datasets using New-UDChartDataset; the -DataProperty parameter defines which property you want to chart, while the -Label parameter lets you define the name of that property in the legend
  • Pass your input data as the -Data property to the New-UDChart cmdlet, and define a -Title for the chart as well as the chart type, of either Area, Line, or Pie.

Here’s the code sample of what my finished chart looked like:

New-UDChart -Title "Files moved by Day" -Type Line -AutoRefresh -RefreshInterval 7 @Colors -Endpoint {
 import-csv C:\temp\movelog.csv | Out-UDChartData -LabelProperty "Day" -DataProperty "Files" -Dataset @(
 New-UDChartDataset -DataProperty "Jpg" -Label "Photos" -BackgroundColor "rgb(134,342,122)"
 New-UDChartDataset -DataProperty "MP4" -Label "Movies" -BackgroundColor "rgb(234,33,43)"
)
}

And the result:

Satisfy my Ego and add branding

Now, the most important feature, branding this bad boy.

Up on line 14, change the -Title property to match what you’d like to name your dashboard.

New-UDDashboard -NavbarLinks $NavBarLinks -Title "FoxDeploy Space Management Dashboard - Photos"

You can also add an image file with a single card.  In my experience, this image needs to already live on the web somewhere.  You could spin up a quick Node http-server to serve up the files, leverage another online host, or use a standalone server like Abyss.  I always have an install of both Abyss and Node on my machines, so I tossed the file up and linked it.

 New-UDImage -Url http://localhost/Foxdeploy_DEPLOY_large.png

Finally, to clean up all of the extra cards I didn’t use, and fix some layout issues.

Finished Product

See, wasn’t that easy?

finished

And it only took me ~100 tabs to finish it.

Actual screenshot of my Chrome tab situation after an hour of tweaking

If you want to use my example and modify it, feel free to do so (and please share if you create something cool!)  Here are some ideas:

  • Server Health Dashboard
  • SCCM Dashboard
  • SCOM Dashboard
  • Active Directory monitoring dashboard

Source Files

The script that actually creates a dashboard and opens it, Create-BlogDashboard.ps1, followed by the updated Dropbox backup script, then a sample input file.

Download here

Afterword

I realized that for all my preaching about paying for software, this whole thing was spawned from my desire to cheaply get away with using Dropbox without paying for it.  Ok….I’ve cracked.  I’ve actually now paid for Dropbox as well!  Time for me to practice what I preach too!

drop

Backing up your Testlab with Altaro VM Backup


To be a good engineer, you need a Testlab. End of sentence.

You need it so you can pursue flights of fancy, like making some web services, trying out that new language, and other endeavors perhaps not specifically related to your day-to-day work.

It HAS to be your own too!  You can’t just use the one at your work.  If things go awry between you and your company, you definitely don’t want to lose your livelihood AND your hard-earned testlab in the same stroke!  This is also why you don’t want to have your life insurance purchased through your work either (or if you do, make sure you don’t get fired and die on the same day).

In consulting, I would get assigned to a project and have a month or so to come up to speed on new technologies. I found that when I had a testlab, it was so much quicker to get working, just make a new VM, domain join it and have SQL installed and ready for a new SCCM, Scorch, Air-Watch, whatever. In fact, the periods when I did the best engineering work over my career closely line up to the times that I had a working testlab available to model my customer’s environments and make mistakes on my own time, not theirs.

If you have read this and are convinced that you too need a testlab, and don’t yet have one, you can click here to read my guide here on setting up a Domain Controller with one-click!

The one-click domain controller UI in action

And what should we do with things that are important? We protect them. In this post I’ll walk you through some of the options available to protect and backup your testlab.

Disclaimer :  This blog has been supported by Altaro for a while now, but I’ve never written about their product before. I received a free NFR license to use for the purposes of this post (and MVPs can receive one as well!)

 In this post, I feature their Hyper-V backup product, but you should know that I will never recommend or feature any product that I haven’t used myself.   All words (and errors!) are my own.

Your DR plan should not involve the word ‘Hope’

I built my first VM Lab years and years ago.  It was an Intel i7-2600k, with 16GB of RAM and LEDs out the whazoo.  I was so proud of this little guy that I gave him the name BEHEMOTH, which was all caps because of how cool the name was (and I’d also just read House of Leaves which had some really interesting capitalization of words, not to mention bonkers type-setting.  The perfect /r/iAmVerySmart book for college kids)


I bought LED everything, because I needed to show that I was serious about performance.   I then bought the cheapest monitors I could find, because Who needs eyeballs, am I right?

VM Lab Failure 1

My VM backup approach was pretty nascent as well.  Once a month or so (or every six months…or never) I would launch the Hyper-V console and run an export of VMs onto another folder on the same disk.  Surely this could have no negative ramifications, right?  This led to my first total loss of an environment, when my $60 Hitachi Deskstar drive died, taking down my VMs AND my backups.

VM Lab Failure 2

I decided to bone up on how real-world people do it and quickly became prideful, even though I had no true skills or experience.  I was experimenting with Storage Spaces and thought they were the bee’s knees, so I took two cheap OCZ Trion-III SSDs and put them in a Storage Pool Parity set.   Then I wanted more space, so I re-partitioned 180GB of space from the same drive hosting my VMs and joined that to the Parity set as well.  Two SSDs, one chunk of spinning drive partitioned out of a larger disk.

So, for every write to the volume, Parity writes took place on each drive including the drastically slower spinning disk.  I got absolutely horrible performance (three minute boot times on a Win 8.1 image, 6 hour SCCM Site Install times) and eventually the strain of backups and prod use on the same volume and my moronic partitioning caused my second full loss of VMs and Backups.

I was flying down mount stupid without any brakes when I came up with that partitioning scheme

Not only did I lose the data on the Storage Space, but the spinning drive would never spin up again, I think I killed its spirit.  One of the SSDs quickly failed later (third loss of VMs, but at least I had backups.)

Getting Slightly Serious Now

Redeploying Active Directory over and over was killing me, and it was at this point that I dug into DSC, so that at least my rebuild wouldn’t take a full day (I could install AD and get DHCP, DNS and GPO configured as I wanted it in about two hours.  I had memorized the SCCM 2012 Install steps too, so it was a good learning experience).  This time of my life is when I wrote the One-click DSC Guide, linked above.

Here is where I decided to treat my testlab with some more respect.

Real World VM Backup

In the real world, most people use a backup product to backup their VMs.  I decided to catalog some of them:

  • System Center Virtual Machine Manager
  • Data Protection Manager
  • Azure Site Recovery
  • Veeam
  • Altaro VM Backup

Since I was a System Center guy, I decided to try Virtual Machine Manager and then Data Protection Manager.  Regret.  Both were WAYYY too hard and finicky for my needs.

Next up was Azure Site Recovery, which provides a method for you to backup your VMs from local Hyper-V or VMware directly into Azure OR to another physical site somewhere.  I deployed Azure Site Recovery to backup my VMs to Azure (at least my most important ones) and it was quite easy to use, but not everyone has a free MSDN license laying around to use.  Still consider this one if that sounds enticing.

Veeam Backup Suite is also really popular, and they have a number of free products too!   I gave their product a test run but never got very far (even though they had good support) and eventually on deploying a new computer, decided to broaden my horizons and try the other big name in the field.

New Test Lab + Altaro VM Backup = Perfection

I decided to take my test lab seriously too, no more half-gaming, half-VM lab.  I budgeted out the parts (listed here), including two NVMe SSD drives to be deployed in RAID 0 and got to work building it!

Installing Altaro VM Backup was very simple, but it does have some prerequisites, such as only running on Windows Server. If your testlab is also your gaming machine, consider moving it to Windows Server instead.  For years, my daily driver (and gaming PC) ran on Server 2012, and it’s only become better with Server 2016.

To get started, download Altaro VM Backup here.

Installation and Configuring a Host

Setup is an absolute breeze, next, next finish practically the whole way.

Altaro VM Backup should launch on its own and you simply need to choose to Connect to Local Instance.

Next, choose to Add Hyper-V / VMware Host.

Click Add Host and then provide the name of the Hyper-V host (and credentials if you need to)

Serious props to anyone who knows what my Testlab is named after

Configuring Backup Location

Next, click on ‘Backup Locations’ to decide where to stick the VM Backups.

In my case, I have a big spinning disk on my VM Testlab, so I’m using that as my Physical Drive location.  You can also backup over the network too, or configure backup to use both a local disk and a network location for optimal redundancy.

Now to pick the actual drive.

To pick a subfolder, click ‘Choose Folder’
One of the folders here is from another failed VM lab recovery. Care to guess which?

With this completed, click Finish and now you’ve configured this host for backups.  Now, to actually apply the backup location setting to all of the VMs.  This is really easy.  Just click the host, then drag over to the location, as demonstrated below.

Unfortunately, I haven’t yet found a method to apply this setting to all VMs on a host using the GUI, so expect to come back here at least once for each new VM.

Take your first backup and setting up a backup schedule

Taking a backup is a total breeze.  Click down to Virtual Machines and select a single VM (or all of them!) and then click ‘Take Backup’.  If you neglected to set a backup location, the program will remind you here.

You can also just watch the GIF to see it in action.

At this point, you’re safe!  Of course, you only have this one backup, so it’s time to set up a schedule to backup your VMs regularly.

Setting a Schedule

I have the memory of a mantis shrimp, I don’t want to remember to back things up.  Fortunately it only takes like two minutes to make a backup schedule.  First, click Schedule, then Add Backup Schedule.

Why can’t recurring calendars in Outlook be this easy!  Or Scheduled Tasks!  Seriously, it’s so much easier to define a schedule here that I think this should be the new normal.

I want to back these VMs up every Monday, Wednesday, Friday and Sunday, and I want it to happen at midnight.

Click Save to Save…caption of the year

Now, for one last GIF.  Just drag the VMs (or the whole host, or all of your hosts, if you have more than one) onto the schedule and you’re done.

Wrapping Up

That’s it!  You’re done.  Seriously, I did this one time like three months ago and it’s been fire and forget ever since then.  It is super nice to get reminder e-mails like this one too.

Having difficulties getting up and running?  Let me know!  I can help you out, or introduce you to some folks who can.

Making an Azure Function Reddit Bot


Around the time I was celebrating my 100th post, I made a big to-do about opening my own subreddit at /r/FoxDeploy. I had great intentions, I would help people in an easier to read format than here in the comments…but then, I just kind of, you know, forgot to check the sub for four months.

But no longer!  I decided to solve this problem with the only tool I know…code.

Azure Functions

A few months ago, I went to ‘The Red Shirt’ tour with Scott Guthrie in which he talked all about the new Azure hotness.  He covered Functions, an awesome headless, serverless Platform as a Service offering which can run a variety of languages including C#, F#, Node.js, Java, and, of course, Best Language, PowerShell.

I was so intrigued by this concept when I first learned of it at an AWS event years ago in Chicago, where they introduced Lambda. Lambda was cool, but it couldn’t run bestgirl language, PowerShell.

With this in mind, I decided to think of how to attack this problem.

Monitoring a sub for new posts

I did some googling and found that you can get a list of the newest posts in a sub by just appending .json to the subreddit URL, like so: https://www.reddit.com/r/FoxDeploy/new.json, which gets me a JSON response back with the newest posts.  You can also use top.json, controversial.json, etc.

Running $posts = Invoke-RestMethod https://www.reddit.com/r/FoxDeploy/new.json gave me the posts; next I needed a way to track whether I’d already processed a post or not.  That means a database of some kind.

The best DB is a CSV

At first, I planned to use Azure’s new Cosmos DB for this task, but I quickly got bogged down trying to learn my way through creating Graphs, SQL Tables, etc.  All of these sounded cool but pushed me farther away from my goal.  So I decided to roll the worlds simplest Database format, and just track this in a CSV.

Making my schema was simple, just open Notepad and type:

PostName,Date

Done, schema created in five seconds.  Now to write some logic to step through a post and see if it is in my ‘database’.

#load DB file
$Processed = Import-CSV .\processed.txt

#process posts
$posts = Invoke-RestMethod https://www.reddit.com/r/FoxDeploy/new.json

ForEach ($post in $posts.data.children){
    if ($Processed.PostName -notcontains $post.data.title){
    #We need to send a message
    Write-output "We haven't seen $($post.data.title) before...breaking loop"
    break
    }

}

I decided that I didn’t want to get bombarded with alerts so I added the break command to pop out of the loop when it first encountered a post which was not in the ‘database’.   Next, to simply dig back into the Reddit REST API and just send a Message. How hard can that be?
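One piece of hypothetical bookkeeping not shown here: once an alert is actually sent, the post needs to be appended to the ‘database’ so it only fires once.  Something along these lines would do it, using the column names from the schema above:

# Record the post we just alerted on so future runs skip it
[pscustomobject]@{PostName = $post.data.title; Date = (Get-Date)} |
    Export-Csv .\processed.txt -Append -NoTypeInformation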

Fun with the Reddit API

I dabbled with the Reddit API a few years back, in one of my first PowerShell modules.  It was so hard, so poorly documented and so difficult that it turned me off of APIs for months.  I’d always suffered from imposter syndrome, and I felt that That Day (that dark day in which I finally wasn’t smart enough to figure it out) had finally come for me.

Honestly, compared to other REST APIs, and especially the fully featured and well documented ones like Zenoss’s and ServiceNow’s, Reddit’s is terrible to learn.  Don’t give up!

In order for this script to work, it needs to access my credentials.  To do that, I have to delegate credentials using oAuth.  I first covered the topic in this blog post here, so read that if you have no clue what oAuth is.  If you don’t, no worries, you’ll be able to gather an idea from our next few steps:

  • Create an oAuth Application In Reddit
  • Grab my RedirectURI, ClientID, and ClientSecret
  • Plug these in to retrieve an AccessToken and RefreshToken
  • Send a message
  • Handle Refreshing an API token when the token Expires

Making an oAuth application

Getting access to Reddit’s API is easy.  Log on to your account, then click Preferences \ Apps.

Click ‘Apps’

Scroll down to Create Application and fill this form in.

Click “Create App’ to finish

The Redirect URI doesn’t need to go anywhere specifically (it exists because oAuth is normally used when a user grants, say, their DropBox access to their Office account; after they click ‘OK’ to delegate access, they need to be redirected somewhere), but you must provide one here and you must use the same value when you request a token in the next step.

Now, make note of and save each of these in PowerShell.  You’ll need these to get your token, then we’ll embed them in our script as well.

$ClientID = 'ClientIDIsHere12345'
$ClientSecret = 'ThisLongStringIsYourSecret'
$redirectURI = 'http://www.foxdeploy.com'

Getting an oAuth Token

Now that we’ve completed this step, download the PSReddit module and run Connect-RedditAccount to exchange these IDs for an AccessToken and a RefreshToken.  Let’s call the cmdlet and see what happens.

Connect-RedditAccount -ClientID $ClientID -ClientSecret $ClientSecret `
   -redirectURI $redirectURI

The cmdlet takes these values and then passes them along to Show-oAuthWindow (here’s the code), which pops up a browser window like so.

Running the command stands up a number $global: variables we can use to interact with the reddit API, including the all important AccessCode which we must provide for any API request.  Here’s the full list of REST endpoints, but we’re after the /compose endpoint.

Using our token to send a reddit private message

This part would not have been possible without the awesome help of @Mark Kraus, who helped me figure out the syntax.

We hit the endpoint of oauth.reddit.com/api/compose, which has a few restrictions.  First off, you have to provide headers to prove who you are.  Also, reddit insists that you identify yourself with your reddit user name with every API call as well, so you have to provide that info too.  Here’s how I handled that.

$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("User-Agent", 'AzureFunction-SubredditBot:0.0.2 (by /u/1RedOne)')
$headers.Add("Authorization", "bearer $AccessToken")

Next, here’s the body params you MUST pass along.

$body = @{
api_type = 'json'
to = '1RedOne'
subject = 'Message sent via PowerShell'
text= 'Hello World'
}

Finally, pass all of this along using Invoke-RestMethod and you’ll see…
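For reference, the request itself is just a POST to that compose endpoint, reusing the $headers and $body built above (a sketch, not the exact line from my script):

Invoke-RestMethod -Uri 'https://oauth.reddit.com/api/compose' -Method Post -Headers $headers -Body $body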

Ohhh yeah, dat envelope.

I went ahead and prettied it all up and packaged it as a cmdlet.  Simply provide your values like so:

Send-RedditMessage -AccessToken $token.access_token -Recipient 1RedOne `
   -subject 'New Post Alert!' -post $post

This function is highly customized to my needs, thus the kind of weird -post param.  You’ll want to customize this for your own purposes, but the example usage describes how to pass in a JSON representation of a Reddit API Post object for a full featured body message.

Here’s the download for the completed function.  Send-RedditMessage.ps1.  One last wrinkle stands in the way though.

Don’t get too cocky! Reddit API tokens expire in an hour.

Yep.  Other APIs are reasonable, and expire only after an extended period of inactivity.  Or they last for three months, or forever.  Nope, not reddit’s; their tokens expire in one hour.

Fortunately though, refreshing a token is pretty easy.  When we made our initial request for a token earlier using the Connect-RedditAccount cmdlet, the cmdlet specified a URL parameter duration=permanent which instructed the reddit API to provide us a refresh token.

The cmdlet also helpfully stored this token for you, and can refresh your token as well.

How to refresh tokens

Refreshing your token isn’t actually that bad.  If you’re interested in doing this manually, simply send a REST Post to this URL https://www.reddit.com/api/v1/access_token with the following as the payload.  You’ll need the same values for scope, client_id, and redirect_uri, and should provide the refresh token you received with the first auth token as well.

$body=@{
    client_id     = 'YourApiKey'
    grant_type    = 'refresh_token'
    refresh_token = 'YourRefreshTokenHere'
    redirect_uri  = 'YourRedirectURL'
    duration      = 'permanent'
    scope         = 'Needs to be the same scope from before'}

Finally, you need to provide a Basic authentication header.

What’s Basic Auth?

Basic Authentication is a relatively insecure and yet very common method of authenticating a request.  In Basic Auth, you provide credentials in the format username:password, and the string is then encoded in base64.  Curious what that looks like?  Click here to see.

It is barely a step up from sending a plaintext string, and in fact, can actually signal that something worth obfuscating is being transmitted.  Still, it’s what Reddit wants so…
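If you’re curious what building that header by hand looks like, it’s roughly this (the clientID and secret are placeholders, and PowerShell will happily do this for you via -Credential, as shown next):

# Hypothetical manual Basic auth header: 'clientID:clientSecret' base64-encoded
$pair       = 'YourClientID:YourClientSecret'
$encoded    = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
$authHeader = @{ Authorization = "Basic $encoded" }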

The easiest way to do this in PowerShell is to instantiate a Credential object and pass that along.  Username should be your clientID, while the ClientSecret should be your password.

$tempPW = ConvertTo-SecureString 'YourClientSecret' -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ('YourclientID', $tempPW)

Provide all of this in a post like this:

Invoke-RestMethod https://www.reddit.com/api/v1/access_token `
    -Body $body -Method Post -Credential $credential

and you’ll receive another authcode you can use for the next hour.

Of course, all of this is done for you with the PSReddit module.  On import, it will look in the module path for a pair of .ps1xml files, which contain some information about your reddit account, including your oAuth token and Refresh Token, which will be loaded if found as $PSReddit_accessToken and $PSReddit_RefreshToken.  If you haven’t linked an account yet, you’re instead prompted on how to do so.

Making this work in Azure

With all of the work done locally, all that remained was to find a way to reproduce this in Azure.

I began by logging on to my Azure Portal and then clicking the + button to add a new resource.

Search for ‘Function App’ (I swear they were called Azure Functions like a week ago…)

Then fill in the mandatory questions.  Be sure to choose a region which makes sense.

The actual UI is a long vertical panel. I awkwardly cut and pasted it into this equally awkward square. It looks bad, but at least it took way too long.

Once you’ve filled these in, all that remains is to wait a few minutes for the resource to be created.  Click ‘Go to resource’ when you see the prompt.

Next, we’ll want to click down to ‘Functions’ and then hit the Plus sign to trigger the wizard.

If we wanted to use JavaScript or C# we could choose from a variety of pre-made tools, but instead we’ll choose ‘Create your own custom function’

Next we’re prompted to choose how we want this thing to run.  Do we want the code to run when a URL is hit (commonly referred to as a ‘webhook)’, or when a file is uploaded?  Do we want it to run if the face recognition Cortana API finds a new photo of us on Imgur?  The options are endless.  We’re going plain vanilla today though, so choose Timer.

The last pages of the wizard, we’re here!  Azure uses the cron standard for formatting schedules, which is a nightmare if you’ve only been around Windows and the vastly superior Task Scheduler.  (Except the part where it only generates configurations with XML, ew).  Fortunately you can easily create your own cron expression using this site.

I wanted mine to run once an hour from 09:00 to 13:00, and only on Monday through Friday.  I’m in UTC -6, so the expression worked out to:  0 0 15-20 * * 1-5.  That translates roughly to: 0 seconds, 0 minutes, hours 15 through 20, any day of the month, any month, Monday through Friday (days 1-5).

Clicking Create will show you…

Writing PowerShell in (mostly) real-time in Azure

That UI excites me in my deepest nerdy places, down deep where I fantasize about having telekinesis or being able to do cool parkour moves.  I ❤ that they provide a PowerShell example for us to start hacking away!

The curious mind SHOULD be tempted to click Run and see what happens. So…

Right from the start, I knew I couldn’t use the same method of displaying an oAuth window to authorize the delegated token in Azure, as Azure Functions, much like Orchestrator, SMA and PowerShell workflows do not support interactivity, and thus commands like Write-Host (which writes to the console) and -Debug are not permitted.  There’s simply no console to support that interaction.

Once the UI is displayed to a user a single time, you can forever refresh your token by posting the refresh token and credential back to the right endpoint, as mentioned above.  So, I decided to simply create a JSON file, in which I would store the relevant bits for the Refresh request, here’s what my file looked like.

Settings.json
{
    "scope":  [
                  "privatemessages",
                  "save",
                  "submit"
              ],
    "secret":  "xAqXHdh-mySecret_PleaseDontSteal_rV3MY",
    "client_id":  "123Ham4uandMe",
    "duration":  "permanent",
    "refresh_token":  "1092716171-RefreshMe123Please4meySifmKQ",
    "redirect_uri":  "http://www.foxdeploy.com"
}
Just click upload on the right side

Uploading files is easy, just click the upload icon on the far right side then give it a moment.  It may take up to a minute for your file to appear, so don’t hit Upload over and over, or you’ll end up with multiple copies of it.  I uploaded the Refresh-Token.ps1 and Send-RedditMessage.ps1 functions as well.

Next, to modify my full script to work with settings stored in a .JSON file, and update the code to reflect its new headless life.

You’ll notice that I had to change the directory at the head of the script.  All the source files for an Azure function will be copied onto a VM and placed under D:\home\site\wwwroot\<functionName>\, so in order to find my content, I needed to Set-Location over to there.  In a future release of Azure Functions, we will likely see them default to the appropriate directory immediately.
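The head of my updated script ended up looking roughly like this; the function name ‘RedditBot’ and the helper file names are just stand-ins for whatever you called yours:

# Azure Functions copies source files to D:\home\site\wwwroot\<functionName>\
Set-Location 'D:\home\site\wwwroot\RedditBot'

# Pull in the settings and helper functions uploaded alongside the script
$settings = Get-Content .\Settings.json -Raw | ConvertFrom-Json
. .\Refresh-Token.ps1
. .\Send-RedditMessage.ps1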

With all of this completed, I hit Save and then…waited.

The first version of this function never checked to see if an alert had been sent before, so every four hours I received a private message for every post on my subreddit!

With this in place, I received notices every few hours until I was caught up, and had personally responded to every post on the sub!  And I now get a PM within hours of a new post, so posts will never go unanswered again!  It was a huge success and is still running today, smoothly.

In conclusion…how much does it cost?

I was curious to see how expensive this would be, so after a month (and about ~100 PMs sent), here’s my stats.  Mind you that as of this moment, Microsoft allows for a super generous free plan, which “…includes a monthly free grant of 1 million requests and 400,000 GB-s of resource consumption per month.”  More pricing details here.

To date, it has still yet to cost me a penny.  I think function apps are a wonderful addition to Azure, and will definitely be deploying them over VMs in the future!

I could not have written this blog post without the help of Mark Kraus, so you should definitely follow him on Twitter and check out his blog.

I also learned a lot about Azure Functions from Stefan Stranger’s post on the topic, here.

And last, but not least, I learned a load from David O’Brien as well.  Not just on Functions, but on a number of other topics too over the years from his wonderful blog.  He’s a super star!

 

Faster Web Cmdlet Design with Chrome 65


If you’ve been following my blog for a while, you know that I LOVE making PowerShell cmdlets, especially ones that consume an API or scrape a web site.

However, when it comes to tools that peruse the web, this can get a bit tricky, especially if a site doesn’t publish an API, because then you’re stuck parsing HTML or loading and manipulating an invisible Internet Explorer -ComObject, which occasionally barfs in Japanese.  And even this terrible approach is closed to us if the site uses AJAX or dynamically loads content.

In that case, you’re restricted to making changes on a site while watching Fiddler 4 and trying to find interesting looking method calls (this is how I wrote my PowerShell module for Zenoss, by the way.  Guessing and checking my way through, with their ancient and outdated Python API docs as my sole and dubious reference material, and with a Fiddler window MITM-ing my own requests in the search to figure out how things actually worked.  It…uh…took a bit longer than I expected…)

This doesn’t have to be the case anymore!  With the new release of Chrome 65 comes a PowerShell power tool so powerful that it’s like moving from a regular apple peeler to this badboy.

What’s this new hotness?

For a long time now, if you load the Chrome Developer Tools by hitting F12, you’ve been able to go to the Network tab and copy an HTTP request as a cURL statement.

Image Credit : google developers blog

This is super useful if you use a Linux or Mac machine, but cURL statements don’t help us very much in the PowerShell Scripting world.  But as was recently brought to my attention on Twitter, Chrome now amazingly features the option to copy to a PowerShell statement instead!

I had to check for myself and…yep, there it was!  Let’s try and slap something together real quick, shall we?

How do we use it

To use this cool new feature, we browse to a page or resource, interact with it (like filling out a form, submitting a time card entry, or querying for a result) and then, RIGHT when we’re about to do something interesting, we hit F12, go to the Network tab, then click ‘Submit’ and look for a POST, PUT or PATCH request.

More often than not, the response to this web request will contain all or part of the interesting stuff we want to see.

I check the pollen count online a lot.  I live in the South-Eastern United States, home to some of the worst pollen levels recorded on the planet.

Once, I was out jogging in the pine forest of Kennesaw Mountain, back before I had children, when I had the time to exercise, or perform leisure activities, and a gust of wind hit the trees and a visible cloud of yellow pollen flew out.  I breathed it in deeply…and I think that was the moment I developed allergies.

Anyway, I often check the pollen counts to see how hosed I’ll be and if I need to take some medicine, and I really like Weather.com’s pollen tracker.

So I thought to see if I could test out this neat new feature.  I started to type in my zip code in the lookup form and then, decided to record the process.

Full screen recommended!  I’ve got a 4k monitor and recorded in native resolution, you’ll probably need a magnifying glass if you don’t full screen this.

So, to break that down:

  • Prepare to do something interesting – you need to know exactly what you’re going to click or type, and have an idea of what data you’re looking for.  It pays to practice.
  • Open Developer Tools and go to the Network tab and click Record
  • Look through the next few requests – if you see some going to a different domain (or an end-point like /api/v1 or api.somedomain.com), then you may be on the right track.

In my case, I ran through the steps of putting in my zip code, and then hitting enter to make the pollen count display.  I noticed on my dry run with the network tab open that a lot of the interesting looking stuff (and importantly, none of the .js or images) came from a subdomain with API in the name.  You can apply a filter at any point while recording or after using the filter box, so I added one.

Filtering out the cruft is a MUST. Use the filter box in the upper left to restrict which domains show up here.

Now, to click through these in Chrome and see the response data.  Chrome does a good job of formatting it for you.

Finally I found the right one which would give me today’s pollen count (actually I’m being dramatic, I was amazingly able to find the right one in about a minute, from the start of this project!)

All the values I need to know that it is the pine trees here which are making my nose run like a faucet.

All that remained was to see if this new stuff actually worked…

Simply Right Click the Request – Copy – Copy Request as PowerShell!

And now, the real test…

I popped over to the ISE and Control-V’ed that bad boy.  I observed the following PowerShell command.

Invoke-WebRequest -Uri "https://api.weather.com/v2/turbo/vt1pollenobs?apiKey=d522aa97197fd864d36b418f39ebb323&format=json&geocode=34.03%2C-84.69&language=en-US" `
   -Headers @{"Accept"="*/*"; "Referer"="https://weather.com/"; "Origin"="https://weather.com"; "User-Agent"="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"}

We can see in the geocode= part of the URL that entering my zip code converted the location into lat/long coordinates, and then the actual request for the local counts presents those coordinates to the vt1PollenObs endpoint of their Turbo internal API.  You can learn a lot from a request’s formatting.

In all likelihood we could probably omit the majority of those Header values and it would still work.  We could likely truncate the URL as well, but I had to see what would happen!


StatusCode        : 200
StatusDescription : OK
Content           : {"id": "34.03,-84.69",
                    "vt1pollenobs": 

                       {"reportDate":"2018-03-30T12:43:00Z","totalPollenCount":2928,"tree":4,"grass":0,"weed":1,"mold":null}

                        }
RawContent        : HTTP/1.1 200 OK
                    Access-Control-Allow-Origin: *
                    X-Region: us-east-1
                    Transaction-Id: e64e09d7-b795-4948-8e09-d7b795d948c6
                    Surrogate-Control: ESI/1.0
                    Connection: keep-alive
                    Content-Length: 159
                    Cac...
{...}

I mean, you can see it right there, in the Content field: a beautiful little JSON object!  At this point, sure, you could pipe the output into ConvertFrom-JSON to get back a PowerShell object, but I would be remiss (and get an ear-full from Mark Kraus) if I didn’t mention that Invoke-RestMethod automatically converts JSON into PowerShell objects!  I swapped that in place of Invoke-WebRequest and stuffed the long values into variables and…
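Here’s a sketch of that swap, reusing the URL and headers Chrome captured above and stuffing the long query-string values into variables:

$apiKey  = 'd522aa97197fd864d36b418f39ebb323'
$coords  = '34.03%2C-84.69'
$headers = @{"Accept"="*/*"; "Referer"="https://weather.com/"; "Origin"="https://weather.com"}

# Invoke-RestMethod converts the JSON response straight into a PowerShell object
$pollen = Invoke-RestMethod -Uri "https://api.weather.com/v2/turbo/vt1pollenobs?apiKey=$apiKey&format=json&geocode=$coords&language=en-US" -Headers $headers
$pollen.vt1pollenobs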

Wow, that ‘Just worked’! That never happens!!

Let’s make a cmdlet

OK, going back to that URL, I can tell that if I presented a different set of lat and lng coordinates, I could get the pollen count for a different place.

We could make this into a cool Get-PollenCount cmdlet if we could find a way to convert a ZIP over to a real set of coordinates…

A quick search led me to Geocod.io, which is very easy to use and has superb documentation.

Zenoss, why can’t you have docs like this?

Sign up was a breeze, and in just under a minute, I could convert a ZIP to Coords (among many other interesting things) in browser.

I needed them back in the format of [$lat]%2c[$lng], where $lat is the latitude rounded to two decimal places and $lng is predictably also the same.  This quick and dirty cmdlet got me there.

Function Get-GeoCoordinate{
param($zip)
$lookup = Invoke-RestMethod "https://api.geocod.io/v1.3/geocode?q=$zip&api_key=$($global:GeocodeAPI)"
"$([math]::Round($lookup.results[0].location.lat,2))%2c$([math]::Round($lookup.results[0].location.lng,2))"
}

Make sure to set $global:GeocodeAPI first.  So, now a quick test and…

Okie doke, that was easy enough.   Now to simply modify the URL to parameterize the inputs


Function Get-PollenCount{
param($coords)

$headers = @{"Accept"="*/*"; "Referer"="https://weather.com/"; "Origin"="https://weather.com"; "User-Agent"="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"}
$urlbase = "https://api.weather.com/v2/turbo/vt1pollenobs?apiKey=$global:PollenAPI&format=json&geocode=$coords&language=en-US"
$totalPollen = Invoke-RestMethod -Uri $urlbase -Headers $headers
$totalPollen.vt1pollenobs

}

On to the final test…
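A sketch of what that test looks like, with placeholder API keys and an example zip code:

# Hypothetical end-to-end usage; both keys are placeholders
$global:GeocodeAPI = 'YourGeocodioApiKey'
$global:PollenAPI  = 'YourWeatherComApiKey'

Get-PollenCount -coords (Get-GeoCoordinate -zip 30144)   # swap in your own zip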

It was…that easy??

What’s next?

This new tool built in to Chrome really is a game changer to help us quickly scrape together something working to solve an issue.  It’s AWESOME!

Do you need help developing a cmdlet for your scenario?  We can help!  Post a thread on reddit.com/r/FoxDeploy and I’ll respond in a timely manner and help you get started with a solution for free!  

 


Hard to test cases in Pester


Recently at work I have finally seen the light and begun adding Pester tests to my modules.  Why is this a recent thing, you may ask?  After all, I was at PowerShell Summit and heard the good word about it from Dave Wyatt himself way back in 2015, I’ve had years to start doing this.

Honestly, I didn’t get it…

To tell the truth, I didn’t understand the purpose of Pester. I always thought ‘Why do I need to test my code? I know it works if it accomplishes the job it’s supposed to do’.

For instance, I understood that Pester was a part of test-driven development, a paradigm in which you start by writing tests before you write any code.  You’d write a ‘It should make a box’ test and wire it up before you actually wrote the New-Box function.  But I was only looking at the outside of my code, or where it integrates into the environment.  In truth, all of the tests I wrote earlier on were actually integration tests.

See, Pester is a unit testing framework.  It’s meant to test the internal logic of your code, so that you can develop with certainty that new features to your function don’t break your cmdlet.

CodeCoverage made Pester finally click

It wasn’t until I learned about the powerful -CodeCoverage parameter of Pester that it actually clicked.  For instance, here’s a small piece of pseudo code, which would more or less add a user to a group in AD.

Function Add-ProtectedGroupMember {
    Param(
    [ValidateSet('PowerUsers','SpecialAdmins')]$GroupName,
    $UserName)

    if ($GroupName -eq 'SpecialAdmins'){
        $GroupOU = 'CN=DomainAdmins,OU=Groups,DC=FoxDeploy,DC=COM'
    }else{
        $GroupOU = 'CN=PowerUsers,OU=Groups,DC=FoxDeploy,DC=COM'
    }

    try {Add-ADGroupMember -Path $GroupOU -Member $UserName -ErrorAction Stop}
    catch {throw "Check UserName; Input [$UserName]" }

}

And to go along with this, I made up a pseudo function called Add-ADGroupMember, defined as the following.

Function Add-ADGroupMember {
    [CmdletBinding()]
    param($Path, $Member)
    [pscustomobject]@{Members=@('EAAdmin','Calico','PB&J', $Member);Name=$Path}
}

When I run Pester in -CodeCoverage mode and pass in the path to my Add-ProtectedGroupMember cmdlet, Pester will highlight every branch of logic which probably needs to be tested.  Here’s what it looks like if I run Pester in that mode, without having created any tests.

PS>Invoke-Pester -CodeCoverage .\Add-ProtectedGroupMember.ps1
Code coverage report:
Covered 0.00% of 5 analyzed commands in 1 file.

Missed commands:

File               Function          Line Command
----               --------          ---- -------
Add-ProtectedGroup Add-ProtectedGrou    6 if ($GroupName -eq 'SpecialAdmins'){...
Add-ProtectedGroup Add-ProtectedGrou    7 $GroupOU = 'CN=DomainAdmins,OU=Groups,DC=FoxDeploy,...
Add-ProtectedGroup Add-ProtectedGrou    9 $GroupOU = 'CN=PowerUsers,OU=Groups,DC=FoxDeploy,DC...
Add-ProtectedGroup Add-ProtectedGrou   12 Add-ADGroupMember -Path $GroupOU -Member $UserName ...
Add-ProtectedGroup Add-ProtectedGrou   13 throw "Check UserName; Input [$UserName]"             

As we can see, Pester is testing for the Internal Logic of our Function.  I can look at this report and realize that I need to write a test to make sure that the logic on line 6 works.  And more than highlighting which logic needs to be tested, it’s also basically a challenge.  Can you cover every case in your code?

Pester was stirring something within me, this gamified desire for completion and min-maxing everything.  (If it had Achievement Messages too, I would write Pester tests for everything!)

So, challenge accepted, let’s think through how to write a test to cover the first issue, line 6.  If a user runs my cmdlet and chooses to place the object in the SpecialAdmins OU, the output will always be ”CN=DomainAdmins,OU=Groups,DC=FoxDeploy,DC=COM”.  I can test for that with the following Pester test, saved in a file called Add-ProtectedGroupMember.tests.ps1

Describe "Add-ProtectedGroupMember" {
    It "The if branch for 'SpecialAdmin' use case should work" {
        $A = Add-ProtectedGroupMember -GroupName SpecialAdmins -UserName FoxAdmin
        $A.Name | Should -Be 'CN=DomainAdmins,OU=Groups,DC=FoxDeploy,DC=COM'
    }
}

I run the Pester test again now and…

Wow, with one test, I have now covered 80% of the guts of this cmdlet, that was sweet. That’s because for this one test to execute successfully, all of these lines in my cmdlet are involved.

All of the lines in Blue were covered under just one test!

Completing The Tests

The next line that needs to be covered is called when the user runs with -GroupName PowerUsers, so we can cover that with this test.

It "The else branch for 'PowerUsers' use case should work" {

        $A = Add-ProtectedGroupMember -GroupName PowerUsers -UserName FoxAdmin
        $a.Name | Should -Be 'CN=PowerUsers,OU=Groups,DC=FoxDeploy,DC=COM'
}

After this test, we’re practically done

All that’s left now is to write a test for this chunk of code.

But I can only test that my error handling works if I can find some way to force the cmdlet in the try block to error somehow.  How the heck do I make my cmdlets poop the bed to test that this cmdlet has good error handling?

How to test your error handling

This is where the Pester keywords of Mock and Context come into play.  Pester allows you to ‘Mock’ a command, which basically replaces that command for one of your own design to ‘Mock’ up what a cmdlet would do.  For instance, when I’m running a test that uses Active Directory commands, I don’t want the tests to actually touch AD.  I would mock Get-ADUser and then have this fake function just output the results from one or two users.

Run the real cmdlet once, select the first two results, then paste them into the body of the mock as a PowerShell object.  Easy-peasy.
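For example, a mock for Get-ADUser with a couple of canned results might look like this (the user objects here are made up and trimmed down to a few properties):

Mock Get-ADUser {
    # Two fake users stand in for whatever Get-ADUser would return in production
    [pscustomobject]@{Name='FoxAdmin';  SamAccountName='foxadmin';  Enabled=$true}
    [pscustomobject]@{Name='FoxUser01'; SamAccountName='foxuser01'; Enabled=$true}
}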

🦊Take-away 🦊 Mock clips the wings of any cmdlet, preventing them from actually running

If I want to test error handling, I write a new test showing when I expect my function to error (when it should throw).  To make it throw, especially when I am calling external cmdlets, I just mock that cmdlet and replace that cmdlet’s guts with something that will throw an error.   To paraphrase:

So, in order to write a test to see if my code respects error handling, I need to overwrite the default behavior of Add-AdGroupMember to a state which will reliably fail.  It’s really simple to do!

 #We need to be able to test that try/catch will work as expected
Mock Add-ADGroupMember {
    throw
}    

It "Should throw if we're unable to change group membership" {

    {Add-ProtectedGroupMember -GroupName PowerUsers -UserName FoxAdmin } | Should -Throw

}

I run the tests again and now…

Oh yeah, 100%!   In my development work, I work towards 100% code coverage to ensure that the guts of my logic is well covered by tests.  This is worth the time to do (so build it into your schedules and planning timelines) because having the tests ensures that I don’t break something when I come back to make changes three months from now.

Let’s move on to some of the scenarios which really stumped me for a while, as I’m still basically a newbie at Pester.

Verify Credentials or params are passed as expected

I wrote a cmdlet which called Get-CimInstance; it was something like this.

Function Get-DiskInfo {
    param ($Credential)

    Get-CimInstance Win32_DiskDrive | select Caption,@{Name='SerialNumber';Expression={$_.SerialNumber.Trim()}},`
        @{Name='Size';Expression={$_.Size /1gb -as [int]}}

}

We decided to add support for an optional -Credential param, for cases in which we would need to use a different account.  The difficulty appeared when we wanted to ensure that if the user provided a Parameter, it was actually handed off down the line.

To solve this problem, first we had to rewrite the cmdlet a little, to prevent having multiple instances of Get-CimInstance in the same cmdlet.  Better to add some extra logic and build up a hashtable containing the parameters to provide than to have multiple instances of the same command in your function.

Function Get-DiskInfo {
    param ($Credential)

    if ($Credential){
        $ParamHash = @{Credential=$Credential;ClassName='Win32_DiskDrive'}
    }
    else{
        $ParamHash = @{ClassName='Win32_DiskDrive'}
    }
    Get-CimInstance @ParamHash | select Caption,@{Name='SerialNumber';Expression={$_.SerialNumber.Trim()}},`
        @{Name='Size';Expression={$_.Size /1gb -as [int]}}

}

Next, to test if the $Credential param was passed in, we mocked Get-CimInstance and configured the code to save the input params outside of the function scope for testing.

Mock Get-CimInstance {
        param($ClassName)
        $script:credential = $credential
        $global:ClassName = $ClassName

    } -Verifiable

Finally, in the test itself, we run the mocked cmdlet and then validated that after execution, the value of $Credential was not null.

It 'When -Credential is provided, Credential should be passed to Get-CimInstance' {
        $somePW = ConvertTo-SecureString 'PlainPassword' -AsPlainText -Force
        $cred = New-object System.Management.Automation.PSCredential('SomeUser', $somePW)
        Get-DiskInfo -Credential $cred
        $Credential | should -Not -be $null
    }

Once we came up with this structure to validate parameters were passed in to child functions, it really opened up a world of testing, and allowed us to validate that each of our parameters was tested and did what it was supposed to do.

Test Remote Types which won’t exist in the test environment

Recently I was working on a PowerShell module which would reach into ConfigMgr over WMI and pull back an instance of the SMS_Collection Class, and then we would call two methods on it.

$CollectionQuery = Get-WMIObject @WMIArgs -class SMS_Collection -Filter "CollectionID = '$CollectionID' and CollectionType='2'"

This gives us an SMS_Collection object, which we can use to call the .AddMemberShipRules() method and add devices to this collection.

I didn’t want my Pester tests to be dependent on being able to reach a CM Server to instantiate the object type (nor did I want my automated testing pipeline to have access to ConfigMgr) so…I just mocked everything that I needed.  It turns out that you can easily fake the methods your code needs to call using the Add-Member -MemberType ScriptMethod cmdlet.


Mock Get-WmiObject {
        param($ClassName)
        $script:credential = $credential
        $global:ClassName = $ClassName

        $mock = [pscustomobject]@{CollectionID='FOX0001'
                CollectionRules=''
                CollectionType =2
                Name = 'SomeCollection'
                PSComputerName = 'SomePC123'}

        Add-Member -InputObject $mock -MemberType ScriptMethod -Name AddMemberShipRules -Value { Write-Verbose 'Mocked' }
        $mock
} -Verifiable

Now I could validate that this line of code runs, and that the rest of my code calls the method later on, with the following test.

It 'Should Receive an Instance of the SMS_Collection object'{
  Add-CMDeviceToCollection -CollectionID SMS0001
  Assert-MockCalled -CommandName Get-WMIObject -Time 1 -Exactly -Scope It -ParameterFilter {$Class -eq 'SMS_CollectionRuleDirect'}

}

Move method calls into their own functions

Looking back to the code for Add-CMDeviceToCollection, note line 84.

$MemberCount = Get-WmiObject @WMIArgs -Class SMS_Collection -ErrorAction Stop -Filter $Filter
$MemberCount.Get()

You can try until you are blue in the face, but Pester does not have the capability to mock .NET objects or to test whether a method was called.  It DOES excel with functions, though, so let’s put the method call from above into its own function; then we can check whether the method was called by adding Assert-MockCalled.

Function Call-GetMethod {
   param($InputObject)
    $InputObject.Get()
}
 Function Add-CMDeviceToCollection {
     

        $MemberCount = Get-WmiObject @WMIArgs -Class SMS_Collection -ErrorAction Stop -Filter $Filter
        $MemberCount = Call-GetMethod -InputObject $MemberCount
        Write-Verbose "$Filter direct membership rule count: $($MemberCount.CollectionRules.Count)"

And the test to validate that this line of code works as expected.

It 'Should call the .Get() method for the collection count'{
  Add-CMDeviceToCollection -CollectionID SMS0001
  Assert-MockCalled -CommandName Call-GetMethod -Time 1 -Exactly -Scope It 

}

And that’s it!

And that’s all for now folks!  Have you encountered any of these situations before?  Or run into your own tricky case that you’ve solved?  Leave a comment below or post it on reddit.com/r/FoxDeploy to share!

ClientFaux – the fastest way to fill ConfigMgr with Clients


Recently at work, we were debating the best way to handle mass collection moves in ConfigMgr.  We’re talking moving 10,000 or more SCCM devices a day into Configuration Manager collections.

To find out, I installed CM in my beastly Altaro VM Testlab (the build of which we covered here), and then wondered…

how the heck will I get enough clients in CM to test in the first place?

Methods we could use to populate CM with Clients

At first I thought of using SCCM PXE OSD Task Sequences to build dozens of VMs, which my lab could definitely handle.  But a PXE image was taking ~24 minutes to complete, which ruled that out.  Time to a thousand clients, even running four images at a time, would be over one hundred hours; no go.

Then I thought about using differencing disks coupled with AutoUnattend images created using WICD, like we covered here (Hands-off deployments), but that still takes ~9 minutes per device, which is a lot of time and would use up my VM resources.  Time to a thousand clients, assuming four at a time? 36 hours.

I thought I remembered seeing someone come up with a tool to create fake ConfigMgr clients, so I started searching…and it turns out that, other than some C# code samples, it basically didn’t exist.  I’d had a fever dream.

So I decided to make it, because after all, which is more fun to see when you open the console in your testlab, this?

Or this?

And it only took me ~40 hours of dev time and troubleshooting.  But my time per client?  Roughly eight seconds!  That means 450 clients PER hour, or a time to thousand clients of only two hours!  Now we’re cooking…

How is this possible?

This is all made possible using the powerful ConfigMgr SDK, available here.

But really, none of this would have been possible without the blog posts by Minfang Lu of Microsoft and the help of @Adam Meltzer also of Microsoft.  Minfang’s post provided some samples which helped me to understand how to Simulate a SCCM Client.   And Adam is a MSFT SUPERSTAR who responded to my emails at all hours of the night and helped me finally solve the pesky certificate issue which was keeping this from working.  His blog posts really helped me get this working.  It was his samples that got me on the right path in the first place.

So, what does it even do?

The ClientFaux Client Simulation tool allows us to use the super powerful ConfigMgr SDK and its assemblies to simulate a ConfigMgr Client.  We can register a client, which will appear in CM as a new Device. We are able to specify the name of our fake client, and some of its properties, and even run a client discovery.  This concludes the list of what is working at this point 🙂

On the roadmap, we will be able to populate and provide custom fake discovery classes which we can see in Resource Explorer (though this has some issues now).  Imagine testing queries in your test CM and being able to exactly replicate a deployment of an app with multiple versions, for Collection Queries or reporting…This is only the beginning, and I hope that with a good demo of what this does, we’ll quickly add more and more features.  If you’re interested…

Here’s the source, help make this better!

Standard Boilerplate Warning

The focus of this tool is to allow us to stage our CM with a bunch of clients, so we can do fun things like have huge numbers of devices appear in our Console, test our skills with Querying, and have interesting and real looking data to include as we practice our custom SQL Reporting skills.  This should be done in your test lab.  I don’t see how this can cause your CM serious issues, but I’ve only got a sample size of one so far.  Consider yourself warned, I can’t help you if you create 100K devices and your donut charts in CM suddenly look weird.  Do this in test.

How do I use the ClientFaux tool

Getting up and running is easy, simply click on the releases tab and download the newest binary listed there.  Extract it somewhere on your PC.

Next, download and install the ConfigMgr SDK, then open up Explorer and copy the Microsoft.ConfigurationManagement.Messaging.dll file from (“C:\Program Files (x86)\Microsoft System Center 2012 R2 Configuration Manager SDK\Redistributables\Microsoft.ConfigurationManagement.Messaging.dll”) to the same path where you put the ClientFaux.

Your directory should look like this now.

dir
Yes, that IS a handdrawn icon

At this point you’re probably noticing the .exe file and wondering…

Wait, no PowerShell Cmdlets?!

I know, I know, I deserve shame.  Especially given the theme of my blog is basically shoe-horning and making everything work in PowerShell.  I’ve been working in C# a bit at work now and sort of have a small clue, so it felt easier to start in C# and then plan to add PowerShell later.  (It’s on the plan, I swear!)  I also have a GUI planned as well, worry not, this is the early days.

To start creating clients, we need five things:

  • A desired name for the new client in CM
  • The path to a CM Compatible Certificate in PFX format
  • The Password to the above cert
  • The ConfigMgr Site Code
  • The Name of the CM Server

Making the certs was kind of tricky (I’ll cover the woes I faced in the upcoming ‘ClientFaux Build Log’ post, to come next week), so I wrote a PowerShell script to handle all of this.  Run this from a member server which can route to CM.  In my lab, I have a small domain with a CM Server, an Admin box and a Domain Controller.  I ran this from the Admin box.

$newCert = New-SelfSignedCertificate `
    -KeyLength 2048 `
    -HashAlgorithm "SHA256" `
    -Provider  "Microsoft Enhanced RSA and AES Cryptographic Provider" `
    -KeyExportPolicy Exportable -KeySpec KeyExchange `
    -Subject 'SCCM Test Certificate' -KeyUsageProperty All -Verbose 

    start-sleep -Milliseconds 650

    $pwd = ConvertTo-SecureString -String 'Pa$$w0rd!' -Force -AsPlainText

Export-PfxCertificate -cert cert:\localMachine\my\$($newCert.Thumbprint) -FilePath c:\temp\ClientFaux\CMCert.pfx -Password $pwd -Verbose
Remove-Item -Path cert:\localMachine\my\$($newCert.Thumbprint) -Verbose

ClientFaux MynewPC123 c:\temp\ClientFaux\CMCert.pfx 'Pa$$w0rd!' F0X SCCM

This will create the cert (which has to use the SHA1 or SHA256 Hashing Algorithm, and be 2048 bits long), then export it with a password, and then delete the cert from your cert store. I ran into issues when I had more than 10,000 certs, and we don’t need it in our store anymore to actually use it.

Then, it will trigger ClientFaux.exe with those params.

This particular configuration above says: “Register a new client using the cert found at C:\temp\ClientFaux\CMCert.pfx, with the password of ‘Pa$$w0rd!’, and then register with the F0X ConfigMgr site using the Management Point SCCM.”  Here’s what it will look like:

Enroll

If you run into errors, there will be a log file created with every enrollment in the same directory as the binary.  The log file is super verbose, but you can also find logging info on the Management Point itself, look to MP_Registration.log and report any errors you see (but if you use this configuration, you should not run into any).

What will it do?

At this point, we can see the log files on the Management Point, which will be found under the SCCM Drive\SMS_CCM\Logs\MP_RegistrationManager.log file, a completed request will look like this:

Mp Reg: Reply message
MP Reg: Processing completed. Completion state = 0
MP Reg: Message ReplyTo : direct:DC2016:SccmMessaging
MP Reg: Message Timeout : 60000
Parsing done.
Processing Registration request from Client 'Fox93481.FoxDeploy.local'
Successfully created certificate context.
MP Reg: Successfully created context from the raw signing certificate.
Begin validation of Certificate [Thumbprint 941D7F46903BEE8A7A67BF7B416453BFC0F18FFE] issued to 'SCCM Test Certificate'
Completed validation of Certificate [Thumbprint 941D7F46903BEE8A7A67BF7B416453BFC0F18FFE] issued to 'SCCM Test Certificate'
Successfully created certificate context.
MP Reg: Successfully created context from the raw encryption certificate.
Registration Signature: SuperLongHashHere
MP Reg: DDR written to [E:\CM\inboxes\auth\ddm.box\regreq\RPB886P6.RDR] for Client [GUID:A698D203-C0F9-4E5D-8525-3AA55572BF5F] with Certificate Thumbprint [941D7F46903BEE8A7A67BF7B416453BFC0F18FFE]
Mp Reg: Reply message
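
If you’d like to watch those registrations roll in live from PowerShell, a quick one-liner against that same log works nicely (swap E: for whichever drive CM lives on in your lab):

# Tail the Management Point registration log in real time
Get-Content 'E:\SMS_CCM\Logs\MP_RegistrationManager.log' -Tail 20 -Wait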

Give it a moment, and it will appear in the ConfigMgr console!

NewDevice

But, how do I get–like–10k of them

If you want to get your console really filled with devices, then you can run this script to create boatloads of devices!  I’m assuming you placed ClientFaux under C:\Temp\ClientFaux. Simply edit line 1 and 2 with the starting and ending numbers, and then edit line 7 with your desired name. If you change nothing, this will create PCs labeled Fox1, Fox2, and so on up to 50,000.

$str = 1
$end = 50000
while ($str -le $end){
    if(-not(test-path C:\temp)){
        new-item -Path C:\temp -ItemType Directory -Force
    }
    $NewName = "Fox$str"
    $newCert = New-SelfSignedCertificate `
        -KeyLength 2048 `
        -HashAlgorithm "SHA256" `
        -Provider  "Microsoft Enhanced RSA and AES Cryptographic Provider" `
        -KeyExportPolicy Exportable -KeySpec KeyExchange `
        -Subject 'SCCM Test Certificate' -KeyUsageProperty All -Verbose 
    
        timeout 3

    $pwd = ConvertTo-SecureString -String 'Pa$$w0rd!' -Force -AsPlainText

    Export-PfxCertificate -cert cert:\localMachine\my\$($newCert.Thumbprint) -FilePath "c:\temp\Client_$NewName.pfx" -Password $pwd -Verbose 
    Remove-Item -Path cert:\localMachine\my\$($newCert.Thumbprint) -Verbose
    C:\temp\ClientFaux\ClientFaux.exe $NewName c:\temp\Client_$NewName.pfx 'Pa$$w0rd!' 'F0X' 'SCCM'
    $str+=1
}

You can also run three or four instances of this at a time as well! If you do that, I’d recommend using multiple copies of the .exe in their own folder, one per thread, to prevent two instances from trying to create the same named log file.
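
Here’s a rough sketch of what that could look like; it assumes you’ve saved the loop above as a script taking -Start and -End parameters (New-FauxClients.ps1 is a name I made up for this example) and copied the binary into ClientFaux1 through ClientFaux3:

# Kick off three parallel runs, each working a different number range with its own copy of the exe
1..3 | ForEach-Object {
    $start = (($_ - 1) * 15000) + 1
    $end   = $_ * 15000
    Start-Process powershell.exe -ArgumentList @(
        '-NoProfile'
        '-File', "C:\temp\ClientFaux$_\New-FauxClients.ps1"
        '-Start', $start
        '-End', $end
    )
}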

What’s Next

So, this represents my alpha build.  It is working reliably but it could use a lot of features and testing.  For one, how about named parameters?  How about making a GUI for it?  What about making PowerShell cmdlets instead of a binary (more in-line with the theme of this blog!)?

These are all planned and will come…eventually. But I could use some help!  If you’d like to contribute, please test the project here, and send me issues as you come across them.  If you want to resolve issues, I’ll happily accept pull Requests too!

Source Code here on GitHub!

Compiled Binary – Alpha 

Sources

I learned so much writing this post and so I wanted to call out down here a listing of all of the resources I used to write this project.  In the build-log post, we’ll talk about each of these and how they came up, in the hopes that it will help you on your own ConfigMgr integrations 🙂

Faster: ConfigMgr Collection Manipulation Speed Test


Recently at work, we had a task come up which saw us needing to move tens of thousands of devices between collections in CM. We decided to run some tests to find the fastest way! We compared:

  • The SCCM 1511 Era Collection Cmdlets
  • The newly released speedier Collection Cmdlets which shipped with Tech Preview 1803
  • Using Keith Garner’s super powerful CMPSLib Module
  • Query Based Membership
  • AD Group Query Membership
  • Direct SQL Membership Tampering ☠

I’d always kind of wondered myself, so it was a fun challenge to come up with some hard numbers.  And for the last item in the list…this is just for fun, I do not recommend using this in your production…or your testlab.  Or anywhere.

The test lab

All testing occurred in my VM Testlab, a Ryzen 7 1700 with 64 GB of RAM, with storage served on NVMe m.2 SSD drives.   A beastly machine (also hello to viewers from the year 2025 where we have 6TBs of storage on our phones and this is laughably quaint.  Here in 2018, we believed more RGB = more better, and we were happy, damn it!)

My ConfigMgr VM runs on Server 2016, 32 GB of RAM, SQL gets 16GB of that, and the SQL database and log files live on a separate NVMe drive for maximum performance.

The testing methodology

In this test, we’ll test two scenarios: adding 10,000 devices to a collection, and adding 30,000 devices to a collection.  In our experience we start to see collection slow down at around 30K, and this amount isn’t too big as to exclude the majority of CM Environments in the world.  Let me know if you think of something I forgot to test though!

We will resolve our input devices using DBATools Invoke-DBASqlQuery, with the following Syntax:

Function Get-CMDevice{
    param($CollectionID)

Invoke-DbaSqlQuery "Select Distinct Name,ResourceID from dbo.v_FullCollectionMembership where CollectionID = '$CollectionID'" -SqlInstance SCCM -Database CM_F0x
}

I used this method because I found it more performant than using the built-in command, and gave me just the two columns I needed, Name and ResourceID.
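
For the timing of the membership-add step itself, each method below was simply wrapped in Measure-Command; a quick sketch of that harness (illustrative — the call under test goes inside the scriptblock):

# Resolve the input devices once, then time whichever method is being tested
$devices = Get-CMDevice -CollectionID SMS00001 | Select-Object -First 10000 -Property Name,ResourceID

$elapsed = Measure-Command {
    # e.g. the old cmdlets, CMPSLib's Add-CMDeviceToCollection, query rules, etc.
}
$elapsed.ToString()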

For each test, we will add the devices (resetting the collection membership between runs), capture the elapsed time of the membership change command (with a refresh at the end of the process), and then monitor CollEval.log for the following line items:

Results refreshed for collection F0X0001E, 30300 entries changed.
Notifying components that collection F0X0001E has changed.
PF: [Single Evaluator] successfully evaluated collection [F0X0001E] and used 2.875 seconds

Specifically the final line indicates that Collection Rules have finished processing and the devices will now be visible in CM.  Now let’s dive in!

The SCCM 1511 Era Collection Cmdlets

These cmdlets have something of a bad rap, I feel, for being slow.  Without digging into the code, I couldn’t tell you specifically how they’re written, but I’ve heard it described that when you use them to add multiple devices to a collection, rather than adding all of the rules and saving the changes once, it would add each rule one at a time.

When I tried to directly add 10,000 rules at once, I ran into out-of-memory errors!


$d = Get-CMDevice -CollectionID SMS00001 | select -First 10000 -Property Name,ResourceID
Add-CMDeviceCollectionDirectMembershipRule -CollectionId "F0X00016" -ResourceId $d.ResourceID

>Add-CMDeviceCollectionDirectMembershipRule : One or more errors occurred.
At line:1 char:1
+ Add-CMDeviceCollectionDirectMembershipRule -CollectionId "F0X0001F" - ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Add-CMDeviceCol...tMembershipRule], AggregateException
+ FullyQualifiedErrorId : System.AggregateException,Microsoft.ConfigurationManagement.Cmdlets.Collections.Commands.AddDeviceCollectionDirec
tMembershipRuleCommand

This would continue no matter what, until I found a stable number of devices to add at a time.  527 was the max I could ever add in one step, so for consistency’s sake, I added just 500 rules at a time.


$d = Get-CMDevice -CollectionID SMS00001 | select -First 10000 -Property Name,ResourceID
for ($i = 0; $i -lt $d.Count; $i += 500)
{
"processing $i -- $($i+500)"
Add-CMDeviceCollectionDirectMembershipRule -CollectionId "F0X00016" -ResourceId $d.ResourceID[$i..($i+500)]

}

The performance wasn’t okay.  Well, bad.

10,000 devices took a leisurely 5:02 to process, while 30,000 took a snooze-inducing 50:51!  Nearly FIFTY-ONE minutes.  Clearly something slow is happening under the covers here.

Super Speed Collection Moves with CMPSLib

Keith Garner wrote his own set of PowerShell cmdlets to deal with collection rules, after we experienced some frustration with the options that ship in the box.  You can download them here, and to use them you pass in a collection of devices with Name and ResourceID properties to the -System parameter.
$d = Get-CMDevice -CollectionID SMS00001 |
    Select -First 10000 -Property Name,ResourceID
Add-CMDeviceToCollection -CollectionID F0X0001E -System $d -Verbose

The performance is awesome.  10,000 rules are applied in only 1:54, and CollEval processed the devices in practically no time at all.

This is a very nice improvement over the built-in cmdlets, and I was eager to see what happened with 30K rules.  It turns out that when we applied 30,000 rules, performance scaled linearly, taking 4:44 to create and apply the rules, with processing taking just a bit longer.

The total processing time for 30K devices is 4 minutes, 52 seconds using this method of adding direct rules.  By far the fastest!

1806 Cmdlets

The 1806 CM Cmdlets bring some nice features, and bug fixes.  On top of that, something has changed under the covers of the Collection Direct Membership Add cmdlet, giving us a HUGE speed improvement too! 

One caveat: the syntax has changed quite a bit, and you need to use the new cmdlets in a specific manner to ensure that you’ll experience the SUPER speed!

First, don’t batch your collection addition rules, like we did previously.  Or, if you do batch them, do it in batches of 10K.  Next, the parameters have changed.  If you use the cmdlet in stand-alone mode, like so
Add-CMDeviceCollectionDirectMembershipRule -CollectionID <SomeCollection> -ResourceID $arrayOfResourceIDs

You will end up with the previous cmdlet performance.  From what I can tell, it looks like there may be an internal branch in the logic and the old code is alive and kicking down there!  What, I told you it was weird! 

The sweet spot to get super speed is like so:

#Load devices to add to the collection (should be a full device from Get-CMDevice)
$devices = Get-CMDevice -CollectionID SMS00001 | select -first 30000 
Get-CMCollection -CollectionId F0x00025 | Add-CMDeviceCollectionDirectMembershipRule -Resource $devices 

Note that we are not using -ResourceID, and furthermore we must pipe an IResultObject#SMS_Collection object into Add-CMDeviceCollectionDirectMembershipRule to get it to work.  It’s wonky, it’s weird, and it’s always verbose too.

Like, it’s mega SUPER verbose (note that this is line thirty THOUSAND of the output)

But you’re allowed the be weird when you’re fast as hell!  This cmdlet is the Usain Bolt of CM Cmdlets.

Adding 10,000 device rules is five times faster than the old way, clocking in at 1:05!  And adding 30,000 rules took only  3:16!!

For comparison, the new cmdlet is a beast for big collection moves, as it completes the same operation in 6% of the time of the old cmdlet, a performance increase of 17 times!

Query Rules

I’m going to go on the record and say that I was wrong about Query rules.  When I asked on Twitter, you guys had some interesting feedback for me about my ideas of what to do with Query rules…
So, I decided to test for myself…and they were amazing!  My plan was to add a query rule containing a big IN WQL Statement with the resource IDs I wanted to include, like this:
select ResourceId, 
    Name, 
    SMSUniqueIdentifier, 
    ResourceDomainORWorkgroup, 
    SMS_R_System.Client 
   from SMS_R_System 
   where ResourceID in ('$IDArray')

and bundle them up in batches of 1k devices at a time. Here’s the code I used , edited for your viewing pleasure.  You will need to make sure your own WQL query is on one-line, SCCM doesn’t like a multi-line string:

$d = Get-CMDevice -CollectionID SMS00001 | select -First 10000 -Property Name,ResourceID
$d.Count
for ($i = 0; $i -lt $d.Count; $i += 1000)
{ 
    Write-Host "processing $i..$($i+1000) ..."
    $IDArray = $d[$i..($i+1000)]
    $IDArray = $IDArray.ResourceID -join "','"
    
    $query = "
    select SMS_R_System.ResourceId, 
        SMS_R_System.Name, 
        SMS_R_System.SMSUniqueIdentifier, 
        SMS_R_System.ResourceDomainORWorkgroup, 
        SMS_R_System.Client 
    from SMS_R_System 
    where ResourceID in ('$IDArray')
    "
    #Add Query rule built in 
    Add-CMDeviceCollectionQueryMembershipRule -CollectionID F0X0001C -RuleName "AddRule$(($i+1000)/ 1000)" `
    -QueryExpression $query
    write-host -NoNewline "Done!"
}

At first I thought I had a typo in my code!

This is in real-time.

The speed…AMAZING! Only seven seconds to apply the rules!

CollEval fired up a few seconds later, and interestingly it does take a longer time to crunch the Query rules than it did the Direct Rules, but we’re talking 10K devices added to a collection in under 20 seconds.

At this point, I knew 30K would be equally fast.


Wow.  Only 26 seconds to apply the rules, and a total crunch time of 45 seconds to calculate membership.  Just one minute…let’s see what happens if we…

Let’s just add every device using a query rule

At this point I just had to try it: all 115K machines in my testlab would be added with massive IN queries to really test performance.

A weird screen shot. The PowerShell line reflects the total time to run the command (2:18 for 115K rules), while the bottom half is the relevant lines from Collection Evaluation

Only 2 minutes, 18 seconds to apply the rules, and two minutes to run the query!  Incredible!   This is a huge improvement compared to adding devices with direct rules, in which case using CMPSLib took 1 hour, 15 minutes to add 115K rules.

Using AD Membership Queries

Using AD Group Membership queries is super super fast.  If your AD Replication is good and healthy.

If you begin to use AD Groups for membership in CM, keep in mind that if you make group changes at the periphery of your network, it will take some time to replicate from your remote site, to a global catalog, and then have to wait for CM to requery the Active Directory to see the change.
Unfortunately I don’t have a giant AD environment to play with Group Membership, but from what I’ve seen I would expect very good speed here too.  Sorry this section is lame.

Direct Membership Control using SQL

I’ve always kind of wondered what Collection Evaluation was doing under the covers.  We know that when you add rules to a collection, they’re processed in this order:

Image from Scott’s blog ‘Collection Evaluation Overview’

Which was covered in detail in this awesome blog post by Scott Breen [MSFT] titled ‘Collection Evaluation Overview’.  But what does CollEval do with this information?  Just keep it in memory?  Write it to a file?  E-mail it to DJam who is the furiously working Mechanical Turk inside the machine?  …Or does it store the information in the CM Database somewhere?

How it really works

In digging around under the covers, I spent a lot of time watching arcane log files and trying to make sense of strange views in SQL trying to uncover where certain info was stored.  I had to grant myself super admin rights, break all of the warranty labels and in the end, took the CM Database out to dinner and then dug around with my flashlight under the covers, looking for goodies.  And I found a totally unsupported method to directly manipulate collections with shocking speed.

How do I do it?

Well, a gentleman never tells. What I will share though is the impressive speed.  Using this method to directly control collection membership, I was able to place 30K devices in a collection in 0:00:01.  One second.

Code pixelated to protect you from yourself. Seriously, I’m not going to be the one arming you wild monkeys with razor bladed nunchuks

But at what cost?

Well, if we don’t actually add rules, but instead manipulate the collection via getting fresh with the database, we lost a lot.  We lose RBAC.  We don’t have include rules, we don’t have exclude rules, the collection membership would just be what we said it should be.  

Oh, and since we skipped CollEval, CollEval is going to have something to say about the weird ass stuff we’ve done to this poor innocent collection.  For instance, if we ever forget about the wonky, dark-magic joojoo we have performed on this poor collection and click ‘Update Membership’, CollEval will have its revenge.

CollEval Checked, found no rules, then deleted everyone

CM will helpfully look at the collection, look at its rules and say ‘WTF are you doing bro, are you drunk?’ and then delete everyone from the collection.  Not a member via a valid rule?  You’re not gonna stay in the collection.

I would not recommend using this approach. 

The speed of direct query rules is mindblowing enough, and the new CM cmdlets aren’t far behind them, so we have plenty of performance options.  Seriously, don’t explore this route; if you do, the air conditioner will catch on fire with spiders coming out of it.

Don’t do this to your CM. And if you DO, don’t ask MSFT for support.

In Conclusion

So, to summarize our data in a chart

Basically any method is much, much better than the old CM cmdlets!

If you’re considering your options outside of the old cmdlets, I’d recommend giving CMPSLib a try.  Lovingly written by Keith Garner, with help from yours truly, we believe this is a very resilient method of adding devices to a collection, without the wonky-ness of the new Add-DeviceDirectRule cmdlets kind of odd syntax.

Want the new ones?  It’s easy, just download the media for the tech preview, and use it to Install the CM Console on your machine.  The CM Console will give you the new cmdlets and they’ll work on an old environment, super easy!  Just be mindful of the syntax!

Of course, for true performance, if you’re looking to manage your collections from outside of CM, I would only recommend maintaining membership using query rules, it’s just too fast not to mention.

Let me know if I missed anything!


Excursion: Model View Controller Programming – Part I


The header graphic, titled - Excursion Model View Controllers. Subtitle: getting side-tracked in the land of dotnet. Pictures a single adventurer from far away, bundled up against the cold, trekking up the side of a snowy mountain

Well, it’s been a minute, hasn’t it?  This 🦊 needed a break after speaking at MMS and PSChatt! (both awesome events!  If you’re shopping for a big conference, I can’t recommend #MMSMOA enough!  And early bird pricing is open now!).

Since then at work I’ve had to dive deep and learn a whole new skill set.

Fortunately, I had a scuba tank and now that I’m back up for air, I’d love to share a bit of what I learned.

This is kind of different

It’s been a long term goal of mine to expand beyond my PowerShell capabilities and learn as much as I can of this ‘programmer’ stuff. I’d always wanted to learn C#, and my first deliverable was the ‘at least kind of working’ Client Faux (big updates to come).

Our goal was to make a cool website where end users could go and type in manually, or provide a CSV file of devices, and I’d use that to spin up new collections with deployments and then perform some automation on those devices within SCCM.  I want to talk through how I’m doing that, and the goal of this post should lay a foundation to answer the question of: what is a model view controller(mvc)?  Spoilers, MVCs are all around us!

So to recap our goal:  I needed to have a user give me input and a csv, then parse and slice it and show them the results to edit or tweak. That’s going to be our end game for this guide.
But before we talk about the technology…

But Stephen, are you qualified to teach me about this?

Uhhhh…maybe. I may not have all of the terminology down pat, and there might be a more efficient way of doing things than I have figured out.  But, ya know, I did just figure this out, plus I’m willing to share with you, so what else are you gonna do? 😁🦊

The technology stack

The goal was to host the site in IIS, with an ASP.Net Model View Controller and the powerful Entity Framework to handle my DB interactions. To throw some jargon, an ASP.net MVC with EF 6.

If I lost you… Don’t worry, the rest of the post will be talking through what an MVC is, with a simple demo. Once we’ve laid down the foundation, in the next post, we’ll talk through how I solved this master-detail scenario.  Wanna follow along?  I’d recommend you read the post first, but there are walkthrough steps at the bottom!

What’s a model view controller?

You’ve seen and been using MVCs forever (and it’s not Marvel vs Capcom (but on to that topic I love fighting games, and I promise that my Ken and Gambit team would whip the floor with you!)).

If we are brand new to web design and only know HTML and CSS, when it comes to making a website we take a simple approach and just make our site by hand. You have your About Page, your Contact Page, your Home page, not much to do, you know?

Now, imagine Amazon.com. There are simply GOBs and of GOBs of items on the site. Do we think that there are people who are spending all day long adding new items to the site? They could (and in the beginning they probably did) create their listings one at a time. But it’s super inefficient.

So, instead, sites like that–not sure what Amazon uses, though it is written in C++, probably– use what’s called a Model – View – Controller.

M is for Model, M-m-m-model

In an MVC, rather than spending bajillions of years writing pages that look super similar, instead you start with a database.

You fill your database up with all of the books you’re going to sell, like this:

A screenshot from SQL Server Management Studio, showing a query listing all of the books from a table in a Database. The books have titles like 'To kill a Foxingbird' and other riffs on popular titles with Foxes in the name. Fox Emoji

and then you write the conceptual model of a book one time. It could look something like this.

namespace MVCDemo.Models
{
    using System;
    using System.Collections.Generic;
    
    public partial class book
    {
        public short id { get; set; }
        public string title { get; set; }
        public string author { get; set; }
        public string format { get; set; }
        public Nullable<float> price { get; set; }
    }
}

That’s the M of MVC. (borrowing that line holehog from this walkthrough)

Now that we have modeled the data in our Database, our app will know how to access it (without having to write queries).

Views are basically webpage generators

Then, you write a View to render the content, which is in enlightened HTML files with the extension of CSHTML.

@model MVCDemo.Models.book

@{
    ViewBag.Title = "Details";
}

<h2>Details</h2>

<div>
    <h4>book</h4>
    <hr />
    <dl class="dl-horizontal">
        <dt>
            @Html.DisplayNameFor(model => model.title)
        </dt>
        <dd>
            @Html.DisplayFor(model => model.title)
        </dd>
        <dt>
            @Html.DisplayNameFor(model => model.author)
        </dt>
        <dd>
            @Html.DisplayFor(model => model.author)
        </dd>
        <dt>
            @Html.DisplayNameFor(model => model.format)
        </dt>
        <dd>
            @Html.DisplayFor(model => model.format)
        </dd>
        <dt>
            @Html.DisplayNameFor(model => model.price)
        </dt>
        <dd>
            @Html.DisplayFor(model => model.price)
        </dd>
    </dl>
</div>

@Html.ActionLink("Edit", "Edit", new { id = Model.id }) |
@Html.ActionLink("Back to List", "Index")

Remember the columns from the database? The view has those same columns here too! But…how does the view know >WHICH< book is the one we’re referencing here in this view?

Controllers – piping data into the View and other stuff

Finally, the last piece of the MVC, the controller

A picture of an old video game console controller, the Sega Genesis controller with its famous 'three button' layout of A - B - C.

No not that type of controller!

The controller is a piece of code that runs on our server. When a user tries to access something in the dB (either by clicking around in the website or by exploring around by typing in urls) the controller is the piece that controls whether the user can do what they’re wanting to do, and also defines how we relay it back to the user.

So, the controller is the most ‘code like’ piece of this whole pie.  Here’s what a controller looks like.

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.Entity;
using System.Linq;
using System.Net;
using System.Web;
using System.Web.Mvc;
using CollectionMGR.Models;

namespace CollectionMGR.Controllers
{
    public class booksController : Controller
    {
        private CollectionMGREntities db = new CollectionMGREntities();

        // GET: books
        public ActionResult Index()
        {
            return View(db.books.ToList());
        }
    }
}

This controller above states that when the user navigates to my web site, localhost, and goes to the books endpoint, we have an action called Index which shows us all of the available books.

Now, I’ve added an additional action called Details, which will render that view I showed earlier.

       // GET: books/Details/5
        public ActionResult Details(short? id)
        {
            

            if (id == null)
            {
                return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
            }
            book book = db.books.Find(id);
            if (book == null)
            {
                return HttpNotFound();
            }
            return View(book);
        }

The end result?  Well, here’s the index view.

a Screen shot of a web browser nagivated to localhost/books/Index, which lists an html table showing the items from the database

And if someone clicks the Details button next to one of those books, the Details view I showed earlier is called to render the whole thing!

But what about the Entity Framework?

Did you notice that there was basically zero querying done, though we could still retrieve and save results in our database?  That flexibility and ease of use that we’re enjoying here all comes to us courtesy of the Entity Framework.  Here’s a great site to dive deep into what it does, but it should suffice to say that the EF abstracts away the need for queries to pull data for a views, allowing us to instead retrieve results like this.

In a traditional data connection framework, you’d often see code like this, note the native SQL query baked right into the page.


<tbody>
    <?php
        $connect = mysql_connect("localhost", "root", "root");
        if (!$connect) { die(mysql_error()); }
        mysql_select_db("apploymentdevs");
        $results = mysql_query("SELECT * FROM demo LIMIT 10");
        while ($row = mysql_fetch_array($results)) { ?>
            <tr>
                <td><?php echo $row['Id'] ?></td>
                <td><?php echo $row['Name'] ?></td>
            </tr>
    <?php } ?>
</tbody>

Compare that to this approach.  First, the user requests the Index page, where we paginate and return 10 results at a time; here’s the code to retrieve the first 10 of those results.

private MVCDemoEntities db = new MVCDemoEntities();

public ActionResult Index()
{
    // Entity Framework translates Take(10) into the query for us; no hand-written SQL needed
    return View(db.books.Take(10).ToList());
}

The connection to the db is handled for us via the Entity Framework, which generates the models for us when we first connect our app to our DB (covered below in the ‘How to play along?’ section). So, to distill down a bit further, the Entity Framework gives us a lot of tools to keep us from having to be SQL experts in addition to C# experts.

Recapping

In this post, we covered some of the basics of what an MVC is, and I showed an example of how using one can result in some massive time savings, especially when coupled with the Entity Framework. In the next post, we’ll drill further into the MVC as I cover how to bundle requests with a parent request, using a programming model called the Master-Detail Scenario!

Wanna play along with the rest of this blog series?

I’ve got you covered 🙂

Follow the walkthrough here to get started.  

Quickie – Join video files with PowerShell and FFMPEG


Caption Text says 'Join Video Files quickly, gluing stuff with PowerShell and ffMpeg', overlaid on an arts and craft scene of glues, papers, scissors and various harvest herbs

While I’m working on some longer posts, I thought I’d share a quick snippet I came up with this weekend as I was backing up a number of old DVDs of family movies.

FFMPeg has the awesome ability to join a number of video files together for you, but the syntax can be kind of strange.  Once I learned the syntax, I sought to make sure I never had to do it again, and created this cmdlet.

Usage notes

In this basic version, it will join every file in a directory, giving you Output.mkv.  Be sure your files in the directory are sequentially ordered as well, to control their position.

Ensure that FFMpeg’s binaries are available in your Path variable as well.

Later on, I may add the ability to provide which specific files you want to join, if desired 🙂
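
The embedded snippet doesn’t show up in this archive, so here’s a minimal sketch of the idea; Join-VideoFile and its parameters are illustrative names of mine, not necessarily what the original cmdlet uses:

# A minimal sketch, assuming ffmpeg.exe is already available in your Path
Function Join-VideoFile {
    param(
        # Folder holding the sequentially named video files
        [string]$Path = (Get-Location).Path,
        [string]$OutputName = 'Output.mkv'
    )

    # Build the list file that ffmpeg's concat demuxer expects: one "file '<path>'" entry per line
    $listFile = Join-Path $Path 'filelist.txt'
    Get-ChildItem -Path $Path -File |
        Where-Object { $_.Extension -in '.mkv','.mp4','.avi','.vob' } |
        Sort-Object Name |
        ForEach-Object { "file '$($_.FullName)'" } |
        Set-Content -Path $listFile -Encoding Ascii

    # Stream-copy (no re-encode) everything in the list into one output file
    ffmpeg -f concat -safe 0 -i $listFile -c copy (Join-Path $Path $OutputName)

    Remove-Item $listFile
}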

Enjoy 🙂

 

Life after Write-Debug


Hey y’all.  I’ve been getting verrrry deep into the world of Asp.net Model View Controller and working on some big updates to ClientFaux, but I saw this tweet and it spoke to me:

Why?  Because until recently, I was notorious for leaving Write-Debug statements everywhere.  I mean, just take a look at my local git folder.

A PowerShell console window running the following command. Dir c:\git -recurse | select-string 'write-debug' | measure This shows that there are over 150 uses of this command in my PowerShell modules. Uh, probably too many!
I *wasn’t* expecting it to be *this* bad. I’m so, so sorry.

My code was just littered with these after practically every logical operation…just in case I needed to pause my code there at some point in the future.  Actually, if someone looked at my old code, every Verbose or Debug statement was basically a place where I got stuck while writing that cmdlet or script.  I mean, using the tools is not wrong, but it always felt like there should be better ways to do it.

Recently, I have learned of a much better way and I want to share it with everybody.

Why not use Write-Debug?

Write-Debug is wrong and if you use it you should feel bad

I’m just kidding!  You know, to be honest, something really gets under my skin about those super preachy posts like you always find on medium that say things like ‘You’re using strings wrong’, or “You’re all morons for not using WINS” or something snarky like that.

It’s like, I might have agreed with them or found the info useful, but the delivery is so irksome that I am forced to wage war against them by means of a passive aggressive campaign of refusing to like their Tweets any more as my retribution.

That being said, here’s why I think we should avoid Write-Debug.  It ain’t wrong, but you might like the alternative better.

Pester will annoy you

If you’re using Pester, you might like to use -CodeCoverage to help you identify which logical units of your code may not have test coverage.  Well, Pester will view each use of Write-Debug as a separate command and will prompt you in your code coverage reports to write a test for each.  A relatively simple function like this one:


Function My-ShoddyFactorialFunction {
    param($baseNumber)

    Write-Debug "Starting with base number of $baseNumber"

    $temp = $baseNumber

    ForEach($i in ($baseNumber-1)..1){

        Write-Debug "multiplying $temp by $($i)"

        $temp = $temp * $i

    }

    Write-Debug "Ending with a final result of $temp"

    return $temp

}

When this short script is run through CodeCoverage, Pester will call out each Write-Debug as a separate entity that needs to be tested.  We both know that there’s no reason to write a Pester test for something like this, but if you work with sticklers for pristine CodeCoverage reports then you’ll have to look out for this.
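
For reference, a coverage run like that is kicked off along these lines (Pester v4 syntax; the file names are made up for this example):

# Point -CodeCoverage at the script under test and Pester reports any commands no test ever hit
Invoke-Pester -Script .\My-ShoddyFactorialFunction.Tests.ps1 -CodeCoverage .\My-ShoddyFactorialFunction.ps1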

Not guaranteed to be present on every PowerShell host

Did you know that not every PowerShell host supports Write-Debug?  Since it is an interactive cmdlet, consoles that operate headlessly don’t support it.  This means that Azure Automation for one does not support the cmdlet, so it will basically be ignored, at best.

As developers of PowerShell scripts and tools, we’re accustomed to having the fully fledged PowerShell console available to us, but our code may not always execute in the same type of environment.

For instance, once I was working on a project for a customer with very long PowerShell Run Script steps embedded into System Center Orchestrator.  I wrote some functions for them, one of which involved creating and deleting ServiceNow Tickets.

I was very big at that time on creating ‘Full and Proper’ advanced cmdlets and “Doing it the right way™”, so I went totally overboard with ConfirmImpact and $PSCmdlet.ShouldProcess() usage.  The code worked great in my local IDE, so we deployed it to production and our runbooks started failing.

Why?  Well the host in which Orchestrator runs PowerShell Scripts runs headless, and when it tried to run my cmdlets, it threw this error.

Exception calling "Invoke" with "0" argument(s): 
"A command that prompts the user failed because the host program or the 
command type does not support user interaction. The host was attempting to request confirmation with the following 
message : some error123

This lesson taught me that I shouldn’t count on all input streams and forms of user interaction being available wherever my code runs.

Not a great user experience anyway

Back to our first function, imagine if I wanted to debug the value of the output right before we exited.  To do so, look at how many times I have to hit ‘Continue’!

This sucks.  And it really sucks when you’re doing code reviews.

Write-Debug make Peer Reviews super suck

If you’re fortunate enough to work on a team of Powershell slingers, you almost definitely have (and if you don’t, start on Tuesday!) a repository to check in and review code.

And if you’re doing this the right way, no one has access to push untested code until it goes through review in the form of a pull request.

So what happens when you need to test ‘why’ something happens in your coworker’s code? If you were me, you would have to litter your colleague’s (hopefully) clean code with tons of debug statements, which you then have to remember to roll back or you get annoying messages from git when you try to change branches again.

I was changing my peers’ code while reviewing it.  It was bad and I feel bad.

So what should I do instead?

It turns out that there has been an answer to this problem just hiding in my consoles for years and I’ve mentally ignored them this whole time.

If you’ve never used a breakpoint before, prepare to be amazed!  Whether you use the ISE, Visual Studio, or VS Code, breakpoints are a great tool that let you set an ephemeral debug point without editing the original file!

Breakpoints allow for ephemeral debugging without editing the original file!

They essentially function just the same as a Write-Debug statement, but you can add and remove them without editing the original code, and they are deeply integrated into our favorite editors to unlock all kinds of goodness.

How to use them

If you’re in the PowerShell ISE (obligatory WHAT YEAR IS THIS.png) , simply highlight a line on which you’d love to pause your code, then hit F9. Then run the code and PowerShell will automatically stop in a debug command line.

Hit ‘F9’ to set the breakpoint, then run the code.

The code will execute like it normally would until it reaches the breakpoint line at which point…

You get a Write-Debug style command prompt but never had to change the source code!

The same goes for Visual Studio Code, which is even better, as it includes a point-in-time listing of all variable values as well!

Depicts the Visual Studio Code application, paused at a breakpoint in a PowerShell script. The UI is broken into two columns, with the script on the left hand column with a command prompt beneath it. On the right column, there is a list of all variables and their current values.

It doesn’t stop here!  You can also hover over variables to see their value in real time! 

This was a huge game changer for me, as I used to type the names of variables over and over and over into the shell to see their current values.  Now, I just hover, like you see below.  Note the little boxes which appear over the cursor!

Shows a paused VS Code instance, where my cursor is moving above various variables, above which their current values are revealed! Awesome!

But the awesomeness doesn’t stop there!

When you’re paused at a breakpoint, you can also proceed through your code line by line.  The same keys work in either VS Code, VS or ISE.

Key        | Function
F5         | Continue running when paused
F9         | Set a breakpoint on this line
F10        | Step Over – run this line and stop
F11        | Step Into – go INTO the functions called on this line
Shift+F11  | Step Out – move your paused breakpoint out to the calling function

These commands will change your debugging life.

In the demo below, I show how Step-Over works, which runs the current line but doesn’t jump into the definition of any functions within it, like Step-Into does.

Now, let’s go back to our initial example and set a breakpoint to test the value on that last line.

See how easy that was?  This is why I believe that once you learn of the power of ultra instinct–er, once you learn about Breakpoints, you’ll simply never need Write-Debug again!

Security camera footage of me using Breakpoints for the first time

Still confused about the difference between Step Over, Step Into and Step Out?  I don’t blame you; check out this great answer from StackOverflow which does a good job shining light on the distinction.
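
One last aside: if you ever find yourself in a plain console with no editor handy, the same idea is exposed as built-in cmdlets too; a quick sketch (the script path and line number here are just placeholders):

# Set a line breakpoint without touching the script itself
Set-PSBreakpoint -Script .\MyScript.ps1 -Line 12

# Run the script normally; PowerShell drops into the [DBG] prompt when line 12 is hit
.\MyScript.ps1

# List and remove breakpoints when you're done
Get-PSBreakpoint
Get-PSBreakpoint | Remove-PSBreakpoint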

Debugging
DebugInVsCode

Quickie: ConvertTo-PSCustomObject


Do you ever need to quickly hop between PowerShell tabs in VScode, or have data you want to move from one session to another?

Sure, you could output your data into a .CSV file, a .JSon file, or one of hundreds of other options.  But sometimes it’s nice to just paste right into a new window and get up and running again.  For that, I wrote this small little cmdlet.

Function ConvertTo-PSCustomObject{
    Param($InputObject)
    $out = "[PSCustomObject]@{`n"
    # Grab both native Properties and NoteProperties (e.g. from Select-Object or Import-Csv output)
    $Properties = $InputObject | Get-Member | Where-Object { $_.MemberType -in 'Property','NoteProperty' }
    ForEach ($prop in $Properties){
        $name = $prop.Name
        if ([String]::IsNullOrEmpty($InputObject.$name)){
            $value = $null
        }
        else {
            $value = $InputObject.$name
        }

        $out += "`t$name = '$value'`n"
    }

    $out += "}"
    $out
}

And the usage of it:

ConvertTo-PSCustomObject
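
For example, an illustrative run (your object will vary):

# Grab a small object and turn it into paste-able PSCustomObject text
$svc = Get-Service -Name 'BITS' | Select-Object Name, Status, StartType
ConvertTo-PSCustomObject -InputObject $svc

# The output looks something like this, ready to paste into another session:
# [PSCustomObject]@{
#     Name = 'BITS'
#     StartType = 'Manual'
#     Status = 'Running'
# }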

ClientFaux 2.0 – Completely re-written, faster than ever


As mentioned on the stage at MMSMOA, ClientFaux 2.0 is now available.  Completely re-written as a WPF GUI with automated certificate generation, multi-threading, and all the bells and whistles.

Oh, and Hardware inventory now works!

Download it and give it a try now!  To use, install it on a desktop/laptop/VM which is on a network segment which can reach your CM server.

http://bit.ly/ClientFaux

Launch ClientFaux and click to the Configure CM tab and provide your CM Server FQDN and three letter Site code.

Then click to the Device Naming page and provide your desired naming pattern and starting and ending numbers.

You can also increase the number of threads (I’ve tested up to 12 threads and seven is a good happy medium for resource usage, but feel free to go crazy).

Then to see it in action…click to the ‘Ready’ page and hit ‘Ready!’ and away we go!

 

The Big Warning

This is designed for DEMO or TestLab CM instances.  I do not recommend running it against your Production CM instance as it can create thousands and thousands of CM clients if left running for a few hours!  This can be hard to filter out of data for reporting, dashboards and the like.

ClientFaux2.0Demo