
SCOM: Quickly find Update Rollup Version


It's SO tedious to track down the Update Rollup version of SCOM, as the SCOM console still doesn't surface this information (only major releases!), so you end up looking through the registry or digging into files, checking file versions manually.

I wrote this little script in PowerShell. Simply CD into the drive where SCOM is installed, and it will track down the SCOM install directory for you, then pull out the Update Rollup version and return it to the screen.
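If you just want the gist of the approach, here's a minimal sketch, assuming the usual SCOM Setup registry key (verify the key and paths on your own management server):

#A minimal sketch: read SCOM's install directory from its Setup registry key,
#then check the file versions on the server binaries, which Update Rollups increment
$setup = Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Setup'
Get-ChildItem $setup.InstallDirectory -Filter *.dll |
    Select-Object -First 5 Name, @{n='FileVersion';e={$_.VersionInfo.FileVersion}}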

Download



Coding for speed


I must say that I learned a lot about speed, and how coding structure matters when you're going for the gold, as I reviewed the entries from the Hadoop PowerShell challenge. The winners are at the end of this post, so zip down there to see if you won!

I'll use this post to cover some of what we learned from the entries. Here are our top three tips for making your PowerShell scripts run just that much faster!

When searching through files, don't use Get-Content

As it turns out, Select-String (PowerShell's text-searching cmdlet) is capable of reading a file on its own; there's no need to gc it first. It's also MUCH slimmer in memory, and has speed for days. Look at the performance difference in a common scenario, searching the first ten files in a folder with Get-Content piped into Select-String, versus Select-String alone.

#Get-Content | Select-String example
 dir $pgnfiles | select -first 10 | get-content | Select-String "Result"

#Select-String Only example
 dir $pgnfiles | select -first 10 | Select-String "Result"

Testing GC | Select-String...3108.5527 MS
Testing Select-String Only...99.1534   MS

Using Select-String alone is a 31x speed increase! This is pretty much a no-brainer. If you need to look inside of files, definitely dump your Get-Content steps. Credit goes to Chris Warwick for this find.

Be careful with $collection += $object

We see this structure a LOT in PowerShell:

#init my collection
$collection = @()

ForEach ($file in $pgnfiles) {

  $collection += $file | Select-String "Result"

 }

 $collection

This structure sets up a 'master list', gets a glob of objects, iterates through them, does some stuff, and adds a new entry to the master list. At the end, it displays the list.

Why shouldn’t I do this?

PowerShell is built on .NET, and some .NET types, including our beloved string and array, are immutable. This means that PowerShell can't simply tack your entry onto the end of $collection, like you'd think.


No, instead PowerShell has to make a new variable equal to the whole of the old one, add our new entry to the end, and then throw away the old variable. This has almost no impact on small datasets, but look at the difference when we run through 100k GUIDs here!

Write-Output "testing ArrayList..."

(Measure-Command -Expression {
    $guid = New-Object System.Collections.ArrayList
    1..100000 | % {
        $guid.Add([guid]::NewGuid().guid) | Out-Null
    }
}).TotalMilliseconds

Write-Output "testing `$collection+=..."

(Measure-Command -Expression {
    $guid = @()
    1..100000 | % {
        $guid += [guid]::NewGuid().guid
    }
}).TotalMilliseconds

testing ArrayList...    7784.5875  MS
testing $collection+=...465156.249 MS

Sixty times faster!!! The really crazy part: you can watch PowerShell's RAM usage jump all over the place as it doubles up the variable in memory, commits it, and then runs garbage collection. Watch how the RAM keeps doubling, then halving!

I didn't actually think it would be this dramatic!

 

How do I avoid the $collection += structure in my code?

ArrayList will be your new best friend. In one project, we were migrating customers from two different remote desktop systems into one, with some complex PowerShell code. There was a section of the code which built a list of all of their files, omitting certain ones. When we swapped out $collection += for an ArrayList, we dropped our execution time from six minutes to only 20 seconds! A huge performance boost from this one tip!

An ArrayList is a bit different from a regular array; here's how you use it. First you have to make a new ArrayList (which developers call instantiating an instance of a class; I felt so cool typing that), like so:

$collection = New-Object System.Collections.ArrayList

Next, we iterate through each object; check out how we add them to our collection. We call the ArrayList's .Add() method instead of using the += syntax. Finally, at the end, we get the whole list back out by using return, or just putting the variable name in again.

ForEach ($file in $pgnfiles) {
  $result = $file | Select-String "Result"
  $collection.Add($result)

 }
 return $collection

You might notice when you run this that you see something like this:

Ohhh, so many numbers

ArrayList is a bit weird. When you add an entry to it, ArrayList responds with the index position of the new item you added. In some use case somewhere, this might be helpful, but not really to us. So, we just pipe our .Add() statement to Out-Null, like so:

$collection.Add($result) | Out-Null

Some people put [void] at the front of the line instead; I try to avoid it, as it seems confusing and very 'developery' to me.
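For reference, here are three equivalent ways to suppress that index output; pick whichever reads best to you:

#Three ways to discard the index that .Add() returns
$collection.Add($result) | Out-Null    #pipe it away
[void]$collection.Add($result)         #cast the result to void
$null = $collection.Add($result)       #assign it to $null (often the quickest)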

The fastest way to read a file: StreamReader

I was simply astounded to see the tremendous speed difference between using PowerShell’s Get-Content cmdlet versus the incredibly fast StreamReader.

Here's why Get-Content can be a bit slow. When you're running Get-Content or Select-String, PowerShell reads the whole file into memory at once. It parses it and emits an object for each line in the file, sending it on down the pipeline for processing.

This is VERY SLOW on big files.  If you’d like to know a bit more, read Don’s great post on Get-Content here, or Keith’s write-up here.

When we're working with large files, or lots of small files, we have a better option, and that is the StreamReader from .NET. It IS fundamentally different in how it presents the content of the file, so here's a comparison.

#Working with Get-Content

#Read our file into File
$file = Get-Content $fullname

#Step through each line
foreach ($line in $file){
    #Do something with our line here
    #ex: (the backtick escapes the [ so -like doesn't treat it as a wildcard set)
    if($line -like "`[Re*")
       {
       $results[$line]+=1
       }
}

And now, with StreamReader

#Same concept but with StreamReader

#Setup a streamreader to process the file
$file = New-Object System.IO.StreamReader -ArgumentList $Fullname

:loop while ($true )
{
    #Read this line
    $line = $file.ReadLine()
    if ($line -eq $null)
    {
        #If the line was $null, we're at the end of the file, let's break
        $file.close()
        break loop
    }
    #Do something with our line here
    if($line.StartsWith('[Re'))
        {
        $results[$line]+=1
        }

}

So, now that you’ve seen how it works, how much faster and better is it?

Speed results

The numbers speak for themselves

Method         Time
Get-Content    3562 MS
StreamReader    133 MS

StreamReader is 26 times faster!

Man, I wish someone would make a PowerShell snippet for StreamReader

Me too!  So here you go.  Load this into the ISE and you’re set.

$snippet = @{
    Title = 'StreamReader Snippet'
    Description = 'Use this to quickly have a working StreamReader'
    Text = @'
$fullname = #FilePathHere
begin
    {
        $results = @{}
    }

    process
    {
        $file = New-Object System.IO.StreamReader -ArgumentList $Fullname

        :loop while ($true)
        {
            $line = $file.ReadLine()
            if ($line -eq $null)
            {
                $file.close()
                break loop
            }
            if($line.StartsWith('[Re'))
            {
                #do something with the line here
                $results[$line]+=1
            }
        }
    }
    end
    {
        return $results
    }
'@
}
New-IseSnippet @snippet

This syntax comes to us by way of /u/evetsleep, /u/Vortex100 and Kevin Marquette, from Reddit's /r/PowerShell!

Other ways to speed up your code

I know I said three methods, but I wanted to give a little extra.

Runspaces are crazy fast – Boe Prox turned in an awesome example of working with runspaces, here.  If you'd like to read a bit more, check out his full write-up guide here. This guide should be considered REQUIRED reading if speed is your game. Amazing stuff, and incredibly fast; much better than using PowerShell Jobs.

Taking out your own trash – This cool tip comes to us from Kevin Marquette.  If PowerShell has some monster objects in memory, or you just want to clean things up, you can call the .NET garbage collector to take out your trash, like so:

[GC]::Collect()
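A sketch of the typical pattern: drop your reference to the big object first, so the collector actually has something to reclaim:

#Build a monster object, release our reference to it, then collect
$hugeArray = 1..10000000
#...work with $hugeArray...
Remove-Variable hugeArray
[GC]::Collect()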

True speed: going native – The fastest of the fast approaches used native C# code, which PowerShell supports via Add-Type. Using this, you gain a whole slew (that's a technical term) of new .NET goodness to play with. For examples of this technique, check out what Tore, Øyvind and Mathias did.
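If you've never embedded C# in PowerShell before, here's a minimal sketch of the Add-Type pattern (the class, method, and file names here are mine, purely for illustration):

#Compile a C# helper on the fly, then call it like any other .NET type
Add-Type -TypeDefinition @'
public static class FastCounter
{
    public static int CountMatches(string[] lines, string prefix)
    {
        int count = 0;
        foreach (var line in lines)
        {
            if (line.StartsWith(prefix)) { count++; }
        }
        return count;
    }
}
'@

[FastCounter]::CountMatches((Get-Content .\game.pgn), '[Re')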

Can PowerShell beat Linux, or Hadoop for that matter?

From the original post that started this whole thing, Adam Drake’s Can command line tools be faster than your Hadoop cluster?

[using Amazon Web Services hosting…] with 7 x c1.medium machine[s] in the cluster took 26 minutes…processing data at ~ 1.14MB/sec

All of these entrants can proudly say that their code DID beat the Hadoop cluster: Boe Prox, Craig Duff, Martin Pugh, /u/evetsleep, /u/Vortex100 and Kevin Marquette, Irwin Strachan, Flynn Bundy, David Kuehn, and /u/LogicalDiagram from Reddit, and @IisResetme!  All eleven achieved at least 10.76 MB/sec.  Their code all completed in less than six minutes, much faster than the 26 minutes of the mighty seven-node Hadoop cluster!

But can PowerShell beat Linux?

When I saw that Adam Drake, a master of the Linux command line and Bash tools, was able to process all of the results in only 11 seconds, I knew this was a tall order.  We gave it our all, guys; there's no shame in…BEATING that time!

Amazingly, our two Speed Demons, Tore Groneng and Øyvind Kallstad, working in conjunction with Mathias Jensen, turned in blazing fast times of eight seconds each!  To be specific, Øyvind's time was 8,778 MS, while Tore beat that by roughly another 250 MS.   This represents a data throughput of 411.75 MB/s!  This is close to the maximum speed of my all-SSD RAID-0, so they REALLY turned in quite a result!

360 times faster than the Hadoop cluster. Astounding!

Winners!

I’m now pleased to announce the winners of the Hadoop contest.  I was so impressed with the entries that I decided to pick a bonus fourth winner.

Speed King Winner – This one goes to Tore Groneng.  He worked closely with Mathias Jensen and turned out an incredible 8-second total execution.  For comparison, this is nearly a 200x speed increase over the results of the Hadoop cluster from our original challenge.  He should be proud.

A close runner-up was Øyvind Kallstad, with a very honorable time of 8778 MS.

Most Best Practice Award – This one goes to Boe Prox, with a textbook perfect entry, including object creation, runspaces, and just plain pretty code.

Regex God – This award goes to Craig Duff, who blew my socks off with his impressive Regex skills!

One-liner Champion – This award was well earned by Flynn Bundy, who managed to turn out a very respectable time of two minutes, and did it all in a one-liner!  His code ALMOST fits in a single tweet, in fact!  Only 216 characters!

If your name is mentioned here, send me a DM and we’ll work out getting you your hard-earned stickers:)

Name          Time (ms)  Winner                    Link
Tore Groneng  8525       Speed King!               https://gist.github.com/torgro/4b8aa80ad5b9b2da351b#file-get-chessscore-ps1
Boe Prox      28274      Most Best Practice Award  https://gist.github.com/proxb/eba9b262e1dcb593ec94
Craig Duff    39813      Regex God Award           https://gist.github.com/duffwv/eaf16d733fdb00e4d6e8#file-beatinghadoop-ps1
Flynn Bundy   119774     One-liner Champion        https://gist.github.com/bundyfx/1ef0455eb9bcbcc2d627

Thank you to everyone who entered.  The leaderboards have been updated with your times, and I'll add your throughput when I get the chance this week!


Building Better PowerShell Dashboards


First off, YUUGE props to Flynn Bundy for shining a light on the possibilities with his post Making DSC Beautiful, and to @Neeco of HTML5Up.com for these gorgeous HTML5 and CSS templates.

If you check out HTML5up.com, there are a ton of absolutely beautiful templates, for free! (Well, you have to leave a link to the site; unless you pay $20, in which case you can edit it to your heart's content.)


Some of them REALLY lend themselves well to a dashboard system for consumption of data.

…you know, PowerShell makes an excellent data collection and processing system.

It even has  native HTML capabilities, as we’ve covered previously in our post: Using ConvertTo-HTML and CSS to create useful web reports from PowerShell.  If you’re lost and don’t even know where to start, begin here.  I’ll bet we could make some REALLY cool looking dashboards using PowerShell and Neeco’s templates!

Let’s make a cool PowerShell Dashboard

So, I'll start by finding a template that I like.  I chose the gorgeous Phantom, which is also the top one from the list.  Now, you might be asking yourself "FoxDeploy, did you even look at all of the templates first?" to which I would respond: SURE.

Let’s take a look at Phantom.   It’s got a nice set of fonts and a good layout, with a big title glob of text, then a smaller description below it.  It’s followed by a big element or DIV called Tiles, with colored squares inside of it, called articles.

Breaking down the Phantom Template

Let’s take a look into the code and see how this is represented.


A few things jump out at me here.  Looking back at the image of the template itself, I see the first three squares/cards/cubes are red, blue, and green.  Going back to the code, I don't see the colors listed there, but I DO see a style, a different one for each.   It looks like the color of the tile is controlled by the class (style1, style2, and so on) in its declaration, like this:

<article class="style1">

If you see a property like class= or id= within an HTML element, that's a good clue that the Cascading Style Sheet will do some special processing on it when it's displayed to the user. (Cascading means you can have a base sheet for the site, then special sub-sheets for specific pages, and overlap them all in a precise, cascading order.)

What’s CSS?

If CSS is totally new to you: it's a great concept that allows us to pull the design and colors out of our HTML webpages.  Instead of specifying what font to use for this section of the page and what color to make the background, we pull all of that style gunk out and leave behind just the meat and potatoes (the content, that is) of our site in HTML.  All style goes into the Cascading Style Sheet, the CSS file.

As we saw in the screen shot, each of the squares had a different color, and looking at the code, the only real difference between each of the squares in code was that a different style was listed. So, we’ll look into the CSS files and see what it says for coloring.

For this and all web design work, I like to use Visual Studio Code, by the incredible David Wilson [MSFT].  Especially for CSS, it makes finding color assignments super easy, since it depicts the color in a little box next to it; you know, in case you don't say things like "Wife, your eyes are the most beautiful shade of #7eccfb".

The colors are down near line 2700.  (Hit Control+G to bring up the ‘Go to line’ box, and type in the number.)


So we can see that style1 is red, style2 is blue, style3 is green, etc.  Now I know what I want to do…
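In other words, the CSS pairs each styleN class with a background color; conceptually, something like this (the hex values here are illustrative, not the template's exact colors):

/* Illustrative only; check the template's CSS for the real hex values */
.tiles article.style1 { background-color: #e04b5a; } /* red   */
.tiles article.style2 { background-color: #5480f1; } /* blue  */
.tiles article.style3 { background-color: #39c088; } /* green */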

Time to Code

I’m going to make a dashboard to show the status of my Hyper-V VMs.

As I tend to do, first I'll begin with a working PowerShell sample.  I'll run Get-VM to see all of my VMs.  If the state of the VM is Running, I'll use the green (style3) indicator.  If it's Off, I'll use the red (style1), and if it's anything else, I'll use style2.  This would include Critical or some other weird state.

$VMS = get-vm | sort State

ForEach ($VM in $VMS){
    $Name=$vm.Name

    if ($vm.State -eq 'Off'){
        $style = 'style1'
        }
        elseif($vm.state -eq 'Running'){
        $style = "style3"
        }
        else{
        #VM is haunted or something else
        $style = "style2"
        }

    #Now we know what state to pick
}

I know what I need to set for each square, but I don't yet know how to add my squares to the actual index.html of this page.

And now, to do something unholy to the HTML

I use an unorthodox approach that totally works well.  Once we understand how the HTML in index.html is rendering the page, what we’ll do here will make perfect sense.

Starting at the top of the document, let’s visualize what each chunk of code represents when parsed by a browser…


So, that’s the top part.  After that, beginning with the <section class="tiles"> tag, we have a big repeating structure which gives us all of the squares/tiles.


Finally, beginning with the closing </section> tag, we have the bottom of the page, with its contact forms and all of that.


To do this the easy way, let’s just cut it into three files!

I'll take the core file here (which is index.html) and break it into two chunks.  Everything from the top of the file, up to and including the line <section class="tiles">, goes into head.html.  Now, start at the bottom of the file and take the last line all the way up to and including the line </section>, and save that as tail.html.
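Reading those two chunks back into PowerShell is then a one-liner each; a quick sketch, assuming the file names above:

#Read our two static chunks back in, each as a single string
$head = Get-Content .\head.html -Raw
$tail = Get-Content .\tail.html -Raw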

Now we need to make our cards

Structure of a card/tile/square

Let’s look into the structure of one of these tiles for a moment.


I can see how this should look.  I’ve already got my code to say what style to use, so when I’m making a card for each VM, I’ll set the style to change the color of the square for On/Off/Other.

Next, instead of ‘Magna’ within the Header2 tags, I want my VM Name.

If the machine is turned on, I'd also like to see its CPU usage and RAM pressure.  Finally, when I hover over the tile, a little section of text appears…I think that would be a cool place to list where the machine's VHD files are, and its uptime.

I'll add another if{} scriptblock, and within this one, I'll test to see if the VM is online. If it is, I'm going to recast its $Name variable to add a new line after the name, with RAM and CPU.  I reuse $Name so that, no matter whether the machine is on or off, I can have the same block of code make a square for me.


#if the VM is on, don't just show it's name, but it's RAM and CPU usage too
    if ($VM.State -eq 'Running'){

        $Name="$($VM.Name)

         RAM: $($VM.MemoryAssigned /1mb)

         CPU: $($VM.CPUUsage)"
    }

I also want to have a little description of the VM, like where its VHD files live, etc. So I'll set the value of $description like this:

$description= @"
        Currently $($VM.Status.ToLower()) with a

        state of $($VM.State)

        It was created on $($VM.CreationTime)

        Its files are found in $($VM.Path)
"@

We've got all the bits we need to make a card; we can now just drop in the HTML for a card in a here-string, and put the variables we've made here in place of the name and description.

$tile = @"
<article class="$style">
            <span class="image">
                <img src="images/pic01.jpg" alt="" />
            </span>
            <a href="generic.html">
<h2>$Name</h2>
<div class="content">

$($description)</div>
</a>
        </article>

"@

And now, repeat after me…

String concatenation isn’t ALWAYS evil.

Because that’s totally what we’re about to do. We broke the file into three bits. Now it’s time to put it back together. To end the for-each scriptblock for each card, we’ll add the current card to $main.

Then we build our completed file by adding $head + $main + $tail, and dump that into an HTML file. Easy peasy!

$main += $tile
#EndOfForEach
}

$html = $head + $main + $tail

$html > .\VMReport.html

Final Touches

Now you’ll probably want to open up head.html and replace the text there with your branding. You’ll also want to add in an image, most likely.

Adding the time

To add the current time the report was generated, add a string we can replace when importing the file. I added the string %4 to line 4 in head.html, like so:

<div class="inner">
<header>
<h1>FoxDeploy Health Monitoring Dashboard</h1>
At a glance monitoring of status of VMs in Hyper-V updated at %4
</header>

This gives me an easy anchor to replace when I import the file, so I can -replace '%4' with the current time, like this:

$head = (Get-Content .\head.html) -replace '%4',(get-date).DateTime
Auto refreshing the page

I'd like to make the page automatically reload every 20 seconds, so add this line to your head.html page.

<meta http-equiv="refresh" content="20; URL=path/to/your/report.html">
Run forever

It would also be nice to have this automatically run until the end of time, so I'll just add an open-ended for loop to the script, and then add a Start-Sleep timeout at the end. This way, the report will regenerate once every 15 seconds or so, and the browser will auto-refresh every 20 seconds, so the two should stay mostly in sync.

#Add to first line of the script
For(;;){

#Last line of script
Start-Sleep -Seconds 15}

And the finished product:

Next Steps

I’ve not completed this part, but a KILLER next step would be to make these buttons work when you click them.  Currently, they all link to generic.html, but instead, you could use this same process to create a page for each VM and name it VMname.html.  Then when you build the card, add the appropriate link and bam, you have a fully functional VM dashboard.

If you go this route, consider adding a Windows Event view, or deeper VM statistics.  You could really go hog-wild here.  Another cool idea is to make use of the images provided in this template, and provide a background image for the tiles.

I’ve got you this far, time for you to make it your own.

I’m just scrolling till I see the word ‘Download’

Here you go, buddy:) Code Download


SCCM Reporting – Can’t save a report


Ever run into this issue where you can’t save a report you’re editing in the report builder?


"Failed to save report (report server URL). The sortExpression Expression for the grouping refers to the field 'ProductName'. Report item expressions can only refer to fields within the current dataset scope."

This is a REALLY irritating one.  It happens when you edit a copy of one of the in-box SCCM reports and change the columns being returned.  Without our knowing it, a lot of settings are customized to allow us to click on the top of each column and sort the rows based on our preferences.

When we change the columns returned in a report, we need to also update the header textbox for each column.

To fix this, right-click up here, go to Interactive Sorting, click each <<Expr>> box, then choose 'Text Box Properties'.


Then go to Interactive Sorting.  You might notice that the value listed in the box is no longer relevant to the rows you're returning in your report.  (This happens to me ALL the time; I always find a good starting report, then save a copy and edit it. It's so hard to get the background looking pretty!)

Click the drop-down box and change the name to a valid column to fix this issue

Especially if you edited a built-in report, you’ll need to do this for the header of each column.  That means each of these guys:


With this done, you should now be able to save the report again.


SCCM – USMT on Windows 10 ‘too many files for sharing’


You might see this occasionally in your environment when trying to run a USMT capture on Windows 10:


USMT Returned error (0x00000024), 80070024 too many files opened for sharing.

This is actually a bit of a red herring error, and we should open scanstate.log, as recommended in the SMSTS log file.  At that point, the error should jump out at you:


Only Windows XP, Windows Vista, Windows 7 and Windows 8 are supported as sources.

You’ll see this error message if you’re using the RTM version of the Windows 10 ADK, version number 10.0.26624.0.

Solution

Upgrade to the 1511 release of the ADK and use it.  As it turns out, Windows 10 was not a supported source operating system for the USMT files in the RTM bundle of the ADK.

You should simply upgrade to the newest release of the ADK, version 1511, then update the source files for your USMT Task Sequences.

Note: don’t install the 1511 version of the ADK without installing the hotfix which came out in December.  Also, make sure you update your boot.wims afterwards too.

Once you do that, you’ll see a much happier log file for ScanState.

Happier SCCM log, doesn't immediately fail like the first one.


Planning for SCCM Current Branch


I write about PowerShell and automation tools a LOT.  However, I pay my bills as a Consultant by designing, installing and supporting System Center products; most often SCCM (ConfigMgr).

For this reason, I’ve been scouring the web and Tweeting my thumbs off recently to scrape together what information I can on the new version of SCCM, to be prepared when my customers ask questions about it.

This is mostly an info dump of what we know and what we suspect about how Current Branch will play out for SCCM.  This plan is currently in place for a number of my customers, including some big enterprise customers. It's how I'm doing it, but I will admit that I don't have any secret info here (nothing NDA-breaking here).

That being said, if you think I'm wrong, call me out (but be ready to back it up with a source).  I plan to revisit this article to keep it up to date, because we honestly don't know yet what some parts of this are going to look like.

SCCM as a Service = Current Branch

Some people refer to it as SCCM as a Service, but don’t mistake this for Intune.  If you’re sitting on an SCCM 2012 environment, you may wonder what this is and what it means for you.

It's the SCCM we love, except it's also getting a TON of engineering effort and love from Microsoft right now.  We're getting these new mini-releases semi-regularly, a few times a year, and we're getting a ton of new quality-of-life and feature updates for the SCCM admin.  Redmond is listening.

You can call it simply SCCM Current Branch.  No more long names like System Center 2012 R2 ConfigMgr w/ SP1 or other tomfoolery; we have actual easy-to-pronounce names now.  Two digits for the year, two for the month, just the way Ubuntu Linux has done it for years and years.

What we’ve got so far

The first release of SCCM Current Branch was SCCM 1511, meaning 2015 November was its ship date.  Since then, we’ve had another release of Current Branch, 1602.  From that pattern and the new monthly Tech Previews, it does look like we’ll be getting CB releases a few times a year, maybe quarterly or a bit longer than that.  These are real production releases of SCCM, with killer features like automatic console upgrade, multiple deployments from one ADR, and tons of other great new abilities.  Don’t confuse them for the monthly releases though, which are called Tech Previews.

What’s Tech Preview?

In addition to the Current Branch releases, we've got another new thing: Tech Preview builds, which come out pretty much monthly.  Those aren't meant to be used in production, and have the words 'TECH PREVIEW' all over everything.  Don't use them in prod; you're gonna have a bad time.  Maybe.

They are awesome, though, because you can see huge changes and improvements in SCCM from month to month.  If you really want to stay on top of things, have SCCM Tech Preview up and running in your test lab.  Just know that you can never convert a Tech Preview build over for regular use; they have a limited timeline and then are done.  TP is for experimenting with builds and showing off new features, but you won't get support if you try to run your company off of them.

Make no mistake, the Current Branch releases are ready for production. Over 8,600 companies have deployed it today, to more than 12MM endpoints.

How is Current Branch really different?

Previously with SCCM, it wasn’t uncommon to stand up an environment, install clients, build some DPs and then go about daily operations for months or years without ever applying SCCM updates to your servers.  If you encountered a problem with OSD or PXE, you might Google and find a hotfix and install that, but for the most part you could count on years and years of support on whatever broken janky version you happened to install.

Things are changing.  It's a new and leaner Microsoft with a focus on listening to user feedback and pushing builds out the door.  Microsoft shifting to the much more open and social UserVoice system for feedback is a testament to this, as is the huge success of the Windows Insider Program.  In order to support this change, like a lot of companies, MS is contracting the platforms they'll support at any given time.  This means that if you decide to go with a Current Branch build and call Microsoft for help three years later, don't be surprised if you're told to update your environment before they'll help.

With Current Branch, count on applying SCCM updates to your environment at least once a year to receive support.  SCCM product lead Aaron Czechowski has said this publicly on the ConfigMgr blog.

Here’s the super cool thing, with Current Branch, updating SCCM is really, really easy.

There's nothing extraneous to download; all updating happens right in the SCCM console via the new 'Updates and Servicing' node.

Just right click and hit install to apply the new Current Branch build

All you do is enable a new role in your environment and then refresh this view and you should see the update bits start trickling down (more on that process here).

It used to be a big pain to upgrade SCCM, needing to hit every primary, then all the clients, and run SQL actions as well.  Things are much easier now.  We already got the awesome SCCM client auto-upgrade feature in 2012 SP2, while 1602 added a brand new SCCM console auto-upgrade feature as well.  Super simple!  No more tracking down Admin Console users to push console upgrades anymore.  It now happens automatically when you launch the console after an upgrade.


What can I expect from support?

Now, with SCCM as a service, we still own the infrastructure locally, BUT we're agreeing that we will keep our SCCM more up to date than we might have in the past.  We get frequent updates and know that MS is listening to us to make ConfigMgr better; in exchange, we'll apply the updates every so often.

We've seen something similar with other System Center products: if you call for support for Orchestrator or SCOM and you're using the RTM bits, you're VERY likely to be told to apply at least some of the Update Rollups released in the last few years to see if they fix your issue.

If you really want to, you could install 1602 and sit on your butt for five years, but if you want an Engineer or PFE to look at your environment, they’re going to tell you to install the most recent or second most recent patch before they invest a lot of time on your system.

This push to update before getting support is such a common response that within my company we call it the two-back rule, because you need to be on either the current release, or at most two releases back.

I guess we can call it the ‘year-back’ rule now.

For Service Packs, this is nothing new.  Support for the initial release falls off a year after a Service Pack ships.  If SP2 comes out, SP1 is dropped after a year as well. Now we’re doing the same thing with all releases, which honestly is how it should have been all along.

SCCM Lifecycle

Beyond this, should we see regular quarterly releases of the Current Branch, it will only reinforce the notion of 'stay current or support yourself'.  This is actually a pretty common refrain from Redmond these days (Windows 10 upgrade prompts, anybody?)

Now that Current Branch is out, is SCCM 2012 still supported?

We have confirmation on this front, SCCM 2012 was released as a milestone product and has guaranteed Mainstream support through July 2017, with extended support until July 2022.  If you’re still using SCCM 2012 in 2022, you must be working for the government.

To help us keep all of these dates in mind, I made this graphic.


Key takeaways

  • SCCM 2012 and R2 are good through July 2017 Mainstream and 2022 extended support.  If a new Service Pack ships for them during this time, you have a year until you need to install it for support.
  • Current Branch releases can be used for a year, but must be upgraded to within the previous year if you want support.
  • You don't have to apply every Current Branch release; you can skip them.  SCCM does a really good job letting you hop from one release to another (as demonstrated in the Tech Preview releases).

I hope this was helpful.  Leave your feedback for me below.  If it’s years from now and this is really out of date, make sure to prod me and call me names too :p

 

 


Part V – Building Responsive PowerShell Apps with Progress bars



This post is part of the Learning GUI Toolmaking Series, here on FoxDeploy. Click the banner to return to the series jump page!


Where we left off

If you’ve followed this series, you should know how to make some really cool applications, using WPF for the front-end and PowerShell for the code-behind.

What will now probably happen is you'll make a cool app and go to show it off to someone with a three-letter title, and they'll do something you never imagined, like drag a CSV file into the text box…(true story).  Then this happens.


We don’t want this to happen.  We want our apps to stay responsive!

In this post we’ll be covering how to implement progress bars in your own application, and how to multi-thread your PowerShell applications so they don’t hang when background operations take place. The goal here is to ensure that our applications DON’T do this.


 

Do you even thread, bro?

Here’s why this is happening to us…

If we run all operations in the same thread, from rendering the UI to code-behind tasks like waiting for something slow to finish, eventually our app will get stuck in the coding tasks, and the UI freezes while we wait.  This is bad.

Windows will notice that we are not responding to the user's needs, and that we've been staying late at the office too often, and will put a nasty 'Not Responding' in the title bar. This is not to mention the passive-aggressive texts she will leave us!

If things don't improve, Windows will then gray out our whole application window to show the world what a bad boyfriend we are.

Should we still blunder ahead, ignoring the end user, Windows will publicly dump us, by displaying a ‘kill process’ dialog to the user.  Uh, I may have been transferring my emotions there a bit…

All of this makes our cool code look WAY less cool.

To keep this from happening and to make it easy, I’ve got a template available here which is pretty much plug-and-play for keeping your app responsive. And it has a progress bar too!

The full code is here PowerShell_GUI_template.ps1.  If you’d like the Visual Studio Solution to merge into your own project, that’s here.  Let’s work through what had to happen to support this.

 A little different, a lot the same

Starting at the top of the code, you'll see something neat in these first few lines: we're setting up a variable called $syncHash which allows us to interact with the separate threads of our app.

$Global:syncHash = [hashtable]::Synchronized(@{})
$newRunspace = [runspacefactory]::CreateRunspace()
$newRunspace.ApartmentState = "STA"
$newRunspace.ThreadOptions = "ReuseThread"
$newRunspace.Open()
$newRunspace.SessionStateProxy.SetVariable("syncHash",$syncHash)

After defining a synchronized variable, we then proceed to create a runspace for the first thread of our app.

  What’s a runspace?

This is a really good question.  A runspace is a stripped down instance of the PowerShell environment.  It basically tacks an additional thread onto your current PowerShell process, and away it goes.

It's similar to a PowerShell Job, but much, much quicker to spawn and execute.

However, where PSJobs are built in and have tools like Get-Job, nothing like that exists for runspaces. We have to do a bit of work to manage and control runspaces, as you'll see below.

Short version: a runspace is a super streamlined PowerShell tangent process with very quick spin up and spin down.  Great for scaling a wide task.
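To make that concrete, here's a minimal, self-contained sketch of spinning up a runspace by hand:

#Create a PowerShell instance (which gets its own runspace), hand it a scriptblock,
#and kick it off asynchronously
$ps     = [powershell]::Create().AddScript({ Start-Sleep -Seconds 2; Get-Date })
$handle = $ps.BeginInvoke()

#Our console stays free while that runs; collect the result when it's ready
$result = $ps.EndInvoke($handle)
$ps.Dispose()
$result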

 

So, back to the code: we begin by defining a variable, $syncHash, which will be synchronized from our local session to the runspace thread we're about to make.  We then describe $newRunSpace, which will compartmentalize and pop out the code for our app, letting it run on its own, away from our session.  This will let us keep using the PowerShell or ISE window while our UI is running.  This is a big change from the way we were doing things before, which would lock up the PowerShell window while a UI was being displayed.

If we collapse the rest of the code, we’ll see this.
 

The entire remainder of our code goes into this variable called $pscmd.  This big boy holds the whole script, and is the first thread which gets "popped out".

The code ends on line 171, triggering this runspace to launch off into its own world with BeginInvoke().  This allows our PowerShell window to be reused for other things, and puts the app in its own memory land, more or less.

Within the Runspace

Let’s look inside $pscmd to see what’s happening there.


 

Finally, something familiar!  Within $pscmd on lines 10-47, we begin with our XAML, laying out the UI.  Using this great tip from Boe, we have a new and nicer approach to scraping the XAML: search for everything with a name and mount it as a variable.

This time, instead of exposing the UI elements as $WPFControlName, we instead add them as members within $syncHash.  This means our Console can get to the values, and the UI can also reference them.  For example:

Even though the UI is running in its own thread, I can still interact with it using this $syncHash variable from the console

Thread Count: Two and climbing

Now we've got the UI in its own memory land and thread…and we're going to make another thread as well, for our code to execute within.  In this next block of code, we use a coding structure Boe laid out to help us work across the many runspaces that can get created here.  Note that this time, our synchronized variable is called $jobs.

This code structure sets up an additional runspace to do memory management for us.

For the most part, we can leave this as a 'black box'.  It is efficient and practical code which quietly runs for as long as our app is running.  This coding structure gets invoked and then watches for new runspaces being created.  When they are, it organizes and tracks them to make sure that we are memory-efficient and not sprawling threads all over the system.  I did not create this logic, by the way.  The heavy lifting has already been done for us, thanks to some excellent work by Joel Bennett and Boe Prox.

So we’re up to thread two.  Thread 1 contains all of our code, Thread 2 is within that and manages the other runspaces and jobs we’ll be doing.

Now, things should start to look a little more familiar as we finally see an event listener:

unnamed (2)

 

We're finally interacting with the UI again.  On line 85, we register an event handler using the Add_Click() method and embed a scriptblock.  Within the button, we've got another runspace!

This multi-threading is key to making our app stay responsive like a good boyfriend, and to keeping the app from hanging.

Updating the Progress Bar

When the button is clicked, we’re going to run the code in its own thread.  This is important, because the UI will still be rendered in its own thread, so if there is slowness off in ‘buttonland’, we don’t care, the UI will still stay fresh and responsive.

Now, this introduces a bit of a complication here.  Since we’ve got the UI components in their own thread, we can’t just reach over to them like we did in the previous example.  Imagine if we had a variable called $WPFTextBox.  Previously, we’d change the $WPFTextBox.Text member to change the text of the box.

However, if we try that now, we can see that we get an error because of a different owner.

differentowner
Exception setting "Text": The calling thread cannot access this object because a different thread owns it.

We actually created this problem for ourselves by pushing the UI into its own memory space. Have no fear, Boe is once again to the rescue here.  He created a function called Update-Window, which makes it easy to reach across threads.  (link)

The key to this structure is its usage of the System.Windows.Threading.Dispatcher class.  This nifty little guy appears when a threaded UI is created, and then sits waiting for update requests via its Invoke() method.  Simply provide the name of a control you'd like to change, and the updated value.


Function Update-Window {
        Param (
            $Control,
            $Property,
            $Value,
            [switch]$AppendContent
        )

        # This is kind of a hack, there may be a better way to do this
        If ($Property -eq "Close") {
            $syncHash.Window.Dispatcher.invoke([action]{$syncHash.Window.Close()},"Normal")
            Return
        }

        # This updates the control based on the parameters passed to the function
        $syncHash.$Control.Dispatcher.Invoke([action]{
            # This bit is only really meaningful for the TextBox control, which might be useful for logging progress steps
            If ($PSBoundParameters['AppendContent']) {
                $syncHash.$Control.AppendText($Value)
            } Else {
                $syncHash.$Control.$Property = $Value
            }
        }, "Normal")
    }

We’re defining this function within the button click’s runspace, since that is where we’ll be reaching back to the form to update values. When I load this function from within the console, look what I can do!
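For example, with the demo form up, calls like these (using control names from the template) poke the live UI straight from the console:

#Reach across threads to update the running UI
Update-Window -Control TextBox -Property Text -Value 'Hello from the console!'
Update-Window -Control ProgressBar -Property Value -Value 50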


 

With all of these tools in place, it is now very easy to update the progress bar as we move through our logic.  In my case, I read a big file, sleep for a bit to simulate a slow operation, then update a text box, and away it goes.

If you’re looking to drag and drop some logic into your code, this is where you should put all of your slow operations.

Update-Window -Control StarttextBlock -Property ForeGround -Value White
start-sleep -Milliseconds 850
$x += 1..15000000 #intentionally slow operation
update-window -Control ProgressBar -Property Value -Value 25

update-window -Control TextBox -property text -value "Loaded File..." -AppendContent
Update-Window -Control ProcesstextBlock -Property ForeGround -Value White
start-sleep -Milliseconds 850
update-window -Control ProgressBar -Property Value -Value 50

Update-Window -Control FiltertextBlock -Property ForeGround -Value White
start-sleep -Milliseconds 500
update-window -Control ProgressBar -Property Value -Value 75

Update-Window -Control DonetextBlock -Property ForeGround -Value White
start-sleep -Milliseconds 200
update-window -Control ProgressBar -Property Value -Value 100

Sources

That's all there is to it! The hard part here was containing our app in separate threads, but hopefully with the template provided you can easily see where to drop your XAML, and how to make your application hum along swimmingly!

I could not have done this post without the many examples provided by Boe Prox on his blog:

Writing WPF Across Runspaces
PowerShell WPF Radio Buttons
Multi-runspace Event Handling
Asynchronous event handling in PowerShell

Additionally, I had help from Joel Bennett (JayKul) of HuddledMasses.org.

I learned a lot from reading over Micah Rairdon’s New-ProgressBar cmdlet from his blog, so check that out too.  Finally, Rhys W Edwards has a great cmdlet also on TechNet, with some more good demos if you’re looking for help or inspiration.

 


Use PowerShell to download video Streams



We live in an amazing world of on-demand video and always-available bandwidth, where people can count on full reception at all times on their devices.   If you want to watch cool videos from events or conferences, you can just load them up when you're on the road, with no issues, right?

Yeah right.

Streaming is cool and all, but there are times when it's nice to have videos saved locally, like the huge backlog of content from MMS and TechEd.  However, a lot of streaming services want you to view their videos only within the confines of their web page, normally within a signed-in session.

In this post, I’ll show you a few ways to download videos you’ll run across online, and how you can use PowerShell to download some of the REALLY tricky ones.

How to do this on most platforms

If I need to save a video from YouTube or other sites like it, I go to KeepVid, first and foremost.

Google isn't a fan of this site, as they want you loading up YouTube and watching ads whenever you watch a video, so they try to dissuade you from entering the site. They do this by displaying a scary warning page if you browse to the site from a Google search, but the site can be trusted, in my experience.

This message is FUD! It's safe to use!

This is an easy-to-use website which uses JavaScript to parse out the streaming behavior of a video, and then presents you with a link to download your video in many different resolutions.


This works for about 60 percent of sites on the web, but some use different streaming JavaScript platforms which try to obfuscate the video files.

How to manually save a video file using Chrome

If KeepVid doesn’t work, there is a way to do what it does manually.

I’ve been into Overwatch recently, and have been watching people play on Streamable.  Sometimes you see a really cool video and you want to save it,  like this one of this beast wiping out pretty much everyone in eight seconds.

Let’s fire up Chrome and hit F12 for the developer tools.  Click on the Network tab.


This will show us a waterfall view of elements on the page as they’re downloaded and being used.  We can even right click individual items to open them in a new tab.

Now, browse to the site with the video in question and click Play (if needed).  You need to trigger the video to begin playing for this to work.  Watch as all of the elements appear, and look for the one with the longest line.  If it's one giant long line, you've found an .mp4 or .ts file somewhere, which is the video we want to keep.


In this gif my mouse wouldn't appear, but I let the site load, hit Play, and then clicked on the longest line in the timeline view on top. I then right-clicked the item with the type 'Media'; from there you can grab the file URL or open a new tab to this URL.  Do that, and then you can save the video file.

This technique works for a LOT of the streaming videos on the web, and is especially good when your video won't download using KeepVid.

However, some sites use insidious methods to make it nearly impossible to save files. For them…

How to deal with the REALLY tricky ones

I have been all about learning Chef recently.  I see it as the evolution of what I do for a living, and I think in two or three years, I’ll be spending a lot of time in its kitchen.  So I’ve been consuming learning materials like a fiend.  I found this great video on demand session by Steven Murawski.


And I signed up for the presentation.  I watched the talk, but was sad to see no link to download the video (which I would need, with no reception later that day). So I used the same Developer Tools trick I showed above, and hopped into the tab, only to see this.


See how there are many different video files with an ascending number structure?  This site uses the JW Player, similar to the platform used by Vimeo.  This is a clever streaming application, because it breaks files apart into 10-second snippets which it stitches together at playback.

Rather than one file to download, there are actually hundreds of them, so we'll need to find an easy way to download them all.  I used the Chrome developer trick to download one chunk, popped one of these snippets into VLC, and found that each snip was ~10 seconds long.  The video was an hour, so I'd need to download roughly 360 files.

Obviously I wasn’t about to do this by hand.

Figuring out the naming convention

If we look at the file URL, we see the video files seem to have this format:


If we could use some scripting tool to reproduce this naming convention, we could write a short script to keep downloading the chunks until we get an error.

Recreating the unique URLs isn't too hard. We know that every file will begin with video_1464285826-2_, then a five-digit number, followed by .ts. We can test the first five chunks of the file with a simple 1..5. Put them all together to get:

foreach($i in 1..5) {"video_1464285826-2_$i.ts"}

Finally, to put the number in the right format, we just need to use $i.ToString("00000"), which will render a 1 as 00001, for instance. Now to test in the console:
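Put the padding and the loop together, and the console test looks like this:

foreach($i in 1..5) {"video_1464285826-2_$($i.ToString('00000')).ts"}

video_1464285826-2_00001.ts
video_1464285826-2_00002.ts
video_1464285826-2_00003.ts
video_1464285826-2_00004.ts
video_1464285826-2_00005.ts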


Downloading the files

We can use PowerShell’s Invoke-WebRequest cmdlet to download a file.  Simply hand it the -URI you want to download, and specify an output path.

To use this, pick the destination for the file on line 1, then on line 2, replace the value with the base URL of your video file. (If the file is http://www.foxdeploy.com/videos/demo1.mp4, then the base URL would be http://www.foxdeploy.com/videos/.)

$outdir = "c:\temp\VOD"
$baseUrl = "http://someserver.com/asset/video" #no trailing slash; the loop below adds one
cd $outdir
$i = 50
do {
 $url = "$baseUrl/video_1464285826-2_$($i.ToString("00000")).ts"
 Write-Host "downloading file $($i.ToString("00000"))..." -nonewline
 try { Invoke-WebRequest $url -OutFile "$outdir\$($i.ToString("00000")).ts" -PassThru -ErrorAction Stop | Tee-Object -Variable request | Out-Null}
 catch{
 write-warning 'File not found or other error'

 break
 }
 write-host "[OK]"
 Start-Sleep -Seconds 2
 $i++
 }
until ($request.StatusCode -ne 200)

After dropping in the right base URL and specifying your file naming convention, hit F5 and you should see the following.


Joining the files back together

At this point we’ve got loads of files, but we need to combine or concatenate them.

This is possible through VLC, but VLC will introduce timestamp errors (fast-forward won't work) if you use it. It's better to join them with FFmpeg.

To join the files, you'll need FFmpeg.  Install it, then run it once from the Start Menu (which adds FFmpeg to your PATH environment variable; we need this later!).

Important! Open a new PowerShell prompt and try to launch ffmpeg

If it doesn’t work, copy ffmpeg into your C:\windows\system32 folder.
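A quick way to check from PowerShell:

#Returns the command info if ffmpeg is resolvable from your PATH, nothing if it isn't
Get-Command ffmpeg -ErrorAction SilentlyContinue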

Assuming you need to merge a bunch of video files into one, just browse to the directory where you saved your files, and then run the following code.  Replace $source on line 2 with the path to the source files (and the right extension), then replace $output on line 5 with the desired file name.

#replace with the location containing files to merge
$source = "c:\temp\videos\*.ts"

#destination file
$output = "$home\Video\output1.ts"

#this looks weird, but FFMpeg wants its input files in a pipe-separated list; very weird coming from PowerShell!
$files = (Get-ChildItem $source | select -expand Name) -join '|'

#execute
ffmpeg -i "concat:$files" -c copy $output

Accepting Challenges

Have another bulk file download/management task you need to tackle with PowerShell?  Leave me a message and I’ll help you figure it out.



Cloning VMs in Hyper-V

It's a common enough scenario: build one template machine and then mirror it to make a test lab. You'd think this would be a built-in feature of Hyper-V, but it's not.
Luckily, it's not too hard to do once you know how, and I'll guide you through the tough parts.

Overall process

We’ll be following these steps to get clones of our master VM.
  • Create a source/master VM and install all common software and features on it
  • Prepare it for imaging using sysprep
  • Shutdown the source VM and remove it from Hyper-V
  • Create differencing disks using the Source VM’s VHD as the parent disk
  • Create new VMs, using the newly created differencing disk
Create a source VM

To begin, create a new VM and name its VHD something like Master or Template.  We'll be building this one as the source for our VMs, and will eventually have to shut it down and never turn it back on again.  If we accidentally delete its VHD, or start it up again, we can make changes to it which will break all of our clones.

So make sure you give it a name that will remind you to not delete this guy!


Install Windows and whatever common apps you’ll want your source machine to use, and when you’ve got it to the point that you’re ready to copy it out…

Sysprep our VM

In our scenario here, we’ve built a source image and want to put it on other VMs.  Imagine if we wanted to push it down to multiple different laptops and desktops, however.  In that case, we’d need to ensure that all Hyper-V specific drivers and configurations are removed.  We also need Windows to run through the new user Out of Box Experience (OOBE), when Windows detects hardware and installs the right drivers, etc.

In the Windows world, particularly if machines are in an Active Directory domain, you need to ensure that each machine has a globally unique identifier called a Security Identifier, or SID.  This SID is created by Windows automatically during the OOBE process.  If you try joining two machines with the same SID to an AD domain, you'll get an error and it won't be allowed, as it's a potential security risk.


To avoid this, and because it’s a best practice, we’re gonna sysprep this badboy.

Also, I should note that there's no going back.  Once we sysprep this machine, it will shut down and we're done with it.  If we turn it back on, we're 'unsealing' the image and will need to sysprep again.

How to sysprep a machine

Once all of the software is installed, launch an administrative command prompt and run C:\Windows\System32\Sysprep\sysprep.exe, then select 'Enter System Out-of-Box Experience (OOBE)' and check Generalize.  Under Shutdown Options, choose 'Shutdown'.


When this completes, your VM will shut down.

Shutdown and remove

At this point, remove the source VM from Hyper-V.  This will leave the files on disk, but delete the VM configuration.  You could leave the VM in place; just remember to never boot it again.  If you boot the parent VM, it will break the chain of differencing.

Create differencing disks & create new VMs

You could do this by hand in the console, or you could just run this PowerShell code.  Change $srcVHDPath on line 2 to point to your parent VHD.

Change $newVHDPath on line 5 to point to where you want the new disk to go.  This will create a new differencing VHD, based off of the parent disk.  This is awesome because only the changes to our image get stored in the differencing disk, which lets us scale up to a LOT of VMs with a small, small amount of disk space.

Finally, change -Name "NewName" on line 8 to the name of the VM you'd like to create.

#Path to our source VHD
$srcVHDPath = "D:\Virtual Hard Disks\Master.vhdx"

#Path to create new VHDs
$newVHDPath = "D:\Virtual Hard Disks\ChildVM.vhdx"
New-VHD -Differencing -Path $newVHDPath -ParentPath $srcVHDPath

New-vm -Name "NewName" -MemoryStartupBytes 2048MB -VHDPath $newVHDPath

That’s all folks!
If you wanted to create five VMs, you’d just run this:
#Path to our source VHD
$srcVHDPath = "D:\Virtual Hard Disks\Master.vhdx"

ForEach ($number in 1..5) {
    #Path for this clone's new differencing VHD
    $newVHDPath = "D:\Virtual Hard Disks\ChildVM0$number.vhdx"
    New-VHD -Differencing -Path $newVHDPath -ParentPath $srcVHDPath

    New-VM -Name "ChildVM0$number" -MemoryStartupBytes 2048MB -VHDPath $newVHDPath
}
FiveVmsinFiveSecs
Let me know if this was helpful to you, and feel free to hit me up with any questions! :)

SCCM 1511 Upgrade Hangs Fix


Recently for a customer, we ran into an issue in which the SCCM 1511 upgrade was hanging at the following screen.

SCCM1511Hangs
Backing up files for upgrade

If we open the SCCM install log file on the primary site, found at C:\ConfigMgrSetup.log, we will see the following message:

SCCM1511log
Notifying Site Component Manager of Site Shutdown

This step should only take a few minutes to complete.  If you’ve waited a while (20 minutes, in our case), then go ahead and help SCCM out.

It’s trying to kill the SMS Component Manager service and the SMS Executive service.  If you’ve got a complex environment, it can take a long time to complete this step.  Go ahead and stop the services manually using Task Manager.

If this doesn’t work (it didn’t work for me; the services hung at ‘stopping’), you can use PowerShell to kill the processes instead.

From Task Manager, look at the process IDs for the SMS Executive and SMS Site Component Manager services, and then run

ohno5

Stop-Process -Id <SMSExecPID>, <SiteComponentMgrPID> -Force

And your install should proceed with no issues!
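If you’d rather not hunt through Task Manager for the PIDs at all, a sketch like this can look them up and kill them for you.  It assumes the default service names of SMS_EXECUTIVE and SMS_SITE_COMPONENT_MANAGER:

#Find the process IDs behind the stuck services, then stop them forcefully
Get-CimInstance Win32_Service -Filter "Name='SMS_EXECUTIVE' OR Name='SMS_SITE_COMPONENT_MANAGER'" |
    ForEach-Object { Stop-Process -Id $_.ProcessId -Force }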


Thinking about stateless applications


GOINGSTATELESS (1)


When I first looked into AWS and Azure, the notion of scaling out an application was COMPLETELY blowing my mind.  I didn’t get it, at all.  Like, for real, how would that even work?  A server without a persistent disk?

This short post is not going to tell you precisely how to do devops, or even give you pointers on how to build scaling apps in Azure and AWS.  No, instead, I’ll share an interesting conversation I had on reddit recently, and how I tried to explain the notion of stateless applications to someone with questions.

The initial question

q1
How is Docker different from Vagrant?

My reply

q2

Their follow-up

q3
Could you please provide an example of a stateless environment?

AWS is a great example of how you could setup a stateless application.

It’s easy to configure an application with a load balancer. We can use the load balancer to gauge how many people are trying to hit our site at a given time. If traffic exceeds the capacity of one host, we can tell our web service to add another host to share the load.

These new workers are just here to help us with traffic and keep our app responsive and fast. They will probably be instructed to pull down the newest source code on first boot, and be configured not to save any files locally. Instead, they’ll probably get a shared drive, pooled among all of the other workers.

Since they’re not saving files locally, we really don’t care about the host machine. As long as users have completed their session, it can die at any point. This is what it means to be stateless.

The workers make their mark on the world by committing permanent changes to a DB or shared drive.

So, new worker bees come online as needed. They don’t need to be permanently online though, and don’t need to preserve their history, so in that sense they are stateless. After the load drops, the unneeded little workers save their changes, and then go to sleep until needed again in the future.

Actually, they’re deleted, but I always feel sad thinking about my workers dying or being killed, so I have to think about it in different terms.

Just my take on how I think of designing and deploying a stateless application. What do you think?  Did I get it wrong?


SCCM 1602 – Unable to upgrade client solved


This was a bit tricky!  We completed an SCCM upgrade for one customer from SCCM 1511 to 1602, and made use of the nice pre-production client validation feature.

This allows you to specify a collection of test systems to receive the new SCCM client, for you to validate in your environment.

After a few days of validation, we were ready to pull the trigger and upgrade everyone. This is done under Administration \ Cloud Services \ Updates and Servicing \ Client Update Options.  However, when we tried to do this, it was grayed out!

sccm01

Root Cause

Before trying to upgrade the client, I thought we should un-check the pre-production Collection box in Hierarchy Settings.  This is done in Administration \ Sites\ Hierarchy Settings.

sccm02

Don’t do this!  If you uncheck this box, the SCCM UI will detect it and gray out the option; SCCM won’t display the UX we need to promote the SCCM client to production.

Fix

Make sure that you check the Pre-production client box.  If this isn’t checked, SCCM doesn’t know to show you the UI for upgrading the client across production!

sccm03

Once this is done, you can go to Updates and Servicing, and click Client Update Options.

sccm04

Complete this UI and SCCM will automatically uncheck the pre-production client for you as well.  Thanks SCCM!

sccm05


SCCM 1602 Query – All Online Machines



With the advent of client activity indicators in SCCM 1606:

t01

We can now see which machines are online at a given time.  I love these green checkboxes.

I thought it would be cool to try to make a collection of only currently online machines.  So, into the query editor we go!  We’ll add a new query rule, and then use the wizard to add a new value.  This is all that you need to grab only the currently online systems.

t02

This collection works VERY well for Incremental Updates.  However, Scheduled Updates don’t make much sense here, since which machines are online changes minute to minute.
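If you’d like the same view from PowerShell instead of a collection, here’s a hedged sketch.  It assumes the ConfigurationManager module is loaded, you’re connected to your site drive, and that CNIsOnline is the property behind those green checks:

#List currently-online devices and when they last came online
Get-CMDevice | Where-Object { $_.CNIsOnline -eq $true } | Select-Object Name, CNLastOnlineTime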

And the end result:

t03
They’re all online!  So green!

 


SCCM 1602 nightmare upgrade


This week, we had a scary ConfigMgr 1602 upgrade.

Of course, as a consultant you have to be cool like a fighter pilot in the face of adversity, as crying is frowned upon by customers when they see your hourly rate.  So when everything falls over, and there are spiders coming out of the air conditioner, you say ‘hmm, that’s strange’ and then whip out your laptop to begin opening log files like a fiend.

It was a day like any other

Before the upgrade, I ran through a practice run on my test lab domain, to try to prepare myself. We then used Kubisys to mirror our production SCCM and ran /TestDbUpgrade. All good.

However, during the install we saw it hang for a long time trying to stop the SCCM services.

Note: We saw this before with this same instance of SCCM when we upgraded to 1511; the install froze for an hour trying to stop the services.  At that time, we manually stopped the SMS Executive and SMS Site Component Manager services and the install proceeded.

So when the install froze again, we gave it ten minutes before manually stopping the SMS Executive service.  The install proceeded normally and all looked fine in the logs, until we tried to open the console.

ohno01
Configuration Manager cannot connect to the site

When I see errors like this, I immediately think SMS Provider.

What’s the SMS Provider?
Good question!  While we tend to think SQL when we think SCCM, in reality ConfigMgr really stores a lot of information in the WMI repository on the Primary sites and the CAS.  Additionally, WMI plays a role in how data is stored in the SQL Database for ConfigMgr as well.

The SMS Provider is critical for allowing this interaction between the SCCM Console, WMI and SQL.  If you don’t have any working SMS Providers you can’t use the ConfigMgr console!

 

So we knew the SMS Provider (which does a bunch of WMI stuff) likely couldn’t be reached, so I opened up the primary site’s SMS Provider log, \primary\SMS_SiteCode\logs\SMSProv.log, and check out this nasty looking message!

 

ohno02
Relevant piece: Failed with error “WBEM_E_SHUTTING_DOWN”

Huh, that don’t look good.  Even though my install of SCCM completed, WMI was shutting down, so far as the SMS Provider was concerned?  Huh….

I wanted to see how WMI was doing, so I tried running a few WMI queries with PowerShell, and all errored out.  So I checked out Services.msc and sure enough, the WMI service was in the ‘stopping’ state.
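For the curious, these are the sort of smoke-test queries I mean.  On a healthy box, both of these come back instantly (a minimal sketch):

#If WMI is healthy, this returns immediately
Get-CimInstance Win32_OperatingSystem -ErrorAction Stop | Select-Object Caption, LastBootUpTime

#And the WMI service itself should report 'Running'
Get-Service Winmgmt | Select-Object Name, Status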

ohno03

I tried my normal tricks, like looking up the process for this service in task manager, then killing the process.

the ultimate trick up my sleeve, manually killing processes for services

But even this failed, giving me an error of ‘process not in valid state’, which was really weird.

We tried to reboot the machine as a final effort, but it hung forever at ‘shutting down’, probably because of the issue with WMI.  With WMI stuck in the ‘stopping’ state, SCCM could never commit its final write operations, so the services would never stop.

We had to go big…rebooting the VM via vSphere.

Seriously that’s all you did, reboot it?

Yeah, kind of an unsatisfying ending, I’ll admit, but everything was operating swimmingly after the reboot!


SCCM 1606 Cloud Proxy Guide


Configmgr in the cloud

SCCM 1606 brings a cool new feature to us, allowing us to manage machines even if they aren’t in the office. We can push Windows updates, deploy software, and also configure devices using SCCM Client Settings and DCM, even if a machine is half-way across the world!

This feature is called the Cloud Proxy service 🔗, and in this step-by-step guide I’ll tell you why it’s cool and how to do it!

What problems does this solve?

One of the biggest challenges to the SCCM Admin in managing machines is handling those systems which rarely are in the office.

Some types of staff–such as our sales team–might cover a region and never bring their machine to the home office.  If they don’t VPN either, and you don’t have DirectAccess set up, you might only see a machine once a year.  That leaves just a couple of hours a week or month to push app updates, ensure antivirus is current, and get those Windows updates installed.  Very challenging.  You know what it’s like; for some users you just have to send an e-mail like this:

Please come to the office at some point this year, I’ll even buy donuts!

Cloud Proxy in SCCM tp1606 allows us to configure our environment to use Azure and its global footprint to extend the functionality of our management point, distribution point and even software update point to the Web. It’s like a freaking aircraft carrier for ConfigMgr, it extends our sphere of influence to cover the entire globe!

To the veterans out there, this might sound similar to a current feature however…

How is this different from IBCM though?

SCCM has offered a feature called Internet based client management for a while now.  It does cover some of the same ground as Cloud Proxy, however the key difference between the two is that with IBCM, we are taking ownership of all of the work of securing access to our SCCM Infrastructure from the outside Web.

That means adding new servers into a DMZ and all of that network change request and security compliance meetings (BARF) which goes with a big, scary change.  In IBCM, we’ll also have clients hitting our SCCM Infrastructure from over the Web so we also need to worry about our upload speed and take steps to ensure that serving content out doesn’t impact the quality of service for our internal users too.

Compare this to the solution offered by Cloud Proxy, in which we allow Azure and Microsoft to shoulder the burden for some of those tasks, and only have to worry about our SCCM server having a route available to Azure instead.

Azure is not a free meal

However there are Azure costs for running this.

In my test lab, with a handful of machines behind the Cloud Proxy, it cost about $2 a day to run, purely to keep the Azure servers online.  Speaking entirely out of my butt, I wouldn’t expect the compute costs of managing machines to be too high, but I would factor in some fluff when presenting the costs to management.  If you’re doing something vastly different than me, you might be spending more like $5 a day to keep the lights on for the Azure Cloud Proxy Service.

Note: This is with two Azure hosts for redundancy, although you might decide to try to run with one host or maybe you need 10 depending on your risk tolerance.

You also will pay for data transfer out of Azure.  For the first 5 TB, the rate is $0.087 / GB, which is absurdly cheap.

To put this into perspective, let’s say you need to deploy Adobe Premiere (it’s about 1 GB) to your entire remote marketing team, all 1,000 of them (dear lord, can you imagine having to deal with 1,000 advertising prima donnas?  So much plaid and skinny jeans…).

If they’re all remote, that’s about a terabyte of traffic, so it’d cost roughly $87 to deploy that one app.  That ain’t free, but it’s a lot cheaper than the license for ANY app, and probably less than what the company would pay for one hour of your fully loaded cost to the employer 🔗.  Management will not care.

A more realistic scenario is Windows Updates or AV updates.  The average Forefront definitions package is 250 KB.  Three of those a day, 30 days a month, is roughly 22 MB per system.  For those same thousand computers, that’s only about 22 GB a month, or around $2 to ensure your machines always have up-to-date AV definitions delivered by your company.

These are estimates for generic situations, so read up on pricing 🔗 before you decide to commit.

Overview of the steps

We’ll go through the following steps in this order.  This diverges slightly from Microsoft’s documentation 🔗 but I have found that the order presented here prevents some irritating rework which will VERY likely come up if you follow MS’s guide.

  • Come up with a name for our SCCM Cloud Proxy Service
  • Make a new cert template to use with the Cloud Proxy Service
  • Request the cert from the CAS /primary
  • Export the certificate twice, once as a .pfx and once as a .cer
  • Upload the cer as an authorized management cert in Azure
  • Setup the proxy service in SCCM
  • Configure roles to use the service
    • Optional : configure a DNS Record for the service
  • Begin managing clients wherever they are

Prerequisites

To get started, we’ll need a few things setup or readily available.

  • Know our Azure subscription ID
  • Have the ability to create new Certificate Templates (Enterprise Admin is the easiest way to get this, or request delegation otherwise)
  • Already have SCCM operating in HTTPS mode.  Follow this guide if you’ve not done that yet.   Microsoft🔗
  • Have SCCM 1606
Finding our Azure Subscription ID

To find your Azure Subscription ID, sign in to Azure, go to the Classic portal and then down to settings.  You’ll see your ID listed here as shown below.

16 subscription ID
The Subscription ID of ham ham ham probably won’t work for you.

 

Name our SCCM Cloud Proxy Service

While we’re still in Azure, we should come up with a good name for our Cloud Proxy Service.

Here’s why the name matters: the way this whole thing works is that–once configured–the next time a client requests a policy update, it will receive settings for using the Cloud Proxy Service as (effectively) an IBCM point, and will try to access the service at <servicename>.domain.com.

This needs to route to <serviceName>.cloudapp.net, which is Microsoft Azure’s root domain, used for almost all Azure-accessible machines and services.  This is true not just of ours, but of everyone in the world who uses Azure for websites, services and things like SCCM Cloud Proxy.

This means that the name of our ConfigMgr Cloud Proxy Service MUST be unique in the world.  If you don’t pick a unique name, you’ll get errors like this one later on in the process.

Unable to create service, the name already exists

To avoid this, let’s find a good name for our service using a built-in feature for Azure that will only show us valid addresses.   Still in the Azure Portal, click New, Compute \ Cloud Service \ Quick Create and then use the box which appears here to test out the name for your Cloud Service.

test the cloud service name
Every permutation of ‘cloud’, ‘SCCM’ and ‘Slow Moving Software’ I could think of was already taken

As we can see, SCCMCloud was already taken, but after enough permutation, I found a good one.

test the cloud service name 1
Rolls right off the tongue

Don’t create the service!  We just did this to make sure our name wasn’t taken yet!
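If you have the classic Azure PowerShell module installed, you can skip the portal clicking entirely.  Test-AzureName should return True when a name is already taken, so keep permuting until you get False (a quick sketch with my hypothetical name):

#True means someone already owns <name>.cloudapp.net
Test-AzureName -Service -Name "FoxDeploySCCMProxy"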

Write this stuff down, you’ve got both the name of the service, and our Azure Subscription. We’re ready to move on.

Make a new cert template to use with the Cloud Proxy Service

Since we’re opening this stuff up to the whole web using Azure, we are going to need some security and that means PKI certificates.  We’ll make a new Certificate Template, configure it just so and allow our SCCM Server which will host the Cloud Proxy Connector role to enroll in this cert.  Don’t worry, I’ll walk you through the whole process.

First, connect to a machine which has Certificate Authority with an account that has appropriate permissions.  Domain or Enterprise Admin will cut it. Launch the CA Console. Go down to Certificate Templates and choose Manage.

00Make a new cert

Scroll down to Web Server and choose duplicate.

01 Duplicate WebServer

If you’re prompted for Compatibility, always choose the oldest one.  Go with Server 2003 if it doesn’t default to that already.

On the General tab, it will default to the name of ‘Duplicate of WebServer’, which is garbage, so change the Template Display Name to something like ‘SCCM Cloud Certificate‘.

02 new cert

Next on the Request Handling tab, make sure to check the box for ‘Allow private key to be exported’ .  If you miss this one, you have to start over.

03 cert

Next, on the Security Tab, remove the check for Enroll for Enterprise Admins.  You can probably skip this step, but I’d do it anyway.

04 remove ent admins enroll perm cert

Next, click Add and specify a security group which contains your SCCM servers, and make sure they have at a minimum the Read and Enroll Permission.

05 add new group

That’s all the changes we have to make so hit OK and then close the Certificate Templates window.

Back in the Certificate Authority console, click Certificate Templates \ New \ Certificate Template to Issue.

06 issue this cert

Choose the cert template we just created, SCCM Cloud Certificate.  (or whatever you called it)

07 enableit

Request the cert from the CAS /primary

Now we’ve created a whole new type of Certificate and allowed our SCCM Servers to request it.  At this point, either GPupdate or reboot your SCCM Server which will host the Cloud Proxy Connector Role so it will update Workstation Group Policy.

On the SCCM Server to host the Cloud Proxy Connector Role, launch the MMC and add the Certificates Snap-in, for the Computer.

08 request cert

Now go to Personal \ Certificates \ All Tasks \ Request New Certificate

09 request cert

In this next window, you should see a fancy new cert available with the name we chose earlier, but it will say More information is required to enroll for this certificate.  Click that text.

10 almost there

In the Certificate Properties wizard which appears, on the General tab, enter the name of our SCCM Cloud Service.  Mine was FoxDeploySCCMProxy.foxdeploy.com, but yours is whatever you came up with in Azure.

correct cert name req

Once you’ve put your name in, hit OK and then Enroll.

12 yay

And now we should see our brand new certificate in the console here, issued to our cloud service.

confirm our cert

Export the certificate twice, once as a .pfx and once as a .cer

One of the core tenets of PKI is validating who you’re talking to, and only trusting those who are vetted by someone you trust.  We created this cert so that our machines will trust the Cloud Proxy service when they interact with it later, in lieu of our SCCM servers.  So now that we’ve requested this cert, we need to export it in two different formats and put those files in the right place.

On the SCCM Server, select the certificate for our Cloud Proxy Service and choose All Tasks \ Export.  

13 export the cert

On the first run through, choose Yes, Export the private key.

14 yep

When you export the certificate with the private key, you must secure it with a password so pick something good. Don’t forget this as you’ll be prompted for it in about five minutes!

15 best password

Put the certificate somewhere safe and then run through the wizard again.  This time choose ‘No, do not export the private key’ and choose the .cer file format (the default works fine).

two certs
Don’t lose the files. Make sure you have one in both .cer and .pfx

Now you should have two separate cert files, one with a .pfx and one in the .cer format.

Upload the cer as an authorized management cert in Azure

If you don’t want to constantly enter credentials for Azure, you can use management certificates instead, and that’s just what we’re going to do with the .cer file we just created.  Later on in this process, the SCCM wizard will use this same certificate file to authenticate itself against Azure, and then make all the changes we need for Cloud Proxy to work.

Log back into Azure \ Settings \ Management Certificates \ Upload

16 upload a cert

In the next page, browse out to the .cer file you created and plop her in there.  Then hit OK and you’re done.

upload

Setup the proxy service in SCCM

It only took 1,700 words, but we are ready to open the SCCM console.  We’re here!  Fire up the SCCM console (oh yeah, be sure you’re running the 1606 tech preview) and browse over to Administration \ Cloud Services \ Cloud Proxy Service, then choose ‘Create Cloud Proxy Service.’

17 admin cloud services cloud prox serivce

On the next page, paste in your Azure Subscription ID, and browse to the .pfx certificate we exported.

18 Setting up cloud proxy

Now, the most important page:

  • Service Name – the service name we tested earlier in Azure (so if you tested SCCMisCool.cloudapp.net, enter only SCCMisCool)
  • Description – ends up in Azure as the flavor text for this new Azure Cloud Service
  • Region – pick a geographical region which makes sense for your company
  • Instance Number – how many instances you want to run.  At this time there is no guidance on how many you should have, but two is the default
  • Certificate File – select the .pfx file one more time
  • Root Certificate File – this should probably say ‘management certificate’ instead; it’s the .cer file
  • Verify Client Certificate Revocation – you would know if you needed to do this, based on your organizational standards

1 actually signing up for the cloud service!

Alright you made it! Now verify everything looks cool in the summary page and hit Next.

2 summary

And we’re off.  You can monitor the install status by refreshing the SCCM console under Administration \ Cloud Services \ Cloud Proxy Service, or if you’re a real man, open up CloudMgr.log.  You should see nothing for a bit, and then ‘Starting to Deploy Service’.

3 seven seconds in heaven

After a few minutes you will see ‘Deployment instance status for service <ServiceName> is ReadyRole.’

You can also monitor this installation within Azure by clicking to Cloud Services and watching your new Cloud Proxy Service appear here.

6 building instances

 

6.1 Service is running 2
Elapsed time between pictures is roughly ten minutes

With this completed, we now have our SCCM proxy roles running in Azure.  The final step is to install the connector locally and then configure which roles we want to use the service.

Install the connector and configure roles to use the service

Back in the SCCM console, go to Administration \ Sites and Roles and choose to add a role to whichever SCCM server will talk to clients on the internet via Azure.

3.1 install the cloud proxy connector point role

In the next page, choose your Cloud Proxy Service from the drop down box. You can ignore the text about Manually installing the client cert, as we’ve already done so.

3.2 install the cloud proxy connector point role 2

Now, open up SMS_CLOUD_PROXYCONNECTOR.log, and chances are you’ll see this:

4 add a dns alias

Text:ERROR: Failed to build Http connection f201bcf3-6fee-48d2-af38-0e7311588f23 with server FOXDEPLOYSCCMPROXY.FOXDEPLOY.COM:10125. Exception: System.Net.WebException: The remote name could not be resolved: 'foxdeploysccmproxy.foxdeploy.com'

If you see this error, it means that you need to add a CNAME record in DNS.  If you’re using Windows DNS, the record should be set up like the following:

DNS Record
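If your DNS runs on Windows Server 2012 or later, the DnsServer module can create the record for you too.  A sketch with my hypothetical names; swap in your own zone and service:

#foxdeploysccmproxy.foxdeploy.com -> foxdeploysccmproxy.cloudapp.net
Add-DnsServerResourceRecordCName -ZoneName "foxdeploy.com" -Name "foxdeploysccmproxy" -HostNameAlias "foxdeploysccmproxy.cloudapp.net"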

Once this is done, do an ipconfig /flushdns on your SCCM Server and you should see the log files clear up.

5 service gets created

Now that SCCM can talk to Azure, we’re in the money.  All that remains is to configure the roles we want to use the Cloud Proxy Service.

Browse to Administration \ Site Configuration \ Servers and Site Systems and choose the server with the Cloud Proxy Role.  Go to Management Point \ General and make sure that HTTPS and Allow Configuration Manager Cloud Proxy Traffic are selected.

6.2 configure MP for cloud proxy

Once you do this, it will trigger a reinstall of the Management Point if needed, to configure HTTPS.  Be sure to monitor the install in MPSetup.log and MPMSI.log for a healthy install.

Begin managing clients wherever they are

And we’re finished!  The final step is to refresh policy on some SCCM clients and take them outside the boundaries of the network.  You’ll know that the client is talking to Azure by monitoring ClientLocation.log, where you should see the new Cloud Proxy Management Point appear as an Internet Management Point.

Client get's new MP

Additionally, from the Configuration Manager Control Panel, you’ll see values filled out now under the Network tab for Internet Based Management Points.

Client WORKS
You’ll also see the site listed as ‘Currently Internet’ on the General tab as well

What’s next

Now you’re free to manage this client mostly the same as if it were in the office, with Software Updates, software installation, new client settings and antivirus definitions.  You’ll enjoy up-to-date hardware and software inventory as well!

Be sure to configure each one of these additional roles from the SCCM Console as well.

Did I miss something?  Leave me a comment or shoot me an e-mail / tweet.  stephen [at] foxdeploy dot com.  Twitter: @FoxDeploy

Source

New Capabilities in SCCM tp 1606🔗

Configuring SCCM 2012 R2 in HTTPS 🔗

Configuring a cloud DP 🔗



Enabling PowerShell Event Logging


Powershell logging

For one of my customers, we tried to enable PowerShell Module logging for ‘Over the shoulder’ event logging of all PowerShell commands.  We were doing this and enabling WinRM with HTTPs to help secure the company as we looked to extend the capabilities of PowerShell Remoting throughout the environment.  However, when we tried to enable the Group Policy Setting, it was missing in the GPMC!

In this post, we’ll walk through why you might want to do this, and what to do if you don’t see the settings for PowerShell Module Logging.

What is PowerShell Module logging?

PowerShell module logging allows you to specify which modules you’d like to log via a Group Policy or regkey, as seen in this wonderful write-up (PowerShell <3’s the blue team).

It allows us to get an ‘over-the-shoulder’ view, complete with variable expansion for every command a user runs in PowerShell.  It’s really awesome.  You can check the Event Log on a machine and see the results and all attempted PowerShell commands run by users.  If you then use SCOM or Splunk, you can snort these up and aggregate results from the whole environment and really track who is trying to do what across your environment.
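Under the hood, the GPO setting is just registry keys, so for a single test box you can flip them directly.  A hedged sketch using the documented policy paths, where a module name of ‘*’ means log every module:

$base = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ModuleLogging'
New-Item -Path "$base\ModuleNames" -Force | Out-Null

#Turn module logging on, and log all modules
Set-ItemProperty -Path $base -Name EnableModuleLogging -Value 1
Set-ItemProperty -Path "$base\ModuleNames" -Name '*' -Value '*'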

PowerShell remoting

We loved it and wanted to turn it on, but when we opened the GPMC..

missin
Options should appear here under Computer \ Admin Template\Windows Components\Windows PowerShell

We were missing the options!

Enabling options for PowerShell Module and Event Logging

This is because the machine I was running this from was a Server 2008 machine, and these features were delivered with Group Policy in Server 2012 / Windows 8.1.

2008

The fix is simply to download the Windows 8.1 & Server 2012 ADMX files.

Install the missing ADMX templates

Note: These only need to be installed on one machine in the environment, the one from which you are writing Group Policy.

When you run the installer, copy the file path when you install the files.  The installer does not import them for you, but simply dumps them to a folder on this system.

File path

Find the appropriate ADMX file

Next, we can look into the .ADMX files on disk to see which one contains our settings. Since I knew the setting was called ‘Enable Module Logging’ or something like that, I just used PowerShell’s Select-String cmdlet to search all the files for the one that contained the right setting.  We’re able to do this because ADMX files are simply paths to Registry Keys and some XML to describe to the end user what these keys control.

Gross Oversimplification Warning: Really that’s all that Group Policy is in the first place: a front-end that allows us to specify settings which are just Regkeys, which get locked down so the end user can’t change them.

PS C:\> $definition = "C:\Program Files (x86)\Microsoft Group Policy\Windows 8.1-Windows Server 2012 R2\PolicyDefinitions"
PS C:\> dir $definition | Select-String "EnableModule" | Select-Object Filename

finding the file

This tells us the file we need is ‘PowerShellExecutionPolicy.admx’. I then opened it in Visual Studio Code to see if it was the right file.
find our policy

This was the file!

Warning: Make sure you find the matching ADML file, which is in the appropriate nested folder.  For instance, if you speak English, you’ll need the \PolicyDefinitions\en-US\PowerShellExecutionPolicy.adml file too.

Failure to copy the ADML will result in strangeness in the GPMC, and the policies might not appear.

Copying the files

We now just need to copy the ADMX and the matching ADML file, which will be found in the appropriate language folder.

Copy these to ‘%systemroot%\PolicyDefinitions’, and be sure to move the .ADML file into the ‘en-us’ folder.  You should overwrite the original files.  If you can’t, delete the originals and then copy the new ones in.

Copy the template
I had to take ownership of the original files, then give myself full control permissions.  After that, I was able to overwrite the files.

Reload the GPMC

The final step is to completely close the Group Policy Management Editor and Management Console.  Then reload it and browse back down to Computer \ Admin Template\Windows Components\Windows PowerShell.  While these settings also exist under User Settings, those are a relic of PowerShell development and are ignored.

options exist!

Event correlation

I’ve been asked this question before.  If you’re wondering which GPO causes which event, see this chart.

evennts

References

Thanks to this article here for the refresher on importing 2012 Admin Templates onto a 2008 machine.


Safely storing credentials and other things with PowerShell


storing Credentials

Hey guys,

This post is mostly going to be me sharing an answer I wrote on StackOverflow, about a technique I use in my PowerShell modules on Github to safely store credentials, and things like REST Credentials.  This is something I’ve had on my blogging ‘To-Do’ list in OneNote for a while now, so it feels nice to get it written out.

I hope you like it, feel free to comment if you think I’m wrong!

The Original Question

I currently have a project in powershell which interacts with a REST API, and the first step after opening a new powershell session is to authenticate myself which creates a websession object which is then used for subsequent API calls. I was wondering what the best way of going about storing this token object across all Powershell sessions, because right now if I authenticate myself and then close & reopen powershell I need to re-authenticate which is rather inconvenient. I would like the ability to in theory authenticate once and then whenever I open up powershell be able to use my already saved websession object. At the moment I store this websession object in $MyInvocation.MyCommand.Module.PrivateData.Session
Original Question

My Take on Safely Storing objects on a machine with PowerShell

Since I’ve written a number of PowerShell Modules which interact with REST APIs on the web, I’ve had to tackle this problem before. The technique I liked to use involves storing the object within the user’s local credential store, as seen in my PSReddit PowerShell Module.

First, to export your password in an encrypted state. We need to do this using both the ConvertTo and ConvertFrom cmdlets.

Why both cmdlets?

ConvertTo-SecureString makes our plaintext into an Encrypted Object, but we can’t export that. We then use ConvertFrom-SecureString to turn the encrypted object back into encrypted text, which we can export.

I’m going to start with my very secure password of ham.

$password = "ham"

#$Mypath is wherever you want the file to live, e.g. your module folder
$password | ConvertTo-SecureString -AsPlainText -Force |
  ConvertFrom-SecureString | Export-Clixml $Mypath\Export.ps1xml

At this point, I’ve got a file on disk which is encrypted. If someone logs on to the machine they can’t decrypt it, only I can. If someone copies it off of the machine, they still can’t decrypt it. Only me, only here.

How do we decrypt the text?

Now, assuming we want to get the same plain text back out to use later, we can add this to our PowerShell profile to import the password like so.

$pass = Import-CliXML $Mypath\Export.ps1xml | ConvertTo-SecureString
Get-DecryptedValue -inputObj $pass -name password

$password
>"ham"

This will create a variable called $password containing your password. The decryption depends on this function, so be sure it’s in your profile: Get-DecryptedValue.

Function Get-DecryptedValue {
    param($inputObj, $name)

    $Ptr = [System.Runtime.InteropServices.Marshal]::SecureStringToCoTaskMemUnicode($inputObj)
    $result = [System.Runtime.InteropServices.Marshal]::PtrToStringUni($Ptr)
    [System.Runtime.InteropServices.Marshal]::ZeroFreeCoTaskMemUnicode($Ptr)

    New-Variable -Scope Global -Name $name -Value $result -PassThru -Force
}

And that's it! If anyone knows who originally wrote the Get-DecryptedValue cmdlet, let me know in the comments and I'll give them full credit!
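One more pattern worth knowing: if the thing you’re storing is a full credential rather than an arbitrary string, Export-Clixml will serialize a whole PSCredential with the same user-and-machine-bound DPAPI protection, no helper function needed:

#Save: the password inside is encrypted for this user, on this machine
Get-Credential | Export-Clixml "$env:USERPROFILE\cred.xml"

#Load it back in any later session
$cred = Import-Clixml "$env:USERPROFILE\cred.xml"
$cred.GetNetworkCredential().Password   #plaintext, only if you really need it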


Coming to Ignite? Come to my session! 


I am deeply humbled (and a bit scared) to be invited to deliver a session at Microsoft Ignite this year! 

I’ll be delivering the HubTalk for the topic of ‘Intro to PowerShell’ this year! By far my biggest audience yet, I’m super excited! 

If you are coming to Ignite, please sign up for my session, link is here

I’ll be working on my slides for the next six weeks, so some of my posts might be a bit delayed. 

If you are coming to Ignite, please come heckle me and win swag. If possible, immediately sidetrack the discussion into the weeds on some minor issue while I grossly over simplify everything. :p

Wish me luck! 


OP-ED: Why PowerShell on Linux is good for EVERYONE and how to begin


POWERSHELLonlinux

Sounds impossible, huh?

Ever since the beginning of the Monad project, in which Microsoft attempted to bring the same rich toolbox of piped commands that Linux has enjoyed for ages, Microsoft enthusiasts have been clamoring for confirmation of PowerShell on Linux.  But it forever seemed a pipedream.

Then, in February of 2015, Microsoft announced that the CORE CLR (Common Language Runtime) was made open source and available on Linux.  As the Core CLR is the “the .NET execution engine in .NET Core, performing functions such as garbage collection and compilation to machine code”, this seemed to imply that PowerShell might be possible on Linux someday.

To further fan the flames of everyone’s excitement, the creator of PowerShell, Jeffrey Snover–a self-proclaimed fan of the Bash shell experience in Linux–has been dropping hints of a unified management experience ALL OVER THE PLACE in the last year too.

And now today with this article, Microsoft released it to the world.  Also, here’s a great YouTube video about it too.

Available now on OSX, Debian and Ubuntu, PowerShell on Linux is here and it is awesome!

Get it here if you can’t wait, or read ahead to see why I’m so excited about this!

Why is this great news for us Windows folks?

For we PowerShell experts, our management capabilities have just greatly expanded. Update those resumes, folks.

This means that the majority of our scripts will just work in a Linux environment.  Have to hop on Linux machines from time to time?  PowerShell already aliases common Linux commands, which limited the friction of hopping between OSes, but now we can write a script once and generally assume that it will work anywhere.

I did say GENERALLY

With PowerShell on Linux we will not have WMI or CIM.  Furthermore, we’ll be enacting system changes mostly by tweaking files instead of using Windows APIs and methods to do things (which honestly was kind of the harder way to do it anyway).  And there’s no Internet Explorer COM object or a bunch of other crutches we might have used.

But a lot of things just work.

So this is great news for us!  Linux is a vastly different OS than Windows but I encourage you to start trying today.  Since you already know PowerShell, you’ll find it that much easier to interact with the OS now.
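To give you a taste, here are a few one-liners that worked as-is for me in powershell on Ubuntu.  Consider this a hedged sample rather than a guarantee:

#The object pipeline works against Linux paths and processes just fine
Get-ChildItem /var/log | Sort-Object Length -Descending | Select-Object -First 5 Name, Length
Get-Process | Sort-Object CPU -Descending | Select-Object -First 5 Name, CPU
Get-Content /etc/os-release | Select-String 'PRETTY_NAME'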

What does this mean for the Linux community?

This is good news for Linux fans as well.  We Microsofties and enthusiasts are not coming to drink anybody’s milkshake.  The Bourne Again Shell is NOT dead, and we’re not trying to replace it with PowerShell!

If anything, this will signal the dawn of a new era, as loads of skilled Windows devs and operators will now be trying their hands at Linux.  Some of these will inevitably be the type to tinker with things, which will likely result in a new wave of energy and excitement around Linux.

The age of collaboration?  It’s just getting started.  The Power to write scripts and run them on any platform, and to bring the giant crowd of PowerShellers out into Mac and Linux can only mean good things for everyone.

Just like Bash on Windows, PowerShell on Linux is a GOOD thing, people.  Those who think it’s anything but are completely missing the point.

Developing on any platform

It also frees us up for all sorts of development scenarios.  You don’t NEED a Windows OS anymore, as you can write your code on an Ubuntu or OS X machine.

Similarly, you can use Bash on Windows to write shell code to execute on your Linux Machines.  Or write PowerShell code instead.

No longer are you stuck on the platform you want to execute on.

How do I get started?

Getting started is very easy.  First, spin up a VM in Azure, AWS or Hyper-V and install Ubuntu or CentOS.  Or do this on your Mac if you’re on El Capitan.

Now simply follow the instructions for the platform below:

  • Windows – .msi – Instructions
  • Ubuntu 14.04 – .deb – Instructions
  • Ubuntu 16.04 – .deb – Instructions
  • CentOS 7 – .rpm – Instructions
  • OS X 10.11 – .pkg – Instructions

Once the install is completed type ‘PowerShell’ from the bash shell and away you go.  Right out of the gate you’ll have full intellisense and loads of core PowerShell and Linux specific commands.  I’ve been amazed at how much of my code ‘just works’.

Updating PowerShell

There’s no WMF on Linux, so upgrading PowerShell on Linux is a bit different.  As new releases are posted HERE, download the new .deb file.  You can run this manually, which will launch Ubuntu Software Center.

updating.png

Or you can always update a deb Debian Package from the command line too.

sudo dpkg -i ./newFile.deb

updating2

Where’s the PowerShell ISE?

There is NO ISE release…yet.

However you can use Visual Studio Code and its Integrated Console mode to get something that feels very similar.

Note: these steps will cause Terminal to automatically load PowerShell.  If you don’t want this to happen, don’t do them.

First, download Visual Studio Code here.

Code Install

Choose to install via Ubuntu Software Center

Code Install2

Next, launch Terminal and type sudo gedit ~/.bashrc; this will open your Bash profile, which is pretty much where the idea for PowerShell profiles came from [citation needed].  We’re going to tell Bash to launch PowerShell by default when it opens.

Now, go to the very bottom line of the file and add this content

echo "Launching PowerShell"

powershell

It should look like this when completed.

Setting PowerShell to launch

Save the file and reopen bash (the actual name of the Linux Terminal) to see if it worked.

Finally, launch Visual Studio Code by Clicking the Linux Start Button 😜 and typing ‘Code’

Code Install3

The last step is to click ‘View->Integrated Terminal’ and then you should feel right at home.

feels like the ISE
We’ve got Syntax Highlighting, cool themes and a functional PowerShell Console in the bottom, AWESOME!

As time goes on, we should have F5 and F8 support added to Visual Studio Code as well, to make it feel even more like the ISE.  And this isn’t just a substitute for the ISE, but also a very capable and powerful code editor in its own right.

One more thing

Do you hate the Linux style autocomplete, where it displays multiple lines with possible suggestions?

t

If so, then run:

Set-PSReadlineOption -EditMode Windows

Let’s dig in and become Linux experts too!


WinRM and HTTPs – What happens when certs die


winrm-https

Follow-up!

See the follow-up post 👻The Case of the Spooky Certificate👻 for what happens during a renewal!


For one of my largest customers, a burning question has been keeping us all awake at night:

Where does the soul go when an SSL Certificate expires?

Er, I may be getting too caught up in this ghost hunting theme (I blame the Halloween decorations which have been appearing in stores since the second week of July!  Spooky!). Let me try again.

If we enable WinRM with HTTPS, what happens when the certificate expires?

Common knowledge states that WinRM will stop working when a certificate dies, but I wanted to prove it beyond all doubt, so I decided to conduct a little experiment.

What’s a WinRM listener?

Before you can run commands on remote systems, including anything like PSexec and especially remote PowerShell sessions, you have to run the following command.

WinRM quickconfig (-transport:https)

This command starts the WinRM service, sets it to autostart, creates a listener to accept requests on any IP address, and enables firewall exceptions for all of the common remote management ports and protocols: WinRM, WMI, RPC, etc.  For more info…

The last bit of that command, -transport:https, determines whether to allow traffic over the regular WinRM ports, or to require SSL for extra security.  By default, in a domain we have at a minimum Kerberos encryption for remoting–while non-domain computers will use the ‘Negotiate’ level of security–but sometimes we need to ensure a minimum level of tried-and-true encryption, which HTTPS and SSL provide.

How WinRM uses certificates

For a complete guide to deploying certificates needed for WinRM Remoting with SSL, stop reading and immediately proceed to Carlos’ excellent guide on his blog, Dark Operator.

In our usage case, security requires we use HTTPs for WinRM Communications, so we were pretty curious to see what WinRM does to implement certs.

When you run winrm quickconfig -transport:https, your PC checks to see that you’ve got a valid cert: one issued by a source your computer trusts, which references the common name of your computer, and which is valid for Server Authentication.  Should all of these be true, a new listener will be created, which hard-codes the thumbprint of the cert used.

When a new session connects, the listener looks at the thumbprint, pulls the related cert from the cert store, and uses it to authenticate the connection.  This works fine and dandy…but when a certificate expires, is WinRM smart enough to realize this and update the configuration of the listener?

Testing it out: making a four-hour cert

To put this to the test, we needed to take a PC with no WinRM HTTPS listener, give it a valid cert, and then watch and see what happens when the cert expires.

I already had valid PKI in my test environment, thanks to Carlos’ excellent guide I referenced earlier.  All I needed to do was take my current cert template, duplicate it, and set the expiry period down to a small enough duration.

First, I connected to my CA, opened up Certification Authority and choose to Manage my Certificates.

Next, I right-clicked my previous WinRMHttps template and duplicated it.  I gave it a new name and brought the validity period down to 4 hours, with renewal open at 3 hours.

01-making-a-4-hour-cert
Four hours was a short enough duration for even my limited attention span–Oh a squirrel!

Satisfied with my changes, I then exited Cert Management, and back in Certification Authority, I chose ‘New Template to Issue’.

02-issue-the-cert

I browsed through the list of valid cert templates and found the one I needed, and Next-Next-Finished my way through the wizard.

03-deploy-the-cert

Finally, I took a look at my candidate machine (named SCOM.FoxDeploy.com) and ran GPUpdate until the new cert appeared.

00-no-cert
F5-F5-F5-F5-F5-F5-F5-F5-F5-F5-F5-F5-F5-F5-F5-F5-F5
08-omg-cert-expires-soon
Armed with a new Four Hour Cert I was ready to rock

World’s shortest WinRM Listener

I took a quick peek to see if there was a Listener already created for HTTPs, and there wasn’t.

04-validate-no-listener

So I ran winrm quickconfig -transport:https and then checked again.

05-winrm-https-exists

To validate which certificate is being used, you can compare the output of dir WSMan:\localhost\Service to what you see under MMC->Certificates->Local Computer->Personal, as seen below.

06-validate-cert
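In PowerShell terms, that comparison looks something like this sketch (note the WSMan: drive requires an elevated session):

#Which thumbprint is the HTTPS listener pinned to?
Get-ChildItem WSMan:\localhost\Listener | Where-Object Keys -like '*HTTPS*' |
    ForEach-Object { Get-ChildItem "WSMan:\localhost\Listener\$($_.Name)" } |
    Where-Object Name -eq 'CertificateThumbprint'

#And which certs does the machine actually hold?
Get-ChildItem Cert:\LocalMachine\My | Select-Object Thumbprint, Subject, NotAfter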

And for the magic, if both computers trust the same CA, all you have to do is run the following to have a fully encrypted SSL tunnel between the two PCs.

Enter-PSSession -ComputerName RemotePC.FQDN.COM -UseSSL

07-connecting-over-ssl

Now, I had merely to play the waiting game…only three hours to go!

The Waiting Game Sucks

I walked away from the PC at this point and came back after dinner, diapers and begging my children to sleep.

threehourslater

I left the PSSession open, and was surprised to see the following message appear when I tried to run a command:

cert-expired
Starting a command on the remote server failed with the following error message: The Server Certificate on the destination computer has the following errors: The SSL Certificate is expired.

Here’s the full text of that error message.

Starting a command on the remote server failed with the following error message: The Server Certificate on the destination computer has the following errors:  The SSL Certificate is expired.

Once the cert expires, you can’t run ANY commands on the remote computer until you reconnect without SSL.  Interestingly, you can’t even run Exit-PSSession to return to your own prompt if this happens.  I had to kill PowerShell.exe and relaunch it to continue.

All attempts at future reconnections also fail with the same error.

cert-expired2

In short summary:

When the cert expires, WinRM doesn’t realize it and keeps presenting the old cert.

In other words: yo, WinRM gone be broke.

But what about auto renewal?

One question that came up over and over is whether auto renewal would step around this problem.

It won’t.  It SHOULDN’T.  When a new cert is requested, you’ll always end up with a new cert, with new validity periods, and other data will change as well.  All of this means there will be a different hash, and thus a different thumbprint.

This means that the previous listener, which to our understanding is never updated, should not continue to function.  However, some people have reported that it does, and thus I’m digging in even deeper with a more advanced test.

Our take-aways

Today, WinRM’s implementation of SSL presents problems, and is in some ways incomplete.  Microsoft is aware of the issue, and it is being tracked publicly both on GitHub and UserVoice.

Show your support if you’re affected by this issue by voting for the topics:

We’re working on a scripted method to repair and replace bad certificates, which is mostly complete and available here.  GitHub – Certificate Management.ps1.

When this problem is resolved, I will update this post.

Edit: I’m performing additional research around cert autorenewal and will update you all with my findings!

