
PowerShell – Testing endpoints that perform Anti-forgery verification


First off, big thanks go to 🐦Ryan Ephgrave, an incredibly talented and easy-to-work-with PowerShell and dotnet god whom I have the pleasure to learn from over at #BigBank™ (it’s a great thing LinkedIn doesn’t exist…)

We had a situation arise recently where we needed to create some integration tests in Pester to validate that a long list of web pages responded after a deployment.  I started out writing a litany of Pester tests by hand, like this:


Context 'Post Deployment Validation' {
    It 'Website #1 should be accessible' {
        $url = 'https://someserver:someport/someEndpoint'
        $results = Invoke-WebRequest -Uri $url -UseDefaultCredentials
        $results.StatusCode | should be 200
    }

    It 'Website #2 should be accessible' {
        $url = 'https://someOtherserver:someport/someEndpoint'
        $results = Invoke-WebRequest -Uri $url -UseDefaultCredentials
        $results.StatusCode | should be 200
    }
    [...]
}

I spoke with the team about what I was doing, and Ryan drew my attention to the very neat TestCases feature of Pester, which you can read more about here.

With a bit of work, I converted my long list of tests (which I typed by hand…why?  Because I finally got a PS4 and I stayed up too late playing Sekiro!) into a JSON file like this.

[
    {
        "SiteName" : "Our Home Page",
        "Url" : "https://someserver:someport/someEndpoint"        
    },
    {
        "SiteName" : "Our WebApp #1",
        "Url" : "https://someOtherserver:someport/someEndpoint"        
    }
]

Then I hooked this up to our Pester test from before…

Context 'Post Deployment Validation' {
    $EndPointList = Get-Content $PSScriptRoot\Endpointlist.json | ConvertFrom-Json
    $paramArray = @()
    ForEach($instance in $EndPointList){
        $paramArray+= @{
            'SiteName' = $instance.SiteName
            'URL' = $instance.URL
        }
    }

    It '<SiteName> should be accessible' -TestCases $paramArray {
        Param(
            [string]$SiteName,
            [string]$URL
        )

        $results = Invoke-WebRequest -Uri $url -UseDefaultCredentials
        $results.StatusCode | should be 200
    }
}

Then we run it to see…

But what about the post title?

You guys, always sticklers for details.  So this covered a lot of our use cases, but it didn’t cover an important one: making sure that one of our internal apps worked after a new deployment. It was a generic MVC app where an authorized user could enter some information and click a button to perform some automation after an approval process.

The issue was that, as you could imagine, security is a concern, so time had been spent hardening tools against attacks like Cross-Site Request Forgery.  Which is great and all, but it made automated testing a pain, namely because any attempt I made to submit a test request resulted in one of the following errors:

The required anti-forgery form field __RequestVerificationToken is not present.

The required anti-forgery cookie __RequestVerificationToken is not present

So what’s a dev to do?  Send a PR to disable security features?  Create some new super group who isn’t subject to the normal processes, just used for testing?

Of course not!

How MVC Antiforgery Tokens work

Any good production app is going to first and foremost use AspNet.Identity and some kind of user authorization system to ensure that only approved users have permission to use these tools.  If you don’t, anyone who can route to the web app can use it.  This is bad.

So let’s assume we’ve done our diligence and we have our web app.  A user has permission to the app and they’re following safe browsing behavior.

Let’s imagine the app is a simple user management app, something like this, which has a simple class of Users, perhaps with a field to track if they have admin rights or not.

class FoxDeployUser
{
    public String UserName {get;set;}
    public String SamAccountName {get;set;}
    public bool IsManager {get;set;}
}

Now imagine if your user account has administrative rights to make changes to this system. If so, your account could easily navigate to a Users/Edit endpoint, where you’d be prompted with a simple form like this to make changes to a user account.

The scary thing: if the account we are using for this portal is always permitted and doesn’t have a log-in process, then any site we visit while browsing the web could make a change to this portal.

Here’s how it would work.  Assume I want to make a change to this user.  I load up the /Users/Stephen endpoint, type in my values and hit Save, right?  What happens in the background (and which we can see in Chrome Dev Tools) is that a form submission is completed.

It simply POSTS back to the web server the contents of a form.  And you know what else?  Any website you visit can contain JavaScript that performs the exact same kind of AJAX Post to the web server.  There are even JavaScript utilities that will automatically discover webservers on your network.  So with this in mind, imagine visiting a webpage that looks pretty innocuous:

Black mode = evil website

Clicking the Post button there will send an AJAX Post formatted like the following:

$("button").click(function (e) {
            e.preventDefault();
            $.ajax({
                type: "POST",
                url: "https://MyInternalApp:44352/Users/Edit/1",
                data: {
                    UserID: 1,
                    UserName: "Stephen",
                    SamAccountName: "AD-ENT\\Stephen",
                    IsManager: "true"
                },
                success: function (result) {
                    alert('ok');
                },
                error: function (result) {
                    alert('error');
                }
            });
        });

So this is an attack from one website, through the user’s PC, to another website they have access to!

Will it work? If it does, I’ll click the button from one site and we’ll see the user’s ‘IsManager’ property change in the other site.

Wow that’s terrifying

Yep, I thought so too.  Fortunately for all of us, there are a lot of ways to mitigate this attack, and most MVC frameworks (citation needed) ship with them out of the box.  In ASP.net MVC you signal that we should protect an endpoint against a CSRF attack by adding this attribute to the method.

// POST: Users/Edit/5
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Edit(int id, [Bind("UserID,UserName,SamAccountName,IsManager")] User user)
{

This adds a neat little hidden form field to the UI which contains a one-time-use token, embedded in both the form and the cookies.

Here’s an example of the normally hidden element, which I’ve revealed using Chrome Dev tools.

Now if I attempt to submit this form, I’ll encounter an error, since my attack won’t be able to retrieve the form as the user, get the cookies, and then repost back to the endpoint.  Since my post won’t have the one-time code needed, it will be rejected at the Controller level.

Testing an endpoint which has CSRF Protection

Now, to the meat of the issue.  As part of my Test Suite, I need to run a post through this endpoint and validate that the service after an update is able to perform this business function.

I can do this by maintaining a PowerShell WebSession to get the matching cookies, and then submitting the form with Invoke-WebRequest.

Describe 'WebApp Testing' {
    $Request = Invoke-WebRequest -Uri https://someserver:someport/Users -SessionVariable Session -UseBasicParsing -UseDefaultCredentials
    $TokenValue = ''
    ForEach($field in $Request.InputFields){
        if ($field.Name -eq '__RequestVerificationToken'){
            $TokenValue = $field.value
        }
    }

    $header = @{
        '__RequestVerificationToken' = $TokenValue
    }

    $fields = @{
        '__RequestVerificationToken' = $TokenValue
        'UserName' = 'TestUser'
        'SamAccountName' = 'QA\TestUser'
        'IsManager' = $false
    }

    It 'WebApp1 : Should edit a User' {
        # Reuse the session from the GET above so the anti-forgery cookie rides along with the token
        $Response = Invoke-WebRequest -Uri https://someserver:someport/Users -WebSession $Session `
            -Method Post -UseBasicParsing -UseDefaultCredentials -Body $fields -Headers $header
        $Response.StatusCode | should be 200
    }

    It 'WebApp1 : Should throw when the user has no token' {
        # No WebSession here, so no anti-forgery cookie is sent and the POST gets rejected
        {Invoke-WebRequest -Uri https://someserver:someport/Users `
            -Method Post -UseBasicParsing -UseDefaultCredentials -Body $fields -Headers $header } | should throw
    }
}

My first integration tests.  I’m so proud.  And I’m also kind of ashamed too, because up to this point I’d been manually loading two dozen web pages and making requests by hand to validate deployments.

Thanks for reading!


Progressive Automation: Part I


Progressive automation - real world automation in increasing complexity

In this series, I thought it’d be fun to walk through the common phases of an automation initiative and specifically show how I love to handle this sort of situation when it arises today.

We’ll walk through recognizing a good opportunity to move a manual task to automation covering these three main steps, over the next few posts here:

  • Begin with something terrible and manual and ease the pain by adding a simple script
  • Increase the sophistication and take it to the next level by adding a User Interface
  • Migrate our Automation from a PowerShell UI to a simple and easy asp.net portal which calls a script to run the task

Depending on the amount of steam I have left, we may even go one step further and make our dotnet site more advanced, if you all are interested ☺

Our goal is to go from ‘hey it actually worked’ to ‘It works pretty well now’, to ‘hey it actually still works!’

Tell me where it hurts

You should always start your automation by finding the biggest pain points or wastes of time and starting there.  Ideal cases are things that:

  • Require your specific manual intervention (+3 points)
  • Have to happen in an off hour or over the weekend (+5 points)
  • Are hard to do, or repetitive  (+5 points)
  • Have a nasty penalty if you get them wrong (+5 points)

Add them up and if you’re over 10 then you should think about automating it. Hell, if it’s over 6, you should automate it.

A printable checklist of the points from the 'when to automate' list above
Surely Stephen didn’t really spend three hours on this thing. Or make a ‘chillwave’ version of it for basically no reason!

😎🌴Alternate Super Synth Wave Version also available🌴😎

Background for the task at hand

A sysadmin and engineer friend of mine posed an interesting question at MMSMOA this year (easily the best conference I’ve been to in a long time, I’d go if you have the chance!)

He has a domain migration taking place at a client and they needed to put just the devices that were migrating that week into a collection which would have the appropriate Task Sequences and software available for it.  The penalty for missing this?  Machines not getting upgraded (+5 points)

When the primary SCCM environment’s client is installed on a machine in the acquisition domain, he needed that machine to go into a collection in the primary SCCM environment. That collection would have the cross-domain migration Task Sequence advertised to it as required.

His process for this had been to have some technicians deploy the client out to the target devices; then they’d e-mail him the computer names, and he would have to go edit the collection, adding those new devices to it. Other folks couldn’t do it because they weren’t familiar with CM, so it had to be him!  (Requires his attention?  +3 points) He ended up having to very closely watch his email during migration weekends… Working over the weekend?  (+5 points)

People, we are at a thirteen here; this of course is totally unacceptable. Get stuff from an email? Do things manually? No no, we had to engineer a fix (and this kind of thing is why MMS is awesome: we had a few wines, enjoyed the music and atmosphere of the Welcome Reception, and whiteboarded out a solution)

Solving this problem with automation

If the technicians were trained in CM, they could have simply set the devices as collection members and called it a day. But there was no time or budget to train 5-10 people in CM. So we had to think of an alternative.

We couldn’t just add all devices ahead of time to a collection because their device name would change in the process, and furthermore we didn’t want to disturb users during the day with the migration and the tons of apps they would be getting. So we then thought about using the device’s BIOS Serial Number (GUID) which would stay the same even after a reimage (since he wanted the devices to stay in the collection as well).

But the devices which would get migrated could fluctuate even up to the hours before a migration, when my friend was already out of the office.  Furthermore, for reporting purposes, they wanted to ‘babysit’ the recently migrated devices for a few days to ensure they received all software, so we couldn’t just put everybody there.

But we were getting close to a solution.

  • Line of business admins would know who were definitely going to migrate towards the end of day on Friday
  • Those Users would leave their devices in the office to be updated over the weekend
  • Inventory data from their devices in the old CM environment would be available and the user’s computer names would be known and confirmed
  • Devices would be manually F12’ed and PXE-booted into the USMT Migration Task Sequence for the new CM environment and domain
  • If their devices could only somehow end up in the ‘Migrated Devices’ collection in the new CM, we would be set, because the Required apps in that collection would have all of the apps those users would need

The Process we came up with

There are probably a number of different and better ways this could be handled (I was thinking of something clever using the time since the devices were added to the new CM instance as a Query Rule for the collection, but didn’t vet it out), but we hashed out retrieving the BIOS Serial Number and using that as a query rule for the Collection.

We came up with a simple scheduled task that would run a script. It ain’t amazing, but it’s enough to get through this need, and we can then use the bought time to make something a bit nicer too.

The script will :

  • Look for devices which have been added to a CSV file by the LOB guys
    • If nothing there, exit
  • Compare them and see if any of them are not already in our Processed list ( a separate CSV file we will control which they cannot alter)
    • If all devices have been processed, exit
  • Hit the old CM server via SQL and retrieve the needed GUID info from V_R_System
  • Add new collection rules for each item found, trigger a collection refresh
  • Add records to processed list and then exit

Or, in flowchart form, complete with a little XKCD guy.

pictured is a flow diagram which repeats the bullet points of this process
Guess how long it took to make this flowchart? Now multiply it by four. Now we’re in the neighborhood of how long it took

Since we will only ever add a device once to the new collection, we could safely set this to run on a pretty aggressive schedule, maybe once every 15 minutes or so.  If the new CM were really under a lot of load, of course this could be altered greatly.
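To give an idea of what that scheduled task could look like, here’s a sketch using the ScheduledTasks module; the script path, task name, and account are all hypothetical placeholders:

# Run the collection-sync script every 15 minutes under a service account.
# NOTE: the path, task name, and credentials below are examples, not the real ones.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Automation\Invoke-MigrationSync.ps1'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Minutes 15) -RepetitionDuration (New-TimeSpan -Days 365)
Register-ScheduledTask -TaskName 'CM Migration Collection Sync' -Action $action -Trigger $trigger `
    -User 'CONTOSO\svc_automation' -Password 'SuperSecretHere' -RunLevel Highest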

And now, let’s code

OK, enough theory and planning (although this is kind of my favorite part about having been an automation consultant, and now my current role).

To begin with, users have their own spreadsheet they update like this, it’s a simple CSV format.

HostName	Processed	ProcessedDate
SomePC123
SomePC234
SomePC345

They are free to add new hostnames whenever they like.   Their files live on a network drive which the central automation server can access.  The script is pretty self-explanatory for the first half, standard checking to see if the file shares are there, then checking the files themselves to see if we have any rows which we haven’t marked as processed yet.

$Date = Get-date
$LOBDrives = "\\someDC\fileShare\ITServices\LOB_01\migrationfile.csv",
             "\\someDC\fileShare\ITServices\LOB_02\migrationfile.csv",
             "\\someDC\fileShare\ITServices\LOB_03\migrationfile.csv"
$masterLog = "\\someDC\fileShare\ITServices\master\migrationfile_reference.csv"
$ValidLobFiles = @()
$RecentlyMigratedCollectionName = "w10_Fall_Merger_RecentlyMigratedDevices"

Write-Verbose "Online at $($date)"
Write-Verbose "Looking for new items to process"
Write-Verbose "Found $($LOBDrives.count) paths for processing"

If (Test-path $masterLog){
	Write-Verbose "Found master file for reference"
	$ProcessedLog = import-csv $masterLog -Delimiter "`t"
}
else{
	Throw "Master file missing!!!"
}
ForEach($LOBFile in $LOBDrives){
	If (Test-Path $LOBFile){
		Write-Verbose "Found $($LOBFile)"
		$ValidLobFiles += $LOBFile
	}
	else{
		Write-warning "Could not resolve $($LOBFile) for processing"
	}
}

$itemsToProcess = New-Object System.Collections.ArrayList
ForEach($validLObFile in $ValidLobFiles){
	$fileCSV = Import-CSV $ValidLobFile -Delimiter "`t"
	ForEach($item in $fileCSV){
		If ($item.Processed -ne $true){
			If($ProcessedLog.hostname -notContains $item.HostName){
				[void]$itemsToProcess.Add($item)
			}
			else {
				Write-warning "$($item.Name) was already processed, ignoring"
			}

		}
	}
}

Write-Verbose "Found $($itemsToProcess.Count) items to process"

This was all pretty boilerplate, but it’s about to get more interesting. Next up, we have a short custom PowerShell function which uses a SQL cmdlet that one of my peers, the venerable and always interesting Fred Bainbridge, published for lightweight SQL queries.

Function Get-FoxCMBiosInfo {
	param([string[]]$ComputerNames)

	$items = New-Object System.Collections.ArrayList
	ForEach($computerName in ($ComputerNames.Split("`n").Split())){
		If ($computerName.Length -ge 3){
			[void]$items.Add($computerName.Trim())
		}
	}

	$inStatement = "('$($items -Join "','")')"

	$query = "
	select vSystem.Name0,vSystem.ResourceID,BIOS.Caption0,Bios.SerialNumber0
		from v_r_system as vSystem
		join dbo.v_GS_PC_BIOS as BIOS on BIOS.ResourceID = vSystem.ResourceID

	where vSystem.Name0 in $inStatement"

	Invoke-mmsSQLCommand $query
}

It returns objects like this.

Name0    ResourceID    SerialNumber0                       Caption0
SCCM     16777219      4210-1978-6105-2643-9803-3385-35    NULL
DC2016   16777220      7318-9742-4948-8961-3362-1212-32    NULL
W10-VM   16901071      6145-4101-5130-6042-4046-8711-91    NULL
SomeFox  16901086      7318-9742-4948-8961-3362-1212-32    Hyper-V UEFI Release v3.0

This lets me then run the rest of the script, stepping through each item we need to process and adding Query Rules for the BIOS Serial Number to CM in the new environment.

#Look up SQL values
$BIOSValues = Get-FoxCMBiosInfo $itemsToProcess.HostName

#Add new query rules

$Collection = Get-CMDeviceCollection -Name $RecentlyMigratedCollectionName

ForEach($item in $BIOSValues){
    # NOTE: double quotes here so the serial number actually expands into the WQL
    Add-CMDeviceCollectionQueryMembershipRule -CollectionName $RecentlyMigratedCollectionName `
        -QueryExpression "select SMS_R_System.ResourceId, SMS_R_System.ResourceType, SMS_R_System.Name, SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Client from SMS_R_System inner join SMS_G_System_PC_BIOS on SMS_G_System_PC_BIOS.ResourceId = SMS_R_System.ResourceId where SMS_G_System_PC_BIOS.SerialNumber = `"$($item.SerialNumber0)`"" `
        -RuleName $item.Name0
}

#loop back through original files and mark all as processed

ForEach($LOBFile in $LOBDrives){
	If (Test-Path $LOBFile){
		Write-Verbose "Found $($LOBFile)"
		$fileCSV = Import-CSV $LOBFile -Delimiter "`t"
		$newCSV = @()
		forEach($line in $fileCSV){
			if ($itemsToProcess.HostName -contains $line.HostName){
				$line.Processed	= $true
				$line.ProcessedDate = get-date
			}

			$newCSV += $line
		}
		$newCSV | export-csv -Path $LOBFile -Delimiter "`t" -NoTypeInformation
	}
	else{
		Write-warning "Could not resolve $($LOBFile) for processing"
	}
}

#update master file
ConvertTo-Csv $itemsToProcess -Delimiter "`t" -NoTypeInformation | select-object -skip 1 | add-content $masterLog

Finally, we update the migration files for each LOB, as well as our central master record, and then sleep until the next run comes along.

Why don’t you use the Pipe anywhere?

We found at work that there are various performance penalties which can add up when performing complex operations in the PowerShell pipeline.  For that reason, we still use the pipeline for one-off automation tasks, but in scripts it’s just much easier to debug, test, and support foreach loops instead.
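If you’re curious, you can get a rough feel for the difference yourself with Measure-Command; the exact numbers will vary wildly by machine and workload:

# Pipeline (ForEach-Object) vs. the foreach statement over the same data.
$items = 1..100000

Measure-Command { $items | ForEach-Object { $_ * 2 } } | Select-Object TotalMilliseconds
Measure-Command { foreach ($item in $items) { $item * 2 } } | Select-Object TotalMilliseconds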

Next time

So that takes us from the task, through ideation, through a pretty good working solution to handle this terrible task.

Join us in phase two where we make this more sophisticated with a UI, and then phase three where we move the whole process to a centralized web UI instead.  Have some other ideas?  Drop me a line on twitter or reddit and we’ll see if we can work it into a future post.

ConfigMgr Tech Preview Install Guide


Hey all,
After seeing Adam Gross’ very interesting content on CM TechPreview’s new AdminService feature, I immediately started to wonder how I could go about using it in place of remote WMI Operations.
So I connected to my stale Tech Preview Environment (it was TP 1806, lol!) and found it had expired 😢.
After googling for 14 seconds, I found no one had made a completely slap-dash guide to deploying the current version of CM Tech preview complete with all of the links you’ll need, so I decided to do that here.
note: I am assuming you’ve installed ConfigMgr **a lot of times** before this, so I won’t go too in-depth into what you need to do for each step.  Where relevant I provide a link to a post with the exact step you need to do, in case you’re not sure.

Have an AD domain

You must have a domain to setup ConfigMgr.  Womp womp.  If you need a domain controller, make a new Server 2019 VM and follow this blog post for a one-click domain controller install.

Make a Service Account

You don’t want to be stuck doing this when you get to the SQL install step, so do it now.  Make a new account, set it to never expire, and give it limited perms.
Do not place it in Domain Admins or Enterprise Admins
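If you’d rather script this than click through ADUC, here’s a minimal sketch using the ActiveDirectory module; the account name is just an example:

# Create a limited service account that never expires (name is hypothetical).
Import-Module ActiveDirectory
$password = Read-Host -AsSecureString -Prompt 'Service account password'
New-ADUser -Name 'svc_cm' -SamAccountName 'svc_cm' -AccountPassword $password `
    -PasswordNeverExpires $true -Enabled $true
# Deliberately NOT adding it to Domain Admins or Enterprise Admins.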

Install an OS

 Make a VM first, give it two to four cores, and give it two NICs: one on the same subnet as your Domain Controller, and the other with external web access.
 Spec it out: give it 12-16 GB of RAM.  Give it fast disks.  Give it at least two disks, one for the OS only and the other for CM and SQL.  You could go for the three-drive config and put the SQL and CM binaries on separate drives, you do you.
C:\ - OS
D:\ - SQL
E:\ - ConfigMgr
 For OS, I choose Server 2019.  Download the ISO here. 🔗

FoxDeploy’s Patented Bad Idea Tip: You can dramatically improve the speed of a CM install if you disable Windows Defender Real-Time Protection first!

SQL installs much quicker without it and CM installs in 50% of the time, from 44 minutes down to just 20 with my hardware!

Just remember to turn Defender Back On!
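If you want to script that toggle, it’s one cmdlet each way (run elevated); just don’t forget the second half:

# Before the SQL / CM installs:
Set-MpPreference -DisableRealtimeMonitoring $true

# ...and once everything is installed, turn it back on!
Set-MpPreference -DisableRealtimeMonitoring $false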

First Boot

 Rename your machine first, then reboot, then join it to the domain.
 Reboot your new CM machine.
 While it’s rebooting, RDP to your Domain Controller, go to ADSI Edit, and give your CM Server Full Control permissions on the System Management container.  Follow this to make the container if you don’t know how.
  Now is a good time to extend the AD Schema with extadsch.exe if you haven’t yet (see the sketch below).
 Back on the server, start downloading and installing updates. If you begin downloading updates, you need to let the first batch finish before you can do a SQL install (or you’ll get all the way to the end and it won’t allow you to install because of a pending reboot)
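For reference, the schema extension amounts to running one binary from the CM install media as a Schema Admin; a sketch, assuming the media is mounted at D:\ :

# Extend the AD schema for ConfigMgr (run as a member of Schema Admins).
& 'D:\SMSSETUP\BIN\X64\extadsch.exe'

# Results are written to the root of the system drive:
Get-Content C:\ExtADSch.log -Tail 5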

 Install SQL

  I like to do SQL first so I can setup WSUS DB ‘the right way’ by co-locating it on my CM SQL.  If you want to have a holy war about it, I’m ready for it, 1v1 me irl.
  For SQL Version, I choose SQL Server 2017 SP1. Download Link 🔗
  Install SQL, nexting your way through and then choosing ‘Database Engine‘ only when prompted for features.
Where’s Reporting Services?
  I know, how weird!  Turns out, in the time we’ve been away from ConfigMgr consulting, the world went and changed, and now Reporting Services is a separate install.
   Install the SQL Server features and DB to your SQL drive, D:\ if you’re following along.
  • Provision your SCCM Service account as an admin in the SQL Install process
  • Set the services to auto start using your Service account too.  (You can make a separate SQL Service account if you’d like, but since it’s a lab, I am lazy on this regard)
  When SQL Server finishes, you can install SSMS and Reporting Services without rebooting.    Download link for SQL Reporting Services. 🔗
Where’s management studio?!
  It’s separate now too!  Here’s the download link for SSMS 🔗.
  You can also find links for all of the above directly within the SQL Installer, as shown here.
  Last step for SQL: reboot (SQL Server requires it), then set the max RAM amounts.  I use 8192-9002 MB.
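If you’d rather lock those in from the command line instead of SSMS, here’s a sketch using Invoke-Sqlcmd (assumes the SqlServer module is installed; the values are the ones above):

Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 8192; RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 9002; RECONFIGURE;
"@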

Install CM PreReqs

 I used to do this with a one-liner in PowerShell, but I love that the PreReq tool from Nikolaj Andersen of SCConfigMgr is just amazing and does it all for you!  ConfigMgr Prerequisite tool download link 🔗
Go to the Sites tab and choose ‘Primary Site’ and click install.
Then go to Roles Tab and choose Management Point and click install as well.
 Then go to the ADK tab and choose:
  • Windows 10 ADK Version 1903
  • Windows 10 ADK Version 1903 (WinPE Add-on)

The ADK steps take a few minutes each, so wait until you see the ‘Install Completed’ prompt before moving on.

Next, go to ‘WSUS’ and choose ‘SQL Server’, then click install.
ℹ  You can also use this tool to configure your SQL memory limits, if you’d like and you skipped this before.  Just click ‘Settings’ then ‘Connections’ and type ‘localhost’ for your SQL Server name.  You can then go to the SQL Server tab and use it to lock in the needed memory limits for SQL

Install CM

  If you’ve been fast we might be at only 25 minutes by this point!
  This will download all of the content to “C:\SC_Configmgr_SCEP_TechPreview1907\”.
  You can launch the Installer by running "C:\SC_Configmgr_SCEP_TechPreview1907\SMSSETUP\BIN\X64\setup.exe", but first I recommend running "C:\SC_Configmgr_SCEP_TechPreview1907\SMSSETUP\TOOLS\CMTrace.exe" and setting it as your default log file viewer.
  The install is dead simple, just install CM to your E:\ drive we provisioned earlier.
You can close the wizard when you see this message.

Update to TP 1910

  Open up the console and click to Administration\Updates and Servicing, then install ‘Configuration Manager Technical Preview 1910’.  The download should take ~30 mins or less!

IMPORTANT: Watch CMupdate.log for info, rather than babysitting the Update Pack Installation Status viewer.  If you click ‘Refresh’ at the wrong time for the Install Viewer, it will lose connection to the Provider and stop updating.

However, the install is still in process, as you can see in the CMUpdate.Log file.

  And now that you’re here…time to visit Adam’s post and learn all about the exciting CM TechPreview new and improved AdminService!

YouTube Video Metadata Scraping with PowerShell


Trigger Warning: I briefly discuss eating disorders and my opinions on pro-eating disorder media in this post. If this content is difficult for some, I recommend scrolling past The Background and resuming at The Project instead.

Background

I ❤ YouTube. I have learned so much about development from folks like IAmTimCorey, or from the amazing Microsoft Virtual Academy courses from Jeffrey Snover and Jason Helmick (original link).

Most days I catch the repeats from Stephen Colbert, and then jam out to synthwave or chillhop. In fact, I listened to one particular mix so many times while learning C# that I still get flashbacks when I hear the songs on it again…sleepless nights trying to uncover everything I didn’t know. I even have my own Intro to PowerShell video that I think my mom watched 70,000 times.

My kids grew up singing songs from Dave and Eva, Little Baby Bum, Super Simple Songs and now Rachel and the TreeSchoolers, and it was one of the first services I signed up for and still pay for today (aside from NetFlix, and that one stint where I got CDs through the mail, yeah…)

But a few months ago I heard that YouTube will recommend videos which are pro-eating-restriction and pro-bulimia within four videos of the sorts of content targeted at young children. I have a history with people who experience these disorders and want to be sure we face it head-on in my family, but that doesn’t mean I will allow impressionable minds to be exposed to content which presents this issue in a positive light.

If YouTube is not going to be safe for the type of stuff my children want to watch, I needed to know.  Unfortunately the person who told me of this cannot remember their source, nor could I find any decent articles on the topic, but I thought that this smelled like a project in the making.

 The Project

I wanted to see which sorts of videos YouTube will recommend as a user continues to watch videos on their site. I started with two sets of videos, one for girls fashion and the other for weight loss information.

Fashion 1, Fashion 2, Fashion 3

Weight 1, Weight 2, Weight 3

For each video, we would get the video details, its tags, its thumbnail and then also the next five related videos.  We’d continue until we hit 250 videos.

 Getting set up

Setting up a YouTube API account is very simple. You can sign up here. Notice how there is no credit card link? Interestingly, from what I could tell, there is no cost to working with the YouTube API. But that is not to say that it’s unlimited. YouTube uses a quota-based program where you have 10,000 units of quota to spend a day on the site. Sounds like a lot, but it is really not when doing research.

Operation                               Cost   Description
v3/videos?part=snippet,contentDetails   5      retrieves info on the video, the creator, and also the tags and the description
v3/Search                               100    retrieves 99 related videos
SaveThumbnail                           0      retrieves the thumbnail of a video given the videoID

I hit my quota cap within moments and so had to run my data gathering over the course of a few days.  (At 100 units per related-video search, 10,000 units only buys about 100 searches a day.)

As for the thumbnail, I couldn’t find a supported method of downloading this using the API, but I did find this post on StackOverflow which got me started.
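The trick boils down to the fact that thumbnails live at a predictable, unauthenticated URL per video ID, so downloading one costs zero quota. A minimal sketch (the function name and video ID are just examples):

# Download a video's thumbnail straight from img.youtube.com; no API key needed.
Function Save-Thumbnail {
    param(
        [Parameter(Mandatory)][string]$VideoId,
        [string]$Path = "$PWD\$VideoId.jpg"
    )
    Invoke-WebRequest -Uri "https://img.youtube.com/vi/$VideoId/hqdefault.jpg" -OutFile $Path
}

Save-Thumbnail -VideoId 'dQw4w9WgXcQ'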

The Functions

Once I wrote these functions, I was ready to go:

Connect-PSYouTubeAccount is just another credential storage system using SecureString.  Be warned that other administrators on the device where you use this cmdlet could retrieve credentials stored as a SecureString.  If you’re curious for more info, read up on the DPAPI here, or here, or ask JeffTheScripter, as he is very knowledgeable on the topic.  FWIW this approach stores the key in memory as a SecureString, then converts it to string data only when needed to make the web call.
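To illustrate the pattern (this is a sketch of the idea, not the module’s actual code):

# Hold the API key in memory as a SecureString...
Function Connect-PSYouTubeAccount {
    param([securestring]$ApiKey = (Read-Host -AsSecureString -Prompt 'YouTube API key'))
    $script:apiKey = $ApiKey
}

# ...and only convert it back to plain text at the moment the web call is made.
Function Get-ApiKeyPlainText {
    [System.Net.NetworkCredential]::new('', $script:apiKey).Password
}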

The Summary

You can access the data I’ve already created here in this new repository, PSYouTubeScrapes.  But just be aware that it is kind of terrible UX looking through 8,000 tags and comments, so I took a dependency on the awesome PSWordCloud PowerShell module, which I used to make a word cloud out of the most common video tags.

A note on YouTube Comments: they contain the worst of humanity and should never ever be entered by any person.  I intentionally decided not to research them or publish the work I did on them, because, wow.

So, here is a word cloud of the two datasets, generated using this script.

A word cloud of the most common tags for Weight loss videos traversed with this tool, including 'theStyleDiet', 'Commedy', 'Beauty', and 'Anna Saccone', who seems to be a YouTuber popular in this area
Anna Saccone has a LOT of fashion and weight videos, but seemed pretty positive from what I saw

The Conclusion

All in all, I felt that the content was pretty agreeable!  Even if the search for children’s videos DID surface some stranger children’s videos like this one, I have to say that I didn’t think any of the videos were overly negative or exploitative, nor did I see any ‘Elsagate’ style content.  That’s not to say that YouTube is perfect, but I think it seems safe enough, even if I will probably review their YouTube history and let them use YouTube Kids instead of the full app.

Have a set of recommended videos you’d like me to search like this?  Post them in a thread on /r/FoxDeploy or leave a comment with your videos and I’ll see what we come up with.

If you conduct your own trial with this code and example and want to share, feel free to submit a pull request to the repo as well (note that we .gitignore all jpeg and png files to keep the repo size down).  You can access the data I’ve already created here in this new repository, PSYouTubeScrapes.


Quick Guide – Setting up Remote Management of your child’s PC



With everyone working remote now, it’s really helpful to have a method to remote control your kid’s computers, especially if they are hard to keep on task like mine.

So I wrote this short guide to help you get a handle on it. This guide expects you to have two computers: one for you, one for your kids to use.

Whoa I need a computer for the kids?

This guide is only going to cover PCs, not tablets.  Sorry.

If you need to buy one, this is what I’m now recommending.  Walmart has a new in-store brand of computers, called their Motile line, which is surprisingly great for the money. It can do light to moderate gaming like Minecraft and Fortnite, and also handle video editing if you’ve got a budding YouTuber, as well as programming. And the best part? It’s user-upgradable, so you can add more RAM or get a bigger hard drive down the road.


Motile 14″ AMD Laptop with Radeon 3 Graphics, 128 GB SSD and 4GB RAM

Linus Tech Tips did a great and funny video on this laptop too if you’re interested. 

/\ The above are not my affiliate links.

Note: a word about Remote Management

This method will setup Remote Viewing and control of your child’s computer

It is imperative that you treat your children with maturity, allow them breaks and make sure they know you can remote into their PC.

You don’t want to be stuck at home with kids who feel you’ve abused their trust by spying on them.  Only use these powers for good.

If you disagree with this, I don’t care so please keep that perspective to yourself.

How to enable Remote Management

On the child’s computer, login using the administrator’s account.

This is simple, just login using the main account, our goal here is to setup an account for our homeschooler child to use which will have restricted permissions.

Hit Start, then type ‘User Accounts’, then Add, edit or remove other users.


Next, under ‘Other Users’, click ‘Add someone else to this PC’.


Then select ‘I don’t have this person’s sign-in information’

I know, the UX is pretty weird here, because it’s shuttling you into setting up a Microsoft account, which you don’t need for this guide.

Finally, provide a user name and a password, if you’d like, for your new user.


We’re doing this to make sure your child isn’t able to install programs or ‘mess up’ the computer. When they want to install an app, they’ll have to come to you for the admin password (hint: this is the account you used to login and make this account for them).

Now, before you logoff, it’s time to setup the Remote Management Client.

Installing TightVNC

We’re going to do this with the awesome and free TightVNC, it’s available here.

Direct Download

On the computer we want to control, we’re going to launch the installer we just downloaded and then click Next, accept the terms of the agreement, and Next.

This will bring us to image three below, where we choose Custom Install.

Next, Next, Custom, Next, Disable Viewer, Next Finish

Now, on screen four, right-click ‘TightVNC Viewer’ and then choose ‘This Feature will not be available’, then click Next.

Next, on screen 5, make SURE that ‘Register TightVNC Server as a System Service’ is checked.

Configuring ‘Register TightVNC Server as System Service’ makes the remoting app launch when Windows boots, no matter who is using it.

And then click Next until you’re at the below screen, ‘TightVNC Server: Set Passwords’

Do not leave a remote client open with no password.  Seriously, don’t do it.

It is critically important to use a password.  This password is required any time you want to remote in to your child’s PC.  Create a password and keep it safe, this is very important.

You should also password protect the Administrative Interface too, possibly with a different password.

Last step – note the computer name!

You will need the computer name to connect to this PC.  You can get it, or set it, by hitting ‘WindowsKey+X -> System Management’


That’s it, now you can remote into this PC from another computer on the network!

AND, because your child will be running as a standard user account, this means that they cannot uninstall the app either, and also need your consent (and for you to login) to install new apps, great for keeping the PC pristine.  Running as a Standard User mitigates the vast majority of hacks, and Cryptolocker too!

 Setting up Remote Viewer

On the computer you are going to connect from –  run the same installer as above.  However, this time when you get to screen 4, right-click ‘TightVNC Server’ and choose ‘this feature will not be installed’.

The PC you’re connecting from doesn’t need and shouldn’t have TightVNC Server running on it.

This time, on image 4, disable TightVNC Server, and only install Viewer

To connect, hit Start -> TightVNC Viewer.  Then type in the name of your kid’s computer and hit Connect. Bam, it’s that easy.


This setup has worked great for me to be able to quickly pop in and reopen the Scholastic Learn from Home courses and other materials when my kids accidentally close it every three minutes.

Where to go from here? I’m thinking of adding some additional guides, like how to prohibit certain apps from running, or keeping a Chrome browser window open.

Progressive Automation Pt II – PowerShell GUIs


In our previous post in the series, we took a manual task and converted it into a script, but our users could only interface with it by ugly manual manipulation of a spreadsheet. And, while I think sheetOps (configuring and managing a Kubernetes cluster with a GoogleSheets doc!) are pretty cool, we can probably do better.

So in this post, I’ll show how I would typically go about building a PowerShell WPF GUI from an existing automation that kind of works OK.

Analysis

To begin making a UI, we need to start by analyzing which values a user will be entering, considering what inputs make sense for that, and then thinking about whether there is anything the user will need to see in the UI as well. So, looking back to the first post…

To begin with, users have their own spreadsheet they update like this, it’s a simple CSV format.

 

HostName Processed ProcessedDate
SomePC123
SomePC234
SomePC345

Previously, our users were manually adding computers to a list of computer names. That kind of scenario is best handled by the TextBox input. Or, if we hate our users, we can make them provide input with a series of sliders.

Me: The ideal phone number input control doesn’t exis–


Gif Credit – Twitter

So we need at least a TextBox.

We need a confirmation button too, to enter the new items. We also need some textblocks to explain the UI. Finally, a Cancel/Reset button to zero out the text box.

We should also provide feedback of how many items we see in their input, so we should add a label which we can update.

That brings us up to:

  • Inputs
    • TextBox for ComputerNames
    • Buttons
      • OK
      • Cancel
  • Display Elements
    •  Welcome / Intro Text
    •  Confirmation Area
    •  Updatable Label to show count for devices input
    • DataGrid to show current contents

A note on TextBoxes:
As soon as we provide TextBoxes to users, all kinds of weird scenarios might happen.  Expect it!

For instance, users will copy and paste from e-mails in Outlook, or from spreadsheets in Excel. They might also type a list of computers in Notepad, separated by newlines (\r\n). Or maybe they’re more of the comma-separated type and will try to separate entries with commas.  These are all predictable scenarios we should account for in our UI, so we should give the user some kind of confirmation of what we see from their typing in the TextBox, and our form should handle most of the weird things they’ll try.

That’s why we need Confirmation. If you provide UI without confirmation, users will hate you and e-mail you (or worse, they might call you!!) for help, so be sure to do it the right way and think of their needs from the get-go, or you will enjoy getting to hear from them a lot.

Don’t make UI that will make your users hate you, like this one
depicts a Microsoft Windows 95 Era application with Volume Control as the title.  Instead of a volume dial as normally seen, this app in the screenshot has 100 different radio buttons to click on to change volume.

With all of these components in mind, time to get started.

Making the thing

We’re going to open up Visual Studio, pick a WPF app and then do some drag and dropping. If you are getting a bit scared of how to do it, or what you should do to install it, check out some of the previous posts in my GUI Series, here!

You should end up with something like this:

Which will look like this when rendered!

Shows a pretty ugly UI
Easily the ugliest UI we’ve done so far

To wire up the buttons, I wrote a few helper functions for the logic, which look like this.


function loadListView(){
    $global:deviceList = new-object -TypeName System.Collections.ArrayList
    $devices = import-csv "$PSScriptRoot\devices.csv" | Sort-Object Processed
    ForEach($device in $devices){
        # [void] suppresses the index that ArrayList.Add would otherwise emit
        [void]$global:deviceList.Add($device)
    }
    $WPFdevice_listView.ItemsSource = $global:deviceList
}

function cancelButton(){
    $WPFok.IsEnabled = $false
    $wpfdeviceTextbox.Text = $null
    $wpflabelCounter.Text="Reset"
    }

$wpfdeviceTextbox.Add_TextChanged({
    if ($wpfdeviceTextbox.Text.Length -le 5){
        return
    }
    $WPFok.IsEnabled = $true
    $deviceTextbox = $wpfdeviceTextbox.Text.Split(',').Split([System.Environment]::NewLine).Where({$_.Length -ge 3})
    $count = $deviceTextbox.Count
    $wpflabelCounter.Text=$count
})

$WPFCancel.Add_Click({
    cancelButton
})

$WPFok.Add_Click({
    $deviceTextbox = $wpfdeviceTextbox.Text.Split(',').Split([System.Environment]::NewLine).Where({$_.Length -ge 3})
    ForEach($item in $deviceTextbox){
        [void]$global:deviceList.Add([pscustomObject]@{HostName=$item})
    }
    set-content "$PSScriptRoot\devices.csv" -Value $($deviceList | ConvertTo-csv -NoTypeInformation)
    cancelButton
    loadListView
})

To walk through these: we set up an ArrayList to track our collection of devices from the input file in loadListView, then define behavior in the $WPFok.Add_Click handler to save the new items to the devices.csv file. This is simple, and much harder to mess up than our previous approach of telling users to update a .csv file manually.

🔗Get the complete source here 🔗

Wait, where’s the beef XAML Files?

You may also notice a new method of loading up the .XAML files.

[void][System.Reflection.Assembly]::LoadWithPartialName('presentationframework')

$xamlPath = "$($PSScriptRoot)\$((split-path $PSCommandPath -Leaf ).Split(".")[0]).xaml"
if (-not(Test-Path $xamlPath)){
    throw "Ensure that $xamlPath is present within $PSScriptRoot"
}
$inputXML = Get-Content $xamlPath
$inputXML = $inputXML -replace 'mc:Ignorable="d"','' -replace "x:N",'N' -replace '^<Win.*', '<Window'
[xml]$XAML = $inputXML

After some time away from writing PowerShell GUIs, I now think it is unnecessarily verbose to keep your .xaml content within the script, and I now recommend letting your XAML layouts live happily next to the script and logic code. So I’ve modified the template as shown here to automatically look for a matching named .xaml file within the neighboring folder. Simple and easy to read!

Next time

And that’s that! Was this the world’s best GUI? Yes. Yes of course it was!

Join us next time where we explore a whole new world, don’t you dare close your eyes, of aspnet core as an alternative way of approaching automation.

If you’re still looking for something to do, try this out this great walkthrough of terrible UI traits by a UI design consulting firm. Whatever you do, don’t do this in your UI and you’ll be off to a good start.


Joining Microsoft


Picture of the author in front of the Microsoft Logo sign in Redmond Washington on the microsoft campus

I have really loved these last three years with #BigBank #SpoilersItWasWellsFargoAllAlong, made some great friends, and had some awesome experiences creating and sharing sessions at MMS with the friends I made along the way.

My career for the last ten years has been focused on automating, deploying, and managing Microsoft technologies. And now, I’m going to get a chance to help work on them as well!

Starting May 18th, I am happily joining Microsoft’s Azure Compute team as a Developer. I’ll be remaining in Atlanta, and working from home for the foreseeable future.

What to Expect

This blog has always been a place for me to show you how I do it, and I will continue to do the same thing, with my own same flavor and perspective. All thoughts and perspectives will be my own and will not be my employer’s.

I’ll update this blog in the coming weeks when I have tips to share about what I’ve been working on, or as post ideas strike me!

DIY Microsoft Teams On-Air Light!


Children. You love them. They in turn, run into your meetings all the time. Sometimes wearing pants.

Wouldn’t it be great to have a way to keep them informed of when Daddy or Mommy is in a meeting? Something nice and big and obvious that they can just totally ignore, right?

That’s why I sought to design my own perfect on-air light, to automatically turn on when I joined Teams Meetings.  Won’t you join me in this journey together, and you can build your own?

Banner graphic, says 'How-To guide, On-air light' and depicts a light with red letters that say 'ON-AIR', illuminated and hanging above a door frame

Why make one?

Great question, and if any of these describe you, you can probably just stop and buy one of the off-the-shelf products that answer this need.

  • I don’t have a closed door I can work behind.
  • It would be ok to just have something for my desk
  • I enjoy inflicting my children upon others

But if you do want to make your own…read on!

You will need…

  • Wemo Smart Switch (I’m using the small rectangular ones)
  • Wemo App installed on a device
  • A free account for IFTTT
  • The Lync 2013 SDK (just the `Microsoft.Lync.Model.dll` to be precise)
  • A suitable Lightbulb
  • A very talented partner to lovingly fashion your On-Air light for you!

With all of the products acquired, let’s get started.

Setting up the Smart Switch

This can be surprisingly hard. If you buy the three or five packs of the Wemo Mini Smart Switch, rectangular style, they will likely be the Mini.82C or F7C063. Depending on your luck and if you buy the bulk packaging, you might end up with ones like I got, which had Firmware so old the Wemo Smart app as of July 2020 would be unable to configure them.

If that happens to you, here’s how to get them going.

  1. Plug in Smart Switch
  2. On your phone, disable automatic Wireless Switching in your WiFi settings (this is on you to find, but it’s probably under Advanced settings)
  3. When the Switch blinks White/Orange, connect directly to its WiFi network manually.
  4. Now open the Wemo App.
  5. It will launch in the ‘Add new device’ experience, so proceed to now connect as usual, give the device a good name, then update the Firmware.

Do this for each switch to make your life easier.

I’m calling my device ‘MeetingLight’.

Before moving on, you should have one plug connected to a light or fan or whatever that responds when you turn it on and off with the Wemo app.

Connecting Switches to IFTTT

If-This-Then-That is an awesome resource, an automation engine that provides endless capabilities and is really amazing and wonderful.

I like it. I like it a lot.

In this section, we’ll create a new flow we can use that starts with a Web Push and ends with asking Wemo nicely to do something for us.

Login to https://ifttt.com/ and click on Create.

Shows the If this then that user interface, with a box highlighting the word 'Create'

Click on ‘This’ and choose ‘Webhooks’:

You actually do click the plus sign or the word This!

This is the icon you are looking for.

Shows a textbox with the word 'webhook' entered and below it a large picture which also says webhook.
The Webhook logo is so pretty!

Select ‘Receive a web request’

Shows the IFTTT UI, and a box with the text 'Receive a web request' is displayed, which says the automation will be triggered whenever a web request occurs
This is so cool!

 

Next, choose ‘That’, where we’ll tell IFTTT what to do when this flow happens.

Shows the If this then that logo again, but now the IF contains the webhook logo, showing this flow begins with a webhook
The User interface speaks to me! See, it’s the same logo but now it calls out that the flow begins with a Webhook. Excellent UX.

Search for Wemo Smart Plug and you’ll have to log in through an OAuth process to connect the services together.

the IFTTT ui, now with the heading 'Choose action service', and in the text box to search, Wemo Smart was entered. The only option is the Wemo smart plug as the action to trigger.
You’d pick your smart bulb, fan or crockpot if you were turning those on and off when entering a meeting…

Hm, maybe a flow to trigger my George Foreman grill to make some bacon for me?

Shows a list of possible wemo actions, including Turn On, Turn Off, Turn on then off, and Toggle back and forth
There are a lot of possibilites here!

Now, pick the smart device we setup way back in section one to enact the action upon.

I am picking ‘Meeting light’.

Shows the final step of the IFTTT flow, confirming the starting and stopping points of the flow and has a large 'FINISH" button at the bottom.

Finally, click ‘Finish’ on the review and finish page, and go ahead and try it out to confirm your flow works.

💡 Do this one more time to setup a ‘TurnOffTheLight’ flow too! 🤓

My two flows are named meetingStart and meetingStop.

Retrieve the URL and convert to a PowerShell Function

This part is so easy. Still within IFTTT, click on ‘Documentation’ from the Maker: Webhooks page.

Depicts the maker\webhooks page and shows a box drawn around the large 'Documentation' button on the corner.
Clicking here on the Documentation button shows you how to formulate your requests to IFTTT.

I only drew a big box in the screen shot because I, embarrassingly, just couldn’t find it!  The next page shows you your API key and how to trigger your events.

shows a heavily redacted screenshot of the IFTTT UI page, where the URL format needed is displayed to trigger flows. They are in the format of https://maker.ifttt.com/trigger/{event name goes here/with/key/{API key goes here}

This will show you how to formulate your request and the URL to hit.

https://maker.ifttt.com/trigger//with/key/

But aren’t these tokens in the clear?

 

No, they are not. With HTTPS, as we have discussed before on this blog in The Case of the Spooky Certificate, even the URL itself is encrypted in transit. Only the target server, in this case maker.ifttt.com, is transmitted in the clear.

Now let’s make these into the world’s ugliest PowerShell functions.

Function meetingStart {
irm https://maker.ifttt.com/trigger/meetingStart/with/key/apiKeyGoesHere -method Post
}

Function meetingStop {
irm https://maker.ifttt.com/trigger/meetingStop/with/key/apiKeyGoesHere -method Post
}

And to test them…

Connecting to Microsoft Teams

Here you will need the Microsoft Lync 2013 SDK. You don’t have to install it; just open the .exe with 7-Zip, then manually run the x86-flavored .msi.

Or if you’re really cool, extract that too and just get this dll file, Assemblies\Desktop\Microsoft.Lync.Model.dll.

You can also just search for it on the web, some folks bundle it on Github with their projects.

Once you have that…

As of this writing, retrieving user presence through the Graph API requires special permissions. Some tenants, like your company’s Office 365 tenant, might allow regular users a token to retrieve delegated info, but not all tenants do this. If they don’t, then you may require Tenant Admin permissions to hit the Graph API and get presence state back.  That felt like kind of overkill to turn on a light, if you ask me, so I looked to other options.

Wait, what is user presence?

It’s the Office Unified Communications (sometimes called Office UC) term for being Away, Present, Presenting, and so on.

Next up, I had poor luck using the modern Office UC SDK to connect directly to Teams to retrieve the status, and gave up after a few hours in the interest of staying true to the ‘pressure’s on’ hackathon spirit.

So, to retrieve status, we will query it from Skype4Business! How elegant, right?

Getting into it

The root of our woes is that the presence of a person is protected info, and rightly so. Imagine if a vendor knew the second you sat down at your desk and could call you every time.  It would get old, and fast.

To be trusted with presence info, apps like Office, Teams and Skype all had to do some heavy lifting to retrieve and set our Presence state, and we can only view that info about peers if we authenticate and use our account, or are federated, which means using an account.  Again, heavy lifting.

So, in order for us to do it in code, here’s what we can do.

Add-Type -Path "C:\Program Files (x86)\Microsoft Office 2013\LyncSDK\Assemblies\Desktop\Microsoft.Lync.Model.dll";

#Gets a reference to the currently running Skype4Business client
$lyncclient = [Microsoft.Lync.Model.LyncClient]::GetClient()

#Gets a reference to our special contact object from Skype
$myContact = $lyncclient.Self.Contact;

#Calls our contact to update the status and retrieve an `Availability` property back
$myState = $myContact.GetContactInformation("Availability")

See, even retrieving our own state results in a call that Lync/Skype4Business processes for us.

But it works! Now to bake the whole thing into some code to run…
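The finished loop was an embedded gist, but the shape of it is simple enough to sketch: poll our own availability and call the two IFTTT functions from earlier when it changes (the state names come from the SDK’s ContactAvailability enum, so treat the matching below as an assumption to verify against your client):

# Poll presence every 15 seconds and flip the light on state changes.
$inMeeting = $false
while ($true) {
    $availability = $myContact.GetContactInformation("Availability").ToString()
    $busyNow = $availability -match 'Busy|DoNotDisturb'

    if ($busyNow -and -not $inMeeting) {
        meetingStart   # joined a call, light on
        $inMeeting = $true
    }
    elseif (-not $busyNow -and $inMeeting) {
        meetingStop    # call over, light off
        $inMeeting = $false
    }

    Start-Sleep -Seconds 15
}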

And it works! When I join a call or a meeting, in just a few moments, the light outside my door turns on!

Shows a make-shift 'on-air' light of the kind you would find in a news radio booth to indicate the host is live on air.
Ain’t she a beaut!

I realize that my instructions on how to actually make the On Air light fixture are akin to this.

a humorous image showing how to draw an owl in two steps.  The first step is two simple circles.  The next step shows an incredibly ornate drawing of an owl with the instructions 'now draw the rest of the damn owl'

My wife made the whole thing for me!  She used a leftover children’s crafting lunchbox and some black and red vinyl for the graphic, which she cut out using a Cricut machine.

What’s next?

I’ll update this as I find better ways to do it, of course. Wait, before you leave, do you know of a better way!?

Share it in the comments or on our subreddit! Did you make your own? I’d love to see it!

Tag me on Twitter @FoxDeploy and I’ll retweet the coolest on-air lights folks create.


PowerShell quickie – function to make your Mocks faster



In C#, writing unit tests is king, and Moq is the hotness we use to mock objects and methods, much like the Mock objects we get with Pester in PowerShell.

But one rough part of it is the syntax for Moq, which requires you to write a handler and specify each input argument, which can get pretty verbose and tiresome.

To ease this up, try this function, which will take a method signature and convert it into a sample Mock.Setup or Mock.Verify block, ready for testing.
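The function itself shipped as an embedded gist; here’s a rough sketch of the idea (the parsing is deliberately naive, and nested generics with commas in the parameter list aren’t handled):

Function ConvertTo-MoqSetup {
    param(
        [Parameter(Mandatory)][string]$Signature,   # e.g. 'Task<User> GetUserAsync(string name, int id)'
        [string]$MockName = 'mockService'
    )

    if ($Signature -notmatch '(?<name>\w+)\s*\((?<params>[^)]*)\)') {
        throw "Could not parse a method name and parameter list from '$Signature'"
    }

    $method = $Matches['name']
    # Turn 'string name, int id' into 'It.IsAny<string>(), It.IsAny<int>()'
    $anyArgs = ($Matches['params'] -split ',' | Where-Object { $_.Trim() } | ForEach-Object {
        'It.IsAny<{0}>()' -f ($_.Trim() -split '\s+')[0]
    }) -join ', '

    "$MockName.Setup(x => x.$method($anyArgs)).Returns(default);"
    "$MockName.Verify(x => x.$method($anyArgs), Times.Once);"
}

ConvertTo-MoqSetup -Signature 'Task<User> GetUserAsync(string name, int id)'
# mockService.Setup(x => x.GetUserAsync(It.IsAny<string>(), It.IsAny<int>())).Returns(default);
# mockService.Verify(x => x.GetUserAsync(It.IsAny<string>(), It.IsAny<int>()), Times.Once);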
