Azure AD Proxy, OpenID SSO, and Azure AD Request Identification via Header Values

Backstory

I recently found myself writing some OpenID/SSO code and realized that, for some reason, Azure AD Proxy doesn’t rewrite the reply URL header value. This means that while you connect through Azure AD Proxy to access your app, when your internal app then attempts to authenticate to Azure AD via OpenID (or SAML), a successful sign-in returns you to the internal URL, not the proxied URL.

Manually Setting the RedirectUri / Reply URL

First you must understand: you cannot set this value on the Azure side, it MUST be set in the app. In our case we wrote our own app, so to fix it we wrote code to trap the OnRedirectToIdentityProvider event and set our own hardcoded Azure AD Proxy external URL. We later cleaned this up by making the external URL a parameter in the configuration file instead of hardcoding it.

options.Events.OnRedirectToIdentityProvider = context =>
{
    context.ProtocolMessage.RedirectUri = "<Azure AD External URL/SSOpath>";
    return Task.CompletedTask;
};

Determining an Azure AD Proxy Client Request from a Normal One

Next up, we didn’t want to just hardcode the Azure external URL, since that would mean we could never use the internal URL for testing. So we also added a check for the following request header value:

Name: HTTP_X_MS_PROXY
Value: AzureAD-Application-Proxy

We now check whether HTTP_X_MS_PROXY is present; if so, we change the RedirectUri to the Azure AD Proxy external URL. Otherwise, we let it return the internal URL.
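Putting the two pieces together, a minimal sketch of the handler might look like the following. This assumes ASP.NET Core, where the HTTP_X_MS_PROXY server variable surfaces as the X-MS-Proxy request header; the configuration key "ExternalSsoRedirectUri" is a hypothetical name for the external-URL parameter mentioned above.

```csharp
// Read the external redirect URI from configuration (hypothetical key name).
var externalRedirectUri = builder.Configuration["ExternalSsoRedirectUri"];

options.Events.OnRedirectToIdentityProvider = context =>
{
    // Azure AD Application Proxy stamps proxied requests with this header
    // (the server-variable form is HTTP_X_MS_PROXY).
    if (context.HttpContext.Request.Headers.ContainsKey("X-MS-Proxy"))
    {
        // Request came through the proxy: reply to the external URL.
        context.ProtocolMessage.RedirectUri = externalRedirectUri;
    }
    // Otherwise leave the RedirectUri alone so internal testing still works.
    return Task.CompletedTask;
};
```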

Microsoft Ignite 2022 Review | A good start, but hopefully not repeated

Before we talk about Ignite 2022, let’s lay out my personal biases first. I have attended 16 Microsoft TechEd/Ignite conferences, not having missed one since 2004. That is a fact I hold with a lot of pride, and I largely attribute my professional achievements and skills to these events. While it’s always fun to watch newbies miss the important parts of the event for parties, free beer, expo swag, etc., the real professionals use these events to stay ahead of the constant state of change in the industry. One more bias: I do have a distaste for Microsoft Marketing in recent years. They overreach to sell products that don’t exist yet, or ones that do but don’t function at the levels they claim. More on why that matters later.

I am a Microsoft administrator of 20 years who witnessed the fall of Novell and deployed NT4 and then AD2000+. Fast-forward to now: ahead of the curve with thousands of Azure AD joined (not hybrid) workstation clients, over 200 SAML SSO apps via Azure AD, MFA with Number Matching (aka Password-less) as part of the onboarding process, and almost no VPN requirements for my end users.

Sadly, I must admit I didn’t get enough out of the event. What upsets me most about this is that next year is slated to be another two-day format, which is core to why I didn’t get enough out of the event this year.

“Two days isn’t enough.”

Both mornings had keynotes that consumed over 25% of the entire event. Those keynotes can help provide guidance on where Microsoft thinks it is going. However, it’s mostly just Microsoft Marketing laying groundwork for hopefully self-fulfilling prophecies. Increasingly, they are selling ideas or concepts of future features which aren’t even in private preview yet. The note on company direction is useful, but not at the cost of 25% of the event.

Mr. Nadella (Microsoft CEO) couldn’t even be bothered to deliver the keynote in person, even though the event was in his home state. This felt like a major smack in the face to those of us who flew 6 hours to attend in person, and it speaks to just how little that half-day-plus of keynotes actually mattered.

Let’s cover the three most critical components of Ignite. The deep technical sessions, the face time with product managers/engineers, and the networking with peers. 

Not enough Deep Technical Sessions

There is always an internal fight between event planners and speakers over the technical depth and length of a session. Level 200 sessions are almost always useless to me: they are a sales pitch, and I don’t need the sales pitch. I’m already sold; I need to know how to deploy and manage the solution.

Far too often the needed real-world knowledge doesn’t make it into docs.microsoft.com (which is perpetually outdated, incorrect, or simply missing critical details as a byproduct of Microsoft’s newfound agility). The level 300/400 sessions are hosted by PMs, Engineers, and MVPs. These professionals always deliver value without the filter of marketing’s specter, and they provide enough tactical information to actually start deploying solutions (or avoiding the gotchas). 

There were not enough deep technical sessions. This gets back to my point that a day and a half isn’t enough time to cover all Microsoft product solutions that I need to be an expert in. There wasn’t even a specific session about Microsoft Teams Shared Channels, and that’s the exact kind of session I needed and expected this year.

“Face time with Product Teams”

The next most important feature of these events is face time with product managers and engineers. It’s where I can really get straight answers. It’s access so pure and helpful for solving our design issues, or for providing critical feedback, that it tends to have an impact on future releases.

I had almost no face time this year, which was infuriating. There wasn’t enough expo space for each product team to have its own area. Instead, a scheme was devised to use that precious day and a half of session time for “ask the expert” windows, where a given product team might be in a specific area for about two hours. If those two hours overlapped with a not-to-be-missed session, you ended up having to choose.

“Opportunities to Network with Peers weren’t as prevalent as they should have been”

There were very short gaps between sessions, which left little time to strike up conversations with the people I was sitting next to. The meals were so basic that people didn’t linger over them, and there wasn’t even a full hour for lunch. The lack of a proper vendor expo hall made this worse, as there was no reason to stick around for the end-of-day free drinks and snacks.

“Cost and Time Constrained”

I gave the Microsoft events team a lot of leeway for this event. I wouldn’t be shocked if, at the start of 2022, they didn’t yet know whether they would put the event on at all. That short window to throw the event together caused things like the lack of swag. To be clear, I don’t care if I get a 17th backpack; in fact, my wife will be thrilled not to have to make me pick one to toss this year. But a lot of people wondered if Microsoft was being cheap by skipping swag. I don’t think so; I think they logistically couldn’t pull it off.

But on the topic of cheap, I wonder how much the event budget played a part. Fewer attendees, far fewer vendors, and perhaps many of the other issues: the length of the event, the lack of enough large session rooms, not enough space in the hub for all product teams to have a home base, even the lack of enough proper sessions. Can these all be blamed on cost?

“This was a v1 Hybrid Infant event”

Microsoft event staff seem downright giddy about fleshing out this half in-person, half online format. I heard comments that future years might have multiple in-person locations, with sessions broadcast to the other locations and to remote users.

While I think the idea is “cool”, I think the event staff are losing sight of what the event “should” be. I get this awful feeling the event is turning into one big sales pitch instead of what it “needs” to be: education. Now more than ever, the lack of authored books or proper documentation coming from the product teams means this event must fill the gap. That is, if Microsoft wants to see its customers deploy its new solutions.

One misconception that was abundantly clear was the idea that we would spend part of our day and a half of session time watching the online-only content. While it’s true that many of the sessions were recorded or online-only, that skips an important fact: after this week, my carriage turns back into a pumpkin. I will be thrust into never-ending backlogs, and my time for skills advancement will be over.

Speaking of the rushed, chaotic nature of the event, I was not the only person who thought there were sessions on Friday the 14th. With this misunderstanding, I booked my travel home on Saturday (#NoRedEyes), which left me in Seattle for a whole day with no event to go to. I ended up finding great spots in the Starbucks Roastery and the Seattle Public Library to get through as many recorded sessions as I could. At 1.25x play speed, and armed with skip-ahead-10-seconds, I did in fact get through 10 of them, far more than on the other days. If we can’t persuade Microsoft to bring back the 4.5-day format, I would likely book through Saturday again next year, just so I have that one last day to learn more.

For 2023, I would like to see the event restored to almost five days. That would leave enough time to jam in all those level 200 keynotes / sales pitches and still leave room for the level 300 sessions my colleagues need. They also need a big enough hub that each product team has a defined space, and they need to require those experts to be in that space during the end-of-day drinks and food. They need larger session rooms, and more of them. They need to encourage more MVPs to submit technical session ideas; better yet, they should ask customers what sessions they would like to see (Microsoft Teams Shared Channels, cough cough). They need to make the gaps between sessions larger, at least 30 minutes, with a full hour for lunch so there is time to go into the hub. And they need to run sessions until 6pm and start them earlier (like they did in past years).

To be clear, I learned things, just not enough for a whole year. Almost like Moore’s law, the rate of change in M365/Azure is accelerating year after year, and I’m getting more staff to manage it all. I need more technical information to be as successful as in previous years. Like I said, this year was the first one back. Microsoft gets a pass this year, but next year can’t be like this year, or I fear I won’t be able to keep at the bleeding edge of innovation and security at my company. 

Calendar Invites from Office 365 forwarded to GMail / G-Suite lack Accept / Reject buttons

This post might be a bit of a bear. I write it mainly for myself as a point of reference, but perhaps it can help others.

In our case, we had recently acquired a new company that used Google G-Suite / GMail. While we waited to migrate them over, we set up Mail-Enabled User objects (without mailboxes) in Office 365 as stubs. These stubs provide GAL entries for those employees and, by leveraging the “targetAddress” attribute, forward all email to the users’ mailboxes on G-Suite (a different email domain).

For the most part this worked well: we get calendar free/busy from the objects, and email forwarding worked. Except that sometimes calendar invites did not have Accept or Reject buttons.

We finally got to the bottom of this. It has everything to do with two factors (both really the same thing, but worth going through the motions):

  • TNEFEnabled must be set to $false ($null isn’t good enough) in PowerShell
  • “Use Rich-Text Format” Set to “Never” in the ECP/Mail Flow/Remote Domains/<domain>

Connect to the Microsoft Exchange Online PowerShell module, then run this:

Get-RemoteDomain | select Name, TNEFEnabled

If you don’t have the GMail / G-Suite domain listed, add it with New-RemoteDomain:

New-RemoteDomain -Name <Name of External Domain> -DomainName domain.com

Then run this command:

Set-RemoteDomain -Identity <Name of External Domain> -TNEFEnabled $false

Next up, we want to validate the rich-text (RTF) setting:

  • Go to the ECP: https://outlook.office.com/ecp
  • Navigate to Mail flow on the left
  • Navigate to Remote domains at the top
  • Find the domain in question
  • Ensure “Use rich-text format:” is set to “Never”

That should be it. Allow 30 minutes or so for the setting to sync to all Exchange servers, and it should be working once more.

What I think I understand better now is that the MS KB docs are incorrect: $null on TNEFEnabled means to fall back to the per-user defaults. You must use $false to force the corrective action.
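To confirm nothing was missed, a quick check (assuming an active Exchange Online PowerShell session) might be:

```powershell
# List any remote domains whose TNEF setting is not explicitly disabled.
# Remember: $null here means "use per-user defaults", which is not enough.
Get-RemoteDomain |
    Where-Object { $_.TNEFEnabled -ne $false } |
    Select-Object Name, DomainName, TNEFEnabled
```

If the GMail / G-Suite domain shows up in this list, the fix above has not fully applied (or has not yet synced).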

Fix: NVidia Shield (Moonlight) selecting the wrong Monitor

A while back I stopped paying for consoles and put my efforts into a good PC rig. However, I still like lying on the couch and using an Xbox controller. Moonlight fixed this for me (using a 4K Apple TV and wired Ethernet): full FPS, full resolution (with RTX, I might add), no lag. Perfection!

But there was a problem when I upgraded my PC. Moonlight kept using the right-hand (wrong) monitor instead of the one dead center. This meant I had to get up, go into my office, and force the game onto the correct monitor (or worse).

However after a lot of trial and error I figured out how to fix it.

First, you need to make sure the monitor in question is in fact the “BIOS default”. What does that mean? For me, when I power on the tower, the Dell logo shows up on that screen. I had to swap DP cables around until that happened.

Next, you need your preferred monitor to be the first one Windows finds. Notice I didn’t say primary? NVIDIA doesn’t respect the primary-monitor flag (they should, but they don’t).

Some background: Windows makes “profiles” for every unique pairing of monitors, keyed by monitor serial numbers, which is why swapping cables doesn’t really fix the issue. My assumption is that NVIDIA looks for monitor 00 and uses that one. So the real trick is to get WINDOWS to address your preferred monitor first.

To get Windows to make your preferred monitor #00 (what I am calling “first found”), you need to figure out which cable it’s connected to. Make sure it’s the only one attached, then go to the following section of the registry:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers\Configuration

Delete all the subkeys of Configuration. I did this a few times and it never did me much harm, though be aware it could create issues for you. A system restore point might be a good idea, or at least an export of Configuration (right-click -> Export).

Then disconnect all other monitors except the one you care about and reboot. Once rebooted, plug in your other monitors. You will have to reorder them again, and that should do it.
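The manual steps above can be sketched in PowerShell like this (run as Administrator; the backup filename is just an example, and as with the manual method, deleting these keys is at your own risk):

```powershell
# The key that holds Windows' saved monitor-pairing profiles.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\GraphicsDrivers\Configuration'

# Back up the whole Configuration key first, so it can be restored
# later with "reg import" if anything goes wrong.
reg export 'HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers\Configuration' `
    "$env:USERPROFILE\Desktop\GraphicsConfig-backup.reg" /y

# Delete every saved profile (the subkeys of Configuration).
Get-ChildItem $key | Remove-Item -Recurse

# Now disconnect all but the preferred monitor and reboot, as above.
```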

I figured this out after realizing that, even after purging drivers and configs, Windows always knew how to put the monitor order back together again (even when swapping cables). That is how I found these keys, which hold that saved profile; deleting them kills it. The only other part to figure out was how to make sure the monitor I cared about was first.

Hope it helps and happy gaming!

Stop Chrome (or any app) from preventing Screen Locking and/or Screen Saver

A minor problem has plagued me for some time: I would be done for the day and leave the home office, yet hours later all four screens were still on. I hate paying for the power to leave my screens on all night, plus it reduces the screens’ longevity. Most importantly, it’s a security issue: I want my computer to lock when I am not at it. Many times I press Win+L to lock, but sometimes I forget.

I generally leave my security cams up on the top screen, and I was fairly sure Chrome has a way of telling Windows not to go to sleep because media is playing. Well, I was right.

Detecting the Issue

Simply run this command to see what is keeping the system awake:

powercfg /requests

Notice there under DISPLAY: that Chrome is playing video?

The Fix

To block Chrome from preventing the computer from sleeping, simply run this command (change chrome.exe to another app name if it’s not Chrome):

powercfg -requestsoverride PROCESS chrome.exe awaymode display system
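As far as I know, powercfg can also show the overrides currently in place, and repeating the command without any request types removes one, which is handy if you ever want Chrome to keep the screen awake again:

```powershell
# List all request overrides currently in effect:
powercfg /requestsoverride

# To undo the override later, repeat it with no request types listed:
powercfg -requestsoverride PROCESS chrome.exe
```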

Enjoy,

-Eric

PowerShell Error: The underlying connection was closed: An unexpected error occurred on a send

I got mad the other day: doing a simple wget (i.e., Invoke-WebRequest) against an Azure Function I had made, I kept getting:

The underlying connection was closed: An unexpected error occurred on a send

I tried switching to the .NET WebClient, but got the same error.

What was more frustrating is that it worked on my dev machine, and it worked in a browser on the server I was running the code on; it just didn’t work in PowerShell.

The Fix

Apparently Windows PowerShell 5 defaults to TLS 1.0, while Azure Functions require TLS 1.2. The fix is super simple: just add this to your code on its own line:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
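The one-liner above replaces the allowed protocol list outright. A slightly gentler variant (same API, bitwise OR) adds TLS 1.2 without dropping whatever is already enabled; the Azure Function URL below is a placeholder, not a real endpoint:

```powershell
# Enable TLS 1.2 in addition to any protocols already allowed.
[Net.ServicePointManager]::SecurityProtocol =
    [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12

# The original request should now succeed, e.g.:
# Invoke-WebRequest -Uri "https://<your-function-app>.azurewebsites.net/api/<function>"
```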

Can not install Exchange Online PowerShell MFA Update due to ClickOnce Application Security Settings

If you cannot install the Exchange Online PowerShell update that enables MFA (which can be found here: https://outlook.office.com/ecp -> Hybrid -> second button) because Windows won’t let you install it, then edit this area of the registry:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\Security\TrustManager\PromptingLevel

Set Internet to Enabled (and if that isn’t enough, set them all to Enabled).

Once installed, set it back to Disabled.
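The same tweak can be scripted; a minimal sketch (run as Administrator) that remembers the original value so you can put it back afterwards:

```powershell
# ClickOnce prompting levels live under this key.
$key = 'HKLM:\SOFTWARE\Microsoft\.NETFramework\Security\TrustManager\PromptingLevel'

# Remember the current Internet zone value so it can be restored later.
$old = (Get-ItemProperty -Path $key -Name Internet -ErrorAction SilentlyContinue).Internet

# Allow the ClickOnce install prompt for Internet-zone apps.
Set-ItemProperty -Path $key -Name Internet -Value 'Enabled'

# ...install the Exchange Online PowerShell module, then revert:
if ($old) { Set-ItemProperty -Path $key -Name Internet -Value $old }
else      { Set-ItemProperty -Path $key -Name Internet -Value 'Disabled' }
```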


Record Hyper-V Console

Every few months, thanks to Windows 10, it’s time to roll out a new image. This is a simple yet tedious task, and thanks to modern-day multitasking it’s easy to miss something when testing new images. Then I have to restart the whole process, wasting time.

This script records the screen by taking a screenshot every second. I suppose you could use a third-party tool to merge them into a video if you really needed to.

The script includes de-duplication of images, so if the screen stops moving, so does the recording. That, plus using the JPEG format, keeps the images fairly small.

Oh yes, don’t forget to “Run as Admin”.

Enjoy!

{{CODE1}}

Big thanks to Ben Armstrong for the original work on this script:

https://blogs.msdn.microsoft.com/virtual_pc_guy/2016/05/27/capturing-a-hyper-v-vm-screen-to-a-file/

Windows Update Stuck on “Searching for Updates” on Windows Server 2012 R2

This one was a nightmare. If you search the internet for “Searching for Updates” you will find a lot of pages, but none that I saw had this resolution.

In my case, the problem on my server was actually related to Flash updates. After working with Microsoft Support, it was discovered that a large number of pending Adobe Flash updates were causing the search to never finish, so the fix was to manually update Flash. This was done by installing KB3214628.

Hope this helps someone else out; this took MS Support weeks to figure out.

-Eric