Azure AD Proxy, OpenID SSO, and Azure AD Request Identification via Header Values

Backstory

I recently found myself writing some OpenID/SSO code and realized that, for some reason, Azure AD Application Proxy doesn’t rewrite the reply URL header value. This means that while you connect through Azure AD Proxy to access your app, when your internal app then authenticates to Azure AD via OpenID (or SAML), a successful sign-in returns you to the internal URL, not the proxied URL.

Manually Setting the RedirectUri / Reply URL

First, understand that you cannot set this value on the Azure side; it MUST be set in the app. In our case we wrote our own app, so to fix it we added code to handle the OnRedirectToIdentityProvider event and set our own hardcoded Azure AD Proxy external URL. We then cleaned this up by moving the external URL into a parameter in the configuration file instead of the code itself.

options.Events.OnRedirectToIdentityProvider = (context) => {
    context.ProtocolMessage.RedirectUri = <Azure AD External URL/SSOpath>;
    return Task.CompletedTask;
};

Determining an Azure AD Proxy Client Request from a Normal One

Next up, we didn’t want to just hard-code the Azure external URL, because then we could never use the internal URL for testing. So we also added a check for the following request header value:

Name: HTTP_X_MS_PROXY
Value: AzureAD-Application-Proxy

We now check whether HTTP_X_MS_PROXY is present and, if so, change the RedirectUri to the Azure AD Proxy external URL. Otherwise, we let it return the internal URL.
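Putting the two pieces together, the header check can live inside the same event handler. A minimal sketch, with two assumptions: the HTTP_X_MS_PROXY server variable surfaces in ASP.NET Core as the X-MS-Proxy request header, and the external URL is read from a hypothetical configuration key named AzureAdProxy:ExternalUrl (adjust both to your environment):

```csharp
options.Events.OnRedirectToIdentityProvider = (context) =>
{
    // Azure AD Application Proxy stamps this header on requests it forwards.
    // HTTP_X_MS_PROXY is the server-variable form; the raw header is X-MS-Proxy.
    var viaProxy = context.HttpContext.Request.Headers.TryGetValue(
            "X-MS-Proxy", out var proxyHeader)
        && proxyHeader == "AzureAD-Application-Proxy";

    if (viaProxy)
    {
        // "AzureAdProxy:ExternalUrl" is a hypothetical config key holding
        // the proxied external URL plus the SSO callback path.
        context.ProtocolMessage.RedirectUri =
            configuration["AzureAdProxy:ExternalUrl"];
    }
    // Otherwise leave the default (internal) RedirectUri for local testing.

    return Task.CompletedTask;
};
```

This keeps the internal URL working for direct/test access while proxied requests get the external reply URL.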

Microsoft Ignite 2022 Review | A good start, but hopefully not repeated

Before we talk about Ignite 2022, let’s lay out my personal biases first. I have attended 16 Microsoft TechEd/Ignite conferences, not having missed one since 2004, a fact I hold with a lot of pride. I largely attribute my professional achievements and skills to these events. While it’s always fun to watch newbies miss the important parts of the event for parties, free beer, expo swag, etc., the real professionals use these events to stay ahead of the constant state of change in the industry. One more bias: I have a distaste for Microsoft Marketing in recent years. They overpromise, selling products that don’t exist yet, or ones that do but don’t function at the levels they claim. More on why that matters later.

I am a Microsoft Administrator of 20 years who witnessed the fall of Novell, deployed NT4 and then AD2000+. Fast-forward to now:  ahead of the curve with thousands of Azure AD joined (not hybrid) workstation clients, over 200 SAML SSO apps via Azure AD, MFA with Number Matching (aka Password-less) as part of the onboarding process, and almost no VPN requirements for my end users. 

Sadly, I must admit I didn’t get enough out of the event. What upsets me most is that next year is slated to be another two-day format, which is core to why I didn’t get enough out of the event this year.

“Two days isn’t enough.”

Both mornings had keynotes that consumed over 25% of the entire event. Those keynotes can help provide guidance on where Microsoft thinks it is going. However, it’s mostly just Microsoft Marketing laying groundwork for hopefully self-fulfilling prophecies. Increasingly, they are selling ideas or concepts of future features which aren’t even in private preview yet. The note on company direction is useful, but not at the cost of 25% of the event.

Mr. Nadella (Microsoft’s CEO) couldn’t even be bothered to deliver the keynote in person, even though the event was in his home state. This felt like a major smack in the face to those like me who flew six hours to attend in person, and it speaks to just how little that half day-plus of keynote actually mattered.

Let’s cover the three most critical components of Ignite: the deep technical sessions, the face time with product managers and engineers, and the networking with peers.

Not enough Deep Technical Sessions

There is always an internal fight between event planners and speakers over the technical depth and length of a session. Level 200 sessions are almost always useless to me: they are sales pitches, and I don’t need the sales pitch. I’m already sold; I need to know how to deploy and manage the solution.

Far too often the needed real-world knowledge doesn’t make it into docs.microsoft.com (which is perpetually outdated, incorrect, or simply missing critical details as a byproduct of Microsoft’s newfound agility). The level 300/400 sessions are hosted by PMs, engineers, and MVPs. These professionals always deliver value without the filter of marketing’s specter, and they provide enough tactical information to actually start deploying solutions (or avoid the gotchas).

There were not enough deep technical sessions. This gets back to my point that a day and a half isn’t enough time to cover all the Microsoft product solutions I need to be an expert in. There wasn’t even a specific session about Microsoft Teams Shared Channels, and that’s the exact kind of session I needed and expected this year.

Face Time with Product Teams

The next most important feature of these events is face time with product managers and engineers. It’s where I can really get straight answers. It’s access so pure and helpful for solving our design issues, or for providing critical feedback, that it tends to have an impact on future releases.

I had almost no face time this year, which was infuriating. There wasn’t enough expo space for each product team to have its own area. Instead, a scheme was devised to use that precious day and a half of session time for “ask the expert” windows, where a given product team might be in a specific area for about two hours. If those two hours overlapped with a not-to-be-missed session, you had to choose.

Opportunities to Network with Peers Weren’t as Prevalent as They Should Have Been

There were very short periods of time between sessions, which left little time to strike up conversations with the people I was sitting next to. The meals were so basic that people didn’t linger over them, and there wasn’t even an hour to eat lunch. The lack of a proper vendor expo hall made this worse, as there was no reason to stick around for the end-of-day free drinks and snacks.

Cost and Time Constrained

I gave the Microsoft events team a lot of leeway for this event. I wouldn’t be shocked if, at the start of 2022, they didn’t know whether they were going to put on the event at all. That short window to throw the event together caused things like no swag. To be clear, I don’t care if I get a 17th backpack; in fact, my wife will be thrilled not to have to make me pick one to toss this year. But a lot of people wondered if Microsoft was being cheap with no swag. I don’t think so; I think logistically they couldn’t pull it off.

But on the topic of cheap, I wonder how much the event budget played a part. Fewer attendees, far fewer vendors, and perhaps many of the other issues: the length of the event, the lack of enough large session rooms, not enough space in the hub for all product teams to have a home base, even the lack of enough proper sessions. Can these all be blamed on cost?

“This was a v1 Hybrid Infant event”

Microsoft event staff seem downright giddy about fleshing out this half in-person, half online format. I heard comments that future years might have multiple in-person locations, with sessions broadcast to the other locations and to remote users.

While I think the idea is “cool,” I think the event staff are losing sight of what the event “should” be. I get this awful feeling the event is turning into one big sales pitch instead of what it “needs” to be: education. Now more than ever, the lack of authored books or proper documentation coming from the product teams means this event must fill the gap, if Microsoft wants to see its customers deploy its new solutions.

One misconception that was abundantly clear was the idea that we would spend part of our day and a half of session time watching the online-only content. While it’s true that many of the sessions were recorded or were online only, that skips an important fact: after this week, my carriage turns back into a pumpkin. I will be thrust into never-ending backlogs, and my time for skills advancement will be over.

Speaking of the rushed, chaotic nature of the event, I was not the only person who thought there were sessions on Friday the 14th. With this misunderstanding, I booked my travel home on Saturday (#NoRedEyes). That left me in Seattle for a whole day with no event to go to. I ended up finding great spots in the Starbucks Roastery and the Seattle Public Library to get through as many recorded sessions as I could. At 1.25x play speed, and armed with the skip-ahead-10-seconds button, I got through 10 of them, far more than on any other day. If we can’t persuade Microsoft to bring back the 4.5-day format, I would likely book through Saturday again next year, just so I have that one last day to learn more.

For 2023 I would like to see the event restored to almost five days. That would leave enough time to jam in all those level 200 keynotes/sales pitches and still leave room for the level 300 sessions my colleagues need. Microsoft also needs a big enough hub that each product team has a defined space, and it needs to require those experts to be in that space during the end-of-day drinks and food. It needs larger session rooms, and more of them. It needs to encourage more MVPs to submit technical session ideas. Better yet, it should ask customers what sessions they would like to see (Microsoft Teams Shared Channels, cough cough). It needs a larger gap between sessions, at least 30 minutes, and a full hour for lunch so there is time to go into the hub. And it needs to run sessions until 6 pm and start them earlier, as in past years.

To be clear, I learned things, just not enough for a whole year. Almost like Moore’s law, the rate of change in M365/Azure accelerates year after year, and I’m getting more staff to manage it all. I need more technical information to be as successful as in previous years. Like I said, this year was the first one back, so Microsoft gets a pass, but next year can’t be like this year, or I fear I won’t be able to keep my company at the bleeding edge of innovation and security.

PowerShell Error: The underlying connection was closed: An unexpected error occurred on a send

I got mad the other day trying to do a simple wget (i.e., Invoke-WebRequest) against an Azure Function I made, and I kept getting:

The underlying connection was closed: An unexpected error occurred on a send

I tried switching to the .NET WebClient but got the same error.

What was more frustrating is that it worked on my dev machine, and it worked in a browser on the server I was running the code on; it just didn’t work in PowerShell.

The Fix

Apparently Windows PowerShell 5 defaults to TLS 1.0, while Azure Functions require TLS 1.2. The fix is super simple; just add this in your code on its own line:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
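In context, the line goes before the request itself. A minimal sketch (the Function URL is a placeholder for your own):

```powershell
# Force TLS 1.2 for this PowerShell session (Windows PowerShell 5 defaults lower)
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

# Placeholder Azure Function URL -- substitute your own endpoint
$response = Invoke-WebRequest -Uri "https://myfunctionapp.azurewebsites.net/api/myFunction" -UseBasicParsing
$response.StatusCode
```

Note this assignment replaces the session’s protocol list rather than adding to it; if you need TLS 1.2 alongside the existing protocols, combine them with -bor instead.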

Query Azure SQL Database Table via Powershell

Real quick one… I have used similar code for ages to query local on-premises SQL databases. However, Azure requires the use of encrypted connections. Here is some fully working code:

#Set defaults (optional), which lets you skip defining the instance, user, and password each time
$AzureDefaultInstanceName = "myUniqueAzureSQLDBName"
$AzureDefaultUserID = "myUserIDToAzureSQL"
$AzureDefaultPassword = "myPasswordToAzureSQL"

#The actual function
Function get-azureSQL (
[string]$InstanceName = $AzureDefaultInstanceName
,[string]$UserID = $AzureDefaultUserID
,[string]$Password = $AzureDefaultPassword
,[string]$Query){

$connectionString = "Server=tcp:$($InstanceName).database.windows.net,1433;"
$connectionString = $connectionString + "Database=$($InstanceName);"
$connectionString = $connectionString + "User ID=$($UserID)@$($InstanceName);"
$connectionString = $connectionString + "Password=$($Password);"
$connectionString = $connectionString + "Encrypt=True;"
$connectionString = $connectionString + "TrustServerCertificate=False;"
$connectionString = $connectionString + "Connection Timeout=30;"

$SqlConnection = New-Object System.Data.SqlClient.SqlConnection
$SqlConnection.ConnectionString = $connectionString

$SqlCmd = New-Object System.Data.SqlClient.SqlCommand
$SqlCmd.CommandText = $Query
$SqlCmd.Connection = $SqlConnection

$SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
$SqlAdapter.SelectCommand = $SqlCmd

$DataSet = New-Object System.Data.DataSet
$SqlAdapter.Fill($DataSet) | Out-Null
$SqlConnection.Close()

return $DataSet.Tables[0]
}

#Querying Azure SQL using the defaults defined above
get-azureSQL -Query "select * from logs"

#Querying Azure SQL without defaults
get-azureSQL -InstanceName "myUniqueAzureSQLDBName" -UserID "myUserIDToAzureSQL" -Password "myPasswordToAzureSQL" -Query "select * from logs"

Sure, you can install the Azure PowerShell module and then the SQL commands too, but most of the time you just need to get in quickly and grab something. This code is super fast, works every time for me, and best of all… no installs.

If this helped you, or you want to suggest an improvement, please leave it in the comments.

Enjoy,

-Eric