
Exchange 2010 Site Resilient DAGs and Majority Node Set Clustering – Part 3

Welcome to Part 3 of Exchange 2010 Site Resilient DAGs and Majority Node Set Clustering.  In Part 1, I discussed what Majority Node Set Clustering is and how it works with Exchange Site Resilience when you have one DAG member in a Primary Site and one DAG member in a Failover Site.  In Part 2, I discussed how Majority Node Set Clustering works with Exchange Site Resilience when you have two DAG members in a Primary Site and one DAG member in a Failover Site.  In this Part, I will show an example of how Majority Node Set Clustering works with Exchange Site Resilience when you have two DAG members in a Primary Site and two DAG members in a Failover Site.

Part 1

Part 2

Part 3

Real World Examples

Each of these examples will show DAG Models with a Primary Site and a Failover Site.

4 Node DAG  (Two in Primary and Two in Failover)

In the following screenshot, we have 5 Servers.  Four are Exchange 2010 Multi-Role Servers; two in the Primary Site and two in the Failover Site.  The Cluster Service is running only on the four Exchange Multi-Role Servers.  More specifically, it runs on the Exchange 2010 Servers that have the Mailbox Server Role.  When Exchange 2010 utilizes an even number of Nodes, it utilizes Node Majority with File Share Witness.  If you have dedicated HUB and/or HUB/CAS Servers, you can place the File Share Witness on those Servers.  However, the File Share Witness cannot be placed on a Server with the Mailbox Server Role.

So now we have our five Servers; four of them being Exchange.  This means we have five cluster objects.  The four Mailbox Servers that are running the Cluster Service are voters, and the File Share Witness is a witness that the voters use to maintain cluster quorum.  So the question is, how many voters/servers/cluster objects can I lose?  Well, if you read the section on Majority Node Set (which you have to understand), you know the formula is (number of nodes / 2) + 1.  This means we have (4 Exchange Servers / 2) + 1 = 3.  This means that 3 cluster objects must always be online for your Exchange Cluster to remain operational.
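To make the arithmetic concrete, here is a small sketch (the function is made up for illustration; it is not an Exchange cmdlet):

```powershell
# Quorum requirement for Majority Node Set: (nodes / 2) + 1, rounded down.
# Hypothetical helper for illustration only.
function Get-QuorumRequirement {
    param([int]$Nodes)
    [math]::Floor($Nodes / 2) + 1
}

Get-QuorumRequirement -Nodes 4   # 3 cluster objects must stay online
Get-QuorumRequirement -Nodes 3   # 2 cluster objects must stay online
```

With 4 Exchange nodes plus the FSW, 3 of the 5 cluster objects must remain online, so the DAG tolerates 2 failures.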

But now let’s say one or two of your Exchange Servers go offline.  Well, you still have at least three cluster objects online.  This means your cluster will still be operational.  If all users/services were utilizing the Primary Site, then everything continues to remain completely operational.  If you were sending SMTP to one of the servers in the Failover Site or users were for some reason connecting to the Failover Site, they will need to be pointed to another Exchange Server that is operational in the Primary Site or the Failover Site.  This of course depends on whether the user databases are being replicated from a mailbox database failover standpoint.

But what happens if you lose a third node, in which all DAG members in the Failover Site go offline, including the FSW?  Well, based on the formula above, we need to ensure we have 3 cluster objects operational at all times.  At this time, the entire cluster goes offline.  You need to go through the steps provided in the site switchover process, but in this case, you would be activating the Primary Site and specifying a new Alternate File Share Witness Server that exists in the Primary Site so you can activate the Exchange 2010 Servers in the Primary Site.  The DAG will actively use the File Share Witness since there will be 2 Exchange DAG Members remaining, which is an even number of nodes.  And again, when you have an even number of nodes, you will use a File Share Witness.

But what happens if you lose two nodes in the Primary Site as well as the FSW due to something such as a Power Failure or a Natural Disaster?  Well, based on the formula above, we need to ensure we have 3 cluster objects operational at all times.  At this time, the entire cluster goes offline.  You need to go through the steps provided in the site switchover process, but in this case, you would be activating the Failover Site and specifying a new Alternate File Share Witness Server that exists (or will exist) in the Failover Site so you can activate the Exchange 2010 Servers in the Failover Site.  The DAG will actively use the Alternate File Share Witness since there will be 2 Exchange DAG Members remaining, which is an even number of nodes.  And again, when you have an even number of nodes, you will use a File Share Witness.
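As a rough sketch of the switchover steps described above (the DAG, site, and server names are placeholders; follow the full documented Datacenter Switchover process in production), the cmdlets involved look like this:

```powershell
# Mark the failed Primary Site members as stopped in the DAG configuration.
Stop-DatabaseAvailabilityGroup -Identity DAG1 -ActiveDirectorySite PrimarySite -ConfigurationOnly

# Restore the DAG in the Failover Site, specifying the Alternate FSW
# that the surviving (even number of) members will use for quorum.
Restore-DatabaseAvailabilityGroup -Identity DAG1 -ActiveDirectorySite FailoverSite -AlternateWitnessServer HUBCAS3 -AlternateWitnessDirectory C:\DAGFSW
```
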

Once the Datacenter Switchover has occurred, you will be in a state that looks as such.  An Alternate File Share Witness is not redundancy for the FSW that was in your Primary Site.  It’s used only during a Datacenter Switchover, which is a manual process.

Once your Primary Site becomes operational, you will re-add the two Primary DAG Servers to the existing DAG, which will still be using the Alternate FSW Server in the Failover Site, and you will now be switched into a Node Majority with File Share Witness Cluster instead of just Node Majority.  Remember I said with an odd number of DAG Servers, you will be in Node Majority, and with an even number, the Cluster will automatically switch itself to Node Majority with File Share Witness?  You will now be in a state that looks as such.

Part of the Failback Process would be to switch back to the old FSW Server in the Primary Site.  Once done, you will be back into your original operational state.
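That witness move is a one-liner against the DAG (the server name and path are placeholders):

```powershell
# Point the DAG back at the original witness server in the Primary Site.
Set-DatabaseAvailabilityGroup -Identity DAG1 -WitnessServer HUBCAS1 -WitnessDirectory C:\DAGFSW

# Confirm which witness the cluster is actually using.
Get-DatabaseAvailabilityGroup -Identity DAG1 -Status | Format-List WitnessServer,WitnessShareInUse
```
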

As you can see with how this works, the question that may arise is where to put your FSW.  Well, it should be in the Primary Site with the most users or the site that has the most important users.  With that in mind, I bet another question arises: why the site with the most users or the most important users?  Because some environments may want to use the above with an Active/Active Model instead of an Active/Passive.  Some databases may be activated in both sites.  But, with that, if the WAN link goes down, the Exchange 2010 Servers in the Failover Site lose quorum since they can’t contact at least 2 other cluster objects.  Again, you must have three cluster objects online.  This also means that each cluster object must be able to see two other cluster objects.  Because of that, the Exchange 2010 Servers in the Failover Site will go completely offline.

To survive this, you really must use 2 different DAGs.  One DAG where the FSW is in the First Site and a second DAG where its FSW is in the Second Site.  In my example, users that live in the First Active Site would primarily be using the Exchange 2010 DAG Members in the First Active Site which would be on DAG 2.  Users that live in the Second Active Site would primarily be using the Exchange 2010 DAG Members in the Second Active Site which would be on DAG 1. This way, if anything happens with the WAN link, users in the First Active Site would still be operational as the FSW for their DAG is in the First Active Site and DAG 2 would maintain Quorum.  Users in the Second Active Site would still be operational as the FSW for their DAG is in the Second Active Site and DAG 1 would maintain Quorum.

Note: This would require twice the number of servers since a DAG Member cannot be a part of more than one DAG.  Each visual representation below of a 2010 HUB/CAS/MBX is a separate server.

The Multi-DAG Model would look like this.


Exchange 2010 Site Resilient DAGs and Majority Node Set Clustering – Part 2

Welcome to Part 2 of Exchange 2010 Site Resilient DAGs and Majority Node Set Clustering.  In Part 1, I discussed what Majority Node Set Clustering is and how it works with Exchange Site Resilience when you have one DAG member in a Primary Site and one DAG member in a Failover Site.  In this Part, I will show an example of how Majority Node Set Clustering works with Exchange Site Resilience when you have two DAG members in a Primary Site and one DAG member in a Failover Site.

Part 1

Part 2

Part 3

Real World Examples

In Part 1, I showed a Real World example when you have one Exchange DAG member in the Primary Site and one Exchange DAG member in the Failover Site.  In this Part, I am showing a Real World example when you have two Exchange DAG members in the Primary Site and one Exchange DAG member in the Failover Site.

3 Node DAG  (Two in Primary and One in Failover)

In the following screenshot, we have 3 Servers.  All three are Exchange 2010 Multi-Role Servers; two in the Primary Site and one in the Failover Site.  The Cluster Service is running on all three Exchange Multi-Role Servers.  More specifically, it runs on the Exchange 2010 Servers that have the Mailbox Server Role.  When Exchange 2010 utilizes an even number of Nodes, it utilizes Node Majority with File Share Witness.  Because we have an odd number of Nodes, we are utilizing Node Majority and will not utilize a File Share Witness.

So now we have our three Servers; all three of them being Exchange.  This means we have three voters and do not need a File Share Witness as we have a third node.  So the question is, how many voters/servers/cluster objects can I lose?  Well if you read the section on Majority Node Set (which you have to understand), you know the formula is (number of nodes /2) + 1.  This means we have (3 Exchange Servers / 2) rounded down = 1 + 1 = 2.  This means that 2 cluster objects must always be online for your Exchange Cluster to remain operational just like if we were utilizing 2 DAG members with a File Share Witness.

But now let’s say one of your Exchange Servers goes offline.  Well, you still have at least two cluster objects online.  This means your cluster will still be operational.  If all users/services were utilizing the Primary Site, then everything continues to remain completely operational.  If you were sending SMTP to the Failover Site or users were for some reason connecting to the Failover Site, they will need to be pointed to the Exchange Server in the Primary Site.

But what happens if you lose a second node?  Well, based on the formula above, we need to ensure we have 2 cluster objects operational at all times.  At this time, the entire cluster goes offline.  You need to go through the steps provided in the site switchover process, but in this case, you would be activating the Primary Site and specifying a new Alternate File Share Witness Server that exists in the Primary Site so you can activate the Exchange 2010 Server in the Primary Site.  The DAG won’t actively use the File Share Witness, but you should specify it anyway because part of the Failback process is re-adding the Primary Site Servers back to the DAG once they become operational.  And once you re-add the second DAG node, you now have two DAG members in the DAG, which will switch the DAG Cluster into Node Majority with File Share Witness, which is why you still need to specify a File Share Witness.

But what happens if you lose two nodes in the Primary Site?  Well, based on the formula above, we need to ensure we have 2 cluster objects operational at all times.  At this time, the entire cluster goes offline.  You need to go through the steps provided in the site switchover process, but in this case, you would be activating the Failover Site and specifying a new Alternate File Share Witness Server that exists (or will exist) in the Failover Site so you can activate the Exchange 2010 Server in the Failover Site.  The DAG won’t actively use the File Share Witness, but you should specify it anyway because part of the Failback process is re-adding the Primary Site Servers back to the DAG once they become operational.

Once the Datacenter Switchover has occurred, you will be in a state that looks as such.  An Alternate File Share Witness is not redundancy for the FSW that was in your Primary Site.  It’s used only during a Datacenter Switchover, which is a manual process.

Once your Primary Site becomes operational, you will re-add the Primary DAG Server to the existing DAG, which will still be using the Alternate FSW Server in the Failover Site, and you will now be switched into a Node Majority with File Share Witness Cluster instead of just Node Majority.  Remember I said with an odd number of DAG Servers, you will be in Node Majority, and with an even number, the Cluster will automatically switch itself to Node Majority with File Share Witness?  You will now be in a state that looks as such.

Part of the Failback Process would be to switch to a FSW Server in the Primary Site.  Once done, you will be back into your original operational state.

Now the final step of the Failback Process would be to re-add your final remaining DAG Member in the Primary Site.  Once done, your cluster will switch back into a Node Majority Cluster and will no longer be utilizing the FSW.
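Re-adding a recovered member is done with Add-DatabaseAvailabilityGroupServer (names are placeholders); the cluster adjusts its quorum model automatically as the member count changes between odd and even:

```powershell
# Re-add the recovered Primary Site server to the DAG.  With the node count
# back to an odd number, the cluster returns to Node Majority on its own.
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX1
```
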

As you can see with how this works, the question that may arise is where to put the majority of your Exchange DAG Members.  Well, it should be in the Primary Site with the most users or the site that has the most important users.  With that in mind, I bet another question arises: why the site with the most users or the most important users?  Because some environments may want to use the above with an Active/Active Model instead of an Active/Passive.  Some databases may be activated in both sites.  But, with that, if the WAN link goes down, the Exchange 2010 Server in the Failover Site loses quorum since it can’t contact at least 1 other cluster object.  Again, you must have two cluster objects online.  This also means that each cluster object must be able to see one other cluster object.  Because of that, the Exchange 2010 Server will go completely offline.

To survive this, you really must use 2 different DAGs.  One DAG where the majority of your Exchange 2010 DAG Members is in the First Site and a second DAG where the majority of the Exchange 2010 DAG Members is in the Second Site.  Users that live in the First Active Site would primarily be using the Exchange 2010 DAG Members in the First Active Site.  Users that live in the Second Active Site would primarily be using the Exchange 2010 DAG Members in the Second Active Site.  This way, if anything happens with the WAN link, users in the First Active Site would still be operational as the majority of the Exchange 2010 DAG Members for their DAG is in the First Active Site and DAG 1 would maintain Quorum.  Users in the Second Active Site would still be operational as the majority of the Exchange 2010 DAG Members for their DAG is in the Second Active Site and DAG 2 would maintain Quorum.

Note: This would require twice the number of servers since a DAG Member cannot be a part of more than one DAG.  Each visual representation below of a 2010 HUB/CAS/MBX is a separate server.

The Multi-DAG Model would look like this.

 


Exchange 2010 Site Resilient DAGs and Majority Node Set Clustering – Part 1

I’ve talked about this topic in some of my other articles but wanted to create an article that talks specifically about this model and shows several different examples of a Database Availability Group (DAG)’s tolerance for node and File Share Witness (FSW) failure.  Many people don’t properly understand how the Majority Node Set Clustering Model works.  In my article here, I talk about Database Activation Coordination Mode and have a section on Majority Node Set.  In this article, I want to visually show some real world examples of how the Majority Node Set Clustering Model works.  This will be a multi-part article and each Part will have its own example.

Part 1

Part 2

Part 3

Majority Node Set

Majority Node Set is a Windows Clustering Model, similar to the Shared Quorum Model, but different.  Both Exchange 2007 and Exchange 2010 Clusters use Majority Node Set Clustering (MNS).  This means that a majority of your votes (server votes and/or 1 file share witness) need to be up and running.  The proper formula for this is (n / 2) + 1, rounded down, where n is the number of DAG nodes within the DAG.  With DAGs, if you have an odd number of DAG nodes in the same DAG (Cluster), you have an odd number of votes, so you don’t have a witness.  If you have an even number of DAG nodes, you will have a file share witness; in case half of your nodes go down, you have a witness who will act as that extra +1 vote.

So let’s go through an example.  Let’s say we have 3 servers. This means that we need (number of nodes which is 3 / 2) + 1  which equals 2 as you round down since you can’t have half a server/witness.  This means that at any given time, we need 2 of our nodes to be online which means we can sustain only 1 (either a server or a file share witness) failure in our DAG.  Now let’s say we have 4 servers.  This means that we need (number of nodes which is 4 / 2) + 1 which equals 3.  This means at any given time, we need 3 of our servers/witness to be online which means we can sustain 2 server failures or 1 server failure and 1 witness failure.
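If you want to verify which quorum configuration and witness a DAG’s underlying cluster is actually using, something along these lines works (the DAG name is a placeholder, and the cluster.exe output varies by Windows version):

```powershell
# From Exchange: the witness configuration and whether the witness is in use.
Get-DatabaseAvailabilityGroup -Identity DAG1 -Status | Format-List Name,WitnessServer,WitnessShareInUse

# From Windows Failover Clustering: the current quorum configuration.
cluster.exe /cluster:DAG1 /quorum
```
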

Real World Examples

Each of these examples will show DAG Models with a Primary Site and a Failover Site.

2 Node DAG  (One in Primary and One in Failover)

In the following screenshot, we have 3 Servers.  Two are Exchange 2010 Multi-Role Servers; one in the Primary Site and one in the Failover Site.  The Cluster Service is running only on the two Exchange Multi-Role Servers.  More specifically, it runs on the Exchange 2010 Servers that have the Mailbox Server Role.  When Exchange 2010 utilizes an even number of Nodes, it utilizes Node Majority with File Share Witness.  If you have dedicated HUB and/or HUB/CAS Servers, you can place the File Share Witness on those Servers.  However, the File Share Witness cannot be placed on a Server with the Mailbox Server Role.

So now we have our three Servers; two of them being Exchange.  This means we have two voters and a File Share Witness.  The two Mailbox Servers that are running the cluster service are voters and the File Share Witness is just a witness that the voters use to determine cluster majority.  So the question is, how many voters/servers can I lose?  Well, if you read the section on Majority Node Set (which you have to understand), you know the formula is (number of nodes / 2) + 1.  This means we have (2 Exchange Servers / 2) + 1 = 2.  This means that 2 cluster objects must always be online for your Exchange Cluster to remain operational.

But now let’s say one of your Exchange Servers goes offline.  Well, you still have at least two cluster objects online.  This means your cluster will still be operational.  If all users/services were utilizing the Primary Site, then everything continues to remain completely operational.  If you were sending SMTP to the Failover Site or users were for some reason connecting to the Failover Site, they will need to be pointed to the Exchange Server in the Primary Site.

But what happens if you lose a second node?  Well, based on the formula above, we need to ensure we have 2 cluster objects operational at all times.  At this time, the entire cluster goes offline.  You need to go through the steps provided in the site switchover process, but in this case, you would be activating the Primary Site and specifying a new Alternate File Share Witness Server that exists in the Primary Site so you can activate the Exchange 2010 Server in the Primary Site.  The DAG won’t actively use the File Share Witness, but you should specify it anyway because part of the Failback process is re-adding the Primary Site Servers back to the DAG once they become operational.

But what happens if you lose two nodes in the Primary Site?  Well, based on the formula above, we need to ensure we have 2 cluster objects operational at all times.  At this time, the entire cluster goes offline.  You need to go through the steps provided in the site switchover process, but in this case, you would be activating the Failover Site and specifying a new Alternate File Share Witness Server that exists (or will exist) in the Failover Site so you can activate the Exchange 2010 Server in the Failover Site.  The DAG won’t actively use the File Share Witness, but you should specify it anyway because part of the Failback process is re-adding the Primary Site Servers back to the DAG once they become operational.

Once the Datacenter Switchover has occurred, you will be in a state that looks as such.  An Alternate File Share Witness is not redundancy for the FSW that was in your Primary Site.  It’s used only during a Datacenter Switchover, which is a manual process.

Once your Primary Site becomes operational, you will re-add the Primary DAG Server to the existing DAG which will still be using the 2010 Alternate FSW Server in the Failover Site and you will now be switched into a Node Majority with File Share Witness Cluster instead of just Node Majority.  Remember I said with an odd number of DAG Servers, you will be in Node Majority and with an even number, the Cluster will automatically switch itself to Node Majority with File Share Witness?  You will now be in a state that looks as such.

Part of the Failback Process would be to switch back to the old FSW Server in the Primary Site.  Once done, you will be back into your original operational state.

As you can see with how this works, the question that may arise is where to put your FSW.  Well, it should be in the Primary Site with the most users or the site that has the most important users.  With that in mind, I bet another question arises: why the site with the most users or the most important users?  Because some environments may want to use the above with an Active/Active Model instead of an Active/Passive.  Some databases may be activated in both sites.  But, with that, if the WAN link goes down, the Exchange 2010 Server in the Failover Site loses quorum since it can’t contact at least 1 other voter.  Again, you must have two voters online.  This also means that each voter must be able to see one other voter.  Because of that, the Exchange 2010 Server will go completely offline.

To survive this, you really must use 2 different DAGs.  One DAG where the FSW is in the First Site and a second DAG where its FSW is in the Second Site.  Users that live in the First Active Site would primarily be using the Exchange 2010 DAG Members in the First Active Site.  Users that live in the Second Active Site would primarily be using the Exchange 2010 DAG Members in the Second Active Site.  This way, if anything happens with the WAN link, users in the First Active Site would still be operational as the FSW for their DAG is in the First Active Site and DAG 1 would maintain Quorum.  Users in the Second Active Site would still be operational as the FSW for their DAG is in the Second Active Site and DAG 2 would maintain Quorum.

Note: This would require twice the number of servers since a DAG Member cannot be a part of more than one DAG.  Each visual representation below of a 2010 HUB/CAS/MBX is a separate server.

The Multi-DAG Model would look like this.

 


Exchange 2007 UM to Exchange 2010 UM Partial Upgrades and Redirects

General Information

There are two ways to migrate to Exchange 2010 UM:

  • Full Upgrade
  • Partial Upgrade

In a Full Upgrade scenario, you are doing a big bang migration for your Exchange 2007 UM users and moving them all to Exchange 2010 UM at the same time.  At the same time, you are replacing your Exchange 2007 UM Servers within your UM Dial Plan with Exchange 2010 UM Servers.

In a Partial Upgrade, you are going to  have Exchange 2007 UM Servers and Exchange 2010 UM Servers coexist within the same Dial Plan.

It is important to note how the call flows work in a Partial Upgrade Path.  You can see this documented very well here.  In order for the Partial Upgrade process to work, the documentation clearly states, “When you install the first Exchange 2010 UM server and add it to an existing Exchange 2007 organization, you must add the Exchange 2010 UM server to an existing UM dial plan that contains Exchange 2007 UM servers. Then you must configure each IP gateway or IP PBX to send all incoming calls to the Exchange 2010 UM servers within the same UM dial plan.”

The key part to note is that you must configure each IP Gateway object that is in the Dial Plan to now send ONLY to Exchange 2010.  The article does state this clearly and does show examples of call flows; the problem is that what isn’t really explained is what exactly is happening on the Back-End.  And that is what I am here to explain.

The basic gist of it is that Exchange 2010 will redirect the IP Gateway to Exchange 2007 where necessary.  Let’s say you have a PBX connected to a gateway which is connected to UM.  Exchange 2010 UM will always redirect the gateway for an Exchange 2007 user and the gateway will connect directly to Exchange 2007 UM.  The gateway never has to relay any information back to the PBX in this case, so there are no considerations you have to make for the PBX here.  The only consideration you should make is to make sure that the gateway has been certified against Exchange 2010 UM before you decide to do your partial upgrade.  The certified gateway/IP-PBX list for Exchange 2007 is here and the certified list for Exchange 2010 is located here.

With that said, the redirects from Exchange 2010 to Exchange 2007 work a couple different ways depending on the circumstances.  Thanks to Chun from Microsoft for providing me with these details that were documented in great detail.

There are two broad categories on how the redirection happens:

  • Before UM 2010 accepts the invite, it knows the call is for a UM 2007 user (e.g., a diversion exists and UM can tell that the call is for a 2007 user). In this case, we simply use a 302 redirect.
  • UM 2010 needs to accept the invite before it knows the call is for a UM 2007 user. E.g., someone calls into the subscriber access from a phone that we cannot resolve to a user. UM needs to answer the call first and wait for the user to punch in the mailbox extension. In this case, UM will send a REFER to the gateway to cause the gateway to send a new INVITE to the same UM 2010 server. But in the REFER header, we stick in a couple of pieces of information which show up in the new INVITE. The UM 2010 server sees this information, realizes the call is for a 2007 user, and redirects the call to UM 2007.

Example

Now let’s take a look at a real life migration example from a procedural standpoint.  Let’s start off with not having Exchange 2010 yet.  We have our IP-PBX which is sending data to an IP Gateway which is then sending data to Exchange 2007.

We then build our Exchange 2010 Server, install Exchange 2010 UM Role on it, and we then add it to our Dial Plan which will then consist of both Exchange 2007 and Exchange 2010 UM.  Keep in mind, when using OCS as the IP-PBX, you must be on at least OCS 2007 R2 CU5 and Exchange 2010 SP1 to be able to allow Exchange 2010 UM SP1 and Exchange 2007 to be in the same Dial Plan.  The reason for this is Exchange 2010 SP1 introduces capabilities that allow OCS 2007 R2 CU5+ and/or Lync to be able to do a user lookup, determine if they’re on Exchange 2010 or Exchange 2007 and route to the appropriate Exchange Version (2007 or 2010) regardless if they’re in the same Dial Plan.
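Adding the new Exchange 2010 UM server to the existing Dial Plan can be sketched as follows (the server and Dial Plan names are placeholders):

```powershell
# Associate the new Exchange 2010 UM server with the existing dial plan
# that already contains the Exchange 2007 UM servers.
Set-UMServer -Identity EX2010UM -DialPlans "MyDialPlan"
```
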

As can be seen above, we now have Exchange 2010 and Exchange 2007 in the same Dial Plan.  We have also started routing all traffic to Exchange 2010.  If the call is for an Exchange 2007  User, Exchange 2010 will redirect the IP Gateway to start talking to Exchange 2007 to service those Exchange 2007 users.

 

 


Export Spoken Name in Exchange 2010 UM

I was asked by a client recently if there was any way to export the Spoken Name in Exchange UM to a WAV file.  You can’t export this to a WAV file, but you can export it to a WMA (Windows Media Audio) file which you can then use other means to convert to a WAV file.

Now when I say SpokenName, I am referring to the audio you hear when you press this audio icon in the Outlook Contact Card.

To export this, the steps are relatively simple (though I didn’t figure this one out on my own and a very helpful Microsoft fellow CYC gave me most of the code).

Export-RecipientDataProperty -Identity <Identity> -SpokenName | ForEach-Object { Add-Content -Value $_.FileData -Path "C:\Exports\identity.wma" -Encoding Byte }

Let’s look at an example.  I will export my own Spoken Name to a WMA file.

We can see that no file currently exists.

We will now export the data to a WMA file as well as re-verify that the file was created.


Exchange 2010 Site Resilience, Multiple DAG IPs, and Cluster Resources

Exchange 2010 allows us to have Database Availability Group (DAG) members in several AD Sites.  For every subnet a DAG member’s MAPI NIC is in, we must obtain a DAG IP.  This DAG IP is a separate IP from those on the MAPI NICs themselves.  We add this DAG IP to the DAG using the Set-DatabaseAvailabilityGroup command.

Multiple DAG IPs

Let’s take a look at an example of how the architecture may look.

Taking a look at the above Visio diagram, we have two sites, Primary Site and DR Site, with one node in each.  The MAPI NIC in the Primary Site has an IP Address of 172.17.24.200.  That means that we’ll need to have a DAG IP that lives in this same subnet.  We choose a DAG IP of 172.17.24.120.  The MAPI NIC in the DR Site has an IP Address of 172.16.24.200. That means that we’ll need to have a DAG IP that lives in this same subnet.  We choose a DAG  IP of 172.16.24.120.

In order to add these DAG IP Addresses, we’ll need to run the following command.

Note: IPs on the Replication NIC’s subnet do not get added to DatabaseAvailabilityGroupIPAddresses.  Only MAPI NIC subnets get added.

Keep in mind, when adding additional IPs in the future, it is important that you include all existing DAG IPs.  The Set-DatabaseAvailabilityGroup -DatabaseAvailabilityGroupIPAddresses property is not additive.
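Since the property is not additive, always pass the full list.  A sketch using the IPs from this example:

```powershell
# Wrong: this would REPLACE the whole list with a single IP.
# Set-DatabaseAvailabilityGroup -Identity DAG1 -DatabaseAvailabilityGroupIpAddresses 172.16.24.120

# Right: include every DAG IP, existing and new.
Set-DatabaseAvailabilityGroup -Identity DAG1 -DatabaseAvailabilityGroupIpAddresses 172.17.24.120,172.16.24.120
```
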

To verify the DAG IPs were added successfully, let’s check out our DAG Properties.

In Exchange 2010 SP1, we have the ability to add our DAG IPs via the GUI.  If we go to the DAG Properties, we now see we can manage our DAG IP Addresses as well as our Witness Server and Alternate Witness Server.

This allows us to do our IP Address configuration right from the GUI instead of needing to use Set-DatabaseAvailabilityGroup  with the DatabaseAvailabilityGroupIPAddresses property and needing to worry about all previous IP Addresses being included since the property isn’t additive.

Cluster Resources

So, let’s take a look at what really happens to the cluster resources and what determines which DAG IP is active.  Let’s open the Failover Cluster Manager.  Start > Administrative Tools > Failover Cluster Manager.

After selecting our DAG, let’s take a look at the cluster resources.  We can see from here that we have two Network IP Resources.

But let’s take even a deeper look.

Select the DAG from within the Cluster Core Resources > Right-Click > Choose Properties.

Now let’s take a look at the Dependencies Tab.

As we can see, the two DAG IPs are set up with an OR dependency which means that the cluster can activate either DAG IP at any given time.  As we saw earlier, the 172.16.24.120 IP is the existing DAG IP that is online which means the DRSiteNode’s DAG IP is currently the online Network IP resource.

Let’s run a cluster command so we can fail over the default “Cluster Group” from one cluster node to another.
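The command in question (using the node names from this example) is the built-in cluster.exe group move:

```powershell
# Move the default "Cluster Group" (and with it the Primary Active Manager
# role) to the Primary Site node.
cluster.exe group "Cluster Group" /MoveTo:PrimarySiteNode
```
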

We now see the PrimarySiteNode is the node that has the “Cluster Group.”  Let’s go ahead and take a look at the Cluster Resources again and see which Network IP Resource is online.

Looks like the PrimarySiteNode’s DAG IP is now Online instead of the DRSiteNode’s DAG IP.  This means that the Network IP Resource that is online depends on which DAG Node has the “Cluster Group.”  If you recall from my previous articles, the DAG Node that has the “Cluster Group” is the DAG Node that acts as the Primary Active Manager.  The Primary Active Manager is the DAG Node responsible for choosing what databases get activated in a failover.  For more information on Active Manager, click here.


Exchange 2007/2010 Connection Filtering and Transport Configuration

Connection Filtering Basics (Blocking Connection to the Server)

Many of you know what Connection Filtering is in Exchange. It allows you to control what IPs are allowed and what IPs are blocked.   Taking a look at the following image, we can see exactly what parts of Anti-Spam utilize the connection filtering agent.

In the following image, we can see in what order the anti-spam agents run.

If you utilize the IP Block List and something is blocked, the connection dies there.  Let’s take a look at the IP Block List in action and how the connecting server’s connection is terminated.  For starters, let’s take a look at the connecting machine’s IP.

Let’s telnet to the server on port 25.

We see the connection works just fine.  Now, let’s go add the client IP to the IP Block List.  To do this, select IP Block List > Right-Click > Select Properties > Click Add > Enter Client IP Address.

Now let’s try telnetting to the Server over port 25 again.

As we can see, we cannot communicate via port 25 to the SMTP Server anymore due to the connecting IP being on the IP Block List.
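For reference, the same IP Block List entry can be managed from the Exchange Management Shell (the IP address here is just an example; substitute the connecting client’s IP):

```powershell
# Add the connecting machine's IP to the IP Block List.
Add-IPBlockListEntry -IPAddress 192.168.1.50

# Verify the entry exists.
Get-IPBlockListEntry
```
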

Connection Filtering and Non-Exchange SMTP Filtering Appliances/Servers

One of the big things here is that Connection Filtering happens based on the last untrusted IP Address.  One of the biggest things that is overlooked when using the Exchange or Forefront Connection Filtering Agent is that it is very important for you to enter the trusted SMTP IP Addresses in your organization.

This will need to be done via your Hub Transport Server.  To modify the trusted SMTP IP Addresses in your organization, go to Organization Configuration > Hub Transport > Global Settings > Message Delivery.

It is very important when using Connection Filtering to enter ALL trusted IP Addresses that handle SMTP in the organization.  This includes any type of SMTP Appliance/Server that is sending traffic to Exchange: Ironport, Sendmail, Barracuda, etc.  The reason why is that Connection Filtering looks at the sending server’s IP Address and does the lookup on that.  But let’s say it’s the Edge Transport Server and it’s receiving mail from an Ironport.

Do you really want the Connection Filtering lookup to look up the Ironport IP?  Of course not; the Ironport is an internal server.  Connection Filtering ignores any IPs listed in the above Message Delivery list.  This means that if an Exchange Edge server receives mail from an Ironport, and the Ironport IP is on that list, the Exchange Edge will then do a Connection Filtering lookup on the last untrusted IP, which would be the server that sent the mail to the Ironport (that is, if the server that sent mail to the Ironport is not also another internal device that is on the above list).

So, make sure you add all trusted IPs (Exchange and non-Exchange that are handling SMTP) internal to your organization to make sure Connection Filtering is working as it should be.
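From the shell, that trusted list is the transport config’s InternalSMTPServers property (the IP here is a placeholder for your appliance’s address):

```powershell
# Add an internal SMTP host (e.g., an Ironport) to the trusted list so
# Connection Filtering evaluates the last *untrusted* hop instead.
Set-TransportConfig -InternalSMTPServers @{Add="10.0.0.25"}

# Review the current trusted list.
(Get-TransportConfig).InternalSMTPServers
```
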

