
Archive for October, 2009

Exchange 2010 RTM DAG using Server 2008 R2 – Part 1

Now that Exchange Server 2010 and Server 2008 R2 are both RTM, I thought it would be nice to create a multi-part article on how to create a Database Availability Group (DAG) on two Exchange Server 2010 RTM nodes utilizing Server 2008 R2 as their Operating System. This article will guide you through the entire process: preparing Server 2008 R2 for Exchange 2010 RTM, installing Exchange 2010 RTM, creating databases, creating a DAG, adding our nodes to the DAG, and then replicating our databases between both servers.

Part 1

Part 2

Part 3

Part 4

Lab Setup

Guest Virtual Machines

One Server 2008 R2 Enterprise (Standard can be used) RTM x64 Domain Controller.

Two Server 2008 R2 Enterprise (Enterprise Required) RTM x64 (x64 required) Member Servers where Exchange 2010 RTM will be installed with the Mailbox, Client Access Server, and Hub Transport Server roles.

One Server 2008 Enterprise (Standard can be used) RTM x64 server that will be our File Share Witness (FSW) server. This box will not serve any other purpose in this lab other than FSW.


Assumptions:

  • You have a domain that contains at least one Server 2003 SP2 Domain Controller (DC).
  • You have configured the IP settings accordingly for all servers to be on the same subnet which includes the public NICs for both Failover Cluster nodes. I have provided the IP scheme of my lab below, but this will vary depending on your needs and VMware configuration.

Computer Names

DAG Node 1 – SHUD-EXC01

DAG Node 2 – SHUD-EXC02

Domain Controller – SHUD-DC01


Configuration of Exchange 2010 DAG Nodes

Processor: 4

Memory: 1024MB

Network Type – MAPI NIC (MAPI Network)

Network Type – Replication NIC (Replication Network)

Virtual Disk Type – System Volume (C:\): 50GB Dynamic

Storage Note: In a real-world environment, depending on the needs of the business and environment, it is best practice to install your database and logs on separate disks/spindles, both of which are separate from the spindles that the C:\ partition utilizes. We will be installing Exchange 2010 RTM databases/logs on the same disks/spindles for simplicity's sake in this lab. While Exchange 2010 moves a lot of the database IO to sequential IO, there is still quite a bit of random IO occurring, so it is still recommended to place the database and logs on separate spindles.

Network Note: A single NIC DAG is supported.  It is still recommended to have at least one dedicated replication network.  If using only a single NIC, it is recommended for this network to be redundant as well as gigabit.

Configuration of Domain Controller

Processor: 4

Memory: 512MB

Network Type – External NIC

Virtual Disk Type – System Volume (C:\): 50GB Dynamic

IP Addressing Scheme (Corporate Subnet otherwise known as a MAPI Network to Exchange 2010 DAGs)

IP Address – 192.168.1.x

Subnet Mask –

Default Gateway –

DNS Server – (IP Address of the Domain Controller/DNS Server)

IP Addressing Scheme (Heartbeat Subnet otherwise known as a Replication Network to Exchange 2010 DAGs)

IP Address – 10.10.10.x

Default Gateway – 10.10.10.x

Subnet Mask –

LAB Architecture

Some notes about this architecture:

  • Exchange 2010 DAGs remove the limitation of requiring Mailbox Only Role Servers as existed with Exchange 2007 Clustered Servers
  • Exchange 2010 is no longer Cluster Aware and utilizes only a few pieces of the Failover Cluster Services, such as the Cluster Heartbeat and Cluster Networks.  More on this in an upcoming part.
  • UM is supported on these two DAG nodes but is recommended to be installed on separate servers
  • For HTTP publishing, ISA can be utilized.  For RPC Client Access Server publishing (which ISA cannot do, as it publishes HTTP traffic only) with CAS Servers on the DAG nodes, you must use a hardware load balancer due to a Windows limitation preventing you from using Windows NLB and Clustering Services on the same Windows box.  Alternatively, you can deploy two dedicated CAS Servers and utilize Windows NLB to load balance your RPC Client Access Server traffic.
  • A two-node DAG requires a witness that is not on a server within the DAG.  Unlike Exchange 2007, Exchange 2010 automatically takes care of FSW creation, though you do have to specify the location of the FSW. It is recommended to specify the FSW to be created on a Hub Transport Server.  Alternatively, you can put the witness on a non-Exchange server after some prerequisites have been completed.  I will be deploying the FSW on a member server (which happens to be my OCS Server in my lab) and will display the prerequisite process for achieving this.

Preparation of Exchange 2010 RTM DAG Nodes

Network Interface Card (NIC) Configuration

First thing we will want to do is configure the IP Configuration of both the MAPI NIC and the Replication NIC.

We will want to rename our MAPI NIC connection to MAPI and our Replication NIC connection to Replication. To do so, go to Start > Right-Click Network > Properties.

Once in the Control Panel, Choose Change Adapter Settings.

Now you will be presented with the Network Connections window. This is where you can modify the network properties for each NIC in your server. For your Internal Corporate Connection, which is also your MAPI Network, rename your Local Area Connection to MAPI. Likewise, for your Private Heartbeat Connection, which is also your Replication Network, rename your Local Area Connection to Replication. After you have done this, it will look something similar to the following:
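If you prefer to rename the connections from a command prompt instead of clicking through the GUI, netsh can do it as well. A minimal sketch — the original connection names below are assumptions; yours may differ:

```
netsh interface set interface name="Local Area Connection" newname="MAPI"
netsh interface set interface name="Local Area Connection 2" newname="Replication"
```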


Part of the assumptions earlier in this article is that you have a properly configured TCP/IP network where all nodes are properly connected. Because of this, I will skip the Public (MAPI) TCP/IP configuration and proceed to configuring the Private Heartbeat (Replication) NIC.

Important: When configuring the MAPI NIC, you can leave IPv6 enabled if you are using Server 2008 R2.  There is an issue with Server 2008 (which still exists in SP2) that prevents IPv6 from listening on port 6004, which in turn prevents Outlook Anywhere from working. You can read more about that here. Again, Server 2008 R2 does not have this issue.  So if you happen to be installing Exchange 2010 on Server 2008, disable IPv6 as discussed below.  If using Server 2008 R2, feel free to leave IPv6 enabled.

Note: You can, if you’d like, disable File and Printer Sharing for Microsoft Networks.  In Exchange 2007 SP1, Microsoft provided the ability to allow for continuous replication to occur over the private network.  Because Exchange 2007 utilizes SMB for log shipping, it is required to have the File and Printer Sharing enabled.  Exchange 2010 no longer utilizes SMB and now utilizes TCP.  More on this later in an upcoming Part.

In addition to disabling IPv6 from the NIC Properties, I would follow these instructions here to fully disable IPv6 on your Exchange 2010 system as disabling it on the NIC itself doesn’t fully disable IPv6.  While the article is based on Exchange 2007, it’s a Windows based modification and will apply to a system running Exchange 2010 as well.
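For reference, a minimal sketch of the registry change those instructions describe — setting the DisabledComponents value for the IPv6 stack (0xFF disables all IPv6 components except the loopback interface; a reboot is required afterwards):

```powershell
# Sketch: fully disable IPv6 in the registry (reboot required afterwards)
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters" `
    -Name "DisabledComponents" -PropertyType DWord -Value 0xFF -Force
```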

Double-Click or Right-Click > Properties on the Replication NIC to begin configuration.

Uncheck the following:

  • Internet Protocol Version 6 (TCP/IPv6) – Disable IPv6 in the registry as well, as noted above.

Select Internet Protocol Version 4 (TCP/IPv4) and press the Properties button. For NodeA, the only TCP/IP configuration we will need is the IP Address and Subnet Mask. NodeA's and NodeB's IP configurations are shown below:

Go into the Advanced NIC configuration settings by clicking the Advanced button. From there, you will navigate to DNS tab and de-select “Register this connection’s addresses in DNS.”

Select the WINS tab and de-select “Enable LMHOSTS lookup” and configure the NetBIOS setting to “Disable NetBIOS over TCP/IP.”
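If you want to script this instead of using the GUI, the NetBIOS setting can also be flipped through WMI. A sketch, assuming you can identify the Replication adapter by its description — the filter below is an assumption for my lab and will likely need adjusting:

```powershell
# TcpipNetbiosOptions: 0 = via DHCP, 1 = enabled, 2 = disabled
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = True" |
    Where-Object { $_.Description -like "*Replication*" } |
    ForEach-Object { $_.SetTcpipNetbios(2) }
```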

Once you are done configuring the Advanced settings, press OK three times and you will be back at the Network Connections screen. From here, choose Advanced and select Advanced Settings.

You will be presented with the Binding Order for your current NICs. Ensure that the MAPI NIC is on top by selecting MAPI and pressing the green up arrow key on the right-hand side of the dialog.

Exchange 2010 Operating System Prerequisites

Server 2008 SP2 and Server 2008 R2 prerequisites are quite different.  Because our servers are going to be deployed on Server 2008 R2, we will follow the guidance for deploying on Server 2008 R2.  You can see the prerequisite requirements here.

We will be doing our prerequisite installations via PowerShell.  You can open PowerShell by going to Start > Run > PowerShell.

You will first have to import the module for ServerManager.  Afterwards, depending on the roles that are installed, different prerequisites are required.  Because we are going to be installing HUB/CAS/MBX, the command we would run is the following:

Import-Module ServerManager

Add-WindowsFeature NET-Framework,RSAT-ADDS,Web-Server,Web-Basic-Auth,Web-Windows-Auth,Web-Metabase,Web-Net-Ext,Web-Lgcy-Mgmt-Console,WAS-Process-Model,RSAT-Web-Server,Web-ISAPI-Ext,Web-Digest-Auth,Web-Dyn-Compression,NET-HTTP-Activation,RPC-Over-HTTP-Proxy,Failover-Clustering -Restart

Note: The installation documentation does not have you include Failover-Clustering in the above command.  I add it anyways since we'll be using it for the DAG.  If you don't add it in the above command, you can just add it below when you enable NetTcpPortSharing.  If you don't add it below, when you add the first node to the DAG, Failover Clustering will be installed anyways.  I like to install it beforehand though.

Finally, we’ll want to modify the NetTcpPortSharing service to start automatically.
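From the same PowerShell session, that's a one-liner — and, if you skipped it in the earlier command, Failover Clustering can be added here too:

```powershell
# Set the Net.Tcp Port Sharing service to start automatically
Set-Service NetTcpPortSharing -StartupType Automatic

# Only needed if Failover-Clustering was left out of the earlier command
Add-WindowsFeature Failover-Clustering
```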


Well folks, that is all for Part 1 of this article. For Part 2, I will go through the installation process of one of our DAG nodes that will contain the Client Access Server, Hub Transport Server, as well as Mailbox Server roles.


Exchange 2010 24×7 Online Defragmentation and Online Database Scanning

Exchange has an Online Maintenance task that runs every night.  In the Exchange Management Console (EMC), go to Organization Configuration > Mailbox > Database Management Tab > Right-Click our Database > Properties > Maintenance Tab. We can then see our Maintenance Schedule.

In Exchange 2010, this will appear as:

As you can see, in Exchange 2010, there is a new option that is enabled by default.  This option is “Enable background database maintenance (24 x 7 ESE scanning).”  This option is not Online Defragmentation, but is rather Database Checksumming.  More on this later…

In Online Maintenance, there are several tasks that run, such as dumpster cleanup, purging mailboxes based on retention, etc.  You can see a full list of these tasks here.  When these eleven tasks successfully finish, an Online Defragmentation (OLD) process runs.  Microsoft explains OLD as, “The intention for online defragmentation is to free up pages in the database by compacting records onto the fewest number of pages possible, thus reducing the amount of I/O necessary. The ESE database engine does this by taking the database metadata, which is the information in the database that describes tables in the database, and for each table, visiting each page in the table, and attempting to move records onto logically ordered pages.”

There is also a process called Online Maintenance Database Checksumming.  Checksumming checks the integrity of the database by looking through every database page, since there was no guarantee OLD would successfully look through every page to ensure there is no corruption.  This process is entirely sequential and doesn't cause a performance problem on the database.  The issue with this method in Exchange 2007 RTM is that the process ran at the end of Online Maintenance, and because of that, resiliency is affected, as these processes temporarily suspend continuous replication.  In Exchange 2007 SP1, Microsoft provided registry keys to allow you to run background checksumming.  You can read more about these processes and the registry keys at the bottom of this article here.

In Exchange editions prior to Exchange 2010, we can monitor OLD by checking out the available 70x Event IDs in the Event Viewer’s Application Log.  Similarly, you can verify the amount of whitespace that has been created in the database by viewing the 1221 Event ID.   The list of Event IDs for Exchange versions prior to Exchange 2010 is as follows:

  • 700 – Starting
  • 701 – Completed
  • 702 – Resuming
  • 703 – Completed Resumed Pass
  • 704 – Interrupted and Terminated
  • 1221 – Whitespace Amount

This has all changed quite a bit in Exchange 2010.  OLD2 is the new version of Online Defragmentation and no longer occurs at the end of the Online Maintenance Schedule.  Instead, it runs 24 x 7 on a database. It is throttled so it does not negatively affect performance.  You cannot modify OLD2 to run as OLD did in earlier versions of Exchange.  OLD2 is not configurable. Because of this, the need to troll the above Event IDs is no more.  Instead of trolling 70x Event IDs, Exchange 2010 will only notify you if something goes wrong with Online Maintenance.  That way all the 70x error codes do not appear as spam.  If you see a 70x in Exchange 2010, you know there is a problem. Keep in mind though, that this is all in regards to Mailbox Databases.  Queue Databases still have 70x Event IDs.

If you need to check available whitespace, you can now do this via the Exchange Management Shell (EMS).  Please keep in mind, this only pulls available whitespace from the root of the B-Tree database. If you want to find available whitespace for the entire database, you would have to dismount your databases and use eseutil. If your database is called Database1, the command would be:

Get-MailboxDatabase Database1 -Status | FL AvailableNewMailboxSpace

Note: The -Status switch is required when you need to contact the database directly for the following pieces of information:

  • BackupInProgress
  • Mounted
  • OnlineMaintenanceInProgress
  • Available free space in the database root
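If you do need the full picture rather than just the root whitespace, a sketch of the dismount-and-eseutil approach mentioned above — the database name and EDB file path below are from my lab and are assumptions:

```powershell
Dismount-Database -Identity "Database1" -Confirm:$false
# /ms = space usage dump mode; reports owned/available pages per table
eseutil /ms "D:\DB\Database1\Database1.edb"
Mount-Database -Identity "Database1"
```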

As stated, OLD2 is throttled and doesn’t negatively affect performance.  If you have an interest in monitoring performance of OLD2, you can do so by using the following perfmon counter set: MSExchange Database -> Defragmentation Tasks.
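As a sketch, you can also sample that counter set from PowerShell — the counter path below is assumed from the set named above, and instance names vary by server, so `*` is used as a placeholder:

```powershell
Get-Counter -Counter "\MSExchange Database(*)\Defragmentation Tasks"
```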

In Exchange 2010, there are two ways you can configure Online Database Scanning (checksumming).  The first is the default option shown in the first image in this article.  By default, it runs as a 24 x 7 process on the Active Database.  You can uncheck this option, which reverts Online Database Scanning to running after all online maintenance tasks are completed.  Because most online maintenance tasks complete within an hour, this works reasonably well for smaller databases (under 500GB).  Microsoft now supports databases up to 2TB.  Anything larger than 500GB should definitely keep the default, which is to run this process 24 x 7, to ensure it completes.  Exchange 2010 was designed with the mindset that Online Database Scanning should complete at least once every three days.  If it does not, Exchange 2010 will provide a warning event in the Event Logs.

Thanks to Matt Gossage, Program Manager for Storage at Microsoft, for providing much of this information. You can see Matt Gossage's level 300 webcast on Exchange 2010 Storage here.


OCS 2007 R2 Load Balancing – Response Group Service Unavailable

I ran into an issue where we had two OCS 2007 R2 Front End Servers behind an F5 Load Balancer.  We kept getting “This service is temporarily unavailable” from our Communicator Clients after we configured the Communicator 2007 R2 Response Group Tab.  For those that are unfamiliar with this tab, it is a web based extension to the Communicator interface that allows users to log in and out of groups.  For more information about the Response Group Tab, click here.

If we take a look at the F5 documentation for OCS 2007 R2 here, we see the following configuration required for the Response Group Service (RGS):

What this is saying is that our client will be connecting over 5071 TCP to our Load Balancer in order to communicate with the RGS.  This is incorrect!  5071 TCP is indeed used for Response Group Service communication, but only for Front End to Front End server communication.  The RGS has something called a matchmaking service.  What the client talks to is just a website that communicates with the Load Balancer over port 443.  So, the million dollar question… why do we get a service unavailable?

When you’re dealing with the RGS, each Front End Server has a Matchmaking service.  From the Technet Documentation:

Each Front End Server has a Match Making service, which is an internal service that is responsible for queuing calls and finding available agents. Only one Match Making service per pool is active at a time–the others are passive. If a Front End Server with the active Match Making service becomes unavailable, one of the passive Match Making services becomes active. The Response Group Service does its best to make sure that call routing and queuing continues uninterrupted. However, there may be instances when active calls are lost as a result of the transition. Any calls that are in transfer when the service transition occurs are lost. If the transition is due to the Front End Server going down, any calls currently being handled by the active Match Making service on that Front End Server are also lost.

The Match Making service is what utilizes 5071 TCP.  But as stated earlier, this is only for Front End Server to Front End Server communication.  Our Front End Servers need to be able to communicate with each other without traversing the load balancer.  This means that each server must be able to contact DNS, get the IP of the other server, and then communicate with that IP over 5071.  This is key as to why we're encountering the issue.

Sometimes, depending on the environment, servers behind load balancers will have multiple IPs assigned: one for connectivity from the load balancer and another IP for other server operations such as management.  These Front End Servers had their default gateways set to the F5, and each Front End IP that was used for the F5 was on a different segment.  The problem here is that when one Front End tried to communicate with the other Front End, it would query DNS, get the IP, and route through the F5.  The F5 would then route it back, but the receiving Front End Server saw the traffic coming from the load balancer and thought it was an unauthorized server for RGS requests.  This is why Communicator would see the service as unavailable.

There are a few ways to fix this issue:

  • Modify the hosts file on each Front End Server so they communicate with the correct IP, which is on the same segment
  • Rework your load balancing configuration so the Front End Servers only use one IP, which is where the load balancer sends the traffic, and have the Front End IPs able to talk to each other directly.
  • Modify DNS so all traffic destined for the FQDN of a Front End Server goes directly to the Front End IP that is on the same segment as the other Front End IP.
  • If you must keep both Front End Servers on separate subnets and have them route through the load balancer, if possible, modify the load balancer so the requests appear to be coming from the original host that sent the request instead of the load balancer.
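As an illustration of the hosts file option, the entries on each Front End Server would look something like this — the IPs and FQDNs below are hypothetical:

```
# On ocsfe01 – point the peer's FQDN at its same-segment IP
10.0.1.12    ocsfe02.contoso.com

# On ocsfe02
10.0.1.11    ocsfe01.contoso.com
```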

When it comes down to it, you just need to make sure that when one Front End Server talks to another, the traffic appears to be coming from the other Front End Server instead of the load balancer, so that it is seen as an authorized host for RGS requests over 5071 TCP.


Exchange Unified Messaging – OVA not playing voicemails

I ran into an issue today where Exchange Unified Messaging’s Outlook Voice Access (OVA) feature was not playing old voice mails.  For example, when you dial into Outlook Voice Access, you will hear the following:

“You have no new voice messages and no new e-mail messages.  Please say voicemail, e-mail, etc….”  A tip here is that you can press 1 if you want OVA to revert to a touch tone (DTMF) interface.

So in our scenario, say “Voice Mail.”  OVA will say that you have no voice messages. Now here is where the problem is.  We definitely do have voice messages in our folders, but they are no longer in our Inbox.  For some users, it would play their voice mails just fine even if they weren't in the Inbox, but for others, it would not. For the users where it didn't work, this is what the Voice Mail Search Folder displays.

If we Right-Click our Voice Mail Search Folder and Choose Customize this Search Folder, we may see the following:

If you were to click Browse, you would see the following:

The reason I say may is because the “Mail from these folders will be included in this Search Folder” setting varies by default.  What I noticed was that when I looked at some mailboxes, it was set to Inbox, and when I looked at others, it was set to the following, which had no problems with OVA playing back the voice mails:

If you were to click Browse, you would see the following:

The key thing here, is that when you dial into OVA, it plays back your voicemails based on what the Search Folder displays.

The difference between the two is that when the checkmark is set at the Mailbox level with search all subfolders enabled, clicking on your Search Folder finds all voicemails throughout your entire mailbox.  But there's a catch if your Voice Mail Search Folder utilizes the Inbox-selected method for its search parameters instead of the checkmark being at the Mailbox level.

You may think to yourself, “This should work even if I only have Inbox selected and set it to search all subfolders.”  After all, that would be a logical thought, since your subfolders are based off of the parent Inbox. That's what I thought.  But it just wouldn't work.  And it wasn't just this one user; it was all the users who had the checkmark at the Inbox level, even if it was set to search subfolders.  Another thing you may think to yourself is, “This should work then if I deselect the Inbox and choose the entire Mailbox and search all subfolders.”  This didn't work either if you initially had a checkmark at the Inbox level instead of at the Mailbox level.  What I ended up realizing is that I had to re-create the Voice Mail Search Folder completely for things to work properly.  It's as if something was entirely wrong with the Voice Mail Search Folder.

The interesting thing here is that none of these users customized this search folder, but they were UM enabled at different times.  I'm thinking an Exchange patch modified the way the Search Folder parameters were configured. The reason why I think it's an Exchange patch and not an Outlook patch is because we saw the same behavior with a mix of Outlook 2003 and Outlook 2007 clients.  Add in the fact that this Voice Mail Search Folder gets created when a user is UM enabled.

So onto re-creating our Voice Mail Search Folder…  While troubleshooting this, I discovered a bug, confirmed with Microsoft, that you should know about when deleting Search Folders.  There is an Outlook 2007 and Outlook 2010 bug that occurs when deleting Search Folders in Cached Mode.  So in order to get our Search Folders re-created as they really should be, I set Outlook to Online Mode, deleted the Search Folder, sent the user a new Voice Mail, and saw the Voice Mail folder get re-created.  The Search Folder was set to search the entire mailbox.  When I dialed into OVA, it would play all voice mails no matter where they were stored.  I then set Outlook back into Cached Mode.

Once this Voice Mail Search Folder was re-created, the search parameters were displayed as such:

Now if I look at my Voice Mail Search Folder, I successfully see my voice mails stored in my Saved Voicemails subfolder:

And because my Search Folder is finding my voice mails from subfolders, when I dialed into OVA, it successfully played back all those voice mails as well.