
Recovering from Server 2008 CCR Cluster Failure with Forcequorum

Server 2003 and Server 2008 Cluster Models are different in the following ways:

  • Server 2003 utilizes the Shared Quorum Model for Single Copy Clusters and utilizes Majority Node Set for Cluster Continuous Replication Clusters.
  • Server 2008 utilizes Majority Node Set for both Single Copy Clusters and Cluster Continuous Replication Clusters.  When using Single Copy Clusters with Server 2008, it is recommended to use Node and Disk Majority.  When using Cluster Continuous Replication, it is recommended to use Node and File Share Majority.

When working with cluster recovery, the method differs a bit between the Shared Quorum Model and the Majority Node Set Model.

In this article, I will focus on recovering from a Server 2008 Cluster Continuous Replication site failure (kind of — more info later).  I debated making this a multi-part series that first walks through setting up CCR and then shows how to recover using Forcequorum, but Andy Grogan already does a fine job explaining how to set up CCR on Server 2008, so I decided not to.  You can read Andy's great article here.

Forcequorum Changes in Windows 2003 and Windows 2008

One thing I do want to mention is that forcequorum in Windows 2003 and Windows 2008 differs quite a bit.  While the functionality provides the same result, some things in the background are quite different.  First of all, in Windows 2003, /forcequorum can be used as a maintenance switch.  In Windows 2008, this is no longer the case.  When you run /forcequorum on a Windows 2008 Cluster Node, the paxos tag for the cluster is increased; you can think of it as being bumped to infinity.  What this means is that the node you run /forcequorum on becomes the master/authoritative node for any cluster joins.  Because of this, it is imperative that the cluster node you run /forcequorum on is a fully configured cluster node.  More information can be found on this here.  Thanks to Tim McMichael for making me aware of this change to forcequorum in Windows 2008.

An example of the above would be the following:

You have a 4 node Single Copy Cluster.  You have installed Windows Clustering Services on all 4 nodes, but you have only installed Exchange 2007 on 3 of them.  On the 4th node, the one without Exchange, you run /forcequorum.  That 4th node is now the authoritative copy of the cluster configuration for all cluster nodes.  Whenever the cluster service starts, it performs a join operation.  Because the 4th node is now the authoritative/master node for cluster configuration, your 3 other servers will have Exchange failures: the next time their cluster service starts up, the Exchange cluster information on those nodes is wiped, because the authoritative copy on the 4th node never contained it.  That is why, in Windows 2008, you need to ensure that the node you run /forcequorum on is fully configured.  Think of it as doing an authoritative restore from a Domain Controller that has been shut down for 2 weeks.  You'll lose any password changes and accounts that were created, accounts that were deleted will be back, and any other AD related changes from the last two weeks will be rolled back.

The Lab

While my lab is all in one site, the recovery method is similar to what it would be in a geographically dispersed cluster.  When the Exchange servers in one site fail, that means the Active CCR node in that site went down and the File Share Witness went down with it.  In a Geo-Cluster, your Passive Node, which should now become the new Active Node (at least it should be the new Active… but read on), will not be able to start.

Why is this?  Well, a Majority Node Set cluster requires more than half of its votes to be online: with two nodes plus the File Share Witness you have 3 votes, so at least 2 must be up.  Because 2 of the 3 voters (the Active Node and the FSW) are down, only 1 vote remains, and the cluster service on your Passive Node will not start (which is what I meant by it should be the new Active).  Now what about the kind-of I spoke about earlier?  Well, since all the services are in one site, I will be skipping the step where I re-create the file share witness on a Hub Transport Server in the original datacenter, and will instead keep using the new FSW I already have created.  If you were indeed running a Geo-Cluster, you may want to re-provision the FSW back on the original node when you move everything back to the main site.  That's the only difference.  This will make more sense as you read on, as I show both how to move the FSW back to the original node and the method we actually use, which skips that process.

So to reiterate: my lab is in one site, and I will show you how to recover, plus provide additional information on what you would do if you wanted to re-provision your FSW back to a Hub Transport Server when failing everything back to your original datacenter.  To start this process, I will pause my Active Node and delete the file share witness that exists on my Hub Transport Server.  To recover, I will do a forcequorum on my second node, re-create the file share witness, point my cluster at the new file share witness, bring up my old Active Node which will now be the new Passive Node, and have it point to the new file share witness.

The Environment

Before we dive into the process, let’s talk a little bit about the environment.  I have two Domain Controllers, one Hub Transport Server running on Server 2003 x64 R2 SP2, and two CCR Mailbox Servers running on Server 2008 x64.  All this is being run on Hyper-V managed by System Center Virtual Machine Manager 2008 RTM.

On SHUD-EXC1, our File Share Witness is located in the FSM_EXCluster file share.  This share grants Full Control to our cluster account, which is named EXCluster$.  Our Security/NTFS permissions are set to Administrators and EXCluster$ with Full Control.  Our Exchange CMS is named Cluster.  Yes… I know…  The names are backwards: the cluster name should be Cluster and the Exchange CMS should be EXCluster.  Oops, I guess…

Taking a look at our Cluster in the Exchange Management Shell, we can see that our Cluster is currently healthy.
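
If you would rather check this yourself than take my screenshot's word for it, the status and copy health cmdlets are what I lean on (substitute your own CMS name if yours isn't called Cluster):

Get-ClusteredMailboxServerStatus -Identity Cluster
Get-StorageGroupCopyStatus -Server Cluster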

We can also see that moving the Cluster from SHUD-EXCN1 to SHUD-EXCN2 is successful, meaning that the Cluster is indeed healthy.

I did move the Cluster back to SHUD-EXCN1 though just to make sure failover is working to and from both nodes.
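
If you want to run the same failover test from the shell, something along these lines works (just swap -TargetMachine for the direction you are testing):

Move-ClusteredMailboxServer -Identity Cluster -TargetMachine SHUD-EXCN2 -MoveComment "Testing failover"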

Ok, hooray!  We have a successful and healthy cluster to test on.  So let’s get on with the good stuff.

Failing the Cluster

First thing I’m going to do is delete the file share.  We can see the share no longer exists on SHUD-EXC1.
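
If you would rather drop the share from a command prompt than through the GUI, a one-liner such as the following does it (share name taken from my lab):

net share FSM_EXCluster /delete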

Now, let’s pause SHUD-EXCN1 which is the current Active CCR Cluster Node.

We can check the services and event log on SHUD-EXCN2.  We can see that the Information Store service won’t start and we get quite a few event log failures such as the following (and there’s more than just what’s below).
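
If you prefer a shell over the Services and Event Viewer MMCs, something like the following on SHUD-EXCN2 will show the Information Store service state and the latest Application log errors:

Get-Service MSExchangeIS
Get-EventLog -LogName Application -Newest 20 | Where-Object { $_.EntryType -eq "Error" }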

Recover the Cluster on SHUD-EXCN2

So the first thing we want to do is a Forcequorum on SHUD-EXCN2.  You typically could just bring SHUD-EXCN1 back up, but we're acting as if SHUD-EXCN1 is having issues, we can't get it up right now, and we really need to get our cluster up to serve clients.  Guidance on doing a forcequorum on both Server 2003 and Server 2008 for Exchange 2007 can be found here.

We will be performing the following steps to get our SHUD-EXCN2 running properly:

  • Provision a new share for FSW on SHUD-EXC1.  If you are doing a GeoCluster, you can do this in Site B which is where your Passive Node would be.
  • Force quorum on SHUD-EXCN2 by running the following command: net start clussvc /forcequorum
  • Use the Configure Cluster Quorum Settings wizard to configure the SHUD-EXCN2 to use the new FSW share on SHUD-EXC1.
  • Reboot SHUD-EXCN2.
  • Start the clustered mailbox server.

Provision New Share

We can run the following commands to re-create the folder for the FSW, share it out, and apply the correct permissions.

mkdir C:\FSM_New_EXCluster
net share FSM_New_EXCluster=C:\FSM_New_EXCluster /Grant:EXCluster$,Full
cacls C:\FSM_New_EXCluster /G BUILTIN\Administrators:F EXCluster$:F
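
To double check that the share and its permissions took, you can simply list the share back out:

net share FSM_New_EXCluster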

Forcequorum on SHUD-EXCN2

Now is the time to force our cluster services to start on our soon to be Active Node which was previously the Passive Node.  We will do this by running the following command: net start clussvc /forcequorum.

Configure our new Cluster Quorum

Go into the Failover Cluster Management tool in Start > Administrative Tools.  Then Right-Click on our Cluster FQDN > More Actions > Configure Cluster Quorum Settings.

Choose Node and File Share Majority.  Click Next to Continue.

Enter the location for the new File Share.  Click Next to Continue.

You can go through the rest of the prompts and we will see the Cluster Quorum has successfully been configured to point to the new File Share Witness.

Remaining Steps

The remaining steps are very simple.  Reboot the node and start the Clustered Mailbox Server.  Upon restarting, you will see that the Cluster Service starts successfully.  Congratulations.  This means the connection to the File Share Witness is working because we have over 50% of our witnesses (2/3 is > 50%) online.
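
If you are more of a command line person, the same verification can be done from a prompt on SHUD-EXCN2 (EXCluster being the Windows cluster name in my lab, per the FSW permissions earlier):

cluster EXCluster node
Get-ClusteredMailboxServerStatus -Identity Cluster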

We can then verify we have Outlook Client Connectivity.  And as the screenshot shows, we do successfully have Outlook Connectivity.  Hooray!

Unfortunately though, our old Active Node SHUD-EXCN1 is still down, as we can see in the Exchange Management Console.

Bringing SHUD-EXCN1 Back Online

Now we need to get SHUD-EXCN1 back online and in a healthy replication state.  If you did all the above in a real Geo-Cluster, you’d want to run the following steps:

  • Provision a new share on a HUB in Datacenter A.
  • Bring SHUD-EXCN1 online.
  • Reconfigure the cluster quorum to use the FSW share on HUB in Datacenter A.
  • Stop the clustered mailbox server.
  • Move the Cluster Group from SHUD-EXCN2 to SHUD-EXCN1.
  • Move the clustered mailbox server from SHUD-EXCN2 to SHUD-EXCN1.
  • Start the clustered mailbox server.

But because we’re running in the same site in the lab, we’re just going to skip the creation of the new FSW and use our existing one.  Because of this, our steps will be:

  • Bring SHUD-EXCN1 online.
  • Stop the clustered mailbox server.
  • Move the Cluster Group from SHUD-EXCN2 to SHUD-EXCN1.
  • Move the clustered mailbox server from SHUD-EXCN2 to SHUD-EXCN1.
  • Start the clustered mailbox server.

So let’s bring up SHUD-EXCN1.

Now on SHUD-EXCN2, in the Exchange Management Shell, we will run the following command to stop the Clustered Mailbox Server (GUI/CLI method here):

Stop-ClusteredMailboxServer -Identity Cluster -StopReason "Recovering CCR Node"

Let’s move the Clustered Mailbox Server to SHUD-EXCN1 using the following command (GUI/CLI method here):

Move-ClusteredMailboxServer -Identity Cluster -TargetMachine SHUD-EXCN1 -MoveComment "Moving CCR"

We will also need to move the default Cluster Group to SHUD-EXCN1 using the following command:

cluster group "Cluster Group" /move:SHUD-EXCN1

We will then want to verify that both the Clustered Mailbox Server (Cluster) and the Cluster Group are on SHUD-EXCN1.  Don't forget that we did a Stop-ClusteredMailboxServer, so we should see the CMS Offline and the Cluster Group Online.
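
For the command line minded, the same check can be done with the following; the first command lists each cluster group along with its owning node and state, and the second should still show the CMS as stopped since we ran Stop-ClusteredMailboxServer:

cluster group
Get-ClusteredMailboxServerStatus -Identity Cluster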

We now want to start the Clustered Mailbox Server by running the following command:

Start-ClusteredMailboxServer -Identity Cluster

Now let’s verify that our Cluster is in a healthy replication state.
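
The commands I use for this check are along these lines; you are looking for a Healthy copy status and copy/replay queue lengths at or near zero (Test-ReplicationHealth, added in SP1, is optional but handy):

Get-StorageGroupCopyStatus -Server Cluster
Test-ReplicationHealth -Identity SHUD-EXCN1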

And to just make sure, let’s verify Outlook Connectivity still works.

Congrats, you now have a completely restored CCR Cluster!


Exchange 2007 SP1 and Server 2008 information

I wanted to share some of my findings from running Exchange 2007 SP1 on Server 2008. I've noticed, and heard of, several issues and pieces of information that I believe people should be cognizant of.

Here are the issues and general information I have heard of and experienced so far that seem valuable to share. If you disagree with anything I am sharing, have found that it works differently for you, and/or want to include your own findings and any tidbits of information you may have, please feel free to comment.

  • Hub Transport Server Role fails when IPv6 is disabled on that server – FIXED – If either of the 2 sub-bullets below occurs, you need to fully disable IPv6 and not just uncheck it (see the registry sketch after this list).  This requires the same fix as the next section, which discusses broken Outlook Anywhere.

    • If IPv6 is disabled prior to the installation of Exchange Server 2007, when installing the Hub Transport Server role, your Hub Transport Server role will fail to install
    • If IPv6 is disabled after the installation of Exchange Server 2007, you may experience some Exchange services failing to start
  • Outlook Anywhere is broken under certain conditions – FIXED @ http://technet.microsoft.com/en-us/library/cc671176.aspx

    • Outlook Anywhere is not working for Outlook 2007 with IPv6 enabled (More information can be found from the following URLs: http://blog.aaronmarks.com/?p=65 and http://www.buit.org/2008/01/04/outlook-anywhere-is-broken-on-ipv6-in-windows-server-2008). More information below.
    • This bug is that IPv6 is not listening on loopback port 6004 (the RPC/HTTP Proxy Service), which causes Outlook Anywhere to fail with Outlook 2007. I am not sure if this happens with previous versions of Outlook. The reason is that Server 2008 prefers communication over IPv6 rather than IPv4, and since nothing is listening on port 6004 over IPv6, Outlook Anywhere will fail.
    • TCP 0.0.0.0:6001 0.0.0.0:0 LISTENING
      TCP 0.0.0.0:6002 0.0.0.0:0 LISTENING
      TCP 0.0.0.0:6004 0.0.0.0:0 LISTENING
      TCP [::]:6001 [::]:0 LISTENING
      TCP [::]:6002 [::]:0 LISTENING

  • NTLM seems to be very buggy with Outlook Anywhere. There are lots of reports of Outlook Anywhere NTLM Authentication not being functional when using Server 2008. More information can be found at the following URL: http://blog.aaronmarks.com/?p=65 FIXED in Release Update 8 for SP1 – Update to the latest Rollup/Service Pack or run the following command: %windir%\system32\inetsrv\appcmd.exe set config /section:system.webServer/security/authentication/windowsAuthentication /useKernelMode:false
  • OAB Generation fails on Server 2008 Clusters.  More information can be found at the following URL: http://www.spyordie007.com/blog/index.php?mode=viewid&post_id=25 FIXED in Release Update 5 for SP1 – Update to the latest Rollup/Service Pack. You may also need to deploy the following hotfix for Server 2008 clusters here; more information about this hotfix and what it fixes is available here.
  • There is an HP Document (http://h71028.www7.hp.com/ERC/downloads/4AA1-5675ENW.pdf) which goes over some testing with varying network latencies using CCR over an OC3 link with a network latency simulator. I wanted to give an overall summary of their findings.
    • 20 ms latency – All the log files were shipped over properly and all CCR databases auto-mounted properly
    • 30-40 ms latency – Some manual mounting will be required to mount all your databases as the latency will prevent all logs to be shipped over fast enough for automatic mounting
    • 50+ ms latency – Log shipping mechanism was out of control
  • In regards to SCR and the network latency topic: SCR is a manual failover mechanism. Because of this, CCR is a lot more dependent on network latency than SCR due to its automatic failover mechanism. Microsoft does provide recommendations on how to tune SCR for latency in the Exchange TechNet Library, which can be found here. The problem is that the article is geared toward Server 2003 networking. As for real world SCR scenarios, I have been told that a mailbox server containing ~6,000 mailboxes has been successfully failed over to an SCR target across the world over a 200 ms link.
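
As promised in the first bullet, here is what "fully disable IPv6" generally means in practice. Unchecking the protocol on the NIC is not enough; the commonly documented approach is the DisabledComponents registry value followed by a reboot. Treat this as a sketch of the usual registry tweak, not something to push to production without testing:

reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters /v DisabledComponents /t REG_DWORD /d 0xFFFFFFFF /f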

Unattended Server 2008 Base Image Creation using WSIM/Sysprep

In Windows Server 2003, creating a master image in which Sysprep was used to invoke an unattended installation was a fairly straight forward process. It consisted of the following:

  1. Installing Windows Server 2003
  2. Insert Server 2003 CDROM into the CDROM Drive
  3. Navigate to X:\Support\Tools\Deploy.cab
  4. Copy sysprep.exe and setupcl.exe to C:\Sysprep
  5. Copy Setup Manager to C:\Sysprep
  6. Open Setup Manager and create a Sysprep.inf file with the settings you want for an unattended installation
  7. Run Sysprep (Sysprep would automatically detect Sysprep.inf)
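
For reference, the reseal step at the end was just a short command along the lines of the following (the exact switches varied depending on what you wanted Sysprep to do):

C:\Sysprep\sysprep.exe -mini -reseal -quiet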

In Windows Server 2008, creating a master image is no easy feat. To briefly explain the process (it will be detailed throughout the rest of this article), you must first download the Windows Automated Installation Kit (1GB in size), which you can download here, load install.wim, and create a sysprep.xml file. You would finally run the built-in Sysprep utility and tell it to use the sysprep.xml file you just created, along with some other options.

Once you have downloaded the Windows Automated Installation Kit, you will need to burn it via your favorite burning utility; mine is InfraRecorder, which is free. Once it's burned, go ahead and install it on your Vista or Server 2008 machine (we'll be using Server 2008). Once it's installed, open the Windows System Image Manager (Start > All Programs > Microsoft Windows AIK > Windows System Image Manager).

In order to begin creating a Sysprep.xml file, you will need to load a Windows Image File (WIM). Make sure that you are using the Windows Automated Installation Kit version (or above) for Vista and Server 2008 that is linked to at the beginning of this article.  Otherwise, the WIM you try to load will be incompatible with the version you are using.

The WIM file we will be using is located on our Server 2008 CD-ROM (X:\sources\install.wim), where X refers to the drive letter of your CD-ROM Drive. Proceed by inserting your Server 2008 CD-ROM into your Server 2008 machine's CD-ROM Drive.

Once you have done so, in the Windows System Image Manager, go to File > Select Windows Image.

Browse to the location of the install.wim file. As stated above, this file is located at X:\sources\install.wim. X refers to the drive letter of your CD-ROM Drive.

Once install.wim has been selected, choose Open. This will bring up a new window which allows you to select the version of Windows Server 2008 you will be using as your Master Image. The edition we are currently running Server 2008 on and want to continue using for future cloned guests will be Enterprise. Select Enterprise and click OK to Continue.

We now see our selected Windows Server 2008 Enterprise Image is loaded into Windows System Image Manager.

We will now want to begin the process of configuring our new Answer File which we will name sysprep.xml. In the Windows System Image Manager, go to File > New Answer File.

We now see our newly created Answer File is loaded into Windows System Image Manager.

Now that we have a WIM loaded and an Answer File created, the two are associated with each other and you now have many customizable settings under your Windows Image.

There are many settings you may want to change, and I will leave those up to you, as the point of this blog entry is to get you started on the basic concepts of getting the Master Image created. At the very least, I will show you how to remove Internet Explorer Enhanced Security Configuration so that Administrators don't constantly get bogged down with Internet Explorer security prompts.

Note: I take no responsibility for you doing this in production and getting hacked due to you reducing the security of a production machine. Do this at your own risk.

Right-Click on amd64_Microsoft-Windows-IE ESC_6.0.6001.18000_neutral and choose Add Setting to Pass 4 specialize.

Once you add the setting to Pass 4 specialize, you will see the setting get added into the Answer File. From here, you can select amd64_Microsoft-Windows-IE ESC_6.0.6001.18000_neutral and modify the settings in its properties. For the purposes of this lab, I chose both IEHardenAdmin and IEHardenUser and set them both to false.
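
To give you an idea of what that choice translates to inside the answer file, the fragment ends up looking roughly like the following. The architecture and public key token attributes are the ones WSIM fills in for you, so treat this as an illustrative sketch rather than something to paste in verbatim:

<settings pass="specialize">
    <component name="Microsoft-Windows-IE-ESC" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
        <IEHardenAdmin>false</IEHardenAdmin>
        <IEHardenUser>false</IEHardenUser>
    </component>
</settings>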

Some other popular options you may want to configure are as follows:

  • Auto-generated computer name
  • Organization and Owner Information
  • Setting language and locale
  • Setting the initial tasks screen not to show at logon
  • Setting server manager not to show at logon
  • Configuring the Administrator password
  • Creating a 2nd administrative account and setting the password
  • Running a post-image configuration script under the administrator account at logon
  • Setting automatic updates to not configured (to be configured post-image)
  • Configuring the network location
  • Configuring screen color/resolution settings
  • Setting the time zone

These settings are outlined in Brian W. McCann’s sample Sysprep.xml file located here. Even though my article shows you the steps required to create your own Sysprep.xml from scratch, I would still use Brian’s Sysprep.xml file as a baseline as he has popular options that most users are going to want. Why re-invent the wheel? Just copy his XML code, save it into your open Sysprep.xml file, and open it within Windows System Image Manager.

Once you are satisfied with all the modifications to your answer file, save it by pressing Control + S, choosing C:\windows\system32\sysprep\ as the save location, and sysprep.xml as the file name. Click Save to Continue.

My final Sysprep.xml file which was derived using Brian’s Sysprep.xml file as the baseline looks as follows.

The next step is to open a Command Prompt, navigate to C:\Windows\System32\Sysprep, and type the following:

sysprep /generalize /oobe /shutdown /unattend:sysprep.xml

Once this command is initiated, you will see a window pop up showing Sysprep doing its magic.

Once Sysprep is finished working, the system will shut down. You can now clone your shut down machine which will provide you with a nice Sysprep’d copy of Windows Server 2008.

Before I conclude this article, I wanted to express some of my opinions on this entire process. I find it a lot more tedious than the method we used for Server 2003. Setup Manager laid out the options very nicely and made it intuitive to define the settings you wanted. Now, you must go through the process of downloading a 1GB file, burning it, installing it, figuring out all the options you want added to your XML, etc. I personally think that going forward, I will just create a base machine, shut it down without running Sysprep, clone it, and just run NewSID, which can be found here. This is actually what I did for my Exchange 2007 SP1 SCC using Server 2008 StarWind article series. Granted, you won't want to use NewSID if you are doing this in production, as you risk Microsoft not supporting you.

Also, I am not a Microsoft deployment guy, so I understand that for production there's a much larger picture where this tool is a lot more integrated, and it is a really great tool when used with the Microsoft Deployment Toolkit (MDT). But I am speaking merely from the perspective of wanting to Sysprep a machine for easy cloning via virtualization tools.

Either way, I hope this article helps you out with the process of creating a base image for Server 2008 to assist you in getting new Server 2008 machines up and running as quickly as possible.


Exchange 2007 SP1 SCC using Server 2008 StarWind iSCSI – Part 4

Welcome to Part 4 of this article series. In Part 1, we started off by discussing the goal of this lab: to showcase Server 2008's built-in iSCSI Initiator software to connect to an iSCSI Target and deploy a Single Copy Cluster (SCC) for Exchange 2007 SP1 Failover Clustering. We first discussed the lab setup using VMware Workstation, and then proceeded to the configuration of RocketDivision's StarWind iSCSI Target software. We then went into Exchange 2007 and did the initial iSCSI Initiator connection to our iSCSI Target.

In Part 2, we prepared our Cluster Nodes by installing any prerequisites needed prior to cluster formation and the Exchange 2007 SP1 installation. When that was complete, we continued with our iSCSI configuration by adding our LUNs to the Cluster Nodes, partitioning these LUNs, formatting them, and ensuring that shared disk storage was working as intended.

In Part 3, we formed our cluster, beginning with Node A followed by Node B. Once our cluster was formed, we proceeded with configuring the cluster to ensure optimal operation for our Exchange server. This consisted of cluster network configuration, quorum configuration, etc. Once configuration was completed, we validated cluster operations, including testing failover.

In this final Part, we will install Exchange into our Cluster. The first step will be to install the Active Clustered Mailbox Role followed by our Passive Clustered Mailbox Role. We will then proceed with how to manage our new Exchange Cluster.

Part 1

Part 2

Part 3

Part 4

Active Node Exchange 2007 Cluster Installation (NodeA)

Final Preparation

We have finally reached the point where we will install Exchange 2007. Don’t forget that one of the prerequisites is to already have a Client Access Server and Hub Transport Server deployed. If you have not done this yet, I suggest you go do this before proceeding.

Take your Exchange 2007 SP1 media (SP1 media required) and insert it into our Active Node. In the case of this lab, we are using VMware, so I will be mounting an ISO image on our Active Node (NodeA).

Please ensure that NodeA is currently the Active Node before proceeding. Go to Start > Administrative Tools > Failover Cluster Management > Expand our Cluster > Nodes. Once here, we can view both Nodes and see what disks they currently own.

If NodeA does not currently have ownership of our Database and Disk Quorum disk, run the following commands:

Cluster group "Available Storage" /move:<ActiveNodeName>

Cluster group "Cluster Group" /move:<ActiveNodeName>

Note: There are two Cluster Groups. The first is Available Storage which contains our Database Disk. The second is the Cluster Group which contains our Quorum Disk. It is only essential that NodeA owns the Database disk for installation. For safe measures, I still like to make sure the node we are working on owns both the Database and Quorum Disk.

Installation

Run Setup.exe and choose to Install Exchange Server 2007 SP1. This will bring you to several pages which you should review, accept, and continue through; these include the Introduction page, License Agreement, and Error Reporting. Review this information and click Next to Continue.

Once you have reached the Installation Type page, select Custom Exchange Server Installation. We will want to use this option because the Typical Exchange Server Installation installs the Hub Transport Server Role, Client Access Server Role, and Mailbox Server Role. Because we are installing the Mailbox Server Role on a Cluster, we are limited to installing only the Mailbox Server Role. This is the reason why we have installed a Hub Transport Server and Client Access Server on another server prior to installing the Mailbox Server Roles on our Cluster Nodes. Click Next to Continue.

At the Server Role Selection page, choose Active Clustered Mailbox Role. As you can see, all other options have been greyed out and you are forced to install the Management Tools. Click Next to Continue.

At the Cluster Settings page, choose Single Copy Cluster. Then specify the name of the Clustered Mailbox Server Name. This is the name your users will see when specifying what server their mailbox is housed on. Finally, choose the path your database files will be installed. You cannot choose the root path and will be forced to create a subfolder. Click Next to Continue.

Select the IP Address that the Cluster Mailbox Server (CMS) EXServer01 will listen on. In the case of this lab, NodeA uses 192.168.119.160, NodeB uses 192.168.119.161, so we will use 192.168.119.162. We do not need to specify a Second Subnet as we are not deploying our Cluster across multiple subnets. Click Next to Continue.

Choose your Client Settings. If you have computers running Outlook 2003 or earlier or Entourage, choose Yes. Otherwise, choose No. If the wrong option is chosen, don’t worry, you can always add public folders once Exchange is installed. Click Next to Continue.

You will begin to see Readiness Checks being run for both the Mailbox Role as well as the Clustered Mailbox Server. Once this completes successfully, click Install to Continue. If you have any failures, they will need to be remedied prior to continuing with the cluster installation.

Installation will commence. Upon successful installation completion, you will see the status of all installation steps shown as Completed. If the cluster installation has been unsuccessful, troubleshooting will need to ensue to ensure you can get Exchange installed on the cluster successfully. Clear the check box, "Finalize installation using the Exchange Management Console." Click Finish to continue.

You will be prompted to reboot, but do not reboot. There is one step you will want to do prior to a reboot. Open the Exchange Management Shell (Start > All Programs > Microsoft Exchange Server 2007 > Exchange Management Shell).

We will now stop the CMS by running the following command:

Stop-ClusteredMailboxServer <CMSName> -StopReason Setup -Confirm:$false

You may now proceed to reboot NodeA. One thing to note is that when you reboot NodeA, the disks will be moved over to NodeB, which does not have Exchange installed. Because of this, once NodeA is back up, you will want to move the CMS group, Available Storage group, and Cluster Group group back to NodeA.

To get a list of the existing Cluster Groups that are installed, type the following command in the Command Prompt:

Cluster Group

As we can see, the Cluster Groups successfully moved over to NodeB. The reason we wanted to stop the CMS prior to shutting down is that NodeB does not have Exchange installed, and we don't want the CMS to attempt to come online there.

Run the following three commands to move all three groups back over to NodeA:

Cluster group "Available Storage" /move:NodeA

Cluster group "EXServer01" /move:NodeA

Cluster group "Cluster Group" /move:NodeA

We will now want to move the storage that is currently in the Available Storage group over to the CMS group, EXServer01. The Database disk, named database, is the only disk currently in the Available Storage group. To do this, we will run the following command:

Cluster res "Database" /move:"EXServer01"

Continue by making the Database disk a dependency of our Exchange Database. To find out how you will want to format the Database name for the dependency, open up the Failover Cluster Management MMC. Expand our Cluster > Services and Applications > CMS (EXServer01).

Take a look at the highlighted text. That is the name of our Database we will use in our Cluster dependency command. We will now want to make the Database disk a dependency of our Mailbox Database by running the following command:

Cluster EXCCLUS01 res "First Storage Group/Mailbox Database (EXServer01)" /AddDep:"Database"
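
If you want to confirm the dependency took, cluster.exe can list it back out for you:

Cluster EXCCLUS01 res "First Storage Group/Mailbox Database (EXServer01)" /listdependencies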

The final configuration of NodeA is to configure the physical disk resource policies so that a failure of a disk resource does not cause failover of the CMS to another node by running the following command:

Cluster EXCCLUS01 res "Database" /prop RestartAction=1

Passive Node Exchange 2007 Cluster Installation (NodeB)

Final Preparation

Take your Exchange 2007 SP1 media (SP1 media required) and insert it into our Passive Node. In the case of this lab, we are using VMware, so I will be mounting an ISO image on our Passive Node (NodeB).

Please ensure that NodeA is currently the Active Node before proceeding. Open a Command Prompt and type the following command:

Cluster group

We should see NodeA as the owner of all three Cluster Groups. If NodeA does not currently have ownership of all the Cluster Groups, run the following commands:

Cluster group "Available Storage" /move:NodeA

Cluster group "EXServer01" /move:NodeA

Cluster group "Cluster Group" /move:NodeA

Installation

Run Setup.exe and choose to Install Exchange Server 2007 SP1. This will bring you to several pages which you should review, accept, and continue through; these include the Introduction page, License Agreement, and Error Reporting. Review this information and click Next to Continue.

Once you have reached the Installation Type page, select Custom Exchange Server Installation. We will want to use this option because the Typical Exchange Server Installation installs the Hub Transport Server Role, Client Access Server Role, and Mailbox Server Role. Because we are installing the Mailbox Server Role on a Cluster, we are limited to installing only the Mailbox Server Role. This is the reason why we have installed a Hub Transport Server and Client Access Server on another server prior to installing the Mailbox Server Roles on our Cluster Nodes. Click Next to Continue.

At the Server Role Selection page, choose Passive Clustered Mailbox Role. As you can see, all other options have been greyed out and you are forced to install the Management Tools. Click Next to Continue.

You will begin to see Readiness Checks being run for both the Mailbox Role as well as the Clustered Mailbox Server. Once this completes successfully, click Install to Continue. If you have any failures, they will need to be remedied prior to continuing with the cluster installation.

Installation will commence. Upon successful installation completion, you will see the status of all installation steps shown as Completed. If the cluster installation has been unsuccessful, troubleshooting will need to ensue to ensure you can get Exchange installed on the cluster successfully. Clear the check box, "Finalize installation using the Exchange Management Console." Click Finish to continue.

Once you have reached this step, congratulations, your Exchange Cluster has finally been fully deployed. You will be prompted to reboot. Go ahead and do so.

All that is really left now is to start the CMS back up, and you're done, besides general configuration. To start the Exchange CMS, open the Exchange Management Shell (Start > All Programs > Microsoft Exchange Server 2007 > Exchange Management Shell).

We will now start the CMS by going onto NodeA and running the following command:

Start-ClusteredMailboxServer <CMSName> -Confirm:$false

Just to ensure that all Cluster Groups are online, run the following command:

Cluster Group

Post Installation

Generally, now would be the time to go do your general configuration. This includes licensing, configuring the Autodiscover Service, setting quotas, etc.

Before we do any of that, let's make sure that the CMS will fail over to NodeB. You can use the Cluster Group /move command, but it is best practice to use the Exchange Management Shell (EMS) command Move-ClusteredMailboxServer. This is required in CCR Clusters because the Cluster command is not Exchange-aware, which can ultimately break the log shipping mechanism.  You can read more about using Cluster Group /move vs Move-ClusteredMailboxServer here.

Let’s move our CMS over to NodeB by running the following command in the EMS:

Move-ClusteredMailboxServer EXServer01 -MoveComment "Failover to NodeB" -TargetMachine:NodeB -Confirm:$False

After running this command, go into the Failover Cluster Management MMC. Expand our Cluster > Services and Applications > CMS (EXServer01). There are a few things to take note of here. There are two preferred owners of this CMS: NodeA and NodeB. This means that if NodeA is the current owner of the CMS resources and it goes down, NodeB will take over, and vice versa.

As we can see, the current owner is NodeB, which means the Move-ClusteredMailboxServer command was successful. All the "Other Resources," which are the Exchange resources, are also currently online. We have successfully verified an Exchange Cluster failover.

Moving the CMS via the EMS is not the only way to move a CMS. Since Exchange Server 2007 SP1 was released, the ability to move a CMS to another node has been built into the Exchange Management Console (EMC). So let's go check this out and move the CMS back over to NodeA, but this time by using the EMC (Start > All Programs > Microsoft Exchange Server 2007 > Exchange Management Console). Then Expand Server Configuration > Mailbox > choose Manage Clustered Mailbox Server from the Action Pane.

Select the option “Move the clustered mailbox server to another node.” Select Next to Continue.

Select NodeA as your Target Machine and set the Move comment to whatever you like. Select Next to Continue.

Review the Configuration Summary. Once satisfied, Choose Move to Continue.

Once again, after executing this move, go into the Failover Cluster Management MMC. Expand our Cluster > Services and Applications > CMS (EXServer01). As we can see, the current owner is NodeA, which means the move via the EMC was successful. All the "Other Resources," which are the Exchange resources, are also currently online. We have successfully verified another Exchange Cluster failover.

Summary

Well folks, that is all for Part 4, and that concludes this article series. To recap what was included in Part 4, we first started off by recapping what was included in Part 1, Part 2, and Part 3 of this article series and what the goal of this lab is: to showcase Server 2008's built-in iSCSI Initiator software to connect to an iSCSI Target and deploy a Single Copy Cluster (SCC) for Exchange 2007 Failover Clustering. In Part 2, we left off at the final stages of disk preparation; all of the shared disks were successfully partitioned, formatted, and named. In Part 3, we formed the cluster, beginning with Node A followed by Node B, then proceeded with configuring the cluster networks and quorum, and validated that our failover cluster worked.

In Part 4, we installed the Exchange 2007 Active Clustered Mailbox role and the Passive Clustered Mailbox role. We then performed management on our Clustered Mailbox Server (CMS) by showing how we can move the CMS via the Exchange Management Shell (EMS) as well as using the Exchange Management Console (EMC).

I hope these articles help you out on your endeavor to install Exchange 2007 on Windows Server 2008. Thank you for viewing.


Exchange 2007 SP1 SCC using Server 2008 StarWind iSCSI – Part 3

Welcome to Part 3 of this article series. In Part 1, we started off by discussing the goal of this lab: to showcase Server 2008's built-in iSCSI Initiator software to connect to an iSCSI Target and deploy a Single Copy Cluster (SCC) for Exchange 2007 SP1 Failover Clustering. We first discussed the lab setup using VMware Workstation, and then proceeded to the configuration of RocketDivision's StarWind iSCSI Target software. We then went into Exchange 2007 and did the initial iSCSI Initiator connection to our iSCSI Target.

In Part 2, we prepared our Cluster Nodes by installing any prerequisites needed prior to cluster formation and the Exchange 2007 SP1 installation. When that was complete, we continued with our iSCSI configuration by adding our LUNs to the Cluster Nodes, partitioning these LUNs, formatting them, and ensuring that shared disk storage was working as intended.

In this Part, I will be forming our cluster, beginning with Node A followed by Node B. Once our cluster is formed, we will proceed with configuring the cluster to ensure optimal operation for our Exchange server. This consists of cluster network configuration, quorum configuration, etc. Once configuration is completed, we will validate cluster operations. This includes, but is not limited to, testing failover.

Part 1

Part 2

Part 3

Part 4

Failover Cluster Installation (NodeA)

Validate a Configuration

All of our prerequisites have been completed. It is finally time to get the cluster up and running. The first step is to go onto NodeA while NodeB is shut down (paused will also suffice in VMware). Go to Start > Administrative Tools > Failover Cluster Management.

This will launch the Failover Cluster Management MMC. The section we will be working with the most is Management.

The first thing we will want to do is Validate a Configuration. This will help ensure that our NodeA has met the prerequisites for cluster formation. Click Validate a Configuration to proceed and then Click Next to bypass the Before you Begin window. Enter the name of our first node, NodeA and click Add. Click Next to Continue.

You are presented with a list of checks that will occur. If you would like to learn more about these checks, click More about cluster validation tests in the bottom part of the window. Click Next to Continue.

You will begin to see each Inventory item be checked. It will result in a Success, Failure, or Not Applicable. Once this is complete, the Cluster Validation Report is displayed. If you have any failures, those failures will need to be remedied prior to continuing the cluster formation.

Create a Cluster

Now that our cluster is validated, we can proceed with the creation of the cluster. Go back to the Failover Cluster Management MMC and then back to the Management section.

Click Create a Cluster. This will launch a wizard which will assist us in creating our cluster. Click Next to bypass the Before you Begin window. Enter the name of our first node, NodeA and click Add. Click Next to Continue.

Select an IP Address that you would like to use for administering the cluster. A name for the cluster must also be created. We will use EXCLUS01 for the cluster name and an IP Address of 192.168.119.220 for the Cluster IP. Click Next to Continue.

We are now provided with confirmation of the settings we will use when forming the cluster. Click Next to Continue.

Installation will begin and a progress bar will be displayed.

Once this is complete, the Cluster Summary Report is displayed notifying you whether cluster installation has been successful or unsuccessful. If cluster installation has been unsuccessful, troubleshooting will need to ensue to ensure you can get the cluster installed successfully. Click Finish to continue. The Failover Cluster Management MMC re-appears. You will now see that there is an EXCCLUS01 hierarchy with options to modify and manage your cluster. This gives you re-assurance that the cluster installation completed successfully.

Adding Cluster Storage

Before we bring up the second Node, we need to ensure we add the shared storage to the cluster due to the cluster installation not detecting shared storage and adding it automatically. As stated in this article series, we want the cluster service to have complete control over access to the shared disks. If both nodes are fighting for disk access at the same time, there is a risk of data loss or corruption. This is why we have only had 1 Cluster Node booted at any given time. When in the Failover Cluster Management MMC, Click on Storage in the hierarchy of EXCLUS01. You will see that no storage exists in the cluster.

In the Action Pane, Click Add a disk. Make sure both disks are selected. Click OK to Continue.

Cluster NodeA now has full control over both disks.

Select Cluster Disk 1 and choose Properties in the Action Pane. Rename it to something more intuitive; since this is our database LUN, rename it to Database.

Do the same for Cluster Disk 2, but rename it to Quorum.

Failover Cluster Installation (NodeB)

Validate a Configuration

All of our prerequisites have been completed. It is finally time to get the cluster up and running. The first step is to go on NodeB (It is safe to have NodeA up as the cluster service has control over the disks). Go to Start > Administrative Tools > Failover Cluster Management.

This will launch the Failover Cluster Management MMC. The section we will be working with the most is Management.

The first thing we will want to do is Validate a Configuration. This will help ensure that our NodeB has met the prerequisites for cluster formation. Click Validate a Configuration to proceed and then Click Next to bypass the Before you Begin window. Enter the name of our first node, NodeB and click Add. Click Next to Continue.

You are presented with a list of checks that will occur. If you would like to learn more about these checks, click More about cluster validation tests in the bottom part of the window. Click Next to Continue.

You will begin to see each Inventory item be checked. It will result in a Success, Failure, or Not Applicable. Once this is complete, the Cluster Validation Report is displayed. If you have any failures, those failures will need to be remedied prior to continuing the cluster formation.

Joining NodeB to Cluster

While on NodeB, open the Failover Cluster Management MMC. Since NodeB is not a part of the cluster, we will see no cluster to manage. Right-Click Failover Cluster Management > Manage a Cluster.

Note: Joining NodeB to the cluster will require less information than it did when initially creating the cluster. This is because your 192.168.119.0 network has been chosen to be the network that administers the cluster.

Type in the Cluster Name EXCLUS01. The NetBIOS name or FQDN should both work if name resolution is properly configured in your environment. Click OK to Continue.

Right-Click our EXClus01 Cluster and choose Add Node…

This will launch a wizard which will assist us in joining our existing EXCClus01 cluster. Click Next to bypass the Before you Begin window. Enter the name of our second node, NodeB and click Add. Click Next to Continue.

At this point, you will be asked to go through another validation which tests both NodeA and NodeB together. One of the tests takes storage offline to test it between the cluster nodes; others include testing disk failover, checking operating system versions on both nodes, and a slew of other tests to ensure that both nodes will function properly together in a cluster. Since I have shown how the validation tests work twice, I will not include a how-to screenshot for running a third validation test. Click Next to Continue once the validation pass succeeds.

We are now ready to add NodeB to our cluster. Click Next to Continue.

Installation will begin and a progress bar will be displayed.

Once this is complete, the Add Node Summary Report is displayed notifying you whether adding NodeB to the cluster has been successful or unsuccessful. If adding the node has been unsuccessful, troubleshooting will need to ensue to ensure you can get NodeB successfully added to the cluster. Click Finish to continue. The Failover Cluster Management MMC re-appears. You will now see that there is NodeB under the Node section in the EXCClus01 cluster hierarchy. This gives you re-assurance that NodeB was added to cluster successfully.

After adding a second node, your disk witness will automatically be selected. In the case of this lab, our disk witness was set to use the database disk. This will need to be changed.

This will be modified later in the article.

Configuring Cluster Network

NIC Configuration

We will now want to configure the cluster networks. In Server 2003 clustering, we had three options:

  • Private
  • Public
  • Mixed

Administrators would configure the NICs in one of two different ways depending on the cluster design/needs:

Method 1 (Public/Private)

Public NIC – Public

Private NIC – Private

Method 2 (Mixed/Private)

Public NIC – Mixed

Private NIC – Private

In Method #1, the Public NIC could only be used for client communication and not heartbeat communication while the Private NIC was the only NIC used for heartbeat communication.

In Method #2, the Public NIC and Private NIC were both used for heartbeat communication, but the Public NIC was the only NIC allowed to accept client communication via the corporate network. In this case, the Private NIC was given a higher priority for cluster communication so the cluster heartbeat would preferably use the Private NIC. In case of Private NIC failure, you would still be able to use the Public NIC for temporary heartbeat communication. This is my preferred method for reasons of redundancy, and it is also the method that is used in Server 2008.

Note: When configuring clustering in Server 2008, you cannot use one NIC as Public and one NIC as Private anymore. You must use one NIC as private and one NIC as mixed (which would be Method 2).

Clustering NIC configuration options are as follows:

When in the Failover Cluster Management MMC, Click on Networks in the hierarchy of EXCLUS01. You will see that two Networks exist.

There are three types of Cluster Use:

  • Enabled = Mixed
  • Internal = Private
  • Disabled = Unmanaged

Select Cluster Network 1 and choose Properties in the Action Pane.

We will then want to take a look at the options that are specified on Cluster Network 1. We see that this is the NIC that belongs to our corporate network, which we will want to use for both client communication and heartbeat communication. As I said earlier, we must configure one NIC to be mixed and one NIC to be private; this is the public NIC, as it belongs to our public 192.168.119.0/24 network. Selecting both "Allow the cluster to use this network" and "Allow clients to connect through this network" equates to mixed mode. After ensuring these settings are correct on your Public NIC, rename Cluster Network 1 to something more intuitive, such as Public.

Select Cluster Network 2 and choose Properties in the Action Pane.

We will then want to take a look at the options that are specified on Cluster Network 2. We see that this is the NIC that belongs to our private heartbeat network, which we will want to use solely for heartbeat communication. As I said earlier, we must configure one NIC to be mixed and one NIC to be private; this is the private NIC, as it belongs to our private 10.10.10.0/24 network. Selecting "Allow the cluster to use this network" without the option "Allow clients to connect through this network" equates to private mode. After ensuring these settings are correct on your Private NIC, rename Cluster Network 2 to something more intuitive, such as Private.
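
If you would rather confirm these roles from a command prompt, the network role is exposed as a cluster property (0 means the cluster does not use the network, 1 is internal/private only, and 3 is mixed). Listing the properties of each cluster network will show its Role value, for example:

cluster EXCClus01 network "Public" /prop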

Heartbeat Tolerance Configuration

Exchange 2007 also requires we use Cluster.exe to configure tolerance for missed cluster heartbeats. To do this, open a Command Prompt.

We will first want to ensure that each of our Cluster Nodes are currently online. To do this, type the following command in the command prompt: cluster EXCClus01 Node

Ensure that the Status for each node is Up. If this is successful, run the following two commands on your cluster to configure the heartbeat tolerance:

cluster EXCClus01 /prop SameSubnetThreshold=10

cluster EXCClus01 /prop CrossSubnetThreshold=10

Configuring Disk Majority Quorum

Earlier in the article, it was stated that once NodeB joined the cluster, the Disk Witness Disk was automatically chosen. Unfortunately, the disk witness went onto the Database disk instead of the Quorum Disk.

To configure the Cluster Quorum Settings, Right-Click EXClus01 > More Actions > Configure Cluster Quorum Settings…

Click Next to bypass the Before you Begin window.

We are presented with the type of Quorum we want to use. Ensure that "Node and Disk Majority (recommended for your current number of nodes)" is selected. Click Next to Continue.

We can now see why the Database was being used for Quorum. There is a checkmark for the Database to be used. Uncheck this and place a checkmark next to Quorum. Click Next to Continue.

We are now presented with a confirmation of the quorum settings we are about to apply. Click Next to Continue.

Configuration will begin and a progress bar will be displayed.

Once this is complete, the Configure Cluster Quorum Settings Summary Report is displayed notifying you whether configuring the Cluster Quorum has been successful or unsuccessful. If configuring the Cluster Quorum has been unsuccessful, troubleshooting will need to ensue to ensure you can get the Cluster Quorum successfully configured. Click Finish to continue. The Failover Cluster Management MMC re-appears. You will now want to go back into the Storage section and verify the Quorum is configured to use the Quorum disk.

Now that we have everything configured with the cluster, we will want to test failover to make sure the cluster is functioning properly before we attempt to install Exchange. For this, I disabled both NICs on NodeA. I then went onto NodeB, opened the Failover Cluster Management MMC, and looked at the Storage. As you can see, both disks moved to NodeB. I opened the volumes via Windows Explorer and successfully viewed the .txt files I created in previous articles. Success!

I then proceeded to pause my lab in VMware. I began by pausing NodeB and then verified that storage successfully moved to NodeA, which it did. Success again!

Summary

Well folks, that is all for Part 3 of this article. To recap what was included in Part 3 of this article series, we first started off by recapping what was included in Part 1 and Part 2 of this article and what the goal of this lab is: to showcase Server 2008's built-in iSCSI Initiator software to connect to an iSCSI Target and deploy a Single Copy Cluster (SCC) for Exchange 2007 Failover Clustering. In Part 2, we left off at the final stages of disk preparation. All of the shared disks were successfully partitioned, formatted, and named.

In Part 3, we formed the cluster, beginning with Node A followed by Node B. We then proceeded with configuring the cluster networks, quorum, and validated our failover cluster worked.

For Part 4, I will detail the following:

  • Install the Exchange 2007 Active Clustered Mailbox Role in our Single Copy Cluster
  • Install the Exchange 2007 Passive Clustered Mailbox Role in our Single Copy Cluster
  • Manage our Exchange Cluster

Exchange 2007 SP1 SCC using Server 2008 StarWind iSCSI – Part 2

Welcome to Part 2 of this article series. In Part 1, we started off by discussing the goal of this lab: to showcase Server 2008's built-in iSCSI Initiator software to connect to an iSCSI Target and deploy a Single Copy Cluster (SCC) for Exchange 2007 SP1 Failover Clustering. We first discussed the lab setup using VMware Workstation, and then proceeded to the configuration of RocketDivision's StarWind iSCSI Target software. We then went into Exchange 2007 and did the initial iSCSI Initiator connection to our iSCSI Target.

In this Part, I will be preparing our Cluster Nodes by installing any prerequisites needed prior to the cluster formation and Exchange 2007 SP1 installation. When that is complete, we will continue with our iSCSI configuration by adding our LUNs to the Cluster Nodes, partitioning these LUNs, formatting these LUNs, and ensuring that shared disk storage is working as intended.

Part 1

Part 2

Part 3

Part 4

Prerequisite Installation on Cluster Nodes (NodeA and NodeB)

Downloading XML Files for prerequisite installation

To prepare your server for Exchange installation as well as Cluster installation, there are a number of prerequisites that are needed on each node. The Microsoft Exchange Team presented several XML files which allow you to install the necessary prerequisites for each type of node; whether that may be a standalone Client Access Server, Hub Transport Server, Mailbox Server, Clustered Mailbox Servers, or a Unified Messaging Server.

There is also an XML file for the Typical Installation, which includes the Hub Transport Server, Client Access Server, and Mailbox Server roles. Instead of reinventing the wheel, head on over to the blog article that explains these XML files. You can visit that blog entry here, which is based off the TechNet article here. To download these XML files, go to the following URL here. Save them somewhere on your hard drive (the files will be stored on C:\ on both Cluster Nodes) and transfer the following XML files to each Cluster Node:

  • Exchange-Base.xml
  • Exchange-ClusMBX.xml

Because one of the assumptions is that you have already deployed a Client Access Server as well as a Hub Transport Server, I will not detail the installation process for each of those roles. That is explained in the URLs provided just above.

Installing prerequisites using XML files

The prerequisite installation on both nodes will be identical. Log on to each cluster node (the order in which the cluster nodes are done is irrelevant) and open the Command Prompt.

Once in the Command Prompt, we will use the first XML, Exchange-Base.xml, which checks for the following tools and installs them if they are not already present:

  • RSAT-ADDS – Active Directory Domain Services Remote Management Tools which includes LDIFDE and other Directory Services Tools
  • PowerShell

To install these tools using the Command Prompt, type the following command: ServerManagerCMD -ip C:\Exchange-Base.xml

You will need to ensure the server is rebooted prior to running the Exchange-ClusMBX.xml prerequisite installation. Once the server is back up, open the Command Prompt again. This time we will use the second XML, Exchange-ClusMBX.xml, which checks for the following components and installs them if they are not already present:

  • Failover Clustering
  • Web-Server Role (Internet Information Services 7.0)
  • Web-Metabase
  • Web-Lgcy-Mgmt-Console
  • Web-ISAPI-Ext
  • Web-Basic-Auth
  • Web-Windows-Auth

To install these tools using the Command Prompt, type the following command: ServerManagerCMD -ip C:\Exchange-ClusMBX.xml
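
For reference, the full sequence I run from the Command Prompt on each node looks like the following. This is just a sketch of the steps already described above; the shutdown command and the final -query check are my own additions for convenience and are not required:

ServerManagerCmd -ip C:\Exchange-Base.xml
shutdown /r /t 0
rem after the reboot completes, log back on and run the clustered mailbox prerequisites
ServerManagerCmd -ip C:\Exchange-ClusMBX.xml
rem optional sanity check to confirm the roles/features are now installed
ServerManagerCmd -query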

Adding LUNs to Cluster Nodes (NodeA and NodeB)

In Part 1, we used each cluster node’s iSCSI Initiator to establish connectivity to our StarWind iSCSI target. This exposed both iSCSI targets, but the LUNs were not yet added to either of the Exchange Cluster Nodes. In order to do this, it is imperative that you only have one Exchange Cluster Node up at any given time until Clustering is installed.

The reason for this is that data could be lost or corrupted if both nodes fight for access to the disks at the same time. Once clustering is installed on at least one node, you can bring up the second node, as the cluster service will prevent the node that is not the Active Cluster Node from taking control of the disks. The process of setting up the shared disks is as follows:

Setting up shared disks (Node A)

In Part 1, we left off exposing the iSCSI targets to both Cluster Nodes. Now that each node’s iSCSI Initiator can see these targets, let’s begin setting up the shared disks. To proceed, ensure that Node A is turned on and Node B is turned off to avoid data loss and/or corruption. By taking a look at Disk Management (Start > Administrative Tools > Server Manager > Disk Management), we will see that no shared disks have currently been added to Node A.

Let’s go back to the iSCSI Initiator (Start > Administrative Tools > iSCSI Initiator). Taking a look at the targets, we can see that both are set to Inactive.

For each iSCSI Target, click the “Log on…” button and place a check mark in the “Automatically restore this connection when the computer starts” check box. Click OK to Continue.

You will now see that both iSCSI Targets have been Connected (Activated) on Node A.
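
If you want to double-check this from the command line as well, Windows Server 2008 includes the iscsicli utility. A quick, purely optional check might look like this (output formatting will vary):

rem list the targets discovered from the StarWind portal
iscsicli ListTargets
rem list the active sessions; both targets should now show up here
iscsicli SessionList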

Go back into Disk Management (Start > Administrative Tools > Server Manager > Disk Management). We now see that two new shared disks have currently been added to Node A.

We will want to bring both of these disks Online. You can do this by Right-Clicking Disk 1 > Choose Online. Do the same for Disk 2.

Now that both disks are Online, we will want to initialize them. You can do this by Right-Clicking Disk 1 > Choose Initialize. Do the same for Disk 2.

When initializing Disk 1 and Disk 2, choose the following options. Click OK to Continue.

Now that we have initialized both Disk 1 and Disk 2, we will partition both disks as a Simple Volume and format both volumes as NTFS (I hope nobody still uses FAT!). You can do this by Right-Clicking the unallocated space for Disk 1 and Disk 2 > Choose New Simple Volume. This will bring you to the Welcome to the New Simple Volume Wizard. Click Next to Continue.

You will now have to specify the Volume Size. In this example, we are specifying the Volume Size for our database volume. You will need to do these steps on the Quorum volume as well. Choose the maximum allocatable space available. Click Next to Continue.

Assign the drive letters accordingly. The drive letter D will be for the Database volume and the drive letter Q will be for the Quorum Volume.

Note: You may have to change the drive letter for any CD-ROM, DVD-ROM, or any other volume that may be installed on your system to use the drive letter you want. You can read here for more information on how to change a drive letter.

For larger servers, you may want to use Volume Mount Points instead of Drive Letters if you will be using more than 26 volumes. Volume mount points are also good for LCR implementations, as you can easily switch the target path of the Mount Point if one location becomes corrupt. Click Next to Continue.

Finally, you must format the volume. I would give the volume a name, such as Database or Quorum, and I would also choose Quick Format. A Quick Format skips the full scan of the disk for bad sectors, so the format completes much faster. Click Next and then Finish to complete this process.

When completing this process on both disks, your Disk Management MMC should look similar to the following image.

As an optional but recommended step, open both volumes and create a .txt file in each. This will allow you to verify, after adding both disks to Node B, that shared disk access is working properly.
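
If you prefer the command line over the Disk Management MMC, a rough diskpart equivalent of the steps above would look something like the following. This is only a sketch: disk numbers may differ in your lab, the second disk would use the Quorum label and the Q letter, and the verify.txt file name is just an example.

diskpart
rem repeat this block for disk 2, using label="Quorum" and letter=Q
select disk 1
online disk
rem on some builds the command is simply ONLINE; clear the read-only flag if it is set
attributes disk clear readonly
convert mbr
create partition primary
format fs=ntfs label="Database" quick
assign letter=D
exit
rem drop a small file on each volume so shared access can be verified from Node B later
echo shared disk test > D:\verify.txt
echo shared disk test > Q:\verify.txt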

Verifying Disk Configuration (Node B)

We will now need to add the fully partitioned and formatted disks to Node B. Shut down Node A, and boot up Node B once Node A has finished shutting down. In the case of this lab, a VMware pause will suffice if you successfully enabled the clustering option when you created your iSCSI Target within StarWind.

If you forgot to choose the Clustering option, you will receive a Connection Error message when attempting to log on to the target. You can do one of two things. The first is to shut down Node A completely to release the connection to StarWind (not recommended). The second is to delete the iSCSI target and re-create it within StarWind with the Clustering option enabled; then go back onto both Nodes, expose the Target to both of them, set up the shared disks on Node A, and go through the disk initialization, partitioning, and formatting process explained above. The second option is recommended since we will need to simulate a Cluster environment in future Parts of this article series.

By taking a look at Disk Management (Start > Administrative Tools > Server Manager > Disk Management), we will see that no shared disks have currently been added to Node B.

Let’s go back to the iSCSI Initiator (Start > Administrative Tools > iSCSI Initiator). Taking a look at the targets, we can see that both are set to Inactive.

For each iSCSI Target, click the “Log on…” button and place a check mark in the “Automatically restore this connection when the computer starts” check box. Click OK to Continue.

You will now see that both iSCSI Targets have been Connected (Activated) on Node B.

Go back into Disk Management (Start > Administrative Tools > Server Manager > Disk Management). We now see that the two shared disks have been added to Node B. Unlike when we did this with Node A, we can see that the disks are already formatted and partitioned, but are not online.

Because the disks are not online, we will want to bring both of these disks Online. You can do this by Right-Clicking Disk 1 > Choose Online. Do the same for Disk 2.

After the disks have been brought online, they will most likely be using different drive letters than you assigned on Node A. Because of this, you must assign the drive letters to match the same letters you used on Node A. The drive letter D will be for the Database volume and the drive letter Q will be for the Quorum Volume.

Note: You may have to change the drive letter for any CD-ROM, DVD-ROM, or any other volume that may be installed on your system to use the drive letter you want. You can read here for more information on how to change a drive letter. When completing this process on both disks, your Disk Management MMC should look similar to the following image.

If you performed the optional but recommended step of adding a .txt file to both volumes, now would be the time to open both volumes (both D:\ and Q:\) to see if the .txt files are there. If you do indeed see the .txt file on each volume, shared disk access is working as intended. If you do not, you will need to troubleshoot the shared disk configuration.
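
The same check can be sketched out from the command line on Node B, with the caveats from the Node A sketch still applying (volume numbers will vary, so confirm them with list volume before assigning letters):

diskpart
select disk 1
online disk
select disk 2
online disk
rem match the volumes by their Database and Quorum labels before re-assigning letters
list volume
select volume 2
assign letter=D
select volume 3
assign letter=Q
exit
rem the test files created on Node A should be visible if shared disk access is working
dir D:\verify.txt
dir Q:\verify.txt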

Summary

Well folks, that is all for Part 2 of this article. To recap: we first started off by reviewing what was covered in Part 1 of this article series and restating the goal of this lab, which is to showcase Server 2008’s built-in iSCSI Initiator software to connect to an iSCSI Target and deploy a Single Copy Cluster (SCC) for Exchange 2007 Failover Clustering.

In Part 1, we left off at exposing the iSCSI LUNs to our Exchange 2007 Cluster Nodes. In Part 2, we prepared our Cluster Nodes by installing the prerequisites needed prior to cluster formation and the Exchange 2007 SP1 installation. When that was complete, we continued with our iSCSI configuration by adding our LUNs to the Cluster Nodes, partitioning and formatting those LUNs, and ensuring that shared disk storage was working as intended.

For Part 3, I will detail the following:

  • Form the cluster, beginning with Node A followed by Node B
  • Configure the cluster networks
  • Configure the cluster quorum
  • Validate the failover cluster

Exchange 2007 SP1 SCC using Server 2008 StarWind iSCSI – Part 1

Now that Exchange Server 2007 SP1 and Server 2008 are RTM, I thought it would be nice to create an article on how to use Server 2008’s built-in iSCSI Initiator software to connect to an iSCSI Target and deploy a Single Copy Cluster (SCC) for Exchange 2007 Failover Clustering. The iSCSI software that will be used is RocketDivision StarWind. This article series will guide you through the entire process: setting up the iSCSI Target software, preparing Server 2008 for Exchange 2007, installing Exchange 2007 in an SCC Failover Cluster, and managing your SCC Failover Cluster.

Part 1

Part 2

Part 3

Part 4

Lab Setup

Guest Virtual Machines

One Server 2008 Enterprise (Standard can be used) RTM/SP1 x64 Domain Controller which contains the StarWind iSCSI Target software. Exchange 2007 SP1 will be installed with the Hub Transport Server and Client Access Server roles.

Two Server 2008 Enterprise (Enterprise required) RTM/SP1 x64 (x64 required) Member Servers where Exchange 2007 SP1 will be installed with the Mailbox Server role for Failover Clustering.

Assumptions

  • You have a domain that contains at least one Server 2003 SP2 Domain Controller (DC).
  • You have configured the IP settings accordingly for all workstations to be on the same subnet including the public NICs for both Failover Cluster nodes. I have provided the IP scheme of my lab below, but this will vary depending on your needs and VMware configuration.
  • You have an existing Exchange 2007 Hub Transport Server as well as a Client Access Server. For the sake of this lab, I will be installing the Hub Transport Role as well as the Client Access Server Role on our DC. This is not a recommended practice for production, but for this lab, we will do so to consolidate and conserve resources. This article does not go over the installation or configuration of these roles.

Configuration of VMware Workstation for Failover Cluster Nodes

There is no official VMware support for Server 2008 at the time of writing this article. The latest version and build is VMware Workstation 6.0.2 build-59824. There is currently “experimental” support, which you will see when specifying the Operating System as you create your Virtual Machine. Through my experience writing Part 1, I did not encounter any real issues related to Windows Server 2008 and VMware Workstation 6.0.2 build-59824.

SCC Failover Clusters using the Node Majority with File Share Witness quorum are supported, but Node Majority with Disk Witness is preferred. For this lab, we will be using the Node Majority with Disk Witness Quorum. This quorum model is essentially the Quorum Disk from Windows Server 2003 with added benefits: every node within the cluster gets a vote, and the Disk Witness gets a vote as well. In this two-node lab, that means three votes in total, so the cluster stays up as long as a majority of votes (for example, both nodes) is available; if only the Quorum Disk goes down, your Cluster is still operational.

Processor: 2

Memory: 848MB

Network Type - Public NIC - Network Address Translation (Used so Virtual Machines get an IP Address without taking up IP Addresses at a client’s site while still being granted Internet access through NAT functionality)

Network Type – Private NIC - VMnet9 (Shared with Node2)

Virtual Disk Type – System Volume (C:\): VMware SCSI 18GB

Virtual Disk Type – Exchange Database/Logs (D:\): iSCSI 1GB

Virtual Disk Type – Disk Witness Quorum (Q:\): iSCSI 500MB

Note: The Virtual Disks for the Exchange Database and Disk Witness Quorum will be created within Windows as part of the iSCSI initiation process and will not be created in the VMware properties. Also, in a production environment, depending on your design, you will most likely expose separate LUNs to separate your Database and Logs for reasons such as performance, recoverability, etc. For the purpose of this lab, we will allow the database and logs to co-exist on the same LUN for the sake of consolidation.

Configuration of VMware Workstation for Domain Controller/Hub Transport Server/Client Access Server/StarWind

Processor: 2

Memory: 1112MB

Network Type - Network Address Translation (Used so Virtual Machines get an IP Address without taking up IP Addresses at a client’s site while still being granted Internet access through NAT functionality)

Virtual Disk Type – System Volume (C:\): VMware SCSI 20GB

IP Addressing Scheme (Public Subnet)

IP Address – 192.168.119.x

Subnet Mask – 255.255.255.0

Default Gateway – 192.168.119.2

DNS Server – 192.168.119.150 (IP Address of the Domain Controller/DNS Server)

IP Addressing Scheme (Private Cluster Heartbeat Subnet)

Node A: IP Address – 10.10.10.60

Node B: IP Address – 10.10.10.61

Subnet Mask – 255.255.255.0

Preparation of Cluster Nodes (NodeA and NodeB)

Network Interface Card (NIC) Configuration

The first thing we will want to do is configure the IP settings of both the Public and Private NICs.

We will want to rename our public NIC connection to Public and our heartbeat NIC connection to Private. To do so, go to Start > Right-Click Network > Properties.

This will bring up the Network and Sharing Center which presents a list of tasks to you on the left-hand side of the Window. Click on Manage Network Connections.

Now you will be presented with the Network Connections window. This is where you can modify the network properties for each NIC in your server. For your public connection, rename your Local Area Connection to Public. Likewise, for your private heartbeat connection, rename your Local Area Connection to Private. After you have done this, it will look something similar to the following:

One of the assumptions earlier in this article is that you have a properly configured TCP/IP network to which all nodes are connected. Because of this, I will skip the Public TCP/IP configuration and proceed to configuring the Private Heartbeat NIC. A quick note, though: when configuring the Public NIC, I would remove IPv6 but leave both Link-Layer options checked.

Double-Click or Right-Click > Properties on the Private NIC to begin configuration.

Uncheck the following:

  • Internet Protocol Version 6 (TCP/IPv6)
  • Link-Layer Topology Discovery Mapper I/O Driver
  • Link-Layer Topology Discovery Responder

Select Internet Protocol Version 4 (TCP/IPv4) and press the Properties button. For NodeA, the only TCP/IP configuration we will need is the IP Address and Subnet Mask. NodeA’s IP configuration will be 10.10.10.60/24, while NodeB’s will be 10.10.10.61/24.

Go into the Advanced NIC configuration settings by clicking the Advanced button. From there, navigate to the DNS tab and de-select “Register this connection’s addresses in DNS.”

Select the WINS tab and de-select “Enable LMHOSTS lookup” and configure the NetBIOS setting to “Disable NetBIOS over TCP/IP.”

Once you are done configuring the Advanced settings, press OK three times and you will be back at the Network Connections screen. From here, choose Advanced and select Advanced Settings.

You will be presented with the Binding Order for your current NICs. Ensure that the Public NIC is on top by selecting Public and pressing the green up arrow key on the right-hand side of the dialog.
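
For completeness, most of the Private NIC settings above can also be applied from PowerShell via WMI. This is only a sketch: the NetConnectionID filter assumes you have already renamed the connection to Private, NodeB would use 10.10.10.61 instead, and the IPv6/Link-Layer unbinding and the binding order still need to be done in the GUI as described above.

# find the heartbeat adapter by its connection name and get its TCP/IP configuration
$adapter = Get-WmiObject Win32_NetworkAdapter -Filter "NetConnectionID='Private'"
$nic = $adapter.GetRelated("Win32_NetworkAdapterConfiguration") | Select-Object -First 1

$nic.EnableStatic(@("10.10.10.60"), @("255.255.255.0"))   # static IP and subnet mask only; no gateway or DNS
$nic.SetDynamicDNSRegistration($false, $false)            # do not register this connection's addresses in DNS
$nic.SetTcpipNetbios(2)                                    # 2 = disable NetBIOS over TCP/IP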

Rename Computer and Join to Active Directory Domain

Windows Server 2008 automatically assigns the computer a random name. Because of this, we will change the computer name, join the computer to the Active Directory domain, and then reboot. To do this, use the GUI as you normally would in previous versions of Windows, or use PowerShell by proceeding with the following steps:

Enter the following lines of code (code thanks to justaddcode.com) separately in your PowerShell console. PowerShell must first be installed by opening a Command Prompt and typing ServerManagerCmd -i PowerShell. Once PowerShell is installed, you can open a PowerShell window by navigating to Start > All Programs > Windows PowerShell 1.0 > Windows PowerShell, or by clicking Start and typing PowerShell in the search field:

$comp = Get-WmiObject Win32_ComputerSystem   # get the local computer object via WMI

$comp.Rename("NodeA")   # rename the machine (use "NodeB" on the second node)

$comp.JoinDomainOrWorkgroup("Shudnow.net", "domainPassword", "MYDOMAIN\domainAdmin", $null, 3)   # 3 = join the domain and create the computer account

Shutdown -r   # reboot to complete the rename and domain join

If you are making these changes on NodeB, ensure that you enter NodeB in the PowerShell code.

Reboot the Cluster Failover Node to complete configuration changes.

StarWind iSCSI Target Configuration

RocketDivision provides an iSCSI Target compatible with Windows Server 2008. This product is called StarWind. The free version does not allow more than one node to connect to a target at the same time. I will be using a licensed copy of StarWind to provide you with the knowledge needed to fully install a Single Copy Cluster using Windows Server 2008’s built-in iSCSI Initiator.

One thing I want to make you aware of is that many of us have become accustomed to minimizing utilities to the notification area (system tray) by clicking X. If you do this with StarWind, it will actually close the program instead of minimizing it to the notification area. Also, every time you shut down or reboot, you will have to re-establish your connection. Thankfully, your Virtual Disks will still be saved. So please be cognizant of this before you continue with your lab.

Once the software is installed on a machine (easy install… no tutorial needed), open StarWind and Right-Click on your default connection and choose Connect.

You will then be presented with a password prompt with the default username of test as well as a default password of test. This is configurable in the Connection Properties.

Once your credentials have been entered and OK has been pressed, you will notice that the previously greyed out Connection is now colored. This will allow you to enter your Registration information for your connection via the Help drop-down menu.


Now that we have a functional connection, we have to add a device to it to allow our cluster nodes to initiate an iSCSI connection and obtain iSCSI-connected disks. To do this, press the Add Device button on the Toolbar. Select the type of device you wish to use. For purposes of this lab, we will use an Image File device. Click Next to Continue.

Then choose Create New Image. Click Next to Continue.

You will now need to enter the information needed to create the new disk image. The file extension should end with .img. As you can see from the image below, the image name path might look like something you are not accustomed to. Click the … button to assist you in selecting the location where you would like to create your image. The image name path will automatically be filled in for you. All that is left is to fill in the image name.img filename. Finally, specify any additional values you may want, such as image size, compression, encryption, etc. Click Next to Continue.

When configuring the following screen, you must ensure you select “Allow multiple concurrent iSCSI connections (clustering)”. Click Next to Continue.

Choose a Target Name. This is optional, and if you enter nothing, a default Target Name will be provided. For purposes of this lab, we will specify a Target Name of Server2008SCC. Click Next and Finish to complete the creation process of your disk image.

Once your disk image is created, your StarWind interface should look similar to the following window.

Repeat the steps above to create one additional image file for your Disk Witness Quorum. This disk should be 500MB in size. You will also need to ensure you change the Target Name for the new Disk Image. For this new Disk Witness Quorum, I have used the Target Name Server2008SCCQuorum. After you have completed this, your StarWind interface should look similar to the following window.

Exchange 2007 iSCSI Initiator Configuration

To begin configuring the iSCSI Initiator on the Exchange 2007 nodes so they can access the Virtual Disks provided by StarWind, we must first open the iSCSI Initiator console. You will want to do all of the following on both NodeA and NodeB. It is safe to keep both nodes up for now, as we won’t actually be exposing any disks to Exchange 2007 until Part 2 of this article series.

Go to Start > Control Panel > Administrative Tools > iSCSI Initiator > Click Yes to Continue.

The next option is a matter of personal preference. You can choose No if you want to manually configure the firewall. My recommendation would be to choose Yes to ensure the firewall rules get properly added. Click Yes to Continue.

You will also need to go into the Windows Firewall on the server which contains StarWind and ensure that both an incoming and an outgoing TCP firewall rule are created for port 3260. From my experience, disabling the Windows Firewall entirely cut off all connectivity to that machine rather than opening it up. If anybody knows why this may be, drop me an e-mail. Thanks!
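
If you would rather script those two rules on the StarWind server than click through the firewall MMC, a minimal netsh sketch would be the following (the rule names are arbitrary):

netsh advfirewall firewall add rule name="StarWind iSCSI 3260 In" dir=in action=allow protocol=TCP localport=3260
netsh advfirewall firewall add rule name="StarWind iSCSI 3260 Out" dir=out action=allow protocol=TCP remoteport=3260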

As a side note, one thing I did do was log on to each server, go into the Windows Firewall properties, and set inbound connections to Allow for the Domain Profile, Private Profile, and Public Profile.

Configuring the Windows Firewall is out of the scope of this article. To learn more about the Windows Firewall, visit the following article:
http://www.windowsnetworking.com/articles_tutorials/configure-Windows-Server-2008-advanced-firewall-MMC-snap-in.html

When you have successfully completed the above steps, you can proceed with the iSCSI Initiator configuration.

To connect the iSCSI Initiator to the iSCSI Target, click Add Portal > enter the IP configuration for the iSCSI Target server. Click OK to Continue.

This will expose the targets you created within StarWind as shown in the following image.
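
The same portal can also be added from the command line with iscsicli if you prefer. A minimal sketch, assuming 192.168.119.150 (the Domain Controller running StarWind in this lab) is the target portal:

rem add the StarWind server as a target portal and list the targets it exposes
iscsicli QAddTargetPortal 192.168.119.150
iscsicli ListTargets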

Summary

Well folks, that is all for Part 1 of this article. To recap: we first started off by discussing the goal of this lab, which is to showcase Server 2008’s built-in iSCSI Initiator software to connect to an iSCSI Target and deploy a Single Copy Cluster (SCC) for Exchange 2007 Failover Clustering. We then discussed the lab setup using VMware Workstation, and proceeded to the configuration of RocketDivision’s StarWind iSCSI Target software. We then went onto the Exchange 2007 Cluster Nodes (NodeA and NodeB) and made the initial iSCSI Initiator connection to our iSCSI Target.

For Part 2, I will detail the following:

  • Install Exchange Cluster Node Prerequisites prior to Cluster formation and Exchange 2007 SP1 Installation
  • Steps required to expose the disks created in Part 1 to both Exchange Cluster Nodes
