Monday, December 20, 2010

DFS notes

There are a few notes I want to make about managing DFS - since it's really the main reason our offices are using servers, you can imagine I like all the tools I can get to keep an eye on it. Fortunately, these have greatly increased since Server 2003!

First, the Replication Diagnostic Health Reports are incredibly useful - dare I say more useful (and a lot easier) than wading through the event logs for the File Services role (which is, as I'm doing the initial replications, creating upwards of 20,000 events every 24 hours...even with filters, that's a daunting volume to sift through to make sure you haven't missed anything).

To create one, expand Roles, File Services, DFS Management and Replication in turn in the Server Manager window. You should see a list of all your replication groups, so right-click whichever one you want to create a report on and choose, "Create Diagnostic Report". The wizard will ask you for the type of report you want to create; in this case we want to create a "Health Report". In the next screen, choose where to save the report and if you want to rename it. In the next screen, make sure the servers you want to check are included and click Next. Then choose if you want to include backlogged files and count file sizes for each member (I do both and it's pretty quick, even on shares of 100+ GB). Once the report is generated, it will open up in Internet Explorer automatically. If the bottom of the report says, "Report loading, please wait", you need to disable the enhanced security for Administrators in Internet Explorer (Server Manager for your local server -> Configure IE ESC), then reload the report.

There's a good amount of information in the reports, but the most important stuff is surfaced right in the section headings - primarily the ERRORS and WARNINGS sections. Items listed there don't necessarily correspond directly to warnings or errors in the event log - they're the things you really do care about. If a server is still receiving its initial replication, for instance, that shows up as a WARNING on that server. Mostly, this was the information I was looking for, so that was nice and easy! Drill through the rest of the report for lots more detail - I was interested to see that RDC allowed me to save, on average, between 85% and 95% of the bandwidth for replication...a HUGE improvement over my poor old Server 2003 days!

There was one other event I found on one of our reports that I need to mention here, though: an error stating that "One or more replicated folders have content skipped by DFS Replication."

Well, I didn't like that message at all, so I was happy to find out the reports give great details. In my case, it listed all the "Files that cannot be replicated under the replicated folder", under the heading that tells me they're not replicating because they're either temporary or symbolic links.

Running attrib.exe on the files listed didn't show me anything, so I found this excellent article that told me what tool to use in order to see if the files in question were, in fact, flagged as temporary.

They were.
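If you want to check for yourself which files carry the temporary flag, PowerShell can do it, since attrib.exe won't show that attribute (D:\Data here is just an example path, of course):

```powershell
# FILE_ATTRIBUTE_TEMPORARY is 0x100; list every file under D:\Data that has it set.
Get-ChildItem D:\Data -Recurse |
    Where-Object { ($_.Attributes -band 0x100) -eq 0x100 } |
    Select-Object FullName
```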

I have no idea why AutoCAD marked them as temporary, but nobody who has ever had to manage AutoCAD installs will be surprised that it's AutoCAD's fault.

It's a bit harder to clear the temporary flag on the files, but the article nicely provided a PowerShell command that will remove the temporary flag on all files in a directory, in this case d:\data:

# 0x100 is the FILE_ATTRIBUTE_TEMPORARY flag; -band 0xFEFF clears just that bit
Get-ChildItem D:\Data -Recurse | ForEach-Object -Process {if (($_.Attributes -band 0x100) -eq 0x100) {$_.Attributes = ($_.Attributes -band 0xFEFF)}}

Not bad!

Scheduling Replication Health Reports - Now that all my errors should be clear, I thought it would be beneficial, especially in the first few weeks of using the new servers, to get a fresh report every morning to check for replication issues. But even though we've only got 8 replicated folders under our DFS root, it's still a bit of a pain to create the report for each replication group manually. Fortunately, I found a TechNet article to which someone had appended a useful link and bit of code - the actual command you can use to create a report, in this case from a batch file.

So I created a batch file with eight lines, each resembling the following:

dfsradmin health new /RgName:domain.tld\root\data /refmemname:domain\server /repname:c:\dfsreports\data.html /fscount:true
where "root" is the shared folder used by the DFS root and "data" is the replicated directory (I just named the report the same thing as the replicated directory). Set that up as a scheduled task to run daily, and I'm all set!
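For the scheduled task itself, schtasks.exe works fine from the same console - the task name, path, and time below are just examples:

```batch
rem Run the report-generating batch file every day at 6:00 AM.
rem c:\scripts\dfsreports.bat is a placeholder - point it at your own batch file.
schtasks /create /tn "DFS Health Reports" /tr "c:\scripts\dfsreports.bat" /sc daily /st 06:00
```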

Wednesday, December 15, 2010

Policies, Users and Computers

Finally, the last step before we can start getting clients on the domain!

In this step, we'll set up user accounts and make some policies for the computers belonging to our domain. Fortunately, there are only a few non-intuitive parts of this process.

One of the big benefits of being on a domain is that the administrator can set all kinds of policies to control how the computers work - if it's normally configurable by a user, there's probably a policy that can override it. It sounds like policies are about taking control away from the user, but in reality they're mostly used to customize the computers so the administrator doesn't have to go to EVERY computer and set hundreds of little options. For instance, one of my major policies specifies that the "Offline Files" feature should be turned on, with appropriate folders automatically made always available offline, without any user interaction at all.

There are two kinds of policies: computer policies (which apply to any user logging on to that particular computer) and user policies (which apply to a user, no matter which computer they log on to). As with everything else in Windows, it uses the "folder" model - policies are applied to a folder, and everything in it (including sub-folders and their contents) gets the policy applied. However, if two policies conflict, the "lowest" policy overrules the "higher" one. So if I apply a policy turning on Offline Files to the folder containing all the domain computers, but then make a sub-folder and apply a policy that turns Offline Files off, any computers in the sub-folder will have Offline Files turned off.

OK, let's get started.

Open up your Server Manager window, and under "Roles", expand "Active Directory Domain Services", "Active Directory Users and Computers", and finally expand the item named with your domain name.

Among the folders (here, they're actually OUs - Organizational Units) are ones named "Computers" and "Users". These are the default places where new users and new computers go. Unfortunately, for reasons I can't begin to imagine, you can't apply policies to these folders. So I go in and create new OUs called "EmployeeComputers" and "EmployeeUsers".

It would work if you left it like this, but you'd have to go in manually every time you add a user or computer to the domain and drag it into the correct folder - which is a pain. It's much better to make your new OUs the default containers.

Now, go up to the top of the window and in the "View" menu, check "Advanced Features". Now right-click on each of those OUs you created and choose "Properties". Go over to the "Attribute Editor" and scroll down to "distinguishedName". It'll look something like
OU=EmployeeComputers,DC=domain,DC=com
. Write that down or copy-n-paste it into notepad or something. Do the same thing for the EmployeeUsers too. When you're done, go back to "view" and un-check "Advanced Features".

Now, if I haven't lost you yet, here's the other part that you'd never be able to logic through - to set them as the default containers for those object types, you need to drop out to the commandline.

Click start, type
cmd
and hit enter. You'll get a black console window with a C:\ prompt. Yes, seriously.

Go to system32:
cd c:\windows\system32
Even though it's a 64-bit OS, all the critical tools are still in System32. The commands you need are "redirusr" and "redircmp", each followed by a space and the appropriate distinguishedName of the OU. For instance:
redirusr OU=EmployeeUsers,DC=domain,DC=com
redircmp OU=EmployeeComputers,DC=domain,DC=com


Each command should say it completed successfully, then you can close the console.

Now, onto actually making policies for the domain.

Click Start, and under "Administrative Tools", open "Group Policy Management".

This window will look fairly familiar, since it shows (most of) the same OUs as "Users and Computers". There are a few more objects scattered among the OUs, however - they look like little script document icons. These are policies. By default, there's a "Default Domain Policy" applied just under your domain, and expanding the "Domain Controllers" OU will show you the other default policy, "Default Domain Controllers Policy". Click these and go to the "Settings" tab to see everything they do. (It's a lot.)

Now, it's possible to just edit these policies and have one massive policy (well, two, since the Domain Controllers do need to be more locked down than any other computers on the network) with ALL the settings you want in it, but I find it much easier to make lots of policies that each do pretty much one thing, and apply them as appropriate.

To make a policy, right-click the "Group Policy Objects" icon and choose "New". Name it according to what you want it to do, then poke around the settings to make it do what you want. Once you close it, you'll see it's now a little icon under "Group Policy Objects" - drag it from here up to whatever OU you want it to apply to.

For instance, I've got a policy that changes the password requirements. (Those settings are in Computer Configuration\Policies\Windows Settings\Security Settings\Account Policies\Password Policy.) That policy is linked to the whole domain with higher precedence than the Default Domain Policy, so its settings override whatever the default policy specifies.

Turning on Offline files has to happen at the computer level, so I create a policy called "Offline Files On", set Computer Configuration\Policies\Administrative Templates\Network\Offline Files as I want, then put that on my "EmployeeComputers" OU. Setting specific offline files happens at the user level, so I make another policy, set User Configuration\Policies\Administrative Templates\Network\Offline Files\Administratively Assigned Offline Files as I wish, and link that on my "EmployeeUsers" OU.

Other policies I set include Disabling EFS so no data can be lost if I have to reset someone's password, adding Domain Users to the Administrators group of the client machines, enabling Remote Desktop and allowing it through the firewall, etc. You can even use policies to set the wallpaper, screensaver, homepage, etc...like I said, if it's Microsoft software and configurable, there's probably a policy for it.

Once your policies are all set, go back to Active Directory Users and Computers, right-click your "EmployeeUsers" OU and start adding "New" "User"s. You'll be able to specify their first password, whether they have to change it the first time they log on, their name, etc. On the "Profile" tab of their properties, pay special attention to the "Home Folder" option - here's where you can automatically map Z:\ (for instance) to your DFS share at \\domain.tld\root

Once you've got one user set up like you want, you can also right-click their name and choose "Copy" to make an identical user - it will only prompt you for a different name, password and username.
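As a side note, if you ever need to create a batch of users at once, dsadd.exe can do the same thing from the commandline. This is just a sketch - the name, password, and paths are all made up, so adjust the DN and home folder to match your own OU and DFS root:

```batch
rem Create a user in the EmployeeUsers OU, require a password change at first
rem logon, and map Z: to the DFS root as the home folder.
dsadd user "CN=Jane Doe,OU=EmployeeUsers,DC=domain,DC=com" -samid jdoe -upn jdoe@domain.com -fn Jane -ln Doe -pwd P@ssw0rd1 -mustchpwd yes -hmdrv Z: -hmdir \\domain.tld\root
```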

OK, I think we're ready - let's start getting computers on the domain!

Configuring Backups

The approach I take to backups is very multi-tiered. In our office, most of the concerns about theft of, or damage to, the physical office resulting in a loss of data are mitigated by DFS and the fact that our data is almost instantly replicated between our offices (located in different cities); the likelihood of anything physically happening to both offices at the same time is very remote.

A far more common danger is user mistakes - files are deleted that shouldn't have been, or changes are overwritten and need to be rolled back. The biggest problem here is that you never know what point in time you'll need to roll back to. Fortunately, Windows Server includes a number of technologies we can take advantage of to handle almost any need that arises.

Enter "Shadow Copies". If you use Windows Vista or Windows 7, you may be slightly familiar with this technology; it's called "Previous Versions" in the file properties. The best part, other than the simplicity, is that it's available to any user on the domain - they don't really NEED to ask me to restore this file or that.

Shadow Copies work at the partition level - you keep shadow copies of an entire drive letter or not at all. It works by taking a "snapshot" of the drive when you first set it up, then twice a day (by default) it looks for any files that have been updated since the last snapshot and grabs a copy of the current version. When someone goes into the "Restore Previous Versions" dialog on the client machine, they're presented with a list of all available updates. If they're looking at a file, they'll see just the timestamps of the file when shadow copies saw an update. If they're looking at a whole folder, they'll see every shadow copy time available. In either case, they can restore the file or directory in-place, or open it up to make sure it's the right version, or restore it to an alternate place. It's incredibly useful, and takes a surprisingly small amount of disk space to keep several months' worth of twice-a-day snapshots.

To set up Shadow Copies, open up "Computer" and right-click the drive on which your data resides, choosing "Configure Shadow Copies". If you just select "Enable" on the resulting dialog, it will use the defaults: store shadow copies on the same volume as the data itself, with some default size limits. I much prefer highlighting the drive, then clicking "Settings" to choose a drive on which to keep the backups and configure maximum sizes (when you hit the limit, it starts deleting shadow copies, oldest first...so you've got a kind of moving window of times you can roll files back to). For a couple of reasons, it's best to locate the shadow copies on a different drive than the data itself: performance is far better on separate (physical) drives, and if the data drive stops working, the shadow copies don't die along with the data.

Once the options are set, click OK to return to the Shadow Copies dialog, then "Enable" the shadow copies. It will take an initial snapshot right away, then proceed as scheduled from there on out.

(By the way, if you ever want to stop making new shadow copies but not erase the shadow copies you've already made, don't click the "Disable" button [which deletes all existing shadow copies] - instead just delete the scheduled task that captures the copies - the "Next Run Time" will change to "Disabled", but the shadow copies will still remain available for opening or restoring).
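Also worth knowing: vssadmin.exe gives you a commandline view of all this, and even lets you take an extra snapshot on demand - say, right before a big reorganization of a share. This assumes your data lives on D:, so adjust as needed:

```batch
rem Show the existing shadow copies for the data volume.
vssadmin list shadows /for=D:

rem Take an additional snapshot right now, outside the regular schedule.
vssadmin create shadow /for=D:
```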

Easy, isn't it? And it's a GREAT tool!

Serving files: DFS

DFS (Distributed File System) is an incredibly useful set of services for setups like mine. If you only have one server in one office you might not need it, but for anything more than that, I really do recommend you look into it and utilize it.

At its heart, DFS is basically just a list of shared folders available on your network. But the list is presented as a single shared folder with sub-folders, and those sub-folders are the shares you want to list. That means that instead of mapping a different drive letter for each share, you can map just one drive letter for the "list" and access the "real" shared folders like sub-folders.

Say you have three file shares your users need:
\\docserver\accounting
\\docserver\files
\\docserver\company


Instead of your users having to remember the paths (which would change if you ever had to replace a server) or having to map three different drive letters to the three shares, you could use DFS to publish the folders under the shared folder (which is called a "DFS Root") \\domain.com\shares. They could then map a single drive letter (let's say Z:) to that share, and the three shares just become sub-folders (called "DFS Targets"):
z:\accounting
z:\files
z:\company


Easy, right? Well, there are a lot of implications to the technology.

For one, since the shares are just published by DFS, the actual shared folders don't all have to be on the same hard drive...or even the same computer! Also, and most powerfully, each target can actually map to MULTIPLE identical shares on different computers. Say you had \\docserver01 and \\docserver02 and they each had an "accounting" share on them; z:\accounting can actually point to BOTH of these shares. The original idea was that if you had to reboot docserver01 for some reason, people can keep working on the share since docserver02 would still be available...they wouldn't even know one of the target computers was offline.

Of course, in that case, you have to make sure that each of those shares remain identical at all times - which is why DFS also includes replication.

DFS is really ideal for offices like ours with multiple locations, because I can have a full copy of all our data on each server (that is, in each office). No matter which office a user is in, they can connect to the share (since the root of the share is the domain, rather than a specific computer), and thanks to your Active Directory sites, DFS will know which share is in the same location and point the user to that server, rather than forcing them out over the VPN/WAN link to the "other" office. Also, thanks to DFS replication, as soon as a user saves a file in one office, it's replicated to the other office, so everyone sees the same data, all the time. In a lot of ways it also serves as an off-site backup for each office. Even if one office burns down, all the data has been replicated to the other office (which is presumably not burning down at the same time), so no data is lost. I think the only thing it doesn't protect against is someone actually gaining access to your network and trashing your files (which would, of course, get replicated to both machines)...but that's what real backups are for, right?

OK, now that you know WHY we're using DFS, I'll get into the how.

Open up your "Server Manager" window. Right-click "Roles" and choose "Add Roles". DFS is part of "File Services", so select that and choose "Next". For the role services you'd like to install, choose "File Server", "Distributed File System" (which will auto-select both of the entries within it), and "Windows Server 2003 File Services" along with its "Indexing Service".

Now it will ask if you want to create a DFS namespace right away, which you can do...but I had a lot of prep work to do first, so I chose "Create Later", then clicked "Install".

It's possible, if this is your first venture into servers, that you don't have a huge data pool you need to bring forward. If that's the case, you can just create your DFS share right now and set up all the structure as you go, knowing that as you (or your users) do add data it will get replicated everywhere you tell it to.

But it's more likely that you've already got a bunch of data you need to make available. Depending on exactly what form it's in currently, you may have a lot of work as I did.

Your main task is to get all your data where you want it to be. It may be on other servers in your office (I'm pretty sure the DFS targets have to be hosted by actual Windows Server OSes, not normal client Windows machines), or maybe you want to move it all on to your new server.

My biggest challenge was that, as our old Server 2003 servers died, they stopped replicating (our files just ended up getting too big for the old technology to handle), so I had to manually create a single pool of data representing all the latest files, drawing from two existing servers (in a different domain, of course), each of which might have newer files than the other scattered throughout the shares.

I ended up using Robocopy quite a bit - at the commandline, entering lines like
robocopy \\oldsvr01\share1 \\oldsvr02\share1 /l /mir /r:0 /ndl

then reversing the order of \\oldsvr01 and \\oldsvr02. Each run gave me a list of the files that were newer on the first-named server than on the second. I figured out which list was longer, then used a command like
robocopy \\oldsvr01\share1 \\newsvr01\share1 /r:0

to get those files into place on the new server. Unless there were exceptional cases calling for a manual copy-n-paste of one file or another, I then used a command like
robocopy \\oldsvr02\share1 \\newsvr01\share1 /e /r:0 /xo /xl
to copy just the files that were newer on oldsvr02 into the new server.
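Put together, the whole process looked roughly like this (server and share names are placeholders, and you'd repeat it for each share):

```batch
rem Pass 1: /l lists what WOULD be copied without copying anything;
rem /mir compares the full trees, /r:0 skips retries, /ndl omits directory lines.
robocopy \\oldsvr01\share1 \\oldsvr02\share1 /l /mir /r:0 /ndl

rem Pass 2: the same comparison in the other direction.
robocopy \\oldsvr02\share1 \\oldsvr01\share1 /l /mir /r:0 /ndl

rem Copy the "winning" server's tree to the new server.
robocopy \\oldsvr01\share1 \\newsvr01\share1 /r:0

rem Overlay only the files that are newer on the other old server:
rem /xo skips older files, /xl skips files that exist only in the source.
robocopy \\oldsvr02\share1 \\newsvr01\share1 /e /r:0 /xo /xl
```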

But hopefully you don't have to do any of that.

Once you've finally got all the latest files ready to go, there's just one more thing I want to mention. In my case, we've got two servers for two offices, but when I was setting them up, I had the luxury of having them both together in one office. So once I had all the files on one server, on the drives I wanted them on, I just copied them right over to the second server myself, rather than asking DFS to replicate all the files (we're well over 100GB right now) to a blank share. Windows Server versions after 2003 let you "seed" the data this way - when replication starts, DFS sees that the data is identical and doesn't re-send it over the network. Which is very nice; otherwise I'd be waiting for all of that 100GB to be sent to the other office over our fairly slow internet connection.

So, your data is all in place. Only one thing left to do: make sure the appropriate folders are shared and all permissions are correct.

In my case, I've got all of our data on one drive in each server. We have 8 shares, and they're all right in the root of the drive, with nothing else on the drive. So I actually go to the security settings for that drive and erase all permissions, then I go back in and set "Domain Users" and "System" to have "Full Control", and replace all permissions on all folders and files within the drive. (FYI, you have to give "System" full control if you want shadow copies, which we'll get to in the next article). You may want more fine-grained control over permissions, which is fine. Your DFS shares do also have the option of turning on "Access-Based Enumeration", which just means that it will actually hide any shares or folders that a user doesn't have permission to access.

Now I went through each of our eight folders I wanted to publish as shares. I turned on sharing and made sure that the share permissions were set to allow "Domain Users" to have "Full Control". One more thing I like doing: I add a dollar sign ($) to the end of the share name...so the share name might look like "docs$". That's just a little code that tells Windows not to display the share if someone browses to the server over the network. It's still available: if they type in the path in the address bar it'll open, they just can't double-click to open it from a list.

One more share you need: an empty folder which will serve as the "root" of your DFS share. This can be anywhere; you won't be putting any files into it. The only oddity is that if it's mapped to a drive letter on the client machine, the "free space" displayed for the drive is whatever free space there is on the drive where the root is hosted...which probably doesn't have anything to do with the space actually available for the data in the DFS shares. The share name here should be the same as the "root" you give the DFS share (see below). Don't put a dollar sign at the end of this share, since it's the one you want to actually be visible, accessible, and used.

Got all your shares set up? Great. Time to get started!

Open up your "Server Manager" window. Expand "Roles", "File Services" and "DFS Management". Right-click "Namespaces" and choose, "New Namespace".

(FYI - and this single fact is half the reason I wanted to publish this blog - the information available about DFS on Foundation Edition is confusing and contradictory. Despite what anything you read implies, I'm here to tell you, and I've tried it: YOU CAN MAKE AS MANY DOMAIN-BASED NAMESPACES AS YOU WANT WITH FOUNDATION EDITION. You are limited to a single stand-alone namespace (which looks like \\server\root rather than \\domain.tld\root, and is unavailable if that single server is turned off), but you can have multiple domain-based namespaces. You would not believe what it took for me to get that answer.)

Start by providing one of your domain controller names as the "namespace server". It doesn't matter which one; we'll add the other one to the identical configuration in a moment.

Now give the name of the root you want. This will look like the share name, if you're used to normal file sharing. So if it's mapped to Z: on users' machines, your domain is "domain.com" and you call it "Files", the "name" of the drive will look like:
Files (\\domain.com) (Z:)
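On a client machine, mapping that drive is then a one-liner (using the example namespace name above):

```batch
rem Map Z: to the DFS root, and remember the mapping across reboots.
net use Z: \\domain.com\Files /persistent:yes
```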


If you don't have a share on the machine that's the same name as the root name you give it, it will prompt you to create a new share. You probably want to click "Edit Settings" and make sure it has the permissions you want it to have.

The next window will ask whether you want to create a Domain-based namespace or a Stand-alone namespace. There's almost no reason not to choose Domain-based here, and I like the features you get if you also "Enable Windows Server 2008 mode".

The next steps will just confirm your settings and create the root.

Now we want to add your other server as a root server too, so that either can go down without any impact on your file availability. Under "Namespaces" in the "Server Manager" window, you'll now see your DFS root. Right-click it and choose, "Add Namespace Server". Give it the name of your other DC, and make sure that (if you haven't already made the root share on the second server) the shared folder it is going to create has all the right settings.

Now you're ready to start adding the targets (which will look like sub-folders of your DFS share). Right-click your DFS root again and choose, "New Folder".

It will ask you for the name of the folder. Call it whatever name you want your users to see as a sub-folder of the DFS root...this one doesn't have to be the same as the share name. Then, choose to "Add" a folder target, and browse to the appropriate one of the shares you created earlier with your data - the ones that end in dollar signs. If you have multiple shares, on whatever servers, that should be identical, add those too.

If you add multiple servers, it will then ask you if you want to enable replication on the shares (you do). Set that up however you'd like, but do pay special attention to the fact that you can tell the replication how much bandwidth to use for replication at any time of any day of the week. That might be interesting to you if your servers share bandwidth, as mine do, with a VoIP system (for instance)...throttling replication back to a smaller amount of bandwidth will keep it from breaking up your voice traffic.

One more thing I like to do with multiple sites: Right-click the DFS root and go to properties. On the "Referrals" tab, I change the "ordering method" to "exclude targets outside of the client's site". That ensures that, no matter what, the clients will not be directed to open any files across the slow VPN/WAN connection. This should be unnecessary, due to the site costing and transports you set up earlier, but at least in Server 2003, I had some issues with clients getting the wrong referral, resulting in very poor performance.

WOW, that was a long one! But DFS is awesome - you'll be glad you've got it.

Setting up Sites (geographical locations)

OK, so you've got a domain. Now you need to configure it to get it all set up the way you want it. A lot of this will depend greatly on how you're going to be setting up your domain, but here's what I've done. Adjust or ignore any of my posts labeled "4. Domain configuration" as needed for your situation.

The first thing we'll do is set up what Active Directory calls "Sites". FYI, this has NOTHING to do with websites. Since I've got two physical office locations, I'm going to set up two different geographical "sites" within Active Directory, so I can configure how they talk with each other. This isn't necessary if you have only one office.

Open your Server Manager. You can either do that by closing the "Initial Configuration Tasks" window (that action causes the Server Manager to open by default), or clicking the "Server Manager" button (which should be the first pinned icon in the task bar). You can also find it in the start menu.

This is a truly useful window, a one-stop shop of sorts where you can get all kinds of information about what your server is doing and configure it. So on the left side of the window, expand "Roles" if it isn't already. One of the entries below it is "Active Directory Domain Services". Click that, and the right side populates with event notifications from the last 24 hours, a list of services associated with the role, and even suggestions for what to do next, following best practices.

For now though, just go back to the left side of the window, and keep drilling down, expanding "Active Directory Sites and Services", "Sites" and "Inter-Site Transports" in turn. Now, under "Inter-Site Transports", click "IP". The right side of the window changes to show you a single item, probably called, "Default_IP_Site_Link". This item represents the internet connection between the servers in your different locations...there are all sorts of properties you can apply to it to govern how the servers use that link.

However, that name isn't very clear on what it is, so right-click on that and rename it to something that will be useful to you - something like "Inter-office WAN link" that actually tells you what it is. If you have several locations, you can even create multiple transports to really have fine-grained control on how they talk with each other, but I'll get back to that in a minute.

Once that's renamed, go back up a few levels on the left side of the window and click "Sites" under "Active Directory Sites and Services". Again, the right side of the window will show you the two "sub-folders" under "Sites" in addition to a single actual "Site" object. It's also named something useless like "Default_First_Site", so right-click on it and rename it to something better, like the name of the city your first location is in. Now right-click "Sites" on the left side of the window and choose "New" -> "Site" to represent your other office. Part of that process is choosing the site link for the new site - since there's only the one for now, just choose it. Repeat for as many offices as you have.

Now, go to the first site - the one you renamed. There's a "sub-folder" under that site called "Servers". You'll find your Domain Controller in here. If this is the site it is actually in, great. Otherwise, drag it out into the "Servers" sub-folder of the site it should actually serve. Come back and do this whenever you add a new Domain Controller.

If you only have two sites, this part is done now. But if you have three or more, you may want to configure each link separately. Maybe two of your sites are always online but the third is only online during business hours, for instance. To set up different rules between each of the different sites, go back to "IP" under "Inter-Site Transports" and right click to make a "New Site Link". Name it appropriately, then choose the two sites that link should govern. Then right-click the first link and remove any site that shouldn't be governed by that transport.

Now go through each of your site links...right-click them and choose "Properties". From here, you can set a schedule for which hours the servers can talk with each other over the link, assign a "cost" to each link, etc. Costing is an interesting idea that you may want to look into - even if you only have two sites - if you have multiple internet connections.

For instance, if you have one connection for normal web and VOIP traffic and a separate internet connection dedicated to the server traffic, you'd set up a site link for each connection but assign different COSTS to them. The one dedicated to the servers would get the lower cost, so it would be used primarily. But if that link went down, replication would try the higher-cost link to make sure the data still gets through.
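If the cost idea seems abstract, here's a tiny sketch of the rule the replication topology is effectively following - among the links that are currently up, the cheapest one wins. The link names and cost values below are made up purely for illustration:

```python
# Site-link cost selection, sketched: among available links,
# the lowest-cost one is preferred. Names/costs are hypothetical.
links = [
    {"name": "Dedicated server line", "cost": 100, "up": True},
    {"name": "Shared web/VOIP line", "cost": 400, "up": True},
]

def pick_link(links):
    """Return the cheapest link that's currently up, or None."""
    candidates = [l for l in links if l["up"]]
    return min(candidates, key=lambda l: l["cost"]) if candidates else None

print(pick_link(links)["name"])  # Dedicated server line
links[0]["up"] = False           # simulate the dedicated line going down
print(pick_link(links)["name"])  # Shared web/VOIP line (the fallback)
```

The real topology calculation in Active Directory is more involved than this, but the lowest-cost-available idea is the heart of it.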

Anyway, there's one more thing to do: tell it how to figure out which site a computer is in automatically. Each physical location is probably using its own subnet of IP addresses, assigned by the DHCP server in that office, so we tell each computer to look up which site its IP address belongs to whenever it asks for an address - that's how it knows where it is physically located.

Back up a level again and click on "Subnets" under "Sites". Right-click it and choose "New Subnet". Now use network prefix notation to tell it which range of addresses belongs to which site. For instance, 192.168.1.0/24 means any address in the 192.168.1.x range.
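If network prefix notation is new to you, Python's standard ipaddress module is a quick way to sanity-check which addresses a given prefix actually covers (the office subnets here are hypothetical):

```python
import ipaddress

# Two hypothetical office subnets, in network prefix notation.
office_a = ipaddress.ip_network("192.168.1.0/24")  # 192.168.1.0 - 192.168.1.255
office_b = ipaddress.ip_network("192.168.2.0/24")

addr = ipaddress.ip_address("192.168.1.57")
print(addr in office_a)        # True  - this machine maps to office A's site
print(addr in office_b)        # False
print(office_a.num_addresses)  # 256 - a /24 covers 256 addresses
```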

As a side note, sites are also VERY useful for DFS shares, which we'll get to later...and this time, it's for the client computer's benefit. So it really is worth it to get this set up.

Friday, November 19, 2010

The Biggie role: Active Directory

Congratulations! Your server should be all ready to be promoted to the first Domain Controller, thus creating your network domain!

To start, go to that "Initial Server Configuration" window and choose "Add Role". This time, select "Active Directory Domain Services".

It will give you some good general information and probably tell you there are some .NET Framework components you need to install. Let it do so. What it's actually doing is installing the installation files for Active Directory...it's not actually setting up the domain yet. One of those pieces of information it gave you was that once this is done, we'll have to run dcpromo.exe.

So, once that wizard finishes, click Start and type in "dcpromo" without the quotes, then hit Enter.

Now it asks you what domain you want to create (note that with Foundation edition, you can't make it a sub-domain - it has to be the root of your domain), then it uses DNS to go out and find who is "in charge" of that domain name. Since you've configured the server to use itself as the DNS server to ask, and you've also told it to respond to any requests for information about your domain name with information pointing to itself, it gets the answer back: "I am!". At that point, it allows you to create the domain and you're off and running. There are a few more questions it will ask you (passwords, etc), but you should be able to walk through the wizard fairly easily. Once it's done, it will naturally require a restart, after which your domain has been created.

First role: DNS

For starters, you should probably review the DNS discussion I had in the Preparation section...I'm going to be a little light on explanation here. But this is how to set up your server to run DNS in order to later run Active Directory (that is, be a Domain Controller).

First, you need to give your server a static IP address - click the "Configure Networking" link in the "Initial Server Configuration" window and a new screen will open up with all your network cards in it. Right-click your NIC and choose "Properties". Now highlight the Internet Protocol Version 4 (TCP/IP v4) item and click the "Properties" button. Fill in your static IP address, subnet mask and default gateway, then point the primary DNS server to the same address as your static IP address with no secondary DNS server.

If you try to go to any websites, you'll probably notice that you can't seem to get there - all the domains cannot be found. No worries, we're about to fix that.

In the "Initial Server Configuration" window, click "Add Role". Read through the first window, then you can probably check "Don't show this again" and continue. The role you want to add first is "DNS Server". Once that's done, your browser will be able to find all your websites again (but seriously, don't make it a habit of surfing on your server).

Now, click Start, expand "Administrative Tools" and open "DNS".

At the top of the tree on the left side, you'll see your server. Expand that (if it's not already) and you'll see, among other items, "Forward Lookup Zones". Right-click that and choose "New Zone".

In the wizard that opens, it's first going to ask for the type of zone. We're going to set up a Primary Zone (that is, this server will hold the master copy of the zone). When it asks for the zone name, give it the same domain name as you'll use for your network, including .com or .local or whatever. It will prompt you to name the new file for storing the configuration; the default name should be fine.

Now it will ask whether you want to allow dynamic updates. This can be a nice feature if you want it, but it's totally optional. If you allow dynamic updates, then whenever one of the computers on your network gets assigned an IP address, it will register that address with your DNS server automatically. That way, if your domain is companyname.com and Bob's computer is named "bob", you know that, from within your network, you'll always be able to reach his computer at the address "bob.companyname.com".

The wizard will warn you that there's a security risk in allowing both secure and non-secure dynamic updates, but we'll fix that once we're running a real domain, so just ignore the warning for now and allow the updates. Click "Finish" to close the wizard.

Great. Now when you go to create your domain, your server will be asking itself for permission, which it will grant. You've just nipped the biggest headache with domains in the bud!

Now, if you've read my "Ahead-of-time Preparations" entry about DNS Configuration, you know that if your website and/or e-mail addresses are the same name as your network domain (and the DNS zone you just set up), you're the only person in the world who can't communicate with those servers. Ironic, huh? Fortunately, it's an easy fix.

You need to get the DNS entries on the "real" nameserver for that public domain, and copy them into your private DNS server configuration. The big ones to look for are, of course, www and any MX-type records (which are the e-mail servers).

You don't have to copy any SOA records, but all the "A" type records, "CNAME" type records (www should be one of these) and "MX" type records are going to be the important records to copy. Note that, when setting records on the root of the domain itself (mostly the "A" type records), you just leave the "Name" field blank.
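To make that copying concrete, here's the shape of what you're transcribing, reduced to a little Python - the records, addresses and hosts below are all made up, but note how a blank name means the record answers for the root of the domain itself:

```python
# Hypothetical records copied from the public nameserver for
# companyname.com into the private DNS server. A blank name
# means the record sits on the root of the domain itself.
records = [
    {"name": "",    "type": "A",     "value": "203.0.113.10"},
    {"name": "www", "type": "CNAME", "value": "companyname.com."},
    {"name": "",    "type": "MX",    "value": "mail.companyname.com."},
]

def fqdn(record_name, zone):
    """Full name a record answers for; blank name = the zone root."""
    return zone if record_name == "" else record_name + "." + zone

for r in records:
    print(fqdn(r["name"], "companyname.com"), r["type"], r["value"])
```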

Begin configuration

Each time you boot, a window will be waiting for you, called "Initial Configuration Tasks". Start at the top:

- Windows should already be activated, so you can skip that one.
- You probably need to change your time zone...this is really important for computers on a domain, because domain authentication breaks if a machine's clock drifts more than a few minutes from the Domain Controller's.
- It's a good idea to change the computer name to whatever you're going to call it - something like DC01 (for Domain Controller). This will require a restart.
- Now, enable Automatic Updates, then check for and install any critical updates. I also installed all the optional updates that weren't a "best-practice checklist"...I'll come back to these later. This will likely require a restart as well.

Getting to the desktop

Finally, you're ready to boot up your new server!

Make sure you've got a keyboard, mouse, monitor cable and network cable plugged in.

Start the server and wait through the boot screens and the Vista-era black-screen-with-progress-bar splash (why doesn't it use the Windows 7-era one?), and the OOBE (out-of-box experience) part of setup will start. Fortunately, this is very short and easy:
  1. Accept the EULA(s)
That's it.

Once it gets to a login screen, click the "Administrator" username and it will tell you that you must change your password. Type your chosen password twice, and click the arrow to get in to your new desktop. That was easy, wasn't it!!?

As a side note, I went into my BIOS and disabled PXE boot because it was adding unnecessary time to my boot sequence. Again, this is on a Dell PowerEdge T110. If you don't know what PXE boot is, you probably don't need it, especially on a server preinstalled with Foundation.

DNS Setup

Many of the services we'll be setting up work best in a "domain" environment, which is why I'm going to be making my new servers Domain Controllers, among other things. A domain NAME is something like "google.com", and you can actually buy a public name like that ($10/year fee at most places), but you don't have to. If you buy a public domain name, you can not only use it to set up a domain for your computers to be managed by your servers, but also to have a website and e-mail, which are the only things most people think domains are for.

BUT!!! If you already have a website and/or e-mail solution (maybe gmail and a free blog, for instance), there's no need to actually buy a public domain address just to run your network - you can call it whatever you want, and don't even need a dot-anything at the end of the name (though many times in this case, I see people actually name the domain something like "domain.local").

Now, no matter what, when you go to create your domain, your first domain controller is going to ask the nearest DNS (Domain Name System) server for the address of the authoritative server for that domain name - to make sure it's allowed to create that domain. One of the required bits of information when you register a domain name is the address of a name server, which is a DNS server. That DNS server holds the master records that tell other machines what address to translate your domain name to, including the address of the authoritative server. If you've purchased a domain name, it is possible to set your server as the authoritative server for that domain, meaning that anyone asking for an address in your domain will actually get the answers from your server, whether they're asking from a machine within your network or from somewhere out in the world. The traffic that could potentially generate might be too much for your servers which, if they're running Foundation, are pretty low-end after all.

It's far better to let your registrar's own name servers handle all the requests from the outside world. But maybe you can see the problem building up...if your servers aren't the authoritative name servers for the domain, how can you get permission to set them up as domain controllers? An even bigger problem arises if you don't buy a public domain name at all - if your server starts asking other servers who is in charge of that non-existent domain name, it's not going to get any answers.

The solution for both problems is the same...you "trick" your servers. What you actually do is set your server up as a DNS server, just for handling requests within your own network, and give it its own address as the address of the authoritative server. Finally, you tell it to use itself as its own DNS server to get answers from. That way, when it asks its DNS server (itself) for the address of the authoritative server, it gets its own address back, and grants itself permission to create a domain.
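Here's a toy model of that circular lookup - the zone name and address are invented, but it shows why the server always hears "I am!" back:

```python
# Toy model of the DNS "trick": the server's own zone data names
# itself as the authoritative server for the domain. The address
# and zone name are made up for illustration.
SERVER_IP = "192.168.1.5"
zone_table = {"companyname.local": SERVER_IP}  # "who's authoritative for X?"

def ask_dns(query, dns_server_ip):
    """The server asks its configured DNS server - which is itself."""
    assert dns_server_ip == SERVER_IP  # it's pointed at its own address
    return zone_table.get(query)

answer = ask_dns("companyname.local", SERVER_IP)
print(answer == SERVER_IP)  # True: "I am!" - permission to create the domain
```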

I know it seems like a security risk that you can just "trick" them in this way, but it's really not. The only computers in the world that will ever ask your server to translate domain names are computers configured to use your server as their DNS server - and nobody is going to do that, of course, except the computers on your network. So even if you told your server that it's authoritative for the domain microsoft.com and set up a domain in that name, the only computers that would think your server is microsoft.com are your own users'. Everyone else would still get info from the actual microsoft.com name server. The result would be that your users couldn't communicate with the real microsoft.com computers (e-mail, web site, etc), but nobody else would be affected.

One final problem for those of us whose network domain uses the same name as our website and/or e-mail domain. As I said, all the computers in your network will be using your server to translate DNS addresses rather than the "public" nameservers listed in your domain name registration. So, ironically, you end up being the only users in the world who can't get to your own website.

The answer to this one, fortunately, is easy enough: we'll just copy all the DNS records from the public nameservers for your domain onto your own server. That way, they get the same IP Address for the website-hosting computer from within your network as everyone else.

So...that's all the background out of the way. What should you do ahead of time?
  1. Register your public domain name, if you want to
    • I do recommend allowing the registrar to host the public name servers
    • If you don't already have e-mail and a website, most of them also offer these services for additional fees. Again, since we're using Foundation server, I recommend out-sourcing these services rather than trying to run them from your server...it's a lot easier, too! Personally, we use the free version of Google Apps which allows up to 50 e-mail addresses/users
  2. Find out how to get to your DNS listings for your domain name, as we'll need to copy these later
  3. If you're not going to register a public domain name, make sure that whatever you do decide to name your domain won't block your access to someone else...call it something that will never be a website, like "companyname.local" or just plain "companyname" without a dot-anything.
  4. The way I have my network set up (and the way I do recommend) is to have a router that serves as both a firewall for my network as well as a DHCP server, assigning IP addresses to any computer within my network that asks. Though you can configure your server to fulfill both these roles (if you have two network cards in your server) it does add a LOT of complexity to your setup. So make sure you've got a router that does both of these things, and that it's configured correctly. If you have multiple offices, you may want one that creates a VPN link between the offices as well...that's how mine is set up.

Wednesday, November 17, 2010

Capturing a disk image

My new servers have arrived! And happily, when I opened the boxes, I found I was mistaken about the lack of installation media in my previous post. However, I figure that it's always better to have too many options rather than too few in situations like this. Also, Dell usually includes a plain (although re-labeled) OS installation DVD (which I really appreciate), then on a separate DVD, includes all the drivers - and typically many more than are actually necessary.

To be sure, I'm glad they do it this way rather than not giving you a "pure" OS installation, but wouldn't it be nice to have an install of just the OS and the drivers your machine actually needs?

Basically, that's what this disk image will be.

So set up the server with a monitor and keyboard, then use a paper clip to open the DVD drive and pop in the WinPE and ImageX disk we made last time. Then push the drive closed again, and start up the computer.

Most computers will check for a bootable CD/DVD and prompt you to, "Press any key to boot from CD/DVD", but to be safe, check the POST screen to see if there's a key you can press to bring up a boot device selection menu. Either way, it should prompt you to press a key: do so, and you'll see a Windows 7-esque boot splash screen, followed finally by a console window over an "Aura" background.

Switch around from drive to drive to make sure you know where everything is. X: is usually a virtual "RAM disk" with some tools we won't use, and the physical disks usually start at C:. On these servers, C: is a recovery partition and D: is an empty data partition (I requested my single hard drive have an 80 GB partition for the OS, with the rest as a separate, empty partition). E:, when I asked for a directory of its contents, showed me the expected Windows, Program Files and Program Files (x86) directories, along with other directories such as "Dell", "Drivers" and "Hotfixes". Bringing up F:, then, showed me the contents of my actual bootable CD, including ImageX. Note that these drive letters don't relate at all to the letters the drives will actually have from within Windows.

So, once you find your system partition (E: in my case) and your optical drive (F: in my case), switch to the optical drive, then run ImageX. For instance, here's the command I used:

imagex /compress maximum /check /flags "Foundation" /capture e: e:\OOBE.wim " Foundation OOBE" "Dell Windows Server 2008 R2 Foundation (Pre-OOBE)"

"Foundation", " Foundation OOBE" and "Dell Windows Server 2008 R2 Foundation (Pre-OOBE)" are all strings you can modify to help you identify the desired disk image if you ever go to restore it. They represent the Edition, the Name of the image, and a description of the image, respectively.

On my servers, it took about 10 minutes, and the image file is about 2.3 GB (2.45 billion bytes)

Now eject the CD and "exit" WinPE. Later on, you can, of course, burn the WIM file you created in the root of the system drive to a DVD if you want to get it off the physical drive (always a good idea).

Tuesday, November 16, 2010

WinPE and ImageX

The fact that Foundation edition is OEM-only has a few implications...among these, the fact that you can neither install a trial version to test it out before investing, nor (apparently) obtain installation media, even once you do buy it.

Now, I've had great success with Windows 7, but I'm not comfortable assuming that I'll never need to reinstall these servers, especially now that I can "seed" replication groups with good data.

So before the servers arrive, I've gone through the process of re-familiarizing myself with various parts of the Windows AIK (Automated Installation Kit).

My primary goal is to get a bootable CD that I can boot the servers from ON FIRST BOOT, to capture an image of C:\ while it's still prepped for the OOBE. That way, if (when) I do need to reinstall in the future, I can wipe out C:\ and push this image back onto it, allowing me to start over with initial setup again. Essentially, I'm creating my own installation media, though I won't be including it in the normal setup routine.

The tool Microsoft has developed for capturing a disk image is called ImageX, and the bootable CD environment it will run in is called WinPE. Here's how to make the disk you need:
  • Install WAIK
  • Create a WinPE Disk (documentation)
    • Start->All Programs->Microsoft Windows AIK->Deployment Tools Command Prompt
    • copype.cmd amd64 c:\winpe_x64
      • Windows Server 2008 R2 is 64-bit only
    • copy c:\winpe_x64\winpe.wim c:\winpe_x64\ISO\sources\boot.wim
    • copy "c:\Program Files\Windows AIK\Tools\amd64\imagex.exe" c:\winpe_x64\iso
    • oscdimg -n -bC:\winpe_x64\etfsboot.com C:\winpe_x64\ISO C:\winpe_x64\winpe_x64.iso
    • exit
  • Burn the CD
    • File is at c:\winpe_x64\winpe_x64.iso
    • On Windows 7, you don't need a separate ISO burner
    • Label it "WinPE and ImageX"

Why I'm doing this

As the first post in this blog, I thought I'd explain what I'm doing with it and what I intend for this to grow into.

First, about our company:

We're a pretty small company (in the range of 10 employees, though a few positions come and go from time to time), but we have locations in two different cities. It's an architecture firm, so we need lots of storage space, and our work can be pretty graphics-intensive.

Five years ago, we finally got our first real servers, replacing a haphazard system of locally-stored files and networked computers in a workgroup. I'd worked with DFS during college (since I enjoy this stuff as a hobby, I assisted the network administrator for the College of Architecture for a few semesters), so I knew that was the best way to provide everyone with a single place for all our files, and to ensure that everyone in both offices had the same data.

Alas, we purchased our servers about 6 months before Server 2003 R2 was released, and the original version of Server 2003 was missing some features I would come to mourn not having. Highest among these was the ability to "seed" a replication group...when it came time to reformat the servers and start over (remember, 2003 was based on XP-era technology), I had to actually erase all our files and re-sync them afresh from the other server once it was back up. Usually that's but an annoyance, but in this case the files were synced over a relatively slow WAN connection, and it would take months to fully propagate - during which time the office with the reinstalled server would be working off the remote server...very slowly. The only time I actually did this, I ended up physically taking the first server to the second office for a weekend to let it sync locally, then brought it back up and plugged it back in at its own office once that was done. Yikes.

There are a number of issues that have come up besides that ever-present cloud over me, but finally we've reached the point where the servers have stopped syncing our data with each other at all, so it's time to replace them.

I've evaluated a number of options - SANs (Drobo), cloud computing (Live, Azure), thin clients and alternate Server editions (SBS), etc to try and keep costs down as much as possible in this pretty tight time, but I kept coming back to on-site full-meal-deal Windows Server boxes. The glimmer of hope I had was that the far less expensive Foundation edition might meet my needs: Active Directory and DFS. Not much good documentation exists for Foundation, but I've spent several weeks working with a very helpful team over at Dell and finally came to the conclusion that it will, in fact, meet my needs - my total bill thus coming to about 1/3 what it otherwise would have.

There aren't many businesses that fit in the crosshairs of the Foundation demographic (fewer than 15 users, but with needs of a real Server operating system), which is doubtlessly why there's so little good information on it specifically, so I'm hoping that by documenting my "travels" through this obscure system, there might be someone else who is helped by my experience, in addition to being a good start to my documentation of our new network infrastructure.

Tomorrow my new servers arrive, and thus begins the adventure. Hopefully I'll emerge victorious on the other side.