At its heart, DFS is really just a list of shared folders available on your network. But the list is presented as a single shared folder with sub-folders, and those sub-folders are the shares you want to publish. That means that instead of mapping a different drive letter for each share, you can map just one drive letter for the "list" and access the "real" shared folders as if they were sub-folders.
Say you have three file shares your users need:
\\docserver\accounting
\\docserver\files
\\docserver\company
Instead of your users having to remember those paths (which would change if you ever had to replace a server), or having to map three different drive letters to the three shares, you could use DFS to publish the folders under a single shared folder (called a "DFS Root"), say \\domain.com\shares. They could then map a single drive letter (let's say Z:) to that share, and the three shares just become sub-folders (called "DFS Targets"):
z:\accounting
z:\files
z:\company
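For example, a user (or a logon script) could map that single drive letter with the plain old net use command. A quick sketch, assuming the namespace really is \\domain.com\shares as above:
net use Z: \\domain.com\shares /persistent:yes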
Easy, right? Well, there are a lot of implications to the technology.
For one, since the shares are just published by DFS, the actual shared folders don't all have to be on the same hard drive...or even the same computer! Also, and most powerfully, each folder in the list can actually point to MULTIPLE identical shares (targets) on different computers. Say you had \\docserver01 and \\docserver02 and they each had an "accounting" share on them; z:\accounting can point to BOTH of those shares. The original idea was that if you had to reboot docserver01 for some reason, people could keep working on the share since docserver02 would still be available...they wouldn't even know one of the target computers was offline.
Of course, in that case, you have to make sure that each of those shares remain identical at all times - which is why DFS also includes replication.
DFS is really ideal for offices like ours with multiple locations, because I can have a full copy of all our data on each server (that is, in each office). No matter which office a user is in, they can connect to the share (since the root of the share is the domain, rather than a specific computer), and thanks to your Active Directory sites, DFS will know which target is in the same location and point the user to that server, rather than forcing them out over the VPN/WAN link to the "other" office. Also, thanks to DFS replication, as soon as a user saves a file in one office, it's replicated to the other office, so everyone sees the same data, all the time.

In a lot of ways, it also has the benefit of being an off-site backup for each office. If one office burns down, for instance, all the data has been replicated to the other office (which is presumably not burning down at the same time), so no data is lost. I think the only thing it doesn't protect against is someone actually gaining access to your network and trashing your files (which would of course get replicated to both servers)...but that's what real backups are for, right?
OK, now that you know WHY we're using DFS, I'll get into the how.
Open up your "Server Manager" window. Right-click "roles" and choose "Add new role". DFS is part of "File Services", so select that and choose "Next". For which sub-roles you'd like to install, choose "File Server", "Distributed File System" (which will auto-select both of the entries within it), and Windows server 2003 file services and indexing services.
Now it will ask you if you want to create a DFS namespace right away, which you can do...but I had a lot of prep work to do first, so I chose to create it later, then clicked "Install".
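By the way, if you'd rather install the role from the command line, Server 2008 ships with servermanagercmd.exe. I did mine through the GUI, so treat this as a sketch and run the query switch first to confirm the exact role service IDs on your build (FS-DFS is what I believe the Distributed File System service is called):
servermanagercmd -query
servermanagercmd -install FS-DFS -allSubFeatures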
It's possible, if this is your first venture into servers, that you don't have a huge data pool you need to bring forward. If that's the case, you can just create your DFS share right now and set up all the structure as you go, knowing that as you (or your users) do add data it will get replicated everywhere you tell it to.
But it's more likely that you've already got a bunch of data you need to make available. Depending on exactly what form it's in currently, you may have a lot of work as I did.
Your main task is to get all your data where you want it to be. It may be on other servers in your office (folder targets can technically be any SMB share, but if you want DFS replication, the machines hosting them need to be running an actual Windows Server OS, not a normal client version of Windows), or maybe you want to move it all onto your new server.
My biggest challenge was that, as our old Server 2003 servers died, they stopped replicating (our files just ended up getting too big for the old technology to handle), so I had to manually create a single pool of data representing all the latest files, drawing from two existing servers (in a different domain, of course), each of which might have newer versions of files than the other scattered throughout all the shares.
I ended up using Robocopy quite a bit at the command line, entering lines like this (the /l switch just lists what would be copied without actually copying anything):
robocopy \\oldsvr01\share1 \\oldsvr02\share1 /l /mir /r:0 /ndl
Then running the same command with \\oldsvr01 and \\oldsvr02 reversed gave me a list of all the files that were newer on the first server than on the second. I figured out which list was longer, then used a command like
robocopy \\oldsvr01\share1 \\newsvr01\share1 /r:0
to get those files into place on the new server. Aside from a few exceptional cases that called for manually copying one file or another, I then used a command like
robocopy \\oldsvr02\share1 \\newsvr01\share1 /e /r:0 /xo /xl
to copy just the files that were newer on oldsvr02 onto the new server.
But hopefully you don't have to do any of that.
Once you've finally got all the latest files ready to go, there's just one more thing I want to mention. In my case, we've got two servers for two offices, but when I was setting them up, I had the luxury of having them both together in one office. So once I had all the files on one server, on the drives I wanted them on, I just copied them straight over to the second server myself, rather than asking DFS to replicate all the files (we're well over 100GB right now) to a blank share. DFS replication on versions of Windows Server after the 2003 era lets you "seed" the data this way - when replication starts, it sees that the data is identical and doesn't re-send it over the network. Which is very nice; otherwise I'd be waiting for all of that 100GB to crawl to the other office over our fairly slow internet connection.
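If you're pre-seeding a second server the same way, robocopy is again your friend. A rough sketch, assuming the data lives in D:\Shares and the second server is reachable as \\newsvr02 (both names are just placeholders); the /copyall and /dcopy:t switches carry the timestamps and permissions along with the data, which helps DFS recognize the two copies as identical when replication starts:
robocopy D:\Shares \\newsvr02\d$\Shares /e /copyall /dcopy:t /r:0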
So, your data is all in place. Only one thing left to do: make sure the appropriate folders are shared and all permissions are correct.
In my case, I've got all of our data on one drive in each server. We have 8 shares, and they're all right in the root of the drive, with nothing else on the drive. So I actually go to the security settings for that drive and erase all permissions, then I go back in and set "Domain Users" and "System" to have "Full Control", and replace all permissions on all folders and files within the drive. (FYI, you have to give "System" full control if you want shadow copies, which we'll get to in the next article). You may want more fine-grained control over permissions, which is fine. Your DFS shares do also have the option of turning on "Access-Based Enumeration", which just means that it will actually hide any shares or folders that a user doesn't have permission to access.
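If you'd rather script that than click through the security dialogs, icacls can do the same job. A minimal sketch, assuming the data drive is D: and your domain is MYDOMAIN (swap in your own):
icacls D:\ /grant "MYDOMAIN\Domain Users":(OI)(CI)F /T
icacls D:\ /grant "SYSTEM":(OI)(CI)F /T
The (OI)(CI)F part grants Full Control that inherits down to folders and files, and /T applies it to everything already on the drive.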
Next I went through each of the eight folders I wanted to publish as shares. I turned on sharing and made sure that the share permissions allowed "Domain Users" to have "Full Control". One more thing I like doing: I add a dollar sign ($) to the end of the share name...so the share name might look like "docs$". That's just a convention that tells Windows not to display the share when someone browses to the server over the network. It's still available: if they type the path into the address bar it will open; they just can't double-click it from a list.
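The hidden shares can be created from the command line too. A quick sketch, assuming a folder at D:\docs and a domain called MYDOMAIN:
net share docs$=D:\docs /grant:"MYDOMAIN\Domain Users",FULL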
One more share you need: an empty folder which will serve as the "root" of your DFS share. This can be anywhere; you won't be putting any files into it. The only oddity is that if it's mapped to a drive letter on the client machine, the "free space" displayed for the drive is whatever free space there is on the drive where the root is hosted...which probably doesn't have anything to do with the space actually available for the data in the DFS shares. The share name here should be the same as the "root" you give the DFS share (see below). Don't put a dollar sign at the end of this share, since it's the one you want to actually be visible, accessible, and used.
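Again, for the command-line inclined, creating that empty root share might look like this (assuming you want the folder at D:\DFSRoots\Files and the root named "Files"; read access is plenty here since nobody actually stores files in the root):
mkdir D:\DFSRoots\Files
net share Files=D:\DFSRoots\Files /grant:Everyone,READ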
Got all your shares set up? Great. Time to get started!
Open up your "Server Manager" window. Expand "Roles", "File Services" and "DFS Management". Right-click "Namespaces" and choose, "New Namespace".
(FYI, and this single fact is half the reason I wanted to publish this post: the information available about DFS on Foundation Edition is confusing and contradictory. Despite what anything you read implies, I'm here to tell you, having tried it, that YOU CAN MAKE AS MANY DOMAIN-BASED NAMESPACES AS YOU WANT WITH FOUNDATION EDITION. You are limited to a single stand-alone namespace (which looks like \\server\root rather than \\domain.tld\root, and is unavailable whenever that single server is turned off), but you can have multiple domain-based namespaces. You would not believe what it took for me to get that answer.)
Start by providing one of your domain controller names as the "namespace server". It doesn't matter which one; we'll add the other one to the identical configuration in a moment.
Now give the name of the root you want. This will look like the share name, if you're used to normal file sharing. So if it's mapped to Z: on users' machines, your domain is "domain.com" and you call it "Files", the "name" of the drive will look like:
Files (\\domain.com) (Z:)
If you don't have a share on the machine that's the same name as the root name you give it, it will prompt you to create a new share. You probably want to click "Edit Settings" and make sure it has the permissions you want it to have.
The next window will ask whether you want to create a domain-based namespace or a stand-alone namespace. There's almost no reason you wouldn't want a domain-based namespace in this case, and I like the features you get if you also "Enable Windows Server 2008 mode" (that's what allows access-based enumeration on the namespace, among other things).
The next steps will just confirm your settings and create the root.
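For what it's worth, dfsutil.exe can create the namespace from the command line as well. I set mine up through the GUI, so double-check the syntax with dfsutil root /? before leaning on this, but I believe creating a Windows Server 2008 mode domain-based namespace looks roughly like this (DC01 and Files being whatever namespace server and root name you chose):
dfsutil root adddom \\DC01\Files v2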
Now we want to add your other server as a namespace server too, so that either one can go down without any impact on file availability. Under "Namespaces" in the "Server Manager" window, you'll now see your DFS root. Right-click it and choose "Add Namespace Server". Give it the name of your other DC, and (if you haven't already made the root share on the second server) make sure the shared folder it's going to create has all the right settings.
Now you're ready to start adding the targets (which will look like sub-folders of your DFS share). Back under "Namespaces", right-click your DFS root again and choose "New Folder".
It will ask you for the name of the folder. Call it whatever you want your users to see as a sub-folder of the DFS root...this one doesn't have to match the share name. Then choose to "Add" a folder target and browse to the appropriate share you created earlier with your data - the ones that end in dollar signs. If you have identical copies of that share on other servers, add those as targets too.
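There's a command-line route here as well. Again, I used the GUI, so verify against dfsutil link /? first, but adding a folder and its target should look something like this (using the hypothetical names from earlier):
dfsutil link add \\domain.com\Files\accounting \\docserver01\accounting$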
If you added multiple targets, it will then ask if you want to enable replication on them (you do). Set that up however you'd like, but do pay special attention to the fact that you can tell replication how much bandwidth to use at any time of any day of the week. That might matter to you if your servers share bandwidth, as mine do, with a VoIP system (for instance)...throttling replication back to a smaller slice of bandwidth will keep it from breaking up your voice traffic.
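Once replication is running, it's nice to be able to confirm that the servers have actually caught up with each other. The dfsrdiag tool that comes with the DFS Replication role can report the backlog between two members; the replication group and folder names below are placeholders (dfsradmin rg list will show you what the wizard actually named yours):
dfsradmin rg list
dfsrdiag backlog /rgname:"domain.com\files\accounting" /rfname:"accounting" /smem:SERVER01 /rmem:SERVER02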
One more thing I like to do with multiple sites: Right-click the DFS root and go to properties. On the "Referrals" tab, I change the "ordering method" to "exclude targets outside of the client's site". That ensures that, no matter what, the clients will not be directed to open any files across the slow VPN/WAN connection. This should be unnecessary, due to the site costing and transports you set up earlier, but at least in Server 2003, I had some issues with clients getting the wrong referral, resulting in very poor performance.
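If you ever want to check which target a particular client actually got referred to, dfsutil can dump the client-side referral cache. Run it on the client machine (the old Server 2003-era syntax was dfsutil /pktinfo); I'm going from memory here, so check dfsutil /? if it complains:
dfsutil cache referral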
WOW, that was a long one! But DFS is awesome - you'll be glad you've got it.