Types of Hardware Servers

Tower Servers

Tower servers look a lot like PCs. Each tower server is a standalone machine that is built into an upright case.

Tower servers are used mostly in smaller datacenters. Larger datacenters typically avoid the use of tower servers because of the amount of physical space that they consume and because they tend to be noisy.

Another disadvantage to using tower servers is that the cabling can get messy. Server racks and blade server chassis usually have features that are designed to manage cables, but tower servers have no such features.

Rack Servers

As the name implies, rack servers are servers that are mounted within a rack. The rack is of a uniform width and servers are mounted to the rack using screws. Each rack can accommodate multiple servers and the servers are typically stacked on top of each other.

Because racks are designed to accommodate standard-sized components, many hardware vendors offer rack-mountable networking components other than servers. For example, there are rack-mountable network appliances (such as hardware firewalls) and rack-mountable switches.

Rack mount components follow a form factor that is referred to as a rack unit. A standard rack mount server is referred to as a 1U server, meaning that it is one rack unit in size. A 2U server consumes two rack units of space within the rack. Some vendors also offer 4U and ½U servers. The larger form factors are usually used when the server needs to accommodate a large amount of storage.
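As a quick sanity check of these sizes, the arithmetic can be sketched in a few lines of Python (assuming the common convention that one rack unit is 1.75 inches tall and that a full-height rack holds 42U):

```python
# Toy rack-capacity calculation, assuming the common definitions:
# one rack unit (1U) is 1.75 inches tall, and a full-height rack is 42U.
RACK_UNIT_INCHES = 1.75
RACK_HEIGHT_U = 42

def servers_that_fit(server_height_u: int, rack_height_u: int = RACK_HEIGHT_U) -> int:
    """How many servers of a given U height fit in one rack."""
    return rack_height_u // server_height_u

print(servers_that_fit(1))  # 42 one-U servers
print(servers_that_fit(2))  # 21 two-U servers
print(servers_that_fit(4))  # 10 four-U servers (with 2U left over)
print(RACK_HEIGHT_U * RACK_UNIT_INCHES)  # 73.5 inches of usable mounting height
```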

Blade Servers

Like rack servers, blade servers also adhere to a standard size and mount inside a special “rack”. In the case of a blade server however, the rack is known as a chassis.

Blade servers tend to be vendor proprietary. You can’t, for example, insert a Dell blade server into an HP chassis.

Blade server designs are proprietary because, unlike a rack server, which is fully self-contained, a blade server lacks some of the components it needs to function. For example, blade servers do not have their own power supplies.

The blade server chassis is designed to accept various modular components, including the blade servers themselves. For example, a chassis might contain a power supply unit, a cooling unit, and a blade server. The actual chassis design varies from one vendor to the next, but most blade server chassis are designed to accommodate multiple power supplies, multiple blade servers, and a variety of other components (such as network adapters, storage modules, and cooling modules). With the exception of the cooling components, individual blade servers are mapped to the individual modules or components.


Kerberos: The basic protocol

The Kerberos authentication protocol is the default authentication protocol of Windows Server 2003. This section examines how the protocol works by breaking down the complexity of the protocol into five steps.

The first two excerpts provide important introductory information to consider while reading through the five steps. Step 1 then explains how Kerberos uses symmetric key cryptography to authenticate entities. Step 2 describes the three different entities the Kerberos protocol deals with and why a key distribution center (KDC) is necessary. Step 3 sheds light on the connection between the session key and the master key, and step 4 describes the two ways in which the KDC distributes the encrypted session keys to the user and the resource server. Finally, step 5 explains how the Ticket Granting Ticket addresses an important weakness in the protocol by limiting the use of the entities’ master keys.

The two excerpts at the end pull together the five steps and include a brief explanation of how Kerberos extensions relate to Windows 2000, XP and Windows Server 2003. Helpful diagrams are provided throughout the section to help readers visualize the various steps.


The following sections explain the basic Kerberos protocol as it is defined in RFC 1510. Those not familiar with Kerberos may be bewildered by the need for numerous diverse keys to be transmitted around the network. In order to break down the complexity of the protocol, we will approach it in five steps:

    • Step 1: Kerberos authentication is based on symmetric key cryptography.
    • Step 2: The Kerberos KDC provides scalability.
    • Step 3: A Kerberos ticket provides secure transport of a session key.
    • Step 4: The Kerberos KDC distributes the session key by sending it to the client.
    • Step 5: The Kerberos Ticket Granting Ticket limits the use of the entities’ master keys.
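To make step 1 concrete before diving in, here is a deliberately simplified Python sketch (not real Kerberos, and using a toy XOR keystream rather than a real cipher) of the core idea: a KDC that shares a long-term master key with each entity hands out a short-term session key, wrapping the server's copy inside a "ticket" that only the server can decrypt:

```python
# Toy sketch (NOT real Kerberos) of the idea behind steps 1-4: the KDC shares
# a long-term "master key" with each entity and distributes a short-term
# session key, sending the server's copy inside a "ticket" only the server
# can read. The XOR "cipher" here is purely for illustration.
import hashlib, hmac, secrets

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    stream = hashlib.sha256(key).digest()[: len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

client_master = secrets.token_bytes(16)   # shared between client and KDC
server_master = secrets.token_bytes(16)   # shared between server and KDC

# KDC: generate a session key and package a copy for each party.
session_key = secrets.token_bytes(16)
for_client = toy_encrypt(client_master, session_key)
ticket = toy_encrypt(server_master, session_key)   # opaque to the client

# Client: recover the session key and build an authenticator with it.
k_client = toy_decrypt(client_master, for_client)
authenticator = hmac.new(k_client, b"client-id|timestamp", "sha256").digest()

# Server: open the ticket, recover the session key, verify the authenticator.
k_server = toy_decrypt(server_master, ticket)
assert hmac.compare_digest(
    authenticator, hmac.new(k_server, b"client-id|timestamp", "sha256").digest()
)
print("authenticated via shared session key")
```

The point of the sketch is that the client and server never exchange their master keys; only the short-lived session key is shared, and only in encrypted form.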


Integrated Lights-Out, or iLO, is a proprietary embedded server management technology by Hewlett-Packard which provides out-of-band management facilities. The physical connection is an Ethernet port that can be found on most ProLiant servers of the 300 series and above.

iLO has similar functionality to the lights out management (LOM) technology offered by other vendors, for example Sun/Oracle’s ILOM, Dell DRAC, the IBM Remote Supervisor Adapter and Cisco CIMC.



iLO makes it possible to perform activities on an HP server from a remote location. The iLO card has a separate network connection (and its own IP address) to which one can connect via HTTPS. Possible options are:

  • Reset the server (in case the server doesn’t respond anymore via the normal network card)
  • Power-up the server (possible to do this from a remote location, even if the server is shut down)
  • Remote console (in some cases however an ‘Advanced license’ may be required for some of the utilities to work)
  • Mount remote physical CD/DVD drive or image
  • Access the server’s IML (Integrated Management Log)
  • Can be manipulated remotely through XML-based Remote Insight Board Command Language (RIBCL)
  • Full CLI support through RS-232 port (shared with system), though the inability to enter Function keys prevents certain operations

iLO provides some other utilities like virtual media (CD, floppy), virtual power and a remote console. iLO is either embedded on the system board, or available as a PCI card.
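As an illustration of the RIBCL interface mentioned above, the sketch below builds a reset-server request in Python. The element names follow HP's published RIBCL examples, but treat the exact payload and the HTTPS transport details (which vary by iLO generation) as assumptions rather than a definitive client:

```python
# Illustrative sketch of an XML-based RIBCL request to reset a server via iLO.
# Element names follow HP's published RIBCL examples; in practice the XML is
# POSTed over HTTPS to the iLO's own IP address (transport details vary by
# iLO generation, so that part is an assumption and is not shown here).
def ribcl_reset_server(user: str, password: str) -> str:
    return (
        '<RIBCL VERSION="2.0">'
        f'<LOGIN USER_LOGIN="{user}" PASSWORD="{password}">'
        '<SERVER_INFO MODE="write"><RESET_SERVER/></SERVER_INFO>'
        "</LOGIN></RIBCL>"
    )

payload = ribcl_reset_server("admin", "secret")
print(payload)
```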







VPN in Server 2012R2

A virtual private network, also known as a VPN, is a private network that extends across a public network such as the internet. It enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network.


A virtual private network can be installed and configured straightforwardly on Windows Server 2012 R2 Essentials by running the Set up Anywhere Access wizard and selecting the Virtual Private Network (VPN) option on the following screen.


If you want to know about Remote Web Access, or run through the sequential screens of Anywhere Access wizard, please visit this post.

When you choose to enable VPN using this wizard, the following roles/features get installed on the Essentials Server: Remote Access, DirectAccess and VPN (RAS), IP and Domain Restrictions, IIS Management Scripts and Tools, Network Policy and Access Services Tools, and Windows Internal Database.

You can also enable these roles/features from Server Manager or with PowerShell cmdlets; however, on Windows Server Essentials we recommend enabling them using the Set up Anywhere Access wizard.

It’s noteworthy that Windows Server 2012 R2 Essentials allows client machines to join the server without having to be inside the company network, using a feature called Remote Domain Join. So, if VPN is enabled on Server Essentials, you can connect a remote client to the local network via VPN, run the Connect wizard from the http://<servername>/connect or http://<domainname>.remotewebaccess.com/connect URL, and join the remote client to the server. The process is simple and straightforward.

Before discussing some common issues with VPN on Windows Server 2012 R2 Essentials, let us first glance through the default Routing and Remote Access (RRAS) settings. You can also find the specifics of these settings on TechNet.

Note: Server Essentials automatically manages the routing for VPN, and therefore the Routing and Remote Access (RRAS) UI is hidden on the server to prevent tampering with RRAS settings. As a result, to view, change, or troubleshoot the Remote Access settings, you need to install the Remote Access GUI and Command-Line Tools using Server Manager or the following PowerShell command:

Add-WindowsFeature RSAT-RemoteAccess-Mgmt

This feature enables the Routing and Remote Access console and the respective command-line tools to manage VPN and DirectAccess. Note that this feature is not required on the server unless you need to change the settings for VPN or DirectAccess.

Default Settings of VPN on Windows Server 2012 R2 Essentials

To check the default settings for the VPN, open Routing and Remote Access Manager. Right click server name, and select Properties.

On the General tab, IPv4 must be enabled:


The Security tab consists of the Authentication Methods… and SSL Certificate Binding:


The Authentication Methods should have Extensible authentication protocol (EAP) and Microsoft encrypted authentication version 2 (MS-CHAP v2) enabled. You can confirm it by clicking the Authentication Methods… button on the Security tab.


The SSL Certificate Binding section on the Security tab displays the certificate active for VPN. This also indicates that VPN is enabled over SSL, so you do not have to open any port other than 443.

Let’s move on to the IPv4 tab. By default the VPN clients are set to receive an IP address from DHCP, but you may need to change it to a Static address pool for troubleshooting purposes.


On the IPv6 tab, the options Enable IPv6 Forwarding and Enable Default Route Advertisement are selected by default.


The IKEv2 tab consists of the default options to control the IKEv2 client connections and Security Association expiration.


The PPP tab contains the settings for the Point-to-Point Protocol (PPP), which are as follows:


The Logging tab on the server properties page contains the level of logging enabled for Routing and Remote Access.


To enable additional logging for the Routing and Remote Access, select the option Log additional Routing and Remote Access information. Once this option is selected additional log files are created in the %windir%\Tracing directory that provide deeper insight to troubleshoot RRAS issues. Make sure to disable the additional logging once the troubleshooting is complete.

You may also gather and modify information for Remote Access from an elevated Windows PowerShell terminal. Here are some common commands:



  Get-Command -Module RemoteAccess   Displays a list of commands available in the RemoteAccess module
  Get-RemoteAccess                   Displays the configuration of VPN and DirectAccess (DA)
  Get-VpnAuthProtocol                Displays the authentication protocols and parameters set on the VPN
  Get-VPNServerConfiguration         Displays the VPN server properties

Here is a sample output:


You can look at the help file of each of these commands for a detailed description. Better yet, you can use the following command to export the help content of every command in the RemoteAccess module to a text file:

$(foreach ($command in (Get-Command -Module RemoteAccess)) {Get-Help $command.Name} ) | Out-File HELP.txt

What’s the Difference Between Remote Desktop Connection & Windows Remote Assistance?

What Is Remote Desktop Connection?

Remote Desktop Connection is a Windows tool that allows you to access and control a computer from a remote location.


There are a few things you need to know and set, in order to successfully establish a remote desktop connection to another computer:

  • The computer to which you will connect has to allow remote connections. This is set from the host computer’s System Properties.
  • You need to know the name or the IP address of the computer you want to connect to. If you’re trying to connect to a computer in the same LAN as you, you can use its name or IP address. In case you’re trying to connect to a computer over the Internet, you’ll need the IP address of the host computer. Using its name won’t work.
  • You have to know the credentials of an administrator account from the host computer, or the credentials of a standard user account that has been enabled for Remote Desktop connections.

Once you’ve connected to a remote computer, you will gain full control of it. That means you can use the remote computer as if you are in front of it. You can access any documents, run all programs, use any devices that are connected to it, etc.

The host computer’s screen is locked while you are connected, so no one standing in front of it will see what you are doing remotely.


What Is Windows Remote Assistance?

Windows Remote Assistance is a tool that allows you to remotely give or receive technical support to or from other Windows users.



In order for Windows Remote Assistance to work, there are a few things that you need to set up:

  • The user who will receive assistance has to have Windows Remote Assistance enabled in his/her computer’s System Properties.
  • The user in need of help has to request assistance via Windows Remote Assistance.
  • The person providing the technical assistance will need to know the connection password set by the user who needs help.
  • The user in need of assistance has to approve the remote connection.

Once the remote connection is established, both users will see the same computer screen. If the user who asked for assistance wants to, he/she can share the control of his/her computer. This way, users at both ends will be able to control the computer’s mouse and keyboard.

For further information on Windows Remote Assistance, check this tutorial: How to Provide Remote Support with Windows Remote Assistance.

What Are The Differences Between Remote Desktop Connection & Windows Remote Assistance?

Going through the previous questions in this article, you already know what’s different between these two Windows features. However, let’s sum it all up:

  • Remote Desktop Connection works only if the host computer allows remote connections, while Windows Remote Assistance works only if the user receiving assistance allows Remote Assistance connections to his/her computer.
  • Remote Desktop Connection allows you to take full control of a remote computer (including exclusive access to the Desktop, documents, programs, etc.), while Windows Remote Assistance allows you to give partial control to your own computer (shared desktop, mouse and keyboard) in order to get help from a remote friend or technical person.
  • Remote Desktop Connection requires you to know the credentials of an account found on the remote computer, while Windows Remote Assistance requires an invitation.
  • Remote Desktop Connection doesn’t need any additional permissions, while Windows Remote Assistance asks the user seeking help to manually accept an incoming remote connection.
  • Remote Desktop Connection will show the computer screen only on the client computer (the user that initiated the remote connection), while Windows Remote Assistance will show the same Desktop to both parties involved.


Remote Desktop Connection and Windows Remote Assistance have similar names and both are used for remote connections to other computers. However, as we’ve seen in this article, their purposes are quite different. If you have questions, use the comments below to let us know.

Offline domain joining

  • There is no network connectivity to a domain controller
  • Windows is being installed unattended
  • Replication from a writable DC to an RODC happens rarely
  • The forest/domain functional level does not need to be raised




On the domain controller, run the following from an elevated command prompt:

C:\>djoin /provision /domain ami.net /machine WS2 /savefile C:\WS2join.txt

This creates the provisioning blob file WS2join.txt in C:\.

In Active Directory Users and Computers, verify that the computer account WS2 has been created.



On the client computer, open a command prompt with administrator privileges and run:

C:\>djoin /requestodj /loadfile E:\WS2join.txt /windowspath %windir% /localos



Note: for the first domain logon, a domain controller must be online and reachable on the network.


Stateful vs Stateless Server

A stateful server keeps state between connections. A stateless server does not.

So, when you send a request to a stateful server, it may create some kind of connection object that tracks what information you request. When you send another request, that request operates on the state from the previous request. So you can send a request to “open” something. And then you can send a request to “close” it later. In-between the two requests, that thing is “open” on the server.

When you send a request to a stateless server, it does not create any objects that track information regarding your requests. If you “open” something on the server, the server retains no information at all that you have something open. A “close” operation would make no sense, since there would be nothing to close.

HTTP and NFS (through version 3) are stateless protocols. Each request stands on its own.

Sometimes cookies are used to add some state to a stateless protocol. In HTTP (web pages), the server sends you a cookie and then the browser holds the state, only to send it back to the server on a subsequent request.

SMB is a stateful protocol. A client can open a file on the server, and the server may deny other clients access to that file until the client closes it.
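The contrast between the two designs can be sketched in a few lines of Python; the class and method names here are purely illustrative:

```python
# Minimal sketch of the difference: the stateful server remembers which files
# a client has "opened" between requests; the stateless one remembers nothing,
# so every request must carry all the context it needs.
class StatefulServer:
    def __init__(self):
        self.open_files = {}                 # state kept between requests

    def open(self, client, name):
        self.open_files.setdefault(client, set()).add(name)

    def read(self, client, name):
        if name not in self.open_files.get(client, set()):
            raise PermissionError("file not open")   # 'close' is meaningful here
        return f"data of {name}"

class StatelessServer:
    def read(self, client, name, offset=0):  # request is fully self-describing
        return f"data of {name} from offset {offset}"

s1 = StatefulServer()
s1.open("alice", "report.txt")
print(s1.read("alice", "report.txt"))   # works: the server remembers the open

s2 = StatelessServer()
print(s2.read("alice", "report.txt"))   # works with no prior request at all
```

Notice that the stateless `read` must carry the offset itself, which is exactly why an HTTP cookie, held by the client, is the usual way to bolt state onto a stateless protocol.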

MBR (Master Boot Record) and GPT (GUID Partition Table)

Set up a new disk on Windows 8.x or 10 and you’ll be asked whether you want to use MBR or GPT. GPT is the new standard and is gradually replacing MBR.

GPT brings with it many advantages, but MBR is still the most compatible and is still necessary in some cases. This isn’t a Windows-only standard — Mac OS X, Linux, and other operating systems can also use GPT.

What Do GPT and MBR Do?



You have to partition a disk drive before you can use it. MBR (Master Boot Record) and GPT (GUID Partition Table) are two different ways of storing the partitioning information on a drive. This information includes where partitions start and end, so your operating system knows which sectors belong to each partition and which partition is bootable. This is why you have to choose MBR or GPT before creating partitions on a drive.


MBR’s Limitations



MBR stands for Master Boot Record. It was introduced with IBM PC DOS 2.0 in 1983.

It’s called Master Boot Record because the MBR is a special boot sector located at the beginning of a drive. This sector contains a boot loader for the installed operating system and information about the drive’s logical partitions. The boot loader is a small bit of code that generally loads the larger boot loader from another partition on a drive. If you have Windows installed, the initial bits of the Windows boot loader reside here — that’s why you may have to repair your MBR if it’s overwritten and Windows won’t boot. If you have Linux installed, the GRUB boot loader will typically be located in the MBR.

MBR works with disks up to 2 TB in size, but it can’t handle disks with more than 2 TB of space. MBR also only supports up to four primary partitions — if you want more, you have to make one of your primary partitions an “extended partition” and create logical partitions inside it. This is a silly little hack and shouldn’t be necessary.
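Where does the 2 TB ceiling come from? MBR partition entries record sector addresses in 32-bit fields, and classic disks use 512-byte sectors, so the arithmetic works out as follows:

```python
# Where MBR's 2 TB ceiling comes from: partition entries store sector
# addresses as 32-bit values, and classic disks use 512-byte sectors.
SECTOR_SIZE = 512
MAX_SECTORS = 2 ** 32            # largest count a 32-bit field can address

max_bytes = MAX_SECTORS * SECTOR_SIZE
print(max_bytes)                 # 2199023255552 bytes
print(max_bytes / 1024 ** 4)     # 2.0 TiB
```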

MBR became the industry standard everyone used for partitioning and booting from disks. Developers have been piling on hacks like extended partitions ever since.


GPT’s Advantages



GPT stands for GUID Partition Table. It’s a new standard that’s gradually replacing MBR. It’s associated with UEFI — UEFI replaces the clunky old BIOS with something more modern, and GPT replaces the clunky old MBR partitioning system with something more modern. It’s called GUID Partition Table because every partition on your drive has a “globally unique identifier,” or GUID — a random string so long that every GPT partition on earth likely has its own unique identifier.

This system doesn’t have MBR’s limits. Drives can be much, much larger and size limits will depend on the operating system and its file systems. GPT allows for a nearly unlimited amount of partitions, and the limit here will be your operating system — Windows allows up to 128 partitions on a GPT drive, and you don’t have to create an extended partition.

On an MBR disk, the partitioning and boot data is stored in one place. If this data is overwritten or corrupted, you’re in trouble. In contrast, GPT stores multiple copies of this data across the disk, so it’s much more robust and can recover if the data is corrupted. GPT also stores cyclic redundancy check (CRC) values to verify that its data is intact: if the data is corrupted, GPT can notice the problem and attempt to recover the damaged data from another location on the disk. MBR had no way of knowing whether its data was corrupted; you’d only discover a problem when the boot process failed or your drive’s partitions vanished.
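The CRC idea is simple enough to sketch with Python's standard library; the byte strings here are stand-ins for real GPT partition entries:

```python
# Sketch of the CRC idea GPT uses: store a checksum alongside the partition
# data; if the data is later altered, the recomputed CRC no longer matches,
# so the corruption is detectable (and a backup copy can be consulted).
import zlib

partition_table = b"partition entries ..."      # stand-in for real GPT entries
stored_crc = zlib.crc32(partition_table)

# Later: verify integrity.
assert zlib.crc32(partition_table) == stored_crc    # intact

corrupted = b"partition entrieX ..."                # one flipped byte
print(zlib.crc32(corrupted) == stored_crc)          # False -> corruption detected
```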



GPT drives tend to include a “protective MBR.” This MBR reports that the drive contains a single partition spanning the entire disk. If you try to manage a GPT disk with an old tool that can only read MBRs, that tool sees one partition covering the whole drive rather than unpartitioned space, so it won’t overwrite the GPT data with a new MBR. In other words, the protective MBR protects the GPT data from being overwritten.

Windows can only boot from GPT on UEFI-based computers running 64-bit versions of Windows 10, 8.1, 8, 7, Vista, and corresponding server versions. All versions of Windows 10, 8.1, 8, 7, and Vista can read GPT drives and use them for data — they just can’t boot from them without UEFI.

Other modern operating systems can also use GPT. Linux has built-in support for GPT. Apple’s Intel Macs no longer use the APM (Apple Partition Map) scheme and use GPT instead.


You’ll probably want to use GPT when setting up a drive. It’s a more modern, robust standard that all computers are moving toward. If you need compatibility with old systems — for example, the ability to boot Windows off a drive on a computer with a traditional BIOS — you’ll have to stick with MBR for now.

Hyper-V Virtual Hard Disk (VHD) Types

VHD is a file format employed in Microsoft’s virtualization solutions. Essentially it operates and behaves much like a physical hard disk, while in fact it is a file. Much information is already available regarding VHD, and those who are not familiar with this format should review the Virtual Hard Disk Getting Started Guide first.

There are various ways to create and manage a VHD. For those who are deployment-focused or prefer operating from a command prompt, DiskPart is available. For those who prefer a GUI, Hyper-V Manager and Disk Management also offer VHD operations.

In this post, the focus is on the VHD operations in Hyper-V Manager. There are really just three routines: creating, editing, and inspecting a VHD. One can start these routines from the Action dropdown menu or the Actions pane of Hyper-V Manager once a Hyper-V host is highlighted. To create, edit, or inspect a VHD, simply click the corresponding option.

The following sections present the user experience after a user starts a particular routine by clicking the option indicated by the top-level heading. Also notice that the term VHD, depending on the context, stands for either a virtual hard disk itself or the format of a virtual hard disk.

1. (Creating) New VHD

When creating a VM in Hyper-V Manager, one can at the same time create a VHD on the fly. The dialog shows the default settings of a new VHD to be added while creating a VM. Normally this is part of a process of installing an OS to the VHD, then or later, from installation media or a network image store.

In Hyper-V Manager, first highlight an intended Hyper-V host, then create a new VHD by clicking the New option in the Action dropdown menu or the Actions pane.

Using Disk Management in Computer Management, one can also create or attach a VHD. Notice, however, that only two VHD types, Fixed and Dynamic, are available when creating one in Disk Management. On the Windows Server 2012 desktop, there is a useful shortcut to access Disk Management: simply hold down the Windows key and press X to pop up a menu of frequently used tools that includes a shortcut to Disk Management.

VHD Formats

During the process of creating a VHD, you need to first specify the format. In Windows Server 2012, a new format, VHDX, is available in addition to VHD. There is a noticeable difference in storage capacity between the two formats. Furthermore, VHDX also provides data corruption protection during power failures and optimizes structural alignments to prevent performance degradation on new, large-sector physical disks. The Hyper-V Virtual Hard Disk Format Overview has additional information detailing these formats.

VHD Types

There are three VHD types, and each targets particular scenarios.

Fixed Size

This type allocates storage at VHD creation time. The size of a Fixed Size (or Fixed) VHD, as the name indicates, stays the same throughout the life of the disk. Since all available storage is allocated at creation time, a Fixed VHD offers predictable, best-case performance on operations relevant to storage allocation and is recommended for production use.

In the process, Windows Server 2012 defaults the format of a new blank VHD to VHDX and the size to 127 GB. Here, the routine shown resets the size and creates a 5 GB VHD on the local hard disk. The 5 GB size is chosen due to limited disk space on the associated hard disk. To create a VHD for installing an OS, for example, the size of the VHD should be large enough to include the OS, patches, applications, temp storage, page files, buffer space, etc.



Dynamically Expanding

This type of a VHD is first created with just housekeeping (or header/footer) information, i.e. the name, location, maximum size, etc. of the disk. As data are written into a Dynamic VHD, the total size of the VHD will grow accordingly. Here is a routine to create a 5 GB Dynamic VHD.

So a Dynamically Expanding (Dynamic) VHD is rather small when first created, and its size grows as data are written to the disk. At any given time, a Dynamic VHD has a size equal to the actual data written to it plus the housekeeping information. Notice that upon deleting data from a Dynamic VHD, the space occupied by the deleted data is not reclaimed until an Edit Disk/Compact operation is performed.

A Dynamic VHD is recommended for development and testing, since it has a relatively small footprint to manage. A server intended to run applications that are not disk-intensive is also a possible candidate for a Dynamic VHD. Still, when it comes to performance, a Fixed VHD performs better than a comparable Dynamic VHD in most scenarios, typically by roughly 10% to 15%, and significantly better for 4K writes, as documented in Hyper-V and VHD Performance – Dynamic vs. Fixed.
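The footprint difference between the two types can be modeled in a few lines of Python; the 2 MB housekeeping overhead is an illustrative assumption, not the actual VHD header size:

```python
# Toy model of the on-disk footprint difference: a Fixed VHD allocates its
# full declared capacity at creation time, while a Dynamic VHD starts with
# only housekeeping data and grows as writes arrive. Numbers are illustrative.
GB = 1024 ** 3
HOUSEKEEPING = 2 * 1024 * 1024        # assumed header/footer overhead

def fixed_vhd_size(capacity, data_written):
    return capacity                    # full allocation up front

def dynamic_vhd_size(capacity, data_written):
    return min(capacity, data_written + HOUSEKEEPING)

capacity = 5 * GB
print(fixed_vhd_size(capacity, 0) // GB)          # 5 GB immediately
print(dynamic_vhd_size(capacity, 0))              # just the housekeeping bytes
print(dynamic_vhd_size(capacity, 1 * GB) // GB)   # grows with the data written
```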



Differencing

A Differencing VHD is a so-called child disk based on a linked parent disk. Creating a child disk by specifying the parent disk establishes the parent-child relationship. From then on, the child disk stores the changed/modified data, i.e. the write operations that would otherwise go to the parent disk. Here the screen flow shows how to create a Differencing VHD.

Again, a Differencing VHD is a child disk which stores the delta of an associated parent disk. For instance, if a differencing disk is created and linked to a parent disk containing a generalized sysprep image, a VM based on the child disk will then store all subsequent changes and customization like system identity, accounts, profiles, applications, data, etc.
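The read/write/merge behavior of a child disk can be sketched as a simple overlay; the Disk class below is purely illustrative:

```python
# Sketch of how a Differencing (child) VHD works: reads fall through to the
# parent unless the child holds a newer copy of the block, writes always land
# in the child, and a merge folds the child's changes back into the parent.
class Disk:
    def __init__(self, blocks=None, parent=None):
        self.blocks = dict(blocks or {})
        self.parent = parent             # None for a base (parent) disk

    def read(self, block):
        if block in self.blocks:
            return self.blocks[block]    # child's delta overrides the parent
        return self.parent.read(block) if self.parent else None

    def write(self, block, data):
        self.blocks[block] = data        # deltas accumulate in the child

    def merge(self):
        """Apply the child's changes to the parent (Edit Disk > Merge)."""
        self.parent.blocks.update(self.blocks)

parent = Disk({0: "base OS image", 1: "drivers"})
child = Disk(parent=parent)
child.write(1, "patched drivers")        # change captured only in the child

print(child.read(0))    # falls through to the parent: "base OS image"
print(child.read(1))    # served from the child: "patched drivers"
child.merge()
print(parent.read(1))   # change is now permanent: "patched drivers"
```

The same overlay picture explains why relocating the parent breaks the child: the child holds only deltas plus a reference to its parent, so the reference must stay valid.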

Using a child disk to deploy a VM maintains a consistent base image, however the parent-child dependency also decreases the portability. For instance, when a parent disk is relocated, all child disks must reconnect with the parent disk to validate the relationship with a current path.

The concept of a child disk and the ability to separate/isolate changes from the parent disk also introduce interesting scenarios that facilitate IT operations by capturing, applying, undoing, reverting, or merging a child disk (i.e. implementing the changes stored in a child disk) into an associated parent disk. In fact, taking a snapshot of a VM is in essence to freeze the current state, make it a parent disk, and at the same time create a child disk to capture all subsequent changes. A best practice when testing a patch on a VM is to take a snapshot of the VM before and after applying the patch, to ensure the ability to predictably and precisely apply or back out the changes introduced by the patch, should that become necessary.

For testing, troubleshooting, forensic analysis, and other processes requiring capture of a particular state of a runtime environment, a VM snapshot, which is based on the concept of a Differencing VHD, is a great tool. A VM snapshot is nevertheless not to be employed as a backup solution: each snapshot introduces a parent-child dependency on the runtime environment at the time the snapshot was taken, and over time a series of such backups results in a multi-level hierarchy of snapshots with nested parent-child dependencies, which is not only prone to data corruption and operational errors but also likely to prolong restore time due to the chain of dependencies.

Here in File Explorer, notice that Dynamic and Differencing VHDs initially contain only housekeeping information and do not allocate all declared storage. The initial size of each is far from the declared size of 5 GB, while a Fixed VHD allocates all declared storage at creation time.

2. Editing Disk

Depending on the type of a VHD, various editing options are available. The following are a few examples.

Example: Compacting VHD

This operation compacts the file size of a VHD, while the configured storage capacity remains the same. Notice that for a Dynamic disk, the size of the disk grows as data are written; however, deleting content does not automatically reclaim the associated space. A compact operation is necessary to reduce the file size.


Example: Converting Format

For backward compatibility, here is a routine to edit and change the format of a disk from VHDX to VHD. Since this operation creates a new disk with a copy of the source content, there is an opportunity to specify both the format and the type of the new disk. Here, in addition to the format, the type is changed from Fixed to Dynamic. In other words, converting a VHD in effect copies the source disk to a newly created disk with a specified format and a selected type.

Converting the format does not apply to a Differencing VHD: both the format and the type of a child disk are dependencies on its parent and must not be changed for the parent-child link to work, although the Convert option is still displayed for a Differencing VHD.


Example: Expanding Dynamic Disk

To increase the size of a Dynamic VHD, edit and expand the disk. The process is fairly straightforward.


Example: Merging Disk

To permanently apply changes captured in a child disk, edit the child disk and select the option to merge it into the parent disk. The process shows that the changes can be merged directly into the parent disk itself or into a newly created Dynamic or Fixed disk. This routine is likely to follow a successful test/validation of a target patch or a new device driver against a child disk, with an existing deployment image as the parent disk, for example.

3. Inspecting Disk

In the event that some inconsistency is identified in a parent-child relationship, a disk inspection is necessary.
From Hyper-V Manager, highlight a target Hyper-V host and use the Action menu to inspect a VHD. An inspection displays the pertinent information of a disk, including format, type, location, and size. Here, an inspection shows that the Dynamic VHD I created (originally a 5 GB dynamic disk, as indicated by the file name) was expanded to 10 GB.

Validating Differencing Disk

For a differencing disk, an inspection displays the information of the child disk and reveals the parent-child relationship. For an existing parent-child pair, the Inspect Parent button indicates that the relationship is currently valid, and clicking the button displays the properties of the parent disk, as shown here.

Reconnecting Parent Disk

Once a parent-child relationship is established by successfully creating a Differencing Disk (i.e., the child disk), any change to the parent disk, such as applying a new patch or changing the path to the parent disk, will invalidate the parent-child link. The recommendation is therefore to set a parent disk to read-only. If the parent disk is relocated, the child disk needs to reconnect with its parent disk. Inspecting the child disk will then display a red cross indicating an error, along with a Reconnect button. Here the error was introduced by relocating the parent disk to a new location; to repair the link, click the Reconnect button.
Clicking the Reconnect button and specifying the new location of the parent disk resolves the issue, as shown in this routine. Once validated, the wizard displays the information of the child disk with the Inspect Parent button, indicating that the parent-child relationship is again valid.
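The reconnect workflow can be modeled in a few lines. The assumption here (a simplification of what Hyper-V actually stores) is that the child disk records its parent's path, so moving the parent breaks the link until the child is pointed at the new location.

```python
import os
import tempfile

class ChildDisk:
    """Toy model: the child records the parent's path; inspection checks it."""

    def __init__(self, parent_path):
        self.parent_path = parent_path

    def parent_link_ok(self):
        # Stand-in for the inspection step: the red cross appears when
        # the recorded parent path no longer resolves.
        return os.path.exists(self.parent_path)

    def reconnect(self, new_parent_path):
        # Stand-in for the Reconnect button: point at the new location.
        if not os.path.exists(new_parent_path):
            raise FileNotFoundError(new_parent_path)
        self.parent_path = new_parent_path

with tempfile.TemporaryDirectory() as d:
    old = os.path.join(d, "parent.vhdx")
    new = os.path.join(d, "moved", "parent.vhdx")
    open(old, "wb").close()
    child = ChildDisk(old)
    ok_before = child.parent_link_ok()     # link is valid
    os.makedirs(os.path.dirname(new))
    os.replace(old, new)                   # relocate the parent disk
    broken = not child.parent_link_ok()    # inspection now flags an error
    child.reconnect(new)                   # the Reconnect step
    ok_after = child.parent_link_ok()      # link is valid again
```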


Active Directory (AD)

Active Directory (AD) is a directory service that Microsoft developed for Windows domain networks. It is included in most Windows Server operating systems as a set of processes and services.[1][2] Initially, Active Directory was only in charge of centralized domain management. Starting with Windows Server 2008, however, Active Directory became an umbrella title for a broad range of directory-based identity-related services.[3]

A server running Active Directory Domain Services (AD DS) is called a domain controller. It authenticates and authorizes all users and computers in a Windows domain type network—assigning and enforcing security policies for all computers and installing or updating software. For example, when a user logs into a computer that is part of a Windows domain, Active Directory checks the submitted password and determines whether the user is a system administrator or normal user.[4]

Active Directory uses Lightweight Directory Access Protocol (LDAP) versions 2 and 3, Microsoft’s version of Kerberos, and DNS.



Active Directory, like many information-technology efforts, originated out of a democratization of design using Requests for Comments (RFCs). The Internet Engineering Task Force (IETF), which oversees the RFC process, has accepted numerous RFCs initiated by widespread participants. Active Directory incorporates decades of communication technologies into the overarching Active Directory concept and then makes improvements upon them.[citation needed] For example, LDAP underpins Active Directory, and X.500 directories and the Organizational Unit concept preceded Active Directory, which makes use of those methods. The LDAP concept began to emerge even before the founding of Microsoft in April 1975, with RFCs as early as 1971. RFCs contributing to LDAP include RFC 1823 (on the LDAP API, August 1995),[5] RFC 2307, RFC 3062, and RFC 4533.[6][7][8]

Microsoft previewed Active Directory in 1999, released it first with Windows 2000 Server edition, and revised it to extend functionality and improve administration in Windows Server 2003. Additional improvements came with subsequent versions of Windows Server. In Windows Server 2008, additional services were added to Active Directory, such as Active Directory Federation Services.[9] The part of the directory in charge of management of domains, which was previously a core part of the operating system,[9] was renamed Active Directory Domain Services (ADDS) and became a server role like others.[3] “Active Directory” became the umbrella title of a broader range of directory-based services.[10] According to Bryon Hynes, everything related to identity was brought under Active Directory’s banner.[3]

Active Directory Services

Active Directory Services consist of multiple directory services. The best known is Active Directory Domain Services, commonly abbreviated as AD DS or simply AD.[11]

Domain Services

Active Directory Domain Services (AD DS) is the cornerstone of every Windows domain network. It stores information about members of the domain, including devices and users, verifies their credentials and defines their access rights. The server (or the cluster of servers) running this service is called a domain controller. A domain controller is contacted when a user logs into a device, accesses another device across the network, or runs a line-of-business Metro-style app sideloaded into a device.

Other Active Directory services (excluding LDS, as described below) as well as most of Microsoft server technologies rely on or use Domain Services; examples include Group Policy, Encrypting File System, BitLocker, Domain Name Services, Remote Desktop Services, Exchange Server and SharePoint Server.

Lightweight Directory Services

Active Directory Lightweight Directory Services (AD LDS), formerly known as Active Directory Application Mode (ADAM),[12] is a light-weight implementation of AD DS.[13] AD LDS runs as a service on Windows Server. AD LDS shares the code base with AD DS and provides the same functionality, including an identical API, but does not require the creation of domains or domain controllers. It provides a Data Store for storage of directory data and a Directory Service with an LDAP Directory Service Interface. Unlike AD DS, however, multiple AD LDS instances can run on the same server.

Certificate Services

Active Directory Certificate Services (AD CS) establishes an on-premises public key infrastructure. It can create, validate and revoke public key certificates for internal uses of an organization. These certificates can be used to encrypt files (when used with Encrypting File System), emails (per S/MIME standard), network traffic (when used by virtual private networks, Transport Layer Security protocol or IPSec protocol).

AD CS predates Windows Server 2008, but its name was simply Certificate Services.[14]

AD CS requires an AD DS infrastructure.[15]

Federation Services

Active Directory Federation Services (AD FS) is a single sign-on service. With an AD FS infrastructure in place, users may use several web-based services (e.g., internet forum, blog, online shopping, webmail) or network resources using only one set of credentials stored at a central location, as opposed to having to be granted a dedicated set of credentials for each service. AD FS's purpose is an extension of that of AD DS: the latter enables users to authenticate with and use the devices that are part of the same network, using one set of credentials; the former enables them to use the same set of credentials in a different network.

As the name suggests, AD FS works based on the concept of federated identity.

AD FS requires an AD DS infrastructure, although its federation partner may not.[16]

Rights Management Services

Active Directory Rights Management Services (AD RMS, known as Rights Management Services or RMS before Windows Server 2008) is server software for information rights management shipped with Windows Server. It uses encryption and a form of selective functionality denial to limit access to documents such as corporate e-mails, Microsoft Word documents, and web pages, as well as the operations authorized users can perform on them.

Logical structure

As a directory service, an Active Directory instance consists of a database and corresponding executable code responsible for servicing requests and maintaining the database. The executable part, known as Directory System Agent, is a collection of Windows services and processes that run on Windows 2000 and later.[1] Objects in Active Directory databases can be accessed via LDAP, ADSI (a component object model interface), messaging API and Security Accounts Manager services.[2]


A simplified example of a publishing company’s internal network. The company has four groups with varying permissions to the three shared folders on the network.

Active Directory structures are arrangements of information about objects. The objects fall into two broad categories: resources (e.g., printers) and security principals (user or computer accounts and groups). Security principals are assigned unique security identifiers (SIDs).

Each object represents a single entity, whether a user, a computer, a printer, or a group, together with its attributes. Certain objects can contain other objects. An object is uniquely identified by its name and has a set of attributes, the characteristics and information that the object represents, defined by a schema, which also determines the kinds of objects that can be stored in Active Directory.

The schema object lets administrators extend or modify the schema when necessary. However, because each schema object is integral to the definition of Active Directory objects, deactivating or changing these objects can fundamentally change or disrupt a deployment. Schema changes automatically propagate throughout the system. Once created, an object can only be deactivated—not deleted. Changing the schema usually requires planning.[17]
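The role of the schema can be illustrated with a toy validator. This is a simplification with invented class and attribute names, not the actual AD schema, but it shows the idea that the schema defines which object classes exist and which attributes each class may carry.

```python
# Hypothetical, minimal schema: class name -> allowed attributes.
SCHEMA = {
    "user":    {"cn", "sAMAccountName", "mail"},
    "printer": {"cn", "location"},
}

def validate(obj_class, attrs):
    """Reject objects whose class or attributes the schema does not define."""
    allowed = SCHEMA.get(obj_class)
    if allowed is None:
        raise ValueError(f"class not defined by schema: {obj_class}")
    unknown = set(attrs) - allowed
    if unknown:
        raise ValueError(f"attributes not defined by schema: {unknown}")
    return True

ok = validate("user", {"cn": "Fred", "sAMAccountName": "fred"})
```

Extending the schema in this model corresponds to adding entries to `SCHEMA`; as the text notes, in a real deployment such changes propagate everywhere and are effectively permanent, so they deserve planning.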

Forests, trees, and domains

The Active Directory framework that holds the objects can be viewed at a number of levels. The forest, tree, and domain are the logical divisions in an Active Directory network.

Within a deployment, objects are grouped into domains. The objects for a single domain are stored in a single database (which can be replicated). Domains are identified by their DNS name structure, the namespace.

A domain is defined as a logical group of network objects (computers, users, devices) that share the same Active Directory database.

A tree is a collection of one or more domains and domain trees in a contiguous namespace, linked in a transitive trust hierarchy.

At the top of the structure is the forest. A forest is a collection of trees that share a common global catalog, directory schema, logical structure, and directory configuration. The forest represents the security boundary within which users, computers, groups, and other objects are accessible.

Domain-Boston
Domain-New York
Domain-Philly
Tree-Southern
  Domain-Atlanta
  Domain-Dallas
    OU-Marketing
      Hewitt
      Aon
      Steve
    OU-Sales
      Bill
      Ralph
Example of the geographical organizing of zones of interest within trees and domains.

Organizational units

The objects held within a domain can be grouped into Organizational Units (OUs).[18] OUs can provide hierarchy to a domain, ease its administration, and can resemble the organization’s structure in managerial or geographical terms. OUs can contain other OUs—domains are containers in this sense. Microsoft recommends using OUs rather than domains for structure and to simplify the implementation of policies and administration. The OU is the recommended level at which to apply group policies, which are Active Directory objects formally named Group Policy Objects (GPOs), although policies can also be applied to domains or sites (see below). The OU is the level at which administrative powers are commonly delegated, but delegation can be performed on individual objects or attributes as well.

Organizational units do not each have a separate namespace; e.g. user accounts with an identical username (sAMAccountName) in separate OUs within a domain are not allowed, such as “fred.staff-ou.domain” and “fred.student-ou.domain”, where “staff-ou” and “student-ou” are the OUs. This is because sAMAccountName, a user object attribute, must be unique within the domain.[19] However, two users in different OUs can have the same Common Name (CN), the name under which they are stored in the directory itself.
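The constraint can be sketched in a few lines: sAMAccountName must be unique across the whole domain, regardless of which OU holds the account, while CNs only need to be unique within their container.

```python
def can_create(existing_sams, new_sam):
    """A new account is allowed only if its sAMAccountName is unused
    anywhere in the domain (the comparison is case-insensitive)."""
    return new_sam.lower() not in {s.lower() for s in existing_sams}

# "fred" already exists somewhere in the domain, say in OU=staff-ou.
existing = ["fred"]

# A second "Fred" in OU=student-ou is rejected: different OU, same domain,
# same sAMAccountName.
allowed = can_create(existing, "Fred")
```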

In general, the reason for this disallowance of duplicate names through hierarchical directory placement is that Microsoft primarily relies on the principles of NetBIOS, a flat-namespace method of network object management that, for Microsoft software, goes all the way back to Windows NT 3.1 and MS-DOS LAN Manager. Allowing duplicate object names in the directory, or completely removing the use of NetBIOS names, would break backward compatibility with legacy software and equipment. However, disallowing duplicate object names in this way is a violation of the LDAP RFCs on which Active Directory is supposedly based.

As the number of users in a domain increases, conventions such as "first initial, middle initial, last name" (Western order) or the reverse (Eastern order) fail for common family names like Li (李), Smith or Garcia. Workarounds include adding a digit to the end of the username. Alternatives include creating a separate ID system of unique employee/student ID numbers to use as account names in place of actual users' names, and allowing users to nominate their preferred word sequence within an acceptable-use policy.
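The digit-suffix workaround mentioned above can be sketched as follows; the naming convention is just one plausible choice.

```python
def next_username(base, taken):
    """Return `base` if free, otherwise append the smallest digit >= 2
    that makes the name unique within the domain."""
    if base not in taken:
        return base
    n = 2
    while f"{base}{n}" in taken:
        n += 1
    return f"{base}{n}"

taken = {"jsmith", "jsmith2"}        # two John/Jane Smiths already exist
name = next_username("jsmith", taken)  # a third gets "jsmith3"
```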

Because duplicate usernames cannot exist within a domain, account name generation poses a significant challenge for large organizations that cannot be easily subdivided into separate domains, such as students in a public school system or university who must be able to use any computer across the network.

Shadow groups

In Active Directory, organizational units cannot be assigned as owners or trustees. Only groups are selectable, and members of OUs cannot be collectively assigned rights to directory objects.

In Microsoft’s Active Directory, OUs do not confer access permissions, and objects placed within OUs are not automatically assigned access privileges based on their containing OU. This is a design limitation specific to Active Directory. Other competing directories such as Novell NDS are able to assign access privileges through object placement within an OU.

Active Directory requires a separate step for an administrator to assign an object in an OU as a member of a group also within that OU. Relying on OU location alone to determine access permissions is unreliable, because the object may not have been assigned to the group object for that OU.

A common workaround for an Active Directory administrator is to write a custom PowerShell or Visual Basic script to automatically create and maintain a user group for each OU in their directory. The scripts are run periodically to update the group to match the OU’s account membership, but are unable to instantly update the security groups anytime the directory changes, as occurs in competing directories where security is directly implemented into the directory itself. Such groups are known as Shadow Groups. Once created, these shadow groups are selectable in place of the OU in the administrative tools.
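The core of such a shadow-group script, stripped of the actual AD calls, is a set reconciliation: compute which accounts to add to and remove from the group so it mirrors the OU. The data shapes below are hypothetical stand-ins for directory queries.

```python
def sync_shadow_group(ou_members, group_members):
    """Return the group membership after one periodic sync pass:
    add accounts now in the OU, drop accounts that have left it."""
    to_add = ou_members - group_members
    to_remove = group_members - ou_members
    return (group_members | to_add) - to_remove

ou = {"alice", "bob", "carol"}         # accounts currently located in OU=Sales
group = {"alice", "dave"}              # stale shadow group from the last run
group = sync_shadow_group(ou, group)   # now mirrors the OU exactly
```

In a real script the two input sets would come from directory queries and the add/remove sets would drive group-membership updates; as the text notes, the group is only as fresh as the last scheduled run.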

Microsoft refers to shadow groups in the Server 2008 Reference documentation, but does not explain how to create them. There are no built-in server methods or console snap-ins for managing shadow groups.[20]

The division of an organization’s information infrastructure into a hierarchy of one or more domains and top-level OUs is a key decision. Common models are by business unit, by geographical location, by IT Service, or by object type and hybrids of these. OUs should be structured primarily to facilitate administrative delegation, and secondarily, to facilitate group policy application. Although OUs form an administrative boundary, the only true security boundary is the forest itself and an administrator of any domain in the forest must be trusted across all domains in the forest.[21]


The Active Directory database is organized in partitions, each holding specific object types and following a specific replication pattern. Microsoft often refers to these partitions as ‘naming contexts’.[22] The ‘Schema’ partition contains the definition of object classes and attributes within the Forest. The ‘Configuration’ partition contains information on the physical structure and configuration of the forest (such as the site topology). Both replicate to all domains in the Forest. The ‘Domain’ partition holds all objects created in that domain and replicates only within its domain.

Physical structure

Sites are physical (rather than logical) groupings defined by one or more IP subnets.[23] AD also holds the definitions of connections, distinguishing low-speed (e.g., WAN, VPN) from high-speed (e.g., LAN) links. Site definitions are independent of the domain and OU structure and are common across the forest. Sites are used to control network traffic generated by replication and also to refer clients to the nearest domain controllers (DCs). Microsoft Exchange Server 2007 uses the site topology for mail routing. Policies can also be defined at the site level.

Physically, the Active Directory information is held on one or more peer domain controllers, replacing the NT PDC/BDC model. Each DC has a copy of the Active Directory. Servers joined to Active Directory that are not domain controllers are called member servers.[24] A subset of objects in the domain partition replicates to domain controllers that are configured as global catalogs. Global catalog (GC) servers replicate to themselves all objects from all domains and hence provide a global listing of objects in the forest.[25][26] However, to minimize replication traffic and keep the GC's database small, only selected attributes of each object are replicated. This is called the partial attribute set (PAS). The PAS can be modified by modifying the schema and marking attributes for replication to the GC.[27] Earlier versions of Windows used NetBIOS to communicate. Active Directory is fully integrated with DNS and requires TCP/IP and DNS. To be fully functional, the DNS server must support SRV resource records, also known as service records.


Active Directory synchronizes changes using multi-master replication.[28] Replication by default is ‘pull’ rather than ‘push’, meaning that replicas pull changes from the server where the change was effected.[29] The Knowledge Consistency Checker (KCC) creates a replication topology of site links using the defined sites to manage traffic. Intrasite replication is frequent and automatic as a result of change notification, which triggers peers to begin a pull replication cycle. Intersite replication intervals are typically less frequent and do not use change notification by default, although this is configurable and can be made identical to intrasite replication.

Each link can have a 'cost' (e.g., DS3, T1, ISDN, etc.), and the KCC alters the site link topology accordingly. Replication may occur transitively through several site links on same-protocol site link bridges if the cost is low, although the KCC automatically costs a direct site-to-site link lower than transitive connections. Site-to-site replication can be configured to occur between a bridgehead server in each site, which then replicates the changes to other DCs within the site. Replication of Active Directory-integrated DNS zones is automatically configured when DNS is activated in the domain, on a per-site basis.
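The cost-based routing idea can be illustrated with a shortest-path search over site links. This is a sketch of the principle only; the KCC's real topology generation is considerably more involved. The site names and costs below are invented.

```python
import heapq

def cheapest_path_cost(links, src, dst):
    """links: {site: [(neighbor, cost), ...]}.
    Dijkstra's algorithm over site-link costs."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, site = heapq.heappop(heap)
        if site == dst:
            return d
        if d > dist.get(site, float("inf")):
            continue  # stale heap entry
        for nbr, cost in links.get(site, ()):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return None  # unreachable

links = {
    "HQ":     [("Branch", 100), ("DR", 500)],  # fast link vs. slow direct link
    "Branch": [("HQ", 100), ("DR", 200)],
    "DR":     [("HQ", 500), ("Branch", 200)],
}
# The transitive route HQ -> Branch -> DR (cost 300) beats the
# direct HQ -> DR link (cost 500).
cost = cheapest_path_cost(links, "HQ", "DR")
```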

Replication of Active Directory uses Remote Procedure Calls over IP (RPC/IP). Between sites, SMTP can be used for replication, but only for changes in the Schema or Configuration partitions, or in the partial attribute set for the global catalog. SMTP cannot be used for replicating the default Domain partition.[30]


In general, a network utilizing Active Directory has more than one licensed Windows server computer. Backup and restore of Active Directory is possible for a network with a single domain controller,[31] but Microsoft recommends more than one domain controller to provide automatic failover protection of the directory.[32] Domain controllers are also ideally single-purpose for directory operations only, and should not run any other software or role.[33]

Certain Microsoft products such as SQL Server[34][35] and Exchange[36] can interfere with the operation of a domain controller, necessitating isolation of these products on additional Windows servers. Combining them can make configuration or troubleshooting of either the domain controller or the other installed software more difficult.[37] A business intending to implement Active Directory is therefore recommended to purchase a number of Windows server licenses, to provide for at least two separate domain controllers, and optionally, additional domain controllers for performance or redundancy, a separate file server, a separate Exchange server, a separate SQL Server,[38] and so forth to support the various server roles.

Physical hardware costs for the many separate servers can be reduced through the use of virtualization, although for proper failover protection, Microsoft recommends not running multiple virtualized domain controllers on the same physical hardware.[39]


The Active Directory database, the directory store, in Windows 2000 Server uses the JET Blue-based Extensible Storage Engine (ESE98) and is limited to 16 terabytes and 2 billion objects (but only 1 billion security principals) in each domain controller's database. Microsoft has created NTDS databases with more than 2 billion objects.[40] (NT4's Security Account Manager could support no more than 40,000 objects.) Called NTDS.DIT, the database has two main tables: the data table and the link table. Windows Server 2003 added a third main table for security descriptor single-instancing.[40]

Programs may access the features of Active Directory[41] via the COM interfaces provided by Active Directory Service Interfaces.[42]

Single server operations

Flexible Single Master Operations (FSMO, pronounced "fizz-mo") roles are also known as operations master roles. Although domain controllers allow simultaneous updates in multiple places, certain operations are supported only on a single server. These operations are performed using the roles listed below:

Schema Master (1 per forest): Schema modifications.
Domain Naming Master (1 per forest): Addition and removal of domains; held in the forest root domain.
PDC Emulator (1 per domain): Provides backward compatibility for NT4 clients for PDC operations (such as password changes). The PDC runs domain-specific processes such as the Security Descriptor Propagator (SDP) and is the master time server within the domain. It also handles external trusts, runs the DFS consistency check, holds current passwords, and manages all GPOs as the default server.
RID Master (1 per domain): Allocates pools of unique identifiers to domain controllers for use when creating objects.
Infrastructure Master (1 per domain/partition): Synchronizes cross-domain group membership changes. The infrastructure master should not be run on a global catalog server (GC) unless all DCs are also GCs, or the environment consists of a single domain.


To allow users in one domain to access resources in another, Active Directory uses trusts.[43]

Trusts inside a forest are automatically created when domains are created. The forest sets the default boundaries of trust, and implicit, transitive trust is automatic for all domains within a forest.


One-way trust
One domain allows access to users on another domain, but the other domain does not allow access to users on the first domain.
Two-way trust
Two domains allow access to users on both domains.
Trusted domain
The domain that is trusted, whose users have access to the trusting domain.
Transitive trust
A trust that can extend beyond two domains to other trusted domains in the forest.
Intransitive trust
A one-way trust that does not extend beyond two domains.
Explicit trust
A trust that an admin creates. It is not transitive and is one way only.
Cross-link trust
An explicit trust between domains in different trees, or in the same tree when a descendant/ancestor (child/parent) relationship does not exist between the two domains.
Shortcut trust
Joins two domains in different trees. Transitive, one- or two-way.
Forest trust
Applies to the entire forest. Transitive, one- or two-way.
Realm trust
Can be transitive or nontransitive (intransitive), one- or two-way.
External trust
Connects to other forests or to non-AD domains. Nontransitive, one- or two-way.[44]

Forest trusts

Windows Server 2003 introduced the forest root trust. This trust can be used to connect Windows Server 2003 forests if they are operating at the 2003 forest functional level. Authentication across this type of trust is Kerberos-based (as opposed to NTLM).

Forest trusts are transitive for all the domains within the trusted forests. However, forest trusts are not transitive between forests.

Example: Suppose that a two-way transitive forest trust exists between the forest root domains in Forest A and Forest B, and another two-way transitive forest trust exists between the forest root domains in Forest B and Forest C. Such a configuration lets users in Forest B access resources in any domain in either Forest A or Forest C, and users in Forest A or C can access resources in any domain in Forest B. However, it does not let users in Forest A access resources in Forest C, or vice versa. To let users in Forest A and Forest C share resources, a two-way transitive trust must exist between both forests.
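The example above boils down to the rule that forest trusts do not chain: only a direct trust edge between two forests grants access. A minimal model of that rule:

```python
def can_access(trusts, src_forest, dst_forest):
    """trusts: a set of frozenset pairs of forest names.
    Access is granted only within a forest or over a direct forest trust;
    trust edges are deliberately NOT chained (no transitivity across forests)."""
    if src_forest == dst_forest:
        return True
    return frozenset((src_forest, dst_forest)) in trusts

# A trusts B, and B trusts C, but there is no direct A-C trust.
trusts = {frozenset(("A", "B")), frozenset(("B", "C"))}

a_to_b = can_access(trusts, "A", "B")  # direct trust: allowed
a_to_c = can_access(trusts, "A", "C")  # no direct trust: denied
```

Adding `frozenset(("A", "C"))` to the set is the modeling equivalent of creating the two-way transitive trust the text says Forests A and C would need.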

Management solutions

Microsoft Active Directory management tools include:

  • Active Directory Users and Computers,
  • Active Directory Domains and Trusts,
  • Active Directory Sites and Services,
  • ADSI Edit,
  • Local Users and Groups,
  • Active Directory Schema snap-ins for Microsoft Management Console (MMC).

These management tools may not provide enough functionality for efficient workflows in large environments. Some third-party solutions extend the administration and management capabilities, providing essential features for more convenient administration, such as automation, reporting, and integration with other services.