Tuesday, May 31, 2016

Exchange 2010 (SP3) - Search index problems

As Exchange administrators, we may encounter the "Slow FindRow Rate" error in the Application log of our servers.

The Microsoft KB 2764440 article explains that "large values indicate that applications are crawling or searching mailboxes and that server performance is being affected."

We will usually notice this because some monitoring software alerts us that a certain threshold has been exceeded. In this case, it is the "\MSExchange IS Mailbox(*)\Slow FindRow Rate" performance counter that is over "10". In my case, the values were sometimes at 11 or 12.

This threshold exists at the level of the mailbox database (as opposed to a particular mailbox or the entire server). So the problem could affect one mailbox database but not the others. We should also remember that each mailbox database has its own "search index". 
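Beyond waiting for the monitoring software to alert us, we can sample the counter ourselves. Here is a minimal sketch using the built-in Get-Counter cmdlet (run it on the mailbox server itself, where the counter exists; the threshold of 10 is the one mentioned above):

```powershell
# Sample the Slow FindRow Rate counter for every mailbox database instance.
$samples = Get-Counter -Counter '\MSExchange IS Mailbox(*)\Slow FindRow Rate' `
    -SampleInterval 5 -MaxSamples 3

# List any database instance currently above the threshold of 10.
$samples.CounterSamples |
    Where-Object { $_.CookedValue -gt 10 } |
    Select-Object InstanceName, CookedValue
```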

Based on my research, it looks like this can be due to applications that search the mailbox database directly and do not use the Exchange search index.

This article, for example, is rather old (2008) and refers to Exchange 2003, but it describes perfectly what I was observing:

I quote:

"This could lead up to the dreaded RPC dialog box or the balloon stating the Outlook is requesting data from the server."

This is exactly what was being observed on the client side and, indeed, Outlook was in online mode (not to be confused with "Exchange Online" (Office 365)).

Otherwise, I understand that there is a "Slow FindRow" method and (since Exchange 2003?) a more efficient "Fast FindRow" method, but the underlying hardware must be able to take advantage of the newer method (or else Exchange reverts to the older, less efficient one).

Other factors:
  • A high number of items (over 5000) in the Inbox, Sent Items, Deleted Items, Calendar and Contacts folders. Note: some users keep years and years of appointments in their calendar.
  • Apparently shared calendars can play a role.
  • Third-party applications that access the mailbox directly can be the source of the problem.

Some of the sources are old (2006-2008), so I am not sure to what extent the concepts presented would apply to the latest versions of Exchange (2013 and 2016), whose IOPS performance is supposed to be vastly improved over earlier versions. On the other hand, it is on Exchange 2010 that I have observed references to "Slow FindRow".

It also appears that this error can be related to corruption of the search index (refer to Microsoft KB 2764440 cited above). In the case of a corrupt search index, we have a number of possible corrective actions.

"Manual" recreation of the search index

On all versions of Exchange (to my knowledge), we can delete the search index manually after stopping certain services. When we restart those services, the search index will be recreated.

On Exchange 2010, we could proceed as follows:

1. Stop services with the command...

net stop MsExchangeSearch

or with PowerShell:

Stop-Service MsExchangeSearch

2. Delete the subfolder named "CatalogData-<some-GUID-here>" that contains the search index. This subfolder is in the same location as the other files that constitute the mailbox database. For example:

And the content looks like this:

3. Start services with the command...

net start MsExchangeSearch

or (PowerShell)

Start-Service MsExchangeSearch

Note: of course, we can also use services.msc (the graphic interface) to stop and start the services.
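For reference, the three steps can be combined into a single PowerShell sequence. This is only a sketch: the database path and the catalog folder name (with its GUID) are examples and must be adjusted to your environment.

```powershell
# Stop the search service so the catalog files are released.
Stop-Service MSExchangeSearch

# Delete the catalog folder that sits next to the mailbox database files.
# The path and GUID below are examples - use your own database location.
Remove-Item -Recurse -Force 'D:\ExchangeDatabases\DB01\CatalogData-<GUID>-<GUID>'

# Restart the service; the search index is recreated automatically.
Start-Service MSExchangeSearch
```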

Apparently (I have not experimented with this myself), there are two services to stop (and then start) with Exchange 2013 (and 2016?).
  • MSExchangeFastSearch
  • HostControllerService

Recreation of the search index with a script

In Exchange 2010, we can use the ResetSearchIndex.ps1 script to rebuild the index. We have a number of options...

After designating the mailboxdatabase with the Get-MailboxDatabase cmdlet:

Get-MailboxDatabase "DB01" | .\ResetSearchIndex.ps1 [-force]

Note: click to enlarge.

With the -force parameter, we avoid the prompt:

We also have this option:

ResetSearchIndex.ps1 [-force] "DB01"

Note: in all cases, execute the command from the Scripts directory, that is, by default:

C:\Program Files\Microsoft\Exchange\v14\Scripts

It may be necessary to replace "Exchange" with "Exchange Server". Of course, the path could be entirely different depending on what was chosen at the time of installation.

If you examined the execution of the script in the screenshots above attentively, you may have noticed that it uses the same three-step process we completed earlier, but manually:
  1. Stop services.
  2. Delete the search index folder.
  3. Start services.
Note: this script is no longer available with Exchange 2013 (and 2016?).

Update the "catalog" (DAG only)

If the mailbox database is part of a Database Availability Group (DAG), we have one more option: we can overwrite the catalog from another source, that is, the partner Exchange server if we have a simple pair (a DAG can contain up to 16 Exchange servers).

Update-MailboxDatabaseCopy DB1\EX13-2 -CatalogOnly

Note: rebuilding the search index may take some time and consume a fair amount of system resources, so you may want to schedule it accordingly.

Sunday, May 29, 2016

Exchange 2010 - retention policies and calendar items

This post, to a certain extent, is a simple "note to self" concerning Exchange retention policies and calendar items. I'm certainly not the first to dedicate a blog post to this subject, since the change in question took place in August 2012 with Exchange 2010 SP2 Rollup 4.

What change?

Before SP2 Rollup 4, retention policies applied to mailbox items but not calendar items (or tasks).

I suspect most Exchange administrators have already managed this change, probably some time ago. For those still managing Exchange 2007, or a version prior to Exchange 2010 SP2 RU4, it is a significant difference of which we should be aware, especially if our management has a reason to retain older calendar items (or tasks).

And if you are asking "but what are retention policies", I'll direct you to this link:

Messaging Records Management

So, if we applied a default policy tag that would delete mailbox items after 18 months, Exchange would remove items from the "Inbox" or "Sent Items" folder but not calendar items (or tasks).

Exchange 2010 SP2 RU4 changed this behavior: now calendar items older than 18 months (in my example) will also be deleted.

That may or may not be acceptable for your organization.

If not, we have to add a key (DWORD) called...


... in the registry at the following location:


And assign the value "0"

We only need to add this value ("0") if we do not want the "mailbox assistant" to delete calendar items. If we do want calendar items to be deleted after a certain time, we do not need to do anything, since this behavior is the new default. In other words, it would make no sense to add the registry key and then assign the value of "1".
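Since the screenshots with the exact value name and registry path did not survive in this post, here is only the general shape of the change in PowerShell. The path and value name below are placeholders; take the real ones from the Microsoft articles linked at the end of this post.

```powershell
# Placeholders - substitute the real path and value name from the Microsoft articles.
$regPath   = 'HKLM:\SYSTEM\CurrentControlSet\Services\<ServiceKey>\Parameters'
$valueName = '<ValueNameFromTheArticles>'

# Create (or overwrite) the DWORD value and set it to 0 to keep the old behavior
# (calendar items and tasks NOT deleted by the retention policy).
New-ItemProperty -Path $regPath -Name $valueName -PropertyType DWord -Value 0 -Force
```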

I have made this adjustment in both a test and a production environment and have observed that it functions as expected. I could present a series of screenshots showing the "before and after", but reverting to the status quo ante and then making the change once again does not seem to be the most judicious use of my time.


For more information, here are some other sources:

Step-by-step instructions (with screenshots) by Satheshwaran Manoharan (Technet article):

How to Disable Retention Policy from Applying on Calendar and Tasks in Exchange 2010

Microsoft announcement by Ross Smith IV (Exchange Team Blog):

Calendar and Tasks Retention Tag Support in Exchange 2010 SP2 RU4

Comments on the change by Tony Redmond:

Automatic clean-out of Calendar and Task items now possible (but carefully)

Note: The Exchange Team blog (yes, the guys that develop Exchange) and Tony Redmond's blog are both very useful sources for information on the latest developments in the Exchange world.

Saturday, May 28, 2016

Exchange 2010 (SP3) - Health check scripts (Cunningham)

There are a number of tools that allow us to validate our Exchange configuration and verify everything is functioning properly. In Exchange 2007 and 2010, we had the Exchange Best Practices Analyzer (ExBPA), which unfortunately is absent from Exchange 2013 and 2016. Exchange also offers a number of cmdlets that can be used to view the status of services, replication, mailbox databases, mail queues, Database Availability Groups and much more.

Here are some examples of cmdlets that I use rather frequently to verify the status of my Exchange environment:
  • Test-ServiceHealth (this verifies that all Exchange services are running)
  • Test-SystemHealth (this is essentially the command-line equivalent of the ExBPA).
  • Test-ReplicationHealth (this tests replication health between nodes of a Database Availability Group or "DAG").
  • Get-MailboxServer (in particular, this displays the "DatabaseCopyAutoActivationPolicy" value)
  • Get-MailboxDatabase | Get-MailboxDatabaseCopyStatus (this shows the status of our mailbox databases: mounted, dismounted, etc.).
  • Get-DatabaseAvailabilityGroup -status | fl (this shows the status of the Database Availability Group).
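Strung together, the cmdlets above make a quick manual health sweep from the Exchange Management Shell:

```powershell
# Quick manual health sweep (run from the Exchange Management Shell).
Test-ServiceHealth                                     # required services running?
Test-ReplicationHealth                                 # DAG replication health
Get-MailboxDatabase | Get-MailboxDatabaseCopyStatus    # database copies mounted/healthy?
Get-DatabaseAvailabilityGroup -Status | Format-List    # overall DAG status
```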

When I was first learning these cmdlets, using them over and over again was a good way to memorize them. But entering cmdlet after cmdlet is not the most efficient way to quickly evaluate the health of your Exchange environment.

Fortunately, we can "script" much of the above, and this approach has several advantages. Once configured, it requires less time and, with some scripts, we can even email the results in HTML format. This report can be sent to the Exchange administrators and, if desired, to managers who might be interested in the status of the messaging system.

Two of the most useful scripts I have recently evaluated were written by Exchange MVP Paul Cunningham:
  • Get-DAGHealth.ps1
  • Test-ExchangeServerHealth.ps1
You can read more about them on his (excellent) website exchangeserverpro.com

Here is the link to the page about the scripts:


Note: the DAG health check was incorporated into the Exchange Server Health check.

Running the scripts

So we download the scripts to a folder, extract them (if compressed as a zip file), open the file properties and unblock them.

I placed the two scripts here (for example):


We can run the scripts directly at the command line as follows:

(Click to enlarge)

For the first script, the output indicates that our Exchange environment is healthy. Apparently, the older DAG Health script does not produce output on the screen, but it does send an HTML report to the email address that is configured.

Note: in the DAG Health script, we indicate the email address in an associated .xml file. For the Exchange Server Health script, we indicate the email address in the script itself.

Note: once again, although I tested both scripts, one would most likely just use the more general Exchange Server Health script since it covers everything concerning the DAG and more.

Scheduling the execution of the scripts (Task Scheduler)

The best way to use these scripts (in my opinion), is to schedule a task that will run them and indirectly send the HTML report to whoever should receive it (the Exchange administrators, for example, or perhaps certain managers).

I attained this objective by opening Task Scheduler and creating the new task as follows...

On the "General" tab, name the task and select the options shown below (with red dot):

On the "Triggers" tab, select "New":

Configure the task to run according to the schedule you desire, for example:

Likewise, on the "Actions" tab, select "New":

And configure the action:

For "Program/script" you can enter the full path to the powershell.exe executable file, but I discovered that this is not necessary (yes, the scripts are designed to run using PowerShell - we do not need to invoke the Exchange Management Shell or "EMS"). As for the arguments (that in my configuration designate the location of the script itself), I used the following terms:

 -NoLogo -NonInteractive -File "C:\Scripts\Get-DAGHealth.ps1" -Detailed -Sendemail

Or for the Exchange Server Health script (preferred):

-NoLogo -NonInteractive -File "C:\Scripts\Test-ExchangeServerHealth.ps1" -ReportMode -Sendemail

Other combinations may function just as well. In my case, my test user received the HTML report in his mailbox as expected.
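The same task can also be created from the command line with schtasks.exe instead of the Task Scheduler GUI. This is a sketch: the task name, schedule and account are examples, and note that the inner quotes around the script path must be escaped for cmd.exe.

```
schtasks /Create /TN "Exchange Health Report" /SC DAILY /ST 07:00 /RU SYSTEM ^
 /TR "powershell.exe -NoLogo -NonInteractive -File \"C:\Scripts\Test-ExchangeServerHealth.ps1\" -ReportMode -Sendemail"
```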

Sunday, May 22, 2016

Exchange 2010 (SP3) - Virtual Directory Authentication Settings (some repair options)

In my blog posts on the Citrix NetScaler VPX (used to load balance Exchange), I adjusted some settings of the OWA virtual directory. These settings are crucial for client access to the mailbox via HTTP(S). Knowledge of the default settings, and especially of the repair options, is essential for the Exchange administrator.

Some opening remarks

What are virtual directories? They are IIS web shares associated with "real" Windows directories (or folders) containing the files that constitute the various client access services accessible via IIS. For example, if we look at the ecp virtual directory (Content View) and then "Explore" (in the "Actions" pane), we see that this directory provides access to the following Windows folder:

C:\Program Files\Microsoft\Exchange Server\v14\ClientAccess\ecp

In fact, if we arrange the windows as follows, virtual directories and Windows folders are almost aligned line for line:

Default authentication and SSL settings (Exchange 2010)

These are the settings for the Exchange features that I consider the most useful to understand in my environment (Exchange 2010 multi-role server):

Note: unless otherwise indicated, the authentication settings listed are ENABLED.

As for SSL, all directories listed above require SSL except:
  • OAB
  • PowerShell
  • PowerShell-Proxy.

RpcWithCert requires a client certificate.

Other remarks:
  • Exadmin, Exchange and Exchweb (assuming they are even present) are legacy virtual directories that were used with Exchange 2003.
  • I do not use Public folders or Unified Messaging.
  • Please consult other online documentation if you are interested in virtual directories not presented above.

Reset virtual directories - EMS (PowerShell)

It is possible that virtual directory settings can become misconfigured or corrupt. If all else fails, we can delete and recreate the virtual directory in question.

Before Exchange 2010 SP1, we had to perform this operation in the EMS with the PowerShell cmdlets Remove-OWAVirtualDirectory and New-OWAVirtualDirectory.

If you have more than one Exchange server, make sure you designate only the virtual OWA directory that you want to remove (and recreate). This cmdlet, for example, would remove all the OWA virtual directories in the Exchange organization: 

Get-OwaVirtualDirectory | Remove-OwaVirtualDirectory

Warning: do not execute that command on your production Exchange server(s).

We can see the various OWA virtual directories in the organization with this cmdlet:

[PS] C:\>Get-OwaVirtualDirectory | fl Name,Server

Name   : owa (Default Web Site)
Server : EX13-1

Name   : owa (Default Web Site)
Server : EX13-2

Note: you can use the format-table (ft) option (the default - no need to specify it) or the format-list option (fl). I use the latter because it allows me to keep the output on the left side. It is simply a screen display option and does not affect the number of objects displayed.

We can remove the OWA virtual directory with any one of these combinations of cmdlets (and there are probably even more options):

Get-OwaVirtualDirectory "owa (default web site)" | Remove-OwaVirtualDirectory

Note: I would be cautious with the cmdlet above but apparently when we specify the OWA virtual directory, PowerShell only returns the local virtual directory.

So to be safe, we can indicate the server name:

Get-OwaVirtualDirectory "ex13-1\owa (default web site)" | Remove-OwaVirtualDirectory

Or, since we are indicating the server now, we could simply use this cmdlet (no pipeline needed):

[PS] C:\>Remove-OwaVirtualDirectory "ex13-1\owa (default web site)" -whatif
What if: Outlook Web App virtual directory "ex13-1\owa (default web site)" is being removed.

Note the use of the -whatif parameter which allows us to see if the cmdlet is valid but also tells us what will happen. It can be useful to execute the cmdlet to see what will happen... without the action actually happening.

We recreate the OWA virtual directory with this cmdlet:

New-OwaVirtualDirectory -InternalUrl 'https://mail.mitserv.net/owa' -WebSiteName 'Default Web Site'

Note: we will have to reconfigure the other parameters as well, such as the "-ExternalUrl" but we can do that once the virtual directory has been recreated. This is another example of the wisdom of documenting the Exchange configuration - more on that in the next section (below). Otherwise, we could look at the configuration of the OWA virtual directory on another Exchange server (if we have one... ).
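As a sketch, re-applying a parameter such as the external URL after recreation looks like this (the URL is whatever was documented before removal; mail.mitserv.net is the value used in this lab):

```powershell
# Re-apply the settings documented before the virtual directory was removed.
Set-OwaVirtualDirectory "ex13-1\owa (Default Web Site)" `
    -ExternalUrl 'https://mail.mitserv.net/owa'
```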

For the changes to take effect, we must restart IIS with the following command: iisreset /noforce 

At this point, we can reconfigure the OWA virtual directory parameters, the Urls for example.

Reset virtual directories - EMC

Since Exchange 2010 SP1, we can reset the virtual directories from the EMC.

Regardless, it makes sense to document the existing settings before making changes. We can take screenshots of the various tabs in the virtual directory properties or we can use certain PowerShell cmdlets. Concerning the ActiveSync virtual directory, we can execute this cmdlet for that purpose:
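Instead of screenshots, the output can simply be redirected to a file. A minimal sketch for the ActiveSync virtual directory (the same idea works with the other Get-*VirtualDirectory cmdlets; the file path is an example):

```powershell
# Dump the full ActiveSync virtual directory configuration to a file for reference.
Get-ActiveSyncVirtualDirectory -Server EX13-1 |
    Format-List |
    Out-File 'C:\Backup\EAS-vdir-settings.txt'
```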

[PS] C:\>Get-ActiveSyncVirtualDirectory "EX13-1\Microsoft-Server-ActiveSync (Default Web Site)" | fl

Note: I will not post all the output since it is quite long.

The internal and external Urls interest me the most and are quite important: if they do not match the names on the certificate we use for the client access services, users will encounter error messages when accessing these sites. So for just the Urls, I can use this cmdlet:

[PS] C:\>Get-ActiveSyncVirtualDirectory "EX13-1\Microsoft-Server-ActiveSync (Default Web Site)" | fl name,*nalurl*

Name        : Microsoft-Server-ActiveSync (Default Web Site)
InternalUrl : https://mail.mitserv.net/Microsoft-Server-ActiveSync
ExternalUrl : https://mail.mitserv.net/Microsoft-Server-ActiveSync

At this point (after taking at least minimal notes on the settings to reconfigure), we can reset the virtual directory. I'll illustrate the process with the following screenshots...

Open the EMC (Exchange Management Console), go to Server Configuration | Client Access and right-click on the server where the virtual directory will be reset. Select "Reset Virtual Directory" in the menu:

We then select the virtual directory to reset:

Note: click "Next" or "Finish" as needed.

In fact, I'll reset the ActiveSync virtual directory:

Result: the Urls are changed to their default values:

[PS] C:\>Get-ActiveSyncVirtualDirectory "EX13-1\Microsoft-Server-ActiveSync (Default Web Site)" | fl name,*nalurl*

Name        : Microsoft-Server-ActiveSync (Default Web Site)
InternalUrl : https://ex13-1.mynet.lan/Microsoft-Server-ActiveSync
ExternalUrl :

There are a number of options to re-enter the previous values of the virtual directory settings. For the most part, the default settings are not changed, except for the Urls. To set the preferred values, we can configure the parameters in the appropriate section of the EMC (Exchange Management Console) or use the EMS (Shell). For OWA, ActiveSync and OAB, it does not matter. However, some virtual directories (such as the EWS directory - Exchange Web Services) can only be configured at the command line, so I'll illustrate the process with that option.

This is one approach (using variables):

[PS] C:\>$AsURL="https://mail.mitserv.net/Microsoft-Server-ActiveSync"

[PS] C:\>Set-ActiveSyncVirtualDirectory "EX13-1\Microsoft-Server-ActiveSync (Default Web Site)" -InternalUrl $AsURL -ExternalUrl $AsURL

We now have the values that were present before resetting the virtual directory:

[PS] C:\>Get-ActiveSyncVirtualDirectory "EX13-1\Microsoft-Server-ActiveSync (Default Web Site)" | fl name,*nalurl*

Name        : Microsoft-Server-ActiveSync (Default Web Site)
InternalUrl : https://mail.mitserv.net/Microsoft-Server-ActiveSync
ExternalUrl : https://mail.mitserv.net/Microsoft-Server-ActiveSync

Additional remarks

Many properties for Exchange reside in Active Directory. In fact, this is true for the virtual directories. If we open ADSIEdit (configuration partition) and go to the sections in the images below, we can see some now familiar virtual directory settings for OWA:

In his blog, Dave Stork relates an incident where he was not able to recreate the OWA virtual directory with any of the methods described above:

Fixing a broken OWA 2010 Virtual Directory

I'm going to summarize the steps below (from what I understood):

  1. Recreate the virtual directory in IIS (I'm not sure of the exact process here).
  2. Create a new object on the problem server in ADSIEdit (class msExchOwaVirtualDirectory) - in his situation, this object was missing.
  3. Copy the attributes from a "good" Exchange server.
  4. At this point, OWA was apparently accessible again.
  5. He then reset the virtual directory.

I do not know if this procedure is supported by Microsoft, but it might come in handy if all else fails. For this reason, I am linking to his post here.

Warning: avoid using ADSIEdit unless you absolutely must do so. It is easy to make mistakes and there is no undo button or recycle bin.


Reset Client Access Virtual Directories

Friday, May 20, 2016

NetScaler VPX - Part 10 (user authentication with Active Directory - LDAP)

In my previous blog post, I examined local user management: user accounts created for the various NetScaler administrators on the appliance itself. In the present blog post, I will demonstrate how we can regulate access to the NetScaler using an external user database such as Active Directory.

We configure external authentication in this section of the management interface:

NetScaler > System > Authentication

Note: you can click on the images to enlarge.

Besides simple local authentication (what we saw in the previous blog post), we can use LDAP (Active Directory is based on LDAP), RADIUS or TACACS. In this blog post, I'll use LDAP.

So what do we need to do?

We have to:
  1. Designate an authentication server.
  2. Create a policy that directs authentication requests to that server.
  3. Bind that policy globally.
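For reference, the same three steps can also be performed from the NetScaler CLI. This is a sketch: the action and policy names, IP address, Base DN and bind account are examples based on this lab, so verify the exact syntax against the Citrix documentation for your firmware version.

```
add authentication ldapAction srv_LDAP_DC2 -serverIP 192.168.0.2 -ldapBase "dc=mynet,dc=lan" -ldapBindDn "svc_netscaler@mynet.lan" -ldapBindDnPassword <password> -ldapLoginName sAMAccountName
add authentication ldapPolicy pol_LDAP_DC2 ns_true srv_LDAP_DC2
bind system global pol_LDAP_DC2
```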

Designate the external authentication server

Click on "No LDAP authentication server" (see the right pane of the screenshot above) and then, under the "Servers" tab, click on "Add":

There are a fair number of settings to configure, so I've taken three screenshots of what is in fact a single page.

I designate the LDAP server with a name: srv_LDAP_DC2. This does not have to be the exact name of the domain controller itself. I then enter the IP address of the server in question as well as the other settings shown below:

Note: yes, you may have noticed the Security Type "PLAINTEXT" and wondered if this setting is secure. Are the credentials sent in plain text? In some of the documentation consulted, I've seen this parameter left as is, perhaps because the objective of the documentation was simply to demonstrate how LDAP authentication is configured, without necessarily addressing security best practices. If you are using LDAP in a production environment, it would be a good idea to evaluate the other options. I may take another look at this, perhaps with a WireShark capture, but my time is not unlimited. 

We then enter the "Base DN" that the LDAP authentication query will target. Most examples I have seen show the domain. You could apparently target a more specific object (an organizational unit perhaps) but all users to authenticate would have to be in that container or a sub-container. We also need to enter an Active Directory account that the NetScaler will use to access (in our case) Active Directory. Enter the account name and password as shown below: 

Note: the use of the "Retrieve Attributes" link can be a challenge. First, based on what I have read in some sources (online forums), it seems that the "retrieval action" actually takes place on the management computer from which you access the NetScaler management interface. Therefore, this computer itself must be able to access Active Directory. If it cannot locate the domain controllers (lack of connectivity or incompatible DNS settings), we will not be able to "retrieve attributes". In my case, after much trial and error, I observed that the attributes in "Other Settings" (see below) had the correct values by default, even without clicking the "Retrieve Attributes" link. In fact, clicking the link caused a second field to appear under each of the fields shown below, but with no other options from which to choose. In the end, I avoided clicking the link on later configuration attempts.

In "Other Settings", select the following values (if they are not already present):

We click on "Close" or "Done" as needed to return to the LDAP Servers tab. Remember to save the NetScaler "Running Configuration" so the changes become permanent.

If we want to test our configuration, we can go to the following section of the management interface, observe the "Status" of the (LDAP) authentication server and also click on "Test":

NetScaler > Authentication > Authentication Servers

Create the authentication policy

In the section shown below, click on the "Policies" tab and then "Add":

NetScaler > System > Authentication > LDAP

We need to do the following:

  1. Provide a name for the policy.
  2. Designate the server we just created (click on the "down arrow" to show the choices).
  3. Select the "ns_true" policy expression (once again, click on the "down arrow" for choices).

Click on "Create".

Bind the authentication policy (globally)

We can bind an authentication policy (or other types of policies) to a particular virtual server or "globally". In this case, we want to create a global binding. So, once the policy is created and we are back on the "Policies" tab of the LDAP page, we click on the "Global Bindings" button:

Select the policy we just created by clicking on the arrow ("greater than" symbol):

Click  "Select":

Then "Bind":

And finally, "Done":

Interaction with Active Directory and testing

In Active Directory, I will create two security groups to which a NetScaler "command policy" will be assigned:

I invented group names that include the name of the command policy (SuperUser, Read-Only). In fact, when we create the matching "system group" on the NetScaler (next step), the group names will have to match exactly, letter for letter (and they are case-sensitive).

Note: as explained in the previous blog post, a "command policy" is essentially a set of permissions that allows the user to execute certain operations on the NetScaler - or simply have read-only access.

Back on the NetScaler, we go to the following section to create the system groups that are associated with their equivalent in Active Directory (click on "Add"):

NetScaler > System > User Administration > Groups

I create a system group with the exact same name as the corresponding group in Active Directory and then click on "Insert" to select a command policy:

For the group "NetScaler_Admins_SuperUser", we logically select "superuser" and then click "Insert":

Click on Create:

Note: I repeat the same process for the group "NetScaler_Admins_Read-Only".
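For reference, the equivalent from the NetScaler CLI would be something like the following (a sketch: the priority value of 100 is an example, and the syntax should be verified against the Citrix documentation for your firmware version):

```
add system group NetScaler_Admins_SuperUser
bind system group NetScaler_Admins_SuperUser -policyName superuser 100
add system group NetScaler_Admins_Read-Only
bind system group NetScaler_Admins_Read-Only -policyName read-only 100
```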

If everything has been configured correctly, that is all we need for the NetScaler to authenticate administrators via Active Directory. Please note that this is for authentication of users who access the NetScaler to manage the NetScaler. We can also use the NetScaler to authenticate users who will only pass through to access network resources, but that is a different subject and a different procedure.

How can we test authentication - and for that matter, user rights as regulated by the command policy?

I will add the user "Alan.Reid" to the "NetScaler_Admins_SuperUser" group in Active Directory:

Note: yes, for this part, I assume the reader has minimal knowledge of Active Directory user and group management. I will not illustrate the procedure step-by-step.

I also add the user "Alex.Heyne" to the "NetScaler_Admins_Read-Only" group in Active Directory (no screenshot).

At the Citrix NetScaler logon page, the user enters their name as configured in Active Directory. Below, it is "alex.heyne" but in other networks, it could be "aheyne":

Alex Heyne can log on (and since there is no local account for him, we know LDAP authentication is working, which validates our configuration above) and... we can see that he does have limited permissions:

This is somewhat strange since Alex Heyne does have the "read-only" command policy and one would think he could at least read the version information. In any case, he cannot make changes (enabling new features, for example).

Next, I'll connect as Alan Reid and we can see immediately, once logged on, that his "superuser" command policy grants him greater permissions (in fact, permissions equal to those of the default administrator account nsroot):

Moreover, he can make changes to the NetScaler configuration (no error message when he clicks OK):

Friday, May 13, 2016

NetScaler VPX - Part 9 (user management)

Upon installation (as a virtual appliance), the NetScaler VPX has a "default" user account named nsroot (with nsroot as the password as well). This account is an administrator account and as such can execute any type of operation on the appliance.

One of the first things we should do, in observation of best security practices, is to change the default password "nsroot". We can do so by clicking on the "Change Password" option:

We may also want to create a user account for each person responsible for managing some aspect of the NetScaler. This is often recommended for accountability: if we audit events on the appliance, we can determine who did what. This is obviously impossible if 10 different administrators log on as "nsroot".

We may also want the different administrators to have different levels of access. Some may need to make changes while read-only access may suffice for others.

We create additional accounts in the Users section.

Click on "Add" and then enter (at minimum) a user name and password.

Click "Continue". On the resulting page, we click on "No System Command Policy"...

We select a "Command Policy" which is essentially a set of permissions that allows the user to execute certain operations on the NetScaler - or simply have read-only access (more on this subject later): 

Then bind the command policy to the user:

When finished, click on "Done". Now we have a second user (with "sysadmin" rights):

For practice, you can log on with the new account and verify that it can accomplish the desired tasks.
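For reference, the same user creation and command policy binding can be sketched from the NetScaler CLI (the user name and priority are examples; verify the syntax for your firmware version):

```
add system user NS_Admin1 <password>
bind system user NS_Admin1 sysadmin 100
```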

As I discovered, a user with sysadmin privileges can enable features, for example, but cannot access (or even view) the user list:

Note: yes, clicking OK allowed me to make the changes (but always remember to click the floppy icon to save the running configuration):

On the other hand, the new user could not even view the list of users:

We would have to grant that user the "superuser" command policy. Although the term "admin" (sysadmin) might imply more authority than "user" (and even "superuser") the sysadmin has limited rights in some sections of the NetScaler, while "superuser" is on par with the nsroot account.

This Citrix document outlines user management and the different types of command policies in particular (see the chart):

Configuring Users, User Groups, and Command Policies

Remember that NetScaler credentials are case-sensitive. If we create a user called NS_Admin1, this will not work (even with the correct password):

The creation of additional users offers some flexibility but works best if we have a single NetScaler or perhaps a pair of NetScalers. But the more NetScalers we have (and the more administrators we have), the more inefficient user creation on individual appliances becomes.

If possible, it would be preferable to manage authentication and authorization with an external database that already contains an account for the various users we would like to designate as NetScaler administrators. Active Directory is one example of such an external database (and the one I will use in the following blog post). Better yet, we could use an Active Directory group, called "NetScaler_Admins" perhaps, and assign a command policy to this group. Subsequently, any user belonging to the group in question would not only be allowed to log on to the NetScaler but would also inherit the rights granted to the Active Directory group of which they are a member.

I will outline the configuration of external authentication (with Active Directory) in my next blog post.