IBM System Verification Test for Windows 2008 64bit Server with Domino 8.5.1
October, 2009
1 Overview
The objective of the IBM System Verification Test (SVT) is to execute a set of test scenarios against a test configuration that contains the key requirements and components, creating load on the Windows 2008 64-bit machines. This testing was performed using test scripts currently used by the Domino SVT team. The system/performance testing can be used as a model for the capacity, configuration, and cost of refreshing the Domino server infrastructure. The initial testing leverages the IBM test lab and resources to provide a real-world understanding of the performance and scalability of Lotus Domino Server 8.5.1.
One's perception of system quality is governed by overall system reliability. A widely accepted definition of software reliability is the probability that a computer system performs its intended function without failure over a specified time period within a particular execution environment. This execution environment is known formally as the operational profile, which is defined in terms of the sets of possible input values together with their probabilities of occurrence. An operational profile is used to drive a portion of the system testing. Software reliability modelling is then applied to data gathered during this phase of testing and used to predict subsequent failure behaviour during actual system operation.
A reliability test is one that focuses on the extent to which the feature or system provides its intended function without failing. The goal of this testing is to improve the reliability program by making specific, measurable statements about reliability. Reliability reflects the impact of failures, malfunctions, errors, and other defect-related problems encountered by customers; it is a measure of the continuous delivery of correct service (and of the time to failure).
The purpose of SVT's reliability tests was to ascertain the following:
· Data population for all parts of the infrastructure to force set limits to be achieved and passed
· Running sustained reliability scripts at >100% of maximum capacity, assessing:
· Breakpoints
· System stability pre and post breakpoint
· Serviceability
· Forcing spikes and anti-spikes in usage patterns
· Exposing SMTP, IMAP, POP3 services to 110% of their maximum load
· Filling the database table spaces to their maximum, proving the ability to recover and return to a good state when the maximum limits have been exceeded
· Proving that serviceability errors and warnings are raised when thresholds are hit
2 Evaluation Strategy
The following section outlines the test environment and strategy to evaluate Domino 8.5.1.
2.1 Test Environment
We utilized the Windows 2008 64-bit server configuration described below.
The environment consists of two Windows 2008 64bit servers with 2 Domino partition servers running on each physical machine. Each partition server hosted up to 500 active users.
The hardware on the two servers was identical, except that SERVER2 had half the physical memory installed (4GB); SERVER1 had 8GB installed.
DAOS was enabled for host SERVER1 but not for SERVER2. Transaction logging was enabled on both servers.
The design task was set to run nightly using the default settings.
2.2 Evaluation Criteria
The performance of Domino 8.5.1 is evaluated under the following criteria:
- Server CPU: The overall CPU utilization of the server will be monitored over the course of the test. The aim is for the server CPU to remain below 75%, allowing the server to function appropriately. It is acceptable for the CPU to occasionally spike above this level for a short period of time, but it must return to below 75%. High CPU can result from server processes such as compact, fixup, or replication, from user load, or from other third-party programs.
- Domino Processes CPU: While the previous metric monitors the overall CPU of the server, the CPU consumption of Domino-specific processes will also be monitored individually, so that each process's consumption can be evaluated on its own.
- Server Memory: The server memory metric represents the amount of physical memory available on the server. If the available memory becomes low the server performance could be compromised.
- Server Disk I/O: The disk is a source of contention when a server is under load and performing a high number of read/write operations. The disk queue length is measured to determine if the disk I/O operations are resulting in a bottleneck for the system performance.
- Network I/O: These metrics monitor the network utilization to ensure the bandwidth consumption is acceptable and that the network is not overloaded.
- Response Times from the End-user Perspective: The server response times for user actions represent how long a single user must wait for a given transaction to complete. This metric captures the user experience of the application with the server. At times, response times will be longer when a server is under load. When response times increase over an extended period, or persist at high levels (e.g. when a database or view takes longer than 30 seconds to open), they indicate a performance problem; detailed analysis must then be performed to determine the source of the slowdown and seek remediation.
- Open Session Response Times: In addition to monitoring the individual action response times, the Open session response times will also be evaluated in order to ensure the server remains responsive over the course of the tests.
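The 75% CPU criterion above, including its allowance for short-lived spikes, can be expressed as a simple check over a series of utilization samples. The sketch below is illustrative only (it is not part of the SVT tooling); the three-sample spike tolerance is an assumed value chosen for the example.

```python
def cpu_criterion_met(samples, threshold=75.0, max_spike_len=3):
    """Return True if CPU stays below `threshold`, tolerating spikes of at
    most `max_spike_len` consecutive samples above it."""
    run = 0  # length of the current run of samples above the threshold
    for s in samples:
        if s > threshold:
            run += 1
            if run > max_spike_len:
                return False  # sustained high CPU: criterion violated
        else:
            run = 0  # CPU returned below the threshold; reset the run
    return True

print(cpu_criterion_met([40, 60, 80, 82, 55, 70]))  # brief spike: True
print(cpu_criterion_met([40, 90, 91, 92, 93, 50]))  # sustained: False
```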
2.3 Tools
In order to simulate user activity and capture the evaluation metrics discussed in section 2.2, a number of tools must be used.
- Server.Load: The Server.Load tool is the IBM Lotus Domino load-generation tool, which can be used to measure and characterize various Lotus Domino server capacity and response metrics under load. The load is generated by running workloads that simulate the behavior of Lotus Domino client-to-server operations. The workloads enable simulating consistent, repeatable load against the Lotus Domino server. Server.Load additionally captures action response times, as discussed in section 2.2, which may be recorded and analyzed.
- Domino showstats data: Domino showstats captures important server metrics. A Server.Load client driver may be used to execute the showstats console command at regular intervals for each server in the configuration, providing Domino-specific data. The resulting data is logged in a text file and may be graphed for analysis.
- Open session: The Open session tool measures mail file request/response times. It opens a view of a mail database at a set time interval and records the response time in milliseconds. As a result, a server slowdown may be identified by analyzing the recorded response times.
- Windows Perfmon: This tool comes as part of the Windows operating system and allows performance data to be captured and graphed as required.
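The showstats output captured by the client driver is plain text, so turning it into graphable data is a matter of simple parsing. The sketch below is a minimal illustration, assuming lines of the general "Statistic.Name = value" shape that Domino console statistics use; the exact statistic names vary by release and are not asserted here.

```python
def parse_showstats(text):
    """Parse captured `show stat` console output into a dict of
    statistic-name -> value strings, skipping non-statistic lines."""
    stats = {}
    for line in text.splitlines():
        if "=" not in line:
            continue  # banner/blank lines carry no statistic
        name, _, value = line.partition("=")
        stats[name.strip()] = value.strip()
    return stats

# Sample output shaped like Domino console statistics (values invented):
sample = """Server.Users = 500
Mail.Waiting = 0
Mem.Availability = Plentiful"""
print(parse_showstats(sample)["Server.Users"])
```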
2.4 Evaluation Setup
The Server.Load tool was used to place load on the Domino server. In order to simulate realistic load on the Domino server, a total of 4 client drivers running Server.Load were used. The test was run over a 7-day period.
Both NRPC and HTTP users were active for 24 hours a day, 7 days a week for this test so the system is “stressed” more than for a typical working day alone. This also means the system will be performing its “housekeeping” activities with an active user load.
In order to isolate the performance of the Domino server under load from a single user's perspective for Notes mail, a client driver executed a “single user” Server.Load script with the OpenSession tool. The results represent a single user's experience of how the application performs at busy times of the day when the server is heavily loaded.
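The measurement idea behind the single-user run is simply to time an operation at a fixed interval and record the elapsed milliseconds. The sketch below illustrates that idea only; `open_mail_view` is a hypothetical placeholder, not a real Domino or Server.Load API call.

```python
import time

def timed_ms(operation):
    """Run `operation` once and return its elapsed wall time in ms."""
    start = time.perf_counter()
    operation()
    return (time.perf_counter() - start) * 1000.0

def open_mail_view():
    # Placeholder for the real open-view request issued against the server.
    time.sleep(0.01)

ms = timed_ms(open_mail_view)
print(f"open view took {ms:.1f} ms")
```

In the actual test, such samples are collected at a set interval over the whole run, so a sustained rise in the recorded times flags a server slowdown.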
3 Scenario: Online Mode
The scenario evaluates the performance of Lotus Notes Clients in online mode. Online mode means that the user mail files are stored and maintained on the Domino server. Every time a user performs an action the request is sent to the server and the mail file is modified and updated on the server side.
N85Mail Script (NRPC) |
Workload Actions | Action count per user per hour (current script) | Action count per user per 8-hour day (current script) |
Refresh inbox | 4 | 32 |
Read Message | 20 | 160 |
Reply to all | 2 | 16 |
Send Message to one recipient | 4 | 32 |
Send Message to three recipients | 2 | 16 |
Create appointment | 0.166 | 1.32 |
Send Invitation | 0.167 | 1.34 |
Send RSVP | 0.167 | 1.34 |
Move to folder | 4 | 32 |
New Mail poll | 4 | 32 |
Delete two documents | 4 | 32 |
Total Messages sent | 8.5 | 68 |
Total Messages sent with attachment (att. size = 50 kb) (10%) | 0.83 | 6.65 |
Total Messages sent with attachment (att. size = 10 mb) (0.5%) | 0.042 | 0.33 |
Average Message size | 100 kb | 100 kb |
Total Transactions | 44.50 | 356 |
Table 1
Message Distribution in N85Mail Script |
Message size distribution | Percent of messages sent | Attachment size ( if any ) |
0 < size <= 1k | 32.0% | N/A |
1k < size <= 10k | 3.6% | N/A |
10k < size <= 100k | 57.0% | 50 KB |
100k < size <= 1mb | 6.8% | N/A |
1mb < size <= 10mb | 0.4% | 10 MB |
average message size = 100 kb |
Table 2
Table 1 shows the workload of the N85Mail script. The script reflects the average workload that is expected to be performed by a single user over the course of a working day. The resulting mail distribution is shown in table 2.
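The "Total Transactions" row in Table 1 follows directly from the per-action hourly counts; a quick arithmetic check (all values copied from the table):

```python
# Per-user hourly action counts from Table 1 (N85Mail script).
hourly_actions = {
    "Refresh inbox": 4, "Read message": 20, "Reply to all": 2,
    "Send to one recipient": 4, "Send to three recipients": 2,
    "Create appointment": 0.166, "Send invitation": 0.167,
    "Send RSVP": 0.167, "Move to folder": 4, "New mail poll": 4,
    "Delete two documents": 4,
}
total = sum(hourly_actions.values())
print(round(total, 2))  # 44.5 transactions per user per hour, as in Table 1
```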
N85DWA Script (HTTP) |
Workload Actions | Action count per user per hour (current script) | Action count per user per 24-hour day (current script) |
Refresh inbox | 4 | 96 |
Read Message | 20 | 480 |
Reply to one message | 4 | 96 |
Send Message to one recipient | 4 | 96 |
Send Message to three recipients | 4 | 96 |
Create appointment | 4 | 96 |
Send Invitation | 4 | 96 |
Send RSVP | 4 | 96 |
Move to folder | 4 | 96 |
New Mail poll | 12 | 288 |
Delete two documents | 4 | 96 |
Total Messages sent | 20 | 480 |
Total Transactions | 68 | 1632 |
Table 3
Table 3 shows the action workload of the built-in N85DWA script with modifications to the attachment size. The script reflects the workload expected of a single user over the course of a day.
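The Table 3 totals likewise follow from the per-action counts; a quick arithmetic check (values copied from the table):

```python
# Per-user hourly action counts from Table 3 (N85DWA script), in row order:
# Refresh, Read, Reply, Send x1, Send x3, Appt, Invite, RSVP, Move, Poll, Delete.
hourly = [4, 20, 4, 4, 4, 4, 4, 4, 4, 12, 4]

transactions_per_hour = sum(hourly)
print(transactions_per_hour)       # 68, matching "Total Transactions"
print(transactions_per_hour * 24)  # 1632 over the 24-hour day
```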
Message Distribution in N85DWA Script |
Message size distribution | Percent of messages sent | Attachment size ( if any ) |
0 < size <= 1k | 7.8% | N/A |
1k < size <= 10k | 60% | N/A |
10k < size <= 100k | 30% | 50 KB |
100k < size <= 1mb | 2% | N/A |
1mb < size <= 10mb | 0.2% | 10 MB |
Table 4
The resulting mail distribution is shown in table 4.
4 Conclusions and Summary
The test results demonstrate that each IBM System x3650, configured as described in this report, can support 500 concurrent Notes 8.5.1 NRPC users and 200 active HTTP (iNotes) users with average response times well below 1 second.
These results are based on running each x3650 system as a dedicated Domino server in a two-Domino-partition configuration. The addition of other application workloads will affect the number of users supported as well as the response time. Achieving optimum performance in a customer environment is highly dependent upon selecting adequate processor power, memory, and disk storage, as well as balancing the configuration of that hardware and appropriately tuning the operating system and Domino software.
Appendix A: Overall Test Setup and Software Versions
Number of Client Systems
For the Notes 8.5.1 online mode test, 6 driver systems were used: 4 were configured as load drivers (2 ran 500 NRPC users and 2 ran 200 DWA users), one ran the OpenSession and ShowStats tools, and one ran the Domino Administration client.
The configuration used for the driver systems was as follows:
Load driver/showstat & OpenSession machines:
- Intel ® Xeon ™ CPUs, 3.60GHz with 2GB memory
- C: Partition (80GB - NTFS) - Microsoft Windows XP Professional SP2 and Lotus Notes 8.5.1 Gold client
Number of Server Systems
Two IBM System x3650 systems, each with a dual-core 3 GHz Intel Xeon processor and 8GB/4GB of memory, hosted the Domino mail partitions.
The disk configuration used for the system under test follows:
- 1 x C: drive (OS/Swap/Program files)
- 1 x D: drive (1st Domino Partition Data Directory)
- 1 x E: drive (2nd Domino Partition Data Directory)
- 1 x F: and G: drives - 2 partitions on a single disk drive (transaction logs for both Domino servers)
Software Versions:
Software versions used on the system under test were as follows:
- Microsoft Windows Server 2008 (64bit) SP2
- Lotus Domino 8.5.1 Gold build Windows 64bit
Software versions used on the client driver machines were as follows:
- Microsoft Windows XP Professional Version 2002 SP2
- Lotus Notes 8.5.1 Gold client and Domino Administration 8.5.1 client for Microsoft Windows XP Professional SP2
Appendix B: System Configurations
System Under Test:
System | 2 x IBM System x3650 (with 2 Domino partitions each) |
Processor | One Intel® XEON® CPU 5160 @ 3.00 GHz (dual core) per physical server |
Memory | 8GB – DPAR1/DPAR2; 4GB – DPAR3/DPAR4 |
Model of Machine | |
Disk Drive | 4 x 300GB 10K RPM internal local disk drives |
Operating System | Microsoft Windows Server 2008 (64bit) SP2 |
Domino Server | Lotus Domino 8.5.1 Gold Build for Windows Server 64bit |
NOTES.INI settings (identical for DPAR 1 through DPAR 4)
ServerTasks=Replica,Router,Update,AMgr,Adminp,Sched,CalConn,RnRMgr,HTTP,IMAP,LDAP,POP3
NSF_BUFFER_POOL_SIZE_MB=750
ConstrainedSHMsizeMB=2048
Create_R85_Databases=1