Embedded Experiences allow you to access business-critical actions from other applications without leaving your email. This brings collaboration in-context and results in tighter integration across iNotes, Connections, Notes, application development (XPages), and third-party products and services. The IBM Connections activity stream is accessed from iNotes using OpenSocial gadgets. These gadgets are small components written in XML, HTML, and JavaScript that display custom content. The gadgets are served by Fiesta, IBM's implementation of Apache Shindig, the reference implementation of the OpenSocial specification. The Connections activity stream provides a list of recent, relevant social and integrated business-process activities occurring in your personal network or community.
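For readers new to the gadget format: an OpenSocial gadget is a small XML document whose Content section carries the HTML and JavaScript to be rendered. The markup below is a minimal illustrative sketch, not one of the gadgets used in these tests:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Module>
  <ModulePrefs title="Activity Stream Example">
    <!-- Request the OpenSocial API feature from the container. -->
    <Require feature="opensocial"/>
  </ModulePrefs>
  <Content type="html">
    <![CDATA[
      <div id="content">Embedded Experience content renders here</div>
      <script type="text/javascript">
        // Gadget JavaScript would fetch and render activity
        // stream entries via the container's OpenSocial APIs.
      </script>
    ]]>
  </Content>
</Module>
```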
The measurements shown in this article were gathered using IBM Rational Performance Tester version 8.2 against an IBM Domino 9.0 server running Shindig, IBM Connections 4.0, and an IBM Domino 9.0 mail server. Rational Performance Tester simulated Internet users performing or browsing common Embedded Experience activity stream operations through a single Domino mail server using iNotes. Each user performed the following operations:
1. Login
2. Open iNotes mailbox
3. Open status update notification
    1. Re-post status update
    2. Comment on status update
4. Open forum topic notification
    1. Comment on forum topic
5. Logout (simulated)
Hardware and IBM Domino and Connections server configurations
The mail server hosts 4000 iNotes users with mail files based on the StdR9Mail template. The users are partitioned into 10 Connections communities with 400 users in each community. Each community was seeded with 25 community status updates and 25 forum topic postings. This resulted in each user having 50 Embedded Experience notification emails in their respective mail files.
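The seeding arithmetic above can be verified with a quick sketch (all numbers taken from the description; the variable names are illustrative):

```python
# Test-bed seeding arithmetic from the configuration described above.
total_users = 4000
community_size = 400
communities = total_users // community_size   # 10 communities

seeded_status_updates = 25   # per community
seeded_forum_topics = 25     # per community

# Every community member receives one notification email per seeded item,
# so each mail file starts with 25 + 25 = 50 Embedded Experience
# notifications.
notifications_per_user = seeded_status_updates + seeded_forum_topics

print(communities)             # 10
print(notifications_per_user)  # 50
```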
Table 1. Shindig server configuration
Server version | IBM Domino 9.0
Hardware | VM running on VMware ESX 4.1
Processors / speed | 4 vCPUs / 2.26 GHz
Memory | 8 GB
Active logical volumes | 1.5 TB x 1 (XIV storage)
Operating system | Microsoft Windows Server 2008 R2, 64-bit
Table 2. Mail server configuration
Server version | IBM Domino 9.0
Model | Intel Xeon E5630
Processors / speed | 4 cores / 2.53 GHz (8 threads)
Memory | 96 GB
Active logical drives | 4 LUNs x 2 TB (1000 mail files per LUN), IBM EXP 3512 RAID 5 (24 x 450 GB disk drives)
Operating system | Microsoft Windows Server 2008 R2, 64-bit
Table 3. Connections application server configuration
Server version | IBM WebSphere Application Server Network Deployment 7.0.0.25 and IBM Connections 4.0
Model | Intel Xeon E5620
Processors / speed | 4 cores / 2.4 GHz x 2 processors (16 threads)
Memory | 48 GB
Active physical drives | 850 GB single drive
Operating system | Microsoft Windows Server 2008 R2, 64-bit
Table 4. DB2 server configuration (for Connections)
Server version | DB2 Enterprise Server Edition 9.7.6 (DB2/NT64)
Model | Intel Xeon X5560
Processors / speed | 4 cores / 2.8 GHz x 2 processors (16 threads)
Memory | 32 GB
Active logical drives | 11 LUNs x 50 GB (10-disk RAID 10 array)
Operating system | Microsoft Windows Server 2008 R2, 64-bit
Table 5. Edge proxy server configuration
Server version | IBM WebSphere Edge Components Caching Proxy 7.0, fix level 7.0.0.4
Model | Intel Xeon E5645
Processors / speed | 6 cores / 2.4 GHz x 2 processors (24 threads)
Memory | 48 GB
Active physical drives | 550 GB single drive
Operating system | Microsoft Windows Server 2008 R2, 64-bit
Table 6. Notes.ini settings in addition to those commonly used for these tests
Shindig server | HTTPJVMMaxHeapSize=2048M, HTTPJVMMaxHeapSizeSet=1
Performance test results
The full workload included reading Embedded Experience notifications as well as posting responses. User loads were incremented in multiples of the community size. When a widget response is posted, such as a comment on a status notification, a new notification is sent to every user in that community; with 400-member communities, a response from one user triggers notifications to the other 399 members. This places a heavy SMTP mail routing load on the mail server. User loads that are not even multiples of the community size would incur the response load of a partial community without a corresponding increase in the transaction rate: a load of 450 users spanning two communities would add only 50 users' worth of transactions but would incur the SMTP routing load of the entire 400-user second community. Figure 1 shows the transaction rate and response times for the full workload; response times are well below 1 second for all runs. The limiting factor for this workload was mail server CPU utilization due to SMTP routing: as the load increased, the notification rate exceeded the capacity of the mail server. Figure 2 shows the CPU utilization for the servers in this environment.
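The fan-out described above is easy to quantify. The helper below is a hypothetical sketch of the arithmetic, not part of the test harness:

```python
# SMTP fan-out arithmetic for the 400-member communities described above.
COMMUNITY_SIZE = 400

def notifications_per_response(community_size=COMMUNITY_SIZE):
    """Each posted response is mailed to every other member of the community."""
    return community_size - 1

# One comment or re-post in a 400-user community triggers 399 notification
# mails, regardless of how many of that community's members are active in
# the test -- which is why a 450-user load spanning two communities adds
# only 50 users' worth of transactions but incurs the full SMTP routing
# cost of the second community.
print(notifications_per_response())  # 399
```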
Figure 1. Full workload transaction rate and response time
Figure 2. Full workload CPU utilization
Given the mail server bottleneck, a second series of tests was run with the response operations disabled. This rendering workload only opened the status and forum topic notifications without posting responses, so the user load increments were not bound by the community size. Figure 3 shows the transaction rate and response times for the rendering workload, and Figure 4 shows the CPU utilization for the servers in this environment. With a 4000 user load, rendering response times remained stable, with Shindig and mail server CPU utilization under 35%. The mail server was not a bottleneck in these rendering tests because there was no SMTP routing load.
Figure 3. Rendering transaction rate and response time
Figure 4. Rendering CPU utilization
Conclusion
Shindig performance was tested with two workloads: a rendering workload that performed only embedded notification reads, and a full workload that also posted responses. Shindig rendering performance was maintained with sub-second response times at 140 TPS under a 4000 user load; because it performs notification reads only, the rendering workload represents best-case performance. Full workload performance, including the sending of responses, was limited to 90 TPS with sub-second response times under a 2400 user load. The limiting factors in the full workload tests were the mail server configuration and the choice of community size: smaller communities would have had less impact on the mail server, and because all mail users were served from a single server, SMTP routing became a bottleneck. Splitting the users across multiple mail servers could lessen the routing load and allow the user load to be increased further.