This forum is closed to new posts and responses. Individual names altered for privacy purposes. The information contained in this website is provided for informational purposes only and should not be construed as a forum for customer support requests. Any customer support requests should be directed to the official HCL customer support channels below:

HCL Software Customer Support Portal for U.S. Federal Government clients
HCL Software Customer Support Portal


Feb 11, 2015, 1:40 AM
16 Posts

SSD and servers - any advice?

  • Category: Administration
  • Platform: Windows
  • Release: 9.0.1
  • Role: Administrator
  • Tags: Hardware,SSD,Performance
  • Replies: 7

Hi there,

I have a server struggling a bit with disk IO, so I thought I would head on over and see what the community can advise. I have done the standard steps (transaction logs on a separate drive) and the other basic performance checks, and am now looking for some advice on SSDs and getting the best 'bang for buck'.

The server runs 4 main databases, each in excess of 40Gb in size. Most of this size is due to attachments in a business application. There are approx. 50,000 documents in the database, and attachments are archived on a regular basis. I do get some semaphore errors when there is a lot going on, e.g. updating FT indexes or running a resource-heavy agent, but these are not THAT common. I try to have the agents running out of hours, and have tried to limit the data in the system to what is needed rather than what is preferred.

I am wondering what people might suggest regarding the following SSD layout, and whether or not this would be a good way to go (rough sketch of what I have in mind just after the list).

  • putting the FT index and transaction logs on an SSD (let's call it SSD#1)
  • putting the nsf databases on an SSD (SSD#2)
  • maybe putting the pagefile on SSD#1
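
If I went down that path, I assume the relocation itself would just be a handful of settings along these lines (a rough sketch only - N: is a placeholder for SSD#1, and I understand the transaction log location is normally changed via the server document rather than typed in by hand):

set config FTBasePath=N:\FT (full text indexes on SSD#1)
set config View_Rebuild_Dir=N:\REBUILD (temporary view rebuild files on SSD#1)
TRANSLOG_Path=N:\TXNLOG (the notes.ini entry the server document writes for the transaction log directory)

Happy to be corrected if I have any of that wrong.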

The company has a tight budget. The server is a reasonable build and only 1.5 years old, so I would like to spend whatever budget I can get my hands on in the best possible way. The server runs a Dell PERC H310 RAID controller. We have a small user base (10 users).

Any suggestions?

A

Feb 11, 2015, 6:38 AM
7 Posts
Some other ideas

Have you looked into fragmentation - not just at the disk level, but at the NSF level? Since you're considering SSDs, which would be an expense, I'll point out that there's a commercial product called Defrag.NSF that addresses this very well and might be better bang for the buck than SSDs. There's also an OpenNTF project called DominoDefrag that you could look at while still absorbing the expense of SSDs (though defragging is usually not recommended with SSDs, due to limited benefit and wear concerns). On a semi-related note, you mentioned archiving attachments and also limiting the data in the system; if that includes deleting a lot of documents, you could have a large build-up of deletion stubs, and I've seen that have a significant impact on performance. I presume this is because of the fragmentation that deletions cause, but even without running one of the defrag tools I found that purging deletion stubs does help.

The other idea is DAOS, which you also didn't mention. I don't have much hands-on experience with it myself, so I'll leave it to others to correct me if I'm wrong, but my understanding is that DAOS can reduce I/O and improve performance.

-rich

Feb 11, 2015, 10:31 AM
16 Posts
Thanks

Thanks for your suggestions rich. I did not mention DAOS because there is no duplication of attachments across the application, and DAOS does not (I stand to be corrected) help with the 64Gb limit, because you hit the same problem in the event of having to un-DAOS - the abbreviated reason.

Deletion stubs are a good suggestion too. Deletions are being purged on the standard interval. The application does not have any local replicas, so I will look at reducing this interval.

I have heard of the defrag tool before but had not thought of it.  I'll look into that and see if there are any improvements.

Appreciate the suggestions.

A

Feb 12, 2015, 12:03 PM
6 Posts
DAOS
Using DAOS on those databases (even without duplicate attachments) will not only reduce the size of the databases; it also benefits maintenance tasks and backup times!

Also check the design of the views and the FT index settings for their update frequency (most developers do not care about this). Regarding the hardware/configuration, check whether you can implement some or all of the options mentioned here: http://www.wissel.net/blog/d6plinks/SHWL-7RB3P5.
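
To make the off-hours index maintenance a bit more concrete, something like this is what I mean (just a sketch - mydb.nsf is a placeholder database, and double-check the updall switches for your release):

set config UPDATE_FULLTEXT_THREAD=1 (dedicated thread for full text indexing)
load updall mydb.nsf -f (refreshes the full text index of that one database, e.g. from an off-hours program document)
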
Feb 20, 2015, 1:13 AM
57 Posts
Re: SSD and servers - any advice?

1) It is my understanding that DAOS absolutely removes the 64GB limit with regard to attachments. You can go WELL beyond 64GB for the total logical size so long as the individual NSF stays under 64GB. Supposedly, problems really start to occur with dbs above 32GB.

Also, though you are right that without much duplication you won't see much *overall* savings, you *will* see savings inside the NSF itself by storing all the attachments outside of it.

See this and the section on when to use DAOS: http://www-10.lotus.com/ldd/dominowiki.nsf/dx/daos-best-practices#When+A+Notes+Database+Should+Use+DAOS

It contains very large attachments.
In this case, it may not matter how many other NSF files hold the attachments in question. If they're large enough, the simple step of storing them outside the NSF can make common operations against that database much faster.
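
To make that concrete, once DAOS is enabled on the server the per-database switch is basically a copy-style compact (sketch only - apps\bigapp.nsf is a placeholder path, and read the deployment guide below before touching production):

load compact apps\bigapp.nsf -c -daos on (moves existing attachments out into the DAOS repository)
tell daosmgr status (sanity check on the DAOS catalog afterwards)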

More Reading:
http://blog.nashcom.de/nashcomblog.nsf/dx/maximum-database-size-still-64gb-what-about-daos.htm?opendocument&comments

Deployment guide, with a link to the DAOS estimator that will give an idea of the savings:
http://www-10.lotus.com/ldd/dominowiki.nsf/dx/DAOS_Deployment_Guide

Cautionary tale of a db reaching 64GB:
http://lekkimworld.com/2011/05/18/a_tale_from_a_customer_reaching_and_exceeding_the_64_gb_limit.html

 

2) The PERC H310 is a horrible RAID card for this, sorry to say.  Very light duty, no cache, no battery backup.  Search Spiceworks for  some discussions amongst admins about the H310.  Plenty of people sent back servers with the H310 because it was way below expectations in the RAID department.   Do not use for anything but very basic -- and slow -- RAID setups.  Say, RAID 1 for the OS just to be safe.

Good news: you can upgrade to the H710 controller, which is a full-on RAID setup with cache, battery backup, etc.  I'm running a Dell PowerEdge T320 with the H710 and an 8-drive OBR10 setup (one big RAID 10) for Domino (minus transaction logs) and it is fast.  I've got several 1-million+ doc databases that are right at 10GB in size.

 

3) I'm assuming you have all of your NSFs with the latest Domino ODS and have already set "Compress database design", "Compress document data", and "Use LZ1 compression for attachments".  If not, try that in the interim.  You must do a full copy-style update of the NSFs after making these changes for the effect to be seen.  If you have to update the ODS first, you will have to run two updates (one to get the new ODS and the second after enabling the compression settings).  I saw 25-40% size drops in some databases after enabling these settings a few years ago.
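
Roughly, the sequence I'm describing looks like this (a sketch - apps\bigapp.nsf is a placeholder, and the exact notes.ini/compact switches can vary by release, so check the documentation for 9.0.1):

Create_R9_Databases=1 (notes.ini, so compacted databases pick up the 9.x ODS)
load compact apps\bigapp.nsf -c (first copy-style compact to move to the new ODS)
(then enable the three compression properties in Database Properties)
load compact apps\bigapp.nsf -c -ZU (second copy-style compact; -ZU compresses existing attachments with LZ1)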

---

Best news: a PERC H710, the compression settings and DAOS will be cheaper than and superior to any enterprise SSD solution alone without those changes (do not even consider consumer SSDs for such a thing) -- and you really must do something about the 40GB databases anyway, regardless of what you are storing them on.

Mar 12, 2015, 1:57 AM
16 Posts
Thanks Mark

Thank you Mark for this very detailed response and the recommendations regarding the RAID controller. I am looking into this now.

We have been working to keep the db sizes down as the business grows, however it keeps growing and is very attachment-centric, being in the legal space. We do archive, and we also strip out attachments when they are no longer needed - however they are needed for specific durations as per the business processes. I have previously implemented the other recommendations and also noticed significant improvements. The one reason I did not go with DAOS was that in the event you need to roll it back, you come up against the 64Gb limit and have no way of reversing it for only a specific period. I thought it might be more trouble than it was worth. Will look into it again.
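
For completeness, my understanding of the rollback that worried me is that it boils down to a single command per database (placeholder path again):

load compact apps\bigapp.nsf -c -daos off (pulls all attachments back inside the NSF, which is exactly where the 64Gb ceiling becomes a problem)

so I will weigh that against Mark's points above.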

Mar 19, 2015, 7:09 PM
2 Posts
We might be able to help

Hi Allan,

As Rich mentioned, we might be able to help you here. Send me an email at info@preemptive.com.au and I'll see what we can do.

 

Apr 1, 2015, 7:29 AM
33 Posts
More Speed

I would separate out the file operations that write serial data to disk.

SSD: temporary and non-data files

set config FTBasePath=N:\FT (full text index directory; add virus scanner and backup software exclusions)
set config View_Rebuild_Dir=N:\REBUILD (view rebuild work directory; virus scanner and backup software exclusions)
set config Notes_TempDir=N:\Notes.dat\temp (temporary files; virus scanner and backup software exclusions)
set config LOG=N:\LOG.NSF,1,0,7,15000 (server log; virus scanner and backup software exclusions)
set config Log_DisableTXNLogging=1
set config MailBoxDisableTXNLogging=1
set config UPDATE_FULLTEXT_THREAD=1
set config FTG_USE_SYS_MEMORY=1
set config Updaters=1
set config RouterAllowConcurrentXferToAll=1
set config ServerRunFaster=1 (this one is just for fun)

Transaction log on N:\ (virus scanner exclusion; also exclude from backup software when not using archived transaction logs)
 

Hard disks: RAID, of course.

DAOS on N:\ (virus scanning and backups need to be timed carefully here)

Data on its own disk (virus scanner exclusion; use a Domino-aware virus scanner)

Consider 2 or more mail.box files; even if they are not overloaded, they can become a bottleneck when there are temporary mail issues.
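
If I remember correctly, the mailbox count ends up as a single notes.ini entry (normally set via the configuration document) and takes effect after a restart:

MailBoxes=2 (the router then uses mail1.box and mail2.box)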

 

