Saturday, 24 July 2010

WFE Application Pool Limitations in SharePoint 2010

Note: an updated version of this post with revised recommendations is available here.

As the agenda looked so interesting, I decided to drive to Ullesthorpe (home of Combined Knowledge in the Midlands) on July 22 2010 for the SharePoint User Group UK (#SUGUK). Considering I live in West Berkshire (and drive a 1993 Ford Fiesta, which I am rather fond of), this turned out to be a five-hour round trip. Was it worth it? Hell yes! In addition to two very different - but fantastic - presentations from @mattgroves (big on social, and he led me to believe that SharePoint is in fact an elephant) and @SteveSmithCK (capacity planning extraordinaire), I managed to win myself a 12-month MSDN subscription! Given that the question was basically "who has the rights to deploy a sandboxed solution in SharePoint 2010?", I consider myself very lucky to have won with so many other SharePoint addicts sat in the room with me. I think the fact that most of the other attendees appeared to already have one (I heard someone behind me talking about how she planned to give her "spares" away) helped considerably.

However, whilst I enjoyed both presentations thoroughly, one question from the audience troubled me somewhat: "is there a limit to the number of application pools you can host on each WFE server in SharePoint 2010?" While Steve's answer was "around 100 before you start to run into IIS issues", I couldn't shake a nagging feeling that I had read that the TechNet recommended maximum was 10. In fact, I was so damn sure of that TechNet number (10 app pools / WFE) that I would have bet my brand new MSDN subscription on it.

Now I was faced with a dilemma: did I take Steve Smith's guidance as gospel, or go with Microsoft's recommendation? I did what I figured every sensible SharePoint administrator would do in my situation: I sat down, got back in my box and reminded myself that, on the odd occasion, TechNet tells more fibs than my parents used to on Christmas Eve.

I did, however, make a point of looking up the TechNet article in question this evening. It does indeed list 10 application pools per Web server as a recommended guideline - in this case the "Limit type" column indicates the figure is a supportability number. Now, I know Microsoft supportability numbers, and they are not typically lower than you would expect - I would certainly not expect one to be ten times lower than the guidance provided by a SharePoint legend such as Steve Smith. As a recent example, @harbars kindly informed me that having over 50 Web applications in one MOSS farm was, in his words (tweet), "madness" - despite Microsoft recommending no more than 99 Web applications. I think you know whose guidance I paid attention to, and he doesn't live in Redmond.

To be fair to Microsoft, the article above does also state that the maximum number is largely determined by hardware capabilities and the workload that the farm is serving. Steve also confirmed this at the user group, telling me that the app pool ceiling has been lifted dramatically thanks to the move to a 64-bit-only architecture for SharePoint 2010. Now, I know that 64-bit brings numerous performance and scalability benefits (not least a practically unlimited virtual address space for user-mode processes), but I am still keen to find out why there is such a huge discrepancy between Microsoft's stated supportability figure and the guidance of industry experts who have used the product for the best part of two years.

So, rather than pester Steve again, I did what I often do when I have a nagging capacity-related query: I sent it to a few friendly folk in the SharePoint community who (despite the fact that they are quite clearly very, very busy people) normally find the time to respond to my numerous concerns. Did anyone reply this time? You bet.

@ToddKlindt gave me a pretty clear response by saying that "anything Steve says, I believe". Sounds like good advice, right? He also told me that the documented MS advice was actually pretty good, considering that they suggest the limitation is largely hardware dependent. For those of you who don't know Todd, he is a well-respected SharePoint guru and a co-author of Professional SharePoint 2010 Administration. If you don't have it already and you are a SharePoint 2010 administrator, I can whole-heartedly recommend that you add this one to your collection of geeky paperbacks.

@joeloleson's article entitled "What's New in SharePoint 2010 Capacity Planning" also mirrors the advice kindly provided by Steve and Todd. Joel points out that Microsoft have adjusted some of their capacity planning guidance based simply on the fact that IIS and SQL scale better with 64-bit, and states that the number of application pools "totally depends on the RAM on the box".

So how many application pools can you host on a SharePoint 2010 WFE server? It looks like the usual SharePoint consultancy response applies: "it depends on your hardware's capability and server farm load". There is no magic number beyond the guidance provided by Steve that you may start to run into IIS issues as you approach 100 pools on one server. Microsoft's supportability number of 10 is there as a rough guide (probably for those running a minimal WFE hardware configuration - think 8GB of RAM), but the real ceiling is almost completely dependent on your available hardware. The new SharePoint 2010 Administration Toolkit includes a load testing kit that should help administrators validate their hardware requirements as part of a capacity planning exercise.
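
If you want a rough feel for how close your own WFE is to that ceiling, a couple of PowerShell lines will do it. This is just a quick sketch (it assumes Windows Server 2008/R2 with the IIS WebAdministration module available, and should be run on the Web server itself): the first part counts the application pools IIS is hosting, and the second shows how much memory each worker process is currently using.

Import-Module WebAdministration
# Number of application pools defined in IIS on this server
(Get-ChildItem IIS:\AppPools | Measure-Object).Count
# Memory (working set) per running worker process, in MB
Get-Process w3wp | Select-Object Id, @{Name="MemoryMB"; Expression={[math]::Round($_.WorkingSet64 / 1MB)}}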

I hope that clears things up a little bit for anyone else concerned that they have more than 10 application pools on their SharePoint 2010 WFE servers. I'll see you at the next #SUGUK!

Benjamin Athawes


Tuesday, 20 July 2010

SharePoint disaster recovery on a budget

A post over on the MSDN forums got me thinking about disaster recovery in SharePoint. I thought I would share my thoughts with you, given that availability and disaster recovery are important concepts for anyone hosting (or using) SharePoint. Throughout the article I use information contained in the SharePoint 2010 and 2007 TechNet articles entitled "Plan for availability".

In the forum post, the OP asks for feedback on his SharePoint 2007 DR strategy:
  • The farm consists of two physical DB servers and two physical Web servers. One physical Web server is virtualised, meaning there are three Web servers in total. Note that it is assumed that these servers also host the query and index roles.
  • There is a very limited budget (as is often the case) in that only one server has been approved for DR purposes.
  • No availability (e.g. uptime %) requirements are specified.
  • No capacity (e.g. number of users, RPS) requirements are specified.
  • The OP plans to implement a "stretched farm" over a WAN link. We can assume here that the two data centres are not closely linked, as it is stated that the backup server is "in another part of the world". The DR server would be added to the farm as a Web server, and other roles would be added if required.
Frankly, being given an entire server in a separate data centre for DR purposes is more than a lot of SharePoint administrators could hope for. Although in an ideal world we would all have a completely redundant DR farm in a nearby data centre on hot standby, the OP's scenario is a lot closer to reality for most organisations.

However, I did think of a few potential issues with the OP's suggestion:
  • Stretched farms are only realistic where the data centres are in close proximity and connected by high-speed links - that means less than 1 ms of latency and at least 1 Gbps of bandwidth.
  • The SharePoint configuration and Central Administration content databases contain computer-specific information, meaning that the restored environment must contain the same topology and server roles - which would not be possible on a single server.
  • There is no mention of the other SharePoint infrastructure requirements, including DNS and a user directory, although we will assume for this scenario that the OP has included those services as part of their DR strategy anyway.
  • Aggregate capacity requirements need to be met on the single DR server for the duration of that server's use. This means whatever resources were used on the "live" farm servers need to be available on the standby - among others, this includes CPU cycles, RAM, disk capacity, disk I/O and network bandwidth.

Sample aggregate resource requirements

Note that the aggregate totals displayed here cover only the minimum system requirements and do not include network and disk capacity. Realistically, each server may well have a lot more hardware resources. For example, assuming each server has 8GB of RAM, 100GB of disk capacity and a 3GHz dual-core CPU (not uncommon in modern MOSS 64-bit environments), the requirements are suddenly a lot tougher to meet on a budget: the OP would need a server with 40GB of RAM, 500GB of disk space and 10 CPU cores!
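
If you would rather calculate your own aggregate figures than trust my back-of-an-envelope maths, a short PowerShell sketch along the following lines will total the RAM and logical CPUs across the farm. The server names are placeholders, and it assumes you have remote WMI access to each box (and that the servers are recent enough to report NumberOfLogicalProcessors):

# Placeholder server names - substitute your own farm members
$servers = "WFE1", "WFE2", "WFE3", "SQL1", "SQL2"
$info = $servers | ForEach-Object { Get-WmiObject Win32_ComputerSystem -ComputerName $_ }
"Total RAM (GB): " + [math]::Round(($info | Measure-Object TotalPhysicalMemory -Sum).Sum / 1GB)
"Total logical CPUs: " + ($info | Measure-Object NumberOfLogicalProcessors -Sum).Sum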

From these observations I concluded that:
  • A separate standby farm is required - as we only have one server available this will be a standalone server with all roles on one box.
  • The standby server needs to meet the aggregate capacity requirements provided by the existing servers for the duration of the outage.
The standby type (cold, warm, hot) depends on the OP's availability requirements. These are detailed in Plan for disaster recovery (SharePoint Server 2010). Given that there is clearly a limited budget, one can assume that maintenance and configuration costs need to be kept to a minimum, in which case a hybrid approach seems preferable. Here is one possible approach based on Microsoft's guidance on provisioning a hot standby data centre:
  • Create a separate DR farm and apply all Windows, SQL, WSS and MOSS updates to match the live environment as closely as possible.
  • Deploy all SharePoint customisations to the DR farm.
  • Create and configure all Web applications on the DR farm, then restore and attach all content databases from the live farm (see the example after this list).
  • Test it!
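
To illustrate the content database step above, each database restored from the live farm can be attached to the matching Web application on the DR server using stsadm. The URL, database and server names below are just placeholders - substitute whatever your environment actually uses:

stsadm -o addcontentdb -url http://dr-webapp -databasename WSS_Content_Intranet -databaseserver DRSQLSERVER
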
Going forward, the maintenance required really depends on the required availability. If the business can't cope without SharePoint for more than a few hours, then the OP needs to consider configuring SQL log shipping to ensure the content databases are synchronised, and will need to ensure all live updates are also deployed to the DR server at all times. If, on the other hand, a day's worth of downtime is acceptable, the OP may decide to simply document any live changes and deploy them to the DR server on an "as-needed" basis.

Of course, this is just one possible approach. If you have any suggestions or improvements I'd like to hear them!

Benjamin Athawes


Wednesday, 7 July 2010

Multi-Tenancy in SharePoint 2010 using host-named site collections

Having just received our SharePoint 2010 Evolution Conference DVDs, I couldn't resist browsing through the vast quantity of slide decks provided by Combined Knowledge. Spencer Harbar's deck on Multi-tenancy immediately grabbed my attention given that hosting is a core part of what we do.

Multi-tenancy in the context of SharePoint is Microsoft's term for "hosting". In SharePoint 2007 (MOSS), hosting a SharePoint environment with multiple tenants was a significant challenge. Issues included a limitation on the number of Web applications that could be created, site collection management / isolation problems and URL namespace restrictions.

In SharePoint 2010, Microsoft have to a large extent dealt with the last two issues mentioned above. Creating a large number of Web applications within a single farm still causes resource problems, so the preferred scaling approach for hosting companies is likely to be site collections - especially considering they can now be partitioned and managed by tenants themselves.

Host-named site collections in MOSS were generally considered a no-go. On paper, they did not support managed paths or off-box SSL termination. In practice, they were fraught with issues and administrators generally avoided using them.

So what's changed for host-named site collections in SharePoint 2010? In short, they are fixed. By this, I mean:

  • Managed paths are now supported.
  • Off-box SSL termination is now supported.

This makes it entirely possible to have "vanity URLs" on a per site collection basis. For example (domain name ownership issues aside), you could have www.microsoft.com for one site collection and www.ibm.com on another - within the same Web application.

I should mention at this point that there are a couple of limitations around host-named site collections that you should be aware of: they do not support alternate access mappings and are therefore always considered to be in the default zone. Additionally, host headers should not be applied at the Web application (IIS site) level, as doing so makes it impossible to access host-named site collections (IIS ignores requests that do not match the host binding). Read more on TechNet.

Demo: setting up multiple host-named site collections in one Web app

Prerequisites:

  • A SharePoint 2010 test server (Foundation will do).
  • Access to PowerShell.
  • A Web application that does not have a host header applied at the IIS Web site level.


DNS Entries for vanity URLs

OK, so I don't imagine setting up a couple of DNS entries is going to be tricky for the IT pros out there. However, I will include this step for those who may not be familiar with DNS administration.

  1. Add new forward lookup zones for your vanity URL domains, e.g. "microsoft.com" and "ibm.com".

  2. Within each lookup zone created above, add a new host (A record), e.g. "www", pointing at an IP address for your test server. On my test VPC I just used 127.0.0.1.
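
If you would rather script these DNS entries than click through the MMC, dnscmd should do the job on Windows Server 2008/R2. The zone and host below are simply the examples from above (and, as before, pointing a well-known public domain at your test box is strictly for demo purposes):

dnscmd . /ZoneAdd microsoft.com /Primary /file microsoft.com.dns
dnscmd . /RecordAdd microsoft.com www A 127.0.0.1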

DNS entries for vanity URLs

Create Web application

If you don't have an appropriate test Web application, go ahead and create one. Ensure that you do not apply a host header at this stage.
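
If you prefer to create the Web application from the SharePoint 2010 Management Shell, something along these lines should do it. The name, port and application pool details are placeholders (and it assumes DOMAIN\svc_apppool is already registered as a managed account) - the important bit is that no -HostHeader parameter is supplied:

New-SPWebApplication -Name "Hosting Web App" -Port 80 -ApplicationPool "HostingAppPool" -ApplicationPoolAccount (Get-SPManagedAccount "DOMAIN\svc_apppool")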

Ensure you don't apply a site level host header

Create host-named site collections using PowerShell

Open up the SharePoint 2010 Management Shell and enter the following commands to create a new site collection for each of your desired vanity URLs. In this case I have created a site collection with the host name www.microsoft.com:

New-SPSite http://www.microsoft.com -OwnerAlias DOMAIN\username -HostHeaderWebApplication http://servername
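
To add further vanity URLs to the same Web application, simply repeat the command with a different host name - for example (the owner and server names are, again, placeholders):

New-SPSite http://www.ibm.com -OwnerAlias DOMAIN\username -HostHeaderWebApplication http://servername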

Entering the PowerShell commands; don't forget the dash before HostHeaderWebApplication!

You should now be able to access your new site collections using the vanity URLs you have specified. You have successfully created multiple host-named site collections within one Web application, demonstrating the use of vanity URLs to cater for multi-tenant scenarios in a scalable manner.
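
If you want a quick sanity check from the shell before opening a browser, Get-SPSite will list the new site collections along with their host-named URLs:

Get-SPSite -Limit All | Select-Object Url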

All done - your new host-named site collections should be accessible

Summary

Host-named site collections are vastly improved in SharePoint 2010 when compared to previous versions. This, combined with the other benefits mentioned above (such as improved tenant administration and data partitioning for site collections), provides a compelling case for hosting companies to upgrade. If you are interested in multi-tenancy in SharePoint 2010, I would strongly recommend that you check out Spencer's deck mentioned at the beginning of this post.
