
Tuesday, 20 October 2009

Security: Preparing your SharePoint farm for a corporate security review (part 2 - Auditing)

The words "security review" in the context of IT can mean different things to different organisations. For large organisations seeking to outsource an IT function to a third party hosting provider, it may be simply a necessary formality; a series of hoops that must be jumped through in order to reassure users that the third party solution is secure. "Secure" in this context typically means that the risk to the service's availability, integrity and confidentiality is no greater than is usual for solutions of a similar nature.


However, for smaller organisations with fewer resources, it can sometimes result in a mixture of fear and anxiety due to the consequences of failing said review. After all, who would trust their sensitive data to a hosting provider that has not been found worthy of a security stamp of approval?

The concerns raised over a security review can be minimised by ensuring your organisation has prepared itself effectively. In preparing ourselves, we typically start by reviewing a list of "frequently asked questions" that most organisations send us as part of their security review. This week's category is Auditing (part one, on Access Controls, is available here):

Please note that the views expressed herein are my personal views and are not intended to reflect the views of my employer at the time of writing.

2. Auditing
  • Q. Will common actions (e.g. accessing and editing documents) be audited?
  • A. Clients like the ability to audit access to their information - enable auditing (IIS, SharePoint, Web proxy logs) while ensuring the impact on performance is minimal.
This sounds like an obvious one, but in my experience it is often missed when deploying SharePoint sites. MOSS itself has a number of different auditing options and, although to my knowledge there are no built-in retention settings, they do provide a fairly detailed means of auditing actions within SharePoint. A range of different events can be audited, from basic actions such as opening documents to slightly more obscure events such as searching site content. I won't go into any detail here about how to configure these settings in MOSS, as Bill English has already provided a nice post on his blog.

I have heard rumours that SharePoint 2010 will include retention settings as well as the current settings provided by MOSS - please let me know if you know somewhere that confirms this.

Aside from SharePoint, I would also recommend you enable IIS logging (log to somewhere other than the OS drive so that growing logs cannot fill the system partition) and Web proxy logging (e.g. ISA Server) where applicable. In our case, we set ISA Server to log to file rather than a SQL database for consistency with the IIS logs.
  • Q. How, and how often are the log files checked?
  • A. Weekly is typically sufficient but this may need to be more frequent depending on the nature of your service. In particular, check SharePoint access and AD audit logs.
Manually checking logs for security breaches is a necessary (albeit dull) part of maintaining a secure hosted service. A diligent server administrator should implement a range of processes, appropriate to the nature of the service offering, to minimise the chances of a security breach going unnoticed. In the context of a SharePoint site, I would recommend that access logs (that is, SharePoint audit logs and IIS logs) are compared against your ACLs. For IIS, I would recommend you use a tool such as Log Parser; for SharePoint, I would start with the built-in site collection usage reports and analyse the more detailed audit reports on a less frequent (e.g. fortnightly) basis.

You should also consider checking your AD logs (Windows "Security" logs on your domain controllers) on a regular basis as a part of your audit process.
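To illustrate the kind of comparison Log Parser automates, here is a minimal Python sketch that flags requests in a W3C-format IIS log made by users who are not on an approved access list. The field names come from the standard W3C extended log format; the sample log lines and the approved-user set are invented for the example.

```python
def find_unapproved_access(log_lines, approved_users):
    """Return (username, url) pairs for requests made by users not on
    the approved list. Expects W3C extended log format, where a
    '#Fields:' directive names the columns that follow."""
    fields = []
    hits = []
    for line in log_lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]      # column names follow the directive
            continue
        if line.startswith("#") or not line.strip():
            continue                       # skip other directives and blank lines
        values = dict(zip(fields, line.split()))
        user = values.get("cs-username", "-")
        if user != "-" and user not in approved_users:
            hits.append((user, values.get("cs-uri-stem", "")))
    return hits

# Invented sample data: one approved user, plus a request from an
# account that is not on the site's ACL.
sample_log = [
    "#Fields: date time cs-username cs-uri-stem sc-status",
    "2009-10-20 09:15:01 CONTOSO\\alice /sites/finance/default.aspx 200",
    "2009-10-20 09:16:44 CONTOSO\\mallory /sites/finance/budget.xlsx 200",
]
flagged = find_unapproved_access(sample_log, {"CONTOSO\\alice"})
```

Anything that ends up in `flagged` warrants a closer look in the more detailed SharePoint audit reports for that user.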
  • Q. Are server log files read only?
  • A. Set all log files to read only to improve log credibility. Consider moving old logs to another server to prevent tampering.
Setting log files to be read only is an essential step toward maintaining credible log files.

Obviously someone in the organisation must retain write access to the logs, but this risk can be mitigated by implementing segregation of duties (SoD) and preventing those individuals from having write access to the relevant sites (as opposed to the site logs) in question. In smaller organisations, it may be easier to simply grant write access only to a very small number of individuals - perhaps those in the upper echelons of management (I'll let you decide whether that's a good idea).
Mark Burnett wrote a good post on this back in 2002 (here) that I recommend you refer to for a more detailed explanation.
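As a sketch of what "read only" means in practice, the following Python snippet clears the write bits on closed log files (on Windows, this sets the read-only file attribute). The directory layout is invented for the demonstration; note that this is a deterrent rather than protection - anyone with permission to change attributes can undo it, which is exactly why the off-server copy discussed below matters.

```python
import os
import stat
import tempfile

def make_logs_read_only(log_dir, extensions=(".log",)):
    """Clear the write bits on each closed log file so that casual
    tampering fails. On Windows, os.chmod with S_IREAD sets the
    file's read-only attribute."""
    changed = []
    for name in os.listdir(log_dir):
        if name.lower().endswith(extensions):
            path = os.path.join(log_dir, name)
            os.chmod(path, stat.S_IREAD)   # read-only
            changed.append(name)
    return sorted(changed)

# Demonstration against a throwaway directory containing one log
# file and one unrelated file.
demo_dir = tempfile.mkdtemp()
for name in ("u_ex091020.log", "notes.txt"):
    open(os.path.join(demo_dir, name), "w").close()
locked = make_logs_read_only(demo_dir)
```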
  • Q. Are failure and success logon attempts logged?
  • A. Consider enabling failure and success auditing based on your disk space and log retention requirements.
This seems to be another common query. In preparing for a corporate security review I would recommend you review your current log space requirements and (if you haven't already) consider enabling both failure and success auditing. I mention disk space because enabling failure and success auditing can rapidly fill a log for a large site and may not be feasible where storage is limited. This risk is compounded by the possibility of a DoS attack, which would generate a large number of failure audits in quick succession (see this MS article for more information).

A workaround for this problem is to enable log overwrites and limit the size of the event log based on its current size and your log retention requirements. For financial systems (e.g. banks) this option is usually not viable, given that there are typically legal requirements to retain logs for a fixed duration. However, for Web applications of a less sensitive nature (that are not subject to the same legal restrictions) you may want to consider this as an option.
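A back-of-the-envelope calculation helps when choosing the maximum log size. The sketch below, with illustrative numbers rather than a recommendation, estimates the event log size needed before overwrites would discard entries you still want:

```python
def required_log_size_mb(events_per_day, avg_event_bytes, retention_days):
    """Estimate the maximum event log size (in MB) needed to hold
    retention_days of events before the oldest are overwritten."""
    total_bytes = events_per_day * avg_event_bytes * retention_days
    return total_bytes / (1024 * 1024)

# e.g. ~20,000 success/failure logon events a day at ~500 bytes each,
# retained for a 30-day window:
size_mb = required_log_size_mb(20000, 500, 30)
```

Remember that a DoS attack can multiply the failure rate many times over, so build in generous headroom on top of whatever this estimate suggests.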
  • Q. Are logs backed up?
  • A. Ensure all logs are backed up and secured in a separate location to your Web server(s).
Again, this is something I typically get asked by financial organisations due to their understandably stringent security requirements. Whether your clients have asked for this or not, however, I would recommend you implement a log backup process that meets your retention requirements. If logging to a file (e.g. IIS), this can easily be achieved by adding the log directory to your regular NTBackup routine.

When backing up logs, ensure that the logs are shipped to a location that is physically separate from your Web servers. This is useful from both a log integrity and DR perspective - if a security breach occurs on your Web server and there is a risk that the logs have somehow been tampered with (have you set them to be read only?), you have a second copy for verification. If your Web servers suffer a hardware failure, you still have log backups to meet any legal requirements that may be imposed on your service.
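Where NTBackup is not an option, even a simple scheduled copy job achieves the same end. Below is a Python sketch that ships closed log files (those untouched for at least a day) to a backup directory, which in production would be a share on a physically separate server; the paths, filenames and ages here are invented for the demonstration.

```python
import os
import shutil
import tempfile
import time

def ship_logs(log_dir, backup_dir, min_age_seconds=86400):
    """Copy log files that have not been modified for at least
    min_age_seconds to a backup location, preserving timestamps
    (copy2) so the copies remain credible as evidence."""
    os.makedirs(backup_dir, exist_ok=True)
    cutoff = time.time() - min_age_seconds
    shipped = []
    for name in sorted(os.listdir(log_dir)):
        src = os.path.join(log_dir, name)
        if os.path.isfile(src) and os.path.getmtime(src) < cutoff:
            shutil.copy2(src, os.path.join(backup_dir, name))
            shipped.append(name)
    return shipped

# Demonstration: one closed (backdated) log and one still in use.
src_dir, dst_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
old_log = os.path.join(src_dir, "u_ex091019.log")
open(old_log, "w").close()
os.utime(old_log, (time.time() - 172800,) * 2)   # backdate two days
open(os.path.join(src_dir, "u_ex091020.log"), "w").close()
shipped = ship_logs(src_dir, dst_dir)
```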


Tuesday, 13 October 2009

Security: Preparing your MOSS farm for a corporate security review (part 1 of 3)

I would like to begin this post by thanking those that have recently subscribed to this blog via RSS. I always have a hard time deciding whether a blog should consume valuable space within my Google RSS reader, so I greatly appreciate the thought.


This post is primarily intended for those hosting or planning to host secure MOSS Web applications, although these points could also be applied to most Web applications that serve sensitive content over the security minefield that is the Internet.

Having recently been through numerous corporate security reviews with potential clients, I thought it would be interesting to share some of the common queries I am asked, along with an explanation and an indication of what constitutes an "acceptable" answer in terms of security. I imagine a lot of MOSS system administrators who host secure, Internet-facing Web applications have been through the numerous security-related hoops described below (including, in most cases, a "PenTest" - I will cover preparation for that in another post). I thought it would be useful to document some of the most common queries.

Please note that the views expressed herein are my personal views and are not intended to reflect the views of my employer at the time of writing.

Without further ado:

1. Access Controls
  • Q: How many different access levels are there for your application?
  • A: Two levels of access control are normally sufficient.
This one depends on the nature of your application. In most cases, an externally accessible MOSS Web application will contain multiple access levels. In addition, where said MOSS Web application is hosted by a third party, clients may not have access to carry out potentially dangerous functions that only an "administrator" should be able to perform. Typically, most users will have either "read" or "contribute" access, whereas power users might have "design", or even "full control", for some sites.

Security conscious organisations look for multiple access levels to allow for segregation of duties (SoD). In my limited experience, the larger the organisation, the tighter the desired level of access control and, in MOSS terms, the more users with "read" access. Having two clearly defined access levels is normally sufficient to satisfy most clients' requirements, but expect to implement more if your client(s) wish to implement access controls that mirror their own organisational structure.
  • Q: Are users configured with individual usernames and passwords?
  • A: A must have for secure sites with auditing.
Pretty simple - in a secure environment where auditing is required, individual logons are a necessity rather than a luxury and your clients should demand it.
  • Q: Is a password policy enforced, including forced password changes?
  • A: Enforce a strong password policy; forced password changes are recommended.
We are forever told that choosing a secure password is a must when using sensitive applications - especially those hosted over the Internet. However, we as users can't be relied upon to choose a password any more secure than "chocolate123" or "admin1982". This seems to be a pet hate of large corporations, and rightly so. Enforce a secure password policy - at a minimum, add some page-level validation; preferably, enforce the complexity requirement within your directory service (e.g. Active Directory).

Security conscious organisations often look for forced password changes. However, this is not always required, and the requirement does depend on the sensitivity of the application along with any mitigating factors (e.g. a strong password policy and account lockouts, see below). In my experience, it is not essential.
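For the page-level validation mentioned above, a check along these lines is a reasonable starting point. The rule here - a minimum length plus three of four character classes - mirrors a typical AD-style complexity policy, but the exact thresholds are an assumption you should align with your own policy:

```python
import re

def meets_complexity_policy(password, min_length=8):
    """Check a password against an AD-style complexity rule:
    a minimum length plus at least three of the four character
    classes (upper, lower, digit, symbol)."""
    if len(password) < min_length:
        return False
    classes = [
        re.search(r"[A-Z]", password),         # upper case
        re.search(r"[a-z]", password),         # lower case
        re.search(r"[0-9]", password),         # digits
        re.search(r"[^A-Za-z0-9]", password),  # symbols
    ]
    return sum(1 for match in classes if match) >= 3
```

On this rule, "chocolate123" fails (only two character classes) while a password mixing case, digits and a symbol passes.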
  • Q: Does the application support account lockouts?
  • A: Enforce account lockouts with a realistic threshold.
Automated account lockouts are a basic but effective measure against brute force attacks, and a mitigating factor where password changes are not forced. I would go as far as to say that it is an essential feature for applications that contain sensitive data - and so will your clients.

With regard to the threshold, many organisations will try to stick with their own internal account lockout policy. Commonly requested thresholds based on my experience are between 3 and 5, so I would recommend aiming for something around that area. For applications that do not require such a strict threshold, you may consider increasing this to a number between 6 and 10. This still provides some protection against brute force attacks whilst removing any frustration users might have over account lockouts.
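Where your directory service cannot enforce lockouts for you, the logic is simple enough to implement at the application layer. A minimal sketch follows; the reset-on-success behaviour mirrors typical AD policy, and the account name is invented:

```python
class LockoutTracker:
    """Track consecutive failed logons per account and lock the
    account once the threshold is reached. A successful logon
    resets the counter; locked accounts stay locked until an
    administrator intervenes."""

    def __init__(self, threshold=5):
        self.threshold = threshold
        self.failures = {}
        self.locked = set()

    def record_failure(self, account):
        """Record a failed logon; return True if the account is now locked."""
        self.failures[account] = self.failures.get(account, 0) + 1
        if self.failures[account] >= self.threshold:
            self.locked.add(account)
        return account in self.locked

    def record_success(self, account):
        """Return True if the logon is allowed; reset the failure count."""
        if account in self.locked:
            return False
        self.failures[account] = 0
        return True

# Three straight failures against a threshold of three locks the account.
tracker = LockoutTracker(threshold=3)
for _ in range(3):
    now_locked = tracker.record_failure("CONTOSO\\bob")
```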
  • Q: Is there a documented process for revocation of a user's access?
  • A: Provide a clear, documented process that minimises the chance of human error.
In a lot of cases, I find that procedures for granting access are clearly defined and documented. Staff know exactly what is required to grant a new user access to a secure site. However, this is not always the case when it comes to revoking a user's access, which is often left unchecked. If a user can't log in, your support staff will know about it soon enough. If they can still log in after their account has "expired", you may find that you are never notified there is an issue.

Reassure clients that you have a clear process, and implement it. If you receive notification that a user's access should be revoked by a set date, enforce account expiration in your directory service and add a suitable description. In MOSS, you could go one step further and create a list of accounts marked for revocation.
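A revocation list like the one suggested above is easy to drive from agreed expiry dates. Here is a sketch, with invented account names and dates, that produces the set of accounts due for revocation on a given day:

```python
from datetime import date

def accounts_due_for_revocation(accounts, today):
    """Return the sorted names of accounts whose agreed expiry date
    has passed. `accounts` maps account name to an expiry date, or
    None where no expiry has been agreed."""
    return sorted(
        name for name, expiry in accounts.items()
        if expiry is not None and expiry <= today
    )

# Invented example: one contractor past their agreed end date.
marked = accounts_due_for_revocation(
    {
        "CONTOSO\\alice": date(2009, 12, 31),
        "CONTOSO\\contractor1": date(2009, 10, 1),
        "CONTOSO\\svc_search": None,
    },
    today=date(2009, 10, 13),
)
```

Feeding `marked` into your account-expiration step (and your audit trail) removes the human-error window this question is probing for.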

That's all for today. Join me soon for part 2. Still to come: Auditing, Change Management, Data Protection, Disaster Recovery, Infrastructure and Physical Security.
