
Netwrix Survey: Do You Actually Audit Changes?


By Jeff Melnick

There is no doubt that organizations of all kinds and sizes make daily changes to their critical IT systems. As administrative security becomes more and more important, the risks of data breaches, information leakage and downtime must be mitigated, and every change must be tracked and documented. Netwrix surveyed 577 IT pros from 24 different industries to get a picture of how this is actually done in different kinds of companies. The goal was to see how IT departments handle the impact of the changes they make and what methods they use to maintain security and system availability.

It seemed important for Netwrix to provide multiple perspectives across different types of IT organizations. In this survey you will find how change auditing processes correlate with company size, industry and the number of IT department employees.

What did we find out? It was not all that predictable. Of course, some findings seemed quite logical. For example, the bigger the organization, the more attention is paid to IT security, and thus change monitoring and auditing are handled more thoroughly. Enterprise companies (1000+ employees) have bigger IT departments and report the highest level of processes in place to manage IT changes, weighing in at 80%. It was also easy to foresee that small organizations are more exposed to security breaches of all kinds: their IT departments, which often consist of a single specialist, simply cannot handle all the processes.

But what came as quite a shock was that 62% do NOT audit changes to their IT systems. Moreover, many of those who answered positively considered native system log data to be enough. It gets worse: 57% admitted to not documenting the changes they make. And this trend shows across all three types of organizations: SMB, midsize and enterprise.

Where is this tendency leading? First of all, to problems with audits. Without documented changes, it is impossible to answer an auditor's questions: who made what changes to which systems? That information simply disappears in endless logs. Monitoring and alerting on changes is never enough. There needs to be a system that verifies every change is tracked, one that provides complete visibility across the entire IT infrastructure. Especially when changes made by IT staff cause security breaches, there must be a way to trace the cause quickly and to report it at any time, in as much detail as possible.

See the complete report, "The State of IT Changes Survey 2014", and share your thoughts with us in the comments section below.

 

 


Heartbleed: A Common-Sense Approach to a Real Network Problem


By Brian Keith Winstead

You’ve probably heard all about the Heartbleed vulnerability by now. Although it might not mean the total collapse of the Internet, as initial wild reports suggested, it’s also not something you can simply ignore. There are definitely some steps business IT departments should take to ensure their networks are secure.

The first step is to educate yourself and your team about the Heartbleed vulnerability—what it is, how it works, and where you might be at risk. I won't go into detail here; since this problem was announced a couple of weeks ago, ideally you've already taken this step. If not, you can find a bounty of information on the web, but here are a few good places to start:

Once you understand the problem, you need to evaluate your environment and determine if you need to update any instances of OpenSSL. Are your servers up to date? Do you run any in-house developed applications that might use OpenSSL? Do you have any networked appliances that could be using a compromised version of OpenSSL for authentication? Keep in mind that some VPNs and some mobile devices might also rely on this open source encryption standard, so be sure to be thorough in your evaluation.
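If part of your estate runs on Windows, a quick inventory script can help with that evaluation. The sketch below is only an illustration, not from the article: the server names and search paths are assumptions, it assumes PowerShell remoting is enabled, and it simply looks for the DLL names OpenSSL commonly ships under so you can compare versions against the vulnerable 1.0.1 through 1.0.1f range.

# Hypothetical server names - replace with your own list
$servers  = "APP01", "WEB01"
# File names OpenSSL commonly ships under on Windows
$dllNames = "libeay32.dll", "ssleay32.dll"

Invoke-Command -ComputerName $servers -ScriptBlock {
    # Search typical install locations for OpenSSL libraries and report their versions
    Get-ChildItem "C:\Program Files", "C:\Program Files (x86)" -Recurse -Include $using:dllNames -ErrorAction SilentlyContinue |
        Select-Object FullName, @{ Name = "Version"; Expression = { $_.VersionInfo.ProductVersion } }
} | Format-Table PSComputerName, FullName, Version -AutoSize
# Anything reported in the 1.0.1 through 1.0.1f range needs the patched OpenSSL build.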

A common recommendation is to update your certificates with a Certificate Authority (CA). Because it can be difficult to tell whether you have already been compromised through Heartbleed, which would mean your existing certificates can no longer be trusted, it's best to revoke any at-risk certificates and replace them with new ones. Check with your CA for the procedures if there's any doubt.

Although detection of a Heartbleed compromise might be difficult, it isn’t necessarily impossible. With good network monitoring, unusual traffic or information requests could be spotted. If nothing else, this is a good reminder that careful monitoring should always be part of your overall security plan. Remember, the Heartbleed vulnerability was in the wild for two years undetected; if someone had attempted to exploit this hole to access your network, would your network monitoring have raised the red flag?

Finally, this is also a good time to revisit your network password policies. As part of Heartbleed clean-up, you probably need to have end users reset their passwords, particularly if there’s any chance your systems were open to attack. Of course, you’re always safer if you use passwords that expire after a set period and prompt users to change passwords at that time—and this includes administrator passwords. Also, as Orin Thomas at Windows IT Pro points out, this is a good opportunity to consider two-factor authentication. If nothing else, it would be worth your while to educate your users about two-factor authentication for personal sites such as Facebook and Gmail, particularly if they access those sites from the business network (and let’s face it, we know they do).
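If your accounts live in Active Directory, the expiration piece can be enforced centrally. The snippet below is a minimal sketch, not something from the article, using the ActiveDirectory module; the domain name and the 90-day maximum age are assumptions to adapt to your own policy.

Import-Module ActiveDirectory

# Hypothetical domain and values - adjust to match your own policy.
# MaxPasswordAge forces periodic resets, which applies to administrator accounts too.
$policy = @{
    Identity          = "corp.example.com"
    MaxPasswordAge    = (New-TimeSpan -Days 90)
    MinPasswordLength = 12
    ComplexityEnabled = $true
}
Set-ADDefaultDomainPasswordPolicy @policy

# Verify the resulting policy
Get-ADDefaultDomainPasswordPolicy -Identity "corp.example.com"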

With a little common sense and action, you can ensure your network is protected against the Heartbleed vulnerability. Using good network monitoring and establishing safe password policies now can also help you in the event that similar problems are disclosed in the future.

 

Interested in this topic? Join our special webinar on the Heartbleed bug! You will soon be able to find all the information about it here.

 

 


Exchange 2013 CAS Configuration – Part 1


By Krishna Kumar

We continue the "Deep Dive" series, in which you may find answers to some of your technical questions. Industry experts provide their insights on several topics and explore new features of the most popular applications. The first article of the series can be found here. Comments and discussion are welcome!

Exchange 2013 brought major architectural changes: there are now just two roles, the Client Access Server (CAS) role and the Mailbox Server role, whereas the previous version of Exchange had five server roles.

Mailbox Server role: It includes traditional components such as the mailbox databases and transaction logs, but it now also hosts the Client Access protocols, the Transport service and Unified Messaging.

Client Access Server role: It offers the client access protocols, such as HTTP, POP, IMAP and SMTP. Outlook no longer uses RPC to connect to the Client Access Server; it uses RPC over HTTPS (also known as Outlook Anywhere), and the Outlook client no longer connects using a fully qualified domain name (FQDN) or a CAS array as it did in all previous versions of Exchange. Instead, it uses the user's mailbox GUID + @ + the domain portion of the user's primary SMTP address to connect to the mailbox. This is a huge benefit, overcoming the limitations and complexity of the Exchange 2010 RPC Client Access service on the Client Access Server.

In this article we will show how to configure the Exchange 2013 Client Access Server with various settings.

Creating and Configuring an SSL Certificate

Exchange 2013 uses an SSL certificate to secure connections between clients and the server. Mobile devices and computers both inside and outside the corporate network can be considered clients. By default, Exchange automatically installs a self-signed certificate on both the CAS and the Mailbox Server; this self-signed certificate is used to encrypt communication between the CAS and the Mailbox Server. The self-signed certificate has limitations (for example, Outlook Anywhere may not work with it), so it needs to be replaced with a third-party digital certificate. Below you will find the process of creating and configuring a SAN certificate on the Exchange 2013 CAS. It can be done using the Exchange Admin Center (EAC) or PowerShell; we will follow the PowerShell steps to create and configure the certificate.

1. Open the Exchange Management Shell and run the commands below. This generates a certificate signing request (CSR) file, which needs to be sent to a CA vendor such as Verisign.

$Cert = New-ExchangeCertificate -Server BlueExch01 -GenerateRequest -FriendlyName Exchange2013Certificate -SubjectName "c=US,o=Blue,cn=mail.blue.com" -DomainName "mail.blue.com,autodiscover.blue.com,legacy.blue.com" -PrivateKeyExportable $true

Set-Content -Path "C:\CertReq2013.req" -Value $Cert


2. Verify the contents of the generated CSR file C:\CertReq2013.req using a CSR decoder such as http://www.sslshopper.com/csr-decoder.html, then send the CSR file to Verisign to generate the certificate.

3. Once you receive the certificate from the third-party CA, import it on the same server where the request was generated, using the commands below, and assign the certificate to the IIS and SMTP services.

Import-ExchangeCertificate -FileData ([Byte[]]$(Get-Content -Path "C:\VerisignCert.cer" -Encoding Byte -ReadCount 0))

Get-ExchangeCertificate -Thumbprint "A549B0F9EDB7FD9C5A1C2012D4C2B64E10D3EC34" | Enable-ExchangeCertificate -Services "IIS,SMTP"


4. Now that we have assigned the certificate on the first server, we need to export it and import it on all other Client Access Servers in the organization. Below is the PowerShell command to export the certificate in PFX format. This command prompts for credentials to protect the exported certificate; the same credentials must be used when importing it on another server.

$certexport = Get-ExchangeCertificate -DomainName "mail.blue.com" | Export-ExchangeCertificate -BinaryEncoded:$true -Password (Get-Credential).Password

Set-Content -Path c:\cert_export.pfx -Value $certexport.FileData -Encoding Byte


5. To import the certificate on a new CAS server, copy C:\cert_export.pfx to that computer and use the PowerShell commands below. When prompted for credentials, enter the same username and password that were used during the export. Once the certificate is imported, assign it to the required services.

Import-ExchangeCertificate -FileData ([byte[]](Get-Content -Path c:\cert_export.pfx -Encoding Byte -ReadCount 0)) -Password (Get-Credential).Password

Get-ExchangeCertificate -Thumbprint "A549B0F9EDB7FD9C5A1C2012D4C2B64E10D3EC34" | Enable-ExchangeCertificate -Services "IIS,SMTP"


6. If you are in a very large organization with multiple servers, you can import the certificate remotely across all CAS servers using the PowerShell script below.

$servers = Get-ClientAccessServer | Select-Object Name

foreach ($server in $servers) {
    Import-ExchangeCertificate -FileData ([byte[]](Get-Content -Path <path_to_exported_certificate> -Encoding Byte -ReadCount 0)) -Password (Get-Credential).Password -Server $server.Name
}
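As a quick sanity check (not part of the original steps), you can confirm that every Client Access Server now holds the certificate and has it enabled for the intended services; the thumbprint is the example value used above.

# Verify the certificate and its assigned services on every CAS (thumbprint from the example above)
Get-ClientAccessServer | ForEach-Object {
    Get-ExchangeCertificate -Server $_.Name -Thumbprint "A549B0F9EDB7FD9C5A1C2012D4C2B64E10D3EC34" |
        Format-Table Identity, Thumbprint, Services, NotAfter -AutoSize
}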

In this part of the article we created and configured the certificate on all CAS servers in an organization.

In the next part, you will see how to configure CAS URLs, Outlook Anywhere and Send Connector settings.

 

 


Can Security and Compliance Coexist?


By Michael Fimin

Both security and compliance are about establishing (and implementing) standards that ensure an environment where company assets and data are accessed and utilized properly. So if you were asked, "Do you think security and compliance really coexist?" you'd most likely think it a dumb question and say, "Of course."

But what if we challenge that notion a bit — not so much to explore if they can coexist, but whether they do.

Security is about access — whether you can get to systems and data. Compliance is about behavior — what you do with it once you get there. So to truly see security and compliance coexist, you need a way to tie the two together to get a full picture of what kind of access is available and who utilized that access.

For the purposes of this article, let’s focus on the access each IT person has and the work being done within critical systems to establish and maintain a secure environment. But let’s do so from the perspective of a compliance audit. Here there is a distinct secret to bridging the gap between security and compliance and some critical changes every organization should audit.

The Secret

It’s not until an actual audit that it becomes evident that security and compliance are not aligned. Why? Because of one simple issue — documentation. During an audit, security pros are asked for proof that security has been maintained in a known state — where a security pro is aware of the state of security at any given point in time. Without knowing who was given access, it’s pretty tough to tell whether the folks with access behaved themselves. And access can change quickly — someone can have access to a resource today and have it removed tomorrow.

Take the reasonable auditor question of, “Who has been a member of the Domain Admins group within the last year?” (You can, of course, insert your own administrative group if you’re not using Active Directory.) It’s a reasonable question — the auditor simply wants to know who has had access to make critical changes to security, affecting access to sensitive data. It’s at this point that the security pro realizes they haven’t documented every change made to this group and have no reasonable way to come up with an answer to the question, thus failing at least that part of the audit.
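The current half of that question is easy to answer with the ActiveDirectory PowerShell module; the sketch below is my illustration, not the author's, and the historical half can only come from the change records you have been keeping all along.

Import-Module ActiveDirectory

# Who is in Domain Admins right now (recursive, so nested group members are included)
Get-ADGroupMember -Identity "Domain Admins" -Recursive |
    Select-Object Name, SamAccountName, objectClass

# Who was a member last March? No cmdlet can tell you - only the change history
# you have documented (or a change-auditing product) can answer that.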

So the secret is simply documentation. As long as the configuration of and changes to security are documented, security aligns with compliance. Now the hard part: how do you document every change? Without third-party solutions it's not an easy task, but in principle it is as simple as recording security changes – and that does mean every change.

What Changes Should I Be Watching?

Every change? That's a daunting task, and no IT or security pro really has the time. So let's break everything down into something a little more manageable and look at the changes that will assist in meeting compliance objectives.

  1. Who’s in charge? – Remember, we’re talking about changes so it’s important to have a historical record of who has been added to groups or given direct access that gives someone rights to establish security for the rest of the organization. This includes first and foremost any directory service that serves as the basis for all other security. To properly meet an auditor’s needs, you’re going to need to know exactly who has administrative access and the duration during which each person has that access. Keep in mind this applies to directory services, databases, cloud-based systems — basically any environment that either houses sensitive data or provides the basis for access to that environment.
  2. What are they doing? – Because we’re focusing on IT, knowing what those in charge are doing with their permissions is equally important. Many systems provide logging so you can see what actions are taken. Specifically, you’ll want to be able to see any change made that impacts security, when it occurred and by whom.
  3. What changed? – This is where the rubber meets the road. Your ability to provide this critical level of detail will determine whether you pass or fail an audit. You might be asking, “Isn’t this the same as the previous requirement?” But there’s a difference. We’re talking about knowing what specifically was changed – the before and after values for a given change. Knowing that Fred modified the Database Administrators group membership only satisfies the first two requirements in this list. It’s when you know that Fred added Bob to this group that you can complete the audit detail required.

Let's move this from an academic discussion to the real world: how possible is this, really?

Most systems provide some kind of logging that gives you some or all of the detail listed above. Given that there will, no doubt, be an overwhelming amount of log data, you'll need to employ some kind of Event Log Management or SIEM solution at a minimum to consolidate your data into something more manageable. This, however, may only help with the first two of the three requirements listed above if the log data isn't detailed enough. Ideally, employing a solution specifically focused on auditing changes will allow you to meet all three requirements and make answering your auditor's questions a simple and efficient task.
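As a concrete example of what that raw log data looks like, the sketch below (an illustration, not from the article) pulls the last week of group-membership changes from the Security log of a hypothetical domain controller; the event IDs are the standard Windows ones for members being added to or removed from security groups.

# Group-membership changes from the Security log of a (hypothetical) domain controller DC01.
# 4728/4729 = global group member added/removed, 4732/4733 = domain local, 4756/4757 = universal.
Get-WinEvent -ComputerName "DC01" -FilterHashtable @{
    LogName   = "Security"
    Id        = 4728, 4729, 4732, 4733, 4756, 4757
    StartTime = (Get-Date).AddDays(-7)
} | Select-Object TimeCreated, Id, Message | Format-List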

Security and Compliance are simply looking at the infrastructure from two different sides of the story — one looking forward (e.g., “what kind of access should I give Bob?”) and the other looking back (e.g., “Who gave Bob rights?”). It’s the documentation you maintain around the security changes you make that’s going to allow these two practices to live harmoniously. Keep track of those changes and you’ll be ready for an audit, no matter what they ask.

What is the situation like in your IT infrastructure? Feel free to share your opinion in the comments section below!

 

 

 


DEAD BY APRIL: 5 Greatest Features of Windows XP


By Jeff Melnick

Do any of you guys remember the first days when Windows XP became available? I do! If I had to describe my first impression of it, I would say "It just works!" Preceded by the release of the buggy and unstable Windows ME, the old dude XP was introduced almost twelve and a half years ago, in an era when machines were big, noisy and slow. What's interesting is that XP is Microsoft's longest-living OS: Windows ME and 95 both lasted about six years, Windows 98 roughly eight.

Today is the last day of Windows XP, as Microsoft officially ends its support. So we decided to look back at the best features XP brought us over its lifetime. Here is my countdown of the things I admired about this OS.

#5 CD Burner

This was so cool! Thanks to this built-in function I could simply drag sound files onto my CD-ROM drive’s desktop icon… and my CD was ready.

#4 Device Driver Rollback

I could uninstall any driver updates and continue using the previous “stable” version. Actually, this feature is #1 according to my colleagues’ opinions.

#3 Internet Connection Firewall

In the official Microsoft documentation this feature is explained as a "security system that acts as a protective boundary between a network and the outside world." Well, it did a great job protecting the system, using packet filtering on LAN, Point-to-Point Protocol over Ethernet (PPPoE), VPN, and dial-up connections.

#2 Remote Desktop

One of the coolest features, it helped me as an IT guy do my job remotely in critical situations. Back in the day, this feature literally sold me on Windows XP.

#1 Remote Assistance

Remote Assistance worked in conjunction with Remote Desktop and helped corporate Help desks handle remote troubleshooting. The user had to fill out a ‘Request for Help’ form and email it to the Help desk. The form prompted the user for an activation period and a password. The email message then enabled the support personnel to connect to the user’s PC to diagnose and fix the problem.

Did I miss something? What did you like or hate most about XP? Share your thoughts in the comments section below!

 


Does Auditing Have a Role in Your Security Strategy?


By Michael Fimin

When they hear the word audit and consider how it pertains to their security strategy, most organizations think of auditing as something you do once a year or when a problem arises. But just how should auditing be an integral part of your security strategy? In fact, for most security plans, auditing is a critical missing piece – one that's needed throughout the security lifecycle to make your life easier both when you're being audited and when you're not.

Where Does Auditing Fit In?

In order to figure out where auditing should play a part in your security strategy, let’s start by defining what should be audited as part of a security plan. To do this, look first at how security is implemented in its most simple form and work backwards to auditing’s role in your strategy.

Security can be viewed in three simple steps – assess, assign, audit. (Now, don’t jump ahead thinking “ok, ok – so the audit part is at the end.” As you’ll see, auditing will play a part every step of the way.)

As you think about the three steps, the first thing you do as part of implementing your security strategy is to assess the current state of security: What permissions are in place? Who has elevated rights? Who is able to access the resources in question? And so on. Once you complete your assessment, you need to make some new security assignments – additions, deletions, and modifications – to ensure the system is properly locked down. Lastly, you audit the usage of the environment being secured to make sure you’ve properly implemented the security.

If you see something out of place – for example, a user that shouldn’t be utilizing a resource or system – you’ll go back and start the process again by assessing what rights are in place in order to determine how they gained access, followed by an assignment that fixes the issue, and then another audit of use to ensure the security is now correct.

So, Assess. Assign. Audit. (repeat)  Got it? Good.

What Should You Audit?

Based on where you are in the process, you’ll be auditing different aspects of your security. Like the three steps, break down what you need to audit into three simple groups:

· The State of Your Security – The assessment step requires an understanding of more than just who has permissions to a critical system or resource. To truly have a handle on the state of your security, it requires visibility into who currently has rights to manipulate those permissions, as well as who had permissions or administrative rights historically. Think about this all too common scenario for a moment – someone who previously had permissions to sensitive data gave another user rights to access that data. Once accessed, the permissions were removed.  If all you audit is today’s permissions and administrative rights, you will never know the true state of your security.

This is a tough one to address.  It's easy enough to pull the current state of security – you could even use the default administrative tools for the resource in question and take screenshots of the current security settings (not that this is an advocated methodology – it simply points out that there is no excuse for not auditing the current state of security).  It's the historical security that is going to be a problem.  Without a solution in place that takes some kind of snapshot or backup of your security, you're going to miss the states of security in between your audits.  (A minimal snapshot sketch follows this list.)

· The Changes You Do and Don't Know About – The assignment step requires its own auditing as well.  The example above demonstrates that auditing the changes made to your most critical systems is crucial to maintaining your security strategy. (We'd all like to think we know about every change, but even you have made changes in the past week that no one else in your IT department knows about, right?) You need to audit every change made not only to the resources you want to protect, but to the systems that provide access to those resources. As an example, you would want to audit changes to a payroll database, but also changes to your Directory Service, whose groupings of user accounts provide access to the database in the first place. And if you really want to get detailed, you'd be auditing the changes to any group that has access to modify the membership of the group with access to the database. And so on and so on…

While auditing the state of your security is somewhat of an ex post facto activity, auditing changes should be an ongoing exercise. This means, at a minimum, you should be looking at security logs that identify changes. Event Log Management (ELM) and Security Information and Event Management (SIEM) solutions that can notify you of changes can help here. If you're more serious, utilizing solutions focused on auditing changes is a better choice.

· Access to Your Most Critical Systems and Data – This was the most obvious of the three, probably because the step is named auditing in the first place.  Remember, here your focus is to audit the use of the permissions – who is accessing patient records, who is copying files from a server to a USB stick, who is reading or writing to your customer database. In the spirit of the three security steps, this would be done to identify use cases where the security you thought you implemented doesn’t match up with the use you’re seeing. Auditing access is both a reactive and proactive activity. The proactive part detects points where you’ll restart the assess, assign, audit process looking for how someone got improper permissions.  The reactive part identifies potential security breaches that may have already occurred. So it’s important to make this as real-time as is possible so you can either react to breaches or quickly lock down a gap in your security.

Many systems provide logging of access, allowing you to utilize the same ELM and SIEM solutions. The challenge will be consolidating seemingly related log entries into intelligent individual actions.  Depending on the logs you're using, a single activity can generate more than one log entry. Solutions focused on auditing access also exist and may provide the intelligent parsing of log data required to make this part of your auditing plan more productive and effective.
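Coming back to the first item, capturing the current state of security doesn't require anything exotic. The sketch below is my illustration, not the author's method: it dumps the ACL of a hypothetical sensitive share into a dated file so you at least have point-in-time snapshots to compare between audits; the share path and output folder are assumptions.

# Snapshot the ACL of a (hypothetical) sensitive share into a dated report file.
$target = "\\FILESRV01\Payroll"
$report = "C:\SecuritySnapshots\Payroll_{0:yyyy-MM-dd}.txt" -f (Get-Date)

New-Item -ItemType Directory -Path (Split-Path $report) -Force | Out-Null
Get-Acl -Path $target | Format-List Owner, AccessToString | Out-File -FilePath $report

# Comparing these files over time gives a crude history of who gained or lost access between audits.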

So, Does It?

Now that we’ve spent some time labeling parts of your security strategy that require auditing and defining the auditing that needs to take place, the question needs to be asked once again.

Does Auditing have a Role In Your Security Strategy?

Using the definitions above, your answer is likely, “No.”

Remember, security isn't static – it is constantly being changed by, potentially, a lot of people in your organization. There is a lot to security that requires ongoing auditing to ensure you have a handle on it, no matter what is changing and by whom.  By making auditing a central part of your security strategy, you'll more easily be able to tell whether you're truly secure.

 

 


Netwrix Releases Amazing New Technology!


By Jeff Melnick

Today we are glad to announce the long-awaited release of a groundbreaking new technology. Netwrix Auditor is now programmed to capture the brain activity of system administrators and can predict actions that could impact your company's security or compliance before they happen. It uses a combination of electroencephalography (EEG), Bluetooth Low Energy (BLE) and our patented prediction algorithm.

This new technology is the result of almost a decade of research and development by Netwrix engineers and neuroscientists from the world's leading laboratories.
Watch the video to learn more about this amazing technology.


 


Is Cloud Dead? Do IT Pros Still Trust the Cloud Providers?


By Jeff Melnick

As new technologies come into our lives, it is always tempting to fall under their magic charm. But as time passes, it is important not to stay dazzled by "the newest and shiniest thing on the market", but rather to look at how it actually works, what risks it implies and whether it is actually worth trusting.

Not long ago, a question was asked in the Spiceworks community, and it got a lot of responses. This is definitely one of the hottest topics in the IT community these days, especially after the Snowden and NSA scandals. In this article you'll find out what the Spice Heads think about the current state of the Cloud, whether they still use Cloud providers and whether they are going to continue adopting new Cloud technologies.

According to the Spice Heads, there are several things to consider before deciding whether or not to trust a Cloud provider with your data. Here are the five most important ones:

#1 What Do I Require from a Security Standpoint?

Before you decide whether to rely on a Cloud provider or to build an "in-house" IT infrastructure, you should prioritize your needs. Security, reliability and availability of data should be the first priorities.

Force Flow: For some people/organizations, storing data in the Cloud is great. For others, it's not so great. It all depends on what your requirements and goals are for storing/retrieving data.

 

Veet8676: I do feel that using the Cloud or not should depend on how critical and sensitive your data is, and to what extent you are willing to expose it to a third party.

 

#2 How Sensitive Is My Data?

The next thing you have to define is what kind of data you have: how sensitive it is and how damaging it would be to have it leaked or to lose it altogether. It is important to understand the difference between business information and personal data: it may make sense to trust a Cloud service with one kind of data while relying only on a well-controlled "in-house" system for the other.

Gary D Williams: I've said it before and I'll say it again: Cloud is your data, the most important thing you have, on someone else's hardware and that someone else will not care about your data as you do. Understand that, accept its limitations and you'll be fine. Expect the Cloud to be a never ending resource of uptime and performance and you will be in for a nasty shock. I don't trust the Cloud at all but I see its potential and its uses.

Keg: Personal Cloud services have no place in business. The risk is too great for data leakage. No trust whatsoever in the Cloud. Private cloud for me is the way to go.

 

Jim4232: I know at my level the data security is not quite to the level as most others but I also understand that Cloud services may be in the adolescent period in its level of service and security. Just like anything else it will grow and get better. Just get the best information you can and make the best decision you can based on your future needs, whatever it might be.

#3 Does the Data Belong to Me or to My Company?

Your decision should also depend on your purpose: what suits a private person and his data may not be applicable to a huge enterprise. The degree of risk is different, and the losses caused by a leak are hardly comparable. But it seems that most IT guys share the view that SaaS and the Cloud are best suited to small and medium-sized businesses, as opposed to the enterprise market.

Kris (Spiceworks): SaaS is just the way things are going to be.  I see absolutely no reason for SMB/Mid-market organizations to run their own Exchange infrastructure, for example.  Unless the organization is willing to invest in full-time domain experts in Exchange (along with all of the supporting hardware and software), then SaaS providers will do it better.  They will have purpose-built data centers, trained staff, and Exchange expertise. That goes for most other LOB applications as well, including CRM (Salesforce), HRM/Finance, etc.  We should embrace these services and utilize them to their full extent. Security worries should just go away.  I guarantee Salesforce will do a much better job at securing customer data than a lone IT guy with no formal background in info-sec.

SD_GS500E: If you have data you're going to be responsible for, I don't trust the Cloud any further than I can throw it. As for SaaS, I do think that poses a lot of promise for small companies.

 

#4 How Do I Approach My Data?

It is important to decide how and when you access your data. This can help you make the right choice about how to handle it and where to store it.

Rich_Bruklis: The Cloud is essentially a remote data center. It can be vulnerable to outage and security issues. You would think that a 'name brand' Cloud will invest to prevent just about anything bad from occurring. In that case, I would probably trust the Cloud a bit more than my own data center (I can't afford dual redundant generators, triple network connections, etc). As far as security, I think the Cloud folks have enough in place to make it as good as if not better than what I have. Good security starts at home. If you have a good strategy and tactics locally, I am pretty sure that can translate to the Cloud.

#5 What Is the Cost vs. Benefit Ratio?

When it comes to cost, you should ask yourself these questions:

  • How will I deal with Cloud service outages?
  • What downtime can I afford?
  • Is it really important for me to know, where my data is actually stored?

If you are relatively flexible on these things, then a Cloud provider might suit you well. Plus, it is cheaper in most cases.

Gravesender: There was a considerable debate about Cloud vs. in-house for Exchange. We opted for in-house for a number of reasons:

1. We lack affordable access to reliable internet connectivity with decent bandwidth. This has been a chronic problem since I started here 10 years ago. Our in-house Exchange lets most folks keep working if we lose outside connectivity. Shared calendars and such are very important here. On the other hand, our sales manager was a big advocate of the Cloud because he could, in theory, remain in touch through his smart phone if anything went wrong with our server or connectivity.

2. The boss has a suspicious nature and doesn’t trust the Cloud.   

3. As far as operating costs go, once we got Exchange going, it pretty much runs itself. I have to maintain an in-house network for some LOB applications that are not really suitable for the Cloud, so I have to be here in any case.

So, as you can see, everyone has their own reasons to blame or defend Cloud providers and SaaS. But it is important to understand what kind of data you have, and not to forget that there are always risks in handing your data to someone else, even a trusted contractor. Just remember that the best choice is always based on your needs, not on what seems to be the trendiest thing out there.

What do you think about Cloud services? What role does the Cloud play in your company?

 

 


Exchange 2013 Multi-Site CAS URL Configuration with Global Namespace for Site Resilience


By Krishna Kumar

Herewith we start a series that we decided to call "Deep Dive". In it you may find answers to some of your technical questions. Industry experts will provide their insights on several topics and explore new features of the most popular applications. It's a great read, and comments and discussion are welcome!

Exchange 2013 has simplified the Client Access Server (CAS) design by allowing a global/single namespace. This is because of the way the CAS works in Exchange 2013. Outlook no longer uses the RPC protocol to access email; instead it connects to the CAS and accesses the mailbox over HTTPS. The CAS makes a direct connection with the Mailbox server if the mailbox exists in the same AD site. If not, it decides whether to proxy the request or to redirect it to another CAS/Mailbox infrastructure: it queries the Active Manager to determine the Mailbox server hosting the active copy and proxies the request to that server, regardless of the mailbox location. A CAS redirects a request to another CAS only for telephony-related or OWA requests.

These changes have simplified the design and reduced the number of CAS URLs required for Exchange 2013 in a multi-site environment. They eliminate the need for a separate OWA URL namespace in the primary and secondary data centers, as well as the requirement for a separate RPC Client Access namespace in those data centers. Instead, it is possible to have a single namespace for both internal and external access, regardless of where the mailbox is located. The primary requirement for this design is a network connection between the data centers with no significant latency or utilization issues.

Externally, you can configure DNS round-robin between the VIPs of the load balancer in each data center. Users from the Internet will use DNS round-robin to connect to any available VIP and access their mailbox. The major concern with this design is that most of the proxy traffic will go out of the site.

Similarly, internally, configure DNS round-robin between the VIPs of the load balancer in each data center for the namespace Mail.Blue.com. The Outlook client will then receive multiple VIP addresses for the fully qualified domain name (FQDN) Mail.Blue.com, which provides a failover option at the client. If the client tries to connect to one VIP address and the connection fails, it waits about 20 seconds and then tries the next VIP address in the list. Technically, automatic recovery should happen within about 21 seconds.
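On a Windows DNS server, the round-robin part is simply two (or more) A records for the same name. The sketch below is illustrative only, not part of the original article: it assumes the DnsServer PowerShell module and uses made-up VIP addresses for the Mail.Blue.com example.

# Two A records for the same FQDN (mail.blue.com), one per data center load balancer VIP.
# The VIP addresses are made-up example values; DNS round-robin will hand clients both.
Add-DnsServerResourceRecordA -ZoneName "blue.com" -Name "mail" -IPv4Address "10.10.1.50"
Add-DnsServerResourceRecordA -ZoneName "blue.com" -Name "mail" -IPv4Address "10.20.1.50"

# Check what clients will receive
Resolve-DnsName "mail.blue.com"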

For example, consider two Internet-facing AD sites with Active/Active users configured in the DAG and DNS round-robin configured between the two load balancers.


The CAS URL configuration for two intranet-facing AD sites with Active/Active users follows a single-namespace design. A single namespace allows one common URL configuration across all CAS servers in both data centers. If a server or data center fails, users are automatically connected to another server or data center without any impact on end users.


The single namespace has simplified the Exchange design; the major requirement for implementing this solution is a solid network connection between the two data centers with no latency, throughput or utilization issues. When both data centers are Internet-facing sites, you only need to make sure that both AD sites have a load balancer configured with VIPs. This design also reduces the number of SANs required in the certificate: one certificate with two SANs covers the whole organization. This option provides site resilience between the sites without much administrative effort.

 

 


Securing Active Directory Administration – Part 3


By Brian Svidergol

We continue the series of posts on securing Active Directory administration, written by Brian Svidergol. There are three parts to this series, each dedicated to a specific problem around securing Active Directory administration. For a deeper insight into the topic, you are welcome to read the first part, which focuses on the principle of least privilege, and the second post, which covers infrastructure considerations. Today, in the final part of the series, you can learn about auditing, monitoring and alerting.

The topic of monitoring alone could take up an entire book.  So could auditing and alerting!  So, for this blog post, we will only talk about a few small areas of these systems at a high level.  The most common thing I see and hear about monitoring an environment is that administrators get too many alerts.  And too many alerts usually lead to one thing – a lack of response, or a slow response, to incidents.  Often, people are aware of this conundrum.  But that awareness doesn't change the configuration in a lot of organizations.  I want to talk about 3 areas – why you should audit success and failure, how monitoring plays into securing Active Directory, and reducing the number of alerts so that incident response is improved.

  • Why you should audit success and failure.  Look at your auditing configuration.  Are you auditing only for failures?  If so, you aren't alone!  But it is time to rethink that strategy.  Auditing for success and failure is a highly recommended strategy.  Let me bring up a couple of examples on the importance of auditing for success.  First, you are investigating a configuration change on a domain controller.  The change was made.  That required successful authentication and some type of remote administration (RDP or MMC).  In this case, the entire path is littered with successful actions.  If you aren't capturing these with auditing, how much harder will it be to figure out who/what/when?  Quite a bit.  Another example – you see a very large amount of failure audits related to a service account authenticating to a domain controller.  The good news – you captured the failed attempts with auditing.  The bad news – you aren't sure if the attempts stopped because the attacker gave up or because he actually got in.  Auditing success and failure helps tie things together like this.  (A minimal auditpol sketch for enabling both follows this list.)
  • How monitoring plays into securing Active Directory.  There are a number of topics on monitoring Active Directory…so many that entire white papers have been written on it.  For today, I'm not going to dive into health monitoring and thresholds.  Instead, let's first focus on security groups.  Let's talk about the Domain Admins group (although the upcoming recommendation applies to all of the critical security groups).  You should monitor all changes to the group membership (additions and subtractions).  If you don't, you probably won't ever know if anybody was temporarily added (for example, to cause havoc on your network) and then removed a couple of hours later.  Now let's change gears to Group Policy.  Group Policy is often an area that is overlooked when monitoring Active Directory.  Many ugly scenarios come to mind with Group Policy: an attacker wants to launch a phishing attack across an enterprise but the current IE settings prevent or reduce its effectiveness – a quick change to a GPO and go; an attacker needs to push some malware to all domain-joined computers – create a GPO and go.  As you can see, monitoring Group Policy plays a big role in the overall monitoring strategy.  You need to know when new GPOs are created, when GPO links change, and when GPO settings change.  You also want to know who performed the changes.  Finally, administrative subnets come into play.  Having centralized administrative subnets allows for highly targeted monitoring and opens up the potential to automate big swaths of the initial monitoring configuration.  Instead of targeting monitoring at specific servers, specific IPs, or specific groups (containers, groups, or other collections), you can target monitoring at entire subnets.
  • Reduce the number of alerts.  For anybody that has ever deployed System Center Operations Manager (OpsMgr), you already know where I’m going.  Today, monitoring tools are incredibly sophisticated.  They come out of the box with a vast array of monitors built in and customized for server roles.  OpsMgr has management packs for just about everything!  That is the good news.  The bad news is that the moment you turn everything on and complete the base configuration, administrators will be overwhelmed at how many “problems” exist in their environment.  At first, it is exhilarating – tons of activity and a bunch of low hanging fruit.  But then, after days pass, then weeks pass, the alerts keep coming.  The next thing you know, the nifty little Monitoring folder you created in your Inbox is loaded up with thousands of unread alert messages.  You stop looking at the Monitoring folder in real time and just peruse it when you have some down time.  This is common.  Of course, there is worse.  One time, I was working with a company that had some administrators that pulled their SMS addresses out of the notifications because the amount of SMS alerts was outrageous.  I’ve talked to many people about reducing the number of alerts.  But it usually takes an incident before action is taken.  The typical incident begins with an outage.  Once everything is back up, the investigation of the outage often looks at monitoring.  Certainly, it would be a good time to add or enhance the monitoring so that the administrators will know about this ahead of time next time, right?  Yes, but in this case, administrators did know ahead of time.  They got 15 alerts on Wednesday afternoon – currently messages #550-564 in their Monitoring folder that contains 2600 messages.  Thus, nobody even noticed them.  When such an incident occurs, people will finally begin to look at reducing the number of alerts to make things more manageable.  You can reduce the number of alerts or hire 10 guys to stare at monitors and logs all day (or, in organizations with big budgets, you can do both).  My strategy for reducing alerts is pretty simple.  Every time an issue crops up (outage, degradation, etc.) – I look at monitoring.  I find out if we knew about it and if we did – did we know about it early enough to take action and avoid an issue?  I find out if the monitoring gave us enough information to take action.  I find out if the right people were notified.  Often, administrators in one department receive alerts for more than just their department.  This occurs on multiple teams.  Then, when an alert comes in, even if administrators did see it, nobody takes action because they figure somebody else did.  In my monitoring strategy, I also ask a key question when an alert comes in.  Does the alert require me or anybody else to take action?  If not, then the alert’s value is suspect.  In a perfect world, every alert would equate to action by the recipient.  People would only get alerts if they needed to take action and the alert would contain everything they needed to know.  To summarize my monitoring strategy of reducing the number of alerts, follow these steps:
  1. Ascertain whether any action was taken on an alert that was received.  If not, why not?  Could the alert be removed without impacting anything?  Then remove it.  The data will still be in the monitoring database for reporting but it won’t clog up your phone or e-mail.
  2. Are you getting alerts that others are taking action on?  If so, are they also getting the alerts?  If so, consider removing yourself.  If not, add them to the alert and then remove yourself.  Only those that will take action should get an alert.
  3. Are you getting notified on your phone via SMS about issues that are not critical (such as disk space is at 20% free)?  If so, remove yourself.  SMS notifications should only occur for production issues and outages or impending production issues and outages.  If a disk is running at 20% but won’t fill up for 8 more days, you shouldn’t be getting notified by SMS yet.
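For the success-and-failure point above, here is a minimal sketch of my own (not from the original post) that enables both settings for a couple of relevant subcategories using the built-in auditpol utility. Run it elevated on the machines you care about, or better, configure the equivalent advanced audit policy through a GPO.

# Enable success AND failure auditing for two of the subcategories discussed above.
# auditpol only changes the local machine's policy; use a GPO for domain-wide settings.
auditpol /set /subcategory:"Security Group Management" /success:enable /failure:enable
auditpol /set /subcategory:"Logon" /success:enable /failure:enable

# Review the effective settings
auditpol /get /category:"Account Management"
auditpol /get /category:"Logon/Logoff"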

I hope I touched on some topics that hit home and get you to start thinking about some of these areas in your environments.  As Bruce Schneier said way back in April of 2000, “security is a process, not a product”.  Have fun, and safe computing!

 

 


