Let’s look at the impact upgrading to Windows 7 can have on BitLocker. As part of the hardware refresh that often accompanies these upgrades, people also start to look at self-encrypting drive technology, which is becoming more widely available and more cost effective.
SED? WHAT’S THAT?
The idea here is that the drive is essentially self-defending. A self-encrypting drive, or SED as it’s often abbreviated, is usually a fixed disk that uses hardware-based encryption. The standard for these is OPAL, and you’ll see more and more OPAL-compliant devices. They’re made by companies like Hitachi, Toshiba, Seagate and Samsung, and most of the major manufacturers are now moving to some kind of OPAL-compliant SED. The Trusted Computing Group estimates that in five years pretty much all drives will have some kind of self-encrypting capability, and that includes not just traditional mechanical drives but solid state drives too. So self-encryption is really going to become the foundational element for drive-based data protection.
HOW IT WORKS
And there’s a reason for that, and it’s conceptually very simple. You have the file system and the operating system running on the drive, and in between the drive and everything you want to do with it sits a hardware component that performs all the encryption tasks. Typically SEDs use AES-128 or AES-256, industry-standard encryption algorithms, but it’s a hardware piece doing the work. Everything that gets written to the drive is encrypted; everything that gets read from the drive is decrypted. There are a couple of different keys involved: a data encryption key and an authentication key. I won’t talk about those in great detail here; a couple of weeks ago we had a webinar just on self-encrypting drive management where I talk in more detail about what those keys do, where they live, and how to manage them.
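The encrypt-on-write, decrypt-on-read data path can be sketched in a few lines. This is a toy model only: a SHA-256-derived keystream stands in for the drive’s real AES engine, and the class and key names are illustrative, not any vendor’s API.

```python
import hashlib

class ToySelfEncryptingStore:
    """Toy model of an SED's data path: every write is encrypted and
    every read is decrypted, transparently to the caller. A SHA-256
    counter keystream stands in for the drive's AES hardware."""

    def __init__(self, data_encryption_key: bytes):
        self.dek = data_encryption_key
        self.sectors = {}  # sector number -> ciphertext at rest

    def _keystream(self, sector: int, length: int) -> bytes:
        # Deterministic per-sector keystream derived from the DEK.
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(
                self.dek + sector.to_bytes(8, "big") + counter.to_bytes(8, "big")
            ).digest()
            counter += 1
        return out[:length]

    def write(self, sector: int, plaintext: bytes) -> None:
        ks = self._keystream(sector, len(plaintext))
        self.sectors[sector] = bytes(p ^ k for p, k in zip(plaintext, ks))

    def read(self, sector: int) -> bytes:
        ciphertext = self.sectors[sector]
        ks = self._keystream(sector, len(ciphertext))
        return bytes(c ^ k for c, k in zip(ciphertext, ks))

disk = ToySelfEncryptingStore(b"not-a-real-aes-key")
disk.write(0, b"payroll.xlsx contents")
assert disk.sectors[0] != b"payroll.xlsx contents"  # at rest: ciphertext only
assert disk.read(0) == b"payroll.xlsx contents"     # through the module: plaintext
```

The point of the sketch is the transparency: the caller never sees ciphertext, and nothing unencrypted ever lands on the platters.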
WHY THE INTEREST?
Many organizations are looking at SEDs, especially organizations that have looked at software-based full disk or full volume encryption. The approach of encrypting the entire drive appeals to them, and they’re now looking at SEDs or other hardware-based encryption as a way of getting the same conceptual simplicity of encrypting everything, but without a lot of the headaches associated with software-based full disk. One of the first things we hear when people look at self-encrypting drives is, “Boy, they’re really fast.” It is a much faster approach than software-based full volume encryption because there’s no software layer doing the work: it all happens in a specialized piece of hardware, so the drive runs at essentially full speed. They’re also faster to roll out, because you don’t have the upfront work required to prep a system for software-based full disk. Anyone who has deployed software-based full disk knows a lot of work has to go into ensuring the physical drive is ready to have encryption deployed to it; if it’s not ready, you can “brick” the system and make it completely unusable. That doesn’t happen with a self-encrypting drive. So you’ve got a lot less work, a lot less management, and a lot less risk up front. There’s also a much shorter time to security: instead of spending many hours encrypting a full volume to get it “secure,” with a self-encrypting drive you pretty much turn it on and it’s ready to go. We’re talking about a couple of minutes rather than many, many hours.
So: less work and less risk up front, faster performance, and a faster time to security. And it’s very simple. The drive is encrypted all the time; you can’t tell a self-encrypting drive not to encrypt. You’ve got a very simple, high-performance conceptual approach, so it’s not surprising that people are looking at SEDs.
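The “minutes versus many hours” claim is easy to sanity-check with rough arithmetic. The numbers below are illustrative assumptions, not measurements: a 500 GB volume and an assumed effective 20 MB/s sustained throughput for an initial software encryption pass while the machine stays usable.

```python
# Illustrative back-of-envelope: initial software full-disk encryption time.
volume_gb = 500            # assumed volume size
throughput_mb_s = 20       # assumed effective encryption throughput

hours = (volume_gb * 1024) / throughput_mb_s / 3600
print(f"software full-disk initial encryption: ~{hours:.1f} hours")

# An SED ships already encrypted, so its "time to security" is just the
# few minutes needed to enable authentication and set policy.
```

With these assumptions the software pass takes roughly seven hours, versus minutes for enabling an SED.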
WHERE DO THEY FIT?
Where they fit is the same places you would expect a traditional full volume approach to fit: where simplicity is very important, and where you want to keep the advantages of software-based full disk encryption without the pain. A hardware refresh, as part of the process of cycling through your systems every three, four or five years, is a great time to start looking at SEDs. But because it’s a full volume approach, just like BitLocker, you don’t get a high degree of granularity. In other words, I can’t currently encrypt or decrypt only certain files on an SED: everything is encrypted all the time, and when the drive is unlocked, everything on it is accessible. That has implications if I need to share a system or hand a system to somebody else to work on; I can’t unlock just the OS and keep the data encrypted. I have to give them access to everything, and that can be a challenge for sensitive information. I’ll also call out something so obvious it almost doesn’t need saying: you need something else for everything else. In other words, you need an encryption and data protection solution for the other places your data is going to go. As data moves in ever larger volumes and with increasing mobility, it moves onto removable media, onto other devices, and out into the cloud. SEDs aren’t going to help you there; they’re a device-centric form of data protection that protects that device. So you’ve got to think about what else you’re going to put around it to protect information as it moves to other platforms, drives and mobile devices.
Do think about that as part of your planning process.
The other thing I will say is that it’s great technology, “but” (and it’s a big, deliberately big “but”): like every other piece of data security technology, it requires management.
AREAS TO CONSIDER
So there are still things to think about, even with the simplicity and robustness of a self-encrypting drive. User management: how do I define who has access to that drive and who doesn’t? There are still questions around key recovery. There’s a preboot authentication step with a self-encrypting drive to unlock the drive; what happens when the user forgets it? How do I do reporting? Defining policies, and integrating those policies with everything else going on in my organization. That preboot authentication step does require users to understand what they’re doing. Ideally you want to tie the preboot step, where I type my PIN so the drive knows it’s really me, into my Active Directory authentication, so I don’t have to keep re-authenticating as a user and the impact on me is reduced. Remote administration, patching and so on are also important things to think about. One of the challenges is that you still have other operational processes: you still need to install software, update software, patch the operating system, and so on. You don’t want to physically walk to every system, nor do you want users leaving all those systems on 24/7 so they don’t interfere with your patching process. Ideally you want to be able to wake those systems up, unlock the drive, do the patching, relock it all, shut it down again, and do it all automatically. That’s certainly doable with SEDs, but it’s definitely something you want to be thinking about: “How do I integrate SED management with the other operational work I need to do as a business, so I don’t break everything else I need to get done?”
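The “wake it up” step in that patch-window workflow is often plain Wake-on-LAN, which is simple enough to sketch in standard-library Python. The unlock/relock steps are vendor- and management-tool-specific, so only the wake step is shown here; the MAC address and function names are illustrative.

```python
import socket

def wol_magic_packet(mac: str) -> bytes:
    """Build a standard Wake-on-LAN 'magic packet': six 0xFF sync
    bytes followed by the target MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the usual WoL UDP port."""
    packet = wol_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

pkt = wol_magic_packet("00:11:22:33:44:55")
assert len(pkt) == 102  # 6 sync bytes + 16 * 6-byte MAC
```

A management server would issue this, then unlock the drive through the SED management layer, patch, relock, and power the machine back down.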
CREDANT MANAGER FOR SELF-ENCRYPTING DRIVES
And if you missed it, about two or three weeks ago we launched Credant Manager for Self-Encrypting Drives, which covers a lot of those challenges. In other words, it reduces the work and the risk of deploying SEDs. It also reduces the risk of an impact on your operational processes, such as patch management. It ties everything together more tightly from a reporting and policy definition perspective, and ultimately it reduces the work, the complexity, and the impact on your users, your administrative team, your security teams and so on. So you get the benefit, simplicity and performance of SEDs, but you can manage them from a central place with a lot less impact on everything else that’s going on.
I’ve talked a few times about integration, and I’m winding up here because I know we’re running close to time and I want to leave room for questions. As you roll out Windows 7 and think about your other options for encryption and data protection, integration is important. These stats come from a report from about a year ago: eighty-eight percent of organizations have multiple administrators managing encryption keys, and part of the reason is that there’s a lot to manage. Twenty-two percent have more than ten administrators with access to manage encryption keys, which obviously makes it difficult to track who has access to what. And where there are multiple encryption suppliers in place, each almost certainly has its own key management and reporting infrastructure, which makes it very difficult to meet your compliance requirement to prove that everything is encrypted when it’s supposed to be. It also increases the risk of gaps in coverage, and that’s a problem. You don’t want systems coming onto the network that you haven’t yet rolled into your existing processes because those processes are complicated, manual and time intensive. So you really want to simplify a lot of the pieces here.
THE DATA PROTECTION PLATFORM
And I think that’s one of the things we talk to a lot of organizations about, and as I wind this up: the data protection platform. You think about self-encrypting drives, about BitLocker, about other devices and removable media, about what’s going on with the cloud. The more you can tie those together into a single platform, a single set of tools to manage, define policy, and build reports for your auditors, your stakeholders and your compliance requirements, the more you reduce work, risk and impact, both on your users and on your own folks. And that is a huge win. So I strongly recommend thinking about ways to tie these pieces of technology together, and it’s certainly something we would be more than happy to talk to you about if we haven’t already done so.
So, a couple of quick conclusions. As you plan and roll out Windows 7, it’s a great time to evaluate what you’ve got in place and what your options are from a data security perspective. There are a number of new options you’ll be looking at, but consistently across all of them the requirement for management is not going away; in many ways, as the number of options grows, that requirement for management grows with it. You’ve got a lot of options and a lot of powerful tools becoming available, but you’ve got to be able to manage them to get the most out of them. And integrating the management of those pieces, tying them all together, is what’s going to help you reduce the cost, reduce the workload on you, and ultimately reduce the risk of a breach, which, as we’re all more than aware, can be both extraordinarily painful and extraordinarily expensive.
THINKING ABOUT UPGRADING?
Many organizations start to think about upgrading to Windows 7 because of the additions in Windows 7 they might want to make use of. That raises questions about integration and the opportunities and challenges it can bring. One of those opportunities might be Windows BitLocker: organizations with Ultimate and Enterprise editions of Windows 7 should be looking at it. We’ll examine what the thinking around BitLocker should be, and how to plan and be successful with BitLocker as part of your overall strategy. As you’re upgrading, it’s also a great opportunity to look at things like self-encrypting drives. There’s a lot of buzz around removable media as part of the changes in Windows 7, but at the same time you can think about a broader strategy: Windows, removable media, increasingly mobility, and even cloud services. All of these things are having an impact on the way enterprise organizations think about data protection.
WHAT IS BITLOCKER?
Let’s cover the highlights of BitLocker. It was first included in a number of versions of Vista, and I think it’s fair to say the version now in Windows 7 is definitely an improvement over what was in Vista. It’s available in the Ultimate and Enterprise editions of Windows 7, and in Windows Server 2008 and 2008 R2. The default encryption for Windows BitLocker is AES with a 128-bit key, an industry-standard algorithm.
BITLOCKER OPERATION MODES
It operates in three modes: transparent operation mode, user authentication mode, and USB key mode. The operational mode you choose for BitLocker will have an impact on the way you plan your Windows 7 rollout. Transparent operation mode has the lowest impact as far as your users are concerned. The Trusted Platform Module (TPM), a hardware piece embedded in the system, provides what’s called a “root of trust”: when the system boots up, it checks that no one has tampered with the system while it was powered down. If nothing has changed, the system starts normally with minimal impact. Transparent operation mode may be attractive from an operational perspective, but it’s not the strongest from a security perspective. User authentication mode requires the user to enter a PIN at a pre-boot authentication step; the length of that PIN is part of the policies you set with BitLocker. Typically the PIN is eight digits long, but there is flexibility around that. The third mode, which adds another layer of authentication, is USB key mode, in which you have to plug in a USB device in order to boot the system. It’s the most intrusive, and I think it’s fair to say that using the TPM in conjunction with user authentication mode is probably the right balance between security and minimizing the impact on users. All of these options involve a trade-off: as you increase the security of your BitLocker configuration, you increase the potential impact on the user. You have to keep it up and running and get it configured, and that’s something to think about early, because management is going to be one of your challenges.
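These modes correspond to protector types that can be configured with the `manage-bde` command-line tool that ships with Windows 7. As a sketch (run from an elevated prompt; the drive letter and the choice of protectors are examples only):

```shell
REM Check current BitLocker status for the volume
manage-bde -status C:

REM Turn BitLocker on for C: and generate a recovery password
manage-bde -on C: -RecoveryPassword

REM Add a TPM+PIN protector (user authentication mode)
manage-bde -protectors -add C: -TPMAndPIN
```

Policy settings such as minimum PIN length are configured separately through Group Policy rather than on the command line.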
It has good, solid encryption: the AES algorithm, configurable with a 128- or 256-bit key, securing the volumes it covers. As a full volume encryption solution, it encrypts the entire volume, which is an impact you’ll want to think about. As I mentioned, it is an improvement over the Vista version; the Windows 7 implementation is considerably better. It can take advantage of AES-NI, the Intel processor support that accelerates encryption. It can also utilize the TPM, and pretty much all systems these days have a TPM in them. The nice thing about that is it adds a degree of trust that no one has tampered with the system while it’s been off: the system can essentially detect if someone has tried to attack it offline. That obviously improves security, but it does mean it can have an impact on users if you’re not set up to manage it appropriately. It will also leverage Active Directory on Windows Server 2003 or 2008; if you’re on 2003 you’ll probably need some schema extensions, while on 2008 it’s all built in. And because it’s included in the OS, for many organizations the primary reason they’re looking at BitLocker is that it’s already there. If you’ve got, or are moving to, the Windows 7 Enterprise or Ultimate editions, BitLocker is already in there, and that’s a fairly compelling argument to at least examine it for some users.
WHERE DOES IT FIT?
BitLocker may, however, not be appropriate for everybody. Security tools are no different from most tools: they are appropriate for some jobs and not for others. Obviously, if you’ve got Windows 7 Enterprise or Ultimate, the sensible thing is to at least look at it. It makes most sense for users who do not share systems, because it’s a volume encryption approach: when the drive is unlocked, the full volume is available unencrypted, and having everything unlocked and available for use on a shared system could be a challenge. It’s also best suited to users who don’t have highly sensitive information. That’s part of the challenge with a full volume approach once it’s unlocked; if there’s highly sensitive information, you might want to think about a more granular approach than a full volume one. Simply put, it may be a fit for some types of users in some environments, but not for others.
CHALLENGES WITH BITLOCKER
Despite the fact that it has very strong encryption, there are some challenges, and I’ll be specific about a couple of them because they have an impact on the way you think about using BitLocker. Recovery key management is one of the more significant challenges with BitLocker as it stands. Recovery keys are required with the TPM: if the TPM senses a threat, it goes into recovery mode, and if you don’t have the recovery key available for the user, you have a user who potentially cannot get into their system. A system can go into recovery mode for a lot of reasons: if it’s under attack, obviously, but even things like docking and undocking a laptop, or accidentally hitting the wrong function keys during boot, can send the system into recovery mode. It can be sensitive, and if you tinker with the way the TPM is set up, it can become incredibly sensitive. Bottom line: you’ve got to think about recovery key management. Also, BitLocker by itself really doesn’t provide much of the auditing, logging and reporting you would expect from an enterprise solution, and if you’re using multiple platforms and different types of users, you’re not going to have that integration either. You’ll probably have to put something else in place for reporting and auditing.
RECOVERY KEY MANAGEMENT
It’s important to understand recovery keys. The question that comes up most is, “What’s a recovery key? My system won’t boot.” The user is already in a great deal of pain at that point. Recovery keys are what you create in order to be able to tell the TPM that everything is fine. You only need the recovery key occasionally, when the TPM asks for it, but when you need it, you really need it. It’s a 48-digit, randomly generated key, and you have to type it in using the function keys. You can do a couple of different things with a recovery key, sometimes called a recovery password; they’re essentially the same thing, depending on how you store them. But you have to make sure they’re available, because if it’s 3:00 a.m. my time and on the other side of the world someone’s system has gone into recovery mode, you want to make sure you can get them that recovery key so they can unlock their system and keep working.
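That 48-digit format has internal structure worth knowing when you build tooling around it. To the best of our understanding of the documented format, a BitLocker recovery password is eight groups of six digits, where each group encodes a 16-bit value multiplied by 11, so every group is a multiple of 11 below 720896 and divisibility by 11 doubles as a typo check. A sketch, with illustrative function names:

```python
import secrets

def generate_recovery_password() -> str:
    """Generate a BitLocker-style 48-digit recovery password:
    eight groups of six digits, each group = (16-bit value) * 11."""
    groups = [secrets.randbelow(65536) * 11 for _ in range(8)]
    return "-".join(f"{g:06d}" for g in groups)

def looks_valid(password: str) -> bool:
    """Format check only: eight 6-digit groups, each a multiple
    of 11 below 720896. Catches most mistyped digits."""
    groups = password.split("-")
    return (len(groups) == 8 and
            all(len(g) == 6 and g.isdigit()
                and int(g) % 11 == 0 and int(g) < 720896
                for g in groups))

pw = generate_recovery_password()
assert looks_valid(pw)
assert sum(len(g) for g in pw.split("-")) == 48
```

A helpdesk tool can use a check like `looks_valid` to catch a transcription error before the user types all 48 digits into the recovery console.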
You have a few choices for storing recovery keys as you create them. You can tell the user to write it down on a piece of paper, though I would not recommend that (if you wish to tell them to do that, you certainly can). You can print it out. You can store it on a USB device, which becomes a sort of recovery key device you can plug in. Or you can store it natively in Active Directory. That’s an attractive choice from a management perspective, but not necessarily a great one from a security perspective, because the key is stored in plain text. Having recovery keys available to anyone with Active Directory administrative rights is not necessarily a good thing, because it means they can get access to those systems. You really have to think about what you can put in place to manage recovery keys and to help mitigate these challenges.
AREAS TO BE AWARE OF
There are some areas to be aware of with BitLocker. We talked about key management and reporting, but FIPS compliance is something else to consider as you roll it out. If FIPS compliance is important to you, it will have an impact on the way you configure BitLocker’s policies; you’d have to set it into FIPS compliance mode. Biometric authentication is not supported. There is support for removable media encryption, but it is not necessarily optimal from a performance and reliability perspective, so you might want to think about another solution there. There are a number of choices when it comes to layering management on top of BitLocker, and Credant as an organization can help you with that. Microsoft provides a tool called Microsoft BitLocker Administration and Monitoring, or MBAM, which is part of the Microsoft Desktop Optimization Pack. If you’re going to look at BitLocker, you’ll probably end up glancing at MBAM at some point, but from an enterprise perspective it’s probably not going to meet all of your needs. It certainly doesn’t cover all of the problems; for example, it won’t stop a privileged user or administrator from turning BitLocker off on a system if they don’t like it. So you’ll want to look at something to help you manage BitLocker as you roll it out, because otherwise you will find there are some significant holes, and it’s better to plan for those earlier rather than later.
Okay, that’s enough on BitLocker. If you want more information, www.credant.com has a wealth of additional material, including whitepapers and datasheets on best practices for managing BitLocker. There are also videos of BitLocker management tools, where you can see the policies you’ll need to set and how to set them; it’s quite helpful.
Stay tuned for part two, where we’ll dive into self-encrypting drives.
PRE-BOOT AUTHENTICATION PAIN
Let’s shift gears to pre-boot authentication (PBA). That’s the step where the user first powers up their system and authenticates, telling the system, “Yes, it really is me. Please continue booting and unlock all of the data.” However, if you’ve lived with a pre-boot system before, you know it can have some real challenges. It may require the user to learn a new step, or to use a different password than the one they normally type at their domain logon. And IT processes that weren’t designed around a pre-boot authentication step, such as applying patches, can potentially be broken by it. The good news is that self-encrypting drives (SEDs) implement PBA a little differently from software-based full disk encryption, and it’s a little simpler to hook into, provided you have good SED management capabilities in place.
Can SEDs be integrated with Active Directory? Yes, and that’s a great way to reduce the impact on end users. With the right management piece in place, the authentication from the pre-boot step can be tied through to Active Directory, so the user doesn’t have to authenticate twice on the same system, which is always something you want to avoid.
It’s also important to have good recovery methods in place. Remote recovery is a great advantage for your users, because you know they will forget their credentials at some point. You’re going to get a phone call at two in the morning from somebody on the other side of the world asking, “What’s my authentication key? I don’t remember what it is. How do I get my system powered up?” You want to be able to get them unlocked remotely, or potentially even have a self-recovery mechanism where they answer challenge questions and authenticate without having to come back to an administrator. There are pros and cons to each approach, but we’re already seeing both methods requested.
FASTER? SLOWER? JUST THE SAME?
The other question that comes up often is, “Doesn’t encryption slow my machine down?” With an SED, the answer is essentially no. Everything passes through the drive’s hardware module, where it’s encrypted on the way in and decrypted on the way out, so the encryption process has effectively no impact on performance; we’ve seen that ourselves during our own testing. The other thing to remember is that the time to security is much shorter. With previous approaches to full disk encryption, the disk had to go through a process of encryption, essentially sector by sector. Self-encrypting drives are encrypted all the time: everything on them is protected from the instant you set up the authentication key. Performance is a real win when you’re thinking about SEDs.
ROUNDING OUT THE SOLUTION
Hopefully this helps you think about some of the ways to employ SEDs. Of course, there are still things SEDs cannot do, and there are still pieces you need in order to round out all of the requirements. You’ve got to be able to provide the reports that show systems are protected and that you’re doing your due diligence; a self-encrypting drive isn’t going to handle enterprise deployment or run reports by itself, so you’ve got to think about something else there.
And remember, SEDs are not going to be right for everything. You’ve also got to consider other devices: data moving onto removable media, onto smartphones, onto thumb drives. You have to ask, “Can I manage the encryption and the policies on the self-encrypting drive from the same place as everything else?” Then you have one holistic view: “Where’s the data? What’s my security stance across the board?” It’s less work and less risk to tie all these pieces together. If you’re thinking about SEDs, I recommend tying them, where possible, into the same processes you’ve already got in place; it creates a winning solution across the board.
To summarize, self-encrypting drives are great. They’re extraordinarily powerful tools, but to get what you want out of the drive, you need a management layer.
Q&A / SUMMARY
Credant can actually double encrypt SEDs. The SED has its own encryption technology built into the drive, but you can implement policy-based encryption on top of that, which would encrypt the data again. It’s an option. I’m not saying it’s a standard you should adopt, but it’s certainly possible and would make sense in certain use cases.
What does Credant do in this space? We’ve just recently announced the ability to manage self-encrypting drives from the same set of tools you use to manage data protection on all of those other platforms. It reduces the workload and helps meet compliance needs because the data is protected. It reduces the risk of a breach, because I can ensure the correct protection is in place on the right platform and also ensure I’m not missing pieces; it’s all rolled up into the same set of reports. And it reduces the operational impact on users, because I can deploy policies consistently and manage much more effectively.
The piece we launched is Credant Manager for Self-Encrypting Drives. It helps you reduce the work of deploying SEDs by enabling you to automate and simplify a lot of the policy definition: pushing policies out to devices, switching on authentication, and tying users to the right level of access. It reduces the complexity of all of those pieces because you do it from one set of tools, and because it’s all in one place, it integrates all the reporting. You can also automate processes like patch management and updating systems, which gives you a great deal of savings in time and process. It actually makes things more secure, too: you no longer have a whole bunch of systems left on, powered up and potentially logged in.
Can you use BitLocker with a self-encrypting drive? Potentially you could. I’m not sure it’s a combination I would personally recommend, simply because at that point you’re running a software full disk approach on top of a hardware full disk approach, and I don’t think it’s necessary. If you’re looking at that, I would recommend looking at Credant’s Enterprise Edition; if you have concerns about data protection policies, you can layer that on top of the self-encrypting drive. But it may simply be enough to have an SED in place, provided it’s up and running.
There are also questions about innovations being considered around SEDs. From our perspective we think the growth and interest in self-encrypting drives is entirely natural because of the advances we already talked about. Especially again, considered against the challenges that people saw with software based full disk encryption, SEDs make a lot of sense.
Our perspective is that we provide data protection capabilities for your entire organization. If you as a customer feel it’s right to implement SEDs for one group of users, or for every user, that’s your decision; you have to make it based on your data, your organization, and the kind of information you have. Our job is to provide the tools and capabilities that make data protection much simpler, reduce the risk of a breach and of a failed deployment, and reduce the cost of management. We will be happy to help you encrypt data as it moves onto removable media and flash drives. We’ll be happy to help you encrypt data on a Mac. We’ll be happy to help you encrypt data on non-SED systems. We’ll be happy to help you manage SEDs and tie them into the management of all your systems. We’re really not here to push one solution or another down your throat.
We will give you advice based on what we see being successful in organizations like yours, and given that we’ve provided encryption and data protection management for more than ten million endpoints across over a thousand enterprise customers, we have a lot of experience. Our objective is to make you successful by deploying the right protection for you.
SED? WHAT’S THAT?
A self-encrypting drive (SED) is a disk that has built-in hardware-based encryption. It’s essentially a drive that encrypts all the information written to it, and because that encryption is done by specialized hardware, it has a number of really important and significant implications for how to use it, where to use it, how to manage it and so on. They’re made by a number of manufacturers – Hitachi, Toshiba, Seagate, and Samsung, to name a few – and more and more drive manufacturers are building out their capability to supply self-encrypting drives. The reason is that they are becoming very, very popular, both from the perspective of people wanting to put them in and from the perspective of organizations looking at them for the first time, or maybe coming back and revisiting them. The Trusted Computing Group, which obviously has something of a vested interest in this space, estimates that within five years pretty much all drives will have some self-encrypting capability built in, and that includes both traditional disk drives and solid state drives. Sometime over the next few years, most of the drives you encounter are going to be self-encrypting drives of some kind.
HOW IT WORKS
So, how do they work? Very simply. Anything that gets written to the drive passes through a hardware encryption module that encrypts it on its way onto the drive and then decrypts it on the way back. Pretty straightforward. Everything that gets written to the drive is encrypted – the whole thing. The algorithms are the ones the standards call for – typically AES-128 or AES-256. These are well-established, industry-standard encryption algorithms, as you would expect, meaning the encryption is going to be solid and secure.
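The write-and-read path can be sketched in a few lines of Python. This is purely a toy model: the XOR-with-hash "cipher" below is a stand-in for the drive's real AES engine, and the `ToySED` class and all its names are invented for illustration.

```python
import hashlib
from itertools import count

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy counter-mode keystream; a stand-in for the drive's AES engine."""
    out = b""
    for i in count():
        if len(out) >= length:
            break
        out += hashlib.sha256(key + nonce + i.to_bytes(8, "big")).digest()
    return out[:length]

class ToySED:
    """Models the hardware path: every write is encrypted, every read decrypted."""
    def __init__(self, dek: bytes):
        self._dek = dek          # data encryption key, fixed at "manufacture"
        self._platter = {}       # sector number -> ciphertext

    def write(self, sector: int, data: bytes) -> None:
        ks = keystream(self._dek, sector.to_bytes(8, "big"), len(data))
        self._platter[sector] = bytes(a ^ b for a, b in zip(data, ks))

    def read(self, sector: int) -> bytes:
        ct = self._platter[sector]
        ks = keystream(self._dek, sector.to_bytes(8, "big"), len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

drive = ToySED(dek=b"\x13" * 32)
drive.write(0, b"quarterly-results.xlsx")
assert drive.read(0) == b"quarterly-results.xlsx"      # transparent to the OS
assert drive._platter[0] != b"quarterly-results.xlsx"  # at rest, only ciphertext
```

The point the sketch makes is that the operating system and file system never see ciphertext; the encryption happens below them, which is why there is no software to install and no performance tax on the host CPU.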
Obviously there are a number of caveats, and the caveats must always – as they do with any kind of encryption discussion – come down to what happens with the keys. And we’re going to talk about keys, because there are a couple of keys that are very important when it comes to these drives.
WHY THE INTEREST?
Why the interest? In many cases, the organizations we see looking at self-encrypting drive technology are going through some kind of hardware refresh, or they’re re-evaluating their initial deployments (or attempted deployments) of software-based full disk encryption. They like the idea of full disk encryption, but as I’m sure you know, full disk encryption can have some management challenges. So what we’re seeing is a re-evaluation of the way full disk encryption is implemented, with self-encrypting drives as the alternative. SEDs are much faster than software-based full disk encryption and much more reliable. Data loss is less likely with a self-encrypting drive because they’re much less sensitive to issues around bad sectors. They also take away a lot of the pain associated with the initial software install – defragmenting the drive and checking for bad sectors – steps that were often fairly sensitive to issues with the drive itself. In a nutshell, that’s a self-encrypting drive – and they’re simple.
WHERE DO THEY FIT
Where do they fit? Typically, organizations interested in self-encrypting drives are driven by a couple of things. One is that they want a simple solution, and one that’s simple to live with. Full disk tends to offer simplicity since everything gets encrypted, and self-encrypting drives are a very simple way to implement full disk encryption in hardware. They also fit well if you don’t need to provide different encryption for different types of users: because it’s a full disk approach, the drive is unlocked all in one go rather than in different portions for different users. That’s a consideration you have to keep in mind. The other thing to think about is fairly self-evident: self-encrypting drives only encrypt the information that’s on the drive itself. You will still need to think about protecting information as it moves off the drive. That said, if this matches your requirements, then SEDs may be a fit.
OPAL – CONNECTING AND PROTECTING
I want to touch on OPAL really briefly, which is the standard for self-encrypting drive technology. It is published by the Trusted Computing Group and defines a number of capabilities for self-encrypting drives. I don’t intend to go through all of them in any great detail, but if you come across OPAL drives, understand that OPAL is really the standard the industry is moving to for SEDs. It defines the functions of these drives and a lot of the way they will interact with other hardware. OPAL’s an important standard, and you should expect pretty much all the devices you look at in the future to be OPAL compliant.
Let’s talk about common misconceptions and areas where there may be some confusion around self-encrypting drives. We talked a little bit about security for self-encrypting drives, and while that may seem odd, it is an important consideration. There’s also management and applicability – where do you use self-encrypting drives, and just where is the right place, exactly? Then there’s pre-boot authentication. If you’ve ever dealt with pre-boot authentication, certainly in the software world, it can have some serious impacts and can be quite a headache to manage. So let’s talk about what pre-boot authentication can look like for self-encrypting drive technology, and about performance, too. One of the questions we get a lot is, “What’s the performance impact if I go to a self-encrypting drive?”
SECURE FROM DAY 1
One of the interesting things about self-encrypting drives is that everything is encrypted all the time. The entire drive is encrypted from day one, whether you want it to be or not. It’s not possible to have a self-encrypting drive that isn’t encrypted, which sounds great. The way this works is that there are a couple of keys involved. The first key you need to know about is the data encryption key. That’s the key the drive uses to encrypt information; it is created when the drive is built and locked away in the hardware. It encrypts all information saved to the drive and decrypts it on the way back. The problem is that that key is available all the time. Essentially, it’s like having a great system of locks on your front door but leaving the key in the door every time you go out – you might have great locks, but there’s no security. That’s where the second key comes in: the authentication key. The authentication key locks away the data encryption key. It encrypts it so that you can’t get to it unless you have the authentication key, which you take with you. That’s the key that enables you to prove that you are the authorized user of the device and the information. So, like everything else in encryption, the big challenge is key management. You must secure the authentication key and manage it appropriately.
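The relationship between the two keys can be sketched like this. Again, this is a toy model: real OPAL drives wrap the data encryption key in hardware with a proper cipher, not the XOR pad used here for illustration, and the function names are invented.

```python
import hashlib
import secrets

def wrap(key: bytes, ak: bytes) -> bytes:
    """Toy key wrap: XOR the DEK with a pad derived from the authentication
    key. Because XOR is its own inverse, calling wrap() again unwraps."""
    pad = hashlib.sha256(b"wrap" + ak).digest()
    return bytes(a ^ b for a, b in zip(key, pad))

dek = secrets.token_bytes(32)                   # created when the drive is built
ak = hashlib.sha256(b"user passphrase").digest()  # derived from what the user presents

stored_on_drive = wrap(dek, ak)                 # only the wrapped DEK is kept
assert stored_on_drive != dek                   # the plaintext DEK is never at rest
assert wrap(stored_on_drive, ak) == dek         # presenting the AK unlocks the DEK
```

The takeaway is that the data encryption key never leaves the drive; the only thing the user (or a management system) ever handles is the authentication key that unlocks it.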
Now, the good news is that devices are encrypted from day one and there’s no setup: the drive is running and encrypting everything as it gets written. And what if I need to retire a drive and make sure that nothing on it can be recovered? Once you’ve got that initial key management under control, all you do is destroy the key and the information is unusable.
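That "destroy the key" step, often called crypto-erase, is worth making concrete. In the toy sketch below the XOR-with-hash cipher stands in for the drive's real AES engine; the point is that once the key is gone, the ciphertext left on the platter is just noise, so sanitizing a drive takes milliseconds instead of hours of overwriting.

```python
import hashlib
import secrets

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher (stand-in for the drive's AES engine). XOR with a
    key-derived pad, so applying it twice with the same key decrypts."""
    pad = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, pad))

dek = secrets.token_bytes(32)
ciphertext = xor_crypt(b"patient records", dek)
assert xor_crypt(ciphertext, dek) == b"patient records"  # key present: readable

# Crypto-erase: the drive overwrites its only copy of the DEK. The ciphertext
# is still physically on the platter, but without the key it cannot be
# turned back into the original data.
dek = None
assert ciphertext != b"patient records"
```

This is why key management matters so much: the same one-step destruction that makes decommissioning easy would be a disaster if an attacker, or an accident, could trigger it against a drive you still need.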
THEY DON’T NEED MANAGEMENT
I’m sure you’re asking, “Okay, so how would I do that?” You do that by putting in a management layer, and this should not be a great surprise to anybody. If you want to manage all of those keys, enable people to get access to their systems without any great difficulty and ensure that they can continue to have access, you need a management layer. There’s technology that enables you to activate drives, set policies, manage which users have access and when, and of course remove their right to access if you need to. You really have to think about maintaining control over who has access to authentication keys and when they need to get access. You also have to consider things like user recovery, for when a user inevitably is on the other side of the planet, has lost their authentication key, and can’t get in. Things like this are a big challenge with any encryption technology, especially full disk approaches. One of the big complaints concerns the pre-boot step – the step where the user authenticates himself. If that step is difficult to manage, it can interfere with patch management: in the worst case, people literally have to be at their systems to let patches complete, and that’s really not ideal at all. You want to ask, “Can my management layer let me implement pre-boot authentication while keeping my patch management processes working?” System loss is another challenge: if a device is lost, how do I ensure that people can’t get access to it anymore? Can I kill those keys quickly to prevent people from getting in? And that brings up reporting and auditing. I want to be able to prove that these controls are in place, make sure that the information is protected at all times, and provide auditing and compliance reporting to my internal stakeholders, my compliance managers and so on.
These are the major things that you need to think about when you’re talking about management.
ONE SIZE FITS ALL
One of the other things to bear in mind is the idea of one size fits all. SEDs are great and extremely effective. They are becoming increasingly affordable as price points come down, but they’re still not necessarily the right solution for everything. One of the challenges with any full disk approach is that once you unlock the drive, the whole thing is unlocked. For example, if I have sensitive information on my device and I need to give it to an administrator or a contract organization to work on, I need to have that drive unlocked – and it could be a concern that they then have access to any information on that system. In other words, unlocking the drive can give access to what I would call “non-authorized” users. That’s also something to think about. So they’re great tools, but use them in the right place. There are always things to consider: “What happens when information moves onto a different system without an SED? What happens when I move it out into a cloud environment?” They’re a great solution, yes, but you obviously have to think beyond just that device.
Stay tuned as we shift gears to pre-boot authentication and a well-rounded SED solution.
There are many inaccurate assumptions about the differences between software and hardware encryption, about management, and even about the benefits. Self-encrypting drives (SEDs) can be an effective tool in your data protection arsenal.
The launch of Credant Manager for Self-Encrypting Drives (part of Credant Enterprise Edition 7.3) was significant in a couple of ways, and it’s something I hear from both inside and outside the organization as our customers start to look at it and make plans to upgrade.
The first is that it recognizes a growing trend in the IT security industry – the re-evaluation of hardware-based encryption like self-encrypting drives (SEDs) as not only a viable choice to keep data secure, but a sensible and economic one too.
SEDs are very powerful tools, but like all security tools they need self-encrypting drive management, and it’s the lack of well integrated and simple to use management tools that has been partly responsible for their relatively slow adoption. Cost was certainly another factor. However it’s clear that as price points for SEDs fall (and more and more systems will be available with SEDs as a standard choice) then the time has come to look closely at what they provide.
One of the great things about SED technology is that it provides the simplicity of full-disk encryption (FDE) without the painful management overhead and user impact of older software-based approaches. SEDs are reliable, fast and far easier to roll out and live with than software FDE. With the publication of the OPAL standard, and the growing number of OPAL compliant devices, the time for SEDs has clearly arrived.
As a result, supporting them and making it easier for our customers to deploy, manage and integrate is an obvious choice for us.
Which brings me to the second reason that this launch was highly significant: it continued the process of building out Credant’s Data Protection Platform approach which we’ve been talking about for some time. The idea is simple – data moves faster than ever, and on more devices than ever, and we are working hard to make sure that information is protected across its full lifecycle. So adding support for this important and growing technology is both obvious and essential.
SEDs are important, and with the right management tool, they can be a lifesaver in the event of a breach – so take a look at what we’re doing with the 7.3 release and let me know what you think.
In the meantime, let me tell you a little more about what we see on the horizon for the Data Protection Platform and why the concept of data lifecycle protection is so important. But that’s for next time!
So what does all this stuff mean? I’ve thrown a lot of numbers and stats at you. I think there are really three
significant trends that we see when we talk to organizations about what they are worried about from a security perspective. First, there’s all this change in what traditional IT has to encompass. We’ve got Bring Your Own Device, consumerization, virtualization and cloud services. All of these things are occurring right now, churning the infrastructure of IT and increasing the complexity of managing these systems – and that’s not a good thing. We know it makes things harder to track, harder to keep safe, harder to report on and prove compliance. At the same time, the physical implementation of IT is changing, and the way that information is used is changing too. There’s an incredible increase in the mobility of data, and I think that’s only going to keep accelerating. Information is moving faster and in greater quantities to more places than it ever has in the past. As a result, tracking that information – understanding who has it, where it is, who has access to it and whether they should – is getting harder and harder.
Meeting audit and compliance requirements and so on is becoming extraordinarily difficult. Finally, in case you hadn’t noticed, insiders are a major problem, and controlling the way that insiders interact with information is also becoming a serious problem. Because of the Bring Your Own Device revolution and because of consumerization, the ability to manage insiders and the way they use information is eroding away.
If you think about what people really want, if you think about what your users are going to be asking you for, they are expecting and demanding access to information anywhere, anytime, on whatever device they want to use. They want it now and they want it fast. They want to share it with whomever they need to share it with. They don’t want security to be a roadblock to that access.
At the same time, organizations are responsible for staying in constant control of that data. We have to control who has access to it, maintain visibility, and manage access to that information. These are two really conflicting requirements. They are in a sort of dynamic tension with each other, and that tension has to be managed.
And of course, as I mentioned earlier, cloud is really throwing gasoline on that particular fire. Two hundred and eighty eight million – if you don’t recognize it – is the number (at least) of files uploaded and accessed on Dropbox every day. That’s about a million files every five minutes. I go back to the nine percent of organizations that don’t think they’re going to see increased cloud usage this year and I say, au contraire, I think they’re already seeing an increase in cloud usage. I think increase in cloud usage is occurring inside those organizations or is occurring on devices that have been or are attached to those organizations.
End users are bringing what are essentially consumer initiated cloud services into businesses to share, to collaborate, to move, to backup and to store files in incredibly large quantities in a way that is beyond their current ability to control. This is the poster child for consumerization and its impact on our ability to secure information over the next ten years. In all honesty, most organizations have yet to get their arms around this problem.
In order to get our arms around this problem, we have to move from thinking about devices to thinking about data. It seems pretty obvious because the devices are out of our control, proliferating at a rate that we can’t manage, or are simply virtual devices that exist somewhere else. We’ve got to think about data and data-centric security. The best way to do that is to focus on building data-centric security into the way that we think about information security in general. It’s the only way to meet the challenges of consumerization and mobility. But to do that, we’ve really got to focus on the core of data-centric security, and I would argue very strongly that things like encryption and tokenization are what essentially make data self-defending. They both have to remain as the fundamentally enabling approach for data-centric security. By enabling, I mean it lets you build a data centric security mechanism, but it also enables your organization to take advantage of new technologies and new approaches to more easily facilitate what your users want to do. To do that though, you’ve got to ensure that security is seamless. It has to just work. If I take information from a thumb drive, move it onto my laptop, move it into a virtual platform, and then move it off there into a cloud storage that I access from my smartphone or tablet, all that stuff has to just happen. It has to happen in a way that doesn’t impact my ability to get my job done. Ultimately, building that capability is the core challenge for IT and IT security over the next five to 10 years.
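To make the tokenization half of that claim concrete, here is a minimal sketch of the vault pattern. The `TokenVault` class and the token format are invented for illustration; production tokenization systems add access control, auditing, and often format-preserving tokens.

```python
import secrets

class TokenVault:
    """Minimal tokenization sketch: sensitive values are swapped for random
    tokens, and the real value lives only inside the vault."""
    def __init__(self):
        self._vault = {}  # token -> original sensitive value

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # random, reveals nothing
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")

# The token can travel to laptops, clouds and thumb drives; it is useless
# without access to the vault, so the data defends itself wherever it goes.
assert token != "4111-1111-1111-1111"
assert vault.detokenize(token) == "4111-1111-1111-1111"
```

Like encryption, the pattern shifts the security problem from "where is every copy of the data?" to "who can reach the vault?", which is exactly the data-centric move the paragraph above describes.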
What about you? What do you think about these core problems? What other threats are you seeing? If you have any questions, comments or feedback, I’d love to hear it. At CREDANT, we work with some of the largest organizations in the world to help them build seamless data-centric data protection that allows employees to get their jobs done without causing problems. We do it in a way that seamlessly reduces the risk of a breach, so information can move from device to device in a managed and secure manner.
Let’s get back to looking at some of the biggest perceived data threats coming our way in 2012. Data is now increasingly mobile, but that mobility is coming at a cost. Taking a look at the Department of Health and Human Services, 39 percent of all of the protected healthcare breaches covered by HIPAA and HITECH occurred on a laptop or other portable device. Naturally, more people are going to portable devices because they’re easy to move around and easy to move information to and from. However, they’re difficult to manage, especially when they’re owned by someone other than the organization. And they’re very, very easy to lose. CREDANT recently conducted a hotel survey that found that of the thousands of devices that are lost, 81 percent are smartphones or tablets. Interestingly, 45 percent of lost devices were never claimed. Data is more readily lost and is extraordinarily difficult to recover. This is a significant problem, one of those things a lot of organizations are talking about these days. How do we deal with removable media, with laptops, with external storage devices? How do we track information moving to and from them? The Bring Your Own Device revolution is underway, and it is really causing some challenges to the way that IT security and best practices get applied across the organization.
The “Aftermath of a Data Breach” study showed that in only about a quarter of the cases where customer data was lost due to a breach was the data definitively encrypted. In fact, 60 percent of the time it was definitely not encrypted, and 16 percent of respondents weren’t really sure. That’s where CREDANT comes in – it’s one of the core things we do. We help customers manage encryption.
Of course now, more and more devices are being shipped with self-encrypting drive technology built in. The challenge is not that the capability is there. The challenge is how do I turn it on, how do I manage it, how do I ensure that I can prove that it was on at the time that the device was lost or a breach occurred. These are real headaches when you think about encryption.
What are the causes of breaches? It’s not terribly surprising that a third of the breaches in the Aftermath of a Data Breach study were caused by “negligent insiders.” You know, negligent is a fairly strong word. I mean, these are not people that are somehow criminally negligent, they are simply doing their job and they copy something onto a CD, or they move something to a thumb drive, or they leave their laptop in the trunk of their car or something gets lost, something gets stolen, something gets exposed somehow. It’s unfortunate that so many breaches occur that way because it’s essentially the low hanging fruit of data security.
So, after a breach occurs, what happens next? One of the questions asked in the study was “What steps do you believe were most helpful in reducing the negative consequences of a data breach?” The number one answer was “retain outside legal counsel.” It might sound kind of cynical, but there are actually some interesting trends occurring that are a bit more hopeful.
The second highest answer was “assess harm to victims” which I personally think is entirely the right thing to do. There was also a huge jump in the number of companies that hired external forensic experts to investigate the breach. That’s a huge step forward, because as we all know that during the initial investigation period it’s very easy for information to get lost. We’ve all heard stories of the first knee jerk reaction to a system that’s been breached is to turn it off. When really, that’s the last thing anyone should do. You don’t go shutting down systems that have been breached. You bring in forensic teams to investigate those systems.
Looking at how consumers prefer to be notified about a data breach, no one wants to pick up the phone anymore. It used to be that people thought the best way to handle notification was to quickly notify via letter or telephone, but that has fallen out of favor.
I think what we’re seeing here is a realization on the part of organizations as a whole that a breach is a bad thing. It is no longer an “oops, our bad,” kind of a hiccup. Breaches are significant things. On top of fines, we’re talking about corporate embarrassment and damage to brand. Not to mention class action law suits.
I think what we’re seeing here is an organizational shift to being more proactive. Organizations are saying, “Let’s get some experts in. Let’s get some legal counsel. Let’s take things slowly and do the right things here.” That’s good news from an information security perspective because it means these events are now becoming something of a boardroom discussion. The issue is getting elevated, and over time that will be a major benefit.
Stay tuned for Monday’s final post in this series.
As we look down the road at what the next year holds, let’s take a look at the biggest perceived data threats in 2012. It’s hard not to think about Roland Emmerich’s movie 2012, but hopefully our predictions for potential threats will be a little less apocalyptic than the ones in the movie. Perhaps a little more sensible and realistic.
There are some excellent reports out there on this topic – the Ponemon Institute released “The 2012 State of the Endpoint Report” and “Aftermath of a Data Breach.” Great resources.
In general, confidence in security is not doing very well. Sixty-six percent of people, according to the studies, felt that they are not more secure than they have been in previous years or are at least unsure about their level of security. And, that may or may not be an accurate reflection of the reality. Maybe it’s in part due to the level of coverage that breaches receive and the larger scale, hacktivism type of attacks that occurred over the course of the last year. We are either in a state where people don’t trust information security or we’re in a state of change, a sort of crossroads that remains to be seen. Regardless, there are some big decisions that need to be made.
Thinking about some of the emerging trends from last year, incidents of viruses and malware rose from about 27 percent of organizations to 43 percent of organizations. However, the organizations that made data protection a priority saw that same percentage drop significantly from 61 percent to 29 percent. So what’s going on here really?
I think what’s going on is that we’re seeing organizations actually being more concerned about other issues. In fact, I think the reason is that they think they’re going to have more important things to worry about. Not to say that malware and viruses are not real problems – they certainly are. But the big-ticket item that’s really causing concern is this huge growth in mobility. The increase in the number and range of mobile platforms is a real challenge.
Inevitably, there’s this wave of concern building around cloud computing and how we manage cloud as it starts to grow in its impact on the enterprises. So these are what I think are diverting attention away from some of the old staples of security discussion: Mobility, resources, data mobility, mobile platforms, consumerization and cloud are absolutely huge challenges.
So what is on the rise? Mobility, hands down. The share of organizations saying that there was a significant risk posed by mobile devices such as smartphones and tablets increased dramatically, from 9 percent to 48 percent. We’re seeing mobile platforms being increasingly targeted. There’s also the exponential growth in the Bring Your Own Device (BYOD) realm, and a consumerization aspect: average employees are walking into the organization saying, please connect my phone, my tablet, and so on. A study published at the end of last year by the Computing Technology Industry Association on the use of mobile devices in healthcare found that about 30 percent of doctors were already accessing medical records online through applications running on smartphones and tablets, and that number is likely to grow to something like 50 percent by the end of 2012. The challenge of managing that, and of extending the controls in place to cover those devices, is a very significant one. It’s no real surprise that we see a big jump in concern about mobile devices and mobile computing on a broader scale.
Another trend on the rise is the increasing amount of virtualized environments. The 2012 State of the Endpoint Report showed 52 percent of organizations felt that their investments in virtualized environments of some kind are going to increase over the course of the year, or have already increased. It’s sobering to note that almost half of organizations don’t have a single department dedicated to virtualization security. Most organizations simply share the responsibility between departments, which blurs the boundaries of who owns what.
Other increases, which aren’t really surprising, are that 91 percent of organizations saw third party or internal cloud computing risks increase. Most organizations are planning to increase their investment in the use of the cloud. It’s probably also no great surprise that a lot of organizations are still struggling with what that cloud strategy should look like. Forty-one percent say they didn’t really have a cloud strategy yet and frankly, I can’t blame them because it is a complex question. The strategy has to embrace the entire organization and yet the very nature of the way a lot of cloud services are delivered tends to undercut the central control of the typical IT and security organization by essentially delivering services to individual business units and sometimes individual users. It’s a complex problem and it’s getting more complex.
Stay tuned for the next post, where we’ll continue looking at data threats to watch out for in 2012.
The recent release of the Cloud Security Alliance’s first whitepaper on Security as a Service is an important step for a lot of reasons.
As part of the important debate around the impact of the cloud on security practices, it’s important not to forget that the cloud can also be a positive force when it comes to information security. There’s no doubt that a wholesale move of sensitive data into cloud storage and processes is being held back by a raft of operational security concerns, as well as compliance and audit complexities. But at the same time the opportunities to actually improve security overall do exist.
In this white paper, the CSA outlined 10 types of service deliverables through the cloud itself:
- Identity and Access Management
- Data Loss Prevention
- Web Security
- Email Security
- Security Assessments
- Intrusion Management
- Security Information and Event Management
- Business Continuity and Disaster Recovery
- Network Security
At CREDANT, we’ve been closely involved in this initiative, because it’s something all of us feel very strongly about. There exists an opportunity to both improve the quality and availability of key security technologies using the cloud as a delivery mechanism.
In our case, with our singular focus on data security, encryption was the obvious vehicle. The role of encryption in enabling data to be securely stored in the cloud is pretty much universally accepted. However, the big hurdle that must be crossed now is to make the key management of that encryption secure, simple and cost-effective. If we can do that, then the opportunity to significantly move the safe use of cloud services forward will be immense.
There’s a massive amount of pent-up demand for cloud services, and making those services safe to use will have a far reaching impact on opening the cloud up for business. And that’s something worth working on.
In my last post, I took on the argument that organizations in general are fairly indifferent to information security. Yes, the breaches we see hitting the headlines are bad, but they hit the headlines precisely because they are news, not because they are the norm.
However, I also made the point that I think things are going to get worse before they get better.
What we see now is the first ripples of a change that is occurring in the very way we will have to think about information security. The real splash is yet to come, and when it does, to paraphrase Robert Bolt, the wave may swamp more than a few boats.
For a long, long time (at least as measured in the computing industry) security practice was the security of “stuff.” It was measured in firewalls deployed, network packets sniffed, devices monitored, locks on doors. And all these things are good, of course. Nothing here is going away, but the center of gravity for information security has shifted – away from “things” and towards “information.”
It may seem self-evident that information security is about the security of information, but it would be a mistake to assume that’s the case. Partly this is driven by the history of security functions – they were often an offshoot of the IT department and therefore inherited an understandable bias towards network and machine security. It really took the emergence of both compliance mandates and breach notification laws to start to accelerate thinking towards information-centric security, and that shift is ongoing.
But while this sea-change is occurring, cross currents are further churning the waters. The emergence of cloud computing models essentially tears away the capability to manage security from a device-centric perspective, rather like ripping a band-aid off. It’s painful, and it’s happening quickly.
As a result of the pressure from business leaders to adopt cloud computing services, the security industry is being forced to quickly re-evaluate priorities and capabilities. Thinking about the security of devices becomes far less important when the devices in question are virtual, hosted off site, and beyond your control. Information-centric security becomes paramount because not only does it represent the core of the problem to be solved (how do I keep data safe?), it may also be the only thing over which your organization has control.
Cloud does not just drive a move to data-centric thinking, it demands it. And the companies that succeed in focusing on data-centric security will be the ones who can most aggressively adopt, and benefit from, the cloud.
The good news is that cloud also offers an opportunity to reset the way we provide security services and capabilities. Because the cloud makes it possible to deliver services quickly and at low cost to almost everyone, the possibility of bringing best-of-breed security to every organization on the internet suddenly opens up, and that may ultimately have a beneficial effect that vastly outweighs the short-term pain of this transition.
The cloud is going to change the way we consume IT services, and in the end it must also change the way we think about securing those services and the data upon which they operate. The good news is that the stars may finally be aligning, and good business sense may at last mean good security, too.