10 Benefits of Virtualization for Your Data Center

Since its emergence over two decades ago, virtualization has become industry-standard best practice in data centers from SMBs to the largest enterprises. But what does virtualization mean in terms of business value? This post covers 10 benefits of virtualization for your data center.

Benefits of Virtualization 101

The basic idea behind virtualization is to increase the utilization and operational efficiency of physical server hardware by running multiple “virtualized” software processes on each physical server. More work can thus be done with the same physical resources.

Without virtualization, most SMB IT departments need to deploy multiple servers to meet the maximum storage and processing demands of individual applications like database instances. But in that scenario, servers typically operate at a fraction of their capacity. This is highly inefficient and drives excessive operating costs.

By using software to simulate hardware functionality and create virtual machines (VMs), organizations can run multiple operating systems and applications on one physical (host) server. Each VM is a separate software container with its own operating system and application. A hypervisor is software that coordinates the VMs running on a physical host and dynamically allocates the host’s computing resources across those VMs.
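To make the resource-sharing idea concrete, here is a toy Python sketch of a host dividing fixed CPU and memory capacity among VMs. This is purely conceptual; a real hypervisor's internals (scheduling, overcommit, memory ballooning) are far more sophisticated.

```python
# Conceptual sketch only: a "hypervisor" tracking how one physical host's
# fixed resources are divided among isolated guest VMs.

class Host:
    def __init__(self, cores, ram_gb):
        self.cores = cores
        self.ram_gb = ram_gb
        self.vms = {}  # VM name -> (cores, ram_gb) allocated to it

    def free(self):
        used_cores = sum(c for c, _ in self.vms.values())
        used_ram = sum(r for _, r in self.vms.values())
        return self.cores - used_cores, self.ram_gb - used_ram

    def start_vm(self, name, cores, ram_gb):
        free_cores, free_ram = self.free()
        if cores > free_cores or ram_gb > free_ram:
            raise RuntimeError(f"insufficient capacity for {name}")
        self.vms[name] = (cores, ram_gb)

host = Host(cores=32, ram_gb=256)
host.start_vm("db01", cores=8, ram_gb=64)
host.start_vm("web01", cores=4, ram_gb=16)
print(host.free())  # (20, 176) -- capacity left for more VMs
```

The point is simply that capacity left over after one workload is immediately available to the next, instead of sitting idle in a dedicated physical server.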

Some of the things you can virtualize in a modern data center include servers, applications, physical networks, desktop environments, and storage capacity (pooled across multiple physical devices). 

Now that we’ve quickly covered how virtualization works, let’s talk about its business benefits.

1. Significantly reduced IT expenses

Hardware is generally the biggest data center cost. Because virtualization lets you do more work with the hardware you already have, it helps keep hardware costs in check.

But optimizing hardware utilization has other cost benefits besides fewer hardware purchases. These include reduced downtime thanks to less equipment to fail, easier maintenance because there’s less to maintain, and reduced power, space, and cooling costs across your data center footprint. These all add up to significantly reduced IT expenses. 

2. Enhanced resiliency and reduced recovery time following a disaster or outage

When an outage or disaster impacts a physical server, it may need to be rebooted, repaired, or replaced—which could take hours or days. Thanks to “snapshots,” virtualized services are much quicker to replicate, provision, and redeploy, often within minutes. It’s also much easier to move VMs to a new physical location in response to flooding, fire, etc. This can simplify and speed up disaster recovery and improve the chances of successful IT recovery for your business.

3. Improved IT administrative efficiency and productivity

Fewer physical servers mean less time required to maintain your physical IT infrastructure. VMs are also faster and easier to update and patch than physical servers. 

Suddenly your IT admins will have more time to keep business services running smoothly, ensure data is backed up, deal with security tasks, address user requests, and so on. 

4. Support for DevOps and CI/CD

Because VMs are logically isolated from one another, DevOps teams can easily spin up dev or test VMs without threatening the stability of your production environment. Accidents happen, but when they’re confined to test VMs the consequences are minimized.

For example, when Microsoft issues a new Windows patch, you can clone a VM, apply and test the update, and then patch the production system. This reduces planned downtime as well as application crashes and other unplanned downtime events.

5. Reducing your data center’s carbon footprint

Despite extensive design efforts to reduce it, physical servers generate heat as a byproduct of their operation. Cooling, an energy-intensive process, is therefore required to prevent equipment from overheating.

By virtualizing servers and using less hardware, you need less cooling. This reduces your data center’s carbon footprint and cuts the impact of your energy usage on the environment. Customers, business partners, boards and other stakeholders will all appreciate these changes.

6. Simplifying a move to cloud computing

Virtualization and cloud computing are complementary approaches to reducing data center costs and improving IT agility and resilience. When you move your workloads to VMs, that makes them simpler and quicker to move to the cloud. A “virtual” mindset also makes it easier for your IT team to move to the shared, service-on-demand world of the cloud. 

7. Reduced vendor lock-in on hardware

Because there is a layer of abstraction between VMs and the underlying physical hardware, VMs are inherently hardware agnostic. This makes them much easier to move from one manufacturer’s server hardware to another. Does that mean you can move virtual workloads seamlessly from, say, x86 hardware to an IBM mainframe? Not necessarily. But moving from Dell x86 to Lenovo x86, for example, should be no problem. 

8. More responsiveness to changing business needs

VMs are a key component of a more flexible IT infrastructure that allows IT to pivot faster around changing business needs. For example, if a virtualized web application becomes popular, IT can quickly allocate more processing power, memory, and/or storage as needed.

Contrast that with reallocation of resources among physical systems where these parameters are fixed. You might need to start by buying new hardware and waiting while it is shipped to you, installed, configured, etc. 

9. Greater IT scalability

Virtualized systems utilize resources more efficiently and therefore can scale up or down more easily. If your company is growing fast, virtualization can help you meet increasing resource demands faster while spending less to do it. 

Virtualization also eases the stress on seasonal businesses where you have peaks and valleys in utilization. Provided your overall hardware footprint is adequate, you can efficiently scale up and down on demand to meet peak needs without committing to bigger investments that you won’t utilize during slower times.

10. More available network bandwidth

By virtualizing and consolidating servers onto a smaller number of physical servers, you eliminate the need to send all that network traffic back and forth between those systems. This frees up network bandwidth and improves overall network performance.

A word of caution about virtual database servers

Because Buda Consulting sees everything through a database lens, I will mention one virtualization risk related to database technology. Some database vendors, including Oracle, do not recognize resource partitioning on non-native virtualization platforms for licensing purposes. This means that if you have 2,000 processor cores in your virtual environment and allocate only 32 of them to Oracle, you still need to license all 2,000 cores. This is not an issue if you use Oracle VM for virtualization, but it is if you are using VMware or another virtualization platform. Many companies that move from physical to virtual without realizing this end up violating Oracle’s licensing terms and facing very large fines. So be aware of this issue before going virtual and plan your environments accordingly.
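To see why this matters financially, here is a quick back-of-the-envelope calculation using the numbers above and a hypothetical per-core license price. Real Oracle pricing also involves core factors, editions, and negotiated discounts, so treat this purely as an illustration of the gap:

```python
# Illustration of the licensing math described above. The per-core price
# is a hypothetical placeholder; confirm real figures with Oracle or a
# licensing specialist.

total_cores_in_cluster = 2000    # all cores in the virtual environment
cores_allocated_to_oracle = 32   # what you actually give the database
price_per_core = 10_000          # hypothetical list price, USD

cost_you_might_expect = cores_allocated_to_oracle * price_per_core
cost_if_partitioning_not_recognized = total_cores_in_cluster * price_per_core

print(cost_you_might_expect)                # 320000
print(cost_if_partitioning_not_recognized)  # 20000000
```

Even with made-up prices, the two-orders-of-magnitude difference shows why this deserves attention before you go virtual.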

Next steps

If your business hasn’t yet started taking advantage of virtualization, now is the time. A virtualization strategy has never been more important as businesses look ahead to various levels of IT modernization, whether that means moving more of your business processes to the cloud or just getting more bang for your IT buck. Contact us today for a no-obligation consultation.

5 Database Security Risks You Probably Don’t Know You Have

I recently appeared on an episode of The Virtual CISO Podcast hosted by my friend John Verry titled “Confronting the Wild West of Database Security.” In our conversation, I emphasized that despite the criticality of the data involved, many companies fail to appreciate the cybersecurity risks associated with their databases. They simply don’t realize how big their database attack surface really is.

Here are 5 significant threats to your databases that we often find our clients are unaware of.

One: Inconsistent user account management

A great many of the database vulnerabilities we see relate to sloppy, inconsistent, or ad hoc management of user accounts and login profiles. Issues with privileged users, obsolete accounts, and default passwords in use very often slide under the radar. This potentially leaves the door open for unwelcome guests to pay a visit to your database. 

Two: Non-masked data in QA and dev environments

It’s scary how often we see non-masked data used in dev/test scenarios. In many cases, the production environment is well secured, but the development and QA environments are much less well secured. Yet the same data is being used in both. There’s no reason for this given the plethora of tools available for masking or obfuscating data.
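As a minimal illustration of the principle (not a substitute for a real masking tool, which would preserve data formats and referential integrity), here is a Python sketch that replaces sensitive fields with deterministic tokens before a row leaves production. The field names are hypothetical:

```python
# Minimal sketch of masking a record before it leaves production.
# The dev/QA copy keeps the data's shape, not its sensitive values.

import hashlib

SENSITIVE = {"name", "ssn", "email", "salary"}  # illustrative field names

def mask_value(value):
    # Deterministic one-way token, so joins on the column still work.
    return hashlib.sha256(str(value).encode()).hexdigest()[:12]

def mask_record(record):
    return {k: (mask_value(v) if k in SENSITIVE else v)
            for k, v in record.items()}

prod_row = {"id": 42, "name": "J. Smith", "ssn": "123-45-6789",
            "email": "jsmith@example.com", "salary": 95000}
dev_row = mask_record(prod_row)
print(dev_row["ssn"])  # a 12-character token, not the real SSN
```

Because the tokens are deterministic, the masked copy still behaves consistently across tables, which is usually enough for dev and QA work.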

Besides the risk of data exfiltration, this is a potential compliance violation. Depending on your regulatory environment and the nature of the non-masked data (e.g., financial, medical, or other sensitive personal data), just the fact that you’re retaining that data outside the production environment, where it’s accessible to QA engineers and others who have no legitimate reason to access it, could be deemed a data breach.

Another danger is that code and data in dev/test environments frequently end up on developers’ local machines, which greatly increases the risk of data loss or a breach. On the podcast, John recalled an incident where a developer working for the City of New York dumped about 500,000 unmasked HR records onto his laptop, which he then left behind at a Korean restaurant. That ended up costing the city $23 million.

Three: Database sprawl

An extremely common but frequently disregarded threat to database security is database sprawl. The more databases you have, the more likely some will have unmitigated vulnerabilities that lead to compromise.

And as bad as database sprawl is on-premises, it’s exponentially worse in the cloud where everything is virtualized. It’s just too easy sometimes to spin up databases and then forget about them. Organizations need policies and processes to reduce the risk (not to mention the wasted money) from database sprawl.

Four: Pipeline leakage

A little-known database security concern that we are seeing more and more frequently is what I call “pipeline leakage.” I’m not a DevOps expert, but in my view it creates a very significant risk in the DevOps/CI/CD and data engineering worlds.

Here’s what happens: Data gets taken out of a very well-protected database. Then, teams create XML, CSV, or JSON files that hold some of the data and put it somewhere else. Now it’s in temporary files or holding areas or spreadsheets that are scattered all over the place. Is the data still secure? Who knows? Teams need to be aware of this issue and clean up their processes to close this hole.
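One simple mitigation is a periodic sweep of pipeline working areas for stray export files. Here is a minimal Python sketch; the paths and extensions are illustrative assumptions, not a complete inventory of where data can leak:

```python
# Sketch of a periodic sweep for "pipeline leakage": finding stray
# data-export files left in working directories so they can be
# reviewed, secured, or deleted.

from pathlib import Path

EXPORT_EXTENSIONS = {".csv", ".json", ".xml"}  # illustrative list

def find_stray_exports(root):
    return sorted(p for p in Path(root).rglob("*")
                  if p.suffix.lower() in EXPORT_EXTENSIONS)

# Usage, on a hypothetical pipeline workspace:
# for f in find_stray_exports("/data/etl/tmp"):
#     print(f, f.stat().st_size)
```

A sweep like this won’t secure the files by itself, but it makes the scattered copies visible, which is the first step toward closing the hole.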

Five: Insider threats

Insider threats, both intentional and unintentional, are the root cause of something like 50% of data breaches. Whether they result from revenge, greed, or a user clicking a malicious link designed to harvest their credentials, insider attacks often target databases because of all the valuable data they contain. Yet many organizations underestimate the prevalence of insider threats and their potential impact.

To protect a database from insider threats, you need a way to log and detect activity against the database, both authorized and unauthorized (i.e., user activity monitoring). Then you need a way to alert on potential issues and investigate them. Finally, you need preventive controls like robust identity & access management (IAM) policies, such as quickly deleting unused accounts and only authorizing access to sensitive data for those who really need it. 

Now That You Know About These Database Security Risks, What’s Next?

The most comprehensive way to identify and prioritize your database security risks is a database security assessment. This cost-effective process covers everything from policies to user rights to auditing your databases for vulnerabilities with automated tools.

For more information on how a database security assessment can reduce your security and compliance risk, contact Buda Consulting.

 

Does an IT Service Provider Need Cyber Liability Insurance?

First a disclaimer: I am not an attorney, and I am not a cyber liability insurance expert. I am an insurance consumer. Anything I say here should be validated by your attorney, insurance agent, or other qualified professional before acting upon it. But this may help you start the conversation and understand the issues.

As an IT service provider, Buda Consulting manages and secures mission-critical databases for our customers, on infrastructure owned or controlled by them. A few customers have recently asked me to show proof of cyber liability insurance to help cover them in the event of a data breach. After numerous conversations with my insurance agent and others, I have learned that there is a common misconception about how this insurance applies. In this post, I hope to help other service providers and their customers understand when cyber liability insurance applies, how it relates to service providers, and how to ask the right questions in conversations about it.

Relevant types of insurance for Data Breaches

For the purposes of this discussion, there are two main types of insurance with respect to data breaches:

Cyber Liability Insurance

Cyber liability insurance protects an organization from financial losses due to a data breach. This coverage protects the company from financial damage resulting from breaches related to data it owns or controls and that is housed on servers or infrastructure it owns and controls (this is not specific contract language, but I believe it is conceptually accurate). The key point is that it is about the company’s own data, not others’ data that it is working on.

Errors and Omissions Insurance

Another type of insurance is Errors and Omissions (E&O) insurance. IT service providers like Buda Consulting hold this type of insurance to protect against financial liability for errors we might make that result in financial losses to us or our customers.

There are different types of Errors and Omissions insurance. One type is Technology Errors and Omissions, which specifically deals with technical services. Some (not all) Technology Errors and Omissions policies include network security and privacy liability, referred to as Cyber coverage for 3rd parties.

So What Insurance Does a Service Provider Need?

If a service provider works on a customer’s server and there is a data breach on that server, the customer’s cyber liability insurance will apply. If the service provider is at fault, the provider’s Errors and Omissions insurance will also apply, but only if it is Technology E&O with Cyber coverage for 3rd parties.

 

Here are the key takeaways as I understand them:

  • Even if the service provider carries Cyber Liability Insurance, it will not apply in the case of a breach of a customer’s infrastructure.
  • It is critical that the service provider has Technology Errors and Omissions coverage that includes Cyber coverage for 3rd parties.
  • A service provider may of course desire (or need) their own Cyber Liability policy to protect them in the event of a breach of their own data, but this won’t help them with respect to the work they do for others.
  • It is critical for customers to hold their own Cyber Liability Insurance even if their service providers hold Cyber Liability Insurance.

I hope this helps with conversations you may have with your insurance agents, service providers, and customers.

How is Database Security NOT Like a Bank Vault?

As John Verry and I discussed in a recent virtual CISO podcast episode, many people think of database security the way they think of bank vaults. They secure the perimeter, place the valuables in the vault (the database), and then assume those valuables are as safe as if they were in a bank vault.

How Database Security and Bank Vaults Differ

If a thief gets past the physical security at the bank (doors, locks, window bars, alarms), they will still have a very difficult time getting into the vault (unless it is an inside job — more on that later).

But when we think of a database as a bank vault, we are missing an important difference. Bank vaults have a single point of entry, and it is secured by a complex locking mechanism that will thwart all but the most talented criminals.

Databases, on the other hand, have many points of entry: numerous administrator accounts, potentially hundreds of database user accounts, application accounts, operating system accounts with access to the underlying data files, accounts in other databases that have database links into your database, network sniffers; the list goes on and on.

The Manufacturer Takes Care of That! Or Do They?

A bank vault manufacturer ensures that all of the seams on the vault are sealed properly and that all of the walls are resistant to power tools. They ensure that, in essence, there is only one point of entry. Providing more than one point of entry would render the vault less secure without making it any more useful.

Manufacturers of database software, on the other hand, work hard to provide as many points of entry as possible. User accounts, web services, database links, export utilities. Providing only one point of entry would render the database much less useful.

If Not the Manufacturer, Then Who?

It is clear from these competing interests that the database software manufacturer is not and cannot be responsible for securing the database. Nor can the database host, whether that is an MSP or cloud provider supplying the server on which the database runs, or the database host in a PaaS (platform as a service — think RDS or Aurora) environment. All of these parties must provide as many points of entry as possible in order to make the databases valuable to the broadest set of customers.

Bringing the security of a database even within reach of that of a bank vault requires database and security professionals who clearly understand all of the entry points and who work closely with the data owners or data security team.

More on the Inside Job

I promised more on the inside job earlier: Whether we are talking about a bank vault or a database, an insider with bad intent can render many security controls ineffective.

In a bank, an insider with the vault combination can easily bypass the most challenging part of getting to the valuables. They don’t even need a blow torch or explosives.

With a database, it is more complicated than that: An insider with a password can easily bypass both the perimeter security and the database security. At first glance this makes the database appear less secure than the bank vault, and it is — most of the time.

How to Close the Gap

There are many things that can and must be done to ensure that a database is and remains secure, including patching to remove known vulnerabilities, ensuring proper Disaster Recovery is in place, and ensuring that encryption at rest and in transit is used. This article is about how a database differs from a bank vault, so I will only mention the points relevant to that comparison here.

In order to close the gap between the security of the bank vault and that of the database, we must eliminate or lock all unused entry points, and restrict access and track use of the remaining entry points.

While databases have many entry points, when configured properly, most enterprise-level database tools have very granular levels of control. Combining these granular levels of control with solid security procedures, we can significantly tighten the security of the database.

Some Examples:

  • Restrict the times of day at which a particular username can access the database, and the machines from which that username can connect.
  • Audit all activity in the database, including the username, machine used, activity, timestamp, and other information, and take action quickly if we see activity that looks suspicious in order to reduce damage.
  • Insist that all privileged database users have individual usernames, that they are protected by two-factor authentication, and that their passwords are robust.
  • Have procedures in place and enforced to remove access immediately as part of the termination process for employees or contractors.
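To make the first practice concrete, here is a toy Python sketch of the policy check involved. Real databases enforce this with their own profiles, login triggers, or access rules, so the usernames, hours, and hosts below are purely illustrative:

```python
# Toy sketch: rejecting connections for a username outside its allowed
# hours or from an unapproved machine.

ACCESS_POLICY = {
    # username: (allowed_hours, allowed_hosts) -- illustrative values
    "etl_batch": (range(1, 5),  {"batch01"}),
    "jdoe":      (range(8, 18), {"wks-jdoe", "wks-backup"}),
}

def connection_allowed(username, hour, host):
    policy = ACCESS_POLICY.get(username)
    if policy is None:
        return False  # unknown accounts are denied by default
    allowed_hours, allowed_hosts = policy
    return hour in allowed_hours and host in allowed_hosts

print(connection_allowed("jdoe", 9, "wks-jdoe"))   # True
print(connection_allowed("jdoe", 23, "wks-jdoe"))  # False
```

The deny-by-default behavior for unknown accounts is the important design choice: every entry point that isn’t explicitly justified stays closed.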

These practices, together with others, can render a database as safe as (or perhaps even safer than) a bank vault.

But when a database instance is spun up by a developer with no Database Administration or security expertise, it is more accurate to compare the database to a cash register at the local convenience store than to a bank vault.

Listen to the conversation that John Verry and I had about this and other database security topics, and let us know what you think about how well people are protecting their critical data assets.

For more information or to schedule a consultation, please click here to contact Buda Consulting.

To Understand Risk, Ask Better Questions

The importance of asking the right questions

Readers of my blog know that a common theme has been the importance of asking the right questions. Today I will illustrate this with a true and painful (literally) personal story that really drives the point home.

The story will be about measuring risk vs reward and the importance of asking the right questions to properly assess both parts of that equation, especially the risk part. 

The Diagnosis

After a routine colonoscopy (illustrating the importance of proactive monitoring, but that is a different blog) I was recently diagnosed with stage 3 colon cancer. This means that the cancer made it beyond the colon and into the lymph nodes, but just a little. Fortunately this means that treatment options are available that are expected to be very effective.  After having surgery to remove the tumor, I was advised to undergo a three month course of chemotherapy. The surgery went very well, and after some recovery time, I was ready to begin chemo. Here is where this particular story really begins.

Risk Vs Reward

As we do in business, we are always assessing risk vs reward whenever we make any decision, including in decisions around our health.  When we do this, we must determine what the potential risks are for all available options, what the rewards are for each one, and then we try to choose the option that maximizes reward while minimizing risk.

Choices

When it comes to my particular chemo treatment plan, I need to have four two-hour infusions over the course of 3 months. As a patient, I was given the choice of two methods of infusion. One option was to have a port surgically implanted in my chest that would remain there for the duration of the three months. The other option was to have the infusions done via a standard intravenous method using a vein in my hand or arm.

I was advised by the nurse and doctor that a port is recommended and that most patients do this, but that IV is an option too if I prefer not to have the port. It was here that I made a big mistake by not asking the right questions. When I asked why the port was recommended, the answer I was given was the following: it is sometimes difficult to find a vein, the nurse may have to try multiple times, and if the IV is not placed properly, it can lead to irritation of the arm. I failed to ask follow-up questions; more on that later…

Weighing the Options

Option 1, Infusion Port. Risks: Small risk of infection or other complications as with any medical procedure, unsightly and potentially uncomfortable port sticking out of my chest for three months when it is only used four times. Rewards: Reduced (maybe eliminated) chance of irritation caused by the infusion, simpler actual infusion process.

Option 2, IV.  Risks: possible irritation of the arm. This was mitigated in my mind by the fact nurses have always had a very easy time when I give blood. Reward: No unsightly port to potentially get in the way of the gardening and golfing I hoped to continue doing during chemo.

The Mistake

As I have said in the past, almost all problems are caused by poor communication and this was no different. I neglected to ask a very important question. I never asked what the nurse or doctor meant by the word irritation.  When I heard the word irritation, I heard exactly what I wanted to hear (because I didn’t like the idea of a port). I heard short term redness, itchiness, and maybe a little sensitivity. But I never actually asked them what it meant. I never ascertained the actual risk, one of the most important parts of the equation.

The Result

I chose the IV option based on this flawed risk/reward analysis. 

I went at the appointed time to find a very nice infusion nurse ready to connect up to my port. The fact that she was surprised and visibly concerned that I did not have a port should have sent me running, but I stayed. 

She easily found a vein in my hand as I expected she would, and the infusion began. Everything was fine for about an hour and then I started to experience some strange feelings in my arm. I assumed that it was normal and had no pain so I didn’t think much of it. At about 90 minutes my arm started to hurt.  By the time there was about 10 minutes of infusion left, I was in excruciating pain. When it was finally over, the pain of the removal of the bandages was similar, I imagine, to having my skin ripped off. I shook violently for thirty minutes afterward and in retrospect I think I was likely in some kind of shock.

The excruciating pain lasted for about 2 days, the inability to use my arm because of inflammation and skin or nerve damage lasted about 1 week, and very significant sensitivity and less intense pain lasted 9 days.  As I write this, at 10 days, I finally feel the level of irritation that I originally imagined was the worst case scenario.

No Stupid Questions

All of this could have been avoided had I asked better questions, like “What exactly do you mean by irritation?” As a wonderful teacher of mine once said, “The only stupid question is the one that’s never asked.”

Needless to say, I am getting a port put in for the remainder of the infusions. Lesson learned the hard way.

How Much Does Database Disaster Recovery Cost? “It Depends”

“It depends” is a sometimes frustrating response that we frequently hear when we ask a question. To some, it feels like a dodge. Maybe the person we are asking does not know, would rather not give their opinion, or would rather not share their knowledge.

But when I hear someone respond “it depends,” I tend to think that they are seriously considering the question, and I hope the answer will be a thoughtful, considered response. In fact, few questions really deserve an automatic response. Most issues are nuanced, and when someone says “it depends,” it does not mean they are dodging the question.

A common question that new clients ask is how much it will cost to implement Disaster Recovery (D/R) for their database environments. My answer always starts the same way: “It depends.”

Database Disaster Recovery vs High Availability

Disaster Recovery is sometimes considered distinct from High Availability. For the purposes of this article, I think of them as two parts of the same whole. The objective of both is to keep your database available to your users when they need it. And when designing a solution that meets those objectives, both types of tools may be implemented. 

I think of Disaster Recovery in terms of things like backup and recovery tools and passive standby databases. The idea is to have a straightforward way of recovering and resuming operations if the primary server fails.  And I think of High Availability in terms of things like clustering, geographically distributed availability groups, and active-standby databases. The idea here is to prevent the system from ever failing in the first place.

When it comes to keeping the database available as needed, all of these tools need to be considered.

The Cost of Downtime

There are many factors to consider when thinking about Disaster Recovery. Perhaps the most important, and I think the first that should be asked, is: what is the cost of downtime? Determining the cost of downtime to our own organizations requires asking what would happen if we were down for 1 minute, 1 hour, 1 day, or other appropriate intervals, considering all departments and stakeholders. For example, in a manufacturing operation (this list of considerations is not exhaustive):

  • How many orders are typically placed in one minute, hour, or day? What is the dollar value of those orders? What percentage will likely be lost forever vs. delayed?
  • How many items are received during those intervals? What is the downstream impact on production if items cannot be received into the system?
  • How many items are produced during those intervals? What is the downstream financial impact if they are not produced and shipped?
  • How many orders are labeled during those intervals, and how many shipped? What is the downstream impact of delays on labeling or shipping?
  • What are the upstream production impacts of not being able to produce, label, ship, or record order information (inventory space, etc.)?
  • What is the liability cost of not getting products or services to vendors or end customers within contractual guidelines?

These are not simple questions to answer, but the true cost of downtime can only be determined by such an exercise. 
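The exercise above can be sketched as a simple model. All of the figures below are hypothetical placeholders; each organization must plug in its own numbers, department by department:

```python
# Back-of-the-envelope sketch of the downtime-cost exercise described
# above, with made-up figures for a hypothetical manufacturing operation.

def downtime_cost(hours,
                  orders_per_hour=50,
                  avg_order_value=400.0,
                  pct_orders_lost_forever=0.30,
                  production_loss_per_hour=2_500.0,
                  contractual_penalty_per_hour=500.0):
    lost_orders = hours * orders_per_hour * pct_orders_lost_forever
    return (lost_orders * avg_order_value
            + hours * production_loss_per_hour
            + hours * contractual_penalty_per_hour)

for h in (1 / 60, 1, 8, 24):  # 1 minute, 1 hour, 1 shift, 1 day
    print(f"{h:>6.2f} h -> ${downtime_cost(h):,.2f}")
```

Even a crude model like this gives you a per-interval dollar figure to weigh against the implementation costs discussed below.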

What is Acceptable Database Disaster Recovery?

Once we know the cost of downtime, we can determine what level of disaster recovery is required in order to prevent unacceptable costs to the organization, which, of course, is the main reason to have a disaster recovery plan in the first place.  At the end of the day, the question is how much data loss or downtime is acceptable.

Of course, we would always like to say zero. Zero downtime, zero data loss, no matter what. However, implementing true zero-loss Disaster Recovery may be cost-prohibitive for your organization, and moving from a zero-loss posture to a very-small-loss posture can reduce implementation costs very significantly. So it makes sense to determine what the costs are and therefore what is acceptable to the organization.

Once we know the cost for an interval of downtime, we can do a cost/benefit analysis regarding the cost of implementing D/R. 

Factors That Drive The Cost of Implementation

The implementation cost of Database Disaster Recovery varies mainly with two key factors:

  • The amount of data loss that is acceptable (known as recovery point objective or RPO)
  • The amount of downtime that is acceptable (known as recovery time objective or RTO)

For both of these factors, the lower the acceptable loss, the higher the cost; the cost and complexity of driving down downtime are generally greater than those of driving down the amount of data loss.

Implementing a Disaster Recovery scenario with zero possibility of data loss and zero downtime can be very expensive. This approach essentially requires full live redundancy across multiple geographic regions and the complexity that goes along with ensuring a seamless automatic transition of all applications from one environment to another and real-time synchronization between them.  

For many organizations, this full redundancy approach will be cost-prohibitive. And for most organizations, the cost of a small amount of downtime and a small possibility of a very small amount of data loss is acceptable and will not cause significant damage to the operation (or to profit). This compromise can mean the difference between being able to afford a Disaster Recovery Solution and not being able to do so. Having any Disaster Recovery Solution, even one without all zeroes, is much better than having none.
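The trade-off described above can be sketched as choosing the cheapest D/R tier that still meets your RPO and RTO targets. The tier names, numbers, and annual costs below are illustrative assumptions only:

```python
# Sketch of the cost/benefit comparison described above: pick the
# cheapest DR tier whose RPO and RTO meet the organization's targets.

DR_TIERS = [
    # (name, rpo_minutes, rto_minutes, annual_cost_usd) -- made-up values
    ("nightly backups, restore on demand", 24 * 60, 8 * 60, 20_000),
    ("log shipping to warm standby",       15,      60,     75_000),
    ("active-active multi-region",         0,       0,      400_000),
]

def cheapest_adequate_tier(rpo_target_min, rto_target_min):
    candidates = [t for t in DR_TIERS
                  if t[1] <= rpo_target_min and t[2] <= rto_target_min]
    return min(candidates, key=lambda t: t[3]) if candidates else None

# Tolerating up to 30 minutes of data loss and 2 hours of downtime
# avoids paying for full zero/zero redundancy:
print(cheapest_adequate_tier(30, 120)[0])  # log shipping to warm standby
```

With real figures in place of these placeholders, this is exactly the comparison that turns “it depends” into a concrete budget number.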

The Bottom Line

When someone asks me how much it will cost to implement a Disaster Recovery Solution, I always say “It depends.” And then I ask a lot of questions. Contact us today for a consultation.