Oracle Data Encryption Options

Oracle offers various authentication and audit features to protect data from unauthorized access. But what about data at rest in operating system files, backups or other storage media?

Protect Oracle Data At Rest With TDE

To protect data at rest, Oracle offers Transparent Data Encryption (TDE). With TDE you can encrypt sensitive data so that it is unreadable if the file it is stored in is exfiltrated or breached.

Data you encrypt with TDE is “transparently” decrypted when it is accessed by authorized users and applications. That is, decryption takes place without users even being aware that data is encrypted. Likewise, applications that process sensitive data can offer data encryption via TDE with little or no code changes.

Why use TDE? It helps ensure that your sensitive data is secure, supports compliance with a wide range of regulations like Sarbanes-Oxley (SOX), HIPAA and PCI, and can simplify your overall encryption/decryption policy and operations.

Another benefit of TDE is that it is pretty fine-grained. You can encrypt data at the column level or the tablespace level. Column-level encryption is perfect for confidential data like social security numbers or credit card numbers that are stored in table columns.

When you encrypt a tablespace, all objects created in that tablespace are encrypted automatically. Tablespace level encryption works well for tables that store sensitive data in multiple columns, or for when you want to protect an entire table and not just individual columns. It’s also handy anytime you want to avoid doing a nitty-gritty analysis of each table column to determine which ones require encryption.
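
To make the two options concrete, here is a minimal sketch in Oracle SQL. The table, tablespace, and file names are hypothetical, the exact clauses vary somewhat by Oracle version, and it assumes a keystore has already been configured and opened (see the next section).

    -- Column-level encryption: protect a single sensitive column (hypothetical table).
    CREATE TABLE customers (
      customer_id NUMBER PRIMARY KEY,
      name        VARCHAR2(100),
      ssn         VARCHAR2(11) ENCRYPT USING 'AES256'
    );

    -- Tablespace-level encryption: every object created here is encrypted automatically.
    CREATE TABLESPACE secure_data
      DATAFILE '/u01/oradata/ORCL/secure_data01.dbf' SIZE 500M
      ENCRYPTION USING 'AES256'
      DEFAULT STORAGE (ENCRYPT);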

To enable authorized decryption while preventing unauthorized access, TDE uses a two-tiered, key-based encryption architecture. It stores encryption keys in a keystore, a hardware or software security module separate from the database. You can centrally (and automatically) manage these keystores using Oracle Key Vault.

To encrypt a tablespace, TDE uses an externally stored master key to encrypt the TDE tablespace encryption key, which is used to encrypt/decrypt tablespace data. For column-level encryption, Oracle transparently accesses a TDE master encryption key to encrypt or decrypt the TDE table key, which then encrypts/decrypts column-level data in the table.
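
As a rough sketch of how the keystore and master key come into play, the software keystore workflow on 12c and later looks something like the commands below. The keystore path and password are placeholders, and the exact steps differ for hardware keystores and newer releases.

    -- Create a software keystore (placeholder path and password), open it,
    -- and set the TDE master encryption key, keeping a backup of the keystore.
    ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/etc/ORACLE/WALLETS/orcl'
      IDENTIFIED BY "keystore_password";

    ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN
      IDENTIFIED BY "keystore_password";

    ADMINISTER KEY MANAGEMENT SET KEY
      IDENTIFIED BY "keystore_password" WITH BACKUP;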

Encryption Best Practices

Of course, your encryption strategy should be integrated with your overall information security program. Best-practice security tips related to encryption include:

      • Start by determining how sensitive the data is. Data that requires the strongest protection can be encrypted using the AES256 algorithm, while less sensitive data can be encrypted in ways that offer performance benefits. (A quick query for taking stock of what is already encrypted appears after this list.)
      • You also need to determine your approach to keystore protection based on data sensitivity. Options range from auto-login software keystores to hardware keystores. A separate keystore used only for TDE is ideal if possible.
      • To limit damage from compromised admin credentials or insider threats, consider assigning separate security admins for TDE and for the database(s).
      • Back up your sensitive data using protected backup procedures.
      • Be aware that column-level encrypted data is decrypted during expression evaluation and could potentially be accessed in the associated on-disk swap file.
      • Also be aware that your Oracle data files could contain plaintext fragments (aka “ghost records”) that were deleted logically from the table but still exist physically on disk. These could potentially be accessed, much like recovering data from disk after it has been deleted at the operating system level.
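
As a starting point for that analysis, the data dictionary can tell you what is already encrypted. Here is a minimal sketch using standard views (no assumptions beyond a database with TDE in use):

    -- List encrypted columns and their algorithms.
    SELECT owner, table_name, column_name, encryption_alg
      FROM dba_encrypted_columns;

    -- List encrypted tablespaces and their algorithms.
    SELECT t.name AS tablespace_name, e.encryptionalg
      FROM v$encrypted_tablespaces e
      JOIN v$tablespace t ON t.ts# = e.ts#;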

For more information on TDE, see the Oracle Advanced Security Guide online.

For expert help and guidance with encryption, backup/recovery, high availability and other business continuity and security concerns, contact Buda Consulting for a security risk assessment—the first step to finding and closing the gaps in your database security.

Database Patch News — November 2019 (Issue 1)

Welcome to Database Patch News, Buda Consulting’s monthly newsletter of current patch information for Oracle and Microsoft SQL Server. Here you’ll find information on available patches (including security patches) and on versions that were desupported during the past month.

Why should you care about patching vulnerabilities and bugs? Two big reasons:

  1. Unpatched systems are a top cyber attack target. Patch releases literally advertise vulnerabilities to the hacker community. The longer you wait to patch, the greater your security risk.
  2. Along with running a supported database version, applying the latest patches ensures that you can get support from the vendor in case of an issue. Patching also helps eliminate downtime and lost productivity associated with bugs.

Here are the latest patch updates for Oracle and SQL Server:

Oracle Patches:

Oct 15 2019 Quarterly Patch Updates:

19c – Release Update 19.5 available

18c – Release Update 18.8 available

12.2.0.1 – Oct 2019 Release Update 12.2.0.1.191015 available.
Regular support ends Mar 2023 and extended support ends Mar 2026.

12.1.0.2 – Currently in extended support.
The last freely available patch for 12.1.0.2 was July 2019. The Oct 15 2019 Patch Set Update (PSU) 12.1.0.2.191015 is available, but may require an extended support purchase to access it. Patches will be released for this version until July 2021.

11.2.0.4 – Entered extended support in December 2017.
The last freely available patch for 11.2.0.4 was October 2018. PSU 11.2.0.4.191015 is available but may require clients to purchase extended support to access it.
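
If you are not sure which Release Update or PSU a given Oracle database (12c or later) is actually running, a quick way to check is to query the SQL patch registry. A minimal sketch using the standard view:

    -- Show the patches recorded in the database's patch registry, most recent first
    -- (the exact column list varies slightly by release).
    SELECT patch_id, action, action_time, description
      FROM dba_registry_sqlpatch
     ORDER BY action_time DESC;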

SQL Server Patches:
SQL Server 2017 incremental servicing model (ISM)
CU17 (Latest build)—Released October 08, 2019

SQL Server 2016 Service Pack 2
Release date: April 24, 2018

SQL Server 2014 Service Pack 3 Cumulative update 4
Release date: July 29, 2019

SQL Server 2014 Service Pack 2 Cumulative update 18
Release date: July 29, 2019
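
Similarly, to confirm which Service Pack and Cumulative Update a SQL Server instance is actually running, a minimal T-SQL check looks like this (SERVERPROPERTY is standard; the update-level property is populated on recent builds):

    -- Report the build, servicing level, and update level of this instance.
    SELECT SERVERPROPERTY('ProductVersion')     AS product_version,      -- e.g. 14.0.xxxx.x
           SERVERPROPERTY('ProductLevel')       AS product_level,        -- e.g. RTM, SP2
           SERVERPROPERTY('ProductUpdateLevel') AS product_update_level; -- e.g. CU17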

Why Your Database Management Team Should Regularly Double-Check Your Backups

I’ve blogged before about the importance of checking database backups. Over 90% of new clients that we assess have backups that are either incomplete or totally unusable (true statistic!).

The following story of a backup failure that happened recently (and the successful double-check by our DBA, Scott) provides a noteworthy example of how bad backups can happen even when you think your process is working.

Recently we had a client that was looking to reduce storage costs for their Oracle RDBMS system while still meeting a legally mandated seven-year retention requirement. They were also looking to leverage AWS S3 and Glacier.

The majority of their Oracle data resided in a single history table partitioned on date; and this data was rarely, if ever, accessed once it was more than one year old. Thus S3 and Glacier were a perfect fit for this use case.

It was decided that data would be retained in the RDBMS until it was three years old. After that, the data would be exported via Oracle Data Pump and zipped via the Linux zip utility. (A side note: in case you’re wondering why we didn’t just go with Data Pump’s native compression functionality, in our testing it yielded a compression ratio about 20% lower than the Linux zip utility.)

Scott set about finding a way to automate the process, using Oracle Data Pump to export partitions starting with the oldest partitions first. To get started, he did what any good DBA does—he scoured the internet using Google and came up with this great example code posted by Michael Dinh to use as a starting point.

The process we decided to use was to export the identified partition, check the job state returned by dbms_datapump.wait_for_job to ensure the export completed successfully, and then drop the partition from the database.
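
The heart of that process looks roughly like the PL/SQL sketch below. The schema, table, partition, and directory names are hypothetical and error handling is trimmed; it simply illustrates the export-then-check pattern described above, with the partition drop happening only after a COMPLETED status (and, as explained below, a verified log).

    DECLARE
      h         NUMBER;
      job_state VARCHAR2(30);
    BEGIN
      -- Open a table-mode export job and attach dump and log files in a
      -- pre-created directory object (all names are placeholders).
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'TABLE');
      DBMS_DATAPUMP.ADD_FILE(h, 'hist_p2016.dmp', 'EXPORT_DIR',
                             filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
      DBMS_DATAPUMP.ADD_FILE(h, 'hist_p2016.log', 'EXPORT_DIR',
                             filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

      -- Restrict the job to a single partition of the history table.
      DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', '= ''APP''');
      DBMS_DATAPUMP.METADATA_FILTER(h, 'NAME_EXPR', '= ''HISTORY_T''');
      DBMS_DATAPUMP.DATA_FILTER(h, 'PARTITION_EXPR', 'IN (''P2016'')', 'HISTORY_T', 'APP');

      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
      DBMS_OUTPUT.PUT_LINE('Export finished with state: ' || job_state);

      -- Only if job_state = 'COMPLETED' (and the log checks out) would the
      -- calling process go on to drop the partition.
    END;
    /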

After many modifications to the example code, it was time to test. Scott tested what would happen if everything went well. He also tested what would happen:

    • If the utility tried to export to a missing OS directory
    • If the directory was full
    • If the directory was read-only
    • If the export dump file was removed during the export process
    • If the export process was killed while the export job was running

The testing went smoothly, and in each case dbms_datapump.wait_for_job always returned a code other than COMPLETED.  The only time the process would drop the partition was when the return code was equal to COMPLETED, so it appeared we were ready to put this process into production use.

What we did not account for was the possibility that an Oracle bug would somehow cause dbms_datapump to fail to export the table partition rows but still return a COMPLETED code to the calling process—which is exactly what was about to happen.

The routine ran perfectly for a few weeks. Fortunately, Scott continued to closely monitor the job runs. All of a sudden he noticed that all of the export files started to report the exact same size, which was very small.

After checking the process, we found the issue and opened a ticket with Oracle support. They informed us that they believed an Oracle bug was to blame and recommended we upgrade the database to a later release.

No reason was ever found for why this suddenly started to happen after weeks of continuous use. We did, however, learn an important lesson: when it comes to dropping table partitions after a Data Pump export, never rely solely on the job state returned by dbms_datapump.wait_for_job. Always take the extra step of interrogating the export log file for the number of rows exported and for the “successfully completed” message.
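
A minimal sketch of that extra verification step follows; the directory and log file names are placeholders and the string matching is deliberately simple. It reads the Data Pump log through UTL_FILE and surfaces the exported row counts and the “successfully completed” line so they can be checked before any partition is dropped.

    DECLARE
      f    UTL_FILE.FILE_TYPE;
      line VARCHAR2(4000);
    BEGIN
      f := UTL_FILE.FOPEN('EXPORT_DIR', 'hist_p2016.log', 'r');
      LOOP
        BEGIN
          UTL_FILE.GET_LINE(f, line);
        EXCEPTION
          WHEN NO_DATA_FOUND THEN EXIT;  -- end of log file
        END;
        -- Keep the lines that report exported row counts and overall job success.
        IF line LIKE '%exported%rows%' OR line LIKE '%successfully completed%' THEN
          DBMS_OUTPUT.PUT_LINE(line);
        END IF;
      END LOOP;
      UTL_FILE.FCLOSE(f);
    END;
    /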

In addition, the fact that we had a good current RMAN backup of the database that contained the data in the dropped partitions made this a good learning experience instead of a tragedy.  Nevertheless, this experience illustrates the importance of frequently checking your exports.

By checking to make sure that the exports were good, Scott was able to ensure that we saved the most recent backup prior to the pruning so that we could recover the data in those partitions if we ever needed to.

This is an extreme case because Oracle actually provided an incorrect success code. In most of the cases we see, the causes are more mundane. Typically, the backups were either:

    • Not configured to run at all, or
    • Configured to exclude part of the database, or
    • Written to files that were later relocated to new volumes not included in the operating system backup process, or
    • Taken at the operating system level without informing the DBMS that this was taking place, rendering the backup unusable.

Whatever the reasons, they are very hard to predict. The only safeguard is to check weekly (or daily, in some mission-critical cases) to make sure that what you think is happening really is.

If you don’t double-check your backups, the false sense of security you have today will end in sudden panic when you need to restore the database and can’t.

SQL Server Contained Databases and Oracle Multi-tenant: Advantages Compared

Oracle introduced multi-tenant databases in version 12c. SQL Server answered with contained databases in SQL Server 2012.

These capabilities add efficiencies and reduce deficiencies, but in different ways. Before multitenancy, Oracle required more administration activity for individual databases than SQL Server, but offered more isolation and control for each database. Before contained databases, SQL Server required less administration activity for individual databases than Oracle, but offered less isolation and control for each database.

Essentially, these new functionalities moved both Oracle and SQL Server to a more satisfying middle ground.

SQL Server Vs. Oracle Multi-tenant

With the new changes:

  • They both allow individual databases to stand alone and be ported from one environment to another more easily.
  • They isolate much of the admin work that needs to be done at the software level (patching, configuration, etc.) from the work that is more database-specific (backup/recovery, performance tuning, etc.).
  • They isolate all of the metadata about a particular database within the database itself, as opposed to intermingling the metadata from multiple databases.

Oracle Multi-tenant Advantages

Each approach also touts some specific advantages. For example, Oracle Multi-tenant is said to be “designed for the cloud” because it simplifies database consolidation. A multi-tenant container database can hold multiple pluggable databases, and application code connects to specific pluggable databases. This makes it easier to consolidate multiple databases onto private clouds without changing their applications, while still controlling resource prioritization as you did previously.

Another advantage of the Oracle Multi-tenant approach is easier provisioning and cloning. Moving and cloning databases is an everyday admin task that multitenancy can help accelerate. For example, you can quickly copy a database and “plug it in” to a different container. Filesystems that support copy on write allow you to clone pluggable databases almost instantly.
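
For example, cloning a pluggable database can be as simple as the sketch below. The PDB names and file paths are hypothetical, and options such as snapshot copy depend on your storage and database version.

    -- Clone an existing PDB within the same container database, then open the copy.
    CREATE PLUGGABLE DATABASE sales_clone FROM sales_pdb
      FILE_NAME_CONVERT = ('/u01/oradata/sales_pdb/', '/u01/oradata/sales_clone/');

    ALTER PLUGGABLE DATABASE sales_clone OPEN;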

SQL Server Advantages

SQL Server’s contained database feature offers advantages in the area of user authentication. A contained database is isolated from other databases and from the SQL Server instance that hosts it. Support for user authentication at the database level for contained databases reduces a database’s dependency on the logins of the SQL Server instance. You can also have separate firewall rules for contained database users versus users who authenticate at the server level. This user model makes it much easier to move databases between systems, including Microsoft Azure.
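
Here is a minimal T-SQL sketch of that model, with hypothetical database and user names: enable contained database authentication on the instance, mark the database as partially contained, and create a user that authenticates at the database level rather than through a server login.

    -- Allow contained database authentication on this instance.
    EXEC sp_configure 'contained database authentication', 1;
    RECONFIGURE;

    -- Mark the database as partially contained.
    ALTER DATABASE SalesDB SET CONTAINMENT = PARTIAL;

    -- Create a database-level user with its own password (no server login required).
    USE SalesDB;
    CREATE USER app_user WITH PASSWORD = 'Str0ng!Passw0rd#2019';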

In short, the main goal of multitenancy is to facilitate database consolidation, while the main purpose of contained databases is to enable isolation and thus portability. Yet each approach addresses both these concerns.

If you are considering using either or both of these models, or would benefit from guidance on how to use either or both more efficiently or effectively, contact Buda Consulting to schedule a free consultation.

How to Resolve Oracle Database Licensing Issues with Processor Cores in Virtualized Environments

Virtualization technology revolutionized the RDBMS world by allowing database administrators and system administrators to utilize up to 90% of a server’s hardware resources, versus an average of 10%-30% utilization in non-virtualized environments. But with the push from a physical to a virtualized environment comes a host of new management tasks. One of those tasks is trying to demystify your licensing.

Oracle Virtual Server Licensing 

In most circumstances, your virtualized environment will be running on a hypervisor. Your virtual servers are then allocated a set of resources from a pool defined in the hypervisor manager. One of the most important settings is how many virtual CPUs (vCPUs) your server is allocated.

The reason vCPUs are such an important virtual resource is that this setting directly relates to the physical processor and its cores. The processor core is the metric many vendors like Oracle and Microsoft use to license some of their software stacks. This post focuses on Oracle virtual server licensing, which can be licensed by processor core, and explains one of the most common mistakes administrators make when setting up Oracle in a virtualized environment.

A key point to understand is how the database software is licensed. Most servers purchased these days run Intel Xeon processors with large core counts of 12 or higher. Let’s say you purchased a server with two Intel Xeon 4-core processors. Oracle assigns most Intel Xeon processors a core factor of 0.5, meaning each core counts as half a core license; in other words, 2 Xeon cores equal 1 core license. So, for a system with two Xeon 4-core processors, which is 8 cores total, you will need to purchase an Oracle Database core license count of 4.

But what happens when you decide to upgrade to new servers with two Xeon 12-core processors for less than the original cost of the two Xeon 4-core processors? Oracle requires that all processor cores contained in a server are licensed for their database product when using core licensing. If you were running your database environment in a physical format, you would need to purchase 8 more core licenses, causing your operating cost to increase. 

The only way to continue with your 4 core licenses would be to virtualize your database environment, and the only way to comply with Oracle in an on-premises setting outside of an Oracle engineered system is to run Oracle VM.

Oracle VM is Oracle’s own virtualization hypervisor, and it is the only hypervisor where Oracle honors the CPU pinning option (also called processor affinity) for licensing purposes. This option allows you to permanently allocate a physical processor’s core to a vCPU on a virtual server. Once a core is “pinned” to a vCPU on a virtual server, no other virtual server will use that processing core.

This last point is very important, because most administrators who are familiar with creating virtual servers know they will be asked the following question, and the answer determines whether you remain compliant with your Oracle virtual server licensing:

  • How many vCPUs would you like to allocate to the virtual server?

vCPU Allocation

In most environments, 1 vCPU is equal to 0.5 of a physical core. Do you notice a trend yet? If we wanted to create a virtual server that complied with an Oracle Database licensing core count of 4, the virtual server would need 8 vCPUs. 

But what happens if we create the virtual machine with the correct number of vCPUs but never enable the CPU pinning option? According to Oracle documentation, this means you are not in compliance with your core licensing; to be in compliance, you will need to guarantee that this virtual machine only has access to the number of cores specified in your core licensing.

To correct this problem, you will need to go into Oracle VM’s VM Tool software and assign physical CPU cores 0 to 7 (core numbering starts at 0) to the virtual server, locking it down so it can use only those 8 cores. Keep in mind that this virtual server will be the only server allowed to use cores 0 to 7; any other virtual servers hosted by the hypervisor will use physical cores 8 to 23 (remember, you upgraded your servers to two 12-core Xeon processors).

After pinning the licensed number of cores to the virtual server hosting your database environment, you are in compliance with Oracle licensing and will avoid the costly penalties that can result from an Oracle software audit.

So when building out your virtual environments, make sure you know the requirements of your software’s licensing. Forgetting to pin CPUs to your virtual servers could end up costing your company significant money.

If you like this article, please share it with your colleagues and subscribe to our blog to get the latest updates.

Why You Should Not Thin Provision Your Production Storage

Virtualization brought with it some amazing related technologies, one of which is thin provisioning for storage. Thin provisioning offers significant value by allowing an administrator to create a server with a large supply of storage, while actually allocating only what is currently needed.

Thin provisioning is a good option when used in a development environment or other scenario where only test data would reside on the virtual server. But there are some thin provisioning disadvantages for production environments because of the high potential for downtime and data loss.

Over-Subscription

Thin provisioning causes trouble when more storage is provisioned than is available in the underlying hardware. For instance, when you thin provision, you can store a 4 TB virtual disk on a 400 GB physical volume. As long as you use less than 400 GB of space inside the virtual volume, the setup works well. But what happens when you use 401 GB?

Over-subscription is what happens when you provision more storage than is physically available. This will cause I/O errors on your server, which can lead to irreparable damage to your virtual server and/or application. The damage is exacerbated in a production environment because it can result in partial or total data loss.

Two Ways To Remedy Over-Subscription

There are two approaches that help to remedy an over-subscription problem.

Increase Your Virtual Disk Repository

The first is to increase the over-subscribed virtual disk repository. This usually requires a SAN administrator to increase the repository volume in the SAN storage manager and rescan the volume via the SCSI bus.

Create A New Virtual Disk Repository

If you cannot extend the initial volume, the second approach is to create a new virtual disk repository and migrate some of your virtual servers to this new storage.

One of our Oracle clients uses Oracle virtualization software for storage. The original DBA (before we were involved) created 30 TB of thinly provisioned vdisks. But the entire size of the volume on which these vdisks were provisioned was only 20 TB. Everything worked fine until the total actual usage of those vdisks reached 20 TB, at which point I/O errors signaled that something was very wrong.

After gaining access to Oracle’s VM manager, we were able to see that the volume was oversubscribed. We corrected the problem using the second approach described above. We created a new disk repository and moved data files one at a time until everything was moved over and the space issue was resolved.

Conclusion

In summary, thin provisioning can be wonderful if managed correctly with proper forecasting. But if not managed carefully, it can lead to disadvantages that outweigh the storage-saving benefits.

If you’re using thin provisioning today, or looking for other ways to make the best use of your physical and virtual storage, contact Buda Consulting to talk over potential options and what’s right for your environment.