Oracle SQL Firewall: A New Feature That Blocks Top Database Attacks in Real-Time


Oracle 23c introduces a very powerful and easy-to-use database security feature that many users will want to try, especially for web application workloads. Called Oracle SQL Firewall, it offers real-time protection from within the database kernel against both external and insider SQL injection attacks, credential attacks, and other top threats. 

Oracle SQL Firewall should be a huge help in reducing the risk of successful cyber-attacks on sensitive databases. For example, injection flaws, including SQL injection due to improperly sanitized inputs, currently rank third among web application security risks in the latest OWASP Top 10. Wherever SQL Firewall is deployed, it can block injected SQL that falls outside the approved allow-list, largely neutralizing this class of attack.

SQL Firewall is intended for use in any Oracle Database deployment, including on-premises, cloud-based, multitenant, clustered, etc. It is compatible with other Oracle security features like Transparent Data Encryption (TDE), Oracle Database Vault, and database auditing.

How Oracle SQL Firewall works

SQL Firewall provides rock-solid, real-time protection against some of the most common database attacks by restricting database access to only authorized SQL statements or connections. Because SQL Firewall is embedded in the Oracle database, hackers cannot bypass it. It inspects all SQL statements, whether local or network-based, and whether encrypted or unencrypted. It analyzes the SQL, any stored procedures, and related database objects. 

The new tool works by monitoring and blocking unauthorized SQL statements before they can execute. To use it, you first capture and review the SQL statements that a typical application user runs, then build from them an allow-list of approved statements, akin to a traditional whitelist.
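
As a rough sketch of what that capture step can look like, assuming the Oracle 23c DBMS_SQL_FIREWALL PL/SQL package and a hypothetical application account named APP_USER (names and option values here are illustrative, not prescriptive):

  -- Turn on SQL Firewall and capture the SQL a typical application session runs
  EXEC DBMS_SQL_FIREWALL.ENABLE;

  BEGIN
    DBMS_SQL_FIREWALL.CREATE_CAPTURE(
      username       => 'APP_USER',
      top_level_only => TRUE,
      start_capture  => TRUE);
  END;
  /

  -- ...run a representative application workload, then stop the capture
  -- and generate the allow-list from what was observed
  EXEC DBMS_SQL_FIREWALL.STOP_CAPTURE('APP_USER');
  EXEC DBMS_SQL_FIREWALL.GENERATE_ALLOW_LIST('APP_USER');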

You can also specify session context data like client IP address, operating system user, or program type on the allow-list to preemptively block database connections associated with credential-based attacks. This includes mitigating the risk of stolen or misused credentials for application service accounts.

Once enabled, Oracle SQL Firewall inspects all incoming SQL statements. Any unexpected SQL can be logged to a violations list and/or blocked from executing. Though the names are similar, Oracle SQL Firewall is much simpler architecturally than the longstanding Oracle Database Firewall (Audit Vault and Database Firewall, or AVDF) system. You can configure the new SQL Firewall at the root level or the pluggable database (PDB) level.
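
Continuing the hypothetical APP_USER example above, adding session-context rules and turning on enforcement might look roughly like the following (again assuming the DBMS_SQL_FIREWALL package; the IP address and other values are placeholders):

  -- Allow connections only from a known application server address
  BEGIN
    DBMS_SQL_FIREWALL.ADD_ALLOWED_CONTEXT(
      username     => 'APP_USER',
      context_type => DBMS_SQL_FIREWALL.IP_ADDRESS,
      value        => '192.0.2.10');
  END;
  /

  -- Enforce the allow-list: block and log anything outside it
  BEGIN
    DBMS_SQL_FIREWALL.ENABLE_ALLOW_LIST(
      username => 'APP_USER',
      enforce  => DBMS_SQL_FIREWALL.ENFORCE_ALL,
      block    => TRUE);
  END;
  /

  -- Review anything that was logged as a violation
  SELECT * FROM DBA_SQL_FIREWALL_VIOLATIONS;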

Is there a downside to using Oracle SQL Firewall?

In part because it is still so new, Oracle SQL Firewall performance data is not widely reported online. Transaction throughput is vitally important for many applications, so even a modest amount of overhead from SQL Firewall could be unacceptable for some workloads. The good news is that “before and after” performance testing in your environment should be straightforward using best-practice testing techniques.

Oracle SQL Firewall administrative security is robust and logically integrated with other Oracle Database admin security, so it does not introduce new security risks. For example, only the SQL_FIREWALL_ADMIN role can administer the tool or query the views associated with it. SQL Firewall metadata is stored in dictionary tables in the SYS schema, which rely on dictionary protection like other such tables in SYS.

Who should use Oracle SQL Firewall?

For any business that needs to improve application security, such as for compliance with US government supply chain regulations or as part of a Zero Trust initiative, Oracle SQL Firewall could be a good choice. It could prove especially useful in DevOps environments due to its minimal impact on application development and testing timelines.

What’s next?

A goal for this blog post is to encourage organizations using Oracle 23c to implement SQL Firewall. It is a low-effort way to improve application and database security and significantly reduce information security risk associated with the sensitive data it protects.

To speak with an expert on how Oracle SQL Firewall could improve your database security, and how it might fit with your overall security goals and challenges, contact Buda Consulting.

Navigating Database Cloud Migration: How to Choose the Best Cloud Migration Services


Thinking of moving your database from your data center to a cloud or managed hosting provider? There are lots of options, and choosing the right cloud migration services for your workload takes research and planning. To get the most business value from your move to the cloud, you need a strategy that minimizes both time to benefit and business risk.

Why move a database to the cloud?

Common reasons for undertaking a cloud database migration include:

  • Reduced operating costs. In the cloud, the cloud service provider (CSP) bears the cost of maintaining, securing, and supporting the physical and virtual infrastructure your databases will run on.
  • Simplified remote access. The public cloud makes it easy to provide database access to remote workers and services.
  • Less security responsibility. Leading public clouds offer comprehensive, multi-layered security controls like data encryption, network protection for remote workers, user activity monitoring (UAM), and threat monitoring/intelligence.
  • Improved scalability. Most clouds can automatically scale data storage and workloads on demand, reducing the overhead associated with manually scaling your infrastructure. 

But the process of migrating databases to the cloud can often exceed time and cost estimates and even lead to security and compliance issues if badly executed. Choosing the right cloud migration services can help streamline key steps and make progress easier to track and manage.

What public cloud should you move to?

A primary consideration that largely dictates what cloud migration services you can pick from is the cloud environment you want to move to.

In some cases, this choice is effectively predetermined. For example, if you are running Microsoft SQL Server workloads and want to keep them in the Microsoft ecosystem, you’ll want to move to Microsoft Azure.  

Similarly, if you use Oracle Database and want to take advantage of the sophisticated cloud migration services that Oracle offers its customers, the best cloud for your workloads might be Oracle Cloud Infrastructure (OCI).

Or maybe you want to use Amazon Web Services with its rich landscape of services. If so, you might benefit from expert guidance from a trusted partner on how to structure your Amazon environment, including networking, storage, and server components. For example, not every business is ready to fully leverage the ephemeral nature of some AWS constructs. The best approach might be to move your database workloads to their own individual instances in Amazon EC2. Or for workloads that don’t require their own instances, Amazon RDS can be a good option.

Finally, if a powerful range of cloud migration services is a deciding factor in your choice of a public cloud, consider Google Cloud. Google Cloud offers multiple approaches for migrating Oracle, SQL Server, and other database workloads. Google’s highly rated cloud migration services use AI to help automate repeatable tasks, saving time and reducing the risk of errors.

What is your database migration strategy?

Another factor in choosing cloud migration services is your database migration strategy. Which strategy you pick will depend on related issues, such as whether you plan to clean up your data or institute new data governance processes as part of the migration.

The three basic database migration strategies are:

  1. Big bang—where you transfer all your data from the source database to the target environment in one “all hands on deck” operation, usually timed to coincide with a period of low database usage, like over a weekend. The advantage of a big bang migration is its simplicity. The downside is that downtime will occur, making this approach unsuitable for databases that require 24×7 availability.
  2. Zero-downtime—where you replicate data from the source to the target. This allows you to use the source database during the migration, making it ideal for critical data. This choice can be fast, overall cost-effective, and generally non-disruptive to the business. The downside of the zero-downtime option is the added complexity of setting up replication, and the risk of possible data loss or hiccups in the data movement if something goes wrong.
  3. Trickle—where you break the migration down into bite-sized sub-migrations, each with its own scope and deadlines. This approach makes it easier to confirm success at each phase. If problems occur, at least their scope is limited. Plus, teams can learn as they go and improve from phase to phase. The problem with a trickle migration is it takes more time and also more resources, since you have to operate two systems until completion.

Cloud migration services examples

Once you’ve identified your target cloud environment and your migration strategy, you can start choosing cloud migration services options.

For example, say you plan to move a business-critical Oracle database to Oracle Cloud Infrastructure using a zero-downtime strategy. One of the best cloud migration services options in this case is Oracle Cloud Zero Downtime Migration (ZDM).

ZDM is Oracle’s preferred automated tool for migrating a database to OCI with no changes to the database type or version, and a great feature is the ability to fall back if necessary. Using a “controlled switchover” approach that includes creating a standby database, ZDM can dynamically move database services to a new virtual or bare metal environment, synchronize the two databases, and then make the target database the primary database.

At the opposite end of the cloud migration services spectrum from Oracle Cloud ZDM is Oracle Cloud Infrastructure Database Migration—a fully managed service that gives customers a self-service experience for migrating databases to OCI. Oracle Cloud Database Migration runs as a managed cloud service separate from the customer’s OCI tenancy and associated resources. Businesses can choose a simple offline migration option (similar to a “big bang” migration) or an enterprise-scale logical migration with minimal downtime (similar to a “trickle” migration). Teams can pause and resume a migration job as needed, such as to conform to a planned maintenance window.

If you want to move your Oracle, SQL Server, or other database workloads to AWS, Amazon offers a comprehensive set of cloud migration services to help automate the process. However, these tools are complex and powerful, and best used by experienced technologists. Be sure to confirm that AWS database sizing and capacity growth parameters meet your needs. You’ll also need to decide whether to use Amazon Relational Database Service (RDS) or RDS Custom, depending on the kinds of applications your database supports.

Next steps

While moving databases to the cloud offers many benefits, a high percentage of cloud database migrations falter or fail due to inadequate planning and/or a lack of specific expertise. The top public cloud environments offer purpose-built cloud migration services to streamline the process, but these are not always easy to use. The largest CSPs also support millions of users, so your business may struggle to get the individual attention you need in a timely way.

Whether your databases reside in a major public cloud or a smaller cloud or managed hosting environment, Buda Consulting is always the first point of contact for our clients. Personalized service by someone who knows your business is guaranteed. If there is ever a problem, you call us and we take it from there. 

Contact Buda Consulting to discuss how our cloud and managed hosting migration services can help your business get maximum value from moving to the cloud.  


A Focus on Oracle Container Databases


Oracle 12c introduced a major architectural change called Oracle container databases, also known as Oracle multitenant architecture. With this change, an Oracle database can act as a multitenant container database (CDB).

A CDB, in turn, can house zero or more pluggable databases (PDBs), each consisting of schemas and objects that function just like familiar “normal” (pre-Oracle 12c) databases from the viewpoint of applications or SQL IDEs.

Contents of CDBs and PDBs

In the Oracle container database model, the CDB contains most of the working components every Oracle DBA knows, such as controlfiles, datafiles, tempfiles, undo, and redo logs. The CDB also contains the data dictionary for objects owned by the root container and those visible to all PDBs in the CDB.

Since the CDB contains most of the key parts of the database, each PDB need only contain information that is specific to itself and its schemas and schema objects, like datafiles and tempfiles. A PDB also has its own data dictionary, which includes information about objects specific to that PDB. A PDB can also have its own local undo tablespace. Each PDB has a unique ID and name. To an Oracle Net client, a PDB looks like a separate database.

Besides PDBs, a CDB can also contain zero or more application containers. These are user-created CDB components that store data and metadata for one or more application backends.

Finally, by default every CDB has one root container (named CDB$ROOT) and one seed PDB container (named PDB$SEED). The former stores Oracle metadata and common users. The latter is a template used to create new PDBs.
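
As a quick illustration of working at the CDB level (the container name, admin credentials, and file paths below are placeholders), listing the containers and creating a new PDB from the seed might look like this:

  -- List the containers in a CDB (run as a common user in CDB$ROOT)
  SELECT con_id, name, open_mode FROM v$containers;

  -- Create a new PDB from PDB$SEED, then open it
  CREATE PLUGGABLE DATABASE sales_pdb
    ADMIN USER sales_admin IDENTIFIED BY "ChangeMe_123"
    FILE_NAME_CONVERT = ('/u01/oradata/CDB1/pdbseed/', '/u01/oradata/CDB1/sales_pdb/');

  ALTER PLUGGABLE DATABASE sales_pdb OPEN;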

Deprecation and desupport of non-CDB databases

Beginning with Oracle Database 12c, Oracle deprecated the non-CDB (non-container) database architecture, and desupported it in Oracle Database 21c. This means that the Oracle Universal Installer and DBCA can no longer be used to create non-CDB instances of Oracle databases.

Desupport also means that an upgrade to Oracle Database 21c includes a migration to the multitenant architecture. This can be a significant consideration as it can change your approach to database administration.

Benefits of Oracle container database architecture

Is a move to Oracle container database architecture worth the learning curve? Why not just continue to create distinct individual databases or virtual machines (VMs)?

The benefits of moving to the CDB architecture often outweigh the “pain of change” because it can streamline your use of database resources and save you considerable operational and administrative time and costs. Pluggable databases are also easy to move between CDBs, which can increase the agility of your DBA services.

Some specific benefits of the Oracle container database model include:

  • The ability to consolidate code and data without changing existing schemas or applications.
  • Consolidating databases means you can also consolidate IT infrastructure and utilize computing resources more efficiently.
  • Consolidated IT infrastructure, in turn, can simplify monitoring and management of the database environment—including faster backups and patching. Performance tuning can also be easier with the Oracle container database model.
  • Because PDBs look like non-multitenant databases to Oracle Net clients, changes for developers working with Oracle databases are often not dramatic. Developers may notice little difference connecting to a multitenant scenario except that the connection strings have a different format.
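
To illustrate the last point above: where an application once connected to a non-CDB instance, it can now simply reference the PDB’s service name in the connection string. A minimal EZConnect-style example (host, port, and service name are placeholders):

  -- Connecting to a PDB by its service name
  sqlplus app_user@//dbhost.example.com:1521/sales_pdb.example.com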

Pluggability in the Oracle container database model

One of the top advantages of the Oracle container database model or multitenant option is the ability to unplug a pluggable database (PDB) from one CDB and plug it into a different CDB. This makes it easy to move databases, and can also be used to patch and upgrade database versions. Basically, you just unplug the PDB, move it to the CDB you plan to upgrade, and it will be patched/upgraded automatically along with the CDB.
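
A sketch of the unplug/plug sequence, reusing the hypothetical sales_pdb from earlier (paths and names are illustrative):

  -- In the source CDB: close the PDB and unplug it to an XML manifest
  ALTER PLUGGABLE DATABASE sales_pdb CLOSE IMMEDIATE;
  ALTER PLUGGABLE DATABASE sales_pdb UNPLUG INTO '/u01/manifests/sales_pdb.xml';
  DROP PLUGGABLE DATABASE sales_pdb KEEP DATAFILES;

  -- In the target CDB: plug it in using the manifest, then open it
  CREATE PLUGGABLE DATABASE sales_pdb USING '/u01/manifests/sales_pdb.xml' NOCOPY;
  ALTER PLUGGABLE DATABASE sales_pdb OPEN;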

The Oracle multitenant model also allows you to relocate a PDB to a new CDB or application container even more easily than going the unplugging/plug-in route, with near-zero downtime. During relocation, the source PDB can be open in read/write mode and fully usable.
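
Relocation is even more compact. A minimal sketch, assuming a database link named source_cdb_link pointing at the source CDB (the link and PDB names are placeholders):

  -- In the target CDB: relocate the PDB from the remote CDB in one statement
  CREATE PLUGGABLE DATABASE sales_pdb FROM sales_pdb@source_cdb_link RELOCATE;
  ALTER PLUGGABLE DATABASE sales_pdb OPEN;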

More about application containers

Along with the Oracle container database model comes the concept of application containers. Similar to a root CDB container, you can use an application container to centralize or “containerize” one or more applications, each consisting of shared configuration, metadata and objects. These are then used by the application PDBs within the application container.

Next steps

The Oracle container database architecture can seem confusing even to experienced DBAs. But it’s more intuitive than it sounds once you’ve had a chance to work with it. The advantages of multitenancy generally far outweigh the learning curve for most DBAs and their companies.

To speak with an Oracle expert about leveraging the Oracle container database model in your environment, contact Buda Consulting.

How Much Does Database Disaster Recovery Cost? “It Depends”


“It Depends” – a sometimes frustrating response that we hear frequently when we ask a question. To some, it feels like a dodge: maybe the person we are asking does not know, would rather not give their opinion, or would rather not share their knowledge.

But when I hear someone respond “It Depends”, I tend to think that they are seriously considering the question, and that a thoughtful, considered answer will follow. In fact, few questions really deserve an automatic response. Most issues are nuanced, and when someone says “It Depends”, it does not mean that they are dodging the question.

A common question we are asked by new clients is how much it will cost to implement Disaster Recovery (D/R) for their database environments. My answer always starts the same way: “It Depends.”

Database Disaster Recovery vs High Availability

Disaster Recovery is sometimes considered distinct from High Availability. For the purposes of this article, I think of them as two parts of the same whole. The objective of both is to keep your database available to your users when they need it. And when designing a solution that meets those objectives, both types of tools may be implemented. 

I think of Disaster Recovery in terms of things like backup and recovery tools and passive standby databases. The idea is to have a straightforward way of recovering and resuming operations if the primary server fails.  And I think of High Availability in terms of things like clustering, geographically distributed availability groups, and active-standby databases. The idea here is to prevent the system from ever failing in the first place.

When it comes to keeping the database available as needed, all of these tools need to be considered.

The Cost of Downtime

There are many factors to consider when thinking about Disaster Recovery. Perhaps the most important, and I think the first that should be asked, is: what is the cost of downtime? Determining the cost of downtime to our own organization requires asking what would happen if we were down for 1 minute, 1 hour, 1 day, or other appropriate intervals, and we must consider all departments and stakeholders. For example, in a manufacturing operation (this list of considerations is not exhaustive):

  • How many orders are typically placed in one minute, hour, day? What is the dollar value of those orders? What percentage will likely be lost forever vs delayed?
  • How many items are received during those intervals? What is the downstream impact on production if items cannot be received into the system?
  • How many items are produced during those intervals? What is the downstream financial impact if they are not produced and shipped?
  • How many orders are labeled during those intervals, and how many shipped? What is the downstream impact of delays in labeling or shipping?
  • What are the upstream production impacts of not being able to produce, label, ship, or record order information (inventory space, etc.)?
  • What is the liability cost of not getting products or services to vendors or end customers within contractual guidelines?

These are not simple questions to answer, but the true cost of downtime can only be determined by such an exercise. 
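
As a purely hypothetical illustration of the arithmetic: if an order-entry system takes 120 orders per hour at an average value of $250, one hour of downtime puts roughly $30,000 of orders at risk; if experience suggests 20% of those orders would be lost for good rather than merely delayed, the direct revenue cost is about $6,000 per hour, before counting receiving, production, labeling, shipping, and contractual-liability impacts. Your own numbers will of course differ, but working through them interval by interval is what turns “downtime is bad” into a figure you can plan around.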

What is Acceptable Database Disaster Recovery?

Once we know the cost of downtime, we can determine what level of disaster recovery is required in order to prevent unacceptable costs to the organization, which, of course, is the main reason to have a disaster recovery plan in the first place.  At the end of the day, the question is how much data loss or downtime is acceptable.

Of course, we would always like to say zero: zero downtime, zero data loss, no matter what. However, implementing true zero-loss Disaster Recovery may be cost-prohibitive for your organization. And moving from a zero-loss posture to a very-small-loss posture can reduce implementation costs very significantly. So it makes sense to determine what the costs are and therefore what is acceptable to the organization.

Once we know the cost for an interval of downtime, we can do a cost/benefit analysis regarding the cost of implementing D/R. 

Factors That Drive The Cost of Implementation

The implementation cost of Database Disaster Recovery depends mainly on two key factors:

  • The amount of data loss that is acceptable (known as recovery point objective or RPO)
  • The amount of downtime that is acceptable (known as recovery time objective or RTO)

For both of these factors, the lower the acceptable loss, the higher the cost, with the cost and complexity of driving down downtime generally greater than that of driving down the amount of data loss.
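
To make that concrete with a hypothetical example: an RPO of 15 minutes can often be met simply by shipping backups or transaction logs to a standby location every few minutes, whereas an RTO measured in seconds generally demands an always-on standby or clustered configuration with automated failover, which costs considerably more to build and operate.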

Implementing a Disaster Recovery scenario with zero possibility of data loss and zero downtime can be very expensive. This approach essentially requires full live redundancy across multiple geographic regions and the complexity that goes along with ensuring a seamless automatic transition of all applications from one environment to another and real-time synchronization between them.  

For many organizations, this full redundancy approach will be cost-prohibitive. And for most organizations, the cost of a small amount of downtime and a small possibility of a very small amount of data loss is acceptable and will not cause significant damage to the operation (or to profit). This compromise can mean the difference between being able to afford a Disaster Recovery Solution and not being able to do so. Having any Disaster Recovery Solution, even one without all zeroes, is much better than having none.

The Bottom Line

When someone asks me how much it will cost to implement a Disaster Recovery Solution, I always say “It Depends”.  And then I ask a lot of questions. Contact us today for a consultation.

Need Continuous Database Protection across Oracle and SQL Server? Consider Dbvisit Standby MultiPlatform.


Availability of your database environment is business-critical, and without continuous database protection you can’t ensure business continuity. It’s only a matter of time before you experience a failure. When (not if) that happens, will you be ready?

When it comes to disaster recovery, many businesses rely on conventional backup/restore procedures to protect their database from risks like operational failures, cyber-attack, disaster impacts, and data corruption. But restoring from traditional backups can be slow, taking hours or even days. Restoring from backups is also notoriously failure-prone because testing and validation are usually infrequent. Plus, depending on how frequently backups occur, you could lose hours’ worth of the most recent changes to your data.

If your organization requires rapid, resilient disaster recovery and business continuity capabilities and/or cannot tolerate data loss, you may want to consider a standby database configuration. A standby database is a copy of the primary database, usually at a remote location. It updates continuously to minimize data loss and can quickly “failover” to support ongoing operations if the primary database goes down or is corrupted.

Why use a standby database for disaster recovery and continuous database protection?

A standby server has several important advantages over traditional backup/restore tools for disaster recovery and data loss prevention:

  • It is always operational and available in seconds, not hours or days, so you can recover more quickly.
  • It minimizes potential data loss by updating continuously with minimal time lag.
  • Its operational readiness is constantly verified, which guarantees database integrity after failover.
  • It enables you to test your disaster recovery plan much more easily, with minimal risk or impact to your primary database and the applications that rely on it.
  • It can be offsite, geographically distant, and running on separate infrastructure from your primary database, which reduces disaster risk in the event of operational failure at your production site.
  • You can enjoy peace of mind knowing that your database is always backed up and can be restored or recovered at any time with no surprises.

In short, a standby database can be an ideal solution for organizations that want to ensure continuous database protection to minimize downtime, data loss, and business risk.

Meet Dbvisit, Buda Consulting’s standby database partner

Buda Consulting has considerable experience helping organizations implement backup/restore, high availability and disaster recovery solutions for their databases on Oracle, Microsoft SQL Server and open-source platforms. We have found our longtime partner Dbvisit to be a world-class standby database solution provider whose solutions are easy to use, cost-effective and backed by great customer service. Our customers of all sizes love Dbvisit, which is why we’re sharing this blog post.


We’re especially excited to share with our client base that Dbvisit now offers the industry’s first multiplatform option. Called StandbyMP, it enables you to manage standby databases for Oracle and SQL Server through a single pane of glass. Imagine confronting an outage and being able to fail over all your databases automatically or with a single click! PostgreSQL support is also coming in 2022.

Another big advantage of Dbvisit solutions is you can deploy them on-premises, in a public cloud or on hybrid cloud. Supported public clouds include Amazon Web Services (AWS), Microsoft Azure and Oracle Cloud.

Gold Standard Disaster Recovery and Continuous Database Protection

The folks at Dbvisit are disaster recovery specialists, with thousands of customers in 120 countries and offices in North America, Europe and Asia Pacific. While they serve some of the world’s leading enterprises, including Verizon, Barclays, 7-Eleven, the US Navy, Volkswagen, PWC and CBS, Dbvisit’s exceptional support and industry-leading total cost of ownership (TCO) make them a great choice for small to midsized businesses (SMBs) as well.

According to Neil Barton, CTO of Dbvisit, “Dbvisit Standby guarantees database continuity through a verified standby database that is always available and ready to take over at the moment you need it.” Even if your most trusted DBA is on vacation when an emergency occurs at 3AM, your database(s) will be protected from contingencies ranging from human error to hardware failure to hurricanes to hackers.

Dbvisit Standby solutions for Oracle and/or SQL Server promise minimal data loss (a maximum of approximately 10 minutes) and fast database recovery/failover (within a few minutes). Continuous exercising and testing maintains and validates the integrity of your standby database 24×7. This is what Dbvisit calls “Gold Standard Disaster Recovery.” It offers the following value propositions:

  • Database integrity with a verified standby database that is identical to the primary database and fully operational to ensure successful failover
  • Resilience to meet your recovery requirements across all outage and disaster scenarios
  • Automated and intuitive to eliminate manual processes, opportunities for error and dependence on highly skilled staff
  • Decision simplification to “de-stress DR”
  • Near-zero data loss
  • Cost-efficiency and low risk


Dbvisit lives up to its motto: “We believe nothing should stand in the way of your business moving forward.”

Dbvisit StandbyMP: Enterprise-class DR for multiple database platforms

Using different disaster recovery tools and processes across multiple database types has always been complex. Dbvisit’s new StandbyMP offering promises to reduce this complexity and for the first time allow customers to manage DR processes for SQL Server and Oracle SE databases through a single console. We are very excited about the multi-platform concept and are looking forward to the addition of PostgreSQL and other popular databases soon.

Prioritizing risk reduction, disaster resiliency, recovery speed and ease of use, StandbyMP delivers rapid time-to-benefit, ease of administration and automated, on-demand failover. Dbvisit guarantees database continuity and radically reduces database risk with a consistent, “Gold Standard” approach to protecting both Oracle and SQL Server databases.

“Our software costs the equivalent of two minutes’ downtime,” said Tim Marshall, Product Marketing Manager, in a recent Dbvisit blog post. “Great doesn’t have to be expensive.”

Dbvisit highlights these key value propositions for its StandbyMP solution:

Simplify – Control your Oracle and SQL Server disaster recovery configurations from a single central console
Speed up – Multi/concurrent database actions accelerate recovery across both Oracle and SQL Server
Risk down – Automation removes manual processes, hard-to-maintain scripts, and opportunities for error
Level up – Simplify your disaster recovery plans and ensure best practices are implemented across all your databases

Next steps

An industry-leading standby database solution like Dbvisit StandbyMP can be the perfect way to continuously protect your critical data—but it’s not right for every database. To connect with an expert on whether a standby database makes sense for your business, contact Buda Consulting to schedule a 15-minute conversation.

For more information on Dbvisit solutions and services, check out Dbvisit.com.


Database Encryption: What You Need to Know


These days organizations are storing, accessing, and analyzing more data than ever, both on-premises and in the cloud. As this trend accelerates, the need for effective database security grows right along with it.

Basic security controls like login/password credentials aren’t adequate to safeguard sensitive data from today’s increasingly sophisticated external and internal attacks. To reduce cyber risk, comply with regulations and give customers and other stakeholders peace of mind, many organizations need a holistic security approach that includes database encryption.

The view that database encryption comes with burdensome costs, added IT complexity and degraded performance is outdated. With today’s solutions, database encryption can be among the easiest, most affordable, and most effective security steps you can take. Multiple database encryption approaches are available, so choosing the right one for your needs is essential.

How does database encryption work?

When you encrypt all or part of a database, an encryption algorithm (there are many) converts the data from human-readable characters to ciphertext, which completely obscures the content and renders it useless to attackers. To decrypt the data and use it, you need the correct key, which the encryption solution generates.

Unlike many other security controls, like firewalls or anti-malware tools, most database encryption operates directly on the data where it is stored, often termed “data at rest.” At-rest encryption keeps your data secure if your network or database server is compromised, or if a malicious insider or cybercriminal with privileged access attempts to exfiltrate your data. Only users who have the right key can make use of the encrypted data.

What types of encryption are available?

To balance your users’ needs for access and performance with the value of your data and the risks it faces, you can choose from a range of database encryption options. These include:

  • Full database encryption, where all the data in the database is encrypted.
  • Column-/field-/table-level database encryption, where the most sensitive data elements are encrypted but others are not. This option can improve application performance and reduce system overhead, because only queries that touch the encrypted data pay the encryption cost.
  • Client-side encryption encrypts the data on a user’s system before it is stored in the database. This approach puts the computational overhead of encryption on the client system, which often has cycles to spare. A further advantage is that data encrypted in this way is safe even from malicious code running on the server or within the RDBMS environment.
  • Homomorphic encryption uses complex mathematical computations to analyze encrypted data in various ways without decrypting it. This approach preserves privacy for sensitive data like health or educational records. It allows cloud service providers (CSPs), remote database administrators (DBAs), and other third parties to process data while maintaining regulatory compliance and full security. 
  • Hardware encryption, where the encryption mechanism is built into the hardware (e.g., a disk drive) where the database resides. The primary benefit of hardware encryption is that if the database environment or server is compromised, the data will remain inaccessible to attackers. 

How does Oracle handle encryption?

Oracle has long supported a feature called Transparent Data Encryption (TDE), which is both effective and straightforward to implement. TDE lets you encrypt entire tablespaces (and thus effectively the whole database) or only specific columns.

Oracle stores the TDE master encryption key outside the database in a keystore (an Oracle wallet or, optionally, the centralized Oracle Key Vault), which helps you govern your keys so that they’re secure from unauthorized access and available automatically (“transparently”) for authorized users and systems. This makes TDE a great option where you need to protect data from attacks that compromise your database servers and/or Oracle RDBMS, or where hackers gain access to the physical storage media.
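
As a minimal sketch of what TDE usage can look like once a keystore is configured and open (object names, paths, and algorithms below are illustrative):

  -- Encrypt everything stored in a new tablespace
  CREATE TABLESPACE secure_data
    DATAFILE '/u01/oradata/ORCL/secure_data01.dbf' SIZE 500M
    ENCRYPTION USING 'AES256'
    DEFAULT STORAGE (ENCRYPT);

  -- Or encrypt only a particularly sensitive column
  CREATE TABLE customers (
    id   NUMBER PRIMARY KEY,
    name VARCHAR2(100),
    ssn  VARCHAR2(11) ENCRYPT USING 'AES192'
  );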

How does Microsoft SQL Server handle encryption?

Like Oracle, Microsoft SQL Server also has the capability to encrypt data at rest, which it calls Transparent Data Encryption (TDE). SQL Server’s TDE offers many of the same data encryption capabilities as Oracle’s TDE. But its default key storage is different. Instead of a separate vault, SQL Server stores the database encryption key (DEK) in the database boot record for availability during recovery.
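
A minimal sketch of enabling TDE for a SQL Server database, assuming a hypothetical database named SalesDB (in practice you would also back up the certificate immediately, since losing it means losing access to the encrypted data):

  USE master;
  CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<a strong password>';
  CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';

  USE SalesDB;
  CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;

  ALTER DATABASE SalesDB SET ENCRYPTION ON;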

SQL Server also offers the Always Encrypted feature, which lets you encrypt highly sensitive data inside client applications and never reveal the encryption keys to the database engine. Because it segregates those who own the data from those who can view it or need to manage it, Always Encrypted is ideal for maintaining security and compliance for high-value data in cloud environments, for instance.

How do open-source databases handle database encryption?

MySQL, PostgreSQL and most other popular open-source databases support third-party encryption libraries, such as pgcrypto or MyDiamo. There are also open-source toolkits for specialized types of database encryption like homomorphic encryption.
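
As a small example of the pgcrypto approach in PostgreSQL, assuming a hypothetical table with a bytea column named ssn_enc (the table, column, and key literal are placeholders; in practice the key would come from the application or a key manager, not a hard-coded string):

  -- Enable the extension once per database
  CREATE EXTENSION IF NOT EXISTS pgcrypto;

  -- Encrypt on write
  INSERT INTO customers (id, ssn_enc)
  VALUES (1, pgp_sym_encrypt('123-45-6789', 'app-secret-key'));

  -- Decrypt on read, only in sessions that hold the key
  SELECT pgp_sym_decrypt(ssn_enc, 'app-secret-key') AS ssn
  FROM customers
  WHERE id = 1;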

Database operations can also call on encryption functions available at the file system level in Windows, Linux, MacOS, etc. With this type of encryption, the server encrypts entire files as they are stored, potentially adding to system overhead but saving the cost of a separate solution.

Important considerations with managing encryption keys

Your encrypted data is only as secure as your encryption keys. Since they control access to encrypted data, you should store your keys separately from the database when possible. For example, both Microsoft Azure and IBM Cloud offer a “key vault” service that stores encryption keys in a hardware security module for an extra layer of protection.

Your key management also needs to factor in backups, because backing up encrypted data without protecting the associated keys could be futile. One option is to consolidate your database encryption keys into a centralized key manager solution, and back them up from there.

Another consideration with encryption keys is their length. Different encryption methods rely on different key types. Longer keys, like longer passwords, are generally more secure. For example, 128-bit encryption schemes use a 128-bit key, which is deemed virtually impossible to break using today’s (non-quantum) computer systems.

The downside of longer keys—and potentially encryption overall—is higher overhead and reduced data throughput, as well as increased storage needs for the database. However, applying best practices in your implementation can reduce or eliminate many undesirable impacts.

Next steps

In response to customer demands and/or emerging security and privacy compliance requirements, more businesses are encrypting more databases in more ways than ever before. But the more data you encrypt, the more encryption keys you need to manage and the more you need to be concerned with performance impacts.

For optimal benefit to your organization, your database encryption strategy should reflect a holistic view of present and projected business needs, including cybersecurity and compliance risks, plus expert knowledge of best practices and technology options. A security risk assessment to identify weak spots is a great place to start.

If you are considering database encryption or want to optimize your current encryption approach, Buda Consulting can help you secure your business-critical data, comply with regulations and address database performance and cost issues.

Contact us to connect with an expert about a security risk assessment and related services.