Architect Your Oracle Database for Efficient Backup and Recovery

Architecting for Backup and Recovery Efficiency with Oracle RMAN

Database architecture is critical to achieving many business objectives, including application performance, business continuity, security, and geographic distribution. This article explores another, often overlooked, objective that the database architecture can influence: backup optimization and recovery efficiency.

Very large databases are common in today’s business environment. Multi-terabyte databases are prevalent in all but the smallest organizations. Despite these large sizes, data from earlier time periods tends not to change and is used less frequently than more current data. When architecting the database, the age of the data, and therefore the likelihood of it changing, can be used as a factor in the physical database design to optimize backup and recovery efficiency and performance.

RMAN Backup Optimization

It is a given that all data must be backed up. But taking that backup puts load on the database that impacts application performance during the time that the backup is running. Common approaches to mitigating this impact include backing up from a standby database rather than the production database, taking offline database backups when the application is not available, and restricting the backup time to non-peak times so the machine resource usage is minimized.

However, in some environments those options are not available. The first option, backing up from a standby database, may not be possible if you don’t have a high-availability environment. Bringing the database down is not an option in a 24×7 production environment. And many databases are so large that the time it takes to back up the entire database simply exceeds the application’s non-peak window.

Partitioning and Archiving

Another technique is to build partitioning and archiving into the database architecture. We can partition large tables and place each partition in its own tablespace. This allows us to isolate data from past time periods that is kept for archival purposes but is queried infrequently and will never be updated. Each such tablespace can be backed up once, when its data reaches the point that it will not change again, and then excluded from the normal backup routine. In many databases, older data represents a very large portion of the overall database, so such a scheme can significantly reduce backup time and, with it, the impact on the application.

There are a number of ways to exclude tablespaces from the backup after they have reached the point where they will not be updated again, including:

  • Making the tablespaces read-only and configuring Backup Optimization in Oracle RMAN. RMAN will then back up each read-only tablespace enough times to satisfy the retention policy and exclude it from subsequent backups.
  • Using the RMAN CONFIGURE EXCLUDE FOR TABLESPACE command. Once configured, the specified tablespace is excluded from future full backups. Excluded tablespaces can still be included explicitly in other backup sets to ensure that the data is backed up.
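In practice, with a hypothetical tablespace named TRAFFIC_2014, the two techniques might look like the following sketch (adjust names and retention settings for your environment):

```sql
-- Option 1: make the tablespace read-only (SQL*Plus), then rely on
-- backup optimization (RMAN) to skip it once the retention policy is satisfied.
ALTER TABLESPACE traffic_2014 READ ONLY;

CONFIGURE BACKUP OPTIMIZATION ON;

-- Option 2: exclude the tablespace from full database backups entirely.
CONFIGURE EXCLUDE FOR TABLESPACE traffic_2014;

-- An excluded tablespace can still be backed up explicitly when needed.
BACKUP TABLESPACE traffic_2014;
```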

Here is an example of how we might use this. Let’s say we have an Oracle database application that collects traffic sensor data. Each day we collect a large set of data from traffic sensors in municipalities around the country. We have very large tables that contain hundreds of data points per sensor, each holding hundreds of gigabytes of data stretching back 10 years. The tables are partitioned so that a new partition is created for each month, and as data is collected it is automatically placed into the proper partition. At the beginning of each year, we can take a single backup of all the tablespaces that hold the prior year’s data. We know that data will never change, so we do not have to include those tablespaces in future backups. We can set those tablespaces read-only, and with backup optimization turned on, RMAN will exclude them from subsequent backups while still enforcing the backup retention policy, so you won’t lose the backup sets needed to restore those tablespaces. An added benefit is that each week’s backup set will be significantly smaller, reducing disk requirements for the ongoing backup sets.
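A partitioning scheme like the one described above might be sketched as follows; the table and column names are hypothetical, and interval partitioning (available since 11g) creates each month’s partition automatically:

```sql
-- Hypothetical sensor table, range-partitioned by month.
CREATE TABLE sensor_readings (
  sensor_id    NUMBER,
  reading_time DATE,
  reading      NUMBER
)
PARTITION BY RANGE (reading_time)
  INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
  (PARTITION p_2014 VALUES LESS THAN (DATE '2015-01-01'));
```

A prior year’s partitions can then be moved into a dedicated tablespace (for example with ALTER TABLE ... MOVE PARTITION ... TABLESPACE ...) before that tablespace is marked read-only.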

Restore Efficiency

In addition to significantly reducing backup time, partitioning the data in this way also improves the efficiency of the restore process: if the datafiles for one tablespace are lost, the others do not need to be restored. This can result in significant time savings during a restore.

Other Benefits

There are other benefits to partitioning your data beyond the improvements to the backup and restore process. By separating older data, which typically does not change and is accessed less frequently, from newer data, we can place the older data on less costly media. And regardless of the media type, there are performance benefits to separating data onto different drives and controllers (particularly useful when using separate storage arrays as opposed to SAN environments).

Thinking Ahead

When architecting a database, think about what the backup and RMAN database recovery process will look like after 10 years. Building backup and restore efficiency into the database design up front will save a lot of redesign later on.

If you are struggling with cumbersome backup and restore processes, or are about to undertake a database design or redesign, please give us a call at (888) 809-4803 x 700. And if you have further thoughts on the topic, please add comments!

Duplicating and Recovering Oracle Databases with RMAN

Oracle Recovery Manager Purpose And History

Oracle Recovery Manager (RMAN) was developed as a way to back up and recover Oracle databases more effectively, to automate those tasks, and to keep track of and manage backup sets. It was introduced in version 8 of the Oracle database, so it has been around for quite a while. In addition to managing backups, it includes other functionality that helps with managing standby databases and creating duplicate databases.

In this article I will discuss several ways that RMAN can be used to duplicate a database and to recover an existing database.  My objective is to discuss the different uses for RMAN and some important enhancements that have been made over time. I will not include specific syntax or detailed instructions but there are links to the Oracle documentation at the end of the article where you can find more detail.

In terms of managing backups, RMAN has many benefits over managing backups manually. Some of the more important benefits include its ability to track the available backups and the locations of the backup files, and its ability to enforce retention policies by preventing the premature removal of backup media (as long as the backup files are not deleted outside of RMAN). The robust way that RMAN manages backup sets contributes to its utility for duplicating and restoring databases.
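A few standard RMAN repository commands illustrate this tracking; shown here as a sketch:

```sql
LIST BACKUP SUMMARY;   -- what backups exist, and on which media
REPORT NEED BACKUP;    -- files not protected under the current retention policy
REPORT OBSOLETE;       -- backups no longer needed under the policy
DELETE OBSOLETE;       -- remove them through RMAN so the repository stays accurate
```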

Active Vs. Backup-Based Duplication — The Evolution Of Database Duplication With RMAN

Database Duplication has been enhanced in important ways in the past few versions of Oracle.

In Oracle 10g, the only option was to create a duplicate database from the backup files and archived redo logs of the source database. This meant that the target host had to have access to these files, so we had to copy them to the target host before the duplication could take place.

Oracle 11g introduced Active Database Duplication as a new option. With this option,  the backup files are no longer required because RMAN duplicates the database by copying images of the source database files (not backup files) directly over the network.  If desired, however, we can still use the original backup-based duplication.

Oracle 12c introduced Active Database Duplication using backup sets. Now, when using this option, we can choose between using database file image copies (this was the only option when using Active Database Duplication in 11g) or using backup sets.  Using the new backup set option has a number of advantages including reduced load on the source system and the ability to use encryption and compression options when creating the new database. This is similar to using the original backup-based duplication except that we don’t have to copy the files over manually.

Key Use cases of RMAN

Create A Test Database Or Refresh A Development Database

RMAN can be used to create a duplicate database for testing or development purposes. A test database is useful for application testing, testing of Oracle Upgrades, or for refreshing a database to be used for development purposes. We can duplicate the entire database or only a portion of it.

Be aware of the security implications of using production data in your test or dev environments, and take proper precautions to protect those environments. Consider using Data Masking to prevent sensitive data from being exposed in your test and development environments. A future blog post will discuss how to use Data Masking for this purpose.

Here are the high level steps for duplicating a database for test purposes (Oracle 12c):

  1. Determine our naming convention for files on the destination host. This is important if we are creating the duplicate database on the same host as the source database.
  2. Decide what type of duplication we will be doing (active vs. backup-based; if active, image copies vs. backup sets).
  3. Make source database backups available to the destination if necessary (depends on the type of duplication)
  4. Create and prepare an instance on the destination host to hold the duplicate database.
  5. Launch RMAN and connect to the source and destination databases (in the RMAN context, the source database is known as the target database and the destination is known as the auxiliary database).  The channels and connections that we specify in this step will depend on the type of duplication we are performing.
  6. Issue the RMAN DUPLICATE command: There are various important options that will need to be set depending on the type of duplication we are performing.
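As a sketch, an active duplication using backup sets (12c) might look like the following; prod and dupdb are illustrative net service names, and the file name conversions are assumptions to adapt to your layout:

```sql
CONNECT TARGET sys@prod
CONNECT AUXILIARY sys@dupdb

DUPLICATE TARGET DATABASE TO dupdb
  FROM ACTIVE DATABASE
  USING BACKUPSET
  SPFILE
    SET DB_FILE_NAME_CONVERT '/u01/prod/','/u01/dupdb/'
    SET LOG_FILE_NAME_CONVERT '/u01/prod/','/u01/dupdb/';
```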

Create A Standby Database For Disaster Recovery

RMAN can be used to create a standby database to be used for disaster recovery purposes. The steps for creating a standby database are similar to the steps for creating a test or dev database except that we will specify ‘FOR STANDBY’ and ‘DORECOVER’ in the RMAN duplicate command.
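The standby variant of the DUPLICATE command might be sketched as follows ('prodsb' is an assumed DB_UNIQUE_NAME; your environment will need its own parameter conversions):

```sql
DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  SPFILE
    SET DB_UNIQUE_NAME 'prodsb'
  NOFILENAMECHECK;
```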

Optionally, when using a standby database for disaster recovery, we may also wish to configure standby redo logs on the primary database and configure the Data Guard Broker to facilitate automatic switchover and failover operations.

Create A Standby Database For Reporting

RMAN can be used to create a standby database for reporting purposes. As of Oracle 11g, if you license the separate Active Data Guard option, the reporting database can stay in recovery mode (redo logs being applied) so the data does not go stale during reporting. Prior to 11g, recovery had to be suspended while a physical standby database was open for reporting. The process of creating a standby database for reporting is similar to the process of creating one for disaster recovery, except that at the end of the process we open the database in read-only mode.

RMAN Database Recovery

In the event of the loss of a datafile or a tablespace, RMAN can be used for database recovery. Because RMAN keeps track of when datafile and archive log backups were taken and where they are located, it can easily restore these files when we issue a RESTORE command. A RECOVER command then performs the recovery operation. The basic steps that we follow when recovering a database or datafile are as follows:

  1. Determine what needs recovery (entire database, datafile, or tablespace).
  2. Launch RMAN and connect to the database to be restored and to the RMAN repository.
  3. Confirm that the required devices and channels are configured, and configure them if necessary.
  4. Issue the required RESTORE and RECOVER commands.
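For example, recovering a single lost tablespace (here a hypothetical USERS tablespace, with the database open) might look like this sketch:

```sql
SQL 'ALTER TABLESPACE users OFFLINE IMMEDIATE';

RESTORE TABLESPACE users;   -- RMAN locates and restores the needed backup pieces
RECOVER TABLESPACE users;   -- applies archived and online redo to bring it current

SQL 'ALTER TABLESPACE users ONLINE';
```

For a complete database restore, the database is mounted (not open) before issuing RESTORE DATABASE and RECOVER DATABASE.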

Notes And Links To Detailed Instructions

I hope you found this discussion of Oracle RMAN database recovery and duplication helpful. Please leave any comments or questions and I will do my best to respond.

If you need help managing your Oracle database or just have a question, give us a call at (888) 809-4803 x 700 or visit www.budaconsulting.com.

Oracle Virtual Private Database (VPD) — Granting Just Enough Access

Robust Access Control At The Database Layer

Many companies struggle with the difficult problem of providing fine-grained access control to sensitive data at the database layer. We will discuss Oracle’s Virtual Private Database feature, which provides a robust way of handling this problem for all types of database users.

Security vs Effectiveness

Securing access to sensitive data is critical. But facilitating access to all the data that employees, customers, and vendors need to do their jobs effectively is equally important.

Meeting these competing goals can be difficult because sensitive data is sprinkled throughout our databases. We have social security numbers in employee tables, credit card numbers in customer tables, health data in patient tables, and financial data in billing tables. In addition to protecting data in our own best interest, regulations such as PCI DSS, HIPAA, and FISMA require us to protect our sensitive data, from outsiders as well as insiders.

So we are left struggling to find ways to give users proper access without giving too much away. This is not easy. Standard security features of relational databases enable us to grant privileges on a full table to a given user or role, but not at the column or row level. In other words, security at the database layer typically provides all-or-nothing access to a table. There are a number of traditional workarounds used to deal with this.

Views

We can use views to provide restricted access to the data. Views can be designed to restrict access to a subset of columns, and they can restrict access to certain rows using a where clause. But there are a number of limitations to this approach. When views are used to restrict access to certain columns, we have to create a different set of views for each user or group that requires a different set of columns. When views are used to restrict access to certain rows, we either have to create different views for each group of users, as discussed above, or specify potentially complex logic in the where clause definitions to determine proper access.

Application Security

Another very common approach is to use the application to restrict the rows or columns that can be seen by the user. The user never directly connects to the table via the application, so the application implements all of the security rules and only displays the allowable rows and columns.

One drawback of this approach is that the security enforcement has to be implemented in each report, screen, or form that accesses the database. This adds complexity and time to the development and testing effort, and can result in security gaps if development or testing is lacking.

Oracle Virtual Private Database

Oracle’s VPD feature addresses these security gaps. It gives the administrator the ability to create rich security policies that provide fine-grained access to both rows and columns of a table for a user or group of users. The control is applied at the database layer and is in effect regardless of how a user connects to the database. There is no dependence on specific views or on specific applications. Oracle VPD is non-trivial to set up, but it provides a high level of security when done properly. At a high level, here is how you set it up. The steps below are performed for each table, view, or synonym that you are protecting.

  • Create an Application Context that is used to set and store session related information for use by the policy function.
  • Create a Policy Function that helps Oracle create the proper where clause to attach to statements against the table. The function can use information from the Application Context (a set of name-value pairs that Oracle stores in memory) that provides session information about a user, such as an employee or customer number. This enables you to restrict information to a specific user or class of users.
  • Create a Policy that attaches the Policy Function you just created to the table, view, or synonym that you are protecting. Once the policy is in place, the function is used to automatically add a where clause to any DML statement (select, insert, update, or delete) executed against the object. Different policies can be applied for each statement type. In addition to the listed DML statement types, index-related DDL statements can be controlled in a similar fashion.
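A minimal sketch of the last two steps, assuming a hypothetical APP.ORDERS table, an ORDERS_CTX application context, and a SEC schema that owns the function:

```sql
-- Policy function: returns the predicate that VPD appends to statements.
CREATE OR REPLACE FUNCTION sec.orders_policy (
  p_schema IN VARCHAR2,
  p_object IN VARCHAR2
) RETURN VARCHAR2
IS
BEGIN
  -- Limit rows to the customer id stored in the session's application context
  RETURN 'customer_id = SYS_CONTEXT(''ORDERS_CTX'', ''CUSTOMER_ID'')';
END;
/

-- Attach the function to the table as a policy.
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'APP',
    object_name     => 'ORDERS',
    policy_name     => 'ORDERS_VPD',
    function_schema => 'SEC',
    policy_function => 'ORDERS_POLICY',
    statement_types => 'SELECT, UPDATE, DELETE');
END;
/
```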

Controlling evaluation of the policy functions

Policy functions optionally gather information from the environment (from application contexts) and then construct the where clause that will be used with the statement types designated in the policy. If this information is gathered and the where clause is rebuilt every time a query is executed, there can be a negative performance impact. To mitigate this, you can control how often the policy function is evaluated: when the session begins, each time a statement is executed, or whenever the application context changes. The choice of when to evaluate the function will depend on the variables involved and how frequently they change in your environment.

Policy Groups

Policy groups enable you to manage your policy functions more easily. You can create one group of policies that is enabled in one context (i.e., for one application) and a different policy group that is enabled in a different context (for a different application). This lets you have different security schemes depending on the application being used, while the use of default policy groups ensures that security is always in place.

Non-Database Users

Some users are not known to the database, as is common in web applications where a single account connects to the database and the application performs the authentication. Enforcing security on these types of users typically must be done at the application layer. However, using the CLIENT_IDENTIFIER attribute of the application context enables you to restrict data for non-database users using Oracle VPD.  Note, however, that there is a whole different set of security implications for those applications, one of which is discussed in my article about Oracle Real Application Security.
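A sketch of how this works: after authenticating the end user itself, the application tags the shared database session with that user's identity, and the policy function reads it back (the user name is illustrative):

```sql
-- Application code, after authenticating end user 'jsmith':
BEGIN
  DBMS_SESSION.SET_IDENTIFIER('jsmith');
END;
/

-- A policy function can then restrict rows with a predicate such as:
--   owner = SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER')
```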

SQL Server RLS (Row Level Security)

SQL Server provides similar functionality with its RLS feature. While RLS can also be used to specify fine-grained security controls at the row level, it has limitations and its feature set is not as rich as Oracle’s VPD. I plan to go into more detail on SQL Server RLS in a future blog post.

Oracle VPD – Worth the Effort?

We have seen that Oracle Virtual Private Database is a robust toolset for providing fine-grained row- and column-level protection for sensitive data. Implementing Oracle VPD requires significant planning and time. A good part of that effort goes into developing a robust security policy, which should be done regardless of how it is enforced; the remainder goes into setting up the Policy Groups, Policies, and the code that implements the Application Context.

If you would like to discuss the pros and cons of Oracle VPD for your organization, chat with us, request a meeting, or send a note to [email protected].

Plugging the Database Audit Trail Gap

The Importance of Database Audit Trails and Security Assessments

Database audit trails are critical tools for the discovery, investigation, and prosecution of cyber crimes ranging from credit card theft to malicious activity by ex-employees to corporate espionage. And database security assessments are critical tools for preventing such activity. But an audit trail or a security assessment is only as good as the information it captures. Many audit trails and security assessments only take you so far because of the way web applications are architected. Fortunately, there is a way to fill that gap using Oracle Real Application Security.

The Weak Link in Audit Trails and Security Assessments

When we perform database security assessments for clients, one of the key reports we provide is a database user rights review: a report that shows which users in the organization have access to sensitive data in the database. Unfortunately, these reports are useless for many web applications because of the way authentication is done.

Similarly, when we do forensic analysis after a security incident, the audit trail should help us identify who the bad actors are, but for the same reasons, we can’t know for sure who the end user is that performed a suspicious action when a business user is not mapped directly to a database user.

The Real End User

A key requirement of data governance is an audit trail that identifies which specific users have access to sensitive data. Every major database vendor has audit capabilities that can show which database user has accessed or modified a given piece of data. This auditing is very robust and, if configured properly, complies with important security tenets, in particular non-repudiation, including the ability to prove the authenticity of the audit trail itself.

Unfortunately, with many web-based applications, even though database auditing may be in place, much weaker user-access auditing is actually in effect. This is because many web applications are architected to use one (or a small number) of database accounts known as application accounts. This is done for a variety of reasons, including the difficulty of maintaining many users in the database, the difficulty of reliably passing authentication through to the database in many environments, and the ability to share sessions for performance reasons.

The result of this practice is that the database itself does not know who the real end user is, and relies on the application sending in a user identifier, which is then looked up in a table in the database that is used for authentication (sometimes), authorization, and application-driven auditing.

The use of application accounts and application driven auditing is weaker than the use of real database accounts and database auditing for a variety of reasons.

  1. It relies on the application to do everything right. Every screen that touches data in any way has to make sure it correctly records the action and the proper user name.
  2. It relies on the database administrators to properly secure the user audit tables, user password tables, etc.
  3. It prevents standardized database security tools from generating reports describing what access an individual user has to a table or view. Custom code has to be written for each database and application to provide a user rights review report.

The Solution

One approach to filling this gap is with the Oracle Real Application Security feature in Oracle 12c.

Oracle’s Real Application Security (RAS) allows the use of individual database users even in a web application.  This results in the native database audit trail containing the actual end user information.

When working with Oracle Real Application Security, fine-grained access to specific database objects is granted to individual database accounts through an Access Control List mechanism, resulting in a database audit trail that is complete, robust, and non-repudiable.

How it Works

At a high level, there are a set of constructs that enable the use of individual database accounts, rather than one main web application user. These constructs are building blocks and are used by higher-level constructs:

  • Traditional Oracle Roles and Privileges
  • RAS Principals
    • Direct Login Application User — Used in place of the traditional application user
    • Simple Application User — An account for each end user
    • Application Role — Manages a logical set of privileges that are defined as a set of ACLs (access control lists).
  • ACE (Access Control Entry) — Controls the use of object privileges by a RAS principal
  • ACL (Access Control List) — Comprised of a set of Access Control Entries (ACE)
  • Security Class — A way of grouping privileges by database object
  • Data Security Policies
    • Data Realms — A way of restricting access to data rows based on a query
    • Column Constraints — A way of restricting access to specific columns of a table

These constructs build on each other to provide very fine-grained access control to data. This is an additional benefit of using RAS, beyond the more detailed and accurate auditing and reporting capability.

Code Simplification

An additional benefit of this approach is that it relieves the application of two key responsibilities, simplifying the design and reducing development effort:

  1. The application does not have to manage authentication or authorization, as this is all handled at the database level.
  2. The application does not have to manage the auditing process; it can all be done through database configuration.

Conclusion

Auditing and reporting user rights is critical to database security. Many organizations lose a reliable source of this information when web applications use application accounts. This gap can compromise efforts to secure your data, particularly from internal threats. If Oracle RAS is an option for you, it can fill the gap.

To learn more about how to secure your data, visit us at www.budaconsulting.com/security.com.

Using Oracle GoldenGate for Real-Time Data Replication and Integration

Oracle GoldenGate is Oracle’s best-of-breed, strategic solution for real-time data replication and integration. It offers a rapid, robust and low-overhead approach for moving small or large volumes of transactional data between diverse operational and analytical systems—including not just various Oracle Database versions but also non-Oracle databases like SQL Server, Sybase, IBM DB2 and others.

Though extremely powerful and flexible, Oracle GoldenGate is straightforward to install and configure. Its transformation, filtering and custom processing features make it suitable for a wide range of use cases, including:

  • Data warehousing and analytics/decision support
  • High availability and disaster recovery (HA/DR) scenarios
  • Database loading and migration

One of our DBA consultants has been working with a complex, mission-critical, enterprise-wide GoldenGate implementation that illustrates how much you can do with this offering. It is being used to enable a “Know Your Customer” application, where data is replicated in real-time between production and DR (“hot-hot”) sites. The client opted to use GoldenGate over Oracle’s physical standby capability because they needed read-write access at both sites.

GoldenGate replicates data among four main regions and three satellite regions worldwide in a complex, multi-master source/target architecture. Data can flow in real-time from any site to any site. Intercontinental and regional data restrictions mean that sensitive data must be masked before sending it out of some regions. This requirement ruled out most competing solutions, which provide only storage-level replication with little flexibility for transforming or filtering the data in between sites.

This implementation also leverages GoldenGate’s DDL replication support, which further supports high availability. Anytime a table is created or modified, the DDL (and hence the modified data) is replicated across the topology. For even greater data reliability, the implementation has been extended via a “self-healing” agent that identifies any data inconsistencies and attempts to fix them automatically, with no human intervention.

As you can see, GoldenGate has a very flexible architecture. It can handle many data extraction/transformation or conflict resolution scenarios out-of-the-box or with minimal customization using built-in functions. It plays very well with Oracle RDBMS, and offers endless possibilities for writing custom PL/SQL functions or native C language routines to get the functionality you need.

Time to Validate Your Database Backup

Another New Client, Another Bad Backup

I originally wrote this post in July, but I am updating it today because it is so important. Yesterday we did an initial Oracle database health check for another new client and found, yet again, that their backups are useless. This time the client was taking backups regularly, but was simply copying the open database files without first preparing the database. Unfortunately, this renders the backups useless.

Here is the original post:

One of the first things that we do when engaging a new client for Oracle or SQL Server database administration support is to inventory their current databases and validate that they have a good backup and recovery scheme in place. The vast majority of the time, we find that either no backups are being taken at all, or that not all of the data and configuration files are being backed up properly, despite the fact that the client thinks they are.

Often, a proper backup was in place at some point in the past, but over time the configuration of the system changed, rendering the backup useless. These changes include changes in disk configuration, the addition of data files, and other changes made at the operating system level and configured properly in the database, but never reflected in the backup configuration.

It is a simple oversight that can have catastrophic effects. We recently spent over three months helping a new client recover many years’ worth of data that they thought they were backing up properly, but they were wrong!

A regularly scheduled review of the backup scheme and a test recovery is the best way to ensure that your Oracle or SQL Server database is being properly backed up.
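For Oracle, RMAN can perform much of this validation without actually restoring anything; the commands below are a sketch of a periodic check:

```sql
-- Confirm that the backups needed to restore the database exist and are readable
RESTORE DATABASE VALIDATE;
RESTORE ARCHIVELOG ALL VALIDATE;

-- Check datafiles and archived logs for corruption as part of a backup run
BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;
```

A validation run is useful, but it still does not replace a periodic full test restore to separate hardware.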

Contact Buda Consulting today for a comprehensive Disaster Recovery review of your Oracle or SQL Server Databases.