Mind the Gap – Data Security and Teenage Drivers

New Security Assessment Tool — Why It Matters

Oracle just released the Database Security Assessment Tool (DBSAT), which identifies security vulnerabilities in Oracle databases. I will be writing about that tool in an upcoming article, but the release got me thinking about how little many companies do to protect their data. Since this post was prompted by the Oracle release, it will be Oracle-tinted, but the concepts hold true for all database management vendors. And what does this all have to do with teenage drivers? Stick with me, I will tie it all together, I promise.

You Have Gaps

Yes, that’s right, there are gaps in your database security. You may think there aren’t. You may suspect there are, but since you know you can’t make a database totally secure, you rely on your network security layer, close one eye, and tell yourself that you are secure. But you are not. Not really.

Like Kids and Cars

The truth is you can never be totally secure. There will always be a hole somewhere. But the best play is to minimize the risk wherever you can. Like when you help your kids buy their first car. You know that putting a 17-year-old behind the wheel is dangerous. You know that locking them in their room is much safer than letting them behind the wheel. But you must let them drive. So you do everything possible to protect them.

Reducing the Risks

You help them buy the biggest, safest car that you can afford. You give them the best driving lessons that you can find. You teach them for hours and hours how to anticipate and avoid others on the road that are looking at their phones instead of the road. You insist on a standard shift car so they can’t text and drive. And then you hope for the best. But you did everything in your power to mitigate the risk before hoping for the best. 

The same must be true for your data. Your data will never be 100% safe. Anyone who tells you that it is, is either lying or fooling themselves. But there are steps that you can take today to dramatically lower your risk, and you are probably not taking them.

You secure the network layer. You enforce strong passwords. You encrypt data in transit throughout the network. That is all great. But if that is all you do, then it is like buying your kid a big old pickup truck with a strong body but no airbags or seatbelts. You are strengthening the outside, but you are neglecting the inside, where the kids are. (OK, I know it’s an imperfect analogy, but work with me!)

To really mitigate the risk to your precious data, you must secure your data from the inside, not just the outside. Your database software provides the capability to add significant layers of security. You have basic features available such as data encryption, role-based security, and strong internal password policies. And you have advanced security features such as Virtual Private Database, Label Security, and Transparent Sensitive Data Protection.
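To make this concrete, here is a minimal sketch of two of those inside-the-database layers: a least-privilege role, and a Virtual Private Database policy registered with the DBMS_RLS package. All schema, role, and function names (hr, app_reader, emp_filter, APP_CTX) are illustrative, not taken from any particular system.

```sql
-- Least-privilege role: readers get SELECT only, nothing more.
CREATE ROLE app_reader;
GRANT SELECT ON hr.employees TO app_reader;

-- VPD policy function: returns a predicate that limits rows to the
-- department stored in the (hypothetical) application context APP_CTX.
CREATE OR REPLACE FUNCTION hr.emp_filter (
  p_schema IN VARCHAR2,
  p_object IN VARCHAR2
) RETURN VARCHAR2 AS
BEGIN
  RETURN 'department_id = SYS_CONTEXT(''APP_CTX'', ''DEPT_ID'')';
END;
/

-- Attach the policy so every SELECT against hr.employees is filtered.
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'EMP_DEPT_POLICY',
    function_schema => 'HR',
    policy_function => 'EMP_FILTER',
    statement_types => 'SELECT');
END;
/
```

With a policy like this in place, even a connection that gets past the network layer sees only the rows its context allows.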

Missed Opportunities

In most organizations, these security features are either unused, underused, or misused.  It is an opportunity to significantly reduce data risk that is being widely missed. If you are serious about protecting your data assets, you can approximate total data security by properly implementing the appropriate combination of these strong database security features in addition to the network security that you already practice. The cost of implementing stronger security may be tiny compared to the cost of damage to your business that can be done by a breach.

Take the next step to secure your data

If you are serious about protecting your database assets, give us a call and we can help you protect your data using the tools that are available from your database vendor.

And good luck with the kids!

A Christmas Backup Tale

Don’t let the Backup Grinch steal Christmas

I have written about this topic in the past but it seems that it can never be shared too many times. As we approach the holiday season, we don’t want to be recovering data when we should be spending precious time with family and friends. So I will share another cautionary tale about how what we don’t know can hurt us. And how we can improve the chances of an uninterrupted holiday celebration.

This past weekend, during a routine health check that we perform for one of our clients, one of our Oracle DBAs found that backups had started failing a few days prior. The client had no idea that there was any problem with the backup.

In this case, the database backup itself was successful, but the ongoing control file and archive log backups failed. Basically, this means that in the event of a failure, they would have been unable to restore any transactions that happened since the last full backup. This could result in a major loss of data.
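A silent failure like this is easy to spot if you look. As a sketch, the standard Oracle dynamic view V$RMAN_BACKUP_JOB_DETAILS shows recent backup job outcomes, and a couple of RMAN commands will show whether archive log backups are keeping up (no object names here are hypothetical; these are built-in views and commands):

```sql
-- From SQL*Plus: recent RMAN backup jobs and whether they completed.
SELECT start_time, input_type, status
FROM   v$rman_backup_job_details
ORDER  BY start_time DESC;

-- From the RMAN prompt: what archive log backups exist, and what
-- still needs backing up under the current retention policy.
RMAN> LIST BACKUP OF ARCHIVELOG ALL SUMMARY;
RMAN> REPORT NEED BACKUP;
```

A FAILED status in the view, or a growing list from REPORT NEED BACKUP, is exactly the kind of thing a regular health check catches before it matters.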

This is another illustration of the critical importance of having a dedicated team whose responsibility it is to check the health of your database and your backups on a regular and frequent basis.

And now with my own special version of a holiday classic (a thousand apologies to Clement Clarke Moore), I wish you all healthy databases and consistent backups all through the year!

A Visit from The Buda Team

Twas the night before weeks-end, when all through production
Not a backup was running, there was an obstruction
The schedule had been set, so long ago now
No-one thinks they should check it, it’s just running somehow
And while you were nestled all snug in your bed
You had no way of knowing your backup was dead
But the Buda Team, ready to check out your logs
Are poised to find out that your system has hogs
Bad programs preventing the backup from storing
But now that we fixed it you can go back to snoring
In the morning we’ll tell you what happened last night
And with one great big sigh you’ll say “So glad we’re alright!”

Happy Holidays!

Tales of Business Continuity and Disaster Recovery Planning

Planning for Disaster

As we think about the enormous cleanup effort taking place in the wake of Harvey, Irma, and Maria, we are reminded of the importance of planning to keep your business up and running when disaster strikes. In this blog post I share the thoughts of an expert in the field of Business Continuity Planning and Disaster Recovery.

I recently had an interesting and fun conversation with Bob Cohen, Business Continuity and Disaster Recovery Practice Director at Pivot Point Security, about some key aspects of Business Continuity Planning and some cautionary tales about the risks of not doing so.

[table id=3 /]

Notes:

I hope you enjoyed the conversation that I had with Bob. Pivot Point Security and Buda work together to ensure that our clients’ data assets are secure and protected against all types of threats.

If you have any questions about how to ensure that you are prepared to recover your databases in the event of a disaster, please give us a call at (888) 809-4803 x 700 and if you have further thoughts on the topic, please add comments.


Architect Your Oracle Database for Efficient Backup and Recovery


Architecting for Backup and Recovery Efficiency with Oracle RMAN

Database architecture is critical to achieving many business objectives. These include application performance, business continuity, security, and geographic distribution.  This article will explore another often overlooked objective that can be influenced by the database architecture: backup optimization and recovery efficiency.

Very large databases are common in today’s business environment. Multi-terabyte databases are prevalent in all but the smallest organizations. Despite these large sizes, we find that data from earlier time periods rarely changes and is queried less frequently than more current data. When architecting the database, the age of the data, and therefore the likelihood of it changing, can be used as a factor in the physical database design to optimize backup and recovery efficiency and performance.

RMAN Backup Optimization

It is a given that all data must be backed up. But taking that backup puts load on the database, which impacts application performance while the backup is running. Common approaches to mitigating this impact include backing up from a standby database rather than the production database, taking offline database backups when the application is not available, and restricting the backup to non-peak times so that machine resource usage is minimized.

However, in some environments, those options are not available. The first option, backing up from a standby database, may not be possible if you don’t have a high availability environment. Bringing the database down is not an option in a 24×7 production environment. And there are many databases that are so large that the time it takes to back up the entire database simply exceeds the non-peak times for the application.

Partitioning and Archiving

Another technique is to build partitioning and archiving into the database architecture. We can partition large tables by time period and place each partition into a separate tablespace. This allows us to isolate data from past time periods that is kept for archiving purposes but is not frequently queried and will never be updated. Each such tablespace can be backed up once the data reaches the point where it will not change again, and then excluded from the normal backup routine. In many databases, older data represents a very large portion of the overall database, so such a scheme can significantly reduce the backup time, thereby significantly reducing the impact on the application.
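As a sketch of what this looks like in the physical design, here is a range-partitioned table with one tablespace per monthly partition. The table, column, and tablespace names are illustrative only:

```sql
-- Sketch: range-partition by month, one tablespace per partition,
-- so each month's data can later be frozen and backed up on its own.
CREATE TABLE sensor_readings (
  reading_time  DATE   NOT NULL,
  sensor_id     NUMBER NOT NULL,
  reading_value NUMBER
)
PARTITION BY RANGE (reading_time) (
  PARTITION p_2017_01
    VALUES LESS THAN (TO_DATE('2017-02-01','YYYY-MM-DD'))
    TABLESPACE ts_2017_01,
  PARTITION p_2017_02
    VALUES LESS THAN (TO_DATE('2017-03-01','YYYY-MM-DD'))
    TABLESPACE ts_2017_02
);
```

Because each partition lives in its own tablespace, the tablespace becomes the natural unit for both backup exclusion and restore.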

There are a number of ways to exclude tablespaces from the backup after they have reached the point where they will not be updated again, including:

  • Making the tablespaces read-only and configuring Backup Optimization in Oracle RMAN. After this, RMAN will back up each tablespace enough times to satisfy the retention policy and will then exclude it from subsequent backups.
  • Using the RMAN CONFIGURE EXCLUDE FOR TABLESPACE command. Once configured, the specified tablespace will be excluded from future full backups. These tablespaces can still be explicitly included in other backup sets to ensure that the data is backed up.
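The second approach can be sketched in a few RMAN commands. The tablespace name ts_archive is illustrative:

```sql
-- Exclude an archival tablespace from future whole-database backups.
RMAN> CONFIGURE EXCLUDE FOR TABLESPACE ts_archive;

-- Whole-database backups now skip ts_archive; back it up once
-- explicitly so at least one copy exists:
RMAN> BACKUP TABLESPACE ts_archive;

-- To undo the exclusion later:
RMAN> CONFIGURE EXCLUDE FOR TABLESPACE ts_archive CLEAR;
```

Note that the exclusion applies to full database backups; it is still up to you to ensure the excluded tablespace has a valid backup somewhere.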

Here is an example of how we might use this: let’s say that we have an Oracle database application that collects traffic sensor data. Each day we collect a large set of data from traffic sensors in municipalities around the country. We have very large tables that contain hundreds of data points per sensor. Each table contains hundreds of gigabytes of data stretching back 10 years. The tables are partitioned so that a new partition is created for each month, and as the data is collected, it is automatically placed into the proper partition. At the beginning of each year, we can take a single backup of all the tablespaces that hold the data from the prior year. We know that data will never change, so we do not have to include those tablespaces in future backups. We can set these tablespaces to read-only, and with backup optimization turned on, RMAN will exclude them from subsequent backups, but will still enforce the backup retention policy so you won’t lose backup sets that are necessary to restore those tablespaces. An added benefit is that each week’s backup set will be significantly smaller, thereby reducing disk requirements for the ongoing backup sets.
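The start-of-year routine for this hypothetical sensor database boils down to three commands per prior-year tablespace (the tablespace name sensors_2016 is illustrative):

```sql
-- Freeze last year's data so it can never change again.
SQL>  ALTER TABLESPACE sensors_2016 READ ONLY;

-- Tell RMAN to skip files with no changes once retention is satisfied.
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;

-- Take the final backup of the now-frozen tablespace.
RMAN> BACKUP TABLESPACE sensors_2016;
```

From this point on, weekly full backups no longer re-copy last year’s data, while the retention policy still protects the one backup that can restore it.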

Restore Efficiency

In addition to significantly reduced backup time, partitioning the data in this way also improves the efficiency of the restore process because if one partition fails, the others do not need to be restored. This can result in significant time savings during a restore.

Other Benefits

There are other benefits to partitioning your data beyond the improvements to the backup and restore process. By separating older data, which typically does not change and is accessed less frequently, from the newer data, we can place the older data on less costly media. And regardless of the media type, there are performance benefits to separating data onto different drives/controllers (particularly useful when using separate storage arrays as opposed to SAN environments).

Thinking Ahead

When architecting a database, think about what the impact of the backup and RMAN database recovery process will look like after 10 years. Architecting backup and restore efficiency into the database design at the outset will save lots of redesign later on.

If you are struggling with cumbersome backup and restore processes, or are about to do a database design or redesign, please give us a call at (888) 809-4803 x 700, and if you have further thoughts on the topic, please add comments!

If you enjoyed this article please like and share!

The Cloud Does Not Exist!

“Let’s Move to The Cloud!”

We often hear people talk about moving their business or their data center or their software to “The Cloud“.  For many, this concept seems confusing and vague.

That’s because The Cloud does not exist!

There is no “The Cloud”. In reality, there are many clouds. And therefore we can’t just decide to move to “The Cloud”.

Instead, there are many clouds with services being offered by many vendors. A cloud at its core is a collection of hardware and software combined by a vendor into a service offering that provides some level of computing service for their customers.  Depending on our risk tolerance,  bandwidth requirements, data custody and security requirements, level of technical expertise, and business model, one or more of these levels may make sense for our organization.  These levels are known as Infrastructure as a Service, Platform as a Service, and Software as a Service. The table below describes these levels in more detail. The technical reader will recognize that these levels are fuzzy and that the things that are included in each level and the things that we control in each level can vary from vendor to vendor but this table gives us a sense of each level.

[table id=1 /]

A source of confusion when thinking of the cloud is that it is often thought of as an external organization abstracting away the underlying technical details of our computing environment. For example, PAAS (Platform as a Service) offerings abstract away everything from the physical hardware up through the operating system, leaving us to manage only the software frameworks we are using, which may be database management systems like Oracle, or software development platforms like Visual Studio. But in reality, it is the abstraction that is the essence of a cloud, not the fact that it is an external party providing it. Therefore we can have on-premises clouds hosted at our own data center, and private clouds managed by our own team but hosted at external data centers. It is the stack of software providing the abstraction that makes it a cloud, not the vendor. The foundation of this stack is mostly virtualization and automation related software.

Journey into the Clouds

Jumping right into the clouds is difficult and scary. We can’t see things clearly with all these clouds around. We don’t know what and who we can trust.

The good news is that we can take advantage of some of the huge benefits of cloud computing without some of the riskier aspects.  When we run a private cloud or on-premises cloud, we still benefit from the virtualization and automation when provisioning servers, databases, etc, while minimizing the risk that may be introduced by using shared services or relying on external vendors. Additionally,  if we transition our software to use our own private cloud services, they will be much further along when it comes time to move to public cloud services in the future.

There are other options to make the journey less scary as well: some vendors are providing ways to simplify the process of taking meaningful steps toward the public cloud while staying on premises. Oracle offers the Oracle Cloud Machine, a machine that lives in our own data center behind our own firewalls, offers IAAS and PAAS capabilities, and is installed and managed by Oracle. When we are comfortable moving to the public cloud, the entire environment can be picked up and moved to Oracle’s public cloud. And Microsoft has just announced the Azure Stack. This will enable us to use the same software stack that is running in the Microsoft public Azure cloud, but running in our own data centers, on our own hardware. Again, after transitioning our software to use cloud services, a future shift to the Azure public cloud will be greatly simplified.

Clouds are not one size fits all

So when we think about how to transition to cloud based technology, we should stop thinking about moving to “The Cloud” because that is too simplistic. We need instead to look at each component of our IT services and think about what level of computing resources we would like abstracted away for that component, and then choose from the available clouds that provide that level of service.

For example, we may decide that for our Customer Relationship Management System, SAAS is the proper level because we are comfortable with the cloud vendor providing all of the IT administration, Disaster Recovery, and Security services, but for our Chemical Inventory Management system that holds highly sensitive formula information, we may choose to go with a PAAS solution or even an IAAS solution because we want to have more control over network and data security. And for a Financial Trading System we may insist on a private cloud IAAS solution so we have full control over all aspects of redundancy,  connectivity,  and security.

Get your head out of “The Cloud”

We are all thinking about the Cloud these days. I recently heard a talk by the great physicist and author Michio Kaku, who predicts that through Artificial Intelligence and technology that can read and write our memories, we will all essentially think in “The Cloud” some day.

But for our businesses today, we have to think about the individual clouds so that we don’t get lost. For each service being provided to our employees, partners, customers, regulators, etc, we must think about the appropriate level of service and abstraction (IAAS, PAAS, SAAS), and then evaluate offerings of the cloud vendors at that level.

So when we think about “The Cloud”, we must think instead of “The Clouds”. And we may see things a bit more clearly.

If you would like to discuss more about your Journey into The Clouds please give us a call at (888) 809-4803 x 700 and if you have further thoughts on the topic, please add comments!

If you enjoyed this article please like and share!

 

Duplicating and Recovering Oracle Databases with RMAN


Oracle Recovery Manager Purpose And History

Oracle Recovery Manager (RMAN) was developed as a way to more effectively back up and recover Oracle databases, as an automation mechanism for those tasks, and as a way to keep track of and manage backup sets. It was introduced in version 8 of the Oracle database, so it has been around for quite a while. In addition to managing backups, it includes other functionality that helps with managing standby databases and creating duplicate databases.

In this article I will discuss several ways that RMAN can be used to duplicate a database and to recover an existing database.  My objective is to discuss the different uses for RMAN and some important enhancements that have been made over time. I will not include specific syntax or detailed instructions but there are links to the Oracle documentation at the end of the article where you can find more detail.

In terms of managing backups, RMAN has many benefits over managing backups manually. Some of the more important benefits include its ability to track the available backups and the location of the backup files, and its ability to enforce retention strategies by preventing the removal of backup media (as long as the backup files are not deleted outside of RMAN). The robust way that RMAN manages backup sets contributes to its utility for duplicating and restoring databases.

Active Vs. Backup-Based Duplication — The Evolution Of Database Duplication With RMAN

Database Duplication has been enhanced in important ways in the past few versions of Oracle.

In Oracle 10g, the only option was to create a duplicate database using the backup files and archived redo logs of the source database. This meant that the target host had to have access to these files, so we had to copy them to the target host before the duplication could take place.

Oracle 11g introduced Active Database Duplication as a new option. With this option,  the backup files are no longer required because RMAN duplicates the database by copying images of the source database files (not backup files) directly over the network.  If desired, however, we can still use the original backup-based duplication.

Oracle 12c introduced Active Database Duplication using backup sets. Now, when using this option, we can choose between using database file image copies (this was the only option when using Active Database Duplication in 11g) or using backup sets.  Using the new backup set option has a number of advantages including reduced load on the source system and the ability to use encryption and compression options when creating the new database. This is similar to using the original backup-based duplication except that we don’t have to copy the files over manually.

Key Use Cases Of RMAN

Create A Test Database Or Refresh A Development Database

RMAN can be used to create a duplicate database for testing or development purposes. A test database is useful for application testing, testing of Oracle Upgrades, or for refreshing a database to be used for development purposes. We can duplicate the entire database or only a portion of it.

Be aware of the security implications of using production data in your test or dev environments, and take proper precautions to protect those environments. Consider using Data Masking to prevent sensitive data from being exposed in your test and development environments. A future blog post will discuss how to use Data Masking for this purpose.

Here are the high level steps for duplicating a database for test purposes (Oracle 12c):

  1. Determine our naming convention for files on the destination host. This is important if we are creating the duplicate database on the same host as the source database.
  2. Decide what type of duplication we will be doing (active vs. backup-based; if active, image copies vs. backup sets).
  3. Make source database backups available to the destination if necessary (depends on the type of duplication)
  4. Create and prepare an instance on the destination host to hold the duplicate database.
  5. Launch RMAN and connect to the source and destination databases (in the RMAN context, the source database is known as the target database and the destination is known as the auxiliary database).  The channels and connections that we specify in this step will depend on the type of duplication we are performing.
  6. Issue the RMAN DUPLICATE command: There are various important options that will need to be set depending on the type of duplication we are performing.
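Steps 5 and 6 above can be sketched as follows for a 12c active duplication using backup sets. The net service names (prod, testdb) are illustrative, and the exact options you need will vary with your environment:

```sql
-- Connect to the source (TARGET) and the new instance (AUXILIARY).
$ rman TARGET sys@prod AUXILIARY sys@testdb

-- Duplicate over the network using backup sets (12c option),
-- copying the spfile and skipping the filename clash check because
-- the hosts use different directory structures.
RMAN> DUPLICATE TARGET DATABASE TO testdb
        FROM ACTIVE DATABASE
        USING BACKUPSET
        SPFILE
        NOFILENAMECHECK;
```

Using backup sets rather than image copies shifts some of the work to the auxiliary channels, which is part of why this option reduces load on the source system.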

Create A Standby Database For Disaster Recovery

RMAN can be used to create a standby database to be used for disaster recovery purposes. The steps for creating a standby database are similar to the steps for creating a test or dev database except that we will specify ‘FOR STANDBY’ and ‘DORECOVER’ in the RMAN duplicate command.

Optionally, when using a standby database for disaster recovery, we may also wish to configure standby redo logs on the primary database and configure the Data Guard Broker to facilitate automatic switchover and failover operations.
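As a sketch, the standby variant of the DUPLICATE command looks like this (the db_unique_name value prod_stby is illustrative):

```sql
-- Create a physical standby from the live primary and apply redo
-- to bring it current (DORECOVER).
RMAN> DUPLICATE TARGET DATABASE
        FOR STANDBY
        FROM ACTIVE DATABASE
        DORECOVER
        SPFILE
          SET db_unique_name='prod_stby'
        NOFILENAMECHECK;
```

Note that FOR STANDBY keeps the original DBID, which is what allows the new database to serve as a standby rather than an independent clone.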

Create A Standby Database For Reporting

RMAN can be used to create a standby database to be used for reporting purposes. As of Oracle 11g, if using the separately licensed Active Data Guard option, the reporting database can stay in recovery mode (redo logs being applied) so the data does not get stale during reporting. Prior to 11g, recovery had to be suspended while a physical standby database was open for reporting. The process of creating a standby database for reporting purposes is similar to the process of creating one for disaster recovery, except that at the end of the process we open the database in read-only mode.

RMAN Database Recovery

In the event of the loss of a datafile or a tablespace, RMAN can be used for database recovery. Because RMAN keeps track of when data file and archive log backups were taken and where they are located, it can easily restore these files when we issue a RESTORE command. A RECOVER command then performs the recovery operation. The basic steps that we follow when recovering a database or data file are as follows:

  1. Determine what needs recovery (entire database, data file, or tablespace).
  2. Launch RMAN and connect to the database to be restored and to the RMAN repository.
  3. Confirm that the required devices and channels are configured and configure them if necessary.
  4. Issue the required RESTORE and RECOVER commands.
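The steps above can be sketched for the common case of a damaged tablespace while the rest of the database stays open. The tablespace name users is illustrative:

```sql
-- Connect to the database to be repaired.
RMAN> CONNECT TARGET /

-- Take only the damaged tablespace offline, restore its files from
-- backup, apply redo to bring it current, and bring it back online.
RMAN> SQL 'ALTER TABLESPACE users OFFLINE IMMEDIATE';
RMAN> RESTORE TABLESPACE users;
RMAN> RECOVER TABLESPACE users;
RMAN> SQL 'ALTER TABLESPACE users ONLINE';
```

Because RMAN already knows which backup pieces and archive logs it needs, the RESTORE and RECOVER commands require no file names at all.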

Notes And Links To Detailed Instructions

I hope you found this discussion of Oracle RMAN database recovery and duplication helpful. Please leave any comments or questions and I will do my best to respond.

If you need help managing your Oracle database or just have a question, give us a call at (888) 809-4803 x 700 or visit www.budaconsulting.com.