Ever Dropped an Oracle Table and Wish You Hadn’t?

If you’ve ever dreamed of going to a recycle bin like you can on Windows, and “undropping” an Oracle database table… Oracle has made your dream come true!

Introduced with Oracle 10g, Oracle’s recycle bin works a lot like what you’re used to in Windows. When this feature is enabled (the default setting), dropped tables don’t actually get deleted. Instead, they “land” in the recycle bin and can be restored.

The recycle bin is actually a data dictionary table containing data about dropped objects. What actually happens when recycling is enabled is that Oracle renames each dropped table and associated objects (indexes, LOB segments, triggers, etc.) with a system-generated name that starts with “BIN$.” The table data is still available and you can query it like any other table.

To “undrop” a dropped table that’s in the recycle bin, you perform an operation known as a “flashback drop.” The command syntax is FLASHBACK TABLE <name> TO BEFORE DROP. This command simply renames the BIN$ table back to its original name.
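
For example, restoring a just-dropped table looks like this (TABLE_XYZ is a hypothetical name, matching the example later in this post):

SQL> DROP TABLE TABLE_XYZ;
SQL> FLASHBACK TABLE TABLE_XYZ TO BEFORE DROP;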

The downside of the recycle bin is that dropped tables are really only renamed. Their table segments are still taking up space in your tablespace, which still counts against your user tablespace quotas. To recover the space associated with dropped tables you need to explicitly “purge” unwanted tables and associated objects from the recycle bin.

Fortunately, purging is pretty easy and flexible. If you have the SYSDBA privilege, you can purge everything from all users’ recycle bins with PURGE DBA_RECYCLEBIN. Or you can purge just your own recycle bin with PURGE RECYCLEBIN. You can even purge “recycled” objects by tablespace, and optionally by owner within that tablespace, with PURGE TABLESPACE <tablespace> [USER <user>]. Users have access in the recycle bin only to those objects that they themselves dropped.
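
Here’s a quick sketch of the variants (the tablespace and user names, USERS and SCOTT, are hypothetical stand-ins; PURGE TABLE works for a single dropped table):

SQL> PURGE TABLE TABLE_XYZ;
SQL> PURGE RECYCLEBIN;
SQL> PURGE TABLESPACE USERS USER SCOTT;
SQL> PURGE DBA_RECYCLEBIN;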

It’s normal to end up with multiple versions of a table in the recycle bin. In these situations, Oracle restores the most recently dropped version. To restore an earlier version, you can simply refer to it by its unique, BIN$-prefixed name. Alternatively, you can run the FLASHBACK command multiple times, renaming each restored copy, until you get back to the version you want.
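
To find the system-generated name and pull back a specific copy, a sketch like this should work (the BIN$ name shown is made up; substitute the one your query returns):

SQL> SELECT object_name, original_name, droptime FROM recyclebin;
SQL> FLASHBACK TABLE "BIN$gk3lsj/3akk5hg3j21kl5j3d==$0" TO BEFORE DROP RENAME TO table_xyz_previous;

The optional RENAME TO clause lets you restore the older copy under a new name without disturbing the current version of the table.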

The RECYCLEBIN initialization parameter turns the recycle bin feature on or off at the system or session level. Turning recycling off doesn’t prohibit you from restoring objects that were in the recycle bin before you disabled it.
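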
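
For reference, toggling the feature looks roughly like this (a minimal sketch; on recent releases the ALTER SYSTEM form must be DEFERRED or SCOPE=SPFILE, so it affects only new sessions):

SQL> ALTER SESSION SET recyclebin = OFF;
SQL> ALTER SESSION SET recyclebin = ON;
SQL> ALTER SYSTEM SET recyclebin = OFF DEFERRED;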

If you want to keep the recycle feature enabled but bypass it when dropping specific tables, just add the PURGE clause to the DROP TABLE statement; e.g.:

SQL> DROP TABLE TABLE_XYZ PURGE;

If you’re not already using the recycle bin feature, why not turn it on and get it working for you today? There’s really nothing to lose—literally!

For expert advice on how to fine-tune use of Oracle’s recycle bin for your environment, schedule a free consultation with Buda Consulting.

Architecting to Maximize Recovery Options in Oracle

I recently received a frantic call from a client who believed they had been hacked and needed to recover data quickly. They said that data belonging to two or more of their customers had been lost.

Our client functions essentially as an application service provider (ASP). Their customers’ data resides in an Oracle database that our client manages, organized so that each customer’s application is served by a separate schema, with all of the schemas sharing one database.

We have advised this client on numerous occasions to separate each of their customers’ data into separate databases, or at least separate tablespaces. This is a good idea for several reasons, one of which is recoverability. Unfortunately, they resisted our suggestions and today are probably regretting that decision.

Oracle Recovery Manager (RMAN) offers a few different options for recovery. You can recover an entire database, an individual tablespace or an individual data file. But you cannot recover an individual schema (table owner) and its objects unless they are in their own tablespace.
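
For context, here’s a rough sketch of a tablespace point-in-time recovery in RMAN, assuming a tablespace named CUST_A and an auxiliary staging area (both names are hypothetical):

RMAN> RECOVER TABLESPACE cust_a
        UNTIL TIME "TO_DATE('2014-06-01 22:00:00','YYYY-MM-DD HH24:MI:SS')"
        AUXILIARY DESTINATION '/u01/aux';

RMAN stands up a temporary auxiliary instance in that destination, recovers the tablespace there, and transports it back into the target database. The unit of recovery, though, is still the whole tablespace, not an individual schema.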

In the case of our client, it seems that the tables were lost at some time on a Sunday night, just prior to the nightly logical backup (export). The last good logical backup was from Saturday night.

The database is in ARCHIVELOG mode, meaning that RMAN could restore the database to any point in time, including right up to the point of the data loss. However, since the schemas (each of which serves a different customer) all share the same set of tablespaces, this type of recovery would wipe out any data since that point in time—even for the schemas (customers) that were not impacted by the loss.

Because our client’s customers that were not impacted had activity since the data loss event, we had one less tool in our recovery arsenal. If our client’s customer data had been separated into separate tablespaces or databases, we could have recovered data for the customers that suffered the loss without impacting the others at all.

We are now in the process of recovering the lost data from the Saturday logical backups. When that is complete, we will be doing a full RMAN restore to another location, where we will attempt to recover any lost data since the logical backup was taken. This will be a very arduous and time-consuming process.

The moral of the story is to consider recoverability when architecting your Oracle database. If you have users or applications that stand alone and may need to be recovered without impacting others, separate them at least by tablespace; and, if possible, use a separate database. The more you separate, the greater your recoverability options are.

It’s worth noting that the pluggable database option in Oracle 12c can improve recoverability even further, if the reason for not separating your schemas into separate databases was ease of maintenance or resource concerns. With 12c you can create a pluggable database for each of your logically separate applications or users, while keeping the administration centralized and the overhead minimized. Check it out.
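
As a minimal sketch (the names are hypothetical), creating a dedicated pluggable database for one customer under 12c looks something like this:

SQL> CREATE PLUGGABLE DATABASE cust_a_pdb
       ADMIN USER cust_a_admin IDENTIFIED BY a_password
       FILE_NAME_CONVERT = ('/pdbseed/', '/cust_a_pdb/');
SQL> ALTER PLUGGABLE DATABASE cust_a_pdb OPEN;

Each pluggable database can then be backed up and restored independently with RMAN, while the container database is still administered as a single unit.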

If you have had similar difficulties with restoring data due to the database architecture, please share your experiences. To talk over your database architecture considerations, contact Buda Consulting.

Do You Know If Your Data Backup and Recovery Strategy Is Working… Or Not?

Your Oracle databases are the heart of business operations. To mitigate the business impacts of downtime and data loss, every Oracle database must be recoverable. This means every Oracle shop needs a strong backup and recovery strategy—one that takes into account the probabilities of hardware, software, network or procedural failure(s) in your current environment.

Losing database transactions against a customer-facing application could mean thousands of dollars in lost purchase orders and invoices. Do your backups still have you covered? Or are you vulnerable to unacceptable data loss or inability to meet recovery point objectives?

Some of the emerging challenges that can leave your Oracle data unprotected over time include:

  • Rampant data growth
  • Unstructured and/or new data types
  • New and more complex storage landscapes
  • Cloud and virtualization (of storage, applications, servers… you name it)
  • New requirements for 24×7 availability; e.g., e-commerce applications
  • New regulatory mandates
  • Lack of funding or insufficient staff to manage growing backup demands

Without a viable backup strategy, Oracle database recovery is impossible. Likewise, an inadequate recovery plan leaves the viability of backups to chance. Many Oracle shops rely on Oracle’s Recovery Manager (RMAN) utility for backup and recovery operations. But a tool does not a strategy make. To know your data is protected you need a comprehensive backup and recovery strategy—and you need to test it regularly across a range of scenarios.
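
A bare-bones RMAN routine, plus a check that the backups it produces are actually usable, might look like the sketch below. This is only a starting point; a real strategy also covers retention policies, off-site copies and regular, documented recovery drills:

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> LIST BACKUP SUMMARY;
RMAN> RESTORE DATABASE VALIDATE;

The last command reads the backup pieces and verifies they could be restored, without actually writing any data files, which makes it a cheap way to exercise at least part of your recovery plan on a schedule.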

According to Oracle’s recent Oracle Database Management and Data Protection Survey, slow backups and long backup windows, backup management hassles and complex backup/recovery scenarios are commonplace even in large enterprises. A few interesting statistics:

  • Many large organizations are struggling to back up hundreds of siloed databases that add up to petabytes of data daily.
  • Overall, less than 5% of Oracle data is protected in real time, meaning that the remaining 95% or more is vulnerable to data loss (usually all the data generated since the last backup).
  • Speaking to the dynamic nature of backup needs, 45% of companies surveyed had changed their backup tools as a result of growing data volumes.

For many companies, Oracle database backup and recovery requirements are becoming more complex than their current staff can effectively handle. Can you even identify where your data is actually located, let alone how to resurrect it in the event of an operating system crash, DBMS crash, power loss, disk/hardware failure, cyber attack, disaster or (most commonly) human error?

A third-party Oracle DBA expert can support your data protection strategy in multiple ways, including reducing staffing and management costs associated with backup/recovery, augmenting your in-house expertise and resources, helping you deal with changing technology, and helping you review good practices and develop a cost-effective, practical strategy that will meet your business needs. An outsourced Oracle DBA expert can also leverage specialized tools and skills to help you recover corrupt, missing or deleted Oracle data.

To talk over your current backup/recovery process and challenges, and gain insight into how you can benefit from the latest technology and best practices, contact Buda Consulting.

 

4 Key Use Cases for Oracle’s Multitenant Architecture

If you’re thinking of moving to Oracle Database 12c, “the first database designed for the cloud,” one of the most compelling reasons could be the Oracle Multitenant Architecture option. In this revolutionary new architecture, you can “plug” many Oracle databases into a single container database—no application changes required.

Let’s quickly head off any potential confusion around the term “multitenant.” That word has been used for a while in relation to sharing data elements (records) across databases, especially in contexts like Software-as-a-Service (SaaS) delivery. That scenario is now better described as a tenant-striped database. With Oracle’s multitenancy, you can run many databases within one container, with several databases potentially sharing a common set of metadata.

The advantages of Oracle’s multitenant architecture are sweeping, driving economies of scale across both capital and operating expenses. First, plugging multiple databases into a single, multitenant container creates the highest density yet possible, with shared memory and background processes to further enhance hardware utilization. The advantages over the schema-based consolidation possible with Oracle Database 11g are 1) no application changes are required; and 2) pluggable databases are isolated from one another for improved reliability and security.

Next, multitenancy enables rapid provisioning and cloning. Creating, moving and cloning pluggable databases takes just seconds with new SQL commands. Patching and upgrades are also simplified and accelerated—just unplug/plug to an upgraded container! (What will you do with all the time that will save?) The overall theme is “manage many as one” across tasks like backup and recovery. You even get new capabilities in the Resource Manager to optimize allocation of resources among pluggable databases.
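
For instance, moving a pluggable database into an already-upgraded container is roughly this simple (the PDB name and paths are made up, and file-placement clauses such as COPY/NOCOPY and FILE_NAME_CONVERT are omitted for brevity):

SQL> ALTER PLUGGABLE DATABASE sales_pdb CLOSE IMMEDIATE;
SQL> ALTER PLUGGABLE DATABASE sales_pdb UNPLUG INTO '/u01/xml/sales_pdb.xml';

Then, connected to the upgraded container database:

SQL> CREATE PLUGGABLE DATABASE sales_pdb USING '/u01/xml/sales_pdb.xml';
SQL> ALTER PLUGGABLE DATABASE sales_pdb OPEN;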

What are the best use cases for Oracle Multitenant? There are quite a few but these four stand out:

  1. Application development/testing
    Multitenant makes it very quick, simple, safe and efficient for individual engineers to rapidly provision and recycle private copies of a few “master test databases” (see the cloning sketch just after this list). Just the productivity benefits of this one use case might be sufficient to justify implementing multitenancy.
  2. Infrastructure consolidation
    Multitenancy supports the use of fewer, more powerful physical servers.
  3. Delivering and supporting SaaS applications
    Multitenancy is ideal for deploying separate instances of identical applications to individual “tenants.” This model is predicted to be popular among cloud vendors in particular.
  4. Enabling Database-as-a-Service (DBaaS) in a private or hybrid cloud
    Multitenancy includes a built-in provisioning mechanism that makes self-service database provisioning straightforward; e.g., in development and test environments.

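For the first use case, cloning a master test database for an individual developer can be as simple as the following (names are hypothetical, the source PDB is opened read-only for the clone, and Oracle Managed Files is assumed so no file-name conversion clause is needed):

SQL> CREATE PLUGGABLE DATABASE dev_jsmith FROM master_test;
SQL> ALTER PLUGGABLE DATABASE dev_jsmith OPEN;

When the developer is finished, closing the clone and issuing DROP PLUGGABLE DATABASE dev_jsmith INCLUDING DATAFILES recycles the space just as quickly.
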
Can you upgrade to Oracle Database 12c and not deploy the multitenant option? Yes… but why would you want to? You can even dip a toe in the water by plugging just one database into a container, which requires no additional license.

If you’re considering upgrading to Oracle Database 12c and want to talk over the architecture and design considerations, I invite you to contact Buda Consulting for guidance on analyzing your requirements and architecting an optimal solution. 

Get Your Oracle Databases Ready to Go Mobile

One of the big questions on the minds of more and more Oracle database users is “How do we leverage our Oracle investment to enable mobile applications?” Mobility isn’t just about using a tablet or mobile phone. It’s about providing secure, reliable and fast access to backend data and processes (like CRM and ERP)—on-demand. That means anytime, anywhere, using whatever device.

Today’s high-performance networks and advanced mobile platforms make mobility initiatives a possibility for many companies, if not a competitive necessity. It’s a good thing, too, because “doing nothing” is probably the worst mobile strategy a business could have.

Oracle calls today’s new, multi-channel database access environments “the new normal.” And it offers a multi-channel architecture and development platform that enables companies to connect to Oracle databases across Web, mobile and conventional/desktop channels.

For Oracle shops, the technology to enable mobility is the Oracle Mobile Platform, which is part of the Oracle Fusion Middleware product family. Oracle’s mobile solution also includes pre-built mobile apps for all its product lines (Oracle E-Business Suite, JD Edwards, PeopleSoft, etc.) and some Oracle Cloud applications. Not surprisingly, Oracle also has a mobile security offering.

Going mobile doesn’t happen overnight, though it’s getting easier all the time. Beyond strategy, connectivity, development, integration, deployment, synchronization and user experience, there are plenty of challenges in aligning your Oracle databases with mobile applications and devices. One of the biggest issues is certainly performance.

As with any application, database performance problems can wreak havoc with mobile applications. Wireless instabilities and bandwidth limitations, as well as reduced processing power on mobile devices, make mobile application performance a mega-critical concern. What’s more, users are likely to be even less tolerant of slowdowns and failures when accessing data over mobile devices.

Will your database health and performance stand up to the mobile challenge? You need to find that out before your mobile users do. From database design to storage configuration to improper use of indexes, there are many reasons why database responsiveness can be sub-par, and often it takes an expert Oracle DBA to identify and solve the problem(s).

Performance tuning to support mobile access requires a systematic, “personalized” approach that goes well beyond a cookie-cutter “performance report.” You and your mobile developers need to know that the database your mobile users will rely on is optimally configured, indexed and designed for your specific systems, storage, application and process requirements.

For example, is it feasible for you to move performance-critical data to the application tier to improve your mobile app’s response time? Oracle offers a range of options to help SMBs and enterprises optimize mobile application performance and an expert Oracle DBA can help with evaluation and implementation.

Intimately related to database performance is data quality. The less data you have to move, and the more cleanly and efficiently it is structured, the fewer “round trips” the data has to make, and the better off your business and your users are, whether they connect over mobile or conventional networks.

Another concern when “going mobile” is the interfaces to your databases. Whatever the source, data should be cleansed and transformed, as well as properly synchronized and controlled, so that appropriate exception handling and error logging are in place to support solid performance and reliability for mobile applications.

Ready to optimize your database performance to support a successful mobile application strategy and rollout? Contact Buda Consulting to get off to a fast, efficient and effective start.