Best Practices for Error Handling in Database Stored Procedures

I recently performed an assessment of a large set of Oracle stored procedures and batch scripts that a client uses as part of their build and update process. The problem was that the build process was not very stable. During each build, something would go wrong, such as a table not being created or loaded as it should be, and it would take the development team a long time to figure out just what went wrong.

When doing an assessment like this, I look to see how well the code is formatted, structured (modularized), documented, and organized. I also look to see if it is robust (that is, it can gracefully handle failure), and how well the process is logged.

In this case, I found that the processes that were followed were well documented and well modularized. The batch scripts were organized well and used a good naming convention that made it pretty easy to understand the process flow. The PL/SQL stored procedures themselves were quite well written. They were formatted nicely and properly used some of the more sophisticated capabilities of Oracle PL/SQL, such as cursor variables and CURSOR FOR LOOPs when necessary. The developer clearly had a mastery of SQL and of PL/SQL.

But there was one glaring hole in the quality of the work. There was absolutely no exception handling or logging anywhere in the entire process. Not one stored procedure or PL/SQL script checked for errors.

So even though the build system itself was developed by a highly skilled developer, the absence of good exception handling diminished the quality of the whole system. There are two basic exception handling practices that I think belong in all PL/SQL code, and especially in any large or repeating process:

  • Include an exception handler for every logical grouping of PL/SQL statements (BEGIN/END block). This does two things. First, it forces the developer to think about what might go wrong so they can handle it appropriately; this alone improves the quality of the code and the stability of the system. Second, it offers an opportunity to decide at what level each exception should be handled, which is critical to ensuring that the process continues when it should and stops when it should. This is especially important when nesting procedures or PL/SQL scripts.
  • Log everything, especially exceptions. By creating a separate logging procedure that you call at the beginning, at various points within the code, and in exception blocks, you can always know exactly which processes completed before a failure. An added benefit of this approach is that the record of when each process ran helps a great deal with performance tuning, because you can easily tell which processes take the most time. A procedure like this (a sketch follows below) might take the following parameters:
    • The name of the stored procedure it is being called from
    • An indicator of the position within that procedure
    • A text string containing a concatenation of the values of key variables used in the procedure
    • A text string describing the error, if called from within an exception block

These parameter values are simply written to a table, and optionally echoed to the console via DBMS_OUTPUT (or written to a text file via UTL_FILE). This helps enormously in troubleshooting later, and it is very easy to do. Write the logging procedure once and call it from everywhere.
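To make this concrete, here is a minimal sketch of such a logging procedure. The table name (process_log), procedure name (log_step), and parameter names are hypothetical; adapt them to your own conventions. The AUTONOMOUS_TRANSACTION pragma ensures the log entry is committed even if the calling transaction rolls back.

CREATE TABLE process_log (
    log_time    TIMESTAMP DEFAULT SYSTIMESTAMP,
    proc_name   VARCHAR2(128),    -- which procedure logged this entry
    position    VARCHAR2(128),    -- where in that procedure we are
    key_values  VARCHAR2(4000),   -- concatenated values of key variables
    error_text  VARCHAR2(4000)    -- populated only from exception blocks
);

CREATE OR REPLACE PROCEDURE log_step (
    p_proc_name  IN VARCHAR2,
    p_position   IN VARCHAR2,
    p_key_values IN VARCHAR2 DEFAULT NULL,
    p_error_text IN VARCHAR2 DEFAULT NULL
) IS
    PRAGMA AUTONOMOUS_TRANSACTION;  -- the log entry survives a rollback in the caller
BEGIN
    INSERT INTO process_log (proc_name, position, key_values, error_text)
    VALUES (p_proc_name, p_position, p_key_values, p_error_text);
    COMMIT;
    DBMS_OUTPUT.PUT_LINE(p_proc_name || ' @ ' || p_position);
END log_step;
/

-- Typical usage inside a procedure:
BEGIN
    log_step('LOAD_CUSTOMERS', 'start');
    -- ... do the work ...
    log_step('LOAD_CUSTOMERS', 'after load', 'rows=' || TO_CHAR(SQL%ROWCOUNT));
EXCEPTION
    WHEN OTHERS THEN
        log_step('LOAD_CUSTOMERS', 'exception', NULL, SQLERRM);
        RAISE;  -- re-raise so the caller can decide whether to continue
END;
/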

There are other steps that you can take to enhance your exception handling further. But the simple steps I just described will significantly improve the stability of your code and your ability to find and resolve any errors that do arise.

If you have used other strategies to create robust, reliable code, please comment and share your ideas.


Ever Dropped an Oracle Table and Wish You Hadn’t?

If you’ve ever dreamed of going to a recycle bin like you can on Windows, and “undropping” an Oracle database table… Oracle has made your dream come true!

Introduced with Oracle 10g, Oracle’s recycle bin works a lot like what you’re used to in Windows. When this feature is enabled (the default setting), dropped tables don’t actually get deleted. Instead, they “land” in the recycle bin and can be restored.

The recycle bin is actually a data dictionary table containing data about dropped objects. What actually happens when recycling is enabled is that Oracle renames each dropped table and associated objects (indexes, LOB segments, triggers, etc.) with a system-generated name that starts with “BIN$.” The table data is still available and you can query it like any other table.

To “undrop” a dropped table that’s “in the recycle bin,” you perform an operation known as a “flashback drop.” The command syntax is: FLASHBACK TABLE <name> TO BEFORE DROP. This command just renames the BIN$<name> table back to its original name. (You’ll find an excellent overview of Oracle 10g’s Flashback features here.)
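A quick illustrative session (the emp_copy table name is hypothetical):

SQL> DROP TABLE emp_copy;
SQL> SELECT object_name, original_name FROM recyclebin;
     -- shows a system-generated BIN$... name paired with EMP_COPY
SQL> FLASHBACK TABLE emp_copy TO BEFORE DROP;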

The downside of the recycle bin is that dropped tables are really only renamed. Their table segments are still taking up space in your tablespace, which still counts against your user tablespace quotas. To recover the space associated with dropped tables you need to explicitly “purge” unwanted tables and associated objects from the recycle bin.

Fortunately, purging is pretty easy and flexible. If you have the SYSDBA privilege, you can purge everything from all the recycle bins with PURGE DBA_RECYCLEBIN, or purge just your own recycle bin with PURGE RECYCLEBIN. You can even purge “recycled” objects by tablespace, or by user within a tablespace, with PURGE TABLESPACE <tablespace> or PURGE TABLESPACE <tablespace> USER <user>. Users can see in the recycle bin only those objects that they themselves dropped.

It’s normal to end up with multiple versions of a table in the recycle bin. In these situations, FLASHBACK TABLE always restores the most recently dropped version. To restore an earlier version, you can refer to it directly by its unique, BIN$-prepended name. Alternatively, you can use FLASHBACK multiple times until you restore the version you want.
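For example, if the same table was dropped twice, restoring the older copy might look like this (the BIN$ name below is made up; use the ones your recycle bin actually shows, and note that the quoted name is case-sensitive):

SQL> SELECT object_name, original_name, droptime FROM recyclebin;
     -- two BIN$... rows for EMP_COPY, with different drop times
SQL> FLASHBACK TABLE "BIN$4kGPbo8aSJ+1F3uYGH2Q1A==$0" TO BEFORE DROP RENAME TO emp_copy_old;

The optional RENAME TO clause is handy here, since it avoids a name collision if a current version of the table already exists.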

The RECYCLEBIN initialization parameter turns the recycle bin feature on or off at the system or session level. Turning recycling off doesn’t prohibit you from restoring objects that were in the recycle bin before you disabled it.

If you want to keep the recycle feature enabled but bypass it when dropping specific tables, just add the PURGE clause to the DROP TABLE statement; e.g.:

SQL> DROP TABLE TABLE_XYZ PURGE;

If you’re not already using the recycle bin feature, why not turn it on and get it working for you today? There’s really nothing to lose—literally!

For expert advice on how to fine-tune use of Oracle’s recycle bin for your environment, schedule a free consultation with Buda Consulting.


Architecting to Maximize Recovery Options in Oracle

I recently received a frantic call from a client that believed they had been hacked and needed to recover data quickly. They said that data belonging to two or more of their customers had been lost.

Our client functions essentially as an application service provider (ASP). Their customers’ data lives in an Oracle database that our client manages, organized so that each customer’s application is served by a separate schema, with all the schemas sharing one database.

We have advised this client on numerous occasions to separate each of their customers’ data into separate databases, or at least separate tablespaces. This is a good idea for several reasons, one of which is recoverability. Unfortunately, they resisted our suggestions and today are probably regretting that decision.

Oracle Recovery Manager (RMAN) offers a few different options for recovery. You can recover an entire database, an individual tablespace or an individual data file. But you cannot recover an individual schema (table owner) and its objects unless they are in their own tablespace.
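A hedged sketch of that granularity in RMAN (the tablespace name and file number are illustrative):

RMAN> RESTORE DATABASE;                  -- the whole database
RMAN> RECOVER DATABASE;

RMAN> RESTORE TABLESPACE cust_a_data;    -- a single tablespace
RMAN> RECOVER TABLESPACE cust_a_data;

RMAN> RESTORE DATAFILE 7;                -- a single data file
RMAN> RECOVER DATAFILE 7;

There is no RESTORE SCHEMA; the closest RMAN gets is tablespace-level recovery, which only isolates a schema if that schema has a tablespace to itself.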

In the case of our client, it seems that the tables were lost at some time on a Sunday night, just prior to the nightly logical backup (export). The last good logical backup was from Saturday night.

The database is in ARCHIVELOG mode, meaning that RMAN could restore the database to any point in time, including right up to the point of the data loss. However, since the schemas (each of which serves a different customer) all share the same set of tablespaces, this type of recovery would wipe out any data since that point in time—even for the schemas (customers) that were not impacted by the loss.

Because our client’s customers that were not impacted had activity since the data loss event, we had one less tool in our recovery arsenal. If our client’s customer data had been separated into separate tablespaces or databases, we could have recovered data for the customers that suffered loss without impacting the others at all.

We are now in the process of recovering the lost data from the Saturday logical backups. When that is complete, we will be doing a full RMAN restore to another location, where we will attempt to recover any lost data since the logical backup was taken. This will be a very arduous and time-consuming process.

The moral of the story is to consider recoverability when architecting your Oracle database. If you have users or applications that stand alone and may need to be recovered without impacting others, separate them at least by tablespace; and, if possible, use a separate database. The more you separate, the greater your recoverability options are.
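A minimal sketch of that separation, with hypothetical names: give each customer its own tablespace, and confine the customer’s schema to it via its default tablespace and quota.

SQL> CREATE TABLESPACE cust_a_data
       DATAFILE '/u02/oradata/PROD/cust_a_data01.dbf' SIZE 500M AUTOEXTEND ON;

SQL> CREATE USER cust_a IDENTIFIED BY a_strong_password
       DEFAULT TABLESPACE cust_a_data
       QUOTA UNLIMITED ON cust_a_data;

SQL> GRANT CREATE SESSION, CREATE TABLE TO cust_a;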

It’s worth noting that the pluggable database option in Oracle 12c can extend recoverability even further, particularly if the reason for not separating your schemas into separate databases was ease of maintenance or resource concerns. With 12c you can create a pluggable database for each of your logically separate applications or users, while keeping the administration centralized and the overhead minimized. Check it out.
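A hedged sketch of the 12c approach (names and paths are hypothetical):

SQL> CREATE PLUGGABLE DATABASE cust_a
       ADMIN USER cust_a_admin IDENTIFIED BY a_strong_password
       FILE_NAME_CONVERT = ('/u02/oradata/CDB1/pdbseed/',
                            '/u02/oradata/CDB1/cust_a/');

SQL> ALTER PLUGGABLE DATABASE cust_a OPEN;

RMAN can then back up and restore each pluggable database individually (for example, BACKUP PLUGGABLE DATABASE cust_a), which is exactly the per-customer recoverability our client was missing.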

If you have had similar difficulties with restoring data due to the database architecture, please share your experiences. To talk over your database architecture considerations, contact Buda Consulting.

Do You Know If Your Data Backup and Recovery Strategy Is Working… Or Not?

Your Oracle databases are the heart of business operations. To mitigate the business impacts of downtime and data loss, every Oracle database must be recoverable. This means every Oracle shop needs a strong backup and recovery strategy—one that takes into account the probabilities of hardware, software, network or procedural failure(s) in your current environment.

Losing database transactions against a customer-facing application could mean thousands of dollars in lost purchase orders and invoices. Do your backups still have you covered? Or are you vulnerable to unacceptable data loss or inability to meet recovery point objectives?

Some of the emerging challenges that can leave your Oracle data unprotected over time include:

  • Rampant data growth
  • Unstructured and/or new data types
  • New and more complex storage landscapes
  • Cloud and virtualization (of storage, applications, servers… you name it)
  • New requirements for 24×7 availability; e.g., e-commerce applications
  • New regulatory mandates
  • Lack of funding or insufficient staff to manage growing backup demands

Without a viable backup strategy, Oracle database recovery is impossible. Likewise, an inadequate recovery plan leaves the viability of backups to chance. Many Oracle shops rely on Oracle’s Recovery Manager (RMAN) utility for backup and recovery operations. But a tool does not a strategy make. To know your data is protected you need a comprehensive backup and recovery strategy—and you need to test it regularly across a range of scenarios.
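Testing need not always mean a full-blown restore. As a hedged example, RMAN can verify backups without actually restoring anything:

RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL;   -- checks datafiles and archived logs for corruption; writes no backup
RMAN> RESTORE DATABASE VALIDATE;                 -- confirms the needed backups exist and are readable

These checks are no substitute for periodically restoring to a scratch environment, but they catch many problems early.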

According to Oracle’s recent Oracle Database Management and Data Protection Survey, slow backups and long backup windows, backup management hassles, and complex backup/recovery scenarios are commonplace even in large enterprises. A few interesting statistics:

  • Many large organizations are struggling to back up hundreds of siloed databases adding up to petabytes of data daily.
  • Overall, less than 5% of Oracle data is protected in real time, meaning that the other 95% is vulnerable to data loss (usually all the data generated since the last backup).
  • Speaking to the dynamic nature of backup needs, 45% of companies surveyed had changed their backup tools as a result of growing data volumes.

For many companies, Oracle database backup and recovery requirements are becoming more complex than their current staff can effectively handle. Can you even identify where your data is actually located, let alone how to resurrect it in the event of an operating system crash, DBMS crash, power loss, disk/hardware failure, cyber attack, disaster or (most commonly) human error?

A third-party Oracle DBA expert can support your data protection strategy in multiple ways, including reducing staffing and management costs associated with backup/recovery, augmenting your in-house expertise and resources, helping you deal with changing technology, and helping you review good practices and develop a cost-effective, practical strategy that will meet your business needs. An outsourced Oracle DBA expert can also leverage specialized tools and skills to help you recover corrupt, missing or deleted Oracle data.

To talk over your current backup/recovery process and challenges, and gain insight into how you can benefit from the latest technology and best practices, contact Buda Consulting.


MySQL Parser Deficiencies

MySQL is a very popular database for many applications, but its parser has some important deficiencies that developers need to be aware of. The parser is inconsistent and not robust in the way it enforces statement syntax, which can lead to ambiguity and to application defects that are very difficult to find. I include one example below and link to a blog article that discusses another.

Inadequate GROUP BY Parsing

MySQL’s GROUP BY clause does not require that all non-aggregated elements in the select list be present in the GROUP BY clause. This creates an ambiguity: if a given element in the select list is neither present in the GROUP BY clause nor aggregated in the select list, an arbitrary value is chosen for it (it actually seems to be the first value, but I don’t know if that is always the case). Consider the example table below, which holds the number of games each of a set of players has played in each of four seasons.
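The original post showed the table as a screenshot; here is a hypothetical reconstruction (the table and column names are my own):

mysql> CREATE TABLE player_games (
         player_name   VARCHAR(30),
         season_number INT,
         games_played  INT
       );

mysql> INSERT INTO player_games VALUES
         ('Alice', 1, 100), ('Alice', 2, 100), ('Alice', 3, 100), ('Alice', 4, 100),
         ('Bob',   1, 100), ('Bob',   2, 100), ('Bob',   3, 100), ('Bob',   4, 100);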

We can see that each player has played 100 games in each of the four seasons.

Now consider this simple query to get the sum of all the games each player has played across all seasons:
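Against the hypothetical table above, the query might be:

mysql> SELECT player_name, SUM(games_played) AS total_games
       FROM player_games
       GROUP BY player_name;
       -- Alice | 400
       -- Bob   | 400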

As expected, we can see that each player has played 400 games.

But watch what happens if a novice developer places the season_number field in the select list without also placing it in the GROUP BY clause:
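Again using the hypothetical table:

mysql> SELECT player_name, season_number, SUM(games_played) AS total_games
       FROM player_games
       GROUP BY player_name;
       -- Alice | 1 | 400
       -- Bob   | 1 | 400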

If the developer does not realize the error, they will think that each player played 400 games in season 1!

The parser should prevent a query like this from executing, since season_number appears in the select list while being neither aggregated in some form nor present in the GROUP BY clause. (More recent MySQL releases agree: as of MySQL 5.7, the ONLY_FULL_GROUP_BY SQL mode is enabled by default and rejects such queries.)

This type of error is very difficult to find because the resulting data looks plausible unless you really know the data.

Function Syntax Inconsistency

The next example relates to the way the parser enforces one statement syntax versus another, specifically its tolerance of whitespace (line breaks) when calling the SUM function versus the FORMAT function. This inconsistency concerns me less than the GROUP BY problem, because it at least produces an error that is visible to the developer, but it does illustrate a lack of consistency and robustness in the parser.

As MacLochlainn’s Weblog describes, the SUM function requires that the opening parenthesis be on the same line as the SUM keyword, while the FORMAT function has no such restriction. See that blog article for more details.
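Based on the behavior described in that post, a sketch of the inconsistency (the exact error text varies by version, and the IGNORE_SPACE SQL mode changes this behavior):

mysql> SELECT SUM
    ->   (games_played)
    ->   FROM player_games;
       -- fails with an error; the parser no longer recognizes SUM as the aggregate function

mysql> SELECT FORMAT
    ->   (123456.789, 2);
       -- works, returning '123,456.79'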

If you have found other issues that might help others avoid unexpected problems in their MySQL applications, please share them in the comments.

Visit www.budaconsulting.com for more information about Buda Consulting.