Database Patch News — November 2019 (Issue 1)

Welcome to Database Patch News, Buda Consulting’s monthly newsletter of current patch information for Oracle and Microsoft SQL Server. Here you’ll find information on available patches—including security patches—and desupported versions made available during the past month.

Why should you care about patching vulnerabilities and bugs? Two big reasons:

  1. Unpatched systems are a top cyber attack target. Patch releases literally advertise vulnerabilities to the hacker community. The longer you wait to patch, the greater your security risk.
  2. Along with running a supported database version, applying the latest patches ensures that you can get support from the vendor in case of an issue. Patching also helps eliminate downtime and lost productivity associated with bugs.

Here are the latest patch updates for Oracle and SQL Server:

Oracle Patches:

Oct 15 2019 Quarterly Patch Updates:

19c – Release Update 19.5 available

18c – Release Update 18.8 available

12.2.0.1 – Oct 2019 Release Update 12.2.0.1.191015 available.
Regular support ends March 2023 and extended support ends March 2026.

12.1.0.2 – Currently in extended support.
The last freely available patch for 12.1.0.2 was July 2019. The Oct 15 2019 Patch Set Update (PSU 12.1.0.2.191015) is available but may require an extended support purchase to access. Patches will be released for this version until July 2021.

11.2.0.4 – Entered extended support in December 2017
The last freely available patch was October 2018 for 11.2.0.4. PSU 11.2.0.4.191015 is available but may require clients to purchase extended support to access it.
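To confirm which Release Updates or PSUs have actually been applied to a given database, you can query the patch inventory from inside the database. A minimal sketch (the dba_registry_sqlpatch view exists in 12c and later; the available columns vary slightly between releases):

```sql
-- Lists patches recorded in the database's patch inventory (12c and later).
-- Column availability varies slightly between releases.
SELECT patch_id, description, action, status, action_time
FROM   dba_registry_sqlpatch
ORDER  BY action_time;
```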

SQL Server Patches:
SQL Server 2017 incremental servicing model (ISM)
CU17 (Latest build)—Released October 08, 2019

SQL Server 2016 Service Pack 2
Release date: April 24, 2018

SQL Server 2014 Service Pack 3 Cumulative update 4
Release date: July 29, 2019

SQL Server 2014 Service Pack 2 Cumulative update 18
Release date: July 29, 2019
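To check where a given instance stands against these builds, SERVERPROPERTY reports the current build and servicing level directly:

```sql
-- Reports the current build, service pack level, and (on SQL Server 2017
-- and later) the installed cumulative update level.
SELECT SERVERPROPERTY('ProductVersion')     AS build,
       SERVERPROPERTY('ProductLevel')       AS service_pack_level,
       SERVERPROPERTY('ProductUpdateLevel') AS cumulative_update_level;
```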

Time to Validate Your Database Backup

Another new client, another bad backup

I originally wrote this post in July, but I am updating it today because it is so important. Yesterday we did an initial Oracle Database Health Check for another new client and found yet again that their backups are useless. This time the client was taking backups regularly, but was simply copying the database files without preparing the database first. Unfortunately, this renders the backup useless.
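For context: an open Oracle database's datafiles are constantly changing, so an OS-level copy taken without preparation is inconsistent and generally unrecoverable. Two sketches of doing it properly (paths and specifics are illustrative, and both approaches assume ARCHIVELOG mode):

```sql
-- Preferred: let RMAN take a consistent backup of the open database.
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

-- Alternative user-managed hot backup: tell Oracle before and after
-- copying the datafiles at the operating system level.
SQL> ALTER DATABASE BEGIN BACKUP;
--   ... copy the datafiles ...
SQL> ALTER DATABASE END BACKUP;
```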

Here is the original post:

One of the first things we do when engaging a new client for Oracle or SQL Server Database Administration support is to inventory their current databases and validate that a good backup and recovery scheme is in place. The vast majority of the time, we find that either no backups are being taken at all, or that not all of the data and configuration files are being backed up properly, even though the client believes they are.

Often, a proper backup was in place at some point in the past, but the configuration of the system changed over time, rendering the backup useless. Typical examples are disk reconfigurations and newly added data files: changes made at the operating system level and configured properly in the database, but never reflected in the backup configuration.

It is a simple oversight that can have catastrophic effects. We recently spent over three months helping a new client recover many years' worth of data that they thought they were backing up properly, but they were wrong!

A regularly scheduled review of the backup scheme and a test recovery is the best way to ensure that your Oracle or SQL Server database is being properly backed up.
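Both platforms provide commands that test backups without actually restoring them, which makes them easy to fold into a scheduled review; a full test restore remains the strongest check. A minimal sketch (the backup file path is illustrative):

```sql
-- Oracle: verify that the backups needed to restore the database exist
-- and are readable (run from RMAN; no data is actually restored).
RMAN> RESTORE DATABASE VALIDATE;

-- SQL Server: check that a backup set is complete and readable.
RESTORE VERIFYONLY FROM DISK = 'D:\backups\mydb_full.bak';
```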

Contact Buda Consulting today for a comprehensive Disaster Recovery review of your Oracle or SQL Server Databases.

Using the Data Dictionary to find hidden data in SQL Server

A client asked me recently how he could find a string in his SQL Server database without knowing what table or column it was in. The string was a translation of a code that appeared on one of the UI screens. This was a packaged CRM database and he had no documentation on the schema. With hundreds of tables whose names were not obvious, he had no idea where the translation might be kept, but wanted to do some reporting using the translations. It was like finding a needle in a haystack!

To help, I wrote a utility to find the data for him. Using the data dictionary views and a little dynamic SQL, I created a script that searches for a string in every column of every table in the database.

We can do this by using the information_schema.columns view to generate statements that insert search results into a results table. We want to record the table, the column, and the full contents of any column that contains the search string, in order to provide context for the results.

There are some complications to address. First, since we want to do a LIKE comparison against each field, we must restrict the search to char and varchar columns, because LIKE cannot be used against xml and certain other datatypes. That restriction works in this case because I was searching for a string that was very unlikely to be embedded in an xml column. Second, to prevent errors caused by spaces, hyphens, or other special characters in table or column names, we must surround the object names with brackets; this is always a good practice when building dynamic SQL for SQL Server.

Since this utility uses a LIKE comparison with a wildcard before and after the search string, indexes will not be used, so performance will be an issue. The utility is best run during non-production hours and may take a long time to complete. Removing the leading wildcard allows indexes on the searched columns to be used, but then only strings that start at the beginning of a column value will be found.

We now have a nice utility that gives a report of all the places where the string lives. It can easily be extended to handle numbers, dates, and other data types as necessary. This script works with SQL Server, but similar scripts can be created for any major database with a user-accessible data dictionary.

Note that this procedure does not automatically delete the prior contents of the search_findings table. You may wish to add that to the script if you don’t want the results to be cumulative.

The instructions follow.

1. Start by downloading find-it

2. Create the table that will hold the search results using create_search_results_table.sql

3. Create the find-it procedure using create_find-it_procedure.sql

4. Issue the search with the following command:

exec find_it 'search string'
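For readers who want to see the shape of the approach, here is a minimal sketch of the steps above. The table and procedure names follow the post, but the downloadable scripts may differ in detail:

```sql
-- Sketch only: builds one INSERT ... SELECT per char/varchar column from
-- the data dictionary and runs each with dynamic SQL.
CREATE TABLE search_findings (
    table_name   sysname,
    column_name  sysname,
    column_value varchar(max)
);
GO

CREATE PROCEDURE find_it @search_string varchar(200)
AS
BEGIN
    DECLARE @sql nvarchar(max);

    DECLARE col_cursor CURSOR FOR
        SELECT 'INSERT INTO search_findings (table_name, column_name, column_value) '
             + 'SELECT ''' + TABLE_NAME + ''', ''' + COLUMN_NAME + ''', '
             + '[' + COLUMN_NAME + '] '
             + 'FROM [' + TABLE_SCHEMA + '].[' + TABLE_NAME + '] '
             + 'WHERE [' + COLUMN_NAME + '] LIKE ''%' + @search_string + '%'''
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE DATA_TYPE IN ('char', 'varchar');  -- LIKE fails on xml and some other types

    OPEN col_cursor;
    FETCH NEXT FROM col_cursor INTO @sql;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC sp_executesql @sql;
        FETCH NEXT FROM col_cursor INTO @sql;
    END
    CLOSE col_cursor;
    DEALLOCATE col_cursor;
END
```

Note the brackets around schema, table, and column names, as discussed above, and the leading and trailing wildcards that make the search exhaustive but unindexed.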

Why I Avoid Database Hints: a follow-up

In an earlier blog post about database hints, I gave a number of reasons why I don't like to use hints in SQL queries. Today I want to follow up and share one situation where I think hints are OK, even necessary.

As described very well by Andrew Kelly in an article in SQL Server Magazine (InstantDoc 129916), hints are sometimes necessary when we wish to minimize logging during bulk loading operations. In particular, when issuing a bulk insert, the TABLOCK hint is necessary to reduce logging to the minimum amount, even when the recovery model is set to simple.
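As a small sketch of that scenario (table and file names are illustrative; minimal logging also assumes the simple or bulk-logged recovery model and, for a heap target, the TABLOCK hint):

```sql
-- Bulk load a flat file with minimal logging.
BULK INSERT dbo.sales_staging
FROM 'D:\loads\sales.dat'
WITH (TABLOCK);

-- The same hint applies to INSERT ... SELECT style bulk loads:
INSERT INTO dbo.sales_staging WITH (TABLOCK)
SELECT * FROM dbo.sales_source;
```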

I still feel strongly that we should avoid optimizer hints in select statements embedded in applications, for the reasons I mention in my original post, but Andrew's article describes one situation where I think they are acceptable.

If you have comments about hints that you would like to share, please reply.

Bridge the Gap Between Database Security and Application Security

Strong Database Security

Today’s database systems have sophisticated security mechanisms designed to protect data and to track who views and modifies it. For example, Oracle has database roles that govern which database objects a user can see and edit. Oracle also has auditing features that let us track user activity. SQL Server has similar security features. When implemented properly, these features can be very effective at helping us secure our data.
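As a small illustration (the user, role, and object names are hypothetical), granting object access through a role and auditing activity on a table in Oracle might look like:

```sql
-- A role that governs which objects a user can see and edit.
CREATE ROLE sales_reader;
GRANT SELECT ON sales.orders TO sales_reader;
GRANT sales_reader TO jsmith;

-- Traditional Oracle auditing: track who reads or changes the table.
AUDIT SELECT, UPDATE, DELETE ON sales.orders BY ACCESS;
```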

Generic Application Logins

Unfortunately, the design of many applications, especially web applications, results in a circumvention of these sophisticated security mechanisms. Many applications are designed to use just one highly privileged login to the database for all users. In these scenarios, users log in to the application and the application controls the security from that point on.

Application security schemes generally allow the administrator to specify which functions a user can access or execute, instead of which database objects they can see or edit.  Access to these functions is generally provided by means of a role or profile.

This arrangement makes it very difficult to know which users actually have access to a specific database object and it makes it even more difficult to track such activity.

Bridge the Gap

When applications are designed in this manner, it is critical to ensure that we understand what objects can be accessed by users in order to protect sensitive information. This involves a number of things:

  1. User Rights Data: A list documenting which roles or profiles each application user has been assigned. Ideally, this information would be kept in a table that can be queried.
  2. Role or Profile Rights Data: A list documenting which functions each application role or profile has access to. Ideally, this information would also be kept in a queryable table.
  3. System Function Metadata: A list documenting each function in the application, which database objects that function provides access to, and the type of access (update, delete, etc.). Ideally, this information would be kept in a metadata table that can be queried.
  4. A cross reference that combines the above three sets of data and maps each application user to the database objects they can access, as well as what level of access they have to the object.
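If the three data sets above live in queryable tables, the cross reference reduces to a join. A sketch with entirely hypothetical table and column names; a real application's schema will differ:

```sql
-- Maps each application user to the database objects they can reach,
-- and the level of access. All names here are hypothetical.
SELECT u.user_name,
       fo.object_name,
       fo.access_type               -- e.g. select, update, delete
FROM   app_users            u
JOIN   app_user_roles       ur ON ur.user_id     = u.user_id
JOIN   app_role_functions   rf ON rf.role_id     = ur.role_id
JOIN   app_function_objects fo ON fo.function_id = rf.function_id
ORDER  BY u.user_name, fo.object_name;
```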

For packaged applications, most of this information is likely available in tables in the database, but the challenge may be finding out where it is kept. For applications that are developed in-house, it is important to document the location of this information so that it can be used for auditing purposes.

Make it Accessible

Best practice is to develop an application that can access this detailed user privilege information and provide easy reporting and query functionality. This is the cross-reference data that combines the rights data and the system function metadata described above. Making this information easily available (and, of course, securing it) can help bridge the gap between database and application security.

How has your organization dealt with this issue? Let us know what methods you have used and what areas you found challenging.

Contact Buda Consulting to see how we can help you design an effective strategy for securing your data.