The Ultimate Oracle Database Security Assessment Checklist for 2021

They are two simple words, but they are two of the most feared words in business: Data Breach! Your data is your business. When companies lose their data, they lose trust and the ability to conduct much of their business.

The problem is many companies spend their time focused on network security instead of database security. Your network is important and should be secured, but your data is the lifeblood of your business. To help you focus on the safety of your database, here is an Oracle database security assessment checklist for 2021. These are some of the best practices and things you can do to secure and protect your data.

Ask Key Database Questions

When many people think about security, it is usually in a general way. They want security, but don’t really define what security looks like. Here are some questions to help you focus your thoughts on your database security.

Are You Using Built-in Security Features?

Your Oracle database has many security features built-in. These can be the first line of defense for your entire database. Many of these features are free and don’t require subscriptions, but are part of your database package.

Do You Have a Current User List?

A database should have a current list of its users and their privileges, with special attention to privileged and over-privileged accounts. This list should show who can do what with the database, and it must stay current as a level of protection and accountability for your company.
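
In Oracle, a hedged starting point for building that list is the data dictionary. The role names below are Oracle's built-ins; which roles count as "privileged" in your environment is a judgment call:

```sql
-- Who holds powerful roles? (Oracle data dictionary)
SELECT grantee, granted_role, admin_option
FROM   dba_role_privs
WHERE  granted_role IN ('DBA', 'IMP_FULL_DATABASE', 'EXP_FULL_DATABASE')
ORDER  BY grantee;

-- Who holds system privileges granted directly, outside any role?
SELECT grantee, privilege
FROM   dba_sys_privs
WHERE  grantee NOT IN ('SYS', 'SYSTEM')
ORDER  BY grantee, privilege;
```

Queries like these, run on a schedule, make it much harder for the user list to silently drift out of date.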

Who Is Overseeing Security Fixes?

Oracle often releases security patches and fixes to protect your data. With the speed of business today, these can be overlooked. You should have someone who makes sure these fixes are implemented immediately. 

Are You Having Regular Database Audits?

Database auditing is how administrators review the actions of their users. They do this to see who is accessing the database. This helps them make sure that only the people who are supposed to access the database are doing it. 
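
With Oracle's unified auditing (12c and later), a minimal logon audit can be sketched as follows; the policy name here is hypothetical:

```sql
-- Audit all logons (Oracle unified auditing, 12c+)
CREATE AUDIT POLICY logon_audit_policy ACTIONS LOGON;
AUDIT POLICY logon_audit_policy;

-- Review who has actually been connecting
SELECT dbusername, event_timestamp, action_name
FROM   unified_audit_trail
WHERE  action_name = 'LOGON'
ORDER  BY event_timestamp DESC;
```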

What Is Your Password Policy?

Passwords have to be actively maintained, or they can become an easy entryway into databases. You must make sure that there aren’t any default or non-expiring passwords with access to the system.
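
Oracle's data dictionary can surface both problems directly. A sketch, to be run as a DBA (this does not account for profiles that inherit limits from the DEFAULT profile):

```sql
-- Accounts still using an Oracle-supplied default password
SELECT username FROM dba_users_with_defpwd;

-- Open accounts whose profile never expires the password
SELECT u.username, u.profile
FROM   dba_users    u
JOIN   dba_profiles p
       ON  p.profile       = u.profile
       AND p.resource_name = 'PASSWORD_LIFE_TIME'
WHERE  u.account_status = 'OPEN'
AND    p.limit = 'UNLIMITED';
```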

Download the Oracle Database Security Assessment Tool

To help their users have safer databases, Oracle developed the Database Security Assessment Tool (DBSAT). The DBSAT is a free tool that Oracle users can implement, and it acts as a database security guide.

DBSAT will scan the database and give you a profile in different formats that helps you see the state of your security. The formats can be HTML, XLS, text, or JSON. This makes the findings quick to review and easy to act on.

The tool will show you some of the security risks that you currently have in the system. It will then recommend relevant products and features of the system you can use to help stop the risks.

The DBSAT focuses on three specific core areas with its security assessment.

1. The General Security Configuration of Your Database 

The DBSAT scan will check that you are minimizing database risk. It will look for missing security patches that you can implement. It will also check whether you are using encryption and auditing within your system.

2. Users and Their Entitlements

One of the main features of the DBSAT is its focus on your users and how they access your system. It will identify your privileged users, show you what areas they can access, and flag any areas they are accessing but shouldn't be.

3. Identifying Sensitive Data in Your Database

The DBSAT will help you stay in compliance with regulations by focusing on your sensitive data. It will help you identify your sensitive data and recognize how it should be treated. This also helps you develop healthy auditing systems.

How You Can Use the DBSAT 

DBSAT can help you with your security practices by giving you the information you need to implement and enforce strong security for your database. With the many reports it can generate, your security doesn’t have to be forgotten.

DBSAT helps you understand your user accounts, along with the roles and privileges of each user. This helps you find and fix short-term risks. Plus, it can give you enough information to have a long-term security strategy.

Get a Database Health Check

Just like a person should have a check-up every year, you want to make sure your database gets a health check. Have someone from the outside come in and review your database configuration and policies.

They can review your parameters, database maintenance procedures, alert logs, and trace files. They can also help with many other things, like checking for corrupt data blocks and identifying invalid objects.

Look for a health check that provides a focused report so you can take action on what is needed. The report should show you possible problem areas and contain recommendations to address the problems.

Your Oracle Database Partners

Reading through this Oracle database security assessment shows that there is a lot to think about when it comes to database safety. Too often IT staff are so focused on protecting the network that the database gets forgotten.

You want to find people who are database experts and make your database their own. Buda Consulting is a group of database experts who listen to your needs and deliver on their promises.

Our passion is protecting your database and helping it function smoothly. We handle all aspects of database creation and management. Plus, we can show you how to extract valuable insights from your database.

Contact us for a free 15-minute call and let us show you how we can be your database experts.

Thoughts On Deleting Data – Considerations & Best Practices

In a recent blog post on database maintenance tips, I mentioned that one important facet of cleaning up the database is to remove records that we no longer need — those that don’t contribute value to the applications and users who use the database. This is an important maintenance process, but there are some equally important considerations when thinking about deleting any data from the database. 

These considerations are driven by a few key questions that we need to ask:

Why are we deleting the data? 

Performance? Disk space cost? Security? Organization? Simple lack of value? Let's look at each of these reasons and consider alternatives to deleting the data, in order to avoid losing any untapped future value stored in that data.

Performance

If we are thinking of removing data as a way to improve performance, can we instead use partitioning, archiving or indexing to achieve adequate performance while preserving the data? Can we tune the most expensive queries to reduce load on the system?  Can we increase resources on the server or move the database to a more powerful server?

Disk space cost

If our purpose is to reduce the cost of disk space to store the data, can we partition the data and archive the older or less-used data to lower-cost storage? Is compression an option on our hardware platform (e.g., Oracle's Exadata platform)? Can we remove some indexes that are taking up space but not adding a performance boost?

Security

If we are seeking to remove the data to improve security by reducing the data footprint, can we leave the data there and achieve the level of security that we need by using an encryption scheme, Virtual Private Database (Oracle), or another tighter access control scheme like Label Security?

Organization (reducing clutter) 

If we are removing the data because we don't want to see it (the reports, queries, and dropdowns in the application screens have become unwieldy), can we tag records as deleted instead of actually removing them and filter queries based on those tags? Can we create views for each table that filter these records, and use synonyms to redirect applications to those views to minimize application changes?
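
The tag-and-filter approach can be sketched in Oracle syntax like this; all table and column names here are hypothetical:

```sql
-- Keep the data, hide it from the application.
ALTER TABLE customers RENAME TO customers_all;
ALTER TABLE customers_all ADD (deleted_flag CHAR(1) DEFAULT 'N' NOT NULL);

-- Tag instead of delete
UPDATE customers_all
SET    deleted_flag = 'Y'
WHERE  last_order_date < DATE '2015-01-01';

-- The application still queries CUSTOMERS, now a filtered view
CREATE OR REPLACE VIEW customers AS
  SELECT customer_id, customer_name, last_order_date
  FROM   customers_all
  WHERE  deleted_flag = 'N';
```

A synonym pointing the application at the filtered view works equally well when renaming the table is not an option.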

Lack of value

If we are certain that the data really has no value, let's get rid of it. This is the data that just sucks energy from the system and gets in the way. But even then, let's be sure to proceed subject to the considerations below. Even deleting valueless data can cause problems if we are not careful.

Considerations before deleting anything

Once we are convinced that the data has to go, we have to ask all of the following questions in order to create the proper process for deleting it. 

Interfaces with external systems

Downstream and Upstream systems may break as a result of deleting data. 

Downstream systems may contain supplemental data that will be left dangling if you delete a record from your system and do not also delete it (and the associated supplemental data) from the downstream system. This can cause applications to fail or worse, can cause invalid results to appear on reports.  

Upstream systems may be subject to numerous problems as well. They may re-introduce the same records that you delete, or they may send child records that are associated with records that you deleted, causing interfaces to fail. Worse, without proper logging in your interfaces, errors like this can go undetected.

Of course, this problem can be recursive. Each of the upstream and downstream systems may have upstream and downstream systems of their own having the same potential risks and complications.

Constraints and Dependencies

Are there database constraints or triggers that would result in child data being deleted, and do we want this? As we think about whether we want to delete older customer invoices, for example, do we want to delete the order history at the individual item level? We may not want order history with respect to a customer after 7 years, but do we want to lose the information about the quantity of each item that was ordered over time? If we want to keep the item order counts but not the invoices, then we may need to store that data differently in order to be able to delete the invoices.
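
Storing that data differently might look like the following Oracle-flavored sketch, which rolls item quantities up into a summary table before the invoices go away; all table and column names are hypothetical:

```sql
-- Preserve item-level order counts before deleting old invoices
INSERT INTO item_order_history (item_id, order_year, qty_ordered)
SELECT il.item_id,
       EXTRACT(YEAR FROM i.invoice_date),
       SUM(il.quantity)
FROM   invoice_lines il
JOIN   invoices      i ON i.invoice_id = il.invoice_id
WHERE  i.invoice_date < ADD_MONTHS(SYSDATE, -84)   -- older than 7 years
GROUP  BY il.item_id, EXTRACT(YEAR FROM i.invoice_date);
```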

Or conversely, is there any data that would be left dangling because there are no integrity constraints defined to delete child data?  If we delete old customer records for example, will there be customer address data, demographic data, or customer preference data left behind resulting in data inconsistencies and invalid reporting? This is just one reason why database integrity constraints are so important in a database design, but that is a topic for another post.

All these questions must be applied to the child data as well. If we delete this data, is there dangling parent data that would be left useless or meaningless without its detail and therefore should be deleted as well?

Retention Requirements

Does your organization have any retention or destruction requirements for this data? Better check with legal! Aside from your organization's own data management policies, numerous regulations specify how long data of different classifications must be retained. For example, as mentioned in this article by US Signal, the Fair Labor Standards Act, the Bank Secrecy Act, the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), and the Federal Information Security Management Act (FISMA), among others, all specify data retention requirements. Be careful not to delete any data while you may be required to produce it in the event of a legal action.

The Deletion Process

So we have confirmed that we want to delete a set of data, we have confirmed what ancillary data needs to go with it and in what systems, and we have confirmed that we can legally delete the data.  Now we have to think about how to do it safely. Here are some guidelines that we follow before hitting that delete button.

Script It

All delete commands must be scripted! All delete commands must be scripted! It was worth saying that twice. Accidentally deleting records is an obviously bad thing, and accidental commands are much more likely when working at a command prompt in SSMS or SQL*Plus than when we have carefully crafted the commands and placed them in a script with comments and logging.

Preview and Approval

A preview of any data to be deleted should be made available to the application/database owner with authority to approve the removal of the data. This can be done by issuing the same command (same criteria) that will be used to delete but as a simple select command. It can be presented in a csv file, spreadsheet, or a live query. And the preview should either detail all of the child and parent data that will be removed, or just the parent records and a written description of the child data that will be deleted along with it.  The approver must be made aware of whether or not the delete process is reversible. This should be approved in writing before the data is actually deleted in the production environment. 
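
The preview-then-delete pattern can be sketched like this; the table, columns, and 7-year criterion are hypothetical:

```sql
-- Preview: exactly the rows the delete will remove.
-- Spool this to a CSV or spreadsheet for the approver.
SELECT invoice_id, customer_id, invoice_date
FROM   invoices
WHERE  invoice_date < ADD_MONTHS(SYSDATE, -84);   -- older than 7 years

-- Only after written approval: the delete uses the identical criteria
DELETE FROM invoices
WHERE  invoice_date < ADD_MONTHS(SYSDATE, -84);
COMMIT;
```

Keeping the WHERE clause identical in both statements is the whole point: the approver sees precisely what the delete will touch, nothing more and nothing less.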

Make it Repeatable

One reason to script the process is so that it is repeatable. The script should be based on criteria such that if additional data that matches that criteria is introduced after the initial delete, we can rerun the script and catch the additional data. This is also very useful if we have the ability to issue the commands in staging or test environments before doing so in production.

Do it on Staging/Test first

It is very possible that after deleting data, the user will realize that they did not account for all of the implications of doing so. Whenever possible, the exact delete process should be done in a staging or test environment before doing so in the production environment, with end user testing before the actual production delete. 

Log it

Finally, we must log the delete. This means actually keeping a record of the records that were deleted. This does not mean keeping all of the data, just a record of a few key fields so the removal of the records can be traced should there be questions later. For example, if old invoices are removed, keep a record of the customer number, the invoice number, and the date of those records that are removed. This can be done with simple select statements executed prior to the delete command using the same exact criteria.
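
Continuing the invoice example, the logging step can be sketched as an insert-select that runs immediately before the delete, with the same criteria; the log table and column names are hypothetical:

```sql
-- Record key fields of what is about to be removed
INSERT INTO deleted_invoice_log (customer_id, invoice_id, invoice_date, deleted_on)
SELECT customer_id, invoice_id, invoice_date, SYSDATE
FROM   invoices
WHERE  invoice_date < ADD_MONTHS(SYSDATE, -84);

-- Then delete using the exact same criteria
DELETE FROM invoices
WHERE  invoice_date < ADD_MONTHS(SYSDATE, -84);
COMMIT;
```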

Is it worth it?

That all sounds like a lot of work, and it is. But the implications of deleting data are significant and restoring or reproducing deleted data can be extremely difficult and time consuming or even impossible, so a thoughtful, diligent process is required and worth all of that work.

Roles vs Direct Database Privileges

A colleague asked me today for my opinion on database security and the best way to grant certain database privileges to a few users in a PostgreSQL database. I will share my thoughts here, and I welcome your thoughts as well. The basic database security concepts here apply to any relational database, including Oracle, SQL Server, MySQL, or any database that implements roles for security. They also apply to application security roles where the access control is managed in the application rather than the database, as is often the case.

My colleague needed to give certain users the ability to kill other processes. He was struggling with how to structure the privilege. In PostgreSQL, the privilege to instruct another process to terminate is granted by virtue of the default role called pg_signal_backend. He was deciding between granting that role directly to the users in question, or creating a role called something like Manage_Other_Processes that would be granted to those users.

Here is how I think about using roles. 

A role is really a business role

Basically, one should grant a privilege to a role rather than directly to a user when that privilege is to be granted to a group of users rather than just one, specifically a group of users that perform the same business function. One benefit of this approach is that it simplifies replicating one user's privileges to another user, as when one user leaves the company and is replaced by another.

A privilege should also be granted to a role when that privilege enables the user to perform a certain function, and when it is likely that other privileges will also be required in order for a user to perform that same function.

These considerations really get to the whole idea of roles in the first place. A role really refers to the role that the individual receiving the privilege plays in the organization. I think its original intent was not to be a database construct, but that is how many think of it now. This misalignment is particularly reflected in the naming of the pg_signal_backend role in PostgreSQL; more on that later.

Database Privileges, Security Best Practices, Keeping it Organized

A key benefit of using roles is organization. A given user may have many privileges: update, delete, insert, and select on tables and views, execute on stored procedures, and so on. Add in system privileges, and a typical user has lots of privileges. Managing privileges on that many objects is a challenge. The best way to manage a large number of things is to categorize and label them. This is accomplished with roles.

For example, I can group together all the privileges on stored procedures, tables, views, and other database objects required to manage client records, and grant them to a role called manage_client_records. And I can group together all of the privileges required to manage employee records, and grant them to a role called manage_employee_records.

Database Security and adding new users

Rather than remembering that I need to grant execute permission on 2 stored procedures and access to 10 tables for managing employee records, and on 3 procedures and 15 tables for managing customer records, I can simply grant all of those privileges to the appropriate roles once, and then grant those roles to the proper users in one simple statement.
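
In Oracle-flavored syntax (PostgreSQL differs slightly, e.g. GRANT EXECUTE ON FUNCTION), and with hypothetical object and user names, the pattern looks like this:

```sql
-- Bundle privileges into business-function roles
CREATE ROLE manage_employee_records;
GRANT SELECT, INSERT, UPDATE ON employees TO manage_employee_records;
GRANT EXECUTE ON hire_employee_proc       TO manage_employee_records;

CREATE ROLE manage_client_records;
GRANT SELECT, INSERT, UPDATE ON clients   TO manage_client_records;
GRANT EXECUTE ON open_client_account_proc TO manage_client_records;

-- New hire who performs both functions: two grants instead of dozens
GRANT manage_employee_records, manage_client_records TO jsmith;

-- Departure or role change: one revoke per function removes everything
REVOKE manage_client_records FROM jsmith;
```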

Ease of removing or changing user access

Perhaps most importantly, I can revoke all those privileges by simply revoking the roles, enhancing security by reducing the possibility of human error resulting in dangling privileges when someone changes roles in the company. 

Ease of managing application enhancements and changes

If the developers add functionality to the application, resulting in new tables, views, or other database objects that will require access by certain application users, these new privileges can be granted to the appropriate roles, and all users that have that role will receive that privilege. No need to individually grant the privileges to individual users.

Discovery and User Access reporting

When we do database security assessments, we often generate reports that show which users have privilege to access tables, execute stored procedures, and change system configuration.

What management really wants to know, however, is not which tables a user can access. They want to know what business functions each user can perform and what data they can read or edit in that capacity. Here is where using roles really shines.

A report showing the set of users that can view or manage client accounts is much more useful to management than a report that shows a set of users that have select or edit privilege on the client table, and the client address table, and the client account table, and the client transaction table, etc. Management needs to be able to quickly see what capabilities users have. Roles make it much easier for them to see that. Imagine a report showing 10 users that have been granted the manage_client_data role, and 15 that have been granted the view_client_data role. Twenty-five lines that tell the complete story. Contrast that with a report with hundreds of lines showing all tables and stored procedures that all users have access to. Of course a detail report will be useful as well for deep analysis, and that can be generated when using roles too.

Database Privileges and System Roles

I used application-related roles as examples in this article, but the same concepts apply to system roles and application-owner roles like those my colleague asked about, and that motivated me to write this article. This deserves a little more discussion; some readers may disagree with my thoughts here, and I was definitely on the fence about it. Please comment and add your thoughts if you think differently.

The privilege that he asked about was actually already a role, not a privilege. pg_signal_backend is a role that enables the user to terminate processes owned by other users (except superusers). While this is already a role, I feel it is so narrowly defined that it does not satisfy the real intent of a role as I discussed it above. It would not be surprising if other similar privileges (roles) of this nature end up being needed by the same user, given that it needs to control other processes. I would rather see a better-defined (and better-named) role, like Manage_Other_Processes, that includes this role and any others that turn out to be necessary. That role can then be granted to any other users who need this capability.
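
In PostgreSQL, wrapping the built-in role this way takes only a few statements; the user names are hypothetical:

```sql
-- Wrap the narrowly named built-in role in a business-function role
CREATE ROLE manage_other_processes NOLOGIN;
GRANT pg_signal_backend TO manage_other_processes;

-- Grant the business role to the users who need the capability
GRANT manage_other_processes TO alice, bob;
```

If additional process-management privileges become necessary later, they get granted once to manage_other_processes, and every holder of that role picks them up automatically.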

Similar to my discussion about user access reporting above, a role with a name like Manage_Other_Processes will tell much more during a user access report than one with the name pg_signal_backend.  

To Role or not to Role

So at the end of the day, when designing a security scheme, I try to use roles wherever it is likely that the same business function requires multiple privileges, or where the same privileges are likely to be assigned to multiple users. Please share your thoughts and contact us for more information.

When Should The Database Be Updated?

Why “if it’s not broke, don’t fix it” does not work for databases (or anywhere in IT, for that matter)

One of the hotly debated items among IT professionals is the age-old question, “When should the database be updated?” At Buda Consulting we always like to make sure our clients are running the latest secure, supported versions of any software in any environment we manage. This includes database products from Oracle Database and Microsoft SQL Server to PostgreSQL. But we have noticed that this has not always been the case when we come into a client’s company and perform our global product health check.

In my experience I have worked with DBAs and system administrators who have always said that if it is working, we should not touch it, and I can understand why some professionals and managers may feel this way. When your database or application is offline, it creates stress as administrators are tasked with getting the services back online as soon as possible. The idea is that if we do not touch anything, it should just work without issue, but experience shows this is not always the case. When it comes to databases specifically, not touching a database from time to time can have catastrophic results.

If we as DBAs did not look at your database’s tablespace statistics, we would never know when your instance was about to run out of space at the tablespace or filesystem/ASM disk group level. Not noticing this would eventually leave your database unable to write data, which would result in your application or database crashing.
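
One common sketch of such a check in Oracle is below; note it does not account for autoextend headroom on the data files:

```sql
-- Used vs. free space per tablespace (Oracle data dictionary)
SELECT df.tablespace_name,
       ROUND(df.total_mb - NVL(fs.free_mb, 0)) AS used_mb,
       ROUND(NVL(fs.free_mb, 0))               AS free_mb
FROM  (SELECT tablespace_name, SUM(bytes)/1024/1024 AS total_mb
       FROM   dba_data_files
       GROUP  BY tablespace_name) df
LEFT JOIN
      (SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
       FROM   dba_free_space
       GROUP  BY tablespace_name) fs
ON     fs.tablespace_name = df.tablespace_name
ORDER  BY free_mb;
```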

Another excuse (yes, that is what I call not upgrading your software!) I hear from time to time is that new software versions introduce bugs. That is true, but almost every software release introduces bugs. Most bugs are outlined in the KNOWN BUGS section of a release’s readme, while others have yet to be discovered. What this excuse does not take into account is that new software usually fixes bugs and security exploits that were not patched in the older version. Whenever you are in doubt, contact Buda Consulting for a database security assessment.

Let’s determine “When should the database be updated?”

As someone who has worked in both the private and public sectors of IT, I have seen the dire consequences of failing to keep software up to date. This is a widespread problem in most public-sector entities, since most do not generate revenue but instead provide a service for the citizens of their state. Because money is usually very scarce, most IT budgets tend to get trimmed to the detriment of the agency. I have seen time and time again where a mainframe service was not maintained over the years because the original administrators of the platform either moved on or retired. Because those admins were the ones who implemented the platform, once they left, the knowledge of administering and maintaining it left with them.

This caused new staff who did not know the platform to just “keep the lights on” and not patch the environment for fear of breaking something that was not broken. Over time the software running the platform moved further and further from the latest version, until a direct upgrade path was impossible without vendor intervention or consulting services. Once the vendor is involved, you can expect the cost of the upgrade not to be cost effective. I have seen quotes for upgrade work as high as two (2) million dollars for mainframe systems, costs that could easily have been avoided had both old and new administrators put forth their best effort to keep the platform running the latest software.

It is industry best practice, especially when it comes to databases, to move to a new software version only after the release of the first major patch set (for Oracle, the first Release Update). For instance, as of the writing of this article, Oracle’s latest database software is version 21c. Once the first update of 21c is released, all companies using the 21c base release or older versions should start creating an upgrade plan to be implemented within six months to a year. As I explained above, by not keeping your software upgraded to the latest version, you put your company at risk of having to spend a lot of money down the line to hire an outside company to perform the upgrade, because you are no longer able to easily upgrade from one version to the next.

So if you are running Oracle Database versions 11g or 12c, it’s time to start planning an upgrade to at least 19c or 21c. If you are running Microsoft SQL Server 2016, it’s time to start planning an upgrade to at least SQL Server 2017 CU 24 or SQL Server 2019 CU 11. We cannot stress enough that the old “if it’s not broken, don’t fix it” methodology needs to go away. In the age of constant security breaches it is more important now than ever to keep your software up to date with the latest patches, to make sure you are protected against the worst of the software exploits running around the interwebs.

And if you like this article, please share it with your colleagues and subscribe to our blog to get the latest updates. Schedule a 15 minute call with Buda Consulting today.

The Elements Of A Good Disaster Recovery Plan

What is a Disaster Recovery Plan

A disaster recovery plan (also called a business continuity plan) is a document that describes how an organization will survive a disaster. Disasters can be natural events like hurricanes or fires, terrorist attacks, or any other event that may prevent the business from operating. A good disaster recovery plan includes everything from how to replace lost personnel to how to relocate everything in the event that an entire building is lost. The plan covers human loss, product loss, customer loss, technology hardware loss, and data loss. This article will discuss only hardware loss and data loss, but it is important to think about these in the larger context of the overall business continuity plan.

Why is a good disaster recovery plan important

When disaster strikes, time is critical, resources are stretched, and capabilities are limited.  Planning ahead for a disaster ensures minimum disruption by identifying everything that might be needed beforehand and ensuring that there is redundancy in each element. For example, a good disaster recovery plan will identify individuals responsible for restoring a database, and a backup to that individual in case the primary individual is lost. The backup individuals can then be trained properly to take action when it becomes necessary.  Once the crisis starts, it is too late to take these steps. 

Elements of a Disaster Recovery Plan

A disaster recovery plan includes many elements that help us be prepared in a crisis. The purpose of identifying all of these up front is to ensure that we have primary and backup human resources trained for each task that must be performed in a crisis, and that we have reliable backups in place for all physical and technical resources (applications, databases, servers, networks, buildings, vehicles, machinery) that will be required in order to stay in business or get back in business after a disaster. Some of the more critical elements of the plan follow. Since this is a database blog, the remainder of this article will focus on applications and databases.

Scenarios

We want to enumerate as many possible disaster scenarios as we can in order to ensure a robust plan. As we describe each scenario in detail, we will find blind spots and address them. The scenarios must describe what may happen, what that will look like, exactly what steps we will need to take to get back in business, and exactly who will do them. Examples of technology-related disasters:

  • Main data center is hit by extended power outage due to flooding damage to regional power grid
  • Infrastructure is hit with ransomware attack
  • Hurricane cuts connectivity to main data center
  • Human error causes loss of a large data table for mission critical applications.
  • Storage system firmware update causes corruption in production database

Inventory of applications (including dependencies on databases) 

Include applications without a formal name (reporting or analytical tools used against a data mart or data warehouse). Collecting this information on each application will help us know exactly who to call when disaster strikes, saving valuable time. Ensure that every known database is referenced here.

  • Application Owner
  • Recovery Time Objective
  • Recovery Point Objective
  • Responsible IT persons (primary and backup)
    • Application
    • Network
    • Cloud Infrastructure
    • Storage
    • Server
    • Database
    • Backup Maintenance

Test the Elements of a Disaster Recovery Plan

    • Test Procedures for each application in inventory
      • Identify systems to be used for test restore if applicable
        • Responsible party to provision these systems
      • Example Pre testing steps
        • Determine which applications/databases are in scope of this test
        • Gather data points to validate. This typically involves finding an example of both recently entered or modified data, and old data, to ensure that a full range of timeframes is represented and continues to be available after the recovery.
      • Example steps for conducting the test   — some or all of these may be applicable
        • Failover to backup database
        • Restore backup database
        • Point application to test database 
      • Example Post testing steps — some or all of these may be applicable
        • Validate the data points
        • Switch back to primary
        • Repoint the applications to primary database
  • Update the Disaster Recovery plan to reflect any lessons learned, staff changes, new, changed, or decommissioned databases, applications, or hardware.
    • Testing Schedule
      • When will tests be conducted?
        • Frequency: a minimum of twice per year is recommended
        • At what point in the quarter, month, or week?
        • Time of day
  • Test Cases
    • Screens/reports to review
    • Data points to validate 
  • Responsible parties
    • Who will be responsible for conducting the test?
    • Who will be responsible for validating the results?
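The "gather data points, then validate" steps above amount to a before/after comparison. The following is an illustrative Python sketch, not a real tool: in practice each lookup would be a SQL query against the primary and the recovered database, and the table and column names here are hypothetical.

```python
def gather_data_points(db):
    """Capture one old row and one recently entered row, per the checklist,
    so the full range of timeframes is represented."""
    rows = db["orders"]  # hypothetical table
    return {"oldest": rows[0], "recent": rows[-1]}

def validate_recovery(baseline, recovered_db):
    """Re-check the same data points after the failover/restore."""
    return gather_data_points(recovered_db) == baseline

# Before the test: snapshot data points from the primary.
primary = {"orders": [{"id": 1, "total": 10.0}, {"id": 99, "total": 25.5}]}
baseline = gather_data_points(primary)

# After the test: the recovered database should hold the same rows.
recovered = {"orders": [{"id": 1, "total": 10.0}, {"id": 99, "total": 25.5}]}
print(validate_recovery(baseline, recovered))  # True
```

The design point is that the baseline must be captured before the test begins; comparing the recovered database only to itself proves nothing.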

Living Document

As with many documents critical to our businesses, this must be a living document. It contains names and contact information for key personnel who must be called in a time of crisis, so it is critical that it be updated regularly to reflect changes in staff and responsibilities. New applications and databases are added regularly as well; these must also be kept current. Best practice is to update the document each time the tests are conducted.

Database Disaster Recovery Tools

One key aspect of the recovery plan from a database perspective is the designation of a tool or tools to create standby databases that can be used in the event of a failure of the primary database. Most database vendors provide tools for this purpose. We will discuss the tools Oracle provides as well as a third-party tool (Dbvisit). Future articles will describe DR options for SQL Server and other databases.

Oracle Data Guard

Oracle provides a tool called Oracle Data Guard that can be used to configure and manage standby databases. Data Guard can create and manage local or remote standby databases, either logical or physical, and manages the transition from primary to standby and back. At the center of Data Guard is a set of command-line utilities entered at the Oracle console (SQL prompt). Oracle's enterprise manager tool (Cloud Control) provides a graphical interface on top of Data Guard and simplifies its use.
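As an illustration, a planned switchover using the Data Guard broker's command-line utility (DGMGRL) looks roughly like the following. The database names are placeholders, and the exact steps depend on how your configuration is set up:

```
$ dgmgrl sys@proddb                  -- connect to the broker as SYSDBA
DGMGRL> SHOW CONFIGURATION;          -- confirm the configuration is healthy
DGMGRL> SWITCHOVER TO 'standbydb';   -- planned role transition to the standby
DGMGRL> SHOW CONFIGURATION;          -- verify the new roles after switchover
```

A switchover like this is exactly what gets exercised during the DR tests described earlier, so the commands and their expected output belong in the plan's test procedures.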

Oracle Data Guard is included with Oracle Enterprise Edition. A more powerful option, Active Data Guard, enables greater use of standby databases (for example, opening a physical standby read-only for reporting). Unlike basic Data Guard, Active Data Guard requires additional license fees.

Dbvisit

Both of the Oracle Data Guard options previously mentioned require Oracle Enterprise Edition; there is no DR solution available from Oracle for Standard Edition. Fortunately, Dbvisit offers one. Dbvisit provides the functionality to create and manage standby databases for Oracle Standard Edition, with a graphical user interface that makes creating and managing a DR solution simple. And its licensing costs much less than an upgrade to Oracle Enterprise Edition. If the only reason for needing Enterprise Edition is the DR capability of Oracle Data Guard, Dbvisit is a good option.

These Are the Elements of a Disaster Recovery Plan

In summary, a good DR plan should include everything about what an organization must do to recover from an emergency. This includes the who, what, when, where and how for the entire process from the moment that an emergency occurs to when the organization is fully recovered. 

If you would like to discuss creating and implementing a Disaster Recovery Plan, especially the Database Related components of your plan, give us a call and we can talk about the best approach.

Also, please leave a comment with your thoughts about disaster recovery planning. Let me know if you include things I didn't mention, or share stories about how a plan helped in a disaster, or how the absence of one hurt 🙁

And if you like this article, please share it with your colleagues and subscribe to our blog.

Database Patch News — March 2021 (Issue 7)

Welcome to Database Patch News, Buda Consulting’s newsletter of current patch information for Oracle and Microsoft SQL Server. Here you’ll find information recently made available on patches—including security patches—and desupported versions.

Why should you care about patching vulnerabilities and bugs? Two big reasons:

  1. Unpatched systems are a top cyber attack target. Patch releases effectively advertise vulnerabilities to the hacker community. The longer you wait to patch, the greater your security risk. 
  2. Along with running a supported database version, applying the latest patches ensures that you can get support from the vendor in case of an issue. Patching also helps eliminate downtime and lost productivity associated with bugs. 
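Before applying a new patch, it helps to confirm what is already installed. The following queries are a sketch (run them with appropriate privileges); column availability can vary slightly by version:

```sql
-- Oracle (12c and later): patches applied to the database are recorded
-- in the registry and visible through DBA_REGISTRY_SQLPATCH
SELECT patch_id, description, status, action_time
  FROM dba_registry_sqlpatch
 ORDER BY action_time;

-- SQL Server: product version and update level (e.g., the CU number)
SELECT SERVERPROPERTY('ProductVersion')     AS version,
       SERVERPROPERTY('ProductUpdateLevel') AS update_level;
```

Comparing this output against the versions listed below tells you immediately how far behind a given instance is.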

Here are the latest patch updates for Oracle and SQL Server:

Oracle Patches:

January 19, 2021 Quarterly Patch Updates:

21c – Released January 13, 2021, Version 21.1; no Quarterly patch yet

19c – Release Update 19.10 is available (32218494 and 321266828)

18c – Release Update 18.13 is available (32204699 and 32126855)

12cR2 – Release Update 210119 is available (32228578 and 32126871)

Regular support ends in Mar 2023 and extended support ends in Mar 2026.

12cR1 – Release Update 210119 is available (32132231 and 32126908)

Regular support ended in July 2019 and extended support ends in July 2021.

11gR2 (11.2.0.4) – Patch Set Update 201020 is available (31720776)

Regular support ended in October 2018 and extended support ended December 31, 2020.


SQL Server Patches:

SQL Server 2019

Cumulative update 9 (Latest build) Released Feb 2, 2021
Mainstream support ends Jan 7, 2025
Extended support ends Jan 8, 2030


SQL Server 2017

Cumulative update 23 (Latest build) Released Feb 24, 2021
Mainstream support ends Oct 11, 2022
Extended support ends Oct 12, 2027


SQL Server 2016 Service Pack 2

Cumulative update 16 Release date: Feb 11, 2021
Mainstream support ends Jul 13, 2021
Extended support ends Jul 14, 2026


SQL Server 2014 Service Pack 3

Cumulative update 4 Release date: Jan 12, 2021
Mainstream support ended Jul 9, 2019
Extended support ends Jul 9, 2024


SQL Server 2012 Service Pack 4

Release date: Oct 5, 2017
Mainstream support ended Jul 11, 2017
Extended support ends Jul 12, 2022

Note: All other SQL Server versions not mentioned are no longer supported.