What Is CUI Data? | An Expert’s Explanation

Did you know there are 125 categories of controlled unclassified information (CUI)? With so much data now falling under the CUI umbrella, it is essential that your business recognizes which data to protect. But what exactly is CUI data? Read on to learn about this type of data, how to recognize whether you use it in your business, and how you can protect it.

What Is CUI Data?

CUI, or controlled unclassified information, is information that requires safeguarding. It is data that must be disseminated in a manner consistent with the laws, regulations, and government-wide policies in place, but that does not fall under Executive Order 13526, "Classified National Security Information."

CUI is part of a government program that strives to standardize this type of data and ensure it is protected. CUI replaces the old For Official Use Only (FOUO) programs and offers more efficient and consistent policies. If a document had a label of “Proprietary” or “For Official Use Only” in the past, now it needs the CUI label.

CUI is a term that encompasses other kinds of data: Covered Defense Information (CDI) and Controlled Technical Information (CTI). These refer to technical information that applies to a military or space context and that carries a distribution statement. CUI data can be labeled as CUI Basic or CUI Specified; the latter is more restrictive in its permitted uses and the safeguards it requires.

Examples of CUI Data

Within the 125 categories of data that fit into the CUI label, you can find many subsets of information that need to be protected, but are not classified. The CUI Registry has a list of what type of data must be safeguarded following government policies, laws, and regulations. Some examples include:

  • Personally Identifiable Information (PII), which is information that can identify a particular person
  • Sensitive Personally Identifiable Information (SPII), which is information that if disclosed without permission could substantially harm or embarrass the person
  • Unclassified Controlled Technical Information (UCTI), which refers to data that has a military or space application
  • Sensitive But Unclassified (SBU), which is information that does not meet the standards for National Security classification
  • Law Enforcement Sensitive (LES), which is data that if disseminated without permission could cause harm to law enforcement procedures

There are many more forms of CUI, and you can expect everything from health records, intellectual property, technical drawings and blueprints, and much more to fall under the label of CUI data.

Identifying CUI Data

If you are an IT professional or are a government contractor of any kind, you will likely have CUI data to worry about. Most of the time, the Department of Defense will label data as CTI or CDI, as needed, but there are instances when the contractor will be creating this kind of data as they complete a project. How do you identify it, then?

Let us look at some of the things to watch for.

Contracts

Does your site hold a US government contract, or does it supply goods or services under a US federal contract? If so, you most likely have CUI data you will need to safeguard.

Labeled Information

Some data will already carry a CUI label or will otherwise be easy to identify. If you see "Export Control" markings, which cover information that needs monitoring under regimes such as the Export Administration Regulations (EAR) or the International Traffic in Arms Regulations (ITAR), you can expect CUI data. Labeled information refers to non-classified data with legacy or agency designations that qualifies as CUI.

Defense Projects

Many Defense Federal Acquisition Regulation Supplement (DFARS) clauses deal with CUI. If projects related to aerospace manufacturing include details that are noncommercial and technical, they are CUI. Technical information can refer to engineering and research data, as well as engineering drawings and plans, technical orders, process sheets, manuals, datasets, studies, and much more. Defense projects that include technical information with a military or space application need the CUI label.

Non-Defense Projects

Whether a non-defense federal project involves CUI data depends on the specifics of the project and of the contract. Federal contract information, which is treated as CUI, is information that the government does not want released to the public and that has been created for the government, or provided by the government, during a contract.

Protecting CUI Data

There are government policies and guidelines to help you protect CUI data. You have to physically protect the data using key card access or other similar locks. The data and all its backups need labeling and securing when not in use.

At the network layer, the data also needs protection. Firewalls, switches, and routers all have to guard against unauthorized access, with controls at OSI layers two through four (data link, network, and transport). You also need session controls in place. The data has to be protected with authentication and authorization mechanisms, all within the control of the data owner. Infrastructure controls can also secure CUI data, including virtual machines, storage area networks, physical servers, and backup systems.

You will need to have a risk assessment completed, and there must be network scans done periodically. If there are any configuration changes needed to the system that provides access to the CUI, the process needs a documented review and an approval process. Any logs need a third-party audit on a regular basis.

Keep CUI Secure

If you work with CUI data and need the best security, we can help. At Buda Consulting, we deliver secure and reliable database systems, ensuring even the most sensitive data is safe. Contact us now to speak with an expert!

MySQL and MariaDB Encryption Choices for Today’s Use Cases

Long a cornerstone of data security, encryption is becoming more important than ever as organizations come to grips with major trends like teleworking, privacy mandates and Zero Trust architectures. To comprehensively protect data from the widest possible range of threats and meet the demands of these new use cases, you need two fundamental encryption capabilities:

  1. The ability to encrypt sensitive data “at rest”—that is, where it resides on disk. This is a critical security capability for many organizations and applications, as well as a de facto requirement for compliance with privacy regulations like HIPAA, GDPR and CCPA. PCI DSS also requires that stored card data be encrypted.
  2. Encrypting data “in transit” across private and public networks. Common examples include using the HTTPS protocol for secure online payment transactions, as well as encrypting messages within VPN tunnels. Zero Trust further advocates encrypting data transmitted over your internal networks, since your “perimeter” is presumed to be compromised.

MySQL and MariaDB each support “at rest” and “in transit” encryption modalities. They both give you the ability to encrypt data at rest at the database level, as well as encrypting connections between the MySQL or MariaDB client and the server.

MySQL database-level encryption

MySQL has offered strong encryption for data at rest at the database level since MySQL 5.7. This feature requires no application code, schema or data type changes. It is also straightforward for DBAs, as it does not require them to manage associated keys. Keys can be securely stored separate from the data and key rotation is easy.

MySQL currently supports database-level encryption for general tablespaces, file-per-table tablespaces and the mysql system tablespace. While earlier MySQL versions encrypted only InnoDB tables, newer versions can also encrypt various log files (e.g., undo logs and redo logs). Also, beginning with MySQL 8.0.16, you can set an encryption default for schemas and general tablespaces, enabling DBAs to control whether tables are encrypted automatically.
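
As a sketch of what this looks like in practice (table and schema names here are illustrative, and a keyring plugin must already be configured on the server):

```sql
-- Prerequisite: a keyring plugin loaded at startup,
-- e.g. in my.cnf:  early-plugin-load=keyring_file.so

-- Encrypt a new file-per-table tablespace
CREATE TABLE customer_pii (
  id  INT PRIMARY KEY,
  ssn VARCHAR(11)
) ENCRYPTION='Y';

-- Encrypt an existing table in place
ALTER TABLE orders ENCRYPTION='Y';

-- MySQL 8.0.16+: make encryption the default for a schema
ALTER DATABASE app_db DEFAULT ENCRYPTION='Y';

-- Rotate the master key (re-encrypts tablespace keys, not the data itself)
ALTER INSTANCE ROTATE INNODB MASTER KEY;
```

Because only the tablespace keys are re-wrapped, master key rotation is fast and does not require rewriting table data.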

MySQL database-level encryption is secure overall, easy to implement, and adds little overhead. Among its limitations: it does not offer per-user granularity, it cannot protect against a malicious root user (who can read the keyring file), and it cannot protect data in RAM.

MySQL Enterprise Transparent Data Encryption

In addition to the generic database-level encryption just discussed, users of “select Commercial Editions” of MySQL Enterprise can also leverage Transparent Data Encryption (TDE). This feature encrypts data automatically, in real-time, before writing it to disk; and decrypts it automatically when reading it from disk.

TDE is “transparent” to users and applications in that it doesn’t require code, schema or data type changes. Developers and DBAs can encrypt/decrypt previously unencrypted MySQL tables with this approach. It uses database caching to improve performance and can be implemented without taking databases offline.

Other MySQL Enterprise Encryption Features

Besides TDE, MySQL Enterprise Edition 5.6 and newer offers encryption functions based on the OpenSSL library, which expose OpenSSL capabilities at the SQL level. By calling these functions, MySQL Enterprise applications can perform the following operations:

  • Improve data protection with public-key asymmetric cryptography, which is increasingly advocated as hackers’ ability to crack hashed passwords increases 
  • Create public and private keys and digital signatures
  • Perform asymmetric encryption and decryption
  • Use cryptographic hashes for digital signing and data verification/validation
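
A hedged sketch of how these functions can be called at the SQL level (the literal values are illustrative, and the functions are available only where the MySQL Enterprise Encryption components are installed):

```sql
-- Generate an RSA key pair
SET @priv = create_asymmetric_priv_key('RSA', 2048);
SET @pub  = create_asymmetric_pub_key('RSA', @priv);

-- Encrypt with the public key; decrypt with the private key
SET @ciphertext = asymmetric_encrypt('RSA', 'sensitive value', @pub);
SELECT asymmetric_decrypt('RSA', @ciphertext, @priv) AS plaintext;

-- Sign a SHA-256 digest and verify the signature
SET @digest = create_digest('SHA256', 'message to sign');
SET @sig    = asymmetric_sign('RSA', @digest, @priv, 'SHA256');
SELECT asymmetric_verify('RSA', @digest, @sig, @pub, 'SHA256') AS valid;
```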

MariaDB database-level encryption

MariaDB has supported encryption of tables and tablespaces since version 10.1.3. Once data-at-rest encryption is enabled in MariaDB, tables that are defined with ENCRYPTED=YES or with innodb_encrypt_tables=ON will be encrypted. Encryption is supported for the InnoDB and XtraDB storage engines, as well as for tables created with ROW_FORMAT=PAGE (the default) for the Aria storage engine.

One advantage of MariaDB’s database-level encryption is its flexibility. When using InnoDB or XtraDB you can encrypt all tablespaces/tables, individual tables, or everything but individual tables. You can also encrypt the log files, which is a good practice.
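
As a rough sketch, data-at-rest encryption might be enabled in my.cnf along these lines (the file paths are assumptions, and the simple file_key_management plugin shown here is often replaced by a KMS-backed key management plugin in production):

```ini
# my.cnf sketch: enable data-at-rest encryption in MariaDB
[mariadb]
plugin_load_add = file_key_management
file_key_management_filename = /etc/mysql/encryption/keyfile.enc
file_key_management_filekey = FILE:/etc/mysql/encryption/keyfile.key
file_key_management_encryption_algorithm = AES_CTR

# Encrypt all InnoDB tables and the logs
innodb_encrypt_tables = ON
innodb_encrypt_log = ON
innodb_encryption_threads = 4
```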

Encrypted MariaDB data is decrypted only when accessed via the MariaDB database, which makes it highly secure. A potential downside is that MariaDB’s encryption adds about 3-5% data size overhead.

This post explains how to set up, configure, and test database-level encryption in MariaDB. For an overview of MariaDB's database-level encryption, see this page in the MariaDB knowledge base.

Encrypting data “in transit” with MySQL

To avoid exposing sensitive data to potential inspection and exfiltration if your internal network is compromised, or if the data is transiting public networks, you can encrypt the data when it passes between the MySQL client and the server.

MySQL supports encrypted connections between the server and clients via the Transport Layer Security (TLS) protocol, using OpenSSL.

By default, MySQL programs try to connect using encryption if it is supported on the server; unencrypted connections are the fallback. If your risk profile or regulatory obligations require it, MySQL lets you make encrypted connections mandatory.
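
A minimal sketch of enforcing encrypted connections (the account name is illustrative):

```sql
-- Server-wide: refuse any unencrypted client connection
-- (add require_secure_transport=ON to my.cnf to persist across restarts)
SET GLOBAL require_secure_transport = ON;

-- Per-account: require TLS for a specific login
ALTER USER 'app_user'@'%' REQUIRE SSL;

-- Verify that the current session is actually encrypted
SHOW SESSION STATUS LIKE 'Ssl_cipher';
```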

Encrypting data in transit with MariaDB

By default, MariaDB does not encrypt data during transmission over the network between clients and the server. To block “man-in-the-middle” attacks, side channel attacks and other threats to data in transit, you can encrypt data in transit using the Transport Layer Security (TLS) protocol—provided your MariaDB server was compiled with TLS support. Note that MariaDB does not support older SSL versions.

As you might expect, there are multiple steps involved in setting up data-in-transit encryption, such as creating certificates and enabling encryption on the client side. See this page in the MariaDB knowledgebase for details.
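
For illustration, the server and client portions of a my.cnf TLS setup might look like the following (certificate paths are assumptions; certificates must be created beforehand):

```ini
# my.cnf sketch: TLS between MariaDB clients and the server
[mariadb]
ssl_cert = /etc/mysql/certs/server-cert.pem
ssl_key  = /etc/mysql/certs/server-key.pem
ssl_ca   = /etc/mysql/certs/ca.pem

[client-mariadb]
ssl_ca = /etc/mysql/certs/ca.pem
# Reject servers whose certificate does not match their hostname
ssl-verify-server-cert
```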

Conclusion

With data security being an increasing business and regulatory concern, and new use cases like teleworking and privacy compliance becoming the norm, encryption will certainly be used to secure more and more MySQL and MariaDB environments. 

If you’d like a “second opinion” on where and how to implement encryption to address your business needs, contact Buda Consulting for a free consultation on our database security assessment process.

If you like this article, please share it with your colleagues and subscribe to our blog to get the latest updates.

Roles vs Direct Database Privileges

A colleague asked me today for my opinion on database security and the best way to grant certain database privileges to a few users in a PostgreSQL database. I will share my thoughts here, and I welcome yours as well. The basic database security concepts here apply to any relational database (including Oracle, SQL Server, and MySQL) or any database that implements roles for security. They also apply to application security roles where access control is managed in the application rather than the database, as is often the case.

My colleague needed to give certain users the ability to kill other processes. He was struggling with how to structure the privilege. In PostgreSQL, the ability to instruct another process to terminate is granted through the default role pg_signal_backend. He was deciding between granting that role directly to the users in question, or creating a role called something like Manage_Other_Processes and granting that to them.

Here is how I think about using roles. 

A role is really a business role

Basically, one should grant a privilege to a role rather than directly to a user when that privilege is to be granted to a group of users rather than just one, specifically a group of users that perform the same business function. One benefit of this approach is that it simplifies replicating one user's privileges for another user, as when one user leaves the company and is replaced by another.

A privilege should also be granted to a role when that privilege enables the user to perform a certain function, and when it is likely that other privileges will also be required in order for a user to perform that same function.

These considerations get to the whole idea of roles in the first place. A role really refers to the role that the individual receiving the privilege plays in the organization. I think its original intent was not to be a database construct, but that is how many think of it now. This misalignment is particularly reflected in the naming of the pg_signal_backend role in PostgreSQL; more on that later.

Database Privileges, Security Best Practices, Keeping it Organized

A key benefit of using roles is organization. A given user may have many privileges: update, delete, insert, and select, each on tables, views, stored procedures, and so on. Add in system privileges, and a typical user has a lot of privileges. Managing privileges on that many objects is a challenge. The best way to manage a large number of things is to categorize and label them, and that is exactly what roles accomplish.

For example, I can group together all the privileges on stored procedures, tables, views, and other database objects required to manage client records, and grant them to a role called manage_client_records. And I can group together all of the privileges required to manage employee records, and grant them to a role called manage_employee_records.
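
A sketch of that grouping in PostgreSQL syntax (the table, procedure, and user names are hypothetical):

```sql
-- Group the privileges behind one business function into a role
CREATE ROLE manage_client_records NOLOGIN;
GRANT SELECT, INSERT, UPDATE, DELETE
  ON client, client_address, client_account
  TO manage_client_records;
-- PostgreSQL 11+ syntax for procedures
GRANT EXECUTE ON PROCEDURE merge_client_accounts
  TO manage_client_records;

-- Grant the business function to users in one statement
GRANT manage_client_records TO alice, bob;

-- Offboarding: one revoke removes every privilege the role carried
REVOKE manage_client_records FROM bob;
```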

Database Security and adding new users

Rather than remembering that I need to grant execute permissions on 2 stored procedures and 10 tables for managing the employee records, and on 3 procedures, and 15 tables for managing customer records, I can simply grant all of those privileges to the appropriate roles once, and grant those roles to the proper users in one simple statement.

Ease of removing or changing user access

Perhaps most importantly, I can revoke all those privileges by simply revoking the roles, enhancing security by reducing the possibility of human error resulting in dangling privileges when someone changes roles in the company. 

Ease of managing application enhancements and changes

If the developers add functionality to the application, resulting in new tables, views, or other database objects that will require access by certain application users, these new privileges can be granted to the appropriate roles, and all users that have that role will receive that privilege. No need to individually grant the privileges to individual users.

Discovery and User Access reporting

When we do database security assessments, we often generate reports that show which users have privilege to access tables, execute stored procedures, and change system configuration.

What management really wants to know, however, is not which tables a user can access; they want to know what business functions each user can perform and what data they can read or edit in that capacity. Here is where using roles really shines.

A report showing the set of users that can view or manage client accounts is much more useful to management than a report showing the set of users with select or edit privileges on the client table, the client address table, the client account table, the client transaction table, and so on. Management needs to be able to see quickly what capabilities users have, and roles make that much easier. Imagine a report showing 10 users that have been granted the manage_client_data role and 15 that have been granted the view_client_data role: twenty-five lines that tell the complete story. Contrast that with a report of hundreds of lines showing every table and stored procedure every user has access to. Of course, a detail report is still useful for deep analysis, and that can be generated when using roles as well.

Database Privileges and System Roles

I used application-related roles as examples in this article, but the same concepts apply to system roles and application-owner roles like the one my colleague asked about, which motivated me to write this article. This deserves a little more discussion; some readers may disagree with my thoughts here, and I was definitely on the fence about it. Please comment and add your thoughts if you think differently.

The privilege he asked about was actually already a role, not a privilege. pg_signal_backend is a role that enables the user to terminate processes owned by other users (except superusers). While it is already a role, it is so narrowly defined that it does not satisfy the real intent of roles as discussed above. It would not be surprising if other similar privileges (roles) of this nature turn out to be needed by the same user, given that the user needs to control other processes. I would rather see a better-defined (and better-named) role, like Manage_Other_Processes, that includes this role and any others that end up being necessary. That role can then be granted to any other users who need this capability.
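
A sketch of what such a wrapper role could look like in PostgreSQL (the role and user names are illustrative; PostgreSQL folds unquoted identifiers to lowercase):

```sql
-- Wrap the narrow built-in role in a business-function role
CREATE ROLE manage_other_processes NOLOGIN;
GRANT pg_signal_backend TO manage_other_processes;

-- Future process-management privileges can be added in one place, e.g.:
-- GRANT pg_read_all_stats TO manage_other_processes;

-- Grant the business function to whoever needs it
GRANT manage_other_processes TO ops_user;
```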

Similar to my discussion of user access reporting above, a role with a name like Manage_Other_Processes will convey much more in a user access report than one named pg_signal_backend.

To Role or not to Role

So at the end of the day, when designing a security scheme, I try to use roles wherever it is likely that the same business function requires multiple privileges, or where the same privileges are likely to be assigned to multiple users. Please share your thoughts and contact us for more information.

When Should The Database Be Updated?

Why "if it's not broke, don't fix it" does not work for databases (or anywhere in IT, for that matter)

One of the hotly debated items among IT professionals is the age-old question, "When should the database be updated?" At Buda Consulting we always like to make sure our clients are running the latest secure, supported versions of any software in any environment we manage. This includes products ranging from Oracle Database and Microsoft SQL Server to PostgreSQL. But we have noticed that this is not always the case when we come into a client's company and perform our global product health check.

In my experience I have worked with DBAs and system administrators who have always said that if it is working, we should not touch it, and I can understand why some professionals and managers may feel this way. When your database or application is offline, it creates stress as administrators are tasked with getting the services back online as soon as possible. The idea is that if we do not touch anything, it should just work without issue, but experience shows this is not always the case. When it comes to databases specifically, not touching a database from time to time can have catastrophic results.

As DBAs, if we did not look at your database's tablespace stats, we would never know when your instance was about to run out of space at the tablespace or filesystem/ASM disk group level. Missing this would eventually leave your database unable to write data, causing your application or database to crash.
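
One hedged way to keep an eye on this in Oracle is a periodic query against the usage-metrics view, for example:

```sql
-- Tablespace headroom check (Oracle; run as a privileged user)
SELECT tablespace_name,
       ROUND(used_percent, 1) AS pct_used
FROM   dba_tablespace_usage_metrics
ORDER  BY used_percent DESC;
```

Scheduling a query like this and alerting when pct_used crosses a threshold is far cheaper than recovering a database that has stopped writing.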

Another excuse (yes, that is what I call not upgrading your software!) I hear from time to time is that new software versions introduce bugs. That is true, but almost any software release can introduce bugs. Most are outlined in the KNOWN BUGS section of a release's readme, while others have yet to be discovered. What this excuse does not take into account is that new software usually fixes bugs and security exploits that were never patched in the older version. Whenever you are in doubt, contact Buda Consulting for a database security assessment.

Let’s determine “When should the database be updated?”

As someone who has worked in both the private and public sectors of IT, I have seen the dire consequences of failing to keep software up to date. This is a widespread problem in public sector entities, as most do not generate revenue but instead provide services to citizens. Because money is usually scarce, IT budgets tend to get trimmed to the detriment of the agency. I have seen time and time again a mainframe service that was not maintained over the years because the original administrators of the platform either moved on or retired. Because those admins were the ones who implemented the platform, once they left, the knowledge of administering and maintaining it left with them.

This left new staff who did not know the platform to just "keep the lights on" and avoid patching the environment for fear of breaking something that was not broken. Over time, the software running the platform drifted further and further from the latest version, until a direct upgrade path became impossible without vendor intervention or consulting services. Once the vendor is involved, you can expect the cost of the upgrade to be anything but cost effective. I have seen quotes as high as two million dollars to upgrade mainframe systems, costs that could easily have been avoided had both old and new administrators put forth their best effort to keep the platform on the latest software.

It is industry best practice, especially for databases, to move to a new software version only after the release of the first service pack. For instance, as of this writing, Oracle's latest database software is version 21c. Once the first service pack of 21c (21cR1) is released, all companies on the 21c base release or older versions should start creating an upgrade plan to be implemented within six months to a year. As explained above, by not keeping your software upgraded you put your company at risk of having to spend a lot of money down the line to hire an outside firm to perform the upgrade, because you are no longer able to upgrade easily from one version to the next.

So if you are running Oracle Database 11g or 12c, it's time to start planning an upgrade to at least 19c or 21c. If you are running Microsoft SQL Server 2016, it's time to start planning an upgrade to at least SQL Server 2017 CU24 or SQL Server 2019 CU11. We cannot stress enough that the old "if it's not broken, don't fix it" methodology needs to go away. In the age of constant security breaches it is more important now than ever to keep your software up to date with the latest patches, so you are protected against the worst of the exploits running around the interwebs.

And if you like this article, please share it with your colleagues and subscribe to our blog to get the latest updates. Schedule a 15 minute call with Buda Consulting today.

SQL Server Vulnerability Assessment – Keep Your SQL Database Safe With This Microsoft Tool

By now you all know how hackers are having their way with businesses all over the world. I don't need to give examples to remind you of that; some are mentioned here and here, and I've written a number of blog posts about the importance of protecting the database here, here, and here.

So instead of talking about those issues again, let’s dive right in and discuss one of the simplest ways to identify typical vulnerabilities in your SQL Server database.  This is a tool that is already available to you that can significantly minimize your risk.

Microsoft provides a tool called the Vulnerability Assessment tool that will scan your databases for typical vulnerabilities. These include configuration errors, excessive permissions, and permissions granted to users rather than roles, among others. These checks look for violations of best practices for managing a database. Before this tool was released, one had to use a third-party vulnerability assessment tool like Trustwave's AppDetective Pro, or manually run scripts to find such vulnerabilities.

How This Assessment Tool Compares With Third-Party Tools

I have used third-party tools like Trustwave's and Imperva's to identify vulnerabilities in customer systems, and I have used Microsoft's Vulnerability Assessment (VA) as well. While I have not produced a master list of the vulnerability checks done in each system for a direct comparison, my sense is that VA checks for fewer vulnerabilities. AppDetective Pro also adds features like a discovery tool, a penetration test, and a user rights review (more on that later), but here we will focus mostly on the vulnerability assessment tool.

If you have not taken any steps to secure your database, then using the SQL Server Vulnerability Assessment tool, and taking action based on its recommendations, will probably get you 90% of the way to a secure database. I am not suggesting that you should stop there. 90% is not good enough. But 90% is much better than 0%, which is where you might be if you haven’t run any vulnerability scan at all.

An Overview Of The SQL Server Vulnerability Assessment Tool

I will mention a few highlights here to give a sense of what kinds of things are covered and will provide a link below to a comprehensive guide provided by Microsoft.

The SQL Vulnerability Assessment tool compares the configuration of your database to Microsoft best-practice rules for managing a database from a security perspective. According to Microsoft's guide, 87 rules are currently checked, though some apply only to later versions of SQL Server. The rules are broken down into six categories.

Authentication and Authorization

These rules ensure that only the right people are able to connect to your database.  These address the confidentiality and integrity principles of the Information Security Triad. Authentication deals with ensuring that the users are who they represent themselves to be, and Authorization deals with what data assets they should have access to. Here are a few important rules that are checked in this category:

  • Password expiration check should be enabled for all SQL logins
  • Database principals should not be mapped to the sa account
  • Excessive permissions should not be granted to PUBLIC role on objects or columns
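
For illustration, checks along these lines can also be run by hand in T-SQL (these queries are sketches, not the tool's exact rule logic):

```sql
-- SQL logins without password expiration enforcement
SELECT name
FROM   sys.sql_logins
WHERE  is_expiration_checked = 0;

-- Objects where PUBLIC has been granted permissions directly
SELECT OBJECT_NAME(major_id) AS object_name,
       permission_name
FROM   sys.database_permissions
WHERE  grantee_principal_id = DATABASE_PRINCIPAL_ID('public')
  AND  class = 1;  -- class 1 = object or column permissions
```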

Auditing and Logging

These rules check to ensure that what gets done and seen in the database is traceable and provable. This addresses the non-repudiation principle of information security and enables forensic analysis in the event of a suspected security breach. A few sample rules checked in this category include:

  • Auditing of both successful and failed login attempts should be enabled
  • Auditing should be enabled at the server level
  • There should be at least one active audit in the system
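
A quick manual spot-check for the audit rules might look like this sketch (again, not the tool's own query):

```sql
-- Server-level audits that have been defined
SELECT name, is_state_enabled
FROM   sys.server_audits;

-- Whether those audits are actually running right now
SELECT name, status_desc
FROM   sys.dm_server_audit_status;
```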

Data Protection

Data protection rules are primarily related to encryption. Addressing the confidentiality principle, these rules ensure that data is protected at rest and in transit. Rules such as these are checked:

  • Transparent data encryption should be enabled
  • Database communication using TDS should be protected through TLS
  • Database Encryption Symmetric Keys should use AES algorithm
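
For example, TDE status and key algorithms can be spot-checked manually with a query like this sketch:

```sql
-- Databases with TDE keys, their encryption state and algorithm
SELECT d.name,
       k.encryption_state,   -- 3 = encrypted
       k.key_algorithm,
       k.key_length
FROM   sys.dm_database_encryption_keys AS k
JOIN   sys.databases AS d
  ON   d.database_id = k.database_id;
```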

Installation Updates and Patches

This category would be very helpful, but I am not sure the results can be trusted. When I ran the tool against a SQL Server 2012 database, the check for patches did not appear to execute; it showed up in the result set as neither passed nor failed. Until this is resolved, I do not recommend relying on this tool to determine whether you are up to date on patches.

Surface Area Reduction

Rules in this category address all three principles in the information security triad. They focus on protecting the database environment by reducing the threat vectors posed by external interfaces and integrations. Some interesting rules in this category include:

  • CLR should be disabled
  • Unused service broker endpoints should be removed
  • SQL Server instance shouldn’t be advertised by the SQL Server Browser service

Comprehensive List Of Vulnerabilities

Microsoft provides this reference guide describing all of the vulnerabilities that VA checks, including which versions of SQL Server each rule applies to. The guide provides a lot of good information about the tool and about securing your SQL Server database in general. It is not perfect, of course; for example, the description of the patch-related check appears to contain some cut-and-paste remnants. But there is good information there.

How Buda Employs A Vulnerability Scanner To Protect Our Customers’ Data Assets

When we perform a database security assessment for a customer using this tool or one of the other vulnerability scanners, we start, of course, by running the tool. We then examine the result set and determine the actual risk posed by each reported vulnerability in the context of the specific database, application, and customer. Often, some of the reported vulnerabilities are mitigated by processes the organization has in place, or by the nature of the application or the data. After filtering out those that do not represent a real threat, we create a report for management showing the action items to be taken, which may include further analysis.

For example, some of the rules may fail because no baseline has been created for which users should have access to a given role. Addressing this will involve a study of what roles should be active in the system and who should be granted access to them. This can result in creation of baselines for use in future scans. 

Trustwave’s App Detective Pro, which I mentioned earlier, provides a user rights review report that may be useful for creating those baselines.

Application Authorization Schemes

The authorization- and logging-related checks that these scanners perform (and the Trustwave user rights review) apply to actual database users. Many applications, however, use application-based authorization. In those cases, these vulnerability scanners cannot provide insight into user authorization or logging.

In these cases, we create a user rights review report that identifies what data assets a given application user can access, and we ensure that application logging is robust enough to provide the necessary level of granularity to support the security objectives. 
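What such an application-level review queries depends entirely on the application's own schema. The table and column names below are purely illustrative assumptions, not part of any real product, but they show the shape of the question being asked:

```sql
-- Hypothetical application-managed authorization schema:
-- app_users, app_user_roles, app_roles, app_role_perms, and
-- app_permissions are illustrative names only
SELECT u.username,
       r.role_name,
       p.resource_name,
       p.access_level
FROM app_users AS u
JOIN app_user_roles AS ur ON ur.user_id = u.user_id
JOIN app_roles      AS r  ON r.role_id  = ur.role_id
JOIN app_role_perms AS rp ON rp.role_id = r.role_id
JOIN app_permissions AS p ON p.perm_id  = rp.perm_id
ORDER BY u.username, p.resource_name;
```

The result set answers, for each application user, which data assets they can reach and at what access level, which is exactly what the database-level scanners cannot see.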

Where to find it

The Microsoft Vulnerability Assessment tool is available in SSMS version 17.x and above, so you may need to upgrade SSMS (a free download) to get it. The good news is that it works with all currently supported SQL Server versions.

In Summary

Running the Microsoft Vulnerability Scanner can be an important part of a robust security plan for your SQL Server Databases. Running this scanner is an excellent first step to identify many vulnerabilities, some of which can be easily remediated. 

It is important that an experienced SQL Server database expert implement the recommendations and perform additional analysis beyond the results produced by the tool. Additionally, for applications that use application-level authorization, deeper study is needed to ensure the security of the data in those applications.

If you like this article, please share it with your colleagues and subscribe to our blog to get the latest updates. If you have any questions, don’t hesitate to contact us as well! 

Database Patch News — March 2021 (Issue 7)


Welcome to Database Patch News, Buda Consulting’s newsletter of current patch information for Oracle and Microsoft SQL Server. Here you’ll find information recently made available on patches—including security patches—and desupported versions.

Why should you care about patching vulnerabilities and bugs? Two big reasons:

  1. Unpatched systems are a top cyber attack target. Patch releases effectively advertise vulnerabilities to the hacker community, so the longer you wait to patch, the greater your security risk. 
  2. Along with running a supported database version, applying the latest patches ensures that you can get vendor support if an issue arises. Patching also helps avoid the downtime and lost productivity associated with bugs. 

Here are the latest patch updates for Oracle and SQL Server:

Oracle Patches:

January 19, 2021 Quarterly Patch Updates:

21c – Released January 13, 2021, Version 21.1; no Quarterly patch yet

19c – Release Update 19.10 is available (32218494 and 32126828)

18c – Release Update 18.13 is available (32204699 and 32126855)

12cR2 – Release Update 210119 is available (32228578 and 32126871)

Regular support ends in Mar 2023 and extended support ends in Mar 2026.

12cR1 – Release Update 210119 is available (32132231 and 32126908)

Regular support ended in July 2019 and extended support ends in July 2021.

11gR2 – Patch Set Update 201020 is available (31720776)

Regular support ended in October 2018 and extended support ended December 31, 2020.

 

SQL Server Patches:

SQL Server 2019

Cumulative update 9 (Latest build) Released Feb 2, 2021
Mainstream support ends Jan 7, 2025
Extended support ends Jan 8, 2030


SQL Server 2017

Cumulative update 23 (Latest build) Released Feb 24, 2021
Mainstream support ends Oct 11, 2022
Extended support ends Oct 12, 2027


SQL Server 2016 Service Pack 2

Cumulative update 16 Release date: Feb 11, 2021
Mainstream support ends Jul 13, 2021
Extended support ends Jul 14, 2026


SQL Server 2014 Service Pack 3

Cumulative update 4 Release date: Jan 12, 2021
Mainstream support ended Jul 9, 2019
Extended support ends Jul 9, 2024


SQL Server 2012 Service Pack 4

Release date: Oct 5, 2017
Mainstream support ended Jul 11, 2017
Extended support ends Jul 12, 2022

Note: All other SQL Server versions not mentioned are no longer supported.