The Ultimate Oracle Database Security Assessment Checklist for 2021

They are two simple words, but they are two of the most feared words in business: Data Breach! Your data is your business. When companies lose their data, they lose trust and the ability to conduct much of their business.

The problem is that many companies spend their time focused on network security instead of database security. Your network is important and should be secured, but your data is the lifeblood of your business. To help you focus on the safety of your database, here is an Oracle database security assessment checklist for 2021. It covers some of the best practices and steps you can take to secure and protect your data.

Ask Key Database Questions

When many people think about security, it is usually in a general way. They want security, but don’t really define what security looks like. Here are some questions to help you focus your thoughts on your database security.

Are You Using Built-in Security Features?

Your Oracle database has many security features built in. These can be the first line of defense for your entire database. Many of these features require no extra subscription; they come as part of your database package.

Do You Have a Current User List?

You should maintain a current list of privileged users, including any over-privileged ones. This list should show who can do what with the database, and it must stay current as a level of protection and accountability for your company.
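
One hedged starting point on Oracle is to build that inventory from the data dictionary. The role and privilege names below are examples, not a complete list:

  -- Who holds the DBA role?
  SELECT grantee, granted_role
  FROM   dba_role_privs
  WHERE  granted_role = 'DBA';

  -- Who holds especially powerful system privileges?
  SELECT grantee, privilege
  FROM   dba_sys_privs
  WHERE  privilege IN ('GRANT ANY PRIVILEGE', 'ALTER USER', 'SELECT ANY TABLE');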

Who Is Overseeing Security Fixes?

Oracle regularly releases security patches and fixes to protect your data. With the speed of business today, these can be overlooked. You should have someone who makes sure these fixes are implemented promptly.
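
On Oracle 12c and later, one quick way to verify what has actually been applied is the DBA_REGISTRY_SQLPATCH view; a minimal sketch:

  -- Most recently applied patches first
  SELECT patch_id, description, status, action_time
  FROM   dba_registry_sqlpatch
  ORDER  BY action_time DESC;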

Are You Having Regular Database Audits?

Database auditing is how administrators review the actions of their users. They do this to see who is accessing the database and to make sure that only the people who are supposed to access it are doing so.
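
With unified auditing (Oracle 12c and later), a simple policy might look like the sketch below; the schema, table, and policy names are illustrative:

  -- Audit reads and changes to one sensitive table
  CREATE AUDIT POLICY hr_emp_policy
    ACTIONS SELECT ON hr.employees,
            UPDATE ON hr.employees,
            DELETE ON hr.employees;

  AUDIT POLICY hr_emp_policy;

  -- Review who did what, and when
  SELECT dbusername, action_name, event_timestamp
  FROM   unified_audit_trail
  ORDER  BY event_timestamp DESC;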

What Is Your Password Policy?

Passwords have to be actively maintained, or they can become an easy entryway into databases. You must make sure that there aren’t any default or non-expiring passwords with access to the system.
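
On Oracle, password rules are enforced through profiles. A hedged sketch, where the limits shown are examples rather than recommendations:

  CREATE PROFILE secure_users LIMIT
    PASSWORD_LIFE_TIME    90
    FAILED_LOGIN_ATTEMPTS 5
    PASSWORD_REUSE_MAX    10
    PASSWORD_LOCK_TIME    1;

  ALTER USER app_user PROFILE secure_users;

  -- Find accounts whose passwords never expire
  SELECT username, profile, expiry_date
  FROM   dba_users
  WHERE  expiry_date IS NULL;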

Download the Oracle Database Security Assessment Tool

To help their users have safer databases, Oracle developed the Database Security Assessment Tool (DBSAT). The DBSAT is a free tool that Oracle users can implement, and it acts as a database security guide.

DBSAT will scan the database and produce a profile, in HTML, XLS, TEXT, or JSON format, that helps you see the state of your security. This makes the information quick to review and easy to act on.

The tool will show you some of the security risks that you currently have in the system. It will then recommend relevant products and features of the system you can use to help stop the risks.

The DBSAT focuses on three specific core areas with its security assessment.

1. The General Security Configuration of Your Database 

The DBSAT will scan your configuration to make sure you are minimizing database risk. It will look for missing security patches that you can implement, and it will check whether you are using encryption and auditing within your system.

2. Users and Their Entitlements

One of the main features of the DBSAT is its focus on your users and how they access your system. It will identify your privileged users, show you what areas they can access, and flag any areas they are accessing but shouldn’t be.

3. Identifying Sensitive Data in Your Database

The DBSAT will help you stay in compliance with regulations by focusing on your sensitive data. It will help you identify your sensitive data and recognize how it should be treated. This also helps you develop healthy auditing systems.

How You Can Use the DBSAT 

DBSAT can help you with your security practices by giving you the information you need to implement and enforce strong security for your database. With the many reports it can generate, your security doesn’t have to be forgotten.

DBSAT helps you understand your user accounts, along with the roles and privileges of each user. This helps you find and fix short-term risks. Plus, it can give you enough information to have a long-term security strategy.

Get a Database Health Check

Just like a person should have a check-up every year, you want to make sure your database gets a health check. Have someone from the outside come in and review your database configuration and policies.

They can review your parameters, database maintenance procedures, alert logs, and trace files. They can also help with many other things, like checking your data blocks and identifying invalid objects.

Look for a health check that provides a focused report so you can take action on what is needed. The report should show you possible problem areas and contain recommendations to address the problems.

Your Oracle Database Partners

Reading through this Oracle database security assessment checklist shows that there is a lot to think about when it comes to database safety. Too often, IT staff are so focused on protecting the network that the database gets forgotten.

You want to find people who are database experts and make your database their own. Buda Consulting is a group of database experts who listen to your needs and deliver on their promises.

Our passion is protecting your database and helping it function smoothly. We handle all aspects of database creation and management. Plus, we can show you how to extract valuable insights from your database.

Contact us for a free 15-minute call and let us show you how we can be your database experts.

6 Types Of Cloud Migration Services

We are right in the middle of a huge focus in the business world on cloud migration. In fact, in 2020, cloud migration was the top modernization project, with more than half of business leaders working on or planning cloud migration projects!

Of course, it can feel daunting to learn all about the world of cloud migration. There are so many different kinds of cloud migration. So how can you know which kind is the best for your business?

Read on to learn all about the different types of cloud migration and what they might be able to do for you!

1. Rehosting

There are basically six options for cloud migration, often called the “6 R’s,” a framework popularized by AWS. The first type of cloud migration strategy is called rehosting. Rehosting is one of the most important types of cloud migration services, and it is the simplest approach to cloud migration.

The basic idea behind this strategy is that you take your applications and other systems from their current hosts and simply move them on to public cloud hosts. This does not require you to update the systems in any way.

Of course, just because updates and changes aren’t needed does not mean that they aren’t helpful. If you simply move your current applications and other systems to the cloud, they may not be able to make the most of everything that the cloud has to offer.

For example, you might not be able to use cloud monitoring systems or automated data recovery. There are even self-healing environments that the cloud can provide. However, these benefits are only available if you update your applications somewhat while putting them on the cloud.

On the other hand, if you just want to be on the cloud, you can get there very quickly. This strategy is a great way to make your initial migration. Once you are already on the cloud, maybe then you can focus on updating things to make the most of the cloud. This is also a great strategy if you are making an emergency migration.

2. Replatforming

The other types of cloud migration generally involve some kind of adjustment. Whatever you move onto the cloud, you will also change to take better advantage of the cloud.

Replatforming is a strategy that involves optimizing your systems for cloud use. This will allow you to use the various tools that the cloud provides.

Sometimes, making your changes in advance is more efficient. Doing your cloud migration today and then hoping to change things once they are on the cloud can take a lot of extra work.

Of course, updating your applications and systems can be tricky business. It is important that you make the right changes that let you use cloud services without messing up your functionality. This is a middle-of-the-road strategy. It is more complicated than simple rehosting, but not as transformative as other approaches.

3. Repurchasing

In some ways, repurchasing might not seem like a proper cloud migration strategy at all, because you do not actually move your existing system onto the cloud.

Instead, repurchasing is a strategy where you simply abandon what you have now. Then you buy a version of it that is already built to fit with the cloud. Of course, you may need to transfer some of the data from your system onto the new cloud-based system.

This approach can be fast and effective. Of course, it may not be as cost-effective as other strategies. At the same time, there are some amazing new applications built to work with the cloud.

4. Refactoring

Refactoring is sometimes also called re-architecting. As the name suggests, this strategy requires rebuilding your whole system.

Of course, this can take a lot of work. In fact, even once the transition time is done, your new system might take more time to maintain properly. On the other hand, once you are done, you will have something built specifically for the cloud.

5. Retaining

The retaining strategy leads to a hybrid model. Part of your system will be on the cloud and part of it will not. Depending on your system, this might be the ideal choice.

If certain parts of your system have to be dramatically remade before they can go on the cloud, maybe you can just leave them off the cloud. The parts of your system that more easily fit on the cloud can be transferred without extensive rebuilding.

This can also be a good choice if you are trying to balance the speed of implementation and integration with the cloud.

6. Retiring

On the other hand, if something doesn’t fit on the cloud, you could always simply get rid of it. This is the idea behind the retiring strategy.

Many parts of your software architecture might be unnecessary. You can simply cut them out. Then you can move what is left to the cloud. Once it is there, you will be able to enjoy all the added functionality that the cloud provides.

Know All About the Different Types of Cloud Migration

We hope that you were able to take away something helpful from this brief article on a few of the most important different types of cloud migration. The more you know about this industry, the better prepared you will be to make choices that will enhance the functionality of your systems.

To learn more about the benefits of cloud migration and where you can find an excellent provider, feel free to reach out and get in touch with us here at any time!

Seven Reasons To Employ Database Tuning Services

If your business has invested in a database, chances are that it’s a valuable asset that’s integral to the smooth and effective performance of your operation. Unfortunately, without regular maintenance and attention, a database can begin to deliver sub-optimal performance, failing to provide the ROI you deserve.

Some database maintenance can usually be done in-house. Most databases come with a list of checks that need to be carried out regularly to ensure the database keeps working well. Beyond these day-to-day measures, there is also a need for a periodic overhaul of the database. This overhaul, also known as database tuning, is a professional service that requires a high degree of skill and experience.

Using a number of different tools, a database tuning expert can perform the adjustments needed to significantly improve the speed, efficiency, and performance of your database.

Read on to discover exactly what database tuning services consist of, and seven good reasons why your business should employ database tuning services now!

What are Database Tuning Services?

Database tuning involves completing a series of activities that are designed to improve performance and enable the database to operate more efficiently. In contrast to database management, tuning is performed less frequently. It is often commissioned in response to a change in use or awareness amongst users and/or administrators that the database isn’t meeting its intended objectives.

Seven Reasons Why You Should Employ Database Tuning Services

Database tuning services don’t just improve the efficiency of your database, they also have a number of advantages for your workforce, your customers, and, ultimately, your bottom line. Take a look at seven reasons why database tuning services could transform your operation.

Improve Retrieval Times

From loading a web page through to producing information your team needs to process orders, get the answers to inventory queries, or input data to ensure the system remains up-to-date, a fast retrieval time can really make a difference.

Recent research shows that over half of mobile phone users will desert a site if it doesn’t load in three seconds. Particularly if you have a well-populated database, all it takes is a slight dip in performance, and wait times for data retrieval will noticeably increase.

Database tuning can make your system run more rapidly, ensuring almost instant response to queries and commands.
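
For example, a common tuning step is to find an expensive query and support it with an index. A brief sketch in Oracle syntax, with illustrative table and column names:

  EXPLAIN PLAN FOR
    SELECT * FROM orders WHERE customer_id = :cust_id;

  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

  -- If the plan shows a full scan of a large table, an index may help:
  CREATE INDEX idx_orders_customer ON orders (customer_id);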

Help Your Database Cope With Increasing Volumes of Data

As your company grows, it’s likely that the amount of data in your database will also expand. The greater the volume of data, the more strain the database is under. Unless it’s appropriately organized, it’s all too easy for glitches to develop.

Database tuning includes the use of tools that improve the structure and organization of the database. This helps to effectively manage increasing volumes of data. To an extent, database tuning helps to future-proof your database, enabling it to keep pace with your business needs as they evolve.

Helping Your Organization Cope With Change

An agile database will have the capacity to deliver what you need, no matter which direction your company elects to follow. Businesses don’t just increase the volume of information they store in a database; a fresh project may also require data to be stored in a different way, or different data to be recorded.

In these circumstances, database tuning can reconfigure the database commands and parameters to ensure it’s capable of completing the fresh tasks that are required. Splitting large tasks into smaller ones, for example, or altering the way in which data is recorded, could help to ensure your database can respond flexibly no matter what your objectives may be.

Increased Efficiency

If you want to increase efficiency (which should ultimately result in greater returns for less input), database tuning can maximize system resources. By freeing up capacity, your database can give you the performance you need, without the need for additional investment.

For any company that’s eager to keep overheads as low as possible, database tuning can boost database performance to the extent that a costly replacement or upgrade isn’t required.

Reduce Maintenance and Labor Input

A database that’s working sub-optimally will usually require a considerable amount of attention. Issues such as lagging, inaccuracies, difficulty in accessing relevant data, or problems with the caliber and clarity of the information in the database are all issues that need to be resolved. If your team is busy trying to sort out your database, that’s time they won’t be spending on other aspects of your business.

One of the major benefits of database tuning is that it enables the database to operate more effectively and reliably. This, in turn, reduces the need for time-consuming troubleshooting, freeing up your workforce for other tasks.

Adjust to Fluctuating Demand

Whether you’re a retailer who’s looking forward to the holiday season trading or a gym that’s hoping for an uptick in business when people act on their New Year’s resolutions, it’s likely your database will see a sudden peak in activity, followed by a trough. In some industries, this pattern has a daily, monthly, or annual cycle. In others, fluctuations can be irregular.

To ensure your database can respond effectively to changing demand, database tuning is essential.

Prolong the Useful Life of Your Database

With skilled database tuning services, you can expect to get more from your existing database. For many companies, a professional tune will enable them to keep their existing database facility for years into the future, without needing to expand capacity, upgrade or invest more cash into finding an appropriate database solution.

If you’re committed to obtaining the best value from your database investment, suitable tuning is one of the most effective ways to ensure you enjoy optimal performance and results.

To find out more about databases and the need for database tuning, get in touch with the team at Buda Consulting. Experts in database management and performance, we look forward to making your database work for you, no matter what your requirements may be.

MySQL and MariaDB Encryption Choices for Today’s Use Cases

Long a cornerstone of data security, encryption is becoming more important than ever as organizations come to grips with major trends like teleworking, privacy mandates and Zero Trust architectures. To comprehensively protect data from the widest possible range of threats and meet the demands of these new use cases, you need two fundamental encryption capabilities:

  1. The ability to encrypt sensitive data “at rest”—that is, where it resides on disk. This is a critical security capability for many organizations and applications, as well as a de facto requirement for compliance with privacy regulations like HIPAA, GDPR and CCPA. PCI DSS also requires that stored card data be encrypted.
  2. Encrypting data “in transit” across private and public networks. Common examples include using the HTTPS protocol for secure online payment transactions, as well as encrypting messages within VPN tunnels. Zero Trust further advocates encrypting data transmitted over your internal networks, since your “perimeter” is presumed to be compromised.

MySQL and MariaDB each support “at rest” and “in transit” encryption modalities. They both give you the ability to encrypt data at rest at the database level, as well as to encrypt connections between the MySQL or MariaDB client and the server.

MySQL database-level encryption

MySQL has offered strong encryption for data at rest at the database level since MySQL 5.7. This feature requires no application code, schema or data type changes. It is also straightforward for DBAs, as it does not require them to manage associated keys. Keys can be securely stored separate from the data and key rotation is easy.

MySQL currently supports database-level encryption for general tablespaces, file-per-table tablespaces and the mysql system tablespace. While earlier MySQL versions encrypted only InnoDB tables, newer versions can also encrypt various log files (e.g., undo logs and redo logs). Also, beginning with MySQL 8.0.16, you can set an encryption default for schemas and general tablespaces, enabling DBAs to control whether tables are encrypted automatically.
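
A minimal sketch of what this looks like in practice, assuming a keyring component or plugin is already configured; the table names are illustrative:

  -- Encrypt a new table (file-per-table tablespace)
  CREATE TABLE customers (
    id  INT PRIMARY KEY,
    ssn VARCHAR(11)
  ) ENCRYPTION='Y';

  -- Encrypt an existing table in place
  ALTER TABLE orders ENCRYPTION='Y';

  -- MySQL 8.0.16+: encrypt new tablespaces by default
  SET GLOBAL default_table_encryption = ON;

  -- Rotate the master key without changing application code
  ALTER INSTANCE ROTATE INNODB MASTER KEY;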

MySQL database-level encryption is overall secure, easy to implement and adds little overhead. Among its limitations, it does not offer per-user granularity, and it cannot protect against a malicious root user (who can read the keyring file). Also, database-level encryption cannot protect data in RAM.

MySQL Enterprise Transparent Data Encryption

In addition to the generic database-level encryption just discussed, users of “select Commercial Editions” of MySQL Enterprise can also leverage Transparent Data Encryption (TDE). This feature encrypts data automatically, in real-time, before writing it to disk; and decrypts it automatically when reading it from disk.

TDE is “transparent” to users and applications in that it doesn’t require code, schema or data type changes. Developers and DBAs can encrypt/decrypt previously unencrypted MySQL tables with this approach. It uses database caching to improve performance and can be implemented without taking databases offline.

Other MySQL Enterprise Encryption Features

Besides TDE, MySQL Enterprise Edition 5.6 and newer offers encryption functions based on the OpenSSL library, which expose OpenSSL capabilities at the SQL level. By calling these functions, MySQL Enterprise applications can perform the following operations (a short sketch follows the list):

  • Improve data protection with public-key asymmetric cryptography, which is increasingly advocated as hackers’ ability to crack hashed passwords increases 
  • Create public and private keys and digital signatures
  • Perform asymmetric encryption and decryption
  • Use cryptographic hashes for digital signing and data verification/validation
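
A hedged sketch of these functions in use, assuming a Commercial Edition with the OpenSSL-based functions installed:

  -- Generate a 2048-bit RSA key pair
  SET @priv = CREATE_ASYMMETRIC_PRIV_KEY('RSA', 2048);
  SET @pub  = CREATE_ASYMMETRIC_PUB_KEY('RSA', @priv);

  -- Encrypt with the public key, decrypt with the private key
  SET @ct = ASYMMETRIC_ENCRYPT('RSA', 'sensitive value', @pub);
  SELECT ASYMMETRIC_DECRYPT('RSA', @ct, @priv) AS plaintext;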

MariaDB database-level encryption

MariaDB has supported encryption of tables and tablespaces since version 10.1.3. Once data-at-rest encryption is enabled in MariaDB, tables that are defined with ENCRYPTED=YES or with innodb_encrypt_tables=ON will be encrypted. Encryption is supported for the InnoDB and XtraDB storage engines, as well as for tables created with ROW_FORMAT=PAGE (the default) for the Aria storage engine.

One advantage of MariaDB’s database-level encryption is its flexibility. When using InnoDB or XtraDB you can encrypt all tablespaces/tables, individual tables, or everything but individual tables. You can also encrypt the log files, which is a good practice.
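
A brief sketch, assuming a key management plugin (e.g., file_key_management) is already configured in my.cnf; the table names are illustrative:

  -- Encrypt new InnoDB tables by default
  SET GLOBAL innodb_encrypt_tables = ON;

  -- Explicitly encrypt one table
  CREATE TABLE customers (
    id  INT PRIMARY KEY,
    ssn VARCHAR(11)
  ) ENGINE=InnoDB ENCRYPTED=YES;

  -- Opt a low-sensitivity table out of encrypt-by-default
  CREATE TABLE country_codes (
    code CHAR(2) PRIMARY KEY
  ) ENGINE=InnoDB ENCRYPTED=NO;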

Encrypted MariaDB data is decrypted only when accessed via the MariaDB database, which makes it highly secure. A potential downside is that MariaDB’s encryption adds about 3-5% data size overhead.

This post explains how to set up, configure, and test database-level encryption in MariaDB. For an overview of MariaDB’s database-level encryption, see this page in the MariaDB knowledge base.

Encrypting data “in transit” with MySQL

To avoid exposing sensitive data to potential inspection and exfiltration if your internal network is compromised, or if the data is transiting public networks, you can encrypt the data when it passes between the MySQL client and the server.

MySQL supports encrypted connections between the server and clients via the Transport Layer Security (TLS) protocol, using OpenSSL.

By default, MySQL programs try to connect using encryption if it is supported on the server; unencrypted connections are the fallback. If your risk profile or regulatory obligations require it, MySQL lets you make encrypted connections mandatory.
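
A short sketch of tightening this down; the account name is illustrative:

  -- Require TLS for a specific account
  ALTER USER 'app'@'%' REQUIRE SSL;

  -- Or refuse all unencrypted connections server-wide (MySQL 5.7.8+)
  SET GLOBAL require_secure_transport = ON;

  -- Verify that the current session really is encrypted
  SHOW SESSION STATUS LIKE 'Ssl_cipher';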

Encrypting data in transit with MariaDB

By default, MariaDB does not encrypt data during transmission over the network between clients and the server. To block “man-in-the-middle” attacks, side channel attacks and other threats to data in transit, you can encrypt data in transit using the Transport Layer Security (TLS) protocol—provided your MariaDB server was compiled with TLS support. Note that MariaDB does not support older SSL versions.

As you might expect, there are multiple steps involved in setting up data-in-transit encryption, such as creating certificates and enabling encryption on the client side; a couple of useful first checks are sketched below. See this page in the MariaDB knowledge base for details.
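
A quick sketch, assuming the server-side certificates (ssl_ca, ssl_cert, and ssl_key) are already configured in my.cnf; the account name is illustrative:

  -- Is this server built and configured with TLS support?
  SHOW GLOBAL VARIABLES LIKE 'have_ssl';

  -- Require TLS for a specific account (MariaDB 10.2+)
  ALTER USER 'app'@'%' REQUIRE SSL;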

Conclusion

With data security being an increasing business and regulatory concern, and new use cases like teleworking and privacy compliance becoming the norm, encryption will certainly be used to secure more and more MySQL and MariaDB environments. 

If you’d like a “second opinion” on where and how to implement encryption to address your business needs, contact Buda Consulting for a free consultation on our database security assessment process.

If you like this article, please share it with your colleagues and subscribe to our blog to get the latest updates.

Thoughts On Deleting Data – Considerations & Best Practices

In a recent blog post on database maintenance tips, I mentioned that one important facet of cleaning up the database is to remove records that we no longer need — those that don’t contribute value to the applications and users who use the database. This is an important maintenance process, but there are some equally important considerations when thinking about deleting any data from the database. 

These considerations are driven by a few key questions that we need to ask:

Why are we deleting the data? 

Performance? Disk space cost? Security? Organization? Simple lack of value? Let’s look at each of these reasons and consider alternatives to deleting the data, in order to avoid losing any untapped future value stored in that data.

Performance

If we are thinking of removing data as a way to improve performance, can we instead use partitioning, archiving or indexing to achieve adequate performance while preserving the data? Can we tune the most expensive queries to reduce load on the system?  Can we increase resources on the server or move the database to a more powerful server?
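
For instance, range partitioning lets older rows sit in their own partitions, where they can be compressed or moved to cheaper storage rather than deleted. A hedged, Oracle-flavored sketch with illustrative names:

  CREATE TABLE order_history (
    order_id   NUMBER,
    order_date DATE,
    amount     NUMBER
  )
  PARTITION BY RANGE (order_date) (
    PARTITION p2019 VALUES LESS THAN (DATE '2020-01-01'),
    PARTITION p2020 VALUES LESS THAN (DATE '2021-01-01'),
    PARTITION pmax  VALUES LESS THAN (MAXVALUE)
  );

  -- Old partitions can be compressed (or moved to cheaper storage) in place
  ALTER TABLE order_history MOVE PARTITION p2019 COMPRESS;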

Disk space cost

If our purpose is to reduce the cost of disk space to store the data, can we partition the data and archive the older or less-used data to lower-cost storage? Is compression an option on our hardware platform (e.g., Oracle’s Exadata platform)? Can we remove some indexes that are taking up space but not adding a performance boost?

Security

If we are seeking to remove the data to improve security by reducing the data footprint, can we instead leave the data in place and achieve the level of security we need by using an encryption scheme, a virtual private database (Oracle), or another tighter access control scheme like label security?
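
For example, Oracle’s Transparent Data Encryption can protect a single sensitive column in place. A hedged sketch, assuming the Advanced Security option and a configured wallet; the table and column names are illustrative:

  ALTER TABLE customers MODIFY (ssn ENCRYPT USING 'AES256');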

Organization (reducing clutter) 

If we are removing the data because we don’t want to see it, because the reports, queries, and dropdowns in the application screens have become unwieldy, can we tag records as deleted instead of actually removing them and filter queries based on those tags? Can we create views that filter out these records and use synonyms to redirect applications to those views, minimizing application changes?
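
A soft-delete sketch in Oracle syntax; the names, cutoff date, and the idea of redirecting the application schema through a synonym are illustrative:

  ALTER TABLE invoices ADD (deleted_flag CHAR(1) DEFAULT 'N' NOT NULL);

  -- Tag old rows instead of removing them
  UPDATE invoices
  SET    deleted_flag = 'Y'
  WHERE  invoice_date < DATE '2014-01-01';

  -- Hide tagged rows behind a view
  CREATE OR REPLACE VIEW active_invoices AS
    SELECT * FROM invoices WHERE deleted_flag = 'N';

  -- In the application schema, a synonym redirects existing queries to the
  -- view so application code does not need to change
  CREATE OR REPLACE SYNONYM app_schema.invoices FOR active_invoices;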

Lack of value

If we are certain that the data really has no value, let’s get rid of it. This is the data that just sucks energy from the system and gets in the way. But when doing so, let’s be sure to do so subject to the considerations below. Even deleting valueless data can cause problems if we are not careful.

Considerations before deleting anything

Once we are convinced that the data has to go, we have to ask all of the following questions in order to create the proper process for deleting it. 

Interfaces with external systems

Downstream and Upstream systems may break as a result of deleting data. 

Downstream systems may contain supplemental data that will be left dangling if you delete a record from your system and do not also delete it (and the associated supplemental data) from the downstream system. This can cause applications to fail or, worse, cause invalid results to appear on reports.

Upstream systems may be subject to numerous problems as well. They may re-introduce the same records that you delete, or they may send child records that are associated with records that you deleted, causing interfaces to fail. Worse, without proper logging in your interfaces, errors like this can go undetected.

Of course, this problem can be recursive. Each of the upstream and downstream systems may have upstream and downstream systems of their own having the same potential risks and complications.

Constraints and Dependencies

Are there database constraints (such as ON DELETE CASCADE) or triggers that would result in child data being deleted, and do we want this? As we think about whether to delete older customer invoices, for example, do we also want to delete the order history at the individual item level? We may not want to keep order history for a customer after 7 years, but do we want to lose the information about the quantity of each item that was ordered over time? If we want to keep the item order counts but not the invoices, then we may need to store that data differently in order to be able to delete the invoices.

Or conversely, is there any data that would be left dangling because there are no integrity constraints defined to delete child data?  If we delete old customer records for example, will there be customer address data, demographic data, or customer preference data left behind resulting in data inconsistencies and invalid reporting? This is just one reason why database integrity constraints are so important in a database design, but that is a topic for another post.

All these questions must be applied to the child data as well. If we delete this data, is there dangling parent data that would be left useless or meaningless without its detail and therefore should be deleted as well?
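
Two quick sketches of checking for this, with illustrative names: a query for child rows that would be orphaned, and a cascading constraint that deletes children automatically, if that is truly what you want:

  -- Child rows whose parent invoice no longer exists
  SELECT oi.invoice_id, COUNT(*) AS orphaned_items
  FROM   order_items oi
  WHERE  NOT EXISTS (SELECT 1 FROM invoices i
                     WHERE  i.invoice_id = oi.invoice_id)
  GROUP  BY oi.invoice_id;

  -- Cascade deletes from parent to child
  ALTER TABLE order_items
    ADD CONSTRAINT fk_items_invoice
    FOREIGN KEY (invoice_id) REFERENCES invoices (invoice_id)
    ON DELETE CASCADE;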

Retention Requirements

Does your organization have any retention or destruction requirements for this data? Better check with legal! Aside from your organization’s own data management policies, numerous regulations specify how long data of different classifications must be retained. For example, as mentioned in this article by USSignal, the Fair Labor Standards Act, the Bank Secrecy Act, the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), and the Federal Information Security Management Act (FISMA), among others, all specify data retention requirements. Be careful not to delete any data while you may be required to produce it in the event of a legal action.

The Deletion Process

So we have confirmed that we want to delete a set of data, we have confirmed what ancillary data needs to go with it and in what systems, and we have confirmed that we can legally delete the data.  Now we have to think about how to do it safely. Here are some guidelines that we follow before hitting that delete button.

Script It

All delete commands must be scripted! All delete commands must be scripted! It was worth saying that twice. Accidentally deleting records is an obviously bad thing, and accidental commands are much more likely when working at a command prompt in SSMS or SQL*Plus than when we have carefully crafted the commands and placed them in a script with comments and logging.
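
A minimal sketch of what such a script might contain; the table, criteria, and seven-year cutoff are illustrative:

  -- purge_old_invoices.sql: remove closed invoices older than 7 years
  DELETE FROM invoices
  WHERE  invoice_date < ADD_MONTHS(SYSDATE, -84)  -- 84 months = 7 years
  AND    status = 'CLOSED';

  COMMIT;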

Preview and Approval

A preview of any data to be deleted should be made available to the application/database owner with authority to approve the removal of the data. This can be done by issuing the same command (same criteria) that will be used to delete, but as a simple SELECT. It can be presented as a CSV file, a spreadsheet, or a live query. The preview should either detail all of the child and parent data that will be removed, or show just the parent records along with a written description of the child data that will be deleted with them. The approver must be made aware of whether or not the delete process is reversible, and should approve in writing before the data is actually deleted in the production environment.
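
For the sketch above, the preview is simply the same criteria issued as a SELECT:

  SELECT invoice_id, customer_id, invoice_date
  FROM   invoices
  WHERE  invoice_date < ADD_MONTHS(SYSDATE, -84)
  AND    status = 'CLOSED';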

Make it Repeatable

One reason to script the process is so that it is repeatable. The script should be based on criteria such that if additional data that matches that criteria is introduced after the initial delete, we can rerun the script and catch the additional data. This is also very useful if we have the ability to issue the commands in staging or test environments before doing so in production.

Do it on Staging/Test first

It is very possible that after deleting data, the user will realize that they did not account for all of the implications of doing so. Whenever possible, the exact delete process should be done in a staging or test environment before doing so in the production environment, with end user testing before the actual production delete. 

Log it

Finally, we must log the delete. This means actually keeping a record of the records that were deleted. It does not mean keeping all of the data, just a few key fields so the removal of the records can be traced should there be questions later. For example, if old invoices are removed, keep a record of the customer number, the invoice number, and the date of each record removed. This can be done with simple SELECT statements executed prior to the delete command, using the same exact criteria.
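
Continuing the earlier sketch, the log can be captured just before the delete, using the identical criteria; the log table is illustrative:

  INSERT INTO invoice_delete_log (customer_id, invoice_id, invoice_date, deleted_on)
  SELECT customer_id, invoice_id, invoice_date, SYSDATE
  FROM   invoices
  WHERE  invoice_date < ADD_MONTHS(SYSDATE, -84)
  AND    status = 'CLOSED';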

Is it worth it?

That all sounds like a lot of work, and it is. But the implications of deleting data are significant, and restoring or reproducing deleted data can be extremely difficult and time-consuming, or even impossible. A thoughtful, diligent process is required and is worth all of that work.