Advantages and Disadvantages of Thin Provisioning

Thin provisioning uses virtualization technology to allocate disk storage capacity on demand as your needs increase. Thick provisioning is the counterpart strategy of pre-allocating storage capacity upfront when you create a virtual disk drive.

Thin provisioning creates the illusion of more physical resources than are available in reality. For example, you can assign 1TB of virtual disk space to each of the 2 development teams, while actually allocating only 500GB of physical storage. With thick provisioning, you would need to start with 2TB of physical storage if you wanted to assign 1TB to each of those 2 teams.
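The overcommitment in this example reduces to a simple ratio. Here is a minimal Python sketch using the hypothetical figures from the example above (two 1TB virtual disks on 500GB of physical storage); the variable names are illustrative:

```python
# Hypothetical figures from the example above: two teams, each assigned a
# 1 TB virtual disk, backed by only 500 GB of physical storage.
virtual_allocations_gb = [1000, 1000]  # capacity each team sees
physical_capacity_gb = 500             # capacity actually installed

provisioned_gb = sum(virtual_allocations_gb)
overcommit_ratio = provisioned_gb / physical_capacity_gb

print(f"Provisioned {provisioned_gb} GB on {physical_capacity_gb} GB physical "
      f"storage (overcommit ratio {overcommit_ratio:.1f}x)")
```

An overcommit ratio above 1.0x is exactly what thick provisioning never allows, and it is the number you need to watch as usage grows.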

Before you make a decision, here are its top 2 advantages and its top 2 disadvantages:

Advantage #1: Optimizing your storage utilization

In environments where storage is shared, thin provisioning lets you optimize the usage of your available storage, so it’s not sitting idle. For example, say you assign a 2TB drive to an application that ends up using only 1TB of storage. In this configuration, another application can leverage unused storage. With thick provisioning, the unused capacity is never utilized.

Advantage #2: Scaling up cost-effectively

As long as you are monitoring and managing storage effectively and can confidently predict usage trends, thin provisioning lets you incrementally add more storage capacity as needed and not buy more than you need for the immediate future. 

Disadvantage #1: Increased downtime and data loss potential

Most approaches don’t automatically account for your growing storage needs—putting your environment at significant risk for storage shortages and associated downtime issues when the volume of virtual storage provisioned exceeds the physical disk space available. This includes crashes and/or data loss on your virtual drives that can hurt user productivity and customer experience while leaving your DBAs with a big mess to clean up.
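Because the risk materializes when actual usage approaches physical capacity, monitoring needs to track the shared pool, not the individual virtual disks. A minimal sketch of such a check (the function name and thresholds are illustrative, not from any particular monitoring product):

```python
def check_thin_pool(used_gb: float, physical_gb: float,
                    warn_pct: float = 80.0, crit_pct: float = 90.0) -> str:
    """Classify a thin pool's fill level against warning/critical thresholds."""
    pct = 100.0 * used_gb / physical_gb
    if pct >= crit_pct:
        return f"CRITICAL: pool at {pct:.0f}% of physical capacity"
    if pct >= warn_pct:
        return f"WARNING: pool at {pct:.0f}% of physical capacity"
    return f"OK: pool at {pct:.0f}% of physical capacity"

# Example: 450 GB actually written into a 500 GB physical pool.
print(check_thin_pool(450, 500))
```

A check like this, run on a schedule against the array's actual-usage counters, gives you time to add physical capacity before the pool fills and virtual drives start failing.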

Disadvantage #2: Lack of elasticity

As just noted, thin provisioning is great for helping you scale up your storage environment cost-effectively. But it doesn’t work in reverse. If your applications need fewer services, you may need to reduce the allocations manually as part of your storage monitoring and management program unless your array controller or other technology can handle that for you.

Choose thin provisioning based on the use case

As a general rule, whether to use thin or thick provisioning depends on the balance of resources used versus allocated for your specific use case. Thin provisioning is much safer and more efficient when the resources you actually need are significantly less than what you plan to allocate. Thick provisioning is a better choice when the resources you use are close to what you allocate.
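The rule of thumb above can be expressed as a simple ratio test. The sketch below is illustrative only; the 0.7 threshold is an assumption for demonstration, not a published guideline, and the right cutoff depends on how confidently you can forecast growth:

```python
def recommend_provisioning(expected_used_gb: float, allocated_gb: float,
                           threshold: float = 0.7) -> str:
    """Rule of thumb: thin when expected usage is well below the allocation,
    thick when usage and allocation are close. Threshold is illustrative."""
    ratio = expected_used_gb / allocated_gb
    return "thin" if ratio < threshold else "thick"

# Example: an application expected to use 300 GB of a 1 TB allocation.
print(recommend_provisioning(300, 1000))
```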

This is why Buda Consulting doesn’t recommend thin provisioning for production storage except for clients that can consistently manage and forecast data storage needs. Otherwise, this can lead to major problems and expenses that outweigh the cost savings associated with improved storage utilization. However, thin provisioning can be a good option for many businesses when used in development, testing, or other non-production scenarios.

Next steps

For expert advice on how you can best leverage thin or thick provisioning, or to explore a range of options for making the best use of physical and virtual storage in your unique environment, contact Buda Consulting.

 

Why We Don’t Always Do What the Customer Asks

When we ask our customers why they love working with Buda Consulting, the answer we typically get is that we listen to them and do what they ask (apparently this is rare in the database support field). And we almost always do. But not always!

The Request

When I was about fourteen, I delivered the morning paper before school in Staten Island. One of my customers, a very nice man named Mr. Olsen, asked me to make some noise so he would know when the paper arrived each morning. I would throw the paper onto the porch from my bike, and he asked me to throw it harder so it would hit the door and he would hear it from inside the house.

The Result

What he didn’t count on is that I would do exactly what he asked. The next day as I passed by his house on my bike, I threw that paper as hard as I could. It sailed over the walkway, past the steps, along the porch, and crashed right through the glass panel on the front door.  It sure made noise!

Mr. Olsen came out a bit shocked, and then said something like “a bit softer next time!” I offered to pay for the door (not knowing how I could ever afford to do that), but I was fortunate that my customer took responsibility for making a request without thinking it through, because he was a nice guy, and because I was only a kid. But I understand now that it was my responsibility to evaluate the request, make sure that it was in the customer’s best interest, and either mitigate any risks or suggest alternatives.

These Days

Now, all these years later, we sometimes have customers who make requests that could end up harming them. Rather than performing the action without question, we inform them of the risk, suggest ways to mitigate it, and at times respectfully decline to perform the action if we feel the risk is too great.

The Takeaway

I learned that my job was not just to deliver the newspaper. My job was to deliver the newspaper without causing damage to my customer’s home. These days, the same applies to our customers’ database systems.

9 Reasons Why a Remote SQL Server DBA Just Makes Sense

Any company needs its SQL Server databases running efficiently to meet business objectives. But SQL Server DBA skills are most likely not an organizational core competency, and the function can potentially be outsourced to a remote SQL Server DBA.

Is outsourcing SQL Server management a good idea strategically for your organization? Outsourcing is often viewed through the lens of cost. But there are many reasons (9 of which follow) to consider a remote SQL Server DBA:

One: Reduced SQL Server DBA costs

Remote SQL Server DBA costs generally run about 25% to 50% less than in-house DBA total costs, according to Forrester Consulting.

Two: Improved quality of database administration

Many SQL Server DBA outsourcers report that their remote SQL Server DBA arrangement has resulted in an improved ability to meet service level agreements (SLAs) plus improved service quality as reported by users. 

Three: Improved focus on the business 

Leveraging a remote SQL Server DBA enables outsourcers to focus more on their core business issues and strategy by freeing up technical and management resources that had been focused on database administration.

Four: Augments in-house staff

Outsourcing SQL Server database admin liberates in-house IT staff from routine DBA tasks so they can solve other problems, which improves overall efficiency and productivity (not to mention morale). Many SQL Server DBAs also provide support outside of normal business hours when in-house staff may not be available, thus improving overall DBA coverage.

Five: The increased pool of expertise

Most remote DBA vendors employ a full team of experienced DBAs whose collective knowledge and experience would typically surpass that of a given individual DBA. Remote team members frequently pool their resources to identify the best solutions for customers. They effectively constitute a team of on-call specialists to provide the expertise you need on demand. 

Six: Better database security

More automation combined with 24×7 monitoring inherently improves database security and uptime. So does the added level of security expertise that your remote SQL Server DBA team brings to the table. Finally, a remote DBA service will perform upgrades, patching, and maintenance tasks that help reduce vulnerabilities and keep data secure.

Seven: Improved business continuity

Your database administration function is key to ensuring that system issues don’t cascade into failures that impact users, customers, etc. Having extra eyes on your SQL Server environment helps proactively prevent database issues and limits their spread and impact.

Eight: Less loss of “institutional knowledge”

When a key IT employee like a DBA leaves your business, they may take with them a wealth of institutional knowledge that is now lost to you. A remote SQL Server DBA relationship helps ensure that your environment is consistently handled in a best-practice manner, without the glitches and delays that commonly occur when you need to replace a full-time employee.

Nine: Enhanced data integrity

A remote SQL Server DBA service will most likely give you the option to perform data cleansing; eliminate duplicate, incorrect, or missing data; and so on. This improves the quality and integrity of your data and enhances its value for decision-making.

Next steps

Buda Consulting has provided best-in-class remote and on-site DBA services for SQL Server and other database environments for over 25 years. Our certified professional staff can manage the most sophisticated database architectures. Based on your needs, we can operate as a “best-fit” extension of your team to deliver the perfect complement of database-managed services—from taking over routine tasks to addressing advanced issues.

Contact Buda Consulting to talk about your SQL Server DBA needs and explore how our remote DBA service can help.

 

 

Database Patch News — July 2021

Welcome to Database Patch News, Buda Consulting’s newsletter of current patch information for Oracle and Microsoft SQL Server. Here you’ll find information recently made available on patches—including security patches—and desupported versions.

Why should you care about patching vulnerabilities and bugs? Two big reasons:

  1. Unpatched systems are a top cyber attack target. Patch releases literally advertise vulnerabilities to the hacker community. The longer you wait to patch, the greater your security risk. 
  2. Along with running a supported database version, applying the latest patches ensures that you can get support from the vendor in case of an issue. Patching also helps eliminate downtime and lost productivity associated with bugs. 

Here are the latest patch updates for Oracle and SQL Server:

Oracle Patches:

July 23, 2021 Quarterly Patch Updates:

21c – Released January 13, 2021, Version 21.1; no Quarterly patch yet

19c – Release Update 19.12 is available (32895426 and 32876380)

18c – Release Update 18.14 is available (32524152 and 32552752)

12cR2 – Release Update 210720 is available (32916808 and 32876409)

Regular support ends in Mar 2023 and extended support ends in Mar 2026.

12cR1 – Release Update 210720 is available (32917447 and 32876425)

Regular support ended in July 2019 and extended support ends in July 2021.

11gR2 – Patch Set Update 201020 is available (31720776)

Regular support ended in October 2018 and extended support ended December 31, 2020.

 

SQL Server Patches:

SQL Server 2019

Cumulative update 11 (Latest build) Released June 10, 2021

Mainstream support ends Jan 7, 2025

Extended support ends Jan 8, 2030

 

SQL Server 2017

Cumulative update 24 (Latest build) Released May 10, 2021

Mainstream support ends Oct 11, 2022

Extended support ends Oct 12, 2027

 

SQL Server 2016 Service Pack 2

Cumulative update 17 Release date: March 29, 2021

Mainstream support ends Jul 13, 2021

Extended support ends Jul 14, 2026

 

SQL Server 2014 Service Pack 3

Cumulative update 4 Release date: Jan 12, 2021

Mainstream support ended Jul 9, 2019

Extended support ends Jul 9, 2024

 

Note: All other SQL Server versions not mentioned are no longer supported.
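An automated check against your own inventory is one practical use of the support dates above. Below is a minimal Python sketch built from the SQL Server extended-support dates listed in this newsletter; the function and dictionary names are illustrative:

```python
from datetime import date

# Extended-support end dates for the SQL Server versions listed above.
extended_support_end = {
    "SQL Server 2019": date(2030, 1, 8),
    "SQL Server 2017": date(2027, 10, 12),
    "SQL Server 2016 SP2": date(2026, 7, 14),
    "SQL Server 2014 SP3": date(2024, 7, 9),
}

def out_of_support(inventory, as_of):
    """Return the versions in `inventory` whose extended support has ended.

    Versions not in the table are treated as out of support, matching the
    note above that all unlisted versions are no longer supported.
    """
    return [v for v in inventory
            if extended_support_end.get(v, date.min) < as_of]

print(out_of_support(["SQL Server 2019", "SQL Server 2014 SP3"],
                     as_of=date(2025, 1, 1)))
```

Treating unknown versions as unsupported by default is a deliberately conservative choice: it flags forgotten legacy instances rather than silently passing them.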

How Poor Communication Brought an Oracle System Down

It was very cold and early on a Monday morning when I received a call from one of my fellow system administrators. He reported that one of our production databases would not come back online after the server hosting the database was restarted. 

Most DBAs would start investigating this issue by looking at database alert logs. But my experience led me to ask my fellow system admin the following question: “What changes did you make on the server prior to the reboot?”

It was his answer to that question that allowed me to quickly understand the issue and fix it in just a few minutes. 

Apparently the system admin (not the DBA) was conducting vulnerability testing and, as a result, made a change to the main listener.ora file that prevented all databases from dynamically registering with the Oracle database listeners.

By default, an Oracle database will try to dynamically register to an Oracle database listener on port 1521. This registration process allows connections to the database from outside of the server. The database was online and operational, but because the dynamic registration option was disabled it could no longer register to the listener. So no users could connect to the database.
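In an incident like this, a quick first step is to distinguish "listener down" from "database not registered": check whether anything is accepting TCP connections on the listener port at all. Here is a minimal Python sketch of that check (the host and port are placeholders, and a successful connection only proves the port is open, not that your service is registered):

```python
import socket

def listener_port_open(host: str, port: int = 1521,
                       timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on host:port.

    Note: this only confirms the listener process is reachable. A database
    that failed to register its service can still refuse logins even when
    this returns True -- which is exactly what happened in this incident.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In the situation described above, this check would have returned True: the listener was up and listening on port 1521, but the database had not registered with it.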

The fix for this was adding a static listener to the listener.ora for the database hosted on the server, thus allowing it to receive connections. Once the static listener was added, all users were able to connect to the production database without error.

The Technical Problem

Let’s break this incident down in more detail:

This is the original listener.ora file:

LISTENER=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=tcp)(HOST=MyServer)(PORT=1521))
      (ADDRESS=(PROTOCOL=ipc)(KEY=extproc))))

The administrator added one line (the last line below):

LISTENER=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=tcp)(HOST=MyServer)(PORT=1521))
      (ADDRESS=(PROTOCOL=ipc)(KEY=extproc))))

DYNAMIC_REGISTRATION_LISTENER=OFF

This prevented any database that does not have a static listener entry in the listener.ora file from accepting connections.

The Technical Solution

To correct the problem, I added a static listener entry to the listener.ora file (the SID_LIST_LISTENER block below):

LISTENER=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=tcp)(HOST=MyServer)(PORT=1521))
      (ADDRESS=(PROTOCOL=ipc)(KEY=extproc))))

DYNAMIC_REGISTRATION_LISTENER=OFF

SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (GLOBAL_DBNAME=MyDBName)
      (ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME=MySID)))

You can find detailed information about the listener file for Oracle version 19c here.

The Communication Problem

We have mentioned in this blog before that almost all problems with technology projects are the result of poor communication. This principle holds here as well. Because the system administrator did not keep any of the DBAs on our team “in the loop” about their vulnerability testing, or the resulting changes, those changes caused production downtime.  

The Communication Solution

Any change to a server, database, or application must be communicated to all responsible parties beforehand. In fact, a better approach in this case would have been to ask the DBA to make the change to the listener file rather than the administrator making the change himself. This would have ensured that an experienced DBA had reviewed the change and understood the potential impact.

The moral of the story is: Keep your DBAs in the loop when you’re making system changes. It’s our job to proactively prevent database issues others might miss.

A Word on Database Security

While an action taken by the system administrator caused a problem in this situation, the vulnerability testing itself should be applauded from a database security standpoint, because it exposed a potential vulnerability (dynamic registration). It is a best practice to disable dynamic registration unless the organization needs it, and unless the associated risk is mitigated by other practices, such as changing the default listener port.

Database vulnerability testing is a crucial part of a comprehensive IT security plan and is often overlooked. For the reasons described above, the process should always include a member of the DBA team. See a few of our database security related blogs here.

 

4 Keys to Avoiding the Number 1 Cause of Database Project Failure

We all know that database projects and other technical/IT projects often fail. They are never completed, the results fall far short of expectations, nobody uses the new application, and so on.

Why? At the end of the day, if we look beneath the surface-level issues, the main reason for database project failure — by far — is poor communication. 

Case in point: if a project fails because of technical errors or deficiencies, it’s either because the technical resources did not have the right skill set, or because the requirements they were working from were incorrect or incomplete.

If it’s the former, then there was a breakdown in communication between the resources and the project manager regarding the set of abilities that the resources have, or there was a breakdown between the project manager and the business analyst regarding what skill sets were needed for the project. If it’s the latter then there was a breakdown in communication between the business analyst and the project manager regarding what the overall system requirements were.

Another typical project failure involves missing deadlines. Typical causes include resources not being available when needed, the infrastructure not being ready in time, or the business users not being ready for testing or migration activities.

Again, in all of these cases the root cause is communication. If one of the parties is not ready when they need to be, it is either because they didn’t know when they would be needed, or they incorrectly stated their availability. If the infrastructure is not available when it is needed, then either the requirements or the deadline for the infrastructure were not properly communicated to the infrastructure team, or the infrastructure team miscommunicated their ability to get the work done in time.

If you look deeper and break down the presenting problems, in almost all cases the root cause of project failures is communication. Often the communication failures occur in the very beginning of the project, during the scoping and estimate or quotation process.

Here are 4 key approaches that I use to mitigate the significant risks to project success caused by poor communication:

  1. When asking someone for a decision on an important point, I always ask twice. If the two answers differ, I ask a third time. And I continue that process until the answers become consistent. If I receive two different answers from two different critical stakeholders, I will find a reason to send a joint email or have a conversation with both present, and I will re-ask the question in hopes of gaining consensus. (Political sensitivity and tact are critical here… Perhaps that’s the subject of another blog post…)
  2. When nailing down an important decision, I follow up in writing to validate and underscore everyone’s understanding, especially for something for which I have received two different answers over time.
  3. I treat decisions differently than statements of fact. If I ask a client, “Do your customers connect directly to your database?”, the answer is a statement of fact. There is a right and wrong answer to this question, and it can be validated independently. However, if I ask the customer, “How many customers do you want the database to support in five years?”, the answer is a decision or a target. There is no right or wrong answer, and it cannot be validated except by the same individual (assuming they are the decision-maker).

    I treat statements of fact very differently from decisions/targets:

    • I validate a statement of fact in a variety of ways. I might look at the user accounts on the existing system, or I might ask someone else in the organization, or I might look at the application for clues. 
    • For decisions or targets, validation can be more difficult. As mentioned above, I ask at least twice for any decision that can impact the scope of the project. If the answers differ, or if I feel like the answer is not solid and may change (based on my client’s tone of voice, hesitation, inconsistencies with other statements or requests, or other factors), I will ask again until I am satisfied that the answer is solid.
  4. For all important points that can impact the project time or cost estimate, or the database design or implementation, I always validate them in one fashion or another before we act on them. And if I can’t validate them for some reason, I call them out separately as assumptions in the estimate or quote to bring them to the client’s and the team’s attention, and then I mention them directly when reviewing the document with them.

To sum up: as you might expect, the antidote to poor communication is good communication. Especially going into a project, keep the above in mind. Get clarity and validate what you’re hearing. This will make you look good, your customers and technical team members will appreciate it, and your projects will be much more likely to succeed.

To get optimum value and results from your database project investments, contact Buda Consulting.