An In-Depth Look at Microsoft SQL Server Managed Services with Buda Consulting

Monitoring and managing Microsoft SQL Server databases can be challenging for many businesses. It takes significant resources and constant monitoring to ensure optimal functioning across a SQL Server database environment, in line with best practices and business goals. Many organizations lack the expertise and time to manage their SQL Server databases themselves. This is where Buda Consulting’s SQL Server Managed Services come in.

Our certified professionals are capable of managing even the most sophisticated SQL Server environments and can take the load of managing your databases completely off your team. Our SQL Server Managed Services offering includes remote administration, performance tuning, upgrades, database security, high availability, disaster recovery, database migrations, and more.

Monitoring SQL Server Managed Services for Security and Performance

Today’s businesses need database performance and security monitoring more than ever. Even in non-regulated environments, organizations need to manage and monitor database access, ensure data is available and protected by backups, and be prepared to respond to potential breaches, including any required reporting. Organizations also need to track other factors, such as permission changes, additions or drops of columns or tables, and unscheduled changes inside the database. All this can take significant time and expertise.

Overall, SQL Server database performance and security monitoring delivered as a managed service helps proactively identify and solve problems before they impact operations, while cost-effectively eliminating the constant drain of database monitoring on in-house resources. Managed services can also ensure that your SQL Server environment is safely backed up off-site to reduce the impact of ransomware and other malware attacks.

High Availability and Disaster Recovery for SQL Server

Companies now consider 24x7x365 availability of their web presence or ERP as a critical requirement for keeping up with today’s dynamic competitive environments. Few firms can afford to face significant downtime of mission-critical applications—especially their SQL Server databases. Besides reducing immediate operational impacts, your ability to seamlessly withstand outages, natural disasters, and infrastructure interruptions will elevate your level of business continuity, and thus directly protect your bottom line. Buda Consulting offers high availability and disaster recovery services as part of its SQL Server Managed Services offering to ensure business continuity and peace of mind.

What’s the difference between high availability and disaster recovery? In SQL Server environments, the former focuses on providing 100% uptime and service availability through redundant and fault-tolerant components at the same location. The latter offers service continuity and minimizes downtime via redundant services at one or more separate sites.

The importance of high availability and disaster recovery for any business cannot be overstated. Downtime costs alone have been estimated in the millions of dollars per hour in some industries. Database downtime is the stuff of a CTO’s nightmares, and it can stem from many causes: natural disasters, power outages, or hardware and software failures. With Buda Consulting’s SQL Server Managed Services, these risks won’t keep you up at night anymore.

Effective Management of Growing SQL Server Estates

SQL Server Managed Services involve remote monitoring of resource usage such as I/O capacity, memory, disk space, and CPU to identify trends and predict when more capacity will be necessary. Monitoring provides timelines and history that help reveal whether a stress event coincides with a specific type of processing, like a scheduled data import or a weekly aggregation. At the same time, making expertise available on demand is critical to rapidly isolate the root causes of alerts and eliminate unanticipated issues, like an increase in deadlocks or a performance drop, before users experience problems.
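
As a flavor of what that diagnosis looks like in practice, here is a minimal sketch that surfaces the most CPU-hungry statements on a SQL Server instance, using the standard sys.dm_exec_query_stats and sys.dm_exec_sql_text dynamic management views; the TOP 10 cutoff is an arbitrary choice:

-- Top 10 statements by cumulative CPU time, from the plan cache:
SELECT TOP 10
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;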

As many SQL Server estates grow rapidly larger and more complex, the advantages of managed service expertise to deal with monitoring, administration, health/performance tuning, and troubleshooting across on-premises, public and hybrid cloud infrastructure increase as well. A managed services approach not only eliminates repetitive, manual daily tasks for your team but also enhances the business value of SQL Server database monitoring through expert use of the latest tools and best practices.

SQL Server Installation and Updates

Your SQL Server Managed Services team will collaborate on the initial installation and configuration of new Microsoft SQL Server software. In most cases, a system administrator or other IT/operations team members are responsible for physical or virtual setup and deployment of the new database server’s operating environment, while a database administrator (DBA) installs and configures the new database software. Also, the DBA handles the ongoing maintenance whenever patches and updates are necessary, as these are critical for security and optimum performance. Anytime there’s a need for a new server, the DBA will deal with data transfer from the current to the new system.

Database Upgrades and Migration

If your company is using an older SQL Server version that is holding you back, Buda Consulting’s SQL Server Managed Services can upgrade you to the current SQL Server version or a different version that better meets your evolving needs, such as the ability to leverage newer SQL Server features like Big Data Clusters, improved container support, new in-memory database capabilities, and more.

Migrating databases between SQL Server environments, such as moving from an on-premises data center to a public, private or hybrid cloud, can be challenging even for experienced IT staff. Buda Consulting SQL Server Managed Services can help you reduce delays and mitigate database migration risks while ensuring new environments are correctly configured and deployed in line with best practices and your specific needs—to eliminate data breaches, data loss, and other misconfiguration impacts. We can even help you move your SQL Server databases from your on-premises data center to a managed hosting provider, or help you choose the best hosting options for your critical SQL Server workloads.

Remote Database Administration

For organizations that want to outsource SQL Server database administration but don’t need a full managed service program, Buda Consulting provides remote database administration (remote DBA) services for your SQL Server environment. This level of support is ideal for helping small to midsized businesses (SMBs) address upgrade and patching needs, performance optimization, maintenance, and monitoring. Leveraging our remote DBA service can often reduce operational costs, enhance system performance, and relieve stress on scarce IT resources.

Let Experienced Pros Handle Your SQL Server Databases

Whether you want to optimize your current database performance, develop a new SQL Server database architecture or evaluate your existing database architecture, or improve security, business continuity and incident response in line with new regulations or escalating stakeholder demands, highly-qualified SQL Server Managed Services from Buda Consulting can help. Certified professionals will work with you to understand your business goals and create an optimized Microsoft SQL Server environment that consistently delivers a broad range of cost, security, availability, performance, scalability and agility benefits to your company.

In today’s fast-changing business environment, SMBs demand more from their data than ever before. If you want to focus on running your business and leave Microsoft SQL Server concerns to a trusted partner, you can rely on the database experts at Buda Consulting to provide SQL Server Managed Services that will maximize your operational efficiency, security and availability while reducing IT costs and business risk.

Contact Buda Consulting today to explore the options and benefits of Microsoft SQL Server Managed Services for your organization. 

MySQL and MariaDB Encryption Choices for Today’s Use Cases

Long a cornerstone of data security, encryption is becoming more important than ever as organizations come to grips with major trends like teleworking, privacy mandates and Zero Trust architectures. To comprehensively protect data from the widest possible range of threats and meet the demands of these new use cases, you need two fundamental encryption capabilities:

  1. The ability to encrypt sensitive data “at rest”—that is, where it resides on disk. This is a critical security capability for many organizations and applications, as well as a de facto requirement for compliance with privacy regulations like HIPAA, GDPR and CCPA. PCI DSS also requires that stored card data be encrypted.
  2. Encrypting data “in transit” across private and public networks. Common examples include using the HTTPS protocol for secure online payment transactions, as well as encrypting messages within VPN tunnels. Zero Trust further advocates encrypting data transmitted over your internal networks, since your “perimeter” is presumed to be compromised.

MySQL and MariaDB each support “at rest” and “in transit” encryption modalities. They both give you the ability to encrypt data at rest at the database level, as well as encrypting connections between the MySQL or MariaDB client and the server.

MySQL database-level encryption

MySQL has offered strong encryption for data at rest at the database level since MySQL 5.7. This feature requires no application code, schema or data type changes. It is also straightforward for DBAs, as it does not require them to manage associated keys. Keys can be securely stored separate from the data and key rotation is easy.

MySQL currently supports database-level encryption for general tablespaces, file-per-table tablespaces and the mysql system tablespace. While earlier MySQL versions encrypted only InnoDB tables, newer versions can also encrypt various log files (e.g., undo logs and redo logs). Also, beginning with MySQL 8.0.16, you can set an encryption default for schemas and general tablespaces, enabling DBAs to control whether tables are encrypted automatically.
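
To make this concrete, here is a minimal sketch of what database-level encryption looks like at the SQL level, assuming a keyring plugin (such as keyring_file) is already installed; the table and schema names are hypothetical:

-- Encrypt a new InnoDB table in its file-per-table tablespace:
CREATE TABLE customer_pii (
  id  INT PRIMARY KEY,
  ssn VARCHAR(11)
) ENCRYPTION='Y';

-- MySQL 8.0.16+: make encryption the default for a new schema:
CREATE SCHEMA payroll DEFAULT ENCRYPTION='Y';

-- Rotate the master encryption key without downtime:
ALTER INSTANCE ROTATE INNODB MASTER KEY;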

Overall, MySQL database-level encryption is secure, easy to implement, and adds little overhead. Among its limitations: it does not offer per-user granularity, it cannot protect against a malicious root user (who can read the keyring file), and it cannot protect data in RAM.

MySQL Enterprise Transparent Data Encryption

In addition to the generic database-level encryption just discussed, users of “select Commercial Editions” of MySQL Enterprise can also leverage Transparent Data Encryption (TDE). This feature encrypts data automatically, in real-time, before writing it to disk; and decrypts it automatically when reading it from disk.

TDE is “transparent” to users and applications in that it doesn’t require code, schema or data type changes. Developers and DBAs can encrypt/decrypt previously unencrypted MySQL tables with this approach. It uses database caching to improve performance and can be implemented without taking databases offline.

Other MySQL Enterprise Encryption Features

Besides TDE, MySQL Enterprise Edition 5.6 and newer offers encryption functions based on the OpenSSL library, which expose OpenSSL capabilities at the SQL level. By calling these functions, MySQL Enterprise applications can perform the following operations (a brief sketch follows the list):

  • Improve data protection with public-key asymmetric cryptography, which is increasingly advocated as hackers’ ability to crack hashed passwords increases 
  • Create public and private keys and digital signatures
  • Perform asymmetric encryption and decryption
  • Use cryptographic hashes for digital signing and data verification/validation
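
Here is a brief, hedged sketch of these functions in use, assuming a MySQL Enterprise Edition server with the OpenSSL-based encryption functions installed; the key size and data values are arbitrary:

-- Generate an RSA key pair:
SET @priv = create_asymmetric_priv_key('RSA', 2048);
SET @pub  = create_asymmetric_pub_key('RSA', @priv);

-- Encrypt with the public key; decrypt with the private key:
SET @ciphertext = asymmetric_encrypt('RSA', 'sensitive data', @pub);
SELECT asymmetric_decrypt('RSA', @ciphertext, @priv) AS plaintext;

-- Sign a digest of a message and verify the signature:
SET @digest = create_digest('SHA256', 'message to sign');
SET @sig    = asymmetric_sign('RSA', @digest, @priv, 'SHA256');
SELECT asymmetric_verify('RSA', @digest, @sig, @pub, 'SHA256') AS valid;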

MariaDB database-level encryption

MariaDB has supported encryption of tables and tablespaces since version 10.1.3. Once data-at-rest encryption is enabled in MariaDB, tables that are defined with ENCRYPTED=YES or with innodb_encrypt_tables=ON will be encrypted. Encryption is supported for the InnoDB and XtraDB storage engines, as well as for tables created with ROW_FORMAT=PAGE (the default) for the Aria storage engine.

One advantage of MariaDB’s database-level encryption is its flexibility. When using InnoDB or XtraDB you can encrypt all tablespaces/tables, individual tables, or everything but individual tables. You can also encrypt the log files, which is a good practice.
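
For illustration, a minimal sketch assuming a key management plugin (such as file_key_management) is configured and innodb_encrypt_tables is enabled; the table name and key ID are hypothetical:

-- Encrypt one InnoDB table with a specific key from the key file:
CREATE TABLE accounts (
  id      INT PRIMARY KEY,
  balance DECIMAL(12,2)
) ENGINE=InnoDB ENCRYPTED=YES ENCRYPTION_KEY_ID=2;

-- Check which tablespaces are currently encrypted:
SELECT name, encryption_scheme, current_key_version
FROM information_schema.innodb_tablespaces_encryption;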

Encrypted MariaDB data is decrypted only when accessed via the MariaDB database, which makes it highly secure. A potential downside is that MariaDB’s encryption adds about 3-5% data size overhead.

This post explains how to set up, configure and test database-level encryption in MariaDB. For an overview of MariaDB’s database-level encryption, see this page in the knowledge base.

Encrypting data “in transit” with MySQL

To avoid exposing sensitive data to potential inspection and exfiltration if your internal network is compromised, or if the data is transiting public networks, you can encrypt the data when it passes between the MySQL client and the server.

MySQL supports encrypted connections between the server and clients via the Transport Layer Security (TLS) protocol, using OpenSSL.

By default, MySQL programs try to connect using encryption if it is supported on the server; unencrypted connections are the fallback. If your risk profile or regulatory obligations require it, MySQL lets you make encrypted connections mandatory.
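
A brief sketch of what enforcing encryption looks like, assuming MySQL 5.7.8 or newer; the application account is hypothetical:

-- Server-wide: reject any unencrypted client connection:
SET GLOBAL require_secure_transport = ON;

-- Per account: require TLS for one user:
ALTER USER 'app_user'@'%' REQUIRE SSL;

-- Verify that the current session is actually encrypted:
SHOW SESSION STATUS LIKE 'Ssl_cipher';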

Encrypting data in transit with MariaDB

By default, MariaDB does not encrypt data during transmission over the network between clients and the server. To block man-in-the-middle attacks, side-channel attacks and other threats to data in transit, you can encrypt data in transit using the Transport Layer Security (TLS) protocol, provided your MariaDB server was compiled with TLS support. Note that MariaDB supports TLS but not the older, insecure SSL protocol versions.

As you might expect, there are multiple steps involved in setting up data-in-transit encryption, such as creating certificates and enabling encryption on the client side. See this page in the MariaDB knowledgebase for details.
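
For a flavor of the server-side checks involved, a hedged sketch assuming MariaDB 10.2 or newer; the account name is hypothetical:

-- Confirm the server was compiled and configured with TLS support:
SHOW GLOBAL VARIABLES LIKE 'have_ssl';

-- Require TLS for a specific account so plaintext logins are rejected:
ALTER USER 'app_user'@'%' REQUIRE SSL;

-- A non-empty value here means the current connection is encrypted:
SHOW SESSION STATUS LIKE 'Ssl_cipher';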

Conclusion

With data security being an increasing business and regulatory concern, and new use cases like teleworking and privacy compliance becoming the norm, encryption will certainly be used to secure more and more MySQL and MariaDB environments. 

If you’d like a “second opinion” on where and how to implement encryption to address your business needs, contact Buda Consulting for a free consultation on our database security assessment process.

If you like this article, please share it with your colleagues and subscribe to our blog to get the latest updates.

7 Ways To Improve SQL Query Performance

How do you improve SQL query performance? That is a big question, and one that we get asked all the time. There is no one answer, but there is a process that we apply to make a difference in query performance. In this post, I will discuss some of the questions we ask, some of the diagnostics we run, and some of the steps we take to reduce the amount of time a query takes. 

The questions to ask are similar for any relational database software, so this discussion applies to Oracle, SQL Server, MySQL, PostgreSQL, and others. I may mention tools or processes by a vendor-specific name, but for the most part each vendor has an equivalent.

Query tuning is a complex and iterative process, so no blog post, including this one, can be comprehensive. The objective is to help you think about tuning from a broader perspective rather than looking only at the query in question; this is more about concepts than syntax.

Questions to Ask When Looking to Improve SQL Query Performance

To narrow down where the problems are with a SQL query, we start with some basic questions about the query and how it is being executed. I will discuss each question and talk about why we ask it, and what information the answer might give us. None of these questions will tell us definitively what the problem is, but they can point us quickly in the right direction and save precious time when a client is waiting for improved response time.

Timeframe 

Is the query that we are interested in (hereafter referred to as “our query”) executed during a period when the system is heavily taxed by other processes?

  • Why we ask: If our query is executed during a very busy time, then the problem may not be with our query at all. Reducing load on the system by examining other queries first (using this same strategy) may be more effective. So we would start by identifying and examining the most resource-intensive queries, to try to reduce overall system load, as sketched below.
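
For example, a minimal Oracle-flavored sketch of that first step; v$sqlstats is a standard dynamic view, the FETCH FIRST syntax requires Oracle 12c or newer, and the cutoff of 10 is arbitrary:

-- Find the most CPU-intensive statements currently tracked in the shared pool:
SELECT sql_id,
       executions,
       cpu_time     / 1000 AS cpu_ms,
       elapsed_time / 1000 AS elapsed_ms
FROM v$sqlstats
ORDER BY cpu_time DESC
FETCH FIRST 10 ROWS ONLY;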

Proximity and Size

Does our query take the same amount of time whether it is executed locally or remotely?

  • Why we ask: If our query is executed remotely (executed in a browser or application on a server other than the database server) and if it returns a large number of rows, then it is possible that the data transfer is the bottleneck, rather than the retrieval of the data from the database. Asking this question may help us take the network out of the equation.
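
One quick, hedged way to take the network out of the equation is to run a count-only form of the query directly on the server; it does roughly the same scan and filter work but transfers almost nothing (the table and filter here are hypothetical):

-- If this returns quickly on the server while the full result set is slow
-- at the remote client, suspect network transfer rather than the database:
SELECT COUNT(*)
FROM orders
WHERE order_date >= DATE '2019-01-01';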

Result Set Characteristics 

When our query completes, does it return a large number (millions?) of rows?

  • Why we ask: When executing our query locally, if it takes a long time to complete, there are two possibilities. Either it takes a long time for the database software to find the data to return, or it takes a long time to return the data to the screen or the application. The former can be fixed by tuning the query; the latter may mean that our query is returning too many rows to be practical. In the latter case, we should revisit the intent of the query to see if an aggregated form of the data would be more usable, or if breaking the result set up into more manageable chunks makes sense. Also, a very large result set may be an indication of an error in the query itself, perhaps a missing join, or missing criteria resulting in a Cartesian product. In this case, we would look at the logic being expressed in the query and ensure that it matches the intent of the query. 
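
As a hypothetical illustration of the Cartesian-product case, a missing join predicate multiplies the row counts of the two tables:

-- Missing join predicate: returns customers x orders rows (a Cartesian product):
SELECT c.customer_name, o.order_total
FROM customers c, orders o;

-- Corrected: one row per matching customer/order pair:
SELECT c.customer_name, o.order_total
FROM customers c
JOIN orders o ON o.customer_id = c.customer_id;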

Is the result set both large and aggregated or sorted?

  • Why we ask:  Aggregation and sorting on large result sets require significant temporary space. If this is a significant part of the query operations, we want to look at the management of memory buffers, and temp space (System Global Area (SGA), Program Global Area (PGA) and temporary segments or their equivalents). We want to make sure that enough memory is allocated so that we are not excessively writing out to temp space, and that temp space is optimally sized and located.
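
In Oracle, for example, a quick hedged check of which sessions are consuming temp space looks like this; other engines have equivalent views:

-- Sessions currently holding temporary segments, largest first:
SELECT session_num, tablespace, segtype, blocks
FROM v$tempseg_usage
ORDER BY blocks DESC;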

Is the result set a (relatively) small subset of a large amount of data?

  • Why we ask:  If the database is very large, and if our query returns a small subset of the data, there are two broad solutions that may be applicable: adding or optimizing indexes, and adding or optimizing partitioning. Up to a certain data size, proper indexing alone can provide adequate performance. When data gets very large, however, a combination of indexes and partitions will be necessary to provide adequate performance when querying a subset of the data. 
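
A minimal Oracle-flavored sketch of both approaches, with hypothetical names and date ranges:

-- Partitioning: let queries on recent data touch only one partition:
CREATE TABLE event_history (
  event_id   NUMBER,
  event_date DATE
)
PARTITION BY RANGE (event_date) (
  PARTITION p2018 VALUES LESS THAN (DATE '2019-01-01'),
  PARTITION p2019 VALUES LESS THAN (DATE '2020-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

-- Indexing: support selective lookups on the query's filter column:
CREATE INDEX idx_event_date ON event_history (event_date);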

Historical

Has the performance of the query degraded over time?

  • Why we ask:  If the query performed well in the past, but no longer does, look at the growth rates of data in the tables referenced by the query. If the amount of data has increased significantly, new indexes may be required that were not necessary when less data was referenced. Significant data growth may also result in optimizer statistics that no longer reflect the characteristics of the data, requiring a refresh of these statistics if they are not automatically refreshed.
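
In Oracle, refreshing statistics on a fast-growing table is a one-liner; the schema and table names here are hypothetical, and other engines offer ANALYZE or UPDATE STATISTICS equivalents:

-- Re-gather optimizer statistics so plans reflect current data volumes:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP', tabname => 'ORDERS');
END;
/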

Does the data being queried involve many updates or deletes (as opposed to mostly inserts)?

  • Why we ask: Data that is frequently updated may result in index or tablespace fragmentation. It may also invalidate optimizer statistics, as in the case of significant data growth.
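
Where fragmentation is confirmed, rebuilding the index is often the remedy; a hedged Oracle sketch with a hypothetical index name:

-- Rebuild a fragmented index; ONLINE (an Enterprise Edition feature)
-- avoids blocking DML during the rebuild:
ALTER INDEX idx_event_date REBUILD ONLINE;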

Conclusion

Query tuning is an iterative process, and there are many other questions to ask as we get into the details. But the questions above help us see the big picture; they can steer us in the right direction quickly and keep us from wasting time going down the wrong path.

If you have other questions that you like to ask when tuning, or an interesting tuning story, please share them in the comments.

Relational Database Design: It’s All About The Set

The Lost Science Of Relational Algebra And Set Theory

I originally wrote this post in 2011. Much has changed in the database technology landscape since then: big data technologies such as Hadoop have gone mainstream, and cloud technology is changing how and where we think about hosting our databases.

But relational databases are still relied upon as the best option for rich transactional data.

So, since this technology is still the foundation of our mission critical systems, we should understand how to take advantage of one of the foundational elements of relational technology: The Set.

The SQL language (Structured Query Language) was built upon relational algebra, a rigorous approach to query definition that is largely about set theory. This post is not a detailed technical discussion of relational algebra or set theory; instead, it is about the way that relational databases are often misused.

The purpose of this article is to discuss the central theme of relational database technology and one of its greatest strengths, one that is often overlooked by those practicing Oracle or SQL Server database design and development: set theory. Relational databases like Oracle and SQL Server are built and optimized to process sets of rows, as opposed to individual rows. Yet many application developers, even those using these relational tools, struggle to think beyond the individual row. That is why the major relational database vendors have created very powerful procedural languages such as PL/SQL and T-SQL.

In many cases, developers use these tools to step row by row through a dataset (using cursors) because they may not understand how the set operators work. This approach leads to unnecessary development effort and higher maintenance costs, as well as poor performance.

There are definitely times when a procedural process is necessary. But often there are set-based alternatives that are more efficient and much easier to develop.

In this post, I will focus on three core set operators: Union, Intersect, and Difference.

First some definitions:

Given two sets, Set A and Set B

Union: All records from Set A and all records from Set B. If a record exists in both sets, it will appear only once in the union (areas A, B, and C in Figure 1).

Intersection: The unique set of records that exist in both Set A and Set B (area C in Figure 1).

Difference: The difference between Set A and Set B is all the records in Set A except those that also exist in Set B (area A in Figure 1).

Vendor Differences

Relational databases implement these operators in different ways, but they all provide a relatively simple way to combine and compare similar sets of data. Oracle has the Union, Intersect, and Minus operators. SQL Server has Union, Intersect, and Except operators.

MySQL has the ability to perform these operations as well, but it is more complex. For example, in order to perform a difference operation, you must use a NOT EXISTS or NOT IN operator, resulting in a more complex SQL statement.
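
For example, a hedged sketch of the difference operation in MySQL, using the Event_Source tables defined later in this post:

-- Emulating MINUS/EXCEPT with NOT EXISTS:
SELECT e1.Event_Name_Orig
FROM Event_Source_1 e1
WHERE NOT EXISTS (
  SELECT 1
  FROM Event_Source_2 e2
  WHERE e2.Event_Name_Orig = e1.Event_Name_Orig
);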

Example

Let’s examine how Oracle implements each of these set operations with a simple example.

This post is intended to discuss the concepts, so I did not include the data and the actual query results in the post. But you can download a script to create and populate the tables with test data and run the queries here: set_tables_sql

Suppose you collect bank account events (debits, credits) from multiple sources. You place them into one common table, but also maintain the original source records in separate tables for historical purposes.  The original source records never change, but the events in the common table can be modified as necessary by the users.

Now suppose that occasionally you need to compare the transactional data in the common table to the original source data to see which rows have been changed. This is very easy using set operators.

The tables that we will use for this example follow. I used different column names in each table to illustrate that the column names do not need to be the same in each set that you are comparing. However, the number of columns in each query and the data types in each query must be the same.

Table Definitions

CREATE TABLE Event
(
Event_Id NUMBER,
Event_Name VARCHAR2(30),
Event_Description VARCHAR2(255),
Data_Source_location VARCHAR2(30),
Event_Date DATE
);

CREATE TABLE Event_Source_1
(
Event_Id_Orig NUMBER,
Event_Name_Orig VARCHAR2(30),
Event_Description_Orig VARCHAR2(255),
Data_Source_location_Orig VARCHAR2(30),
Event_Date_Orig DATE
);

CREATE TABLE Event_Source_2
(
Event_Id_Orig NUMBER,
Event_Name_Orig VARCHAR2(30),
Event_Description_Orig VARCHAR2(255),
Data_Source_location_Orig VARCHAR2(30),
Event_Date_Orig DATE
);

Example 1 — Union: Now suppose you needed to display all event names that appear in either Event Source 1 or Event Source 2 (or both). The Union operator will display records from both tables, but records appearing in both tables will appear only once (unless the UNION ALL operator is specified, in which case duplicates are displayed).

SELECT Event_Name_Orig FROM Event_Source_1
UNION
SELECT Event_Name_Orig FROM Event_Source_2;

Example 2 — Intersection: Now suppose you needed to display only events from Source 1 that have remained unchanged in the Event table. This can be done with an intersection between Event and Event_Source_1.

SELECT Event_Name,Event_Description,Data_Source_Location FROM Event
INTERSECT
SELECT Event_Name_Orig,Event_Description_Orig,Data_Source_Location_Orig FROM Event_Source_1;

Example 3 — Difference: Now suppose you want to know all events that appear in the original Data Source 1 data but not in the original Data Source 2 data. This can be done using the difference operation, which Oracle implements with the MINUS operator. It takes all the records from one set and subtracts those that also exist in the other set.

SELECT Event_Name_Orig,Event_Description_Orig,Data_Source_Location_Orig FROM Event_Source_1
MINUS
SELECT Event_Name_Orig,Event_Description_Orig,Data_Source_Location_Orig FROM Event_Source_2;

Database Design Considerations

These powerful operators can be used to reduce or eliminate the need for cursors in many cases. The usefulness of these operators is dependent on sound database design and a well-normalized table structure. For example, a table that has repeating columns designating the same data element (as opposed to using multiple rows) will render these operators much less useful.
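
As a hypothetical illustration of that last point, compare a design with repeating phone columns to a normalized alternative; only the normalized form lets the set operators compare phone lists across sources:

-- Repeating columns: the phone "slots" cannot be compared as a set:
-- CREATE TABLE contact (
--   contact_id NUMBER,
--   phone1 VARCHAR2(20),
--   phone2 VARCHAR2(20),
--   phone3 VARCHAR2(20)
-- );

-- Normalized: one row per phone number, so UNION, INTERSECT, and MINUS
-- apply naturally:
CREATE TABLE contact_phone (
  contact_id NUMBER,
  phone      VARCHAR2(20)
);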

Conclusion

With careful database design and a good understanding of the set management tools provided by the relational vendors, we can simplify and speed development and reduce maintenance costs. Let’s think in terms of sets and get the most out of our relational database investment!

If you would like to discuss set theory or relational database design, please give me a call at (888) 809-4803 x 700 and if you have further thoughts on the topic, please add comments!

If you enjoyed this article please like and share!

Database Patch News — November 2019 (Issue 1)

Welcome to Database Patch News, Buda Consulting’s monthly newsletter of current patch information for Oracle and Microsoft SQL Server. Here you’ll find information on available patches—including security patches—and desupported versions made available during the past month.

Why should you care about patching vulnerabilities and bugs? Two big reasons:

  1. Unpatched systems are a top cyber attack target. Patch releases literally advertise vulnerabilities to the hacker community. The longer you wait to patch, the greater your security risk.
  2. Along with running a supported database version, applying the latest patches ensures that you can get support from the vendor in case of an issue. Patching also helps eliminate downtime and lost productivity associated with bugs.

Here are the latest patch updates for Oracle and SQL Server:

Oracle Patches:

Oct 15 2019 Quarterly Patch Updates:

19c – Release Update 19.5 available

18c – Release Update 18.8 available

12.2.0.1 – Oct 2019 Release Update 12.2.0.1.191015 available.
Regular support ends Mar 2023 and extended support ends Mar 2026.

12.1.0.2 – Currently in extended support.
The last freely available patch for 12.1.0.2 was July 2019. The Oct 15 2019 Patch Set Update (PSU 12.1.0.2.191015) is available but may require an extended support purchase to access. Patches will be released for this version until July 2021.

11.2.0.4 – Entered extended support in December 2017
The last freely available patch was October 2018 for 11.2.0.4. PSU 11.2.0.4.191015 is available but may require clients to purchase extended support to access it.

SQL Server Patches:
SQL Server 2017 incremental servicing model (ISM)
CU17 (Latest build)—Released October 08, 2019

SQL Server 2016 Service Pack 2
Release date: April 24, 2018

SQL Server 2014 Service Pack 3 Cumulative update 4
Release date: July 29, 2019

SQL Server 2014 Service Pack 2 Cumulative update 18
Release date: July 29, 2019

MySQL Fabric: The Best of NoSQL and Relational Databases

Oracle Corp. is currently the world’s second-largest software vendor—and it isn’t going to let a little thing like unstructured data stand in its way. With the recent release of its MySQL Fabric technology, which is meant to meet the demands of cloud- and web-based applications, Oracle is positioning itself to dominate the big data landscape.

Most enterprise data is still stored in relational databases and accessed with SQL. To handle diverse data types and increase the flexibility of database structures, database developers are increasingly employing newer, open source DBMSs, especially MySQL (which Oracle maintains) and, more recently, NoSQL databases.

MySQL is currently the world’s most popular open source database. An RDBMS-based SQL implementation designed to support web as well as embedded database applications, MySQL drives some of the world’s largest websites, including Google, Facebook, Twitter and YouTube. It has proven to be easy to use, reliable and scalable.

Despite the promise it offers for big data and real-time web applications, NoSQL has yet to evolve to deliver enterprise-grade reporting and manageability. MySQL Fabric is designed to solve these problems by delivering the best of NoSQL and SQL/RDBMS.

The new MySQL Fabric open source framework seeks to combine the flexibility of NoSQL with the robust speed of RDBMS. It should also simplify the management and scaling of MySQL databases by making it easy to manage them in groups.

MySQL Fabric offers high availability through failure detection and failover, by automatically promoting a slave database to be the new master if the master database goes down. It also offers enhanced scalability through automated data sharding, a process of separating database tables into multiple sections. Sharding helps you manage MySQL databases that are too large (or frequently accessed) for a single server.  

Other key features include:

  • Automatic routing of transactions to the current master database, combined with load balancing of queries across slave databases
  • Extensions to PHP, Python and Java connectors to route transactions and queries directly to the correct MySQL server, eliminating the latency associated with passing through a proxy

By enabling multiple copies of a MySQL database to work together, MySQL Fabric will make it easier to perform live backups and scale MySQL databases across multiple servers. This, in turn, will make it easier to safely “scale out” MySQL applications in both on-premises and cloud implementations.

The new framework will support the growing use of MySQL for high-traffic, business-critical web applications. MySQL Fabric also positions Oracle strongly against NoSQL databases like MongoDB and MySQL add-on providers like Percona. Prior to the release of MySQL Fabric, DBAs had to write code or buy third-party software to create a MySQL server cluster.

You can download the new framework as part of the MySQL Utilities 1.4.3 package at: http://dev.mysql.com/downloads/fabric/

Note that Oracle also offers the MySQL Cluster version of MySQL, which offers some advantages over MySQL Fabric, such as faster failover times and a two-phase commit to ensure that each transaction is fully recognized.

Contact Buda Consulting to talk over how these technologies can help maximize the performance and reliability of your critical, customer-facing applications.