Data breaches and lost laptops continue to dominate the headlines – and rightfully so, with hundreds of millions of customers victimized in 2007 alone. Many organizations, however, are also addressing the more strategic issue of data governance – ensuring that the data upon which they rely for critical business decisions is accurate and available at all times.
The business problems of data governance and data integrity lack the drama of a stolen laptop or lost backup tape, but can cause even greater damage to organizations and their customers, partners and suppliers.
Consider the case of Roger Duronio, the former UBS PaineWebber systems administrator, who was found guilty in July 2006 of computer sabotage and securities fraud. Unhappy with his annual bonus, Duronio planted a “logic bomb” that deleted critical data on up to 2,000 servers in the company’s central data center and in branches nationwide – costing more than $3.1 million in clean-up, plus undisclosed costs of lost business.
But rogue insiders aren’t the only source of lost data and downtime. A new Deloitte report, “Treading Water: The 2007 Technology, Media & Telecommunications Security Survey,” reveals that 75 percent of companies cite human error as the top cause of security failures.
Good Data Governance Ensures Proper Business Decisions
IBM defines data governance as “the process by which companies govern appropriate access to their critical data, by measuring operational risk and mitigating security exposures associated with access to data.”
Data governance is paramount because information is the core of business. Every day, enterprises create, consume and make thousands of decisions based on data. Accurate data is therefore a prerequisite for educated decisions, both for the enterprises themselves and on behalf of their customers and business partners.
Like the “Butterfly Effect” – the idea that a butterfly's wings might create tiny changes in the atmosphere that ultimately cause a tornado – a single altered database value can influence financial, manufacturing or retail decisions; invalidate a quarterly financial report; distort revenue records and projections; or disrupt any number of other critical business actions.
New Database Auditing and Security Technologies
In recent years, a fast-growing class of database auditing and security technologies has emerged to give firms the tools to rapidly spot and prevent dangerous actions such as unauthorized database changes – as well as information breaches such as SQL injection attacks or unauthorized access by insiders to sensitive data.
This real-time monitoring technology uses both policy-based controls and anomaly detection to prevent unauthorized or suspicious activities by privileged users – such as database administrators, developers, and outsourced personnel – as well as potential hackers. It also monitors application traffic to identify potential fraud by end-users of enterprise applications such as Oracle Financials, PeopleSoft, Siebel and SAP, as well as business intelligence and in-house systems.
Database activity monitoring solutions observe all database transactions at the network level and on database servers themselves, consolidating and normalizing audit information from disparate systems into a central audit repository. They continuously analyze all database transactions and immediately notify IT security, risk and compliance managers – via real-time security alerts – about potentially dangerous or fraudulent activities.
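To make the policy-plus-anomaly idea concrete, here is a minimal sketch, in Python, of rule-based evaluation over a stream of observed database events. Everything in it – the event fields, the hypothetical "change_window_svc" service account, the crude injection signatures – illustrates the general technique only, not Guardium's actual design.

```python
from dataclasses import dataclass

@dataclass
class DbEvent:
    user: str        # database login observed on the wire
    source_ip: str   # client address
    sql: str         # statement text

def unauthorized_ddl(e):
    # Hypothetical policy: only the approved change account may run DDL.
    if e.sql.strip().upper().startswith(("DROP", "ALTER", "TRUNCATE")) \
            and e.user != "change_window_svc":
        return f"Unauthorized DDL by {e.user}: {e.sql}"
    return None

def injection_signature(e):
    # Crude pattern check; real products use far richer analysis.
    lowered = e.sql.lower()
    if "or 1=1" in lowered or "union select" in lowered:
        return f"Possible SQL injection from {e.source_ip}: {e.sql}"
    return None

RULES = [unauthorized_ddl, injection_signature]

def monitor(events):
    """Evaluate each observed transaction against every policy rule,
    emitting an alert the moment a rule matches - rather than after
    the fact, as with log review."""
    for e in events:
        for rule in RULES:
            alert = rule(e)
            if alert:
                yield alert

# Example: one benign query, one suspicious one.
alerts = monitor([
    DbEvent("app_svc", "10.0.0.5", "SELECT name FROM customers WHERE id = 7"),
    DbEvent("web_app", "203.0.113.9", "SELECT * FROM users WHERE 1=1 OR 1=1"),
])
for a in alerts:
    print(a)
```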
Key requirements for these solutions include real-time monitoring at both the network and host level, policy-based controls combined with anomaly detection, and a centralized, tamper-proof repository of normalized audit data spanning all database platforms.
Forrester Research estimates the value of the database auditing and real-time protection market, which includes new licenses, support, and services, at approximately $450 million, and expects it to double by 2010 as enterprises look to automate and secure even more of their enterprise databases (“The Forrester Wave: Enterprise Database Auditing and Real-Time Protection, Q4 2007,” Forrester Research, Inc., October 2007; the complete report, which includes a comprehensive assessment of 14 large and small vendors across 116 criteria, is available for download at http://www.guardium.com/ForresterWave).
Limitations of Traditional Database Logging Tools
Traditional database-resident logging tools have several important disadvantages. First, they impose an additional load on the database, creating an unacceptable side effect of slower performance. These tools also produce massive amounts of unfiltered log data – which means that someone still needs to manually pore through logs to find specific events that might represent unauthorized changes or breaches. (Some companies actually hire staff just for this function.)
In addition, native tools fail to provide real-time or proactive controls, since they analyze log activity “after the fact.” They also typically don’t provide all of the granular information required for compliance and forensic activities, such as records of read access to sensitive data (required by the Payment Card Industry Data Security Standard, or PCI DSS) or sufficient detail to identify rogue clients beyond database IDs (such as IP address, source application, and application user ID).
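To illustrate the manual burden, here is a hedged sketch of the kind of log-scraping script that dedicated staff end up writing against native logs. The log format below is invented for illustration; the point is what the native record cannot supply.

```python
import re

# Hypothetical native audit-log line:
# "2007-10-02 14:11:05 DBUSER stmt: UPDATE payroll SET rate = 99"
LINE = re.compile(r"^(?P<ts>\S+ \S+) (?P<user>\S+) stmt: (?P<sql>.+)$")
CHANGE_VERBS = ("INSERT", "UPDATE", "DELETE", "DROP", "ALTER", "GRANT")

def changes_in_log(path):
    """Scan an unfiltered native log for data-changing statements.
    Note what is missing: no client IP, source application, or
    application-level user ID, so attribution stops at the shared
    database login - and the scan happens only after the fact."""
    with open(path) as log:
        for line in log:
            m = LINE.match(line)
            if m and m.group("sql").upper().startswith(CHANGE_VERBS):
                yield m.group("ts"), m.group("user"), m.group("sql")
```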
Similar disadvantages exist for Security Information and Event Management (SIEM) systems. These complementary technologies are important for defense-in-depth, but they share these limitations because they rely exclusively on imported log data and lack database-focused analytics.
Securing Critical Data and Delivering Rapid ROI
Forrester Consulting recognized the shortfalls of traditional logging tools in a commissioned case study it recently released about a Guardium customer. (The complete case study can be downloaded at www.guardium.com/ForresterROI.)
The customer, a Fortune 500 manufacturer of consumer food and beverage products whose brands are household names worldwide, implemented Guardium’s real-time monitoring technology to protect corporate data and enforce change controls for critical databases supporting SAP, Siebel and 22 other key financial systems.
The Forrester Consulting study quantified the risk-adjusted ROI of Guardium’s system at 239 percent and the payback period at 5.9 months for this customer, compared to the “significant labor and capital costs” that would have otherwise been required using an in-house solution and traditional database logging utilities.
According to the report, the customer determined that “native tools contain little or no intelligence, consume CPU cycles and disk space, and reduce the performance of the database. Labor is required to sort out relevant events from the massive log that would be generated.”
Prior to implementing the Guardium solution, the company lacked “standardized mechanisms for enforcing database security policies and did not conduct consistent auditing of database activities across the different database environments.” Conversely, Guardium’s solution now “allows the customer to centrally maintain consistent audit data capture and practices across all databases.”
As in many organizations, all database changes are tracked by change control IDs. This organization now uses Guardium’s platform to capture all database transactions and automate the creation of reports comparing all observed changes with approved change requests in their corporate ticketing system. Auditors increasingly require this process, known as “change control reconciliation,” to tighten internal controls around critical financial systems, especially for Sarbanes-Oxley (SOX) 404.
According to the customer, “The ability to associate changes with a ticket number makes our job a lot easier.”
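As an illustration of the reconciliation step, the sketch below compares observed changes against an exported list of approved ticket IDs. The field names and ticket format are hypothetical, not the customer’s actual schema.

```python
# Approved change tickets, exported from the corporate ticketing system
# (ticket IDs and record fields are illustrative).
approved = {"CHG-1042", "CHG-1043"}

# Changes observed by the monitoring layer, each tagged with the ticket
# ID the operator supplied (None when no ticket was given).
observed = [
    {"ticket": "CHG-1042", "sql": "ALTER TABLE gl_entries ADD region CHAR(2)"},
    {"ticket": None,       "sql": "UPDATE gl_entries SET amount = 0 WHERE id = 9"},
]

def reconcile(observed, approved):
    """Return observed changes with no matching approved ticket -
    the exceptions an auditor wants to see on the SOX 404 report."""
    return [c for c in observed if c["ticket"] not in approved]

for c in reconcile(observed, approved):
    print("Unreconciled change:", c["sql"])
```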
Ensuring Data Integrity
Another of Guardium’s customers, a global financial services company with clients across Europe, the Americas and Asia Pacific, has created an advanced data infrastructure supporting a range of internal applications, including money management, portfolio management, bond trading, real estate investment, equities forecasting, data mining and regression testing.
This infrastructure spans several data centers and comprises several hundred databases – encompassing hundreds of users whose actions could jeopardize data integrity for core business processes. As in any large global company, the heterogeneous environment contains a wide variety of database and application server platforms.
When it comes to ensuring data quality and data integrity, this financial firm faces three major challenges common to big enterprises.
First, data cannot be locked in a vault. It needs to be accessible via multiple mechanisms and protocols, ranging from shared, multi-tier Web applications to individual spreadsheets that connect directly to databases. In addition, privileged users have their own mechanisms for modifying databases, so their activity must be monitored at the operating system level on the database servers themselves, in addition to monitoring at the network level.
Therefore, IT and security departments need unifying solutions that provide a single set of database security policies – and a single normalized view of all database audit information – across all these different access mechanisms and database platforms. (A sketch of what this normalization involves appears after the third challenge below.)
Second, the interconnected nature of the global systems means that corrupted content can compound like the Butterfly Effect, creating a cascade of poor decisions and results. The longer these errors linger, the greater the potential harm. As a result, the firm can no longer rely on traditional auditing approaches that only examine a quarterly or annual snapshot of all transactions. Instead, it must continuously monitor all transactions to catch potential errors in real time.
Third, it’s insufficient to merely implement tighter controls; the organization must also prove that these controls actually work, with detailed reports and verifiable audit trails backing them up. This requirement is driven by both internal and external auditors.
Providing an intelligent report on what actually happened in a complex environment is difficult enough. Proving a negative – that data was not altered – can seem nearly impossible (it’s the equivalent of demonstrating that the butterfly didn’t move its wings).
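Returning to the first challenge, here is a minimal sketch of audit-record normalization. The platform-specific field names are illustrative stand-ins, not actual vendor formats.

```python
# Raw audit records arrive in platform-specific shapes; the field names
# below are invented stand-ins for two such formats.
def from_platform_a(rec):
    return {"user": rec["OS_USERNAME"], "sql": rec["SQL_TEXT"], "source": "a"}

def from_platform_b(rec):
    return {"user": rec["login_name"], "sql": rec["statement"], "source": "b"}

NORMALIZERS = {"a": from_platform_a, "b": from_platform_b}

def normalize(platform, rec):
    """Map a platform-specific audit record into the single schema that
    all security policies and reports run against."""
    return NORMALIZERS[platform](rec)

print(normalize("a", {"OS_USERNAME": "jsmith", "SQL_TEXT": "DELETE FROM trades"}))
```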
Database monitoring addresses these challenges by providing a secure repository for storing all of the audit data for everything that occurs in the database environment. This audit information can’t be modified by anyone – even privileged users.
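The article does not describe the internal mechanism that makes the repository unmodifiable. One standard technique for tamper-evident audit stores is hash chaining, sketched below: each record carries a hash of its predecessor, so any later edit or deletion breaks verification.

```python
import hashlib, json

def append_record(chain, record):
    """Append an audit record, chaining it to the hash of its predecessor."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"record": record, "hash": digest})

def verify_chain(chain):
    """Recompute every hash; any edited or deleted record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
append_record(chain, {"user": "dba1", "sql": "GRANT ALL TO dba1"})
append_record(chain, {"user": "app", "sql": "SELECT * FROM accounts"})
assert verify_chain(chain)
chain[0]["record"]["sql"] = "SELECT 1"   # a privileged user tampers...
assert not verify_chain(chain)           # ...and the chain exposes it
```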
Sophisticated reporting and data mining tools extract useful information from this repository, while predefined policies and reports cover other key database activities required by auditors, such as security exceptions (failed logins, database errors, etc.) and privilege escalations.
Lessons Learned: Create Data Access Policies and Procedures
The financial firm began by using the Guardium solution to capture and analyze all transactions over a representative period of time, creating a baseline of activity that served as the starting point for defining policies about both “normal” and “unusual” business processes.
For example, the company quickly identified all direct access to databases that occurs outside of its standard line-of-business applications (such as via developer tools).
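A minimal sketch of that baselining approach, assuming each event carries a database user and a client program name (hypothetical fields), might look like this:

```python
from collections import Counter

baseline = Counter()   # (db user, client program) pairs seen during baselining

def learn(events):
    """Phase 1: record every (user, program) pair over a representative window."""
    for e in events:
        baseline[(e["user"], e["program"])] += 1

def unusual(events):
    """Phase 2: flag access outside the baseline - for example, a developer
    tool connecting directly instead of the line-of-business application."""
    for e in events:
        if (e["user"], e["program"]) not in baseline:
            yield e

learn([{"user": "app_svc", "program": "SAPGUI"}])
print(list(unusual([{"user": "jdoe", "program": "TOAD"}])))
```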
Create a Culture of Compliance
One of the key values of database monitoring technology is that it provides an effective deterrent for administrators and developers who might otherwise choose to circumvent corporate policies to make changes in a more convenient manner. Even without malice, these unauthorized shortcuts endanger data integrity for the entire organization.
Employees know that violations will be documented and flagged. That makes them more careful and less likely to attempt to circumvent policies for any purpose. But the real objective is to raise awareness about the reasons for following common-sense policies – which serves any business well.
According to the Fortune 500 manufacturer, “There’s a new and sharper focus on database security within the IT organization. Security is more top-of-mind among IT operations people and other staff such as developers. We now have a clearer focus on security and compliance, promoted in large part by the presence and operation of the Guardium product.”
Beyond Compliance: Increased Accuracy, Availability, Accountability, and Agility
Some organizations mistakenly take the short-term view that complying with regulations is the end-goal. But the fundamental objective of regulations is to ensure the privacy and/or integrity of critical information – which organizations should be doing anyway.
Beyond compliance, implementing automated controls around critical databases increases operational efficiency and provides businesses with a meaningful competitive advantage. It builds long-term trust with customers and partners as companies make the most informed, accurate decisions possible each day. In addition to maintaining the integrity of critical data, it minimizes the possibility of downtime due to unauthorized changes such as patches. It also makes staff more accountable, which means employees don’t waste time on unproductive or potentially disruptive activities.
Finally, it makes organizations more agile, because they can leverage standardized processes along with automated controls to make authorized changes more efficiently. In turn, this allows new applications and technologies to be implemented more rapidly in support of new business initiatives.