Many businesses operate outside of safe capacity thresholds with little or no room to expand. According to IDC, the average data center is 9 years old, yet Gartner states that any site more than 7 years old is obsolete. Overcrowded or obsolete data centers create a roadblock for growing organizations, and building a new data center is sometimes the only solution. While speed-to-market is critical to success, companies that fail to assess their business needs properly will create a dead-end data center design that will not deliver uptime performance goals or meet future business needs.
How Do You Avoid Major Mistakes In Building And Expansion?
The key lies in the methodology you use to design and build your data center facilities. All too often, companies base their plans on watts per square foot, cost to build per square foot, and tier level—criteria that may be misaligned with their overall business goals and risk profile. Poor planning leads to poor use of valuable capital and can increase operational expenses.
Many organizations get overwhelmed, focusing on “speeds and feeds,” green initiatives, concurrent maintainability, power usage effectiveness (PUE), and Leadership in Energy and Environmental Design (LEED) certification. All of these criteria are important in the decision-making process, but the details often overshadow the big picture. Most companies miss the business opportunity in a data center expansion: the opportunity that comes from taking a holistic approach.
While there are numerous consultants in the field to help you find your way, assessing their ideas and input can be overwhelming. Organizations with critical capacity requirements in the 1-3 megawatt range may fall into this risk category. The critical nature of mid-size users is no less important than that of mega users; however, the internal technical expertise needed to drive proper expansion plans may be limited. The result is information overload from multiple sources, leading to confusion and poor decision-making.
“Data center owners have so many problems right now. Their assets are mission-critical, but they are out of control. Power consumption is costing them a fortune. They can’t cool what they have got, and they run the risk of a catastrophic outage. And if they make an investment, by the time it is built, it is already out of date” – Stanford Group
Mistake 1: Failure To Take Total Cost Of Ownership (TCO) Into Account During The Data Center Design Phase
Focusing solely on capital cost is an easy trap; the dollars required to build or expand can be staggering. Capital cost modeling is critical, but if you have not included the costs to operate and maintain (OpEx) your business-critical facilities infrastructure, you have severely shortchanged the overall process of effective business planning.
There are two critical components required to build data center OpEx cost modeling: the maintenance costs and the operating costs. The maintenance costs are those associated with the proper maintenance of all critical facility support infrastructure. They include but are not limited to OEM equipment maintenance contracts, data center cleaning expenses, and subcontractor costs for remedial repairs and upgrades. The operating costs are those associated with daily operation and on-site personnel. They include but are not limited to staffing levels, personnel training and safety programs, the creation of site-specific operations documentation, capacity management, and QA/QC policies and procedures. If you have failed to calculate a 3-7 year operations and maintenance (O&M) expense budget, you cannot build a return on investment (ROI) model that supports smart business decisions.
If you are planning to build or expand a business-critical data center, your best approach is to focus on three basic TCO parameters: 1) capital expense, 2) operations and maintenance expense, and 3) energy costs. Leave any component out, and you run the risk of creating a model that does not correctly align your organization’s risk profile with its business expenditure profile. If you are deciding whether to “buy” (use colocation/hosting) or build internally, the risk of not taking this TCO approach is magnified significantly.
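To make the arithmetic concrete, the sketch below rolls the three TCO parameters into a single multi-year figure. It is a minimal illustration with hypothetical numbers (the 2 MW load, cost figures, PUE, and planning horizon are assumptions, not data from this paper), but the structure mirrors the model described above: capital expense plus recurring maintenance, operating, and energy costs over the O&M budget horizon.

```python
# Minimal TCO sketch: hypothetical figures, not vendor or survey data.
# Rolls capital expense, O&M expense, and energy cost into one multi-year
# total so build-vs-buy options can be compared on equal terms.

def total_cost_of_ownership(capex, annual_maintenance, annual_operations,
                            it_load_kw, pue, energy_cost_per_kwh, years):
    """Return the projected TCO over the planning horizon, in dollars."""
    annual_energy_kwh = it_load_kw * pue * 24 * 365       # facility draw, not just IT
    annual_energy_cost = annual_energy_kwh * energy_cost_per_kwh
    annual_opex = annual_maintenance + annual_operations + annual_energy_cost
    return capex + annual_opex * years

# Hypothetical 2 MW internal build evaluated over a 5-year horizon
tco = total_cost_of_ownership(
    capex=20_000_000,             # construction and equipment (assumed)
    annual_maintenance=800_000,   # OEM contracts, cleaning, remedial repairs (assumed)
    annual_operations=1_200_000,  # staffing, training, documentation, QA/QC (assumed)
    it_load_kw=2_000,
    pue=1.6,
    energy_cost_per_kwh=0.10,
    years=5,
)
print(f"5-year TCO: ${tco:,.0f}")
```

Feeding colocation pricing into the same structure lets a “buy” option be compared against an internal build on equal terms, which is exactly where skipping the TCO approach hurts most.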
Mistake 2: Poor Cost-To-Build Estimating
Another common mistake is the estimate itself. Financial requests made to boards of directors for capital to expand or build a data center are often too low and result in failure. The flow of decision-making looks something like this:
- The capital request is made and tentatively approved. Financial resources are allocated to investigate, capture and create a true budget.
- Time is spent driving the above budget process.
- The findings reveal that the initial budget request is too low.
- The project is delayed. Careers are impacted, and the ability to deliver service to internal and external clients and prospects suffers.
- This takes you full circle, back to the #1 biggest mistake: failure to take the TCO approach and build a holistic financial model.
Cost-to-build issues can be easily avoided, but your estimates are destined to fail if you fall into trap #3.
Mistake 3: Improperly Setting Design Criteria & Performance Characteristics
Design criteria and performance characteristics should be derived from your risk profile and business goals, and they should drive every downstream decision, including tier level, site location, and space plan. When these criteria are set improperly, or skipped altogether, everything that follows rests on a weak foundation, which leads directly to the next two mistakes.
Mistake 4: Selecting A Site Before Design Criteria Are In Place
Organizations often start searching for the perfect space to build before having their data center design criteria and performance characteristics in place. Without this vital information, it doesn’t make sense to spend time visiting or reviewing multiple sites. This typical “cart before the horse” scenario happens most frequently with users in the 1-3 megawatt range. While mega users are usually experts in this arena and take into consideration power availability and cost, fiber, and geographic issues such as earthquakes, tornadoes, and flood plains, baseline users often have business models that dictate a need to build or renovate a shell in their core region of business.

The problem with selecting a site prematurely or based on narrow geography is that the site often cannot meet the design requirements. For instance, having your data center two floors below your high-rise office or even two blocks away is convenient, but business-critical data centers require a long list of site criteria that usually cannot be met in a multitenant space without significantly higher build costs or limits on future expansion. White Paper 81, Site Selection for Mission Critical Facilities, provides more information to help avoid this big mistake. Some organizations base their site search criteria on the amount of raised floor required to house their critical IT infrastructure. This leads to the next big mistake.
“While the physical design of a data center is critical, how a site is operated and maintained plays a more significant role in achieving site availability” – The Uptime Institute
Mistake 5: Space Planning Before the Data Center Design Criteria Is In Place
Planning space around the raised floor area needed to house IT equipment, before the design criteria and performance characteristics are set, is the same cart-before-the-horse error applied indoors: the space plan should follow from the design criteria, not drive them.
Mistake 6: Designing Into A Dead-End
A dead-end design is one that meets today’s load but cannot accommodate unplanned growth without a major rebuild. Incorporating flexibility into the design from the start allows the facility to expand as business needs change.
Mistake 7: Misunderstanding PUE
Power Usage Effectiveness (PUE) is an effective tool to drive and measure efficiency. However, broad energy efficiency claims may lead to significant misunderstanding. In nearly all new builds and expansions, there is a capital cost associated with achieving a lower PUE. Organizations often set a PUE goal with the best intentions, but the calculation does not take into account all of the factors that should be considered. You need to fully understand the ROI on the capital spent to reach your efficiency goals, and ask yourself: what is the TCO relative to the target PUE?
There are many ways to illustrate the balance between PUE, ROI, and TCO. Here are three cautionary examples of how the calculation can fail or mislead:
• What was the “design criteria day” for the calculation? Was it calculated or measured on the “perfect day”? Or, was the calculation based on a yearly average?
• Was the calculation based on a fully loaded or partially loaded data center operating condition? All equipment efficiency curves change based on load profiles. PUE changes daily, if not hourly, in true operating conditions.
• Finally, there is an ongoing debate regarding the efficiencies of water-cooled chillers versus air-cooled chillers. Each application has multiple options for “free cooling” or “economizer” modes to lower PUE. When making your TCO/ROI business decision, you must ask yourself: what is the cost of the make-up water and the water treatment maintenance required for the water-cooled solution? Recognize that a typical 2-megawatt data center using water-cooled towers will require 50,000 to 60,000 gallons of make-up water per day.
Use PUE to your advantage to meet your overall business goals, but be cautious. Try not to get trapped into misusing the calculation formula to justify the overall capital expense and operating expense budgets.
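The load-dependence described above is easy to demonstrate. The sketch below uses hypothetical overhead figures (the fixed and variable losses are assumptions for illustration, not measurements from any site) to show why a facility running at a fraction of its design load reports a noticeably worse PUE than its “perfect day” or fully loaded value.

```python
# Hypothetical PUE sketch: illustrative numbers only.
# PUE = total facility power / IT equipment power. Fixed overheads
# (transformer, UPS, lighting, base cooling) do not shrink with the IT load,
# so a partially loaded site reports a worse PUE than its design-day value.

def pue(it_load_kw, fixed_overhead_kw, variable_overhead_ratio):
    """Total facility power divided by IT power at a given operating point."""
    total_kw = it_load_kw + fixed_overhead_kw + it_load_kw * variable_overhead_ratio
    return total_kw / it_load_kw

FIXED_OVERHEAD_KW = 300    # assumed losses present even at low load
VARIABLE_RATIO = 0.35      # assumed cooling/distribution losses per kW of IT load

for load_kw in (500, 1000, 2000):   # 25%, 50%, 100% of a 2 MW design load
    print(f"IT load {load_kw:>4} kW -> PUE "
          f"{pue(load_kw, FIXED_OVERHEAD_KW, VARIABLE_RATIO):.2f}")
```

In this example the same plant shows a PUE of roughly 1.95 at a quarter of its design load and 1.50 when fully loaded, which is why a single design-day figure can misrepresent real operating cost, and why the TCO behind a target PUE matters.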
Mistake 8: Misunderstanding LEED Certification
To date, the U.S. Green Building Council (USGBC) has not set LEED criteria specific to data centers. However, certification can be obtained using the Commercial Interiors Checklist (http://www.usgbc.org/Showfile.aspx?DocumentID=5723). There are three basic missteps that take place:
• Failure to develop a base understanding of the qualifying criteria. This can be remedied by viewing the above-referenced document.
• Pursuing LEED certification as an afterthought. Obtaining LEED certification begins with the design concept and ends with a formal certification after project completion. Engage a qualified LEED professional or consulting firm at the start of the planning process.
• Failing to account for the costs of certification. There will be expenses related to receiving certification, and failure to take them into account will impact your TCO and business decision planning.
Mistake 9: Overcomplicated Data Center Designs
As stated earlier, simple is better. Regardless of the target tier rating you have chosen, there are dozens of ways to design an effective system. Too often, redundancy goals drive too much complexity. Add in the multiple approaches to building a modular system, and things get complicated fast.
When engaging internally or with your chosen consultant, the number one goal should be to keep it simple. Why?
- Complexity Often Means More Equipment And Components. More parts equal more failure points, as the sketch after this list illustrates.
- Human Error. The statistics are varied but consistent: most data center outages are due to human error. Complex systems increase operational risk.
- Cost. Simple systems are less costly to build.
- Operations And Maintenance Costs. Again, complexity often means more equipment and components. Incremental O&M costs can increase exponentially.
- Design With The End In Mind. Many designs look good on paper. It is easy for you or your consultant to justify the chosen configuration and resulting uptime potential. However, if the design does not consider the “maintainability” factor when operating or servicing, the system’s uptime and personnel safety will be compromised.
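The first point in the list above, that more parts equal more failure points, can be shown with basic series-availability arithmetic. The sketch below is a simplified illustration with assumed per-component availabilities and no redundancy modeling; it is not a formal reliability analysis, but it shows how every additional component in the critical path erodes overall availability.

```python
# Hypothetical series-availability sketch: illustrative figures only.
# If every component in the critical power path must work for the load to
# stay up, overall availability is the product of the individual values,
# so each added component is another opportunity to fail.

def series_availability(component_availabilities):
    """Availability of a chain of components that all must work (no redundancy)."""
    result = 1.0
    for availability in component_availabilities:
        result *= availability
    return result

simple_path = [0.9999] * 5     # e.g., utility gear, UPS, switchboard, PDU, breaker (assumed)
complex_path = [0.9999] * 12   # a more elaborate design with more elements in series (assumed)

for name, path in (("simple", simple_path), ("complex", complex_path)):
    avail = series_availability(path)
    downtime_hours = (1 - avail) * 8760
    print(f"{name:>7}: availability {avail:.5f} ~ {downtime_hours:.1f} h/yr of downtime")
```

Redundant topologies change this math, but they add components of their own to operate and maintain, which is exactly why operational simplicity deserves equal weight with redundancy goals.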
Although many data center designs, builds, and expansions result in failure, yours doesn’t have to. By avoiding the top 9 mistakes outlined in this paper, you will be well on your way to achieving success. In summary:
1. Start With A Total Cost Of Ownership Approach
- Evaluate your risk profile against your business expense profile
- Create a model that incorporates CapEx, OpEx, and energy costs
2. Determine Your Design Criteria And Performance Characteristics
- Base these criteria on your risk profile and business goals
- Allow those criteria to truly determine the design, including tier level, location, and space plan—not the other way around
3. Design With Simplicity And Flexibility
- Use a design that will meet your uptime requirements but will also keep costs low during construction and throughout the operation—simplicity is key.
- Accommodate unplanned expansion by incorporating flexibility into the design
4. If PUE and LEED are part of your criteria, become educated on the common misunderstandings and expenses associated with each.