Our energy system is undergoing a once-in-a-generation shift in demand, the likes of which have not been seen since American homes purchased new household appliances en masse in the 1960s, '70s, and '80s. Grid operators, utility companies, and government agencies throughout the United States are forecasting massive load growth over the next 25 years, largely attributed to two factors: the electrification of buildings and vehicles, and the addition of large-load data centers needed to meet the increased use of cloud computing and artificial intelligence.
By 2028, data centers could consume up to 12% of total electricity in the United States, up from 4.4% in 2023 - rising from 176 TWh to potentially 580 TWh, the equivalent of adding eight New York Cities to the country.
Currently, our energy system is not equipped to handle this rapid load growth: we do not have enough generating capacity to produce the power, and we do not have enough transmission capacity to move the energy to where it is needed, both of which contribute to rate hikes. Renewable energy and transmission projects can help fill this gap, but that will take a coordinated effort from local, state, and federal officials as well as grid operators and utility companies. As our society moves toward faster, smarter computing with a greater need for data storage, the rapid development of data centers has only just begun.
This post seeks to offer a deeper understanding of data centers - including what they are, what makes locations attractive for data center development, the process to develop them, and data center trends in our region - while highlighting the new demands they are putting on our grid, and the need for better planning moving forward.
Every time you use an app on your phone, search for something on the web, or even read this very post, you are relying on a data center to store and process the information. Data centers are the physical facilities (a room, a building, or a complex of buildings) that house organizations’ critical applications and store the data necessary to run those applications and provide services. They are made up of physical equipment including routers, graphics processing units (GPUs), switches, servers, and storage systems, along with complex security measures like firewalls built into the systems.
Servers are the primary component of data centers and are housed in racks. A typical data center rack has 42 units (“U”), with each unit taking up 1.75 inches of vertical space. Server cabinets that house racks are typically at least 73.5 inches high, 19 inches wide, and 42 inches deep. Data centers have long rows of server cabinets, with enough space between rows for cooling and access. A server can take up between 1U and 6U, depending on its size.
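To make the rack arithmetic concrete, here is a minimal sketch based on the figures above (a 42U rack at 1.75 inches per unit); the specific server heights are just illustrative examples:

```python
# Rack arithmetic using the figures above: 42 units ("U") per rack, 1.75 inches per unit.
RACK_UNITS = 42
INCHES_PER_UNIT = 1.75

def servers_per_rack(server_height_u: int) -> int:
    """How many servers of a given height (in U) fit in a single 42U rack."""
    return RACK_UNITS // server_height_u

# A full rack of units is 42 * 1.75 = 73.5 inches tall, matching the cabinet height above.
print(f"Rack height: {RACK_UNITS * INCHES_PER_UNIT} inches")

for height_u in (1, 2, 4, 6):  # servers range from 1U to 6U
    print(f"{height_u}U servers: {servers_per_rack(height_u)} per rack")
```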

Data centers are used for almost all business applications in the modern world - from emailing and file sharing, to database storage, enterprise resource planning, and increasingly, AI and machine learning, as well as cryptocurrency processing - some of the most significant sources of load growth in the technology industry. Data centers have evolved over the last few decades, with modern facilities now able to communicate across multiple sites via physical and cloud connections. When we talk about “the cloud,” we are talking about a global network of data centers.
There are several different types of data centers, but the most common ones are:
Enterprise data centers, which service a single organization;
Colocation data centers, which lease out space to multiple organizations; and
Hyperscale data centers, which are very large facilities often used for training artificial intelligence models.
Hyperscale Data Centers
Worldwide, there are about 700 hyperscale data centers, double the number from five years ago. The average data center is roughly 100,000 square feet; the average data center in our region is roughly 143,000 square feet, or just over three acres. Hyperscale data centers can reach well over a million square feet, with Google’s first hyperscale data center encompassing 1.3 million square feet and employing 200 operators.

Hyperscale data centers in Secaucus, NJ. Image via Google Maps
Data centers are classified into four tiers based on their reliability and performance, ranging from small facilities with minimal redundancy (Tier 1) to very large, fully fault-tolerant facilities (Tier 4). The tier levels relate to a data center’s fault tolerance and redundancy - the higher the tier, the better protected a data center is from a fault occurring anywhere in the system.
Data centers are either owned by the organization they service, as with enterprise or hyperscale centers, or owned by data center operators who build the facilities and lease space to other organizations, as with colocation data centers. Companies like Amazon, Microsoft, Google, and Meta all operate their own data centers, while companies such as Equinix and Digital Realty operate data centers that they lease to other organizations.
While data centers have some flexibility in where they can be built, there are several attributes that make a location particularly attractive to developers. Places like Silicon Valley, Northern Virginia, Dallas, and the New York Metropolitan area have all emerged as data center hubs. All of these locations share the necessary characteristics for data center development, namely:
- cloud connectivity and network availability;
- access to a stable electric grid with abundant power supply;
- proximity to the industry it will serve and/or population centers; and
- relatively predictable weather.

Cloud Connectivity + Network Availability
Cloud connectivity is especially important for data centers that process large cloud-based datasets. A place with good cloud connectivity has contractual assurances with internet providers that ensure a stable connection with limited hiccups, and direct access to fiber cables and 5G networks to connect technology to cloud service providers. Similarly, a location needs to have good network availability via network service providers. Together, strong cloud connectivity and network availability reduce delay when retrieving large datasets or performing large cloud-based operations.
Access to a Stable Electric Grid
Early data centers used around 2 MW; modern data centers require massive amounts of energy, with the average data center built today requiring around 40 MW. This large jump in energy usage is directly related to advances in technology. For example, GPT-3 reportedly used over 1,200 megawatt-hours (MWh) of energy to train its model, and a single query reportedly uses around 0.3 Wh for a typical prompt and up to 40 Wh for maximum-input queries. For comparison, the average US home uses just over 10 MWh per year. Contemporary servers can draw between 300 watts and 3,200 watts each, meaning a full rack of 42 servers may use between roughly 12.6 kW and 134.4 kW. A standard 40 MW data center may therefore have between roughly 300 and 3,100 server racks, containing between 12,600 and 130,200 servers, depending on the servers used. An older 2 MW data center using the same servers would have between 15 and 160 racks, containing 630 to 6,720 servers, while a 100 MW hyperscale data center may have between 750 and 8,000 racks, containing 31,500 to 336,000 servers.
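The sizing figures above can be reproduced with a short back-of-the-envelope sketch. The 42-servers-per-rack assumption and the 300 W to 3,200 W per-server range come from this section; the sketch ignores cooling overhead, as the figures above do, and its counts differ slightly from the rounded numbers in the text:

```python
# Back-of-the-envelope sizing: 42 servers per rack, servers drawing 300 W to 3,200 W each.
SERVERS_PER_RACK = 42

def estimate_racks_and_servers(facility_mw: float, server_watts: float) -> tuple[int, int]:
    """Estimate how many racks (and servers) a facility of a given size can power."""
    rack_kw = SERVERS_PER_RACK * server_watts / 1000   # power draw of one full rack
    racks = int(facility_mw * 1000 / rack_kw)          # racks supportable by the facility
    return racks, racks * SERVERS_PER_RACK

for mw in (2, 40, 100):                                # legacy, average modern, hyperscale
    heavy = estimate_racks_and_servers(mw, 3200)       # fewer racks of power-hungry servers
    light = estimate_racks_and_servers(mw, 300)        # many racks of lighter servers
    print(f"{mw} MW: {heavy[0]:,}-{light[0]:,} racks, {heavy[1]:,}-{light[1]:,} servers")
```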
In our region, most of the data centers were built before 2000 and used a relatively small amount of energy. The next generation of data centers, however, planned to open in 2027 and beyond, is projected to consume significantly more energy.
Data center energy usage is driven largely by two factors: computing and cooling. Data centers are rated by their Power Usage Effectiveness (PUE), the ratio of the total energy used by the facility (including cooling, lighting, and uninterruptible power supplies) to the energy used for computing. The average data center has a PUE of around 1.8, though some data centers have achieved a PUE under 1.04; a short numerical illustration follows at the end of this subsection. With computing consuming ever more energy, data center operators are trying to lower their PUE through energy efficiency measures and upgraded cooling techniques. A low PUE does not necessarily mean that a data center consumes a small amount of energy: hyperscale data centers can consume 100+ MW of power and still be relatively energy efficient, with a low PUE.
Since data centers consume a large amount of power and run 24/7 to keep up with the demands of their customers, they need a reliable, robust grid to ensure a constant energy supply and reduce the chances of an outage. Many data center operators, such as Apple, Microsoft, and Amazon, have made commitments to use 100% renewable energy, so locations with access to a robust, renewable energy supply are more enticing to certain developers. Data center operators in deregulated energy markets (such as our region's) will often use power purchase agreements to buy energy directly from a renewable energy provider.
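As promised above, here is a minimal numerical illustration of how PUE works; the facility figures are hypothetical and chosen only to reproduce the 1.8 and 1.04 values cited earlier:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT (computing) power.
    A value of 1.0 would mean every watt goes to computing; higher values mean more overhead."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility with 10,000 kW of IT (computing) load:
print(pue(18_000, 10_000))   # 1.8  - 8,000 kW of cooling/lighting/UPS overhead (roughly the average)
print(pue(10_400, 10_000))   # 1.04 - only 400 kW of overhead (close to the best-performing facilities)
```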
Proximity to Industry + Population Centers
Much like strong cloud connectivity, proximity to the industries and population centers being served is an important factor in data center siting. Latency-sensitive applications - such as video conferencing, social media and messaging apps, navigation apps, live streaming platforms, and online gaming - need as little delay as possible. Office workers don’t want their virtual meetings to drop, and online gamers don’t want to be put at a disadvantage by lag. Proximity to those users keeps the network responsive with minimal delay.

Predictable Weather
The climate and weather of an area are important considerations when choosing a location for a data center. Cooler climates reduce cooling needs - and thus energy consumption and costs - compared with warmer climates. Operators tend to avoid areas where extreme weather events are likely, to reduce the chance of outages. While weather is an important consideration, data center operators often still build data centers in less-than-ideal locations.
Other Factors
Aside from these core considerations, there are a number of other factors to consider when selecting a suitable site for a data center.
Local and State Regulatory Environment
Given their interest in economic development, local and state governments tend to see data center development as a boon to local and state economies through revenue generation and jobs (construction jobs and some permanent ones). As such, states and some localities offer incentives to developers to locate there, though local zoning laws that allow for commercial development are still required. Connecticut, New York, and New Jersey all have incentives to attract data centers. In 2021, Connecticut established the Data Center Tax Incentive Program, which allows for sales and use tax exemptions on goods purchased or used by data center operators, as well as certain property tax exemptions. New York has not updated its data center incentives since 2012, but its existing incentives offer sales tax exemptions for property purchased for a data center. In October 2024, New York State announced a partnership with several New York public and private universities to establish an AI computing center. New Jersey has a slew of incentives that could apply to data centers, such as the Small NJ based High Technology Business Investment Tax Credit.
As data centers proliferate and seek the most favorable locations, states are beginning to take action to ensure data center owners remain there for the long run. In July 2024, New Jersey initiated the New Jersey Next tax credit, which offers a tax credit for an AI business or a large-scale AI data center on the condition that the company creates 100 full-time jobs, stays in the state for 10 years, and invests at least $100 million. As backlash grows against data center development over its energy use and role in driving up energy costs, newer laws are being developed to try to stem the rising costs. In New Jersey, a draft law would require new AI data centers to supply their power from new, clean energy sources, minimize the energy required for cooling, and optimize water usage.
Energy Markets
New Jersey, New York, and Connecticut all have deregulated energy markets. A deregulated energy market allows utility customers to select their power supplier, creating cost competition among suppliers. This is favorable to data center operators, as it allows them to purchase the least expensive energy available in robust markets.
Physical Site Conditions
Another consideration is the physical site itself. The land must have access to robust infrastructure, with the ability to connect to the power grid and fiber networks. If building on a greenfield site outside of urban areas, large lots are preferred. As previously mentioned, the average data center takes up around 100,000 square feet, and hyperscale data centers typically take up 10 acres, or over 435,000 square feet. This can vary significantly depending on the servers used and what other services are in the data center.
Land Costs
There is also the question of siting a data center in an urban center or a rural area. Rural land is typically much less expensive than urban land, and the grid is less congested; urban centers, however, are ideal for low-latency applications but come with stricter zoning requirements. New York City has several data centers, including ten in lower Manhattan.
Urban Design
Urban design is a key factor when building a data center near residential communities. Incorporating community feedback and smart design can reduce tension between developers and host communities, making the permitting phase of data center construction smoother.
Before a data center is built, a developer needs upfront capital - data centers cost around $10 million per megawatt, meaning a 40 megawatt center will cost at least $400 million to build.
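As a rough sketch of that capital math, using only the $10-million-per-megawatt rule of thumb above (actual costs vary widely by site, design, and equipment):

```python
# Rough rule of thumb from the paragraph above: about $10 million of capital per megawatt.
COST_PER_MW_USD = 10_000_000

def build_cost_estimate(capacity_mw: float) -> float:
    """Very rough construction cost estimate for a data center of a given capacity."""
    return capacity_mw * COST_PER_MW_USD

print(f"40 MW facility:    ~${build_cost_estimate(40):,.0f}")   # ~$400,000,000
print(f"100 MW hyperscale: ~${build_cost_estimate(100):,.0f}")  # ~$1,000,000,000
```

There are five stages to the development of a data center.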
Site Selection
This stage entails finding a site that fits the criteria described above. It can take up to a year, particularly in dense urban areas.
Permitting
The next stage involves obtaining permits and local approvals, as well as applying for any incentives a state may offer. Typically, permits include zoning (local), building (local), energy (state), and environmental (state), and depending on municipal and state requirements, this stage can last up to a year and a half, or longer.
Design and Pre-Development
The next stage includes the architectural design of the data center, as well as pre-development work such as bringing in high voltage power lines, building new substations, and connecting to water lines. Depending on any challenges that occur, this stage could take up to a year and a half.
Construction
This stage - which can take up to three years - includes building the core and shell of the building and installing the required components (uninterruptible power supplies, GPUs, generators, air conditioners, servers, etc.). The components can be the most costly piece, with top-of-the-line GPUs costing over $25,000 each - and a data center may have hundreds or even thousands of GPUs. Once construction is complete, energy must be procured.
Procurement
The final phase - energy procurement - involves submitting a request for power to a given location’s electricity provider through power purchase agreements or grid interconnections, and could take up to a year, depending on the length of the application process. In cases where there is already high demand on the grid, a grid operator may place a moratorium on large-load interconnections. This stage is often done concurrently with, or toward the end of, the construction process.
While not the top region for data centers in North America, the New York Metropolitan Region is considered a primary market and was recently ranked 8th among primary market regions, behind Northern Virginia, Atlanta, Chicago, Phoenix, Dallas-Ft. Worth, Hillsboro (Oregon), and Silicon Valley. Based on data obtained from Aterio, some trends can be assessed for data centers in our region. One trend is that there have been three waves of data center build-out since internet data centers started being built in the 1990s. The first saw the earliest data centers in our region grow steadily until the dotcom bubble burst in 2000. The second wave saw a modest increase in construction starting in 2008 but slowing by 2015. Finally, post-pandemic data center growth has begun to surpass the previous decade’s, with data centers rapidly expanding in 2024; that growth is expected to continue, with 2027 forecast to see more data centers come online than any other year in the last two decades.
Another trend is the increasing amount of power these data centers consume. Although most of the data centers in our region were built before 2000, the data centers coming online in 2027 alone will consume more energy than all of those built previously - nearly 300 MW of power, compared to just under 150 MW for those built before 2000. As a result, data centers have emerged as one of the biggest uncertainties in energy forecasting, and have been under-forecasted in grid planning due to their sudden and rapid growth.
As for location trends, data centers in our region appear to be clustered in and around New York City. The City itself has 12 data centers, with 10 in lower Manhattan. Secaucus, New Jersey has a cluster of 10 data centers. Previously, most data centers in the region served Wall Street, but as AI and other data-intensive products have become more popular, new data centers will serve a wide array of industries. Of the 94 data centers in our region, 35 are within the urban core (10 miles from New York City’s financial district) and 31 are within the inner suburbs (10-30 miles from the financial district). Those data centers consume at least 336 MW of energy, including data centers that are planned but have not yet come online.