Data Centers in the Era of Artificial Intelligence and Big Data: New Challenges and New Solutions

The infrastructure colocation industry has been thriving for many years.

According to The Business Research Company, the global colocation market is expected to grow from $62.46 billion in 2023 to $71.27 billion in 2024, with a compound annual growth rate (CAGR) of 14.1%.
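
As a quick sanity check on these figures, the implied year-over-year growth can be recomputed directly; this is just a minimal sketch using the dollar values cited above.

```python
# Quick check of the year-over-year growth implied by the cited market figures.
market_2023 = 62.46  # global colocation market, USD billions (2023)
market_2024 = 71.27  # forecast, USD billions (2024)

growth = (market_2024 / market_2023 - 1) * 100
print(f"Implied year-over-year growth: {growth:.1f}%")  # ~14.1%
```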

What is driving the rapid growth of the data center industry?

A Brief History of Data Centers

Over the past 60 years, data centers have evolved from simple data storage facilities to complex, high-tech infrastructures that play a crucial role in today’s digital world.

1960s: The emergence of large computers (mainframes), such as the IBM System/360, created a need for dedicated spaces to house them. These rooms provided control over temperature, humidity, and security.

1970s: As the number of users and data volumes increased, the demand for centralized management and unified storage solutions grew. The first specialized data centers began to appear, providing support for large organizations.

1980s: The development of networking technologies and personal computers fueled the demand for computing power. Companies started to establish their corporate data centers to support their IT infrastructures.

1990s: With the rise of the internet and web technologies, the need for reliable and scalable data centers surged. The global internet boom drove rapid growth in commercial data centers offering hosting services to other companies.

2000s: The advent of cloud technologies and virtualization fundamentally changed the data center market. Companies began using data centers to provide cloud services, leading to an increase in colocation service providers and infrastructure development.

2010s: Data centers became an integral part of the global economy, supporting services such as big data and the Internet of Things (IoT). Concepts like “smart” data centers and “green” technologies emerged as new trends in the industry’s development.


In the last decade, neural networks and artificial intelligence have emerged as new catalysts for growth in the data center (DC) market, alongside big data. With their rapid development, data centers are becoming the backbone of the digital economy, but along with increasing demand comes a growing list of challenges. The modern world generates enormous volumes of data and requires significant computational power for storage and processing. In this context, maintaining the reliability and fault tolerance of data centers becomes much more complex. The data center paradigm requires serious rethinking.

How can we ensure the effective operation of data centers in the era of AI and big data?

To begin with, it is essential to understand the challenges that modern data centers face as data volumes increase and artificial intelligence becomes more integrated into our lives.

High Loads. New realities compel data centers to restructure their architecture. Racks for AI-oriented servers differ fundamentally from standard ones: their power density is significantly higher than that of standard server racks (5–15 kW), and machine-learning workloads can reach 100 kW per rack. If the engineering infrastructure of a data center does not match current and planned loads, the risk of failures is high. To accommodate high-load servers, a data center must offer racks with a guaranteed power supply of 25 kW per rack or more.
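
As a rough illustration of why per-rack power budgets matter, the sketch below estimates how many racks a hypothetical GPU cluster would need under different budgets. The server count, per-server wattage, and rack budgets are illustrative assumptions, not specifications of any particular facility.

```python
import math

# Illustrative capacity planning: how many racks does a GPU cluster need
# under different per-rack power budgets? All numbers are assumptions.
servers = 64              # hypothetical AI servers to deploy
power_per_server_kw = 10  # assumed draw of one accelerator server, kW

total_kw = servers * power_per_server_kw

for rack_budget_kw in (5, 15, 25, 60, 100):
    servers_per_rack = rack_budget_kw // power_per_server_kw
    if servers_per_rack == 0:
        print(f"{rack_budget_kw:>3} kW/rack: cannot host even one such server")
        continue
    racks = math.ceil(servers / servers_per_rack)
    print(f"{rack_budget_kw:>3} kW/rack: {racks} racks for {total_kw} kW of IT load")
```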

Scalability Requirements. Data volumes are growing rapidly: an online store or insurance company may require 10 racks today, 20 tomorrow, and 30 the day after. To support the long-term growth of its customers, a data center must be able to add new capacity quickly and easily. Scalability is also economically advantageous: it allows IT capacity to be expanded as needed, with customers paying only for the resources they actually use. Scalability options should be built in at the design stage, so when selecting a data center, companies should ask not only about the facility's current capacity but also about its long-term growth plans.
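
One simple way to reason about that growth at the design stage is to project rack count and power demand over time. The sketch below mirrors the 10/20/30-rack example and assumes a flat 8 kW per rack purely for illustration.

```python
# Projecting power needs as a customer scales, following the 10/20/30-rack
# example above. The per-rack draw is an illustrative assumption.
kw_per_rack = 8  # assumed average draw per rack, kW

for period, racks in enumerate((10, 20, 30), start=1):
    it_load_kw = racks * kw_per_rack
    print(f"Period {period}: {racks} racks -> {it_load_kw} kW of IT load to reserve")
```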

Cyber Threats. Data centers are a constant target for cybercriminals seeking to steal confidential organizational data or disrupt services. As data volumes grow, so does cybercrime. To counter cyberattacks, data centers must implement robust protection against both physical and virtual threats. The former means preventing unauthorized physical access to the facility; the latter relies on Security Information and Event Management (SIEM) systems that manage risks and monitor the network for suspicious activity.
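
As an illustration of the kind of rule a SIEM system evaluates, the sketch below flags bursts of failed logins from a single source address. The log format, threshold, and time window are hypothetical and do not describe any particular SIEM product.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy SIEM-style rule: flag a source IP with too many failed logins inside a
# short window. Log format, threshold and window are assumptions.
WINDOW = timedelta(minutes=5)
THRESHOLD = 5

events = [
    # (timestamp, source_ip, outcome) -- hypothetical audit-log entries
    (datetime(2024, 6, 1, 12, 0, 0), "203.0.113.7", "fail"),
    (datetime(2024, 6, 1, 12, 0, 30), "203.0.113.7", "fail"),
    (datetime(2024, 6, 1, 12, 1, 0), "203.0.113.7", "fail"),
    (datetime(2024, 6, 1, 12, 1, 30), "203.0.113.7", "fail"),
    (datetime(2024, 6, 1, 12, 2, 0), "203.0.113.7", "fail"),
    (datetime(2024, 6, 1, 12, 3, 0), "198.51.100.2", "success"),
]

failures = defaultdict(list)
for ts, ip, outcome in events:
    if outcome != "fail":
        continue
    failures[ip].append(ts)
    recent = [t for t in failures[ip] if ts - t <= WINDOW]
    if len(recent) >= THRESHOLD:
        print(f"ALERT: {len(recent)} failed logins from {ip} within {WINDOW}")
```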

Signal Latency. Applications dealing with big data require high-speed data processing. The more frequent and longer the network delays, the longer it takes to move data from one point to another, and users do not like to wait: speed and uninterrupted access are key indicators of online service quality. A data center hosting large volumes of data and AI applications cannot function without high connectivity. Connectivity is ensured by having multiple telecommunications operators and traffic exchange points within the DC, as well as by building ultra-low-latency communication channels such as DWDM Layer 1 links with guaranteed bandwidth and latencies measured in tens of nanoseconds.
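
A back-of-the-envelope calculation shows why physical distance and route quality dominate latency: light in optical fibre travels at roughly two-thirds of c, so propagation delay alone grows with every extra kilometre, before any equipment-added latency. The route lengths below are illustrative.

```python
# Back-of-the-envelope one-way propagation delay over optical fibre.
# Light in fibre covers roughly 200,000 km/s, i.e. ~200 km per millisecond.
C_FIBRE_KM_PER_MS = 200

for route_km in (2, 50, 700, 7000):  # illustrative route lengths
    one_way_ms = route_km / C_FIBRE_KM_PER_MS
    print(f"{route_km:>5} km of fibre -> ~{one_way_ms:.3f} ms one-way propagation delay")
```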

Increased Heat Generation and Energy Costs. Processing a request in the ChatGPT neural network requires ten times more energy than answering a query in Google's search engine. The higher the required computational performance, the greater the power drawn by processors and servers.

Most AI computation runs on servers equipped with accelerators (GPUs and similar devices), which are notorious for their electricity "appetite". These servers are typically packed as densely as possible to fit more equipment into a given area, which generates even more heat and makes effective air cooling harder.
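
To see why air cooling runs out of headroom at these densities, one can estimate the airflow needed to remove a rack's heat, since essentially all electrical power input ends up as heat. The rack powers and allowed temperature rise below are assumptions; the 1.08 factor is the standard sea-level constant relating BTU/hr, CFM and °F for air.

```python
# Approximate airflow needed to remove rack heat with air cooling.
# Q[BTU/hr] = 1.08 * CFM * dT[F]  (standard approximation for air at sea level)
WATT_TO_BTU_HR = 3.412
DELTA_T_F = 20  # assumed air temperature rise across the rack, Fahrenheit

for rack_kw in (5, 15, 40, 100):
    heat_btu_hr = rack_kw * 1000 * WATT_TO_BTU_HR
    cfm = heat_btu_hr / (1.08 * DELTA_T_F)
    print(f"{rack_kw:>3} kW rack -> ~{cfm:,.0f} CFM of airflow at dT = {DELTA_T_F} F")
```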

To address these challenges, data centers are actively implementing engineering solutions to reduce energy consumption, including:

  • Liquid Cooling: reduces energy costs by eliminating the need for powerful fans and allows GPU and CPU performance to be used to the full (a rough coolant-flow estimate follows this list).
  • Cool walls: enable racks with high computing loads to be placed next to each other without additional air conditioning between them.
  • Heat Pumps: recycle excess heat from data halls, redirecting it to areas where it can be useful.
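
For comparison with the airflow figures above, the sketch below estimates the water flow needed to carry away the same heat, using water's specific heat capacity. The rack powers and coolant temperature rise are again assumptions for illustration only.

```python
# Rough coolant flow needed to remove rack heat with liquid cooling.
# P = m_dot * c_p * dT  =>  m_dot = P / (c_p * dT)
CP_WATER = 4186   # J/(kg*K), specific heat of water
DELTA_T_K = 10    # assumed coolant temperature rise, Kelvin

for rack_kw in (40, 100):
    mass_flow_kg_s = rack_kw * 1000 / (CP_WATER * DELTA_T_K)
    litres_per_min = mass_flow_kg_s * 60  # ~1 litre per kg of water
    print(f"{rack_kw:>3} kW rack -> ~{litres_per_min:.0f} L/min of water at dT = {DELTA_T_K} K")
```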

Potential Failures. The higher the load on a data center, the greater the risk of failure. The fault tolerance of modern facilities and the availability of servers are ensured through redundancy of the engineering and hardware-software infrastructure: backup communication channels, uninterruptible power supply (UPS) systems backed by diesel generators, and well-structured operational processes. When deploying UPS, modern data centers prefer modular solutions (which combine reliable fault tolerance with easy scalability), compact designs for space efficiency, high-efficiency UPS systems (which generate less heat and therefore need less energy for cooling), and other technically advanced options.
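
To illustrate how redundancy translates into availability, the sketch below applies the standard formula for independent parallel paths, 1 − (1 − A)ⁿ. The single-path availability figure is an assumption chosen for illustration, not a measured value.

```python
# Availability of n redundant, independent paths: A_total = 1 - (1 - A)**n
single_path_availability = 0.99  # assumed availability of one power/network path

for n in (1, 2, 3):
    combined = 1 - (1 - single_path_availability) ** n
    downtime_hours = (1 - combined) * 365 * 24
    print(f"{n} path(s): availability {combined:.6f}, ~{downtime_hours:.2f} h downtime/year")
```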

IXcellerate Capabilities for High-Load Deployments

IXcellerate data centers adapt their infrastructure to support Big Data projects by implementing cutting-edge technologies and solutions. Today, data center clients have access to high-load deployments in dedicated halls with over 60 kW per rack, utilizing a combined air-liquid cooling system. Data halls can accommodate servers with high-performance processors, such as Intel Xeon and AMD EPYC, which provide rapid processing of parallel requests in real-time.

Additionally, the data center infrastructure allows IXcellerate customers to use storage systems based on the NVMe (Non-Volatile Memory Express) interface, which offers significantly higher data access speeds compared to traditional hard drives. This is particularly important for projects requiring fast processing of large datasets, such as in machine learning and artificial intelligence.
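
A simple throughput comparison shows why this matters for large datasets: the time to read a dataset scales inversely with sustained bandwidth. The bandwidth figures below are order-of-magnitude assumptions, not measurements of specific IXcellerate hardware.

```python
# Time to read a dataset sequentially at different sustained bandwidths.
# Bandwidth figures are order-of-magnitude assumptions, not measured values.
dataset_gb = 10_000  # 10 TB dataset, illustrative

bandwidth_mb_s = {
    "single SATA HDD (~200 MB/s)": 200,
    "SATA SSD (~500 MB/s)": 500,
    "NVMe SSD (~5000 MB/s)": 5000,
}

for name, mb_s in bandwidth_mb_s.items():
    hours = dataset_gb * 1000 / mb_s / 3600
    print(f"{name}: ~{hours:.1f} h to scan {dataset_gb // 1000} TB")
```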

The overall power capacity of the ecosystem exceeds 350 MW, with a PUE (Power Usage Effectiveness) of less than 1.4. Effective heat dissipation is achieved through free cooling and low-speed ventilation (LSV).
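
PUE is simply total facility power divided by the power delivered to IT equipment, so a value below 1.4 bounds the overhead spent on cooling, power distribution, and other non-IT loads. The IT load in the sketch below is an illustrative figure, not an IXcellerate operating number.

```python
# PUE = total facility power / IT equipment power.
# Overhead (cooling, power distribution, lighting) = (PUE - 1) * IT load.
it_load_mw = 10  # illustrative IT load, MW
pue = 1.4        # upper bound cited in the text

total_facility_mw = it_load_mw * pue
overhead_mw = total_facility_mw - it_load_mw
print(f"IT load {it_load_mw} MW at PUE {pue}: "
      f"{total_facility_mw:.1f} MW total, {overhead_mw:.1f} MW overhead")
```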

To keep pace with progress, data center service providers need to design new facilities with scalability, reliability, security, and flexibility in mind, so that every element of the engineering infrastructure and software stack can support the most advanced applications. Special requirements apply to energy efficiency, achieved primarily through innovative cooling systems. Other important indicators of data center efficiency are compact UPS systems and increased power density. While a few years ago these innovations could be described as "desirable", today they are critical survival factors for every player in the data center industry.
