
Data Centres in the Era of AI with Jay Park

2024/10/29

Analyse Asia with Bernard Leong

People
Jay Park
Topics
Jay Park: The data center market in the Asia-Pacific region is growing rapidly, and massive investment is expected in the coming years. The emergence of AI servers is a major driver of the industry's growth; it pushes data centers to be built closer to users to reduce data-transfer latency and cost. Server power density has risen sharply over the past few years, which creates enormous challenges for data center infrastructure. We need to develop new technologies and solutions to meet these challenges, such as liquid cooling and modular data center design. Digital Edge aims to be a data center technology company, not merely a colocation company. Our goal is to bridge the digital divide between developed and developing countries and to solve the industry's challenges through technical innovation. We have successfully deployed SPLC cooling systems in locations such as Manila and Jakarta, achieving PUE below 1.2, which is still rare in the region. In data center design, we need to balance efficiency, scalability, and cost. Reducing energy-conversion steps is the key to efficiency, for example by reducing the use of UPS. We are developing new technologies, such as hybrid supercapacitors, to improve safety and enable power shaving. Lowering a data center's PUE can significantly reduce water consumption, which has to be addressed on both the power-generation side and the data-center side. Going forward, we need to be bolder in trying new technologies and share our experience and results with the industry, to jointly drive the green, sustainable development of the data center sector.


Key Insights

Why is the Asia-Pacific (APAC) region experiencing massive growth in data center investments?

The APAC region is experiencing massive growth in data center investments due to the increasing demand for AI servers and data processing. According to a recent Structure Research report, the data center industry will spend $100 billion, with 50% of that growth occurring in APAC. AI servers require data centers to be built closer to users to reduce latency and improve efficiency, driving the need for localized infrastructure.

What challenges do AI servers pose for data center infrastructure?

AI servers pose significant challenges for data center infrastructure due to their high power density and cooling requirements. Traditional air cooling systems are insufficient, necessitating the adoption of liquid cooling technologies. Additionally, the power draw from AI servers can surge unpredictably, requiring data centers to manage peak power capacity efficiently.

How is Digital Edge addressing the environmental impact of data centers?

Digital Edge is addressing the environmental impact of data centers by focusing on reducing water usage and improving energy efficiency. They have implemented innovative cooling technologies like the StatePoint Liquid Cooling (SPLC) system, which has achieved a Power Usage Effectiveness (PUE) of below 1.2 in hot and humid regions like Manila. Additionally, they are developing hybrid supercapacitors to replace traditional lithium-ion batteries, reducing fire risks and improving energy storage efficiency.

What is the significance of NVIDIA's AI chips in data center engineering?

NVIDIA's AI chips are highly disruptive in data center engineering due to their unprecedented power density, jumping from 10 kW per cabinet to 130 kW. This has forced the industry to rethink cooling solutions, with a shift towards liquid cooling systems. The varying operating temperatures required by different server manufacturers also add complexity to data center design and management.

How does modular data center design compare to traditional construction?

Modular data center design offers faster construction times and reduced material waste, as components are built in factories. However, it lacks flexibility for future upgrades due to transport size limitations and fixed designs. Traditional construction allows for larger, more adaptable buildings, which is crucial as power densities continue to rise.

What role does water usage play in data center sustainability?

Water usage is a critical factor in data center sustainability, particularly in cooling systems and power generation. Digital Edge's SPLC technology reduces water consumption by up to 40%. Additionally, lowering the PUE by just 0.1 in a 100 MW data center can save the equivalent of 1,500 Olympic-sized swimming pools of water annually, highlighting the importance of efficient cooling systems.

What are the key principles for designing efficient data centers?

Key principles for designing efficient data centers include minimizing energy transformation steps, such as reducing AC to DC conversions in UPS systems and avoiding unnecessary energy conversions in cooling systems. Understanding the internal components of data centers, like AI servers, is also crucial for designing infrastructure that can handle increasing power densities and cooling demands.

How is Digital Edge innovating in power management for data centers?

Digital Edge is innovating in power management by developing hybrid supercapacitors that replace traditional lithium-ion batteries, eliminating fire risks and improving energy storage efficiency. They are also working on power shaving systems to manage peak power demands more effectively, reducing the need for customers to purchase excess capacity.

Chapters
The data center industry is experiencing massive growth, with APAC accounting for 50% of the projected $100 billion investment. This growth is driven by the increasing demand for AI servers and the need to process data closer to users.
  • APAC will account for 50% of the $100 billion data center investment
  • AI servers are driving data center growth in APAC
  • Data centers are being built closer to users to reduce latency

Transcript

If you manage your own IT for distributed teams in Asia, you know how painful it is. Esevel helps your in-house team by taking cumbersome tasks off their hands and giving them the tools to manage IT effectively.

Get help across eight countries in Asia Pacific from on and off boarding, procuring devices to real-time IT support and device management. With our state-of-the-art platform, gain full control of all your IT infrastructure in one place. Our team of IT support pros are keen to help you grow. So check out ESEVEL.com and get a demo today. Use our referral code ASIA for three months free. Terms and conditions apply.

So if you look at, and this is according to a recent Structure Research report, the data center industry will spend $100 billion, and about 50% of that growth will be happening in APAC. So this is massive growth. Data centers have to be built where people are to better support them. But we have a new kid on the block. It's called AI servers.

And it's something I have never experienced before, you know, in any industry. And this is massive. It'll do a lot of things, but it has to do data processing. So you cannot have all these data centers in, let's say, North America, while people are in the APAC region, grab that data,

bring it back to the US or North America, do all the processing, and then send it out to APAC. I just don't see that happening. So they're building the data centers closer to the users, where people are, and you do all the processing there. The growth is going to be gigantic. And that's what we are seeing today.

Welcome to Analyse Asia, the premier podcast dedicated to dissecting the pulse of business, technology and media in Asia. I'm Bernard Leong, and one of the key components needed for generative AI is data center infrastructure. With me today is Jay Park, Chief Development Officer of Digital Edge, to help me understand the data center landscape in the age of generative AI. Jay, welcome to the show. Good morning, Bernard.

Yes, it's very interesting to have someone of your expertise come on the show to help me decipher the data center landscape. But as always, we want to hear the origin story from our guest. How did you start your career? Because I know there are some interesting parts of your career in building data centers. So my career goes back to 1986.

So I started out as an electrical engineer.

Mainly, I was in the power plant and semiconductor industries, supporting them up until 1999. When the semiconductor industry moved away from the US back in 1999, I was blessed to have the opportunity to enter the data center industry. So I started in the data center industry in 1999, and I've been working in it for the last 25 years.

How did you eventually become the Chief Development Officer of Digital Edge? I also understand that you had a stint with Facebook. Yeah. So after my Facebook career, I was planning on retiring. But as a native Korean,

there was something missing in my resume, and I really wanted to give something back to Asia. The opportunity came when my previous coworkers approached me and said, "Let's start a company."

So this was an opportunity for me to give something tangible back to the Asia market. So yeah, I love my job, right? Yes. And of course, given such a long, tenured career journey, what are the interesting lessons that you can share with my audience? You know, the data center industry should be very similar to the car industry, right? So if you look at the car industry,

when you buy a car, every year the newer model gets something new, right? But in the data center industry, the technology is moving way too slow. If you look at the server market, servers are changing drastically. Back in 1999 and 2000, when the dot-com business was booming, you were talking about 1 kVA, 2 kVA per cabinet.

Then for the longest time it grew slowly and stabilized around 8 to 12 kW per cabinet. But in the last 18 months, you went from like 10 kW per cabinet shooting up to 130 kW per cabinet, right? And surely this is not going to stop here. I heard some rumors that

even higher density cabinets will be showing up. So this is where we are. But in data center construction, the industry needs to wake up and really understand what goes inside the data center, so that we can build the outer box that will last 30 years.

Wow. So energy demand is moving so fast that even on the infrastructure side, you need to build innovation in to keep up with the demand coming from all corners of the world. Let's get to the main subject of the day, where I want to talk about data centers in the era of AI. Given that you are here, can you

provide an overview of Digital Edge and the mission and vision of the company as a data center platform company today? So let me start off by saying we don't want to be recognized as a data center co-location company. Personally, I like to

call our company a data center technology company. There are a lot of issues out there today: power efficiency, reducing water usage, power capacity. There are a lot of moving parts as we speak.

So, you know, as a data center company, our mission and goal is obviously to bridge the digital gap that currently exists between developed countries and developing countries, right? There are some gaps, so we're trying to narrow them. Hopefully, everybody will be able to

be on that same level; that's our mission. We're backed financially by Stonepeak, with $1 billion of backing. So we have very strong financial backing, which lets us go and build our data centers freely. That's a blessing that we have. Our company has grown: we established the company in 2020, and currently we have 17 data centers across the Asia-Pacific region.

We have over 400 employees today. Our ambition is to reach 800 megawatts by 2028. My personal ambition is 1 gigawatt, but we have to be reasonable. So we're growing rapidly, we're finding solutions for the data center industry, and we're pushing. I can say that we are the leader in this market. So hopefully

hopefully, everybody will be watching us and following our first step. Maybe to set the context, what is the total market opportunity that Digital Edge is specifically targeting in the data center business across the Asian market? I think you are more global, from my understanding of how the company operates. So if you look at, and this is according to a recent Structure Research report, the data center industry will spend

$100 billion, and about 50% of that growth will be happening in APAC. So this is massive growth. Data centers have to be built where people are to better support them. But we have a new kid on the block. It's called AI servers.

And it's something I have never experienced before, you know, in any industry. And this is massive. It'll do a lot of things, but it has to do data processing. So you cannot have all these data centers in, let's say, North America, while people are in the APAC region, grab that data back to the US or North America, do all the processing, and then send it out to APAC.

I just don't see that happening. So they're building the data centers closer to the users, where people are, and you do all the processing there. The growth is going to be gigantic. And that's what we are seeing today. I agree with you on that, specifically because I used to work for a hyperscaler, as the head of AI for SMEs at

Amazon Web Services for the Southeast Asia region. In fact, all the hyperscalers are here now. I think AWS already has a Malaysia region and an Indonesia region, and Thailand and Vietnam will be coming online at some point. So one thing, because I'm always on the end side, with the users: my customers are the ones using the cloud infrastructure, which comes from data centers.

I'm very interested to understand the supply chain of data centers. Can you articulate, at a high level, how the supply chain of a data center works, from planning to build all the way to the point where it's constructed and operational, whether it's for cloud vendors or for corporations who might require your help to manage and run the data centers?

Back in the day, when we talked about building data centers, it was a somewhat smaller scale. People were building 10 megawatts; a 20 or 30 megawatt campus was considered very large. With AI coming on board,

nobody's interested in a 30 megawatt campus anymore, right? They may start out with 5 or 10 megawatts, but they want to grow to 50 to 100 megawatts. So the scale of these data centers has just grown exponentially. Now, during and after COVID, the supply chain has been, you know,

hit by a couple of things: the data center construction demand, and also, because of COVID, the manufacturing companies couldn't get parts on time. So for a lot of the major equipment, the long-lead items, delivery stretched out, for some of them to two years. And then the sizes get bigger. So

what we are doing is trying to release the long-lead item purchases early, very early. Before you even pour concrete or break ground, we are ordering the equipment, or we might be blocking manufacturing slots. And then what we're doing is,

we are building what we call a skid-mounted equipment pad. We provide the skid and put a lot of the equipment that belongs together, in close proximity, on it. We put all that equipment on the skid while we are building the foundation and the building skin. So we're not wasting any time. This is how we

shorten our construction time. So this is what we're doing, but you're absolutely right: procurement has been very challenging for us. The best thing we can do is have a very cookie-cutter design where we don't change equipment, so we're basically repeating the same thing over and over again. So yeah, procurement has to happen way before we break ground.

If I were to double-click on the question, I'm quite curious, because you mentioned that the earlier stages were about, say, 10 megawatts, and in today's world your aspiration is to go to one gigawatt. So how does the structure of the data center actually change, given the scale of the data centers you're building, on the energy side? Yeah, so it's huge. So when you're dealing with that much power,

you can no longer just get a utility feeder from the utility company. You have to actually build your own substation on site, bring a very high voltage to that campus, and then step it down. And when you build a substation, you will need a much bigger plot of land.

So it's almost impossible to build this kind of data center in a metropolitan area. It has to go outside, just because you cannot get the power. And if you're a resident, you don't want a high-voltage line running right behind your house, right? So it just gets really hard to build a data center that size in a metropolitan area. Yeah, that's what we're dealing with today.

That's also the reason why some of these big tech companies like Google and Amazon are working with SMRs, the small modular nuclear reactors, or, like Microsoft with Three Mile Island, are starting to put data centers right next to a power plant where they can get the energy straight from the source. Yeah. I would love to talk about that subject. So when you look at this demand,

people do not realize what a gigawatt means. One gigawatt can serve 250,000 households. Okay. So how many people is that? Let's just say four people living in one house. That's 1 million people.

One gigawatt can support one million people. It's like a large city. Okay. So that kind of power infrastructure coming in over a simple few feeders, that's not going to happen. You have to build your own substation.
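A quick back-of-the-envelope check of that comparison (the per-household figure below is implied by the episode's numbers, not quoted directly):

```python
# Sanity check of "1 GW serves 250,000 households".
GIGAWATT_KW = 1_000_000          # 1 GW in kilowatts
households = 250_000
people_per_household = 4         # the episode's assumption

kw_per_household = GIGAWATT_KW / households
people_served = households * people_per_household
print(kw_per_household, people_served)  # 4.0 kW per household, 1,000,000 people
# 4 kW is a generous average household draw, so the comparison is,
# if anything, conservative about how much a gigawatt represents.
```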

Power draw is something that I'm researching today, working closely with one large tech company: how does the power draw behave during AI data processing, or computing? It's actually very interesting. The power draw can be very steady, and all of a sudden it surges,

then it comes down, and then it goes back up in another surge. The duration could be two seconds, three seconds; it's all over the map. So the users have to purchase enough capacity to handle that surge. Whether you use it or not, you have to buy it. So what we're currently working on is an external system that can actually do power shaving,

so the user doesn't have to buy the extra peak capacity; they buy less. And at the same time, from the utility perspective, they will see a more steady-state, constant power draw. So that's what we are currently working on as well.
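To make the power-shaving idea concrete, here is a minimal sketch, with purely illustrative numbers, of how an external energy buffer (such as the hybrid supercapacitor bank discussed later in the episode) could cover short surges so the utility sees a near-constant draw. This is not Digital Edge's actual system, just the principle:

```python
# Minimal peak-shaving sketch: a small energy buffer discharges during
# short AI-load surges and recharges from headroom, so the grid feed
# stays near a flat baseline. All figures are illustrative.
BASELINE_KW = 1000.0   # contracted steady draw from the utility
BUFFER_KWH = 5.0       # usable energy in the buffer

def shave(load_profile_kw, dt_s=1.0):
    """Return the grid draw (kW) per time step after shaving."""
    stored_kwh = BUFFER_KWH
    grid = []
    for load in load_profile_kw:
        surplus_kw = load - BASELINE_KW
        if surplus_kw > 0 and stored_kwh > 0:
            # discharge the buffer to cover the surge above baseline
            draw_kwh = min(surplus_kw * dt_s / 3600.0, stored_kwh)
            stored_kwh -= draw_kwh
            grid.append(load - draw_kwh * 3600.0 / dt_s)
        else:
            # recharge the buffer from headroom below baseline
            headroom_kw = max(BASELINE_KW - load, 0.0)
            charge_kwh = min(headroom_kw * dt_s / 3600.0,
                             BUFFER_KWH - stored_kwh)
            stored_kwh += charge_kwh
            grid.append(load + charge_kwh * 3600.0 / dt_s)
    return grid

# A two-to-three second surge like the one Jay describes:
profile = [950.0] * 10 + [1400.0] * 3 + [950.0] * 10
print(max(shave(profile)))  # ~1000 kW instead of a 1400 kW peak
```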

Then how has the concept of modular data center design evolved, and what are the advantages when it comes to building efficient and scalable infrastructure, from your point of view? So, you know, a lot of companies are now thinking about modular data centers. Personally, I have done modular data center construction with a previous company.

You can actually build faster. And I'll tell you the pros and cons, basically, in my personal opinion. You can build the inside box faster, and you probably waste less material, right? Because you're building everything in the factory, you're using less material. So that's a great thing.

Where I personally see a problem is that you have to transport it from the factory to the site, and when you transport, you have a certain height limitation, right? You can't transport something really, really tall. So that is limited. And what I don't like, and maybe not everybody will agree with me on this, but just because I've done this before, is that changing this box

at a later time becomes a bit of a challenge. Giving an example: if you build a data center and you know the density is going to keep going up, you can build the data center with high ceilings or make enough room. And the reason why I say that is because

the building skin, the shell construction cost, is relatively much cheaper than the MEP cost. Okay? So it's okay to make your building a little bit bigger and higher, so that you can sort of future-proof it, because the density is just going to keep going up. Can you imagine, and I just heard this rumor, that somebody may be coming out with 300 kW per cabinet?

Even just taking today: Jensen Huang actually opened up this discussion. NVIDIA's Blackwell cabinet draws 130 kW per cabinet. So 130 kW. Can you imagine how many circuits have to come into one little rack?

And that's not the only rack. You've got racks after racks after racks. Your ceiling will be filled with a bunch of cables, right? On top of that, you've got fiber cables. On top of that, you've got the liquid cooling; you've got a pipe. This is a massive, massive cabinet. And we know the trend is going to get worse and worse. So

we need to make our building envelope large enough to host this kind of equipment in the future. With a modular data center, once you bring it in, you're pretty much locked in. So,

retrofitting that data center to take care of future growth will be a little bit challenging, in my personal opinion. But obviously there are pros. You can build it faster. You can use less material. Everything is done in the factory, so the quality gets better. There are a lot of advantages, but the flexibility is something that I'm always a little worried about.
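For a sense of what 130 kW per cabinet means electrically, here is a rough feeder count under assumed supply parameters (415 V three-phase, 63 A breakers, 0.95 power factor; none of these come from the episode):

```python
import math

# Rough circuit count for one 130 kW AI cabinet. Illustrative only;
# the voltage, breaker size, power factor and derating are assumptions.
P_KW = 130.0
V_LL = 415.0       # line-to-line voltage
PF = 0.95          # power factor
BREAKER_A = 63.0   # per-circuit breaker rating
DERATE = 0.8       # continuous-load derating, a common practice

total_amps = P_KW * 1000.0 / (math.sqrt(3) * V_LL * PF)
circuits = math.ceil(total_amps / (BREAKER_A * DERATE))
print(f"{total_amps:.0f} A total -> {circuits} x {BREAKER_A:.0f} A circuits")
# ~190 A -> 4 x 63 A circuits per cabinet, before A/B redundancy,
# which typically doubles the whip count into each rack.
```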

So with the demand for faster data processing and delivery increasing, how does Digital Edge balance things like efficiency, scalability and cost in data center design? Because you're a data center technology company, there are presumably some trade-offs that need to be taken care of as well. Yeah. So for us,

again, I always go back to how we design the data center. The more you know about what goes in the box, the better job you can do designing the outer box. You have to push yourself to understand what goes in the box. And the trend is that density is going to go up.

We are well past the air cooling phase. When you look at air cooling, in the current system you have a water system and you're basically converting it: you make the water really cold and then run it through a coil to create cold air. So it's really a water-to-air system right now.

But going forward, air is disappearing. I mean, you do need a little bit of air to cool other components, but the percentage is going to keep shrinking. So we are moving to liquid cooling, and I see liquid cooling as very energy efficient. Why? Because for liquid cooling, you do not have to have really cold water. You can use warmer water and still cool the GPU, the chip.
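A rough comparison shows why liquid wins at these densities. Using Q = m_dot * c_p * dT with textbook properties of air and water (the 10 K coolant temperature rise is an assumption for illustration):

```python
# Coolant flow needed to remove 130 kW with a 10 K temperature rise.
Q_W = 130_000.0   # heat load in watts (one Blackwell-class cabinet)
DT_K = 10.0       # assumed coolant temperature rise

# air: density ~1.2 kg/m^3, specific heat ~1005 J/(kg*K)
air_m3_s = Q_W / (1.2 * 1005.0 * DT_K)
# water: density ~1000 kg/m^3, specific heat ~4186 J/(kg*K)
water_m3_s = Q_W / (1000.0 * 4186.0 * DT_K)

print(f"air:   {air_m3_s:.1f} m^3/s (~{air_m3_s * 2119:.0f} CFM)")
print(f"water: {water_m3_s * 1000:.1f} L/s")
# ~10.8 m^3/s of air (~22,800 CFM) versus ~3.1 L/s of water for the
# same 130 kW: a volume ratio of roughly 3,500 to 1.
```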

So we can raise the water temperature, and you're not converting from water to air. So from a PUE perspective, it's just going to get better. The efficiency will get better. But you have to understand this and trust this growth. So when you build a data center,

you need to go with liquid cooling. Now, when I say liquid cooling: if you look at today's typical system, you have a chilled water loop and a condenser water loop, and typically the condenser water loop is an open cooling tower kind of loop. You basically pump that water up to the cooling tower,

make that water cold, and then bring it back to the system. What we're doing today is providing a closed-loop system instead, so we can send that condenser water directly to the cabinets, and there is no energy transformation. Right, it's more efficient that way. Yeah, correct. Absolutely. So energy efficiency comes from

reducing the energy conversions, whether electrical or mechanical; it's the same thing. If you do fewer transformations, the efficiency will kick in. And that's what we need to focus on. That's a very good point. So what is the one thing you know about data center engineering in the Asia-Pacific, or even globally, that very few people do? You know, I got into this APAC market about four and a half years ago

and looked at it, and basically they were using very, very old technology. Still today, they would love to stick to the old technology, and they need to break this barrier. At Digital Edge, we actually built the first greenfield data center in Manila, and we deployed a very unique cooling technology. And we obtained

below 1.2 PUE. This is unheard of in the APAC region. Manila, as you know, is a very hot and humid area. The ambient temperature is very high. People thought it was impossible to get

the PUE below 1.2. We've done it. And we showed our data to the industry. We're sharing this data, not just hiding it. So we're very excited about this, and it is doable, but the industry has to, you know, they have to...

This is the engineer's job, right? I hate following someone else's footsteps. If I repeat something over and over again, it doesn't excite me. Just like the car industry: when you release the later model, something has to be different, right? Something has to be better. I think we need to have that mentality. We need to change the mentality in this APAC data center industry.

Can you apply what you already did in Manila further south, where it's more tropical? Yeah. Anything around the equator, or even a little above it, such as Thailand or Vietnam, these places are really hot and humid.

So, we have this technology called SPLC, StatePoint Liquid Cooling. This technology was developed while I was at Facebook. About 10 years ago, we developed this product together and tested it; we did a lot of life-cycle tests. We beat the system up, really operating it in a very harsh environment.

And Facebook, and this is public information, so I can say it, has deployed this technology in its Singapore data center and was able to obtain a PUE of 1.19 in Singapore. So very close to our Manila data center at 1.193. Very similar.

This technology is very unique, and a lot of people are not familiar with it, so I'm going to quickly explain what this system does. When I was a little kid growing up in Korea, back in the 60s, not many households had a refrigerator. So my grandma or my mother always put the drinking water in this clay pot.

Without knowing anything, as a little kid, I always grabbed water out of the clay pot. And I realized that clay pot water is always cooler than the water coming out of your faucet. I never asked why. This is exactly the same technology as SPLC.

Basically, what's happening is that the clay material, the outer part of the clay pot, has a lot of pores. Correct. There's heat transfer happening through those pores. The higher-temperature water molecules evaporate a little bit through them, and that makes the water in the clay pot cool. I see. So, SPLC is the same thing. We have a membrane

that acts like the clay. It's got very, very tiny holes in it, but water doesn't leak; there's only evaporation. And then you pass hot air across this membrane. That makes the water cool.

And you can get quite a large delta-T across that SPLC. At Manila, when we were doing commissioning, we were having a hard time turning on the chillers, because the SPLC alone was doing all the work.

We do have what we call a pony chiller: whatever work the SPLC cannot do, we use the pony chiller to bring that chilled water temperature down. But we were having such a hard time turning that chiller on, because the SPLC alone was doing all the work. So with that technology, we were able to obtain 1.193, below 1.2 PUE.
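For readers less familiar with the metric: PUE is total facility power divided by IT power, so every point shaved off a typical baseline is overhead power not spent on cooling and distribution. A small illustration with a hypothetical 10 MW IT load (the load figure is an assumption, not a Digital Edge number):

```python
# PUE = total facility power / IT equipment power.
IT_MW = 10.0  # hypothetical IT load
for pue in (1.6, 1.4, 1.193):
    overhead_mw = IT_MW * (pue - 1.0)
    print(f"PUE {pue}: {overhead_mw:.2f} MW of cooling/distribution overhead")
# PUE 1.6   -> 6.00 MW of overhead
# PUE 1.4   -> 4.00 MW of overhead
# PUE 1.193 -> 1.93 MW of overhead for the same IT load
```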

Wow. So one other interesting part of data centers is the chips. And one question I have, because I'm in the AI space: can you explain how NVIDIA's AI chips, or even other vendors' AI chips, for example Tenstorrent or Groq, impact a data center, and how important are they to your business?

So, you know, as I mentioned earlier, we have to understand what goes in the box. We have to understand how the NVIDIA chips work. Currently it's very disruptive. I mean, obviously, the density no one expected.

You were at about 8 to 10 kW per cabinet, and you'd expect this thing to go to maybe 20, 30, gracefully going up, right? It didn't. It jumped. It literally jumped from 10 to 130. So

it's very disruptive. The whole industry is asking, how do we take care of this? On top of that, the industry is still trying to decide: should we use PG25 (propylene glycol) cooling or direct water cooling, right?

NVIDIA makes the chips, but their servers are made by many different companies out there. So we're struggling, because just imagine: we have a data center, and one customer wants direct water cooling and another one wants PG25. The operating temperature of the water is going to be different. Those are the challenges we have, on top of: okay, how do we now manage

the existing data centers where we have a water-to-air system? If you want to support liquid cooling there, you basically have to convert that air back into a water system, or divert some of that water

directly to the coolers. So there are a lot of challenges. If everybody were using the same operating temperature, it would be great, but I don't think that's going to happen, just because different manufacturers will require different temperatures.

Right. So it's as if every chip generation brings a different consideration in terms of energy efficiency, or the way cooling actually works within the data center. Did I get that correct? That's right. But before, let's face it, whatever came along, even as the density went up, we were able to cool it with air. Now

we've reached the point where air cannot cool these AI servers. This is where we are scrambling: okay, how do we do this? If you look at the industry today, there are so many CDU companies out there, cooling distribution unit companies,

and they each have their own way of doing things, right? So they're trying to come up with solutions. But the thing I'm expecting is that whether you're a colo provider or a hyperscaler, you will see different flavors of cooling distribution. So as a data center provider, we have to be flexible enough to support all these customers.

So what has changed your mind about these technologies in data center engineering in the past 12 months, given the big jump you alluded to earlier? So to me, it needs to be, excuse me, a liquid cooled system. I prefer,

and I recommend, that we go with a closed-loop system, whether you use SPLC or a hybrid cooling system, so that you can use that condenser water loop to directly cool the AI chip. Having that, to me, is a must. So you have been at the forefront of data center design, from your time at Facebook to now at Digital Edge. What are some of the

important principles when it comes to designing data centers for, say, maximum efficiency? As I mentioned earlier, I think we need to reduce the steps of energy transformation. In an electrical system, the biggest loss you see is at the UPS. If you look at a UPS,

you're changing AC to DC, then DC back to AC. In that process, you're talking about anywhere between a 6% and 10% loss. There are some high-efficiency UPSs these days, but typically speaking, you have a pretty large energy loss there.

And then in the mechanical system, you have water-cooled chillers, then you convert to air, then from air back to liquid, things like that. Every time you do a transformation, you lose power, or efficiency. So reducing those steps is the key to gaining efficiency.
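Because stage efficiencies multiply, a couple of conversions quickly add up. A minimal sketch with assumed, illustrative stage efficiencies in the range Jay cites:

```python
# Losses compound multiplicatively across conversion stages.
# Stage efficiencies below are illustrative assumptions.
def chain_efficiency(stages):
    eff = 1.0
    for _name, stage_eff in stages:
        eff *= stage_eff
    return eff

double_conversion = [("AC->DC rectifier", 0.95), ("DC->AC inverter", 0.96)]
ups_less = [("direct distribution", 0.99)]  # e.g. an OCP-style rack path

print(f"double-conversion UPS path: {chain_efficiency(double_conversion):.1%}")
print(f"UPS-less path:              {chain_efficiency(ups_less):.1%}")
# ~91.2% vs ~99.0%: skipping the AC->DC->AC round trip recovers most
# of the 6-10% loss described above.
```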

So to gain that efficiency, I'm very curious to ask: what are the most innovative approaches you have implemented at Digital Edge to minimize energy consumption while maintaining performance? Because what you're saying is that you want to minimize the number of energy transformation steps. So, two things. The first I did while I was at Facebook:

I developed a UPS-less power distribution technology. At Facebook, there is no centralized UPS system. That was my patent back then, and still today a lot of the large companies are adopting it. It's called the OCP server design.

So that's one thing I've done in the past. At Digital Edge, I must say that we have deployed the SPLC system in Manila and Jakarta, which are hot and humid areas, and we were able to obtain a PUE of below 1.2. That's something much of the data center industry had not heard of before.

So water usage in cooling systems and overall energy consumption are major environmental challenges for data centers. What solutions and technologies is Digital Edge exploring to make meaningful progress towards its ESG goals? I'm glad you brought up that subject, because I actually get very emotional when we talk about it.

We have to look at two areas when it comes to water usage. A lot of times we just look at the data center level. But there is a bigger chunk; you have to take a holistic view of water usage. It all starts on the power generation side, and the power then gets transferred to the data center location.

According to IEEE, and I didn't make this up, you can look online at IEEE Spectrum, they talk about how much water is needed to produce one kilowatt-hour of power. What is one kilowatt-hour? Leaving a 100-watt light bulb on for ten hours. That's one kilowatt-hour. Okay. Wow.

To produce that much power, you need 95 liters of water, which is equivalent to about 25 gallons. So I'm only going to talk about the power generation side. Say you have a 100 megawatt data center campus, which today is a reasonable size. State of the art, definitely. A 100 megawatt site.

If you lower your PUE by just 0.1, to give an example, from a typical 1.5 PUE down to 1.4, you will be saving 1,500 Olympic-sized swimming pools of water per year. Just imagine putting those 1,500 Olympic-sized swimming pools side by side. It would look like a lake, right?
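The arithmetic behind that figure can be reconstructed, with one caveat: using the 95 L/kWh withdrawal figure quoted above gives roughly 3,300 pools, so the 1,500-pool figure likely reflects a lower effective water intensity (consumption rather than withdrawal). A parameterized check:

```python
# Rough check of the water-saving arithmetic. Assumes the "100 MW
# campus" refers to IT load, 8,760 hours/year, and ~2.5 million
# litres per Olympic pool.
IT_MW = 100.0
PUE_DROP = 0.1
LITRES_PER_KWH = 95.0   # IEEE Spectrum withdrawal figure quoted above
POOL_LITRES = 2.5e6

saved_kwh = IT_MW * 1000.0 * PUE_DROP * 8760.0   # 87.6 million kWh
litres_saved = saved_kwh * LITRES_PER_KWH
print(f"{litres_saved / POOL_LITRES:,.0f} Olympic pools per year")
# ~3,330 pools at 95 L/kWh; the episode's 1,500-pool figure implies
# an effective intensity nearer 43 L/kWh. Either way, the saving is
# thousands of megalitres of water per year.
```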

I can see why you said you're creating a closed-loop system now, because you're trying to make sure that water is never wasted. That's correct. There will still be some waste, right? Yeah. To give an example, by using SPLC, depending on the geographic area, you can save up to 40%.

I mean, that's a lot, right? For 100 megawatts, 1,500 Olympic-sized swimming pools, and then almost half again on the data center side; that's the amount you're going to be saving per year. That's a lot of water, right? And then, depending on what kind of system you pick,

the cooling system you pick at the data center can also save water; SPLC, for example, saves 40% of the water. And on the power plant side, you're saving a gigantic amount of water, which I think is even bigger than the data center side. So you have to look at both areas when it comes to water usage. And in my view, the bigger thing that we need to

focus on is PUE, because of the water usage happening at the power plant side. So while doing that, of course, there are a lot of advancements now in data center technology. Which ones excite you the most, and how do you see these new technologies shaping the future of the industry?

I think the water-saving technology; and I'm sure SPLC is not going to be the last. There's going to be something else, and I'm hoping somebody will understand this technology or come up with something even better. But besides

the mechanical cooling system, I'm really excited about what we are doing on the electrical side, because, as you've heard, there have been a number of fire incidents due to lithium-ion batteries.

That's right. So at Digital Edge, we worked with a company in Korea and developed our own patented device. It's called the Hybrid Supercapacitor, or HSC.

Instead of using a lithium-ion battery, this is a hybrid supercapacitor. What are the advantages? When you look at the cause of fires, they happen because batteries have a chemical liquid in each cell, and there is a chemical reaction happening. When there's a chemical reaction, it generates heat.

So when it discharges or charges, it creates heat, and this is fire-hazard material. With the HSC, there is no chemical reaction. It's simply charge and discharge; it's static. So there is no heat, and you don't need to worry about fire. You also don't need to place the

HSC in a temperature-controlled room. You can put it in a hot area or a very cold area; the operating temperature range of the HSC is very, very wide. We developed this device to replace the UPS battery, and that part is done.

But I'm going one step beyond that. Remember I talked to you about power shaving? I personally believe this device can do that power shaving. Right, because it recharges so quickly. Correct. Not like batteries: when a battery is discharged and you want to recharge it, it takes hours.

This capacitor takes minutes, if not seconds. So it can handle a lot of spikes of such short duration; it should be able to shave that power. It'll be a perfect device. So what we're trying to do is use this one device to kill two birds with one stone. Yeah. This is what really excites me.

Yeah, if I may ask another question: high-temperature superconductivity. I'm pretty sure you're familiar with superconductors needing very low temperatures. If that ever happens, how would it affect data centers? So, I mean, it may come. But it all comes down to the device itself. The problem with a regular supercapacitor is that

the duration of the backup time is not that big; it's not minutes. This hybrid supercapacitor is not really a capacitor; it's very innovative. We call it a capacitor, but it's more of an energy storage system. It can actually provide backup: the new product is going to back up for two to three minutes.
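As a sizing sketch, two to three minutes of ride-through is a modest amount of energy; the hard part is delivering it at full power instantly and recharging fast, which is where capacitor-like devices beat batteries. Illustrative numbers only (the 1 MW load is an assumption, not a product specification):

```python
# Energy needed to ride through an outage: E = P * t.
P_MW = 1.0  # hypothetical protected load
for minutes in (2, 3):
    kwh = P_MW * 1000.0 * minutes / 60.0
    print(f"{minutes} min at {P_MW:.0f} MW -> {kwh:.0f} kWh of storage")
# 2 min -> 33 kWh, 3 min -> 50 kWh: small energy, but it must be
# deliverable at megawatt rate and recharge in minutes, not hours.
```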

So it's pretty wide. And it handles the power shaving piece at the same time, because you're trying to do two things with one device. I see. Wow, that's interesting. And we're not that far away from it. And I plan to share this information with the industry.

A lot of companies, to be honest, when they innovate or find a better way of doing things, and this is a bit different from the nuclear or aviation industries today, tend to hold their information in and not share it. I think we need to change that behavior. We need to share the information, because

the better a job we do on this, the energy savings, improving efficiency, saving water, the better it will be for our children in the future. And that's what we have to do. In my opinion, it's not an option; it's our duty. So what is the one question that you wish people would ask you about data center engineering? I would say: be bold,

and don't be afraid to try new things. It's never going to be perfect, and there's never going to be a perfect time. But exactly how long are you going to wait? This is where we need to really focus. Just be bold. As long as you take a calculated risk,

I think we'll be fine. That's a good point. So, my traditional closing question: what does great look like for Digital Edge in building out and managing data centers in Asia-Pacific? As I mentioned earlier, we'd like to be known as a data center technology company, and we'll continue to drive this market, develop new products, develop new systems,

even share how we improve our PUEs and operations to reduce the PUE. We plan to share this information and we want to be an open company so that we can all grow together. That's our mission and it's our responsibility.

And I wish you all the best on that. I've definitely learned a lot today, talking to a data center engineering geek about all the different aspects of data centers. So in closing, I have two quick questions. Any recommendations which have inspired you recently?

It just happened when I was flying in from the US. I saw this Discovery Channel program on the airplane. I love that program. It was about next-generation power: where it's going to come from.

And what the scientists are doing today, on a very small scale, and this is not just a dream, this is actually happening, is they

launch a satellite with a bunch of solar panels into orbit, so it gets 24/7 sunlight. It gathers the power up there and then transmits it down to Earth

by electromagnetic waves, or something very interesting like that. I was just amazed when I heard it. Can you imagine? Whether it's a cloudy day or whatever, it doesn't matter, because it's transmitted as some sort of electromagnetic field. Yeah, it's transmitting power downwards. Okay. And it's clean.

Right, it's just sunlight. The only thing is there might be some issues with the wavelength, but those are the things I'm sure the scientists will figure out. When I heard it, I thought that was the coolest thing ever. Hopefully I'll get to see it before I die. We are all living longer now, so we definitely can. So how can my audience find you?

You know, I often speak at a lot of conventions and conferences, so you can find me there. You can also find me through our company's marketing team and website.

If you have something good to share, please contact me. I happen to know one of your colleagues. Okay, okay. He's fun to talk to. I hope to get him on the show at some point. So you can definitely find us at Analyse Asia on our YouTube and Spotify channels. And of course, subscribe to our newsletter. Most importantly, thank you for having this conversation. I personally feel I benefited a lot, and I look forward to speaking with you again.

Thank you for having me. I really appreciate it.