When researching the term Smart Grid, it becomes apparent that there is not one coherent definition. Smart Grid describes a concept in a variety of ways and is often compared to the Internet in potential pervasiveness and usefulness. However, the Internet has a distinct description and a foundation in the definition of the TCP/IP protocol. NIST has taken on the challenge of coordinating the effort to define interoperability standards analogous to the TCP/IP stack, and a cornerstone definition is the eight-layer GWAC stack.
One observation is that energy consumers on their own are becoming very sophisticated and energy optimized; from a utility's point of view, however, these capabilities are not accessible. The opportunity: Smart Energy Consumers need to be enabled to connect to the utility. Bridging the communication gap between Utilities and Energy Consumers seems paramount. Enabling limited load control by the utility has potentially substantial benefits. For example, it would allow limited load shedding, which is probably a key feature for making the grid more effective.
The aggressive build-out of Renewable Energy Sources will potentially challenge current grid operations significantly. There will be a need to balance load and generation, as they are typically not matched, and generation and consumption can sit in very different parts of the grid. Balancing base-load generation from coal, oil and nuclear with Wind, Solar and Biomass will be needed, and it challenges the current ecosystem. New financial models for utilities may well emerge. It is quite possible that Green Energy will become more valuable than traditional Energy, so access to Green Energy will be a key driver.
There are many open issues, and an industry-acceptable implementation has to be developed. This requires many definitions and much research; knowledge will be generated in collaboration and needs to be distributed. The most exciting part is the journey we are on today. This is the time to be active and to be part of forming the future of our grid.
Monday, February 1, 2010
Monday, January 25, 2010
How do Server Virtualization, CPU Utilization and IT Utilization Efficiency relate?
When talking about virtualization in a Data Center and how it affects CPU utilization and efficiency, I often find myself in a debate about definitions without getting to the substance… So I thought it might be helpful to define some simple terms and formulas here. I promise it's simple math.
On the server level, the common measure is how many virtual servers are hosted on one physical server:
# Virtual servers on one physical server (SVP)
Keep in mind that the hosting server is no longer available as a stand-alone server.
On the data center level, these terms are typically used:
# Total available Servers in Data Center (ST)
# Physical Servers hosting Virtual Servers, a subset of the physical servers (PSV)
# Virtual Servers in Data Center (SV = PSV*SVP)
# Physical Servers in Data Center (SP = ST – SV + PSV, i.e. the remaining stand-alone servers plus the hosting servers)
% Virtualization Degree in Data Center (VDDC = SV/ST)
As an illustration, assume a Data Center needs 1000 total servers. Depending on the virtualization choices (the yellow fields in the table), there is a significant difference in how many physical servers are needed (and in the Green House Gas Emissions they generate).
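To make the formulas concrete, here is a minimal sketch in Python (not the original table); the 1000 total servers and 10 virtual servers per host are illustrative assumptions, and the virtual-server counts are just sample points.

# Minimal sketch of the server-count formulas above, with illustrative inputs.
ST = 1000   # total servers the Data Center needs
SVP = 10    # virtual servers hosted on one physical server (assumed)

for SV in (0, 200, 500, 800):       # virtual servers in the Data Center (sample points)
    PSV = SV / SVP                  # physical servers hosting the virtual servers
    SP = ST - SV + PSV              # physical servers actually needed
    VDDC = SV / ST                  # virtualization degree
    print(f"SV={SV:4d}  PSV={PSV:4.0f}  SP={SP:5.0f}  VDDC={VDDC:.0%}")

At 80% virtualization this drops the physical server count from 1000 to 280, which is where the emissions argument comes from.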
There are two more items to consider: CPU utilization and IT Utilization Efficiency (ITUE). You can read more on those here; simplistically, they describe how busy the server is (CPU) and how efficiently it converts electricity into IT work.
Typically, servers with no virtualization run at a CPU utilization of 10% or less, which is pretty low. With virtualization this can be increased, and 40% utilization is desirable. I assumed that 10 virtual servers on one physical server get you to 40% CPU utilization, but that might be different in your data center depending on your servers and applications.
In the table above I made the assumption that we start at an average CPU utilization of 10% across all servers and that, as we implement more virtualization, the average rises to 40%, proportional to the increasing VDDC. With that, the servers get significantly more efficient and the average ITUE grows to 52%. These are general examples and your environment may be very different.
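As a rough sketch of how that averaging works, here is the linear scaling with VDDC that I assumed; the 10% baseline and 40% fully virtualized target come from above, and how ITUE follows from CPU utilization is deliberately left out because it depends on your hardware.

# Average CPU utilization rising linearly with the virtualization degree (VDDC).
# The 10% baseline and 40% fully-virtualized target are the assumptions stated above.
def avg_cpu_utilization(vddc, base=0.10, virtualized=0.40):
    """Average CPU utilization across all servers for a given virtualization degree."""
    return base + (virtualized - base) * vddc

for vddc in (0.0, 0.25, 0.50, 0.75, 1.0):
    print(f"VDDC={vddc:.0%}  average CPU utilization={avg_cpu_utilization(vddc):.0%}")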
The purpose was to give you a quick framework for how all these values and definitions work with each other. Hope this is helpful.
Wednesday, January 20, 2010
Always wondered what Data Centers and Swimming Pools have in common?
When asked to describe how airflow should best work in a raised-floor data center, the best analogy I can come up with is a swimming pool. Bear with me…
In today's Data Centers the rows of server racks are frequently arranged in a hot aisle / cold aisle system; if they are not, they should be. For non-Data-Center folks, see the end of this post for a quick explanation of what that is. Cold air comes from the bottom and likes to stay low; it's like water in a pool. The incoming air needs to match, or slightly oversupply, the air drawn through the servers. That's like the return-flow perforations in a pool. But to get the cold air all the way up to the top edge of the racks, the cold aisle cannot have big leaks. The biggest leaks are:
- the front and back entrance of the aisle
- missing racks in the row due to building columns, etc.
- openings in the racks, unused U’s without blanking panels.
- voids in the racks between mounting rail and side panel
What is not a leak is the top, as long as the air supply is matched, just like a pool with a little overflow. In fact, that overflow can be used for closed-loop control of the air supply. One way to achieve balanced airflow is to fill aisles sequentially until the server air intake matches the supply capacity; vent shutters can help reduce the flow where needed.
So if all voids are closed, the cold air will fill the cold aisle like water in a swimming pool. As simple as that. Next time you walk into the Data Center, just keep that visual in mind.
There are simple ways to close these leaks: fireproof “meat locker drapery” for the ends of the aisle, or a simple door, for example… it doesn't have to be complicated, as long as it meets your fire code. And leaving the top of the row open helps you with that.
Using the rest of the building as the hot aisle has a nice savings effect too. In most parts of the world it's rare that outside temperatures are hotter than the exhaust of the servers, so what used to be a side load for the air conditioner becomes free cooling for most of the year.
Voids in the racks between the mounting rails and the side panels are also a thermal short circuit within the rack, amplified by the doors. The server fans create suction at the perforated front door and pressure at the perforated back door. So instead of all the server air going in and out through the doors, the pressure differential will drive air circulating within the rack. Easy test: try to get your hand between the mounting rail and the side panel; if you can, you have a problem.
Why is this all important? Simplistically, indoor air-conditioning equipment has three variables describing its ability to remove heat (assuming a fixed flow of coolant or fixed compressor capacity):
- effective length/surface of coils (typically fixed)
- air flow (can be adjusted to a certain degree)
- temperature differential
Another important factor determining efficiency is the temperature at which the cooling coil has to operate; warmer is better.
Without separating the cold and hot air, the temperature differential between them is not very high; a lot of back-mixing happens. As a consequence, it isn't uncommon that the factory rating of the air-conditioning equipment isn't attainable, because the warm air isn't warm enough for the unit to extract the amount of heat it was rated for. Sometimes only a fraction of the rated temperature differential is achieved across the coils, resulting in a similar loss of capacity. Ironically, the servers generate a very significant temperature differential that would make the air-conditioning coils very effective, but because the hot and cold air back-mix in the room, it is lost.
If the cold air is prevented from mixing with the hot air, a higher coil temperature differential is achieved automatically. That's it. It gets you more of the cooling capacity you thought you already had and paid for, and it lowers your electricity bill. It's impossible to give a universal improvement number, but I wouldn't be surprised to see several percentage points, even low double digits, depending on where you start.
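To illustrate why the temperature differential matters so much, here is a back-of-the-envelope sketch using the standard sensible-heat relation (heat removed ≈ air mass flow × specific heat × temperature differential); the airflow and the two ΔT values are assumptions for illustration only, and latent heat is ignored.

# Back-of-the-envelope: sensible cooling capacity at a fixed airflow,
# comparing a mixed-room-air ΔT with a separated hot/cold aisle ΔT.
# Airflow and temperature differentials are illustrative assumptions.
RHO_AIR = 1.2    # air density, kg/m^3 (approx. at room conditions)
CP_AIR = 1.005   # specific heat of air, kJ/(kg*K)

def sensible_capacity_kw(airflow_m3_per_s, delta_t_k):
    """Heat removed by the coil: mass flow * specific heat * temperature differential, in kW."""
    return RHO_AIR * airflow_m3_per_s * CP_AIR * delta_t_k

airflow = 5.0  # m^3/s through the cooling unit (assumed)
for label, dt in (("mixed room air", 5), ("separated aisles", 12)):
    print(f"{label}: ΔT = {dt} K -> about {sensible_capacity_kw(airflow, dt):.0f} kW")

Same coil, same airflow, more than double the usable capacity; that is the rating you already paid for.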
Hot Aisle Cold Aisle:
In the cold aisle, all servers have their fronts facing the aisle. If you walk through the cold aisle looking left and right, you see the front panels of the servers. That is where the servers pull in their cooling air, so that is where the cold air supply is routed. Conversely, the backs of the servers face the hot aisle; that is where the air exits the servers, carrying the heat out of them, and where the hot air is collected to go back to the air conditioner. Why is this done? If the servers were arranged front-to-back, front-to-back, the hot air from one server would blow into the front of the next server.