
One way to think about grid computing is as the virtualization and pooling of IT resources (compute power, storage, network capacity, and so on) into a single set of shared services that can be provisioned, distributed, and redistributed as needed. Just as an electric utility uses a grid to handle wide variations in power demand without affecting customer service levels, grid computing provides IT resources with levels of control and adaptability that are transparent to end users but let IT professionals respond quickly to changing computing workloads.
As workloads fluctuate during the course of a month, week, or even through a single day, the grid computing infrastructure analyzes the demand for resources in real time and adjusts the supply accordingly.
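The analyze-and-adjust loop described above can be sketched as a simple threshold-based policy. This is a minimal illustration with hypothetical utilization thresholds and node limits, not the algorithm of any particular grid product; real grid schedulers weigh many more signals.

```python
def adjust_capacity(current_nodes, utilization,
                    low=0.30, high=0.80, min_nodes=2, max_nodes=64):
    """Return a new node count for a pool, given its current
    utilization as a fraction between 0.0 and 1.0.

    Thresholds and limits here are illustrative assumptions.
    """
    if utilization > high:              # demand outstrips supply: scale out
        return min(current_nodes * 2, max_nodes)
    if utilization < low:               # supply exceeds demand: scale in
        return max(current_nodes // 2, min_nodes)
    return current_nodes                # within the target band: hold steady

print(adjust_capacity(8, 0.95))   # heavily loaded pool grows to 16
print(adjust_capacity(8, 0.10))   # idle pool shrinks to 4
print(adjust_capacity(8, 0.50))   # balanced pool stays at 8
```

Running such a policy periodically is what lets the pooled infrastructure track daily, weekly, or monthly demand swings without operator intervention.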
The term utility computing describes the metered (pay-per-use) IT services that grid computing enables. Cloud computing, in which dynamically scalable and often virtualized resources are delivered as a service over the internet, describes how enterprises consume computing resources on both private and public networks.
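To make the metered, pay-per-use idea concrete, here is a toy billing calculation. The rates and resource units are purely illustrative assumptions, not any provider's actual pricing model.

```python
def metered_bill(cpu_hours, storage_gb_months,
                 cpu_rate=0.05, storage_rate=0.02):
    """Compute a simple utility-computing charge in dollars.

    cpu_rate and storage_rate are hypothetical per-unit prices:
    $/CPU-hour and $/GB-month respectively.
    """
    return round(cpu_hours * cpu_rate + storage_gb_months * storage_rate, 2)

# 1200 CPU-hours and 500 GB-months at the assumed rates:
print(metered_bill(1200, 500))  # 1200*0.05 + 500*0.02 = 70.0
```

The point of the model is that customers pay for measured consumption rather than for dedicated hardware, which is what makes the electric-utility analogy apt.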
Using enterprise grid computing technology, IT departments can adapt to rapid changes in the business environment while delivering high service levels. Enterprise grid computing has revolutionized IT economics by extending the life of existing systems and exploiting rapid advances in processing power, storage capacity, energy and space efficiency, and network bandwidth.
The accelerating adoption of grid technology is a direct response to the challenges facing IT organizations. In today's rapidly changing and unpredictable business climate, IT departments are under increasing pressure to manage costs, increase operational agility, and meet IT service-level agreements (SLAs).